Elon Musk drops a surprise curveball on Nvidia

Tesla (TSLA) just delivered a rare double whammy to Nvidia (NVDA) over the past weekend. 

CEO Elon Musk revealed that Tesla’s much-talked-about AI5 self-driving chips are nearly complete, and that the next one, AI6, is already underway.

With the AI inference part covered, Musk said Sunday on X that Dojo 3 is being restarted, pushing Tesla back into large-scale AI training after previously pulling back.

Nvidia threw the first punch, though, when it rolled out Alpamayo at CES 2026, an open-source autonomous vehicle AI toolkit, aiming to become the default autonomy platform powering a wide range of brands.

Musk responded swiftly, downplaying the risk.

Clearly, this is a mighty interesting time for the AV industry, with a tug-of-war underway between two giants, Nvidia and Tesla.

For Tesla, it’s all about building a closed loop that covers the entire AV stack.

  • Tesla-designed in-car compute (this includes AI5, which is “nearly done,” and the AI6, which is already underway)
  • Tesla’s camera-first software stack
  • Tesla’s data flywheel, powered by its own fleet

So for Tesla, it’s all about keeping the autonomy part within its potent ecosystem, as Nvidia looks to power everyone else. 

For investors, these promises aren’t new, which makes the follow-through all the more critical.

Tesla’s chip roadmap signals faster, more independent future

Tesla is looking to tighten its grip on the hardware behind self-driving. 

Musk announced on Saturday in an X post that the EV giant is nearing completion of its AI5 self-driving computer chip, and that the AI6 is already in development.

According to Musk, the AI5 chips, which are manufactured by Taiwan Semiconductor Manufacturing Company, will enter high-volume production in 2027, replacing the AI4 hardware. Also, Tesla has lined up Samsung Electronics for U.S.-based chip manufacturing.

Tesla’s AI5 and AI6 chips are really about in-car inference

It’s pretty easy to get lost in the AI jargon, so it’s worth being clear about what’s actually happening at each step.

The AI5 and AI6 move is essentially about “inference at the edge”: running Tesla’s Full Self-Driving neural nets inside the car, instead of relying on a third-party compute stack. 

So if Tesla’s running the software on its own chips, it gains a major competitive edge:

  • Tesla doesn’t need Nvidia’s in-vehicle SoC (or its full “DRIVE” platform) for its cars.
  • Tesla gains control of unit costs, supply-chain leverage, and chip design.

It’s important to note, though, that Tesla already moved away from Nvidia for its in-car compute back in 2019, so the latest moves are more a doubling down than a switch.

Nvidia wants to power everyone else’s self-driving dreams

Nvidia is offering a full-stack solution to automakers, essentially a shortcut to full autonomy. Under its NVIDIA DRIVE umbrella, it’s basically selling an integrated “brain, operating system, and toolkit”.

So instead of building custom chips, software, safety frameworks, and whatnot, automakers can just plug and play into Nvidia’s robust ecosystem and get started. A big part of its appeal is that it’s essentially a hack for companies that don’t have Tesla’s decade-long autonomy effort or the billions to spend on R&D.

What Nvidia bundles together:

  • DRIVE AGX in-vehicle computers, such as the Orin and Thor.
  • A complete software stack that includes the DRIVE OS and DriveWorks.
  • DRIVE Hyperion, a reference vehicle platform that comes with validated sensors and architecture.
  • Safety and validation tools under the NVIDIA Halos umbrella, along with AI models such as Alpamayo that accelerate training and simulation.

Tesla pushes back into training, but Nvidia still sets the pace

Tesla is sharpening its in-car AI chips, but clearly, Nvidia still holds a critical edge in computing power. 

AI5 and AI6 are tailor-made for inference at the edge, but training frontier-scale models is a completely different challenge.

Training modern AI systems is remarkably compute-hungry. 

For perspective, Meta said it trained its AI model Llama 3.1 (405B) using over 16,000 Nvidia H100 GPUs. So if we factor in 700 watts per chip, that’s nearly 11.2 megawatts of power just for the GPUs. That level of scale is where Nvidia’s economics, availability, and ecosystem continue to dominate.
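That power figure is simple enough to sanity-check. The sketch below is a back-of-the-envelope estimate using only the numbers cited above; 700 W is the H100’s peak per-chip board power, so real sustained draw, cooling, and networking overhead are not captured.

```python
# Back-of-the-envelope check of the GPU power figure cited above.
# Assumptions: 16,000 H100 GPUs at 700 W peak board power each.
NUM_GPUS = 16_000
WATTS_PER_GPU = 700

total_watts = NUM_GPUS * WATTS_PER_GPU
total_megawatts = total_watts / 1_000_000  # 1 MW = 1,000,000 W

print(f"{total_megawatts:.1f} MW")  # prints "11.2 MW" for the GPUs alone
```

That 11.2 MW covers only the accelerators themselves, which is why total facility power for a training cluster of this size runs meaningfully higher.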

However, Tesla’s decision to restart Dojo 3 signals that it’s looking to get back into the training game.

At this point, though, I feel Dojo 3’s return most likely points to a hybrid future.

Tesla will continue to build on its training capacity using the AI5 and AI6 architectures, while still banking on Nvidia, where scale and economics matter. 

When we see strong evidence of large-scale training clusters running on Tesla silicon backed by throughput and cost data, that’s when the rivalry truly escalates on the training front.  
