Tesla’s Dojo, a timeline

Elon Musk isn't content with Tesla being just another car manufacturer. His vision is much grander—he wants Tesla to become a powerhouse in AI, particularly in the realm of autonomous driving. This ambition hinges on Tesla's custom supercomputer, Dojo, designed to train its Full Self-Driving (FSD) neural networks. While FSD isn't fully autonomous yet—it still requires a vigilant driver—Tesla believes that with enough data, computational power, and training, it can reach true self-driving capabilities. That's where Dojo plays a pivotal role.
Musk has been hinting at Dojo for years, and in 2024 he ramped up talk of the supercomputer before a new player, Cortex, entered the conversation; by 2025, Cortex had taken center stage. Still, Dojo remains central to Tesla's AI ambitions, especially as electric vehicle sales slump and investors look for proof that the company can deliver full autonomy. Here's a timeline of Dojo's journey and the promises made along the way.
2019
First Mentions of Dojo
On April 22, at Tesla's Autonomy Day, the AI team took the stage to discuss Autopilot and Full Self-Driving, highlighting the AI behind these systems. Tesla revealed details about its custom-built chips designed specifically for neural networks and self-driving cars. During the event, Musk teased Dojo, describing it as a supercomputer for training AI. He also mentioned that all Tesla cars produced at that time were equipped with the necessary hardware for full self-driving, requiring only a software update to unlock the feature.
2020
Musk Begins the Dojo Roadshow
On February 2, Musk announced that Tesla would soon have over a million connected vehicles worldwide, equipped with the sensors and compute power needed for full self-driving. He praised Dojo's capabilities, saying, "Dojo, our training supercomputer, will be able to process vast amounts of video training data & efficiently run hyperspace arrays with a vast number of parameters, plenty of memory & ultra-high bandwidth between cores. More on this later."
On August 14, Musk reiterated Tesla's plan to develop Dojo, a neural network training computer to handle massive video data, calling it "a beast." He predicted the first version of Dojo would be ready in about a year, around August 2021.
On December 31, Musk clarified that while Dojo wasn't strictly necessary, it would make self-driving better. He emphasized, "It isn’t enough to be safer than human drivers, Autopilot ultimately needs to be more than 10 times safer than human drivers."
2021
Tesla Makes Dojo Official
On August 19, Tesla officially announced Dojo at its first AI Day, an event aimed at recruiting engineers to its AI team. Tesla introduced its D1 chip, which, alongside Nvidia GPUs, would power the Dojo supercomputer. The company planned to house 3,000 D1 chips in its AI cluster.
On October 12, Tesla released a Dojo Technology whitepaper, "a guide to Tesla’s configurable floating point formats & arithmetic." This document outlined a technical standard for a new type of binary floating-point arithmetic used in deep learning neural networks, which could be implemented in software, hardware, or a combination of both.
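The whitepaper is the authoritative source for the formats themselves, but the core idea of a configurable floating-point format is straightforward: rather than fixing the exponent/mantissa split the way IEEE 754 does, the split becomes a parameter. The Python below is a minimal sketch of that idea only; the field widths, bias convention, and rounding are illustrative assumptions, not the formats the whitepaper actually defines.

```python
import math

# Toy sketch of a "configurable" floating-point format: the exponent/mantissa
# split is a parameter rather than fixed. Field widths, bias convention, and
# rounding here are assumptions for illustration, not Tesla's actual spec.

def encode(value: float, exp_bits: int = 4, mant_bits: int = 3):
    """Split a float into (sign, biased exponent, mantissa) fields."""
    sign = 1 if value < 0 else 0
    value = abs(value)
    if value == 0:
        return sign, 0, 0
    bias = (1 << (exp_bits - 1)) - 1
    exponent = math.floor(math.log2(value))
    biased_exp = max(1, min(exponent + bias, (1 << exp_bits) - 1))
    # fraction above the leading power of two, crudely clamped;
    # subnormals and overflow simply saturate in this toy version
    fraction = max(0.0, value / 2.0 ** (biased_exp - bias) - 1.0)
    mantissa = min(round(fraction * (1 << mant_bits)), (1 << mant_bits) - 1)
    return sign, biased_exp, mantissa

def decode(sign: int, biased_exp: int, mantissa: int,
           exp_bits: int = 4, mant_bits: int = 3) -> float:
    """Rebuild an approximate value from the three fields."""
    if biased_exp == 0 and mantissa == 0:
        return 0.0
    bias = (1 << (exp_bits - 1)) - 1
    value = (1.0 + mantissa / (1 << mant_bits)) * 2.0 ** (biased_exp - bias)
    return -value if sign else value

# An 8-bit layout (1 sign + 4 exponent + 3 mantissa bits): lossy round trip.
fields = encode(3.14159)
print(fields, decode(*fields))   # (0, 8, 5) -> 3.25
```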
2022
Tesla Reveals Dojo Progress
On August 12, Musk announced that Tesla would "phase in Dojo," reducing the need for additional GPUs the following year.
On September 30, at Tesla's second AI Day, the company revealed the installation of the first Dojo cabinet, which underwent 2.2 megawatts of load testing. Tesla was building one tile per day, each made up of 25 D1 chips. They demonstrated Dojo running a Stable Diffusion model to generate an AI image of a "Cybertruck on Mars." The company set a target to complete a full Exapod cluster by Q1 2023 and planned to build seven Exapods in Palo Alto.
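Taken together with the 3,000-chip figure from AI Day 2021, those numbers imply a rough build rate. The sketch below is back-of-envelope arithmetic based only on the figures quoted above, not Tesla's own projections.

```python
# Back-of-envelope only, using figures quoted in this timeline.
chips_per_cluster = 3_000   # D1 chips Tesla planned per AI cluster (2021 figure)
chips_per_tile = 25         # D1 chips per Dojo tile
tiles_per_day = 1           # stated production rate at AI Day 2022

tiles_needed = chips_per_cluster / chips_per_tile   # 120 tiles
days_to_build = tiles_needed / tiles_per_day         # ~120 days, roughly 4 months
print(tiles_needed, days_to_build)
```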
2023
A 'Long-Shot Bet'
On April 19, during Tesla's first-quarter earnings call, Musk described Dojo as having "the potential for an order of magnitude improvement in the cost of training." He also suggested that Dojo could become a sellable service to other companies, similar to Amazon Web Services. Musk called it a "long-shot bet" but one worth taking.
On June 21, Tesla's AI X account posted that the company's neural networks were already in customer vehicles. A graph showed Tesla's current and projected compute power, with Dojo production starting in July 2023. Musk confirmed that Dojo was online and running tasks at Tesla data centers. Tesla projected its compute power to be among the top five globally by February 2024 and aimed to reach 100 exaflops by October 2024.
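To put the 100-exaflop target in perspective, it can be converted into GPU counts under an assumed throughput of roughly 1 petaflop of dense BF16 per Nvidia H100; precision and sparsity choices shift the result, so the sketch below is an order-of-magnitude estimate, not a figure from Tesla.

```python
# Rough conversion of the 100-exaflop target into H100-equivalents.
# Assumes ~1 PFLOPS of dense BF16 per H100; sparsity or lower precision
# would roughly halve the count, so treat this as an order-of-magnitude estimate.
target_exaflops = 100
pflops_per_h100 = 1.0                    # assumed dense BF16 throughput per GPU
target_pflops = target_exaflops * 1_000  # 1 exaflop = 1,000 petaflops
h100_equivalents = target_pflops / pflops_per_h100
print(h100_equivalents)                  # ~100,000 H100-equivalent GPUs
```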
On July 19, Tesla's second-quarter earnings report confirmed the start of Dojo production. Musk announced plans to spend over $1 billion on Dojo through 2024.
On September 6, Musk posted on X that Tesla was limited by AI training compute but that Nvidia and Dojo would address this. He highlighted the challenge of managing the roughly 160 billion frames of video Tesla receives daily from its cars.
2024
Plans to Scale
On January 24, during Tesla's fourth-quarter and full-year earnings call, Musk acknowledged Dojo as a high-risk, high-reward project. He mentioned Tesla's dual path with Nvidia and Dojo, confirming that Dojo was operational and scaling up, with plans for Dojo 1.5, Dojo 2, and beyond.
On January 26, Tesla announced a $500 million investment to build a Dojo supercomputer in Buffalo. Musk downplayed the investment on X, noting it was equivalent to a 10k H100 system from Nvidia, and that Tesla would spend more on Nvidia hardware that year. He emphasized that the cost of being competitive in AI was at least several billion dollars annually.
On April 30, at TSMC's North American Technology Symposium, it was revealed that Dojo's next-generation training tile, the D2, was in production. The D2 would integrate the entire Dojo tile onto a single silicon wafer, rather than using 25 chips.
On May 20, Musk announced that the rear portion of the Giga Texas factory extension would house a "super dense, water-cooled supercomputer cluster."
On June 4, a CNBC report revealed that Musk had diverted thousands of Nvidia chips meant for Tesla to X and xAI. Musk initially denied the report but later clarified on X that Tesla didn't have a location ready for the chips, so they would have sat in a warehouse. He noted that the Giga Texas extension would house 50k H100s for FSD training.
Musk also shared that of the roughly $10 billion in AI-related expenditures Tesla planned for that year, about half was internal, including the Tesla-designed AI inference computer and sensors in all cars, plus Dojo. Nvidia hardware accounted for about two-thirds of the cost for building AI training superclusters, with Tesla's Nvidia purchases estimated at $3B to $4B.
On July 1, Musk revealed on X that current Tesla vehicles might not have the right hardware for the company's next-gen AI model, which would require a 5x increase in parameter count, necessitating an upgrade to the vehicle inference computer.
Nvidia Supply Challenges
On July 23, during Tesla's second-quarter earnings call, Musk highlighted the high demand for Nvidia hardware, making it difficult to obtain GPUs. He stressed the need to focus more on Dojo to ensure Tesla had the necessary training capability. Musk saw a path to being competitive with Nvidia using Dojo.
Tesla's investor deck predicted that Tesla's AI training capacity would increase to roughly 90,000 H100 equivalent GPUs by the end of 2024, up from around 40,000 in June. Later that day, Musk posted on X that Dojo 1 would have "roughly 8k H100-equivalent of training online by end of year," along with photos of the supercomputer, which resembled Tesla's Cybertrucks in design.
From Dojo to Cortex
On July 30, Musk said AI5, Tesla's next-generation vehicle inference computer, was about 18 months away from high-volume production, responding to concerns that older hardware would be left behind.
On August 3, Musk shared a walkthrough of "the Tesla supercompute cluster at Giga Texas (aka Cortex)," which would consist of roughly 100,000 H100/H200 Nvidia GPUs and massive storage for video training of FSD and Optimus.
On August 26, Musk posted a video of Cortex, describing it as "the giant new AI training supercluster being built at Tesla HQ in Austin to solve real-world AI."
2025
No Updates on Dojo in 2025
On January 29, Tesla's Q4 and full-year 2024 earnings call made no mention of Dojo. Instead, Cortex, the new AI training supercluster at the Austin gigafactory, took center stage. Tesla's shareholder deck noted the completion of Cortex, which comprised roughly 50,000 H100 Nvidia GPUs.
"Cortex helped enable V13 of FSD (Supervised), which boasts major improvements in safety and comfort thanks to a 4.2x increase in data, higher resolution video inputs ... among other enhancements," according to the letter.
During the call, CFO Vaibhav Taneja mentioned that Tesla accelerated the buildout of Cortex to speed up the rollout of FSD V13. He reported that accumulated AI-related capital expenditures, including infrastructure, "so far has been approximately $5 billion." For 2025, Taneja expected AI-related capital expenditures to remain flat.
This story was originally published on August 10, 2024, and will be updated as new information becomes available.