Tesla is building an AI supercomputer cluster in Austin called Cortex, notable for both its scale and its energy consumption. A video recently released by Elon Musk shows part of its internal structure and indicates steady progress on the project. Cortex is expected to house 70,000 AI servers, drawing an initial 130 megawatts of power and eventually scaling to 500 megawatts. It will be used primarily to train Tesla's Full Self-Driving (FSD) system, the Optimus robot, and other projects. The cluster combines Nvidia H100 GPUs with Tesla's in-house hardware and relies on a large liquid-cooling system to manage the resulting heat.
The video shared by Musk shows the interior of the Cortex cluster, which is housed at Tesla's "Giga Texas" factory and is expected to require about 130 megawatts of power and cooling at startup, growing to 500 MW by 2026. The footage shows large numbers of server racks being assembled; the finished cluster is expected to hold 70,000 AI servers built around Nvidia H100 GPUs alongside Tesla's own hardware. Cortex is being built to "solve real-world AI problems," including training Tesla's Full Self-Driving system and the Optimus robot, and its cooling infrastructure and power draw are substantial in their own right. It is also only one of several supercomputer clusters Musk is developing, underscoring the scope of his ambitions in artificial intelligence.
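For a rough sense of how the reported figures relate, the back-of-envelope sketch below compares the 70,000-server target with the 500 MW power figure. The GPUs-per-server count, per-GPU power draw, and overhead factor are assumptions for illustration only, not figures from the video or from Tesla.

```python
# Rough sanity check of the reported Cortex figures.
# Assumptions (not from the article): ~8 GPUs per AI server,
# ~0.7 kW per Nvidia H100, and ~30% overhead for CPUs,
# networking, and cooling.

SERVERS = 70_000          # reported target server count
GPUS_PER_SERVER = 8       # assumption
KW_PER_GPU = 0.7          # assumption: H100 board power in kW
OVERHEAD = 1.3            # assumption: facility overhead factor

gpu_load_mw = SERVERS * GPUS_PER_SERVER * KW_PER_GPU / 1000
total_mw = gpu_load_mw * OVERHEAD

print(f"GPU load alone: ~{gpu_load_mw:.0f} MW")   # ~392 MW
print(f"With overhead:  ~{total_mw:.0f} MW")      # ~510 MW, near the 500 MW target
print(f"Initial 130 MW covers roughly {130 / total_mw:.0%} of the full build-out")
```

Under these assumptions, the 500 MW figure is consistent with a fully populated cluster, while the initial 130 MW would correspond to only a partial build-out.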
The construction pace and final scale of Cortex will have a lasting impact on Tesla's autonomous-driving and robotics efforts, and will serve as a reference point for other companies building AI infrastructure. How Cortex performs in practice, and what it contributes to the broader development of artificial intelligence, will be worth watching.