At the recent CES show, Nvidia CEO Jensen Huang announced that the company's AI chips are improving in performance faster than the historical pace of Moore's Law. The statement drew widespread attention, especially amid debate in the tech community over whether AI progress has stalled.
Moore's Law, proposed by Intel co-founder Gordon Moore in 1965, predicted that the number of transistors on a computer chip would roughly double each year, with chip performance doubling accordingly. In recent years, however, Moore's Law has slowed significantly, which makes Nvidia's claimed breakthrough all the more striking.
Huang pointed out that Nvidia's latest data-center superchip is more than 30 times faster than the previous generation at running AI inference workloads. "We can build architectures, chips, systems, libraries and algorithms at the same time, and if we can do that, we can go beyond Moore's Law, because we can innovate across the entire technology stack." This full-stack innovation, he argued, is what lets Nvidia maintain its lead in AI chips.
Leading AI labs such as Google, OpenAI and Anthropic currently use Nvidia's AI chips to train and run their models, so advances in these chips translate directly into more capable AI models and drive the broader AI industry forward.
Huang also noted that there are now three active AI scaling laws: pre-training, post-training and test-time compute. He stressed that Moore's Law mattered so much in computing history because it drove down the cost of computation, and that performance gains in inference will likewise drive down the cost of inference. This, in his view, is what makes widespread deployment of AI models economically viable.
Although some observers have questioned whether Nvidia's expensive chips can stay ahead in inference, Huang said the latest GB200 NVL72 chip is 30 to 40 times faster than the H100 on inference workloads, which will make AI reasoning models more economical and affordable. This performance gain not only strengthens Nvidia's competitive position but also opens up more avenues for the broader adoption of AI technology.
Huang emphasized that increasing compute performance is the most direct and effective way to address both the performance and the affordability of inference. He expects the cost of running AI models to keep falling as computing technology advances, even though some models at companies such as OpenAI are currently expensive to run. This prediction paints an optimistic picture for the future of AI technology.
Huang said that today's AI chips are 1,000 times more performant than those of ten years ago, a pace of progress that far exceeds Moore's Law, and he believes this trend will not stop any time soon. Sustained innovation at this rate, he argued, will bring further breakthroughs and opportunities to the AI industry.
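The 1,000-fold figure can be put in context with a quick back-of-envelope calculation. The sketch below (not Nvidia's own math) compares that claim against Moore's Law in its commonly cited two-year-doubling form; Moore's original 1965 statement used a one-year doubling period, which is included for comparison.

```python
# Back-of-envelope comparison: projected Moore's Law gains over ten years
# versus the stated 1,000x AI-chip improvement. The doubling periods are
# standard formulations of Moore's Law, not figures from the article.

def moores_law_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """Performance multiplier after `years` if performance doubles
    every `doubling_period_years` years."""
    return 2.0 ** (years / doubling_period_years)

claimed_ai_gain = 1_000  # Huang's stated ten-year figure

two_year = moores_law_factor(10)        # 2**5  = 32x
one_year = moores_law_factor(10, 1.0)   # 2**10 = 1024x

print(f"Moore's Law (2-year doubling), 10 years: {two_year:.0f}x")
print(f"Moore's Law (1-year doubling), 10 years: {one_year:.0f}x")
print(f"Stated AI chip gain over 10 years:       {claimed_ai_gain}x")
```

Under the two-year-doubling form, 1,000× in a decade is roughly 30 times Moore's pace; under the original one-year formulation, it is about on par.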
Key points: Nvidia CEO Jensen Huang says the company's AI chip performance gains have surpassed Moore's Law. The latest GB200 NVL72 chip is 30 to 40 times faster on AI inference workloads than the H100. Huang predicts that as computing power improves, the cost of using AI models will steadily fall. These advances not only demonstrate Nvidia's technical strength but also point the way for the future development of AI technology.