Generative AI has advanced rapidly in recent years, but the traditional approach of improving performance simply by adding more data and computing power is hitting a bottleneck. The editor of Downcodes has learned that many leading AI scientists say the field is shifting from an era of scale expansion to a new stage centered on breakthrough innovation. This shift means AI development will focus more on improving model quality than on pursuing ever-larger scale. New technical paths and methods are being explored and applied, bringing both opportunities and challenges for the field's future.
With the rapid development of generative AI, the industry's long-held belief that bigger is better is changing. Several top AI scientists recently said that improving AI performance simply by increasing data and computing power is approaching a bottleneck, and new directions for technological breakthroughs are emerging.
Ilya Sutskever, co-founder of Safe Superintelligence and OpenAI, recently said that traditional pre-training methods have hit a performance plateau. The assertion is particularly striking because it was his early advocacy of large-scale pre-training that gave rise to ChatGPT. Today, he says, the field has moved from an era of scale expansion back to an age of wonder and discovery.
Large-model training currently faces multiple challenges: training runs costing tens of millions of dollars, hardware-failure risk driven by system complexity, long test cycles, and limits on data resources and energy supply. These problems have prompted researchers to explore new technical paths.
Among these, test-time compute has drawn wide attention. The approach lets a model generate and evaluate multiple candidate answers during inference, rather than committing immediately to a single response. OpenAI researcher Noam Brown offered a vivid comparison: letting an AI think for just 20 seconds in a hand of poker produced the same performance boost as scaling up the model size and training time by 100,000 times.
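To make the idea concrete, here is a minimal sketch of one simple flavor of test-time compute, best-of-N selection: generate several candidate answers, score each with a verifier, and return the highest-scoring one. Everything here is illustrative, not OpenAI's actual implementation; the toy "model" just cycles through canned answers, and the scorer is a trivial checker standing in for a learned reward model.

```python
def generate_candidates(prompt: str, n: int) -> list[str]:
    """Toy stand-in for sampling n completions from a language model.

    A real model would sample stochastically; here we cycle through a
    fixed pool of possible answers so the example is deterministic.
    """
    pool = ["2+2=3", "2+2=4", "2+2=5"]
    return [pool[i % len(pool)] for i in range(n)]

def score(prompt: str, answer: str) -> float:
    """Toy verifier: 1.0 for a correct answer, 0.0 otherwise.

    Real systems might use a learned reward model or a programmatic
    checker (e.g. running a unit test or solving the math directly).
    """
    return 1.0 if answer.endswith("=4") else 0.0

def best_of_n(prompt: str, n: int = 8) -> str:
    """Spend extra inference-time compute (larger n) to get a better answer.

    Instead of returning the model's first response, generate n
    candidates and keep the one the verifier rates highest.
    """
    candidates = generate_candidates(prompt, n)
    return max(candidates, key=lambda a: score(prompt, a))
```

The key trade-off is visible in `best_of_n`: quality improves with `n`, but so does the compute spent per query, which is exactly why this technique shifts cost from training time to inference time.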
Many top AI laboratories, including OpenAI, Anthropic, xAI and DeepMind, are actively developing their own versions of the technique. OpenAI has already applied it in its latest model, o1, and chief product officer Kevin Weil said these innovative methods open up many opportunities to improve model performance.
Industry experts believe this shift in technical direction could reshape the competitive landscape of the AI industry and fundamentally change the resources AI companies demand. It marks the start of a new stage in AI development, one that prioritizes quality improvement over pure scale expansion.
These technological breakthroughs bring new opportunities to the AI industry and raise fresh questions about its future direction. The editor of Downcodes believes that more innovative techniques will continue to emerge, pushing AI technology to a deeper level and ultimately benefiting human society.