There has been heated debate in the field of artificial intelligence recently over whether its development has reached a bottleneck. Anthropic co-founder Jack Clark recently published an article expressing optimism about AI's prospects, using OpenAI's o3 model as his main example. He argues that AI development is not stagnating but accelerating, though it will require more innovative methods, such as combining reinforcement learning with additional computing power at inference time.
Has the development of artificial intelligence reached a bottleneck? Jack Clark, co-founder of Anthropic, argued in a recent newsletter that it has not. He believes the o3 model recently released by OpenAI shows that AI development has not slowed down and may in fact be accelerating.
In his newsletter Import AI, Clark refutes claims that AI development is reaching its limits. "Anyone who tells you that progress is slowing or that scaling is hitting a bottleneck is wrong," he wrote. He pointed to OpenAI's new o3 model as proof that there is still huge room for growth in AI, though it will require a different approach: instead of simply scaling up the model, o3 leverages reinforcement learning and additional computing power at runtime.
Clark believes this ability to "think out loud" at runtime opens up entirely new possibilities for scaling. He expects the trend to accelerate in 2025, as companies begin to combine traditional methods, such as larger base models, with new ways of using compute during training and inference. This echoes what OpenAI said when it first launched its o-series models.
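Neither Clark nor OpenAI has spelled out exactly how o3 spends this extra runtime compute, so the sketch below only illustrates the general idea of test-time scaling using best-of-N sampling, one publicly known approach; call_model and score_answer are hypothetical placeholders, not any vendor's actual API.

```python
# Minimal sketch of trading extra inference-time compute for answer quality,
# assuming a best-of-N sampling scheme with a scoring model. Everything here
# is a hypothetical placeholder, not OpenAI's or Anthropic's actual method.
import random

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for one sampled model response."""
    return f"candidate {random.randint(0, 9999)} for: {prompt}"

def score_answer(answer: str) -> float:
    """Hypothetical verifier/reward model assigning a quality score."""
    return random.random()

def best_of_n(prompt: str, n: int) -> str:
    """Sample n candidates and keep the highest-scoring one.

    Larger n means more compute spent at inference time, traded for a
    better chance of surfacing a correct answer.
    """
    candidates = [call_model(prompt) for _ in range(n)]
    return max(candidates, key=score_answer)

if __name__ == "__main__":
    print(best_of_n("What is 17 * 24?", n=8))
```

The point of the sketch is simply that the quality knob is no longer only the size of the base model: the same model can be run harder, at higher cost, on harder problems.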
Clark warned that most people have probably not anticipated how quickly AI will develop: "I think basically no one realizes how significant the advances in AI will be in the future."
However, he noted that computational cost is a major challenge. The most advanced version of o3 requires 170 times the computing power of its base version, which already requires more resources than o1, which itself requires more resources than GPT-4o.
Clark explained that these new systems make costs more difficult to predict. In the past, cost was simply determined by model size and output length. But for o3, resource requirements may vary based on specific tasks.
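To see why prediction gets harder, consider a rough back-of-the-envelope cost model; the prices and token counts below are invented for illustration and bear no relation to real o3 pricing.

```python
# Rough sketch of why per-request cost becomes hard to predict once a model
# consumes a task-dependent number of hidden "reasoning" tokens.
# All prices and token counts are made-up illustrative numbers.

PRICE_PER_1K_TOKENS = 0.06  # hypothetical dollars per 1,000 billed tokens

def classic_cost(output_tokens: int) -> float:
    """Old-style estimate: cost tracks the visible output length."""
    return output_tokens / 1000 * PRICE_PER_1K_TOKENS

def reasoning_cost(output_tokens: int, reasoning_tokens: int) -> float:
    """Reasoning-model estimate: hidden thinking tokens are billed too."""
    return (output_tokens + reasoning_tokens) / 1000 * PRICE_PER_1K_TOKENS

# The same 200-token visible answer can differ enormously in cost,
# depending on how long the model "thinks" about the task at hand.
easy = reasoning_cost(output_tokens=200, reasoning_tokens=500)
hard = reasoning_cost(output_tokens=200, reasoning_tokens=50_000)
print(f"easy task: ${easy:.4f}  hard task: ${hard:.2f}")
```

With a fixed relationship between output length and cost, the old estimate was stable; with task-dependent reasoning tokens, two requests that look identical to the user can differ in cost many times over.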
Despite these challenges, Clark remains convinced that combining traditional scaling methods with new ones will lead to "more significant" AI advances in 2025 than we have seen so far.
Clark's prediction has sparked interest in Anthropic's own plans. The company has yet to release a "reasoning" or test-time compute model capable of competing with OpenAI's o-series or Google's Gemini Flash Thinking.
The previously announced flagship model Opus 3.5 remains on hold, reportedly because its performance improvements are not enough to justify operating costs. Although some believe this is indicative of broader challenges in scaling large language models, Opus 3.5 is not a complete failure. The model is said to have helped train the new Sonnet 3.5, which has become the most popular language model on the market.
Jack Clark's view offers a fresh perspective on the future direction of artificial intelligence. Although computing costs remain a major challenge, the combination of innovative methods with the continued development of large models suggests that AI is headed for even more significant progress. In the coming years, we can expect to see AI demonstrate its capabilities in many more fields.