xAI's next-generation AI model, Grok 3, originally slated for release at the end of 2024, has missed its deadline, raising concern across the industry. Grok 3 is positioned to compete with GPT-4 and Gemini, offers image analysis and question-answering capabilities, and is deployed on Musk's social platform X. Its delay, along with similar delays at other AI companies, has prompted renewed scrutiny of the bottlenecks in current AI training methods.
Grok 3, the next-generation AI model from xAI, the AI company founded by Elon Musk, was originally scheduled for release at the end of 2024 but has failed to arrive on time, and the news has drawn widespread attention in the industry. Grok 3 is regarded as xAI's flagship competitor to OpenAI's GPT-4 and Google's Gemini; it offers image analysis and question-answering capabilities and is already in use on Musk's social network X.
Last summer, Musk said on X that Grok 3 would be "an anticipated major breakthrough" after training on 100,000 H100 GPUs. Yet by January 2, just after the new year, Grok 3 had still not appeared, with no sign of an imminent release. Reports suggest an intermediate version, "Grok 2.5," may be launched first.
This is not an isolated case; many companies in the AI industry have run into similar setbacks. AI startup Anthropic, for example, failed to ship Claude 3.5 Opus, the successor to its Claude 3 Opus model, on schedule last year, and ultimately shelved the release after having announced that the model would launch by the end of 2024. Google and OpenAI have likewise seen delays in the release of their flagship models.
This string of delays may reflect bottlenecks in current AI training methods. In the past, companies could achieve significant performance gains by applying massive computing power and ever-larger datasets to training. But as the improvement from each model generation shrinks, companies have begun seeking alternative techniques to break through the plateau. In an interview with podcast host Lex Fridman, Musk said that while his expectations for Grok 3 are high, he conceded they may not be realized.
In addition, the small size of the xAI team may be a significant factor in Grok 3's delay. Compared with its competitors, xAI has limited resources and staffing, which inevitably makes developing new models more challenging.
Overall, the delay of Grok 3 is not just a minor stumble for xAI; it reflects a broader pattern across the AI industry as it confronts technical bottlenecks.
The delayed release of Grok 3 highlights the technical challenges and commercial pressures facing large language model development. Going forward, the AI industry will need to explore new training methods and technologies to overcome the plateau in model performance and improve R&D efficiency.