OpenAI quietly updated the GPT-4o model underlying ChatGPT. This is not the release of a new model, but an improvement based on user feedback intended to enhance the user experience. OpenAI briefly announced the news on social media, saying it hoped users would like the new version. Although official information is limited, user feedback and details disclosed by OpenAI insiders suggest the update mainly shows up as more detailed step-by-step reasoning, more thorough explanations, and improved image generation.
Yesterday, the ChatGPT account quietly announced on the social network X that the AI tool started using the new GPT-4o model last week. The post read: “Since last week, a new GPT-4o model has been available in ChatGPT. We hope you all like it, and if you haven’t tried it yet, check it out! We think you will like it.”
While the ChatGPT app account on X did not provide further information immediately after the post, sources at OpenAI revealed that the new model was updated based on user feedback.
In subsequent release notes, OpenAI explained that it had introduced an update to the GPT-4o model and found, through experiments and user feedback, that users preferred the new version. While this is not an entirely new model, the company said it is working to find better ways to measure and communicate behavioral improvements in its models.
Many users have found that the new model appears to perform more detailed step-by-step reasoning and give fuller explanations. However, an OpenAI spokesperson said the model's inference process has not changed; when ChatGPT describes its reasoning, it is mainly responding to specific prompts from users. Even before the official announcement, many users had noticed that ChatGPT's performance seemed to have improved.
In addition, users reported that GPT-4o's image generation capabilities were also enabled in ChatGPT. Although the earlier GPT-4 model could already generate images, this update allows GPT-4o to produce higher-quality images more efficiently. The new model handles not only text but also represents pixels as tokens, enabling more accurate and realistic image generation.
While many users welcomed the update, some were critical, arguing that OpenAI should explain more clearly how the model's behavior and the user experience have changed. Some users even felt the changes were not significant enough and came across as somewhat superficial.
When asked whether different versions of GPT-4o run in ChatGPT and the API, an OpenAI spokesperson said the company frequently makes small improvements in both, and releases in the API the version best suited to developers' needs.
OpenAI confirmed via its official developer account that the new model is now available in the API; developers can use "chatgpt-4o-latest" to test the latest improvements. The OpenAI team is also actively answering questions from users and developers about the details of this update.
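For developers who want to try the updated model, a minimal sketch of a request is shown below. It assumes the official openai Python SDK and an OPENAI_API_KEY set in the environment; the prompt text is purely illustrative.

```python
# Minimal sketch: calling the updated model via the API using the openai Python SDK.
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="chatgpt-4o-latest",  # alias tracking the latest GPT-4o used in ChatGPT
    messages=[
        {"role": "user", "content": "Explain, step by step, how to invert a 2x2 matrix."}
    ],
)

print(response.choices[0].message.content)
```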
Key points:
The new GPT-4o model is live and was refined based on user feedback.
Users found the model performs better, with more detailed step-by-step reasoning.
The new model supports image generation, improving generation quality and efficiency.
All in all, the GPT-4o update reflects OpenAI's emphasis on user experience and its capacity for continuous, iterative improvement. Although the update does not bring revolutionary changes, its refinements in the details improve user satisfaction and build experience for larger model upgrades to come. Going forward, OpenAI is expected to communicate update information more transparently.