OpenAI recently announced an internal scale to track the progress of its large language models toward artificial general intelligence (AGI). The scale divides the path to AGI into five levels, from today's chatbots to AI that can perform the work of an entire organization, with each level representing a significant leap in capability. The editor of Downcodes explains the scale in detail below and analyzes its significance and potential impact.
According to Bloomberg, OpenAI has created an internal scale to track the progress of its large language models toward artificial general intelligence (AGI). The move not only demonstrates OpenAI's ambitions in the AGI field, but also offers the industry a new benchmark for measuring AI development.
The scale is divided into five levels:

1. Level 1: Current chatbots, such as ChatGPT.
2. Level 2: Systems capable of solving basic problems at a PhD level. OpenAI claims to be close to this level.
3. Level 3: AI agents capable of taking actions on behalf of users.
4. Level 4: AI capable of creating new innovations.
5. Level 5: AI that can perform the work of an entire organization, considered the final step toward achieving AGI.
However, experts disagree on the timeline for achieving AGI. OpenAI CEO Sam Altman said in October 2023 that AGI is still about five years away, and even if it can be realized, it will require billions of dollars of investment in computing resources.
Notably, the announcement of this scale coincides with OpenAI's announcement of a collaboration with Los Alamos National Laboratory to explore how advanced AI models such as GPT-4 can safely assist biological research. The collaboration aims to establish a set of safety and other assessment factors for the U.S. government that can be used to test various AI models in the future.
Although OpenAI declined to provide details on how models are assigned to these internal levels, Bloomberg reported that company leadership recently demonstrated a research project using the GPT-4 model, arguing that the project showed new skills resembling human reasoning.
This method of quantifying AGI progress helps give AI development a more rigorous definition and avoids subjective interpretation. However, it also raises concerns about AI safety and ethics. In May, OpenAI disbanded its safety team, with some former employees saying the company's safety culture had taken a back seat to product development, although OpenAI denied this.
OpenAI's move provides a new direction and standard for AGI research, but it also raises concerns about safety and ethics. Whether AGI will ultimately be realized remains uncertain, and the future direction of AI development, shaped by this scale and related research, deserves continued attention and caution.