Meta has released a new risk policy framework for frontier AI, which assesses the risks posed by its most advanced models and applies control measures matched to the risk level. The framework divides AI models into two categories, high-risk and critical-risk, and defines corresponding responses for each tier, such as halting development or restricting access. The move is intended to improve the security and transparency of AI development and to guard against threats such as the proliferation of biological weapons and large-scale economic fraud.
Meta recently released a new risk policy framework aimed at evaluating and reducing the risks posed by cutting-edge AI models, and at halting development or limiting the release of these systems when necessary. The framework, called the Frontier AI Framework, explains how Meta will classify AI models into two categories, high risk and critical risk, and take corresponding measures to reduce risks to "tolerable levels."
Within the framework, critical risk means a model could uniquely enable the execution of a specific threat scenario. High risk means the model could significantly increase the likelihood of a threat scenario being realized, but does not directly enable its execution. Threat scenarios include the proliferation of biological weapons with capabilities comparable to known biological agents, and widespread economic harm to individuals or companies caused by large-scale, long-running fraud and scams.
For models that meet the critical-risk threshold, Meta will halt development, restrict access to a small number of experts, and implement security protections against hacking or data exfiltration where technically and commercially feasible. For high-risk models, Meta will restrict access and apply mitigation measures to bring risk down to a moderate level, ensuring the model does not meaningfully improve an attacker's ability to carry out a threat scenario.
Meta said its risk assessment process will involve multidisciplinary experts and internal company leaders to ensure a full range of perspectives is considered. The new framework applies only to the company's most advanced models and systems, those whose capabilities match or exceed the current state of the art.
Meta hopes that by sharing its approach to developing advanced AI systems, it can improve transparency and encourage outside discussion and research on AI evaluation and the science of risk quantification. The company also emphasized that its AI evaluation and decision-making processes will continue to evolve as the technology develops, including ensuring that results from its test environments genuinely reflect how models perform in real-world deployment.
Key points:
Meta launches a new risk policy framework to assess and reduce risks from cutting-edge AI models.
Development of critical-risk models will be halted and access restricted to a small group of experts; high-risk models will face access restrictions and risk mitigation measures.
The risk assessment process will involve multidisciplinary experts, with the aim of improving transparency and scientific rigor.
Meta's new framework sets a higher security bar for the development and deployment of future AI models and reflects its commitment to responsible AI development. It not only helps reduce the potential risks of AI technology but also offers a useful reference point for the rest of the industry.