To improve the safety and reliability of its artificial intelligence models, OpenAI recently announced a series of major initiatives. These initiatives are designed to strengthen internal safety processes and impose tighter controls on the development and deployment of high-risk AI models, thereby minimizing potential risks and ensuring that AI technology is developed responsibly.
OpenAI has strengthened its internal safety processes, established a safety advisory group, and given its board of directors veto power over high-risk artificial intelligence. The company has updated its Preparedness Framework to clarify how it assesses, mitigates, and decides on the risks inherent in its models. Models are rated by risk level and matched with corresponding mitigation measures; models assessed as high risk cannot be deployed or developed further. OpenAI has also established a cross-functional safety advisory group to review expert reports and make recommendations to leadership. Leadership and the board will then jointly decide whether to release or shelve a model. With these changes, OpenAI seeks to prevent high-risk products or processes from being approved without the board's knowledge or consent.
Through these new measures, OpenAI shows that it takes artificial intelligence safety seriously and sets an example for other AI companies. It also signals that the industry is actively exploring safer, more responsible approaches to development and deployment, laying a solid foundation for the healthy growth of artificial intelligence technology.