OpenAI is actively exploring the potential of large language models in the field of biological threat intelligence. The company developed a GPT-4 early warning system designed to flag potential bioweapon threats and feed them into OpenAI's risk-prevention framework. The system is not intended to replace existing intelligence-collection methods; rather, it aims to improve the efficiency of information acquisition and assist human experts in making judgments. A recent study involving 100 participants offered a preliminary evaluation of GPT-4's effectiveness in aiding information acquisition. The results showed that GPT-4 combined with Internet access yielded slight improvements in accuracy and completeness, but the effect was limited, and the study itself noted the limitations of evaluating information acquisition alone.
OpenAI, an American artificial intelligence research company, recently began developing this GPT-4 early warning system to explore whether large language models can make obtaining information about biological threats more efficient than using the Internet alone. The system is designed to serve as a "trigger" that signals the potential presence of a biological-weapon threat and prompts further investigation, while fitting into OpenAI's prevention framework. In a related study with 100 participants, using GPT-4 in combination with the Internet produced a slight improvement in the accuracy and completeness of responses to biological-threat tasks, but the effect was not statistically significant. The study highlights the limitations of assessing only information acquisition rather than practical application, and it does not explore GPT-4's potential contribution to the development of new biological weapons. Around the same time, OpenAI announced the release of several new models, providing more application options.
Although the results of this OpenAI study are limited, it demonstrates the company's emphasis on the responsible application of artificial intelligence and provides valuable experience for future research on large language models in the field of biosecurity. Further research should focus on GPT-4's performance and potential risks in practical applications, and on developing more complete safety mechanisms.