Recently, Webmaster Home reported the results of a survey on the potential risks of superintelligent AI, which attracted widespread attention. The survey reveals AI researchers' concerns about such systems, in particular the small but real possibility that they could lead to human extinction. The results also show that researchers disagree, and remain uncertain, about the pace of AI development and its future social impact, and that they are strongly concerned about malicious uses of the technology such as deepfakes, manipulation of public opinion, and weapons production.
According to the report, AI researchers broadly worry that the development of superintelligent AI carries a small but non-negligible risk of human extinction. In the largest survey of AI researchers to date, about 58% of respondents estimated at least a 5% probability of human extinction or other extremely bad AI-related outcomes. The survey also found that researchers are widely divided on the timeline for future AI milestones and uncertain about the social consequences AI may bring. In addition, respondents expressed urgent concern about AI-enabled scenarios such as deepfakes, opinion manipulation, and weapons production.
All in all, these results remind us that while we enjoy the convenience brought by advances in AI, we must take its potential risks seriously and actively explore effective risk-management and control mechanisms, so that the technology can develop safely and sustainably for the benefit of humanity.