Recently, the editor of Downcodes learned that the lawsuit between Elon Musk and Sam Altman has inadvertently exposed early internal OpenAI emails, revealing a fierce power struggle among the company's founders over control of artificial intelligence. The emails have raised widespread concern about the future direction and safety of AI, especially the question of who controls human-level artificial intelligence (AGI). In the emails, OpenAI co-founder Ilya Sutskever expressed concern that Musk sought excessive control over AI, arguing that this could create the risk of an AGI dictatorship, and called for reasonable mechanisms to prevent that outcome.
Image source note: The image was generated by AI and licensed through the image service provider Midjourney.
In an email to Musk and Altman in September 2017, Sutskever warned that Musk's desire for control could pose a potential threat once human-level artificial intelligence (AGI) arrives. He emphasized that "the current structure provides a path for you to eventually have unilateral absolute control of AGI." Although Musk had said he did not want to control the final AGI, Sutskever believed his behavior showed a focus on absolute control.
Notably, the email was sent less than six months before Musk resigned from OpenAI over disagreements about how the company should be funded. Sutskever made his concerns explicit, writing, "As the company makes real progress with AGI, you may choose to maintain absolute control of the company, even if that is not your current intention."
In the second half of the email, Sutskever further noted that OpenAI's goal is to create a better future and to avoid an AGI dictatorship. He cited Musk's own worry that Google DeepMind founder Demis Hassabis might establish an AGI dictatorship, and agreed the concern was reasonable. He therefore called for a structure that would also prevent Musk himself from becoming a dictator, especially as the technology makes that possibility real.
The email is particularly striking today because it shows that control is a complex and sensitive question when it comes to building artificial intelligence with human-level capability. Sutskever himself has since left OpenAI to found a new company focused on AI safety, a move that lends added weight to the email's contents.
The exposure of these emails is a reminder that as artificial intelligence advances rapidly, attention to ethics and safety is crucial. Better mechanisms are needed to ensure that AI technology benefits humanity rather than becoming a threat to its future. The editor of Downcodes will continue to follow the latest developments in artificial intelligence and bring readers more valuable information.