At the Beijing Zhiyuan Conference, large AI models once again took center stage. Leading AI figures and Turing Award winners from home and abroad gathered to hold heated discussions on the future development and regulation of AI. Industry leaders including OpenAI founder Sam Altman and Meta chief AI scientist Yann LeCun shared their views, sparking fierce debate over the future of GPT models, AI regulation, and more. The editor of Downcodes will take you through the highlights of the conference and interpret the key information behind this AI feast.
The popularity of ChatGPT has yet to fade, and a single conference has pushed attention on large AI models to new heights.
At the just-concluded Beijing Zhiyuan Conference, many legendary AI heavyweights gathered: leading domestic AI figures such as Zhang Bo and Zhang Hongjiang; four Turing Award winners, Geoffrey Hinton, Yann LeCun, Yao Qizhi, and Joseph Sifakis; OpenAI founder Sam Altman; and executives from AI companies behind models such as PaLM-E and RoBERTa.
Because each Zhiyuan Conference adheres to a professional, academic orientation, it enjoys a very high reputation among elite artificial intelligence circles at home and abroad, though it remains relatively little known to the general public. At this conference, Sam Altman noted that to advance the development of general AI technology, OpenAI must drive changes in AI research.
However, this view was opposed by many AI heavyweights. Among them, Stuart Russell, a professor at the University of California, Berkeley, criticized ChatGPT and GPT-4, developed by OpenAI, as merely "answering" questions: they do not understand the world and do not represent a step toward general AI. Yann LeCun went further, pointing out that current GPT autoregressive models lack planning and reasoning capabilities, and that GPT-style systems may be abandoned altogether in the future.
In addition to fierce academic debates, how to regulate current AI and the subsequent development direction of AI have also become the focus of discussion at this meeting.
How will AI be regulated in the future?
Since 2023, as generative AI has swept through one field after another at an overwhelming pace, the problems it causes have also deepened public concern.
In China, "AI fraud" has become a recent topic of public concern. A few days ago, police in Baotou, Inner Mongolia reported a fraud case involving AI: Mr. Guo, the legal representative of a company in Fuzhou, was defrauded of 4.3 million yuan within 10 minutes. According to reports, the scammers used AI face-swapping and voice-cloning technology to impersonate an acquaintance. In a similar case, Xiao Liu from Changzhou, Jiangsu Province received voice and video calls from a scammer posing as his classmate. After seeing the "real person" on video, Xiao Liu believed him and "lent" the scammer 6,000 yuan.
Source: Douyin
In fact, the rise of AI fraud cases stems from the rapid development of AI technology and the steadily falling barrier to synthetic media. Looking ahead, if large AI model technology continues to make breakthroughs, synthesis will gradually move beyond faces to full-body and 3D synthesis, and the results will become even more realistic.
In the United States, whether AI will affect elections has become a focus of local media discussion. According to the Associated Press, today's sophisticated generative AI tools can "clone" a person's voice and image in seconds, producing large volumes of "fake material." Bound to powerful social media algorithms, AI can rapidly target audiences and spread messages, disrupting elections at unprecedented scale and speed.
Other US media predict that, with the next presidential election scheduled for next year, it cannot be ruled out that both American parties will use AI technology for propaganda, fundraising, and other activities. More importantly, ChatGPT excels at generating text, so a candidate's team could produce a beautifully worded speech in mere seconds.
Amid these concerns about AI, "AI godfather" Geoffrey Hinton, Anthropic CEO Dario Amodei, Google DeepMind CEO Demis Hassabis, and more than 350 other AI executives and experts recently signed a joint statement declaring that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
On the question of how to regulate AI going forward, Sam Altman pointed out at the Zhiyuan Conference that OpenAI is tackling the problem in several ways. First, as early as May 26, OpenAI launched an incentive program, investing US$1 million to solicit effective AI governance solutions from the public.
Second, Sam Altman believes humans are unable to detect malicious models doing harmful things, so OpenAI is investing in new and complementary research directions in the hope of breakthroughs. Scalable oversight is an attempt to use AI systems to help humans discover flaws in other systems, while interpretability work uses GPT-4 to explain the neurons of GPT-2. Although there is still a long way to go, OpenAI believes machine learning techniques can further improve AI explainability. Altman also argued that, in the future, only by making models smarter and more helpful can we better realize the goals of general AI and reduce AI risks.
Finally, although OpenAI will not launch GPT-5 in the short term, the world may have far more powerful AI systems within the next decade and needs to prepare in advance. OpenAI's subsequent core work on large AI models remains training, and it plans to establish a global database to reflect AI values and preferences worldwide and to share AI safety research with the world in real time.
Beyond OpenAI's own efforts, Sam Altman also called for global cooperation to improve AI regulation. For example, he pointed out that China has some of the best AI talent in the world, and that solving the difficult problem of aligning AI systems requires the best minds from around the globe.
He therefore hopes Chinese AI researchers will contribute to mitigating AI risks in the future. Max Tegmark likewise believes that China has currently done the most on AI regulation, followed by Europe and then the United States.
Image source: Zhiyuan Conference
In addition, Sam Altman acknowledged that global cooperation on AI regulation faces difficulties, but argued this is actually an opportunity: while AI is bringing the world together, the world also needs to introduce a systematic framework and safety standards.
However, given the intensifying rivalry among global powers, geopolitical conflicts flaring up on multiple fronts, and governments' differing attitudes toward generative AI, global cooperation on AI regulation will be difficult to achieve in the short term, which may in turn affect the market operations of generative AI companies.
Europe has long been at the forefront of AI regulation. In May, the EU came close to passing legislation on the regulation of artificial intelligence technology, which is expected to become the world's first comprehensive artificial intelligence act and may set a precedent for other developed economies.
European Commission President Ursula von der Leyen previously told the media, "We want artificial intelligence systems to be accurate, reliable, safe and non-discriminatory, regardless of their source." The introduction of such EU laws and regulations may prompt OpenAI to withdraw from the EU market. How to keep improving a generative AI model in step with shifting global regulatory policies is therefore not only a problem for OpenAI itself but an issue the entire industry needs to keep watching.
All in all, the Beijing Zhiyuan Conference showcased the vigorous development of the AI field while highlighting the urgency of AI regulation and the importance of global cooperation. How to balance the development and regulation of AI technology remains a question the world must explore together. The editor of Downcodes will continue to follow developments in the AI field and bring you more exciting reports.