The California State Assembly recently passed a high-profile AI safety bill, SB1047, which aims to regulate the development and deployment of large-scale AI models and has triggered heated debate in the technology community. At its core, the bill mandates rigorous safety testing for large-scale AI models that cost more than $100 million to train, and requires developers to build in emergency shutdown mechanisms and report safety incidents promptly. Its passage marks an important step in California's regulation of AI safety, and its influence may spread across the country and even the world.
The controversial bill now needs only a final procedural vote before it is sent to California Governor Gavin Newsom for signature. Its advancement has sparked fierce debate within Silicon Valley, with intertwined voices of support and opposition drawing widespread attention.
Image note: AI-generated image, licensed from the service provider Midjourney.
California State Senator Scott Wiener, who introduced the bill, said he is proud of the diverse coalition behind it, all of whom believe innovation is as important as safety. He argues that artificial intelligence has enormous potential to change the world, and that the bill exists to ensure this potential can be realized safely.
Reactions among tech leaders have been mixed, with xAI CEO Elon Musk expressing support. He said on social media that California should pass SB1047 to prevent the misuse of artificial intelligence, noting that he has advocated for AI regulation for more than 20 years and believes its potential risks need to be regulated.
At the heart of the bill is a requirement that companies developing large-scale AI models (with training costs exceeding $100 million) conduct comprehensive safety testing before release. The bill also requires these companies to build in an "emergency stop" function for crisis situations and to report to the California Attorney General within 72 hours of a safety incident. A newly established "Frontier Model Division" would monitor compliance, and repeat violations would face fines of up to $30 million.
Supporters include luminaries in the field of artificial intelligence such as Geoffrey Hinton and Yoshua Bengio, who believe the bill is critical to combating the risks of artificial intelligence and could even set a national standard for AI safety.
Opponents, however, argue that the bill could stifle innovation and lead to brain drain. California technology companies such as Google, OpenAI, and Meta have expressed concern, arguing that small companies would be disproportionately affected and that technological progress could slow down. They have called for regulation to be enacted at the federal level instead, to avoid inconsistent rules between states.
SB1047 passed the California Appropriations Committee with amendments and now awaits a final vote in the state Legislature. If Governor Newsom signs it, it will become the first law of its kind regulating artificial intelligence in the United States, with potentially far-reaching consequences for the global technology industry and the future of AI regulation.
Highlights:
- **Bill Passed**: California's SB1047 artificial intelligence safety bill has passed and awaits the governor's signature.
- **Safety Measures**: The bill requires high-cost AI models to undergo safety testing and include emergency stop functions.
- **Mixed Reactions**: Tech giants voice both support and opposition, with opponents worried about the impact on innovation and brain drain.
The ultimate fate of SB1047 and its impact on the development of artificial intelligence deserve continued attention: it concerns not only California's technological future but may also serve as an important reference case for AI regulation worldwide.