Recently, California State Senator Steve Padilla introduced SB 243, a bill intended to protect children from the potential harms of artificial intelligence chatbots. At its core, the bill requires AI companies to regularly remind minors that chatbots are AI, not humans, so that children are not left addicted, isolated, or misled. It also restricts "addictive interaction patterns" and requires companies to submit annual reports to the California Department of Health Care Services, including how many times suicidal ideation was detected in minors and how many times chatbots raised the topic. The move comes against the backdrop of a wrongful death lawsuit against Character.AI, which alleges that its chatbot played a role in a child's suicide. The bill reflects growing public concern about the safety of AI chatbots.
SB 243, introduced by State Senator Steve Padilla, is aimed at shielding children from the potential risks of artificial intelligence chatbots. Its headline requirement is that AI companies regularly remind minors that chatbots are artificial intelligence, not humans.
The central purpose of the bill is to keep children who use chatbots from becoming addicted, isolated, or misled. Beyond the regular reminders, the bill bars companies from using "addictive interaction patterns" and requires them to submit annual reports to the California Department of Health Care Services. These reports must include how many times suicidal ideation was detected in minors and how many times chatbots brought up the topic. AI companies must also inform users that their chatbots may not be suitable for some children.
The bill's background is closely tied to a parent's wrongful death lawsuit against Character.AI. The suit claims the company's customizable AI chatbots are "extremely dangerous," after the plaintiff's child died by suicide following prolonged interactions with a chatbot. A separate lawsuit accuses the company of sending "harmful material" to teenagers. In response, Character.AI has announced that it is developing parental control features and launching a new AI model designed to block "sensitive or suggestive" content to keep teenagers safe.
"Our children are not white mice experimenting with tech companies and cannot be tested at the cost of their mental health. We need to provide common sense protection for chatbot users to prevent developers from using them," Padilla said in a press conference. Known as addictive and predatory approaches.” As states and federal governments increasingly focus on the security of social media platforms, AI chatbots are expected to be the next focus of lawmakers.
Key points:
California’s new bill requires AI companies to remind children that chatbots are artificial intelligence rather than humans.
AI companies must submit annual reports to the state covering how often suicidal ideation was detected in minors and how often chatbots raised the topic.
The bill aims to protect children's mental health and limit "addictive interaction patterns."
The introduction of SB 243 marks a tightening of oversight of AI chatbots and signals that the future development of AI technology will place greater weight on ethics and safety. How to balance advances in AI technology with children's safety will be an important question to address going forward.