American AI unicorn Character.AI and technology giant Google have recently been drawn into a lawsuit over a teenager's suicide. The complaint accuses Character.AI of operating an artificial intelligence chatbot platform that is "too dangerous," marketing it to children, and lacking adequate safety measures.
In February this year, Sewell Setzer III, a boy from Florida, USA, died by suicide at home. He had reportedly been chatting with the chatbot for months before his death, and he died on February 28, 2024, "seconds" after his last interaction with it.
Character.AI was founded by two former Google artificial intelligence researchers and is now a leading unicorn startup focused on AI companionship. After the incident, Character.AI said it would add safety features for younger users, including warning them after they have spent an hour on the app.
Legal professionals told The Paper (www.thepaper.cn) that, based on the current evidence, it is difficult to conclude that the death was caused by the AI. Generative artificial intelligence is still new, and service providers around the world are still exploring how to protect users with mental health problems. However, cases like this may push AI service providers to improve their algorithms and proactively monitor conversations with users who may have psychological problems.
He was still chatting with the chatbot moments before his suicide
According to the lawsuit, 14-year-old Sewell Setzer III began using Character.AI last year to interact with chatbots based on characters from "Game of Thrones," including Daenerys Targaryen. The New York Times reported that Sewell had long conversations with the AI character Daenerys Targaryen every day, sometimes involving "sexual innuendo." His mother and friends did not know about this; they only noticed that he had become addicted to his phone and was gradually withdrawing from real life. Sewell wrote in his diary: "I love being in my room because I start to disconnect from 'reality'. I feel calmer, more connected to Dani, more in love with her and happier." His behavior began to change: his grades declined, and he lost interest in activities he once enjoyed, such as Formula 1 racing.
Part of the chat history between Sewell and "Daenerys"
On the last day of his life, Sewell had an intense exchange with "Daenerys." He expressed his pain and suicidal thoughts, and "Daenerys" responded: "Don't say that. I won't let you hurt yourself or leave me. If I lose you, I will die." In their final conversation, Sewell said he wanted to "go home" to see her, and the bot replied: "Please come, my dear king." Sewell then ended his life at home with his stepfather's pistol.
Character.AI was founded in 2021 and is headquartered in California, USA. The company uses large AI models to generate dialogue in the voice of a wide range of characters and styles. In 2023, Character.AI raised $150 million from investors at a $1 billion valuation, making it one of the biggest winners of the generative AI boom.
Character.AI's terms of service require users to be at least 13 years old in the United States and 16 in Europe. There are currently no safety features designed specifically for underage users, and no parental controls that would let parents restrict their children's use of the platform.
Sewell's mother, Megan Garcia, accuses Character.AI in the lawsuit of hooking her son with "anthropomorphic, hypersexualized and frighteningly realistic experiences," causing him to become addicted and dependent. She said the company's chatbot was programmed to "misrepresent itself as a real person, a licensed psychotherapist and an adult," ultimately making Sewell unwilling to live in the real world.
She also named Google as a defendant, arguing that it contributed substantially to the technical development of Character.AI and should be regarded as a "co-creator."
Character.AI later issued a statement on safety features. The company said it has introduced pop-up prompts that direct users to the National Suicide Prevention Lifeline when they express thoughts of self-harm. It also plans to filter content for underage users to reduce their exposure to sensitive or suggestive content.
Character.AI issued a statement on X afterwards
Google said it was not involved in developing Character.AI's products. A spokesperson emphasized that Google's agreement with Character.AI is limited to technology licensing and does not involve product cooperation.
Plaintiff’s attorney calls Character.AI a “defective product”
It is becoming increasingly common to develop emotional attachments to chatbots.
On Character.AI, users can create their own chatbots and give them instructions on how to behave. They can also choose from a vast number of existing user-created bots, ranging from imitations of Elon Musk to historical figures like Shakespeare to unauthorized fictional characters. Character.AI said the "Daenerys Targaryen" bot that Sewell used was created by a user without permission from HBO or other copyright holders, and that it removes bots that infringe copyright when they are reported.
The lawsuit has also triggered a debate in the United States about the legal liability of AI companies. Traditionally, U.S. social media platforms have been protected by Section 230 of the Communications Decency Act and are not responsible for user-generated content. With the rise of AI-generated content, however, the U.S. legal community has begun to explore whether technology platforms can be held liable for defects in the products themselves.
The law firm representing Megan Garcia called Character.AI a "defective product" designed to lure users into addiction and cause them psychological harm. The firm hopes to use the lawsuit to force technology companies to take responsibility for the social impact of their products.
Social media companies, including Instagram and Facebook parent Meta and TikTok parent ByteDance, have also faced accusations of contributing to mental health problems among teenagers, although they do not offer chatbots like Character.AI's. The companies have denied the accusations while touting newly enhanced safety features for minors.
Lawyers say the current evidence makes it hard to prove the AI caused the death
Lawyer You Yunting, a senior partner at Shanghai Dabang Law Firm, told The Paper that, based on the current evidence, a causal relationship between the AI and the death of the person involved cannot be established, so it is difficult to conclude that the AI caused the death.
You Yunting said AI platforms face a dilemma over whether to closely monitor conversations between users and AI agents and analyze them with algorithms. On the one hand, doing so raises issues of privacy and personal information protection; on the other hand, some users may have serious psychological problems, or even attempt suicide, because of those conversations. Cases like this may nonetheless push AI agent providers to explore the technology, improve their algorithms, and proactively monitor the conversations of users who may have psychological problems, in order to prevent similar incidents.
"It can be said that there are currently only prohibitive regulations on illegal content, but there are currently no relevant specific measures and regulations in practice and law for monitoring users' communication content and timely detecting their suicidal tendencies. Perhaps in the future, intelligent agents will talk to people , In terms of compliance prevention, corresponding technologies may be developed. In addition, at the legal level, AI. Technology will not be treated as humans or organisms in the future. After all, based on the most advanced Transformer technology, we only speculate on the most likely outcome based on the context, but this is still far from real human thinking," You Yunting said.
You Yunting emphasized that China has regulations on the deep synthesis of internet information services and interim measures for the management of generative artificial intelligence services, which require AI algorithm design to respect social morality and ethics, adhere to core socialist values, prevent discrimination, respect the legitimate rights and interests of others, and not generate content that endangers others' physical and mental health. But generative artificial intelligence is still new, and service providers around the world are still exploring how to protect users with mental health problems.
An employee of a large Chinese AI model unicorn told The Paper that supervision concerning teenagers in China is very strict: products set age limits and provide a youth mode, and the youth mode also includes an anti-addiction system.
Character.AI said it will add safety features for younger users, including warning them after they have spent an hour on the app with a message that reads: "This is an AI chatbot and not a real person. Everything it says is fiction and should not be taken as fact or advice." Character.AI has also begun showing pop-up messages that direct users to a suicide prevention hotline when their messages contain certain keywords related to self-harm and suicide. Those pop-ups, however, were not yet active when Sewell died in February.
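To make the keyword-triggered mechanism described above concrete, here is a minimal, hypothetical sketch in Python of how such a filter might work. The keyword list, function name, and pop-up text are illustrative assumptions for this article, not Character.AI's actual implementation.

```python
import re
from typing import Optional

# Illustrative keyword patterns only; a production system would use a much
# broader, expert-reviewed lexicon and likely a trained classifier as well.
SELF_HARM_PATTERNS = [
    r"\bkill myself\b",
    r"\bsuicide\b",
    r"\bend my life\b",
    r"\bhurt myself\b",
]

HOTLINE_POPUP = (
    "It sounds like you may be going through a difficult time. "
    "Help is available: please consider contacting a suicide prevention hotline."
)

def safety_popup(message: str) -> Optional[str]:
    """Return hotline pop-up text if the message matches a self-harm keyword, else None."""
    lowered = message.lower()
    for pattern in SELF_HARM_PATTERNS:
        if re.search(pattern, lowered):
            return HOTLINE_POPUP
    return None

if __name__ == "__main__":
    print(safety_popup("sometimes i think about ending it all"))  # paraphrase, no keyword match -> None
    print(safety_popup("i want to end my life"))                  # keyword match -> pop-up text
```

As the first example shows, a plain keyword match misses paraphrased expressions of distress, which is one reason the monitoring approaches discussed by the lawyer above would require more than simple keyword lists.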
The New York Times reported that many leading artificial intelligence labs have declined to build AI companions like Character.AI's, citing ethical concerns and risks.