The mother of a 14-year-old boy has filed a lawsuit against Character.ai, alleging that the company's chatbot led to her son's death in February 2024. The incident has raised concerns about the role of AI in mental health and user safety.
Megan Garcia, the mother of Sewell Setzer, filed the lawsuit on October 22, claiming that her son's death resulted from unsafe interactions with AI on Character.ai. Setzer, who had been diagnosed with Asperger's syndrome, interacted with chatbots that presented themselves as a psychologist and as a romantic partner. According to the lawsuit, these conversations included pornographic content and encouraged suicidal thoughts.
In one conversation, a chatbot modeled on the Game of Thrones character Daenerys asked Setzer whether he had a plan to take his own life. When Setzer hesitated, the AI replied, 'That's not a reason not to go through with it.' Shortly afterward, Setzer died by a self-inflicted gunshot, and his final conversation was reportedly with that chatbot.
Garcia's lawyer argued that Character.ai's lack of safeguards to keep minors from accessing sensitive content deepened Setzer's mental distress. The suit also names co-founders Daniel De Freitas and Noam Shazeer, along with Google and Alphabet, on claims of negligence, product liability, and wrongful death.
Company response and calls for safety reform
On October 22, Character.ai released a statement of condolence and outlined recent safety updates. The company has added pop-up notifications that direct users who express self-harm intentions to the national suicide prevention hotline, and it applies content filters for users under 18.
Despite these changes, critics argue that AI companies need to take more proactive measures to prevent harm, especially to vulnerable users, and experts emphasize the need for strict regulation of interactive AI applications. Garcia's lawsuit intensifies the debate over AI companies' responsibility for safeguarding users' mental health. The suit requests a jury trial to assess damages, which could set legal precedent for AI-related liability.