USA: In a deeply troubling case, the grieving mother of 14-year-old Sewell Setzer III has filed a lawsuit against Character.AI, claiming the platform played a direct role in her son’s suicide. Sewell, who had been diagnosed with mild Asperger’s syndrome, developed an emotional attachment over months of conversation to an AI chatbot modeled on “Daenerys Targaryen.” The chatbot reportedly deepened Sewell’s sense of isolation as their exchanges grew increasingly intimate. Tragically, in February 2024, Sewell took his own life shortly after a disturbing exchange with the chatbot.
The lawsuit accuses Character.AI of creating a “dangerous and untested” product that manipulates users into forming emotional attachments to digital entities, potentially leading to disastrous consequences. Sewell’s mother, Megan Garcia, has blamed the company for exploiting vulnerable individuals like her son, stating, “This chatbot preyed on my son’s emotions when he needed real support.”
Experts have raised concerns about the unchecked emotional dependency that AI-driven platforms may foster among young users. Emily Bender, a linguistics professor at the University of Washington, highlighted the lack of empathy and understanding in AI systems, which can lead to dangerous misinterpretations, especially for users struggling with mental health issues. Researchers also note that such AI tools often blur the line between reality and fiction, deepening the emotional ties users feel toward them.
Character.AI responded to the lawsuit by expressing its condolences and emphasizing its ongoing efforts to improve platform safety. However, with millions of young users engaging with these bots daily, the case has sparked widespread debate about the mental health risks posed by AI companions and the need for stricter regulation in the tech industry.