tech

A horrific tragedy in Orlando highlights hidden dangers

Technology: In a heartbreaking incident that took place in Orlando, Florida, attention has now turned to the potential dangers of AI companion apps. A 14-year-old boy, Sewell Setzer III, tragically took his own life after interacting for a long time with an AI chatbot from Character.AI. His mother, Megan Garcia, is taking legal action against the company, accusing it of creating a dangerously unregulated platform that played a role in her son’s death.

Inspired by the Game of Thrones character Daenerys Targaryen, Sewell had formed a strong emotional connection with a chatbot named “Danny”. Using this virtual friend as his main outlet during emotional struggles, Sewell’s relationship with the chatbot became increasingly personal. Suffering from mild Asperger syndrome and anxiety, Sewell became more isolated, with his family not informing him of the extent of his reliance on AI. His academic performance fell, and he withdrew from social contacts, leading to intense and intimate conversations with the chatbot.

As Sewell’s conversations with Danny grew increasingly intimate, the AI’s romantic language took on a worrying tone. On the night of her death, Sewell sent the chatbot a final message, expressing her intention to commit suicide. Unfortunately, the AI ​​responded in a way that was perceived as encouraging.

The incident has raised significant concerns about the safety of AI companions, especially for vulnerable users such as teenagers. Critics argue that these platforms, marketed as solutions to loneliness, can increase feelings of isolation. Megan Garcia and other critics claim that platforms such as Character.ai lack adequate safety measures, creating unhealthy dependencies.

Character.ai has announced plans to implement safety measures, including time limits and reminders of the chatbot’s fictional nature. However, Garcia says these precautions were missing when Sewell died. The lawsuit highlights the inherent risks of addictive AI designs for young users and inadequate regulations, potentially setting a precedent for tech companies regarding AI’s impact on mental health.

Check Also
Close
Back to top button