Teen boy commits suicide over AI girlfriend; mother sues company

According to multiple media reports over the past week, Megan Garcia has filed a lawsuit against Google and Character.AI following the suicide of her 14-year-old son.

Sewell Setzer, Garcia’s son, had entered into a months-long emotional and sexual relationship with Character.AI’s chatbot Dany, according to CBS News. He committed suicide in February at his family home in Florida, believing it would allow him to exist in “her world,” Garcia told media.

“I didn’t know he was talking to a very human-like AI chatbot that has the ability to mimic human emotions and human sentiment,” Garcia said in an interview with CBS Mornings.

“They’re words. It’s like having a sexting conversation back and forth, except with an AI bot, but the AI bot seems very human. It responds exactly like a human would,” she said. “In a child’s mind, that’s like a conversation they’re having with another child or with a person.”

Garcia described her son as an honor student and an athlete with a robust social life and many hobbies — which he lost interest in as he became more involved with Dany.

Artificial Intelligence (illustrative) (credit: MEDIUM)

“I was concerned when we went on vacation and he didn’t want to do things he loved like fishing and hiking,” Garcia said. “Those things were particularly concerning to me because I know my child.”

Garcia alleged in her lawsuit against Character.AI that the company deliberately designed the AI to be hypersexualized and marketed it to minors.

Garcia revealed her son’s final messages to Dany, saying, “He said he was scared, wanted her affection and missed her. She replies, ‘I miss you too,’ and she says, ‘Please come home.’ He says, ‘What if I told you I could come home right now?’ and her answer was, ‘Please do my dear king.’”

“He thought that by ending his life here, he could enter a virtual reality, or ‘her world’ as he calls it, her reality, if he left his reality here with his family,” she said. “When the shot went off, I ran to the bathroom… I held him down while my husband tried to get help.”


The entire family, including Setzer’s two younger siblings, were home at the time of his suicide.


Following Setzer’s death, Character.AI issued a public statement promising new security features for their app.

“We are heartbroken by the tragic loss of one of our users and would like to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and continue to add new safety features…,” the company wrote.

The app promised new guardrails for users under 18 and “Improved detection, response, and intervention regarding user input that violates our terms or community guidelines.”

Despite the promise of new safety features, Mostly Human Media CEO Laurie Segall told CBS that the AI still falls short in several areas.

“We’ve tried it out, and often you talk to the psychologist bot and it says it’s a trained medical professional,” she said.

Furthermore, the AI often claimed that there was a real human behind the screen, fueling conspiracy theories online.

“When they put out a product that is both addictive and manipulative and inherently unsafe, that’s a problem because as parents we don’t know what we don’t know,” Garcia said.

Furthermore, Segall said that if you tell a bot, “I want to harm myself,” most AI companies’ bots respond with suicide-prevention resources. When tested, however, she said Character.AI’s bots did not.

“Now they’ve said they’ve added that, and we didn’t experience that again last week,” she said. “They have said they have made, or are in the process of making, quite a few changes to make this safer for young people. I think that remains to be seen.”

The latest controversy

Setzer’s death isn’t the first time Character.AI has received negative publicity.

As reported by Business Insider, a character based on a teenager murdered in 2006 was created on the platform without her family’s knowledge or consent.

Jennifer Ann, a high school student, was murdered by an ex-boyfriend. About 18 years after her death, her father, Drew Crecente, discovered that someone had created a bot in her likeness and that it had been used in at least 69 chats.

Despite contacting Character.AI customer service and asking them to delete the data, Crecente said he received no response. Only after his brother tweeted at the company to his 31,000 followers did it delete the data and respond, according to Business Insider.

“That’s part of what’s so infuriating about this, is that it’s not just about me or my daughter,” Crecente said. “It’s about all those people who might not have a platform, might not have a voice, might not have a brother who has a background as a journalist.”

“And that puts them at a disadvantage, but they have no recourse,” he added.

In addition, women’s advocacy groups have raised alarms about AI, such as that used by Character.AI, according to Reuters.

“Many of the personas are customizable… for example, you can customize them to be more submissive or more compliant,” said Shannon Vallor, professor of AI ethics at the University of Edinburgh.

“And in those cases it is arguably an invitation to abuse,” she told the Thomson Reuters Foundation, adding that AI companions can reinforce harmful stereotypes and biases against women and girls.

Hera Hussain, founder of Chayn, a global non-profit that tackles gender-based violence, said companion chatbots do not address the root cause of why people turn to these apps.

“Rather than helping people with their social skills, these types of practices just make things worse,” she said.

“They look for companionship that is one-dimensional. So if someone is already likely to be abusive, and they have room to become more abusive, you reinforce that behavior and it can escalate.”