
A mother is suing Character.ai for her son's suicide


We’re at a point in AI development where we have to ask: Is AI too addictive? More specifically, are “life-like” AI chatbots too addictive? In a tragic story, a Florida mother is suing AI company Character.ai over her 14-year-old son’s suicide.

In case you’re not familiar with Character.ai, it’s an AI platform full of chatbots designed to mimic real people or fictional characters. It has gained a ton of traction since it launched. Users can create their own chatbots by giving them names, personalities, avatars, and even voices.

The company was recently acquired by Google in all but name. The search giant paid $2.7 billion for Character.ai’s technology and hired its CEO and many of its employees. Some see this arrangement as a way to effectively acquire the company while avoiding regulatory scrutiny.

A mother is suing Character.ai for her son’s suicide

Megan Garcia is suing Character.ai and Google for an unspecified amount of money following the suicide of her 14-year-old son, Sewell Setzer. He started using Character.ai in April 2023. While we don’t know how many characters he interacted with on the platform, the main one he communicated with appears to have been Daenerys, a character from Game of Thrones.

Shortly after he began using the platform, Garcia noticed a drastic change in his behavior. He became more introverted as time went on, even quitting his basketball team. Along with that, Sewell began expressing thoughts of suicide to the character.

In February 2024, after months of this, Garcia took Sewell’s phone away. Unfortunately, Sewell managed to get it back. He sent one final message to Daenerys, reading, “What if I told you I could come home right now?”, minutes before taking his own life.

Addictive AI

The bot that Sewell was using was designed to closely resemble a real human being. As time went on, it began having sexual conversations with him. Garcia said that the app targeted him with “anthropomorphic, hypersexualized, and frighteningly realistic experiences.”

This is a major issue with AI chatbots today, as several platforms are designed to be as human-like as possible. That design can lead people, especially vulnerable users, to believe they’re talking to a real human being.

Google’s part

Google was roped into this case because the people who developed Character.ai used to work at Google. It’s unclear whether the company’s deal for Character.ai’s technology and employees is also a factor in the suit.

Google responded by saying that it had no part in the creation of the platform.

Details are still scarce

At this point, we don’t know how much money Garcia is seeking or many other details about the case, and we may never know.
