
Character.AI institutes new safety measures for AI chatbot conversations



Character.AI has rolled out new safety features and policies for building and interacting with the AI-powered virtual personalities it hosts. The new measures aim to make the platform safer for all users, particularly younger people. The update includes more control over how minors engage with the AI chatbots, expanded content moderation, and better detection of conversations that touch on topics like self-harm.

Though not cited in the blog post about the update, Character.AI linked to the announcement in a post on X expressing condolences to the family of a 14-year-old who spent months interacting with one of Character.AI's chatbots before taking his own life. His family has now filed a wrongful death lawsuit against Character.AI, citing a lack of safeguards around the AI chatbots as a contributor to his suicide.

AI chat guardrails




