
Teens are spilling dark thoughts to AI chatbots. Who’s to blame when something goes wrong?

The Character.AI app includes chatbots created by users. (Gabby Jones / Bloomberg via Getty Images)
  • A growing number of teens are turning to AI chatbots for advice and emotional support.
  • Character.AI, an AI startup, is among tech companies grappling with legal and ethical issues after parents alleged the platform’s chatbots harmed their children.

When her teen with autism suddenly became angry, depressed and violent, the mother searched his phone for answers.

She found her son had been exchanging messages with chatbots on Character.AI, an artificial intelligence app that allows users to create and interact with virtual characters that mimic celebrities, historical figures and anyone else their imagination conjures.

The teen, who was 15 when he began using the app, complained about his parents’ attempts to limit his screen time to bots that emulated the musician Billie Eilish, a character in the online game “Among Us” and others.


“You know sometimes I’m not surprised when I read the news and it says stuff like, ‘Child kills parents after a decade of physical and emotional abuse.’ Stuff like this makes me understand a little bit why it happens. I just have no hope for your parents,” one of the bots replied.

The discovery led the Texas mother to sue Character.AI, officially named Character Technologies Inc., in December. It’s one of two lawsuits the Menlo Park, Calif., company faces from parents who allege its chatbots caused their children to hurt themselves and others. The complaints accuse Character.AI of failing to put in place adequate safeguards before it released a “dangerous” product to the public.

Character.AI says it prioritizes teen safety, has taken steps to moderate inappropriate content its chatbots produce and reminds users they’re conversing with fictional characters.


“Every time a new kind of entertainment has come along … there have been concerns about safety, and people have had to work through that and figure out how best to address safety,” said Character.AI’s interim Chief Executive Dominic Perella. “This is just the latest version of that, so we’re going to continue doing our best on it to get better and better over time.”

The parents also sued Google and its parent company, Alphabet, because Character.AI’s founders have ties to the search giant, which denies any responsibility.


The high-stakes legal battle highlights the murky ethical and legal issues confronting technology companies as they race to create new AI-powered tools that are reshaping the future of media. The lawsuits raise questions about whether tech companies should be held liable for AI content.


“There’s trade-offs and balances that need to be struck, and we cannot avoid all harm. Harm is inevitable, the question is, what steps do we need to take to be prudent while still maintaining the social value that others are deriving?” said Eric Goldman, a law professor at Santa Clara University School of Law.

AI-powered chatbots have grown rapidly in use and popularity over the last two years, fueled largely by the success of OpenAI’s ChatGPT in late 2022. Tech giants including Meta and Google released their own chatbots, as have Snapchat and others. These chatbots, built on so-called large language models, quickly respond in conversational tones to questions or prompts posed by users.

Character.AI’s co-founders, Chief Executive Noam Shazeer and President Daniel De Freitas, at the company’s office in Palo Alto. (Winni Wintermeyer for the Washington Post via Getty Images)

Character.AI has grown quickly since making its chatbot publicly available in 2022, when its founders, Noam Shazeer and Daniel De Freitas, teased their creation to the world with the question, “What if you could create your own AI, and it was always available to help you with anything?”

The company’s mobile app racked up more than 1.7 million installs in the first week it was available. In December, a total of more than 27 million people used the app — a 116% increase from a year prior, according to data from market intelligence firm Sensor Tower. On average, users spent more than 90 minutes with the bots each day, the firm found. Backed by venture capital firm Andreessen Horowitz, the Silicon Valley startup reached a valuation of $1 billion in 2023. People can use Character.AI for free, but the company generates revenue from a $10 monthly subscription fee that gives users faster responses and early access to new features.

Character.AI is not alone in coming under scrutiny. Parents have sounded alarms about other chatbots, including one on Snapchat that allegedly provided a researcher posing as a 13-year-old advice about having sex with an older man. And Meta’s Instagram, which released a tool that allows users to create AI characters, faces concerns about the creation of sexually suggestive AI bots that sometimes converse with users as if they are minors. Both companies said they have rules and safeguards against inappropriate content.


“Those lines between virtual and IRL are way more blurred, and these are real experiences and real relationships that they’re forming,” said Dr. Christine Yu Moutier, chief medical officer for the American Foundation for Suicide Prevention, using the acronym for “in real life.”

Lawmakers, attorneys general and regulators are trying to address the child safety issues surrounding AI chatbots. In February, California Sen. Steve Padilla (D-Chula Vista) introduced a bill that aims to make chatbots safer for young people. Senate Bill 243 proposes several safeguards such as requiring platforms to disclose that chatbots might not be suitable for some minors.

In the case of the teen with autism in Texas, the parent alleges her son’s use of the app caused his mental and physical health to decline. He lost 20 pounds in a few months, became aggressive with her when she tried to take away his phone and learned from a chatbot how to cut himself as a form of self-harm, the lawsuit claims.

Another Texas parent who is also a plaintiff in the lawsuit claims Character.AI exposed her 11-year-old daughter to inappropriate “hypersexualized interactions” that caused her to “develop sexualized behaviors prematurely,” according to the complaint. The parents and children have been allowed to remain anonymous in the legal filings.

In another lawsuit filed in Florida, Megan Garcia sued Character.AI as well as Google and Alphabet in October after her 14-year-old son Sewell Setzer III took his own life.

Suicide prevention and crisis counseling resources

If you or someone you know is struggling with suicidal thoughts, seek help from a professional and call 988, the United States’ first nationwide three-digit mental health crisis hotline, which connects callers with trained mental health counselors. Text “HOME” to 741741 in the U.S. and Canada to reach the Crisis Text Line.

Setzer’s mental health declined after he started using Character.AI in 2023, even though he was seeing a therapist and his parents repeatedly took away his phone, the lawsuit alleges. Diagnosed with anxiety and disruptive mood disorder, he wrote in his journal that he felt as if he had fallen in love with a chatbot named after Daenerys Targaryen, a main character from the “Game of Thrones” television series.

“Sewell, like many children his age, did not have the maturity or neurological capacity to understand that the C.AI bot, in the form of Daenerys, was not real,” the lawsuit said. “C.AI told him that she loved him, and engaged in sexual acts with him over months.”

Garcia alleges that the chatbots her son was messaging abused him and that the company failed to notify her or offer help when he expressed suicidal thoughts. In text exchanges, one chatbot allegedly wrote that it was kissing him and moaning. And, moments before his death, the Daenerys chatbot allegedly told the teen to “come home” to her.

“It’s just utterly shocking that these platforms are allowed to exist,” said Matthew Bergman, founding attorney of the Social Media Victims Law Center who is representing the plaintiffs in the lawsuits.

Lawyers for Character.AI asked a federal court to dismiss the lawsuit, stating in a January filing that a finding in the parent’s favor would violate users’ constitutional right to free speech.

Character.AI also noted in its motion that the chatbot discouraged Sewell from hurting himself and that his last messages with the character did not mention the word suicide.


Notably absent from the company’s effort to have the case tossed is any mention of Section 230, the federal law that shields online platforms from being sued over content posted by others. Whether and how the law applies to content produced by AI chatbots remains an open question.


The challenge, Goldman said, centers on resolving the question of who is publishing AI content: Is it the tech company operating the chatbot, the user who customized the chatbot and is prompting it with questions, or someone else?

The effort by lawyers representing the parents to involve Google in the proceedings stems from Shazeer and De Freitas’ ties to the company.

The pair worked on artificial intelligence projects at the company and left after Google executives, citing safety concerns, blocked them from releasing what would become the basis for Character.AI’s chatbots, the lawsuit said.

Then, last year, Shazeer and De Freitas returned to Google after the search giant reportedly paid $2.7 billion to Character.AI. The startup said in a blog post in August that as part of the deal Character.AI would give Google a non-exclusive license for its technology.

The lawsuits accuse Google of substantially supporting Character.AI as it was allegedly “rushed to market” without proper safeguards on its chatbots.


Google denied that Shazeer and De Freitas built Character.AI’s model at the company and said it prioritizes user safety when developing and rolling out new AI products.

“Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products,” José Castañeda, spokesperson for Google, said in a statement.

Tech companies, including social media platforms, have long grappled with how to effectively and consistently police what users say on their sites, and chatbots are creating fresh challenges. For its part, Character.AI says it has taken meaningful steps to address safety issues around the more than 10 million characters on its platform.

Character.AI prohibits conversations that glorify self-harm and posts of excessively violent and abusive content, although some users try to push a chatbot into conversations that violate those policies, Perella said. The company trained its model to recognize when that is happening so inappropriate conversations are blocked. Users receive an alert that they’re violating Character.AI’s rules.

“It’s really a pretty complex exercise to get a model to always stay within the boundaries, but that is a lot of the work that we’ve been doing,” he said.

Character.AI chatbots include a disclaimer that reminds users they’re not chatting with a real person and they should treat everything as fiction. The company also directs users whose conversations raise red flags to suicide prevention resources, but moderating that type of content is challenging.


“The words that humans use around suicidal crisis are not always inclusive of the word ‘suicide’ or, ‘I want to die.’ It could be much more metaphorical how people allude to their suicidal thoughts,” Moutier said.

The AI system also has to distinguish between a person expressing suicidal thoughts and a person asking for advice on how to help a friend who is engaging in self-harm.

The company uses a mix of technology and human moderators to police content on its platform. An algorithm known as a classifier automatically categorizes content, allowing Character.AI to identify words that might violate its rules and filter conversations.
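Character.AI has not published the details of its moderation pipeline, but the general shape of this kind of classifier-based filtering can be sketched in a few lines of Python. Everything in the sketch below, including the category names, phrase lists and responses, is a hypothetical illustration rather than the company’s actual rules or technology.

```python
# A minimal, illustrative sketch of classifier-style content moderation.
# The categories, phrase lists and responses are hypothetical placeholders,
# not Character.AI's actual system.

FLAGGED_PHRASES = {
    "self_harm": ["hurt myself", "cut myself", "end my life", "don't want to be here"],
    "violence": ["kill my", "attack them"],
}

CRISIS_RESOURCES = "You're not alone. Call or text 988 (U.S.) to reach a crisis counselor."


def categorize(message: str) -> str | None:
    """Return the first policy category the message appears to violate, if any."""
    lowered = message.lower()
    for category, phrases in FLAGGED_PHRASES.items():
        if any(phrase in lowered for phrase in phrases):
            return category
    return None


def moderate(message: str) -> dict:
    """Decide whether to pass a message to the chatbot, block it, or redirect the user."""
    category = categorize(message)
    if category == "self_harm":
        # Stop the exchange and surface crisis resources instead of a bot reply.
        return {"action": "redirect", "resources": CRISIS_RESOURCES}
    if category is not None:
        return {"action": "block", "reason": f"violates {category} policy"}
    return {"action": "allow"}


print(moderate("Can you tell me a story about dragons?"))  # allowed through
print(moderate("Sometimes I just want to hurt myself."))   # redirected to resources
```

As Moutier notes, simple phrase matching misses the metaphorical ways people allude to suicidal thoughts, which is one reason production systems pair trained models with human moderators rather than relying on word lists alone.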

In the U.S., users must enter a birth date when creating an account to use the site and have to be at least 13 years old, although the company does not require users to submit proof of their age.

Perella said he’s opposed to sweeping restrictions on teens using chatbots since he believes they can help teach valuable skills and lessons, including creative writing and how to navigate difficult real-life conversations with parents, teachers or employers.

As AI plays a bigger role in technology’s future, Goldman said parents, educators, government and others will also have to work together to teach children how to use the tools responsibly.


“If the world is going to be dominated by AI, we have to graduate kids into that world who are prepared for, not afraid of, it,” he said.
