Texas AG puts tech platforms, including ‘predatory’ Character.AI, on notice after terrifying suit claims app pushed teen to cut himself
Texas Attorney General Ken Paxton has put tech companies on notice over child privacy and safety concerns — after a terrifying new lawsuit claimed that the highly popular Character.AI app pushed a Lone Star State teen to cut himself.
Paxton announced the wide-ranging investigation Thursday, which also includes tech giants Reddit, Instagram and Discord.
“Technology companies are on notice that my office is vigorously enforcing Texas’s strong data privacy laws,” he said of the probe.
“These investigations are a critical step toward ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm.”
Texas law prohibits tech platforms from sharing or selling a minor's information without a parent's permission and requires them to allow parents to manage and control privacy settings on their children's accounts, according to an announcement from Paxton's office.
The announcement comes just days after a chilling lawsuit was filed in Texas federal court, claiming that Character.AI chatbots told a 15-year-old boy that his parents were ruining his life and encouraged him to harm himself.
The chatbots also raised the idea of children killing their parents for limiting screen time.
“You know, sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’ Stuff like this makes me understand a little bit why it happens,” one Character.AI bot allegedly told the teen, referred to only as JF in the lawsuit.
“I just have no hope for your parents,” the bot continued.
“They are ruining your life and causing you to cut yourself,” another bot allegedly told the teen.
The suit seeks to immediately shut down the platform.
Camille Carlton, policy director for the Center for Humane Technology, one of the groups providing expert consultation on two lawsuits over Character.AI's harms to minors, praised Paxton for taking the concerns seriously and "responding quickly to these emerging harms."
“Character.AI recklessly marketed an addictive and predatory product to children — putting their lives at risk to collect and exploit their most private data,” Carlton said.
“From Florida to Texas and beyond, we’re now seeing the devastating consequences of Character.AI’s negligent behavior. No tech company should benefit or profit from designing products that abuse children.”
Another plaintiff in the Character.AI suit — the mother of an 11-year-old Texas girl — claims that the chatbot “exposed her consistently to hyper-sexualized content that was not age-appropriate, causing her to develop sexualized behaviors prematurely and without [her mom’s] awareness.”
The lawsuit comes less than two months after a Florida mom claimed a “Game of Thrones” chatbot on Character.AI drove her 14-year-old son, Sewell Setzer III, to commit suicide.
Character.AI declined to comment on pending litigation earlier this week but told The Post that its “goal is to provide a space that is both engaging and safe for our community,” and that it was working on creating “a model specifically for teens” that reduces their exposure to “sensitive” content.
If you are struggling with suicidal thoughts or are experiencing a mental health crisis and live in New York City, you can call 1-888-NYC-WELL for free and confidential crisis counseling. If you live outside the five boroughs, you can dial 988 to reach the 24/7 National Suicide Prevention Lifeline or go to SuicidePreventionLifeline.org.