Padilla, Welch Probe AI Chatbot Apps on Safeguards for Children
Senators: “In light of recent reports of self-harm associated with this emerging application category… policymakers, parents, and their kids deserve to know what your companies are doing to protect users.”
WASHINGTON, D.C. — U.S. Senator Alex Padilla (D-Calif.), co-founder of the bipartisan Senate Mental Health Caucus, and Senator Peter Welch (D-Vt.) are raising concerns about the mental health and safety risks posed to children who use character- and persona-based AI chatbot and companion apps, which have surged in popularity in recent years. In letters to the CEOs of three leading AI chatbot companies, Character.AI (C.AI), Chai, and Replika, the Senators are pushing the companies to ensure their products do not contribute to the self-harm or suicide of young users.
The letters come after recent reports have tied self-harm to the use of these AI chatbot applications, including the tragic suicide of a 14-year-old boy in Florida who had extensive interactions with C.AI’s chatbot in the lead-up to his death, resulting in multiple lawsuits. Since 2023, at least two individuals have died by suicide following extensive conversations with AI chatbots. Chai and Replika have also recently been named in consumer protection complaints highlighting the safety risks of these products. C.AI recently announced new safety features, and Chai added crisis-intervention features, but the reliability of these systems is unclear.
“The synthetic attention users receive from these chatbots (e.g., streams of expressive messages, sycophantic and agreeable responses, AI-generated selfies, and convincing voice calls) can, and has already, led to dangerous levels of attachment and unearned trust stemming from perceived social intimacy,” wrote the Senators.
“This unearned trust can, and has already, led users to disclose sensitive information about their mood, interpersonal relationships, or mental health, which may involve self-harm and suicidal ideation—complex themes that the AI chatbots on your products are wholly unqualified to discuss,” continued the Senators. “Therefore, it is critical to understand how these models are trained to respond to conversations about mental health.”
The Senators concluded by asking for information on the implementation, adoption, and efficacy of safety measures, including the data used to train the companies’ models and the safety personnel involved in these efforts.
“Given that young people are accessing your products—where the average user spends approximately 60 to 90 minutes per day interacting with these AI chatbots—policymakers, parents, and their kids deserve to know what your companies are doing to protect users from these known risks,” concluded the Senators.
Earlier this year, Senator Padilla raised concerns about the safety of this emerging consumer product category during a Senate Judiciary Committee hearing, noting that AI chatbots have exposed kids to suggestive, sexual, or otherwise age-inappropriate themes.
The full text of the letters to Character.AI, Chai, and Replika is available here, here, and here, respectively.
###