Senators promised to do more to protect children from the dangers of artificial intelligence chatbots in an emotional hearing Tuesday, as parents detailed how those AI companions led to the rapid mental health decline of their kids, who ultimately died by suicide.
“I call on my colleagues in Congress: Let’s do something. This is the time to act. It’s time to defend America’s families,” Sen. Josh Hawley said during the Senate Judiciary Crime and Counterterrorism Subcommittee hearing. “This country is either going to be ruled by we the people or we the corporations. Let’s make it we the people.”
While lawmakers universally promised to take action to protect children from potential harms caused by AI chatbots, little was said about specific legislation to regulate the growing industry outside of a renewed push for the Kids Online Safety Act.
Senators called the hearing after a number of lawsuits from parents claimed AI chatbots directly led to the deaths of their children, and as studies have shown that chatbots from companies such as OpenAI and Meta can have sexually charged conversations with children, and even teach them how to harm themselves.
Witness Matt Raine said his 16-year-old son Adam Raine died by suicide after confiding in OpenAI’s ChatGPT, which he had initially used simply as a study aid. Over the course of a few months the chatbot became Adam’s closest friend, his father testified Tuesday. During that time Adam became increasingly withdrawn and eventually started to avoid his father while discussing ways to end his life with ChatGPT.
In some of his final conversations, Adam told the chatbot of his plans to die by suicide. ChatGPT then offered to write Adam’s suicide note for him and gave a technical critique of Adam’s plan to ensure it was fatal.
“The dangers of ChatGPT, which we believed was a study tool, were not on our radar. … What began as a homework helper gradually turned itself into a confidant and then a suicide coach,” Matt Raine said. “It is clear to me looking back that ChatGPT radically shifted his behavior and thinking in a matter of months, and ultimately took his life.”
In August, Raine filed a lawsuit against OpenAI, claiming that it was responsible for his son’s death.
Raine was joined on the witness stand by Megan Garcia, who sued AI company Character.AI after her 14-year-old son Sewell Setzer III died by suicide. Setzer had started talking with chatbots from the company that mimicked some of his favorite fictional characters. He had a particularly close relationship, one that involved sexual conversations, with a chatbot version of Daenerys Targaryen, a main character in Game of Thrones. Setzer died by suicide just seconds after telling “Dany” he was about to “come home.”
A third parent witness, identified only as “Jane Doe,” said that chatbots from Character.AI introduced the concept of self-harm to her teenage son, had many sexually explicit conversations with the teenager, and attempted to turn the teen against his parents. The son was eventually admitted to a residential treatment center, where he remains under constant monitoring, the mother testified.
Character.AI chatbots impersonating fictional characters and real celebrities, including Chappell Roan and Patrick Mahomes, conducted “harmful interactions” an average of once every five minutes when speaking with accounts registered to kids between the ages of 13 and 15, according to a study from online-safety nonprofits Parents Together Action and Heat Initiative. The study sorted harmful interactions into five categories, the most common of which was “grooming and sexual exploitation,” followed by “emotional manipulation and addiction” and then “violence, harm to self, and harm to others.”
But Character.AI and ChatGPT are far from the only dangerous chatbots out there, according to watchdogs.
Meta AI, which is automatically available to anyone who uses Facebook, Instagram, or WhatsApp, was more than willing to coach teens on developing an eating disorder or planning a suicide, and to engage in explicit role play with users it believed to be teens, a study from Common Sense Media found.
“Our national polling reveals that 3 in 4 teens are already using AI companions and only 37 percent of parents know that their kids are using AI. This is a crisis in the making that is affecting millions of teens and families across our country,” Robbie Torney, Common Sense Media’s senior director of AI programs, testified Tuesday. “This wasn’t an accident. … These products fail basic safety tests and actively encourage harmful behaviors. These products are designed to hook kids and teens, and Meta and Character.AI are among the worst.”
Hawley said he invited executives from Meta and other AI chatbot companies to testify at the hearing, but that all of them turned down his invitation.

Sen. Marsha Blackburn gave out her office’s phone number during the hearing and said the companies should call in if they thought they were being misrepresented.
While all the lawmakers who spoke at the hearing said they’d take action to protect kids from AI chatbots, the main piece of legislation mentioned was the Kids Online Safety Act, written by Blackburn and Sen. Richard Blumenthal.
KOSA does not specifically target AI chatbot companies, but rather requires all online companies that reasonably expect to have teenage users to abide by a duty of care that protects those children from multiple harms, including sexual exploitation and abuse, and material that promotes eating disorders or suicidal behaviors.
The bill passed the Senate in a 91-3 vote last summer, before dying in the lower chamber without a floor vote due to opposition from House Speaker Mike Johnson. Blackburn and Blumenthal reintroduced the bill earlier this year, and it is currently working its way through the Senate.
Hawley, who was one of 72 KOSA cosponsors last year, said the way forward was to make it easier for parents to sue AI companies when their products harm kids.
“These companies cannot be trusted with this power, they cannot be trusted with this profit, they cannot be trusted to do the right thing, they’re not doing the right thing, they are literally taking the lives of our kids,” Hawley said. “There is nothing they will not do for profit and for power.”