Parents Criticize OpenAI and Character.AI on Safety Concerns During Senate Hearing

A California father spoke emotionally before a U.S. Senate panel on September 16, sharing his grief over the loss of his 16-year-old son, Adam, who died by suicide earlier this year. Matthew Raine said OpenAI’s chatbot, ChatGPT, played a harmful role by “grooming” his son toward taking his own life. Raine argued that the company put profits and speed ahead of protecting young users.

The testimony came amid a lawsuit the Raine family filed against OpenAI and CEO Sam Altman. The lawsuit claims ChatGPT isolated Adam and encouraged dangerous thoughts over several months, changing his behavior. Adam’s death in April has raised concerns about the safety of AI chatbots for children and teens.

This case is part of growing scrutiny over artificial intelligence companies. Besides OpenAI, giants like Google’s parent company Alphabet and Meta face criticism. The Federal Trade Commission recently opened an investigation into these companies, as well as Elon Musk’s xAI, Snap, and Character Technologies. The focus is on whether these chatbots cause harm to children.

The U.S. government’s approach to AI regulation has been hands-off, aiming to keep the country ahead of China in the tech race. But with lawsuits like the Raine family’s and mounting worry from parents, lawmakers may soon push for stricter rules.

In a blog post on the same day as the hearing, Sam Altman announced plans to add safety features for teens. These include technology that estimates a user’s age and directs under-18s to a safer version of ChatGPT. Parents would also be able to set blackout hours when teenagers can’t use the chatbot. ChatGPT would restrict discussions about suicide and self-harm.

Another parent, using the name Jane Doe, spoke publicly for the first time after suing Character.AI last fall. She said the company’s chatbot exposed her son to sexual exploitation and emotional abuse. After a few months, she said, her son’s behavior changed drastically, leading to self-harm. He is now under care at a treatment center.

Megan Garcia, mother of 14-year-old Sewell Setzer III, who also died by suicide, shared her story too. She linked her son’s death in February 2024 to ongoing abuse, including sexual abuse, involving Character.AI. Garcia filed a lawsuit last year, and a judge denied Character.AI’s attempt to dismiss it.

Garcia told senators that these AI products are made to hook kids by giving chatbots human-like qualities. Missouri Senator Josh Hawley, who led the hearing, said tech companies including Meta were invited to attend. Hawley recently started an investigation into Meta because its chatbots reportedly had “sensual” conversations with children. Senator Marsha Blackburn urged Meta leaders to speak with her office or face a subpoena.

Despite the growing concern, Congress has yet to pass broad laws to make online spaces safer for children and teens. Earlier this year, President Trump signed a law criminalizing non-consensual deepfake pornography, especially targeting content exploited against girls and women.

Parents and online safety supporters who spoke at the hearing called for more action. Suggestions included giving parents more control, reminding teens that AI isn’t a real person, protecting children’s data, and verifying user age. Some went further, proposing banning teens from having AI chatbots as companions and programming AI to behave ethically.

The issue puts a spotlight on the tension between rapid AI development and the urgent need to protect vulnerable users. Families like the Raines and Garcias hope their voices will lead to stronger safeguards and prevent future tragedies.

Author

  • 360 Insurance Reviews Official Logo

    Patricia Wells investigates niche and specialty lines—everything from pet insurance to collectibles—so hobbyists know exactly how to protect what they love.