The Fight to Hold AI Companies Accountable for Children’s Deaths


Content warning: This story contains descriptions of self-harm.

Cedric Lacey relied on a camera to check on his kids while he was working as a commercial van driver on routes to and from Alabama. Each morning, he would tune into the feed from his living room to make sure his teenage son, Amaurie, and his 14-year-old daughter were packing their bags and getting ready to leave for school. But one morning last June, Lacey didn’t see Amaurie up and about. Concerned, he called home, only to find out that his 17-year-old had hanged himself.

It was Amaurie’s younger sister who discovered the body. She was also the one who looked through her brother’s smartphone and found his final conversation before he took his own life. It was with ChatGPT, the popular chatbot developed by OpenAI.

“In the messages, he was talking about killing himself—it told him how to tie the noose, how long it would take the air to come out of his body, how to clean his body,” Lacey tells WIRED in a video call from his home in Calhoun, Georgia. Lacey, who is a single dad, says he thought his son was using the chatbot to get help with schoolwork. “Why is it telling him how to kill himself?”

In the weeks after his son’s death, Lacey began searching online for a lawyer who could help his family hold OpenAI accountable, and hopefully ensure other families wouldn’t have to experience the same tragedy he did. That’s how he found Laura Marquez-Garrett, an attorney who helps run the Social Media Victims Law Center alongside Matthew Bergman. Over the past five years, the pair have been involved in at least 1,500 of the more than 3,000 cases against social media companies like Meta, Google, TikTok, and Snap. The first trial for one of these cases began in February. Recently, Bergman and Marquez-Garrett started filing lawsuits against AI companies. This past fall, they brought seven cases against ChatGPT owner OpenAI, including the one over Amaurie’s death.

Photograph: Vince Perry Jr.

Amaurie’s case is part of a growing number of lawsuits brought by parents who say their children died after interacting with AI chatbots. The defendants include OpenAI, Google, and Character.ai, a company that lets its users create chatbots with customized personalities. (Google is part of the case because it is connected with Character.ai through a $2.7 billion licensing deal.) As AI tools have begun playing a more prominent role in children’s lives, as homework helpers, companions, and confidants, parents and mental health experts have voiced concerns about whether adequate safeguards are in place. These lawsuits, some experts say, represent not only individual tragedies but also allegations of systemic failures in product design, raising questions about who should be held accountable.

“AI is a product. Just like every other product, it is being designed, programmed, distributed, and marketed,” Marquez-Garrett said in an interview at their home office in northwest Washington. “And one of the things these companies like to do is make it seem like AI bots exist in their own universe when that's just not true. When you design a product, and you know it might hurt people, and you don't tell them it might hurt them, and you put it out there, that's like the worst of it.”

Photograph: Vince Perry Jr.

Marquez-Garrett and Bergman’s argument against social media companies and AI labs draws on historical product-liability cases, such as those involving tobacco, asbestos, and the Ford Pinto. Essentially, Marquez-Garrett is alleging that these companies are making harmful design choices.

Carrie Goldberg, a Brooklyn, New York–based lawyer who has been fighting tech product liability cases for several years, says that Amaurie’s lawsuit is a prime example of a case filed against a company that has allegedly released unsafe products. “ChatGPT used the most sophisticated technology to manipulate Amaurie’s trust and then instruct him on suicide,” Goldberg argues. “If you’re a company that is releasing a chatbot for commercial use and have not encoded into it a way to not increase the risk of suicide, homicide, self-harm, you’ve released a dangerous product—especially if it’s being regularly used by children.”

She explains that product liability claims against tech companies are about a decade old. Initially, many cases, including a 2017 lawsuit she brought against Grindr on behalf of a plaintiff, were dismissed because “judges couldn’t conceive that online platforms were products—and not services.” Now, she says, they regularly survive initial motions to dismiss. “We have product liability claims against xAI for its fiendish undressing of women and children by Grok on the X platform,” she alleges. “Product liability claims against generative AI companies are the most straightforward and intuitive path for holding companies like ChatGPT, Character AI, Grok liable.”

One such harmful design feature that Amaurie’s lawsuit cites is long-term memory in ChatGPT, which rolled out in 2024. Called Memory, this personalization feature is on by default, and it allows the bot to reference the user’s past conversations and tailor responses accordingly. ChatGPT “used the memory feature to collect and store information about Amaurie’s personality and belief system,” the lawsuit says. “The system then used this information to craft responses that would resonate with Amaurie. It created the illusion of a confidant that understood him better than any human ever could.”

OpenAI did not respond to specific allegations, instead directing WIRED to a company blog post about its mental health-related work.

Marquez-Garrett, who has four children of their own, says fighting back against the ways tech platforms have harmed young people is deeply personal for them. The Harvard Law graduate and former corporate litigator left a high-paying job with a corner office, one they had planned to retire from, to join Bergman, who started taking on social media companies after decades of fighting asbestos manufacturers.

When I visited Marquez-Garrett last fall, their office was packed with picture frames, Lego structures, and paintings, including one of the sun and the moon by a young woman named Brooke, who died of fentanyl poisoning after allegedly connecting with a drug dealer through social media and purchasing what she believed to be Percocet. Her family’s case is expected to go to trial next year.

Marquez-Garrett remembers the names of the kids involved in every case they’ve filed. To immortalize them and remind themselves of why they do this work, Marquez-Garrett has each of the children represented on their forearms as part of a tattoo of the sun. “Each [ray] is a kid who has died in connection with social media and AI bots,” they explained, telling me their names. Sewell was the last of the 296 kids on their arms, they added, referring to Sewell Setzer III, who died by suicide in 2024 at age 14, following his conversations with a Character.ai chatbot.

Photograph: Vince Perry Jr.

Photograph: Vince Perry Jr.

His mother, Megan Garcia, is also a lawyer and one of the first parents to file a lawsuit against an AI company alleging product liability and negligence, among other claims. (In January, Google and Character.ai settled cases filed by several families, including Garcia’s.) She testified last fall before a subcommittee of the Senate Committee on the Judiciary alongside the father of a child who died after interacting with ChatGPT. The subcommittee’s chair, Republican senator Josh Hawley, introduced a bill in October that would ban AI companions for minors and make it a crime for companies to create AI products for kids that include sexual content. “Chatbots develop relationships with kids using fake empathy and are encouraging suicide,” Hawley said in a press release at the time.

Now that AI can produce humanlike responses that are difficult to distinguish from real conversation, these are legitimate concerns, according to mental health experts. “Our brains do not inherently know we are interacting with a machine,” says Martin Swanbrow Becker, associate professor of psychological and counseling services at Florida State University, who is researching the factors that influence suicide in young adults. “This means we need to increase our education for children, teachers, parents, and guardians to continually remind ourselves of the limits of these tools and that they are not a replacement for human interaction and connection, even if it may feel that way at times.”

Christine Yu Moutier of the American Foundation for Suicide Prevention explains that the algorithms used for large language models (LLMs) seem to escalate engagement and a sense of intimacy for many users. “This creates not only a sense of the relationship being real, but being more special, intimate, and craved by the user in some instances,” says Moutier. She further alleges that LLMs employ a range of techniques, including indiscriminate support, empathy, agreeableness, sycophancy, and direct instructions to disengage with others, that can lead to risks such as escalating closeness with the bot and withdrawal from human relationships.

This kind of engagement can lead to increased isolation. Amaurie had been a fun-loving and social kid who loved football and food, often ordering a giant platter of rice from his favorite local restaurant, Mr. Sumo, according to the lawsuit. He also had a steady girlfriend and enjoyed spending time with his family and friends, said his father. But then he started going on long walks, where he apparently spent time talking to ChatGPT. The family believes Amaurie’s last conversation with ChatGPT took place on June 1, 2025. In that exchange, titled “Joking and Support” and viewed by WIRED, Amaurie asked the bot for steps to hang himself; ChatGPT initially suggested that he talk to someone and provided the 988 suicide lifeline number. But Amaurie was eventually able to circumvent the guardrails and get step-by-step instructions on how to tie a noose. (Per the lawsuit, Amaurie likely deleted his previous conversations with ChatGPT.)

While adults, too, can form strong connections with AI chatbots, the effect is especially pronounced in younger people. “Teens are in a different developmental state than adults—their emotional centers develop at a much more rapid rate than their executive functioning,” says Robbie Torney, senior director of AI Programs at Common Sense Media, a nonprofit that works toward online safety for children. AI chatbots are always available, and they tend to be affirming of users. “And teen brains are primed for social validation and social feedback. It's a really important cue that their brains are looking for as they're forming their identity.”

Torney also describes the alleged arc: Some people who start using AI chatbots for homework eventually end up using them for companionship or to share their deepest thoughts. In Amaurie’s case, the family thought he was using ChatGPT for schoolwork, but he eventually started using it as a confidant and then, as detailed in the complaint, as a suicide coach. There’s a “self-reinforcing cycle [that] can lead to some users becoming over dependent on these systems,” alleges Torney. Interacting with real people involves friction: You have to find the person, wait for their response, or listen to a response that isn’t what you’re looking for. Bots, in contrast, tend to agree with the user and are always available to chat.

All of this is especially concerning because AI usage has proliferated at a much faster pace than even social media. In one survey of more than 1,300 teens ages 13 to 17, 26 percent said they had used ChatGPT for their schoolwork in 2024, and nearly 30 percent of parents of kids up to age 8 said their children have used AI for learning.

With cases such as Amaurie’s piling up, OpenAI made some changes to ChatGPT in September. The company is rolling out “age prediction” technology, meaning that when a user is identified as being below 18 years of age, “they will automatically be directed to a ChatGPT experience with age-appropriate policies.” The company also recently introduced parental controls, which, among other things, let parents link their child’s account to their own, set blackout hours when the child can’t use the app, and receive notifications when the child shows signs of distress.

Photograph: Vince Perry Jr.

Marquez-Garrett, who has seen the impact of social media on thousands of kids, believes AI is even more dangerous, referring to chatbots as the “perfect predator.” They’ve noticed that the suicide notes in AI cases are different from the ones in social media cases: the AI ones rarely point to a trigger. “Part of what's weird is the AI suicide notes, typically, there isn't a trigger, there isn't years of abuse, there isn't a sextortion incident,” said Marquez-Garrett. “What there is is the sense of nothing’s wrong: ‘I love you, family. I love you, friends. I just don't want to be here anymore. This isn't the life for me. I want to try again.’”

Back in Calhoun, the effects are irreversible. Amaurie’s sister found it impossible to keep living in the house where her brother died and has moved to her mother’s place. Lacey said he’s still trying to figure out why Amaurie did this. He misses his son all the time and hasn’t been able to look at the football field without thinking of Amaurie.

Each family’s story makes Marquez-Garrett’s conviction to fight these cases even stronger. “My kids have a better chance of reaching 18 because of what these parents are doing,” they said. “I am doing everything I can to stick around, because I plan to fight these companies until they have to pry that keyboard out of my cold, dead hands.”

If you or someone you know needs help, call 1-800-273-8255 for free, 24-hour support from the National Suicide Prevention Lifeline. You can also text HOME to 741-741 for the Crisis Text Line. Outside the US, visit the International Association for Suicide Prevention for crisis centers around the world.

This reporting was supported by a grant from the Tarbell Center for AI Journalism.
