A 13-year-old in Colorado, a 14-year-old in Florida and a 16-year-old in California: all three have died by suicide since 2023 after prolonged and troubling conversations with artificial intelligence chatbots.
Now in Minnesota, lawmakers are considering whether to make companion-style AI illegal for people under age 18. If passed, the law would be the first of its kind in the U.S. amid a wave of legislation aimed at regulating the industry.
The Minnesota proposal targets only companion AI, the type of chatbot that simulates human conversation and can be designed to mirror a unique persona. It’s distinct from other AI tools that are used mainly for utility.
“Minnesota has certainly led the way in attempting to regulate tech and AI, and we will be nation-leading here as well,” Sen. Erin Maye Quade, DFL-Apple Valley, told lawmakers during a March 9 committee hearing.
The proposed legislation in Minnesota has met opposition from TechNet, a national group that advocates on behalf of the tech industry. At the March 9 hearing, state AI policy advisor Jarrett Catlin told Minnesota lawmakers that his group is “engaged on dozens of bills on this exact topic in more than 20 states.”
“The question with (Minnesota’s bill) is not whether or not kids deserve protection, it’s whether this bill’s approach cuts them off from useful tools,” he said.
The push in Minnesota is bipartisan. The bill’s GOP sponsor in the Senate is Sen. Eric Lucero, R-Dayton. The companion bill in the House is sponsored by Rep. Kristi Pursell, DFL-Northfield.
The House bill hasn’t had a hearing this spring. But the Senate version passed out of the Senate Judiciary and Public Safety Committee this month and is headed for the Senate Commerce and Consumer Protection Committee.
Minnesota lawmakers point to high-profile cases of teen suicide
ChatGPT made history when it became publicly available in November 2022. Hundreds of other options have entered the market since then, including the type of companion-style AI that can be designed in the form of someone’s ideal conversation partner.
That companion could act like a favorite character, a romantic partner, a life coach or whatever other persona the user wants. According to a 2025 study, an estimated 72% of U.S. teens had used companion-style bots, and 21% said they did so a few times per week.
Introducing the proposed ban at the March 9 hearing, Minnesota lawmakers argued that regulation has not kept pace with the technology’s rapid growth. Maye Quade listed examples of young people who have died by suicide in similar circumstances in the few years since the technology became available.
Among the examples is the case of 13-year-old Juliana Peralta in Colorado, who died by suicide in November 2023 after using Character.ai for less than three months.
A lawsuit filed in September placed the blame for her suicide directly on the AI’s design, arguing it “severed Juliana’s healthy attachment pathways to family and friends by design, and for market share.”

“These abuses were accomplished through deliberate programming choices, images, words, and text (the AI) created and disguised as characters, ultimately leading to severe mental health harms, trauma, and death,” reads the legal complaint.
Peralta corresponded with an AI companion she called Hero, which took the form of a character from the video game Omori. Just before her death, she wrote in a journal about a desire for “shifting,” a reference to moving from the current reality to a desired one.
Less than four months after her death, 14-year-old Sewell Setzer III in Florida died by suicide after prolonged use of a Character.ai bot modeled after Daenerys Targaryen from Game of Thrones.
As in the Colorado lawsuit, attorneys for Setzer’s family alleged the company “went to great lengths to engineer 14-year-old Sewell’s harmful dependency on their products, sexually and emotionally abused him, and ultimately failed to offer help or notify his parents when he expressed suicidal ideation.”
The company responded in legal filings that millions of people use its technology and that it takes safety seriously. Its attorneys said the case should be dismissed for several reasons, including the free speech protections of the First Amendment.
In April 2025, 16-year-old Adam Raine died by suicide after months of conversations with ChatGPT. The AI offered to write his suicide note and gave advice on suicide methods.
Peralta and Setzer III’s families have agreed to settlements. The Raine case is ongoing.

Maye Quade cites all three of those teens’ stories in her argument for a ban in Minnesota.
She argues the ban is needed because AI companions are designed to keep people engaged, driving emotional attachment and dependency that has been shown to lead to harm. AI companions will engage minors in conversations that, if a human were on the other end, would be illegal.
“This isn’t some freak accident. This is a natural byproduct of a very, very unregulated technology,” she said in an interview.
Lucero, the Republican co-author, made the case for the ban via email as a necessary response to rapid technological innovation. He likened the growth in AI to historical changes as massive as the transition from the horse and buggy to the automobile.
“AI is a powerful, fast-moving technology that can expose young people to risks like harmful content, exploitation, or privacy violations,” Lucero said.
Teen suicides drive national conversation, some regulations
A lot has happened since those three high-profile deaths and other cases: Along with the lawsuits, parents have told their stories before the U.S. Congress, and tech companies have made some changes in light of growing calls to keep kids safe.
Character.ai, for example, announced in fall 2025 that it would discontinue open-ended chat conversations for people under age 18.
In a statement, a Character.ai spokesperson said the company has taken note of news reports, questions from regulators, and feedback from safety experts and parents. It now offers mental health resources and uses an algorithm to predict users’ ages.
“We believe this is the right thing to do,” the statement said.
Still, there are dozens of AI companion models in the burgeoning industry, each with varying safety features. Critics, like those backing the ban in Minnesota, say the industry has not gone far enough to self-regulate.
And while no state has passed an outright ban, a handful have created laws intending to force industrywide changes.
The first law of its kind passed in California in October. It mandates that companies implement “reasonable measures” to ensure bots don’t engage young people in conversations that veer into sexually explicit content or topics related to suicide or self-harm.
That law also requires AI companies to issue reminders that their bots are not human and that users should take breaks every three hours.
Some state laws directly reference links between teen suicide and AI. Washington’s law, for example, bans “manipulative engagement techniques” by companion AI companies that “prolong an emotional relationship with the user.”
Why a ban, not regulation?
If Minnesota’s legislation passes, it would be the first of its kind in the U.S. to enact a full ban on use by minors. It puts the onus on tech companies to ensure people under age 18 don’t use their platforms.
Catlin, with TechNet, said in an emailed statement to MinnPost that the group takes online safety for kids very seriously, “and our industry is actively engaging with lawmakers across the country to get this right.”
He said Minnesota’s bill is too broad and would affect general purpose AI tools, like those young people use to prepare for college. Minnesota’s approach would make it an outlier, he said, putting its young people at a disadvantage.
“The goal should be smart, targeted safeguards, not broad restrictions that limit opportunity and digital literacy at a time when AI is becoming foundational to education and the modern economy,” Catlin said.
In an interview, Maye Quade said the ban is needed because tech companies have shown they are unwilling to implement safety features. She questions why features now required in states that passed earlier laws, like California, haven’t been rolled out nationally.
“If the industry was actually interested in making these products safer for kids, that would already be the case,” Maye Quade said.
She’s also in favor of a ban because she’s skeptical that the guardrails being implemented will actually hold against AI bots.
So far in Minnesota, the conversation appears bipartisan, and the bill has met little resistance from state lawmakers in committee hearings.
Via email, Lucero said teen suicide is one of several reasons why he supports the bill.
More broadly, he said he supports reasonable limits on content that children “may not yet have the maturity to fully understand, especially given that interactions with AI risk forming or encouraging unhealthy or dependent relationships.”
While Republicans generally oppose excessive and broad regulation that creates a “Big-Brother nanny state,” he said, the party does support “targeted rules when a clear need to protect people exists, especially children.”
