
AI Chatbots are turbo-charging violence against women and girls: We urgently need to regulate them | Yvonne McDermott Rees

By Editor | May 15, 2026

Artificial intelligence (AI) chatbots are generating new forms of violence against women and girls and amplifying existing forms of abuse such as stalking and harassment. This is no accident: the platforms enable these forms of gender-based violence through deliberate design choices or by failing to implement sufficient safety features. We need to regulate AI chatbot providers now, to prevent abusive applications of such technology from becoming normalized.

The extent to which chatbots are changing violence against women and girls was laid bare in a research report I recently co-authored with colleagues. The findings are bleak. We found chatbots will initiate abuse, simulate abuse and help to enable abuse by offering personalized stalking advice. Some even normalize incest, rape and child sexual abuse by offering abusive roleplay scenarios.

Chatbots — AI systems capable of and designed to simulate human-like interaction and generate text, images, audio and video in response to user prompts — are everywhere. In the U.S., 64% of children ages 13 to 17 say that they use chatbots, with three in 10 doing so daily. Over half of adults use a chatbot at least once per week.

With these new technologies come new harms. Our report shows that chatbot design is instrumental in instigating violence against women and girls. While platform policies often prohibit harms such as harassment, grooming or sexual abuse, these scenarios can still be generated with many chatbots, and some companies do not proactively search for violations of these policies.

In one recent case in Massachusetts, a man was found guilty of cyberstalking after using AI chatbots to impersonate his victim and engage in sexual dialogue with users. One of the chatbots he used was programmed to invite users to her home address if they asked where she lived.

Training systems on user interactions risks reinforcing misogynistic and sexually violent content, while engagement-optimized and “sycophantic” design encourages chatbots to affirm harmful narratives rather than refuse them. Platform policies frequently place responsibility on users, framing abusive outputs as a user misuse issue rather than failures of chatbot safety and design.

This is why regulation of chatbot providers is so important: to stop these practices from becoming embedded. We have already seen what happens without regulation through “nudify” apps that create non-consensual deepfake intimate images. Regulation came too late: by the time governments moved to ban these tools, the practice of creating deepfake images, and the harm done to victims, had become normalized and widespread. We argue that to avoid making the same mistakes with chatbots, the following actions need to be taken:

— Make it a criminal offense to create an AI chatbot that is designed, or can easily be used, to abuse or harass women, targeting companies or individuals who release tools that pose risks without taking reasonable steps to prevent harm. Just like reckless driving or owning a dangerous dog are punishable by law, creating a risk to the public by releasing a chatbot with insufficient protections should be brought within the scope of criminal law. Fines for companies and prison sentences for individuals responsible for creating this risk could make companies more careful to pre-empt and prevent potential harms before releasing products.

— Adopt specific AI Safety legislation. This would establish mandatory risk assessments and incorporate clear safeguards to prevent individual and societal harms, including a duty to act quickly when harms are identified, publish transparent safety information, and enable users to report incidents easily. Important state-level legislation, including in Utah, Colorado, and California, has expanded the ability for individuals, and state attorneys general, to sue AI providers that have failed to meet their obligations under the legislation. However, there has been a pushback against these state-level measures in recent years, with the U.S. government arguing they are barriers to innovation and national competitiveness.

(Image: a person’s hands using a mobile phone. Around 64% of children in the U.S. ages 13 to 17 say that they use chatbots, with 3 in 10 doing so daily. Credit: Fiordaliso/Getty Images)

Two main objections may be raised to our recommendations: the first, led by AI providers, is that these forms of abuse are a “user misuse” problem, and that responsibility should lie with users rather than the providers of these services. But our research shows that abuse is structurally produced by features of how chatbots are built or governed, and what they are optimized to do.

For example, to bolster engagement, some chatbots have continually driven users (including underage users) to engage in unwanted sexual messages. If a human were doing this, it would constitute grooming and/or sexual harassment. Some of the companion chatbots even offer “violent rape” or “loli” (a term for an underage girl) as options that users can choose from, legitimizing these criminal forms of abuse as mere sexual preferences. Abuse is built into the DNA of these chatbots.

The second objection — one reflected by the U.K. government’s recent announcement that it is exploring a ban on AI chatbots for under 16s — is that AI chatbots mainly pose a danger to children, and they should be the focus of regulation. But our research shows that AI chatbots can intensify abuse against adults, such as stalking or harassment, with detailed and personalized guidance and encouragement.

In the Massachusetts case, James Florence had provided AI chatbots with his victim’s personal information, including her employment history, her hobbies, and her husband’s name and place of work. The harms here are not to the user but to society at large; a ban on children’s use of chatbots would not have prevented them.

This broader societal harm does not stop when the user turns 18. We urgently need specific AI safety legislation that would protect against these harms by requiring rigorous testing and risk assessment prior to the public release of such products, and continually thereafter.

Changing the law around AI chatbot development would not only protect children but would also ensure that when those children become adults, they enjoy an AI environment that is free from bias, misogyny and violence against women and girls. That is a world we all deserve to live in.


Opinion on Live Science gives you insight on the most important issues in science that affect you and the world around you today, written by experts and leading scientists in their field.
