Tech

Anthropic says Claude chatbot can now end abusive interactions

By Editor | August 19, 2025

Harmful, abusive interactions plague AI chatbots. Researchers have found that AI companions like
Character.AI, Nomi, and Replika are unsafe for teens under 18, ChatGPT has the potential to reinforce users’ delusional thinking, and even OpenAI CEO Sam Altman has spoken about ChatGPT users developing an “emotional reliance” on AI. Now, the companies that built these tools are slowly rolling out features that can mitigate this behavior.

On Friday, Anthropic said its Claude chatbot can now end potentially harmful conversations, a capability that “is intended for use in rare, extreme cases of persistently harmful or abusive user interactions.” In a press release, Anthropic cited examples such as sexual content involving minors, violence, and even “acts of terror.”

“We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future,” Anthropic said in its press release on Friday. “However, we take the issue seriously, and alongside our research program we’re working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible. Allowing models to end or exit potentially distressing interactions is one such intervention.”


[Image: Anthropic provided an example of Claude ending a conversation in a press release. Credit: Anthropic]

Anthropic said Claude Opus 4 has a “robust and consistent aversion to harm,” a finding from the preliminary model welfare assessment it ran as a pre-deployment test. The model showed a “strong preference against engaging with harmful tasks,” a “pattern of apparent distress when engaging with real-world users seeking harmful content,” and a “tendency to end harmful conversations when given the ability to do so in simulated user interactions.”

Basically, when a user consistently sends abusive and harmful requests to Claude, it will refuse to comply and attempt to “productively redirect the interactions.” It only ends a conversation as “a last resort,” after multiple attempts to redirect it have failed. “The scenarios where this will occur are extreme edge cases,” Anthropic wrote, adding that “the vast majority of users will not notice or be affected by this feature in any normal product use, even when discussing highly controversial issues with Claude.”

If Claude has to use this feature, the user won’t be able to send new messages in that conversation, but they can still chat with Claude in a new conversation.
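The behavior described above — refuse and redirect first, end the conversation only as a last resort, and lock the ended thread while leaving new conversations open — can be sketched as a simple state machine. This is purely illustrative; the names (`Conversation`, `MAX_REDIRECTS`, `is_harmful`) are hypothetical and not Anthropic's API, and Anthropic has not published a redirect threshold.

```python
MAX_REDIRECTS = 3  # assumed threshold for illustration; not a published figure


def is_harmful(message: str) -> bool:
    # Stand-in for the model's own judgment of persistently
    # harmful or abusive requests.
    return message == "HARMFUL"


class Conversation:
    """One chat thread; an ended thread accepts no further messages."""

    def __init__(self):
        self.redirect_attempts = 0
        self.ended = False

    def handle(self, message: str) -> str:
        if self.ended:
            # Once ended, this thread is closed; the user must
            # start a new conversation to keep chatting.
            return "conversation ended - please start a new chat"
        if is_harmful(message):
            if self.redirect_attempts < MAX_REDIRECTS:
                # Refuse and try to productively redirect first.
                self.redirect_attempts += 1
                return "refusal + redirect"
            # Last resort: end the conversation.
            self.ended = True
            return "conversation ended"
        return "normal reply"
```

Starting a fresh `Conversation` after one is ended mirrors the article's point that the user can still chat with Claude in a new conversation.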

“We’re treating this feature as an ongoing experiment and will continue refining our approach,” Anthropic wrote. “If users encounter a surprising use of the conversation-ending ability, we encourage them to submit feedback by reacting to Claude’s message with Thumbs or using the dedicated ‘Give feedback’ button.”

Topics
Artificial Intelligence
