Baynard Media

Anthropic’s AI safety policy just changed for this reason

By Editor | February 25, 2026

When Anthropic launched years ago, the company wanted an industry-wide "race to the top" in artificial intelligence, instead of a race to the bottom in which the pursuit of customers and market dominance would inadvertently lead to catastrophic safety risks.

So Anthropic adopted safety principles and policies that it hoped its competitors would also implement. In some instances, companies including Google and OpenAI did, according to Anthropic. Still, those hopes didn't fully "pan out," the company said in a blog post published Tuesday.

The post announced that Anthropic, the maker of the AI chatbot Claude, is altering key safety practices to meet what it views as present-day challenges.


Specifically, Anthropic will no longer automatically pause development of a model that could be considered dangerous; instead, it will weigh its competitors' actions and whether they release models with similar capabilities. Previously, Anthropic committed to safeguards that would reduce its models' absolute risk, regardless of whether other AI developers did the same.


“The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level,” the company wrote. “We remain convinced that effective government engagement on AI safety is both necessary and achievable, and we aim to continue advancing a conversation grounded in evidence, national security interests, economic competitiveness, and public trust. But this is proving to be a long-term project—not something that is happening organically as AI becomes more capable or crosses certain thresholds.”

Though Anthropic said it aims to continue leading on safety, its latest decision reflects the breakneck speed at which competitors are releasing new models.

Anthropic has also been under intense pressure this week from the U.S. Defense Department, which is pressing the company to allow the military to use its AI tools for any purpose, including mass surveillance or the deployment of autonomous weapons without human oversight.

Anthropic has yet to relent on those points in contract negotiations with the Defense Department, stirring the ire of Defense Secretary Pete Hegseth, who has threatened to sever the company's relationship with the military, Axios reports.

Anthropic has participated in an AI pilot program for military-related imagery analysis, along with Google, OpenAI, and xAI, according to the New York Times. Though Claude has been the only chatbot working on the government’s classified systems, a Pentagon official said Anthropic could be replaced by another firm.


Disclosure: Ziff Davis, Mashable’s parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

Topics
Artificial Intelligence
Social Good

