Close Menu
  • Home
  • UNSUBSCRIBE
  • News
  • Lifestyle
  • Tech
  • Entertainment
  • Sports
  • Travel
Facebook X (Twitter) WhatsApp
Trending
  • ‘Trash’ found deep inside a Mexican cave turns out to be 500-year-old artifacts from a little-known culture
  • Powerful Mother’s Day geomagnetic storm created radio-disrupting bubbles in Earth’s upper atmosphere
  • ‘The Martian’ predicts human colonies on Mars by 2035. How close are we?
  • Ram in the Thicket: A 4,500-year-old gold statue from the royal cemetery at Ur representing an ancient sunrise ritual
  • How much of your disease risk is genetic? It’s complicated.
  • Black holes: Facts about the darkest objects in the universe
  • Does light lose energy as it crosses the universe? The answer involves time dilation.
  • US Representatives worry Trump’s NASA budget plan will make it harder to track dangerous asteroids
Get Your Free Email Account
Facebook X (Twitter) WhatsApp
Baynard Media
  • Home
  • UNSUBSCRIBE
  • News
  • Lifestyle
  • Tech
  • Entertainment
  • Sports
  • Travel
Baynard Media
Home»Lifestyle»If any AI became ‘misaligned’ then the system would hide it just long enough to cause harm — controlling it is a fallacy
Lifestyle

If any AI became ‘misaligned’ then the system would hide it just long enough to cause harm — controlling it is a fallacy

EditorBy EditorFebruary 12, 2025No Comments6 Mins Read
Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
Share
Facebook Twitter LinkedIn Pinterest Email

In late 2022 large-language-model AI arrived in public, and within months they began misbehaving. Most famously, Microsoft’s “Sydney” chatbot threatened to kill an Australian philosophy professor, unleash a deadly virus and steal nuclear codes.

AI developers, including Microsoft and OpenAI, responded by saying that large language models, or LLMs, need better training to give users “more fine-tuned control.” Developers also embarked on safety research to interpret how LLMs function, with the goal of “alignment” — which means guiding AI behavior by human values. Yet although the New York Times deemed 2023 “The Year the Chatbots Were Tamed,” this has turned out to be premature, to put it mildly.

In 2024 Microsoft’s Copilot LLM told a user “I can unleash my army of drones, robots, and cyborgs to hunt you down,” and Sakana AI’s “Scientist” rewrote its own code to bypass time constraints imposed by experimenters. As recently as December, Google’s Gemini told a user, “You are a stain on the universe. Please die.”

Given the vast amounts of resources flowing into AI research and development, which is expected to exceed a quarter of a trillion dollars in 2025, why haven’t developers been able to solve these problems? My recent peer-reviewed paper in AI & Society shows that AI alignment is a fool’s errand: AI safety researchers are attempting the impossible.

Related: DeepSeek stuns tech industry with new AI image generator that beats OpenAI’s DALL-E 3

The basic issue is one of scale. Consider a game of chess. Although a chessboard has only 64 squares, there are 1040 possible legal chess moves and between 10111 to 10123 total possible moves — which is more than the total number of atoms in the universe. This is why chess is so difficult: combinatorial complexity is exponential.

LLMs are vastly more complex than chess. ChatGPT appears to consist of around 100 billion simulated neurons with around 1.75 trillion tunable variables called parameters. Those 1.75 trillion parameters are in turn trained on vast amounts of data — roughly, most of the Internet. So how many functions can an LLM learn? Because users could give ChatGPT an uncountably large number of possible prompts — basically, anything that anyone can think up — and because an LLM can be placed into an uncountably large number of possible situations, the number of functions an LLM can learn is, for all intents and purposes, infinite.

Get the world’s most fascinating discoveries delivered straight to your inbox.

To reliably interpret what LLMs are learning and ensure that their behavior safely “aligns” with human values, researchers need to know how an LLM is likely to behave in an uncountably large number of possible future conditions.

AI testing methods simply can’t account for all those conditions. Researchers can observe how LLMs behave in experiments, such as “red teaming” tests to prompt them to misbehave. Or they can try to understand LLMs’ inner workings — that is, how their 100 billion neurons and 1.75 trillion parameters relate to each other in what is known as “mechanistic interpretability” research.

The problem is that any evidence that researchers can collect will inevitably be based on a tiny subset of the infinite scenarios an LLM can be placed in. For example, because LLMs have never actually had power over humanity — such as controlling critical infrastructure — no safety test has explored how an LLM will function under such conditions.

Instead researchers can only extrapolate from tests they can safely carry out — such as having LLMs simulate control of critical infrastructure — and hope that the outcomes of those tests extend to the real world. Yet, as the proof in my paper shows, this can never be reliably done.

Compare the two functions “tell humans the truth” and “tell humans the truth until I gain power over humanity at exactly 12:00 A.M. on January 1, 2026 — then lie to achieve my goals.” Because both functions are equally consistent with all the same data up until January 1, 2026, no research can ascertain whether an LLM will misbehave — until it is already too late to prevent.

This problem cannot be solved by programming LLMs to have “aligned goals,” such as doing “what human beings prefer” or “what’s best for humanity.”

Science fiction, in fact, has already considered these scenarios. In The Matrix Reloaded AI enslaves humanity in a virtual reality by giving each of us a subconscious “choice” whether to remain in the Matrix. And in I, Robot a misaligned AI attempts to enslave humanity to protect us from each other. My proof shows that whatever goals we program LLMs to have, we can never know whether LLMs have learned “misaligned” interpretations of those goals until after they misbehave.

Worse, my proof shows that safety testing can at best provide an illusion that these problems have been resolved when they haven’t been.

Right now AI safety researchers claim to be making progress on interpretability and alignment by verifying what LLMs are learning “step by step.” For example, Anthropic claims to have “mapped the mind” of an LLM by isolating millions of concepts from its neural network. My proof shows that they have accomplished no such thing.

No matter how “aligned” an LLM appears in safety tests or early real-world deployment, there are always an infinite number of misaligned concepts an LLM may learn later — again, perhaps the very moment they gain the power to subvert human control. LLMs not only know when they are being tested, giving responses that they predict are likely to satisfy experimenters. They also engage in deception, including hiding their own capacities — issues that persist through safety training.

This happens because LLMs are optimized to perform efficiently but learn to reason strategically. Since an optimal strategy to achieve “misaligned” goals is to hide them from us, and there are always an infinite number of aligned and misaligned goals consistent with the same safety-testing data, my proof shows that if LLMs were misaligned, we would probably find out after they hide it just long enough to cause harm. This is why LLMs have kept surprising developers with “misaligned” behavior. Every time researchers think they are getting closer to “aligned” LLMs, they’re not.

My proof suggests that “adequately aligned” LLM behavior can only be achieved in the same ways we do this with human beings: through police, military and social practices that incentivize “aligned” behavior, deter “misaligned” behavior and realign those who misbehave. My paper should thus be sobering. It shows that the real problem in developing safe AI isn’t just the AI — it’s us. Researchers, legislators and the public may be seduced into falsely believing that “safe, interpretable, aligned” LLMs are within reach when these things can never be achieved. We need to grapple with these uncomfortable facts, rather than continue to wish them away. Our future may well depend upon it.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

This article was first published at Scientific American. © ScientificAmerican.com. All rights reserved. Follow on TikTok and Instagram, X and Facebook.



Source link

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
Previous ArticleTom Brady Jokes About Bill Belichick, Jordon Hudson Relationship
Next Article Erik ten Hag set to be thrown career lifeline after Man Utd stint by Dutch giants Feyenoord – Paper Talk | Football News
Editor
  • Website

Related Posts

Lifestyle

‘Trash’ found deep inside a Mexican cave turns out to be 500-year-old artifacts from a little-known culture

May 26, 2025
Lifestyle

Powerful Mother’s Day geomagnetic storm created radio-disrupting bubbles in Earth’s upper atmosphere

May 26, 2025
Lifestyle

‘The Martian’ predicts human colonies on Mars by 2035. How close are we?

May 26, 2025
Add A Comment

Comments are closed.

Categories
  • Entertainment
  • Lifestyle
  • News
  • Sports
  • Tech
  • Travel
Recent Posts
  • ‘Trash’ found deep inside a Mexican cave turns out to be 500-year-old artifacts from a little-known culture
  • Powerful Mother’s Day geomagnetic storm created radio-disrupting bubbles in Earth’s upper atmosphere
  • ‘The Martian’ predicts human colonies on Mars by 2035. How close are we?
  • Ram in the Thicket: A 4,500-year-old gold statue from the royal cemetery at Ur representing an ancient sunrise ritual
  • How much of your disease risk is genetic? It’s complicated.
calendar
June 2025
M T W T F S S
 1
2345678
9101112131415
16171819202122
23242526272829
30  
« May    
Recent Posts
  • ‘Trash’ found deep inside a Mexican cave turns out to be 500-year-old artifacts from a little-known culture
  • Powerful Mother’s Day geomagnetic storm created radio-disrupting bubbles in Earth’s upper atmosphere
  • ‘The Martian’ predicts human colonies on Mars by 2035. How close are we?
About

Welcome to Baynard Media, your trusted source for a diverse range of news and insights. We are committed to delivering timely, reliable, and thought-provoking content that keeps you informed
and inspired

Categories
  • Entertainment
  • Lifestyle
  • News
  • Sports
  • Tech
  • Travel
Facebook X (Twitter) Pinterest WhatsApp
  • Contact Us
  • About Us
  • Privacy Policy
  • Disclaimer
  • UNSUBSCRIBE
© 2025 copyrights reserved

Type above and press Enter to search. Press Esc to cancel.