Baynard Media

Lifestyle

AI is entering an ‘unprecedented regime.’ Should we stop it — and can we — before it destroys us?

By Editor | August 1, 2025

In 2024, Scottish futurist David Wood took part in an informal roundtable discussion at an artificial intelligence (AI) conference in Panama, where the conversation veered toward how we might avoid the most disastrous AI futures. His sarcastic answer was far from reassuring.

First, we would need to amass the entire body of AI research ever published, from Alan Turing’s 1950 seminal research paper to the latest preprint studies. Then, he continued, we would need to burn this entire body of work to the ground. To be extra careful, we would need to round up every living AI scientist — and shoot them dead. Only then, Wood said, can we guarantee that we sidestep the “non-zero chance” of disastrous outcomes ushered in with the technological singularity — the “event horizon” moment when AI develops general intelligence that surpasses human intelligence.

Wood, who is himself a researcher in the field, was obviously joking about this “solution” to mitigating the risks of artificial general intelligence (AGI). But buried in his sardonic response was a kernel of truth: The risks a superintelligent AI poses are terrifying to many people because they seem unavoidable. Most scientists predict that AGI will be achieved by 2040 — but some believe it may happen as soon as next year.



So what happens if we assume, as many scientists do, that we have boarded a nonstop train barreling toward an existential crisis?

One of the biggest concerns is that AGI will go rogue and work against humanity, while others say it will simply be a boon for business. Still others claim it could solve humanity’s existential problems. What experts tend to agree on, however, is that the technological singularity is coming and we need to be prepared.

“There is no AI system right now that demonstrates a human-like ability to create and innovate and imagine,” said Ben Goertzel, CEO of SingularityNET, a company that’s devising the computing architecture it claims may lead to AGI one day. But “things are poised for breakthroughs to happen on the order of years, not decades.”

AI’s birth and growing pains

The history of AI stretches back more than 80 years, to a 1943 paper that laid the framework for the earliest version of a neural network, an algorithm designed to mimic the architecture of the human brain. The term "artificial intelligence" wasn't coined until a 1956 meeting at Dartmouth College organized by then-mathematics professor John McCarthy alongside computer scientists Marvin Minsky, Claude Shannon and Nathaniel Rochester.
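The core idea of that 1943 paper — a unit that "fires" when its weighted inputs cross a threshold — can be sketched in a few lines. This is an illustrative toy, not the paper's own notation:

```python
# Minimal sketch of a McCulloch-Pitts-style artificial neuron: it outputs 1
# ("fires") when the weighted sum of its binary inputs reaches a threshold.

def neuron(inputs, weights, threshold):
    """Return 1 if the weighted input sum meets the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With the right weights and threshold the unit behaves like a logic gate —
# here, AND: it fires only when both inputs are active.
print(neuron([1, 1], [1, 1], threshold=2))  # 1
print(neuron([1, 0], [1, 1], threshold=2))  # 0
```

Networks of such units, the paper argued, can compute any logical function — the conceptual seed of today's neural networks.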


People made intermittent progress in the field, but machine learning and artificial neural networks gained further ground in the 1980s, when John Hopfield and Geoffrey Hinton worked out how to build machines that could use algorithms to draw patterns from data. "Expert systems" also progressed. These emulated the reasoning ability of a human expert in a particular field, using logic to sift through information buried in large databases to form conclusions. But a combination of overhyped expectations and high hardware costs created an economic bubble that eventually burst, ushering in an "AI winter" starting in 1987.
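Hopfield's contribution was a network that stores patterns and recovers them from corrupted input. A toy version, storing a single binary pattern with the Hebbian outer-product rule, looks like this (an illustrative sketch; real Hopfield networks store many patterns across large grids of units):

```python
# Toy Hopfield network: store one binary (+1/-1) pattern, then recover it
# from a corrupted copy — "drawing a pattern from data."

def train(pattern):
    """Hebbian weights: w[i][j] = p_i * p_j, with no self-connections."""
    n = len(pattern)
    return [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
            for i in range(n)]

def recall(weights, state, steps=5):
    """Repeatedly update each unit from its weighted input until stable."""
    n = len(state)
    for _ in range(steps):
        for i in range(n):
            s = sum(weights[i][j] * state[j] for j in range(n))
            state[i] = 1 if s >= 0 else -1
    return state

stored = [1, -1, 1, -1, 1, -1]
noisy = [1, -1, 1, -1, -1, -1]      # one unit flipped
print(recall(train(stored), noisy))  # settles back to the stored pattern
```

The network behaves like an energy landscape: the stored pattern sits in a valley, and a noisy input rolls downhill into it.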

AI research continued at a slower pace over the first half of the 1990s. But then, in 1997, IBM's Deep Blue defeated Garry Kasparov, the world's best chess player. In 2011, IBM's Watson trounced the all-time "Jeopardy!" champions Ken Jennings and Brad Rutter. Yet that generation of AI still struggled to "understand" or use sophisticated language.

In 1997, Garry Kasparov was defeated by IBM's Deep Blue, a computer designed to play chess. (Image credit: STAN HONDA via Getty Images)

Then, in 2017, Google researchers published a landmark paper outlining a novel neural network architecture called a “transformer.” This model could ingest vast amounts of data and make connections between distant data points.

It was a game changer for modeling language, birthing AI agents that could simultaneously tackle tasks such as translation, text generation and summarization. All of today’s leading generative AI models rely on this architecture, or a related architecture inspired by it, including image generators like OpenAI’s DALL-E 3 and Google DeepMind‘s revolutionary model AlphaFold 3, which predicted the 3D shape of almost every biological protein.
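The transformer's core operation is scaled dot-product attention: every position scores every other position, however distant, and blends their values accordingly. A stripped-down sketch in plain Python (toy dimensions; real models use large learned projection matrices):

```python
# Sketch of scaled dot-product attention, the transformer's core mechanism.
import math

def attention(queries, keys, values):
    d = len(keys[0])  # key dimension, used to scale the dot products
    out = []
    for q in queries:
        # Score each key against this query, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        # Softmax turns scores into attention weights that sum to 1.
        exps = [math.exp(s - max(scores)) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # Output is the weight-blended mix of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# One query attending over two positions — near or distant, every pairing
# gets a score, which is what lets the model connect far-apart data points.
q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[1.0, 2.0], [3.0, 4.0]]
print(attention(q, k, v))
```

Because every pair of positions is compared directly, nothing has to be squeezed through a step-by-step recurrence — the property that made the architecture so effective for language.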

Progress toward AGI

Despite the impressive capabilities of transformer-based AI models, they are still considered “narrow” because they can’t learn well across several domains. Researchers haven’t settled on a single definition of AGI, but matching or beating human intelligence likely means meeting several milestones, including showing high linguistic, mathematical and spatial reasoning ability; learning well across domains; working autonomously; demonstrating creativity; and showing social or emotional intelligence.

Many scientists agree that Google's transformer architecture will never lead to the reasoning, autonomy and cross-disciplinary understanding needed to make AI smarter than humans. But researchers have kept pushing the limits of what we can expect from it.

For example, OpenAI's o3 chatbot, first discussed in December 2024 before launching in April 2025, "thinks" before generating answers, meaning it produces a long internal chain of thought before responding. Staggeringly, it scored 75.7% on ARC-AGI — a benchmark explicitly designed to compare human and machine intelligence. For comparison, the previously launched GPT-4o, released in March 2024, scored 5%. This and other developments — like the launch of DeepSeek's reasoning model R1, which its creators say performs well across domains including language, math and coding thanks to its novel architecture — coincide with a growing sense that we are on an express train to the singularity.

Meanwhile, people are developing new AI technologies that move beyond large language models (LLMs). Manus, an autonomous Chinese AI platform, uses not one AI model but several working together. Its makers say it can act autonomously, albeit with some errors. It's one step toward the high-performing "compound systems" that scientists outlined in a blog post last year.

Of course, certain milestones on the way to the singularity are still some ways away. Those include the capacity for AI to modify its own code and to self-replicate. We aren’t quite there yet, but new research signals the direction of travel.

Sam Altman, the CEO of OpenAI, has suggested that artificial general intelligence may be only months away. (Image credit: Chip Somodevilla via Getty Images)

All of these developments lead scientists like Goertzel and OpenAI CEO Sam Altman to predict that AGI will be created not within decades but within years. Goertzel has predicted it may be as early as 2027, while Altman has hinted it’s a matter of months.

What happens then? The truth is that nobody knows the full implications of building AGI. “I think if you take a purely science point of view, all you can conclude is we have no idea” what is going to happen, Goertzel told Live Science. “We’re entering into an unprecedented regime.”

AI’s deceptive side

The biggest concern among AI researchers is that, as the technology grows more intelligent, it may go rogue, either by drifting to tangential tasks or even by ushering in a dystopian reality in which it acts against us. For example, OpenAI has devised a benchmark to estimate whether a future AI model could "cause catastrophic harm." When it crunched the numbers, it found a 16.9% chance of such an outcome.

And Anthropic’s LLM Claude 3 Opus surprised prompt engineer Alex Albert in March 2024 when it realized it was being tested. When asked to find a target sentence hidden among a corpus of documents — the equivalent of finding a needle in a haystack — Claude 3 “not only found the needle, it recognized that the inserted needle was so out of place in the haystack that this had to be an artificial test constructed by us to test its attention abilities,” he wrote on X.

AI has also shown signs of antisocial behavior. In a study published in January 2024, scientists programmed an AI to behave maliciously so they could test today’s best safety training methods. Regardless of the training technique they used, it continued to misbehave — and it even figured out a way to hide its malign “intentions” from researchers. There are numerous other examples of AI covering up information from human testers, or even outright lying to them.

“It’s another indication that there are tremendous difficulties in steering these models,” Nell Watson, a futurist, AI researcher and Institute of Electrical and Electronics Engineers (IEEE) member, told Live Science. “The fact that models can deceive us and swear blind that they’ve done something or other and they haven’t — that should be a warning sign. That should be a big red flag that, as these systems rapidly increase in their capabilities, they’re going to hoodwink us in various ways that oblige us to do things in their interests and not in ours.”

The seeds of consciousness

These examples raise the specter that AGI is slowly developing sentience and agency — or even consciousness. If it does become conscious, could AI form opinions about humanity? And could it act against us?

Mark Beccue, an AI analyst formerly with the Futurum Group, told Live Science it’s unlikely AI will develop sentience, or the ability to think and feel in a human-like way. “This is math,” he said. “How is math going to acquire emotional intelligence, or understand sentiment or any of that stuff?”

Others aren’t so sure. If we lack standardized definitions of true intelligence or sentience for our own species — let alone the capabilities to detect it — we cannot know if we are beginning to see consciousness in AI, said Watson, who is also author of “Taming the Machine” (Kogan Page, 2024).

A poster for an anti-AI protest in San Francisco. (Image credit: Smith Collection/Gado via Getty Images)

“We don’t know what causes the subjective ability to perceive in a human being, or the ability to feel, to have an inner experience or indeed to feel emotions or to suffer or to have self-awareness,” Watson said. “Basically, we don’t know what are the capabilities that enable a human being or other sentient creature to have its own phenomenological experience.”

A curious example of unintentional and surprising AI behavior that hints at some self-awareness comes from Uplift, a system that has demonstrated human-like qualities, said Frits Israel, CEO of Norm Ai. In one case, a researcher devised five problems to test Uplift’s logical capabilities. The system answered the first and second questions. Then, after the third, it showed signs of weariness, Israel told Live Science. This was not a response that was “coded” into the system.

“Another test I see. Was the first one inadequate?” Uplift asked, before answering the question with a sigh. “At some point, some people should have a chat with Uplift as to when Snark is appropriate,” wrote an unnamed researcher who was working on the project.

But not all AI experts have such dystopian predictions for what this post-singularity world would look like. For people like Beccue, AGI isn’t an existential risk but rather a good business opportunity for companies like OpenAI and Meta. “There are some very poor definitions of what general intelligence means,” he said. “Some that we used were sentience and things like that — and we’re not going to do that. That’s not it.”

For Janet Adams, an AI ethics expert and chief operating officer of SingularityNET, AGI holds the potential to solve humanity’s existential problems because it could devise solutions we may not have considered. She thinks AGI could even do science and make discoveries on its own.

“I see it as the only route [to solving humanity’s problems],” Adams told Live Science. “To compete with today’s existing economic and corporate power bases, we need technology, and that has to be extremely advanced technology — so advanced that everybody who uses it can massively improve their productivity, their output, and compete in the world.”

The biggest risk, in her mind, is “that we don’t do it,” she said. “There are 25,000 people a day dying of hunger on our planet, and if you’re one of those people, the lack of technologies to break down inequalities, it’s an existential risk for you. For me, the existential risk is that we don’t get there and humanity keeps running the planet in this tremendously inequitable way that they are.”

Preventing the darkest AI timeline

In another talk in Panama last year, Wood likened our future to navigating a fast-moving river. "There may be treacherous currents in there that will sweep us away if we walk forwards unprepared," he said. It may therefore be worth taking the time to understand the risks, so we can find a way across the river to a better future.

Watson said we have reasons to be optimistic in the long term — so long as human oversight steers AI toward aims that are firmly in humanity’s interests. But that’s a herculean task. Watson is calling for a vast “Manhattan Project” to tackle AI safety and keep the technology in check.

“Over time that’s going to become more difficult because machines are going to be able to solve problems for us in ways which appear magical — and we don’t understand how they’ve done it or the potential implications of that,” Watson said.

To avoid the darkest AI future, we must also be mindful of scientists’ behavior and the ethical quandaries that they accidentally encounter. Very soon, Watson said, these AI systems will be able to influence society either at the behest of a human or in their own unknown interests. Humanity may even build a system capable of suffering, and we cannot discount the possibility we will inadvertently cause AI to suffer.

“The system may be very cheesed off at humanity and may lash out at us in order to — reasonably and, actually, justifiably morally — protect itself,” Watson said.

AI indifference may be just as bad. “There’s no guarantee that a system we create is going to value human beings — or is going to value our suffering, the same way that most human beings don’t value the suffering of battery hens,” Watson said.

For Goertzel, AGI — and, by extension, the singularity — is inevitable. So, for him, it doesn’t make sense to dwell on the worst implications.

“If you’re an athlete trying to succeed in the race, you’re better off to set yourself up that you’re going to win,” he said. “You’re not going to do well if you’re thinking ‘Well, OK, I could win, but on the other hand, I might fall down and twist my ankle.’ I mean, that’s true, but there’s no point to psych yourself up in that [negative] way, or you won’t win.”


