Baynard Media
Lifestyle

Acing this new AI exam — which its creators say is the toughest in the world — might point to the first signs of AGI

By Editor | February 27, 2026 | 4 min read

Researchers at the Center for AI Safety and Scale AI have published “Humanity’s Last Exam” — a test designed to measure how close today’s most powerful artificial intelligence (AI) models are to meeting or exceeding human-level knowledge across several domains.

The test was launched in January 2025, but scientists outlined the framework and their thinking behind its design for the first time in a new study published Jan. 28 in the journal Nature. It contains a corpus of 2,500 questions across more than 100 subjects, with input from more than 1,000 subject-matter experts from 500 institutions across 50 countries.

The exam consists of multiple-choice and short-answer questions, each of which has a known solution that is “unambiguous and easily verifiable but cannot be quickly answered by internet retrieval.”

At launch, the researchers tested OpenAI’s GPT-4o and o1 models, Google’s Gemini 1.5 Pro, Anthropic’s Claude 3.5 Sonnet and DeepSeek R1. OpenAI’s o1 system notched the top spot with a score of just 8.3%.

Despite this poor performance, the researchers wrote at the time that “given the rapid pace of AI development, it is plausible that models could exceed 50% accuracy on HLE by the end of 2025.”

As of Feb. 12, 2026, the highest score achieved so far is 48.4%, set by Google’s Gemini 3 Deep Think. Human experts, meanwhile, score around 90% in their respective domains.

Testing the smartest machines in the world

Humanity’s Last Exam was intentionally designed to be extremely difficult for AI models. During early development, the researchers put out a global call for submissions from subject-matter experts across numerous domains.

The researchers enforced strict submission criteria requiring questions to be precise, unambiguous, solvable and non-searchable. They didn't want models to cheat with a simple web search, nor for any question to already appear online, which would raise the likelihood that a given model had the answer in its training data.

Each question submitted was then fed to the AI models. The team automatically rejected any questions the models could answer correctly.

More than 70,000 questions were submitted, of which roughly 13,000 stumped the models. These were then vetted by a team of subject-matter experts, approved by the research team and presented to the scientific community for open feedback.
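The funnel described above — open submissions, automatic rejection of any question the models can already answer, then expert vetting down to the final corpus — can be sketched in a few lines of Python. The function names and the dictionary shape here are illustrative assumptions, not the researchers' actual tooling.

```python
# Hypothetical sketch of HLE's submission-filtering funnel:
# ~70,000 submissions -> ~13,000 model-stumping questions -> 2,500 final.

def model_filter(submissions, models):
    """Keep only questions that every tested model answers incorrectly."""
    stumpers = []
    for q in submissions:
        # A question survives only if no model reproduces the known answer.
        if not any(model(q["question"]) == q["answer"] for model in models):
            stumpers.append(q)
    return stumpers

def build_exam(submissions, models, expert_review, target_size=2500):
    """Chain the reported stages: model filter, expert vetting, final cut."""
    stumpers = model_filter(submissions, models)        # auto-reject solvable ones
    vetted = [q for q in stumpers if expert_review(q)]  # subject-matter vetting
    return vetted[:target_size]                         # final curated corpus
```

The key design choice mirrored here is that the model filter runs before any human review: a question an existing model can already answer is discarded automatically, so expert time is spent only on candidates that genuinely stump the systems being benchmarked.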

Ultimately, the researchers narrowed the total submissions down to 2,500 questions that generally fall within the realm of PhD-level testing.

An example of a trivia question in the exam is: “In Greek mythology, who was Jason’s maternal great-grandfather?”

Meanwhile, an example of a physics question asks for the relationship between different forces during motion in a scenario where a block is placed on a horizontal rail (and can slide frictionlessly) while also being attached to a rigid, massless rod of an unknown length.

The breadth of questions and scope of subjects covered by Humanity’s Last Exam sets it apart from similar benchmarking tools, its creators say.

Common tests, such as the Massive Multitask Language Understanding (MMLU) dataset (co-authored by Center for AI Safety founder Dan Hendrycks), cover only a small subset of expert-level domain knowledge, focusing primarily on coding and mathematics.

Even state-of-the-art benchmarks such as François Chollet's ARC-AGI suite struggle with the memorization and searchability problems that the creators of Humanity's Last Exam say their test addresses. Gemini's Deep Think, for example, achieved 84.6% on the ARC-AGI-2 benchmark just a week after failing to reach 50% on HLE.

The ultimate prize is general intelligence

Humanity’s Last Exam likely represents the AI world’s best attempt to date at measuring the broad-spectrum capabilities of modern AI models relative to human experts, but the study’s authors categorically state that achieving a high score on the HLE is in no way indicative of the arrival of artificial general intelligence (AGI).

“High accuracy on HLE would demonstrate expert-level performance on closed-ended, verifiable questions and cutting-edge scientific knowledge, but it would not alone suggest autonomous research capabilities or artificial general intelligence,” the scientists said in the study.

“Doing well on HLE is a necessary, but not a sufficient criterion to say that machines have reached true intelligence,” Manuel Schottdorf, a neuroscientist at the University of Delaware’s Department of Psychological and Brain Sciences, said in a recent statement. Schottdorf is one of the many experts whose question was accepted into the HLE’s corpus.

“They will have to be good enough to solve these questions, but that as a fact alone can’t allow us to conclude that machines are truly intelligent.”
