Baynard Media
Lifestyle
Scientists discover major differences in how humans and AI ‘think’ — and the implications could be significant

By Editor | April 1, 2025 | 3 min read

We know that artificial intelligence (AI) can't think the same way a person does, but new research reveals how this difference can shape AI's decision-making, with real-world ramifications humans may be unprepared for.

The study, published in February 2025 in the journal Transactions on Machine Learning Research, examined how well large language models (LLMs) can form analogies.

The researchers found that on both simple letter-string analogies and digit-matrix problems — where the task was to complete a matrix by identifying the missing digit — humans performed well but AI performance declined sharply.
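To make the digit-matrix task concrete, here is a hypothetical item in the spirit of the benchmark (the study's actual test items are not reproduced here): each complete row of a small matrix follows the same transformation, and the solver must infer that rule and supply the missing final digit.

```python
# A toy digit-matrix item: every row increases by a constant step,
# and the last cell of the last row is missing (None).
def complete(matrix):
    """Infer the per-row step from a complete row and fill the gap."""
    step = matrix[0][1] - matrix[0][0]   # constant-difference rule
    row = matrix[-1]
    missing = row.index(None)
    return row[missing - 1] + step

item = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, None],
]
print(complete(item))  # -> 9
```

A human solves items like this almost instantly by abstracting the "add a constant" rule; the study's point is that LLM accuracy drops sharply as such items depart from familiar training patterns.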

While testing the robustness of humans and AI models on story-based analogy problems, the study found the models were susceptible to answer-order effects — differences in responses due to the order of treatments in an experiment — and may have also been more likely to paraphrase.

Altogether, the study concluded that AI models lack true "zero-shot" learning abilities — the capacity to observe samples from classes that weren't present during training and still correctly predict which class they belong to.


Co-author of the study Martha Lewis, assistant professor of neurosymbolic AI at the University of Amsterdam, gave an example of how AI can’t perform analogical reasoning as well as humans in letter string problems.


“Letter string analogies have the form of ‘if abcd goes to abce, what does ijkl go to?’ Most humans will answer ‘ijkm’, and [AI] tends to give this response too,” Lewis told Live Science. “But another problem might be ‘if abbcd goes to abcd, what does ijkkl go to?’ Humans will tend to answer ‘ijkl’ – the pattern is to remove the repeated element. But GPT-4 tends to get problems [like these] wrong.”
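Lewis's two letter-string tasks are easy to state programmatically. The toy solver below is an illustrative sketch (not code from the study): it checks which of the two rules she describes — advance the changed letter to its successor, or remove the repeated element — explains the source pair, then applies that rule to the target string.

```python
def successor_rule(s: str) -> str:
    """Advance the final letter by one, as in 'abcd' -> 'abce'."""
    return s[:-1] + chr(ord(s[-1]) + 1)

def remove_repeated_rule(s: str) -> str:
    """Drop the first letter that repeats its predecessor, 'abbcd' -> 'abcd'."""
    for i in range(1, len(s)):
        if s[i] == s[i - 1]:
            return s[:i] + s[i + 1:]
    return s

def solve(a: str, b: str, c: str) -> str:
    """Given 'a goes to b', infer which rule was applied and apply it to c."""
    if successor_rule(a) == b:
        return successor_rule(c)
    if remove_repeated_rule(a) == b:
        return remove_repeated_rule(c)
    raise ValueError("no known rule maps a to b")

print(solve("abcd", "abce", "ijkl"))    # -> ijkm
print(solve("abbcd", "abcd", "ijkkl"))  # -> ijkl
```

The second case is the one Lewis highlights: a person spots "remove the repeated element" and answers ijkl, while GPT-4 tends to fall back on the more familiar successor pattern and get it wrong.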

Why it matters that AI can’t think like humans

Lewis said that while we can abstract from specific patterns to more general rules, LLMs don’t have that capability. “They’re good at identifying and matching patterns, but not at generalizing from those patterns.”

Most AI applications rely to some extent on volume — the more training data is available, the more patterns are identified. But Lewis stressed pattern-matching and abstraction aren’t the same thing. “It’s less about what’s in the data, and more about how data is used,” she added.

To give a sense of the implications, AI is increasingly used in the legal sphere for research, case law analysis and sentencing recommendations. But with a lower ability to make analogies, it may fail to recognize how legal precedents apply to slightly different cases when they arise.

Because this lack of robustness can affect real-world outcomes, the study argued that AI systems need to be carefully evaluated not just for accuracy but also for the robustness of their cognitive capabilities.

