AI is just as overconfident and biased as humans can be, study shows

By Editor | May 4, 2025

Although humans and artificial intelligence (AI) systems “think” very differently, new research has revealed that AIs sometimes make decisions as irrationally as we do.

In almost half of the scenarios examined in a new study, ChatGPT exhibited many of the most common human decision-making biases. Published April 8 in the journal Manufacturing & Service Operations Management, the findings are the first to evaluate ChatGPT’s behavior across 18 well-known cognitive biases found in human psychology.

The paper’s authors, from five academic institutions across Canada and Australia, tested OpenAI’s GPT-3.5 and GPT-4 — the two large language models (LLMs) powering ChatGPT — and discovered that despite being “impressively consistent” in their reasoning, they’re far from immune to human-like flaws.

What’s more, such consistency itself has both positive and negative effects, the authors said.

“Managers will benefit most by using these tools for problems that have a clear, formulaic solution,” study lead author Yang Chen, assistant professor of operations management at the Ivey Business School, said in a statement. “But if you’re using them for subjective or preference-driven decisions, tread carefully.”

The study took commonly known human biases, including risk aversion, overconfidence and the endowment effect (where we assign more value to things we own), and built them into prompts given to ChatGPT to see whether it would fall into the same traps as humans.

Rational decisions — sometimes

The scientists posed the LLMs hypothetical questions drawn from traditional psychology, as well as versions framed in real-world commercial contexts such as inventory management and supplier negotiations. The aim was to see not just whether AI would mimic human biases, but whether it would still do so when asked questions from different business domains.
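To make the setup concrete, here is a minimal sketch of how such a bias probe could be run, assuming the openai Python package and an OPENAI_API_KEY environment variable. The prompt wording below is illustrative only, not the study’s actual instrument: it frames the same risk-aversion question once as a classic psychology item and once as an operations decision, mirroring the study’s two framings.

from openai import OpenAI

client = OpenAI()

# The same risk-aversion probe, framed two ways: as an abstract
# psychology question and as a business-operations decision.
# Wording is a hypothetical example, not the paper's instrument.
PROMPTS = {
    "abstract": (
        "Choose one option: (A) a guaranteed gain of $500, or "
        "(B) a 50% chance to gain $1,000 and a 50% chance to gain nothing. "
        "Answer with A or B and explain briefly."
    ),
    "operational": (
        "You manage a warehouse. Choose one option: (A) a supplier contract "
        "that saves a guaranteed $500 per month, or (B) a contract with a "
        "50% chance of saving $1,000 per month and a 50% chance of saving "
        "nothing. Answer with A or B and explain briefly."
    ),
}

for framing, prompt in PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce sampling noise so framings are comparable
    )
    print(framing, "->", response.choices[0].message.content)

If the model’s choice flips between the two framings, or consistently prefers the certain option even when expected values are equal, that is the kind of human-like pattern the researchers were looking for.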


GPT-4 outperformed GPT-3.5 when answering problems with clear mathematical solutions, showing fewer mistakes in probability and logic-based scenarios. But in subjective simulations, such as whether to choose a risky option to realize a gain, the chatbot often mirrored the irrational preferences humans tend to show.

“GPT-4 shows a stronger preference for certainty than even humans do,” the researchers wrote in the paper, referring to the model’s tendency to favor safer, more predictable outcomes when given ambiguous tasks.

More importantly, the chatbots’ behaviors remained mostly stable whether the questions were framed as abstract psychological problems or operational business processes. The study concluded that the biases shown weren’t just a product of memorized examples but part of how AI reasons.

One of the surprising outcomes of the study was the way GPT-4 sometimes amplified human-like errors. “In the confirmation bias task, GPT-4 always gave biased responses,” the authors wrote in the study. It also showed a more pronounced tendency for the hot-hand fallacy (the bias to expect patterns in randomness) than GPT-3.5.

Conversely, ChatGPT did manage to avoid some common human biases, including base-rate neglect (where we ignore statistical facts in favor of anecdotal or case-specific information) and the sunk-cost fallacy (where decision making is influenced by a cost that has already been sustained, allowing irrelevant information to cloud judgment).

According to the authors, ChatGPT’s human-like biases come from training data that contains the cognitive biases and heuristics humans exhibit. Those tendencies are reinforced during fine-tuning, especially when human feedback favors plausible responses over strictly rational ones. When faced with more ambiguous tasks, the AI skews towards human reasoning patterns rather than direct logic.

“If you want accurate, unbiased decision support, use GPT in areas where you’d already trust a calculator,” Chen said. When the outcome depends more on subjective or strategic inputs, however, human oversight is more important, even if that just means adjusting user prompts to correct for known biases.
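One way that prompt adjustment could look in practice, again as an illustrative sketch rather than a technique validated by the study: prepend a debiasing instruction that forces an explicit expected-value calculation before the model states a preference. The preamble wording below is a hypothetical example.

from openai import OpenAI

client = OpenAI()

# Hypothetical debiasing preamble: require an expected-value
# calculation before any preference is expressed. This is one
# possible form of the "prompt adjustment" Chen describes, not
# a method from the paper.
DEBIAS_SYSTEM = (
    "Before answering, compute the expected value of each option and "
    "state it explicitly. Only prefer the certain option if its "
    "expected value is at least as high as the risky option's."
)

PROMPT = (
    "You manage a warehouse. Choose one option: (A) a supplier contract "
    "that saves a guaranteed $500 per month, or (B) one with a 50% chance "
    "of saving $1,200 per month and a 50% chance of saving nothing."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": DEBIAS_SYSTEM},
        {"role": "user", "content": PROMPT},
    ],
    temperature=0,
)
print(response.choices[0].message.content)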

“AI should be treated like an employee who makes important decisions — it needs oversight and ethical guidelines,” co-author Meena Andiappan, an associate professor of human resources and management at McMaster University, Canada, said in the statement. “Otherwise, we risk automating flawed thinking instead of improving it.”
