Baynard Media

Scientists made AI agents ruder — and they performed better at complex reasoning tasks

By Editor | February 28, 2026

When artificial intelligence (AI) is allowed to behave more like a human communicator, it becomes a more effective debate partner that reaches more accurate conclusions, scientists have found.

Human communication is full of stops and starts, impassioned interruptions, unsure silences and ambiguity. AI, on the other hand, adheres to the formal communication style of computers — processing a command, formulating a response, delivering the output, and waiting patiently for the next command.

“Current multi-agent systems often feel artificial because they lack the messy, real-time dynamics of human conversation,” study co-author Yuichi Sei, a professor in the Department of Informatics at the University of Electro-Communications in Tokyo, Japan, said in a statement. “We wanted to see if giving agents the social cues we take for granted, like the ability to interrupt or the choice to stay quiet, would improve their collective intelligence.”


Sei and his colleagues proposed a framework in which large language models (LLMs) didn’t have to adhere to the back-and-forth, wait-your-turn nature of computerized communication. Instead, an LLM could be assigned a personality that let it speak out of turn, cut off other speakers, or remain silent.

Beyond creating more humanlike methods of AI communication, the researchers found that such flexibility led to higher accuracy on complex tasks compared with that of standard LLMs.

A host of personalities

The team started by integrating traits into the LLMs according to the “big five” personality model from classical psychology: openness, conscientiousness, extraversion, agreeableness and neuroticism.
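How the traits were injected into the models isn't described in the article; a minimal sketch, assuming they are rendered into each agent's system prompt, might look like the following. The trait names come from the article, but `personality_prompt`, the prompt wording and the 0.0–1.0 scale are all illustrative assumptions.

```python
# Sketch (not the authors' code): render "big five" trait levels into a
# system-prompt fragment for a discussion agent. Scale and wording are
# illustrative assumptions.
BIG_FIVE = ["openness", "conscientiousness", "extraversion",
            "agreeableness", "neuroticism"]

def personality_prompt(traits: dict[str, float]) -> str:
    """Render trait levels as a prompt fragment; unset traits default to 0.5."""
    lines = [f"- {name}: {traits.get(name, 0.5):.1f}" for name in BIG_FIVE]
    return "You are a discussion agent with this personality profile:\n" + "\n".join(lines)

# Example: a highly extraverted, low-agreeableness agent, i.e. one more
# inclined to speak out of turn.
print(personality_prompt({"extraversion": 0.9, "agreeableness": 0.2}))
```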

The next step was to reprogram the text-based LLMs to process responses sentence by sentence, rather than generating a full response before the next one started, which allowed the researchers to carefully control the flow of discussion. They also compared results across three conversational settings: fixed speaking order, dynamic speaking order, and dynamic speaking order with interruption enabled. In the last setting, each model computed an “urgency score” that let it track the conversation in real time and decide when to cut in.
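The three settings can be sketched as a single speaker-selection function. This is a hedged illustration, not the paper's implementation: the function name, the random floor-taking in the dynamic setting, and the urgency threshold are all assumptions.

```python
# Illustrative sketch of the three conversational settings described above.
import random

def next_speaker(agents, mode, turn, urgency=None, threshold=0.8):
    """Pick the next agent under one of the three settings."""
    urgency = urgency or {}
    if mode == "fixed":        # wait-your-turn, round-robin order
        return agents[turn % len(agents)]
    if mode == "dynamic":      # any agent may take the floor
        return random.choice(agents)
    if mode == "interrupt":    # dynamic order plus urgency override:
        # an agent whose urgency spikes cuts in regardless of turn order
        urgent = [a for a in agents if urgency.get(a, 0.0) >= threshold]
        return urgent[0] if urgent else random.choice(agents)
    raise ValueError(f"unknown mode: {mode}")

agents = ["A", "B", "C"]
assert next_speaker(agents, "fixed", 4) == "B"   # 4 % 3 -> second agent
assert next_speaker(agents, "interrupt", 0, urgency={"C": 0.95}) == "C"
```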

The urgency score shaped the conversation in several ways. If it spiked because the model spotted an error or a point it considered critical to the discussion, the model could raise the issue immediately, regardless of whose turn it was to speak. If the urgency score was low, the model interpreted this as having nothing concrete to add and stayed silent, which reduced conversational “clutter.”
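The behavior just described can be captured as a small decision rule, under stated assumptions: the article doesn't give concrete thresholds, so `interrupt_at` and `speak_at` below are illustrative, as is the function itself.

```python
# A hedged sketch of how an urgency score might gate an agent's behavior:
# a spike triggers an immediate interruption, a low score means staying
# silent. Threshold values are illustrative assumptions.
def decide_action(urgency: float, my_turn: bool,
                  interrupt_at: float = 0.8, speak_at: float = 0.3) -> str:
    if urgency >= interrupt_at:        # spotted an error or critical point
        return "interrupt"             # speak regardless of turn order
    if my_turn and urgency >= speak_at:
        return "speak"
    return "stay_silent"               # nothing concrete to add

assert decide_action(0.9, my_turn=False) == "interrupt"
assert decide_action(0.5, my_turn=True) == "speak"
assert decide_action(0.1, my_turn=True) == "stay_silent"
```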

Sei told Live Science that the team evaluated performance using 1,000 questions from the Massive Multitask Language Understanding (MMLU) benchmark — an AI reasoning test encompassing questions from different areas, including science and humanities.

“When one agent initially gave an incorrect answer, overall accuracy was 68.7% with fixed-order discussion, 73.8% with dynamic order, and 79.2% when interruption was allowed,” Sei said. “In a more difficult setting where two agents initially gave incorrect answers, accuracy was 37.2% with fixed order, 43.7% with dynamic order, and 49.5% with interruption enabled.”

Having shown that the personality-driven models were more accurate than traditional AI chatbots, Sei now wants to explore how these findings can be applied in practice. The team plans to apply them to domains involving creative collaboration, to understand how “digital personalities” play out in group decision-making.

“In the future, AI agents will increasingly interact with one another and with humans in collaborative settings,” said Sei. “Our findings suggest that discussions shaped by personality, including the ability to interrupt when necessary, may sometimes produce better outcomes than strictly turn-based and uniformly polite exchanges.”
