New ‘Dragon Hatchling’ AI architecture modeled after the human brain could be a key step toward AGI, researchers claim

By Editor | November 13, 2025

Researchers have designed a new type of large language model (LLM) that they propose could bridge the gap between artificial intelligence (AI) and more human-like cognition.

Called “Dragon Hatchling,” the model is designed to more accurately simulate how neurons in the brain connect and strengthen through learned experience, according to researchers from AI startup Pathway, which developed the model. They described it as the first model capable of “generalizing over time,” meaning it can automatically adjust its own neural wiring in response to new information.

In a study uploaded Sept. 30 to the preprint database arXiv, the team framed the model as a successor to the transformer architecture that underpins generative AI tools like ChatGPT and Google Gemini. What’s more, they suggested the model could provide the “missing link” between today’s AI technology and more advanced, brain-inspired models of intelligence.



“There’s a lot of ongoing discussion about specifically reasoning models, synthetic reasoning models today, whether they’re able to extend reasoning beyond patterns that they have seen in retaining data, whether they’re able to generalize reasoning to more complex reasoning patterns and longer reasoning patterns,” Adrian Kosowski, co-founder and chief scientific officer of Pathway, told the SuperDataScience podcast on Oct. 7.

“The evidence is largely inconclusive, with the general ‘no’ as the answer. Currently, machines don’t generalize reasoning as humans do, and this is the big challenge where we believe [the] architectures that we are proposing may make a real difference.”

A step towards AGI?

Teaching AI to think like humans is one of the most prized goals in the field. Yet reaching this level of simulated cognition — often referred to as artificial general intelligence (AGI) — remains elusive.

A key challenge is that human thinking is inherently messy. Our thoughts rarely come to us in neat, linear sequences of connected information. Instead, the human brain is more like a chaotic tangle of overlapping thoughts, sensations, emotions and impulses constantly vying for attention.


[Image: connected lines and dots network, illustration. Credit: JESPER KLAUSEN / SCIENCE PHOTO LIBRARY / Getty Images]

In recent years, LLMs have taken the AI industry much closer to simulating human-like reasoning. LLMs are typically driven by transformer models (transformers), a type of deep learning architecture that enables AI models to make connections between words and ideas during a conversation. Transformers are the “brains” behind generative AI tools like ChatGPT, Gemini and Claude, enabling them to interact with, and respond to, users with a convincing level of “awareness” (at least, most of the time).

Although transformers are extremely sophisticated, they also mark the edge of existing generative AI capabilities. One reason is that they don’t learn continuously; once an LLM is trained, the parameters that govern it are locked, so any new knowledge must be added through retraining or fine-tuning. When an LLM encounters something new, it simply generates a response based on what it already knows.
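The frozen-parameter limitation can be sketched in a few lines. This is an illustrative toy, not Pathway’s code or any real LLM API (the class name and fields are hypothetical); the point is only that inference reads the weights without ever writing them.

```python
# Toy sketch: a trained model whose parameters are fixed at inference time.
class FrozenLLM:
    def __init__(self, weights):
        self.weights = dict(weights)  # locked after "training"

    def respond(self, prompt):
        # Inference reads the weights but never writes them, so
        # nothing in the prompt can update what the model "knows".
        return f"answer based on {len(self.weights)} frozen parameters"

model = FrozenLLM({"w1": 0.5, "w2": -0.3})
before = dict(model.weights)
model.respond("a fact the model has never seen")
assert model.weights == before  # parameters unchanged by new input
```

Retraining or fine-tuning is the only way new information enters such a model, which is exactly the gap Dragon Hatchling is meant to close.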

Imagine dragon

Dragon Hatchling, on the other hand, is designed to dynamically adapt its understanding beyond its training data. It does this by updating its internal connections in real time as it processes each new input, similar to how neurons strengthen or weaken over time. This could support ongoing learning, the researchers said.



Unlike typical transformer architectures, which process information sequentially through stacked layers of nodes, Dragon Hatchling’s architecture behaves more like a flexible web that reorganizes itself as new information comes to light. Tiny “neuron particles” continuously exchange information and adjust their connections, strengthening some and weakening others.
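The “strengthen some, weaken others” behavior described above resembles classical Hebbian plasticity (“neurons that fire together wire together”). Below is a hedged sketch of that classical rule, assuming a simple learning rate and decay term; it is not the paper’s actual update rule.

```python
# Hebbian-style sketch: each connection strengthens when both endpoint
# units are active together, and slowly decays otherwise.
def hebbian_step(weights, activity, lr=0.1, decay=0.01):
    """Update each edge weight from the co-activity of its endpoints."""
    for (i, j), w in weights.items():
        weights[(i, j)] = w + lr * activity[i] * activity[j] - decay * w
    return weights

weights = {(0, 1): 0.2, (1, 2): 0.2}
# Units 0 and 1 fire together; unit 2 stays silent.
activity = {0: 1.0, 1: 1.0, 2: 0.0}
weights = hebbian_step(weights, activity)
# Edge (0, 1) strengthens; edge (1, 2) only decays.
```

Applied at every input, a rule like this is what lets the network’s wiring, rather than a separate memory store, carry what has been learned.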

Over time, new pathways form that help the model retain what it’s learned and apply it to future situations, effectively giving it a kind of short-term memory that influences new inputs. Unlike traditional LLMs, however, Dragon Hatchling’s memory comes from continual adaptations in its architecture rather than from a fixed context window.

In tests, Dragon Hatchling performed similarly to GPT-2 on benchmark language modeling and translation tasks — an impressive feat for a brand-new, prototype architecture, the team noted in the study.

Although the paper has yet to be peer-reviewed, the team hopes the model could serve as a foundational step toward AI systems that learn and adapt autonomously. In theory, that could mean AI models that get smarter the longer they stay online — for better or worse.
