Google AI breakthrough means chatbots use six times less memory during conversations without compromising performance

By Editor | May 1, 2026

Google engineers have developed a method to compress artificial intelligence (AI) data so that it requires up to six times less working memory to function.

With the new system, called TurboQuant, AI algorithms could retain the same amount of information and perform equally powerful computations, but with significantly less memory hardware, the company says.

Current AI algorithms need a lot of working memory, known as the key-value (KV) cache, to work properly. This is where intermediate computational results and other pieces of information are stored temporarily during active processing.


For example, if you ask ChatGPT what the weather will be like tomorrow in your area, it may store words like “weather” and “tomorrow,” along with your location and partial guesses, like “It might be rainy,” in the KV cache while it generates its response. The larger an AI model’s KV cache is, the more information it can keep track of at once and the more powerful it is.
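
To make this concrete, here is a minimal sketch of a KV cache inside a single attention head during token-by-token generation. The dimensions, names and numpy implementation are illustrative assumptions, not ChatGPT's internals:

```python
import numpy as np

d = 64                     # head dimension (assumed size)
k_cache, v_cache = [], []  # the KV cache: grows by one entry per token

def attend(q, k_new, v_new):
    """Cache the new token's key/value, then attend over the full history."""
    k_cache.append(k_new)
    v_cache.append(v_new)
    K = np.stack(k_cache)            # (tokens_so_far, d)
    V = np.stack(v_cache)            # (tokens_so_far, d)
    scores = K @ q / np.sqrt(d)      # compare the query with every cached key
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()         # softmax over the history
    return weights @ V               # weighted mix of cached values

for _ in range(3):                   # each step reuses everything cached so far
    q, k, v = np.random.randn(3, d)
    out = attend(q, k, v)
```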

A single sentence uses only a few dozen tokens — the building blocks of AI prompts and output text — but storing hundreds of thousands of tokens in the KV cache for more sophisticated work can require tens of gigabytes of memory. These memory requirements scale linearly with the number of users, and ChatGPT is known to receive billions of requests every day.
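
A back-of-envelope calculation shows where those gigabytes come from. The layout below assumes a Llama-3.1-8B-style configuration (32 layers, 8 KV heads, head dimension 128, 16-bit values), so the exact figures are illustrative:

```python
layers, kv_heads, head_dim, bytes_per_value = 32, 8, 128, 2
per_token = 2 * layers * kv_heads * head_dim * bytes_per_value  # keys + values
tokens = 128_000                                                # a long context
print(per_token * tokens / 2**30)  # ~15.6 GiB of KV cache for one such user
```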

The compression algorithm will decrease the amount of working memory an AI model needs to perform the same computations. It does so via a process called quantization, which represents stored values using fewer bits.
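
In its simplest form, quantization trades a little precision for a lot of memory. The sketch below rounds 32-bit floats to 8-bit integers plus one shared scale factor, a roughly fourfold saving; it illustrates the general principle, not TurboQuant's specific scheme:

```python
import numpy as np

x = np.random.randn(1024).astype(np.float32)   # original 4-byte values
scale = np.abs(x).max() / 127.0                # one shared scale per block
q = np.round(x / scale).astype(np.int8)        # 1-byte codes
x_hat = q.astype(np.float32) * scale           # dequantize when needed
print(np.abs(x - x_hat).max())                 # only a small rounding error
```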

Although Google has been using quantization on its neural networks for many years, it has typically been applied statically — that is, the compression is done once and doesn't change as the model runs. The difference with TurboQuant is that it reduces the KV cache's memory in real time, a tricky feat given that it must keep the quantized data in the cache accurate and up to date while the model generates outputs.
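
The sketch below shows the "real time" idea in miniature: every key or value vector is quantized the moment it is written, so the cache never holds full-precision data. It mirrors the concept described above using an assumed 8-bit code, not TurboQuant's actual algorithm:

```python
import numpy as np

class QuantizedKVCache:
    """Stores only 8-bit codes plus one scale factor per vector."""
    def __init__(self):
        self.codes, self.scales = [], []

    def append(self, vec):
        scale = np.abs(vec).max() / 127.0 + 1e-12   # avoid division by zero
        self.codes.append(np.round(vec / scale).astype(np.int8))
        self.scales.append(scale)

    def read_all(self):
        # Dequantize on demand; full precision is reconstructed, never stored.
        return np.stack([c.astype(np.float32) * s
                         for c, s in zip(self.codes, self.scales)])

cache = QuantizedKVCache()
for _ in range(5):
    cache.append(np.random.randn(64))
history = cache.read_all()   # (5, 64) array rebuilt from 1-byte codes
```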

In a statement, Google representatives said TurboQuant "showed great promise for reducing key-value bottlenecks without sacrificing AI model performance" in tests on Meta's Llama 3.1-8B, Google's Gemma and Mistral AI models.

“This has potentially profound implications for all compression-reliant use cases, including and especially in the domains of search and AI,” they added.

Is this Google’s “DeepSeek moment”?

Google says TurboQuant could reduce the KV cache's size by a factor of at least six, using two methods: PolarQuant and Quantized Johnson-Lindenstrauss (QJL).


To understand these methods, it helps to know that data in the AI's working memory has been turned into vectors: groups of numbers that have a defined size (a radius) and direction (an angle). Vectors can be mathematically "rotated," meaning they are re-expressed in a different, common coordinate system.
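
A two-dimensional example makes the rotation idea concrete: re-expressing a vector in rotated coordinates changes its numbers but not its length, which is why rotation is a safe preprocessing step before compression (the real methods work in much higher dimensions):

```python
import numpy as np

theta = np.pi / 4                              # rotate by 45 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
v = np.array([3.0, 4.0])
w = R @ v                                      # same vector, new coordinates
print(np.linalg.norm(v), np.linalg.norm(w))    # both 5.0: length is preserved
```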

PolarQuant re-expresses AI data from Cartesian coordinates (positions along X, Y and Z axes) into polar coordinates (a radius plus angles around a single point). The rotation aligns the angles of the vectors more consistently, allowing them to be compressed into fewer bits with less additional scaling information. The vectors then go through the QJL optimization step, where they are adjusted very slightly to correct any computational errors stemming from the quantization.
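
Here is the polar-coordinate idea in miniature: split a two-dimensional piece of a vector into a radius and an angle, then spend the scarce bits on the angle. This is a loose sketch of the description above, with an assumed 4-bit budget; the published PolarQuant and QJL methods are considerably more involved:

```python
import numpy as np

v = np.array([0.6, -0.8])                 # a 2-D slice of a cached vector
r = np.hypot(v[0], v[1])                  # radius (size)
phi = np.arctan2(v[1], v[0])              # angle, in [-pi, pi]

bits = 4                                  # assumed budget per angle
levels = 2**bits
code = int(np.round((phi + np.pi) / (2 * np.pi) * (levels - 1)))  # 4-bit code
phi_hat = code / (levels - 1) * 2 * np.pi - np.pi                 # decoded angle
v_hat = r * np.array([np.cos(phi_hat), np.sin(phi_hat)])
print(v, v_hat)                           # close match from far fewer bits
```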

In a post on the social media platform X, Matthew Prince, CEO of web security company Cloudflare, called the compression breakthrough "Google's DeepSeek," a reference to the Chinese firm's surprise release of an AI model that achieved results comparable to leading chatbots at a fraction of the cost.

Google’s March 24 unveiling of TurboQuant sent stocks in memory companies like SanDisk, Western Digital and Seagate plummeting. But although the discovery could prove pivotal in improving AI efficiency, it is still at the lab stage and has yet to be widely rolled out in real-world models.

Moreover, TurboQuant compresses only the working memory used during inference, the phase when a model is generating a response to a prompt. Training a model typically requires up to four times more memory than inference does, so the overall impact on memory demand will be relatively small.
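
The caveat is easy to quantify with rough numbers. If the KV cache accounts for, say, a quarter of the memory a deployment uses (an assumed share, purely for illustration), shrinking it sixfold saves far less than a factor of six overall:

```python
kv_share, rest = 0.25, 0.75        # assumed split of total memory
total_after = kv_share / 6 + rest  # KV cache shrinks 6x; the rest is untouched
print(1 / total_after)             # ~1.26x overall saving, not 6x
```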

This is the point Merrill Lynch analyst Vivek Arya made to concerned investors in a note, according to ZDNet: "(The) 6x improvement in memory efficiency [will] likely [lead] to 6x increase in accuracy (model size) and/or context length (KV cache allocation), rather than 6x decrease in memory."

Google officially unveiled TurboQuant at ICLR 2026, which took place April 23-27 in Rio de Janeiro, and will formally present PolarQuant and QJL at AISTATS 2026 in Tangier, Morocco, in early May.
