Robots receive major intelligence boost thanks to Google DeepMind’s ‘thinking AI’ — a pair of models that help machines understand the world

By Editor | October 10, 2025

Google DeepMind has unveiled a pair of artificial intelligence (AI) models that will enable robots to perform complex general tasks and reason in a way that was previously impossible.

Earlier this year, the company revealed the first iteration of Gemini Robotics, an AI model based on its Gemini large language model (LLM) — but specialized for robotics. This allowed machines to reason and perform simple tasks in physical spaces.

The new models, dubbed Gemini Robotics 1.5 and Gemini Robotics-ER 1.5, greatly expand the original version's capabilities to handle multistep, “long-horizon” tasks, and they mark a significant milestone toward robots assisting people in real-world settings.

The baseline example Google points to is the banana test. The original AI model was capable of receiving a simple instruction like “place this banana in the basket,” and guiding a robotic arm to complete that command.

Powered by the two new models, a robot can now take a selection of fruit and sort them into individual containers based on color. In one demonstration, a pair of robotic arms (the company’s Aloha 2 robot) accurately sorts a banana, an apple and a lime onto three plates of the appropriate color. Further, the robot explains in natural language what it’s doing and why as it performs the task.

Video: “Gemini Robotics 1.5: Thinking while acting” (YouTube)

“We enable it to think,” said Jie Tan, a senior staff research scientist at DeepMind, in the video. “It can perceive the environment, think step-by-step and then finish this multistep task. Although this example seems very simple, the idea behind it is really powerful. The same model is going to power more sophisticated humanoid robots to do more complicated daily tasks.”

AI-powered robotics of tomorrow

While the demonstration may seem simple on the surface, it demonstrates a number of sophisticated capabilities. The robot can spatially locate the fruit and the plates, identify the fruit and the color of all of the objects, match the fruit to the plates according to shared characteristics and provide a natural language output describing its reasoning.
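The matching step at the heart of the demonstration can be sketched as a simple lookup. This is a minimal illustration only, assuming hypothetical perception outputs (the object names and detected colors); in the real system those would come from the vision model, not a hard-coded dictionary.

```python
# Hypothetical perception outputs: detected object identities and colors.
fruit_colors = {"banana": "yellow", "apple": "red", "lime": "green"}
plate_colors = {"plate_1": "yellow", "plate_2": "red", "plate_3": "green"}

def match_fruit_to_plates(fruits: dict, plates: dict) -> dict:
    """Match each fruit to the plate that shares its color."""
    color_to_plate = {color: plate for plate, color in plates.items()}
    return {fruit: color_to_plate[color] for fruit, color in fruits.items()}

assignments = match_fruit_to_plates(fruit_colors, plate_colors)
print(assignments)  # {'banana': 'plate_1', 'apple': 'plate_2', 'lime': 'plate_3'}
```

The hard part for the robot is producing those dictionaries from raw camera input; once the scene is grounded, the matching itself is trivial.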

It’s all possible because of the way the newest iterations of the AI models interact. They work together in much the same way a supervisor and worker do.

Gemini Robotics-ER 1.5 (the “brain”) is a vision-language model (VLM) that gathers information about a space and the objects within it, processes natural language commands, and can use advanced reasoning and tools to send instructions to Gemini Robotics 1.5 (the “hands and eyes”), a vision-language-action (VLA) model. Gemini Robotics 1.5 matches those instructions to its visual understanding of the space and builds a plan before executing them, providing feedback about its process and reasoning throughout.
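The supervisor/worker split described above can be sketched as a planner that decomposes a command into sub-steps and an executor that carries each one out and reports back. `Planner` and `Executor` here are hypothetical stand-ins for the two models, not the real API; the hard-coded steps only illustrate the control flow.

```python
class Planner:
    """Stand-in for the 'brain': decomposes a command into ordered sub-steps."""
    def plan(self, command: str) -> list[str]:
        # A real VLM would reason over images and text; we hard-code the steps.
        return [f"locate target for: {command}",
                f"grasp object for: {command}",
                f"place object for: {command}"]

class Executor:
    """Stand-in for the 'hands and eyes': performs one sub-step, reports back."""
    def execute(self, step: str) -> str:
        return f"done: {step}"

def run_task(command: str) -> list[str]:
    planner, executor = Planner(), Executor()
    feedback = []
    for step in planner.plan(command):          # supervisor issues instructions
        feedback.append(executor.execute(step)) # worker executes and reports
    return feedback

log = run_task("put the banana on the yellow plate")
```

The design point is the separation of concerns: the planner never touches actuators, and the executor never sees the original high-level goal, only the current sub-step.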

The two models are more capable than previous versions and can use tools like Google Search to complete tasks.

The team demonstrated this capacity by having a researcher ask Aloha to use recycling rules based on her location to sort some objects into compost, recycling and trash bins. The robot recognized that the user was located in San Francisco and found recycling rules on the internet to help it accurately sort trash into the appropriate receptacles.
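The tool-use pattern in the recycling demo can be sketched as a lookup step followed by a sorting step. The `lookup_recycling_rules` function is a hypothetical stand-in for a web-search tool call, and the rules and items are illustrative, not real municipal data.

```python
def lookup_recycling_rules(city: str) -> dict:
    # Stand-in for a search-tool call; real rules vary by municipality.
    rules = {"San Francisco": {"banana peel": "compost",
                               "soda can": "recycling",
                               "chip bag": "trash"}}
    return rules.get(city, {})

def sort_items(city: str, items: list[str]) -> dict:
    """Map each item to a bin using the looked-up local rules."""
    rules = lookup_recycling_rules(city)
    return {item: rules.get(item, "trash") for item in items}  # default: trash

bins = sort_items("San Francisco", ["banana peel", "soda can", "chip bag"])
```

What made the demo notable is that the model inferred the user's location and fetched the rules itself rather than relying on anything baked in at training time.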

Another advance represented in the new models is the ability to learn, and apply that learning, across multiple robotics systems. DeepMind representatives said in a statement that learning gleaned on its Aloha 2 robot (the pair of robotic arms), Apollo humanoid robot or bi-arm Franka robot can be applied to any of the other systems, thanks to the generalized way the models learn and evolve.

“General-purpose robots need a deep understanding of the physical world, advanced reasoning, and general and dexterous control,” the Gemini Robotics Team said in a technical report on the new models. That kind of generalized reasoning means that the models can approach a problem with a broad understanding of physical spaces and interactions and problem-solve accordingly, breaking tasks down into small, individual steps that can be easily executed. This contrasts with earlier approaches, which relied on specialized knowledge that only applied to very specific, narrow situations and individual robots.

The scientists provided an additional example of how robots could help in a real-world scenario. They presented an Apollo robot with two bins and asked it to sort clothes by color, with whites going into one bin and other colors into the other. They then introduced a further hurdle mid-task by moving the clothes and bins around, forcing the robot to reevaluate the physical space and react accordingly, which it did successfully.

