Baynard Media
Lifestyle

MIT’s human-like AI can control any robot and gain physical awareness using just a single camera

By Editor | July 18, 2025

Scientists at MIT have developed a novel vision-based artificial intelligence (AI) system that can teach itself how to control virtually any robot without the use of sensors or pretraining.

The system gathers data about a given robot’s architecture using cameras, in much the same way that humans use their eyes to learn about themselves as they move.

This allows the AI controller to develop a self-learning model for operating any robot — essentially giving machines a humanlike sense of physical self-awareness.


Researchers achieved this breakthrough by creating a new control paradigm that uses a camera's video stream to build the robot's "visuomotor Jacobian field": a mapping from the machine's visible 3D points to its actuators.

The AI model can then predict precise motor movements. This makes it possible to turn unconventional robot architectures, such as soft robots and machines built from flexible materials, into autonomous units with only a few hours of training.
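
As a rough illustration of the idea (a toy sketch, not the researchers' code; the array shapes, point and actuator counts, and the purely linear form are all invented), a Jacobian field can be pictured as a per-point matrix that turns an actuator command into predicted 3D motion, which can then be inverted by least squares to choose a command:

```python
import numpy as np

# Toy stand-in for a learned "visuomotor Jacobian field": each visible
# 3D point p gets a matrix J(p) such that the point's displacement under
# actuator command u is approximately J(p) @ u.
rng = np.random.default_rng(0)
n_points, n_actuators = 50, 4                        # invented sizes
J = rng.standard_normal((n_points, 3, n_actuators))  # hypothetical field

def predict_motion(J, u):
    """Predicted 3D displacement of every tracked point for command u."""
    return np.einsum("pij,j->pi", J, u)

def solve_command(J, desired):
    """Least-squares actuator command that best realizes the desired motions."""
    A = J.reshape(-1, J.shape[-1])   # stack every point's 3x4 Jacobian
    b = desired.reshape(-1)
    u, *_ = np.linalg.lstsq(A, b, rcond=None)
    return u

u_true = rng.standard_normal(n_actuators)
target = predict_motion(J, u_true)
u_est = solve_command(J, target)     # recovers u_true in this noiseless toy
```

In the real system a deep network plays the role of `J`; the linear map here only conveys how "points seen by a camera" can stand in for joint sensors.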

“Think about how you learn to control your fingers: you wiggle, you observe, you adapt,” explained Sizhe Lester Li, a PhD student at MIT CSAIL and lead researcher on the project, in a press release. “That’s what our system does. It experiments with random actions and figures out which controls move which parts of the robot.”
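
That wiggle-observe-adapt loop can be mimicked in a hypothetical sketch (all sizes invented, and a plain linear regression standing in for the deep model the team actually trains): issue random commands, record how the observed points move, and fit the mapping by least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
n_actuators, n_features = 3, 6       # invented sizes
J_true = rng.standard_normal((n_features, n_actuators))  # the unknown "robot"

# Wiggle: random commands. Observe: the resulting (noisy) point displacements.
U = rng.standard_normal((200, n_actuators))
Y = U @ J_true.T + 0.01 * rng.standard_normal((200, n_features))

# Adapt: regress observations on commands to learn which control moves what.
J_fit, *_ = np.linalg.lstsq(U, Y, rcond=None)
J_fit = J_fit.T                      # same shape as J_true
```

A few hundred random "wiggles" suffice here because the toy is linear; the article's two-to-three-hour figure reflects the far richer model the researchers fit.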

Typical robotics solutions rely on precision engineering: machines built to exact specifications and controlled by pre-trained systems. Such systems can require expensive sensors and AI models fine-tuned over hundreds or thousands of hours to anticipate every possible permutation of movement. Gripping objects with hand-like appendages, for example, remains a difficult challenge in both mechanical engineering and AI control.

Understanding the world around you

The "Jacobian field" camera-based mapping, in contrast, offers a low-cost, high-fidelity route to automating robot systems.

The team published its findings June 25 in the journal Nature, writing that the work was designed to imitate the way the human brain learns to control machines.

Humans learn to reconstruct 3D configurations and predict motion as a function of control from vision alone. According to the paper, "people can learn to pick and place objects within minutes" when controlling robots with a video game controller, and "the only sensors we require are our eyes."

The system's framework was developed using two to three hours of multi-view video, captured by 12 consumer-grade RGB-D cameras, of a robot executing randomly generated commands.

This framework comprises two key components. The first is a deep-learning model that lets the robot determine where it and its appendages are in three-dimensional space, allowing it to predict how its position will change as specific movement commands are executed. The second is a machine-learning program that translates generic movement commands into code the robot can execute.
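
In very rough illustrative Python (the class names, interfaces, and the linear stand-in are all invented, not the paper's architecture), the two components might interact like this:

```python
import numpy as np

class ForwardModel:
    """Stand-in for the first component: predicts where the robot's visible
    3D points end up after a movement command is executed."""
    def __init__(self, J):                 # J: (points, 3, actuators), "learned"
        self.J = J
    def predict(self, points, u):
        return points + np.einsum("pij,j->pi", self.J, u)

class CommandTranslator:
    """Stand-in for the second component: turns a generic motion request
    into a low-level actuator command the robot can execute."""
    def __init__(self, model):
        self.model = model
    def to_command(self, points, target_points):
        A = self.model.J.reshape(-1, self.model.J.shape[-1])
        b = (target_points - points).reshape(-1)
        u, *_ = np.linalg.lstsq(A, b, rcond=None)
        return u

rng = np.random.default_rng(2)
J = rng.standard_normal((20, 3, 5))        # invented sizes
points = rng.standard_normal((20, 3))
model = ForwardModel(J)
translator = CommandTranslator(model)

u = rng.standard_normal(5)
target = model.predict(points, u)          # "move the points like this"
u_cmd = translator.to_command(points, target)
```

The split mirrors the article's description: one piece answers "where will I be if I do this?", the other answers "what do I do to get there?".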

The team tested the new training and control paradigm by benchmarking its effectiveness against traditional camera-based control methods. The Jacobian field solution surpassed those existing 2D control systems in accuracy — especially when the team introduced visual occlusion that caused the older methods to enter a fail state. Machines using the team’s method, however, successfully created navigable 3D maps even when scenes were partially occluded with random clutter.
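
A toy intuition for that occlusion result (illustrative only, with invented sizes): when far more points are tracked than there are actuators, the command can still be recovered even after most points disappear behind clutter, because the surviving points still overdetermine it.

```python
import numpy as np

rng = np.random.default_rng(3)
n_points, n_act = 40, 4
J = rng.standard_normal((n_points, 3, n_act))   # invented point-to-actuator map
u_true = rng.standard_normal(n_act)
motion = np.einsum("pij,j->pi", J, u_true)

# Occlude 90% of the points: only every 10th point stays visible.
visible = np.arange(n_points) % 10 == 0
A = J[visible].reshape(-1, n_act)
b = motion[visible].reshape(-1)
u_rec, *_ = np.linalg.lstsq(A, b, rcond=None)   # still recovers the command
```

Four visible points give twelve equations for four unknowns in this toy, which is why the fit survives the clutter; a flat 2D method has no comparable redundancy to fall back on.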

Once the framework was developed, the scientists applied it to robots with widely varying architectures. The result was a control program that requires no further human intervention to train and operate a robot using only a single video camera.
