AI models can’t tell time or read a calendar, study reveals

By Editor | May 17, 2025

New research has revealed another set of tasks most humans can do with ease that artificial intelligence (AI) stumbles over — reading an analogue clock or figuring out the day on which a date will fall.

AI may be able to write code, generate lifelike images, create human-sounding text and even pass exams (with varying degrees of success), yet it routinely misinterprets the position of the hands on everyday clocks and fails at the basic arithmetic needed to work with calendar dates.

Researchers revealed these unexpected flaws in a presentation at the 2025 International Conference on Learning Representations (ICLR). They also published their findings March 18 on the preprint server arXiv, so the work has not yet been peer-reviewed.



“Most people can tell the time and use calendars from an early age. Our findings highlight a significant gap in the ability of AI to carry out what are quite basic skills for people,” study lead author Rohit Saxena, a researcher at the University of Edinburgh, said in a statement. “These shortfalls must be addressed if AI systems are to be successfully integrated into time-sensitive, real-world applications, such as scheduling, automation and assistive technologies.”

To investigate AI’s timekeeping abilities, the researchers fed a custom dataset of clock and calendar images into various multimodal large language models (MLLMs), which can process visual as well as textual information. The models used in the study include Meta’s Llama 3.2-Vision, Anthropic’s Claude-3.5 Sonnet, Google’s Gemini 2.0 and OpenAI’s GPT-4o.
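
The study’s own evaluation harness is not reproduced here, but the querying pattern for this kind of test is conventional. Below is a minimal sketch, assuming the OpenAI Python client (v1 or later), an API key in the environment and a local image file named clock.png; the file name, prompt and model choice are illustrative rather than taken from the paper.

    # Minimal sketch (not the study's code): ask a multimodal model to read an analogue clock image.
    import base64
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    with open("clock.png", "rb") as f:  # hypothetical test image of an analogue clock
        image_b64 = base64.b64encode(f.read()).decode()

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "What time does this analogue clock show? Answer as HH:MM."},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    print(response.choices[0].message.content)  # compare against the ground-truth time for the image

Scoring each reply against the known answer for the image, across many images and models, produces accuracy figures like the ones reported below.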

The results were poor: the models failed to identify the correct time from an image of a clock, or the day of the week for a sample date, more than half the time.

Related: Current AI models a ‘dead end’ for human-level intelligence, scientists agree


However, the researchers have an explanation for AI’s surprisingly poor time-reading abilities.

“Early systems were trained based on labelled examples. Clock reading requires something different — spatial reasoning,” Saxena said. “The model has to detect overlapping hands, measure angles and navigate diverse designs like Roman numerals or stylized dials. AI recognizing that ‘this is a clock’ is easier than actually reading it.”
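
The final step Saxena describes, turning hand angles into a time, is simple, rule-based arithmetic once the hands have been located. Here is a minimal Python sketch, assuming the hand angles have already been extracted from the image (the perception step the models actually struggle with); the example values are illustrative.

    # Minimal sketch: convert clock-hand angles (degrees clockwise from 12 o'clock) into a time.
    def angles_to_time(hour_angle: float, minute_angle: float) -> str:
        minute = round(minute_angle / 6) % 60    # the minute hand sweeps 6 degrees per minute
        hour = int(hour_angle // 30) % 12 or 12  # the hour hand sweeps 30 degrees per hour
        return f"{hour}:{minute:02d}"

    print(angles_to_time(hour_angle=97.5, minute_angle=90.0))  # prints 3:15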

Dates proved just as difficult. When given a challenge like “What day will the 153rd day of the year be?”, the models failed at a similarly high rate: overall, the AI systems read clocks correctly only 38.7% of the time and answered the calendar questions correctly only 26.3% of the time.
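
By contrast, a conventional program answers the 153rd-day question with a few lines of deterministic calendar arithmetic. A minimal Python sketch, assuming the question refers to a specific year (2025 is used here purely for illustration):

    # Minimal sketch: which day of the week is the n-th day of a given year?
    from datetime import date, timedelta

    def weekday_of_day_of_year(year: int, n: int) -> str:
        d = date(year, 1, 1) + timedelta(days=n - 1)  # day 1 of the year is January 1
        return d.strftime("%A")

    print(weekday_of_day_of_year(2025, 153))  # the 153rd day of 2025 is June 2, a Monday

The datetime library applies the leap-year rules automatically, which is exactly the kind of rule-based step the study suggests the models reproduce only unreliably.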

This shortcoming is similarly surprising because arithmetic is a cornerstone of computing, but as Saxena explained, AI approaches it differently. “Arithmetic is trivial for traditional computers but not for large language models. AI doesn’t run math algorithms; it predicts the outputs based on patterns it sees in training data,” he said. “So while it may answer arithmetic questions correctly some of the time, its reasoning isn’t consistent or rule-based, and our work highlights that gap.”

The project is the latest in a growing body of research highlighting the difference between the way AI “understands” and the way humans do. Models derive answers from familiar patterns and excel when there are enough examples in their training data, yet they fail when asked to generalize or to reason abstractly.

“What for us is a very simple task like reading a clock may be very hard for them, and vice versa,” Saxena said.

The research also reveals the problem AI has when it is trained on limited data, in this case comparatively rare phenomena such as leap years or obscure calendar calculations. Even though LLMs have plenty of examples that explain leap years as a concept, that doesn’t mean they make the connections required to complete a visual task.

The research highlights both the need for more targeted examples in training data and the need to rethink how AI handles the combination of logical and spatial reasoning, especially in tasks it doesn’t encounter often.

Above all, it reveals one more area where placing too much trust in AI output comes at our peril.

“AI is powerful, but when tasks mix perception with precise reasoning, we still need rigorous testing, fallback logic, and in many cases, a human in the loop,” Saxena said.
