IQ Times Media – Smart News for a Smarter You
Tech

Meta’s Yann LeCun Says Current AI Models Lack 4 Key Human Traits

By IQ TIMES MEDIA · May 25, 2025 · 3 Mins Read


What do all intelligent beings have in common? Four things, according to Meta’s chief AI scientist, Yann LeCun.

At the AI Action Summit in Paris earlier this year, political leaders and AI experts gathered to discuss AI development. LeCun shared his baseline definition of intelligence with IBM’s AI leader, Anthony Annunziata.

“There’s four essential characteristics of intelligent behavior that every animal, or relatively smart animal, can do, and certainly humans,” he said. “Understanding the physical world, having persistent memory, being able to reason, and being able to plan, and planning complex actions, particularly planning hierarchically.”

LeCun said AI systems, especially large language models, have not hit this threshold, and incorporating these capabilities would require a shift in how they are trained. That's why many of the biggest tech companies are cobbling capabilities onto existing models in their race to dominate the AI game, he said.

“For understanding the physical world, well, you train a separate vision system. And then you bolt it on the LLM. For memory, you know, you use RAG, or you bolt some associative memory on top of it, or you just make your model bigger,” he said. RAG, which stands for retrieval augmented generation, is a way to enhance the outputs of large language models using external knowledge sources. It was developed at Meta.
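The retrieval step LeCun describes can be sketched in a few lines. This is a toy illustration, not Meta's implementation: it uses bag-of-words word counts in place of learned embeddings, and the names `retrieve` and `rag_prompt` are illustrative rather than any library's API. The real pipeline would embed with a neural encoder and pass the prompt to an LLM.

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy 'embedding': a bag-of-words word-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def rag_prompt(query, documents, k=1):
    """Prepend retrieved context to the query before calling an LLM."""
    context = "\n".join(retrieve(query, documents, k))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "V-JEPA learns by predicting masked parts of a video.",
    "RAG augments a language model with retrieved documents.",
]
print(rag_prompt("What does RAG do?", docs))
```

The point of the pattern is that the model's weights never change: external knowledge is bolted on at inference time, which is exactly why LeCun calls it a "hack" rather than genuine persistent memory.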

All those, however, are just “hacks,” LeCun said.

LeCun has spoken on several occasions about an alternative he calls world-based models. These are models trained on real-world scenarios, with the aim of reaching higher levels of cognition than pattern-based AI. LeCun, in his chat with Annunziata, offered another definition.

“You have some idea of the state of the world at time T, you imagine an action it might take, the world model predicts what the state of the world is going to be from the action you took,” he said.

But, he said, the world evolves according to an infinite and unpredictable set of possibilities, and the only way to train for them is through abstraction.
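The state/action/prediction loop LeCun describes, plus planning on top of it, can be sketched with a toy deterministic model. This is a minimal illustration under stated assumptions, not LeCun's architecture: the state is already an abstraction (a coarse grid cell rather than raw pixels), the `world_model` transition function is hand-written rather than learned, and the planner is brute-force search.

```python
from itertools import product

def world_model(state, action):
    """Predict the state at time T+1 from the state at time T and an action.
    The state is an abstract (x, y) grid cell, not raw pixels -- the
    abstraction discards detail that could not be predicted anyway."""
    x, y = state
    dx, dy = {"up": (0, 1), "down": (0, -1),
              "left": (-1, 0), "right": (1, 0)}[action]
    return (x + dx, y + dy)

def plan(state, goal, actions, model, depth=4):
    """Brute-force planner: search for an action sequence whose
    predicted final state reaches the goal, shortest first."""
    for n in range(1, depth + 1):
        for seq in product(actions, repeat=n):
            s = state
            for a in seq:
                s = model(s, a)  # imagine the action, predict the outcome
            if s == goal:
                return list(seq)
    return None

print(plan((0, 0), (1, 1), ["up", "down", "left", "right"], world_model))
```

Planning happens entirely inside the model's predictions: no action is executed until a sequence that reaches the goal has been "imagined", which is the capability LeCun says current LLMs lack.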

Meta is already experimenting with this through V-JEPA, a model it released to the public in February. Meta describes it as a non-generative model that learns by predicting missing or masked parts of a video.

“The basic idea is that you don’t predict at the pixel level. You train a system to learn an abstract representation of the video so that you can make predictions in that abstract representation, and hopefully this representation will eliminate all the details that cannot be predicted,” he said.
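The masked-prediction idea can be illustrated with a deliberately tiny sketch. This is not V-JEPA's actual encoder or predictor: here the "abstract representation" is just a frame's mean brightness and the "predictor" is linear interpolation between neighboring frames, but it shows the key move, which is that both the prediction and the error live in the abstract space rather than at the pixel level.

```python
def encode(frame):
    """Abstract representation: the mean brightness of a frame.
    Per-pixel detail that cannot be predicted is discarded up front."""
    return sum(frame) / len(frame)

def predict_masked(reps, masked_idx):
    """Predict the masked frame's representation from its neighbors
    in latent space (a stand-in for a learned predictor)."""
    return (reps[masked_idx - 1] + reps[masked_idx + 1]) / 2

video = [[10, 12, 11], [20, 22, 21], [30, 32, 31]]  # three tiny "frames"
reps = [encode(f) for f in video]                   # encode every frame
pred = predict_masked(reps, 1)                      # predict frame 1 from 0 and 2
latent_error = abs(pred - reps[1])                  # loss in the abstract space
print(pred, latent_error)
```

Because the loss is computed on representations rather than pixels, the system is never penalized for failing to reproduce unpredictable detail, which is what makes the approach non-generative.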

The concept is similar to how chemists established a fundamental hierarchy for the building blocks of matter.

“We created abstractions. Particles, on top of this, atoms, on top of this, molecules, on top of this, materials,” he said. “Every time we go up one layer, we eliminate a lot of information about the layers below that are irrelevant for the type of task we’re interested in doing.”

That, in essence, is another way of saying we’ve learned to make sense of the physical world by creating hierarchies.




© 2026 iqtimes. Designed by iqtimes.