IQ Times Media – Smart News for a Smarter You
Tech

Meta’s Yann LeCun Says Current AI Models Lack 4 Key Human Traits

By IQ TIMES MEDIA · May 25, 2025 · 3 Mins Read


What do all intelligent beings have in common? Four things, according to Meta’s chief AI scientist, Yann LeCun.

At the AI Action Summit in Paris earlier this year, political leaders and AI experts gathered to discuss AI development. LeCun shared his baseline definition of intelligence with IBM’s AI leader, Anthony Annunziata.

“There’s four essential characteristics of intelligent behavior that every animal, or relatively smart animal, can do, and certainly humans,” he said. “Understanding the physical world, having persistent memory, being able to reason, and being able to plan, and planning complex actions, particularly planning hierarchically.”

LeCun said AI models, especially large language models, have not hit this threshold, and that incorporating these capabilities would require a shift in how they are trained. That, he said, is why many of the biggest tech companies are cobbling capabilities onto existing models in their race to dominate the AI game.

“For understanding the physical world, well, you train a separate vision system. And then you bolt it on the LLM. For memory, you know, you use RAG, or you bolt some associative memory on top of it, or you just make your model bigger,” he said. RAG, which stands for retrieval-augmented generation, is a way to enhance the outputs of large language models using external knowledge sources. It was developed at Meta.
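To make the retrieval step concrete, here is a minimal sketch of the RAG pattern LeCun mentions: pull the most relevant text from an external corpus, then prepend it to the prompt before the language model answers. The corpus, the word-overlap scoring, and all function names are toy stand-ins invented for illustration, not Meta's implementation.

```python
import string

def tokens(text):
    """Lowercase, strip punctuation, and split text into a set of words."""
    table = str.maketrans("", "", string.punctuation)
    return set(text.lower().translate(table).split())

def retrieve(query, corpus, k=1):
    """Rank documents in the external corpus by word overlap with the query."""
    ranked = sorted(corpus, key=lambda doc: len(tokens(query) & tokens(doc)),
                    reverse=True)
    return ranked[:k]

def augment_prompt(query, corpus):
    """Prepend retrieved context to the question before handing it to an LLM."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "V-JEPA is a non-generative video model released by Meta.",
    "Hierarchical planning breaks a goal into sub-goals.",
]
print(augment_prompt("What is V-JEPA?", corpus))
```

A production system would replace the word-overlap score with learned embeddings and a vector index, but the shape of the loop — retrieve, then generate from augmented context — is the same.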

All those, however, are just “hacks,” LeCun said.

LeCun has spoken on several occasions about an alternative he calls world-based models: models trained on real-life scenarios that exhibit higher levels of cognition than pattern-based AI. In his chat with Annunziata, LeCun offered another definition.

“You have some idea of the state of the world at time T, you imagine an action it might take, the world model predicts what the state of the world is going to be from the action you took,” he said.
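The loop LeCun describes can be sketched in a few lines: given the state of the world at time T and a candidate action, the world model predicts the state at T+1, and a planner picks the action whose predicted outcome best serves the goal. The one-dimensional "world" and its dynamics below are invented purely to show the interface.

```python
def world_model(state, action):
    """Predict the next state from the current state and an action
    (toy dynamics: the action shifts a 1-D position)."""
    return state + action

def plan(state, actions, goal):
    """Pick the action whose predicted outcome lands closest to the goal."""
    return min(actions, key=lambda a: abs(world_model(state, a) - goal))

best = plan(state=0, actions=[-1, 0, 1], goal=3)
print(best)  # the action predicted to move the state toward the goal
```

Hierarchical planning, the fourth trait on LeCun's list, would stack this: a high-level planner choosing sub-goals, each pursued by the same predict-and-choose loop at a finer scale.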

But, he said, the world evolves according to an infinite and unpredictable set of possibilities, and the only way to train for them is through abstraction.

Meta is already experimenting with this through V-JEPA, a model it released to the public in February. Meta describes it as a non-generative model that learns by predicting missing or masked parts of a video.

“The basic idea is that you don’t predict at the pixel level. You train a system to run an abstract representation of the video so that you can make predictions in that abstract representation, and hopefully this representation will eliminate all the details that cannot be predicted,” he said.
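A rough sketch of that idea, under heavily simplified assumptions: instead of predicting raw pixels, encode each frame into a compact abstract representation and predict the next representation in that space. The encoder (a mean over pixel values) and the predictor (a fixed drift) are toy stand-ins, not V-JEPA's architecture.

```python
def encode(frame):
    """Abstract representation of a frame: here, just the mean pixel
    value, discarding per-pixel detail that cannot be predicted."""
    return sum(frame) / len(frame)

def predict_next(rep):
    """Predict the next frame's representation (toy constant drift)."""
    return rep + 0.1

# Two consecutive "frames" whose mean brightness drifts upward.
frames = [[0.1, 0.2, 0.3], [0.2, 0.3, 0.4]]
rep_now = encode(frames[0])       # abstract state of the current frame
rep_pred = predict_next(rep_now)  # prediction in representation space
rep_true = encode(frames[1])      # what the next frame actually encodes to
print(abs(rep_pred - rep_true))   # prediction error in the abstract space
```

The point of the exercise is that the prediction target is the low-dimensional representation, not the frame itself, so unpredictable pixel-level noise never enters the loss.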

The concept is similar to how chemists established a fundamental hierarchy for the building blocks of matter.

“We created abstractions. Particles, on top of this, atoms, on top of this, molecules, on top of this, materials,” he said. “Every time we go up one layer, we eliminate a lot of information about the layers below that are irrelevant for the type of task we’re interested in doing.”

That, in essence, is another way of saying we’ve learned to make sense of the physical world by creating hierarchies.



