IQ Times Media – Smart News for a Smarter You
Tech

Meta’s Yann LeCun Says Current AI Models Lack 4 Key Human Traits

By IQ TIMES MEDIA | May 25, 2025 | 3 Mins Read


What do all intelligent beings have in common? Four things, according to Meta’s chief AI scientist, Yann LeCun.

At the AI Action Summit in Paris earlier this year, political leaders and AI experts gathered to discuss AI development. LeCun shared his baseline definition of intelligence with IBM’s AI leader, Anthony Annunziata.

“There’s four essential characteristics of intelligent behavior that every animal, or relatively smart animal, can do, and certainly humans,” he said. “Understanding the physical world, having persistent memory, being able to reason, and being able to plan, and planning complex actions, particularly planning hierarchically.”

LeCun said AI systems, especially large language models, have not hit this threshold, and that incorporating these capabilities would require a shift in how they are trained. That's why many of the biggest tech companies are cobbling capabilities onto existing models in their race to dominate the AI game, he said.

“For understanding the physical world, well, you train a separate vision system. And then you bolt it on the LLM. For memory, you know, you use RAG, or you bolt some associative memory on top of it, or you just make your model bigger,” he said. RAG, which stands for retrieval augmented generation, is a way to enhance the outputs of large language models using external knowledge sources. It was developed at Meta.
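The retrieval step LeCun mentions can be sketched in a few lines. The toy corpus, word-overlap scoring, and prompt format below are illustrative assumptions only; real RAG systems use learned embeddings and an actual language model rather than string matching.

```python
# A minimal sketch of retrieval-augmented generation (RAG), the technique
# LeCun describes as being bolted onto LLMs for memory. Everything here
# (corpus, scoring, prompt) is a toy stand-in, not Meta's implementation.

def tokens(text):
    """Lowercase a string and split it into a set of bare words."""
    return set(text.lower().replace("?", " ").replace(".", " ").split())

def retrieve(query, corpus, k=2):
    """Rank documents by word overlap with the query; return the top k."""
    return sorted(corpus, key=lambda doc: -len(tokens(query) & tokens(doc)))[:k]

def build_prompt(query, corpus):
    """Prepend retrieved passages so the model can answer from them."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "V-JEPA is a non-generative video model released by Meta.",
    "RAG augments language models with documents retrieved from external sources.",
    "The AI Action Summit was held in Paris.",
]
prompt = build_prompt("How does RAG work with language models?", corpus)
```

The prompt, context included, is then handed to the language model, which is what lets a fixed model draw on knowledge it was never trained on.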

All those, however, are just “hacks,” LeCun said.

LeCun has spoken on several occasions about an alternative he calls world-based models: models trained on real-life scenarios that operate at a higher level of cognition than pattern-based AI. In his chat with Annunziata, LeCun offered another definition.

“You have some idea of the state of the world at time T, you imagine an action it might take, the world model predicts what the state of the world is going to be from the action you took,” he said.

But, he said, the world evolves according to an infinite and unpredictable set of possibilities, and the only way to train for them is through abstraction.
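LeCun's definition amounts to a function that maps a state at time T and a candidate action to a predicted next state, which a planner can roll forward. The dynamics below (a 1-D point mass) are an invented example for illustration, not any model Meta has built.

```python
# Toy illustration of LeCun's world-model loop: given the state of the
# world at time T and an action, predict the state at time T+1, then
# chain predictions to imagine the outcome of a whole action sequence.

def world_model(state, action, dt=1.0):
    """Predict the next (position, velocity) after applying an acceleration."""
    pos, vel = state
    new_vel = vel + action * dt
    new_pos = pos + new_vel * dt
    return (new_pos, new_vel)

def plan(state, actions):
    """Roll the model forward to imagine where an action sequence leads."""
    for action in actions:
        state = world_model(state, action)
    return state

# Accelerate, coast, then brake: the imagined end state, never executed.
final = plan((0.0, 0.0), [1.0, 0.0, -1.0])
```

Planning hierarchically, in these terms, means running this loop at several levels of abstraction at once, with coarse actions at the top decomposed into finer ones below.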

Meta is already experimenting with this through V-JEPA, a model it released to the public in February. Meta describes it as a non-generative model that learns by predicting missing or masked parts of a video.

“The basic idea is that you don’t predict at the pixel level. You train a system to run an abstract representation of the video so that you can make predictions in that abstract representation, and hopefully this representation will eliminate all the details that cannot be predicted,” he said.
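The idea of predicting in an abstract representation rather than at the pixel level can be sketched as follows. The encoder and predictor here are toy stand-ins chosen for clarity, not Meta's V-JEPA architecture; a real system learns both from data.

```python
# Illustrative sketch, in the spirit of what LeCun describes: encode
# frames into compact latents, mask one, and predict the masked latent
# from the visible ones instead of reconstructing pixels.
import numpy as np

rng = np.random.default_rng(0)

def encoder(frame):
    """Toy 'abstract representation': keep only coarse frame statistics."""
    return np.array([frame.mean(), frame.std()])

frames = [rng.random((8, 8)) for _ in range(4)]          # a 4-frame "video"
latents = np.stack([encoder(f) for f in frames])          # shape (4, 2)

masked_idx = 2
visible = np.delete(latents, masked_idx, axis=0)

# Trivial 'predictor': estimate the masked latent from the visible ones.
prediction = visible.mean(axis=0)
error = float(np.linalg.norm(prediction - latents[masked_idx]))
```

Because the encoder throws away pixel-level detail before prediction happens, the unpredictable fine structure LeCun mentions never has to be modeled at all.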

The concept is similar to how chemists established a fundamental hierarchy for the building blocks of matter.

“We created abstractions. Particles, on top of this, atoms, on top of this, molecules, on top of this, materials,” he said. “Every time we go up one layer, we eliminate a lot of information about the layers below that are irrelevant for the type of task we’re interested in doing.”

That, in essence, is another way of saying we’ve learned to make sense of the physical world by creating hierarchies.



