AI

ElevenLabs CEO: Voice is the next interface for AI

By IQ TIMES MEDIA | February 5, 2026 | 3 Mins Read


ElevenLabs co-founder and CEO Mati Staniszewski says voice is becoming the next major interface for AI: the way people will increasingly interact with machines as models move beyond text and screens.

Speaking at Web Summit in Doha, Staniszewski told TechCrunch that voice models like those developed by ElevenLabs have recently moved beyond simply mimicking human speech, including emotion and intonation, to working in tandem with the reasoning capabilities of large language models. The result, he argued, is a shift in how people interact with technology.

In the years ahead, he said, “hopefully all our phones will go back in our pockets, and we can immerse ourselves in the real world around us, with voice as the mechanism that controls technology.”

That vision fueled ElevenLabs’s $500 million raise this week at an $11 billion valuation, and it is increasingly shared across the AI industry. OpenAI and Google have both made voice a central focus of their next-generation models, while Apple appears to be quietly building voice-adjacent, always-on technologies through acquisitions like Q.ai. As AI spreads into wearables, cars, and other new hardware, control is becoming less about tapping screens and more about speaking, making voice a key battleground for the next phase of AI development. 

Iconiq Capital general partner Seth Pierrepont echoed that view onstage at Web Summit, arguing that while screens will continue to matter for gaming and entertainment, traditional input methods like keyboards are starting to feel “outdated.”

And as AI systems become more agentic, Pierrepont said, the interaction itself will also change, with models gaining the guardrails, integrations, and context needed to respond with less explicit prompting from users.

Staniszewski pointed to that agentic shift as one of the biggest changes underway. Rather than spelling out every instruction, he said future voice systems will increasingly rely on persistent memory and context built up over time, making interactions feel more natural and requiring less effort from users. 


That evolution, he added, will influence how voice models are deployed. While high-quality audio models have largely lived in the cloud, Staniszewski said ElevenLabs is working toward a hybrid approach that blends cloud and on-device processing, a move aimed at supporting new hardware, including headphones and other wearables, where voice becomes a constant companion rather than a feature you decide when to engage with.

ElevenLabs is already partnering with Meta to bring its voice technology to products including Instagram and Horizon Worlds, the company’s virtual reality platform. Staniszewski said he would also be open to working with Meta on its Ray-Ban smart glasses as voice-driven interfaces expand into new form factors. 

But as voice becomes more persistent and embedded in everyday hardware, it raises serious concerns around privacy, surveillance, and how much personal data voice-based systems will store as they move closer to users' daily lives. Companies like Google have already been accused of abusing such data.


