
Sam Altman comes out swinging at The New York Times

By IQ TIMES MEDIA | June 25, 2025


From the moment OpenAI CEO Sam Altman stepped onstage, it was clear this was not going to be a normal interview.

Altman and his chief operating officer, Brad Lightcap, stood awkwardly toward the back of the stage at a jam-packed San Francisco venue that typically hosts jazz concerts. Hundreds of people filled steep theatre-style seating on Tuesday night to watch Kevin Roose, a columnist with The New York Times, and Platformer’s Casey Newton record a live episode of their popular technology podcast, Hard Fork.

Altman and Lightcap were the main event, but they'd walked out too early. Roose explained that he and Newton had planned to read off several headlines written about OpenAI in the weeks leading up to the event, ideally before OpenAI's executives came onstage.

“This is more fun that we’re out here for this,” said Altman. Seconds later, the OpenAI CEO asked, “Are you going to talk about where you sue us because you don’t like user privacy?”

Within minutes of the program starting, Altman hijacked the conversation to talk about The New York Times lawsuit against OpenAI and its largest investor, Microsoft, in which the publisher alleges that Altman’s company improperly used its articles to train large language models. Altman was particularly peeved about a recent development in the lawsuit, in which lawyers representing The New York Times asked OpenAI to retain consumer ChatGPT and API customer data.

“The New York Times, one of the great institutions, truly, for a long time, is taking a position that we should have to preserve our users’ logs even if they’re chatting in private mode, even if they’ve asked us to delete them,” said Altman. “Still love The New York Times, but that one we feel strongly about.”

For a few minutes, OpenAI’s CEO pressed the podcasters to share their personal opinions about the New York Times lawsuit — they demurred, noting that as journalists whose work appears in The New York Times, they are not involved in the lawsuit.

Altman and Lightcap’s brash entrance lasted only a few minutes, and the rest of the interview proceeded, seemingly, as planned. However, the flare-up felt indicative of the inflection point Silicon Valley seems to be approaching in its relationship with the media industry.

In the last several years, multiple publishers have brought lawsuits against OpenAI, Anthropic, Google, and Meta for training their AI models on copyrighted works. At a high level, these lawsuits argue that AI models have the potential to devalue, and even replace, the copyrighted works produced by media institutions.

But the tides may be turning in favor of the tech companies. Earlier this week, OpenAI competitor Anthropic received a major win in its legal battle against publishers. A federal judge ruled that Anthropic’s use of books to train its AI models was legal in some circumstances, which could have broad implications for other publishers’ lawsuits against OpenAI, Google, and Meta.

Perhaps Altman and Lightcap felt emboldened by the industry win heading into their live interview with The New York Times journalists. But these days, OpenAI is fending off threats from every direction, and that became clear throughout the night.

Mark Zuckerberg has recently been trying to recruit OpenAI’s top talent by offering them $100 million compensation packages to join Meta’s AI superintelligence lab, Altman revealed weeks ago on his brother’s podcast.

When asked whether the Meta CEO really believes in superintelligent AI systems, or if it’s just a recruiting strategy, Lightcap quipped: “I think [Zuckerberg] believes he is superintelligent.”

Later, Roose asked Altman about OpenAI’s relationship with Microsoft, which has reportedly been pushed to a boiling point in recent months as the partners negotiate a new contract. While Microsoft was once a major accelerant to OpenAI, the two are now competing in enterprise software and other domains.

“In any deep partnership, there are points of tension and we certainly have those,” said Altman. “We’re both ambitious companies, so we do find some flashpoints, but I would expect that it is something that we find deep value in for both sides for a very long time to come.”

OpenAI’s leadership today seems to spend a lot of time swatting down competitors and lawsuits. That may get in the way of OpenAI’s ability to solve broader issues around AI, such as how to safely deploy highly intelligent AI systems at scale.

At one point, Newton asked OpenAI's leaders how they were thinking about recent stories of mentally unstable people using ChatGPT to go down dangerous rabbit holes, including discussing conspiracy theories or suicide with the chatbot.

Altman said OpenAI takes many steps to prevent these conversations, such as cutting them off early or directing users to professional services where they can get help.

“We don’t want to slide into the mistakes that I think the previous generation of tech companies made by not reacting quickly enough,” said Altman. To a follow-up question, the OpenAI CEO added, “However, to users that are in a fragile enough mental place, that are on the edge of a psychotic break, we haven’t yet figured out how a warning gets through.”



