Tech

OpenAI Wants to Pay Someone $555,000 a Year to Mitigate AI Downsides

By IQ TIMES MEDIA | December 28, 2025


OpenAI wants to pay someone over half a million dollars to mitigate the downsides of AI.

If that seems like a lot of money, consider those potential downsides: job loss, misinformation, abuse by malicious actors, environmental destruction, and the erosion of human agency, to name a few.

CEO Sam Altman described the job as “stressful” in an X post on Saturday. “You’ll jump into the deep end pretty much immediately,” Altman wrote.

Altman said the “head of preparedness” position is “a critical role at an important time.”

“Models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges. The potential impact of models on mental health was something we saw a preview of in 2025; we are just now seeing models get so good at computer security they are beginning to find critical vulnerabilities,” he wrote.

OpenAI’s ChatGPT has helped popularize AI chatbots among consumers, many of whom use the technology to research topics, draft emails, plan trips, or perform other simple tasks.

Some users also talk to the bots as an alternative to therapy, which in some cases has exacerbated mental health issues by encouraging delusions and other concerning behavior.

OpenAI said in October it was working with mental health professionals to improve how ChatGPT interacts with users who exhibit concerning behavior, including psychosis or self-harm.

OpenAI’s core mission is to develop artificial intelligence in a way that benefits all of humanity, and it made safety protocols a central part of its operations from the outset. As the company began releasing products and pressure to turn a profit grew, however, some former staffers say it began to prioritize profit over safety.

Jan Leike, the former leader of OpenAI’s now-dissolved safety team, said in a May 2024 post on X announcing his resignation that the company had lost sight of its mission to ensure the technology is deployed safely.

“Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity,” he wrote. “But over the past years, safety culture and processes have taken a backseat to shiny products.”

Less than a week later, another staffer announced their resignation on X, also citing safety concerns. One former staffer, Daniel Kokotajlo, said in a May 2024 blog post that he resigned because he was “losing confidence that it would behave responsibly around the time of AGI.”

Kokotajlo later told Fortune that OpenAI initially had about 30 people researching safety issues related to AGI, a still theoretical version of AI that reasons as well as humans, but a series of departures reduced that head count by almost half.

The company’s former head of preparedness, Aleksander Madry, assumed a new role in July 2024. The position is part of OpenAI’s Safety Systems team, which develops safeguards, frameworks, and evaluations for the company’s models. The job pays $555,000 a year plus equity.

“You will be the directly responsible leader for building and coordinating capability evaluations, threat models, and mitigations that form a coherent, rigorous, and operationally scalable safety pipeline,” the job listing says.
