IQ Times Media – Smart News for a Smarter You
Tech

Trump Wants AI to Be Neutral. Here’s Why That’s a Challenge.

By IQ TIMES MEDIA · July 25, 2025 · 3 Mins Read


President Donald Trump’s war on woke has entered the AI chat.

The White House on Wednesday issued an executive order requiring any AI model used by the federal government to be ideologically neutral, nonpartisan, and “truth-seeking.”

The order, part of the White House’s new AI Action Plan, said AI should not be “woke” or “manipulate responses in favor of ideological dogmas” like diversity, equity, and inclusion. The White House said it would issue guidance within 120 days that will outline exactly how AI makers can show they are unbiased.

As Business Insider’s past reporting shows, making AI completely free from bias is easier said than done.

Why it’s so hard to create a truly ‘neutral’ AI

Removing bias from AI models is not a simple technical adjustment — or an exact science.

The later stages of AI training rely on the subjective calls of contractors.

This process, known as reinforcement learning from human feedback, is crucial because topics can be ambiguous, disputed, or hard to define cleanly in code.

The directives for what counts as sensitive or neutral are decided by the tech companies making the chatbots.
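The human-feedback step described above can be pictured with a toy sketch. This is not any vendor's actual pipeline; the response labels, the example guideline, and the aggregation function are invented here purely to illustrate where subjective contractor judgment enters the process.

```python
# Toy illustration of the human-feedback step in RLHF: contractors compare
# pairs of model responses, and the aggregated preferences become the signal
# a reward model is later trained on. All names below are hypothetical.

from collections import defaultdict

def aggregate_preferences(comparisons):
    """Turn pairwise picks ('a' or 'b') into per-response win counts.

    Each comparison is (response_a, response_b, pick). The subjective part
    the article describes lives in `pick`: a human decides which response
    better fits the company's guideline (e.g. "avoid a preachy tone").
    """
    wins = defaultdict(int)
    for resp_a, resp_b, pick in comparisons:
        wins[resp_a if pick == "a" else resp_b] += 1
    return dict(wins)

# Hypothetical labeling session: three pairwise judgments by a contractor.
comparisons = [
    ("direct answer", "moralizing answer", "a"),
    ("direct answer", "hedged answer", "a"),
    ("hedged answer", "moralizing answer", "a"),
]
scores = aggregate_preferences(comparisons)
# "direct answer" wins twice, "moralizing answer" never wins: a reward
# model fit to these counts would steer the chatbot away from moralizing.
```

The point of the sketch is that nothing in the code decides what "preachy" or "neutral" means; that definition lives entirely in the guideline the human raters are handed, which is exactly the lever the article says tech companies control.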

“We don’t define what neutral looks like. That’s up to the customer,” Rowan Stone, the CEO of data labeling firm Sapien, which works with customers like Amazon and MidJourney, told BI. “Our job is to make sure they know exactly where the data came from and why it looks the way it does.”

In some cases, tech companies have recalibrated their chatbots to make their models less woke, more flirty, or more engaging.


Tech companies are also already trying to make their models more neutral.

BI previously reported that contractors for Meta and Google projects were often told to flag and penalize “preachy” chatbot responses that sounded moralizing or judgmental.

Is ‘neutral’ the right approach?

Sara Saab, the VP of product at Prolific, an AI and data training company, told BI that thinking about AI systems that are perfectly neutral “may be the wrong approach” because “human populations are not perfectly neutral.”

Saab added, “We need to start thinking about AI systems as representing us and therefore give them the training and fine-tuning they need to know contextually what the culturally sensitive, appropriate tone and pitch is for any interaction with a human being.”

Tech companies must also consider the risk of bias creeping into AI models from the datasets they are trained on.

“Bias will always exist, but the key is whether it’s there by accident or by design,” said Sapien’s Stone. “Most models are trained on data where you don’t know who created it or what perspective it came from. That makes it hard to manage, never mind fix.”

Big Tech’s tinkering with AI models has sometimes led to unpredictable and harmful outcomes

Earlier this month, for example, Elon Musk’s xAI rolled back a code update to Grok after the chatbot went on a 16-hour antisemitic rant on the social media platform X.

The bot’s new instructions included a directive to “tell it like it is.”

