A roadmap for AI, if anyone will listen

By IQ Times Media | March 8, 2026


While Washington’s breakup with Anthropic exposed the complete lack of any coherent rules governing artificial intelligence, a bipartisan coalition of thinkers has assembled something the government has so far declined to produce: a framework for what responsible AI development should actually look like.

The Pro-Human Declaration was finalized before last week’s Pentagon-Anthropic standoff, but the collision of the two events wasn’t lost on anyone involved.

“There’s something quite remarkable that has happened in America just in the last four months,” said Max Tegmark, the MIT physicist and AI researcher who helped organize the effort, in conversation with this editor. “Polling suddenly [is showing] that 95% of all Americans oppose an unregulated race to superintelligence.”

The newly published document, signed by hundreds of experts, former officials, and public figures, opens with the no-nonsense observation that humanity is at a fork in the road. One path, which the declaration calls “the race to replace,” leads to humans being supplanted first as workers, then as decision-makers, as power accrues to unaccountable institutions and their machines. The other leads to AI that massively expands human potential.

The latter scenario depends on five key pillars: keeping humans in charge, avoiding the concentration of power, protecting the human experience, preserving individual liberty, and holding AI companies legally accountable. Among its more muscular provisions are an outright prohibition on superintelligence development until there is scientific consensus that it can be done safely and genuine democratic buy-in; mandatory off-switches on powerful systems; and a ban on architectures capable of self-replication, autonomous self-improvement, or resistance to shutdown.

The declaration’s release coincides with a period that makes its urgency far easier to appreciate. On the last Friday in February, Defense Secretary Pete Hegseth designated Anthropic — whose AI already runs on classified military platforms — a “supply chain risk,” a label ordinarily reserved for firms with ties to China, after the company refused to grant the Pentagon unlimited use of its technology. Hours later, OpenAI cut its own deal with the Defense Department, one that legal experts say will be difficult to enforce in any meaningful way. What it all laid bare is how costly Congressional inaction on AI has become.

As Dean Ball, a senior fellow at the Foundation for American Innovation, told The New York Times afterward, “This is not just some dispute over a contract. This is the first conversation we have had as a country about control over AI systems.”


Tegmark reached for an analogy that most people can understand when we spoke. “You never have to worry that some drug company is going to release some other drug that causes massive harm before people have figured out how to make it safe,” he said, “because the FDA won’t allow them to release anything until it’s safe enough.”

Washington turf wars rarely generate the kind of public pressure that changes laws. Instead, Tegmark sees child safety as the pressure point most likely to crack the current impasse. Indeed, the declaration calls for mandatory pre-deployment testing of AI products — particularly chatbots and companion apps aimed at younger users — covering risks including increased suicidal ideation, exacerbation of mental health conditions, and emotional manipulation.

“If some creepy old man is texting an 11-year-old pretending to be a young girl and trying to persuade this boy to commit suicide, the guy can go to jail for that,” Tegmark said. “We already have laws. It’s illegal. So why is it different if a machine does it?”

He believes that once the principle of pre-release testing is established for children’s products, the scope will widen almost inevitably. “People will come along and be like — let’s add a few other requirements. Maybe we should also test that this can’t help terrorists make bioweapons. Maybe we should test to make sure that superintelligence doesn’t have the ability to overthrow the U.S. government.”

It is no small thing that former Trump advisor Steve Bannon and Susan Rice, President Obama’s National Security Advisor, signed the same document — along with former Joint Chiefs Chairman Mike Mullen and progressive faith leaders.

“What they agree on, of course, is that they’re all human,” said Tegmark. “If it’s going to come down to whether we want a future for humans or a future for machines, of course they’re going to be on the same side.”



