Anthropic endorses California’s AI safety bill, SB 53

By IQ TIMES MEDIA | September 8, 2025


On Monday, Anthropic announced an official endorsement of SB 53, a California bill from state senator Scott Wiener that would impose first-in-the-nation transparency requirements on the world’s largest AI model developers. Anthropic’s endorsement marks a rare and major win for SB 53, at a time when major tech groups like the Consumer Technology Association (CTA) and the Chamber of Progress are lobbying against the bill.

“While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington,” said Anthropic in a blog post. “The question isn’t whether we need AI governance — it’s whether we’ll develop it thoughtfully today or reactively tomorrow. SB 53 offers a solid path toward the former.”

If passed, SB 53 would require frontier AI model developers like OpenAI, Anthropic, Google, and xAI to develop safety frameworks, as well as release public safety and security reports before deploying powerful AI models. The bill would also establish whistleblower protections for employees who come forward with safety concerns.

Senator Wiener’s bill specifically focuses on limiting AI models from contributing to “catastrophic risks,” which the bill defines as the death of at least 50 people or more than a billion dollars in damages. SB 53 focuses on the extreme side of AI risk — limiting AI models from being used to provide expert-level assistance in the creation of biological weapons or being used in cyberattacks — rather than more near-term concerns like AI deepfakes or sycophancy.

California’s Senate approved a prior version of SB 53 but still needs to hold a final vote on the bill before it can advance to the governor’s desk. Governor Gavin Newsom has stayed silent on the bill so far, although he vetoed Senator Wiener’s last AI safety bill, SB 1047.

Bills regulating frontier AI model developers have faced significant pushback from both Silicon Valley and the Trump administration, both of which argue that such efforts could limit America’s innovation in the race against China. Investors like Andreessen Horowitz and Y Combinator led some of the pushback against SB 1047, and in recent months, the Trump administration has repeatedly threatened to block states from passing AI regulation altogether.

One of the most common arguments against AI safety bills is that states should leave the matter to the federal government. Andreessen Horowitz’s head of AI policy, Matt Perault, and chief legal officer, Jai Ramaswamy, published a blog post last week arguing that many of today’s state AI bills risk violating the Constitution’s Commerce Clause, which limits state governments from passing laws that go beyond their borders and impair interstate commerce.


However, Anthropic co-founder Jack Clark argued in a post on X that the tech industry will build powerful AI systems in the coming years and can’t wait for the federal government to act.

“We have long said we would prefer a federal standard,” said Clark. “But in the absence of that this creates a solid blueprint for AI governance that cannot be ignored.”

OpenAI’s chief global affairs officer, Chris Lehane, sent a letter to Governor Newsom in August arguing that he should not sign any AI regulation that would push startups out of California, although the letter did not mention SB 53 by name.

OpenAI’s former head of policy research, Miles Brundage, said in a post on X that Lehane’s letter was “filled with misleading garbage about SB 53 and AI policy generally.” Notably, SB 53 aims to regulate only the world’s largest AI companies, specifically those that generated gross revenue of more than $500 million.

Despite the criticism, policy experts say SB 53 is a more modest approach than previous AI safety bills. Dean Ball, a senior fellow at the Foundation for American Innovation and former White House AI policy adviser, said in an August blog post that he believes SB 53 has a good chance now of becoming law. Ball, who criticized SB 1047, said SB 53’s drafters have “shown respect for technical reality,” as well as a “measure of legislative restraint.”

Senator Wiener previously said that SB 53 was heavily influenced by an expert policy panel Governor Newsom convened — co-led by leading Stanford researcher and co-founder of World Labs, Fei-Fei Li — to advise California on how to regulate AI.

Most AI labs already have some version of the internal safety policy that SB 53 requires. OpenAI, Google DeepMind, and Anthropic regularly publish safety reports for their models. However, these companies are not bound by anyone but themselves, so sometimes they fall behind their self-imposed safety commitments. SB 53 aims to set these requirements as state law, with financial repercussions if an AI lab fails to comply.

Earlier in September, California lawmakers amended SB 53 to remove a section of the bill that would have required AI model developers to be audited by third parties. Tech companies have previously fought these types of third-party audits in other AI policy battles, arguing that they’re overly burdensome.
