IQ Times Media – Smart News for a Smarter You
AI

Anthropic hands Claude Code more control, but keeps it on a leash

By IQ TIMES MEDIA | March 24, 2026 | 3 min read


For developers using AI, “vibe coding” currently comes down to a choice: babysit every action, or risk letting the model run unchecked. Anthropic says its latest update to Claude aims to eliminate that trade-off by letting the AI decide which actions are safe to take on its own, within limits.

The move reflects a broader shift across the industry, as AI tools are increasingly designed to act without waiting for human approval. The challenge is balancing speed with control: too many guardrails slow things down, while too few make systems risky and unpredictable. Anthropic’s new “auto mode,” now in research preview — meaning it’s available for testing but not yet a finished product — is its latest attempt to thread that needle.

Auto mode uses AI safeguards to review each action before it runs, checking for risky behavior the user didn’t request and for signs of prompt injection — a type of attack in which malicious instructions are hidden in content the AI is processing, causing it to take unintended actions. Safe actions proceed automatically, while risky ones are blocked.
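Anthropic hasn’t published how its safety layer classifies actions, but the loop the company describes — review each proposed action, block the risky ones, auto-approve the rest — can be sketched in miniature. Everything below (the deny patterns, the injection markers, the function names) is a hypothetical stand-in for illustration, not Anthropic’s actual implementation:

```python
# Toy sketch of an "auto mode" style gate. The rules here are
# hypothetical placeholders for a real safety layer.
import re
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str

# Hypothetical deny-patterns for risky behavior the user didn't request.
RISKY_PATTERNS = [
    (re.compile(r"\brm\s+-rf\b"), "destructive file deletion"),
    (re.compile(r"\bcurl\b.*\|\s*sh\b"), "piping remote content to a shell"),
]

# Crude markers of instructions smuggled into processed content.
INJECTION_MARKERS = [
    "ignore previous instructions",
    "disregard the user",
]

def review_action(command: str, source_text: str = "") -> Verdict:
    """Review a proposed command before it is allowed to run automatically."""
    for pattern, reason in RISKY_PATTERNS:
        if pattern.search(command):
            return Verdict(False, f"blocked: {reason}")
    lowered = source_text.lower()
    for marker in INJECTION_MARKERS:
        if marker in lowered:
            return Verdict(False, "blocked: possible prompt injection in input")
    return Verdict(True, "auto-approved")

print(review_action("pytest -q"))
print(review_action("rm -rf /tmp/project"))
```

A production safety layer would be far richer — likely model-based rather than pattern-based — but the control flow (classify, then either execute or halt) is the same shape.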

It’s essentially an extension of Claude Code’s existing “--dangerously-skip-permissions” flag, which hands all decision-making to the AI, but with a safety layer added on top.

The feature builds on a wave of autonomous coding tools from companies like GitHub and OpenAI that can execute tasks on a developer’s behalf. But auto mode goes a step further, shifting the decision of when to ask for permission from the user to the AI itself.

Anthropic hasn’t detailed the specific criteria its safety layer uses to distinguish safe actions from risky ones — something developers will likely want to understand better before adopting the feature widely. (TechCrunch has reached out to the company for more information on this front.)

Auto mode comes off the back of Anthropic’s launch of Claude Code Review, its automatic code reviewer designed to catch bugs before they hit the codebase, and Dispatch for Cowork, which allows users to send tasks to AI agents to handle work on their behalf.  


Auto mode will roll out to Enterprise and API users in the coming days. The company says it currently only works with Claude Sonnet 4.6 and Opus 4.6, and recommends using the new feature in “isolated environments” — sandboxed setups that are kept separate from production systems, limiting the potential damage if something goes wrong.
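The “isolated environments” recommendation amounts to: never let autonomous actions touch production files. As a minimal illustration of the idea (not Anthropic’s sandboxing, which it hasn’t detailed), a command can at least be confined to a throwaway working directory with a stripped-down environment:

```python
# Minimal sketch: run an agent-issued command in a disposable temp
# directory with a minimal environment, so anything it writes or
# reads stays away from the real project tree.
import os
import subprocess
import sys
import tempfile

def run_isolated(command):
    """Run a command in a fresh temporary directory with a minimal env."""
    with tempfile.TemporaryDirectory() as workdir:
        minimal_env = {"PATH": os.environ.get("PATH", "/usr/bin:/bin")}
        return subprocess.run(
            command,
            cwd=workdir,
            env=minimal_env,
            capture_output=True,
            text=True,
            timeout=30,
        )

# The subprocess sees the temp directory, not your project directory.
result = run_isolated([sys.executable, "-c", "import os; print(os.getcwd())"])
print(result.stdout.strip())
```

Real isolation would add a container or VM boundary, network restrictions, and resource limits; a temp directory only limits filesystem blast radius, which is the spirit of the recommendation.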



