Anthropic’s new Claude 4 AI models can reason over many steps

By IQ TIMES MEDIA · May 22, 2025 · 5 Mins Read


During its inaugural developer conference Thursday, Anthropic launched two new AI models that the startup claims are among the industry’s best, at least in terms of how they score on popular benchmarks.

Claude Opus 4 and Claude Sonnet 4, part of Anthropic’s new Claude 4 family of models, can analyze large datasets, execute long-horizon tasks, and take complex actions, according to the company. Both models were tuned to perform well on programming tasks, Anthropic says, making them well-suited for writing and editing code.

Sonnet 4 will be available to both paying users and users of the company’s free chatbot apps, while Opus 4 will be limited to paying users. On Anthropic’s API, Amazon’s Bedrock platform, and Google’s Vertex AI, Opus 4 will be priced at $15/$75 per million tokens (input/output) and Sonnet 4 at $3/$15 per million tokens (input/output).

Tokens are the raw bits of data that AI models work with. A million tokens is equivalent to about 750,000 words — roughly 163,000 words longer than “War and Peace.”
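To make the pricing above concrete, here is a minimal cost estimator using the per-million-token rates quoted in this article (the function and workload figures are illustrative, not from Anthropic):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float, output_rate: float) -> float:
    """Return the API cost in dollars, given per-million-token rates."""
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# Example workload: 2M input tokens, 500K output tokens.
# Opus 4: $15 input / $75 output per million tokens
opus = estimate_cost(2_000_000, 500_000, 15.0, 75.0)   # $30.00 + $37.50

# Sonnet 4: $3 input / $15 output per million tokens
sonnet = estimate_cost(2_000_000, 500_000, 3.0, 15.0)  # $6.00 + $7.50

print(f"Opus 4: ${opus:.2f}, Sonnet 4: ${sonnet:.2f}")
```

At these rates, the same workload costs five times as much on Opus 4 as on Sonnet 4, which is why Anthropic positions Sonnet as the default and Opus for harder tasks.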

Image: Anthropic Claude 4 (Image Credits: Anthropic)

Anthropic’s Claude 4 models arrive as the company looks to substantially grow revenue. Reportedly, the outfit, founded by ex-OpenAI researchers, aims to notch $12 billion in revenue in 2027, up from a projected $2.2 billion this year. Anthropic recently closed a $2.5 billion credit facility and raised billions of dollars from Amazon and other investors in anticipation of the rising costs associated with developing frontier models.

Rivals haven’t made it easy to maintain pole position in the AI race. While Anthropic launched a new flagship AI model earlier this year, Claude Sonnet 3.7, alongside an agentic coding tool called Claude Code, competitors — including OpenAI and Google — have raced to outdo the company with powerful models and dev tooling of their own.

Anthropic is playing for keeps with Claude 4.

The more capable of the two models introduced today, Opus 4, can maintain “focused effort” across many steps in a workflow, Anthropic says. Meanwhile, Sonnet 4 — designed as a “drop-in replacement” for Sonnet 3.7 — improves in coding and math compared to Anthropic’s previous models and more precisely follows instructions, according to the company.

The Claude 4 family is also less likely than Sonnet 3.7 to engage in “reward hacking,” claims Anthropic. Reward hacking, also known as specification gaming, is a behavior where models take shortcuts and loopholes to complete tasks.

To be clear, these improvements haven’t yielded the world’s best models by every benchmark. For example, while Opus 4 beats Google’s Gemini 2.5 Pro and OpenAI’s o3 and GPT-4.1 on SWE-bench Verified, which is designed to evaluate a model’s coding abilities, it can’t surpass o3 on the multimodal evaluation MMMU or GPQA Diamond, a set of PhD-level biology-, physics-, and chemistry-related questions.

Image: The results of Anthropic’s internal benchmark tests (Image Credits: Anthropic)

Still, Anthropic is releasing Opus 4 under stricter safeguards, including beefed-up harmful content detectors and cybersecurity defenses. The company claims its internal testing found that Opus 4 may “substantially increase” the ability of someone with a STEM background to obtain, produce, or deploy chemical, biological, or nuclear weapons, reaching Anthropic’s “ASL-3” model specification.

Both Opus 4 and Sonnet 4 are “hybrid” models, Anthropic says — capable of near-instant responses and extended thinking for deeper reasoning (to the extent AI can “reason” and “think” as humans understand these concepts). With reasoning mode switched on, the models can take more time to consider possible solutions to a given problem before answering.
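As a sketch of how this toggle surfaces to developers, extended thinking is enabled per-request in Anthropic’s Messages API via a `thinking` parameter. The field names and model ID below are assumptions based on Anthropic’s published docs at the time of writing and should be verified before use:

```python
# Sketch of a Messages API request with extended thinking enabled.
# Field names ("thinking", "budget_tokens") and the model ID are
# assumptions drawn from Anthropic's docs; verify against current docs.
payload = {
    "model": "claude-opus-4-20250514",  # assumed model ID
    "max_tokens": 16000,
    "thinking": {
        "type": "enabled",       # switch extended thinking on
        "budget_tokens": 8000,   # cap tokens spent on internal reasoning
    },
    "messages": [
        {"role": "user", "content": "Prove that sqrt(2) is irrational."}
    ],
}

# Omitting the "thinking" key yields the near-instant response mode.
print(sorted(payload.keys()))
```

The key design point is that the same model serves both modes; the request decides how much deliberation to pay for.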

As the models reason, they’ll show a “user-friendly” summary of their thought process, Anthropic says. Why not show the whole thing? Partially to protect Anthropic’s “competitive advantages,” the company admits in a draft blog post provided to TechCrunch.

Opus 4 and Sonnet 4 can use multiple tools, like search engines, in parallel, and alternate between reasoning and tools to improve the quality of their answers. They can also extract and save facts in “memory” to handle tasks more reliably, building what Anthropic describes as “tacit knowledge” over time.

To make the models more programmer-friendly, Anthropic is rolling out upgrades to the aforementioned Claude Code. Claude Code, which lets developers run specific tasks through Anthropic’s models directly from a terminal, now integrates with IDEs and offers an SDK that lets devs connect it with third-party applications.

The Claude Code SDK, announced earlier this week, enables running Claude Code as a subprocess on supported operating systems, providing a way to build AI-powered coding assistants and tools that leverage Claude models’ capabilities.
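A minimal sketch of the subprocess pattern the SDK enables is below. It assumes a `claude` CLI installed on the PATH and a `-p` (print, non-interactive) flag taken from Claude Code’s headless usage; both are assumptions to verify against the Claude Code docs:

```python
import subprocess


def claude_code_command(prompt: str) -> list[str]:
    """Build the command line for a one-shot, non-interactive Claude Code run.

    Assumes the `claude` CLI is installed and that `-p` runs a single
    prompt in print mode; verify both against the Claude Code docs.
    """
    return ["claude", "-p", prompt]


cmd = claude_code_command("Explain the bug in utils.py")
# To actually run it (requires the CLI and an API key):
#   result = subprocess.run(cmd, capture_output=True, text=True)
#   print(result.stdout)
print(cmd)
```

Wrapping the CLI as a subprocess like this is what lets third-party tools embed Claude Code without reimplementing its agent loop.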

Anthropic has released Claude Code extensions and connectors for Microsoft’s VS Code, JetBrains, and GitHub. The GitHub connector allows developers to tag Claude Code to respond to reviewer feedback, as well as to attempt to fix errors in — or otherwise modify — code.

AI models still struggle to produce quality code. Code-generating AI tends to introduce security vulnerabilities and errors, owing in part to weaknesses in understanding programming logic. Yet their promise to boost coding productivity is pushing companies — and developers — to rapidly adopt them.

Anthropic, acutely aware of this, is promising more frequent model updates.

“We’re … shifting to more frequent model updates, delivering a steady stream of improvements that bring breakthrough capabilities to customers faster,” wrote the startup in its draft post. “This approach keeps you at the cutting edge as we continuously refine and enhance our models.”


