Key Takeaways:
Anthropic launched Claude Opus 4.7 on April 16, 2026, posting an 87.6% score on the SWE-bench Verified benchmark. As the AI industry shifts toward agentic autonomy, Opus 4.7 outperforms GPT-5.4 in complex coding and finance. Developers must manage costs, as the new model uses 1.0 to 1.35 times more tokens than the previous 4.6 version.
AI Evolution: Claude Opus 4.7 Launched With Enhanced Vision and Memory
The San Francisco-based AI startup positioned the release as its most capable generally available model to date. It serves as a targeted upgrade over the Opus 4.6 version that arrived just two months ago in February.
While Claude Mythos Preview remains in restricted testing for cybersecurity, Opus 4.7 is built for the broader market. It focuses specifically on software engineering, long-horizon tasks, and complex financial analysis.
Performance metrics released by Anthropic show the model gaining significant ground in autonomous workflows. On the SWE-bench Verified coding benchmark, the new model hit 87.6 percent, up from the 80.8 percent seen in the 4.6 release.
The model also managed to edge out its primary competition in several key categories. Anthropic reported that Opus 4.7 outperformed OpenAI's GPT-5.4 and Google's Gemini 3.1 Pro in tool-use and computer-interaction tests.
One of the most visible changes involves a major upgrade to the model's vision capabilities. Claude Opus 4.7 can now process images up to 2,576 pixels on the long edge, triple the previous resolution limit.
This visual boost lets the AI better interpret complex charts, user interfaces, and technical diagrams. However, the company noted that higher-resolution images consume more tokens, potentially raising costs for high-volume users.
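To make the cost note concrete, here is a rough sketch of how resolution drives image token usage. It assumes Anthropic's previously documented heuristic of roughly (width × height) / 750 tokens per image, and assumes the old long-edge limit was one third of the new 2,576-pixel figure; neither assumption is confirmed for Opus 4.7.

```python
# Rough image-token estimate, assuming the commonly documented
# heuristic of ~(width * height) / 750 tokens per image.
def image_tokens(width: int, height: int) -> int:
    """Estimated token count for one image at the given resolution."""
    return (width * height) // 750

# A square image at the assumed old long-edge limit vs the new one.
old_edge = 2_576 // 3  # 858 px, assuming the old cap was a third of the new
new_edge = 2_576       # new long-edge limit reported for Opus 4.7

print(image_tokens(old_edge, old_edge))  # 981 tokens
print(image_tokens(new_edge, new_edge))  # 8847 tokens
```

Under this heuristic, tripling the edge length roughly nine-folds the per-image token bill, which is why the company flags cost for high-volume vision users.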
Anthropic also introduced a new feature called /ultrareview within its Claude Code environment. The tool lets Pro- and Max-tier users run multi-agent sessions to identify bugs and design flaws in software.
For financial professionals, the model shows a higher degree of rigor in economic modeling. It achieved a 0.813 score on the General Finance module, a meaningful step up from the previous version's 0.767 rating.
The pricing structure for the model remains unchanged at $5 per million input tokens and $25 per million output tokens. To help manage expenses during long autonomous runs, Anthropic added a task budget feature in public beta.
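Given the quoted rates, per-run cost is simple arithmetic. The sketch below uses the article's published prices; the token counts in the example are illustrative, not real usage figures:

```python
# Estimate request cost from token counts, using the quoted Opus 4.7
# rates: $5 per million input tokens, $25 per million output tokens.
INPUT_RATE = 5.00 / 1_000_000    # dollars per input token
OUTPUT_RATE = 25.00 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for one request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a long agentic run with 400k input and 60k output tokens.
print(f"${estimate_cost(400_000, 60_000):.2f}")  # $3.50
```

A task budget, as described, would cap runs like this one before the autonomous loop accumulates further spend.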
Directions to a T
Early feedback from the developer community suggests the model is more literal in following instructions. This change may require users to re-tune existing prompts that were optimized for older versions of the Claude family.
"Claude 4.7 is out, and using it feels like getting into an F1 car. Much more power, and it does exactly what you tell it at full speed. Your job is to pick the course and make the turns," one user wrote on X.
Some testers have observed that the updated tokenizer can use up to 1.35 times more tokens for the same input. While this can deplete usage limits faster, the company argues that the performance per task justifies the consumption.
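The reported tokenizer overhead compounds directly into cost. A minimal sketch of the worst case, assuming the $5-per-million input rate quoted earlier and an illustrative prompt size:

```python
# Compare input-token cost before and after the reported worst-case
# 1.35x tokenizer multiplier, at the quoted $5/M input rate.
RATE = 5.00 / 1_000_000  # dollars per input token
MULTIPLIER = 1.35        # upper bound reported by testers

def inflated_cost(base_tokens: int, multiplier: float = MULTIPLIER) -> float:
    """Cost of a prompt after applying the tokenizer multiplier."""
    return base_tokens * multiplier * RATE

base = 100_000  # illustrative prompt size under the old tokenizer
print(f"${inflated_cost(base, 1.0):.3f}")  # old tokenization
print(f"${inflated_cost(base):.3f}")       # worst-case 1.35x
```

In the worst case, the same prompt costs 35 percent more, which is the trade-off the company is asking users to weigh against per-task performance.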
Safety remains a core focus, as the model includes new automated safeguards to block high-risk cybersecurity uses. Anthropic's system card highlights improved honesty and stronger resistance to generating harmful content.
The model is now available through the Claude API, Amazon Bedrock, Google Vertex AI, and Microsoft Foundry. It retains the 1 million token context window introduced earlier this year.