Anthropic released Claude Opus 4.7 today, April 16, 2026. It is their most capable generally available model, and a direct follow-up to Opus 4.6 from February.
The benchmarks beat Opus 4.6, GPT-5.4, and Google Gemini 3.1 Pro. Every model release says that.
What actually matters for the CEOs I advise is buried three lines down in the release notes. This post unpacks what's new, what's hype, and more importantly what this release signals about where enterprise AI is heading.
1. The "xhigh" Effort Level: Finer Control Over Thinking
Opus 4.7 introduces a new reasoning tier called xhigh ("extra high"), sitting between the existing high and max effort levels. It gives developers and enterprises a new dial on the tradeoff between depth of reasoning and latency.
Anthropic recommends starting with high or xhigh for coding and agentic use cases. That is a strong hint about where they see the biggest real-world lift.
Why it matters for CEOs: Your AI is no longer a single-speed engine. You can now tune how hard it thinks based on the stakes of the decision. Low-latency customer support? Use lower effort. Complex contract analysis or architectural review? Dial it up. This is a quiet but important shift. AI is maturing into a cost-aware utility, not a flat-rate chatbot.
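In practice, the effort dial becomes part of your routing logic. A minimal sketch of that idea, using a task-to-effort mapping you would define yourself; the tier names (low, high, xhigh) come from the release notes, but the mapping and function are illustrative, not confirmed API surface:

```python
# Map business task types to reasoning-effort tiers. The mapping is
# an illustrative assumption, not part of any Anthropic API.
EFFORT_BY_STAKES = {
    "customer_support": "low",     # latency-sensitive, low stakes
    "code_review": "high",         # the recommended starting point for coding
    "contract_analysis": "xhigh",  # depth matters more than speed
}

def pick_effort(task_type: str) -> str:
    """Return the effort tier for a task, defaulting to 'high'."""
    return EFFORT_BY_STAKES.get(task_type, "high")
```

The point of routing like this is cost governance: the expensive tiers are opt-in per task, not the default for everything.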
2. Task Budgets for Agentic Loops
Opus 4.7 introduces task budgets, a way to tell the model how much time and compute to spend on a given agentic task, not just what the task is.
This is a bigger deal than it sounds. For the last year, the biggest complaint from enterprises running autonomous agents has been a simple one: "It works, but I can't predict what it'll cost." Task budgets put the CFO back in the room.
Why it matters: Agentic AI was previously hard to deploy in regulated or cost-sensitive environments because runtime was unbounded. With task budgets, you can now govern an autonomous workflow the same way you govern a human contractor: with scope, time, and spend limits.
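The "govern it like a contractor" idea translates directly into code. A minimal sketch of a budget object that an agent loop checks before each tool call, assuming you track spend yourself; all names and limits here are illustrative, not the actual task-budget API:

```python
import time

class TaskBudget:
    """Illustrative scope/time/spend limits for an agentic loop."""

    def __init__(self, max_tool_calls: int, max_seconds: float, max_usd: float):
        self.max_tool_calls = max_tool_calls
        self.max_seconds = max_seconds
        self.max_usd = max_usd
        self.tool_calls = 0
        self.spent_usd = 0.0
        self.started = time.monotonic()

    def charge(self, usd: float) -> None:
        """Record one tool call and its estimated cost."""
        self.tool_calls += 1
        self.spent_usd += usd

    def exhausted(self) -> bool:
        """True once any limit is hit; the loop should stop and report."""
        return (
            self.tool_calls >= self.max_tool_calls
            or self.spent_usd >= self.max_usd
            or time.monotonic() - self.started >= self.max_seconds
        )
```

An agent runner would call `budget.exhausted()` at the top of each iteration and `budget.charge(...)` after each tool call, so the workflow halts predictably instead of running open-ended.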
3. High-Resolution Vision at 3.75 Megapixels
Opus 4.7 raises maximum image input resolution to 2,576 pixels on the long edge, about 3.75 megapixels, with 1:1 pixel coordinate mapping.
Translation: the model can now read your screenshots, design mockups, whiteboards, scanned contracts, and architectural diagrams at the fidelity a teammate would. Tiny text in a screenshot? Legible. A hand-drawn flow on a whiteboard? Interpretable. A marked-up PDF page? Precisely coordinate-mappable for annotation back.
Why it matters: Multimodal AI has been stuck in a "good enough for the demo" zone for two years. This release closes the gap between demo and production. Workflows that were blocked by poor OCR, low-resolution document scanning, or degraded chart reading just became viable.
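If your pipeline pre-processes images before upload, the new cap is simple to respect: downscale only when the longer side exceeds 2,576 pixels, preserving aspect ratio so coordinate mapping stays predictable. A sketch of that arithmetic (the 2,576 figure is from the release; the helper itself is illustrative):

```python
def fit_to_long_edge(width: int, height: int, max_long_edge: int = 2576):
    """Return (width, height) scaled so the longer side is at most
    max_long_edge, preserving aspect ratio. Images already within
    the cap are returned unchanged."""
    long_edge = max(width, height)
    if long_edge <= max_long_edge:
        return width, height  # send as-is, no resize needed
    scale = max_long_edge / long_edge
    return round(width * scale), round(height * scale)
```

For example, a 5152x2896 screen capture halves to 2576x1448, while a 1920x1080 screenshot passes through untouched.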
4. Stronger Multi-Step, Long-Horizon Execution
Beyond the headline features, Opus 4.7 delivers meaningful improvements in long-horizon reasoning and complex, tool-dependent workflows, the kind of work real agentic systems require.
Anthropic's internal evaluations report more reliable agentic execution, better decision-making deep into long task chains, and fewer catastrophic drop-offs after many tool calls.
Why it matters: Most enterprise AI value lives in multi-step processes: procurement workflows, legal review chains, financial close tasks, customer onboarding journeys. A model that's only good at single-turn answers is a copilot. A model that's reliable across 40+ tool calls is infrastructure.
5. A New Tokenizer
Opus 4.7 ships with an updated tokenizer, which affects both output quality and cost-per-task accounting. Teams comparing price-per-output across model versions will need to re-benchmark, because token counts for the same input can shift meaningfully.
Why it matters: If you benchmark LLM spend, re-run your numbers before you renew contracts.
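The re-benchmark itself is mechanical: count tokens for a fixed corpus under each model version with whatever token-counting endpoint or tokenizer you already use, then compare. A sketch of the comparison step, with illustrative helper names:

```python
def token_shift_pct(old_count: int, new_count: int) -> float:
    """Percent change in token count for the same input across
    tokenizer versions; positive means the new tokenizer emits more."""
    return (new_count - old_count) / old_count * 100.0

def projected_spend(monthly_spend_usd: float, shift_pct: float) -> float:
    """Project monthly spend after the tokenizer change, holding
    per-token price constant (the release keeps pricing unchanged)."""
    return monthly_spend_usd * (1.0 + shift_pct / 100.0)
```

If your representative corpus drops from 1,000 to 900 tokens, a $10,000/month line item projects to $9,000 at the same per-token price, which is exactly the kind of delta to have in hand before a renewal conversation.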
6. Availability and Pricing
Claude Opus 4.7 is available across all of Anthropic's Claude products and API, and through cloud providers Amazon (Bedrock), Microsoft, and Google, at the same price as Opus 4.6.
Why it matters: Same cost, meaningfully better capability. If you're already using Opus 4.6, upgrading is effectively free. If you're on an older Claude, GPT, or Gemini tier, this release resets the baseline of what "flagship" means.
The Quiet Signal Underneath All of This
Notice the pattern across every feature:
- xhigh effort → deeper, more controlled thinking
- Task budgets → more predictable autonomous execution
- 3.75MP vision → better input fidelity
- Long-horizon improvements → more reliable multi-step chains
Every one of these makes AI a better executor. Not a better thinker.
That's the line I keep drawing for the executives I work with: AI executes. Humans think.
Opus 4.7 doesn't change that. It sharpens it.
What This Means for Your AI Strategy
If your 2026 AI strategy is still "let's test ten tools and see what sticks," you're already behind. The models are now dramatically better at doing the work. That's no longer the constraint.
The real question, the one most leadership teams still can't answer, is this:
What work is actually worth doing?
That's not a model problem. It's a strategy problem. And no release (4.7, 5.0, or whatever Anthropic ships next) will solve it for you.
The companies pulling ahead right now aren't the ones testing the most tools. They're the ones who have done the hard thinking about where AI creates real value in their specific business, and have a disciplined plan to capture it.
If that sounds like something your organisation hasn't nailed yet, that's exactly what the AI Sweet Spot Framework and the workshops are designed to solve.
Because the frontier of what AI can do is no longer the problem. The frontier of what your business should do with it is.