Claude Code's /ultrareview Gives You 3 Free Multi-Agent Reviews Before May 5
Anthropic shipped /ultrareview in Claude Code on April 22. It spins up a fleet of review agents in a remote sandbox — application logic, edge cases, security, performance — each one independently verifying findings before reporting. Pro and Max subscribers get three free runs that expire May 5. After that it's $5–$20 per review depending on diff size.
I've burned two of my three free runs on real PRs this week. The honest question for a solo dev isn't "is this cool" — it's "do I keep paying for it after the free tier dies, or is local /review good enough?"
Here's the answer after a week of actually using it.
What /ultrareview does that /review doesn't
Three differences between local /review and remote /ultrareview actually matter.
Remote sandbox, parallel agents. Local /review runs one agent in your terminal session, sequentially through the diff. /ultrareview spins up four agents in parallel in Anthropic's sandbox — one for application logic, one for edge cases, one for security, one for performance. Each agent has its own context, its own focus, and its own pass through the diff. They run concurrently while you keep working.
Independent verification before surfacing. This is the part that matters. When the security agent claims it found an auth bypass, that finding gets independently reproduced by a verifier agent before it surfaces to you. False-positive rate drops noticeably. The local /review flag rate is roughly 50% noise; the /ultrareview flag rate after my first two runs is closer to 20% noise.
No local resource burn. The fleet runs in Anthropic's sandbox, not on your laptop. You can /ultrareview a 2,000-line refactor and not have your editor hitch. For solo devs running on M-series Macs this is mostly a non-issue, but for anyone working off a 4-year-old machine the latency story is real.
The pricing math
$5–$20 per review × the 3–8 meaningful PRs you ship a week comes to $15–$160 per week, or roughly $65–$700 per month, on top of your existing Max plan.
That's real money. Especially if you're already hitting Max usage ceilings on agent loops, this is meaningful incremental cost. The honest math for me, looking at my last 30 days: I shipped 23 PRs that I'd have wanted reviewed. At $10 average per review, that's $230/month on top of my Max plan. Not catastrophic, but well above the "is this clearly worth it" threshold.
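The back-of-envelope math is worth making explicit. A minimal sketch using the article's own figures (the $5–$20 price bounds, 3–8 PRs/week, and the 23-PRs-at-$10 example); the helper function and the weeks-per-month factor are mine:

```python
# Rough monthly cost of /ultrareview for a solo dev.
# Price bounds and PR volumes come from the article; the
# 52/12 weeks-per-month factor is a standard approximation.
WEEKS_PER_MONTH = 52 / 12  # ~4.33

def monthly_cost(prs_per_week: float, price_per_review: float) -> float:
    """Illustrative monthly spend at a given PR volume and per-review price."""
    return prs_per_week * WEEKS_PER_MONTH * price_per_review

# The article's 30-day example: 23 PRs at a $10 average.
print(23 * 10)  # -> 230 dollars/month, matching the figure in the text

# Range for 3-8 meaningful PRs/week at $5-$20 per review:
print(round(monthly_cost(3, 5)))    # -> 65
print(round(monthly_cost(8, 20)))   # -> 693
```

The spread is wide because both inputs vary 2–4×, which is exactly why a per-PR trigger rule matters more than the average.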
Where /ultrareview has earned its slot
After two runs, one finding paid for the next year of worst-case usage.
The security agent caught a subtle auth bypass in a PR my local /review and my eyeballs both missed. The bypass was in a middleware change — the kind of thing where the test suite passes, the local review says LGTM, and the bug ships into a route that handles user data. The /ultrareview security agent flagged the specific control-flow path that bypassed the middleware, reproduced the bypass in a verifier pass, and surfaced it with a recommended fix. I spent maybe 90 seconds reading the finding and 10 minutes implementing the fix.
If that bypass had shipped, the recovery cost would have been substantially more than a full year of reviews at $230/month. Not a hypothetical. A real PR with a real bug that real users would have hit. That single finding is the case for keeping /ultrareview in the workflow.
Where it's overkill
Tiny PRs, throwaway refactors, docs-only changes: burning $5–$20 on those is absurd. The feature needs user-level gating ("only on PRs > 200 lines, only on PRs touching auth/payments/data"), and right now that gating is entirely manual.
This is the actual pain point. The default of /ultrareview is "always run it." The right behavior is "run it on the 20% of PRs that actually matter." Anthropic should ship a config option that says "only auto-suggest /ultrareview for PRs matching these patterns." Until they do, the discipline is on you. Don't burn $20 on a typo fix.
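Until a built-in gate exists, the self-discipline can at least be mechanized. A hypothetical sketch of the kind of pre-check the config option above would formalize; the thresholds, path patterns, and the idea of scripting this at all are my assumptions, not a real Claude Code feature:

```python
import fnmatch

# Hypothetical gating heuristic: only suggest /ultrareview for large
# diffs or diffs touching sensitive areas. Thresholds are illustrative.
MIN_LINES = 200
SENSITIVE_PATTERNS = ["*auth*", "*payments*", "*migration*"]

def should_ultrareview(changed_files: list[str], lines_changed: int) -> bool:
    """Return True if this diff is worth a paid multi-agent review."""
    if lines_changed > MIN_LINES:
        return True
    # Any file path matching a sensitive pattern triggers a review
    # regardless of diff size (fnmatch's * also matches path separators).
    return any(
        fnmatch.fnmatch(path, pattern)
        for path in changed_files
        for pattern in SENSITIVE_PATTERNS
    )

# A typo fix in docs: skip the paid review.
print(should_ultrareview(["README.md"], 3))                # False
# A small diff touching auth middleware: worth a run.
print(should_ultrareview(["src/auth/middleware.py"], 40))  # True
```

Feeding it the file list and line count from `git diff --numstat` against your base branch would make this a one-line check before you type the command.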
What I'd do with the 3 free runs before May 5
Save them for the PRs that actually touch production. Payments. Auth. Data migrations. Anything that, if buggy, has a recovery cost meaningfully above the cost of an /ultrareview run.
Don't waste them on demos, refactors, or small features. The free runs are a learning opportunity for what /ultrareview is good at. The right way to learn that is to point it at the highest-stakes thing in your queue and see whether it surfaces something local review missed.
If it surfaces nothing on three consecutive high-stakes PRs, you have your answer for whether to pay $5–$20 going forward. If it surfaces one real bug that would have shipped, you have a different answer.
The meta-angle nobody is naming
This is the first consumer-facing agentic service where the billing model matches the actual cost. Spinning up four parallel sandboxes isn't free — Anthropic pays real money for compute every time you run /ultrareview. The unlimited-everything Max pricing era is quietly ending.
Expect more agentic features to ship with usage-based pricing on top of Max. Sub-agent fleets, deep-research runs, long-context analysis — all of these have real per-run compute costs that don't fit inside a flat monthly subscription. The economics of frontier-model agentic workflows don't sustain Max-style pricing for power users.
This isn't a complaint. It's the right model. The Max plan stays affordable for the modal user. Power users pay incremental costs for the workflows that genuinely cost Anthropic real money. The alternative is everyone paying a higher Max price to subsidize the heaviest 5% of users, which is the wrong tradeoff.
But it does change the mental model. In 2024 you picked a subscription tier and stopped thinking about per-token costs. In 2026 you pick a subscription tier and a usage-based budget for the agentic features that bill on top of it. Add it to your monthly stack budget review.
The honest answer
After May 5, I'm keeping /ultrareview and self-disciplining the trigger. Two free runs surfaced one real bug. At that hit rate, the math works out to "pay for /ultrareview whenever I'd otherwise be stressed about a deploy."
If you're a solo dev shipping fewer than 5 PRs a week, all small features, skip it. Local /review is fine for that workload, and saving $50–$80/month is real money. If you ship 8+ meaningful PRs a week and any of them touch payments, auth, or data migrations, pay for /ultrareview on those specific PRs.
You have until May 5 to decide. Use your three free runs deliberately.