A Roblox Auto-Farm Script Took Down Vercel — Your AI Tool Supply Chain Is the New Attack Surface
Vercel confirmed a breach on April 19. I've read the public disclosure three times now because the attack chain is the kind of thing you'd dismiss as a contrived plot point if it showed up in a thriller.
Here is the actual chain:
A Context.ai employee, in February, downloaded a Roblox auto-farm script. The script was laced with Lumma infostealer. Lumma scraped browser sessions, cookies, and saved credentials off the employee's machine. Among what it found: the login for Context.ai's own support@ address.
A Vercel employee — different company, different product, different timezone — had at some point signed up for Context.ai using "Sign in with Google" on their Vercel enterprise account. The standard OAuth flow. They clicked "Allow all" because that's what the button says and the product needed it to demo. That grant sat there for months.
With the Context.ai support@ inbox compromised, the attackers could reset the OAuth tokens for every Google-authenticated user of Context.ai. Which gave them, for that specific Vercel employee, effectively a token with full Google Workspace scope on Vercel's domain. From there, lateral movement into Vercel's internal infra. From there, customer keys.
By April 19, ShinyHunters was selling Supabase tokens, Datadog keys, Authkit credentials, and source code snapshots on BreachForums. The customers whose data leaked did not sign up for Context.ai. They signed up for Vercel. They had no business relationship with Context.ai at all.
This is the AI-tool-supply-chain story we've been expecting for a year. I want to walk through why, and then I want to give you the 30-minute audit you should run before the weekend.
Why This Is Different From Every Other Breach
"Company gets breached via third-party vendor" is not a new genre. Target got breached through an HVAC contractor in 2013. The pattern is old.
What's new is the density of the vendor relationships. A solo operator in 2026 typically has 5 to 15 "helpful AI tools" wired into their primary identity provider. Each one is a lateral-movement path into your main stack. Think about your own list: an AI note-taker on your calendar. An AI email assistant. A meeting transcriber. A code-review bot. An AI-powered analytics overlay. A research agent. A Slack summarizer. A customer-support copilot. Each one asked for "Sign in with Google" or "Sign in with GitHub" and then asked for broad OAuth scope because, genuinely, the product works better with that scope.
And each one lives at a small company with a small security team and a support inbox that can be phished. The attack surface isn't your identity provider. The attack surface is the graph of every vendor you ever granted scope to, each of their support inboxes, and each of their employees' home machines.
Context.ai is not an outlier. Context.ai is the first one to show up in the news. There will be more.
The OAuth Scope Problem in Plain English
When you click "Sign in with Google" on a new AI tool and the consent screen says "View and manage your email, calendar, contacts, and drive," what you are actually granting is a token that the vendor keeps on its servers. That token has the scope on the consent screen. If the vendor's servers are compromised, the attacker has that scope on your Google account.
This is functionally identical to giving the vendor a copy of your Google password, with one important caveat: you can revoke the token. If you know it's there. If you remember signing up for that tool. If you check.
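Most people will revoke from the permissions page by hand, but the same revocation exists as a documented Google endpoint, which is worth knowing if you ever need to kill a token from a script. Here's a minimal stdlib-only sketch; the token value is a placeholder.

```python
# Sketch: revoking a Google OAuth access/refresh token via Google's
# documented revoke endpoint. The token string is a placeholder.
from urllib.parse import urlencode
from urllib.request import Request

REVOKE_ENDPOINT = "https://oauth2.googleapis.com/revoke"

def build_revoke_request(token: str) -> Request:
    """Build the POST request that invalidates a token server-side."""
    body = urlencode({"token": token}).encode()
    return Request(
        REVOKE_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

# urlopen(build_revoke_request("ya29.a0..."))  # HTTP 200 means the grant is dead
```

The useful property: revocation happens on Google's side, so it works even if the vendor's servers are the thing you no longer trust.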
Most AI tool onboarding flows push you toward "Allow all" because the restricted scope doesn't demo as well. A calendar assistant that only has read-only access to one calendar looks worse in a demo than one that has full read-write on your entire Workspace. Product teams notice this. The default scope on most "magic AI tool" signups is wildly more permissive than the tool actually needs.
The Context.ai flow specifically requested full Workspace admin scope. The Vercel employee granted it. Months later, that grant was the hinge.
The 30-Minute Audit
Put this on your calendar for Saturday morning. It takes less time than the grocery run, and it will save you a very bad Monday if one of your vendors gets popped.
Open Google OAuth grants. Go to myaccount.google.com/permissions. You will see every app that has ever asked for access to your Google account. Mine had 47 entries. Most people's first reaction is "wait, what is any of this." That is the correct reaction.
Go down the list. For each entry, ask yourself two questions: do I currently use this product, and does it need the scope it has. For anything you don't recognize, revoke immediately. For anything you do recognize but haven't used in 30+ days, revoke. For anything you actively use, check the granted scope. If it's full Workspace and the product only needs read access to one calendar, revoke and re-authenticate with the minimum scope.
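The triage rules above are mechanical enough to write down as code. This is a sketch, not a tool: the grant records are a hypothetical shape you'd fill in by hand from the permissions page, and the scope strings are real Google scope URIs used as examples of "full" access.

```python
# Sketch of the triage rules as code. Grant records are a hypothetical
# shape; you'd populate them by hand from myaccount.google.com/permissions.
from datetime import date, timedelta

# Example "wildly over-permissive" scopes (real Google scope URIs).
FULL_SCOPES = {
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/drive",
}

def triage(grant: dict, today: date) -> str:
    """Return 'revoke', 'rescope', or 'keep' for one OAuth grant."""
    if not grant["recognized"]:
        return "revoke"                       # don't recognize it: gone
    if today - grant["last_used"] > timedelta(days=30):
        return "revoke"                       # recognized but stale: gone
    if grant["scopes"] & FULL_SCOPES and not grant["needs_full_scope"]:
        return "rescope"                      # active but over-scoped: re-auth minimal
    return "keep"
```

If you find yourself arguing with the `needs_full_scope` flag for a given tool, that's usually the answer.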
Open GitHub OAuth apps and installed apps. Go to github.com/settings/applications. Same exercise. Solo developers tend to accumulate GitHub grants faster than Google grants because every "AI code tool" launches in the same month. Anything you don't actively use, revoke. Anything that has write access to all repos but only reads one, revoke and re-grant with per-repo scope.
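One wrinkle on the GitHub side: classic OAuth-app grants have no user-facing list endpoint in the REST API, so those you review in the settings UI. GitHub App installations, though, are listable with `GET /user/installations`, which makes it easy to eyeball permissions in bulk. A stdlib-only sketch; the token is a placeholder personal access token.

```python
# Sketch: list GitHub App installations visible to your token via
# GET /user/installations. GITHUB_TOKEN is a placeholder.
from urllib.request import Request

def build_installations_request(token: str) -> Request:
    """Build the GET request for app installations this token can see."""
    return Request(
        "https://api.github.com/user/installations",
        headers={
            "Accept": "application/vnd.github+json",
            "Authorization": f"Bearer {token}",
        },
    )

# import json; from urllib.request import urlopen
# data = json.load(urlopen(build_installations_request(GITHUB_TOKEN)))
# for inst in data["installations"]:
#     print(inst["app_slug"], inst["permissions"])  # flag anything with "write"
```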
Open Slack apps. Workspace admin → "Apps." Same exercise. Remove anything stale. For anything that still runs, check whether the bot has posting permissions in channels it shouldn't be in.
Open Notion integrations. Settings → "Connections" and "Integrations." Same exercise. A lot of solo devs have left Notion integrations wired to tools they quit using a year ago.
Open your password manager's OAuth list. 1Password, Bitwarden, and Dashlane all have an audit view for external authentication. Same drill.
When you're done, do the same audit for your payment-method-on-file list. Stripe customer portal. Paddle. Lemon Squeezy. Anywhere you have a credit card saved. If a vendor you no longer use still has your card and your email, they still have a foothold — not for RCE, but for "here's your renewal, it auto-charged" when you're not paying attention.
The Uncomfortable Part
The customers hit by the Vercel breach did not do anything wrong. They signed up for Vercel. Vercel is a reputable company. They got burned by a vendor-of-a-vendor they'd never heard of.
Your vendors' vendors are your threat model now, and your only real defense is minimizing the OAuth blast radius at the top of the chain. You cannot audit every support inbox at every SaaS startup in your graph. You can audit your own grants.
The other thing worth naming: Context.ai isn't going to be meaningfully punished for this. The discovery was public, the disclosure was fine, and most of the blast landed on their customers' customers. The economic incentives for AI-tool startups to take supply-chain security seriously are weak. The economic incentives for a single employee to click on a Roblox cheat link are stronger than you'd guess. We're going to keep seeing this.
My Post-Audit Numbers
I ran this exact audit on Sunday. Before: 47 Google OAuth grants, 23 GitHub app authorizations, 14 Slack apps, 11 Notion connections. After: 12, 8, 6, 4.
The three AI tools I rage-quit during the audit:
An "AI meeting summarizer" that I hadn't used in four months but still had full Google Calendar and Drive access. Revoked, and deleted the account.
A "code-review AI" that asked for read-write on every repo in my GitHub org during signup and never did anything I couldn't have done with a CodeRabbit workflow. Revoked, uninstalled.
An "email triage" bot that had been running quietly and, I discovered during audit, had access to send email on my behalf. I cannot tell you why I granted that, and that is the point — onboarding flows walk you past it.
The five I kept are all tools where I can name the specific thing they do every day, and the scope they have matches that thing.
What This Changes Going Forward
Two rules I'm using from now on.
Minimum viable scope, always. If an AI tool's signup flow doesn't offer a restricted-scope option, that's a signal, not a papercut. The product either has not thought about this or has decided the demo is more important. Either way, I'm skipping it and picking a competitor who has.
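For concreteness, here's what minimum viable scope looks like from the vendor's side of a Google OAuth flow: the consent URL spells out exactly which scopes get requested, so a tool that wants only read access to your calendar can ask for only that. The client ID and redirect URI below are placeholders; the scope string and endpoint are Google's real ones.

```python
# Sketch: a Google OAuth consent URL requesting exactly one read-only
# scope. CLIENT_ID / REDIRECT_URI are placeholders.
from urllib.parse import urlencode

AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"
READONLY_CALENDAR = "https://www.googleapis.com/auth/calendar.readonly"

def consent_url(client_id: str, redirect_uri: str, scopes: list[str]) -> str:
    """Build a consent URL that requests exactly the listed scopes."""
    return AUTH_ENDPOINT + "?" + urlencode({
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "scope": " ".join(scopes),   # space-delimited, per the OAuth 2.0 spec
        "access_type": "offline",
    })
```

When a signup flow only ever presents the everything-scope version of this URL, that's the signal the rule is about.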
Quarterly audit on the calendar. Same 30-minute routine, every three months. It's cheaper than the alternative.
The Vercel breach isn't the last one of these we'll read about this year. It's probably not even the last one this quarter. But it is the one where the attack chain is clean enough — Roblox cheat, infostealer, support inbox, OAuth token, customer infra — that you can internalize the lesson in a single read. The lesson is: your AI productivity stack is a supply chain, you never onboarded it with supply-chain rigor, and you can fix about 80% of that gap in half a Saturday.
Do the audit. It's boring. That's fine.