
KKR Just Stood Up a $10B Neocloud Run by the Ex-AWS CEO. The Indie Inference Floor Just Moved.

KKR confirmed this week that it's launching Helix, a fresh $10B AI infrastructure company, with former AWS CEO Adam Selipsky at the helm. The plan is to compete directly with AWS, Azure, and Google Cloud on AI training and inference capacity. The category has been called "neoclouds" — purpose-built AI compute providers like CoreWeave, Lambda, Crusoe — and Helix is the largest single check ever written into the category.

Most coverage of this is enterprise infrastructure trade press written for hyperscaler buyers; none of it is for solo operators. That's fair as far as it goes, since I'm never going to be a Helix customer. But the announcement does move the floor on what hosted inference costs in 2027 in a way that affects indie devs, and what that floor move means is worth writing down.

What Helix actually is

Strip the "neocloud" framing and Helix is three things stacked together. First, KKR's $10B commitment, which buys roughly 50,000 H200-class GPUs at current per-unit pricing or a comparable mix of newer Blackwell hardware. Second, a CEO with credibility — Selipsky ran AWS through the AI buildout and has the operator chops to actually deploy and run that capacity. Third, a thesis: there's enough AI compute demand that a focused operator can win share against the hyperscalers without owning the application layer.

The thesis is the interesting part. The hyperscalers (AWS, Azure, GCP) bundle compute with proprietary services, identity, networking, storage, and a sales motion built around enterprise consolidation. Helix's bet is that the AI workload — train a model or run inference — has gotten big enough that customers will buy raw compute from a focused provider rather than pay the hyperscaler bundle tax.

If that bet is right, Helix wins share by being cheaper per unit of compute. If it's wrong, Helix is a $10B mistake. The part that matters here is that the bet doesn't have to be right at scale to be relevant for indie devs.

Why this matters even though I'll never be a customer

Helix is selling to AI labs, overflow demand from hyperscaler customers, and large enterprise inference workloads. The minimum check size will be in the tens of thousands of dollars per month. I am not the customer. You are not the customer. Neither is any sub-$10K MRR shop.

The downstream effect that does reach indie operators is on hosted inference pricing. Anthropic, OpenAI, Google, and the smaller hosted-model providers all buy compute from somewhere. Some of them buy from the hyperscalers. Some build their own. All of them are price-sensitive on compute because compute is their largest variable cost.

When a $10B operator with Selipsky-level credibility shows up offering compute at a discount to AWS list, it puts price pressure on every existing hyperscaler. AWS doesn't drop list pricing immediately, but it becomes more willing to negotiate on big deals. Anthropic's compute negotiation gets a little easier. OpenAI's gets a little easier. Those savings eventually flow through to API pricing, with a lag of roughly six to eighteen months.

That's the floor move. The indie inference bill in 2027 has a slightly lower floor than it would have in a world without Helix. The magnitude is hard to quantify; my honest read is that it's a few percent on hosted inference, not a step change. But it's real, and the direction is unambiguous.
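To put a hypothetical number on "a few percent," here's the back-of-envelope comparison I ran. The $400/month bill and the 3% floor move are illustrative guesses of mine, not measurements from any provider:

```python
# Back-of-envelope: a "few percent" floor move vs. the existing trend.
# All inputs are illustrative guesses, not measured figures.

monthly_bill = 400.00  # hypothetical indie inference spend, $/month
floor_move = 0.03      # guessed Helix-driven floor move (~3%)
trend_drop = 0.30      # rough existing annual decline in unit prices

helix_savings = monthly_bill * floor_move
trend_savings = monthly_bill * trend_drop  # after one year on trend

print(f"Helix floor move:  ~${helix_savings:.0f}/month")
print(f"One year of trend: ~${trend_savings:.0f}/month")
# ~$12/month vs. ~$120/month: real, but an order of magnitude smaller
# than the decline you were already getting.
```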

What I'm doing differently because of this

Almost nothing. The action items are subtle.

I'm extending the "stay month-to-month" inference posture I've been running for the last quarter. The 2027 price-compression bet was already strong before Helix; this announcement reinforces it. If you've been considering a multi-year inference commitment from one of the labs to lock in current pricing, the marginal case for that commitment got weaker today. Compute supply is going to be more abundant in 2027 than the spot market currently prices in.

I'm also adjusting how I think about inference-cost forecasting in product planning. The crude model I had been using was "assume hosted inference unit prices drop 30% per year." That's roughly the trend over the last 24 months, but it's a noisy curve. The Helix announcement, plus the SoftBank Roze project on the AI-build-out side, plus the steady drumbeat of Anthropic and OpenAI price cuts, suggests the trend continues at roughly that rate through 2027. If your product economics depend on inference cost coming down, you can plan against that with more confidence than you could three months ago.
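For concreteness, the crude model is just compound decay. A minimal sketch, assuming the 30% annual decline holds and using a made-up $3.00-per-million-token baseline (both inputs are assumptions, not quotes from any provider):

```python
# Minimal sketch of the crude forecasting model: constant fractional
# decline in hosted inference unit prices. Inputs are assumptions.

def projected_unit_price(price_today: float, years_out: float,
                         annual_decline: float = 0.30) -> float:
    """Unit price after `years_out` years at a constant annual decline."""
    return price_today * (1.0 - annual_decline) ** years_out

price_today = 3.00  # hypothetical $/M output tokens today
for years in (0.5, 1.0, 2.0):
    projected = projected_unit_price(price_today, years)
    print(f"{years:>3} yr out: ${projected:.2f}/M tokens")
# 0.5 yr: ~$2.51, 1 yr: $2.10, 2 yr: $1.47
```

The point of keeping it this crude is that the decline rate is the only input that matters; the baseline price washes out at planning horizons of a year or two.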

I'm not making any infrastructure changes. I'm not switching providers. I'm not moving any workloads. The right response to this kind of announcement at indie scale is almost always "update your priors, do nothing operational." Operational changes cost real time. Updating priors is free.

The honest read on the "neocloud" category as a whole

The category has been hot in trade press for two years and the indie commentary has been mixed. Some indie operators have been excited about cheaper compute trickling down. Some have been skeptical that the category survives the next downturn. Helix is interesting partly because it's a forcing function on that question.

If a $10B, Selipsky-led neocloud succeeds at meaningfully eating share from AWS and Azure, the category is real. If it stalls — and the prior generation of neocloud raises and IPO performance has been mixed — the category is a niche serving the AI labs and not much else. Either way, the indirect effect on indie operators is positive: more compute supply in absolute terms, slight downward pressure on hosted inference pricing, and no operational implications for solo dev work.

The other read I keep coming back to: the people writing the largest checks into AI infrastructure right now are private equity, sovereign wealth funds, and the hyperscalers themselves. Helix is the most visible of the PE entries this year. SoftBank's Roze project is the most visible sovereign-wealth-adjacent entry. Microsoft's and Meta's capex commitments are the most visible hyperscaler entries. The capital is real. The capacity will exist. The open question is the demand side — will AI workload growth keep pace with capacity growth — and the honest answer for indie operators is that we don't know, but our exposure is asymmetric in the direction we want. If demand keeps pace, our workloads are unaffected. If supply outpaces demand, we get cheaper inference. Either way, we win.
