The distribution problem

OpenAI is guaranteeing private-equity investors a 17.5 percent annual return to take its models into their portfolio companies — a number higher than most buyout funds actually earn


OPENAI HAS PLEDGED $1.5bn to a joint venture in which its private-equity partners are guaranteed a 17.5% annual return. Not a target. A floor. The number is striking on its own terms — the median US buyout fund has delivered somewhere between 13% and 16% net over two decades, putting OpenAI's floor in top-quartile territory — but the more unusual thing is what the payment is for. The investors are not funding frontier research. They are not buying preferred equity in OpenAI itself. They are being paid, at near-top-quartile fund returns, to distribute OpenAI's software to companies they already own.

The structure, internally codenamed DeployCo and reported by the Financial Times, seeds $500m of OpenAI equity into a $10bn LLC, with TPG, Bain, Advent, Brookfield and Goanna putting in another $4bn. Brad Lightcap, until recently OpenAI's chief operating officer, will run it; OpenAI holds super-voting shares. DeployCo's clients will be the partners' portfolio companies — the mid-market industrials, healthcare-services businesses and consumer brands that these buyout shops carry in their inventory. In parallel, Google Cloud has unveiled a $750m fund to help McKinsey, Accenture and Deloitte train engineers and co-fund client projects. Thoma Bravo struck its own Google Cloud partnership earlier this month. Anthropic, per the FT, is in talks with Blackstone and Hellman & Friedman on a similar joint venture. Three different labs, three different structures, the same direction of capital flow — away from the customer, toward the distributor.

Paying to be sold

For forty years, enterprise software distribution ran one way. The vendor built the product, the channel found the customer, the margin flowed up. Oracle, Salesforce and Snowflake extracted gross margins north of 80% precisely because their products, once built, cost almost nothing to replicate. The systems integrators and consultancies that actually installed the software — the people who spent months inside a client's IT department wiring everything together — earned their cut from the customer, not from the vendor. The vendor was the sun; everyone else orbited.

That model depended on one assumption that AI has quietly broken: that the product, once sold, would deploy itself. It does not. The evidence on this point has been accumulating for two years, and by now it is nearly uniform in its verdict. Enterprise after enterprise launched AI pilots in 2022 and 2023, watched them stall in a proof-of-concept sandbox, and cancelled them when the budget cycle came around. The RAND Corporation puts the overall AI project failure rate above 80%. Boston Consulting Group's most recent global survey found only 5% of companies achieving substantial AI value at scale. S&P Global found that nearly half of all proofs-of-concept were scrapped before production — and that the abandonment rate had nearly tripled in a single year. Gartner projects that more than 40% of agentic AI projects will be cancelled by 2027.

What is killing these projects is not the models. The models work. What is killing them is the last mile: the organizational change management, the data infrastructure that is messier than anyone admitted in the sales meeting, the business-process redesign that turns out to require years rather than months. BCG's research suggests that only 10% of AI success comes from the underlying algorithm. Seventy percent comes from people and process — the part the vendor cannot ship.

The MIT NANDA Initiative's 2025 study of the GenAI divide isolated the single variable that separates the projects that survive from the ones that don't. When enterprises deploy AI using specialized external vendors, they succeed roughly two-thirds of the time. When they build internally with their own IT teams, the success rate falls to about one in five. That three-to-one gap is the mathematical foundation of DeployCo. A lab whose unit economics depend on long-horizon token consumption cannot afford a 78% churn rate before meaningful revenue accretes. It is cheaper to guarantee private equity a 17.5% floor than it is to watch four out of five enterprise deployments die on the operating table.

OpenAI's leadership has been saying as much, though in more diplomatic language. In a note to sales staff this month, chief revenue officer Denise Dresser called deployment — not technology — the biggest bottleneck. Sam Altman has spoken publicly about the "capability overhang," the idea that today's models can do far more than customers are currently using them for. It is a bullish framing, and there is a version of it that is genuinely bullish: if you believe the overhang closes, the runway is enormous. But the structure of DeployCo tells you something the rhetoric doesn't. A vendor confident its product will sell itself does not guarantee 17.5% floor returns to lock in five years of distribution. The signal sits in what the deal has to pay.

Dario Amodei has made much the same point, in more measured language, about the distance between what frontier models can do in a demo and what they actually produce inside a customer's workflow. Anthropic's annualized revenue has trebled in four months, driven largely by Claude Code and enterprise contracts — proof that closing that distance produces enormous returns. It is also, by implication, a measure of how much distance remains.

The captive continent

The private-equity surface is attractive to the labs for a reason that has nothing to do with finance. It is the largest captive distribution network in the world, and nobody built it for that purpose.

US buyout funds now hold roughly 11,800 portfolio companies. Globally, the figure exceeds 16,000 for companies held four years or longer. KKR, Bain, Advent, Brookfield — each of these firms runs hundreds of businesses across dozens of sectors, and every one of those businesses needs the same thing: someone embedded inside its operations who can make an AI tool actually run. OpenAI could hire its own army of enterprise salespeople — ServiceNow manages 20,000 customer-facing staff — but doing so would take years, cost billions, and produce revenue that flowed to the sales organization rather than to the lab. A single signed GP mandate, by contrast, reaches every portfolio company in the fund. It is enterprise sales without the enterprise sales force.

The GPs, meanwhile, have their own reasons to say yes, and they are not primarily about AI. Private equity is in a liquidity bind that the sector rarely discusses openly. More than half of global buyout-backed companies have now been held for four years or longer — the highest share on record, and roughly ten percentage points above the recent historical norm. The 2020 and 2021 vintages, funded at peak valuations during the zero-rate era, have returned far less cash than LPs were promised when they committed. The dry powder sitting uncalled in private equity accounts is at a record high, but the distributions that would allow LPs to feel good about reinvesting have been sluggish.

What sharpens that pressure is a paradox: the largest institutional investors have been raising their private equity allocations at exactly the moment distributions have dried up. Major sovereign wealth funds and public pension systems have moved their PE targets up by several percentage points in the past two years. More capital is flowing in; less is flowing back out. The LPs are not fleeing — they are doubling down — but their patience for unrealized marks and management fees with no cash attached is running out. Value creation has become, in the most recent LP surveys, the primary criterion by which managers are being judged. The GP who can walk into an LP committee and show that artificial intelligence has materially improved the businesses in the portfolio — shortened their holding periods, expanded their margins, made them more saleable — has a story that every other GP is scrambling to tell.

A 17.5% floor from OpenAI, stacked on whatever the underlying businesses produce, sweetens that story considerably. Some firms have passed, citing concerns about the long-term profit profile and the flexibility of the arrangement. The ones leaning in are doing so because the instrument works equally well as a financial product and as a narrative. In both readings, it buys the GP something to say at the next LP meeting.

The Palantir pattern

There is a template for all of this, though it is a decade old and comes from an unlikely source.

Palantir built its business on a counterintuitive premise: that software, however powerful, cannot deploy itself into the messy reality of a large organization. So instead of selling licenses and leaving, Palantir sent engineers. Forward-deployed engineers — FDEs in the industry shorthand — who would embed themselves inside a customer's operations, write production code in the client's environment, and essentially force the software to work by sitting next to the people it was supposed to help. The model was expensive and slow and it made Palantir look, from a distance, more like a consultancy than a software company. It also worked. The company reported $4.48bn in fiscal 2025 revenue, growing at 56%, with gross margins that would make any traditional SaaS firm envious.

The catch is that the FDE model has a ceiling, and Palantir knows it. Engineers who can do what a forward-deployed engineer does are scarce and expensive. Revenue grows only as fast as the company can train and place them, and the analyst community has been circling this constraint for years — the concern is that FDE-dependent growth produces consulting-like economics that limit true software scalability. Palantir's chief technology officer has argued that the company's own AI platform is beginning to automate portions of what FDEs do — that the software is starting to deploy itself — but that aspiration and the current reality remain some distance apart.

What the rest of the industry has concluded is that it cannot wait for that gap to close. Monthly job listings for forward-deployed engineers across the technology sector rose roughly 800% between January and September 2025, and finished the year up more than 1,100%. Every lab is building the same motion, because every lab has realized the same thing: the frontier of AI revenue is not the model, it is the person who makes the model actually run inside someone's business.

OpenAI, under Lightcap, has already been hiring FDEs of its own. What DeployCo and Google's $750m fund represent is the admission that hiring one's own FDEs at scale is not fast enough. McKinsey has a senior partner assigned to every major corporate executive in the country. Accenture, which generated nearly $70bn in revenue last year on operating margins above 15%, has been sitting inside corporate IT departments for decades — in many cases, longer than the CIO who is now making AI decisions has been in the job. The labs cannot replicate that access. They can only rent it, and renting it requires paying for it.

There is a modern precedent for the consultancy half of this motion. Between 2015 and 2021, the same firms built billion-dollar practices around Salesforce, SAP, Oracle and Microsoft implementations, billing them as "digital transformation" and charging the enterprise for the privilege. The mechanics are roughly the same today, with one edit: Google is writing checks to the consultancy rather than the consultancy booking fees from the client. McKinsey's Philipp Nattermann described his firm's new Google Cloud unit as being built to help clients "reap the economic benefits" of AI transformation — a framing that, deliberately or not, locates the value entirely at the deployment layer rather than at the model layer. Accenture's chief strategy officer put it more plainly: AI is easy to try, hard to scale. His firm specializes in scale. What it did not previously have was the cloud vendor paying it to use that specialization.

An old pattern in a new market

The inversion feels novel in software because software has never seen it before. In other industries, it is almost a cliché.

When Apple launched the iPhone 3G in 2008, it restructured its relationship with AT&T from revenue-sharing to outright subsidy. AT&T absorbed roughly $400 per device to sell a $600 phone for $200, and spent the next two years clawing that subsidy back through mandatory data plan increases and unbundled text charges. Apple accepted the dilution because what it was buying — ubiquity, the iPhone in every pocket — was worth more than the per-unit margin it was giving up. The channel paid the friction; the vendor accepted the terms because the asset compounding on the back end was the platform itself.

The pharmaceutical parallel is more exact. Drug manufacturers have spent the past two decades paying pharmacy benefit managers — the powerful intermediaries who control which drugs a health plan will cover — rebates that have grown from a few percent of list price in the early 2010s to well above 50% for some products today. The manufacturer does not sell to the patient or even to the health plan; it pays the PBM to stock it, and absorbs the rebate as the cost of access to a market the PBM controls entirely. The top three PBMs now process nearly 80% of all US prescription drug claims. That concentration is what gives them pricing power, and that pricing power is what forces manufacturers to pay it. In the AI ecosystem, the consultancies and the PE operating teams are the PBMs. They control the enterprise formulary — the list of technology transformations a risk-averse corporate board will authorize — and the labs are paying for placement. Not because they are weak, but because the intermediary owns the relationship, and the relationship is the only thing that cannot be built faster than the market will wait.

Patient capital

One element of the DeployCo structure that has received less attention than it deserves is its duration. Five years, with OpenAI guaranteeing the return across the full term.

For a company spending tens of billions a year on compute, five years of committed capital at a deterministic rate is a planning instrument. It converts a slice of the enterprise revenue stream from a variable — dependent on deployment success, customer retention, token consumption — into something closer to a contract, which makes it easier to allocate compute between enterprise workloads and frontier research. Altman has spent the past eighteen months building exactly this kind of long-duration capital structure: the Stargate infrastructure build with Oracle and SoftBank, the multi-year arrangements with Microsoft and Nvidia. Five years of PE capital at a contracted floor sits naturally alongside those bets.

The PE side gets the mirror image. Capital at 17.5% over five years with OpenAI as counterparty is, in a market where most funds are struggling to show cash distributions, a serious yield instrument — economically similar to preferred equity with a scheduled redemption profile, underwritten by one of the few private companies whose revenue is growing fast enough to make the floor plausible. It is a structured product more than it is a venture investment, which is why several buyout shops passed: the same terms that provide the downside protection also limit the upside if the underlying business clears the floor by a significant margin.
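The floor compounds quickly over a five-year term. A minimal sketch of the arithmetic, assuming annual compounding on the partners' reported $4bn contribution (the deal's actual payment schedule and fee mechanics are not public, so the convention here is an assumption):

```python
# Sketch of what a 17.5% annual floor implies over five years.
# Only the $4bn partner contribution, the 17.5% rate and the
# five-year duration come from the reporting; annual compounding
# is an assumed convention.

def floor_terminal_value(principal: float, rate: float, years: int) -> float:
    """Terminal value if the guaranteed floor compounds annually."""
    return principal * (1 + rate) ** years

commitment = 4_000_000_000  # reported partner contribution, in dollars
terminal = floor_terminal_value(commitment, 0.175, 5)
moic = terminal / commitment  # multiple on invested capital

print(f"terminal value:   ${terminal / 1e9:.2f}bn")  # ~ $8.96bn
print(f"implied multiple: {moic:.2f}x")              # ~ 2.24x
```

Under that assumption, OpenAI is underwriting roughly a 2.2x multiple on the partners' capital over the term — which is part of why the structure reads more like a debt instrument with a scheduled payoff than like venture equity.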

Read together, the three deals describe a market that has quietly settled into a new shape. The first phase of the enterprise AI boom belonged to whichever lab had the best model. That race is not over, but it has become less decisive: the frontier systems now trade benchmark wins on a monthly rotation, and most serious enterprise customers buy from more than one lab simultaneously. What sells the contract is no longer the model. It is the person embedded inside the client's profit-and-loss account who can make the model produce results that show up in the quarterly report.

The labs have concluded that the cheapest way to get that person into that seat — faster than building a sales force, faster than training their own FDE fleet — is to pay private equity and the Big Four to bring them. Whether the 17.5% floor ever pays out is the variable that determines whether OpenAI has invented something new or simply discovered the price of a constraint it cannot solve by shipping a better model. Either way, a software vendor has now guaranteed its distribution partners a return above what most of them earn on their own terms, in exchange for access to a pipeline of captive deployments. Apple did it with handsets. Pharmaceutical manufacturers do it with formularies. The labs are doing it with enterprise AI. The pattern is old. The market is new. The direction of payment has reversed.
