THE STRATEGY lasted approximately twelve months. In April 2025, Mustafa Suleyman — the DeepMind co-founder whom Microsoft had hired to lead its AI ambitions — laid out a doctrine of deliberate restraint. Microsoft would let the frontier labs sprint ahead, then build cheaper, more efficient models a few months behind. He called it "off-frontier," and he made it sound like a virtue. "It's cheaper to give a specific answer once you've waited for the first three or six months for the frontier to go first," Suleyman told CNBC at the time. "We call that off-frontier. That's actually our strategy."

On Thursday, that strategy was quietly buried. Suleyman told Bloomberg that Microsoft now intends to build state-of-the-art models across text, images, and audio by 2027 — not efficient second-tier alternatives, but the absolute frontier. The company has begun training on a cluster of Nvidia GB200 chips deployed last October and plans to ramp to frontier-scale compute within 12 to 18 months. For a company that has spent over $13 billion bankrolling OpenAI, the declaration amounts to a confession: distributing someone else's intelligence is not enough.

The pivot did not happen in a vacuum. A March reorganization stripped Suleyman of oversight for Copilot — Microsoft's flagship AI product for consumers and enterprises — and handed it to Jacob Andreou, a former Snap executive. Microsoft framed the move as letting Suleyman focus on what he does best; the numbers suggest it was also an acknowledgment of what had gone wrong. According to Sensor Tower data cited by CNBC, Copilot's consumer app had just 6 million daily active users in February, compared with 440 million for ChatGPT and 82 million for Google's Gemini. Among paid AI subscribers tracked by Recon Analytics, Copilot's market share contracted from 18.8% in July 2025 to 11.5% by January — a 39% decline in six months, enough for Google's Gemini to leapfrog it into second place. Microsoft 365 Copilot has 15 million paying enterprise seats, but that represents just 3.3% of the platform's 450 million commercial subscribers.

Model behavior

Yet building frontier models is a different undertaking entirely from wrapping someone else's in a productivity suite, and the shift raises questions about whether Microsoft is solving the right problem. Copilot's struggles are not primarily about model quality — it already runs on OpenAI's GPT-4o and, as of the March Wave 3 update, incorporates Anthropic's Claude for verification. The issue is product execution: surveys from Recon Analytics found that when workers have access to both Copilot and ChatGPT, only 18% choose the Microsoft tool while 76% prefer ChatGPT. When Copilot is the only option on offer, adoption reaches 68%; when workers are free to choose, preference for it collapses to 8%. The gap between those numbers is not a model problem. It is a design problem, a distribution problem, and arguably a pricing problem at $30 per user per month on top of existing Microsoft 365 subscriptions.

The renegotiated OpenAI partnership, finalized in October 2025, provides the legal clearance. Under the old terms, Microsoft was effectively prohibited from building broadly capable models of its own — a clause that made strategic sense when the relationship was exclusive but became untenable as OpenAI partnered with Oracle on the $500 billion Stargate infrastructure project and expanded its cloud footprint beyond Azure. The new deal freed Microsoft to pursue AGI independently while extending its IP rights through 2032; in return, Microsoft surrendered its right of first refusal as OpenAI's compute provider. OpenAI committed to purchasing $250 billion in Azure services, but the partnership now looks less like a marriage and more like a commercial agreement between companies that will increasingly compete.

What Suleyman is really arguing — and what Satya Nadella evidently reinforced at an internal gathering this week — is that the model layer is where the value will accrue. "The model is the product," Suleyman told CNBC in March. The logic is seductive: if AI models become the core intellectual property of the technology stack, then renting that IP from a partner (even one in which you hold a 27% stake) creates supply-chain risk, pricing exposure, and competitive vulnerability. Every major hyperscaler has reached some version of the same conclusion. Google has Gemini; Amazon has invested in Anthropic while building its own Nova models; Meta has committed to open-source Llama. Microsoft was the conspicuous holdout, the one trillion-dollar company content to let someone else do the hardest part.

Whether Microsoft can actually catch up by 2027 is the $37.5-billion-a-quarter question — the sum the company spent on AI infrastructure in a single recent quarter. Frontier model development requires not just compute but research talent — the kind that gravitates to labs like OpenAI, Anthropic, and Google DeepMind, where the work defines the field rather than follows it. Suleyman brought roughly 70 researchers from Inflection AI when he joined, but that cohort has yet to produce a general-purpose model that competes at the top of public benchmarks. The speech transcription tool announced Thursday is capable but, by Suleyman's own description, specialized and trained on fewer data points than the models it aims one day to rival.

The most revealing detail in Suleyman's Bloomberg interview may be the timeline itself. Two years to reach the frontier is ambitious for a team that, until recently, was told not to try. Microsoft's AI self-sufficiency mission is a bet that the company's unmatched distribution — 450 million M365 users, Azure's enterprise dominance — can compensate for a late start in the lab. History suggests that in technology, distribution without differentiation is a wasting asset. Then again, Microsoft has been counted out of paradigm shifts before, and it tends to spend its way back in. At least now it knows the price of admission.
