OpenAI's leaked memo says new "Spud" model will make all its products "significantly better"

Key Points

- OpenAI's Chief Revenue Officer Denise Dresser lays out the company's strategic roadmap in an internal memo: a new model codenamed "Spud," an agent platform called "Frontier," and an expanded Amazon partnership.
- According to the memo, the market has moved beyond simple prompts toward autonomous agents: customers now demand systems that can independently use tools and operate reliably in real business environments, Dresser claims.
- She directly challenges competitor Anthropic, accusing the company of inflating its revenue figures by roughly $8 billion by booking revenue share payments to Amazon and Google on a gross rather than net basis.

An internal memo from OpenAI Chief Revenue Officer Denise Dresser outlines the company's Q2 strategic direction.

The document covers five core priorities for the enterprise business and takes unusually sharp aim at Anthropic. Enterprise AI is entering a "more mature phase," Dresser writes in a memo leaked by The Verge. Raw model performance isn't enough anymore: Customers want to know how well AI fits into their workflows, control systems, and daily operations.

According to Dresser, OpenAI sees capacity, not demand, as the biggest bottleneck, with multi-year deals in the nine-figure range on the rise.

"Spud" lays the groundwork for OpenAI's super app ambitions

The memo references a new model codenamed "Spud," which Dresser calls an "important step in the intelligence foundation for the next generation of work." Early customer feedback suggests the model delivers stronger reasoning, better understanding of intentions and dependencies, and more reliable production results, she writes. Spud will make all of OpenAI's core products "significantly better," Dresser claims, as part of an iterative deployment strategy: push boundaries, ship real products, learn from real-world use, and feed those insights into better systems on the path to the "super app." OpenAI's compute advantage already shows up for customers through higher token limits, lower latency, and more reliable execution of complex workflows, according to the memo.

"Frontier" signals OpenAI's shift from product to platform

The market has moved from prompts to agents, Dresser says.

Customers want systems that use tools on their own, operate across workflows, and function reliably in real business environments, which requires orchestration, control, security, and governance. To address this, OpenAI is building an agent platform called "Frontier," which the memo positions "as the default platform for enterprise agents." According to Dresser, better models make the platform more valuable, deeper integration raises switching costs, and every workflow running through the system makes OpenAI harder to rip out.

"That is how we move from product vendor to operating infrastructure," she writes.

Amazon deal gives OpenAI reach beyond Microsoft

The Microsoft partnership has been "foundational to our success," Dresser writes, but has limited OpenAI's ability to meet companies where they actually work. For many, that means Amazon's Bedrock platform. Since the partnership was announced in late February, demand has been "frankly staggering," the memo states.

Dresser describes a so-called "Amazon Stateful Runtime Environment" that goes beyond simple model access to enable memory, context, and continuity across interactions, letting systems work more reliably over time across complex business processes. She lists three advantages: lower adoption barriers for AWS-native customers, a stronger foothold in regulated industries, and deeper integration down to production runtime for multi-level agents.

OpenAI wants to own the full stack, including deployment

The memo describes OpenAI as a platform with multiple entry points: ChatGPT for Work for knowledge work, Codex for software development, the API for embedded intelligence, Frontier as an agent platform, and the Amazon runtime for production-ready execution. "We should stop thinking like a company with separate product lines," Dresser writes.

The goal is "a flywheel we should be building around: better models drive more usage, more usage drives deeper integration, deeper integration drives multi-product adoption, and multi-product adoption makes us harder to replace."

The biggest bottleneck in enterprise AI is whether companies can roll it out at scale, Dresser argues. To address that, OpenAI is building a service called "DeployCo" that will function as a deployment engine alongside so-called "Frontier Alliance" partners.

OpenAI takes direct aim at Anthropic over revenue claims and compute gaps

The sharpest section of the memo targets Anthropic.

Dresser accuses the competitor of building its narrative on "fear, restriction, and the idea that a small group of elites should control AI." OpenAI's "positive message" will win out over time, she argues, describing the landscape as "as competitive as I have ever seen it." According to Dresser, Anthropic's "strategic mistake" of not locking down enough compute is already showing in its products. Customers notice it through throttling, spotty availability, and a less reliable experience, she claims.

OpenAI, by contrast, recognized the exponential compute curve earlier and moved faster, she writes. Dresser acknowledges that Anthropic's early focus on coding tools gave it a head start, but argues that in a platform fight, that narrow focus could become a liability as AI spreads beyond developers to every team and industry. The most aggressive claim in the memo is financial: Dresser says Anthropic's stated run rate is inflated because the company grosses up revenue share payments to Amazon and Google, making its numbers look bigger than they are. OpenAI's own analysis puts the overstatement at around $8 billion relative to Anthropic's reported $30 billion run rate.

OpenAI reports its Microsoft revenue share on a net basis, "which is more inline [sic] with standards we would be held to as a public company," Dresser writes. None of these claims can be independently verified. Neither OpenAI nor Anthropic is publicly traded, so neither faces public reporting requirements. The Information has previously reported on accounting differences between the two companies.
