Scale vs. Efficiency
Why 2026 is the year AI has to deliver
2026 is the year AI has to prove it was worth the money. Two very different bets are being placed on how to survive what comes next.
For three years, the AI industry operated on a simple premise: spend now, profit later. Say “GPU clusters” or “data center expansion” and investors handed out premiums. Companies weren’t judged on returns. They were judged on how fast they could deploy capital.
That era is ending.
“2026 is the ‘show me the money’ year for AI,” Venky Ganesan, a partner at Menlo Ventures, told Axios. “Enterprises will need to see real ROI in their spend, and countries need to see meaningful increases in productivity growth to keep the AI spend and infrastructure going.”
The numbers behind this shift are stark. A July 2025 study from MIT’s Project NANDA found that 95% of enterprise generative AI pilots failed to deliver measurable ROI. Not because the models were flawed, but because organizations couldn’t integrate them into actual workflows. American enterprises spent an estimated $40 billion on AI systems in 2024. The vast majority saw zero bottom-line impact.
Meanwhile, 42% of companies abandoned most of their AI initiatives in 2025, up from 17% in 2024. The average organization scrapped nearly half of its AI proofs of concept before they ever reached production. Only 5% of custom enterprise AI tools made it from pilot to deployment.
As EY’s James Brundage put it: “Boards will stop counting tokens and pilots and start counting dollars.”
Against this backdrop, two of the leading AI labs are making dramatically different bets on what the market will reward next.
OpenAI is going all-in on scale. Sam Altman announced in November that the company has committed roughly $1.4 trillion in infrastructure deals over the next eight years. That figure averages $175 billion annually, roughly half of Alphabet's entire yearly revenue. The commitments include $300 billion with Oracle, $250 billion with Microsoft Azure, $100 billion with Nvidia, $90 billion with AMD, and $38 billion with AWS. The company aims to eventually build a gigawatt of new data center capacity per week.
OpenAI’s revenue has grown impressively, from roughly $3.7 billion in 2024 to a projected $20 billion run rate at the end of 2025, with internal targets of $30 billion for 2026. But the gap between what the company earns and what it has committed to spend remains enormous. The company is projecting cumulative losses of $115 billion through 2029, with positive free cash flow not expected until 2030.
Anthropic is betting on efficiency. In a recent CNBC interview, President and co-founder Daniela Amodei described what she called the company’s governing principle: “Do more with less.”
“Anthropic has always had a fraction of what our competitors have had in terms of compute and capital,” Amodei said, “and yet, pretty consistently, we’ve had the most powerful, most performant models for the majority of the past several years.”
Anthropic has raised substantial capital of its own: $27.3 billion in total, including a $13 billion Series F in September that valued the company at $183 billion. Its revenue has grown from roughly $1 billion at the start of 2025 to an expected $9 billion by year's end, with projections of $20-26 billion for 2026. But rather than locking up trillion-dollar infrastructure commitments, the company has focused on algorithmic improvements, post-training optimization, and flexible cloud arrangements that let it shift compute based on cost and demand.
The contrast is philosophical. OpenAI is wagering that the future belongs to whoever controls the most infrastructure, that raw scale will translate into capabilities competitors can’t match. Anthropic is wagering that efficiency gains and model optimization can keep pace with scale, and that the company burning less cash will be better positioned when the music stops.
Both strategies carry risk. OpenAI’s approach requires revenue to grow fast enough to service commitments that dwarf anything the tech industry has seen before. Anthropic’s approach requires continued algorithmic breakthroughs to remain competitive against labs with vastly more compute.
For enterprise buyers, the pressure is reshaping how money flows.
A recent TechCrunch survey of 24 enterprise-focused VCs found overwhelming consensus: companies will increase their AI budgets in 2026, but concentrate spending on fewer vendors. The era of “experimentation budgets for every department” is over.
“Today, enterprises are testing multiple tools for a single use case, and there’s an explosion of startups focused on certain buying centers,” said Andrew Ferguson, a vice president at Databricks Ventures. “As enterprises see real proof points from AI, they’ll cut out some of the experimentation budget, rationalize overlapping tools and deploy that savings into the AI technologies that have delivered.”
Rob Biederman of Asymmetric Capital Partners was blunter: “Budgets will increase for a narrow set of AI products that clearly deliver results and will decline sharply for everything else. We expect a bifurcation where a small number of vendors capture a disproportionate share of enterprise AI budgets while many others see revenue flatten or contract.”
This consolidation will hit startups hardest. According to Kyndryl’s 2025 Readiness Report, 61% of CEOs say they face more pressure to prove AI returns than a year ago. That pressure flows downhill. Companies that can demonstrate measurable value will thrive. Those still promising future returns may find the patience has run out.
None of this means AI is failing. The technology works, often remarkably well. The MIT study found that 5% of integrated pilots are extracting millions in value. Code generation, in particular, has become a genuine productivity multiplier. Anthropic’s Claude Code alone generates over $500 million in annual run rate revenue, with usage growing more than 10x in three months.
But there’s a meaningful difference between “AI works” and “our AI deployment works.” The gap is organizational, not technological. As the MIT researchers noted, generic tools don’t adapt to enterprise workflows unless given the right context, governance, and integration. Most GenAI systems don’t retain feedback, adapt to context, or improve over time. They repeat the same mistakes and require extensive context input for each session.
The companies that figured this out are the ones extracting value. Everyone else is reconsidering. What happens next depends on which theory of the case proves correct.
If OpenAI’s bet pays off, if demand for AI compute continues to explode and the company’s revenue catches up to its commitments, then scale will be vindicated. The labs that didn’t build aggressively will be left behind. The infrastructure buildout will look prescient rather than reckless.
If Anthropic’s bet pays off, if efficiency gains continue to outpace raw scale and the market starts rewarding sustainable unit economics over growth at all costs, then the trillion-dollar commitments will look like liabilities. The leaner operation will have room to maneuver.
The enterprise consolidation adds another variable. If buyers concentrate spending on a handful of winners, the companies that didn’t make the cut will struggle regardless of their technology. And if the 95% failure rate persists, patience for AI investments may erode even among the believers.
Ganesan predicts some of the aggressive spending could bankrupt major companies. He also predicts a major AI company will IPO and American GDP growth will rise by over 100 basis points. Both outcomes seem plausible. Both might happen.
What’s clear is that the mood has shifted. Three years ago, the question was: How fast can you build? Now the question is: What did you get for all that money?
2026 will provide the answer.