Meta Doubles Down
New layoffs highlight a repeated failure to compete.
Reuters reported Friday that senior Meta executives have instructed leaders across the company to begin planning for workforce reductions that could eliminate 20 percent of its employees. At the reported scale, roughly 16,000 of Meta’s 79,000 workers would lose their jobs. A spokesperson called the news “speculative reporting about theoretical approaches.”
This report arrived amid a wave of similar announcements across the technology industry. Block cut 4,000 employees in February, citing AI as the reason. Oracle is reportedly planning to eliminate as many as 30,000 positions to expand AI data-center capacity. Atlassian cut 1,600 last week, with its CEO declaring that AI has changed the skills the company needs. Whether these cuts reflect genuine AI-driven efficiency or “AI washing” is now an open question.
Also last week, The New York Times reported that Meta had delayed the release of its next-generation AI model, code-named Avocado, from March to at least May. Internal testing showed that it trails the latest systems from Google, OpenAI, and Anthropic. The timing of the layoff report and the model delay is unlikely to be a coincidence. Far from trimming an AI-replaced workforce, Meta appears to be reducing headcount in order to finance an AI strategy that has now repeatedly failed to produce competitive models.
A failure to compete
Avocado was not intended as an incremental upgrade. Meta designed the model to compete directly with frontier systems from Google, OpenAI, and Anthropic, and to serve as the centerpiece of the company’s next phase of AI development. Internal testing revealed its performance to be somewhere between Google’s Gemini 2.5 and 3.0 models, meaning that it was already trailing models that would have been publicly available for months by the time of its planned March release. Meta quickly pushed the launch back to at least May.
The delay might seem innocuous in isolation, but this is not the first time Meta has failed to deliver a front-running model release. Its previous flagship, the 2-trillion-parameter Llama 4 Behemoth, was shelved last year after engineers struggled to demonstrate sufficient improvement over its predecessors. Two smaller Llama 4 models, Maverick and Scout, were still shipped, but they drew a mixed reception, with Meta’s own AI lead publicly admitting to quality problems. Avocado’s delay makes it the third consecutive model effort to miss its competitive target.
Meta responded to those earlier failures with sweeping structural changes. The company abandoned its open-source approach for frontier models, consolidated its AI labs, and installed former Scale AI CEO Alexandr Wang to lead a newly created Superintelligence Labs division. Avocado was the first model produced under that reorganized structure. Its shortcomings cannot be attributed to the team or the strategy that it replaced.
Three consecutive misses across two leadership structures, a strategic pivot from open-source to proprietary development, and a complete reorganization of the company’s AI labs suggest that the problem is more serious than a simple question of management. No amount of reshuffling has yet corrected whatever disconnect exists between Meta’s resources and its ability to produce frontier models.
No margin for error
Meta has outlined $115–135 billion in capital expenditure for 2026. That target places the company alongside Google, Amazon, and Microsoft as one of the four largest investors in AI infrastructure globally. The scale of Meta’s commitment is not unusual for this tier of the industry, but its position differs from that of its three main rivals in one critical respect.
Google, Amazon, and Microsoft each operate cloud businesses that rent computing capacity to thousands of enterprise customers. The same data centers and chips that train their own models generate revenue independently through those services. A model delay or a missed benchmark is a setback for these companies, but the infrastructure that supported the failed attempt will continue to earn its keep regardless.
Meta has no equivalent outlet. The company does not operate a cloud business, and its AI infrastructure does not generate revenue on its own. Every dollar spent on data centers and custom chips must be recouped through improvements to advertising, consumer products, and the company’s nascent AI assistant business. Each of those revenue paths depends on the models themselves performing at a competitive level, which means that model failures carry a much greater financial weight for Meta than they do for its peers.
When infrastructure costs are fixed and the models that justify those costs are delayed, operational costs are the only remaining variable. Arete downgraded Meta’s stock in the wake of the Avocado report, noting that the company’s expenses are rising faster than its revenue and that it “lacks the deep pool” of third-party demand that its competitors enjoy. For a company the size of Meta, layoffs are the only way to deliver operational savings large enough to matter.
Strategic surrender on the horizon
The most revealing detail in the recent reporting may be one that received comparatively little attention. According to The New York Times, Meta’s AI leadership discussed the possibility of temporarily licensing Google’s Gemini to power Meta’s own products while Avocado is brought up to competitive standard. No decision has been confirmed, but the fact that such a conversation happened at all is significant.