AI Central

Much Ado About... What, Precisely?

Employment data has yet to catch up to a month of AI job panic.

Jordamøn
Mar 05, 2026
∙ Paid

February treated us to a string of AI panic essays, all warning that the total displacement of the white-collar worker was just around the corner. Matt Shumer, CEO of OthersideAI, published a 5,000-word warning on X comparing AI’s arrival to the early weeks of the Covid-19 pandemic, a piece that drew over 85 million views. Ten days later, Citrini Research published "The 2028 Global Intelligence Crisis," a report framed as a memo from 2028 in which AI-driven displacement had pushed unemployment past 10%, collapsed the residential mortgage market, and sent the S&P 500 down 38%. The Dow fell over 800 points on Monday, prompting Citadel Securities to rush out a rebuttal arguing that the thesis misunderstood basic macroeconomics. For a few days it looked like the market might settle back into its usual rhythm of worry followed by rationalization.

Then, on Thursday, Jack Dorsey announced that Block would cut roughly 4,000 employees, shrinking from over 10,000 to just under 6,000. Q4 earnings, released the same day, showed gross profit up 24% year over year. Block was not in trouble. That was the point. Dorsey’s internal memo dispensed with the usual restructuring euphemisms and offered a sentence that read like it had been lifted from the Citrini report: “Intelligence tools have changed what it means to build and run a company.” Smaller teams working alongside AI, he wrote, can do more and do it better, and the capabilities of those tools are compounding week after week. Block’s stock rose 18% the next day.

The sequence has a suspicious narrative tidiness. A viral essay predicts a doom spiral; four days later a major company appears to validate the thesis; the market rewards the company for doing so. Citrini’s co-author Alap Shah called it a “textbook example” of the changes described in the report. The doomers felt vindicated. The optimists pointed out that one company cutting headcount does not constitute an economic collapse. Both camps missed the more interesting question, which is not whether AI will eliminate jobs but whether we can tell, right now, when it actually is.

What the numbers say

The labor market evidence, as of early March, remains ambiguous. Yale’s Budget Lab, which has been tracking AI’s impact on employment monthly since late 2025, continues to find no aggregate disruption. The share of workers in occupations with high, medium, and low AI exposure has stayed flat since ChatGPT’s release in late 2022. Unemployment duration for workers in highly exposed occupations shows no upward trend. The occupational mix is shifting slightly faster than in previous decades, but the trend predates generative AI. Martha Gimbel, the Budget Lab’s executive director, told Fortune in early February that “no matter which way you look at the data, at this exact moment, it just doesn’t seem like there’s major macroeconomic effects here.”

Oxford Economics, in a January report, went further, arguing that many companies attributing layoffs to AI are engaged in a kind of narrative laundering. “We suspect some firms are trying to dress up layoffs as a good news story rather than bad news, such as past over-hiring,” the report stated. AI-related cuts accounted for roughly 55,000 jobs in the first eleven months of 2025, which sounds alarming until one notes that it represents 4.5% of total reported layoffs and a rounding error against the 1.5 to 1.8 million Americans who lose jobs in any given month. Oxford’s logic was straightforward: if machines were replacing human labor at scale, output per remaining worker should be rising sharply, and productivity growth in the United States and other advanced economies has been unremarkable.

Wharton management professor Peter Cappelli offered the bluntest version of this argument. “The headline is, ‘It’s because of AI,’ but if you read what they actually say, they say, ‘We expect that AI will cover this work.’ Hadn’t done it. They’re just hoping. And they’re saying it because that’s what they think investors want to hear.”

Block complicates this framing. Dorsey did not gesture vaguely toward future AI potential; he claimed the transformation was already underway inside his own company and said he was cutting preemptively rather than reactively. The market’s enthusiastic response created precisely the dynamic Oxford Economics had warned about: investors rewarding AI-attributed layoffs with higher valuations, which in turn creates an incentive for other CEOs to frame their own cuts similarly regardless of the underlying cause. Dorsey himself acknowledged that part of the reduction addressed Covid-era overhiring, though the scale of the cut, nearly half the workforce at a profitable company, makes it difficult to attribute entirely to cleaning up a three-year-old hiring mistake.

Capability versus adoption

What Shumer’s essay, the Citrini report, and Dorsey’s memo share is less a rigorous causal model than a conviction that something qualitative has shifted in the past few months. The tools released in early February, OpenAI’s GPT-5.3-Codex and Anthropic’s Claude Opus 4.6, represent a step change in autonomous task completion. Shumer described telling AI what he wanted built, walking away for four hours, and returning to find the finished product requiring no corrections. Microsoft AI chief Mustafa Suleyman warned that most computer-based tasks would be fully automated within 12 to 18 months. Anthropic CEO Dario Amodei has said AI could displace half of all entry-level white-collar jobs within one to five years.

These predictions are not implausible, but they rely on projecting capability curves onto adoption curves, and the two have historically moved at very different speeds. Roughly 80% of U.S. businesses do not currently use AI. Organizational adoption involves regulatory friction, cultural inertia, IT procurement cycles, and the sheer complexity of real-world workflows that do not map neatly onto a demo. Every prior general-purpose technology, from the personal computer to the internet, took decades for its labor-market effects to fully materialize. Yale’s researchers found that the pace of occupational change since ChatGPT’s launch has been only marginally faster than similar periods following the introduction of computers and the internet.

There is, however, a limit to how much comfort one should draw from historical analogy. Previous adoption curves unfolded in a world where the tools required significant human skill to operate; a spreadsheet was powerful, but one had to learn it. AI’s distinctive property is that it collapses the gap between capability and accessibility. One describes what one wants in plain language, and the system attempts to deliver it. If the bottleneck has historically been the speed of human learning and organizational change, and the new tools reduce the amount of learning required, the adoption curve could compress in ways that historical precedent does not capture.

The market as narrator

Block’s layoffs are most revealing not as a labor-market data point but as a case study in how narrative creates its own momentum. Investors had spent weeks absorbing a story about AI’s imminent economic impact, and the layoffs confirmed that story in a way that translated directly into margin expansion and earnings beats. The Citrini report described this mechanism with precision: companies cut headcount, margins expand, earnings beat, stocks rally, and the gains accrue to the owners of capital and compute while the employment effects ripple outward with a lag.
