AI Central

Adopt or Else

Corporations are counting clicks and missing the point.

Jordamøn
Feb 26, 2026

Accenture, a company that employs nearly 800,000 people and bills itself as the world’s preeminent “reinvention partner,” began tracking its senior employees’ weekly logins to internal AI tools this month. An internal email, reported by the Financial Times, informed associate directors and senior managers that “use of our key tools will be a visible input to talent discussions” during the summer promotion cycle. The subtext was not subtle: demonstrate regular AI adoption, or your path to leadership stalls.

A continent away and several rungs down the corporate prestige ladder, Block, the fintech company behind Square and Cash App, is running a cruder version of the same experiment. Jack Dorsey has mandated that every employee use generative AI tools daily, built that requirement into performance evaluations, and, for good measure, required staff to send him weekly emails summarizing their five most recent accomplishments, which he then processes with AI. This comes alongside rolling layoffs that have eliminated roughly 10 percent of Block’s 11,000-person workforce since early February. At a recent all-hands meeting, one employee told leadership that “morale is probably the worst I’ve felt in four years.” Another wrote that “the overarching culture at Block is crumbling.”

These are not isolated cases. They represent the leading edge of what has become the dominant corporate playbook of early 2026: treat AI fluency as a performance metric, track it, and tie it to career advancement or, in the blunter formulations, to continued employment. KPMG announced that AI usage would be graded in its 2026 performance review cycle. Meta began evaluating employees on their “AI-driven impact” this year. Amazon set an 80 percent weekly usage target for its internal AI coding tools and closely monitors adoption rates. A survey of 1,295 U.S. business leaders conducted last year by AIResumeBuilder.com found that 58 percent of companies now require employees to use AI tools, with 10 percent of those reporting that they fire employees who refuse.

The logic driving this push is straightforward. Companies spent billions deploying AI infrastructure and enterprise licenses throughout 2024 and 2025, and the return on that investment depends on actual usage. A PwC survey found that just 10 to 12 percent of companies report seeing revenue or cost benefits from AI, while 56 percent say they have gotten nothing out of it. When executives look at those numbers, the instinct is to conclude that the problem is adoption, not the tools, and that the solution is to push harder.

The Confidence Gap

The workforce data tells a more complicated story.

ManpowerGroup’s 2026 Global Talent Barometer, based on interviews with nearly 14,000 workers across 19 countries, found that regular AI usage jumped 13 percent in 2025, to 45 percent of workers globally. But confidence in using the technology fell 18 percent over the same period. For the first time in three years, overall worker confidence declined. The gap was sharpest among older workers: Baby Boomers reported a 35 percent drop in technology confidence, Gen X a 25 percent decline. Meanwhile, 43 percent of all workers now fear automation will replace their job within two years, up five points from the prior year. More than half reported receiving no recent training of any kind, and 57 percent had no access to mentorship. “Workers are being handed tools without training, context, or support,” ManpowerGroup’s VP of global insights, Mara Stefan, told Fortune.

In other words, the companies pushing hardest for adoption are operating in an environment where usage is rising but trust, confidence, and organizational support are all moving in the opposite direction. The mandate is working, in the narrow sense that people are logging in. Whether it is producing the outcomes companies actually need is a different question.

A study published this month in Harvard Business Review, based on eight months of embedded research at a 200-person tech company, found that AI tools consistently intensified work rather than reducing it. Employees worked faster, took on tasks outside their traditional roles, and extended work into evenings, mornings, and breaks, often without being asked to do so. Product managers started writing code. Researchers took on engineering tasks. The initial effect felt empowering: workers described having a “partner” that made previously daunting tasks approachable. But the researchers observed what they called “workload creep,” as the expanded capacity became the new baseline for speed and responsiveness. Software engineers found themselves reviewing code from colleagues who were “vibe coding” with AI assistance, adding review burden to already full workloads. The study concluded that 83 percent of workers reported AI had increased their workload, with burnout disproportionately concentrated among associates and entry-level staff (62 and 61 percent, respectively) compared to C-suite leaders (38 percent). TechCrunch’s coverage of the study put it plainly: “The industry bet that helping people do more would be the answer to everything, but it may turn out to be the beginning of a different problem entirely.”

This creates an awkward feedback loop. When executives see that AI tools aren’t delivering expected returns, they mandate more usage. More usage, in the absence of adequate training and structural support, produces burnout and resentment. Burnt-out, resentful employees use the tools perfunctorily or cynically, which further suppresses the expected returns, which prompts executives to push harder still.

The Walkbacks and the Wreckage

Some companies have already run this loop to its conclusion and begun reining themselves in. Klarna’s CEO, Sebastian Siemiatkowski, spent much of 2024 and early 2025 publicly celebrating the company’s AI-driven headcount reduction, from roughly 5,500 employees to 3,400. Then, last year, he admitted to Bloomberg that the AI chatbot’s output was “lower quality” and that the company would start hiring human customer service agents again. Duolingo’s CEO declared the company “AI-first” in May 2025, announcing that contractors would be phased out wherever AI could do the work, only to eat his words within a week, clarifying that “I do not see AI as replacing what our employees do.” Shopify told teams they had to prove AI couldn’t do a job before requesting new headcount, a policy that generated enough internal backlash to prompt its own quiet recalibration.

The operational risks of forced adoption are becoming visible as well.
