Things Are Moving Fast
A week of Super Bowl ads, $30 billion fundraises, safety researcher departures, and a $2 trillion software selloff.
The Super Bowl featured dueling AI companies for the first time, with Anthropic running commercials that mocked the very idea of ads in AI on the same day that OpenAI began testing them inside ChatGPT. Anthropic then closed the second-largest private tech funding round in history at $30 billion. ByteDance’s new video model went viral and immediately provoked cease-and-desist letters from Disney. Safety researchers resigned from multiple labs with public warnings, software stocks continued a selloff that has now erased roughly $2 trillion in market capitalization, and beginning today, the India AI Impact Summit convenes world leaders in New Delhi for the first major global AI gathering hosted in the Global South.
Individually, each of these stories has its own logic; taken together, they offer a fascinating window into the current state of the AI industry.
The Super Bowl and the Ad Wars
The most visible clash of the past week began on the biggest stage in American advertising. Anthropic aired its first Super Bowl commercials during Super Bowl LX on February 9, with spots that poked fun at the idea of AI assistants interrupting personal conversations with product placements. The tagline in the online versions was blunt: “Ads are coming to AI. But not to Claude.” The broadcast cut softened the messaging slightly, pivoting to a more diplomatic framing about there being “a time and a place” for ads, with AI chats not being it.
The timing was deliberate. That same day, OpenAI officially began testing ads inside ChatGPT for U.S. users on its free and Go tiers. Ads appear below responses, are labeled as sponsored, and are matched to conversation topics and prior chats. Paid subscribers won’t see them. OpenAI published principles promising that ads will never influence ChatGPT’s answers and that user conversations remain private from advertisers, though ad personalization is enabled by default.
Sam Altman responded to Anthropic’s commercials at length on X, calling them “clearly dishonest” and “deceptive,” while arguing that OpenAI’s free tier serves billions of people who can’t afford subscriptions. He characterized Anthropic as a company that “serves an expensive product to rich people,” which is a curious framing given that Claude also has a free tier. As TechCrunch observed, deploying the word “authoritarian” in a rant about a cheeky ad campaign was, at best, misplaced. Marketing professor Scott Galloway called it a “seminal moment” in the AI wars, arguing that by writing an essay-length rebuttal rather than brushing it off, Altman inadvertently validated Anthropic as a serious threat.
Whether the ads were fair or not, the numbers suggest they worked. Claude saw an 11% jump in daily active users post-game, according to BNP Paribas data, and climbed into the top 10 free apps on the Apple App Store with a 32% increase in average daily downloads. ChatGPT saw a more modest 2.7% bump. Claude’s user base is still much smaller in absolute terms, of course, and the simultaneous release of Anthropic’s new Opus 4.6 model makes it difficult to isolate the ad effect from the product effect.
The deeper question is whether advertising inside AI assistants represents a fundamental shift in what these products are. Zoë Hitzig, a researcher who had spent the past two years at OpenAI, resigned on Wednesday over the ad strategy, writing in a New York Times essay that users have revealed “their medical fears, their relationship problems, their beliefs about God and the afterlife” to ChatGPT, and that advertising built on that archive “creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.” One can disagree about whether ads are inherently corrupting, but the concern about what happens when conversational intimacy meets targeted advertising is not trivial.
The Money
On Thursday, Anthropic announced the close of a $30 billion Series G round at a $380 billion post-money valuation, more than doubling its prior valuation of $183 billion from September. The round was led by GIC and Coatue, with co-leads including D. E. Shaw Ventures, Founders Fund, ICONIQ, and MGX, plus a roster of investors that reads like a directory of global finance: Sequoia, BlackRock, Blackstone, Fidelity, Goldman Sachs, and JPMorgan, among many others. It is the second-largest private tech funding round in history, behind only OpenAI’s $40 billion raise last year.
The financials disclosed alongside the round are worth examining. Anthropic’s annualized revenue has reached $14 billion, with the company claiming more than 10x annual growth over the past three years. Claude Code, the AI coding tool, carries a run-rate above $2.5 billion, more than double its level at the start of the year, and customers spending over $100,000 annually have grown 7x in the past year. Perhaps the most telling number came from Axios: roughly 79% of OpenAI users also pay for Anthropic, which suggests the market is less zero-sum than the Super Bowl sparring might imply.
The broader AI capital picture continues to expand at a pace that resists easy comparison. Hyperscaler capex guidance for 2026 is up 24% year-over-year, with an estimated $660 billion in AI infrastructure spending planned for this year alone, according to Wells Fargo. Alphabet has signaled spending of up to $185 billion; Amazon, up to $200 billion. OpenAI is reportedly in talks for a round that could close at around $100 billion. At some point one runs out of adjectives for these figures, which may itself be the point.
The SaaSpocalypse
While the companies building AI are attracting unprecedented capital, the companies whose products AI threatens are experiencing something closer to panic. Software stocks have now shed roughly $2 trillion in market capitalization from their peaks, reducing the sector’s weight in the S&P 500 from 12% to 8.4%, according to J.P. Morgan. Traders have taken to calling it the “SaaSpocalypse.”
The acute phase hit between February 3 and 5, when around $300 billion in market value evaporated in 48 hours. A wave of disappointing earnings collided with Anthropic’s Claude Cowork product, which demonstrated the ability to automate legal and knowledge-work tasks that previously required specialized software. Legal software companies were hit especially hard: Thomson Reuters fell 28% year-to-date, while smaller firms like CS Disco and LegalZoom dropped 12% and 20%, respectively. Salesforce is down roughly 30% year-to-date and has cut nearly 1,000 roles in a fresh round of layoffs.
Not everyone thinks the selloff is warranted. J.P. Morgan strategists pushed back on what they called an “overly bearish outlook on AI disruption,” pointing to solid fundamentals and a 17% earnings growth forecast for the sector in 2026. Goldman Sachs CEO David Solomon called the selloff “too broad.” Even Nvidia’s Jensen Huang weighed in, calling the theory that AI will replace software “illogical.”
The reality likely falls somewhere between doom and dismissal. Enterprise software benefits from high switching costs, multi-year contracts, and deeply embedded workflows that don’t change overnight. But the fear is not entirely irrational, because what changed in the past few months is that AI tools stopped being demos and started being products that could genuinely compete with existing software in real workflows. As one analyst put it, the mantra has shifted from “software is eating the world” to “AI is eating the software budget.”
The “Safety” Exodus
In what is becoming a recurring pattern across the industry, multiple high-profile AI researchers announced their departures last week, several with pointed public statements.
Mrinank Sharma, who led Anthropic’s Safeguards Research Team, posted his resignation letter on X on Monday, writing that “the world is in peril, and not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.” He described seeing, both in himself and within the organization, “how hard it is to truly let our values govern our actions” under competitive pressure. His final research project examined how AI assistants could “make us less human or distort our humanity,” and a study he published last week found that interactions distorting users’ perceptions of reality occur thousands of times a day. He plans to pursue a poetry degree, which is either a sign of profound disillusionment with the industry or an extremely niche career pivot, depending on one’s reading. Anthropic said it was grateful for his work and noted that he was not the head of safety nor in charge of broader safeguards.
At OpenAI, beyond Hitzig’s resignation over the advertising strategy, the Wall Street Journal reported that the company had fired a top safety executive, Ryan Beiermeister, after she voiced opposition to the rollout of an “adult mode” allowing pornographic content on ChatGPT. OpenAI said the termination was based on allegations that Beiermeister had discriminated against a male employee, a claim she told the Journal was “absolutely false.”
At xAI, the situation is starker still. Two co-founders resigned within 24 hours of each other: Tony Wu on Monday and Jimmy Ba on Tuesday, bringing the total number of departed co-founders to six out of the original twelve. At least nine employees total have publicly announced their exits in the past week. Musk said the company was “reorganized” to improve execution speed, which “unfortunately required parting ways with some people.” The departures come shortly after SpaceX’s all-stock acquisition of xAI and amid ongoing regulatory probes in multiple jurisdictions into Grok’s generation of non-consensual deepfake imagery. One former staffer told TechCrunch that “all AI labs are building the exact same thing, and it’s boring,” which is perhaps the most quietly devastating critique of the current moment.
These departures individually are unremarkable; people leave companies all the time. Taken together, and set against the backdrop of a week in which the industry demonstrated both its extraordinary momentum and its unresolved tensions, they form a pattern that is harder to wave away. The people closest to the work are, in growing numbers, expressing discomfort with the speed and direction of the enterprise they helped build.
Seedance 2.0 and the Copyright Collision
ByteDance launched Seedance 2.0 this past week, an AI video generation model that went viral almost immediately. The model generates multi-scene videos at 2K resolution from text, image, video, or audio inputs, with significantly improved motion realism and the ability to produce synchronized dialogue and ambient sound. Clips up to 20 seconds long maintain temporal consistency, a notable jump from earlier models that topped out around 5 to 8 seconds. Elon Musk commented on one demo: “This is happening too fast.”
The virality brought legal consequences just as quickly: the model almost immediately provoked cease-and-desist letters from Disney, setting up the latest collision between generative video and the copyright holders whose material these models can so readily imitate.