Who Will Write the Rules?
India hosts the first Global South AI summit, rival labs unite on a Paris accelerator, and a federal deadline looms for US state AI laws.
The India AI Impact Summit opens today in New Delhi, the first global AI summit ever hosted in the Global South. In Paris, a new accelerator called F/ai has quietly united every major American AI lab under one roof for the first time. And in Washington, a March 11 deadline is approaching that could determine whether individual US states retain the right to regulate AI at all.
New Delhi: The Global South Makes Its Case
The India AI Impact Summit, running through February 20 at Bharat Mandapam, is a genuinely large-scale affair. More than 35,000 people have registered, over 100 countries are participating through working groups, and somewhere between 15 and 20 heads of government are expected to attend. The CEOs of Google, Nvidia, and Anthropic are reportedly on the guest list.
What makes the summit interesting is less its size than its framing. The previous entries in this series of global AI gatherings, from Bletchley Park in 2023 to the Paris AI Action Summit last year, were organized primarily around safety and risk. India has deliberately shifted the vocabulary. The summit’s organizing principles are “People, Planet, and Progress,” and its stated goal is to move from dialogue to what it calls demonstrable impact, meaning deployment, access, and measurable development outcomes rather than abstract risk frameworks.
The subtext is not particularly subtle. When the countries that build frontier AI models talk about governance, they tend to mean safety testing, disclosure requirements, and alignment research. When countries that primarily consume those models talk about governance, they mean something else entirely: compute access, multilingual capability, agricultural and healthcare applications, and the terms on which their citizens get to participate. India’s Electronics and IT Secretary S. Krishnan has been explicit that the summit aims to reposition the conversation away from what he frames as fear-driven regulation and toward delivery.
Whether the summit produces binding commitments or merely a “Leaders’ Declaration” of aspirational language remains to be seen. Global summits have a reliable track record of generating impressive communiqués and modest follow-through. But the sheer fact that this convening is happening in New Delhi, rather than London, Seoul, or San Francisco, represents a shift in the center of gravity that the Western AI establishment will eventually need to reckon with.
Paris: Rivals Unite (With Strings Attached)
Meanwhile, Station F, the enormous Paris-based startup campus, announced last week that it has launched F/ai, a new accelerator program backed by an unprecedented coalition of AI companies. The partner list reads like a who’s-who of firms that normally spend their energy poaching each other’s employees and trading barbs on social media: OpenAI, Anthropic, Google, Meta, Microsoft, Mistral, AWS, Hugging Face, Qualcomm, AMD, Snowflake, and Sequoia Capital, among two dozen others.
The timing is notable. Anthropic and OpenAI spent Super Bowl Sunday in a very public fight over advertising in AI chatbots, with Sam Altman calling Anthropic’s commercials “clearly dishonest” and Anthropic’s Daniela Amodei insisting the ads were really about Claude’s own principles. Days later, both companies were listed as partners in F/ai, pledging to provide mentorship, expertise, and access to senior leadership for a cohort of 20 early-stage European AI startups.
The charitable reading is that even bitter competitors recognize the value of ecosystem-building, and that Europe’s AI startup pipeline benefits everyone. Station F director Roxanne Varza told Wired the program is focused on rapid commercialization, aiming to help startups hit €1 million in revenue within six months.
The less charitable reading, which is probably the more accurate one, is that F/ai represents a sophisticated land grab. As Marta Vinaixa of Ryde Ventures pointed out to Wired, once a developer builds on top of a particular AI model, switching is rarely straightforward. By offering compute credits, mentorship, and API access to early-stage European founders, the American labs are creating dependency before these startups have even found their market. It is, in a sense, the AI equivalent of offering a free first month: generous right up until one tries to cancel the subscription.
The accelerator also raises questions about governance. When a startup in the F/ai cohort builds something that competes directly with one of its backers, how does that get handled? The details on conflict-of-interest protocols remain thin.
None of this makes F/ai a bad idea, necessarily. European AI startups genuinely need better access to capital and infrastructure, and Station F has a credible track record. But one should be clear-eyed about what it means when every major American AI company simultaneously agrees to “help” the European ecosystem. Generosity and strategy are not mutually exclusive.
Washington: The March 11 Deadline
Back in the United States, the federal government’s effort to pre-empt state AI regulation is approaching a critical juncture. President Trump’s December 2025 executive order directed the Commerce Department to publish, by March 11, an evaluation of state AI laws it deems “onerous” or inconsistent with federal policy. The same deadline applies to the FTC, which must issue a policy statement on when state laws requiring AI models to alter their outputs are pre-empted by federal prohibitions on deceptive practices.
The executive order also created a DOJ AI Litigation Task Force whose job is to challenge state AI laws in federal court on constitutional grounds, including the argument that such laws unconstitutionally burden interstate commerce. The order specifically cited Colorado’s AI Act, which requires developers of high-risk AI systems to take reasonable care against algorithmic discrimination, as an example of a law that “requires entities to embed ideological bias within models.” Whatever Commerce and the FTC publish on March 11, the question running through New Delhi, Paris, and Washington is the same one: who will write the rules.