MCP: The Protocol Quietly Rewiring AI
The open standard connecting AI to everything else
A year ago, if you wanted your AI assistant to check your calendar, search your files, and update a spreadsheet, you needed three separate integrations, each custom-built, each maintained independently, each prone to breaking when something changed. Multiply that across every AI tool and every data source in an organization, and you get the “N×M problem”: a sprawling mess of bespoke connectors that nobody wants to build or maintain.
That problem is now largely solved. And the solution came from an unlikely place: Anthropic, the AI company behind Claude, which in November 2024 quietly released something called the Model Context Protocol, or MCP.
The Pitch
MCP is essentially a universal adapter for AI. Think of it like USB-C for your laptop: before USB-C, you needed different cables and ports for every peripheral, and nothing was guaranteed to work with anything else. USB-C gave us a single standard that everything could plug into. MCP does the same thing for AI connections.
Before MCP, developers faced a combinatorial nightmare. Ten AI applications talking to a hundred tools meant potentially a thousand different integrations. MCP collapses that to one: build your connector once, and any MCP-compatible AI can use it.
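The arithmetic behind that collapse is worth making concrete. This is a trivial sketch of the counting argument, not anything from the protocol itself: without a shared standard, every (app, tool) pair needs its own connector; with one, each side implements the protocol once.

```python
def connectors_without_standard(apps: int, tools: int) -> int:
    """One bespoke integration per (app, tool) pair: N x M."""
    return apps * tools

def connectors_with_standard(apps: int, tools: int) -> int:
    """Each app and each tool builds one protocol adapter: N + M."""
    return apps + tools

# The article's example: ten AI applications, a hundred tools.
print(connectors_without_standard(10, 100))  # 1000
print(connectors_with_standard(10, 100))     # 110
```

The integration burden goes from multiplicative to additive, which is why the economics shift so sharply once a standard reaches critical mass.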
The protocol defines how AI systems request context (files, database queries, API calls) and how external tools respond. It’s bidirectional, meaning tools can also push information to AI systems when relevant. And it’s designed to be lightweight enough that a competent developer can spin up a working MCP server in an afternoon.
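Under the hood, MCP messages are JSON-RPC 2.0, and tool invocation goes through a `tools/call` method defined in the public spec. The sketch below builds one such request with only the standard library; the tool name `search_files` and its arguments are hypothetical, and the message is simplified relative to a full MCP session (which begins with an `initialize` handshake).

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a simplified MCP tools/call request.

    MCP is layered on JSON-RPC 2.0, so each request carries a protocol
    version, an id for matching the response, a method, and params.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# A hypothetical request asking a server to search local files.
req = make_tool_call(1, "search_files", {"query": "quarterly report"})
parsed = json.loads(req)
print(parsed["method"])          # tools/call
print(parsed["params"]["name"])  # search_files
```

The server replies with a JSON-RPC response carrying the same `id`, which is what makes the exchange bidirectional and easy to multiplex: either side can correlate answers to requests without any shared state beyond the message stream.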
From Internal Tool to Industry Standard
What makes MCP’s trajectory remarkable isn’t the technology. It’s the adoption curve.
When Anthropic released MCP, it came with SDKs for Python and TypeScript, some reference implementations for common tools like Slack and GitHub, and not much else. It was, by the company’s own admission, a solution to an internal problem that they figured others might find useful.
By April 2025, MCP server downloads had grown from roughly 100,000 to over 8 million. By year’s end, the community had built thousands of servers covering everything from enterprise databases to consumer apps. More significantly, every major AI provider had adopted it.
The inflection point came in March 2025, when Sam Altman announced OpenAI's full support for the protocol.
That endorsement, from Anthropic’s primary competitor, signaled that MCP had crossed from “interesting Anthropic project” to “industry infrastructure.”
Google followed. Microsoft integrated MCP into Copilot and announced Windows 11 support at Build 2025. AWS built it into Bedrock. The protocol that started as one company’s internal solution became, in the space of twelve months, the de facto standard for connecting AI to everything else.
Why It Won
MCP's rapid adoption wasn't inevitable. The AI industry isn't known for rallying around shared standards; quite the opposite. So why did this one stick?
Part of the answer is timing. By late 2024, the industry had moved past the “chatbots are cool” phase and into the “how do we actually make these things useful” phase. That meant connecting AI to real systems: calendars, codebases, CRMs, databases. The pain of building those connections one-off was becoming acute just as MCP arrived to solve it.
Part of it was openness. Anthropic released MCP under a permissive open-source license with no strings attached. There was no proprietary lock-in, no licensing fees, no hidden agenda. Competitors could adopt it without ceding any advantage to Anthropic, and by adopting it, they neutralized any potential advantage Anthropic might have gained.
And part of it was being “good enough.” MCP wasn’t perfect at launch. Early versions had real limitations around security and cloud deployment. But it was simple, it worked, and it solved an immediate problem. By the time competitors might have developed alternatives, MCP had already reached critical mass.
The Linux Foundation Move
On December 9, 2025, Anthropic donated MCP to the newly formed Agentic AI Foundation, a project under the Linux Foundation.
The AAIF launched with platinum members including AWS, Google, Microsoft, OpenAI, Bloomberg, and Cloudflare—a roster that would have seemed implausible for any AI standard a year earlier.
The move matters for a few reasons. First, it removes any lingering concern about Anthropic controlling critical infrastructure. The protocol now operates under neutral governance, with community maintainers driving technical decisions. Second, it signals permanence. Enterprises don’t bet on protocols controlled by single vendors; they bet on open standards with transparent governance. MCP now has that credibility.
The AAIF also brought together two other founding projects: Block's "goose" (an open-source AI agent framework) and OpenAI's "AGENTS.md" (a standard for giving AI coding tools project-specific guidance). Together, these three projects form something like the basic plumbing for the emerging "agentic AI" era, the infrastructure that will let AI systems not just answer questions but actually do things.
What This Means
If you’ve used an AI coding assistant that can read your entire codebase, or an AI that can pull data from your company’s internal systems, or a chatbot that actually knows what’s on your calendar, you’ve probably used MCP, whether you knew it or not.
The protocol is now embedded in Claude, ChatGPT, Gemini, Microsoft Copilot, GitHub Copilot, VS Code, Cursor, and dozens of other tools. There are over 10,000 published MCP servers. Monthly SDK downloads across Python and TypeScript exceed 97 million.
For developers, MCP means building once and deploying everywhere. For enterprises, it means avoiding vendor lock-in while still integrating AI deeply into existing systems. For users, it means AI assistants that can actually access the information they need to be useful.
Why You Should Care
If you’re not a developer, you might reasonably wonder why any of this matters to you. Here’s the short version: MCP is the reason AI tools are about to get dramatically more useful.
Right now, most AI assistants are isolated. They can answer questions and generate text, but they can’t do much. They can’t see your files, check your calendar, or interact with the tools you actually use. Every integration is a special project that some company has to build and maintain.
MCP changes the economics of that. When there's a universal standard, integrations get built once and work everywhere. The AI assistant you use next year will likely be able to connect to dozens or hundreds of tools and data sources that today would each require custom engineering. Your project files, your email, your research database, your publishing workflow: all potentially accessible through a single protocol.
This is how AI moves from “impressive demo” to “genuinely useful tool.” Not through bigger models (though those help), but through the connective tissue that lets AI actually touch the systems where your work lives.
The companies betting on AI infrastructure understand this. The race to build smarter models gets all the headlines, but the race to build better plumbing may matter more. MCP is currently winning that second race, and if you use AI tools at all, you’ll be affected by the outcome.
The Model Context Protocol specification and documentation are available at modelcontextprotocol.io.



