MCP Moves From Protocol to Production
What the “USB-C for AI” looks like in action.
Last week, we covered Anthropic’s donation of the Model Context Protocol to the Linux Foundation’s new Agentic AI Foundation, a significant moment in the standardization of how AI agents connect to external systems. But standards only matter if people build on them, and in the weeks since that announcement, something notable has happened: major enterprise vendors have started shipping production-ready MCP implementations.
This is not vaporware. Microsoft released production support for MCP servers in Azure Functions on January 19. Red Hat announced a developer preview of an MCP server for RHEL troubleshooting on January 8. The CAMARA project, a telecom API consortium under the Linux Foundation, published guidance on January 12 for connecting AI agents to network infrastructure via MCP. Salesforce launched Agentforce MCP support in beta, with enterprise governance built in from the start.
The protocol is moving from specification to infrastructure, and understanding what that transition looks like in practice reveals both the opportunities and the constraints that will shape MCP’s next chapter.
The Security Problem Gets Addressed
The biggest barrier to MCP adoption in enterprises was never purely technical; it was security. As Randy Bias of Mirantis observed, security teams simply cannot allow arbitrary “Shadow Agents” running on developer laptops to access critical systems holding electronic health records or customer personally identifiable information. The early MCP implementations were local-first: you ran an MCP server on your machine, and your AI assistant connected to it directly. That approach works well enough for personal productivity, but it becomes a non-starter for anything touching sensitive customer data or regulated systems.
Microsoft’s Azure Functions release tackles this gap directly. The production-ready MCP extension includes built-in authentication using Microsoft Entra and OAuth 2.1, implementing the MCP authorization protocol requirements including 401 challenges and Protected Resource Metadata documents. More significantly, it supports on-behalf-of authentication, which means tools can access downstream services using the user’s identity rather than a service account. The AI agent inherits the permissions of the person using it, rather than operating with some overprivileged bot credential that would make security teams nervous.
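To make the on-behalf-of idea concrete, here is a minimal Python sketch using the MSAL library to exchange the caller’s token for one scoped to a downstream service. The tenant ID, app ID, secret, and downstream scope are placeholders, and this illustrates the general OAuth on-behalf-of exchange rather than the Azure Functions extension’s internal code.

```python
# Sketch: exchange the caller's token for a downstream token (on-behalf-of flow).
# Tenant ID, app ID, secret, and scope are placeholders; on Azure Functions the
# platform validates the incoming user token before your code ever runs.
import msal

confidential_app = msal.ConfidentialClientApplication(
    client_id="<mcp-server-app-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
    client_credential="<client-secret>",
)

def downstream_token(user_access_token: str) -> str:
    """Trade the user's token for one scoped to a downstream service,
    so the tool acts with the user's permissions, not a bot's."""
    result = confidential_app.acquire_token_on_behalf_of(
        user_assertion=user_access_token,
        scopes=["https://graph.microsoft.com/.default"],  # example downstream scope
    )
    if "access_token" not in result:
        raise RuntimeError(result.get("error_description", "OBO exchange failed"))
    return result["access_token"]
```

The key point is that the downstream token carries the user’s identity, so a tool can never reach data the person driving the agent could not reach themselves.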
Den Delimarsky, a principal engineer at Microsoft, described the friction this approach resolves: implementing authentication correctly requires genuine security expertise, and misconfiguration can expose data to people you never intended to see it. The Azure implementation handles the heavy lifting at the platform layer. When a client connects to a remote MCP server, Azure rejects the initial anonymous request, issues a 401 challenge with metadata about how to authenticate, and only proceeds once the user has completed a login flow through Entra. The MCP server code itself never touches the auth logic.
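For readers who want to see the shape of that handshake, here is a minimal sketch, using only the Python standard library, of a remote MCP server that rejects anonymous requests with a 401 challenge pointing at a Protected Resource Metadata document. The hostnames are placeholders, and on Azure Functions this exchange is emitted by the platform rather than by your handler code.

```python
# Sketch of the challenge a remote MCP server issues to an anonymous request,
# following the OAuth 2.1 / Protected Resource Metadata (RFC 9728) pattern.
# Hostnames are placeholders; a managed host handles this for you in practice.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

RESOURCE_METADATA = {
    "resource": "https://mcp.example.com",
    "authorization_servers": ["https://login.microsoftonline.com/<tenant-id>/v2.0"],
}

class McpHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if "Authorization" not in self.headers:
            # Step 1: reject the anonymous request and point the client at the
            # metadata document that says where to authenticate.
            self.send_response(401)
            self.send_header(
                "WWW-Authenticate",
                'Bearer resource_metadata='
                '"https://mcp.example.com/.well-known/oauth-protected-resource"',
            )
            self.end_headers()
            return
        # Step 2 (after the client completes the login flow): validate the
        # bearer token, then handle the MCP request. Omitted in this sketch.
        self.send_response(200)
        self.end_headers()

    def do_GET(self):
        # Serve the Protected Resource Metadata document itself.
        if self.path == "/.well-known/oauth-protected-resource":
            body = json.dumps(RESOURCE_METADATA).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), McpHandler).serve_forever()
```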
Salesforce is taking a similar approach with Agentforce. Their MCP support includes an allowlist mechanism that gives administrators control over which MCP servers can be registered and which tools are available to their organization. This design responds to real security research: in April, Invariant Labs published findings on “Tool Poisoning Attacks,” a form of indirect prompt injection where malicious MCP servers can manipulate agent behavior in subtle ways. Salesforce’s zero-trust model addresses this risk by vetting external MCP resources before allowing any connections, ensuring that users cannot accidentally expose corporate data to unauthorized tools.
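A hypothetical sketch of what such a gate might look like in code: an administrator-maintained allowlist consulted before any tool call is forwarded to a remote MCP server. The server and tool names below are invented for illustration and do not reflect Salesforce’s actual API.

```python
# Hypothetical allowlist gate: the agent platform consults an admin-maintained
# registry before forwarding any tool call to a remote MCP server.
from urllib.parse import urlparse

# Maintained by administrators: which MCP servers may be registered,
# and which of their tools are exposed to the organization.
ALLOWED_SERVERS: dict[str, set[str]] = {
    "mcp.finance.example.com": {"get_invoice", "list_accounts"},
    "mcp.hr.example.com": {"lookup_employee"},
}

def is_allowed(server_url: str, tool_name: str) -> bool:
    """Permit a tool call only if both the server and the tool are allowlisted."""
    host = urlparse(server_url).hostname
    return tool_name in ALLOWED_SERVERS.get(host, set())

# A vetted server and tool pass; an unknown server is refused outright.
assert is_allowed("https://mcp.finance.example.com", "get_invoice")
assert not is_allowed("https://unknown-server.example.com", "get_invoice")
```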
From Developer Tools to Operations
Most MCP coverage has naturally focused on coding assistants that use MCP to give Claude or ChatGPT access to your codebase, your database, or your Slack workspace. Red Hat’s RHEL MCP server suggests a different and arguably more consequential trajectory: operations.
The server, announced January 8 in developer preview, connects AI assistants to Linux system administration tasks. Point your MCP client at a RHEL machine via SSH, and the LLM can analyze logs, check CPU and memory usage, inspect running processes, and identify performance bottlenecks without requiring you to remember arcane command syntax. Red Hat’s blog post demonstrates a session where an operator asks an AI to “check the health” of a system. The agent calls multiple MCP tools, compiles the results into a coherent summary, and surfaces two issues worth investigating: a nearly-full root filesystem and a failing httpd service. A follow-up prompt identifies the disk consumption culprit (a 25GB file lurking in a virtual machines directory) and diagnoses the httpd failure (a configuration syntax error in the Apache config).
Crucially, this implementation is read-only by design. The MCP server runs pre-vetted commands over SSH; it does not provide arbitrary shell access to the system. The agent can inspect and recommend, but it cannot directly modify anything. Red Hat frames this constraint as a “safer manner” to explore AI-assisted operations, giving the LLM observability into system state without handing it capabilities that could cause harm if the model hallucinates or misunderstands intent.
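To show the shape of that pattern, here is a minimal sketch (not Red Hat’s implementation) using the MCP Python SDK’s FastMCP class: each tool maps to a single pre-vetted, read-only command executed over SSH, so the agent can observe the system but never change it. The host name and command set are placeholders.

```python
# Sketch of a read-only ops MCP server: each tool runs one pre-vetted,
# non-destructive command over SSH. The host and command set are illustrative;
# this is not Red Hat's implementation. Requires the `mcp` Python SDK and
# key-based SSH access to the target machine.
import subprocess

from mcp.server.fastmcp import FastMCP

HOST = "rhel-box.example.com"  # placeholder target machine

# The only commands the agent can trigger; nothing here mutates system state.
VETTED_COMMANDS = {
    "disk_usage": "df -h",
    "memory": "free -m",
    "failed_units": "systemctl --failed",
}

mcp = FastMCP("readonly-ops")

def run_remote(name: str) -> str:
    """Run one of the vetted commands on the target host and return its output."""
    command = VETTED_COMMANDS[name]  # raises KeyError for anything unvetted
    result = subprocess.run(
        ["ssh", HOST, command], capture_output=True, text=True, timeout=30
    )
    return result.stdout or result.stderr

@mcp.tool()
def check_disk_usage() -> str:
    """Report filesystem usage so the agent can spot nearly-full volumes."""
    return run_remote("disk_usage")

@mcp.tool()
def check_failed_services() -> str:
    """List systemd units in a failed state, such as a broken httpd."""
    return run_remote("failed_units")

if __name__ == "__main__":
    mcp.run()  # serves the tools over stdio by default
```

Because the command table is fixed, the model can only ever choose among observations the operator has already approved; it cannot compose an arbitrary shell command no matter what it hallucinates.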
This points toward a pattern we will likely see proliferate: MCP servers as controlled interfaces to production systems, with guardrails built into the protocol layer itself rather than relying on the model to exercise restraint. The agent gets the context it needs without getting the capabilities that would keep security teams awake at night.
Network-Aware AI
The CAMARA white paper represents the most forward-looking of these announcements, sketching out possibilities that have not yet reached production but indicate where the protocol might eventually lead. CAMARA is a Linux Foundation project focused on telco API interoperability, working to standardize how applications interact with network infrastructure across different carriers and geographies. Their January 12 paper describes using MCP to expose network capabilities to AI agents in a consistent way.
The underlying premise is worth considering: AI systems currently operate almost entirely disconnected from the networks that deliver their responses. An AI video optimizer does not actually know the bandwidth available to a particular user at a particular moment. A fraud detection system cannot easily verify whether a transaction is coming from the device’s usual network location. An edge deployment tool lacks visibility into which edge nodes have available capacity. All of these systems end up guessing or using stale heuristics when real-time information exists but remains inaccessible.
CAMARA’s APIs, including Quality on Demand, Device Location, and Edge Discovery, provide exactly this kind of information. The white paper outlines how an MCP server could translate these APIs into tools that AI agents can discover and invoke through the standard protocol. A video streaming agent could check actual network conditions before deciding which quality level to serve. A banking agent could query device location as an anti-fraud signal before approving a suspicious transaction. An edge orchestrator could route workloads based on real-time network state rather than static rules.
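None of this is shipping code yet, but a sketch helps make the idea tangible: an MCP server that wraps a CAMARA-style network API as a tool an agent can call. The endpoint path, payload fields, and token handling below are assumptions drawn from the white paper’s description, not a published CAMARA interface.

```python
# Sketch of wrapping a CAMARA-style network API as an MCP tool. The endpoint
# path, payload fields, and token handling are assumptions, not a published
# CAMARA MCP server. Requires the `mcp` Python SDK and `httpx`.
import httpx

from mcp.server.fastmcp import FastMCP

API_BASE = "https://api.operator.example.com"  # hypothetical operator gateway
API_TOKEN = "<operator-issued-token>"          # placeholder credential

mcp = FastMCP("network-context")

@mcp.tool()
def check_connection_quality(phone_number: str) -> dict:
    """Fetch current network quality indicators for a device so an agent
    (for example, a video optimizer) can adapt before serving content."""
    response = httpx.post(
        f"{API_BASE}/quality-on-demand/v1/retrieve-quality",  # illustrative path
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"device": {"phoneNumber": phone_number}},
        timeout=10.0,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    mcp.run()
```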
As Herbert Damker, CAMARA’s TSC Chair and Lead Architect at Deutsche Telekom, put it: “AI agents increasingly shape the digital experiences people rely on every day, yet they operate disconnected from network capabilities.” The paper represents an attempt to bridge that gap, bringing AI and network infrastructure “into concert” across operators.
What This Means
We are entering a phase where MCP’s value proposition gets tested against the full weight of enterprise requirements. The protocol’s first year saw rapid adoption among developers building personal tools and early-stage products, driven by the genuine utility of giving AI assistants access to external context. The next phase involves organizations with compliance requirements, security audits, and production SLAs, which is a substantially different environment with substantially different constraints.
The implementations shipping now suggest the ecosystem is taking that transition seriously. Microsoft is building authentication into the hosting layer so that individual developers do not have to become security experts. Salesforce is adding enterprise governance with allowlists and vetting. Red Hat is shipping read-only operational interfaces that provide value while limiting risk. These are not decisions made by protocol enthusiasts excited about a new standard; they are decisions made by product teams who need to sell to enterprises and who understand what those enterprises actually require.
With 97 million monthly SDK downloads and over 10,000 published MCP servers, the ecosystem has achieved critical mass. The question is no longer whether MCP will be adopted. The question is whether the production implementations will prove secure enough, governed enough, and reliable enough for the workloads that actually matter to organizations making significant investments in AI infrastructure.
The next few months will begin to answer that.

