MCP in Action
Claude can now run third-party apps.
Anthropic announced this week that Claude can now display and interact with third-party applications directly inside its chat window. The update, which began rolling out on January 26th for all paid plans, means that users can view Slack messages, manipulate Figma designs, edit Canva projects, and manage Asana tasks without switching tabs or applications. The interfaces appear inline, rendered within the conversation itself.
This is a meaningful shift in what a conversational AI actually is. Claude has long been able to fetch data from external systems through the Model Context Protocol, the open standard Anthropic released in late 2024. But fetching data is passive; one asks a question, and Claude returns text describing what it found. The new extension, called MCP Apps, allows those external systems to serve up interactive user interfaces that render directly in the chat. When Claude connects to Amplitude, for instance, the analytics dashboard appears in the conversation, and one can adjust parameters and explore trends without ever opening Amplitude itself.
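For developers curious about the mechanics, the pattern is roughly this: an MCP server declares an HTML template as a resource up front, and its tool results point the host at that template along with structured data to pour into it. The sketch below is illustrative only; the ui:// URI, the ui/template metadata key, and the sample metrics are assumptions made for the example, not the normative spec.

```typescript
// Illustrative sketch of the MCP Apps pattern (field names are assumptions,
// not the normative spec): a server exposes an HTML template as a resource,
// and a tool result points the host at that template plus structured data.

// 1. A UI template, declared as an ordinary MCP resource. The proposal uses
//    a ui:// URI scheme and text/html content; the host fetches and sandboxes
//    it rather than executing arbitrary markup from tool output.
const dashboardTemplate = {
  uri: "ui://example-analytics/dashboard", // hypothetical URI
  mimeType: "text/html",
  text: `<!doctype html>
<html>
  <body>
    <div id="chart"></div>
    <script>
      // The embedded UI talks to the host over postMessage,
      // never directly to the model or the open network.
      window.addEventListener("message", (event) => {
        const data = event.data?.params?.metrics ?? [];
        document.getElementById("chart").textContent = JSON.stringify(data);
      });
    </script>
  </body>
</html>`,
};

// 2. A tool result that references the template. The host matches the URI
//    against templates the server declared up front, renders the template in
//    an iframe, and streams the structured content into it.
const toolResult = {
  content: [{ type: "text", text: "Weekly active users are up 12%." }], // sample data
  structuredContent: { metrics: [{ week: "2026-W04", wau: 48210 }] },   // sample data
  _meta: { "ui/template": "ui://example-analytics/dashboard" },         // assumed key
};

console.log(dashboardTemplate.uri, toolResult._meta);
```

The important design choice is that the host renders a pre-declared template rather than executing whatever markup a tool happens to return, which is what makes the sandboxing and approval story described further down tractable.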
The launch partners include Amplitude, Asana, Box, Canva, Clay, Figma, Hex, monday.com, and Slack, with Salesforce expected soon. Each of these integrations transforms what was previously a describe-then-switch workflow into something more immediate. Ask Claude to help draft a project timeline, and the Asana board materializes in the chat; adjust the dates directly; continue the conversation with the updated context visible to both parties.
The Operating System Question
What Anthropic is building, and what OpenAI announced with its Apps SDK back in October, represents a quiet but significant challenge to how we think about application interfaces. Both companies are positioning their chat windows as a layer that sits above individual applications, orchestrating them rather than deferring to them.
OpenAI’s approach, announced at DevDay in October 2025, launched with partners including Booking.com, Canva, Coursera, Figma, Expedia, Spotify, and Zillow. The implementation differs in some technical details, but the vision is nearly identical: users should be able to accomplish tasks across multiple services without leaving the conversational interface. OpenAI’s Apps SDK is itself built on MCP, the protocol Anthropic open-sourced, which means the two companies are competing on execution while sharing an underlying standard.
The timing of Anthropic’s announcement suggests the competition is intensifying. MCP Apps surfaced as a proposal in November 2025, building on prior work from the community and incorporating elements of the OpenAI Apps SDK. The extension is designed to work not just in Claude but in any MCP-compatible host, including Visual Studio Code, the open-source agent framework Goose, and potentially ChatGPT itself. This interoperability matters: it means developers can build an interactive integration once and have it function across multiple AI platforms, reducing the fragmentation that plagued earlier attempts at AI application ecosystems.
What This Changes in Practice
The practical implications depend on how deeply one is willing to consolidate workflow into a single interface. Consider a product manager who currently switches between Slack for communication, Asana for project tracking, Figma for design review, and Amplitude for analytics. Each of those applications has its own notification system, its own login, its own interface conventions. The cognitive overhead of context-switching is real, even if it has become so familiar as to feel invisible.
With MCP Apps, that same product manager could theoretically conduct a morning review entirely within Claude. The Slack channels render in the chat; relevant messages surface based on the conversation’s context. The Asana board appears when the discussion turns to project status; tasks can be reassigned directly. The Figma prototype loads when reviewing a designer’s work; comments can be left inline. The analytics dashboard populates when discussing user metrics; parameters can be adjusted to test hypotheses.
Whether this consolidation improves productivity or merely shifts complexity remains an open question. Clare Liguori, a senior principal engineer at AWS, offered cautious endorsement in remarks accompanying the MCP Apps announcement, suggesting that the ability to render dynamic interfaces directly in conversation addresses a genuine gap between what agentic tools can provide and how users naturally want to interact with them. The emphasis on “naturally” is doing significant work in that sentence; it presumes that conversation is indeed the natural modality for interacting with software, which is a claim that deserves scrutiny rather than acceptance.
Security and the Expanding Attack Surface
Running third-party interfaces inside an AI chat window introduces risks that did not exist when AI assistants merely described information in text. The MCP Apps extension renders external content in iframes with sandboxing, pre-declared templates, auditable message logs, and host-managed approval flows for any actions that modify external data. These are reasonable precautions, but they represent a new category of security concern for organizations evaluating AI adoption.
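As a rough illustration of what “iframes with sandboxing” means in practice, a host can render the declared template in a frame that runs scripts but has no access to the host page’s origin, and relay only a small allowlist of messages between the embedded UI and the conversation. The message names below (ui/ready, ui/tool-call, ui/resize) are invented for the example; the containment techniques themselves are standard web platform features, not anything specific to Anthropic’s implementation.

```typescript
// Sketch of how a host might contain an embedded app UI (illustrative, not
// Anthropic's implementation): render the pre-declared template in a
// sandboxed iframe and relay only an allowlisted set of messages.

function mountAppUi(templateHtml: string, container: HTMLElement): HTMLIFrameElement {
  const frame = document.createElement("iframe");

  // allow-scripts without allow-same-origin: the UI runs in an opaque
  // origin, so it cannot read the host page's cookies, storage, or DOM.
  frame.setAttribute("sandbox", "allow-scripts");
  frame.srcdoc = templateHtml; // content comes from the declared template
  container.appendChild(frame);
  return frame;
}

// Only a small, named set of UI-to-host messages is forwarded; everything
// else is dropped and logged for audit.
const ALLOWED_MESSAGES = new Set(["ui/ready", "ui/tool-call", "ui/resize"]);

function bridgeMessages(
  frame: HTMLIFrameElement,
  onToolCall: (params: unknown) => void,
): void {
  window.addEventListener("message", (event: MessageEvent) => {
    if (event.source !== frame.contentWindow) return; // ignore other frames
    const { type, params } = (event.data ?? {}) as { type?: string; params?: unknown };
    if (!type || !ALLOWED_MESSAGES.has(type)) {
      console.warn("blocked message from app UI:", type); // auditable log
      return;
    }
    if (type === "ui/tool-call") onToolCall(params); // still subject to approval
  });
}
```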
We explored the security landscape around MCP in detail yesterday:
https://aicentral.substack.com/p/the-model-context-problem
The short version is that researchers have already noted that MCP connectors can be vulnerable to prompt injection and permission escalation, particularly when they expose access to sensitive systems. An interactive interface is a larger attack surface than a read-only data connection. If a malicious payload could be injected into, say, a Slack message that Claude then renders, the consequences differ from those of a traditional phishing attack. The AI is mediating the interaction, and users may extend to it a trust they would not extend to an unfamiliar email sender.
Anthropic and OpenAI both emphasize that users must explicitly grant permissions when connecting applications, and that actions modifying external data require approval. But the history of permission dialogs suggests that users often approve requests reflexively, particularly when the request occurs in a trusted context. The question is whether the AI chat window will come to feel like a trusted context by default, and what happens when that trust is exploited.
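The pattern both companies describe puts that approval decision in the host rather than in the model or the embedded UI. A minimal sketch of such a gate, assuming a read-only flag along the lines of MCP’s tool annotations and defaulting to asking the user about anything else, might look like this:

```typescript
// Illustration of the approval pattern described above (not either vendor's
// actual code): the host, not the model or the app UI, decides whether a
// tool call needs explicit user confirmation before it runs.

interface ToolCall {
  name: string;
  args: Record<string, unknown>;
  // Servers can mark tools as read-only; the host treats anything
  // unannotated as a potential write and defaults to asking.
  readOnly?: boolean;
}

type AskUser = (prompt: string) => Promise<boolean>;

async function gateToolCall(
  call: ToolCall,
  askUser: AskUser,
  execute: (call: ToolCall) => Promise<unknown>,
): Promise<unknown> {
  if (!call.readOnly) {
    const approved = await askUser(
      `Allow "${call.name}" to modify external data with ${JSON.stringify(call.args)}?`,
    );
    if (!approved) throw new Error(`User declined tool call: ${call.name}`);
  }
  return execute(call);
}
```

Default-deny is the conservative choice, but it also multiplies the number of dialogs users see, which is exactly the habituation problem described above.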
The Consolidation Thesis
Both OpenAI and Anthropic are betting that the AI chat interface will become the primary way knowledge workers interact with software, with individual applications receding into the background as service providers rather than direct interfaces. This is a significant bet. It implies that the conversational modality is flexible enough to accommodate tasks currently served by highly specialized interfaces, and that users will prefer mediated access over direct manipulation.
The strongest version of this thesis suggests that AI companies are building something closer to an operating system than an application. The operating system manages resources and provides a consistent interface through which users access diverse capabilities; the applications themselves become interchangeable components that the OS orchestrates. If Claude or ChatGPT successfully becomes that orchestration layer for knowledge work, the strategic implications for existing software companies are substantial.
But the thesis has weaknesses. Complex creative work often benefits from interfaces designed specifically for the task at hand; one would not want to edit a film through a chat window, regardless of how sophisticated the AI mediating the interaction. The power of specialized applications lies partly in their constraints, the way they structure possibility and guide attention. A conversational interface is by nature unconstrained, which is both its strength and its limitation.
For now, MCP Apps represents an incremental expansion of what AI assistants can do, not a fundamental transformation of how software works. The integrations are limited to a small set of launch partners. The interfaces are rendered in iframes, which constrains their capabilities. The security model is still evolving. But the direction is clear, and the competition between OpenAI and Anthropic to define this new interaction paradigm is accelerating. The question is no longer whether AI assistants will become application platforms, but how quickly, and who will set the terms.



I am continually fascinated by how rapidly AI is getting integrated but wonder if we're building a malicious cyberpunk world straight out of a dystopian sci fi novel instead of something better. Given who owns AI, better be ready to turn off the power to the AI server farms.
I am tempted to create a moltbot on my DGX Spark as a sort of "canary in the coal mine" warning system.
Based on the examples given, you're still task switching, and you're task switching in a worse way.
ALT-TABbing between my mail client, my IDE, my calendar, and my project management interface allows me to go back and forth with little friction--not to mention being able to set them side-by-side. In a linear chat interface, I'm scrolling between them. That seems like a step backward in efficiency: an "enhancement" in the true spirit of Microsoft.