Model Context Protocol
| Model Context Protocol | |
|---|---|
| Developer | Anthropic |
| Status | Stable |
| First published | November 2024 |
| Transport | HTTP, stdio |
| Encoding | JSON-RPC 2.0 |
| License | MIT |
| Website | https://modelcontextprotocol.io |
| Specification | https://spec.modelcontextprotocol.io |
| Repository | https://github.com/modelcontextprotocol |
The Model Context Protocol (MCP) is an open standard developed by Anthropic that provides a standardized way for large language models to connect with external tools, data sources, and APIs. Often described as the "USB-C port for AI," MCP enables AI applications to access real-time information and perform actions beyond their training data.
MCP was created by David Soria Parra and Justin Spahr-Summers at Anthropic and deliberately reuses message-flow concepts from the Language Server Protocol (LSP). Following its release, MCP was adopted by major AI providers including OpenAI (March 2025) and Google DeepMind (April 2025).
Overview
MCP addresses a fundamental challenge in AI application development: integrating language models with external systems. Before MCP, developers needed to create custom integrations for each tool and data source, resulting in fragmented and difficult-to-maintain codebases.
Design philosophy
MCP follows several key principles:
- Vendor-agnostic – Works with any LLM provider, not just Anthropic
- Standardized interfaces – Consistent patterns for tool definition and invocation
- Security-focused – Built-in access control and audit capabilities
- Human-in-the-loop – Support for requiring human confirmation of actions
Architecture
MCP uses a client-server model with three main components:
MCP Host
The host contains orchestration logic and manages connections between clients and servers. It can host multiple clients simultaneously.
MCP Client
Clients convert requests into the structured MCP format. Each client maintains a one-to-one relationship with an MCP server and handles session management, response parsing, and error handling.
MCP Server
Servers expose tools, data sources, and APIs to clients. They are typically implemented as standalone applications or services that provide access to specific capabilities.
Technical specification
Transport layer
MCP supports two transport mechanisms:
- stdio – newline-delimited JSON-RPC messages over standard input/output, used for lightweight local servers launched as subprocesses
- Streamable HTTP – HTTP requests with optional Server-Sent Events (SSE) streams for remote, asynchronous, event-driven communication (later spec revisions use this in place of the original HTTP+SSE transport)
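For the stdio transport, each message travels as a single line of JSON, so framing reduces to writing and splitting newline-delimited text. A minimal sketch of that framing (the function names are illustrative, and an in-memory buffer stands in for a subprocess pipe):

```python
import io
import json

def write_message(stream, msg: dict) -> None:
    """Serialize one JSON-RPC message as a single newline-terminated line.
    Compact separators keep the payload free of embedded newlines."""
    stream.write(json.dumps(msg, separators=(",", ":")) + "\n")

def read_messages(stream):
    """Yield JSON-RPC messages from a newline-delimited stream."""
    for line in stream:
        line = line.strip()
        if line:  # skip blank lines defensively
            yield json.loads(line)

# Round-trip a request and a response through an in-memory pipe stand-in.
buf = io.StringIO()
write_message(buf, {"jsonrpc": "2.0", "id": 1, "method": "ping"})
write_message(buf, {"jsonrpc": "2.0", "id": 1, "result": {}})
buf.seek(0)
messages = list(read_messages(buf))
```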
Message format
All messages are encoded as JSON-RPC 2.0 requests, responses, or notifications. For example, a tool invocation request (the tool name and arguments are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": {"city": "Berlin"}
  }
}
```
Core primitives
MCP defines four core primitives. Resources, tools, and prompts are exposed by servers; sampling flows in the opposite direction, letting a server request inference from the client's model:
| Primitive | Description |
|---|---|
| Resources | Data sources and files the server makes readable to the model |
| Tools | Functions the model can invoke through the server |
| Prompts | Pre-defined prompt templates offered by the server |
| Sampling | Requests from a server for client-side model inference |
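As an illustration of the tools primitive, a server describes each tool with a name, a description, and a JSON Schema for its inputs. The sketch below shows such a declaration as plain data, plus a minimal required-field check; the `get_weather` tool and the `missing_required` helper are invented for illustration.

```python
# Hypothetical tool declaration, shaped like an entry in a tools/list result.
weather_tool = {
    "name": "get_weather",
    "description": "Look up current weather for a city",
    "inputSchema": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "units": {"type": "string", "enum": ["metric", "imperial"]},
        },
        "required": ["city"],
    },
}

def missing_required(tool: dict, arguments: dict) -> list[str]:
    """Return names of required arguments absent from a tools/call's arguments."""
    required = tool["inputSchema"].get("required", [])
    return [name for name in required if name not in arguments]

errors = missing_required(weather_tool, {"units": "metric"})
```

Publishing the schema up front lets the model (or the host) validate arguments before a call ever reaches the tool.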
Adoption
Since its launch in November 2024, MCP has seen rapid adoption:
- Integrated into Visual Studio Code via GitHub Copilot
- Supported by AWS for agentic AI development
- Hundreds of community-developed MCP servers available
Comparison with other protocols
MCP focuses on tool integration ("vertical" connections), while protocols like A2A handle agent-to-agent communication ("horizontal" connections). The two are complementary rather than competing standards.
See also
- Agent-to-Agent Protocol
- Agent Communication Protocol
- Common Text-based Response Protocol
- Tool Calling