97 million monthly SDK downloads. 81,000+ GitHub stars. 9,400+ registry entries. 16 months.
Those numbers are not projections. As of March 2026, the Model Context Protocol’s TypeScript and Python SDKs together hit 97 million monthly downloads — a 4,750% increase from the 2 million recorded at launch in November 2024. For context, React took approximately three years to reach comparable download scale. MCP did it in under eighteen months, and the month-over-month growth rate is still running at +18%.
If you have been treating MCP as an interesting Anthropic experiment, that period is over. Every major AI vendor — Anthropic, OpenAI, Google, Microsoft, AWS — now ships native MCP support. The protocol was donated to the Linux Foundation in December 2025, placing it on the same governance track as Kubernetes and PyTorch. The 2026 roadmap covers stateless transport, agent-to-agent communication, and enterprise-grade auth. This guide explains what changed, where MCP is headed, and how to build with it today.
How MCP Got to 97 Million Downloads
Anthropic open-sourced MCP in November 2024 with a straightforward premise: AI agents need a standard way to talk to tools, the same way browsers have HTTP for web servers and IDEs have LSP for language features. The protocol defines three primitives — Tools (functions the model calls), Resources (data the application reads), and Prompts (templates the user invokes) — and a transport layer for carrying those messages between clients and servers.
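On the wire, those primitives are carried as plain JSON-RPC 2.0 messages. A tool invocation, for instance, is a `tools/call` request; the tool name and arguments below are illustrative, not from any real server:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "city": "Berlin" }
  }
}
```

The server replies with a matching `id` and a `result` whose `content` array holds blocks (text, images, resources) the model can read. That request/response pairing by `id` is the whole envelope; everything else in MCP is defined on top of it.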
The adoption story moved in two distinct phases. Through Q1 2025, growth was driven by Cursor, Claude Desktop, and early adopters building custom integrations. The inflection came when ChatGPT added MCP support through its Apps SDK in April 2025, followed by Google’s Gemini API and Vertex AI Agent Builder in March 2026. At that point, MCP stopped being an Anthropic feature and became an industry standard — the protocol that every major client speaks, which means a server built once works across every major agent platform without modification.
78% of enterprise AI teams with 50+ practitioners now have at least one MCP-backed agent in production, up from 31% a year ago.[1] Average time to wire a new SaaS tool into an AI agent fell from 18 hours of custom function-calling code to 4.2 hours with MCP. The economics of standardization are not abstract.
The public registry grew from 1,200 servers in Q1 2025 to 9,400+ in April 2026, with 41% of enterprise teams maintaining at least one internal MCP server that does not appear in the public registry at all. That last figure matters: the visible registry is probably less than half of total MCP deployments.
Linux Foundation Governance Changes the Risk Calculation
On December 9, 2025, Anthropic donated MCP to the Linux Foundation’s newly formed Agentic AI Foundation (AAIF) — co-founded by OpenAI, Anthropic, Google, Microsoft, AWS, and Block. Google’s Agent-to-Agent (A2A) protocol, released in April 2025, joined the same foundation in June 2025. IBM’s Agent Communication Protocol merged into A2A in August 2025.
The governance move removes the single-vendor risk that kept enterprise architects cautious about committing to MCP as foundational infrastructure. Under Linux Foundation stewardship, MCP follows the same stability model as Kubernetes: a formal specification process, community Working Groups, versioned releases, and long-term support commitments. For teams making multi-year infrastructure decisions, this matters more than the download numbers.
The formal process is called Spec Enhancement Proposals (SEPs). Any contributor can open a SEP, Working Groups debate and refine it, and changes move through a transparent vote. This is meaningfully different from a single company’s internal roadmap, which can shift based on business priorities invisible to external developers.
The 2026 Roadmap: Four Priority Areas
The official 2026 MCP Roadmap, published March 13, 2026, identifies four areas for the current specification cycle. None of them are cosmetic.
Transport Scalability
The Streamable HTTP transport that shipped in 2025 made MCP production-capable, but running it at scale exposed a fundamental problem: sessions are stateful and pinned to a specific server process. In a horizontally scaled cloud deployment — Cloud Run, Kubernetes, any autoscaling infrastructure — this means a session that starts on one instance breaks if load balancing routes subsequent requests to a different instance.
Google proposed a stateless transport variant that removes session pinning. Under the new model, MCP servers can run statelessly across multiple instances, with sessions that survive server restarts and scale-out events. This change is targeted for the June 2026 specification cycle, with SDK support following shortly after. For teams currently managing sticky sessions or avoiding horizontal scaling to keep MCP operational, this is the fix.
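The failure mode is easy to reproduce in miniature. The sketch below simulates two server instances that each hold session state in process-local memory, the way a stateful Streamable HTTP server does today (keyed by a session id such as the `Mcp-Session-Id` header). The `Instance` class and its in-memory store are a simulation for illustration, not SDK API:

```typescript
// Each "instance" keeps session state in process-local memory,
// which is exactly what pins a session to one server process.
class Instance {
  private sessions = new Map<string, { initialized: boolean }>()
  constructor(public name: string) {}

  handle(sessionId: string, method: string): string {
    if (method === 'initialize') {
      this.sessions.set(sessionId, { initialized: true })
      return `${this.name}: session ${sessionId} created`
    }
    // Any later request must land on the SAME instance,
    // or the session is simply not found.
    if (!this.sessions.has(sessionId)) {
      return `${this.name}: 404 unknown session ${sessionId}`
    }
    return `${this.name}: ok`
  }
}

const a = new Instance('instance-a')
const b = new Instance('instance-b')

// Session starts on instance-a...
console.log(a.handle('s1', 'initialize')) // instance-a: session s1 created
// ...but the load balancer routes the next request to instance-b.
console.log(b.handle('s1', 'tools/call')) // instance-b: 404 unknown session s1
```

The stateless variant removes that `Map` entirely: every request carries enough context to be served by any instance, so the load balancer can route freely.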
Agent Communication and A2A
MCP defines how an AI agent talks to tools. It does not define how AI agents talk to each other. Google’s A2A protocol fills that gap — it standardizes agent discovery, communication, and task delegation between agents from different frameworks and vendors. Both protocols are now under AAIF governance, and the 2026 MCP Roadmap explicitly flags convergence with A2A as an active area of work.
The practical implication: MCP handles the tool layer (agent-to-API, agent-to-database, agent-to-filesystem), A2A handles the coordination layer (orchestrator-to-subagent, agent-to-agent task handoff). They are complementary by design. An agent system using both MCP and A2A can connect to any tool and coordinate with any other agent, regardless of framework.
Enterprise Readiness
OAuth 2.1 is being formalized as the standard auth flow for MCP, replacing the ad-hoc static client secret patterns most current servers use. Alongside this, the roadmap commits to well-defined behavior for MCP Gateways — proxy servers that mediate between clients and tool servers, which is how most enterprises will deploy MCP in regulated environments where direct client-to-tool connections are not permitted.
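The current draft direction pairs OAuth 2.1 with protected resource metadata (RFC 9728), where a server advertises which authorization server protects it at a well-known URL. A metadata document might look like the fragment below; the hostnames and scope names are invented for illustration, since MCP does not yet standardize scopes:

```json
{
  "resource": "https://mcp.example.com",
  "authorization_servers": ["https://auth.example.com"],
  "scopes_supported": ["mcp:tools:read", "mcp:tools:call"],
  "bearer_methods_supported": ["header"]
}
```

A client that receives a 401 from the server can fetch this document, discover the authorization server, and run a standard OAuth flow, with no static secrets shipped in config files.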
Audit logging is also getting a formal spec: a structured schema for recording what a client requested, what tools were called, what data was accessed, and what was returned. The MCP production hardening guide covers the current patterns for auth and gateway configuration while the formal spec matures.
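Because the audit spec is not finalized, any concrete shape is speculative, but an entry will need to capture roughly the fields below. This TypeScript sketch is a hypothetical illustration of one structured log line, not the draft schema:

```typescript
// Hypothetical audit entry shape; the formal MCP spec may differ.
interface McpAuditEntry {
  timestamp: string          // ISO 8601
  sessionId: string
  clientId: string           // OAuth client that made the request
  method: string             // e.g. 'tools/call'
  toolName?: string
  resourcesAccessed: string[]
  outcome: 'success' | 'error'
  durationMs: number
}

const entry: McpAuditEntry = {
  timestamp: new Date().toISOString(),
  sessionId: 'sess-42',
  clientId: 'claude-desktop',
  method: 'tools/call',
  toolName: 'count_words',
  resourcesAccessed: [],
  outcome: 'success',
  durationMs: 12,
}

// A gateway would emit one line per request to its log sink.
console.log(JSON.stringify(entry))
```

The point of a standard schema is that gateways, SIEM pipelines, and compliance tooling can all consume the same line format, regardless of which MCP server produced it.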
Server Discovery via MCP Server Cards
Server Cards are a machine-readable metadata format exposed at a .well-known/mcp URL. A crawling registry can discover an MCP server’s capabilities, supported tools, auth requirements, and version without establishing a session. Combined with a centralized discovery service — conceptually npm for MCP servers — this replaces manual path configuration with search-and-install semantics. The registry infrastructure is under active development, with a public beta expected in Q3 2026.
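Since the format is still under development, the exact fields are not settled; the fragment below is a plausible sketch of what a crawler might find at that URL, with every field name hypothetical:

```json
{
  "name": "github-mcp",
  "version": "2.1.0",
  "description": "GitHub issues, PRs, and repo search",
  "transport": ["streamable-http"],
  "auth": { "type": "oauth2.1" },
  "tools": ["search_issues", "create_pr_comment"]
}
```

The design goal is that this document is static and fetchable without a session, so a registry can index thousands of servers cheaply.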
MCP Reticle: Debugging That Actually Works
Anyone who has spent time debugging MCP tool calls knows the problem: JSON-RPC traffic between your client and server is invisible by default, logging is fragmented across process boundaries, and network sniffers do not understand the protocol semantics. MCP Inspector (the official tool) handles basic interactive testing but does not give you production-grade traffic visibility.
MCP Reticle fills that gap. It earned over 5,000 GitHub stars in its first month and has been called the Wireshark for MCP. Built around a Rust proxy core, it captures every JSON-RPC request, notification, and response between client and server with overhead measured in microseconds, low enough to run in staging without distorting timing behavior.[2]
The debugging workflow: wrap your MCP server with Reticle as a transparent proxy (stdio or HTTP), reproduce the failure once, then open the Reticle UI. The split-pane interface automatically pairs requests with responses using JSON-RPC IDs — no manual log correlation. Color-coded message types (green for requests, blue for responses, red for errors) make the problem class visible at a glance. Reticle captures stderr separately for server crashes, which is usually where the real bug lives: stack traces, config errors, panics that never make it into the JSON-RPC stream.
Reticle supports four transports: stdio, Streamable HTTP, WebSocket, and the legacy HTTP+SSE. One tool, every transport. The repository is at github.com/LabTerminal/mcp-reticle.
Build Your First MCP Server in Under 50 Lines
A functional MCP server is simpler than most developers expect. The following TypeScript implementation creates a server with one tool — enough to understand the structure before adding production complexity. Use the JSON Formatter to validate the schema objects as you build, and the AI Token Counter to understand how your tool descriptions contribute to context window usage as you scale.
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js'
import { z } from 'zod'

const server = new McpServer({
  name: 'my-first-server',
  version: '1.0.0',
})

// Register a tool: the description is the model's instruction manual
server.tool(
  'count_words',
  'Count the number of words in a text string. Returns an integer count.',
  { text: z.string().describe('The text to count words in') },
  async ({ text }) => {
    const count = text.trim().split(/\s+/).filter(Boolean).length
    return {
      content: [{ type: 'text', text: `Word count: ${count}` }],
    }
  }
)

// Connect to stdio transport (top-level await requires an ES module)
const transport = new StdioServerTransport()
await server.connect(transport)
Three implementation details worth understanding before you go further:
Never log to stdout. In stdio transport mode, stdout is the MCP protocol channel. Any console.log you add will corrupt the JSON-RPC stream and cause cryptic parse errors on the client side. Use console.error for debug output — it routes to stderr, which is invisible to the protocol.
Tool descriptions are the model’s instruction manual. The description string is how the AI model decides whether and how to call your tool. Vague descriptions produce unreliable tool use. Be specific: what the tool does, what inputs it expects, and what it returns. The model will follow a good description precisely and ignore a bad one unpredictably.
One tool, one job. Five small tools are more reliable than one large tool that handles everything — the model makes better routing decisions when tool boundaries are clear, and debugging is significantly easier when each tool has a well-defined scope.
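The stdout rule is easy to honor with a small helper that routes every diagnostic to stderr. The helper below is our own convention, not part of the SDK; it returns the formatted line so it can be tested, and callers are free to ignore the return value:

```typescript
// debugLog writes to stderr only, keeping the JSON-RPC stream on stdout clean.
function debugLog(...parts: unknown[]): string {
  const line = `[my-first-server] ${parts.map(p => String(p)).join(' ')}`
  console.error(line)  // stderr: safe in stdio transport mode
  return line
}

debugLog('tool called:', 'count_words')
// stderr: [my-first-server] tool called: count_words
```

Funnel all debug output through a helper like this and a stray `console.log` never makes it into the protocol channel.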
Connect this server to Claude Desktop by adding it to your claude_desktop_config.json:
{
  "mcpServers": {
    "my-first-server": {
      "command": "node",
      "args": ["/path/to/your/server.js"]
    }
  }
}
Once connected, Claude Desktop surfaces your count_words tool and calls it when users ask questions where word counting is relevant. That feedback loop — write a tool, see it called, observe what the model does with it — is the fastest way to understand how MCP behaves in practice. Use the Codebase Graph Visualizer to map the dependency structure of your MCP server implementations as they grow.
MCP vs A2A: Not a Competition
A recurring question is whether MCP and A2A are competing protocols. They are not. They operate at different layers of the agent stack.
MCP is the tool connectivity layer. It defines how an agent accesses external capabilities: calling an API, reading a file, querying a database, executing code. The agent is the client; the capability is the server. MCP says nothing about how multiple agents coordinate with each other.
A2A is the coordination layer. It defines how agents discover each other, delegate tasks, and communicate results. An orchestrator agent can assign subtasks to specialist agents, receive results asynchronously, and compose those results into a final output — all through A2A. A2A does not define how individual agents access tools.
In a production multi-agent system, you will use both: MCP for every agent’s tool access (filesystem, APIs, databases, internal services), and A2A for agent-to-agent task routing. Shared AAIF governance means both protocols evolve together, with explicit coordination on where they overlap, which keeps the risk of the two fragmenting low.
For teams building agentic workflows today: implement MCP first (the ecosystem is larger and more mature), add A2A when you need multi-agent task delegation spanning different frameworks or organizations.
What Comes Next
The June 2026 specification cycle is slated to deliver stateless transport, the change that makes MCP servers genuinely cloud-native and removes the horizontal-scaling constraint that currently forces workarounds. OAuth 2.1 formalization lands in the same cycle, alongside early MCP Server Card support for registry discovery.
H2 2026 targets MCP 2.0 with multi-tenant support and agent-addressing primitives, alongside a stable A2A v1.0. NIST is expected to reference both protocols in its AI agent standards guidance by Q4 2026 — the formal regulatory recognition that turns protocol adoption from a compliance best-practice into a hard requirement for regulated industries.
Gartner projects 40% of enterprise applications will include task-specific AI agents by end of 2026, up from less than 5% today.[3] Every agent needs to talk to tools; every tool integration following MCP works with every MCP-compatible client. The protocol that wins the tool connectivity standard wins the agentic AI infrastructure layer. MCP has.
Start with the official TypeScript SDK (@modelcontextprotocol/sdk), wire up your first server against Claude Desktop or Cursor, and instrument it with MCP Inspector for development and Reticle for production debugging. Every tool mentioned in this guide — the JSON Formatter, AI Token Counter, and Codebase Graph Visualizer — is available at wowhow.cloud/tools.
Written by
anup
Expert contributor at WOWHOW. Writing about AI, development, automation, and building products that ship.