In March 2026, Anthropic’s Model Context Protocol crossed 97 million monthly SDK downloads — making it one of the fastest-growing open-source projects in AI history. On April 2–3, the Agentic AI Foundation hosted the first MCP Dev Summit North America in New York City, drawing developers from Anthropic, Google, Microsoft, Datadog, Hugging Face, and dozens of enterprise teams deploying MCP in production. If you’ve been treating MCP as an Anthropic-specific integration layer, the events of the last four months require a significant update to that mental model. MCP is now infrastructure — neutral, governed, and permanent.
What MCP Is and Why It Became Inevitable
The Model Context Protocol is an open standard that defines how AI agents connect to external tools, APIs, and data sources. Before MCP, every AI integration was a custom build: your LLM application needed bespoke code to talk to your database, custom authentication for each API, and unique error-handling logic for every tool. Multiply that across dozens of tools and multiple AI providers, and the integration overhead became the dominant cost of building AI applications.
MCP eliminates that cost with a universal interface. Every tool that exposes an MCP server can be used by any MCP-compatible agent — Claude, ChatGPT, Gemini, Cursor, Microsoft Copilot, or a custom agent built with any framework. The USB-C analogy has become a cliché in technical writing, but it’s accurate: MCP is the USB-C port for AI tool connections. Build the server once, use it with every AI client.
Anthropic released MCP in December 2024. Within twelve months, every major AI provider had adopted it as a first-class tool connection standard. By March 2026, MCP crossed 97 million combined monthly downloads of the Python and TypeScript SDKs. By April 2026, there were over 10,000 published MCP servers, covering everything from developer tools to Fortune 500 enterprise data systems. The adoption curve is not gradual — it is vertical.
From Anthropic Property to Industry Standard: The Linux Foundation Move
In December 2025, Anthropic donated MCP to the newly formed Agentic AI Foundation (AAIF), a directed fund under the Linux Foundation. This is not a minor governance update — it is the moment MCP formally crossed from “Anthropic’s protocol” to “AI industry infrastructure.”
The AAIF was co-founded by three companies that are direct competitors: Anthropic, OpenAI, and Block. Supporting members at launch included Google, Microsoft, Amazon Web Services, Cloudflare, and Bloomberg. The fact that OpenAI — the company most often positioned as competing against Anthropic — is a co-founder of the foundation that governs MCP tells you everything you need to know about where the industry landed on standardization. When competitors jointly govern a protocol, it is because the protocol benefits everyone more than any single-vendor alternative would.
Three key projects were donated to the AAIF at formation:
- MCP (Model Context Protocol) from Anthropic — the universal tool connection standard for AI agents
- goose from Block — an open-source autonomous coding agent built on MCP
- AGENTS.md from OpenAI — a specification for describing agent capabilities in a standardized format
Open governance under the Linux Foundation means MCP’s evolution is now subject to a community-governed specification process, not controlled by any single vendor. Protocol changes require consensus across the AAIF membership. This is exactly what enterprise procurement teams needed to see before committing to MCP as a foundational layer for production AI systems. The Linux Foundation has successfully governed foundational internet infrastructure before — the list includes Node.js, OpenAPI, and the Open Container Initiative. MCP is joining that tier.
The MCP Dev Summit North America: What It Revealed About Enterprise Adoption
The MCP Dev Summit North America on April 2–3, 2026 in New York City was the first major public event for the Agentic AI Foundation, and its 95+ sessions provide a snapshot of where enterprise MCP deployment actually stands in April 2026.
The themes that dominated the program reveal the maturity of production MCP:
Authentication and authorization at the MCP layer. The most requested sessions addressed OAuth, API key management, and per-tool permission scoping. The developer community has moved past “how do I connect to MCP” and arrived at “how do I deploy MCP securely in a multi-tenant environment.” This is exactly the maturity signal that separates exploration-phase technology from production-phase infrastructure.
Multi-agent coordination using A2A alongside MCP. Several sessions addressed the complementary relationship between MCP (agent-to-tool) and A2A (agent-to-agent), reflecting a widespread realization that complex agentic workflows require both protocols working together.
Enterprise observability. Datadog presented a session on instrumenting MCP calls for tracing, logging, and alerting in production environments. The fact that Datadog is building native MCP observability indicates that MCP traffic has become significant enough at scale to require dedicated monitoring infrastructure — the same inflection point that happened with HTTP API traffic a decade earlier.
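The kind of per-call telemetry described above can be sketched in a few lines: wrap each tool handler so every invocation emits a structured trace record with tool name, outcome, and latency. This is a minimal stdlib sketch of the pattern, not any vendor's API; the trace record shape and the wrapped pricing tool are assumptions for illustration.

```python
import time

# In-memory trace buffer. A production system would export these records
# to an observability backend rather than keep them in a list.
TRACES = []

def traced(tool_name, fn):
    """Wrap a tool handler so every call emits a structured trace record."""
    def wrapper(**args):
        start = time.perf_counter()
        ok = True
        try:
            return fn(**args)
        except Exception:
            ok = False
            raise
        finally:
            TRACES.append({
                "tool": tool_name,
                "ok": ok,
                "duration_ms": round((time.perf_counter() - start) * 1000, 3),
            })
    return wrapper

# Hypothetical pricing tool, wrapped for tracing.
lookup = traced("pricing.lookup", lambda sku: {"sku": sku, "price": 19.99})
lookup(sku="A-100")
print(TRACES[0]["tool"])  # pricing.lookup
```

The same wrapper applies unchanged to every tool a server exposes, which is why call-level instrumentation scales well once MCP traffic is significant.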
MCP and A2A: The Two Protocols That Define How Agents Work
One of the most clarifying developments of early 2026 is the widespread recognition that MCP and A2A solve different problems and are complementary building blocks, not competing standards.
MCP handles agent-to-tool connections. When a Claude or GPT-5.4 agent needs to query a database, call an API, read a file system, or execute a function — that communication goes through MCP. MCP defines how the agent discovers what tools are available (via the MCP server’s capability list), how it calls them (via standardized JSON-RPC), and how it handles the results. MCP is the vertical connection between an agent and the resources it operates on.
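Concretely, the discovery and invocation steps above are plain JSON-RPC 2.0 messages. The `tools/list` and `tools/call` method names follow the MCP specification; the `query_database` tool and its arguments are hypothetical, and real clients would send these over an MCP transport rather than just build the strings.

```python
import json

def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request envelope as a JSON string."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# 1. Discover what tools the server exposes.
list_req = make_request(1, "tools/list")

# 2. Invoke one of the discovered tools by name.
call_req = make_request(2, "tools/call", {
    "name": "query_database",  # hypothetical tool name
    "arguments": {"sql": "SELECT COUNT(*) FROM orders"},
})

decoded = json.loads(call_req)
print(decoded["method"])          # tools/call
print(decoded["params"]["name"])  # query_database
```

Because every MCP client speaks this same envelope, a server that answers these two methods is usable from any compatible agent.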
A2A handles agent-to-agent communication. When one AI agent needs to delegate a subtask to a specialized agent, coordinate with an agent running on different infrastructure, or receive task requests from an orchestrating agent — that communication uses A2A. Google contributed A2A to the AAIF in mid-2025. A2A v1.0 shipped with gRPC transport, signed Agent Cards for cryptographic identity verification, and multi-tenancy support. SDKs are available in Python, Go, JavaScript, Java, and .NET.
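An Agent Card is the JSON document an agent publishes so peers can discover its identity and skills before delegating work to it. The sketch below approximates that shape with a naive required-field check; the field names are an illustration rather than a quotation of the A2A schema, and the cryptographic signing that v1.0 adds is omitted entirely.

```python
# Illustrative Agent Card for a supplier agent. Field names approximate
# the A2A spec; the endpoint URL and skill ids are hypothetical.
supplier_card = {
    "name": "supplier-agent",
    "description": "Handles purchase orders with upstream suppliers",
    "url": "https://agents.example.com/supplier",
    "version": "1.0.0",
    "skills": [
        {"id": "create-po", "description": "Create a purchase order"},
        {"id": "check-lead-time", "description": "Quote delivery lead time"},
    ],
}

REQUIRED = ("name", "url", "version", "skills")

def validate_card(card: dict) -> list:
    """Return the list of required fields missing from an agent card."""
    return [f for f in REQUIRED if f not in card]

print(validate_card(supplier_card))  # []
```

A real deployment would additionally verify the card's signature before trusting the endpoint it advertises, which is the point of signed Agent Cards in v1.0.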
A practical architecture for a production agentic system using both looks like this:
Orchestrator Agent (A2A orchestration layer)
  |
  +-- Inventory Agent --(MCP)--> Database tool
  |                    --(MCP)--> Pricing API
  |
  +-- Order Agent ------(A2A)--> Supplier Agent
                        --(MCP)--> ERP system

The orchestrator uses A2A to coordinate specialized agents. Each specialist uses MCP to operate its tools. Both protocols are under the AAIF. Both are production-ready. The developer community has been converging on this architecture throughout early 2026, and the MCP Dev Summit sessions confirmed it is the dominant pattern in enterprise deployments.
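The two-layer split can be made concrete with both protocol layers stubbed out as plain callables: the orchestrator routes tasks to specialists (the A2A layer), and each specialist dispatches to its own tools (the MCP layer). Every class, method, and tool name here is a schematic assumption; real systems would use the A2A and MCP SDKs.

```python
class McpTool:
    """Stand-in for a tool exposed over MCP (agent-to-tool)."""
    def __init__(self, name, handler):
        self.name, self.handler = name, handler

    def call(self, **args):
        return self.handler(**args)

class SpecialistAgent:
    """A specialist operates its own tools over the MCP layer."""
    def __init__(self, name, tools):
        self.name = name
        self.tools = {t.name: t for t in tools}

    def handle(self, task, **args):
        # Entry point an orchestrator would reach over A2A.
        return self.tools[task].call(**args)

class Orchestrator:
    """Routes tasks to specialists, playing the A2A layer in the diagram."""
    def __init__(self, agents):
        self.agents = {a.name: a for a in agents}

    def delegate(self, agent_name, task, **args):
        return self.agents[agent_name].handle(task, **args)

# Wire up the inventory specialist from the diagram with a stubbed tool.
inventory = SpecialistAgent("inventory", [
    McpTool("stock_level", lambda sku: {"sku": sku, "on_hand": 42}),
])
orchestrator = Orchestrator([inventory])
print(orchestrator.delegate("inventory", "stock_level", sku="A-100"))
# {'sku': 'A-100', 'on_hand': 42}
```

The key design point survives the simplification: the orchestrator never touches a tool directly, and a specialist never coordinates with its peers.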
For developers deciding where to invest: if your agent needs to use tools, MCP is the immediate priority. If your system involves multiple specialized agents coordinating on complex tasks, add A2A. The typical order of operations is MCP first, A2A when the workflow genuinely requires agent autonomy and specialization across separate systems.
What 10,000 MCP Servers Actually Means
The 10,000+ published MCP servers figure deserves unpacking. These are not 10,000 duplicates or toy implementations — the catalog reflects genuine coverage across tool categories, achieved in under 18 months:
- Developer infrastructure: GitHub, GitLab, Docker, Kubernetes, Terraform, database adapters for PostgreSQL, MySQL, MongoDB, and Redis
- Productivity systems: Google Workspace, Microsoft 365, Notion, Linear, Jira, and Confluence
- Data and analytics: Snowflake, BigQuery, dbt, Looker, Tableau, and Grafana
- Communication platforms: Slack, email via SMTP/IMAP, calendar APIs, and CRM integrations
- AI tooling: Hugging Face model serving, vector database adapters for Pinecone, Weaviate, and Chroma, and embedding pipelines
At 10,000 servers, MCP has achieved something no previous AI integration approach managed: a permissionless ecosystem where third-party developers build MCP servers without coordinating with Anthropic or any other AI provider. The same dynamic drove the explosion of npm packages and Docker Hub images. When the protocol is open and the tooling is accessible, the ecosystem compounds on its own. According to our analysis of the MCP server registry in April 2026, the categories with the fastest server growth are enterprise data connectors and developer tool integrations — exactly the categories that drive production AI deployment value.
What Developers Should Do Right Now
The MCP Linux Foundation governance move and the 97M download milestone have concrete implications for developers building AI applications today:
- Build MCP servers for your internal tools. If your organization has APIs, databases, or SaaS tools that your AI agents need to access, exposing them as MCP servers is now clearly the right investment. You’re building to a stable open standard with vendor-neutral governance, not a proprietary integration format that a single company controls. The server you build today will work with Claude, ChatGPT, and any future MCP-compatible client.
- Evaluate A2A for multi-agent workflows. If you’re building systems where multiple specialized agents need to coordinate — rather than a single agent operating many tools — review A2A v1.0. The signed Agent Cards provide cryptographic identity verification that enterprise deployments require before trusting agent-to-agent communication.
- Update your security posture for MCP production deployments. The dominant theme at the MCP Dev Summit was authentication. Review how your MCP servers handle authorization: are per-tool permissions scoped correctly? Are API keys managed at the MCP transport layer, not embedded in prompts? Are MCP call logs captured for audit purposes? These are the questions production deployments are wrestling with in 2026.
- Monitor the AAIF specification process. The AAIF publishes proposals and RFCs via its GitHub organization. Watching the spec process gives you advance notice of protocol changes before they land in SDK releases. For teams with significant MCP investment, participating in the RFC process is worth the time.
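The per-tool permission scoping question from the security checklist above reduces to an authorization gate in front of the dispatch step: a client may call a tool only if it holds that tool's scope. This stdlib sketch hardcodes the client registry and scope names for illustration; a production deployment would derive scopes from OAuth tokens at the transport layer.

```python
# Hypothetical client-to-scope and tool-to-scope mappings.
SCOPES = {
    "reporting-agent": {"db.read"},
    "ops-agent": {"db.read", "db.write", "deploy.run"},
}

TOOL_SCOPE = {
    "query_database": "db.read",
    "update_record": "db.write",
    "trigger_deploy": "deploy.run",
}

def authorize(client_id: str, tool: str) -> bool:
    """Allow a tool call only if the client holds the tool's scope.

    Unknown clients and unknown tools are denied by default.
    """
    return TOOL_SCOPE.get(tool) in SCOPES.get(client_id, set())

print(authorize("reporting-agent", "query_database"))  # True
print(authorize("reporting-agent", "trigger_deploy"))  # False
```

Denying by default for unknown clients and unregistered tools is the important choice here: a new tool added to the server is unreachable until someone explicitly grants a scope for it.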
If you’re building AI-powered applications and want a production-ready starting point that includes MCP integration patterns, browse our developer tools collection for Next.js and TypeScript starter kits built for the current AI infrastructure stack. Use our free token counter tool to estimate context overhead when building MCP-powered long-context agent workflows.
The Bottom Line
MCP reaching 97 million monthly downloads and operating under Linux Foundation governance is the clearest signal yet that AI agent infrastructure has moved from experimental to standardized. The trajectory — from Anthropic’s internal protocol to a co-governed open standard backed by every major AI provider in 12 months — is without precedent in recent developer tooling history.
For developers, this changes the risk calculus for MCP investment fundamentally. Building on MCP is now comparable to building on HTTP or OAuth: you’re building on infrastructure that the industry has collectively decided to maintain and extend. The 10,000 servers, the 97 million downloads, and the Linux Foundation governance form a triple signal that MCP has permanently escaped the “might get deprecated by Anthropic” risk category.
The MCP Dev Summit North America confirmed that enterprise deployment has crossed the early adopter threshold. Teams at Datadog, Hugging Face, and Microsoft are not experimenting with MCP in sandboxes — they’re running it in production and building dedicated observability tooling for it. The developers who build MCP fluency now will be the ones who architect the next wave of enterprise AI applications. Read our Amazon Bedrock AgentCore guide to see how MCP fits into a full production agent deployment stack, and our AI coding tool comparison for the MCP-native environments where your agents will run.