We have all heard about the Model Context Protocol (MCP) in the context of artificial intelligence. In this article, we will dive into what MCP is and why it is becoming more important by the day. If APIs are already available, why do we need MCP? And although MCP has seen a rapid rise in popularity, does this new protocol have staying power? In the first section, we will look at the parallels between APIs and MCP and then explore what sets MCP apart.
From APIs to model context protocol
A single, isolated computer is limited in the amount of data it can access, and that directly limits its usefulness. APIs were created to enable data transfer between systems. In a similar role, the Model Context Protocol (MCP) is a protocol for communication between external systems and AI agents built on large language models (LLMs). APIs are written primarily for developers, while MCP servers are created for AI agents (Johnson, 2025).
What is MCP?
MCP was introduced by Anthropic on November 25, 2024, as an open-source standard for communication between AI assistants and external data sources. AI agents are constrained by the fragmentation of data across isolated systems (Anthropic, 2024). The protocol defines how agents can interact with external systems, elicit user input, and act autonomously.
At its core, MCP uses the client-server model, and there are three main features each for clients and servers:
- MCP servers: tools, resources, and prompts
- MCP clients: elicitation, roots, and sampling
To keep this article concise, we will focus on the most important feature of each. For MCP servers, tools are the primary way to perform complex tasks; for clients, elicitation enables two-way communication between the agent and the user.
Instead of explicitly calling APIs, agents select and use the appropriate tools (functions) based on the input they receive from the user. If a tool requires certain parameters, the agent will use elicitation to request that data from the user. This enables a more responsive workflow with two-way communication between the LLM and the user.
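To make the tool-plus-elicitation flow concrete, here is a minimal, simplified sketch in plain Python. It is not the MCP SDK: the tool names, the `Tool` class, and the `elicit` helper are all hypothetical stand-ins for the real protocol messages, used only to show how a missing parameter is requested from the user instead of causing a hard failure.

```python
from dataclasses import dataclass

# Hypothetical sketch of the MCP tool + elicitation flow.
# Tool, elicit, and the tool names are illustrative, not the real SDK.

@dataclass
class Tool:
    name: str
    description: str
    required_params: list

TOOLS = [
    Tool("search_flights", "Find flights between two cities",
         ["origin", "destination"]),
    Tool("book_calendar", "Add an event to the user's calendar",
         ["title", "date"]),
]

def elicit(param: str) -> str:
    """Stand-in for elicitation: the server asks the client
    (and ultimately the user) for a missing value."""
    canned_user_answers = {"origin": "AMS", "destination": "LIS"}
    return canned_user_answers[param]

def run_tool(tool: Tool, provided: dict) -> dict:
    # Fill in missing required parameters via elicitation
    # instead of failing the way a strict API call would.
    args = dict(provided)
    for param in tool.required_params:
        if param not in args:
            args[param] = elicit(param)
    return args

# The user prompt only mentioned a destination; the origin is elicited.
args = run_tool(TOOLS[0], {"destination": "LIS"})
print(args)  # {'destination': 'LIS', 'origin': 'AMS'}
```

The point of the sketch is the round trip: the agent picks a tool, notices an unfilled parameter, and turns back to the user for it, which is exactly the two-way workflow described above.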
Why do we need MCP now?
A fair question to ask is: if APIs already exist, why is there a need for MCP? APIs were designed to connect fragmented data systems, and SaaS applications already enable two-way communication with users.
The main driver behind MCP is that the consumer of external data has changed from developers to AI agents. A developer typically programs an application against APIs so that it behaves deterministically, whereas an AI agent takes a user prompt and makes autonomous decisions to execute the request. By nature, the execution of a workflow by an AI agent is not deterministic.
APIs are machine-executable contracts that act in a deterministic fashion; they work when their users know what action needs to be taken next (Posta, 2025). AI agents run on top of probabilistic LLMs, which do not consistently deliver repeatable results across tasks (Atil, 2024). Variance in an LLM's responses is expected, and this poses a problem for autonomous execution.
MCP to the rescue
MCP addresses the problem of variance in agent execution by providing a high-level abstraction that wraps functionality rather than individual API endpoints. Tools enable LLMs to perform actions such as searching for a flight, booking a calendar event, and more (Understanding MCP Servers, 2026).
A common misconception is that tools are just thin wrappers over existing API calls. Tools are not meant to be an abstraction over API endpoints but over functionality. Exposing a large number of APIs directly as tools inflates both cost and context size for the agent, which is not ideal (Johnson, 2025).
A tool may chain multiple API calls in its implementation to achieve the desired outcome. An agent reviews the list of available tools, automatically selects the most appropriate ones, and determines the order in which to execute them.
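The "functionality, not endpoints" idea can be sketched as follows. This is an illustrative example, not real MCP code: the three `api_*` functions are stubs standing in for hypothetical HTTP endpoints, and the single tool composes them into one user-level capability.

```python
# Illustrative sketch: one MCP-style tool wrapping several underlying
# API calls. The api_* functions are stubs, not real endpoints.

def api_search_flights(origin: str, destination: str) -> list:
    """Stub for something like GET /flights?origin=...&destination=..."""
    return [{"id": "FL123", "price": 120}, {"id": "FL456", "price": 95}]

def api_create_booking(flight_id: str) -> dict:
    """Stub for something like POST /bookings."""
    return {"booking_id": f"BK-{flight_id}", "status": "confirmed"}

def api_add_calendar_event(title: str) -> dict:
    """Stub for something like POST /calendar/events."""
    return {"event": title, "saved": True}

def book_cheapest_flight(origin: str, destination: str) -> dict:
    """A single tool exposing one piece of functionality
    ("book me the cheapest flight") built from three API calls."""
    flights = api_search_flights(origin, destination)
    cheapest = min(flights, key=lambda f: f["price"])
    booking = api_create_booking(cheapest["id"])
    api_add_calendar_event(f"Flight {cheapest['id']}")
    return booking

result = book_cheapest_flight("AMS", "LIS")
print(result)  # {'booking_id': 'BK-FL456', 'status': 'confirmed'}
```

Exposing the three endpoints as three separate tools would force the agent to reason about sequencing and pay for all three descriptions in its context window; one functionality-level tool keeps the agent's job (and token bill) small.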
MCP adoption boom
Since its release in 2024, MCP has seen a steady rise in popularity. The following chart from Google Trends shows the relative interest in MCP since its launch.
Many companies have launched their own MCP servers to facilitate building autonomous agents. The official MCP registry is still in preview, yet as of February 2026 it already lists over 6,400 registered servers, and that number is only expected to grow: the ecosystem has expanded massively in less than a year.
Other major players in the market have adopted MCP and added support to their clients. OpenAI added MCP support to ChatGPT in March 2025, and Google followed with its own support a few weeks later in April. This showcases both the staying power of the protocol and the fast pace of adoption.
What lies ahead?
MCP is still in the early stages of widespread adoption, and many applications have yet to mature and reach production. Leonardo Pineryo from Pento AI summarized it best: “MCP’s first year transformed how AI systems connect to the world. Its second year will transform what they can accomplish” (2025).
Guardrails around tools are an area that will see further development, as trust is one of the biggest concerns with AI agents. With better guardrails in place, an AI agent can be allowed to act with more autonomy. Over the next year, MCP is set to see continued growth, both in the sophistication of its capabilities and in the volume of its applications.