Introduction
Model Context Protocol (MCP) is the new open-source standard that organizes how AI agents interact with external data and systems.
In this post we will see what MCP is, how it works, how it is applied, and what its current limitations are. Watch my video above for a practical example of an MCP server coded in C# and .NET.
The problem
So, what problem is MCP trying to solve here?
Let’s imagine we want to connect four smartphones to our computer to access their data: until a few years ago, we would have needed four different cables with different connectors, and maybe even four USB ports.
The solution
Then the USB-C standard came along and sorted everything out: now I only need one type of cable, one type of port, one charger, and I can connect and charge phones of any brand.
So, think of MCP as the USB-C for AI agents:

MCP standardizes the way AI agents can connect to data, tools, and external systems. The idea, which isn’t all that original, is to replace the plethora of custom integration methods—each API or data source having its own custom access method—with a single, consistent interface.
According to MCP specifications, an MCP server exposes standard operations, lists what it offers in a standardized manner, and can perform an action when an AI agent, the MCP client, requests it. On the client side, the AI agent reads what the server has to offer, understands the description, parameters, and the meaning of the parameters, and thus knows if and when it is useful to call the server and how to call it.
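Under the hood, MCP is built on JSON-RPC 2.0. As a rough sketch of this discovery step (message shapes simplified from the spec; the `get_weather` tool and its fields are invented for illustration), a client asking a server what it offers exchanges messages like these:

```python
# Client asks the server which tools it exposes (MCP "tools/list" method).
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A simplified server response: each tool carries a name, a human-readable
# description, and a JSON Schema for its parameters. This is exactly what
# the agent's LLM reads to decide if, when, and how to call the tool.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_weather",  # hypothetical example tool
                "description": "Return the current weather for a city.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ]
    },
}
```

Note that the server never needs to know anything about the client: the description and schema are self-contained, so any compliant agent can consume them.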
Here, if you think about it, it’s really a stroke of genius—simple yet powerful. On one side, I have a standard interface, and on the other side, I have an LLM that learns the server’s intents from the standard interface and thus understands if and when to use it. And all of this can even work automatically, without human intervention.
The MCP protocol
So, let’s get a bit practical. MCP helps us standardize three things, fundamentally:
- Resources.
- Tools.
- Prompts.

Resources
An MCP server can provide resources to calling agents, and by resources, we mean context or knowledge base. For example, we can have an MCP server that encapsulates your database or a set of documents, and any AI agent can query your MCP server and obtain documents relevant to a prompt requested by its user. The brilliance here lies in having completely decoupled everything, meaning the MCP server has no knowledge of who is calling it, and AI agents use MCP servers without having hardcoded links or parameters in their own code.
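In protocol terms, this decoupling works because every resource is addressed by a URI the server advertises. A simplified sketch (the URI and metadata below are invented for illustration) of the two-step flow, list then read:

```python
# Server's answer to "resources/list": each resource has a URI and metadata.
resources = [
    {
        "uri": "file:///docs/handbook.md",  # hypothetical resource URI
        "name": "Company handbook",
        "mimeType": "text/markdown",
    }
]

# The client then issues "resources/read" with the chosen URI. Nothing about
# the server's location or internals is hardcoded in the agent.
read_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "resources/read",
    "params": {"uri": resources[0]["uri"]},
}
```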
Tools
Tools are simply functions that the MCP server exposes on its interface, nothing more, nothing less, and that AI agents can call.
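A tool invocation is just another JSON-RPC call. Sketching it with a hypothetical `get_weather` tool (the arguments must match the input schema the server declared; the response payload here is illustrative):

```python
# Client invokes a tool via the MCP "tools/call" method.
call_request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "get_weather",            # hypothetical tool name
        "arguments": {"city": "Rome"},    # must satisfy the tool's schema
    },
}

# A typical (simplified) result: tool output comes back as content blocks,
# plus a flag indicating whether the tool itself reported an error.
call_response = {
    "jsonrpc": "2.0",
    "id": 3,
    "result": {
        "content": [{"type": "text", "text": "18°C, partly cloudy"}],
        "isError": False,
    },
}
```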
Prompts
Prompts, finally, allow MCP servers to define reusable prompt templates and workflows, which AI agents can then present to users, as well as to the internal LLMs of the agent itself, to interact with the MCP server.
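A prompt template is retrieved the same way (a sketch; the template name, its arguments, and the expanded text are invented for illustration):

```python
# Client fetches a reusable prompt template via "prompts/get",
# supplying arguments to fill the template's placeholders.
get_prompt = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "prompts/get",
    "params": {
        "name": "summarize_document",          # hypothetical template name
        "arguments": {"style": "bullet points"},
    },
}

# The server expands the template into ready-to-use chat messages that the
# agent can hand straight to its LLM.
prompt_response = {
    "jsonrpc": "2.0",
    "id": 4,
    "result": {
        "messages": [
            {
                "role": "user",
                "content": {
                    "type": "text",
                    "text": "Summarize the document as bullet points.",
                },
            }
        ]
    },
}
```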
MCP marketplaces
The MCP standard was proposed by Anthropic last November, and within a few months, it has literally taken off. There are already numerous implementations of MCP servers scattered across the internet, covering practically everything. So, you can really go wild creating all sorts of AI agents simply by integrating these servers into your projects.
To make these MCP servers easier to find, some marketplaces have also emerged. I’ve counted five so far:

I would say the most comprehensive ones at the moment are the first two, MCP.so and MCPServers.org. However, it’s possible that Anthropic might decide to release its official marketplace in the near future.
Areas of improvement
We have seen that MCP is a very promising standard, but what are its weaknesses, if any? Well, there are a few at the moment, which is understandable for such a young standard. Currently, the biggest limitation is the lack of a standard authentication and authorization mechanism between client and server.
Work is underway on this; the idea is to integrate OAuth 2.0/2.1. Another significant shortcoming is that there is currently no proper discovery protocol for the various MCP servers scattered across the network. Yes, we’ve seen that there are marketplaces, but if I wanted my AI agent to perform discovery completely autonomously and find the tools it needs on its own, that’s not possible yet.
We know that Anthropic is working on it. There will be a global registry sooner or later, and when it finally becomes available, we will definitely see another significant boost in the adoption of this protocol. Additionally, the ability to do tracing and debugging is missing, and that’s no small matter. Imagine, for example, that our AI agent calling these MCP servers encounters an error or something doesn’t go as expected:
What do we do? Currently, from the caller’s perspective, MCP servers are black boxes. If something goes wrong, it’s impossible for us to understand what’s happening. There’s also no standard for sequential calls and for resuming a workflow that might have been interrupted halfway due to an error.
For example, suppose my AI agent needs to make ten calls to various MCP tools and encounters an error on the fifth call. What do I do? Managing the retry/resume state is entirely the client’s responsibility, and there is no standard; everyone implements it in their own way. So, MCP is still young and has significant limitations. However, it is developed and promoted by Anthropic, has been well received by the community, and has been adopted by Microsoft and also by Google, albeit with some reservations in the latter case.
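Since the spec leaves retry and resume to the client, every implementer rolls their own. A minimal client-side sketch (all names hypothetical, not part of any SDK) that records which step failed so the workflow can resume from there instead of restarting from scratch:

```python
def run_workflow(steps, call_tool, start_at=0):
    """Run MCP tool calls in order; on failure, report where to resume.

    steps: list of (tool_name, arguments) tuples.
    call_tool: function that performs the actual MCP "tools/call" request.
    start_at: index to resume from after a previous failure.
    """
    results = []
    for i in range(start_at, len(steps)):
        name, args = steps[i]
        try:
            results.append(call_tool(name, args))
        except Exception as exc:
            # Hand the failure index back so the caller can retry from here
            # instead of repeating the calls that already succeeded.
            return {"ok": False, "failed_at": i,
                    "results": results, "error": str(exc)}
    return {"ok": True, "results": results}
```

Persisting `failed_at` (and the partial `results`) between runs is exactly the kind of state the protocol does not standardize today, which is why every client ends up with its own variant of this pattern.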
Conclusions
So, I would say that the potential to become a de facto standard is definitely there, and it’s certainly worth spending time to study and adopt it in our AI agents.
Subscribe to my blog and YouTube channel for more ☺️