What is MCP and why you should pay attention

Mar 28, 2025 · 7 min read

TL;DR: MCP is really taking off. There are now thousands of MCP “servers”, and even though Anthropic invented the protocol, just a few days ago OpenAI adopted it too. Servers are like “apps” for AI, with one important difference: they can be combined much more flexibly. We are seeing the beginning of an AI ecosystem, much like the one that emerged for mobile a decade ago.

Details:

MCP (Model Context Protocol) was released by Anthropic as an open standard in November 2024. While the initial reaction was muted, in the last few months it has really taken off. In late March, even OpenAI — Anthropic’s archrival — adopted it.

But what exactly is it and why is it a big deal?

What it is

At its core, MCP is a way of extending the functionality of an AI, in much the same way an app extends the functionality of a phone.

There are two key concepts to understand. A host application (like Claude Desktop) is the AI-facing program you actually interact with. An MCP server is an extension that the host talks to, and MCP defines exactly how that conversation happens. (There is a third notion, the MCP client, but for this discussion hosts and clients are more or less synonymous.) The great thing about MCP is that it is an open standard, which means different host applications can use the same MCP servers. Right now there are a few dozen host applications (here’s a list: https://github.com/punkpeye/awesome-mcp-clients). They include not only Claude Desktop and Claude Code, but also editors like Cursor and terminals like oterm.

While there are dozens of MCP hosts, there are now thousands of MCP servers, and there are entire websites devoted to cataloging them (such as https://mcp.so/). They cover a plethora of use cases, many of them giving an AI access to another piece of the digital world. For an ecosystem to go from announcement to 5,000 applications in a matter of months is downright amazing.

To give some concrete examples of different types of MCP Servers, let’s have a look at the suite of reference servers Anthropic released:

  • Google Maps: Local search, details of places, etc.
  • Slack: Send and receive messages.
  • Memory: Remember things across sessions.
  • Time: Time and timezone conversions.
  • Puppeteer: Interact with a headless browser and send back HTML and images.
  • EverArt: An image generator. Note that MCP is not limited to text at all.
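
To make this concrete, here is a rough sketch of what a minimal MCP server looks like, using the official Python SDK’s FastMCP helper. The “dinner-finder” server and its find_restaurants tool are made up for illustration; the decorator-based pattern is the part that matters.

    # A minimal MCP server sketch (Python SDK, FastMCP helper).
    # The "find_restaurants" tool is hypothetical; real servers like the
    # Google Maps one expose similar but richer tools.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("dinner-finder")

    @mcp.tool()
    def find_restaurants(city: str, cuisine: str) -> str:
        """Return a short list of restaurants matching a cuisine in a city."""
        # A real server would call an external API here; this is a stub.
        return f"Top {cuisine} spots in {city}: (stubbed results)"

    if __name__ == "__main__":
        mcp.run()  # by default this speaks MCP over stdio, which is how hosts launch local servers

A host like Claude Desktop can launch this script directly, and the tool becomes available to the model just like the reference servers above.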

Why it matters

What you are seeing is the emergence of an AI ecosystem, and MCP servers are its first equivalent of apps. But there are some key differences compared to existing app ecosystems. These differences are a natural result of the fact that AI is more flexible than rigid structures like traditional APIs: the input and output are simply text.

Open standards based extensibility

With both OpenAI and Anthropic adopting the standard, we can hopefully avoid an Android vs iOS situation. Perhaps this is one reason developers have adopted it so widely: implement a server once, and suddenly it is accessible to dozens of host applications. I write it once, and all of a sudden people using Claude or Cline or Gemini or whatever can use what I have built.

The power of integration and chaining

When you install a conventional app, it’s mostly an isolated experience: you can only really interact with one app at a time. If you want two apps to talk to each other, you have to build that integration yourself, or use some clumsy “glue” like Zapier.

With MCP, the host can take the results from one MCP server and feed them to another MCP server; it can take results from multiple MCP servers and combine them. Here is one concrete example of how this is like a superpower.

  • I could listen on Slack for when someone says “Find us a place to go to dinner”
  • I could get results from Google Maps and Yelp MCP Servers and integrate them to give more comprehensive results
  • I could use the Memory MCP server to store and retrieve people’s food preferences based on what they said on Slack. I don’t have to use a database: Memory uses a knowledge-graph representation, which works really well with LLMs and is also incredibly free-form.
  • I could use the OpenTable MCP server to make a reservation.
  • I could post on Slack “Hey I looked at all your food preferences, and nearby restaurants and I made a reservation for you at X.”

You can imagine many scenarios like this, but the key point is that, to accomplish a goal, the MCP host can use many MCP servers together.
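
In practice the host’s model decides which tools to call, but the mechanics look roughly like the sketch below, which uses the Python SDK’s client classes to connect to two servers and pass one result into the other. The server launch commands and tool names here are illustrative assumptions, not the exact APIs of the reference servers.

    # Illustrative sketch: a host chaining two MCP servers.
    # Server commands and tool names are assumptions, for illustration only.
    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    memory = StdioServerParameters(command="npx", args=["-y", "@modelcontextprotocol/server-memory"])
    maps = StdioServerParameters(command="npx", args=["-y", "some-maps-mcp-server"])  # hypothetical

    async def plan_dinner() -> None:
        async with stdio_client(memory) as (mr, mw), stdio_client(maps) as (pr, pw):
            async with ClientSession(mr, mw) as mem, ClientSession(pr, pw) as places:
                await mem.initialize()
                await places.initialize()

                # 1. Pull stored food preferences out of the Memory server.
                prefs = await mem.call_tool("search_nodes", arguments={"query": "food preferences"})

                # 2. Feed those preferences into a search on the maps server.
                options = await places.call_tool(
                    "search_places",  # hypothetical tool name
                    arguments={"query": f"restaurants matching {prefs.content}"},
                )
                print(options.content)

    asyncio.run(plan_dinner())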

A step towards a mesh of agents

A single thing (let’s call it an agent) can be both a host and a server. For example, Claude Code is a host (it can use MCP servers like GitHub to check code in), but it can also be a server itself (so that, for example, Claude Desktop could ask Claude Code to help it solve a coding problem). It is very easy to see where this is going: you now have something that can both make requests of other agents and receive requests from other agents.

Isn’t this just the same thing as tools?

When I first learned about MCP, the big point I got stuck on was “isn’t this just the same thing as tools that developers have already been using?”

In a way, yes. In fact, MCP specifies three kinds of things a server can expose: tools, resources (files, URLs, etc.), and prompts.
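
To make the distinction concrete, here is a rough sketch of one server exposing all three, again using the Python SDK’s FastMCP helper. The server name and the specific tool, resource, and prompt are made up for illustration.

    # Sketch of the three MCP primitives in one server; names are illustrative.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("news-helper")

    @mcp.tool()
    def fetch_headlines(topic: str) -> str:
        """A tool: an action the model can invoke with arguments."""
        return f"(stubbed) headlines about {topic}"

    @mcp.resource("prefs://interests")
    def interests() -> str:
        """A resource: read-only context the host can attach, like a file or URL."""
        return "AI, distributed systems, cycling"

    @mcp.prompt()
    def daily_bulletin(name: str) -> str:
        """A prompt: a reusable template the user can trigger by name."""
        return f"Write a short news bulletin for {name} based on their interests."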

But there are two fundamental differences:

  • Tools are targeted at developers, who use them in pretty constrained situations. MCP is targeted at users.
  • MCP is more dynamic: each user can have their own set of tools that they add and remove.

A concrete example

I wanted to give a concrete example of how I’ve used a single MCP server in a very simple way to build myself a daily news bulletin system. This is not the fanciest example ever, but what surprised me was how easy it was. All I used was the Memory MCP server, which maintains a knowledge graph: entities (like me, or Mountain View) and relations (the entities “Waleed” and “Mountain View” are linked by a “lives in” relationship). Claude Desktop is my MCP host of choice. I started telling Claude Desktop to store things about me, like my interests, and it stored all of this in the memory system. I then asked it to pull the latest news for me (using web search). The result was surprisingly good, but I didn’t want it to repeat the content the next day, so I also asked it to remember what it had told me the day before so it wouldn’t duplicate the content.
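
Under the hood, the Memory server keeps facts like these as a small graph. Here is roughly the shape of the data involved, sketched in Python; the field names are approximate, based on the reference server’s entity/relation model.

    # Approximately the kind of data the Memory server stores (field names approximate).
    entities = [
        {"name": "Waleed", "entityType": "person",
         "observations": ["interested in AI news", "prefers concise bulletins"]},
        {"name": "Mountain View", "entityType": "place", "observations": []},
    ]
    relations = [
        {"from": "Waleed", "to": "Mountain View", "relationType": "lives in"},
    ]

Because it is just entities, relations, and free-text observations, the model can store almost anything without having to design a schema first.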

This is just the start. Next, I will add an MCP server for Google Tasks so I can add todos after reading the bulletin, or ask Claude Desktop to email a particular item from it to a friend. Or I will add an MCP server for my calendar app and have the day’s events included in the bulletin.

How would we have done this without MCP? I would have had to write the code for an app that lived at a URL, work out how I would capture user preferences, then hook it up to a web search.

But MCP allowed me to do all of that in a single pane (Claude Desktop) using nothing but natural language.

How does this affect me?

Hopefully this has encouraged you to at least experiment with MCP on your favorite host (you probably have one already). But more than that, if you’re working with AI-based systems, you might also want to be thinking about how you participate in this ecosystem. Here are some questions you should be asking yourself:

  • Should I expose the capabilities of my system as an MCP server?
  • Does it give my users a new way to access the capabilities of the AI System?
  • Do I even need a user interface any more? Could any MCP host be my new interface?
  • Do I want to turn my application into an MCP host to expand its capabilities?

I am already thinking of doing so as part of my open-source work on Islamic AI assistants (both the server and the host parts of the equation).

What’s next?

It’s still early days for MCP: there is clearly a long way to go, and a lot that still has to be fixed.

Installation and setup

Right now, adding an MCP server to your host involves editing JSON configuration files and running either Docker or Node locally. It’s clumsy.
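
For example, adding the Memory server to Claude Desktop today means editing its claude_desktop_config.json by hand, along these lines (a sketch of the format; the exact file location varies by platform):

    {
      "mcpServers": {
        "memory": {
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-memory"]
        }
      }
    }

This is fine for developers, but it is a long way from the one-tap install experience of a mobile app store.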

Security, auth, etc

It’s still a pain to get auth running correctly. For example, to use the Google Drive integration, you need a developer API key and have to jump through quite a few hoops to use it. Protections around security, prompt injection, and so on are still very basic; as a result, current hosts have to keep asking you for permission before they do things.

Dynamic Discovery

The next step is standardizing a way to find out what MCP servers are available and making that discovery accessible to the LLM itself, so that the model can go and find the MCP servers it needs to use.

Conclusions

MCP can seem theoretical and complicated, but it is actually a major step forward: the beginning of an open AI ecosystem. In particular, the way adding an MCP server extends the functionality of a host, empowering the AI to integrate and chain multiple capabilities to accomplish a goal, is a huge opportunity for new applications and new kinds of flexibility. You should be asking yourself what the implications of MCP’s rapid uptake are for whatever it is you work on.

Written by Waleed Kadous

Co-founder of CVKey, ex Head Engineer Office of the CTO @ Uber, ex Principal Engineer @ Google.
