Introduction to Model Context Protocol (MCP): The USB-C of AI Integrations

Remember the dark ages of computer peripherals? That chaotic era when connecting a simple mouse required an archaeological expedition to find the right port? PS/2, serial, USB-A, mini-USB, micro-USB... a connectivity frustration in every dusty cable drawer.

Then came USB-C, and angels sang. One cable to rule them all.

Now imagine that same standardization revolution, but for AI models and the external tools they need to access. That is exactly what the Model Context Protocol (MCP) brings to the table.

The Problem: Integration Hell

Before MCP, connecting AI models to external tools often meant relying on specific frameworks like LangChain, Autogen, or CrewAI. Each framework provided its own way to build integrations and define tools, often requiring custom code and specific SDK knowledge for each target application. For instance, LangChain has a large collection of pre-built tools, but integrating a tool not already covered or using a different framework meant building it within that framework's specific constraints. This led to a fragmented ecosystem where tool compatibility and developer effort were tightly coupled to the chosen framework.

This leads to what I call the Law of Integration Despair (totally made up): "The complexity of maintaining an AI integration system grows multiplicatively with every new tool or model added."

Enter MCP: A Beautiful Standardization Story

Here's where Model Context Protocol swaggers onto the scene like the hero we didn't know we needed.

MCP is an open standard created by Anthropic (the folks behind Claude) that standardizes how AI applications connect with external tools, data sources, systems, and effectively any capability that can be controlled or accessed via code/software.

📚 Geek Corner
The M×N vs M+N Problem:

Before MCP (and wrappers like LangChain), connecting M different AI models with N different tools required M×N unique integrations.

With MCP (and good agent frameworks), you just need M clients and N servers: a total of M+N components. For example, 10 models and 10 tools means 100 bespoke integrations without MCP, but only 20 components with it.

This is reminiscent of the famous Metcalfe's Law in networks, which states that the value of a network grows proportionally to the square of the number of users.

Similarly, the pain of maintaining AI integrations grew at a quadratic rate, until MCP arrived to linearize it.

So why not just use REST APIs or gRPC directly?

Well, MCP actually borrows the best from both worlds. It keeps the descriptive, JSON-based simplicity of REST that plays nicely with LLMs, while also taking inspiration from gRPC’s “reflection” abilities — letting clients discover what tools are available without hardcoding the schema. The difference is that gRPC was designed to be compact and efficient for machine-to-machine comms — great for backend services, but not ideal for AI agents, which need more context, metadata, and flexibility for function calling. You’d have to bolt on extra layers to make gRPC work well for an LLM. MCP gives you that out of the box.
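
To make this concrete: in MCP, tool discovery is a single JSON-RPC 2.0 exchange. Here's a sketch of what that looks like on the wire; the `search_repositories` tool is purely illustrative, and real tool names depend on the server you're talking to.

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
```

The server replies with its tool catalogue, each tool described by a JSON Schema the model can use for function calling:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [{
      "name": "search_repositories",
      "description": "Search for GitHub repositories",
      "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"]
      }
    }]
  }
}
```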

How MCP Actually Works: Simple Yet Powerful

At its core, MCP follows a client-server architecture that's deceptively simple:

  1. MCP Hosts: These are the programs where the action happens, like Claude Desktop, your IDE (VS Code, Cursor, Trae), or a custom AI application built to use tools with MCP.

  2. MCP Clients: These maintain the connections with servers, handling all the protocol details.

  3. MCP Servers: These lightweight programs expose specific capabilities (like accessing your GitHub repos or Jira issues, or reading your Slack messages) through the standardized MCP interface.

  4. Data Sources: These are what the servers connect to—your local files, databases, or remote services.

The beauty is in the standardization. Each MCP server exposes three main types of functionality (there's a minimal code sketch after this list):

  • Tools: Functions that models can call to perform actions (like searching files or posting to Slack)
  • Resources: Data sources that models can access (like your local filesystem)
  • Prompts: Pre-defined templates to help models use tools effectively
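
Here's what those three look like in practice: a minimal, hypothetical note-taking server sketched with the FastMCP helper from the official Python SDK (`pip install mcp`). The server and all its names are made up for illustration, but each decorator maps directly to one item in the list above.

```python
from mcp.server.fastmcp import FastMCP

# The name is what connected clients will see
mcp = FastMCP("demo-notes")

@mcp.tool()
def add_note(title: str, body: str) -> str:
    """A Tool: an action the model can take."""
    with open("notes.txt", "a") as f:
        f.write(f"{title}: {body}\n")
    return f"Saved note '{title}'"

@mcp.resource("notes://all")
def all_notes() -> str:
    """A Resource: data the model can read as context."""
    try:
        with open("notes.txt") as f:
            return f.read()
    except FileNotFoundError:
        return "(no notes yet)"

@mcp.prompt()
def summarise_notes() -> str:
    """A Prompt: a reusable template that steers tool and resource use."""
    return "Read the notes://all resource and summarise the key themes."

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, which is what Claude Desktop speaks
```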

And here's where the magic happens: once you have an MCP server for GitHub and another for Slack, any MCP client can talk to them. Switch from LangChain to AutoGen, or from Claude to GPT-4? No problem. The servers don't care: they speak MCP, not model-specific languages.
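
You can see that client-agnosticism in code, too. Here's a sketch of the client side using the same Python SDK, spawning the notes server from the earlier sketch over stdio (I'm assuming it was saved as `server.py`):

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Any MCP client launches and talks to any MCP server the same way
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover what the server offers, then call a tool by name
            tools = await session.list_tools()
            print("Tools:", [t.name for t in tools.tools])
            result = await session.call_tool(
                "add_note", {"title": "mcp", "body": "one protocol to rule them all"}
            )
            print(result)

asyncio.run(main())
```

Swap the model or the framework sitting on top of this session and nothing here changes. That's the whole point.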

The "Before MCP" Landscape: Framework Fragmentation

Before MCP came along, the AI ecosystem was like the Wild West of integrations. Every framework had its own way of connecting to external tools:

  • LangChain had its Tools and Agents API
  • AutoGen created its own SDK for multi-agent workflows
  • CrewAI developed yet another approach to building agent systems

Each of these solutions worked within their own ecosystem, but they didn't talk to each other. It was like the early days of instant messaging, when you needed separate apps for AIM, MSN Messenger, Yahoo Messenger, and ICQ - each with their own accounts, interfaces, and protocols. Want to chat with friends across different platforms? Sorry, you'd need to run four different apps simultaneously and remember which friend used which service. (Remember when Trillian tried to unify them all? That's essentially what MCP is doing for AI integrations now.)

📚 Geek Corner
The Integration Tower of Babel:

The pre-MCP fragmentation mirrors the classic tradeoffs captured by the CAP theorem in distributed systems: you can optimize for some properties, but never all of them at once.

Each framework made its own tradeoffs in its integration approach.

LangChain leaned into flexibility, AutoGen chased multi-agent setups, and CrewAI tried to build better human-AI teamwork. Each had its own vibe, and its own quirks.

Like programming languages that optimize for different use cases, no single framework could solve all integration patterns well.

MCP doesn't eliminate these tradeoffs, but it provides a common protocol for expressing them, similar to how HTTP became the standard protocol despite web frameworks having different philosophies.

Getting Started with MCP: Baby Steps

If you're thinking, "This sounds great, but where do I even begin?"—don't worry. The MCP ecosystem is growing rapidly, with pre-built servers for popular services like:

  • GitHub
  • Slack
  • Google Drive
  • Local filesystem
  • Postgres databases
  • Web browsing via Puppeteer
  • And many more

So you can start by simply connecting these existing servers to MCP-compatible clients like Claude Desktop. No code required!
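
For example, hooking the reference filesystem server (`@modelcontextprotocol/server-filesystem`) into Claude Desktop is just an entry in its `claude_desktop_config.json`; the directory path below is a placeholder you'd swap for your own:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects"]
    }
  }
}
```

Restart Claude Desktop and the server's tools show up automatically, no glue code involved.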

But if you're the type who likes to peek under the hood (and if you're reading a blog series on MCP, you probably are), you can also build your own MCP servers. The protocol is open and well-documented, with SDKs available in Python, TypeScript, Java, and more languages.

The MCP Ecosystem: Growing Rapidly

The coolest part? MCP is already being adopted by major players, not just Anthropic (Claude's creator):

  • Microsoft has integrated MCP support into Copilot Studio, and Claude Desktop (obviously) has first-class support
  • Development tools like Zed, Cline, Roo Code, Replit, Cursor, GitHub Copilot, Amazon Q, and even VS Code now support MCP
  • Seemingly every SaaS product is jumping on the bandwagon, offering an MCP server on top of its API

MCP isn’t some trendy spec destined to collect dust — it’s rapidly becoming the glue for modern AI workflows, and the ecosystem is expanding fast.

📚 Geek Corner
Standardization Wars and Network Effects:

MCP's rapid adoption follows patterns similar to successful standards like HTTP, Bluetooth, and USB.

According to Shapiro and Varian's economic theory of standards adoption, successful standards combine early value with increasing returns to adoption.

MCP provides immediate value through pre-built servers, while gaining momentum through network effects as more tools become MCP-compatible.

Backing from major companies like Anthropic, Amazon, and Microsoft, plus the promise of Google and OpenAI to adopt the standard, provides the legitimacy needed for a standard to reach critical mass.

What's truly clever is that MCP is an open standard, following the playbook that made the web successful rather than trying to create a walled garden.

The Beginning of a New Era

We're standing at the beginning of a new era in AI integration. MCP is doing for AI what standardized containers did for shipping—creating a common interface that allows diverse systems to work together efficiently.

In the next post in this series, we'll dive deeper into the "why" of MCP, exploring specific use cases where it shines. We'll also look at how it compares to traditional API approaches and when you might choose one over the other.

Until then, I encourage you to check out the official MCP documentation and play with some of the pre-built servers. The revolution in AI integration is happening right now, and with MCP, you're invited to the party.


Got questions about MCP? Drop them in the comments below, and I'll do my best to answer them in upcoming posts!

Next up in this series: "Quick explore of MCP Ecosystem: Compatible Tools and Platforms?" where we'll dive deeper into specific use cases from dev to fun.
