Gerson

Building AI Agents with the Model Context Protocol (MCP)

The Model Context Protocol is becoming the USB-C of AI tool integration. Learn how MCP works, how to build MCP servers, and how it enables AI agents to interact with databases, APIs, and external services through a standardized interface.

Gerson · https://modelcontextprotocol.io/
[Image: AI network connections representing agent communication protocols]

If you've been building AI-powered applications in 2026, you've almost certainly encountered the Model Context Protocol (MCP). Originally created by Anthropic in November 2024 and later donated to the Linux Foundation's Agentic AI Foundation, MCP has become the standard for how AI agents interact with external tools, databases, and APIs.

Often described as the "USB-C for AI," MCP provides a universal interface that eliminates the need for custom integrations for every new tool. Instead of writing bespoke code to connect your LLM to your database, your CMS, your deployment pipeline, and your monitoring stack, you write (or install) an MCP server for each, and any MCP-compatible client can use them all.

Why MCP Exists

Before MCP, every AI integration was a snowflake. Want Claude to query your database? Write a custom tool. Want GPT to read your GitHub issues? Write another custom tool. Want either of them to do both? Write four integrations (two models times two tools). The combinatorial explosion made it impractical to build agents that could interact with more than a handful of services.

MCP solves this with a client-server architecture:

  • MCP Hosts — The AI applications (Claude Code, VS Code extensions, custom apps) that need to access tools
  • MCP Clients — Protocol-level connectors maintained by the host, one per server
  • MCP Servers — Lightweight programs that expose specific capabilities (tools, resources, prompts) through the standardized protocol
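Under the hood, client and server exchange JSON-RPC 2.0 messages over a transport such as stdio or streamable HTTP. A tool invocation, for example, looks roughly like this (the tool name and values are illustrative, not a verbatim transcript):

```typescript
// Illustrative JSON-RPC 2.0 messages for an MCP tool call.
// A client asks the server to run a tool by name, with arguments
// matching the tool's declared JSON Schema.
const callRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get-forecast",
    arguments: { city: "Berlin" },
  },
};

// The server replies with content blocks the host can feed to the model.
const callResult = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    content: [{ type: "text", text: "Sunny, 22°C" }],
  },
};
```

Because every server speaks this same wire format, a host only needs one client implementation per transport, not one per tool.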

Core Concepts

MCP servers can expose three types of capabilities:

Tools

Tools are functions that the AI model can call. They're the most common capability and represent actions like "query a database," "create a GitHub issue," or "send an email." Each tool has a name, description, and a JSON Schema defining its parameters.

server.ts — Defining an MCP Tool

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({
  name: "weather-server",
  version: "1.0.0",
});

server.tool(
  "get-forecast",
  "Get weather forecast for a city",
  { city: z.string().describe("City name") },
  async ({ city }) => {
    const response = await fetch(
      `https://api.weather.example.com/forecast?city=${encodeURIComponent(city)}`
    );
    const data = await response.json();
    return {
      content: [{ type: "text", text: JSON.stringify(data, null, 2) }],
    };
  }
);

Resources

Resources are data sources the AI can read — files, database records, API responses. Unlike tools, resources are read-only and are identified by URIs. Think of them as the "nouns" to tools' "verbs."
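To make the noun/verb distinction concrete, a resource read on the wire looks roughly like this (the `file://` URI is a made-up example; the response shape mirrors the `contents` array you return from a resource handler):

```typescript
// Illustrative JSON-RPC 2.0 exchange for reading a resource by URI.
const readRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "resources/read",
  params: { uri: "file:///app/config.json" },
};

// Resources return contents, not tool-call results: data to read,
// not an action that was performed.
const readResponse = {
  jsonrpc: "2.0",
  id: 2,
  result: {
    contents: [
      {
        uri: "file:///app/config.json",
        mimeType: "application/json",
        text: "{}",
      },
    ],
  },
};
```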

Prompts

Prompts are pre-built templates that help users interact with the server's capabilities effectively. They're optional but useful for providing guided workflows.
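A prompt expands into ready-made chat messages the host can hand to the model. The shape of a fetched prompt looks roughly like this (the description and text are invented for illustration):

```typescript
// Illustrative result of fetching a prompt: a template expanded
// into messages, ready to send to the model.
const promptResult = {
  description: "Summarize recent orders",
  messages: [
    {
      role: "user",
      content: {
        type: "text",
        text: "Summarize the orders placed in the last 7 days.",
      },
    },
  ],
};
```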

Building a Practical MCP Server

Let's build a more realistic example — an MCP server that gives an AI agent access to a PostgreSQL database. This is one of the most common use cases: letting an AI answer questions about your data without giving it raw database credentials.

db-server.ts

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import { Pool } from "pg";

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
});

const server = new McpServer({
  name: "postgres-readonly",
  version: "1.0.0",
});

// Tool: Run a read-only SQL query
server.tool(
  "query",
  "Execute a read-only SQL query against the database",
  {
    sql: z.string().describe("The SQL query to execute (SELECT only)"),
  },
  async ({ sql }) => {
    // Safety: only allow SELECT statements
    const trimmed = sql.trim().toUpperCase();
    if (!trimmed.startsWith("SELECT")) {
      return {
        content: [{ type: "text", text: "Error: Only SELECT queries are allowed." }],
        isError: true,
      };
    }

    const result = await pool.query(sql);
    return {
      content: [{
        type: "text",
        text: JSON.stringify(result.rows, null, 2),
      }],
    };
  }
);

// Tool: List all tables
server.tool(
  "list-tables",
  "List all tables in the database",
  {},
  async () => {
    const result = await pool.query(`
      SELECT table_name FROM information_schema.tables
      WHERE table_schema = 'public' ORDER BY table_name
    `);
    return {
      content: [{
        type: "text",
        text: result.rows.map((r: { table_name: string }) => r.table_name).join("\n"),
      }],
    };
  }
);

// Resource: Database schema
server.resource(
  "schema",
  "postgres://schema",
  async (uri) => {
    const result = await pool.query(`
      SELECT table_name, column_name, data_type
      FROM information_schema.columns
      WHERE table_schema = 'public'
      ORDER BY table_name, ordinal_position
    `);
    return {
      contents: [{
        uri: uri.href,
        mimeType: "application/json",
        text: JSON.stringify(result.rows, null, 2),
      }],
    };
  }
);

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
}

main().catch(console.error);
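One caveat: the SELECT-prefix guard above is easy to bypass — a CTE starts with `WITH`, a leading comment hides the first keyword, and stacked statements (`SELECT 1; DROP TABLE users`) pass the prefix test. A stricter standalone check might look like the sketch below. It is deliberately conservative (a string literal containing `DELETE` would also be rejected) and is no substitute for database-level protection such as connecting with a read-only role:

```typescript
// Sketch of a stricter read-only guard for SQL passed in by the model.
// Conservative by design; pair it with a read-only database role.
function isReadOnlyQuery(sql: string): boolean {
  // Strip line and block comments so they can't hide keywords.
  const stripped = sql
    .replace(/--.*$/gm, "")
    .replace(/\/\*[\s\S]*?\*\//g, "")
    .trim();

  // Reject stacked statements ("SELECT 1; DROP TABLE users").
  if (stripped.includes(";")) return false;

  // Allow SELECT and WITH (CTEs), nothing else.
  const first = stripped.split(/\s+/)[0]?.toUpperCase();
  if (first !== "SELECT" && first !== "WITH") return false;

  // Block data-modifying keywords anywhere, including inside CTEs
  // ("WITH x AS (DELETE ... RETURNING *) ..."). Overly strict: this
  // also rejects harmless string literals containing these words.
  return !/\b(INSERT|UPDATE|DELETE|DROP|ALTER|TRUNCATE|CREATE|GRANT)\b/i.test(
    stripped
  );
}
```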

The Multi-Agent Future

MCP is one piece of a broader standardization movement in agentic AI. Alongside Anthropic's MCP, Google introduced the Agent2Agent (A2A) protocol for inter-agent communication, and the community developed AG-UI (Agent-User Interaction Protocol) for how agents communicate with frontends. Together, these protocols are creating an interoperable ecosystem where specialized agents can collaborate on complex tasks.

The pattern emerging is what some call the "microservices revolution for AI" — instead of one monolithic agent that tries to do everything, you orchestrate teams of specialized agents, each with access to specific MCP servers for their domain. A "puppeteer" orchestrator coordinates the specialists, routing tasks to the agent best equipped to handle them.
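In its simplest form, the orchestration layer is just a router from task type to specialist. The sketch below is a toy illustration — the agent names, task kinds, and string handlers are all invented, standing in for real agents each wired to their own MCP servers:

```typescript
// Toy sketch of "puppeteer"-style orchestration: route each task to
// the specialist agent best equipped for it. In a real system each
// handler would be an agent with its own MCP server connections.
type Task = { kind: "sql" | "deploy" | "docs"; input: string };

const specialists: Record<Task["kind"], (input: string) => string> = {
  sql: (q) => `db-agent handling: ${q}`, // wired to a database MCP server
  deploy: (q) => `ops-agent handling: ${q}`, // wired to a deployment MCP server
  docs: (q) => `docs-agent handling: ${q}`, // wired to a docs/search MCP server
};

function route(task: Task): string {
  return specialists[task.kind](task.input);
}
```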

Pro Tip: Start with existing MCP servers from the official MCP servers repository before building custom ones. There are 75+ community-maintained servers covering databases, APIs, cloud services, and developer tools.

Getting Started

The fastest path to experimenting with MCP is through Claude Code or the Claude desktop app, both of which support MCP servers natively. Add a server to your configuration and the AI immediately gains access to its tools and resources.
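With Claude Desktop, for example, registering the database server from above is a matter of configuration — roughly like this, where the path and `DATABASE_URL` value are placeholders you'd replace with your own:

```json
{
  "mcpServers": {
    "postgres-readonly": {
      "command": "node",
      "args": ["/path/to/db-server.js"],
      "env": {
        "DATABASE_URL": "postgresql://readonly_user@localhost:5432/mydb"
      }
    }
  }
}
```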

For building custom servers, the official TypeScript and Python SDKs handle all the protocol plumbing — you just define your tools, resources, and prompts, and the SDK handles serialization, transport, and the connection lifecycle.
