MCP Client-Server Python Example

An MCP server by sofianhw

Python demo for Model Context Protocol tools and resources via SSE

Updated May 7, 2025

About

A lightweight Python example that runs an MCP server exposing simple tools and resources over Server-Sent Events, while a client uses OpenAI GPT models to interact with those tools in real time.

Capabilities

  • Resources: Access data sources
  • Tools: Execute functions
  • Prompts: Pre-built templates
  • Sampling: AI model interactions

MCP Client‑Server Python Example

The MCP Client‑Server Python example is a minimal yet complete showcase of how an external service can expose tools and resources to an AI assistant using the Model Context Protocol (MCP). It solves a common pain point for developers: wiring an LLM to real‑world APIs or custom logic without building a bespoke integration layer for each model. By adhering to the MCP specification, this server acts as a contract‑based bridge that any compliant client—whether written in Python, JavaScript, or another language—can consume. The result is a plug‑and‑play architecture where the assistant can invoke tools like “addition” or retrieve static resources such as a greeting string simply by calling the server’s endpoints over Server‑Sent Events (SSE).
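
A minimal sketch of what such a server can look like, assuming the official `mcp` Python SDK's FastMCP helper (the server name, tool, and resource URI below are illustrative, not necessarily those used in this repository):

```python
# Minimal MCP server sketch using the official `mcp` Python SDK (FastMCP).
# The server name, tool, and resource URI are illustrative assumptions.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

@mcp.resource("greeting://hello")
def greeting() -> str:
    """A static greeting resource."""
    return "Hello from the MCP server!"

if __name__ == "__main__":
    # Expose the tool and resource definitions over Server-Sent Events.
    mcp.run(transport="sse")
```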

At its core, the server declares a set of tools and resources in a JSON schema that MCP clients can query. The SSE endpoint streams these definitions to connected clients, ensuring that any changes—such as adding a new calculator or updating the greeting text—are instantly reflected in downstream workflows. The client side then uses OpenAI’s chat completions to parse user input, determine which tool is required, and forward the request back to the server. This round‑trip demonstrates a full agentic loop: user → assistant → tool call → response, all orchestrated through the MCP contract.
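
On the client side, the round trip can be sketched roughly as follows, assuming the `mcp` SDK's SSE client and the illustrative server above (the URL and tool name are assumptions):

```python
# Sketch of an MCP client over SSE; assumes the illustrative server above
# is running locally. The endpoint URL and tool name are assumptions.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

async def main() -> None:
    # Connect to the server's SSE endpoint and open an MCP session.
    async with sse_client("http://localhost:8000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the tools the server advertises.
            tools = await session.list_tools()
            print("tools:", [t.name for t in tools.tools])

            # Invoke the arithmetic tool by name.
            result = await session.call_tool("add", {"a": 2, "b": 3})
            print("result:", result.content)

asyncio.run(main())
```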

Key capabilities of this example include:

  • Tool Exposure: The server offers a simple arithmetic tool (the “addition” tool mentioned above) that can be invoked by name, showcasing how complex logic can be encapsulated behind a declarative interface.
  • Resource Sharing: Static data such as a greeting message is made available to clients, illustrating how non‑dynamic content can be served efficiently.
  • SSE Streaming: By using Server‑Sent Events, the server pushes updates to clients in real time, enabling dynamic tool discovery without polling.
  • OpenAI Integration: The client leverages GPT models to interpret natural language queries, automatically deciding when a tool call is necessary; a sketch of this bridge follows the list.
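
To make the last point concrete, the bridge between MCP tool definitions and OpenAI function calling can be sketched as below. This assumes an open MCP `session` (as in the client sketch above) and the `openai` package; the model name and helper names are illustrative:

```python
# Sketch of bridging MCP tool definitions to OpenAI function calling.
# Assumes an open MCP ClientSession (as above) and the `openai` package;
# the model name and helper names are illustrative assumptions.
import json

from openai import OpenAI

def to_openai_tools(mcp_tools):
    """Translate MCP tool definitions into OpenAI's tools format."""
    return [
        {
            "type": "function",
            "function": {
                "name": t.name,
                "description": t.description or "",
                "parameters": t.inputSchema,
            },
        }
        for t in mcp_tools
    ]

async def answer(session, user_message: str) -> str:
    client = OpenAI()
    tools = to_openai_tools((await session.list_tools()).tools)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": user_message}],
        tools=tools,
    )
    message = response.choices[0].message
    if message.tool_calls:
        # The model decided a tool is needed: forward the call to the server.
        call = message.tool_calls[0]
        result = await session.call_tool(
            call.function.name, json.loads(call.function.arguments)
        )
        return str(result.content)
    return message.content or ""
```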

Typical use cases for this pattern include:

  • Data‑driven agents that need to query external databases or APIs (e.g., weather, stock prices) while keeping the model’s context lightweight.
  • Custom business logic such as order processing or recommendation engines that can be exposed as MCP tools for a conversational interface.
  • Rapid prototyping of agent workflows where developers can iterate on tool definitions without touching the LLM codebase.

Integrating this server into an existing AI pipeline is straightforward: any MCP‑compliant client can subscribe to the SSE stream, retrieve the tool list, and invoke calls via HTTP POST. The abstraction allows teams to swap out underlying services—be it a local Flask app or a cloud‑native function—without altering the assistant’s prompt logic. This decoupling not only speeds development but also enhances maintainability and security, as tool access can be governed centrally through the MCP server.
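
As a final illustration, fetching the static greeting resource from such a session is a single call. Again a sketch: the endpoint and `greeting://` URI follow the illustrative examples above rather than the repository's exact names:

```python
# Sketch: reading a static resource by URI from an MCP session over SSE.
# The endpoint and greeting:// URI match the illustrative examples above.
import asyncio

from pydantic import AnyUrl
from mcp import ClientSession
from mcp.client.sse import sse_client

async def read_greeting() -> None:
    async with sse_client("http://localhost:8000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Resources are fetched by URI rather than invoked by name.
            result = await session.read_resource(AnyUrl("greeting://hello"))
            print(result.contents[0].text)

asyncio.run(read_greeting())
```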

Overall, the MCP Client‑Server Python example demonstrates a powerful, standards‑based approach to extending LLM capabilities. It provides developers with a ready‑made template for building secure, extensible tool integrations that keep the conversational AI layer focused on language understanding while delegating domain‑specific tasks to dedicated services.