MCPSERV.CLUB
bertrandgressier

Demo MCP Basic Server

MCP Server

Enabling AI models with custom calculation tools

Stale (50) · 1 star · 2 views
Updated Jun 25, 2025

About

A lightweight Node.js server that implements the Model Context Protocol, exposing simple arithmetic operations for AI models to consume during generation. It demonstrates how to extend model capabilities with external tools.

Capabilities

  • Resources – Access data sources
  • Tools – Execute functions
  • Prompts – Pre-built templates
  • Sampling – AI model interactions

Demo MCP Basic – A Minimal Yet Powerful Model Context Protocol Server

The Demo MCP Basic server is a lightweight illustration of how the Model Context Protocol (MCP) can be used to extend an AI assistant’s capabilities by exposing custom tools over a simple HTTP interface. It solves the common problem of “black‑box” AI models that cannot perform domain‑specific calculations or data transformations without external help. By hosting a small set of arithmetic tools (addition, subtraction, multiplication, division) on the server, developers can immediately give a Gemini or Vertex AI model access to reliable, deterministic operations that would otherwise require the model to guess or rely on noisy text outputs.
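The four arithmetic operations can be pictured as a small map of deterministic handlers that the server dispatches by name. This is an illustrative sketch, not the demo's actual code; the tool names and signatures are assumptions.

```typescript
// Sketch: the demo's four arithmetic tools modeled as a name -> handler
// map. Names and shapes are assumptions for illustration.
type ToolHandler = (a: number, b: number) => number;

const tools: Record<string, ToolHandler> = {
  add: (a, b) => a + b,
  subtract: (a, b) => a - b,
  multiply: (a, b) => a * b,
  divide: (a, b) => {
    if (b === 0) throw new Error("division by zero");
    return a / b; // deterministic, unlike a model guessing arithmetic in text
  },
};

// The server dispatches an invocation by tool name and returns a
// structured result the client can parse.
function callTool(name: string, a: number, b: number): number {
  const handler = tools[name];
  if (!handler) throw new Error(`unknown tool: ${name}`);
  return handler(a, b);
}
```

Because each handler is a plain function, the results are reproducible and auditable, which is exactly what the model cannot guarantee on its own.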

At its core, the server implements the MCP specification: it listens for incoming connections on a configurable port, advertises available tools through a discovery endpoint, and streams execution results back to the client. The client side, written in TypeScript, connects to this endpoint using the official AI SDK, fetches the tool list, and injects those tools into the model’s prompt. During generation, the AI can invoke a tool by name and receive a structured response that the client can parse and act upon. This pattern mirrors how modern LLMs use function calling, but it is fully controlled by the developer rather than being hard‑coded into the model.
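The discover-then-invoke cycle can be sketched in a few lines. The shapes below are simplified assumptions loosely modeled on JSON-RPC (which MCP builds on), not the protocol's actual wire format.

```typescript
// Hypothetical sketch of the MCP request/response cycle: the client
// first fetches the tool list, then forwards model-emitted tool calls.
interface ToolDescriptor {
  name: string;
  description: string;
  parameters: Record<string, string>; // parameter name -> type
}

interface ToolCall {
  tool: string;
  args: Record<string, number>;
}

// Step 1: the client fetches the advertised tool list and injects it
// into the model's prompt so the model knows what it may call.
function listTools(): ToolDescriptor[] {
  return [
    {
      name: "add",
      description: "Add two numbers",
      parameters: { a: "number", b: "number" },
    },
  ];
}

// Step 2: when the model emits a tool call during generation, the
// client executes it and feeds the structured result back.
function execute(call: ToolCall): { result: number } {
  if (call.tool === "add") return { result: call.args.a + call.args.b };
  throw new Error(`unsupported tool: ${call.tool}`);
}
```

The key point is that the developer, not the model vendor, owns both halves of this loop: which tools are advertised and how their results are interpreted.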

Key capabilities of this demo include:

  • Secure tool discovery – The server’s discovery endpoint exposes a signed JSON payload that the client validates before use, preventing accidental or malicious tool injection.
  • Plug‑and‑play integration – The client requires only a few lines of configuration to bind the MCP server to any Gemini or Vertex AI model, making it trivial to swap backends or add new tools later.
  • Extensible tool set – While the demo ships with basic arithmetic, developers can replace or augment these functions with database queries, API calls, or even custom machine‑learning inference services.
  • Real‑time streaming – Results are sent as Server‑Sent Events, allowing the AI to receive partial results or progress updates if a tool is long‑running.
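The streaming capability relies on the standard Server-Sent Events wire format: each message is an `event:` line plus `data:` lines, terminated by a blank line. The sketch below shows that framing and how a long-running tool might emit progress before its final result; the event names are illustrative assumptions.

```typescript
// Sketch: SSE framing for streaming partial tool results.
// Each SSE message is "event:" + "data:" lines ended by a blank line.
function sseFrame(event: string, data: unknown): string {
  return `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`;
}

// A long-running tool can yield progress frames before the result,
// letting the client (and the model) react to partial output.
function* streamDivision(a: number, b: number): Generator<string> {
  yield sseFrame("progress", { status: "validating input" });
  if (b === 0) {
    yield sseFrame("error", { message: "division by zero" });
    return;
  }
  yield sseFrame("result", { value: a / b });
}
```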

Real‑world scenarios that benefit from this architecture include:

  • Financial assistants that need to perform precise calculations on user‑supplied data before generating a report.
  • Data‑analysis pipelines where an LLM orchestrates multiple API calls, each wrapped as a tool, to transform and summarize large datasets.
  • Interactive chatbots that can validate user input (e.g., checking ISBN numbers or validating dates) before proceeding with natural‑language responses.
  • Educational tutors that can solve math problems step by step, exposing each calculation as a tool invocation for transparency.
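The input-validation scenario above is a good fit for this architecture because a checksum is trivial for code and unreliable for a language model. As a sketch, an ISBN-10 check wrapped as a tool might look like this; the tool name and argument shape are assumptions, while the checksum itself is the standard ISBN-10 mod-11 weighting.

```typescript
// Sketch: deterministic ISBN-10 validation exposed as a tool a chatbot
// can call before continuing the conversation.
function isValidIsbn10(isbn: string): boolean {
  const chars = isbn.replace(/-/g, "");
  // Nine digits followed by a digit or "X" (which stands for 10).
  if (!/^\d{9}[\dX]$/.test(chars)) return false;
  let sum = 0;
  for (let i = 0; i < 10; i++) {
    const digit = chars[i] === "X" ? 10 : Number(chars[i]);
    sum += digit * (10 - i); // weights 10 down to 1
  }
  return sum % 11 === 0;
}

// Hypothetical tool wrapper returning a structured, parseable result.
const validationTools = {
  check_isbn10: (input: { isbn: string }) => ({ valid: isValidIsbn10(input.isbn) }),
};
```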

By decoupling the AI model from external logic, the Demo MCP Basic server provides a clean, secure, and developer‑friendly pathway to enrich AI assistants with deterministic, auditable functionality. Its minimal footprint makes it an excellent starting point for building production‑ready MCP servers that can scale to complex tool ecosystems.