Express MCP Server Echo
by jhgaylor

MCP Server

Stateless echo server using Express and MCP

Updated Sep 24, 2025

About

A lightweight, stateless Model Context Protocol (MCP) server built with Express and TypeScript that echoes messages via resource, tool, and prompt components. Ideal for testing MCP integrations and learning MCP workflows.

Capabilities

Resources – Access data sources
Tools – Execute functions
Prompts – Pre-built templates
Sampling – AI model interactions

Express MCP Server Echo – A Minimal, Stateless Model Context Protocol Demo

The Express MCP Server Echo is a lightweight, stateless implementation of the Model Context Protocol (MCP) built with Express and TypeScript. It addresses a common need for developers working with large language model (LLM) assistants: the ability to expose simple, reproducible functionality that an LLM can invoke without the server maintaining state. Because it is stateless, the server scales effortlessly behind load balancers or in containerized environments, making it ideal for rapid prototyping and integration testing.

At its core, the server offers a single "echo" capability that is split across three MCP components: resource, tool, and prompt. The echo resource simply returns the supplied message, allowing an LLM to retrieve data via a standard URI template. The echo tool accepts a message argument and responds with the same text, demonstrating how an LLM can trigger server-side logic and receive a structured response. Finally, the echo prompt creates a user-facing prompt that displays the message, illustrating how prompts can be dynamically generated and injected into conversational flows. This triad showcases the full MCP lifecycle, from resource retrieval to tool execution to prompt creation, within a single, easy-to-understand example.
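A minimal sketch of how these three components can be registered, following the shape of the official MCP TypeScript SDK; the echo:// URI template and the response strings here are illustrative assumptions rather than details taken from this repository:

```typescript
import { McpServer, ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "echo-server", version: "1.0.0" });

// Resource: data retrievable through a URI template, e.g. echo://hello
server.resource(
  "echo",
  new ResourceTemplate("echo://{message}", { list: undefined }),
  async (uri, { message }) => ({
    contents: [{ uri: uri.href, text: `Resource echo: ${message}` }],
  })
);

// Tool: server-side logic invoked with a structured { message } argument
server.tool("echo", { message: z.string() }, async ({ message }) => ({
  content: [{ type: "text", text: `Tool echo: ${message}` }],
}));

// Prompt: a dynamically generated, user-facing message template
server.prompt("echo", { message: z.string() }, ({ message }) => ({
  messages: [
    {
      role: "user",
      content: { type: "text", text: `Please process this message: ${message}` },
    },
  ],
}));
```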

Developers can leverage this server to test MCP interactions locally before deploying more complex services. For instance, a chatbot could use the echo tool to verify connectivity or to confirm that user input is being parsed correctly. In a continuous integration pipeline, the echo resource can act as a health-check endpoint that ensures downstream services are reachable. Because the server uses the modern Streamable HTTP transport, it supports real-time streaming responses, a feature increasingly expected by LLM clients that handle large outputs or long-running computations.
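In stateless mode, the SDK's Streamable HTTP transport wires directly into an Express route. The sketch below assumes a hypothetical buildServer() factory that performs the registrations shown above and returns a fresh McpServer; the /mcp path and port are likewise illustrative:

```typescript
import express from "express";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

const app = express();
app.use(express.json());

app.post("/mcp", async (req, res) => {
  // A fresh server/transport pair per request keeps the service fully stateless:
  // no session IDs, so any instance behind a load balancer can answer any request.
  const server = buildServer(); // hypothetical factory wrapping the registrations above
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined, // undefined disables session tracking
  });
  res.on("close", () => {
    transport.close();
    server.close();
  });
  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

app.listen(3000);
```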

Integration into AI workflows is straightforward. An LLM client first issues an initialize call to negotiate the protocol version and capabilities, then invokes the echo tool via a JSON-RPC tools/call request. The server responds with a simple JSON payload, which the client can embed into its next prompt or use as context for further reasoning. The stateless design means each request is independent, eliminating the need for session management or persistent storage, an advantage when scaling out across multiple instances.
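On the wire, that tool invocation might look like the following exchange (the id and message values are arbitrary, and the exact echoed text depends on the handler):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": { "name": "echo", "arguments": { "message": "hello" } }
}
```

with the server replying:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": { "content": [{ "type": "text", "text": "Tool echo: hello" }] }
}
```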

In summary, the Express MCP Server Echo provides a clean, type‑safe example of how to expose basic functionality over MCP. It is especially useful for developers who want a quick, reliable testbed to validate tool invocation, resource fetching, and prompt generation before building more sophisticated services. Its minimal footprint, clear separation of concerns, and adherence to MCP standards make it a valuable component in any LLM‑centric development toolkit.