MCP Connect

MCP Server by EvalsOne

Bridge HTTP to local Stdio MCP servers in the cloud


About

MCP Connect translates HTTPS/SSE requests into Stdio communication, enabling cloud‑based AI services to securely interact with local MCP servers via tunnels such as Ngrok or Cloudflare Zero Trust, without modifying the underlying server implementation.
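Since MCP speaks JSON-RPC 2.0 over its transports, the cloud side of this arrangement can be pictured as an ordinary HTTP request against the tunnel. The following is a minimal sketch only: the tunnel URL, endpoint path, and bearer token are placeholders, not MCP Connect's actual API surface.

```typescript
// Sketch: a cloud-side client reaching a local Stdio MCP server through the
// bridge over an Ngrok tunnel. URL, path, and token are hypothetical.
const BRIDGE_URL = "https://example.ngrok-free.app/bridge"; // assumed tunnel endpoint
const AUTH_TOKEN = "your-bridge-token";                     // assumed shared secret

async function listLocalTools(): Promise<unknown> {
  // MCP uses JSON-RPC 2.0; tools/list asks the server for its tool catalog.
  const response = await fetch(BRIDGE_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${AUTH_TOKEN}`,
    },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list" }),
  });
  return response.json(); // the bridge relays the Stdio server's JSON-RPC reply
}

listLocalTools().then(console.log).catch(console.error);
```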

Capabilities

  • Resources: Access data sources
  • Tools: Execute functions
  • Prompts: Pre-built templates
  • Sampling: AI model interactions
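For concreteness, these capability types can be enumerated against any Stdio MCP server with the official TypeScript SDK (@modelcontextprotocol/sdk). A rough sketch follows; the local server command is hypothetical.

```typescript
// Probe the capability types listed above on a local Stdio MCP server.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "node",
  args: ["./my-mcp-server.js"], // hypothetical local server
});
const client = new Client({ name: "capability-probe", version: "1.0.0" });

await client.connect(transport);
console.log("tools:", await client.listTools());         // executable functions
console.log("resources:", await client.listResources()); // data sources
console.log("prompts:", await client.listPrompts());     // pre-built templates
await client.close();
```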

Open WebUI Example

MCP‑Bridge is a lightweight gateway that translates the OpenAI API format into MCP (Model Context Protocol) calls, enabling any OpenAI‑compatible client to harness the full power of MCP tools without native support. The server exposes standard OpenAI endpoints, such as /v1/chat/completions and /v1/completions, while internally routing tool‑call logic through configured MCP servers. This abstraction removes the need for client‑side MCP integration, making it trivial to plug advanced tooling into existing workflows that already rely on the OpenAI API.

The core problem MCP‑Bridge solves is the mismatch between popular LLM APIs and the richer, context‑aware capabilities of MCP. Developers who want to use specialized tools, such as web scraping, database queries, or custom inference engines, often face the hurdle of writing bespoke adapters for each client. MCP‑Bridge eliminates that friction by presenting a single, familiar interface: send a chat completion request to /v1/chat/completions with tool usage as defined by MCP, and the bridge dispatches those calls to the appropriate MCP server (e.g., a fetch tool or a custom resource). This approach preserves the familiar OpenAI developer experience while unlocking MCP's flexible, extensible tool ecosystem.
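From the client's perspective this looks like any other OpenAI-compatible deployment. A minimal sketch, assuming the bridge listens at http://localhost:8000 (the URL and model name are placeholders):

```typescript
// The client is the stock openai npm package; no MCP code appears here.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:8000/v1", // point at the bridge, not api.openai.com
  apiKey: "not-used-by-the-bridge",    // placeholder; the bridge may ignore it
});

const completion = await client.chat.completions.create({
  model: "my-model", // whatever the configured inference backend serves
  messages: [
    { role: "user", content: "Fetch https://example.com and summarize it." },
  ],
});

// If the model chose to call an MCP tool (e.g., fetch), the bridge already
// executed it and fed the result back; the client only sees the final answer.
console.log(completion.choices[0].message.content);
```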

Key capabilities include:

  • Bidirectional compatibility: Non‑streaming and streaming chat completions that respect MCP tool calls, as well as raw completions without tools.
  • MCP tool integration: The bridge automatically discovers and exposes MCP tools, allowing clients to invoke them via the standard tools field in OpenAI requests.
  • Sampling and resource handling: MCP sampling strategies are forwarded to the inference engine, enabling fine‑grained control over output generation.
  • SSE Bridge: External clients can subscribe to server‑sent events for real‑time updates, useful for building UI layers that react to partial completions (see the streaming sketch after this list).
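The streaming path needs no special handling either: setting stream: true on the same openai client consumes the bridge's SSE output as async chunks. Again a sketch, with the base URL and model name as placeholders:

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:8000/v1", // assumed bridge address
  apiKey: "not-used-by-the-bridge",
});

const stream = await client.chat.completions.create({
  model: "my-model",
  messages: [{ role: "user", content: "Stream me a haiku." }],
  stream: true, // the bridge relays partial completions as SSE chunks
});

for await (const chunk of stream) {
  // Each chunk carries a delta; print tokens as they arrive.
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
```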

In practice, MCP‑Bridge shines in scenarios where a team already uses an OpenAI‑compatible LLM provider but needs to augment it with custom tooling. For example, a data‑science workflow can integrate a database query tool through MCP while still interacting with the LLM via familiar OpenAI calls. Similarly, a web‑app built on Next.js can leverage MCP tools without rewriting its API layer, simply by pointing to the bridge’s endpoint.

Because MCP‑Bridge sits between the client and the inference engine, it also offers a clean separation of concerns. Clients remain agnostic to the underlying model or tool implementation, while the bridge can be reconfigured to swap inference backends (vLLM, Ollama, etc.) or add new MCP servers without touching client code. This modularity makes it an attractive component for scaling AI services, orchestrating multiple tools, or experimenting with new MCP extensions in production environments.
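One way to picture that separation, as a sketch rather than anything MCP‑Bridge itself prescribes: keep the bridge address in configuration so client code never hard-codes a backend. BRIDGE_BASE_URL is an invented convention here.

```typescript
// Swapping inference backends or MCP servers becomes an ops-level change:
// only the environment variable moves, never the client code.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: process.env.BRIDGE_BASE_URL ?? "http://localhost:8000/v1",
  apiKey: process.env.BRIDGE_API_KEY ?? "unused",
});
```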