Azure-Samples

Azure Container Apps MCP Server


AI-powered agent platform with Azure OpenAI and DocumentDB

Updated Sep 19, 2025

About

A containerized MCP server that connects LLM providers such as Azure OpenAI, OpenAI, and GitHub Models to a DocumentDB database. It exposes HTTP and SSE endpoints through which agents invoke tools to add, list, complete, or delete to-do items.

Capabilities

  • Resources: Access data sources
  • Tools: Execute functions
  • Prompts: Pre-built templates
  • Sampling: AI model interactions

Azure Container Apps – AI & MCP Playground

The Azure Container Apps AI & MCP Playground is a turnkey environment that demonstrates how the Model Context Protocol (MCP) can be used to build AI-driven applications that interact with external services and databases. It addresses a common pain point: stitching together a language-model backend, a stateful data store, and a set of actionable tools into a single, coherent workflow. Developers who want to prototype or deploy agents that read, write, and manipulate data in Azure services can start here without writing the usual boilerplate for authentication, event handling, and persistence.

At its core, the server exposes an MCP API over two transports: the newer streamable HTTP transport and the legacy Server-Sent Events (SSE) transport. This dual-protocol approach lets clients choose whichever transport matches their latency or compatibility requirements. The server is backed by a local DocumentDB instance, which stores the agent's state and tool data. The host application, whether VS Code, Copilot, LlamaIndex, or LangChain, acts as a front end that sends user queries to the MCP server and renders responses in a terminal interface. The language-model provider is pluggable; the demo ships with OpenAI, Azure OpenAI, and GitHub Models, so developers can switch providers without touching the MCP layer.
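
To make the transport choice concrete, here is a minimal client sketch using the official MCP Python SDK (an assumption; the playground itself may ship a different client). The endpoint URL is illustrative, and switching to the legacy SSE transport is a one-line change:

```python
# Minimal MCP client sketch. Assumes the official MCP Python SDK
# (`pip install mcp`) and an illustrative endpoint URL.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client


async def main() -> None:
    # Streamable HTTP transport; for the legacy transport, use
    # mcp.client.sse.sse_client(url) instead (it yields a 2-tuple).
    async with streamablehttp_client("http://localhost:3000/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])


asyncio.run(main())
```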

Key capabilities include:

  • Tool execution: The agent can invoke CRUD-style operations on a to-do list (add, list, complete, delete) via MCP tools that interact directly with DocumentDB (see the sketch after this list).
  • Resource and prompt management: While still a work in progress, the architecture is designed to support dynamic resources and prompts, enabling agents to adapt their behavior at runtime.
  • Sampling control: Future releases will expose sampling parameters (temperature, top‑k, etc.) through MCP so that developers can fine‑tune the model’s output on a per‑request basis.
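
As a concrete example of the tool-execution path, the sketch below drives the to-do tools through an MCP session. The tool names and argument shapes ("add_todo", "list_todos", the "text" field) are illustrative assumptions, not the repo's exact schema; the server's real tool contract is discoverable via list_tools:

```python
# Illustrative use of the to-do tools over MCP. Tool names and argument
# shapes are assumptions; query session.list_tools() for the real schema.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client


async def demo_todo_tools() -> None:
    async with streamablehttp_client("http://localhost:3000/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            await session.call_tool("add_todo", arguments={"text": "write docs"})
            result = await session.call_tool("list_todos", arguments={})
            for block in result.content:  # content blocks, typically text
                print(block)


asyncio.run(demo_todo_tools())
```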

Real‑world scenarios for this playground are plentiful: an internal help desk bot that can pull ticket data from Azure Table Storage, a code‑review assistant that fetches repository metadata via GitHub Models, or an inventory manager that updates product counts in Cosmos DB. By abstracting the communication details behind MCP, developers can focus on crafting agent logic rather than worrying about transport protocols or state persistence.

The integration flow is straightforward: a user interacts with the host terminal, which forwards the request to the MCP server over HTTP or SSE. The server queries the selected LLM provider, receives a response that may include tool calls, executes those tools against DocumentDB, and streams the final output back to the host. This loop enables the low-latency, stateful interactions that production AI assistants depend on.
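
The host-side half of that loop can be sketched as follows, pairing an OpenAI-compatible chat model with an MCP session. This is a hedged illustration rather than the repo's code: the model name, server URL, and the crude stringification of tool results are all assumptions:

```python
# Hedged sketch of the host-side agent loop. Assumptions: the official MCP
# Python SDK (`pip install mcp`), the `openai` SDK with OPENAI_API_KEY set,
# and an illustrative server URL and model name.
import asyncio
import json

from openai import AsyncOpenAI
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

llm = AsyncOpenAI()  # for Azure OpenAI, construct AsyncAzureOpenAI instead


async def agent_loop(user_message: str) -> str:
    async with streamablehttp_client("http://localhost:3000/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Advertise the server's MCP tools to the model in the
            # OpenAI function-calling format.
            mcp_tools = (await session.list_tools()).tools
            tools = [
                {
                    "type": "function",
                    "function": {
                        "name": t.name,
                        "description": t.description or "",
                        "parameters": t.inputSchema,
                    },
                }
                for t in mcp_tools
            ]

            messages = [{"role": "user", "content": user_message}]
            while True:
                resp = await llm.chat.completions.create(
                    model="gpt-4o-mini", messages=messages, tools=tools
                )
                msg = resp.choices[0].message
                if not msg.tool_calls:
                    return msg.content or ""  # final answer for the host UI
                messages.append(msg)
                for call in msg.tool_calls:
                    # Execute the requested tool against the MCP server,
                    # which in turn persists to DocumentDB.
                    result = await session.call_tool(
                        call.function.name,
                        arguments=json.loads(call.function.arguments),
                    )
                    messages.append(
                        {
                            "role": "tool",
                            "tool_call_id": call.id,
                            "content": str(result.content),  # crude flattening
                        }
                    )


print(asyncio.run(agent_loop("Add 'ship the demo' to my to-do list")))
```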