About
This sample demonstrates a .NET Blazor client and a TypeScript MCP server running together on Azure Container Apps, with Azure OpenAI providing the AI agent functionality. The client connects to the server securely, and the server is reachable only from the client.
Overview
The .NET OpenAI MCP Agent is a ready-made sample that bridges a .NET Blazor client application with Azure OpenAI services through a lightweight TypeScript-based MCP (Model Context Protocol) server. It addresses the common developer challenge of integrating AI assistants into modern web applications while keeping a clean separation between the user interface, business logic, and the AI model itself. By deploying both the client and the server as Azure Container Apps (ACA), the solution provides secure, isolated execution with built-in authentication and network isolation, so only the authorized client can reach the MCP server.
At its core, the server exposes a set of resources and tools that an AI assistant can invoke. These include simple to-do list operations, context-aware prompts, and model sampling controls. The client side is a Blazor WebAssembly application that consumes these resources over the MCP protocol, letting developers embed conversational agents directly in their UI without writing custom middleware. Because either Azure OpenAI or GitHub Models can serve as the underlying LLM, the same server can run against whichever backend best fits cost and compliance requirements.
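To make the shape of those tools concrete, here is a minimal sketch of how a TypeScript MCP server might register a couple of to-do tools with the official @modelcontextprotocol/sdk package. The tool names (`todo-add`, `todo-list`) and the in-memory list are illustrative assumptions, not the sample's actual implementation.

```typescript
// Minimal sketch of an MCP server exposing to-do tools, assuming the official
// @modelcontextprotocol/sdk TypeScript package. Tool names and storage are illustrative.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const todos: string[] = [];

const server = new McpServer({ name: "todo-server", version: "1.0.0" });

// Tool the assistant can call to add an item to the to-do list.
server.tool(
  "todo-add",
  "Add an item to the to-do list",
  { text: z.string().describe("The to-do item to add") },
  async ({ text }) => {
    todos.push(text);
    return { content: [{ type: "text", text: `Added: ${text}` }] };
  }
);

// Tool that returns the current items.
server.tool("todo-list", "List all to-do items", {}, async () => ({
  content: [{ type: "text", text: todos.length ? todos.join("\n") : "(empty)" }],
}));

// The deployed sample would sit behind an HTTP-based transport on ACA;
// stdio keeps this sketch self-contained and runnable locally.
await server.connect(new StdioServerTransport());
```

A connected client can list these tools at runtime and invoke them on the assistant's behalf, which is the discovery flow described in the capabilities below.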
Key capabilities of the MCP agent include:
- Secure authentication: ACA’s built‑in auth protects both client and server, eliminating the need for custom token handling.
- Resource discovery: The MCP server advertises available tools and prompts, letting the assistant discover what it can do on demand.
- Sampling controls: Developers can fine-tune temperature, top-p, and other generation parameters through the MCP interface (see the sketch after this list).
- Containerized deployment: Both client and server run as separate container apps, simplifying scaling and CI/CD pipelines.
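As a rough illustration of those sampling controls, a tool handler on the server can send a sampling request back to the connected client and attach generation parameters such as temperature and a token limit. The sketch below assumes the official TypeScript SDK's `createMessage` request; the tool name, prompt, and parameter values are hypothetical.

```typescript
// Sketch of server-initiated sampling, assuming the official @modelcontextprotocol/sdk package.
// The server asks the connected client (which talks to Azure OpenAI) to generate a completion.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";

const server = new McpServer({ name: "todo-server", version: "1.0.0" });

server.tool("summarize-todos", "Summarize the to-do list", {}, async () => {
  // Generation parameters travel with the sampling request to the client.
  const result = await server.server.createMessage({
    messages: [
      { role: "user", content: { type: "text", text: "Summarize my outstanding tasks." } },
    ],
    temperature: 0.2, // illustrative value
    maxTokens: 256,
  });

  // The client returns a single content block; keep only text responses here.
  const text = result.content.type === "text" ? result.content.text : "";
  return { content: [{ type: "text", text }] };
});
```

In MCP, sampling is fulfilled by the client, so the client's configuration (model choice, approval flow) ultimately governs how such a request is handled.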
Typical use cases range from task-management assistants that keep to-do lists in sync across devices to customer support bots that pull contextual data from a backend system. In any scenario where an AI needs to perform actions (create, read, update, delete) on external data while maintaining a conversational context, this MCP agent offers a plug-and-play foundation. By abstracting away the complexities of protocol handling and secure deployment, developers can focus on building richer user experiences instead of wrestling with infrastructure.
Related Servers
- MarkItDown MCP Server: Convert documents to Markdown for LLMs quickly and accurately
- Context7 MCP: Real-time, version-specific code docs for LLMs
- Playwright MCP: Browser automation via structured accessibility trees
- BlenderMCP: Claude AI meets Blender for instant 3D creation
- Pydantic AI: Build GenAI agents with Pydantic validation and observability
- Chrome DevTools MCP: AI-powered Chrome automation and debugging
Explore More Servers
- SSE MCP Server Demo: Real-time LLM tool execution with SSE and MCP
- PiAPI MCP Server: Generate media via Claude with PiAPI integration
- Gemini Twitter MCP Server: AI agent automates X posts with real-time responses
- AutoCAD LT AutoLISP MCP Server: LLM-powered AutoCAD LT control via AutoLISP
- Twitch MCP Server: Real-time Twitch chat integration via Quarkus MCP
- Unity MCP Server: AI-powered Unity Editor automation via MCP clients