About
A Python-based MCP server that connects MCP clients to Microsoft Copilot Studio agents via the DirectLine API, maintaining stateful conversation context and exposing a query_agent tool for seamless integration.
Capabilities
The MCP Server for Copilot Studio Agents bridges the gap between Microsoft’s Copilot Studio and any Model Context Protocol‑compatible client, such as Claude or other AI assistants. By exposing a standardized MCP interface, it allows developers to treat Copilot Studio agents as first‑class tools within their AI workflows. This eliminates the need for custom SDKs or REST wrappers, letting teams leverage Copilot’s conversational intelligence directly from their preferred LLM environment.
At its core, the server translates MCP tool calls into DirectLine API requests. The single exposed tool, query_agent, forwards user queries to the configured Copilot Studio agent and returns structured responses. It automatically manages conversation state, tracking conversation IDs and watermarks, to preserve context across multiple turns, ensuring that the agent's memory behaves consistently with the MCP client's expectations. This stateful interaction is crucial for building complex, multi-step reasoning pipelines where the assistant must recall prior exchanges.
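To make that translation concrete, here is a minimal sketch of the DirectLine 3.0 exchange that sits behind a tool like query_agent. The endpoints are the documented DirectLine v3 routes; the class name, polling strategy, and secret placeholder are illustrative assumptions, not the server's actual implementation.

```python
import time
import requests

DIRECTLINE_BASE = "https://directline.botframework.com/v3/directline"

class DirectLineSession:
    """Minimal DirectLine 3.0 flow: start a conversation, post a message,
    and poll for replies using the watermark as a read cursor."""

    def __init__(self, secret: str, user_id: str = "mcp-client"):
        self.headers = {"Authorization": f"Bearer {secret}"}
        self.user_id = user_id
        self.conversation_id = None
        self.watermark = None

    def start(self) -> None:
        # POST /conversations opens a new conversation and returns its ID.
        resp = requests.post(f"{DIRECTLINE_BASE}/conversations", headers=self.headers)
        resp.raise_for_status()
        self.conversation_id = resp.json()["conversationId"]

    def send(self, text: str) -> None:
        # Each user turn is posted as a 'message' activity.
        url = f"{DIRECTLINE_BASE}/conversations/{self.conversation_id}/activities"
        activity = {"type": "message", "from": {"id": self.user_id}, "text": text}
        requests.post(url, headers=self.headers, json=activity).raise_for_status()

    def receive(self) -> list[str]:
        # Passing the last watermark back to GET /activities returns only
        # activities we have not yet seen; tracking it per turn is what
        # preserves multi-turn context without manual bookkeeping.
        url = f"{DIRECTLINE_BASE}/conversations/{self.conversation_id}/activities"
        params = {"watermark": self.watermark} if self.watermark else {}
        resp = requests.get(url, headers=self.headers, params=params)
        resp.raise_for_status()
        data = resp.json()
        self.watermark = data.get("watermark")  # advance the cursor
        # Keep only the agent's messages; our own activity is echoed back too.
        return [a.get("text", "") for a in data["activities"]
                if a["type"] == "message" and a["from"]["id"] != self.user_id]

# One turn of a conversation; real code would poll until a reply arrives.
session = DirectLineSession(secret="YOUR_DIRECTLINE_SECRET")
session.start()
session.send("Hello")
time.sleep(2)  # naive wait for the agent to respond
print(session.receive())
```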
Key capabilities include:
- DirectLine integration: Seamlessly connect to any Copilot Studio agent via its DirectLine endpoint and bot key.
- Conversation context management: The server maintains IDs and watermarks, allowing the client to resume conversations without manual bookkeeping.
- Configurable agent definitions: Developers can register multiple agents with descriptive metadata, enabling the MCP client to discover and select the appropriate tool for a given task.
- Structured responses: Each call returns a clear success/error status alongside the agent’s reply, facilitating robust error handling in downstream logic (see the sketch after this list).
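As a rough illustration of the last two capabilities, an agent registry and a success/error envelope might look like the sketch below. The registry shape and field names are assumptions for illustration, not the server's actual schema.

```python
from dataclasses import dataclass

# Illustrative agent registry: each entry pairs a DirectLine secret with
# descriptive metadata the MCP client can use to pick the right tool.
AGENTS = {
    "support-bot": {
        "directline_secret": "YOUR_DIRECTLINE_SECRET",
        "description": "Answers customer-support FAQs from Copilot Studio.",
    },
}

@dataclass
class QueryAgentResult:
    """Envelope in the spirit of the server's structured responses."""
    success: bool
    response: str | None = None
    error: str | None = None

def to_payload(result: QueryAgentResult) -> dict:
    # An explicit success/error status lets downstream logic branch safely.
    if result.success:
        return {"status": "success", "response": result.response}
    return {"status": "error", "error": result.error}
```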
Real‑world use cases span from customer support automation—where a Copilot agent handles FAQs while the LLM orchestrates escalation logic—to internal knowledge bases, where the assistant can pull policy documents or code snippets from Copilot while the LLM enriches them with contextual explanations. In data‑driven environments, developers can chain the tool with other MCP tools (e.g., database queries or API calls) to build end‑to‑end workflows that combine Copilot’s natural language understanding with specialized domain expertise.
Integration is straightforward: any MCP‑compatible client can list the available tools, invoke query_agent, and carry on multi‑turn conversations just as it would with native LLM APIs. This plug‑and‑play design means that teams can quickly add Copilot Studio agents to their existing AI toolchains, unlocking powerful conversational capabilities without rewriting infrastructure.
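For example, a client built on the official MCP Python SDK could discover and invoke the tool as follows. The launch command, script name, and the query argument name are assumptions; check the tool's advertised input schema for the actual parameter names.

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Command and script name are placeholders; point them at your install.
    params = StdioServerParameters(
        command="python", args=["copilot_studio_mcp_server.py"]
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])  # expect 'query_agent'
            # The argument name 'query' is an assumption for illustration.
            result = await session.call_tool(
                "query_agent", {"query": "Summarize our refund policy."}
            )
            print(result.content)

asyncio.run(main())
```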
Related Servers
MarkItDown MCP Server
Convert documents to Markdown for LLMs quickly and accurately
Context7 MCP
Real‑time, version‑specific code docs for LLMs
Playwright MCP
Browser automation via structured accessibility trees
BlenderMCP
Claude AI meets Blender for instant 3D creation
Pydantic AI
Build GenAI agents with Pydantic validation and observability
Chrome DevTools MCP
AI-powered Chrome automation and debugging
Explore More Servers
Plane Mcp Server
MCP Server: Plane Mcp Server
MCP Ethers Server
Your all‑in‑one Ethereum toolset for Claude
MCP SSE Servers
Reference implementations for Model Context Protocol over Server‑Sent Events
Simple MCP Server Example
FastAPI-powered context service for Model Context Protocol
Playwright Lighthouse MCP Server
Analyze web performance with Playwright and Lighthouse via MCP
Pansila MCP Server GDB
Remote GDB debugging with AI assistant integration