About
The Open Cluster Management MCP Server enables generative AI systems to access and manage resources across hub and managed Kubernetes clusters via the Model Context Protocol, offering metrics, logs, and prompt templates for streamlined automation.
Capabilities

The Open Cluster Management (OCM) MCP Server bridges generative AI assistants with the full breadth of a Kubernetes multi‑cluster environment. By speaking the Model Context Protocol, Claude or other AI agents can discover, query, and manipulate resources across a hub cluster and all managed clusters without leaving the chat interface. This eliminates the need for manual sessions or custom scripts, allowing developers to ask high‑level questions—such as “What pods are running on cluster B?” or “Show me the latest alert from the monitoring stack”—and receive precise, context‑aware responses.
At its core, the server exposes a rich set of MCP tools that enable cluster‑aware operations. These include retrieving resources from the current context (the hub) or any managed cluster, establishing connections to a target cluster via a specified context, and pulling metrics, logs, or alerts from integrated monitoring stacks. The tool set also supports interactions with OCM‑specific APIs like Managed Clusters, Policies, and Add‑ons, giving agents the ability to not only read but also modify cluster state—deploying a new policy or scaling an application can be scripted through the same conversational interface.
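Under the hood, an MCP tool invocation is a JSON‑RPC 2.0 request with the method `tools/call`, as defined by the Model Context Protocol specification. The sketch below shows the shape of such a request; the tool name `get_resources` and its arguments are hypothetical illustrations, since the OCM server may expose different tool names and parameters.

```python
import json

# JSON-RPC 2.0 request for an MCP "tools/call" invocation.
# The tool name "get_resources" and its arguments are hypothetical;
# consult the server's advertised tool list (via "tools/list") for
# the actual names and schemas.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_resources",
        "arguments": {
            "cluster": "cluster-b",   # target managed cluster
            "kind": "Pod",            # resource kind to list
            "namespace": "default",
        },
    },
}

print(json.dumps(request, indent=2))
```

An MCP client library normally constructs and sends this message for you; the point here is that every conversational action ultimately reduces to a structured, auditable request like this one.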
Complementing the tools are prompt templates designed for common OCM tasks. These reusable prompts help standardize agent behavior, ensuring consistent phrasing and context handling when orchestrating multi‑cluster operations. Alongside the templates, a collection of MCP resources links directly to official OCM documentation and related references, enabling the AI to surface authoritative guidance or troubleshooting steps within the conversation.
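Prompt templates are fetched through the protocol's `prompts/get` method, another standard JSON‑RPC request. The example below is a minimal sketch; the prompt name `diagnose-cluster` and its argument are invented for illustration and may not match the templates the OCM server actually ships.

```python
import json

# MCP "prompts/get" request resolving a reusable prompt template
# into concrete messages for the AI agent. The prompt name and
# arguments here are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "prompts/get",
    "params": {
        "name": "diagnose-cluster",
        "arguments": {"cluster": "cluster-b"},
    },
}

print(json.dumps(request, indent=2))
```

Because the template lives on the server, every agent that requests it receives the same phrasing and context handling, which is what keeps multi‑cluster workflows consistent across sessions.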
Real‑world use cases abound: a DevOps engineer can ask the assistant to list all pods that exceed memory thresholds across every cluster, receive aggregated metrics, and automatically trigger a scaling policy; an SRE can request the latest alerts from Prometheus and have them correlated with specific managed clusters, all while maintaining a single source of truth. In CI/CD pipelines, the server can be invoked to validate that new deployments adhere to cluster‑level policies before promotion. Because the MCP server treats the configured context as the hub, it seamlessly navigates the OCM hierarchy, abstracting away complex network routing or authentication details.
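A pre‑promotion gate like the one described above can be sketched as a simple check against an OCM Policy manifest. The field names below follow the `policy.open-cluster-management.io/v1` API; the check logic and the policy name are illustrative assumptions, not the server's actual validation routine.

```python
# Minimal sketch of a CI/CD policy gate, assuming the pipeline has
# fetched an OCM Policy manifest (e.g. via an MCP tool call). The
# policy name and the gating rule are hypothetical illustrations.
def policy_blocks_promotion(policy: dict) -> bool:
    """Return True if the policy is active and set to enforce."""
    spec = policy.get("spec", {})
    return not spec.get("disabled", False) and \
        spec.get("remediationAction") == "enforce"

policy = {
    "apiVersion": "policy.open-cluster-management.io/v1",
    "kind": "Policy",
    "metadata": {"name": "require-resource-limits"},
    "spec": {"disabled": False, "remediationAction": "enforce"},
}

print(policy_blocks_promotion(policy))  # → True
```

In practice the pipeline would fetch the live Policy status from the hub and fail the promotion when the relevant clusters report non‑compliance.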
What sets this MCP server apart is its tight integration with the OCM framework and its focus on interactive observability. The ability to pull real‑time logs, metrics, and alerts into a conversational AI workflow empowers teams to troubleshoot faster, automate compliance checks, and orchestrate cross‑cluster changes with confidence—all through natural language. This streamlined, protocol‑driven approach removes the friction traditionally associated with multi‑cluster management and positions AI assistants as first‑class collaborators in Kubernetes operations.