About
A Go‑based MCP server that connects to a Kubernetes cluster via kubeconfig, enabling natural language interaction for querying, creating, updating, and deleting any resource type—including CRDs—and fine‑grained Helm release and repository operations.
Overview
The mcp-k8s server is a Kubernetes‑centric Model Context Protocol (MCP) implementation that bridges AI assistants with the full breadth of a Kubernetes cluster. It exposes a rich set of tools—resource querying, CRUD operations on both native and custom resources, and Helm release/repository management—through the MCP interface. By doing so, it turns natural‑language commands into concrete Kubernetes actions without requiring users to remember syntax or Helm flags. This capability is especially valuable for developers, operators, and educators who want to embed Kubernetes management into conversational AI workflows.
What Problem It Solves
Managing a Kubernetes cluster often involves memorizing complex command‑line syntax, juggling multiple tools (kubectl, Helm), and interpreting verbose error messages. For teams that rely on large language models (LLMs) to automate or assist with infrastructure tasks, the absence of a unified API for cluster operations creates friction. mcp-k8s removes this barrier by presenting the entire Kubernetes ecosystem as a set of MCP tools. Developers can now write natural‑language prompts that an LLM translates into precise API calls, enabling seamless interaction between AI assistants and cluster resources.
Core Capabilities
- Resource Discovery: Query all supported Kubernetes resource types, including custom resource definitions (CRDs), to provide context for the LLM.
- Fine‑Grained CRUD: Perform read, create, update, and delete operations on any resource type. Each operation can be independently enabled or disabled, allowing fine‑tuned security and workflow controls.
- Helm Integration: Manage Helm releases (list, get, install, upgrade, uninstall) and repositories (add, list, remove). Like CRUD operations, Helm actions are configurable on a per‑operation basis.
- Kubeconfig Connectivity: Connect to any cluster via a standard kubeconfig file, making the server portable across environments.
These features are exposed through simple, declarative MCP tools that the AI client can invoke without needing deep Kubernetes knowledge.
Use Cases & Real‑World Scenarios
- Interactive Cluster Management: An operator can ask the AI, “Create a deployment with 3 replicas of nginx on the staging cluster.” The server translates this into a Kubernetes object and applies it, returning the status in plain English.
- Batch Operations via Natural Language: A DevOps engineer can describe a complex set of changes, such as “Scale all web‑tier deployments to 5 replicas and roll out a new config map,” and the AI orchestrates the necessary API calls.
- Automated Troubleshooting: When a pod crashes, a user can prompt the AI to “Diagnose why my pod is in CrashLoopBackOff.” The server retrieves logs, events, and status fields, feeding the LLM a comprehensive context for diagnosis.
- Rapid Prototyping and Testing: Developers can quickly spin up environments by describing the desired state, letting the AI generate and apply manifests. Clean‑up of test resources can also be requested in natural language.
- Education & Training: Newcomers to Kubernetes can learn interactively: ask what a given command does and receive an explanation, or request the AI to “Show me a sample Deployment YAML.”
Integration into AI Workflows
The server communicates over standard input/output, fitting naturally into existing MCP client setups. Once connected, an LLM can list the available tools, ask for help, or chain multiple operations. Because each tool’s payload is a simple JSON schema, the AI can construct requests programmatically and parse responses without custom parsers. This plug‑and‑play model accelerates the development of intelligent assistants that automate infrastructure tasks, audit configurations, or provide on‑call support.
Unique Advantages
- Unified API for Native and Custom Resources: Unlike traditional tools that require separate commands for CRDs, mcp-k8s treats all resources uniformly.
- Configurable Operation Flags: Security teams can lock down destructive actions (e.g., delete) while still allowing read‑only queries.
- Helm + Kubectl in One Place: Eliminates context switching between kubectl and Helm, reducing cognitive load.
- Open‑Source Go Implementation: Built in Go on an MCP SDK and the Kubernetes client libraries, ensuring compatibility with any Kubernetes distribution.
By consolidating cluster management into a single MCP interface, mcp-k8s lets AI assistants operate on any cluster resource through one consistent, configurable surface.