About
k8m is a compact, AI‑driven console that simplifies Kubernetes cluster administration. It integrates ChatGPT features for resource insights, log analysis, and command suggestions, and it provides an MCP server that lets large models drive cluster operations.
Capabilities
k8m is an AI‑driven, lightweight Kubernetes dashboard that consolidates cluster management into a single executable. It tackles the common pain points of managing multiple Kubernetes environments (complex configuration, scattered tooling, and steep learning curves) by embedding an intelligent layer that talks directly to cluster APIs behind a user‑friendly web interface. The platform is built on Go for performance and on Baidu's AMIS framework for the UI, keeping the console responsive even on large clusters.
The server brings AI into the operational loop in several tangible ways. It ships with built‑in large‑language models such as Qwen2.5-Coder-7B and DeepSeek-R1-Distill-Qwen-7B, and it lets developers plug in private models via Ollama or other hosts. These models power features such as select‑to‑explain annotations, automatic YAML translation, and intelligent command suggestions that surface directly in the dashboard. By integrating k8sgpt, it also offers Chinese‑native guidance for resource descriptions and troubleshooting. The AI layer is exposed through MCP, providing 49 predefined Kubernetes tools that can be combined into more than a hundred composite operations, making it straightforward for other AI assistants to orchestrate complex cluster tasks.
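The exact tool schema is defined by k8m's MCP implementation and is not reproduced here; as a rough illustration of how predefined tools can compose into higher‑level operations, consider this hypothetical sketch (the registry, the tool names `list_pods`/`delete_pod`, and the composite `restart_all` are all invented for illustration):

```python
# Hypothetical sketch: composing predefined MCP-style tools into a
# higher-level operation. Tool names and signatures are invented for
# illustration; k8m's real MCP tool schema may differ.

from typing import Callable, Dict, List

# A registry mapping tool names to callables, standing in for the
# server's catalog of predefined Kubernetes tools.
TOOLS: Dict[str, Callable] = {}

def tool(name: str):
    """Register a function as a named tool."""
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@tool("list_pods")
def list_pods(namespace: str) -> List[str]:
    # Placeholder: a real tool would query the Kubernetes API.
    return [f"{namespace}/web-0", f"{namespace}/web-1"]

@tool("delete_pod")
def delete_pod(pod: str) -> str:
    # Placeholder: a real tool would delete the pod via the API server.
    return f"deleted {pod}"

def run(name: str, **kwargs):
    """Dispatch a single tool call by name, as an MCP client might."""
    return TOOLS[name](**kwargs)

def restart_all(namespace: str) -> List[str]:
    """A composite operation built from two primitive tools."""
    return [run("delete_pod", pod=p)
            for p in run("list_pods", namespace=namespace)]

print(restart_all("default"))
```

The point of the sketch is the composition model: an assistant only needs the primitive tool names, and larger workflows fall out of chaining their outputs.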
Key capabilities include:
- Multi‑cluster discovery and in‑cluster registration: automatically scans kubeconfig directories, registers hosts, and applies fine‑grained RBAC per cluster or namespace.
- Unified permissions: a single identity controls both AI interactions and actual cluster actions, preventing privilege escalation.
- Pod file management: browse, edit, upload, and download files inside containers with real‑time log streaming and shell execution.
- API exposure: generate API keys, view Swagger docs, and integrate third‑party services through MCP.
- Operational automation: scheduled inspections, Lua‑based rule engines, and notifications to DingTalk, WeChat, or Feishu.
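The multi‑cluster discovery described above boils down to watching a kubeconfig directory and registering one cluster per file. The sketch below shows the general idea in Python; the directory layout, file suffixes, and name‑from‑filename convention are assumptions for illustration, not k8m's actual code:

```python
# Hypothetical sketch of multi-cluster discovery: scan a kubeconfig
# directory and register one cluster per file. The suffix filter and
# naming convention are assumptions, not k8m's implementation.

import os
from typing import Dict

def discover_clusters(kubeconfig_dir: str) -> Dict[str, str]:
    """Map a cluster name (derived from the filename) to its kubeconfig path."""
    clusters: Dict[str, str] = {}
    for entry in sorted(os.listdir(kubeconfig_dir)):
        path = os.path.join(kubeconfig_dir, entry)
        # Treat every regular file with a typical kubeconfig suffix as a cluster.
        if os.path.isfile(path) and entry.endswith((".yaml", ".yml", ".config")):
            name = entry.rsplit(".", 1)[0]
            clusters[name] = path
    return clusters

if __name__ == "__main__":
    import tempfile
    with tempfile.TemporaryDirectory() as d:
        for fname in ("prod.yaml", "staging.config", "notes.txt"):
            with open(os.path.join(d, fname), "w"):
                pass
        # Only kubeconfig-like files are registered; notes.txt is skipped.
        print(sorted(discover_clusters(d)))
```

Per‑cluster RBAC would then key its rules on the registered cluster name, which is why a deterministic naming scheme matters.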
In practice, a DevOps team can deploy k8m on an existing cluster and immediately start querying the AI for help with resource limits, troubleshooting pods, or generating Helm charts. A data scientist can use the AI to translate complex YAML into plain language, while a security officer benefits from the tight integration of cluster permissions with AI actions. Because the server is fully open source and supports multiple databases, CI/CD pipelines can spin up k8m instances on demand, expose them via API keys, and let AI assistants perform day‑to‑day operations without manual intervention.
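k8m's actual API‑key format is not documented here; as a generic sketch of the kind of issue‑and‑validate scheme a server like this might use for pipeline credentials, consider the following (the key layout, `SERVER_SECRET`, and both function names are invented for illustration):

```python
# Generic sketch of API-key issuance and validation. The key format
# (<subject>.<nonce>.<signature>) and secret handling are invented for
# illustration and are not k8m's actual scheme.

import hashlib
import hmac
import secrets

SERVER_SECRET = b"replace-with-a-real-secret"  # assumption: loaded from config

def issue_key(subject: str) -> str:
    """Create a key of the form <subject>.<nonce>.<signature>."""
    nonce = secrets.token_hex(8)
    payload = f"{subject}.{nonce}".encode()
    sig = hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest()
    return f"{subject}.{nonce}.{sig}"

def validate_key(key: str) -> bool:
    """Recompute the signature and compare in constant time."""
    try:
        subject, nonce, sig = key.split(".")
    except ValueError:
        return False
    payload = f"{subject}.{nonce}".encode()
    expected = hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

k = issue_key("ci-pipeline")
print(validate_key(k))        # True
print(validate_key(k + "x"))  # False: tampered signature
```

Because the signature binds the subject into the key, the server can attribute every AI‑initiated action back to the pipeline or assistant that holds that key.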