About
K8s Eye is a Model Context Protocol server that connects to Kubernetes clusters, providing comprehensive resource management, diagnostics, and monitoring for pods, deployments, services, and more. It supports multiple transport protocols and AI clients.
Capabilities

Overview
The mcp‑k8s‑eye server bridges the gap between AI assistants and Kubernetes environments by exposing a rich set of cluster‑management capabilities through the Model Context Protocol. It solves the common pain point of having to manually juggle commands, custom scripts, and monitoring dashboards when troubleshooting or automating workloads. By turning every Kubernetes operation into a declarative tool, developers can let an AI assistant query cluster state, modify resources, or run diagnostics without leaving the conversation.
At its core, mcp‑k8s‑eye offers generic resource operations—list, get, create/update, delete, and describe—for all built‑in Kubernetes objects and CustomResourceDefinitions. It also provides targeted actions such as pod execution, log retrieval, and deployment scaling. These primitives empower an AI to perform complex sequences—e.g., “scale the web‑app deployment to 10 replicas and verify pod readiness”—in a single, well‑structured request.
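As a concrete illustration, an MCP client invokes one of these tools by sending a JSON-RPC 2.0 `tools/call` request. The sketch below builds such a request in Python; the tool name `scale_deployment` and its argument names are assumptions for illustration, since the actual names are defined by the server's published tool list.

```python
import json

def make_tool_call(tool_name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 'tools/call' request, the message shape MCP clients use."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool and argument names -- check the server's tool list for the real ones.
req = make_tool_call("scale_deployment", {
    "namespace": "default",
    "name": "web-app",
    "replicas": 10,
})
print(json.dumps(req, indent=2))
```

An assistant chaining primitives ("scale, then verify readiness") would simply issue several such requests in sequence over the same connection.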
Diagnostics are a standout feature. The server includes dedicated tools that analyze the health of Pods, Services, Deployments, StatefulSets, CronJobs, Ingresses, NetworkPolicies, Webhooks, and Nodes. Each analyzer inspects configuration details (selectors, TLS secrets, webhook references) and runtime status (replica counts, endpoint readiness, resource utilization), returning concise reports that an assistant can present or act upon. This eliminates the need for separate monitoring stacks for quick troubleshooting.
Monitoring capabilities focus on workload resource usage, exposing CPU and memory metrics for Pods, Deployments, ReplicaSets, StatefulSets, and DaemonSets. While node‑level and cluster‑level metrics are marked as future work, the current tools already give AI assistants real‑time insight into capacity and performance trends. Combined with diagnostics, this allows an assistant to recommend scaling actions or identify bottlenecks directly from chat.
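To make the "recommend scaling actions" idea concrete, here is a minimal sketch of the kind of decision logic an assistant could apply to returned metrics. The response shape (per-pod CPU usage and limits in millicores) is an assumption, not the server's actual output format.

```python
# Hypothetical metrics payload -- the real field names depend on the server's tools.
metrics = [
    {"pod": "web-app-1", "cpu_millicores": 850, "cpu_limit_millicores": 1000},
    {"pod": "web-app-2", "cpu_millicores": 400, "cpu_limit_millicores": 1000},
]

def needs_scale_up(pods, threshold=0.8):
    """Recommend scaling up if average CPU utilization across pods exceeds the threshold."""
    utilization = sum(p["cpu_millicores"] / p["cpu_limit_millicores"] for p in pods) / len(pods)
    return utilization > threshold

print(needs_scale_up(metrics))  # average utilization = (0.85 + 0.40) / 2 = 0.625 -> False
```

The point is not this particular heuristic but that the raw numbers arrive in-conversation, so the assistant (or a small script like this) can turn them into an actionable recommendation.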
Integration is straightforward: mcp-k8s-eye supports both Stdio and SSE transport protocols, making it compatible with a wide range of AI client setups. Multiple clients can connect simultaneously, each receiving the same set of tools and diagnostics. By setting an environment variable (typically KUBECONFIG) to point at a kubeconfig file, developers can run the server against any cluster they have access to. The result is a unified, AI-driven interface that turns Kubernetes operations from manual toil into conversational, context-aware actions.
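For the Stdio transport, a typical MCP client configuration (in the style used by desktop AI clients) might look like the fragment below. The command name and kubeconfig path are assumptions; substitute the actual binary location and your own path.

```json
{
  "mcpServers": {
    "k8s-eye": {
      "command": "mcp-k8s-eye",
      "env": {
        "KUBECONFIG": "/home/user/.kube/config"
      }
    }
  }
}
```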