K8s Eye

MCP Server

Unified Kubernetes cluster management and diagnostics tool

About

K8s Eye is a Model Context Protocol server that connects to Kubernetes clusters, providing comprehensive resource management, diagnostics, and monitoring for pods, deployments, services, and more. It supports multiple transport protocols and AI clients.

Capabilities

- Resources: Access data sources
- Tools: Execute functions
- Prompts: Pre-built templates
- Sampling: AI model interactions

Overview

The mcp-k8s-eye server bridges the gap between AI assistants and Kubernetes environments by exposing a rich set of cluster-management capabilities through the Model Context Protocol. It solves the common pain point of having to manually juggle commands, custom scripts, and monitoring dashboards when troubleshooting or automating workloads. By turning every Kubernetes operation into a declarative tool, developers can let an AI assistant query cluster state, modify resources, or run diagnostics without leaving the conversation.

At its core, mcp-k8s-eye offers generic resource operations—list, get, create/update, delete, and describe—for all built-in Kubernetes objects and CustomResourceDefinitions. It also provides targeted actions such as pod execution, log retrieval, and deployment scaling. These primitives empower an AI to perform complex sequences—e.g., "scale the web-app deployment to 10 replicas and verify pod readiness"—in a single, well-structured request.
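
To make that concrete, here is a minimal sketch that uses the Python MCP client SDK to connect to the server over stdio and run the scale-and-verify sequence described above. The binary name, KUBECONFIG path, tool names, and argument shapes are illustrative assumptions, not the server's documented interface; the real tool names should be discovered at runtime from the tools listing.

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    # Launch configuration for the stdio transport. The command name and
    # KUBECONFIG path are assumptions for illustration.
    server_params = StdioServerParameters(
        command="mcp-k8s-eye",
        env={"KUBECONFIG": "/home/dev/.kube/config"},
    )

    async def scale_and_verify() -> None:
        async with stdio_client(server_params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()

                # Discover what the server actually exposes before calling anything.
                tools = await session.list_tools()
                print("available tools:", [t.name for t in tools.tools])

                # Hypothetical tool names and arguments; substitute the real
                # ones from the listing above.
                await session.call_tool(
                    "scale_deployment",
                    arguments={"namespace": "default", "name": "web-app", "replicas": 10},
                )
                pods = await session.call_tool(
                    "list_pods",
                    arguments={"namespace": "default", "labelSelector": "app=web-app"},
                )
                for item in pods.content:
                    print(getattr(item, "text", item))

    asyncio.run(scale_and_verify())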

Diagnostics are a standout feature. The server includes dedicated tools that analyze the health of Pods, Services, Deployments, StatefulSets, CronJobs, Ingresses, NetworkPolicies, Webhooks, and Nodes. Each analyzer inspects configuration details (selectors, TLS secrets, webhook references) and runtime status (replica counts, endpoint readiness, resource utilization), returning concise reports that an assistant can present or act upon. For quick troubleshooting, this often removes the need to reach for a separate monitoring stack.
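
As a sketch of how a client might sweep those analyzers across a namespace, the helper below reuses a connected ClientSession from the previous example and calls every advertised tool whose name looks like an analyzer. The "analyze" naming convention and the namespace argument are assumptions; the authoritative names come from the server's own tool listing.

    from mcp import ClientSession

    async def run_diagnostics(session: ClientSession, namespace: str) -> None:
        """Invoke every analyzer-style tool the server advertises for one namespace."""
        tools = await session.list_tools()
        for tool in tools.tools:
            # Assumed naming convention; check the actual names via list_tools().
            if "analyze" not in tool.name.lower():
                continue
            report = await session.call_tool(tool.name, arguments={"namespace": namespace})
            for item in report.content:
                print(f"[{tool.name}] {getattr(item, 'text', item)}")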

Monitoring capabilities focus on workload resource usage, exposing CPU and memory metrics for Pods, Deployments, ReplicaSets, StatefulSets, and DaemonSets. While node-level and cluster-level metrics are marked as future work, the current tools already give AI assistants real-time insight into capacity and performance trends. Combined with diagnostics, this allows an assistant to recommend scaling actions or identify bottlenecks directly from chat.
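
In the same hedged spirit, a usage check for a single deployment might look like the helper below. The tool name and arguments are placeholders, and the report is treated as opaque text rather than a guaranteed schema.

    from mcp import ClientSession

    async def deployment_usage(session: ClientSession, namespace: str, name: str) -> str:
        """Fetch the CPU/memory usage report for one deployment as raw text."""
        result = await session.call_tool(
            "deployment_resource_usage",  # hypothetical tool name
            arguments={"namespace": namespace, "name": name},
        )
        # The exact report format is defined by the server, so no schema is assumed;
        # an assistant can reason over the text to suggest scaling or flag bottlenecks.
        return "\n".join(getattr(item, "text", str(item)) for item in result.content)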

Integration is straightforward: mcp-k8s-eye supports both Stdio and SSE transport protocols, making it compatible with a wide range of AI client setups. Multiple clients can connect simultaneously, each receiving the same set of tools and diagnostics. By pointing the server's kubeconfig environment variable at the desired cluster configuration, developers can run it against any cluster they have access to. The result is a unified, AI-driven interface that turns Kubernetes operations from manual toil into conversational, context-aware actions.
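
For clients that prefer the SSE transport over stdio, attaching looks roughly like the sketch below. The endpoint URL and port depend entirely on how the server instance was started and are assumptions here; the stdio variant shown earlier is the other option.

    import asyncio

    from mcp import ClientSession
    from mcp.client.sse import sse_client

    async def main() -> None:
        # Assumed endpoint; host, port, and path follow the server's SSE configuration.
        async with sse_client("http://localhost:8080/sse") as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = await session.list_tools()
                print("tools exposed over SSE:", [t.name for t in tools.tools])

    asyncio.run(main())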