MCP Argo Server
by jakkaj

JSON‑RPC CLI for running Argo Workflows on Kubernetes

About

A lightweight Go‑based MCP server that wraps Argo Workflows, exposing JSON‑RPC over STDIN/STDOUT for workflow submission, status monitoring, and result retrieval. It builds on Foxy Contexts for the MCP layer and on client‑go for Kubernetes interaction.

Capabilities

  • Resources – Access data sources
  • Tools – Execute functions
  • Prompts – Pre-built templates
  • Sampling – AI model interactions

[Screenshot: Argo Workflows Dashboard]

The MCP Argo Server bridges the gap between AI assistants and Kubernetes‑based workflow orchestration by exposing Argo Workflows as a Model Context Protocol (MCP) service. By wrapping the native Argo API in JSON‑RPC over STDIN/STDOUT, the server allows an AI client—such as Claude or any MCP‑compliant assistant—to submit, monitor, and retrieve the results of complex workflow pipelines without needing direct Kubernetes access. This abstraction is especially valuable for developers who want to delegate heavy data processing or ML training jobs to a cluster while keeping the interaction simple and conversational.
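
Concretely, every interaction is a JSON‑RPC 2.0 message written to one stream and read back from the other. The sketch below shows the wire format a client might emit; the tool name submit_workflow and its arguments are illustrative assumptions, not the server's documented schema.

```go
package main

import (
	"bufio"
	"encoding/json"
	"os"
)

// rpcRequest is a minimal JSON-RPC 2.0 envelope as used by the MCP
// stdio transport.
type rpcRequest struct {
	JSONRPC string      `json:"jsonrpc"`
	ID      int         `json:"id"`
	Method  string      `json:"method"`
	Params  interface{} `json:"params,omitempty"`
}

func main() {
	// Hypothetical tool call: "submit_workflow" and its arguments are
	// illustrative, not taken from the server's actual schema.
	req := rpcRequest{
		JSONRPC: "2.0",
		ID:      1,
		Method:  "tools/call",
		Params: map[string]interface{}{
			"name": "submit_workflow",
			"arguments": map[string]string{
				"manifest": "apiVersion: argoproj.io/v1alpha1\nkind: Workflow\n...",
			},
		},
	}

	// One JSON object per line: the client writes requests to the
	// server's stdin and reads responses from its stdout.
	out := bufio.NewWriter(os.Stdout)
	defer out.Flush()
	json.NewEncoder(out).Encode(req) // Encode appends the newline delimiter.
}
```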

At its core, the server implements three primary capabilities: workflow submission, status polling, and result retrieval. These operations are exposed through a set of lightweight CLI commands that internally use client‑go to communicate with the Kubernetes API and Argo’s workflow controller. The server also relies on Foxy Contexts for JSON‑RPC handling, ensuring that each request is routed correctly and that the client receives timely updates. Because it ships as a standalone Go binary, developers can spin up a local k3d cluster and launch the MCP server with minimal overhead, making it ideal for rapid prototyping or CI pipelines.
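
The project's internals aren't reproduced here, but a minimal sketch of the submit‑and‑poll loop using Argo's v3 Go clientset (which sits on top of client‑go) could look like the following. The namespace, image, and workflow spec are assumptions for illustration; the real server would decode the spec from the YAML the MCP client supplies.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	wfv1 "github.com/argoproj/argo-workflows/v3/pkg/apis/workflow/v1alpha1"
	wfclientset "github.com/argoproj/argo-workflows/v3/pkg/client/clientset/versioned"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.Background()

	// Load the local kubeconfig, as client-go does for out-of-cluster tools.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}

	// Argo's generated clientset is built on client-go.
	client, err := wfclientset.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	workflows := client.ArgoprojV1alpha1().Workflows("argo") // namespace is an assumption

	// A trivial one-step workflow; the real server would decode this
	// from the client-supplied YAML manifest instead.
	wf := wfv1.Workflow{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "mcp-demo-"},
		Spec: wfv1.WorkflowSpec{
			Entrypoint: "main",
			Templates: []wfv1.Template{{
				Name: "main",
				Container: &corev1.Container{
					Image:   "alpine:3",
					Command: []string{"echo", "hello"},
				},
			}},
		},
	}

	created, err := workflows.Create(ctx, &wf, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Poll until the workflow reaches a terminal phase.
	for {
		w, err := workflows.Get(ctx, created.Name, metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		switch w.Status.Phase {
		case wfv1.WorkflowSucceeded, wfv1.WorkflowFailed, wfv1.WorkflowError:
			fmt.Println("workflow finished:", w.Status.Phase)
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```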

Key features include:

  • Declarative workflow management – Submit YAML definitions directly from the client, allowing AI assistants to trigger arbitrary pipelines.
  • Real‑time status monitoring – Poll for workflow progress and receive updates on completion or failure, enabling dynamic decision‑making in AI workflows.
  • Result extraction – Retrieve logs, artifacts, or output data once a workflow finishes, so the assistant can present actionable insights to users (see the log‑retrieval sketch after this list).
  • Containerized development – The project ships with a dev‑container configuration that installs a k3d cluster and Argo, ensuring consistency across environments.
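
The repository doesn't spell out how results are fetched; one plausible implementation, sketched here with plain client‑go, reads the logs of a finished workflow's pod. The pod name below is a stand‑in (in practice it comes from the workflow's status), and the container name "main" follows Argo's usual convention for the user's step.

```go
package main

import (
	"context"
	"io"
	"log"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.Background()

	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// "mcp-demo-abcde" stands in for the pod Argo created for the
	// workflow's node; a real server would read it from workflow status.
	req := clientset.CoreV1().Pods("argo").GetLogs("mcp-demo-abcde",
		&corev1.PodLogOptions{Container: "main"})

	stream, err := req.Stream(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer stream.Close()

	// Copy the step's output so it can be handed back to the MCP client.
	if _, err := io.Copy(os.Stdout, stream); err != nil {
		log.Fatal(err)
	}
}
```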

Typical use cases span from data scientists automating model training jobs to DevOps teams orchestrating CI/CD pipelines. For instance, an AI assistant could ask a user for a dataset and desired model architecture; the MCP Argo Server would then submit an appropriate training workflow, monitor its progress, and return evaluation metrics—all through natural language interactions. In another scenario, a customer support bot could trigger a diagnostic workflow on a Kubernetes cluster and report the results back to the user, streamlining troubleshooting.

What sets this server apart is its minimal footprint and tight integration with MCP. By adhering to the JSON‑RPC standard over STDIN/STDOUT, it can be embedded in any environment that supports simple I/O streams—whether that's a cloud function, a local script, or an AI‑powered IDE extension. This design choice eliminates the need for complex networking setups while still leveraging Argo’s powerful workflow engine, giving developers a robust yet lightweight tool to augment AI assistants with orchestrated backend capabilities.
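
To make that embedding concrete, the following hedged sketch spawns the server binary (the path is assumed) from a Go host process and exchanges one newline‑delimited JSON‑RPC message over its pipes. A real MCP session begins with an initialize handshake before calls like tools/list; that step is omitted here for brevity.

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// The binary path is an assumption; any MCP-capable host launches
	// the server the same way and owns its stdio streams.
	cmd := exec.Command("./mcp-argo-server")

	stdin, err := cmd.StdinPipe()
	if err != nil {
		log.Fatal(err)
	}
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	// Send one JSON-RPC request per line, per the MCP stdio transport.
	fmt.Fprintln(stdin, `{"jsonrpc":"2.0","id":1,"method":"tools/list"}`)

	// Read the matching response line from the server's stdout.
	scanner := bufio.NewScanner(stdout)
	if scanner.Scan() {
		fmt.Println("response:", scanner.Text())
	}

	stdin.Close()
	cmd.Wait()
}
```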