About
This MCP server exposes a comprehensive set of Cloud Foundry operations as language‑model tools, enabling natural‑language driven management of applications, organizations, services, routes, and network policies.
Capabilities

The Cloud Foundry MCP Server bridges the gap between conversational AI assistants and cloud-native application lifecycle management. By exposing a rich set of Cloud Foundry operations as MCP tools, it lets developers and operators orchestrate deployments, scale services, and manage networking directly from an LLM-powered interface. This eliminates the need to manually run CLI commands or navigate a web console, enabling rapid prototyping and automated remediation workflows.
At its core, the server translates high‑level tool calls into authenticated requests against a Cloud Foundry API endpoint. Environment variables supplied by the client, such as the API host, username, and password, are used to establish a session, while optional organization and space parameters allow multi‑tenant or scoped operations. The result is a single, consistent API surface that can be consumed by any MCP‑compatible client, whether it's Claude, GPT‑4o, or a custom assistant.
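As a rough illustration of that consistent surface, the sketch below shows how a generic MCP client could connect to the server and enumerate the exposed Cloud Foundry tools. It assumes the server is already running with its Cloud Foundry credentials configured in its environment and reachable over SSE; the endpoint URL is a placeholder, not a documented default.

```python
# Minimal sketch: discover the server's tools over SSE with the Python MCP SDK.
# The endpoint URL is an assumption; Cloud Foundry credentials are read by the
# server from its own environment, not passed by this client.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def list_cf_tools() -> None:
    # Hypothetical local endpoint; substitute the URL where the server is deployed.
    async with sse_client("http://localhost:8080/sse") as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)


asyncio.run(list_cf_tools())
```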
Key capabilities are grouped into five functional domains:
- Application Management – Create, list, scale, and delete apps with a single tool call.
- Organization & Space Management – Enumerate orgs, inspect details, and manage spaces.
- Service Management – Provision, bind, and clean up service instances.
- Route Management – Dynamically add or remove routes and map them to applications.
- Network Policy Management – Define secure communication paths between services.
Each tool is intentionally lightweight, returning concise JSON payloads that an assistant can immediately display or use in subsequent calls. The server also supports cloning applications and deleting orphaned routes, which are common pain points in continuous delivery pipelines.
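To illustrate how one tool's JSON payload can feed a subsequent call, here is a small sketch of chaining two calls within an established session. The tool names, argument keys, and payload fields are hypothetical placeholders for illustration only; consult the server's actual tool catalog for the real names.

```python
# Sketch of chaining tool calls: read an app's state, then scale it.
# Tool names ("getApplicationDetails", "scaleApplication"), argument keys, and
# payload fields ("state", "instances") are assumptions, not confirmed names.
import json

from mcp import ClientSession


async def scale_if_running(session: ClientSession, app_name: str) -> None:
    # First call: fetch the application's current state.
    details = await session.call_tool("getApplicationDetails", arguments={"name": app_name})
    # Assumes the first content item is the tool's concise JSON payload as text.
    payload = json.loads(details.content[0].text)

    # Second call: reuse the payload to drive a follow-up operation.
    if payload.get("state") == "STARTED":
        await session.call_tool(
            "scaleApplication",
            arguments={"name": app_name, "instances": payload.get("instances", 1) + 1},
        )
```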
In practice, this MCP server shines for scenarios such as automated rollback during a deployment failure, real‑time scaling based on user traffic predictions from an LLM, or generating environment‑specific configuration files through natural language queries. By integrating seamlessly into existing AI workflows—whether via SSE streams or custom client libraries—it empowers developers to embed cloud operations directly into conversational agents, dramatically reducing context switching and accelerating delivery cycles.
Related Servers
- MarkItDown MCP Server – Convert documents to Markdown for LLMs quickly and accurately
- Context7 MCP – Real‑time, version‑specific code docs for LLMs
- Playwright MCP – Browser automation via structured accessibility trees
- BlenderMCP – Claude AI meets Blender for instant 3D creation
- Pydantic AI – Build GenAI agents with Pydantic validation and observability
- Chrome DevTools MCP – AI-powered Chrome automation and debugging
Explore More Servers
- Terminal Server (STDIO / SSE) – Run terminal commands from AI models
- FastMCP Example Server – Run your MCP server with FastMCP and integrate it into Claude Desktop
- GitBook MCP Server – MCP server for GitBook documentation
- Clash Royale MCP Server – FastMCP powered Clash Royale API tools for AI agents
- MCP-BOS – Modular, extensible MCP server framework for Claude Desktop
- MLflow MCP Server – Natural language interface to MLflow experiments and models