About
A FastMCP-based server that submits text prompts to a remote Comfy UI workflow, polls for completion, and returns generated images either as URLs or files. It also supports LLM‑driven prompt generation via Ollama.
Capabilities
Comfy MCP Server is a lightweight, FastMCP‑powered service that bridges an AI assistant with a remote Comfy UI instance for image generation. By exposing prompt‑submission and image‑retrieval endpoints as MCP tools, it lets a conversational agent like Claude submit textual prompts to a Comfy workflow and receive the resulting image, either as a file path or an accessible URL. This removes the need for developers to manually orchestrate HTTP calls, JSON payloads, and polling logic—everything is encapsulated behind a simple MCP interface.
The server solves the common pain point of integrating complex, node‑based image pipelines into AI workflows. Developers can author a Comfy UI workflow once, export it as JSON, and then expose that logic through MCP. The assistant simply calls the image‑generation tool with a natural‑language prompt and receives the rendered artwork, with no awareness of the underlying nodes or queue management. For teams that already run Comfy servers (often on GPU‑powered machines), this server turns the heavy lifting into a reusable, stateless endpoint that scales with the assistant’s request volume.
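The "export once, reuse forever" pattern can be sketched in a few lines. In an API‑format Comfy workflow export, each node appears under a string ID with an "inputs" mapping, so injecting a prompt is just a dictionary update. The function below is an illustrative sketch, not the server's actual code; the node structure is the standard Comfy API export format, but the function name and the "text" input key for the prompt node are assumptions.

```python
import copy

def inject_prompt(workflow: dict, node_id: str, prompt: str) -> dict:
    """Return a copy of an exported Comfy workflow (API format) with the
    given text prompt placed into the configured node's inputs.

    Illustrative sketch: the real server loads the workflow JSON from a
    configured file and targets a configured prompt node ID.
    """
    wf = copy.deepcopy(workflow)  # leave the loaded template untouched
    wf[node_id]["inputs"]["text"] = prompt
    return wf
```

Because the template is deep‑copied, the same exported workflow can serve every incoming request without cross‑request contamination.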
Key capabilities include:
- Dynamic prompt handling – Accepts any string prompt and routes it to the configured Comfy node.
- Workflow abstraction – Uses a pre‑exported JSON workflow, so the assistant never needs to know node IDs or connections.
- Flexible output modes – Supports returning either a direct file path on the server or a public URL, simplifying downstream processing.
- Optional prompt generation – When an Ollama LLM is available, the server can auto‑generate rich image prompts from simple topics.
- Polling and status tracking – Internally monitors the Comfy queue until completion, hiding asynchronous complexity from the client.
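The polling behavior described above can be sketched as a small loop that repeatedly checks Comfy's history for the submitted prompt ID until outputs appear. This is a hypothetical illustration, not the server's implementation: `fetch_history` stands in for an HTTP GET against Comfy's history endpoint so the logic can be shown in isolation.

```python
import time

def wait_for_result(fetch_history, prompt_id, poll_interval=1.0, timeout=60.0):
    """Poll until Comfy's history reports the prompt as finished.

    fetch_history: callable(prompt_id) -> history dict. In a real server
    this would wrap an HTTP GET to the Comfy instance; injecting it here
    keeps the sketch testable without a network.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        history = fetch_history(prompt_id)
        entry = history.get(prompt_id)
        if entry and entry.get("outputs"):  # outputs present => job done
            return entry["outputs"]
        time.sleep(poll_interval)
    raise TimeoutError(f"prompt {prompt_id} did not finish within {timeout}s")
```

The client never sees this loop; the MCP tool call blocks until the image is ready or the timeout fires.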
Typical use cases span creative content generation, rapid prototyping of visual assets, and integrating AI‑driven art into chat or voice assistants. For example, a developer can configure the server with a style‑transfer workflow and then let Claude suggest image concepts on the fly, instantly delivering high‑quality visuals to users. In a production setting, the server can be deployed behind a load balancer and scaled horizontally, allowing multiple assistants to share a single Comfy backend without duplication.
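A deployment like the one described typically boils down to a handful of environment variables pointing the server at the Comfy backend, the exported workflow, and (optionally) an Ollama endpoint. The variable and command names below are assumptions shown for illustration; consult the server's README for the exact names it reads.

```shell
# Illustrative configuration sketch — names are assumptions, not the
# server's documented interface.
export COMFY_URL="http://gpu-host:8188"          # remote Comfy UI instance
export COMFY_WORKFLOW_JSON_FILE="/srv/workflows/style-transfer.json"
export PROMPT_NODE_ID="6"                        # node receiving the text prompt
export OUTPUT_NODE_ID="9"                        # node producing the image
export OUTPUT_MODE="url"                         # "url" or "file"

# Optional: enable LLM-driven prompt generation via Ollama
export OLLAMA_API_BASE="http://ollama-host:11434"

uvx comfy-mcp-server                             # hypothetical launch command
```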
Because it is built on FastMCP, the service inherits robust tooling for authentication, rate limiting, and telemetry. Developers familiar with MCP can plug this server into existing pipelines or extend it with custom endpoints, making Comfy’s powerful generative capabilities a first‑class citizen in any AI assistant ecosystem.