lalanikarim

Comfy MCP Server


Generate images via Comfy UI from prompts


About

A FastMCP-based server that submits text prompts to a remote Comfy UI workflow, polls for completion, and returns generated images either as URLs or files. It also supports LLM‑driven prompt generation via Ollama.

Capabilities

  • Resources – Access data sources
  • Tools – Execute functions
  • Prompts – Pre‑built templates
  • Sampling – AI model interactions

Comfy MCP Server

Comfy MCP Server is a lightweight, FastMCP‑powered service that bridges an AI assistant with a remote Comfy UI instance for image generation. By exposing image‑generation (and optional prompt‑generation) functionality as MCP tools, it lets a conversational agent such as Claude submit a textual prompt to a Comfy workflow and receive the resulting image, either as a file path or an accessible URL. This removes the need for developers to manually orchestrate HTTP calls, JSON payloads, and polling logic; everything is encapsulated behind a simple MCP interface.
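As a rough sketch of what that interface looks like, a FastMCP server registers a tool and hides the Comfy plumbing behind it. The tool name generate_image and the comfy_submit helper below are illustrative placeholders, not the project's documented identifiers:

```python
# Minimal sketch, assuming the MCP Python SDK's FastMCP class.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Comfy MCP Server")

def comfy_submit(prompt: str) -> str:
    """Placeholder for the queue-and-poll call against Comfy UI (sketched further below)."""
    raise NotImplementedError

@mcp.tool()
def generate_image(prompt: str) -> str:
    """Send a text prompt to the configured Comfy UI workflow and return the image location."""
    # The real server queues the workflow, waits for completion, and returns
    # either a file path or a URL depending on the configured output mode.
    return comfy_submit(prompt)

if __name__ == "__main__":
    mcp.run()
```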

The server solves the common pain point of integrating complex, node‑based image pipelines into AI workflows. Developers can author a Comfy UI workflow once, export it as JSON, and then expose that logic through MCP. The assistant simply calls the image‑generation tool with a natural‑language prompt and receives the rendered artwork without any awareness of the underlying nodes or queue management. For teams that already run Comfy servers (often on GPU‑powered machines), this server turns the heavy lifting into a reusable, stateless endpoint that scales with the assistant’s request volume.
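Under the hood, that abstraction amounts to loading the exported workflow JSON and patching the caller's prompt into one configured node before queuing it. A minimal sketch, assuming Comfy UI's API‑format export and using hypothetical environment‑variable names:

```python
# Illustrative only: the variable names COMFY_WORKFLOW_JSON_FILE and
# PROMPT_NODE_ID are assumptions for this example, not documented settings.
import json
import os

def build_payload(prompt: str) -> dict:
    with open(os.environ["COMFY_WORKFLOW_JSON_FILE"]) as f:
        workflow = json.load(f)
    node_id = os.environ["PROMPT_NODE_ID"]
    # In Comfy UI's API-format export, each node's editable fields live under
    # "inputs"; the exact field name depends on the workflow's node types.
    workflow[node_id]["inputs"]["text"] = prompt
    return {"prompt": workflow}
```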

Key capabilities include:

  • Dynamic prompt handling – Accepts any string prompt and routes it to the configured Comfy node.
  • Workflow abstraction – Uses a pre‑exported JSON workflow, so the assistant never needs to know node IDs or connections.
  • Flexible output modes – Supports returning either a direct file path on the server or a public URL, simplifying downstream processing.
  • Optional prompt generation – When an Ollama LLM is available, the server can auto‑generate rich image prompts from simple topics.
  • Polling and status tracking – Internally monitors the Comfy queue until completion, hiding asynchronous complexity from the client (see the sketch below this list).
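A rough sketch of that queue‑then‑poll pattern against Comfy UI's HTTP API (its /prompt and /history endpoints), with COMFY_URL as a placeholder base address:

```python
# Sketch of queuing a workflow and polling until Comfy UI reports results.
import time
import requests

COMFY_URL = "http://localhost:8188"  # placeholder address for a Comfy UI instance

def queue_and_wait(payload: dict, timeout: float = 300.0) -> dict:
    # Queue the workflow; Comfy UI responds with a prompt_id to track.
    prompt_id = requests.post(f"{COMFY_URL}/prompt", json=payload).json()["prompt_id"]
    deadline = time.time() + timeout
    while time.time() < deadline:
        history = requests.get(f"{COMFY_URL}/history/{prompt_id}").json()
        if prompt_id in history:          # the entry appears once the run finishes
            return history[prompt_id]["outputs"]
        time.sleep(1)                     # keep polling until completion or timeout
    raise TimeoutError("Comfy UI did not finish within the allotted time")
```

Once the outputs are available, the server can hand back the generated files themselves or construct URLs pointing at them, which is how the flexible output modes described above come into play.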

Typical use cases span creative content generation, rapid prototyping of visual assets, and integrating AI‑driven art into chat or voice assistants. For example, a developer can configure the server with a style‑transfer workflow and then let Claude suggest image concepts on the fly, instantly delivering high‑quality visuals to users. In a production setting, the server can be deployed behind a load balancer and scaled horizontally, allowing multiple assistants to share a single Comfy backend without duplication.

Because it is built on FastMCP, the service inherits robust tooling for authentication, rate limiting, and telemetry. Developers familiar with MCP can plug this server into existing pipelines or extend it with custom endpoints, making Comfy’s powerful generative capabilities a first‑class citizen in any AI assistant ecosystem.