About
A Model Context Protocol server that exposes CrewAI Enterprise API endpoints, enabling you to start and monitor crew tasks through Apify Actors with built‑in Pay Per Event charging.
Capabilities
Overview
CrewAI Enterprise MCP Server is a lightweight, cloud‑hosted gateway that exposes the full functionality of a CrewAI Enterprise instance to any MCP‑compatible AI assistant. By running on Apify’s Actor platform, it turns a standard CrewAI deployment into an instantly accessible, real‑time API that can be invoked through the Model Context Protocol. This solves a common pain point for developers: bridging the gap between a self‑hosted AI orchestration engine and conversational agents that need to trigger, monitor, and retrieve results from complex multi‑agent workflows without embedding proprietary code or managing authentication flows.
The server provides two core tools: one to kick off a crew and one to check the status of a run. With a single tool call, an assistant can launch a new crew task by passing arbitrary input payloads (queries, context, agent lists, etc.) and receive a unique crew ID in return. A subsequent status call polls the underlying CrewAI Enterprise API for progress, completion state, and any produced artifacts. Because the server uses Server‑Sent Events (SSE) as its transport, it delivers updates in real time, enabling assistants to present live status bars or progressive responses to end users. The integration is intentionally minimal: developers only need to supply the CrewAI server URL and bearer token, either through Actor inputs or environment variables, and the MCP client can begin issuing calls immediately.
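To make the kickoff-then-poll flow concrete, here is a minimal sketch of the JSON-RPC 2.0 messages an MCP client sends for a `tools/call` request. The tool names `kickoff_crew` and `get_crew_status`, and the argument keys, are illustrative placeholders (the source does not name the actual tools); only the envelope shape follows the MCP specification.

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical kickoff: pass an arbitrary input payload, get a crew ID back.
kickoff = make_tool_call(1, "kickoff_crew", {"inputs": {"query": "market analysis"}})

# Hypothetical status check: poll with the crew ID returned by the first call.
status = make_tool_call(2, "get_crew_status", {"crew_id": "<returned-crew-id>"})
```

In practice an MCP client library builds these messages for you; the sketch only shows what travels over the wire.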
Key capabilities include built‑in Pay Per Event (PPE) billing that automatically charges for server start‑ups, tool executions, and listings, making it straightforward to monetize usage on Apify. The SSE transport ensures low‑latency communication, while the Actor’s standby mode keeps the endpoint responsive without incurring idle costs. Error handling and retry logic are baked in, so developers can focus on business logic rather than network resilience.
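The SSE transport mentioned above streams events as newline-delimited `data:` lines separated by blank lines. This generic parser sketch (not the server's actual implementation) shows how a client-side consumer can extract event payloads from a raw stream:

```python
def parse_sse(stream: str):
    """Yield the data payload of each event in a raw SSE stream.

    Events are separated by blank lines; multi-line data fields
    are joined with newlines, per the SSE specification.
    """
    data_lines = []
    for line in stream.splitlines():
        if line.startswith("data:"):
            data_lines.append(line[5:].lstrip())
        elif line == "" and data_lines:
            yield "\n".join(data_lines)
            data_lines = []
    if data_lines:  # flush a trailing event with no final blank line
        yield "\n".join(data_lines)

events = list(parse_sse('data: {"status": "running"}\n\ndata: {"status": "done"}\n\n'))
# → ['{"status": "running"}', '{"status": "done"}']
```

Real MCP clients handle this via their SSE transport layer; the point is that each event arrives as soon as the server emits it, which is what enables live status updates.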
Typical use cases span from data‑driven research assistants that spawn multi‑agent pipelines for market analysis, to customer support bots that trigger background knowledge‑gathering tasks and return summarized answers. In enterprise settings, the MCP server can serve as a single point of integration for multiple AI assistants—each can invoke crew workflows, share results, and aggregate insights without direct access to the underlying CrewAI infrastructure. This decoupling simplifies security (only a bearer token is exposed), scaling (Apify handles load balancing), and governance (PPE allows fine‑grained cost monitoring).
In summary, the CrewAI Enterprise MCP Server turns a complex agent orchestration platform into an off‑the‑shelf, protocol‑compliant service that developers can plug into any AI workflow. Its real‑time communication, automatic billing, and straightforward configuration make it a compelling choice for teams looking to extend conversational agents with powerful, multi‑agent capabilities while keeping operational overhead low.
Related Servers
n8n
Self‑hosted, code‑first workflow automation platform
FastMCP
TypeScript framework for rapid MCP server development
Activepieces
Open-source AI automation platform for building and deploying extensible workflows
MaxKB
Enterprise‑grade AI agent platform with RAG and workflow orchestration
Filestash
Web‑based file manager for any storage backend
MCP for Beginners
Learn Model Context Protocol with hands‑on examples
Explore More Servers
MCP Servers Collection
A unified hub for Model Context Protocol servers
MCP Plexus
Secure, Multi-Tenant MCP Server for AI Backends
Comfy UI MCP Server
Local ComfyUI integration for note management and prompt generation
Mcp News
Fast, API-driven news retrieval for developers
Amazon Fresh Server
Simple note-taking MCP server with summarization tools
Chatmcp MCP Server Collector
Collect and submit MCP servers from anywhere