About
The GrowthBook MCP server lets large language model (LLM) clients query and modify GrowthBook data directly. It supports querying experiment details, creating feature flags, and performing other management actions through a simple API.
Capabilities
The GrowthBook MCP server bridges the gap between LLM assistants and the GrowthBook experimentation platform. By exposing a set of well‑defined resources, tools, and prompts over MCP, it lets developers query experiment metadata, create new feature flags, or modify existing experiments directly from within an AI‑powered workflow. This eliminates the need to switch between a browser interface and code, enabling rapid iteration on A/B tests and feature toggles through natural‑language commands.
At its core, the server offers a simple yet powerful API surface. Developers can request experiment details such as traffic allocation, target audiences, or success metrics; they can also add new feature flags with custom rollout strategies. Because the server authenticates via a GrowthBook API key or personal access token (PAT), it respects granular permissions—users cannot perform actions beyond what their PAT allows. This tight coupling between LLM commands and underlying platform permissions ensures that sensitive operations remain secure while still being highly accessible.
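As a concrete sketch of that connection and authentication flow, the snippet below launches the server over stdio from an MCP client and passes credentials through the environment. It assumes the official TypeScript MCP SDK (`@modelcontextprotocol/sdk`); the `@growthbook/mcp` package name and the `GB_API_KEY`/`GB_API_URL` variable names follow GrowthBook's setup documentation, but verify them against your deployment.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Launch the GrowthBook MCP server as a child process speaking MCP
  // over stdio. Package and env var names follow GrowthBook's docs;
  // adjust them if your deployment differs.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["-y", "@growthbook/mcp"],
    env: {
      GB_API_KEY: process.env.GB_API_KEY ?? "", // API key or PAT; scopes what the server may do
      GB_API_URL: "https://api.growthbook.io", // point at an on-premises API instead if needed
    },
  });

  const client = new Client({ name: "growthbook-example", version: "0.1.0" });
  await client.connect(transport); // performs the MCP initialize handshake
  console.log("connected to", client.getServerVersion()?.name);

  await client.close();
}

main().catch(console.error);
```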
Key capabilities include:
- Experiment discovery – list, filter, and retrieve detailed experiment configurations without leaving the LLM environment.
- Feature flag management – create, update, or delete flags and assign them to user segments or rollout percentages (see the sketch following this list).
- Contextual prompts – the MCP server supplies ready‑made prompts that guide the AI assistant in asking for necessary parameters (e.g., flag name, target audience).
- Environment awareness – configurable API and app URLs let the server point to on‑premises or custom GrowthBook deployments.
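To make flag management concrete, here is a minimal sketch of a tool call for flag creation, continuing inside `main()` from the snippet above. The tool name `create_feature_flag` and its argument shape are hypothetical stand‑ins; the real names and schemas are advertised by the server at runtime.

```typescript
// Continuing inside main() from the earlier snippet.
// Hypothetical tool name and arguments -- discover the real ones
// with client.listTools() before relying on them.
const result = await client.callTool({
  name: "create_feature_flag",
  arguments: {
    id: "dark-mode",
    valueType: "boolean",
    defaultValue: "false",
    description: "Enable the dark mode UI",
  },
});

// Tool results arrive as structured content blocks.
console.log(result.content);
```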
Real‑world scenarios where this MCP shines are plentiful. A product manager can ask the AI assistant to “add a new flag for dark mode and roll it out to 10% of users” and watch the change materialize instantly. A data scientist can request “list all experiments that are currently under 50% traffic” to identify low‑engagement tests. Even QA teams can use the server to validate that a newly created flag behaves as expected across multiple environments.
Integration into existing AI workflows is straightforward: the MCP server plugs into any LLM client that supports the protocol, such as Claude or OpenAI’s agents. Once added, developers can invoke server tools via natural language or through structured prompts, letting the assistant orchestrate complex experimentation tasks with minimal friction. The result is a smoother, faster decision‑making loop that keeps product experimentation tightly coupled to the conversational AI layer.
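For clients that orchestrate tools programmatically rather than purely through chat, discovery works the same as for any MCP server. A short sketch, again continuing from the connected `client` above:

```typescript
// Enumerate whatever tools this server version exposes; LLM clients
// run the same discovery step to decide what to surface.
const { tools } = await client.listTools();
for (const tool of tools) {
  console.log(`${tool.name}: ${tool.description ?? ""}`);
}

// Contextual prompts (e.g., guided flag creation) are listed separately.
const { prompts } = await client.listPrompts();
console.log(prompts.map((p) => p.name));
```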
Related Servers
- MarkItDown MCP Server – Convert documents to Markdown for LLMs quickly and accurately
- Context7 MCP – Real‑time, version‑specific code docs for LLMs
- Playwright MCP – Browser automation via structured accessibility trees
- BlenderMCP – Claude AI meets Blender for instant 3D creation
- Pydantic AI – Build GenAI agents with Pydantic validation and observability
- Chrome DevTools MCP – AI-powered Chrome automation and debugging
Explore More Servers
- ElevenLabs MCP Server – Text‑to‑speech with persistent voice history
- kill-process-mcp
- Gmail AutoAuth MCP Server – Seamless Gmail integration with auto‑auth for AI assistants
- Nobitex Market Data MCP Server – Real‑time crypto market stats from Nobitex API
- MCP Think Tool Server – Structured reasoning for Claude's complex tasks
- Cronlytic MCP Server – Seamless cron job management via LLMs