GrowthBook MCP Server

MCP Server

LLM‑enabled access to GrowthBook experiments and flags

Updated Sep 19, 2025

About

The GrowthBook MCP server lets large language model (LLM) clients query and modify GrowthBook data directly, exposing experiment details, feature flag creation, and other management actions through a simple API.

Capabilities

  • Resources – access data sources
  • Tools – execute functions
  • Prompts – pre‑built templates
  • Sampling – AI model interactions

GrowthBook MCP Server

The GrowthBook MCP server bridges the gap between large‑language‑model (LLM) assistants and the GrowthBook experimentation platform. By exposing a set of well‑defined resources, tools, and prompts over the MCP protocol, it lets developers query experiment metadata, create new feature flags, or modify existing experiments directly from within an AI‑powered workflow. This eliminates the need to switch between a browser interface and code, enabling rapid iteration on A/B tests or feature toggles through natural language commands.

At its core, the server offers a simple yet powerful API surface. Developers can request experiment details such as traffic allocation, target audiences, or success metrics; they can also add new feature flags with custom rollout strategies. Because the server authenticates via a GrowthBook API key or personal access token (PAT), it respects granular permissions—users cannot perform actions beyond what their PAT allows. This tight coupling between LLM commands and underlying platform permissions ensures that sensitive operations remain secure while still being highly accessible.
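
For teams that want to script against the server rather than drive it through an assistant, the sketch below uses the official TypeScript MCP SDK to launch the server over stdio, pass credentials through the environment, and list the tools on offer. The package name (@growthbook/mcp) and the environment variable names (GB_API_KEY, GB_API_URL) are assumptions here; consult the server's own documentation for the exact values.

```ts
// Minimal sketch: connect to the GrowthBook MCP server and list its tools.
// Run as an ES module (top-level await). The server package name and
// environment variable names below are illustrative, not confirmed.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the server as a child process, passing credentials via env vars.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@growthbook/mcp"],             // assumed package name
  env: {
    GB_API_KEY: process.env.GB_API_KEY ?? "",  // API key or PAT (assumed var name)
    GB_API_URL: "https://api.growthbook.io",   // swap in a self-hosted URL if needed
  },
});

const client = new Client({ name: "growthbook-example", version: "1.0.0" });
await client.connect(transport);

// Discover what the server exposes; each tool carries a name and description.
const { tools } = await client.listTools();
for (const tool of tools) {
  console.log(`${tool.name}: ${tool.description ?? ""}`);
}
```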

Key capabilities include:

  • Experiment discovery – list, filter, and retrieve detailed experiment configurations without leaving the LLM environment.
  • Feature flag management – create, update, or delete flags and assign them to user segments or rollout percentages (see the sketch after this list).
  • Contextual prompts – the MCP server supplies ready‑made prompts that guide the AI assistant in asking for necessary parameters (e.g., flag name, target audience).
  • Environment awareness – configurable API and app URLs let the server point to on‑premises or custom GrowthBook deployments.
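
To make the flag-management flow concrete, here is a hedged sketch of creating a flag through the client connected in the previous example. The tool name create_feature_flag and its argument shape are hypothetical placeholders; the real names come from the server's own tool listing.

```ts
// Hypothetical sketch, reusing the connected `client` from the previous
// example. The tool name and argument names are illustrative only.
const result = await client.callTool({
  name: "create_feature_flag",         // hypothetical tool name
  arguments: {
    id: "dark-mode",                   // flag key
    description: "Enable dark mode UI",
    valueType: "boolean",
    defaultValue: "false",
  },
});

// Tool results come back as MCP content blocks (usually text).
console.log(JSON.stringify(result.content, null, 2));
```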

Real‑world scenarios where this MCP server shines are plentiful. A product manager can ask the AI assistant to “add a new flag for dark mode and roll it out to 10% of users” and watch the change materialize instantly. A data scientist can request “list all experiments currently running at under 50% traffic” to identify low‑engagement tests. QA teams can even use the server to validate that a newly created flag behaves as expected across multiple environments.

Integration into existing AI workflows is straightforward: the MCP server plugs into any LLM client that supports the protocol, such as Claude or OpenAI’s agents. Once added, developers can invoke server tools via natural language or through structured prompts, letting the assistant orchestrate complex experimentation tasks with minimal friction. The result is a smoother, faster decision‑making loop that keeps product experimentation tightly coupled to the conversational AI layer.
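
For clients that lean on structured prompts rather than free‑form text, the server's ready‑made prompts can be fetched and handed straight to the model. A final sketch, again reusing the connected client from the first example; the prompt name create_flag and its flagName argument are hypothetical:

```ts
// Enumerate the server's pre-built prompts, then fetch one by name.
// The prompt name and argument below are hypothetical placeholders.
const { prompts } = await client.listPrompts();
console.log(prompts.map((p) => p.name));

const prompt = await client.getPrompt({
  name: "create_flag",                  // hypothetical prompt name
  arguments: { flagName: "dark-mode" }, // prompt arguments are strings
});

// The result is a list of chat messages ready to send to the LLM.
for (const message of prompt.messages) {
  console.log(message.role, message.content);
}
```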