About
A Model Context Protocol (MCP) server implementation for running Locust load tests. This server enables seamless integration of Locust load-testing capabilities with AI-powered development environments.
Overview
The Locust MCP server bridges the gap between traditional load‑testing tools and modern AI‑powered development environments. By exposing Locust’s powerful, scriptable load‑testing engine through the Model Context Protocol (MCP), developers can trigger, monitor, and analyze performance tests directly from an LLM interface such as Claude Desktop or Cursor. This eliminates the need to manually run Locust from a terminal, parse logs, or switch contexts while debugging application performance.
At its core, the server exposes a single, well-defined load-testing tool. It accepts a Python test script and a set of runtime parameters: headless mode, target host, duration, user count, and spawn rate. When invoked by an LLM, the server launches Locust in either headless or UI mode, streams real-time metrics back to the assistant, and returns a concise summary of key performance indicators. Streaming output lets developers watch throughput, latency, and error rates as the test progresses, enabling immediate hypothesis testing and parameter tuning.
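The runtime parameters described above map directly onto Locust's standard CLI flags. A minimal sketch of how such an invocation might be assembled; the function name and default values are illustrative, not the server's actual internals:

```python
import shlex


def build_locust_command(
    script_path: str,
    host: str,
    users: int = 10,
    spawn_rate: int = 2,
    run_time: str = "60s",
    headless: bool = True,
) -> list[str]:
    """Assemble a Locust CLI invocation from the tool's runtime parameters."""
    cmd = [
        "locust", "-f", script_path,
        "--host", host,
        "--users", str(users),
        "--spawn-rate", str(spawn_rate),
    ]
    if headless:
        # Headless runs need an explicit duration; UI mode runs until stopped.
        cmd += ["--headless", "--run-time", run_time]
    return cmd


print(shlex.join(build_locust_command("locustfile.py", "https://example.com")))
```

Keeping the command as a list (rather than a shell string) avoids quoting pitfalls when the server hands it to a process launcher.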
Key capabilities include:
- Headless and UI execution: Run tests without a browser or launch the full Locust dashboard for visual inspection.
- Configurable test parameters: Adjust users, spawn rate, and runtime on the fly to model realistic traffic patterns.
- HTTP/HTTPS support: Test any web service out of the box, including custom task scenarios defined in user scripts.
- Real‑time feedback: The MCP server streams live logs, enabling the LLM to provide contextual insights or suggest adjustments mid‑run.
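The real-time feedback capability boils down to reading the Locust process's stdout incrementally rather than waiting for it to exit. A stdlib-only sketch of that pattern, with a stand-in demo process; the helper name is an assumption, and a real run would pass the Locust command instead:

```python
import subprocess
import sys


def stream_output(cmd: list[str]) -> list[str]:
    """Run a command and consume its stdout line by line as it arrives."""
    lines = []
    with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True) as proc:
        for line in proc.stdout:  # blocks until each line is available
            line = line.rstrip("\n")
            # In the real server, each line would be relayed to the LLM client
            # as it appears; here we just collect it.
            lines.append(line)
    return lines


# Stand-in process that prints two log-like lines.
demo = [sys.executable, "-c", "print('spawned 10 users'); print('0 failures')"]
print(stream_output(demo))
```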
Typical use cases span performance engineering, continuous integration pipelines, and exploratory testing. In a CI/CD workflow, an LLM can automatically trigger a load test after a new build, analyze the results, and flag potential bottlenecks. During debugging sessions, developers can ask the assistant to run a targeted test against a specific endpoint and receive instant metrics that inform whether a recent code change introduced regressions. The server’s integration with MCP also means it can be combined with other AI tools—such as prompt engineering or data analysis modules—to create a cohesive, end‑to‑end performance testing assistant.
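In a CI pipeline, the pass/fail decision can be as simple as thresholding the summary statistics Locust can write out with its `--csv` option. A sketch under assumed column names in the shape of that output; the sample data and helper are illustrative, not verified Locust output:

```python
import csv
import io

# Illustrative sample shaped like a Locust --csv stats file; the exact
# column names are assumptions about that format.
SAMPLE_STATS = """\
Type,Name,Request Count,Failure Count,Average Response Time
GET,/api/items,1200,3,45.2
GET,/api/users,800,0,38.7
"""


def failing_endpoints(stats_csv: str, max_failures: int = 0) -> list[str]:
    """Return the endpoints whose failure count exceeds the CI threshold."""
    reader = csv.DictReader(io.StringIO(stats_csv))
    return [
        row["Name"]
        for row in reader
        if int(row["Failure Count"]) > max_failures
    ]


print(failing_endpoints(SAMPLE_STATS))  # flags /api/items
```

A build step could fail the pipeline whenever this list is non-empty, or hand the list to the LLM for a narrative summary of the regression.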
What sets Locust MCP apart is its lightweight design and tight coupling to the MCP framework. Developers already familiar with MCP can add Locust support with minimal configuration, while the server’s headless mode ensures it fits cleanly into headless CI environments. The result is a seamless, AI‑driven workflow that turns complex load testing into an interactive, conversational experience.