About
A Model Context Protocol server that lets LLMs retrieve and query Burp Suite proxy history using SQL-like queries, enabling researchers and penetration testers to efficiently analyze requests and responses.
Capabilities

BurpMCP – AI‑Powered Burp Suite Extension
BurpMCP bridges the gap between traditional manual web‑app security testing and the rapidly evolving capabilities of large language models. By exposing Burp Suite’s rich request/response handling through the Model Context Protocol (MCP), it lets security researchers, bug‑bounty hunters and penetration testers harness an LLM as a “super‑intelligent sidekick” that can autonomously craft, tweak and send HTTP traffic while still giving the human operator full visibility and control.
At its core, BurpMCP runs an MCP server alongside Burp and integrates directly with the Burp extension API. Users can right‑click any intercepted request, send it to BurpMCP, and have the request stored in a dedicated “Saved Requests” tab. From there, an LLM can retrieve the request through an MCP tool call, modify headers or payloads with regex replacements (much like Repeater, but driven by AI), and dispatch new HTTP/1.1 or HTTP/2 requests that appear in Burp’s request logs. The extension also supports generating and monitoring Collaborator payloads for out‑of‑band testing, and it logs every MCP message in a dedicated tab for debugging.
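To make that interaction concrete, here is a minimal sketch of driving BurpMCP from Python with the official `mcp` client SDK. The server URL, the SSE transport, and the tool names `get_saved_request` and `send_http1_request` (along with their arguments) are assumptions for illustration, not BurpMCP’s documented interface; listing the tools first shows what the extension actually exposes.

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def main() -> None:
    # Assumed address and transport; use whatever the BurpMCP extension reports.
    async with sse_client("http://127.0.0.1:8010/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the tools the extension actually exposes.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Hypothetical tool names and arguments, for illustration only:
            # fetch a saved request, then resend it with a regex replacement
            # applied to the payload (Repeater-style, but driven by the model).
            saved = await session.call_tool("get_saved_request", {"id": 1})
            print(saved.content)

            result = await session.call_tool(
                "send_http1_request",
                {"id": 1, "regex": r"role=user", "replacement": "role=admin"},
            )
            print(result.content)


asyncio.run(main())
```

In normal use the LLM issues these tool calls itself; the point is that every call still surfaces in Burp’s request logs and BurpMCP’s MCP message tab, so the operator can audit exactly what the model sent.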
Key capabilities include:
- Context‑rich request storage – each saved request carries notes that the model can use to understand intent or target vulnerabilities.
- Regex‑based auto‑tweaking – quickly iterate on payloads without leaving the UI.
- Collaborator integration – automate external interaction capture for advanced exploitation scenarios.
- Transparent logging – see exactly what the LLM sends, facilitating audit and troubleshooting.
Real‑world use cases span from exploring unfamiliar attack surfaces to automating repetitive test steps. A researcher can prompt the model to “search for XXE vulnerabilities in this endpoint,” and the LLM will generate and send test requests and analyze the responses, all while the researcher watches the traffic in Burp. In bug‑bounty programs, teams can delegate routine fuzzing or parameter enumeration to the LLM, freeing human time for higher‑level analysis.
Integrating BurpMCP into an AI workflow is straightforward: most MCP clients (Claude Desktop, Cursor, Dive, etc.) can be configured to point at the local server. For STDIO‑only clients a small bridge script is provided, ensuring broad compatibility across tools. The result is a seamless blend of human intuition and machine‑driven exploration, giving developers and security professionals a powerful ally in the ever‑expanding landscape of web application threats.
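As a concrete illustration of that setup, the sketch below prints the kind of `mcpServers` entry a STDIO‑only client such as Claude Desktop expects in its claude_desktop_config.json. The `burpmcp` label and the bridge‑script path are placeholders rather than BurpMCP’s documented values; clients that speak HTTP/SSE natively can instead be pointed straight at the local server.

```python
import json

# Minimal sketch of an MCP client configuration entry for BurpMCP.
# "burpmcp" is an arbitrary label, and the bridge script path is a
# placeholder for whichever STDIO bridge the extension provides --
# substitute the real path from your BurpMCP installation.
entry = {
    "mcpServers": {
        "burpmcp": {
            "command": "python",
            "args": ["/path/to/burpmcp-stdio-bridge.py"],
        }
    }
}

print(json.dumps(entry, indent=2))
```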
Related Servers
MarkItDown MCP Server
Convert documents to Markdown for LLMs quickly and accurately
Context7 MCP
Real‑time, version‑specific code docs for LLMs
Playwright MCP
Browser automation via structured accessibility trees
BlenderMCP
Claude AI meets Blender for instant 3D creation
Pydantic AI
Build GenAI agents with Pydantic validation and observability
Chrome DevTools MCP
AI-powered Chrome automation and debugging
Explore More Servers
NodeMCU MCP Server
AI‑powered management for ESP8266/NodeMCU devices
Synthcore 2.0 Mcp Server
MCP Server: Synthcore 2.0 Mcp Server
Slack MCP Server
Integrate Slack into Model Context Protocol workflows
Clash Royale MCP Server
FastMCP powered Clash Royale API tools for AI agents
Azure AI Vision Face Liveness MCP Server
Embed proof of presence in Agentic AI workflows
Lighthouse MCP Server
AI‑powered web performance & audit engine