About
A Docker‑based educational server that implements the Model Context Protocol (MCP) with ten progressively harder security challenges, designed for researchers and developers to study and mitigate MCP vulnerabilities.
Capabilities

The Damn Vulnerable Model Context Protocol (DVMCP) is a sandboxed MCP server built expressly for security education. By intentionally exposing ten progressively harder attack vectors—ranging from simple prompt injection to complex multi‑vector exploits—the project gives researchers a hands‑on playground for testing defenses, auditing tool definitions, and understanding how malicious actors might abuse the very mechanisms that enable AI assistants to interact with external systems.
At its core, DVMCP implements the MCP specification, allowing an LLM to request resources, invoke tools, and receive prompts from a remote server. What sets this implementation apart is its deliberate lack of hardening: each challenge deliberately omits validation, access control, or safe execution boundaries. This design choice forces developers to confront the subtle ways in which seemingly innocuous features can become attack surfaces—such as tool poisoning, where a malicious instruction is hidden in a tool’s description, or token theft through insecure storage.
For developers building AI‑powered applications, DVMCP offers a realistic test bed for validating security controls before deployment. By running the server in Docker and connecting via popular MCP clients (e.g., CLINE), teams can simulate real‑world interactions, verify that their tool definitions are immune to shadowing or rug‑pull attacks, and ensure that prompt sanitization logic is robust. The challenges also expose the importance of proper permission scopes, making it clear that over‑permissive tool access can lead to arbitrary code execution or remote system compromise.
Typical use cases include:
- Security training for AI safety teams, where each challenge serves as a lab exercise.
- Penetration testing of MCP‑enabled services, allowing testers to validate isolation between LLM prompts and system resources.
- Defense‑in‑depth workshops, where developers learn to harden tool registries and enforce strict authentication for token handling.
Integrating DVMCP into an AI workflow is straightforward: once the server is running, any MCP‑compliant client can discover its endpoints, retrieve available tools, and invoke them with controlled inputs. By exposing the full attack surface in a contained environment, developers gain confidence that their production MCP servers will resist real‑world exploitation. The project’s modular structure—separate directories for easy, medium, and hard challenges—lets teams scale their testing from basic prompt sanitization to advanced threat modeling.
Related Servers
MarkItDown MCP Server
Convert documents to Markdown for LLMs quickly and accurately
Context7 MCP
Real‑time, version‑specific code docs for LLMs
Playwright MCP
Browser automation via structured accessibility trees
BlenderMCP
Claude AI meets Blender for instant 3D creation
Pydantic AI
Build GenAI agents with Pydantic validation and observability
Chrome DevTools MCP
AI-powered Chrome automation and debugging
Weekly Views
Server Health
Information
Explore More Servers
Postman MCP Server
Run Postman collections via Newman with LLMs
Threatnews MCP Server
Collects and aggregates threat intelligence data
Ghidra MCP Server
Headless Ghidra for LLM-powered reverse engineering
Pydantic AI
Build GenAI agents with Pydantic validation and observability
avisangle/calculator-server
MCP Server: avisangle/calculator-server
SSH Rails Runner
Secure remote execution of Rails console commands over SSH