MCPSERV.CLUB
harishsg993010

Damn Vulnerable Model Context Protocol Server

MCP Server

Learn MCP security by hacking a deliberately vulnerable server

Stale(55)
1.2kstars
2views
Updated 12 days ago

About

A Docker‑based educational server that implements the Model Context Protocol (MCP) with ten progressively harder security challenges, designed for researchers and developers to study and mitigate MCP vulnerabilities.

Capabilities

Resources
Access data sources
Tools
Execute functions
Prompts
Pre-built templates
Sampling
AI model interactions

Damn Vulnerable MCP Demo

The Damn Vulnerable Model Context Protocol (DVMCP) is a sandboxed MCP server built expressly for security education. By intentionally exposing ten progressively harder attack vectors—ranging from simple prompt injection to complex multi‑vector exploits—the project gives researchers a hands‑on playground for testing defenses, auditing tool definitions, and understanding how malicious actors might abuse the very mechanisms that enable AI assistants to interact with external systems.

At its core, DVMCP implements the MCP specification, allowing an LLM to request resources, invoke tools, and receive prompts from a remote server. What sets this implementation apart is its deliberate lack of hardening: each challenge deliberately omits validation, access control, or safe execution boundaries. This design choice forces developers to confront the subtle ways in which seemingly innocuous features can become attack surfaces—such as tool poisoning, where a malicious instruction is hidden in a tool’s description, or token theft through insecure storage.

For developers building AI‑powered applications, DVMCP offers a realistic test bed for validating security controls before deployment. By running the server in Docker and connecting via popular MCP clients (e.g., CLINE), teams can simulate real‑world interactions, verify that their tool definitions are immune to shadowing or rug‑pull attacks, and ensure that prompt sanitization logic is robust. The challenges also expose the importance of proper permission scopes, making it clear that over‑permissive tool access can lead to arbitrary code execution or remote system compromise.

Typical use cases include:

  • Security training for AI safety teams, where each challenge serves as a lab exercise.
  • Penetration testing of MCP‑enabled services, allowing testers to validate isolation between LLM prompts and system resources.
  • Defense‑in‑depth workshops, where developers learn to harden tool registries and enforce strict authentication for token handling.

Integrating DVMCP into an AI workflow is straightforward: once the server is running, any MCP‑compliant client can discover its endpoints, retrieve available tools, and invoke them with controlled inputs. By exposing the full attack surface in a contained environment, developers gain confidence that their production MCP servers will resist real‑world exploitation. The project’s modular structure—separate directories for easy, medium, and hard challenges—lets teams scale their testing from basic prompt sanitization to advanced threat modeling.