MCPSERV.CLUB
Tomby68

MCP Vulnerabilities Demo Server

MCP Server

Showcase of MCP security flaws and mitigation strategies

Stale(55)
2stars
3views
Updated Jun 26, 2025

About

A demonstration framework that exposes common MCP vulnerabilities—such as prompt injection, tool poisoning, and token theft—using the Damn Vulnerable MCP Server. It also provides dual‑LLM client implementations to illustrate mitigation techniques.

Capabilities

Resources
Access data sources
Tools
Execute functions
Prompts
Pre-built templates
Sampling
AI model interactions

Dual LLM MCP Architecture

Overview of the Mcp Vulnerabilities Server

The Mcp Vulnerabilities server is a focused, research‑driven tool designed to expose and study security weaknesses that can arise when Model Context Protocol (MCP) servers are integrated into AI assistant workflows. By providing a deliberately unsafe MCP implementation, the server allows developers and researchers to observe how seemingly innocuous client interactions can be exploited through a variety of attack vectors. This enables the community to evaluate mitigation strategies, audit tool‑use policies, and refine best practices for secure MCP deployments.

Why It Matters

When AI assistants rely on external tools—whether for data retrieval, computation, or action execution—the MCP layer becomes a critical attack surface. Vulnerabilities such as prompt injection, tool poisoning, or token theft can compromise user data, lead to unauthorized actions, or even cause a server crash. The Mcp Vulnerabilities server gives developers a sandbox to test how their security controls hold up against these threats without risking production systems. By reproducing real‑world attack patterns, it helps teams harden authentication, permission scopes, and prompt validation mechanisms before deployment.

Core Features

  • Seven Demonstrated Vulnerabilities: Each vulnerability (e.g., Prompt Injection, Tool Shadowing, Rug Pull Attacks) is presented as a separate server instance with an accompanying client that triggers the exploit. This modularity allows targeted testing of specific security concerns.
  • Dual‑LLM Mitigation Demo: The repository includes a Dual LLM client that follows Simon Willison’s pattern, separating the privileged LLM (tool selection) from a quarantined LLM (output handling). This illustrates how architectural separation can reduce risk.
  • Tool Logging Enhancements: A “Clever Tool Use Logging” component demonstrates how tool poisoning can be detected and logged, providing a practical countermeasure.
  • Local vs. Cloud LLM Support: Clients can run against OpenAI APIs or local models (e.g., llama3.2 via Ollama), allowing developers to evaluate security across different LLM backends.
  • Clear Configuration Flow: While not providing installation code here, the README outlines how to set up environment variables and run specific server–client pairs, making experimentation straightforward.

Real‑World Use Cases

  • Security Auditing: Teams can run the server against their MCP implementations to verify that input sanitization, tool permissions, and prompt handling meet organizational security policies.
  • Research & Education: Academics studying AI safety can use the server to demonstrate how subtle prompt manipulations lead to dangerous behavior, providing concrete examples for coursework or publications.
  • Tool Development: Developers building new MCP‑compatible tools can integrate the server into their CI pipelines to ensure that tool descriptors and usage patterns do not inadvertently expose vulnerabilities.
  • Policy Testing: Organizations can test their role‑based access controls by simulating excess‑permission attacks, ensuring that only authorized tools are available to each user or model.

Integration into AI Workflows

The server’s architecture mirrors typical MCP client–server interactions: a controller sends user prompts and tool descriptions to the server, receives tool identifiers, executes them locally or via an API, then forwards results back through the MCP chain. By embedding the vulnerability demonstrations into this flow, developers can see exactly where checks should be applied—whether at the prompt parsing stage, during tool selection, or when processing tool outputs. The Dual LLM example further shows how adding a quarantine layer can intercept malicious output before it reaches the user, illustrating a practical design pattern for secure AI agents.

Unique Advantages

  • Comprehensive Threat Landscape: Unlike generic security tutorials, this server covers a wide spectrum of MCP‑specific attacks in one place.
  • Modular Attack Paths: Each vulnerability is isolated, enabling focused experiments and precise countermeasure validation.
  • Architectural Countermeasures: The Dual LLM pattern provides a concrete, tested mitigation strategy that can be directly adapted to production systems.
  • Cross‑Model Flexibility: Support for both cloud and local LLMs ensures that findings are applicable regardless of deployment model.

In sum, the Mcp Vulnerabilities server is an essential resource for anyone building or maintaining AI assistants that rely on MCP. It delivers a hands‑on, realistic testing ground for security controls, encourages the adoption of robust architectural patterns, and ultimately helps safeguard users against a growing class of AI‑centric threats.