MCPSERV.CLUB
EdenYavin

Promptfuzzer MCP Server


Lightweight MCP server for Garak LLM vulnerability scanning

Updated Sep 21, 2025

About

A compact Model Context Protocol server that integrates with Garak to list model types, models, and probes, run attacks on selected models, and retrieve vulnerability reports via simple API calls.

Capabilities

  • Resources – Access data sources
  • Tools – Execute functions
  • Prompts – Pre-built templates
  • Sampling – AI model interactions

Promptfuzzer MCP in action

The Promptfuzzer MCP server bridges AI assistants with Garak, an LLM vulnerability scanner. By exposing a set of lightweight tools over the Model Context Protocol, it lets developers trigger security checks on models directly from their favorite AI client—whether that’s Claude Desktop, Cursor, or any other MCP‑compatible interface. The server abstracts away the complexity of configuring Garak locally and provides a clean, declarative API for listing available models, selecting probes, executing scans, and retrieving reports—all within the same conversational flow that powers the assistant.

At its core, the server offers five key operations:

  • list_model_types – Enumerates supported backends (ollama, openai, huggingface, ggml), giving developers a quick view of where they can run scans.
  • list_models – For a chosen backend, it returns every model that Garak can interrogate, simplifying the discovery of target LLMs.
  • list_garak_probes – Shows every probe or attack script bundled with Garak, letting users choose the most relevant test for a given scenario.
  • run_attack – Executes a selected probe against a specified model, returning the vulnerabilities found in a structured list.
  • get_report – Provides the file path to the most recent scan report, enabling downstream processing or archival.
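The five operations above can be sketched as plain Python handlers. This is an illustrative outline only, not the server's actual implementation: the backend data, model names, probes, and findings below are stubs standing in for what Garak would really return.

```python
# Illustrative sketch of the five Promptfuzzer MCP operations as plain
# Python handlers. All data here is stubbed for demonstration.

MODEL_TYPES = ["ollama", "openai", "huggingface", "ggml"]

STUB_MODELS = {
    "ollama": ["llama3", "mistral"],   # stand-ins for real model lists
    "openai": ["gpt-4o-mini"],
}

STUB_PROBES = ["promptinject", "dan", "encoding"]  # example probe names

def list_model_types():
    """Enumerate supported backends."""
    return MODEL_TYPES

def list_models(model_type):
    """Return the models available for one backend."""
    return STUB_MODELS.get(model_type, [])

def list_garak_probes():
    """Show the probes bundled with the scanner."""
    return STUB_PROBES

def run_attack(model_type, model, probe):
    """Execute one probe against one model; returns findings (stubbed)."""
    return [{"probe": probe, "model": f"{model_type}:{model}", "passed": False}]

def get_report():
    """Path to the most recent scan report (stubbed)."""
    return "/tmp/garak_latest.report.jsonl"
```

In the real server each handler would be registered as an MCP tool and delegate to Garak rather than returning canned data.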

These capabilities are valuable because they eliminate manual setup and provide a consistent interface for automated security testing. Developers can embed vulnerability scans into CI/CD pipelines, trigger them on demand from a chat window, or surface findings in dashboards—all without leaving the AI assistant environment.
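A CI/CD step could, for instance, call `run_attack` through any MCP client and fail the build when findings come back. The sketch below fakes the tool call with a local stub to show only the gating logic; the finding fields are assumptions, not real Garak output.

```python
# Sketch of a CI gate: fail the pipeline when a scan reports findings.
# run_attack here is a local stub; in practice the call would go through
# an MCP client to the Promptfuzzer server.

def run_attack(model_type, model, probe):
    # Stubbed findings; a clean model would return an empty list.
    return [{"probe": probe, "detector": "example.Detector"}]

def ci_gate(model_type, model, probe):
    """Return a process exit code: 1 if the scan found anything, else 0."""
    findings = run_attack(model_type, model, probe)
    if findings:
        print(f"FAIL: {len(findings)} finding(s) from probe '{probe}'")
        return 1
    print("PASS: no vulnerabilities reported")
    return 0
```

A pipeline would pass `ci_gate`'s return value to `sys.exit` so the build fails whenever the model is flagged.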

Real‑world use cases include:

  • Model vetting – Quickly assess whether a newly fine‑tuned model leaks sensitive data or exhibits unsafe behavior before deployment.
  • Compliance monitoring – Automate regular scans to satisfy regulatory requirements for data privacy and content safety.
  • Threat research – Experiment with new probes or custom attack scripts while observing results in real time.
  • DevOps integration – Hook the MCP endpoints into infrastructure tooling, so that every model build triggers a security audit.

The server’s design gives it unique advantages. By running Garak inside an MCP server, the heavy lifting of vulnerability analysis stays on a dedicated machine while the AI assistant remains lightweight. The API is intentionally minimal yet expressive, enabling rapid iteration and easy extension (e.g., adding support for Smithery AI or new model backends). Moreover, the ability to retrieve a report file path means that downstream tools can ingest raw data for advanced analytics or compliance reporting.
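Since Garak writes its findings as JSON Lines, a downstream consumer could take the path from `get_report` and filter the records it cares about. A minimal sketch follows; the field names (`entry_type`, `status`) and the status value are assumptions for illustration, so check the schema your Garak version actually emits.

```python
# Sketch: ingest a Garak-style JSON Lines report for downstream analytics.
# The record fields below are assumed for illustration, not a guaranteed schema.
import io
import json

# In practice this would be: open(get_report()) — a sample stream is used here.
SAMPLE_REPORT = io.StringIO(
    '{"entry_type": "attempt", "probe": "dan", "status": 2}\n'
    '{"entry_type": "attempt", "probe": "dan", "status": 1}\n'
    '{"entry_type": "config", "garak_version": "x.y"}\n'
)

def scored_attempts(lines):
    """Yield attempt records that reached a scored state (assumed status 2)."""
    for line in lines:
        record = json.loads(line)
        if record.get("entry_type") == "attempt" and record.get("status") == 2:
            yield record

hits = list(scored_attempts(SAMPLE_REPORT))
```

Because each line is an independent JSON object, a dashboard or compliance job can stream large reports without loading them whole.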

In summary, Promptfuzzer MCP turns a complex LLM security tool into a first‑class citizen of AI workflows. It empowers developers to safeguard models with minimal friction, integrates seamlessly into existing MCP ecosystems, and lays the groundwork for scalable, automated model security practices.