MCPSERV.CLUB
oborchers

Pacman MCP Server

Search and retrieve package data across major repositories

Updated Aug 19, 2025

About

Pacman is an MCP server that lets LLMs query popular package indices—PyPI, npm, crates.io, Docker Hub, and Terraform Registry—to find packages or fetch detailed metadata. It provides tools for searching, retrieving package info, and getting the latest Terraform module versions.

Capabilities

Resources
Access data sources
Tools
Execute functions
Prompts
Pre-built templates
Sampling
AI model interactions

mcp-server-pacman MCP server

Pacman is a Model Context Protocol (MCP) server designed to give language models instant, structured access to popular package repositories and container registries. By exposing a consistent set of tools and prompts, it turns the vast ecosystem of open‑source libraries—whether they’re Python packages on PyPI, JavaScript modules on npm, Rust crates on crates.io, Docker images on Docker Hub, or Terraform modules in the Registry—into first‑class data sources for AI assistants. This eliminates the need for developers to write custom scrapers or API wrappers, allowing LLMs to fetch real‑time package metadata and search results with a single, well‑defined call.

The server offers five core tools covering package search, package metadata, Docker image lookup, and Terraform module versions. Each tool is parameterized to target a specific index, query term, or package name, and can optionally limit the number of results. For example, the search tool can query any supported index (pypi, npm, crates, terraform) with a simple search string and returns the top five matches by default. The package-info tool pulls detailed metadata such as version history, dependencies, and download statistics for a particular package or a specific release. Docker‑centric tools let the model explore image tags, pull counts, and maintainers, while the Terraform utilities expose module descriptions, available versions, and the latest release.
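The per-index parameterization described above can be pictured as a small dispatcher that maps each supported index to a search endpoint. A minimal sketch, assuming these public endpoint shapes and parameter names (they are illustrative, not Pacman's actual implementation):

```python
from urllib.parse import urlencode

# Hypothetical dispatcher: one entry per supported index, giving an assumed
# search endpoint plus its query/limit parameter names.
INDICES = {
    # PyPI's search endpoint is HTML-only and paginates rather than capping results.
    "pypi":      ("https://pypi.org/search/", "q", "page"),
    "npm":       ("https://registry.npmjs.org/-/v1/search", "text", "size"),
    "crates":    ("https://crates.io/api/v1/crates", "q", "per_page"),
    "terraform": ("https://registry.terraform.io/v1/modules/search", "q", "limit"),
}

def build_search_url(index: str, query: str, limit: int = 5) -> str:
    """Build a search URL for one index; the default limit mirrors the top-five behaviour."""
    try:
        base, query_key, limit_key = INDICES[index]
    except KeyError:
        raise ValueError(f"unsupported index: {index!r}") from None
    return f"{base}?{urlencode({query_key: query, limit_key: limit})}"
```

Keeping the endpoint details in one table is what lets a single search tool accept the index as just another parameter instead of needing one tool per registry.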

Beyond tools, Pacman supplies a suite of ready‑made prompts that encapsulate common query patterns. These prompts provide a higher‑level interface for the model, allowing it to perform typical package discovery workflows without crafting tool calls manually. The prompts are intentionally lightweight: they only require a query string or package name, making them ideal for conversational scenarios where the user simply asks “Find a fast JSON parser in Python” or “Show me the latest version of the aws/s3 module.”

In practice, Pacman becomes a linchpin in developer‑AI pipelines. A code‑generation assistant can ask the model to “search for a logging library in JavaScript,” receive structured results, and then embed the chosen package into the generated code. A documentation bot can pull dependency graphs or release notes automatically, while a CI/CD helper might verify that container images referenced in a deployment script are up to date. Because the server aggregates multiple registries under one protocol, it scales effortlessly across projects that span languages and ecosystems.
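The CI/CD scenario above reduces to comparing a pinned version against the latest release reported by the registry. A minimal sketch, assuming plain dotted numeric versions (the version strings in the test are invented for illustration):

```python
def parse_version(v: str) -> tuple[int, ...]:
    """Parse a dotted numeric version like '1.4.2' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def is_up_to_date(pinned: str, latest: str) -> bool:
    """True when the pinned version is at least the registry's latest release."""
    return parse_version(pinned) >= parse_version(latest)
```

Real-world tags (pre-releases, `v` prefixes, Docker tags like `alpine`) need a proper version library; the point is only that the check is trivial once the latest release is one tool call away.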

What sets Pacman apart is its focus on real‑time, authoritative data. Unlike static package lists or cached APIs, each query hits the live registry endpoints, ensuring that LLMs work with current version numbers, dependency trees, and security advisories. This keeps the AI’s suggestions trustworthy and reduces friction for developers who rely on up‑to‑date libraries. The uniform MCP interface also means that any LLM capable of speaking the protocol can leverage Pacman without custom integration work, making it a plug‑and‑play component for any AI‑augmented development environment.