MCPSERV.CLUB
StacklokLabs

OCI Registry MCP Server

MCP Server

Query OCI registries with LLM-powered tools

Active (80)
11 stars
1 view
Updated 12 days ago

About

An SSE-based MCP server that enables LLM applications to retrieve image information, list tags, and fetch manifests or configs from OCI registries.

Capabilities

  • Resources: access data sources
  • Tools: execute functions
  • Prompts: pre-built templates
  • Sampling: AI model interactions

The OCI Registry MCP Server is a lightweight, event‑driven service that exposes a set of tools for interacting with OCI registries through the Model Context Protocol. By running this server, developers can give language models instant access to container image metadata without writing custom API clients or handling authentication logic themselves. The server listens for SSE‑based MCP requests and returns structured JSON responses that describe image digests, sizes, architectures, tags, manifests, and configuration blobs.
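To make the request/response shape concrete: MCP tool invocations are JSON-RPC 2.0 messages with the `tools/call` method. A minimal sketch of what a client might send to this server is below; the image reference is purely illustrative, not taken from the project's documentation.

```python
import json

# An MCP tool call is a JSON-RPC 2.0 message. A client asking this server
# for image metadata might send a payload shaped like this (the image
# reference is an illustrative example).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_image_info",
        "arguments": {"image": "docker.io/library/alpine:latest"},
    },
}

print(json.dumps(request, indent=2))
```

The server streams its structured JSON result back over the same SSE channel, so the client never needs to poll.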

At its core, the server solves the problem of discovering container images in a way that is both machine‑readable and secure. When an AI assistant needs to verify that a particular image contains the expected runtime or to validate that a deployment pipeline is pulling the correct tag, it can simply invoke one of the provided tools. The server handles all the heavy lifting: negotiating authentication (bearer tokens, username/password, or Docker config), querying the registry’s catalog endpoints, and parsing the OCI schema. This removes a common source of friction for developers who would otherwise have to embed registry logic in their own code or rely on external tooling.
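As a sketch of what the Docker-config authentication path involves, the snippet below decodes credentials from a Docker `config.json`, whose `auths` entries store `base64("username:password")` in an `auth` field. The helper name and the sample registry/credentials are made up for illustration; they are not part of this server's API.

```python
import base64
import json

def credentials_from_docker_config(config_text: str, registry: str):
    """Extract (username, password) for a registry from Docker config JSON.

    Docker's config.json keeps credentials under "auths", with the
    "auth" field holding base64("username:password").
    """
    config = json.loads(config_text)
    entry = config.get("auths", {}).get(registry)
    if entry is None or "auth" not in entry:
        return None
    user, _, password = base64.b64decode(entry["auth"]).decode().partition(":")
    return user, password

# Example with a made-up credential:
sample = json.dumps({
    "auths": {"registry.example.com": {
        "auth": base64.b64encode(b"alice:s3cret").decode()
    }}
})
print(credentials_from_docker_config(sample, "registry.example.com"))
# → ('alice', 's3cret')
```

Handling this decoding (plus bearer-token and basic-auth negotiation) server-side is exactly the boilerplate the MCP server spares its clients.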

Key capabilities are delivered through four dedicated MCP tools:

  • get_image_info returns a concise summary of an image, including digest, size, OS/architecture, creation date, and layer count.
  • list_tags enumerates all tags for a specified repository, allowing dynamic discovery of available releases.
  • get_image_manifest fetches the full OCI manifest, exposing layer digests and configuration references.
  • get_image_config retrieves the image’s config blob, which contains environment variables, entrypoints, and other runtime metadata.
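To illustrate the kind of document `get_image_manifest` returns, here is a sketch that summarizes a trimmed OCI image manifest (the JSON layout follows the OCI image spec; the digests and sizes are invented for the example).

```python
import json

# A trimmed OCI image manifest of the kind get_image_manifest returns
# (digests here are illustrative placeholders, not real values).
manifest_json = """
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:aaaa...",
    "size": 1469
  },
  "layers": [
    {"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
     "digest": "sha256:bbbb...", "size": 3370706},
    {"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
     "digest": "sha256:cccc...", "size": 12302}
  ]
}
"""

manifest = json.loads(manifest_json)
layer_count = len(manifest["layers"])
total_size = sum(layer["size"] for layer in manifest["layers"])
print(f"{layer_count} layers, {total_size} bytes of layer data")
```

Because the response is plain structured JSON, an assistant can compute layer counts or total sizes like this without any registry-specific client code.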

These tools are intentionally simple yet expressive enough to support a wide range of use cases. For example, a CI/CD pipeline can ask the model “Is this image built for arm64?” and receive an immediate answer, or a security scanner can prompt the assistant to “Show me all layers in this image” and then inspect them for vulnerabilities. In a chatbot that assists developers, the assistant can answer questions like “What tags are available for a given image?” without leaving the conversation context.
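The “Is this image built for arm64?” check reduces to reading two fields of the config blob that `get_image_config` returns: per the OCI image config spec, the blob carries top-level "os" and "architecture" values. The helper below is a hypothetical sketch, not part of the server itself.

```python
def is_built_for(config_blob: dict, os_name: str, arch: str) -> bool:
    """Answer "is this image built for <os>/<arch>?" from an OCI config blob.

    The OCI image config spec defines top-level "os" and
    "architecture" fields on the config document.
    """
    return (config_blob.get("os") == os_name
            and config_blob.get("architecture") == arch)

# Illustrative config blob, like what get_image_config might return:
config = {"architecture": "arm64", "os": "linux",
          "config": {"Entrypoint": ["/bin/sh"]}}
print(is_built_for(config, "linux", "arm64"))  # True
print(is_built_for(config, "linux", "amd64"))  # False
```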

Integration into AI workflows is streamlined by ToolHive, a containerized deployment framework that automatically configures environment variables and secrets for the server. Once running, any MCP‑compatible client—whether a custom application or a hosted AI platform—can send tool invocation requests over SSE. The server responds in real time, enabling conversational agents to act on registry data as part of their reasoning loop. This tight coupling between AI intent and external system state is a standout feature, reducing latency compared to polling REST APIs and ensuring that the assistant’s knowledge remains up‑to‑date.

In summary, the OCI Registry MCP Server turns container registries into first‑class AI resources. It abstracts authentication, query logic, and data parsing behind a simple, declarative protocol that developers can plug into any LLM‑powered workflow. By exposing image metadata as structured tools, it empowers assistants to make informed decisions, automate compliance checks, and streamline developer operations—all without adding complexity to the client side.