yincongcyincong

VictoriaMetrics MCP Server


Fast, scalable metrics storage for Claude Desktop

Active (70) · 7 stars · 2 views · Updated Aug 8, 2025

About

Provides a Model Context Protocol interface to VictoriaMetrics, enabling write, query, and label operations from Claude Desktop via Smithery. Ideal for real‑time monitoring and analytics.

Capabilities

  • Resources: access data sources
  • Tools: execute functions
  • Prompts: pre-built templates
  • Sampling: AI model interactions

VictoriaMetrics MCP Server Overview

The VictoriaMetrics MCP Server bridges the gap between AI assistants and high‑performance time‑series storage. It exposes a set of well‑defined tools that let an AI client like Claude read from and write to a VictoriaMetrics instance using the familiar Model Context Protocol (MCP). By doing so, developers can embed real‑time monitoring data directly into conversational workflows, enabling intelligent decision‑making and automated incident response without leaving the AI environment.

At its core, the server provides six primary tools, grouped into four capability areas:

  • Data ingestion: two tools let the assistant push raw metric values or Prometheus exposition data into VictoriaMetrics, supporting both structured JSON payloads and the native Prometheus text format.
  • Time‑range querying: retrieves historical series over arbitrary periods, enabling trend analysis and root‑cause investigations.
  • Instant querying: retrieves the current value of a metric, useful for real‑time alerts and status checks.
  • Metadata discovery: two tools expose the label namespace, letting the AI enumerate available dimensions or filter data by specific tags.
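These query tools map onto VictoriaMetrics' Prometheus‑compatible HTTP API. As a rough sketch (the helper below is illustrative, not part of the server's code), a time‑range query translates into a request like this:

```python
import time
from urllib.parse import urlencode


def build_range_query(base_url: str, promql: str, lookback_s: int,
                      step: str = "30s") -> str:
    """Build a Prometheus-compatible range-query URL of the kind a
    time-range tool would issue against a VictoriaMetrics instance."""
    end = int(time.time())
    params = {
        "query": promql,          # the PromQL/MetricsQL expression
        "start": end - lookback_s,
        "end": end,
        "step": step,             # resolution of the returned series
    }
    return f"{base_url}/api/v1/query_range?{urlencode(params)}"


# Last hour of request rate, assuming a local single-node instance
url = build_range_query("http://localhost:8428",
                        "rate(http_requests_total[5m])", 3600)
```

VictoriaMetrics serves this endpoint on its default port 8428 in single‑node mode; the response is the standard Prometheus matrix JSON that the assistant can then summarize.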

These tools are intentionally lightweight and typed, ensuring that the AI can validate inputs before sending requests. The server also reads environment variables that point it at different VictoriaMetrics endpoints, making it flexible for both single‑node and cluster deployments.
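Wiring the server into Claude Desktop follows the usual MCP pattern of a `claude_desktop_config.json` entry. As a sketch only (the command name and the `VM_URL` variable below are assumptions for illustration, not taken from this page):

```json
{
  "mcpServers": {
    "victoriametrics": {
      "command": "victoriametrics-mcp-server",
      "env": {
        "VM_URL": "http://localhost:8428"
      }
    }
  }
}
```

The endpoint variable would point at a single‑node instance here; a cluster deployment would instead target its select and insert frontends.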

Typical use cases include:

  • Operational dashboards: An AI assistant can pull recent CPU usage or latency metrics and present them in natural language, answering questions like “What was the average response time last week?”
  • Automated incident triage: By querying metrics and correlating them with logs, the assistant can suggest root causes or remedial actions during outages.
  • Data‑driven reporting: Scheduled queries can feed metrics into quarterly performance reports or compliance audits, all orchestrated through MCP calls.
  • Continuous monitoring: The AI can periodically execute instant queries to watch for threshold breaches and trigger alerts or escalation workflows.
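For the continuous‑monitoring case, the threshold check reduces to inspecting an instant‑query result, which VictoriaMetrics returns in the standard Prometheus JSON vector format. A minimal sketch (the function and the sample payload are illustrative, assuming a per‑instance metric):

```python
def breached(resp: dict, threshold: float) -> list[str]:
    """Return the instances whose latest value exceeds `threshold`,
    given an instant-query response in Prometheus JSON vector format."""
    hits = []
    for series in resp.get("data", {}).get("result", []):
        _ts, value = series["value"]  # value arrives as a string
        if float(value) > threshold:
            hits.append(series["metric"].get("instance", "<unknown>"))
    return hits


# Hand-built sample response, shaped like /api/v1/query output
sample = {
    "status": "success",
    "data": {"resultType": "vector", "result": [
        {"metric": {"instance": "web-1"}, "value": [1700000000, "0.93"]},
        {"metric": {"instance": "web-2"}, "value": [1700000000, "0.41"]},
    ]},
}

# breached(sample, 0.5) -> ["web-1"]
```

An assistant running this check on a schedule can surface only the offending instances, which keeps alert messages short and actionable.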

Integration is straightforward: once the MCP server is running, any client that understands MCP can invoke these tools via standard tool calls. The server’s minimal footprint and clear API surface make it an ideal choice for developers who need reliable time‑series access within conversational AI pipelines, without the overhead of managing custom adapters or writing boilerplate code.