About
Provides a Model Context Protocol interface to VictoriaMetrics, enabling write, query, and label operations from Claude Desktop via Smithery. Ideal for real‑time monitoring and analytics.
Capabilities

The VictoriaMetrics MCP Server bridges the gap between AI assistants and high‑performance time‑series storage. It exposes a set of well‑defined tools that let an AI client like Claude read from and write to a VictoriaMetrics instance using the familiar Model Context Protocol (MCP). By doing so, developers can embed real‑time monitoring data directly into conversational workflows, enabling intelligent decision‑making and automated incident response without leaving the AI environment.
At its core, the server provides six primary tools, grouped into four capabilities:
- Data ingestion: two tools let the assistant push raw metric values or Prometheus exposition data into VictoriaMetrics, supporting both structured JSON payloads and the native Prometheus text format.
- Time‑range querying: returns historical series over arbitrary periods, enabling trend analysis or root‑cause investigations.
- Instant querying: retrieves the current value of a metric, useful for real‑time alerts or status checks.
- Metadata discovery: two tools expose the label namespace, letting the AI enumerate available dimensions or filter data by specific tags.
These tools are intentionally lightweight and typed, so the AI can validate inputs before sending requests. The server also reads environment variables that point it at different VictoriaMetrics endpoints, making it flexible for both single‑node and cluster deployments.
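To make the ingestion and range-query surface concrete, here is a minimal sketch of the payloads involved. It targets VictoriaMetrics' Prometheus-compatible HTTP API (the exposition text format accepted by the import endpoint, and the standard `query`/`start`/`end`/`step` parameters of a range query); the exact MCP tool names and argument schemas are defined by the server itself and may differ.

```python
from datetime import datetime, timedelta, timezone

def exposition_line(metric: str, labels: dict, value: float) -> str:
    """Format one sample in the Prometheus exposition text format,
    the shape a Prometheus-format ingestion tool would forward."""
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{metric}{{{label_str}}} {value}"

def query_range_params(expr: str, hours: int, step: str = "60s") -> dict:
    """Build the standard parameters of a Prometheus-style range query
    covering the last `hours` hours."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)
    return {
        "query": expr,
        "start": int(start.timestamp()),
        "end": int(end.timestamp()),
        "step": step,
    }

# One week of request-rate history, e.g. for "average response time last week".
payload = exposition_line("http_requests_total", {"job": "api", "method": "GET"}, 1027)
params = query_range_params("rate(http_requests_total[5m])", hours=168)
```

Because the tools are typed, an MCP client can construct and validate exactly these fields before the call ever reaches VictoriaMetrics.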
Typical use cases include:
- Operational dashboards: An AI assistant can pull recent CPU usage or latency metrics and present them in natural language, answering questions like “What was the average response time last week?”.
- Automated incident triage: By querying metrics and correlating them with logs, the assistant can suggest root causes or remedial actions during outages.
- Data‑driven reporting: Scheduled queries can feed metrics into quarterly performance reports or compliance audits, all orchestrated through MCP calls.
- Continuous monitoring: The AI can periodically execute instant queries to watch for threshold breaches and trigger alerts or escalation workflows.
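The continuous-monitoring case above reduces to parsing an instant-query result and comparing values against a threshold. The sketch below assumes the Prometheus-style instant-query JSON shape that VictoriaMetrics returns (a `data.result` vector of `metric` label maps with `[timestamp, "value"]` pairs); the sample response and the alerting logic are illustrative, not part of the server.

```python
def breached(instant_result: dict, threshold: float) -> list:
    """Return the label sets whose current value exceeds `threshold`,
    given the JSON body of a Prometheus-style instant query."""
    hits = []
    for series in instant_result["data"]["result"]:
        _, value = series["value"]  # [unix_timestamp, "value-as-string"]
        if float(value) > threshold:
            hits.append(series["metric"])
    return hits

# Illustrative instant-query response (shape only; values made up).
resp = {
    "status": "success",
    "data": {
        "resultType": "vector",
        "result": [
            {"metric": {"instance": "web-1"}, "value": [1700000000, "0.93"]},
            {"metric": {"instance": "web-2"}, "value": [1700000000, "0.41"]},
        ],
    },
}

print(breached(resp, 0.8))  # -> [{'instance': 'web-1'}]
```

An assistant running this check on a schedule can escalate only the label sets returned, keeping the alert payload small.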
Integration is straightforward: once the MCP server is running, any client that understands MCP can invoke these tools via standard tool calls. The server’s minimal footprint and clear API surface make it an ideal choice for developers who need reliable time‑series access within conversational AI pipelines, without the overhead of managing custom adapters or writing boilerplate code.
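At the wire level, such a standard tool call is a JSON-RPC 2.0 `tools/call` request as defined by the MCP specification. The sketch below shows that envelope; the tool name `vm_query` and its argument are hypothetical stand-ins — a real client would discover the actual names via the server's `tools/list` response.

```python
import json

def mcp_tool_call(call_id: int, tool: str, arguments: dict) -> str:
    """Serialize an MCP `tools/call` request (JSON-RPC 2.0), the message
    an MCP client sends to invoke one of the server's tools."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool name and argument; consult the server's tool listing.
msg = mcp_tool_call(1, "vm_query", {"query": "up"})
```

Any MCP-capable client can emit this message unchanged, which is why no custom adapter code is needed per deployment.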