MCPSERV.CLUB
isaacwasserman

Vega-Lite Data Visualization MCP Server

MCP Server

Visualize data with Vega‑Lite via LLM-powered tools

Stale (55) · 88 stars · 3 views · Updated 18 days ago

About

An MCP server that lets large language models store data tables and generate Vega‑Lite visualizations, returning either a full specification or a PNG image.

Capabilities

- Resources: access data sources
- Tools: execute functions
- Prompts: pre-built templates
- Sampling: AI model interactions

Vega‑Lite MCP Server Demo

The Vega‑Lite MCP server turns a conversational AI into an interactive data‑visualization assistant. Rather than requiring developers to hand‑craft Vega‑Lite JSON or embed plotting libraries in their applications, the server exposes a minimal set of tools that let an LLM store raw tabular data and later generate fully‑rendered charts. This bridges the gap between natural language analysis and visual exploration, enabling AI agents to present insights in a format that is both machine‑readable and human‑digestible.

At its core, the server offers two tools. The first accepts a table name and an array of row objects and persists the table for later use. This step decouples data ingestion from visualization, letting the LLM curate and summarize information before committing it. The second takes the name of a previously stored table together with a Vega‑Lite specification string. Depending on the requested output type, it returns either the enriched Vega‑Lite JSON (useful for further programmatic manipulation) or a base64‑encoded PNG image. The ability to switch between text and image output gives developers flexibility: they can embed charts directly in reports or feed the specification into another toolchain.
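The two-step interaction can be sketched as argument payloads. This is a minimal illustration, not the server's actual API: the parameter names (`name`, `data`, `data_name`, `vegalite_specification`) are assumptions for the sake of the example.

```python
import json

# Hypothetical payload for the data-storage tool: a table name plus
# an array of row objects to persist for later visualization.
save_args = {
    "name": "sales",
    "data": [
        {"month": "2024-01", "revenue": 120},
        {"month": "2024-02", "revenue": 150},
    ],
}

# Hypothetical payload for the visualization tool: the stored table's
# name and a Vega-Lite spec; the server merges the saved rows into the
# spec before rendering or returning it.
visualize_args = {
    "data_name": "sales",
    "vegalite_specification": json.dumps({
        "mark": "line",
        "encoding": {
            "x": {"field": "month", "type": "temporal"},
            "y": {"field": "revenue", "type": "quantitative"},
        },
    }),
}

spec = json.loads(visualize_args["vegalite_specification"])
print(spec["mark"])  # line
```

Keeping the spec free of inline data is what allows the same stored table to back several different charts.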

The server’s design is intentionally lightweight, making it easy to integrate into existing MCP‑enabled workflows. Developers can add the server to their Claude Desktop configuration with a single JSON entry, specifying whether output should be PNG images or text. Once registered, any MCP‑aware AI assistant can call the storage tool to stash a dataset and later invoke the visualization tool to produce charts on demand. This pattern suits use cases such as automated business dashboards, data‑driven chatbots that answer queries with visual evidence, and research assistants that generate exploratory plots from statistical outputs.
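A Claude Desktop entry might look like the sketch below. The command, module name, directory path, and `--output_type` flag are placeholders; consult the server's README for the exact invocation.

```json
{
  "mcpServers": {
    "vegalite": {
      "command": "uv",
      "args": [
        "--directory", "/path/to/mcp-server-vegalite",
        "run", "mcp_server_vegalite",
        "--output_type", "png"
      ]
    }
  }
}
```

Switching the output flag to a text mode would make the server return the enriched Vega‑Lite JSON instead of rendered images.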

Unique advantages stem from the server’s strict separation of concerns and its adherence to Vega‑Lite—a declarative, JSON‑based visualization grammar that is both expressive and lightweight. Because the LLM supplies only a Vega‑Lite spec, developers can rely on the server’s rendering engine to handle all intricacies of scaling, encoding, and layout. The PNG output eliminates the need for client‑side rendering libraries, simplifying deployment in environments where graphical toolkits are unavailable or undesirable.
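Because a Vega‑Lite spec is plain JSON, it can be manipulated programmatically before it ever reaches a renderer. The snippet below is a small illustration of that declarative property using an assumed minimal spec:

```python
import copy
import json

# A minimal Vega-Lite spec: a mark plus field encodings. Scales, axes,
# and layout are inferred by the rendering engine, not spelled out here.
base = {
    "mark": "line",
    "encoding": {
        "x": {"field": "month", "type": "temporal"},
        "y": {"field": "revenue", "type": "quantitative"},
    },
}

# Turning the line chart into a bar chart is a one-field edit; every
# other visual decision is re-derived automatically on render.
bar = copy.deepcopy(base)
bar["mark"] = "bar"

print(json.dumps(bar, sort_keys=True))
```

This is why handing an LLM only the spec is practical: small, local edits to the JSON translate into coherent, fully laid-out charts.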

In practice, a data analyst might ask an AI assistant to “show me the monthly sales trend for product X.” The LLM would first query a database, store the result with the data‑storage tool, and then craft a Vega‑Lite specification for a line chart. The assistant would call the visualization tool, receive a PNG, and embed it directly in an email or chat. This seamless pipeline, from natural language to visual artifact, illustrates how the Vega‑Lite MCP server lets developers build richer, more interactive AI experiences without wrestling with visualization code.
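The final step of that pipeline, consuming the PNG, is straightforward if the image arrives base64‑encoded. The response shape below is an assumption for illustration; only the base64 handling is fixed by the format itself:

```python
import base64

# Placeholder bytes standing in for a rendered chart; a real response
# would carry a full PNG produced by the server's rendering engine.
fake_png_bytes = b"\x89PNG\r\n\x1a\n..."

# Hypothetical response shape: an image payload encoded as base64 text.
response = {"type": "image", "data": base64.b64encode(fake_png_bytes).decode("ascii")}

# Decode and verify the PNG magic number before writing to disk.
png = base64.b64decode(response["data"])
assert png.startswith(b"\x89PNG")
with open("sales_trend.png", "wb") as f:
    f.write(png)
```

From here the file can be attached to an email, dropped into a chat message, or embedded in a generated report.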