Vibe-Eyes

MCP Server by monteslu

LLM-powered visual debugging for browser games

About

Vibe-Eyes captures canvas frames and console/debug data from browser-based games, vectorizes images into SVG, and exposes the information via MCP so LLMs can view and debug in real time.

Capabilities

- Resources: access data sources
- Tools: execute functions
- Prompts: pre-built templates
- Sampling: AI model interactions

Vibe‑Eyes is an experimental Model Context Protocol (MCP) server that gives large language models the ability to see and understand what is happening inside browser‑based games and applications. By capturing the visual state of HTML5 canvas elements and pairing that with real‑time debug data, Vibe‑Eyes bridges the gap between a developer’s code and the AI assistant that is meant to help debug or iterate on that code. This solves a common pain point: developers often need an AI to reason about visual bugs, performance hiccups, or gameplay logic, but LLMs lack any notion of the rendered scene. Vibe‑Eyes provides that missing visual context in a compact, machine‑friendly format.

At its core, the server follows a lightweight client–server architecture. A browser client runs inside the target page, periodically snapshots any canvas elements, intercepts console logs, errors, and unhandled exceptions, and streams this data over a WebSocket connection to the Node.js MCP server. The server vectorizes each snapshot into an SVG representation that preserves shape, color, and layout while drastically reducing file size. Debug logs and stack traces are stored alongside the SVGs, so a single retrieval can answer both "What's on the screen right now?" and "Why did this error occur?"
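To make the flow concrete, here is a minimal sketch of such a capture client built only from standard browser APIs. The WebSocket endpoint, message shape, and one-second interval are assumptions for illustration, not Vibe‑Eyes' actual client API:

```javascript
// Illustrative capture loop built on standard browser APIs.
// The endpoint URL and message shape are assumptions, not Vibe-Eyes' wire protocol.
const ws = new WebSocket("ws://localhost:8099"); // hypothetical server address

const logs = [];
const originalLog = console.log;
console.log = (...args) => {
  logs.push(args.map(String).join(" ")); // mirror console output into a buffer
  originalLog.apply(console, args);      // still print to the real console
};

window.addEventListener("error", (event) => {
  // Capture unhandled exceptions together with their stack traces
  logs.push(`${event.message}\n${event.error?.stack ?? ""}`);
});

setInterval(() => {
  if (ws.readyState !== WebSocket.OPEN) return;
  for (const canvas of document.querySelectorAll("canvas")) {
    ws.send(JSON.stringify({
      image: canvas.toDataURL("image/png"), // raster snapshot; the server vectorizes it to SVG
      logs: logs.splice(0),                 // drain buffered logs/errors with each frame
    }));
  }
}, 1000); // snapshot roughly once per second
```

Keeping the client this thin is deliberate: the browser only ships raster data and text, while the heavier SVG tracing work stays on the Node.js side.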

For developers, Vibe‑Eyes adds several valuable capabilities to an AI workflow. First, the getGameDebug() MCP tool exposes a single call that returns both the latest SVG and any collected debug information, enabling seamless debugging conversations. Second, because vectorized images are lightweight, the server can stream updates in near real‑time without overwhelming bandwidth or storage. Third, by capturing unhandled exceptions with full stack traces, the tool turns otherwise opaque error logs into actionable insights that an LLM can interpret and suggest fixes for.
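As a sketch of how such a tool could be exposed, the following uses the official MCP JavaScript SDK (@modelcontextprotocol/sdk). The in-memory `latest` store and the response layout are placeholders, not Vibe‑Eyes' actual implementation:

```javascript
// Sketch of a getGameDebug-style tool using the official MCP JavaScript SDK.
// The in-memory `latest` store is a placeholder for the real capture pipeline.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const latest = { svg: "<svg/>", logs: [] }; // would be updated by the WebSocket receiver

const server = new McpServer({ name: "vibe-eyes", version: "0.0.1" });

server.tool("getGameDebug", async () => ({
  content: [
    { type: "text", text: latest.svg },             // vectorized view of the canvas
    { type: "text", text: latest.logs.join("\n") }, // collected console/error output
  ],
}));

await server.connect(new StdioServerTransport()); // serve the tool over stdio
```

Bundling the image and the logs into one response means an MCP client gets the full visual-plus-textual picture from a single tool call.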

Typical use cases include interactive game development, where a developer can ask the AI to "explain why this sprite is stuck" or "suggest optimizations for frame-rate drops" and the assistant will ground its response in the visual snapshot. In web application testing, Vibe‑Eyes helps QA engineers identify rendering glitches and layout bugs by giving the AI visual context for the failing page. Even in educational settings, students learning game programming can receive instant visual feedback from an AI tutor, making the debugging process more intuitive.

What sets Vibe‑Eyes apart is its seamless integration with existing MCP workflows and its focus on vectorized visual data rather than raw pixel dumps. This design choice preserves the essential structure of a scene while keeping payloads small, making it practical for real‑time interaction. Moreover, because console output and unhandled exceptions are captured alongside the visuals, developers no longer need separate logging tools; the AI has everything it needs in one place. For any team that relies on LLMs to accelerate development, Vibe‑Eyes turns the abstract notion of "seeing" into a concrete, programmable capability.