ShaderToy MCP Server
by wilsonchenghy

LLM-powered shader creation and exploration

About

The ShaderToy MCP Server connects large language models with the Shadertoy platform via Model Context Protocol, enabling LLMs to retrieve shader data, search for existing shaders, and generate complex GLSL code based on existing examples.

Capabilities

Resources: access data sources
Tools: execute functions
Prompts: pre-built templates
Sampling: AI model interactions

[Image: Complex shader generated by MCP]

The ShaderToy-MCP server bridges the gap between large language models (LLMs) such as Claude and the ShaderToy ecosystem, a popular platform for creating, running, and sharing GLSL shaders. By exposing ShaderToy’s content through the Model Context Protocol (MCP), it enables an AI assistant to query, retrieve, and understand entire shader pages. This capability is crucial because LLMs typically lack direct access to web resources, limiting their ability to generate or refine complex shader code that builds upon existing examples.

At its core, the server offers two primary tools: one for retrieving a specific shader and one for searching the ShaderToy library. The former fetches metadata, code snippets, and visual previews for any ShaderToy shader URL, allowing the assistant to analyze structure, performance metrics, and author contributions. The latter performs keyword-based searches across ShaderToy's vast library, returning relevant shaders that match user intent. By combining these tools, an AI can discover inspiration, borrow patterns, and even adapt entire shader architectures to new creative briefs.
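To make that tool surface concrete, here is a minimal sketch of how such a pair of tools could be exposed with the official MCP Python SDK's FastMCP helper. The tool names (get_shader, search_shaders), the use of Shadertoy's public REST API, and the API-key handling are illustrative assumptions, not the actual implementation of this server, which may retrieve and parse shader pages differently.

```python
# Illustrative sketch only -- tool names, endpoints, and parameters are assumptions,
# not the actual ShaderToy-MCP implementation.
import os

import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("shadertoy")  # hypothetical server name

API_KEY = os.environ.get("SHADERTOY_API_KEY", "")
API_BASE = "https://www.shadertoy.com/api/v1"


@mcp.tool()
def get_shader(shader_id: str) -> dict:
    """Fetch metadata and GLSL source for a single Shadertoy shader by its ID."""
    resp = requests.get(f"{API_BASE}/shaders/{shader_id}",
                        params={"key": API_KEY}, timeout=30)
    resp.raise_for_status()
    return resp.json()  # shader info plus renderpass code


@mcp.tool()
def search_shaders(query: str) -> dict:
    """Search Shadertoy's library by keyword and return matching shaders."""
    resp = requests.get(f"{API_BASE}/shaders/query/{query}",
                        params={"key": API_KEY}, timeout=30)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    mcp.run()  # serves the tools over stdio for an MCP client such as Claude Desktop
```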

For developers integrating AI into graphics pipelines, this MCP server unlocks several powerful use cases. Designers can ask the assistant to “generate a realistic ocean shader” and receive code that not only compiles on ShaderToy but also includes attribution to the original author. Game studios can quickly prototype visual effects by searching for a “mountain terrain” shader and tweaking parameters through iterative LLM prompts. Educational platforms can leverage the server to provide students with step‑by‑step shader generation tutorials that reference real-world examples.

The integration workflow is straightforward: once the MCP server is running, Claude (or any other MCP‑compatible client) can invoke the tools directly from its chat interface. The assistant retrieves shader data, processes it with natural language understanding, and outputs new GLSL code that conforms to ShaderToy’s required function signature. This seamless interaction eliminates manual copy‑paste steps, reduces debugging time, and ensures that generated shaders are immediately runnable on ShaderToy’s live preview.
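As an illustration of that workflow, the sketch below shows how an MCP-compatible client could launch the server over stdio, list its tools, and invoke a search, much as Claude does behind the scenes; the server command, file name, and tool name are assumptions made for the example. The generated GLSL ultimately has to fit ShaderToy's entry point, void mainImage(out vec4 fragColor, in vec2 fragCoord), to run in the live preview.

```python
# Illustrative client-side sketch -- server command, file name, and tool name are assumptions.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the (hypothetical) server script as a stdio subprocess.
    params = StdioServerParameters(command="python", args=["shadertoy_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover what the server offers.
            tools = await session.list_tools()
            print("Available tools:", [tool.name for tool in tools.tools])

            # Ask for shaders matching a keyword, as an assistant would when
            # prompted to "generate a realistic ocean shader".
            result = await session.call_tool("search_shaders", {"query": "ocean"})
            for item in result.content:
                print(item)


asyncio.run(main())
```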

What sets ShaderToy-MCP apart is its ability to “learn from existing shaders.” Because the server exposes full shader code, an LLM can analyze patterns—such as noise functions for water or fractal techniques for mountains—and compose entirely new shaders that inherit those stylistic elements. This level of contextual awareness is rare among MCP servers, which often provide only metadata or limited APIs. By enabling deep code introspection and search‑driven synthesis, ShaderToy-MCP empowers developers to push creative boundaries while maintaining a tight feedback loop between AI generation and real‑world shader execution.