About
The ShaderToy MCP Server connects large language models with the ShaderToy platform via the Model Context Protocol, enabling LLMs to retrieve shader data, search for existing shaders, and generate complex GLSL code based on existing examples.
Capabilities
The ShaderToy-MCP server bridges the gap between large language models (LLMs) such as Claude and the ShaderToy ecosystem, a popular platform for creating, running, and sharing GLSL shaders. By exposing ShaderToy’s content through the Model Context Protocol (MCP), it enables an AI assistant to query, retrieve, and understand entire shader pages. This capability is crucial because LLMs typically lack direct access to web resources, limiting their ability to generate or refine complex shader code that builds upon existing examples.
At its core, the server offers two primary tools: one that retrieves a shader page and one that searches the ShaderToy library. The former fetches metadata, code, and visual previews for any ShaderToy shader URL, allowing the assistant to analyze structure, performance characteristics, and author attribution. The latter performs keyword-based searches across ShaderToy’s vast library, returning relevant shaders that match the user’s intent. By combining these tools, an AI can discover inspiration, borrow patterns, and even adapt entire shader architectures to new creative briefs.
For developers integrating AI into graphics pipelines, this MCP server unlocks several powerful use cases. Designers can ask the assistant to “generate a realistic ocean shader” and receive code that not only compiles on ShaderToy but also includes attribution to the original author. Game studios can quickly prototype visual effects by searching for a “mountain terrain” shader and tweaking parameters through iterative LLM prompts. Educational platforms can leverage the server to provide students with step‑by‑step shader generation tutorials that reference real-world examples.
The integration workflow is straightforward: once the MCP server is running, Claude (or any other MCP‑compatible client) can invoke the tools directly from its chat interface. The assistant retrieves shader data, processes it with natural language understanding, and outputs new GLSL code that conforms to ShaderToy’s required function signature. This seamless interaction eliminates manual copy‑paste steps, reduces debugging time, and ensures that generated shaders are immediately runnable on ShaderToy’s live preview.
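For reference, ShaderToy’s required entry point is the mainImage function, which the platform calls once per pixel, with built-in uniforms such as iResolution and iTime available. The minimal gradient below is an illustrative sketch of a shader that satisfies that signature, not output produced by the server.

```glsl
// Minimal ShaderToy-compatible shader: the platform invokes mainImage once
// per pixel; iResolution and iTime are built-in uniforms supplied by ShaderToy.
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv = fragCoord / iResolution.xy;            // normalized pixel coordinates
    vec3 col = vec3(uv, 0.5 + 0.5 * sin(iTime));     // simple animated gradient
    fragColor = vec4(col, 1.0);                      // final pixel color
}
```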
What sets ShaderToy-MCP apart is its ability to “learn from existing shaders.” Because the server exposes full shader code, an LLM can analyze patterns—such as noise functions for water or fractal techniques for mountains—and compose entirely new shaders that inherit those stylistic elements. This level of contextual awareness is rare among MCP servers, which often provide only metadata or limited APIs. By enabling deep code introspection and search‑driven synthesis, ShaderToy-MCP empowers developers to push creative boundaries while maintaining a tight feedback loop between AI generation and real‑world shader execution.
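To make “patterns” concrete, the sketch below shows a hash-based 2D value-noise function of the kind that underlies many ShaderToy water and terrain shaders. The hash constants are a widely circulated convention rather than code taken from any particular shader on the platform.

```glsl
// A common hash-based 2D value-noise building block, of the kind an LLM can
// recognize and reuse when composing water or terrain shaders. The constants
// in hash() are a widely used convention, shown here for illustration only.
float hash(vec2 p)
{
    return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
}

float valueNoise(vec2 p)
{
    vec2 i = floor(p);
    vec2 f = fract(p);
    vec2 u = f * f * (3.0 - 2.0 * f);                // smooth interpolation curve
    return mix(mix(hash(i),                  hash(i + vec2(1.0, 0.0)), u.x),
               mix(hash(i + vec2(0.0, 1.0)), hash(i + vec2(1.0, 1.0)), u.x),
               u.y);
}
```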
Related Servers
MarkItDown MCP Server
Convert documents to Markdown for LLMs quickly and accurately
Context7 MCP
Real‑time, version‑specific code docs for LLMs
Playwright MCP
Browser automation via structured accessibility trees
BlenderMCP
Claude AI meets Blender for instant 3D creation
Pydantic AI
Build GenAI agents with Pydantic validation and observability
Chrome DevTools MCP
AI-powered Chrome automation and debugging
Explore More Servers
Poem MCP
Ancient Chinese poetry knowledge server
PowerShell MCP Server
Execute PowerShell commands from Claude with ease
Demo MCP Basic Server
Enabling AI models with custom calculation tools
Inbox MCP
LLM‑powered email assistant for instant inbox management
Whisper King MCP Server
A lightweight MCP server for whispering data
Dameng MCP Server
MCP service for Dameng 8 databases