About
Blender Open MCP connects local AI models (Ollama) to Blender using the Model Context Protocol, enabling users to issue natural language commands for 3D modeling tasks, asset retrieval, and rendering.
Capabilities
Blender Open MCP – AI‑Driven 3D Modeling Server
Blender Open MCP turns a locally hosted language model into a natural‑language interface for Blender. By exposing Blender’s full API through the Model Context Protocol, it allows developers to write concise prompts that translate into complex 3D operations—creating geometry, applying materials, querying scene data, or rendering images—all without touching Python code. This eliminates the need for manual scripting and lets designers iterate quickly with conversational commands.
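As a rough illustration of the kind of translation the server performs, a prompt like "Create a cube named 'my_cube'" might be rendered into a small Blender Python fragment. The intent schema below is an assumption for the sketch; the generated operators (`bpy.ops.mesh.primitive_cube_add`) are real Blender API calls.

```python
# Toy sketch: map a parsed natural-language intent to a bpy script fragment.
# The intent dictionary shape is hypothetical, not the server's actual schema.

def intent_to_blender_code(intent: dict) -> str:
    """Render a parsed intent as a Blender Python snippet."""
    if intent["action"] == "create_cube":
        name = intent.get("name", "Cube")
        return (
            "import bpy\n"
            "bpy.ops.mesh.primitive_cube_add()\n"
            f"bpy.context.active_object.name = {name!r}\n"
        )
    raise ValueError(f"unsupported action: {intent['action']}")

snippet = intent_to_blender_code({"action": "create_cube", "name": "my_cube"})
print(snippet)
```

The generated snippet would then be executed inside Blender's embedded Python interpreter, which is what lets users avoid writing such code by hand.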
The server runs alongside Ollama, the lightweight local model host, so no external cloud calls are required. Once started, it listens on a configurable HTTP endpoint and accepts MCP requests from any client that understands the protocol (for example, Claude or other tool‑calling AI clients). A dedicated Blender add‑on provides a convenient UI panel, letting users launch the server directly from the 3D viewport and send prompts via a simple text field. The add‑on handles authentication, request formatting, and streaming of responses back into Blender's UI.
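A minimal client sketch of what such a request looks like on the wire. MCP frames tool invocations as JSON-RPC 2.0 `tools/call` messages; the endpoint URL and the tool name `create_object` here are assumptions for illustration, so check the server's own documentation for the actual values.

```python
# Build and (optionally) send an MCP tools/call request over HTTP.
# Endpoint URL and tool name are hypothetical placeholders.
import json
import urllib.request

def build_tool_call(tool: str, arguments: dict, request_id: int = 1) -> bytes:
    """Encode an MCP tools/call request as a JSON-RPC 2.0 payload."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }).encode("utf-8")

body = build_tool_call("create_object", {"type": "CUBE", "name": "my_cube"})
req = urllib.request.Request(
    "http://127.0.0.1:8000/mcp",   # hypothetical default endpoint
    data=body,
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would send the call once the server is running.
```

Because the payload is plain JSON-RPC, any MCP-aware client can produce it; the Blender add-on simply automates this formatting step.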
Key capabilities include:
- Natural‑language control – Translate human intent (“Create a cube named ‘my_cube’”) into Blender operators and data‑structure changes.
- Scene interrogation – Built‑in tools return structured JSON describing objects, layers, and camera settings, so the model can reason about the current scene before acting.
- Asset integration – Optional PolyHaven support lets the model fetch HDRIs, textures, or 3D models on demand, streamlining asset pipelines.
- Rendering – The server can trigger Blender’s rendering engine and return image metadata or encoded data, enabling end‑to‑end AI workflows that generate visual outputs.
- Extensibility – Because it adheres to MCP, additional tools can be added with minimal effort, and other AI assistants can discover and invoke them automatically.
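The extensibility point can be sketched with a tiny decorator-based tool registry, in the spirit of MCP tool registration. This is a stand-in pattern to show the shape of the idea, not the real MCP SDK API; the tool name and scene schema are hypothetical.

```python
# Illustrative tool-registry pattern (not the actual MCP SDK interface).
from typing import Callable, Dict

TOOLS: Dict[str, Callable] = {}

def tool(name: str):
    """Register a function as a named, discoverable tool."""
    def register(fn: Callable) -> Callable:
        TOOLS[name] = fn
        return fn
    return register

@tool("get_object_count")
def get_object_count(scene: dict) -> int:
    """A hypothetical tool: count objects in a scene description."""
    return len(scene.get("objects", []))

result = TOOLS["get_object_count"]({"objects": ["Cube", "Camera", "Light"]})
print(result)  # 3
```

Because registered tools are enumerable by name, an MCP-aware assistant can list them and invoke them automatically, which is what makes adding new capabilities low-effort.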
In practice, a designer could start Blender, open the MCP panel, type “Add a sphere with a reflective material and place it 2 meters above the origin,” and watch the scene update instantly. A game developer could ask for “Generate a low‑poly terrain with an HDRI from PolyHaven” and receive a ready‑to‑export level. A production pipeline might use MCP to generate shot‑ready renders from text prompts, integrating the output into a rendering farm or post‑production toolchain.
By coupling local LLM inference with Blender’s powerful graphics engine, Blender Open MCP provides a frictionless bridge between creative intent and technical execution. It empowers developers to build AI‑assisted workflows that are both expressive and reproducible, while keeping all data and computation on the user’s machine for speed and privacy.
Related Servers
MarkItDown MCP Server
Convert documents to Markdown for LLMs quickly and accurately
Context7 MCP
Real‑time, version‑specific code docs for LLMs
Playwright MCP
Browser automation via structured accessibility trees
BlenderMCP
Claude AI meets Blender for instant 3D creation
Pydantic AI
Build GenAI agents with Pydantic validation and observability
Chrome DevTools MCP
AI-powered Chrome automation and debugging
Explore More Servers
Writers Muse MCP Server
Analyze style, generate blog content effortlessly
Patronus MCP Server
LLM Optimization & Evaluation Hub
MCP Workers AI
AI-powered Cloudflare Workers MCP integration
D4Rkm1 MCP Server
Simple, lightweight Model Context Protocol server
JLCPCB Parts MCP Server
Find JLCPCB-compatible components quickly
K8s MCP Server
Run Kubernetes CLI inside Claude via Docker