lumalabs

Luma API MCP

MCP Server

AI image and video generation powered by Luma Labs


About

Luma API MCP provides an easy-to-use interface for generating images and videos using Luma Labs’ models. It supports prompt-based creation, aspect ratios, style references, and video parameters like resolution, duration, and looping.

Capabilities

  • Resources – access data sources
  • Tools – execute functions
  • Prompts – pre-built templates
  • Sampling – AI model interactions

Luma API MCP – AI‑Driven Image & Video Generation Server

The Luma API MCP server bridges the gap between conversational AI assistants and advanced generative media services. By exposing a set of well‑defined tools for image and video creation, it lets developers enrich Claude or other MCP clients with on‑demand visual content without leaving the chat flow. This solves a common pain point: the need to call external APIs, handle authentication, and manage media assets manually when building AI‑powered applications. With a single MCP call, an assistant can generate custom graphics or short clips that align with user intent, dramatically improving engagement and interactivity.

At its core, the server offers two primary capabilities: Create Image and Create Video. The image tool accepts a descriptive prompt, optional aspect ratio, model selection (e.g., photon‑1 or photon‑flash‑1), and a rich set of reference images. Users can influence style, composition, or character appearance by supplying weighted image URLs—up to eight for general references and one each for style, character, or modification. This flexibility allows fine‑grained control over the visual output while keeping the interface simple for the assistant.
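The exact tool and field names depend on the server’s published schema, which isn’t reproduced here. As a rough sketch, assuming a create_image tool whose arguments mirror the description above (prompt, aspect_ratio, model, weighted references), a call payload might look like this:

```typescript
// Hypothetical arguments for the image tool. Field names are assumptions
// inferred from the prose above, not the server's actual schema.
const imageArgs = {
  prompt: "A watercolor lighthouse at dusk, soft warm palette",
  aspect_ratio: "16:9",          // optional
  model: "photon-1",             // or "photon-flash-1" for faster drafts
  image_refs: [                  // up to eight weighted general references
    { url: "https://example.com/ref1.jpg", weight: 0.6 },
  ],
  style_ref: { url: "https://example.com/style.jpg", weight: 0.8 }, // one style reference
};
```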

The video tool builds on the same concept but adds temporal dimensions. Clients specify a prompt, resolution, duration (5 s or 9 s), and optional loop behavior. Keyframe support—through start or end frame image URLs, or the generation IDs of prior results—lets developers dictate the first and last frames, enabling seamless transitions or animated storytelling. The server’s models (ray‑2, ray‑flash‑2, ray‑1‑6) balance speed and quality, with typical generation times ranging from 5–15 seconds for images to 15–60 seconds for videos, depending on resolution and length.
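A comparable sketch for the video tool, again with assumed field names; the nested keyframe shape below follows Luma’s REST API convention (frame0/frame1, each an image URL or a generation ID) and may differ in this server’s schema:

```typescript
// Hypothetical arguments for the video tool, inferred from the description.
const videoArgs = {
  prompt: "The lighthouse beam sweeps across a stormy sea",
  model: "ray-2",                // ray-flash-2 trades quality for speed
  resolution: "720p",
  duration: "5s",                // 5 s or 9 s per the description
  loop: true,
  keyframes: {
    frame0: { type: "image", url: "https://example.com/start.jpg" },
    frame1: { type: "generation", id: "<generation-id>" }, // reuse a prior result as the end frame
  },
};
```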

Typical use cases include:

  • Chatbot‑powered design assistants that generate mockups or marketing visuals on demand.
  • Interactive storytelling apps where characters and scenes evolve in real time based on user choices.
  • Social media content creation that automates short video clips or image assets for campaigns.
  • Educational tools that illustrate concepts with custom diagrams or animated explanations.

Integration is straightforward for MCP‑aware developers: a single tool invocation within the conversation triggers the Luma API, and the assistant receives a media URL or binary payload ready for display. The server’s support for multiple aspect ratios, resolution options, and reference weighting gives developers granular control while keeping the client logic minimal. Unique advantages include the ability to blend multiple visual references in a single prompt, fine‑tuned control over video keyframes, and a lightweight interface that fits naturally into existing AI workflows.
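For illustration, a minimal client-side invocation using the official MCP TypeScript SDK might look like the following. The launch command, package name, tool name, and argument names are all assumptions for the sketch; substitute the values from the server’s documentation:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the Luma MCP server as a child process over stdio.
// "luma-api-mcp" is a hypothetical package name; the server presumably
// reads its Luma credentials from an environment variable like LUMA_API_KEY.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "luma-api-mcp"],
  env: { LUMA_API_KEY: process.env.LUMA_API_KEY ?? "" },
});

const client = new Client({ name: "demo-client", version: "1.0.0" });
await client.connect(transport);

// Invoke the image tool; the result's content array typically carries
// a media URL or binary payload ready for display.
const result = await client.callTool({
  name: "create_image",
  arguments: { prompt: "A minimalist logo of a sun over waves", model: "photon-flash-1" },
});

console.log(result.content);
await client.close();
```

From the assistant’s perspective this entire exchange is a single tool call, which is what keeps the client logic minimal.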