
Luma AI MCP Server

AI video and image creation powered by Luma Dream Machine

Updated Apr 3, 2025

About

A Model Context Protocol server that interfaces with Luma AI's Dream Machine API to generate, manage, and manipulate AI‑generated videos and images. Features include text‑to‑video, keyframe editing, audio addition, upscaling, and credit tracking.

Capabilities

  • Resources — access data sources
  • Tools — execute functions
  • Prompts — pre‑built templates
  • Sampling — AI model interactions

Overview of the Bobtista Luma AI MCP Server

The Bobtista Luma AI MCP Server bridges the gap between conversational AI assistants and Luma AI’s Dream Machine, a powerful video‑generation platform. By exposing the Dream Machine API through the Model Context Protocol (MCP), the server lets language models request complex media creation tasks—such as text‑to‑video, video upscaling, or audio synthesis—directly from within a dialogue. This eliminates the need for developers to write custom HTTP clients or manage authentication flows, allowing AI assistants to treat media generation as a first‑class tool in the same way they handle data retrieval or calculation.

At its core, the server implements a rich set of tools that mirror the Dream Machine’s capabilities. Developers can launch new video generations with precise control over resolution, duration, aspect ratio, and keyframes, or convert images into short clips. Once a generation is queued, the server provides status checks and completion callbacks, enabling assistants to keep users informed or trigger downstream actions (e.g., storing the final clip in a cloud bucket). Advanced operations such as upscaling or adding AI‑generated audio are also available, ensuring that the media output can be refined post‑creation without leaving the MCP ecosystem.

The value proposition for developers lies in seamless integration and low‑overhead orchestration. By exposing Luma’s features as MCP tools, the server allows AI assistants to compose sophisticated media workflows—like generating a promotional video from a text brief, automatically upscaling it for high‑resolution displays, and adding background music—all within a single conversational session. This reduces boilerplate code, centralizes error handling, and makes it easier to audit usage through the server’s credit‑management endpoints.

Key features include:

  • Text‑to‑video and image‑to‑video generation with customizable keyframes.
  • Video extension/interpolation to create longer clips from short seeds.
  • Image generation with reference and style images, enabling consistent visual themes.
  • Audio addition that can be triggered post‑generation via callbacks.
  • Upscaling and interpolation, supporting resolutions up to 4K.
  • Generation tracking, listing, and deletion for robust lifecycle management.
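The last bullet, lifecycle management, reduces to list-and-delete semantics over tracked generations. The sketch below models that with an in-memory table; `list_generations` and `delete_generation` are assumed tool names, not confirmed by this listing.

```python
# In-memory stand-in for the server's generation registry.
generations = {
    "gen-001": {"state": "completed"},
    "gen-002": {"state": "dreaming"},
}

def list_generations() -> list[str]:
    """Return all tracked generation ids, sorted for stable output."""
    return sorted(generations)

def delete_generation(generation_id: str) -> bool:
    """Remove a generation; True if it existed."""
    return generations.pop(generation_id, None) is not None

print(list_generations())   # ['gen-001', 'gen-002']
delete_generation("gen-001")
print(list_generations())   # ['gen-002']
```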

Typical use cases span marketing automation (auto‑creating social media clips), content creation pipelines (integrating with editors or CMSs), and educational tools (generating instructional videos from prompts). In each scenario, the MCP server acts as a unified gateway that lets AI assistants orchestrate media production without exposing underlying API intricacies to end users.