About
The Met Museum MCP Server exposes the Metropolitan Museum of Art’s public collection to AI models, allowing users to list departments, search for objects, and retrieve detailed object data—including images—through a simple, natural‑language interface.
Capabilities
The Met Museum MCP Server bridges the vast digital collection of The Metropolitan Museum of Art with AI assistants that support the Model Context Protocol. By exposing a set of well‑defined tools, it lets conversational agents query, retrieve, and display artworks in natural language. This capability turns a static online catalog into an interactive knowledge source that can be leveraged for education, creative projects, or research workflows.
At its core, the server offers three primary tools: list-departments, search-museum-objects, and get-museum-object. The first tool enumerates every department within the museum, providing developers with a ready reference for filtering searches. The second allows users to issue keyword queries—optionally constrained by department, image availability, or title matching—and returns concise summaries of matching objects. The third fetches a single object by its unique ID, delivering rich metadata (artist, medium, dimensions) and optionally embedding a base64-encoded image directly into the server's resource pool. This last feature is particularly valuable because it lets AI agents present visual content inline, enabling richer, multimodal interactions without external image hosting.
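As a rough illustration of how these tools fit together, the sketch below calls each of them from a TypeScript client built on the MCP SDK. The npx package name and the parameter names (q, hasImages, objectId, returnImage) are assumptions inferred from The Met's open-access API rather than taken from the server's published schema, so treat them as placeholders.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Launch the Met Museum MCP server over stdio. The package name passed to
  // npx is an assumption; use the install command from the server's README.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["-y", "metmuseum-mcp"],
  });

  const client = new Client({ name: "met-demo-client", version: "1.0.0" });
  await client.connect(transport);

  // 1. Enumerate departments to get IDs usable as search filters.
  const departments = await client.callTool({
    name: "list-departments",
    arguments: {},
  });
  console.log(departments.content);

  // 2. Keyword search restricted to objects that have images.
  //    Parameter names (q, hasImages) are assumed here.
  const results = await client.callTool({
    name: "search-museum-objects",
    arguments: { q: "sunflowers", hasImages: true },
  });
  console.log(results.content);

  // 3. Fetch full metadata for a single object by ID, asking the server to
  //    register its image as a resource (objectId/returnImage are assumed,
  //    and the ID below is only an example value).
  const object = await client.callTool({
    name: "get-museum-object",
    arguments: { objectId: 436524, returnImage: true },
  });
  console.log(object.content);

  await client.close();
}

main().catch(console.error);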
For developers building AI‑powered experiences, these tools translate into a seamless workflow. A user can ask an assistant to “show me paintings from the Asian Art department,” triggering a search-museum-objects call that returns relevant IDs. The assistant then calls get-museum-object for each ID, automatically receiving image resources that can be rendered in the chat interface. Because all data is sourced from The Met’s open‑access API, developers can trust the provenance and quality of the content while keeping their own infrastructure lightweight.
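Building on the same connected client, the helper below sketches the second half of that workflow: reading the image that get-museum-object placed in the server's resource pool and decoding its base64 payload for inline rendering. The assumption that the image surfaces as a blob resource labeled with an image/* MIME type follows the description above rather than the server's documentation.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Reads back the most recently registered image resource and decodes it.
// Matching on an image/* MIME type is an assumption about how the server
// labels the entries it adds to its resource pool.
async function fetchLatestImage(client: Client): Promise<Buffer | undefined> {
  const { resources } = await client.listResources();
  const image = resources.find((r) => r.mimeType?.startsWith("image/"));
  if (!image) return undefined;

  const { contents } = await client.readResource({ uri: image.uri });
  for (const item of contents) {
    if ("blob" in item) {
      // Blob resources carry base64-encoded data, ready to render inline
      // or write to disk for the chat interface to display.
      return Buffer.from(item.blob, "base64");
    }
  }
  return undefined;
}

An assistant following the workflow above would call a helper like this right after get-museum-object, then attach the returned bytes to its reply.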
Real‑world use cases span museums, educational platforms, and creative tools. An online history course could embed the server to let students browse primary sources on demand; a design app might surface inspiration from van Gogh or Picasso directly within the interface; a virtual exhibition could programmatically curate collections based on user preferences. The MCP integration ensures that these scenarios remain consistent across any AI assistant that supports the protocol, from Claude Desktop to LibreChat.
What sets this server apart is its focus on open‑access art data combined with an intuitive API surface. By providing images as resources and supporting fine‑grained search parameters, it enables developers to craft highly interactive, data‑rich conversations without needing to manage large media files or complex authentication flows. The result is a plug‑and‑play component that unlocks the cultural wealth of The Met for anyone building AI applications.
Related Servers
MarkItDown MCP Server
Convert documents to Markdown for LLMs quickly and accurately
Context7 MCP
Real‑time, version‑specific code docs for LLMs
Playwright MCP
Browser automation via structured accessibility trees
BlenderMCP
Claude AI meets Blender for instant 3D creation
Pydantic AI
Build GenAI agents with Pydantic validation and observability
Chrome DevTools MCP
AI-powered Chrome automation and debugging
Explore More Servers
MCP Directory
Central hub for open‑source MCP servers
Cortex MCP Server
Context‑aware Cortex API access via natural language queries
Simple MCP Server for Local Sentiment Analysis
Local AI-driven news analysis and email alerts
Mcp Koii
Control Teenage Engineering EP-133 via text commands and MIDI
PubTator MCP Server
Biomedical literature annotation via MCP
PicGo Uploader MCP Server
Upload images via PicGo with MCP integration