About
A server that exposes Qdrant vector operations via both FastAPI and FastMCP, enabling node upsert, semantic search, and deletion with optional OpenAI embeddings.
Overview
The Qdrant MCP Server is a dual‑protocol gateway that exposes a vector‑based knowledge graph stored in Qdrant to AI assistants through both RESTful FastAPI and Model Context Protocol (MCP) interfaces. By abstracting the underlying Qdrant operations, it removes the need for developers to embed complex vector store logic directly into their applications or AI workflows. Instead, they can simply call high‑level endpoints to upsert nodes, perform semantic searches, or clean up the graph, while the server handles embedding generation with OpenAI, collection management, and authentication.
This solution addresses a common pain point in AI‑powered systems: the friction of connecting large language models to vector databases. Developers often struggle with token limits, embedding consistency, and secure access controls. The Qdrant MCP Server solves these issues by providing a single, well‑documented API surface that guarantees consistent behavior across both HTTP and MCP clients. It also centralizes configuration—such as Qdrant URLs, API keys, and collection names—through environment variables validated by Pydantic, ensuring that deployments are reproducible and secure.
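The environment-driven configuration described above follows a fail-fast pattern; in the server itself Pydantic performs the validation, but the idea can be sketched with the standard library alone. The variable names (`QDRANT_URL`, `QDRANT_API_KEY`, `OPENAI_API_KEY`, `COLLECTION_NAME`) and the default collection name are assumptions for illustration:

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    """Deployment settings loaded once at startup (Pydantic plays this role in the server)."""
    qdrant_url: str
    qdrant_api_key: str
    openai_api_key: str
    collection_name: str


def load_settings(env=os.environ) -> Settings:
    """Fail fast if a required variable is missing, mirroring Pydantic's validation."""
    required = ("QDRANT_URL", "QDRANT_API_KEY", "OPENAI_API_KEY")
    missing = [name for name in required if name not in env]
    if missing:
        raise RuntimeError(f"missing required environment variables: {missing}")
    return Settings(
        qdrant_url=env["QDRANT_URL"],
        qdrant_api_key=env["QDRANT_API_KEY"],
        openai_api_key=env["OPENAI_API_KEY"],
        collection_name=env.get("COLLECTION_NAME", "nodes"),  # assumed default
    )
```

Validating everything at startup, rather than on first use, is what makes a deployment reproducible: a misconfigured container fails immediately instead of failing on its first query.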
Key capabilities include:
- Unified node management: Upsert, delete, and query nodes via concise JSON payloads.
- Semantic search: Leverages OpenAI embeddings to retrieve the most relevant nodes in a vector space.
- MCP compliance: Wraps responses in MCP envelopes, supports authentication secrets, and adheres to the standard response format expected by Claude and other AI assistants.
- Health monitoring: An endpoint that confirms connectivity to both Qdrant and OpenAI services.
- CORS and documentation: The FastAPI variant enables cross-origin requests and exposes an interactive Swagger UI, making it easy to test endpoints in a browser.
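The MCP envelope mentioned in the list above can be illustrated with a small helper. The server's exact envelope fields are not documented here, so this sketch follows the standard MCP tool-result shape (a `content` list of typed items plus an `isError` flag) as an assumption:

```python
import json


def mcp_text_result(payload: dict, is_error: bool = False) -> dict:
    """Wrap a tool result in an MCP-style envelope: a list of typed content items."""
    return {
        "content": [{"type": "text", "text": json.dumps(payload)}],
        "isError": is_error,
    }


# Example: a single search hit returned to an MCP client such as Claude.
envelope = mcp_text_result({"id": "doc-1", "score": 0.92})
```

Wrapping every response this way is what lets an MCP client consume search results, upsert confirmations, and errors through one uniform parser.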
Typical use cases range from knowledge-base-powered chatbots that retrieve contextually relevant documents to recommendation engines that match user queries against a graph of items. In research settings, the server can serve as a plug-in for rapid prototyping of graph-based retrieval pipelines without writing boilerplate code. For production deployments, its authentication support and standardized MCP responses make it a drop-in component in larger AI orchestration frameworks.
By offering both FastAPI and MCP interfaces, the Qdrant MCP Server gives developers flexibility: they can choose REST for quick prototyping or internal tooling, and switch to MCP when integrating with AI assistants that expect the protocol. This duality, combined with robust embedding and vector search logic, positions the server as a practical bridge between language models and scalable vector databases.
Related Servers
MarkItDown MCP Server
Convert documents to Markdown for LLMs quickly and accurately
Context7 MCP
Real‑time, version‑specific code docs for LLMs
Playwright MCP
Browser automation via structured accessibility trees
BlenderMCP
Claude AI meets Blender for instant 3D creation
Pydantic AI
Build GenAI agents with Pydantic validation and observability
Chrome DevTools MCP
AI-powered Chrome automation and debugging
Explore More Servers
Sanctions MCP Server
Real‑time sanctions screening via OFAC and global lists
Jupyter Earth MCP Server
Geospatial analysis in Jupyter notebooks via Model Context Protocol
Dex MCP Server
AI‑powered contact, note, and reminder management via Dex API
Monad Uniswap Trading MCP Server
AI‑powered crypto trading on Monad testnet
File Context Server
LLM-powered file system exploration and analysis
TiDB MCP Server
Seamless Model Context Protocol integration with TiDB serverless database