About
A lightweight MCP server that provides Qdrant‑backed RAG over notes from Notion, Obsidian, Apple Notes, and other markdown sources, enabling Claude Desktop to answer personal queries quickly.
Overview
Wandering RAG is a lightweight, command‑line driven server that bridges personal knowledge bases with AI assistants through the Model Context Protocol (MCP). It ingests semi‑structured documents from popular note‑taking platforms—such as Notion, Obsidian, and Apple Notes—converts them into a vector index stored in Qdrant, and exposes that index as an MCP service. This allows Claude Desktop (or any MCP‑compatible client) to query the user’s own data in a conversational manner, answering questions about events, dates, or any content that lives in the user’s notes.
The server solves a common pain point for developers and power users: accessing private, unstructured knowledge within an AI workflow without exposing it to external services. By running locally and using a self‑hosted vector database, Wandering RAG keeps sensitive information on the user’s machine while still offering the retrieval‑augmented generation (RAG) capabilities that modern assistants need. This is especially valuable for professionals who maintain extensive personal documentation—project logs, meeting notes, or research archives—and wish to retrieve that data on demand during a conversation.
Key features include:
- Multi‑source ingestion: Markdown files from Obsidian vaults, raw Apple Notes exports, and Notion pages can be indexed with a single command, and the CLI is structured so additional formats can be added later.
- Vector search via Qdrant: Documents are embedded using a chosen model and stored in Qdrant, enabling fast semantic retrieval.
- MCP server exposure: Once the vector index is ready, the CLI launches an MCP endpoint that Claude Desktop can call to fetch relevant passages; a minimal sketch of this flow follows this list.
- Configurable for existing workflows: A small JSON snippet tells Claude Desktop how to start the server, including environment variables and command arguments; an example configuration also appears after this list.
- Extensible architecture: The modular CLI design allows developers to add new data sources or custom embedding pipelines without touching the MCP layer.
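As a rough illustration of the Qdrant and MCP pieces, the sketch below embeds a vault of markdown notes into a local Qdrant collection and exposes a search tool over MCP's stdio transport. It assumes a local Qdrant instance, sentence-transformers for embeddings, and the official `mcp` Python SDK; the collection name, embedding model, server name, and tool name are illustrative stand-ins rather than the project's actual choices.

```python
# Hypothetical sketch: index markdown notes into Qdrant and expose a search tool over MCP.
# Collection name, embedding model, and tool name are assumptions, not Wandering RAG's
# documented internals.
from pathlib import Path

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams
from sentence_transformers import SentenceTransformer
from mcp.server.fastmcp import FastMCP

COLLECTION = "personal_notes"                      # assumed collection name
model = SentenceTransformer("all-MiniLM-L6-v2")    # assumed model, 384-dim embeddings
qdrant = QdrantClient(url="http://localhost:6333") # assumes a local Qdrant instance

def index_vault(vault_dir: str) -> None:
    """Embed every markdown file in a vault and upsert it into Qdrant.
    In practice this would be run once by an ingestion command before serving."""
    qdrant.recreate_collection(
        collection_name=COLLECTION,
        vectors_config=VectorParams(size=384, distance=Distance.COSINE),
    )
    files = sorted(Path(vault_dir).rglob("*.md"))
    texts = [f.read_text(encoding="utf-8") for f in files]
    vectors = model.encode(texts)
    qdrant.upsert(
        collection_name=COLLECTION,
        points=[
            PointStruct(id=i, vector=vec.tolist(), payload={"path": str(f), "text": txt})
            for i, (f, txt, vec) in enumerate(zip(files, texts, vectors))
        ],
    )

mcp = FastMCP("wandering-rag")  # server name shown to the MCP client

@mcp.tool()
def search_notes(query: str, limit: int = 5) -> list[str]:
    """Return the most relevant note passages for a natural-language query."""
    hits = qdrant.search(
        collection_name=COLLECTION,
        query_vector=model.encode(query).tolist(),
        limit=limit,
    )
    return [hit.payload["text"] for hit in hits]

if __name__ == "__main__":
    mcp.run()  # stdio transport, so Claude Desktop can spawn the server as a subprocess
```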
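For the Claude Desktop side, the entry typically lives in claude_desktop_config.json under the standard mcpServers key; the command, argument, and environment variable below are placeholders assumed for illustration rather than the project's documented values.

```json
{
  "mcpServers": {
    "wandering-rag": {
      "command": "wandering-rag",
      "args": ["mcp"],
      "env": {
        "QDRANT_URL": "http://localhost:6333"
      }
    }
  }
}
```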
Typical use cases include:
- Personal knowledge management: Quickly answer “When did I adopt my cat?” or “When was the last time I changed her litter?” by querying notes stored locally.
- Team knowledge bases: A small business can run the server on a shared machine, letting team members ask questions about project documentation without exposing it to cloud services.
- Developer tooling: Embed code snippets, API docs, or design documents into the index so that a coding assistant can reference them during debugging sessions.
By integrating seamlessly with existing MCP workflows, Wandering RAG empowers developers to keep their data private while still enjoying the power of retrieval‑augmented generation. Its straightforward CLI, local vector store, and MCP compatibility make it a practical addition to any AI‑centric development environment.