Archive Agent
MCP Server by shredEngineer

Intelligent file indexing with AI search and OCR

54 stars · Updated 16 days ago

About

Archive Agent is an on‑device file indexer that uses semantic chunking and Retrieval‑Augmented Generation (RAG) to provide natural‑language search across PDFs, images, Markdown, and plain text. It automatically OCRs images, stores embeddings in Qdrant, and exposes an MCP interface for seamless integration.

Capabilities

  • Resources – access data sources
  • Tools – execute functions
  • Prompts – pre‑built templates
  • Sampling – AI model interactions

[Screenshot: Archive Agent]

Overview

Archive Agent is an MCP‑enabled RAG engine that turns a local file system into an AI‑powered knowledge base. By indexing documents on‑device, it eliminates the need for cloud storage while still providing natural‑language search across PDFs, images, Markdown, and plain text. The server exposes a clean MCP interface so any Claude‑ or OpenAI‑compatible assistant can ask questions, retrieve relevant snippets, and generate responses grounded in the user’s own data.

The core value lies in combining semantic chunking, automatic OCR, and a local vector database (Qdrant) into a single pipeline. Files are scanned once, split into context‑aware chunks with headers that preserve document structure, and stored as embeddings. When a query arrives, the RAG engine retrieves the most semantically relevant chunks, reranks them with an optional second‑stage model, and expands the context before feeding it to the LLM. This workflow delivers precise answers without exposing sensitive documents to external services.
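To make the retrieval stage concrete, here is a minimal sketch in Python. The collection name and the embed, rerank, and llm callables are hypothetical placeholders rather than Archive Agent's actual internals; only the Qdrant client calls follow the real qdrant-client API.

```python
# Sketch of a retrieve -> rerank -> expand loop. Illustrative only:
# "archive", embed(), rerank(), and llm() are hypothetical placeholders,
# not Archive Agent's actual internals.
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")  # local Qdrant instance

def answer(query: str, embed, rerank, llm, top_k: int = 20, keep: int = 5) -> str:
    # 1. Embed the query and fetch the most similar chunks from Qdrant.
    hits = client.search(
        collection_name="archive",
        query_vector=embed(query),
        limit=top_k,
        with_payload=True,
    )
    chunks = [hit.payload["text"] for hit in hits]

    # 2. Optional second-stage rerank: keep only the best few chunks.
    chunks = rerank(query, chunks)[:keep]

    # 3. Expand the context and ground the LLM's answer in the retrieved text.
    context = "\n\n".join(chunks)
    return llm(f"Answer using only the context below.\n\n{context}\n\nQuestion: {query}")
```

Retrieving wide and reranking narrow is a common trade: a cheap vector search casts a broad net, and the second‑stage model buys precision before the expensive LLM call.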

Key capabilities include:

  • On‑device indexing – supports PDFs, images, Markdown, and plain text. File selection is driven by glob patterns, and changed files are automatically re‑indexed in parallel.
  • Automatic OCR – experimental image transcription that extracts entities and converts visual data into searchable text.
  • MCP integration – the server exposes resources, tools, and prompts that any MCP‑compatible client can invoke directly. No chatbot UI is required; the assistant simply calls the server’s methods (see the client sketch after this list).
  • AI provider flexibility – works with OpenAI, xAI/Grok, Claude (OpenAI‑compatible), Ollama, or LM Studio. Switching providers is as simple as updating the API URL in settings (see the provider sketch below).
  • Scalable performance – multi‑threaded ingestion, request retry logic, and structured output prompts ensure fast, reliable operation even on large collections.
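
As referenced in the MCP integration bullet, any MCP‑compatible client can drive the server programmatically. The sketch below uses the official mcp Python SDK; the launch command ("archive-agent mcp") and the tool name ("search") are assumptions for illustration, not Archive Agent's documented interface.

```python
# Hypothetical MCP client session. The server command and tool name are
# assumptions for illustration, not documented Archive Agent values.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    server = StdioServerParameters(command="archive-agent", args=["mcp"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # discover the exposed tools
            print([tool.name for tool in tools.tools])
            result = await session.call_tool(
                "search",
                arguments={"question": "What do my notes say about Qdrant?"},
            )
            print(result.content)

asyncio.run(main())
```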
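Likewise, because every listed provider speaks the OpenAI‑compatible API, swapping providers really is a base‑URL change. A generic illustration using the openai Python SDK; the endpoints shown are the providers' standard defaults, not Archive Agent settings:

```python
# Generic OpenAI-compatible client. Swapping providers only changes
# base_url (and the API key); endpoints shown are standard defaults.
from openai import OpenAI

# client = OpenAI()                                # OpenAI (hosted default)
# client = OpenAI(base_url="https://api.x.ai/v1")  # xAI / Grok
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")  # Ollama

reply = client.chat.completions.create(
    model="llama3",  # model name depends on the provider
    messages=[{"role": "user", "content": "Summarize my notes on RAG."}],
)
print(reply.choices[0].message.content)
```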

Typical use cases span personal knowledge management (e.g., a researcher indexing papers), enterprise document search, and privacy‑conscious workflows where the user wants an LLM to answer questions from proprietary files without uploading them to a third‑party service. By embedding the MCP server into existing toolchains, developers can enrich command‑line utilities or custom assistants with robust RAG capabilities while keeping all data local.