About
The Pagefind MCP Server indexes static sites with Pagefind and exposes fast search queries over the Model Context Protocol. It can push full page resources to an MCP-compatible client, or omit them in favor of lightweight text snippets.
Overview
The Pagefind MCP server turns a static website into an AI‑friendly search engine. By exposing the Pagefind index through the Model Context Protocol, Claude and other AI assistants can query a site’s content in natural language and receive structured results without the need to parse raw HTML or build custom search APIs. This solves a common pain point for developers who ship static sites (Jekyll, Hugo, Eleventy, etc.) but still want their content discoverable by conversational agents.
At its core, the server listens for search queries and translates them into Pagefind index lookups. It returns a JSON payload containing the matching documents, relevance scores, and optional snippets. In full-resource mode, the server also pushes complete page resources (images, CSS, and JavaScript) into the AI's context so the assistant can reference or display them directly. When resources are suppressed, the server instead fetches each result page and extracts a short, markup-free snippet to keep responses lightweight. This dual mode lets teams balance bandwidth against richness to suit their deployment constraints.
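The lightweight mode described above boils down to stripping markup from a fetched page and truncating the visible text. A minimal sketch of that idea, using only the Python standard library; the function name `extract_snippet` and the `max_len` parameter are illustrative assumptions, not the server's actual API:

```python
# Hypothetical sketch of "lightweight snippet" extraction: reduce a fetched
# result page to a short, markup-free excerpt. Not the server's real code.
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Collects visible text, skipping script and style blocks."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)

def extract_snippet(page_html: str, max_len: int = 160) -> str:
    parser = _TextExtractor()
    parser.feed(page_html)
    # Collapse runs of whitespace left behind by removed tags.
    text = " ".join(" ".join(parser.parts).split())
    return text if len(text) <= max_len else text[:max_len].rstrip() + "…"

snippet = extract_snippet(
    "<html><body><h1>Docs</h1><p>Install with npm.</p>"
    "<script>var x=1;</script></body></html>"
)
```

A real implementation would also need to fetch the page over HTTP and handle encoding, but the core trade-off is the same: a few hundred bytes of plain text instead of the full page and its assets.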
Key capabilities include:
- Seamless static‑site integration – simply point the server at a built site’s root URL and it automatically indexes all pages using Pagefind.
- Context‑aware snippets – optional resource pushing lets the AI embed images or highlight code blocks, enhancing user interactions.
- Relevance‑scored results – the server returns confidence scores so developers can fine‑tune how much weight AI assistants give to each match.
- Lightweight operation – suppressing page resources reduces payload size, ideal for environments with strict bandwidth limits or privacy requirements.
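To illustrate how a client might use the relevance scores mentioned above, here is a small sketch of score-based filtering on the consuming side. The result shape (`url`, `score`, `snippet` keys) is an assumption for illustration, not the server's documented schema:

```python
# Illustrative client-side weighting of relevance-scored results.
# The dict keys and thresholds here are hypothetical, not the real schema.
def top_results(results, min_score=0.5, limit=3):
    """Keep results at or above min_score, best first, capped at limit."""
    kept = [r for r in results if r["score"] >= min_score]
    kept.sort(key=lambda r: r["score"], reverse=True)
    return kept[:limit]

matches = [
    {"url": "/docs/install", "score": 0.92, "snippet": "Install with npm…"},
    {"url": "/blog/roadmap", "score": 0.41, "snippet": "Our 2024 plans…"},
    {"url": "/docs/search",  "score": 0.77, "snippet": "Configure search…"},
]
best = top_results(matches)
```

A threshold like this is one simple way to "fine-tune how much weight" an assistant gives each match: low-confidence hits are dropped before they ever reach the model's context.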
Typical use cases range from FAQ bots that surface the most relevant help articles to knowledge‑base assistants that pull in recent blog posts or documentation snippets. In a CI/CD pipeline, the MCP server can be launched after each static build, ensuring that the AI always queries the latest content. For teams building voice assistants or chat interfaces, this server removes the need to write custom search logic; the assistant can simply ask for "latest tutorials on X" and receive ranked results instantly.
What sets Pagefind MCP apart is its tight coupling with the Pagefind index, a fast client‑side search engine already optimized for static sites. By exposing that index through MCP, developers gain a ready‑made bridge between their site’s content and conversational AI without additional infrastructure. The result is a lightweight, production‑ready solution that empowers developers to deliver intelligent search experiences across web, mobile, and voice platforms.