MCP KnowledgeBase Server

An MCP server by mbcrawfo

Store and search LLM memories with SQLite full‑text search

About

An MCP server that lets large language models persist conversational memories in a SQLite database and retrieve them via full‑text search, enabling context retention across sessions.

Capabilities

  • Resources: Access data sources
  • Tools: Execute functions
  • Prompts: Pre-built templates
  • Sampling: AI model interactions

The MCP KnowledgeBase Server fills a critical gap for AI assistants that need to retain context across multiple turns of conversation. Traditional LLMs are stateless, meaning each prompt is processed in isolation without memory of prior interactions. This server solves that problem by providing a lightweight, persistent storage layer that captures “memories” as the assistant converses. When an LLM asks the server to remember a fact, the server writes it into a SQLite database; later, when the assistant needs to retrieve that information, it performs a full‑text search over the stored entries. This approach keeps the conversational state outside of the model, preserving privacy and allowing the LLM to focus on language generation rather than bookkeeping.
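The underlying mechanics are easy to picture with plain SQLite. The sketch below is a minimal illustration of the idea rather than the server's actual schema: it assumes an SQLite build with the FTS5 extension enabled (true of most Python distributions), and the table and column names are made up for the example.

```python
import sqlite3

# Illustrative only: an FTS5 virtual table holds the memory text, so writes
# are ordinary INSERTs and searches use SQLite's full-text MATCH operator.
conn = sqlite3.connect("memories.db")
conn.execute("CREATE VIRTUAL TABLE IF NOT EXISTS memories USING fts5(content)")

def remember(text: str) -> None:
    """Persist one memory entry."""
    conn.execute("INSERT INTO memories (content) VALUES (?)", (text,))
    conn.commit()

def recall(query: str, limit: int = 5) -> list[str]:
    """Return the best-matching entries, ranked by FTS5 relevance."""
    rows = conn.execute(
        "SELECT content FROM memories WHERE memories MATCH ? ORDER BY rank LIMIT ?",
        (query, limit),
    )
    return [content for (content,) in rows]

remember("User prefers concise answers with code examples.")
print(recall("code examples"))  # ['User prefers concise answers with code examples.']
```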

At its core, the server exposes a simple set of MCP resources. The memory resource accepts POST requests to add new entries and GET requests with a query parameter to search for relevant memories. Because SQLite’s full‑text indexing is used, searches are fast even when the database grows to thousands of entries. The server also ships with a “General Memory Usage” prompt that can be injected into an LLM’s instruction set, guiding the model on how to request or store memories without needing custom prompt engineering each time. Developers can override this default with a tailored prompt to match domain‑specific terminology or privacy policies.
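To make the request shapes concrete, here is a hypothetical client-side sketch using Python's requests library. The base URL, port, and JSON field names are assumptions for illustration; an MCP client would normally drive these calls through the protocol rather than hand-written HTTP.

```python
import requests

# Hypothetical endpoint and field names; consult the project's documentation
# for the real resource paths and payload shapes.
BASE_URL = "http://localhost:8080/memory"

# Store a new memory entry (POST with the entry in the request body).
requests.post(BASE_URL, json={"content": "The user's project targets .NET 8."})

# Search stored memories (GET with a full-text query parameter).
hits = requests.get(BASE_URL, params={"query": "project targets"}).json()
for hit in hits:
    print(hit)
```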

Key capabilities include:

  • Persistent storage: Memories survive across server restarts and can be shared among multiple assistants or sessions.
  • Full‑text search: Rapid retrieval of relevant facts using SQLite’s built‑in FTS engine.
  • Docker and local deployment: Easy to run in containerized environments or directly from the .NET CLI, with configurable database paths (see the configuration sketch after this list).
  • Extensible prompts: Swap in custom instruction sets to control how the LLM interacts with memory.
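As a rough sketch of the configuration pattern (the environment variable name below is hypothetical; the project's README documents the real setting), the server only needs to be pointed at a writable SQLite file, which can sit on a mounted volume in Docker or on the local filesystem:

```python
import os
import sqlite3

# Hypothetical variable name for illustration; the actual setting may differ.
db_path = os.environ.get("KNOWLEDGEBASE_DB_PATH", "memories.db")

# The same file serves a container (via a mounted volume) or a local run,
# which is what lets memories survive restarts and be shared across sessions.
conn = sqlite3.connect(db_path)
print(f"Memory database: {os.path.abspath(db_path)}")
```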

Real‑world use cases are plentiful. A customer support bot can remember user preferences or past tickets, enabling personalized follow‑ups without re‑asking. A research assistant can store bibliographic snippets and quickly surface them during a literature review. In education, tutors can keep track of a student’s progress and adapt explanations accordingly. Because the server communicates via MCP, any AI platform that understands the protocol can integrate memory handling without modifying the core model.

The standout advantage of this MCP server is its minimal footprint coupled with powerful search. By leveraging SQLite, it avoids the complexity of external databases while still delivering efficient query performance. Its straightforward configuration—just set environment variables for the database location—and its compatibility with Docker make it a drop‑in component for developers building conversational AI workflows that require reliable, searchable memory.