basicmachines-co

Basic Memory

MCP Server

Local Markdown knowledge base for LLMs

1.9k stars • Updated 12 days ago

About

Basic Memory lets you build a persistent, editable knowledge base through natural conversations with LLMs like Claude. It stores notes as Markdown files on your computer, enabling bi‑directional read/write via the Model Context Protocol.

Capabilities

  • Resources: Access data sources
  • Tools: Execute functions
  • Prompts: Pre-built templates
  • Sampling: AI model interactions
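
For a concrete feel, an MCP client can discover these primitives programmatically once the server is running. The sketch below uses the official MCP Python SDK; the launch command (`uvx basic-memory mcp`) reflects the project's typical uvx-based setup and may differ in your install, so check the project README before relying on it.

```python
# Discover what the Basic Memory server exposes over MCP.
# Assumes the `mcp` Python SDK is installed and that the server launches
# with `uvx basic-memory mcp` (verify against the project's README).
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

SERVER = StdioServerParameters(command="uvx", args=["basic-memory", "mcp"])

async def main() -> None:
    async with stdio_client(SERVER) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Each listing call assumes the server advertises the
            # corresponding capability, as its listing claims.
            tools = await session.list_tools()
            resources = await session.list_resources()
            prompts = await session.list_prompts()
            print("Tools:    ", [t.name for t in tools.tools])
            print("Resources:", [r.name for r in resources.resources])
            print("Prompts:  ", [p.name for p in prompts.prompts])

asyncio.run(main())
```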

Basic Memory in Action

Basic Memory is an MCP server that turns everyday conversations with large language models into a living, local knowledge base. Instead of each query being an isolated request that the model forgets after responding, this tool lets the assistant capture what it learns as Markdown files on your machine. The server exposes simple read/write capabilities over the Model Context Protocol, allowing any MCP‑compatible assistant—Claude, Gemini, or others—to load prior context seamlessly and augment it in real time. For developers, this means you can build applications where the assistant “remembers” user preferences, project details, or domain facts without relying on external databases or cloud storage.

The core value lies in its local‑first philosophy. All data lives in a folder you control, formatted as plain Markdown with semantic tags that both humans and LLMs can understand. The server automatically parses these files, exposes them as resources, and lets the model read from or write to them via standard MCP tools. This eliminates the need for complex RAG pipelines, vector stores, or dedicated knowledge‑graph solutions while still providing a structured, traversable graph of topics. The LLM can follow internal links between notes, answer “what do I know about X?” queries, and even create new entries on the fly—all without leaving its own context window.
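
To make the “traversable graph” idea concrete, here is a minimal, illustrative sketch of how wiki-style links between plain Markdown notes form a graph you can walk. This is not Basic Memory's actual parser or on-disk schema, just a demonstration of why a folder of local Markdown files is enough to support graph traversal.

```python
# Illustrative only: build a tiny link graph from a folder of Markdown notes
# by scanning for [[WikiLink]] references. Basic Memory's own parser and
# note format may differ; the point is that plain files are traversable.
import re
from pathlib import Path

WIKILINK = re.compile(r"\[\[([^\]]+)\]\]")

def build_graph(notes_dir: str) -> dict[str, set[str]]:
    """Map each note (by filename stem) to the notes it links to."""
    graph: dict[str, set[str]] = {}
    for path in Path(notes_dir).rglob("*.md"):
        text = path.read_text(encoding="utf-8")
        graph[path.stem] = set(WIKILINK.findall(text))
    return graph

if __name__ == "__main__":
    # Hypothetical folder name; point this at wherever your notes live.
    for note, links in build_graph("./basic-memory").items():
        print(f"{note} -> {sorted(links)}")
```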

Key capabilities include:

  • Real‑time note creation and editing: The assistant writes directly into Markdown files, preserving a versioned history.
  • Context injection on new sessions: When a conversation starts, the server loads relevant files so the model can pick up where it left off.
  • Search and retrieval: The server exposes a search tool that scans the local knowledge base, returning matching notes for quick reference (see the client-side sketch after this list).
  • Semantic linking: By using Markdown link syntax and optional tags, the assistant can navigate a knowledge graph, providing deeper explanations or related topics.
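
To give a rough sense of what these capabilities look like from a client's point of view, the sketch below creates a note and then searches for it through the MCP Python SDK. The tool names and argument shapes used here (write_note, search_notes, title, content, query) are illustrative assumptions, not the server's confirmed schema; the list_tools() call shown earlier reveals the real tool names and parameters.

```python
# Sketch: create a note, then search for it. Tool names and arguments are
# assumptions for illustration; confirm them against list_tools() output.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

SERVER = StdioServerParameters(command="uvx", args=["basic-memory", "mcp"])

async def main() -> None:
    async with stdio_client(SERVER) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Write a Markdown note into the local knowledge base.
            await session.call_tool(
                "write_note",
                arguments={
                    "title": "Coffee Brewing Methods",
                    "content": "Pour-over highlights acidity; French press adds body.",
                },
            )
            # Search the knowledge base for matching notes.
            result = await session.call_tool(
                "search_notes", arguments={"query": "coffee"}
            )
            print(result.content)

asyncio.run(main())
```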

Real‑world use cases are plentiful. A developer can keep a personal technical journal, an academic researcher can log literature notes, or a team can maintain a shared design spec—all while the assistant continuously updates and recalls relevant information. In customer support scenarios, an AI can pull from a local FAQ repository that grows organically as new issues are documented. Because the data never leaves your machine, privacy and compliance concerns are minimized.

Integrating Basic Memory into an AI workflow is straightforward: add the server to your MCP configuration, and expose its tools in the assistant’s prompt. The assistant then treats local notes like any other resource, enabling natural‑language commands such as “Create a note about coffee brewing methods” or “Find information about Ethiopian beans.” The result is an intelligent, persistent dialogue that feels like a living conversation partner rather than a stateless chatbot.
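
As a hedged starting point, the snippet below prints a typical mcpServers entry for Basic Memory. The uvx-based launch command and the location of your client's configuration file vary by setup (Claude Desktop, Cursor, and others each keep their own), so treat this as a template and check the project's documentation before copying it verbatim.

```python
# Print a typical MCP client configuration entry for Basic Memory.
# Merge the printed JSON into your client's existing MCP configuration
# file by hand; the file's name and location depend on your client.
import json

config_entry = {
    "mcpServers": {
        "basic-memory": {
            "command": "uvx",  # assumes basic-memory is runnable via uvx
            "args": ["basic-memory", "mcp"],
        }
    }
}

print(json.dumps(config_entry, indent=2))
```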