
In Memoria

MCP Server

Persistent AI codebase memory via MCP


About

In Memoria is an MCP server that gives AI coding assistants persistent, learned knowledge of your codebase—capturing naming conventions, patterns, and architectural decisions—to provide context-aware suggestions across sessions.

Capabilities

  • Resources – access data sources
  • Tools – execute functions
  • Prompts – pre-built templates
  • Sampling – AI model interactions

In Memoria – Persistent Context for AI Code Assistants

In Memoria addresses a core pain point in the current generation of AI coding tools: session amnesia. Every time you start a new conversation with Claude, Copilot, or Cursor, the assistant has no knowledge of your project’s architecture, coding conventions, or past decisions. This forces developers to repeatedly explain the same patterns and leads to generic, style‑incompatible suggestions. In Memoria solves this by creating a persistent knowledge base that an AI assistant can query through the Model Context Protocol, giving it long‑term memory of your codebase.
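
Wiring the server into an assistant is typically a one-line entry in the client's configuration. The sketch below follows the `mcpServers` convention used by clients such as Claude Desktop; the `npx in-memoria server` invocation is an assumption based on the project's npm distribution, so check the project README for the authoritative command.

```json
{
  "mcpServers": {
    "in-memoria": {
      "command": "npx",
      "args": ["in-memoria", "server"]
    }
  }
}
```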

The server exposes seventeen dedicated tools that perform deep analysis of a repository and learn the stylistic fingerprints that make your code unique. These tools are accessible to any MCP‑compliant client, so you can integrate In Memoria into existing workflows without modifying your AI provider. The learning process is fully automated: a single command parses the entire codebase, filters out build artifacts and dependencies, and builds statistical models of naming conventions, function signatures, architectural choices, and semantic relationships. The result is a rich context that the assistant can reference to produce suggestions that align with your established patterns.
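
To make the tool interface concrete, here is a minimal TypeScript sketch using the official MCP SDK. The `learn_codebase` tool name and its `path` argument are illustrative assumptions rather than the server's documented API; in practice you would call `listTools()` first to discover the real names.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the server over stdio (command assumed, as above).
const transport = new StdioClientTransport({
  command: "npx",
  args: ["in-memoria", "server"],
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Discover the server's actual tool names.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// Hypothetical call to a codebase-learning tool.
const result = await client.callTool({
  name: "learn_codebase",               // assumed name
  arguments: { path: "/path/to/repo" }, // assumed argument
});
console.log(result.content);
```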

Key capabilities include:

  • Native AST parsing for 12 languages (TypeScript, JavaScript, Python, Rust, Go, Java, C/C++, C#, Svelte, Vue, SQL) using a high‑performance Rust engine.
  • Pattern learning that captures naming schemes (e.g., camelCase variables, use‑prefixed hooks), function signatures, and architectural decisions.
  • Semantic mapping that extracts code relationships and concepts, enabling the assistant to reason about dependencies and module boundaries.
  • Vector search via SurrealDB for fast semantic queries, backed by SQLite for structured pattern storage.
  • File‑watching and incremental updates so the knowledge base stays current as your code evolves (see the sketch after this list).
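
As a rough sketch of the file‑watching approach (an illustration of the technique, not In Memoria's actual implementation), a Node‑based watcher can re‑index only the files that change, keeping build artifacts and dependencies out of scope:

```typescript
import chokidar from "chokidar";

// Watch source files; skip build artifacts and dependencies,
// mirroring the filtering described above.
const watcher = chokidar.watch("src", {
  ignored: ["**/node_modules/**", "**/dist/**"],
  ignoreInitial: true,
});

// Hypothetical update hook: re-parse only the changed file and
// merge its extracted patterns into the knowledge base.
async function reindexFile(path: string): Promise<void> {
  console.log(`re-analyzing ${path}`);
  // ...parse the file, extract patterns, upsert into storage...
}

watcher.on("add", reindexFile);
watcher.on("change", reindexFile);
watcher.on("unlink", (path) => {
  console.log(`dropping stale entries for ${path}`);
});
```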

In practice, In Memoria lets developers make high‑level requests such as “refactor this function using our established patterns” and receive context‑aware suggestions that respect camelCase conventions, hook‑prefix naming, event‑handler patterns, and more. It removes the need to re‑explain your style at the start of every session, reduces cognitive load, and speeds up code reviews and refactoring. Because it integrates with standard MCP workflows, it is a plug‑and‑play addition that makes any AI assistant smarter without sacrificing performance or locking you into a vendor.
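
To picture what a learned pattern looks like once it reaches the model, consider the hypothetical record below; In Memoria's real schema is internal and may differ, so treat this purely as an illustration of the idea:

```typescript
// Hypothetical shape of a learned naming pattern.
interface NamingPattern {
  kind: "variable" | "function" | "hook" | "eventHandler";
  convention: string;  // e.g. "camelCase", "use-prefixed"
  example: string;     // representative identifier from the repo
  occurrences: number; // how often the convention was observed
  confidence: number;  // 0..1 share of identifiers matching it
}

// A record like this can be surfaced to the assistant as context:
const handlerNaming: NamingPattern = {
  kind: "eventHandler",
  convention: "handle-prefixed camelCase",
  example: "handleSubmitClick",
  occurrences: 42,
  confidence: 0.93,
};
```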