PostgMem

MCP Server by dariogriffo

Vector memory storage powered by PostgreSQL and pgvector


About

PostgMem is a .NET MCP server that stores, retrieves, and semantically searches AI memories using PostgreSQL with the pgvector extension. It provides efficient vector similarity queries, tagging, and integration with AI agents.

Capabilities

  • Resources — access data sources
  • Tools — execute functions
  • Prompts — pre-built templates
  • Sampling — AI model interactions

PostgMem: A PostgreSQL‑Backed Vector Memory Store for AI Assistants

PostgMem is a Model Context Protocol (MCP) server that solves the common pain point of persisting and retrieving semantic memories for AI agents. Traditional key‑value or relational stores fall short when an assistant needs to recall prior conversations, documents, or knowledge snippets based on meaning rather than exact text. PostgMem fills this gap by combining PostgreSQL’s reliability with the pgvector extension, enabling high‑performance similarity search over dense embeddings. This allows agents to ask “what did we talk about last week?” or “find documents similar to this query” and receive ranked results that reflect contextual relevance.
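
To make the similarity-search idea concrete, here is a minimal, self-contained sketch of the kind of cosine-distance query pgvector enables, written in C# with Npgsql. The memories table, column names, and connection string are illustrative assumptions, not PostgMem’s actual schema:

    // Minimal sketch: cosine-similarity lookup against a pgvector column.
    // Table and column names (memories, embedding, content) are illustrative only.
    using Npgsql;

    var connString = "Host=localhost;Database=postgmem;Username=postgres;Password=postgres";
    await using var conn = new NpgsqlConnection(connString);
    await conn.OpenAsync();

    // The query embedding is passed as a pgvector literal and cast with ::vector;
    // <=> is pgvector's cosine-distance operator, so ordering by it returns the
    // closest memories first.
    var queryEmbedding = "[0.12, -0.03, 0.88]"; // produced by an embedding model
    await using var cmd = new NpgsqlCommand(
        "SELECT id, content, embedding <=> @q::vector AS distance " +
        "FROM memories ORDER BY embedding <=> @q::vector LIMIT 5", conn);
    cmd.Parameters.AddWithValue("q", queryEmbedding);

    await using var reader = await cmd.ExecuteReaderAsync();
    while (await reader.ReadAsync())
        Console.WriteLine($"{reader.GetGuid(0)}: distance {reader.GetDouble(2)}");

Ordering by the distance operator is also what allows PostgreSQL to use a vector index for the lookup instead of scanning every row.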

The server exposes a small set of MCP tools—Store, Search, Get, and Delete—that map directly to common memory operations. A developer can call Store to persist a JSON payload along with its embedding, automatically generated by an embedded text‑embedding service (defaulting to Ollama). Search lets the agent query by natural language, specifying a similarity threshold and optional tag filters to narrow results. Get retrieves a single memory by its UUID, while Delete removes it from the database. These operations are wrapped in a clean, uniform tool interface that any MCP‑compliant client can invoke without custom adapters.
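
The embedding mentioned above has to come from somewhere; with the default Ollama backend it is a single HTTP call. The sketch below uses Ollama’s public /api/embeddings endpoint with the nomic-embed-text model as an example—PostgMem’s own configuration and model choice may differ:

    // Sketch: request an embedding for a piece of text from a local Ollama server.
    // Assumes Ollama is running on localhost:11434 with an embedding model pulled
    // (e.g. nomic-embed-text); PostgMem's actual configuration may differ.
    using System.Linq;
    using System.Net.Http.Json;
    using System.Text.Json;

    using var http = new HttpClient { BaseAddress = new Uri("http://localhost:11434") };

    var response = await http.PostAsJsonAsync("/api/embeddings", new
    {
        model = "nomic-embed-text",
        prompt = "Customer prefers email follow-ups over phone calls."
    });
    response.EnsureSuccessStatusCode();

    // Ollama returns a JSON object with an "embedding" array of floats.
    using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
    float[] embedding = doc.RootElement.GetProperty("embedding")
        .EnumerateArray()
        .Select(e => e.GetSingle())
        .ToArray();

    Console.WriteLine($"Got {embedding.Length}-dimensional embedding");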

Key capabilities include:

  • Vector embeddings stored in a pgvector column, enabling fast cosine similarity lookups via PostgreSQL’s native index support (see the schema sketch after this list).
  • Tagging and filtering for categorical organization, enabling agents to segment memories by domain, source, or confidence level.
  • Confidence scoring to express the reliability of each memory, useful for downstream decision‑making or quality control.
  • Automatic embedding generation through a pluggable API, so the server can integrate with any text‑embedding provider.
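
Taken together, those capabilities map naturally onto a table with an embedding column, a tag array, and a confidence score. The DDL below is an illustrative sketch only—the table name, 768-dimension vector, and HNSW index are assumptions, not PostgMem’s actual migration—executed here via Npgsql for completeness:

    // Illustrative DDL matching the capabilities above: a pgvector embedding
    // column, tags for filtering, a confidence score, and a cosine-distance index.
    using Npgsql;

    const string ddl = """
        CREATE EXTENSION IF NOT EXISTS vector;

        CREATE TABLE IF NOT EXISTS memories (
            id          uuid PRIMARY KEY DEFAULT gen_random_uuid(),
            content     jsonb       NOT NULL,
            embedding   vector(768) NOT NULL,   -- dimensionality depends on the embedding model
            tags        text[]      NOT NULL DEFAULT '{}',
            confidence  real        NOT NULL DEFAULT 1.0,
            created_at  timestamptz NOT NULL DEFAULT now()
        );

        -- HNSW index with cosine distance for fast similarity search.
        CREATE INDEX IF NOT EXISTS memories_embedding_idx
            ON memories USING hnsw (embedding vector_cosine_ops);
        """;

    await using var conn = new NpgsqlConnection(
        "Host=localhost;Database=postgmem;Username=postgres;Password=postgres");
    await conn.OpenAsync();
    await using var cmd = new NpgsqlCommand(ddl, conn);
    await cmd.ExecuteNonQueryAsync();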

In practice, PostgMem shines in scenarios such as personal knowledge bases, customer support history retrieval, or collaborative document search. An AI assistant can maintain a growing archive of user interactions and consult it in real time to provide context‑aware responses. Because the server is written in .NET 9 and uses ASP.NET Core, it can be deployed as a microservice within existing cloud stacks or run locally for rapid prototyping.

The MCP integration is the server’s standout feature. By adhering to the MCP specification, PostgMem allows agents like Claude or other LLM‑based tools to discover and invoke memory operations through a standard protocol, eliminating the need for custom SDKs. This plug‑and‑play approach accelerates development, reduces boilerplate, and ensures that memory management becomes a first‑class citizen in any AI workflow.