MCPSERV.CLUB
rawr-ai

Graphiti MCP Server

MCP Server

Multi‑project graph extraction on a shared Neo4j database

71 stars · Updated 17 days ago

About

Graphiti MCP Server extracts entities and relationships from text, storing them in Neo4j. It supports multiple isolated project servers via Docker, sharing a single database while keeping each project's configuration separate.

Capabilities

  • Resources: access data sources
  • Tools: execute functions
  • Prompts: pre-built templates
  • Sampling: AI model interactions

Overview of the Graphiti MCP Server

The Graphiti MCP Server addresses a common bottleneck for developers building AI‑powered knowledge graphs: managing multiple extraction projects that share a single Neo4j database while remaining isolated from one another. Traditional deployments of Graphiti require a separate Docker Compose file for each project, which quickly becomes unwieldy as the number of knowledge graphs grows. This fork streamlines that workflow by introducing a single root server that orchestrates project‑specific MCP instances, each with its own extraction rules and model configuration. The result is a scalable, developer‑friendly architecture that keeps projects neatly compartmentalized yet fully integrated.
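The single-file layout described above might look roughly like the Compose sketch below. This is an illustrative guess, not the fork’s actual configuration: the service names, images, and volume paths are all placeholders.

```yaml
# Hypothetical layout only -- service names, images, and paths are illustrative.
services:
  neo4j:
    image: neo4j:5
    environment:
      - NEO4J_AUTH=neo4j/${NEO4J_PASSWORD}

  graphiti-root:             # root server orchestrating project instances
    build: .
    depends_on: [neo4j]

  project-alpha:             # one isolated MCP instance per project
    build: .
    depends_on: [neo4j]
    volumes:
      - ./projects/alpha:/app/config:ro
```

Adding a project then becomes a matter of adding one more service block that mounts that project’s own configuration, rather than maintaining a whole separate Compose file.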

At its core, the server exposes an MCP interface that lets AI assistants like Claude send prompts to extract entities and relationships from arbitrary text. The extracted data is then stored in Neo4j, enabling rich graph queries and visual exploration. By running each project as an isolated Docker container, the server guarantees that a malformed prompt or a buggy extraction rule in one project cannot bring down the entire system. The root server handles configuration, status reporting, and automatic discovery of project ports—making it trivial for tooling to connect via the standard endpoint.
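Under the hood, the wire format is ordinary MCP JSON-RPC, so a client’s extraction request can be sketched as below. The tool name `add_episode` and its argument keys are assumptions for illustration, not the server’s confirmed schema; only the `tools/call` method itself comes from the MCP specification.

```python
import json

def make_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 message for an MCP tools/call request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical extraction call: tool name and argument keys are illustrative.
msg = make_tool_call("add_episode", {"text": "Ada Lovelace wrote the first program."})
print(json.loads(msg)["method"])  # tools/call
```

Any MCP-capable assistant emits messages of this shape, which is why no Graphiti-specific client library is required.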

Key capabilities include:

  • Multi‑project orchestration: A single Compose file manages a root service plus any number of project containers, all sharing the same Neo4j instance.
  • Project isolation: Each container has its own configuration, entity definitions, and model settings, preventing cross‑project interference.
  • Auto‑discovery for editors: Project ports are published automatically, allowing IDEs and other tools to find the MCP endpoints without manual configuration.
  • Crash containment: Faulty prompts trigger a container restart without affecting other projects, improving reliability.
  • Hot reload: Changes to a project’s configuration can be applied on the fly, reducing development turnaround time.
  • Secure defaults: The server refuses to start with weak Neo4j credentials unless explicitly overridden, encouraging good security practice.
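The secure-defaults behaviour can be imagined as a startup guard along these lines. The blocklist contents and the override flag name are placeholders, not the fork’s actual implementation:

```python
# Illustrative only: the real server's weak-password list and override
# mechanism may differ from this sketch.
WEAK_PASSWORDS = {"neo4j", "password", "changeme", ""}

def check_credentials(password: str, allow_weak: bool = False) -> None:
    """Refuse to start when the Neo4j password is on the weak list,
    unless the operator has explicitly opted out."""
    if password in WEAK_PASSWORDS and not allow_weak:
        raise SystemExit(
            "Refusing to start with a weak Neo4j password. "
            "Set a strong password or explicitly allow weak credentials."
        )

check_credentials("s3cure-Example!")  # a strong password passes silently
```

Failing fast at startup, rather than running with a guessable database password, is the design choice the feature list is describing.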

Typical use cases range from academic research labs that maintain multiple domain‑specific knowledge graphs to enterprise teams that need separate extraction pipelines for different product lines. A data scientist can spin up a new project, define custom entity schemas, and immediately start feeding new documents to the MCP server. An AI assistant can then query the graph via its own prompt, retrieving structured insights without needing to understand the underlying database schema.
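A project’s custom entity schema could be sketched with plain dataclasses as below. These field names are invented for the example and are not Graphiti’s actual schema API:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """Minimal illustrative node extracted from text."""
    name: str
    entity_type: str          # e.g. "Person", "Paper", "Product"
    attributes: dict = field(default_factory=dict)

@dataclass
class Relationship:
    """Illustrative edge between two extracted entities."""
    source: str
    target: str
    relation: str             # e.g. "AUTHORED", "DEPENDS_ON"

# A research-lab project might define paper/author entities...
e1 = Entity("Ada Lovelace", "Person")
e2 = Entity("Notes on the Analytical Engine", "Paper")
r = Relationship(e1.name, e2.name, "AUTHORED")
```

The point is that each project ships its own schema definitions, so a product-line pipeline and a research-lab pipeline can extract entirely different entity types against the same Neo4j instance.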

Integrating Graphiti into an AI workflow is straightforward: the endpoints follow the standard MCP protocol, so any assistant that supports MCP can connect. The server’s status page provides health metrics, while Neo4j’s web UI offers a visual representation of the evolving graph. This tight coupling between prompt‑driven extraction and graph querying makes Graphiti a powerful tool for building intelligent, data‑centric applications that combine the strengths of natural language processing and graph databases.
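Tooling that watches the status page might interpret its payload with a helper like the one below. The field names (`projects`, `healthy`) are hypothetical, since the source doesn’t document the response schema:

```python
def summarize_status(payload: dict) -> str:
    """Reduce a hypothetical status payload to a one-line health summary."""
    projects = payload.get("projects", [])
    down = [p["name"] for p in projects if not p.get("healthy", False)]
    if not down:
        return f"all {len(projects)} projects healthy"
    return "unhealthy: " + ", ".join(down)

# Example payload a monitoring script might receive (structure assumed).
sample = {"projects": [{"name": "alpha", "healthy": True},
                       {"name": "beta", "healthy": False}]}
print(summarize_status(sample))  # unhealthy: beta
```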