MCPSERV.CLUB
irskep

Persistproc

MCP Server

Shared process layer for multi‑agent workflows

Stale (55) · 6 stars · 1 view · Updated Sep 24, 2025

About

Persistproc is an MCP server and CLI tool that lets agents and humans manage, monitor, and interact with long‑running processes like web servers in real time. It reduces copy‑paste errors, centralizes logs, and enables multiple AI agents to read or restart processes without terminal access.

Capabilities

  • Resources – access data sources
  • Tools – execute functions
  • Prompts – pre‑built templates
  • Sampling – AI model interactions

persistproc in action

Overview

Persistproc is an MCP (Model Context Protocol) server that acts as a lightweight, runtime‑only process manager for developers working with AI assistants. Its core purpose is to expose the output, status, and control of long‑running processes—such as web servers, build tools, or database instances—to any agent that speaks MCP. By doing so, it eliminates the need for manual copy‑and‑paste of terminal logs and allows agents to observe, restart, or terminate services directly from within the AI conversation. This streamlines iterative development cycles and keeps developers in a single workflow, whether they use Claude Code, Cursor, Gemini CLI, or any other MCP‑compatible tool.

The server is intentionally simple: it has no configuration files and does not try to replace a full process supervisor. Instead, processes are launched on demand via the persistproc command‑line wrapper and then managed entirely in memory. The server keeps a persistent log for each process, streams new output to connected clients, and exposes controls for restarting and stopping processes. Agents can invoke these controls over MCP, enabling them to react automatically to errors, restart services after code changes, or aggregate logs from multiple services for diagnostics.
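As a rough sketch of that interaction model, the example below uses the MCP Python SDK to connect to a locally running persistproc server and enumerate its managed processes. The endpoint URL, the port, and the list_processes tool name are assumptions made for illustration, not persistproc's documented interface; check the project's own docs for the actual tool names and default address.

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

# Hypothetical endpoint; persistproc's actual default host/port may differ.
PERSISTPROC_URL = "http://localhost:8947/sse"


async def main() -> None:
    # Open an SSE transport to the MCP server, then an MCP session over it.
    async with sse_client(PERSISTPROC_URL) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()

            # Discover what the server exposes (process control, log access, etc.).
            tools = await session.list_tools()
            print("Available tools:", [tool.name for tool in tools.tools])

            # "list_processes" is an assumed tool name used here for illustration.
            result = await session.call_tool("list_processes", arguments={})
            for item in result.content:
                print(item)


if __name__ == "__main__":
    asyncio.run(main())
```

Any MCP‑capable agent would perform the equivalent calls through its own client, which is what lets persistproc stay tool‑agnostic.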

Key features include:

  • Unified log aggregation – a single endpoint that streams the combined output of all managed processes, making it trivial for agents to scan for errors across a multi‑service stack.
  • Runtime control – agents can programmatically restart or stop processes without touching the terminal, keeping the developer’s workflow uninterrupted (see the sketch after this list).
  • Tool‑agnostic interface – any MCP client can interact with persistproc, so teams using different AI assistants or custom agents all gain the same visibility.
  • Zero configuration – processes are started and tracked at runtime, so there is no need to maintain separate supervisor configs or environment files.
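To make the runtime‑control feature concrete, here is a minimal sketch of a watch‑and‑restart loop that an agent‑side script could run over MCP. The tool names (get_output, restart_process) and their pid argument are hypothetical placeholders rather than persistproc's documented API; the session setup mirrors the previous example.

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

PERSISTPROC_URL = "http://localhost:8947/sse"  # hypothetical endpoint


async def watch_and_restart(pid: int, poll_seconds: float = 5.0) -> None:
    async with sse_client(PERSISTPROC_URL) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            while True:
                # Assumed tool: fetch recent combined output for one process.
                output = await session.call_tool("get_output", arguments={"pid": pid})
                text = "".join(
                    item.text for item in output.content if hasattr(item, "text")
                )
                # Naive error check; a real agent would reason over the log instead.
                if "Traceback" in text or "EADDRINUSE" in text:
                    # Assumed tool: restart the managed process in place.
                    await session.call_tool("restart_process", arguments={"pid": pid})
                await asyncio.sleep(poll_seconds)


if __name__ == "__main__":
    asyncio.run(watch_and_restart(pid=12345))
```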

Real‑world scenarios where persistproc shines include:

  • Web application development – a developer runs their dev server under persistproc, and an agent can instantly read lint or type‑check errors, suggest fixes, and restart the server after a code patch.
  • Complex local stacks – when multiple services (API, frontend, build tools, database) must run simultaneously, an agent can read all logs in one place and pinpoint the root cause of a failure.
  • Continuous integration pipelines – CI jobs can start services through persistproc, capture logs in a structured format, and let AI agents generate debug reports or automated remediation steps.

By integrating persistproc into an AI‑driven development workflow, teams gain a single source of truth for process output and control, reduce context switching, and enable agents to act more autonomously. This leads to faster bug resolution, smoother multi‑agent collaboration, and a more cohesive developer experience across diverse tooling ecosystems.