About
A bearer‑authenticated FastAPI server that parses the Tactical RMM API schema into SQLite, provides local endpoint search, and forwards live requests securely to production. It also integrates LLM tools for RAG‑style query assistance.
Overview
The Mcp Trmm server is a lightweight, bearer‑authenticated wrapper that bridges local tooling and the Tactical RMM API. By ingesting the full RMM API specification into a SQLite database, it gives developers instant, searchable access to every endpoint while still allowing secure, real‑time calls to the production service. This dual capability eliminates the need to manually sift through lengthy API docs or write custom wrappers for each endpoint, enabling rapid prototyping and reliable integration with AI assistants.
At its core, the server parses the official RMM specification, converts it to JSON, and populates a compact SQLite schema. This local index lets an LLM or CLI assistant perform RAG‑style path discovery: a user can query for “user list” or “agent status,” and the server returns matching endpoint paths, HTTP methods, descriptions, and even sample request/response schemas. Once the path is identified, the assistant can invoke the corresponding tool to forward a live request through the authenticated FastAPI gateway, ensuring that all traffic is signed with the appropriate bearer token.
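The local index described above can be sketched as a small SQLite table plus a substring search. The table layout, column names, and sample rows here are illustrative assumptions, not the server's actual schema:

```python
import sqlite3

# Hypothetical schema: the real Mcp Trmm index may use different tables/columns.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE endpoints (
        path TEXT, method TEXT, summary TEXT, description TEXT
    )"""
)
conn.executemany(
    "INSERT INTO endpoints VALUES (?, ?, ?, ?)",
    [
        ("/agents/", "GET", "List agents", "Return all registered agents"),
        ("/agents/{agent_id}/", "GET", "Agent detail", "Return one agent's status"),
        ("/core/users/", "GET", "User list", "Return all users"),
    ],
)

def search_endpoints(term: str) -> list[tuple]:
    """Case-insensitive substring match over path, summary, and description."""
    like = f"%{term}%"
    return conn.execute(
        "SELECT path, method, summary FROM endpoints "
        "WHERE path LIKE ? OR summary LIKE ? OR description LIKE ?",
        (like, like, like),
    ).fetchall()

search_endpoints("user list")  # → [('/core/users/', 'GET', 'User list')]
```

Because the lookup runs against a local database, discovery queries return instantly and cost no API traffic; only the final forwarded call touches the production server.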
The server’s integration points are designed for seamless inclusion in existing AI workflows. Its decorators expose two primary tools: one for read‑only calls and one for state‑changing operations. These tools can be called directly from a language model prompt, or accessed via the OpenWebUI‑compatible Swagger UI for manual testing. Because the server runs on FastAPI, it inherits automatic OpenAPI documentation and can be embedded in any microservice architecture without additional overhead.
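A minimal sketch of what the read‑only forwarding tool might look like, using only the standard library. The function names, environment variables, and base URL are hypothetical, and the MCP decorator wiring is omitted; only the bearer‑token forwarding pattern is taken from the description above:

```python
import json
import os
import urllib.request

# Hypothetical configuration: real deployments would read these from their own
# settings, and the env var names here are assumptions for illustration.
TRMM_BASE_URL = os.environ.get("TRMM_BASE_URL", "https://rmm.example.com/api")
TRMM_API_KEY = os.environ.get("TRMM_API_KEY", "changeme")

def build_request(path: str) -> urllib.request.Request:
    """Attach the bearer token so every forwarded call is authenticated."""
    return urllib.request.Request(
        f"{TRMM_BASE_URL}{path}",
        headers={"Authorization": f"Bearer {TRMM_API_KEY}"},
        method="GET",
    )

def trmm_get(path: str) -> dict:
    """Read-only tool body: forward a GET to the live RMM API, return JSON."""
    with urllib.request.urlopen(build_request(path)) as resp:
        return json.loads(resp.read())
```

Centralizing header construction in one helper is what guarantees the "all traffic is signed" property: no tool can reach production except through the authenticated gateway.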
Key features that set Mcp Trmm apart include:
- Secure bearer authentication for all outgoing requests, protecting sensitive RMM data.
- Fast local schema search using SQLite, giving instant endpoint lookup without network latency.
- RAG‑style discovery that lets LLMs navigate the API space naturally, reducing the learning curve for new developers.
- Dual query mode: local schema queries and live production calls are handled by the same toolset, ensuring consistency between documentation and execution.
- Extensible CLI utilities for schema conversion, database creation, and debugging, allowing developers to maintain the index as the API evolves.
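The last bullet, schema conversion and database creation, might look roughly like the sketch below. The flag names, table layout, and entry point are assumptions for illustration, not the actual Mcp Trmm utilities:

```python
import argparse
import json
import sqlite3

def build_index(spec: dict, db_path: str) -> int:
    """Flatten an OpenAPI-style spec's paths into a searchable SQLite table."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS endpoints (path TEXT, method TEXT, summary TEXT)"
    )
    rows = [
        (path, method.upper(), op.get("summary", ""))
        for path, ops in spec.get("paths", {}).items()
        for method, op in ops.items()
    ]
    conn.executemany("INSERT INTO endpoints VALUES (?, ?, ?)", rows)
    conn.commit()
    conn.close()
    return len(rows)

def main(argv=None):
    # Wire this into a console entry point to rebuild the index as the API evolves.
    parser = argparse.ArgumentParser(description="Rebuild the endpoint index")
    parser.add_argument("spec", help="path to the OpenAPI JSON file")
    parser.add_argument("--db", default="endpoints.db")
    args = parser.parse_args(argv)
    with open(args.spec) as f:
        count = build_index(json.load(f), args.db)
    print(f"indexed {count} endpoints into {args.db}")
```

Re-running such a utility after each RMM upgrade keeps the local index in step with the production API, which is what makes the "consistency between documentation and execution" claim hold over time.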
In real‑world scenarios, teams can use Mcp Trmm to automate routine RMM tasks—such as pulling agent health reports, triggering patch deployments, or creating ticket workflows—directly from an AI assistant. For example, a support engineer could ask the assistant to “list all agents that failed the last health check” and receive a JSON payload returned from the live RMM API, all without writing any code. This accelerates response times, reduces manual errors, and opens the door to more sophisticated AI‑driven operational workflows.
Related Servers
- n8n: Self‑hosted, code‑first workflow automation platform
- FastMCP: TypeScript framework for rapid MCP server development
- Activepieces: Open-source AI automation platform for building and deploying extensible workflows
- MaxKB: Enterprise‑grade AI agent platform with RAG and workflow orchestration
- Filestash: Web‑based file manager for any storage backend
- MCP for Beginners: Learn Model Context Protocol with hands‑on examples