Knock MCP Server

Local Model Context Protocol server for integrating Knock with LLM agents

Updated Sep 13, 2025

About

The Knock MCP Server provides a local Model Context Protocol (MCP) endpoint that exposes Knock's tools and workflows to LLM agents. It enables developers to build agent systems that trigger cross‑channel notifications and manage Knock accounts directly from language models.

Capabilities

  • Resources: access data sources
  • Tools: execute functions
  • Prompts: pre‑built templates
  • Sampling: AI model interactions

Overview

The Knock Agent Toolkit is a Model Context Protocol (MCP) server that bridges AI assistants with the Knock notification platform. It solves a common pain point for developers building intelligent agents: how to trigger real‑world notifications—email, SMS, push, or webhooks—directly from an LLM without writing custom integrations. By exposing Knock’s rich Management and Notification APIs as MCP tools, the server lets agents call Knock actions through standard function‑calling syntax. This eliminates boilerplate, keeps business logic in one place, and guarantees that notification flows remain auditable and secure via Knock’s service tokens.

For developers using AI assistants such as Claude, Cursor, or Vercel's AI SDK, the server offers a plug‑and‑play interface. Once started locally with a service token, it listens on a configurable port and presents a catalog of Knock tools. Agents can then issue any supported operation (creating users, triggering workflows, sending messages) to the MCP client, which forwards the call to Knock. The toolkit also supports scoping through environment, user, and tenant parameters, ensuring that every notification is contextualized to the correct account or tenant without manual intervention.
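
As a concrete illustration, the snippet below connects to a locally running Knock MCP server from a low‑level client and lists its tool catalog. It uses the official @modelcontextprotocol/sdk TypeScript client over stdio; the npx entry point and environment variable name for the Knock toolkit are assumptions here, so check the toolkit's README for the exact launch command.

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    // Launch the Knock MCP server as a child process over stdio.
    // The package entry point and env var name are assumptions.
    const transport = new StdioClientTransport({
      command: "npx",
      args: ["-y", "@knocklabs/agent-toolkit", "local-mcp"],
      env: { KNOCK_SERVICE_TOKEN: process.env.KNOCK_SERVICE_TOKEN ?? "" },
    });

    const client = new Client({ name: "example-agent", version: "0.1.0" });
    await client.connect(transport);

    // Ask the server for its catalog of Knock tools.
    const { tools } = await client.listTools();
    console.log(tools.map((t) => t.name));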

Key capabilities include:

  • Comprehensive tool coverage: The server exposes a large subset of Knock’s API, from user and channel management to workflow orchestration.
  • Fine‑grained tool filtering: Developers can restrict the exposed tools to a subset of categories or to specific workflow keys, reducing the attack surface and simplifying agent prompts.
  • Cross‑framework support: Dedicated helpers exist for the Vercel AI SDK, LangChain, the OpenAI SDK, and low‑level MCP integration, so teams can choose their preferred AI stack (see the sketch after this list).
  • Contextual defaults: By configuring environment, user, and tenant defaults at server start‑up, agents can omit repetitive arguments, keeping prompts concise.
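
To make the cross‑framework path tangible, here is a minimal sketch of wiring the toolkit into the Vercel AI SDK (v4‑style generateText). The helper names (createKnockToolkit, getTools) and the permissions shape are assumptions modeled on the toolkit's documented pattern, not verbatim API; consult the @knocklabs/agent-toolkit docs before relying on them.

    import { generateText } from "ai";
    import { openai } from "@ai-sdk/openai";
    // Hypothetical import path and helper; check the toolkit docs.
    import { createKnockToolkit } from "@knocklabs/agent-toolkit/ai-sdk";

    const toolkit = await createKnockToolkit({
      serviceToken: process.env.KNOCK_SERVICE_TOKEN!,
      // Fine-grained filtering: only expose workflow-trigger tools.
      permissions: { workflows: { trigger: true } },
    });

    const { text } = await generateText({
      model: openai("gpt-4o"),
      tools: toolkit.getTools(), // assumed accessor for the AI SDK tool map
      maxSteps: 5,               // let the model call a tool, then summarize
      prompt: "Trigger the onboarding workflow for user_123 in tenant acme.",
    });

    console.log(text);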

Real‑world scenarios that benefit from this MCP server include:

  • Automated onboarding: An agent can create a new user in Knock, assign them to a tenant, and trigger an onboarding workflow, all from a single LLM response (see the sketch after this list).
  • Incident management: When an LLM detects a critical event, it can immediately notify the relevant team through any of Knock's configured channels.
  • Customer support: A conversational agent can schedule follow‑up messages or trigger escalation workflows based on user sentiment.
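
Under the hood, the onboarding scenario above resolves to an authenticated workflow trigger against Knock's API. The sketch below shows the equivalent call made directly with the @knocklabs/node SDK; the workflow key, recipient, tenant, and data values are illustrative, and the constructor shape may differ across SDK versions.

    import { Knock } from "@knocklabs/node";

    const knock = new Knock(process.env.KNOCK_API_KEY!);

    // Equivalent of the agent's "trigger workflow" tool call:
    // fan out the onboarding workflow to the new user, scoped to a tenant.
    await knock.workflows.trigger("onboarding", {
      recipients: ["user_123"], // the user the agent just created
      tenant: "acme",           // tenant scoping, as with the MCP defaults
      data: { plan: "pro" },    // variables available to the workflow templates
    });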

Integration into AI workflows is straightforward: a developer configures the MCP client in their chosen framework, starts the Knock server with the desired tool set, and writes prompts that reference the available tools. The agent’s LLM then calls these tools as if they were native functions, and the Knock server translates them into authenticated API requests. This tight coupling enables developers to build sophisticated agent systems that not only reason and converse but also act—sending alerts, updating records, or orchestrating complex notification pipelines—all while leveraging Knock’s proven reliability and auditability.
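
Continuing the low‑level client sketch from earlier, the agent loop forwards a model's function call to the server as an MCP tools/call request. The tool name and argument shape below are hypothetical; in practice you would use whatever names listTools() reported.

    // "client" is the connected MCP client from the earlier sketch.
    const result = await client.callTool({
      name: "trigger_workflow", // hypothetical tool name
      arguments: {
        workflow_key: "escalation",
        recipients: ["user_123"],
      },
    });

    // The server relays Knock's authenticated API response as MCP content.
    console.log(result.content);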