About
A Model Context Protocol server that exposes Databricks REST APIs for managing permissions, service principals, and Git credentials, enabling LLMs to automate Databricks security tasks.
Capabilities
The Databricks Permissions MCP Server bridges the gap between large language models (LLMs) and Databricks’ fine-grained access control mechanisms. By exposing a full set of permission-management tools over the Model Context Protocol, it lets AI assistants such as Claude perform real-world administrative tasks, including listing, assigning, and revoking access to notebooks, clusters, jobs, Unity Catalog objects, and even service principals, without leaving the conversational interface. This is particularly valuable for data-engineering teams that rely on automated workflows to enforce least-privilege principles across dynamic, multi-tenant environments.
At its core, the server implements the MCP specification to provide a secure, authenticated channel through which LLMs can call Databricks REST APIs. It handles token exchange, request formatting, and response parsing, so the assistant can issue a high-level command like “set cluster permissions for user X to read/write” and receive confirmation in natural language. The asyncio-based architecture lets multiple permission queries be processed concurrently, keeping latency low even when managing dozens of resources.
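As a rough illustration of this flow, the sketch below shows how such a tool could be built with the FastMCP helper from the official MCP Python SDK and httpx for async HTTP, calling the Databricks Permissions REST API (GET /api/2.0/permissions/clusters/{cluster_id}). The tool name, environment variables, and authentication scheme are assumptions for illustration, not the server's actual implementation.

```python
# Illustrative sketch only: the real server's tool names and internals may differ.
import os

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("databricks-permissions")

# Assumed configuration: workspace URL and a personal access token from the environment.
DATABRICKS_HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
DATABRICKS_TOKEN = os.environ["DATABRICKS_TOKEN"]


@mcp.tool()
async def get_cluster_permissions(cluster_id: str) -> dict:
    """Return the access control list for a cluster via the Databricks Permissions API."""
    url = f"{DATABRICKS_HOST}/api/2.0/permissions/clusters/{cluster_id}"
    headers = {"Authorization": f"Bearer {DATABRICKS_TOKEN}"}
    async with httpx.AsyncClient() as client:
        resp = await client.get(url, headers=headers)
        resp.raise_for_status()
        return resp.json()


if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for MCP clients
```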
Key features include a comprehensive toolkit for permissions management across the major Databricks resource types: clusters, jobs, SQL warehouses, notebooks, and Unity Catalog entities. Developers can retrieve current permission sets, modify them atomically, or query the permission levels available for a given object type. The server also exposes service-principal operations (listing, creating, updating, and deleting principals), enabling automated identity provisioning for CI/CD pipelines. Git credential management rounds out the toolkit, allowing the assistant to maintain repository access tokens directly within Databricks and streamlining code-delivery workflows.
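Continuing the sketch above under the same assumptions (same mcp instance, host, and token), an update tool might map onto the Permissions API's PUT endpoint, which replaces the full access control list, and a companion tool onto the permission-levels lookup. The tool names and payload shape below are illustrative.

```python
# Continuation of the sketch above; tool names are hypothetical.
@mcp.tool()
async def set_cluster_permissions(cluster_id: str, user_name: str, permission_level: str) -> dict:
    """Replace the cluster ACL so that `user_name` holds `permission_level` (e.g. CAN_RESTART).

    Note: PUT replaces the entire access control list, so callers should include
    every principal that must retain access.
    """
    url = f"{DATABRICKS_HOST}/api/2.0/permissions/clusters/{cluster_id}"
    payload = {
        "access_control_list": [
            {"user_name": user_name, "permission_level": permission_level}
        ]
    }
    headers = {"Authorization": f"Bearer {DATABRICKS_TOKEN}"}
    async with httpx.AsyncClient() as client:
        resp = await client.put(url, headers=headers, json=payload)
        resp.raise_for_status()
        return resp.json()


@mcp.tool()
async def get_cluster_permission_levels(cluster_id: str) -> dict:
    """List the permission levels that can be granted on a cluster."""
    url = f"{DATABRICKS_HOST}/api/2.0/permissions/clusters/{cluster_id}/permissionLevels"
    headers = {"Authorization": f"Bearer {DATABRICKS_TOKEN}"}
    async with httpx.AsyncClient() as client:
        resp = await client.get(url, headers=headers)
        resp.raise_for_status()
        return resp.json()
```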
Real-world scenarios range from onboarding new data scientists, where an AI assistant automatically grants read/write access to the relevant notebooks and clusters, to audit and compliance checks, where the model enumerates every user holding admin rights across the workspace. In a continuous-integration pipeline, a Claude-powered bot could trigger permission updates after a merge or rollback, ensuring that only the intended users retain access. Because the server integrates with existing MCP-compatible clients such as Claude Desktop, teams can embed these operations into everyday conversations, reducing context switching and accelerating governance tasks.
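For programmatic integrations outside Claude Desktop, an MCP client could call these tools over stdio roughly as follows; the launch command, script name, tool name, and cluster ID are hypothetical placeholders.

```python
# Illustrative client call; command, script, tool name, and cluster ID are placeholders.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(command="python", args=["databricks_permissions_server.py"])


async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "get_cluster_permissions", {"cluster_id": "1234-567890-abcde123"}
            )
            print(result.content)


asyncio.run(main())
```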
Related Servers
n8n
Self‑hosted, code‑first workflow automation platform
FastMCP
TypeScript framework for rapid MCP server development
Activepieces
Open-source AI automation platform for building and deploying extensible workflows
MaxKB
Enterprise‑grade AI agent platform with RAG and workflow orchestration
Filestash
Web‑based file manager for any storage backend
MCP for Beginners
Learn Model Context Protocol with hands‑on examples
Explore More Servers
Dataset Viewer MCP Server
Browse and analyze Hugging Face datasets with ease
Cheqd MCP Server
Secure AI-driven identity on the Cheqd network
Membase MCP Server
Decentralized AI memory storage for agents
ZIP MCP Server
Fast, configurable ZIP compression and decompression via MCP
Ksrk MCP Server Client
AI-powered web search and scraping via an MCP client
Mokei MCP Server
TypeScript toolkit for building and monitoring Model Context Protocol services