MCPSERV.CLUB
thrashr888

Terraform Registry MCP Server

MCP Server

AI‑powered access to Terraform Registry data

Updated May 30, 2025

About

Provides AI agents with tools to query provider info, resource usage, module metadata, and Terraform Cloud resources via the Model Context Protocol.

Capabilities

Resources: Access data sources
Tools: Execute functions
Prompts: Pre-built templates
Sampling: AI model interactions

terraform-registry MCP settings for Cursor
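
A minimal sketch of such a Cursor configuration, placed in `~/.cursor/mcp.json`. This assumes the server is published as the `terraform-mcp-server` npm package; the project README is the authority on the exact package name, and the Terraform Cloud tools additionally need an API token supplied through the server's environment:

```json
{
  "mcpServers": {
    "terraform-registry": {
      "command": "npx",
      "args": ["-y", "terraform-mcp-server"]
    }
  }
}
```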

The Terraform Registry MCP Server is a lightweight bridge that lets AI assistants query the Terraform ecosystem directly from their conversational context. By exposing the Terraform Registry API through MCP, developers can ask an AI for provider details, resource examples, or module recommendations without leaving their IDE or chat interface. This eliminates the need to manually browse documentation sites, run commands locally, or write custom integration scripts.

At its core, the server offers a suite of registry tools that mirror common Terraform tasks: one fetches metadata about any provider, while another supplies example code snippets and related resources. Module-focused tools let the assistant surface relevant modules and their documentation, making it easier to discover reusable components. The server also supports data-source introspection and argument detail retrieval, giving developers granular insight into provider capabilities. For advanced use, further tools provide deeper context around provider functions, guides, and policy libraries.
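
Under the hood, each of these tools is reached through a standard MCP `tools/call` request over JSON-RPC. The tool and argument names below are illustrative placeholders, not the server's actual identifiers:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "providerDetails",
    "arguments": { "namespace": "hashicorp", "provider": "aws" }
  }
}
```

The server replies with a `result` object whose content blocks carry the tool's output, which the assistant then folds into its answer.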

Beyond the public registry, the server integrates with Terraform Cloud through a separate set of tools that require an API token. These cover organization and workspace management, run orchestration, and resource inspection. This tight coupling allows an AI not only to suggest code but also to trigger infrastructure changes, lock or unlock workspaces, and audit run history, all within a single conversational flow.

Real‑world scenarios benefit from this integration: a developer can ask the AI to "show me how to configure an AWS S3 bucket with encryption" and receive a ready‑to‑paste Terraform snippet; a DevOps engineer can request "list all private modules in my organization" and get an up‑to‑date inventory; or a CI/CD pipeline can be triggered by the assistant after reviewing a module’s metadata. By embedding Terraform knowledge directly into AI workflows, teams reduce context switching, avoid documentation drift, and accelerate infrastructure provisioning.
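
The answer to the first request might resemble the following snippet. This is an illustrative sketch using the AWS provider's split-resource encryption syntax, not output captured from the server; bucket and resource names are placeholders:

```hcl
# Hypothetical example: an S3 bucket with server-side encryption enabled
resource "aws_s3_bucket" "logs" {
  bucket = "my-encrypted-logs-bucket" # placeholder name
}

resource "aws_s3_bucket_server_side_encryption_configuration" "logs" {
  bucket = aws_s3_bucket.logs.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}
```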

Unique to this server is its dual‑mode operation—public registry access combined with authenticated Terraform Cloud operations—within a single MCP endpoint. This unified interface means that developers can write one set of prompts for both discovery and execution, while the underlying server handles authentication, rate‑limiting, and data formatting. The result is a seamless, AI‑driven Terraform experience that streamlines both code generation and infrastructure lifecycle management.