Quick Navigation
Jump to the section you need
What is MCP?
Overview and core concepts
Architecture
Client-server design
Core Primitives
Tools, Resources, Prompts
Use Cases
Real-world examples
Transport
Communication mechanisms
Security
Auth & best practices
Getting Started
Find and use servers
Submit Server
Contribute to the directory
What is MCP?
Understanding the Model Context Protocol
The Model Context Protocol (MCP) is an open standard introduced by Anthropic in November 2024 that standardizes how AI systems like Large Language Models (LLMs) integrate with external data sources and tools. Think of it as a "universal remote" for AI applications.
Why MCP Matters
Before MCP, connecting AI to external data required building custom integrations for each tool and data source. MCP replaces these fragmented integrations with a single, universal protocol.
Key Benefits
Context Preservation
Relevant context travels with the model instead of being rebuilt for each integration
Tool Interoperability
Any MCP-compatible client can use any MCP server
Secure Access
Data sources are exposed through controlled, auditable interfaces
Developer Efficiency
One protocol replaces many custom, per-tool integrations
Industry Adoption (2025)
MCP has seen rapid adoption across the AI industry:
March 2025:
OpenAI officially adopted MCP across ChatGPT desktop, Agents SDK, and Responses API
Current:
Microsoft Copilot Studio, IBM BeeAI, Windsurf Editor, and Postman all support MCP
Ecosystem:
Over 700 MCP servers available, including integrations for GitHub, Slack, Google Drive, PostgreSQL, and more
Frameworks:
Integrates with LangChain, LangGraph, LlamaIndex, crewAI, and Microsoft Semantic Kernel
Architecture
How MCP works under the hood
MCP follows a client-server architecture inspired by the Language Server Protocol (LSP), enabling standardized communication between AI systems and external resources.
MCP Server
Exposes data sources and tools via standardized API
MCP Client
Connects to servers and relays data to AI models
AI Model
Uses context to generate better responses
Protocol Layers
MCP consists of two key layers:
Data Layer
JSON-RPC based protocol defining client-server communication, lifecycle management, and core primitives (tools, resources, prompts)
Transport Layer
Communication mechanisms enabling data exchange between clients and servers (stdio, Streamable HTTP/SSE)
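The data layer's JSON-RPC framing can be illustrated with a minimal sketch. The message shape follows JSON-RPC 2.0, and `tools/list` is a method name from the MCP specification; the payload values here are illustrative:

```python
import json

# A JSON-RPC 2.0 request as used by MCP's data layer. "tools/list"
# asks a server to enumerate the tools it exposes.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}

# The matching response carries the same id and a "result" payload.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"tools": [{"name": "query_database"}]},
}

wire = json.dumps(request)  # what actually travels over the transport
```

Whatever the transport, it is this JSON-RPC exchange that both sides speak; the transport layer only decides how the bytes move.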
Connection Model
Clients maintain a 1:1 connection with MCP servers. Each client manages one specific server, but AI applications can use multiple clients simultaneously to access different data sources and tools.
Core Primitives
The building blocks of MCP
MCP primitives define what clients and servers can offer each other, specifying the types of contextual information that can be shared and the range of actions that can be performed.
Tools (Model-Controlled)
Executable functions that perform actions or computations
Tools enable AI models to interact with external systems, execute operations, and perform computations. The model decides when and how to invoke these tools based on context.
Examples:
- Query databases and execute SQL commands
- Call external APIs and web services
- Perform calculations and data processing
- Execute system commands or scripts
- Create, update, or delete resources
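A tool declaration pairs a name and description with a JSON Schema describing its inputs. A minimal sketch (the `inputSchema` field name follows the MCP specification; the tool itself is hypothetical):

```python
# Hypothetical tool definition. The "inputSchema" is standard JSON Schema,
# telling the model what arguments the tool accepts and which are required.
run_query_tool = {
    "name": "run_query",
    "description": "Execute a read-only SQL query and return the rows",
    "inputSchema": {
        "type": "object",
        "properties": {
            "sql": {"type": "string", "description": "SQL to execute"},
        },
        "required": ["sql"],
    },
}
```

The model reads the description and schema to decide when to call the tool and how to fill in its arguments.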
Resources (Application-Controlled)
Data sources that provide information to LLMs
Resources are similar to GET endpoints in REST APIs. They expose local or remote data to feed information into the LLM context without performing computation or causing side effects.
Examples:
- File contents and documentation
- Database records and schemas
- API responses and configuration data
- Code repositories and version history
- Knowledge bases and wikis
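Resources are addressed by URI and read without side effects, mirroring the GET-endpoint analogy above. A sketch of a resource descriptor and a read result (field names follow the MCP specification; the file itself is hypothetical):

```python
# Hypothetical resource descriptor: a URI, a human-readable name,
# and a MIME type so the client knows how to interpret the contents.
readme_resource = {
    "uri": "file:///project/README.md",
    "name": "Project README",
    "mimeType": "text/markdown",
}

# Reading the resource returns its contents keyed by the same URI.
read_result = {
    "contents": [
        {
            "uri": readme_resource["uri"],
            "mimeType": "text/markdown",
            "text": "# Project\nSetup instructions...",
        }
    ]
}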
Prompts (User-Controlled)
Reusable, structured templates for LLM interactions
Prompts are pre-built templates that standardize interactions with language models. They define reusable message sequences and workflows that guide LLM behavior in consistent, predictable ways.
Examples:
- Code review templates with specific criteria
- Bug report generation workflows
- Documentation writing guides
- Test case creation patterns
- Data analysis frameworks
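A prompt declares a name and the arguments it accepts; expanding it yields concrete chat messages. A sketch of a code-review prompt (the field names follow the MCP specification; the template and its rendering are hypothetical):

```python
# Hypothetical prompt template: a named, parameterized message workflow.
code_review_prompt = {
    "name": "code_review",
    "description": "Review code against fixed criteria",
    "arguments": [
        {"name": "language", "description": "Language of the snippet", "required": True},
    ],
}

def render(language: str, code: str) -> list[dict]:
    # Expanding the template produces the concrete messages sent to the model.
    return [{
        "role": "user",
        "content": f"Review this {language} code for correctness and style:\n{code}",
    }]

messages = render("python", "print('hi')")
```

Because the template is fixed, every review request reaches the model in the same, predictable shape.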
Additional Concepts
Roots
Entry points, such as file directories or database connections, that define the scope of resources a server can access
Sampling
Enables MCP servers to request LLM completions, allowing servers to implement agent-like behaviors
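Sampling inverts the usual direction: the server asks the client to run an LLM completion on its behalf. A sketch of such a request (the `sampling/createMessage` method name follows the MCP specification; the payload here is illustrative):

```python
# Server -> client JSON-RPC request asking the host application to
# produce an LLM completion for the server.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {
                "role": "user",
                "content": {"type": "text", "text": "Summarize the last 5 commits"},
            }
        ],
        "maxTokens": 200,
    },
}
```

The client stays in control: it can review, modify, or refuse the request before any completion is generated.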
Real-World Use Cases
How MCP is being used in production
AI-Assisted Coding
Code editors like Cursor use MCP to transform into multi-functional tools. Install Slack MCP to send messages, Resend MCP to send emails, or Replicate MCP to generate images—all from your code editor.
Combine multiple servers to unlock powerful workflows, such as generating UI code while an image-generation server produces the accompanying assets.
3D Modeling & Creative
The Blender MCP server enables users to describe 3D models in natural language. Text-to-3D workflows are being implemented for Unity, Unreal Engine, and other creative tools.
Democratizing complex creative software through conversational AI interfaces.
Enterprise Automation
Support chatbots can connect to multiple MCP servers: one to fetch customer info from CRM, another to create tickets in Jira, and another to search knowledge bases—all within a single conversation.
Seamlessly orchestrate complex workflows across enterprise systems.
Smart Home & IoT
MCP enables LLMs to interact with real-world devices through structured, schema-based tools. Control home appliances, optimize factory machinery, or manage industrial IoT deployments.
Translate natural language into action for both consumer and industrial applications.
Personalized Assistants
Build AI applications with persistent memory. Travel assistants that remember preferences, booking history, hotel ratings, and destination preferences across sessions.
Create truly personalized AI experiences that learn and adapt over time.
Data Analysis
Connect AI to databases, APIs, and business intelligence tools. Query data, generate reports, create visualizations, and derive insights through natural language.
Make data analysis accessible to non-technical users.
Popular MCP Servers in Use
The most widely adopted MCP servers include integrations for GitHub, Slack, Google Drive, and PostgreSQL.
Transport Mechanisms
How clients and servers communicate
MCP supports multiple transport mechanisms for client-server communication, each optimized for different use cases.
stdio (Standard Input/Output)
Best for local, same-machine communication
How it works: The client launches the server as a subprocess and communicates over stdin/stdout
Benefits: Low overhead, simple setup, no network configuration needed
Security: No encryption needed since communication stays within the same machine
Use case: Development tools, local file access, single-machine integrations
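Over stdio, each JSON-RPC message is sent as a single line of UTF-8 JSON terminated by a newline. A minimal sketch of that framing (the helper function is hypothetical; `ping` is a method from the MCP specification):

```python
import json

def frame_message(msg: dict) -> bytes:
    # stdio transport framing: one JSON-RPC message per line, so the
    # serialized payload must not contain embedded newlines.
    return (json.dumps(msg, separators=(",", ":")) + "\n").encode("utf-8")

request = {"jsonrpc": "2.0", "id": 1, "method": "ping"}
wire = frame_message(request)
```

In practice the client launches the server as a subprocess, writes frames like this to its stdin, and reads responses line by line from its stdout.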
Streamable HTTP / SSE
Modern HTTP-based transport for remote servers
How it works: Bidirectional JSON-RPC via HTTP endpoints with optional streaming using Server-Sent Events
Benefits: Remote access, firewall-friendly, supports cloud deployments
Security: HTTPS required in production, supports OAuth 2.1 authentication
Use case: Cloud services, remote APIs, multi-user applications
Note:
The original HTTP+SSE transport, introduced in protocol version 2024-11-05, was deprecated in version 2025-03-26 and replaced by Streamable HTTP, which incorporates SSE as an optional streaming mechanism.
Choosing a Transport
Use stdio when:
- Building local tools and CLIs
- Single-machine integrations
- Development environments
- Simple subprocess communication
Use Streamable HTTP when:
- Deploying cloud services
- Building multi-user applications
- Accessing remote resources
- Production deployments
Security & Authentication
Best practices for secure MCP deployments
Critical Security Updates (2025)
- March 2025: MCP specification added OAuth 2.1 standardization with mandatory PKCE for all clients
- June 2025: Critical RCE vulnerability fixed in mcp-remote v0.1.16 — always use v0.1.16 or later
- July 2025: Security research found ~2,000 exposed MCP servers lacking authentication
OAuth 2.1 Authentication
As of March 2025, MCP standardizes authorization using OAuth 2.1, enabling secure delegation of authorization between clients and servers.
Key Security Features
Mandatory PKCE
Proof Key for Code Exchange required for all OAuth flows
HTTPS Required
All production deployments must use HTTPS encryption
OAuth Resource Servers
MCP servers classified as OAuth Resource Servers (June 2025 update)
Secure Session IDs
Cryptographically secure session IDs properly validated
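Generating a session ID of this kind is a one-liner with Python's standard library; a minimal sketch:

```python
import secrets

def new_session_id() -> str:
    # 32 random bytes from the OS CSPRNG, URL-safe base64 encoded.
    # Unlike counters or timestamps, these IDs are unguessable.
    return secrets.token_urlsafe(32)

sid = new_session_id()
```

The server should also reject any request presenting a session ID it did not itself issue.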
Best Practices
Validate Origins
Always validate origin headers on incoming SSE/HTTP connections
Enable Authentication
Never deploy MCP servers without authentication in production
Use Latest Versions
Keep MCP dependencies updated, especially mcp-remote (≥v0.1.16)
DNS Rebinding Protection
Validate Origin headers and bind local servers to localhost rather than all interfaces
Least Privilege
Grant only necessary permissions to MCP servers
Regular Audits
Review exposed endpoints and authentication configurations
Transport-Specific Security
stdio
Generally secure for local use since communication stays within the same machine. No encryption needed, but ensure subprocess isolation.
Streamable HTTP/SSE
Must use HTTPS in production. Enable authentication for all endpoints. Validate origins to prevent DNS rebinding attacks.
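Origin validation for the HTTP transport can be sketched as a simple allowlist check (the allowed origins here are hypothetical):

```python
ALLOWED_ORIGINS = {"https://app.example.com", "http://localhost:3000"}

def origin_allowed(headers: dict) -> bool:
    # Reject any request whose Origin header is missing or not on the
    # allowlist; this blocks malicious pages from driving a local MCP
    # server via DNS rebinding.
    origin = headers.get("Origin")
    return origin in ALLOWED_ORIGINS
```

The check must run before any request is processed, not only on the initial connection.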
Getting Started
Find and use MCP servers from our directory
Finding Servers
Browse our growing collection of MCP servers using multiple discovery methods:
Search
Use the search bar on the homepage to find servers by name, description, or tags
Browse by Category
Explore servers organized by functionality like Development Tools, Databases, APIs & Services, and more
Filter
Use the advanced search page to filter by programming language, tags, and categories
Sort
Sort by popularity (stars), recent updates, or alphabetically
Using a Server
Each server page in our directory includes comprehensive information:
Description
Detailed overview of capabilities and features
Installation
Step-by-step setup instructions
Repository
Direct link to source code on GitHub
Statistics
GitHub stars, downloads, and popularity metrics
Metadata
Programming language, tags, and categories
Related Servers
Discover similar or complementary tools
Submit a Server
Share your MCP server with the community
Built an MCP server? Share it with the community and help grow the ecosystem!
Requirements
Open Source Repository
Hosted on GitHub with public access
Clear Documentation
README with installation, usage instructions, and examples
Working Implementation
Functional MCP server following the official specification
Appropriate License
Open source license (MIT, Apache 2.0, GPL, etc.)
Submission Process
Prepare your server repository with comprehensive documentation
Fill out the submission form with accurate server details
Our team reviews your submission for quality and completeness
Once approved, your server appears in the directory
Updates to your GitHub repository are automatically synced
API & Integration
Programmatic access to our directory
Access our MCP server directory programmatically using our REST API. Perfect for building custom dashboards, integrations, or automated workflows.
Available Endpoints
GET /api/servers
List all MCP servers in the directory
GET /api/search?q=query
Search servers by name, description, or tags
GET /api/categories
List all available categories
GET /api/filters
Get available filter options
Response Format
All endpoints return JSON with a consistent structure:
{
"success": true,
"data": [...],
"count": 10,
"page": 1,
"total": 250
}
Rate Limiting
API requests are rate-limited to ensure fair usage. Current limits are generous for most use cases. Contact us if you need higher limits for production applications.
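Client code can unpack results using the envelope fields shown above; a sketch that parses a sample response (field names from the documented format; the payload values are illustrative):

```python
import json

# Illustrative payload matching the documented response envelope.
raw = '{"success": true, "data": [{"name": "github-mcp"}], "count": 1, "page": 1, "total": 250}'
body = json.loads(raw)

names = []
if body["success"]:
    # "data" holds this page's servers; "total" is the directory-wide count,
    # so further pages remain to be fetched when count < total.
    names = [server["name"] for server in body["data"]]
```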
Community & Resources
Join the MCP ecosystem
Join the thriving MCP community! Connect with developers, share knowledge, and contribute to the growing ecosystem of AI integrations.