About
A Model Context Protocol server that exposes the Wikidata knowledge graph through SPARQL queries, supporting JSON, XML, Turtle, and CSV outputs with configurable timeouts. It runs on Cloudflare Workers and offers both SSE and HTTP transports for remote deployments.
Capabilities
Overview
The Wikidata SPARQL MCP server gives AI assistants instant, on‑demand access to the vast knowledge graph maintained by Wikidata. By exposing a single tool, it allows developers to embed complex semantic queries directly into conversational flows or data‑driven workflows. This eliminates the need for custom API wrappers, enabling a declarative query style that can retrieve structured facts, run introspection, or perform boolean checks—all within a unified interface.
At its core, the server runs on Cloudflare Workers, ensuring low latency and global reach. It supports both Server‑Sent Events (SSE) for real‑time, streaming interactions and a standard HTTP endpoint for legacy or simpler clients. A configurable timeout (1–60 seconds) protects against runaway queries, while multiple output formats—JSON, XML, Turtle, and CSV—give downstream systems the flexibility to consume results in the most convenient representation.
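To make the request shape concrete, here is a minimal sketch of the kind of GET request a caller (or the server itself) would issue against Wikidata's public SPARQL endpoint. The endpoint URL and `Accept` media types are standard SPARQL-protocol conventions; the helper name and structure are illustrative, not the server's actual implementation, and the timeout would be applied when the request is executed (e.g., via `urllib.request.urlopen(..., timeout=...)`).

```python
import urllib.parse

# Wikidata's public SPARQL endpoint (standard, well-known URL).
ENDPOINT = "https://query.wikidata.org/sparql"

# Standard SPARQL result media types for each supported output format.
ACCEPT = {
    "json": "application/sparql-results+json",
    "xml": "application/sparql-results+xml",
    "csv": "text/csv",
    "turtle": "text/turtle",  # meaningful for CONSTRUCT/DESCRIBE queries
}

def build_request(query: str, fmt: str = "json") -> tuple[str, dict]:
    """Return the GET URL and headers for a SPARQL query in the chosen format."""
    url = ENDPOINT + "?" + urllib.parse.urlencode({"query": query})
    headers = {"Accept": ACCEPT[fmt]}
    return url, headers

url, headers = build_request("ASK { wd:Q42 wdt:P31 wd:Q5 }", fmt="json")
print(headers["Accept"])  # application/sparql-results+json
```

Putting the query in the URL and the serialization in the `Accept` header keeps the two concerns separate, which is why one endpoint can serve JSON, XML, Turtle, and CSV without changing the query itself.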
Key capabilities include:
- Full SPARQL support: Any valid query can be executed against Wikidata’s live graph, from simple SELECT statements to complex property traversals and ASK queries.
- Introspection: Property enumeration queries let assistants discover schema details on the fly, enabling dynamic form generation or data validation.
- Result formatting: An output‑format parameter lets callers choose the most suitable serialization (JSON, XML, Turtle, or CSV), making integration with spreadsheets, databases, or RDF stores straightforward.
- Timeout control: Developers can tune the query timeout (1–60 seconds) to balance responsiveness with query complexity, ensuring that assistant sessions remain snappy.
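Each of these capabilities maps onto ordinary SPARQL. A few illustrative queries (the entity and property identifiers shown, such as Q42, Q5, and P31, are real Wikidata IDs):

```sparql
# Data retrieval: five items that are instances of (P31) human (Q5)
SELECT ?person ?personLabel WHERE {
  ?person wdt:P31 wd:Q5 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
LIMIT 5

# Boolean check: is Douglas Adams (Q42) a human?
ASK { wd:Q42 wdt:P31 wd:Q5 }

# Introspection: enumerate the properties used on a given entity
SELECT DISTINCT ?prop WHERE { wd:Q42 ?prop ?value } LIMIT 50
```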
Typical use cases span research assistants pulling up academic data (e.g., Nobel laureates or publication metrics), product recommendation engines querying entity relationships, or knowledge‑graph based chatbots that answer factual questions with verifiable provenance. In an AI workflow, the MCP server can be invoked after a natural‑language understanding step: the assistant translates user intent into a SPARQL query, sends it to the server, and then formats the returned data into a conversational response. Because the tool is unified and introspective, developers can prototype new queries rapidly without maintaining separate schemas or adapters.
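The final step of that workflow, turning returned data into a conversational response, can be sketched as a small formatter over the standard `application/sparql-results+json` shape. The function name, variable name, and sample data below are illustrative, not part of the server:

```python
def summarize_bindings(results: dict, var: str) -> str:
    """Join the values bound to one variable in SPARQL JSON results
    into a short, human-readable answer string."""
    bindings = results["results"]["bindings"]
    values = [b[var]["value"] for b in bindings if var in b]
    if not values:
        return "No results found."
    return ", ".join(values)

# Hypothetical result set, in the standard SPARQL JSON results shape.
sample = {
    "head": {"vars": ["laureateLabel"]},
    "results": {"bindings": [
        {"laureateLabel": {"type": "literal", "value": "Marie Curie"}},
        {"laureateLabel": {"type": "literal", "value": "Linus Pauling"}},
    ]},
}
print(summarize_bindings(sample, "laureateLabel"))  # Marie Curie, Linus Pauling
```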
What sets this server apart is its simplicity and breadth. A single tool that covers both schema discovery and data retrieval reduces the cognitive load on developers, while the dual transport modes guarantee compatibility across a wide range of MCP clients. Coupled with Cloudflare’s edge deployment, the Wikidata SPARQL MCP server provides a scalable, low‑latency bridge between AI assistants and one of the world’s largest structured knowledge bases.
Related Servers
n8n
Self‑hosted, code‑first workflow automation platform
FastMCP
TypeScript framework for rapid MCP server development
Activepieces
Open-source AI automation platform for building and deploying extensible workflows
MaxKB
Enterprise‑grade AI agent platform with RAG and workflow orchestration
Filestash
Web‑based file manager for any storage backend
MCP for Beginners
Learn Model Context Protocol with hands‑on examples
Explore More Servers
MCP Bitbucket
Local MCP server for seamless Bitbucket repository and issue management
Kubernetes MCP Server
Manage Kubernetes clusters directly from your development environment.
Hyperliquid MCP Server
Fetch Hyperliquid positions via Claude
TSGram MCP
AI code assistance via Telegram chats
NTeALan Dictionaries MCP Server
Unified API for dictionary data and contributions
AgentPM
AI‑powered product management for local development