MCPSERV.CLUB
disler

Just Prompt

MCP Server

Unified LLM Control Across Multiple Providers

658 stars · Updated 11 days ago

About

Just Prompt is a lightweight MCP server that offers a single API to send prompts, retrieve responses, and orchestrate decisions across OpenAI, Anthropic, Google Gemini, Groq, DeepSeek, and Ollama. It supports string or file prompts, parallel model execution, and easy output management.

Capabilities

Resources: access data sources
Tools: execute functions
Prompts: pre-built templates
Sampling: AI model interactions

Just Prompt in Action

Just Prompt is a lightweight Model Context Protocol (MCP) server that unifies access to several leading large‑language‑model providers—OpenAI, Anthropic, Google Gemini, Groq, DeepSeek, and Ollama. By exposing a single set of MCP tools, it removes the need for developers to write provider‑specific adapters or manage multiple SDKs. Instead of juggling different APIs, a developer can issue a single MCP request and have the server dispatch the prompt to any combination of models, collect their responses, and return a consolidated result. This simplifies experimentation, benchmarking, and production deployments where multiple models must be compared or combined.
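The fan-out pattern described above can be sketched as follows. This is an illustration of the dispatch-and-collect idea, not Just Prompt's actual implementation: `call_model` is a hypothetical stand-in for a real provider SDK call, and the model identifiers are examples.

```python
from concurrent.futures import ThreadPoolExecutor

def call_model(model_id: str, prompt: str) -> str:
    # Hypothetical stand-in for a real provider SDK call.
    return f"[{model_id}] response to: {prompt}"

def prompt_many(prompt: str, models: list[str]) -> dict[str, str]:
    # Send the same prompt to every model in parallel and collect
    # the replies keyed by model identifier.
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {m: pool.submit(call_model, m, prompt) for m in models}
        return {m: f.result() for m, f in futures.items()}

results = prompt_many("Summarize MCP in one line.",
                      ["openai:gpt-4o-mini", "anthropic:claude-3-5-haiku"])
```

The consolidated dictionary is what a caller would consume instead of handling each provider's response format separately.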

At its core, Just Prompt offers a suite of intuitive tools. Prompt utilities let users send arbitrary text or file‑based prompts to any list of models, and a file‑output variant adds the convenience of persisting each model’s reply as a Markdown document. The most distinctive feature is the CEO‑and‑board tool, which orchestrates a multi‑model deliberation: several “board member” models provide independent answers, and a designated “CEO” model synthesizes those insights into a final decision. This pattern is ideal for scenarios that require collective reasoning, such as policy compliance checks or multi‑angle content evaluation.
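The CEO‑and‑board deliberation reduces to a simple two-stage flow, sketched below under stated assumptions: `ask` is a hypothetical callable standing in for a real model invocation, and the prompt wording and model names are illustrative only.

```python
def ceo_and_board(question: str, board_models: list[str],
                  ceo_model: str, ask) -> str:
    # Stage 1: each "board member" model answers independently.
    board_answers = {m: ask(m, question) for m in board_models}
    # Stage 2: the "CEO" model reads every answer and synthesizes
    # a single final decision from the briefing.
    briefing = "\n".join(f"- {m}: {a}" for m, a in board_answers.items())
    return ask(ceo_model,
               f"Question: {question}\nBoard input:\n{briefing}\n"
               "Synthesize a final decision.")

# Deterministic stub in place of real provider calls.
def fake_ask(model: str, prompt: str) -> str:
    return f"{model} says yes"

decision = ceo_and_board("Ship it?", ["a:model1", "g:model2"],
                         "o:ceo", fake_ask)
```

Because the board answers are gathered before the CEO is consulted, the final decision always reflects every member's independent view.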

Key capabilities include parallel execution across multiple models, automatic correction of model names against a default list, and discovery tools that list the available providers and the models each one offers. All tools accept a provider‑prefixed model identifier, so switching providers is as simple as changing a short alias in the identifier. This design choice streamlines configuration and reduces cognitive load when working across heterogeneous services.
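Resolving a provider‑prefixed identifier could look like the sketch below. The colon separator and the alias table here are assumptions for illustration; Just Prompt's real alias set may differ.

```python
# Illustrative alias table; the server's real aliases may differ.
ALIASES = {"o": "openai", "a": "anthropic", "g": "gemini"}

def parse_model_id(model_id: str) -> tuple[str, str]:
    # Split "provider:model" and expand short provider aliases.
    provider, sep, model = model_id.partition(":")
    if not sep:
        raise ValueError(f"expected 'provider:model', got {model_id!r}")
    return ALIASES.get(provider, provider), model

resolved = parse_model_id("o:gpt-4o-mini")  # ('openai', 'gpt-4o-mini')
```

With a scheme like this, the caller's only per-provider knowledge is a one-letter prefix; everything downstream works with the expanded provider name.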

Real‑world use cases span rapid prototyping, A/B testing of model performance, and building higher‑level decision engines. For example, a content moderation pipeline could run a prompt through several models to gather diverse perspectives before feeding the aggregated output into a compliance rule engine. In research, developers can benchmark latency and accuracy across providers without rewriting code for each API.
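One way such a moderation pipeline might aggregate its diverse perspectives is a simple majority vote, sketched here under assumptions: `classify` is a hypothetical callable wrapping a real model call, and the two-label verdict scheme is made up for the example.

```python
from collections import Counter

def moderate(content: str, models: list[str], classify) -> tuple[str, float]:
    # Collect one verdict per model, then majority-vote,
    # returning the winning label and its share of the votes.
    verdicts = [classify(m, content) for m in models]
    winner, count = Counter(verdicts).most_common(1)[0]
    return winner, count / len(verdicts)

# Stub classifier standing in for real model calls.
def fake_classify(model: str, content: str) -> str:
    return "block" if "spam" in content else "allow"

verdict, confidence = moderate("buy spam now", ["m1", "m2", "m3"],
                               fake_classify)
```

The confidence score gives the downstream rule engine a quantitative signal, e.g. routing low-agreement cases to human review.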

By encapsulating provider diversity behind a consistent MCP interface, Just Prompt empowers developers to focus on application logic rather than infrastructure plumbing. Its lightweight nature and straightforward tooling make it an attractive addition to any AI workflow that requires flexible, multi‑model orchestration.