jonigl

MCP Client for Ollama

Connect local LLMs to MCP servers with a powerful TUI

About

A Python terminal application that links local Ollama models to one or more Model Context Protocol servers, enabling tool use, workflow automation, and human-in-the-loop control without writing code.

Capabilities

  • Resources – access data sources
  • Tools – execute functions
  • Prompts – pre-built templates
  • Sampling – AI model interactions

MCP Client for Ollama Demo

The MCP Client for Ollama (often shortened to ollmcp) is a terminal-based interface that bridges local Ollama language models with any Model Context Protocol (MCP) server. By exposing the protocol's capabilities, such as tool execution, prompt management, and advanced sampling controls, ollmcp turns a bare LLM into a fully fledged AI assistant that can browse the web, query databases, or trigger custom scripts without leaving the command line. For developers, this means a zero-code path to prototype sophisticated agent workflows and test new tools in isolation before deploying them at scale.

At its core, ollmcp manages a collection of MCP servers and tools while letting you switch models on the fly. The client supports multiple transport mechanisms (STDIO, Server‑Sent Events, and HTTP streams), allowing it to connect to servers running locally or in the cloud. Its interactive menu lets you enable or disable entire tool sets, approve individual tool calls through a Human‑in‑the‑Loop (HIL) gate, and adjust context windows or temperature on the spot—all without restarting the model. This live reconfiguration is invaluable when debugging complex chains of tool calls or fine‑tuning a model’s personality for a specific domain.
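
To make the transport story concrete, here is a minimal sketch of an MCP server that ollmcp could attach to over STDIO, built with the FastMCP helper from the official MCP Python SDK. The file name, server name, and word_count tool are illustrative assumptions for the example, not part of ollmcp itself.

  # demo_server.py - a tiny MCP server exposing a single tool over STDIO
  # (assumes the official MCP Python SDK is installed: pip install mcp)
  from mcp.server.fastmcp import FastMCP

  mcp = FastMCP("demo-tools")

  @mcp.tool()
  def word_count(text: str) -> int:
      """Count the words in a piece of text."""
      return len(text.split())

  if __name__ == "__main__":
      # STDIO is the default transport; depending on the SDK version, the same
      # server can also be exposed over SSE, e.g. mcp.run(transport="sse").
      mcp.run()

Once a server like this is registered with ollmcp, its word_count tool appears in the client's tool menu next to tools from any other configured servers, where it can be enabled, disabled, or gated behind HIL approval like any other tool.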

Key features include:

  • Multi‑server orchestration – run several MCP backends concurrently and route calls to the most appropriate one.
  • Rich TUI – a navigable console with fuzzy search, real‑time streaming of model output, and colorized prompts.
  • Advanced sampling knobs – tweak 15+ generation parameters directly from the interface (see the sketch after this list).
  • System prompt editor – craft or override the assistant’s persona on demand.
  • HIL safety layer – review tool payloads before execution, preventing accidental data leaks or policy violations.

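To give a sense of what those sampling knobs correspond to underneath, the sketch below issues an equivalent request through the ollama Python package directly; ollmcp adjusts the same kinds of options interactively from its menu, with no code required. The model name and option values are placeholders, and the package is assumed to be installed.

  # Rough equivalent of adjusting sampling options in ollmcp's menu,
  # expressed as a direct call through the Ollama Python client.
  import ollama

  response = ollama.chat(
      model="qwen2.5:7b",  # placeholder model name
      messages=[{"role": "user", "content": "Summarize today's tool results."}],
      options={
          "temperature": 0.7,     # randomness of sampling
          "top_k": 40,            # consider only the 40 most likely tokens
          "top_p": 0.9,           # nucleus sampling threshold
          "num_ctx": 8192,        # context window size in tokens
          "repeat_penalty": 1.1,  # discourage verbatim repetition
      },
  )
  print(response["message"]["content"])
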
In practice, ollmcp shines in scenarios where developers need rapid iteration over tool pipelines: testing a new web‑scraping utility, integrating a spreadsheet query API, or prototyping a chatbot that needs to call an internal knowledge base. By decoupling the model from the tool logic, it allows teams to iterate on tooling independently while keeping the LLM configuration stable. Moreover, its hot‑reload capability for MCP servers makes it a natural fit in continuous integration workflows—any change to the server code or tool list can be reflected instantly in the client, reducing turnaround time from commit to test.
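
Continuing the hypothetical demo_server.py sketch from above, iterating on tooling can be as small as adding another decorated function; after the server is restarted or reloaded, the new tool simply shows up in the client. The tool below is again purely illustrative.

  # Added to the earlier demo_server.py sketch; picked up by ollmcp
  # once that server process is restarted or reloaded.
  @mcp.tool()
  def reverse_text(text: str) -> str:
      """Return the input text reversed."""
      return text[::-1]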

For teams building AI‑powered applications, ollmcp provides a lightweight yet powerful bridge between local LLMs and external services. It eliminates the need for custom middleware, offers granular control over model behavior, and embeds safety checks directly into the user experience—all within a familiar terminal environment.