About
gpt2099 is a Nushell‑based MCP client that lets you run large language models through a single, provider‑agnostic API. It stores editable conversation threads, supports document uploads, and can be extended with custom MCP servers for flexible tooling.
Overview
gpt2099 is a lightweight, scriptable MCP client built on Nushell that unifies access to multiple large‑language‑model providers (Anthropic, Cerebras, Cohere, Gemini, and OpenAI) through a single, consistent API. By hiding the idiosyncrasies of each provider behind a common interface, it lets developers prototype and deploy AI‑powered workflows without being locked into a particular vendor. The client stores conversation threads persistently, providing editable context that survives across sessions and can be inspected or modified directly in the terminal.
The client addresses a common pain point for developers: managing context windows across different models. gpt2099 keeps a clear, editable history of messages and tool calls, so users can review, edit, or prune the conversation before sending it to a model. This transparency removes the "black‑box" history typical of hosted APIs, giving developers fine‑grained control over what the model sees. It also supports document ingestion: PDFs, images, and plain text can be uploaded once and referenced in subsequent prompts, with automatic content‑type detection and optional caching, making it easy to fold knowledge bases or reference material into a conversation.
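The editable, prunable history described above can be illustrated with a minimal sketch. This is a conceptual model only; the `Thread` class, its field names, and the `prune` helper are assumptions for illustration, not gpt2099's actual data model or API:

```python
# Conceptual sketch (NOT gpt2099's actual API): a conversation thread kept
# as plain data that can be inspected and pruned before each model call.
from dataclasses import dataclass, field


@dataclass
class Thread:
    messages: list[dict] = field(default_factory=list)

    def add(self, role: str, content: str) -> None:
        # Append one turn to the editable history.
        self.messages.append({"role": role, "content": content})

    def prune(self, keep_last: int) -> None:
        # Drop older turns so only the most recent context is sent.
        self.messages = self.messages[-keep_last:]


thread = Thread()
thread.add("user", "Summarize the design doc.")
thread.add("assistant", "Here is a summary ...")
thread.add("user", "Now list open questions.")
thread.prune(keep_last=2)
print(len(thread.messages))  # → 2
```

Because the history is ordinary structured data rather than opaque server state, a client like this can expose it for review or editing before any tokens reach the provider.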
Key capabilities include:
- Provider‑agnostic calls: a single command works with any configured model, and new providers can be added via the provider API.
- Threaded conversations: each thread is a first‑class object; users can bookmark, continue, or delete threads at will.
- Tool integration: The client itself exposes MCP endpoints for local file editing, and it can consume other MCP servers, effectively turning the terminal into a programmable AI assistant.
- Document handling: Uploaded files are stored in a structured format, and the client can embed relevant excerpts into prompts automatically.
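The provider‑agnostic design in the list above amounts to a thin dispatch layer over per‑provider adapters. The sketch below shows the general pattern; the registry, the stub adapters, and the `complete()` signature are illustrative assumptions, not gpt2099's code:

```python
# Illustrative provider-agnostic dispatch layer. The registry and the
# complete() entry point are assumptions, not gpt2099's actual API; the
# adapters are stubs standing in for real provider HTTP calls.
from typing import Callable

PROVIDERS: dict[str, Callable[[list[dict]], str]] = {}


def register(name: str):
    # Decorator that records an adapter under a provider name.
    def wrap(fn: Callable[[list[dict]], str]):
        PROVIDERS[name] = fn
        return fn
    return wrap


@register("anthropic")
def _anthropic(messages: list[dict]) -> str:
    # A real adapter would call the Anthropic API here.
    return f"[anthropic] {messages[-1]['content']}"


@register("openai")
def _openai(messages: list[dict]) -> str:
    # A real adapter would call the OpenAI API here.
    return f"[openai] {messages[-1]['content']}"


def complete(provider: str, messages: list[dict]) -> str:
    # One entry point regardless of which backend is configured.
    return PROVIDERS[provider](messages)


print(complete("openai", [{"role": "user", "content": "hi"}]))
# → [openai] hi
```

Adding a new provider in this pattern means writing one adapter function; callers never change, which is what keeps workflows free of vendor lock‑in.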
Real‑world use cases span rapid prototyping of code‑generation tools, building command‑line chatbots that remember user preferences across sessions, and creating research assistants that pull in academic PDFs or internal documentation. Because all interactions happen in the Nushell environment, developers can chain commands with other shell utilities (filtering, parsing, or transforming data) without leaving the terminal. This tight integration makes it an attractive choice for DevOps engineers, data scientists, and anyone who wants a fully scriptable, inspectable AI workflow free of cloud‑provider lock‑in.
Related Servers
n8n
Self‑hosted, code‑first workflow automation platform
FastMCP
TypeScript framework for rapid MCP server development
Activepieces
Open-source AI automation platform for building and deploying extensible workflows
MaxKB
Enterprise‑grade AI agent platform with RAG and workflow orchestration
Filestash
Web‑based file manager for any storage backend
MCP for Beginners
Learn Model Context Protocol with hands‑on examples
Explore More Servers
Deepseek R1 MCP Server
Reasoning‑optimized LLM server with 8192‑token context
ToDo App MCP Server
Simple task management for quick to-do lists
RagDocs MCP Server
Semantic document search with Qdrant and Ollama/OpenAI embeddings
Maestro MCP Server
Python-based integration for Maestro test orchestration
Codesys MCP Toolkit
Automate CODESYS projects via Model Context Protocol
HTTP + SSE OAuth MCP Server
OAuth‑secured MCP server for Streamable HTTP & SSE