avd1729

Onyx MCP Sandbox

MCP Server

Secure, multi‑language code execution in Docker

Stale (60)
1 star
2 views
Updated Aug 29, 2025

About

Onyx is an MCP server that runs arbitrary code inside isolated Docker containers, supporting Python, Java, C/C++, Node.js, Rust and more. It is ideal for AI workflows and safe code execution in Claude Desktop or similar clients.

Capabilities

  • Resources: access data sources
  • Tools: execute functions
  • Prompts: pre-built templates
  • Sampling: AI model interactions

Onyx MCP Sandbox in action

Onyx MCP Sandbox is a purpose‑built Model Context Protocol server that lets AI assistants such as Claude execute arbitrary code in a tightly controlled, containerised environment. By running each request inside a fresh Docker sandbox with networking disabled, read‑only file systems, and strict resource limits, Onyx sharply reduces the security risks that normally accompany code execution while still offering the full flexibility of a multi‑language runtime. This makes it ideal for power users who want to prototype algorithms, run data transformations, or validate logic directly from a chat interface without exposing their host system to potentially malicious input.

At its core, Onyx exposes a single tool that accepts source code and a language identifier. The server selects the appropriate executor—Python, Java, C/C++, Node.js, Rust, or any future language added by the community—and spins up a Docker container using the corresponding official image. Inside the sandbox, the code is compiled or interpreted, executed with a limited CPU and memory budget, and its standard output, errors, and exit status are returned to the AI client. Because all logs stream back to the client, they can be captured and displayed safely, giving developers immediate feedback on their code’s behaviour.
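
To make that execution flow concrete, here is a minimal sketch of what an executor along these lines could look like for the Python runtime. The image name, resource limits, and file layout are illustrative assumptions, not Onyx's actual implementation:

    import subprocess
    import tempfile
    from pathlib import Path

    # Hypothetical sketch of an Onyx-style Python executor; the flags mirror the
    # zero-trust posture described above (no network, read-only FS, resource caps).
    def run_in_sandbox(source: str, timeout: int = 10) -> dict:
        """Run a Python snippet in a locked-down, throwaway Docker container."""
        with tempfile.TemporaryDirectory() as workdir:
            Path(workdir, "main.py").write_text(source)
            proc = subprocess.run(
                [
                    "docker", "run", "--rm",
                    "--network", "none",          # no network access
                    "--read-only",                # read-only root filesystem
                    "--tmpfs", "/tmp",            # scratch space only in tmpfs
                    "--user", "65534:65534",      # unprivileged user (nobody)
                    "--memory", "256m",           # memory cap
                    "--cpus", "1",                # CPU cap
                    "--pids-limit", "64",         # limit number of processes
                    "-v", f"{workdir}:/code:ro",  # mount the snippet read-only
                    "python:3.12-alpine",         # assumed runtime image
                    "python", "/code/main.py",
                ],
                capture_output=True,
                text=True,
                timeout=timeout,
            )
        return {"stdout": proc.stdout, "stderr": proc.stderr, "exit_code": proc.returncode}

    if __name__ == "__main__":
        print(run_in_sandbox("print('hello from the sandbox')"))

In the real server, the language identifier would select among per‑language images and compile steps (javac, gcc, cargo, and so on), but the isolation flags stay essentially the same.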

Key capabilities include:

  • Multi‑language support out of the box, with a clear path to add new runtimes.
  • Zero‑trust sandboxing: no network, non‑root users, and temporary file systems prevent data leakage.
  • Resource isolation: configurable CPU/memory caps and process limits protect the host from runaway processes.
  • CI‑ready architecture: automated tests validate each executor, ensuring reliability across updates (see the test sketch below).
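
As a hedged illustration of that CI idea, here are a couple of pytest‑style checks written against the run_in_sandbox sketch above; the sandbox module name is hypothetical, and Onyx's real test suite will differ:

    # test_sandbox.py: assumes the run_in_sandbox sketch above lives in sandbox.py
    from sandbox import run_in_sandbox

    def test_python_hello_world():
        result = run_in_sandbox("print('ok')")
        assert result["exit_code"] == 0
        assert result["stdout"].strip() == "ok"

    def test_runtime_errors_are_reported():
        result = run_in_sandbox("raise ValueError('boom')")
        assert result["exit_code"] != 0
        assert "ValueError" in result["stderr"]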

Real‑world scenarios that benefit from Onyx are plentiful. A data scientist can run a quick Python script to sample a dataset directly within a chat, a software engineer can test a Java snippet for API usage, or a DevOps practitioner might prototype a Rust utility to process logs—all without leaving the AI interface. In continuous integration pipelines, Onyx can serve as a lightweight execution engine for language‑agnostic tests or code quality checks.

Integration is straightforward: once the server binary is available, any MCP‑compliant client—Claude Desktop being a primary example—can register it via a simple configuration entry. The client then exposes the tool, and developers can invoke it from prompts or tool calls. The seamless flow from chat to sandboxed execution, back to the assistant’s response, empowers developers to build richer, code‑aware AI workflows that are both powerful and secure.
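
For example, a Claude Desktop entry in claude_desktop_config.json could look like the snippet below; the command and module name are placeholders, so consult the project's README for the actual launch command:

    {
      "mcpServers": {
        "onyx-sandbox": {
          "command": "python",
          "args": ["-m", "onyx_mcp_sandbox"]
        }
      }
    }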