SandboxAPI ships an MCP server at https://mcp.sandboxapi.dev/mcp. Drop the JSON config below into Claude Desktop, Cursor, VS Code, or any other MCP-compatible client. Your AI can now run code in 12 languages, install packages, and keep state across calls.
Every endpoint in the SandboxAPI HTTP API has a matching MCP tool. The agent can pick the mode that fits the moment.
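Under the hood, every MCP tool call is a JSON-RPC 2.0 request over HTTP. As a rough sketch of what the client sends on your behalf (the tool name `run_python` and its `code` argument are placeholders here, not documented SandboxAPI names — the server's `tools/list` response is the source of truth):

```python
import json

def tools_call(tool_name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 request body for the MCP tools/call method."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool name and arguments for illustration only.
payload = tools_call("run_python", {"code": "print(2 + 2)"})
print(json.dumps(payload, indent=2))
```

Your MCP client builds and sends these requests for you; the sketch just shows why the HTTP API and the MCP tools can stay in one-to-one correspondence.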
Pick your client, paste the JSON, replace YOUR_API_KEY with your RapidAPI or direct API key. Done.
Edit the Claude Desktop config and restart the app.
{
  "mcpServers": {
    "sandboxapi": {
      "url": "https://mcp.sandboxapi.dev/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
~/Library/Application Support/Claude/claude_desktop_config.json
Add to your Cursor MCP config (Settings → MCP).
{
  "mcpServers": {
    "sandboxapi": {
      "url": "https://mcp.sandboxapi.dev/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
Settings → MCP → Add server
Compatible with the Continue and Cline MCP integrations.
{
  "mcpServers": {
    "sandboxapi": {
      "url": "https://mcp.sandboxapi.dev/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
~/.continue/config.json or via the extension UI
An agent in Claude Desktop creates a session, installs pandas, parses a CSV, and reports findings — all without leaving the chat.
The MCP integration is the fastest way to ship one of these primitives in your AI agent product.
Give your AI a persistent Python sandbox. Variables and DataFrames live across turns. Files persist. The agent can pip install what it needs and keep working.
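What "state across calls" means, as a toy illustration in plain local Python (not the SandboxAPI client — just a shared namespace reused across turns, the way a persistent session keeps variables alive between executions):

```python
# Toy model of a stateful session: one shared namespace reused across
# "turns", mimicking how a sandbox session keeps variables alive.
session_ns = {}

exec("import math\nradius = 3", session_ns)       # turn 1: define state
exec("area = math.pi * radius ** 2", session_ns)  # turn 2: reuse it

print(round(session_ns["area"], 2))  # turn 2 saw `radius` from turn 1
```

In a real session the two turns would be separate API calls, possibly minutes apart, with files and installed packages surviving alongside the variables.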
Failing test? The agent runs the code, sees the traceback, edits the file, runs again. Tight loops via stateful sessions instead of restart-and-pray.
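The shape of that loop, reduced to a self-contained sketch (the "edit" is hard-coded here; in practice the agent writes the fix after reading the real traceback):

```python
# Toy version of the agent's debug loop: run, read the error, patch, rerun.
buggy = "total = sum([1, 2, '3'])"   # raises TypeError on the str element
fixed = "total = sum([1, 2, 3])"     # the "agent's" corrected version

ns = {}
try:
    exec(buggy, ns)
except TypeError:
    # The agent sees the traceback, edits the source, and runs again --
    # same session, so no setup is lost between attempts.
    exec(fixed, ns)

print(ns["total"])  # 6
```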
Build the next "ChatGPT advanced data analysis" on your own stack with your own model. 12 languages, gVisor isolation, and you can ship in days.
Get an API key, paste the JSON config, and your AI is running code in 30 seconds.