
MCP-Enabled Terminal — How AI Agents Drive Your Shell

Model Context Protocol (MCP) is the standard interface AI agents use to call external tools. A terminal that speaks MCP becomes a tool surface: Claude, Cursor, or Windsurf can open panes, run commands, and manage workspaces without human input. This guide explains MCP, what tools a terminal should expose, and how Onda implements it.

What is MCP and why does a terminal need it?

MCP (Model Context Protocol) is an open specification from Anthropic that lets AI clients discover and invoke tools exposed by servers. For a terminal, MCP means the AI can call "split a pane" or "run this command in tab 3" as a first-class operation instead of asking a human to click.

Before MCP, agents either pasted code into a chat or used hacky AppleScript/Accessibility APIs to drive terminals. Those paths are brittle, platform-specific, and invisible to the agent's tool model.

MCP solves that. The terminal runs an MCP server over a Unix socket; the AI client is an MCP client; they speak the same schema. Tools are discoverable, typed, and composable. The agent's planner sees "I can split a pane" as an action with the same dignity as "I can read a file".
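Concretely, a tool invocation is a JSON-RPC 2.0 request using the MCP `tools/call` method. A minimal sketch in Python of what a client sends over the wire (the tool name is from Onda's surface; the argument names are illustrative assumptions, not Onda's actual schema):

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical arguments; check Onda's tool schemas for the real shape.
msg = make_tool_call(1, "onda_pane_split", {"direction": "vertical"})
```

The client's planner never sees raw JSON; it sees a typed tool it can choose like any other action.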

What tools should a terminal expose?

A useful MCP terminal surface covers four layers: panes (split/focus/close), tabs (new/close/focus/list), terminals (run/send/kill/list), and workspaces (create/focus/list). Plus introspection: context, status, app info.

Onda ships exactly this surface: 19 tools, grouped by layer. Panes and tabs let the agent shape your workspace. Terminal tools let it execute and read back output. Workspace tools let it set up whole project environments. Introspection tools let it reason about what is already running before acting.

  • Panes: onda_pane_split, onda_pane_focus, onda_pane_close, onda_pane_list
  • Tabs: onda_tab_new, onda_tab_close, onda_tab_focus, onda_tab_list
  • Terminals: onda_terminal_run, onda_terminal_send, onda_terminal_kill, onda_terminal_list
  • Workspaces: onda_workspace_create, onda_workspace_focus, onda_workspace_list
  • Introspection: onda_ping, onda_status, onda_app_info, onda_context
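Because the names share a layer prefix, an agent (or a script) that fetches this surface via the standard MCP `tools/list` method can bucket it mechanically. A small sketch, using the tool names listed above (the grouping helper is ours, not part of MCP):

```python
from collections import defaultdict

ONDA_TOOLS = [
    "onda_pane_split", "onda_pane_focus", "onda_pane_close", "onda_pane_list",
    "onda_tab_new", "onda_tab_close", "onda_tab_focus", "onda_tab_list",
    "onda_terminal_run", "onda_terminal_send", "onda_terminal_kill", "onda_terminal_list",
    "onda_workspace_create", "onda_workspace_focus", "onda_workspace_list",
    "onda_ping", "onda_status", "onda_app_info", "onda_context",
]

def group_by_layer(tools: list[str]) -> dict[str, list[str]]:
    """Bucket tool names by the layer segment after the onda_ prefix."""
    layers: dict[str, list[str]] = defaultdict(list)
    known = {"pane", "tab", "terminal", "workspace"}
    for name in tools:
        segment = name.split("_")[1]
        layers[segment if segment in known else "introspection"].append(name)
    return dict(layers)
```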

How Onda implements MCP under the hood

Onda runs an MCP server inside the Electron main process, bound to a Unix socket at a predictable path. AI clients connect to the socket, exchange JSON-RPC, and call tools. No HTTP, no auth tokens — local-only.

The socket lives under `~/.onda/ipc.sock` (path configurable). The server implements the MCP spec with JSON-RPC 2.0 framing. Each tool maps to a handler in the main process that dispatches to the renderer via IPC and returns a typed response.
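A minimal client sketch of that exchange, assuming newline-delimited JSON-RPC over the socket (the framing is an assumption about Onda's implementation; `tools/list` is a standard MCP method):

```python
import json
import os
import socket

SOCKET_PATH = "~/.onda/ipc.sock"  # default path; configurable

def encode_request(request_id: int, method: str, params: dict) -> bytes:
    """Serialize a JSON-RPC 2.0 request, newline-terminated (framing assumed)."""
    payload = {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}
    return (json.dumps(payload) + "\n").encode("utf-8")

def decode_response(raw: bytes) -> dict:
    """Parse one newline-terminated JSON-RPC response."""
    return json.loads(raw.decode("utf-8"))

def call_onda(method: str, params: dict) -> dict:
    """Open the local socket, send one request, read one response."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(os.path.expanduser(SOCKET_PATH))
        s.sendall(encode_request(1, method, params))
        return decode_response(s.makefile("rb").readline())

# e.g. call_onda("tools/list", {})  # requires Onda running locally
```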

Because the socket is local-only and owned by your user, there is no network surface. Remote agents cannot reach it. If you need remote MCP, tunnel the socket over SSH with `-L` or use an MCP proxy — Onda does not build a public endpoint by design.

Configuring Claude Code, Cursor, and Windsurf to use Onda MCP

Add Onda as an MCP server in your client config (`~/.claude/mcp.json` for Claude Code, Cursor settings for Cursor, similar for Windsurf). Point it at the Onda Unix socket and restart the client.

For Claude Code, the MCP config is typically a JSON file with a list of servers, where each server entry specifies either a command to launch or a socket path. Add Onda with `transport: "stdio"` proxied through Onda's IPC binary, or as a direct socket connection if your client supports that transport.
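A hypothetical entry of that shape (the `mcpServers` key follows the common Claude Code config convention; the `onda mcp-proxy` command and its flags are assumptions, so check Onda's docs for the exact invocation):

```json
{
  "mcpServers": {
    "onda": {
      "command": "onda",
      "args": ["mcp-proxy", "--socket", "~/.onda/ipc.sock"]
    }
  }
}
```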

Once connected, the client lists Onda tools automatically. In Claude Code, type `/mcp` to verify. Cursor and Windsurf expose MCP tools in their tool palette.

Real workflows MCP unlocks

MCP turns "tell Claude what to do" into "Claude sets up the environment itself". Three common patterns: project bootstrap, multi-pane debugging, and parallel test runs.

Project bootstrap: "Create a workspace for my blog rebuild, open panes for dev server, tests, and git status." Claude calls `onda_workspace_create`, then three `onda_pane_split` calls, then `onda_terminal_run` for each pane. Done in under a second.
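The bootstrap pattern above can be sketched as the sequence of tool calls an agent would plan (argument names are illustrative, not Onda's actual schemas):

```python
def plan_bootstrap(workspace: str, commands: list[str]) -> list[tuple[str, dict]]:
    """Return the (tool, arguments) sequence for a workspace bootstrap:
    create the workspace, then split a pane and run a command per task.
    Argument names are hypothetical."""
    calls: list[tuple[str, dict]] = [("onda_workspace_create", {"name": workspace})]
    for cmd in commands:
        calls.append(("onda_pane_split", {"direction": "vertical"}))
        calls.append(("onda_terminal_run", {"command": cmd}))
    return calls

plan = plan_bootstrap("blog-rebuild", ["npm run dev", "npm test", "git status"])
```

The agent emits these calls in order; each returns a typed response it can check before proceeding.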

Multi-pane debugging: "Tail the API log in pane 1, run the failing test in pane 2, watch CPU in pane 3." One prompt, three panes configured.

Parallel test runs: "Run the test suite in three panes, one per shard, and summarize failures." Claude splits, runs, then reads back output via `onda_context`.

Security and sandboxing

MCP tools run with your user permissions — whatever your shell can do, the agent can do. Onda does not add a sandbox layer by default. For risky work, use a scoped shell or dedicated user account.

This is the same trust model as running any CLI that the agent launches. If you ask Claude to "rm stuff", the agent can execute it. Onda adds visibility (you see commands running in panes) and does not auto-confirm destructive operations, but the safety boundary is you.

For higher-assurance setups, run Onda in a Nix shell, a Docker container, or a sandbox user. The MCP surface remains the same.

Frequently asked questions

Do I need to write any code to use MCP with Onda?

No. Onda ships the MCP server built-in and enables it by default. Configure your AI client (Claude Code, Cursor, Windsurf) to point at the Onda socket and the tools appear. Zero code for the common path.

Does MCP require internet access?

No. MCP between Onda and a local AI client is entirely local (Unix socket, no network). The AI client may call a cloud LLM separately — that network activity is outside the MCP layer.

Can multiple AI agents use Onda MCP at the same time?

Yes. Onda's MCP server handles concurrent clients. You can have Claude Code and Cursor both connected, each calling tools. Conflicts are possible if both try to write to the same pane; design your prompts so agents operate on different panes or workspaces.

What if I want to add custom MCP tools to Onda?

Use the Onda plugin API. Plugins can register MCP tools that the built-in server exposes alongside the core 19. This lets you add project-specific tools — like `deploy_staging` or `open_db_console` — that any MCP client can call.

Is MCP a standard or just an Anthropic thing?

Open standard. Anthropic published the spec and reference implementations; OpenAI, Cursor, Windsurf, and others have adopted or are adopting MCP clients. The protocol is JSON-RPC-based, transport-agnostic, and versioned.

Does Warp, iTerm2, or Ghostty have MCP support?

As of April 2026, no. Warp has a proprietary AI layer but no MCP server. iTerm2 and Ghostty have neither. Onda is the first macOS terminal to ship native MCP integration. Warp has announced exploration but no release.

How do I debug MCP tool calls?

Onda logs MCP calls to `~/.onda/logs/mcp.log` when debug mode is on (set `ONDA_DEBUG=1`). You see tool name, args, and response per call. Claude Code also shows tool calls in its session log.

Try Onda free

macOS 12+ on Apple Silicon. Notarized by Apple. No account required.

Download Onda