Tutorial

Run Claude Code on Every Machine You Own

Akshay Sarode
Direct answer

Install celistrad on each machine. Pair each one in 30 seconds. From a single dashboard, spawn Claude Code on whichever host has the right files, GPU, or context. Watch all of them. Approve from your phone. Sandbox by default.

If you have more than one machine, you've felt this question: which box is that running on? The Cursor session you started yesterday on the GPU rig. The long Aider refactor on the laptop. The browser-agent run on the Mac mini under the TV. The plumbing for "all of them in one place" doesn't exist out of the box, so you end up SSHing into each machine in turn or trying to remember tmux session names.

This is the multi-machine setup I run. The same setup powers the screenshots on the Celistra homepage.

What the native options give you (and where they stop)

Claude Code's Remote Control lets your phone re-attach to one live session per machine. Their Agent Teams feature spawns coordinated sub-agents within a single Claude Code instance. Both are useful. Neither answers "I have three machines and I want one screen."

Step 1 — Install the daemon on each box

From celistra.dev/download, grab the right binary. The macOS install is a .app; the daemon auto-launches. Linux is a tarball plus a systemd unit. Windows is a signed MSI that registers a startup entry.

# macOS — drag Celistra.app into /Applications, double-click once
# Linux
curl -L https://celistra.dev/dl/celistrad-linux-amd64.tar.gz | tar xz
sudo mv celistrad /usr/local/bin/
celistrad install-systemd-unit
sudo systemctl enable --now celistrad

The daemon listens on 127.0.0.1:33120. It does nothing until you pair it.

Step 2 — Pair each machine in 30 seconds

Open the dashboard at celistra.dev and sign in with Google. On the machine, click the Celistra tray icon → Pair this machine. The browser jumps to the dashboard carrying a one-time token: single-use, 600-second TTL, verified with a constant-time compare. The handshake completes silently and the machine appears in your fleet view.

Repeat for each box.
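The token check described above can be sketched in a few lines. This is an illustrative sketch, not Celistra's actual implementation; the function names and in-memory store are assumptions. The three properties it demonstrates are the ones the pairing flow claims: single-use, 600-second TTL, constant-time comparison.

```python
import hmac
import secrets
import time

TTL_SECONDS = 600
_pending = {}  # token -> issued_at timestamp (hypothetical in-memory store)

def issue_token() -> str:
    """Mint a fresh pairing token and remember when it was issued."""
    token = secrets.token_urlsafe(32)
    _pending[token] = time.time()
    return token

def redeem_token(presented: str) -> bool:
    """Accept a token at most once, within its TTL, compared in constant time."""
    now = time.time()
    for token, issued_at in list(_pending.items()):
        if now - issued_at > TTL_SECONDS:
            del _pending[token]          # expired: drop it
            continue
        if hmac.compare_digest(token, presented):
            del _pending[token]          # single-use: burn on success
            return True
    return False
```

`hmac.compare_digest` is the standard-library way to avoid timing side channels when matching secrets; a plain `==` would leak how many leading characters matched.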

Step 3 — Spawn Claude Code on the GPU rig from your laptop

From the dashboard, click Spawn agent. Pick the host (your GPU rig). Paste:

claude --dangerously-skip-permissions --resume my-training-run

or, more sensibly:

claude --resume my-training-run

Hit Run. The PTY opens in your browser via xterm.js + WebSocket. Multiple browser tabs can attach to the same agent — there's a 64KB replay ring for late joiners.
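A replay ring like the one described here is a small bounded buffer: new output is appended, and once the total exceeds the cap the oldest chunks are dropped, so a late-joining tab replays only the most recent 64 KB. A minimal sketch, assuming chunk-granular eviction (the real daemon's behavior may differ):

```python
from collections import deque

class ReplayRing:
    """Bounded buffer of recent PTY output for late-joining viewers."""

    def __init__(self, capacity: int = 64 * 1024):
        self.capacity = capacity
        self._chunks = deque()
        self._size = 0

    def write(self, data: bytes) -> None:
        if len(data) >= self.capacity:
            # One write larger than the ring: keep only its tail.
            self._chunks.clear()
            self._chunks.append(data[-self.capacity:])
            self._size = self.capacity
            return
        self._chunks.append(data)
        self._size += len(data)
        while self._size > self.capacity:
            oldest = self._chunks.popleft()   # evict oldest chunk whole
            self._size -= len(oldest)

    def replay(self) -> bytes:
        """What a freshly attached browser tab would receive."""
        return b"".join(self._chunks)
```

Live viewers still get every byte over the WebSocket; the ring only bounds what a new attachment can catch up on.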

Step 4 — Watch from your phone

The companion iOS / Android app signs in with the same Firebase account. Every running agent shows up. Tap one to attach. When an agent asks for permission ("can I run git push?"), you get a haptic push notification. Tap-and-hold to approve.

This is the part Remote Control can't do — the phone isn't viewing one session, it's the dashboard for the whole fleet.

Step 5 — Sandbox: keep Claude Code out of ~/Documents

By default every spawn is wrapped in sandbox-exec with a Seatbelt profile derived from a capability token: workspace_rw, network, subprocess, read_anywhere. The agent can read your machine but can only write inside the workspace folder you picked.

If a particular agent needs more (it wants to take screenshots, drive Safari, read protected files), grant it explicitly per-agent via the capabilities JSON. Every grant and every block decision is in the audit log.
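A per-agent grant might look something like the following. The field names are illustrative assumptions, not Celistra's documented schema; the point is that each capability from the default profile is an explicit boolean you widen one agent at a time.

```json
{
  "agent": "claude-training-run",
  "capabilities": {
    "workspace_rw": true,
    "network": true,
    "subprocess": true,
    "read_anywhere": false,
    "screen_capture": true
  }
}
```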

Step 6 — Restart-on-crash for overnight jobs

In the spawn modal, toggle Auto-restart. The daemon will respawn the process up to 5 times in any 60-second window before backing off. The 30-day SQLite agent history records every spawn, every exit code, every restart.

Step 7 — When you're off Wi-Fi

The dashboard probes 127.0.0.1:33120 first; if that isn't reachable, it falls back to the tunnel at <uid6>-<machine-slug>.tunnel.ujex.dev. The tunnel is Firebase-gated: an admission token is required and rotated via Firestore. nginx terminates Let's Encrypt TLS, with certificate issuance restricted by CAA records.

You can't tell the difference from the UI. You probably can't tell the difference from latency unless you're staring at it.
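The selection logic is just "local first, tunnel second." A sketch with the probe injected as a function so it can be exercised without a network (the hostname pattern follows the one above; the function name is an assumption):

```python
def pick_endpoint(uid6: str, machine_slug: str, probe) -> str:
    """Return the daemon endpoint: localhost if reachable, else the tunnel.

    `probe` is any callable that takes a URL and returns True if the
    daemon answered there.
    """
    local = "http://127.0.0.1:33120"
    if probe(local):
        return local
    return f"https://{uid6}-{machine_slug}.tunnel.ujex.dev"
```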

What this replaces

Old approach → What replaces it
ssh -t host 'tmux new-session claude' → Spawn modal · pick host · paste command · Run
Remember which tmux session is on which host → Fleet view sorted by host
"Did the overnight run finish?" → 30-day history · exit code · runtime · last 64KB of output
Approve a permission by being at the machine → Haptic phone push
Trust Claude Code with everything because of --dangerously-skip-permissions → Sandbox profile + capability tokens

What this doesn't replace

SSH still has a place. tmux still has a place. If you need a real shell on the machine for ops tasks, celistrad ctl gives you a REPL into the daemon's API but not a remote shell — for that, use SSH. Celistra is for processes, not for terminal access.

Cost

Free for 1 paired node. Pro is $2/month for 5 nodes; Team is $10/month for unlimited nodes plus RBAC. The daemon binary is and stays free. The dashboard is hosted on Firebase Hosting. The tunnel relay is Firebase-gated and free at indie scale.

FAQ

Can I run this on a Raspberry Pi?

Yes. The Linux ARM64 binary is built and signed. PTY + xterm.js works fine; latency is the same as any other Linux box. The Pi is great as a dedicated 'background processes' node.

What happens if the GPU rig goes offline mid-job?

The daemon notes the disconnect and the dashboard shows the host as offline. When it comes back, agents with auto-restart enabled are respawned and pick up where they were killed; everything else stays as 'exited' with the last known output.

Does this work behind CGNAT (carrier-grade NAT)?

Yes. The tunnel uses an outbound frp client to mail.ujex.dev:7000 — same CGNAT-friendly model as Cloudflare Tunnel. CGNAT is invisible to the daemon.

How does this compare to Tailscale + SSH?

Tailscale gives you a flat L3 network — that's a different layer. You can use Tailscale and Celistra together; the daemon doesn't care how packets get to 127.0.0.1:33120 from the dashboard.

Where does the data live?

Agent metadata (machines, agents, runs) lives in Firestore in your Firebase project. PTY output streams in real time over WebSocket and is buffered in a 64KB replay ring on the daemon. The 30-day history is SQLite on the daemon's disk — never leaves the machine.