Quickstart
Install the CLI, create a workspace, mount it, and run your agent, all in five minutes.
The full path from "I have a Linux box" to "my agent is reading
/mnt/work". Total time: ~5 minutes (most of that is waiting for the
first format).
Prerequisites: a Linux x86_64 host (Fly.io app, EC2, dev VM) and an S3-compatible bucket you control.
1. Install the CLI
curl -fsSL https://artifacts.tonbo.dev/install.sh | bash
The installer installs fuse3 for you, downloads the latest signed
binary, and places it on your PATH. See
Install for platform details.
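A quick way to confirm the install worked, assuming the standard fuse3 tooling is what got pulled in (`fusermount3 -V` just prints its version):

```shell
# The binary should now resolve on PATH, and fuse3 should be present.
command -v artifacts
fusermount3 -V
```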
2. Log in
artifacts login
Opens your browser to https://tonbo.io/login. After you sign in,
credentials are stored at ~/.config/artifacts/credentials.json (mode
0600); subsequent commands on this host read from there.
For headless hosts (CI runners, containers without a browser), pass
--token <bearer> instead — see Log in.
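In a CI job that might look like the sketch below; `ARTIFACTS_TOKEN` is a hypothetical variable name, so substitute whatever your secret store injects:

```shell
# Headless login: pass the token directly instead of the browser flow.
# ARTIFACTS_TOKEN is a made-up name; fail fast if the secret is missing.
artifacts login --token "${ARTIFACTS_TOKEN:?set ARTIFACTS_TOKEN first}"
```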
3. Create a workspace
export ARTIFACTS_S3_ACCESS_KEY_ID=<your-bucket-access-key>
export ARTIFACTS_S3_SECRET_ACCESS_KEY=<your-bucket-secret-key>
export ARTIFACTS_S3_REGION=auto # 'auto' for Tigris; otherwise your AWS region
artifacts workspace create cases \
--bucket panta-cases \
--endpoint https://fly.storage.tigris.dev
Three phases run automatically: reserve → format your bucket → confirm.
On success, your access key and secret key are cached at
~/.config/artifacts/byo-credentials (mode 0600), so future shells and
containers don't need to re-export them. See
Storage credentials.
4. Mount
mkdir -p /mnt/work
artifacts mount cases /mnt/work
Mounting /mnt/work ...
$
artifacts mount daemonizes by default. Your prompt returns within a
second; the FUSE process detaches into the background. Verify:
mount | grep /mnt/work
# JuiceFS:tzu-cases-... on /mnt/work type fuse.juicefs (...)
5. Run your agent
cd /mnt/work
opencode # or any agent runtime that wants a POSIX filesystem
That's the whole loop. The agent does ordinary file IO; Tonbo handles the metadata round-trips and chunk fetches, and your bucket holds the durable state.
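Before pointing an agent at the mount, it can be worth a quick sanity check that ordinary file IO behaves. A minimal sketch; the fallback to a temp dir is only there so you can dry-run the commands on a host without the mount:

```shell
# POSIX smoke test: write a file through the mount and read it back.
# WORK defaults to the mount point from step 4, or a temp dir if absent.
WORK="${WORK:-$([ -d /mnt/work ] && echo /mnt/work || mktemp -d)}"
mkdir -p "$WORK/smoke"
echo "hello from the quickstart" > "$WORK/smoke/probe.txt"
cat "$WORK/smoke/probe.txt"
```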
Importing existing data
Skip this if you started clean. If you already have a dataset to bring, there are two routes:
Stage it into your bucket via rclone, then copy from the bucket into the mount.
Go one hop with aws s3 cp or s5cmd from the source bucket straight into the mount.
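Concretely, the two routes might look like the sketch below. The rclone remotes (`legacy:`, `tigris:`) and the source bucket name `old-bucket` are placeholders you'd configure yourself; the endpoint matches the Tigris one used in step 3:

```shell
# Route 1: stage into your workspace bucket, then copy into the mount.
rclone sync legacy:old-bucket/dataset tigris:panta-cases/staging/dataset
aws s3 cp s3://panta-cases/staging/dataset /mnt/work/dataset --recursive \
  --endpoint-url https://fly.storage.tigris.dev

# Route 2: one hop straight from the source bucket into the mount.
aws s3 cp s3://old-bucket/dataset /mnt/work/dataset --recursive
```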
Cleanup
artifacts unmount /mnt/work
artifacts workspace delete cases
Workspace delete clears the per-workspace metadata in Tonbo's Redis. Your bucket chunks are not touched. That's your data; you own it.
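If you want to double-check that the unmount completed before running the delete, the mount-table check from step 4 works in reverse:

```shell
# Confirm the FUSE mount is gone; prints "unmounted" when nothing matches.
mount | grep /mnt/work || echo "unmounted"
```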