planctl User Manual

planctl is the Planck developer toolkit. It scaffolds projects, compiles ZSX templates into Zig code, and manages deployments to the Planck workbench.


Installing planctl

planctl ships as part of the Planck installer. If you've run the platform one-liner, it's already on your PATH:

macOS / Linux

bash
curl -sSL https://plancks.io/downloads/ctl.sh | sudo sh

Windows (PowerShell as Administrator)

powershell
iwr -useb https://plancks.io/downloads/ctl.ps1 | iex

The installer places planck, workbench, and planctl in /usr/local/bin/ (or C:\Program Files\Planck\ on Windows) and creates ~/.planctl/config.yaml pointing planctl at the local workbench with admin credentials:

yaml
server: http://127.0.0.1:2369
uid: admin
key: <your-api-key>

Quick Reference

Command                                                             Purpose
------------------------------------------------------------------  ---------------------------------------
planctl init <name> [--type wasm|app]                               Scaffold a new project
planctl build [<zig-build-args>...]                                 Sync dependencies, then run zig build
planctl <file.zsx>                                                  Compile a single template to stdout
planctl <in_dir> <out_dir>                                          Batch-compile templates
planctl clean <dir>                                                 Remove generated files
planctl deploy --app | --service <name> | --all                     Build + deploy
planctl undeploy --app | --service <name> | --all                   Remove from server
planctl start | stop | restart | status                             Lifecycle management
planctl backup <service> [--root <dir>] [--name <n>] [--archive]    Create a workbench snapshot
planctl restore <path> --service <name> --target <path>             Restore from a snapshot dir or .tar.zst

Project Initialization

bash
planctl init <project_name> [--type wasm|app]

  • --type wasm (default): WASM service project (compiles to .wasm, runs inside Planck)
  • --type app: native shell app project (zeish HTTP server)

WASM Service (planctl init myservice --type wasm)

myservice/
  build.zig                 Build configuration
  build.zig.zon             Package manifest
  config.yaml               Planck service config
  src/
    app.zig                 WASM entry (exports init + process)
    dev.zig                 Native dev server (same handlers)
    domain/
      item.zig              Entity, Schema, param/body types
    api/
      find_all_items_handler.zig
      find_item_by_id_handler.zig
      create_item_handler.zig
      update_item_handler.zig
      delete_item_handler.zig
    zsx/                    Hand-edit .zsx templates here
      item_list.zsx
    ui/                     AUTO-GENERATED (never edit)
  public/                   Static files
  tests/
    domain_test.zig
    schema_test.zig

Shell App (planctl init myapp --type app)

myapp/
  build.zig
  build.zig.zon
  src/
    main.zig                Shell server entry
    api/
      example.zig           Example handler
    zsx/
    ui/
  public/
    index.html
    index.css
  services/                 Subdirectory for WASM services

Build Targets

Use planctl build <target> to fetch dependencies and forward to zig build <target> in one step. Plain zig build works too once Zig has fetched the deps from build.zig.zon.

WASM service:

bash
planctl build              # Default: build WASM module
planctl build wasm         # Explicit WASM target
planctl build dev          # Native dev server (http://127.0.0.1:3000)
planctl build test         # Run domain + schema tests
planctl build preprocess   # ZSX compilation only

Shell app:

bash
planctl build              # Build native executable
planctl build run          # Build + run server

ZSX Template Compiler

ZSX is a JSX-like template syntax that compiles to Zig code. Templates produce HTML by appending directly to an ArrayList(u8): no virtual DOM, no runtime overhead.

Compilation

bash
# Single file to stdout
planctl src/zsx/item_list.zsx

# Batch transform directory
planctl src/zsx/ src/ui/

# Explicit target language
planctl --target zig src/zsx/ src/ui/

# Clean generated files (only removes files with AUTO-GENERATED header)
planctl clean src/ui/

Template Syntax

HTML Elements

jsx
<div class="container">
  <h1>Hello World</h1>
  <br />
</div>

Expressions

Any Zig expression inside {}. Strings are HTML-escaped automatically.

jsx
<span>{user.name}</span>
<p>Total: {count + 1}</p>
<div>{formatDate(item.created_at)}</div>
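
Escaping applies to the rendered value, so untrusted strings are safe to interpolate directly. Illustratively, if user.name held the string <script>, the first example above would emit:

html
<span>&lt;script&gt;</span>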

For Loops

jsx
{for item in self.items}
    <tr>
        <td>{item.id}</td>
        <td>{item.name}</td>
    </tr>
{/for}

Conditionals

jsx
{if self.items.len == 0}
    <p>No items found.</p>
{else}
    <p>{self.items.len} items</p>
{/if}

Dynamic Attributes

jsx
<div class={myVariable}>...</div>
<button data-id="{item.id}">Click</button>

Components

PascalCase tags are component calls:

jsx
<MyComponent prop="value" />
<Card title="Title">Content</Card>

Complete Example

Source (src/zsx/item_list.zsx):

zsx
const std = @import("std");
const Item = @import("../domain/item.zig").Item;

pub const ItemList = struct {
    items: []const Item,

    pub fn render(self: ItemList, out: *std.ArrayList(u8), allocator: std.mem.Allocator) !void {
        return (
            <div class="page">
                <h1>Items</h1>
                {if self.items.len == 0}
                    <p>No items found.</p>
                {else}
                    <table>
                        {for item in self.items}
                            <tr>
                                <td>{item.id}</td>
                                <td>{item.name}</td>
                            </tr>
                        {/for}
                    </table>
                {/if}
            </div>
        );
    }
};

Output (src/ui/item_list.zig): Auto-generated Zig code with appendSlice for static HTML and appendValue for dynamic expressions. HTML escaping is applied automatically to string values.
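
A hand-written approximation of that output's shape (illustrative only; the real file is emitted by the compiler with the AUTO-GENERATED header, and the exact signature of the framework's appendValue helper is assumed here):

zig
// src/ui/item_list.zig (approximate shape, not verbatim compiler output)
pub fn render(self: ItemList, out: *std.ArrayList(u8), allocator: std.mem.Allocator) !void {
    try out.appendSlice(allocator, "<div class=\"page\"><h1>Items</h1>");
    if (self.items.len == 0) {
        try out.appendSlice(allocator, "<p>No items found.</p>");
    } else {
        try out.appendSlice(allocator, "<table>");
        for (self.items) |item| {
            try out.appendSlice(allocator, "<tr><td>");
            try appendValue(out, allocator, item.id); // dynamic expression, HTML-escaped
            try out.appendSlice(allocator, "</td><td>");
            try appendValue(out, allocator, item.name); // dynamic expression, HTML-escaped
            try out.appendSlice(allocator, "</td></tr>");
        }
        try out.appendSlice(allocator, "</table>");
    }
    try out.appendSlice(allocator, "</div>");
}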


Deployment

Configuration

File: ~/.planctl/config.yaml

yaml
server: http://127.0.0.1:2369
uid: admin
key: <your-api-key>

Multi-profile:

yaml
default_profile: dev

profiles:
  - name: dev
    server: http://127.0.0.1:2369
    uid: admin
    key: dev-key

  - name: prod
    server: https://prod.workbench.internal:2369
    uid: admin
    key: prod-key

Resolution order (highest priority first):

  1. CLI flags (--server, --uid, --key, --profile)
  2. Environment variables (PLANCTL_SERVER, PLANCTL_UID, PLANCTL_KEY, PLANCTL_PROFILE)
  3. Selected profile from config file
  4. Top-level flat fields in config file
  5. Defaults: server=http://127.0.0.1:2369, uid=admin
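
For example, an environment variable overrides the config file, and a CLI flag overrides both:

bash
# config.yaml says 127.0.0.1, but this call hits staging
PLANCTL_SERVER=https://staging.internal:2369 planctl status

# ...and this one hits prod, the flag beating the env var
PLANCTL_SERVER=https://staging.internal:2369 planctl status --server https://prod.internal:2369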

Deploying to Specific Profiles

Profiles let you target different environments (dev, staging, prod) from the same machine without editing config files.

Setup (~/.planctl/config.yaml):

yaml
default_profile: dev

profiles:
  - name: dev
    server: http://127.0.0.1:2369
    uid: admin
    key: dev-api-key

  - name: staging
    server: https://staging.internal:2369
    uid: ci-admin
    key: staging-api-key

  - name: prod
    server: https://prod.internal:2369
    uid: deploy-admin
    key: prod-api-key

Selecting a profile:

bash
# Use --profile flag (highest priority)
planctl deploy --all --profile prod

# Or set PLANCTL_PROFILE env var
export PLANCTL_PROFILE=staging
planctl deploy --all

# Or rely on default_profile in config.yaml
# (deploys to "dev" in this example)
planctl deploy --all

Profile selection order:

  1. --profile CLI flag
  2. PLANCTL_PROFILE environment variable
  3. default_profile field in config.yaml
  4. First profile in the list

Mixing profiles with overrides:

You can override individual fields from a profile:

bash
# Use staging profile but override the server
planctl deploy --all --profile staging --server https://staging-2.internal:2369

# Use prod profile but override the key from env
PLANCTL_KEY=$ROTATED_KEY planctl deploy --all --profile prod

Profile resolution. Each field of the effective config (server, uid, key) is resolved independently, picking the first non-empty source from this list:

  1. CLI flags (--profile, --server, --uid, --key)
  2. Environment variables (PLANCTL_PROFILE, PLANCTL_SERVER, PLANCTL_UID, PLANCTL_KEY)
  3. Named profile in ~/.planctl/config.yaml (selected via --profile or the file's default_profile)
  4. Flat top-level server / uid / key fields in ~/.planctl/config.yaml
  5. Built-in defaults: server=http://127.0.0.1:2369, uid=admin

Resolution is per-field, not per-source: you can supply only the key on the CLI and let everything else come from the profile.

CI/CD example:

bash
# GitHub Actions, deploy to staging on PR merge
- name: Deploy to staging
  env:
    PLANCTL_PROFILE: staging
    PLANCTL_KEY: ${{ secrets.STAGING_KEY }}
  run: planctl deploy --all

# Deploy to production on release tag
- name: Deploy to production
  run: planctl deploy --all --profile prod --key ${{ secrets.PROD_KEY }}

Error handling:

If a profile name doesn't exist:

Error: profile 'staging' not found in /Users/you/.planctl/config.yaml.
Available profiles:
  - dev
  - prod

App Manifest

Each project needs an app.yaml in the root:

yaml
name: eshop
description: "eShop microservices demo"

Deploy Commands

bash
# Deploy shell app (builds + uploads binary + static files)
planctl deploy --app

# Deploy single WASM service (builds + uploads WASM)
planctl deploy --service product

# Deploy everything
planctl deploy --all

Common flags:

bash
--dry-run              # Print what would happen, skip network calls
--server <url>         # Override workbench URL
--uid <user>           # Override admin user
--key <api-key>        # Override admin key
--profile <name>       # Select config profile
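
These flags compose; --dry-run in particular lets you preview a deploy with the exact overrides you intend to use:

bash
# Print the full prod deploy plan without touching the network
planctl deploy --all --profile prod --dry-run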

What planctl deploy --app Does

In order:

  1. Read app.yaml from the project root.
  2. Authenticate to the workbench (POST /api/system-db/connect).
  3. Ensure the app record exists (POST /api/apps).
  4. Run zig build -Doptimize=ReleaseFast.
  5. Upload the compiled binary (POST /api/deploy-app).
  6. For each file under public/, upload it (POST /api/deploy-app).
  7. Restart the app (POST /api/app-lifecycle).

What planctl deploy --service <name> Does

In order:

  1. Read app.yaml to determine the parent app name.
  2. Authenticate to the workbench.
  3. Ensure the parent app record exists.
  4. Run zig build in services/<name>/.
  5. Read services/<name>/config.yaml.
  6. Register the service with the workbench (POST /api/deploy).
  7. Read the compiled WASM from zig-out/wasm/<name>.wasm.
  8. Upload the WASM, base64-encoded (POST /api/deploy).

The workbench auto-restarts the service with the new WASM module.

What planctl deploy --all Does

  1. Deploys the shell app (planctl deploy --app)
  2. Scans services/ directory
  3. Deploys each subdirectory as a WASM service
  4. Individual failures are logged but don't abort remaining services
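
Conceptually (a sketch of the behavior, not the actual implementation), --all behaves like:

bash
planctl deploy --app
for d in services/*/; do
  planctl deploy --service "$(basename "$d")" || echo "deploy of $d failed, continuing"
done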

Undeploy Commands

bash
planctl undeploy --service product          # Remove one service
planctl undeploy --app                      # Remove app (services must be removed first)
planctl undeploy --all                      # Remove all services + app
planctl undeploy --all --force              # Skip confirmation prompt

Lifecycle Commands

bash
planctl start --all                # Start app + all services
planctl stop --service product     # Stop one service
planctl restart --app              # Restart shell app
planctl status                     # Show running status (default: --all)

Status output:

SERVICE              APP        STATE      PORT     PID      CPU%     RSS(MB)
-------------------- ---------- ---------- -------- -------- -------- --------
product.db.command   eshop      running    24006    12345    2.3      45.6
order.db.command     eshop      running    24016    12346    1.1      32.1
kitchen.db.command   eshop      running    24020    12347    0.5      28.4

Backup & Restore

planctl backup / planctl restore wrap the workbench's snapshot API. A snapshot is a self-contained directory holding the DB data (data.shinydb), the currently deployed WASM binary (service.wasm), the service config (service.yaml), and a manifest.json with SHA-256 integrity hashes. Restoring one rebuilds the whole service; no separate WASM redeploy is needed.

Create a snapshot

bash
# Uses the service's configured backup_dir (from its config.yaml)
planctl backup product

# Override the snapshot root for a one-off (e.g. ad-hoc offsite copy)
planctl backup product --root /mnt/offsite/planck

# Custom snapshot dir name (defaults to {service}-{timestamp_ms})
planctl backup product --name pre-migration

# Also produce a portable .tar.zst next to the snapshot
planctl backup product --archive

Flag           Purpose
-------------  ------------------------------------------------------------------------
--root <dir>   Override the snapshot root directory. Defaults to backup_dir from the target service's config.yaml.
--name <name>  Snapshot subdirectory name. Default: {service}-{timestamp_ms}.
--archive      Also pack the snapshot as {snap_dir}.tar.zst. Requires tar + zstd on PATH.

Prerequisites: the target service must have backup_dir set in its config.yaml, or you must pass --root. The workbench refuses to default to a path under base_dir; the whole point of a backup is that it lands on a different disk.
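
The relevant field in the service's config.yaml (see Service Configuration below for the full file):

yaml
backup_dir: "/mnt/backups/product" # must NOT live under base_dir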

Restore a snapshot

bash
# From an unpacked snapshot dir
planctl restore /mnt/backups/product-1737000000000 \
  --service product \
  --target /var/lib/planck/product

# From a .tar.zst archive (auto-unpacked into the same parent dir)
planctl restore /mnt/offsite/product-1737000000000.tar.zst \
  --service product \
  --target /var/lib/planck/product

Flag              Purpose
----------------  ------------------------------------------------------------
--service <name>  Target service to restore into. Required.
--target <path>   Destination data directory on the workbench host. Required.

Restore flow (driven by the workbench):

  1. Read manifest.json and verify SHA-256 of data.shinydb + service.wasm.
  2. Stop the running service process.
  3. If the snapshot carries a WASM binary, upload it into the service's deployment dir.
  4. Engine-restore data.shinydb into --target.
  5. Start the service back up.

Any failure past step 2 leaves the service stopped for operator inspection; there is no auto-rollback.
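
If that happens, check the workbench logs for the failure reason, fix it, then restart manually:

bash
tail -n 100 ~/.planck/logs/workbench-*.out.log
planctl start --service product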

Scheduling snapshots

Recurring snapshots are configured in the workbench UI (Schedules panel) as task_type: snapshot. planctl has no dedicated scheduling subcommand; the scheduler lives server-side so schedules survive CLI sessions.

Retention is explicitly not managed yet. Snapshots are ~3× the size of a plain .shinydb backup (data + WASM + config). Pair the schedule with external cleanup until retention ships:

bash
# Keep 14 days of snapshots (run via cron on the backup volume's host)
find /mnt/backups/planck -maxdepth 1 -type d -mtime +14 \
  -exec rm -rf {} \;
find /mnt/backups/planck -maxdepth 1 -name "*.tar.zst" -mtime +14 \
  -delete

Service Configuration

Each WASM service has a config.yaml that defines its Planck instance. Generated by planctl init with sensible defaults.

yaml
name: product
address: "0.0.0.0"
service_type: command # "command" (primary + replica) or "standalone"
backup_dir: "/mnt/backups/product" # Default snapshot/backup output (NOT under base_dir)
max_sessions: 128

tls:
  enabled: false

session:
  idle_timeout_ms: 604800000 # 7 days

buffers:
  memtable: 16777216 # 16 MB
  vlog: 4194304 # 4 MB
  wal: 262144 # 256 KB

durability:
  enabled: true
  flush_interval_in_ms: 1000

replica:
  enabled: true # Auto-configured by workbench for "command" type
  sync_interval_ms: 5000

wasm:
  enabled: true
  port: 0 # Auto-assigned by workbench (3000+)
  min_instances: 2 # WASM instance pool
  max_instances: 8
  autoscale: true

# Also: file_sizes, index, cache, logging, gc, limits, security

Key fields:

  • service_type: with "command", the workbench auto-creates a query replica on port+1
  • backup_dir: destination for snapshots from planctl backup and scheduled snapshot tasks. Should live on a different disk than base_dir; it is deliberately not derived from base_dir for that reason.
  • wasm.port: 0 means auto-assigned by the workbench (formula: 3000 + (sdb_port - 24000) / 2, so sdb port 24006 maps to WASM port 3003)
  • wasm.min_instances / wasm.max_instances: bounds of the WASM instance pool for concurrent request handling

Environment Variables

Variable          Purpose               Default
----------------  --------------------  -----------------------------
PLANCTL_SERVER    Workbench URL         http://127.0.0.1:2369
PLANCTL_UID       Admin username        admin
PLANCTL_KEY       Admin API key         (required)
PLANCTL_PROFILE   Config profile name   default_profile from config

CI/CD example:

bash
PLANCTL_SERVER=https://prod:2369 PLANCTL_KEY=$PROD_KEY planctl deploy --all

Workflows

New WASM Service

bash
planctl init inventory --type wasm
cd inventory
# Edit src/domain/item.zig (your entity)
# Edit src/api/*_handler.zig (your handlers)
# Edit src/zsx/*.zsx (your templates)
zig build test                 # Verify
zig build dev                  # Test locally
planctl deploy --service inventory # Deploy to workbench

New Shell App with Services

bash
planctl init myapp --type app
cd myapp

# Create services
mkdir services
planctl init product --type wasm
mv product services/product

planctl init orders --type wasm
mv orders services/orders

# Deploy everything
planctl deploy --all

# Monitor
planctl status

Development Loop

bash
# Terminal 1: watch templates
planctl --watch src/zsx/ src/ui/

# Terminal 2: dev server
planctl build dev

# Terminal 3: deploy when ready
planctl deploy --service product

Adding a Dependency

bash
# Append the package to build.zig.zon with the correct hash
zig fetch --save=yaml https://github.com/kubkon/zig-yaml/archive/main.tar.gz

# Wire it into your module graph in build.zig: b.dependency("yaml", .{})
# + .path("src/root.zig") + b.createModule + addImport (see the sketch
# below and the framework references), then build normally
planctl build run
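
A minimal sketch of that build.zig wiring, assuming the dependency was saved under the name "yaml" and its root source file is src/root.zig:

zig
// build.zig excerpt (sketch): construct the module yourself rather than
// asking the dependency's own build.zig for its module instance
const yaml_dep = b.dependency("yaml", .{});
const yaml_mod = b.createModule(.{
    .root_source_file = yaml_dep.path("src/root.zig"),
});
exe.root_module.addImport("yaml", yaml_mod);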

Fresh Clone / CI Build

bash
git clone git@github.com:yourorg/yourapp.git
cd yourapp
planctl build test   # fetches deps and runs zig build test

Redeploy After Code Change

bash
# Single service
planctl deploy --service product

# Everything
planctl deploy --all

# Just restart (no rebuild)
planctl restart --service product

Safe Migration / Risky Change

bash
# 1. Take a snapshot before the change (uses service's backup_dir)
planctl backup product --name pre-migration

# 2. Deploy the change
planctl deploy --service product

# 3a. If the change worked, move on.
# 3b. If it didn't, restore the snapshot
planctl restore /mnt/backups/product/product-pre-migration \
  --service product \
  --target /var/lib/planck/product

Off-Host Backup Copy

bash
# 1. Snapshot + pack in one step
planctl backup product --archive

# 2. rsync the archive to an offsite server
rsync /mnt/backups/product/product-1737000000000.tar.zst \
  backup-host:/srv/planck-archives/

Troubleshooting

planctl: command not found

  • Add ~/.planck/bin to your PATH
  • Or set it in your shell profile: export PATH="$HOME/.planck/bin:$PATH"

Error: 'key' is required but not found

bash
mkdir -p ~/.planctl
cat > ~/.planctl/config.yaml << 'EOF'
server: http://127.0.0.1:2369
uid: admin
key: UGxhbmNrX0RlZmF1bHRfQWRtaW5fS2V5XzAwMTA=
EOF

Error: app.yaml not found

  • Run planctl commands from the project root (where app.yaml lives)
  • Or from a services/<name>/ subdirectory (planctl looks two levels up)

Error: Build failed

  • Check zig build output for compilation errors
  • Verify dependencies in build.zig.zon point to valid paths

zig fetch --save failed

  • zig fetch --save=<name> <url> needs a build.zig in the current directory; run it from a project root.
  • Check network reachability: the URL must be reachable from this machine.
  • Confirm the URL points at a tarball (.tar.gz, .tar.zst) or a git endpoint; a plain HTML page won't work.

Modules disambiguated to bson0, utils0, etc.

  • A dependency is being constructed twice in your build graph. This is almost always caused by calling b.dependency("foo", .{}).module("foo") somewhere: that asks the dep's own build.zig for its module instance, and in deep transitive graphs you end up with multiple instances of the same logical package.
  • Fix: switch to b.dependency("foo", .{}).path("src/root.zig") + b.createModule(...) + explicit addImport calls, as sketched under "Adding a Dependency" above; see the framework references for the full pattern.

planctl clean didn't remove a file

  • clean only removes files starting with // AUTO-GENERATED by planctl
  • Hand-written .zig files in src/ui/ are preserved

Workbench connection refused

  • Verify workbench is running: curl http://127.0.0.1:2369/api/apps
  • Check PLANCTL_SERVER or the server field in ~/.planctl/config.yaml

planctl backup: backup_dir is not set in service config

  • Add backup_dir to the service's config.yaml (pick a path on a different disk than base_dir) and redeploy, or
  • Pass --root <path> on the command line for a one-off.

planctl backup --archive: tar or zstd not found

  • The archive step shells out to the system tar + zstd binaries. Install them (brew install zstd, apt install zstd, etc.) or drop the flag and archive out-of-band.

planctl restore: snapshot integrity check failed

  • The manifest's SHA-256 doesn't match the data/WASM file bytes. The snapshot is corrupt; do not attempt to salvage it by editing manifest.json. Pick a different snapshot or re-run the backup.

Service didn't come back up after restore

  • By design. Restore stops the service, writes new files, then asks the workbench to start it; if any step after the stop fails, the service stays stopped for operator inspection. Check ~/.planck/logs/workbench-*.out.log for the failure reason, fix the underlying issue, then run planctl start --service <name>.