planctl User Manual
planctl is the Planck developer toolkit. It scaffolds projects, compiles ZSX templates into Zig code, and manages deployments to the Planck workbench.
Installing planctl
planctl ships as part of the Planck installer. If you've run the platform one-liner, it's already on your PATH:
macOS / Linux
curl -sSL https://plancks.io/downloads/ctl.sh | sudo sh
Windows (PowerShell as Administrator)
iwr -useb https://plancks.io/downloads/ctl.ps1 | iex
Either command installs planck, workbench, and planctl into /usr/local/bin/ (or C:\Program Files\Planck\ on Windows). The installer also creates ~/.planctl/config.yaml, pointing planctl at the local workbench with admin credentials:
server: https://workbench.example.com:2369
uid: admin
key: <your-api-key>
Quick Reference
| Command | Purpose |
|---|---|
| planctl init <name> [--type wasm\|app] | Scaffold a new project |
| planctl build [<zig-build-args>...] | Sync dependencies then run zig build |
| planctl <file.zsx> | Compile single template to stdout |
| planctl <in_dir> <out_dir> | Batch compile templates |
| planctl clean <dir> | Remove generated files |
| planctl deploy --app \| --service <name> \| --all | Build + deploy |
| planctl undeploy --app \| --service <name> \| --all | Remove from server |
| planctl start \| stop \| restart \| status | Lifecycle management |
| planctl backup <service> [--root <dir>] [--name <n>] [--archive] | Create a workbench snapshot |
| planctl restore <path> --service <name> --target <path> | Restore from a snapshot dir or .tar.zst |
Project Initialization
planctl init <project_name> [--type wasm|app]
- --type wasm (default), WASM service project (compiles to .wasm, runs inside Planck)
- --type app, native shell app project (zeish HTTP server)
WASM Service (planctl init myservice --type wasm)
myservice/
build.zig Build configuration
build.zig.zon Package manifest
config.yaml Planck service config
src/
app.zig WASM entry (exports init + process)
dev.zig Native dev server (same handlers)
domain/
item.zig Entity, Schema, param/body types
api/
find_all_items_handler.zig
find_item_by_id_handler.zig
create_item_handler.zig
update_item_handler.zig
delete_item_handler.zig
zsx/ Hand-edit .zsx templates here
item_list.zsx
ui/ AUTO-GENERATED (never edit)
public/ Static files
tests/
domain_test.zig
    schema_test.zig
Shell App (planctl init myapp --type app)
myapp/
build.zig
build.zig.zon
src/
main.zig Shell server entry
api/
example.zig Example handler
zsx/
ui/
public/
index.html
index.css
  services/           Subdirectory for WASM services
Build Targets
Use planctl build <target> to fetch dependencies and forward to zig build <target> in one step. Plain zig build works too once Zig has fetched the deps from build.zig.zon.
WASM service:
planctl build # Default: build WASM module
planctl build wasm # Explicit WASM target
planctl build dev # Native dev server (http://127.0.0.1:3000)
planctl build test # Run domain + schema tests
planctl build preprocess # ZSX compilation only
Shell app:
planctl build # Build native executable
planctl build run # Build + run server
ZSX Template Compiler
ZSX is a JSX-like template syntax that compiles to Zig code. Templates produce HTML by appending directly to an ArrayList(u8); there is no virtual DOM and no runtime overhead.
Compilation
# Single file to stdout
planctl src/zsx/item_list.zsx
# Batch transform directory
planctl src/zsx/ src/ui/
# Explicit target language
planctl --target zig src/zsx/ src/ui/
# Clean generated files (only removes files with AUTO-GENERATED header)
planctl clean src/ui/
Template Syntax
HTML Elements
<div class="container">
<h1>Hello World</h1>
<br />
</div>
Expressions
Any Zig expression inside {}. Strings are HTML-escaped automatically.
<span>{user.name}</span>
<p>Total: {count + 1}</p>
<div>{formatDate(item.created_at)}</div>
For Loops
{for item in self.items}
<tr>
<td>{item.id}</td>
<td>{item.name}</td>
</tr>
{/for}
Conditionals
{if self.items.len == 0}
<p>No items found.</p>
{else}
<p>{self.items.len} items</p>
{/if}
Dynamic Attributes
<div class={myVariable}>...</div>
<button data-id="{item.id}">Click</button>
Components
PascalCase tags are component calls:
<MyComponent prop="value" />
<Card title="Title">Content</Card>
Complete Example
Source (src/zsx/item_list.zsx):
const std = @import("std");
const Item = @import("../domain/item.zig").Item;
pub const ItemList = struct {
items: []const Item,
pub fn render(self: ItemList, out: *std.ArrayList(u8), allocator: std.mem.Allocator) !void {
return (
<div class="page">
<h1>Items</h1>
{if self.items.len == 0}
<p>No items found.</p>
{else}
<table>
{for item in self.items}
<tr>
<td>{item.id}</td>
<td>{item.name}</td>
</tr>
{/for}
</table>
{/if}
</div>
);
}
};
Output (src/ui/item_list.zig): Auto-generated Zig code with appendSlice for static HTML and appendValue for dynamic expressions. HTML escaping is applied automatically to string values.
Deployment
Configuration
File: ~/.planctl/config.yaml
server: http://127.0.0.1:2369
uid: admin
key: <your-api-key>
Multi-profile:
default_profile: dev
profiles:
- name: dev
server: http://127.0.0.1:2369
uid: admin
key: dev-key
- name: prod
server: https://prod.workbench.internal:2369
uid: admin
  key: prod-key
Resolution order (highest priority first):
- CLI flags (--server, --uid, --key, --profile)
- Environment variables (PLANCTL_SERVER, PLANCTL_UID, PLANCTL_KEY, PLANCTL_PROFILE)
- Selected profile from config file
- Top-level flat fields in config file
- Defaults: server=http://127.0.0.1:2369, uid=admin
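For example, because CLI flags sit above environment variables, an explicit --server wins even when PLANCTL_SERVER is set (a small sketch using only the flags and variables documented above):
# The --server flag takes precedence; PLANCTL_SERVER is ignored for this run
PLANCTL_SERVER=https://staging.internal:2369 planctl deploy --all --server https://prod.internal:2369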
Deploying to Specific Profiles
Profiles let you target different environments (dev, staging, prod) from the same machine without editing config files.
Setup (~/.planctl/config.yaml):
default_profile: dev
profiles:
- name: dev
server: http://127.0.0.1:2369
uid: admin
key: dev-api-key
- name: staging
server: https://staging.internal:2369
uid: ci-admin
key: staging-api-key
- name: prod
server: https://prod.internal:2369
uid: deploy-admin
    key: prod-api-key
Selecting a profile:
# Use --profile flag (highest priority)
planctl deploy --all --profile prod
# Or set PLANCTL_PROFILE env var
export PLANCTL_PROFILE=staging
planctl deploy --all
# Or rely on default_profile in config.yaml
# (deploys to "dev" in this example)
planctl deploy --all
Profile selection order:
- --profile CLI flag
- PLANCTL_PROFILE environment variable
- default_profile field in config.yaml
- First profile in the list
Mixing profiles with overrides:
You can override individual fields from a profile:
# Use staging profile but override the server
planctl deploy --all --profile staging --server https://staging-2.internal:2369
# Use prod profile but override the key from env
PLANCTL_KEY=$ROTATED_KEY planctl deploy --all --profile prod
Profile resolution. Each field of the effective config (server, uid, key) is resolved independently, picking the first non-empty source from this list:
- CLI flags, --profile, --server, --key
- Environment variables, PLANCTL_PROFILE, PLANCTL_SERVER, PLANCTL_KEY
- Named profile in ~/.planctl/config.yaml (selected via --profile or the file's default_profile)
- Flat fields, top-level server/uid/key in ~/.planctl/config.yaml
- Built-in defaults, localhost:2369, admin
Resolution is per-field, not per-source: you can supply only the key on the CLI and let everything else come from the profile.
CI/CD example:
# GitHub Actions, deploy to staging on PR merge
- name: Deploy to staging
env:
PLANCTL_PROFILE: staging
PLANCTL_KEY: ${{ secrets.STAGING_KEY }}
run: planctl deploy --all
# Deploy to production on release tag
- name: Deploy to production
run: planctl deploy --all --profile prod --key ${{ secrets.PROD_KEY }}
Error handling:
If a profile name doesn't exist:
Error: profile 'staging' not found in /Users/you/.planctl/config.yaml.
Available profiles:
- dev
- prod
App Manifest
Each project needs an app.yaml in the root:
name: eshop
description: "eShop microservices demo"Deploy Commands
# Deploy shell app (builds + uploads binary + static files)
planctl deploy --app
# Deploy single WASM service (builds + uploads WASM)
planctl deploy --service product
# Deploy everything
planctl deploy --all
Common flags:
--dry-run # Print what would happen, skip network calls
--server <url> # Override workbench URL
--uid <user> # Override admin user
--key <api-key> # Override admin key
--profile <name> # Select config profile
What planctl deploy --app Does
In order:
- Read app.yaml from the project root.
- Authenticate to the workbench (POST /api/system-db/connect).
- Ensure the app record exists (POST /api/apps).
- Run zig build -Doptimize=ReleaseFast.
- Upload the compiled binary (POST /api/deploy-app).
- For each file under public/, upload it (POST /api/deploy-app).
- Restart the app (POST /api/app-lifecycle).
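To preview this sequence without touching the workbench, combine it with the --dry-run flag listed under Common flags:
planctl deploy --app --dry-run   # prints what would happen, skips the network calls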
What planctl deploy --service <name> Does
In order:
- Read app.yaml to determine the parent app name.
- Authenticate to the workbench.
- Ensure the parent app record exists.
- Run zig build in services/<name>/.
- Read services/<name>/config.yaml.
- Register the service with the workbench (POST /api/deploy).
- Read the compiled WASM from zig-out/wasm/<name>.wasm.
- Upload the WASM, base64-encoded (POST /api/deploy).
The workbench auto-restarts the service with the new WASM module.
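A quick way to confirm the rollout is to pair the deploy with the status command described under Lifecycle Commands below:
planctl deploy --service product
planctl status   # the service should report as running again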
What planctl deploy --all Does
- Deploys the shell app (planctl deploy --app)
- Scans the services/ directory
- Deploys each subdirectory as a WASM service
- Individual failures are logged but don't abort remaining services
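This is roughly equivalent to running the per-target commands yourself (a sketch that assumes each services/ subdirectory is named after its service):
planctl deploy --app
for svc in services/*/; do
  planctl deploy --service "$(basename "$svc")"
done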
Undeploy Commands
planctl undeploy --service product # Remove one service
planctl undeploy --app # Remove app (services must be removed first)
planctl undeploy --all # Remove all services + app
planctl undeploy --all --force # Skip confirmation prompt
Lifecycle Commands
planctl start --all # Start app + all services
planctl stop --service product # Stop one service
planctl restart --app # Restart shell app
planctl status # Show running status (default: --all)
Status output:
SERVICE APP STATE PORT PID CPU% RSS(MB)
-------------------- ---------- ---------- -------- -------- -------- --------
product.db.command eshop running 24006 12345 2.3 45.6
order.db.command eshop running 24016 12346 1.1 32.1
kitchen.db.command eshop running 24020 12347 0.5 28.4
Backup & Restore
planctl backup / planctl restore wrap the workbench's snapshot API. A snapshot is a self-contained directory holding the DB data (data.shinydb), the currently-deployed WASM binary (service.wasm), the service config (service.yaml), and a manifest.json with SHA-256 integrity hashes. Restoring one rebuilds the whole service; no separate WASM redeploy is needed.
Create a snapshot
# Uses the service's configured backup_dir (from its config.yaml)
planctl backup product
# Override the snapshot root for a one-off (e.g. ad-hoc offsite copy)
planctl backup product --root /mnt/offsite/planck
# Custom snapshot dir name (defaults to {service}-{timestamp_ms})
planctl backup product --name pre-migration
# Also produce a portable .tar.zst next to the snapshot
planctl backup product --archive
| Flag | Purpose |
|---|---|
| --root <dir> | Override the snapshot root directory. Defaults to backup_dir from the target service's config.yaml. |
| --name <name> | Snapshot subdirectory name. Default: {service}-{timestamp_ms}. |
| --archive | Also pack the snapshot as {snap_dir}.tar.zst. Requires tar + zstd on PATH. |
Prerequisites: the target service must have backup_dir set in its config.yaml, or you must pass --root. The workbench refuses to default to a path under base_dir; the point of backups is landing on a different disk.
Restore a snapshot
# From an unpacked snapshot dir
planctl restore /mnt/backups/product-1737000000000 \
--service product \
--target /var/lib/planck/product
# From a .tar.zst archive (auto-unpacked into the same parent dir)
planctl restore /mnt/offsite/product-1737000000000.tar.zst \
--service product \
  --target /var/lib/planck/product
| Flag | Purpose |
|---|---|
| --service <name> | Target service to restore into. Required. |
| --target <path> | Destination data directory on the workbench host. Required. |
Restore flow (driven by the workbench):
- Read manifest.json and verify the SHA-256 of data.shinydb + service.wasm.
- Stop the running service process.
- If the snapshot carries a WASM binary, upload it into the service's deployment dir.
- Engine-restore data.shinydb into --target.
- Start the service back up.
Any failure after the stop step leaves the service stopped for operator inspection; there is no auto-rollback.
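If you want to spot-check a snapshot before restoring it, the recorded hashes can be recomputed by hand (a sketch; compare the output against whatever hash fields your manifest.json actually records):
cd /mnt/backups/product-1737000000000
shasum -a 256 data.shinydb service.wasm   # or sha256sum on Linux
# compare against the SHA-256 values stored in manifest.json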
Scheduling snapshots
Recurring snapshots are configured in the workbench UI (Schedules panel) as task_type: snapshot. planctl has no dedicated scheduling subcommand; the scheduler lives server-side so it survives CLI sessions.
Retention is explicitly not managed yet. Snapshots are ~3× the size of a plain .shinydb backup (data + WASM + config). Pair the schedule with external cleanup until retention ships:
# Keep 14 days of snapshots (run via cron on the backup volume's host)
find /mnt/backups/planck -maxdepth 1 -type d -mtime +14 \
-exec rm -rf {} \;
find /mnt/backups/planck -maxdepth 1 -name "*.tar.zst" -mtime +14 \
  -delete
Service Configuration
Each WASM service has a config.yaml that defines its Planck instance. Generated by planctl init with sensible defaults.
name: product
address: "0.0.0.0"
service_type: command # "command" (primary + replica) or "standalone"
backup_dir: "/mnt/backups/product" # Default snapshot/backup output (NOT under base_dir)
max_sessions: 128
tls:
enabled: false
session:
idle_timeout_ms: 604800000 # 7 days
buffers:
memtable: 16777216 # 16 MB
vlog: 4194304 # 4 MB
wal: 262144 # 256 KB
durability:
enabled: true
flush_interval_in_ms: 1000
replica:
enabled: true # Auto-configured by workbench for "command" type
sync_interval_ms: 5000
wasm:
enabled: true
port: 0 # Auto-assigned by workbench (3000+)
min_instances: 2 # WASM instance pool
max_instances: 8
autoscale: true
# Also: file_sizes, index, cache, logging, gc, limits, security
Key fields:
- service_type: command, the workbench auto-creates a query replica on port+1
- backup_dir, destination for snapshots triggered by planctl backup or scheduled snapshot tasks. Should live on a different disk than base_dir; it's deliberately not derived from base_dir for that reason.
- wasm.port: 0, auto-assigned by the workbench (formula: 3000 + (sdb_port - 24000) / 2)
- wasm.min_instances / max_instances, WASM instance pool for concurrent request handling
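For example, applying the wasm.port formula to the product service from the status table above (its primary listens on 24006):
# wasm.port = 3000 + (24006 - 24000) / 2 = 3003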
Environment Variables
| Variable | Purpose | Default |
|---|---|---|
| PLANCTL_SERVER | Workbench URL | http://127.0.0.1:2369 |
| PLANCTL_UID | Admin username | admin |
| PLANCTL_KEY | Admin API key | (required) |
| PLANCTL_PROFILE | Config profile name | default_profile from config |
CI/CD example:
PLANCTL_SERVER=https://prod:2369 PLANCTL_KEY=$PROD_KEY planctl deploy --all
Workflows
New WASM Service
planctl init inventory --type wasm
cd inventory
# Edit src/domain/item.zig (your entity)
# Edit src/api/*_handler.zig (your handlers)
# Edit src/zsx/*.zsx (your templates)
zig build test # Verify
zig build dev # Test locally
planctl deploy --service inventory # Deploy to workbench
New Shell App with Services
planctl init myapp --type app
cd myapp
# Create services
mkdir services
planctl init product --type wasm
mv product services/product
planctl init orders --type wasm
mv orders services/orders
# Deploy everything
planctl deploy --all
# Monitor
planctl status
Development Loop
# Terminal 1: watch templates
planctl --watch src/zsx/ src/ui/
# Terminal 2: dev server
planctl build dev
# Terminal 3: deploy when ready
planctl deploy --service product
Adding a Dependency
# Append the package to build.zig.zon with the correct hash
zig fetch --save=yaml https://github.com/kubkon/zig-yaml/archive/main.tar.gz
# Wire it into your Modules graph in build.zig (b.dependency("yaml", .{})
# .path("src/root.zig") + b.createModule + addImport, see the framework
# references for the full pattern), then build normally
planctl build run
Fresh Clone / CI Build
git clone git@github.com:yourorg/yourapp.git
cd yourapp
planctl build test # fetches deps and runs zig build test
Redeploy After Code Change
# Single service
planctl deploy --service product
# Everything
planctl deploy --all
# Just restart (no rebuild)
planctl restart --service product
Safe Migration / Risky Change
# 1. Take a snapshot before the change (uses service's backup_dir)
planctl backup product --name pre-migration
# 2. Deploy the change
planctl deploy --service product
# 3a. If the change worked, move on.
# 3b. If it didn't, restore the snapshot
planctl restore /mnt/backups/product/product-pre-migration \
--service product \
  --target /var/lib/planck/product
Off-Host Backup Copy
# 1. Snapshot + pack in one step
planctl backup product --archive
# 2. rsync the archive to an offsite server
rsync /mnt/backups/product/product-1737000000000.tar.zst \
  backup-host:/srv/planck-archives/
Troubleshooting
planctl: command not found
- Add ~/.planck/bin to your PATH
- Or set it in your shell profile:
export PATH="$HOME/.planck/bin:$PATH"
Error: 'key' is required but not found
mkdir -p ~/.planctl
cat > ~/.planctl/config.yaml << 'EOF'
server: http://127.0.0.1:2369
uid: admin
key: UGxhbmNrX0RlZmF1bHRfQWRtaW5fS2V5XzAwMTA=
EOF
Error: app.yaml not found
- Run planctl commands from the project root (where app.yaml lives)
- Or from a services/<name>/ subdirectory (planctl looks two levels up)
Error: Build failed
- Check zig build output for compilation errors
- Verify dependencies in build.zig.zon point to valid paths
zig fetch --save failed
- zig fetch --save=<name> <url> needs a build.zig in the current directory; run it from a project root.
- Check network reachability: the URL must be reachable from this machine.
- Confirm the URL points at a tarball (.tar.gz, .tar.zst) or a git endpoint; a plain HTML page won't work.
Modules disambiguated to bson0, utils0, etc.
- A dep is being constructed twice in your build graph. This is almost always caused by calling b.dependency("foo", .{}).module("foo") somewhere: that asks the dep's own build.zig for its module instance, and in deep transitive graphs you end up with multiple instances of the same logical package.
- Fix: switch to b.dependency("foo", .{}).path("src/root.zig") + b.createModule(...) + explicit addImport calls. See the framework references for the full pattern.
planctl clean didn't remove a file
- clean only removes files starting with // AUTO-GENERATED by planctl
- Hand-written .zig files in src/ui/ are preserved
Workbench connection refused
- Verify workbench is running: curl http://127.0.0.1:2369/api/apps
- Check PLANCTL_SERVER or the server field in ~/.planctl/config.yaml
planctl backup: backup_dir is not set in service config
- Add backup_dir to the service's config.yaml (pick a path on a different disk than base_dir) and redeploy, or
- Pass --root <path> on the command line for a one-off.
planctl backup --archive: tar or zstd not found
- The archive step shells out to the system tar + zstd binaries. Install them (brew install zstd, apt install zstd, etc.) or drop the flag and archive out-of-band.
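If you'd rather archive out-of-band, any tar with zstd support works; a sketch assuming GNU tar 1.31+ on the backup host:
tar --zstd -cf product-pre-migration.tar.zst -C /mnt/backups/product product-pre-migration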
planctl restore: snapshot integrity check failed
- The manifest's SHA-256 doesn't match the data/WASM file bytes. The snapshot is corrupt; do not attempt to salvage it by editing manifest.json. Pick a different snapshot or re-run the backup.
Service didn't come back up after restore
- By design. Restore stops the service, writes new files, then asks the workbench to start it; if any step after stop fails, the service stays stopped for operator inspection. Check ~/.planck/logs/workbench-*.out.log for the failure reason, fix the underlying issue, then planctl start --service <name>.