Introduction

Planck is a database, a web server, and a template engine running inside a single OS process. You build a service against it by writing one WebAssembly module (handlers, queries, templates, business rules), and the database loads that module into its own address space. There is no app server. There is no ORM. There is no message broker between your handler and your store.

The whole pitch is this: most of the boxes on a typical microservice diagram exist because the application and the database are different processes. Collapse them into one and the boxes disappear.

This document elaborates the architecture in detail: what "discrete, dense, sovereign" actually means in code, what the four pillars look like at runtime, and which of the three architectural patterns (SCS, monolith, and microservices) you should pick for a given problem.


The unit of deployment: discrete, dense, sovereign

A Planck service is one .wasm file; that is what we call discrete, dense, and sovereign. Those words aren't decorative - they describe three concrete properties of the binary.

Discrete

A service maps to exactly one bounded context. orders.wasm owns order placement, line items, totals, and the lifecycle from pending to paid. It does not own inventory. It does not own delivery routing. Cross-context work happens by emitting and observing events on the host bus (see pillar 04), not by reaching into another service's tables.

The binary boundary is the bounded-context boundary. There is no shared schema across services, no joined query that crosses domains, no library of "common" types pulled in everywhere. Two services talking about an "order" might use entirely different in-memory patterns, and that's expected.

Dense

A Planck service is one process containing the DB engine, the web server, and the WASM runtime. The runtime runs your .wasm, which includes your domain-specific types, business logic, helper code, and UI. The artifact you ship and the artifact that runs are the same artifact.

Density is what makes the deployment story simple. There is no "app + database + cache + reverse-proxy + sidecar" tuple to keep in sync across environments. There is one file, one version, one checksum.

Sovereign

The service owns its data, its schema, its evolution cadence, and its release schedule. Nothing outside the service can read its store directly; the only way in is through the HTTP routes the service itself exposes. If orders wants to migrate its schema, it does so without coordinating with inventory. If inventory wants to roll back to last week's binary, that doesn't affect orders at all.

Sovereignty is what lets two teams ship two services without locking each other's release calendars together. It is also what makes the "database per service" rule from microservices literature cheap here: the database is already inside the service.


The four pillars

Pillar 01, The database is the runtime

In a typical stack, "the database" is a separate process listening on a socket, and "the runtime" is your application server. They speak a wire protocol (Postgres frontend/backend, MySQL, Redis, Mongo). Every read and write crosses a kernel boundary, gets serialized, gets parsed on the other side, and answers come back the same way.

In Planck, the database is the runtime. The DB process loads your .wasm module via Wasmer. When an HTTP request arrives, the DB:

  1. Routes it to your handler (registered when the module loaded).
  2. Calls the handler in-process.
  3. Hands back whatever bytes the handler produced.

There is no separate app server. There is no second deploy artifact. There is no "is the database up?" question that's different from "is the service up?"; they are the same process. If the DB is running, your handlers are running.
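What that looks like from the module's side, as a minimal sketch: planck, route, Context, and html here are hypothetical names for the host bindings, not a confirmed API.

zig
// Hypothetical host bindings; the real import path and names may differ.
const planck = @import("planck");

// The host calls init() once when it loads the module; routes registered
// here are what step 1 above dispatches to.
export fn init() void {
    planck.route(.GET, "/orders", listOrders);
}

// The handler runs inside the database process (step 2), and the bytes it
// returns go straight back out (step 3).
fn listOrders(ctx: *planck.Context) !planck.Response {
    return ctx.html("<ul>…</ul>");
}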

What you ship:

  • One .wasm file (your service).
  • A config.yaml (ports, paths).
  • That's it.

What you don't ship:

  • An app server image.
  • A reverse proxy config.
  • A sidecar container.
  • A database driver. There is no driver, there is no wire.

Pillar 02, Direct access to the store

Because the handler and the store live in the same process, queries don't go over a socket. They go through a typed query builder that the DB exposes to the WASM module via host functions. The result is that read paths look like this (illustrative):

zig
// Illustrative: the builder below runs in-process, walking the store's
// in-memory index; no connection, no wire protocol.
const orders = try ctx.store("orders").query()
    .where("customer_id", .eq, customer_id)
    .where("status", .eq, "pending")
    .limit(20)
    .run();

That call doesn't open a connection. It doesn't allocate a result-set parser. It walks the in-memory index inside the database process and hands you back rows that the WASM module reads directly from the host's memory (with a typed view layered on top).

What this kills:

  • ORMs. There is no impedance mismatch to bridge, the store already speaks your domain types.
  • Connection pools. No connection.
  • Serialization tax on hot paths. The wire is gone.
  • The N+1 problem in its classical form. A second query is the same cost as the first; "round trip" isn't a thing.

What this means for design:

  • You do not prefetch and cache to avoid round trips. Just query again at the next layer that needs it (see the sketch after this list).
  • You write business rules where they make sense, not where the wire forces you to (in ORM stacks, rules tend to migrate into the store layer; here they stay in the handler).
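For instance, a total can be computed with its own query at the point of use. A sketch reusing the illustrative builder from above; orderTotal and the row accessor get(...) are assumptions, not a documented API:

zig
// No round trip means no reason to prefetch: each layer queries for
// exactly what it needs, when it needs it.
fn orderTotal(ctx: *planck.Context, order_id: u64) !u64 {
    const items = try ctx.store("line_items").query()
        .where("order_id", .eq, order_id)
        .run();
    var total: u64 = 0;
    // get() stands in for the typed view over host memory.
    for (items) |item| total += item.get(u64, "price");
    return total; // a second .run() elsewhere costs the same as this one
}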

Pillar 03, You own the response shape

Most application frameworks pick one response shape and ask you to live in it. JSON-only API frameworks. HTML-only server-rendered frameworks. Planck picks none.

A handler returns bytes. What those bytes are is your call, per route:

  • HTML fragment for an HTMX, Datastar, or Alpine swap. The template engine (ZSX) lives in the same process: render, return bytes, done.
  • JSON for a classical REST API or a SPA fetch.
  • Server-Sent Events for live tails and dashboards.
  • Raw bytes for a CSV export or a binary blob.
  • A redirect or an empty body with a status.

The same service can mix all of these. /orders returns an HTML fragment for the dashboard; /api/orders returns JSON for an external integration; /orders/stream opens an SSE channel for live status. Three routes, one binary, one store.
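Sketched below with the same illustrative bindings (ctx.html, ctx.json, ctx.sse, and ctx.render are assumed helper names):

zig
// Three routes, one binary: each handler picks its own contract.
fn ordersPage(ctx: *planck.Context) !planck.Response {
    // HTML fragment for the dashboard, rendered by ZSX in-process.
    return ctx.html(try ctx.render("order_card.zsx", .{}));
}

fn ordersApi(ctx: *planck.Context) !planck.Response {
    const pending = try ctx.store("orders").query()
        .where("status", .eq, "pending")
        .run();
    return ctx.json(pending); // plain JSON for the external integration
}

fn ordersStream(ctx: *planck.Context) !planck.Response {
    return ctx.sse("order.status"); // live status over Server-Sent Events
}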

This is what makes Planck equally at home behind a hypermedia front end and a traditional JSON-consuming SPA. The engine is neutral; you pick the contract.

Pillar 04, Coordination is event-driven

WASM modules are sandboxed. They cannot open sockets. This is not a limitation we work around; it is the design. Without the ability to dial each other directly, services have no way to grow point-to-point coupling.

What they can do is publish and subscribe to topics on the host bus. The bus is an in-process pub/sub channel that the DB runs alongside the WASM modules. When orders finalizes a sale, it emits order.placed. Anybody listening (inventory, delivery, analytics) picks it up. Nobody is named directly. Nobody is required to be online. Adding a new consumer means zero changes to the producer.

For browser-side live updates, the bus also fronts a Server-Sent Events endpoint, so a connected client subscribes to the same topic the services publish to.
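Both sides of the bus, sketched with planck.publish and planck.subscribe standing in for whatever the host functions are actually called:

zig
const planck = @import("planck"); // hypothetical host bindings, as above

// orders.wasm, producer side: emit the event; consumers are never named.
fn finalizeSale(order_id: u64) !void {
    try planck.publish("order.placed", .{ .order_id = order_id });
}

// inventory.wasm, consumer side: wired up once at module load.
export fn init() void {
    planck.subscribe("order.placed", onOrderPlaced);
}

fn onOrderPlaced(event: planck.Event) void {
    // Update this service's local projection from the event payload.
    _ = event;
}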

What this replaces:

  • Kafka, RabbitMQ, NATS as deployment artifacts. The bus is part of the database process.
  • Service-mesh sidecars. There is no mesh, there are no sockets to mesh.
  • Point-to-point HTTP between services. A service that needs to ask another service something either: (a) listens for the answer on a topic, or (b) reads its own projection, kept in sync by an earlier event subscription.

What this asks of you:

  • Think in events, not requests, between services.
  • Maintain projections inside each service of the data it cares about from elsewhere. (This is Self-Contained Systems doctrine - see Pattern A below.)


Three patterns you can build

The engine doesn't dictate an architecture. The same .wasm toolchain can produce three different shapes; pick the one your team already understands. (And if your team has a fourth shape, the engine still doesn't care; these three are the ones architects ask about most.)

Pattern A, Self-Contained Systems (SCS)

One .wasm per bounded context. Data, logic, and UI all live in one binary. This is the Self-Contained Systems pattern from Stefan Tilkov / INNOQ.

Layout:

orders.wasm
├── stores: orders, line_items, customers (projection)
├── handlers (HTML):  /orders, /orders/:id, /checkout
├── handlers (JSON):  /api/orders, /api/orders/:id
├── templates:        order_card.zsx, checkout.zsx
└── subscribes:       inventory.reserved, payment.captured

The user-facing UI for "the orders area of the app" is served by the orders service itself. The shell app composes navigation across services but doesn't render their content. This sounds unusual to teams steeped in SPA-first thinking, but it is what makes SCS independently deployable end-to-end, front-end included.

When to pick it:

  • Most new projects most of the time.
  • Teams that want to ship features without coordinating multiple repos / deploy pipelines.
  • Apps where "the page" maps cleanly to a domain (ordering page, inventory page, delivery page).

Pattern B, Monolith

One .wasm that exposes handlers across every domain. But, and this is important, each domain still keeps its own store. Handlers are consolidated, data is partitioned by domain.

Layout:

app.wasm
├── stores: orders, customers, inventory, delivery     (per-domain)
├── handlers: /orders/*, /inventory/*, /delivery/*
└── templates: shared layout + per-domain templates

This is the "single deployable, multiple modules" pattern. You give up independent deployability of each domain in exchange for the operational simplicity of one binary, one log, one config.

When to pick it:

  • Small teams. Small enough that splitting the binary would not split the people working on it.
  • Early-stage products where boundaries between domains are still shifting.
  • Internal tools where deploy independence isn't worth the modeling cost.

The important bit: even in monolith shape, don't share a single store across domains. Per-domain stores keep open the option of splitting into Pattern A later.
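In code the discipline is small; a sketch with the same hypothetical bindings (ctx.param is also an assumption):

zig
// app.wasm: one module registers every domain's routes, but each handler
// touches only its own domain's store.
export fn init() void {
    planck.route(.GET, "/orders/:id", showOrder);
    // /inventory/* and /delivery/* register here too, each against its
    // own store; no handler joins across domains.
}

fn showOrder(ctx: *planck.Context) !planck.Response {
    // Reads "orders" and nothing else: keeping that rule is what makes a
    // later split into Pattern A mechanical.
    const order = try ctx.store("orders").query()
        .where("id", .eq, ctx.param("id"))
        .run();
    return ctx.html(try ctx.render("order_card.zsx", .{ .order = order }));
}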

Pattern C, Microservices

One .wasm per service, but every service exposes JSON and serves no HTML. The UI lives entirely in a separate SPA (React, Vue, Svelte, whatever). Each service is a classical REST/JSON microservice.

Layout (per service):

orders.wasm
├── store: orders, line_items
├── handlers (JSON only): /api/orders, /api/orders/:id
└── subscribes: inventory.reserved, payment.captured

(a separate SPA artifact talks to all the /api/* endpoints)

You get the classical microservice contract, JSON in, JSON out, but without the deployment fleet. No service mesh, no broker, no sidecars. The bus replaces the broker; the in-process DB replaces every per-service "app + DB" pair.

When to pick it:

  • You already have an SPA codebase and team, and rewriting it as hypermedia is not on the table.
  • You're integrating with a lot of non-browser consumers (mobile apps, partner APIs) where JSON is the contract.
  • The org is structured around "back-end teams" and "front-end teams" and that's not changing.

Mixing patterns

Nothing prevents you from running pattern A for most of the app and pattern C for one slice that has a heavy SPA on top. The patterns are about per-service choices, not global ones.


What's not in the stack (and why)

A useful way to understand Planck is to enumerate what's deliberately absent.

Not present                           Because
Separate app server                   The DB process runs your handlers.
ORM                                   The store already speaks your domain types via the typed query builder.
Database driver / connection pool     No wire; handler and store share an address space.
Message broker (Kafka/Rabbit/NATS)    The bus is in-process pub/sub.
Service mesh / sidecar                WASM can't open sockets; there is no mesh to provision.
Reverse proxy in front of the app     The DB is the HTTP server; the only fronting need is TLS termination.
Cache layer (Redis as cache)          Reads are direct memory access; there is no round trip to cache around.
Background-worker framework           Subscribers on the bus do background work, in the service that owns it.

Each absent piece is a deployment artifact you don't manage, a failure mode you don't debug, and a version-skew problem you don't have.