Zero Distance Architecture
A pattern for building applications where code runs where data lives.
The premise
Most modern systems are slow not because they do too much work, but because they spend their time talking between processes. An app server calls a driver, the driver crosses a network, the database does ~5ms of real work, and the result makes the same trip back through serialization, deserialization, ORM hydration, and template rendering. Of the 80-200ms a user feels, almost none of it was computation.
Distance is the tax. Zero Distance Architecture is the design choice to stop paying it.
Definition
Zero Distance Architecture is the discipline of placing application code in the same process as the data it operates on, so that every query is a function call.
Five principles follow from that definition. None of them are negotiable inside a Zero Distance system. They constrain the shape of the deployment, but they unlock latency, isolation, and operational clarity that distributed systems can never reach.
The five principles
1. Co-location
Application code shares an address space with its primary data store.
Not "on the same host." Not "in the same Kubernetes pod." Not "behind a sidecar proxy." In the same process, sharing the same memory, with no marshalling between them.
A query is a function call. A write is a function call. Latency between code and data is bounded by L1/L2 cache, not by Ethernet.
| Anti-pattern | Zero Distance |
|---|---|
| App server → DB driver → DB process | App code embedded in DB process |
| Sidecar → REST → DB | App and DB share an address space |
| Lambda + RDS | A single binary that owns both |
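The contrast in the table can be sketched in a few lines of Go: when code and data share an address space, a query is just a method call. This is a minimal illustration only; the `Store` type below is an in-memory stand-in for a real embedded storage engine, not any actual Planck API.

```go
package main

import (
	"fmt"
	"sync"
)

// Store stands in for an embedded storage engine. The data lives in the
// same address space as the application code, so queries and writes are
// method calls: no driver, no wire protocol, no (de)serialization.
type Store struct {
	mu   sync.RWMutex
	rows map[string]string
}

func NewStore() *Store { return &Store{rows: make(map[string]string)} }

// Put is a write: its latency is bounded by memory access, not Ethernet.
func (s *Store) Put(key, val string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.rows[key] = val
}

// Get is a query: a function call into co-located data.
func (s *Store) Get(key string) (string, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.rows[key]
	return v, ok
}

func main() {
	s := NewStore()
	s.Put("user:42", "Ada")
	name, _ := s.Get("user:42")
	fmt.Println(name)
}
```

The point is not the map but the call path: there is no layer between the application and its data that could add a millisecond.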
2. One process per service
The unit of deployment is a single process that owns its code, its storage, its HTTP routes, and its lifecycle. Starting it is one syscall. Stopping it is one syscall. Replacing it is one filesystem swap and a kill-then-spawn.
This inverts factor VI ("Processes") of the twelve-factor app, where state is externalized to a backing service. In Zero Distance, state is the service. Externalizing it would put distance back in.
3. Service sovereignty
Each service owns its own data. Nothing is shared by default: no schemas, no tables, no caches, no auth tokens. Two services that need to coordinate communicate over an explicit, network-distant interface, not a shared database.
This is the discipline microservices were supposed to bring but rarely do, because it's easier to point ten "microservices" at the same Postgres than to give each its own. Zero Distance enforces sovereignty by architecture: the database is the service. You can't share it without merging the services.
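Sovereignty can be made concrete with a small Go sketch: one service keeps its data private and exposes it only over HTTP, and a second service reaches that data exclusively through the public route. The service names, route, and `httptest` wiring here are illustrative assumptions, not part of any real system.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// newUsersService owns the users map outright. No other service can see
// it except through the HTTP interface this service chooses to expose.
func newUsersService() *httptest.Server {
	users := map[string]string{"42": "Ada"} // private to this service
	mux := http.NewServeMux()
	mux.HandleFunc("/users/", func(w http.ResponseWriter, r *http.Request) {
		id := r.URL.Path[len("/users/"):]
		name, ok := users[id]
		if !ok {
			http.NotFound(w, r)
			return
		}
		fmt.Fprint(w, name)
	})
	return httptest.NewServer(mux)
}

// lookupUser is another service's only path to user data: an explicit,
// network-distant call, never a shared table or schema.
func lookupUser(usersURL, id string) (string, error) {
	resp, err := http.Get(usersURL + "/users/" + id)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	users := newUsersService()
	defer users.Close()
	name, _ := lookupUser(users.URL, "42")
	fmt.Println(name)
}
```

The boundary is visible in the code: crossing it requires a URL, not a foreign key.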
4. Local backpressure
Flow control is bounded by the process boundary. A slow client doesn't back up across a network; it backs up across a function call. A slow disk doesn't fan out into queue depth on a remote DB; it slows the producer in the same process.
This makes backpressure trivial to reason about: there is no intermediate buffer that can grow without bound between producer and consumer. The producer waits when the consumer is slow because they share the same scheduler.
5. Operational atomicity
The unit of operation (backup, restore, deploy, observe, scale) is the service. Not the cluster, not the namespace, not the schema. One service is one thing you back up, one thing you upgrade, one thing you move.
Multi-service operations (cross-service consistency, distributed transactions) are pushed to the application layer, where the developer made the choice to fan out, rather than absorbed into a generic distributed-DB substrate that no one fully understands.
What Zero Distance is not
It is not a rejection of distributed systems. Distance is fine; it just shouldn't be the default between an app and its primary data. Zero Distance services compose into distributed systems via explicit, visible network boundaries: boundaries the developer chose, not boundaries inherited from a stack.
It is not a return to the monolith. A monolith merges code; Zero Distance separates services but co-locates code-with-its-data. A correctly built Zero Distance system can be a monolith (one service), microservices (many small services), or Self-Contained Systems (one service per business capability). The architectural pattern is yours; Zero Distance is the floor under it.
It is not "stored procedures." Stored procedures couple code to a specific SQL dialect, run in a constrained, single-language environment, and typically have no access to the network or HTTP. Zero Distance services run arbitrary WASM modules, in any language that compiles to WASM, with full network and HTTP capabilities. They are applications, not fragments of a query plan.
Patterns within Zero Distance
Monolith
One service. All your data inside it. All your code inside it. Deployed as one binary. Trivially Zero Distance: nothing crosses a process boundary.
This is the right default for a small team or a young product. It collapses operational complexity to its minimum without giving up the latency advantages of co-location.
Self-Contained Systems (SCS)
One service per business capability. Each owns its UI, its API, its data. Cross-system coordination is explicit and rare.
Zero Distance fits SCS like a glove: each Self-Contained System is one Zero Distance service, with its own data and its own deployment. The SCS principle of "minimize integration between systems" is enforced by the architecture.
Microservices
Many small services, each owning a slice of the domain. Each service is a Zero Distance unit: code-with-data in one process. Cross-service calls happen over explicit HTTP/gRPC.
The hard failure mode of microservices, that they're often "macroservices in disguise" because they share a database, is impossible in Zero Distance: services that share data are by definition not separate services.
Trade-offs (named and accepted)
Zero Distance buys latency, isolation, and operational clarity. It pays in three places. Naming them up front prevents architectural surprise.
No cross-service joins in the database. If two services need a joined view, the join happens in application code or in a derived read-model. This is a feature, not a bug: it forces the data ownership question to the surface, where it belongs.
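What "the join happens in application code" looks like in practice can be sketched briefly: fetch rows from one service, resolve the foreign references through the other service's public read API. The types and the `lookupUser` callback below are hypothetical, standing in for real cross-service calls.

```go
package main

import "fmt"

// Order lives in one service, user names in another; there is no shared
// database in which to join them.
type Order struct {
	ID     string
	UserID string
}

// OrderView is the joined read-model, built in application code.
type OrderView struct {
	OrderID  string
	UserName string
}

// joinOrdersWithUsers performs the join the database can no longer do:
// iterate one service's rows, resolve each reference through the other
// service's public interface (modeled here as a plain function).
func joinOrdersWithUsers(orders []Order, lookupUser func(id string) string) []OrderView {
	views := make([]OrderView, 0, len(orders))
	for _, o := range orders {
		views = append(views, OrderView{OrderID: o.ID, UserName: lookupUser(o.UserID)})
	}
	return views
}

func main() {
	orders := []Order{{ID: "o1", UserID: "42"}}
	users := map[string]string{"42": "Ada"}
	views := joinOrdersWithUsers(orders, func(id string) string { return users[id] })
	fmt.Println(views)
}
```

In a real deployment `lookupUser` would be an HTTP call (ideally batched), which is precisely the ownership question surfacing: you can see who you depend on.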
Stronger discipline around service boundaries. A service can't "just add a table" in someone else's database. Schema changes are local; cross-service changes are explicit migrations.
No global queries. "Find all orders across all tenants" doesn't have a one-line answer. It's a fan-out across services, with all the latency and complexity that implies. For systems where global queries are the dominant workload (analytics, BI), Zero Distance is the wrong pattern; use a warehouse downstream.
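The shape of such a fan-out is worth seeing once: query every service concurrently, then merge. The sketch below models each service's query endpoint as a plain function; in practice each would be a network call, and the merge step is where the latency and complexity named above actually live.

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// fanOut runs one query per service concurrently and merges the results.
// No service owns the global answer; the caller assembles it.
func fanOut(services []func() []string) []string {
	var (
		mu  sync.Mutex
		all []string
		wg  sync.WaitGroup
	)
	for _, q := range services {
		wg.Add(1)
		go func(query func() []string) {
			defer wg.Done()
			rows := query()
			mu.Lock()
			all = append(all, rows...)
			mu.Unlock()
		}(q)
	}
	wg.Wait()
	// The merge step: impose an order the services never shared.
	sort.Strings(all)
	return all
}

func main() {
	tenantA := func() []string { return []string{"order-1", "order-3"} }
	tenantB := func() []string { return []string{"order-2"} }
	fmt.Println(fanOut([]func() []string{tenantA, tenantB}))
}
```

Everything a warehouse does for free (pagination, partial failure, consistent snapshots) becomes the caller's problem here, which is exactly why global-query-dominant workloads belong downstream.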
These are the same trade-offs serious microservice architectures make. Zero Distance just makes them honest.
When to use Zero Distance
- Latency-sensitive request paths. Hypermedia apps (HTMX, LiveView, Hotwire), real-time UIs, anything where response time is a primary UX feature.
- Workloads with strong service boundaries. Multi-tenant SaaS, bounded-context business apps, B2B platforms with clear domain separation.
- Operational simplicity goals. Teams that want one binary to deploy per service, not a stack of seven.
- Strict isolation requirements. Compliance regimes (HIPAA, PCI-DSS) where blast radius matters more than aggregate scale.
When not to use Zero Distance
- Analytical / BI workloads where global cross-service queries are the primary use case. Use a warehouse.
- Workloads needing distributed-write consensus (multi-region strong-consistency on writes). Use Spanner / CockroachDB and accept the latency floor that comes with consensus.
- Existing systems with deep ORM / schema sharing. The migration cost is real. Zero Distance pays back over years; it's not a quick refactor.
Reference implementation
Planck is the reference implementation of Zero Distance Architecture. One binary that bundles a storage engine, a WASM runtime for application code, an HTTP server, and a control plane. Each service is one process. Code runs in the same address space as the data it operates on.
You don't need Planck to do Zero Distance - the principles stand alone. Planck just removes the friction.
Further reading
- Self-Contained Systems - http://scs-architecture.org
- Beyond the Twelve-Factor App - useful contrast; ZDA inverts factor VI
- In-Process Databases - SQLite's positioning paper
- PostgreSQL Background Workers - the closest thing in mainstream RDBMS
Zero Distance Architecture is a public pattern. The text of this document is licensed CC BY 4.0 - copy it, quote it, build on it.