# zlay
an AT Protocol relay in zig. crawls PDS hosts directly and rebroadcasts their firehose as a single aggregated stream.
live instance: zlay.waow.tech — metrics dashboard
## what it does
a relay subscribes to every PDS on the network, verifies commit signatures, and serves the merged event stream to downstream consumers via com.atproto.sync.subscribeRepos. it also maintains a collection index for com.atproto.sync.listReposByCollection.
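the core job is fan-in: many upstream firehose subscriptions merged into one output stream. a minimal sketch of that shape in Python (the real implementation is Zig threads over websockets; host names and event payloads here are made up for illustration):

```python
import asyncio

async def pds_stream(host: str, events: list[str]):
    """Stand-in for one PDS firehose subscription (real code reads a websocket)."""
    for ev in events:
        yield f"{host}:{ev}"
        await asyncio.sleep(0)

async def relay(streams) -> list[str]:
    """Fan-in: copy every upstream event into one merged output queue."""
    merged: asyncio.Queue = asyncio.Queue()

    async def pump(stream):
        async for ev in stream:
            await merged.put(ev)

    await asyncio.gather(*(pump(s) for s in streams))
    await merged.put(None)  # sentinel: all upstreams finished

    out = []
    while (ev := await merged.get()) is not None:
        out.append(ev)
    return out

events = asyncio.run(relay([
    pds_stream("pds-a.example", ["commit1", "commit2"]),
    pds_stream("pds-b.example", ["commit3"]),
]))
```

downstream consumers then see one ordered stream instead of thousands of connections.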
## design
- direct PDS crawl — no fan-out relay in between. the bootstrap relay (bsky.network) is called once at startup for the host list, then all data flows from each PDS.
- optimistic validation — on signing key cache miss, frames pass through immediately and the DID is queued for background resolution. >99.9% cache hit rate after warmup.
- inline collection index — RocksDB with two column families for bidirectional (DID, collection) lookups. no sidecar process.
- one thread per PDS — predictable memory, no GC. ~2,750 threads is fine; most are blocked on websocket reads.
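the optimistic-validation path can be sketched like this (a Python stand-in for the Zig code; function and variable names are illustrative, not the real API):

```python
import queue

signing_keys: dict[str, str] = {"did:plc:alice": "key-a"}  # warm key cache
resolve_queue: "queue.Queue[str]" = queue.Queue()          # fed to a background DID resolver

def verify(frame: bytes, key: str) -> None:
    pass  # placeholder for the real commit-signature check

def handle_frame(did: str, frame: bytes) -> bytes:
    """Cache hit: verify before rebroadcast. Cache miss: forward the
    frame immediately and queue the DID for background resolution."""
    key = signing_keys.get(did)
    if key is None:
        resolve_queue.put(did)   # resolved later, off the hot path
        return frame             # optimistic pass-through
    verify(frame, key)           # a bad signature would drop the frame here
    return frame
```

with a >99.9% hit rate after warmup, the miss branch is rare enough that deferring resolution costs almost nothing in practice.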
## dependencies
| dependency | purpose |
|---|---|
| zat | AT Protocol primitives (CBOR, CAR, signatures, DID resolution) |
| websocket.zig | WebSocket client/server |
| pg.zig | PostgreSQL driver |
| rocksdb-zig | RocksDB bindings |
## endpoints
| endpoint | method |
|---|---|
| com.atproto.sync.subscribeRepos | WebSocket (port 3000) |
| com.atproto.sync.listRepos | GET |
| com.atproto.sync.getRepoStatus | GET |
| com.atproto.sync.getLatestCommit | GET |
| com.atproto.sync.listReposByCollection | GET |
| com.atproto.sync.listHosts | GET |
| com.atproto.sync.requestCrawl | POST |
## build
requires zig 0.15 and a C/C++ toolchain (for RocksDB).
```sh
zig build                          # build
zig build test                    # run tests
zig build -Doptimize=ReleaseSafe  # release build
```
see docs/deployment.md for production deployment and docs/backfill.md for collection index backfill.
## numbers
| metric | value |
|---|---|
| code | ~6,000 lines |
| connected PDS hosts | ~2,750 |
| memory | ~2.9 GiB steady state |
| throughput | ~600 events/sec typical |