atproto relay implementation in zig zlay.waow.tech

docs: document memory tuning (thread stacks + c_allocator)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

+10 -2
docs/deployment.md
···
  - **monitoring**: prometheus + grafana via kube-prometheus-stack
  - **terraform**: `infra/zlay/` in the relay repo

+ ## memory tuning
+
+ two changes brought steady-state memory from ~6.6 GiB down to ~2.9 GiB at 2,738 connected hosts:
+
+ **thread stack sizes.** zig's default thread stack is 16 MB. with ~2,750 subscriber threads that maps ~44 GB of virtual memory. most threads just read websockets and decode CBOR, so 2 MB is generous. all `Thread.spawn` calls now pass `.{ .stack_size = 2 * 1024 * 1024 }`. the constant is defined in `main.zig` as `default_stack_size` for the threads spawned there; other modules use the literal directly.
+
+ **c_allocator instead of GeneralPurposeAllocator.** GPA is a debug allocator: it tracks per-allocation metadata and never returns freed small allocations to the OS. since zlay links glibc (`build.zig:42`), `std.heap.c_allocator` gives us glibc malloc with per-thread arenas, madvise-based page return, and production-grade fragmentation mitigation.
+
  ## resource usage

  | metric | value |
  |--------|-------|
- | memory | ~1.8 GiB steady state (1486 subscribers) |
+ | memory | ~2.9 GiB steady state (~2,750 hosts) |
  | CPU | ~1.5 cores peak |
  | limits | 8 GiB memory, 250m CPU request |
  | PVC | 20 GiB (events + RocksDB collection index) |
- | postgres | ~131 MiB |
+ | postgres | ~238 MiB |

  ## git push
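the stack-size change above can be sketched in zig as follows; `subscriberLoop` and the spawn site are illustrative stand-ins, not zlay's actual code:

```zig
const std = @import("std");

// mirrors the constant described above: 2 MB per thread instead of the default
const default_stack_size: usize = 2 * 1024 * 1024;

// hypothetical subscriber body (in zlay this would read websocket frames
// and decode CBOR for one host)
fn subscriberLoop(host_id: usize) void {
    _ = host_id;
}

pub fn main() !void {
    // the first argument to Thread.spawn is a SpawnConfig; passing
    // .stack_size here caps the thread's stack mapping at 2 MB
    const t = try std.Thread.spawn(
        .{ .stack_size = default_stack_size },
        subscriberLoop,
        .{42},
    );
    t.join();
}
```

with thousands of threads the saving is purely in virtual address space unless the default stacks were actually touched, but it also keeps RSS bounded if any thread's stack usage spikes.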
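the allocator swap is a one-line change at the allocator's definition site. a minimal sketch, assuming libc is linked (as the commit notes zlay does in `build.zig`):

```zig
const std = @import("std");

pub fn main() !void {
    // glibc malloc via zig's libc bridge; unlike GeneralPurposeAllocator
    // there is no state to initialize and no deinit()/leak check
    const allocator = std.heap.c_allocator;

    const buf = try allocator.alloc(u8, 4096);
    defer allocator.free(buf);
}
```

the trade-off is losing GPA's double-free and leak detection in exchange for glibc's per-thread arenas and its willingness to return freed pages to the OS, which is what drives the resident-memory drop described above.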