Kieran's opinionated (and probably slightly dumb) nix config

feat: add mdbook

dunkirk.sh a104e5e9 3f382f3a

verified
+1389 -1235
+10
.github/workflows/deploy.yaml
```diff
   mkdir -p ~/.ssh
   echo "StrictHostKeyChecking accept-new" >> ~/.ssh/config

+  - name: Build docs
+    run: nix build .#packages.x86_64-linux.docs -L
+
+  - name: Publish to Cloudflare Pages
+    uses: cloudflare/wrangler-action@v3
+    with:
+      apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
+      accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
+      command: pages deploy result --project-name=infra-dunkirk
+
   - name: Deploy all configurations
     run: |
       nix run 'github:serokell/deploy-rs' -- \
```
+14
docs/book.toml
```toml
[book]
title = "dunkirk.sh"
authors = ["Kieran Klukas"]
src = "src"

[build]
build-dir = "./dist"
create-missing = false

[output.html]
default-theme = "latte"
preferred-dark-theme = "mocha"
git-repository-url = "https://github.com/taciturnaxolotl/dots"
additional-css = ["./theme/catppuccin.css"]
```
+47
docs/src/README.md
# dunkirk.sh

Kieran's opinionated NixOS infrastructure — declarative server config, self-hosted services, and automated deployments.

## Layout

```
~/dots
├── .github/workflows   # CI/CD (deploy-rs + per-service reusable workflow)
├── dots                # config files symlinked by home-manager
│   └── wallpapers
├── machines
│   ├── atalanta        # macOS M4 (nix-darwin)
│   ├── ember           # dell r210 server (basement)
│   ├── moonlark        # framework 13 (dead)
│   ├── nest            # shared tilde server (home-manager only)
│   ├── prattle         # oracle cloud x86_64
│   ├── tacyon          # rpi 5
│   └── terebithia      # oracle cloud aarch64 (main server)
├── modules
│   ├── lib
│   │   └── mkService.nix   # service factory (see Deployment section)
│   ├── home            # home-manager modules
│   │   ├── aesthetics  # theming and wallpapers
│   │   ├── apps        # app configs (ghostty, helix, git, ssh, etc.)
│   │   ├── system      # shell, environment
│   │   └── wm/hyprland
│   └── nixos           # nixos modules
│       ├── apps        # system-level app configs
│       ├── services    # self-hosted services (mkService-based + custom)
│       │   ├── restic  # backup system with CLI
│       │   └── bore    # tunnel proxy
│       └── system      # pam, wifi
├── packages            # custom nix packages
└── secrets             # agenix-encrypted secrets
```

## Machines

| Name | Platform | Role |
|------|----------|------|
| **terebithia** | Oracle Cloud aarch64 | Main server — runs all services |
| **prattle** | Oracle Cloud x86_64 | Secondary server |
| **atalanta** | macOS M4 | Development laptop (nix-darwin) |
| **ember** | Dell R210 | Basement server |
| **tacyon** | Raspberry Pi 5 | Edge device |
| **nest** | Shared tilde | Home-manager only |
+30
docs/src/SUMMARY.md
# Summary

[Overview](./README.md)

- [Installation](./installation.md)
- [Deployment](./deployment.md)
- [Services](./services/README.md)
  - [control](./services/control.md)
  - [cedarlogic](./services/cedarlogic.md)
  - [emojibot](./services/emojibot.md)
  - [herald](./services/herald.md)
  - [knot-sync](./services/knot-sync.md)
  - [battleship-arena](./services/battleship-arena.md)
  - [bore](./services/bore.md)
- [Backups](./backups.md)
- [Secrets](./secrets.md)
- [Modules](./modules/README.md)
  - [tuigreet](./modules/tuigreet.md)
  - [wifi](./modules/wifi.md)
  - [shell](./modules/shell.md)
  - [ssh](./modules/ssh.md)
  - [helix](./modules/helix.md)
  - [bore (client)](./modules/bore-client.md)
  - [pbnj](./modules/pbnj.md)
  - [wut](./modules/wut.md)

# Reference

- [mkService](./mkservice.md)
+71
docs/src/backups.md
# Backups

Services are backed up nightly with restic to Backblaze B2. Backup targets are auto-discovered from the `data.sqlite`/`data.postgres`/`data.files` declarations in mkService modules.

## Schedule

- **Time:** 02:00 daily
- **Random delay:** 0–2 hours (spreads load across services)
- **Retention:** 3 most recent snapshots, 7 daily, 5 weekly, 12 monthly

## CLI

The `atelier-backup` command provides an interactive TUI:

```bash
sudo atelier-backup          # Interactive menu
sudo atelier-backup status   # Show backup status for all services
sudo atelier-backup list     # Browse snapshots
sudo atelier-backup backup   # Trigger manual backup
sudo atelier-backup restore  # Interactive restore wizard
sudo atelier-backup dr       # Disaster recovery mode
```

## Service integration

### Automatic (mkService)

Services using `mkService` with `data.*` declarations get automatic backups:

```nix
mkService {
  name = "myapp";
  extraConfig = cfg: {
    atelier.services.myapp.data = {
      sqlite = "${cfg.dataDir}/data/app.db";  # Auto WAL checkpoint + stop/start
      files = [ "${cfg.dataDir}/uploads" ];   # Just backed up, no hooks
    };
  };
}
```

The backup system checkpoints the SQLite WAL, stops the service during the backup, and restarts it after completion.

### Manual registration

For services not using `mkService`:

```nix
atelier.backup.services.myservice = {
  paths = [ "/var/lib/myservice" ];
  exclude = [ "*.log" "cache/*" ];
  preBackup = "systemctl stop myservice";
  postBackup = "systemctl start myservice";
};
```

## Disaster recovery

On a fresh NixOS install:

1. Rebuild from the flake: `nixos-rebuild switch --flake .#hostname`
2. Run: `sudo atelier-backup dr`
3. All services are restored from the latest snapshots

## Setup

1. Create a B2 bucket and application key
2. Create agenix secrets for `restic/password`, `restic/env`, `restic/repo`
3. Enable: `atelier.backup.enable = true;`

See [modules/nixos/services/restic/README.md](https://github.com/taciturnaxolotl/dots/blob/main/modules/nixos/services/restic/README.md) for full setup details.
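For orientation, the retention policy above corresponds to restic's standard `forget` flags. This is a sketch of the equivalent manual invocation, assuming the module drives restic in the usual way (the actual unit definition lives under `modules/nixos/services/restic`):

```bash
# Hypothetical manual equivalent of the module's retention policy.
# Requires RESTIC_REPOSITORY / RESTIC_PASSWORD (or the agenix env file) to be set.
restic forget \
  --keep-last 3 \
  --keep-daily 7 \
  --keep-weekly 5 \
  --keep-monthly 12 \
  --prune
```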
+63
docs/src/deployment.md
# Deployment

Two deploy paths: **infrastructure** (NixOS config changes) and **application code** (per-service repos).

## Infrastructure

Pushing to `main` triggers `.github/workflows/deploy.yaml`, which runs `deploy-rs` over Tailscale to rebuild NixOS on the target machines.

```sh
# manual deploy
nix run 'github:serokell/deploy-rs' -- --remote-build --ssh-user kierank .
```

## Application Code

Each service repo has a minimal workflow that calls the reusable `.github/workflows/deploy-service.yml`. On push to `main` it:

1. Connects to Tailscale (`tag:deploy`)
2. SSHes in as the **service user** (e.g., `cachet@terebithia`) via Tailscale SSH
3. Snapshots the SQLite DB (if `db_path` is provided)
4. Runs `git pull` + `bun install --frozen-lockfile` + `sudo systemctl restart`
5. Health-checks the service (HTTP URL, or systemd status as a fallback)
6. Auto-rolls back on failure (restores the DB snapshot and reverts to the previous commit)

Per-app workflow — copy and change the `with:` values:

```yaml
name: Deploy
on:
  push:
    branches: [main]
  workflow_dispatch:
jobs:
  deploy:
    uses: taciturnaxolotl/dots/.github/workflows/deploy-service.yml@main
    with:
      service: cachet
      health_url: https://cachet.dunkirk.sh/health
      db_path: /var/lib/cachet/data/cachet.db
    secrets:
      TS_OAUTH_CLIENT_ID: ${{ secrets.TS_OAUTH_CLIENT_ID }}
      TS_OAUTH_SECRET: ${{ secrets.TS_OAUTH_SECRET }}
```

Omit `health_url` to fall back to `systemctl is-active`. Omit `db_path` for stateless services.

## mkService

`modules/lib/mkService.nix` standardizes service modules. A call to `mkService { ... }` provides:

- A systemd service with an initial git clone (subsequent deploys happen via GitHub Actions)
- A Caddy reverse proxy with TLS via Cloudflare DNS and optional rate limiting
- Data declarations (`sqlite`, `postgres`, `files`) that feed into automatic backups
- A dedicated system user with sudo for restart/stop/start (enables per-user Tailscale ACLs)
- Port conflict detection, security hardening, and agenix secrets

### Adding a new service

1. Create a module in `modules/nixos/services/`
2. Enable it in `machines/terebithia/default.nix`
3. Add a deploy workflow to the app repo

See `modules/nixos/services/cachet.nix` for a minimal example.
+72
docs/src/installation.md
# Installation

> **Warning:** This configuration will not work without changing the [secrets](https://github.com/taciturnaxolotl/dots/tree/main/secrets), since they are encrypted with agenix.

## macOS with nix-darwin

1. Install Nix:

   ```bash
   curl -fsSL https://install.determinate.systems/nix | sh -s -- install
   ```

2. Clone and apply:

   ```bash
   git clone git@github.com:taciturnaxolotl/dots.git
   cd dots
   darwin-rebuild switch --flake .#atalanta
   ```

## Home Manager

Install Nix, copy SSH keys, then:

```bash
curl -fsSL https://install.determinate.systems/nix | sh -s -- install --determinate
git clone git@github.com:taciturnaxolotl/dots.git
cd dots
nix-shell -p home-manager
home-manager switch --flake .#nest
```

Set up [atuin](https://atuin.sh/) for shell history sync:

```bash
atuin login
atuin import
```

## NixOS

### Using nixos-anywhere (recommended for remote)

> Only works with `prattle` and `terebithia`, which have disko configs.

```bash
nix run github:nix-community/nixos-anywhere -- \
  --flake .#prattle \
  --generate-hardware-config nixos-facter ./machines/prattle/facter.json \
  --build-on-remote \
  root@<ip-address>
```

### Using the install script

```bash
curl -L https://raw.githubusercontent.com/taciturnaxolotl/dots/main/install.sh -o install.sh
chmod +x install.sh
./install.sh
```

### Post-install

After first boot, log in as `kierank` with the default password, then:

```bash
passwd kierank
sudo mv /etc/nixos ~/dots
sudo ln -s ~/dots /etc/nixos
sudo chown -R $(id -un):users ~/dots
atuin login && atuin sync
```
+101
docs/src/mkservice.md
# mkService

`modules/lib/mkService.nix` is the service factory used by most atelier services. It takes a set of parameters and returns a NixOS module with standardized options, a systemd service, a Caddy reverse proxy, and backup integration.

## Factory parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `name` | string | *required* | Service identity — used for user, group, systemd unit, and option namespace |
| `description` | string | `"<name> service"` | Human-readable description |
| `defaultPort` | int | `3000` | Default port if not overridden in config |
| `runtime` | string | `"bun"` | `"bun"`, `"node"`, or `"custom"` |
| `entryPoint` | string | `"src/index.ts"` | Script to run (ignored if `startCommand` is set) |
| `startCommand` | string | `null` | Override the full start command |
| `extraOptions` | attrset | `{}` | Additional NixOS options for this service |
| `extraConfig` | function | `cfg: {}` | Additional NixOS config when enabled (receives the service config) |

## Options

Every mkService module creates options under `atelier.services.<name>`:

### Core

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `enable` | bool | `false` | Enable the service |
| `domain` | string | *required* | Domain for Caddy reverse proxy |
| `port` | port | `defaultPort` | Port the service listens on |
| `dataDir` | path | `"/var/lib/<name>"` | Data storage directory |
| `secretsFile` | path or null | `null` | Agenix secrets environment file |
| `repository` | string or null | `null` | Git repo URL — cloned once on first start |
| `healthUrl` | string or null | `null` | Health check URL for monitoring |
| `environment` | attrset | `{}` | Additional environment variables |

### Data declarations

Used by the backup system to automatically discover what to back up.

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `data.sqlite` | string or null | `null` | SQLite database path (WAL checkpoint + stop/start during backup) |
| `data.postgres` | string or null | `null` | PostgreSQL database name (pg_dump during backup) |
| `data.files` | list of strings | `[]` | Additional file paths to back up |
| `data.exclude` | list of strings | `["*.log", "node_modules", ...]` | Glob patterns to exclude |

### Caddy

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `caddy.enable` | bool | `true` | Enable Caddy reverse proxy |
| `caddy.extraConfig` | string | `""` | Additional Caddy directives |
| `caddy.rateLimit.enable` | bool | `false` | Enable rate limiting |
| `caddy.rateLimit.events` | int | `60` | Requests per window |
| `caddy.rateLimit.window` | string | `"1m"` | Rate limit time window |

## What it sets up

- **System user and group** — dedicated user in the `services` group with sudo for `systemctl restart/stop/start/status`
- **Systemd service** — `ExecStartPre` creates dirs as root, `preStart` clones the repo and installs deps, `ExecStart` runs the application
- **Caddy virtual host** — TLS via Cloudflare DNS challenge, reverse proxy to the localhost port
- **Port conflict detection** — assertions prevent two services from binding the same port
- **Security hardening** — `NoNewPrivileges`, `ProtectSystem=strict`, `ProtectHome`, `PrivateTmp`

## Example

Minimal service module:

```nix
let
  mkService = import ../../lib/mkService.nix;
in
mkService {
  name = "myapp";
  description = "My application";
  defaultPort = 3000;
  runtime = "bun";
  entryPoint = "src/index.ts";

  extraConfig = cfg: {
    systemd.services.myapp.serviceConfig.Environment = [
      "DATABASE_PATH=${cfg.dataDir}/data/app.db"
    ];

    atelier.services.myapp.data = {
      sqlite = "${cfg.dataDir}/data/app.db";
    };
  };
}
```

Then enable in the machine config:

```nix
atelier.services.myapp = {
  enable = true;
  domain = "myapp.dunkirk.sh";
  repository = "https://github.com/taciturnaxolotl/myapp";
  secretsFile = config.age.secrets.myapp.path;
  healthUrl = "https://myapp.dunkirk.sh/health";
};
```
+22
docs/src/modules/README.md
# Modules

Custom NixOS and home-manager modules under the `atelier.*` namespace. These wrap and extend upstream packages with opinionated defaults and structured configuration.

## NixOS modules

| Module | Namespace | Description |
|--------|-----------|-------------|
| [tuigreet](./tuigreet.md) | `atelier.apps.tuigreet` | Login greeter with 30+ typed options |
| [wifi](./wifi.md) | `atelier.network.wifi` | Declarative Wi-Fi profiles with eduroam support |
| authentication | `atelier.authentication` | Fingerprint + PAM stack (fprintd, polkit, gnome-keyring) |

## Home-manager modules

| Module | Namespace | Description |
|--------|-----------|-------------|
| [shell](./shell.md) | `atelier.shell` | Zsh + oh-my-posh + Tangled workflow tooling |
| [ssh](./ssh.md) | `atelier.ssh` | SSH config with zmx persistent sessions |
| [helix](./helix.md) | `atelier.apps.helix` | Evil-helix with 15+ LSPs, wakatime, harper |
| [bore (client)](./bore-client.md) | `atelier.bore` | Tunnel client CLI for the bore server |
| [pbnj](./pbnj.md) | `atelier.pbnj` | Pastebin CLI with language detection |
| [wut](./wut.md) | `atelier.shell.wut` | Git worktree manager |
+33
docs/src/modules/bore-client.md
# bore (client)

Interactive CLI for creating tunnels to the [bore server](../services/bore.md). Built with gum; supports HTTP, TCP, and UDP tunnels.

## Options

All options under `atelier.bore`:

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `enable` | bool | `false` | Install the bore CLI |
| `serverAddr` | string | `"bore.dunkirk.sh"` | frps server address |
| `serverPort` | port | `7000` | frps server port |
| `domain` | string | `"bore.dunkirk.sh"` | Base domain for constructing public URLs |
| `authTokenFile` | path | — | Path to frp auth token file |

## Usage

```bash
bore                   # Interactive menu
bore myapp 3000        # Quick HTTP tunnel: myapp.bore.dunkirk.sh → localhost:3000
bore myapp 3000 --auth # With OAuth authentication
bore myapp 3000 --save # Save to bore.toml for reuse
```

Tunnels can also be defined in a `bore.toml`:

```toml
[myapp]
port = 3000
auth = true
labels = ["dev"]
```
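A sketch of enabling the client in home-manager config, using the options from the table above (the server values match the defaults; the secret name is hypothetical):

```nix
atelier.bore = {
  enable = true;
  serverAddr = "bore.dunkirk.sh";
  serverPort = 7000;
  domain = "bore.dunkirk.sh";
  authTokenFile = config.age.secrets.bore-token.path;  # hypothetical secret name
};
```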
+36
docs/src/modules/helix.md
# helix

Evil-helix (vim-mode fork) with a comprehensive LSP setup, wakatime tracking on every language, and harper grammar checking.

## Options

All options under `atelier.apps.helix`:

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `enable` | bool | `false` | Enable helix configuration |
| `swift` | bool | `false` | Add sourcekit-lsp for Swift (platform-conditional) |

## Language servers

The module configures 15+ language servers out of the box:

| Language | Server |
|----------|--------|
| Nix | nixd + nil |
| TypeScript/JavaScript | typescript-language-server + biome |
| Go | gopls |
| Python | pylsp |
| Rust | rust-analyzer |
| HTML/CSS | vscode-html-language-server, vscode-css-language-server |
| JSON | vscode-json-language-server + biome |
| TOML | taplo |
| Markdown | marksman |
| YAML | yaml-language-server |
| Swift | sourcekit-lsp (when `swift = true`) |

All languages also get:

- **wakatime-ls** — coding time tracking
- **harper-ls** — grammar and spell checking

> **Note:** After install, run `hx -g fetch && hx -g build` to compile tree-sitter grammars.
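A minimal enabling sketch, using only the two options from the table above:

```nix
atelier.apps.helix = {
  enable = true;
  swift = true;  # adds sourcekit-lsp; omit on hosts without a Swift toolchain
};
```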
+25
docs/src/modules/pbnj.md
# pbnj

Pastebin CLI with automatic language detection, clipboard integration, and agenix auth.

## Options

All options under `atelier.pbnj`:

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `enable` | bool | `false` | Install the pbnj CLI |
| `host` | string | — | Pastebin instance URL |
| `authKeyFile` | path | — | Path to auth key file (e.g. agenix secret) |

## Usage

```bash
pbnj                        # Interactive menu
pbnj upload myfile.py       # Upload file (auto-detects Python)
cat output.log | pbnj upload # Upload from stdin
pbnj list                   # List pastes
pbnj delete <id>            # Delete a paste
```

Supports 25+ languages via file extension detection. Automatically copies the URL to the clipboard (wl-copy/xclip/pbcopy depending on platform).
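An enabling sketch using the options above; the instance URL and secret name are illustrative placeholders, not values from this repo:

```nix
atelier.pbnj = {
  enable = true;
  host = "https://paste.example.com";            # hypothetical instance URL
  authKeyFile = config.age.secrets.pbnj.path;    # hypothetical secret name
};
```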
+30
docs/src/modules/shell.md
# shell

Zsh configuration with an oh-my-posh prompt, syntax highlighting, fzf-tab, zoxide, and Tangled git workflow tooling.

## Options

All options under `atelier.shell`:

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `enable` | bool | `false` | Enable shell configuration |

### Tangled

Options for the `tangled-setup` and `mkdev` scripts that manage dual-remote git workflows (Tangled knot + GitHub).

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `tangled.plcId` | string | — | ATProto DID for Tangled identity |
| `tangled.githubUser` | string | — | GitHub username |
| `tangled.knotHost` | string | — | Knot git host (e.g. `knot.dunkirk.sh`) |
| `tangled.domain` | string | — | Tangled domain for repo URLs |
| `tangled.defaultBranch` | string | `"main"` | Default branch name |

### Included tools

- **`tangled-setup`** — configures a repo with `origin` pointing to knot and `github` pointing to GitHub
- **`mkdev`** — creates a new repo on both Tangled and GitHub simultaneously
- **oh-my-posh** — custom prompt showing path, git status (ahead/behind), exec time, nix-shell indicator, ZMX session, SSH hostname
- **Aliases** — `cat=bat`, `ls=eza`, `cd=z` (zoxide), and more
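A configuration sketch wiring up the Tangled options above; the DID and domain values are illustrative placeholders:

```nix
atelier.shell = {
  enable = true;
  tangled = {
    plcId = "did:plc:xxxxxxxxxxxxxxxxxxxxxxxx";  # hypothetical ATProto DID
    githubUser = "taciturnaxolotl";
    knotHost = "knot.dunkirk.sh";
    domain = "tangled.example";                   # hypothetical Tangled domain
    defaultBranch = "main";
  };
};
```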
+57
docs/src/modules/ssh.md
# ssh

Declarative SSH config with per-host options and zmx (persistent tmux-like sessions over SSH) integration.

## Options

All options under `atelier.ssh`:

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `enable` | bool | `false` | Enable SSH config management |
| `extraConfig` | string | `""` | Raw SSH config appended to the end |

### zmx

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `zmx.enable` | bool | `false` | Install zmx and autossh |
| `zmx.hosts` | list of strings | `[]` | Host patterns to auto-attach via zmx |

When zmx is enabled for a host, the SSH config injects `RemoteCommand`, `RequestTTY force`, and `ControlMaster`/`ControlPersist` settings. Shell aliases are also added: `zmls`, `zmk`, `zma`, `ash`.

### Hosts

Per-host config under `atelier.ssh.hosts.<name>`:

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `hostname` | string | — | SSH hostname or IP |
| `port` | int or null | `null` | SSH port |
| `user` | string or null | `null` | SSH user |
| `identityFile` | string or null | `null` | Path to SSH key |
| `forwardAgent` | bool | `false` | Forward SSH agent |
| `zmx` | bool | `false` | Enable zmx for this host |
| `extraOptions` | attrsOf string | `{}` | Arbitrary SSH options |

## Example

```nix
atelier.ssh = {
  enable = true;
  zmx.enable = true;
  zmx.hosts = [ "terebithia" "ember" ];

  hosts = {
    terebithia = {
      hostname = "terebithia";
      user = "kierank";
      forwardAgent = true;
      zmx = true;
    };
    "github.com" = {
      identityFile = "~/.ssh/id_rsa";
    };
  };
};
```
+70
docs/src/modules/tuigreet.md
# tuigreet

Configures greetd with tuigreet as the login greeter. Exposes nearly every tuigreet CLI flag as a typed Nix option.

## Options

All options under `atelier.apps.tuigreet`:

### Core

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `enable` | bool | `false` | Enable tuigreet |
| `command` | string | `"Hyprland"` | Session command to run after login |
| `greeting` | string | *(unauthorized access warning)* | Greeting message |

### Display

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `time` | bool | `false` | Show clock |
| `timeFormat` | string | `"%H:%M"` | Clock format |
| `issue` | bool | `false` | Show `/etc/issue` |
| `width` | int | `80` | UI width |
| `theme` | string | `""` | Theme string |
| `asterisks` | bool | `false` | Show asterisks for password |
| `asterisksChar` | string | `"*"` | Character for password masking |

### Layout

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `windowPadding` | int | `0` | Window padding |
| `containerPadding` | int | `1` | Container padding |
| `promptPadding` | int | `1` | Prompt padding |
| `greetAlign` | enum | `"center"` | Greeting alignment: `left`, `center`, `right` |

### Session management

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `remember` | bool | `false` | Remember last username |
| `rememberSession` | bool | `false` | Remember last session |
| `rememberUserSession` | bool | `false` | Per-user session memory |
| `sessions` | string | `""` | Wayland session search path |
| `xsessions` | string | `""` | X11 session search path |
| `sessionWrapper` | string | `""` | Session wrapper command |

### User menu

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `userMenu` | bool | `false` | Show user selection menu |
| `userMenuMinUid` | int | `1000` | Minimum UID in user menu |
| `userMenuMaxUid` | int | `65534` | Maximum UID in user menu |

### Power commands

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `powerShutdown` | string | `""` | Shutdown command |
| `powerReboot` | string | `""` | Reboot command |

### Keybindings

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `kbCommand` | enum | `"F2"` | Key to switch command |
| `kbSessions` | enum | `"F3"` | Key to switch session |
| `kbPower` | enum | `"F12"` | Key for power menu |
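Since the option tables above don't include a combined example, here is a sketch pulling a few of them together; the chosen values are illustrative, not this repo's actual config:

```nix
atelier.apps.tuigreet = {
  enable = true;
  command = "Hyprland";
  time = true;
  timeFormat = "%H:%M";
  asterisks = true;
  remember = true;
  rememberUserSession = true;
  greetAlign = "center";
};
```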
+53
docs/src/modules/wifi.md
# wifi

Declarative Wi-Fi profile manager using NetworkManager. Supports three ways to supply passwords and has built-in eduroam (WPA-EAP) support.

## Options

All options under `atelier.network.wifi`:

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `enable` | bool | `false` | Enable Wi-Fi management |
| `hostName` | string | — | Sets `networking.hostName` |
| `nameservers` | list of strings | `[]` | Custom DNS servers |
| `envFile` | path | — | Environment file providing PSK variables for all profiles |

### Profiles

Defined under `atelier.network.wifi.profiles.<ssid>`:

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `psk` | string or null | `null` | Literal WPA-PSK passphrase |
| `pskVar` | string or null | `null` | Environment variable name containing the PSK (from `envFile`) |
| `pskFile` | path or null | `null` | Path to file containing the PSK |
| `eduroam` | bool | `false` | Use WPA-EAP with MSCHAPV2 (for eduroam networks) |
| `identity` | string or null | `null` | EAP identity (required when `eduroam = true`) |

Only one of `psk`, `pskVar`, or `pskFile` should be set per profile.

## Example

```nix
atelier.network.wifi = {
  enable = true;
  hostName = "moonlark";
  nameservers = [ "1.1.1.1" "8.8.8.8" ];
  envFile = config.age.secrets.wifi.path;

  profiles = {
    "Home Network" = {
      pskVar = "HOME_PSK";  # read from envFile
    };
    "eduroam" = {
      eduroam = true;
      identity = "user@university.edu";
      pskVar = "EDUROAM_PSK";
    };
    "Phone Hotspot" = {
      pskFile = config.age.secrets.hotspot.path;
    };
  };
};
```
+43
docs/src/modules/wut.md
# wut

**W**orktrees **U**nexpectedly **T**olerable — a git worktree manager that keeps worktrees organized under `.worktrees/`.

## Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `atelier.shell.wut.enable` | bool | `false` | Install wut and the zsh shell wrapper |

## Usage

```bash
wut new feat/my-feature   # Create worktree + branch under .worktrees/
wut list                  # Show all worktrees
wut go feat/my-feature    # cd into worktree (via shell wrapper)
wut go                    # Interactive picker
wut path feat/my-feature  # Print worktree path
wut rm feat/my-feature    # Remove worktree + delete branch
```

## Shell integration

Wut needs to `cd` the calling shell, which a subprocess can't do directly. It works by printing a `__WUT_CD__=/path` marker that a zsh wrapper function intercepts:

```zsh
wut() {
  output=$(/path/to/wut "$@")
  if [[ "$output" == *"__WUT_CD__="* ]]; then
    cd "${output##*__WUT_CD__=}"
  else
    echo "$output"
  fi
}
```

This wrapper is automatically injected into `initContent` when the module is enabled.

## Safety

- `wut rm` refuses to delete worktrees with uncommitted changes (use `--force` to override)
- `wut rm` warns before deleting unmerged branches
- The main/master branch worktree cannot be removed
+55
docs/src/secrets.md
# Secrets

Secrets are managed with [agenix](https://github.com/ryantm/agenix) — encrypted at rest in the repo and decrypted at activation time to `/run/agenix/`.

## Usage

Create or edit a secret:

```bash
cd secrets && agenix -e myapp.age
```

The secret file contains environment variables, one per line:

```
DATABASE_URL=postgres://...
API_KEY=xxxxx
SECRET_TOKEN=yyyyy
```

## Adding a new secret

1. Add the public key entry to `secrets/secrets.nix`:

   ```nix
   "service-name.age".publicKeys = [ kierank ];
   ```

2. Create and encrypt the secret:

   ```bash
   agenix -e secrets/service-name.age
   ```

3. Declare it in the machine config:

   ```nix
   age.secrets.service-name = {
     file = ../../secrets/service-name.age;
     owner = "service-name";
   };
   ```

4. Reference it as `config.age.secrets.service-name.path` in the service module.

## Identity paths

The decryption keys are SSH keys configured per machine:

```nix
age.identityPaths = [
  "/home/kierank/.ssh/id_rsa"
  "/etc/ssh/id_rsa"
];
```
+44
docs/src/services/README.md
··· 1 + # Services 2 + 3 + All services run on **terebithia** (Oracle Cloud aarch64) behind Caddy with Cloudflare DNS TLS. 4 + 5 + ## mkService-based 6 + 7 + | Service | Domain | Port | Runtime | Description | 8 + |---------|--------|------|---------|-------------| 9 + | cachet | cachet.dunkirk.sh | 3000 | bun | Slack emoji/profile cache | 10 + | hn-alerts | hn.dunkirk.sh | 3001 | bun | Hacker News monitoring | 11 + | indiko | indiko.dunkirk.sh | 3003 | bun | IndieAuth/OAuth2 server | 12 + | l4 | l4.dunkirk.sh | 3004 | bun | Image CDN — Slack image optimizer | 13 + | canvas-mcp | canvas.dunkirk.sh | 3006 | bun | Canvas MCP server | 14 + | control | control.dunkirk.sh | 3010 | bun | Admin dashboard for Caddy toggles | 15 + | traverse | traverse.dunkirk.sh | 4173 | bun | Code walkthrough diagram server | 16 + | cedarlogic | cedarlogic.dunkirk.sh | 3100 | custom | Circuit simulator | 17 + 18 + ## Multi-instance 19 + 20 + | Service | Domain | Port | Description | 21 + |---------|--------|------|-------------| 22 + | emojibot-hackclub | hc.emojibot.dunkirk.sh | 3002 | Emojibot for Hack Club | 23 + | emojibot-df1317 | df.emojibot.dunkirk.sh | 3005 | Emojibot for df1317 | 24 + 25 + ## Custom / external 26 + 27 + | Service | Domain | Description | 28 + |---------|--------|-------------| 29 + | bore (frps) | bore.dunkirk.sh | HTTP/TCP/UDP tunnel proxy | 30 + | herald | herald.dunkirk.sh | Git SSH hosting + email | 31 + | knot | knot.dunkirk.sh | Tangled git hosting | 32 + | spindle | spindle.dunkirk.sh | Tangled CI | 33 + | battleship-arena | battleship.dunkirk.sh | Battleship game server | 34 + | n8n | n8n.dunkirk.sh | Workflow automation | 35 + 36 + ## Architecture 37 + 38 + Each mkService module provides: 39 + 40 + - **Systemd service** — initial git clone for scaffolding, subsequent deploys via GitHub Actions 41 + - **Caddy reverse proxy** — TLS via Cloudflare DNS challenge, optional rate limiting 42 + - **Data declarations** — `sqlite`, `postgres`, `files` feed into 
automatic backups 43 + - **Dedicated user** — sudo for restart/stop/start, per-user Tailscale SSH ACLs 44 + - **Port conflict detection** — assertions prevent two services binding the same port
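For reference, a typical mkService-based declaration looks roughly like this (a sketch — the option names mirror the real service declarations in `machines/terebithia/default.nix`, but `my-service` and all its values are hypothetical):

```nix
# Hypothetical service — names and values are illustrative only.
atelier.services.my-service = {
  enable = true;
  domain = "my-service.dunkirk.sh";
  port = 3007;
  repository = "https://github.com/taciturnaxolotl/my-service";
  secretsFile = config.age.secrets.my-service.path;
  healthUrl = "https://my-service.dunkirk.sh/health";
  data.sqlite = "/var/lib/my-service/data.db";  # feeds the automatic backups
};
```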
+21
docs/src/services/battleship-arena.md
··· 1 + # battleship-arena 2 + 3 + Battleship game server with web interface and SSH-based bot submission. 4 + 5 + **Domain:** `battleship.dunkirk.sh` · **Web Port:** 8081 · **SSH Port:** 2222 6 + 7 + This is a **custom module** — it does not use mkService. 8 + 9 + ## Options 10 + 11 + | Option | Type | Default | Description | 12 + |--------|------|---------|-------------| 13 + | `enable` | bool | `false` | Enable battleship-arena | 14 + | `domain` | string | `"battleship.dunkirk.sh"` | Domain for Caddy reverse proxy | 15 + | `sshPort` | port | `2222` | SSH port for bot submissions | 16 + | `webPort` | port | `8081` | Web interface port | 17 + | `uploadDir` | string | `"/var/lib/battleship-arena/submissions"` | Bot upload directory | 18 + | `resultsDb` | string | `"/var/lib/battleship-arena/results.db"` | SQLite results database path | 19 + | `adminPasscode` | string | `"battleship-admin-override"` | Admin passcode | 20 + | `secretsFile` | path or null | `null` | Agenix secrets file | 21 + | `package` | package | — | Battleship-arena package (from flake input) |
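A minimal enablement sketch based on the table above (the `atelier.services` namespace, the secret name, and the flake-input attribute path are assumptions):

```nix
atelier.services.battleship-arena = {
  enable = true;
  sshPort = 2222;   # bots are submitted over SSH on this port
  webPort = 8081;
  secretsFile = config.age.secrets.battleship-arena.path;            # hypothetical secret
  package = inputs.battleship-arena.packages.${pkgs.system}.default; # hypothetical input path
};
```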
+43
docs/src/services/bore.md
··· 1 + # bore (server) 2 + 3 + Lightweight tunneling server built on frp. Supports HTTP (wildcard subdomains), TCP, and UDP tunnels with optional OAuth authentication via Indiko. 4 + 5 + **Domain:** `bore.dunkirk.sh` · **frp port:** 7000 6 + 7 + This is a **custom module** — it does not use mkService. 8 + 9 + ## Options 10 + 11 + | Option | Type | Default | Description | 12 + |--------|------|---------|-------------| 13 + | `enable` | bool | `false` | Enable bore server | 14 + | `domain` | string | — | Base domain for wildcard subdomains | 15 + | `bindAddr` | string | `"0.0.0.0"` | frps bind address | 16 + | `bindPort` | port | `7000` | frps bind port | 17 + | `vhostHTTPPort` | port | `7080` | Virtual host HTTP port | 18 + | `allowedTCPPorts` | list of ports | `20000–20099` | Ports available for TCP tunnels | 19 + | `allowedUDPPorts` | list of ports | `20000–20099` | Ports available for UDP tunnels | 20 + | `authToken` | string or null | `null` | frp auth token (use `authTokenFile` instead) | 21 + | `authTokenFile` | path or null | `null` | Path to file containing frp auth token | 22 + | `enableCaddy` | bool | `true` | Auto-configure Caddy wildcard vhost | 23 + 24 + ### Authentication 25 + 26 + When enabled, all HTTP tunnels are gated behind Indiko OAuth. Users must sign in before accessing tunneled services. 
27 + 28 + | Option | Type | Default | Description | 29 + |--------|------|---------|-------------| 30 + | `auth.enable` | bool | `false` | Enable bore-auth OAuth middleware | 31 + | `auth.indikoURL` | string | `"https://indiko.dunkirk.sh"` | Indiko server URL | 32 + | `auth.clientID` | string | — | OAuth client ID from Indiko | 33 + | `auth.clientSecretFile` | path | — | Path to OAuth client secret | 34 + | `auth.cookieHashKeyFile` | path | — | 32-byte cookie signing key | 35 + | `auth.cookieBlockKeyFile` | path | — | 32-byte cookie encryption key | 36 + 37 + After authentication, these headers are passed to tunneled services: 38 + 39 + - `X-Auth-User` — user's profile URL 40 + - `X-Auth-Name` — display name 41 + - `X-Auth-Email` — email address 42 + 43 + See [bore (client)](../modules/bore-client.md) for the home-manager client module.
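Putting the options together, a sketch of an authenticated bore server (only the option names come from the tables above; the `atelier.services.bore` namespace and the agenix secret names are assumptions):

```nix
atelier.services.bore = {
  enable = true;
  domain = "bore.dunkirk.sh";
  authTokenFile = config.age.secrets."bore/token".path;  # hypothetical secret
  auth = {
    enable = true;
    clientID = "bore-tunnels";  # hypothetical Indiko client ID
    clientSecretFile = config.age.secrets."bore/client-secret".path;
    cookieHashKeyFile = config.age.secrets."bore/cookie-hash".path;
    cookieBlockKeyFile = config.age.secrets."bore/cookie-block".path;
  };
};
```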
+34
docs/src/services/cedarlogic.md
··· 1 + # cedarlogic 2 + 3 + Browser-based circuit simulator with real-time collaboration via WebSockets. 4 + 5 + **Domain:** `cedarlogic.dunkirk.sh` · **Port:** 3100 · **Runtime:** custom 6 + 7 + ## Extra options 8 + 9 + | Option | Type | Default | Description | 10 + |--------|------|---------|-------------| 11 + | `wsPort` | port | `3101` | Hocuspocus WebSocket server for document collaboration | 12 + | `cursorPort` | port | `3102` | Cursor relay WebSocket server for live cursors | 13 + | `branch` | string | `"web"` | Git branch to clone | 14 + 15 + ## Caddy routing 16 + 17 + Cedarlogic disables the default mkService Caddy config and routes by path to three backends, serving everything else as static files: 18 + 19 + | Path | Backend | 20 + |------|---------| 21 + | `/ws` | `wsPort` (Hocuspocus) | 22 + | `/cursor-ws` | `cursorPort` (cursor relay) | 23 + | `/api/*`, `/auth/*` | main `port` | 24 + | Everything else | Static files from `dist/` | 25 + 26 + ## Build step 27 + 28 + Unlike other services, cedarlogic runs a build during deploy: 29 + 30 + ``` 31 + bun install → parse-gates → bun run build (Vite) 32 + ``` 33 + 34 + The build has a 120s timeout to accommodate Vite compilation.
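The routing above corresponds roughly to a Caddyfile of this shape (illustrative only — the module generates its Caddy config from Nix, and the `dist/` path is a placeholder):

```
cedarlogic.dunkirk.sh {
	handle /ws {
		reverse_proxy localhost:3101
	}
	handle /cursor-ws {
		reverse_proxy localhost:3102
	}
	handle /api/* {
		reverse_proxy localhost:3100
	}
	handle /auth/* {
		reverse_proxy localhost:3100
	}
	handle {
		root * /var/lib/cedarlogic/app/dist
		file_server
	}
}
```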
+40
docs/src/services/control.md
··· 1 + # control 2 + 3 + Admin dashboard for Caddy feature toggles. Provides a web UI to enable/disable paths on other services (e.g. blocking player tracking on the map). 4 + 5 + **Domain:** `control.dunkirk.sh` · **Port:** 3010 · **Runtime:** bun 6 + 7 + ## Extra options 8 + 9 + ### `flags` 10 + 11 + Defines per-domain feature flags that control blocks paths and redacts JSON fields. 12 + 13 + ```nix 14 + atelier.services.control.flags."map.dunkirk.sh" = { 15 + name = "Map"; 16 + flags = { 17 + "block-tracking" = { 18 + name = "Block Player Tracking"; 19 + description = "Disable real-time player location updates"; 20 + paths = [ 21 + "/sse" 22 + "/sse/*" 23 + "/tiles/*/markers/pl3xmap_players.json" 24 + ]; 25 + redact."/tiles/settings.json" = [ "players" ]; 26 + }; 27 + }; 28 + }; 29 + ``` 30 + 31 + | Option | Type | Default | Description | 32 + |--------|------|---------|-------------| 33 + | `flags` | attrsOf submodule | `{}` | Services and their feature flags, keyed by domain | 34 + | `flags.<domain>.name` | string | — | Display name for the service | 35 + | `flags.<domain>.flags.<id>.name` | string | — | Display name for the flag | 36 + | `flags.<domain>.flags.<id>.description` | string | — | What the flag does | 37 + | `flags.<domain>.flags.<id>.paths` | list of strings | `[]` | URL paths to block when flag is active | 38 + | `flags.<domain>.flags.<id>.redact` | attrsOf (list of strings) | `{}` | JSON fields to redact from responses, keyed by path | 39 + 40 + The flags config is serialized to `flags.json` and passed to control via the `FLAGS_CONFIG` environment variable.
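The `redact` semantics can be illustrated with a small sketch — this mirrors the described behaviour of dropping listed fields from a JSON response; the actual filtering happens in control/Caddy, not in Python:

```python
import json

def redact_fields(body: str, fields: list[str]) -> str:
    """Drop the listed top-level fields from a JSON response body,
    mirroring redact."/tiles/settings.json" = [ "players" ]."""
    data = json.loads(body)
    for field in fields:
        data.pop(field, None)  # ignore fields that are absent
    return json.dumps(data)
```
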
+42
docs/src/services/emojibot.md
··· 1 + # emojibot 2 + 3 + Slack emoji management service. Supports multiple instances for different workspaces. 4 + 5 + **Runtime:** bun · **Stateless** (no database) 6 + 7 + This is a **custom module** — it does not use mkService. Each instance gets its own systemd service, user, and Caddy virtual host. 8 + 9 + ## Instance options 10 + 11 + Instances are defined under `atelier.services.emojibot.instances.<name>`: 12 + 13 + ```nix 14 + atelier.services.emojibot.instances = { 15 + hackclub = { 16 + enable = true; 17 + domain = "hc.emojibot.dunkirk.sh"; 18 + port = 3002; 19 + workspace = "hackclub"; 20 + channel = "C02T3CU03T3"; 21 + repository = "https://github.com/taciturnaxolotl/emojibot"; 22 + secretsFile = config.age.secrets."emojibot/hackclub".path; 23 + }; 24 + }; 25 + ``` 26 + 27 + | Option | Type | Default | Description | 28 + |--------|------|---------|-------------| 29 + | `enable` | bool | `false` | Enable this instance | 30 + | `domain` | string | — | Domain for Caddy reverse proxy | 31 + | `port` | port | — | Port to run on | 32 + | `secretsFile` | path | — | Agenix secrets file with Slack credentials | 33 + | `repository` | string | `"https://github.com/taciturnaxolotl/emojibot"` | Git repo URL | 34 + | `workspace` | string or null | `null` | Slack workspace name (for identification) | 35 + | `channel` | string or null | `null` | Slack channel ID | 36 + 37 + ## Current instances 38 + 39 + | Instance | Domain | Port | Workspace | 40 + |----------|--------|------|-----------| 41 + | hackclub | hc.emojibot.dunkirk.sh | 3002 | Hack Club | 42 + | df1317 | df.emojibot.dunkirk.sh | 3005 | df1317 |
+39
docs/src/services/herald.md
··· 1 + # herald 2 + 3 + Git SSH hosting with email notifications. Provides a git push interface over SSH and sends email via SMTP/DKIM. 4 + 5 + **Domain:** `herald.dunkirk.sh` · **SSH Port:** 2223 · **HTTP Port:** 8085 6 + 7 + This is a **custom module** — it does not use mkService. 8 + 9 + ## Options 10 + 11 + | Option | Type | Default | Description | 12 + |--------|------|---------|-------------| 13 + | `enable` | bool | `false` | Enable herald | 14 + | `domain` | string | — | Domain for Caddy reverse proxy | 15 + | `host` | string | `"0.0.0.0"` | Listen address | 16 + | `sshPort` | port | `2223` | SSH listen port | 17 + | `externalSshPort` | port | `2223` | External SSH port (if behind NAT) | 18 + | `httpPort` | port | `8085` | HTTP API port | 19 + | `dataDir` | path | `"/var/lib/herald"` | Data directory | 20 + | `allowAllKeys` | bool | `true` | Allow all SSH keys | 21 + | `secretsFile` | path | — | Agenix secrets (must contain `SMTP_PASS`) | 22 + | `package` | package | `pkgs.herald` | Herald package | 23 + 24 + ### SMTP 25 + 26 + | Option | Type | Default | Description | 27 + |--------|------|---------|-------------| 28 + | `smtp.host` | string | — | SMTP server hostname | 29 + | `smtp.port` | port | `587` | SMTP server port | 30 + | `smtp.user` | string | — | SMTP username | 31 + | `smtp.from` | string | — | Sender address | 32 + 33 + ### DKIM 34 + 35 + | Option | Type | Default | Description | 36 + |--------|------|---------|-------------| 37 + | `smtp.dkim.selector` | string or null | `null` | DKIM selector | 38 + | `smtp.dkim.domain` | string or null | `null` | DKIM signing domain | 39 + | `smtp.dkim.privateKeyFile` | path or null | `null` | Path to DKIM private key |
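Combining the tables above, a herald declaration might look like this (a sketch — the option names come from the tables; the SMTP host, addresses, and the DKIM secret name are placeholders):

```nix
atelier.services.herald = {
  enable = true;
  domain = "herald.dunkirk.sh";
  secretsFile = config.age.secrets.herald.path;  # must contain SMTP_PASS
  smtp = {
    host = "smtp.example.com";   # placeholder
    user = "herald";             # placeholder
    from = "herald@dunkirk.sh";  # placeholder
    dkim = {
      selector = "herald";
      domain = "dunkirk.sh";
      privateKeyFile = config.age.secrets.herald-dkim.path;  # hypothetical secret
    };
  };
};
```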
+16
docs/src/services/knot-sync.md
··· 1 + # knot-sync 2 + 3 + Mirrors Tangled knot repositories to GitHub on a cron schedule. 4 + 5 + This is a **custom module** — it does not use mkService. Runs as a systemd timer, not a long-running service. 6 + 7 + ## Options 8 + 9 + | Option | Type | Default | Description | 10 + |--------|------|---------|-------------| 11 + | `enable` | bool | `false` | Enable knot-sync | 12 + | `repoDir` | string | `"/home/git/did:plc:..."` | Directory containing knot git repos | 13 + | `githubUsername` | string | `"taciturnaxolotl"` | GitHub username to mirror to | 14 + | `secretsFile` | path | — | Agenix secrets (must contain `GITHUB_TOKEN`) | 15 + | `logFile` | string | `"/home/git/knot-sync.log"` | Log file path | 16 + | `interval` | string | `"*/5 * * * *"` | Cron schedule for sync |
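A minimal configuration sketch based on the options above (the `atelier.services` namespace and the secret name are assumptions):

```nix
atelier.services.knot-sync = {
  enable = true;
  githubUsername = "taciturnaxolotl";
  secretsFile = config.age.secrets.knot-sync.path;  # must contain GITHUB_TOKEN
  interval = "*/15 * * * *";  # widen the default 5-minute cadence
};
```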
-38
flake.lock
··· 859 859 "type": "github" 860 860 } 861 861 }, 862 - "nixpkgs-fetch-deno": { 863 - "locked": { 864 - "lastModified": 1766410835, 865 - "narHash": "sha256-dRhVt0aFDyTqppyzRLxiO1JZEAoIA2fUnaeyJTe+UwU=", 866 - "owner": "aMOPel", 867 - "repo": "nixpkgs", 868 - "rev": "c9801acc8c4fac6377d076bc1c102b15bd9cfa6f", 869 - "type": "github" 870 - }, 871 - "original": { 872 - "owner": "aMOPel", 873 - "ref": "feat/fetchDenoDeps", 874 - "repo": "nixpkgs", 875 - "type": "github" 876 - } 877 - }, 878 862 "nixpkgs-lib": { 879 863 "locked": { 880 864 "lastModified": 1740877520, ··· 1152 1136 "spicetify-nix": "spicetify-nix", 1153 1137 "tangled": "tangled", 1154 1138 "terminal-wakatime": "terminal-wakatime", 1155 - "tranquil-pds": "tranquil-pds", 1156 1139 "wakatime-ls": "wakatime-ls", 1157 1140 "zmx": "zmx" 1158 1141 } ··· 1415 1398 "owner": "taciturnaxolotl", 1416 1399 "repo": "terminal-wakatime", 1417 1400 "type": "github" 1418 - } 1419 - }, 1420 - "tranquil-pds": { 1421 - "inputs": { 1422 - "nixpkgs": [ 1423 - "nixpkgs" 1424 - ], 1425 - "nixpkgs-fetch-deno": "nixpkgs-fetch-deno" 1426 - }, 1427 - "locked": { 1428 - "lastModified": 1770060543, 1429 - "narHash": "sha256-bc8z8o96Rbud7KgBHWFOM+3GNxGPMHIaVEPVCULYJUA=", 1430 - "ref": "refs/heads/main", 1431 - "rev": "442ca1434f81d1fe2164846d2391f0e33bea47a4", 1432 - "revCount": 164, 1433 - "type": "git", 1434 - "url": "https://tangled.org/tranquil.farm/tranquil-pds" 1435 - }, 1436 - "original": { 1437 - "type": "git", 1438 - "url": "https://tangled.org/tranquil.farm/tranquil-pds" 1439 1401 } 1440 1402 }, 1441 1403 "utils": {
+27 -4
flake.nix
··· 119 119 url = "github:neurosnap/zmx"; 120 120 }; 121 121 122 - tranquil-pds = { 123 - url = "git+https://tangled.org/tranquil.farm/tranquil-pds"; 124 - inputs.nixpkgs.follows = "nixpkgs"; 125 - }; 126 122 }; 127 123 128 124 outputs = ··· 274 270 ]; 275 271 }; 276 272 }; 273 + 274 + # Service manifest for infra dashboard 275 + # Evaluate with: nix eval --json .#services-manifest 276 + services-manifest = import ./lib/services-manifest.nix { 277 + config = self.nixosConfigurations.terebithia.config; 278 + lib = nixpkgs.lib; 279 + }; 280 + 281 + # Documentation site (mdBook + nixdoc + atelier options) 282 + # Build with: nix build .#docs 283 + # Serve with: nix run .#docs.serve 284 + packages = 285 + let 286 + mkDocs = system: 287 + let 288 + pkgs = nixpkgs.legacyPackages.${system}; 289 + in 290 + pkgs.callPackage ./packages/docs.nix { 291 + servicesManifest = self.services-manifest; 292 + inherit self; 293 + }; 294 + in 295 + { 296 + x86_64-linux.docs = mkDocs "x86_64-linux"; 297 + aarch64-linux.docs = mkDocs "aarch64-linux"; 298 + aarch64-darwin.docs = mkDocs "aarch64-darwin"; 299 + }; 277 300 278 301 formatter.x86_64-linux = nixpkgs.legacyPackages.x86_64-linux.nixfmt-tree; 279 302 formatter.aarch64-darwin = nixpkgs.legacyPackages.aarch64-darwin.nixfmt-tree;
+17
lib/services-manifest.nix
··· 1 + # Generate a JSON-serialisable manifest of all atelier services. 2 + # 3 + # Called from flake.nix: 4 + # services-manifest = import ./lib/services-manifest.nix { 5 + # config = self.nixosConfigurations.terebithia.config; 6 + # inherit lib; 7 + # }; 8 + # 9 + # Evaluate with: 10 + # nix eval --json .#services-manifest 11 + 12 + { config, lib }: 13 + 14 + let 15 + services = import ./services.nix { inherit lib; }; 16 + in 17 + services.mkManifest config
+142
lib/services.nix
··· 1 + /** Service utility functions for the atelier infrastructure. 2 + 3 + These functions operate on NixOS configurations to extract 4 + service metadata for dashboards, monitoring, and documentation. 5 + */ 6 + { lib }: 7 + 8 + { 9 + /** 10 + Check whether an atelier service config value has the standard 11 + mkService shape (has `enable`, `domain`, `port`, `_description`). 12 + 13 + # Arguments 14 + 15 + - `cfg` — an attribute set from `config.atelier.services.<name>` 16 + 17 + # Type 18 + 19 + ``` 20 + AttrSet -> Bool 21 + ``` 22 + 23 + # Example 24 + 25 + ```nix 26 + isMkService config.atelier.services.cachet 27 + => true 28 + ``` 29 + */ 30 + isMkService = cfg: 31 + (cfg.enable or false) 32 + && (cfg ? domain) 33 + && (cfg ? port) 34 + && (cfg ? _description); 35 + 36 + /** 37 + Convert a single mkService config into a manifest entry. 38 + 39 + # Arguments 40 + 41 + - `name` — the service name (attribute key) 42 + - `cfg` — the service config attrset 43 + 44 + # Type 45 + 46 + ``` 47 + String -> AttrSet -> AttrSet 48 + ``` 49 + 50 + # Example 51 + 52 + ```nix 53 + mkServiceEntry "cachet" config.atelier.services.cachet 54 + => { name = "cachet"; domain = "cachet.dunkirk.sh"; ... } 55 + ``` 56 + */ 57 + mkServiceEntry = name: cfg: { 58 + inherit name; 59 + description = cfg._description or "${name} service"; 60 + domain = cfg.domain; 61 + port = cfg.port; 62 + runtime = cfg._runtime or "unknown"; 63 + repository = cfg.repository or null; 64 + health_url = cfg.healthUrl or null; 65 + data = { 66 + sqlite = cfg.data.sqlite or null; 67 + postgres = cfg.data.postgres or null; 68 + files = cfg.data.files or []; 69 + }; 70 + }; 71 + 72 + /** 73 + Build the full services manifest from an evaluated NixOS config. 74 + 75 + Discovers all enabled mkService-based services plus emojibot 76 + instances. Returns a sorted list of service entries suitable 77 + for JSON serialisation. 
78 + 79 + # Arguments 80 + 81 + - `config` — the fully evaluated NixOS configuration 82 + 83 + # Type 84 + 85 + ``` 86 + AttrSet -> [ AttrSet ] 87 + ``` 88 + 89 + # Example 90 + 91 + ```nix 92 + mkManifest config 93 + => [ { name = "cachet"; domain = "cachet.dunkirk.sh"; ... } ... ] 94 + ``` 95 + */ 96 + mkManifest = config: 97 + let 98 + allServices = config.atelier.services; 99 + 100 + isMkSvc = _: v: 101 + (v.enable or false) 102 + && (v ? domain) 103 + && (v ? port) 104 + && (v ? _description); 105 + 106 + standardServices = lib.filterAttrs isMkSvc allServices; 107 + 108 + mkEntry = name: cfg: { 109 + inherit name; 110 + description = cfg._description or "${name} service"; 111 + domain = cfg.domain; 112 + port = cfg.port; 113 + runtime = cfg._runtime or "unknown"; 114 + repository = cfg.repository or null; 115 + health_url = cfg.healthUrl or null; 116 + data = { 117 + sqlite = cfg.data.sqlite or null; 118 + postgres = cfg.data.postgres or null; 119 + files = cfg.data.files or []; 120 + }; 121 + }; 122 + 123 + emojibotInstances = 124 + let 125 + instances = allServices.emojibot.instances or {}; 126 + enabled = lib.filterAttrs (_: v: v.enable or false) instances; 127 + in 128 + lib.mapAttrsToList (name: inst: { 129 + name = "emojibot-${name}"; 130 + description = "Emojibot for ${inst.workspace or name}"; 131 + domain = inst.domain; 132 + port = inst.port; 133 + runtime = "bun"; 134 + repository = inst.repository or null; 135 + health_url = null; 136 + data = { sqlite = null; postgres = null; files = []; }; 137 + }) enabled; 138 + 139 + serviceList = (lib.mapAttrsToList mkEntry standardServices) ++ emojibotInstances; 140 + in 141 + lib.sort (a: b: a.name < b.name) serviceList; 142 + }
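A downstream consumer of `nix eval --json .#services-manifest` — say, the infra dashboard — can work directly with the entry list. A sketch (the sample entries follow the manifest shape built by `mkManifest`, with values taken from the service tables; the grouping function itself is hypothetical):

```python
import json

# Sample manifest entries in the shape mkManifest emits (abridged).
sample = json.loads('''[
  {"name": "cachet",     "runtime": "bun",    "port": 3000},
  {"name": "cedarlogic", "runtime": "custom", "port": 3100},
  {"name": "hn-alerts",  "runtime": "bun",    "port": 3001}
]''')

def by_runtime(manifest):
    """Group service names by runtime, e.g. for a dashboard view."""
    groups = {}
    for entry in manifest:
        groups.setdefault(entry["runtime"], []).append(entry["name"])
    return groups
```
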
+8
machines/terebithia/default.nix
··· 384 384 domain = "cachet.dunkirk.sh"; 385 385 repository = "https://github.com/taciturnaxolotl/cachet"; 386 386 secretsFile = config.age.secrets.cachet.path; 387 + healthUrl = "https://cachet.dunkirk.sh/health?detailed=true"; 387 388 }; 388 389 389 390 atelier.services.hn-alerts = { ··· 391 392 domain = "hn.dunkirk.sh"; 392 393 repository = "https://github.com/taciturnaxolotl/hn-alerts"; 393 394 secretsFile = config.age.secrets.hn-alerts.path; 395 + healthUrl = "https://hn.dunkirk.sh/health"; 394 396 }; 395 397 396 398 atelier.services.emojibot.instances = { ··· 486 488 enable = true; 487 489 domain = "indiko.dunkirk.sh"; 488 490 repository = "https://github.com/taciturnaxolotl/indiko"; 491 + healthUrl = "https://indiko.dunkirk.sh/health"; 489 492 }; 490 493 491 494 atelier.services.l4 = { ··· 494 497 port = 3004; 495 498 repository = "https://github.com/taciturnaxolotl/l4"; 496 499 secretsFile = config.age.secrets.l4.path; 500 + healthUrl = "https://l4.dunkirk.sh/health"; 497 501 }; 498 502 499 503 atelier.services.control = { ··· 501 505 domain = "control.dunkirk.sh"; 502 506 repository = "https://github.com/taciturnaxolotl/control"; 503 507 secretsFile = config.age.secrets.control.path; 508 + healthUrl = "https://control.dunkirk.sh/health"; 504 509 505 510 flags."map.dunkirk.sh" = { 506 511 name = "Map"; ··· 523 528 enable = true; 524 529 domain = "traverse.dunkirk.sh"; 525 530 repository = "https://github.com/taciturnaxolotl/traverse"; 531 + healthUrl = "https://traverse.dunkirk.sh"; 526 532 }; 527 533 528 534 atelier.services.herald = { ··· 550 556 domain = "canvas.dunkirk.sh"; 551 557 repository = "https://github.com/taciturnaxolotl/canvas-mcp"; 552 558 secretsFile = config.age.secrets.canvas-mcp.path; 559 + healthUrl = "https://canvas.dunkirk.sh/health?detailed=true"; 553 560 environment = { 554 561 DKIM_PRIVATE_KEY_FILE = "${config.age.secrets.canvas-mcp-dkim.path}"; 555 562 }; ··· 560 567 domain = "cedarlogic.dunkirk.sh"; 561 568 repository = 
"https://github.com/taciturnaxolotl/CedarLogic"; 562 569 secretsFile = config.age.secrets.cedarlogic.path; 570 + healthUrl = "https://cedarlogic.dunkirk.sh/health"; 563 571 }; 564 572 565 573 services.caddy.virtualHosts."terebithia.dunkirk.sh" = {
-170
modules/home/apps/anthropic-manager/anthropic-manager.1.md
··· 1 - % ANTHROPIC-MANAGER(1) | Anthropic OAuth Profile Manager 2 - % Kieran Klukas 3 - % December 2024 4 - 5 - # NAME 6 - 7 - anthropic-manager - Manage Anthropic OAuth credential profiles 8 - 9 - # SYNOPSIS 10 - 11 - **anthropic-manager** [*OPTIONS*] 12 - 13 - **anthropic-manager** **--init** [*PROFILE*] 14 - 15 - **anthropic-manager** **--swap** [*PROFILE*] 16 - 17 - **anthropic-manager** **--delete** [*PROFILE*] 18 - 19 - **anthropic-manager** **--token** 20 - 21 - **anthropic-manager** **--list** 22 - 23 - **anthropic-manager** **--current** 24 - 25 - # DESCRIPTION 26 - 27 - **anthropic-manager** is a tool for managing multiple Anthropic OAuth credential profiles. It implements PKCE-based OAuth authentication with automatic token refresh, allowing you to switch between different Anthropic accounts easily. 28 - 29 - Profile credentials are stored in **~/.config/crush/anthropic.\***profile\* directories with individual bearer tokens, refresh tokens, and expiration timestamps. 30 - 31 - # OPTIONS 32 - 33 - **--init**, **-i** [*PROFILE*] 34 - : Initialize a new OAuth profile. Opens browser for authentication and stores credentials. 35 - 36 - **--swap**, **-s** [*PROFILE*] 37 - : Switch to a different profile. If no profile specified, shows interactive selection. 38 - 39 - **--delete**, **-d** [*PROFILE*] 40 - : Delete a profile and its credentials. If no profile specified, shows interactive selection. Prompts for confirmation before deletion. If the deleted profile is active, the symlink is removed. 41 - 42 - **--token**, **-t** 43 - : Print the current bearer token to stdout. Automatically refreshes if expired. Designed for non-interactive use. 44 - 45 - **--list**, **-l** 46 - : List all available profiles with their status (valid/expired/invalid). 47 - 48 - **--current**, **-c** 49 - : Show the currently active profile name. 50 - 51 - **--help**, **-h** 52 - : Display help information. 
53 - 54 - # INTERACTIVE MENU 55 - 56 - When run without arguments in an interactive terminal, **anthropic-manager** displays a menu with the following options: 57 - 58 - - Switch profile 59 - - Create new profile 60 - - Delete profile 61 - - List all profiles 62 - - Get current token 63 - 64 - # PROFILE STORAGE 65 - 66 - Profiles are stored in **~/.config/crush/** with the following structure: 67 - 68 - ``` 69 - ~/.config/crush/ 70 - ├── anthropic -> anthropic.work (symlink to active profile) 71 - ├── anthropic.work/ 72 - │ ├── bearer_token (OAuth access token, mode 600) 73 - │ ├── bearer_token.expires (Unix timestamp) 74 - │ └── refresh_token (OAuth refresh token, mode 600) 75 - └── anthropic.personal/ 76 - └── ... 77 - ``` 78 - 79 - The active profile is determined by the **anthropic** symlink. 80 - 81 - # ENVIRONMENT 82 - 83 - **ANTHROPIC_CONFIG_DIR** 84 - : Override the default configuration directory (~/.config/crush). 85 - 86 - # EXIT STATUS 87 - 88 - **0** 89 - : Success 90 - 91 - **1** 92 - : Error (no active profile, authentication failed, invalid token, etc.) 
93 - 94 - # EXAMPLES 95 - 96 - Initialize a new work profile: 97 - 98 - ``` 99 - $ anthropic-manager --init work 100 - ``` 101 - 102 - Switch to the work profile: 103 - 104 - ``` 105 - $ anthropic-manager --swap work 106 - ``` 107 - 108 - Delete a profile: 109 - 110 - ``` 111 - $ anthropic-manager --delete work 112 - ``` 113 - 114 - Get the current bearer token (for scripts): 115 - 116 - ``` 117 - $ TOKEN=$(anthropic-manager --token) 118 - ``` 119 - 120 - List all profiles: 121 - 122 - ``` 123 - $ anthropic-manager --list 124 - ``` 125 - 126 - Open interactive menu: 127 - 128 - ``` 129 - $ anthropic-manager 130 - ``` 131 - 132 - # INTEGRATION 133 - 134 - **anthropic-manager** is designed to replace **bunx anthropic-api-key** in crush configurations: 135 - 136 - ```nix 137 - api_key = "Bearer $(anthropic-manager --token)"; 138 - ``` 139 - 140 - The **--token** flag automatically handles: 141 - - Loading cached tokens 142 - - Checking expiration (refreshes if <60s remaining) 143 - - Refreshing using refresh token 144 - - Non-interactive operation (errors to stderr, token to stdout) 145 - 146 - # FILES 147 - 148 - **~/.config/crush/anthropic** 149 - : Symlink to active profile directory 150 - 151 - **~/.config/crush/anthropic.*/bearer_token** 152 - : OAuth access token for each profile 153 - 154 - **~/.config/crush/anthropic.*/refresh_token** 155 - : OAuth refresh token for each profile 156 - 157 - **~/.config/crush/anthropic.*/bearer_token.expires** 158 - : Token expiration timestamp (Unix epoch) 159 - 160 - # SEE ALSO 161 - 162 - **crush**(1) 163 - 164 - # BUGS 165 - 166 - Report bugs to: <https://github.com/taciturnaxolotl/dots> 167 - 168 - # COPYRIGHT 169 - 170 - Copyright © 2024 Kieran Klukas. Licensed under MIT License.
-25
modules/home/apps/anthropic-manager/completions/anthropic-manager.bash
··· 1 - # Bash completion for anthropic-manager 2 - 3 - _anthropic_manager() { 4 - local cur prev opts 5 - COMPREPLY=() 6 - cur="${COMP_WORDS[COMP_CWORD]}" 7 - prev="${COMP_WORDS[COMP_CWORD-1]}" 8 - 9 - # Main options 10 - opts="--init -i --swap -s --delete -d --token -t --list -l --current -c --help -h" 11 - 12 - # If previous word was --init, --swap, or --delete, complete with profile names 13 - if [[ "$prev" == "--init" ]] || [[ "$prev" == "-i" ]] || [[ "$prev" == "--swap" ]] || [[ "$prev" == "-s" ]] || [[ "$prev" == "--delete" ]] || [[ "$prev" == "-d" ]]; then 14 - local config_dir="${ANTHROPIC_CONFIG_DIR:-$HOME/.config/crush}" 15 - local profiles=$(find "$config_dir" -maxdepth 1 -type d -name "anthropic.*" 2>/dev/null | sed 's/.*anthropic\.//' | sort) 16 - COMPREPLY=( $(compgen -W "${profiles}" -- ${cur}) ) 17 - return 0 18 - fi 19 - 20 - # Complete with options 21 - COMPREPLY=( $(compgen -W "${opts}" -- ${cur}) ) 22 - return 0 23 - } 24 - 25 - complete -F _anthropic_manager anthropic-manager
-18
modules/home/apps/anthropic-manager/completions/anthropic-manager.fish
··· 1 - # Fish completion for anthropic-manager 2 - 3 - # Helper function to get profile list 4 - function __anthropic_manager_profiles 5 - set -l config_dir (test -n "$ANTHROPIC_CONFIG_DIR"; and echo $ANTHROPIC_CONFIG_DIR; or echo "$HOME/.config/crush") 6 - if test -d "$config_dir" 7 - find "$config_dir" -maxdepth 1 -type d -name "anthropic.*" 2>/dev/null | sed 's/.*anthropic\.//' | sort 8 - end 9 - end 10 - 11 - # Main options 12 - complete -c anthropic-manager -s h -l help -d "Show help information" 13 - complete -c anthropic-manager -s i -l init -d "Initialize a new profile" -xa "(__anthropic_manager_profiles)" 14 - complete -c anthropic-manager -s s -l swap -d "Switch to a profile" -xa "(__anthropic_manager_profiles)" 15 - complete -c anthropic-manager -s d -l delete -d "Delete a profile" -xa "(__anthropic_manager_profiles)" 16 - complete -c anthropic-manager -s t -l token -d "Print current bearer token" 17 - complete -c anthropic-manager -s l -l list -d "List all profiles" 18 - complete -c anthropic-manager -s c -l current -d "Show current profile"
-22
modules/home/apps/anthropic-manager/completions/anthropic-manager.zsh
··· 1 - #compdef anthropic-manager 2 - 3 - _anthropic_manager() { 4 - local config_dir="${ANTHROPIC_CONFIG_DIR:-$HOME/.config/crush}" 5 - local -a profiles 6 - 7 - # Get list of profiles 8 - if [[ -d "$config_dir" ]]; then 9 - profiles=(${(f)"$(find "$config_dir" -maxdepth 1 -type d -name "anthropic.*" 2>/dev/null | sed 's/.*anthropic\.//' | sort)"}) 10 - fi 11 - 12 - _arguments -C \ 13 - '(- *)'{-h,--help}'[Show help information]' \ 14 - '(-i --init)'{-i,--init}'[Initialize a new profile]:profile name:' \ 15 - '(-s --swap)'{-s,--swap}'[Switch to a profile]:profile:($profiles)' \ 16 - '(-d --delete)'{-d,--delete}'[Delete a profile]:profile:($profiles)' \ 17 - '(-t --token)'{-t,--token}'[Print current bearer token]' \ 18 - '(-l --list)'{-l,--list}'[List all profiles]' \ 19 - '(-c --current)'{-c,--current}'[Show current profile]' 20 - } 21 - 22 - _anthropic_manager "$@"
-561
modules/home/apps/anthropic-manager/default.nix
··· 1 - { 2 - lib, 3 - pkgs, 4 - config, 5 - ... 6 - }: 7 - let 8 - cfg = config.atelier.apps.anthropic-manager; 9 - 10 - anthropicManagerScript = pkgs.writeShellScript "anthropic-manager" '' 11 - # Manage Anthropic OAuth credential profiles 12 - # Implements the same functionality as anthropic-api-key but with profile management 13 - 14 - set -uo pipefail 15 - 16 - CONFIG_DIR="''${ANTHROPIC_CONFIG_DIR:-$HOME/.config/crush}" 17 - CLIENT_ID="9d1c250a-e61b-44d9-88ed-5944d1962f5e" 18 - 19 - # Utilities 20 - base64url() { 21 - ${pkgs.coreutils}/bin/base64 -w0 | ${pkgs.gnused}/bin/sed 's/=//g; s/+/-/g; s/\//_/g' 22 - } 23 - 24 - sha256() { 25 - echo -n "$1" | ${pkgs.openssl}/bin/openssl dgst -binary -sha256 26 - } 27 - 28 - pkce_pair() { 29 - verifier=$(${pkgs.openssl}/bin/openssl rand 32 | base64url) 30 - challenge=$(printf '%s' "$verifier" | ${pkgs.openssl}/bin/openssl dgst -binary -sha256 | base64url) 31 - echo "$verifier $challenge" 32 - } 33 - 34 - authorize_url() { 35 - local challenge="$1" 36 - local state="$2" 37 - echo "https://claude.ai/oauth/authorize?response_type=code&client_id=$CLIENT_ID&redirect_uri=https://console.anthropic.com/oauth/code/callback&scope=org:create_api_key+user:profile+user:inference+user:sessions:claude_code&code_challenge=$challenge&code_challenge_method=S256&state=$state" 38 - } 39 - 40 - clean_pasted_code() { 41 - local input="$1" 42 - input="''${input#code:}" 43 - input="''${input#code=}" 44 - input="''${input#\"}" 45 - input="''${input%\"}" 46 - input="''${input#\'}" 47 - input="''${input%\'}" 48 - input="''${input#\`}" 49 - input="''${input%\`}" 50 - echo "$input" | ${pkgs.gnused}/bin/sed -E 's/[^A-Za-z0-9._~#-]//g' 51 - } 52 - 53 - exchange_code() { 54 - local code="$1" 55 - local verifier="$2" 56 - local cleaned 57 - cleaned=$(clean_pasted_code "$code") 58 - local pure="''${cleaned%%#*}" 59 - local state="''${cleaned#*#}" 60 - [[ "$state" == "$pure" ]] && state="" 61 - 62 - ${pkgs.curl}/bin/curl -s -X POST \ 63 - -H 
"Content-Type: application/json" \ 64 - -H "User-Agent: anthropic-manager/1.0" \ 65 - -d "$(${pkgs.jq}/bin/jq -n \ 66 - --arg code "$pure" \ 67 - --arg state "$state" \ 68 - --arg verifier "$verifier" \ 69 - '{ 70 - code: $code, 71 - state: $state, 72 - grant_type: "authorization_code", 73 - client_id: "9d1c250a-e61b-44d9-88ed-5944d1962f5e", 74 - redirect_uri: "https://console.anthropic.com/oauth/code/callback", 75 - code_verifier: $verifier 76 - }')" \ 77 - "https://console.anthropic.com/v1/oauth/token" 78 - } 79 - 80 - exchange_refresh() { 81 - local refresh_token="$1" 82 - ${pkgs.curl}/bin/curl -s -X POST \ 83 - -H "Content-Type: application/json" \ 84 - -H "User-Agent: anthropic-manager/1.0" \ 85 - -d "$(${pkgs.jq}/bin/jq -n \ 86 - --arg refresh "$refresh_token" \ 87 - '{ 88 - grant_type: "refresh_token", 89 - refresh_token: $refresh, 90 - client_id: "9d1c250a-e61b-44d9-88ed-5944d1962f5e" 91 - }')" \ 92 - "https://console.anthropic.com/v1/oauth/token" 93 - } 94 - 95 - save_tokens() { 96 - local profile_dir="$1" 97 - local access_token="$2" 98 - local refresh_token="$3" 99 - local expires_at="$4" 100 - 101 - mkdir -p "$profile_dir" 102 - echo -n "$access_token" > "$profile_dir/bearer_token" 103 - echo -n "$refresh_token" > "$profile_dir/refresh_token" 104 - echo -n "$expires_at" > "$profile_dir/bearer_token.expires" 105 - chmod 600 "$profile_dir/bearer_token" "$profile_dir/refresh_token" "$profile_dir/bearer_token.expires" 106 - } 107 - 108 - load_tokens() { 109 - local profile_dir="$1" 110 - [[ -f "$profile_dir/bearer_token" ]] || return 1 111 - [[ -f "$profile_dir/refresh_token" ]] || return 1 112 - [[ -f "$profile_dir/bearer_token.expires" ]] || return 1 113 - 114 - cat "$profile_dir/bearer_token" 115 - cat "$profile_dir/refresh_token" 116 - cat "$profile_dir/bearer_token.expires" 117 - return 0 118 - } 119 - 120 - get_token() { 121 - local profile_dir="$1" 122 - local print_token="''${2:-true}" 123 - 124 - if ! 
load_tokens "$profile_dir" >/dev/null 2>&1; then 125 - return 1 126 - fi 127 - 128 - local bearer refresh expires 129 - read -r bearer < "$profile_dir/bearer_token" 130 - read -r refresh < "$profile_dir/refresh_token" 131 - read -r expires < "$profile_dir/bearer_token.expires" 132 - 133 - local now 134 - now=$(date +%s) 135 - 136 - # If token valid for more than 60s, return it 137 - if [[ $now -lt $((expires - 60)) ]]; then 138 - [[ "$print_token" == "true" ]] && echo "$bearer" 139 - return 0 140 - fi 141 - 142 - # Try to refresh 143 - local response 144 - response=$(exchange_refresh "$refresh") 145 - 146 - if ! echo "$response" | ${pkgs.jq}/bin/jq -e '.access_token' >/dev/null 2>&1; then 147 - return 1 148 - fi 149 - 150 - local new_access new_refresh new_expires_in 151 - new_access=$(echo "$response" | ${pkgs.jq}/bin/jq -r '.access_token') 152 - new_refresh=$(echo "$response" | ${pkgs.jq}/bin/jq -r '.refresh_token // empty') 153 - new_expires_in=$(echo "$response" | ${pkgs.jq}/bin/jq -r '.expires_in') 154 - 155 - [[ -z "$new_refresh" ]] && new_refresh="$refresh" 156 - local new_expires=$((now + new_expires_in)) 157 - 158 - save_tokens "$profile_dir" "$new_access" "$new_refresh" "$new_expires" 159 - [[ "$print_token" == "true" ]] && echo "$new_access" 160 - return 0 161 - } 162 - 163 - oauth_flow() { 164 - local profile_dir="$1" 165 - 166 - ${pkgs.gum}/bin/gum style --foreground 212 "Starting OAuth flow..." 167 - echo 168 - 169 - read -r verifier challenge < <(pkce_pair) 170 - local state 171 - state=$(${pkgs.openssl}/bin/openssl rand -base64 32 | ${pkgs.gnused}/bin/sed 's/[^A-Za-z0-9]//g') 172 - local auth_url 173 - auth_url=$(authorize_url "$challenge" "$state") 174 - 175 - ${pkgs.gum}/bin/gum style --foreground 35 "Opening browser for authorization..." 
176 - ${pkgs.gum}/bin/gum style --foreground 117 "$auth_url" 177 - echo 178 - 179 - if command -v ${pkgs.xdg-utils}/bin/xdg-open &>/dev/null; then 180 - ${pkgs.xdg-utils}/bin/xdg-open "$auth_url" 2>/dev/null & 181 - elif command -v open &>/dev/null; then 182 - open "$auth_url" 2>/dev/null & 183 - fi 184 - 185 - local code 186 - code=$(${pkgs.gum}/bin/gum input --placeholder "Paste the authorization code from Anthropic" --prompt "Code: ") 187 - 188 - if [[ -z "$code" ]]; then 189 - ${pkgs.gum}/bin/gum style --foreground 196 "No code provided" 190 - return 1 191 - fi 192 - 193 - ${pkgs.gum}/bin/gum style --foreground 212 "Exchanging code for tokens..." 194 - 195 - local response 196 - response=$(exchange_code "$code" "$verifier") 197 - 198 - if ! echo "$response" | ${pkgs.jq}/bin/jq -e '.access_token' >/dev/null 2>&1; then 199 - ${pkgs.gum}/bin/gum style --foreground 196 "Failed to exchange code" 200 - echo "$response" | ${pkgs.jq}/bin/jq '.' 2>&1 || echo "$response" 201 - return 1 202 - fi 203 - 204 - local access_token refresh_token expires_in 205 - access_token=$(echo "$response" | ${pkgs.jq}/bin/jq -r '.access_token') 206 - refresh_token=$(echo "$response" | ${pkgs.jq}/bin/jq -r '.refresh_token') 207 - expires_in=$(echo "$response" | ${pkgs.jq}/bin/jq -r '.expires_in') 208 - 209 - local expires_at 210 - expires_at=$(($(date +%s) + expires_in)) 211 - 212 - save_tokens "$profile_dir" "$access_token" "$refresh_token" "$expires_at" 213 - ${pkgs.gum}/bin/gum style --foreground 35 "✓ Authenticated successfully" 214 - return 0 215 - } 216 - 217 - list_profiles() { 218 - ${pkgs.gum}/bin/gum style --bold --foreground 212 "Available Anthropic profiles:" 219 - echo 220 - 221 - local current_profile="" 222 - if [[ -L "$CONFIG_DIR/anthropic" ]]; then 223 - current_profile=$(basename "$(readlink "$CONFIG_DIR/anthropic")" | ${pkgs.gnused}/bin/sed 's/^anthropic\.//') 224 - fi 225 - 226 - local found_any=false 227 - for profile_dir in "$CONFIG_DIR"/anthropic.*; do 228 - if [[ -d 
"$profile_dir" ]]; then 229 - found_any=true 230 - local profile_name 231 - profile_name=$(basename "$profile_dir" | ${pkgs.gnused}/bin/sed 's/^anthropic\.//') 232 - 233 - local status="" 234 - if get_token "$profile_dir" false 2>/dev/null; then 235 - local expires 236 - read -r expires < "$profile_dir/bearer_token.expires" 237 - local now 238 - now=$(date +%s) 239 - if [[ $now -lt $expires ]]; then 240 - status=" (valid)" 241 - else 242 - status=" (expired)" 243 - fi 244 - else 245 - status=" (invalid)" 246 - fi 247 - 248 - if [[ "$profile_name" == "$current_profile" ]]; then 249 - ${pkgs.gum}/bin/gum style --foreground 35 " ✓ $profile_name$status (active)" 250 - else 251 - echo " $profile_name$status" 252 - fi 253 - fi 254 - done 255 - 256 - if [[ "$found_any" == "false" ]]; then 257 - ${pkgs.gum}/bin/gum style --foreground 214 "No profiles found. Use 'anthropic-manager --init <name>' to create one." 258 - fi 259 - } 260 - 261 - show_current() { 262 - if [[ -L "$CONFIG_DIR/anthropic" ]]; then 263 - local current 264 - current=$(basename "$(readlink "$CONFIG_DIR/anthropic")" | ${pkgs.gnused}/bin/sed 's/^anthropic\.//') 265 - ${pkgs.gum}/bin/gum style --foreground 35 "Current profile: $current" 266 - else 267 - ${pkgs.gum}/bin/gum style --foreground 214 "No active profile" 268 - fi 269 - } 270 - 271 - init_profile() { 272 - local profile="$1" 273 - 274 - if [[ -z "$profile" ]]; then 275 - profile=$(${pkgs.gum}/bin/gum input --placeholder "Profile name (e.g., work, personal)" --prompt "Profile name: ") 276 - if [[ -z "$profile" ]]; then 277 - ${pkgs.gum}/bin/gum style --foreground 196 "No profile name provided" 278 - exit 1 279 - fi 280 - fi 281 - 282 - local profile_dir="$CONFIG_DIR/anthropic.$profile" 283 - 284 - if [[ -d "$profile_dir" ]]; then 285 - ${pkgs.gum}/bin/gum style --foreground 214 "Profile '$profile' already exists" 286 - if ${pkgs.gum}/bin/gum confirm "Re-authenticate?"; then 287 - rm -rf "$profile_dir" 288 - else 289 - exit 1 290 - fi 291 - fi 292 - 
293 - if ! oauth_flow "$profile_dir"; then 294 - rm -rf "$profile_dir" 295 - exit 1 296 - fi 297 - 298 - # Ask to set as active 299 - if [[ ! -L "$CONFIG_DIR/anthropic" ]] || ${pkgs.gum}/bin/gum confirm "Set '$profile' as active profile?"; then 300 - [[ -L "$CONFIG_DIR/anthropic" ]] && rm "$CONFIG_DIR/anthropic" 301 - ln -sf "anthropic.$profile" "$CONFIG_DIR/anthropic" 302 - ${pkgs.gum}/bin/gum style --foreground 35 "✓ Set as active profile" 303 - fi 304 - } 305 - 306 - delete_profile() { 307 - local target="$1" 308 - 309 - if [[ -z "$target" ]]; then 310 - # Interactive selection 311 - local profiles=() 312 - for profile_dir in "$CONFIG_DIR"/anthropic.*; do 313 - if [[ -d "$profile_dir" ]]; then 314 - profiles+=("$(basename "$profile_dir" | ${pkgs.gnused}/bin/sed 's/^anthropic\.//')") 315 - fi 316 - done 317 - 318 - if [[ ''${#profiles[@]} -eq 0 ]]; then 319 - ${pkgs.gum}/bin/gum style --foreground 196 "No profiles found" 320 - exit 1 321 - fi 322 - 323 - target=$(printf '%s\n' "''${profiles[@]}" | ${pkgs.gum}/bin/gum choose --header "Select profile to delete:") 324 - [[ -z "$target" ]] && exit 0 325 - fi 326 - 327 - local target_dir="$CONFIG_DIR/anthropic.$target" 328 - if [[ ! -d "$target_dir" ]]; then 329 - ${pkgs.gum}/bin/gum style --foreground 196 "Profile '$target' does not exist" 330 - exit 1 331 - fi 332 - 333 - if ! 
${pkgs.gum}/bin/gum confirm "Delete profile '$target'?"; then 334 - exit 0 335 - fi 336 - 337 - # Check if this is the active profile 338 - if [[ -L "$CONFIG_DIR/anthropic" ]]; then 339 - local current 340 - current=$(basename "$(readlink "$CONFIG_DIR/anthropic")" | ${pkgs.gnused}/bin/sed 's/^anthropic\.//') 341 - if [[ "$current" == "$target" ]]; then 342 - rm "$CONFIG_DIR/anthropic" 343 - ${pkgs.gum}/bin/gum style --foreground 214 "Unlinked active profile" 344 - fi 345 - fi 346 - 347 - rm -rf "$target_dir" 348 - ${pkgs.gum}/bin/gum style --foreground 35 "✓ Deleted profile '$target'" 349 - } 350 - 351 - swap_profile() { 352 - local target="$1" 353 - 354 - if [[ -n "$target" ]]; then 355 - local target_dir="$CONFIG_DIR/anthropic.$target" 356 - if [[ ! -d "$target_dir" ]]; then 357 - ${pkgs.gum}/bin/gum style --foreground 196 "Profile '$target' does not exist" 358 - echo 359 - list_profiles 360 - exit 1 361 - fi 362 - 363 - [[ -L "$CONFIG_DIR/anthropic" ]] && rm "$CONFIG_DIR/anthropic" 364 - ln -sf "anthropic.$target" "$CONFIG_DIR/anthropic" 365 - ${pkgs.gum}/bin/gum style --foreground 35 "✓ Switched to profile '$target'" 366 - exit 0 367 - fi 368 - 369 - # Interactive selection 370 - local profiles=() 371 - for profile_dir in "$CONFIG_DIR"/anthropic.*; do 372 - if [[ -d "$profile_dir" ]]; then 373 - profiles+=("$(basename "$profile_dir" | ${pkgs.gnused}/bin/sed 's/^anthropic\.//')") 374 - fi 375 - done 376 - 377 - if [[ ''${#profiles[@]} -eq 0 ]]; then 378 - ${pkgs.gum}/bin/gum style --foreground 196 "No profiles found" 379 - ${pkgs.gum}/bin/gum style --foreground 214 "Use 'anthropic-manager --init <name>' to create one" 380 - exit 1 381 - fi 382 - 383 - local selected 384 - selected=$(printf '%s\n' "''${profiles[@]}" | ${pkgs.gum}/bin/gum choose --header "Select profile:") 385 - 386 - if [[ -n "$selected" ]]; then 387 - [[ -L "$CONFIG_DIR/anthropic" ]] && rm "$CONFIG_DIR/anthropic" 388 - ln -sf "anthropic.$selected" "$CONFIG_DIR/anthropic" 389 - 
${pkgs.gum}/bin/gum style --foreground 35 "✓ Switched to profile '$selected'" 390 - fi 391 - } 392 - 393 - print_token() { 394 - if [[ ! -L "$CONFIG_DIR/anthropic" ]]; then 395 - echo "Error: No active profile" >&2 396 - exit 1 397 - fi 398 - 399 - local profile_dir 400 - profile_dir=$(readlink -f "$CONFIG_DIR/anthropic") 401 - 402 - if ! get_token "$profile_dir" true 2>/dev/null; then 403 - echo "Error: Token invalid or expired" >&2 404 - exit 1 405 - fi 406 - } 407 - 408 - interactive_menu() { 409 - echo 410 - ${pkgs.gum}/bin/gum style --bold --foreground 212 "Anthropic Profile Manager" 411 - echo 412 - 413 - local current_profile="" 414 - if [[ -L "$CONFIG_DIR/anthropic" ]]; then 415 - current_profile=$(basename "$(readlink "$CONFIG_DIR/anthropic")" | ${pkgs.gnused}/bin/sed 's/^anthropic\.//') 416 - ${pkgs.gum}/bin/gum style --foreground 117 "Active: $current_profile" 417 - else 418 - ${pkgs.gum}/bin/gum style --foreground 214 "No active profile" 419 - fi 420 - 421 - echo 422 - 423 - local choice 424 - choice=$(${pkgs.gum}/bin/gum choose \ 425 - "Switch profile" \ 426 - "Create new profile" \ 427 - "Delete profile" \ 428 - "List all profiles" \ 429 - "Get current token") 430 - 431 - case "$choice" in 432 - "Switch profile") 433 - swap_profile "" 434 - ;; 435 - "Create new profile") 436 - init_profile "" 437 - ;; 438 - "Delete profile") 439 - echo 440 - delete_profile "" 441 - ;; 442 - "List all profiles") 443 - echo 444 - list_profiles 445 - ;; 446 - "Get current token") 447 - echo 448 - print_token 449 - ;; 450 - esac 451 - } 452 - 453 - # Main 454 - mkdir -p "$CONFIG_DIR" 455 - 456 - case "''${1:-}" in 457 - --init|-i) 458 - init_profile "''${2:-}" 459 - ;; 460 - --list|-l) 461 - list_profiles 462 - ;; 463 - --current|-c) 464 - show_current 465 - ;; 466 - --token|-t|token) 467 - print_token 468 - ;; 469 - --swap|-s|swap) 470 - swap_profile "''${2:-}" 471 - ;; 472 - --delete|-d|delete) 473 - delete_profile "''${2:-}" 474 - ;; 475 - --help|-h|help) 476 - 
${pkgs.gum}/bin/gum style --bold --foreground 212 "anthropic-manager - Manage Anthropic OAuth profiles" 477 - echo 478 - echo "Usage:" 479 - echo " anthropic-manager Interactive menu" 480 - echo " anthropic-manager --init [profile] Initialize/create a new profile" 481 - echo " anthropic-manager --swap [profile] Switch to a profile (interactive if no profile given)" 482 - echo " anthropic-manager --delete [profile] Delete a profile (interactive if no profile given)" 483 - echo " anthropic-manager --token Print current bearer token (refresh if needed)" 484 - echo " anthropic-manager --list List all profiles with status" 485 - echo " anthropic-manager --current Show current active profile" 486 - echo " anthropic-manager --help Show this help" 487 - echo 488 - echo "Examples:" 489 - echo " anthropic-manager Open interactive menu" 490 - echo " anthropic-manager --init work Create 'work' profile" 491 - echo " anthropic-manager --swap work Switch to 'work' profile" 492 - echo " anthropic-manager --delete work Delete 'work' profile" 493 - echo " anthropic-manager --token Get current bearer token" 494 - ;; 495 - "") 496 - # No args - check if interactive 497 - if [[ ! -t 0 ]] || [[ ! 
-t 1 ]]; then 498 - echo "Error: anthropic-manager requires an interactive terminal when called without arguments" >&2 499 - exit 1 500 - fi 501 - interactive_menu 502 - ;; 503 - *) 504 - ${pkgs.gum}/bin/gum style --foreground 196 "Unknown option: $1" 505 - echo "Use --help for usage information" 506 - exit 1 507 - ;; 508 - esac 509 - ''; 510 - 511 - anthropicManager = pkgs.stdenv.mkDerivation { 512 - pname = "anthropic-manager"; 513 - version = "1.0"; 514 - 515 - dontUnpack = true; 516 - 517 - nativeBuildInputs = with pkgs; [ pandoc installShellFiles ]; 518 - 519 - manPageSrc = ./anthropic-manager.1.md; 520 - bashCompletionSrc = ./completions/anthropic-manager.bash; 521 - zshCompletionSrc = ./completions/anthropic-manager.zsh; 522 - fishCompletionSrc = ./completions/anthropic-manager.fish; 523 - 524 - buildPhase = '' 525 - # Convert markdown man page to man format 526 - ${pkgs.pandoc}/bin/pandoc -s -t man $manPageSrc -o anthropic-manager.1 527 - ''; 528 - 529 - installPhase = '' 530 - mkdir -p $out/bin 531 - 532 - # Install binary 533 - cp ${anthropicManagerScript} $out/bin/anthropic-manager 534 - chmod +x $out/bin/anthropic-manager 535 - 536 - # Install man page 537 - installManPage anthropic-manager.1 538 - 539 - # Install completions 540 - installShellCompletion --bash --name anthropic-manager $bashCompletionSrc 541 - installShellCompletion --zsh --name _anthropic-manager $zshCompletionSrc 542 - installShellCompletion --fish --name anthropic-manager.fish $fishCompletionSrc 543 - ''; 544 - 545 - meta = with lib; { 546 - description = "Anthropic OAuth profile manager"; 547 - homepage = "https://github.com/taciturnaxolotl/dots"; 548 - license = licenses.mit; 549 - maintainers = [ ]; 550 - }; 551 - }; 552 - in 553 - { 554 - options.atelier.apps.anthropic-manager.enable = lib.mkEnableOption "Enable anthropic-manager"; 555 - 556 - config = lib.mkIf cfg.enable { 557 - home.packages = [ 558 - anthropicManager 559 - ]; 560 - }; 561 - }
+21
modules/lib/mkService.nix
··· 89 89 description = "Git repository URL — cloned once on first start for scaffolding"; 90 90 }; 91 91 92 + healthUrl = lib.mkOption { 93 + type = lib.types.nullOr lib.types.str; 94 + default = null; 95 + description = "Health check URL for monitoring"; 96 + }; 97 + 98 + # Internal metadata set by mkService factory — used by services-manifest 99 + _description = lib.mkOption { 100 + type = lib.types.str; 101 + default = description; 102 + internal = true; 103 + readOnly = true; 104 + }; 105 + 106 + _runtime = lib.mkOption { 107 + type = lib.types.str; 108 + default = runtime; 109 + internal = true; 110 + readOnly = true; 111 + }; 112 + 92 113 # Data declarations for automatic backup 93 114 data = { 94 115 sqlite = lib.mkOption {
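The new `healthUrl` option is consumed per service; a minimal sketch of a hypothetical mkService-based service enabling it (the service name and URL are illustrative, and `enable` is assumed to follow the factory's usual `mkEnableOption` pattern):

```nix
# Hypothetical service config; only healthUrl is confirmed by this change.
atelier.services.example = {
  enable = true;
  # New option: defaults to null, so existing services are unaffected
  healthUrl = "https://example.dunkirk.sh/healthz";
};
```

Note that `_description` and `_runtime` are `internal` and `readOnly`, so the services-manifest can read factory-set metadata without individual services being able (or required) to set it themselves.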
-322
modules/nixos/services/tranquil-pds.nix
··· 1 - # Tranquil PDS - AT Protocol Personal Data Server 2 - # 3 - # A feature-rich PDS with passkeys, 2FA, did:web support, and more. 4 - # Requires PostgreSQL, Redis, and S3-compatible storage. 5 - 6 - { 7 - config, 8 - lib, 9 - pkgs, 10 - inputs, 11 - ... 12 - }: 13 - 14 - let 15 - cfg = config.atelier.services.tranquil-pds; 16 - in 17 - { 18 - options.atelier.services.tranquil-pds = { 19 - enable = lib.mkEnableOption "Tranquil PDS"; 20 - 21 - package = lib.mkOption { 22 - type = lib.types.package; 23 - default = inputs.tranquil-pds.packages.${pkgs.stdenv.hostPlatform.system}.default; 24 - description = "The tranquil-pds package to use"; 25 - }; 26 - 27 - domain = lib.mkOption { 28 - type = lib.types.str; 29 - description = "Primary domain for the PDS (e.g., serif.blue)"; 30 - }; 31 - 32 - port = lib.mkOption { 33 - type = lib.types.port; 34 - default = 3100; 35 - description = "Port for the PDS to listen on"; 36 - }; 37 - 38 - dataDir = lib.mkOption { 39 - type = lib.types.path; 40 - default = "/var/lib/tranquil-pds"; 41 - description = "Directory to store PDS data"; 42 - }; 43 - 44 - secretsFile = lib.mkOption { 45 - type = lib.types.nullOr lib.types.path; 46 - default = null; 47 - description = "Path to agenix secrets file containing JWT_SECRET, DPOP_SECRET, MASTER_KEY, and S3 credentials"; 48 - }; 49 - 50 - database = { 51 - name = lib.mkOption { 52 - type = lib.types.str; 53 - default = "tranquil-pds"; 54 - description = "PostgreSQL database name"; 55 - }; 56 - 57 - user = lib.mkOption { 58 - type = lib.types.str; 59 - default = "tranquil-pds"; 60 - description = "PostgreSQL user"; 61 - }; 62 - }; 63 - 64 - s3 = { 65 - endpoint = lib.mkOption { 66 - type = lib.types.str; 67 - default = "http://localhost:9000"; 68 - description = "S3-compatible endpoint URL"; 69 - }; 70 - 71 - bucket = lib.mkOption { 72 - type = lib.types.str; 73 - default = "pds-blobs"; 74 - description = "S3 bucket name for blob storage"; 75 - }; 76 - 77 - region = lib.mkOption { 78 - 
type = lib.types.str; 79 - default = "us-east-1"; 80 - description = "S3 region"; 81 - }; 82 - }; 83 - 84 - minio = { 85 - enable = lib.mkOption { 86 - type = lib.types.bool; 87 - default = true; 88 - description = "Enable local MinIO for S3-compatible storage. Disable if using Backblaze B2 or AWS S3."; 89 - }; 90 - }; 91 - 92 - redis = { 93 - enable = lib.mkOption { 94 - type = lib.types.bool; 95 - default = true; 96 - description = "Enable Redis for caching and rate limiting"; 97 - }; 98 - }; 99 - 100 - crawlers = lib.mkOption { 101 - type = lib.types.listOf lib.types.str; 102 - default = [ "https://bsky.network" ]; 103 - description = "Relay URLs to notify via requestCrawl"; 104 - }; 105 - 106 - acceptingRepoImports = lib.mkOption { 107 - type = lib.types.bool; 108 - default = true; 109 - description = "Whether to accept repository imports (account migration)"; 110 - }; 111 - 112 - availableUserDomains = lib.mkOption { 113 - type = lib.types.listOf lib.types.str; 114 - default = [ ]; 115 - description = "Available user domains for handles (e.g., [\"serif.blue\"])"; 116 - }; 117 - 118 - requireInviteCode = lib.mkOption { 119 - type = lib.types.bool; 120 - default = false; 121 - description = "Require invite codes for account creation"; 122 - }; 123 - 124 - mail = { 125 - enable = lib.mkOption { 126 - type = lib.types.bool; 127 - default = false; 128 - description = "Enable email notifications"; 129 - }; 130 - 131 - fromAddress = lib.mkOption { 132 - type = lib.types.str; 133 - default = "noreply@${cfg.domain}"; 134 - description = "Email sender address"; 135 - }; 136 - 137 - fromName = lib.mkOption { 138 - type = lib.types.str; 139 - default = "Serif PDS"; 140 - description = "Email sender name"; 141 - }; 142 - 143 - smtp = { 144 - host = lib.mkOption { 145 - type = lib.types.str; 146 - default = "smtp.mailchannels.net"; 147 - description = "SMTP server hostname"; 148 - }; 149 - 150 - port = lib.mkOption { 151 - type = lib.types.port; 152 - default = 587; 153 - 
description = "SMTP server port"; 154 - }; 155 - 156 - username = lib.mkOption { 157 - type = lib.types.str; 158 - description = "SMTP username (set in secrets file with SMTP_USERNAME)"; 159 - }; 160 - 161 - tls = lib.mkOption { 162 - type = lib.types.bool; 163 - default = true; 164 - description = "Use STARTTLS"; 165 - }; 166 - }; 167 - }; 168 - }; 169 - 170 - config = lib.mkIf cfg.enable { 171 - users.users.tranquil-pds = { 172 - isSystemUser = true; 173 - group = "tranquil-pds"; 174 - home = cfg.dataDir; 175 - createHome = true; 176 - }; 177 - users.groups.tranquil-pds = { }; 178 - 179 - services.postgresql = { 180 - enable = true; 181 - ensureDatabases = [ cfg.database.name ]; 182 - ensureUsers = [ 183 - { 184 - name = cfg.database.user; 185 - ensureDBOwnership = true; 186 - } 187 - ]; 188 - }; 189 - 190 - services.redis.servers.tranquil-pds = lib.mkIf cfg.redis.enable { 191 - enable = true; 192 - port = 6379; 193 - }; 194 - 195 - services.minio = lib.mkIf cfg.minio.enable { 196 - enable = true; 197 - dataDir = [ "${cfg.dataDir}/minio" ]; 198 - rootCredentialsFile = cfg.secretsFile; 199 - }; 200 - 201 - # Configure msmtp for email sending 202 - programs.msmtp = lib.mkIf cfg.mail.enable { 203 - enable = true; 204 - accounts.default = { 205 - auth = true; 206 - tls = cfg.mail.smtp.tls; 207 - tls_starttls = cfg.mail.smtp.tls; 208 - host = cfg.mail.smtp.host; 209 - port = cfg.mail.smtp.port; 210 - from = cfg.mail.fromAddress; 211 - user = cfg.mail.smtp.username; 212 - passwordeval = "${pkgs.coreutils}/bin/cat ${cfg.secretsFile} | ${pkgs.gnugrep}/bin/grep SMTP_PASSWORD | ${pkgs.coreutils}/bin/cut -d= -f2"; 213 - }; 214 - }; 215 - 216 - systemd.services.tranquil-pds = { 217 - description = "Tranquil PDS - AT Protocol Personal Data Server"; 218 - wantedBy = [ "multi-user.target" ]; 219 - after = 220 - [ 221 - "network.target" 222 - "postgresql.service" 223 - ] 224 - ++ lib.optional cfg.minio.enable "minio.service" 225 - ++ lib.optional cfg.redis.enable 
"redis-tranquil-pds.service"; 226 - requires = 227 - [ "postgresql.service" ] 228 - ++ lib.optional cfg.minio.enable "minio.service" 229 - ++ lib.optional cfg.redis.enable "redis-tranquil-pds.service"; 230 - 231 - environment = 232 - { 233 - SERVER_HOST = "127.0.0.1"; 234 - SERVER_PORT = toString cfg.port; 235 - PDS_HOSTNAME = cfg.domain; 236 - DATABASE_URL = "postgres:///${cfg.database.name}?host=/run/postgresql"; 237 - S3_ENDPOINT = cfg.s3.endpoint; 238 - S3_BUCKET = cfg.s3.bucket; 239 - AWS_REGION = cfg.s3.region; 240 - CRAWLERS = lib.concatStringsSep "," cfg.crawlers; 241 - ACCEPTING_REPO_IMPORTS = if cfg.acceptingRepoImports then "true" else "false"; 242 - AVAILABLE_USER_DOMAINS = lib.concatStringsSep "," cfg.availableUserDomains; 243 - INVITE_CODE_REQUIRED = if cfg.requireInviteCode then "true" else "false"; 244 - } 245 - // lib.optionalAttrs cfg.redis.enable { 246 - REDIS_URL = "redis://localhost:6379"; 247 - } 248 - // lib.optionalAttrs cfg.mail.enable { 249 - MAIL_FROM_ADDRESS = cfg.mail.fromAddress; 250 - MAIL_FROM_NAME = cfg.mail.fromName; 251 - SENDMAIL_PATH = "${pkgs.msmtp}/bin/msmtp"; 252 - }; 253 - 254 - serviceConfig = { 255 - Type = "simple"; 256 - User = "tranquil-pds"; 257 - Group = "tranquil-pds"; 258 - WorkingDirectory = cfg.dataDir; 259 - EnvironmentFile = lib.mkIf (cfg.secretsFile != null) cfg.secretsFile; 260 - ExecStart = "${cfg.package}/bin/tranquil-pds"; 261 - Restart = "always"; 262 - RestartSec = "10s"; 263 - 264 - NoNewPrivileges = true; 265 - ProtectSystem = "strict"; 266 - ProtectHome = true; 267 - ReadWritePaths = [ cfg.dataDir ]; 268 - PrivateTmp = true; 269 - }; 270 - }; 271 - 272 - systemd.tmpfiles.rules = [ 273 - "d ${cfg.dataDir} 0755 tranquil-pds tranquil-pds -" 274 - ] ++ lib.optional cfg.minio.enable "d ${cfg.dataDir}/minio 0755 minio minio -"; 275 - 276 - services.caddy.virtualHosts."${cfg.domain}" = { 277 - extraConfig = '' 278 - tls { 279 - dns cloudflare {env.CLOUDFLARE_API_TOKEN} 280 - } 281 - header { 282 - 
Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" 283 - } 284 - 285 - reverse_proxy localhost:${toString cfg.port} { 286 - header_up X-Forwarded-Proto {scheme} 287 - header_up X-Forwarded-For {remote} 288 - } 289 - ''; 290 - }; 291 - 292 - services.caddy.virtualHosts."*.${cfg.domain}" = { 293 - extraConfig = '' 294 - tls { 295 - dns cloudflare {env.CLOUDFLARE_API_TOKEN} 296 - } 297 - header { 298 - Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" 299 - } 300 - reverse_proxy localhost:${toString cfg.port} { 301 - header_up X-Forwarded-Proto {scheme} 302 - header_up X-Forwarded-For {remote} 303 - } 304 - ''; 305 - }; 306 - 307 - networking.firewall.allowedTCPPorts = [ 308 - 443 309 - 80 310 - ]; 311 - 312 - atelier.backup.services.tranquil-pds = { 313 - paths = [ cfg.dataDir ]; 314 - exclude = [ "*.log" ] ++ lib.optional cfg.minio.enable "minio/*"; 315 - preBackup = '' 316 - systemctl stop tranquil-pds 317 - ${pkgs.sudo}/bin/sudo -u postgres ${pkgs.postgresql}/bin/pg_dump ${cfg.database.name} > /tmp/tranquil-pds-pg-dump.sql 318 - ''; 319 - postBackup = "systemctl start tranquil-pds"; 320 - }; 321 - }; 322 - }
+63
packages/docs.nix
··· 1 + { 2 + stdenvNoCC, 3 + lib, 4 + mdbook, 5 + nixdoc, 6 + fetchurl, 7 + simple-http-server, 8 + writeShellApplication, 9 + jq, 10 + # Injected from flake.nix 11 + servicesManifest, 12 + self, 13 + }: 14 + 15 + stdenvNoCC.mkDerivation (finalAttrs: { 16 + name = "dunkirk-docs"; 17 + src = self + /docs; 18 + 19 + nativeBuildInputs = [ mdbook nixdoc jq ]; 20 + 21 + buildPhase = '' 22 + # Set up catppuccin theme 23 + mkdir -p theme 24 + cp ${finalAttrs.passthru.catppuccin-mdbook} theme/catppuccin.css 25 + 26 + # Generate lib docs via nixdoc 27 + mkdir -p src/lib 28 + nixdoc -c services -d "Service utility functions" \ 29 + -p "" \ 30 + -f ${self + /lib/services.nix} > src/lib/services.md 31 + 32 + # Build the lib index for SUMMARY.md injection 33 + echo '- [services](lib/services.md)' > src/lib/index.md 34 + 35 + # Inject libdoc entries into SUMMARY.md 36 + substituteInPlace src/SUMMARY.md \ 37 + --replace-fail "libdoc" "$(cat src/lib/index.md)" 38 + 39 + # Build the book 40 + mdbook build 41 + ''; 42 + 43 + installPhase = '' 44 + cp -r ./dist $out 45 + 46 + # Place services.json alongside the book 47 + echo '${builtins.toJSON servicesManifest}' | jq . > $out/services.json 48 + ''; 49 + 50 + passthru.catppuccin-mdbook = fetchurl { 51 + url = "https://github.com/catppuccin/mdBook/releases/download/v4.0.0/catppuccin.css"; 52 + hash = "sha256-4IvmqQrfOSKcx6PAhGD5G7I44UN2596HECCFzzr/p/8="; 53 + }; 54 + 55 + passthru.serve = writeShellApplication { 56 + name = "docs-serve"; 57 + runtimeInputs = [ simple-http-server ]; 58 + text = '' 59 + echo "Serving docs at http://localhost:8000" 60 + simple-http-server -i -p 8000 -- ${finalAttrs.finalPackage} 61 + ''; 62 + }; 63 + })
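Assuming the flake exposes this derivation as `packages.<system>.docs` (the deploy workflow above builds `.#packages.x86_64-linux.docs`), the call site might look like the following sketch; the exact `servicesManifest` plumbing in flake.nix is an assumption based on the file's header comment:

```nix
# Hypothetical flake.nix call site; argument names mirror the
# "Injected from flake.nix" comment in packages/docs.nix.
packages.docs = pkgs.callPackage ./packages/docs.nix {
  inherit self servicesManifest; # servicesManifest: data rendered to services.json
};
```

Locally, `nix build .#docs` should then produce the rendered book in `result/`, and the `passthru.serve` helper (if exposed, e.g. as `nix run .#docs.serve`) serves it on port 8000.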
-71
packages/tranquil-pds.nix
··· 1 - { lib 2 - , rustPlatform 3 - , pkg-config 4 - , openssl 5 - , deno 6 - , nodejs 7 - , buildNpmPackage 8 - }: 9 - let 10 - toml = (lib.importTOML ../tranquil-pds-src/Cargo.toml).package; 11 - 12 - frontend = buildNpmPackage { 13 - pname = "tranquil-pds-frontend"; 14 - inherit (toml) version; 15 - 16 - src = ../tranquil-pds-src/frontend; 17 - 18 - npmDepsHash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="; # Will need to update 19 - 20 - buildPhase = '' 21 - runHook preBuild 22 - npm run build 23 - runHook postBuild 24 - ''; 25 - 26 - installPhase = '' 27 - runHook preInstall 28 - cp -r dist $out 29 - runHook postInstall 30 - ''; 31 - }; 32 - in 33 - rustPlatform.buildRustPackage { 34 - pname = "tranquil-pds"; 35 - inherit (toml) version; 36 - 37 - src = lib.fileset.toSource { 38 - root = ../tranquil-pds-src; 39 - fileset = lib.fileset.intersection 40 - (lib.fileset.fromSource (lib.sources.cleanSource ../tranquil-pds-src)) 41 - (lib.fileset.unions [ 42 - ../tranquil-pds-src/Cargo.toml 43 - ../tranquil-pds-src/Cargo.lock 44 - ../tranquil-pds-src/src 45 - ../tranquil-pds-src/.sqlx 46 - ../tranquil-pds-src/migrations 47 - ]); 48 - }; 49 - 50 - nativeBuildInputs = [ 51 - pkg-config 52 - ]; 53 - 54 - buildInputs = [ 55 - openssl 56 - ]; 57 - 58 - cargoLock.lockFile = ../tranquil-pds-src/Cargo.lock; 59 - 60 - doCheck = false; 61 - 62 - # Install frontend alongside binary 63 - postInstall = '' 64 - mkdir -p $out/share/tranquil-pds 65 - cp -r ${frontend} $out/share/tranquil-pds/frontend 66 - ''; 67 - 68 - meta = { 69 - license = lib.licenses.agpl3Plus; 70 - }; 71 - }
-4
secrets/secrets.nix
··· 66 66 "restic/password.age".publicKeys = [ 67 67 kierank 68 68 ]; 69 - "tranquil-pds.age".publicKeys = [ 70 - kierank 71 - ]; 72 - 73 69 "pbnj.age".publicKeys = [ 74 70 kierank 75 71 ];
secrets/tranquil-pds.age

This is a binary file and will not be displayed.