# Deployment

Two deploy paths: **infrastructure** (NixOS config changes) and **application code** (per-service repos).

## Infrastructure

Pushing to `main` triggers `.github/workflows/deploy.yaml` which runs `deploy-rs` over Tailscale to rebuild NixOS on the target machine.

```sh
# manual deploy
nix run 'github:serokell/deploy-rs' -- --remote-build --ssh-user kierank .
```

## Application Code

Each service repo has a minimal workflow calling the reusable `.github/workflows/deploy-service.yml`. On push to `main`:

1. Connects to Tailscale (`tag:deploy`)
2. SSHes as the **service user** (e.g., `cachet@terebithia`) via Tailscale SSH
3. Snapshots the SQLite DB (if `db_path` is provided)
4. `git pull` + `bun install --frozen-lockfile` + `sudo systemctl restart`
5. Health check (HTTP URL or systemd status fallback)
6. Auto-rollback on failure (restores DB snapshot + reverts to previous commit)

Per-app workflow — copy and change the `with:` values:

```yaml
name: Deploy
on:
  push:
    branches: [main]
  workflow_dispatch:
jobs:
  deploy:
    uses: taciturnaxolotl/dots/.github/workflows/deploy-service.yml@main
    with:
      service: cachet
      health_url: https://cachet.dunkirk.sh/health
      db_path: /var/lib/cachet/data/cachet.db
    secrets:
      TS_OAUTH_CLIENT_ID: ${{ secrets.TS_OAUTH_CLIENT_ID }}
      TS_OAUTH_SECRET: ${{ secrets.TS_OAUTH_SECRET }}
```

Omit `health_url` to fall back to `systemctl is-active`. Omit `db_path` for stateless services.

## mkService

`modules/lib/mkService.nix` standardizes service modules. A call to `mkService { ... }` provides:

- Systemd service with initial git clone (subsequent deploys via GitHub Actions)
- Caddy reverse proxy with TLS via Cloudflare DNS and optional rate limiting
- Data declarations (`sqlite`, `postgres`, `files`) that feed into automatic backups
- Dedicated system user with sudo for restart/stop/start (enables per-user Tailscale ACLs)
- Port conflict detection, security hardening, agenix secrets

### Adding a new service

1. Create a module in `modules/nixos/services/`
2. Enable it in `machines/terebithia/default.nix`
3. Add a deploy workflow to the app repo

See `modules/nixos/services/cachet.nix` for a minimal example.
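As a rough illustration of those steps, a new service module might look something like the sketch below. This is hypothetical: the attribute names (`name`, `domain`, `port`, `repository`, `data.sqlite`, `secretsFile`) are illustrative, and the real argument shape is defined by `modules/lib/mkService.nix`.

```nix
# Hypothetical sketch only; check modules/lib/mkService.nix and the
# cachet module for the actual mkService argument names.
{ config, ... }:
mkService {
  name = "myservice";                              # systemd unit + dedicated user
  domain = "myservice.dunkirk.sh";                 # Caddy vhost, Cloudflare DNS TLS
  port = 3007;                                     # checked for conflicts
  repository = "https://github.com/taciturnaxolotl/myservice";
  data.sqlite = [ "/var/lib/myservice/data/myservice.db" ];  # fed into backups
  secretsFile = config.age.secrets.myservice.path;
}
```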
# wifi

Declarative Wi-Fi profile manager using NetworkManager. Supports three ways to supply passwords and has built-in eduroam (WPA-EAP) support.

## Options

All options under `atelier.network.wifi`:

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `enable` | bool | `false` | Enable Wi-Fi management |
| `hostName` | string | — | Sets `networking.hostName` |
| `nameservers` | list of strings | `[]` | Custom DNS servers |
| `envFile` | path | — | Environment file providing PSK variables for all profiles |

### Profiles

Defined under `atelier.network.wifi.profiles.<ssid>`:

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `psk` | string or null | `null` | Literal WPA-PSK passphrase |
| `pskVar` | string or null | `null` | Environment variable name containing the PSK (from `envFile`) |
| `pskFile` | path or null | `null` | Path to file containing the PSK |
| `eduroam` | bool | `false` | Use WPA-EAP with MSCHAPV2 (for eduroam networks) |
| `identity` | string or null | `null` | EAP identity (required when `eduroam = true`) |

Only one of `psk`, `pskVar`, or `pskFile` should be set per profile.

## Example

```nix
atelier.network.wifi = {
  enable = true;
  hostName = "moonlark";
  nameservers = [ "1.1.1.1" "8.8.8.8" ];
  envFile = config.age.secrets.wifi.path;

  profiles = {
    "Home Network" = {
      pskVar = "HOME_PSK"; # read from envFile
    };
    "eduroam" = {
      eduroam = true;
      identity = "user@university.edu";
      pskVar = "EDUROAM_PSK";
    };
    "Phone Hotspot" = {
      pskFile = config.age.secrets.hotspot.path;
    };
  };
};
```
# wut

**W**orktrees **U**nexpectedly **T**olerable — a git worktree manager that keeps worktrees organized under `.worktrees/`.

## Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `atelier.shell.wut.enable` | bool | `false` | Install wut and the zsh shell wrapper |

## Usage

```bash
wut new feat/my-feature   # Create worktree + branch under .worktrees/
wut list                  # Show all worktrees
wut go feat/my-feature    # cd into worktree (via shell wrapper)
wut go                    # Interactive picker
wut path feat/my-feature  # Print worktree path
wut rm feat/my-feature    # Remove worktree + delete branch
```

## Shell integration

Wut needs to `cd` the calling shell, which a subprocess can't do directly. It works by printing a `__WUT_CD__=/path` marker that a zsh wrapper function intercepts:

```zsh
wut() {
  output=$(/path/to/wut "$@")
  if [[ "$output" == *"__WUT_CD__="* ]]; then
    cd "${output##*__WUT_CD__=}"
  else
    echo "$output"
  fi
}
```

This wrapper is automatically injected into `initContent` when the module is enabled.

## Safety

- `wut rm` refuses to delete worktrees with uncommitted changes (use `--force` to override)
- `wut rm` warns before deleting unmerged branches
- The main/master branch worktree cannot be removed
# Secrets

Secrets are managed using [agenix](https://github.com/ryantm/agenix) — encrypted at rest in the repo and decrypted at activation time to `/run/agenix/`.

## Usage

Create or edit a secret:

```bash
cd secrets && agenix -e myapp.age
```

The secret file contains environment variables, one per line:

```
DATABASE_URL=postgres://...
API_KEY=xxxxx
SECRET_TOKEN=yyyyy
```

## Adding a new secret

1. Add the public key entry to `secrets/secrets.nix`:

```nix
"service-name.age".publicKeys = [ kierank ];
```

2. Create and encrypt the secret:

```bash
agenix -e secrets/service-name.age
```

3. Declare it in the machine config:

```nix
age.secrets.service-name = {
  file = ../../secrets/service-name.age;
  owner = "service-name";
};
```

4. Reference `config.age.secrets.service-name.path` in the service module.

## Identity paths

The decryption keys are SSH keys configured per machine:

```nix
age.identityPaths = [
  "/home/kierank/.ssh/id_rsa"
  "/etc/ssh/id_rsa"
];
```
# Services

All services run on **terebithia** (Oracle Cloud aarch64) behind Caddy with Cloudflare DNS TLS.

## mkService-based

| Service | Domain | Port | Runtime | Description |
|---------|--------|------|---------|-------------|
| cachet | cachet.dunkirk.sh | 3000 | bun | Slack emoji/profile cache |
| hn-alerts | hn.dunkirk.sh | 3001 | bun | Hacker News monitoring |
| indiko | indiko.dunkirk.sh | 3003 | bun | IndieAuth/OAuth2 server |
| l4 | l4.dunkirk.sh | 3004 | bun | Image CDN — Slack image optimizer |
| canvas-mcp | canvas.dunkirk.sh | 3006 | bun | Canvas MCP server |
| control | control.dunkirk.sh | 3010 | bun | Admin dashboard for Caddy toggles |
| traverse | traverse.dunkirk.sh | 4173 | bun | Code walkthrough diagram server |
| cedarlogic | cedarlogic.dunkirk.sh | 3100 | custom | Circuit simulator |

## Multi-instance

| Service | Domain | Port | Description |
|---------|--------|------|-------------|
| emojibot-hackclub | hc.emojibot.dunkirk.sh | 3002 | Emojibot for Hack Club |
| emojibot-df1317 | df.emojibot.dunkirk.sh | 3005 | Emojibot for df1317 |

## Custom / external

| Service | Domain | Description |
|---------|--------|-------------|
| bore (frps) | bore.dunkirk.sh | HTTP/TCP/UDP tunnel proxy |
| herald | herald.dunkirk.sh | Git SSH hosting + email |
| knot | knot.dunkirk.sh | Tangled git hosting |
| spindle | spindle.dunkirk.sh | Tangled CI |
| battleship-arena | battleship.dunkirk.sh | Battleship game server |
| n8n | n8n.dunkirk.sh | Workflow automation |

## Architecture

Each mkService module provides:

- **Systemd service** — initial git clone for scaffolding, subsequent deploys via GitHub Actions
- **Caddy reverse proxy** — TLS via Cloudflare DNS challenge, optional rate limiting
- **Data declarations** — `sqlite`, `postgres`, `files` feed into automatic backups
- **Dedicated user** — sudo for restart/stop/start, per-user Tailscale SSH ACLs
- **Port conflict detection** — assertions prevent two services binding the same port
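Services are switched on from the machine config. A hedged sketch, assuming the `atelier.services.*` option namespace that the per-service docs use (exact sub-options vary per module):

```nix
# machines/terebithia/default.nix (sketch; option paths assume the
# atelier.services.* pattern shown in the per-service pages)
{
  atelier.services = {
    cachet.enable = true;
    control.enable = true;
    herald.enable = true;
  };
}
```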
# battleship-arena

Battleship game server with web interface and SSH-based bot submission.

**Domain:** `battleship.dunkirk.sh` · **Web Port:** 8081 · **SSH Port:** 2222

This is a **custom module** — it does not use mkService.

## Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `enable` | bool | `false` | Enable battleship-arena |
| `domain` | string | `"battleship.dunkirk.sh"` | Domain for Caddy reverse proxy |
| `sshPort` | port | `2222` | SSH port for bot submissions |
| `webPort` | port | `8081` | Web interface port |
| `uploadDir` | string | `"/var/lib/battleship-arena/submissions"` | Bot upload directory |
| `resultsDb` | string | `"/var/lib/battleship-arena/results.db"` | SQLite results database path |
| `adminPasscode` | string | `"battleship-admin-override"` | Admin passcode |
| `secretsFile` | path or null | `null` | Agenix secrets file |
| `package` | package | — | Battleship-arena package (from flake input) |
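Putting the options together, a minimal enable might look like this. It is a sketch: the `atelier.services.battleship-arena` namespace, the secret name, and the flake input path are all illustrative, not confirmed by the module itself.

```nix
atelier.services.battleship-arena = {
  enable = true;
  # defaults from the table above cover domain/ports
  secretsFile = config.age.secrets.battleship.path;                  # hypothetical secret name
  package = inputs.battleship-arena.packages.${pkgs.system}.default; # hypothetical input path
};
```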
# bore (server)

Lightweight tunneling server built on frp. Supports HTTP (wildcard subdomains), TCP, and UDP tunnels with optional OAuth authentication via Indiko.

**Domain:** `bore.dunkirk.sh` · **frp port:** 7000

This is a **custom module** — it does not use mkService.

## Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `enable` | bool | `false` | Enable bore server |
| `domain` | string | — | Base domain for wildcard subdomains |
| `bindAddr` | string | `"0.0.0.0"` | frps bind address |
| `bindPort` | port | `7000` | frps bind port |
| `vhostHTTPPort` | port | `7080` | Virtual host HTTP port |
| `allowedTCPPorts` | list of ports | `20000–20099` | Ports available for TCP tunnels |
| `allowedUDPPorts` | list of ports | `20000–20099` | Ports available for UDP tunnels |
| `authToken` | string or null | `null` | frp auth token (use `authTokenFile` instead) |
| `authTokenFile` | path or null | `null` | Path to file containing frp auth token |
| `enableCaddy` | bool | `true` | Auto-configure Caddy wildcard vhost |
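A minimal sketch of enabling the server with a token file (the `atelier.services.bore` namespace and the secret name are assumptions, following the pattern in the other service docs):

```nix
atelier.services.bore = {
  enable = true;
  domain = "bore.dunkirk.sh";                          # tunnels get wildcard subdomains
  authTokenFile = config.age.secrets.bore-token.path;  # hypothetical secret name
};
```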
### Authentication

When enabled, all HTTP tunnels are gated behind Indiko OAuth. Users must sign in before accessing tunneled services.

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `auth.enable` | bool | `false` | Enable bore-auth OAuth middleware |
| `auth.indikoURL` | string | `"https://indiko.dunkirk.sh"` | Indiko server URL |
| `auth.clientID` | string | — | OAuth client ID from Indiko |
| `auth.clientSecretFile` | path | — | Path to OAuth client secret |
| `auth.cookieHashKeyFile` | path | — | 32-byte cookie signing key |
| `auth.cookieBlockKeyFile` | path | — | 32-byte cookie encryption key |

After authentication, these headers are passed to tunneled services:

- `X-Auth-User` — user's profile URL
- `X-Auth-Name` — display name
- `X-Auth-Email` — email address

See [bore (client)](../modules/bore-client.md) for the home-manager client module.
# cedarlogic

Browser-based circuit simulator with real-time collaboration via WebSockets.

**Domain:** `cedarlogic.dunkirk.sh` · **Port:** 3100 · **Runtime:** custom

## Extra options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `wsPort` | port | `3101` | Hocuspocus WebSocket server for document collaboration |
| `cursorPort` | port | `3102` | Cursor relay WebSocket server for live cursors |
| `branch` | string | `"web"` | Git branch to clone (uses `web` branch, not `main`) |

## Caddy routing

Cedarlogic disables the default mkService Caddy config and uses path-based routing to three backends:

| Path | Backend |
|------|---------|
| `/ws` | `wsPort` (Hocuspocus) |
| `/cursor-ws` | `cursorPort` (cursor relay) |
| `/api/*`, `/auth/*` | main `port` |
| Everything else | Static files from `dist/` |

## Build step

Unlike other services, cedarlogic runs a build during deploy:

```
bun install → parse-gates → bun run build (Vite)
```

The build has a 120s timeout to accommodate Vite compilation.
# control

Admin dashboard for Caddy feature toggles. Provides a web UI to enable/disable paths on other services (e.g. blocking player tracking on the map).

**Domain:** `control.dunkirk.sh` · **Port:** 3010 · **Runtime:** bun

## Extra options

### `flags`

Defines per-domain feature flags; when a flag is active, control blocks the listed paths and redacts the listed JSON fields.

```nix
atelier.services.control.flags."map.dunkirk.sh" = {
  name = "Map";
  flags = {
    "block-tracking" = {
      name = "Block Player Tracking";
      description = "Disable real-time player location updates";
      paths = [
        "/sse"
        "/sse/*"
        "/tiles/*/markers/pl3xmap_players.json"
      ];
      redact."/tiles/settings.json" = [ "players" ];
    };
  };
};
```

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `flags` | attrsOf submodule | `{}` | Services and their feature flags, keyed by domain |
| `flags.<domain>.name` | string | — | Display name for the service |
| `flags.<domain>.flags.<id>.name` | string | — | Display name for the flag |
| `flags.<domain>.flags.<id>.description` | string | — | What the flag does |
| `flags.<domain>.flags.<id>.paths` | list of strings | `[]` | URL paths to block when flag is active |
| `flags.<domain>.flags.<id>.redact` | attrsOf (list of strings) | `{}` | JSON fields to redact from responses, keyed by path |

The flags config is serialized to `flags.json` and passed to control via the `FLAGS_CONFIG` environment variable.
# emojibot

Slack emoji management service. Supports multiple instances for different workspaces.

**Runtime:** bun · **Stateless** (no database)

This is a **custom module** — it does not use mkService. Each instance gets its own systemd service, user, and Caddy virtual host.

## Instance options

Instances are defined under `atelier.services.emojibot.instances.<name>`:

```nix
atelier.services.emojibot.instances = {
  hackclub = {
    enable = true;
    domain = "hc.emojibot.dunkirk.sh";
    port = 3002;
    workspace = "hackclub";
    channel = "C02T3CU03T3";
    repository = "https://github.com/taciturnaxolotl/emojibot";
    secretsFile = config.age.secrets."emojibot/hackclub".path;
  };
};
```

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `enable` | bool | `false` | Enable this instance |
| `domain` | string | — | Domain for Caddy reverse proxy |
| `port` | port | — | Port to run on |
| `secretsFile` | path | — | Agenix secrets file with Slack credentials |
| `repository` | string | `"https://github.com/taciturnaxolotl/emojibot"` | Git repo URL |
| `workspace` | string or null | `null` | Slack workspace name (for identification) |
| `channel` | string or null | `null` | Slack channel ID |

## Current instances

| Instance | Domain | Port | Workspace |
|----------|--------|------|-----------|
| hackclub | hc.emojibot.dunkirk.sh | 3002 | Hack Club |
| df1317 | df.emojibot.dunkirk.sh | 3005 | df1317 |
# herald

Git SSH hosting with email notifications. Provides a git push interface over SSH and sends email via SMTP/DKIM.

**Domain:** `herald.dunkirk.sh` · **SSH Port:** 2223 · **HTTP Port:** 8085

This is a **custom module** — it does not use mkService.

## Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `enable` | bool | `false` | Enable herald |
| `domain` | string | — | Domain for Caddy reverse proxy |
| `host` | string | `"0.0.0.0"` | Listen address |
| `sshPort` | port | `2223` | SSH listen port |
| `externalSshPort` | port | `2223` | External SSH port (if behind NAT) |
| `httpPort` | port | `8085` | HTTP API port |
| `dataDir` | path | `"/var/lib/herald"` | Data directory |
| `allowAllKeys` | bool | `true` | Allow all SSH keys |
| `secretsFile` | path | — | Agenix secrets (must contain `SMTP_PASS`) |
| `package` | package | `pkgs.herald` | Herald package |

### SMTP

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `smtp.host` | string | — | SMTP server hostname |
| `smtp.port` | port | `587` | SMTP server port |
| `smtp.user` | string | — | SMTP username |
| `smtp.from` | string | — | Sender address |

### DKIM

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `smtp.dkim.selector` | string or null | `null` | DKIM selector |
| `smtp.dkim.domain` | string or null | `null` | DKIM signing domain |
| `smtp.dkim.privateKeyFile` | path or null | `null` | Path to DKIM private key |
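A combined sketch of the SMTP and DKIM options above. The `atelier.services.herald` namespace, hostnames, and secret names are illustrative assumptions, not values from the module:

```nix
atelier.services.herald = {
  enable = true;
  domain = "herald.dunkirk.sh";
  secretsFile = config.age.secrets.herald.path;  # must contain SMTP_PASS
  smtp = {
    host = "smtp.example.com";                   # illustrative hostname
    user = "herald@dunkirk.sh";
    from = "herald@dunkirk.sh";
    dkim = {
      selector = "herald";
      domain = "dunkirk.sh";
      privateKeyFile = config.age.secrets.herald-dkim.path;  # hypothetical secret name
    };
  };
};
```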
# knot-sync

Mirrors Tangled knot repositories to GitHub on a cron schedule.

This is a **custom module** — it does not use mkService. Runs as a systemd timer, not a long-running service.

## Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `enable` | bool | `false` | Enable knot-sync |
| `repoDir` | string | `"/home/git/did:plc:..."` | Directory containing knot git repos |
| `githubUsername` | string | `"taciturnaxolotl"` | GitHub username to mirror to |
| `secretsFile` | path | — | Agenix secrets (must contain `GITHUB_TOKEN`) |
| `logFile` | string | `"/home/git/knot-sync.log"` | Log file path |
| `interval` | string | `"*/5 * * * *"` | Cron schedule for sync |
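A minimal sketch, assuming the `atelier.services.knot-sync` namespace used elsewhere in these docs and a hypothetical secret name:

```nix
atelier.services.knot-sync = {
  enable = true;
  secretsFile = config.age.secrets.knot-sync.path;  # must contain GITHUB_TOKEN
  interval = "*/15 * * * *";                        # override the default 5-minute cadence
};
```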
# Fish completion for anthropic-manager

# Helper function to get profile list
function __anthropic_manager_profiles
    set -l config_dir (test -n "$ANTHROPIC_CONFIG_DIR"; and echo $ANTHROPIC_CONFIG_DIR; or echo "$HOME/.config/crush")
    if test -d "$config_dir"
        find "$config_dir" -maxdepth 1 -type d -name "anthropic.*" 2>/dev/null | sed 's/.*anthropic\.//' | sort
    end
end

# Main options
complete -c anthropic-manager -s h -l help -d "Show help information"
complete -c anthropic-manager -s i -l init -d "Initialize a new profile" -xa "(__anthropic_manager_profiles)"
complete -c anthropic-manager -s s -l swap -d "Switch to a profile" -xa "(__anthropic_manager_profiles)"
complete -c anthropic-manager -s d -l delete -d "Delete a profile" -xa "(__anthropic_manager_profiles)"
complete -c anthropic-manager -s t -l token -d "Print current bearer token"
complete -c anthropic-manager -s l -l list -d "List all profiles"
complete -c anthropic-manager -s c -l current -d "Show current profile"
#compdef anthropic-manager

_anthropic_manager() {
  local config_dir="${ANTHROPIC_CONFIG_DIR:-$HOME/.config/crush}"
  local -a profiles

  # Get list of profiles
  if [[ -d "$config_dir" ]]; then
    profiles=(${(f)"$(find "$config_dir" -maxdepth 1 -type d -name "anthropic.*" 2>/dev/null | sed 's/.*anthropic\.//' | sort)"})
  fi

  _arguments -C \
    '(- *)'{-h,--help}'[Show help information]' \
    '(-i --init)'{-i,--init}'[Initialize a new profile]:profile name:' \
    '(-s --swap)'{-s,--swap}'[Switch to a profile]:profile:($profiles)' \
    '(-d --delete)'{-d,--delete}'[Delete a profile]:profile:($profiles)' \
    '(-t --token)'{-t,--token}'[Print current bearer token]' \
    '(-l --list)'{-l,--list}'[List all profiles]' \
    '(-c --current)'{-c,--current}'[Show current profile]'
}

_anthropic_manager "$@"
modules/home/apps/anthropic-manager/default.nix
{
  lib,
  pkgs,
  config,
  ...
}:
let
  cfg = config.atelier.apps.anthropic-manager;

  anthropicManagerScript = pkgs.writeShellScript "anthropic-manager" ''
    # Manage Anthropic OAuth credential profiles
    # Implements the same functionality as anthropic-api-key but with profile management

    set -uo pipefail

    CONFIG_DIR="''${ANTHROPIC_CONFIG_DIR:-$HOME/.config/crush}"
    CLIENT_ID="9d1c250a-e61b-44d9-88ed-5944d1962f5e"

    # Utilities
    base64url() {
      ${pkgs.coreutils}/bin/base64 -w0 | ${pkgs.gnused}/bin/sed 's/=//g; s/+/-/g; s/\//_/g'
    }

    sha256() {
      echo -n "$1" | ${pkgs.openssl}/bin/openssl dgst -binary -sha256
    }

    pkce_pair() {
      verifier=$(${pkgs.openssl}/bin/openssl rand 32 | base64url)
      challenge=$(printf '%s' "$verifier" | ${pkgs.openssl}/bin/openssl dgst -binary -sha256 | base64url)
      echo "$verifier $challenge"
    }

    authorize_url() {
      local challenge="$1"
      local state="$2"
      echo "https://claude.ai/oauth/authorize?response_type=code&client_id=$CLIENT_ID&redirect_uri=https://console.anthropic.com/oauth/code/callback&scope=org:create_api_key+user:profile+user:inference+user:sessions:claude_code&code_challenge=$challenge&code_challenge_method=S256&state=$state"
    }

    clean_pasted_code() {
      local input="$1"
      input="''${input#code:}"
      input="''${input#code=}"
      input="''${input#\"}"
      input="''${input%\"}"
      input="''${input#\'}"
      input="''${input%\'}"
      input="''${input#\`}"
      input="''${input%\`}"
      echo "$input" | ${pkgs.gnused}/bin/sed -E 's/[^A-Za-z0-9._~#-]//g'
    }

    exchange_code() {
      local code="$1"
      local verifier="$2"
      local cleaned
      cleaned=$(clean_pasted_code "$code")
      local pure="''${cleaned%%#*}"
      local state="''${cleaned#*#}"
      [[ "$state" == "$pure" ]] && state=""

      ${pkgs.curl}/bin/curl -s -X POST \
        -H "Content-Type: application/json" \
        -H "User-Agent: anthropic-manager/1.0" \
        -d "$(${pkgs.jq}/bin/jq -n \
          --arg code "$pure" \
          --arg state "$state" \
          --arg verifier "$verifier" \
          '{
            code: $code,
            state: $state,
            grant_type: "authorization_code",
            client_id: "9d1c250a-e61b-44d9-88ed-5944d1962f5e",
            redirect_uri: "https://console.anthropic.com/oauth/code/callback",
            code_verifier: $verifier
          }')" \
        "https://console.anthropic.com/v1/oauth/token"
    }

    exchange_refresh() {
      local refresh_token="$1"
      ${pkgs.curl}/bin/curl -s -X POST \
        -H "Content-Type: application/json" \
        -H "User-Agent: anthropic-manager/1.0" \
        -d "$(${pkgs.jq}/bin/jq -n \
          --arg refresh "$refresh_token" \
          '{
            grant_type: "refresh_token",
            refresh_token: $refresh,
            client_id: "9d1c250a-e61b-44d9-88ed-5944d1962f5e"
          }')" \
        "https://console.anthropic.com/v1/oauth/token"
    }

    save_tokens() {
      local profile_dir="$1"
      local access_token="$2"
      local refresh_token="$3"
      local expires_at="$4"

      mkdir -p "$profile_dir"
      echo -n "$access_token" > "$profile_dir/bearer_token"
      echo -n "$refresh_token" > "$profile_dir/refresh_token"
      echo -n "$expires_at" > "$profile_dir/bearer_token.expires"
      chmod 600 "$profile_dir/bearer_token" "$profile_dir/refresh_token" "$profile_dir/bearer_token.expires"
    }

    load_tokens() {
      local profile_dir="$1"
      [[ -f "$profile_dir/bearer_token" ]] || return 1
      [[ -f "$profile_dir/refresh_token" ]] || return 1
      [[ -f "$profile_dir/bearer_token.expires" ]] || return 1

      cat "$profile_dir/bearer_token"
      cat "$profile_dir/refresh_token"
      cat "$profile_dir/bearer_token.expires"
      return 0
    }

    get_token() {
      local profile_dir="$1"
      local print_token="''${2:-true}"

      if ! load_tokens "$profile_dir" >/dev/null 2>&1; then
        return 1
      fi

      local bearer refresh expires
      read -r bearer < "$profile_dir/bearer_token"
      read -r refresh < "$profile_dir/refresh_token"
      read -r expires < "$profile_dir/bearer_token.expires"

      local now
      now=$(date +%s)

      # If token valid for more than 60s, return it
      if [[ $now -lt $((expires - 60)) ]]; then
        [[ "$print_token" == "true" ]] && echo "$bearer"
        return 0
      fi

      # Try to refresh
      local response
      response=$(exchange_refresh "$refresh")

      if ! echo "$response" | ${pkgs.jq}/bin/jq -e '.access_token' >/dev/null 2>&1; then
        return 1
      fi

      local new_access new_refresh new_expires_in
      new_access=$(echo "$response" | ${pkgs.jq}/bin/jq -r '.access_token')
      new_refresh=$(echo "$response" | ${pkgs.jq}/bin/jq -r '.refresh_token // empty')
      new_expires_in=$(echo "$response" | ${pkgs.jq}/bin/jq -r '.expires_in')

      [[ -z "$new_refresh" ]] && new_refresh="$refresh"
      local new_expires=$((now + new_expires_in))

      save_tokens "$profile_dir" "$new_access" "$new_refresh" "$new_expires"
      [[ "$print_token" == "true" ]] && echo "$new_access"
      return 0
    }

    oauth_flow() {
      local profile_dir="$1"

      ${pkgs.gum}/bin/gum style --foreground 212 "Starting OAuth flow..."
      echo

      read -r verifier challenge < <(pkce_pair)
      local state
      state=$(${pkgs.openssl}/bin/openssl rand -base64 32 | ${pkgs.gnused}/bin/sed 's/[^A-Za-z0-9]//g')
      local auth_url
      auth_url=$(authorize_url "$challenge" "$state")

      ${pkgs.gum}/bin/gum style --foreground 35 "Opening browser for authorization..."
      ${pkgs.gum}/bin/gum style --foreground 117 "$auth_url"
      echo

      if command -v ${pkgs.xdg-utils}/bin/xdg-open &>/dev/null; then
        ${pkgs.xdg-utils}/bin/xdg-open "$auth_url" 2>/dev/null &
      elif command -v open &>/dev/null; then
        open "$auth_url" 2>/dev/null &
      fi

      local code
      code=$(${pkgs.gum}/bin/gum input --placeholder "Paste the authorization code from Anthropic" --prompt "Code: ")

      if [[ -z "$code" ]]; then
        ${pkgs.gum}/bin/gum style --foreground 196 "No code provided"
        return 1
      fi

      ${pkgs.gum}/bin/gum style --foreground 212 "Exchanging code for tokens..."

      local response
      response=$(exchange_code "$code" "$verifier")

      if ! echo "$response" | ${pkgs.jq}/bin/jq -e '.access_token' >/dev/null 2>&1; then
        ${pkgs.gum}/bin/gum style --foreground 196 "Failed to exchange code"
        echo "$response" | ${pkgs.jq}/bin/jq '.' 2>&1 || echo "$response"
        return 1
      fi

      local access_token refresh_token expires_in
      access_token=$(echo "$response" | ${pkgs.jq}/bin/jq -r '.access_token')
      refresh_token=$(echo "$response" | ${pkgs.jq}/bin/jq -r '.refresh_token')
      expires_in=$(echo "$response" | ${pkgs.jq}/bin/jq -r '.expires_in')

      local expires_at
      expires_at=$(($(date +%s) + expires_in))

      save_tokens "$profile_dir" "$access_token" "$refresh_token" "$expires_at"
      ${pkgs.gum}/bin/gum style --foreground 35 "✓ Authenticated successfully"
      return 0
    }

    list_profiles() {
      ${pkgs.gum}/bin/gum style --bold --foreground 212 "Available Anthropic profiles:"
      echo

      local current_profile=""
      if [[ -L "$CONFIG_DIR/anthropic" ]]; then
        current_profile=$(basename "$(readlink "$CONFIG_DIR/anthropic")" | ${pkgs.gnused}/bin/sed 's/^anthropic\.//')
      fi

      local found_any=false
      for profile_dir in "$CONFIG_DIR"/anthropic.*; do
        if [[ -d "$profile_dir" ]]; then
          found_any=true
          local profile_name
          profile_name=$(basename "$profile_dir" | ${pkgs.gnused}/bin/sed 's/^anthropic\.//')

          local status=""
          if get_token "$profile_dir" false 2>/dev/null; then
            local expires
            read -r expires < "$profile_dir/bearer_token.expires"
            local now
            now=$(date +%s)
            if [[ $now -lt $expires ]]; then
              status=" (valid)"
            else
              status=" (expired)"
            fi
          else
            status=" (invalid)"
          fi

          if [[ "$profile_name" == "$current_profile" ]]; then
            ${pkgs.gum}/bin/gum style --foreground 35 "  ✓ $profile_name$status (active)"
          else
            echo "    $profile_name$status"
          fi
        fi
      done

      if [[ "$found_any" == "false" ]]; then
        ${pkgs.gum}/bin/gum style --foreground 214 "No profiles found. Use 'anthropic-manager --init <name>' to create one."
      fi
    }

    show_current() {
      if [[ -L "$CONFIG_DIR/anthropic" ]]; then
        local current
        current=$(basename "$(readlink "$CONFIG_DIR/anthropic")" | ${pkgs.gnused}/bin/sed 's/^anthropic\.//')
        ${pkgs.gum}/bin/gum style --foreground 35 "Current profile: $current"
      else
        ${pkgs.gum}/bin/gum style --foreground 214 "No active profile"
      fi
    }

    init_profile() {
      local profile="$1"

      if [[ -z "$profile" ]]; then
        profile=$(${pkgs.gum}/bin/gum input --placeholder "Profile name (e.g., work, personal)" --prompt "Profile name: ")
        if [[ -z "$profile" ]]; then
          ${pkgs.gum}/bin/gum style --foreground 196 "No profile name provided"
          exit 1
        fi
      fi

      local profile_dir="$CONFIG_DIR/anthropic.$profile"

      if [[ -d "$profile_dir" ]]; then
        ${pkgs.gum}/bin/gum style --foreground 214 "Profile '$profile' already exists"
        if ${pkgs.gum}/bin/gum confirm "Re-authenticate?"; then
          rm -rf "$profile_dir"
        else
          exit 1
        fi
      fi

      if ! oauth_flow "$profile_dir"; then
        rm -rf "$profile_dir"
        exit 1
      fi

      # Ask to set as active
      if [[ ! -L "$CONFIG_DIR/anthropic" ]] || ${pkgs.gum}/bin/gum confirm "Set '$profile' as active profile?"; then
        [[ -L "$CONFIG_DIR/anthropic" ]] && rm "$CONFIG_DIR/anthropic"
        ln -sf "anthropic.$profile" "$CONFIG_DIR/anthropic"
        ${pkgs.gum}/bin/gum style --foreground 35 "✓ Set as active profile"
      fi
    }

    delete_profile() {
307- local target="$1"
308-309- if [[ -z "$target" ]]; then
310- # Interactive selection
311- local profiles=()
312- for profile_dir in "$CONFIG_DIR"/anthropic.*; do
313- if [[ -d "$profile_dir" ]]; then
314- profiles+=("$(basename "$profile_dir" | ${pkgs.gnused}/bin/sed 's/^anthropic\.//')")
315- fi
316- done
317-318- if [[ ''${#profiles[@]} -eq 0 ]]; then
319- ${pkgs.gum}/bin/gum style --foreground 196 "No profiles found"
320- exit 1
321- fi
322-323- target=$(printf '%s\n' "''${profiles[@]}" | ${pkgs.gum}/bin/gum choose --header "Select profile to delete:")
324- [[ -z "$target" ]] && exit 0
325- fi
326-327- local target_dir="$CONFIG_DIR/anthropic.$target"
328- if [[ ! -d "$target_dir" ]]; then
329- ${pkgs.gum}/bin/gum style --foreground 196 "Profile '$target' does not exist"
330- exit 1
331- fi
332-333- if ! ${pkgs.gum}/bin/gum confirm "Delete profile '$target'?"; then
334- exit 0
335- fi
336-337- # Check if this is the active profile
338- if [[ -L "$CONFIG_DIR/anthropic" ]]; then
339- local current
340- current=$(basename "$(readlink "$CONFIG_DIR/anthropic")" | ${pkgs.gnused}/bin/sed 's/^anthropic\.//')
341- if [[ "$current" == "$target" ]]; then
342- rm "$CONFIG_DIR/anthropic"
343- ${pkgs.gum}/bin/gum style --foreground 214 "Unlinked active profile"
344- fi
345- fi
346-347- rm -rf "$target_dir"
348- ${pkgs.gum}/bin/gum style --foreground 35 "✓ Deleted profile '$target'"
349- }
    swap_profile() {
      local target="$1"

      if [[ -n "$target" ]]; then
        local target_dir="$CONFIG_DIR/anthropic.$target"
        if [[ ! -d "$target_dir" ]]; then
          ${pkgs.gum}/bin/gum style --foreground 196 "Profile '$target' does not exist"
          echo
          list_profiles
          exit 1
        fi

        [[ -L "$CONFIG_DIR/anthropic" ]] && rm "$CONFIG_DIR/anthropic"
        ln -sf "anthropic.$target" "$CONFIG_DIR/anthropic"
        ${pkgs.gum}/bin/gum style --foreground 35 "✓ Switched to profile '$target'"
        exit 0
      fi

      # Interactive selection
      local profiles=()
      for profile_dir in "$CONFIG_DIR"/anthropic.*; do
        if [[ -d "$profile_dir" ]]; then
          profiles+=("$(basename "$profile_dir" | ${pkgs.gnused}/bin/sed 's/^anthropic\.//')")
        fi
      done

      if [[ ''${#profiles[@]} -eq 0 ]]; then
        ${pkgs.gum}/bin/gum style --foreground 196 "No profiles found"
        ${pkgs.gum}/bin/gum style --foreground 214 "Use 'anthropic-manager --init <name>' to create one"
        exit 1
      fi

      local selected
      selected=$(printf '%s\n' "''${profiles[@]}" | ${pkgs.gum}/bin/gum choose --header "Select profile:")

      if [[ -n "$selected" ]]; then
        [[ -L "$CONFIG_DIR/anthropic" ]] && rm "$CONFIG_DIR/anthropic"
        ln -sf "anthropic.$selected" "$CONFIG_DIR/anthropic"
        ${pkgs.gum}/bin/gum style --foreground 35 "✓ Switched to profile '$selected'"
      fi
    }
    print_token() {
      if [[ ! -L "$CONFIG_DIR/anthropic" ]]; then
        echo "Error: No active profile" >&2
        exit 1
      fi

      local profile_dir
      profile_dir=$(readlink -f "$CONFIG_DIR/anthropic")

      if ! get_token "$profile_dir" true 2>/dev/null; then
        echo "Error: Token invalid or expired" >&2
        exit 1
      fi
    }
    interactive_menu() {
      echo
      ${pkgs.gum}/bin/gum style --bold --foreground 212 "Anthropic Profile Manager"
      echo

      local current_profile=""
      if [[ -L "$CONFIG_DIR/anthropic" ]]; then
        current_profile=$(basename "$(readlink "$CONFIG_DIR/anthropic")" | ${pkgs.gnused}/bin/sed 's/^anthropic\.//')
        ${pkgs.gum}/bin/gum style --foreground 117 "Active: $current_profile"
      else
        ${pkgs.gum}/bin/gum style --foreground 214 "No active profile"
      fi

      echo

      local choice
      choice=$(${pkgs.gum}/bin/gum choose \
        "Switch profile" \
        "Create new profile" \
        "Delete profile" \
        "List all profiles" \
        "Get current token")

      case "$choice" in
        "Switch profile")
          swap_profile ""
          ;;
        "Create new profile")
          init_profile ""
          ;;
        "Delete profile")
          echo
          delete_profile ""
          ;;
        "List all profiles")
          echo
          list_profiles
          ;;
        "Get current token")
          echo
          print_token
          ;;
      esac
    }
    # Main
    mkdir -p "$CONFIG_DIR"

    case "''${1:-}" in
      --init|-i)
        init_profile "''${2:-}"
        ;;
      --list|-l)
        list_profiles
        ;;
      --current|-c)
        show_current
        ;;
      --token|-t|token)
        print_token
        ;;
      --swap|-s|swap)
        swap_profile "''${2:-}"
        ;;
      --delete|-d|delete)
        delete_profile "''${2:-}"
        ;;
      --help|-h|help)
        ${pkgs.gum}/bin/gum style --bold --foreground 212 "anthropic-manager - Manage Anthropic OAuth profiles"
        echo
        echo "Usage:"
        echo "  anthropic-manager                      Interactive menu"
        echo "  anthropic-manager --init [profile]     Initialize/create a new profile"
        echo "  anthropic-manager --swap [profile]     Switch to a profile (interactive if no profile given)"
        echo "  anthropic-manager --delete [profile]   Delete a profile (interactive if no profile given)"
        echo "  anthropic-manager --token              Print current bearer token (refresh if needed)"
        echo "  anthropic-manager --list               List all profiles with status"
        echo "  anthropic-manager --current            Show current active profile"
        echo "  anthropic-manager --help               Show this help"
        echo
        echo "Examples:"
        echo "  anthropic-manager                 Open interactive menu"
        echo "  anthropic-manager --init work     Create 'work' profile"
        echo "  anthropic-manager --swap work     Switch to 'work' profile"
        echo "  anthropic-manager --delete work   Delete 'work' profile"
        echo "  anthropic-manager --token         Get current bearer token"
        ;;
      "")
        # No args - check if interactive
        if [[ ! -t 0 ]] || [[ ! -t 1 ]]; then
          echo "Error: anthropic-manager requires an interactive terminal when called without arguments" >&2
          exit 1
        fi
        interactive_menu
        ;;
      *)
        ${pkgs.gum}/bin/gum style --foreground 196 "Unknown option: $1"
        echo "Use --help for usage information"
        exit 1
        ;;
    esac
  '';
  anthropicManager = pkgs.stdenv.mkDerivation {
    pname = "anthropic-manager";
    version = "1.0";

    dontUnpack = true;

    nativeBuildInputs = with pkgs; [ pandoc installShellFiles ];

    manPageSrc = ./anthropic-manager.1.md;
    bashCompletionSrc = ./completions/anthropic-manager.bash;
    zshCompletionSrc = ./completions/anthropic-manager.zsh;
    fishCompletionSrc = ./completions/anthropic-manager.fish;

    buildPhase = ''
      # Convert markdown man page to man format
      ${pkgs.pandoc}/bin/pandoc -s -t man $manPageSrc -o anthropic-manager.1
    '';

    installPhase = ''
      mkdir -p $out/bin

      # Install binary
      cp ${anthropicManagerScript} $out/bin/anthropic-manager
      chmod +x $out/bin/anthropic-manager

      # Install man page
      installManPage anthropic-manager.1

      # Install completions
      installShellCompletion --bash --name anthropic-manager $bashCompletionSrc
      installShellCompletion --zsh --name _anthropic-manager $zshCompletionSrc
      installShellCompletion --fish --name anthropic-manager.fish $fishCompletionSrc
    '';

    meta = with lib; {
      description = "Anthropic OAuth profile manager";
      homepage = "https://github.com/taciturnaxolotl/dots";
      license = licenses.mit;
      maintainers = [ ];
    };
  };
in
{
  options.atelier.apps.anthropic-manager.enable = lib.mkEnableOption "Enable anthropic-manager";

  config = lib.mkIf cfg.enable {
    home.packages = [
      anthropicManager
    ];
  };
}
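
Enabling the tool is then a one-line option in the home-manager configuration (a minimal sketch; it assumes this module file is already on the configuration's module import path):

```nix
{
  # Pulls anthropic-manager (plus its man page and shell
  # completions) into the user environment.
  atelier.apps.anthropic-manager.enable = true;
}
```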
···
   description = "Git repository URL — cloned once on first start for scaffolding";
 };

+healthUrl = lib.mkOption {
+  type = lib.types.nullOr lib.types.str;
+  default = null;
+  description = "Health check URL for monitoring";
+};
+
+# Internal metadata set by mkService factory — used by services-manifest
+_description = lib.mkOption {
+  type = lib.types.str;
+  default = description;
+  internal = true;
+  readOnly = true;
+};
+
+_runtime = lib.mkOption {
+  type = lib.types.str;
+  default = runtime;
+  internal = true;
+  readOnly = true;
+};
+
 # Data declarations for automatic backup
 data = {
   sqlite = lib.mkOption {