
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Overview

ATCR (ATProto Container Registry) is an OCI-compliant container registry that uses the AT Protocol for manifest storage and S3 for blob storage. Manifests are stored in users' Personal Data Servers (PDS) while layers are stored in S3.

## Go Workspace

The project uses a Go workspace (go.work) with two modules:

  • atcr.io — Main module (appview, hold, credential-helper, oauth-helper)
  • atcr.io/scanner — Scanner module (separate to isolate heavy Syft/Grype dependencies)

## Build Commands

Always build into the bin/ directory (-o bin/...), not the project root.

```bash
# Build main binaries
go build -o bin/atcr-appview ./cmd/appview
go build -o bin/atcr-hold ./cmd/hold
go build -o bin/docker-credential-atcr ./cmd/credential-helper
go build -o bin/oauth-helper ./cmd/oauth-helper

# Build scanner (separate module)
cd scanner && go build -o ../bin/atcr-scanner ./cmd/scanner && cd ..

# Build hold with billing support (optional build tag)
go build -tags billing -o bin/atcr-hold ./cmd/hold

# Tests
go test ./...                                    # all tests
go test ./pkg/atproto/...                        # specific package
go test -run TestManifestStore ./pkg/atproto/...  # specific test
go test -race ./...                              # race detector

# Docker
docker build -f Dockerfile.appview -t atcr.io/appview:latest .
docker build -f Dockerfile.hold -t atcr.io/hold:latest .
docker build -f Dockerfile.scanner -t atcr.io/scanner:latest .
docker-compose up -d

# Generate & run with config
./bin/atcr-appview config init config-appview.yaml
./bin/atcr-hold config init config-hold.yaml
./bin/atcr-appview serve --config config-appview.yaml
./bin/atcr-hold serve --config config-hold.yaml

# Scanner (env vars only, no YAML)
SCANNER_HOLD_URL=ws://localhost:8080 SCANNER_SHARED_SECRET=secret ./bin/atcr-scanner serve

# Usage report
go run ./cmd/usage-report --hold https://hold01.atcr.io
go run ./cmd/usage-report --hold https://hold01.atcr.io --from-manifests

# Utilities
go run ./cmd/db-migrate --help         # SQLite → libsql migration
go run ./cmd/record-query --help       # Query ATProto relay by collection
go run ./cmd/s3-test                   # S3 connectivity test
go run ./cmd/healthcheck <url>         # HTTP health check (for Docker)
```

## Architecture Overview

ATCR uses distribution/distribution as a library, extending it via middleware to route content to different backends:

  • Manifests → ATProto PDS (small JSON, stored as io.atcr.manifest records)
  • Blobs/Layers → S3 via hold service (presigned URLs for direct client-to-S3 transfers)
  • Authentication → ATProto OAuth with DPoP + Docker credential helpers

### Four Components

  1. AppView (cmd/appview) — OCI Distribution API server. Resolves identities, routes manifests to PDS, routes blobs to hold service, validates OAuth, issues registry JWTs. Includes web UI for browsing.
  2. Hold Service (cmd/hold) — BYOS blob storage. Embedded PDS with captain/crew/stats/scan records (all ATProto records in CAR store), S3-compatible storage, presigned URLs. Supports did:web (default) or did:plc identity with auto-recovery. Optional subsystems: admin UI, quotas, billing (Stripe), GC, scan dispatch, Bluesky status posts.
  3. Scanner (scanner/cmd/scanner) — Vulnerability scanning. Connects to hold via WebSocket, generates SBOMs (Syft), scans vulnerabilities (Grype). Priority queue with tier-based scheduling.
  4. Credential Helper (cmd/credential-helper) — Docker credential helper implementing ATProto OAuth flow, exchanges OAuth token for registry JWT.

### Request Flow Summary

Push: Client pushes to atcr.io/<identity>/<image>:<tag>. Registry middleware resolves identity → DID → PDS, discovers hold DID (from sailor profile defaultHold → legacy io.atcr.hold records → AppView default). Blobs go to hold via XRPC multipart upload (presigned S3 URLs). Manifests stored in user's PDS as io.atcr.manifest records with holdDid reference.

Pull: AppView fetches manifest from user's PDS. The manifest's holdDid field tells where blobs were stored. Blobs fetched from that hold via presigned download URLs. Pull always uses the historical hold from the manifest, even if the user changed their default since pushing.

Hold discovery priority (in findHoldDID(), pkg/appview/middleware/registry.go):

  1. Sailor profile's defaultHold (user preference)
  2. User's io.atcr.hold records (legacy)
  3. AppView's default_hold_did (fallback)
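The fallback chain above can be sketched as a first-non-empty lookup. This is a minimal illustration, not the real `findHoldDID()` (which lives in `pkg/appview/middleware/registry.go` and resolves each source over the network); the struct and field names here are assumptions.

```go
package main

import "fmt"

// holdSources is a hypothetical snapshot of the three discovery sources.
type holdSources struct {
	profileDefaultHold string   // sailor profile defaultHold (user preference)
	legacyHoldRecords  []string // DIDs from io.atcr.hold records (legacy)
	appViewDefault     string   // AppView's default_hold_did (fallback)
}

// findHoldDID applies the documented priority order and returns the
// first hold DID that is present.
func findHoldDID(s holdSources) string {
	if s.profileDefaultHold != "" {
		return s.profileDefaultHold
	}
	if len(s.legacyHoldRecords) > 0 {
		return s.legacyHoldRecords[0]
	}
	return s.appViewDefault
}

func main() {
	s := holdSources{
		legacyHoldRecords: []string{"did:web:hold.example"},
		appViewDefault:    "did:web:default.example",
	}
	fmt.Println(findHoldDID(s)) // legacy record wins over the fallback
}
```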

### Name Resolution

Pattern: atcr.io/<identity>/<image>:<tag> where identity is a handle or DID.

Resolution in pkg/atproto/resolver.go: Handle → DID (DNS/HTTPS) → PDS endpoint (DID document).
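Before resolution, the reference itself has to be split into its parts. The sketch below is a hypothetical helper for the `atcr.io/<identity>/<image>:<tag>` pattern; the real middleware parses names as part of the distribution routing, and `parseRef` is not an actual ATCR function.

```go
package main

import (
	"fmt"
	"strings"
)

// parseRef splits an atcr.io reference into identity, image, and tag.
// Digest references (@sha256:...) are intentionally out of scope here.
func parseRef(ref string) (identity, image, tag string, err error) {
	rest, ok := strings.CutPrefix(ref, "atcr.io/")
	if !ok {
		return "", "", "", fmt.Errorf("not an atcr.io reference: %s", ref)
	}
	name, tag, hasTag := strings.Cut(rest, ":")
	if !hasTag {
		tag = "latest" // Docker's conventional default tag
	}
	identity, image, ok = strings.Cut(name, "/")
	if !ok {
		return "", "", "", fmt.Errorf("missing image name: %s", ref)
	}
	return identity, image, tag, nil
}

func main() {
	id, img, tag, _ := parseRef("atcr.io/alice.bsky.social/app:v1")
	fmt.Println(id, img, tag)
}
```

The identity component then goes through the resolver chain described above (handle → DID → PDS endpoint).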

### Nautical Terminology

  • Sailors = registry users, Captains = hold owners, Crew = hold members
  • Holds = storage endpoints (BYOS), Quartermaster/Bosun/Deckhand = crew tiers

## Hold Embedded PDS Records

The hold's embedded PDS stores all operational data as ATProto records in a CAR store (not SQLite). SQLite holds only the records index and events.

| Collection | Cardinality | Description |
| --- | --- | --- |
| io.atcr.hold.captain | Singleton | Hold identity, owner DID, settings |
| io.atcr.hold.crew | Per-member | Crew membership + permissions |
| io.atcr.hold.layer | Per-layer | Layer metadata (digest, size, media type) |
| io.atcr.hold.stats | Per-repo | Push/pull counts per owner+repository |
| io.atcr.hold.scan | Per-scan | Vulnerability scan results |
| app.bsky.feed.post | Status posts | Online/offline status, push notifications |
| sh.tangled.actor.profile | Singleton | Hold profile (name, description, avatar) |

## Authentication

Three token types flow through the system:

| Token | Issued By | Used For | Lifetime |
| --- | --- | --- | --- |
| OAuth (access+refresh) | User's PDS | AppView → PDS communication | ~2h / ~90d |
| Registry JWT | AppView | Docker client → AppView | 5 min |
| Service Token | User's PDS | AppView → Hold service | 60s (cached 50s) |

Docker Client ──Registry JWT──→ AppView ──OAuth──→ User's PDS ──Service Token──→ Hold

The credential helper never manages OAuth tokens directly — AppView owns the OAuth session and issues registry JWTs. See docs/OAUTH.md for full OAuth/DPoP implementation details.

## Hold Authorization

  • Public hold: Anonymous reads allowed. Writes require captain or crew with blob:write.
  • Private hold: Reads require crew with blob:read or blob:write. Writes require blob:write.
  • blob:write implicitly grants blob:read.
  • Captain has all permissions implicitly.
  • See docs/BYOS.md for full authorization model and permission matrix.
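The rules above boil down to two pure predicates. This is a hypothetical condensation of the permission matrix, not the real check in `pkg/hold/pds/auth.go`; the struct and permission-string names are assumptions (see docs/BYOS.md for the authoritative model).

```go
package main

import "fmt"

// member is a hypothetical crew entry with its granted permissions.
type member struct {
	isCaptain bool
	perms     map[string]bool // e.g. "blob:read", "blob:write"
}

// canRead: public holds allow anonymous reads; private holds require
// crew with blob:read or blob:write (write implies read). Captains
// have all permissions implicitly.
func canRead(publicHold bool, m *member) bool {
	if publicHold {
		return true
	}
	if m == nil {
		return false
	}
	return m.isCaptain || m.perms["blob:read"] || m.perms["blob:write"]
}

// canWrite: writes always require captain or crew with blob:write.
func canWrite(m *member) bool {
	if m == nil {
		return false
	}
	return m.isCaptain || m.perms["blob:write"]
}

func main() {
	crew := &member{perms: map[string]bool{"blob:write": true}}
	fmt.Println(canRead(false, crew), canWrite(crew)) // write implies read
}
```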

## Key File Locations

| Responsibility | Files |
| --- | --- |
| ATProto records & collections | pkg/atproto/lexicon.go |
| DID/handle resolution | pkg/atproto/resolver.go |
| PDS client (XRPC) | pkg/atproto/client.go |
| Manifest ↔ ATProto storage | pkg/atproto/manifest_store.go |
| Sailor profiles | pkg/atproto/profile.go |
| Registry middleware (identity resolution, hold discovery) | pkg/appview/middleware/registry.go |
| Auth middleware (JWT validation) | pkg/appview/middleware/auth.go |
| Content routing (manifests vs blobs) | pkg/appview/storage/routing_repository.go |
| Blob proxy to hold (presigned URLs) | pkg/appview/storage/proxy_blob_store.go |
| Request context struct | pkg/appview/storage/context.go |
| Database queries | pkg/appview/db/queries.go |
| Database schema | pkg/appview/db/schema.sql |
| OAuth client & session refresher | pkg/auth/oauth/client.go |
| OAuth P-256 key management | pkg/auth/oauth/keys.go |
| Hold PDS endpoints & auth | pkg/hold/pds/xrpc.go, pkg/hold/pds/auth.go |
| Hold DID management (did:web, did:plc, PLC recovery) | pkg/hold/pds/did.go |
| Hold captain records | pkg/hold/pds/captain.go |
| Hold crew management | pkg/hold/pds/crew.go |
| Hold push/pull stats (ATProto records in CAR store) | pkg/hold/pds/stats.go |
| Hold layer records | pkg/hold/pds/layer.go |
| Hold scan records & scanner integration | pkg/hold/pds/scan.go, pkg/hold/pds/scan_broadcaster.go |
| Hold Bluesky status posts | pkg/hold/pds/status.go |
| Hold OCI upload endpoints | pkg/hold/oci/xrpc.go |
| Hold config | pkg/hold/config.go |
| AppView config | pkg/appview/config.go |
| Config marshaling (commented YAML) | pkg/config/marshal.go |
| Scanner config (env-only) | scanner/internal/config/config.go |

## Configuration

ATCR uses Viper for config. YAML primary, env vars override. Generate defaults with config init.

Env var convention: Prefix + YAML path with _ separators:

  • AppView: ATCR_ (e.g., ATCR_SERVER_DEFAULT_HOLD_DID)
  • Hold: HOLD_ (e.g., HOLD_SERVER_PUBLIC_URL)
  • S3: standard AWS names (AWS_ACCESS_KEY_ID, S3_BUCKET, S3_ENDPOINT)
  • Scanner: SCANNER_ prefix (env-only, no Viper)
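The prefix-plus-path convention can be captured in one function. This is a hypothetical helper for illustration; in the real code Viper handles the mapping internally (typically via `AutomaticEnv` with a key replacer).

```go
package main

import (
	"fmt"
	"strings"
)

// envName derives the override variable for a YAML config path:
// dots become underscores, everything is uppercased, and the
// component prefix is prepended.
func envName(prefix, yamlPath string) string {
	return prefix + strings.ToUpper(strings.ReplaceAll(yamlPath, ".", "_"))
}

func main() {
	fmt.Println(envName("ATCR_", "server.default_hold_did")) // AppView override
	fmt.Println(envName("HOLD_", "server.public_url"))       // Hold override
}
```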

See config-appview.example.yaml and config-hold.example.yaml for all options. Config structs use comment struct tags for auto-generating commented YAML via MarshalCommentedYAML() in pkg/config/marshal.go.

## Development Gotchas

  • Do NOT run npm run css:build or npm run js:build manually — Air handles these on file change
  • Do NOT edit icons.svg directly — SVG icon sprite sheets (pkg/appview/public/icons.svg, pkg/hold/admin/public/icons.svg) are auto-generated from template icon references during build. Just reference icons by name in templates and the build will include them.
  • RoutingRepository is created fresh on EVERY request (no caching). Previous caching caused stale OAuth sessions and "invalid refresh token" errors. The OAuth refresher caches efficiently already (in-memory + DB).
  • Storage driver import: _ "github.com/distribution/distribution/v3/registry/storage/driver/s3-aws" — blank import required
  • Hold DID lookups use database (manifests table), not in-memory cache — persistent across restarts
  • Context keys (auth.method, puller.did) exist because Repository() receives context.Context from the distribution library interface — context values are the only way to pass data from HTTP middleware into the distribution middleware layer. Both are copied into RegistryContext inside Repository().
  • OAuth key types: AppView uses P-256 (ES256) for OAuth, not K-256 like PDS keys
  • Confidential vs public clients: Production uses a P-256 key at /var/lib/atcr/oauth/client.key (auto-generated); localhost is always a public client
  • Hold stats are ATProto records in the CAR store: io.atcr.hold.stats records are stored via repomgr.PutRecord(), not in SQLite. They are lost if the CAR store is lost without a backup.
  • PLC auto-update on boot — When using did:plc, LoadOrCreateDID() calls EnsurePLCCurrent() every startup. If local signing key or URL doesn't match plc.directory, it auto-updates (requires rotation key on disk).
  • Hold CAR store is the source of truth — Captain, crew, layer, stats, scan records, Bluesky posts, profiles are all ATProto records in the CAR store. SQLite holds only the records index and events.

## Common Tasks

Adding a new ATProto record type:

  1. Define schema in pkg/atproto/lexicon.go
  2. Add collection constant (e.g., MyCollection = "io.atcr.my-type")
  3. Add constructor function (e.g., NewMyRecord())
  4. Update client methods if needed

Modifying storage routing:

  1. Edit pkg/appview/storage/routing_repository.go
  2. Update Blobs() or Manifests() method
  3. Context passed via RegistryContext struct (pkg/appview/storage/context.go)

Changing name resolution:

  1. Modify pkg/atproto/resolver.go for DID/handle resolution
  2. Update pkg/appview/middleware/registry.go if changing routing
  3. findHoldDID() checks: sailor profile → io.atcr.hold records (legacy) → default hold DID

Working with OAuth client:

  • Self-contained: pass baseURL, handles client ID/redirect URI/scopes
  • Standard callback path: /auth/oauth/callback (all ATCR components)
  • See pkg/auth/oauth/client.go for NewClientApp(), refresher setup

Adding BYOS support for a user:

  1. User configures hold YAML (storage credentials, public URL, owner DID)
  2. User runs hold service — creates captain + crew records in embedded PDS
  3. User sets sailor profile defaultHold to their hold's DID
  4. AppView automatically routes blobs to user's storage — no AppView changes needed
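Step 1's hold YAML might look roughly like this. The key names below are illustrative guesses inferred from the env-var convention (e.g. HOLD_SERVER_PUBLIC_URL) and are not authoritative; see config-hold.example.yaml for the real options.

```yaml
# Hypothetical BYOS hold config sketch; key names are illustrative.
server:
  public_url: https://hold.example.com   # HOLD_SERVER_PUBLIC_URL
storage:
  bucket: my-hold-blobs                  # S3_BUCKET
  endpoint: https://s3.example.com       # S3_ENDPOINT
captain:
  did: did:plc:exampleownerdid           # owner DID
```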

Working with the database:

  • Base schema: pkg/appview/db/schema.sql — source of truth for fresh installs
  • Migrations: pkg/appview/db/migrations/*.yaml — only for ALTER/UPDATE/DELETE on existing DBs
  • Adding new tables: Add to schema.sql only (no migration needed)
  • Altering tables: Create migration AND update schema.sql to keep them in sync

Hold DID recovery/migration (did:plc):

  1. Back up rotation.key and DID string (from did.txt or plc.directory)
  2. Set database.did_method: plc and database.did: "did:plc:..." in config
  3. Provide rotation_key (multibase K-256 private key) — signing key auto-generates if missing
  4. On boot: LoadOrCreateDID() adopts the DID, EnsurePLCCurrent() auto-updates PLC directory if keys/URL changed
  5. Without rotation key: hold boots but logs warning about PLC mismatch

Adding web UI features:

  • Add handler in pkg/appview/handlers/
  • Register route in pkg/appview/routes/routes.go
  • Create template in pkg/appview/templates/pages/

## Testing Strategy

  • Mock ATProto client for manifest operations
  • Mock S3 driver for blob operations
  • Test name resolution independently
  • Integration tests require real PDS + S3

## Documentation References

  • BYOS Architecture: docs/BYOS.md
  • OAuth Implementation: docs/OAUTH.md
  • Hold Service: docs/hold.md
  • AppView: docs/appview.md
  • Hold XRPC Endpoints: docs/HOLD_XRPC_ENDPOINTS.md
  • Development Guide: docs/DEVELOPMENT.md
  • Billing/Quotas: docs/BILLING.md, docs/QUOTAS.md
  • Scanning: docs/SBOM_SCANNING.md
  • ATProto Spec: https://atproto.com/specs/oauth
  • OCI Distribution Spec: https://github.com/opencontainers/distribution-spec