WIP! A BB-style forum, on the ATmosphere! We're still working... we'll be back soon when we have something to show off!

atBB Monorepo#

atBB is a decentralized BB-style forum built on the AT Protocol. Users own their posts on their own PDS; the forum's AppView indexes and serves them. Lexicon namespace: space.atbb.* (domain atbb.space is owned). License: AGPL-3.0.

The master project plan with MVP phases and progress tracking lives at docs/atproto-forum-plan.md.

Apps & Packages#

Apps (apps/)#

Servers and applications that are deployed or run as services.

| App | Description | Port |
| --- | --- | --- |
| @atbb/appview | Hono JSON API server — indexes forum data, serves API | 3000 |
| @atbb/web | Hono JSX + HTMX server-rendered web UI — calls appview API | 3001 |

Packages (packages/)#

Shared libraries, tools, and utilities consumed by apps or used standalone.

| Package | Description |
| --- | --- |
| @atbb/db | Drizzle ORM schema and connection factory for PostgreSQL |
| @atbb/lexicon | AT Proto lexicon definitions (YAML) + generated TypeScript types |

Dependency chain: @atbb/lexicon and @atbb/db build first, then @atbb/appview and @atbb/web build in parallel. Turbo handles this via ^build.
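
That ordering comes from Turbo's `^build` convention; a minimal sketch of the relevant turbo.json task (the real file may declare more tasks and outputs):

```json
{
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**"]
    }
  }
}
```

`^build` means "run build in all workspace dependencies first," so @atbb/appview's build waits on @atbb/lexicon and @atbb/db, while @atbb/appview and @atbb/web (which don't depend on each other) build in parallel.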

Development#

Setup#

devenv shell                    # enter Nix dev shell (Node.js, pnpm, turbo)
pnpm install                    # install all workspace dependencies
cp .env.example .env            # configure environment variables

Commands#

pnpm build                      # build all packages (lexicon + db → appview + web)
pnpm dev                        # start all dev servers with hot reload
pnpm test                       # run all tests across all packages
pnpm clean                      # remove all dist/ directories
devenv up                       # start appview + web servers via process manager
pnpm --filter @atbb/appview db:migrate  # run database migrations
pnpm --filter @atbb/appview dev # run a single package
pnpm --filter @atbb/appview test # run tests for a single package

Environment Variables#

See .env.example. Key variables:

  • PORT — server port (appview: 3000, web: 3001)
  • FORUM_DID — the forum's AT Proto DID
  • PDS_URL — URL of the forum's PDS
  • APPVIEW_URL — URL the web package uses to reach the appview API
  • FORUM_HANDLE, FORUM_PASSWORD — forum service account credentials

OAuth & session management (required for production):

  • OAUTH_PUBLIC_URL — public URL where AppView is accessible (used for client_id and redirect_uri)
  • SESSION_SECRET — signing key for session tokens (generate with openssl rand -hex 32)
  • SESSION_TTL_DAYS — session lifetime in days (default: 7)
  • REDIS_URL — optional Redis URL for session storage (recommended for multi-instance deployments)
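
Pulling the variables above together, a hypothetical .env might look like this (all values are placeholders — copy .env.example rather than this sketch):

```shell
PORT=3000
FORUM_DID=did:plc:example
PDS_URL=https://pds.example.com
APPVIEW_URL=http://localhost:3000
FORUM_HANDLE=forum.atbb.space
FORUM_PASSWORD=change-me

# OAuth & session management (production)
OAUTH_PUBLIC_URL=https://atbb.example.com
SESSION_SECRET=replace-with-output-of-openssl-rand-hex-32
SESSION_TTL_DAYS=7
REDIS_URL=redis://localhost:6379
```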

Deployment#

Docker#

The project includes production-ready Docker infrastructure for single-container deployment:

# Build the Docker image
docker build -t atbb:latest .

# Run with docker-compose (recommended)
cp docker-compose.example.yml docker-compose.yml
# Edit docker-compose.yml with your DATABASE_URL, FORUM_DID, etc.
docker compose up -d

What's included:

  • Multi-stage Dockerfile (Node 22 Alpine, ~200MB final image)
  • Nginx reverse proxy serving both appview (port 3000) and web (port 3001) on port 80
  • Non-root user (atbb:atbb) for security
  • Health checks on /api/healthz
  • Production-ready entrypoint script

Key files:

  • Dockerfile — multi-stage build definition
  • entrypoint.sh — startup script (nginx + node servers)
  • nginx.conf — reverse proxy configuration
  • docker-compose.example.yml — orchestration template
  • docs/deployment-guide.md — comprehensive deployment instructions
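
For orientation, a hypothetical docker-compose.yml consistent with the description above (service names, credentials, and DID are placeholders — use docker-compose.example.yml as the real starting point):

```yaml
services:
  atbb:
    image: ghcr.io/atbb-community/atbb:latest
    ports:
      - "80:80"                      # nginx fronts appview (3000) and web (3001)
    environment:
      DATABASE_URL: postgres://atbb:change-me@db:5432/atbb
      FORUM_DID: did:plc:example     # placeholder
    depends_on:
      - db
  db:
    image: postgres:17
    environment:
      POSTGRES_USER: atbb
      POSTGRES_PASSWORD: change-me
      POSTGRES_DB: atbb
```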

Database migrations: The container does NOT auto-run migrations. Run manually before starting:

docker compose run --rm atbb pnpm --filter @atbb/appview db:migrate

Pre-Commit Checks#

Every commit automatically runs three checks in parallel via lefthook:

  1. Lint — oxlint scans staged TypeScript/JavaScript files for code quality issues
  2. Typecheck — pnpm turbo lint runs type checking on affected packages
  3. Test — Vitest runs tests in packages with staged changes
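
As a sketch, the lefthook.yml shape for those three parallel checks might look like this (globs and filtering are assumptions — the actual lefthook.yml is authoritative):

```yaml
pre-commit:
  parallel: true
  commands:
    lint:
      glob: "*.{ts,tsx,js,jsx}"
      run: pnpm exec oxlint {staged_files}
    typecheck:
      run: pnpm turbo lint
    test:
      run: pnpm test
```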

Auto-Fixing Lint Issues#

Before committing, auto-fix safe lint violations:

# Fix all packages
pnpm turbo lint:fix

# Fix specific package
pnpm --filter @atbb/appview lint:fix

Bypassing Hooks (Emergency Only)#

In urgent situations, bypass hooks with:

git commit --no-verify -m "emergency: your message"

Use sparingly — hooks catch issues that would fail in CI.

CI/CD#

GitHub Actions Workflows#

.github/workflows/ci.yml — Runs on all pull requests (parallel jobs):

  • Lint: pnpm exec oxlint . — catches code quality issues
  • Type Check: pnpm turbo lint — verifies TypeScript types across all packages
  • Test: pnpm test — runs all tests with PostgreSQL 17 service container
  • Build: pnpm build — verifies compilation succeeds

.github/workflows/publish.yml — Runs on pushes to main branch:

  • Builds Docker image and publishes to GitHub Container Registry (GHCR)
  • Tags: latest (main branch) and sha-<commit> (specific commit)
  • Image: ghcr.io/atbb-community/atbb:latest

All checks must pass before merging a PR.

How Hooks Work#

  • Lefthook manages git hooks (lefthook.yml)
  • Oxlint provides fast linting (.oxlintrc.json)
  • Turbo filters checks to affected packages only
  • Hooks auto-install after pnpm install via prepare script

Testing Standards#

CRITICAL: Always run tests before committing code or requesting code review.

Running Tests#

# Run all tests
pnpm test

# Run tests for a specific package
pnpm --filter @atbb/appview test

# Run tests in watch mode during development
pnpm --filter @atbb/appview test --watch

# Run a specific test file
pnpm --filter @atbb/appview test src/lib/__tests__/config.test.ts

Environment Variables in Tests#

CRITICAL: Turbo blocks environment variables by default for cache safety. Tests requiring env vars must declare them in turbo.json:

{
  "tasks": {
    "test": {
      "dependsOn": ["^build"],
      "env": ["DATABASE_URL"]
    }
  }
}

Symptoms of missing declaration:

  • Tests pass when run directly (pnpm --filter @atbb/appview test)
  • Tests fail when run via Turbo (pnpm test) with undefined env vars
  • CI fails even though env vars are set in workflow
  • Database errors like database "username" does not exist (postgres defaults to system username when DATABASE_URL is unset)

Why this matters: Turbo's caching requires deterministic inputs. Environment variables that leak into tasks without declaration would make cache hits unpredictable. By explicitly declaring env vars in turbo.json, you tell Turbo to include them in the task's input hash and pass them through to the test process.

When adding new env vars to tests: Update turbo.json immediately, or tests will mysteriously fail when run via Turbo but pass when run directly.

When to Run Tests#

Before every commit:

pnpm test  # Verify all tests pass
git add .
git commit -m "feat: your changes"

Before requesting code review:

pnpm build  # Ensure clean build
pnpm test   # Verify all tests pass
# Only then push and request review

After fixing review feedback:

# Make fixes
pnpm test   # Verify tests still pass
# Push updates

Test Requirements#

All new features must include tests:

  • API endpoints: Test success cases, error cases, edge cases
  • Business logic: Test all code paths and error conditions
  • Error handling: Test that errors are caught and logged appropriately
  • Security features: Test authentication, authorization, input validation

Test quality standards:

  • Tests must be independent (no shared state between tests)
  • Use descriptive test names that explain what is being tested
  • Mock external dependencies (databases, APIs, network calls)
  • Test error paths, not just happy paths
  • Verify logging and error messages are correct

Red flags (do not commit):

  • Skipped tests (test.skip, it.skip) without Linear issue tracking why
  • Tests that pass locally but fail in CI
  • Tests that require manual setup or specific data
  • Tests with hardcoded timing (setTimeout, sleep) - use proper mocks
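
One way to avoid hardcoded timing is to inject a clock instead of sleeping; a minimal sketch (isSessionExpired and its signature are invented for illustration, not project code):

```typescript
// A clock is just a function returning the current time in ms.
type Clock = () => number;

// Time-dependent logic takes the clock as a parameter, defaulting to Date.now.
function isSessionExpired(createdAt: number, ttlMs: number, now: Clock = Date.now): boolean {
  return now() - createdAt >= ttlMs;
}

// In tests, pass a fake clock — no setTimeout or sleep needed:
console.log(isSessionExpired(0, 500, () => 1000)); // true
console.log(isSessionExpired(0, 500, () => 300));  // false
```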

Example Test Structure#

describe("createForumRoutes", () => {
  it("returns forum metadata when forum exists", async () => {
    // Arrange: Set up test context with mock data
    const ctx = await createTestContext();

    // Act: Call the endpoint
    const res = await app.request("/api/forum");

    // Assert: Verify response
    expect(res.status).toBe(200);
    const data = await res.json();
    expect(data.name).toBe("Test Forum");
  });

  it("returns 404 when forum does not exist", async () => {
    // Test error case
    const ctx = await createTestContext({ emptyDb: true });
    const res = await app.request("/api/forum");
    expect(res.status).toBe(404);
  });
});

Test Coverage Expectations#

While we don't enforce strict coverage percentages, aim for:

  • Critical paths: 100% coverage (authentication, authorization, data integrity)
  • Error handling: All catch blocks should be tested
  • API endpoints: All routes should have tests
  • Business logic: All functions with branching logic should be tested

Do not:

  • Skip writing tests to "move faster" - untested code breaks in production
  • Write tests after requesting review - tests inform implementation
  • Rely on manual testing alone - automated tests catch regressions

Before Requesting Code Review#

CRITICAL: Run this checklist before requesting review to catch issues early:

# 1. Verify all tests pass
pnpm test

# 2. Check runtime dependencies are correctly placed
# (Runtime imports must be in dependencies, not devDependencies)
grep -r "from 'drizzle-orm'" apps/*/src  # If found, verify in dependencies
grep -r "from 'postgres'" apps/*/src    # If found, verify in dependencies

# 3. Verify error test coverage is comprehensive
# For API endpoints, ensure you have tests for:
# - Input validation (missing fields, wrong types, malformed JSON)
# - Error classification (network→503, server→500)
# - Error message clarity (user-friendly, no stack traces)

Common mistake: Adding error tests AFTER review feedback instead of DURING implementation. Write error tests immediately after implementing the happy path — they often reveal bugs in error classification and input validation that are better caught before review.

Lexicon Conventions#

  • Source of truth is YAML in packages/lexicon/lexicons/. Never edit generated JSON or TypeScript.
  • Build pipeline: YAML → JSON (scripts/build.ts) → TypeScript (@atproto/lex-cli gen-api).
  • Adding a new lexicon: Create a .yaml file under lexicons/space/atbb/, run pnpm --filter @atbb/lexicon build.
  • Record keys: Use key: tid for collections (multiple records per repo). Use key: literal:self for singletons.
  • References: Use com.atproto.repo.strongRef wrapped in named defs (e.g., forumRef, subjectRef).
  • Extensible fields: Use knownValues (not enum) for strings that may grow (permissions, reaction types, mod actions).
  • Record ownership:
    • Forum DID owns: forum.forum, forum.category, forum.role, modAction
    • User DID owns: post, membership, reaction
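
For illustration, a hypothetical lexicon YAML following these conventions (the record name, fields, and knownValues are invented — real definitions live in packages/lexicon/lexicons/, and refs there are wrapped in named defs like subjectRef):

```yaml
lexicon: 1
id: space.atbb.reaction        # hypothetical example, not an actual lexicon
defs:
  main:
    type: record
    key: tid                   # multiple reactions per user repo
    record:
      type: object
      required: [subject, value, createdAt]
      properties:
        subject:
          type: ref
          ref: com.atproto.repo.strongRef
        value:
          type: string
          knownValues: [like]  # extensible — knownValues, not a closed enum
        createdAt:
          type: string
          format: datetime
```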

AT Protocol Conventions#

  • Unified post model: There is no separate "topic" type. A space.atbb.post without a reply ref is a topic starter; one with a reply ref is a reply.
  • Reply chains: replyRef has both root (thread starter) and parent (direct parent) — same pattern as app.bsky.feed.post.
  • MVP trust model: The AppView holds the Forum DID's signing keys directly and writes forum-level records on behalf of admins/mods after verifying their role. This will be replaced by AT Protocol privilege delegation post-MVP.
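
For illustration, a hypothetical reply record under the unified post model (field names mirror app.bsky.feed.post by analogy; URIs and CIDs are placeholders — the actual space.atbb.post schema is authoritative):

```json
{
  "$type": "space.atbb.post",
  "text": "A reply in an existing topic",
  "reply": {
    "root":   { "uri": "at://did:plc:alice/space.atbb.post/3k...", "cid": "bafy..." },
    "parent": { "uri": "at://did:plc:bob/space.atbb.post/3m...", "cid": "bafy..." }
  },
  "createdAt": "2025-01-01T00:00:00.000Z"
}
```

Omit the reply object entirely and the same record is a topic starter.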

TypeScript / Hono Gotchas#

  • @types/node is required as a devDependency in every package that uses process.env or other Node APIs. tsx doesn't need it at runtime, but tsc builds will fail without it.
  • Hono JSX children: Use PropsWithChildren<T> from hono/jsx for components that accept children. Unlike React, Hono's FC<T> does not include children implicitly.
  • HTMX attributes in JSX: The typed-htmx package provides types for hx-* attributes. See apps/web/src/global.d.ts for the augmentation.
  • Glob expansion in npm scripts: @atproto/lex-cli needs file paths, not globs. Use bash -c 'shopt -s globstar && ...' to expand **/*.json in npm scripts.
  • .env loading: Dev scripts use Node's --env-file=../../.env flag to load the root .env file. No dotenv dependency needed.
  • API endpoint parameter type guards: Never trust TypeScript types for user input. Change handler parameter types from string to unknown and add explicit typeof checks. TypeScript types are erased at runtime — a request missing the text field will pass type checking but crash with TypeError: text.trim is not a function.
    // ❌ BAD: Assumes text is always a string at runtime
    export function validatePostText(text: string): { valid: boolean; error?: string } {
      const trimmed = text.trim();  // Crashes if text is undefined!
      // ...
    }
    
    // ✅ GOOD: Type guard protects against runtime type mismatches
    export function validatePostText(text: unknown): { valid: boolean; error?: string } {
      if (typeof text !== "string") {
        return { valid: false, error: "Text is required and must be a string" };
      }
      const trimmed = text.trim();  // Safe - text is proven to be a string
      // ...
    }
    
  • Hono JSON parsing safety: await c.req.json() throws SyntaxError for malformed JSON. Always wrap in try-catch and return 400 for client errors:
    let body: any;
    try {
      body = await c.req.json();
    } catch {
      return c.json({ error: "Invalid JSON in request body" }, 400);
    }
    

Error Handling Standards#

Follow these patterns for robust, debuggable production code:

API Route Handlers#

Required for all database-backed endpoints:

  1. Validate input parameters before database queries (return 400 for invalid input)
  2. Wrap database queries in try-catch with structured logging
  3. Check resource existence explicitly (return 404 for missing resources)
  4. Return proper HTTP status codes (400/404/500, not always 500)

Example pattern:

export function createForumRoutes(ctx: AppContext) {
  return new Hono().get("/", async (c) => {
    try {
      const [forum] = await ctx.db
        .select()
        .from(forums)
        .where(eq(forums.rkey, "self"))
        .limit(1);

      if (!forum) {
        return c.json({ error: "Forum not found" }, 404);
      }

      return c.json({ /* success response */ });
    } catch (error) {
      console.error("Failed to query forum metadata", {
        operation: "GET /api/forum",
        error: error instanceof Error ? error.message : String(error),
      });
      return c.json(
        { error: "Failed to retrieve forum metadata. Please try again later." },
        500
      );
    }
  });
}

Catch Block Guidelines#

DO:

  • Catch specific error types when possible (instanceof RangeError, instanceof SyntaxError)
  • Re-throw unexpected errors (don't swallow programming bugs like TypeError)
  • Log with structured context: operation name, relevant IDs, error message
  • Return user-friendly messages (no stack traces in production)

DON'T:

  • Use bare catch blocks that hide all error types
  • Return the same generic "try again later" message for client errors (400) and server errors (500); classify them separately
  • Fabricate data in catch blocks (return null or fail explicitly instead)
  • Use empty catch blocks or catch without logging

Helper Functions#

Validation helpers should:

  • Return null for invalid input (not throw)
  • Re-throw unexpected errors
  • Use specific error type checking

Example:

export function parseBigIntParam(value: string): bigint | null {
  try {
    return BigInt(value);
  } catch (error) {
    if (error instanceof RangeError || error instanceof SyntaxError) {
      return null;  // Expected error for invalid input
    }
    throw error;  // Unexpected error - let it bubble up
  }
}

Serialization helpers should:

  • Avoid silent fallbacks (log warnings if fabricating data)
  • Prefer returning null over fake values ("0", new Date())
  • Document fallback behavior in JSDoc if unavoidable
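
A sketch of such a helper — serializeTimestamp is a hypothetical name, not the project's actual API — that logs and returns null rather than fabricating a value:

```typescript
/**
 * Serialize an unknown timestamp-ish value to an ISO 8601 string.
 * Returns null (and logs a warning) for unrecognized input instead of
 * fabricating a fallback like new Date().
 */
function serializeTimestamp(value: unknown): string | null {
  if (value instanceof Date && !Number.isNaN(value.getTime())) {
    return value.toISOString();
  }
  if (typeof value === "string" && !Number.isNaN(Date.parse(value))) {
    return new Date(value).toISOString();
  }
  console.warn("serializeTimestamp: unrecognized value, returning null", { value });
  return null; // prefer null over a fake "0" or new Date()
}
```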

Defensive Programming#

All list queries must have defensive limits:

.from(categories)
.orderBy(categories.sortOrder)
.limit(1000);  // Prevent memory exhaustion on unbounded queries

Filter deleted/soft-deleted records:

.where(and(
  eq(posts.rootPostId, topicId),
  eq(posts.deleted, false)  // Never show deleted content to users
))

Use ordering for consistent results:

.orderBy(asc(posts.createdAt))  // Chronological order for replies

Global Error Handler#

The Hono app must have a global error handler as a safety net:

app.onError((err, c) => {
  console.error("Unhandled error in route handler", {
    path: c.req.path,
    method: c.req.method,
    error: err.message,
    stack: err.stack,
  });
  return c.json(
    {
      error: "An internal error occurred. Please try again later.",
      ...(process.env.NODE_ENV !== "production" && {
        details: err.message,
      }),
    },
    500
  );
});

Testing Error Handling#

Test error classification, not just error catching. Users need actionable feedback: "retry later" (503) vs "report this bug" (500).

// ✅ Test network errors return 503 (retry later)
it("returns 503 when PDS connection fails", async () => {
  mockPutRecord.mockRejectedValueOnce(new Error("fetch failed"));
  const res = await app.request("/api/topics", {
    method: "POST",
    body: JSON.stringify({ text: "Test" })
  });
  expect(res.status).toBe(503);  // Not 500!
  const data = await res.json();
  expect(data.error).toContain("Unable to reach your PDS");
});

// ✅ Test server errors return 500 (bug report)
it("returns 500 for unexpected database errors", async () => {
  mockPutRecord.mockRejectedValueOnce(new Error("Database connection lost"));
  const res = await app.request("/api/topics", {
    method: "POST",
    body: JSON.stringify({ text: "Test" })
  });
  expect(res.status).toBe(500);  // Not 503!
  const data = await res.json();
  expect(data.error).not.toContain("PDS");  // Generic message for server errors
});

// ✅ Test input validation returns 400
it("returns 400 for malformed JSON", async () => {
  const res = await app.request("/api/topics", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: "{ invalid json }"
  });
  expect(res.status).toBe(400);
  const data = await res.json();
  expect(data.error).toContain("Invalid JSON");
});

Error classification patterns to test:

  • 400 (Bad Request): Invalid input, missing required fields, malformed JSON
  • 404 (Not Found): Resource doesn't exist (forum, post, user)
  • 503 (Service Unavailable): Network errors, PDS connection failures, timeouts — user should retry
  • 500 (Internal Server Error): Unexpected errors, database errors — needs bug investigation
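
These patterns could be centralized in a small classifier; a hedged sketch (the network-error message heuristics here are assumptions for illustration, not the project's actual logic):

```typescript
// Map a caught error to an HTTP status per the classification above.
function classifyError(err: unknown): 400 | 503 | 500 {
  // Malformed JSON from c.req.json() surfaces as SyntaxError → client error.
  if (err instanceof SyntaxError) return 400;
  const msg = err instanceof Error ? err.message : String(err);
  // Network-ish failures (PDS unreachable, timeouts) → retryable.
  if (/fetch failed|ECONNREFUSED|ETIMEDOUT/i.test(msg)) return 503;
  // Everything else is an unexpected server error → investigate.
  return 500;
}
```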

Documentation & Project Tracking#

Keep these synchronized when completing work:

  1. docs/atproto-forum-plan.md — Master project plan with phase checklist

    • Mark items complete [x] when implementation is done and tested
    • Add brief status notes with file references and Linear issue IDs
    • Update immediately after completing milestones
  2. Linear issues — Task tracker at https://linear.app/atbb

    • Update status: Backlog → In Progress → Done
    • Add comments documenting implementation details when marking Done
    • Keep status in sync with actual codebase state, not planning estimates
  3. Workflow: When finishing a task:

    # 1. Run tests to verify implementation is correct
    pnpm test
    
    # 2. If tests pass, commit your changes
    git add .
    git commit -m "feat: your changes"
    
    # 3. Update plan document: mark [x] and add completion note
    # 4. Update Linear: change status to Done, add implementation comment
    # 5. Push and request code review
    # 6. After review approval: include "docs:" prefix when committing plan updates
    

Why this matters: The plan document and Linear can drift from reality as code evolves. Regular synchronization prevents rediscovering completed work and ensures accurate project status.

Bruno API Collections#

CRITICAL: Keep Bruno collections synchronized with API changes.

The bruno/ directory contains Bruno collections that serve dual purpose:

  1. Interactive API testing during development
  2. Version-controlled API documentation that stays in sync with code

When to Update Bruno Collections#

When adding a new API endpoint:

  1. Create a new .bru file in the appropriate bruno/AppView API/ subdirectory
  2. Follow the naming pattern: use descriptive names like Create Topic.bru, Get Forum Metadata.bru
  3. Include all request details: method, URL with variables, headers, body (if POST/PUT)
  4. Add comprehensive documentation in the docs block explaining:
    • Required/optional parameters
    • Expected response format with example
    • All possible error codes (400, 401, 404, 500, 503)
    • Authentication requirements
    • Validation rules
  5. Add assertions to validate responses automatically

When modifying an existing endpoint:

  1. Update the corresponding .bru file in bruno/AppView API/
  2. Update parameter descriptions if inputs changed
  3. Update response documentation if output format changed
  4. Update error documentation if new error cases added
  5. Update assertions if validation logic changed

When removing an endpoint:

  1. Delete the corresponding .bru file
  2. Update bruno/README.md if it referenced the removed endpoint

When adding new environment variables:

  1. Update bruno/environments/local.bru with local development values
  2. Update bruno/environments/dev.bru with deployment values
  3. Document the variable in bruno/README.md under "Environment Variables Reference"

Bruno File Template#

When creating new .bru files, follow this template:

meta {
  name: Endpoint Name
  type: http
  seq: 1
}

get {
  url: {{appview_url}}/api/path
}

params:query {
  param1: {{variable}}
}

headers {
  Content-Type: application/json
}

body:json {
  {
    "field": "value"
  }
}

assert {
  res.status: eq 200
  res.body.field: isDefined
}

docs {
  Brief description of what this endpoint does.

  Path/query/body params:
  - param1: Description (type, required/optional)

  Returns:
  {
    "field": "value"
  }

  Error codes:
  - 400: Bad request (invalid input)
  - 401: Unauthorized (requires auth)
  - 404: Not found
  - 500: Server error

  Notes:
  - Any special considerations or validation rules
}

Workflow Integration#

When committing API changes, update Bruno collections in the SAME commit:

# Example: Adding a new endpoint
git add apps/appview/src/routes/my-route.ts
git add apps/appview/src/routes/__tests__/my-route.test.ts
git add bruno/AppView\ API/MyRoute/New\ Endpoint.bru
git commit -m "feat: add new endpoint for X

- Implements POST /api/my-endpoint
- Adds validation for Y
- Updates Bruno collection with request documentation"

Why commit together: Bruno collections are API documentation. Keeping them in the same commit ensures the documentation is never out of sync with the implementation.

Testing Bruno Collections#

Before committing:

  1. Open the collection in Bruno
  2. Test each modified request against your local dev server (pnpm dev)
  3. Verify assertions pass (green checkmarks)
  4. Verify documentation is accurate and complete
  5. Check that error scenarios are documented (not just happy path)

Common Mistakes#

DON'T:

  • Commit API changes without updating Bruno collections
  • Use hardcoded URLs instead of environment variables ({{appview_url}})
  • Skip documenting error cases (only document 200 responses)
  • Leave placeholder/example data that doesn't match actual API behavior
  • Forget to update assertions when response format changes

DO:

  • Update Bruno files in the same commit as route implementation
  • Use environment variables for all URLs and test data
  • Document all HTTP status codes the endpoint can return
  • Include example request/response bodies that match actual behavior
  • Test requests locally before committing

Git Conventions#

  • Do not include Co-Authored-By lines in commit messages.
  • prior-art/ contains git submodules (Rust AppView, original lexicons, delegation spec) — reference material only, not used at build time.
  • Worktrees with submodules need git submodule deinit --all -f followed by git worktree remove --force to clean up.