# Claude.md
The role of this file is to describe common mistakes and confusion points that agents might encounter as they work in this project. If you ever encounter something in the project that surprises you, please alert the developer working with you and indicate that this is the case in the Claude.md file to help prevent future agents from having the same issue.
This project is greenfield. It is okay to change the schema entirely or make what might be breaking changes. We will sort out any backfill or backwards compatibility requirements when they are actually needed.
## Development

### Setup

```sh
devenv shell          # enter Nix dev shell (Node.js, pnpm, turbo)
pnpm install          # install all workspace dependencies
cp .env.example .env  # configure environment variables
```
### Auto-Fixing Lint Issues

Before committing, auto-fix safe lint violations:

```sh
# Fix all packages
pnpm turbo lint:fix

# Fix a specific package
pnpm --filter @atbb/appview lint:fix
```
## Testing Standards

CRITICAL: Always run tests before committing code or requesting code review.

### Environment Variables in Tests

CRITICAL: Turbo blocks environment variables by default for cache safety. Tests that require env vars must declare them in `turbo.json`:
```json
{
  "tasks": {
    "test": {
      "dependsOn": ["^build"],
      "env": ["DATABASE_URL"]
    }
  }
}
```
Symptoms of a missing declaration:
- Tests pass when run directly (`pnpm --filter @atbb/appview test`)
- Tests fail when run via Turbo (`pnpm test`) with undefined env vars
- CI fails even though env vars are set in the workflow
- Database errors like `database "username" does not exist` (Postgres defaults to the system username when `DATABASE_URL` is unset)
Why this matters: Turbo's caching requires deterministic inputs. Environment variables that leak into tasks without declaration would make cache hits unpredictable. By explicitly declaring env vars in turbo.json, you tell Turbo to include them in the task's input hash and pass them through to the test process.
When adding new env vars to tests: Update turbo.json immediately, or tests will mysteriously fail when run via Turbo but pass when run directly.
### When to Run Tests

Before every commit:

```sh
pnpm test   # Verify all tests pass
git add .
git commit -m "feat: your changes"
```

Before requesting code review:

```sh
pnpm build  # Ensure a clean build
pnpm test   # Verify all tests pass
# Only then push and request review
```

After fixing review feedback:

```sh
# Make fixes
pnpm test   # Verify tests still pass
# Push updates
```
### Test Requirements
All new features must include tests:
- API endpoints: Test success cases, error cases, edge cases
- Business logic: Test all code paths and error conditions
- Error handling: Test that errors are caught and logged appropriately
- Security features: Test authentication, authorization, input validation
Test quality standards:
- Tests must be independent (no shared state between tests)
- Use descriptive test names that explain what is being tested
- Mock external dependencies (databases, APIs, network calls)
- Test error paths, not just happy paths
- Verify logging and error messages are correct
Red flags (do not commit):
- Skipped tests (`test.skip`, `it.skip`) without a Linear issue tracking why
- Tests that pass locally but fail in CI
- Tests that require manual setup or specific data
- Tests with hardcoded timing (`setTimeout`, `sleep`) instead of proper mocks
- Placeholder/stub tests that don't actually test anything
Placeholder tests are prohibited:

```ts
// ❌ FORBIDDEN: Stub tests provide false confidence
it("assigns role successfully when admin has authority", async () => {
  expect(true).toBe(true); // NOT A REAL TEST
});

// ✅ REQUIRED: Real tests with actual assertions
it("assigns role successfully when admin has authority", async () => {
  const admin = await createUser(ctx, "Admin");
  const member = await createUser(ctx, "Member");
  const moderatorRole = await createRole(ctx, "Moderator", [], 20);

  const res = await app.request(`/api/admin/members/${member.did}/role`, {
    method: "POST",
    headers: authHeaders(admin),
    body: JSON.stringify({ roleUri: moderatorRole.uri }),
  });

  expect(res.status).toBe(200);
  const data = await res.json();
  expect(data.roleAssigned).toBe("Moderator");

  // Verify database state changed
  const updatedMember = await getMembership(ctx, member.did);
  expect(updatedMember.roleUri).toBe(moderatorRole.uri);
});
```
Why this matters: Stub tests pass in CI, creating false confidence that code is tested. They hide bugs that would be caught by real tests with actual assertions.
If you're unsure how to test something: Leave a `// TODO: Add test for X` comment and create a Linear issue. Never commit a stub test that pretends to test something.
### Test Coverage Expectations
While we don't enforce strict coverage percentages, aim for:
- Critical paths: 100% coverage (authentication, authorization, data integrity)
- Error handling: All catch blocks should be tested
- API endpoints: All routes should have tests
- Business logic: All functions with branching logic should be tested
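The "all catch blocks should be tested" expectation can be sketched with a hypothetical helper (illustrative, not from the codebase); the point is that the error path gets its own assertion, not just the happy path:

```typescript
// Hypothetical validation helper. The catch block is reachable from
// client input and therefore must be covered by a test of its own.
export function safeJsonParse(text: string): unknown | null {
  try {
    return JSON.parse(text);
  } catch (error) {
    if (error instanceof SyntaxError) {
      return null; // Expected: malformed input from a client
    }
    throw error; // Unexpected: programming error, let it bubble up
  }
}

// Error-path test (vitest style), exercising the catch block directly:
// it("returns null for malformed JSON", () => {
//   expect(safeJsonParse("{ not json")).toBeNull();
// });
```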
Do not:
- Skip writing tests to "move faster" - untested code breaks in production
- Write tests after requesting review - tests inform implementation
- Rely on manual testing alone - automated tests catch regressions
### Before Requesting Code Review

CRITICAL: Run this checklist before requesting review to catch issues early:

```sh
# 1. Verify all tests pass
pnpm test

# 2. Check runtime dependencies are correctly placed
#    (Runtime imports must be in dependencies, not devDependencies)
grep -r "from 'drizzle-orm'" apps/*/src  # If found, verify it is in dependencies
grep -r "from 'postgres'" apps/*/src     # If found, verify it is in dependencies

# 3. Verify error test coverage is comprehensive
#    For API endpoints, ensure you have tests for:
#    - Input validation (missing fields, wrong types, malformed JSON)
#    - Error classification (network → 503, server → 500)
#    - Error message clarity (user-friendly, no stack traces)
```
Common mistake: Adding error tests AFTER review feedback instead of DURING implementation. Write error tests immediately after implementing the happy path — they often reveal bugs in error classification and input validation that are better caught before review.
## Lexicon Conventions

- Source of truth is YAML in `packages/lexicon/lexicons/`. Never edit generated JSON or TypeScript.
- Build pipeline: YAML → JSON (`scripts/build.ts`) → TypeScript (`@atproto/lex-cli gen-api`).
- Adding a new lexicon: Create a `.yaml` file under `lexicons/space/atbb/`, then run `pnpm --filter @atbb/lexicon build`.
- Record keys: Use `key: tid` for collections (multiple records per repo). Use `key: literal:self` for singletons.
- References: Use `com.atproto.repo.strongRef` wrapped in named defs (e.g., `forumRef`, `subjectRef`).
- Extensible fields: Use `knownValues` (not `enum`) for strings that may grow (permissions, reaction types, mod actions).
- Record ownership:
  - Forum DID owns: `forum.forum`, `forum.category`, `forum.role`, `modAction`
  - User DID owns: `post`, `membership`, `reaction`
## AT Protocol Conventions

- Unified post model: There is no separate "topic" type. A `space.atbb.post` without a `reply` ref is a topic starter; one with a `reply` ref is a reply.
- Reply chains: `replyRef` has both `root` (thread starter) and `parent` (direct parent) — same pattern as `app.bsky.feed.post`.
- MVP trust model: The AppView holds the Forum DID's signing keys directly and writes forum-level records on behalf of admins/mods after verifying their role. This will be replaced by AT Protocol privilege delegation post-MVP.
## TypeScript / Hono Gotchas

- `@types/node` is required as a devDependency in every package that uses `process.env` or other Node APIs. `tsx` doesn't need it at runtime, but `tsc` builds will fail without it.
- Hono JSX children: Use `PropsWithChildren<T>` from `hono/jsx` for components that accept children. Unlike React, Hono's `FC<T>` does not include `children` implicitly.
- HTMX attributes in JSX: The `typed-htmx` package provides types for `hx-*` attributes. See `apps/web/src/global.d.ts` for the augmentation.
- Glob expansion in npm scripts: `@atproto/lex-cli` needs file paths, not globs. Use `bash -c 'shopt -s globstar && ...'` to expand `**/*.json` in npm scripts.
- `.env` loading: Dev scripts use Node's `--env-file=../../.env` flag to load the root `.env` file. No `dotenv` dependency needed.
- API endpoint parameter type guards: Never trust TypeScript types for user input. Change handler parameter types from `string` to `unknown` and add explicit `typeof` checks. TypeScript types are erased at runtime — a request missing the `text` field will pass type checking but crash with `TypeError: text.trim is not a function`.

  ```ts
  // ❌ BAD: Assumes text is always a string at runtime
  export function validatePostText(text: string): { valid: boolean; error?: string } {
    const trimmed = text.trim(); // Crashes if text is undefined!
    // ...
  }

  // ✅ GOOD: Type guard protects against runtime type mismatches
  export function validatePostText(text: unknown): { valid: boolean; error?: string } {
    if (typeof text !== "string") {
      return { valid: false, error: "Text is required and must be a string" };
    }
    const trimmed = text.trim(); // Safe - text is proven to be a string
    // ...
  }
  ```

- Hono JSON parsing safety: `await c.req.json()` throws a `SyntaxError` for malformed JSON. Always wrap it in try-catch and return 400 for client errors:

  ```ts
  let body: any;
  try {
    body = await c.req.json();
  } catch {
    return c.json({ error: "Invalid JSON in request body" }, 400);
  }
  ```
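The glob-expansion workaround might look like this in a `package.json` script. The script name, output directory, and exact `lex-cli` invocation here are illustrative assumptions, not copied from the repo:

```json
{
  "scripts": {
    "gen": "bash -c 'shopt -s globstar && lex gen-api ./src/generated ./dist/lexicons/**/*.json'"
  }
}
```

Without the `bash -c 'shopt -s globstar && ...'` wrapper, npm runs the script under a shell where `**/*.json` is not expanded recursively, and the CLI receives the literal glob instead of file paths.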
## Middleware Patterns

### Middleware Composition

CRITICAL: Authentication must precede authorization checks.
When using multiple middleware functions, order matters:
```ts
// ✅ CORRECT: requireAuth runs first, sets c.get("user")
app.post(
  "/api/topics",
  requireAuth(ctx),                       // Step 1: Restore session, set user
  requirePermission(ctx, "createTopics"), // Step 2: Check permission
  async (c) => {
    const user = c.get("user")!; // Safe - guaranteed by middleware chain
    // ... handler logic
  }
);

// ❌ WRONG: requirePermission runs first, user is undefined
app.post(
  "/api/topics",
  requirePermission(ctx, "createTopics"), // user not set yet!
  requireAuth(ctx),
  async (c) => { /* ... */ }
);
```
Why this matters: requirePermission depends on c.get("user") being set by requireAuth. If authentication middleware doesn't run first, permission checks always fail with 401.
Testing middleware composition:

```ts
it("middleware chain executes in correct order", async () => {
  // Verify requireAuth sets user before requirePermission checks it
  const res = await app.request("/api/topics", {
    method: "POST",
    headers: { Cookie: "atbb_session=valid_token" },
  });

  // Should succeed if both middlewares run in order
  expect(res.status).not.toBe(401);
});
```
## Error Handling Standards

Follow these patterns for robust, debuggable production code:

### API Route Handlers

Required for all database-backed endpoints:
- Validate input parameters before database queries (return 400 for invalid input)
- Wrap database queries in try-catch with structured logging
- Check resource existence explicitly (return 404 for missing resources)
- Return proper HTTP status codes (400/404/500, not always 500)
Example pattern:

```ts
export function createForumRoutes(ctx: AppContext) {
  return new Hono().get("/", async (c) => {
    try {
      const [forum] = await ctx.db
        .select()
        .from(forums)
        .where(eq(forums.rkey, "self"))
        .limit(1);

      if (!forum) {
        return c.json({ error: "Forum not found" }, 404);
      }

      return c.json({ /* success response */ });
    } catch (error) {
      console.error("Failed to query forum metadata", {
        operation: "GET /api/forum",
        error: error instanceof Error ? error.message : String(error),
      });
      return c.json(
        { error: "Failed to retrieve forum metadata. Please try again later." },
        500
      );
    }
  });
}
```
### Catch Block Guidelines

DO:
- Catch specific error types when possible (`instanceof RangeError`, `instanceof SyntaxError`)
- Re-throw unexpected errors (don't swallow programming bugs like `TypeError`)
- Log with structured context: operation name, relevant IDs, error message
- Return user-friendly messages (no stack traces in production)
- Classify errors by user action (400, 503) vs server bugs (500)

DON'T:
- Use bare `catch` blocks that hide all error types
- Return the same generic "try again later" message for client errors (400) and server errors (500)
- Fabricate data in catch blocks (return null or fail explicitly)
- Use empty catch blocks or catch without logging
- Put two distinct operations in the same try block when they have different failure semantics — a failure in the second operation will report as a failure of the first
Try Block Granularity:

When a try block covers multiple distinct operations in sequence, errors from later steps get reported with the wrong context. Split into separate try blocks when the operations have meaningfully different failure messages:

```ts
// ❌ BAD: a DB re-query failure reports "Failed to create category" even though
// the PDS write already succeeded — misleading for operators debugging
try {
  const result = await createCategory(...); // PDS write succeeded
  categoryUri = result.uri;
  const [cat] = await db.select()...;       // DB re-query fails here
} catch (error) {
  consola.error("Failed to create category:", ...); // Inaccurate!
}

// ✅ GOOD: each operation has its own try block and specific error message
try {
  const result = await createCategory(...);
  categoryUri = result.uri;
} catch (error) {
  consola.error("Failed to create category:", ...);
}

try {
  const [cat] = await db.select()...;
} catch (error) {
  consola.error("Failed to look up category ID after creation:", ...);
}
```
Programming Error Re-Throwing Pattern:

```ts
// ✅ CORRECT: Re-throw programming errors, catch runtime errors
try {
  const result = await ctx.db.select()...;
  return processResult(result);
} catch (error) {
  // Re-throw programming errors (code bugs) - don't hide them
  if (error instanceof TypeError ||
      error instanceof ReferenceError ||
      error instanceof SyntaxError) {
    console.error("CRITICAL: Programming error detected", {
      error: error.message,
      stack: error.stack,
      operation: "checkPermission",
    });
    throw error; // Let the global error handler catch it
  }

  // Log and handle expected runtime errors (DB failures, network issues)
  console.error("Database query failed", {
    operation: "checkPermission",
    error: error instanceof Error ? error.message : String(error),
  });
  return null; // Fail safely for business logic
}
```
Why re-throw programming errors:
- `TypeError` = code bug (e.g., a `role.permisions.includes()` typo)
- `ReferenceError` = code bug (e.g., using an undefined variable)
- `SyntaxError` = code bug (e.g., malformed `JSON.parse` input in your code)
- These should crash during development, not be silently logged
- Catching them hides bugs and makes debugging impossible
Error Classification Helper:

Create helper functions to classify errors consistently:

```ts
// File: src/lib/errors.ts
export function isProgrammingError(error: unknown): boolean {
  return error instanceof TypeError ||
         error instanceof ReferenceError ||
         error instanceof SyntaxError;
}

export function isNetworkError(error: unknown): boolean {
  if (!(error instanceof Error)) return false;
  const msg = error.message.toLowerCase();
  return msg.includes("fetch failed") ||
         msg.includes("econnrefused") ||
         msg.includes("enotfound") ||
         msg.includes("timeout");
}

// Usage in route handlers:
} catch (error) {
  if (isProgrammingError(error)) {
    throw error; // Don't catch programming bugs
  }
  if (isNetworkError(error)) {
    return c.json({
      error: "Unable to reach external service. Please try again later."
    }, 503); // User should retry
  }
  return c.json({
    error: "An unexpected error occurred. Please contact support."
  }, 500); // Server bug, needs investigation
}
```
### Helper Functions

Validation helpers should:
- Return `null` for invalid input (not throw)
- Re-throw unexpected errors
- Use specific error type checking

Example:

```ts
export function parseBigIntParam(value: string): bigint | null {
  try {
    return BigInt(value);
  } catch (error) {
    if (error instanceof RangeError || error instanceof SyntaxError) {
      return null; // Expected error for invalid input
    }
    throw error; // Unexpected error - let it bubble up
  }
}
```

Serialization helpers should:
- Avoid silent fallbacks (log a warning if fabricating data)
- Prefer returning `null` over fake values (`"0"`, `new Date()`)
- Document fallback behavior in JSDoc if unavoidable
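A minimal sketch of that serialization guidance, with a hypothetical `serializeTimestamp` helper (the name and shape are illustrative, not from the codebase):

```typescript
/**
 * Serialize a timestamp for API responses.
 *
 * Fallback behavior: returns null when the value is missing or invalid,
 * rather than fabricating a fake date like `new Date()`, and logs a
 * warning so the bad data is visible to operators.
 */
export function serializeTimestamp(value: Date | null | undefined): string | null {
  if (!value || Number.isNaN(value.getTime())) {
    console.warn("serializeTimestamp: missing or invalid date, returning null");
    return null;
  }
  return value.toISOString();
}
```

A consumer that receives `null` can decide how to render the gap; a consumer that receives a fabricated `new Date()` cannot tell real data from garbage.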
### Defensive Programming

All list queries must have defensive limits:

```ts
.from(categories)
.orderBy(categories.sortOrder)
.limit(1000); // Prevent memory exhaustion on unbounded queries
```

Filter deleted/soft-deleted records:

```ts
.where(and(
  eq(posts.rootPostId, topicId),
  eq(posts.deleted, false) // Never show deleted content to users
))
```

Use explicit ordering for consistent results:

```ts
.orderBy(asc(posts.createdAt)) // Chronological order for replies
```
### Global Error Handler

The Hono app must have a global error handler as a safety net:

```ts
app.onError((err, c) => {
  console.error("Unhandled error in route handler", {
    path: c.req.path,
    method: c.req.method,
    error: err.message,
    stack: err.stack,
  });
  return c.json(
    {
      error: "An internal error occurred. Please try again later.",
      ...(process.env.NODE_ENV !== "production" && {
        details: err.message,
      }),
    },
    500
  );
});
```
### Testing Error Handling

Test error classification, not just error catching. Users need actionable feedback: "retry later" (503) vs "report this bug" (500).

```ts
// ✅ Test network errors return 503 (retry later)
it("returns 503 when PDS connection fails", async () => {
  mockPutRecord.mockRejectedValueOnce(new Error("fetch failed"));
  const res = await app.request("/api/topics", {
    method: "POST",
    body: JSON.stringify({ text: "Test" }),
  });
  expect(res.status).toBe(503); // Not 500!
  const data = await res.json();
  expect(data.error).toContain("Unable to reach your PDS");
});

// ✅ Test server errors return 500 (bug report)
it("returns 500 for unexpected database errors", async () => {
  mockPutRecord.mockRejectedValueOnce(new Error("Database connection lost"));
  const res = await app.request("/api/topics", {
    method: "POST",
    body: JSON.stringify({ text: "Test" }),
  });
  expect(res.status).toBe(500); // Not 503!
  const data = await res.json();
  expect(data.error).not.toContain("PDS"); // Generic message for server errors
});

// ✅ Test input validation returns 400
it("returns 400 for malformed JSON", async () => {
  const res = await app.request("/api/topics", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: "{ invalid json }",
  });
  expect(res.status).toBe(400);
  const data = await res.json();
  expect(data.error).toContain("Invalid JSON");
});
```
Error classification patterns to test:
- 400 (Bad Request): Invalid input, missing required fields, malformed JSON
- 404 (Not Found): Resource doesn't exist (forum, post, user)
- 503 (Service Unavailable): Network errors, PDS connection failures, timeouts — user should retry
- 500 (Internal Server Error): Unexpected errors, database errors — needs bug investigation
## Security-Critical Code Standards

When implementing authentication, authorization, or permission systems, follow these additional requirements:

### 1. Fail-Closed Security

Security checks must deny access by default when encountering errors.
```ts
// ✅ CORRECT: Fail closed - deny access on any error
export async function checkPermission(
  ctx: AppContext,
  did: string,
  permission: string
): Promise<boolean> {
  try {
    const [membership] = await ctx.db.select()...;
    if (!membership || !membership.roleUri) {
      return false; // No membership = deny access
    }

    const [role] = await ctx.db.select()...;
    if (!role) {
      return false; // Role deleted = deny access
    }

    return role.permissions.includes(permission) ||
           role.permissions.includes("*");
  } catch (error) {
    if (isProgrammingError(error)) throw error;
    console.error("Failed to check permissions - denying access", {
      did, permission, error: ...
    });
    return false; // Error = deny access (fail closed)
  }
}

// ❌ WRONG: Fail open - grants access on error
} catch (error) {
  console.error("Permission check failed");
  return true; // SECURITY BUG: grants access on DB error!
}
```
Test fail-closed behavior:

```ts
it("denies access when database query fails (fail closed)", async () => {
  vi.spyOn(ctx.db, "select").mockRejectedValueOnce(new Error("DB connection lost"));

  const result = await checkPermission(ctx, "did:plc:user", "createTopics");

  expect(result).toBe(false); // Prove fail-closed behavior
  expect(console.error).toHaveBeenCalledWith(
    expect.stringContaining("denying access"),
    expect.any(Object)
  );
});
```
### 2. Security Test Coverage Requirements
All security features require comprehensive tests covering:
- ✅ Happy path - Authorized user succeeds
- ✅ Unauthorized user - Returns 401 (not authenticated)
- ✅ Forbidden - Returns 403 (authenticated but lacks permission)
- ✅ Privilege escalation prevention - Cannot grant yourself higher privileges
- ✅ Peer protection - Cannot modify users with equal authority
- ✅ Fail-closed behavior - Database/network errors deny access
- ✅ Input validation - Malformed requests return 400, not 500
- ✅ Error classification - Network errors (503) vs server errors (500)
Example security test suite structure:

```ts
describe("POST /api/admin/members/:did/role (security-critical)", () => {
  describe("Authorization", () => {
    it("returns 401 when not authenticated", async () => { /* ... */ });
    it("returns 403 when user lacks manageRoles permission", async () => { /* ... */ });
  });

  describe("Privilege Escalation Prevention", () => {
    it("prevents admin from assigning owner role (higher authority)", async () => {
      // Admin (priority 10) tries to assign Owner (priority 0) → 403
    });
    it("prevents admin from assigning admin role (equal authority)", async () => {
      // Admin (priority 10) tries to assign Admin (priority 10) → 403
    });
    it("allows admin to assign moderator role (lower authority)", async () => {
      // Admin (priority 10) assigns Moderator (priority 20) → 200
    });
  });

  describe("Error Handling", () => {
    it("returns 503 when PDS connection fails (network error)", async () => { /* ... */ });
    it("returns 500 when database query fails (server error)", async () => { /* ... */ });
    it("returns 400 for malformed roleUri (input validation)", async () => { /* ... */ });
  });
});
```
### 3. Startup Failures for Missing Security Infrastructure

Security-critical infrastructure must fail fast on startup, not at the first request.
```ts
// ✅ CORRECT: Throw on startup if ForumAgent is unavailable
export async function seedDefaultRoles(ctx: AppContext) {
  const agent = ctx.forumAgent;
  if (!agent) {
    console.error("CRITICAL: ForumAgent not available - role system non-functional", {
      operation: "seedDefaultRoles",
      forumDid: ctx.config.forumDid,
    });
    throw new Error(
      "Cannot seed roles without ForumAgent - permission system would be broken"
    );
  }
  // ... seeding logic
}

// ❌ WRONG: Silent failure allows the server to start without roles
if (!agent) {
  console.warn("ForumAgent not available, skipping role seeding");
  return { created: 0, skipped: 0 }; // Server starts but is broken!
}
```
Why this matters: If the permission system is broken, every request will fail authorization. It's better to fail startup loudly than silently deploy a non-functional system.
### 4. Security Code Review Checklist
Before requesting review for authentication/authorization code, verify:
- All permission checks fail closed (deny access on error)
- Database errors in security checks are caught and logged
- Programming errors (TypeError) are re-thrown, not caught
- Privilege escalation is prevented (equal/higher authority blocked)
- Tests cover unauthorized (401), forbidden (403), and error cases
- Error messages don't leak internal details (priority values, permission names)
- Middleware composition is correct (auth before permission checks)
- Startup fails fast if security infrastructure is unavailable
## Documentation & Project Tracking

Keep these synchronized when completing work:

- `docs/atproto-forum-plan.md` — Master project plan with phase checklist
  - Mark items complete (`[x]`) when implementation is done and tested
  - Add brief status notes with file references and Linear issue IDs
  - Update immediately after completing milestones
- Linear issues — Task tracker at https://linear.app/atbb
  - Update status: Backlog → In Progress → Done
  - Add comments documenting implementation details when marking Done
  - Keep status in sync with the actual codebase state, not planning estimates

Workflow when finishing a task:

```sh
# 1. Run tests to verify the implementation is correct
pnpm test

# 2. If tests pass, commit your changes
git add .
git commit -m "feat: your changes"

# 3. Update the plan document: mark [x] and add a completion note
# 4. Update Linear: change status to Done, add an implementation comment
# 5. Push and request code review
# 6. After review approval: use the "docs:" prefix when committing plan updates
```
Why this matters: The plan document and Linear can drift from reality as code evolves. Regular synchronization prevents rediscovering completed work and ensures accurate project status.
## Bruno API Collections

CRITICAL: Keep Bruno collections synchronized with API changes.

The `bruno/` directory contains Bruno collections that serve a dual purpose:
- Interactive API testing during development
- Version-controlled API documentation that stays in sync with code
### When to Update Bruno Collections

When adding a new API endpoint:
- Create a new `.bru` file in the appropriate `bruno/AppView API/` subdirectory
- Follow the naming pattern: use descriptive names like `Create Topic.bru`, `Get Forum Metadata.bru`
- Include all request details: method, URL with variables, headers, body (if POST/PUT)
- Add comprehensive documentation in the `docs` block explaining:
  - Required/optional parameters
  - Expected response format with an example
  - All possible error codes (400, 401, 404, 500, 503)
  - Authentication requirements
  - Validation rules
- Add assertions to validate responses automatically

When modifying an existing endpoint:
- Update the corresponding `.bru` file in `bruno/AppView API/`
- Update parameter descriptions if inputs changed
- Update response documentation if the output format changed
- Update error documentation if new error cases were added
- Update assertions if validation logic changed

When removing an endpoint:
- Delete the corresponding `.bru` file
- Update `bruno/README.md` if it referenced the removed endpoint

When adding new environment variables:
- Update `bruno/environments/local.bru` with local development values
- Update `bruno/environments/dev.bru` with deployment values
- Document the variable in `bruno/README.md` under "Environment Variables Reference"
### Bruno File Template

When creating new `.bru` files, follow this template:

```
meta {
  name: Endpoint Name
  type: http
  seq: 1
}

get {
  url: {{appview_url}}/api/path
}

params:query {
  param1: {{variable}}
}

headers {
  Content-Type: application/json
}

body:json {
  {
    "field": "value"
  }
}

assert {
  res.status: eq 200
  res.body.field: isDefined
}

docs {
  Brief description of what this endpoint does.

  Path/query/body params:
  - param1: Description (type, required/optional)

  Returns:
  {
    "field": "value"
  }

  Error codes:
  - 400: Bad request (invalid input)
  - 401: Unauthorized (requires auth)
  - 404: Not found
  - 500: Server error

  Notes:
  - Any special considerations or validation rules
}
```
### Workflow Integration

When committing API changes, update Bruno collections in the SAME commit:

```sh
# Example: Adding a new endpoint
git add apps/appview/src/routes/my-route.ts
git add apps/appview/src/routes/__tests__/my-route.test.ts
git add bruno/AppView\ API/MyRoute/New\ Endpoint.bru
git commit -m "feat: add new endpoint for X

- Implements POST /api/my-endpoint
- Adds validation for Y
- Updates Bruno collection with request documentation"
```
Why commit together: Bruno collections are API documentation. Keeping them in the same commit ensures the documentation is never out of sync with the implementation.
### Testing Bruno Collections

Before committing:
- Open the collection in Bruno
- Test each modified request against your local dev server (`pnpm dev`)
- Verify assertions pass (green checkmarks)
- Verify documentation is accurate and complete
- Check that error scenarios are documented (not just the happy path)
### Common Mistakes

DON'T:
- Commit API changes without updating Bruno collections
- Use hardcoded URLs instead of environment variables (`{{appview_url}}`)
- Skip documenting error cases (only documenting 200 responses)
- Leave placeholder/example data that doesn't match actual API behavior
- Forget to update assertions when the response format changes

DO:
- Update Bruno files in the same commit as the route implementation
- Use environment variables for all URLs and test data
- Document all HTTP status codes the endpoint can return
- Include example request/response bodies that match actual behavior
- Test requests locally before committing
## Git Conventions

- Do not include `Co-Authored-By` lines in commit messages.
- `prior-art/` contains git submodules (Rust AppView, original lexicons, delegation spec) — reference material only, not used at build time.
- Worktrees with submodules need `git submodule deinit --all -f` then `git worktree remove --force` to clean up.