MIT License

Copyright (c) 2022-2023 Bluesky PBLLC

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Makefile
SHELL = /bin/bash
.SHELLFLAGS = -o pipefail -c

.PHONY: help
help: ## Print info about all commands
	@echo "Helper Commands:"
	@echo
	@grep -E '^[a-zA-Z0-9_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "    \033[01;32m%-20s\033[0m %s\n", $$1, $$2}'
	@echo
	@echo "NOTE: dependencies between commands are not automatic. E.g., you must run 'deps' and 'build' first, and again after any changes"

.PHONY: build
build: ## Compile all modules
	yarn build

.PHONY: test
test: ## Run all tests
	yarn test

.PHONY: run-dev-env
run-dev-env: ## Run a "development environment" shell
	cd packages/dev-env; yarn run start

.PHONY: run-pds
run-pds: ## Run PDS locally
	if [ ! -f "packages/pds/.dev.env" ]; then cp packages/pds/example.dev.env packages/pds/.dev.env; fi
	cd packages/pds; ENV=dev yarn run start | yarn exec pino-pretty

.PHONY: run-plc
run-plc: ## Run DID:PLC server locally
	if [ ! -f "packages/plc/.dev.env" ]; then cp packages/plc/example.dev.env packages/plc/.dev.env; fi
	cd packages/plc; ENV=dev yarn run start | yarn exec pino-pretty

.PHONY: lint
lint: ## Run style checks and verify syntax
	yarn verify

.PHONY: fmt
fmt: ## Run syntax re-formatting
	yarn prettier

.PHONY: deps
deps: ## Install dependencies using 'yarn install'
	yarn install --frozen-lockfile

.PHONY: nvm-setup
nvm-setup: ## Use NVM to install and activate node+yarn
	nvm install 18
	nvm use 18
	npm install --global yarn
README.md
# DID Placeholder (did:plc)

DID Placeholder is a cryptographic, strongly-consistent, and recoverable [DID](https://www.w3.org/TR/did-core/) method.

## Motivation

We introduced DID Placeholder because we weren't totally satisfied with any of the existing DID methods.
We wanted a strongly consistent, highly available, recoverable, and cryptographically secure method with cheap and fast propagation of updates.

We cheekily titled the method "Placeholder", because we _don't_ want it to stick around. We're actively hoping to replace it with something less centralized.
We expect a method that fits the bill to emerge within the next few years, likely one run by a permissioned DID consortium.

## How it works

This is not a fully-expressive DID format.
Though it adheres to the DID spec, it is domain-specific and only allows for representing specific data types in a specific manner.
It could possibly be extended to be more general in the future.

Each DID document is made up of just four pieces of data (for now):

- `signingKey`
- `recoveryKey`
- `handle`
- `atpPds` (Personal Data Server for the related AT Protocol repository)

DID documents are derived from a log of signed operations, ordered by the PLC server.

Five operation types can appear in a log: `create`, `rotate_signing_key`, `rotate_recovery_key`, `update_handle`, and `update_atp_pds`.

Each operation has the shape:

```ts
type Operation = {
  type: string // operation type
  prev: CID | null // pointer to the CID of the previous operation in the log
  sig: string // base64url-encoded signature of the operation
  ... // other operation-specific data
}
```

Each operation contains a reference to the immediately preceding operation in the log and is signed by either the `signingKey` or the `recoveryKey`.

The DID itself is derived from the sha256 hash of the first operation in the log.
It is then base32-encoded and truncated to 24 characters.

To illustrate:
`did:plc:${base32Encode(sha256(createOp)).slice(0,24)}`
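The derivation can be sketched as follows. This is a hypothetical illustration, not the reference implementation: real PLC hashes the DAG-CBOR serialization of the create operation, while this sketch accepts arbitrary bytes, and `base32Encode` here is a minimal lowercase RFC 4648 encoder without padding.

```typescript
import { createHash } from 'node:crypto'

// Minimal lowercase base32 (RFC 4648 alphabet, no padding) -- illustrative only
const BASE32_ALPHABET = 'abcdefghijklmnopqrstuvwxyz234567'

function base32Encode(bytes: Uint8Array): string {
  let bits = 0
  let value = 0
  let out = ''
  for (const byte of bytes) {
    value = (value << 8) | byte
    bits += 8
    while (bits >= 5) {
      out += BASE32_ALPHABET[(value >>> (bits - 5)) & 31]
      bits -= 5
    }
  }
  if (bits > 0) out += BASE32_ALPHABET[(value << (5 - bits)) & 31]
  return out
}

// Derive a did:plc identifier from the bytes of a serialized create operation
function didForCreateOp(createOpBytes: Uint8Array): string {
  const digest = createHash('sha256').update(createOpBytes).digest()
  return `did:plc:${base32Encode(digest).slice(0, 24)}`
}

// Example bytes stand in for a DAG-CBOR-encoded create op
const did = didForCreateOp(new TextEncoder().encode('example create op'))
console.log(did) // a did:plc: identifier with a 24-char base32 suffix
```

Because the identifier is a hash of the genesis operation, anyone can verify that a DID belongs to a given operation log without trusting the server.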
Operations are verified, ordered, and made available by the PLC server.

The PLC server is constrained in its capabilities.
The operation logs are fully self-certifying, with the exception of their ordering.

Therefore, the attacks available to the PLC server are limited to:

- Denial of service: rejecting valid operations, or refusing to serve some information about the DID
- Misordering: in the event of a fork in DID document history, the server could choose to serve the "wrong" fork

### Signing and Recovery Keys

Both the `signingKey` and the `recoveryKey` are permissioned to make changes to the DID document.
However, these keys are not equal.

As can be seen in the example document (below), only the `signingKey` is granted the ability to make assertions and invoke/delegate capabilities.

The recovery key, on the other hand, is capable of performing a "recovery operation".

### Account Recovery

The PLC server provides a 72-hour window during which the `recoveryKey` can "rewrite" history.

This is to be used in adversarial situations in which a user's `signingKey` leaks or is held by some custodian who turns out to be a bad actor.

In a situation such as this, the `recoveryKey` may be used to rotate both the `signingKey` and the `recoveryKey`.

If a user wishes to recover from this situation, they sign a new operation rotating the `signingKey` to a key that they hold and set the `prev` of that operation to point to the most recent pre-attack operation.
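To make the recovery flow concrete, here is a hypothetical sketch of constructing such an operation. The CID values, key string, and `buildRecoveryOp` helper are illustrative placeholders, not part of the PLC API; real operations are DAG-CBOR encoded and signed with the `recoveryKey`.

```typescript
// A minimal view of a logged operation for this sketch
type LoggedOp = { cid: string; type: string }

// Build a recovery op whose `prev` skips past the attacker's operations.
// If the PLC server accepts it within the 72-hour window, everything in
// the log after the referenced operation is nullified.
function buildRecoveryOp(
  log: LoggedOp[],
  lastTrustedIndex: number,
  newSigningKey: string,
) {
  return {
    type: 'rotate_signing_key',
    key: newSigningKey,
    prev: log[lastTrustedIndex].cid,
    // sig: produced by signing this op with the recoveryKey (omitted here)
  }
}

const log: LoggedOp[] = [
  { cid: 'cid-create', type: 'create' },
  { cid: 'cid-attack', type: 'rotate_signing_key' }, // attacker's op
]

// Recover by pointing prev at the last pre-attack operation (index 0)
const recovery = buildRecoveryOp(log, 0, 'did:key:zNewKeyHeldByUser')
```

The key point is that `prev` selects the fork point: the recovery operation does not delete history, it supersedes the attacker's branch of it.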
## Example

Consider the following operation log:

```ts
[
  {
    type: 'create',
    signingKey: 'did:key:zDnaejYFhgFiVF89LhJ4UipACLKuqo6PteZf8eKDVKeExXUPk',
    recoveryKey: 'did:key:zDnaeSezF2TgCD71b5DiiFyhHQwKAfsBVqTTHRMvP597Z5Ztn',
    handle: 'alice.example.com',
    service: 'https://example.com',
    prev: null,
    sig: 'vi6JAl5W4FfyViD5_BKL9p0rbI3MxTWuh0g_egTFAjtf7gwoSfSe1O3qMOEUPX6QH3H0Q9M4y7gOLGblWkEwfQ'
  },
  {
    type: 'update_handle',
    handle: 'ali.example2.com',
    prev: 'bafyreih2gihqzgq5qd6uqktyfpyxqxvpdnrpu2qunnkaxugbyquxumisuq',
    sig: 'KL98ORpGmAJTqDsC9mWAYbhoDIv_-eZ3Nv0YqiPkbgx0ra96gYa3fQhIpZVxXFyNbu_4Y3JhPCvyJb8yDMe9Sg'
  },
  {
    type: 'update_atp_pds',
    service: 'https://example2.com',
    prev: 'bafyreickw7v7mwncrganw645agsmwjciolknt4f6f5an5wt3nrjepqaoiu',
    sig: 'AS-APea3xxR5-sq2i5v9IOsgbM5G5qAnB92tExZ8Z4vEy_GQbV8jmfY7zTx76P88AVXInZsO6yWX4UO7_xAIfg'
  },
  {
    type: 'rotate_signing_key',
    key: 'did:key:zDnaeh9v2RmcMo13Du2d6pjUf5bZwtauYxj3n9dYjw4EZUAR7',
    prev: 'bafyreictfsrkdt5azni355vapqka5a7erqjsa3vv7iaf52yjlqqbzkwgga',
    sig: 'VvcCoYVDluLZghv3i6ARyk1r7m1M32BPryJlTma1HTOx2CdbmIOUkVUbFa2LWi571fe-2yjTWY0IEAKfRiPAZg'
  },
  {
    type: 'rotate_recovery_key',
    key: 'did:key:zDnaedvvAsDE6H3BDdBejpx9ve2Tz95cymyCAKF66JbyMh1Lt',
    prev: 'bafyreiazzldal6642usrcowrpztb5gjb73qla343ifnt5dfbxz4swmf5vi',
    sig: 'Um1GVZZT9JgB2SKEbwoF4_Sip05QjH7r_g-Hcx7lIY-OhIg88ZKcN_N4TgzljgBGwe6qZb0u_0Vaq0c-S2WSDg'
  }
]
```

The log produces the following document data:

```ts
{
  did: 'did:plc:7iza6de2dwap2sbkpav7c6c6',
  signingKey: 'did:key:zDnaeh9v2RmcMo13Du2d6pjUf5bZwtauYxj3n9dYjw4EZUAR7',
  recoveryKey: 'did:key:zDnaedvvAsDE6H3BDdBejpx9ve2Tz95cymyCAKF66JbyMh1Lt',
  handle: 'ali.example2.com',
  atpPds: 'https://example2.com'
}
```

And the following DID document:

```ts
{
  '@context': [
    'https://www.w3.org/ns/did/v1',
    'https://w3id.org/security/suites/ecdsa-2019/v1'
  ],
  id: 'did:plc:7iza6de2dwap2sbkpav7c6c6',
  alsoKnownAs: [ 'https://ali.example2.com' ],
  verificationMethod: [
    {
      id: 'did:plc:7iza6de2dwap2sbkpav7c6c6#signingKey',
      type: 'EcdsaSecp256r1VerificationKey2019',
      controller: 'did:plc:7iza6de2dwap2sbkpav7c6c6',
      publicKeyMultibase: 'zSSa7w8s5aApu6td45gWTAAFkqCnaWY6ZsJ8DpyzDdYmVy4fARKqbn5F1UYBUMeVvYTBsoSoLvZnPdjd3pVHbmAHP'
    },
    {
      id: 'did:plc:7iza6de2dwap2sbkpav7c6c6#recoveryKey',
      type: 'EcdsaSecp256r1VerificationKey2019',
      controller: 'did:plc:7iza6de2dwap2sbkpav7c6c6',
      publicKeyMultibase: 'zRV2EDDvop2r2aKWTcCtei3NvuNEnR5ucTVd9U4CSCnJEiha2QFyTjdxoFZ6629iHxhmTModThGQzX1495ZS6iD4V'
    }
  ],
  assertionMethod: [ 'did:plc:7iza6de2dwap2sbkpav7c6c6#signingKey' ],
  capabilityInvocation: [ 'did:plc:7iza6de2dwap2sbkpav7c6c6#signingKey' ],
  capabilityDelegation: [ 'did:plc:7iza6de2dwap2sbkpav7c6c6#signingKey' ],
  service: [
    {
      id: 'did:plc:7iza6de2dwap2sbkpav7c6c6#atpPds',
      type: 'AtpPersonalDataServer',
      serviceEndpoint: 'https://example2.com'
    }
  ]
}
```
// Jest doesn't like ES modules, so we need to transpile them.
// For each one, add it to this list, add it to
// "workspaces.nohoist" in the root package.json, and
// make sure that a babel.config.js is in the package root.
const esModules = ['get-port', 'node-fetch'].join('|')

// jestconfig.base.js
module.exports = {
  roots: ['<rootDir>/src', '<rootDir>/tests'],
  transform: {
    '^.+\\.ts$': 'ts-jest',
    '^.+\\.js?$': 'babel-jest',
  },
  transformIgnorePatterns: [`<rootDir>/node_modules/(?!${esModules})`],
  testRegex: '(/tests/.*.(test|spec)).(jsx?|tsx?)$',
  moduleFileExtensions: ['ts', 'tsx', 'js', 'jsx', 'json', 'node'],
  setupFiles: ['<rootDir>/../../test-setup.ts'],
  verbose: true,
  testTimeout: 30000,
}
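The comment above says each transpiled package needs a `babel.config.js` in its root. A minimal sketch of such a file, assuming `@babel/preset-env` is the preset in use (the actual config in the repo may differ):

```javascript
// babel.config.js -- hypothetical minimal config so babel-jest can
// transpile the ESM dependencies listed in `esModules` for the current Node
module.exports = {
  presets: [['@babel/preset-env', { targets: { node: 'current' } }]],
}
```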
import { Kysely, Migrator, PostgresDialect, SqliteDialect } from 'kysely'
import SqliteDB from 'better-sqlite3'
import { Pool as PgPool, types as pgTypes } from 'pg'
import { CID } from 'multiformats/cid'
import { cidForCbor } from '@atproto/common'
import * as plc from '@did-plc/lib'
import { ServerError } from './error'
import * as migrations from './migrations'

export class Database {
  migrator: Migrator
  constructor(
    public db: Kysely<DatabaseSchema>,
    public dialect: Dialect,
    public schema?: string,
  ) {
    this.migrator = new Migrator({
      db,
      migrationTableSchema: schema,
      provider: {
        async getMigrations() {
          return migrations
        },
      },
    })
  }

  static sqlite(location: string): Database {
    const db = new Kysely<DatabaseSchema>({
      dialect: new SqliteDialect({
        database: new SqliteDB(location),
      }),
    })
    return new Database(db, 'sqlite')
  }

  static postgres(opts: { url: string; schema?: string }): Database {
    const { url, schema } = opts
    const pool = new PgPool({ connectionString: url })

    // Parse select count(*) and other pg bigints as js integers
    pgTypes.setTypeParser(pgTypes.builtins.INT8, (n) => parseInt(n, 10))

    // Set up schema usage, primarily for test parallelism (each test suite runs in its own pg schema)
    if (schema !== undefined) {
      if (!/^[a-z_]+$/i.test(schema)) {
        throw new Error(
          `Postgres schema must only contain [A-Za-z_]: ${schema}`,
        )
      }
      pool.on('connect', (client) =>
        // Shared objects such as extensions will go in the public schema
        client.query(`SET search_path TO "${schema}",public`),
      )
    }

    const db = new Kysely<DatabaseSchema>({
      dialect: new PostgresDialect({ pool }),
    })

    return new Database(db, 'pg', schema)
  }

  static memory(): Database {
    return Database.sqlite(':memory:')
  }

  async close(): Promise<void> {
    await this.db.destroy()
  }

  async migrateToLatestOrThrow() {
    if (this.schema !== undefined) {
      await this.db.schema.createSchema(this.schema).ifNotExists().execute()
    }
    const { error, results } = await this.migrator.migrateToLatest()
    if (error) {
      throw error
    }
    if (!results) {
      throw new Error('An unknown failure occurred while migrating')
    }
    return results
  }

  async validateAndAddOp(did: string, proposed: plc.Operation): Promise<void> {
    const ops = await this._opsForDid(did)
    // throws if invalid
    const { nullified, prev } = await plc.document.assureValidNextOp(
      did,
      ops,
      proposed,
    )
    const cid = await cidForCbor(proposed)

    await this.db
      .transaction()
      .setIsolationLevel('serializable')
      .execute(async (tx) => {
        await tx
          .insertInto('operations')
          .values({
            did,
            operation: JSON.stringify(proposed),
            cid: cid.toString(),
            nullified: 0,
            createdAt: new Date().toISOString(),
          })
          .execute()

        if (nullified.length > 0) {
          const nullifiedStrs = nullified.map((cid) => cid.toString())
          await tx
            .updateTable('operations')
            .set({ nullified: 1 })
            .where('did', '=', did)
            .where('cid', 'in', nullifiedStrs)
            .execute()
        }

        // Verify that the second-to-last op matches the proposed prev;
        // otherwise roll back to prevent forks in history
        const mostRecent = await tx
          .selectFrom('operations')
          .select('cid')
          .where('did', '=', did)
          .where('nullified', '=', 0)
          .orderBy('createdAt', 'desc')
          .limit(2)
          .execute()
        const isMatch =
          (prev === null && !mostRecent[1]) ||
          (prev !== null &&
            mostRecent[1] !== undefined &&
            prev.equals(CID.parse(mostRecent[1].cid)))
        if (!isMatch) {
          throw new ServerError(
            409,
            `Proposed prev does not match the most recent operation: ${mostRecent
              .map((row) => row.cid)
              .join(', ')}`,
          )
        }
      })
  }

  async mostRecentCid(did: string, notIncluded: CID[]): Promise<CID | null> {
    const notIncludedStr = notIncluded.map((cid) => cid.toString())

    const found = await this.db
      .selectFrom('operations')
      .select('cid')
      .where('did', '=', did)
      .where('nullified', '=', 0)
      .where('cid', 'not in', notIncludedStr)
      .orderBy('createdAt', 'desc')
      .executeTakeFirst()
    return found ? CID.parse(found.cid) : null
  }

  async opsForDid(did: string): Promise<plc.Operation[]> {
    const ops = await this._opsForDid(did)
    return ops.map((op) => op.operation)
  }

  async _opsForDid(did: string): Promise<plc.IndexedOperation[]> {
    const res = await this.db
      .selectFrom('operations')
      .selectAll()
      .where('did', '=', did)
      .where('nullified', '=', 0)
      .orderBy('createdAt', 'asc')
      .execute()

    return res.map((row) => ({
      did: row.did,
      operation: JSON.parse(row.operation),
      cid: CID.parse(row.cid),
      nullified: row.nullified === 1,
      createdAt: new Date(row.createdAt),
    }))
  }
}

export default Database

export type Dialect = 'pg' | 'sqlite'

interface OperationsTable {
  did: string
  operation: string
  cid: string
  nullified: 0 | 1
  createdAt: string
}

interface DatabaseSchema {
  operations: OperationsTable
}
packages/server/src/env.ts
// NOTE: this file should be imported first, particularly before `@atproto/common`
// (for logging), to ensure that environment variables are respected in library code
import dotenv from 'dotenv'

const env = process.env.ENV
if (env) {
  dotenv.config({ path: `./.${env}.env` })
} else {
  dotenv.config()
}
// NOTE: this file can be edited by hand, but it is also appended to by the migrations:create command.
// It's important that every migration is exported from here with the proper name. We'd simplify
// this with kysely's FileMigrationProvider, but it doesn't play nicely with the build process.

export * as _20221020T204908820Z from './20221020T204908820Z-operations-init'
# pg

Helpers for working with postgres

## Usage

### `with-test-db.sh`

This script allows you to run any command with a fresh, ephemeral, single-use postgres database available. When the script starts, a Dockerized postgres container starts up, and when the script completes that container is removed.

The environment variable `DB_POSTGRES_URL` will be set with a connection string that can be used to connect to the database. The [`PG*` environment variables](https://www.postgresql.org/docs/current/libpq-envars.html) that are recognized by libpq (i.e. used by the `psql` client) are also set.

**Example**

```
$ ./with-test-db.sh psql -c 'select 1;'
[+] Running 1/1
 ⠿ Container pg-db_test-1  Healthy    1.8s

 ?column?
----------
        1
(1 row)


[+] Running 1/1
 ⠿ Container pg-db_test-1  Stopped    0.1s
Going to remove pg-db_test-1
[+] Running 1/0
 ⠿ Container pg-db_test-1  Removed
```

### `docker-compose.yaml`

The Docker compose file can be used to run containerized versions of postgres, either for single use (as by `with-test-db.sh`) or for longer-term use. These are set up as separate services named `test_db` and `db` respectively. In both cases the database is available on the host machine's `localhost` and the credentials are:

- Username: pg
- Password: password

However, each service uses a different port, documented below, to avoid conflicts.

#### `test_db` service for single use

The single-use `test_db` service does not have any persistent storage. When the container is removed, data in the database disappears with it.

This service runs on port `5433`.

```
$ docker compose up test_db    # start container
$ docker compose stop test_db  # stop container
$ docker compose rm test_db    # remove container
```

#### `db` service for persistent use

The `db` service has persistent storage on the host machine, managed by Docker under a volume named `pg_atp_db`. When the container is removed, data in the database remains on the host machine. To start completely fresh, remove the volume as well.

This service runs on port `5432`.

```
$ docker compose up db -d    # start container
$ docker compose stop db     # stop container
$ docker compose rm db       # remove container
$ docker volume rm pg_atp_db # remove volume
```