Our Personal Data Server from scratch! tranquil.farm
oauth atproto pds rust postgresql objectstorage fun

feat: filesystem blob storage & oauth fix #3

Merged · opened by lewis.moe · targeting main from filesystem-blob-storage
  • Filesystem blob storage is now the default
  • Also fixes a bug in the delegation passkey OAuth flow
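The description is terse, so here is what a "filesystem blob storage" backend typically amounts to for a PDS: blobs are content-addressed by CID and written under a sharded directory tree. A minimal std-only sketch — the struct name `FsBlobStore`, the two-character sharding, and the temp-file-then-rename write are illustrative assumptions, not code from this PR:

```rust
use std::fs;
use std::io;
use std::path::PathBuf;

/// Minimal filesystem blob store: blobs are keyed by CID and sharded into
/// subdirectories by the first two characters of the CID to keep directory
/// fan-out bounded. (Illustrative; not the PR's actual implementation.)
pub struct FsBlobStore {
    root: PathBuf,
}

impl FsBlobStore {
    pub fn new(root: impl Into<PathBuf>) -> Self {
        Self { root: root.into() }
    }

    fn path_for(&self, cid: &str) -> PathBuf {
        let n = cid.len().min(2);
        self.root.join(&cid[..n]).join(cid)
    }

    pub fn put(&self, cid: &str, data: &[u8]) -> io::Result<()> {
        let path = self.path_for(cid);
        fs::create_dir_all(path.parent().unwrap())?;
        // Write to a temp file first, then rename into place, so concurrent
        // readers never observe a partially written blob.
        let tmp = path.with_extension("tmp");
        fs::write(&tmp, data)?;
        fs::rename(tmp, path)
    }

    pub fn get(&self, cid: &str) -> io::Result<Vec<u8>> {
        fs::read(self.path_for(cid))
    }
}
```

Because blobs are immutable and content-addressed, a rename-based write is enough to make the store crash-safe without any locking.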
Participants 2
AT URI
at://did:plc:3fwecdnvtcscjnrx2p4n7alz/sh.tangled.repo.pull/3mcuqaftczs22
+2730 -380
Diff #0
+22 -9
.env.example
```diff
 # DATABASE_MIN_CONNECTIONS=10
 # DATABASE_ACQUIRE_TIMEOUT_SECS=30
 # =============================================================================
-# Blob Storage (S3-compatible)
+# Blob Storage
 # =============================================================================
-S3_ENDPOINT=http://localhost:9000
-AWS_REGION=us-east-1
-S3_BUCKET=pds-blobs
-AWS_ACCESS_KEY_ID=minioadmin
-AWS_SECRET_ACCESS_KEY=minioadmin
+# Backend: "filesystem" (default) or "s3"
+# BLOB_STORAGE_BACKEND=filesystem
+# For filesystem backend:
+BLOB_STORAGE_PATH=/var/lib/tranquil/blobs
+# For S3 backend:
+# S3_ENDPOINT=http://localhost:9000
+# AWS_REGION=us-east-1
+# S3_BUCKET=pds-blobs
+# AWS_ACCESS_KEY_ID=minioadmin
+# AWS_SECRET_ACCESS_KEY=minioadmin
 # =============================================================================
-# Backups (S3-compatible)
+# Backups
 # =============================================================================
-# Set to enable automatic repo backups to S3
-# BACKUP_ENABLED=true
+# Enable/disable automatic repo backups
+# BACKUP_ENABLED=true
+# Backend: "filesystem" (default) or "s3"
+# BACKUP_STORAGE_BACKEND=filesystem
+# For filesystem backend:
+BACKUP_STORAGE_PATH=/var/lib/tranquil/backups
+# For S3 backend:
 # BACKUP_S3_BUCKET=pds-backups
+# Backup schedule and retention
+# BACKUP_RETENTION_COUNT=7
+# BACKUP_INTERVAL_SECS=86400
 # =============================================================================
 # Valkey (for caching and distributed rate limiting)
 # =============================================================================
```
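The config comments describe the selection rule: `BLOB_STORAGE_BACKEND` chooses `"filesystem"` or `"s3"`, defaulting to filesystem when unset. A small sketch of that rule, assuming an unrecognized value also falls back to the default (the PR's actual parsing code is not visible in this hunk):

```rust
use std::env;

#[derive(Debug, PartialEq)]
pub enum StorageBackend {
    Filesystem,
    S3,
}

/// Map the raw env-var value to a backend, defaulting to the filesystem
/// backend when the variable is unset or unrecognized.
pub fn backend_from(value: Option<&str>) -> StorageBackend {
    match value {
        Some("s3") => StorageBackend::S3,
        _ => StorageBackend::Filesystem,
    }
}

/// Read the backend choice from BLOB_STORAGE_BACKEND.
pub fn blob_backend() -> StorageBackend {
    backend_from(env::var("BLOB_STORAGE_BACKEND").ok().as_deref())
}
```

Keeping the match on a plain `Option<&str>` makes the defaulting rule trivially testable without touching process environment.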
+77
.sqlx/query-06eb7c6e1983b6121526ba63612236391290c2e63d37d2bb1cd89ea822950a82.json
```json
{
  "db_name": "PostgreSQL",
  "query": "\n SELECT token, request_uri, provider as \"provider: SsoProviderType\",\n provider_user_id, provider_username, provider_email, created_at, expires_at\n FROM sso_pending_registration\n WHERE token = $1 AND expires_at > NOW()\n ",
  "describe": {
    "columns": [
      { "ordinal": 0, "name": "token", "type_info": "Text" },
      { "ordinal": 1, "name": "request_uri", "type_info": "Text" },
      {
        "ordinal": 2,
        "name": "provider: SsoProviderType",
        "type_info": {
          "Custom": {
            "name": "sso_provider_type",
            "kind": { "Enum": ["github", "discord", "google", "gitlab", "oidc"] }
          }
        }
      },
      { "ordinal": 3, "name": "provider_user_id", "type_info": "Text" },
      { "ordinal": 4, "name": "provider_username", "type_info": "Text" },
      { "ordinal": 5, "name": "provider_email", "type_info": "Text" },
      { "ordinal": 6, "name": "created_at", "type_info": "Timestamptz" },
      { "ordinal": 7, "name": "expires_at", "type_info": "Timestamptz" }
    ],
    "parameters": { "Left": ["Text"] },
    "nullable": [false, false, false, false, true, true, false, false]
  },
  "hash": "06eb7c6e1983b6121526ba63612236391290c2e63d37d2bb1cd89ea822950a82"
}
```
+77
.sqlx/query-5031b96c65078d6c54954ce6e57ff9cbba4c48dd8a7546882ab5647114ffab4a.json
```json
{
  "db_name": "PostgreSQL",
  "query": "\n DELETE FROM sso_pending_registration\n WHERE token = $1 AND expires_at > NOW()\n RETURNING token, request_uri, provider as \"provider: SsoProviderType\",\n provider_user_id, provider_username, provider_email, created_at, expires_at\n ",
  "describe": {
    "columns": [
      { "ordinal": 0, "name": "token", "type_info": "Text" },
      { "ordinal": 1, "name": "request_uri", "type_info": "Text" },
      {
        "ordinal": 2,
        "name": "provider: SsoProviderType",
        "type_info": {
          "Custom": {
            "name": "sso_provider_type",
            "kind": { "Enum": ["github", "discord", "google", "gitlab", "oidc"] }
          }
        }
      },
      { "ordinal": 3, "name": "provider_user_id", "type_info": "Text" },
      { "ordinal": 4, "name": "provider_username", "type_info": "Text" },
      { "ordinal": 5, "name": "provider_email", "type_info": "Text" },
      { "ordinal": 6, "name": "created_at", "type_info": "Timestamptz" },
      { "ordinal": 7, "name": "expires_at", "type_info": "Timestamptz" }
    ],
    "parameters": { "Left": ["Text"] },
    "nullable": [false, false, false, false, true, true, false, false]
  },
  "hash": "5031b96c65078d6c54954ce6e57ff9cbba4c48dd8a7546882ab5647114ffab4a"
}
```
+22
.sqlx/query-6258398accee69e0c5f455a3c0ecc273b3da6ef5bb4d8660adafe63d8e3cd2d4.json
```json
{
  "db_name": "PostgreSQL",
  "query": "SELECT email_verified FROM users WHERE email = $1 OR handle = $1",
  "describe": {
    "columns": [{ "ordinal": 0, "name": "email_verified", "type_info": "Bool" }],
    "parameters": { "Left": ["Text"] },
    "nullable": [false]
  },
  "hash": "6258398accee69e0c5f455a3c0ecc273b3da6ef5bb4d8660adafe63d8e3cd2d4"
}
```
+31
.sqlx/query-a4dc8fb22bd094d414c55b9da20b610f7b122b485ab0fd0d0646d68ae8e64fe6.json
```json
{
  "db_name": "PostgreSQL",
  "query": "\n INSERT INTO external_identities (did, provider, provider_user_id, provider_username, provider_email)\n VALUES ($1, $2, $3, $4, $5)\n ",
  "describe": {
    "columns": [],
    "parameters": {
      "Left": [
        "Text",
        {
          "Custom": {
            "name": "sso_provider_type",
            "kind": { "Enum": ["github", "discord", "google", "gitlab", "oidc"] }
          }
        },
        "Text",
        "Text",
        "Text"
      ]
    },
    "nullable": []
  },
  "hash": "a4dc8fb22bd094d414c55b9da20b610f7b122b485ab0fd0d0646d68ae8e64fe6"
}
```
+32
.sqlx/query-dec3a21a8e60cc8d2c5dad727750bc88f5535dedae244f7b6e4afa95769b8f1a.json
```json
{
  "db_name": "PostgreSQL",
  "query": "\n INSERT INTO sso_pending_registration (token, request_uri, provider, provider_user_id, provider_username, provider_email)\n VALUES ($1, $2, $3, $4, $5, $6)\n ",
  "describe": {
    "columns": [],
    "parameters": {
      "Left": [
        "Text",
        "Text",
        {
          "Custom": {
            "name": "sso_provider_type",
            "kind": { "Enum": ["github", "discord", "google", "gitlab", "oidc"] }
          }
        },
        "Text",
        "Text",
        "Text"
      ]
    },
    "nullable": []
  },
  "hash": "dec3a21a8e60cc8d2c5dad727750bc88f5535dedae244f7b6e4afa95769b8f1a"
}
```
+3
Cargo.lock
```diff
  "bytes",
  "futures",
  "sha2",
+ "tokio",
+ "tracing",
  "tranquil-infra",
+ "uuid",
 ]

 [[package]]
```
+2 -2
README.md
```diff
 ## What's different about Tranquil PDS

-It is a superset of the reference PDS, including: passkeys and 2FA (WebAuthn/FIDO2, TOTP, backup codes, trusted devices), SSO login and signup, did:web support (PDS-hosted subdomains or bring-your-own), multi-channel communication (email, discord, telegram, signal) for verification and alerts, granular OAuth scopes with a consent UI showing human-readable descriptions, app passwords with granular permissions (read-only, post-only, or custom scopes), account delegation (letting others manage an account with configurable permission levels), automatic backups to s3-compatible object storage (configurable retention and frequency, one-click restore), and a built-in web UI for account management, OAuth consent, repo browsing, and admin.
+It is a superset of the reference PDS, including: passkeys and 2FA (WebAuthn/FIDO2, TOTP, backup codes, trusted devices), SSO login and signup, did:web support (PDS-hosted subdomains or bring-your-own), multi-channel communication (email, discord, telegram, signal) for verification and alerts, granular OAuth scopes with a consent UI showing human-readable descriptions, app passwords with granular permissions (read-only, post-only, or custom scopes), account delegation (letting others manage an account with configurable permission levels), automatic backups (configurable retention and frequency, one-click restore), and a built-in web UI for account management, OAuth consent, repo browsing, and admin.

-The PDS itself is a single small binary with no node/npm runtime. It does require postgres, valkey, and s3-compatible storage, which makes setup heavier than the reference PDS's sqlite. The tradeoff is that these are battle-tested pieces of infra that we already know how to scale, back up, and monitor.
+The PDS itself is a single small binary with no node/npm runtime. It requires postgres and stores blobs on the local filesystem. Valkey is optional (enables distributed rate limiting for multi-node setups). The tradeoff vs the reference PDS's sqlite is that postgres is a battle-tested piece of infra that we already know how to scale, back up, and monitor.

 ## Quick Start
```
+25 -2
crates/tranquil-infra/src/lib.rs
```diff
 pub enum StorageError {
     #[error("IO error: {0}")]
     Io(#[from] std::io::Error),
-    #[error("S3 error: {0}")]
-    S3(String),
+    #[error("Storage error: {0}")]
+    Backend(String),
+    #[error("Not found: {0}")]
+    NotFound(String),
     #[error("Other: {0}")]
     Other(String),
 }
···
         stream: Pin<Box<dyn Stream<Item = Result<Bytes, std::io::Error>> + Send>>,
     ) -> Result<StreamUploadResult, StorageError>;
     async fn copy(&self, src_key: &str, dst_key: &str) -> Result<(), StorageError>;
 }
+
+#[async_trait]
+pub trait BackupStorage: Send + Sync {
+    async fn put_backup(&self, did: &str, rev: &str, data: &[u8]) -> Result<String, StorageError>;
+    async fn get_backup(&self, storage_key: &str) -> Result<Bytes, StorageError>;
+    async fn delete_backup(&self, storage_key: &str) -> Result<(), StorageError>;
+}
+
+pub fn backup_retention_count() -> u32 {
+    std::env::var("BACKUP_RETENTION_COUNT")
+        .ok()
+        .and_then(|v| v.parse().ok())
+        .unwrap_or(7)
+}
+
+pub fn backup_interval_secs() -> u64 {
+    std::env::var("BACKUP_INTERVAL_SECS")
+        .ok()
+        .and_then(|v| v.parse().ok())
+        .unwrap_or(86400)
+}

 #[derive(Debug, thiserror::Error)]
```
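The two new helpers, `backup_retention_count` and `backup_interval_secs`, share one pattern: read an env var, parse it, and fall back to a default on absence or parse failure. That pattern generalizes to a single function; a sketch (the generic `parse_or`/`env_or` names are mine, not the crate's):

```rust
use std::str::FromStr;

/// Core of the pattern, on a plain Option so it is testable without
/// mutating the process environment: parse if present, else default.
pub fn parse_or<T: FromStr>(value: Option<&str>, default: T) -> T {
    value.and_then(|v| v.parse().ok()).unwrap_or(default)
}

/// Env-var wrapper, equivalent in shape to backup_retention_count() and
/// backup_interval_secs() in the diff.
pub fn env_or<T: FromStr>(var: &str, default: T) -> T {
    parse_or(std::env::var(var).ok().as_deref(), default)
}
```

With this shape, `backup_retention_count()` is just `env_or("BACKUP_RETENTION_COUNT", 7u32)`.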
+1
crates/tranquil-pds/Cargo.toml
```diff
 [features]
 external-infra = []
+s3-storage = []

 [dev-dependencies]
 ciborium = { workspace = true }
```
+4 -4
crates/tranquil-pds/src/api/backup.rs
```diff
 use crate::auth::BearerAuth;
 use crate::scheduled::generate_full_backup;
 use crate::state::AppState;
-use crate::storage::BackupStorage;
+use crate::storage::{BackupStorage, backup_retention_count};
 use axum::{
     Json,
     extract::{Query, State},
···
         "Created manual backup"
     );

-    let retention = BackupStorage::retention_count();
+    let retention = backup_retention_count();
     if let Err(e) = cleanup_old_backups(
         state.backup_repo.as_ref(),
-        backup_storage,
+        backup_storage.as_ref(),
         user.id,
         retention,
     )
···
 async fn cleanup_old_backups(
     backup_repo: &dyn BackupRepository,
-    backup_storage: &BackupStorage,
+    backup_storage: &dyn BackupStorage,
     user_id: uuid::Uuid,
     retention_count: u32,
 ) -> Result<(), String> {
```
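`cleanup_old_backups` enforces the retention count after each manual backup. The selection logic it needs can be stated as a pure function over backup keys; a sketch under the assumption that the caller supplies keys sorted newest-first (the real function also consults the `BackupRepository` and deletes via the storage trait):

```rust
/// Given backup storage keys sorted newest-first, return the keys that
/// fall outside the retention window and should be deleted.
pub fn keys_to_delete(sorted_newest_first: &[String], retention_count: u32) -> Vec<String> {
    sorted_newest_first
        .iter()
        .skip(retention_count as usize) // keep the newest N, drop the rest
        .cloned()
        .collect()
}
```

Separating "which keys to delete" from "do the deletion" keeps the retention policy unit-testable without a database or storage backend.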
+3 -5
crates/tranquil-pds/src/api/server/reauth.rs
```diff
         .await
         .unwrap_or_default();

-    let app_password_valid = app_password_hashes
-        .iter()
-        .fold(false, |acc, h| {
-            acc | bcrypt::verify(&input.password, h).unwrap_or(false)
-        });
+    let app_password_valid = app_password_hashes.iter().fold(false, |acc, h| {
+        acc | bcrypt::verify(&input.password, h).unwrap_or(false)
+    });

     if !app_password_valid {
         warn!(did = %&auth.0.did, "Re-auth failed: invalid password");
```
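Worth noting why this check folds with bitwise OR rather than using `iter().any(...)`: `any` short-circuits on the first matching hash, while `fold(false, |acc, h| acc | verify(h))` evaluates every hash, so the amount of work done does not depend on which stored hash matched. (That reading is an inference; the diff itself only reformats the closure.) A sketch with a stand-in verifier in place of `bcrypt::verify`:

```rust
/// Check a candidate against every stored hash without short-circuiting.
/// `verify` stands in for bcrypt::verify(candidate, hash).
pub fn any_match(candidate: &str, hashes: &[&str], verify: impl Fn(&str, &str) -> bool) -> bool {
    hashes
        .iter()
        // `|` (not `||`) forces evaluation of every element, unlike any().
        .fold(false, |acc, h| acc | verify(candidate, h))
}
```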
+5 -1
crates/tranquil-pds/src/auth/mod.rs
```diff
     let chars: &[u8] = b"abcdefghijklmnopqrstuvwxyz234567";
     let mut rng = rand::thread_rng();
     let segments: Vec<String> = (0..4)
-        .map(|_| (0..4).map(|_| chars[rng.gen_range(0..chars.len())] as char).collect())
+        .map(|_| {
+            (0..4)
+                .map(|_| chars[rng.gen_range(0..chars.len())] as char)
+                .collect()
+        })
         .collect();
     segments.join("-")
```
+97 -16
crates/tranquil-pds/src/oauth/endpoints/authorize.rs
```diff
···
     if is_delegated {
         tracing::info!("Redirecting to delegation auth");
+        if let Err(e) = state
+            .oauth_repo
+            .set_request_did(&request_id, &user.did)
+            .await
+        {
+            tracing::error!(error = %e, "Failed to set delegated DID on authorization request");
+            return redirect_to_frontend_error(
+                "server_error",
+                "Failed to initialize delegation flow",
+            );
+        }
         return redirect_see_other(&format!(
             "/app/oauth/delegation?request_uri={}&delegated_did={}",
             url_encode(&request_uri),
···
 pub struct PasskeyStartInput {
     pub request_uri: String,
     pub identifier: String,
+    pub delegated_did: Option<String>,
 }

 #[derive(Debug, Serialize)]
···
             .into_response();
     }

-    if state
+    let delegation_from_param = match &form.delegated_did {
+        Some(delegated_did_str) => match delegated_did_str.parse::<tranquil_types::Did>() {
+            Ok(delegated_did) if delegated_did != user.did => {
+                match state
+                    .delegation_repo
+                    .get_delegation(&delegated_did, &user.did)
+                    .await
+                {
+                    Ok(Some(_)) => Some(delegated_did),
+                    Ok(None) => None,
+                    Err(e) => {
+                        tracing::warn!(
+                            error = %e,
+                            delegated_did = %delegated_did,
+                            controller_did = %user.did,
+                            "Failed to verify delegation relationship"
+                        );
+                        None
+                    }
+                }
+            }
+            _ => None,
+        },
+        None => None,
+    };
+
+    let is_delegation_flow = delegation_from_param.is_some()
+        || request_data.did.as_ref().map_or(false, |existing_did| {
+            existing_did
+                .parse::<tranquil_types::Did>()
+                .ok()
+                .map_or(false, |parsed| parsed != user.did)
+        });
+
+    if let Some(delegated_did) = delegation_from_param {
+        tracing::info!(
+            delegated_did = %delegated_did,
+            controller_did = %user.did,
+            "Passkey auth with delegated_did param - setting delegation flow"
+        );
+        if state
+            .oauth_repo
+            .set_authorization_did(&passkey_start_request_id, &delegated_did, None)
+            .await
+            .is_err()
+        {
+            return OAuthError::ServerError("An error occurred.".into()).into_response();
+        }
+        if state
+            .oauth_repo
+            .set_controller_did(&passkey_start_request_id, &user.did)
+            .await
+            .is_err()
+        {
+            return OAuthError::ServerError("An error occurred.".into()).into_response();
+        }
+    } else if is_delegation_flow {
+        tracing::info!(
+            delegated_did = ?request_data.did,
+            controller_did = %user.did,
+            "Passkey auth in delegation flow - preserving delegated DID"
+        );
+        if state
+            .oauth_repo
+            .set_controller_did(&passkey_start_request_id, &user.did)
+            .await
+            .is_err()
+        {
+            return OAuthError::ServerError("An error occurred.".into()).into_response();
+        }
+    } else if state
         .oauth_repo
         .set_authorization_did(&passkey_start_request_id, &user.did, None)
         .await
         .is_err()
     {
-        return (
-            StatusCode::INTERNAL_SERVER_ERROR,
-            Json(serde_json::json!({
-                "error": "server_error",
-                "error_description": "An error occurred."
-            })),
-        )
-            .into_response();
+        return OAuthError::ServerError("An error occurred.".into()).into_response();
     }

     let options = serde_json::to_value(&rcr).unwrap_or(serde_json::json!({}));
···
         }
     };

+    let controller_did: Option<tranquil_types::Did> = request_data
+        .controller_did
+        .as_ref()
+        .and_then(|s| s.parse().ok());
+    let passkey_owner_did = controller_did.as_ref().unwrap_or(&did);
+
     let auth_state_json = match state
         .user_repo
-        .load_webauthn_challenge(&did, "authentication")
+        .load_webauthn_challenge(passkey_owner_did, "authentication")
         .await
     {
         Ok(Some(s)) => s,
···
     if let Err(e) = state
         .user_repo
-        .delete_webauthn_challenge(&did, "authentication")
+        .delete_webauthn_challenge(passkey_owner_did, "authentication")
         .await
     {
         tracing::warn!(error = %e, "Failed to delete authentication state");
···
         }
     };

-    let password_valid = password_hashes
-        .iter()
-        .fold(false, |acc, hash| {
-            acc | bcrypt::verify(&form.app_password, hash).unwrap_or(false)
-        });
+    let password_valid = password_hashes.iter().fold(false, |acc, hash| {
+        acc | bcrypt::verify(&form.app_password, hash).unwrap_or(false)
+    });

     if !password_valid {
         return (
```
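The heart of the passkey fix is a decision: which DID does the authorization get issued for? If the request names a `delegated_did` distinct from the signed-in user and a delegation relationship actually exists, authorize the delegated DID and record the user as controller; otherwise authorize the user directly. Reduced to a pure function (names and the boolean lookup are simplifications of the PR's repository calls):

```rust
#[derive(Debug, PartialEq)]
pub struct AuthTarget<'a> {
    pub authorized_did: &'a str,
    pub controller_did: Option<&'a str>,
}

/// Resolve the DID to authorize. `delegation_exists(delegated, controller)`
/// stands in for the delegation_repo lookup in the real handler.
pub fn resolve_auth_target<'a>(
    user_did: &'a str,
    delegated_did: Option<&'a str>,
    delegation_exists: impl Fn(&str, &str) -> bool,
) -> AuthTarget<'a> {
    match delegated_did {
        // Delegation is honored only for a *different* DID with a real
        // delegation from it to the signed-in user.
        Some(d) if d != user_did && delegation_exists(d, user_did) => AuthTarget {
            authorized_did: d,
            controller_did: Some(user_did),
        },
        // Everything else falls through to a normal self-authorization.
        _ => AuthTarget {
            authorized_did: user_did,
            controller_did: None,
        },
    }
}
```

The `d != user_did` guard matters: without it, passing your own DID as `delegated_did` would needlessly mark a normal login as a delegation flow.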
+7 -1
crates/tranquil-pds/src/oauth/endpoints/delegation.rs
```diff
         .await
         .is_err()
     {
-        tracing::warn!("Failed to set delegated DID on authorization request");
+        return Json(DelegationAuthResponse {
+            success: false,
+            needs_totp: None,
+            redirect_uri: None,
+            error: Some("Failed to update authorization request".to_string()),
+        })
+        .into_response();
     }

     let grant = match state
```
+14 -18
crates/tranquil-pds/src/scheduled.rs
```diff
 use tranquil_types::{AtUri, CidLink, Did};

 use crate::repo::PostgresBlockStore;
-use crate::storage::{BackupStorage, BlobStorage};
+use crate::storage::{BackupStorage, BlobStorage, backup_interval_secs, backup_retention_count};
 use crate::sync::car::encode_car_header;

 async fn process_genesis_commit(
···
     repo_repo: Arc<dyn RepoRepository>,
     backup_repo: Arc<dyn BackupRepository>,
     block_store: PostgresBlockStore,
-    backup_storage: Arc<BackupStorage>,
+    backup_storage: Arc<dyn BackupStorage>,
     mut shutdown_rx: watch::Receiver<bool>,
 ) {
-    let backup_interval = Duration::from_secs(BackupStorage::interval_secs());
+    let backup_interval = Duration::from_secs(backup_interval_secs());

     info!(
         interval_secs = backup_interval.as_secs(),
-        retention_count = BackupStorage::retention_count(),
+        retention_count = backup_retention_count(),
         "Starting backup service"
     );
···
                 repo_repo.as_ref(),
                 backup_repo.as_ref(),
                 &block_store,
-                &backup_storage,
+                backup_storage.as_ref(),
             ).await {
                 error!("Error processing scheduled backups: {}", e);
             }
···
     repo_repo: &dyn RepoRepository,
     backup_repo: &dyn BackupRepository,
     block_store: &PostgresBlockStore,
-    backup_storage: &BackupStorage,
+    backup_storage: &dyn BackupStorage,
     user_id: uuid::Uuid,
     did: String,
     repo_root_cid: String,
···
     repo_repo: &dyn RepoRepository,
     backup_repo: &dyn BackupRepository,
     block_store: &PostgresBlockStore,
-    backup_storage: &BackupStorage,
+    backup_storage: &dyn BackupStorage,
 ) -> Result<(), String> {
-    let backup_interval_secs = BackupStorage::interval_secs() as i64;
-    let retention_count = BackupStorage::retention_count();
+    let interval_secs = backup_interval_secs() as i64;
+    let retention = backup_retention_count();

     let users_needing_backup = backup_repo
-        .get_users_needing_backup(backup_interval_secs, 50)
+        .get_users_needing_backup(interval_secs, 50)
         .await
         .map_err(|e| format!("DB error fetching users for backup: {:?}", e))?;
···
                 block_count = result.block_count,
                 "Created backup"
             );
-            if let Err(e) = cleanup_old_backups(
-                backup_repo,
-                backup_storage,
-                result.user_id,
-                retention_count,
-            )
-            .await
+            if let Err(e) =
+                cleanup_old_backups(backup_repo, backup_storage, result.user_id, retention)
+                    .await
             {
                 warn!(did = %result.did, error = %e, "Failed to cleanup old backups");
             }
···
 async fn cleanup_old_backups(
     backup_repo: &dyn BackupRepository,
-    backup_storage: &BackupStorage,
+    backup_storage: &dyn BackupStorage,
     user_id: uuid::Uuid,
     retention_count: u32,
 ) -> Result<(), String> {
```
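The scheduler asks `get_users_needing_backup(interval_secs, 50)` for work in batches of 50. The underlying predicate is simple enough to state directly; a sketch, assuming the real selection happens in a Postgres query and these field names are illustrative:

```rust
/// A repo is due for backup when it has never been backed up, or when its
/// last backup is at least `interval_secs` old.
pub fn needs_backup(last_backup_secs_ago: Option<u64>, interval_secs: u64) -> bool {
    match last_backup_secs_ago {
        None => true,                          // never backed up
        Some(age) => age >= interval_secs,     // stale backup
    }
}
```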
+5 -1
crates/tranquil-pds/src/sso/endpoints.rs
```diff
         scopes: None,
         created_by_controller_did: None,
     };
-    if let Err(e) = state.session_repo.create_app_password(&app_password_data).await {
+    if let Err(e) = state
+        .session_repo
+        .create_app_password(&app_password_data)
+        .await
+    {
         tracing::warn!("Failed to create initial app password: {:?}", e);
     }
```
+5 -5
crates/tranquil-pds/src/state.rs
```diff
 use crate::rate_limit::RateLimiters;
 use crate::repo::PostgresBlockStore;
 use crate::sso::{SsoConfig, SsoManager};
-use crate::storage::{BackupStorage, BlobStorage, S3BlobStorage};
+use crate::storage::{BackupStorage, BlobStorage, create_backup_storage, create_blob_storage};
 use crate::sync::firehose::SequencedEvent;
 use sqlx::PgPool;
 use std::error::Error;
···
     pub event_notifier: Arc<dyn RepoEventNotifier>,
     pub block_store: PostgresBlockStore,
     pub blob_store: Arc<dyn BlobStorage>,
-    pub backup_storage: Option<Arc<BackupStorage>>,
+    pub backup_storage: Option<Arc<dyn BackupStorage>>,
     pub firehose_tx: broadcast::Sender<SequencedEvent>,
     pub rate_limiters: Arc<RateLimiters>,
     pub circuit_breakers: Arc<CircuitBreakers>,
···
     let repos = Arc::new(PostgresRepositories::new(db.clone()));
     let block_store = PostgresBlockStore::new(db);
-    let blob_store = S3BlobStorage::new().await;
-    let backup_storage = BackupStorage::new().await.map(Arc::new);
+    let blob_store = create_blob_storage().await;
+    let backup_storage = create_backup_storage().await;

     let firehose_buffer_size: usize = std::env::var("FIREHOSE_BUFFER_SIZE")
         .ok()
···
         sso_repo: repos.sso.clone(),
         repos,
         block_store,
-        blob_store: Arc::new(blob_store),
+        blob_store,
         backup_storage,
         firehose_tx,
         rate_limiters,
```
+3 -1
crates/tranquil-pds/src/storage/mod.rs
```diff
 pub use tranquil_storage::{
-    BackupStorage, BlobStorage, S3BlobStorage, StorageError, StreamUploadResult,
+    BackupStorage, BlobStorage, FilesystemBackupStorage, FilesystemBlobStorage, S3BackupStorage,
+    S3BlobStorage, StorageError, StreamUploadResult, backup_interval_secs, backup_retention_count,
+    create_backup_storage, create_blob_storage,
 };
```
+95 -25
crates/tranquil-pds/tests/common/mod.rs
```diff
+#[cfg(feature = "s3-storage")]
 use aws_config::BehaviorVersion;
+#[cfg(feature = "s3-storage")]
 use aws_sdk_s3::Client as S3Client;
+#[cfg(feature = "s3-storage")]
 use aws_sdk_s3::config::Credentials;
 use chrono::Utc;
 use reqwest::{Client, StatusCode, header};
 use serde_json::{Value, json};
 use sqlx::postgres::PgPoolOptions;
 use std::collections::HashMap;
+use std::path::PathBuf;
 use std::sync::{Arc, OnceLock, RwLock};
 #[allow(unused_imports)]
 use std::time::Duration;
···
 static MOCK_APPVIEW: OnceLock<MockServer> = OnceLock::new();
 static MOCK_PLC: OnceLock<MockServer> = OnceLock::new();
 static TEST_DB_POOL: OnceLock<sqlx::PgPool> = OnceLock::new();
+static TEST_TEMP_DIR: OnceLock<PathBuf> = OnceLock::new();

-#[cfg(not(feature = "external-infra"))]
+#[cfg(all(not(feature = "external-infra"), feature = "s3-storage"))]
+use testcontainers::GenericImage;
+#[cfg(all(not(feature = "external-infra"), feature = "s3-storage"))]
 use testcontainers::core::ContainerPort;
 #[cfg(not(feature = "external-infra"))]
-use testcontainers::{ContainerAsync, GenericImage, ImageExt, runners::AsyncRunner};
+use testcontainers::{ContainerAsync, ImageExt, runners::AsyncRunner};
 #[cfg(not(feature = "external-infra"))]
 use testcontainers_modules::postgres::Postgres;
 #[cfg(not(feature = "external-infra"))]
 static DB_CONTAINER: OnceLock<ContainerAsync<Postgres>> = OnceLock::new();
-#[cfg(not(feature = "external-infra"))]
+#[cfg(all(not(feature = "external-infra"), feature = "s3-storage"))]
 static S3_CONTAINER: OnceLock<ContainerAsync<GenericImage>> = OnceLock::new();

 #[allow(dead_code)]
···
 fn has_external_infra() -> bool {
     std::env::var("TRANQUIL_PDS_TEST_INFRA_READY").is_ok()
-        || (std::env::var("DATABASE_URL").is_ok() && std::env::var("S3_ENDPOINT").is_ok())
+        || (std::env::var("DATABASE_URL").is_ok()
+            && (std::env::var("S3_ENDPOINT").is_ok() || std::env::var("BLOB_STORAGE_PATH").is_ok()))
 }
 #[cfg(test)]
 #[ctor::dtor]
 fn cleanup() {
+    if let Some(temp_dir) = TEST_TEMP_DIR.get() {
+        let _ = std::fs::remove_dir_all(temp_dir);
+    }
     if has_external_infra() {
         return;
     }
···
 async fn setup_with_external_infra() -> String {
     let database_url =
         std::env::var("DATABASE_URL").expect("DATABASE_URL must be set when using external infra");
-    let s3_endpoint =
-        std::env::var("S3_ENDPOINT").expect("S3_ENDPOINT must be set when using external infra");
     let plc_url = setup_mock_plc_directory().await;
     unsafe {
-        std::env::set_var(
-            "S3_BUCKET",
-            std::env::var("S3_BUCKET").unwrap_or_else(|_| "test-bucket".to_string()),
-        );
-        std::env::set_var(
-            "AWS_ACCESS_KEY_ID",
-            std::env::var("AWS_ACCESS_KEY_ID").unwrap_or_else(|_| "minioadmin".to_string()),
-        );
-        std::env::set_var(
-            "AWS_SECRET_ACCESS_KEY",
-            std::env::var("AWS_SECRET_ACCESS_KEY").unwrap_or_else(|_| "minioadmin".to_string()),
-        );
-        std::env::set_var(
-            "AWS_REGION",
-            std::env::var("AWS_REGION").unwrap_or_else(|_| "us-east-1".to_string()),
-        );
-        std::env::set_var("S3_ENDPOINT", &s3_endpoint);
+        if std::env::var("S3_ENDPOINT").is_ok() {
+            let s3_endpoint = std::env::var("S3_ENDPOINT").unwrap();
+            std::env::set_var("BLOB_STORAGE_BACKEND", "s3");
+            std::env::set_var("BACKUP_STORAGE_BACKEND", "s3");
+            std::env::set_var("BACKUP_S3_BUCKET", "test-backups");
+            std::env::set_var(
+                "S3_BUCKET",
+                std::env::var("S3_BUCKET").unwrap_or_else(|_| "test-bucket".to_string()),
+            );
+            std::env::set_var(
+                "AWS_ACCESS_KEY_ID",
+                std::env::var("AWS_ACCESS_KEY_ID").unwrap_or_else(|_| "minioadmin".to_string()),
+            );
+            std::env::set_var(
+                "AWS_SECRET_ACCESS_KEY",
+                std::env::var("AWS_SECRET_ACCESS_KEY").unwrap_or_else(|_| "minioadmin".to_string()),
+            );
+            std::env::set_var(
+                "AWS_REGION",
+                std::env::var("AWS_REGION").unwrap_or_else(|_| "us-east-1".to_string()),
+            );
+            std::env::set_var("S3_ENDPOINT", &s3_endpoint);
+        } else if std::env::var("BLOB_STORAGE_PATH").is_ok() {
+            std::env::set_var("BLOB_STORAGE_BACKEND", "filesystem");
+            std::env::set_var("BACKUP_STORAGE_BACKEND", "filesystem");
+        } else {
+            panic!("Either S3_ENDPOINT or BLOB_STORAGE_PATH must be set for external-infra");
+        }
         std::env::set_var("MAX_IMPORT_SIZE", "100000000");
         std::env::set_var("SKIP_IMPORT_VERIFICATION", "true");
         std::env::set_var("PLC_DIRECTORY_URL", &plc_url);
···
     spawn_app(database_url).await
 }

-#[cfg(not(feature = "external-infra"))]
 async fn setup_with_testcontainers() -> String {
     let s3_container = GenericImage::new("cgr.dev/chainguard/minio", "latest")
         .with_exposed_port(ContainerPort::Tcp(9000))
···
         std::env::set_var("S3_BUCKET", "test-bucket");
         std::env::set_var("AWS_ACCESS_KEY_ID", "minioadmin");
         std::env::set_var("AWS_SECRET_ACCESS_KEY", "minioadmin");
···
         .build();
     let s3_client = S3Client::from_conf(s3_config);
     let _ = s3_client.create_bucket().bucket("test-bucket").send().await;
     let mock_server = MockServer::start().await;
     setup_mock_appview(&mock_server).await;
     let mock_uri = mock_server.uri();
···
 #[cfg(feature = "external-infra")]
 async fn setup_with_testcontainers() -> String {
     panic!(
-        "Testcontainers disabled with external-infra feature. Set DATABASE_URL and S3_ENDPOINT."
···
     );
 }

+#[cfg(all(not(feature = "external-infra"), not(feature = "s3-storage")))]
+async fn setup_with_testcontainers() -> String {
+    let temp_dir = std::env::temp_dir().join(format!("tranquil-pds-test-{}", uuid::Uuid::new_v4()));
+    let blob_path = temp_dir.join("blobs");
+    let backup_path = temp_dir.join("backups");
···
```
std::fs::create_dir_all(&blob_path).expect("Failed to create blob temp directory"); 189 + std::fs::create_dir_all(&backup_path).expect("Failed to create backup temp directory"); 190 + TEST_TEMP_DIR.set(temp_dir).ok(); 191 + let plc_url = setup_mock_plc_directory().await; 192 + unsafe { 193 + std::env::set_var("BLOB_STORAGE_BACKEND", "filesystem"); 194 + std::env::set_var("BLOB_STORAGE_PATH", blob_path.to_str().unwrap()); 195 + std::env::set_var("BACKUP_STORAGE_BACKEND", "filesystem"); 196 + std::env::set_var("BACKUP_STORAGE_PATH", backup_path.to_str().unwrap()); 197 + std::env::set_var("MAX_IMPORT_SIZE", "100000000"); 198 + std::env::set_var("SKIP_IMPORT_VERIFICATION", "true"); 199 + std::env::set_var("PLC_DIRECTORY_URL", &plc_url); 200 + } 201 + let mock_server = MockServer::start().await; 202 + setup_mock_appview(&mock_server).await; 203 + let mock_uri = mock_server.uri(); 204 + let mock_host = mock_uri.strip_prefix("http://").unwrap_or(&mock_uri); 205 + let mock_did = format!("did:web:{}", mock_host.replace(':', "%3A")); 206 + setup_mock_did_document(&mock_server, &mock_did, &mock_uri).await; 207 + MOCK_APPVIEW.set(mock_server).ok(); 208 + let container = Postgres::default() 209 + .with_tag("18-alpine") 210 + .with_label("tranquil_pds_test", "true") 211 + .start() 212 + .await 213 + .expect("Failed to start Postgres"); 214 + let connection_string = format!( 215 + "postgres://postgres:postgres@127.0.0.1:{}", 216 + container 217 + .get_host_port_ipv4(5432) 218 + .await 219 + .expect("Failed to get port") 220 + ); 221 + DB_CONTAINER.set(container).ok(); 222 + spawn_app(connection_string).await 223 + } 224 + 225 + #[cfg(all(not(feature = "external-infra"), feature = "s3-storage"))] 226 async fn setup_with_testcontainers() -> String { 227 let s3_container = GenericImage::new("cgr.dev/chainguard/minio", "latest") 228 .with_exposed_port(ContainerPort::Tcp(9000)) ··· 240 let s3_endpoint = format!("http://127.0.0.1:{}", s3_port); 241 let plc_url = 
setup_mock_plc_directory().await; 242 unsafe { 243 + std::env::set_var("BLOB_STORAGE_BACKEND", "s3"); 244 + std::env::set_var("BACKUP_STORAGE_BACKEND", "s3"); 245 + std::env::set_var("BACKUP_S3_BUCKET", "test-backups"); 246 std::env::set_var("S3_BUCKET", "test-bucket"); 247 std::env::set_var("AWS_ACCESS_KEY_ID", "minioadmin"); 248 std::env::set_var("AWS_SECRET_ACCESS_KEY", "minioadmin"); ··· 269 .build(); 270 let s3_client = S3Client::from_conf(s3_config); 271 let _ = s3_client.create_bucket().bucket("test-bucket").send().await; 272 + let _ = s3_client 273 + .create_bucket() 274 + .bucket("test-backups") 275 + .send() 276 + .await; 277 let mock_server = MockServer::start().await; 278 setup_mock_appview(&mock_server).await; 279 let mock_uri = mock_server.uri(); ··· 302 #[cfg(feature = "external-infra")] 303 async fn setup_with_testcontainers() -> String { 304 panic!( 305 + "Testcontainers disabled with external-infra feature. Set DATABASE_URL and BLOB_STORAGE_PATH (or S3_ENDPOINT)." 306 ); 307 } 308
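The test setup above branches on `BLOB_STORAGE_BACKEND` / `BACKUP_STORAGE_BACKEND`, with `"filesystem"` as the default and `"s3"` opt-in, mirroring the new `.env.example` contract. A minimal sketch of that selection logic, assuming a hypothetical `Backend` enum and `parse_backend` helper (these names are illustrative, not from the crate):

```rust
// Illustrative sketch of the backend-selection contract exercised by the
// tests: unset or empty means the filesystem default, "s3" opts in,
// anything else is rejected. `Backend`/`parse_backend` are made-up names.
#[derive(Debug, PartialEq)]
enum Backend {
    Filesystem,
    S3,
}

fn parse_backend(value: Option<&str>) -> Result<Backend, String> {
    match value.map(str::trim) {
        None | Some("") | Some("filesystem") => Ok(Backend::Filesystem),
        Some("s3") => Ok(Backend::S3),
        Some(other) => Err(format!("unknown blob storage backend: {other}")),
    }
}

fn main() {
    // Unset falls back to the filesystem default, matching the PR's intent.
    assert_eq!(parse_backend(None), Ok(Backend::Filesystem));
    assert_eq!(parse_backend(Some("s3")), Ok(Backend::S3));
    assert!(parse_backend(Some("gcs")).is_err());
}
```

Keeping the parse in one place means a typo in the env var fails loudly at startup instead of silently falling back to the default.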
+1 -4
crates/tranquil-pds/tests/oauth.rs
··· 1536 let access_jwt = account["accessJwt"].as_str().unwrap(); 1537 1538 let app_password_res = http_client 1539 - .post(format!( 1540 - "{}/xrpc/com.atproto.server.createAppPassword", 1541 - url 1542 - )) 1543 .header("Authorization", format!("Bearer {}", access_jwt)) 1544 .json(&json!({ "name": "oauth-test-app" })) 1545 .send()
··· 1536 let access_jwt = account["accessJwt"].as_str().unwrap(); 1537 1538 let app_password_res = http_client 1539 + .post(format!("{}/xrpc/com.atproto.server.createAppPassword", url)) 1540 .header("Authorization", format!("Bearer {}", access_jwt)) 1541 .json(&json!({ "name": "oauth-test-app" })) 1542 .send()
+138
crates/tranquil-pds/tests/oauth_security.rs
··· 1250 "Error should be InsufficientScope" 1251 ); 1252 }
··· 1250 "Error should be InsufficientScope" 1251 ); 1252 } 1253 + 1254 + #[tokio::test] 1255 + async fn test_delegation_oauth_token_sub_is_delegated_account() { 1256 + let url = base_url().await; 1257 + let http_client = client(); 1258 + let suffix = &uuid::Uuid::new_v4().simple().to_string()[..8]; 1259 + 1260 + let (controller_jwt, controller_did) = create_account_and_login(&http_client).await; 1261 + 1262 + let delegated_handle = format!("dlgsub{}", suffix); 1263 + let delegated_res = http_client 1264 + .post(format!("{}/xrpc/_delegation.createDelegatedAccount", url)) 1265 + .bearer_auth(&controller_jwt) 1266 + .json(&json!({ 1267 + "handle": delegated_handle, 1268 + "controllerScopes": "atproto" 1269 + })) 1270 + .send() 1271 + .await 1272 + .unwrap(); 1273 + assert_eq!( 1274 + delegated_res.status(), 1275 + StatusCode::OK, 1276 + "Should create delegated account" 1277 + ); 1278 + let delegated_account: Value = delegated_res.json().await.unwrap(); 1279 + let delegated_did = delegated_account["did"].as_str().unwrap(); 1280 + 1281 + assert_ne!( 1282 + delegated_did, controller_did, 1283 + "Delegated DID should be different from controller DID" 1284 + ); 1285 + 1286 + let redirect_uri = "https://example.com/deleg-sub-callback"; 1287 + let mock_client = setup_mock_client_metadata(redirect_uri).await; 1288 + let client_id = mock_client.uri(); 1289 + let (code_verifier, code_challenge) = generate_pkce(); 1290 + 1291 + let par_body: Value = http_client 1292 + .post(format!("{}/oauth/par", url)) 1293 + .form(&[ 1294 + ("response_type", "code"), 1295 + ("client_id", &client_id), 1296 + ("redirect_uri", redirect_uri), 1297 + ("code_challenge", &code_challenge), 1298 + ("code_challenge_method", "S256"), 1299 + ("scope", "atproto"), 1300 + ("login_hint", delegated_did), 1301 + ]) 1302 + .send() 1303 + .await 1304 + .unwrap() 1305 + .json() 1306 + .await 1307 + .unwrap(); 1308 + let request_uri = par_body["request_uri"].as_str().unwrap(); 1309 + 1310 + let auth_res = 
http_client 1311 + .post(format!("{}/oauth/delegation/auth", url)) 1312 + .header("Content-Type", "application/json") 1313 + .json(&json!({ 1314 + "request_uri": request_uri, 1315 + "delegated_did": delegated_did, 1316 + "controller_did": controller_did, 1317 + "password": "Testpass123!", 1318 + "remember_device": false 1319 + })) 1320 + .send() 1321 + .await 1322 + .unwrap(); 1323 + assert_eq!( 1324 + auth_res.status(), 1325 + StatusCode::OK, 1326 + "Delegation auth should succeed" 1327 + ); 1328 + let auth_body: Value = auth_res.json().await.unwrap(); 1329 + assert!( 1330 + auth_body["success"].as_bool().unwrap_or(false), 1331 + "Delegation auth should report success: {:?}", 1332 + auth_body 1333 + ); 1334 + 1335 + let consent_res = http_client 1336 + .post(format!("{}/oauth/authorize/consent", url)) 1337 + .header("Content-Type", "application/json") 1338 + .json(&json!({ 1339 + "request_uri": request_uri, 1340 + "approved_scopes": ["atproto"], 1341 + "remember": false 1342 + })) 1343 + .send() 1344 + .await 1345 + .unwrap(); 1346 + assert_eq!( 1347 + consent_res.status(), 1348 + StatusCode::OK, 1349 + "Consent should succeed" 1350 + ); 1351 + let consent_body: Value = consent_res.json().await.unwrap(); 1352 + let redirect_location = consent_body["redirect_uri"] 1353 + .as_str() 1354 + .expect("Expected redirect_uri"); 1355 + 1356 + let code = redirect_location 1357 + .split("code=") 1358 + .nth(1) 1359 + .unwrap() 1360 + .split('&') 1361 + .next() 1362 + .unwrap(); 1363 + 1364 + let token_res = http_client 1365 + .post(format!("{}/oauth/token", url)) 1366 + .form(&[ 1367 + ("grant_type", "authorization_code"), 1368 + ("code", code), 1369 + ("redirect_uri", redirect_uri), 1370 + ("code_verifier", &code_verifier), 1371 + ("client_id", &client_id), 1372 + ]) 1373 + .send() 1374 + .await 1375 + .unwrap(); 1376 + assert_eq!(token_res.status(), StatusCode::OK, "Token exchange should succeed"); 1377 + let tokens: Value = token_res.json().await.unwrap(); 1378 + 1379 + 
let sub = tokens["sub"].as_str().expect("Token response should have sub claim"); 1380 + 1381 + assert_eq!( 1382 + sub, delegated_did, 1383 + "Token sub claim should be the DELEGATED account's DID, not the controller's. Got {} but expected {}", 1384 + sub, delegated_did 1385 + ); 1386 + assert_ne!( 1387 + sub, controller_did, 1388 + "Token sub claim should NOT be the controller's DID" 1389 + ); 1390 + }
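The test above pulls the authorization `code` out of the consent redirect with chained `split`/`unwrap` calls, which panics on a malformed redirect. A small sketch of the same extraction as a fallible helper, assuming a hypothetical `extract_code` name (the test itself inlines this logic):

```rust
// Illustrative helper mirroring the code-extraction step in the test:
// take everything after the first "code=" up to the next "&".
// Returns None instead of panicking when the parameter is absent.
fn extract_code(redirect: &str) -> Option<&str> {
    let (_, rest) = redirect.split_once("code=")?;
    rest.split('&').next() // split always yields at least one item
}

fn main() {
    let uri = "https://example.com/cb?state=xyz&code=abc123&iss=pds";
    assert_eq!(extract_code(uri), Some("abc123"));
    assert_eq!(extract_code("https://example.com/cb?state=xyz"), None);
}
```

In a test, panicking on a missing `code` is acceptable; the `Option` shape just makes the failure mode explicit.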
+3
crates/tranquil-storage/Cargo.toml
··· 13 bytes = { workspace = true } 14 futures = { workspace = true } 15 sha2 = { workspace = true }
··· 13 bytes = { workspace = true } 14 futures = { workspace = true } 15 sha2 = { workspace = true } 16 + tokio = { workspace = true } 17 + tracing = { workspace = true } 18 + uuid = { workspace = true }
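The new `tokio` and `uuid` dependencies back the write-to-temp-then-rename pattern the filesystem backend uses in `lib.rs`. A synchronous, stdlib-only sketch of the same idea (the real code is async and names temp files with `uuid`; here a pid-based temp name stands in):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Sketch of the atomic-write pattern the filesystem backend builds on:
// write to a temp file, fsync, then rename into place. On the same
// filesystem, rename is atomic, so readers see either the old file or
// the complete new one, never a partial write.
fn atomic_write(dir: &Path, name: &str, data: &[u8]) -> std::io::Result<()> {
    let tmp = dir.join(format!(".tmp-{}", std::process::id()));
    let mut f = fs::File::create(&tmp)?;
    f.write_all(data)?;
    f.sync_all()?; // flush to disk before the rename publishes the file
    drop(f);
    fs::rename(&tmp, dir.join(name))
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir();
    atomic_write(&dir, "blob-demo.bin", b"hello")?;
    assert_eq!(fs::read(dir.join("blob-demo.bin"))?, b"hello");
    fs::remove_file(dir.join("blob-demo.bin"))?;
    Ok(())
}
```

The PR's `rename_with_fallback` additionally handles `EXDEV` (cross-device rename) by falling back to copy + fsync + delete, which this sketch omits.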
+503 -129
crates/tranquil-storage/src/lib.rs
··· 1 - pub use tranquil_infra::{BlobStorage, StorageError, StreamUploadResult}; 2 3 use async_trait::async_trait; 4 use aws_config::BehaviorVersion; ··· 10 use bytes::Bytes; 11 use futures::Stream; 12 use sha2::{Digest, Sha256}; 13 use std::pin::Pin; 14 15 const MIN_PART_SIZE: usize = 5 * 1024 * 1024; 16 17 pub struct S3BlobStorage { 18 client: Client, ··· 40 .load() 41 .await; 42 43 - if let Ok(endpoint) = std::env::var("S3_ENDPOINT") { 44 - let s3_config = aws_sdk_s3::config::Builder::from(&config) 45 - .endpoint_url(endpoint) 46 - .force_path_style(true) 47 - .build(); 48 - Client::from_conf(s3_config) 49 - } else { 50 - Client::new(&config) 51 - } 52 } 53 54 - pub struct BackupStorage { 55 client: Client, 56 bucket: String, 57 } 58 59 - impl BackupStorage { 60 pub async fn new() -> Option<Self> { 61 - let backup_enabled = std::env::var("BACKUP_ENABLED") 62 - .map(|v| v != "false" && v != "0") 63 - .unwrap_or(true); 64 - 65 - if !backup_enabled { 66 - return None; 67 - } 68 - 69 let bucket = std::env::var("BACKUP_S3_BUCKET").ok()?; 70 let client = create_s3_client().await; 71 Some(Self { client, bucket }) 72 } 73 - 74 - pub fn retention_count() -> u32 { 75 - std::env::var("BACKUP_RETENTION_COUNT") 76 - .ok() 77 - .and_then(|v| v.parse().ok()) 78 - .unwrap_or(7) 79 - } 80 81 - pub fn interval_secs() -> u64 { 82 - std::env::var("BACKUP_INTERVAL_SECS") 83 - .ok() 84 - .and_then(|v| v.parse().ok()) 85 - .unwrap_or(86400) 86 - } 87 - 88 - pub async fn put_backup( 89 - &self, 90 - did: &str, 91 - rev: &str, 92 - data: &[u8], 93 - ) -> Result<String, StorageError> { 94 let key = format!("{}/{}.car", did, rev); 95 self.client 96 .put_object() ··· 99 .body(ByteStream::from(Bytes::copy_from_slice(data))) 100 .send() 101 .await 102 - .map_err(|e| StorageError::S3(e.to_string()))?; 103 104 Ok(key) 105 } 106 107 - pub async fn get_backup(&self, storage_key: &str) -> Result<Bytes, StorageError> { 108 let resp = self 109 .client 110 .get_object() ··· 112 .key(storage_key) 113 
.send() 114 .await 115 - .map_err(|e| StorageError::S3(e.to_string()))?; 116 117 - let data = resp 118 - .body 119 .collect() 120 .await 121 - .map_err(|e| StorageError::S3(e.to_string()))? 122 - .into_bytes(); 123 - 124 - Ok(data) 125 } 126 127 - pub async fn delete_backup(&self, storage_key: &str) -> Result<(), StorageError> { 128 self.client 129 .delete_object() 130 .bucket(&self.bucket) 131 .key(storage_key) 132 .send() 133 .await 134 - .map_err(|e| StorageError::S3(e.to_string()))?; 135 136 Ok(()) 137 } ··· 151 .body(ByteStream::from(data)) 152 .send() 153 .await 154 - .map_err(|e| StorageError::S3(e.to_string()))?; 155 156 Ok(()) 157 } ··· 168 .key(key) 169 .send() 170 .await 171 - .map_err(|e| StorageError::S3(e.to_string()))?; 172 173 - let data = resp 174 - .body 175 .collect() 176 .await 177 - .map_err(|e| StorageError::S3(e.to_string()))? 178 - .into_bytes(); 179 - 180 - Ok(data) 181 } 182 183 async fn get_head(&self, key: &str, size: usize) -> Result<Bytes, StorageError> { ··· 190 .range(range) 191 .send() 192 .await 193 - .map_err(|e| StorageError::S3(e.to_string()))?; 194 195 - let data = resp 196 - .body 197 .collect() 198 .await 199 - .map_err(|e| StorageError::S3(e.to_string()))? 200 - .into_bytes(); 201 - 202 - Ok(data) 203 } 204 205 async fn delete(&self, key: &str) -> Result<(), StorageError> { ··· 209 .key(key) 210 .send() 211 .await 212 - .map_err(|e| StorageError::S3(e.to_string()))?; 213 214 Ok(()) 215 } ··· 217 async fn put_stream( 218 &self, 219 key: &str, 220 - mut stream: Pin<Box<dyn Stream<Item = Result<Bytes, std::io::Error>> + Send>>, 221 ) -> Result<StreamUploadResult, StorageError> { 222 use futures::StreamExt; 223 ··· 228 .key(key) 229 .send() 230 .await 231 - .map_err(|e| StorageError::S3(format!("Failed to create multipart upload: {}", e)))?; 232 233 let upload_id = create_resp 234 .upload_id() 235 - .ok_or_else(|| StorageError::S3("No upload ID returned".to_string()))? 
236 .to_string(); 237 - 238 - let mut hasher = Sha256::new(); 239 - let mut total_size: u64 = 0; 240 - let mut part_number = 1; 241 - let mut completed_parts: Vec<CompletedPart> = Vec::new(); 242 - let mut buffer = Vec::with_capacity(MIN_PART_SIZE); 243 244 let upload_part = |client: &Client, 245 bucket: &str, ··· 264 .body(ByteStream::from(data)) 265 .send() 266 .await 267 - .map_err(|e| StorageError::S3(format!("Failed to upload part: {}", e)))?; 268 269 let etag = resp 270 .e_tag() 271 - .ok_or_else(|| StorageError::S3("No ETag returned for part".to_string()))? 272 .to_string(); 273 274 Ok(CompletedPart::builder() ··· 278 }) 279 }; 280 281 - loop { 282 - match stream.next().await { 283 - Some(Ok(chunk)) => { 284 - hasher.update(&chunk); 285 - total_size += chunk.len() as u64; 286 - buffer.extend_from_slice(&chunk); 287 288 - if buffer.len() >= MIN_PART_SIZE { 289 - let part_data = 290 - std::mem::replace(&mut buffer, Vec::with_capacity(MIN_PART_SIZE)); 291 - let part = upload_part( 292 - &self.client, 293 - &self.bucket, 294 - key, 295 - &upload_id, 296 - part_number, 297 - part_data, 298 - ) 299 - .await?; 300 - completed_parts.push(part); 301 - part_number += 1; 302 } 303 } 304 - Some(Err(e)) => { 305 - let _ = self 306 - .client 307 - .abort_multipart_upload() 308 - .bucket(&self.bucket) 309 - .key(key) 310 - .upload_id(&upload_id) 311 - .send() 312 - .await; 313 - return Err(StorageError::Io(e)); 314 - } 315 - None => break, 316 } 317 - } 318 319 - if !buffer.is_empty() { 320 let part = upload_part( 321 &self.client, 322 &self.bucket, 323 key, 324 &upload_id, 325 - part_number, 326 - buffer, 327 ) 328 .await?; 329 - completed_parts.push(part); 330 } 331 332 - if completed_parts.is_empty() { 333 - let _ = self 334 - .client 335 - .abort_multipart_upload() 336 - .bucket(&self.bucket) 337 - .key(key) 338 - .upload_id(&upload_id) 339 - .send() 340 - .await; 341 return Err(StorageError::Other("Empty upload".to_string())); 342 } 343 344 let completed_upload = 
CompletedMultipartUpload::builder() 345 - .set_parts(Some(completed_parts)) 346 .build(); 347 348 self.client ··· 353 .multipart_upload(completed_upload) 354 .send() 355 .await 356 - .map_err(|e| StorageError::S3(format!("Failed to complete multipart upload: {}", e)))?; 357 358 - let hash: [u8; 32] = hasher.finalize().into(); 359 Ok(StreamUploadResult { 360 sha256_hash: hash, 361 - size: total_size, 362 }) 363 } 364 ··· 372 .key(dst_key) 373 .send() 374 .await 375 - .map_err(|e| StorageError::S3(format!("Failed to copy object: {}", e)))?; 376 377 Ok(()) 378 } 379 }
··· 1 + pub use tranquil_infra::{ 2 + BackupStorage, BlobStorage, StorageError, StreamUploadResult, backup_interval_secs, 3 + backup_retention_count, 4 + }; 5 6 use async_trait::async_trait; 7 use aws_config::BehaviorVersion; ··· 13 use bytes::Bytes; 14 use futures::Stream; 15 use sha2::{Digest, Sha256}; 16 + use std::path::{Path, PathBuf}; 17 use std::pin::Pin; 18 + use std::sync::Arc; 19 20 const MIN_PART_SIZE: usize = 5 * 1024 * 1024; 21 + const EXDEV: i32 = 18; 22 + 23 + fn validate_key(key: &str) -> Result<(), StorageError> { 24 + let dominated_by_traversal = key 25 + .split('/') 26 + .filter(|seg| !seg.is_empty()) 27 + .try_fold(0i32, |depth, segment| match segment { 28 + ".." => { 29 + let new_depth = depth - 1; 30 + (new_depth >= 0).then_some(new_depth) 31 + } 32 + "." => Some(depth), 33 + _ => Some(depth + 1), 34 + }) 35 + .is_none(); 36 + 37 + let has_null = key.contains('\0'); 38 + let is_absolute = key.starts_with('/'); 39 + 40 + match (dominated_by_traversal, has_null, is_absolute) { 41 + (true, _, _) => Err(StorageError::Other(format!( 42 + "Path traversal detected in key: {}", 43 + key 44 + ))), 45 + (_, true, _) => Err(StorageError::Other(format!( 46 + "Null byte in key: {}", 47 + key.replace('\0', "\\0") 48 + ))), 49 + (_, _, true) => Err(StorageError::Other(format!( 50 + "Absolute path not allowed: {}", 51 + key 52 + ))), 53 + _ => Ok(()), 54 + } 55 + } 56 + 57 + async fn cleanup_orphaned_tmp_files(tmp_path: &Path) { 58 + let tmp_path = tmp_path.to_path_buf(); 59 + let cleaned = tokio::task::spawn_blocking(move || { 60 + std::fs::read_dir(&tmp_path) 61 + .into_iter() 62 + .flatten() 63 + .filter_map(Result::ok) 64 + .filter(|e| e.path().is_file()) 65 + .filter_map(|entry| std::fs::remove_file(entry.path()).ok()) 66 + .count() 67 + }) 68 + .await 69 + .unwrap_or(0); 70 + 71 + if cleaned > 0 { 72 + tracing::info!( 73 + count = cleaned, 74 + "Cleaned orphaned tmp files from previous run" 75 + ); 76 + } 77 + } 78 + 79 + async fn 
rename_with_fallback(src: &Path, dst: &Path) -> Result<(), StorageError> { 80 + match tokio::fs::rename(src, dst).await { 81 + Ok(()) => Ok(()), 82 + Err(e) if e.raw_os_error() == Some(EXDEV) => { 83 + tokio::fs::copy(src, dst).await?; 84 + tokio::fs::File::open(dst).await?.sync_all().await?; 85 + let _ = tokio::fs::remove_file(src).await; 86 + Ok(()) 87 + } 88 + Err(e) => Err(StorageError::Io(e)), 89 + } 90 + } 91 + 92 + async fn ensure_parent_dir(path: &Path) -> Result<(), StorageError> { 93 + if let Some(parent) = path.parent() { 94 + tokio::fs::create_dir_all(parent).await?; 95 + } 96 + Ok(()) 97 + } 98 + 99 + fn map_io_not_found(key: &str) -> impl FnOnce(std::io::Error) -> StorageError + '_ { 100 + |e| match e.kind() { 101 + std::io::ErrorKind::NotFound => StorageError::NotFound(key.to_string()), 102 + _ => StorageError::Io(e), 103 + } 104 + } 105 106 pub struct S3BlobStorage { 107 client: Client, ··· 129 .load() 130 .await; 131 132 + std::env::var("S3_ENDPOINT").ok().map_or_else( 133 + || Client::new(&config), 134 + |endpoint| { 135 + let s3_config = aws_sdk_s3::config::Builder::from(&config) 136 + .endpoint_url(endpoint) 137 + .force_path_style(true) 138 + .build(); 139 + Client::from_conf(s3_config) 140 + }, 141 + ) 142 } 143 144 + pub struct S3BackupStorage { 145 client: Client, 146 bucket: String, 147 } 148 149 + impl S3BackupStorage { 150 pub async fn new() -> Option<Self> { 151 let bucket = std::env::var("BACKUP_S3_BUCKET").ok()?; 152 let client = create_s3_client().await; 153 Some(Self { client, bucket }) 154 } 155 + } 156 157 + #[async_trait] 158 + impl BackupStorage for S3BackupStorage { 159 + async fn put_backup(&self, did: &str, rev: &str, data: &[u8]) -> Result<String, StorageError> { 160 let key = format!("{}/{}.car", did, rev); 161 self.client 162 .put_object() ··· 165 .body(ByteStream::from(Bytes::copy_from_slice(data))) 166 .send() 167 .await 168 + .map_err(|e| StorageError::Backend(e.to_string()))?; 169 170 Ok(key) 171 } 172 173 + async fn 
get_backup(&self, storage_key: &str) -> Result<Bytes, StorageError> { 174 let resp = self 175 .client 176 .get_object() ··· 178 .key(storage_key) 179 .send() 180 .await 181 + .map_err(|e| StorageError::Backend(e.to_string()))?; 182 183 + resp.body 184 .collect() 185 .await 186 + .map(|agg| agg.into_bytes()) 187 + .map_err(|e| StorageError::Backend(e.to_string())) 188 } 189 190 + async fn delete_backup(&self, storage_key: &str) -> Result<(), StorageError> { 191 self.client 192 .delete_object() 193 .bucket(&self.bucket) 194 .key(storage_key) 195 .send() 196 .await 197 + .map_err(|e| StorageError::Backend(e.to_string()))?; 198 199 Ok(()) 200 } ··· 214 .body(ByteStream::from(data)) 215 .send() 216 .await 217 + .map_err(|e| StorageError::Backend(e.to_string()))?; 218 219 Ok(()) 220 } ··· 231 .key(key) 232 .send() 233 .await 234 + .map_err(|e| StorageError::Backend(e.to_string()))?; 235 236 + resp.body 237 .collect() 238 .await 239 + .map(|agg| agg.into_bytes()) 240 + .map_err(|e| StorageError::Backend(e.to_string())) 241 } 242 243 async fn get_head(&self, key: &str, size: usize) -> Result<Bytes, StorageError> { ··· 250 .range(range) 251 .send() 252 .await 253 + .map_err(|e| StorageError::Backend(e.to_string()))?; 254 255 + resp.body 256 .collect() 257 .await 258 + .map(|agg| agg.into_bytes()) 259 + .map_err(|e| StorageError::Backend(e.to_string())) 260 } 261 262 async fn delete(&self, key: &str) -> Result<(), StorageError> { ··· 266 .key(key) 267 .send() 268 .await 269 + .map_err(|e| StorageError::Backend(e.to_string()))?; 270 271 Ok(()) 272 } ··· 274 async fn put_stream( 275 &self, 276 key: &str, 277 + stream: Pin<Box<dyn Stream<Item = Result<Bytes, std::io::Error>> + Send>>, 278 ) -> Result<StreamUploadResult, StorageError> { 279 use futures::StreamExt; 280 ··· 285 .key(key) 286 .send() 287 .await 288 + .map_err(|e| { 289 + StorageError::Backend(format!("Failed to create multipart upload: {}", e)) 290 + })?; 291 292 let upload_id = create_resp 293 .upload_id() 294 + 
.ok_or_else(|| StorageError::Backend("No upload ID returned".to_string()))? 295 .to_string(); 296 297 let upload_part = |client: &Client, 298 bucket: &str, ··· 317 .body(ByteStream::from(data)) 318 .send() 319 .await 320 + .map_err(|e| StorageError::Backend(format!("Failed to upload part: {}", e)))?; 321 322 let etag = resp 323 .e_tag() 324 + .ok_or_else(|| StorageError::Backend("No ETag returned for part".to_string()))? 325 .to_string(); 326 327 Ok(CompletedPart::builder() ··· 331 }) 332 }; 333 334 + struct UploadState { 335 + hasher: Sha256, 336 + total_size: u64, 337 + part_number: i32, 338 + completed_parts: Vec<CompletedPart>, 339 + buffer: Vec<u8>, 340 + } 341 342 + let initial_state = UploadState { 343 + hasher: Sha256::new(), 344 + total_size: 0, 345 + part_number: 1, 346 + completed_parts: Vec::new(), 347 + buffer: Vec::with_capacity(MIN_PART_SIZE), 348 + }; 349 + 350 + let abort_upload = || async { 351 + let _ = self 352 + .client 353 + .abort_multipart_upload() 354 + .bucket(&self.bucket) 355 + .key(key) 356 + .upload_id(&upload_id) 357 + .send() 358 + .await; 359 + }; 360 + 361 + let result: Result<UploadState, StorageError> = { 362 + let mut state = initial_state; 363 + 364 + let chunk_results: Vec<Result<Bytes, std::io::Error>> = stream.collect().await; 365 + 366 + for chunk_result in chunk_results { 367 + match chunk_result { 368 + Ok(chunk) => { 369 + state.hasher.update(&chunk); 370 + state.total_size += chunk.len() as u64; 371 + state.buffer.extend_from_slice(&chunk); 372 + 373 + if state.buffer.len() >= MIN_PART_SIZE { 374 + let part_data = std::mem::replace( 375 + &mut state.buffer, 376 + Vec::with_capacity(MIN_PART_SIZE), 377 + ); 378 + let part = upload_part( 379 + &self.client, 380 + &self.bucket, 381 + key, 382 + &upload_id, 383 + state.part_number, 384 + part_data, 385 + ) 386 + .await?; 387 + state.completed_parts.push(part); 388 + state.part_number += 1; 389 + } 390 + } 391 + Err(e) => { 392 + abort_upload().await; 393 + return 
Err(StorageError::Io(e)); 394 } 395 } 396 } 397 398 + Ok(state) 399 + }; 400 + 401 + let mut state = result?; 402 + 403 + if !state.buffer.is_empty() { 404 let part = upload_part( 405 &self.client, 406 &self.bucket, 407 key, 408 &upload_id, 409 + state.part_number, 410 + std::mem::take(&mut state.buffer), 411 ) 412 .await?; 413 + state.completed_parts.push(part); 414 } 415 416 + if state.completed_parts.is_empty() { 417 + abort_upload().await; 418 return Err(StorageError::Other("Empty upload".to_string())); 419 } 420 421 let completed_upload = CompletedMultipartUpload::builder() 422 + .set_parts(Some(state.completed_parts)) 423 .build(); 424 425 self.client ··· 430 .multipart_upload(completed_upload) 431 .send() 432 .await 433 + .map_err(|e| { 434 + StorageError::Backend(format!("Failed to complete multipart upload: {}", e)) 435 + })?; 436 437 + let hash: [u8; 32] = state.hasher.finalize().into(); 438 Ok(StreamUploadResult { 439 sha256_hash: hash, 440 + size: state.total_size, 441 }) 442 } 443 ··· 451 .key(dst_key) 452 .send() 453 .await 454 + .map_err(|e| StorageError::Backend(format!("Failed to copy object: {}", e)))?; 455 456 Ok(()) 457 } 458 } 459 + 460 + pub struct FilesystemBlobStorage { 461 + base_path: PathBuf, 462 + tmp_path: PathBuf, 463 + } 464 + 465 + impl FilesystemBlobStorage { 466 + pub async fn new(base_path: impl Into<PathBuf>) -> Result<Self, StorageError> { 467 + let base_path = base_path.into(); 468 + let tmp_path = base_path.join(".tmp"); 469 + tokio::fs::create_dir_all(&base_path).await?; 470 + tokio::fs::create_dir_all(&tmp_path).await?; 471 + cleanup_orphaned_tmp_files(&tmp_path).await; 472 + Ok(Self { 473 + base_path, 474 + tmp_path, 475 + }) 476 + } 477 + 478 + pub async fn from_env() -> Result<Self, StorageError> { 479 + let path = std::env::var("BLOB_STORAGE_PATH") 480 + .map_err(|_| StorageError::Other("BLOB_STORAGE_PATH not set".into()))?; 481 + Self::new(path).await 482 + } 483 + 484 + fn resolve_path(&self, key: &str) -> 
Result<PathBuf, StorageError> { 485 + validate_key(key)?; 486 + Ok(self.base_path.join(key)) 487 + } 488 + 489 + async fn atomic_write(&self, path: &Path, data: &[u8]) -> Result<(), StorageError> { 490 + use tokio::io::AsyncWriteExt; 491 + 492 + let tmp_file_name = uuid::Uuid::new_v4().to_string(); 493 + let tmp_path = self.tmp_path.join(&tmp_file_name); 494 + 495 + let mut file = tokio::fs::File::create(&tmp_path).await?; 496 + file.write_all(data).await?; 497 + file.sync_all().await?; 498 + drop(file); 499 + 500 + rename_with_fallback(&tmp_path, path).await 501 + } 502 + } 503 + 504 + #[async_trait] 505 + impl BlobStorage for FilesystemBlobStorage { 506 + async fn put(&self, key: &str, data: &[u8]) -> Result<(), StorageError> { 507 + let path = self.resolve_path(key)?; 508 + ensure_parent_dir(&path).await?; 509 + self.atomic_write(&path, data).await 510 + } 511 + 512 + async fn put_bytes(&self, key: &str, data: Bytes) -> Result<(), StorageError> { 513 + self.put(key, &data).await 514 + } 515 + 516 + async fn get(&self, key: &str) -> Result<Vec<u8>, StorageError> { 517 + let path = self.resolve_path(key)?; 518 + tokio::fs::read(&path).await.map_err(map_io_not_found(key)) 519 + } 520 + 521 + async fn get_bytes(&self, key: &str) -> Result<Bytes, StorageError> { 522 + self.get(key).await.map(Bytes::from) 523 + } 524 + 525 + async fn get_head(&self, key: &str, size: usize) -> Result<Bytes, StorageError> { 526 + use tokio::io::AsyncReadExt; 527 + let path = self.resolve_path(key)?; 528 + let mut file = tokio::fs::File::open(&path) 529 + .await 530 + .map_err(map_io_not_found(key))?; 531 + let mut buffer = vec![0u8; size]; 532 + let n = file.read(&mut buffer).await?; 533 + buffer.truncate(n); 534 + Ok(Bytes::from(buffer)) 535 + } 536 + 537 + async fn delete(&self, key: &str) -> Result<(), StorageError> { 538 + let path = self.resolve_path(key)?; 539 + tokio::fs::remove_file(&path).await.or_else(|e| { 540 + (e.kind() == std::io::ErrorKind::NotFound) 541 + .then_some(()) 
542 + .ok_or(StorageError::Io(e)) 543 + }) 544 + } 545 + 546 + async fn put_stream( 547 + &self, 548 + key: &str, 549 + stream: Pin<Box<dyn Stream<Item = Result<Bytes, std::io::Error>> + Send>>, 550 + ) -> Result<StreamUploadResult, StorageError> { 551 + use futures::TryStreamExt; 552 + use tokio::io::AsyncWriteExt; 553 + 554 + let tmp_file_name = uuid::Uuid::new_v4().to_string(); 555 + let tmp_path = self.tmp_path.join(&tmp_file_name); 556 + let final_path = self.resolve_path(key)?; 557 + ensure_parent_dir(&final_path).await?; 558 + 559 + let file = tokio::fs::File::create(&tmp_path).await?; 560 + 561 + struct StreamState { 562 + file: tokio::fs::File, 563 + hasher: Sha256, 564 + total_size: u64, 565 + } 566 + 567 + let initial = StreamState { 568 + file, 569 + hasher: Sha256::new(), 570 + total_size: 0, 571 + }; 572 + 573 + let final_state = stream 574 + .map_err(StorageError::Io) 575 + .try_fold(initial, |mut state, chunk| async move { 576 + state.hasher.update(&chunk); 577 + state.total_size += chunk.len() as u64; 578 + state.file.write_all(&chunk).await?; 579 + Ok(state) 580 + }) 581 + .await?; 582 + 583 + final_state.file.sync_all().await?; 584 + drop(final_state.file); 585 + 586 + rename_with_fallback(&tmp_path, &final_path).await?; 587 + 588 + let hash: [u8; 32] = final_state.hasher.finalize().into(); 589 + Ok(StreamUploadResult { 590 + sha256_hash: hash, 591 + size: final_state.total_size, 592 + }) 593 + } 594 + 595 + async fn copy(&self, src_key: &str, dst_key: &str) -> Result<(), StorageError> { 596 + let src_path = self.resolve_path(src_key)?; 597 + let dst_path = self.resolve_path(dst_key)?; 598 + ensure_parent_dir(&dst_path).await?; 599 + tokio::fs::copy(&src_path, &dst_path) 600 + .await 601 + .map_err(map_io_not_found(src_key))?; 602 + tokio::fs::File::open(&dst_path).await?.sync_all().await?; 603 + Ok(()) 604 + } 605 + } 606 + 607 + pub struct FilesystemBackupStorage { 608 + base_path: PathBuf, 609 + tmp_path: PathBuf, 610 + } 611 + 612 + impl 
FilesystemBackupStorage { 613 + pub async fn new(base_path: impl Into<PathBuf>) -> Result<Self, StorageError> { 614 + let base_path = base_path.into(); 615 + let tmp_path = base_path.join(".tmp"); 616 + tokio::fs::create_dir_all(&base_path).await?; 617 + tokio::fs::create_dir_all(&tmp_path).await?; 618 + cleanup_orphaned_tmp_files(&tmp_path).await; 619 + Ok(Self { 620 + base_path, 621 + tmp_path, 622 + }) 623 + } 624 + 625 + pub async fn from_env() -> Result<Self, StorageError> { 626 + let path = std::env::var("BACKUP_STORAGE_PATH") 627 + .map_err(|_| StorageError::Other("BACKUP_STORAGE_PATH not set".into()))?; 628 + Self::new(path).await 629 + } 630 + 631 + fn resolve_path(&self, key: &str) -> Result<PathBuf, StorageError> { 632 + validate_key(key)?; 633 + Ok(self.base_path.join(key)) 634 + } 635 + } 636 + 637 + #[async_trait] 638 + impl BackupStorage for FilesystemBackupStorage { 639 + async fn put_backup(&self, did: &str, rev: &str, data: &[u8]) -> Result<String, StorageError> { 640 + use tokio::io::AsyncWriteExt; 641 + 642 + let key = format!("{}/{}.car", did, rev); 643 + let final_path = self.resolve_path(&key)?; 644 + ensure_parent_dir(&final_path).await?; 645 + 646 + let tmp_file_name = uuid::Uuid::new_v4().to_string(); 647 + let tmp_path = self.tmp_path.join(&tmp_file_name); 648 + 649 + let mut file = tokio::fs::File::create(&tmp_path).await?; 650 + file.write_all(data).await?; 651 + file.sync_all().await?; 652 + drop(file); 653 + 654 + rename_with_fallback(&tmp_path, &final_path).await?; 655 + Ok(key) 656 + } 657 + 658 + async fn get_backup(&self, storage_key: &str) -> Result<Bytes, StorageError> { 659 + let path = self.resolve_path(storage_key)?; 660 + tokio::fs::read(&path) 661 + .await 662 + .map(Bytes::from) 663 + .map_err(map_io_not_found(storage_key)) 664 + } 665 + 666 + async fn delete_backup(&self, storage_key: &str) -> Result<(), StorageError> { 667 + let path = self.resolve_path(storage_key)?; 668 + tokio::fs::remove_file(&path).await.or_else(|e| 
{ 669 + (e.kind() == std::io::ErrorKind::NotFound) 670 + .then_some(()) 671 + .ok_or(StorageError::Io(e)) 672 + }) 673 + } 674 + } 675 + 676 + pub async fn create_blob_storage() -> Arc<dyn BlobStorage> { 677 + let backend = std::env::var("BLOB_STORAGE_BACKEND").unwrap_or_else(|_| "filesystem".into()); 678 + 679 + match backend.as_str() { 680 + "s3" => { 681 + tracing::info!("Initializing S3 blob storage"); 682 + Arc::new(S3BlobStorage::new().await) 683 + } 684 + _ => { 685 + tracing::info!("Initializing filesystem blob storage"); 686 + FilesystemBlobStorage::from_env() 687 + .await 688 + .unwrap_or_else(|e| { 689 + panic!( 690 + "Failed to initialize filesystem blob storage: {}. \ 691 + Set BLOB_STORAGE_PATH to a valid directory path.", 692 + e 693 + ); 694 + }) 695 + .pipe(Arc::new) 696 + } 697 + } 698 + } 699 + 700 + pub async fn create_backup_storage() -> Option<Arc<dyn BackupStorage>> { 701 + let enabled = std::env::var("BACKUP_ENABLED") 702 + .map(|v| v != "false" && v != "0") 703 + .unwrap_or(true); 704 + 705 + if !enabled { 706 + tracing::info!("Backup storage disabled via BACKUP_ENABLED=false"); 707 + return None; 708 + } 709 + 710 + let backend = std::env::var("BACKUP_STORAGE_BACKEND").unwrap_or_else(|_| "filesystem".into()); 711 + 712 + match backend.as_str() { 713 + "s3" => S3BackupStorage::new().await.map_or_else( 714 + || { 715 + tracing::error!( 716 + "BACKUP_STORAGE_BACKEND=s3 but BACKUP_S3_BUCKET is not set. \ 717 + Backups will be disabled." 718 + ); 719 + None 720 + }, 721 + |storage| { 722 + tracing::info!("Initialized S3 backup storage"); 723 + Some(Arc::new(storage) as Arc<dyn BackupStorage>) 724 + }, 725 + ), 726 + _ => FilesystemBackupStorage::from_env().await.map_or_else( 727 + |e| { 728 + tracing::error!( 729 + "Failed to initialize filesystem backup storage: {}. \ 730 + Set BACKUP_STORAGE_PATH to a valid directory path. 
\ 731 + Backups will be disabled.", 732 + e 733 + ); 734 + None 735 + }, 736 + |storage| { 737 + tracing::info!("Initialized filesystem backup storage"); 738 + Some(Arc::new(storage) as Arc<dyn BackupStorage>) 739 + }, 740 + ), 741 + } 742 + } 743 + 744 + trait Pipe: Sized { 745 + fn pipe<F, R>(self, f: F) -> R 746 + where 747 + F: FnOnce(Self) -> R, 748 + { 749 + f(self) 750 + } 751 + } 752 + 753 + impl<T> Pipe for T {}
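The durability story above rests on the write-to-tmp / fsync / rename sequence shared by `atomic_write`, `put_stream`, and `put_backup`. A minimal synchronous sketch of the same pattern (standalone `std`-only code with illustrative names, not the PR's tokio implementation):

```rust
use std::fs::{self, File};
use std::io::Write;
use std::path::Path;

// Atomic write: readers observe either the old file or the complete
// new one, never a torn write. The tmp file must live on the same
// filesystem as the target so the rename is atomic.
fn atomic_write(dir: &Path, name: &str, data: &[u8]) -> std::io::Result<()> {
    let tmp = dir.join(format!(".tmp-{}", std::process::id()));
    let mut f = File::create(&tmp)?;
    f.write_all(data)?;
    f.sync_all()?; // flush to disk before the rename makes the file visible
    drop(f);
    fs::rename(&tmp, dir.join(name))
}
```

The `.tmp` subdirectory under `base_path` serves the same purpose here, with `cleanup_orphaned_tmp_files` reclaiming leftovers after a crash.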
+9 -27
docs/install-containers.md
··· 7 8 ## Prerequisites 9 10 - - A VPS with at least 2GB RAM and 20GB disk 11 - A domain name pointing to your server's IP 12 - A **wildcard TLS certificate** for `*.pds.example.com` (user handles are served as subdomains) 13 - Root or sudo access ··· 42 43 ## Standalone Containers (No Compose) 44 45 - If you already have postgres, valkey, and minio running on the host (eg., from the [Debian install guide](install-debian.md)), you can run just the app containers. 46 47 Build the images: 48 ```sh ··· 50 podman build -t tranquil-pds-frontend:latest ./frontend 51 ``` 52 53 - Run the backend with host networking (so it can access postgres/valkey/minio on localhost): 54 ```sh 55 podman run -d --name tranquil-pds \ 56 --network=host \ 57 --env-file /etc/tranquil-pds/tranquil-pds.env \ 58 tranquil-pds:latest 59 ``` 60 ··· 104 105 ```bash 106 mkdir -p /etc/containers/systemd 107 - mkdir -p /srv/tranquil-pds/{postgres,minio,valkey,certs,acme,config} 108 ``` 109 110 ## Create Environment File ··· 152 ```bash 153 source /srv/tranquil-pds/config/tranquil-pds.env 154 echo "$DB_PASSWORD" | podman secret create tranquil-pds-db-password - 155 - echo "$MINIO_ROOT_PASSWORD" | podman secret create tranquil-pds-minio-password - 156 ``` 157 158 ## Start Services and Initialize 159 160 ```bash 161 systemctl daemon-reload 162 - systemctl start tranquil-pds-db tranquil-pds-minio tranquil-pds-valkey 163 sleep 10 164 ``` 165 166 - Create the minio buckets: 167 - ```bash 168 - podman run --rm --pod tranquil-pds \ 169 - -e MINIO_ROOT_USER=minioadmin \ 170 - -e MINIO_ROOT_PASSWORD=your-minio-password \ 171 - cgr.dev/chainguard/minio-client:latest-dev \ 172 - sh -c "mc alias set local http://localhost:9000 \$MINIO_ROOT_USER \$MINIO_ROOT_PASSWORD && mc mb --ignore-existing local/pds-blobs && mc mb --ignore-existing local/pds-backups" 173 - ``` 174 - 175 Run migrations: 176 ```bash 177 cargo install sqlx-cli --no-default-features --features postgres ··· 215 ## Enable All Services 216 217 ```bash 
218 - systemctl enable tranquil-pds-db tranquil-pds-minio tranquil-pds-valkey tranquil-pds-app tranquil-pds-frontend tranquil-pds-nginx 219 ``` 220 221 ## Configure Firewall ··· 260 261 ```sh 262 mkdir -p /srv/tranquil-pds/{data,config} 263 - mkdir -p /srv/tranquil-pds/data/{postgres,minio,valkey,certs,acme} 264 ``` 265 266 ## Clone Repository and Build Images ··· 340 ```sh 341 rc-service tranquil-pds start 342 sleep 15 343 - ``` 344 - 345 - Create the minio buckets: 346 - ```sh 347 - source /srv/tranquil-pds/config/tranquil-pds.env 348 - podman run --rm --network tranquil-pds_default \ 349 - -e MINIO_ROOT_USER="$MINIO_ROOT_USER" \ 350 - -e MINIO_ROOT_PASSWORD="$MINIO_ROOT_PASSWORD" \ 351 - cgr.dev/chainguard/minio-client:latest-dev \ 352 - sh -c 'mc alias set local http://minio:9000 $MINIO_ROOT_USER $MINIO_ROOT_PASSWORD && mc mb --ignore-existing local/pds-blobs && mc mb --ignore-existing local/pds-backups' 353 ``` 354 355 Run migrations:
··· 7 8 ## Prerequisites 9 10 + - A VPS with at least 2GB RAM 11 + - Disk space for blobs (depends on usage; plan for ~1GB per active user as a baseline) 12 - A domain name pointing to your server's IP 13 - A **wildcard TLS certificate** for `*.pds.example.com` (user handles are served as subdomains) 14 - Root or sudo access ··· 43 44 ## Standalone Containers (No Compose) 45 46 + If you already have postgres and valkey running on the host (e.g., from the [Debian install guide](install-debian.md)), you can run just the app containers. 47 48 Build the images: 49 ```sh ··· 51 podman build -t tranquil-pds-frontend:latest ./frontend 52 ``` 53 54 + Run the backend with host networking (so it can access postgres/valkey on localhost) and mount the blob storage: 55 ```sh 56 podman run -d --name tranquil-pds \ 57 --network=host \ 58 --env-file /etc/tranquil-pds/tranquil-pds.env \ 59 + -v /var/lib/tranquil:/var/lib/tranquil:Z \ 60 tranquil-pds:latest 61 ``` 62 ··· 106 107 ```bash 108 mkdir -p /etc/containers/systemd 109 + mkdir -p /srv/tranquil-pds/{postgres,valkey,blobs,backups,certs,acme,config} 110 ``` 111 112 ## Create Environment File ··· 154 ```bash 155 source /srv/tranquil-pds/config/tranquil-pds.env 156 echo "$DB_PASSWORD" | podman secret create tranquil-pds-db-password - 157 ``` 158 159 ## Start Services and Initialize 160 161 ```bash 162 systemctl daemon-reload 163 + systemctl start tranquil-pds-db tranquil-pds-valkey 164 sleep 10 165 ``` 166 167 Run migrations: 168 ```bash 169 cargo install sqlx-cli --no-default-features --features postgres ··· 207 ## Enable All Services 208 209 ```bash 210 + systemctl enable tranquil-pds-db tranquil-pds-valkey tranquil-pds-app tranquil-pds-frontend tranquil-pds-nginx 211 ``` 212 213 ## Configure Firewall ··· 252 253 ```sh 254 mkdir -p /srv/tranquil-pds/{data,config} 255 + mkdir -p /srv/tranquil-pds/data/{postgres,valkey,blobs,backups,certs,acme} 256 ``` 257 258 ## Clone Repository and Build Images ··· 332 ```sh 333 rc-service 
tranquil-pds start 334 sleep 15 335 ``` 336 337 Run migrations:
+11 -41
docs/install-debian.md
··· 4 5 ## Prerequisites 6 7 - - A VPS with at least 2GB RAM and 20GB disk 8 - A domain name pointing to your server's IP 9 - A wildcard TLS certificate for `*.pds.example.com` (user handles are served as subdomains) 10 - Root or sudo access ··· 37 sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE pds TO tranquil_pds;" 38 ``` 39 40 - ## Install minio 41 42 ```bash 43 - curl -O https://dl.min.io/server/minio/release/linux-amd64/minio 44 - chmod +x minio 45 - mv minio /usr/local/bin/ 46 - mkdir -p /var/lib/minio/data 47 - useradd -r -s /sbin/nologin minio-user 48 - chown -R minio-user:minio-user /var/lib/minio 49 - cat > /etc/default/minio << 'EOF' 50 - MINIO_ROOT_USER=minioadmin 51 - MINIO_ROOT_PASSWORD=your-minio-password 52 - MINIO_VOLUMES="/var/lib/minio/data" 53 - MINIO_OPTS="--console-address :9001" 54 - EOF 55 - cat > /etc/systemd/system/minio.service << 'EOF' 56 - [Unit] 57 - Description=MinIO Object Storage 58 - After=network.target 59 - [Service] 60 - User=minio-user 61 - Group=minio-user 62 - EnvironmentFile=/etc/default/minio 63 - ExecStart=/usr/local/bin/minio server $MINIO_VOLUMES $MINIO_OPTS 64 - Restart=always 65 - LimitNOFILE=65536 66 - [Install] 67 - WantedBy=multi-user.target 68 - EOF 69 - systemctl daemon-reload 70 - systemctl enable minio 71 - systemctl start minio 72 ``` 73 74 - Create the buckets (wait a few seconds for minio to start): 75 - ```bash 76 - curl -O https://dl.min.io/client/mc/release/linux-amd64/mc 77 - chmod +x mc 78 - mv mc /usr/local/bin/ 79 - mc alias set local http://localhost:9000 minioadmin your-minio-password 80 - mc mb local/pds-blobs 81 - mc mb local/pds-backups 82 - ``` 83 84 ## Install valkey 85 ··· 142 143 ```bash 144 useradd -r -s /sbin/nologin tranquil-pds 145 cp /opt/tranquil-pds/target/release/tranquil-pds /usr/local/bin/ 146 147 cat > /etc/systemd/system/tranquil-pds.service << 'EOF' 148 [Unit] 149 Description=Tranquil PDS - AT Protocol PDS 150 - After=network.target postgresql.service minio.service 151 
[Service] 152 Type=simple 153 User=tranquil-pds ··· 156 ExecStart=/usr/local/bin/tranquil-pds 157 Restart=always 158 RestartSec=5 159 [Install] 160 WantedBy=multi-user.target 161 EOF
··· 4 5 ## Prerequisites 6 7 + - A VPS with at least 2GB RAM 8 + - Disk space for blobs (depends on usage; plan for ~1GB per active user as a baseline) 9 - A domain name pointing to your server's IP 10 - A wildcard TLS certificate for `*.pds.example.com` (user handles are served as subdomains) 11 - Root or sudo access ··· 38 sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE pds TO tranquil_pds;" 39 ``` 40 41 + ## Create Blob Storage Directories 42 43 ```bash 44 + mkdir -p /var/lib/tranquil/blobs /var/lib/tranquil/backups 45 ``` 46 47 + We'll set ownership after creating the service user. 48 49 ## Install valkey 50 ··· 107 108 ```bash 109 useradd -r -s /sbin/nologin tranquil-pds 110 + chown -R tranquil-pds:tranquil-pds /var/lib/tranquil 111 cp /opt/tranquil-pds/target/release/tranquil-pds /usr/local/bin/ 112 113 cat > /etc/systemd/system/tranquil-pds.service << 'EOF' 114 [Unit] 115 Description=Tranquil PDS - AT Protocol PDS 116 + After=network.target postgresql.service 117 [Service] 118 Type=simple 119 User=tranquil-pds ··· 122 ExecStart=/usr/local/bin/tranquil-pds 123 Restart=always 124 RestartSec=5 125 + ProtectSystem=strict 126 + ProtectHome=true 127 + PrivateTmp=true 128 + ReadWritePaths=/var/lib/tranquil 129 [Install] 130 WantedBy=multi-user.target 131 EOF
+3 -3
docs/install-kubernetes.md
··· 4 5 - cloudnativepg (or your preferred postgres operator) 6 - valkey 7 - - s3-compatible object storage (minio operator, or just use a managed service) 8 - the app itself (it's just a container with some env vars) 9 10 You'll need a wildcard TLS certificate for `*.your-pds-hostname.example.com`. User handles are served as subdomains. 11 12 The container image expects: 13 - `DATABASE_URL` - postgres connection string 14 - - `S3_ENDPOINT`, `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `S3_BUCKET` 15 - - `BACKUP_S3_BUCKET` - bucket for repo backups (optional but recommended) 16 - `VALKEY_URL` - redis:// connection string 17 - `PDS_HOSTNAME` - your PDS hostname (without protocol) 18 - `JWT_SECRET`, `DPOP_SECRET`, `MASTER_KEY` - generate with `openssl rand -base64 48`
··· 4 5 - cloudnativepg (or your preferred postgres operator) 6 - valkey 7 + - a PersistentVolume for blob storage 8 - the app itself (it's just a container with some env vars) 9 10 You'll need a wildcard TLS certificate for `*.your-pds-hostname.example.com`. User handles are served as subdomains. 11 12 The container image expects: 13 - `DATABASE_URL` - postgres connection string 14 + - `BLOB_STORAGE_PATH` - path to blob storage (mount a PV here) 15 + - `BACKUP_STORAGE_PATH` - path for repo backups (optional but recommended) 16 - `VALKEY_URL` - redis:// connection string 17 - `PDS_HOSTNAME` - your PDS hostname (without protocol) 18 - `JWT_SECRET`, `DPOP_SECRET`, `MASTER_KEY` - generate with `openssl rand -base64 48`
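So a pod spec only needs the PV mounted at `BLOB_STORAGE_PATH`; the backend choice itself is just an env lookup. A sketch of the fallback behavior `create_blob_storage` applies, simplified with the env read factored out as a parameter (in the real code this would be `std::env::var("BLOB_STORAGE_BACKEND").ok()`, and the function constructs the storage trait objects rather than returning a label):

```rust
// Anything other than an explicit "s3" falls back to the
// filesystem default, matching the match arms in create_blob_storage.
fn blob_backend(var: Option<&str>) -> &'static str {
    match var {
        Some("s3") => "s3",
        _ => "filesystem",
    }
}
```

This is why an unset or misspelled `BLOB_STORAGE_BACKEND` never fails at selection time: the filesystem path is the default, and only a missing `BLOB_STORAGE_PATH` aborts startup.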
+2 -1
frontend/src/routes/OAuthDelegation.svelte
··· 127 }, 128 body: JSON.stringify({ 129 request_uri: requestUri, 130 - identifier: controllerIdentifier.trim().replace(/^@/, '') 131 }) 132 }) 133
··· 127 }, 128 body: JSON.stringify({ 129 request_uri: requestUri, 130 + identifier: controllerIdentifier.trim().replace(/^@/, ''), 131 + delegated_did: delegatedDid 132 }) 133 }) 134
+780
frontend/src/routes/OAuthRegister.svelte
···
··· 1 + <script lang="ts"> 2 + import { navigate, routes, getFullUrl } from '../lib/router.svelte' 3 + import { api } from '../lib/api' 4 + import { _ } from '../lib/i18n' 5 + import { 6 + createRegistrationFlow, 7 + restoreRegistrationFlow, 8 + VerificationStep, 9 + KeyChoiceStep, 10 + DidDocStep, 11 + AppPasswordStep, 12 + } from '../lib/registration' 13 + import { 14 + prepareCreationOptions, 15 + serializeAttestationResponse, 16 + type WebAuthnCreationOptionsResponse, 17 + } from '../lib/webauthn' 18 + import AccountTypeSwitcher from '../components/AccountTypeSwitcher.svelte' 19 + 20 + let serverInfo = $state<{ 21 + availableUserDomains: string[] 22 + inviteCodeRequired: boolean 23 + availableCommsChannels?: string[] 24 + selfHostedDidWebEnabled?: boolean 25 + } | null>(null) 26 + let loadingServerInfo = $state(true) 27 + let serverInfoLoaded = false 28 + let ssoAvailable = $state(false) 29 + 30 + let flow = $state<ReturnType<typeof createRegistrationFlow> | null>(null) 31 + let passkeyName = $state('') 32 + let clientName = $state<string | null>(null) 33 + 34 + function getRequestUri(): string | null { 35 + const params = new URLSearchParams(window.location.search) 36 + return params.get('request_uri') 37 + } 38 + 39 + $effect(() => { 40 + if (!serverInfoLoaded) { 41 + serverInfoLoaded = true 42 + loadServerInfo() 43 + fetchClientName() 44 + checkSsoAvailable() 45 + } 46 + }) 47 + 48 + async function checkSsoAvailable() { 49 + try { 50 + const response = await fetch('/oauth/sso/providers') 51 + if (response.ok) { 52 + const data = await response.json() 53 + ssoAvailable = (data.providers?.length ?? 
0) > 0 54 + } 55 + } catch { 56 + ssoAvailable = false 57 + } 58 + } 59 + 60 + async function fetchClientName() { 61 + const requestUri = getRequestUri() 62 + if (!requestUri) return 63 + 64 + try { 65 + const response = await fetch(`/oauth/authorize?request_uri=${encodeURIComponent(requestUri)}`, { 66 + headers: { 'Accept': 'application/json' } 67 + }) 68 + if (response.ok) { 69 + const data = await response.json() 70 + clientName = data.client_name || null 71 + } 72 + } catch { 73 + clientName = null 74 + } 75 + } 76 + 77 + $effect(() => { 78 + if (flow?.state.step === 'redirect-to-dashboard') { 79 + completeOAuthRegistration() 80 + } 81 + }) 82 + 83 + let creatingStarted = false 84 + $effect(() => { 85 + if (flow?.state.step === 'creating' && !creatingStarted) { 86 + creatingStarted = true 87 + flow.createPasskeyAccount() 88 + } 89 + }) 90 + 91 + async function loadServerInfo() { 92 + try { 93 + const restored = restoreRegistrationFlow() 94 + if (restored && restored.state.mode === 'passkey') { 95 + flow = restored 96 + serverInfo = await api.describeServer() 97 + } else { 98 + serverInfo = await api.describeServer() 99 + const hostname = serverInfo?.availableUserDomains?.[0] || window.location.hostname 100 + flow = createRegistrationFlow('passkey', hostname) 101 + } 102 + } catch (e) { 103 + console.error('Failed to load server info:', e) 104 + } finally { 105 + loadingServerInfo = false 106 + } 107 + } 108 + 109 + function validateInfoStep(): string | null { 110 + if (!flow) return 'Flow not initialized' 111 + const info = flow.info 112 + if (!info.handle.trim()) return $_('registerPasskey.errors.handleRequired') 113 + if (info.handle.includes('.')) return $_('registerPasskey.errors.handleNoDots') 114 + if (serverInfo?.inviteCodeRequired && !info.inviteCode?.trim()) { 115 + return $_('registerPasskey.errors.inviteRequired') 116 + } 117 + if (info.didType === 'web-external') { 118 + if (!info.externalDid?.trim()) return 
$_('registerPasskey.errors.externalDidRequired') 119 + if (!info.externalDid.trim().startsWith('did:web:')) return $_('registerPasskey.errors.externalDidFormat') 120 + } 121 + switch (info.verificationChannel) { 122 + case 'email': 123 + if (!info.email.trim()) return $_('registerPasskey.errors.emailRequired') 124 + break 125 + case 'discord': 126 + if (!info.discordId?.trim()) return $_('registerPasskey.errors.discordRequired') 127 + break 128 + case 'telegram': 129 + if (!info.telegramUsername?.trim()) return $_('registerPasskey.errors.telegramRequired') 130 + break 131 + case 'signal': 132 + if (!info.signalNumber?.trim()) return $_('registerPasskey.errors.signalRequired') 133 + break 134 + } 135 + return null 136 + } 137 + 138 + async function handleInfoSubmit(e: Event) { 139 + e.preventDefault() 140 + if (!flow) return 141 + 142 + const validationError = validateInfoStep() 143 + if (validationError) { 144 + flow.setError(validationError) 145 + return 146 + } 147 + 148 + if (!window.PublicKeyCredential) { 149 + flow.setError($_('registerPasskey.errors.passkeysNotSupported')) 150 + return 151 + } 152 + 153 + flow.clearError() 154 + flow.proceedFromInfo() 155 + } 156 + 157 + async function handlePasskeyRegistration() { 158 + if (!flow || !flow.account) return 159 + 160 + flow.setSubmitting(true) 161 + flow.clearError() 162 + 163 + try { 164 + const { options } = await api.startPasskeyRegistrationForSetup( 165 + flow.account.did, 166 + flow.account.setupToken!, 167 + passkeyName || undefined 168 + ) 169 + 170 + const publicKeyOptions = prepareCreationOptions(options as unknown as WebAuthnCreationOptionsResponse) 171 + const credential = await navigator.credentials.create({ 172 + publicKey: publicKeyOptions 173 + }) 174 + 175 + if (!credential) { 176 + flow.setError($_('registerPasskey.errors.passkeyCancelled')) 177 + flow.setSubmitting(false) 178 + return 179 + } 180 + 181 + const credentialResponse = serializeAttestationResponse(credential as PublicKeyCredential) 
182 + 183 + const result = await api.completePasskeySetup( 184 + flow.account.did, 185 + flow.account.setupToken!, 186 + credentialResponse, 187 + passkeyName || undefined 188 + ) 189 + 190 + flow.setPasskeyComplete(result.appPassword, result.appPasswordName) 191 + } catch (err) { 192 + if (err instanceof DOMException && err.name === 'NotAllowedError') { 193 + flow.setError($_('registerPasskey.errors.passkeyCancelled')) 194 + } else if (err instanceof Error) { 195 + flow.setError(err.message || $_('registerPasskey.errors.passkeyFailed')) 196 + } else { 197 + flow.setError($_('registerPasskey.errors.passkeyFailed')) 198 + } 199 + } finally { 200 + flow.setSubmitting(false) 201 + } 202 + } 203 + 204 + async function completeOAuthRegistration() { 205 + const requestUri = getRequestUri() 206 + if (!requestUri || !flow?.account) { 207 + navigate(routes.dashboard) 208 + return 209 + } 210 + 211 + try { 212 + const response = await fetch('/oauth/register/complete', { 213 + method: 'POST', 214 + headers: { 215 + 'Content-Type': 'application/json', 216 + 'Accept': 'application/json', 217 + }, 218 + body: JSON.stringify({ 219 + request_uri: requestUri, 220 + did: flow.account.did, 221 + app_password: flow.account.appPassword, 222 + }), 223 + }) 224 + 225 + const data = await response.json() 226 + 227 + if (!response.ok) { 228 + flow.setError(data.error_description || data.error || $_('common.error')) 229 + return 230 + } 231 + 232 + if (data.redirect_uri) { 233 + window.location.href = data.redirect_uri 234 + return 235 + } 236 + 237 + navigate(routes.dashboard) 238 + } catch { 239 + flow.setError($_('common.error')) 240 + } 241 + } 242 + 243 + function isChannelAvailable(ch: string): boolean { 244 + const available = serverInfo?.availableCommsChannels ?? 
['email'] 245 + return available.includes(ch) 246 + } 247 + 248 + function channelLabel(ch: string): string { 249 + switch (ch) { 250 + case 'email': 251 + return $_('register.email') 252 + case 'discord': 253 + return $_('register.discord') 254 + case 'telegram': 255 + return $_('register.telegram') 256 + case 'signal': 257 + return $_('register.signal') 258 + default: 259 + return ch 260 + } 261 + } 262 + 263 + let fullHandle = $derived(() => { 264 + if (!flow?.info.handle.trim()) return '' 265 + return `${flow.info.handle.trim()}.${flow.state.pdsHostname}` 266 + }) 267 + 268 + async function handleCancel() { 269 + const requestUri = getRequestUri() 270 + if (!requestUri) { 271 + window.history.back() 272 + return 273 + } 274 + 275 + try { 276 + const response = await fetch('/oauth/authorize/deny', { 277 + method: 'POST', 278 + headers: { 279 + 'Content-Type': 'application/json', 280 + 'Accept': 'application/json' 281 + }, 282 + body: JSON.stringify({ request_uri: requestUri }) 283 + }) 284 + 285 + const data = await response.json() 286 + if (data.redirect_uri) { 287 + window.location.href = data.redirect_uri 288 + } 289 + } catch { 290 + window.history.back() 291 + } 292 + } 293 + 294 + function goToLogin() { 295 + const requestUri = getRequestUri() 296 + if (requestUri) { 297 + navigate(routes.oauthLogin, { params: { request_uri: requestUri } }) 298 + } else { 299 + navigate(routes.login) 300 + } 301 + } 302 + </script> 303 + 304 + <div class="oauth-register-container"> 305 + {#if loadingServerInfo} 306 + <div class="loading"> 307 + <div class="spinner"></div> 308 + <p>{$_('common.loading')}</p> 309 + </div> 310 + {:else if flow} 311 + <header class="page-header"> 312 + <h1>{$_('oauth.register.title')}</h1> 313 + <p class="subtitle"> 314 + {#if clientName} 315 + {$_('oauth.register.subtitle')} <strong>{clientName}</strong> 316 + {:else} 317 + {$_('oauth.register.subtitleGeneric')} 318 + {/if} 319 + </p> 320 + </header> 321 + 322 + {#if flow.state.error} 323 + 
<div class="error">{flow.state.error}</div> 324 + {/if} 325 + 326 + {#if flow.state.step === 'info'} 327 + <div class="migrate-callout"> 328 + <div class="migrate-icon">โ†—</div> 329 + <div class="migrate-content"> 330 + <strong>{$_('register.migrateTitle')}</strong> 331 + <p>{$_('register.migrateDescription')}</p> 332 + <a href={getFullUrl(routes.migrate)} class="migrate-link"> 333 + {$_('register.migrateLink')} โ†’ 334 + </a> 335 + </div> 336 + </div> 337 + 338 + <AccountTypeSwitcher active="passkey" {ssoAvailable} oauthRequestUri={getRequestUri()} /> 339 + 340 + <div class="split-layout"> 341 + <div class="form-section"> 342 + <form onsubmit={handleInfoSubmit}> 343 + <div class="field"> 344 + <label for="handle">{$_('register.handle')}</label> 345 + <input 346 + id="handle" 347 + type="text" 348 + bind:value={flow.info.handle} 349 + placeholder={$_('register.handlePlaceholder')} 350 + disabled={flow.state.submitting} 351 + required 352 + autocomplete="off" 353 + /> 354 + {#if fullHandle()} 355 + <p class="hint">{$_('register.handleHint', { values: { handle: fullHandle() } })}</p> 356 + {/if} 357 + </div> 358 + 359 + <fieldset> 360 + <legend>{$_('register.contactMethod')}</legend> 361 + <div class="contact-fields"> 362 + <div class="field"> 363 + <label for="verification-channel">{$_('register.verificationMethod')}</label> 364 + <select id="verification-channel" bind:value={flow.info.verificationChannel} disabled={flow.state.submitting}> 365 + <option value="email">{channelLabel('email')}</option> 366 + {#if isChannelAvailable('discord')} 367 + <option value="discord">{channelLabel('discord')}</option> 368 + {/if} 369 + {#if isChannelAvailable('telegram')} 370 + <option value="telegram">{channelLabel('telegram')}</option> 371 + {/if} 372 + {#if isChannelAvailable('signal')} 373 + <option value="signal">{channelLabel('signal')}</option> 374 + {/if} 375 + </select> 376 + </div> 377 + 378 + {#if flow.info.verificationChannel === 'email'} 379 + <div class="field"> 
380 + <label for="email">{$_('register.emailAddress')}</label> 381 + <input 382 + id="email" 383 + type="email" 384 + bind:value={flow.info.email} 385 + placeholder={$_('register.emailPlaceholder')} 386 + disabled={flow.state.submitting} 387 + required 388 + /> 389 + </div> 390 + {:else if flow.info.verificationChannel === 'discord'} 391 + <div class="field"> 392 + <label for="discord-id">{$_('register.discordId')}</label> 393 + <input 394 + id="discord-id" 395 + type="text" 396 + bind:value={flow.info.discordId} 397 + placeholder={$_('register.discordIdPlaceholder')} 398 + disabled={flow.state.submitting} 399 + required 400 + /> 401 + <p class="hint">{$_('register.discordIdHint')}</p> 402 + </div> 403 + {:else if flow.info.verificationChannel === 'telegram'} 404 + <div class="field"> 405 + <label for="telegram-username">{$_('register.telegramUsername')}</label> 406 + <input 407 + id="telegram-username" 408 + type="text" 409 + bind:value={flow.info.telegramUsername} 410 + placeholder={$_('register.telegramUsernamePlaceholder')} 411 + disabled={flow.state.submitting} 412 + required 413 + /> 414 + </div> 415 + {:else if flow.info.verificationChannel === 'signal'} 416 + <div class="field"> 417 + <label for="signal-number">{$_('register.signalNumber')}</label> 418 + <input 419 + id="signal-number" 420 + type="tel" 421 + bind:value={flow.info.signalNumber} 422 + placeholder={$_('register.signalNumberPlaceholder')} 423 + disabled={flow.state.submitting} 424 + required 425 + /> 426 + <p class="hint">{$_('register.signalNumberHint')}</p> 427 + </div> 428 + {/if} 429 + </div> 430 + </fieldset> 431 + 432 + <fieldset> 433 + <legend>{$_('registerPasskey.identityType')}</legend> 434 + <p class="section-hint">{$_('registerPasskey.identityTypeHint')}</p> 435 + <div class="radio-group"> 436 + <label class="radio-label"> 437 + <input type="radio" name="didType" value="plc" bind:group={flow.info.didType} disabled={flow.state.submitting} /> 438 + <span class="radio-content"> 439 + 
<strong>{$_('registerPasskey.didPlcRecommended')}</strong> 440 + <span class="radio-hint">{$_('registerPasskey.didPlcHint')}</span> 441 + </span> 442 + </label> 443 + <label class="radio-label" class:disabled={serverInfo?.selfHostedDidWebEnabled === false}> 444 + <input type="radio" name="didType" value="web" bind:group={flow.info.didType} disabled={flow.state.submitting || serverInfo?.selfHostedDidWebEnabled === false} /> 445 + <span class="radio-content"> 446 + <strong>{$_('registerPasskey.didWeb')}</strong> 447 + {#if serverInfo?.selfHostedDidWebEnabled === false} 448 + <span class="radio-hint disabled-hint">{$_('registerPasskey.didWebDisabledHint')}</span> 449 + {:else} 450 + <span class="radio-hint">{$_('registerPasskey.didWebHint')}</span> 451 + {/if} 452 + </span> 453 + </label> 454 + <label class="radio-label"> 455 + <input type="radio" name="didType" value="web-external" bind:group={flow.info.didType} disabled={flow.state.submitting} /> 456 + <span class="radio-content"> 457 + <strong>{$_('registerPasskey.didWebBYOD')}</strong> 458 + <span class="radio-hint">{$_('registerPasskey.didWebBYODHint')}</span> 459 + </span> 460 + </label> 461 + </div> 462 + {#if flow.info.didType === 'web'} 463 + <div class="warning-box"> 464 + <strong>{$_('registerPasskey.didWebWarningTitle')}</strong> 465 + <ul> 466 + <li><strong>{$_('registerPasskey.didWebWarning1')}</strong> {@html $_('registerPasskey.didWebWarning1Detail', { values: { did: `<code>did:web:yourhandle.${serverInfo?.availableUserDomains?.[0] || 'this-pds.com'}</code>` } })}</li> 467 + <li><strong>{$_('registerPasskey.didWebWarning2')}</strong> {$_('registerPasskey.didWebWarning2Detail')}</li> 468 + <li><strong>{$_('registerPasskey.didWebWarning3')}</strong> {$_('registerPasskey.didWebWarning3Detail')}</li> 469 + <li><strong>{$_('registerPasskey.didWebWarning4')}</strong> {$_('registerPasskey.didWebWarning4Detail')}</li> 470 + </ul> 471 + </div> 472 + {/if} 473 + {#if flow.info.didType === 'web-external'} 474 + 
<div class="field"> 475 + <label for="external-did">{$_('registerPasskey.externalDid')}</label> 476 + <input id="external-did" type="text" bind:value={flow.info.externalDid} placeholder={$_('registerPasskey.externalDidPlaceholder')} disabled={flow.state.submitting} required /> 477 + <p class="hint">{$_('registerPasskey.externalDidHint')} <code>https://{flow.info.externalDid ? flow.extractDomain(flow.info.externalDid) : 'yourdomain.com'}/.well-known/did.json</code></p> 478 + </div> 479 + {/if} 480 + </fieldset> 481 + 482 + {#if serverInfo?.inviteCodeRequired} 483 + <div class="field"> 484 + <label for="invite-code">{$_('register.inviteCode')} <span class="required">*</span></label> 485 + <input 486 + id="invite-code" 487 + type="text" 488 + bind:value={flow.info.inviteCode} 489 + placeholder={$_('register.inviteCodePlaceholder')} 490 + disabled={flow.state.submitting} 491 + required 492 + /> 493 + </div> 494 + {/if} 495 + 496 + <div class="actions"> 497 + <button type="submit" class="primary" disabled={flow.state.submitting}> 498 + {flow.state.submitting ? 
$_('common.loading') : $_('common.continue')} 499 + </button> 500 + </div> 501 + 502 + <div class="secondary-actions"> 503 + <button type="button" class="link-btn" onclick={goToLogin}> 504 + {$_('oauth.register.haveAccount')} 505 + </button> 506 + <button type="button" class="link-btn" onclick={handleCancel}> 507 + {$_('common.cancel')} 508 + </button> 509 + </div> 510 + </form> 511 + 512 + <div class="form-links"> 513 + <p class="link-text"> 514 + {$_('register.alreadyHaveAccount')} <a href="/app/login">{$_('register.signIn')}</a> 515 + </p> 516 + </div> 517 + </div> 518 + 519 + <aside class="info-panel"> 520 + <h3>{$_('registerPasskey.infoWhyPasskey')}</h3> 521 + <p>{$_('registerPasskey.infoWhyPasskeyDesc')}</p> 522 + 523 + <h3>{$_('registerPasskey.infoHowItWorks')}</h3> 524 + <p>{$_('registerPasskey.infoHowItWorksDesc')}</p> 525 + 526 + <h3>{$_('registerPasskey.infoAppAccess')}</h3> 527 + <p>{$_('registerPasskey.infoAppAccessDesc')}</p> 528 + </aside> 529 + </div> 530 + 531 + {:else if flow.state.step === 'key-choice'} 532 + <KeyChoiceStep {flow} /> 533 + 534 + {:else if flow.state.step === 'initial-did-doc'} 535 + <DidDocStep {flow} type="initial" onConfirm={() => flow?.createPasskeyAccount()} onBack={() => flow?.goBack()} /> 536 + 537 + {:else if flow.state.step === 'creating'} 538 + <div class="creating"> 539 + <div class="spinner"></div> 540 + <p>{$_('registerPasskey.creatingAccount')}</p> 541 + </div> 542 + 543 + {:else if flow.state.step === 'passkey'} 544 + <div class="passkey-step"> 545 + <h2>{$_('registerPasskey.setupPasskey')}</h2> 546 + <p>{$_('registerPasskey.passkeyDescription')}</p> 547 + 548 + <div class="field"> 549 + <label for="passkey-name">{$_('registerPasskey.passkeyName')}</label> 550 + <input 551 + id="passkey-name" 552 + type="text" 553 + bind:value={passkeyName} 554 + placeholder={$_('registerPasskey.passkeyNamePlaceholder')} 555 + disabled={flow.state.submitting} 556 + /> 557 + <p class="hint">{$_('registerPasskey.passkeyNameHint')}</p> 
558 + </div> 559 + 560 + <button 561 + type="button" 562 + class="primary" 563 + onclick={handlePasskeyRegistration} 564 + disabled={flow.state.submitting} 565 + > 566 + {flow.state.submitting ? $_('common.loading') : $_('registerPasskey.registerPasskey')} 567 + </button> 568 + </div> 569 + 570 + {:else if flow.state.step === 'app-password'} 571 + <AppPasswordStep {flow} /> 572 + 573 + {:else if flow.state.step === 'verify'} 574 + <VerificationStep {flow} /> 575 + 576 + {:else if flow.state.step === 'updated-did-doc'} 577 + <DidDocStep {flow} type="updated" onConfirm={() => flow?.activateAccount()} /> 578 + 579 + {:else if flow.state.step === 'activating'} 580 + <div class="creating"> 581 + <div class="spinner"></div> 582 + <p>{$_('registerPasskey.activatingAccount')}</p> 583 + </div> 584 + {/if} 585 + {/if} 586 + </div> 587 + 588 + <style> 589 + .oauth-register-container { 590 + max-width: var(--width-lg); 591 + margin: var(--space-9) auto; 592 + padding: var(--space-7); 593 + } 594 + 595 + .loading, .creating { 596 + display: flex; 597 + flex-direction: column; 598 + align-items: center; 599 + gap: var(--space-4); 600 + padding: var(--space-8); 601 + } 602 + 603 + .loading p, .creating p { 604 + color: var(--text-secondary); 605 + } 606 + 607 + .page-header { 608 + margin-bottom: var(--space-6); 609 + } 610 + 611 + .page-header h1 { 612 + margin: 0 0 var(--space-2) 0; 613 + } 614 + 615 + .subtitle { 616 + color: var(--text-secondary); 617 + margin: 0; 618 + } 619 + 620 + .form-section { 621 + min-width: 0; 622 + } 623 + 624 + .form-links { 625 + margin-top: var(--space-6); 626 + } 627 + 628 + .link-text { 629 + text-align: center; 630 + color: var(--text-secondary); 631 + } 632 + 633 + .link-text a { 634 + color: var(--accent); 635 + } 636 + 637 + form { 638 + display: flex; 639 + flex-direction: column; 640 + gap: var(--space-5); 641 + } 642 + 643 + .field { 644 + display: flex; 645 + flex-direction: column; 646 + gap: var(--space-1); 647 + } 648 + 649 + label { 
650 + font-size: var(--text-sm); 651 + font-weight: var(--font-medium); 652 + } 653 + 654 + input, select { 655 + padding: var(--space-3); 656 + border: 1px solid var(--border-color); 657 + border-radius: var(--radius-md); 658 + font-size: var(--text-base); 659 + background: var(--bg-input); 660 + color: var(--text-primary); 661 + } 662 + 663 + input:focus, select:focus { 664 + outline: none; 665 + border-color: var(--accent); 666 + } 667 + 668 + .hint { 669 + font-size: var(--text-xs); 670 + color: var(--text-muted); 671 + margin: var(--space-1) 0 0 0; 672 + } 673 + 674 + .error { 675 + padding: var(--space-3); 676 + background: var(--error-bg); 677 + border: 1px solid var(--error-border); 678 + border-radius: var(--radius-md); 679 + color: var(--error-text); 680 + margin-bottom: var(--space-4); 681 + } 682 + 683 + .actions { 684 + display: flex; 685 + gap: var(--space-4); 686 + margin-top: var(--space-2); 687 + } 688 + 689 + button.primary { 690 + flex: 1; 691 + padding: var(--space-3); 692 + background: var(--accent); 693 + color: var(--text-inverse); 694 + border: none; 695 + border-radius: var(--radius-md); 696 + font-size: var(--text-base); 697 + cursor: pointer; 698 + transition: background-color var(--transition-fast); 699 + } 700 + 701 + button.primary:hover:not(:disabled) { 702 + background: var(--accent-hover); 703 + } 704 + 705 + button.primary:disabled { 706 + opacity: 0.6; 707 + cursor: not-allowed; 708 + } 709 + 710 + .secondary-actions { 711 + display: flex; 712 + justify-content: center; 713 + gap: var(--space-4); 714 + margin-top: var(--space-4); 715 + } 716 + 717 + .link-btn { 718 + background: none; 719 + border: none; 720 + color: var(--accent); 721 + cursor: pointer; 722 + font-size: var(--text-sm); 723 + padding: var(--space-2); 724 + } 725 + 726 + .link-btn:hover { 727 + text-decoration: underline; 728 + } 729 + 730 + .contact-fields { 731 + display: flex; 732 + flex-direction: column; 733 + gap: var(--space-4); 734 + } 735 + 736 + .required 
{ 737 + color: var(--error-text); 738 + } 739 + 740 + .passkey-step { 741 + display: flex; 742 + flex-direction: column; 743 + gap: var(--space-4); 744 + } 745 + 746 + .passkey-step h2 { 747 + margin: 0; 748 + } 749 + 750 + .passkey-step p { 751 + color: var(--text-secondary); 752 + margin: 0; 753 + } 754 + 755 + fieldset { 756 + border: 1px solid var(--border-color); 757 + border-radius: var(--radius-md); 758 + padding: var(--space-4); 759 + } 760 + 761 + legend { 762 + padding: 0 var(--space-2); 763 + font-weight: var(--font-medium); 764 + } 765 + 766 + .spinner { 767 + width: 32px; 768 + height: 32px; 769 + border: 3px solid var(--border-color); 770 + border-top-color: var(--accent); 771 + border-radius: 50%; 772 + animation: spin 1s linear infinite; 773 + } 774 + 775 + @keyframes spin { 776 + to { 777 + transform: rotate(360deg); 778 + } 779 + } 780 + </style>
+680
frontend/src/routes/OAuthSsoRegister.svelte
···
··· 1 + <script lang="ts"> 2 + import { onMount } from 'svelte' 3 + import { _ } from '../lib/i18n' 4 + import { toast } from '../lib/toast.svelte' 5 + import SsoIcon from '../components/SsoIcon.svelte' 6 + 7 + interface PendingRegistration { 8 + request_uri: string 9 + provider: string 10 + provider_user_id: string 11 + provider_username: string | null 12 + provider_email: string | null 13 + provider_email_verified: boolean 14 + } 15 + 16 + interface CommsChannelConfig { 17 + email: boolean 18 + discord: boolean 19 + telegram: boolean 20 + signal: boolean 21 + } 22 + 23 + let pending = $state<PendingRegistration | null>(null) 24 + let loading = $state(true) 25 + let submitting = $state(false) 26 + let error = $state<string | null>(null) 27 + 28 + let handle = $state('') 29 + let email = $state('') 30 + let providerEmailOriginal = $state<string | null>(null) 31 + let inviteCode = $state('') 32 + let verificationChannel = $state('email') 33 + let discordId = $state('') 34 + let telegramUsername = $state('') 35 + let signalNumber = $state('') 36 + 37 + let handleAvailable = $state<boolean | null>(null) 38 + let checkingHandle = $state(false) 39 + let handleError = $state<string | null>(null) 40 + 41 + let didType = $state<'plc' | 'web' | 'web-external'>('plc') 42 + let externalDid = $state('') 43 + 44 + let serverInfo = $state<{ 45 + availableUserDomains: string[] 46 + inviteCodeRequired: boolean 47 + selfHostedDidWebEnabled: boolean 48 + } | null>(null) 49 + 50 + let commsChannels = $state<CommsChannelConfig>({ 51 + email: true, 52 + discord: false, 53 + telegram: false, 54 + signal: false, 55 + }) 56 + 57 + function getToken(): string | null { 58 + const params = new URLSearchParams(window.location.search) 59 + return params.get('token') 60 + } 61 + 62 + function getProviderDisplayName(provider: string): string { 63 + const names: Record<string, string> = { 64 + github: 'GitHub', 65 + discord: 'Discord', 66 + google: 'Google', 67 + gitlab: 'GitLab', 68 + oidc: 
'SSO', 69 + } 70 + return names[provider] || provider 71 + } 72 + 73 + function isChannelAvailable(ch: string): boolean { 74 + return commsChannels[ch as keyof CommsChannelConfig] ?? false 75 + } 76 + 77 + function extractDomain(did: string): string { 78 + return did.replace('did:web:', '').replace(/%3A/g, ':') 79 + } 80 + 81 + let fullHandle = $derived(() => { 82 + if (!handle.trim()) return '' 83 + const domain = serverInfo?.availableUserDomains?.[0] 84 + return domain ? `${handle.trim()}.${domain}` : handle.trim() 85 + }) 86 + 87 + onMount(() => { 88 + loadPendingRegistration() 89 + loadServerInfo() 90 + }) 91 + 92 + async function loadServerInfo() { 93 + try { 94 + const response = await fetch('/xrpc/com.atproto.server.describeServer') 95 + if (response.ok) { 96 + const data = await response.json() 97 + serverInfo = { 98 + availableUserDomains: data.availableUserDomains || [], 99 + inviteCodeRequired: data.inviteCodeRequired ?? false, 100 + selfHostedDidWebEnabled: data.selfHostedDidWebEnabled ?? false, 101 + } 102 + if (data.commsChannels) { 103 + commsChannels = { 104 + email: data.commsChannels.email ?? true, 105 + discord: data.commsChannels.discord ?? false, 106 + telegram: data.commsChannels.telegram ?? false, 107 + signal: data.commsChannels.signal ?? 
false, 108 + } 109 + } 110 + } 111 + } catch { 112 + serverInfo = null 113 + } 114 + } 115 + 116 + async function loadPendingRegistration() { 117 + const token = getToken() 118 + if (!token) { 119 + error = $_('sso_register.error_expired') 120 + loading = false 121 + return 122 + } 123 + 124 + try { 125 + const response = await fetch(`/oauth/sso/pending-registration?token=${encodeURIComponent(token)}`) 126 + if (!response.ok) { 127 + const data = await response.json() 128 + error = data.message || $_('sso_register.error_expired') 129 + loading = false 130 + return 131 + } 132 + 133 + pending = await response.json() 134 + if (pending?.provider_email) { 135 + email = pending.provider_email 136 + providerEmailOriginal = pending.provider_email 137 + } 138 + if (pending?.provider_username) { 139 + handle = pending.provider_username.toLowerCase().replace(/[^a-z0-9-]/g, '') 140 + } 141 + } catch { 142 + error = $_('sso_register.error_expired') 143 + } finally { 144 + loading = false 145 + } 146 + } 147 + 148 + let checkHandleTimeout: ReturnType<typeof setTimeout> | null = null 149 + 150 + $effect(() => { 151 + if (checkHandleTimeout) { 152 + clearTimeout(checkHandleTimeout) 153 + } 154 + handleAvailable = null 155 + handleError = null 156 + if (handle.length >= 3) { 157 + checkHandleTimeout = setTimeout(() => checkHandleAvailability(), 400) 158 + } 159 + }) 160 + 161 + async function checkHandleAvailability() { 162 + if (!handle || handle.length < 3) return 163 + 164 + checkingHandle = true 165 + handleError = null 166 + 167 + try { 168 + const response = await fetch(`/oauth/sso/check-handle-available?handle=${encodeURIComponent(handle)}`) 169 + const data = await response.json() 170 + handleAvailable = data.available 171 + if (!data.available && data.reason) { 172 + handleError = data.reason 173 + } 174 + } catch { 175 + handleAvailable = null 176 + handleError = $_('common.error') 177 + } finally { 178 + checkingHandle = false 179 + } 180 + } 181 + 182 + let 
usingVerifiedProviderEmail = $derived( 183 + pending?.provider_email_verified && 184 + verificationChannel === 'email' && 185 + email.trim().toLowerCase() === providerEmailOriginal?.toLowerCase() 186 + ) 187 + 188 + function isChannelValid(): boolean { 189 + switch (verificationChannel) { 190 + case 'email': 191 + return !!email.trim() 192 + case 'discord': 193 + return !!discordId.trim() 194 + case 'telegram': 195 + return !!telegramUsername.trim() 196 + case 'signal': 197 + return !!signalNumber.trim() 198 + default: 199 + return false 200 + } 201 + } 202 + 203 + async function handleSubmit(e: Event) { 204 + e.preventDefault() 205 + const token = getToken() 206 + if (!token || !pending) return 207 + 208 + if (!handle || handle.length < 3) { 209 + handleError = $_('sso_register.error_handle_required') 210 + return 211 + } 212 + 213 + if (handleAvailable === false) { 214 + handleError = $_('sso_register.handle_taken') 215 + return 216 + } 217 + 218 + if (!isChannelValid()) { 219 + toast.error($_(`register.validation.${verificationChannel === 'email' ? 'emailRequired' : verificationChannel + 'Required'}`)) 220 + return 221 + } 222 + 223 + submitting = true 224 + 225 + try { 226 + const response = await fetch('/oauth/sso/complete-registration', { 227 + method: 'POST', 228 + headers: { 229 + 'Content-Type': 'application/json', 230 + 'Accept': 'application/json', 231 + }, 232 + body: JSON.stringify({ 233 + token, 234 + handle, 235 + email: email || null, 236 + invite_code: inviteCode || null, 237 + verification_channel: verificationChannel, 238 + discord_id: discordId || null, 239 + telegram_username: telegramUsername || null, 240 + signal_number: signalNumber || null, 241 + did_type: didType, 242 + did: didType === 'web-external' ? 
externalDid.trim() : null, 243 + }), 244 + }) 245 + 246 + const data = await response.json() 247 + 248 + if (!response.ok) { 249 + toast.error(data.message || data.error_description || data.error || $_('common.error')) 250 + submitting = false 251 + return 252 + } 253 + 254 + if (data.accessJwt && data.refreshJwt) { 255 + localStorage.setItem('accessJwt', data.accessJwt) 256 + localStorage.setItem('refreshJwt', data.refreshJwt) 257 + } 258 + 259 + if (data.redirectUrl) { 260 + if (data.redirectUrl.startsWith('/app/verify')) { 261 + localStorage.setItem('tranquil_pds_pending_verification', JSON.stringify({ 262 + did: data.did, 263 + handle: data.handle, 264 + channel: verificationChannel, 265 + })) 266 + const url = new URL(data.redirectUrl, window.location.origin) 267 + url.searchParams.set('handle', data.handle) 268 + url.searchParams.set('channel', verificationChannel) 269 + window.location.href = url.pathname + url.search 270 + return 271 + } 272 + window.location.href = data.redirectUrl 273 + return 274 + } 275 + 276 + toast.error($_('common.error')) 277 + submitting = false 278 + } catch { 279 + toast.error($_('common.error')) 280 + submitting = false 281 + } 282 + } 283 + </script> 284 + 285 + <div class="sso-register-container"> 286 + {#if loading} 287 + <div class="loading"> 288 + <div class="spinner"></div> 289 + <p>{$_('common.loading')}</p> 290 + </div> 291 + {:else if error && !pending} 292 + <div class="error-container"> 293 + <div class="error-icon">!</div> 294 + <h2>{$_('common.error')}</h2> 295 + <p>{error}</p> 296 + <a href="/app/register-sso" class="back-link">{$_('sso_register.tryAgain')}</a> 297 + </div> 298 + {:else if pending} 299 + <header class="page-header"> 300 + <h1>{$_('sso_register.title')}</h1> 301 + <p class="subtitle">{$_('sso_register.subtitle', { values: { provider: getProviderDisplayName(pending.provider) } })}</p> 302 + </header> 303 + 304 + <div class="provider-info"> 305 + <div class="provider-badge"> 306 + <SsoIcon 
provider={pending.provider} size={32} /> 307 + <div class="provider-details"> 308 + <span class="provider-name">{getProviderDisplayName(pending.provider)}</span> 309 + {#if pending.provider_username} 310 + <span class="provider-username">@{pending.provider_username}</span> 311 + {/if} 312 + </div> 313 + </div> 314 + </div> 315 + 316 + <div class="split-layout sidebar-right"> 317 + <div class="form-section"> 318 + <form onsubmit={handleSubmit}> 319 + <div class="field"> 320 + <label for="handle">{$_('sso_register.handle_label')}</label> 321 + <input 322 + id="handle" 323 + type="text" 324 + bind:value={handle} 325 + placeholder={$_('register.handlePlaceholder')} 326 + disabled={submitting} 327 + required 328 + autocomplete="off" 329 + /> 330 + {#if checkingHandle} 331 + <p class="hint">{$_('common.checking')}</p> 332 + {:else if handleError} 333 + <p class="hint error">{handleError}</p> 334 + {:else if handleAvailable === false} 335 + <p class="hint error">{$_('sso_register.handle_taken')}</p> 336 + {:else if handleAvailable === true} 337 + <p class="hint success">{$_('sso_register.handle_available')}</p> 338 + {:else if fullHandle()} 339 + <p class="hint">{$_('register.handleHint', { values: { handle: fullHandle() } })}</p> 340 + {/if} 341 + </div> 342 + 343 + <fieldset> 344 + <legend>{$_('register.contactMethod')}</legend> 345 + <div class="contact-fields"> 346 + <div class="field"> 347 + <label for="verification-channel">{$_('register.verificationMethod')}</label> 348 + <select id="verification-channel" bind:value={verificationChannel} disabled={submitting}> 349 + <option value="email">{$_('register.email')}</option> 350 + <option value="discord" disabled={!isChannelAvailable('discord')}> 351 + {$_('register.discord')}{isChannelAvailable('discord') ? '' : ` (${$_('register.notConfigured')})`} 352 + </option> 353 + <option value="telegram" disabled={!isChannelAvailable('telegram')}> 354 + {$_('register.telegram')}{isChannelAvailable('telegram') ? 
'' : ` (${$_('register.notConfigured')})`} 355 + </option> 356 + <option value="signal" disabled={!isChannelAvailable('signal')}> 357 + {$_('register.signal')}{isChannelAvailable('signal') ? '' : ` (${$_('register.notConfigured')})`} 358 + </option> 359 + </select> 360 + </div> 361 + 362 + {#if verificationChannel === 'email'} 363 + <div class="field"> 364 + <label for="email">{$_('register.emailAddress')}</label> 365 + <input 366 + id="email" 367 + type="email" 368 + bind:value={email} 369 + placeholder={$_('register.emailPlaceholder')} 370 + disabled={submitting} 371 + required 372 + /> 373 + {#if pending?.provider_email && pending?.provider_email_verified} 374 + {#if usingVerifiedProviderEmail} 375 + <p class="hint success">{$_('sso_register.emailVerifiedByProvider', { values: { provider: getProviderDisplayName(pending.provider) } })}</p> 376 + {:else} 377 + <p class="hint">{$_('sso_register.emailChangedNeedsVerification')}</p> 378 + {/if} 379 + {/if} 380 + </div> 381 + {:else if verificationChannel === 'discord'} 382 + <div class="field"> 383 + <label for="discord-id">{$_('register.discordId')}</label> 384 + <input 385 + id="discord-id" 386 + type="text" 387 + bind:value={discordId} 388 + placeholder={$_('register.discordIdPlaceholder')} 389 + disabled={submitting} 390 + required 391 + /> 392 + <p class="hint">{$_('register.discordIdHint')}</p> 393 + </div> 394 + {:else if verificationChannel === 'telegram'} 395 + <div class="field"> 396 + <label for="telegram-username">{$_('register.telegramUsername')}</label> 397 + <input 398 + id="telegram-username" 399 + type="text" 400 + bind:value={telegramUsername} 401 + placeholder={$_('register.telegramUsernamePlaceholder')} 402 + disabled={submitting} 403 + required 404 + /> 405 + </div> 406 + {:else if verificationChannel === 'signal'} 407 + <div class="field"> 408 + <label for="signal-number">{$_('register.signalNumber')}</label> 409 + <input 410 + id="signal-number" 411 + type="tel" 412 + bind:value={signalNumber} 
413 + placeholder={$_('register.signalNumberPlaceholder')} 414 + disabled={submitting} 415 + required 416 + /> 417 + <p class="hint">{$_('register.signalNumberHint')}</p> 418 + </div> 419 + {/if} 420 + </div> 421 + </fieldset> 422 + 423 + <fieldset> 424 + <legend>{$_('registerPasskey.identityType')}</legend> 425 + <p class="section-hint">{$_('registerPasskey.identityTypeHint')}</p> 426 + <div class="radio-group"> 427 + <label class="radio-label"> 428 + <input type="radio" name="didType" value="plc" bind:group={didType} disabled={submitting} /> 429 + <span class="radio-content"> 430 + <strong>{$_('registerPasskey.didPlcRecommended')}</strong> 431 + <span class="radio-hint">{$_('registerPasskey.didPlcHint')}</span> 432 + </span> 433 + </label> 434 + <label class="radio-label" class:disabled={serverInfo?.selfHostedDidWebEnabled === false}> 435 + <input type="radio" name="didType" value="web" bind:group={didType} disabled={submitting || serverInfo?.selfHostedDidWebEnabled === false} /> 436 + <span class="radio-content"> 437 + <strong>{$_('registerPasskey.didWeb')}</strong> 438 + {#if serverInfo?.selfHostedDidWebEnabled === false} 439 + <span class="radio-hint disabled-hint">{$_('registerPasskey.didWebDisabledHint')}</span> 440 + {:else} 441 + <span class="radio-hint">{$_('registerPasskey.didWebHint')}</span> 442 + {/if} 443 + </span> 444 + </label> 445 + <label class="radio-label"> 446 + <input type="radio" name="didType" value="web-external" bind:group={didType} disabled={submitting} /> 447 + <span class="radio-content"> 448 + <strong>{$_('registerPasskey.didWebBYOD')}</strong> 449 + <span class="radio-hint">{$_('registerPasskey.didWebBYODHint')}</span> 450 + </span> 451 + </label> 452 + </div> 453 + {#if didType === 'web'} 454 + <div class="warning-box"> 455 + <strong>{$_('registerPasskey.didWebWarningTitle')}</strong> 456 + <ul> 457 + <li><strong>{$_('registerPasskey.didWebWarning1')}</strong> {@html $_('registerPasskey.didWebWarning1Detail', { values: { did: 
`<code>did:web:yourhandle.${serverInfo?.availableUserDomains?.[0] || 'this-pds.com'}</code>` } })}</li> 458 + <li><strong>{$_('registerPasskey.didWebWarning2')}</strong> {$_('registerPasskey.didWebWarning2Detail')}</li> 459 + <li><strong>{$_('registerPasskey.didWebWarning3')}</strong> {$_('registerPasskey.didWebWarning3Detail')}</li> 460 + <li><strong>{$_('registerPasskey.didWebWarning4')}</strong> {$_('registerPasskey.didWebWarning4Detail')}</li> 461 + </ul> 462 + </div> 463 + {/if} 464 + {#if didType === 'web-external'} 465 + <div class="field"> 466 + <label for="external-did">{$_('registerPasskey.externalDid')}</label> 467 + <input id="external-did" type="text" bind:value={externalDid} placeholder={$_('registerPasskey.externalDidPlaceholder')} disabled={submitting} required /> 468 + <p class="hint">{$_('registerPasskey.externalDidHint')} <code>https://{externalDid ? extractDomain(externalDid) : 'yourdomain.com'}/.well-known/did.json</code></p> 469 + </div> 470 + {/if} 471 + </fieldset> 472 + 473 + {#if serverInfo?.inviteCodeRequired} 474 + <div class="field"> 475 + <label for="invite-code">{$_('register.inviteCode')} <span class="required">{$_('register.inviteCodeRequired')}</span></label> 476 + <input 477 + id="invite-code" 478 + type="text" 479 + bind:value={inviteCode} 480 + placeholder={$_('register.inviteCodePlaceholder')} 481 + disabled={submitting} 482 + required 483 + /> 484 + </div> 485 + {/if} 486 + 487 + <button type="submit" disabled={submitting || !handle || handle.length < 3 || handleAvailable === false || checkingHandle || !isChannelValid()}> 488 + {submitting ? 
$_('common.creating') : $_('sso_register.submit')} 489 + </button> 490 + </form> 491 + </div> 492 + 493 + <aside class="info-panel"> 494 + <h3>{$_('sso_register.infoAfterTitle')}</h3> 495 + <ul class="info-list"> 496 + <li>{$_('sso_register.infoAddPassword')}</li> 497 + <li>{$_('sso_register.infoAddPasskey')}</li> 498 + <li>{$_('sso_register.infoLinkProviders')}</li> 499 + <li>{$_('sso_register.infoChangeHandle')}</li> 500 + </ul> 501 + </aside> 502 + </div> 503 + {/if} 504 + </div> 505 + 506 + <style> 507 + .sso-register-container { 508 + max-width: var(--width-lg); 509 + margin: var(--space-9) auto; 510 + padding: var(--space-7); 511 + } 512 + 513 + .loading { 514 + display: flex; 515 + flex-direction: column; 516 + align-items: center; 517 + gap: var(--space-4); 518 + padding: var(--space-8); 519 + } 520 + 521 + .loading p { 522 + color: var(--text-secondary); 523 + } 524 + 525 + .error-container { 526 + text-align: center; 527 + padding: var(--space-8); 528 + } 529 + 530 + .error-icon { 531 + width: 48px; 532 + height: 48px; 533 + border-radius: 50%; 534 + background: var(--error-text); 535 + color: var(--text-inverse); 536 + display: flex; 537 + align-items: center; 538 + justify-content: center; 539 + font-size: 24px; 540 + font-weight: bold; 541 + margin: 0 auto var(--space-4); 542 + } 543 + 544 + .error-container h2 { 545 + margin-bottom: var(--space-2); 546 + } 547 + 548 + .error-container p { 549 + color: var(--text-secondary); 550 + margin-bottom: var(--space-6); 551 + } 552 + 553 + .back-link { 554 + color: var(--accent); 555 + text-decoration: none; 556 + } 557 + 558 + .back-link:hover { 559 + text-decoration: underline; 560 + } 561 + 562 + .page-header { 563 + margin-bottom: var(--space-6); 564 + } 565 + 566 + .page-header h1 { 567 + margin: 0 0 var(--space-3) 0; 568 + } 569 + 570 + .subtitle { 571 + color: var(--text-secondary); 572 + margin: 0; 573 + } 574 + 575 + .form-section { 576 + min-width: 0; 577 + } 578 + 579 + form { 580 + display: flex; 
581 + flex-direction: column; 582 + gap: var(--space-5); 583 + } 584 + 585 + .contact-fields { 586 + display: flex; 587 + flex-direction: column; 588 + gap: var(--space-4); 589 + } 590 + 591 + .contact-fields .field { 592 + margin-bottom: 0; 593 + } 594 + 595 + .hint.success { 596 + color: var(--success-text); 597 + } 598 + 599 + .hint.error { 600 + color: var(--error-text); 601 + } 602 + 603 + .info-panel { 604 + background: var(--bg-secondary); 605 + border-radius: var(--radius-xl); 606 + padding: var(--space-6); 607 + } 608 + 609 + .info-panel h3 { 610 + margin: 0 0 var(--space-4) 0; 611 + font-size: var(--text-base); 612 + font-weight: var(--font-semibold); 613 + } 614 + 615 + .info-list { 616 + margin: 0; 617 + padding-left: var(--space-5); 618 + } 619 + 620 + .info-list li { 621 + margin-bottom: var(--space-2); 622 + font-size: var(--text-sm); 623 + color: var(--text-secondary); 624 + line-height: var(--leading-relaxed); 625 + } 626 + 627 + .info-list li:last-child { 628 + margin-bottom: 0; 629 + } 630 + 631 + .provider-info { 632 + margin-bottom: var(--space-6); 633 + } 634 + 635 + .provider-badge { 636 + display: flex; 637 + align-items: center; 638 + gap: var(--space-3); 639 + padding: var(--space-4); 640 + background: var(--bg-secondary); 641 + border-radius: var(--radius-md); 642 + } 643 + 644 + .provider-details { 645 + display: flex; 646 + flex-direction: column; 647 + } 648 + 649 + .provider-name { 650 + font-weight: var(--font-semibold); 651 + } 652 + 653 + .provider-username { 654 + font-size: var(--text-sm); 655 + color: var(--text-secondary); 656 + } 657 + 658 + .required { 659 + color: var(--error-text); 660 + } 661 + 662 + button[type="submit"] { 663 + margin-top: var(--space-3); 664 + } 665 + 666 + .spinner { 667 + width: 32px; 668 + height: 32px; 669 + border: 3px solid var(--border-color); 670 + border-top-color: var(--accent); 671 + border-radius: 50%; 672 + animation: spin 1s linear infinite; 673 + } 674 + 675 + @keyframes spin { 676 + to { 
677 + transform: rotate(360deg); 678 + } 679 + } 680 + </style>
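The handle prefill in OAuthSsoRegister above lowercases the SSO provider username, strips everything outside `[a-z0-9-]`, and joins the result with the server's first available user domain. Pulled out of the component, that logic is roughly this (function names here are illustrative, not from the codebase):

```typescript
// Mirror of the prefill in loadPendingRegistration(): lowercase the
// provider username and drop characters not allowed in a handle segment.
function normalizeHandle(providerUsername: string): string {
  return providerUsername.toLowerCase().replace(/[^a-z0-9-]/g, '')
}

// Mirror of the fullHandle derivation: append the first available user
// domain from describeServer, falling back to the bare handle.
function fullHandle(handle: string, availableUserDomains: string[]): string {
  const h = handle.trim()
  if (!h) return ''
  const domain = availableUserDomains[0]
  return domain ? `${h}.${domain}` : h
}

console.log(normalizeHandle('Lewis_Moe'))                  // → "lewismoe"
console.log(fullHandle('lewis', ['tranquil.farm']))        // → "lewis.tranquil.farm"
```

Keeping these as pure functions would also make the availability-check input deterministic before the debounced fetch to `/oauth/sso/check-handle-available` fires.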
+51
frontend/src/routes/RegisterPasskey.svelte
···
··· 1 + <script lang="ts"> 2 + import { startOAuthRegister } from '../lib/oauth' 3 + import { _ } from '../lib/i18n' 4 + 5 + let error = $state<string | null>(null) 6 + let initiated = false 7 + 8 + $effect(() => { 9 + if (!initiated) { 10 + initiated = true 11 + startOAuthRegister().catch((err) => { 12 + error = err instanceof Error ? err.message : 'Failed to start registration' 13 + }) 14 + } 15 + }) 16 + </script> 17 + 18 + <div class="register-redirect"> 19 + {#if error} 20 + <div class="message error">{error}</div> 21 + <a href="/app/login">{$_('register.signIn')}</a> 22 + {:else} 23 + <div class="loading-content"> 24 + <div class="spinner"></div> 25 + <p>{$_('common.loading')}</p> 26 + </div> 27 + {/if} 28 + </div> 29 + 30 + <style> 31 + .register-redirect { 32 + min-height: 100vh; 33 + display: flex; 34 + flex-direction: column; 35 + align-items: center; 36 + justify-content: center; 37 + gap: var(--space-4); 38 + } 39 + 40 + .loading-content { 41 + display: flex; 42 + flex-direction: column; 43 + align-items: center; 44 + gap: var(--space-4); 45 + } 46 + 47 + .loading-content p { 48 + margin: 0; 49 + color: var(--text-secondary); 50 + } 51 + </style>
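RegisterPasskey guards `startOAuthRegister()` with a plain `initiated` boolean so the redirect fires exactly once even if `$effect` re-runs. The same guard, framework-free (a sketch; `makeRunOnce` is my name for it, not an API in this codebase):

```typescript
// Wrap a function so repeated invocations only execute it the first time,
// mirroring the `initiated` flag that protects startOAuthRegister() above.
function makeRunOnce(fn: () => void): () => void {
  let initiated = false
  return () => {
    if (initiated) return
    initiated = true
    fn()
  }
}

let starts = 0
const start = makeRunOnce(() => { starts++ })
start(); start(); start()
console.log(starts) // → 1: only the first call ran
```

Note the flag is deliberately not `$state` in the component: flipping it must not retrigger the effect that reads it.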
+19 -85
scripts/install-debian.sh
··· 44 sudo -u postgres psql -c "DROP DATABASE IF EXISTS pds;" 2>/dev/null || true 45 sudo -u postgres psql -c "DROP USER IF EXISTS tranquil_pds;" 2>/dev/null || true 46 47 - log_info "Removing minio buckets..." 48 - if command -v mc &>/dev/null; then 49 - mc rb local/pds-blobs --force 2>/dev/null || true 50 - mc rb local/pds-backups --force 2>/dev/null || true 51 - mc alias remove local 2>/dev/null || true 52 - fi 53 - systemctl stop minio 2>/dev/null || true 54 - rm -rf /var/lib/minio/data/.minio.sys 2>/dev/null || true 55 - rm -f /etc/default/minio 2>/dev/null || true 56 57 log_info "Removing nginx config..." 58 rm -f /etc/nginx/sites-enabled/tranquil-pds ··· 79 echo " - PostgreSQL database 'pds' and all data" 80 echo " - All Tranquil PDS configuration and credentials" 81 echo " - All source code in /opt/tranquil-pds" 82 - echo " - MinIO buckets 'pds-blobs' and 'pds-backups' and all data" 83 echo "" 84 read -p "Type 'NUKE' to confirm: " CONFIRM_NUKE 85 if [[ "$CONFIRM_NUKE" == "NUKE" ]]; then ··· 153 DPOP_SECRET=$(openssl rand -base64 48) 154 MASTER_KEY=$(openssl rand -base64 48) 155 DB_PASSWORD=$(openssl rand -base64 24 | tr -dc 'a-zA-Z0-9' | head -c 32) 156 - MINIO_PASSWORD=$(openssl rand -base64 24 | tr -dc 'a-zA-Z0-9' | head -c 32) 157 158 mkdir -p /etc/tranquil-pds 159 cat > "$CREDENTIALS_FILE" << EOF ··· 161 DPOP_SECRET="$DPOP_SECRET" 162 MASTER_KEY="$MASTER_KEY" 163 DB_PASSWORD="$DB_PASSWORD" 164 - MINIO_PASSWORD="$MINIO_PASSWORD" 165 EOF 166 chmod 600 "$CREDENTIALS_FILE" 167 log_success "Secrets generated" ··· 213 systemctl enable valkey-server 2>/dev/null || true 214 systemctl start valkey-server 2>/dev/null || true 215 216 - log_info "Installing minio..." 217 - if [[ ! 
-f /usr/local/bin/minio ]]; then 218 - ARCH=$(dpkg --print-architecture) 219 - case "$ARCH" in 220 - amd64) curl -fsSL -o /tmp/minio https://dl.min.io/server/minio/release/linux-amd64/minio ;; 221 - arm64) curl -fsSL -o /tmp/minio https://dl.min.io/server/minio/release/linux-arm64/minio ;; 222 - *) log_error "Unsupported architecture: $ARCH"; exit 1 ;; 223 - esac 224 - chmod +x /tmp/minio 225 - mv /tmp/minio /usr/local/bin/ 226 - fi 227 - 228 - mkdir -p /var/lib/minio/data 229 - id -u minio-user &>/dev/null || useradd -r -s /sbin/nologin minio-user 230 - chown -R minio-user:minio-user /var/lib/minio 231 - 232 - cat > /etc/default/minio << EOF 233 - MINIO_ROOT_USER=minioadmin 234 - MINIO_ROOT_PASSWORD=${MINIO_PASSWORD} 235 - MINIO_VOLUMES="/var/lib/minio/data" 236 - MINIO_OPTS="--console-address :9001" 237 - EOF 238 - chmod 600 /etc/default/minio 239 - 240 - cat > /etc/systemd/system/minio.service << 'EOF' 241 - [Unit] 242 - Description=MinIO Object Storage 243 - After=network.target 244 - 245 - [Service] 246 - User=minio-user 247 - Group=minio-user 248 - EnvironmentFile=/etc/default/minio 249 - ExecStart=/usr/local/bin/minio server $MINIO_VOLUMES $MINIO_OPTS 250 - Restart=always 251 - LimitNOFILE=65536 252 - 253 - [Install] 254 - WantedBy=multi-user.target 255 - EOF 256 - 257 - systemctl daemon-reload 258 - systemctl enable minio 259 - systemctl start minio 260 - log_success "minio installed" 261 - 262 - log_info "Waiting for minio..." 263 - sleep 5 264 - 265 - if [[ ! 
-f /usr/local/bin/mc ]]; then 266 - ARCH=$(dpkg --print-architecture) 267 - case "$ARCH" in 268 - amd64) curl -fsSL -o /tmp/mc https://dl.min.io/client/mc/release/linux-amd64/mc ;; 269 - arm64) curl -fsSL -o /tmp/mc https://dl.min.io/client/mc/release/linux-arm64/mc ;; 270 - esac 271 - chmod +x /tmp/mc 272 - mv /tmp/mc /usr/local/bin/ 273 - fi 274 - 275 - mc alias remove local 2>/dev/null || true 276 - mc alias set local http://localhost:9000 minioadmin "${MINIO_PASSWORD}" --api S3v4 277 - mc mb local/pds-blobs --ignore-existing 278 - mc mb local/pds-backups --ignore-existing 279 - log_success "minio buckets created" 280 281 log_info "Installing rust..." 282 if [[ -f "$HOME/.cargo/env" ]]; then ··· 381 DATABASE_URL=postgres://tranquil_pds:${DB_PASSWORD}@localhost:5432/pds 382 DATABASE_MAX_CONNECTIONS=100 383 DATABASE_MIN_CONNECTIONS=10 384 - S3_ENDPOINT=http://localhost:9000 385 - AWS_REGION=us-east-1 386 - S3_BUCKET=pds-blobs 387 - BACKUP_S3_BUCKET=pds-backups 388 - AWS_ACCESS_KEY_ID=minioadmin 389 - AWS_SECRET_ACCESS_KEY=${MINIO_PASSWORD} 390 VALKEY_URL=redis://localhost:6379 391 JWT_SECRET=${JWT_SECRET} 392 DPOP_SECRET=${DPOP_SECRET} ··· 406 mkdir -p /var/lib/tranquil-pds 407 cp -r /opt/tranquil-pds/frontend/dist /var/lib/tranquil-pds/frontend 408 chown -R tranquil-pds:tranquil-pds /var/lib/tranquil-pds 409 410 cat > /etc/systemd/system/tranquil-pds.service << 'EOF' 411 [Unit] 412 Description=Tranquil PDS - AT Protocol PDS 413 - After=network.target postgresql.service minio.service 414 415 [Service] 416 Type=simple ··· 420 ExecStart=/usr/local/bin/tranquil-pds 421 Restart=always 422 RestartSec=5 423 424 [Install] 425 WantedBy=multi-user.target ··· 577 echo "PDS: https://${PDS_DOMAIN}" 578 echo "" 579 echo "Credentials (also in /etc/tranquil-pds/.credentials):" 580 - echo " DB password: ${DB_PASSWORD}" 581 - echo " MinIO password: ${MINIO_PASSWORD}" 582 echo "" 583 echo "Commands:" 584 echo " journalctl -u tranquil-pds -f # logs"
··· 44 sudo -u postgres psql -c "DROP DATABASE IF EXISTS pds;" 2>/dev/null || true 45 sudo -u postgres psql -c "DROP USER IF EXISTS tranquil_pds;" 2>/dev/null || true 46 47 + log_info "Removing blob storage..." 48 + rm -rf /var/lib/tranquil 2>/dev/null || true 49 50 log_info "Removing nginx config..." 51 rm -f /etc/nginx/sites-enabled/tranquil-pds ··· 72 echo " - PostgreSQL database 'pds' and all data" 73 echo " - All Tranquil PDS configuration and credentials" 74 echo " - All source code in /opt/tranquil-pds" 75 + echo " - All blobs and backups in /var/lib/tranquil/" 76 echo "" 77 read -p "Type 'NUKE' to confirm: " CONFIRM_NUKE 78 if [[ "$CONFIRM_NUKE" == "NUKE" ]]; then ··· 146 DPOP_SECRET=$(openssl rand -base64 48) 147 MASTER_KEY=$(openssl rand -base64 48) 148 DB_PASSWORD=$(openssl rand -base64 24 | tr -dc 'a-zA-Z0-9' | head -c 32) 149 150 mkdir -p /etc/tranquil-pds 151 cat > "$CREDENTIALS_FILE" << EOF ··· 153 DPOP_SECRET="$DPOP_SECRET" 154 MASTER_KEY="$MASTER_KEY" 155 DB_PASSWORD="$DB_PASSWORD" 156 EOF 157 chmod 600 "$CREDENTIALS_FILE" 158 log_success "Secrets generated" ··· 204 systemctl enable valkey-server 2>/dev/null || true 205 systemctl start valkey-server 2>/dev/null || true 206 207 + log_info "Creating blob storage directories..." 208 + mkdir -p /var/lib/tranquil/blobs /var/lib/tranquil/backups 209 + log_success "Blob storage directories created" 210 211 log_info "Installing rust..." 
212 if [[ -f "$HOME/.cargo/env" ]]; then ··· 311 DATABASE_URL=postgres://tranquil_pds:${DB_PASSWORD}@localhost:5432/pds 312 DATABASE_MAX_CONNECTIONS=100 313 DATABASE_MIN_CONNECTIONS=10 314 + BLOB_STORAGE_PATH=/var/lib/tranquil/blobs 315 + BACKUP_STORAGE_PATH=/var/lib/tranquil/backups 316 VALKEY_URL=redis://localhost:6379 317 JWT_SECRET=${JWT_SECRET} 318 DPOP_SECRET=${DPOP_SECRET} ··· 332 mkdir -p /var/lib/tranquil-pds 333 cp -r /opt/tranquil-pds/frontend/dist /var/lib/tranquil-pds/frontend 334 chown -R tranquil-pds:tranquil-pds /var/lib/tranquil-pds 335 + chown -R tranquil-pds:tranquil-pds /var/lib/tranquil 336 337 cat > /etc/systemd/system/tranquil-pds.service << 'EOF' 338 [Unit] 339 Description=Tranquil PDS - AT Protocol PDS 340 + After=network.target postgresql.service 341 342 [Service] 343 Type=simple ··· 347 ExecStart=/usr/local/bin/tranquil-pds 348 Restart=always 349 RestartSec=5 350 + ProtectSystem=strict 351 + ProtectHome=true 352 + PrivateTmp=true 353 + ReadWritePaths=/var/lib/tranquil 354 355 [Install] 356 WantedBy=multi-user.target ··· 508 echo "PDS: https://${PDS_DOMAIN}" 509 echo "" 510 echo "Credentials (also in /etc/tranquil-pds/.credentials):" 511 + echo " DB password: ${DB_PASSWORD}" 512 + echo "" 513 + echo "Data locations:" 514 + echo " Blobs: /var/lib/tranquil/blobs" 515 + echo " Backups: /var/lib/tranquil/backups" 516 echo "" 517 echo "Commands:" 518 echo " journalctl -u tranquil-pds -f # logs"
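The diffs above replace the hard-coded S3 settings with a backend switch driven by the new env keys. As a rough illustration of that selection logic in Rust (a sketch based only on the keys and defaults shown in `.env.example`; the `BlobBackend` enum and `select_backend` helper are hypothetical names, not the merged code):

```rust
/// Hypothetical model of the two backends described in .env.example.
#[derive(Debug, PartialEq)]
enum BlobBackend {
    Filesystem { path: String },
    S3 { bucket: String },
}

/// `backend` mirrors BLOB_STORAGE_BACKEND, `path` mirrors
/// BLOB_STORAGE_PATH, and `bucket` mirrors S3_BUCKET.
fn select_backend(
    backend: Option<&str>,
    path: Option<&str>,
    bucket: Option<&str>,
) -> BlobBackend {
    match backend {
        Some("s3") => BlobBackend::S3 {
            bucket: bucket.unwrap_or("pds-blobs").to_string(),
        },
        // "filesystem" is the documented default when the key is unset.
        _ => BlobBackend::Filesystem {
            path: path.unwrap_or("/var/lib/tranquil/blobs").to_string(),
        },
    }
}

fn main() {
    // With no variables set, the filesystem default wins.
    let b = select_backend(None, None, None);
    assert_eq!(
        b,
        BlobBackend::Filesystem { path: "/var/lib/tranquil/blobs".into() }
    );
    println!("{:?}", b);
}
```

Taking the values as parameters rather than reading the environment directly keeps the sketch testable; a real implementation would pull them from `std::env` at startup.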

History

4 rounds 2 comments
3 commits
feat: filesystem blob storage
fix: delegated acc passkey auth
Sharded filesystem subdirs
0 comments
pull request successfully merged
3 commits
feat: filesystem blob storage
fix: delegated acc passkey auth
Sharded filesystem subdirs
2 comments

almost perfect. im gonna be even more picky and pedantic heh :3 i'd like the sharding to be even more like what other content-addressed stores (CAS) on filesystems do, i.e. i'd want bafkreihdwdcefgh4dqkjv67uzcmw7ojee6xedzdetojuzjevtenxquvyku to be stored at /bafkreihd/wdcefgh4dqkjv67uzcmw7ojee6xedzdetojuzjevtenxquvyku

including the self-describing part is perhaps a bit redundant right now, but it will become quite useful if we or the spec ever need or want to expand the allowed types of CIDs for blobs.
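The layout suggested in the comment above, with the first 9 characters of the CID (the self-describing `bafkreihd` portion of the example) as the shard directory, could be sketched like this; `shard_path` and the fixed prefix length are illustrative assumptions, not the merged implementation:

```rust
use std::path::PathBuf;

/// Hypothetical sketch: shard a blob CID into <root>/<prefix>/<rest>,
/// where the prefix is the first 9 characters of the CID (the
/// self-describing portion of a CIDv1 like "bafkreihd...").
fn shard_path(root: &str, cid: &str) -> Option<PathBuf> {
    // Reject CIDs too short (or non-ASCII) to split safely.
    if cid.len() <= 9 || !cid.is_ascii() {
        return None;
    }
    let (prefix, rest) = cid.split_at(9);
    Some([root, prefix, rest].into_iter().collect())
}

fn main() {
    let cid = "bafkreihdwdcefgh4dqkjv67uzcmw7ojee6xedzdetojuzjevtenxquvyku";
    let p = shard_path("/var/lib/tranquil/blobs", cid).unwrap();
    assert_eq!(
        p.to_str().unwrap(),
        "/var/lib/tranquil/blobs/bafkreihd/wdcefgh4dqkjv67uzcmw7ojee6xedzdetojuzjevtenxquvyku"
    );
    println!("{}", p.display());
}
```

Sharding this way keeps any one directory from accumulating every blob while still making the on-disk path derivable from the CID alone.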

2 commits
feat: filesystem blob storage
fix: delegated acc passkey auth
0 comments
lewis.moe submitted #0
2 commits
feat: filesystem blob storage
fix: delegated acc passkey auth
0 comments