QuickDID is a high-performance AT Protocol identity resolution service written in Rust. It provides handle-to-DID resolution with Redis-backed caching and queue processing.
## Overview

QuickDID is a high-performance AT Protocol identity resolution service written in Rust. It provides handle-to-DID resolution with Redis-backed caching and queue processing.

## Configuration

QuickDID follows the 12-factor app methodology and uses environment variables exclusively for configuration. There are no command-line arguments except for `--version` and `--help`.

Configuration is validated at startup, and the service will exit with specific error codes if validation fails:

- `error-quickdid-config-1`: Missing required environment variable
- `error-quickdid-config-2`: Invalid configuration value
- `error-quickdid-config-3`: Invalid TTL value (must be positive)
- `error-quickdid-config-4`: Invalid timeout value (must be positive)
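The startup validation described above can be sketched in plain Rust. This is an illustrative, dependency-free sketch: the `lookup` closure stands in for `std::env::var`, and the actual checks in `src/config.rs` are more complete.

```rust
// Illustrative sketch of startup validation using the error identifiers listed
// above; `lookup` stands in for std::env::var so the logic is easy to exercise.
fn validate_config(lookup: impl Fn(&str) -> Option<String>) -> Result<(), String> {
    // error-quickdid-config-1: required variables must be present and non-empty
    for key in ["HTTP_EXTERNAL", "SERVICE_KEY"] {
        match lookup(key) {
            Some(v) if !v.is_empty() => {}
            _ => {
                return Err(format!(
                    "error-quickdid-config-1 Missing required environment variable: {key}"
                ));
            }
        }
    }
    // error-quickdid-config-3: TTL values must parse as positive integers
    if let Some(raw) = lookup("CACHE_TTL_MEMORY") {
        match raw.parse::<i64>() {
            Ok(ttl) if ttl > 0 => {}
            _ => {
                return Err(format!(
                    "error-quickdid-config-3 Invalid TTL value (must be positive): {raw}"
                ));
            }
        }
    }
    Ok(())
}
```

In the real service a failure here terminates the process; the sketch returns the identifier-prefixed message instead.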
## Common Commands

### Building and Running
```bash
# Build the project
cargo build

# Run in debug mode (requires environment variables)
HTTP_EXTERNAL=localhost:3007 SERVICE_KEY=did:key:z42tmZxD2mi1TfMKSFrsRfednwdaaPNZiiWHP4MPgcvXkDWK cargo run

# Run tests
cargo test

# Type checking
cargo check

# Linting
cargo clippy

# Show version
cargo run -- --version

# Show help
cargo run -- --help
```
### Development with VS Code
### Core Components
1. **Handle Resolution** (`src/handle_resolver/`)
   - `BaseHandleResolver`: Core resolution using DNS and HTTP
   - `RateLimitedHandleResolver`: Semaphore-based rate limiting with optional timeout
   - `CachingHandleResolver`: In-memory caching layer
   - `RedisHandleResolver`: Redis-backed persistent caching
   - `SqliteHandleResolver`: SQLite-backed persistent caching
   - Uses binary serialization via `HandleResolutionResult` for space efficiency
   - Resolution stack: Cache → RateLimited (optional) → Base → DNS/HTTP
2. **Binary Serialization** (`src/handle_resolution_result.rs`)
   - Compact storage format using bincode
   - Strips DID prefixes for did:web and did:plc methods
   - Stores: timestamp (u64), method type (i16), payload (String)
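   The prefix-stripping idea can be sketched as follows. This is a hypothetical illustration of the layout described above (timestamp, method type, payload); the real implementation serializes with bincode, and the numeric method codes here are invented for the example.

   ```rust
   // Sketch of the compact cache record described above. Method codes are
   // illustrative, not the crate's actual values.
   #[derive(Debug, PartialEq)]
   struct HandleResolutionRecord {
       timestamp: u64,  // seconds since the Unix epoch
       method: i16,     // e.g. 1 = did:plc, 2 = did:web (illustrative codes)
       payload: String, // DID with its method prefix stripped
   }

   fn compact(did: &str, timestamp: u64) -> Option<HandleResolutionRecord> {
       // Strip the method prefix so only the distinguishing payload is stored
       if let Some(rest) = did.strip_prefix("did:plc:") {
           Some(HandleResolutionRecord { timestamp, method: 1, payload: rest.to_string() })
       } else if let Some(rest) = did.strip_prefix("did:web:") {
           Some(HandleResolutionRecord { timestamp, method: 2, payload: rest.to_string() })
       } else {
           None // unknown method; the real code has a fallback representation
       }
   }

   fn restore(rec: &HandleResolutionRecord) -> String {
       let prefix = match rec.method {
           1 => "did:plc:",
           2 => "did:web:",
           _ => "",
       };
       format!("{}{}", prefix, rec.payload)
   }
   ```

   Dropping the repeated `did:plc:`/`did:web:` prefix is where much of the ~40% storage saving comes from, since the prefix is redundant once the method code is stored.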
3. **Queue System** (`src/queue/`)
   - Supports MPSC (in-process), Redis, SQLite, and no-op adapters
   - `HandleResolutionWork` items processed asynchronously
   - Redis uses reliable queue pattern (LPUSH/RPOPLPUSH/LREM)
   - SQLite provides persistent queue with work shedding capabilities
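   The reliable queue pattern named above can be simulated in plain Rust to show the mechanics. This is a sketch, not the adapter's code: the list names are illustrative, and the real adapter issues the corresponding Redis commands (LPUSH, RPOPLPUSH, LREM).

   ```rust
   use std::collections::VecDeque;

   // Simulation of the Redis reliable-queue pattern: LPUSH onto the main list,
   // RPOPLPUSH into a per-worker processing list, LREM on acknowledgement.
   struct ReliableQueue {
       main: VecDeque<String>,  // e.g. queue:handleresolver:main
       processing: Vec<String>, // e.g. queue:handleresolver:processing:worker1
   }

   impl ReliableQueue {
       fn push(&mut self, item: String) {
           self.main.push_front(item); // LPUSH
       }

       fn pop_to_processing(&mut self) -> Option<String> {
           // RPOPLPUSH: atomically move an item from the main tail to processing
           let item = self.main.pop_back()?;
           self.processing.push(item.clone());
           Some(item)
       }

       fn ack(&mut self, item: &str) {
           // LREM: remove from processing once work completes; items still here
           // after a crash can be re-queued by a recovery pass
           self.processing.retain(|i| i != item);
       }
   }
   ```

   The point of the pattern is that an item is never only in flight: until `ack`, it survives in the processing list, so a crashed worker's work can be recovered.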
4. **HTTP Server** (`src/http/`)
   - XRPC endpoints for AT Protocol compatibility
### Handle Resolution Flow
1. Check cache (Redis/SQLite/in-memory based on configuration)
2. If cache miss and rate limiting enabled:
   - Acquire semaphore permit (with optional timeout)
   - If timeout configured and exceeded, return error
3. Perform DNS TXT lookup or HTTP well-known query
4. Cache result with appropriate TTL
5. Return DID or error
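The steps above can be sketched as a single function. This is a simplified, single-threaded illustration: the real service uses an async semaphore for permits and performs actual DNS/HTTP lookups, both stubbed here.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Hypothetical sketch of the resolution flow; field and type names are illustrative.
struct Resolver {
    cache: HashMap<String, String>,    // handle -> DID
    permits: usize,                    // remaining rate-limit permits
    permit_timeout: Option<Duration>,  // RESOLVER_MAX_CONCURRENT_TIMEOUT_MS
}

impl Resolver {
    fn resolve(&mut self, handle: &str) -> Result<String, String> {
        // 1. Check the cache first
        if let Some(did) = self.cache.get(handle) {
            return Ok(did.clone());
        }
        // 2. Cache miss: acquire a rate-limit permit, honoring the optional timeout
        let deadline = self.permit_timeout.map(|t| Instant::now() + t);
        while self.permits == 0 {
            // (in the real service, a concurrent task releases permits)
            if let Some(d) = deadline {
                if Instant::now() >= d {
                    return Err("rate limit permit acquisition timed out".to_string());
                }
            }
            std::thread::yield_now();
        }
        self.permits -= 1;
        // 3. Upstream DNS TXT / HTTP well-known lookup, stubbed for illustration
        let did = format!("did:plc:{}", handle.replace('.', ""));
        self.permits += 1;
        // 4. Cache the result, then 5. return the DID
        self.cache.insert(handle.to_string(), did.clone());
        Ok(did)
    }
}
```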
- `HTTP_PORT`: Server port (default: 8080)
- `PLC_HOSTNAME`: PLC directory hostname (default: plc.directory)
- `REDIS_URL`: Redis connection URL for caching
- `SQLITE_URL`: SQLite database URL for caching (e.g., `sqlite:./quickdid.db`)
- `QUEUE_ADAPTER`: Queue type - 'mpsc', 'redis', 'sqlite', 'noop', or 'none' (default: mpsc)
- `QUEUE_REDIS_PREFIX`: Redis key prefix for queues (default: queue:handleresolver:)
- `QUEUE_WORKER_ID`: Worker ID for queue operations (default: worker1)
- `QUEUE_BUFFER_SIZE`: Buffer size for MPSC queue (default: 1000)
- `QUEUE_SQLITE_MAX_SIZE`: Max queue size for SQLite work shedding (default: 10000)
- `CACHE_TTL_MEMORY`: TTL for in-memory cache in seconds (default: 600)
- `CACHE_TTL_REDIS`: TTL for Redis cache in seconds (default: 7776000)
- `CACHE_TTL_SQLITE`: TTL for SQLite cache in seconds (default: 7776000)
- `QUEUE_REDIS_TIMEOUT`: Redis blocking timeout in seconds (default: 5)
- `RESOLVER_MAX_CONCURRENT`: Maximum concurrent handle resolutions (default: 0 = disabled)
- `RESOLVER_MAX_CONCURRENT_TIMEOUT_MS`: Timeout for acquiring rate limit permit in ms (default: 0 = no timeout)
- `RUST_LOG`: Logging level (e.g., debug, info)
## Error Handling
    error-quickdid-<domain>-<number> <message>: <details>
Current error domains and examples:

* `config`: Configuration errors (e.g., error-quickdid-config-1 Missing required environment variable)
* `resolve`: Handle resolution errors (e.g., error-quickdid-resolve-1 Failed to resolve subject)
* `queue`: Queue operation errors (e.g., error-quickdid-queue-1 Failed to push to queue)
* `cache`: Cache-related errors (e.g., error-quickdid-cache-1 Redis pool creation failed)
* `result`: Serialization errors (e.g., error-quickdid-result-1 System time error)
* `task`: Task processing errors (e.g., error-quickdid-task-1 Queue adapter health check failed)
Errors should be represented as enums using the `thiserror` library.
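A minimal sketch of the convention, written with plain std so it stays dependency-free; in the real crate, `thiserror`'s `#[error("...")]` attributes generate these `Display` impls. The variant names and the `queue-2` identifier are illustrative, not the crate's actual ones.

```rust
use std::fmt;

// Sketch of the error convention: each variant's message starts with its
// unique error-quickdid-<domain>-<number> identifier.
#[derive(Debug)]
enum QueueError {
    PushFailed(String),
    PopFailed(String), // hypothetical second variant for illustration
}

impl fmt::Display for QueueError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            QueueError::PushFailed(detail) => {
                write!(f, "error-quickdid-queue-1 Failed to push to queue: {detail}")
            }
            QueueError::PopFailed(detail) => {
                write!(f, "error-quickdid-queue-2 Failed to pop from queue: {detail}")
            }
        }
    }
}

impl std::error::Error for QueueError {}
```

Embedding the identifier in the `Display` output means every log line and error response carries a greppable code.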
## Development Patterns

### Error Handling
- Uses strongly-typed errors with `thiserror` for all modules
- Each error has a unique identifier following the pattern `error-quickdid-<domain>-<number>`
- Graceful fallbacks when Redis/SQLite is unavailable
- Detailed tracing for debugging
- Avoid using `anyhow!()` or `bail!()` macros - use proper error types instead
### Performance Optimizations

- Binary serialization reduces storage by ~40%
3. Add test cases for the new method type
### Modifying Cache TTL
- For in-memory: Set `CACHE_TTL_MEMORY` environment variable
- For Redis: Set `CACHE_TTL_REDIS` environment variable
- For SQLite: Set `CACHE_TTL_SQLITE` environment variable
### Debugging Resolution Issues
1. Enable debug logging: `RUST_LOG=debug`
2. Check Redis cache: `redis-cli GET "handle:<hash>"`
3. Check SQLite cache: `sqlite3 quickdid.db "SELECT * FROM handle_resolution_cache;"`
4. Monitor queue processing in logs
5. Check rate limiting: Look for "Rate limit permit acquisition timed out" errors
6. Verify DNS/HTTP connectivity to AT Protocol infrastructure
## Dependencies

- `atproto-identity`: Core AT Protocol identity resolution
---

**README.md**
# QuickDID

QuickDID is a high-performance AT Protocol identity resolution service written in Rust. It provides blazing-fast handle-to-DID resolution with intelligent caching strategies, supporting in-memory, Redis-backed, and SQLite-backed persistent caching with binary serialization for optimal storage efficiency.

Built following the 12-factor app methodology with minimal dependencies and optimized for production use, QuickDID delivers exceptional performance while maintaining a lean footprint. Configuration is handled exclusively through environment variables, with only `--version` and `--help` command-line arguments supported.
## ⚠️ Production Disclaimer

**This project is a release candidate and has not been fully vetted for production use.** While it includes comprehensive error handling and has been designed with production features in mind, more thorough testing is necessary before deploying in critical environments. Use at your own risk and conduct appropriate testing for your use case.
## Performance

QuickDID is designed for high throughput and low latency:

- **Binary serialization** reduces cache storage by ~40% compared to JSON
- **Rate limiting** protects upstream services from being overwhelmed
- **Work shedding** in SQLite queue adapter prevents unbounded growth
- **Configurable TTLs** allow fine-tuning cache freshness vs. performance
- **Connection pooling** for Redis minimizes connection overhead

## Features
- **Fast Handle Resolution**: Resolves AT Protocol handles to DIDs using DNS TXT records and HTTP well-known endpoints
- **Multi-Layer Caching**: Flexible caching with three tiers:
  - In-memory caching with configurable TTL (default: 600 seconds)
  - Redis-backed persistent caching (default: 90-day TTL)
  - SQLite-backed persistent caching (default: 90-day TTL)
- **Rate Limiting**: Semaphore-based concurrency control with optional timeout to protect upstream services
- **Binary Serialization**: Compact storage format reduces cache size by ~40% compared to JSON
- **Queue Processing**: Asynchronous handle resolution with multiple adapters:
  - MPSC (in-memory, default)
  - Redis (distributed)
  - SQLite (persistent with work shedding)
  - No-op (testing)
- **AT Protocol Compatible**: Implements XRPC endpoints for seamless integration with AT Protocol infrastructure
- **Comprehensive Error Handling**: Structured errors with unique identifiers (e.g., `error-quickdid-config-1`), health checks, and graceful shutdown
- **12-Factor App**: Environment-based configuration following cloud-native best practices
- **Minimal Dependencies**: Optimized dependency tree for faster compilation and reduced attack surface
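The work-shedding idea behind the SQLite queue adapter can be sketched in a few lines. This is a hypothetical illustration of bounding queue growth via `QUEUE_SQLITE_MAX_SIZE`; the exact shedding policy shown here (reject new work at capacity) is an assumption, not the adapter's documented behavior.

```rust
use std::collections::VecDeque;

// Sketch: when the persistent queue reaches its configured maximum,
// new work is shed instead of letting the queue grow without bound.
struct ShedQueue {
    items: VecDeque<String>,
    max_size: usize, // QUEUE_SQLITE_MAX_SIZE, default 10000
}

impl ShedQueue {
    fn push(&mut self, item: String) -> bool {
        if self.items.len() >= self.max_size {
            return false; // shed: queue is at capacity
        }
        self.items.push_back(item);
        true
    }
}
```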
## Building
- Rust 1.70 or later
- Redis (optional, for persistent caching and distributed queuing)
- SQLite 3.35+ (optional, for single-instance persistent caching)

### Build Commands
## Minimum Configuration
QuickDID requires the following environment variables to run. Configuration is validated at startup, and the service will exit with specific error codes if validation fails.
### Required
This will start QuickDID with:
- HTTP server on port 8080 (default)
- In-memory caching only (600-second TTL default)
- MPSC queue adapter for async processing
- Default worker ID: "worker1"
- Connection to plc.directory for DID resolution
- Rate limiting disabled (default)
### Optional Configuration
For production deployments, consider these additional environment variables:
#### Network & Service
- `HTTP_PORT`: Server port (default: 8080)
- `PLC_HOSTNAME`: PLC directory hostname (default: plc.directory)
- `USER_AGENT`: HTTP User-Agent for outgoing requests
- `DNS_NAMESERVERS`: Custom DNS servers (comma-separated)

#### Caching
- `REDIS_URL`: Redis connection URL (e.g., `redis://localhost:6379`)
- `SQLITE_URL`: SQLite database URL (e.g., `sqlite:./quickdid.db`)
- `CACHE_TTL_MEMORY`: In-memory cache TTL in seconds (default: 600)
- `CACHE_TTL_REDIS`: Redis cache TTL in seconds (default: 7776000 = 90 days)
- `CACHE_TTL_SQLITE`: SQLite cache TTL in seconds (default: 7776000 = 90 days)

#### Queue Processing
- `QUEUE_ADAPTER`: Queue type - 'mpsc', 'redis', 'sqlite', 'noop', or 'none' (default: mpsc)
- `QUEUE_WORKER_ID`: Worker identifier (default: worker1)
- `QUEUE_BUFFER_SIZE`: MPSC queue buffer size (default: 1000)
- `QUEUE_REDIS_PREFIX`: Redis key prefix for queues (default: queue:handleresolver:)
- `QUEUE_REDIS_TIMEOUT`: Redis blocking timeout in seconds (default: 5)
- `QUEUE_SQLITE_MAX_SIZE`: Max SQLite queue size for work shedding (default: 10000)

#### Rate Limiting
- `RESOLVER_MAX_CONCURRENT`: Maximum concurrent handle resolutions (default: 0 = disabled)
- `RESOLVER_MAX_CONCURRENT_TIMEOUT_MS`: Timeout for acquiring rate limit permit in ms (default: 0 = no timeout)

#### Logging
- `RUST_LOG`: Logging level (e.g., debug, info, warn, error)
### Production Examples

#### Redis-based (Multi-instance/HA)
```bash
HTTP_EXTERNAL=quickdid.example.com \
SERVICE_KEY=did:key:yourkeyhere \
HTTP_PORT=3000 \
REDIS_URL=redis://localhost:6379 \
CACHE_TTL_REDIS=86400 \
QUEUE_ADAPTER=redis \
QUEUE_WORKER_ID=prod-worker-1 \
RESOLVER_MAX_CONCURRENT=100 \
RESOLVER_MAX_CONCURRENT_TIMEOUT_MS=5000 \
RUST_LOG=info \
./target/release/quickdid
```
#### SQLite-based (Single-instance)
```bash
HTTP_EXTERNAL=quickdid.example.com \
SERVICE_KEY=did:key:yourkeyhere \
HTTP_PORT=3000 \
SQLITE_URL=sqlite:./quickdid.db \
CACHE_TTL_SQLITE=86400 \
QUEUE_ADAPTER=sqlite \
QUEUE_SQLITE_MAX_SIZE=10000 \
RESOLVER_MAX_CONCURRENT=50 \
RUST_LOG=info \
./target/release/quickdid
```
## Architecture

QuickDID uses a layered architecture for optimal performance:

```
Request → Cache Layer → Rate Limiter → Base Resolver → DNS/HTTP
              ↓              ↓                            ↓
        Memory/Redis/    Semaphore                  AT Protocol
           SQLite        (optional)               Infrastructure
```
### Cache Priority

QuickDID checks caches in this order:

1. Redis (if configured) - Best for distributed deployments
2. SQLite (if configured) - Best for single-instance with persistence
3. In-memory (fallback) - Always available
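The priority rule can be sketched as a small selection function. This is illustrative only: the real selection happens in the resolver factory wiring and may differ in detail.

```rust
// Sketch of the cache-priority rule above, driven by whether REDIS_URL /
// SQLITE_URL are configured (passed in here instead of read from the env).
fn pick_cache_backend(redis_url: Option<&str>, sqlite_url: Option<&str>) -> &'static str {
    match (redis_url, sqlite_url) {
        (Some(r), _) if !r.is_empty() => "redis",  // 1. distributed deployments
        (_, Some(s)) if !s.is_empty() => "sqlite", // 2. single-instance persistence
        _ => "memory",                             // 3. always-available fallback
    }
}
```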
### Deployment Strategies

- **Single-instance**: Use SQLite for both caching and queuing
- **Multi-instance/HA**: Use Redis for distributed caching and queuing
- **Development**: Use in-memory caching with MPSC queuing

## API Endpoints
- `GET /_health` - Health check endpoint
- `GET /xrpc/com.atproto.identity.resolveHandle` - Resolve handle to DID
- `GET /.well-known/atproto-did` - Serve DID document for the service
## Docker Deployment

QuickDID can be deployed using Docker. See the [production deployment guide](docs/production-deployment.md) for detailed Docker and Docker Compose configurations.

### Quick Docker Setup

```bash
# Build the image
docker build -t quickdid:latest .

# Run with environment file
docker run -d \
  --name quickdid \
  --env-file .env \
  -p 8080:8080 \
  quickdid:latest
```
## Documentation

- [Configuration Reference](docs/configuration-reference.md) - Complete list of all configuration options
- [Production Deployment Guide](docs/production-deployment.md) - Docker, monitoring, and production best practices
- [Development Guide](CLAUDE.md) - Architecture details and development patterns

## License
---

**src/bin/quickdid.rs**

```rust
// ...
        create_base_resolver, create_caching_resolver, create_rate_limited_resolver_with_timeout,
        create_redis_resolver_with_ttl, create_sqlite_resolver_with_ttl,
    },
    handle_resolver_task::{HandleResolverTaskConfig, create_handle_resolver_task_with_config},
    http::{AppContext, create_router},
    queue::{
        HandleResolutionWork, QueueAdapter, create_mpsc_queue_from_channel, create_noop_queue,
        create_redis_queue, create_sqlite_queue, create_sqlite_queue_with_max_size,
    },
    sqlite_schema::create_sqlite_pool,
    task_manager::spawn_cancellable_task,
};
use serde_json::json;
```
```rust
/// Simple command-line argument handling for --version and --help
fn handle_simple_args() -> bool {
    let args: Vec<String> = std::env::args().collect();

    if args.len() > 1 {
        match args[1].as_str() {
            "--version" | "-V" => {
// ...
                println!();
                println!("ENVIRONMENT VARIABLES:");
                println!(" SERVICE_KEY Private key for service identity (required)");
                println!(
                    " HTTP_EXTERNAL External hostname for service endpoints (required)"
                );
                println!(" HTTP_PORT HTTP server port (default: 8080)");
                println!(" PLC_HOSTNAME PLC directory hostname (default: plc.directory)");
                println!(
                    " USER_AGENT HTTP User-Agent header (auto-generated with version)"
                );
                println!(" DNS_NAMESERVERS Custom DNS nameservers (comma-separated IPs)");
                println!(
                    " CERTIFICATE_BUNDLES Additional CA certificates (comma-separated paths)"
                );
                println!();
                println!(" CACHING:");
                println!(" REDIS_URL Redis URL for handle resolution caching");
                println!(
                    " SQLITE_URL SQLite database URL for handle resolution caching"
                );
                println!(
                    " CACHE_TTL_MEMORY TTL for in-memory cache in seconds (default: 600)"
                );
                println!(
                    " CACHE_TTL_REDIS TTL for Redis cache in seconds (default: 7776000)"
                );
                println!(
                    " CACHE_TTL_SQLITE TTL for SQLite cache in seconds (default: 7776000)"
                );
                println!();
                println!(" QUEUE CONFIGURATION:");
                println!(
                    " QUEUE_ADAPTER Queue adapter: 'mpsc', 'redis', 'sqlite', 'noop' (default: mpsc)"
                );
                println!(" QUEUE_REDIS_URL Redis URL for queue adapter");
                println!(
                    " QUEUE_REDIS_PREFIX Redis key prefix for queues (default: queue:handleresolver:)"
                );
                println!(" QUEUE_REDIS_TIMEOUT Queue blocking timeout in seconds (default: 5)");
                println!(" QUEUE_WORKER_ID Worker ID for Redis queue (default: worker1)");
                println!(" QUEUE_BUFFER_SIZE Buffer size for MPSC queue (default: 1000)");
                println!(" QUEUE_SQLITE_MAX_SIZE Maximum SQLite queue size (default: 10000)");
                println!();
                println!(" RATE LIMITING:");
                println!(
                    " RESOLVER_MAX_CONCURRENT Maximum concurrent resolutions (default: 0 = disabled)"
                );
                println!(
                    " RESOLVER_MAX_CONCURRENT_TIMEOUT_MS Timeout for acquiring permits in ms (default: 0 = no timeout)"
                );
                println!();
                println!(
                    "For more information, visit: https://github.com/smokesignal.events/quickdid"
                );
                return true;
            }
            _ => {}
        }
    }

    false
}
```
```rust
// ...
    let dns_resolver_arc = Arc::new(dns_resolver);

    // Create base handle resolver using factory function
    let mut base_handle_resolver =
        create_base_resolver(dns_resolver_arc.clone(), http_client.clone());

    // Apply rate limiting if configured
    if config.resolver_max_concurrent > 0 {
// ...
            timeout_info
        );
        base_handle_resolver = create_rate_limited_resolver_with_timeout(
            base_handle_resolver,
            config.resolver_max_concurrent,
            config.resolver_max_concurrent_timeout_ms,
        );
    }
```
```rust
// ...
                create_sqlite_queue::<HandleResolutionWork>(pool)
            }
        } else {
            tracing::warn!(
                "Failed to create SQLite pool for queue, falling back to MPSC queue adapter"
            );
            // Fall back to MPSC if SQLite fails
            let (handle_sender, handle_receiver) =
                tokio::sync::mpsc::channel::<HandleResolutionWork>(
```
---

**src/config.rs**

```rust
// ...
}

/// Helper function to parse an environment variable as a specific type
fn parse_env<T: std::str::FromStr>(key: &str, default: T) -> Result<T, ConfigError>
where
    T::Err: std::fmt::Display,
{
    match env::var(key) {
        Ok(val) if !val.is_empty() => val
            .parse::<T>()
            .map_err(|e| ConfigError::InvalidValue(format!("{}: {}", key, e))),
        _ => Ok(default),
    }
}
```
···244245 sqlite_url: get_env_or_default("SQLITE_URL", None),
245246 queue_adapter: get_env_or_default("QUEUE_ADAPTER", Some("mpsc")).unwrap(),
246247 queue_redis_url: get_env_or_default("QUEUE_REDIS_URL", None),
247247- queue_redis_prefix: get_env_or_default("QUEUE_REDIS_PREFIX", Some("queue:handleresolver:")).unwrap(),
248248+ queue_redis_prefix: get_env_or_default(
249249+ "QUEUE_REDIS_PREFIX",
250250+ Some("queue:handleresolver:"),
251251+ )
252252+ .unwrap(),
248253 queue_worker_id: get_env_or_default("QUEUE_WORKER_ID", Some("worker1")).unwrap(),
249254 queue_buffer_size: parse_env("QUEUE_BUFFER_SIZE", 1000)?,
250255 cache_ttl_memory: parse_env("CACHE_TTL_MEMORY", 600)?,
---

**src/handle_resolver/memory.rs**

```rust
// ...
use super::errors::HandleResolverError;
use super::traits::HandleResolver;
use async_trait::async_trait;
use std::collections::HashMap;
use std::sync::Arc;
use std::time::{SystemTime, UNIX_EPOCH};
use tokio::sync::RwLock;

/// Result of a handle resolution cached in memory.
```
```rust
// ...
        });

        // Create Redis-backed resolver with a unique key prefix for testing
        let test_prefix = format!(
            "test:handle:{}:",
            std::time::SystemTime::now()
                .duration_since(std::time::UNIX_EPOCH)
                .unwrap()
                .as_nanos()
        );
        let redis_resolver = RedisHandleResolver::with_full_config(
            mock_resolver,
            pool.clone(),
// ...
        });

        // Create Redis-backed resolver with a unique key prefix for testing
        let test_prefix = format!(
            "test:handle:{}:",
            std::time::SystemTime::now()
                .duration_since(std::time::UNIX_EPOCH)
                .unwrap()
                .as_nanos()
        );
        let redis_resolver = RedisHandleResolver::with_full_config(
            mock_resolver,
            pool.clone(),
```
---

**src/handle_resolver/sqlite.rs**

```rust
// ...
        let key = self.make_key(&handle) as i64; // SQLite uses signed integers

        // Try to get from SQLite cache first
        let cached_result =
            sqlx::query("SELECT result, updated FROM handle_resolution_cache WHERE key = ?1")
                .bind(key)
                .fetch_optional(&self.pool)
                .await;

        match cached_result {
            Ok(Some(row)) => {
```
```rust
// ...
                ON CONFLICT(key) DO UPDATE SET
                    result = excluded.result,
                    updated = excluded.updated
                "#,
        )
        .bind(key)
        .bind(&bytes)
```
335334336335 // Verify record was inserted
337337- let count_after_first: i64 = sqlx::query_scalar("SELECT COUNT(*) FROM handle_resolution_cache")
338338- .fetch_one(&pool)
339339- .await
340340- .expect("Failed to query count after first resolution");
336336+ let count_after_first: i64 =
337337+ sqlx::query_scalar("SELECT COUNT(*) FROM handle_resolution_cache")
338338+ .fetch_one(&pool)
339339+ .await
340340+ .expect("Failed to query count after first resolution");
341341 assert_eq!(count_after_first, 1);
342342343343 // Verify the cached record has correct key and non-empty result
344344- let cached_record = sqlx::query("SELECT key, result, created, updated FROM handle_resolution_cache WHERE key = ?1")
345345- .bind(expected_key)
346346- .fetch_one(&pool)
347347- .await
348348- .expect("Failed to fetch cached record");
349349-344344+ let cached_record = sqlx::query(
345345+ "SELECT key, result, created, updated FROM handle_resolution_cache WHERE key = ?1",
346346+ )
347347+ .bind(expected_key)
348348+ .fetch_one(&pool)
349349+ .await
350350+ .expect("Failed to fetch cached record");
351351+350352 let cached_key: i64 = cached_record.get("key");
351353 let cached_result: Vec<u8> = cached_record.get("result");
352354 let cached_created: i64 = cached_record.get("created");
353355 let cached_updated: i64 = cached_record.get("updated");
354356355357 assert_eq!(cached_key, expected_key);
356356- assert!(!cached_result.is_empty(), "Cached result should not be empty");
358358+ assert!(
359359+ !cached_result.is_empty(),
360360+ "Cached result should not be empty"
361361+ );
357362 assert!(cached_created > 0, "Created timestamp should be positive");
358363 assert!(cached_updated > 0, "Updated timestamp should be positive");
359359- assert_eq!(cached_created, cached_updated, "Created and updated should be equal on first insert");
364364+ assert_eq!(
365365+ cached_created, cached_updated,
366366+ "Created and updated should be equal on first insert"
367367+ );
360368361369 // Verify we can deserialize the cached result
362362- let resolution_result = crate::handle_resolution_result::HandleResolutionResult::from_bytes(&cached_result)
363363- .expect("Failed to deserialize cached result");
370370+ let resolution_result =
371371+ crate::handle_resolution_result::HandleResolutionResult::from_bytes(&cached_result)
372372+ .expect("Failed to deserialize cached result");
364373 let cached_did = resolution_result.to_did().expect("Should have a DID");
365374 assert_eq!(cached_did, "did:plc:testuser123");
366375···369378 assert_eq!(result2, "did:plc:testuser123");
370379371380 // Verify count hasn't changed (cache hit, no new insert)
372372- let count_after_second: i64 = sqlx::query_scalar("SELECT COUNT(*) FROM handle_resolution_cache")
373373- .fetch_one(&pool)
374374- .await
375375- .expect("Failed to query count after second resolution");
381381+ let count_after_second: i64 =
382382+ sqlx::query_scalar("SELECT COUNT(*) FROM handle_resolution_cache")
383383+ .fetch_one(&pool)
384384+ .await
385385+ .expect("Failed to query count after second resolution");
376386 assert_eq!(count_after_second, 1);
377387 }
378388···410420 // First resolution - should fail and cache the failure
411421 let result1 = sqlite_resolver.resolve(test_handle).await;
412422 assert!(result1.is_err());
```diff
         // Match the specific error type we expect
         match result1 {
-            Err(HandleResolverError::MockResolutionFailure) => {},
+            Err(HandleResolverError::MockResolutionFailure) => {}
             other => panic!("Expected MockResolutionFailure, got {:?}", other),
         }

         // Verify the failure was cached
-        let count_after_first: i64 = sqlx::query_scalar("SELECT COUNT(*) FROM handle_resolution_cache")
-            .fetch_one(&pool)
-            .await
-            .expect("Failed to query count after first resolution");
+        let count_after_first: i64 =
+            sqlx::query_scalar("SELECT COUNT(*) FROM handle_resolution_cache")
+                .fetch_one(&pool)
+                .await
+                .expect("Failed to query count after first resolution");
         assert_eq!(count_after_first, 1);

         // Verify the cached error record
-        let cached_record = sqlx::query("SELECT key, result, created, updated FROM handle_resolution_cache WHERE key = ?1")
-            .bind(expected_key)
-            .fetch_one(&pool)
-            .await
-            .expect("Failed to fetch cached error record");
-
+        let cached_record = sqlx::query(
+            "SELECT key, result, created, updated FROM handle_resolution_cache WHERE key = ?1",
+        )
+        .bind(expected_key)
+        .fetch_one(&pool)
+        .await
+        .expect("Failed to fetch cached error record");
+
         let cached_key: i64 = cached_record.get("key");
         let cached_result: Vec<u8> = cached_record.get("result");
         let cached_created: i64 = cached_record.get("created");
         let cached_updated: i64 = cached_record.get("updated");

         assert_eq!(cached_key, expected_key);
-        assert!(!cached_result.is_empty(), "Cached error result should not be empty");
+        assert!(
+            !cached_result.is_empty(),
+            "Cached error result should not be empty"
+        );
         assert!(cached_created > 0, "Created timestamp should be positive");
         assert!(cached_updated > 0, "Updated timestamp should be positive");
-        assert_eq!(cached_created, cached_updated, "Created and updated should be equal on first insert");
+        assert_eq!(
+            cached_created, cached_updated,
+            "Created and updated should be equal on first insert"
+        );

         // Verify we can deserialize the cached error result
-        let resolution_result = crate::handle_resolution_result::HandleResolutionResult::from_bytes(&cached_result)
-            .expect("Failed to deserialize cached error result");
+        let resolution_result =
+            crate::handle_resolution_result::HandleResolutionResult::from_bytes(&cached_result)
+                .expect("Failed to deserialize cached error result");
         let cached_did = resolution_result.to_did();
         assert!(cached_did.is_none(), "Error result should have no DID");

         // Second resolution - should hit cache with error (no additional database operations)
         let result2 = sqlite_resolver.resolve(test_handle).await;
         assert!(result2.is_err());

         // Match the specific error type we expect from cache
         match result2 {
-            Err(HandleResolverError::HandleNotFound) => {}, // Cache returns HandleNotFound for "not resolved"
+            Err(HandleResolverError::HandleNotFound) => {} // Cache returns HandleNotFound for "not resolved"
             other => panic!("Expected HandleNotFound from cache, got {:?}", other),
         }

         // Verify count hasn't changed (cache hit, no new operations)
-        let count_after_second: i64 = sqlx::query_scalar("SELECT COUNT(*) FROM handle_resolution_cache")
-            .fetch_one(&pool)
-            .await
-            .expect("Failed to query count after second resolution");
+        let count_after_second: i64 =
+            sqlx::query_scalar("SELECT COUNT(*) FROM handle_resolution_cache")
+                .fetch_one(&pool)
+                .await
+                .expect("Failed to query count after second resolution");
         assert_eq!(count_after_second, 1);
     }
 }
```
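The test above exercises negative caching: a failed upstream resolution is persisted, and subsequent lookups for the same handle return `HandleNotFound` from the cache instead of re-resolving. A minimal std-only sketch of that pattern (names such as `NegativeCache` and `CachedOutcome` are illustrative, not QuickDID's actual API):

```rust
use std::collections::HashMap;

/// What a resolution cache stores: a DID on success, or a marker recording
/// that the handle could not be resolved (a "negative" entry).
#[derive(Debug, Clone, PartialEq)]
enum CachedOutcome {
    Did(String),
    NotResolved,
}

#[derive(Debug, PartialEq)]
enum LookupResult {
    Hit(String),    // cached DID
    HandleNotFound, // cached failure: do not retry upstream yet
    Miss,           // not cached; caller must resolve via DNS/HTTP
}

#[derive(Default)]
struct NegativeCache {
    entries: HashMap<String, CachedOutcome>,
}

impl NegativeCache {
    fn record_success(&mut self, handle: &str, did: &str) {
        self.entries
            .insert(handle.to_string(), CachedOutcome::Did(did.to_string()));
    }

    /// Failed resolutions are cached too, so repeated lookups for a bad
    /// handle do not hammer the upstream resolvers.
    fn record_failure(&mut self, handle: &str) {
        self.entries
            .insert(handle.to_string(), CachedOutcome::NotResolved);
    }

    fn lookup(&self, handle: &str) -> LookupResult {
        match self.entries.get(handle) {
            Some(CachedOutcome::Did(did)) => LookupResult::Hit(did.clone()),
            Some(CachedOutcome::NotResolved) => LookupResult::HandleNotFound,
            None => LookupResult::Miss,
        }
    }
}
```

This mirrors the test's two-phase behavior: the first resolution fails and is recorded, the second returns the cached "not resolved" outcome without touching the database again.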
```diff
 //! This module defines the core `QueueAdapter` trait that provides a common
 //! interface for different queue implementations (MPSC, Redis, SQLite, etc.).

-use async_trait::async_trait;
 use super::error::Result;
+use async_trait::async_trait;

 /// Generic trait for queue adapters that can work with any work type.
 ///
···
     #[tokio::test]
     async fn test_default_trait_methods() {
         let queue = MockQueue::<String>::new();

         // Test default ack implementation
         assert!(queue.ack(&"test".to_string()).await.is_ok());

         // Test default try_push implementation
         assert!(queue.try_push("test".to_string()).await.is_ok());

         // Test default depth implementation
         assert_eq!(queue.depth().await, None);

         // Test default is_healthy implementation
         assert!(queue.is_healthy().await);
     }
 }
```
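The test above relies on the trait's default method bodies: `ack` is a no-op, `try_push` falls back to `push`, `depth` is unknown, and `is_healthy` is optimistic. A synchronous std-only sketch of that default-method pattern (the real `QueueAdapter` is async and uses `async_trait`; these names and signatures are simplified for illustration):

```rust
use std::collections::VecDeque;

/// Simplified queue-adapter trait: only `push` and `pull` are required,
/// everything else has a sensible default an adapter may override.
trait QueueAdapterSketch<W> {
    fn push(&mut self, work: W) -> Result<(), String>;
    fn pull(&mut self) -> Option<W>;

    /// Default: acknowledging is a no-op for adapters without redelivery.
    fn ack(&mut self, _work: &W) -> Result<(), String> {
        Ok(())
    }

    /// Default: a non-blocking push simply delegates to `push`.
    fn try_push(&mut self, work: W) -> Result<(), String> {
        self.push(work)
    }

    /// Default: adapters that cannot count their backlog report `None`.
    fn depth(&self) -> Option<usize> {
        None
    }

    /// Default: assume healthy unless the adapter can probe a backend.
    fn is_healthy(&self) -> bool {
        true
    }
}

/// Minimal in-memory adapter overriding only the required methods.
struct VecQueue<W> {
    items: VecDeque<W>,
}

impl<W> VecQueue<W> {
    fn new() -> Self {
        Self {
            items: VecDeque::new(),
        }
    }
}

impl<W> QueueAdapterSketch<W> for VecQueue<W> {
    fn push(&mut self, work: W) -> Result<(), String> {
        self.items.push_back(work);
        Ok(())
    }

    fn pull(&mut self) -> Option<W> {
        self.items.pop_front()
    }
}
```

An implementor like the Redis adapter would override `depth` and `is_healthy` with real backend checks; the MPSC adapter can keep the defaults, which is exactly what `test_default_trait_methods` verifies.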
src/queue/error.rs (+4 -4)

```diff

     /// Redis operation failed.
     #[error("error-quickdid-queue-5 Redis operation failed: {operation}: {details}")]
     RedisOperationFailed {
         /// The Redis operation that failed
         operation: String,
         /// Details about the failure
-        details: String
+        details: String,
     },

     /// Failed to serialize an item for storage.
···
         assert!(err.to_string().contains("LPUSH"));
         assert!(err.to_string().contains("connection timeout"));
     }
 }
```
src/queue/factory.rs (+18 -23)

```diff
 use tokio::sync::mpsc;

 use super::{
-    adapter::QueueAdapter,
-    mpsc::MpscQueueAdapter,
-    noop::NoopQueueAdapter,
-    redis::RedisQueueAdapter,
-    sqlite::SqliteQueueAdapter,
+    adapter::QueueAdapter, mpsc::MpscQueueAdapter, noop::NoopQueueAdapter,
+    redis::RedisQueueAdapter, sqlite::SqliteQueueAdapter,
 };

 // ========= MPSC Queue Factories =========
···
     #[tokio::test]
     async fn test_create_mpsc_queue() {
         let queue = create_mpsc_queue::<String>(10);

         queue.push("test".to_string()).await.unwrap();
         let item = queue.pull().await;
         assert_eq!(item, Some("test".to_string()));
···
     async fn test_create_mpsc_queue_from_channel() {
         let (sender, receiver) = mpsc::channel(5);
         let queue = create_mpsc_queue_from_channel(sender.clone(), receiver);

         // Send via original sender
         sender.send("external".to_string()).await.unwrap();

         // Receive via queue
         let item = queue.pull().await;
         assert_eq!(item, Some("external".to_string()));
···
     #[tokio::test]
     async fn test_create_noop_queue() {
         let queue = create_noop_queue::<String>();

         // Should accept pushes
         queue.push("ignored".to_string()).await.unwrap();

         // Should report as healthy
         assert!(queue.is_healthy().await);

         // Should report depth as 0
         assert_eq!(queue.depth().await, Some(0));
     }
···
             .expect("Failed to create schema");

         let queue = create_sqlite_queue::<HandleResolutionWork>(pool);

         let work = HandleResolutionWork::new("test.example.com".to_string());
         queue.push(work.clone()).await.unwrap();

         let pulled = queue.pull().await;
         assert_eq!(pulled, Some(work));
     }
···

         // Create queue with small max size
         let queue = create_sqlite_queue_with_max_size::<HandleResolutionWork>(pool, 5);

         // Push items
         for i in 0..10 {
             let work = HandleResolutionWork::new(format!("test-{}.example.com", i));
             queue.push(work).await.unwrap();
         }

         // Should have limited items due to work shedding
         let depth = queue.depth().await.unwrap();
-        assert!(depth <= 5, "Queue should have at most 5 items after work shedding");
+        assert!(
+            depth <= 5,
+            "Queue should have at most 5 items after work shedding"
+        );
     }

     #[tokio::test]
···
                 .as_nanos()
         );

-        let queue = create_redis_queue::<String>(
-            pool,
-            "test-worker".to_string(),
-            test_prefix,
-            1,
-        );
+        let queue = create_redis_queue::<String>(pool, "test-worker".to_string(), test_prefix, 1);

         queue.push("test-item".to_string()).await.unwrap();
         let pulled = queue.pull().await;
         assert_eq!(pulled, Some("test-item".to_string()));
     }
 }
```
```diff
         let record = match sqlx::query(
             "SELECT id, work FROM handle_resolution_queue
              ORDER BY queued_at ASC
-             LIMIT 1"
+             LIMIT 1",
         )
         .fetch_optional(&mut *transaction)
         .await
···

         // Optimized approach: Insert first, then check if cleanup needed
         // This avoids counting on every insert
-        sqlx::query(
-            "INSERT INTO handle_resolution_queue (work, queued_at) VALUES (?1, ?2)"
-        )
-        .bind(&work_json)
-        .bind(current_timestamp)
-        .execute(&self.pool)
-        .await
-        .map_err(|e| QueueError::PushFailed(format!("Failed to insert work item: {}", e)))?;
+        sqlx::query("INSERT INTO handle_resolution_queue (work, queued_at) VALUES (?1, ?2)")
+            .bind(&work_json)
+            .bind(current_timestamp)
+            .execute(&self.pool)
+            .await
+            .map_err(|e| QueueError::PushFailed(format!("Failed to insert work item: {}", e)))?;

         // Implement optimized work shedding if max_size is configured
         if self.max_size > 0 {
···
             let approx_count: Option<i64> = sqlx::query_scalar(
                 "SELECT COUNT(*) FROM (
                     SELECT 1 FROM handle_resolution_queue LIMIT ?1
-                ) AS limited_count"
+                ) AS limited_count",
             )
             .bind(check_limit)
             .fetch_one(&self.pool)
···
             .map_err(|e| QueueError::PushFailed(format!("Failed to check queue size: {}", e)))?;

             // Only perform cleanup if we're definitely over the limit
-            if let Some(count) = approx_count && count >= check_limit {
+            if let Some(count) = approx_count
+                && count >= check_limit
+            {
                 // Perform batch cleanup - delete more than just the excess to reduce frequency
                 // Delete 20% more than needed to avoid frequent shedding
                 let target_size = (self.max_size as f64 * 0.8) as i64; // Keep 80% of max_size
                 let to_delete = count - target_size;

                 if to_delete > 0 {
                     // Optimized deletion: First get the cutoff id and timestamp
                     // This avoids the expensive subquery in the DELETE statement
                     let cutoff: Option<(i64, i64)> = sqlx::query_as(
                         "SELECT id, queued_at FROM handle_resolution_queue
                          ORDER BY queued_at ASC, id ASC
-                         LIMIT 1 OFFSET ?1"
+                         LIMIT 1 OFFSET ?1",
                     )
                     .bind(to_delete - 1)
                     .fetch_optional(&self.pool)
···
                     let deleted_result = sqlx::query(
                         "DELETE FROM handle_resolution_queue
                          WHERE queued_at < ?1
-                            OR (queued_at = ?1 AND id <= ?2)"
+                            OR (queued_at = ?1 AND id <= ?2)",
                     )
                     .bind(cutoff_timestamp)
                     .bind(cutoff_id)
                     .execute(&self.pool)
                     .await
-                    .map_err(|e| QueueError::PushFailed(format!("Failed to delete excess entries: {}", e)))?;
+                    .map_err(|e| {
+                        QueueError::PushFailed(format!(
+                            "Failed to delete excess entries: {}",
+                            e
+                        ))
+                    })?;

                     let deleted_count = deleted_result.rows_affected();
                     if deleted_count > 0 {
                         info!(
                             "Work shedding: deleted {} oldest entries (target size: {}, max: {})",
-                            deleted_count,
-                            target_size,
-                            self.max_size
+                            deleted_count, target_size, self.max_size
                         );
                     }
                 }
···
             }
         }

-        debug!("Pushed work item to SQLite queue (max_size: {})", self.max_size);
+        debug!(
+            "Pushed work item to SQLite queue (max_size: {})",
+            self.max_size
+        );
         Ok(())
     }
···
     }

     async fn depth(&self) -> Option<usize> {
-        match sqlx::query_scalar::<_, i64>(
-            "SELECT COUNT(*) FROM handle_resolution_queue"
-        )
-        .fetch_one(&self.pool)
-        .await
+        match sqlx::query_scalar::<_, i64>("SELECT COUNT(*) FROM handle_resolution_queue")
+            .fetch_one(&self.pool)
+            .await
         {
             Ok(count) => Some(count as usize),
             Err(e) => {
···
         let adapter = SqliteQueueAdapter::<HandleResolutionWork>::new(pool);

         // Push multiple items
-        let handles = vec!["alice.example.com", "bob.example.com", "charlie.example.com"];
+        let handles = vec![
+            "alice.example.com",
+            "bob.example.com",
+            "charlie.example.com",
+        ];
         for handle in &handles {
             let work = HandleResolutionWork::new(handle.to_string());
             adapter.push(work).await.unwrap();
···
     #[tokio::test]
     async fn test_sqlite_queue_work_shedding() {
         let pool = create_test_pool().await;

         // Create adapter with small max_size for testing
         let max_size = 10;
-        let adapter = SqliteQueueAdapter::<HandleResolutionWork>::with_max_size(
-            pool.clone(),
-            max_size
-        );
+        let adapter =
+            SqliteQueueAdapter::<HandleResolutionWork>::with_max_size(pool.clone(), max_size);

         // Push items up to the limit (should not trigger shedding)
         for i in 0..max_size {
···
         // After triggering shedding, queue should be around 80% of max_size
         let depth_after_shedding = adapter.depth().await.unwrap();
         let expected_size = (max_size as f64 * 0.8) as usize;

         // Allow some variance due to batch deletion
         assert!(
             depth_after_shedding <= expected_size + 1,
···
     #[tokio::test]
     async fn test_sqlite_queue_work_shedding_disabled() {
         let pool = create_test_pool().await;

         // Create adapter with max_size = 0 (disabled work shedding)
-        let adapter = SqliteQueueAdapter::<HandleResolutionWork>::with_max_size(
-            pool,
-            0
-        );
+        let adapter = SqliteQueueAdapter::<HandleResolutionWork>::with_max_size(pool, 0);

         // Push many items (should not trigger any shedding)
         for i in 0..100 {
···
         let pulled = adapter.pull().await;
         assert_eq!(pulled, Some(work));
     }
 }
```
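The shedding policy in the adapter above reduces to a small piece of arithmetic: once the queue is over `max_size`, shrink it to 80% of `max_size` in one batch so cleanup does not fire on every push. A std-only sketch of that policy (illustrative `shed_oldest` on an in-memory `Vec` of `(queued_at, work)` pairs, standing in for the real SQLite `DELETE`):

```rust
/// Batch work shedding: keep 80% of `max_size`, deleting the oldest entries
/// first, and delete nothing when shedding is disabled (`max_size == 0`)
/// or the queue is still within its limit. Returns how many were dropped.
fn shed_oldest(queue: &mut Vec<(i64, String)>, max_size: usize) -> usize {
    if max_size == 0 || queue.len() <= max_size {
        return 0; // shedding disabled, or still under the limit
    }
    // Keep 80% of max_size: deleting ~20% more than strictly necessary
    // spaces out how often shedding has to run.
    let target_size = (max_size as f64 * 0.8) as usize;
    let to_delete = queue.len().saturating_sub(target_size);
    // Oldest first, mirroring the real ORDER BY queued_at ASC, id ASC.
    queue.sort_by_key(|&(queued_at, _)| queued_at);
    queue.drain(0..to_delete);
    to_delete
}
```

With `max_size = 10` and 12 queued items, the target is 8, so 4 of the oldest entries are dropped in a single pass, matching what `test_sqlite_queue_work_shedding` expects of the adapter.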
src/queue/work.rs (+5 -5)

```diff
     #[test]
     fn test_handle_resolution_work_serialization() {
         let work = HandleResolutionWork::new("bob.example.com".to_string());

         // Test JSON serialization (which is what we actually use in the queue adapters)
         let json = serde_json::to_string(&work).expect("Failed to serialize to JSON");
         let deserialized: HandleResolutionWork =
             serde_json::from_str(&json).expect("Failed to deserialize from JSON");
         assert_eq!(work, deserialized);

         // Verify the JSON structure
         let json_value: serde_json::Value = serde_json::from_str(&json).unwrap();
         assert_eq!(json_value["handle"], "bob.example.com");
···
         let work1 = HandleResolutionWork::new("alice.example.com".to_string());
         let work2 = HandleResolutionWork::new("alice.example.com".to_string());
         let work3 = HandleResolutionWork::new("bob.example.com".to_string());

         assert_eq!(work1, work2);
         assert_ne!(work1, work3);
     }
 }
```
src/sqlite_schema.rs (+14 -15)

```diff
 //! schema used by the SQLite-backed handle resolver cache.

 use anyhow::Result;
-use sqlx::{SqlitePool, migrate::MigrateDatabase, Sqlite};
+use sqlx::{Sqlite, SqlitePool, migrate::MigrateDatabase};
 use std::path::Path;

 /// SQL schema for the handle resolution cache table.
···
     tracing::info!("Initializing SQLite database: {}", database_url);

     // Extract the database path from the URL for file-based databases
     if let Some(path) = database_url.strip_prefix("sqlite:")
         && path != ":memory:"
         && !path.is_empty()
     {
         // Create the database file if it doesn't exist
         if !Sqlite::database_exists(database_url).await? {
···
         }

         // Ensure the parent directory exists
         if let Some(parent) = Path::new(path).parent()
             && !parent.exists()
         {
             tracing::info!("Creating directory: {}", parent.display());
             std::fs::create_dir_all(parent)?;
···
     sqlx::query(CREATE_HANDLE_RESOLUTION_CACHE_TABLE)
         .execute(pool)
         .await?;

     sqlx::query(CREATE_HANDLE_RESOLUTION_QUEUE_TABLE)
         .execute(pool)
         .await?;
···

     let cutoff_timestamp = current_timestamp - (max_age_seconds as i64);

-    let result = sqlx::query(
-        "DELETE FROM handle_resolution_queue WHERE queued_at < ?1"
-    )
-    .bind(cutoff_timestamp)
-    .execute(pool)
-    .await?;
+    let result = sqlx::query("DELETE FROM handle_resolution_queue WHERE queued_at < ?1")
+        .bind(cutoff_timestamp)
+        .execute(pool)
+        .await?;

     let deleted_count = result.rows_affected();
     if deleted_count > 0 {
···
         let old_timestamp = std::time::SystemTime::now()
             .duration_since(std::time::UNIX_EPOCH)
             .unwrap()
-            .as_secs() as i64 - 3600; // 1 hour ago
+            .as_secs() as i64
+            - 3600; // 1 hour ago

         sqlx::query(
             "INSERT INTO handle_resolution_cache (key, result, created, updated) VALUES (1, ?1, ?2, ?2)"
···
         assert_eq!(total_entries, 1);
         assert!(size_bytes > 0);
     }
 }
```
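The age-based cleanup above hinges on one cutoff computation: `cutoff_timestamp = now - max_age_seconds`, after which everything queued before the cutoff is deleted. A std-only sketch of the same rule (hypothetical `cleanup_older_than`, operating on in-memory `queued_at` timestamps instead of the real `DELETE ... WHERE queued_at < ?1` query):

```rust
/// Drop entries whose `queued_at` is strictly older than the cutoff,
/// mirroring `DELETE FROM handle_resolution_queue WHERE queued_at < ?1`.
/// Returns the number of entries removed, like `rows_affected()`.
fn cleanup_older_than(entries: &mut Vec<i64>, now: i64, max_age_seconds: u64) -> u64 {
    let cutoff_timestamp = now - (max_age_seconds as i64);
    let before = entries.len();
    // Keep entries at or after the cutoff; older ones are stale work.
    entries.retain(|&queued_at| queued_at >= cutoff_timestamp);
    (before - entries.len()) as u64
}
```

Note the comparison is strict (`<` in SQL, so `>=` survives): an entry queued exactly `max_age_seconds` ago is kept, which matches the query's semantics.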
src/test_helpers.rs (+2 -2)

```diff
 use deadpool_redis::Pool;

 /// Helper function to get a Redis pool for testing.
 ///
 /// Returns None if TEST_REDIS_URL is not set, logging a skip message.
 /// This consolidates the repeated Redis test setup code.
 pub(crate) fn get_test_redis_pool() -> Option<Pool> {
···
             None => return,
         }
     };
 }
```