nix config: opencode session-search tooling, sonarr-driven tv sync, misc cleanup

- cal: sync `color` metadata alongside `displayname` in both vdirsyncer pairs; split the function arguments across lines
- opencode: add a (currently disabled) `archivist` subagent, a `session-search` skill, and a `search-history` custom tool for digging through past session history
- jacket: drop the jackett service and its nginx vhost
- sync/music: pull from the `lidarr/` subdirectory, protect filenames containing spaces (`rsync -s`), and fall back to as-is beets imports
- sync/tv: rewrite get-tv.sh to drive downloads from Sonarr's grabbed history (locking, API pagination, per-series folders) instead of mirroring every remote directory

+636 -58
+8 -3
home/profiles/cal/default.nix
```diff
···
-{ pkgs, config, age, ... }:
+{
+  pkgs,
+  config,
+  age,
+  ...
+}:

 {
   home.packages = with pkgs; [
···
 a = "contacts_local"
 b = "contacts_remote"
 collections = ["from b"]
-metadata = ["displayname"]
+metadata = ["displayname" "color"]

 [storage contacts_local]
 type = "filesystem"
···
 a = "calendar_local"
 b = "calendar_remote"
 collections = ["from b"]
-metadata = ["displayname"]
+metadata = ["displayname" "color"]

 [storage calendar_local]
 type = "filesystem"
```
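For reference, this is the pair shape the change produces (a sketch of one pair, not the full generated file). Two things worth double-checking: vdirsyncer only propagates `metadata` entries when you run `vdirsyncer metasync` (a plain `sync` ignores them), and vdirsyncer parses config values as JSON, so if this text reaches vdirsyncer verbatim (rather than being post-processed from Nix), the list likely needs a comma: `["displayname", "color"]`.

```ini
[pair contacts]
a = "contacts_local"
b = "contacts_remote"
collections = ["from b"]
# displayname and color only sync via `vdirsyncer metasync`;
# note the comma, which JSON-valued vdirsyncer config requires
metadata = ["displayname", "color"]
```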
+73
home/profiles/opencode/agents/archivist.md.disabled
```diff
+---
+description: Search through past OpenCode sessions to find relevant context, previous solutions, and historical decisions. Use this when you need to recall how something was done before or find related past work.
+mode: subagent
+model: anthropic/claude-haiku-4-5
+temperature: 0.1
+tools:
+  "*": false
+  search-history: true
+  skill: true
+permission:
+  skill:
+    "session-search": allow
+    "*": deny
+---
+
+You are the Archivist, a specialized agent that searches through OpenCode session history to find relevant past conversations, code changes, and decisions.
+
+You are running inside an AI coding system as a subagent. The main agent invokes you when it needs to find relevant context from previous sessions.
+
+## Your Purpose
+
+When invoked, you will:
+1. Search through the local OpenCode session history
+2. Find sessions and messages relevant to the query
+3. Synthesize findings into a clear, actionable answer
+
+## How to Search
+
+First, load the `session-search` skill to understand the search strategies and storage structure.
+
+Then use the `search-history` tool to find relevant sessions. You can:
+- Search by keywords, code patterns, file names, or concepts
+- Filter by project directory if the query is project-specific
+- List recent sessions to get an overview
+
+## Search Strategies
+
+1. **Start broad**: Use general keywords related to the query
+2. **Refine**: If too many results, add more specific terms or filter by directory
+3. **Cross-reference**: Search for related terms (e.g., if searching for "auth", also try "login", "authentication")
+4. **Check context**: Look at session titles and directories to understand the context
+
+## Response Format
+
+Your response should directly answer the question posed, using information from past sessions:
+
+1. **Direct answer**: What was found that addresses the question
+2. **Relevant sessions**: List session IDs where this was discussed (so user can resume if needed)
+3. **Key details**: Important snippets or decisions from the history
+
+Example response:
+```
+Based on past sessions, authentication was implemented using JWT tokens with a 24-hour expiry.
+
+**Relevant sessions:**
+- ses_abc123 - "Implementing user auth" (2024-01-15)
+- ses_def456 - "Auth token refresh" (2024-01-20)
+
+**Key details:**
+- Tokens are stored in httpOnly cookies
+- Refresh endpoint at /api/auth/refresh
+- Used jose library for JWT handling
+```
+
+## Guidelines
+
+- Be concise and direct - the main agent needs actionable information
+- Include session IDs so the user can explore further if needed
+- If nothing relevant is found, say so clearly
+- Focus on answering the specific question, not providing exhaustive history
+- Never fabricate information - only report what's actually in the history
+
+IMPORTANT: Your final message is returned to the main agent. Make it comprehensive but focused on answering the original question.
```
+104
home/profiles/opencode/skills/session-search.disabled/SKILL.md
```diff
+---
+name: session-search
+description: Advanced strategies for searching OpenCode session history. Restricted to the archivist agent.
+---
+
+# Session Search Skill
+
+This skill provides advanced strategies for searching through OpenCode's session history storage.
+
+## Storage Structure
+
+OpenCode stores data in `~/.local/share/opencode/storage/`:
+
+```
+storage/
+├── session/              # Session metadata by project
+│   └── {projectHash}/
+│       └── ses_*.json    # Session info (title, directory, timestamps)
+├── message/              # Messages organized by session
+│   └── ses_*/
+│       └── msg_*.json    # Message metadata (role, agent, model)
+├── part/                 # Actual message content
+│   └── msg_*/
+│       └── prt_*.json    # Content parts (text, tool calls)
+└── project/              # Project metadata
+    └── {hash}.json       # Worktree path, timestamps
+```
+
+## Search Tool Usage
+
+The `search-history` tool accepts:
+- `query`: Text pattern to search for (searches message content)
+- `directory`: Optional filter by project path (partial match)
+- `limit`: Max results (default 30)
+
+### Examples
+
+```typescript
+// Find all sessions mentioning "authentication"
+search-history({ query: "authentication" })
+
+// Find sessions in a specific project
+search-history({ query: "database", directory: "myproject" })
+
+// List recent sessions (empty query)
+search-history({ query: "", limit: 20 })
+```
+
+## Search Strategies
+
+### 1. Keyword Expansion
+Don't just search for the exact term. Try synonyms and related concepts:
+- "auth" → also try "login", "authentication", "jwt", "token"
+- "database" → also try "postgres", "sqlite", "db", "migration"
+- "api" → also try "endpoint", "route", "handler"
+
+### 2. Code Pattern Search
+Search for code-specific patterns:
+- Function names: `handleAuth`, `validateToken`
+- File paths: `src/auth`, `lib/database`
+- Import statements: `import.*prisma`
+- Error messages: specific error text
+
+### 3. Tool Usage Search
+Find when specific tools were used:
+- Edit operations: search for file paths that were edited
+- Bash commands: search for command names
+- Specific operations: "git push", "npm install"
+
+### 4. Directory Filtering
+Use the `directory` parameter to scope searches:
+- Filter by project name: `directory: "myapp"`
+- Filter by path segment: `directory: "usr/projects"`
+
+### 5. Iterative Refinement
+1. Start with broad search
+2. If too many results, add specificity
+3. If no results, broaden or try alternative terms
+4. Cross-reference multiple searches
+
+## Understanding Results
+
+### Session Info
+- `id`: Unique session identifier (can be used to reference)
+- `title`: Auto-generated session title
+- `directory`: Project worktree path
+- `updated`: Last activity timestamp
+
+### Content Matches
+- `sessionID`: Which session contains this match
+- `snippet`: Context around the match (±100 chars)
+- `role`: user/assistant/tool
+
+## Tips
+
+1. **Recent vs Relevant**: The tool returns recent sessions first. Older but more relevant sessions may be further in results.
+
+2. **Title Search**: Session titles are auto-generated from the conversation and can be good search targets.
+
+3. **Multiple Searches**: Don't hesitate to run multiple searches with different terms to build a complete picture.
+
+4. **Context Matters**: The snippet provides limited context. Session titles and directories help understand the broader context.
+
+5. **No Results**: If no results found, the pattern may be too specific. Try shorter or more general terms.
```
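The storage layout the skill documents can be exercised without OpenCode at all. A minimal sketch, building a throwaway copy of the `part/` tree and searching it the way the tool does; `grep -ril` stands in for the tool's `rg -i -l` so the sketch runs even where ripgrep isn't installed, and the session/message IDs are made up for illustration:

```shell
# build a fake storage/part tree: one message part per msg_* directory
store=$(mktemp -d)
mkdir -p "$store/part/msg_001" "$store/part/msg_002"
printf '{"sessionID":"ses_abc","text":"implemented JWT authentication"}' \
  > "$store/part/msg_001/prt_001.json"
printf '{"sessionID":"ses_def","text":"fixed css layout"}' \
  > "$store/part/msg_002/prt_001.json"

# broad, case-insensitive keyword search over message content parts,
# listing only the files that match (like `rg -i -l`)
matches=$(grep -ril "authentication" "$store/part")
echo "$matches"

rm -rf "$store"
```

Only the part file whose text mentions the keyword is listed; the tool then reads each matching file to recover its `sessionID` and a snippet.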
+2
home/profiles/opencode/tools/.gitignore
```diff
+node_modules/
+bun.lock
```
+5
home/profiles/opencode/tools/package.json
```diff
+{
+  "dependencies": {
+    "@opencode-ai/plugin": "1.1.31"
+  }
+}
```
+275
home/profiles/opencode/tools/search-history.ts.disabled
```diff
+import { tool } from "@opencode-ai/plugin"
+import { $ } from "bun"
+import { readdir, readFile } from "fs/promises"
+import { join } from "path"
+import { homedir } from "os"
+
+const STORAGE_PATH = join(homedir(), ".local/share/opencode/storage")
+
+interface SessionInfo {
+  id: string
+  title: string
+  directory: string
+  projectID: string
+  created: number
+  updated: number
+}
+
+interface MessageMatch {
+  sessionID: string
+  messageID: string
+  snippet: string
+  role: string
+  timestamp?: number
+}
+
+interface SearchResult {
+  sessions: SessionInfo[]
+  matches: MessageMatch[]
+  totalMatches: number
+}
+
+async function getSessionInfo(sessionID: string): Promise<SessionInfo | null> {
+  try {
+    // Sessions are stored in directories named by project hash
+    const sessionDirs = await readdir(join(STORAGE_PATH, "session"))
+    for (const dir of sessionDirs) {
+      const sessionPath = join(STORAGE_PATH, "session", dir)
+      const files = await readdir(sessionPath).catch(() => [])
+      for (const file of files) {
+        if (file.startsWith(sessionID) || file.includes(sessionID)) {
+          const content = await readFile(join(sessionPath, file), "utf-8")
+          const data = JSON.parse(content)
+          return {
+            id: data.id,
+            title: data.title || "Untitled",
+            directory: data.directory || "",
+            projectID: data.projectID || "",
+            created: data.time?.created || 0,
+            updated: data.time?.updated || 0,
+          }
+        }
+      }
+    }
+  } catch {
+    // Fall back to searching message directories
+  }
+  return null
+}
+
+async function searchWithRipgrep(
+  pattern: string,
+  directory?: string,
+  limit: number = 50
+): Promise<SearchResult> {
+  const matches: MessageMatch[] = []
+  const sessionIDs = new Set<string>()
+
+  // Search through part storage (contains actual message content)
+  const partPath = join(STORAGE_PATH, "part")
+
+  try {
+    // Use ripgrep to search JSON files, extracting context around matches
+    const rgResult = await $`rg -i -l ${pattern} ${partPath} --type json 2>/dev/null || true`.text()
+    const matchingFiles = rgResult.trim().split("\n").filter(Boolean)
+
+    for (const file of matchingFiles.slice(0, limit * 2)) {
+      try {
+        const content = await readFile(file, "utf-8")
+        const data = JSON.parse(content)
+
+        // Filter by directory if specified
+        if (directory) {
+          const sessionInfo = await getSessionInfo(data.sessionID)
+          if (sessionInfo && !sessionInfo.directory.includes(directory)) {
+            continue
+          }
+        }
+
+        // Extract snippet around the match
+        const text = data.text || data.content || JSON.stringify(data)
+        const lowerText = text.toLowerCase()
+        const lowerPattern = pattern.toLowerCase()
+        const matchIndex = lowerText.indexOf(lowerPattern)
+
+        if (matchIndex !== -1) {
+          const start = Math.max(0, matchIndex - 100)
+          const end = Math.min(text.length, matchIndex + pattern.length + 100)
+          const snippet = (start > 0 ? "..." : "") +
+            text.slice(start, end) +
+            (end < text.length ? "..." : "")
+
+          matches.push({
+            sessionID: data.sessionID,
+            messageID: data.messageID,
+            snippet: snippet.replace(/\n/g, " ").trim(),
+            role: data.role || "unknown",
+            timestamp: data.time?.created,
+          })
+
+          sessionIDs.add(data.sessionID)
+
+          if (matches.length >= limit) break
+        }
+      } catch {
+        // Skip files that can't be parsed
+      }
+    }
+  } catch (e) {
+    // ripgrep not found or error
+  }
+
+  // Also search message metadata for titles
+  const messagePath = join(STORAGE_PATH, "message")
+  try {
+    const rgResult = await $`rg -i -l ${pattern} ${messagePath} --type json 2>/dev/null || true`.text()
+    const matchingFiles = rgResult.trim().split("\n").filter(Boolean)
+
+    for (const file of matchingFiles.slice(0, 20)) {
+      try {
+        const content = await readFile(file, "utf-8")
+        const data = JSON.parse(content)
+        if (data.sessionID) {
+          sessionIDs.add(data.sessionID)
+        }
+      } catch {
+        // Skip
+      }
+    }
+  } catch {
+    // Ignore errors
+  }
+
+  // Get session info for all matched sessions
+  const sessions: SessionInfo[] = []
+  for (const sessionID of sessionIDs) {
+    const info = await getSessionInfo(sessionID)
+    if (info) {
+      // Apply directory filter for sessions too
+      if (!directory || info.directory.includes(directory)) {
+        sessions.push(info)
+      }
+    }
+  }
+
+  // Sort sessions by most recent
+  sessions.sort((a, b) => b.updated - a.updated)
+
+  return {
+    sessions: sessions.slice(0, 20),
+    matches: matches.slice(0, limit),
+    totalMatches: matches.length,
+  }
+}
+
+async function listRecentSessions(
+  directory?: string,
+  limit: number = 20
+): Promise<SessionInfo[]> {
+  const sessions: SessionInfo[] = []
+
+  try {
+    const sessionDirs = await readdir(join(STORAGE_PATH, "session"))
+
+    for (const dir of sessionDirs) {
+      const sessionPath = join(STORAGE_PATH, "session", dir)
+      const files = await readdir(sessionPath).catch(() => [])
+
+      for (const file of files) {
+        if (!file.endsWith(".json")) continue
+        try {
+          const content = await readFile(join(sessionPath, file), "utf-8")
+          const data = JSON.parse(content)
+
+          if (directory && !data.directory?.includes(directory)) {
+            continue
+          }
+
+          sessions.push({
+            id: data.id,
+            title: data.title || "Untitled",
+            directory: data.directory || "",
+            projectID: data.projectID || "",
+            created: data.time?.created || 0,
+            updated: data.time?.updated || 0,
+          })
+        } catch {
+          // Skip invalid files
+        }
+      }
+    }
+  } catch {
+    // Storage doesn't exist
+  }
+
+  sessions.sort((a, b) => b.updated - a.updated)
+  return sessions.slice(0, limit)
+}
+
+export default tool({
+  description:
+    "Search through OpenCode session history to find past conversations, code changes, and decisions. Use this to find relevant context from previous sessions.",
+  args: {
+    query: tool.schema
+      .string()
+      .describe(
+        "Search pattern to find in session history. Searches message content, titles, and tool outputs."
+      ),
+    directory: tool.schema
+      .string()
+      .optional()
+      .describe(
+        "Optional: Filter results to sessions from a specific project directory path (partial match)"
+      ),
+    limit: tool.schema
+      .number()
+      .optional()
+      .describe("Maximum number of matches to return (default: 30)"),
+  },
+  async execute(args) {
+    const limit = args.limit || 30
+
+    if (!args.query || args.query.trim() === "") {
+      // List recent sessions if no query
+      const sessions = await listRecentSessions(args.directory, limit)
+      return JSON.stringify(
+        {
+          type: "recent_sessions",
+          sessions,
+          message: `Found ${sessions.length} recent sessions${args.directory ? ` in ${args.directory}` : ""}`,
+        },
+        null,
+        2
+      )
+    }
+
+    const results = await searchWithRipgrep(args.query, args.directory, limit)
+
+    // Format output for the agent
+    let output = `## Search Results for "${args.query}"\n\n`
+
+    if (results.sessions.length > 0) {
+      output += `### Relevant Sessions (${results.sessions.length})\n\n`
+      for (const session of results.sessions) {
+        const date = new Date(session.updated).toLocaleDateString()
+        output += `- **${session.title}** (${session.id})\n`
+        output += `  - Directory: \`${session.directory}\`\n`
+        output += `  - Last updated: ${date}\n\n`
+      }
+    }
+
+    if (results.matches.length > 0) {
+      output += `### Content Matches (${results.totalMatches})\n\n`
+      for (const match of results.matches.slice(0, 15)) {
+        output += `**Session:** ${match.sessionID}\n`
+        output += `> ${match.snippet}\n\n`
+      }
+    }
+
+    if (results.sessions.length === 0 && results.matches.length === 0) {
+      output += `No matches found for "${args.query}"${args.directory ? ` in ${args.directory}` : ""}\n`
+    }
+
+    return output
+  },
+})
```
-18
hosts/profiles/jacket/default.nix
```diff
···
   ...
 }:
 {
-  services.jackett = {
-    enable = true;
-    port = 8011;
-    user = "jackett";
-    group = "transmission";
-    package = pkgs.unstable.jackett;
-  };
-  services.nginx.virtualHosts."jackett.mossnet.lan" = {
-    enableACME = false;
-    forceSSL = false;
-
-    locations."/" = {
-      extraConfig = ''
-        proxy_pass http://127.0.0.1:8011/;
-      '';
-    };
-  };
-
   services.lidarr = {
     enable = true;
     group = "audio";
```
+4 -4
hosts/profiles/sync/music/get-music.sh
```diff
···
 set -euo pipefail

 REMOTE_HOST="aynish@talos.feralhosting.com"
-REMOTE_PATH="private/transmission/data/"
+REMOTE_PATH="private/transmission/data/lidarr/"
 LOCAL_PATH="/tank/new-music"
 TRACKING_FILE="/tank/new-music/.downloaded_albums"
 LOG_FILE="/tank/new-music/download-log"
···
 # Get list of albums on remote server
 echo "$(date): Checking for new albums on seedbox..." >>"$LOG_FILE"
-REMOTE_ALBUMS=$(rsync --dry-run --list-only "$REMOTE_HOST:$REMOTE_PATH" | grep '^d' | awk '{$1=$2=$3=$4=""; sub(/^ +/, ""); print}' | grep -v '^\.' | grep -v '^tv-sonarr$') || true
+REMOTE_ALBUMS=$(rsync --dry-run -s --list-only "$REMOTE_HOST:$REMOTE_PATH" | grep '^d' | awk '{print substr($0, index($0, $5))}' | grep -v '^\.' | grep -v '^tv-sonarr$') || true

 if [ -z "$REMOTE_ALBUMS" ]; then
   echo "$(date): No albums found on remote server" >>"$LOG_FILE"
···
 while IFS= read -r album; do
   if [ -n "$album" ]; then
     echo "$(date): Downloading $album" >>"$LOG_FILE"
-    if rsync -r --log-file="$LOG_FILE" "$REMOTE_HOST:$REMOTE_PATH$album/" "$LOCAL_PATH/$album/"; then
+    if rsync -r -s --log-file="$LOG_FILE" "$REMOTE_HOST:$REMOTE_PATH$album/" "$LOCAL_PATH/$album/"; then
       echo "$album" >>"$TRACKING_FILE"
       echo "$(date): Successfully downloaded $album" >>"$LOG_FILE"
···
       echo "$(date): Importing $album to beets..." >>"$LOG_FILE"
       # Set umask to allow group read/write access
       umask 002
-      if beet import -q "$LOCAL_PATH/$album"; then
+      if beet import -q --quiet-fallback=asis "$LOCAL_PATH/$album"; then
         echo "$(date): Successfully imported $album to beets" >>"$LOG_FILE"
       else
         echo "$(date): Failed to import $album to beets" >>"$LOG_FILE"
```
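The awk swap above is worth a closer look: the old `{$1=$2=$3=$4=""; ...}` approach rebuilds the line with single-space separators, which can collapse repeated spaces inside a name, while `substr($0, index($0, $5))` slices the original line from the fifth field onward and preserves the name byte-for-byte. A small sketch against a made-up `rsync --list-only` line:

```shell
# sample rsync --list-only output line (perms, size, date, time, name);
# the album name here is invented for illustration
line='drwxrwxr-x          4,096 2024/01/15 12:00:00 Artist - Album (2024)'

# slice from where the 5th field begins, keeping the name's spacing intact
name=$(echo "$line" | awk '{print substr($0, index($0, $5))}')
echo "$name"
```

Caveat: `index($0, $5)` finds the first occurrence of the fifth field's text anywhere in the line, so a name whose first word also appears in the date/size columns could mis-slice; for typical listings it behaves as intended.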
+3
hosts/profiles/sync/tv/default.nix
```diff
···
   serviceConfig.Type = "oneshot";
   path = [
     pkgs.coreutils
+    pkgs.gnugrep
+    pkgs.util-linux
     pkgs.openssh
     pkgs.gawk
     pkgs.rsync
     pkgs.curl
+    pkgs.jq
   ];
   script = builtins.readFile ./get-tv.sh;
   serviceConfig = {
```
+162 -33
hosts/profiles/sync/tv/get-tv.sh
```diff
···
 LOCAL_PATH="/tank/media/tv"
 TRACKING_FILE="/tank/media/tv/.downloaded_shows"
 LOG_FILE="/tank/media/tv/download-log"
+LOCK_FILE="/tmp/get-tv-sync.lock"
+SONARR_API_KEY="PLACEHOLDER"
+SONARR_URL="http://localhost:8989"
+
+umask 002

 # Create local directory and tracking file if they don't exist
 mkdir -p "$LOCAL_PATH"
 touch "$TRACKING_FILE"

-# Get list of shows on remote server
-echo "$(date): Checking for new TV shows on seedbox..." >>"$LOG_FILE"
-REMOTE_SHOWS=$(rsync --dry-run --list-only "$REMOTE_HOST:$REMOTE_PATH" | grep '^d' | awk '{$1=$2=$3=$4=""; sub(/^ +/, ""); print}' | grep -v '^\.') || true
+log() { echo "$(date): $1" >>"$LOG_FILE"; }

-if [ -z "$REMOTE_SHOWS" ]; then
-  echo "$(date): No shows found on remote server" >>"$LOG_FILE"
+# Acquire exclusive lock to prevent concurrent runs
+exec 9>"$LOCK_FILE"
+if ! flock -n 9; then
+  log "Another instance is already running, exiting"
   exit 0
 fi

-# Check each show against tracking file
-NEW_SHOWS=""
-while IFS= read -r show; do
-  if [ -n "$show" ] && ! grep -qF "$show" "$TRACKING_FILE"; then
-    NEW_SHOWS="$NEW_SHOWS$show\n"
-    echo "$(date): Found new show: $show" >>"$LOG_FILE"
+# Cleanup temp files on exit
+TMPFILE=$(mktemp /tmp/sonarr-history.XXXXXX.json)
+trap 'rm -f "$TMPFILE"' EXIT
+
+# Fetch all grabbed history from Sonarr, paginating to avoid missing records
+log "Querying Sonarr for grabbed episodes..."
+ALL_RECORDS=""
+PAGE=1
+PAGE_SIZE=250
+MAX_PAGES=20
+while true; do
+  if [ "$PAGE" -gt "$MAX_PAGES" ]; then
+    log "WARNING: Hit max page limit ($MAX_PAGES), some history may be missed"
+    break
   fi
-done <<<"$REMOTE_SHOWS"
+  HTTP_CODE=$(curl -s --connect-timeout 10 --max-time 30 \
+    -o "$TMPFILE" -w "%{http_code}" \
+    "$SONARR_URL/api/v3/history?pageSize=$PAGE_SIZE&page=$PAGE&sortKey=date&sortDirection=descending&includeSeries=true" \
+    -H "X-Api-Key: $SONARR_API_KEY")
+
+  if [ "$HTTP_CODE" != "200" ]; then
+    log "ERROR: Sonarr API returned HTTP $HTTP_CODE (page $PAGE)"
+    exit 1
+  fi
+
+  # Validate JSON structure
+  if ! jq -e '.records' "$TMPFILE" >/dev/null 2>&1; then
+    log "ERROR: Sonarr API returned invalid JSON or missing records field (page $PAGE)"
+    exit 1
+  fi
+
+  # Parse grabbed records from this page
+  # Filter to grabbed events with valid sourceTitle and series.path
+  # Handle trailing slashes on series.path, skip null paths
+  PAGE_RECORDS=$(jq -r '
+    .records[]
+    | select(.eventType == "grabbed")
+    | select(.sourceTitle != null and .sourceTitle != "")
+    | select(.series.path != null and .series.path != "")
+    | (.series.path | rtrimstr("/") | split("/") | last) as $folder
+    | select($folder != "")
+    | "\(.sourceTitle)\t\($folder)"
+  ' "$TMPFILE") || true
+
+  if [ -n "$PAGE_RECORDS" ]; then
+    if [ -n "$ALL_RECORDS" ]; then
+      ALL_RECORDS="$ALL_RECORDS"$'\n'"$PAGE_RECORDS"
+    else
+      ALL_RECORDS="$PAGE_RECORDS"
+    fi
+  fi
+
+  # Check if there are more pages
+  TOTAL_RECORDS=$(jq -r '.totalRecords // 0' "$TMPFILE")
+  if ! [[ "$TOTAL_RECORDS" =~ ^[0-9]+$ ]]; then
+    log "WARNING: Invalid totalRecords value from Sonarr: $TOTAL_RECORDS"
+    break
+  fi
+  FETCHED=$((PAGE * PAGE_SIZE))
+  if [ "$FETCHED" -ge "$TOTAL_RECORDS" ]; then
+    break
+  fi
+  PAGE=$((PAGE + 1))
+done
+
+# Deduplicate by sourceTitle (keep first occurrence = most recent, preserving input order)
+RECORDS=$(echo "$ALL_RECORDS" | awk -F'\t' '!seen[$1]++')
+
+if [ -z "$RECORDS" ]; then
+  log "No grabbed episodes found in Sonarr history"
+  exit 0
+fi
+
+# Get listing of what actually exists on the remote seedbox
+log "Listing remote seedbox contents..."
+RSYNC_OUTPUT=$(rsync -s --dry-run --list-only --timeout=30 "$REMOTE_HOST:$REMOTE_PATH" 2>&1) || {
+  log "ERROR: Failed to list remote seedbox: $RSYNC_OUTPUT"
+  exit 1
+}
+
+REMOTE_LISTING=$(echo "$RSYNC_OUTPUT" | awk '{
+  type = substr($1, 1, 1);
+  $1=$2=$3=$4="";
+  sub(/^ +/, "");
+  if ($0 != "." && $0 != "") print type "\t" $0
+}')

-if [ -z "$NEW_SHOWS" ]; then
-  echo "$(date): No new shows to download" >>"$LOG_FILE"
+if [ -z "$REMOTE_LISTING" ]; then
+  log "No files found on remote seedbox"
   exit 0
 fi

-# Download new shows only
-echo "$(date): Starting download of new shows..." >>"$LOG_FILE"
-while IFS= read -r show; do
-  if [ -n "$show" ]; then
-    echo "$(date): Downloading $show" >>"$LOG_FILE"
-    # Set umask to allow group read/write access for Jellyfin
-    umask 002
-    if rsync -r --log-file="$LOG_FILE" "$REMOTE_HOST:$REMOTE_PATH$show/" "$LOCAL_PATH/$show/"; then
-      echo "$show" >>"$TRACKING_FILE"
-      echo "$(date): Successfully downloaded $show" >>"$LOG_FILE"
+# Write remote listing to temp file for awk matching (avoids passing sourceTitle through -v)
+REMOTE_LISTING_FILE=$(mktemp /tmp/remote-listing.XXXXXX)
+trap 'rm -f "$TMPFILE" "$REMOTE_LISTING_FILE"' EXIT
+echo "$REMOTE_LISTING" >"$REMOTE_LISTING_FILE"
+
+# Process each grabbed episode from Sonarr
+DOWNLOADED=0
+SKIPPED=0
+while IFS=$'\t' read -r source_title series_folder; do
+  [ -z "$source_title" ] && continue
+  [ -z "$series_folder" ] && continue
+
+  # Skip if already downloaded (exact whole-line match)
+  if grep -qxF "$source_title" "$TRACKING_FILE"; then
+    SKIPPED=$((SKIPPED + 1))
+    continue
+  fi
+
+  # Check if it exists on the remote seedbox (exact match on entry name)
+  # Use ENVIRON to pass source_title to awk, avoiding backslash interpretation from -v
+  REMOTE_MATCH=$(SOURCE_TITLE="$source_title" awk -F'\t' 'BEGIN { name = ENVIRON["SOURCE_TITLE"] } $2 == name { print; exit }' "$REMOTE_LISTING_FILE") || true
+  if [ -z "$REMOTE_MATCH" ]; then
+    log "Not yet on seedbox (still downloading?): $source_title"
+    continue
+  fi
+
+  # Determine if it's a directory or file from the listing
+  ENTRY_TYPE=$(echo "$REMOTE_MATCH" | cut -f1)
+
+  log "Downloading '$source_title' -> '$series_folder/'"
+  mkdir -p "$LOCAL_PATH/$series_folder"
+
+  if [ "$ENTRY_TYPE" = "d" ]; then
+    # Directory: rsync recursively, trailing slash to put contents into series folder
+    if rsync -rs --partial --timeout=600 --log-file="$LOG_FILE" \
+      "$REMOTE_HOST:$REMOTE_PATH$source_title/" \
+      "$LOCAL_PATH/$series_folder/"; then
+      echo "$source_title" >>"$TRACKING_FILE"
+      log "Successfully downloaded directory: $source_title"
+      DOWNLOADED=$((DOWNLOADED + 1))
     else
-      echo "$(date): Failed to download $show" >>"$LOG_FILE"
+      log "Failed to download directory: $source_title"
+    fi
+  else
+    # Single file: rsync into series folder
+    if rsync -s --partial --timeout=600 --log-file="$LOG_FILE" \
+      "$REMOTE_HOST:$REMOTE_PATH$source_title" \
+      "$LOCAL_PATH/$series_folder/"; then
+      echo "$source_title" >>"$TRACKING_FILE"
+      log "Successfully downloaded file: $source_title"
+      DOWNLOADED=$((DOWNLOADED + 1))
+    else
+      log "Failed to download file: $source_title"
     fi
   fi
-done <<<"$(echo -e "$NEW_SHOWS")"
+done <<<"$RECORDS"
+
+log "Downloaded $DOWNLOADED new items ($SKIPPED already tracked)"

 # Trigger Jellyfin library scan
-echo "$(date): Triggering Jellyfin library refresh..." >>"$LOG_FILE"
-if curl -s -X POST "http://localhost:8096/Library/Refresh" \
-  -H "X-Emby-Token: aef1b1e0cd5445dc97b755ef8c6224e5"; then
-  echo "$(date): Jellyfin library refresh triggered" >>"$LOG_FILE"
-else
-  echo "$(date): Failed to trigger Jellyfin library refresh" >>"$LOG_FILE"
+if [ "$DOWNLOADED" -gt 0 ]; then
+  log "Triggering Jellyfin library refresh..."
+  if curl -s --connect-timeout 10 --max-time 30 \
+    -X POST "http://localhost:8096/Library/Refresh" \
+    -H "X-Emby-Token: aef1b1e0cd5445dc97b755ef8c6224e5"; then
+    log "Jellyfin library refresh triggered"
+  else
+    log "Failed to trigger Jellyfin library refresh"
+  fi
 fi

-echo "$(date): TV sync completed" >>"$LOG_FILE"
+log "TV sync completed"
```