.gitignore
···
 .vscode/
 .env

-# Feed Generator Binary
+# Misc Binaries
 feedgen
+indexer/indexer

 # Test binary, built with `go test -c`
 *.test
···
 # Go workspace file
 go.work
+
+# trash
+.DS_Store
+/*/.DS_Store
+63 -49
README.md
···
-# go-bsky-feed-generator
-A minimal implementation of a BlueSky Feed Generator in Go
-
-
-## Requirements
+# Rinds
+A collection of feeds under one roof.

-To run this feed generator, all you need is `docker` with `docker-compose`.
+I don't like Docker and I don't need to compile this, okay thanks!

 ## Running

-Start up the feed generator by running: `make up`
-
-This will build the feed generator service binary inside a docker container and stand up the service on your machine at port `9032`.
-
-To view a sample static feed (with only one post) go to:
-
-- [`http://localhost:9032/xrpc/app.bsky.feed.getFeedSkeleton?feed=at://did:plc:replace-me-with-your-did/app.bsky.feed.generator/static`](http://localhost:9032/xrpc/app.bsky.feed.getFeedSkeleton?feed=at://did:plc:replace-me-with-your-did/app.bsky.feed.generator/static)
-
-Update the variables in `.env` when you actually want to deploy the service somewhere, at which point `did:plc:replace-me-with-your-did` should be replaced with the value of `FEED_ACTOR_DID`.
-
-## Accessing
-
-This service exposes the following routes:
-
-- `/.well-known/did.json`
-  - This route is used by ATProto to verify ownership of the DID the service is claiming, it's a static JSON document.
-  - You can see how this is generated in `pkg/gin/endpoints.go:GetWellKnownDID()`
-- `/xrpc/app.bsky.feed.getFeedSkeleton`
-  - This route is what clients call to generate a feed page, it includes three query parameters for feed generation: `feed`, `cursor`, and `limit`
-  - You can see how those are parsed and handled in `pkg/gin/endpoints.go:GetFeedSkeleton()`
-- `/xrpc/app.bsky.feed.describeFeedGenerator`
-  - This route is how the service advertises which feeds it supports to clients.
-  - You can see how those are parsed and handled in `pkg/gin/endpoints.go:DescribeFeeds()`
-
-## Publishing
-
-Once you've got your feed generator up and running and have it exposed to the internet, you can publish the feed using the script from the official BSky repo [here](https://github.com/bluesky-social/feed-generator/blob/main/scripts/publishFeedGen.ts).
+Ain't that hard: install Go, set up Postgres, and generate the required feed definitions under your user account. You can use https://pdsls.dev to create an `app.bsky.feed.generator` record.
+Set the `rkey` to the desired short URL value. It'll look like:
+```
+https://bsky.app/profile/{you}/feed/{rkey}
+```
+For the contents you can use the example below:
+```
+{
+  "did": "did:web:${INSERT DID:WEB HERE}",
+  "$type": "app.bsky.feed.generator",
+  "createdAt": "2025-01-21T11:33:02.396Z",
+  "description": "wowww very descriptive",
+  "displayName": "Cool Feed Name"
+}
+```

-Your feed will be published under _your_ DID and should show up in your profile under the `feeds` tab.
+## Env
+You can check out `.env.example` for an example.
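For reference, a minimal sketch of what such a file might contain. The variable names are taken from the code in this PR (`DB_*` in `cmd/main.go` and the indexer, `FEED_ACTOR_DID` from the upstream template); the values here are placeholders, and the real, full list lives in `.env.example`:

```
# placeholder values; see .env.example for the authoritative list
FEED_ACTOR_DID=did:plc:replace-me-with-your-did
DB_HOST=localhost
DB_USER=feeds
DB_NAME=feeds
DB_PASSWORD=changeme
```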

-## Architecture

-This repo is structured to abstract away a `Feed` interface that allows for you to add all sorts of feeds to the router.
+## Postgres
+Be sure to set up `.env` correctly.

-These feeds can be simple static feeds like the `pkg/feeds/static/feed.go` implementation, or they can be much more complex feeds that draw on different data sources and filter them in cool ways to produce pages of feed items.
+All relevant tables should be created automatically when needed.

-The `Feed` interface is defined by any struct implementing two functions:
+## Index
+Start Postgres first, then run the firehose ingester. Move into its directory:
+```
+cd ./indexer
+```
+then compile and run it:
+```
+go build -o indexer ./indexer.go && export $(grep -v '^#' ../.env | xargs) && ./indexer
+```
+After it has been compiled, you can use `rerun.sh` to make sure it automatically recovers after a failure.

-``` go
-type Feed interface {
-	GetPage(ctx context.Context, feed string, userDID string, limit int64, cursor string) (feedPosts []*appbsky.FeedDefs_SkeletonFeedPost, newCursor *string, err error)
-	Describe(ctx context.Context) ([]appbsky.FeedDescribeFeedGenerator_Feed, error)
-}
+## Serve
+Make sure the indexer (or at least Postgres) is running first:
+```
+go build -o feedgen cmd/main.go && export $(grep -v '^#' ./.env | xargs) && ./feedgen
 ```
+The logs are pretty verbose imo, fyi.

-`GetPage` gets a page of a feed for a given user with the limit and cursor provided, this is the main function that serves posts to a user.
+## Todo
+- [ ] Faster Indexing
+- [ ] Proper Up-to-Date Following Indexing
+- [x] Repost Indicators
+- [ ] Cache Timeouts
+  - [x] Likes
+  - [x] Posts
+  - [x] Feed Caches
+  - [ ] Followings
+- [ ] More Fresh Feed Variants
+  - [ ] unFresh
+  - [x] +9 hrs
+  - [ ] Glimpse
+  - [ ] Media
+  - [ ] Fresh: Gram
+  - [ ] Fresh: Tube
+  - [ ] Fresh: Media Only
+  - [ ] Fresh: Text Only

-`Describe` is used by the router to advertise what feeds are available, for foward compatibility, `Feed`s should be self describing in case this endpoint allows more details about feeds to be provided.
+## Architecture
+Based on [go-bsky-feed-generator](https://github.com/ericvolp12/go-bsky-feed-generator). Read the README in the linked repo for more info about how it all works.

-You can configure external resources and requirements in your Feed implementation before `Adding` the feed to the `FeedRouter` with `feedRouter.AddFeed([]string{"{feed_name}"}, feedInstance)`
+### /feeds/static
+Basic example feed from the template. Kept as a sanity check in case all else seems to fail.

-This `Feed` interface is somewhat flexible right now but it could be better. I'm not sure if it will change in the future so keep that in mind when using this template.
+### /feeds/fresh
+Fresh feeds, all based around a shared Following feed builder and logic to mark posts as viewed. May contain some remnant references to the old name "rinds".

-- This has since been updated to allow a Feed to take in a feed name when generating a page and register multiple aliases for feeds that are supported.
+129
cmd/main.go
···
 import (
 	"context"
+	"database/sql"
 	"fmt"
 	"log"
 	"net/http"
 	"net/url"
 	"os"
 	"time"

+	_ "github.com/lib/pq"

 	auth "github.com/ericvolp12/go-bsky-feed-generator/pkg/auth"
 	"github.com/ericvolp12/go-bsky-feed-generator/pkg/feedrouter"
 	ginendpoints "github.com/ericvolp12/go-bsky-feed-generator/pkg/gin"

+	freshfeeds "github.com/ericvolp12/go-bsky-feed-generator/pkg/feeds/fresh"
 	staticfeed "github.com/ericvolp12/go-bsky-feed-generator/pkg/feeds/static"
 	ginprometheus "github.com/ericvolp12/go-gin-prometheus"
 	"github.com/gin-gonic/gin"
···
 func main() {
 	ctx := context.Background()
+
+	// Open the database connection
+	dbHost := os.Getenv("DB_HOST")
+	dbUser := os.Getenv("DB_USER")
+	dbName := os.Getenv("DB_NAME")
+	dbPassword := os.Getenv("DB_PASSWORD")
+	db, err := sql.Open("postgres", fmt.Sprintf("user=%s dbname=%s host=%s password=%s sslmode=disable", dbUser, dbName, dbHost, dbPassword))
+	if err != nil {
+		log.Fatalf("Failed to open database: %v", err)
+	}
+	defer db.Close()
+
+	// Ping the database to ensure the connection is established
+	if err := db.Ping(); err != nil {
+		log.Fatalf("Failed to ping database: %v", err)
+	}

 	// Configure feed generator from environment variables
···
 		[]string{"at://did:plc:q6gjnaw2blty4crticxkmujt/app.bsky.feed.post/3jx7msc4ive26"},
 	)

+	// Build the fresh feeds and their reply variants (TODO: clean this up)
+
+	rindsFeed, rindsFeedAliases, err := freshfeeds.NewStaticFeed(
+		ctx,
+		feedActorDID,
+		"rinds",
+		// This static post is the conversation that sparked this demo repo
+		[]string{"at://did:plc:mn45tewwnse5btfftvd3powc/app.bsky.feed.post/3kgjjhlsnoi2f"},
+		db,
+		"rinds",
+		false,
+	)
+
+	randomFeed, randomFeedAliases, err := freshfeeds.NewStaticFeed(
+		ctx,
+		feedActorDID,
+		"random",
+		// This static post is the conversation that sparked this demo repo
+		[]string{"at://did:plc:mn45tewwnse5btfftvd3powc/app.bsky.feed.post/3kgjjhlsnoi2f"},
+		db,
+		"random",
+		false,
+	)
+
+	repostsFeed, repostsFeedAliases, err := freshfeeds.NewStaticFeed(
+		ctx,
+		feedActorDID,
+		"reposts",
+		// This static post is the conversation that sparked this demo repo
+		[]string{"at://did:plc:mn45tewwnse5btfftvd3powc/app.bsky.feed.post/3kgjjhlsnoi2f"},
+		db,
+		"reposts",
+		false,
+	)
+
+	mnineFeed, mnineFeedAliases, err := freshfeeds.NewStaticFeed(
+		ctx,
+		feedActorDID,
+		"mnine",
+		// This static post is the conversation that sparked this demo repo
+		[]string{"at://did:plc:mn45tewwnse5btfftvd3powc/app.bsky.feed.post/3kgjjhlsnoi2f"},
+		db,
+		"mnine",
+		false,
+	)
+
+	rrindsFeed, rrindsFeedAliases, err := freshfeeds.NewStaticFeed(
+		ctx,
+		feedActorDID,
+		"rinds-replies",
+		// This static post is the conversation that sparked this demo repo
+		[]string{"at://did:plc:mn45tewwnse5btfftvd3powc/app.bsky.feed.post/3kgjjhlsnoi2f"},
+		db,
+		"rinds",
+		true,
+	)
+
+	rrandomFeed, rrandomFeedAliases, err := freshfeeds.NewStaticFeed(
+		ctx,
+		feedActorDID,
+		"random-replies",
+		// This static post is the conversation that sparked this demo repo
+		[]string{"at://did:plc:mn45tewwnse5btfftvd3powc/app.bsky.feed.post/3kgjjhlsnoi2f"},
+		db,
+		"random",
+		true,
+	)
+
+	rrepostsFeed, rrepostsFeedAliases, err := freshfeeds.NewStaticFeed(
+		ctx,
+		feedActorDID,
+		"reposts-replies",
+		// This static post is the conversation that sparked this demo repo
+		[]string{"at://did:plc:mn45tewwnse5btfftvd3powc/app.bsky.feed.post/3kgjjhlsnoi2f"},
+		db,
+		"reposts",
+		true,
+	)
+
+	rmnineFeed, rmnineFeedAliases, err := freshfeeds.NewStaticFeed(
+		ctx,
+		feedActorDID,
+		"mnine-replies",
+		// This static post is the conversation that sparked this demo repo
+		[]string{"at://did:plc:mn45tewwnse5btfftvd3powc/app.bsky.feed.post/3kgjjhlsnoi2f"},
+		db,
+		"mnine",
+		true,
+	)
+
+	orepliesFeed, orepliesFeedAliases, err := freshfeeds.NewStaticFeed(
+		ctx,
+		feedActorDID,
+		"oreplies",
+		// This static post is the conversation that sparked this demo repo
+		[]string{"at://did:plc:mn45tewwnse5btfftvd3powc/app.bsky.feed.post/3kgjjhlsnoi2f"},
+		db,
+		"oreplies",
+		true,
+	)
+	// Only the last err survives the shadowing above; NewStaticFeed currently
+	// always returns nil, but check anyway so a future change doesn't pass silently.
+	if err != nil {
+		log.Fatalf("Failed to create feeds: %v", err)
+	}
 	// Add the static feed to the feed generator
 	feedRouter.AddFeed(staticFeedAliases, staticFeed)
+
+	feedRouter.AddFeed(rindsFeedAliases, rindsFeed)
+	feedRouter.AddFeed(randomFeedAliases, randomFeed)
+	feedRouter.AddFeed(repostsFeedAliases, repostsFeed)
+	feedRouter.AddFeed(mnineFeedAliases, mnineFeed)
+
+	feedRouter.AddFeed(rrindsFeedAliases, rrindsFeed)
+	feedRouter.AddFeed(rrandomFeedAliases, rrandomFeed)
+	feedRouter.AddFeed(rrepostsFeedAliases, rrepostsFeed)
+	feedRouter.AddFeed(rmnineFeedAliases, rmnineFeed)
+
+	feedRouter.AddFeed(orepliesFeedAliases, orepliesFeed)

 	// Create a gin router with default middleware for logging and recovery
 	router := gin.Default()
indexer/indexer.go
···
+package main
+
+import (
+	"context"
+	"database/sql"
+	"fmt"
+	"log"
+	"os"
+	"time"
+
+	"github.com/gorilla/websocket"
+	"github.com/lib/pq"
+)
+
+const wsUrl = "wss://jetstream2.us-west.bsky.network/subscribe?wantedCollections=app.bsky.feed.post&wantedCollections=app.bsky.feed.repost&wantedCollections=app.bsky.feed.like"
+
+type LikeMessage struct {
+	Did    string `json:"did"`
+	TimeUs int64  `json:"time_us"`
+	Kind   string `json:"kind"`
+	Commit Commit `json:"commit"`
+}
+
+type Commit struct {
+	Rev        string     `json:"rev"`
+	Operation  string     `json:"operation"`
+	Collection string     `json:"collection"`
+	RKey       string     `json:"rkey"`
+	Record     LikeRecord `json:"record"`
+	CID        string     `json:"cid"`
+}
+
+type LikeRecord struct {
+	Type      string      `json:"$type"`
+	CreatedAt string      `json:"createdAt"`
+	Subject   LikeSubject `json:"subject"`
+	Reply     *Reply      `json:"reply,omitempty"`
+}
+
+type Reply struct {
+	Parent ReplySubject `json:"parent"`
+	Root   ReplySubject `json:"root"`
+}
+
+type LikeSubject struct {
+	CID string `json:"cid"`
+	URI string `json:"uri"`
+}
+
+type ReplySubject struct {
+	CID string `json:"cid"`
+	URI string `json:"uri"`
+}
+
+var lastLoggedSecond int64 // Keep track of the last logged second
+
+// NOTE: postBatch/likeBatch are appended from the websocket loop and flushed
+// from ticker goroutines without locking, which is technically a data race.
+var (
+	postBatch       []Post
+	likeBatch       []Like
+	batchInsertSize = 1000             // Adjust the batch size as needed
+	batchInterval   = 30 * time.Second // Flush every 30 seconds
+)
+
+type Post struct {
+	RelAuthor string
+	PostUri   string
+	RelDate   int64
+	IsRepost  bool
+	RepostUri string
+	ReplyTo   string
+}
+
+type Like struct {
+	RelAuthor string
+	PostUri   string
+	RelDate   int64
+}
+
+func getLastCursor(db *sql.DB) int64 {
+	var lastCursor int64
+	err := db.QueryRow("SELECT lastCursor FROM cursor WHERE id = 1").Scan(&lastCursor)
+	if err != nil {
+		if err == sql.ErrNoRows {
+			log.Println("Cursor table is empty; starting fresh.")
+			return 0
+		}
+		log.Fatalf("Error fetching last cursor: %v", err)
+	}
+	return lastCursor
+}
+
+func main() {
+	// Open the database connection
+	dbHost := os.Getenv("DB_HOST")
+	dbUser := os.Getenv("DB_USER")
+	dbName := os.Getenv("DB_NAME")
+	dbPassword := os.Getenv("DB_PASSWORD")
+	db, err := sql.Open("postgres", fmt.Sprintf("user=%s dbname=%s host=%s password=%s sslmode=disable", dbUser, dbName, dbHost, dbPassword))
+	if err != nil {
+		log.Fatalf("Failed to connect to Postgres: %v", err)
+	}
+	defer db.Close()
+
+	// Ensure tables exist
+	createTables(db)
+
+	// Start the cleanup job
+	go startCleanupJob(db)
+
+	// Start the batch insert job
+	go startBatchInsertJob(db)
+
+	// Start the batch insert job for likes
+	go startBatchInsertLikesJob(db)
+
+	// Retrieve the last cursor
+	lastCursor := getLastCursor(db)
+
+	// If the cursor is older than 24 hours, skip it
+	if lastCursor > 0 {
+		cursorTime := time.UnixMicro(lastCursor)
+		if time.Since(cursorTime) > 24*time.Hour {
+			log.Printf("Cursor is older than 24 hours (%s); skipping it.", cursorTime.Format("2006-01-02 15:04:05"))
+			lastCursor = 0 // Ignore this cursor
+		} else {
+			log.Printf("Resuming from cursor: %d (%s)", lastCursor, cursorTime.Format("2006-01-02 15:04:05"))
+		}
+	}
+
+	// WebSocket URL with cursor if available
+	wsFullUrl := wsUrl
+	if lastCursor > 0 {
+		wsFullUrl += "&cursor=" + fmt.Sprintf("%d", lastCursor)
+	}
+
+	// Connect to WebSocket
+	conn, _, err := websocket.DefaultDialer.Dial(wsFullUrl, nil)
+	if err != nil {
+		log.Fatalf("WebSocket connection error: %v", err)
+	}
+	defer conn.Close()
+
+	log.Printf("Connected to WebSocket: %s", wsFullUrl)
+	log.Println("Listening for WebSocket messages...")
+
+	// Process WebSocket messages
+	for {
+		var msg LikeMessage
+		err := conn.ReadJSON(&msg)
+		if err != nil {
+			// A read error leaves the connection unusable; exit so rerun.sh restarts us.
+			log.Fatalf("Error reading WebSocket message: %v", err)
+		}
+
+		processMessage(db, msg)
+	}
+}
+
+func createTables(db *sql.DB) {
+	_, err := db.Exec(`
+		CREATE TABLE IF NOT EXISTS posts (
+			id SERIAL PRIMARY KEY,
+			rel_author TEXT NOT NULL,
+			post_uri TEXT NOT NULL,
+			rel_date BIGINT NOT NULL,
+			is_repost BOOLEAN NOT NULL DEFAULT FALSE,
+			repost_uri TEXT,
+			reply_to TEXT,
+			UNIQUE(rel_author, post_uri, rel_date)
+		);
+	`)
+	if err != nil {
+		log.Fatalf("Error creating 'posts' table: %v", err)
+	}
+
+	_, err = db.Exec(`
+		CREATE TABLE IF NOT EXISTS likes (
+			id SERIAL PRIMARY KEY,
+			rel_author TEXT NOT NULL,
+			post_uri TEXT NOT NULL,
+			rel_date BIGINT NOT NULL,
+			UNIQUE(rel_author, post_uri) -- required by the ON CONFLICT clause in batchInsertLikes
+		);
+	`)
+	if err != nil {
+		log.Fatalf("Error creating 'likes' table: %v", err)
+	}
+
+	// Create a cursor table with a single-row constraint
+	_, err = db.Exec(`
+		CREATE TABLE IF NOT EXISTS cursor (
+			id INT PRIMARY KEY CHECK (id = 1),
+			lastCursor BIGINT NOT NULL
+		);
+	`)
+	if err != nil {
+		log.Fatalf("Error creating 'cursor' table: %v", err)
+	}
+
+	// Ensure the cursor table always has exactly one row
+	_, err = db.Exec(`
+		INSERT INTO cursor (id, lastCursor)
+		VALUES (1, 0)
+		ON CONFLICT (id) DO NOTHING;
+	`)
+	if err != nil {
+		log.Fatalf("Error initializing cursor table: %v", err)
+	}
+}
+
+func processMessage(db *sql.DB, msg LikeMessage) {
+	// Convert cursor to time
+	cursorTime := time.UnixMicro(msg.TimeUs)
+
+	// Get the whole second as a Unix timestamp
+	currentSecond := cursorTime.Unix()
+
+	// Check if this second has already been logged
+	if currentSecond != lastLoggedSecond && cursorTime.Nanosecond() >= 100_000_000 && cursorTime.Nanosecond() < 200_000_000 {
+		// Update the last logged second
+		lastLoggedSecond = currentSecond
+
+		// Log only once per second
+		humanReadableTime := cursorTime.Format("2006-01-02 15:04:05.000")
+		log.Printf("Cursor (time_us): %d, Human-readable time: %s", msg.TimeUs, humanReadableTime)
+	}
+
+	// Save the record
+	record := msg.Commit.Record
+	postUri := fmt.Sprintf("at://%s/app.bsky.feed.post/%s", msg.Did, msg.Commit.RKey)
+	repostUri := fmt.Sprintf("at://%s/app.bsky.feed.repost/%s", msg.Did, msg.Commit.RKey)
+	reply := ""
+	if msg.Commit.Record.Reply != nil {
+		reply = fmt.Sprintf("Parent: %s, Root: %s", msg.Commit.Record.Reply.Parent.URI, msg.Commit.Record.Reply.Root.URI)
+	}
+
+	switch msg.Commit.Collection {
+	case "app.bsky.feed.post":
+		if msg.Commit.Operation == "create" {
+			postBatch = append(postBatch, Post{msg.Did, postUri, msg.TimeUs, false, "", reply})
+		} else if msg.Commit.Operation == "delete" {
+			deletePost(db, msg.Did, postUri, msg.TimeUs)
+		}
+	case "app.bsky.feed.repost":
+		if record.Subject.URI != "" {
+			if msg.Commit.Operation == "create" {
+				postBatch = append(postBatch, Post{msg.Did, record.Subject.URI, msg.TimeUs, true, repostUri, ""})
+			} else if msg.Commit.Operation == "delete" {
+				deletePost(db, msg.Did, record.Subject.URI, msg.TimeUs)
+			}
+		}
+	case "app.bsky.feed.like":
+		if record.Subject.URI != "" {
+			if msg.Commit.Operation == "create" {
+				likeBatch = append(likeBatch, Like{msg.Did, record.Subject.URI, msg.TimeUs})
+			} else if msg.Commit.Operation == "delete" {
+				deleteLike(db, msg.Did, record.Subject.URI)
+			}
+		}
+	default:
+		// Ignore other collections
+	}
+
+	// Update the cursor in the single-row table
+	_, err := db.Exec(`
+		UPDATE cursor SET lastCursor = $1 WHERE id = 1;
+	`, msg.TimeUs)
+	if err != nil {
+		log.Printf("Error updating cursor: %v", err)
+	}
+}
+
+func deletePost(db *sql.DB, relAuthor, postUri string, relDate int64) {
+	_, err := db.Exec(`
+		DELETE FROM posts WHERE rel_author = $1 AND post_uri = $2 AND rel_date = $3;
+	`, relAuthor, postUri, relDate)
+	if err != nil {
+		log.Printf("Error deleting post: %v", err)
+	}
+}
+
+func deleteLike(db *sql.DB, relAuthor, postUri string) {
+	_, err := db.Exec(`
+		DELETE FROM likes WHERE rel_author = $1 AND post_uri = $2;
+	`, relAuthor, postUri)
+	if err != nil {
+		log.Printf("Error deleting like: %v", err)
+	}
+}
+
+func startCleanupJob(db *sql.DB) {
+	ticker := time.NewTicker(1 * time.Hour)
+	defer ticker.Stop()
+
+	for range ticker.C {
+		cleanupOldPosts(db)
+		if err := cleanOldFeedCaches(context.Background(), db); err != nil {
+			log.Printf("Error cleaning old feed caches: %v\n", err)
+		}
+	}
+}
+
+func cleanupOldPosts(db *sql.DB) {
+	threshold := time.Now().Add(-24 * time.Hour).UnixMicro()
+	_, err := db.Exec(`
+		DELETE FROM posts WHERE rel_date < $1;
+	`, threshold)
+	if err != nil {
+		log.Printf("Error deleting old posts: %v", err)
+	} else {
+		log.Printf("Deleted posts older than 24 hours.")
+	}
+}
+
+func startBatchInsertJob(db *sql.DB) {
+	ticker := time.NewTicker(batchInterval)
+	defer ticker.Stop()
+
+	for range ticker.C {
+		if len(postBatch) >= batchInsertSize {
+			batchInsertPosts(db)
+		}
+	}
+}
+
+func batchInsertPosts(db *sql.DB) {
+	tx, err := db.Begin()
+	if err != nil {
+		log.Printf("Error starting transaction: %v", err)
+		return
+	}
+
+	stmt, err := tx.Prepare(`
+		INSERT INTO posts (rel_author, post_uri, rel_date, is_repost, repost_uri, reply_to)
+		VALUES ($1, $2, $3, $4, $5, $6)
+		ON CONFLICT (rel_author, post_uri, rel_date) DO NOTHING;
+	`)
+	if err != nil {
+		log.Printf("Error preparing statement: %v", err)
+		tx.Rollback() // don't leak the open transaction
+		return
+	}
+	defer stmt.Close()
+
+	for _, post := range postBatch {
+		_, err := stmt.Exec(post.RelAuthor, post.PostUri, post.RelDate, post.IsRepost, post.RepostUri, post.ReplyTo)
+		if err != nil {
+			log.Printf("Error executing statement: %v", err)
+		}
+	}
+
+	err = tx.Commit()
+	if err != nil {
+		log.Printf("Error committing transaction: %v", err)
+	}
+
+	// Clear the batch
+	postBatch = postBatch[:0]
+}
+
+func startBatchInsertLikesJob(db *sql.DB) {
+	ticker := time.NewTicker(1 * time.Second)
+	defer ticker.Stop()
+
+	for range ticker.C {
+		if len(likeBatch) > 0 {
+			batchInsertLikes(db)
+		}
+	}
+}
+
+func batchInsertLikes(db *sql.DB) {
+	tx, err := db.Begin()
+	if err != nil {
+		log.Printf("Error starting transaction: %v", err)
+		return
+	}
+
+	stmt, err := tx.Prepare(`
+		INSERT INTO likes (rel_author, post_uri, rel_date)
+		VALUES ($1, $2, $3)
+		ON CONFLICT (rel_author, post_uri) DO NOTHING;
+	`)
+	if err != nil {
+		log.Printf("Error preparing statement: %v", err)
+		tx.Rollback() // don't leak the open transaction
+		return
+	}
+	defer stmt.Close()
+
+	for _, like := range likeBatch {
+		_, err := stmt.Exec(like.RelAuthor, like.PostUri, like.RelDate)
+		if err != nil {
+			log.Printf("Error executing statement: %v", err)
+		}
+	}
+
+	err = tx.Commit()
+	if err != nil {
+		log.Printf("Error committing transaction: %v", err)
+	}
+
+	// Clear the batch
+	likeBatch = likeBatch[:0]
+}
+
+func cleanOldFeedCaches(ctx context.Context, db *sql.DB) error {
+	log.Println("Cleaning old feed caches")
+
+	// Get the current time minus 24 hours
+	expirationTime := time.Now().Add(-24 * time.Hour)
+
+	// Get all tables from cachetimeout that are older than 24 hours
+	rows, err := db.QueryContext(ctx, `
+		SELECT table_name
+		FROM cachetimeout
+		WHERE creation_time < $1
+	`, expirationTime)
+	if err != nil {
+		return fmt.Errorf("error querying cachetimeout table: %w", err)
+	}
+	defer rows.Close()
+
+	var tablesToDelete []string
+	for rows.Next() {
+		var tableName string
+		if err := rows.Scan(&tableName); err != nil {
+			return fmt.Errorf("error scanning table name: %w", err)
+		}
+		tablesToDelete = append(tablesToDelete, tableName)
+	}
+
+	if err := rows.Err(); err != nil {
+		return fmt.Errorf("error iterating cachetimeout rows: %w", err)
+	}
+
+	// Get all feedcache_* tables that do not have an entry in cachetimeout
+	rows, err = db.QueryContext(ctx, `
+		SELECT table_name
+		FROM information_schema.tables
+		WHERE table_name LIKE 'feedcache_%'
+		AND table_name NOT IN (SELECT table_name FROM cachetimeout)
+	`)
+	if err != nil {
+		return fmt.Errorf("error querying feedcache tables: %w", err)
+	}
+	defer rows.Close()
+
+	for rows.Next() {
+		var tableName string
+		if err := rows.Scan(&tableName); err != nil {
+			return fmt.Errorf("error scanning table name: %w", err)
+		}
+		tablesToDelete = append(tablesToDelete, tableName)
+	}
+
+	if err := rows.Err(); err != nil {
+		return fmt.Errorf("error iterating feedcache rows: %w", err)
+	}
+
+	// Drop the old tables and remove their entries from cachetimeout
+	for _, tableName := range tablesToDelete {
+		_, err := db.ExecContext(ctx, fmt.Sprintf("DROP TABLE IF EXISTS %s", pq.QuoteIdentifier(tableName)))
+		if err != nil {
+			return fmt.Errorf("error dropping table %s: %w", tableName, err)
+		}
+		_, err = db.ExecContext(ctx, "DELETE FROM cachetimeout WHERE table_name = $1", tableName)
+		if err != nil {
+			return fmt.Errorf("error deleting from cachetimeout table: %w", err)
+		}
+	}
+
+	return nil
+}
+11
indexer/rerun.sh
···
+#!/bin/bash
+
+# Infinite loop to rerun the Go program
+while true; do
+	echo "Starting Go program..."
+	go run indexer.go
+
+	# Exit message
+	echo "Program exited. Restarting in 5 seconds..."
+	sleep 5
+done
+2 -1
pkg/auth/auth.go
···
 	accessToken := authHeaderParts[1]

 	parser := jwt.Parser{
-		ValidMethods: []string{es256k.SigningMethodES256K.Alg()},
+		ValidMethods:         []string{es256k.SigningMethodES256K.Alg()},
+		SkipClaimsValidation: true, // IM SORRY I HAD TO, MY VPS IS ACTING STRANGE. I THINK ITS FINE
 	}

 	token, err := parser.ParseWithClaims(accessToken, claims, func(token *jwt.Token) (interface{}, error) {
+571
pkg/feeds/fresh/feed.go
···
+package rinds
+
+import (
+	"context"
+	"crypto/sha256"
+	"database/sql"
+	"encoding/hex"
+	"encoding/json"
+	"fmt"
+	"io/ioutil"
+	"log"
+	"math/rand"
+	"net/http"
+	"sort"
+	"strconv"
+	"strings"
+	"time"
+
+	appbsky "github.com/bluesky-social/indigo/api/bsky"
+	"github.com/lib/pq"
+)
+
+type CachedFollows struct {
+	Follows      []string  `json:"follows"`
+	LastModified time.Time `json:"last_modified"`
+}
+
+type FeedType string
+
+const (
+	Rinds    FeedType = "rinds"
+	Random   FeedType = "random"
+	Mnine    FeedType = "mnine"
+	Reposts  FeedType = "reposts"
+	OReplies FeedType = "oreplies"
+)
+
+type StaticFeed struct {
+	FeedActorDID   string
+	FeedName       string
+	StaticPostURIs []string
+	DB             *sql.DB
+	FeedType       FeedType // rinds, random, mnine, reposts, oreplies
+	RepliesOn      bool
+}
+
+type Follower struct {
+	DID string `json:"did"`
+}
+
+type Response struct {
+	Follows []Follower `json:"follows"`
+	Cursor  string     `json:"cursor"`
+}
+
+type PostWithDate struct {
+	PostURI   string `json:"post_uri"`
+	RelDate   int64  `json:"rel_date"`
+	IsRepost  bool   `json:"is_repost"`
+	RepostURI string `json:"repost_uri,omitempty"`
+}
+
+type CachedPosts struct {
+	Posts        []PostWithDate `json:"posts"`
+	LastModified time.Time      `json:"last_modified"`
+}
6969+7070+// Describe implements feedrouter.Feed.
7171+func (sf *StaticFeed) Describe(ctx context.Context) ([]appbsky.FeedDescribeFeedGenerator_Feed, error) {
7272+ panic("unimplemented")
7373+}
7474+7575+// NewStaticFeed returns a new StaticFeed, a list of aliases for the feed, and an error
7676+// StaticFeed is a trivial implementation of the Feed interface, so its aliases are just the input feedName
7777+func NewStaticFeed(ctx context.Context, feedActorDID string, feedName string, staticPostURIs []string, db *sql.DB, feedType FeedType, repliesOn bool) (*StaticFeed, []string, error) {
7878+ return &StaticFeed{
7979+ FeedActorDID: feedActorDID,
8080+ FeedName: feedName,
8181+ StaticPostURIs: staticPostURIs,
8282+ DB: db,
8383+ FeedType: feedType,
8484+ RepliesOn: repliesOn,
8585+ }, []string{feedName}, nil
8686+}
8787+8888+// GetPage returns a list of FeedDefs_SkeletonFeedPost, a new cursor, and an error
8989+// It takes a feed name, a user DID, a limit, and a cursor
9090+// The feed name can be used to produce different feeds from the same feed generator
9191+func (sf *StaticFeed) GetPage(ctx context.Context, feed string, userDID string, limit int64, cursor string) ([]*appbsky.FeedDefs_SkeletonFeedPost, *string, error) {
	cursorAsInt := int64(0)
	var hash string
	var startOfLastPage int64
	var sizeOfLastPage int64
	var inflightstartOfLastPage int64
	var inflightsizeOfLastPage int64
	var err error

	smartReadEnabled := true
	smartReportingEnabled := true
	log.Printf("smartReadEnabled is %v; smartReportingEnabled is %v", smartReadEnabled, smartReportingEnabled)

	if cursor != "" {
		parts := strings.Split(cursor, "-")
		if len(parts) != 6 {
			return nil, nil, fmt.Errorf("invalid cursor format")
		}
		cursorAsInt, err = strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return nil, nil, fmt.Errorf("cursor is not an integer: %w", err)
		}
		hash = parts[1]
		inflightstartOfLastPage, err = strconv.ParseInt(parts[2], 10, 64)
		if err != nil {
			return nil, nil, fmt.Errorf("in-flight start of last page is not an integer: %w", err)
		}
		inflightsizeOfLastPage, err = strconv.ParseInt(parts[3], 10, 64)
		if err != nil {
			return nil, nil, fmt.Errorf("in-flight size of last page is not an integer: %w", err)
		}
		startOfLastPage, err = strconv.ParseInt(parts[4], 10, 64)
		if err != nil {
			return nil, nil, fmt.Errorf("start of last page is not an integer: %w", err)
		}
		sizeOfLastPage, err = strconv.ParseInt(parts[5], 10, 64)
		if err != nil {
			return nil, nil, fmt.Errorf("size of last page is not an integer: %w", err)
		}
	}
	if limit == 1 && cursor == "" {
		// This happens when the app checks whether the timeline has new posts at the top.
		// TODO: handle this case more gracefully.
		log.Print("limit is 1 and cursor is empty. Skipping the database query.")
		return nil, nil, nil
	} else if cursor == "" {
		log.Println("Generating new hash")
		hash = generateHash(userDID)
	} else {
		log.Println("Using existing hash")
	}

	tableName := fmt.Sprintf("feedcache_%s_%s", userDID, hash)

	// Check if cache table exists
	var exists bool
	err = sf.DB.QueryRowContext(ctx, "SELECT EXISTS (SELECT FROM information_schema.tables WHERE table_name = $1)", tableName).Scan(&exists)
	if err != nil {
		return nil, nil, fmt.Errorf("error checking cache table existence: %w", err)
	}

	if exists {
		rows, err := sf.DB.QueryContext(ctx, fmt.Sprintf("SELECT post_uri, rel_date, is_repost, repost_uri FROM %s", pq.QuoteIdentifier(tableName)))
		if err != nil {
			return nil, nil, fmt.Errorf("error querying cache table: %w", err)
		}
		defer rows.Close()

		var cachedPosts []PostWithDate
		for rows.Next() {
			var post PostWithDate
			var repostURI sql.NullString
			if err := rows.Scan(&post.PostURI, &post.RelDate, &post.IsRepost, &repostURI); err != nil {
				return nil, nil, fmt.Errorf("error scanning cached post: %w", err)
			}
			if repostURI.Valid {
				post.RepostURI = repostURI.String
			}
			cachedPosts = append(cachedPosts, post)
		}

		if err := rows.Err(); err != nil {
			return nil, nil, fmt.Errorf("error iterating cached posts: %w", err)
		}

		log.Printf("Cached posts found: %d", len(cachedPosts))

		// Mark posts from the last page as viewed
		/*
			// Handle empty cache table
			if len(cachedPosts) == 0 || int64(len(cachedPosts)) < cursorAsInt {
				// just return https://bsky.app/profile/strandingy.bsky.social/post/3lgfdgn7i6s2q
				// special case when you reach the end
				markPostsAsViewed(ctx, sf.DB, userDID, cachedPosts, smartReportingEnabled)
				emptysinglepost := []PostWithDate{}
				emptysinglepost = append(emptysinglepost, PostWithDate{
					PostURI: "at://did:plc:css3l47v2r4xhcgykfd5mdmn/app.bsky.feed.post/3lgfdgn7i6s2q",
					RelDate: time.Now().UnixNano() / int64(time.Microsecond),
				})
				log.Print("empty page handler paginateResult for cachedPosts")
				return paginateResults(emptysinglepost, inflightstartOfLastPage, inflightsizeOfLastPage, cursorAsInt, limit, hash)
			}
		*/

		if cursor != "" {
			// Guard against a stale cursor pointing past the cached slice.
			end := startOfLastPage + sizeOfLastPage
			if end > int64(len(cachedPosts)) {
				end = int64(len(cachedPosts))
			}
			if startOfLastPage > end {
				startOfLastPage = end
			}
			lastPagePosts := cachedPosts[startOfLastPage:end]
			// The normal, intended markPostsAsViewed call.
			if err := markPostsAsViewed(ctx, sf.DB, userDID, lastPagePosts, smartReportingEnabled); err != nil {
				return nil, nil, fmt.Errorf("error marking posts as viewed: %w", err)
			}
			log.Printf("Marking these posts as viewed. Range: %d - %d", startOfLastPage, end)
		}

		log.Print("default paginateResult for cachedPosts")
		return paginateResults(cachedPosts, inflightstartOfLastPage, inflightsizeOfLastPage, cursorAsInt, limit, hash)
	}

	posts := []PostWithDate{}

	log.Println("Fetching followers")
	followers, err := getFollowers(ctx, sf.DB, userDID)
	if err != nil {
		log.Printf("Error fetching followers: %v\n", err)
		return nil, nil, err
	}
	//log.Printf("Raw response: %s\n", followers)

	log.Println("Copying follower IDs")
	followerIDs := make([]string, len(followers))
	copy(followerIDs, followers)

	if len(followerIDs) == 0 {
		log.Println("No followers found. Skipping the database query.")
		return nil, nil, nil
	}

	//log.Println("Follower IDs:", followerIDs)
	// Check if the per-user viewedby table exists
	var tableExists bool
	err = sf.DB.QueryRowContext(ctx, fmt.Sprintf(`
		SELECT EXISTS (
			SELECT FROM information_schema.tables
			WHERE table_name = %s
		)`, pq.QuoteLiteral(fmt.Sprintf("viewedby_%s", userDID)))).Scan(&tableExists)
	if err != nil {
		log.Printf("Error checking table existence: %v\n", err)
		return nil, nil, fmt.Errorf("error checking table existence: %w", err)
	}

	query := `
		SELECT post_uri, rel_date, is_repost, repost_uri
		FROM posts
		WHERE rel_author = ANY($1)`

	if smartReadEnabled {
		query += `
		AND post_uri NOT IN (SELECT post_uri FROM likes WHERE rel_author = $2)
		AND post_uri NOT IN (SELECT post_uri FROM posts WHERE rel_author = $2 AND is_repost = TRUE)`
	}
	if !sf.RepliesOn {
		query += " AND (reply_to IS NULL OR reply_to = '')"
	}
	if sf.FeedType == "reposts" {
		query += " AND is_repost = TRUE"
	}
	if sf.FeedType == "oreplies" {
		query += " AND (reply_to IS NOT NULL AND reply_to != '') AND is_repost = FALSE"
	}
	if tableExists && smartReadEnabled {
		query += fmt.Sprintf(" AND post_uri NOT IN (SELECT post_uri FROM %s)", pq.QuoteIdentifier(fmt.Sprintf("viewedby_%s", userDID)))
	}
	var rows *sql.Rows

	if smartReadEnabled {
		if sf.FeedType == "mnine" {
			query += ` AND rel_date < $3`
			thresholdTime := time.Now().Add(-9*time.Hour).UnixNano() / int64(time.Microsecond)
			rows, err = sf.DB.QueryContext(ctx, query, pq.Array(followerIDs), userDID, thresholdTime)
		} else {
			rows, err = sf.DB.QueryContext(ctx, query, pq.Array(followerIDs), userDID)
		}
	} else {
		if sf.FeedType == "mnine" {
			query += ` AND rel_date < $2`
			thresholdTime := time.Now().Add(-9*time.Hour).UnixNano() / int64(time.Microsecond)
			rows, err = sf.DB.QueryContext(ctx, query, pq.Array(followerIDs), thresholdTime)
		} else {
			rows, err = sf.DB.QueryContext(ctx, query, pq.Array(followerIDs))
		}
	}
	log.Printf("Query: %s\n", query)
	if err != nil {
		log.Printf("Error querying posts: %v\n", err)
		return nil, nil, fmt.Errorf("error querying posts: %w", err)
	}
	defer rows.Close()

	log.Println("Iterating over rows")
	for rows.Next() {
		var postURI string
		var relDate int64
		var isRepost bool
		var repostURI sql.NullString
		if err := rows.Scan(&postURI, &relDate, &isRepost, &repostURI); err != nil {
			log.Printf("error scanning post URI: %v\n", err)
			return nil, nil, fmt.Errorf("error scanning post URI: %w", err)
		}
		post := PostWithDate{
			PostURI:  postURI,
			RelDate:  relDate,
			IsRepost: isRepost,
		}
		if repostURI.Valid {
			post.RepostURI = repostURI.String
		}
		posts = append(posts, post)
	}

	if err := rows.Err(); err != nil {
		return nil, nil, fmt.Errorf("error iterating rows: %w", err)
	}

	log.Printf("Freshly queried posts found: %d", len(posts))

	if sf.FeedType == "random" {
		// Sort results randomly
		log.Println("Sorting results randomly")
		rand.Seed(time.Now().UnixNano())
		rand.Shuffle(len(posts), func(i, j int) {
			posts[i], posts[j] = posts[j], posts[i]
		})
	} else {
		// Sort results by date, newest first
		log.Println("Sorting results by date")
		sort.Slice(posts, func(i, j int) bool {
			return posts[i].RelDate > posts[j].RelDate
		})
	}

	// Cache the results in the database
	log.Println("Caching results in the database")

	// Ensure the cachetimeout table exists
	_, err = sf.DB.ExecContext(ctx, `
		CREATE TABLE IF NOT EXISTS cachetimeout (
			table_name TEXT UNIQUE,
			creation_time TIMESTAMP
		)
	`)
	if err != nil {
		return nil, nil, fmt.Errorf("error creating cachetimeout table: %w", err)
	}

	_, err = sf.DB.ExecContext(ctx, fmt.Sprintf(`
		CREATE TABLE %s (
			post_uri TEXT,
			rel_date BIGINT,
			is_repost BOOLEAN,
			repost_uri TEXT,
			viewed BOOLEAN DEFAULT FALSE
		)
	`, pq.QuoteIdentifier(tableName)))
	if err != nil {
		return nil, nil, fmt.Errorf("error creating cache table: %w", err)
	}

	// Store the table name and creation time in cachetimeout
	_, err = sf.DB.ExecContext(ctx, `
		INSERT INTO cachetimeout (table_name, creation_time)
		VALUES ($1, $2)
		ON CONFLICT (table_name) DO NOTHING
	`, tableName, time.Now())
	if err != nil {
		return nil, nil, fmt.Errorf("error inserting into cachetimeout table: %w", err)
	}

	for _, post := range posts {
		_, err := sf.DB.ExecContext(ctx, fmt.Sprintf(`
			INSERT INTO %s (post_uri, rel_date, is_repost, repost_uri)
			VALUES ($1, $2, $3, $4)
		`, pq.QuoteIdentifier(tableName)), post.PostURI, post.RelDate, post.IsRepost, post.RepostURI)
		if err != nil {
			return nil, nil, fmt.Errorf("error inserting into cache table: %w", err)
		}
	}
	log.Print("default paginateResult for freshly queried posts")
	return paginateResults(posts, inflightstartOfLastPage, inflightsizeOfLastPage, cursorAsInt, limit, hash)
}

func paginateResults(posts []PostWithDate, inflightcursorAsInt int64, inflightlimit int64, cursorAsInt int64, limit int64, hash string) ([]*appbsky.FeedDefs_SkeletonFeedPost, *string, error) {
	log.Println("Paginating results")
	var paginatedPosts []*appbsky.FeedDefs_SkeletonFeedPost

	// Clamp the offset so a stale cursor cannot slice past the end.
	if cursorAsInt > int64(len(posts)) {
		cursorAsInt = int64(len(posts))
	}

	if int64(len(posts)) > cursorAsInt+limit {
		for _, post := range posts[cursorAsInt : cursorAsInt+limit] {
			paginatedPosts = append(paginatedPosts, formatPost(post))
		}
		newCursor := fmt.Sprintf("%d-%s-%d-%d-%d-%d", cursorAsInt+limit, hash, cursorAsInt, limit, inflightcursorAsInt, inflightlimit)
		return paginatedPosts, &newCursor, nil
	}

	for _, post := range posts[cursorAsInt:] {
		paginatedPosts = append(paginatedPosts, formatPost(post))
	}

	return paginatedPosts, nil, nil
}
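For reference, the cursor that GetPage parses and paginateResults emits is a plain dash-separated string of six fields: an integer offset, the short hex hash, and two (start, size) pairs. Since the hash is hex-only and the remaining fields are integers, the dashes are unambiguous separators. A standalone sketch of the same wire format (the `cursor` type and helper names here are hypothetical, not part of the service):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// cursor mirrors the wire format above. On the next request, the (start, size)
// pair written for the page just served is read back as the "in-flight" pair,
// so the slots effectively shift one generation per page.
type cursor struct {
	Offset int64
	Hash   string
	Pairs  [4]int64 // servedStart, servedSize, priorStart, priorSize
}

func (c cursor) encode() string {
	return fmt.Sprintf("%d-%s-%d-%d-%d-%d", c.Offset, c.Hash, c.Pairs[0], c.Pairs[1], c.Pairs[2], c.Pairs[3])
}

func decode(s string) (cursor, error) {
	parts := strings.Split(s, "-")
	if len(parts) != 6 {
		return cursor{}, fmt.Errorf("invalid cursor format")
	}
	var c cursor
	var err error
	if c.Offset, err = strconv.ParseInt(parts[0], 10, 64); err != nil {
		return cursor{}, fmt.Errorf("cursor is not an integer: %w", err)
	}
	c.Hash = parts[1]
	for i := 0; i < 4; i++ {
		if c.Pairs[i], err = strconv.ParseInt(parts[i+2], 10, 64); err != nil {
			return cursor{}, err
		}
	}
	return c, nil
}

func main() {
	c := cursor{Offset: 25, Hash: "ab3f0", Pairs: [4]int64{0, 25, 0, 0}}
	rt, err := decode(c.encode())
	fmt.Println(c.encode(), rt == c, err)
}
```

Keeping the format round-trippable like this matters because clients echo the cursor back verbatim; any field that could contain a dash (e.g. a DID) must stay out of it.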

func formatPost(post PostWithDate) *appbsky.FeedDefs_SkeletonFeedPost {
	if post.IsRepost {
		return &appbsky.FeedDefs_SkeletonFeedPost{
			Post: post.PostURI,
			Reason: &appbsky.FeedDefs_SkeletonFeedPost_Reason{
				FeedDefs_SkeletonReasonRepost: &appbsky.FeedDefs_SkeletonReasonRepost{
					Repost: post.RepostURI,
				},
			},
		}
	}
	return &appbsky.FeedDefs_SkeletonFeedPost{
		Post: post.PostURI,
	}
}

// generateHash derives a short per-request identifier from the input and the
// current time; only the first 5 hex characters of the SHA-256 digest are kept.
func generateHash(input string) string {
	hash := sha256.Sum256([]byte(input + time.Now().String()))
	return hex.EncodeToString(hash[:])[:5]
}
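Note that 5 hex characters carry only 20 bits (~1M distinct values), which is fine for disambiguating concurrent per-user cache tables but is not collision-proof across many users and requests. A minimal sketch of the same truncation, with a deterministic input instead of the time-salted one (the `shortHash` helper is hypothetical):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// shortHash truncates a SHA-256 hex digest to n characters, as generateHash
// does with n = 5. Each hex character carries 4 bits, so n = 5 yields only
// 2^20 possible values: cheap table-name entropy, not a unique ID.
func shortHash(input string, n int) string {
	sum := sha256.Sum256([]byte(input))
	return hex.EncodeToString(sum[:])[:n]
}

func main() {
	fmt.Println(shortHash("did:plc:example", 5))
}
```

If collisions ever become a problem, widening the slice (say, to 12 characters) costs nothing but table-name length.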

// getFollowers returns the DIDs the given user follows, reading from the
// per-user follows cache table when it exists and falling back to the public
// Bluesky API otherwise.
func getFollowers(ctx context.Context, db *sql.DB, userdid string) ([]string, error) {
	unquotedTableName := "follows_" + userdid
	tableName := pq.QuoteIdentifier(unquotedTableName)
	fmt.Printf("Checking for table: %s\n", tableName) // Debug log

	// Check if cache table exists
	var exists bool
	query := "SELECT EXISTS (SELECT FROM information_schema.tables WHERE table_name = $1)"
	fmt.Printf("Executing query: %s with parameter: %s\n", query, unquotedTableName) // Debug log
	err := db.QueryRowContext(ctx, query, unquotedTableName).Scan(&exists)
	if err != nil {
		// Check for specific errors
		if err == sql.ErrNoRows {
			return nil, fmt.Errorf("table existence check returned no rows: %w", err)
		}
		if pqErr, ok := err.(*pq.Error); ok {
			return nil, fmt.Errorf("PostgreSQL error: %s, Code: %s", pqErr.Message, pqErr.Code)
		}
		return nil, fmt.Errorf("error checking cache table existence: %w", err)
	}

	fmt.Printf("Table exists: %v\n", exists) // Debug log

	if exists {
		rows, err := db.QueryContext(ctx, fmt.Sprintf("SELECT follow FROM %s", tableName))
		if err != nil {
			return nil, fmt.Errorf("error querying cache table: %w", err)
		}
		defer rows.Close()

		var cachedFollows []string
		for rows.Next() {
			var follow string
			if err := rows.Scan(&follow); err != nil {
				return nil, fmt.Errorf("error scanning cached follow: %w", err)
			}
			cachedFollows = append(cachedFollows, follow)
		}

		if err := rows.Err(); err != nil {
			return nil, fmt.Errorf("error iterating cached follows: %w", err)
		}

		log.Println("Returning cached followers")
		return cachedFollows, nil
	}

	log.Println("Fetching followers from API")
	var allDIDs []string
	cursor := ""

	for {
		apiURL := fmt.Sprintf("https://public.api.bsky.app/xrpc/app.bsky.graph.getFollows?actor=%s&cursor=%s", userdid, cursor)
		resp, err := http.Get(apiURL)
		if err != nil {
			log.Printf("Error making request: %v\n", err)
			return nil, fmt.Errorf("failed to make request: %w", err)
		}

		body, err := ioutil.ReadAll(resp.Body)
		// Close inside the loop: a deferred Close here would not run until the
		// function returns, leaking connections across pagination iterations.
		resp.Body.Close()
		if err != nil {
			log.Printf("Error reading response: %v\n", err)
			return nil, fmt.Errorf("failed to read response: %w", err)
		}

		var response Response
		if err := json.Unmarshal(body, &response); err != nil {
			log.Printf("Error unmarshalling JSON: %v\n", err)
			return nil, fmt.Errorf("failed to unmarshal JSON: %w", err)
		}

		for _, follow := range response.Follows {
			allDIDs = append(allDIDs, follow.DID)
		}

		if response.Cursor == "" {
			break
		}
		cursor = response.Cursor
	}

	// Drop the existing table if it exists
	log.Println("Dropping existing followers table if it exists")
	_, err = db.ExecContext(ctx, fmt.Sprintf("DROP TABLE IF EXISTS %s", tableName))
	if err != nil {
		return nil, fmt.Errorf("error dropping existing cache table: %w", err)
	}

	// Cache the results in the database
	log.Println("Caching followers in the database")
	_, err = db.ExecContext(ctx, fmt.Sprintf(`
		CREATE TABLE %s (
			follow TEXT UNIQUE
		)
	`, tableName))
	if err != nil {
		return nil, fmt.Errorf("error creating cache table: %w", err)
	}

	// Use a map to track unique follows
	followMap := make(map[string]struct{})
	for _, follow := range allDIDs {
		if _, exists := followMap[follow]; !exists {
			followMap[follow] = struct{}{}
			_, err := db.ExecContext(ctx, fmt.Sprintf(`
				INSERT INTO %s (follow)
				VALUES ($1)
				ON CONFLICT (follow) DO NOTHING
			`, tableName), follow)
			if err != nil {
				return nil, fmt.Errorf("error inserting into cache table: %w", err)
			}
		}
	}

	log.Println("Returning fetched followers")
	return allDIDs, nil
}

// markPostsAsViewed records the given posts in the per-user viewedby table so
// that smart-read queries can exclude them on subsequent pages.
func markPostsAsViewed(ctx context.Context, db *sql.DB, userDID string, posts []PostWithDate, smartReportingEnabled bool) error {
	if len(posts) == 0 || !smartReportingEnabled {
		return nil
	}
	tableName := fmt.Sprintf("viewedby_%s", userDID)

	// Create the table if it doesn't exist
	_, err := db.ExecContext(ctx, fmt.Sprintf(`
		CREATE TABLE IF NOT EXISTS %s (
			post_uri TEXT UNIQUE
		)
	`, pq.QuoteIdentifier(tableName)))
	if err != nil {
		return fmt.Errorf("error creating viewed table: %w", err)
	}

	// Insert posts into the table
	for _, post := range posts {
		_, err := db.ExecContext(ctx, fmt.Sprintf(`
			INSERT INTO %s (post_uri)
			VALUES ($1)
			ON CONFLICT (post_uri) DO NOTHING
		`, pq.QuoteIdentifier(tableName)), post.PostURI)
		if err != nil {
			return fmt.Errorf("error inserting into viewed table: %w", err)
		}
	}

	return nil
}
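All the per-user table names here ("viewedby_<did>", "follows_<did>", "feedcache_<did>_<hash>") embed DIDs, which contain ':' and '.' and so are not valid bare SQL identifiers; the code relies on pq.QuoteIdentifier throughout. A sketch of what that quoting does, to a first approximation (the `quoteIdent` helper is hypothetical; the service itself always uses pq.QuoteIdentifier):

```go
package main

import (
	"fmt"
	"strings"
)

// quoteIdent wraps a name in double quotes and doubles any embedded double
// quotes, which is how PostgreSQL quoted identifiers work. This makes names
// like viewedby_did:plc:abc safe to splice into DDL/DML strings.
func quoteIdent(name string) string {
	return `"` + strings.ReplaceAll(name, `"`, `""`) + `"`
}

func main() {
	fmt.Println(quoteIdent("viewedby_did:plc:abc123"))
	// → "viewedby_did:plc:abc123"
}
```

Quoting the identifier is what prevents SQL injection through the table name, since table names cannot be bound as `$n` parameters.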