Constellation, Spacedust, Slingshot, UFOs: atproto crates and services for microcosm

Add new get_many_to_many XRPC endpoint #7

merged opened by seoul.systems targeting main from seoul.systems/microcosm-rs: xrpc_many2many

Added a new XRPC API endpoint to fetch joined record URIs, termed get_many_to_many (we talked about this briefly on Discord already). It is implemented and functions almost identically to the existing get_many_to_many_counts endpoint and handler. Some of its possible flaws, like the two-step lookup to verify that a matching DID is indeed active, are duplicated as well. On the plus side, this should make the PR pretty straightforward to review, and make it easier to modify both endpoints later on once a more efficient way to validate the status of DIDs is available.

If you have comments, remarks, etc., I am happy to rework some parts.


Participants 2
AT URI
at://did:plc:53wellrw53o7sw4zlpfenvuh/sh.tangled.repo.pull/3mbkyehqooh22
+176 -39
Interdiff #0 → #1
+6
constellation/src/lib.rs
···
     }
 }
 
+#[derive(Debug, PartialEq, Serialize, Deserialize, Clone, Default)]
+pub struct RecordsBySubject {
+    pub subject: String,
+    pub records: Vec<RecordId>,
+}
+
 /// maybe the worst type in this repo, and there are some bad types
+2 -2
constellation/src/server/mod.rs
···
 use tokio_util::sync::CancellationToken;
 
 use crate::storage::{LinkReader, StorageStats};
-use crate::{CountsByCount, Did, RecordId};
+use crate::{CountsByCount, Did, RecordId, RecordsBySubject};
 
 mod acceptable;
 mod filters;
···
 #[derive(Template, Serialize)]
 #[template(path = "get-many-to-many.html.j2")]
 struct GetManyToManyItemsResponse {
-    linking_records: Vec<(String, Vec<RecordId>)>,
+    linking_records: Vec<RecordsBySubject>,
     cursor: Option<OpaqueApiCursor>,
     #[serde(skip_serializing)]
     query: GetManyToManyItemsQuery,
+9 -6
constellation/src/storage/mem_store.rs
···
 use super::{
     LinkReader, LinkStorage, PagedAppendingCollection, PagedOrderedCollection, StorageStats,
 };
-use crate::{ActionableEvent, CountsByCount, Did, RecordId};
+use crate::{ActionableEvent, CountsByCount, Did, RecordId, RecordsBySubject};
 use anyhow::Result;
 use links::CollectedLink;
 use std::collections::{HashMap, HashSet};
···
         after: Option<String>,
         filter_dids: &HashSet<Did>,
         filter_to_targets: &HashSet<String>,
-    ) -> Result<PagedOrderedCollection<(String, Vec<RecordId>), String>> {
+    ) -> Result<PagedOrderedCollection<RecordsBySubject, String>> {
         let empty_res = Ok(PagedOrderedCollection {
             items: Vec::new(),
             next: None,
···
 
         let mut items = grouped_links
             .into_iter()
-            .map(|(t, r)| (t.0, r))
+            .map(|(t, r)| RecordsBySubject {
+                subject: t.0,
+                records: r,
+            })
             .collect::<Vec<_>>();
 
-        items.sort_by(|(a, _), (b, _)| a.cmp(b));
+        items.sort_by(|a, b| a.subject.cmp(&b.subject));
 
         items = items
             .into_iter()
-            .skip_while(|(t, _)| after.as_ref().map(|a| t <= a).unwrap_or(false))
+            .skip_while(|item| after.as_ref().map(|a| &item.subject <= a).unwrap_or(false))
             .take(limit as usize)
             .collect();
 
         let next = if items.len() as u64 >= limit {
-            items.last().map(|(t, _)| t.clone())
+            items.last().map(|item| item.subject.clone())
         } else {
             None
         };
+40 -24
constellation/src/storage/mod.rs
···
-use crate::{ActionableEvent, CountsByCount, Did, RecordId};
+use crate::{ActionableEvent, CountsByCount, Did, RecordId, RecordsBySubject};
 use anyhow::Result;
 use serde::{Deserialize, Serialize};
 use std::collections::{HashMap, HashSet};
···
         after: Option<String>,
         filter_dids: &HashSet<Did>,
         filter_to_targets: &HashSet<String>,
-    ) -> Result<PagedOrderedCollection<(String, Vec<RecordId>), String>>;
+    ) -> Result<PagedOrderedCollection<RecordsBySubject, String>>;
 
     fn get_all_counts(
         &self,
···
                 &HashSet::new(),
             )?,
             PagedOrderedCollection {
-                items: vec![(
-                    "b.com".to_string(),
-                    vec![RecordId {
+                items: vec![RecordsBySubject {
+                    subject: "b.com".to_string(),
+                    records: vec![RecordId {
                         did: "did:plc:asdf".into(),
                         collection: "app.t.c".into(),
                         rkey: "asdf".into(),
                     }]
-                )],
+                }],
                 next: None,
             }
         );
···
         assert_eq!(result.items.len(), 2);
         assert_eq!(result.next, None);
         // Find b.com group
-        let (b_target, b_records) = result.items.iter().find(|(target, _)| target == "b.com").unwrap();
-        assert_eq!(b_target, "b.com");
-        assert_eq!(b_records.len(), 2);
-        assert!(b_records.iter().any(|r| r.did.0 == "did:plc:asdf" && r.rkey == "asdf"));
-        assert!(b_records.iter().any(|r| r.did.0 == "did:plc:asdf" && r.rkey == "asdf2"));
+        let b_group = result
+            .items
+            .iter()
+            .find(|group| group.subject == "b.com")
+            .unwrap();
+        assert_eq!(b_group.subject, "b.com");
+        assert_eq!(b_group.records.len(), 2);
+        assert!(b_group.records
+            .iter()
+            .any(|r| r.did.0 == "did:plc:asdf" && r.rkey == "asdf"));
+        assert!(b_group.records
+            .iter()
+            .any(|r| r.did.0 == "did:plc:asdf" && r.rkey == "asdf2"));
         // Find c.com group
-        let (c_target, c_records) = result.items.iter().find(|(target, _)| target == "c.com").unwrap();
-        assert_eq!(c_target, "c.com");
-        assert_eq!(c_records.len(), 2);
-        assert!(c_records.iter().any(|r| r.did.0 == "did:plc:fdsa" && r.rkey == "fdsa"));
-        assert!(c_records.iter().any(|r| r.did.0 == "did:plc:fdsa" && r.rkey == "fdsa2"));
+        let c_group = result
+            .items
+            .iter()
+            .find(|group| group.subject == "c.com")
+            .unwrap();
+        assert_eq!(c_group.subject, "c.com");
+        assert_eq!(c_group.records.len(), 2);
+        assert!(c_group.records
+            .iter()
+            .any(|r| r.did.0 == "did:plc:fdsa" && r.rkey == "fdsa"));
+        assert!(c_group.records
+            .iter()
+            .any(|r| r.did.0 == "did:plc:fdsa" && r.rkey == "fdsa2"));
 
         // Test with DID filter - should only get records from did:plc:fdsa
         let result = storage.get_many_to_many(
···
             &HashSet::new(),
         )?;
         assert_eq!(result.items.len(), 1);
-        let (target, records) = &result.items[0];
-        assert_eq!(target, "c.com");
-        assert_eq!(records.len(), 2);
-        assert!(records.iter().all(|r| r.did.0 == "did:plc:fdsa"));
+        let group = &result.items[0];
+        assert_eq!(group.subject, "c.com");
+        assert_eq!(group.records.len(), 2);
+        assert!(group.records.iter().all(|r| r.did.0 == "did:plc:fdsa"));
 
         // Test with target filter - should only get records linking to b.com
         let result = storage.get_many_to_many(
···
             &HashSet::from_iter(["b.com".to_string()]),
         )?;
         assert_eq!(result.items.len(), 1);
-        let (target, records) = &result.items[0];
-        assert_eq!(target, "b.com");
-        assert_eq!(records.len(), 2);
-        assert!(records.iter().all(|r| r.did.0 == "did:plc:asdf"));
+        let group = &result.items[0];
+        assert_eq!(group.subject, "b.com");
+        assert_eq!(group.records.len(), 2);
+        assert!(group.records.iter().all(|r| r.did.0 == "did:plc:asdf"));
     });
 }
+7 -4
constellation/src/storage/rocks_store.rs
···
     ActionableEvent, LinkReader, LinkStorage, PagedAppendingCollection, PagedOrderedCollection,
     StorageStats,
 };
-use crate::{CountsByCount, Did, RecordId};
+use crate::{CountsByCount, Did, RecordId, RecordsBySubject};
 use anyhow::{bail, Result};
 use bincode::Options as BincodeOptions;
 use links::CollectedLink;
···
         after: Option<String>,
         filter_dids: &HashSet<Did>,
         filter_to_targets: &HashSet<String>,
-    ) -> Result<PagedOrderedCollection<(String, Vec<RecordId>), String>> {
+    ) -> Result<PagedOrderedCollection<RecordsBySubject, String>> {
         let collection = Collection(collection.to_string());
         let path = RPath(path.to_string());
···
             }
         }
 
-        let mut items: Vec<(String, Vec<RecordId>)> = Vec::with_capacity(grouped_links.len());
+        let mut items: Vec<RecordsBySubject> = Vec::with_capacity(grouped_links.len());
         for (fwd_target_id, records) in &grouped_links {
             let Some(target_key) = self
                 .target_id_table
···
 
             let target_string = target_key.0 .0;
 
-            items.push((target_string, records.clone()));
+            items.push(RecordsBySubject {
+                subject: target_string,
+                records: records.clone(),
+            });
         }
 
         let next = if grouped_links.len() as u64 >= limit {
+3 -3
constellation/templates/get-many-to-many.html.j2
···
 
 <h3>Many-to-many links, most recent first:</h3>
 
-{% for (target, records) in linking_records %}
-<h4>Target: <code>{{ target }}</code> <small>(<a href="/links/all?target={{ target|urlencode }}">view all links</a>)</small></h4>
-{% for record in records %}
+{% for group in linking_records %}
+<h4>Target: <code>{{ group.subject }}</code> <small>(<a href="/links/all?target={{ group.subject|urlencode }}">view all links</a>)</small></h4>
+{% for record in group.records %}
 <pre style="display: block; margin: 1em 2em" class="code"><strong>DID</strong>: {{ record.did().0 }}
 <strong>Collection</strong>: {{ record.collection }}
 <strong>RKey</strong>: {{ record.rkey }}
constellation/templates/hello.html.j2

This file has not been changed.

constellation/templates/try-it-macros.html.j2

This file has not been changed.

+109
lexicons/blue.microcosm/links/getManyToMany.json
{
  "lexicon": 1,
  "id": "blue.microcosm.links.getManyToMany",
  "defs": {
    "main": {
      "type": "query",
      "description": "Get records that link to a primary subject, grouped by the secondary subjects they also reference",
      "parameters": {
        "type": "params",
        "required": ["subject", "source", "pathToOther"],
        "properties": {
          "subject": {
            "type": "string",
            "format": "uri",
            "description": "the primary target being linked to (at-uri, did, or uri)"
          },
          "source": {
            "type": "string",
            "description": "collection and path specification for the primary link (e.g., 'app.bsky.feed.like:subject.uri')"
          },
          "pathToOther": {
            "type": "string",
            "description": "path to the secondary link in the many-to-many record (e.g., 'otherThing.uri')"
          },
          "did": {
            "type": "array",
            "description": "filter links to those from specific users",
            "items": {
              "type": "string",
              "format": "did"
            }
          },
          "otherSubject": {
            "type": "array",
            "description": "filter secondary links to specific subjects",
            "items": {
              "type": "string"
            }
          },
          "limit": {
            "type": "integer",
            "minimum": 1,
            "maximum": 100,
            "default": 16,
            "description": "number of results to return"
          }
        }
      },
      "output": {
        "encoding": "application/json",
        "schema": {
          "type": "object",
          "required": ["linking_records"],
          "properties": {
            "linking_records": {
              "type": "array",
              "items": {
                "type": "ref",
                "ref": "#recordsBySubject"
              }
            },
            "cursor": {
              "type": "string",
              "description": "pagination cursor"
            }
          }
        }
      }
    },
    "recordsBySubject": {
      "type": "object",
      "required": ["subject", "records"],
      "properties": {
        "subject": {
          "type": "string",
          "description": "the secondary subject that these records link to"
        },
        "records": {
          "type": "array",
          "items": {
            "type": "ref",
            "ref": "#linkRecord"
          }
        }
      }
    },
    "linkRecord": {
      "type": "object",
      "required": ["did", "collection", "rkey"],
      "description": "A record identifier consisting of a DID, collection, and record key",
      "properties": {
        "did": {
          "type": "string",
          "format": "did",
          "description": "the DID of the linking record's repository"
        },
        "collection": {
          "type": "string",
          "format": "nsid",
          "description": "the collection of the linking record"
        },
        "rkey": {
          "type": "string",
          "format": "record-key"
        }
      }
    }
  }
}

History

8 rounds 13 comments
11 commits
wip: m2m
Add tests for new get_many_to_many query handler
Fix get_m2m_empty test
Replace tuple with RecordsBySubject struct
Fix conflicts after rebasing on main
Use record_id/subject tuple as return type for get_many_to_many
Fix get_many_to_many pagination with composite cursor
Fix get_many_to_many_counts pagination with fetch N+1
wip
Fix rocks-store to match mem-store composite cursor
Address feedback from fig
pull request successfully merged
10 commits
wip: m2m
Add tests for new get_many_to_many query handler
Fix get_m2m_empty test
Replace tuple with RecordsBySubject struct
Fix conflicts after rebasing on main
Use record_id/subject tuple as return type for get_many_to_many
Fix get_many_to_many pagination with composite cursor
Fix get_many_to_many_counts pagination with fetch N+1
wip
Fix rocks-store to match mem-store composite cursor
8 commits
wip: m2m
Add tests for new get_many_to_many query handler
Fix get_m2m_empty test
Replace tuple with RecordsBySubject struct
Fix conflicts after rebasing on main
Use record_id/subject tuple as return type for get_many_to_many
Fix get_many_to_many pagination with composite cursor
Fix get_many_to_many_counts pagination with fetch N+1
1 comment

Okay. I wrapped my head around the composite cursor you proposed and am working on refactoring both storage implementations towards that. I think I might re-submit another round tomorrow :)

6 commits
wip: m2m
Add tests for new get_many_to_many query handler
Fix get_m2m_empty test
Replace tuple with RecordsBySubject struct
Fix conflicts after rebasing on main
Use record_id/subject tuple as return type for get_many_to_many
3 comments

Found a bug in how we handle some of the pagination logic in cases where the number of items and the user-selected limit are identical or very close to each other (already working on a fix)

thanks for the rebase! i tried to write things in the tiny text box but ended up needing to make a diagram: https://bsky.app/profile/did:plc:hdhoaan3xa3jiuq4fg4mefid/post/3mejuq44twc2t

key thing is that where the focus of getManyToManyCounts was the other subject (aggregation was against that, so grouping happened with it),

i think the focus of disaggregated many-to-many is on the linking records themselves

to me that takes me toward a few things

  • i don't think we should need to group the links by target (does the current code build up the full aggregation on every requested page? we should be able to avoid doing that)

  • i think the order of the response should actually be based on the linking record itself (since we have a row in the output), not the other subject, unlike with the aggregated/count version. this means you get eg. list items in order they were added instead of the order of the listed things being created. (i haven't fully wrapped my head around the grouping/ordering code here yet)

  • since any linking record can have a path_to_other with multiple links, i think a composite cursor could work here:

a 2-tuple of (backlink_vec_idx, forward_vec_idx).

for normal cases where the many-to-many record points to exactly one other subject, it would just be advancing backlink_vec_idx like normal backlinks

for cases where the many-to-many record actually has multiple forward links at the given path_to_other, the second part of the tuple would track progress through that list

i think that allows us to hold the necessary state between calls without needing to reconstruct too much in memory each time?

(also it's hard to write in this tiny tiny textbox and have a sense of whether what i'm saying makes sense)

Interesting approach! I have to think through this for a bit to be honest. Maybe I tried to follow the existing counts implementation too closely

Having said that, I added a new composite cursor to fix a couple of bugs that would arise when hitting some possible edge cases in the pagination logic. This affects both the new get-many-to-many endpoint and the existing get-many-to-many-counts endpoint. As the changes are split over two distinct commits, things should be straightforward to review.

Your assumption is still correct in the sense that we do indeed have to build up the aggregation again for every request. I have to double-check the get-backlinks endpoint to get a better sense of where you're going with this.

Finally, I agree that the interface here doesn't necessarily make the whole thing easier to understand, unfortunately

6 commits
wip: m2m
Add tests for new get_many_to_many query handler
Fix get_m2m_empty test
Replace tuple with RecordsBySubject struct
Fix conflicts after rebasing on main
Use record_id/subject tuple as return type for get_many_to_many
2 comments

i think something got funky with a rebase or the way tangled is showing it -- some of my changes on main seem to be getting shown (reverted) in the diff.

i don't mind sorting it locally but will mostly get to it tomorrow, in case you want to see what's up before i do.

That's on me, sorry! Rebased again on main and now everything seems fine

5 commits
wip: m2m
Add tests for new get_many_to_many query handler
Fix get_m2m_empty test
Replace tuple with RecordsBySubject struct
Fix conflicts after rebasing on main
5 comments

Rebased on main. As we discussed in the PR for the order query parameter, I didn't include it here as it's not a particularly sensible fit.

i need to get into the code properly but my initial thought is that this endpoint should return a flat list of results, like

{
  "items": [
    {
      "link": { did, collection, rkey }, // the m2m link record
      "subject": "a.com"
    },
    {
      "link": { did, collection, rkey },
      "subject": "a.com"
    },
    {
      "link": { did, collection, rkey },
      "subject": "b.com"
    },
  ]
}

this will require a few tricks in the cursor to track pages across half-finished groups of links

(also this isn't an immediate change request, just getting it down for discussion!)

(and separately, i've also been wondering about moving more toward returning at-uris instead of broken-out did/collection/rkey objects. which isn't specifically about this PR, but if that happens then switching before releasing it is nice)

Hmm, I wonder how this would then work with the path_to_other parameter. Currently we have this nested grouping in order to show and disambiguate the different relationships between links.

For instance, take the following query and its results:

http://localhost:6789/xrpc/blue.microcosm.links.getManyToMany?subject=at://did:plc:2w45zyhuklwihpdc7oj3mi63/app.bsky.feed.post/3mdbbkuq6t32y&source=app.bsky.feed.post:reply.root.uri&pathToOther=reply.parent.uri&limit=16

This query asks: "Show me all posts in this thread, grouped by who they're responding to."

A flat list would just give us all the posts in the thread. The nested structure answers a richer question: who's talking to whom? Some posts are direct responses to the original article. Others are replies to other commenters, forming side conversations that branch off from the main thread.

The pathToOther grouping preserves that distinction. Without it, we'd lose the information about who's talking to whom.

{
  "linking_records": [
    {
      "subject": "at://did:plc:2w45zyhuklwihpdc7oj3mi63/app.bsky.feed.post/3mdbbkuq6t32y",
      "records": [
        {
          "did": "did:plc:lznqwrsbnyf6fdxohikqj6h3",
          "collection": "app.bsky.feed.post",
          "rkey": "3mdd27pja7s2y"
        },
        {
          "did": "did:plc:uffx77au6hoauuuumkbuvqdr",
          "collection": "app.bsky.feed.post",
          "rkey": "3mdd2tt5efc2a"
        },
        {
          "did": "did:plc:y7qyxzo7dns5m54dlq3youu3",
          "collection": "app.bsky.feed.post",
          "rkey": "3mdd2wtjxgc2d"
        },
        {
          "did": "did:plc:yaakslxyqydb76ybgkhrr4jk",
          "collection": "app.bsky.feed.post",
          "rkey": "3mdd35hyads22"
        },
        {
          "did": "did:plc:fia7w2kbnrdjwp6zvxywt7qv",
          "collection": "app.bsky.feed.post",
          "rkey": "3mdd37j3ldk2m"
        },
        {
          "did": "did:plc:xtecipifublblkomwau5x2ok",
          "collection": "app.bsky.feed.post",
          "rkey": "3mdd3dbtbz22n"
        },
        {
          "did": "did:plc:hl5lhiy2qr4nf5e4eefldvme",
          "collection": "app.bsky.feed.post",
          "rkey": "3mdd42hpw7c2e"
        },
        {
          "did": "did:plc:fgquypfh32pewivn3bcmzseb",
          "collection": "app.bsky.feed.post",
          "rkey": "3mdd46jteoc2m"
        }
      ]
    },
    {
      "subject": "at://did:plc:3rhjxwwui6wwfokh4at3q2dl/app.bsky.feed.post/3mdczc7c4gk2i",
      "records": [
        {
          "did": "did:plc:3rhjxwwui6wwfokh4at3q2dl",
          "collection": "app.bsky.feed.post",
          "rkey": "3mdczt7cwhk2i"
        }
      ]
    },
    {
      "subject": "at://did:plc:6buibzhkqr4vkqu75ezr2uv2/app.bsky.feed.post/3mdby25hbbk2v",
      "records": [
        {
          "did": "did:plc:fgeie2bmzlmx37iglj3xbzuj",
          "collection": "app.bsky.feed.post",
          "rkey": "3mdd26ulf4k2j"
        }
      ]
    },
    {
      "subject": "at://did:plc:lwgvv5oqh5stzb6dxa5d7z3n/app.bsky.feed.post/3mdcxqbkkfk2i",
      "records": [
        {
          "did": "did:plc:hl5lhiy2qr4nf5e4eefldvme",
          "collection": "app.bsky.feed.post",
          "rkey": "3mdd45u56sk2e"
        }
      ]
    }
  ],
  "cursor": null
}

Correct me if I'm somehow wrong here!

Regarding returning at-uris: I think this might be a nice idea, as users could split these up themselves when they feel the need to anyway, and it feels conceptually more complete. But it might be easier to do this in a separate PR across all existing XRPC endpoints. That would allow us to add this new endpoint already while working on the updated return values in the meantime. I'd like to avoid doing too much distinct stuff in one PR. :)

at-uris: totally fair, holding off for a follow-up.

flat list: i might have messed it up in my example but i think what i meant is actually equivalent to the grouped version: flattened, with the subject ("group by") included with every item in the flattened list.

clients can collect the flat list and group on subject to get back to your structured example, if they want.

my motivations are probably part sql-brain, part flat-list-enjoyer, and part cursor-related. i'm trying to disregard the first two, and i'm curious about your thoughts about how to handle the cursor:

with a flat list it's easy (from the client perspective at least) -- just keep chasing the cursor for as much of the data as you need. (cursors can happen in the middle of a subject)

with nested results grouped by subject it's less obvious to me. correct me if i'm wrong (i need another block of time to actually get into the code) but i think the grouped item sub-list is unbounded in size in the proposed code here? so cursors are only limiting the number of groups.

if we go with the grouped nested response, i think maybe we'd want something like:

  • a cursor at the end for fetching more groups, and
  • a cursor for each group-list that lets you fetch more items from just that group-list.

(i think this kind of nested paging is pretty neat!)

Interesting. Now that you mention it, I feel I kinda get where you're going with this!

I think the whole nested-cursor thing, albeit possible for sure, creates unnecessary complexity, so I'll probably go with your suggestion.

It seems easier for most users to create custom groupings on their own (having more freedom is always great), and I think from an ergonomic perspective the two cursors might create more friction.

4 commits
wip: m2m
Add tests for new get_many_to_many query handler
Fix get_m2m_empty test
Replace tuple with RecordsBySubject struct
1 comment

Added the missing lexicon entry for the new endpoint and changed the return type as well. I commented this wrongly on the other PR that I was working on. Sorry about that lol.

3 commits
wip: m2m
Add tests for new get_many_to_many query handler
Fix get_m2m_empty test
1 comment

I think the existing get_many_to_many_counts handler and the new get_many_to_many handler are similar enough that we could extract the bulk of their logic into a shared helper. Maybe a method that takes the existing identical function parameters plus an additional callback parameter (handling what we do with found matches, i.e. calculating counts or joining URIs) would be one way to go.

I am not yet sure whether this is the right thing to do, as the shared implementation might end up a bit complicated. But given the strong similarities between the two, I think it's worth at least considering.