Pagination/querying cost optimization - Slash

I am currently running a Slash back end and am trying to figure out how I could cost-optimize my queries.

My app is a standard social media style application. Hence, most operations would be queries for user and post data. For a standard feed with infinite scroll that sorts posts by date, how can I optimize my queries to lower the cost?

Is it better to continuously load small batches of ~10 posts/users at a time as the user scrolls, or would it be more cost-efficient to load bigger chunks of data at the risk of overfetching?
When I load a standard feed with only previews of the respective posts, would it be better to load everything for every post and then pass it along as arguments as the user navigates to a detail page, or should I load only the data required for the preview, and then load the rest when a user decides to navigate to the detail page of a post?

In other words, should I optimize for loading minimal data, or for a minimal number of queries that each carry more data?
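To illustrate, here is roughly what I mean by the two approaches, using Dgraph's auto-generated `queryPost` (the schema and field names here are just placeholders for my actual types):

```graphql
# Option A: minimal preview query, run once per scroll batch
query FeedPreview($offset: Int = 0) {
  queryPost(order: { desc: createdAt }, first: 10, offset: $offset) {
    id
    text
    author {
      username
      profilePhotoUrl
    }
  }
}

# Option B: fetch everything up front and pass it to the detail page
query FeedFull($offset: Int = 0) {
  queryPost(order: { desc: createdAt }, first: 100, offset: $offset) {
    id
    text
    createdAt
    author {
      username
      profilePhotoUrl
    }
    comments {
      id
      text
      author { username }
    }
    likes {
      id
    }
  }
}
```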

Thankful for any advice.

This cannot be answered without a bias IMO. GraphQL is meant for fetching only what you need, when you need it, and depending on what client caching architecture you are using, this can have various performance effects. It really all depends on your exact use case, and no two use cases are likely to be the same. How you render the data may also have an effect, since extra re-renders can slow down your application.

You basically have to figure out what works best for you and do it that way. I wouldn’t refetch a 2nd query for just another field that you could have gotten with the first, but if you multiply that by 1000, then the problems become real.

I remember reading something about credits, and that with Slash one gets 10,000 credits, which correspond to 10,000 queries, but I am not sure whether that is still a thing.

The way I have currently structured my app, I would hit 10,000 queries well before hitting the data limit.

I’m sure it depends on the exact app, but for a regular news feed with only a handful of fields like profile photo URL, text, IDs, comments, likes, etc., I’m assuming I should only fetch the data that is displayed on the feed and not on the post-details page. That means aggregating data for comments and likes, for example, and fetching those details and all the other details only when a user decides to navigate to the post-details page.
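For the preview, I’m picturing something like the following, using the auto-generated nested aggregate fields for counts (assuming those are still generated for list fields; the field names are placeholders from my schema):

```graphql
# Feed preview: only what the feed card displays, plus counts
query FeedPreview($offset: Int = 0) {
  queryPost(order: { desc: createdAt }, first: 10, offset: $offset) {
    id
    text
    author {
      profilePhotoUrl
    }
    commentsAggregate { count }
    likesAggregate { count }
  }
}

# Detail page: fetch the heavy nested data only on navigation
query PostDetails($id: ID!) {
  getPost(id: $id) {
    comments {
      id
      text
      author {
        username
        profilePhotoUrl
      }
    }
    likes {
      user { username }
    }
  }
}
```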

Going off of that, is there a substantial difference between fetching only 10 posts as the user scrolls and fetching 100 posts at a time, even though the user might not even scroll that far? I’m not sure how those approaches would scale.

If anyone who is building or has built a social media-style app could give me some hints, I would greatly appreciate it!

There are no more credits on Dgraph Cloud (Slash was renamed).

10 vs. 100 pagination… we have pagination in our application, but again this can depend on the use case. For instance, in our app a user can select and hide fields; some fields are very simple same-level fields, while others are deep (to the 5th degree). So if a user selects a field we do not already have, we have to add that into the query and rerun it. But that is just one aspect and not in line with your use case.

The things to really look at are how much data might be overfetched, how often that data would be overfetched, and the cost of overfetching vs. not overfetching. You might just have to do some A/B testing or metric analysis in your app to find the sweet spot. If, for instance, users load 2.3 pages on average, then you might increase the page size from 10 to 30.

One other thing to think about is how expensive the filter is. If you are doing a really complex filter and/or have advanced auth rules, it might be cheaper to overfetch even more data than to paginate x times.
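If you parameterize the page size, tuning it from 10 to 30 after your metric analysis is just a variable change on the client, with no query rewrite. A sketch, again with placeholder schema names:

```graphql
# Page size comes in as a variable, so the sweet spot
# can be adjusted (or even A/B tested) without changing the query.
query Feed($first: Int = 30, $offset: Int = 0) {
  queryPost(order: { desc: createdAt }, first: $first, offset: $offset) {
    id
    text
  }
}
```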