How to get next X results in pagination

Hi,

When using pagination, how can I get the next X results?
For example, if I query all users named Rachel and the result has 30 users, but I want to get them in 3 different pages of 10 people, what should I do?

Thanks

You have to use first along with offset; to paginate, you only have to increase the offset.

{
  q(func: eq(name@en, "Rachel"), first: 10, offset:0) {
    name@en
    age
  }
}
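
To get the next pages, keep first the same and just increase the offset. For example, the second page of 10 would be:

{
  q(func: eq(name@en, "Rachel"), first: 10, offset: 10) {
    name@en
    age
  }
}

And offset: 20 for the third page.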

Will it always skip the same objects?
Let’s say I make a query with offset 0, 10, or 20. Will it always return new results, or might results from the first ten also appear in the query with the offset set to 20?

How would you do this in DQL?

The offset is based on indexing, or, if the predicate is not indexed, on the UID order (which can be an unwanted sorting). If you have an indexed predicate used in the pagination query, it will always return the same objects. But you always have to give a new “positive” value to the offset.

For example, a date is a good type of index for pagination. However, name isn’t a good one (if you use has(name@en)), because you can always add homonyms, and this will increase the number of “Rachel” results, for example. And this changes the paging.

If you want fixed paging, choose a fixed index.
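
If you want the order to be fully deterministic, one option (just a sketch, assuming age has an int index so it can be sorted on) is to sort explicitly on an indexed predicate:

{
  # explicit sort makes the page order independent of UID order
  q(func: eq(name@en, "Rachel"), orderasc: age, first: 10, offset: 10) {
    name@en
    age
  }
}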

Well, let’s say I have a type repository, and it has a name, description, and tags, where tag is also a type that has a keyword (e.g. react) and a creation date.
I want to return all of the repositories that have the term react in their name, description, or one of their tags.
There might be thousands of repositories that match this query (like on GitHub), so I need 10 at a time.
What will be the index here, and how can I know the maximum offset? If there are 300 repositories that match this query and each time I return 10, how can I know to stop at offset 290?

I would do something like:

{
  q(func: eq(repo.created_at, "2017"), first: 10, offset:0) @filter(some tags and description here) {
    repo.name
    repo.created_at
  }
}

To base my query on the date first.

With count(uid): divide that result by the limit you set in first.

count(uid) / first (10) = total of pages you have.
Or
300 / 10 = 30 pages.

And the count of pages starts at 0. So you have offsets 0, 10, 20, …, 290.

Ok, so first you run a query that counts how many repos are relevant to this query, and then another query that has the pagination and returns the data, right?

The query that returns the number of uids could be done in the same query, in multiple blocks. The count(uid) would just give you the total to work out the amount of pages; you won’t use it in another query anyway. You would use this information on the front-end, though.
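
For example, a rough sketch of both blocks in one request, reusing the repo.* predicates from above (the anyofterms filter is just a stand-in for your real filters and assumes a term index on repo.description):

{
  # total matches; divide by the page size to get the number of pages
  total(func: eq(repo.created_at, "2017")) @filter(anyofterms(repo.description, "react")) {
    count(uid)
  }

  # one page of results
  page(func: eq(repo.created_at, "2017"), first: 10, offset: 0) @filter(anyofterms(repo.description, "react")) {
    repo.name
    repo.created_at
  }
}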

BTW, I shared this some time ago: How to store ordered data? - #4 by MichelDiz. It could be useful, because it is a linked list of nodes, so it is a fixed ordering. It is a bit complex, but it works.

But in GraphQL it would have to be two separate queries run synchronously, as it doesn’t support blocks or count like DQL does.

Gotcha, thanks.
One last thing about this query: is the next query correct if I want to get all repositories that have the term ork in part of their name, description, or one of their tags:

{
  me(func: eq(Repository.created_at, "2017"))@filter(regexp(Repository.description, /ork/) or regexp(Repository.name, /ork/)){
    Repository.tags @filter(regexp(Hashtag.keyword,/ork/)){
        Hashtag.keyword
    }
    Repository.name
  }
}

I am not sure that this kind of query is possible in GQL at all.
There are no filters on sub-types, so it will be impossible to ask which repositories have a specific term in their name or description and in one of their tags.

No problem. The order of the factors does not change the result. But you can always use Custom DQL.

BTW, I’m not sure whether GraphQL supports pagination. It should, but I have never used it.

Sure it’s possible, just maybe not as easy as we would like. I was just going off the topic’s category being GraphQL, which is why I chimed in.

GraphQL does support first and offset. You can get a subtype filter if you reverse the query or use cascade.

I have not crossed the bridge of getting a count because it is more trouble than it is worth at the moment. So my implementation right now just uses pagination until no more results are available, at which point I know I have reached the end.

The OP was strictly about pagination. Since it has branched out to pagination with sub-filters, the GraphQL way right now would be to get a list of repo ids from the root repos with your filters, get another list of repo ids from the tag root with your tag filters, and union these two sets together client-side. Then use this array of repo ids as the filter, and use first and offset to get pagination.

Yes, I thought about this solution. Technically, in this case, I can count the number of ids on the client as well and figure out the number of pages I need.
I think this will be the best option to solve it.
Filtering the tags and using cascade would lose data, because I need repos with the tag ork but I also want their other tags, so filtering to a specific tag won’t be the right thing to do.
On the other hand, doing a reverse query as you suggested on the Hashtags, finding the repositories they apply to, and then getting the uids will solve it :slight_smile:
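
Something like this, I guess (just a sketch, assuming Repository.tags is declared with the @reverse directive and Hashtag.keyword has a trigram index for regexp):

{
  # start from the matching hashtags and walk the reverse edge back to their repos
  repos(func: regexp(Hashtag.keyword, /ork/)) {
    ~Repository.tags {
      uid
      Repository.name
    }
  }
}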

Last thing: if I use the pagination after having the uids, will they be considered fixed indices, and will the result always be the same?

Even if you provide the ids in a filter, the order will still be the same, based on the retrieved uids (ids), unless you provide an alternative sort method.
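
For example (just a sketch; the uids below are placeholders, and sorting by Repository.name assumes it has an index that supports sorting, e.g. exact):

{
  # without orderasc, results come back in uid order; with it, in name order
  page(func: uid(0x1, 0x2, 0x3), orderasc: Repository.name, first: 10, offset: 0) {
    uid
    Repository.name
  }
}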

Great.
So no need to worry, when increasing the value of the offset, about results from the first 10 (offset 0) showing up again in the next 10 (offset 2), right?

What do you mean by providing an alternative sort method? A different order?

Not unless data was added or deleted in between the queries; otherwise the order will be the same.

Offset 2 would not get the 2nd set.

  • page 1 = (first: 10, offset: 0)
  • page 2 = (first: 10, offset: 10)
  • page 50 = (first: 10, offset: 490)

Use the formula offset = page * limit - limit, where limit = items per page (the value used in first).

Thanks