Query server timestamps in GraphQL?

Thank you for this well-thought-out push for server-side timestamps!

I love this. It would help a lot with getting only the newest data and not receiving old data from sub-graphs over and over again.

Does this need to be so strict? Why not allow both? For instance, we would definitely need mutable timestamps for client <-> server synchronization.

If it were implemented like this, you could choose whether to allow it:

input TimestampConfig {
   active: Boolean!
   mutable: Boolean!
}

directive @timestamps(
   createdAt: TimestampConfig
   updatedAt: TimestampConfig
) on OBJECT
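
Applied to a type, usage might then look something like this (just a sketch of the proposal above, nothing that exists today):

type Post @timestamps(
   createdAt: { active: true, mutable: false }
   updatedAt: { active: true, mutable: true }
) {
   title: String!
}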

This is just my opinion on how and why I would do it this way. I am not developing this, so the core devs would have to make the final decision in this regard, but my $.02 is:

It should be strict to ensure that the API layer never performs actions it should not be doing. With Dgraph GraphQL this is a little harder to see at first, but as I explained in my afterthoughts above, the GraphQL implementation inside of Dgraph is just that, an API. It takes the GraphQL and rewrites it, using rules, into DQL. In an API layer, actions such as adjusting timestamps are not permitted. If they were, any user would be able to adjust a timestamp willy-nilly. Think of how this works with other databases and APIs: if a timestamp is automated by the database, then the API relies on that automation and does not allow writing to it through the API.

Looking at the implementation I wrote above, if the _updatedAt field gets set during the rewriting process into the DQL mutation, then allowing a user to also set this _updatedAt predicate could result in two values being written, and the rewriting process would become more complex, because it would then need to decide when not to add the automated predicate if the user supplied one. But I think the issue goes deeper than this…
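
To make that concrete, here is a minimal sketch (type and field names are my assumptions, not the actual generated Dgraph schema) of the separation I mean: the timestamps are readable, but the generated mutation input simply omits them, so no client-supplied value can ever compete with the one the rewriter writes:

type Post {
   id: ID!
   title: String!
   _createdAt: DateTime   # written only by the GraphQL -> DQL rewriter
   _updatedAt: DateTime   # written only by the GraphQL -> DQL rewriter
}

input AddPostInput {
   title: String!
   # no _createdAt / _updatedAt here, so clients cannot set them
}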

I believe a client->server synchronization should be different from a server->client synchronization. Hear me out. Right now there is no good implementation that I have found for offline GraphQL. The only thing really out there is client-side caching of GraphQL (i.e. Apollo Client). To update the client cache with a source of truth, the client should first update the source of truth (the server) and then update the cache with the response. Therefore a client->server sync is not really pushing the source of truth, but is setting pieces of truth and getting back the source of truth for those pieces (which would contain the timestamps). For any server<->server sync that needs to set these timestamps, that should be done using DQL live imports and RDF streams IMO.
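
As an illustration of that flow (the mutation shape and field names are assumptions based on the sketch above, not an existing API), the client sends its piece of truth and reads the server-controlled timestamps back into its cache:

mutation {
   addPost(input: [{ title: "Hello from the client" }]) {
      post {
         id
         title
         _createdAt   # server-assigned; the client cache treats this as the truth
         _updatedAt
      }
   }
}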

It would be interesting to see an implementation of client->server sync where the client collects mutations and then runs them in batches at a later point to perform the sync. This would add more complications, because the client would then be responsible for ensuring there are no conflicts of unique IDs, and it would also require some sort of blank-node implementation in GraphQL. I don’t think the GraphQL spec is ready yet for client->server sync.

For the time being, if you have a timestamp field you want the database to control, let it control it; if you have a timestamp field that you want to control yourself, then do so with a regular DateTime field as I stated above. This would get a timestamp feature into production quickly, which could then be iterated upon for feature enhancements later.
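
For example, something along these lines (a sketch only; @timestamps is the directive proposed in this thread and authoredAt is a made-up field name):

type Post @timestamps {          # _createdAt / _updatedAt managed by the database
   id: ID!
   title: String!
   authoredAt: DateTime          # regular field, fully controlled by the client
}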

Sure - the implementation would be a bit more complex, but is this really a deciding criterion?

I’m fine with this approach. But I fear that once it is implemented immutable-only, there won’t be a reiteration for a loooooong time, given the vast number of (important) feature requests and bug reports currently open.

We didn’t find any either, that’s why we built it from scratch. We have one Dgraph database running on the client (plus an Electron React app) and one Dgraph database on our server. The user can work offline, all data is stored in his Dgraph instance, and when he goes online, we synchronize both databases using GQL mutations. And I can tell you - it’s working just fine and isn’t much effort either when using code generation. And that’s why we would need mutable timestamps: when the user creates a post offline on date XXX, the same date should show online on that post.
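
To show what I mean, here is a rough sketch of such a sync mutation (type and field names are placeholders from our setup; the explicit createdAt is exactly what mutable timestamps would allow):

mutation {
   addPost(input: [{
      title: "Written offline"
      createdAt: "2021-03-01T10:15:00Z"   # the offline creation time, preserved on sync
   }]) {
      post { id createdAt }
   }
}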

I’d also happily implement the synchronization with DQL if you think that this is better suited. Can you point me to some resources where I can learn how to use “DQL live imports and RDF streams” for server<->server synchronization purposes?

@amaster507 Thank you for the detailed post. What you have here is pretty good and can be implemented pretty much as it is.

Yeah, we’d probably like to have the predicates named as Type._createdAt and Type._updatedAt so that they can be sharded across Alpha nodes as the data grows.
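
Purely to illustrate the naming (whether and how these fields would be surfaced in the GraphQL schema is a separate question, and the @dgraph(pred: ...) mapping shown here is just an assumption reusing the existing convention):

type Post {
   _createdAt: DateTime @dgraph(pred: "Post._createdAt")
   _updatedAt: DateTime @dgraph(pred: "Post._updatedAt")
}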

I also agree that they should be set automatically and shouldn’t be exposed via the GraphQL API. This is also because the GraphQL API is supposed to be used by browser clients, and it’s not good practice to set timestamps from the client.


Sorry, why do you want to limit clients to browsers? What about my @custom logic resolvers? They do a lot of stuff that dgraph-gql can’t do (and might never be able to do) and use GQL clients to write back to my Dgraph instance. They run on my servers, so I trust them to write correct timestamps.

Can’t we find some kind of agreement here? E.g., GQL clients could be allowed to change timestamps when they send some kind of authentication header?

The thing is that this decision is very important for us. It will decide whether we have to learn DQL, throw months of developer work away, and rewrite our complete synchronization logic. And if that is the case, we’d better start yesterday than next week.

Also, if you decide to make it read-only, I need to be sure that server<->server sync can be achieved (with partial data) with DQL. Can you give your opinion on this @pawan ?


Typically wouldn’t you have more timestamps, something like createdAt for the server and initialAt for an optional offline init event?
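
As a sketch (initialAt is just the name suggested here; which of the two stays client-writable is an assumption):

type Post {
   id: ID!
   createdAt: DateTime    # server-controlled, set when the record first reaches the server
   initialAt: DateTime    # optional, client-supplied offline creation/init time
}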


That will do of course.

You may find this proposal useful: GraphQL error: Non-nullable field was not present in result from Dgraph - #6 by abhimanyusinghgaur

Notice how you can configure which fields you want in the mutation input, and how @input could later be extended to support default values too. So, you will totally be able to configure mutation inputs as per your use case. Both createdAt and updatedAt can be accomplished that way.
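
Purely as an illustration of that idea (the @input argument names here are made up; the actual syntax is in the linked proposal and may well differ), configuring which timestamp a client may set could look roughly like:

type Post {
   title: String!
   createdAt: DateTime @input(add: true)    # hypothetical: client may supply it on add (your sync case)
   updatedAt: DateTime @input(add: false)   # hypothetical: excluded from inputs, server-only
}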

For the part about authentication, I think that may be taken care of when we support field-level auth as well.