Feature Request: Increment/Decrement field values like Firestore

https://firebase.google.com/docs/firestore/manage-data/add-data#increment_a_numeric_value

Hi, I don’t understand why Dgraph still has no increment/decrement feature :( We really need it!

Unfortunately, upserts aren’t a solution: if 1000 users like a picture within one second, Dgraph can’t keep up and most of those transactions will fail.

Thank you very much!

In an MVCC architecture like Dgraph’s, I’m not sure how one would increment without a read-then-write transaction, which can fail. Any suggestions?
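
Here is roughly what I mean by read-then-write, expressed as a DQL upsert block; the likeCount predicate and the 0x123 uid are made up for illustration:

upsert {
  query {
    post as var(func: uid(0x123)) {
      c as likeCount
      newC as math(c + 1)
    }
  }
  mutation {
    set {
      uid(post) <likeCount> val(newC) .
    }
  }
}

If two transactions run this concurrently, both read the same c, so one of them aborts at commit time and has to be retried. That is exactly the contention the original post describes.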

If you don’t need 100% accuracy, a separate read followed by a blind write would never fail, but if two of them happen at once, one increment value is lost.
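
As a sketch, that would be an ordinary query followed by an unconditional mutation (same made-up predicate and uid as above):

# 1. Read the current value in one request.
{
  q(func: uid(0x123)) {
    likeCount
  }
}

# 2. Blindly write the client-computed value in a separate request.
{
  set {
    <0x123> <likeCount> "43" .
  }
}

This never aborts, but two concurrent writers that both read 42 will both write 43, losing one increment.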


Firestore works differently (counters as stored values pretty much don’t exist there), so IMHO the two aren’t directly comparable.

I see two options.

  1. Use a custom lambda that runs a DQL upsert (like the one sketched earlier in the thread).
  2. Create a new node for each like, then aggregate the total.

Ex. The number of likes on a post:

type Post {
  id: ID!
  likes: [Like] @hasInverse(field: post)
  ...
}
type User {
  id: ID!
  likedPosts: [Like] @hasInverse(field: user)
  ...
}
type Like {
  id: String! @id
  user: User
  post: Post
}

The Like id will be a composite key like user__post to enforce uniqueness. I would also put this in a custom mutation until Dgraph supports composite indexes and lambda pre-hooks in about 5 years.
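
To make ‘custom mutation’ concrete: Dgraph’s GraphQL layer lets you declare a lambda-backed mutation in the schema and implement it in JavaScript. A minimal sketch, assuming a hypothetical likePost mutation (the name and arguments are made up; double-check the dgraph-lambda resolver API against the docs):

# Added to the GraphQL schema:
type Mutation {
  likePost(userId: ID!, postId: ID!): Like @lambda
}

// Lambda resolver (JavaScript), registered with dgraph-lambda:
self.addGraphQLResolvers({
  "Mutation.likePost": async ({ args, graphql }) => {
    // Composite key: one Like per (user, post) pair. The @id constraint
    // on Like.id makes a duplicate like for the same pair fail.
    const id = `${args.userId}__${args.postId}`;
    const results = await graphql(`
      mutation {
        addLike(input: [{
          id: "${id}"
          user: { id: "${args.userId}" }
          post: { id: "${args.postId}" }
        }]) {
          like { id }
        }
      }
    `);
    return results.data.addLike.like[0];
  },
});

In real code you would pass the arguments as GraphQL variables rather than interpolating strings, but this shows the shape.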

This way, when you add a like, remove a like, etc., the aggregate count will always be up-to-date. Increments can be fishy sometimes.

J


Hi, thanks for the response!

The first solution would still have problems if 999 users like something at the same time, right? (Because 998 of those transactions would fail.)
Or am I missing something?

Second solution: with that one, to get the total like count, I would have to aggregate/count it every time I request it. Is that right?

With the sentence:

“the aggregate count will always be up-to-date”

do you mean that it will only be up-to-date once the composite index & pre-hook features arrive in 5 years?

Or already now? How? With a ‘custom mutation’? What exactly do you mean by that?

On the 999 concurrent likes: I am not sure, as I’m definitely not an expert on how Dgraph works behind the scenes.

On counting it on every request: correct, but because Dgraph uses sharding, I don’t believe this would be a problem.
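
For what it’s worth, recent Dgraph versions generate aggregate fields alongside list edges, so the count is a single query rather than client-side work. With the schema above, something like this should do it (the post ID is made up):

query {
  getPost(id: "0x123") {
    likesAggregate {
      count
    }
  }
}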

I think Dgraph is meant to be scalable in either case, but I’m not an expert on Badger.

I was being sarcastic with the ‘5 years’ remark, since we have to beg for any kind of communication or update from the Dgraph team.

That being said, I am a huge fan of the product.

J
