How to build a like feature / increment values without transactions?

What I want to do

I want to implement a like feature like the ones Facebook, YouTube, and Instagram have.

What I did

My current approach uses transactions to increment a counter, and it works. But if 50 users like something at the same time, 49 transactions will abort due to conflicts, and everything gets clogged with retries.

Does anyone have an idea how to solve this easily? One workaround would be sharded counters: every user gets their own counter, and a scheduled cloud function periodically adds everything up. But I don't like that approach, for obvious reasons.

PS: I read about upserts in old discussions, but I really don't understand how an upsert is supposed to solve this: https://dgraph.io/docs/howto/upserts/

Try this

So, instead of a number, you should use Edges.

e.g.: { User A } -[like]-> { Like node } -[like]-> { User B }

So the like becomes an intermediate list of nodes. Or you can just use simple edges:

e.g.: { User A } -[like]-> { User B }

I'd prefer the intermediate nodes because you can attach extra information to the action, making each like richer in information.
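In a Dgraph GraphQL schema, that could look roughly like this (a minimal sketch; all type and field names are illustrative, not prescribed):

```graphql
# minimal sketch — the types and field names are illustrative
type User {
  id: ID!
  username: String! @search(by: [hash])
  likesGiven: [Like] @hasInverse(field: by)
  likesReceived: [Like] @hasInverse(field: of)
}

type Like {
  id: ID!
  by: User!
  of: User!
  likedAt: DateTime   # extra information attached to the action
}
```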

Also, edges are always unique, so you can't like twice with that approach.
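This is also where the upserts you mentioned come in. With the simple-edge model, a DQL upsert block looks up both nodes and sets the edge in a single request; setting the same uid-to-uid edge again is a no-op, so the like is idempotent. A sketch, assuming an indexed `username` predicate (my assumption, not your schema):

```dql
upsert {
  query {
    # assumes: username: string @index(hash) .
    a as var(func: eq(username, "userA"))
    b as var(func: eq(username, "userB"))
  }
  mutation {
    set {
      # re-setting an existing uid-uid edge changes nothing,
      # so "userA" can't like "userB" twice
      uid(a) <like> uid(b) .
    }
  }
}
```

The upsert still runs inside a transaction; it just bundles the lookup and the write into one request.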

BTW, you can’t do anything without transactions.

Cheers.


Also check out these learning guides:

https://dgraph.io/learn/courses/datamodel/sql-to-dgraph/develop/compare-schema/

https://dgraph.io/learn/courses/datamodel/evolution/develop/graphs-data-modeling/graph-data-model/

They both cover how to handle likes in a social media context.


Thanks @MichelDiz @amaster507, but is handling this through edges really the right way? If a user fetches, say, an Instagram-style post, Dgraph has to aggregate/count those edges to get the like count. And if 100 users per second fetch the same post, that is a heavy load for nothing: 100 database requests that always run the exact same aggregation. Isn't that a waste of resources?
I think the right thing to do is to use cached results (https://dgraph.io/docs/graphql/queries/cached-results/) with a 5-second TTL. Is that right? Would that solve my problem? I think yes, but I want to be sure, so I'm asking my seniors.
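For reference, this is roughly how I picture it, if I read the cached-results doc right: the `@cacheControl` directive makes Dgraph send a `Cache-Control: public,max-age=N` header, so the actual caching is done by a CDN, proxy, or browser in front of Dgraph, not by Dgraph itself. The `Post` type and field names here are made up for illustration:

```graphql
# hypothetical Post type with a likes: [Like] field — names are illustrative
query @cacheControl(maxAge: 5) {
  getPost(id: "0x123") {
    content
    likesAggregate {
      count   # re-served for up to 5 seconds by whatever cache sits in front
    }
  }
}
```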

BTW, are aggregations/counts intensive work? Dgraph would need to count all of those edges, which sounds like a hell of a lot of work. Or maybe Dgraph has some smart internal caching mechanism or something like that? (I read something about Badger.)

It is just counting UIDs in a posting list; read the Dgraph white paper. Counting edges should be really simple backend work, unless you are filtering the edges, in which case there is an additional step: first filter the UIDs, then count them.
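In DQL that count looks roughly like this (a sketch; it assumes a `like` predicate declared with `@reverse` and an indexed `username` predicate, which are my assumptions):

```dql
# assumed DQL schema:
#   like:     [uid]  @reverse .
#   username: string @index(hash) .
{
  q(func: eq(username, "userB")) {
    likeCount: count(~like)  # incoming like edges, read straight off the posting list
  }
}
```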

I prefer not to cache things at the backend very often; let clients cache when and how they want.
