Current methods for running graph analyses (e.g. betweenness centrality)?

I’m trying to run some analyses against my Dgraph database. For example, I would like to find important nodes (centrality). I’m currently trying to load Dgraph data into Spark jobs, but am running into Spark-specific issues. I’m hoping there’s a better way.

For those of you who are doing similar calculations, what is your approach? Do you load your data into a graph-processing system manually? Do you use a library? What are your results? Do you have any code you can share?

I’ve searched and have found a few really old discussions about this sort of thing, but I haven’t found any specifics or example implementations.

(FWIW, I don’t mean to ask for Spark support here. I only mention it because it’s what I’m currently trying.)
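For concreteness, here’s roughly the kind of pipeline I’m after: pull an edge list out of Dgraph with pydgraph and score nodes with networkx. This is only a minimal sketch, and the `follows` predicate is a placeholder for whatever edge type your schema actually has:

```python
import json

import networkx as nx
import pydgraph

# Connect to a local Dgraph alpha on the default gRPC port.
stub = pydgraph.DgraphClientStub("localhost:9080")
client = pydgraph.DgraphClient(stub)

# "follows" is a placeholder predicate; substitute your own edge type.
query = """
{
  nodes(func: has(follows)) {
    uid
    follows { uid }
  }
}
"""
res = client.txn(read_only=True).query(query)
data = json.loads(res.json)

# Build an in-memory directed graph from the query result.
G = nx.DiGraph()
for node in data["nodes"]:
    for target in node.get("follows", []):
        G.add_edge(node["uid"], target["uid"])

# Betweenness is expensive (roughly O(V*E)), so this approach only
# makes sense for graphs that fit comfortably in memory.
scores = nx.betweenness_centrality(G)
print(sorted(scores.items(), key=lambda kv: -kv[1])[:10])

stub.close()
```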

Try DGL

https://www.dgl.ai/pages/about.html

For the most part, I script data loading & apply (Python) ML tools such as PyTorch, DGL, or GluonTS (time-series). When things work out in the lab, I package the result as a Dockerized microservice, deploy it to GKE, and integrate it with Hasura, which usually gives me a very fast total turnaround time.
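If it helps, the loading step mostly amounts to turning an exported edge list into a DGL graph. A minimal sketch, with dummy tensors standing in for whatever your export script actually produces:

```python
import dgl
import torch

# Dummy edge list: src[i] -> dst[i]. In practice these come from an
# edge export out of Dgraph, re-indexed to contiguous integer ids.
src = torch.tensor([0, 1, 2, 2, 3])
dst = torch.tensor([1, 2, 0, 3, 0])
g = dgl.graph((src, dst))

# In-degree is a cheap first proxy for node importance
# before bringing in any GNN machinery.
print(g.in_degrees())

# For classic algorithms (PageRank, betweenness, ...), round-trip
# through networkx rather than DGL itself.
nx_g = dgl.to_networkx(g)
```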

I tried working with neo4j because of its built-in APOC graph algorithms such as PageRank and centrality.
However, for a number of reasons, I am replacing neo4j with Dgraph for operations and everything else that must just work. Lesson learned.
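For the APOC algorithms specifically, networkx has equivalents for PageRank and the common centrality measures, so a stand-in can be as small as this (toy graph here; in practice G comes from your Dgraph export, as above):

```python
import networkx as nx

# Toy graph standing in for data pulled from Dgraph.
G = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "a"), ("c", "b")])

# 0.85 is the conventional PageRank damping factor.
ranks = nx.pagerank(G, alpha=0.85)
print(max(ranks, key=ranks.get))  # most "important" node
```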

Marvin,
Thanks a lot for sharing your approach and experience (and for the feature request on GitHub!)

See also [Feature request] Add Graph Deep Learning Capabilities · Issue #4608 · dgraph-io/dgraph · GitHub

Thanks,

This topic was automatically closed 41 days after the last reply. New replies are no longer allowed.