Dgraph, Microservices, Time series, Scalability and CI/CD

I would not personally recommend it. Dgraph is already built to shard and scale horizontally instead of vertically. I would recommend keeping connected data, well… connected as much as possible.

Yes, this is possible with the GraphQL endpoint and the @custom directive. You could even pull data from other, non-graph endpoints. Just FYI though, that data is not expandable beyond the first retrieval, so you have to get any of the nested data you need in a single request.
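For reference, a minimal sketch of what that can look like, assuming a hypothetical REST weather endpoint (type names, the URL, and the exact @custom argument rules are mine, not from the question; check the Dgraph docs on @custom and @remote for the specifics):

```graphql
# Hypothetical remote type: Dgraph does not store it (@remote),
# so once it comes back you cannot expand or filter it further.
type Weather @remote {
  tempC: Float
  summary: String
}

type City {
  id: ID!
  name: String! @search(by: [hash])
  # Field resolved by calling out to a non-graph REST endpoint;
  # $name is substituted from the parent City object.
  weather: Weather @custom(http: {
    url: "https://api.example.com/weather?city=$name",
    method: "GET"
  })
}
```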

See above.

This is really hard to answer because it depends so much on the use case. For me, the biggest challenge is going to be doing client-side custom filtering of deeply nested data at multiple levels. Generally speaking, though, Dgraph has outperformed our previous MySQL 8.0.17 instance on RDS even for simple lookups on a single table. And if you are comparing relational DBs to graph and care at all about the N+1 problem, then graphs will win out every single time!

The query language will probably be most people's biggest challenge; it involves a different way of thinking than traditional relational DBs. With SQL-like queries you do SELECT [fields] FROM [tables [w/ joins]] WHERE [filters] HAVING [special-case filters] [pagination]. This is totally different with graph databases of any kind: you literally select the fields you want and apply filters at every level down through the graph (see the sketch below). So making the jump will most likely be easier for newcomers who have no relational DB experience than for those coming from SQL. If you take it for what it is, though, you literally ask for the data you want in the shape that you want it. Designing your schema is of the utmost importance, so get it right early in the game.
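To make that concrete, here is a rough sketch against a hypothetical Author/Post/Comment schema (the query, filter, and order names are generated from your own types and @search directives, so yours will differ). Each level gets its own filter, ordering, and pagination, and the whole nested result comes back in one request, which is also why the N+1 problem goes away:

```graphql
query {
  # top-level filter, roughly the WHERE on the root table
  queryAuthor(filter: { name: { eq: "Alice" } }) {
    name
    # filter + order + pagination applied one level down
    posts(filter: { title: { anyofterms: "graphql" } },
          order: { desc: datePublished },
          first: 10) {
      title
      datePublished
      # and again another level down: no joins, no extra round trips
      comments(first: 5) {
        text
      }
    }
  }
}
```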

Not my area of expertise, sorry. I will say it works for basic datetime tracking and plays really well with the UI, providing the datetime in the correct format without needing to convert it to a JavaScript Date object like I did with MySQL. I use times for events, tasks, and activity tracking, though there is currently no native support for createdAt and updatedAt fields, so those have to be handled on the UI for now.
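As an illustration of that workaround, this is roughly the shape I mean (a hypothetical Task type, not my actual schema); the timestamps are plain DateTime fields that the client fills in on every add/update mutation:

```graphql
type Task {
  id: ID!
  title: String!
  # No automatic timestamps in Dgraph's GraphQL at the time of writing,
  # so the UI supplies these, e.g. new Date().toISOString() on each mutation.
  createdAt: DateTime @search
  updatedAt: DateTime @search
}
```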

Sure, do it on Slash and pay-as-you-go.

Umm, I don't know, sorry. All I know is that Dgraph does automatic sharding, but I am not sure about any geo-sharding… I guess someone further up the chain can give more info here.

I know this is tagged Dgraph, but I foresee most users eventually being mainly on the GraphQL endpoint, so that is where I am answering this from; the Dgraph schema is roughly the same under the hood anyway.

Changing the schema does not change data; however, it can change access to that data. I am doing schema control in my GitHub repo alongside my main UI, which handles any schema rollbacks or change control needed. Any data that needs to change along with a schema change has to be migrated with custom scripts. Rolling back data changes may not always be possible: if you delete data, it is gone, and a DB rollback is not available unless you restore from a backup, as far as I know.
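For the schema-in-repo part, what makes it CI/CD-friendly is that the GraphQL schema is just a file you push to Dgraph's /admin endpoint, so a "rollback" is pushing the previous version of that file from your repo. A sketch of the admin mutation, assuming the schema string is the contents of a schema.graphql tracked in git (double-check the admin API docs for the exact payload):

```graphql
# Sent to the /admin endpoint (not /graphql).
mutation {
  updateGQLSchema(input: { set: { schema: "type Post { id: ID! title: String! }" } }) {
    gqlSchema {
      schema
    }
  }
}
```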

Well, seeing that Dgraph made GraphQL± and built Badger from scratch for their own purposes, they probably view it pretty highly. That said, any graph database worth its salt should be able to import well-constructed GraphQL data with a full schema. I foresee GraphQL becoming the main graph query language, and not just a query language but a fully functional DB language. Dgraph is leaps and bounds ahead of everybody else here, and the rest have a lot of catching up to do. How can Dgraph use GraphQL as a query language? Because they built their whole database structure around it.

Search for the Live Loader and Bulk Loader in the Dgraph docs. EDIT: sorry, I misread this as being about imports instead of queries. You are talking about something different here.
