How to set up automated migration

Dgraph seems to be a very promising database. I’ve read the documentation, but I can’t figure out a viable path for setting up schema and data migration. Can a core member shed some light on this topic?

Related question: how are you guys managing deployment in your environments?


Welcome William!

  • We usually recommend that new users go through the Tour of Dgraph here. This will help you understand the schema and query aspects.

  • For migration:
    You can export the data and schema from an existing Dgraph instance.
    You can then do either a bulk load or a live load. A bulk load is used when loading a large amount of data for the first time into a fresh cluster, while a live load can be used for loading data into an already-running cluster. See the sketch after this list for how a CI job might drive the live loader.

  • Automation
    Here is a Helm chart for Kubernetes that might be helpful.
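To make the migration bullet concrete, here is a minimal sketch, assuming a JVM-based CI runner, of a step that shells out to the live loader after an export. The file paths, endpoints, and flag spellings are assumptions on my part and differ slightly between Dgraph versions, so check `dgraph live --help` for yours.

```java
import java.io.IOException;

// Hypothetical CI step that pushes an exported dataset and schema into a
// running cluster via the live loader. Paths, addresses, and flag names are
// assumptions; verify them against `dgraph live --help` for your version.
public class LiveLoadStep {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "dgraph", "live",
                "-f", "export/g01.rdf.gz",    // exported data file (assumed path)
                "-s", "export/g01.schema.gz", // exported schema file (assumed path)
                "-a", "localhost:9080",       // Alpha gRPC endpoint
                "-z", "localhost:5080");      // Zero gRPC endpoint
        pb.inheritIO(); // stream the loader's output into the CI log
        int exitCode = pb.start().waitFor();
        if (exitCode != 0) {
            throw new IllegalStateException("dgraph live exited with code " + exitCode);
        }
    }
}
```

A bulk load into a fresh cluster follows the same idea with `dgraph bulk`, run before the Alphas come up.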

Thanks for the reply @anand. The use case I’m looking for is this: I would like the schema to be applied automatically during CI, and to run any necessary data migrations.

From what I understand, Dgraph does not provide a straightforward solution and requires the developer to code the schema and data migration logic themselves. If this is correct, could you explain how to proceed?

Hi @whollacsek,
We can certainly apply the schema dynamically during a migration or any other CI activity. In fact, there are Dgraph users who add types and other schema elements while processing their real-time data streams.

  • Dgraph clients have an “alter” provision to mutate the schema. Here are the details on alter from the Java client; a short sketch follows this list.

  • In addition to this, you can pass the schema as part of a bulk load. This is useful for the initial data load when moving between environments.
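To illustrate the alter call, here is a minimal sketch, assuming a recent dgraph4j and an Alpha reachable on localhost:9080; the address and the example predicates are placeholders, so adjust them to your setup. Re-applying an unchanged schema is harmless, which is what makes this suitable as a CI step right after a deploy.

```java
import io.dgraph.DgraphClient;
import io.dgraph.DgraphGrpc;
import io.dgraph.DgraphProto.Operation;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class ApplySchema {
    public static void main(String[] args) {
        // Connect to an Alpha over gRPC (the address is an assumption; use yours).
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 9080)
                .usePlaintext()
                .build();
        DgraphClient client = new DgraphClient(DgraphGrpc.newStub(channel));

        // Apply the schema via alter. The predicates below are placeholders.
        Operation op = Operation.newBuilder()
                .setSchema("email: string @index(exact) .\n"
                         + "age: int .")
                .build();
        client.alter(op);

        channel.shutdown();
    }
}
```

The other official clients (Go, Python, JavaScript) expose the same alter operation, so the same step works outside the JVM.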

Thanks for the clarification, I’ll take a look at both. 🙂
