Dgraph Release v22.0.2 is now Generally Available

Hi Dgraph Community,

We are pleased to announce that a new release of Dgraph, v22.0.2, is now available. We would like to thank the Project Steering Committee members for their input during the release and the Dgraph team for their efforts in making this release happen.

The release binaries and release notes are now available on GitHub. The Docker images for dgraph/dgraph and dgraph/standalone are available on Docker Hub.

This release includes ARM64 support for development use, as requested by the community. It comes with a multi-arch Docker image supporting both arm64 and amd64 architectures.

This release has two critical bug fixes: one for an ACL issue associated with multi-tenancy, and the other for data corruption in Badger. We have also addressed 37 security issues (CVEs and GHSAs) and incorporated a number of fixes from version v21.12.0.

This release includes the following major quality improvements:

  • Additional unit tests that cover various areas of the product
  • Significant enhancements to CI/CD and infrastructure to support ARM64 for the dgraph, lambda, and badger repositories
  • Integration of the LDBC Benchmark to run on CI
  • Enabled Coveralls on CI to measure code coverage, now including the integration tests in addition to the unit tests
  • Increased test coverage measured by Coveralls from 37% to 64%

Please file GitHub issues with the label v22.0.2 if you find any problems or bugs in this release.

Thank you and Happy Holidays!

Dgraph Team


This says we can’t upgrade from v21.12.0 to v22.x.x, but can we upgrade from older versions such as v20.11.x?

You can upgrade to any version from any version. The trick is to export your data to RDF and then bulk/live load it into the new cluster. The one caveat is GraphQL: some GraphQL features have been removed, but they are expected to return in mid-2023.
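A minimal sketch of that export-then-load path. The hostnames, ports, and export file paths below are assumptions for a default single-group cluster — adjust them to your deployment:

```shell
# Sketch: upgrade by exporting from the old cluster and live-loading into the new one.
# Endpoints and paths are assumptions -- adjust to your setup.

OLD_ALPHA="localhost:8080"   # HTTP endpoint of the *old* alpha
NEW_ALPHA="localhost:9080"   # gRPC endpoint of the *new* alpha
NEW_ZERO="localhost:5080"    # gRPC endpoint of the *new* zero

# 1. Trigger an RDF export on the old cluster via the /admin GraphQL endpoint.
#    Files land in the alpha's export directory (./export by default).
export_data() {
  curl -s "$OLD_ALPHA/admin" -H 'Content-Type: application/json' --data \
    '{"query":"mutation { export(input: {format: \"rdf\"}) { response { message } } }"}'
}

# 2. After bringing up a fresh cluster on the new version, replay the export:
load_data() {
  dgraph live \
    --files  export/*/g01.rdf.gz \
    --schema export/*/g01.schema.gz \
    --alpha  "$NEW_ALPHA" --zero "$NEW_ZERO"
}
```

Run export_data against the old cluster, shut it down, start the new-version cluster on an empty data directory, then run load_data.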

But in all cases across a major version boundary, we MUST export and reload, correct?

Usually yes, but v21.03 is compatible with v22.0 because the underlying Badger database format did not change. For upgrading from v21.12 to v22.0 you need to export and re-import, because some changes made in v21.12 were rolled back and consequently are not included in v22.0.

@MichelDiz, @sudhish I’m using the standalone v22.0.2 image:

original command: docker run --rm -it -p 8000:8000 -p 8080:8080 -p 9080:9080 -v ~/dgraph:/dgraph --name dgraph dgraph/standalone:v21.03.2

v22 container: docker run --rm -it -p 8000:8000 -p 8080:8080 -p 9080:9080 -v ~/dgraph:/dgraph --name dgraph-22.0.2 dgraph/standalone:v22.0.2

I had to update the schema in the v22 container of course:

curl -X POST localhost:8080/admin/schema --data-binary '@src/lib/graphql/schema.graphql'

However, even though the same volume has been mounted (-v ~/dgraph:/dgraph), the data doesn’t seem to be present in the database.

What am I doing wrong? Do I need to manually backup the v21.03 data and import it into the v22 database? I can’t just point it at the same Docker volume?

My understanding is that the data should be compatible? If not, it would be ideal if there were an explanatory error in this scenario.

Here’s how I resolved my issue: live loader instructions in v22 docs · Issue #548 · dgraph-io/dgraph-docs · GitHub

You should never use standalone in prod.
You should never upgrade by changing the Docker tag without first taking a backup (export).
The standalone image doesn’t work with the bulk loader, unless you create your own image with a custom script to do so.

There are several steps to check one by one to be sure an upgrade made by changing the tag succeeded. To avoid complex scenarios, the simplest way is to export and re-import, or use Binary Backups (an EE feature).

There are several ways this can go wrong, so whenever in doubt, just follow the export and re-import procedure. You can try (unsafely) to upgrade just by changing the tag (swapping the Dgraph binary), but always export an RDF/JSON dataset first.

Even though it is technically the same version cut, with the change of version number Dgraph may end up not recognizing the manifest. Or, most likely of all, because you are using a standalone image, it had already started a cluster even before the data from the bind mount was attached. Something may have happened at some point; I can’t tell for sure. Or the upgrade wasn’t compatible due to the versioning.
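Regarding the bulk-loader caveat above: one workaround (a sketch under assumptions, not an official recipe) is to run the loader from the plain dgraph/dgraph image against the same volume, rather than building a custom standalone image. The paths and zero address here are assumptions:

```shell
# Sketch: running the bulk loader from the plain dgraph/dgraph image,
# since dgraph/standalone cannot run it directly. Paths are assumptions.

VOLUME="$HOME/dgraph"          # same bind mount the standalone container uses
IMAGE="dgraph/dgraph:v22.0.2"

bulk_load() {
  # A zero must already be running and reachable from this container.
  docker run --rm -v "$VOLUME:/dgraph" "$IMAGE" \
    dgraph bulk \
      --files  /dgraph/export/g01.rdf.gz \
      --schema /dgraph/export/g01.schema.gz \
      --zero   host.docker.internal:5080
  # The loader writes its output under out/ (e.g. out/0/p); copy that p
  # directory into the alpha's data directory before starting the new alpha.
}
```

Note the bulk loader is for bootstrapping a brand-new cluster only; for an already-running cluster, use the live loader instead.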


This is my local development environment.

The export/import procedure was simple enough.