We are pleased to announce that a new release of Dgraph, v22.0.2, is now available. We would like to thank the Project Steering Committee members for their input during the release and the Dgraph team for their efforts to make this release happen.
This release includes ARM64 support for development use, as requested by the community. It ships as a multi-arch Docker image that supports both arm64 and amd64 architectures.
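As an illustration (assuming the published Docker tag matches the release version), pulling the image lets Docker resolve the multi-arch manifest to the host architecture, or you can request one explicitly:

```sh
# Docker picks the matching architecture from the multi-arch manifest
docker pull dgraph/dgraph:v22.0.2

# Or request a specific architecture explicitly
docker pull --platform linux/arm64 dgraph/dgraph:v22.0.2
docker pull --platform linux/amd64 dgraph/dgraph:v22.0.2
```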
This release includes two critical bug fixes: one in ACL handling with multi-tenancy, and the other for a data corruption issue in Badger. We have also addressed 37 security issues (CVEs and GHSAs) and incorporated a number of fixes from v21.12.0.
This release includes the following major quality improvements:
Additional unit tests that cover various areas of the product
Significant enhancements to CI/CD and infrastructure to support ARM64 for the dgraph, lambda, and badger repositories.
You can upgrade from any version to any version. The trick is to export your data to RDF and then bulk load or live load it into the new cluster. The only caveat is GraphQL: some GraphQL features may have been removed for now, but they should return around mid-2023.
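A rough sketch of that path (hostnames, ports, and the exported file names are placeholders; the export files land under the alpha's export directory):

```sh
# 1. Trigger an RDF export on the old cluster via the /admin GraphQL endpoint
curl 'localhost:8080/admin' -H 'Content-Type: application/json' -X POST \
  -d '{"query": "mutation { export(input: {format: \"rdf\"}) { response { message code } } }"}'

# 2. Load the exported data and schema into the new cluster with the live loader
dgraph live -f export/g01.rdf.gz -s export/g01.schema.gz \
  -a localhost:9080 -z localhost:5080
```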
Usually yes, but v21.03 is compatible with v22.0 because the underlying Badger database format did not change. To upgrade from v21.12 to v22.0 you need to export and re-import, because some changes made in v21.12 were rolled back and are therefore not included in v22.0.
I had to update the schema in the v22 container of course:
curl -X POST localhost:8080/admin/schema --data-binary '@src/lib/graphql/schema.graphql'
However, even though the same volume has been mounted (-v ~/dgraph:/dgraph), the data doesn’t seem to be present in the database.
What am I doing wrong? Do I need to manually backup the v21.03 data and import it into the v22 database? I can’t just point it at the same Docker volume?
My understanding is that the data should be compatible? If not, it would be ideal if there were an explanatory error in this scenario.
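For reference, the containers were started roughly like this (from memory, so the exact tags and ports may differ):

```sh
# Old cluster (v21.03) writing its data to ~/dgraph
docker run -d -p 8080:8080 -p 9080:9080 -v ~/dgraph:/dgraph dgraph/standalone:v21.03.2

# Later, the same host directory mounted into the newer image
docker run -d -p 8080:8080 -p 9080:9080 -v ~/dgraph:/dgraph dgraph/standalone:v22.0.2
```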
You should never use the standalone image in production.
You should never upgrade by changing the Docker tag without first taking a backup (export).
The standalone image doesn't work with the bulk loader unless you build your own image with a custom script to run it.
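If you do need the bulk loader, one option is to run it from the regular dgraph image against a fresh output directory (a sketch only; the paths are placeholders and a Dgraph Zero must already be reachable at the given address, e.g. on the same Docker network):

```sh
# Run the bulk loader from the regular dgraph image instead of the standalone one
docker run --rm -v ~/dgraph:/dgraph dgraph/dgraph:v22.0.2 \
  dgraph bulk -f /dgraph/export/g01.rdf.gz -s /dgraph/export/g01.schema.gz \
  --zero zero:5080 --out /dgraph/bulk
# The resulting p directory (e.g. /dgraph/bulk/0/p) is then used as the alpha's data directory.
```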
There are several things to check, one by one, to be sure an upgrade done by changing the tag has succeeded. To avoid that complexity, the simplest path is to export and re-import, or to use Binary Backups (an Enterprise feature).
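For the Binary Backup route, the backup is triggered through the same /admin GraphQL endpoint (the destination path is a placeholder; it can also be an S3 or MinIO URI):

```sh
# Trigger a binary backup (Enterprise feature) via the /admin endpoint
curl 'localhost:8080/admin' -H 'Content-Type: application/json' -X POST \
  -d '{"query": "mutation { backup(input: {destination: \"/dgraph/backups\"}) { response { message code } } }"}'
```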
There are several ways this can go wrong, so to avoid all of them, just follow the export and re-import procedure whenever in doubt. You can try the (unsafe) upgrade by tag alone (i.e., swapping the Dgraph binary), but always take an RDF/JSON export of your dataset first.
Even though it is technically the same version cut, with the change of version number Dgraph may not recognize the manifest. Or, since you are using the standalone image (the most likely explanation), it may have started a new cluster even before picking up the data from the bind mount. Something may have gone wrong at some point; I can't tell for sure. Or the upgrade wasn't compatible due to the versioning.