I have a problem: how can I check the specific error information? All I have is the error logs. I run the cluster through Docker Compose.
The cluster is deployed on Ubuntu, and I query the data through pydgraph. I am using v20.03.3
status = StatusCode.UNKNOWN
details = ": cannot retrieve UIDs from list with key 000015746f7069635f647574795f6372696d655f63617365000000000000000178: readTs: 10239 less than minTs: 441470 for key: \"\x00\x00\x15topic_duty_crime_case\x00\x00\x00\x00\x00\x00\x00\x01x\""
debug_error_string = "{"created":"@1611322692.194000000","description":"Error received from peer ipv4:192.168.0.14:9082","file":"src/core/lib/surface/call.cc","file_line":1055,"grpc_message":": cannot retrieve UIDs from list with key 000015746f7069635f647574795f6372696d655f63617365000000000000000178: readTs: 10239 less than minTs: 441470 for key: \"\x00\x00\x15topic_duty_crime_case\x00\x00\x00\x00\x00\x00\x00\x01x\"","grpc_status":2}"
I'm not sure, but this looks like bad usage of the cluster. I have seen it happen when a user (myself included) reuses the same Zero from previous tries. For example, instead of starting from scratch, we keep using the same Zero over and over, sometimes without noticing. Timestamps and UID leases accumulate and get out of sync. But it is just a theory. Corruption of the data might also lead to this. @pawan would know better.
Since you are using Docker, this can happen quite often, because not all users are aware of how the Docker volume system works. Things just accumulate.
My recommendation is to export your data, prune the whole Docker setup (be careful not to lose anything important), delete all related paths on the host, and start again from scratch.
Well, you can't control which logs you see; you can only control the log level.
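If what you want is just the full error the server returns for a query, that information is already carried in the gRPC exception that pydgraph raises, so you can print it from your own code instead of digging through the container logs. A minimal sketch, assuming an Alpha reachable at localhost:9080 and a hypothetical query (adjust both to your setup):

```python
import json

import grpc
import pydgraph

# Assumed address of one of your Alpha nodes; change to match your compose file.
client_stub = pydgraph.DgraphClientStub("localhost:9080")
client = pydgraph.DgraphClient(client_stub)

query = "{ q(func: has(name)) { uid name } }"  # hypothetical query

txn = client.txn(read_only=True)
try:
    res = txn.query(query)
    print(json.loads(res.json))
except grpc.RpcError as err:
    # The StatusCode and the "cannot retrieve UIDs ..." message from the
    # post above come from these two fields of the gRPC error.
    print("status :", err.code())
    print("details:", err.details())
finally:
    txn.discard()
    client_stub.close()
```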
Hi @MichelDiz, I followed the method for running a cluster with Docker from the official documentation. I'm sorry, I have only been using Dgraph and Docker for a short time, and I just want to verify the relevant features so we can put it to official use in the future.
Can you tell me how to export data from Dgraph when it runs in Docker? Unfortunately, I didn't manage to export successfully by following the document https://dgraph.io/docs/deploy/fast-data-loading/bulk-loader/. Can you provide some examples of exporting data with Docker? JSON format would be best.
By Docker? Do you mean moving the exported files out of Docker? There is a cp command in Docker. But if you are binding the volume to a local path, the export will simply appear in that path on your local machine (the host, I mean).
That part of the docs doesn't show how to export, only how to import via the bulk loader.
To export in JSON you just add a parameter to the command (via GraphQL). But there is no real difference if you leave it as RDF.
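For reference, a minimal sketch of triggering a JSON export from Python through the /admin GraphQL endpoint. It assumes the Alpha's HTTP port (8080) is published on localhost by your compose file and that your version exposes the export mutation on /admin; adjust to your deployment:

```python
import requests

# Assumed address: the Alpha HTTP port published by docker compose.
ADMIN_URL = "http://localhost:8080/admin"

# Admin mutation asking for a JSON export instead of the default RDF.
export_mutation = """
mutation {
  export(input: { format: "json" }) {
    response {
      code
      message
    }
  }
}
"""

resp = requests.post(ADMIN_URL, json={"query": export_mutation})
resp.raise_for_status()
print(resp.json())

# The files land in the "export" folder inside the Alpha container,
# or in the host path bound to /dgraph if a volume is mounted.
```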
No, Docker is just the medium. It is container software and has nothing to do with Dgraph itself; Dgraph uses it to make deployment easy instead of running on bare metal.
If you have bound the path with a volume, which means you have used -v ~/dgraph:/dgraph, that Docker parameter binds the host path ~/dgraph (the home of your user on a Unix-like OS) to /dgraph inside the container. If you have done it like this, you will see a folder called "export" in that host folder.
If you haven't bound the volume, the only way is Docker's copy command, docker cp (see docker cp in the Docker Docs). It is similar to Linux's cp command, or to scp over ssh.
Note that Docker is "similar" to a virtual machine: it is a container that runs an operating system with its minimum requirements. So it is like a "remote machine" that you have to access.
I'm not sure how it is done in Python, but it looks like it works that way. Make sure you see the logs saying that an export happened, and check whether there is a folder called "export" inside the container.
I think what's happening is that the user has somehow deleted the zw directory that Zero uses and has restarted Zero. Even though their Alpha instance has data corresponding to a higher timestamp (441470), the read timestamp for the query is only 10239. This indicates that either the user is using a very old transaction timestamp to read the data, or they have rebooted their Zero after deleting its data, because of which new queries are being assigned timestamps smaller than those already present on the Alpha.
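One way to check this from the host is to compare Zero's current maximum transaction timestamp with the readTs in the error. A small sketch, assuming Zero's HTTP port (6080) is published on localhost and that its /state endpoint is reachable:

```python
import requests

# Assumed address: the Zero HTTP port published by docker compose.
state = requests.get("http://localhost:6080/state").json()

# maxTxnTs is the highest transaction timestamp Zero has handed out so far.
# If it is far below the minTs in the error (441470 here), Zero has most
# likely been restarted with a fresh (or deleted) zw directory.
print("maxTxnTs:", state.get("maxTxnTs"))
```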