Cannot import with bulk loader

What I Want to Do

Restore my backup data into a new Dgraph instance (v20.11.0) using the bulk loader

What I Did

  • I started dgraph zero and then alpha in Docker containers
  • I entered the dgraph zero container and executed the command:

dgraph bulk -f /data/g01.rdf.gz -s /data/g01.gql_schema

The output was:

[Decoder]: Using assembly version of decoder
Page Size: 4096
I0118 13:11:57.979265 71 init.go:107]

Dgraph version : v20.11.0
Dgraph codename : tchalla
Dgraph SHA-256 : 8acb886b24556691d7d74929817a4ac7d9db76bb8b77de00f44650931a16b6ac
Commit SHA-1 : c4245ad55
Commit timestamp : 2020-12-16 15:55:40 +0530
Branch : HEAD
Go version : go1.15.5
jemalloc enabled : true

For Dgraph official documentation, visit https://dgraph.io/docs/.
For discussions about Dgraph , visit http://discuss.dgraph.io.

Licensed variously under the Apache Public License 2.0 and Dgraph Community License.
Copyright 2015-2020 Dgraph Labs, Inc.


I0118 13:11:57.979769      71 util_ee.go:126] KeyReader instantiated of type <nil>
Encrypted input: false; Encrypted output: false
{
	"DataFiles": "/data/g01.rdf.gz",
	"DataFormat": "",
	"SchemaFile": "/data/g01.gql_schema",
	"GqlSchemaFile": "",
	"OutDir": "./out",
	"ReplaceOutDir": false,
	"TmpDir": "tmp",
	"NumGoroutines": 1,
	"MapBufSize": 2147483648,
	"PartitionBufSize": 4194304,
	"SkipMapPhase": false,
	"CleanupTmp": true,
	"NumReducers": 1,
	"Version": false,
	"StoreXids": false,
	"ZeroAddr": "localhost:5080",
	"HttpAddr": "localhost:8080",
	"IgnoreErrors": false,
	"CustomTokenizers": "",
	"NewUids": false,
	"ClientDir": "",
	"Encrypted": false,
	"EncryptedOut": false,
	"MapShards": 1,
	"ReduceShards": 1,
	"EncryptionKey": null,
	"BadgerCompression": 1,
	"BadgerCompressionLevel": 0,
	"BlockCacheSize": 46976204,
	"IndexCacheSize": 20132659
}
Connecting to zero at localhost:5080
2021/01/18 13:11:57 while lexing 
...
..
.
(MY SCHEMA FILE PRINTED OUT)
.
..
...
# Dgraph.Authorization ....
 at line 18 column 20: Invalid schema. Unexpected "
github.com/dgraph-io/dgraph/lex.(*Lexer).ValidateResult
	/ext-go/1/src/github.com/dgraph-io/dgraph/lex/lexer.go:199
github.com/dgraph-io/dgraph/schema.Parse
	/ext-go/1/src/github.com/dgraph-io/dgraph/schema/parse.go:443
github.com/dgraph-io/dgraph/dgraph/cmd/bulk.readSchema
	/ext-go/1/src/github.com/dgraph-io/dgraph/dgraph/cmd/bulk/loader.go:185
github.com/dgraph-io/dgraph/dgraph/cmd/bulk.newLoader
	/ext-go/1/src/github.com/dgraph-io/dgraph/dgraph/cmd/bulk/loader.go:139
github.com/dgraph-io/dgraph/dgraph/cmd/bulk.run
	/ext-go/1/src/github.com/dgraph-io/dgraph/dgraph/cmd/bulk/run.go:285
github.com/dgraph-io/dgraph/dgraph/cmd/bulk.init.0.func1
	/ext-go/1/src/github.com/dgraph-io/dgraph/dgraph/cmd/bulk/run.go:49
github.com/spf13/cobra.(*Command).execute
	/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:830
github.com/spf13/cobra.(*Command).ExecuteC
	/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:914
github.com/spf13/cobra.(*Command).Execute
	/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:864
github.com/dgraph-io/dgraph/dgraph/cmd.Execute
	/ext-go/1/src/github.com/dgraph-io/dgraph/dgraph/cmd/root.go:71
main.main
	/ext-go/1/src/github.com/dgraph-io/dgraph/dgraph/main.go:102
runtime.main
	/usr/local/go/src/runtime/proc.go:204
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1374

github.com/dgraph-io/dgraph/x.Check
	/ext-go/1/src/github.com/dgraph-io/dgraph/x/error.go:42
github.com/dgraph-io/dgraph/dgraph/cmd/bulk.readSchema
	/ext-go/1/src/github.com/dgraph-io/dgraph/dgraph/cmd/bulk/loader.go:186
github.com/dgraph-io/dgraph/dgraph/cmd/bulk.newLoader
	/ext-go/1/src/github.com/dgraph-io/dgraph/dgraph/cmd/bulk/loader.go:139
github.com/dgraph-io/dgraph/dgraph/cmd/bulk.run
	/ext-go/1/src/github.com/dgraph-io/dgraph/dgraph/cmd/bulk/run.go:285
github.com/dgraph-io/dgraph/dgraph/cmd/bulk.init.0.func1
	/ext-go/1/src/github.com/dgraph-io/dgraph/dgraph/cmd/bulk/run.go:49
github.com/spf13/cobra.(*Command).execute
	/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:830
github.com/spf13/cobra.(*Command).ExecuteC
	/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:914
github.com/spf13/cobra.(*Command).Execute
	/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:864
github.com/dgraph-io/dgraph/dgraph/cmd.Execute
	/ext-go/1/src/github.com/dgraph-io/dgraph/dgraph/cmd/root.go:71
main.main
	/ext-go/1/src/github.com/dgraph-io/dgraph/dgraph/main.go:102
runtime.main
	/usr/local/go/src/runtime/proc.go:204
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1374

The message: at line 18 column 20: Invalid schema. Unexpected "
refers to this part of my schema:

type UserAuth @auth(
    query: { rule:  "{$name: { eq: \"Prllel\" } }" },

It seems like it is not accepting the quotes, but when I submit my schema via curl it works without problems. The backup files (data and schema) were generated with a previous Dgraph instance (v20.07.1), but I don’t really think the version difference is the problem.

Am I doing something wrong in how I am importing my data?

Thanks in advance

In the export folder, you will find the Dgraph schema file g01.schema.gz. Please use this as the schema file. You can ignore the GraphQL schema for bulk loading purposes.

Instead of:
dgraph bulk -f /data/g01.rdf.gz -s /data/g01.gql_schema

please try:
dgraph bulk -f /data/g01.rdf.gz -s /data/g01.schema


In addition to Anand’s suggestion, the right way to import a GraphQL schema is by using:

dgraph bulk -h | grep graphql
  -g, --graphql_schema string   Location of the GraphQL schema file.

If you have started your schema via GraphQL, you can stick to it. You can give an empty Dgraph schema to the bulk loader and let it create the schema based on the GraphQL schema only.
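
A minimal sketch of that combination (the empty schema file name here is arbitrary, and the /data paths assume the same mount used in your command above):

touch /data/empty.schema
dgraph bulk -f /data/g01.rdf.gz -s /data/empty.schema -g /data/g01.gql_schema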


Following @anand’s suggestion, I tried /data/g01.schema, but it threw a file-not-found error; it worked when I added the .gz extension. The logs then returned this:

[Decoder]: 
Using assembly version of decoder
Page Size: 4096
I0119 01:52:06.629887      44 init.go:107] 

Dgraph version   : v20.11.0
Dgraph codename  : tchalla
Dgraph SHA-256   : 8acb886b24556691d7d74929817a4ac7d9db76bb8b77de00f44650931a16b6ac
Commit SHA-1     : c4245ad55
Commit timestamp : 2020-12-16 15:55:40 +0530
Branch           : HEAD
Go version       : go1.15.5
jemalloc enabled : true

For Dgraph official documentation, visit https://dgraph.io/docs/.
For discussions about Dgraph     , visit http://discuss.dgraph.io.

Licensed variously under the Apache Public License 2.0 and Dgraph Community License.
Copyright 2015-2020 Dgraph Labs, Inc.


I0119 01:52:06.630362      44 util_ee.go:126] KeyReader instantiated of type <nil>
Encrypted input: false; Encrypted output: false
{
	"DataFiles": "/data/g01.rdf.gz",
	"DataFormat": "",
	"SchemaFile": "/data/g01.schema.gz",
	"GqlSchemaFile": "",
	"OutDir": "./out",
	"ReplaceOutDir": false,
	"TmpDir": "tmp",
	"NumGoroutines": 1,
	"MapBufSize": 2147483648,
	"PartitionBufSize": 4194304,
	"SkipMapPhase": false,
	"CleanupTmp": true,
	"NumReducers": 1,
	"Version": false,
	"StoreXids": false,
	"ZeroAddr": "localhost:5080",
	"HttpAddr": "localhost:8080",
	"IgnoreErrors": false,
	"CustomTokenizers": "",
	"NewUids": false,
	"ClientDir": "",
	"Encrypted": false,
	"EncryptedOut": false,
	"MapShards": 1,
	"ReduceShards": 1,
	"EncryptionKey": null,
	"BadgerCompression": 1,
	"BadgerCompressionLevel": 0,
	"BlockCacheSize": 46976204,
	"IndexCacheSize": 20132659
}
Connecting to zero at localhost:5080
Predicate "dgraph.type" already exists in schema
Predicate "dgraph.graphql.xid" already exists in schema
Predicate "dgraph.graphql.schema" already exists in schema
___ Begin jemalloc statistics ___
Version: "5.2.1-0-gea6b3e973b477b8061e0076bb257dbd7f3faa756"
Build-time option settings
  config.cache_oblivious: true
  config.debug: false
  config.fill: true
  config.lazy_lock: false
  config.malloc_conf: "background_thread:true,metadata_thp:auto"
  config.opt_safety_checks: false
  config.prof: true
  config.prof_libgcc: true
  config.prof_libunwind: false
  config.stats: true
  config.utrace: false
  config.xmalloc: false
Run-time option settings
  opt.abort: false
  opt.abort_conf: false
  opt.confirm_conf: false
  opt.retain: true
  opt.dss: "secondary"
  opt.narenas: 16
  opt.percpu_arena: "disabled"
  opt.oversize_threshold: 8388608
  opt.metadata_thp: "auto"
  opt.background_thread: true (background_thread: true)
  opt.dirty_decay_ms: 10000 (arenas.dirty_decay_ms: 10000)
  opt.muzzy_decay_ms: 0 (arenas.muzzy_decay_ms: 0)
  opt.lg_extent_max_active_fit: 6
  opt.junk: "false"
  opt.zero: false
  opt.tcache: true
  opt.lg_tcache_max: 15
  opt.thp: "default"
  opt.prof: false
  opt.prof_prefix: "jeprof"
  opt.prof_active: true (prof.active: false)
  opt.prof_thread_active_init: true (prof.thread_active_init: false)
  opt.lg_prof_sample: 19 (prof.lg_sample: 0)
  opt.prof_accum: false
  opt.lg_prof_interval: -1
  opt.prof_gdump: false
  opt.prof_final: false
  opt.prof_leak: false
  opt.stats_print: false
  opt.stats_print_opts: ""
Profiling settings
  prof.thread_active_init: false
  prof.active: false
  prof.gdump: false
  prof.interval: 0
  prof.lg_sample: 0
Arenas: 17
Quantum size: 16
Page size: 4096
Maximum thread-cached size class: 32768
Number of bin size classes: 36
Number of thread-cache bin size classes: 41
Number of large size classes: 196
Allocated: 58312, active: 90112, metadata: 2927672 (n_thp 0), resident: 2969600, mapped: 8478720, retained: 2007040
Background threads: 2, num_runs: 3, run_interval: 124264000 ns
--- End jemalloc statistics ---
Processing file (1 out of 1): /data/g01.rdf.gz
Shard tmp/map_output/000 -> Reduce tmp/shards/shard_0/000
badger 2021/01/19 01:52:06 INFO: All 0 tables opened in 0s
badger 2021/01/19 01:52:06 INFO: Discard stats nextEmptySlot: 0
badger 2021/01/19 01:52:06 INFO: Set nextTxnTs to 0
badger 2021/01/19 01:52:06 INFO: All 0 tables opened in 0s
badger 2021/01/19 01:52:06 INFO: Discard stats nextEmptySlot: 0
badger 2021/01/19 01:52:06 INFO: Set nextTxnTs to 0
badger 2021/01/19 01:52:06 INFO: DropAll called. Blocking writes...
badger 2021/01/19 01:52:06 INFO: Writes flushed. Stopping compactions now...
badger 2021/01/19 01:52:06 INFO: Deleted 0 SSTables. Now deleting value logs...
badger 2021/01/19 01:52:06 INFO: Value logs deleted. Creating value log file: 1
badger 2021/01/19 01:52:06 INFO: Deleted 1 value log files. DropAll done.
Num Encoders: 1
Final Histogram of buffer sizes: 
 -- Histogram: 
Min value: 11580 
Max value: 11580 
Mean: 11580.00 
Count: 1 
[0 B, 64 KiB) 1 100.00% 
 --

Finishing stream id: 1
Finishing stream id: 2
Finishing stream id: 3
Finishing stream id: 4
Finishing stream id: 5
Finishing stream id: 6
Finishing stream id: 7
Finishing stream id: 8
Finishing stream id: 9
Finishing stream id: 10
Finishing stream id: 11
Finishing stream id: 12
Finishing stream id: 13
Finishing stream id: 14
Finishing stream id: 15
Finishing stream id: 16
Finishing stream id: 17
badger 2021/01/19 01:52:06 INFO: Table created: 1 at level: 6 for stream: 1. Size: 252 B
Finishing stream id: 18
badger 2021/01/19 01:52:06 INFO: Table created: 3 at level: 6 for stream: 3. Size: 354 B
Finishing stream id: 19
badger 2021/01/19 01:52:06 INFO: Table created: 12 at level: 6 for stream: 12. Size: 344 B
Finishing stream id: 20
badger 2021/01/19 01:52:06 INFO: Table created: 2 at level: 6 for stream: 2. Size: 287 B
Finishing stream id: 21
badger 2021/01/19 01:52:06 INFO: Table created: 6 at level: 6 for stream: 6. Size: 358 B
Finishing stream id: 22
badger 2021/01/19 01:52:06 INFO: Table created: 15 at level: 6 for stream: 15. Size: 357 B
Finishing stream id: 23
badger 2021/01/19 01:52:06 INFO: Table created: 7 at level: 6 for stream: 7. Size: 224 B
Finishing stream id: 24
badger 2021/01/19 01:52:06 INFO: Table created: 11 at level: 6 for stream: 11. Size: 500 B
Finishing stream id: 25
badger 2021/01/19 01:52:06 INFO: Table created: 10 at level: 6 for stream: 10. Size: 356 B
Finishing stream id: 26
badger 2021/01/19 01:52:06 INFO: Table created: 13 at level: 6 for stream: 13. Size: 344 B
badger 2021/01/19 01:52:06 INFO: Table created: 5 at level: 6 for stream: 5. Size: 224 B
badger 2021/01/19 01:52:06 INFO: Table created: 4 at level: 6 for stream: 4. Size: 497 B
Writing split lists back to the main DB now
badger 2021/01/19 01:52:06 INFO: Table created: 14 at level: 6 for stream: 14. Size: 325 B
badger 2021/01/19 01:52:06 INFO: copying split keys to main DB Sent data of size 0 B
badger 2021/01/19 01:52:06 INFO: Table created: 16 at level: 6 for stream: 16. Size: 325 B
badger 2021/01/19 01:52:06 INFO: Table created: 8 at level: 6 for stream: 8. Size: 343 B
badger 2021/01/19 01:52:06 INFO: Table created: 17 at level: 6 for stream: 17. Size: 272 B
badger 2021/01/19 01:52:06 INFO: Table created: 21 at level: 6 for stream: 21. Size: 276 B
badger 2021/01/19 01:52:06 INFO: Table created: 18 at level: 6 for stream: 18. Size: 269 B
badger 2021/01/19 01:52:06 INFO: Table created: 23 at level: 6 for stream: 23. Size: 357 B
badger 2021/01/19 01:52:06 INFO: Table created: 20 at level: 6 for stream: 20. Size: 299 B
badger 2021/01/19 01:52:06 INFO: Table created: 19 at level: 6 for stream: 19. Size: 291 B
badger 2021/01/19 01:52:06 INFO: Table created: 22 at level: 6 for stream: 22. Size: 301 B
badger 2021/01/19 01:52:06 INFO: Table created: 9 at level: 6 for stream: 9. Size: 646 B
badger 2021/01/19 01:52:06 INFO: Table created: 24 at level: 6 for stream: 24. Size: 357 B
badger 2021/01/19 01:52:06 INFO: Table created: 27 at level: 6 for stream: 27. Size: 369 B
badger 2021/01/19 01:52:06 INFO: Table created: 26 at level: 6 for stream: 26. Size: 367 B
badger 2021/01/19 01:52:06 INFO: Table created: 25 at level: 6 for stream: 25. Size: 367 B
badger 2021/01/19 01:52:06 INFO: Resuming writes
badger 2021/01/19 01:52:06 INFO: Lifetime L0 stalled for: 0s
badger 2021/01/19 01:52:06 INFO: 
Level 0 [ ]: NumTables: 01. Size: 2.5 KiB of 0 B. Score: 0.00->0.00 Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 5 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 6 [B]: NumTables: 27. Size: 9.0 KiB of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level Done
badger 2021/01/19 01:52:07 INFO: Lifetime L0 stalled for: 0s
badger 2021/01/19 01:52:07 INFO: 
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 5 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 6 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level Done
[01:52:07Z] REDUCE 00s 100.00% edge_count:191.0 edge_speed:191.0/sec plist_count:134.0 plist_speed:134.0/sec. Num Encoding MBs: 0. jemalloc: 0 B 
Total: 00s

Note:
When I exported the data from a previous Dgraph instance (v20.07.1), it generated these files:

  • g01.gql_schema.gz
  • g01.rdf.gz
  • g01.schema.gz

When I query for the data with GraphQL, it returns an error saying there is not even a GraphQL schema in Dgraph. In this case I was importing into a new instance of Dgraph v20.11.0.

When I try with

root@d6ebe5a596a1:/dgraph# dgraph bulk -f /data/g01.rdf.gz -g /data/g01.gql_schema

(for that, I extracted g01.gql_schema from its .gz)

It throws me:
Schema file must be specified.

Below is the log:


[Decoder]: Using assembly version of decoder
Page Size: 4096
I0119 02:04:47.516959      83 init.go:107] 

Dgraph version   : v20.11.0
Dgraph codename  : tchalla
Dgraph SHA-256   : 8acb886b24556691d7d74929817a4ac7d9db76bb8b77de00f44650931a16b6ac
Commit SHA-1     : c4245ad55
Commit timestamp : 2020-12-16 15:55:40 +0530
Branch           : HEAD
Go version       : go1.15.5
jemalloc enabled : true

For Dgraph official documentation, visit https://dgraph.io/docs/.
For discussions about Dgraph     , visit http://discuss.dgraph.io.

Licensed variously under the Apache Public License 2.0 and Dgraph Community License.
Copyright 2015-2020 Dgraph Labs, Inc.


I0119 02:04:47.517507      83 util_ee.go:126] KeyReader instantiated of type <nil>
Encrypted input: false; Encrypted output: false
Schema file must be specified.

But it seems to me that the bulk command requires the -s parameter to be given.

As I said, “You can give an empty Dgraph Schema to the bulk”. Add an empty .txt file and run it as is, e.g. -s myschema.txt

Oh, I missed that part, sorry. I tried again, this time giving an empty .txt file to -s:

dgraph bulk -f /data/g01.rdf.gz -s /data/schema.txt -g /data/g01.gql_schema

and it produced the following output:


[Decoder]: Using assembly version of decoder
Page Size: 4096
I0119 03:25:04.552334      45 init.go:107] 

Dgraph version   : v20.11.0
Dgraph codename  : tchalla
Dgraph SHA-256   : 8acb886b24556691d7d74929817a4ac7d9db76bb8b77de00f44650931a16b6ac
Commit SHA-1     : c4245ad55
Commit timestamp : 2020-12-16 15:55:40 +0530
Branch           : HEAD
Go version       : go1.15.5
jemalloc enabled : true

For Dgraph official documentation, visit https://dgraph.io/docs/.
For discussions about Dgraph     , visit http://discuss.dgraph.io.

Licensed variously under the Apache Public License 2.0 and Dgraph Community License.
Copyright 2015-2020 Dgraph Labs, Inc.


I0119 03:25:04.553241      45 util_ee.go:126] KeyReader instantiated of type <nil>
Encrypted input: false; Encrypted output: false
{
	"DataFiles": "/data/g01.rdf.gz",
	"DataFormat": "",
	"SchemaFile": "/data/schema.txt",
	"GqlSchemaFile": "/data/g01.gql_schema",
	"OutDir": "./out",
	"ReplaceOutDir": false,
	"TmpDir": "tmp",
	"NumGoroutines": 1,
	"MapBufSize": 2147483648,
	"PartitionBufSize": 4194304,
	"SkipMapPhase": false,
	"CleanupTmp": true,
	"NumReducers": 1,
	"Version": false,
	"StoreXids": false,
	"ZeroAddr": "localhost:5080",
	"HttpAddr": "localhost:8080",
	"IgnoreErrors": false,
	"CustomTokenizers": "",
	"NewUids": false,
	"ClientDir": "",
	"Encrypted": false,
	"EncryptedOut": false,
	"MapShards": 1,
	"ReduceShards": 1,
	"EncryptionKey": null,
	"BadgerCompression": 1,
	"BadgerCompressionLevel": 0,
	"BlockCacheSize": 46976204,
	"IndexCacheSize": 20132659
}
Connecting to zero at localhost:5080
___ Begin jemalloc statistics ___
Version: "5.2.1-0-gea6b3e973b477b8061e0076bb257dbd7f3faa756"
Build-time option settings
  config.cache_oblivious: true
  config.debug: false
  config.fill: true
  config.lazy_lock: false
  config.malloc_conf: "background_thread:true,metadata_thp:auto"
  config.opt_safety_checks: false
  config.prof: true
  config.prof_libgcc: true
  config.prof_libunwind: false
  config.stats: true
  config.utrace: false
  config.xmalloc: false
Run-time option settings
  opt.abort: false
  opt.abort_conf: false
  opt.confirm_conf: false
  opt.retain: true
  opt.dss: "secondary"
  opt.narenas: 16
  opt.percpu_arena: "disabled"
  opt.oversize_threshold: 8388608
  opt.metadata_thp: "auto"
  opt.background_thread: true (background_thread: true)
  opt.dirty_decay_ms: 10000 (arenas.dirty_decay_ms: 10000)
  opt.muzzy_decay_ms: 0 (arenas.muzzy_decay_ms: 0)
  opt.lg_extent_max_active_fit: 6
  opt.junk: "false"
  opt.zero: false
  opt.tcache: true
  opt.lg_tcache_max: 15
  opt.thp: "default"
  opt.prof: false
  opt.prof_prefix: "jeprof"
  opt.prof_active: true (prof.active: false)
  opt.prof_thread_active_init: true (prof.thread_active_init: false)
  opt.lg_prof_sample: 19 (prof.lg_sample: 0)
  opt.prof_accum: false
  opt.lg_prof_interval: -1
  opt.prof_gdump: false
  opt.prof_final: false
  opt.prof_leak: false
  opt.stats_print: false
  opt.stats_print_opts: ""
Profiling settings
  prof.thread_active_init: false
  prof.active: false
  prof.gdump: false
  prof.interval: 0
  prof.lg_sample: 0
Arenas: 17
Quantum size: 16
Page size: 4096
Maximum thread-cached size class: 32768
Number of bin size classes: 36
Number of thread-cache bin size classes: 41
Number of large size classes: 196
Allocated: 58312, active: 90112, metadata: 2927672 (n_thp 0), resident: 2969600, mapped: 8478720, retained: 2007040
Background threads: 2, num_runs: 1, run_interval: 0 ns
--- End jemalloc statistics ---
Processing file (1 out of 1): /data/g01.rdf.gz
Shard tmp/map_output/000 -> Reduce tmp/shards/shard_0/000
badger 2021/01/19 03:25:04 INFO: All 0 tables opened in 0s
badger 2021/01/19 03:25:04 INFO: Discard stats nextEmptySlot: 0
badger 2021/01/19 03:25:04 INFO: Set nextTxnTs to 0
badger 2021/01/19 03:25:04 INFO: All 0 tables opened in 0s
badger 2021/01/19 03:25:04 INFO: Discard stats nextEmptySlot: 0
badger 2021/01/19 03:25:04 INFO: Set nextTxnTs to 0
badger 2021/01/19 03:25:04 INFO: DropAll called. Blocking writes...
badger 2021/01/19 03:25:04 INFO: Writes flushed. Stopping compactions now...
badger 2021/01/19 03:25:04 INFO: Deleted 0 SSTables. Now deleting value logs...
badger 2021/01/19 03:25:04 INFO: Value logs deleted. Creating value log file: 1
badger 2021/01/19 03:25:04 INFO: Deleted 1 value log files. DropAll done.
Num Encoders: 1
Final Histogram of buffer sizes: 
 -- Histogram: 
Min value: 12119 
Max value: 12119 
Mean: 12119.00 
Count: 1 
[0 B, 64 KiB) 1 100.00% 
 --

Finishing stream id: 1
Finishing stream id: 2
Finishing stream id: 3
Finishing stream id: 4
Finishing stream id: 5
Finishing stream id: 6
Finishing stream id: 7
Finishing stream id: 8
Finishing stream id: 9
Finishing stream id: 10
Finishing stream id: 11
Finishing stream id: 12
Finishing stream id: 13
Finishing stream id: 14
Finishing stream id: 15
Finishing stream id: 16
Finishing stream id: 17
badger 2021/01/19 03:25:05 INFO: Table created: 1 at level: 6 for stream: 1. Size: 252 B
Finishing stream id: 18
[03:25:05Z] REDUCE 01s 100.00% edge_count:127.0 edge_speed:127.0/sec plist_count:110.0 plist_speed:110.0/sec. Num Encoding MBs: 0. jemalloc: 4.0 GiB 
badger 2021/01/19 03:25:05 INFO: Table created: 12 at level: 6 for stream: 11. Size: 289 B
Finishing stream id: 19
badger 2021/01/19 03:25:05 INFO: Table created: 14 at level: 6 for stream: 9. Size: 708 B
badger 2021/01/19 03:25:05 INFO: Table created: 2 at level: 6 for stream: 3. Size: 288 B
Finishing stream id: 20
Finishing stream id: 21
badger 2021/01/19 03:25:06 INFO: Table created: 8 at level: 6 for stream: 13. Size: 308 B
Finishing stream id: 22
badger 2021/01/19 03:25:06 INFO: Table created: 7 at level: 6 for stream: 12. Size: 308 B
Finishing stream id: 23
badger 2021/01/19 03:25:06 INFO: Table created: 5 at level: 6 for stream: 8. Size: 271 B
Finishing stream id: 24
badger 2021/01/19 03:25:06 INFO: Table created: 6 at level: 6 for stream: 7. Size: 224 B
Finishing stream id: 25
badger 2021/01/19 03:25:06 INFO: Table created: 3 at level: 6 for stream: 6. Size: 292 B
Finishing stream id: 26
[03:25:06Z] REDUCE 02s 100.00% edge_count:127.0 edge_speed:117.2/sec plist_count:110.0 plist_speed:101.5/sec. Num Encoding MBs: 0. jemalloc: 3.9 GiB 
badger 2021/01/19 03:25:06 INFO: Table created: 17 at level: 6 for stream: 5. Size: 224 B
badger 2021/01/19 03:25:06 INFO: Table created: 16 at level: 6 for stream: 4. Size: 286 B
badger 2021/01/19 03:25:06 INFO: Table created: 23 at level: 6 for stream: 23. Size: 301 B
Finishing stream id: 27
Finishing stream id: 28
badger 2021/01/19 03:25:06 INFO: Table created: 11 at level: 6 for stream: 16. Size: 293 B
badger 2021/01/19 03:25:06 INFO: Table created: 18 at level: 6 for stream: 18. Size: 269 B
badger 2021/01/19 03:25:06 INFO: Table created: 21 at level: 6 for stream: 21. Size: 292 B
badger 2021/01/19 03:25:06 INFO: Table created: 19 at level: 6 for stream: 19. Size: 291 B
badger 2021/01/19 03:25:06 INFO: Table created: 20 at level: 6 for stream: 20. Size: 299 B
badger 2021/01/19 03:25:06 INFO: Table created: 25 at level: 6 for stream: 25. Size: 322 B
badger 2021/01/19 03:25:07 INFO: Table created: 15 at level: 6 for stream: 17. Size: 272 B
Writing split lists back to the main DB now
badger 2021/01/19 03:25:07 INFO: Table created: 4 at level: 6 for stream: 2. Size: 287 B
badger 2021/01/19 03:25:07 INFO: copying split keys to main DB Sent data of size 0 B
badger 2021/01/19 03:25:07 INFO: Table created: 13 at level: 6 for stream: 10. Size: 267 B
badger 2021/01/19 03:25:07 INFO: Table created: 26 at level: 6 for stream: 26. Size: 322 B
badger 2021/01/19 03:25:07 INFO: Table created: 22 at level: 6 for stream: 22. Size: 276 B
badger 2021/01/19 03:25:07 INFO: Table created: 28 at level: 6 for stream: 28. Size: 330 B
badger 2021/01/19 03:25:07 INFO: Table created: 9 at level: 6 for stream: 14. Size: 293 B
badger 2021/01/19 03:25:07 INFO: Table created: 27 at level: 6 for stream: 27. Size: 330 B
badger 2021/01/19 03:25:07 INFO: Table created: 24 at level: 6 for stream: 24. Size: 1.6 kB
badger 2021/01/19 03:25:07 INFO: Table created: 29 at level: 6 for stream: 29. Size: 331 B
badger 2021/01/19 03:25:07 INFO: Table created: 10 at level: 6 for stream: 15. Size: 267 B
badger 2021/01/19 03:25:07 INFO: Resuming writes
badger 2021/01/19 03:25:07 INFO: Lifetime L0 stalled for: 0s
badger 2021/01/19 03:25:07 INFO: 
Level 0 [ ]: NumTables: 01. Size: 1.1 KiB of 0 B. Score: 0.00->0.00 Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 5 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 6 [B]: NumTables: 29. Size: 9.8 KiB of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level Done
[03:25:07Z] REDUCE 03s 100.00% edge_count:127.0 edge_speed:72.02/sec plist_count:110.0 plist_speed:62.38/sec. Num Encoding MBs: 0. jemalloc: 1.0 GiB 
badger 2021/01/19 03:25:07 INFO: Lifetime L0 stalled for: 0s
badger 2021/01/19 03:25:07 INFO: 
Level 0 [ ]: NumTables: 00. Size: 0 B of 0 B. Score: 0.00->0.00 Target FileSize: 64 MiB
Level 1 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 2 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 3 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 4 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 5 [ ]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level 6 [B]: NumTables: 00. Size: 0 B of 10 MiB. Score: 0.00->0.00 Target FileSize: 2.0 MiB
Level Done
[03:25:07Z] REDUCE 03s 100.00% edge_count:127.0 edge_speed:65.38/sec plist_count:110.0 plist_speed:56.63/sec. Num Encoding MBs: 0. jemalloc: 0 B 
Total: 03s

But it still does not return any data when I query via GraphQL (it just says there is no GraphQL schema in Dgraph) or via Ratel. I don’t have a clue yet where the error could be.

How did you start the Alpha?

The configuration I use to run alpha, zero and ratel is:

version: '3.2'
services:
  zero:
    image: dgraph/dgraph:v20.11.0
    volumes:
      - dgraphdb02:/dgraph
      - /home/me/Downloads/export/dgraph.r90002.u0118.1158/:/data
    ports:
      - 8001:5080
      - 8002:6080
    restart: on-failure
    command: dgraph zero --my=zero:5080
    networks:
      - dgraph-net
  alpha:
    image: dgraph/dgraph:v20.11.0
    ports:
      - 8012:8080
      - 8011:9080
    volumes:
      - dgraphdb02:/dgraph
      - /home/me/Downloads/export/dgraph.r90002.u0118.1158/:/data
    restart: on-failure
    command: dgraph alpha --my=alpha:7080 --lru_mb=2048 --zero=zero:5080 --whitelist 172.17.0.0:172.32.0.0,192.168.1.1
    networks:
      - dgraph-net
  ratel:
    image: dgraph/dgraph:v20.11.0
    command: dgraph-ratel
    ports:
      - 8010:8000
    networks:
      - dgraph-net
volumes:
  dgraphdb02: ~
networks:
  dgraph-net:

After the bulk, are you moving the p directory to the right place?
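
With the defaults shown in your logs (OutDir "./out" and one reduce shard), the bulk output ends up in out/0/p. A rough sketch of that step, assuming the compose setup above where both containers share the dgraphdb02 volume at /dgraph and the bulk command was run from /dgraph:

# stop the alpha first, then replace its posting directory with the bulk output
rm -rf /dgraph/p
cp -r /dgraph/out/0/p /dgraph/p
# restart the alpha so it picks up the new p directory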

I didn’t. I added this last step, then restarted the Dgraph instances, and it finally worked. Thank you for the help.

Nice! Glad it worked for you.


@MichelDiz @anand I went through all the above steps but still see the schema path missing error.