After upgrading to the latest version, queries are very slow in both the client and Ratel UI

After upgrading to the latest version, queries are very slow in both the client and Ratel UI. Is it because I am querying while inserting data? If so, how should I solve it? By going distributed?

When it hangs, the log tells me that 2 more transactions are still running. What should I do?



Can you please test with our latest RC?

https://hub.docker.com/r/dgraph/dgraph/tags/

v1.0.9-rc3

https://storage.googleapis.com/dgraph-bin/dgraph-darwin-amd64.tar.gz
https://storage.googleapis.com/dgraph-bin/dgraph-linux-amd64.tar.gz
https://storage.googleapis.com/dgraph-bin/dgraph-windows-amd64.tar.gz

Cheers.

Yes, I tested with v1.0.9-rc3.

The Java client prints this information all the time.

What are your specs?

SYSTEM:

DGRAPH:
SCHEMA:
human_id:string @index(term) .
relation:uid @reverse @count .
company_id:string @index(term) .
human_name:string @index(term) .
create_time:string @index(term) .
update_time:string @index(term) .
share_hash:string @index(exact) .
company_name:string @index(fulltext,term) .

DATA:
30,000 company_id values

QUERY:
{
  query(func: eq(company_id, "99")) {
    uid
  }
}

And what is your storage type?
And what are your Dgraph configs? A single Dgraph server?

So anyway, do this. Export your DB: https://docs.dgraph.io/deploy#export-database
Check that the exported RDFs are OK.
Clean everything (are you using Docker?). Get the v1.0.9-rc4 binaries or Docker image.
Then do a bulk load: https://docs.dgraph.io/deploy#bulk-loader

Then test again, but try to stress Dgraph a bit.
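
A rough sketch of those steps, assuming a single local server on the default ports (the file names under export/ vary, so substitute the real ones):

# 1. Trigger an export on the running server (v1.0.x admin endpoint)
curl localhost:8080/admin/export

# 2. Spot-check the exported RDFs
zcat export/*.rdf.gz | head

# 3. Stop Dgraph, wipe the old p/ and w/ directories, install v1.0.9-rc4

# 4. Rebuild from the export with the bulk loader (Zero must be running)
dgraph bulk -r export/data.rdf.gz -s export/data.schema --zero localhost:5080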

I tested it following your method, and the query is no longer slow. But there is a new problem: when I insert data it occupies a lot of memory, so my inserts get slower and slower, other operations cannot complete, and the machine becomes unusable.
I inserted the data as RDF, and also tried JSON; the problem is the same either way.

(1) [screenshot of the loader logs]

(2) [screenshot]

Hey, those logs are not from the bulk loader; they are from the live loader. If you use the bulk loader you will get much faster data insertion. Check "Loading close to 1M edges/sec into Dgraph" on the Dgraph Blog.

If you do not want to increase memory, I recommend using the bulk loader for data insertion.
And which of the problems you described still persists?

A bulkload log looks like this:

./dgraph bulk -r out.rdf.gz -s movielens.schema --map_shards=4 --reduce_shards=1 --zero localhost:5080 
{ 
"RDFDir": "out.rdf.gz", 
"SchemaFile": "movielens.schema", 
"DgraphsDir": "out", 
"TmpDir": "tmp", 
"NumGoroutines": 8, 
"MapBufSize": 67108864, 
"ExpandEdges": true, 
"SkipMapPhase": false, 
"CleanupTmp": true, 
"NumShufflers": 1, 
"Version": false, 
"StoreXids": false, 
"ZeroAddr": "localhost:5080", 
"HttpAddr": "localhost:8080", 
"MapShards": 4, 
"ReduceShards": 1 
} 
The bulk loader needs to open many files at once. This number depends on the size of the data 
set loaded, the map file output size, and the level of indexing. 100,000 is adequate for most data 
set sizes. See `man ulimit` for details of how to change the limit. 
Current max open files limit: 7168 
2018/07/11 21:55:13 loader.go:77: Connecting to zero at localhost:5080 
MAP 01s rdf_count:30.54k rdf_speed:29.83k/sec edge_count:94.38k edge_speed:92.17k/sec 
MAP 02s rdf_count:108.4k rdf_speed:53.44k/sec edge_count:327.9k edge_speed:161.7k/sec 
2018/07/11 21:55:15 merge_shards.go:36: Shard tmp/shards/002 -> Reduce tmp/shards/shard_0/002 
2018/07/11 21:55:15 merge_shards.go:36: Shard tmp/shards/001 -> Reduce tmp/shards/shard_0/001 
2018/07/11 21:55:15 merge_shards.go:36: Shard tmp/shards/000 -> Reduce tmp/shards/shard_0/000 
2018/07/11 21:55:15 merge_shards.go:36: Shard tmp/shards/003 -> Reduce tmp/shards/shard_0/003 
REDUCE 03s [100.00%] edge_count:327.9k edge_speed:327.9k/sec plist_count:16.63k plist_speed:16.63k/sec 
REDUCE 03s [100.00%] edge_count:327.9k edge_speed:877.8k/sec plist_count:16.63k plist_speed:44.52k/sec 
Total: 03s

Which way? Like this? apollo-universal-starter-kit-With-Dgraph-DB/packages/server/src/dgraph/dgraphconnector.js at 86160bae53595290703221192f1b7a405ae8bc38 · OpenDgraph/apollo-universal-starter-kit-With-Dgraph-DB · GitHub

[screenshot]

[screenshot]

I have loaded the data, but I don't know what the next step is.
Also, can the bulk loader only be used to initialize the database? What if I need to keep inserting data all the time? How should this be done?

It is meant just for that: initializing a database from scratch.

In that case you either use a client or the live loader.
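
For ongoing inserts into a running cluster, a live load looks roughly like this (flags as in Dgraph v1.0.x; check dgraph live --help for your version; file names and numbers are illustrative):

# stream RDFs into a running server; smaller batch size (-b) and
# concurrency (-c) trade speed for lower memory pressure
dgraph live -r new_data.rdf.gz -d localhost:9080 -z localhost:5080 -b 1000 -c 10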

It is simple: the bulk loader creates the output folder "out/*", and each numbered folder inside it is a shard. You can control this with the "--reduce_shards" flag. Once the output is done, just take the files in "out/0/*", move them to a fresh Dgraph Server directory, and run your server.

Or copy the Dgraph binary to that path and start the Dgraph Server from there.
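
Something like this, assuming one reduce shard and illustrative paths:

# the bulk loader wrote shard 0's posting lists here
ls out/0/p

# move them to a fresh server directory and start Dgraph on top of them
mkdir -p ~/dgraph && cp -r out/0/p ~/dgraph/
cd ~/dgraph
dgraph server --lru_mb 2048 -p p -w w --zero localhost:5080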

Yes, I did.
However, what I need is to insert data after starting the server, because insertions and queries have to happen at the same time. So this method is not suitable, but the other methods eat up memory.

Can the memory usage problem be solved? I can't do anything right now.

Is it possible for you to record a video and send it to my personal inbox (on Discuss), showing exactly what you are doing to reproduce the issue? That way I can easily work out a way forward for you. (YouTube or something accessible.)

Are you using Docker? In the first screenshot I see an unusual IP: "10.10.18.71".

Another detail.

Your specs say the machine has 8GB of memory. Isn't that too small once you add your operating system, plus Docker, some other software, a browser, and so on?

In practice you have very few RDFs (supposedly 12k, which is tiny and should not be a problem). 2GB for Dgraph is more than enough. So something in your setup does not add up.
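
One thing worth checking on an 8GB machine is the --lru_mb flag, which sizes the server's internal cache; a modest value leaves room for the OS, Docker, and the browser (2048 here is illustrative, not a universal recommendation):

# cap Dgraph's cache at roughly 2GB
dgraph server --lru_mb 2048 -p p -w w --zero localhost:5080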

Cheers.