Server Error / Timeout while loading schema

Running Dgraph locally in Docker, the schema took several minutes to load to a done state. So my next step was to move everything out onto the net: a single-host setup on AWS, following the deploy docs.

I can load a very simple schema without a problem, but when I try to load my lengthy schema the server errors out. Watching Ratel's connection indicators, it seems to cycle between booting up and going offline until it finally just quits.

I thought that maybe it was my setup, so I used my invite to slash.graphql and uploaded my schema there. After submitting it, there was no indication anything was happening. After a few minutes I got a brief error that was quickly followed by a generic server error. Sorry, I couldn’t quite catch what the first error said.

While writing this, I went and tried submitting the same schema again on slash.graphql, and it accepted it almost immediately and took me to the API Explorer.

I am still unable to get a response from my own AWS running instance.

Error fetching schema from server: Please retry again, server is not ready to accept requests

Although I can see it on: http://54.147.127.165:8080/

Dgraph browser is available for running separately using the dgraph-ratel binary

The /graphql endpoint is still not providing the schema when I try to connect to it with GraphQL Playground.
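As a quick readiness check (a sketch, assuming the default Alpha HTTP port 8080 from the deploy docs), Dgraph Alpha serves a /health endpoint on the same port, which reports whether it is ready to accept requests before you try /graphql:

```shell
#!/bin/sh
# Probe against the instance mentioned above; swap in your own host.
# Alpha serves /health on its HTTP port (8080 here).
ALPHA="http://54.147.127.165:8080"

check_health() {
  # Prints JSON that includes a "healthy" status once Alpha is ready.
  curl --silent --max-time 5 "${ALPHA}/health"
}

# Run manually once the instance is up:
# check_health
```

If /health never settles, the /graphql errors are just a symptom of the Alpha process restarting underneath.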

Should I break my schema apart and send it in increments/updates? It seems that the schema parser only updates the fields that have changed, so the best way to get it all onto the server might be incrementally instead of all at once.

I would post the schema, but it was too large to be allowed in this post :frowning: But you can see it where slash.graphql finally processed it: https://primo-yam-7925.us-west-2.aws.cloud.dgraph.io/graphql

Hi @amaster507
So, as I understand it, it is working fine through Slash, right?
But when submitting the schema to your AWS instance you’re facing issues. Can you share the error that you get on submitting the schema? And which approach did you follow to set up your AWS single-host instance?

It is working now on Slash, but it did seem to take a while.

https://dgraph.io/docs/deploy/#run-using-docker-compose-on-single-aws-instance is what I followed for setup. Opened ports: 8080, 5080, 8000, 9080

I get the response (52) Empty reply from server after submitting the schema.
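For reference, the way the schema is being submitted (a sketch assuming the default port and a local schema.graphql file; the host is a placeholder) is a plain POST to Alpha's /admin/schema endpoint, which is where curl's "(52) Empty reply from server" would surface if Alpha drops the connection:

```shell
#!/bin/sh
# Sketch of the schema upload; ALPHA_URL and schema.graphql are
# placeholders for your own host and file.
ALPHA_URL="http://localhost:8080"

submit_schema() {
  # --data-binary keeps the schema file's newlines intact.
  # Adding -v would show whether "(52) Empty reply" is a dropped
  # connection or an HTTP-level error from Alpha.
  curl --silent --show-error -X POST "${ALPHA_URL}/admin/schema" \
       --data-binary "@$1"
}

# Usage, once Alpha is reachable:
# submit_schema schema.graphql
```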

Watching Ratel, it seems like the server is going up and down while still trying to add types and predicates. I was able to catch it a few times and see that it had loaded more since the last time.

Right now I am just getting Error fetching schema from server: Please retry again, server is not ready to accept requests when trying to see the schema from Ratel.

I am about to restart the server and try simplifying the schema: first get it to add all of the String, Date, Int, Float, and Enum fields, and then build it back up from there, adding the linked predicates a few at a time. It really feels like a server overload from trying to process too much at once. I am sure there is a lot going on behind the scenes with building the payload types, inputs, and the query and mutation parameters, on top of publishing the Dgraph schema for types, predicates, and indexes.

Here is the schema: https://drive.google.com/file/d/1rNtcUGwsLAt78r7VIVsBI5AVK-Lz1XyP/view?usp=sharing

Hi @amaster507
I tried with your schema locally and it took around 5 minutes to submit. Before that, I did run into a few issues like you. If you see “too many open files” in the logs like I did, go here. I increased the limit with ulimit -n 1024 and it worked perfectly fine after that for me.
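The fix above amounts to checking and raising the per-process open-file limit in the shell that launches Dgraph; a minimal sketch (1024 is just the value quoted above, and production setups often go much higher, e.g. 65535):

```shell
#!/bin/sh
# Show the current soft limit on open file descriptors.
ulimit -n

# Raise it for this shell session (and anything started from it,
# such as docker-compose). 1024 is the value from this thread;
# many deployments use a much larger limit.
ulimit -n 1024
```

Note that this only affects the current shell; to make it permanent you would set it in the service or system limits configuration instead.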

Well, I think I got to the bottom of it. After fixing the “too many open files” issue, I ran into what I believe is the main problem with running my schema: OOM. :frowning: I read about troubleshooting OOM at that same link as above and saw that it recommends 7-8 GB (t2.large+ on AWS), while I was running on a t2.micro (1 GB). I guess that would work for smaller schemas, but not for our needs. Now back to deciding where to go from here to keep costs down and not drive up a bill. I am still deterred from Slash being a more permanent solution because of the lack of pricing info. I used 12 of my free credits and still haven’t loaded any actual data on it. Based on our normal user load, I estimate that we would use ~3-10K credits/day.

Edit: Yep, reconfigured as a t3.large (8 GB) and the schema built and was done before I could even get the logs open.
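For anyone hitting the same wall, a quick way to confirm on the host that it really was memory (a sketch; reading the kernel log may require root, and the exact OOM-killer message wording varies by kernel):

```shell
#!/bin/sh
# Total RAM on the host; the troubleshooting docs referenced above
# recommend 7-8 GB for running Dgraph Alpha under load.
grep MemTotal /proc/meminfo

# Kernel log lines the OOM killer leaves behind when it reaps a
# process (e.g. the dgraph alpha binary).
dmesg 2>/dev/null | grep -i "out of memory" \
  || echo "no OOM lines visible (or no permission to read dmesg)"
```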

That’s great! I will let you know about the pricing info of Slash soon. Let me know if you have any more questions.

@amaster507 Can we schedule a call so we can learn more about your project and give you more info about the Slash pricing? Feel free to DM me your email address, and let me find a time to chat.

In the meantime, please check out this post here: What does the credit mean in Slash GraphQL?