What are the limits for a schema for GraphQL

Are there any recommended or maximum limitations for the schema when building out a GraphQL server?

  • How many node types?
  • How many predicates?
  • Overall length of the schema?

If there aren’t any limitations, is there a better way to submit a schema than through a curl POST to port 8080?

I seem to only be able to build a schema so large before it brings down Dgraph for good. I am sure that there have to be other users with larger schemas as well. I ain’t building just a simple ToDo app, lol.

I have a 635-line schema, and I can keep going. Any modification to the schema is seamless. Maybe the core team can speak to the limits.

As of now, my largest type has 28 predicates.

I read somewhere that `curl -X POST localhost:8080/admin/schema --data-binary '@schema.graphql'` is best.
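For anyone copy-pasting, here is that command in full (with the quoting fixed), piped through a JSON pretty-printer so any schema validation errors in the response are actually readable. This is just a sketch assuming a local Alpha on :8080 and a `schema.graphql` in the current directory:

```shell
# Assumes a local Alpha listening on :8080 and a schema.graphql file
# in the current directory. -sS hides the progress bar but still
# prints transport errors; json.tool pretty-prints the response so
# schema validation messages are easy to read.
curl -sS -X POST localhost:8080/admin/schema \
  --data-binary '@schema.graphql' | python3 -m json.tool
```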

Maybe Dgraph supports gRPC for admin tasks; I don’t know.

Please share your schema length, and how many node types and predicates you have in that schema as well.

There aren’t any limits to the size of the schema that you can have.

Currently, submitting a request to the /admin/schema endpoint is the recommended method. There is also an updateGQLSchema mutation available on the /admin endpoint. Could you please share what kind of alternative you are looking for?
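For reference, a minimal sketch of the updateGQLSchema route (the mutation name is as in the /admin API; the embedded python3 snippet is just one assumed way to JSON-escape the schema file, not anything Dgraph-specific):

```shell
# Build the JSON body for the updateGQLSchema mutation on /admin.
# The schema file is passed as a GraphQL variable so its quotes and
# newlines are escaped safely by the JSON encoder.
PAYLOAD=$(python3 - <<'EOF'
import json
schema = open("schema.graphql").read()
query = ("mutation($sch: String!) { updateGQLSchema("
         "input: { set: { schema: $sch } }) { gqlSchema { schema } } }")
print(json.dumps({"query": query, "variables": {"sch": schema}}))
EOF
)
# Assumes a local Alpha on :8080.
curl -s -X POST localhost:8080/admin \
  -H 'Content-Type: application/json' \
  --data "$PAYLOAD"
```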

Could you share some more details about the size of the schema and what you are observing so that we can debug this? Does Dgraph crash or go OOM? How much memory and CPU are provisioned for Dgraph?

I finally resolved this by increasing memory. I followed https://dgraph.io/docs/deploy/#run-using-docker-compose-on-single-aws-instance, which creates a t2.micro instance on AWS. After finally figuring out that my problem was OOM, I resized to a t3.large, and that allowed my schema to POST.

The documentation should mention in this section that this creates a 1 GB instance which is eligible for the free tier but does not meet the recommended 8 GB of memory, and that a t*.large is recommended instead. I think that would clear up a lot of confusion for new users running Docker images on their own AWS instances.

I was just looking for a way to create the schema that maybe didn’t use as much memory. That may not be possible, though.

My schema right now is 40 KB, consisting of 110 types, ~1,000 predicates, 250 @search directives, and 267 @hasInverse directives.



I have loaded 235k predicates with zero types or indices. Dgraph can handle that schema well, but Ratel has problems coping with this amount.

What is your RAM requirement for running that large of a schema?

And I assume you are using the GraphQL endpoint and not just Dgraph directly.

The dataset has 500m triples. The largest predicate has 40m triples and is 8 GB in size; the second largest has 18m triples and is 20 GB. The Zero process runs with 700 MB of RAM, and Alpha starts at 3.5 GB. Alpha’s RAM grows as you query the graph, obviously.

I am using the Dgraph gRPC GraphQL± endpoint via the Java client dgraph4j.

Ah, in my OP I was referencing the GraphQL schema that builds the schema for the /graphql endpoint. Long story short, I needed way more than 1 GB of RAM to process my GraphQL schema into the generated Dgraph schema.