Timeout exceeded while awaiting headers

I ran into a very weird problem today. Although I’ve tried to understand what is going on, I can’t get my head around what is actually happening.

I have a custom lambda resolver which runs complicated but manageable queries and mutations. Both the queries and the mutations are set dynamically according to the input parameters of the resolver:

// minimal example

export const removeContent = async ({ args, authHeader }) => {
  try {
    let query = "";
    let mutations = [];

    // query and mutations are chosen based on the resolver input
    if (args.input.one) {
      query = "Some quite complicated query string (working)";
      mutations = [{ "some": "JSON" }];
    }

    if (args.input.two) {
      query = "Some quite complicated query string (working)";
      mutations = [{ "some": "JSON" }];
    }

    // ... more

    // perform DQL upsert mutation
    // ...

  } catch (e) {
    return {
      error: JSON.stringify(e)
    };
  }
};
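For completeness, the elided upsert step is essentially one request against Dgraph’s HTTP /mutate endpoint; roughly something like the sketch below (the endpoint URL is a placeholder and the auth handling is simplified, not my exact code):

// rough sketch of the elided upsert step (the endpoint URL is a
// placeholder and the auth handling is simplified)
const DGRAPH_URL = "http://localhost:8080"; // placeholder

const doUpsert = async (query, mutations, authHeader) => {
  // Dgraph's HTTP API accepts an upsert block as { query, mutations }
  const res = await fetch(`${DGRAPH_URL}/mutate?commitNow=true`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      ...(authHeader ? { [authHeader.key]: authHeader.value } : {}),
    },
    body: JSON.stringify({ query, mutations }),
  });
  return res.json();
};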

If I then perform the GraphQL mutation removeContent, I get the error:

"message": "Evaluation of custom field failed because external request returned an error: Post \"http://router.fission.svc.cluster.local:80/cluster/0x73f3cf95/1/graphql-worker\": context deadline exceeded (Client.Timeout exceeded while awaiting headers) for field: removeContent within type: Mutation.",

However, I realised that if I remove some of the if statements, it works again. Also, if I change the variable declarations from let to var, it seems to accept more of the if statements. Could this be a memory issue?

Thanks! Any help appreciated!


This feels like a bug. But I’ll check.

@MichelDiz I think I’ve found the issue! I had the graphql-request package included in the bundle. I guess this has somehow interfered with Dgraph’s own GraphQL implementation? However, I have removed it and replaced it with a simple fetch request, and so far it seems to work!
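For reference, the replacement is just a plain POST against the GraphQL endpoint, roughly like the sketch below (the endpoint URL is a placeholder for my actual backend):

// plain fetch replacement for graphql-request (sketch; the endpoint
// URL is a placeholder, not my actual backend)
const graphqlFetch = async (query, variables, authHeader) => {
  const res = await fetch("https://my-backend.cloud.dgraph.io/graphql", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      ...(authHeader ? { [authHeader.key]: authHeader.value } : {}),
    },
    // standard GraphQL-over-HTTP request body
    body: JSON.stringify({ query, variables }),
  });
  const { data, errors } = await res.json();
  if (errors) throw new Error(JSON.stringify(errors));
  return data;
};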

Apart from the 500KB size limit, are there any other memory limitations for lambdas on Cloud?

I’m waiting for a response on this.


@MichelDiz By the way, do you know by any chance if there is a way to check log files on a Dedicated Cloud instance? I guess my corrupted source must have logged something there.

Limits:
cpu:     250m
memory:  128Mi

128MB feels okay, right? So it must be a bug. I need to check whether something similar is also happening in the community, or if it’s unique to the Cloud.

We don’t have a way to expose Alpha logs from the Cloud to customers.
Also, Lambda logs are available from the Cloud UI, but only for a maximum of the last 60 minutes.
