Slash GraphQL is great: we'll be able to mutate and query most of our data directly from it. However, some requests need processing before the data can be inserted, for example uploading a CSV file that has to be deserialised into objects/schema types. One way to solve this is to set up a middleware API that handles these kinds of requests: we'd use our Elixir+Phoenix API to handle the uploadCSV(file: String!)
mutation, and then insert the data into Dgraph with liveforeverx/dlex.
However, it seems like we could avoid the boilerplate involved in managing that API (mostly writing GraphQL resolvers) by sending the request to Slash GraphQL directly and having Slash run a lambda that processes the CSV before inserting the data. Slash seems to expect you to keep all of these lambdas in a single JS file, though, and I can see that quickly turning into a nightmare to manage: I don't want all our business logic in one file.
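To make the single-file idea concrete, here's a minimal sketch of what one such lambda might look like. It assumes the schema declares the mutation with Dgraph's @lambda directive (`type Mutation { uploadCSV(file: String!): Int @lambda }`), that the runtime exposes `self.addGraphQLResolvers` and a `graphql` helper as in Dgraph's lambda docs, and that a `Product` type with an auto-generated `addProduct` mutation exists — the type and field names are made up for illustration:

```javascript
// Pure CSV deserialisation: header row becomes object keys, one object per data row.
// This part is plain JS and has no dependency on the lambda runtime.
function parseCsv(text) {
  const [header, ...rows] = text
    .trim()
    .split("\n")
    .map((line) => line.split(","));
  return rows.map((row) =>
    Object.fromEntries(header.map((key, i) => [key.trim(), row[i]?.trim()]))
  );
}

// Resolver: deserialise the CSV, then insert via the generated add mutation.
// The `graphql(query, variables)` helper and the resolver argument shape
// ({ args, graphql }) are assumptions based on Dgraph's lambda documentation.
async function uploadCSV({ args, graphql }) {
  const products = parseCsv(args.file);
  await graphql(
    `mutation($input: [AddProductInput!]!) {
       addProduct(input: $input) { numUids }
     }`,
    { input: products }
  );
  return products.length;
}

// Guarded so the file can also be loaded outside the lambda runtime (e.g. in tests).
if (typeof self !== "undefined") {
  self.addGraphQLResolvers({ "Mutation.uploadCSV": uploadCSV });
}
```

The problem is easy to see from the sketch: every custom mutation means another parser plus another resolver in the same file, and the file only grows.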
What seems like a better solution is to use a FaaS framework like OpenFaaS hosted on Okteto, but I'm not sure how Slash GraphQL and OpenFaaS would communicate with each other. Would I embed API calls in one-liner Slash GraphQL lambdas, so that an OpenFaaS function inserts the data and returns a success/error message to Slash, which in turn returns it to the client? And does that actually make sense as a solution?
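For what it's worth, the "one-liner" forwarding idea might look roughly like this: the Slash lambda only relays the request to an OpenFaaS function (which would do the CSV parsing and the Dgraph insert) and passes the result back to the client. The gateway URL and function name (`csv-ingest`) are placeholders for whatever Okteto would expose, and it assumes `fetch` and `self.addGraphQLResolvers` are available in the lambda runtime; the `/function/<name>` path is OpenFaaS's standard gateway invocation route:

```javascript
// Pure helper: build the OpenFaaS invocation request, kept separate so it can
// be reasoned about (and tested) without a live gateway.
function buildFaaSRequest(gateway, fnName, payload) {
  return {
    url: `${gateway}/function/${fnName}`, // OpenFaaS gateway invoke path
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(payload),
    },
  };
}

// The forwarding resolver: all real work happens in OpenFaaS, and whatever it
// returns (success/error) goes straight back through Slash to the client.
async function uploadCSV({ args }) {
  const { url, options } = buildFaaSRequest(
    "https://openfaas.example-team.okteto.net", // hypothetical gateway URL
    "csv-ingest",                               // hypothetical function name
    { file: args.file }
  );
  const res = await fetch(url, options);
  return res.text();
}

// Guarded so the file can also be loaded outside the lambda runtime.
if (typeof self !== "undefined") {
  self.addGraphQLResolvers({ "Mutation.uploadCSV": uploadCSV });
}
```

Under that shape the Slash lambda file stays a thin routing table, and the business logic lives in versioned, individually deployable OpenFaaS functions — but I'd still like to know whether anyone has actually run this setup.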