Pretty much the title… I’m interested in how much of a backend I can spare myself, or whether I still need one.
Anyone using GraphQL is probably using it on the frontend, so yes, probably everyone. That being said, a backend may still be used on top of that for more complex cases in certain situations. Personally, if I hosted my own backend, I wouldn’t use Dgraph GraphQL; I would use Dgraph DQL, as it is far more powerful.
Hopefully in the next year or so Dgraph GraphQL will be able to handle almost any case DQL can, including more standard and complex backend validation cases.
Dgraph Cloud’s purpose is for those of us that don’t want to deal with the hardware end of things.
Haven’t deployed to production since migrating from Elixir/Absinthe to Dgraph; however, it’s 200x less work. There are some things I’m using middleware for, basically any complex mutations that need to be done inside transactions, but it’s not difficult. I’m using SvelteKit, so I just create an ‘endpoint’ and execute DQL using the dgraph-js-http lib. Example:
import { dgraphClient } from '$lib/configs/dgraph-client.ts';
import * as dgraph from 'dgraph-js-http'; // needed for dgraph.ERR_ABORTED below
import { v4 as uuidv4 } from 'uuid';
import Mux from '@mux/mux-node';

const { Video } = new Mux(import.meta.env.VITE_MUX_TOKEN_ID, import.meta.env.VITE_MUX_TOKEN_SECRET);

export async function post(request) {
  const txn = dgraphClient.newTxn();
  const id = uuidv4();
  const transcriptSlug = request.body.slug;

  // Create a Mux direct-upload URL; the passthrough id ties the eventual
  // asset back to our Video node.
  const upload = await Video.Uploads.create({
    cors_origin: import.meta.env.VITE_APP_URL,
    new_asset_settings: {
      passthrough: id,
      playback_policy: 'public',
      mp4_support: 'standard'
    }
  });

  try {
    // Upsert: find the Transcript by slug (and its existing Video, if any),
    // then write the Video node plus both edges.
    const videoUpsertJson = {
      query: `{
        var(func: type(Transcript)) @filter(eq(Transcript.slug, "${transcriptSlug}")) {
          t as uid
          Transcript.video {
            v as uid
          }
        }
      }`,
      set: [
        {
          "uid": "uid(v)",
          "dgraph.type": "Video",
          "Video.transcript": { "uid": "uid(t)" },
          "Video.uploadId": upload.id,
          "Video.status": "waiting_for_upload"
        },
        {
          "uid": "uid(t)",
          // Create the inverse edge explicitly; that only happens
          // automatically through the GraphQL API.
          "Transcript.video": { "uid": "uid(v)" }
        }
      ]
    };

    await txn.mutate({ mutation: JSON.stringify(videoUpsertJson) });
    await txn.commit();
  } catch (e) {
    if (e === dgraph.ERR_ABORTED) {
      console.log(e);
    } else {
      throw e;
    }
  } finally {
    await txn.discard(); // no-op if the commit succeeded
  }

  // Return after the try/catch/finally: a `return` inside `finally` would
  // silently swallow the re-thrown error above.
  return {
    body: {
      id,
      url: upload.url
    }
  };
}
And dgraph-client.ts:

import * as dgraph from 'dgraph-js-http';

const clientStub = new dgraph.DgraphClientStub(
  import.meta.env.VITE_DGRAPH_URL,
  false,
);

const dgraphClient = new dgraph.DgraphClient(clientStub);
// Vite env vars are strings, so compare explicitly rather than passing the
// raw value (the string "false" would be truthy).
dgraphClient.setDebugMode(import.meta.env.VITE_DGRAPH_DEBUG_MODE === 'true');

export { dgraphClient };
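Calling an endpoint like that from the client is just a fetch. A minimal sketch, assuming the endpoint is mounted at `/video-upload` and returns the `{ id, url }` body shown above (both are assumptions, not part of the original snippet):

```typescript
// Hypothetical client-side helper: POST the transcript slug to the SvelteKit
// endpoint and get back the upload id and Mux upload URL.
// The '/video-upload' path is an assumption; use whatever route your
// endpoint file maps to.
async function createVideoUpload(slug: string): Promise<{ id: string; url: string }> {
  const res = await fetch('/video-upload', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ slug })
  });
  if (!res.ok) throw new Error(`Upload request failed: ${res.status}`);
  return res.json();
}
```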
Also, for queries that aren’t possible with the standard generated GraphQL endpoints, you can just create custom queries in the schema that use custom DQL.
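For reference, that looks roughly like this in the GraphQL schema, using Dgraph’s `@custom(dql: ...)` directive. The `Transcript` type and field names here are assumptions borrowed from the snippet above; the inner query block must be named after the GraphQL field:

```graphql
type Query {
  transcriptBySlug(slug: String!): [Transcript] @custom(dql: """
    query q($slug: string) {
      transcriptBySlug(func: type(Transcript)) @filter(eq(Transcript.slug, $slug)) {
        id: uid
        slug: Transcript.slug
      }
    }
  """)
}
```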
I’ve been working on a graphql server library for nextjs: https://github.com/ian/next-graphql
We’re sorta doing this in a different way: we spin up a Next-based GraphQL server that uses remote schemas to delegate to Dgraph.
The neat thing about this approach is that we can add guards to endpoints and require auth, admin, etc. An example of this would be a CMS where any user can read but only admins can write.
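To make the guard idea concrete, here’s a generic sketch of the pattern, independent of next-graphql’s actual API (which may differ): a higher-order function that wraps a resolver and rejects callers without the required role. The `Context` shape and role names are illustrative assumptions:

```typescript
// Hypothetical guard sketch (not next-graphql's real API): wrap a resolver
// so it only runs when the caller has the required role.
type Context = { user?: { id: string; roles: string[] } };
type Resolver<A, R> = (args: A, ctx: Context) => R | Promise<R>;

function requireRole<A, R>(role: string, resolver: Resolver<A, R>): Resolver<A, R> {
  return (args, ctx) => {
    // Reject unauthenticated callers and callers missing the role.
    if (!ctx.user || !ctx.user.roles.includes(role)) {
      throw new Error('Forbidden');
    }
    return resolver(args, ctx);
  };
}

// CMS example: reads are open, but this write is admin-only.
const updatePost = requireRole('admin', (args: { id: string }, ctx: Context) => {
  return { id: args.id, updatedBy: ctx.user!.id };
});
```

The same wrapper composes with schema delegation: guarded resolvers run the check first, then forward the operation to the remote Dgraph schema.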
I’d love your feedback on this. It seems like it cleans up a ton of the frontend <-> proxy <-> dgraph problems you’re talking about.
We also use Next.js. It really helped bring more backend functionality closer to the frontend, but we still use GraphQL in both client-side and server-side scripts. I think the best of any world is flexibility: let things shine where they work well without reinventing every piece of the wheel.