I see pricing for Cloud has gone up from $10/mo to $40/mo and is still labeled “Early Access Pricing May Change”. Is there any idea what the final price is going to be? Are existing customers locked in at the price they sign up for, or is that up in the air as time goes on?
Thanks for asking, I’d like to know the answers as well.
I’ve been building an app that was going to use Dgraph cloud, but the jump from $10 to $40 is too much.
What a huge waste of time!
I can only second that: $40 is a 4x increase, really high and unsustainable for small apps!
Dgraph, please, can we just have granular pricing, like pay per MB?
I am all for having a future plan for pricing, but guys, I think we need to give Dgraph a bit of a break for offering their services at such a low cost. Let’s just compare a few other services and their costs. Many of these cannot even begin to compare with the power of Dgraph. (Prices are middle-tier estimates and the closest feature comparison. Most pricing also has overage costs.)
- Hasura - $99/mo. (Standard)
- neo4j - $65/mo. (Aura Professional)
- Neptune - $297/mo. (Example Pricing)
- Fauna - $135/mo. (Team $150/mo. credit)
- SkySQL - $700/mo. (HA 8x30)
Having a $10/mo. database service was unheard of and obviously not sustainable pricing, hence the “early access pricing”. There are some questions here that need to be answered by the Dgraph team, but in the meantime let’s not shoot the horse we rode in on.
Questions I would like the team to answer:
- Is there any planning yet for what the non “early access pricing” will be?
- Is there any grandfathering in for early birds?
- Will there be any reduced costs for those of us committed to long term use (Annual Pricing)?
- Will there be anything between the shared HA pricing ($40/mo.) and the dedicated HA pricing (est. $1,314/mo.)?
P.S. Fellow Dgraphers, remember last year the pricing structure was centered around the cost of a “credit” and that was not easy to plan for and could get expensive rather quickly. So Dgraph changed their pricing model to accommodate the community. Some of the above seems like an ask to return to that former pricing model. Be careful what you ask for.
I think you’re comparing apples to oranges on some of those. Personally, I am not complaining, because $40 a month per instance to support Dgraph is a bargain, even though they have not quite developed all the features needed to be 100% competitive yet.
That being said, I want these features, so let’s all pay a fair market value for the service so they can get there and pay their programmers. I don’t agree with complaining about pricing at this point. Once they add a few of those necessities (not going to list them here), not only will Dgraph be a superior product, they will be a FAR superior product. I honestly don’t know if the Dgraph team or their investors understand their potential, but we do.
That being said, I do agree that one day a Dgraph pricing model based on usage and size would be better and fairer. I don’t see much value in the free tier, as I use all of it in just a couple of days while developing. Maybe the free tier should allow more reads, etc. Either way, you can’t do much with it.
I think the real current problem is that I am paying the same while developing a small database (that I want to grow one day) as someone who already has a huge database and uses many reads, etc. (up to 25GB). I am not sure how to solve that problem. It is a future problem though, IMHO.
All I currently ask for is more updates and responses from the Dgraph team on actual future development; then I think people would have more of an idea of what they are investing their money and time into.
FYI: Anyone that can’t afford $40 a month really shouldn’t be using any cloud product. They still have much to learn about full-stack programming, the cost of running a business, cash flow, and competition analysis.
Thanks for the vote of confidence about the pricing change. I mentioned the focus for the upcoming months for Dgraph Cloud in my announcement email.
In the past few months we have:
- Integrated with Apollo Federation
- Allowed running custom JS code for processing business logic
- Added a third layer of auth at Cloud level for maximum security
- Improved GraphQL query latency by 33%
- Launched Data Studio — a unique way for data exploration
We have plenty more on the roadmap for 2021. Some of the things coming soon, which we think you’d like:
- 10x GraphQL mutation throughput
- 2x concurrent transactional performance
- Auto-scaling with learner nodes, for serving peak traffic and remote users (Dedicated instances)
We also just went GA with v21.03 in Dgraph Cloud, which took a bunch more effort than expected. It paves the way for better multi-tenant integration in the Cloud UI, change data capture via Kafka, audit logs, and the learner nodes mentioned above.
So, there’s a lot coming to the cloud that’s keeping us busy, and we hope our users will like it and find it good value for their money.
@mrjn Hi Manish,
I need to read my emails, so I must have missed that.
Facets sound awesome, but I think you guys should not worry about them until you have nested filters down, including nested count aggregations, @auth nested filters, and all the combinations that DQL can do (using var without querying extraneous nodes you don’t need). Not only do I imagine this is every intermediate-or-above user’s most requested feature, it is the biggest limitation, IMHO, that forces us to fall back to DQL sometimes. I believe we have no problem waiting until late next year for facets if you guys do nested filters right from the beginning.
That being said, I was specifically referring to my many (ignored) posts about backend validation. This is the biggest limitation and oversight that I feel your team does not take seriously. There are a few specific things you guys need to do in order to have better security options, and they are EASY fixes:
1.) Update-After Validation - Dgraph is actually open to attack until this is fixed (probably), and the code is already there. This also opens up so many doors for the @auth directive; just search for my many posts on this.
2.) Pre-Hooks AND/OR Access to Deleted or Changed Data in Post-Hooks - The second option is acceptable as well… there is no point in post-hooks if I can’t view the original data. Firebase Functions do it that way.
3.) Reference Directive - Right now it is impossible to delete certain things; this would be a game changer, and it follows SQL logic.
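To illustrate what I mean by #1, here is a minimal schema sketch (the `Post` type and its fields are hypothetical names of my own, just for illustration). As I understand it, the `update` rule below is evaluated against a node’s existing data rather than the incoming payload, so a matching user can still write values the rule was meant to forbid; update-after validation would re-run the rule against the post-mutation state and reject the change:

```graphql
# Hypothetical example schema, not from the Dgraph docs.
# Intent of the rule: only the owner may update a Post.
# Gap: the rule is checked against the node *before* the update,
# so an owner could, e.g., set `owner` to someone else, and the
# new state is never re-validated against the rule.
type Post @auth(
  update: { rule: """
    query ($USER: String!) {
      queryPost(filter: { owner: { eq: $USER } }) {
        id
      }
    }
  """ }
) {
  id: ID!
  title: String!
  owner: String! @search(by: [hash])
}
```

Update-after validation would close exactly this gap without any new syntax, which is why I keep saying the code is mostly already there.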
If you ask your experienced and serious users to vote on the most important needs, I guarantee all 3 of these will be at the top. These features are offered in most other NoSQL and SQL databases.
I just want to be heard. These are 3 posts that can probably be traced back to at least 20 other related posts by many other users. I just want to know that you guys have a plan to tackle backend validation, and that you actually read and discuss feature requests when someone posts them here. They are usually posted to fix a problem.
Well, I ended up listing my top 3; although there are many more, I believe these are the most important.
Thanks for reading,
For feature requests, can you please file experience reports? We get a lot of feature requests, and the best way to make a good case for a feature is through an experience report.
You’ll see the template if you file it under the Issues category.
How about a serverless offering?
There are only a few companies offering serverless databases at the moment. This would be perfect for developers and those building a side project. I appreciate that there is often a startup lag with a serverless application, but the cost benefits are really fantastic.
I added 3 new feature requests; please review them. I know your team has probably already seen them, since the problems behind them have been mentioned in dozens of posts.
- Feature Request: Update After @auth Validation
- Feature Request: Access Deleted or Changed Data in Lambda Webhook
- Feature Request: Cascade Delete & Deep Mutations By Reference Directive - #3 by jdgamble555
I feel like there is a pattern in my requests: extending what currently works in only one scenario of updates and deletes to all scenarios. Luckily, these problems have already been solved in other databases, so we can learn from how they solved them.
Two of my requests are related to backend validation, which needs to be taken very seriously.
Thank you for the awesome product and your time reading this!
Again, the price is worth it when you guys continue to innovate the product.
Serverless DB: I thought that’s what Dgraph Cloud is supposed to be?
I also thought the $10/mo was bonkers cheap. I chalked it up to how efficiently they could run the shared service, as the tech is just that good. I also thought of this database as not needing to store as much in RAM as other databases; not sure if that is true or not. I am happy to support the product and hope that the money is well spent to grow the team and keep the project moving forward! I am a huge fan of this product and this team and want to see them succeed.
I was shocked by the $40. No thanks, that is too much. Why not just give us cloud for $10, and if our business is successful we will pay $50,000 a month, no problem? But $40 is way too much for most people’s pockets; you may lose big apps that could have been built on Dgraph.