Using community feedback to prioritize GitHub issues

At Dgraph, we love community :heart:

Over the years, we have received a lot of support from our community, so we also want the community to be able to help decide the direction we take. To that end, we have always used community feedback to prioritize our stories and what we work on. This includes publishing our roadmap for the next year, getting feedback on it, and planning our year based on that feedback.

To improve further, we have recently semi-automated this process by defining some metrics to accurately measure the feedback and incorporating the results of those metrics into our sprint planning process. Now, we prioritize the issues in our GitHub repositories based on the community involvement they receive. Before planning every sprint, we look at the open GitHub issues, measure the community involvement in each of them, use that measurement to compute a priority score, sort the issues from highest to lowest score, and treat the result as community feedback for planning the sprint. This creates a continuous feedback loop with the community.

What does that mean?
It means that the more the community is involved in a GitHub issue, the higher the priority score it gets, resulting in us paying more attention to that issue and solving it faster :fast_forward:

How do we calculate the priority score?
We have a script to do this task for us :sunglasses:

We measure 3 metrics for each open issue on GitHub:

  • Number of reactions on the issue description (numReaction)
  • Number of comments (numComment)
  • Number of participants (numParticipant)
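
The script itself isn't reproduced here, but to make the metrics concrete, here is a minimal sketch of how all three can be fetched in one round trip with GitHub's GraphQL API. The `GITHUB_TOKEN` environment variable and the 100-issue page size are assumptions for illustration; a real run would paginate.

```python
import os
import requests

# One GraphQL query per repository: for every open issue, fetch the number
# of reactions on the issue description, the comment count, and the
# participant count.
QUERY = """
query($owner: String!, $name: String!) {
  repository(owner: $owner, name: $name) {
    issues(states: OPEN, first: 100) {
      nodes {
        number
        title
        reactions { totalCount }      # numReaction
        comments { totalCount }       # numComment
        participants { totalCount }   # numParticipant
      }
    }
  }
}
"""

def fetch_open_issues(owner, name):
    resp = requests.post(
        "https://api.github.com/graphql",
        json={"query": QUERY, "variables": {"owner": owner, "name": name}},
        headers={"Authorization": f"bearer {os.environ['GITHUB_TOKEN']}"},
    )
    resp.raise_for_status()
    return resp.json()["data"]["repository"]["issues"]["nodes"]
```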

When the script runs, it fetches the above metrics for all the open issues in the repository. Then, for each metric, it finds the issue with the highest value and uses that value as the benchmark for that metric. This way, every issue gets a score in the range [0, 1] for each of the 3 metrics. Finally, for each issue, we take a linear sum of its three metric scores, giving equal weight to each metric, to get the final priority score. Here is the mathematical explanation:

metric_benchmark = max(metric value across all open issues)
metric_score = metric_value_for_the_issue / metric_benchmark
priority_score = metric_score[numReaction] + metric_score[numComment] + metric_score[numParticipant]

metric_score ∈ [0, 1]
priority_score ∈ [0, 3]
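
Putting the formulas into code, a hypothetical `priority_scores` helper (a sketch, not the actual script) could normalize each metric against its benchmark and return the issues sorted from highest to lowest priority score. The fallback guard against dividing by zero when a metric is 0 for every issue is our assumption.

```python
METRICS = ["numReaction", "numComment", "numParticipant"]

def priority_scores(issues):
    # `issues` is the list returned by fetch_open_issues above.
    values = {
        issue["number"]: {
            "numReaction": issue["reactions"]["totalCount"],
            "numComment": issue["comments"]["totalCount"],
            "numParticipant": issue["participants"]["totalCount"],
        }
        for issue in issues
    }

    # metric_benchmark: the highest value of each metric across all open
    # issues. Falling back to 1 when every issue has 0 avoids dividing by
    # zero (an assumption; the real script may handle this differently).
    benchmarks = {
        metric: max((v[metric] for v in values.values()), default=0) or 1
        for metric in METRICS
    }

    # priority_score: equal-weight linear sum of the normalized metric
    # scores, so every issue lands in [0, 3].
    scored = [
        (number, sum(v[metric] / benchmarks[metric] for metric in METRICS))
        for number, v in values.items()
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

For example, `priority_scores(fetch_open_issues("dgraph-io", "dgraph"))` would yield (issue number, score) pairs, highest priority first.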

This measurement tells us the relative priority of all the open issues at the time the script is run, and that ranking is what feeds into our sprint planning.

Feel free to post any suggestions or improvements to this process; we would love to hear from you.

Thanks :coffee:


@abhimanyusinghgaur This is great. A couple of comments.

  1. We have always encouraged community feedback and will continue to do so.
  2. We have always used community feedback to prioritize our stories and what we work on. This includes publishing our roadmap for the next year, getting feedback on it, and planning our year based on that feedback.

The thing we are doing differently now is to “formalize” or “semi-automate” that process by defining some metrics to accurately measure the feedback and incorporating the results of those metrics into our sprint planning process.

I think if you say it this way, it not only emphasizes the new work we are doing but also values the past efforts and shows that we are constantly improving, not only our code but also our processes.

Thanks
