Software development metrics can help track how well your team is working together. Unlike measuring developer productivity, which focuses on the individual, the metrics in this article will help you track the progress of the entire team.
Development teams change over time, and with each change, the team's culture shifts. The metrics we're going to share act as a lagging indicator of how well your team is performing, in a way that studying individuals within an organization cannot.
1. Roadmap Items Achieved
Product teams work from a prioritized list called a roadmap. When your team is completing plenty of roadmap issues, you have confidence that the product is moving closer to achieving its product vision.
If the product vision is well thought out, and the roadmap is prioritized objectively, then completing roadmap items means completing the tasks that will benefit the company the most. Measuring this is a great way to show how development can impact a product.
The easiest way to measure this is to tag individual issues in GitHub or Jira with the associated roadmap item. Then, when reviewing completed work, you can count the roadmap items delivered.
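As a minimal sketch of that counting step, the snippet below tallies completed issues by roadmap label. The `roadmap:` label prefix and the issue data are illustrative assumptions, not any real tracker's schema:

```python
from collections import Counter

# Hypothetical completed issues, each tagged with labels.
# The "roadmap:" prefix convention is an assumption for this sketch.
completed_issues = [
    {"id": 101, "labels": ["roadmap:sso-login"]},
    {"id": 102, "labels": ["roadmap:sso-login", "bug"]},
    {"id": 103, "labels": ["roadmap:billing-v2"]},
    {"id": 104, "labels": []},  # not linked to the roadmap
]

def roadmap_items_achieved(issues, prefix="roadmap:"):
    """Count completed issues per roadmap item, keyed by label."""
    counts = Counter()
    for issue in issues:
        for label in issue["labels"]:
            if label.startswith(prefix):
                counts[label] += 1
    return counts

print(roadmap_items_achieved(completed_issues))
# Counter({'roadmap:sso-login': 2, 'roadmap:billing-v2': 1})
```

In practice you would pull the label data from the GitHub or Jira API rather than hard-coding it.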
This metric relies on the roadmap being kept up to date and on its items staying relevant to the product vision.
2. Deploys Made
The only work that matters is the work that can be put live and shown to your users. The deploys made metric measures how many times code written by the team has made it onto your servers for your users to see.
Work that goes into features that never make it in front of your users isn't always wasted. Indeed, there can be useful lessons learned from code that never gets deployed. However, your users are still the best judges of whether something is excellent or not. That's why measuring how many times we get things in front of them is useful.
Where you host your application will impact how you can measure this; platforms like the Apple App Store or Heroku keep records of each deploy to their servers.
Things can always go wrong, but pushing too many deploys without the appropriate quality checks is a way to ensure that you’ll run into problems. Be sure to temper this metric with some of the other software development metrics in this guide.
3. Cycle Time
Cycle time is the time it takes for an issue, like a feature request or a bug, to go from one state to another – normally from “being worked on” to “complete”.
Efficiently working a ticket through to completion requires multiple team members and excellent collaboration. A downward trend in how long tickets take shows that you're removing obstacles and encouraging collaboration.
The majority of issue trackers can timestamp when tickets were opened, assigned, worked on, and completed.
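Given those timestamps, cycle time is just the average gap between "being worked on" and "complete". Here is a minimal sketch with made-up ticket data; the field names are assumptions, not any particular tracker's API:

```python
from datetime import datetime
from statistics import mean

# Illustrative tickets: when work started and when it was completed.
tickets = [
    {"started": datetime(2023, 5, 1, 9), "completed": datetime(2023, 5, 3, 17)},
    {"started": datetime(2023, 5, 2, 10), "completed": datetime(2023, 5, 2, 16)},
]

def mean_cycle_time_hours(tickets):
    """Average time from 'being worked on' to 'complete', in hours."""
    durations = [
        (t["completed"] - t["started"]).total_seconds() / 3600
        for t in tickets
    ]
    return mean(durations)

print(round(mean_cycle_time_hours(tickets), 1))  # 31.0
```

Plotting this average per week or per sprint is what reveals the trend the text describes.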
It’s important to keep in mind that not all issues are created equal. Sometimes there are lots of “quick wins”, but there are also issues that require outside involvement and stretch over much longer periods.
4. Work In Progress
Measuring work in progress means tracking the number of tickets that your team currently has open.
This metric is particularly useful when you look at trends over time. Ideally it stays reasonably stable, with the team only opening new issues when they close old ones. If the number of tickets in progress creeps up over time, it can indicate that the team has too many blocked or unresolvable tickets.
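A minimal sketch of that trend calculation: count, for each day, the tickets that were open and not yet closed. The ticket records here are invented for illustration:

```python
from datetime import date, timedelta

# Illustrative tickets with open/close dates (None = still open).
tickets = [
    {"opened": date(2023, 5, 1), "closed": date(2023, 5, 4)},
    {"opened": date(2023, 5, 2), "closed": None},
    {"opened": date(2023, 5, 3), "closed": None},
]

def wip_on(day, tickets):
    """Number of tickets in progress on a given day."""
    return sum(
        1 for t in tickets
        if t["opened"] <= day and (t["closed"] is None or t["closed"] > day)
    )

start = date(2023, 5, 1)
trend = [wip_on(start + timedelta(days=d), tickets) for d in range(5)]
print(trend)  # [1, 2, 3, 2, 2]
```

A rising sequence here is the signal described above: work is being opened faster than it is being finished.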
Tools like Pivotal Tracker can track how many tickets are currently in progress.
This number can be skewed near the start or end of a body of work, for example, if you work in sprints. Always be sure to track over time to avoid this skewing.
5. Mean Time Between Failures
Issues on your application or website are inevitable. They come with the territory when you’re creating something new to serve your customers. Mean time between failures tracks the average length of time between failures of the system. In an ideal world, there would be a long time between failures.
Tracking this metric gives us good insight into the quality and stability of our system. If you can deliver new features while keeping this number high, then you’re moving fast without breaking things.
To track this metric, we need a way of tracking when things go wrong with the application. These details could be from uptime monitoring services, bug tracking systems, or user complaints. We plot all the issues over time and see the gaps in between them.
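Those gaps are straightforward to compute once the failure timestamps are in one place. A minimal sketch, assuming an illustrative list of failure times rather than any real monitoring feed:

```python
from datetime import datetime
from statistics import mean

# Illustrative failure timestamps gathered from monitoring and bug reports.
failures = [
    datetime(2023, 5, 1, 3, 0),
    datetime(2023, 5, 8, 14, 30),
    datetime(2023, 5, 20, 9, 0),
]

def mtbf_hours(failures):
    """Mean gap between consecutive failures, in hours."""
    ordered = sorted(failures)
    gaps = [
        (b - a).total_seconds() / 3600
        for a, b in zip(ordered, ordered[1:])
    ]
    return mean(gaps)

print(round(mtbf_hours(failures), 1))  # 231.0
```

The sort matters because failures arrive from several systems and won't necessarily be in chronological order.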
Compiling all of this information from different systems can be time-consuming, and the definition of failure needs to be consistent, or you’re going to have incomparable data points.
6. Mean Time To Repair
Knowing that failures are inevitable means that we could also track the mean time to repair. Mean time to repair means measuring how long it takes us to respond and fix the spotted failures. The shorter the number, the better.
To calculate the mean time to repair, we take the data we already have from calculating the mean time between failures and add when fixes for the issues were deployed.
You will need to pull data from your error logging systems and your deployment systems. Over time the team should be able to put automated fixes in for some of the failures, which will hopefully bring the mean time to repair down.
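Once each failure is paired with the deploy that fixed it, the calculation is a simple average. A sketch with invented incident pairs; in reality these would be joined from your error logging and deployment systems:

```python
from datetime import datetime
from statistics import mean

# Illustrative (failure detected, fix deployed) pairs.
incidents = [
    (datetime(2023, 5, 1, 3, 0), datetime(2023, 5, 1, 5, 30)),
    (datetime(2023, 5, 8, 14, 0), datetime(2023, 5, 8, 14, 45)),
]

def mttr_minutes(incidents):
    """Mean time from failure detection to the fix being deployed."""
    return mean(
        (fixed - detected).total_seconds() / 60
        for detected, fixed in incidents
    )

print(mttr_minutes(incidents))  # 97.5
```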
Make sure that the team is doing root cause analysis on issues. Rather than just getting good at fixing issues quickly when they arise, this helps them put safeguards in place so the issues don’t occur in the first place.
7. New Bugs Versus Closed Bugs
Developers love to joke about how closing one bug opens up fifty new ones, but there is an element of truth to this. When developers rush to fix one thing, they often neglect the knock-on effect it could have on other parts of the system. Measuring the number of new bugs versus closed bugs is an excellent software development metric for keeping an eye on this.
Knowing this metric over a given period allows you to determine if the quality of your product is improving over time. It also allows you to see if your quality assurance team is getting more or less rigorous at catching issues early.
During any given period, count how many closed tickets were bug fixes and how many newly opened tickets were bug reports. Most modern issue tracking systems will let you filter for this.
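As a minimal sketch of that comparison, the snippet below counts bugs opened versus bugs closed within a period. The ticket structure is an assumption for illustration:

```python
from datetime import date

# Illustrative tickets with a type and open/close dates (None = still open).
tickets = [
    {"type": "bug", "opened": date(2023, 5, 2), "closed": date(2023, 5, 5)},
    {"type": "bug", "opened": date(2023, 5, 10), "closed": None},
    {"type": "feature", "opened": date(2023, 5, 3), "closed": date(2023, 5, 9)},
    {"type": "bug", "opened": date(2023, 4, 20), "closed": date(2023, 5, 12)},
]

def new_vs_closed_bugs(tickets, start, end):
    """Bugs opened versus bugs closed within [start, end]."""
    opened = sum(
        1 for t in tickets
        if t["type"] == "bug" and start <= t["opened"] <= end
    )
    closed = sum(
        1 for t in tickets
        if t["type"] == "bug" and t["closed"] and start <= t["closed"] <= end
    )
    return opened, closed

print(new_vs_closed_bugs(tickets, date(2023, 5, 1), date(2023, 5, 31)))
# (2, 2)
```

A ratio consistently above 1 (more opened than closed) is the warning sign the text describes.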
Bugs don’t exist in a vacuum, and you should consider external factors when trying to work out why bug counts are going up. For example, perhaps the marketing team just started pushing a rarely-used feature.
8. Code Coverage
Code coverage means tracking how much of the code is covered by automated testing tools.
When code coverage is high, your developers can move at pace, knowing that if they change something and it breaks, they will be alerted to it. When code coverage is low, each change will need to be checked by a quality assurance person, slowing your team’s velocity.
Code coverage tools exist for most programming languages; SimpleCov, for example, is a popular tool for Ruby.
A lot of people mistakenly equate code coverage with code quality; this is not the case. It is purely a measure of how much of your code is exercised by a test suite. If the tests don’t exercise the system in a meaningful way, then it doesn’t matter how close to 100% coverage you get; bugs will still slip through.
9. Release Confidence
Release confidence is our first qualitative measure in this list. It is a confidence score, given by the team involved, for the next release to production. At its most basic, it would be a poll asking “On a scale of 1 to 10, how confident are you in the next release we’re going to deploy?”
This figure acts as a great jumping-off point for a conversation about why release confidence is low, or why release confidence doesn’t align with the success of the release.
A simple poll shared near the end of a body of work is enough to capture what is needed. This could be done via Slack, or as part of an automated stand-up.
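Summarising the poll is simple arithmetic; a sketch with invented responses, which also surfaces the low scores worth following up on:

```python
from statistics import mean

# Illustrative 1-10 poll responses collected via Slack or a stand-up bot.
responses = [8, 9, 6, 7, 9, 4]

def release_confidence(responses):
    """Average score, plus the low scores worth a follow-up conversation."""
    avg = round(mean(responses), 1)
    concerns = [r for r in responses if r <= 5]
    return avg, concerns

print(release_confidence(responses))  # (7.2, [4])
```

The threshold of 5 for a "concern" is an arbitrary choice for this sketch; pick whatever cut-off prompts the right conversations on your team.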
As with all qualitative data, taking the number at face value and not digging into the “why” behind the number would be a missed opportunity.
10. Net Promoter Score
Net Promoter Score, or NPS, is a well-regarded metric to gauge how satisfied your customers are and how likely they are to recommend you.
By tracking this metric over time, you can see whether development work is having a mostly positive or negative effect on the product. Bugs make people less likely to recommend you; exceptional features make them more likely to.
Tracking net promoter score requires talking to your customers directly. Survey Monkey has written an excellent guide to this that shows you how you can create the survey.
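The NPS calculation itself is standard: respondents score 0-10, promoters (9-10) and detractors (0-6) are counted, and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch with invented survey data:

```python
# Illustrative survey responses on the standard 0-10 NPS scale.
scores = [10, 9, 9, 8, 7, 6, 3, 10]

def nps(scores):
    """Standard NPS: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

print(nps(scores))  # 25
```

The result ranges from -100 (all detractors) to +100 (all promoters); passives (7-8) count toward the total but neither side.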
Features are not the only thing that impacts net promoter score, so with any survey, tailor it to talk about the specific things that are relevant for your team.
What Changes Have The Biggest Impact On Team Efficiency?
We’ve shared these software development metrics to help you measure team efficiency. Remember, the goal of measurement should always be to inspire positive change.
We’d love to know what changes you’ve made to help improve your team’s efficiency. Let us know in the comments below.