General overview of agile metrics

Background

This page covers general information about agile metric recommendations, typical metrics, and some examples. It is intended to be a starting point for more conversation within your business.

Personas

To put our recommendations into perspective, we need to understand the personas in play and their motivations and areas of concern. We can then identify the top concerns for each persona and relate the reports to answering these concerns.
For this article, we’ve selected the Delivery Manager persona, which focuses on the agile delivery component of the program. This persona typically manages a program of work across many squads or agile teams. This persona is concerned with three main areas:
  1. Quality
  2. Commitment & predictability
  3. Lead time

Metrics

When you balance these metrics with other business metrics (not covered here), you can better assess the program overall. All metrics need to align with the program and portfolio’s goals, objectives, and OKRs.
Where appropriate, use metrics that show trends over time. Without trends, you cannot tell whether you are improving, on track, or going to miss your goals.
To protect team morale and prevent adverse team behavior, avoid using metrics to call out teams below target. Otherwise, you may notice results like:
  • Teams may game or inflate metrics to look “better” without achieving your program goals.
  • Teams may become competitive and even avoid working collaboratively, increasing wait times.

Recommendations

We will focus on the three key concerns identified above: quality, commitment, and lead time.

Quality

Key concerns:
  • Quality: Is the quality trending up or down?
  • Balance: Are teams balancing between feature work and defect work effectively?
  • Detection: Are we identifying defects early in the capability workflow? Do the defects predict lower quality in delivery, for example, more defects raised after go-live (illustrated in the sketch after this list)?
Other metrics not covered here could contribute to the definition of “quality”:
  • Automated tests: do you track tests passed or code coverage as a quality metric?
  • Support tickets: do you track severities raised by end users?
  • Production downtime
  • Customer satisfaction
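
To make the detection concern concrete, here is a minimal sketch of a pre- versus post-go-live defect split. The defect records and the go-live sprint are hypothetical placeholders, not data from a real tracker:

```python
# Minimal sketch of the detection concern: classify defects by when
# they were raised relative to go-live and report the split. A rising
# post-go-live share suggests defects are escaping the capability
# workflow. All records below are illustrative.
GO_LIVE_SPRINT = 42  # hypothetical go-live point

defects = [
    {"key": "DEF-1", "raised_in_sprint": 40},
    {"key": "DEF-2", "raised_in_sprint": 41},
    {"key": "DEF-3", "raised_in_sprint": 43},
    {"key": "DEF-4", "raised_in_sprint": 44},
    {"key": "DEF-5", "raised_in_sprint": 41},
]

before = sum(d["raised_in_sprint"] <= GO_LIVE_SPRINT for d in defects)
after = len(defects) - before

print(f"raised before go-live: {before} ({before / len(defects):.0%})")
print(f"raised after go-live:  {after} ({after / len(defects):.0%})")
```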

Example reports

Code coverage
This example is from SonarQube, a popular code quality tool. Code coverage is one of the quality measures it can report on your codebase. This report shows the trend of code coverage over time. This information could be visualized alongside defect detection or major support events to determine if there is any correlation.
[Image: code coverage trend report]
Source: SonarCloud
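
If you want to pull this trend programmatically rather than read it off the dashboard, a minimal sketch against the SonarQube/SonarCloud measures API follows. The project key and token are placeholders you must supply, and error handling is kept to a minimum:

```python
# Minimal sketch: fetch a project's code coverage history from the
# SonarCloud web API and print a crude trend signal. PROJECT_KEY and
# TOKEN are hypothetical placeholders.
import requests

PROJECT_KEY = "my-org_my-project"  # hypothetical project key
TOKEN = "your-api-token"           # hypothetical API token

resp = requests.get(
    "https://sonarcloud.io/api/measures/search_history",
    params={"component": PROJECT_KEY, "metrics": "coverage"},
    auth=(TOKEN, ""),  # the token is passed as the Basic-auth username
)
resp.raise_for_status()

# Keep only points that carry a value; some dates may have none.
history = [p for p in resp.json()["measures"][0]["history"] if "value" in p]
for point in history:
    print(f"{point['date'][:10]}  coverage = {point['value']}%")

# A crude trend signal: compare the first and last data points.
first, last = float(history[0]["value"]), float(history[-1]["value"])
print("coverage is trending", "up" if last >= first else "down")
```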
Flow distribution
A report that correlates more information, like the flow distribution diagram below (from Atlassian’s Project Aspen), brings more context: it relates the number of defects to the team’s capacity and its ability to work on features. It also visualizes the knock-on effect of a heavy load of severity 1 defects that the team must drop everything to address.
[Image: flow distribution diagram]
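
The underlying calculation is simple enough to sketch. Assuming you can export completed work items tagged with a flow type (the records below are illustrative, not from a real tracker), each type’s share per iteration is just a grouped count:

```python
# Minimal sketch of a flow-distribution calculation: given completed
# work items tagged with a flow type (feature, defect, debt, ...),
# compute each type's share of the work finished per iteration.
from collections import Counter, defaultdict

completed = [
    {"sprint": "Sprint 41", "type": "feature"},
    {"sprint": "Sprint 41", "type": "feature"},
    {"sprint": "Sprint 41", "type": "defect"},
    {"sprint": "Sprint 42", "type": "defect"},
    {"sprint": "Sprint 42", "type": "defect"},
    {"sprint": "Sprint 42", "type": "feature"},
    {"sprint": "Sprint 42", "type": "debt"},
]

by_sprint = defaultdict(Counter)
for item in completed:
    by_sprint[item["sprint"]][item["type"]] += 1

for sprint, counts in sorted(by_sprint.items()):
    total = sum(counts.values())
    split = ", ".join(f"{t}: {n / total:.0%}" for t, n in counts.most_common())
    print(f"{sprint}: {split}")
```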

Commitment

Key concerns:
  • Delivery commitment: Are teams delivering what they’ve committed to during a Program Increment (PI)?
  • Predictability: Is the predictability trending up or down over time?

Example reports

PI tracking
[Image: Jira Align PI status dashboard]
This is one example of a PI status dashboard from Jira Align. It’s similar to the PI Baseline Movement report, except it adds a graph that visually charts delivery velocity (this example shows added, but not removed, scope).
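
A minimal sketch of the scope-movement part of such a dashboard: compare the set of items committed at PI planning with the current PI contents. The issue keys below are hypothetical:

```python
# Minimal sketch of PI scope tracking: diff the set of work items
# committed at PI planning against the current PI backlog to surface
# added and removed scope, similar in spirit to a baseline-movement
# report. The issue keys are placeholders.
committed_at_pi_start = {"ABC-101", "ABC-102", "ABC-103", "ABC-104"}
current_pi_scope      = {"ABC-101", "ABC-102", "ABC-104", "ABC-107", "ABC-109"}

added   = current_pi_scope - committed_at_pi_start
removed = committed_at_pi_start - current_pi_scope

print(f"scope added:   {sorted(added)}")
print(f"scope removed: {sorted(removed)}")
print(f"net scope change: {len(added) - len(removed):+d} items")
```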
Team predictability
Many managers focus on velocity reports across their teams to report on predictability.

Comparing story points between teams is generally problematic across teams of teams because of the variety in how teams operate. Unless each team has the same skill sets, works on deliverables of similar effort, and estimates, develops, tracks, and tests in exactly the same way, you will not get a valid comparison.

Be careful about comparing teams’ velocity values against one another. A difference is not a problem in itself; it just means the teams are estimating differently.

A way to view predictability is to abstract away from estimation methods and focus on work items and their throughput. For a highly predictable team, the report below should hover around zero: they finish what they start.
[Image: throughput-based predictability report]
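
The metric behind such a report can be sketched as a net flow per iteration: items started minus items finished, with values near zero indicating the team finishes what it starts. The counts below are illustrative:

```python
# Minimal sketch of the "started minus finished" view of
# predictability: net flow near zero per iteration means the team
# finishes roughly what it starts, regardless of how it estimates.
iterations = [
    {"name": "Sprint 41", "started": 12, "finished": 11},
    {"name": "Sprint 42", "started": 10, "finished": 12},
    {"name": "Sprint 43", "started": 11, "finished": 10},
]

for it in iterations:
    net_flow = it["started"] - it["finished"]
    print(f'{it["name"]}: net flow {net_flow:+d}')
```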

Lead Time

Key concerns:
  • Capability lead time: What are the lead times for capability delivery over time?
  • Responsiveness: How are the cycle times trending for the program?
  • Identifying blockers: Which statuses are being blocked, for how long, and why?

Example reports

Responsiveness
An example of this is the control chart available natively in Jira. It shows a rolling average of lead or cycle time (depending on your workflow) for issues over time and visualizes clusters.
[Image: Jira control chart]
While this is typically a team-level report, a kanban board filtered down to epics will produce the same report at the epic level.
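
The rolling average a control chart plots is straightforward to reproduce if you have cycle times per completed issue. A minimal sketch with illustrative durations and a trailing five-item window:

```python
# Minimal sketch of the rolling average behind a control chart: cycle
# time per completed issue (in days, in completion order), smoothed
# with a trailing window. The sample durations are illustrative.
cycle_times_days = [3, 5, 4, 8, 2, 6, 7, 3, 9, 4]
WINDOW = 5

for i in range(len(cycle_times_days)):
    window = cycle_times_days[max(0, i - WINDOW + 1): i + 1]
    avg = sum(window) / len(window)
    print(f"issue {i + 1}: cycle time {cycle_times_days[i]}d, "
          f"rolling avg {avg:.1f}d")
```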
Blockers
Similar to the control chart, Jira's native cumulative flow diagram is handy for visualizing where blockers are.
[Image: Jira cumulative flow diagram]
Again, this is typically a team-level report, but you can use it on a kanban board filtered to epics.
This is especially handy if wait times are defined in the workflow by status. It can then be correlated with the control chart over the same period to show what impact wait-time blockers had on lead time.
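
If your workflow defines wait states as statuses, time-in-status can be computed directly from an issue’s changelog. A minimal sketch for a single issue, with hypothetical statuses and timestamps:

```python
# Minimal sketch of time-in-status from a workflow changelog: sum the
# time an issue spends in statuses flagged as wait states. The
# statuses and timestamps below are illustrative.
from datetime import datetime

WAIT_STATUSES = {"Blocked", "Waiting for review"}

# (timestamp, status entered) transitions for one issue, oldest first
transitions = [
    (datetime(2024, 3, 1, 9, 0), "In Progress"),
    (datetime(2024, 3, 3, 14, 0), "Blocked"),
    (datetime(2024, 3, 6, 10, 0), "In Progress"),
    (datetime(2024, 3, 7, 16, 0), "Done"),
]

wait = sum(
    ((transitions[i + 1][0] - when).total_seconds()
     for i, (when, status) in enumerate(transitions[:-1])
     if status in WAIT_STATUSES),
    0.0,
)
print(f"time in wait statuses: {wait / 86400:.1f} days")
```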
Both Jira reports can have their time scale customized and correlated, so you can view this information over a specific PI to monitor it, or over a longer period for longer-term trends.
The reports only tell you that there are blockers, not the cause. Dependencies are often a big reason for long wait times, so it’s important to be able to visualize these.
Some options include these reports from Jira Align:
[Images: two Jira Align dependency reports]
They show similar information in different ways, but both ultimately intend to highlight clusters of dependencies so that you can investigate them. Advanced Roadmaps has an early access feature (which will roll into the core product soon) for dependency mapping between issues. It’s a high-level, interactive dependency map that lets managers filter up or down depending on the view they need.
[Image: Advanced Roadmaps dependency map]
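
Even without these tools, a first pass at spotting dependency clusters only needs the link data. A minimal sketch that counts inbound and outbound “blocks” links per issue; the link pairs are hypothetical:

```python
# Minimal sketch of dependency clustering: given (blocker, blocked)
# link pairs extracted from issue links, count inbound and outbound
# dependencies so heavily depended-on items stand out. The pairs
# below are placeholders.
from collections import Counter

links = [
    ("ABC-101", "ABC-204"),
    ("ABC-101", "ABC-205"),
    ("ABC-101", "ABC-301"),
    ("XYZ-77", "ABC-204"),
    ("XYZ-78", "XYZ-80"),
]

inbound = Counter(blocked for _, blocked in links)
outbound = Counter(blocker for blocker, _ in links)

print("most blocked items:", inbound.most_common(3))
print("biggest blockers:  ", outbound.most_common(3))
```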

Summary

More notes on “balanced” metrics:
  • Team behavior will change as a result of focusing on specific metrics. A better approach is to view overall progress against program goals holistically. For example, consider different types of metrics together, like productivity, quality, responsiveness, and predictability.
  • Letting one area slip, or focusing too much on one, will unbalance the rest.
  • Don’t get too hung up on detailed metrics in general. Getting them “mostly” right should steer you in the right direction.
Some final thoughts on “leading” versus “lagging” metrics:
  • Leading metrics can predict a result for you (for example, whether you will achieve your OKRs). Lagging metrics are often taken afterward or as a point-in-time measure (for example, for status reporting).
  • Combining these two vantage points will allow you to steer and track your OKRs better than focusing on one or the other.
  • The Atlassian “Goals, Signals and Measures” play will help you define what these signals and measures should be for your team, program, and business. We highly recommend running this play at the start of the process to determine your goals and metrics.

Next steps

Again, we recommend walking through the “Goals, Signals and Measures” play mentioned above. This page has described some recommendations and possible lenses through which to view your reports. But it is worth running the play to take a step back and ensure that your goals, signals, and chosen metrics align with one another.
