Been collecting data on the gap between postmortems and actual follow-through.
Average postmortem generates 3-7 action items. Fewer than 40% get completed within 90 days. 60% never get completed at all. "Twenty action items means zero action items. Teams will complete two and forget the rest."
The failure mode is always the same: incident happens, postmortem written, action item says "add regression test for X." Item sits in the backlog. Next sprint starts. Feature work wins. Test never gets written. Same incident 3 months later. Rinse, repeat.
Ben Treynor Sloss (Google): "To our users, a postmortem without subsequent action is indistinguishable from no postmortem."
The hard part isn't knowing what broke. It's that "add a test" always loses to feature work in sprint planning. Always.
What if the regression test were created as part of incident response, not filed as a backlog item? Root cause identified, test describing the failure scenario written right then, merged before the incident is closed. Not a ticket for next sprint. Part of the resolution.
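Concretely, the test that ships with the fix doesn't have to be elaborate. A minimal sketch of what "part of the resolution" could look like, using an entirely hypothetical incident (the incident ID, function, and bug below are illustrative, not from any real postmortem):

```python
# Hypothetical: incident INC-1234 — checkout rejected discount codes
# entered with trailing whitespace. The fix and this regression test
# land in the same PR, before the incident is closed.

def normalize_discount_code(raw: str) -> str:
    """Fix applied during incident response: strip and uppercase codes."""
    return raw.strip().upper()


def test_inc_1234_trailing_whitespace_code():
    # The exact failing input pulled from the incident's logs.
    assert normalize_discount_code("save20 ") == "SAVE20"


def test_inc_1234_mixed_case_code():
    # A nearby variant caught while writing the first test.
    assert normalize_discount_code(" Save20") == "SAVE20"
```

The point isn't the test itself, it's the timing: the failing input is still fresh, the engineer who found the root cause writes the test, and closing the incident is gated on it merging.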
How does your team handle this? Does "add a regression test" actually get done, or does it rot in the backlog?
Catch risky product regressions in the PR, with the recording, screenshots, and step trace engineers need to fix them.
Get a demo