To repurpose the quote above, what if the portions were really large? I could see that being a place that some people might go to frequently. The food stinks, but man do you get a lot of it. That is how many teams operate without realizing it. It is so easy to produce a lot of work without any of it actually being valuable to customers.
So what can we do to avoid producing a lot of food that stinks?
We have to measure what we're doing.
KPIs are essential for understanding the impact of our work. Without them, we really have no idea if what we're working on has any value. And focusing on the wrong KPIs is a great way to serve up a lot of food, but it may not be the kind of food that our target audience wants to eat.
For a while when I first started in my current role, we didn't have well-defined KPIs. And we got whiplash because of it. It was incredibly easy for anyone to come to our team and request new products and features under the guise of "improving satisfaction" and "better experience". Those things are incredibly important. But how will this feature improve satisfaction or create a better experience? How do we know for sure? How will we measure it? Too often those questions went unanswered and we were left having certain decisions made for us by whoever could yell the loudest or get the most attention from higher-ups (not to say that isn't still an ongoing struggle, but let's save that for another post).
So establishing meaningful KPIs was crucial in beginning to wrangle stakeholders and partners into legitimate discussions around outcomes. If we believed a new feature would improve efficiency or satisfaction or outcomes, how were we going to measure it? What did we expect, and how could we validate it?
Performance indicators also tie the overall company strategy back to our products. At WGU, the mission is to provide affordable, competency-based education to underserved groups. The measures of that all revolve around student outcomes: making progress toward a degree, overall satisfaction, keeping students enrolled and engaged, etc. So the measures for products need to stay focused on those overall goals and the university strategy.
The key performance indicators I created fall into four groups: student outcomes, student satisfaction, student issues, and staff efficiency (you can replace "student" with "customer" in your own KPIs if that's useful to you). For new features to pass muster, they have to move the needle in one or more of those areas.
Take student satisfaction, for example: we measure it through a variety of surveys. Since my team's focus is the assessment experience, and students are given a survey after each online assessment, we can measure quite accurately whether the changes we make affect their overall satisfaction.
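To make that concrete, here is a minimal sketch of comparing post-assessment satisfaction before and after a change. The scores below are made-up illustrations, not actual student data.

```python
# Hypothetical post-assessment survey scores on a 1-5 scale;
# purely illustrative, not real survey results.
from statistics import mean

before = [4.1, 3.8, 4.0, 3.9, 4.2]  # scores before the feature change
after = [4.4, 4.3, 4.1, 4.5, 4.2]   # scores after the feature change

# The KPI is simply the shift in average satisfaction.
delta = mean(after) - mean(before)
print(f"Satisfaction shift: {delta:+.2f} points")
```

In practice you would also want enough responses to rule out noise (a significance test, for instance), but the core measurement is just this delta.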
Another example is staff efficiency. We know very precisely how many issues our support staff deal with and how long it takes to resolve them. So we can measure a decrease in issues and translate that into time spent troubleshooting and solving problems. If our features are truly impactful, we can quantify both the drop in issue volume and the staff hours it saves.
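The arithmetic behind that efficiency KPI is simple: multiply the drop in issue volume by the average handling time. A quick sketch, using made-up figures rather than our actual support data:

```python
# Hypothetical staff-efficiency calculation; all numbers are
# invented examples, not real support metrics.

def hours_saved(issues_before, issues_after, avg_minutes_per_issue):
    """Estimate staff hours saved from a reduction in support issues."""
    fewer_issues = issues_before - issues_after
    return fewer_issues * avg_minutes_per_issue / 60

# Suppose a release cuts monthly issues from 400 to 250, and each
# issue takes about 45 minutes to troubleshoot and resolve.
saved = hours_saved(issues_before=400, issues_after=250, avg_minutes_per_issue=45)
print(f"Estimated staff hours saved per month: {saved:.1f}")
```

The point is less the code than the habit: if a feature claims to improve efficiency, you should be able to state the before/after numbers it will move.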
Establishing the right KPIs and measuring against them is difficult and can be time-consuming. Sometimes it can be incredibly difficult to find the right measure. On my team, we deliver assessments to students and evaluate them. But I don't think the number of assessments we deliver is necessarily a good KPI. It is easy to measure, yes, but would delivering more assessments mean anything? Possibly, but because that would be an easy rabbit hole to go down without adding value, it's not currently one of our KPIs.
Establishing the most important indicators up front is one of the keys to knowing what to work on and how well you've done solving underlying issues. It isn't easy. It takes time and effort. And patience. We may not always see immediate pay-offs. But having created the right KPIs initially, and staying focused on measuring and improving, we can avoid making a banquet of terrible food.
Some teams have backlog refinement (or grooming) down to a science. Other teams, not so much. Either they don't even try, or it is a time that is more or less wasted since there is no clear direction or purpose.
I've been there. We've probably all been there. And if you haven't, I suppose you can stop reading now. But for anyone who currently finds their refinement efforts lacking, here is something to try. Even if things are going well, it might help you focus further. While contemplating how to make our own refinement meetings more purposeful, here is what I came up with.
FUSE. As we examine each story in our backlog, along with the epics they belong to, I've asked the team to think about four things to ensure each story is ready for prime time.
Focused: Is each story appropriately focused? As we build out stories, descriptions, and acceptance criteria, there is always the possibility that we've included too much in a given story. So we ask the team to help us take a step back and make sure the story is focused enough and doesn't need to be broken down further.
Understandable: Is the story understandable? What questions remain? This is an area where I always need the team's help. I understand the story since I put it into the backlog, and I generally know the entire backstory, the business need, and the user perspective. But the team may not have that full picture. This is a great time not only to explain everything, but also to make sure all of it is captured in the story.
Sized: What is the size of the story? Of course, no refinement exercise is really complete if we haven't estimated a size for a given story. Once the story is understood and broken out sufficiently, we need to size it so we have an idea where it will fit into one of our sprints. Of course, sizing is a separate topic in itself, so we'll leave that for another post.
Expounded: Finally, is the story fully expounded? This ties in closely with being understandable, but I think it deserves its own section. We need to make sure appropriate detail is there, and doing this jointly with the development team is crucial: their input belongs in the story itself.
Expounding also extends to the epic or version as well. Our refinement meetings are great opportunities to ensure that the stories we've written include all the necessary items. Hopefully that is the case, but it's never a bad idea to pause and think again about all the stories together to ensure nothing has been overlooked.
As we've implemented this "checklist" into our refinement meetings, it has vastly improved our productivity. It has given our team the ability to focus on what we need to get done and given us direction. We no longer meander through stories but have a purpose in analyzing everything. So give it a try and let me know what other practices you use to get the most out of your refinement meetings.