Second part of a guest post from Mauro Bagnato about organizational metrics. Enjoy!
This “clarification workshop” was arranged in the following way:
Step 1. Learning & sharing. 1 hour.
The goal of the first timeslot was to introduce a new way of looking at metrics and to share insights and reflections. Definition, purpose, risks, and incentives were on the agenda.
Step 2. Understanding why. 30 minutes.
The second timeslot was dedicated to the WHY. We tried to answer the question: “Why do we need metrics?” After a very interesting and intense brainstorming session, the leadership team came up with this answer: “We need metrics because we want to understand if we’re improving.” A good start: clarifying the purpose helps trace the direction toward the metrics we really need.
Step 3. Understanding what. 30 minutes.
The third timeslot focused on the OBJECT OF MEASUREMENT. Now the question was: “What are we going to measure?” Starting from the why found in the previous step, we simply needed to measure improvement. Unfortunately, the word improvement was far too vague to measure unless we could turn it into something tangible. “What does improvement mean for us?” or, better: “Suppose we could clone our organization and instill a massive dose of improvement in the clone, while holding the amount constant in the original. What change would you actually expect to observe in the cloned organization?” Those questions triggered an interesting discussion that ended with this statement: if we want to understand whether our organization is improving, we need to observe certain dimensions: delivered value, external perception, learning, innovation, and climate. Now we had five objects of measurement, even if they still needed further clarification before they could be measured.
Step 4. Metrics definition. Two one-hour iterations.
The fourth timeslot focused on turning each dimension into a metric. To facilitate the discussion and work on more items in parallel, I split the leadership team into two groups and asked each to go through a four-phase discussion. The following picture describes the proposed logical path.
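For reference, the path can also be written down as an ordered checklist. This is only a rough paraphrase of the phases discussed below, not a transcription of the diagram:

```python
# The four-phase path as an ordered checklist (phase names and guiding
# questions paraphrased from the discussion below, not from the diagram).
logical_path = [
    ("further clarification", "What does this dimension actually mean for us?"),
    ("metric derivation",     "Which number falls out of that tangible meaning?"),
    ("decision check",        "Which decision does this measure inform?"),
    ("incentive check",       "Which behavior will this metric encourage?"),
]

for step, (phase, question) in enumerate(logical_path, start=1):
    print(f"Phase {step}: {phase} -- {question}")
```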
The main idea was this: if we can turn each dimension into something truly tangible, the metric should follow directly. For example, if innovation meant only generating ideas, we could assume that measuring the number of ideas would tell us how innovative we are. Thus we must understand the meaning of innovation first. In other words, the further clarification phase (see the picture) means repeating the actions of step 3 until the vague concept of innovation becomes tangible.
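To make the derivation concrete, here is a minimal sketch in Python, assuming (purely for illustration) that “innovation” has been clarified down to “ideas generated per quarter”; the idea log and its fields are hypothetical:

```python
from datetime import date

# Hypothetical idea log; titles and dates are made up for illustration.
ideas = [
    {"title": "self-service deploys",    "logged_on": date(2016, 1, 12)},
    {"title": "pair-rotation schedule",  "logged_on": date(2016, 2, 3)},
    {"title": "customer shadowing days", "logged_on": date(2016, 4, 25)},
]

def ideas_in_quarter(ideas, year, quarter):
    """Count ideas logged in a given quarter -- the metric that falls out
    once "innovation" has been clarified down to "generating ideas"."""
    months = range(3 * (quarter - 1) + 1, 3 * quarter + 1)
    return sum(1 for idea in ideas
               if idea["logged_on"].year == year
               and idea["logged_on"].month in months)

print(ideas_in_quarter(ideas, 2016, 1))  # -> 2 (the April idea falls in Q2)
```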
A deep clarification of the object of measurement should lead us to the metrics, but how can we check whether those metrics are really good? Did we consider all the aspects? Did we miss something? The next “control” phases in the path help answer those questions. Before going deeper into those phases, a few reflections are needed. Let’s start from the assumption that measuring means getting information, and that getting information requires a certain investment. The size of this investment should be tied to the importance of the information. Since the reason we collect information is to reduce the uncertainty around a certain decision, the importance of the information is directly related to the relevance of the decision we need to make (in a financial context, reducing the uncertainty around an investment decision could save a lot of money). Starting from these considerations, phase 3 requires answering the question: “Which decision does this measure inform?” or, put differently, “Do we really need this measure?”
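In code, phase 3 boils down to a filter: a candidate metric survives only if someone can name the decision it informs. A minimal sketch, with hypothetical metric and decision names:

```python
# Each candidate metric is paired with the decision it is meant to
# inform; None means nobody could name one. All names are hypothetical.
candidates = [
    {"metric": "ideas generated per quarter", "decision": "fund innovation time"},
    {"metric": "lines of code written",       "decision": None},
    {"metric": "customer feedback score",     "decision": "adjust support staffing"},
]

# Phase 3 filter: drop any metric that does not inform a real decision.
kept = [c for c in candidates if c["decision"] is not None]

for c in kept:
    print(f'{c["metric"]} -> informs: {c["decision"]}')
```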
The last “control” phase comes at the end of the chain. Setting a metric inevitably influences people’s behavior, in ways that may or may not be the intended outcome. Here is an example of the side effects, or incentives, a metric may produce. Assume that the metric used to monitor a help desk’s performance is the number of handled calls. This metric may encourage help-desk workers to end each call as soon as possible without solving the problem. In this case the real outcome (the side effect) of the metric is far from the expected benefit. If, on the other hand, the metric were customer feedback collected at the end of the call, help-desk workers would be encouraged to give their customers the best possible service.
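The same example can be sketched with made-up numbers. Suppose one agent closes calls quickly without solving them, while another takes longer but solves them; the two metrics rank the agents in opposite order:

```python
# Hypothetical call logs: (solved, feedback score 1-5) for each call.
calls = {
    "agent_fast":     [(False, 1), (False, 2), (True, 4), (False, 1), (False, 2)],
    "agent_thorough": [(True, 5), (True, 4), (True, 5)],
}

for agent, log in calls.items():
    handled = len(log)                                        # metric A: handled calls
    avg_feedback = sum(score for _, score in log) / len(log)  # metric B: feedback
    print(f"{agent}: handled={handled}, avg_feedback={avg_feedback:.1f}")

# agent_fast wins on handled calls (5 vs 3) but loses on feedback
# (2.0 vs 4.7): which metric we pick decides which behavior we reward.
```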
It was interesting to see that only a few of the metrics identified at the beginning of the path passed the final control phases!
The outcome of this intense, tiring, but interesting workshop was a set of metrics covering only two of the dimensions. Yes, we didn’t manage to complete all the work, but we learnt together how to tackle the metrics problem in a different and structured way. We found out how to solve the metrics enigma!
It was a collaborative effort by the whole leadership team, and it achieved the fundamental result of building a shared vision around the metrics.