Metrics for Service Management:
2 Managing, metrics and perspective
As with a lot of folklore, there are wise sayings on both sides of the question of how to use metrics as part of management:
'You can't manage what you can't measure' [attributed to Tom DeMarco, developer of Structured Analysis]
'A fool with a tool is still a fool' [attributed to Grady Booch, developer of the Unified Modeling Language]
Both of these are true. Managing requires good decision-making, and good decision-making requires good knowledge of what is to be decided. ITIL®'s concept of Knowledge Management is designed to avoid the pitfall of acting on raw data without the knowledge needed to interpret it.
Relying simply on numbers given by metrics, with no context or perspective, can be worse than having no information at all, apart from 'gut feel'. Metrics must be properly designed, properly understood and properly collected, otherwise they can be very dangerous. Metrics must always be interpreted in terms of the context in which they are measured in order to give a perspective on what they are likely to mean.
To give an example: a Service Manager might find that the proportion of emergency changes to normal changes has doubled. With just that information, most people would agree that something has gone wrong: why are there suddenly so many more emergency changes? That conclusion could be correct, but consider some alternative explanations:
- If the change process is new, this may reflect the number of emergency changes that the organization actually requires more accurately. Previously these changes might have been handled as ordinary changes without proper recognition of the risk.
- In a mature organization, a major economic crisis might have intensified the risk of a number of previously low-risk activities. It would then be proper for the Service Manager to recognize changes related to these activities and treat them as emergency changes.
- The change management process might have been improved substantially in the current quarter: so many ordinary changes have been converted into standard changes that the number of normal changes has halved. The number of emergency changes has stayed exactly the same, but the ratio has doubled because of the tremendous improvement in the change process.
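The arithmetic behind the last scenario can be sketched as follows. This is a minimal illustration with hypothetical figures (the book gives no actual counts): the emergency change count is unchanged, yet halving the normal change count doubles the ratio.

```python
def emergency_ratio(emergency: int, normal: int) -> float:
    """Proportion of emergency changes relative to normal changes."""
    return emergency / normal

# Hypothetical baseline quarter: 10 emergency changes, 200 normal changes.
baseline = emergency_ratio(10, 200)

# After process improvement: half the normal changes have become standard
# changes, so only 100 normal changes remain; emergency count is unchanged.
improved = emergency_ratio(10, 100)

# The ratio has doubled even though nothing went wrong.
assert improved == 2 * baseline
print(f"baseline {baseline:.2f}, after improvement {improved:.2f}")
```

The point is not the code itself but that the metric's numerator and denominator move for independent reasons, which is exactly why the raw ratio cannot be interpreted without context.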
Even a very simple and apparently uncontroversial metric can mean very different things. As with most management, there is no 'silver bullet'. Metrics must be properly understood, within context, in order to be useful tools. To ensure that they are understood, metrics must be designed. For best results, service metrics should be designed when the Service itself is designed, as part of the Service Design Package, which is why the 'Design' section in this book is the largest.
The metric template used in this book includes the field 'Context' specifically to allow each metric to be properly documented so that, when it is designed, the proper interpretation and possible problems with it can be taken into account. The design of a metric is not simply the measure and how it is taken; it must also make it clear how it will be reported and how management will be able to keep a perspective on what it means for the business - particularly its contribution to measuring value.
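A metric design of this kind could be captured as a simple record. This is an illustrative sketch only: the book's template names the 'Context' field, and the other field names here are hypothetical, drawn from the design concerns mentioned above (the measure and how it is taken, how it will be reported, and its contribution to measuring value).

```python
from dataclasses import dataclass

@dataclass
class MetricDesign:
    """Hypothetical record for documenting a metric at design time."""
    name: str
    measure: str              # what is measured and how the measure is taken
    context: str              # interpretation notes and known pitfalls
    reporting: str            # how and to whom the metric will be reported
    value_contribution: str   # its contribution to measuring business value

ratio_metric = MetricDesign(
    name="Emergency-to-normal change ratio",
    measure="Count of emergency changes / count of normal changes per quarter",
    context=(
        "A rising ratio may reflect process maturity (normal changes "
        "reclassified as standard) rather than instability; read alongside "
        "counts of standard changes."
    ),
    reporting="Quarterly, to the change advisory board",
    value_contribution="Indicates risk exposure and change-process maturity",
)
```

Documenting the context alongside the measure itself makes the possible misreadings, like those in the emergency-change example, part of the metric's definition rather than an afterthought.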
This is also a reason why the number of metrics deployed must be kept as small as possible (but not, as Einstein put it, 'smaller'!). Metrics must also be designed to complement each other. In the example above, the ratio between emergency and normal changes is an important and useful one to measure, but it could be balanced by measuring the number of standard changes, the business criticality of changes and, perhaps, the cost of changes.
These would all help to embed the metric into a context that a