PPL Perspectives: Why do evaluations go wrong (part 1)? Common risks

posted 28 September 2017

By Vish Valivety, Consultant at PPL


Evaluations are critical, not just to track performance and measure success, but also to inform decisions and drive improvements to your service. At PPL, we have helped organisations across the country develop evaluations ranging from single-service pilots to larger Vanguard programmes. Every evaluation is different in its own way, but there are strong similarities between them too, including what makes them succeed and what stops them making an impact.

While most people consider evaluations to be a 'must-do', they are often treated as an afterthought or a check-box exercise, without a clear, structured approach. This inevitably causes the evaluation to fail where it is needed most: helping you decide what to do next.

So, what exactly causes evaluations to go wrong? In our experience, there are three key risks that occur at different stages of an evaluation's life.

1. The evaluation design isn’t tailored to your service

There is no silver bullet for evaluations, and using a generic framework to evaluate your service is likely to lead to the wrong questions being asked, key areas going untested, unfocused data collection and general confusion about the conclusions. Add to this the classic 'too many cooks' problem, where competing priorities from different stakeholders cause the evaluation scope to grow to the point where it no longer answers the relevant questions robustly or compellingly.

2. Your evaluation is never run

Even if you are able to develop a strong, fit-for-purpose evaluation design, the process of running it can still be a daunting proposition, and this step is often a stumbling block that is never overcome. Evaluations can be costly, sometimes requiring commissioned expert support and new data collection and measurement processes. Add to this the intense time demands on often already stretched staff, and you may find you don't have enough time or money to run your evaluation properly, or even at all.

3. You don’t learn anything from your evaluation

Too often we find that evaluations are not used effectively to develop and share real insights about what works and what doesn't. The evidence you collect might not tell you anything specific, there may be large gaps in your evaluation scope, or you might simply not know what to do with the results. As a result, you may be disappointed to find that your evaluation does not give you a clear idea of what to do more of, what to continue, and what to stop doing.

So what can you do to design, run and learn from your evaluations in a way that gives you meaningful results? We share five practical steps to keep in mind when designing and running your evaluation, to give it the best chance of succeeding.