PPL Perspectives

‘Nobody does it better’? How can public services employ evaluations most effectively for service improvement?

13 October 2017

Vish Valivety, Consultant

Why do evaluations go wrong? Common risks and tips to help you succeed

Evaluations are critical – not just to track performance and measure success, but also to help inform decisions and deliver improvements to your service. At PPL, we've helped organisations across the country develop evaluations spanning everything from single service pilots to larger Vanguard programmes. Every evaluation is different, but they also have a lot in common – including what makes them succeed and what makes them fail to make an impact.

While most people consider evaluations to be a 'must-do', they are often treated as an afterthought or a tick-box exercise, without a clear, structured approach. This inevitably causes the evaluation to fail where it is needed most – helping you decide what to do next.

So, what exactly causes evaluations to go wrong? In our experience, there are three key risks that arise at different stages of an evaluation's life.

  1. The evaluation design isn't tailored to your service

There is no silver bullet for evaluations, and using a generic framework to evaluate your service is likely to lead to the wrong questions being asked, key areas going untested, unfocused data collection and general confusion about the conclusions. Adding to this is the classic 'too many cooks' problem: different priorities from different stakeholders can cause the evaluation scope to grow to the point where it no longer answers the relevant questions robustly or compellingly.

  2. Your evaluation is never run

Even if you are able to develop a strong, fit-for-purpose evaluation design, the process of running it can still be a daunting proposition. This step is often a stumbling block that is never overcome. Evaluations can be costly, sometimes requiring expert support to be commissioned and new data collection and measurement processes to be designed. Add to this the significant time demands on already stretched staff, and you may find you don't have enough time or money to run your evaluation properly – or at all.

  3. You don’t learn anything from your evaluation

Too often we find that evaluations are not effectively used to develop and share real insights about what works and what doesn’t. The evidence you collect might not tell you anything specific, there may be large gaps in your evaluation scope, or you might just not know what to do with what you get. As a result, you may be disappointed to find that your evaluation does not give you a clear idea of what to do more of, what to continue and what to stop doing.

What can you do to design, run and learn from your evaluations in a way that gives you meaningful results? Below are five practical tips to keep in mind when designing and running your evaluation, to give it the best chance of succeeding.

Tip 1: Establish what success looks like for everyone involved

Make sure you take time up front, right at the start of the project, to be clear about what outcomes you expect from your initiative. For most models of care, it's not just one organisation that delivers these outcomes, so it's important to think about what success means from the point of view of every stakeholder involved – from the people who commission and provide care to the people who receive it. Once your outcomes are established, you can start identifying key performance indicators (KPIs) that will allow you to measure whether you've reached your goals.

These goals should be realistic, and you should have a clear plan for when you expect to reach them.

Tip 2: Design a detailed plan to monitor and measure progress

There are lots of ways to develop a plan, but there are common elements that should build on each other to get you to a robust design:

  - Evaluation questions – what success means for your service
  - KPIs – the measures you will use to answer your evaluation questions
  - Evidence base – the data and information sources that show whether your KPIs are met
  - Analytical method – how you get useful insights from your evidence base
  - Project and resource plan – who will be responsible for evaluation tasks and when they will do them

Making sure you have a detailed design that takes all these elements into account can save you a lot of trouble down the line, and help you run a smooth, and useful, evaluation.

Tip 3: Don’t just use numbers

The term 'evaluation' is often taken to mean a purely scientific, quantitative investigation – full of numbers and charts. While in-depth data analysis can help pull out key insights, it should by no means be your only form of evidence. Combining insights from numerical data with qualitative questions and methods will help answer the questions of why and how, letting you learn more than data collection alone could. You'll also make your evaluation accessible to everyone, not just 'numbers' people.

Tip 4: Work with what you’ve got

When designing an evaluation framework, it's important to avoid reinventing the wheel. As a general rule, we've found that if something is important enough to determine whether a service is a success, it is probably being captured somewhere already. Make sure to engage with your data and business intelligence teams to uncover what is already being collected, so you don't need to redesign and recapture information.

And always remember the 80/20 rule: if reasonably relevant data is already being captured by an existing process, it is often better to use that than to design a new collection method from scratch.

Tip 5: It’s not a one-time thing

Evaluations are often conducted at the end of a pilot period, or only after a service has been running for a significant amount of time. Following this pattern severely limits what the evaluation framework can do for you. The best services are those that constantly improve, regularly acting on what they learn and evolving. Your evaluation framework can help with this. Using what we call a 'continuous improvement cycle', regular evaluation points at different levels – from operational to strategic – will flag pain points, successes, opportunities and risks quickly enough for you to make changes and adapt.

Don’t leave your evaluation to the end – it can be the best tool for building a viable and successful service.

There are a lot of tools and guides to help you think about how you can design and run an evaluation. Some are generic in nature and others – like our How To guide on Measuring Success in Integrated Care – relate specifically to health and care, and how to measure performance in complex programmes with multiple agencies and stakeholders involved.

Here at PPL, we design and deliver a wide range of evaluations across health and care. We have evaluated several Vanguard programmes, and have delivered various other organisational, programme and project evaluations relating to new models of care. We also run evaluation training and openly share our tools, frameworks and techniques to help build evaluation capacity and skills in the sector.