Making the most out of evaluation

Nine principles to maximise the use of your evaluation

Jessica Smart

Expert Panel Project resource – June 2020

This resource is for anyone doing an evaluation. It describes some of the different ways you can use an evaluation, and presents nine principles to apply to your evaluation to make sure it gets used. These principles work whether you are evaluating a program or service at your organisation or commissioning an evaluation from an external evaluator.

Using an evaluation

Doing evaluation and using evaluation are two different things. For as long as evaluation has been a part of the education and social services sectors, evaluators and practitioners have been concerned about the under-use of evaluation findings. Research, and our own experience working with providers in the families and children services sector, tells us that while evaluation data are frequently collected, they are not as often used to inform learning, program improvement or decision making. This is unfortunate because an evaluation has the potential to:

  • build the knowledge and understanding of staff and other stakeholders
  • contribute to building a positive organisational culture
  • inform decision making.

Understanding the different ways an evaluation can be used

There are several different ways an evaluation can be used. Some of these are described below.

Use for decision making

When thinking about 'evaluation use', we tend to immediately think of how evaluation findings are used to make decisions about a program or service. For example, this could be changing the way a program or service is delivered, expanding a program, or increasing or discontinuing funding.

Use for learning and understanding

Evaluation can change the way we think about a program or service. We might read an evaluation report and increase our knowledge or shift our attitudes, or we might change our understanding of a whole area of work. These shifts in thinking can translate to changes in practice or to different decisions being made in the future.

Use for strategic or political purposes

Evaluation can be used 'symbolically' to advocate for a program, to confirm existing practice or to give legitimacy to a program or service.

Use for 'the process'

In addition to effects or changes that take place as a result of the evaluation findings, the process of being involved in an evaluation can have benefits of its own. Being involved in evaluation can clarify the intended outcomes of a program or service, build consensus about a program's aims and activities, foster deeper thinking about a program or its goals, and help stakeholders identify and question assumptions about the program or area of work. This can result in positive changes to practice (Weiss, 1998).

After a recent review of the evidence about evaluation use, we identified the following principles that can help make sure your evaluation is put to best use.

The principles

1. Be clear about the purpose of the evaluation

Evaluation can be undertaken for a range of different reasons. It is important to be clear about the purpose of doing an evaluation and how you plan to use the findings. This will guide other decisions about who you need to work with, who will be using the evaluation and the methods you use to collect and analyse data. For child and family services, evaluation is often undertaken for two reasons:

  • Evaluation for accountability (e.g. an 'outcomes' or 'impact' evaluation) 'looks back' at what was done in a program or service with the goal of understanding effectiveness or value for money.
  • Evaluation for learning and program improvement (e.g. a 'process' or 'implementation' evaluation) 'looks forward' to understand what can be done to strengthen a program or service.

These are both valid reasons to conduct an evaluation, but they each have a different purpose. Each might work with and prioritise different stakeholders, or call for a different design or methods. It is possible to design an evaluation for both learning and accountability - for example, an outcome evaluation that also gathers information that can be used to make program improvements - but you need to plan this from the start. Clarifying the purpose of your evaluation, and having agreement from other stakeholders about this purpose, will make it more likely that your evaluation findings are used.

As part of considering the purpose, you should think about how you might respond to the information that you receive. For example, if you are undertaking an evaluation of a program that you believe is effective, how might you respond if the results show limited effectiveness? Similarly, if you are undertaking an evaluation with the goal of gathering information to strengthen the program, it can be useful to consider how you will implement findings and what resources might be required.

2. Identify and work closely with stakeholders

Working collaboratively with the people who will use an evaluation (the stakeholders) is routinely identified as the most effective strategy to increase the use of an evaluation (Johnson et al., 2009; Patton, 2008; Weiss, 1998). Collaborative approaches fall on a spectrum, from consulting with stakeholders through to having stakeholders partner on all aspects of evaluation - design, data collection, analysis and dissemination. For more information on collaborative approaches, see Michael Patton's 2008 book Utilization-Focused Evaluation, which describes a collaborative process in detail.

There are likely to be many different stakeholders and it is probably not realistic to work with them all. In these principles we refer to 'stakeholders' but we are really referring to a small group of key or 'priority' stakeholders. To identify your stakeholders and work out who your priority stakeholders should be, you can use the evaluation stakeholder mapping tool.

3. Assess whether your program and organisation are ready for evaluation

To conduct a useful evaluation, a program or service needs to be 'ready' to be evaluated. To be ready, it will need:

  • clearly defined goals or intended outcomes
  • consensus among staff and stakeholders about the purpose of the program (Weiss, 1998)
  • clearly defined activities.

(Note: a partial exception to this is developmental evaluation, which is specifically designed to evaluate projects that are in a state of flux. However, even with developmental evaluation there still needs to be some agreement within an organisation about the purpose of the evaluation and how the results can be used.)

Organisations that have a positive evaluation and learning culture are more likely to use evaluation findings. These organisations will have senior leaders who are supportive of evaluation, organisational policies and strategies that encourage and support evaluation, and staff who are supported to use evidence and evaluation to enhance practice.

There are many tools available that can be used to assess a program's and an organisation's readiness for evaluation.

4. Build relationships through quality communication

Building relationships through good communication is a principle in its own right but also underpins many of the other principles because interpersonal relationships are core to effective evaluation use. Many experts have identified the importance of the 'personal factor' in evaluation use (Patton, 2008). The personal factor is about having relationships with people in different parts of the organisation who can use the evaluation and advocate for other people to use it. Relationships with these people should be sought out, cultivated and supported.

These interpersonal relationships can be established and maintained through high quality communication. The type, quality and frequency of communication between the person leading the evaluation and the stakeholders will influence how the evaluation is used.

Building and strengthening the 'personal factor' will help an evaluation get used in two ways. First, your relationships will give you more access to information about what is happening within the program and the organisation. This means that the evaluation can be timed to coincide with decisions that need to be made, or with any changes to the political environment around a program (see the fifth principle, on organisational context). This is particularly important if the person leading the evaluation is external to the organisation.

Second, relationships and communication can 'pave the way' if there are any negative or disappointing findings. The person leading the evaluation should share emerging findings throughout the evaluation process, rather than waiting until the end. This is discussed further in the ninth principle.

5. Understand the organisational context

A program or service sits in an organisational context that has its own politics, resource limitations and culture. This can influence how evaluation information is used to make decisions. An evaluation conducted with no understanding of this organisational context will often be of limited use.

Understanding the cultural and political context in which the program or service exists can help identify where there is room to make improvements to a program or service as well as who is likely to support, or resist, the evaluation findings and recommendations.

Understanding who has the authority to make decisions about a program or service, and what other sources of information will be used to inform program decisions, also ensures that the evaluator can work with the right stakeholders and address some of the barriers that could prevent the evaluation from being used. Similarly, understanding how and when decisions will be made about the program will allow the person leading the evaluation to share information at the right time to inform program decisions.

This principle is particularly important if you are hiring a consultant to conduct an evaluation. Working closely with an evaluation consultant, particularly in the early stages of scoping the evaluation, can assist in building their understanding of the organisation. If the person leading the evaluation is already working at the organisation this can be easier, although it can still be useful to think through and identify any particular challenges within the organisational context that might get in the way of the evaluation being used.

6. Advocate relentlessly (with patience)

The evaluation should be designed to be used from the beginning, and then its use considered and facilitated at every step. You will need to champion and advocate for use throughout the evaluation, and this should begin in the very earliest planning stages.

The checklist that accompanies Patton's Utilization-Focused Evaluation can be a good guide to follow. It is available in his book (see Further reading below) and has been republished by Better Evaluation on their website.

Bear in mind that information from an evaluation is only one influence among several on a program, service or policy. Politics, organisational priorities and culture, community priorities and staff beliefs and values all rightly have an impact on decision making for policy and service systems. It is important to remember that change can take time. Having realistic expectations about what might be achievable within your time frames will help you prioritise and strategise without becoming disillusioned.

7. Build user engagement through evaluation capacity building

Don't assume that others are enthusiastic about evaluation. There can be resistance to, or fear of, evaluation for many reasons, including previous negative experiences. You may need to invest time in evaluation capacity building within your team or broader organisation. If team members have limited evaluation knowledge or skills, it can be difficult for them to see the value of evaluation or to understand what is being done and why. If staff and other stakeholders are really not on board, you may need to start where they are at and 'sell' evaluation from a 'what's in it for me' perspective (Maloney, 2017). For example, you could talk with stakeholders about how positive findings from an evaluation can be used in funding applications, or how evaluation findings can be used for service improvements that can, in turn, lead to increased client engagement.

8. Conduct a good quality evaluation

To be useful, an evaluation must be relevant and credible. It must be well designed and rigorous, use appropriate methods, and be based on valid and reliable data. This does not mean that it needs to be expensive or overly technical. The design and methods must be suitable for the program or service being evaluated, and the stakeholders and any other users must have confidence in the findings.

This principle is connected to the other principles: in order to produce a good quality evaluation that is suitable for the context, you need to work collaboratively with users to identify their needs and ensure they understand the evaluation design and how it will meet their needs (Blewdon, 2010). You also need to understand the organisational context.

The evaluation report itself should be accessible and easy to read, with no jargon. The findings should be clear, and the recommendations should be detailed and actionable. The findings and recommendations should meet the needs of stakeholders as identified at the beginning of the evaluation.

9. Use knowledge exchange and knowledge brokering strategies

It is important to recognise that decisions are not made based on evaluation information alone. Factors such as levels of community or stakeholder support for a program, staff observations, and existing stakeholder values and beliefs can all conflict with information arising from an evaluation. This conflict can prevent evaluation recommendations from being considered or implemented. Stakeholders are often more open to evaluation findings if there is some alignment with their existing knowledge and values. They may be less open to using the findings if they are challenging in some way; for example, if they do not align with stakeholders' existing values or beliefs, or if the recommendations suggest a change in practice. Knowledge exchange and knowledge brokering strategies are two ways to help ensure that potential evaluation users are receptive to new information arising from the evaluation, even when it is challenging or disappointing.

Knowledge exchange refers to a two-way exchange of information: the person leading the evaluation works to understand staff and stakeholder knowledge and includes it in the evaluation. The evaluation questions, data collection methods and analysis will then be informed by staff and stakeholders. This is in contrast to a more traditional approach, where information flows in only one direction: from the evaluation to the stakeholders (e.g. via a final report). Evaluation findings are more likely to be rejected by stakeholders if they disagree with the purpose, direction or methods of the evaluation. The benefit of a knowledge exchange approach is that, even if the findings are challenging, they are more likely to be considered and implemented by stakeholders because they will see them as credible.

A knowledge broker is a person who facilitates the knowledge exchange. In the context of evaluation use, a knowledge broker translates and mediates between the evaluation and the stakeholders. This person (most likely the person leading the evaluation) works with stakeholders after the evaluation is completed to help them make sense of the evaluation findings and to determine how they will apply information from the evaluation.

What is critical to take from this principle is that having good quality evaluation findings alone is not enough for the evaluation to be used. The evaluation needs to incorporate the knowledge and values of stakeholders for it to be accepted and seen as credible by stakeholders, especially if the findings are challenging. Working with stakeholders to make sense of and act on the findings can also be valuable.

Further reading

The above principles have been synthesised based on a recent review of the literature, but there is so much more out there, particularly if you want to go deeper into the theory or research on the use of evaluation. The books and articles listed below provide some suggestions. Most of the literature is written for evaluators but the learnings will be relevant to people who are planning an evaluation, working with an evaluation consultant or conducting evaluation internally.

  • Patton, M. Q. (2008). Utilization-focused evaluation. If you read one book on evaluation utilisation, this should be it. Using examples from his own practice, Patton describes in detail how to design and conduct an evaluation that is utilised through engaging with users at every step of the way.
  • Maloney, J. (2017). Evaluation: What's the use? This article is easy to read and provides a good summary of the literature. It gives an Australian perspective on some utilisation issues and has a good, clear discussion. A PowerPoint summarising the article is also available.
  • Cousins, B. J. and Shulha, L. M. (2006). A comparative analysis of evaluation utilization and its cognate fields of inquiry: Current issues and trends. In Sage handbook of evaluation: Policies, programs and practices. This article provides a more theoretical perspective on knowledge production and utilisation, with a discussion of the fields of research use and evaluation use.
  • Weiss, C. (1998). Have we learned anything new about the use of evaluation? In this classic article Weiss reflects on the development of understanding of evaluation knowledge and provides an overview and summary of the field.

References

  • Blewdon, M. (2010). Developing evaluation capacity and use in the NZ philanthropic sector: What can be learnt from the US experience? Evaluation Journal of Australasia, 10(1), 8-16.
  • Cousins, B. J., & Shulha, L. M. (2006). A comparative analysis of evaluation utilization and its cognate fields of inquiry: Current issues and trends. In I. F. Shaw, J. C. Greene, & M. M. Mark (Eds.), Sage handbook of evaluation: Policies, programs and practices (pp. 266-291). London: Sage Publications.
  • Johnson, K., Greenseid, L. O., Toal, S. A., King, J. A., Lawrenz, F., & Volkov, B. (2009). Research on evaluation use: A review of the empirical literature from 1986 to 2005. American Journal of Evaluation, 30(3), 377-410.
  • Maloney, J. (2017). Evaluation: What's the use? Evaluation Journal of Australasia, 17(4), 25-38.
  • Patton, M. Q. (2008). Utilization-focused evaluation. Los Angeles: Sage Publications.
  • Weiss, C. H. (1998). Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19(1), 21-33.

Authors and acknowledgements

This resource was authored by Jessica Smart, Senior Research Officer at the Australian Institute of Family Studies.  

This document has been produced as part of the Families and Children Expert Panel Project funded by the Australian Government through the Department of Social Services.

Featured image: © GettyImages/Stígur Már Karlsson/Heimsmyndir

Publication details

Expert Panel Project resource
Published by the Australian Institute of Family Studies, June 2020
