Practice guide

Published January 2021

This short resource defines program evaluation in the context of child and family support services. It was written for those who are new to evaluation and unfamiliar with evaluation terms.

Definitions of evaluation

Evaluation refers to the systematic process of assessing what you do and how you do it to arrive at a judgement about the ‘worth, merit or value’ of something (Mertens & Wilson, 2013; Scriven, 2003–04). Essentially, evaluation involves taking a series of planned steps in order to better understand a program or service.

There are many types of evaluation designed for different situations and with different objectives. You can find more detailed information about specific evaluation types on the Better Evaluation website.  

In this resource, we discuss process and outcome evaluations because they are the most commonly used in the child and family support sector.

Evaluations that focus on process aim to understand how a program works. This type of evaluation typically focuses on the who, what, where and how (Centers for Disease Control and Prevention [CDC], 2018). A process evaluation often asks:

  • Was the program content delivered as the program developers intended?
  • Was every program session delivered?
  • Did participants attend each session?
  • Who attended the program and were they from the population group the program was designed for?
  • Were staff appropriately qualified to deliver the program?

Evaluations that focus on outcomes are interested in whether, how well (or how much), and for whom a program has met its goals. Outcome evaluations can also look at whether a program, policy or service has produced any unintended outcomes or changes. These evaluations are sometimes called ‘impact evaluations’. An outcome evaluation often asks:

  • Did the program achieve its intended outcomes or goals? To what extent? Did it have a positive effect? How much of an effect?
  • Who benefited from the program or service? Who did not benefit? Why?
  • Have there been any unintended outcomes for participants? 

To understand whether a program or service has achieved its goals, it’s often necessary to collect data at more than one point in time. This allows you to assess whether change has actually taken place and, if so, how much. For example, data might be collected at the start and end of a program, or at defined time points after a client enters a service (e.g. six months or a year after entry, or when they leave).

You can find further guidance on how to design an evaluation that suits your needs here.

Reasons to evaluate

Evaluation has increasingly become a necessary part of service delivery and decision making. Evaluation findings can be used to assess whether or not programs and services should continue to be funded or if change is needed. They can also explain if, how and why things are working and recommend improvements or adaptations to programs, services and interventions. What we learn through evaluation contributes to our collective knowledge of what works for families and children. And importantly, there is an ‘ethical obligation’ (Giancola, 2021) to evaluate because we need to be sure that:

  1. families and children are actually benefiting from the programs they attend
  2. funds are directed towards good quality programs
  3. programs are not causing harm

Evaluating what you do and how you do it can let you know if you are achieving these aims and, if not, how a program might be improved.

Is monitoring the same as evaluation?

Most services collect some form of routine data about their programs; for example, attendance and referral numbers or website traffic. If you’re using this data to keep track of what is happening across your service, then you are engaged in a practice called monitoring. Monitoring is useful for tracking progress towards service-wide goals, identifying problems in real time so adaptations can be made, and meeting reporting requirements.

Monitoring overlaps with process evaluation (and data collected through routine monitoring can be used to answer evaluation questions) but is often seen as slightly different. This is because while monitoring might describe what is happening, it doesn’t explain why it is happening (Funnell & Rogers, 2011). In contrast, evaluation usually aims to go beyond description and to explore why something is happening, how much a program or service is responsible for a change and what this means for future actions. 

Evaluation usually involves writing evaluation questions (high-level questions that the evaluation will answer) and systematically taking steps to answer those questions. Engaging in evaluation is often an essential way to inform program decisions (e.g. ceasing operation or scaling up a program), and it gives communities confidence that they are accessing quality, responsive services that address their needs.

Steps in evaluation

Whatever the focus of your evaluation, you will need to decide on:

  • evaluation questions
  • data collection methods and measures
  • how to conduct data analysis
  • how the data will be used.

That is, you need to be systematic in your approach.

Child and family support workers frequently make judgements and decisions based on what clients and colleagues tell them and on their own observations. Evaluation simply formalises this way of thinking by systematically working through a series of steps to arrive at a judgement about the ‘worth, merit or value’ of a program (Mertens & Wilson, 2013; Scriven, 2003–04).

If you are new to evaluation or need help with planning your approach, read our resource Planning an evaluation: step by step, which describes the key steps and actions involved in evaluation.

References

  • Centers for Disease Control and Prevention (CDC). (2018). Developing process evaluation questions (Evaluation Brief No. 4). Atlanta, GA: Division of Adolescent and School Health, CDC. Retrieved from www.cdc.gov/healthyyouth/evaluation/index.htm
  • Funnell, S., & Rogers, P. (2011). Purposeful program theory: Effective use of theories of change and logic models. San Francisco, CA: Jossey-Bass.
  • Giancola, S. (2021). Program evaluation: Embedding evaluation into program design and development. Los Angeles, CA: Sage Publications.
  • Mertens, D., & Wilson, A. (2013). Program evaluation theory and practice: A comprehensive guide. London: Guilford Press.
  • Scriven, M. (2003–04). Michael Scriven on the differences between evaluation and social science research. The Evaluation Exchange, IX(4). Retrieved from www.hfrp.org/evaluation/the-evaluation-exchange/issue-archive/reflecting-on-the-past-and-future-of-evaluation/michael-scriven-on-the-differences-between-evaluation-and-social-science-research
Acknowledgements

This resource was authored by Kathryn Goldsworthy, Senior Research Officer at the Australian Institute of Family Studies.

This document has been produced as part of AIFS Evidence and Evaluation Support funded by the Australian Government through the Department of Social Services.
