Program planning and evaluation guide – March 2018

Step 5: Evaluate your program or service

In social services, evaluation is usually undertaken to find out whether a program or service was delivered the way it was planned, and to examine its effects on program participants.

This page will give you the basics about evaluating programs and services, from developing evaluation questions to interpreting data. You will also find guidance on selecting an evaluation approach that is suited to your program participants, objectives and available resources.

Resources on this page explain what evaluation is, describe different types of evaluation, and give examples of how agencies can build a culture of research and evaluation.

5.1 Introduction to evaluation

Evaluation can be complex and confusing, and it can be hard to know where to start. Resources in this section explain what evaluation is and why it’s important.

5.2 Types of evaluation

There are many different types of evaluation. Social services will most likely opt for a process or impact evaluation, or in some cases a combination of the two. This resource explains the difference between impact and process evaluation.

Resources

Evaluation and innovation in family support services [CFCA Practitioner Resource]

A guided tour through outcomes measurement [Instructional video, opens in YouTube]

5.3 Ethical considerations for evaluation projects

Some evaluation projects will need to undergo an ethics review process to ensure that evaluation is conducted with integrity and with minimal risk to participants. This section provides an overview of ethical standards in evaluation and explains the ethics approval process.

Back to top

5.4 Selecting program outcomes to measure

Program outcomes or objectives are what you anticipate will happen as a direct result of the program or service you are delivering. Most program evaluations will target short- and medium-term outcomes.

This section provides guidance on choosing specific, realistic and measurable outcomes that can be used to demonstrate program effects.

Resources

The basics of evaluation [CFCA Practitioner Resource]

Demonstrating community-wide outcomes [Practice guidelines]

Detailed guidance for designing an outcomes evaluation [CFCA Practitioner Resource]

Measuring outcomes in programs for Aboriginal and Torres Strait Islander people [Webinar recording, transcript and slides]

A guided tour through outcomes measurement [Instructional video, opens in YouTube]

5.5 Tailoring your evaluation to program participants

It is critical to the success of your evaluation that you tailor it to the needs of the people attending your program or service. Factors such as literacy levels, age and cultural background will influence how you engage participants in the evaluation process, and the methods you use to collect information.

This section covers working with participant groups in evaluation and proposes solutions for overcoming challenges.

5.6 Designing an evaluation

Evaluation design is rarely a linear process; there are many overlapping parts. You will be deciding on the:

  • evaluation questions
  • level of evidence you need to collect
  • evaluation approach
  • type of data (qualitative or quantitative) that will give the best evidence

The resources in this section will help you make those decisions.

5.7 Finding the right tool to measure program outcomes

Measuring program outcomes properly means using the right tool (“instrument”). You might choose to use a standardised survey, an observation checklist or an interview guide – or a combination of all three.

This section will help you to choose an outcomes measurement tool, or tools, to suit your needs.

Resources

Available outcomes measurement tools [Summary matrix and explanatory notes]

Detailed guidance for designing an outcomes evaluation [CFCA Practitioner Resource]

Developing an outcomes measurement framework: The Mallee [Webinar recording, transcript and slides]

How to choose an outcomes measurement tool [Short article]

How to develop an evaluation plan [Step-by-step guide]

A guided tour through outcomes measurement [Instructional video, opens in YouTube]

5.8 Analysing evaluation data

Once you have collected the data you need, you can start analysing it. Data analysis primarily involves organising data so that you can report against your intended program outcomes and make judgements about what the data means. The resource below provides some general advice about analysing qualitative and quantitative data.
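For quantitative data, organising your data often comes down to pairing each participant’s before and after measures and summarising the change against an intended outcome. The sketch below is a hypothetical illustration only: the participant IDs, scores and “wellbeing survey” are made up, and any real analysis should follow the advice in the resource listed below.

```python
# A minimal, hypothetical sketch of organising quantitative evaluation data.
# Pre- and post-program scores from a made-up wellbeing survey are paired by
# participant so the average change can be reported against the intended
# program outcome ("improved wellbeing"). Illustrative data only.

scores = {
    "P01": {"pre": 12, "post": 18},
    "P02": {"pre": 15, "post": 16},
    "P03": {"pre": 9, "post": 14},
}

# Calculate each participant's change and summarise across the group.
changes = [s["post"] - s["pre"] for s in scores.values()]
average_change = sum(changes) / len(changes)
improved = sum(1 for c in changes if c > 0)

print(f"Participants with improved scores: {improved} of {len(changes)}")
print(f"Average change in wellbeing score: {average_change:.1f} points")
```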

Resources

The basics of evaluation [CFCA Practitioner Resource]

5.9 What to do after you complete the evaluation

Evaluation has the potential to improve our knowledge about how and why a program or service works or doesn’t work, and it can be a powerful tool for continual improvement. The broader sector will also benefit when you communicate, or “disseminate”, your findings.

This section explores various means of research dissemination and provides examples of practitioners who consistently use evaluative findings and evidence to strengthen programs and practice.

Resources

Collaboration and co-design when evaluating intergenerational trauma projects [Short article]

Disseminating evaluation findings [CFCA Practitioner Resource]

Evidence Informed Practice in Intensive Family Support Programs: Are we there yet? [Webinar recording, transcript and slides]

Lessons learned: The FAST program in NT [Webinar recording, transcript and slides]

Practitioners on evidence [Collection of conference papers]
