An introductory guide to collecting evidence about program quality and implementation

Content type: Practice guide
Published: March 2025


Why and how this resource was developed

This practice guide was developed by the Evidence and Evaluation Support (EES) team at the Australian Institute of Family Studies (AIFS). The EES team provides resources and capability-building activities to support Families and Children Activity (FaC) providers in collecting, using and communicating evidence. The target audience for our resources also includes evaluators working within or in partnership with family and community services.

This resource was created based on consultations with FaC provider representatives in late 2023. During these consultations, participants expressed a need for more support and guidance on evaluating program processes and implementation, as these aspects are often overlooked in favour of measuring program outcomes.

The guidance in this resource is informed by a review of the literature listed in the bibliography, along with the EES team’s research, evaluation and practice expertise.

Introduction

Program outcomes are critical to those working in or alongside family and community services. They help us to understand whether the things we are doing are benefitting children, families and communities. They also add to the broader evidence base about the kinds of strategies and programs that work (and don’t) for families.

However, to understand why a program is (or is not) achieving its outcomes, we first need to know about how the program is working and how well it is responding to need. That is, we need a ‘process evaluation’.

Process evaluation is a systematic way of examining how a program operates and is implemented. It examines the activities conducted for an intervention, program, service or policy – the way they are executed and the fidelity[1] to the original plan. By examining these things, we seek to understand how a program or intervention is delivered, identify any challenges or deviations from the original aim or plan and assess whether the program is being delivered as intended.

Process evaluation is a flexible approach that can be applied to many family and community programs. It can be adapted to various contexts, used with simple or complex interventions, at the start or end of a program, with mixed methods, or on its own. It can also be done at the same time as an evaluation of program outcomes.

What is in this guide?

This guide explains why process evaluations are useful and outlines the key steps involved in doing one. Each step offers practical information for how to plan and execute a process evaluation, with examples and tools to help you apply the guidance in this resource.

Links to example process evaluation reports are provided at the end of the resource.

If you want to know more about how we define ‘evaluation’, read this first: What is evaluation?

What can family and community services gain from process evaluation?

 

Implementing new programs

One of the best times to undertake a process evaluation is when piloting a new program. Pilot programs need all kinds of testing. A process evaluation can help you to explore the program’s feasibility, identify which elements of the program are important for its success and understand how participants respond.

For example, in a process evaluation, you might explore questions about whether the program is sufficiently resourced, if the content is engaging or whether you have built in enough time to deliver the planned activities.

Doing a process evaluation while piloting a new program will also help you to navigate the inevitable surprises that come with implementing anything new. Like developmental evaluation,[2] process evaluation can offer real-time feedback on the program implementation and allow you to make immediate adjustments, as needed, to ensure that it is delivered effectively and efficiently (Moore et al., 2015).

Continuous improvement

Often, after a program has been running for a while, you may start to notice things that aren’t working as well as expected. For example, service users may not engage with the program material as well as hoped, or attendance numbers may have declined midway through the program. Perhaps program costs are higher than expected or there are issues with staff training or turnover. Alternatively, longstanding programs may veer off course because the program intent has changed (intentionally or unintentionally) and/or program procedures are not being adhered to.

When these things occur, it is tempting to make changes based on assumptions or previous experience about what is happening and why. However, undertaking a process evaluation is a way to systematically assess the situation and make improvements that are grounded in evidence. By prioritising learning and continuous improvement through procedures such as process evaluation, programs have a better chance of succeeding over time.

Services can also use process evaluation to generate data that can be used to inform future decision making and policy development. These data can highlight which program components are most effective and/or working well, reveal any areas needing adjustment, and provide insight into participant experiences and outcomes. Consequently, policymakers and program designers can make informed decisions that enhance program effectiveness and better address community needs.

Additionally, process evaluation can help explain any negative or unexpected outcomes that occur during program delivery. It can also help assess if program targets and outcomes are still suitable or if new ones are required. This information is valuable for offering potential solutions to senior managers in your organisation or to funding bodies.

Monitoring program fidelity

Program fidelity refers to how closely a program or intervention is delivered according to its original design. This concept is central to implementation science, which emphasises delivering programs with high fidelity to achieve desired outcomes. This is important because changing parts of a program, or changing how it is delivered, may affect those parts of a program that make it effective.

One effective way to monitor program fidelity is through process evaluation. The aim here is to reveal what is actually being delivered and assess whether the program is staying on track or deviating from the plan (and if so, why). Process evaluation of this sort is particularly suited to manualised or evidence-based programs, which have a written delivery plan and require specific implementation processes to achieve their intended outcomes.

By monitoring fidelity through process evaluation, you can identify where things aren’t going as planned and determine what (if any) changes are needed.

Replicating established programs

Replicating existing programs – for example, by running them in new locations or with new client groups – may seem simple in theory; however, it is often more complicated in practice. Community strengths and needs, culture and values, for example, can differ significantly from place to place. These differences might mean that you need to adapt certain program features such as program content, delivery, language and structure.

If you need to make changes when introducing a program to a new place, you need to know what parts of the program can be changed without limiting its effectiveness, and which parts are non-negotiable for achieving positive outcomes. This is where implementation science and process evaluation come in. Implementation science provides a framework for understanding how best to adapt and implement programs in different contexts, ensuring that the core components that drive success are maintained.

Process evaluations can complement implementation science by systematically assessing the implementation process, providing data on what works, what doesn’t, and why. Together, they help ensure that programs are effectively tailored to new settings while preserving their intended impact.

For example, before trying to replicate a program in a new area or context, you can undertake a process evaluation to gain an understanding of what program components, resources, procedures and skill sets have been essential for achieving program outcomes and what elements are flexible or optional. Having this knowledge will support other organisations and staff to adopt the program and deliver it successfully.

Additionally, by undertaking a process evaluation you can help ensure a replicated program runs effectively and draw on lessons learned from other sites, such as what attracts participants and what encourages their retention. Further process evaluation activities can build on this by monitoring implementation at new sites and working out any kinks that appear.

Can I monitor process indicators without having to do a formal evaluation?

The short answer to this is yes. While there are many benefits to undertaking a formal evaluation, in some situations it is preferable to establish monitoring systems that facilitate routine data collection and analysis.

A formal evaluation process is typically more structured and comprehensive. It commonly involves a systematic assessment conducted at a specific time (such as at the midpoint or end of a program cycle). Formal evaluations often involve detailed data collection and analysis and provide robust evidence to inform strategic decisions, justify funding and demonstrate accountability to stakeholders.

Routine monitoring systems involve ongoing data collection and real-time tracking of program processes and performance. This approach is beneficial for maintaining continuous oversight, quickly identifying issues and emerging trends, and making timely adjustments. Routine monitoring can be less resource intensive than formal evaluation and can be integrated into daily operations, fostering a culture of continuous improvement. However, it may not provide the same depth of analysis as a formal evaluation and might focus more on immediate outputs rather than long-term outcomes.

Ultimately, the choice between these approaches depends on your specific needs. If you require detailed, periodic assessments to guide major decisions or secure funding, a formal evaluation might be more appropriate. If your priority is to maintain ongoing oversight and make incremental improvements, a routine monitoring system could be more suitable. Ideally, you would be able to adopt both approaches.

Whatever you decide, the information in this resource can be used to guide both approaches.

What are the key steps in doing a process evaluation?

Like all evaluation approaches, there are some key steps involved in doing a process evaluation:

Figure: Key steps in doing a process evaluation

Below, we explore how to navigate each of these steps and make decisions that will help you to achieve your process evaluation goals. It is important to note that progressing through each step won’t always be linear – you may complete several steps at once or go back and forth between steps. Consider using an evaluation plan to stay on track.

Evaluation plans

An evaluation plan is a document that records the evaluation purpose, questions, methods, time frames, resources, budget and any specific skill sets/people needed for the project. Having an evaluation plan gives you a bird’s-eye view of the entire evaluation and can help with managing each step.

When developing an evaluation plan, it is important to consider and document who needs to know about the evaluation findings, as well as when and how you will share key findings and recommendations. Documenting these aspects upfront enables you to establish key relationships, communicate information about the evaluation as it progresses, and adequately resource your dissemination strategies.


1. Describe the program

The first thing you need in a process evaluation is a description of how the program or service being evaluated is intended to operate:

  • What will you do?
  • Who will do it?
  • Who are the participants?
  • How will the program operate?
  • What are the intended benefits?

Having this information about the program readily available means that you can easily see key program components and processes, identify the parts of the program you want to investigate, and test and refine them (Moore et al., 2015, pp. 3–4).

Considerations

  • Having a program logic model and/or theory of change can be helpful, especially when it describes the needs of your priority population, program activities, resourcing requirements, program outputs and underlying assumptions about how the program should work (Moore et al., 2015; Saunders et al., 2005).
  • If you have a program logic model, it is also helpful to have other detailed documents on hand, such as a needs assessment, program literature review or a program manual. These documents can assist in developing specific evaluation questions, particularly when seeking to understand program reach, implementation and fidelity (the extent to which the program model was implemented).
  • While it is good evaluation practice to have a program logic or theory of change that describes the program, there may be times when that is not possible – for instance, when an evaluation has short time frames and the program does not have a logic model. When in this position, Moore and colleagues (2015) suggest working with program staff to identify and record assumptions about why the program is believed to work, then using this as a basis for the evaluation.

2. Write evaluation questions

Developing evaluation questions will provide you with a clear focus and purpose for your evaluation. The questions you choose will determine everything else in your evaluation: methods, measures, analysis and use. Because of this, you will need to spend some time crafting them.

Process evaluations are typically focused on questions about program fidelity (i.e. the extent to which program elements are implemented according to the program theory or intended outcomes), reach (participation numbers and characteristics) and/or dose (the amount of a program delivered, participant satisfaction and engagement) (Saunders et al., 2005).

Considerations

  • To begin, start with broad questions such as ‘Are program processes and procedures consistently followed?’ You will eventually break these questions down into smaller, measurable components (i.e. specific indicators and measurable data points). Example process evaluation questions are listed in Table 1.

  • While preparing evaluation questions, you will naturally start thinking about how you are going to answer your questions. You can refine them later but it is helpful at this stage to start noting down any ideas you have for how you will source information for your evaluation.
  • If you don’t plan on doing a comprehensive evaluation (i.e. you want to collect information to monitor program operations), having evaluation questions can still be helpful. Refer back to your program logic or theory of change for guidance on where to focus your data collection efforts (see also Appendix B).

At the end of this step, it is likely that you will have more questions than you have the time and resources to answer. Refining and finalising evaluation questions is an iterative process, and you may come back to your questions many times before you finalise your approach to collecting evidence for the evaluation.

Table 1: Example process evaluation questions


Implementation and fidelity (dose and fidelity)

  • To what extent were the core program elements implemented?
  • Was the program implemented according to legal and practice frameworks?
  • What other barriers and facilitators influenced delivery of the program?
  • Were evaluation tools used in accordance with established procedures?
  • Are clients accessing services within expected time frames?
  • Are program processes and governing frameworks consistently followed?
  • How satisfied were participants with the program?

Service use and reach (reach and dose)

  • How many people are receiving services?
  • Are the intended service users the people receiving services?
  • Are clients receiving the services they are seeking?
  • Do participants take up follow-up support/actions?
  • What were the levels of participation?
  • How many people completed the program?

Service quality (dose)

  • Are clients receiving a good quality program?
  • To what extent did program staff receive training in the program?
  • Is staffing sufficient in terms of numbers and qualifications/experience?
  • Is performance significantly better or worse at one program site compared to another?

Awareness and accessibility

  • Is the community aware of the services available to them?
  • To what extent are community members aware of how to access the services and programs available to them?
  • Can clients easily navigate between different programs and service systems?

Resource management

  • Are resources used effectively and efficiently?
  • Are the resources, facilities and funding sufficient to support positive outcomes?
  • Did the service allow sufficient time for program staff to do administrative tasks?
  • Is staffing sufficient in terms of numbers and qualifications/experience?

Sources: Questions are based on guidance in Pietrzak et al. (1990), Rossi et al. (2004) and Saunders et al. (2005).

3. Identify your information needs

Your next task is to consider the kind of information you will need to answer your evaluation questions. In most cases, you will need to draw from both qualitative and quantitative data. Start by reviewing your evaluation questions to determine the specific information you need.

Qualitative and quantitative data

Qualitative data will tell you why or how something happened and is useful for understanding attitudes, beliefs and behaviours (Smart, 2020). These data are particularly useful when investigating any barriers or enablers to program implementation but are also useful for gaining a deeper understanding of issues such as program engagement, resourcing and quality.

Quantitative data will tell you how many, how much or how often something has occurred (Smart, 2020). In process evaluation, this type of data is often used to understand service use and reach, and aspects of implementation such as the number of sessions delivered (dose) and how closely the program plan was implemented (fidelity).

Qualitative and quantitative data collection methods that can be used in process evaluation are shown in Table 2.

Table 2: Common types of data collection methods

Quantitative methods:
  • Surveys
  • Statistical analysis of administrative records
  • Checklists
  • Audits
  • Attendance logs
  • Self-report measures
  • Document analysis
  • Observation

Qualitative methods:
  • Interviews with staff and participants
  • Document analysis
  • Observation
  • Open-ended surveys
  • Case studies
  • Focus groups
  • Content analysis of video/audio recordings

Guidance on using some of these methods is available on the Better Evaluation website.

Considerations

  • Before choosing your data sources and methods, you will need to consider whether you have appropriately skilled staff to collect the data and then to analyse it. Qualitative methods such as interviews and focus groups require skills and experience, and training is often needed to use observation checklists. Likewise, having someone skilled in performing statistical analysis will be helpful if you are collecting quantitative data.
  • Different methods come with trade-offs in terms of the time required for preparation, administration and analysis. Additionally, each method varies in its level of rigour and independence. Choosing the right method involves balancing these trade-offs against your specific needs and available resources.

Indicators

Indicators are those things that can be measured to assess performance or progress towards an overall goal or objective. Process indicators are generally components of evaluation questions that can be measured with quantitative data. Process indicators typically focus on activities and outputs.

For example, indicators for an evaluation question that asks, ‘To what extent were the core program elements implemented?’ might include:

  • percentage of sessions focused on core program topics (child development, parenting styles, ideas for play and interaction, emotion regulation, self-care, support systems)
  • number of referrals made
  • number of instances facilitators used role modelling
  • number of homework activities distributed
  • number of instances the facilitator encouraged social networking
  • number of sessions where information was provided about community activities and services.

A more comprehensive list of process indicators is provided in Appendix A.
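
To make this step more concrete, the short Python sketch below shows how two of these indicators could be calculated from a hypothetical session log kept by facilitators. The record structure, field names and figures are invented for illustration; they are not drawn from any particular program or data specification.

```python
# Illustrative only: calculating two process indicators from a hypothetical
# session log. Field names and values are made up for this example.

CORE_TOPICS = {
    "child development", "parenting styles", "play and interaction",
    "emotion regulation", "self-care", "support systems",
}

# One dictionary per delivered session, as a facilitator might record it.
session_log = [
    {"topics": {"child development", "self-care"}, "referrals_made": 2},
    {"topics": {"icebreakers"},                    "referrals_made": 0},
    {"topics": {"emotion regulation"},             "referrals_made": 1},
    {"topics": {"parenting styles", "self-care"},  "referrals_made": 3},
]

# Indicator: percentage of sessions focused on at least one core program topic
core_sessions = sum(1 for s in session_log if s["topics"] & CORE_TOPICS)
pct_core = 100 * core_sessions / len(session_log)

# Indicator: number of referrals made across the program
total_referrals = sum(s["referrals_made"] for s in session_log)

print(f"Sessions covering a core topic: {pct_core:.0f}%")  # 75%
print(f"Referrals made: {total_referrals}")                 # 6
```

Counts like these can then be compared against the performance standards you set in Step 4.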

Considerations

  • Indicators should be measurable and preferably expressed in numbers or specific units. You might have already listed a set of indicators in the outputs column of your program logic model.
  • Most logic models will also have information about core program components and details about the inputs that are needed to deliver the program. You can use this information to generate indicators.
    • For example, having qualified staff is a common program input. An indicator related to this could be the percentage of staff with appropriate qualifications and experience.
  • Draw from other program documentation such as program manuals or procedures to generate indicators.

After this step, pause and reflect on how feasible your evaluation plan is. You may need to reduce or refine your evaluation questions and/or indicators in the context of your time frame and budget, prioritising what is most important to you.

4. Develop standards for success

If measuring process indicators is part of your evaluation plan, then you will need to develop performance standards for success. Establishing standards will help you to judge whether the program is or isn’t being delivered as intended and then determine what actions to take.

Performance standards are usually framed around reaching a specific number or percentage. For example, you might set a target that 70% of clients are offered referrals. Table 3 provides examples of standards for a parenting program.

Other ways to articulate performance standards include using Key Performance Indicators (KPIs) or developing a rubric that outlines what success looks like. A simple way of checking indicator results against standards like these is sketched after Table 3.

Table 3: Examples of parenting program performance standards

To what extent were the core program elements implemented?
  • 70% of clients are offered referrals
  • 70% of program sessions feature at least one core topic
  • 80% of sessions have role modelling
  • 70% of planned homework activities are distributed
  • Social networking is encouraged at least once per session
  • Information is provided about community activities and services at least twice during the program

How satisfied are participants with the program?
  • 80% of participants are satisfied with the program

Is there a sufficient number of qualified staff delivering the program?
  • 100% of staff have a relevant diploma
  • 80% of staff have at least 3 years of experience
  • 80% of staff have completed mandatory training
  • 2 facilitators are present for 80% of sessions
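
If indicator results are recorded digitally, a short routine can compare them against your performance standards and flag anything that falls short. The Python sketch below is a minimal illustration: the indicator names and targets loosely echo Table 3, but both the targets and the ‘results’ are hypothetical.

```python
# Illustrative only: flagging indicators that fall short of their performance
# standards. Targets and results below are hypothetical.

standards = {
    "clients offered referrals (%)":           70,
    "sessions featuring a core topic (%)":     70,
    "sessions with role modelling (%)":        80,
    "participants satisfied with program (%)": 80,
}

results = {
    "clients offered referrals (%)":           74,
    "sessions featuring a core topic (%)":     65,
    "sessions with role modelling (%)":        82,
    "participants satisfied with program (%)": 79,
}

for indicator, target in standards.items():
    actual = results[indicator]
    status = "met" if actual >= target else "NOT met"
    print(f"{indicator}: {actual} (target {target}) -> {status}")
```

A check like this can be re-run each reporting period as part of routine monitoring.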

Considerations

  • Many organisations have set performance standards or deliver programs that have key performance indicators (KPIs) attached to them – for example, minimum attendance numbers. In this scenario, you would build those established standards into your evaluation plan.
  • When establishing performance measures, look for guidance from:
    • past performance data
    • industry standards and benchmarks
    • government or funders’ contractual expectations regarding KPIs
    • colleagues, professional networks and other key stakeholders.

5. Source evidence for your evaluation

When it comes to sourcing evidence for your evaluation, you have 2 options – draw from existing information already held by your organisation or collect new information. It is likely that you will need to do both.

Considerations

  • Family and community service providers tend to collect a range of information that will be useful for process evaluation – for example, client demographics, referral numbers, waiting lists and service usage. So, before you make plans to collect new information, check what you have access to and map it against your evaluation questions and indicators.
  • Consider administrative data that may be of use – for example, budgets, contracts, HR records, etc.
  • When using existing information, identify the source, the time of collection and the individuals or entities involved in the data collection process. This will help you evaluate the quality of the information and determine its usefulness for drawing meaningful conclusions.
  • If you need to collect new information to answer your evaluation questions, refer back to Step 3 and choose your data collection methods. Then, work with someone skilled in those methods to collect the information you require.
  • Familiarise yourself with ethical evaluation principles and practices. When collecting data, take appropriate steps to protect participants’ privacy and confidentiality, and obtain informed consent from participants to take part in the evaluation.
  • If you need to use a survey, observation tool or fidelity measure, there may be existing standardised measures that you can use. For example, the Session Rating Scale is a standardised tool that measures client satisfaction. Standardised measures have been rigorously tested and will usually be more reliable than a measure you develop in-house, so they are a good choice if you can find suitable ones.
  • Examples of fidelity measures and checklists are provided in the list of Further reading at the end of this guide.
  • If there is no appropriate standardised measure, the alternative is to develop your own measurement instrument. To do this, seek help from someone skilled in designing measurement instruments or follow this guidance on writing surveys.
  • If using interviews or focus groups to collect data, prepare questions that are unambiguous and don’t lead participants into giving particular responses. Further guidance on conducting interviews is available in this Better Evaluation resource.


6. Analyse and interpret

The first part of this step is to analyse the information you have collected. The type of analysis you perform will depend on your evaluation questions and the data you have collected. For instance, if your evaluation question is about identifying barriers and enablers to program implementation, you might conduct thematic analysis of staff interviews to uncover key themes.

In addition to analysing the data you have collected, you will need to make sense of the findings, form judgements about what the evidence is telling you, and make recommendations on what to do next.

Considerations

When working with quantitative data

  • Prepare for analysis by cleaning the data: remove errors, duplicates and irrelevant information, and handle missing values and outliers. Data validation and verification processes should be implemented to identify inconsistencies in the data.
  • Clearly identify and define the variables you will be analysing.
  • Enter the data into a digital format, such as a spreadsheet or database.
  • Explore options for analysis, including statistical methods, software and tools. The Better Evaluation website provides helpful guidance on different types of analysis and the skills you might need.
  • Basic descriptive analysis – calculating the mean (average), median (middle value) and mode (most frequent value) – will often be sufficient. For example, if you have survey data on participant satisfaction, you can use descriptive analysis to calculate the average satisfaction score of participants in the program (a short worked example follows this list).
  • For complex analysis, consider seeking support from someone with the appropriate skills.
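
As a minimal illustration of the cleaning and descriptive analysis described above, the Python sketch below takes a small, made-up set of satisfaction scores, removes missing and out-of-range values, and calculates the mean, median and mode using the standard library. In practice your data would usually sit in a spreadsheet or database rather than in the script itself.

```python
# Illustrative only: basic cleaning and descriptive analysis of hypothetical
# satisfaction scores collected on a 1-5 scale.
from statistics import mean, median, mode

raw_scores = [4, 5, None, 3, 4, 4, 99, 5, 2, None, 4]  # None = missing; 99 = data-entry error

# Clean: drop missing values and anything outside the valid 1-5 range
scores = [s for s in raw_scores if s is not None and 1 <= s <= 5]

print(f"Valid responses: {len(scores)} of {len(raw_scores)}")
print(f"Mean satisfaction:   {mean(scores):.2f}")  # 3.88
print(f"Median satisfaction: {median(scores)}")    # 4.0
print(f"Most common score:   {mode(scores)}")      # 4
```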

When working with qualitative data

  • To analyse qualitative data, the data need to be in a format that can be analysed – for example in documents, spreadsheets, etc.
  • Prepare for analysis by cleaning the data – that is, review transcriptions for any errors or inconsistencies, remove personal information and correct typos.
  • There are various analysis methods to choose from. Common methods include content analysis (categorising verbal or behavioural data) and thematic coding (identifying common themes or patterns). The Better Evaluation website offers detailed explanations of these methods.
  • Tools such as NVivo or ATLAS.ti can help organise and code qualitative data, making it easier to identify patterns and themes.
  • If you need to analyse the data manually – or cannot afford specialist software – spreadsheets are a good option. Look at the data for recurring themes, patterns and insights, and use the coded data to identify trends and draw conclusions (a short example of tallying coded themes follows this list).
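
If coded qualitative data are kept in a spreadsheet, a short script can tally how often each theme appears once coding is complete. The Python sketch below assumes a hypothetical CSV export (coded_interviews.csv) with one coded excerpt per row and a ‘theme’ column; the file name, column name and layout are examples only.

```python
# Illustrative only: tallying coded themes from a hypothetical spreadsheet
# export (coded_interviews.csv) with one coded excerpt per row.
import csv
from collections import Counter

theme_counts = Counter()
with open("coded_interviews.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        theme = row["theme"].strip().lower()  # assumes a 'theme' column
        if theme:
            theme_counts[theme] += 1

# Most frequently occurring themes first
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

The resulting counts can feed into your thematic analysis and help show which themes recur most often.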

General

  • If you have collected data from multiple sources, you will need to bring them together to validate (or ‘triangulate’) your findings.
  • Keep detailed records of your data preparation and analysis steps for transparency and reproducibility.
  • Once you have finished analysing the data, consider involving evaluation participants or other key stakeholders in the validation process. Doing this can highlight things in the data you may have missed. Building stakeholder relationships in this way can also help when it comes to sharing the findings and implementing key recommendations.
  • Document how you conducted the evaluation and your findings in a report. Use charts, tables and other visuals to represent your data. Our evaluation planning guide outlines key points to cover, and this guide from Better Evaluation offers practical tips for organising an evaluation report.

7. Action, communicate and share your findings

Once your evaluation is complete, you should have new insights about how your program can be changed, improved or scaled up, and clear recommendations for what actions to take next. Your next step is to communicate the evaluation findings to evaluation participants, program staff, decision makers and anyone else who would benefit from them. You can then develop an action plan, complete with time frames, for implementing the evaluation recommendations.

This AIFS resource presents 9 principles for making the most out of your evaluation.

Considerations

  • If you didn’t do this at the start of the evaluation, identify who you need to share the evaluation findings with. Document anything you know about their interests in the project and preferences for receiving information.
  • Partner with people or organisations who can help disseminate findings and advocate for any changes recommended in the evaluation report.
  • Consider producing different products for different audiences. For example, a one-page infographic for participants, a presentation with key findings for staff or a detailed technical report for senior managers or the funding body.
  • Submit abstracts to relevant conferences and events to promote the evaluation or engage with the wider community through workshops.
  • Consider writing up findings for publication in journals or book chapters.
  • Leverage online platforms and social media to enhance the visibility of the research findings.
  • Write a policy brief, white paper or report to government on the research findings.
  • Develop a plan for actioning the recommendations that establishes tasks, roles and responsibilities, and time frames. Monitor the plan at regular intervals.

Further reading

Program fidelity assessment tools

General evaluation resources

Bibliography

Carroll, C., Patterson, M., Wood, S., Booth, A., Rick, J. & Balain, S. (2007). A conceptual framework for implementation fidelity. Implementation Science, 2, 401.

Centers for Disease Control and Prevention. (2024). Evaluation reporting: A guide to help ensure use of evaluation findings. Centers for Disease Control and Prevention. www.cdc.gov/training-development/media/pdfs/2024/04/Evaluation-Reporting-Guide.pdf

Department of Social Services (DSS). (2024). Families and Children program logic example 1 – single service. Canberra: DSS. www.dss.gov.au/family-support-services/resource/families-and-children-program-logic-example-1-single-service

Haynes, A., Brennan, S., Carter, S., O’Connor, D., Schneider, C. H., Turner, T. et al.; CIPHER team. (2014). Protocol for the process evaluation of a complex intervention designed to increase the use of research in health policy and program organisations (the SPIRIT study). Implementation Science, 9, 113. doi: 10.1186/s13012-014-0113-0. PMID: 25413978; PMCID: PMC4218994.

Limbani, F., Goudge, J., Joshi, R., Maar, M. A., Miranda, J. J., Oldenburg, B. et al. (2019). Process evaluation in the field: global learnings from seven implementation research hypertension projects in low-and middle-income countries. BMC Public Health, 19(1), 953. doi: 10.1186/s12889-019-7261-8. PMID: 31340828; PMCID: PMC6651979.

Martin, M., Steele, B., Lachman, J., & Gardner, F. (2021). Measures of facilitator competent adherence used in parenting programs and their psychometric properties: A systematic review. Clinical Child and Family Psychology Review, 24, 1–201.

Molloy, C., Macmillan, C., Perini, N., Harrop, C., & Goldfeld, S. (2019). Restacking the Odds – Communication Summary: Parenting programs. An evidence-based review of the measures to assess quality, quantity, and participation. Melbourne: Murdoch Children’s Research Institute. www.rsto.org.au/resources/publications-and-tools

Moore, G. F., Audrey, S., Barker, M., Bond, L., Bonell, C., Hardeman, W. et al. (2015). Process evaluation of complex interventions: Medical Research Council guidance. BMJ, 350, h1258. doi: 10.1136/bmj.h1258. PMID: 25791983; PMCID: PMC4366184.

Pietrzak, J., Ramler, M., Renner, T., Ford, & Gilbert, N. (1990). Practical program evaluation: Examples from child abuse prevention. California: Sage Publications.

Rossi, P. H., Freeman, H. E., & Lipsey, M. W. (2004). Evaluation: A systematic approach (7th ed.). California: Sage Publications.

Saunders, R. P., Evans, M. H., & Joshi, P. (2005). Developing a process-evaluation plan for assessing health promotion program implementation: A how-to guide. Health Promotion Practice, 6(2), 134–147. doi: 10.1177/1524839904273387.

Smart, J. (2020). Planning an evaluation. Melbourne: Australian Institute of Family Studies.

Acknowledgements

This resource sheet was researched and written by Kat Goldsworthy, Research Fellow with the Evidence and Evaluation Support team at AIFS.

The author wishes to acknowledge and thank Sharon Grocott who reviewed this resource and provided helpful suggestions for ways to improve it. Thanks also to Dr Stewart Muir and Sharnee Moore of the Child and Family Evidence team at AIFS for their feedback on various drafts.

Featured image: © gettyimages/kate_sept2004


[1] Program fidelity refers to how closely a program or intervention is delivered according to its original design. Key aspects include adherence to guidelines, quality of delivery, participant engagement and maintaining the program’s unique elements.

[2] Developmental evaluation is commonly used to assess processes and improve programs.

Appendix A: Example process indicators


Evaluation questions and indicators
Implementation and fidelity
To what extent were the core program elements implemented?
  • % of sessions focused on core program topics (child development, parenting styles, ideas for play and interaction, emotion regulation, self-care, support systems)
  • # of referrals made
  • # of instances facilitators used role modelling
  • # of homework activities distributed
  • # of instances facilitator encouraged social networking
  • # of sessions where information was provided about community activities and services
Was the program implemented according to legal and practice frameworks?
  • % of program activities that fully comply with relevant legal requirements and regulations.
  • % of program activities that align with established best practice guidelines and standards.
  • % of staff trained on legal and practice frameworks
Were evaluation tools used in accordance with established procedures?
  • % of participants given evaluation questionnaires at start of the program
  • % of participants given evaluation questionnaires at end of the program
  • % of satisfaction measures issued at 3-month intervals
Are clients accessing services within expected time frames?
  • # of participants accessing relevant services within 4 weeks after need is identified
Are program processes and governing frameworks consistently followed?
  • % of service processes that have fully documented procedures, including all necessary forms, checklists and guidelines
  • % of process documents that are reviewed and updated annually to ensure they accurately reflect current practices and regulatory requirements
  • % of program staff observed following procedures
  • % of sessions that complied with processes in these frameworks, e.g. goal setting, continuity of care, intake questions asked, screening tools and referrals given, opportunities to provide feedback
How satisfied are participants with the program?
  • % of participants that expressed satisfaction at program completion
Service use and reach (reach and dose)
How many people are receiving services?
  • # of people attending the services
Are the intended service users the people receiving services?
  • % of priority population group attending the program
  • % of people completing minimum program sessions
Are clients receiving the services they are seeking?
  • % of service users that attended services related to an identified need on intake forms
Do participants take up follow-up support/actions?
  • # of participants accessing recommended supports
How many people completed the program?
  • % of program participants that attended 80% of the program
  • % of program participants that attended weeks 2–6
Service quality (dose)
Are clients receiving a good quality program?
  • % of program documents that are explicitly informed by research evidence
  • % of evaluation processes followed
  • % of program participants that are satisfied with the program
  • % of sessions that complied with established processes, e.g. goal setting, continuity of care, intake questions asked, screening tools and referrals given, opportunities to provide feedback
  • # of needs assessments conducted in past 5 years
  • # of times program changed/updated in past 5 years
  • % of program changes informed by program data or research evidence
To what extent did program staff receive training in the program?
  • % of staff who have training completion certificates
  • % of staff trained in past 5 years
Is staffing sufficient in terms of numbers and qualifications/experience?
  • % of staffing ratios achieved in past 12 months
  • % of staff who meet qualifications/experience requirements
  • % of staff who agree that the program is appropriately staffed
  • % of staff who agree that the program staff are appropriately skilled
  • % of staff who have completed required training
Is performance significantly better or worse at one program site compared to another?
  • # of sessions delivered per term
  • % of clients that report program satisfaction
  • % of clients achieving desired program outcomes
  • % of clients that successfully completed 80% of the program
  • # of incidents reported
Awareness and accessibility
Is the community aware of the services available to them?
  • % of community members who have received information about available services
  • # of outreach events held
  • # of attendees at outreach events
  • % of community members who can identify at least 3 available services
To what extent are community members aware of how to access services?
  • % of community members who can accurately describe how to access specific services
  • # of enquiries or calls received by service providers regarding how to access services
  • # of community members who indicate an awareness of services
  • # of community members who state they know how to access services when needed
  • % of service users who indicate service access was easy
Can clients easily navigate between different programs and service systems?
  • % of clients who report satisfaction with the ease of navigating between services
  • # of referrals successfully completed between different programs
  • Average time taken for clients to transition from one service to another
Resource management
Are resources used effectively and efficiently?
  • $ per client served
  • % of budget spent on direct services versus administrative costs
  • # of services delivered within the planned time frame and budget
Are the resources, facilities and funding sufficient to support positive outcomes?
  • % of planned inputs that went into delivery
  • % of planned budget spent on delivery
  • Client satisfaction with venue, equipment and resources
Did the service allow sufficient time for program staff to do administrative tasks?
  • Time spent by individual staff on administrative tasks
Is staffing sufficient in terms of numbers and qualifications/experience?
  • # of staff at every program session
  • # of staff with relevant qualifications/experience
  • % of staff with recent PD
  • % of staff with cultural competence

Note: # = number; % = percentage; $ = dollars; PD = professional development.

Appendix B: Relevance of program logic model elements for process evaluation

Appendix C: Examples of published process evaluations


This is not an exhaustive list of evaluation examples but includes reports on programs relevant to child and family services. These reports were selected because they are freely available and showcase various methods used in process evaluation. They also provide detailed methodology sections and reflections on utilising evaluation findings.

Glossary


The definitions used here are informed by the literature listed in the bibliography and by the AIFS team’s professional expertise.

Continuous improvement: An ongoing effort to enhance programs, services or processes through small, incremental changes over time, rather than through major changes made all at once

Descriptive analysis: A type of analysis that typically describes how many, how much, who, what, when and to what extent

Evaluation questions: Questions that guide the focus and purpose of an evaluation, determining the methods, measures, analysis and use of the evaluation

Evidence-based programs: Programs that are based on scientific evidence and have been proven effective through rigorous evaluation and research

Implementation science: The study of methods and strategies to promote the systematic uptake of research findings and evidence-based practices for regular use by organisations and communities. It aims to understand and address the barriers to effective implementation and to improve the quality and efficiency of various processes and interventions across different fields.

Indicators: Specific, measurable data points or variables used to assess and evaluate the performance or progress of a program or intervention

Mixed methods: The use of both qualitative and quantitative data collection methods in an evaluation project

Outputs: Measures of what happens in a program or service and who receives the program/service

Performance standards: Specific targets or benchmarks used to judge whether a program is being delivered as intended and is achieving its goals

Process evaluation: A systematic way of examining how a program operates and is implemented, focusing on the activities conducted, their execution and adherence to the original plan

Program dose: The amount of a program delivered, including the number of sessions, duration and frequency of activities

Program evaluation: The systematic collection of information about activities, characteristics and outcomes of programs, used to make judgements, improve effectiveness, add to knowledge and/or inform decisions about programs

Program fidelity: The degree to which a program or intervention is delivered according to its original design

Program implementation: The process of executing and managing the activities and components of a program to ensure it is delivered as planned

Program logic model: A visual representation of what happens in the program, and the changes that are expected over time as a result of what happens in the program – also known as a logic model, results chain or chain of causation

Program reach: The extent to which the intended audience or population is exposed to or participates in a program

Qualitative data: Non-numerical information that provides insights into the why and how of certain phenomena, often used to understand attitudes, beliefs and behaviours

Quantitative data: Numerical information that can be measured and quantified, often used to understand how many, how much or how often something occurs

Routine monitoring: Ongoing data collection and real-time tracking of program processes and performance to maintain continuous oversight and make timely adjustments

Standardised measure: A measurement tool that has been rigorously tested and objectively assesses a concept through a series of consistent questions or items. These items typically use a uniform response format so that scores can be compared.

Thematic analysis: A method for identifying, analysing and reporting patterns (themes) within qualitative data

Theory of change: A comprehensive description and illustration of how and why a desired change is expected to happen in a particular context