Using qualitative methods in program evaluation

Content type: Short article
Published: May 2016
Researchers: Kat Goldsworthy, Kelly Hand

Qualitative research seeks to answer questions about how social experience is created and given meaning, beyond the scope of numbers and statistics (Rogers & Goodrick, 2010). Qualitative research can use a variety of methods, such as interviews and observations – the method you choose should be based on what you are researching and the resources you have. Whichever method you choose, rigour and transparency are the keys to good qualitative research. That is, to be of high quality, your research should be supported by a series of logical and justifiable steps.

Where should I start?

To begin with, have a strong rationale for why you have chosen a qualitative approach to answer your research question(s), and identify suitable data collection methods (interviews, focus groups, observations, open-ended surveys, etc.) and any key perspectives that should be captured. Consider what techniques or concepts will guide the data analysis and interpretation stage, and what quality checks you can put in place to justify your interpretations.

Overall, being clear about the methodological process will help to strengthen the credibility of your findings.

Can qualitative methods be used to measure program impact?

If you’re considering adopting a qualitative approach to evaluate the impact of a program¹, be aware that using strictly qualitative methods may not generate the answers you need, and is generally recommended in only a handful of cases: for instance, when the population size is small, when there are no existing outcome measures that can be used with your target group, or when you are evaluating a pilot version of the program.

An alternative is to incorporate qualitative methods into your evaluation design alongside quantitative methods, as part of a mixed-methods design. Evaluations that adopt a mixed-methods approach are well placed to establish any causal relationships between the program content and outcomes, and to tell us how and why these changes occurred. Mixed methods can also be used to:

  • test assumptions of how programs work in practice;
  • identify or explore unintended outcomes of the program; and
  • capture detailed and complex data about a particular issue or program, and enhance understanding of what aspects of the program have and haven’t worked.

Can anyone conduct a qualitative program evaluation?

Regardless of the techniques you choose to gather information about your program, conducting a quality qualitative evaluation relies on having time and expertise. Collecting participant insights can be time-consuming, especially if only one evaluator is involved in the project. There is also the potential to be left with a large volume of data that needs to be interpreted, synthesised and communicated. Having an expert in qualitative methods conduct or assist with the evaluation will help to ensure that these tasks are completed in a systematic and rigorous way. The evaluation write-up is a good opportunity to demonstrate that your evaluation is of high quality – but again, this requires a specific set of skills.

Further reading

  • Ezzy, D. (2002). Qualitative analysis: Practice and innovation. Crows Nest: Allen and Unwin. Chapter 4.
  • Liamputtong, P., & Ezzy, D. (2009). Qualitative research methods (3rd ed.). South Melbourne: Oxford University Press.
  • Miller, E., & Daly, E. (2013). Understanding and measuring outcomes: The role of qualitative data. Glasgow, Scotland: Institute of Research and Innovation in Social Services.
  • Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Thousand Oaks: Sage Publications.
  • Stufflebeam, D. L., & Shinkfield, A. J. (2007). Evaluation theory, models, and applications. San Francisco: Jossey-Bass.
  • Trochim, W. (2006). The Research Methods Knowledge Base. Ithaca: Web Center for Social Research Methods. Qualitative measures section.

References

Rogers, P., & Goodrick, D. (2010). Qualitative data analysis. In J. Wholey, H. Hatry, & K. Newcomer (Eds.), Handbook of practical program evaluation (3rd ed., pp. 429–453). San Francisco: Jossey-Bass.

1. For a discussion of the differences between impact and process evaluation, see the CFCA Practitioner Resource Evidence-based practice and service-based evaluation.

