Using research to answer practice questions

Content type: Short article
Published: March 2021
Researchers: Anagha Joshi, Kristel Alla, Nerida Joss

This is the second short article in a series focusing on the use of evidence in practice. This article describes how to incorporate appropriate research into decision making based on practice-relevant questions, and considers the strengths and limitations of using different types of research.

Research is a core component of an evidence-informed approach to practice. To support the best selection and use of research in decision making, an approach that maps a set of common research questions against appropriate types of research can be used as a guide.1 This can support you in selecting appropriate research to identify a problem in a community or to answer questions on the effectiveness of a program or appropriateness of a practice.2

In this article, we describe and apply this approach when answering practice questions in the child, family and community welfare sector, highlighting the strengths and potential limitations of using different types of research.

How practice questions can be answered by different types of research

There are four common ways research can be used to answer relevant practice questions:

  1. understanding the problem and its context
  2. finding out what works and determining resource allocation
  3. understanding processes to deliver programs/practices
  4. assessing the appropriateness and acceptability for users.1,2,3

We map these four ways, identifying the best types of research to answer these questions and providing research examples from our sector (see Table 1).

Table 1: Uses of research evidence and the practice questions they answer

Understanding the problem and its context
  Practice questions:
  • What is the problem that needs to be solved?
  • What are the needs of the community?
  Rationale for using research evidence:
  • To identify needs and issues (of the intended users)
  • To identify trends
  Types of research that answer these questions:
  • Cohort studies
  • Surveys
  • Case-control studies
  Examples relevant to the child, family and community welfare sector:
  • What are the factors associated with successful transition to out-of-home care?
  • Comparing the rates of family violence incidents from 2019/20

Finding out what works and determining resource allocation
  Practice questions:
  • Does the program/service work?
  • Is doing this work better than doing that?
  • Will it do more harm than good?
  • Is it worth investing in this program or service?
  Rationale for using research evidence:
  • To identify ‘what works’ and what does not to address a problem
  • To understand whether it is worth investing in a particular program, practice or intervention
  Types of research that answer these questions:
  • Systematic reviews
  • Randomised controlled trials
  • Quasi-experimental studies
  Examples relevant to the child, family and community welfare sector:
  • Do interventions to reduce homelessness work?
  • Is parent–child interaction therapy in foster care effective?

Understanding processes to deliver programs/practices
  Practice questions:
  • How do you implement this program/service effectively?
  • What is needed for the success of this program/service?
  Rationale for using research evidence:
  • To understand how and why things work
  • To understand the processes that lead to good outcomes
  Types of research that answer these questions:
  • Systematic reviews
  • Qualitative research
  • Surveys
  • Quasi-experimental studies
  Examples relevant to the child, family and community welfare sector:
  • What are the barriers and enablers of kangaroo mother care?
  • How does a mental health literacy intervention for parents work?

Assessing the appropriateness and acceptability for users
  Practice questions:
  • Is this the right service or program for this group?
  • Will intended users be willing to engage with what is offered?
  Rationale for using research evidence:
  • To understand whether clients are willing to engage with a program (acceptability)
  • To understand whether a program or practice suits their needs (appropriateness)
  Types of research that answer these questions:
  • Systematic reviews
  • Qualitative research
  • Surveys
  • Quasi-experimental studies
  Examples relevant to the child, family and community welfare sector:
  • What are the experiences of support during the perinatal period for new fathers?
  • Is telehealth acceptable and feasible for early intervention parent counselling?

1. Understanding the problem and its context

Effective practice decisions require an understanding of the problem, its context and the needs of the community. Research can be used to inform this. Surveys can be used to determine how many people are experiencing or are affected by an issue, as they can capture information on a large number of people at once.4 Cohort studies follow the same group over a period of time and can provide appropriate data about that group (e.g. to understand the associations between infant and childhood experience and adult outcomes).5 However, surveys and cohort studies are not appropriate when trying to explain why the problem is occurring or why a program or intervention is not working.4,6

2. Finding out what works and determining resource allocation

Identifying which program or intervention is best at solving a problem supports effective decisions on resource allocation. Systematic reviews provide strong evidence for whether certain programs or interventions work and why. This is because they systematically summarise the results and screen the quality of many studies on a topic to answer a question about what works. Systematic reviews are considered high-quality evidence when making decisions.4 However, systematic reviews may not be available, particularly when the evidence on a topic is newly emerging. In these instances, other types of research can be used including case-control or quasi-experimental designs.

In addition, randomised controlled trials (RCTs) can be helpful when answering questions on effectiveness (what works?) and cost effectiveness (is it worth it?). This is because RCTs can link causes to their impacts by randomly assigning participants to intervention and control groups, which increases the trustworthiness of the findings.7 RCTs are less appropriate for understanding the context, processes and acceptability of programs and practices (the ‘how’ and ‘why’ things work). They are also conducted in a controlled environment that often does not reflect real-world settings.
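For readers curious about what randomisation involves in practice, the assignment step at the heart of an RCT can be sketched in a few lines of Python. This is only an illustration: the participant identifiers and the even two-group split are hypothetical, not drawn from any study cited in this article.

```python
import random

def assign_groups(participants, seed=0):
    """Randomly split participants into intervention and control groups.

    Random assignment balances known and unknown differences between
    participants across the two groups on average, which is why RCT
    findings can support claims that an intervention caused an outcome.
    """
    rng = random.Random(seed)  # fixed seed so the allocation is reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical participant IDs, split into two groups of five
intervention, control = assign_groups([f"P{i}" for i in range(10)])
```

Real trials use more sophisticated allocation procedures (e.g. stratified or blocked randomisation), but the principle is the same: group membership is decided by chance, not by the researcher or the participant.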

3. Understanding processes to deliver programs or practices

After selecting a program or practice, understanding the key components of effective implementation supports good decision making and positive outcomes for users. Qualitative research can provide this, as it involves asking practitioners and users what they believe helped or hindered a program’s success. This type of data can be helpful for understanding the reasons that lead to program or service outcomes, and it works well alongside survey data to understand participant experience and program quality.

4. Assessing the appropriateness and acceptability for users

Understanding whether programs or services are acceptable or appropriate for intended users supports effective decision making. Qualitative research is useful for answering these practice questions as it provides an open dialogue for individuals to voice their concerns, opinions and experiences. As indicated above, qualitative research supports understanding the reasons that lead to program or service outcomes and it works well alongside survey data to understand participant experience and program quality.

A limitation of qualitative research is that it can be resource intensive, which can limit the number of participants in the research. In addition, individual experiences can be subjective, which means that if only a small number of people are interviewed then findings may not represent the majority.8

Conclusion

Choosing appropriate research when making evidence-informed decisions in practice supports improved outcomes in the child, family and community welfare sector. This article is a guide to understanding the best types of research evidence to use when asking common practice questions.

References

1. Petticrew, M., & Roberts, H. (2003). Evidence, hierarchies, and typologies: Horses for courses. Journal of Epidemiology and Community Health, 57(7), 527–529.

2. Kumanyika, S., Brownson, R. C., & Cheadle, A. (2012). The L.E.A.D. framework: Using tools from evidence-based public health to address evidence needs for obesity prevention. Preventing Chronic Disease, 9, E125.

3. Shaxson, L. (2014). Investing in Evidence: Lessons from the UK's Department for Environment, Food and Rural Affairs (Working paper 2). Jakarta: Knowledge Sector Initiative. Retrieved from www.ksi-indonesia.org/assets/uploads/original/2020/01/ksi-1580281998.pdf

4. Blanchet, K., Allen, C., Breckton, J., Davies, P., Duclos, D., Jansen, J. et al. (2018). Research Evidence in the Humanitarian Sector: A practice guide. London: Evidence Aid, London School of Hygiene and Tropical Medicine and Nesta (Alliance for Useful Evidence).

5. Caruana, E. J., Roman, M., Hernandez-Sanchez, J., & Solli, P. (2015). Longitudinal studies. Journal of Thoracic Disease, 7(11), E537–40.

6. Breckton, J. (2016). Using Research Evidence: A Practice Guide. London: Nesta (Alliance for Useful Evidence). Retrieved from www.alliance4usefulevidence.org/publication/using-research-evidence-a-practice-guide-january-2016

7. Hariton, E., & Locascio, J. J. (2018). Randomised controlled trials - the gold standard for effectiveness research. BJOG: International Journal of Obstetrics & Gynaecology, 125, 1716.

8. Galdas, P. (2017). Revisiting bias in qualitative research: Reflections on its relationship with funding and impact. Los Angeles, CA: SAGE Publications.