Evaluating the outcomes of programs for Indigenous families and communities

Content type
Practice guide
Published

February 2017

Researchers

Stewart Muir, Adam Dean

Overview

This practitioner resource outlines some key considerations for community sector organisations and service providers who are thinking about evaluating the outcomes or impact of a program for Indigenous families or communities.

More general guidance on evaluation terminology, techniques and planning can be found in the Child Family Community Australia (CFCA) Research and Evaluation Resources. A useful starting point is the CFCA Practitioner Resource Planning for Evaluation, which outlines the different stages of an evaluation and has a useful discussion of first steps.

Key messages

  • There is limited evidence for the efficacy of most programs for Indigenous families and relatively few examples of programs with publicly available outcomes or impact evaluations.

  • Outcomes or impact evaluations should be built into program design. Even when they are not, an evaluation should be carefully planned to ensure that it is properly resourced, that the questions and methods are culturally appropriate, and that the process is locally acceptable and appropriate.

  • Respectful relationships and opportunities for meaningful involvement by Indigenous people should be built into all stages of an evaluation, including evaluation planning and design.

  • Community consultation and relationship building are essential components of evaluations of programs for Indigenous people and families, but they can be time-consuming and stretch evaluation time frames; this needs to be acknowledged in evaluation planning and resourcing.

  • The evaluation method should be appropriate to the evaluation questions and the type of program or intervention. Using a method poorly matched to the evaluation questions can waste time and resources.

  • In practice, the precision and rigour of the evaluation method will likely be balanced with other factors such as the available resources, the complexity of the program, the sample size, levels of intercultural understanding, and community or organisational capacity.

  • Overcoming or mitigating challenges to good evaluation requires careful planning. It can also require additional resources or support from funders and sponsors.

The authors would like to acknowledge the assistance of the Cultural and Indigenous Research Centre Australia (CIRCA) in reviewing and providing feedback for this practice guide. CIRCA is a member of the Expert Panel Project Industry List and is available to provide help with program planning, implementation and evaluation.

Introduction

About this resource

This resource outlines some issues and common challenges that require careful thought when planning an evaluation of a program targeting Indigenous people. This is not, however, a how-to guide to evaluation. Child Family Community Australia (CFCA) already has an extensive range of resources that outline evaluation principles and methods. In any case, the specifics of an evaluation will depend on the program and the context. Rather, this resource highlights the need for commissioning organisations and funding bodies to adequately plan for evaluation, to consider potential barriers and solutions before the evaluation starts, and to ensure the meaningful participation of Indigenous people. Developing an Indigenous-focused evaluation culture will not guarantee an evaluation's success; however, the absence of such a culture is likely to make the evaluation task more difficult and less likely to meet local community needs. Case studies are included in the resource to illustrate key points.

The need for outcome evaluations

Despite growing calls for social policies and services to become more "evidence-based", most programs for Indigenous families or communities have little evidence for their effectiveness (Osborne, Baum, & Brown, 2013; Productivity Commission, 2013). Relatively few programs or services have published evaluations of their outcomes, and even fewer have done so using methods that can reliably indicate whether a program is having the intended effect, how much of an effect it has, or whether any observed results can be attributed to the program and not to something else (Biddle, 2013; Day & Francisco, 2013).1

There are several reasons for this dearth of evidence. Evaluations of social programs are rarely straightforward, especially when the programs address community-wide issues, tackle complex or entrenched problems, or where the relationship between cause and effect, action and outcome, is poorly understood or changes according to circumstance (Humphreys et al., 2009; Rogers, 2010). Evaluations of Indigenous programs can also be especially complex because of the context in which they take place (Guenther, Arnott, & Williams, 2009).

Programs often run in communities with high levels of socioeconomic disadvantage, limited organisational resources and varying levels of intercultural understanding or trust. This can make it difficult to establish program effects in the short or medium term. Indigenous programs are also often undertaken against the backdrop of fluctuating policy priorities and a range of other social interventions, complicating attempts to separate out the effects of any specific program (Moran, 2016).

The complexity of collecting evidence for effectiveness is exacerbated by service providers' common focus on the urgent task of delivering services rather than on testing whether they are working (McKendrick, Brooks, Hudson, Thorpe, & Bennett, 2013).

Indigenous community organisations have rarely had the funding, incentives or support required to develop an internal evaluation or data collection capacity. As a result, evaluations are rarely built into program design, staff are not funded or trained to collect evaluation data and program outcomes are often not formally evaluated. Even when outcome or impact evaluations are undertaken, it is often as an afterthought. This can mean that insufficient time or resources have been set aside for good quality evaluation and insufficient thought given to how program outcomes can be measured (Mayne, 2010).

The consequences of this lack of evidence can affect the whole sector. For funders, it can be hard to know whether a program should be funded or expanded or whether the money would be better spent elsewhere. For program managers, the lack of evaluation data can limit their ability to adapt or improve programs, while for Indigenous communities who run or host such programs, it can be hard to know how to achieve the social outcomes they want. Hence, exploring program outcomes or impacts can be vital if time and resources are not to be wasted.

Evaluation: The basics

The ideal first step in evaluation is building it into a program's design and setting aside enough time and money for it to be done regularly and done well. In the real world of service delivery, however, evaluation often "lags behind practice" (Day, Francisco, & Jones, 2013, p. 12) and evaluations are often commissioned after a program has already started. Nonetheless, it is still vital that careful thought goes into what the evaluation is meant to achieve, what outcomes should be evaluated, what baseline data is available or can be collected, what method is best suited to measuring the program's outcomes, and how much time can be allowed for the technical parts of the evaluation and relationship building with the Indigenous people or communities involved.

There are some generally well-understood steps in the evaluation process and these also apply to Indigenous programs (see Box 1). The process starts with describing the program objectives and how the program is supposed to deliver them. To enable this, it helps to have a program logic (a plan that sets out the program's objectives and how they are meant to be achieved) and some well-defined, measurable outcomes.
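As a purely hypothetical illustration (this guide prescribes no particular format, and the program, outcomes and measures below are invented), a simple program logic and its measurable outcomes might be sketched as structured data:

    # A hypothetical, highly simplified program logic, written as plain data
    # purely for illustration; none of these details come from a real program.
    program_logic = {
        "objective": "Improve school readiness of children under 5",
        "activities": ["weekly supported playgroup", "parenting workshops"],
        "outputs": ["sessions delivered", "families attending regularly"],
        "short_term_outcomes": ["increased parental confidence"],
        "medium_term_outcomes": ["more frequent home learning activities"],
        "long_term_outcomes": ["improved school readiness at age 5"],
    }

    # A measurable outcome pairs each goal with an indicator and a data source.
    measures = {
        "increased parental confidence": ("pre-post survey score", "participant survey"),
        "families attending regularly": ("attendance rate", "program records"),
    }

    for outcome, (indicator, source) in measures.items():
        print(f"{outcome}: measured by {indicator} (from {source})")

Writing the logic down this explicitly means each outcome can be paired with an indicator and a data source before any data collection begins.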

Not all of a program's objectives have to be evaluated at once and some will be easier to measure than others. Most outcomes evaluations will only assess short- or medium-term outcomes or explore indicators of progress against longer-term goals. Having a clear idea of what outcomes will be measured can then help determine research questions, appropriate methods and whether an external evaluator is required. If an external evaluator is required, then there will need to be discussion about what the program's evaluation needs are and how they can be met. This may not be a one-off conversation but rather something that evolves over time. It can be helpful to regard evaluations as part of an ongoing process. Evidence is built over time and the findings of a single evaluation are rarely a definitive statement about effectiveness.

Box 1: The broad steps in an evaluation

The steps in an evaluation include:

  • describing in detail the program, its objectives, and how its components lead to its intended outcomes;
  • formulating evaluation questions;
  • deciding which measures or methods will be able to provide the data to answer the question(s);
  • deciding how to collect the data and putting in place the procedures that will allow the data to be collected;
  • collecting and analysing the data; and
  • interpreting and acting on the findings.

Source: Parker & Robinson (2013)

Indigenous community involvement

Community consultation and participation are essential components in the design, data collection and reporting phases of evaluation. This involvement should cut across all the steps described above (Box 1) and be planned for from the beginning. Indigenous community members and/or community researchers - whether they be program staff, program participants or community stakeholders - are often the best guides to their community. As such, they are best placed to identify issues that might hinder or aid an evaluation and can help ensure that the design, data collection and reporting meet local needs.

Genuine opportunities for Indigenous community involvement are also essential because it is, after all, Indigenous people who are most affected by the evaluated program; hence, they will have an intrinsic interest in the outcomes.

There are several useful guides to ethical and appropriate research with Indigenous people and common to virtually all of them is an emphasis on respectful relationships, ensuring that the research or evaluation is of benefit to the Indigenous people concerned, and a commitment to substantive Indigenous participation and sharing of findings (see Jamieson et al., 2012; National Health and Medical Research Council, 2007; Orr et al., 2012; Walker, Ballard, & Taylor, 2003).2

Box 2: Case study: Indigenous community involvement in evaluation planning

Ninti One is a not-for-profit lead organisation supporting the delivery of the Stronger Communities for Children (SCfC) program to 10 Indigenous communities in the Northern Territory. As part of its role, Ninti One supports the planning and design of program evaluations to help ensure that what is measured in the evaluation process both reflects local needs and meets funder requirements. Measuring local change is a key principle of the SCfC program, with the expectation that the program will generate its own evidence through local measurement and evaluation.

Advisory groups of local community members, also known as SCfC Boards, are an essential part of the SCfC program. The advisory groups work with the wider community to identify local needs and they work with service providers to develop program strategies, make strategic decisions and coordinate and deliver services. Ninti One, service providers and other experts provide training and advice to the advisory groups to support them in their use of evidence and evaluation data for decision-making.

In one example, the Kardu Lurruth Ngala Purrungime SCfC Board in Wadeye evaluated their Kids Kitchen school holiday program provided by the Palngnun Wurnangat Association. Ninti One facilitated workshops with the SCfC Board in Wadeye to develop a Community Plan, articulating the community's visions, priorities and services required to achieve its goals. Using these priorities as their basis, the SCfC Board developed an Impact Assessment Framework, looking at short-, medium- and long-term outcomes. For each outcome, the SCfC Board identified a measure of change, and learned how conducting surveys and interviews, recording community observations and responses, and reviewing population statistics could help them understand the changes happening in their community. Using this framework, the SCfC Board was able to assess whether the program achieved desired outcomes.

More information about the role of advisory groups in the SCfC program can be found on the Ninti One website.

An important early step in planning and consultation is discussion about the purpose of the evaluation and whose needs it serves; this discussion needs to involve the commissioning organisation, the evaluators and Indigenous community stakeholders. Reaching agreement about an evaluation's purpose is not always straightforward. Program funders seeking evidence of effectiveness or cost-benefit are often the main reason that evaluations take place at all, but the questions that funders want answered will not necessarily meet the needs of program managers, program participants or the wider community (Price, McCoy, & Mafi, 2012). Even the definition of "success" against a single measure might vary according to the perspective of the funder, program manager or participant (Guenther et al., 2009). Hence, the planning process might need to include a way of working out mutually acceptable evaluation questions or even parallel sets of questions. This might also require using a range of methods, and gathering different kinds of evidence, for a single outcome measure.

None of this is to say that community involvement is always easy; it can be especially challenging for non-Indigenous organisations or evaluators. There can be multiple stakeholders with different needs or agendas and it is not always possible, or desirable, to identify individuals who can "speak for the community". This means that many people may have to be consulted. Further, as Scougall (2006) notes, the reality is that most evaluators are non-Indigenous and are short-term visitors to the community where the evaluation takes place. As such, they often lack pre-existing relationships of trust. Without the time or resources to develop such relationships, the ideal of community participation and empowerment can be difficult to achieve in practice.

Most often, non-Indigenous evaluators will have to seek the assistance of a local "sponsor" or community researcher to facilitate community engagement. Such individuals can draw on their knowledge of the community and on their local relationships to facilitate the evaluation process (Guenther et al., 2014; Price et al., 2012). This will often be the most practical solution for evaluations with relatively tight timelines and is often the start of relationship building in the longer term too. However, this kind of relationship building can still take time and care needs to be taken that an adequate range of community views is being accessed. Ultimately, what is often required is for commissioning organisations and program funders to understand that building and maintaining relationships within the local community is essential and that it can require a significant upfront investment in time and resources.

Box 3: Case study: Empowering participants to engage with the evaluation process

Kids Caring for Country is a program based in Murwillumbah, New South Wales, that facilitates an Aboriginal All Ages Playgroup and After School Group out of which several other activities operate. The program is designed to empower participants to take an active role in determining program activities, including how the program is evaluated.

In approaching the evaluation process, staff were concerned that overly intrusive or culturally inappropriate evaluation tools would have negative effects on the ongoing trust and operation of the program. Responding to these concerns, program staff sought to empower parents and family members to engage with the process early on, beginning with evaluation design.

Staff started this process by introducing the need for evaluation to participants during regular Yarning Circle sessions, where staff asked for their input on the proposed evaluation tool, the Growth and Empowerment Measure (GEM). Staff discussed each question in the GEM with parents and carers, who were able to suggest changes to better represent their priorities of culture, family and spirituality. This process took several weeks, to ensure that all participants had a say in determining how their project would be more meaningfully evaluated. Proposed amendments were then presented to designers of the tool to ensure that its validity was maintained.

In planning for the evaluation survey, staff determined that a special workshop led by the family support worker and cultural advisor would be set up to facilitate a supportive group evaluation process. Participants, who were already familiar with the evaluation tool, were reminded about the workshop a week in advance, and a separate program for kids was run in parallel to allow parents and carers (including teenagers with caring roles) time to reflect on their experiences and emotional wellbeing and to complete the survey.

Community capacity is also an issue, particularly for those seeking to use participatory methods or to use Indigenous staff in data collection. Service providers or community organisations can sometimes have limited capacity to get involved in all phases of an evaluation because they lack the resources, staff capacity and/or training to do so. These are real challenges and can make it difficult to have substantive community input into data collection, especially in the short or medium term. However, it can be worthwhile in the longer term to try to build an internal "evaluation culture" where evidence and evaluation data are used to inform decision-making and where staff are encouraged and rewarded for taking part in evaluations (Stewart, 2014). For Indigenous organisations, in particular, building local community capacity can be an important additional aim of an outcomes evaluation because it can allow for greater Indigenous control of the process and ultimately make data collection more responsive to local needs.

Choosing an evaluation method

Although many evaluation frameworks that stress the need for community involvement use qualitative methods, this does not have to be the case and no specific method is inherently best suited to evaluating the outcomes of Indigenous programs. Given the enormous heterogeneity of Indigenous communities across the country, what is most appropriate will differ according to the context. Ultimately, decisions about methods will almost invariably balance the evaluation objectives, the time and resources available and what evaluators and key stakeholders think is most feasible and appropriate.

That said, it is important that the chosen method can actually address the evaluation questions. This might seem obvious but not all research or evaluation techniques are equally well suited to measuring all the different types of outcome or impact. The choice of evaluation method is a particular issue in the context of Indigenous evaluations, both because of the general lack of robust evidence for the effectiveness of Indigenous programs and because of the related "image problem" (Walter & Anderson, 2013, p. 130) that quantitative methods have sometimes had within Indigenous research circles. That image problem stems largely from concerns about the appropriateness of such methods, the assumptions underpinning the research, and the frequent lack of meaningful consultation with Indigenous communities by the mainly non-Indigenous researchers using them.

The lack of evidence has sometimes been attributed not only to a lack of evaluations but also to the more specific lack of evaluations using experimental methods such as randomised controlled trials (RCTs) or quasi-experiments with control groups (Day & Francisco, 2013). Such methods are often considered to produce the most robust evidence that a program works because they are the best able to assess the "counterfactual"; that is, what would have happened had the program not been run at all (Leigh, 2010).

This demand for "good" evidence suggests that commissioning organisations should at least consider using RCTs or quasi-experiments. There are, however, some common barriers to their use. Cost and feasibility are obvious issues. RCTs and quasi-experiments may produce the most robust evidence but they are often expensive to run. This can put them out of the reach of smaller community organisations unless they can access additional financial support or partner with other community organisations. Further, although RCTs are routinely described as the "gold standard" of evaluation evidence, they are often not suited to the relatively uncontrolled environments of service provision or whole-of-community interventions. Such environments can be too changeable and complex for the evaluators to factor in all the relevant variables and they also rarely allow for truly "random" selection (Humphreys et al., 2009; Kelly, 2010; McDonald, 2011).

Box 4: Evaluation approaches and methods

For an overview of the different types of methods and evidence used in evaluations, see the CFCA practitioner resource Evidence-based practice and service-based evaluation. Also see Developing a culture of evaluation and research.

None of these challenges necessarily make RCTs untenable for evaluating Indigenous programs. When a program is well funded, has a relatively straightforward set of objectives and a clear theory of cause and effect, RCTs or quasi-experiments should be considered. Further, with enough planning and thought it can be possible to overcome some of the challenges outlined above. Biddle (2013), for example, suggests that randomising either the timing of when people start a program or the promotion of the program might mitigate the problem of non-random participation in the program. However, the feasibility of using any form of RCT should be carefully thought out: a poorly conceived, designed or run RCT is of little value.
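As a rough sketch of how this randomised-timing suggestion might work in principle, the simulation below randomly assigns families to an early or delayed program start and compares outcomes during the period when only the early group has received the program. The names, group sizes and scores are all invented for illustration:

    # A minimal sketch of randomising the timing of program entry rather than
    # participation itself. Everyone eventually receives the program; the
    # delayed group serves as a temporary comparison group.
    import random
    import statistics

    random.seed(42)

    participants = [f"family_{i}" for i in range(40)]
    random.shuffle(participants)
    early_start = participants[:20]
    delayed_start = participants[20:]

    # Hypothetical outcome scores at the end of the first period, when only
    # the early-start group has received the program (simulated here with a
    # modest built-in effect).
    scores = {p: random.gauss(55 if p in early_start else 50, 10)
              for p in participants}

    early_mean = statistics.mean(scores[p] for p in early_start)
    delayed_mean = statistics.mean(scores[p] for p in delayed_start)
    print(f"Estimated program effect: {early_mean - delayed_mean:.1f} points")

A staggered-start design of this kind can also be easier to negotiate with communities because no participant is permanently denied the program, although the comparison only holds for the waiting period.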

At the other end of the evidence scale, and far more commonly used in this context, are qualitative methods. These methods are often the quickest and easiest to deploy, especially after a program has already started. Qualitative methods are also sometimes preferred in Indigenous contexts because they allow participants to express themselves using their own words and concepts rather than the imposed categories of a survey or psychometric measure (Chouinard & Cousins, 2007). Qualitative methods can also be very effective in exploring specific case studies or uncovering how and why a program works or does not work.

Box 5: Sample sizes

Choices about research methods, and the way results are interpreted, can also be affected by sample size. In small remote communities, or in many urban settings too, there simply might not be enough participants in a program for quantitative measures to produce meaningful results. It can also be difficult in small communities to find adequately sized participant groups and matched control groups. Even when there is a sufficiently large participant group to start with, sometimes heavy attrition rates can radically reduce the size of those who complete the program.

These challenges are not always easily solved. Over-recruitment can be used to counter attrition but it requires a sufficiently large population to do so and it often costs more. Hence in some cases, despite their limitations, qualitative case study approaches might be the best option for small program evaluations. In other instances, it might be enough to exercise caution when interpreting quantitative results from small samples, especially when generalising to other sites or future iterations of the program.
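To make the sample-size problem concrete, the sketch below runs a conventional power calculation (here using the Python statsmodels library); the effect size and group sizes are hypothetical and are not drawn from any program discussed in this resource:

    # How many participants a two-group comparison needs, and how little
    # power a small program actually has. Figures are illustrative only.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Participants needed per group to detect a moderate effect (d = 0.5)
    # at the conventional 80% power and 5% significance level.
    n_required = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(f"Required per group: {n_required:.0f}")  # roughly 64 per group

    # Power achieved if a small community program can only recruit 15
    # participants per group.
    achieved = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=15)
    print(f"Power with 15 per group: {achieved:.2f}")  # roughly 0.26

In this illustration, a program recruiting 15 participants per group would have roughly a one-in-four chance of detecting a genuine moderate effect, which is exactly the situation described above.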

It might also sometimes be possible to "think outside the box" and try alternative approaches. If the program is run in several places, it might be possible to aggregate results in order to increase the sample size. Care, however, needs to be taken to ensure that program implementation and the aggregated communities are sufficiently similar for the results to be comparable. This can be difficult, given the heterogeneity of Indigenous communities. That said, evaluation approaches such as "realist evaluation" sometimes regard the difference between communities as part of the comparison. Realist evaluation is a methodological framework that attempts to understand the context in which something works and for whom (Pawson & Tilley, 1997). The focus on different contexts can mean this methodology is suitable for comparing outcomes for different localities, or different sub-groups, in order to understand how or why they differ (Westhorp, 2014).

Whether any of these approaches are feasible will depend on the specific program context, resources and the time available. For programs serving a very small population, if the program already has well-developed objectives and a well thought-out program logic, it might also be worth reconsidering whether undertaking a formal outcomes evaluation is a good use of resources.

However, qualitative methods on their own are seldom suited to measuring most types of program outcome, largely because they can only rarely measure whether real change has occurred. Despite the previously mentioned concerns about the use, or misuse, of quantitative methods with Indigenous people or communities, it is also sometimes the case that quantitative measures can be both methodologically suitable and culturally appropriate (Walter & Anderson, 2013). Surveys, such as the Longitudinal Study of Indigenous Children (LSIC), have been able to successfully gather statistical data in a range of Indigenous community settings because they have had Indigenous leadership, expert and community input into survey design and content, have employed Indigenous interviewers and have committed to reporting back their findings to participants (Department of Social Services, 2016).

In practice, the most feasible solution in real-life Indigenous service environments is often to use mixed methods to collect data and gather evidence (Stufflebeam & Shinkfield, 2007). As always, the specific methods will depend on the problem, the context and what information is available. They might, for example, include mixing quasi-experiments, pre-post surveys or analyses of program and administrative data with qualitative interviews. Even in combination these methods can rarely make definitive statements about whether an evaluated program actually caused any observed changes in program participants, but consistent results can be a strong indicator as to whether a program is having an effect (Parker & Robinson, 2013).
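As an illustration of the quantitative strand of such a mixed-methods design, the sketch below runs a simple pre-post comparison on invented scores; in a real evaluation the scores might come from a culturally adapted survey or a measure such as the GEM, and the result would be read alongside the qualitative findings:

    # A paired pre-post comparison of participant scores. The numbers are
    # fabricated for illustration and carry no empirical meaning.
    from scipy import stats

    # Hypothetical wellbeing scores for the same ten participants before
    # and after the program (matched by position).
    pre = [12, 15, 11, 14, 10, 13, 16, 12, 11, 14]
    post = [15, 16, 14, 17, 12, 15, 18, 13, 14, 16]

    t_stat, p_value = stats.ttest_rel(pre, post)
    mean_change = sum(b - a for a, b in zip(pre, post)) / len(pre)

    print(f"Mean change: {mean_change:.1f} points")
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

A pre-post design alone cannot show that the program caused the change; as noted above, it is the consistency of results across methods that strengthens the inference.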

Adapting evaluation methods and measures

The appropriateness of the chosen data-collection methods for the community context should be considered from the beginning, but issues can still emerge in the course of data collection. For example, sometimes the methods or measures used might not work out in practice and could need adaptation to the local context.

In remote communities, in particular, community members can have varying levels of literacy and numeracy and, in some parts of central or northern Australia, varying levels of facility in standard Australian English. More generally, whatever the context and whatever the method, care has to be taken to ensure that people understand the questions they are being asked. Survey questions, measurement tools and psychometric tests that were developed for largely urban and non-Indigenous populations might not be appropriate for people with different world views.

Community involvement again can be an important factor. Issues of language and understanding can be alleviated when there has been time for relationship building or when community researchers are involved in data collection. Similarly, although surveys and psychological measures can be difficult when English is not the first language of the participant (or of the local interviewer), or when the concepts used are alien to participants, it is often possible to work with evaluators and the community to create culturally relevant survey materials. Adapting measures for local contexts is likely to be time consuming but is nonetheless a crucial consideration (Guenther & Boonstra, 2009).

In some cases, entirely new measures might be required. One tool developed specifically for Indigenous contexts is the Growth and Empowerment Measure (GEM). This is a validated measure developed originally for an Indigenous family wellbeing program that has subsequently been used in other evaluations measuring the outcomes of Indigenous empowerment programs (Kinchin, Jacups, Tsey, & Lines, 2015). The tool was designed to measure change in dimensions of empowerment in ways that were defined and described by Indigenous people (Haswell et al., 2010). There are few such specific measures and, given the wide variability in the lives of Indigenous people across the country, those that do exist are unlikely to be applicable in all settings. What such measures indicate, however, is that it is possible to work with local communities and key stakeholders to devise methods that are both locally appropriate and methodologically robust.

Box 6: Case study: Translating outcome measures for local contexts

Families and Schools Together (FAST) is an early intervention and prevention program designed to strengthen family functioning and build protective factors for children. FAST is an international program that uses established evaluation processes but it has been adapted to the Northern Territory context by FAST NT.

When FAST NT first prepared to evaluate their program, it became clear that many of the evaluation tools used to assess outcomes (e.g., validated psychometric tools) would not work in the remote Indigenous communities where they were rolling out the program because the language and concepts that the tools used were not always meaningful for program participants. Using these tools, therefore, risked collecting meaningless data.

To overcome these issues, FAST NT began work with an external consultant to develop new evaluation processes that would translate their pre-packaged measurement tools into something workable in the specific Indigenous community contexts where they worked and where participants often had low literacy and numeracy skills. Part of the challenge was developing an evaluation process that built a robust evidence base using measurement tools that were meaningful to local participants while also satisfying funder requirements.

The new process needed to be:

  • culturally relevant (within the participants' frame of reference);
  • meaningful to participants (allow participants to meaningfully engage with the process);
  • able to measure change;
  • easily administered;
  • consistent with measures included in the international tool; and
  • able to meet the analytic requirements of funder reporting frameworks.

Part of the task turned out to be replacing the psychometric survey tool - which was not producing reliable results - with a narrative inquiry tool that used pictures and symbols that were meaningful to local participants but could also be translated into definitive outcomes understandable to funders.

This process was not simple. It involved an ongoing process of trialling and re-trialling evaluation tools to see what worked and what could be improved. As in this case study, the translation of existing measurement tools into new contexts may take significant time and effort to ensure that they are meaningful to participants while also meeting desired standards of evidence.

Reporting

The different audiences for an evaluation may also require different dissemination strategies (see Dissemination of evaluation findings). It is essential that findings are disseminated beyond an organisation's management or program funders if evidence is to become part of service provision and to prevent or minimise community scepticism about the value of the process. As in any evaluation of a community program, thought will need to go into how evaluation findings are best reported to practitioners, program participants and community stakeholders. Dissemination should be planned for in the commissioning phase and will need adequate time and resourcing.

What such a multipronged or tailored dissemination strategy means in practice will vary according to the context and the audience. It might include producing summary leaflets outlining the findings or presentations of findings at workshops and meetings. The Lowitja Institute's Guide for Researchers (Laycock, 2011) has a chapter on what methods can be most appropriate for reporting the results of health research to different audiences, most of which can be used for disseminating evaluation results.

Finally, consider making evaluation findings available to the wider public. The lack of good evidence for the effectiveness of many Indigenous programs is exacerbated by the lack of accessible evaluation data or findings (Cobb-Clark, 2013). Not releasing evaluation data can be especially tempting when evaluations have found limited or no evidence of change, but identifying what is not working can help programs adapt and improve and can be hugely helpful to future program design.

Further reading

Biddle, N. (2013). Data about and for Aboriginal and Torres Strait Islander Australians. (Issues paper No. 10, prepared for the Closing the Gap Clearinghouse). Canberra: Australian Institute of Health and Welfare, & Melbourne: Australian Institute of Family Studies. Retrieved from <www.aihw.gov.au/WorkArea/DownloadAsset.aspx?id=60129548209>.

Jamieson, L. M., Paradies, Y. C., Eades, S., Chong, A., Maple-Brown, L., Morris, P. et al. (2012). Ten principles relevant to health research among Indigenous Australian populations, Medical Journal of Australia, 197(1), 16-18. Retrieved from <www.mja.com.au/system/files/issues/197_01_020712/jam11642_fm.pdf>.

Laycock, A., with Walker, D., Harrison, N., & Brands, J. (2011). Researching Indigenous health: A practical guide for researchers. Melbourne: The Lowitja Institute. Retrieved from <www.lowitja.org.au/researchers-guide>.

Orr, M., Kenny, P., Gorey, I. N., Dixon, T., Mir A., Cox E., & Wilson J. (2012). Aboriginal knowledge and intellectual property protocol community guide. Alice Springs: Ninti One Limited & Waltja Tjutangku Palyapayi Aboriginal Corporation. Retrieved from <www.nintione.com.au/resource/Aboriginal-Knowledge-and-IP-Protocol-Community-Guide-booklet-A5.pdf>.

References

  • Biddle, N. (2013). Data about and for Aboriginal and Torres Strait Islander Australians. (Issues paper no. 10 prepared for the Closing the Gap Clearinghouse). Canberra: Australian Institute of Health and Welfare, & Melbourne: Australian Institute of Family Studies. Retrieved from <www.aihw.gov.au/WorkArea/DownloadAsset.aspx?id=60129548209>.
  • Chouinard, J. A., & Cousins, J. B. (2007). Culturally competent evaluation for Aboriginal communities: A review of the empirical literature. Journal of MultiDisciplinary Evaluation, 4(8), 40-57. Retrieved from <http://uaf.edu/ces/info/reporting/programevals/aboriginal-eval.pdf>.
  • Cobb-Clark, D. A. (2013). The case for making public policy evaluations public. In Productivity Commission, Better Indigenous policies: The role of evaluation. Roundtable proceedings (pp. 81-91). Canberra: Productivity Commission. Retrieved from <www.pc.gov.au/research/conference-proceedings/better-indigenous-policies>.
  • Day, A., & Francisco, A. (2013). Social and emotional wellbeing in Indigenous Australians: Identifying promising interventions. Australian and New Zealand Journal of Public Health, 37(4), 350-355.
  • Day, A., Francisco, A., & Jones, R. (2013). Programs to improve interpersonal safety in Indigenous communities: Evidence and issues (Issues paper No. 4, prepared for the Closing the Gap Clearinghouse). Canberra: Australian Institute of Health and Welfare, & Melbourne: Australian Institute of Family Studies.
  • Department of Social Services. (2016). Footprints in time: The longitudinal study of Indigenous children. Data user guide, release 7.0. Canberra: Department of Social Services.
  • Guenther, J., Arnott, A., & Williams, E. (2009). Measuring the unmeasurable: Complex evaluations in the Northern Territory. NARU Seminar Series, Darwin, 8 October 2009. Retrieved from <naru.anu.edu.au/__documents/seminars/2009/paper_guenther_oct2009.pdf>.
  • Guenther, J., & Boonstra, M. (2009). Adapting evaluation materials for remote Indigenous communities and low-literacy participants. Presentation to APCCAN 2009 Symposium 08, Perth, 17 November 2009. Retrieved from <www.catconatus.com.au/docs/091117_APCCAN_FAST.pdf>.
  • Guenther, J., Osborne, S., Arnott, A., Williams, E., & Disbray, S. (2014). Amplifying the voice of remote Aboriginal and Torres Strait Islander VET stakeholders using research methodologies: Volume 1. AVETRA 17th Annual Conference: Informing Changes in VET Policy and Practice: The Central Role of Research. Surfers Paradise: Australian Vocational Education and Training Research Association. Retrieved from <avetra.org.au/wp-content/uploads/2014/05/Abstract-33.pdf>.
  • Haswell, M. R., Kavanagh, D., Tsey, K., Reilly, L., Cadet-James, Y., Laliberte, A. et al. (2010). Psychometric validation of the growth and empowerment measure (GEM) applied with Indigenous Australians. Australian and New Zealand Journal of Psychiatry, 44(9), 791-799.
  • Humphreys, J. S., Kuipers, P., Wakerman, J., Wells, R., Jones, J. A., & Kinsman, L. D. (2009). How far can systematic reviews inform policy development for "wicked" rural health service problems? Australian Health Review, 33(4), 592-600.
  • Jamieson, L. M., Paradies, Y. C., Eades, S., Chong, A., Maple-Brown, L., Morris, P. et al. (2012). Ten principles relevant to health research among Indigenous Australian populations, Medical Journal of Australia, 197(1), 16-18. Retrieved from <www.mja.com.au/system/files/issues/197_01_020712/jam11642_fm.pdf>.
  • Kelly, T. (2010). Five simple rules for evaluating complex community initiatives. Community Investments, 22(1), 99-22, 36. Retrieved from <www.frbsf.org/community-development/files/T_Kelly.pdf>.
  • Kinchin, I., Jacups, S., Tsey, K., & Lines, K. (2015). An empowerment intervention for Indigenous communities: An outcome assessment. BMC Psychology, 15(3), 1-5.
  • Laycock, A., with Walker, D., Harrison, N., & Brands, J. (2011). Researching Indigenous health: A practical guide for researchers. Melbourne: The Lowitja Institute. Retrieved from <www.lowitja.org.au/researchers-guide>.
  • Leigh, A. (2010). Evidence-based policy: Summon the randomistas? In Productivity Commission (Ed.), Strengthening evidence based policy in the Australian federation: Volume 1. Proceedings (pp. 215-226). Canberra: Productivity Commission.
  • Martin, K., & Mirraboopa, B. (2003). Ways of knowing, being and doing: A theoretical framework and methods for Indigenous and Indigenist research. Journal of Australian Studies, 27(76), 203-214.
  • Mayne, J. (2010). Building an evaluative culture: The key to effective evaluation and results management. The Canadian Journal of Program Evaluation, 24(2), 1-30.
  • McKendrick, J., Brooks, R., Hudson, J., Thorpe, M., & Bennett, P. (2013). Aboriginal and Torres Strait Islander healing programs: A literature review. Report to the Healing Foundation, Canberra.
  • McDonald, M. (2011). Demonstrating community-wide outcomes: Exploring the issues for child and family services (CAFCA Practice Sheet). Melbourne: Australian Institute of Family Studies. Retrieved from <aifs.gov.au/cfca/publications/demonstrating-community-wide-outcomes-exploring-issues>.
  • Moran, M. (2016). Serious Whitefella stuff: When solutions became the problem in Indigenous affairs. Melbourne: Melbourne University Press.
  • National Health and Medical Research Council. (2007). National Statement on Ethical Conduct in Human Research 2007 (Updated May 2015). Canberra: Commonwealth of Australia.
  • Orr, M., Kenny, P., Gorey, I. N., Dixon, T., Mir, A., Cox, E., & Wilson, J. (2012). Aboriginal knowledge and intellectual property protocol community guide. Alice Springs: Ninti One Limited & Waltja Tjutangku Palyapayi Aboriginal Corporation. Retrieved from <www.nintione.com.au/resource/Aboriginal-Knowledge-and-IP-Protocol-Community-Guide-booklet-A5.pdf>.
  • Osborne, K., Baum, F., & Brown, L. (2013). What works? A review of actions addressing the social and economic determinants of Indigenous health (Issues paper No. 7, produced for the Closing the Gap Clearinghouse). Canberra: Australian Institute of Health and Welfare, & Melbourne: Australian Institute of Family Studies.
  • Parker, R., & Robinson, E. (2013). Evidence-based practice and service-based evaluation (CFCA Practitioner Resource). Melbourne: Australian Institute of Family Studies. Retrieved from <aifs.gov.au/cfca/publications/evidence-based-practice-and-service-based-evaluation>.
  • Pawson, R., & Tilley, N. (1997). Realistic evaluation. London: Sage.
  • Price, M., McCoy, B., & Mafi, S. (2012). Progressing the dialogue about a framework for Aboriginal evaluations: Sharing methods and key learnings. Evaluation Journal of Australasia, 12(1), 32-37. Retrieved from <www.boystown.com.au/downloads/rep/BT-Article-Progressing-the-dialogue-on-framework-for-Aboriginal-evaluations.pdf>.
  • Productivity Commission. (2013). Better Indigenous policies: The role of evaluation. Roundtable proceedings. Canberra: Productivity Commission.
  • Rogers, P. (2010). Learning from the evidence about evidence-based policy. In Productivity Commission (Ed.), Strengthening evidence based policy in the Australian federation: Volume 1. Proceedings (pp. 195-213). Canberra: Productivity Commission.
  • Scougall, J. (2006). Reconciling tension between principles and practice in Indigenous evaluation. Evaluation Journal of Australasia, 6(2), 49-55. Retrieved from <www.aes.asn.au/images/stories/files/Publications/Vol6No2/Reconciling_tensions_between_principles_and_practice_in_Indigenous_evaluation.pdf>.
  • Smith, L. T. (1999). Decolonizing methodologies: Research and Indigenous peoples. Dunedin: University of Otago Press.
  • Stewart, J. (2014). Developing a culture of evaluation and research (CFCA Paper No. 28). Melbourne: Australian Institute of Family Studies. Retrieved from <aifs.gov.au/cfca/publications/developing-culture-evaluation-and-research>.
  • Stufflebeam, D. L., & Shinkfield, A. J. (2007). Evaluation theory, models, and applications. San Francisco: Jossey-Bass.
  • Walker, R., Ballard, J., & Taylor, C. (2003). Developing paradigms and discourses to establish more appropriate evaluation frameworks and indicators for housing programs (AHURI Final report no. 29). Melbourne: Australian Housing and Urban Research Institute.
  • Walter, M., & Anderson, C. (2013). Indigenous statistics: A quantitative research methodology. Walnut Creek, CA: Left Coast Press.
  • Westhorp, G. (2014). Realist impact evaluation: An introduction (Methods Lab). London: Overseas Development Institute. Retrieved from <www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/9138.pdf>.

1 "Outcomes" and "impact" are sometimes used interchangeably, but "impact evaluation" can also refer to evaluations that look at longer-term program effectiveness or to evaluations that assess whether a program has had wider effects than the relatively narrow and well-defined outcomes usually measured in an outcomes evaluation. This resource refers to both outcomes and impact evaluations (as opposed to process evaluations that look at how a program is being implemented).

2 There are several research and evaluation frameworks that make this kind of community knowledge and participation a central part of the evaluation process. For example, "participatory action research" (PAR) and "empowerment evaluation" actively involve participants in the process of inquiry and commonly feature local capacity-building efforts. Indigenous research methodologies, such as those of Smith (1999) and Martin and Mirraboopa (2003), also emphasise the need for Indigenous involvement in, and ownership of, the research process.

Acknowledgements

Stewart Muir is a Research Fellow at the Australian Institute of Family Studies. A social anthropologist by training, Stewart has worked with the Centre for Research on Socio-cultural Change (CRESC) and the Morgan Centre for the Study of Relationships and Personal Life at the University of Manchester, UK. Stewart has been involved in research on transitions from out-of-home care, ageing, behaviour change and place-based disadvantage and has more than a decade's experience working with Australian Indigenous people in southeast Australia. His research interests include contemporary kinship patterns, structural inequality, conceptions of evidence and qualitative research methods.

Adam Dean is a Research Officer with the Child Family Community Australia information exchange at the Australian Institute of Family Studies. Adam works on a range of knowledge translation products and activities as part of the CFCA information exchange.

Acknowledgements: The authors would like to acknowledge staff at Ninti One, FAST NT, Kids Caring for Country and Culture and People for their assistance with writing the case studies used in this article. We also thank the community organisations and evaluation practitioners that were consulted during the development of this article.

Feature image courtesy of the Department of Prime Minister and Cabinet.

Citation

Muir, S., & Dean, A. (2017). Evaluating the outcomes of programs for Indigenous families and communities (CFCA Practice Resource). Melbourne: Australian Institute of Family Studies.

ISBN

978-1-760161-18-7
