Sharon is the Director of the Aboriginal and Torres Strait Islander Research Unit at Ipsos Australia, and is responsible for a number of complex evaluation projects. Sharon previously worked for the Department of Social Services on Footprints In Time - the Longitudinal Study of Indigenous Children. She is a member of the Australian Institute of Aboriginal and Torres Strait Islander Studies (AIATSIS) and the Australian Bureau of Statistics’ roundtable to provide advice on Aboriginal and Torres Strait Islander statistics. She has a strong reputation for developing survey methodology, ethical conduct of research and associated cultural protocols.
Measuring outcomes in programs for Aboriginal and/or Torres Strait Islander families and communities
This webinar discussed ways to measure the outcomes of programs for Aboriginal and/or Torres Strait Islander families and communities.
Audio transcript (edited)
Good afternoon everyone and welcome to today's webinar, ‘Measuring outcomes in programs for Aboriginal and Torres Strait Islander families and communities’. My name is Stewart Muir and I'm a research fellow here at the Australian Institute of Family Studies. I'd like to begin by acknowledging the traditional custodians of the lands on which we meet. In Melbourne, where we're transmitting from, the traditional custodians are the Wurundjeri people of the Kulin nation.
I pay my respects to their Elders past and present and to Elders from other communities who may be participating today.
Today's webinar will consider ways to measure the outcomes of programs for Aboriginal and/or Torres Strait Islander families and communities. The webinar is part of a series of resources on evaluation for and with Indigenous families and communities, and a practice paper and other articles on this topic are available on the CFCA website. Before we begin I need to briefly mention some housekeeping details. One of the core functions of the CFCA information exchange is to share knowledge, so I'd like to invite everyone to submit questions via the chat box at any time during the webinar and we will respond to your questions at the end of the presentation.
We'd also like you to continue the conversation we begin here today, so to facilitate this we've set up a forum on our website where you can discuss the ideas and issues raised, submit additional questions for our presenters and access related resources. We'll send you a link to the forum at the end of today's presentation. Also as you leave today's webinar a short survey will open in a new window and we would really appreciate it if you could fill it in and give us your feedback. Please remember too that this webinar is being recorded and the audio transcript and slides will be made available on our website and YouTube channel in due course.
So before we begin it's now my pleasure to introduce today's presenters. Sharon Barnes is the Director of the Aboriginal and Torres Strait Islander Research Unit at Ipsos where she's responsible for a number of complex evaluation projects. Sharon previously worked for the Department of Social Services on Footprints in Time, a longitudinal study of Indigenous children. She is also a member of the Australian Institute of Aboriginal and Torres Strait Islander Studies and the Australian Bureau of Statistics Roundtable to provide advice on Aboriginal and Torres Strait Islander statistics.
Kylie Brosnan is the Director of Public Affairs at Ipsos Queensland. She has over 25 years' experience in social research and evaluation, with specialist fieldwork suppliers and research consultancies. She's designed and managed complex data collection methodologies using innovative technological solutions. In particular, Kylie has used digital technologies to overcome language and literacy barriers with humanitarian migrants, prisoners and Aboriginal and Torres Strait Islander people to enable their voices to be considered in government policy and decision making. So without further ado, please join me in giving Sharon and Kylie a very warm virtual welcome, and over to you.
Well good afternoon everyone. Firstly I'd like to thank the Australian Institute of Family Studies for the opportunity to present this webinar today. I would like to pay my respects to the Elders past and present, and acknowledge that the webinar allows us to connect with people across many different peoples' traditional country. I am Sharon Barnes and I am a Ngunawal woman. Today both Kylie and I will be presenting. I will now hand over to Kylie to start us off.
Good afternoon everyone. I'd also like to pay my respects to the Elders past and present and acknowledge the traditional owners of the country which I live on, which is the Yuggera nation. We hope that our presentation today will give you some thoughts about how you might approach evaluation differently, or some comfort that you're not alone in addressing the challenges of measuring the outcomes of programs for Aboriginal and Torres Strait Islander families. This webinar will outline ways data can be collected to measure the outcomes to evidence whether programs or policies for Aboriginal and Torres Strait Islander families or communities work.
Government and community organisations and service providers are often stumped by how to evidence change resulting from all of their efforts. Proving change is happening and measuring outcomes to learn what works best can be challenging. So I'll start off by looking back to the basics. Not because we don't think you know what these are, but rather because we want to frame the presentation around how we do things and what we think they are, and to approach them from what they mean for Aboriginal and Torres Strait Islander people. There's a language to outcomes, and once you learn the language of how to speak about outcomes, it makes knowing what data to collect easier.
Collecting the data may seem overwhelming, so keeping it streamlined and simple is important. There is an abundance of data, but just having data does not mean you have evidence. Then once all this work is done, how do we tell the story to prove something is working or not? Of course, this is all open to debate and we'd definitely like to hear your feedback on our positioning in the chat during this webinar.
So, evaluations are not always as simple as finding the data, systematically examining what it has to say and proving the extent to which outcomes have been achieved. Evaluators by nature are a critical lot. We agonise over the quality of the data, fill our reports with caveats and limitations about how good the evidence is, and make lots of useful suggestions for how to improve it for next time – well, if there is a next time. The challenges we have had are not unique to Aboriginal and Torres Strait Islander policies or programs, but these are some of the biggest problems I have encountered evaluating them.
The first problem is a lack of baseline data to measure effectiveness, because the evaluation has commenced after implementation. This is when you see an evaluation report turn into an evaluation of the data, rather than what it says about the issue itself. This happened to Jon Altman and Susie Russell when they set out to examine the evidence for the effectiveness of the Northern Territory National Emergency Response Intervention. And this still happens to us today. The second problem I have encountered is the procurement of an evaluation after the fact and at the end of funding, creating an environment of anxiety for all involved. It places a lot of pressure on weak or small evidence to be enough to make decisions.
This has large implications for stakeholders’ reasoning, mind-set and thought processing, potentially making the interviews tainted by this context of their refunding anxiety. In these circumstances the evaluation budgets are very tight and timeframes are very short. This is not ideal and restricts the extent to which consultation and data can be collected. And less evaluation budget means smaller sample sizes that have less reliability, making the evidence less robust. But despite this we have to do the best we can to deliver a sound evaluation, because it does inform decisions. Sometimes the change required may take years or even a generation before we can tell if it has made a difference to people's outcomes; this is the "too soon to tell" syndrome. Often we are trying to predict pathways to outcomes, rather than evidencing outcomes.
‘If you want to measure change, don't change the measure’ is the rule. But with fluid policy, or agendas that change when program staff change or government changes, this is a hard rule to stick by. When the goal posts keep moving, how do you know what you are trying to measure and make sure there is data recorded against that measure? Moving goal posts are inevitable. We don't live in a static world and change is all around us. We have to learn to rapidly adapt. But focusing on behaviour is one way we can measure change over time. Evidence for evaluations is a concept worth unpacking. Paul Cairney from the ESRC Centre on Constitutional Change in the UK, in his post on ‘Power to Persuade’, cautions that policymakers use a range of information sources to inform their decision making, including those that sit outside the hierarchy of scientific methods; namely, values.
Evaluators are also making decisions about evidence that is never value free, but whose values are they? Latour and Woolgar's famous anthropological study of the scientific laboratory also demonstrates that facts aren't simply discovered as black or white but, rather, are constructed through subjective decision-making processes. In this grey world of social policy evaluation, decision makers must make sense of conflicting and very qualitative evidence. Often the evidence is filtered through personal experience and our own way of seeing the world. Evaluators interpret what people mean when they tell us things and we may get it entirely wrong.
Certainly, there have been cases of many Indigenous policy evaluations being ethnocentric; that is, making sense of the world by relating it to what we already know and believe. This is the field we play on. We know there will be challenges every time we lace up our boots and take to the field. Win or lose, it's how you play the game that counts. In this presentation we hope to give you some practical advice or tips that may help you when you face the same challenges. So let's get back to basics: What is evidence?
We live in a political and scientific world that demands evidence-based policy or evidence-informed practice. Having evidence is important because it helps people decide what to resource, who to support or who gets the money. There is a lot of demand for this thing called "evidence", but some policies and programs don't always work as intended, and often decisions are based on no evidence, or the evidence is ignored, or the wrong kinds of evidence are used, or a disproportionate weight is placed on some really weak evidence, rather than on what is working or not.
This happens in many sectors but unfortunately has happened a lot in the Indigenous policy and program space. When evaluators describe "evidence" they need to clarify what counts as evidence and what an evidence-based policy or practice would look like. Therefore, evaluators need to keep in check how they see evidence. Science and social science provide methods of explanation and interpretation of phenomena based on empirical evidence. Empirical evidence is a source of knowledge acquired by means of the senses, particularly by observation or experimentation. Gathering empirical evidence is the process of finding empirical data. Empirical data is the information that comes from the research, and before any pieces of empirical data are collected, scientists carefully design their research methods to ensure the accuracy, quality and integrity of the data.
If there are flaws in the way that empirical data is collected, the research will not be considered valuable. The empirical data may be flawed because of survey error. Survey error includes sample error, that is, not having enough people to be representative; coverage error, not including all those in the population under study; measurement error, that is, asking really bad questions, interviewer bias or mode bias; and non-response error, where people just don't take part in the survey. There is also non-survey data used in evaluation, and it may also be flawed with errors like data entry error, data omission and data loss. Other errors are associated with incorrect application of statistical tests.
Understanding and mitigating all these potential errors requires a sound understanding of social science and research methodologies. Think about some of your programs and policies. What type of evidence do you have? Is your evidence error free? The scientific methods used to collect empirical evidence should be objective. But being objective means that you may overlook what we should value, or how we should live our lives, or which outcomes are more important than others.
Think about some of your programs and policies again. What type of evidence do you have? Is your evidence value free? We can't assume that we are all rational and that we do not allow our emotions to play a part in the way we design, collect and analyse data. Regardless of the facts, we are never totally rational as evaluators – we're human, we're irrational and our emotions are driven by our beliefs, morals and values. The voice of our participants or stakeholders in the evaluation may come through either strongly or weakly, depending on our own values. Pure science assumes that we do not bring reasoning or intuition into our analysis, but rather follow process, method and rigour.
As human beings, we carry a blueprint in our DNA that is stamped with our lived experiences and shaped by our culture. The ability to separate oneself from one's own world view comes with practising reflection. Giving meaning to the evidence comes from the world views of the clients; for example, in a family-centric evaluation, the evidence only really has meaning when it is seen through the eyes of the family, from their world view, at the centre of the evaluation. Evaluation is often said to be a craft; that is, a mix of science and art. Evidence in my eyes may be completely different to the same evidence seen through Sharon's eyes. We may draw different conclusions, place weight on different findings, or overlook what is meaningful and what isn't. It is only through open dialogue and deep reflection that we can examine it from different perspectives.
Different perspectives enlighten and enrich the analysis of evidence. Participants in a program have a unique perspective to add; they see evidence through their own eyes, which strengthens voice and world view in the interpretation. When evaluations have space for the intersection of empirical evidence, values and reflection, they create the opportunity for open dialogue that brings greater insights. Anyone looking for just scientific, robust evidence in their evaluation might find that without some moral compass it doesn't really reveal voice. Or without reflection, the true meaning may be lost. Importantly, evaluations without empirical evidence can lose the clarity of facts. Patton, in his 2008 work, refers to these opposing paradigms and positions: "Too much closeness may compromise objectivity and too much distance may diminish insight and understanding." So being conscious of these three intersections is important as an evaluator. Attempting to exclude one or two may be impossible. Balancing all three is the craft. We will talk more about these three things as they play a part in evaluations from the plan, to the design, to collecting data, to analysis and reporting.
Outcomes should be the easiest thing to articulate; it's what we want to see happen as a result of all our efforts. But too often we just assume that if you do stuff, and stuff happens, then we can just measure what happened and assume it was because somebody did something. Outcomes need to have a causal link to the behaviour that the program aims to change. The behaviour in question that needs to change must be identified through a collaborative process with the policy funders, program deliverers and families or communities. Programs co-designed with Aboriginal and Torres Strait Islander people have more chance of having meaningful outcomes that can be measured.
Despite the rhetoric that this is happening, and some advocating for this, we still see a lack of real collaboration and consultation on the ground to develop program logics or program theories. This is not to say this is universal, and we acknowledge some great examples on the AIFS website. Tensions arise because what government or funding bodies want to see – their policy agenda – and what Aboriginal or Torres Strait Islander people, families or communities want to see in their lives don't always align. This is because of differences in values, world views and behavioural theories based on western science.
Aboriginal and Torres Strait Islander researchers consider how outcomes might be expressed in traditional ways that reflect their values, ideas of what is good or right, self-worth and being a strong person. This nearly always has a connection to the collective group, and the connectedness is the strength. Their values are less about material achievement, or even social, economic or personal health and wellbeing. Rather, they incorporate a broad range of standards when assessing what is of value to the community or program. Inevitably, self-determination becomes an underlying driver of behaviour. The root causes of behaviour are often less discussed due to shame.
Dealing with trauma, mental health, racial discrimination and the historical legacy of colonisation underlies most of our people's social problems. Yet these are often overlooked when deciding behavioural interventions, and not considered in the context of people's circumstances when measuring outcomes. Many programs and policies assume that outcomes have the same value proposition for non-Indigenous people as for Aboriginal and Torres Strait Islander people. Then when you measure these outcomes, framed by non-Indigenous values, with Aboriginal and Torres Strait Islander people, you may conclude that nothing has changed or things have even gotten worse, when you may in fact just be measuring the wrong thing.
If a program has not been co-designed and developed with deep consultation, it is likely that outcomes can't be clearly stated in a program theory; the evaluation has to be done to develop the theory, rather than just to prove the theory. It's a case of "I don't know what I'm looking for, but I will know it when I see it". This is because outcomes seem to crystallise in stories and narratives. When we hear the participants talk through their story, we find they voice outcomes naturally in conversation. In an ideal world, a program logic would have been developed and tested. This would have informed an evaluation framework to guide the program operators to collect, record and gather the data needed to demonstrate outcomes.
Our experience has never been in the ideal world, and so we are often brought in after the program is operating, sending the poor people delivering the program into a panic about what data systems or processes they need to retrofit to demonstrate outcomes that have never been defined. At the same time, contract managers may find themselves at a loss to articulate outcomes to the program deliverers, because they are removed from the policy or program, or from the people in government who know what the program agenda was intended to address. This leads to a clash between what is outcomes data and what is contractual performance data. Data comes in all shapes and sizes.
When people think about data they often think of lots and lots of numbers; they think they need to count something, but what? The evaluation methods that Aboriginal and Torres Strait Islander researchers prefer to use are primarily narrative, with interviews conducted one on one or in small groups. This method works well for small numbers of interviews, but when some of our evaluations require samples of 500 to 1,000 survey participants, we need to streamline it and make the qualitative process more manageable by creating a sort of quasi quali-quant method. Initial stories may include visual representations or paintings, pathways or symbols that represent themes, outcomes and drivers – this is all data and we collect it.
Then we look at it, to see if there is a pattern appearing in the stories and the narratives, and we start to categorise this. Once an outcome pattern is defined in the qualitative stage in whatever format, whether it's story based, pictorial or conversations, it becomes a data record. Data is not an outcome, an outcome is described by a pattern of data found in a data record. For example, asking someone, “do you feel better or worse?” Well, the answer to the question is just one data point. It's not an outcome, but we need more data to make it a data record. That is, lots of data points. The data record could contain things like who they were, what they participated in and why they felt better.
An outcome pattern may be, you know, "a woman who's 45 years old felt better after a counselling session because she was listened to by an Aboriginal health worker who didn't judge her and understood her story." As you start to build up these patterns, your curiosity kicks in and that's when you realise there's a lot more data you need to know about this participant to better understand their outcome pattern. Like, you know, "has she had a bad experience in the past with counsellors, or is this her first time in counselling? Does she have other medical conditions or disabilities, and is there anything else that may impact on her participation?" Just like a jigsaw puzzle, you need all the pieces in place to see the pattern.
Data is often stored in discrete ways or separate boxes, and often not connected to other relevant data. Data records need all the data about a participant linked together in some way. When it's not linked in a data record we find the problem of reading reports like, "10 people participated." Well, that's just an output. "Half were under 45 and half were over." Well, that's just interesting context. "Five people said they felt better" – another output. And "six people were counselled by an Aboriginal worker" – another piece of context. But we can't make any statement about the outcome of the program, because we don't know if those people who said they felt better had the Aboriginal worker or not, or were younger or older. So you can see that data by itself doesn't give us any knowledge.
Alone it does not tell us about the different outcome patterns. Having lots of data in separate boxes doesn't give us a good indication of what's working or not working, or in what circumstances. When the data is linked together in one data record, we can start to look for patterns. You need to keep all your pieces of the jigsaw puzzle together, so you can put the picture together. We're often asked by program deliverers, "what data do I need to collect for your evaluation?" Our advice is often, "collect as much as you can about your participants over time, but make sure you collect it at the participant level in one data record." There are, of course, some considerations around privacy, confidentiality and data linkage; however, we won't go into those today. But there are good guidelines for managing and confidentialising data that we will put in our reference section.
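To make the jigsaw point concrete, here is a minimal illustrative sketch in Python. The data, field names and values are entirely made up for illustration and come from no real evaluation; the point is simply that separate aggregates cannot answer the outcome question, while linked participant-level records can.

```python
# Unlinked aggregates: outputs and context only, stored in "separate boxes".
participants = 10          # output: "10 people participated"
felt_better_count = 5      # output: "five people said they felt better"
aboriginal_worker_count = 6  # context: "six were counselled by an Aboriginal worker"
# From these three numbers alone, we cannot say whether the people who felt
# better were the ones who saw the Aboriginal worker.

# Linked data records (hypothetical): every data point about a participant
# kept together in one record.
records = [
    {"id": 1, "age": 45, "aboriginal_worker": True,  "felt_better": True},
    {"id": 2, "age": 52, "aboriginal_worker": True,  "felt_better": True},
    {"id": 3, "age": 38, "aboriginal_worker": False, "felt_better": False},
    {"id": 4, "age": 29, "aboriginal_worker": True,  "felt_better": True},
    {"id": 5, "age": 61, "aboriginal_worker": False, "felt_better": False},
]

# With linked records, an outcome pattern can be looked for: did those who
# felt better tend to be the ones counselled by an Aboriginal worker?
with_worker = [r for r in records if r["aboriginal_worker"]]
without_worker = [r for r in records if not r["aboriginal_worker"]]

better_with = sum(r["felt_better"] for r in with_worker) / len(with_worker)
better_without = sum(r["felt_better"] for r in without_worker) / len(without_worker)

print(f"Felt better, with Aboriginal worker: {better_with:.0%}")
print(f"Felt better, without Aboriginal worker: {better_without:.0%}")
```

In this toy example the pattern only becomes visible because each participant's pieces are kept in one record; the three aggregate counts at the top could never reveal it.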
So, if data is not an outcome, what's an outcome? When asked to consider what an outcome of a program is, using participatory processes, some Aboriginal and Torres Strait Islander participants tend to use communal or traditional law as a backbone of their decision-making and reflection. The importance of deep deliberation about what the outcomes are within a community goes way beyond a one day logic workshop, or just a discussion item on an agenda in a community meeting. This deliberation is reinforcing the concept that actions have consequences beyond the immediate need for an outcome measure to be written up in your evaluation plan. Deliberations must explore the consequences of stating outcomes that may span generations.
In some cases, they may need to engage cultural protocols, such as using words in their own language to connect the deliberations with spiritual guidance from Elders or community leaders. Communication is not just focused on what to discuss; there is much more importance placed on how to discuss it. Evaluators must give space and time for importance to be placed on the proper ways of communicating, such as the use of respectful language and deliberation processes that may go around and around and around until a consensus is reached. Take for example the outcomes of an employment program. A non-Indigenous outcome pattern may look like this: an increase in the number of placements, because long-term unemployed people who are placed in jobs that fit with their self-identity are more likely to stay in the job.
But for an Aboriginal person the outcome pattern may be different. It may be that the increase in the number of placements is because long-term unemployed people who are placed in jobs that keep them connected to country and contribute to community needs are much more likely to stay in that job. We can count the number of people in placements – it's a number – stuff happened. It's an output, it's not an outcome. By adding the context of whether they were long-term unemployed and, importantly, the reasoning for the behaviour – staying in the job for connection to country and community – this output now comes together and looks more like an outcome pattern.
The reasoning for the employment contributing to a collective identity and cultural connection to land must be fully explored and validated for its meaning and consensus reached before we can interpret this as an outcome pattern. When writing the outcome pattern, try to avoid vague words or words that are prone to different meanings – words that people don't use in everyday language. The natural tone of how participants describe their outcomes in their language is the best way to articulate outcomes. This not only makes the findings of your work accessible to others, but authentic and meaningful to the community.
So, avoid words like ‘empower’, ‘enabled’, ‘capacity’. For example, a program might say, "This program empowers families to develop family plans that restore children and young people's cultural needs." This statement has no reasoning and no real stated behaviour. It places the outcome in the artefact, which is the family plan, rather than in the reasoning and the human behaviour. But the participant may say, "I talked up at that meeting and I was listened to when I said I wanted my child to stay on country. I felt I was strong enough to do this because Sharon helped me to understand what that meeting was all about, and that I was allowed to speak up at that meeting and say things that were important to my child growing up strong." From this conversation we can then pull out the reasoning and the behaviour of both the participant and Sharon the worker. So pay real attention to reasoning, because it is the reasoning that drives behaviour.
Behaviour is what the program aims to change, so if you want to measure outcomes, you need to collect data on the reasoning and the behaviour, as well as who it was. Nick Biddle's findings from years of research into Indigenous government service delivery have led to the conclusion that innovation in service design should be undertaken using behavioural science principles; that is, looking at the reasoning for the behaviour. So, how do we collect reasoning and behaviour?
Thinking specifically about primary data collection, the goal of collecting data to make data records is to develop evaluation processes that are robust enough to accommodate and value different ways of knowing with Aboriginal and Torres Strait Islander people, and to build ownership and a sense of community among program deliverers and evaluators. There has been a lot of work done by many different groups, such as AIFS, DPMC and the AES, that has contributed to strengthening the data collection capacity of Aboriginal and Torres Strait Islander researchers. There should be little reason why Aboriginal and Torres Strait Islander people can't have an Aboriginal or Torres Strait Islander researcher interview or survey them, as there is a good sustainable pool of people willing and able to work across Australia.
Evaluative processes must not be harmful to communities and must be strengths-based in their approaches. Considerations that should be made include ownership of the data, community consultation, confidentiality, design of questionnaires to be fit for purpose, and community approval of data interpretation and the report; they should also specifically address the importance of capacity building during research and evaluation. There is also the concept of holism, which may be loosely defined through traditional or cultural values: evaluation methods must match the values of the people involved in and using the program. The method of data collection should reflect how people go about their daily lives.
If people do not often talk indoors or in closed rooms, then this should not be part of the interviewing strategy. If people prefer to talk whilst doing other activities, then this should be accommodated in the fieldwork design. Reflexivity is still something that Aboriginal and Torres Strait Islander researchers have to consider, as they too are human beings who make choices about what to research, interpret what they see and hear, and decide what to write and how. And they do all this in the context of their own personal biographies and disciplinary environments, which may be different from the lived experiences of the participants in the survey. Aboriginal and Torres Strait Islander researchers have to think carefully about who has the power to say what about whom, and make sure that research participants have some influence or say over the research and how it is presented.
Cultural protocols need to be part of the design, but should not bias the responses of participants when the power between the participant and the researcher is not equal. Diversity of the field team members is important. We must consider a wide range of age, gender and languages, as this is important when considering the wider structures of power and control in culture. We are often delighted when our field teams come together to discuss and deliberate, as they provide accounts that they realise are fragments, just part of a picture, fallible and imperfect, but combined to make up a bigger story and picture of what is happening in the community. Evaluators working with or as Aboriginal and Torres Strait Islander researchers need to be true to be useful; that is, a good evaluation is one that captures a true story.
The evaluator must have an understanding of the self-determination that fuels the goals and aspirations of the families or communities to preserve, restore and protect their cultures and ways of doing things. Although programs being evaluated might be conducted with other non-Indigenous populations, there is always a sub-text about self-determination that must be heard by evaluators. This two-ways approach to working is important, particularly for Indigenous programs and policies. Where we see a real gap in this vision is in analysis and reporting. Our experience is that there is less opportunity for capacity strengthening in statistical and analytical skills, and in incorporating Aboriginal and Torres Strait Islander perspectives into these areas of evaluation.
And we do not want to merely duplicate mainstream approaches at the tail end of a project. A good evaluation has to meet expectations. Funders demand robust, empirical evidence. However, self-determining communities' values and reflections are needed to make the evidence meaningful. Aboriginal and Torres Strait Islander researchers want to take ownership of defining success and telling the story from the perspective of the communities' values and aspirations. They feel there is a responsibility to let the community know that there is flexibility in what is to be measured and assessed. Communities can define the standards. There is also a strong sense that they would like to advocate for evaluation, and telling the program story is one way to show how meaningful their work can be.
The voices of the communities must be heard and listened to. Some say, "We know this is working, don't let Government take it away from us." Or, "This is not working, we need to fix it." They know what is working or not, but they are going to have to tell the story and get the true story, the one that has their voice balanced with empirical evidence. If things are working, or even just working for some in a little way, this accomplishment needs to be celebrated and discussed. If things are not working, even just for some, then it needs to be investigated. We know that not every program will work for everyone. This is why outcome patterns need to come through strongly in the stories.
So this is where we started our presentation today. Anyone looking for scientific, robust evidence in their evaluation may find that, without some moral compass, it doesn't reveal voices, or that, without reflection, the true meaning may be lost. Importantly, evaluations without empirical evidence can lose clarity of the facts. If you're about to conduct an evaluation of an Indigenous program or policy, or you're in the midst of one, perhaps take some time to think about how it is balanced between empirical evidence, values and reflection. Pay particular attention to where the strengths of your evaluation are, and where it may need to be strengthened.
We ask ourselves, "Who has the power to control the data, the values, the worldview in this evaluation?" And "Who has the opportunity to control the data, the values and the worldview in this evaluation?" At Ipsos we have established an Aboriginal and Torres Strait Islander advisory group, headed by Professor Mick Dodson and Professor Maggie Walter, to help guide us, strengthen our values and reflection space, and add an Indigenous perspective to the empirical space. We also work in partnership with Indigenous-owned research and evaluation organisations. But what we've found is that there is less power and opportunity for Aboriginal and Torres Strait Islander evaluators in the empirical evidence space.
Our experience has brought us to the conclusion that capacity strengthening for Aboriginal and Torres Strait Islander researchers in how to analyse and use empirical evidence is needed. Statistical and analytical techniques may then extend their work from qualitative data to quantitative data. Specifically, there is a need to enhance their ability to fully understand the statistical and methodological theories behind non-Indigenous scientific methods, so as to develop Indigenous ways of building better methods as Aboriginal and Torres Strait Islander researchers. Some work has been done to develop this thinking by Maggie Walter and others, but it needs to be adopted, adapted and supported more broadly. Aboriginal and Torres Strait Islander researchers have demonstrated their abilities in data collection and fieldwork. Giving them the opportunity to go beyond this role, through training and mentoring to analyse quantitative data sets and find outcome patterns, will help them succeed in telling the story with robust, truthful and meaningful evidence. Thank you all for listening.
IMPORTANT INFORMATION - PLEASE READ
The transcript is provided for information purposes only and is provided on the basis that all persons accessing the transcript undertake responsibility for assessing the relevance and accuracy of its content. Before using the material contained in the transcript, the permission of the relevant presenter should be obtained.
The Commonwealth of Australia, represented by the Australian Institute of Family Studies (AIFS), is not responsible for, and makes no representations in relation to, the accuracy of this transcript. AIFS does not accept any liability to any person for the content (or the use of such content) included in the transcript. The transcript may include or summarise views, standards or recommendations of third parties. The inclusion of such material is not an endorsement by AIFS of that material; nor does it indicate a commitment by AIFS to any particular course of action.
1. Measuring outcomes in programs for Aboriginal and/or Torres Strait Islander families and communities
- Sharon Barnes
- Kylie Brosnan
- The views expressed in this presentation are those of the presenter and may not necessarily reflect those of the Australian Institute of Family Studies or the Australian Government
2. Introducing Sharon Barnes
- Home of the Ngunawal People
3. Introducing Kylie Brosnan
- Living on Yugara (Yuggera) Country
- Challenges we have had to overcome
- Start with the basics
- What is an outcome?
- What is data?
- What is evidence?
- Learning the language of outcomes
- Collecting the data
- Turning data into evidence
- Telling the story
5. Challenges we have had to overcome
- Lack of baseline data
- Quick funding is up
- Too soon to tell
- Moving goal posts
- Value-free evidence
- Reflexivity in evaluating the facts
- [Venn diagram]
- Circle 1 Values
- Circle 2 Reflection
- Circle 3 Empirical
- Dialogue is when the circles overlap
- Clarity and collaboration on what behaviour needs to change
- Behaviour identified
- Causal links identified
- Theory on what causes change
- Little appreciation of what the underlying drivers of behaviour are
- Little evidence to hypothesise about behaviours from an Indigenous perspective
- A data point is not a data record - A data record is not an outcome
- Who (profile), who did it with them
- What they did, what their behaviour was after
- Why they did it, whether things changed or not
- When they did it
- How often they did it, how they felt before, during and after
- But importantly, a data record is one in which all of this is linked
- Diagram showing the data cycle
11. Learning the language of outcomes
12. Collecting the data
- Photos of communities
13. Self-determination in the evaluation process
- Photos of communities
14. Tell the story
- [Graphic of a word cloud]
- defining success
- true story
- Telling the story
- Torres Strait Islander
15. Summing up
- Venn diagram of intersecting circles labelled Empirical, Values, Reflection
- Surrounded by a larger circle with the words Power and opportunity
16. The Ipsos Vision
- “Through our research and evaluation work and what we can contribute as an organisation, Ipsos will make a positive difference to the lives and expectations of Aboriginal and Torres Strait Islander researchers and communities wanting to tell their story.”
- Thank you for listening
17. References
- Altman, J., & Russell, S. (2012). Too much ‘Dreaming’: Evaluations of the Northern Territory National Emergency Response Intervention 2007–2012.
- Biddle, N. (2016). Indigenous insights for Indigenous policy from the applied behavioural sciences. Asia and the Pacific Policy Studies. John Wiley & Sons Australia and Crawford School of Public Policy, The Australian National University.
- Bromell, D. (2012). Evidence, values and public policy. ANZSOG Occasional Paper, ANZSOG, Canberra.
- Cairney, P. (2016). Post on Power to Persuade, Centre for Constitutional Change, Economic and Social Research Council, UK. Retrieved from: http://www.powertopersuade.org.au/blog/pu5gbm8585kntq1ojnhvt9sgf50ahz/14/3/2016?rq=Paul%20Cairney
- Patton, M. Q. (2008). Utilization-focused evaluation. Sage Publications.
- SenGupta, S., Hopson, R., & Thompson‐Robinson, M. (2004). Cultural competence in evaluation: An overview. New Directions for Evaluation, 2004(102), 5-19.
- Symonds, J. E., & Gorard, S. (2010). Death of mixed methods? Or the rebirth of research as a craft. Evaluation & Research in Education, 23(2), 121-136.
- Scott, D. (2007). Resolving the quantitative–qualitative dilemma: A critical realist approach. International Journal of Research & Method in Education, 30(1), 3–17.
- Walter, M., & Andersen, C. (2013). Indigenous statistics: A quantitative research methodology. Los Angeles, CA: Left Coast Press.
- Woolgar, S. (1982). Laboratory studies: A comment on the state of the art. Social Studies of Science, 12(4), 481–498.
18. Recommended reading
- Good examples of realist evaluation methods for finding the “reasoning”, “behaviour” and “context”:
- Jagosh, J., Macaulay, A. C., Pluye, P., Salsberg, J., Bush, P. L., Henderson, J., ... & Seifer, S. D. (2012). Uncovering the benefits of participatory research: implications of a realist review for health research and practice. Milbank Quarterly, 90(2), 311-346.
- Pawson, R., Greenhalgh, T., Harvey, G., & Walshe, K. (2005). Realist review: A new method of systematic review designed for complex policy interventions. Journal of Health Services Research & Policy, 10(Suppl 1), 21–34.
- Cartwright, N., & Hardie, J. (2012). Evidence-based policy: a practical guide to doing it better. Oxford University Press.
- Cargo, M., & Warner, L. (2013). “Realist evaluation” in action: A worked example of the Aboriginal Parental Engagement Program. Melbourne: Australian Institute of Family Studies. Viewed 14 March 2014.
- Robinson, G., & Tyler, W. (2005, December). Ngaripirliga’ajirri: Cross-cultural issues in evaluating an Indigenous early intervention program. In TASA Conference Proceedings 2005.
- Good examples of where Aboriginal and/or Torres Strait Islander communities set the standards and values to define outcomes:
- Nguyen, O. K., & Cairney, S. (2013). Literature review of the interplay between education, employment, health and wellbeing for Aboriginal and Torres Strait Islander people in remote areas: Working towards an Aboriginal and Torres Strait Islander wellbeing framework. Alice Springs: Ninti One.
- Taylor, J., Doran, B., Parriman, M., & Yu, E. (2014). Statistics for community governance: The Yawuru Indigenous population survey, Western Australia. International Indigenous Policy Journal, 5(2).
- Ethics, privacy and confidentiality
- National Health and Medical Research Council. (2003). Values and ethics: Guidelines for ethical conduct in Aboriginal and Torres Strait Islander health research. Canberra: NHMRC.
- Grove, N., Brough, M., Canuto, C., & Dobson, A. (2003). Aboriginal and Torres Strait Islander health research and the conduct of longitudinal studies: Issues for debate. Australian and New Zealand Journal of Public Health, 27(6), 637–641.
- AIATSIS. (2012). Guidelines for Ethical Research in Australian Indigenous Studies.
- Audio: Measuring outcomes in programs for Aboriginal and/or Torres Strait Islander families and communities
- Slides: Measuring outcomes in programs for Aboriginal and/or Torres Strait Islander families and communities
- Q&A: Measuring outcomes in programs for Aboriginal and/or Torres Strait Islander families and communities
This webinar was held on 15 March 2017.
Drawing on past evaluations, practice wisdom and lessons learnt in the field, this webinar encouraged professionals who are thinking about evaluating the outcomes of a program for Aboriginal and/or Torres Strait Islander families or communities to consider evidence in a different light.
We have also published a practitioner resource on this topic: Evaluating the outcomes of programs for Indigenous families and communities.
- Evaluating the outcomes of programs for Indigenous families and communities
This practitioner resource outlines some key considerations for those thinking about evaluating the outcomes or impact of a program for Indigenous families and communities.
- Research and evaluation resources
A range of CFCA resources on research and evaluation, including guides on the basics of evaluation planning.
- Stronger Communities for Children: An approach to community involvement in Indigenous program evaluation
An overview of Ninti One’s approach to working with local communities to deliver the Stronger Communities for Children program.
- Ipsos Australia
Feature image courtesy of the Department of Prime Minister and Cabinet.