Join the conversation - Measuring outcomes in programs for Aboriginal and/or Torres Strait Islander families and communities

15 March 2017

This webinar discussed ways to measure the outcomes of programs for Aboriginal and/or Torres Strait Islander families and communities.

Please post your comments and questions below.

This webinar aimed to encourage professionals who are thinking about evaluating the outcomes of a program for Aboriginal and/or Torres Strait Islander families or communities to consider evidence in a different light.

A full recording of this webinar is available on our YouTube Channel.

The audio, transcript and presentation slides are also available.

Further reading and resources

Feature image courtesy of the Department of Prime Minister and Cabinet.


With respect to the consultation phase of both policy development and service design/evaluation, could Sharon and Kylie provide some ideas on how to engage an Aboriginal community in a discussion about self-determination and defining success? What language could be used, how do we frame that question, and how do we guide that discussion?
Hello, I have a number of questions prompted by the session today.
- I would be interested in some more detail about the use of digital technology to collect data in communities. Is it used to collect survey data, or to aid interviews through the use of images or sound? Are there interactive elements? Have specific apps or tools been designed as part of a particular evaluation within a community?
- Are some age groups (children/youth) more responsive to digital tools than others, or not?
- Do you have any examples of how Ipsos has managed the challenge of dealing with the impact on a community of poor or adverse findings from an evaluation, such that funding is reduced or shut off, or the community is divided over the results?
- Finally, a general question about your views on the usefulness of evaluating program outcomes in communities, in terms of the uptake/impact of the results/recommendations on funding and policy decisions.
Thank you.
Shan Short
Hi Amelia,

The consultation may take a day, a week or several weeks depending on the complexity or sensitivity of the topic. It may not follow a simple question-and-answer process, but rather weave back and forth through many conversations until clarity is reached.

The discussion will start differently depending on the stage of implementation. If it is a new program or policy, the conversation is more about what is happening in the community, what behaviours people identify as needing to change, and the best way to do that. However, a lot of thinking often happens outside the community before it is brought to the community: "They already know what they are going to do - they just come and tells us they don't ask us". Often data has already been collected that has alerted someone to a need, but the community is not aware of this data.

It is important to reflect on yourself: do you come with a preconceived idea of what the program should look like? Do you believe in the idea? If so, as humans we behave in ways that protect and advocate for the ideas we believe in, even subconsciously. People may see your passion and be polite and not challenge your ideas, even if they don't think they will work. It is less about constructing good questions to ask, and more about stopping to listen, and reflecting on how your own values and world view are forming your judgements and may be directing the deliberations. There is emerging support in the community development space for Problem Driven Iterative Adaptation.

It is much harder to come in with a completely blank page for design than it is to talk about something that already exists and has been implemented. Try to start with what the community values: for example, does the community value "children going to school" or "adults working"? Try to understand WHY these are important; the reasons may be very different from why others value them.
Once the values for the program are clear, try to focus on the behaviours that achieve the goals and the behaviours that do not, and then unpack the reasoning for each. The reasoning may take the longest and be the most sensitive. The program needs to support and resource interventions that change the reasoning.
Kylie Brosnan
Hi Shan,

Yes, this is an area I am passionate about. The use of smart phones is high, and digital literacy may be surpassing LLN. Digital tools can be engaging, and children and people love them - when they work. Sometimes technology and connectivity are challenging in remote parts of the country. Digital tools are particularly good for younger people and children, or vulnerable clients. However, digital is just another tool or method - it is not a "methodology". You need to think about WHY you want to go digital. The use of audio and visual stimuli should aim to reduce the cognitive load of the task or activity - making it easier and less of a burden. This is important for people who may have PTSD, are suffering trauma, or have low LLN, for example, and who find surveys tiring.

Examples I have seen used within program administration:
- JCU working with Indigenous inmates as they record their coming down from gunja using ladders and other pictorial scales.
- Griffith University has developed a tool to measure health and wellbeing changes in children using an avatar.
- Menzies has developed an app to help practitioners work with clients to record their safety plans and wellbeing plans.
- Yirrkala has developed an app for CDP participants to record participation and activity.

The best way to capture behavioural data for evaluation or social research is "in the moment", using mobile phone apps. This captures real behaviour rather than the recalled behaviour of a survey, which can be prone to error. At Ipsos we have used mobile "in the moment" apps to help us capture more accurate behavioural data for research and evaluation. I believe that more creativity in this area would capture behaviour and reasoning in a less intrusive way than a post-survey/feedback form, and the more passive and less intrusive the design, the better the response rates.
Yes - I have had evaluations where the results demonstrated real "program failure" or "design failure". In one case the "program failure" of the Indigenous organisation saw them defunded. However, they used this constructively, addressed all the recommendations in the report to change the way they were working, and reapplied for funding 12 months later. When they were refunded, they felt more confident that they "actually" knew what the program was about and what it was trying to achieve - because the community had told them in the evaluation results, not government in the program design. So the "program failure" was really a result of "design failure". Often communities or organisations apply or get funded for services before they really understand what this means (values) for the community and what the real "behaviours" are that need to change. The evaluation - even if done at the end of funding - can be a great way for the community to do self-examination and reflection, to help with visioning for the future.

I have also worked in locations of community conflict, where you need to ensure that all opposing parties have a chance to be heard. Again, reflect on your own values and world view, but make sure that all opposing parties are represented in the evaluation team - even if it is not safe for them to be in the same room at the same time. So you work with this: separate trainings and reviews, etc. Then, when you bring it all together, it should reflect all views and be accepted. This may take some work, but if all parties agree the "truth" needs to be told, then there can be consensus.

I think there are three issues with the uptake of evaluation recommendations and findings:
1. Some evaluations have faced all the challenges we discussed in the webinar, and as such they give some but not all of the answers. So the evidence hasn't been there to better inform future policy.
2. There are a lot of evaluation findings out there that helped one program, but without sharing them with other similar programs we see a lot of repeated mistakes. There are transferable learnings that need to be collated and disseminated within departments, and more broadly across agencies and jurisdictions. This AIFS website is a great initiative for sharing and learning.
3. Government/NGOs think they know best and will often dismiss findings or evidence because of their "values and worldview". Changing the way government/NGOs see "partnership" and "ownership" of Aboriginal and/or Torres Strait Islander people and their communities in evaluation needs some more work. I know there are some real advocates pushing for this in Gov. so I am optimistic!
Kylie Brosnan


Need some help?

CFCA offers a free research and information helpdesk for child, family and community welfare practitioners, service providers, researchers and policy makers through the CFCA News.


