How developmental evaluation can be used to develop and adapt social service programs

Content type
Webinar
Event date

13 March 2024, 1:00 pm to 1:30 pm (AEST)

Presenters

Sandra Opoku, Kat Goldsworthy

Location

Online

About this webinar

Evaluation is quickly becoming a necessary part of service delivery and decision making in child and family services. Traditional forms of evaluation can take a long time to produce the results needed to enhance service delivery. In contrast, developmental evaluation involves learning as you go, meaning evaluators can test new approaches, track program implementation and design initiatives that are responsive to context and need. This makes developmental evaluation useful for service providers working in dynamic environments that require responsive interventions.

This webinar will explore how Relationships Australia Victoria used developmental evaluation to adapt their Men’s Behaviour Change Program (MBCP) to online delivery during the COVID-19 pandemic.

Relationships Australia Victoria’s Lead Evaluator, Sandra Opoku, will discuss how and why she used developmental evaluation with the MBCP, the results of the evaluation and what she learned along the way.

This webinar will give you:

  • an understanding of how to design and implement a developmental evaluation  
  • insight into what evidence a developmental evaluation can generate
  • an understanding of what developmental evaluation can and can’t be used for

This webinar will be of interest to practitioners, managers and evaluators working in child and family services.

Please note: This pre-recorded interview runs for 30 minutes. There will be no live facilitation or audience questions.


We encourage you to test your system before the webinar, and read our Frequently Asked Questions. A recording of this presentation will be made available shortly after the broadcast.

Audio transcript (edited)

KAT GOLDSWORTHY: Good afternoon, everyone. My name is Kat Goldsworthy. I’m a research fellow at the Australian Institute of Family Studies, working in the Evidence and Evaluation Support Team. Before we begin, I would like to acknowledge the Wurundjeri, Woiwurrung and Bunurong people of the Kulin Nation, who are the traditional owners of the land in Melbourne where I’m speaking from today. I also pay my respects to the traditional owners of country throughout Australia, and recognise their continuing connection to lands and waters. We pay our respects to Aboriginal and Torres Strait Islander cultures and to Elders past and present.

Today’s webinar is part of a series designed to share information about the evaluative work that social service providers are undertaking across Australia. The format is a little bit different to our usual webinar program, in that it’s just going to be a brief conversation between me and one other guest. Today my guest is Sandra Opoku from Relationships Australia Victoria, who’s going to talk to us about how developmental evaluation can be used to create and adapt social service programs.

So before we dive into our discussion, I do have a little bit of housekeeping to cover. First of all, the webinar is being recorded, and that recording will be available in about two weeks’ time. You can access the recording through the AIFS newsletter, if you’re subscribed to that; otherwise, you can find it on the AIFS website, under the webinar banner. There are also some related readings and resources associated with this webinar, which you can access in the handout section of the webinar control panel. Of course, this is a big topic and we can’t cover everything, so if you have unanswered questions, that’s probably a good place to start. There’s also a short feedback survey that will open at the end of the webinar, and we would really appreciate you taking a minute to complete it. That gives us some really good information about how we can improve our webinar program.

So let’s get started. Sandra, welcome. Thank you so much for joining us. I’d really like to kick things off with you just telling our audience a little bit about who you are and what your role is at Relationships Australia?

SANDRA OPOKU: Great. Thanks for having me. So I’m Sandra Opoku. I work at Relationships Australia Victoria, where I manage evaluation and social impact activities. I’ve been there for about seven years now, and my role has evolved during that time. I’ve always supported evaluation activities, working really closely with the practice quality team there. So that has influenced, I think, our approach to evaluation and how we use evaluation at Relationships Australia Victoria to improve client outcomes.

KAT GOLDSWORTHY: Thank you. Is it a big organisation, Relationships Australia Victoria?

SANDRA OPOKU: Yeah. So we’re part of a national federation, but in Victoria we have close to 400 staff now. We’re the largest provider of men’s behaviour change programs in Victoria, and we deliver family mediation services, counselling services, and a large variety of prevention and early intervention services as well.

KAT GOLDSWORTHY: I guess your role would be very interesting, thinking about evaluation impact across all of those different service types. But that’s probably a discussion for another day, Sandra. Now, listen, I’ve invited you to talk to us today because I first heard about your experience in using developmental evaluation at a presentation that you gave at the AIFS Evaluators Community of Practice. And while listening to you talk – I have to admit I’m a bit of an evaluation nerd – I was getting very excited by what you were saying, because I don’t really know a huge amount about developmental evaluation. It’s one of those things that I’ve heard spoken about at different conferences. But when you were talking about the particular project that you’d just finished up, you made it sound very accessible, like a very pragmatic form of evaluation, and really well suited to social service environments.

And again, I’ve heard little bits and pieces about it before, but I don’t hear it being used very much in social service environments. So to hear your story, I thought, oh, this is fantastic, I really want to share what you’re doing. But for people listening who, like me, are quite new to the concept, would you mind giving a brief overview of what developmental evaluation is, and why someone might use the method?

SANDRA OPOKU: Yeah, sure. So like you said, it’s really been a response for us to the complexities that we work in. There is lots available about developmental evaluation in general, so I don’t want to go on too much, and there are great resources that AIFS provides as well in the practice guide. But I think, like you said, we don’t have many examples specifically for our sector. And really we have come to use this because – I think we talked about this at the community of practice – more and more we’re informing program development, implementation and design. So how does our role support that process? That’s where developmental evaluation is a really good fit, because it supports program development and adaptation by applying evaluative thinking and generating really timely feedback and data, which then informs decision making as you go. And it’s really designed to work in this kind of space of complex, dynamic environments.

And it’s kind of how evaluation is evolving as well, because we’re more and more dealing with that complexity. Social innovation is becoming more important, and developmental evaluation is really a response to the reality and the nature of our work, I feel. Because I feel like that’s what my role mostly is – mostly formative or developmental evaluation. Ultimately to help improve practice and improve outcomes for families and children.

And what I really love about developmental evaluation is that it applies guiding principles, so that there’s some rigour, but it’s flexible and can be adapted to the context. And it provides some direction when we’re innovating or adapting in complex environments. Which – I don’t know, hopefully everyone finds this – but in our space it is always complex and dynamic, or we’re working with different contexts. It’s never the same, and things have to constantly evolve and adapt. So I know there’s a bit of a debate about what’s considered high-quality evaluation, and some people might not be used to developmental evaluation. It might feel a bit more like action research. It’s actually kind of a way of working. But you have to bring all the tools in your evaluation toolbox to developmental evaluation. It’s really applying a way of working which is how evaluators work – critical thinking, using data, bringing evidence to the table, working collaboratively with teams – and then also choosing appropriate methods.

And I think with all of our work, what’s most important is that your approach is fit for purpose and relevant to the context. So it is not always suitable to use developmental evaluation. But it’s also not always suitable to use RCTs and things like that. So it’s all about finding the right fit, and I’ve found that developmental evaluation is really useful for a lot of our work. So yeah, that’s kind of where it’s come from for us. And as the name suggests, you use it when things are in development or when they’re emerging – the model or intervention is not clear yet, or it’s rapidly changing. It’s not summative evaluation of a specific intervention; that’s not when we use it. And I think it’s just becoming increasingly important as we have more complexity and the environments are changing.

The pandemic was obviously a good example of that, where we required rapid changes. We didn’t know what to do; there was no precedent. We had no evidence to support which way to go, but we had to keep going. So this is when we needed to support that process by applying a little bit of rigour, critical thinking and evidence, in a space that’s normally quite messy and emerging.

KAT GOLDSWORTHY: I mean, yeah, that’s really interesting. I’m going to ask you about an evaluation that you did through the pandemic. But before we get there, it’s interesting to me to hear you talk about how important it is to evaluate in that formative stage – to do evaluations that actually look at how something is being created and adapted, trying to track that over time and be responsive to the feedback that you’re getting, and maybe not so much focusing on outcomes and that kind of summative evaluation. That’s obviously something that’s supported across your organisation, and you place importance on doing these kinds of formative evaluations. Do you as an organisation do that fairly regularly, when you’re doing something like this? Or was this a bit of a new model or a new approach that you were trying just for this particular case?

SANDRA OPOKU: Yeah, so it’s definitely something that has evolved and emerged with the role and with the requirements. I think that’s the really great thing about being an in-house evaluator – you get to work collaboratively with teams of people in the organisation, you get to know each other, and then you can rapidly respond. So it’s not like we set out to say, this year we’re going to do two developmental evaluations. Especially in the example we’re going to give, it was necessitated by the circumstances. And in a way I’ve always been a big fan of Michael Quinn Patton’s work, so a lot of the principles of developmental evaluation we had already started applying to our work and how we evaluate. It just so happened that all of that came together in this example.

And we don’t always necessarily call it developmental evaluation, but we’ve learnt that the process that works well for us is working together from the start. Let’s say we’re going to get a new program. We work together as a team, at the beginning, when we don’t know yet how it’s going to pan out and what the best model might be. As an evaluator, I play a role where we gather data to inform further development, and we meet really regularly as part of the project team. So it’s collaborative, it’s gathering data, and it’s also questioning things and applying critical thinking. It’s really just bringing all of those skills that you bring to the table as an evaluator to a project team – to help with accountability, if that’s what’s needed, or with learning and development, because evaluation really should be about learning.

And so it’s important that teams are able to be responsive. If something’s set in stone and nothing’s going to change, then we can’t add any value to that. But if we can help inform the process, that’s when you would want to do developmental evaluation and work really closely together.

KAT GOLDSWORTHY: And I wonder too if developmental evaluation – for people that aren’t super familiar with formal models of evaluation – is really just a way of systematising or formalising some of the things that people might be doing intuitively anyway. When you’re starting a new program and delivering it, you might be casually, and not in a systematic way, looking at: what’s working here, what’s not working, how do we change it? What do we need to be thinking about to improve things, so that we get better engagement, so that we’re implementing something that’s going to make a difference and meet a need? But this seems to be a way of formalising or systematising that, so that you’re actually seeking the information you really need to understand what’s going on in the program, and then being able to reflect on it and actively make changes as you go. Would that be a way of looking at it?

SANDRA OPOKU: Yeah, yeah. I think you’re absolutely right. I think that’s a good way to describe it. Because as much as it sounds very open, like it can be anything, it does have a structure to it. And it also helps – because I think the challenge for us is that people have this conception of evaluation and what it’s going to do for them, that we need it, without having a clear purpose. And especially in that really early stage of innovation, it’s not that easy to have an evaluation plan when you don’t know what it is that you’re trying to do. So it’s really great to then bring developmental evaluation as a method to the table, where we can say, this is what we can offer, and then everybody’s quite clear about what that means. It’s not going to mean that we have outcomes data straight away – that’s something that will come down the track. So it’s about setting expectations as well.

KAT GOLDSWORTHY: Yeah, yeah. And it’s funny – you kind of hit the nail on the head before when you said it does seem like there’s not a lot of structure around it, or that it can kind of be anything. Because that’s how I’ve felt when I’ve heard about developmental evaluation in the past. I’m like, what is this? How would you actually apply this in practice? I don’t understand. It seems so fluid, and I’m probably too much of a structured person, so it always goes over my head a little bit. So I think it would be really good to hear you talk about an example where you’ve used developmental evaluation. Obviously the presentation that I heard you give, and the journal article that we’re putting in the handout section of the webinar, was around a developmental evaluation you did to adapt a men’s behaviour change program. So could you talk about that? That might give us a really good example of what this looks like in practice.

SANDRA OPOKU: Yeah, sure. So, first of all, I do recommend looking further – there are lots of great webinars; Clear Horizon, for example, do a lot of work in Melbourne, across Australia and internationally, and developmental evaluation is something that is being talked about more and more. I think they give really good explanations. I don’t want to claim to be an expert – I’m an end user of it, and I’ve found it really helpful. And like I said, it’s been an evolution, where this is what I’ve found most useful and relevant in our context. Because we talk about utilisation-focused evaluation, and I’m really passionate about that, because I know from experience that if it’s not utilisation focused, it’s not going to work – we won’t be able to do evaluation in this context. So that’s the thing we’ve tried to apply. So in –

KAT GOLDSWORTHY: Can I just interrupt you very briefly, Sandra. When you talk about utilisation-focused evaluation, would you mind defining or explaining a little bit what that means?

SANDRA OPOKU: Yeah. So it’s one of the principles of developmental evaluation, but it’s also a method in its own right, where you approach evaluation from the perspective of the end user. I find it really relevant, because in my work I’m always having to work with the practitioners and the clients. And if you’re designing something that isn’t going to be implementable from an evaluation perspective, you’re not going to be able to gather any data, or it won’t be useful data. And then you’ll spend a lot of time trying to have the perfect evaluation plan, but you won’t be able to put it into action. So it’s about working closely with the people who are delivering the service, and – to me – it’s about trying to integrate the evaluation as much as possible into the actual program, and not have it be too much of an impost.

And that’s where I think the design part is really important, because you need to understand the program really well to know what’s going to be the most useful way to collect the data, and what the key data is that you need to collect. Don’t over-collect data; be really targeted.

KAT GOLDSWORTHY: Yeah, so that you’re not overburdening everyone. Because I guess that’s a concern a lot of people have around evaluations – that it’s going to put more pressure on the people collecting the data, and it’s going to burden the clients, who are often giving a lot of data. So it makes sense that when you do really purposeful, intentional evaluation, the idea is to have a minimal impact on everyone, which sounds like what you’re talking about. And then the data can actually be used for something as well. So everyone’s benefitting from the end result, as opposed to just collecting all this stuff and it going nowhere.

SANDRA OPOKU: Yeah, absolutely. So it needs to be useful, it needs to be practical, and it needs to be relevant to the program.

KAT GOLDSWORTHY: So anyway, I interrupted you when you were about to go into the men’s behaviour change example. I’d love to hear about it.

SANDRA OPOKU: Yeah, so I think this is how you first contacted me, because we had been working on this as a case study. So during the pandemic in Victoria, the restrictions associated with COVID-19 meant that we were very limited – we had to shift very rapidly to delivering programs online, or cease them. And with men’s behaviour change, it really was important to continue. There was an increased risk of family violence during the pandemic, and there was an increase in service demand. So we wanted to continue to deliver the program, but do it in a way that was safe and that achieved the program objectives. And like I said, because I work closely with the practice team at Relationships Australia Victoria, we naturally approached this as a team and put our heads together: how can we do this?

So the project team worked on how we could adapt, and I supported that by putting in place a concurrent evaluation plan to collect as much data as we could. Because the first thing we always try to do is ask: what’s the evidence for this? We try to have an evidence-informed approach, where we use evidence from the research, from practitioners’ experience, and from clients. But we just didn’t have any evidence around shifting this program online. In fact, any evidence that was available raised concerns about some of the challenges. So we used an evidence-informed approach, where we looked at what the concerns were and set out to address them.

The peak bodies came up with guidelines for programs shifting online as we were developing, so they were changing constantly. We tried to really address those issues and collect data to make sure we were addressing them – so that we knew we were on the right track, but also so that we could help inform the sector by addressing their concerns as well. And we had a great opportunity, because we also had external observers come in who were experts in men’s behaviour change programs. A lot of people were very reluctant and didn’t know if this was going to work. I think now we take it for granted that we can all use video conferencing technology and breakout rooms – that’s second nature now – but at the time it was a big shift, and it had to happen really quickly. So there were a lot of technical concerns.

So there was the practice and then the technical. And some of the concerns were around engagement – and I think this is common for most child and family services – like the concern around body language: not being able to build rapport in the same way online as you would face to face. Those are all the things we had to try and keep in mind and collect data about. And so, to be really specific about what this actually looked like: we collected electronic surveys from the facilitators, the participants and the external observers after every session. This is a 20-week program where every week looks different – there’s different content. So we wanted to understand, every single week, did the content we were hoping to deliver work in this format? And did we need to make any changes?

So we were really trying to document as we went. We would meet weekly, and any changes that we thought were necessary, we would document and discuss as a team, with all the different perspectives. We would also gather client feedback to see whether we were addressing or achieving the key aims of the project. Obviously we had a small sample – we could only work with what we had – but we tried to collect as much data as we could to help inform that process. And then we adapted along the way.

KAT GOLDSWORTHY: And so I’m curious about some of the practical things around administering the surveys, particularly doing it every week. A lot of people might say, “Oh god, that’s a lot. Are people really going to stick around to complete those surveys?” So was it something that you just built into the sessions, at the end of the session time online?

SANDRA OPOKU: Yeah.

KAT GOLDSWORTHY: And just asked people to stay around and do it. And did you get good rates of completion, and good engagement from participants?

SANDRA OPOKU: Yeah. So I think it’s always difficult in this space to implement additional data collection and things like that. But as much as it was a unique situation that we hadn’t had to deal with before, it was also an opportunity. We did research and evaluation around this in other spaces as well – like in family dispute resolution services – and we found that people were really understanding that it might not be perfect as we adapt, but it’s better than nothing. People were grateful to receive a service, and we were learning along the way. So people were generally willing to participate in helping inform that process.

And the group we initially started with in the men’s behaviour change scenario was a program that had started face to face and then shifted to the online space. We started with them because they had experience of the face-to-face format and could compare. So that’s the group we started with, and we worked really collaboratively with them and got permission to use external observers. The survey was voluntary, but we embedded it into the session – we gave them time at the end of the session to complete the survey – and they were all quite generous in the feedback that they provided.

KAT GOLDSWORTHY: And I imagine too that there’s a sense of power for the participant in that process, isn’t there? Because you’re saying, we really want this feedback from you, and as a result you actually get to inform what the program looks like. I imagine that could be a good incentive in a situation like this.

SANDRA OPOKU: Yeah. I think that’s true for all evaluation. I think it’s always really important to be clear about the purpose. And I think if people buy into that, it’s easier to get meaningful data.

KAT GOLDSWORTHY: Yeah. And so in terms of the feedback and the data that you were getting, did you end up changing the program quite a lot? Were you making adaptations regularly? Can you talk us through the results a little, and how they were incorporated back into the program?

SANDRA OPOKU: Yeah, so one of the other key principles of developmental evaluation is timely feedback. And this is super important, because otherwise it becomes redundant. Like I said, we met weekly. It was a big commitment, in the sense that we had to put things aside and just focus on doing this, because we would gather that feedback and then meet as a team before the next session, so that we could make the changes before the next session even happened. So it was about constantly adapting and responding to that feedback. We made quite a lot of changes, which, like I said, I think we now take for granted. We realised that we had to increase the length of the sessions and reduce the number of participants. And then things like using breakout rooms for some of the activities – we would swap some of the activities, and we adapted some of the course content as we went.

We also put in place some safeguards, like updating the consent form, because people could be participating in the program from home. We found this for some of our services, where we had to make sure people were not in the same room as others, and did not have care of children, while they were doing the program. So we had to do participant agreements and consent forms for people who might be living with their partner, to make sure that it was safe. There were a lot of things that we adapted on a weekly basis, really trying to respond to what was needed at the time. And like I said, I think most people can relate to the experience during the pandemic of everyone responding as they needed to. But we did try to document that and add some rigour to the process by doing it in this way.

KAT GOLDSWORTHY: Yeah. And you’ve had this great result where you’ve been able to publish the findings of your evaluation and contribute to the evidence base around the online delivery of men’s behaviour change programs. Which is just such a fantastic outcome, in terms of what you can achieve at a broader scale when you evaluate these things – and, as you say, when you’re innovating and evaluating new ideas and new ways of doing things. I’ve probably only got time for one more question, Sandra, and there are probably a million I could ask – it’s such an interesting process. But I’m curious about what you would tell other people who are interested in taking up developmental evaluation. Are there any pieces of advice that you could offer, things that people should consider before they jump into using this approach?

SANDRA OPOKU: Yeah. I have been trying to think about this, because we are trying to use developmental evaluation in other areas. And I think what I’ve learnt is to be really clear, upfront, about expectations: what is developmental evaluation, and what can it offer us? And understanding, if people bring you in to do it, what their questions are and what they need to know, so that you can be clear about the best way you can help. You want to know what you’ll achieve, but you’ve got to be flexible and agile, because things will change – that’s the whole purpose when you’re using this. Things will change, but you want people to be clear about what they want to get out of the developmental evaluation.

So for example, it should be about learning. You need to be willing to learn and adapt, and the evaluator needs to be able to inform that decision-making process. You kind of don’t have a fixed evaluation plan from the start, and you need to be ready for that, because I think evaluators might struggle with not being able to have a clear plan. You need to be ready to make changes. And so I think using principles as a guide works really well for me, because you need to have something to anchor to. With developmental evaluation, the most important principles to me are the ones I’ve already talked about: being utilisation focused; feedback being really timely, to inform the decision-making process; and it being collaborative. So it needs to be a team of people who work well together, trust each other, and are willing to work in this way. You need to be open to change and making adaptations as you go. That’s what it’s all about.

KAT GOLDSWORTHY: And I’m just going to be really cheeky, because I have a follow-up question. That was a great answer, thank you. But when you talk about being flexible and not having a fixed evaluation plan, does that mean the methods and the questions that you’re asking at different stages of the evaluation may change, or are subject to change, based on how things are going?

SANDRA OPOKU: Yeah, exactly. So you try to plan as much as you can, and use evidence to inform the way you’re going to approach it, but things might totally change as you go, and you realise, okay, that is not what we need – or it’s not what we need anymore. And so you need to be able to let go. It’s not going to be perfect, but you need to make the best decision that you can with the evidence that you have available. So as things change, you also need to change your evaluation approach.

KAT GOLDSWORTHY: Yeah, I guess it makes sense, doesn’t it, intuitively. There’s no point just sticking with something that’s not working. Or that’s not giving you the information that you need.

SANDRA OPOKU: Yep.

KAT GOLDSWORTHY: I obviously could talk about this all day, but we’re going to have to leave it there – I think we’ve already gone a little bit over time. I just want to thank you so much, Sandra, for joining us and telling us all about developmental evaluation. It’s such an interesting topic and you’ve done some really incredible work in this space. So thank you for taking the time to share your experience with us.

SANDRA OPOKU: Thank you for having me.

KAT GOLDSWORTHY: Yeah. I’d also like to thank our audience for coming along today, being engaged, and being open to us trying this new webinar format. Hopefully it’s been an enjoyable experience for you. I’d also like to thank our brilliant communications team, who do all the amazing things in the background to make this webinar happen – it absolutely could not happen without them. So a big thank you to everyone. Please subscribe to the AIFS newsletter to be notified about the recording, and please fill out the feedback survey when this webinar closes. We’re evaluators – we really want the feedback so that we can improve our program as much as possible. I’ll leave it there. We look forward to you joining us at our next webinar. Take care, and we’ll see you soon.

Presenter

Sandra Opoku | Manager, Evaluation and Social Impact at Relationships Australia Victoria (RAV).

Sandra Opoku is Manager of Evaluation and Social Impact at Relationships Australia Victoria (RAV). This role leads impact, evidence and innovation activities to meet RAV’s strategic objectives and contribute to improved outcomes for families and communities. She has 10 years’ experience in addressing complex social issues in the social services and international development sectors, including in mental health, family violence, family law, early intervention and prevention, alcohol and other drugs, employment and disability. She particularly enjoys the collaborative development of organisational theories of change, program logics and outcome measurement frameworks. To demonstrate the social impact of a wide variety of social services, she experiments with a combination of fit-for-purpose qualitative and quantitative evaluation methods, including developmental evaluation, utilisation-focused evaluation, principles-focused evaluation, place-based evaluation, the most significant change technique, results-based accountability, realist evaluation and feedback-informed treatment.

Facilitator

Kathryn Goldsworthy | Senior Research Officer, Evidence and Evaluation Support

Kat Goldsworthy works in the AIFS Evidence and Evaluation Support team, which specialises in strengthening evaluation capability across child and family support services. Kat is knowledgeable and skilled in designing and preparing program evaluations, developing program theory and logic models, collecting and analysing qualitative data, communicating evaluation results, research synthesis, knowledge translation, and group facilitation and training. She has worked in government and not-for-profit organisations for 15 years, in roles related to employment, health and community services.

Kat is passionate about creating and sharing knowledge about programs and practices that can positively benefit Australian families. 
