Working with evaluators: What do you need to know?

Content type
Webinar
Event date

19 November 2020, 2:00 am to 3:00 am (AEST)

Presenters

Professor Ilan Katz, Dr Del Goodrick, Sulabha Pawar, Stewart Muir

Location

Online


This webinar was held on Thursday, 19 November 2020. 

This webinar brought together a panel of social service providers and evaluators to discuss how to build successful evaluation partnerships.

The panel discussed:

  • Is there a 'right' time to do evaluation?
  • What evaluators can help with
  • How to prepare for evaluation
  • Stories about the successes and failures of evaluation partnerships

In a context where social services are increasingly under pressure to evaluate their programs and interventions, cross-discipline partnerships can help to showcase the important work services are doing. 

This webinar is of interest to professionals working in the families and children support sector.

Audio transcript (edited)

DR MUIR: Hello, everyone and welcome to today's webinar, Working with evaluators: what do you need to know? My name's Stewart Muir, I'm the Executive Manager of the Family Policy and Practice Research Program at the Australian Institute of Family Studies, or AIFS for short, as we like to call it. I'd like to start today by acknowledging the traditional custodians of the lands from which we're broadcasting. Each of the panellists is in a different location, so the traditional custodians differ for each of us. But in Melbourne, where I am, the traditional custodians are the Wurundjeri people of the Kulin Nation and I'd like to pay my respects to their elders, past, present and emerging, and also extend that respect to First Nations people attending this webinar today.

So today we've invited a panel to discuss the sometimes hard-to-navigate process of initiating and conducting an evaluation, and particularly working with an outside evaluator. Evaluating the work we do is becoming increasingly important within child and family services; it's becoming an increasingly common part of practice. So today we're going to talk about how services can do this, and specifically we're going to discuss what evaluators can help with, how to prepare for an evaluation, and ways to partner and collaborate with evaluators to make sure that you get what you want out of an evaluation.

Our panellists today are Professor Ilan Katz from the Social Policy Research Centre at the University of New South Wales. Ilan brings the insights and experiences of his previous work as a social worker and manager, before he became a researcher and evaluator. Ilan specialises in mixed-methods evaluations of policies and programs and a lot of his work has been in child protection and with families, but he's also done research with many client groups, including Elders, people with disability, Aboriginal and Torres Strait Islander peoples, migrants, refugees and people who are living with mental illness. So welcome, Ilan.

PROF KATZ: Thank you. Thanks Stewart, I'm glad to be here.

DR MUIR: We're also speaking with Dr Del Goodrick, a program evaluator and social researcher who's joining us from Queenstown in New Zealand. Del is a Pakeha New Zealander, but she's also worked extensively in Australia with a range of government agencies, primarily in evaluation within public health, education and public policy contexts. And we hope that Del will instil in all of you today a passion for the role of evaluation, for capacity building in evaluation and for the use and influence of evaluation.

DR GOODRICK: Great. Hello everybody, thanks, Stewart.

DR MUIR: And finally we have Sulabha Pawar, the National Manager of Strategic Initiatives at The Smith Family. Sulabha is a senior thought leader within The Smith Family and is responsible for a wide range of program innovation and design, particularly in The Smith Family's place-based early intervention and prevention community initiatives. So she has extensive experience in developing programs, working with evaluators and building internal capacity.

Indeed, Sulabha and Del can share experiences from projects that they've worked on together. There are lots of different ways we could begin the discussion, but I think I'll start with something that we at AIFS, and some of our researchers here who work with services to help them with their evaluations, get asked often: what should services be looking for in an evaluator, and how do you make sure you get what you need? To start us off, I thought I would direct this towards Sulabha as a service provider and ask if you have any insights or perspectives you could share with us.

MS PAWAR: Thank you for that. I have to start by saying that, as with all Zoom things, my laptop decided not to really function, so I was a couple of seconds late, but it all worked. And I think that actually says something about evaluation: when you are well planned, even if something goes wrong, you can get onto it straight away. So the most important thing, before we start looking at engaging an evaluator and selecting one, is the kind of pre-planning we do as service providers ourselves.

It's really important to understand the purpose: why do you want to do an evaluation? There could be many reasons. You could be testing an absolutely new program and you want to understand the impact that your intervention is going to make, and that takes a whole different approach to evaluation. Or you might have a really good program in a very niche community that you are implementing, but you want to now start thinking about scaling it up. And so that means you want to evaluate the core elements of the program that you need to include if you're going to scale it up and still have the same impact that you want to have.

The second thing is about context. You need to understand the participants who you are targeting, the model that you have and the complexity of the evaluation. If you're doing a systems evaluation, the complexities, as many of the attendees of this panel would know, are quite different to when you are doing a direct service evaluation. And finally, it's around the timing. The earlier you partner with and engage your evaluator, the better it is for you and for the evaluator as well, because that helps the evaluator and you to work on your theory of change, understand your priorities, what you would need to measure and the data that you would need to collect.

And so, once you've done your planning and you get into the selection and engagement of the evaluator, I strongly urge you to take a partnership view, rather than treating it as a contract management kind of activity. You know, sometimes there's a requirement to do an evaluation, which means it can become more of a let's-just-tick-this-box exercise, but if we approach everything in terms of partnership and doing the best we can together, it makes for a different approach. So then, of course, there are the skills and expertise.

So Del and Ilan are here; the technical expertise they have, the context knowledge that they have, is really important. And then you move into, you know, their track record: how well do they understand our participants? How well will they understand the requirements of my organisation and my program? And I can't not say this, because it's on top of everyone's mind: the return on investment. Oftentimes we have really grand aspirations about what we want to evaluate and then very modest funding to do so.

So again, the better we understand in the planning phase what we're going to get from the $10,000, the $100,000 or the million dollars we might have, the more realistic those discussions and shared expectations with the evaluator will be.

DR MUIR: Okay, thank you.

MS PAWAR: Yeah, yeah.

DR MUIR: I might throw this open to the panel as well and I wondered, Del or Ilan, if you had anything you wanted to add?

DR GOODRICK: Oh absolutely and I'm sure Ilan has lots to add too. I just want to reinforce what Sulabha said about being really clear about the purpose of the evaluation. Why are you doing it and how do you plan for the information to be used from the evaluation? Who's that for? I think that's a really critical thing to get right.

PROF KATZ: Yeah, Sulabha said more or less everything that I would say, but just to emphasise a couple of those points. One is that evaluators have to have two or three characteristics, I guess. One, obviously, is that they need to know about evaluation and the technical aspects of it. But secondly, they need communication skills. Or we need communication skills, because you've got to get across to the client, or to the service, what the evaluation is and some of the issues that Del just talked about, you know, how to develop a program logic, or what is the purpose of this evaluation?

And it's not always very clear or straightforward. And then the third bit of it, as Sulabha said, is actually some sort of understanding of what the actual program is trying to achieve. Because evaluation isn't just a technical exercise; good evaluations are really, the technical term is formative, in the sense that they are there to help develop the program and help the policy or program development. And in order to do that, you have to have, hopefully, a kind of deep knowledge of what the issues are.

And I think, in particular for NGOs and smaller programs, people can be very unrealistic about these programs: firstly, about the resources available, but also about what they can achieve. And so part of what the evaluator has to do is to bring it down to what is realistic: what can you really expect? If you do 'X', what kind of outcomes would you really expect to have? Because otherwise people get really disappointed when the outcomes that were expected don't eventuate. And part of the evaluator's role is to use their understanding, and often the international literature, to kind of bring things down to that level.

DR MUIR: Thanks. Thanks for that, Ilan. I want to follow up on something Sulabha said about skills and experience, and Ilan touched on that too, with an evaluator. And certainly something we get asked often is, well, how do you find someone? How do you find the evaluator in the first place? I wonder, maybe starting with you, Sulabha: if you're looking for an evaluation expert, where do you get them? Where do you find them?

MS PAWAR: Yeah, I mean, it's such an important question. And we keep forgetting one of the things that Ilan just said, which is: do you need an evaluation? Oftentimes we have found that we wanted all those things, but what we really wanted was just a monitoring exercise. Like, it was an intervention which had in fact been evaluated well, and now we just wanted to monitor, year on year, the progress we are making. So we didn't really need an evaluator; it would have been counterproductive. And when we do look at evaluators, we are fortunate to have internal capacity, like our research and advocacy group.

So, as a program designer, I really leverage their skills and experience in the sector to identify whether I need to go to a university, which will have the multidisciplinary approach to supporting my evaluation, because that's the complexity of it. Or should I go to an evaluator like Ledger, who has really good skills in understanding our audience and in evaluation methodology - someone like Del, for example? You know, it depends on that.

I've also found, in my involvement with Communities for Children, that - and I'm not going to present myself as the expert - it is often just as good to go to an objective, independent organisation like AIFS to discuss the options that we might have and, as Ilan said, make our expectations realistic. And again, funding drives that to an extent as well, and I'm sure we'll talk about that, but what we could do would facilitate getting the right kind of evaluator in a good way. And so I've also found that just going broad with an open EOI or something like that doesn't always work. I think it's better to be a little more targeted and specific, again after having conversations with others in the sector.

DR MUIR: Yes, thank you. I wonder, Del, maybe if you could add a little bit about, well, how does someone find someone like you, I guess?

DR GOODRICK: Aha. I actually think something that both Ilan and Sulabha noted was really important: an evaluator is not just a technical expert, they need to have really good communication skills, they need to be relational. And we need to talk often, hopefully not past each other but with each other, to learn together. I think good evaluation is a joint effort. I don't think you just hire out the expertise and say, you know, our program evaluation is done.

I think it's that collaborative nature of evaluation that makes it much more successful and much more usable and useful. So I think networks are really important. In Australia, of course, you've got the Australasian Evaluation Society. And in New Zealand we have one called ANZEA, the Aotearoa New Zealand Evaluation Association. Those associations are networks of evaluators, and they'll often put up a list of evaluators that are looking for work. But probably the best approach is to find someone who's had a good experience with an evaluator - this is in my view - ask them about that experience, and use your own networks to find the community or the type of evaluator that's suitable for you. And I read something interesting once about nine tips for commissioning a waste-of-money evaluation. It was written by Jane Davidson and Patricia Rogers in a blog. And they basically said, you know, just choosing someone because they're a researcher - oh yeah, they'll be able to do it - and not thinking about the specific evaluation expertise which Ilan mentioned, is a real recipe for disaster.

So go with your networks, find out who's worked best in the networks that relate to the work that you do, and think of choosing an evaluator with evaluation expertise and, ideally, content expertise, though you often don't get those together - in Ilan's case, you certainly do. But, you know, find someone that you can speak to, that's a good relationship person and a great communicator, and really good at the technical work of doing evaluation. That would be my add-on.

DR MUIR: Thanks Del. Did Ilan - did you want to add anything to that?

PROF KATZ: Just on the technical side, increasingly evaluations are more and more complex, even of small programs - you know, we've done evaluations of, let's say, playgroups and even those can be quite complicated. And so often the technical knowledge isn't really in one person; you'd need a team of people. Particularly if you're going to do cost-effectiveness or economic evaluation, that's a slightly different skillset from doing a process evaluation or outcome evaluation. So I guess another criterion might be how networked that person is and whether they can pull in resources from other people to do particular parts of the evaluation.

DR MUIR: Thanks Ilan. You touched on a question that comes up a lot for us, which is, people want to know, how much does it cost? Like, how much does it cost to do an evaluation? And that's one of those questions that's like, well, how long is a piece of string? But what is the best way for people to think about cost, and how can they talk to an evaluator about what things cost? Maybe if we start with you, Ilan, just to follow up on what you said before.

PROF KATZ: Well, the sort of rule of thumb that people throw around - and I'm not sure how realistic this is - is that anywhere between 5 and 10 per cent of the cost of the program should be given over to evaluation. I have to say that most evaluations that we do are for a lot less than 5 per cent of what the program costs. And again, you know, the answer is, well, it depends, because if you really want to do a robust cost-effectiveness evaluation because you really want to know whether something should be re-funded, that is going to be more expensive.

If you want to get a sense of, well, is this a feasible thing to do, is it working, has it been implemented okay, then it's a little bit cheaper. I guess the cheapest evaluation I've ever done was around $20,000 and the most expensive has been two or three million dollars. So it depends on the size of the program and what you want. But to be honest, anything less than 30 or $40,000 is going to get you a pretty skimpy evaluation. I don't know what others think, but that's my view.

DR GOODRICK: Yes, I'd like to add a comment to back that up. The old Treasury and Finance used to say between 8 and 10 per cent of the budget, but if you think of a pilot program that you might actually, as Sulabha said, want to take to scale, it might require more investment because you've got a lot more data collection activities around it. So using that simple metric is not going to be sufficient, and I'd prefer to go with the tasks required and daily rates. I think that's a really good conversation to have with evaluators. You know, if you're employing someone from a large contracting organisation, you're paying a much higher level of overheads and a lot greater amount, but are those the people actually doing the work?

Or, you know, who's actually going to be conducting the work for the evaluation and the program that you're doing? So to me, a daily rate for personnel, and for costs and for tasks, is a good way to start those conversations. I've just had an experience here in New Zealand where someone was scoping something and went to three different evaluators, just to get a sense of the likely scope of the activity that they want to do. So I think we should be more open about having those discussions as well.

PROF KATZ: Yes.

MS PAWAR: I think it also, to your point Del, very much depends on the kind of evaluation you want, and on what you as a service provider are able to contribute to ease the work of the evaluator. We often forget there is a lot of logistical and project management work that evaluation requires, or that the evaluator needs to build in. So if you are able to contribute some of that work, it means the hours would be spent more on the rigour of data collection, methodology, analysis and interpretation, rather than on running around trying to organise consent forms or the input into those applications, which can take a lot of time.

So if you are able to do and support all of that logistics work, obviously the cost will come down. That's something to consider when you are working on a shoestring budget. We have always found that, especially when we are testing a new program, the evaluation cost has to be built in right from the get-go, and it has to be within that 5 to 10 per cent range. What happens when you are trying to modify a program, or take one which has worked well and see where else it can be used, is that you could end up taking some shortcuts. And that has not proven to be a good way to go forward, so we try to put good evaluation in place even in those situations.

DR MUIR: Thank you. That actually leads on to the next question I had, which I'll direct towards you, Del: as an evaluator, would you say that services need particular skills themselves to commission an evaluation? Are there things that they can have internally that will help get them what they need, or reduce cost?

DR GOODRICK: Yes, I think so, Stewart. I think a real willingness to do that really hard critical thinking about, you know, what do we want from the evaluation. When I reflect back on evaluations that haven't gone as well - and I think they're really good reflective activities, because there's always the partnership, and it's not one part of the partnership that lets it down, it's sometimes the partnership together that doesn't work - when I think back on some of the ones that I would say were less successful, they were where the client thought somehow I was supposed to be psychic and a mind reader, and guess what they really wanted to find out and guess how much the budget was, you know.

You know, 'We can't give you the number, we can't even really give you a scope' - so how can I possibly put in a proposal or respond to an RFQ when I don't really know what the parameters of the evaluation are? So I think the skills are around critical thinking, and around setting up as best you can, with a program logic or a theory of change, so that the evaluator knows a bit about the scope of the program, or a little bit about what the intended outcomes are and how it works, so that you can start to scope out some of those activities for the RFQ, or have those as part of that first stage.

I think the second skillset, if there's a skillset, is curiosity. I think some organisations kind of know what the evaluation's going to find. And sometimes they want to contract that responsibility out to someone else that maybe they can blame a bit later or hold responsible. Being curious about a program is to say, well, what don't we know? What would be interesting for us to learn from this?

And I just want to go back to a past president of the American Evaluation Association called Eleanor Chelimsky. One of the things she said is that evaluation isn't just about accountability - I mean, to a lot of our funders it can be primarily that we've got to be accountable, to them and to the community, for money spent. But what about the opportunity for learning and new knowledge? So I really think having that mindset of being curious - what can we learn, how can we be accountable, and what new knowledge can we generate through this - is a really important mindset as well. And then I would go back to that partnership idea. If we really want to work in partnership with an evaluator, and if that's something that our agency has committed to, what are the attributes of a good partnership?

Good communication, open communication, sharing ideas, sharing resources. I think we should be tracking and monitoring the success of our evaluation partnerships, as well as the success of the program, because they're often paid lip service to, and then you find miscommunication and breakdown. So I think there's something about having that curiosity and that mindset, and also finding out what's already out there. AIFS has some great resources online. When I was preparing for this session last week, I did a bit of a Google search and found lots of awesome stuff. AIFS actually has a guide for commissioning external evaluations, which is excellent, written by, I think, ooh, the name has completely gone, Eleanor - Eleanor - - -

VOICE: Eleanor Kerdo.

DR GOODRICK: Yes, Eleanor Kerdo, that's the person. And that's available online, and it's actually a very accessible, user-friendly document that will back up everything we're saying. So I think use what's already there.

DR MUIR: Thanks, thanks for the plug Del.

DR GOODRICK: It is. It was one of the ones that came up first, so - - -

DR MUIR: That's nice to hear.

DR GOODRICK: Yes.

DR MUIR: I guess, as a follow-up question to this, is about tips. I mean, one is the curiosity thing, which I think is something that anyone can have regardless of their experience. Everyone has talked about some of the pre-work that can go into commissioning an evaluation, and that might be program logics or theories of change. But for someone who's new to evaluation, how can they approach this when they maybe don't have much knowledge of evaluation or what's required? Maybe if we start with you, Ilan?

PROF KATZ: Well, again it's an issue of communication and finding somebody to talk to and discuss it with. I guess there's a chicken-and-egg situation in what we've been talking about: do you find an evaluator and then talk through the evaluation with them, or do you talk to AIFS or somebody else first and then find an evaluator? And I guess the answer is going to be both. And actually this links to one of the points that was made about how early on in the program you should get the evaluator, because it's slightly more complicated than 'as early as possible'. Because, as we've been saying, evaluations are better if they've got a clear objective and you know what you want to get out of them. And my experience is that sometimes, in the very early stages of program development, people are so focused on getting the program up that they don't really have much time for thinking about evaluation.

That can actually have consequences way down the track, for example if they're not collecting the data. But on the other hand, the evaluator has to think with them about some of those issues and be sensitive to, you know, issues of scaling up. And in my experience, particularly in kind of early intervention type programs, they evolve over time.

So you can start out evaluating 'X', but six months later you're evaluating 'Y', because they've had feedback from the clients, for example, and they've changed the way the thing operates, so it's not quite the thinking that was there before. And so sometimes you have to adapt the evaluation methodology to track the development of the program. So that kind of communication needs to happen from the early stages and continue through the program.

Now, one other thing is that there are a number of different stakeholders in this, so it's not just you, the program deliverer, and the evaluator. Often the funder, obviously, funds the evaluation - it could be a government department or whatever - and they have a stake in this, so the communication has to be with them as well. And then, increasingly, clients themselves - the end-users or service users - are also involved in co-designing research and evaluation.

So they're not just participants in the evaluation; increasingly they're on a steering committee and actually part of the design. So again, the evaluator has to have the ability to communicate not only with service providers and bureaucrats but with clients as well. And that's an ongoing process too, so we're moving from evaluation being just somebody coming in, doing a piece of work, producing a report and disappearing, to something much more organic that develops over time, with communication being the key issue throughout that process.

One other point, very quickly, is that many people see evaluation as a risk. Evaluations very seldom say everything is wonderful, you don't have to do anything, just carry on doing what you're doing. So inevitably there are going to be issues that an evaluation raises, and for people running the program that can come across, particularly when funding is at stake, as a huge risk. So the curiosity is kind of balanced with the anxiety about what this evaluation is going to tell me. And that's again something else that an evaluator has to deal with, and be open and transparent about.

DR MUIR: That's something I'd like to come back to a little bit later, actually, because I think that's really important. But Sulabha, I want to come back to something that you said earlier, which was around monitoring and the difference between evaluation and monitoring. And I guess that gets to a question around what you, as an organisation that's providing the service, can do internally to help the evaluation along, or what tasks you might choose to do yourselves as opposed to handing them off to an external evaluator?

MS PAWAR: Yes, I think it depends, again, on the type of evaluation. Oftentimes we go out to an evaluator because we need technical expertise, objectivity and credibility. Right? So I don't think evaluating ourselves is always the answer - we can get reports that say that everything is looking really good, and you really don't want that; that is not a proper evaluation, because a proper evaluation should cover all of those things.

So, in terms of the things and the tasks we could do, and to put it into context: a really good understanding of your participants. I think that's really helpful, for us and for the evaluator as well. Because if you know they understand what the vulnerabilities might be that will impact on data collection, what the privacy and consent issues might be, and if we have a relationship with those participants, we can work through that and stop some of the issues from arising. And if we don't, we can go to the evaluators and get their advice on what we can do to make their job a bit easier, and give them a head start when they start. I think it's also really important to understand for ourselves how much of the evaluation is going to be quantitative, qualitative or mixed methods. Sometimes, you know, I've seen evaluations which have been very heavy on the qualitative aspects, but the qualitative information has not been translated into data points.

So the report comes out more as a story. Having that conversation can help us to again prepare ourselves and our evaluator better, with our systems and processes. Right? And then finally, like I mentioned before, there's the logistics. One of the things we have found is, sometimes we are evaluating a program which is in quite an inaccessible location, and we've got a really sound, technically good and very relational evaluator as well. And we have had to create a link between that inaccessible location and this evaluator, not just from a geographic perspective but from the stakeholders' perspective.

Because, to Ilan's point, sometimes you think of evaluation as just us, the service providers, the participants who are coming in, and the evaluator, but it's all the other influences around those participants that the evaluator might need to engage with, sometimes even just to get access. So yeah, I could go on and on on that one, but I will leave it for now.

DR MUIR: Thank you. Look, we've touched on communication skills quite a bit - the communication skills of the evaluator - and I guess sometimes there's a question for the person commissioning an evaluator: do they go for someone with subject matter expertise, or with evaluation expertise, sort of the technical expertise? So I guess throwing to you, Ilan, as someone who straddles those fields: how should people make those choices?

PROF KATZ: Well, I would say, obviously as an evaluator, that it's necessary that they have evaluation skills. You can't do an evaluation without those skills, so just knowing about the issue, the topic or the client group is not good enough. And somebody made the point, I think Sulabha, that you can go to the local uni and find some early childhood expert and say, well, they know about early childhood or they know about child protection, so they can do the evaluation. I don't think that's enough - evaluation is really very difficult and becoming more and more complex and challenging, with new big data sets and things like that you have to work with. So that's the core skill. If they understand and know about the issue or the client group as well, that's an added bonus: if you've got two candidates, one who does and one who doesn't, and they've got equivalent skills, you would go with the one that does. Or you would develop, as I said before, a team of people, one of whom has got content knowledge of the actual intervention and the other the technical knowledge. So that would be my response to that, I think.

MS PAWAR: I think one very important thing - so yeah, absolutely, Ilan, technical knowledge (indistinct) because that's why you're going to an evaluator, right? For their expertise. Some of the things you can do. But it's also communication skills, along with that alignment of shared purpose, because you want to have trust in the evaluator, and they should feel comfortable to ask you the difficult questions.

You know, so I think that ability, that partnership thing that we've all been talking about: opening yourself up to be asked the tough questions, which is why you're getting an evaluator, and being comfortable enough to respond to them, knowing it's going to add value and improvement rather than you being penalised for something. I think it's really important to keep that in mind as we go through this.

DR MUIR: Thank you. I want to come back to a topic that came up earlier, which is that sometimes you don't get the results you want from an evaluation. You might have done a lot of planning and hard work, the program's been evaluated, the results are in and the program isn't doing as well as expected, so the question is, what happens now? I might go to you, Del: as an evaluator, how do you deal with that, and thinking about the client, when they get the negative results, what can they do at that point?

DR GOODRICK: Yeah, thanks Stewart. Just as you were posing that, and when we were talking about it earlier, I was reflecting on an experience I had where I had a great collegial relationship throughout an evaluation - this is probably 10 years ago - and I'd sent the draft report in over email and said, you know, 'Let's meet and discuss,' and I got this email back saying, 'We liked your report but could you take out the section that says, "Limitations of the program"?'

And after the initial reaction, I wrote a very nice email and we met up and discussed it. They were so tied to it - it was their baby, their program - and for them it was a criticism. And I said, but you didn't hire a marketing person, you hired an evaluator, someone who was going to look at the strengths and weaknesses and the opportunities for improvement. So I think we changed the heading to 'Opportunities for improvement'.

Most of the text didn't change, but it really was as simple as having that out-and-out discussion: people are not going to find this credible if it's all a good news story, if it's all positive, because it's an evaluation and, as Ilan said, it's got to have that balanced perspective of the strengths, the limitations and the opportunities to do things better. So that would be my take: if you've got a good collegial relationship, you can kind of work through it together. There have been occasions, though, where reports that have been predominantly negative have been varied, so I'm very aware of that experience as well, whether you like it or not. So yeah, that would be my point.

DR MUIR: Sulabha, I wonder - I mean how have you dealt with this in the past?

MS PAWAR: Yeah, I will talk about two experiences. One was when the (indistinct words) marketing (indistinct words) community (indistinct words) promotion report and (indistinct words) an evaluation report, and that was early on in my career, and that's what I was referencing when it was all about how wonderful we are and everyone in that place and everything that has been done. There was almost nothing about what could be improved or what could be done. So really, it was feel-good stuff but not anything that we could work on, given it was a multi-year program that we were running.

And then recently we had an experience with a post-school program for kids who are at risk of early school leaving. We were testing a program that was trying to keep them on and then help them to move into education or, most probably, some kind of a trade or apprenticeship. And because it was a pilot, it was being tested, and as we were going through it we could start seeing the trends coming, because the evaluator was pretty honest with us, and we could see that there were some real serious issues with the theory of change, actually, and the participant cohort.

So the result that we did get was that there was a whole group of the cohort who were just not engaging, the analysis being that the program is not suited to that cohort. And with the cohort that was engaging, there was a high degree of happiness and satisfaction - (indistinct) talk about it like going to the movies: they all came out of the movies saying, 'Ah, great,' but nothing changed. And there was (indistinct) small discernible difference.

So we took that on board, and because we were at the stage of program development where we could really do something with it, we modified the program approach - the actual intervention and the intensity of it, the activities that were being undertaken - but we also did some deep-dive work with the evaluator around which cohort would most benefit from a program of this kind. It is now in its second testing phase, so we'll see what improves. That was an interesting one: as I think Ilan mentioned, you can sense sometimes when things are not going to go the way you want them to.

DR MUIR: Okay, what you're doing then is seeing this as a chance for improvement?

MS PAWAR: (Indistinct) because I think the fundamental point, beyond anything, is that this is an investment that is going into something, which means that it is not going into something else. There is an opportunity cost always associated with the work we do. So we owe it to the participants and the beneficiaries to do the best we can, and if it's not really making a difference, then withdraw the investment from that and apply it to something else, and be brave enough to make that call.

PROF KATZ: Can I just say that one of the skills an evaluator has to have, other than communication, is a thick skin, because sooner or later you're going to get heavy pushback from somebody. We've talked about it again as if it's just the service provider and the evaluator, but, you know, the funder wants you to come up and say this is a waste of money, defund them, do something else, and you then have to say, well, actually the data doesn't really bear that out. So you very often get pushback from one constituency or another.

And the whole point about an independent evaluator is to be independent, but it can be quite challenging sometimes to do that, because you've got multiple stakeholders and they might have very, very different views about the program and what they think the evaluation should be producing. So if you really want an independent evaluator, you've got to find somebody that's going to be prepared to stick up for themselves. And there are kind of stock responses that people have, for example, 'Ah, your methodology was terrible. Even though we agreed to it upfront, we now think that you didn't speak to enough people,' or, 'You spoke to the wrong people.' Or, 'Well, this is history now, we've changed the program completely and all those things that you said were wrong, well, we've now sorted all of that out, so how can you do this report?'

So there are various ways that people go about undermining an evaluation report and, as I say, the evaluator has to stick up for themselves. That's part of the skill.

DR GOODRICK: Yeah, I'd like to add to that too. Ilan mentioned the anxiety that some people can have around evaluation, and I remember years ago Scriven talked about XEA, extreme evaluation anxiety. It was a lovely little paper that was really about, you know, why don't you scope it out with the people in the program first: 'How are you feeling about the evaluation?' 'What have been your experiences with evaluation?' 'What are your concerns about it?' And actually get stakeholders to have meaningful discussions, especially if it's co-design, where we're often designing things with communities or with stakeholders: 'What are your concerns about the process?' Surface those, and then address them, because I think a lot of the time people do think it's a personnel evaluation, not a program evaluation.

DR MUIR: Okay, thanks Del, that's a great piece of advice. This discussion has been fantastic and I think we've identified some really common themes here. One theme that's cutting across is the importance of that kind of communication and partnership approach, for both the evaluator and the client, in terms of people getting what they want.

I want to open up now to some audience questions, because we have plenty coming through, and if you're listening in, please do keep sending them through, we have a little bit of time for questions. I was going to start with a question which I guess goes back to the beginning of the discussion: how important is it to have someone external do an evaluation? Like, when should you choose to get an external person as opposed to, say, doing it yourself?

DR GOODRICK: I'm looking for - - -

PROF KATZ: (Indistinct words) - - -

DR GOODRICK: Would you like to go? Or me, or?

MS PAWAR: You go first - - -

DR GOODRICK: I'd like to argue for building internal capacity in evaluation. I think those skills are great even if you never do an evaluation yourself - to be aware of what you need to have in a good evaluator, some of the language around quality, how you would know a quality evaluation if you fell over it. You know, not being bedazzled by consultants with fancy speak and jargon, but actually being able to discern quality, I think, is really critical. So internal capability - developing your own internal strengths like The Smith Family have, that Sulabha can talk about - people you can go to within your own organisation who've had experience and can provide that useful advice.

And I think internal evaluations can often actually be more critical of the program than an external person ever would be, because they're much more aware of all the warts and the things that might be sitting under the surface, compared with a drive-in/drive-out kind of approach that you might get with an external consultant. So I think internal and external partnerships work really well. And going back to our cost question, you could reduce some of the overall cost of an external evaluation by building some internal data collection mechanisms into the external evaluation, so that the external evaluator can also build the capacity of the staff in doing evaluations - it's a win-win.

MS PAWAR: Like I said, we are fortunate enough to have a really solid research and advocacy team, so when we do go for external evaluators, it's driven by (indistinct) - sometimes because of capacity, as in the time that we have: we have a small team and we need a focused, dedicated evaluator to work on a program for a period of time that our team can't devote to it.

We (indistinct words) point, oftentimes it's for very technical skills - there is a level of complex data manipulation and data work that we have to do and put together, and a dedicated evaluator might be able to help with that. We might be testing a multi (indistinct) pilot which will be (indistinct) six-year period, and it will require skills not just in understanding the methodology, the quant and the qualitative, but also economic data analysis, and for that we need someone who would be able to get that expertise within a university or wherever, and so that's another time we would go for an external evaluator.

DR MUIR: Thank you. We've had a few questions about cost of course, that always keeps coming up. Ilan, I mean you talked about, I mean the enormous range that you can have in terms of how much - - -

PROF KATZ: Yeah.

DR MUIR: - - - an evaluation costs. We've had some questions coming in from organisations that don't have a lot of funds. And I guess the question then is, can a small organisation with limited funds do a comprehensive evaluation? Are there options for them?

PROF KATZ: Well, I think the previous discussion actually answered some of that, because if you've got internal capacity, that's not a cash cost, it's a time cost - so there are two kinds of costs here, time costs and cash costs. If you internally have got the time to collect the data and do some of the tasks, then the external evaluator can just be there to provide capacity-building advice and maybe do some analysis of the data. That obviously means it's a lot cheaper in terms of the money you hand out.

On the other hand, it means that you, the organisation, will have to devote a lot more energy to that evaluation, so that's the kind of trade-off that you've got there. And then the other thing is to be very careful about what you actually want. As we've said multiple times, if you want a full-blown process, outcome and economic evaluation, with data tracking people over two years post the program, that's expensive. If you've got more limited ambitions for your evaluation, then you can do it a lot cheaper. So yes, there are ways of cutting costs, but as with everything else in the world, you get what you pay for, essentially.

DR MUIR: Thanks for that. We have a question that has come up with us in the past too, at (indistinct): do the service providers or the evaluators need to engage in an ethics approval process to evaluate their programs, and what kind of things should kick off that process? If anyone wants to take that.

PROF KATZ: Well, just to say, as university evaluators we have to go through ethics every time, so that's just a non-negotiable issue, and not only do we have to, we think it's a good thing to do.

So the answer is yes, and as with all these other issues, it's becoming more and more complicated, because for example if you're going to report separately on Aboriginal outcomes then you have to go through an Aboriginal ethics committee as well as a university committee. If you're going to collect data from health services then you have to go through health ethics. If you're going to go into schools to collect data you have to go through education ethics committees as well. So you can't do it without going through those processes, and that's something you just have to build into the evaluation process. I don't know what the others think, but that's my view.

DR GOODRICK: I would like to see ethics approaches by agencies that are actually concerned about ethical risks for participants and not about ethical risk for the agency. My experience some years ago - and we're going 20 years back now, and hopefully things have changed - was that the number and extent of the forms required, from consent forms to plain language statements, actually put so many young people off participating before you could even get them engaged in the evaluation. And I'm wondering sometimes, are the ethics committees concerned about the risk to the child or the student with parental consent, or are they actually protecting themselves?

So it's a bigger issue for another day, but certainly, because I'm not associated with a university, for anything obviously involving children, students or health I have to go through the appropriate ethics committees. But a lot of the agencies that I work with here in New Zealand and in Australia have their own internal ethics committees, which are often quicker than university ethics. I mean, we were held up for five months on a project when I was in the university, and the project was only eight months in length - how ethical is that?

MS PAWAR: And I would agree with Del (indistinct) the ethics approval processes can sometimes be a bit of a hindrance. But I think if you go back, again, to the (indistinct) and the rationale behind why we need (indistinct) approaches and protocols for evaluations, we would agree that it is to benefit the client or the participant, you know, not the group.

And having an internal ethics committee, like we do, makes sure that every time you decide to undertake an evaluation, small or big, the first and foremost question you ask is how ethical it is and how much it is not about you and your program, but about delivering an improvement or service for the client group.

Sometimes even internally we've had people rushing into an evaluation and then, a month into it, there are all these concerns coming up about whether this is the right group, the level of vulnerability that the clients have, and that is not really a good (indistinct)

DR GOODRICK: No, no. And burden too. It burdens - - -

MS PAWAR: Yep.

PROF KATZ: So I think this issue actually links with the cost one that we were talking about before and others (audio malfunction) I have a view, which may be controversial, that you shouldn't evaluate everything that moves. You should think about why: does this really need an evaluation, and what do we want to do with this evaluation? Because these days it's become a tick-box thing - you've got some money, therefore there has to be an evaluation.

And I think that's the wrong way of thinking about it. It should be the other way around: 'What do we need from this evaluation? What is it going to achieve for us?' And then let's do it, rather than doing it because the funder says we have to, or because it's going to look good if we've done an evaluation so we'll get a bit more money, or whatever it is. Evaluations, for the reasons we've all said, have become more complicated, more expensive and more difficult, and therefore you need to think more carefully about each one and whether it should be done.

And there's the ethical issue that Del also raised, which is that there is a burden on both service providers and clients when you do an evaluation. And if you're just doing it as a tick-box, then is it really ethical to do that piece of work?

MS PAWAR: I couldn't agree more. That is just absolutely the approach. You know, the rush to get an evaluation for purposes beyond improvement, testing and reflection is very, very counterproductive.

DR MUIR: Okay, so a question we often get, and a question that's come up in the webinar today, is about how an organisation balances evaluation with program delivery. How can they manage it when they're so focused on doing the program and delivering a service, or when it seems like they're having to concentrate so much on the evaluation that they can almost lose sight of what they're doing the program for? How can they get that balance?

MS PAWAR: I might go first, as someone who has been confronted with this challenge, so to speak. I think when you are doing program delivery and evaluation at the same time, it helps to use the lens of action learning. It is about understanding that you are evaluating within set parameters, so you don't want to change the core elements of your program, because if you keep moving them, your evaluation will not have a stable platform.

At the same time, the learning that you get from a good evaluator as you're going through your implementation and delivery can help you to finesse and really tighten your approach in a really positive way. So I think if you take that mindset, you won't (indistinct words) the time that you might need to give to those evaluation activities. Sometimes, as an implementer, I might feel like, 'Ah, please, why are you asking me all those questions? I have to go and do all this stuff and make sure my team members are okay with it,' but if I know that those questions will help me to respond better in my delivery, it works more effectively.

DR MUIR: Thank you.

DR GOODRICK: And I would add to that to say I like the idea of seeing it as an action learning or action research cycle, where we're developing something, we're gathering data about it, we're then modifying it and gathering more data about it, and using that as an opportunity for the organisation to learn about how it's going, but also to modify it as it's progressing. I guess that's particularly relevant for formative evaluations, where the purpose is improvement, versus more of the traditional outcomes evaluation where, if you're trying to implement a model with fidelity, you don't really want to shift it too much. So it depends on the type of evaluation, the type of program and obviously the purpose and use.

PROF KATZ: And I think, from an evaluator's point of view, again one of the skills is to try and minimise the burden on the service provider - so not to come in and expect people to fill in huge questionnaires, or for clients to have to complete questionnaires every three months that take 20 or 30 minutes at a time, et cetera. To the extent that the data collection for the evaluation can also be part of the data collection for the actual service, you would try and build an evaluation that does that. And I think that's quite an important issue for evaluators.

You know, evaluation originally came from clinical trials where, in that situation, a lot of energy has to go into the data collection and the whole thing revolves around the evaluation, around the research. Whereas in the kind of evaluations we're talking about, the service delivery really takes precedence and the evaluation has to fit around that, rather than the other way around.

DR MUIR: Thank you. That leads on to another question we had, which is: how do people deal with multiple program or service model changes during the evaluation period? I know from experience, and I'm sure everyone else does too, that sometimes during a program adjustments are being made and the program changes a bit. How do you deal with that in an evaluation? How does a service provider address that? Can they change their program during an evaluation?

DR GOODRICK: If I can respond to that, there are some brands of evaluation, or types of evaluation, that actually privilege innovation and change. Michael Patton's developmental evaluation, which he developed to be a kind of bridge between a formative and a summative evaluation, actually suggests that the evaluator provides the information that allows the program to innovate, and maybe even double-loop the learning and change the program entirely.

So it's quite a popular approach here, and I understand in some areas in Australia too, and it's an opportunity to support change. Because, you know, a logic model's lovely - we go, 'Oh, here's what we're going to do and this is what's going to result' - but a lot of the world's not like that, and a linear logic model doesn't always work. So I think there are some brands of evaluation that would relate really well to those sorts of agencies, Stewart.

PROF KATZ: Yes, I think it depends on the program, as we've all said. If you're implementing some American evidence-based program with program fidelity that's costing you hundreds of thousands of dollars to buy the manual, then obviously you don't want to change it. You are evaluating that particular program in that particular way, and that's what you're doing.

If you're evaluating a pilot early intervention program that's responding to community needs, and community needs change while you're doing the evaluation, then, as Del says, you change the evaluation - or you build an evaluation methodology that can cope with those changes, because those changes are good. You know, it's not a deficit, it's actually a benefit that programs adapt and change to different circumstances.

MS PAWAR: And I would just add another thing, you know, what Del mentioned before is actually learning framework report (indistinct words) really facilitate that. Ah, but also you want to make sure that the (indistinct) links are pretty much stable, you know, because if you change it too much then again, I'm talking about a traditionally (audio malfunction) problematic. Sometimes that's why a multi evaluation (indistinct words) that is already inbuilt can help us (indistinct).

DR GOODRICK: M'mm.

DR MUIR: Thank you. We have a question that's come in about program service users - when the people, the clients of a program or a service, are predominantly, say, Aboriginal, Torres Strait Islander or First Nations people, or from a cultural background different to that of the evaluator, how do you approach that as an evaluator? And, as a service, when you're employing an evaluator, how should you approach that? So I wonder if we could start with you, Del?

DR GOODRICK: Um, my own personal philosophy is bound up with my professional one. Um I've done about five or six evaluations that have involved Aboriginal and Torres Strait Islander communities and I would never lead an evaluation in that space as a non-Aboriginal person myself. And as a Pakeha New Zealander it's very similar here - I wouldn't do an evaluation for Maori services or Maori health and human services as the lead evaluator. I would always work with someone in the lead role who is from that community.

In terms of access, in terms of appropriateness and in terms of modelling too - I'm not gonna hear things the same way, I'm not gonna understand things in the same way as people that have a lived experience of that particular cultural group. And we do have sets of cultural competencies in evaluation as well, and I think we should pay attention to those when we're looking at evaluators: where are they at? What's their level of cultural competence around working with these communities?

MS PAWAR: Absolutely, and to that point, you know, about context and participants, this is where it comes down to. So one of the first things to look at, apart from the technical (indistinct), is how important it is to be respectful and intentional in the way you are engaging with and understanding the lived experience of the participants in the context in which they are, you know. And that's one - that should be a part of your litmus test for your evaluators, when you're selecting your (indistinct words).

It's not always easy to do, and it requires real investigation, having multiple conversations with the evaluator, but if it's important for the client and for the program, you must (indistinct words).

PROF KATZ: Yeah, I mean I haven't got much more to add. I think, firstly, don't assume that you know something about some other culture because you've read a book about it - you know, as a researcher or evaluator you're always on a learning curve.

Secondly, I agree with Del that you would always work with um other researchers from that community. We increasingly also employ community members as researchers and so there's a capacity-building component which you would want to build into ah research and evaluation as well. Um and it's not just the individuals, you know, sometimes you're working with communities, with elders and others as well, and you're feeding back to the community so there's a whole element of community engagement that um is now more and more part of the evaluation.

So, um, you know, it's a two-way process and that's again a continuous process for the researcher but also for the community. And particularly with Aboriginal and Torres Strait Islander communities, the other thing is to know that the history of research in those communities is negative - and, you know, sometimes more than negative, traumatic for the community - so that is something to be cognisant of and, again, to be transparent about with those communities.

DR MUIR: Okay well thank you everyone, but I'm afraid we've run out of time. The discussion has been full of great advice and insight. I would like to extend my thanks to you, Ilan, Del and Sulabha and to everyone who's attended today so thank you very much.

MS PAWAR: Thank you.

DR GOODRICK: Thank you, thank you so much.

PROF KATZ: Thank you. Thanks Stewart.

DR MUIR: Thanks again everyone, stay safe and goodbye for now.

DR GOODRICK: Stay safe. Bye.

PROF KATZ: Bye.

WEBINAR CONCLUDED

IMPORTANT INFORMATION - PLEASE READ

The transcript is provided for information purposes only and is provided on the basis that all persons accessing the transcript undertake responsibility for assessing the relevance and accuracy of its content. Before using the material contained in the transcript, the permission of the relevant presenter should be obtained.

The Commonwealth of Australia, represented by the Australian Institute of Family Studies (AIFS), is not responsible for, and makes no representations in relation to, the accuracy of this transcript. AIFS does not accept any liability to any person for the content (or the use of such content) included in the transcript. The transcript may include or summarise views, standards or recommendations of third parties. The inclusion of such material is not an endorsement by AIFS of that material; nor does it indicate a commitment by AIFS to any particular course of action.

Presenters

Professor Ilan Katz
Professor, Social Policy Research Centre, University of New South Wales
Ilan was a social worker and manager before becoming a researcher and evaluator. He has led a large number of research projects on a wide range of social policy issues, specialising in mixed method evaluations of policies and programs. Although he has focused on child protection and families, he has researched with many client groups including elders, people with disability, Aboriginal and Torres Strait Islander peoples, migrants and refugees and people living with mental illness.

Dr Del Goodrick
Program Evaluation and Social Research
Del is a Pakeha New Zealander. She has been working in Australia for the past 25 years but is currently based in Queenstown, NZ. Del has strong interests in evaluation theory and its application to practice. She has worked with a range of government agencies, primarily in evaluation within public health, education and public policy contexts. Del is passionate about the role of capacity building in evaluation, and the use and influence of evaluation.

Sulabha Pawar
National Manager at The Smith Family
Sulabha is the National Manager, Strategic Initiatives at The Smith Family Australia. As a senior thought leader within the organisation, she is responsible for a wide range of program innovation and design, particularly the organisation's place-based early intervention and prevention community initiatives. A leader in the not-for-profit sector for over a decade, Sulabha has extensive experience in developing and evaluating policy and programs for children from birth to 12 years, and in developing and leading community capacity building in vulnerable and disadvantaged communities in metropolitan, regional and remote Australia.

Dr Stewart Muir
Executive Manager, Family Policy & Practice Research, AIFS
A socio-cultural anthropologist by training, Stewart is a highly experienced researcher with extensive experience in developing and leading large multi-method research and evaluation projects. His research and evaluation projects have focused on out-of home care, services for children and families and military families. He also has extensive experience working with Aboriginal and Torres Strait Islander communities. Stewart has a particular interest in helping child and family services to build their capacity for evidence-informed practice.
