Evidence, evidence-based programs and evaluation

Content type
Webinar
Event date

22 April 2015, 1:00 pm to 2:00 pm (AEST)

Presenters

Elly Robinson

 

You are in an archived section of the AIFS website 

 


About this webinar

This webinar was held on 22 April 2015.

Please note: this webinar is a repeat of the presentation given at the FRSA Senior Executives' Forum in March 2015.

In this webinar, Elly Robinson covered the following topics:

  • How is "evidence" defined and used?
  • Criteria for the evidence-based program profiles
  • Choosing the right program
  • Program fidelity and adaptation
  • Criteria for assessment of "other programs" (including provisional assessment)
  • Program evaluation and implementation
  • Program logic
  • Useful resources

Audio transcript (edited)

Webinar facilitated & speaker introduced by Sharnee Moore

MOORE

Good afternoon everyone and welcome to this CFCA webinar, "Evidence, Evidence-based Programs and Evaluation".  My name is Sharnee Moore and I'm a research fellow here at the Australian Institute of Family Studies.  Today's webinar is intended for Communities for Children Facilitating Partners, Community Partners and others who are implementing the 30 per cent evidence-based program requirement.  This webinar is an adaptation of a presentation given to the FRSA Senior Executives' Forum in March 2015 and will cover topics such as how evidence is defined and used in this context, the criteria for the evidence-based program profiles and the assessment of other programs, choosing the right program and information about program planning, implementation and evaluation.

Before I introduce our speaker, I would like to acknowledge the traditional custodians of the lands on which we are meeting today.  In Melbourne, the traditional custodians are the Wurundjeri people of the Kulin nation.  I pay my respects to their Elders, past and present, and to the Elders from other communities who may be participating today.  It is now my great pleasure to introduce today's presenter, Elly Robinson.  Elly is the manager of the Child Family Community Australia, or CFCA, information exchange and the Expert Panel project here at the Australian Institute of Family Studies.  Elly has extensive experience in knowledge translation and exchange, project management and resource development and has authored a number of publications and resources.  She recently completed her Master of Public Health at the University of Melbourne and also holds post-graduate qualifications in secondary education and adolescent health.

Now I need to alert you to some brief housekeeping details before I hand over to Elly.  One of the core functions of the CFCA information exchange is to share knowledge so I would like to remind everyone that you can submit questions via the chat box at any time during the webinar.  There will be a limited amount of time for questions at the end of Elly's presentation and we will try to respond to as many questions as possible.  We also want you to continue the conversation we begin here today.  To facilitate this, we will set up a forum on the CFCA website where you can discuss the ideas and issues raised and share your own experiences.  We will send you a link to the forum later this afternoon.  Please remember that this webinar is being recorded and the audio transcript and presentation slides will be made available on the CFCA website in due course.  So please join me in giving Elly a very warm, virtual welcome.

ROBINSON

Thanks Sharnee and thank you everyone for attending today.  As Sharnee mentioned, this presentation is most relevant to Communities for Children Facilitating Partners and others who are involved in delivering these programs, but I will talk generally about some of the concepts and more specifically about the 30 per cent evidence-based program requirement and the program profiles, so hopefully it will be of interest to you all.  One of the key questions that often arises in projects such as this and beyond is, "What is evidence?" and there's no one particular answer to this.

On the slide is a list of different things that are often identified as evidence in some form, so the more empirical published research that is often peer reviewed and conducted in a way that gives it some rigor.  It may be locally gathered data, which is particularly relevant to programs such as Communities for Children.  There may be a program that's built on relevant theory and, again, this is something that's come up in programs around what theories they're based on in Communities for Children and how those theories are operationalised in the programs that are run.

Is ‘expert opinion’ used?  This is often the case with things like policy making.  There's also practitioner wisdom, and the place of practitioner wisdom when we're looking at what evidence should be driving program delivery; that includes your own practice wisdom or that of others around you, in your workplaces and beyond, at conferences and so forth, and the way that that information is collected.  Then there are the more general ones such as Wikipedia and Google results, which may not have quite the same amount of rigor involved but are surprisingly often used in decision making around evidence.  So we'll talk a bit today about which types of evidence have the most rigor behind them in looking at things like evidence-based programs and practices.

This is a way of looking at evidence from Jack Shonkoff, who is a child development expert in the US, and I think it is relevant when we talk about how we have framed evidence within the Communities for Children expectations.  So he has three different areas.  The first is “established knowledge”.  That's the knowledge that's defined by the scientific community against a set of strict criteria.  It's what we know, using the term "know" fairly loosely in this sense.  This probably links the most to the evidence-based program profiles that we've put together, and I'll get to those in a little while, but the evidence-based program profiles are most reflective of the information that we know thanks to the scientific and research community.

The second area he talks about is “reasonable hypotheses”, and they're assertions about what we don't necessarily know yet but that are based on the established knowledge.  These sorts of areas are the reason why we put in place the assessment of other programs for the Communities for Children project, recognising the fact that there was already much good program delivery going on out there that was based on some areas of established knowledge, and recognising that within the project.

Lastly, it's “unwarranted assertions” and they're the ones that are generated by anyone, that may or may not be distortions of established knowledge but often are distortions in a way that is not helpful.  These are the ones that shouldn't necessarily guide responsible practice, because we want good practice and programs to be based on something that tells us that there is a clear and rigorous relationship between a number of things.  Again, the terms that are used in this area when we talk about how we put evidence into practice and policy are quite broad.

In particular, people have probably mostly heard talk about “evidence-based practice or policy” or “evidence-informed practice and policy” and I just want to talk a little bit about the definitions of those in a minute.  Other terms that we've heard being used are “evidence-influenced” and “evidence-aware”.  Probably a little less so than the other two, but it just shows the grappling with how and what types of evidence influence programs, policy and practice, and what terms are used to indicate that.

There's no one definition of “evidence in practice” that's used.  So, as I said, the two main ones are “evidence-based” and “evidence-informed” but it largely also depends on how you want to define evidence.  The idea of evidence-based programs and practice was first used in medicine in the early 1990s and, of course, medicine gives us a much, I guess, clearer relationship between a treatment and an outcome in that often it's visible.  So something like a patient breaking their arm, you setting it and the outcome being that the arm is healed is often a lot easier to see than when working with people.  But, nevertheless, the idea of programs and practices having to have some sort of rigorous evidence base behind them came from medicine and moved into social work fairly soon after that.

The term "evidence-informed" is often seen as better conveying that decisions are guided or informed by evidence rather than based solely upon it.  One of the things that is becoming clear is that the idea of whether “evidence-based” or “evidence-informed” is used to describe something depends on how we define evidence.  So if evidence is narrowly defined as just including empirical evidence that comes from research projects, then “evidence-informed” is a more inclusive way of saying it's not just that evidence that matters, there are other sources of evidence such as practitioner wisdom and client needs that need to be given some consideration.

So, in a sense, we've called what we do “evidence-based practice” and that's the term that we've been using, but it's important to say that we use a very inclusive definition of evidence when we do, and I'll explain that a little more in the next slide.  One of the questions that came in before the webinar was around what evidence takes precedence, and it's a hard question to answer.  In some respects it depends on what you're trying to do and, again, the next slide might help to explain that a little bit.  But, ultimately, what we're trying to do with this project and with similar projects is to move away from the idea of, "We know it works because we know it works" and to try to put a little more rigor behind that, and I'll talk about that a bit further later as well.

One of the views of “evidence-based practice” that nicely encapsulates the different types of evidence that can be used is provided by Aron Shlonsky and Michelle Ballan in an article from 2011 that I'm happy to pass on if people are interested.  They very clearly state that evidence-based practice is an overlap of three different things.  It's what we know from research evidence, the best research evidence we have, and that's combined with practitioner expertise and client values and expectations around what they want and what they need.  All of those things combine to make evidence-based practice.  So in this sense evidence is not narrowly defined as just being about research evidence but nicely recognises those other influences as well.

What they say is that the way to look at evidence-based practice is to optimise the combination of these things.  So where there is good, strong, rigorous research evidence around what works, we give that greater weight within what we're doing because, clearly in that sense, there's been research around that says that this works for this particular target group.  If there's not such strong research evidence around what works, but there are strong client preferences for a particular program, then you act accordingly.  And, in some senses, that's also a recognition of the fact that research evidence doesn't always exist for a lot of things that need to happen in program delivery, so particularly when you're innovating in the case of having new target groups or new client needs or adapting an existing program to suit some of those different scenarios.

So, in those sorts of senses, the evidence base hasn't necessarily caught up with what needs to be done, and a good example of that is the areas where, at the moment, online and other technologies are used to deliver programs, because longitudinal studies are often difficult there as the technological environment changes so quickly.  So, again, it might be using the best available information outside of the formal evidence base to decide how to address those needs.  This might be a familiar concept to people as well.  Often, particularly within academic areas, people talk about hierarchies of evidence, and I just want to reflect on this for a little while and on what it means for what we're trying to do within the Expert Panel project more broadly and the Communities for Children program area more specifically.  Within the realm of research evidence, there's a common understanding that different types of evidence allow stronger or weaker conclusions to be drawn.

At the top of the hierarchy of evidence most often are randomised controlled trials, where the randomisation comes from a random allocation of your sample to a control or experimental group.  If there's no bias in that allocation, so if that's happened without any interruption from something that would create some bias, it's more reasonable to conclude that the program made a difference, because you can tell the difference between what's happened within the control group that didn't receive the program and what happened in the experimental group that did.  So it's considered a strong research design.
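To make the allocation idea concrete, here is a minimal sketch in Python (illustrative only, not part of the webinar) of randomly allocating a sample to control and experimental groups and comparing the average outcomes of the two groups; all names and numbers are assumptions.

```python
# Illustrative sketch only (not from the webinar): random allocation of a
# sample to control and experimental groups, then a simple comparison of
# group means. All names and numbers are made up for the example.
import random
import statistics

random.seed(1)

participants = [f"family_{i}" for i in range(40)]
random.shuffle(participants)

# Random allocation: half to the experimental group, half to the control group.
half = len(participants) // 2
experimental, control = participants[:half], participants[half:]

# Hypothetical post-program outcome scores (higher = better).
outcomes = {
    p: random.gauss(60 if p in experimental else 55, 10) for p in participants
}

exp_mean = statistics.mean(outcomes[p] for p in experimental)
ctl_mean = statistics.mean(outcomes[p] for p in control)

# If the allocation was unbiased, a clear difference between the group means is
# more reasonably attributed to the program itself.
print(f"Experimental mean: {exp_mean:.1f}")
print(f"Control mean:      {ctl_mean:.1f}")
print(f"Difference:        {exp_mean - ctl_mean:.1f}")
```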

There are issues that come up often around randomised controlled trials within service environments, and these are also relevant to other designs, so this is not just about coming down on randomised controlled trials, but there are lots of challenges in doing them within the workplace or within a service environment.  You may get dropout of participants and, if that happens at different rates in the two groups, then the validity of what you're studying and the possible biases that are introduced may influence your results.  There might be unexpected differences between your control and experimental group for various reasons and that again will influence results.

One of the most important reasons why, in a service environment, randomised controlled trials may not be appropriate is the ethical issues behind it, and what we mean by that is the ethical issues involved in withholding treatment from a control group.  So an example of that is if we have a really good program that we know works to eliminate or limit domestic violence in homes and we say, "We're going to provide this program and we're not going to give it to one group of families and we are going to give it to another even though we know it works", there's clearly an ethical issue with the fact that that treatment hasn't been provided to people who are experiencing that issue.  So ethical issues are one of the most difficult issues in relation to conducting randomised controlled trials in a service environment.

Often next down on the hierarchy of evidence are what are called “quasi-experiments” and that's where, rather than using a control group specifically, it's the use of a naturally occurring comparison group.  So that might be participants on a waiting list for a program, so you're offering the program to one group and you're not offering the program yet to participants on a waiting list, so they're a naturally occurring comparison group for your project.  It might be that you offer a different intervention or a briefer version of the program to your comparison group and then you compare the outcomes for both those groups.  So it's not randomised and it's not controlled, so in that sense you can't definitely say overall that the program caused the changes, but greater benefits to those who receive the program may mean it's effective.  Comparison groups are often used within this area and are a good way of helping us to understand whether the program worked or not for the participants.

Down the hierarchy a little bit are pre- and post-tests.  So with pre-tests and post-tests, you collect data at the start of the program and again at the end of the program and see whether your sample has improved, not improved or stayed the same by the end of the program.  There's no comparison or control group and that's what makes it difficult to know whether it was the program that caused the changes or not, because there might be other factors that have impacted on the outcomes.

So no real conclusions can be drawn because the changes might have happened anyway, and a good example of this is where you might run a program, say on literacy approaches for young children, but at the same time as you're running your program the families are also receiving some sort of other literacy program, and so you can't tell whether it's your program or something received outside of the program that has influenced the outcomes for them.  So sometimes it's difficult to separate out the impacts of the program from other things that are going on around those families.
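As a rough illustration of what a pre- and post-test comparison involves, here is a minimal Python sketch (not from the webinar, with invented scores) that computes the average change for a single group measured before and after a program.

```python
# Illustrative sketch only: a pre- and post-test comparison for a single group.
# The same measure is collected before and after the program and the average
# change is examined. With no comparison or control group, the change cannot
# be attributed to the program alone. All scores below are invented.
import statistics

# Hypothetical pre- and post-program scores for 20 participants.
pre_scores = {pid: 50 + (pid % 7) for pid in range(1, 21)}
post_scores = {pid: score + 4 + (pid % 3) for pid, score in pre_scores.items()}

changes = [post_scores[pid] - pre_scores[pid] for pid in pre_scores]

print(f"Average change: {statistics.mean(changes):.1f}")
print(f"Participants who improved: {sum(c > 0 for c in changes)} of {len(changes)}")
```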

One of the debates is around, "Is it better than nothing?" and, again, it depends who you ask: those who very strictly adhere to the hierarchy of evidence would probably say that pre- and post-tests are not a strong enough or rigorous enough approach to evaluation to be given much credit.  However, they do give us some information on what's going on and in some cases are a reasonable way of testing out whether things are working or not, particularly if there's not a natural comparison group that you can use.

So one of the examples we've had so far in the assessment of programs is a program that operated in a classroom in a regional area where there wasn't another group of kids at that age, so they were the only class of that particular age group.  And so there wasn't a naturally occurring comparison group unless you went outside the school, and then you start to introduce different types of biases if your comparison group is too different.  So for the purposes of this exercise - and I'll get to this - one of the things that we say is a reasonable, or rigorous enough, indication that the program is having some effect is pre- and post-testing.

So these are some of the problems that happen with hierarchies, and why there are some questions around how useful they are within these sorts of projects.  They neglect a number of important issues.  One of those may be, for example, that the evaluation or the research is a randomised controlled trial but the RCT wasn't particularly well implemented, so by all accounts it ticks that box, but in fact the results have been biased in some way.  So we can't just say that RCTs or randomised controlled trials are the be-all and end-all without understanding whether they have rigor associated with them as well.

By only looking at RCTs and other comparison designs, we may underrate the value of good observational studies, which are often more appropriate for certain programs.  Again, going back to programs where there would be ethical issues if a randomised controlled trial was used, observational studies may be a really good way of gathering information from some of those programs, so we have to give them credit as well.  By only concentrating on the upper levels of the hierarchy, we can lose useful evidence.  So one of the backlashes against RCTs in a service environment is to say, well, if there have been 4,000 evaluations of a particular program, or of programs for a particular issue, but only eight of them used an RCT, we might be missing a whole lot of things that happened in those other, almost 4,000 evaluations that are of relevance and of importance to us.  So we don't want to lose that.

What counts as good enough evidence depends on how we want to use it and this has been a particularly important driver for us in looking at how we approach the 30 per cent requirement for the Communities for Children programs.  We wanted to create a system that was realistic for the sector.  So that's where the assessment of other programs came into the 30 per cent requirement and saying we have a set of rigorous enough criteria that we can use to say that a program has enough evidence around it for the time being and we'll also provide providers with information as to how the program can be improved in terms of rigor.  I'll get to that later.  And there's different uses for different designs so, again, RCTs and other designs may not be appropriate for particular target groups or programs.

One of the things that's important to consider is what sorts of factors influence whether practitioners are more likely to use research evidence, and there have been some studies done on this sort of thing.  It's important, I guess, from an organisational perspective, or when implementing these sorts of programs, that we look at what are the best ways of getting practitioners to understand and use that research evidence within programs.  The literature says that it's most likely to happen if it fits with knowledge that's already been gained via hands-on experience, and that makes sense.  If it makes sense within the context of your work, it fits into your schema of how those things work.

Research has to be easy to implement and to use, and this is a recognition that practitioners and service providers are often overloaded and there's not a lot of time to read and assimilate research.  This is the basis of the work that we do with the CFCA information exchange: to try and find the most appropriate and easiest ways for busy people to use research in a way that they understand and is easy for them to use.  It has to be adaptable to suit the agency and practitioner needs as well.  So again, what we do in CFCA is make sure that when we look at the literature around a particular area and find what the findings have said, we then look at what those findings actually mean in practice and policy and try to communicate that through the work that we do as well.

I like this quote because I think it summarises the reality that it's no use producing world class research if that research is not accessible for busy professionals.  So one of the great debates going on at the moment is around open access to academic studies and other research and how people are able to find those and translate them into what they're doing and that it makes more sense for those things to be as accessible as possible rather than the current user-pays system.

We have to also recognise that there are organisational factors.  So you might be the most enthusiastic practitioner around using research evidence that there ever was, but if you're not supported by factors within the organisation, then you'll have more trouble putting that into practice.  So these are just a few things that we know from the literature that influence the use of research within organisations.  The first is leadership attitudes, so there has to be a champion for it within the organisation, someone who drives the rationale and the meaning and the project and gets people on board to do that, and a leadership response that is encouraging of using research and evidence.

Staff resources, of course; people have to have the time and the skills to be able to use research in practice.  If there's organisational stress or financial pressure, it's less likely to be in place.  Management types, again - different management types may be more amenable to the use of research and the reasons why it should be used.  Tolerance for change within an organisation, and that there's a culture of experimentation and risk taking.  So one of the biggest risks that we have is that the push for the use of evidence-based practice and policy in some way diminishes the opportunities for people to innovate and try things out, and we don't want to lose that because that's an important part of testing, particularly in a very dynamic family environment, responding to needs and testing out what works for families and children in this kind of setting.

I just wanted to draw people's attention to a paper we have on developing a culture of evaluation and research on our website, and these are a few of the things it says about what organisations which have a culture of evaluation and research look like.  So those organisations deliberately seek evidence in order to better design and deliver programs, and can deliver evidence to stakeholders that programs are achieving desired results, enable robust decision making and support professional development.  So that's the URL for that paper if people are interested.  It also draws attention to a number of tools that can be used within organisations to get a sense of where the culture is at within your organisation around the use of evidence and evaluation.  These slides will go up at the end of the presentation as well, so don't feel like you have to jot these down as we go.

So I wanted to move on to what this looks like.  That's the background and the context to how we've set up the response to the 30 per cent requirement for evidence-based programs that exists for Communities for Children.  So there's a number of ways that the 30 per cent requirement can be met.  One of the first ones is the online profiles that we have, which are a list of evidence-based programs that have met a set of criteria that we've used to assess them as rigorous enough to be included.  The way we went about that was that we looked at a number of clearinghouses and databases that are well known for this - they're set up to host programs that meet a certain number of criteria that recognise those programs as evidence-based.  So we looked at a whole bunch of different clearinghouses and databases.

One of the issues we had was that they often used different definitions of evidence, not surprisingly, and they may privilege those very scientific definitions that we talked about before.  So the more rigorously those clearinghouses and databases apply the evidence criteria, the fewer programs there are on those databases, which makes sense.  Again, a lot of programs haven't been evaluated to the standard of a randomised controlled trial for various reasons, so we tried to get a good cross-section of clearinghouses and databases that use those different definitions.  We also recognise that the relevance to C for C programs of the programs in these clearinghouses varies.  So there tended to be more parenting interventions in the databases and fewer community interventions.

The other thing we recognise is that there might be programs out there that would be included in those clearinghouses or databases but they haven't been submitted or they haven't been found so we only know what we know.  So one of the things that we've done is try to put in place a system that recognises that and says, "There must be other programs out there that would fit those criteria and if people know of those, then please forward them to us".  That again is the URL of the list of evidence-based programs.  I think there's 26 or 27 programs on the list.  Some of them won't be a surprise to anyone but there's details on there around the program, their target group, information on the training and so forth that's available for service providers to use.

It's important to say that we recognise, and increasingly recognise, some of the limitations for providers in using these programs.  I guess what we've tried to do is think of these programs as the high level, standard programs that have a good evidence base showing that they work for families and children.  So if you can run these, or if there's scope to run these, then they're one of the best choices for children and families under the circumstances that the program has been set up for.

These are the criteria that we used for inclusion in the evidence-based program profiles, just to give you some idea, and all of this information is on our website as well.  So to be included in that profile database, the objectives of the program had to be in line with the C for C Facilitating Partner model, and part of that is that the programs targeted children aged zero to 12 years and their families.  We had to know that there was documented information on the program that was available, so clearly there needed to be documented and available information on the aims, objectives and theoretical basis of the program, the program logic or similar behind the program, who the target group for the program was, and the documented, outlined activities of that program and why they're important as well.

There also needed to be a training manual or documentation available that allows for replication in Australia.  So we didn't want to provide programs that looked really good but people couldn't run them because they didn't have the material they needed to understand how to do so.  We're looking for what we've called good quality evaluations with at least 20 participants in them, which we felt was a minimum sample to say that the results were likely to be rigorous enough, and that the evaluation shows positive outcomes and no negative effects of the program.  So the question was, "Is there a reasonable assumption that the program itself changed people's knowledge, attitudes or behaviours?"  It was important with these programs in particular that it was reasonably clear that that was happening.  We also wanted to know that the program had either been replicated or had a potential for replication and again, this was so that it could be put in place in different areas and was available for C for C to do that.

Just to stay on this for a little while, there are some things to consider.  If people are looking to deliver these programs, these are the three factors to consider.  The program quality is covered by the criteria, so the process we went through to put programs on the database covered off on the program quality.  As you can see from the criteria, we looked for the things that characterise a good quality program, and that was one of the factors to consider.  Program match and organisational resources are the things that need to be considered by service providers before choosing the program, and I'll just talk about them a little more.

These are some of the things that are good to consider in terms of whether the program is a good match for your organisation.  So how well do the goals and objectives of the particular program reflect those of the organisation already?  How well do the program's goals match those of participants?  So if you have a very vulnerable family with a number of risk factors, then a program that's a short prevention program is not likely to be suitable.  How long is the program?  Again, there's a risk in putting in place a program that goes for a long period of time with families who are unable to or not keen to commit for that longer duration, and yet those programs need that longer duration of commitment to produce lasting behaviour change.

So again you have to think about what suits your target group.  Has the program been effective with a similar target group?  So again, each program on the program profile database identifies what the target group is for that program and I'll talk in a little while around fidelity and adaptation with those programs and whether that's one way of adapting it to suit a different target group but often the programs are designed for a particular group or they might be designed for general audiences.

It's rare to find a program that suits all groups and situations, not surprisingly, so we encourage people when they see the program to carefully read the materials or talk to program developers around adaptation.  We've had lots of conversations with the developers of programs that are on that list and some of them are very willing to help people to look at the ways that they may be adapted within your communities or with your target groups.  So again, the details of the program developers are on the website so you can have a chat to them, and often the program may have already been adapted for a different target group, so it's worth talking to them if so.  And does the program complement other programs in the organisation and the community in general?

In terms of organisational resources needed to run these programs, we're very cognisant of the fact that it's not just about taking a program off the shelf and delivering it.  It doesn't work like that and we recognise that, so there have to be a bunch of things that already exist or can exist within the organisation that support the development of the program.  These cut across the different areas of expertise, staff, financial support and the time to run the program, and if they are inadequate, then the chances of success with that program are limited.  So they are important parts of delivering the evidence-based programs and we do recognise that.  In terms of organisational resources it's important to ask, which programs have the best chance of being continued into the future?  Does the program have a good chance of being integrated into the base programming of the organisation?  And again, is there the capacity?  Is there the commitment to do that?

Can organisations collaborate with others in the community to deliver and evaluate the program?  This may be one way of sharing the resources involved in doing so and, again, it will help with the evaluation of the program and with contributing to the evidence base, because there'll be larger samples involved in the program as well.  So it's worth considering, if one organisation doesn't have the expertise and the resources, whether they can partner with others in the community, and that will help to do that.

One of the important factors around delivering the evidence-based programs is program fidelity.  Basically the definition of program fidelity is staying true to the original program design, so it's delivered in a similar setting with a similar target group and with the same number of activities over the same duration.  The higher the fidelity to the program, the greater the likelihood of positive outcomes.  So it makes sense that the evidence base behind the program is based on the idea that it's been evaluated for the target group that the program is aimed at.  Departing from that target group, or from the way that it's run, means that the evidence base behind the program is compromised, so it becomes less of an evidence-based program.  So, fidelity is important.

Again, we recognise that staying true to a program is something that service providers may need some assistance in doing because it depends on a lot of things or they need to recognise that these things need to be in place as well.  And those sorts of things are well trained professionals who receive the right and accredited training for the particular program, that there's a supportive infrastructure which is one of the things we've talked about already, that there's adequate resources, managerial support and regular process evaluations.  So they're the types of evaluations that look at the way that the program is implemented and whether it's implemented correctly.

The flipside of that is adaptation, and adaptation will often happen intentionally and often will happen unintentionally.  One of the ways that it happens unintentionally is what we call program drift.  So the intention is to run the program as it exists within the program documentation, but for various reasons it doesn't work out that way.  Somewhere along the line you might lose a couple of families and decide to cut down the number of sessions, or get new people in, and those sorts of things.  There may be intentional decisions as well around that and, again, it's worth chatting to the program developer around which adaptations will work.

So intentional changes to content, duration or delivery style may diminish the program effects.  It usually works that simpler things, like changing the language to suit a different or slightly different target audience, are unlikely to have an effect on the program, but changing the duration of the program or the content of the activities is one adaptation that often will change the intent of the program.  So changing the program from what it is is an important consideration if you're looking to use the evidence-based programs.  Down the bottom is a URL again to a couple of research briefs, I think Issues 3 and 4 on that site, the briefs that are related to choosing a program and to fidelity and adaptation, and they're good, easy to read and use briefs.  That's not a typo - the "c" is missing in "Practice".  If you try and put the "c" in, it sends you off to an error page, so it's not a typo, that's actually how it's spelt.

The second way of meeting the 30 per cent requirement is, again, going back to the idea that we wanted to recognise programs that are already being delivered, or that people are proposing they would like to deliver, that meet five of what we call “rigorous enough” criteria, and I'll get to the criteria in a minute.  These are slightly less rigorous than the ones that we used to establish the program database, but still rigorous enough for the purposes of this project.  Just a note that programs must be submitted by a Facilitating Partner and they need to have a conversation with a Grant Agreement Manager before submitting.  If it's a Community Partner that wants to submit a program, they need to do this through the relevant Facilitating Partner, but their details can be included within the submission so that if we have questions around the program, we can go directly to the partner.  Again, that's the URL for submitting programs and for further information.

These are the criteria for inclusion of other programs and, again, they're just slightly less rigorous than the ones we use to establish the database. So, these must be documented: A theoretical and/or research background to the program; a program logic or theory of change or logic model, whatever you'd like to call it; activities in the program which generally match good practice in meeting the needs of the target group and the purpose of the program; and evaluation that's been conducted, again with at least 20 participants and one that has established that the program has positive benefits for the target group, and that staff members are qualified and/or trained to run the program.

One of the things that's come up for us already is that we've had programs that have each of these things, but they don't link to each other.  So we also want to see that the theoretical background makes sense within the program logic, that the activities within the program logic make sense within the context of the research background, and that the evaluation is linked to all of those as well.  They have to have logical and coherent links between those elements.  The way that the process works is that after submission of the program, CFCA researchers assess the program and provide detailed feedback to the Facilitating Partner.  We're looking at around about four to six weeks' turnaround for those.  They do take a little bit of time to get through, but we will try and get them back as soon as we can.

From the assessment, there are four possible outcomes: the program is eligible to be included in the 30 per cent requirement; the program is eligible to be included in both the 30 per cent requirement and the evidence-based program profiles, so it meets those additional criteria for the profiles; it's eligible for provisional approval, and I'll talk about that in a minute; or the program cannot at this stage be included in the 30 per cent requirement.  We'll let people know when we assess it; we tick a box to say which one of these we have agreed that it is.

We'll also suggest in our feedback how the Industry List may help and, just to give you an idea - the Industry List is a part of the broader Expert Panel project - the Industry List is a version of the Expert Panel itself that is available for sector organisations to purchase assistance from.  The Expert Panel is composed of organisations and sole traders who are highly competent in one or more of three areas: program planning; program implementation; and program evaluation or outcomes measurement.  So the Industry List is a version that's available to sector organisations.  You don't have to use the Industry List.  You may already have relationships with existing researchers or evaluators who you wish to use, which is fine.  This can be considered a resource that you can use if you wish to, but you don't have to, and it may help.  We are in the process of establishing the Industry List and hope that it will be available within the next few weeks.

Provisional assessment is another part of it.  So, again, this is in recognition that there are good programs out there, but they may not meet those rigorous assessment of other programs criteria just yet, and more time is needed to meet those criteria.  What we ask for a provisional assessment is that there's a research or theoretical background that is articulated around the program and that it does have some links, and that some form of evaluation has been conducted in the past, recognising it may not meet those rigorous evaluation criteria just yet.  And then we ask people to outline a plan for meeting the criteria in the online guide, the five criteria, by the 30th of June 2016, so basically a year away, and that plan needs to be submitted by the 30th of June this year.

The plan doesn't need to be detailed.  What we want to see is that you understand what the criteria are about and that you've got something in place to meet those criteria.  Again, it might be that you are looking to employ some help from the Industry List with those tasks, or you might be wanting to do it in-house, but either way we'll give good feedback on the assessment sheet to let you know where we think the work needs to be done.  I just want to talk very briefly about program logic and evaluation - and this is a whistle-stop tour through these things.

Program logic, again, has a number of different names.  There are some differences between these, but for the purposes of this exercise we look at one particular way of doing program logic, and that's the basis of what we're looking for in this project.  The program logic visually represents what's going on in a program.  It's basically the roadmap of the program.  There are two important things that need to be included, and they are what make the program logic what it is.  The first is the relationship between the different parts of the program logic.  So I'll show you an example in a minute, but it goes back to that idea that the research base behind the program needs to be linked to why you're targeting a particular group, and then why those activities fit what the target group needs, and then what the outcomes are.  So there has to be a logical link.  It's what's called “if-then” relationships between the parts of the program logic.  And the second is that there has to be an intention.

So the program logic is a roadmap for the program itself and, again, this might change over time and probably will.  So it's not that it is set in stone from the start but that it provides a visual example of what goes on in the program.  We actually have done one because we thought we'd better walk the walk, so we did one for the Expert Panel and this is what our program logic looks like.  It might not be very clear on your screens but this is a fairly standard template for a program logic, but there's lots of different ways that they can look and we're not really fussed about what it looks like as long as it generally has the same elements in it.  So it's about what you put into your program - the inputs, what the outputs are, so what activities are done, who the participants of the program are and then looking at what are the short term, medium and long term outcomes of the project or program.

Most programs won't be able to explicitly show how they impact on long-term outcomes and we don't expect that.  By the time you get to that level, you'll see just below that are the external factors, and we recognise that there are lots of different things that impact on families and parenting and those sorts of issues that have nothing to do with your program and that will influence whether long term outcomes are achieved or not.  It's very hard to measure by the time you get down to that stage.  What we're looking for are evaluations that look at the short term and possibly the medium term outcomes for the program, and at the moment we're working on and fine tuning an evaluation plan for the Expert Panel project that looks particularly at the medium term outcomes for the project.  And again, the other box down the bottom is assumptions, so there are certain things that need to be in place to make sure that the program is working that aren't necessarily in the organisation's control.
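As a rough illustration only, and not the Expert Panel's actual template, the sketch below records the elements of a program logic as structured data, with hypothetical inputs, activities, participants, outcomes, external factors and assumptions.

```python
# Illustrative sketch only, not the Expert Panel's actual template: recording
# the elements of a program logic (inputs, outputs, outcomes, external factors,
# assumptions) as structured data. All entries are hypothetical examples.
program_logic = {
    "inputs": ["funding", "trained facilitators", "venue and materials"],
    "outputs": {
        "activities": ["weekly parenting sessions", "take-home resources"],
        "participants": ["parents of children aged 0-12 in the local community"],
    },
    "outcomes": {
        "short_term": ["increased parenting knowledge"],
        "medium_term": ["more consistent use of positive parenting strategies"],
        "long_term": ["improved child wellbeing"],
    },
    "external_factors": ["family financial stress", "other services in the area"],
    "assumptions": ["families are able to attend sessions regularly"],
}

# An "if-then" reading of the logic: if the inputs are in place, then the
# activities can run; if the activities reach the intended participants, then
# the short-term outcomes should follow, and so on up to long-term outcomes.
for level, items in program_logic["outcomes"].items():
    print(f"{level.replace('_', ' ')} outcomes: {', '.join(items)}")
```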

I'll quickly run through the "Why evaluation?" slides.  I think it's just important to provide some basis as to why we see evaluation of programs as important.  So these are some of the reasons why evaluation is conducted.  It's conducted for quality assurance reasons, so systematically checking whether programs and services are meeting the standards of the program.  Were participants satisfied with the program and did they benefit from it?  Good evaluations try to get beyond the idea of, "Yeah, we liked the program because the food was great" or, "The program facilitator smiled a lot".  We're trying to find the nuts and bolts of why they were satisfied and what they got out of the program, and the next question is around that, so did they benefit because of the program or were there other factors at play?

Evaluation also helps providers to know how the program can be improved or refined.  So it might be that you find some parts of the program are working and some others aren't, and one of the other things we've talked about within the context of this program is what are the key elements of programs - the parts that make them work - and what are the things that are more negotiable around what works for a program.  So within evaluation, you may be able to find things that can be improved, refined or adapted in ways that don't have a great impact on the program outcomes, or that have a better impact on them.  Evaluation can also help to justify requests for further support and funding, and process evaluation will give you some idea about whether the implementation of the program was true to the program design.

These are the broad types of evaluations, and one of the questions that came in quite early was around, "What sort of evaluation do you use at what stage?"  So if we're setting up a new program, how do we try to build the evidence base around that program?  It's important to say evaluation doesn't just happen at the end of a program; there are different types.  Needs assessment around what's needed for the target group can happen before an initiative, or when you're reviewing a program if the needs of the target group have changed.  So you're assessing and evaluating what the needs are of those particular groups that you're trying to access or help.

Process and implementation evaluation basically answers the questions around, “is the program implemented in the way it's intended and is it reaching the people for whom it's designed?”  Outcome and impact evaluation is more likely to happen with well-established programs and the idea is to try and figure out whether the program helped the clients and whether there have been any unintended outcomes.  One of the things we're very keen to capture in the Expert Panel more broadly is when programs work but also when they didn't work and why they didn't work; it's as important to know that a particular program doesn't work with a target group as it is to know that it does.  So we see that as very valuable information that we also intend to share.  And there are economic evaluations as well, which look at the benefits and costs of a program.

In terms of the assessments of other programs, one of the questions again that has come up is around, "What sorts of evaluations are we looking for?"  We're certainly not expecting that people have done randomised controlled trials.  In fact, the type of evaluation that's undertaken is less important than the results of the evaluation.  So we'd like to see evaluations that suit the point at which the program is at, because that makes sense, but what we want to know is that the results of the evaluation are valid and that they show that a positive outcome of the program has happened.  So basically we're looking for an evaluation that has an element of pre- and post-testing of participant outcomes, or a comparison of outcomes for those who did and didn't receive the program, or a comparison of two types of service interventions.

And again, the provisional assessment we offer will help a lot of the programs that haven't yet had the opportunity to do an evaluation that really gives us some idea of whether the program is working or not.  That extra 12 months gives programs the scope to be able to do that.  We also recognise that some evaluations that are put in place will go longer than that, and we're very happy to talk about that further.  Again, there's some information at the URL at the end of that slide.

So there are lots of ways of collecting evaluation data.  It might be that you collect new data from key informants or clients.  I won't go into the different types of qualitative and quantitative ways of collecting data, but there is some information on our website, and I'll show you later where that is, that looks at the different ways of doing that.  It may be that you make use of internal administrative data, including program data that may already exist, that will help you look at whether the program works.  There's the use of external administrative data as well, so that may be things like hospital data, hospital admissions or admissions to other services that help you to look at what's going on for your clients.

It may be that you use existing representative data sets to see what's happening in your area.  One of the ones that's come up quite regularly already for us is that people have used the Australian Early Development Census to look at what's happening in particular demographic areas and linked that to what's happening in their programs, and there's also the Longitudinal Study of Australian Children that's run here at the Institute of Family Studies as well.  It's good to know that multiple data sources allow for greater validity of your results and also greater depth, so ideally there'd be a few different sources to work with, but, again, we recognise the limitations, and probably resource limitations, in doing this as well.

The other important thing is that we need to balance the quantity of data with the quality of data and the ability to analyse it, so it's important that we ask questions around what resources exist for people to look at that data, who's conducting the evaluation and what are the skill sets available to you: do you do it internally, do you have the expertise internally?  That's often a good option because your people know the project well, but the limitation is that it introduces a certain amount of bias into the results as well because they're invested in the program.  So you might employ someone externally; them not having links to the program may be a negative, but if you don't have the internal expertise to conduct evaluations, then someone external might help and, again, the Industry List will have people on there that can help.

You need to know how much time you have.  There's no use in collecting an enormously large amount of data but not actually having the time to analyse that data, and there are ethical issues with that as well, so collecting data from people that can't be used is not ideal if you can avoid it.  Is it a one-off or an ongoing process?  I think this is a question that comes up particularly for groups like playgroups that go on and don't necessarily have a start and stop point, so how do you get good figures on that?  And do you need it all?  So again, going back to the idea of how much of the data you need, keep that in mind at the outset in terms of your timelines and the resources available to you, and be careful to only bite off what you can chew.

These are the evaluation resources that I mentioned before.  There's an enormous amount of information within these five evaluation resources, and they are particularly tailored for family support service providers and cover off on the evaluation journey, so they're worth having a look at.  They include all the sorts of things that I've discussed today, but also cover evaluation more broadly.  So they may be useful to people, and we're hoping that we've written them in a way that is easy to use and understand for people who don't have much time to get heavily into the literature around these things.

So lastly, that's the end of my presentation, I just wanted to give you the contacts for the website for the Child Family Community Australia information exchange.  We have a broad range of information on there which would be useful for people, particularly those who are looking at the research and the evidence behind their programs, and there's 30 years of publications and so forth on the website, so it's a good place to start looking for that research base.  The Expert Panel website, that's the address to find information specifically on the Expert Panel.  And lastly is the email address if you have any queries around either the Industry List or the 30 per cent requirement for Communities for Children as well.  Please send those through and all of us will keep an eye on that through the day and we can check out your questions and queries and get back to you.

WEBINAR CONCLUDED

IMPORTANT INFORMATION - PLEASE READ

The transcript is provided for information purposes only and is provided on the basis that all persons accessing the transcript undertake responsibility for assessing the relevance and accuracy of its content. Before using the material contained in the transcript, the permission of the relevant presenter should be obtained.

The Commonwealth of Australia, represented by the Australian Institute of Family Studies (AIFS), is not responsible for, and makes no representations in relation to, the accuracy of this transcript. AIFS does not accept any liability to any person for the content (or the use of such content) included in the transcript. The transcript may include or summarise views, standards or recommendations of third parties. The inclusion of such material is not an endorsement by AIFS of that material; nor does it indicate a commitment by AIFS to any particular course of action.

Slide outline

  1. Evidence, evidence-based programs and evaluation
    • Elly Robinson
    • CFCA Webinar 22 April 2015
    • The views expressed in this presentation are those of the presenter and may not necessarily reflect those of the Australian Institute of Family Studies or the Australian Government.
  2. What is evidence?
    • Published research?
    • Locally gathered data?
    • Relevant theory?
    • Expert Opinion?
    • Practitioner wisdom (including own)?
    • Wikipedia?
    • Google results?
  3. What is evidence? - Shonkoff
    • Established knowledge
      • Defined by scientific community against strict criteria
      • What we “know”
    • Reasonable hypotheses
      • Assertions about what we don’t yet know
      • Based on established knowledge
    • Unwarranted assertions
      • Generated by anyone
      • May be distortions of established knowledge
      • Should not guide responsible practice
  4. Evidence in practice
    • Evidence based?
    • Evidence informed?
    • Evidence influenced?
    • Evidence aware?
  5. Evidence in practice
    • No one definition – largely depends how you define evidence
      • “Evidence-based practice” first used in medicine in early-1990s.
      • “Evidence-informed” is seen as better conveying that decisions are guided or informed by evidence rather than based solely upon it - but this comes from a narrow definition of evidence.
    • What evidence takes precedence?
      • Depends what you are trying to do!
      • But - move away from “we know it works because we know it works”.
  6. Evidence-based practice
    • One view - overlap of best (research) evidence, practitioner expertise and client values/expectations (Shlonsky & Ballan, 2011)
    • “Optimising the combination”: Strong evidence = give greater weight
    • Weak evidence & strong client preferences = act accordingly
    • Figure: Evidence-based practice:
      • Client clinical state and circumstances
      • Client preferences and actions
      • Current best evidence, e.g. programs
  7. “Hierarchies” of evidence
    • Research evidence
      • Common understanding that different types of evidence allow stronger or weaker conclusions to be drawn.
    • Randomised-controlled trials
      • Randomised – allocation to a “control” or “experimental” group
      • If no bias in allocation, more reasonable to conclude that program made a difference
  8. “Hierarchies” of evidence - RCTs
    • Problems with RCTs in service environments (also relevant to other designs)
      • Drop-out of participants, especially if at different rates between the two groups
      • Unexpected differences between groups
      • Ethical issues re: withholding treatment from control group
  9. “Hierarchies” of evidence – quasi-experiments
    • Use of naturally occurring comparison groups
      • Participants on a waiting list
      • Offer a different intervention, e.g. a briefer version of the program.
    • Greater benefits for those in the program may suggest it is effective – but because participants were not randomised, you can’t say the program caused the changes.
  10. “Hierarchies” of evidence – pre- and post-test
    • No comparison or control group
    • Measures before and after program changes
    • No real conclusions can be drawn – changes might have happened anyway…
    • But…is it better than nothing?
  11. Problems with hierarchies
    • Neglect too many important issues, e.g. may have been an RCT but poorly implemented
    • Underrate the value of good observational studies
    • Can lead to loss of useful evidence
    • What counts as “good enough” evidence depends on how we want to use it
    • Different uses for different designs
  12. Practitioner use of research evidence
    • Most likely to happen if it:
      • Fits with knowledge they have gained via hands-on experience
      • Is easy to implement and use – practitioners are often overloaded, with no time to read and assimilate research.
      • Is adaptable to suit agency or practitioner needs, i.e. is contextualised to practice.
    • “It is…no use producing world-class research if that research is not accessible for busy professionals…” (Sharples, 2013)
  13. Organisational factors
    • What influences use of research?
      • Leadership attitudes
      • Staff resources
      • Organisational stress or financial pressure
      • Management types
      • Tolerance for change
      • Culture of experimentation and risk taking
  14. Developing a culture of evaluation/research
    • Organisations with a culture of evaluation and research:
      • deliberately seek evidence in order to better design and deliver programs.
      • can deliver evidence to stakeholders that programs are achieving desired results, enable robust decision-making and support professional development.
    • https://www3.aifs.gov.au/cfca/publications/developing-culture-evaluation-and-research
  15. Communities for Children Facilitating Partners
    • 30% requirement for evidence-based programs
  16. 1. CFCA online profiles of evidence-based programs
    • A number of clearinghouses/databases were searched for evidence-based programs relevant to CfC.
      • Clearinghouses/databases use different definitions of “evidence” – may privilege very “scientific” definitions.
      • Relevance to CfC varies – more parenting interventions, fewer community interventions.
      • May meet criteria but don’t appear in clearinghouses/databases – we only know what we know.
    • https://apps.aifs.gov.au/cfca/guidebook/programs
  17. 1. CFCA online profiles of evidence-based programs
    • Criteria for inclusion
      • Objectives of program are in line with CfC FP model.
      • Targets children aged 0-12 years and their families.
      • Documented information on program is available:
        • Aims, objectives and theoretical basis;
        • Program logic or similar;
        • Target group for program; and
        • Activities of the program and why they are important.
  18. 1. CFCA online profiles of evidence-based programs
    • Criteria for inclusion
      • Training manual or documentation that allows for replication in Australia is available.
      • A good quality evaluation with at least 20 participants.
      • Evaluation shows positive outcomes (and no negative effects reported).
        • “Is there a reasonable assumption that the program itself changed people’s knowledge, attitudes or behaviours?”
      • Program has been replicated (or has potential for replication).
  19. Choosing the right program
    • Three factors to consider:
      • Program quality (covered by the criteria)
      • Program match
      • Organisational resources
  20. Choosing the right program
    • Program match
      • How well do the goals and objectives of the program reflect those of the organisation?
      • How well do the program’s goals match those of participants?
        • E.g. a short prevention program doesn’t suit a family with many risk factors
      • How long is the program?
        • Likelihood of participants committing to full program
        • A longer duration is more likely to produce lasting behaviour change.
  21. Choosing the right program
    • Program match
      • Has the program been effective with a similar target group?
        • May be designed for a particular group or “general” audiences.
      • Rare to find a program that suits all groups/situations
        • Carefully read program materials or talk to program developers re: adaptation. Some developers may be willing to help.
      • Does program complement other programs in organisation and community in general?
  22. Choosing the right program
    • Organisational resources
      • Does the organisation delivering the program have:
        • Expertise
        • Staff
        • Financial support
        • Time
      • If human and financial resources are inadequate, chances of success are limited.
  23. Choosing the right program
    • Organisational resources
      • Which programs have the best chance of being continued in the future?
      • Does the program have a good chance of being integrated into the “base programming” of the organisation?
      • Can organisations collaborate with others in the community to deliver and evaluate the program?
  24. Fidelity
    • Program fidelity = staying true to the original program design
    • Higher the fidelity, greater the likelihood of positive outcomes
    • Fidelity depends on
      • Well trained professionals who receive accredited training
      • Supportive infrastructure
      • Adequate resources
      • Managerial support
      • Regular process evaluations
  25. Adaptation
    • Practitioners will often change or adapt programs, intentionally or not
    • Unintentional – program “drift” – use fidelity tools/process evaluation
    • Intentional changes to content, duration, or delivery style may diminish the program’s effects – seek advice from the developer
    • http://fyi.uwex.edu/whatworkswisconsin/research-to-pratice-briefs/
  26. 2. Assessment process – 30% requirement
    • In recognition of the programs already being delivered (or proposed) that meet five “rigorous enough” criteria
    • Programs MUST be submitted by a Facilitating Partner after conversation with their Grant Agreement Manager
    • Community Partner details can be included if applicable
    • https://www3.aifs.gov.au/cfca/eform/submit/contact-fac-expert-panel
  27. 2. Assessment of “other” programs
    • Criteria for inclusion - must be documented
      • A theoretical and/or research background to the program.
      • A program logic (or theory of change, or logic model).
      • Activities in the program which generally match good practice in meeting the needs of the target group.
      • An evaluation (with at least 20 participants) has established that the program has positive benefits for the target group.
      • Staff members are qualified and/or trained to run the program.
    • Also must have logical and coherent links between these elements.
  28. Assessment process – 30% requirement
    • CFCA researchers will assess the program and provide detailed feedback (4-6 weeks)
    • Four possible outcomes:
      • Eligible to be included in the 30% requirement
      • Eligible to be included in the 30% requirement AND evidence based program profiles (must meet the additional criteria)
      • Eligible for Provisional Approval
      • The program cannot at this stage be included in the 30% requirement
  29. Assessment process – 30% requirement
    • CFCA researchers will also suggest how the “Industry List” may help with the feedback
      • The Industry List is a version of the Expert Panel from which sector organisations can purchase assistance
      • The Expert Panel is composed of organisations/sole traders who are highly competent in:
        • Program planning and/or
        • Program implementation and/or
        • Program evaluation/outcomes measurement
      • You do not have to use the Industry List, but it may help.
  30. 3. Provisional assessment
    • In recognition of when more time is needed to meet criteria.
    • Must be able to:
      • articulate the research and/or theoretical background of the program;
      • show that some form of evaluation has been conducted in the past (may not meet the more rigorous evaluation criteria yet); and
      • outline your plan for meeting the criteria in the online guide by 30 June 2016 – submit by 30 June 2015.
  31. Program logic
    • Or theory of change…
    • Or logic model…
    • Or program theory…
    • Or intervention logic…
  32. Program logic
    • Visually represents what is going on in a program
    • Two important things
      • Relationships – logical links between each stage of program logic model (if…then…)
      • Intention – a roadmap for the program
  33. Program logic
    • Families and Children Expert Panel – Program Logic
      • Inputs
        • AIFS manager and researchers
        • DSS managers and staff
        • Expert Panel
        • Steering Committee
        • Communications staff
        • Web, library & publications staff
        • Other external partnerships
      • Outputs – Activities
        • Support and Resources: Expert Panel, Steering Committee, Training and support tools and resources, Inquiry desk
        • Expert Panel: Establishment, Work Allocation, Guidance & support
        • Dissemination: Resource Sheets, Practice Guides, Webinars, Podcasts, Other events
        • Sector programs and evaluations
      • Outputs – Participation (stakeholder groups)
        • High focus: FaC organisations, DSS (FaC policy)
        • Regular focus: Expert Panel, Steering Committee, AIFS staff
        • Low focus: e.g. media, advocacy groups, non-FaC practices
      • Outcomes (short-term, medium-term and long-term)
        • Increased understanding of evidence, measuring outcomes and evaluation
        • CFCA and panel offer useful support to build sector capacity
        • Increased engagement in activities supporting evidence-based programs and practices
        • Increased implementation of evidence-based programs
        • Increased capacity to plan, implement and evaluate programs and practices
        • Increased use of prevention and early intervention activities
        • Increased use of outcomes reporting and evaluations
        • CFCA is increasingly used and considered a primary and useful source of information and dissemination
        • Increased use of evidence-based programs and practices designed to improve outcomes for children and families
      • Assumptions: e.g. evidence-based practice remains valued; sufficient applications are received from suitable organisations for the Panel; the Panel is perceived by the sector as good quality and useful; the Steering Committee provides timely and useful advice.
      • External factors: e.g. policy and funding environment; pool of existing services with relevant expertise for the Panel; organisational culture around evidence-based practice; individual beliefs around measuring outcomes; other factors influencing outcomes for disadvantaged families and children.
  34. Why evaluation?
    • Quality assurance – systematic checking of program/service meeting standards
    • Were participants satisfied? Did they benefit?
    • Did they benefit because of the program?
    • How can the program be improved/refined?
    • Justify requests for further support/funding
    • Was implementation true to program design?
  35. Broad types of evaluation
    • Needs assessment
      • Before an initiative, or when reviewing a program if needs have changed
    • Process/implementation
      • Is the program being implemented in the way it is intended?
      • Is it reaching the people for whom it is designed?
    • Outcome/impact evaluation
      • Does my program help my clients?
      • Have there been any unintended outcomes?
    • Economic evaluation
      • Benefits and costs of program
  36. Assessment of “other” programs
    • For the purposes of the evaluation requirement:
      • The type of evaluation undertaken is less important than the results of the evaluation.
      • The evaluation must show a positive outcome of the program, as indicated by:
        • pre- AND post-testing of participant outcomes; or
        • comparison of outcomes for those who did/didn’t receive the program; or
        • a comparison of two types of service interventions
      • https://www3.aifs.gov.au/cfca/frequently-asked-questions-communities-children-facilitating-partners#requirement
  37. Evaluation approaches
    • Collect new data from key informants (qualitative, quantitative)
    • Make use of internal administrative data including program data
    • Use of external administrative data
    • Use of existing representative datasets (e.g. Australian Early Development Census, Longitudinal Study of Australian Children)
    • Multiple data sources allow for greater validity and also greater depth
  38. Evaluation approaches
    • BUT need to balance quantity of data with quality and ability to analyse
      • What resources do you have?
      • Who is conducting the evaluation? What are the skill sets available to you?
      • How much time do you have?
      • Is this a one-off or an ongoing process?
      • DO YOU NEED IT ALL?
  39. Evaluation resources for family support
    • Evaluation and innovation
    • Evidence-based practice & service-based evaluation
    • Ethics in evaluation
    • Preparing for evaluation
    • Dissemination of findings
    • http://www.aifs.gov.au/cfca/topics/evaluation.php
  40. Contact us
    • Child Family Community Australia (CFCA) information exchange
      • www.aifs.gov.au/cfca
    • Expert Panel
      • https://www3.aifs.gov.au/cfca/expert-panel-project
    • Industry list and CfC queries

Presenter

Elly was formerly the manager of the Child Family Community Australia (CFCA) information exchange, which is the product of the amalgamation of three previous AIFS clearinghouses (National Child Protection Clearinghouse, Australian Family Relationships Clearinghouse, Community and Family Clearinghouse Australia). Elly has extensive experience in the writing, development and production of publications, learning materials and resources for practitioners, service providers, students and the broader community. She has authored a number of publications, submissions and journal articles and played a primary role in the authorship of two Specialist Practice Guides for the Department of Human Services (VIC). Elly completed her Master of Public Health at the University of Melbourne, and her research interests include young people and their families, mental health, global health and the impact/use of digital communications in families and relationships.
