Implementing programs and practices in child and family services: The ‘why’ and ‘how’ of good implementation practice

Robyn Mildon and Jessica Hateley-Browne
26 June 2019

This webinar outlined key concepts and practical steps for implementing evidence-informed programs and practices in child and family services.

Audio transcript: Implementing programs and practices in child and family services

Audio transcript (edited)

MS MOORE: Good afternoon everyone, and welcome to today's webinar, Implementing Programs and Practices in Child and Family Services: The why and how of good implementation practice. My name is Sharnee Moore, and I'm a research fellow here at the Australian Institute of Family Studies. I would like to start by acknowledging the traditional owners of the lands on which we are meeting. In Melbourne, the traditional custodians are the Wurundjeri people of the Kulin Nation. I pay my respects to their elders, past and present, and to the elders from other communities who may be participating today.

Today's webinar will outline key concepts and practical steps for implementing evidence informed programs and practices in child and family services. We are privileged to have two experts on this topic in Associate Professor Robyn Mildon and Dr Jessica Hateley-Browne. Robyn Mildon is an internationally recognised figure in the field of evidence synthesis and translation, implementation science and evaluation of evidence in practice and policy. She is the Executive Director of the Centre for Evidence and Implementation, an Honorary Associate Professor with the University of Melbourne and the inaugural co-chair of the Knowledge Translation and Implementation Group with the Campbell Collaboration. Robyn will be providing a brief introduction before Jessica presents the webinar today.

Jessica Hateley-Browne is a researcher with a background in health psychology. She has more than 10 years of experience in applied behavioural research, particularly in the health services, population health and child and family welfare fields. She has held senior roles in academic and applied research centres and in government. Jessica has worked on and led large scale trials and evaluation projects in various settings, as well as national population surveys, high profile implementation projects supporting the embedding of evidence informed practice in real world settings and the development of program logics and outcomes frameworks for community based initiatives.

We encourage you to send your questions through via the chat box during the webinar for Jessica to answer during question time. We've also set up a forum on our website where you can discuss the ideas and issues raised, submit additional questions for our presenters and access related resources. Any questions submitted via the chat box that we don't get to today will be answered in the forum, so please do let us know if you don't want your question or first name published on our website. As always, our webinars are recorded and the slides are available in the handouts section of GoToWebinar. The audio, slides, transcript and recording of the webinar will be made available on our website and YouTube channel shortly. So it is with great pleasure that I would like to introduce Jessica Hateley-Browne. So please give her a very warm virtual welcome.

MS MILDON: Hi, everyone. I'm Robyn Mildon, I'm the Executive Director of the Centre for Evidence and Implementation. I'm just going to introduce this webinar, which one of our CEI team, Jessica Hateley-Browne, will be giving this afternoon. It's on a topic that is very, very close to our hearts at CEI, and an emerging area of science that's important. One of the reasons that it's important to talk about implementation science is that over the last three decades or so, there's been a growing urgency within the health and human service fields to address the research to practice gap. A quote that is often used when people are giving talks in this area is that it takes 17 years for 14 percent of research to make its way into practice.

The urgency is fuelled by the observation that research produces innovations that work, often referred to as evidence based or evidence informed practices. And yet, individuals in the community often do not receive these effective interventions in a timely manner, despite very positive efforts to make this happen. So closing the growing gap between knowledge generated from the best clinical research or research synthesis and the integration of this evidence in everyday settings is essentially the goal of implementation science. In the child psychology and psychiatry field, it's worth noting that over the past 50 years, treatment developers have generated approximately 500 interventions that fall broadly into 86 evidence based treatment approaches.

So 500 different interventions, but when somebody took a look at what all of these were, they essentially can be categorised into 86 distinct approaches. Yet, services for young people, children and families in the community are rarely able to incorporate these interventions. Instead, people may receive a range of interventions preferred by the practitioners or recommended by others, many of them without high quality research support. And by that I don't necessarily mean randomised controlled trials. I mean some level of assessment of what the intervention does, whether it makes a difference on outcomes and how it does that. So low rates of the adoption of evidence in services and practice is something that implementation science tries to address head on. Essentially, because we can't benefit from things that we don't receive.

So, Jess will take you through a wonderful presentation defining exactly what implementation science is and why it matters, and then walking through the guidelines we've been developing with the Australian Institute of Family Studies to hopefully inform the work that you do with child and family services, and enable you to draw on some of the science to help bed down some of the best practices and evidence informed practices that you might be using within your services. So have a terrific time, thank you.

DR HATELEY-BROWNE: Well, thanks so much, Robyn. That's a really good scene setting place for us to start, and I'm thrilled to be here today talking about the how and why of good implementation practice. Thanks to everyone for joining us. Specifically, what we want to spend our time on today is showing why a focus on implementation is important. I'm going to spend a little bit of time outlining some key concepts in implementation science, and then I'm going to move to describing a framework that provides a bit of a roadmap for how to plan for and use good implementation practices. Along the way, I'll give some practical examples of what these practices could look like, and I'm very happy to take questions at the end about how these could be applied in whatever particular context you have in mind in your own work.

So I just want to start by talking about something that might seem really simple, but is actually really complex. What is implementation? I commonly get asked this question, as we all do at CEI. Often people say something like, isn't it just common sense? And the answer is, well, it is and it isn't. It sure is common sense to say that people can't benefit from something they don't receive; however, high quality implementation of evidence informed programs and practices is really challenging. And I'm sure that we can all think of examples in our own work, and even in our own personal lives, where a new idea has failed not because it was a bad idea, but because it wasn't put into action well, and that's where implementation science can really help us. It's an emerging area of science, as Robyn described, and we can draw lessons from this science to help us learn what makes for good quality implementation, increasing the likelihood that the community can access and benefit from approaches that work and decreasing the likelihood that evidence informed approaches fall over too soon.

So, in summary, implementation is the active process of integrating evidence informed programs and practices in the real world. When I say evidence informed programs and practices, that's quite an intentional turn of phrase that I'm using. Oftentimes people will talk about evidence based programs or evidence based practice, and that's not incorrect in any way, but what we've seen over time is that that turn of phrase can be used to kind of imply that practitioners and workers are robotically responding to what's cropping up in the research evidence, but of course that's not at all the case. We want to integrate the best available research evidence with practitioner expertise and with client preferences and values, and that's where we get to this idea of evidence informed programs and also evidence informed practices out in the real world.

Implementation is a real focus on how a program or practice will be adopted and embedded into a service. We're really talking about how it gets done, rather than what is being done today. So why's implementation important? Robyn's already given us a really good context here and she referred to the evidence practice gap. What we know is that widespread sustained implementation of evidence informed programs and practices has been really difficult to achieve across the human services and there is an ongoing gap between what we know works and what's being done in practice. And what this means is there are programs and practices that are being developed and tested that have good evidence behind them but there is unrealised potential there because they are not being implemented well out in the world where people who need them can benefit from them, or at least this isn't being done consistently and without gaps. And there are a couple of common pitfalls that are contributing to this evidence practice gap.

One is, only focusing on the what, and ignoring the how. So what I mean by that is that we put a lot of focus on selecting and getting good evidence informed programs and practices into our services, a really good focus on the what, and that is absolutely essential. But sometimes, our focus has been so exclusively on the what, that we've ignored the how, how are we going to incorporate this into our service or into our organisation? And another common pitfall is failing to consider influencing factors and throughout this presentation, I'm going to refer to these as implementation enablers and implementation barriers and these influencing factors impact on the ability to initiate and sustain new initiatives.

So implementation, as I'm referring to it in this presentation, is like the bridge that closes the gap between research and practice. It's an active process and set of strategies that we can use to keep building that bridge and close that evidence practice gap. It's really clear that implementation matters for outcomes. We've seen over the last couple of decades that the child and family service sector increasingly has a focus on outcomes and rightly so. It's very important for us to keep in mind the benefits for the people that we are serving with our programs, and practices and initiatives.

An effective program or practice is absolutely necessary for good child and family outcomes, but it's not sufficient on its own. Children and families cannot benefit from something that they do not receive. So I quite like this little diagram down the bottom here which shows us that, in order to get to positive outcomes for children and families, we need to consider the 'what' which is the evidence informed program or practice, as well as the 'how' which is active and effective implementation. And as we are exploring this active and effective implementation, we need to be taking into account the barriers and enablers that are specific to our context that might help or hinder implementation. Figure out how to leverage the enablers, overcome the barriers and it's only then that we're going to see positive outcomes when we have high quality implementation of evidence informed programs or practices.

So I'm going to spend a bit of time now talking about the key concepts of implementation. This is almost like a bit of a glossary, a collection of tools to put in your tool kit when talking and communicating about implementation, and then I'll jump into some more specifics as the presentation moves along. The first concept that I want to introduce is implementation stages. Anyone who's had any contact or interaction with implementation science will have come across something of this nature before. There are so many different implementation frameworks out there, and a lot of different work has gone into understanding this in implementation science. It doesn't matter what framework you come from or what piece of implementation science work or research you pick up, it is an undisputed point of agreement that implementation happens in stages. It is a process that unfolds, and not a single event.

This is a common mistake that we can make when we have something nice, and new, and shiny, and it's evidence informed and we're really excited about getting it into our program or practice. We can often jump straight to stage three, that blue circle there, which is when we initiate practice and start working out how to refine it. But we often say with this kind of work that you pay now or you pay later. If you jump straight into stage three without doing some of the work that's embedded in stage 1 and stage 2 – and I'll jump into the specifics of what that looks like in a little while – it can often fall over, because we haven't taken the time to understand the needs of the target population, we haven't taken the time to understand what the options are in terms of evidence informed programs and practices that might meet these needs, and we haven't done good planning, good stakeholder engagement, good preparation or work on increasing organisational readiness.

Indeed, 50 percent of implementation activity really happens before you hit go, and that's what I like about looking at these stage diagrams. You can see we've got stages one and two, which are the engaging, planning and preparatory work before we even hit go on a new program or practice, and it's important to consider that different implementation activities are relevant in different stages. So the stages function as a bit of a roadmap for how we might tackle implementation of a new program or practice in our organisations or settings. It's also important to acknowledge that the process isn't always linear. It's always nice to draw circles with arrows that connect them, but it's usually not this clean, and indeed you can move back and forth between the stages.

So an example of that might be: you've done all of your stakeholder engagement work, you've got a really good plan, you've done some good training for your practitioners in the new program that they're going to be implementing, and you get through to stage three, where you're initiating practice and refining your approach as you go along. And then you have a big wave of staff turnover because of redundancies or other factors, and so you're back to a point where you don't have trained staff in the program that you're trying to deliver. You might need to move back to stage 2 in that instance and think about how to prepare a new set of staff, or you might even need to recruit new staff and pause the delivery of the program completely.

I'm going to move on to talk a little bit more about implementation enablers and barriers. As is probably self-explanatory, implementation enablers increase the likelihood that a program or practice will be successfully implemented, whereas an implementation barrier makes the implementation process more challenging. It's important to say here that it's really normal to experience implementation barriers. It's not necessarily a problem if you are coming across barriers – it's good that you can identify them – and just because there are barriers there doesn't mean you shouldn't try out the implementation, or persist with implementation.

Your implementation can be successful if you identify and overcome the barriers early in the process. It's also important to remember that even once implementation is underway, barriers and enablers should be continually monitored, as different influencing factors will emerge during different stages of implementation. So, for example, it might be that in the early stages of implementation – in stage 1, where you're just thinking about what new thing you need to put into place to meet the need that you've identified – a barrier is that you just don't know what the options are and what the evidence is saying.

Later on, during program or practice initiation, it might become evident that a barrier is that the practitioners who are delivering the program or practice don't have very high confidence in it. They have low self-efficacy, they're not really backing themselves with the delivery, or maybe they think the program or practice, now that they've had a chance to give it a go, isn't really of value. And identifying that barrier and working out how to address it is not really going to be possible until you get to that stage. So the continual monitoring of barriers and enablers should happen throughout, so that it gives you an opportunity not only to anticipate things that might be coming up in the future, but to be constantly responding to them as you move through the implementation stages.

The next thing I want to jump into is implementation strategies. These are techniques that improve the adoption, planning, initiation and sustainability of a program or practice. They really are the how-to components of the implementation process, and we can use them to overcome barriers. So how do we decide which implementation strategies to use? There's a lot of work out there that's identified a lot of different implementation strategies. So how might we know which ones are best to use at what time? Well, we can look to the existing evidence. What do we know has worked elsewhere to help drive a similar program or practice through? Sometimes when we purchase a licensed, manualised program from a developer or a purveyor, it will come with packaged-up implementation support. For example, the developer or the purveyor of the program might have prescribed training or prescribed supervision that practitioners need to undergo before they can deliver the program. Or strategies can also be selected based on identified barriers in your local context, and we'll look a little bit more later at how we can do that.

And the next thing I want to focus on and just draw your attention to is implementation leadership. Implementation leadership is the level of support that leaders provide to implementation efforts, and this leadership can come from people with formal organisational authority – executive leaders, middle management and team leaders – or also from champions with informal influence. This might be practitioners within a team who are just super enthusiastic about this new approach, who have some informal social influence and can engender a lot of enthusiasm for the approach and for the change process. Ideally the implementation leadership would come from both of these places, and really investing in and building the implementation leadership ahead of time, before the practice or program is initiated, can be really, really useful. In fact, the benefits of having identified implementation leaders and champions who are well prepared are really undisputed and should not be underestimated in your context.

And the final concept I want to introduce is indicators of high quality implementation. You often hear these referred to – and I will also refer to them – as implementation outcomes. These are indicators that allow you to see how well your implementation process is going. These implementation outcomes are the effects of using your implementation strategies, and they indicate the quality of your implementation. So it's really useful to understand what some of these are – and I'll give some examples in a little while – figure out how you're going to monitor them, and keep your finger on the pulse of the quality of your implementation so that you can respond to that information and improve it as you go along.

Okay, so let's get down to some specifics. I'm going to introduce an implementation framework that is presented in a lot more detail in the implementation guide that we have written and developed in collaboration with AIFS, which is going to be available with this webinar. So this is a bit of an introduction to this implementation framework, which should provide a bit of a roadmap for your implementation efforts. As I mentioned before, there are lots of different implementation frameworks out there, you know, on the market, if you start Googling around, and they've all got strengths and weaknesses, as with any theoretical framework or practice framework. What we've done here is try to take what is common amongst a bunch of the implementation frameworks that are used really commonly in the child and family services sector.

So this is a bit of a visual representation of what the framework looks like. You'll see, once again, we are talking about four implementation stages. Stage 1: engage and explore. This is where we're defining what needs to change and for whom, figuring out what new program or practice we're going to implement, setting up a team of people who might drive the implementation forward, and starting to think about organisational readiness and other early barriers or enablers to implementation. Stage 2 is about planning and preparing. This is where we choose the implementation strategies, nut out an implementation plan ahead of time, decide how to monitor the quality of the implementation and start to build readiness to use the program or practice.

Stage three is when we hit go, when we're going to start using the new program or practice and, importantly, introduce continuous quality improvement cycles to monitor and improve the implementation, and also to highlight any adaptations or tailoring that might be necessary to the program or practice that you have introduced, if that is permissible. And finally, we move into stage four: sustain and scale. This is where we're sustaining a program or practice over time and it's embedded as business as usual, where it no longer feels and seems like something new or additional that the organisation is doing. It's also the phase where the organisation might be considering scaling up or scaling out the program or practice, perhaps to new teams, new sites or entirely new organisations.

So I'm going to walk through this in a little bit more detail. And there is a lot more detail about the different activities that constitute each of these stages in the AIFS implementation guide that's going to be available. So in stage 1, if you're really starting at the very beginning, you want to define what needs to change and for whom. So the questions you might be asking yourself are: is there a need or a gap in our service, and who is affected by this need or gap? When doing this you can undertake a formal needs assessment process or a service mapping exercise, even for your whole region, to try to avoid duplication of effort. And in doing this, you want to identify what the gaps are and for whom, and decide what outcomes you would like should you introduce a new program or practice to fill that gap. And so once you're clear on what the needs are, who's being affected – so what the target population is – and what outcomes you want to achieve, then it's time to move on to thinking about, okay, what program or practice is going to meet these needs, is suitable for this target population and is going to deliver the outcomes that we want, based on the evidence.

So the suggestion here is to look for existing evidence informed programs or practices that could fill your gap, rather than necessarily trying to reinvent the wheel from the beginning. Look for ones that already have a good evidence base behind them, as appropriate to your target population and setting of course. In some settings, though, we know that there is often an absence of, for example, randomised controlled trial evidence for programs and practices that are appropriate and suggested for use with Aboriginal and Torres Strait Islander communities. In that instance, the best decision that you can make is to use Aboriginal ways of knowing and cultural knowledge to inform your decision. So we're not always just talking about picking out programs that have randomised controlled trial evidence. There are lots of different types of evidence that you might be looking to, based on your needs and your context.

There are a lot of different menus or repositories of programs or practices out there that you can use to start exploring what might be effective, what the strength of evidence is behind it and also what target population it might be useful for. So I recommend taking a really thoughtful and critical look at what's out there as you're trying to identify how to address the need or gap that you've found. There's quite a lot of evidence to suggest that setting up an implementation team – a group of people, a team of champions, who are going to be responsible for driving the implementation locally – can be really effective. It is sometimes a resourcing issue and sometimes a time issue, so this may or may not be suitable for your program or practice in your particular context, but we certainly recommend it as a strategy to consider right at this early stage as a way to designate implementation leadership. And we suggest at this stage that you start considering likely enablers and barriers. A particular concept that is important in this early stage is organisational readiness to implement.

This helps you focus on the ways in which your organisation is ready, and not yet ready, to implement the program or practice that you want to use. And just because you determine there are certain ways in which the organisation is not yet ready to implement, that doesn't mean you should drop the idea entirely. This is when your implementation strategies come in. So if we know that the organisation is not quite ready to implement, in the sense that perhaps practitioner motivation for change is pretty low, then we might consider what are some of the strategies we can use to try to build motivation amongst practitioners for change and for engaging in the change effort. Little bit of animation there that didn't quite come in.

This is an example tool that can help organisations go through the readiness thinking process. This tool is freely available – the place to download it is linked in the implementation guide resource. And you can see there are three components to thinking about readiness, as suggested by this tool. The first is motivation: the degree to which, as an organisation, you want this program or practice to happen. It's often the case that if people from different parts of the organisation, or different levels in the organisational hierarchy, go through this process, motivation is different at different levels. It's very common to see high motivation for change at the leadership or executive level – trying to stay ahead of the funding curve, trying to innovate. But at the practice level, where workloads are really high, the work is really complex and people are feeling pretty stretched, to be told, okay, now we're going to change this, can feel like just a little bit too much of a stretch. So we can often see differences in motivation in different parts of the organisation.

The other component is program or practice-specific capacity: figuring out what is needed to make this particular program or practice happen. And then there's general capacity: the organisation's overall functioning. So this can be a useful tool to guide you through thinking about: what's an existing challenge for you in your organisation? What's an existing strength for you in your organisation? And on which points are you actually unsure, because perhaps it's too early in the implementation process to know, or perhaps you just need to go and talk to some other people and get some more information?

Stage 2: Plan and Prepare. Sorry about that, I'll just jump back in there. Choosing the implementation strategies. So this is about deciding which implementation strategies are best to drive the implementation process at each stage. I mentioned before there's been a lot of work identifying, articulating and defining the different implementation strategies that we've seen and observed in implementation efforts in the field. And there's a particular project called the ERIC project, which I have referenced on this slide, that has been very focused on identifying, defining and operationalising implementation strategies. I've got a couple up on the screen here just as some key examples that could be useful, to make this all a little bit more concrete. So, for example, you might have identified a really good program or practice that meets the needs and is appropriate for the target population that you're working with, but you know the licensing costs money, or you know it's going to cost money to bring in an implementation intermediary, or you know it's going to cost money to introduce practice supervision or practice coaching. So maybe simply accessing new funding is a key implementation strategy that you need to pursue, usually right up front in Stage 1 or early Stage 2.

Jumping down a little, it might be that you need to talk with stakeholders to determine if the chosen problem is important to them and whether they think the program or practice that you're thinking of implementing is appropriate. This is called 'conducting local consensus discussions'. It could be done in workshops where you bring your practitioners together, and maybe even stakeholders from other organisations in your region, to wrestle with what's the highest priority, and whether your early ideas are going to be appropriate and acceptable to the key stakeholders whom you are going to need in order to implement the program well.

It might be that you need to conduct ongoing training – planning for and conducting ongoing training in the program or practice to make sure that you've got the workforce that you need, with the skills that they need, to deliver the program or practice with high quality. Or it might be, jumping a little further down the table, that you need to consider providing follow-on technical support. I'm sure we've all been in this situation where we've gone off to training, or been sent off to training, and been really energised by the new things we've been learning, and you think, 'Yep, great. I can't wait to put that into place in my practice and really put that into action.' But you come back to your normal workplace in your normal team, nobody else has done the training you've done, and it's really hard to modify your practice. You get back into your normal way of doing things, you get busy, and we know from the literature that training alone is usually not enough to change practice behaviour. Most of us need some follow-on technical support, in the form of, for example, ongoing coaching or clinical supervision, to help apply the new skills and knowledge in practice. So I hope that gives you a little bit of a flavour of the kinds of really concrete activities that I'm referring to when I say 'implementation strategy'.

Other key activities that you should be thinking about in Stage 2 are developing an implementation plan that identifies how to put your implementation strategies into action. So this includes specifically what needs to be done, when and where it needs to be done, how it's going to happen, and who's responsible for making it happen by when. So really getting this down in pen and paper. This can be an iterative document, and it can also be used to track actions and progress over time. We've provided a template in the implementation guide resource, and if you look on the internet there are a lot of different templates available that can be tailored to your needs and your context. And this can feel like paper pushing. It can feel like you're not doing real work, but really spending the time nutting out the plan and the road map can be super useful when things get a little bit hard, get a little bit tricky, when people start to diverge in their thinking around what's a priority and what kind of work they're prepared to do. It can be really helpful to come back to the pre-agreed strategies and actions in the implementation plan to help you nut through some of the tricky spots.

And this is something that often gets missed, the next activity: 'deciding how to monitor implementation quality'. These are the implementation outcomes that I was talking about before. Think about how to identify the best indicators of implementation quality. So really the question here is: how will you know the program or practice is being implemented well and as intended? Plan how you will measure and monitor this during the implementation process, and also have a plan for how you are going to check on that information so that you know if and how you need to improve the implementation over time. Here are a couple of examples of implementation outcomes, or indicators of implementation quality. One is 'acceptability'. So this is how acceptable, agreeable or satisfactory the program or practice is perceived to be by stakeholders, the stakeholders usually being the people who are implementing the program itself.

Also 'feasibility'. To what extent can this program or practice be successfully used or carried out in your setting? It might be a great program with really good outcomes and good evidence of effectiveness in a context different to yours, but maybe it's not feasible in your context for various reasons. Maybe it's not culturally appropriate, maybe it requires too many adaptations to the way you employ and train and contract staff. There can be lots of things that get in the way of feasibility, particularly when we're thinking about programs or practices that might have been developed, you know, internationally and haven't been tested in a local context.

Also think about 'appropriateness'. So the perceived fit, relevance or compatibility of the program or practice in your setting. And 'fidelity' is really important. This is the extent to which a program or practice is being implemented as intended. So an example of this might be a program that is supposed to have six group-based sessions, but what you discover as you are monitoring is that most practitioners are kind of skipping over session 5 because it's not quite a good fit for your target population. So that shows you not only that there's a particular structural element of your program that's not being implemented as intended, and it could be a core active ingredient that you need to get the outcomes, but also that there's something in the program or practice that practitioners think isn't appropriate for the population that you're working with, and you wouldn't know that unless you were monitoring these kinds of things along the way. And then there's also 'reach'. So the degree to which a program or practice is integrated into an agency or service provider setting. And this includes the extent to which it's actually reaching the target population.

And then the final activity in Stage 2: Plan and Prepare is to build your readiness to implement – ensuring your organisation will be ready to start using the program or practice. So this is about using some implementation strategies that are appropriate to this stage, such as training, acquiring resources and adapting existing practices, to make sure you're ready to hit go when the time comes.

At this point, once you've built readiness and all of those other bits and pieces are in place, you're ready to start using the program or practice. And what we need to make sure we do here for good implementation is to continually monitor those indicators of implementation quality and use continuous quality improvement cycles, using the information to guide improvements or adaptations to your implementation. So an example of this might be that you are wanting to use the program specifically with single parents. That's the target population. And so what you would want to do is monitor over time who is coming into the program. You might look after a month or so and determine that actually the program is only being used 50 per cent of the time with single parents, and other times with other types of family structures that the program doesn't have evidence for, or hasn't been designed for.

So in that instance you might then use that data to raise a number of questions that you need to explore. Are the right people being referred into the program, and if not, why not? Is it because the referrers don't understand the needs and the target population of the program? Or is it that the referrers understand and they're referring the right people, but something at the organisation level is happening – perhaps you're not reaching your targets, so at the intake level the inclusion criteria have started to creep. So this gives you data to identify where there might be challenges in the implementation, where it might be happening a little bit differently to what you expected, and it gives you a few ideas about which threads to pull on and which questions to ask to try to improve the implementation and get it as close to 'as intended' as possible.

And then we move into 'Sustain and Scale'. This is really where the program or practice gets embedded as business as usual – where you would want to be devoting effort to improving and retaining your staff's competency levels, and ensuring your program or practice doesn't feel like a new thing or a stretch anymore. And then consider the scale-up of the program or practice. So if the first implementation attempts are stable, you might want to start introducing the program or practice to new teams, sites or contexts, and this begins a new implementation process and takes you back to Stage 1 again, where you might want to revisit some of that early work to make sure that the scale-up of the program or practice is indeed appropriate.

In the Guide – this is pulled directly from the Guide and it's a bit of a flowchart to help you decide where to start in the implementation stages. You're not necessarily at zero in some of your considerations of introducing programs or practices. In some instances, you might already have chosen a program or practice, and you want to know what are the next steps. So this little decision tool will help you identify which stage you are currently at, and if you're going to use the Guide, where in the Guide you might want to start to identify what your next steps are.

And I just want to close with a note of encouragement. As you are hearing me talk and as you are using the Guide, you might be thinking, 'Wow, good implementation practice is really a lot of work', and that's true, there's no getting around that. However, I just want to emphasise that we know this investment pays dividends later in the form of sustainable and effective service delivery that continues to provide evidence-informed, so effective, programs or practices to the people who need them. And it's very much a case of pay now or pay later – it's one of the most common phrases thrown around in implementation science.

Actively using the Guide that we've developed will help you turn knowledge of the concepts into practical skills. And by using this approach step by step, I think you'll find that you can build your confidence and organisational capacity to lead implementation efforts right through to completion. So for more implementation guidance, I'd refer you to the Implementation in Action resource that's up on the AIFS website, and that's going to be provided as a resource with this webinar. And then there's some additional references and further reading that you might be interested in as well. So I might pause there and take some questions.

MS MOORE: Thanks Jessica. That was a really interesting presentation. That was great. So we've now reached the question part of the webinar, and we've got time to take a few questions. We've received some great questions in the chat box already but please keep submitting your questions. And any questions that we don't get to now will be answered in the online forum, so please let us know again if you don't want us to publish your question or first name in that forum. So we've got a couple of fairly practical questions that have come through which is consistent with the very practical tips you talked us through.

So one of the questions is around accessing evidence. And I think we hear this a lot, that in a practice context, unless you've got access to an academic on staff or some kind of journal access, it can be really difficult to access evidence. So do you have any tips for people who maybe don't have those resources available as to where they could start looking for evidence?

DR HATELEY-BROWNE: Yeah, that is a really common question as you say, Sharnee, and it's a key frustration, I know, for practitioners who really want and really care about evidence-informed practice but just struggle with access. The first thing I would say is that there are a number of repositories or menus of evidence-informed programs and practices, and I mentioned that briefly in my presentation, that have already done some of this synthesis work for you. So you don't need to trawl through all of the individual journal articles that are out there on a particular program. The synthesis work is done, and they identify the target populations and the outcomes that can be achieved with a program or practice for those populations. One example is the California Evidence-Based Clearinghouse, a menu of evidence-informed programs and practices particularly for children, families and young people. That can be a good place to start. And there are a number of others out there as well. All of these menus and repositories have their limitations, but they're a good place to start because some of the synthesis work has already been done.

Another place that I would direct you to, particularly in regards to implementation science – if you want to know about the evidence about what works in implementation – is the Society for Implementation Research Collaboration, SIRC, S-I-R-C, and they are an excellent online resource. They have some resources freely available and they have others accessible behind a paywall for a small membership fee. You get 12 months' access, and that's a very good resource if you're interested in evidence-informed implementation practice.

And the third thing that I would say is that there is a growing understanding in academia about the importance of accessibility of new research, and many funding bodies now are requiring that new evidence gets published in some kind of open-access way. So Google Scholar is your friend. There are lots of journals now that are open access only, so you can download the full text of everything, and many academics have a philosophical commitment to making their work available, so wherever possible they will. So for example, they will put up draft PDFs of their articles on their LinkedIn page or that kind of thing, which usually show up in Google Scholar. And continue, whenever you have the opportunity, to lobby for open-access publication and repositories of evidence, because this research is often funded by taxpayer money, and so we want to make evidence as accessible as possible.

We then of course get to the point where we need to have the capacity to interpret and respond to the evidence, so having access isn't enough, and there's lots of work that I know AIFS and others are doing on capacity building for the sector around, 'Okay, so now I've got the access. What do I do with it in my context?' And that's really the implementation piece in terms of how to use the evidence in your setting.

MS MOORE: Thanks Jessica. And we would point people to the CFCA website as well, as a great resource for evidence synthesis. So we've got a question here about how to support new program implementation. And you did touch on barriers a little bit, but what about when those barriers include the status quo of an already overwhelming staff workload, and perhaps high staff turnover as well? Any tips for dealing with that particular set of barriers?

DR HATELEY-BROWNE: It's really hard, and it's really common. So in that readiness thinking tool that I briefly showed you, one of the key components – and you can see it's right up the top of the page – is motivation. The status quo is incredibly powerful, and motivation to try something new or give something else a go can be low for very, very good reasons. The workload is too high, the resources are already stretched, staff turnover as was mentioned. So there are no easy answers to this, and I don't have a silver bullet. The strongest encouragement that I can give you is to acknowledge that these things are real. Even if you have a different perspective from someone who is a frontline practitioner, acknowledge that this experience is real. Unless those barriers and those concerns are acknowledged, and practitioners can see that the implementation team or leadership is trying to understand them and trying to address them, you're probably not going to get their buy-in, but you need their buy-in if this is going to be effective. So I would say plenty, plenty, plenty of opportunities for people to air their concerns, and a lot of demonstration of what you're going to do about it.

So workload, such a common thing. 'Our workload is already too high, we can't do this.' A common pitfall is constantly adding things to the set of services or programs that you are delivering without thinking about what you should de-implement. We can't keep adding new things forever, so there might be a prioritisation exercise that needs to be worked through in the organisation to figure out, 'Okay, we're hearing from our practitioners that workload is too high. They're already too stretched. We need to make a commitment about what we're going to lift off if we're going to put this in.'

Another thing could be – and I've seen this just very recently as CEI has been providing implementation support to some agencies for the implementation of a new initiative – that when they put together an implementation team, they wanted some practitioners on the team, but the practitioners were saying, 'I couldn't possibly give up more time. If I spend time on this champions team, I'm going to lose out on seeing some clients, which means families and children suffer.' So they made the tricky decision that people who were on the implementation team would temporarily have a lower caseload. They wouldn't take any new cases, and the rest of the practitioners committed to taking that load for a little while, so that some people who were really interested in driving the implementation could focus on that, and people who just wanted to get on with seeing clients could do that. And that was a self-opt-in process, so everyone had the chance to play to their strengths. So I know that doesn't solve the problem, but there are a couple of little concrete strategies that I've seen used really well in small to medium sized agencies.

MS MOORE: Thanks for that. We've got a few more questions, and we've got a little bit more time so I think we'll work through them. So there's a question here about how we can best adapt implementation once a program has begun, and it cannot be paused or wound back. So how would you suggest people tackle that?

DR HATELEY-BROWNE: Yes. We would say, 'Good on you' – if you're in that situation, what that means is that you've been doing some continuous quality improvement monitoring. And how I answer this question depends on what it is you need to change. If what you've identified is, you know, the example that I gave before about potentially the right referrals not coming in that match the target population, then obviously that would be a different set of actions than if you identify that some of the content of your program or practice needs to be changed, or if you identify that practitioners need some booster training or some additional coaching in how to put this into practice. So it's tricky to give one answer to that, because it depends what the problem is that you've identified.

What I would suggest, without knowing what the problem is that's been identified, is to direct people to that list from the ERIC project of the various implementation strategies. You could use that as a way to get the juices flowing and get inspired about what some of the actions could be in response to some of the problems you've identified. And I would also direct people to a tool called the CFIR–ERIC matching tool, which is linked to in the Implementation in Action Guide. It's a nifty little spreadsheet that gives you an opportunity to input the barriers that you've identified, and it presents back to you some implementation strategies that have been rated by experts as most likely to work for those barriers. So again, they would need to be adapted to context. They would need to be operationalised for the particular thing that you're trying to address, but they're good inspiration for how to use implementation science to address the challenges that you're experiencing.

MS MOORE: Thank you. We'll have at least one more question. So we've had a question here about how all of this just seems like common sense – steps that people would already be taking when they're going through a change process. So could you explain – and you did touch on this a little bit at the outset of the presentation – what you think taking this particular kind of framework adds to the process, when it might just feel like another thing being overlaid?

DR HATELEY-BROWNE: Yeah, totally. Really valid question. I would say that if you're already doing this, and if you really are committed as an organisation to a staged approach to implementation using evidence-informed strategies to drive the implementation forward, good on you, and that is absolutely fantastic to hear. I think most of us are not in that situation. So if that is you, and you're looking at all of this saying, 'Yep. Tick, tick, tick. Already got all of that in place', then take this as affirmation that your approach to implementation is already really strong.

I think many of us have lived through implementation experiences that were reasonably frustrating, and we haven't known why they were frustrating. And what we hope with this framework is that it gives people a way to see, 'Okay, here might be a way – and it is a way – to map out our implementation. Here are some of the key activities that we need to make sure we do, as appropriate to our context.' And it also gives us a way to think, 'Okay, so this previous implementation experience that I had that was pretty frustrating – oh, I can see why now. Because we didn't do this thing. We didn't make a plan. We just jumped straight into doing it. We didn't really know who was responsible. We didn't have any clear implementation leadership, and so once the initial energy burned out, we were kind of stuck with no one leading the charge, for example.'

So I think if any of us have had that experience, this could be a way to identify why you've had that experience and how to improve it in the future. If you're looking at this saying, 'This is already in place for us' I say, good on you. You're probably the exception to the rule and I hope that you can be a really good resource to other people and other agencies around you who may not quite have the implementation infrastructure that you have.

MS MOORE: Great, thanks Jess. I think that's all we've got time for today, so thank you to our presenters and thank you to everyone for attending the webinar today. We've had some really great questions come through. So please follow the link on your screen to the website to continue that conversation, and remember that as you leave the webinar, a short survey will open up in a new window and we’d really love to receive your feedback on the webinar today. Thanks, and see you next time. 

WEBINAR CONCLUDED

IMPORTANT INFORMATION - PLEASE READ

The transcript is provided for information purposes only and is provided on the basis that all persons accessing the transcript undertake responsibility for assessing the relevance and accuracy of its content. Before using the material contained in the transcript, the permission of the relevant presenter should be obtained.

The Commonwealth of Australia, represented by the Australian Institute of Family Studies (AIFS), is not responsible for, and makes no representations in relation to, the accuracy of this transcript. AIFS does not accept any liability to any person for the content (or the use of such content) included in the transcript. The transcript may include or summarise views, standards or recommendations of third parties. The inclusion of such material is not an endorsement by AIFS of that material; nor does it indicate a commitment by AIFS to any particular course of action.

Slide outline: Implementing programs and practices in child and family services

Slide outline

1. Implementing programs and practices in child and family services
The ‘why’ and ‘how’ of good implementation practice

Assoc Prof Robyn Mildon and Dr Jessica Hateley-Browne
CFCA Webinar
26 June 2019 

2. Housekeeping

  • Send through your questions via the chat box at any time during the webinar. 
  • Let us know if you don’t want your question published on the online forum following the presentation. 
  • All our webinars are recorded. 
  • The slides are available in the handout section of GoToWebinar. 
  • The audio and transcript will be posted on our website and YouTube channel in the coming week.

3. Implementing programs and practices in child and family services
The ‘why’ and ‘how’ of good implementation practice

Assoc Prof Robyn Mildon
Dr Jessica Hateley-Browne

4. This webinar will: 

  • show why a focus on implementation is important
  • outline some key concepts in implementation science
  • describe a framework that provides a map for how to plan for and use good implementation practices
  • provide practical examples of these practices 

5. What is implementation?

6. Implementation is...

  • …the active process of integrating evidence-informed programs and practices in the real world 
  • …focused on ‘how’ a program or practice will be adopted and embedded into a service 

7. Why is implementation important?

8. There is an evidence-practice gap

  • Widespread sustained implementation has been difficult to achieve across human services
  • A gap between what we know works and what’s being done in practice
    • Unrealised potential
  • Common pitfalls:
    • only focusing on the ‘what’ and ignoring the ‘how’
    • failing to consider influencing factors (enablers and barriers) that impact ability to initiate and sustain new initiatives

9. Alt text: 

Image of a broken bridge. One side of the bridge is research, the other side is practice; the word 'implementation' connects the two sides.

The Royal’s Institute of Mental Health Research, University of Ottawa

10. Implementation matters for outcomes

  • An effective program or practice is necessary for good child and family outcomes
    • but not sufficient on its own
  • Children and families cannot benefit from something they don’t receive

Alt text: 

The relationships between implementation strategies, implementation outcomes and client outcomes

What? Evidence-informed program or practice + How? Active and effective implementation → Positive outcomes - Benefits for the people you serve

Arrow from How? Active and effective implementation to Barriers/enablers - Factors that hinder or help implementation. Arrow from Barriers/enablers to Positive outcomes.

11. What are the key concepts of implementation?

12. Implementation stages

  • Implementation happens in stages
  • It is a process that unfolds, not a single event
  • Different implementation activities are relevant in different stages
  • 50% of implementation activity happens before you hit ‘go’
  • Process isn’t always linear 

Alt text:

Stage 1: Engage and explore

Stage 2: Plan and prepare

Stage 3: Initiate and refine

Stage 4: Sustain and scale

13. Implementation enablers and barriers

  • Implementation enablers increase the likelihood a program or practice will be successfully implemented
  • Implementation barriers make the implementation process more challenging
  • It’s normal to experience barriers
  • Your implementation is more likely to be successful if you can identify and overcome barriers early in the process
  • You should continually monitor the enablers and barriers, as different influencing factors will emerge during different stages of implementation

14. Implementation strategies

  • Techniques that improve the adoption, planning, initiation and sustainability of a program or practice (Powell et al., 2019)
  • They are the ‘how to’ components of the implementation process and are used to overcome barriers. 
  • So how do you decide which implementation strategies to use?
    • Existing evidence
    • Prescribed by program developer/purveyor
    • Based on identified barriers

15. Implementation leadership 

  • Implementation leadership is the level of support leaders provide to implementation efforts.
  • Leadership can come from:
    • people with formal organisational authority e.g. executive leaders, middle management and team leaders
    • champions with informal influence
  • The benefits of having implementation leaders and champions are undisputed and should not be underestimated

16. Indicators of high-quality implementation

  • To see how well your implementation process is going, you need to monitor your ‘implementation outcomes’ 
  • Implementation outcomes are the effects of using your implementation strategies
  • They are indicators of the quality of implementation 

17. A useful implementation framework

18. Stages of implementation

Alt text:

Stage 1: Engage and explore

  • Define what needs to change and for whom
  • Select and adopt program or practice
  • Set up an implementation team
  • Assess readiness; consider barriers and enablers

Stage 2: Plan and prepare

  • Choose implementation strategies
  • Develop an implementation plan
  • Decide how to monitor implementation quality
  • Build readiness to use program or practice

Stage 3: Initiate and refine

  • Start using the program or practice
  • Continuously monitor and improve

Stage 4: Sustain and scale

  • Sustain the program or practice, embedding as ‘business as usual’
  • Scale-up the program or practice

19. Stage 1: Engage & explore

Define what needs to change and for whom

  • Is there a need or gap in your service? Who is affected by this need or gap? 
  • Identify what these gaps are, then decide what outcomes you’d like from a new program or practice

Select and adopt a program or practice

  • Look for existing evidence-informed programs and practices that could fill your gap
  • Try using a menu or repository

Set up an implementation team

  • A team of champions 
  • Responsible for driving the implementation  

Consider likely enablers and barriers, assess readiness

  • Identify early enablers and barriers (this process should be ongoing)
  • Focus on the ways in which your organisation is ready and unready to implement

20. Readiness thinking tool 

For each of the following 'motivations' and the 'degree to which we want the program or practice to happen' think whether for your program this is a:

  • Challenge
  • Strength
    or
  • Unsure.

Motivation - Degree to which we want the program to happen

Relative advantage - This program or practice seems better than what we are currently doing.

Compatibility - This program or practice fits with how we do things. 

Simplicity - This program or practice seems simple to use.

Ability to pilot - Degree to which this program or practice can be tested and experimented with.

Observability - Ability to see that this program or practice is leading to outcomes.

Priority - Importance of this program or practice compared to other things we do.

For each of the following 'Program or practice-specific Capacity' factors and 'What is needed to make this particular program or practice happen' think whether for your program this is a:

  • Challenge
  • Strength
    or
  • Unsure.

Program or practice-specific Capacity - What is needed to make this particular program or practice happen?

Program or practice-specific knowledge & skills - Sufficient abilities to do the program or practice.

Champion - A well-connected person who supports and models this program or practice.

Supportive climate - Necessary supports, processes, and resources to enable this program or practice.

Inter-organisational relationship - Relationships between organisations that support this program or practice.

Intra-organisational relationships - Relationships within organisation that support this program or practice.

For each of the following 'general capacity' factors and 'our overall functioning' think whether for your program this is a:

  • Challenge
  • Strength
    or
  • Unsure.

General Capacity - Our overall functioning

Culture - Norms and values of how we do things here.

Climate - The feeling of being part of this organisation.

Innovativeness - Openness to change in general.

Resource utilisation - Ability to acquire and allocate resources including time, money, effort, and technology.

Leadership - Effectiveness of our leaders.

Internal operations - Effectiveness at communication and teamwork.

Staff capacities - Having enough of the right people to get things done.

Process capacities - Ability to plan, implement, and evaluate.

21. Stage 2: Plan and prepare

Choose implementation strategies

  • Decide which implementation strategies are best to drive the implementation process at each stage

22. Example implementation strategies, adapted from the ERIC project

Access new funding - Access new or existing money to help implement the program or practice.

Change physical structure and equipment - Adapt physical structures and/or equipment (e.g. changing the layout of a room or adding equipment) to best accommodate the program or practice. 

Conduct local consensus discussions - Talk with stakeholders to determine if the chosen problem is important to them and whether they think the new program or practice is appropriate. 

Conduct ongoing training - Plan for and conduct ongoing training in the program or practice.

Develop and use tools and processes to monitor implementation quality - Develop tools and processes to monitor implementation quality and use them to create your continuous quality improvement cycle. 

Identify and prepare champions - Identify and prepare people who’ll dedicate themselves to driving an implementation. 

Inform local opinion leaders - Identify local opinion leaders or other influential people and inform them about the program or practice in the hope they will encourage others to adopt it.

Mandate change - Ask your leadership team to publicly declare that the new program or practice is a priority and they’re determined to implement it. 

Provide follow-on technical support - Provide practitioners with ongoing coaching or clinical supervision to help them apply new skills and knowledge in practice.

Promote adaptability - Identify how a program or practice can be tailored to meet local needs. Clarify which elements to maintain to preserve fidelity. 

Use an implementation advisor - Seek guidance and support from an implementation expert. 

23. Stage 2: Plan and prepare (part 2)

Choose implementation strategies

  • Decide which implementation strategies are best to drive the implementation process at each stage

Develop an implementation plan

  • Develop an implementation plan that identifies how to put your implementation strategies into action
  • Include what needs to be done; when and where it needs to happen; how it is to happen; and who is responsible
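One illustrative way to capture the plan elements listed above is as structured data. This is only a sketch: the field names mirror the what/when/where/how/who structure, and the example strategy entries are hypothetical, not drawn from the guide.

```python
# Illustrative sketch: recording implementation plan items as structured data.
# The fields mirror the elements above; the example entries are hypothetical.
from dataclasses import dataclass

@dataclass
class PlanItem:
    what: str   # the implementation strategy or task
    when: str   # timing or deadline
    where: str  # setting or site
    how: str    # how it will be carried out
    who: str    # person responsible

plan = [
    PlanItem(
        what="Conduct ongoing training in the new practice",
        when="Weeks 1-4",
        where="Main office, team meeting room",
        how="Fortnightly two-hour workshops run by the practice lead",
        who="Practice lead",
    ),
    PlanItem(
        what="Identify and prepare champions",
        when="Week 1",
        where="Each service team",
        how="Invite expressions of interest; brief champions on their role",
        who="Implementation team coordinator",
    ),
]

# A simple completeness check: every element is filled in for every item.
for item in plan:
    assert all(vars(item).values()), f"Incomplete plan item: {item.what}"
```

Recording the plan this way makes it easy to check that no item is missing a responsible person or a timeframe before implementation begins.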

Decide how to monitor implementation quality

  • Identify the best indicators of implementation quality
  • Plan how you will measure and monitor these during the implementation process

24. Implementation outcomes and suggestions for measurement 

Acceptability - The perception among stakeholders that a program or practice is agreeable, palatable or satisfactory

How to measure:

  • Qualitative interviews 
  • Quantitative survey tool such as the Acceptability of Intervention Measure (AIM)

Feasibility - The extent to which the program or practice can be successfully used or carried out within your setting

How to measure:

  • Qualitative interviews 
  • Quantitative survey tool such as the Feasibility of Intervention Measure (FIM)

Appropriateness  - The perceived fit, relevance or compatibility of a program or practice

How to measure:

  • Qualitative interviews 
  • Quantitative survey tool such as the Intervention Appropriateness Measure (IAM)

Fidelity - The extent to which a program or practice is being delivered as intended

How to measure:

  • Self-report practice checklists for practitioners
  • Client interviews or questionnaires

Reach - The degree to which a program or practice is integrated into an agency or service provider setting, including the degree to which it effectively reaches the target population.

How to measure:

  • Administrative data
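As a sketch of how the survey tools above are typically scored: the AIM, FIM and IAM (Weiner et al., 2017, in the reference list) are brief Likert-type measures, and assuming the published four-item, five-point agreement format, a score is commonly computed as the mean of the item responses. The function below is illustrative only, not an official scoring tool.

```python
# Sketch of scoring a brief implementation outcome measure such as the AIM,
# FIM or IAM (Weiner et al., 2017). Assumes the published format of four
# items each rated on a five-point agreement scale (1 = completely disagree,
# 5 = completely agree), scored as the mean of the item responses.

def score_measure(item_responses):
    """Return the mean item score, or None if any item is missing."""
    if len(item_responses) != 4:
        raise ValueError("Expected responses to all four items")
    if any(r is None for r in item_responses):
        return None  # flag missing data explicitly rather than guessing
    if not all(1 <= r <= 5 for r in item_responses):
        raise ValueError("Item responses must be on the 1-5 scale")
    return sum(item_responses) / len(item_responses)

# Hypothetical responses from one practitioner to the four AIM items:
aim_score = score_measure([4, 5, 4, 4])
print(aim_score)  # 4.25
```

Tracking these scores over time, and comparing them across teams or sites, gives the implementation team early warning of acceptability or feasibility problems.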

25. Stage 2: Plan and prepare (part 3)

Choose implementation strategies

  • Decide which implementation strategies are best to drive the implementation process at each stage

Develop an implementation plan

  • Develop an implementation plan that identifies how to put your implementation strategies into action
  • Include what needs to be done; when and where it needs to happen; how it is to happen; and who is responsible

Decide how to monitor implementation quality

  • Identify the best indicators of implementation quality
  • Plan how you will measure and monitor these during the implementation process

Build readiness to implement

  • Ensure your organisation will be ready to start using the program or practice. 
  • Use implementation strategies such as training, acquiring resources and adapting existing practices to make sure you’re ready to ‘hit go’ 

26. Stage 3: Initiate & refine

Start using the program or practice

  • The first practitioners start using the program or practice.

Continuously monitor and improve

  • Use continuous quality improvement cycles to monitor the quality of the implementation. Use this information to guide improvements or adaptations to your implementation.

27. Stage 4: Sustain & scale

Sustain the program or practice, embedding as ‘business as usual’

Improve and retain your staff’s competency levels. Ensure your program or practice is embedded into ‘business as usual’.

Scale-up the program or practice 

If the first implementation attempts are stable, introduce the program or practice to new teams, sites or contexts. This begins a new implementation process. 

28. Implementation stages – Deciding where to start tool

Do you have a program or practice selected for implementation?

No → You are in Stage 1. Start with ‘Define what needs to change, for whom’. 

Yes → Have you started to use the program or practice?

Yes → You are in Stage 3. Start with Stage 3, but check that the relevant activities in Stage 2 are complete.

No → Have you set up an implementation team?

No → You are in Stage 1. Start with ‘Set up an implementation team’ (if using). 

Yes → Have you assessed organisational readiness?

No → You are in Stage 1. Start with 'Consider likely enablers and barriers, and assess readiness'.

Yes → Have you developed an implementation plan with clear strategies to improve readiness for implementation?

No → You are in Stage 2. Start with ‘Choose implementations strategies’ and ‘Develop an implementation plan’.

Yes → Do you know how you will monitor implementation?

No → You are in Stage 2. Start with ‘Decide how to monitor implementation quality’.

Yes → Are you ready to start using the program or practice?

No → You are in Stage 2. Start with ‘Build readiness to use the program or practice’. 

Yes → You are in Stage 3. Start with 'Start using the program or practice'.
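Read top to bottom, the tool above is a sequence of yes/no checks. It can be sketched as a small function; the dictionary keys below are invented for this illustration and are not part of the tool itself.

```python
# Sketch of the 'deciding where to start' tool as a sequence of yes/no checks.
# Each key in `status` answers one question from the tool (True = yes).

def where_to_start(status):
    """Return (stage, starting activity) for a given implementation status."""
    if not status["program_selected"]:
        return (1, "Define what needs to change, for whom")
    if status["started_using"]:
        return (3, "Stage 3 (check relevant Stage 2 activities are complete)")
    if not status["team_set_up"]:
        return (1, "Set up an implementation team")
    if not status["readiness_assessed"]:
        return (1, "Consider likely enablers and barriers, and assess readiness")
    if not status["plan_developed"]:
        return (2, "Choose implementation strategies / Develop an implementation plan")
    if not status["monitoring_decided"]:
        return (2, "Decide how to monitor implementation quality")
    if not status["ready_to_start"]:
        return (2, "Build readiness to use the program or practice")
    return (3, "Start using the program or practice")

# Hypothetical example: a team with a program chosen and a team set up,
# but readiness not yet assessed.
stage, activity = where_to_start({
    "program_selected": True,
    "started_using": False,
    "team_set_up": True,
    "readiness_assessed": False,
    "plan_developed": False,
    "monitoring_decided": False,
    "ready_to_start": False,
})
print(stage, activity)  # 1 Consider likely enablers and barriers, and assess readiness
```

Because each check returns as soon as it fires, the function reproduces the top-to-bottom order of the tool: earlier-stage gaps are surfaced before later-stage ones.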

29. A note of encouragement

  • Using good implementation practices can seem like a lot of work – and it’s true!
  • However, this investment pays dividends later → sustainable and effective service delivery
    •   “pay now or pay later”
  • Actively using this guide will help you turn knowledge of the concepts into practical skills. 
  • By using this approach step by step, you’ll build your confidence and capacity to lead implementation efforts  

30. For more implementation guidance

Alt text:

Cover of the Guidelines publication:  Implementation in action: A guide to implementing evidence-informed programs and practices by Jessica Hateley-Browne, Lauren Hodge, Melinda Polimeni and Robyn Mildon 

31. References and further reading

  • Burke, K., Morris, K., & McGarrigle, L. (2012). An Introductory Guide to Implementation. Dublin: Centre for Effective Services.
  • Damschroder, L. J., Aron, D. C., Keith, R. E., et al. (2009). Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science, 4(50).
  • Eccles, M. P., & Mittman, B. S. (2006). Welcome to implementation science. Implementation Science, 1(1).
  • Metz, A., Bartley, L., Ball, H., et al. (2015). Active implementation frameworks for successful service delivery: Catawba County Child Wellbeing Project. Research on Social Work Practice, 25(4), 415-422.
  • Powell, B. J., Fernandez, M. E., Williams, N. J., et al. (2019). Enhancing the impact of implementation strategies in healthcare: A research agenda. Frontiers in Public Health, 7(3).
  • Powell, B. J., Waltz, T. J., Chinman, M. J., et al. (2015). A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implementation Science, 10(21).
  • Proctor, E., Silmere, H., Raghavan, R., et al. (2011). Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research, 38(2), 65-76.
  • Rabin, B., & Brownson, R. (2018). Terminology for dissemination and implementation research. In R. C. Brownson, G. A. Colditz & E. K. Proctor (Eds.), Dissemination and implementation research in health: Translating science to practice (Vol. 2, pp. 19-45). New York: Oxford University Press.
  • Scaccia, J. P., Cook, B. S., Lamont, A., et al. (2015). A practical implementation science heuristic for organizational readiness: R=MC2. Journal of Community Psychology, 43(4), 484-501.
  • Weiner, B. J., Lewis, C. C., Stanick, C., et al. (2017). Psychometric assessment of three newly developed implementation outcome measures. Implementation Science, 12(108). 

32. For further information contact:

Assoc Prof Robyn Mildon
Executive Director
robyn.mildon@ceiglobal.org
www.ceiglobal.org

Dr Jessica Hateley-Browne
Senior Advisor
jessica.hateleybrowne@ceiglobal.org
www.ceiglobal.org

33. Continue the conversation…

Please submit questions or comments on the online forum following today’s webinar:
aifs.gov.au/cfca/news-discussion

This webinar was held on Wednesday, 26 June 2019. 

Closing the gap between what we know works and what is being done in routine practice requires a focus on implementation. Implementation is the active process of adopting and embedding evidence-informed programs and practices in real-world settings.

This webinar:

  • showed why a focus on implementation is important
  • outlined some key concepts in implementation science
  • described a framework that provides a map for how to plan for and use good implementation practices
  • provided practical examples of these practices.

This webinar is of interest to practitioners, leaders and policy makers, particularly those who are involved in planning, designing and implementing services, policies and programs.


This webinar is co-produced by CFCA and the Families and Children Expert Panel Project, AIFS.


Featured image: © GettyImages/RgStudio

About the presenters

Robyn Mildon

Executive Director, Centre for Evidence and Implementation

Assoc. Prof. Robyn Mildon is an internationally recognised figure in the field of evidence synthesis and translation, implementation science and evaluation of evidence in practice and policy. She is the Executive Director of the Centre for Evidence and Implementation (CEI), an Honorary Associate Professor with the University of Melbourne, and the inaugural Co-Chair of the Knowledge Translation and Implementation Group with the Campbell Collaboration. 


Robyn has led a number of projects focused on program and practice development, service reviews, and program and implementation evaluations. She has extensive experience working with government and non-government agencies to refine outcomes, develop program logics and conduct evaluations and support evidence-informed policy making. She has a substantial track record of working with multiple stakeholders to support the adoption, implementation and evaluation of effective approaches to working with children, families and their communities and to advance evidence in practice.

Jessica Hateley-Browne

Senior Advisor, Centre for Evidence and Implementation

Dr Jessica Hateley-Browne is a researcher with a background in health psychology. She has more than 10 years of experience in applied behavioural research, particularly in the health services, population health, and child and family welfare fields. She has held senior roles in academic and applied research centres, and in a government agency. Jessica has worked on and led large-scale trials and evaluation projects in various settings, as well as national population surveys, high-profile implementation projects supporting the embedding of evidence-informed practice in real-world settings, and the development of program logics and outcomes frameworks for community-based initiatives. 


Jessica has expertise in implementation science and in using mixed-methods in research that seeks to describe and address health and social program and policy challenges in Australian and international contexts. She is committed to high-quality knowledge translation and contributing to and utilising the best evidence to inform policy and practice. Her expertise has been recognised through invitations to speak at academic conferences and research meetings around the world.