6. Stage 3: Initiate and refine

During Stage 3, you'll start using the program or practice for the first time. By this point, staff will be trained in the program or practice, and the systems needed to support your implementation will be established (e.g. the plan for data collection and monitoring, and leadership engagement and support). It's very important to collect and respond to monitoring data during this stage. Focus on continuously improving implementation, and on identifying and responding, in a timely way, to the new barriers that emerge as you begin using the program or practice.

6.1 Initiate the program or practice

Practice is initiated when the first practitioners start using the new program or practice. You may choose to begin with just one team or a small number of teams. In the early days, even highly experienced staff may feel challenged because the program or practice itself, or the implementation activities, may be unfamiliar. They may perceive the new program or practice to be unhelpful, or even burdensome.

If your implementation plan includes post-training implementation strategies, such as follow-on coaching, they should be actioned now.

6.2 Continuously monitor the implementation process

Now it's time to start monitoring implementation quality, according to the plan you made in Stage 2 (see Chapter 5.3). You should also continue to look for new enablers and barriers to implementation (see Chapter 4.3 for suggestions on how to do this). Use the information from your quality monitoring to decide whether you need to review your implementation strategies and how you might do that. Share summaries of your monitoring data at implementation team meetings (or meetings with other decision makers), and regularly identify and explore barriers together. This ensures the information you collect is used to inform decisions about how to improve the implementation process. It also ensures any unintended consequences of the new program or practice, such as staff burnout from feeling over-stretched or unexpected costs incurred during implementation, are noticed, reviewed and responded to.

Be curious about the information and data you're collecting. The purpose is not to judge whether the implementation 'succeeded' or 'failed'. Rather, the purpose is to bring some of the barriers to light so you can respond to, minimise or overcome them.

6.3 Make improvements based on monitoring data

Regularly review your monitoring data. Your reviews may show that some implementation strategies or actions in the implementation plan (see Chapter 5.2) don't meet your needs and should be adjusted. This is a normal part of the implementation process. For example, you may find that you need to provide top-up training or more intensive coaching in the program or practice to help practitioners to build their skills and confidence. Or, you may find that referral rates are slowing down and you need to undertake more promotion and educational outreach activities to boost referral numbers.

When you identify barriers, draw on the resources of the implementation team or other decision makers to decide how to respond to them and improve your implementation process. Use your data to inform your decisions about how to make improvements. Once you've decided how to revise your implementation strategies, update your implementation plan to record the new actions you've committed to. Remember to note who will be responsible for each action and when each action is due to be completed.

It's important to keep monitoring implementation quality, enablers and barriers after introducing potential solutions. If nothing changes, you know the 'solution' you introduced is not working. You'll need to try a new implementation strategy or revisit your understanding of the barrier you're trying to overcome. Figure 4 illustrates this continuous quality improvement cycle.

Figure 4: Continuous quality improvement cycle

Applying this cycle during implementation will help you to quickly determine whether you need to make changes to the program or practice to improve the fit between your context and the new program or practice. As you become more familiar with the improvement cycle, data-informed decision making will become easier and more natural. What may have felt challenging at the beginning of this stage will likely become routine.

Consider this example

A plan is developed to implement and monitor a new parenting program for families at risk of involvement with government child protection services. The program is implemented, and administrative data are collected to monitor whether the target population is being reached (monitor). After a few months, the administrative data show the parenting program is not reaching the intended target population of families, even though enrolment targets are being met (review). The implementation team investigates why this is the case but needs more information.

They can gather this information by reviewing the cases accepted at intake and discussing the issue with relevant staff. Questions arise, such as: Are external referrals into the program inappropriate, but being accepted at intake anyway? If so, the team may need to ensure clearer communication with external stakeholders and undertake additional promotion of the program. Or are practitioners hand-picking 'easy' children and families for the program, and placing those who reflect the true target population on a wait list? This may suggest that practitioners aren't confident with the new approach. They may need additional encouragement (e.g. praising efforts) and support (e.g. reduced caseloads or administrative duties) from leadership, or more intensive coaching to build confidence in the program elements (respond).

This example shows how implementation teams can make data-informed decisions to effectively address barriers that can threaten high-quality implementation. Once you decide how to respond, you'll need to update and action your revised plan. Then the cycle starts again.

6.4 Adapt the program or practice

If you choose to adapt your program or practice at this stage, take a very considered approach. First, get a clear sense of the 'core components' versus the 'flexible components' of the program or practice you're using. Core components directly reflect the underlying theory and mechanisms of change the program was built on, and cannot be changed. Flexible components are not directly related to the theory and mechanisms of change, and may offer scope for local adaptation. We suggest you seek advice from the program developer or purveyor about which components are core and which are flexible before embarking on an adaptation process.

There is some evidence to suggest local adaptations may be beneficial to implementation, encouraging buy-in and ownership, and enhancing the fit between an intervention and the local setting (Lendrum & Humphrey, 2012). However, too much flexibility can take away from a program's effectiveness, particularly when modifications are made to the core components of the intervention. If you find lots of adaptations are needed to fit your context, you may want to revisit your initial decision to adopt that particular program or practice.

Practitioners can feel frustrated when delivering manualised programs with many fixed, core components. These programs can be perceived as inflexible, and you may find program fidelity (i.e. delivering the program exactly as it was designed and intended) pitted against a practitioner's sense of autonomy and 'practice wisdom'. However, it can be more helpful to view program fidelity as a guide to where to be 'tight' and where to be 'loose'. Practitioners should stick tight to the core components of an intervention until they fully understand them and can apply them in daily practice; only then should local adaptations be introduced. A good fidelity measure will enable you to actively and accurately monitor the core components and will show you when adaptations can be introduced.

Core components may include the content and mode of delivery of a program. Flexible components may include the program packaging and promotional material, which can be adapted to use different languages and images that best reflect the local context.