Implementation in action

A guide to implementing evidence-informed programs and practices
Guidelines – June 2019

5. Stage 2: Plan and prepare

During Stage 2, the implementation team (or other decision makers) will plan and prepare for implementation. During this stage, you'll need to:

  • choose implementation strategies
  • develop and start using your implementation plan
  • identify your implementation outcomes
  • decide how to monitor the implementation process.

5.1 Choose implementation strategies

Implementation strategies are the 'how to' of implementation. You'll use these strategies to overcome barriers, build readiness and drive the implementation process. Choose the best strategies for your context, and the program or practice you're implementing. Some interventions, such as manualised programs, come packaged with specific implementation strategies; for example, training requirements and quality monitoring. However, even these usually have scope to add other implementation strategies at the local level if you need them. Other programs or practices will not suggest which implementation strategies to use, so you'll need to choose them yourself.

If you're able to choose your own implementation strategies, one useful technique is to match the strategies to the implementation barriers you've identified or experienced. The Expert Recommendations for Implementing Change (ERIC) project identified more than 70 commonly used implementation support strategies that can be used to drive the implementation process (Powell et al., 2015; Waltz et al., 2015). See Table 3 for some examples. These strategies have been matched with common implementation barriers (defined using the CFIR) to create a decision aid - the CFIR-ERIC Matching Tool.

Table 3: Example implementation strategies, adapted from the ERIC project
  • Access new funding: Access new or existing money to help implement the program or practice.
  • Alter incentive structures: Develop and use incentives to support the adoption and implementation of the program or practice.
  • Audit and provide feedback: Collect and summarise performance data over a specified time period. Give the data to practitioners and administrators to monitor, evaluate and modify behaviour.
  • Change physical structure and equipment: Adapt physical structures and/or equipment (e.g. changing the layout of a room or adding equipment) to best accommodate the program or practice.
  • Conduct educational meetings: Hold meetings with different stakeholder groups (e.g. providers; administrators; other organisational stakeholders; and community, client and family stakeholders) to build awareness, and to inform and educate them about the innovation.
  • Conduct local consensus discussions: Talk with local providers and other relevant stakeholders to determine whether the chosen problem is important to them and whether they think the new program or practice is appropriate.
  • Conduct ongoing training: Plan for and conduct ongoing training in the program or practice.
  • Develop and use tools and processes to monitor implementation quality: Develop tools and processes to monitor implementation quality (as assessed against your implementation outcomes). Use them to create your continuous quality improvement cycle.
  • Develop and distribute educational materials: Develop and distribute manuals, toolkits and other supporting materials that help stakeholders learn about the program or practice, and that teach practitioners how to deliver it.
  • Identify and prepare champions: Identify and prepare people who'll dedicate themselves to driving the implementation. They will help to support and market the change, and to overcome indifference or resistance within the organisation.
  • Increase demand: Attempt to influence the market for your new program or practice; for example, by intensifying competition or increasing the market's maturity.
  • Inform local opinion leaders: Identify local opinion leaders or other influential people and inform them about the program or practice, in the hope they will encourage others to adopt it.
  • Make training dynamic: Vary your training methods to cater for different learning styles and work contexts. Ensure your training is interactive.
  • Mandate change: Ask your leadership team to publicly declare that the new program or practice is a priority and that they're determined to implement it.
  • Model and simulate change: Model or simulate the changes that the implementation will require.
  • Provide follow-on technical support: Provide practitioners with ongoing coaching or clinical supervision. Use modelling, feedback and support to help them apply new skills and knowledge in practice.
  • Promote adaptability: Identify how a program or practice can be tailored to meet local needs. Clarify which elements must be maintained to preserve fidelity.
  • Recruit, designate and train for implementation: Recruit, designate and train people to lead and support the implementation effort.
  • Remind practitioners: Develop reminder systems that help practitioners remember important information, prompt them to use the program or practice, or prompt other important implementation activities. Reminders can be client- or encounter-specific, and can be provided verbally, on paper or electronically.
  • Revise roles: Shift and revise staff roles, and consider redesigning job characteristics. When you revise roles, consider whether they need to expand to cover both implementation and delivery of the program or practice, how to eliminate barriers to care, and whether personnel policies need to change.
  • Use an implementation advisor: Seek guidance and support from an implementation expert.
  • Train-the-trainer: Train designated team leaders, practice leads and partner organisations in how to train others in the program or practice.

Source: Powell et al., 2015

You can use the CFIR-ERIC Matching Tool5 to help you decide which implementation strategies to use. Input the implementation barriers you've identified into the tool and it will generate a list of implementation strategies that experts think will best address these barriers.

Sometimes the CFIR-ERIC Matching Tool will generate a long list of potential strategies for the barriers you've entered, and not all of these will be feasible in your context. While helpful, the Matching Tool can't replace careful thought and decision making based on your specific context. We've identified some guiding principles to help you select the best implementation strategies:

  • Select implementation strategies that best describe the change in behaviour you require to overcome the barriers you identified in Stage 1.
  • Engage stakeholders (practitioners, leadership, clients, referrers and the community) to help you select the best implementation strategies and develop actions for these strategies. Consider asking stakeholders to rate the importance and feasibility of proposed implementation strategies to help you make the decision.
  • Remember that implementation strategies can be one discrete action, or a collection of actions that are interwoven, packaged up and aimed at addressing multiple barriers (Powell et al., 2012).

Once you've chosen your implementation strategies, develop specific actions to bring them to life. Table 4 provides examples of common barriers to implementation, and relevant strategies and actions that can be used in the child and family services context to overcome each of the barriers.

Table 4: Implementation barriers, implementation strategies and actions that are relevant to the child and family services sector
Barrier: Low adaptability
A program or practice seems promising but has been developed for a different context and target population. It's not appropriate in its current form for cultural, linguistic or other reasons.

Implementation strategy: Promote adaptability. Identify how to tailor the program or practice to meet local needs, and clarify which elements must be maintained to preserve fidelity.

Relevant stages: Stage 1 (Engage and assess); Stage 4 (Sustain and scale)

Example actions:
  • Ask stakeholders which adaptations would make the program or practice more appropriate for their context.
  • Clarify which components of the program or practice must be maintained to preserve fidelity, and determine which elements can be tailored to your local context (if any). You can do this by checking information in program menus or repositories, program manuals or guidelines, or by speaking directly with the developer of the program or practice.
  • If permitted, introduce the adaptations once they've been approved and fidelity to the core components has been reached (usually during Stage 4). Monitor their impact to see whether the adapted version is more acceptable, a better fit and still able to be delivered with fidelity.
Barrier: Resistance to change
Practitioners aren't committed to the change because they don't believe a new program or practice is needed.

Implementation strategies:
  1. Conduct local consensus discussions: Talk with stakeholders about whether the chosen problem is important to them and which program or practice is appropriate to address it. (Stage 1: Engage and assess)
  2. Conduct educational meetings: Meet with stakeholder groups and tell them about the program or practice. (Stage 2: Plan and prepare)
  3. Identify and prepare champions: Identify and prepare people who can motivate colleagues, model effective implementation and overcome resistance to change. (Stage 2: Plan and prepare)

Example actions:
  1. (a) Conduct workshops with practitioners and ask for their thoughts on how to define the target population and their unmet needs, and explore new programs or practices that might meet those needs.
     (b) Conduct group discussions with practitioners and leadership staff using the questions in the Implementation Considerations Checklist (see Appendix C).
  2. Run group or one-on-one information sessions with staff and practitioners. Explain the program or practice, including its potential benefits and the resources and commitment required. Give your staff the opportunity to ask questions and explore their concerns.
  3. During consultations, identify possible implementation champions. Approach them afterwards and talk with them about the positive behaviours you observed. Try to enlist their support in the implementation process.

Barrier: Low engagement from leadership
Key managers are not committed to, or actively involved in, the implementation process.

Implementation strategy: Recruit, designate and train for leadership. Recruit, designate and/or train leaders to drive the implementation process.

Relevant stages: Throughout the whole implementation process

Example actions: Implementation leaders need to continuously communicate the vision, purpose and expectations for program implementation. Their aim is to inspire and encourage staff to adopt the new way of working. You may need to:
  • Clearly designate who will lead the implementation process.
  • Recruit staff who will help drive the implementation. Don't assume current personnel can implement the change.
  • Make clear professional development plans for existing leaders. Help to build their capacity ahead of major implementation efforts.
Barrier: Limited evaluation of the implementation process

Implementation strategies:
  1. Develop and use tools and processes to monitor implementation quality: Develop tools and processes for monitoring the quality of the implementation against your implementation outcomes, and use these to inform your continuous quality improvement cycle. (Develop in Stage 2: Plan and prepare; use in Stage 3: Initiate and refine)
  2. Audit and provide feedback: Collect and summarise performance data over a specified time period, and give it to practitioners and administrators to monitor, evaluate and modify behaviour. (Stage 3: Initiate and refine)

Example actions (both strategies): Plan how to collect and monitor data that can inform your decisions about how to improve practice or implementation processes. The data should show whether the program or practice is being used, how well it's being used, whether it's being used with fidelity (that is, as intended), the quality of the implementation process, and the impact of the program or practice on clients. You should decide and put into practice:
  • the specific data points to collect
  • how to best collect, store, manage and interpret the data
  • how to communicate the data so it informs decision making
  • who is responsible for collecting, analysing and reporting back on the data.

Also consider how you can use the data you already have or routinely collect (e.g. intake data and client feedback), and the systems and processes that already exist (e.g. client databases and case-noting), to monitor implementation quality and audit performance.
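As a purely illustrative sketch, the monitoring decisions above (what to collect, how to manage it, who reports it) could be recorded and acted on as follows. All field names and the 0.8 fidelity threshold are hypothetical, not taken from this guide:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class MonitoringPlan:
    """Records the four monitoring decisions listed above (illustrative)."""
    data_points: list   # the specific data points to collect
    collection: str     # how data are collected, stored and managed
    reporting: str      # how data are communicated for decision making
    owner: str          # who collects, analyses and reports

def audit_and_feedback(fidelity_scores, threshold=0.8):
    """Summarise fidelity data over a period and flag whether
    follow-up support may be needed (threshold is illustrative)."""
    average = mean(fidelity_scores)
    return {"average_fidelity": average, "needs_follow_up": average < threshold}

plan = MonitoringPlan(
    data_points=["sessions delivered", "fidelity checklist score"],
    collection="monthly export from the client database",
    reporting="summary table at the implementation team meeting",
    owner="team leader",
)
summary = audit_and_feedback([0.6, 0.7, 0.65])
print(summary["needs_follow_up"])  # True: average fidelity is below the threshold
```

In practice the same logic could sit in a spreadsheet; the point is that each decision (data points, collection, communication, responsibility) has an explicit home.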

Barrier: Low self-efficacy
Practitioners are not confident in their own ability to implement and deliver the program or practice to a high standard.

Implementation strategies:
  1. Conduct ongoing training: Plan for and conduct ongoing training in the program or practice. (Stage 2: Plan and prepare, and Stage 3: Initiate and refine)
  2. Make training dynamic: Vary your training methods to cater to different learning styles and work contexts. Ensure your training is interactive, with a focus on skill building. (Stage 2: Plan and prepare)
  3. Provide follow-on coaching: Use skilled coaches, either internal or external to your organisation, to provide ongoing modelling, feedback and support that helps staff apply their new skills and knowledge in practice. (Stage 3: Initiate and refine)

Example actions:
  1. Ensure all practitioners, team leaders, supervisors and managers can access training in an ongoing way. Consider incentivising participation.
  2. Use adult learning principles to design training in the new program or practice. Consider using web-based technology to reach a broader audience and make delivery more flexible.
  3. Training alone is usually not sufficient to create a change in practice. Supplement training with follow-on coaching by experts in the program or practice; this will help practitioners turn their new knowledge into practice. Training takes place at the end of Stage 2, and coaching can start in Stage 3. Consider a coach-the-coach model, in which an expert gives intensive coaching to an existing team leader or supervisor, who in turn coaches the practitioners in their team.

5.2 Develop an implementation plan

It's important to plan your implementation carefully. Planning will help you identify and address many of the common barriers before they start to cause issues. It will also help you to establish the right implementation strategies to overcome or minimise the barriers. The implementation plan is best developed collaboratively by those on your implementation team (or other key decision makers if you have not set up a team). You can amend and adapt the plan over time. You may need to reconsider your priorities as conditions change and new barriers emerge. The implementation plan can also be used to record implementation enablers, ensuring there is a plan in place for maintaining them throughout implementation.

Your implementation plan should include:

  1. the implementation barriers (identified in Stage 1)
  2. the implementation strategies and specific actions you will take to overcome each of the barriers (chosen in Stage 2)
  3. who will deliver on each action
  4. timeframes, milestones and due dates for each action.

Depending on your needs, your implementation plan may also include:

  • a record of implementation enablers, and strategies and actions for how to maintain them
  • a register of all the risks you've identified during implementation
  • an implementation quality monitoring plan (described in Chapter 5.3)
  • an activity tracker (to track the progress of your implementation strategies and actions)
  • any other information that can help guide your process.

You can use the Implementation Plan Template (Appendix E) to help the implementation team or other decision makers to map out their plan. For another example, see this template developed by the National Clinical Effectiveness Committee.6
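If it helps, the required plan elements (barrier, strategy, actions, owner and timeframe) can be kept in any simple structured form and double as an activity tracker. A minimal sketch in Python; every field name and value here is illustrative, not taken from this guide or its templates:

```python
from dataclasses import dataclass

@dataclass
class PlanItem:
    """One row of an implementation plan: a barrier, the strategy chosen
    to address it, concrete actions, an owner and a due date."""
    barrier: str
    strategy: str
    actions: list
    owner: str
    due_date: str
    status: str = "not started"   # optional activity-tracker field

plan = [
    PlanItem(
        barrier="Low self-efficacy among practitioners",
        strategy="Conduct ongoing training",
        actions=["Schedule quarterly skills workshops",
                 "Arrange follow-on coaching"],
        owner="Practice lead",
        due_date="2019-09-30",
    ),
]

# An activity tracker is then just a filter over the plan.
outstanding = [item for item in plan if item.status != "complete"]
print(len(outstanding))  # 1
```

The same structure maps directly onto a spreadsheet with one column per field, which is how most teams would maintain it.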

5.3 Decide how to monitor implementation quality

The only way to know if your implementation is going well is to monitor its progress. This needn't be an onerous task. Firstly, you'll need to decide which data will be most useful. Choose data that will show when you need to adjust and improve the implementation process, or your new program or practice. You'll also need to ensure you collect and review data regularly. The best implementation monitoring plans use continuous quality improvement cycles during Stage 3 - once you've started the new practice or program (see Chapter 6.3). Ideally, they should also help you identify any unintended consequences (both positive and negative), which can inform future implementation efforts, such as scaling-up to other teams or sites (see Chapter 7.2).

Your implementation monitoring plan should track your key implementation outcomes (i.e. is the program being implemented and how well?). These are different to your program outcomes, which describe the desired changes for children, parents, carers, families and caregivers (i.e. is the program making a difference for people using the service?). Implementation outcomes indicate the quality of your implementation. An evidence-informed program or practice that's implemented well (i.e. has good implementation outcomes) has the best chance of delivering benefits for children and families (see Figure 2 in Chapter 2).

Your implementation team or other decision makers should select which outcomes to monitor, ideally before the new program or practice has started. However, if the program or practice has already started, it's not too late to put monitoring measures in place. You can do this any time.

Table 5 includes some key implementation outcomes, alongside some simple, good-quality measurement methods and tools. We also encourage you to consider additional outcomes and measures that are appropriate for your context.

It's important to consider the quality of your measurement tools. This includes psychometric considerations such as reliability, validity and sensitivity, as well as practical considerations such as length, language and ease of use.

Table 5: Implementation outcomes and suggestions for measurement
Acceptability: The perception among stakeholders that a program or practice is agreeable, palatable or satisfactory.
How to measure:
  • Qualitative interviews with people who deliver the program or practice (e.g. using the CFIR Interview Guide Individual Characteristics questions)
  • A quantitative survey tool such as the Acceptability of Intervention Measure (AIM)

Feasibility: The extent to which the program or practice can be successfully used or carried out within your setting.
How to measure:
  • Qualitative interviews with people who deliver the program or practice (e.g. using the CFIR Interview Guide Intervention Characteristics questions)
  • A quantitative survey tool such as the Feasibility of Intervention Measure (FIM)

Appropriateness: The perceived fit, relevance or compatibility of a program or practice.
How to measure:
  • Qualitative interviews with people who deliver the program or practice (e.g. using the CFIR Interview Guide Intervention Characteristics, Inner Setting and Outer Setting questions)
  • A quantitative survey tool such as the Intervention Appropriateness Measure (IAM)

Fidelity: The extent to which a program or practice is being delivered as intended.
How to measure:
  • Self-report practice checklists for practitioners
  • Client interviews or questionnaires about the aspects of the program they've experienced

Reach: The degree to which a program or practice is integrated into an agency or service provider setting, including the degree to which it effectively reaches the target population.
How to measure:
  • Administrative data

Your fidelity measures should be tailored to the program or practice you're implementing. The EPIS Centre website (www.episcenter.psu.edu/fidelity) provides examples of existing fidelity measures for a range of programs in the child and family service sector. The best fidelity measures or checklists allow for assessment of how often the program or practice is used, and how extensive its reach is. They also track the competence and quality of program or practice use.

The Society for Implementation Research Collaboration (SIRC) is currently developing a repository of tools (see societyforimplementationresearchcollaboration.org/sirc-instrument-project/) that measure implementation outcomes. The repository includes information on the psychometric properties (e.g. reliability, validity) and pragmatic qualities of each tool. The repository is a work in progress and available to paid members of SIRC.

Table 5 includes three short, simple and freely accessible implementation outcome measures with good psychometric properties: AIM (to measure acceptability), FIM (to measure feasibility) and IAM (to measure appropriateness). All three have been developed and validated by Weiner and colleagues (2017) and are available as a free download.8
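Each of these measures is scored by averaging its item ratings; in the published versions this is a small number of items (four) on a 5-point agreement scale, but check Weiner and colleagues' scoring guidance before relying on these assumptions. A minimal scoring sketch:

```python
from statistics import mean

def score_measure(item_ratings, n_items=4, scale_max=5):
    """Average the item ratings for a short survey measure such as AIM,
    FIM or IAM. The item count and scale are assumptions based on the
    published measures; verify against the scoring guidance before use."""
    if len(item_ratings) != n_items:
        raise ValueError(f"expected {n_items} ratings, got {len(item_ratings)}")
    if not all(1 <= r <= scale_max for r in item_ratings):
        raise ValueError(f"ratings must be on the 1-{scale_max} scale")
    return mean(item_ratings)

# One practitioner's acceptability ratings (illustrative data).
aim_score = score_measure([4, 5, 4, 4])
print(aim_score)  # 4.25
```

Averaging scores across respondents then gives a team-level view of acceptability, feasibility or appropriateness that can feed into the monitoring plan above.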

5.4 Build readiness to use the program or practice

Now it's time to build your organisation's readiness to implement the program or practice. Start using the implementation strategies and activities in your implementation plan that are relevant at this early stage. Some strategies, such as ongoing and skills-based training and identifying and preparing champions, will need to be used during Stage 2 so you can start to build readiness before you start the program or practice. You may also need to use other strategies in your implementation plan later in the implementation process (e.g. follow-on coaching, which would only start after the program or practice has been initiated in Stage 3).