Content type: Practice guide
Published: February 2018

Introduction

Program evaluations assist organisations to plan, develop and improve their programs, with the aim of improving outcomes for clients (Alston & Bowles, 2003). Developmental evaluation is one approach to understanding program effectiveness.

What is developmental evaluation?

Traditionally, program evaluation has focused on the processes and/or outcomes of a program in order to find out whether the program has led to positive change for the target group. For this to occur, inputs, activities and outputs must be established and agreed by stakeholders in order to properly measure a program’s effectiveness and worth.

In contrast, developmental evaluation (DE) is a structured way to monitor, assess and provide feedback on the development of a project or program while it is being designed or modified; that is, where inputs, activities and outputs may not yet be known, or may be in a state of flux. DE addresses the challenges of evaluating a program in that context by allowing a more responsive and adaptive approach: “asking evaluative questions, applying evaluation logic, and gathering and reporting evaluative data to support project, program, product and/or organizational development with timely feedback” (Patton, 2012). DE can best be described as a form of thinking and acting strategically as an innovative program unfolds (Patton, 2011).

The primary focus of DE is to provide program developers with timely feedback that can be used to adapt and improve their project or program. Rigour in DE is established by asking thorough and accurate questions of the process and findings, identifying patterns and new information in those findings, and ensuring that these are incorporated into the program via a feedback loop (Ontario Centre of Excellence for Child and Youth Mental Health [OCECYMH], 2017).

Learn by doing

Developmental evaluation enables program developers to learn and act on those learnings during the process of developing or significantly redefining a program (Gamble, 2008). This dynamic time in the life of a program involves the trialling and testing of new methods, logic models and outcome goals. It is a time when these elements of a program are uncertain—the initial goal of a program may change during its development, as unintended consequences, stakeholder priorities, unpredicted factors and new information come to light.

Program developers therefore need to be flexible and to adjust elements of the program as required, informed by what they learn along the way. Using evaluation tools and methods and a feedback loop, adjustments can be made to move the developing program closer to its initial goals, or to new goals that have emerged from the development process.

When should developmental evaluation be used?

Developmental evaluation is best suited to programs still being developed, to existing programs undergoing redesign, or to situations where complex issues or crises have arisen that require programs to be significantly altered (Gamble, 2008; Patton, 2011). DE can also be used for programs addressing complex problems, where solutions are not known and programs need to be fluid and flexible. Stakeholders may try different approaches and will be able to identify when these are unsuccessful, as well as recognise when the intended outcome does emerge. It is then important to observe and document the “ripple” effects of these outcomes, which may intersect with other factors in a way that was not predictable (Patton, 2012).

There are two “distinct niches” for DE.

  1. Developmental evaluation provides support for the exploration and design of a program prior to its establishment (the program’s process and outcomes can then be evaluated once it is established).
  2. Developmental evaluation is useful for dynamic situations where program stakeholders expect to keep developing and adapting the program, and never intend to conduct a final outcome evaluation of a standardised model. DE supports ongoing real-time decisions about what to change, expand, close out or further develop (Patton, 2011).

It is important to note the difference between the improvement of a program and its development. Most programs employ a practice of continuous improvement, utilising evaluation feedback in this process. Developmental evaluation is not appropriate for the purposes of simply improving an existing and established program. A formative evaluation method is more relevant for this circumstance.

Figure 1 illustrates where DE may be suitable.

Figure 1: When DE may be suitable in the program context

Source: Adapted from Preskill & Beer (2012), p. 6.

The developmental evaluation process

Traditional evaluation methods rely on a logic model that explains why the intended program outcomes can be expected, given the fixed inputs and activities of the program. In contrast, DE employs a “systems thinking” approach, mapping relationships and interconnectivity, articulating assumptions about how change occurs and how the social “problem” being addressed by a program is part of a larger system (OCECYMH, 2017; Patton, 2011).

Because DE is used in more complex contexts than traditional evaluation methods, cause and effect may only be known in retrospect, and there may be disagreement and uncertainty among stakeholders (OCECYMH, 2017). For this reason, it is critical to document decisions, reasoning, actions and results, and to provide feedback continuously and in a timely manner. This approach means frequent adaptation may occur and stakeholders can learn the outcomes of decisions made along the way. Feedback need not be formal: it can draw on any kind of data (quantitative, qualitative, mixed), any kind of design (e.g., naturalistic, experimental) and any kind of focus (processes, outcomes, impacts, costs and cost–benefit, among many possibilities), depending on the nature and stage of an innovation (Patton, 2011). Overall, timeliness is the priority. This contrasts with traditional evaluation methods, where data collection usually occurs in one phase, followed by a final reporting phase (OCECYMH, 2017; Patton, 2012).

The starting point for developmental evaluation is an intended program strategy, although this may be fairly vague at the outset. As development begins, some parts of this strategy will not be realised, as evidence emerges that they are unrealistic or no longer desirable. Other parts will become more focused and deliberate, and new strategic factors will emerge. The strategy at the end of the development process will reflect the original strategy and the factors discovered along the way, minus those found to be unnecessary. DE facilitates this program development process, and accountability is established by ensuring that learnings are communicated and utilised (OCECYMH, 2017).

The role of the evaluator

Because developmental evaluation does not sit outside the program like a traditional evaluation, the evaluator needs to be part of the program development team (Patton, 2012). The evaluator is integrated into the process of gathering and interpreting data, framing issues and testing developments. As such, it is neither possible nor desirable for the evaluator to remain objective, as they are immersed in the development of the program.

The evaluator should:

  • ask evaluative questions—complex questions that expose implicit assumptions;
  • be able to “read between the lines” and ask difficult questions that may challenge the status quo, introduce uncomfortable topics and deal with tensions that surface as a result;
  • apply evaluation logic;
  • gather real-time data to inform ongoing decision making and adaptations;
  • help navigate uncertainty and foster relationship building; and
  • facilitate systematic, data-based reflection and decision making and encourage “learning by doing”.

Most importantly, DE requires the evaluator to recognise, record and feed back unexpected and unpredicted outcomes, in order to facilitate continuous improvement of a program (Patton, 2012).

Conclusion

Developmental evaluation helps an organisation to generate rapid learning to support the direction of the development of a program, and/or affirm the need for a change of course. DE provides real-time feedback so that the program stakeholders can implement new measures and actions as goals emerge and evolve. DE is positioned as an internal team function integrated into the processes of program innovation. Developmental evaluation encourages learning, and accountability is centred on recognising and utilising learnings.

Further reading

  • BetterEvaluation. (n.d.). Developmental evaluation. BetterEvaluation. Retrieved from <betterevaluation.org/en/plan/approach/developmental_evaluation>.
  • Brennan, K. (2013). Developmental evaluation: An approach to evaluating complex social change initiatives. Presented at the Next Generation Evaluation: Embracing Complexity, Connectivity and Change Conference, Stanford University, Stanford, 14 November 2013. Retrieved from <www.fsg.org/tools-and-resources/next-generation-evaluation-conference>.

References

  • Alston, M., & Bowles, W. (2003). Research for social workers (2nd ed.). Crows Nest, NSW: Allen & Unwin.
  • Gamble, J. A. A. (2008). A developmental evaluation primer. Montreal: The J.W. McConnell Family Foundation.
  • Ontario Centre of Excellence for Child and Youth Mental Health (OCECYMH). (2017). Developmental evaluation learning module. Ottawa: OCECYMH. Retrieved from <www.excellenceforchildandyouth.ca/resource-hub/developmental-evaluation>.
  • Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York: The Guilford Press.
  • Patton, M. Q. (2012). Planning and evaluating for social change: An evening at SFU with Michael Quinn Patton [Web video]. Retrieved from <www.youtube.com/watch?v=b7n64JEjUUk&list=UUUi_6IJ8IgUAzI6JczJUVPA...>.
  • Preskill, H., & Beer, T. (2012). Evaluating social innovation. Washington, DC: Center for Evaluation Innovation.

 

Acknowledgements

This paper was developed and written by Kate Rosier and Sharnee Moore with Elly Robinson and Jessica Smart, CFCA information exchange.