Developing a culture of evaluation and research

Content type
Practice guide
Published

November 2014

Researchers

Jacqueline Stewart

Overview

An organisation with a culture of evaluation and research is one that is committed to using research and evaluation findings to inform its decisions. With such a culture, organisational efforts to build effective evaluation and research activities are strengthened. This paper aims to provide practical information on the structures, practices and actions that support a change toward a strong culture of evaluation and research.

Key messages

  • Organisations with a culture of evaluation and research deliberately seek evidence in order to better design and deliver programs.

  • By fostering a strong evaluation and research culture, organisations can deliver evidence to stakeholders that programs are achieving desired results, enable robust decision-making and support professional development.

  • Tools are available to assess organisational readiness for cultural change. These tools help to identify structures and practices that work for and against an evaluation and research culture.

  • An assessment of the extent to which programs and services can be evaluated in a reliable and credible way helps to ensure evaluations are only carried out when necessary precursors (e.g., clearly defined program objectives) are in place.

  • A clear plan for cultural change is important. It needs to detail the organisational vision for evaluation and research, strategies for engaging stakeholders and ways to support change champions.

  • Professional development activities may be required to enable staff to do evaluations and research (whether conducting in-house projects or supporting external consultants) and to use evaluation and research findings to inform decision-making.

  • A range of organisational support systems enable and sustain cultural change. Adequate financial and human resources are vital. Organisations also need to ensure staff are accountable for outcomes, rewarded for participating in evaluation and research and able to document and access lessons learned from these activities.

Introduction

Increasingly, organisations are expected to demonstrate, and document, the differences their services are making for children, families and communities (McCoy, Rose, & Connolly, 2013, 2014). Evaluation and research help organisations respond to this demand. Evidence gained through these activities helps to identify whether programs have achieved what was intended and enables organisations to be transparent and accountable (see Box 1).

To develop effective evaluation and research functions, organisations need a strong culture of evaluation and research: a culture in which evidence is deliberately sought in order to better implement and deliver programs. Without such a culture, any efforts to build effective evaluation and research activities will be undermined (Mayne, 2010).

This paper is a guide to developing a culture of evaluation and research. It reviews what an evaluation and research culture is, and explores the actions that can be taken to develop and sustain a culture of evaluation and research.

Box 1: Key terminology

Evaluation and research

Evaluation and research are two means of getting information (or evidence) that informs decisions relating to program design and implementation and day-to-day practice. Examples of how each method has been used to inform practice in a human services organisation are provided below:

  • Assessment Tool Evaluation Project - Practitioners assessed the effectiveness of the practice tools they used when supporting people with a mental illness.
  • Filling a Service Gap Research Project - Practitioners had identified a lack of support services for young sex offenders. They undertook a literature review to identify evidence of services that meet the needs of this population (Gardner & Nunan, 2007).

The evaluators and researchers on these projects most likely used similar methods to obtain the required evidence. What differed was their purpose and the question they asked. This is the key difference between evaluation and research - the focus of the project. Evaluation uses social research methods to systematically investigate the effectiveness of a program or program component (Rossi, Lipsey, & Freeman, 2004). Research is more focused on general explorations and "finding answers" through the production of knowledge (Gardner & Nunan, 2007).

Additional information about the differences between evaluation and research can be found in the CFCA practitioner resource: Evaluation and Innovation in Family Support Services (2013c).

Evidence

Evidence (facts and information) comes from a range of sources:

  • practice knowledge and experience (including professional wisdom and values, law and policy);
  • children's and families' needs and experiences; and
  • evaluation and research (CFCA, 2013d).

Evidence-based and evidence-informed practice

Evidence-based practice involves using the best evaluation and research evidence, practitioner expertise and client values in service design and delivery. Some professionals consider evidence-informed practice a more accurate term. They argue it better reflects the practice reality, in which decisions are informed or guided by evidence rather than based solely on it (CFCA, 2013d).

What is a culture of evaluation and research?

Every organisation already has a culture of evaluation and research of some kind. To understand the current culture, reflect on how practitioners and managers make their judgements, and on the decisions based on those judgements. Are they doing and using (or not using) evaluation and research? The answers will provide insight into whether evaluation and research informs (and is part of) daily practice, is given some importance, or is rarely considered in decision-making (Murphy, 1999).

Organisations with a strong culture of evaluation and research are committed to using findings from evaluation and research to inform decision-making (see Box 2). There is widespread support and understanding of evaluation and research.1 Both functions are seen as "integral and valued parts of the organisation's activities and purpose" (McCoy et al., 2013, p. 16).

Box 2: Observable signs of an evaluation and research culture

Researchers identified characteristics common to organisations with a culture of evaluation and research, including:

  • detailed implementation plans for programs;
  • specific and measurable program objectives;
  • systematic data collection for every program;
  • accessible data for all who are interested;
  • shared staff confidence that collected information aids understanding of whether and how programs achieve desired results;
  • organisation-wide knowledge of the extent to which program outcomes are achieved; and
  • critical reviews of the results of evaluation and research (Mora & Antonie, 2012).

 

1 Research and evaluation must be underpinned by ethical considerations. For further information, see Demystifying Ethical Review (CFCA, 2013a).

Why develop a culture of evaluation and research?

The most pressing reason to develop a culture of evaluation and research comes from the environment surrounding an organisation. More and more external stakeholders require evidence that programs are achieving what was intended and that the programs themselves, rather than other factors, are the reason for observed changes in participant outcomes. The degree to which this evidence can be delivered is influenced by the organisation's commitment to learning and developing from evaluation and research activities (McCoy et al., 2014).

A culture of evaluation and research also offers other benefits to the organisation. A strong evaluation and research culture encourages those responsible for designing and implementing programs to consider a "mix" of evidence (Owen, 2003). Blending practice knowledge, knowledge from and about service users and evaluation and research findings facilitates innovation in family support services (see CFCA practitioner resource: Evaluation and Innovation in Family Support Services (2013c)).

Box 3: Changing and improving the mix of evidence used in decision-making

Practitioners at St Luke's Anglicare identified a lack of suitable services for young sex offenders. Eager to fill this service gap, practitioners wanted to implement their own ideas for suitable services rather than working through the fact-finding and problem-solving processes of research. Together with a researcher (who was one step removed from day-to-day practice), the practitioners developed recommendations that acknowledged both professional wisdom and research findings on programs with proven track records of meeting the needs of the target population (Gardner & Nunan, 2007).

Further benefits are available to practitioners. Involvement in evaluation and research activities provides practitioners with the opportunity to gain or enhance their skills. Knowledge gained about emerging or different programs and practices can renew their interest in what they do and how they do it. Practitioners' confidence may also grow as they gather evidence that a program or service is achieving desired results (Murphy, 1999).

Box 4: Further reading on an evaluation and research culture

Additional information on evaluation and research culture and why it is important can be found in the CFCA practitioner resource: Evidence-based Practice and Service-based Evaluation (2013d).

How to create a culture of evaluation and research

Structures, practices and actions can be put into place to support a change toward a culture of evaluation and research. Box 5 summarises elements that have been identified from a synthesis of a range of sources (detailed in the following discussion), including reports from human services organisations and government agencies on their experiences, and the findings of researchers who investigated whether, and how, organisations built an evaluation and research culture.

Box 5: Elements to foster an evaluation and research culture

Assess organisational readiness

  • Assess current structures, practices and actions to identify those that may be working for and against an evaluation and research culture.
  • Assess the evaluability of programs and services.

Plan for cultural change

  • Create a vision for evaluation and research in the organisation.
  • Engage (internal and external) stakeholders.
  • Identify and support evaluation and research leaders.

Build capacity

  • Develop the capacity of staff to do evaluation and research.
  • Develop the capacity of staff to use evaluation and research.

Develop (or strengthen) organisational support systems

  • Commit necessary organisational resources.
  • Encourage an outcome-oriented and supportive accountability regime.
  • Reward desired behavioural change.
  • Manage knowledge (Adapted from Mayne, 2010).

Assess organisational readiness

An assessment of organisational readiness is a review of the existing characteristics of the organisation. It enables the organisation to identify and build upon current practices that work for an evaluation and research culture, as well as to recognise disincentives or potential barriers to cultural change (Mayne, 2010; Preskill & Boyle, 2008).

Assess current structures, practices and actions

Several tools are available to assess organisational readiness for cultural change. Detailed in Box 6, the tools are typically surveys that ask for a range of views about how well the organisation performs in terms of its operations, employees and management. Commonly explored themes include the:

  • motivation and attributes of program leaders and staff (perceived need and ability to change, approach to decision-making, staff confidence in skills);
  • organisational systems (preparedness of organisational structures, policies and procedures to support learning from evaluation and research); and
  • openness to learning (knowledge of how workplace learning occurs can inform efforts to build an institutional commitment to learning from evaluation and research) (Busch & Hostetter, 2009).

There is flexibility in who completes the surveys. Most tools can be completed by a single person who is highly knowledgeable about the organisation and its operations. Generally, however, it is recommended that surveys be completed by a cross-section of individuals who represent multiple levels and functions within the organisation. Once collated, the assessments can be posted around the organisation to gather further feedback on perceived readiness for cultural change (Gill, 2010).

Box 6: Tools to assess organisational readiness for cultural change

ROLE: Readiness for Organisational Learning and Evaluation instrument

  • The survey is suitable for use with all staff or with select individuals (e.g., members of a practice unit who are aspiring to use evaluation and research findings to inform their decisions).
  • Survey respondents are asked how much they agree or disagree with a series of statements related to six factors: culture; leadership; systems and structures; communication; teams; and evaluation. Researchers have found that these six factors significantly influence the extent to which evaluation findings are used to support organisational learning and decision-making (Preskill & Torres, 1999).
  • Instructions are given on how to analyse and interpret survey results.
  • Low scores in one or more elements suggest that learning from evaluation may not be supported, and provide a point of focus for change management initiatives.
  • The survey is free to download.

Organisational Audit for Evidence-Informed Practice

  • The audit form explores four factors that support organisations to embed evidence-informed practice: leadership; organisational culture; building capacity; and sharing learning.
  • Senior managers are expected to complete all sections. Other staff (depending on their role and experiences within the organisation) may only be able to complete certain sections (such as the shared learning component).
  • Respondents are asked to record how far particular practices, behaviours and attitudes are embedded in the organisation using a four-point scale (ranging from "no, nothing like this" to "yes, and widely known").
  • Scores need to be discussed with a view to deciding where action is required to further embed evidence-informed practice within the organisation.
  • The audit can be completed at regular intervals to review progress against agreed action plans.
  • The audit form is available for download for a small cost.

Assess the evaluability of programs and services

Program plans and objectives are another important consideration when assessing organisational readiness for cultural change. Program staff need to believe that outcomes can be measured; if not, they will assign little value to any attempt to develop an organisational commitment to learning from evaluation and research (Poole, Davis, Reisman, & Nelson, 2001). Similarly, they need to see evaluation activities producing information that is useful for decision-making (Kotter, 2007). If an evaluation does not deliver adequate information to assess a program's impacts or processes, program staff can lose confidence in the activity.

To manage these potential risks (and determine whether an evaluation is worthwhile at a given point in time), the extent to which programs and services can be evaluated in a reliable and credible manner needs to be considered. Key questions to contemplate include:

  • Does the design of the program or service allow for evaluation?
    • Is the program logic (i.e., the theory behind what the program will do, how it will do it and what it will achieve) clearly defined?
    • Are the objectives of the program or service specific, measurable, achievable, realistic and time-based (SMART)?
    • Are the objectives commonly understood and supported by all stakeholders?
  • Are program or service outcomes verifiable based on planned or available data collection systems?
    • Will baseline data be needed to track change (and, if so, is it available)?
    • Is monitoring data planned to be collected on a regular basis against program or service objectives (Mora & Antonie, 2012; Posavac, 2011; United Nations Office on Drugs and Crime, 2012)?

The answers to these questions will reveal how to proceed. Either an evaluation can go ahead in the near future, or program improvements (such as revisions to the objectives) are required first. Making this decision upfront will help maintain staff confidence in the evaluation process (and potentially save time and resources by ensuring the evaluation is carried out when all the necessary supports and structures are in place).
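
As a hypothetical illustration (the program and figures are invented for this example), a SMART objective might read: "By the end of the 10-week program, at least 75% of participating parents will report increased confidence in managing challenging child behaviour." An objective framed in this way makes clear what an evaluation would need to measure, for whom and by when.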

Box 7: Resources to support an evaluability assessment

For information about writing SMART objectives, program logic models and monitoring program outcomes, see the CFCA practitioner resources: Planning for Evaluation I: Basic principles (2013e) and Planning for Evaluation II: Getting into detail (2013f).

Plan for cultural change

The organisational readiness assessments help to reflect on where the organisation is right now. The next step is to plan the future direction of the organisation: to determine the organisation's wants and needs for evaluation and research.

Create a vision for evaluation and research

A clearly articulated vision is critical to fostering an organisational culture of evaluation and research. It directs the change effort, communicating to staff and other stakeholders the demand for and purpose of evaluation and research activities in the organisation. The vision also outlines for stakeholders exactly how it will be achieved (Kotter, 2007; Mayne, 2010).

The first step in articulating the organisational vision is to identify who will develop it. Kotter (2007) suggests organisations form a "powerful guiding coalition". The coalition consists of people with enough influence in the organisation (in terms of titles, expertise, reputation and relationships) to lead the change effort; for example, board members, senior managers and representatives of key clients or service users. They will be responsible for drafting a vision, communicating it and implementing changes as required (Kotter, 2007).

The starting point for the vision is the organisation's wants, needs and aspirations for evaluation and research. Key questions to consider include:

  • What role will evaluation and research play in the organisation?
  • What value will evaluation and research add to the organisation and its stakeholders?
  • How will evaluation and research contribute to strategic decision-making?
  • How will the organisation use what is learnt from evaluation and research?

The answers to these questions can be refined and crafted into a concise statement (two to four sentences) that signals to all stakeholders (internal and external) what evaluation and research is going to be used for and the values on which evaluation and research practice is based (Preskill & Mack, 2013).

Box 8: Example of an evaluation and research vision statement

At Interrelate our goal is to ensure that we continue to provide the best programs and services to our clients. We evaluate our work through projects that we conduct both in-house and in collaboration with our research partners. Our clients and staff are vital to these projects, and what we learn from them we share both within the organisation and across the broader family relationship services sector.

(Retrieved from the Interrelate website <www.interrelate.org.au/research/>.)

Strategies for achieving the vision must also be identified. The strategies need to address:

  • decision-making structures - How will evaluation and research projects be conceptualised, supported and responded to?
  • resource implications - What resources are required and where will they come from?
  • monitoring activities - Who is overseeing the implementation of the vision, reporting on progress and reviewing adopted strategies (Gardner & Nunan, 2007; Research in Practice, 2006)?

Tips for developing an evaluation and research vision

  • Establish connections between the evaluation and research vision and the vision and goals of the organisation. Staff need to see how their involvement in evaluations and research connects with the overall purpose of the organisation. When staff understand this connection they are more likely to adopt the evaluation and research vision and participate in the strategies involved in achieving it (Baron, 2011; Gill, 2010).
  • Emphasise internal drivers for change. While the accountability requirements of funders may be a key reason for adopting an evaluation and research culture, it is best not to promote funders as the sole reason for change. Staff will be more motivated by internal drivers of change like a commitment to quality or effective program delivery (Preskill & Boyle, 2008).
  • Adopt terminology that staff can relate to. Staff might not readily accept or adopt the various terms used in the evaluation and research fields (e.g., theory of change). If particular terminology does not work for staff, agree on terms that do (e.g., road map or blueprint for change) (Mayne, 2010).

Box 9: Examples of how others have documented their vision for evaluation and research

These examples show that there is no one right way to communicate an organisation’s (or agency’s) evaluation and research vision. What is important is to craft a vision relevant and meaningful to the information needs of the organisation and its stakeholders.

DETA Evaluation Strategy

The Queensland Department of Education, Training and the Arts (DETA) Evaluation Strategy reveals why evaluations are valued in the agency, the governance structures in place to ensure timely and systematic evaluations, the capability needed to conduct successful evaluations and the relationships between key evaluation stakeholders.

Anglicare Victoria’s research themes and projects

Anglicare Victoria details their four key research themes online. They identify the specific focus within each theme and outline the types of knowledge they are seeking through their research activities. 

Berry Street Childhood Institute Strategic Framework

Berry Street identifies knowledge building (via research and evaluation) as a strategic goal and specifies how they intend to build knowledge through research partnerships and evaluation of initiatives. 

Engage stakeholders

Stakeholder involvement is a powerful predictor of the success of cultural change. Stakeholder involvement in defining, initiating and monitoring the development (or strengthening) of a culture of evaluation and research helps to create interest, motivation and buy-in. Individuals can also be empowered (through engagement strategies that challenge their thinking or develop their skills) to increase their use of evaluation and research for decision-making (Gardner & Nunan, 2007; Kerman, Freundlich, Lee, & Brenner, 2012; Poole et al., 2001).

Decisions need to be made about which stakeholders to involve. Staff, volunteers, the governing body, program sponsors and service users can all offer valuable perspectives on how to foster a culture of evaluation and research in the organisation (Gardner, 2003; Patel, 2012). Commitment and endorsement from upper management are vital: senior managers are well placed to build trust in, and mutual understanding of, the evaluation and research culture (Packard, Patti, Daly, & Tucker-Tatlow, 2012). Program sponsors (whether internal or external) are typically not involved in the detailed planning and execution of the cultural change; however, it is important to ensure that program sponsors and staff agree on the purpose of evaluation and research in the organisation (Posavac, 2011; Rodriguez-Bilella & Monterde-Diaz, 2012).

Regular communication is the key means of involving stakeholders. Messages and methods of communication need to be tailored to match the information needs of each stakeholder group. For example, if outreach staff check emails infrequently, incorporate discussions on how to learn from evaluation and research into routine meetings about practice issues (Danseco, 2013; Kotter, 2007).

Two-way communication is also important. Stakeholders must have an opportunity to provide feedback and express any concerns. Timely, considered responses to all feedback will grow confidence and trust in the organisation and the change initiative (McCoy et al., 2013). Table 1 details some commonly held concerns about the adoption of an evaluation and research culture and provides suggestions on how to respond to these concerns. Typically the concerns arise from uncertainty about what impact evaluation and research findings might have on staff and program and service delivery. These concerns can be addressed by reinforcing how findings present opportunities for stakeholders (staff and funding bodies) to enhance program design and delivery.

Table 1: Common concerns about an evaluation and research culture, and suggested responses

  • Concern: Evaluation and research drains money away from direct service provision.
    Response: Evaluation and research helps to ensure resources are used wisely. Without the information these activities provide, organisations risk spending time and money on services of unknown value.
  • Concern: Evaluation results are never used. What's the point?
    Response: Failing to act on evaluation findings, where appropriate, is a legitimate concern. Careful planning ensures that rigorous evaluations are timely and relevant to pending decisions.
  • Concern: Staff observations are the most valuable source of ideas for improving programs and services and assessing their results.
    Response: The professional wisdom of program staff will always be highly valued. The move to a culture of evaluation and research is about applying a mix of evidence to make decisions.
  • Concern: Programs will be at risk of shutdown.
    Response: Unfavourable findings rarely lead to the cancellation of a program; the needs of the children, families and communities the organisation works with are too great. Instead, negative findings enable refinements to the service delivery approach.
  • Concern: The information will be used to assess the performance of staff.
    Response: Evaluations are conducted to assess the merits of programs, practices or strategies. The results should be kept separate from processes for staff performance appraisal.

Source: Adapted from Posavac (2011)

Identify and support evaluation and research leaders

It is useful, as organisations engage stakeholders in the change process, to identify evaluation and research leaders or champions. These are individuals who are immediately enthusiastic about creating a culture of evaluation and research. They may even be early adopters of desired behaviours (i.e., currently using evaluation and research findings to inform their decision-making) (Kerman et al., 2012). With appropriate support, these champions can motivate others by modelling desired behaviours, asking critical questions of their colleagues or helping them to implement evaluation and research projects and act on findings (Preskill & Boyle, 2008).

While evaluation and research champions can come from any level of the organisation, leadership does need to come from the top. Managers are well placed to initiate and drive change because they guide and support staff on a daily basis. If they credibly communicate the vision and need for change, demonstrate their commitment to learning from evaluation and research and empower others to follow their example, a culture of evaluation and research becomes more likely (Mayne, 2010; Mora & Antonie, 2012).

Box 10: Anyone can lead organisational learning from evaluation and research

Below are examples of how different members of an organisation have led the change to a culture of evaluation and research:

  • An administrative assistant recommended the organisation trial a new client database for 30 days, monitor its usefulness and then get together to discuss whether it was worth continuing with it.
  • A manager brought together people from across the organisation to pilot a new service and make improvements before widespread implementation.
  • A board member engaged their counterparts in a discussion about the indicators the organisation should use to determine the extent to which evaluation and research is informing decision-making (Gill, 2010).

Build capacity

Staff members need to see how to bring about cultural change. Part of this awareness comes from organisational efforts to develop and communicate a vision for learning from evaluation and research and to demonstrate leadership in evidence-informed practice. The other part stems from their capacity or ability to realise change themselves. In other words, staff members require the ability to do and use evaluation and research.

Box 11: Case study on managers leading change

Big Brothers Big Sisters is a mentoring-based youth-serving organisation operating in Australia and internationally. Stakeholder demands for evaluation required one service provider (located in a small city in the United States) to collect and report various data on the participating young people. Management have supported staff to embrace a culture of learning from evaluations in a number of ways, including:

  • promoting the collected data as a tool staff can use to enhance program delivery (as opposed to an attempt to assess staff performance);
  • demonstrating how to use data for problem-solving (in this case to enhance the client referral process); and
  • enabling staff to analyse the data in new ways to gain insights into the factors that promote client retention (Hoole & Patterson, 2008).

Develop capacity to do evaluation and research

To identify an organisation's evaluation and research capacity building requirements, consider:

  • current levels of evaluation and research expertise - Do staff have the knowledge and skills they need to design, implement and manage evaluation and research projects?
  • the organisation's plans for evaluation and research - Is the intention to engage the services of external evaluators and researchers, conduct in-house projects or utilise both of these approaches (McCoy et al., 2014)?

Box 12: Further reading on considerations for choosing an evaluation and research approach

For help on deciding whether to engage an external evaluation consultant, conduct an evaluation in-house or use both of these options in a hybrid model see the CFCA practitioner resource: Evidence-based Practice and Service-based Evaluation (2013d), where Figure 1 details the advantages and disadvantages of each approach.

A choice to engage external consultants, develop an internal evaluation and research function or use both approaches informs decisions on whose capacity to develop and which capabilities to grow. If the organisation plans to mainly commission evaluation and research projects from external consultants, then those responsible for this activity may need support in framing evaluation and research projects, securing funding, interpreting data and communicating findings. Staff who will be involved in conducting evaluation and research may need training in a broader range of skills, including project planning, instrument development and validation, ethical considerations, data collection, analysis and interpretation, reporting and follow-up (Cousins, Goh, Elliot, & Bourgeois, 2014).

Box 13: Soft skills are important for evaluation and research too

The capacity to carry out evaluation and research projects is represented not only by technical procedural-type knowledge but also by soft skills. Valuable soft skills include conflict resolution, cooperative teamwork and good communication and facilitation skills (Cousins et al., 2014).

An assessment of the existing evaluation and research expertise within the organisation will help to develop tailored training and development strategies; that is, strategies specific to different worksites or teams, ways of working and existing skill sets (Baron, 2011). Experienced evaluators and researchers may benefit most from professional development opportunities such as attending conferences, subscribing to relevant journals or joining professional associations. Emerging evaluators and researchers may need more formal, structured training (such as that offered through universities or relevant service providers), together with coaching and technical assistance from an experienced evaluator or researcher (Cousins et al., 2014).

Whatever strategies are identified, ongoing training and development activities are vital to embed evaluation and research within the organisational culture. One-off training or education events fail to demonstrate a strong commitment to evaluation and research. Ongoing events (see the examples in Box 14) highlight evaluation and research activities as an important component of everyday practice (Gardner & Nunan, 2007).

Additionally, training and development activities are needed for both individuals and groups of people from multiple levels within the organisation. By bringing together people with different viewpoints and experiences, opportunities are created for staff members to envision and re-vision the organisation's conduct and use of evaluation and research. This collective experience promotes the notion that all members of the organisation must work together to strengthen the institutional commitment to learning from evaluation and research (Gardner, 2007; Gardner & Nunan, 2007).

Box 14: Examples of capacity building activities

Below are examples of evaluation and research capacity building initiatives:

  • agency-wide newsletter - This is designed to provide regular information about evaluation and research projects of relevance to the organisation's programs and practices (Gardner & Nunan, 2007).
  • lunchtime forums - Internal and external presenters with expertise in evaluation speak on a range of topics related to evaluation practice. The forums are open to everyone in the agency (Hanwright & Makinson, 2008).
  • learning circles - Experienced evaluators and researchers from different organisations meet (online or face-to-face) for one hour every six weeks to discuss topics of mutual interest. Typically the rotating chairperson nominates a journal article for members to read and dissect during each catch-up (Kishchuk, Gauthier, Roy, & Borys, 2013).
  • collaborative evaluation and research fellowships - Organisations enter into arrangements with universities to nominate graduate students for evaluation and research fellowships. The fellow joins the organisation for one year to carry out an actual evaluation or research project under the supervision of a suitably experienced university faculty member. Participatory evaluation and research methods are used to enable staff members to actively participate in the full spectrum of the project (from conception and design through conduct, analysis and interpretation to conclusions and communication of results) (Cousins, Goh, Clark, & Lee, 2004).

Develop capacity to use evaluation and research

The capacity of staff members to use evaluation and research meaningfully will increase when there are planned, conscious uses of evaluation and research findings to:

  • inform the development of a new program or service or the expansion of an existing program or service;
  • determine whether programs, services or practices need to be enhanced or strengthened; and
  • assess whether programs and services are achieving intended results or effects (Cousins et al., 2014).

A range of activities facilitate the planned use of evaluation and research findings. These activities aim to ensure staff members both know about evaluation and research processes and findings and are empowered to undertake these processes and act on findings. Examples of key activities (highlighted in Cousins et al., 2014) include:

  • participation or involvement in evaluation and research - participatory evaluation and research projects provide those involved with experience of how evaluation and research can be used to solve problems, identify new courses of action and inform a range of decisions related to program planning and implementation;
  • supportive management - managers need to encourage and reward staff who try out new approaches based on ideas gained from evaluation and research activities; and
  • knowledge transfer - the organisation needs to transfer knowledge gained about program successes and failures both during and upon completion of evaluation and research projects. The information must be accessible (easy to comprehend) and available to all who are interested.

More detail is provided in the next section on developing (or strengthening) organisational support systems.

Box 15: Child Family Community Australia (CFCA) information exchange

CFCA information exchange is a resource that can help keep staff informed about evaluation and research processes and findings. Hosted by the Australian Institute of Family Studies (AIFS), CFCA provides quality, evidence-based information, resources and interactive support for professionals in the child, family and community welfare sectors. See the CFCA information exchange.

Develop organisational support systems

Develop organisational support systems

Organisational support systems are important in developing a culture of evaluation and research. These systems (like resources) make evaluation and research possible. They also give day-to-day meaning to the evaluation and research culture. The systems signal to staff what is required (e.g., outcome-oriented practice), reward desired behaviour, promote organisational learning and capture this learning to ensure it informs decision-making within the organisation (Mayne, 2010).

Commit resources

Adequate financial and human resources help to embed a culture of evaluation and research. These resources enable the full array of evaluation and research activities to be carried out, including investment in different types of projects (e.g., process and impact evaluations, exploratory research and longitudinal studies) as well as in learning processes (e.g., staff training, communication and knowledge transfer). Allocating adequate resources also demonstrates that there is firm leadership support for the role of evaluation and research in organisational decision-making (Preskill & Mack, 2013).

What represents "adequate" resourcing depends on the organisation's plans for its evaluation and research functions. It can be useful to review what resources the organisation has designated in the past and whether this level of resourcing has been sufficient (or will be sufficient given future plans). Key questions to consider include:

  • How are current evaluation and research projects budgeted?
  • How does the organisation determine how much to allocate to any one project?
  • How adequate are current resource levels?
  • What else is needed to ensure an ongoing ability to carry out evaluation and research (Preskill & Mack, 2013)?

Organisations with a strong commitment to evaluation and research consistently allocate resources exclusively for evaluation and research purposes on an annual basis. The recommended allocation is between 5% and 10% of a program's budget (Preskill & Mack, 2013). To secure these resources, organisations have built evaluation and research activities into new grant agreements, secured supplementary funding from other sources (e.g., research institutes) or established a line item within the organisation's budget (Preskill & Boyle, 2008).
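
As a simple illustration of this guideline (the figures are hypothetical), a program with an annual budget of $200,000 would set aside between $10,000 and $20,000 each year for evaluation and research.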

Box 16: Funders may have advice on extra resources

Kegeles, Rebchook, and Tebbetts (2005) interviewed representatives of community-based organisations and their funders to identify (among other things) barriers to conducting evaluations. The community-based organisations reported that a lack of funding and inexperienced staff made it difficult to carry out impact evaluations. The funders revealed that, had they known of these issues, they could have provided extra help; for example, by recommending individuals or organisations to provide evaluation expertise or by allocating extra funding for capacity building initiatives (Kegeles et al., 2005). Kegeles et al. (2005) concluded that community-based organisations and funders needed to establish mechanisms by which they could openly communicate difficulties or concerns, such as insufficient funding, and identify solutions.

Box 17: A different way to spend an evaluation budget

Berry Street engaged the services of an external evaluator to coach their staff. This consultant was a de facto member of the team, guiding the organisation in how to design and implement an evaluation framework for assessing the effectiveness of home-based care services (Parker & Jones, 2011). For more information see the Family Relationships Quarterly No. 19 (Australian Family Relationships Clearinghouse, 2011).

Encourage an outcome-oriented and supportive accountability regime

An outcome-oriented accountability regime is one in which managers are accountable for results, as opposed to just outputs. If managers are merely responsible for following procedures and delivering outputs (such as counselling sessions or educational events), there is little incentive to seek evidence of achieved outcomes (such as changed attitudes and behaviours or improved social conditions) (Mayne, 2010).

Tip: Achievable outcomes

Outcomes must be within the control of the team responsible for delivering the program or service. If staff cannot realistically achieve the outcomes, they will seek to ensure accountability is for outputs only (Mayne, 2010).

Clear program documentation enables an outcome-oriented accountability regime. This documentation presents specific, measurable, achievable, realistic and time-based (SMART) program objectives. It also links program activities to intended outcomes (i.e., articulates the program logic) (CFCA, 2013e). Collectively, this information lets program staff know exactly what their program is expected to achieve and how it is expected to achieve it.
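
As a hypothetical illustration of program logic, documentation for a parenting program might link a series of parenting education sessions (activities) to increased parenting knowledge and skills (immediate outcomes) and, in turn, to improved parent-child relationships (longer-term outcomes).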

Box 18: Resources to support clear program documentation

For further information on writing SMART objectives and program logic models see: Planning for Evaluation I: Basic principles (2013e).

To be supportive, the accountability regime needs a learning focus. Regardless of what an evaluation or research project reveals (e.g., outcomes were attained or there was insufficient evidence of desired results), the key questions should always be: what has been learned, and what needs to change in the future (Mayne, 2010)?

Box 19: Responding positively to less than positive results

Staff at a youth mentoring service identified that the retention rates of young people in their program were lower than anticipated. Through further research staff identified that some case closures were unavoidable (e.g., young people had moved out of the service area). Other case closures, however, were put down to youth dissatisfaction or incompatibility with their assigned mentor. The team responded by investigating whether the correct information was being collected to allow for the best quality match possible (Hoole & Patterson, 2008).

Reward desired behavioural change

Rewards are an important means of establishing and sustaining a culture of evaluation and research. These rewards recognise that staff are adopting desired behaviours. It is worth thinking broadly about what behaviours will demonstrate an acceptance of evaluation and research in the organisation. Together with welcoming or undertaking evaluation and research, desired behaviour change might also include using evaluation and research findings to inform decision-making, sharing evaluation and research results across the organisation and holding learning events on good practice in evaluation and research (Mayne, 2010).

Table 2 identifies types of rewards that may be appropriate for an organisation.

Table 2: Types of incentives for rewarding desired behavioural change

Intangible incentives
  • For managers and staff: personal recognition (personal notes, phone calls); public recognition (newsletters, intranet and internet); a new job title; new job responsibilities.
  • For departments or teams: public recognition (newsletters, intranet and internet); new projects; increased authority over program planning, design and implementation; revised (i.e., less taxing) reporting requirements.

Perks
  • For managers and staff: conference attendance; training and development; educational leave/sabbaticals.

Financial incentives
  • For managers and staff: promotions; bonuses; pay increases.
  • For departments or teams: increased program budgets; allocations of discretionary funds; more staff.

Source: Mayne (2010) and Gill (2010)

Manage knowledge

Knowledge gained from evaluation and research activities must be available at the right time and in the right format to address pressing questions. This capability is vital to the development of an evaluation and research culture. Information gained from evaluation and research activities will only be used in meaningful ways if it is accessible (Mayne & Rist, 2006; Preskill & Mack, 2013).

To create a useful and useable knowledge management system, the needs of the primary audience for the system and existing data sharing practices within the organisation need to be considered. Key questions to reflect on include:

  • What kinds of evaluation and research resources would be meaningful to staff?
  • How can these resources be made accessible to those who need and want them?
  • Are existing knowledge management systems appropriate? How might these systems be enhanced?
  • What needs to happen to ensure that the methods of capturing and sharing information align with existing work processes?
  • How much time will be required to input and use information? Is this time available (Preskill & Mack, 2013)?

A range of knowledge management systems is possible, depending on the organisation's needs and available resources. Technology-based systems can range from shared drives where reports are posted online to cloud-based platforms (where resources are made available to users on demand via the Internet from a cloud computing provider's servers) (Preskill & Mack, 2013). Other possible solutions include peer support groups or communities of practice, where staff (and possibly external participants) come together to share evaluation and research experiences and learning (Gardner & Nunan, 2007; Hanwright & Makinson, 2008).

Box 20: Sharing knowledge via an internal committee

One organisation appointed an internal committee to be responsible for receiving and sharing the findings of an evaluation. The intent was to demonstrate that the organisation, as a whole, was responsible for making meaning of findings, as opposed to the evaluator or individual managers (Owen, 2003).

As well as helping staff share evaluation and research experiences, consider the information needs of funders and fellow service providers. Disseminating details of evaluation and research processes and findings will keep funding bodies apprised of the organisation's activities and impacts and could inform the way in which other service providers work or the way in which their programs are run (CFCA, 2013b).

Box 21: Dissemination of evaluation and research processes and findings

For further information on developing a public dissemination strategy see CFCA practitioner resource: Dissemination of Evaluation Findings (2013b).

To share evaluation and research experiences now, consider contributing a short article for publication on the News and Discussion section of the CFCA website.

Final comments

Cultural change involves working through a series of stages that, in total, usually require a considerable length of time. Up to five years may be needed for a genuine commitment to learning from evaluation and research to take hold in the organisation (Danseco, 2013; Gill, 2010). Skipping steps only creates the illusion of speed, rarely produces satisfying results and, more often than not, undoes hard-won gains (Kotter, 2007). A consistent commitment to the vision for evaluation and research in the organisation is crucial. Be sure to monitor progress, taking time to celebrate signs of change (small and large) and to take corrective action when things are not going as planned.

References

  • Australian Family Relationships Clearinghouse. (2011). Family Relationships Quarterly No. 19. Melbourne: Australian Institute of Family Studies. Retrieved from <www.aifs.gov.au/afrc/pubs/newsletter/frq019/index.html>.
  • Baron, M. E. (2011). Designing internal evaluation for a small organization with limited resources. In B. B. Volkov & M. E. Baron (Eds.), Internal evaluation in the 21st century: New Directions for Evaluation, 132, 87-99. doi:10.1002/ev.398.
  • Busch, M., & Hostetter, C. (2009). Examining organizational learning for application in human service organizations. Administration in Social Work, 33, 297-318. doi:10.1080/03643100902987929.
  • Child Family Community Australia (CFCA). (2013a). Demystifying ethical review. Retrieved from <www.aifs.gov.au/cfca/pubs/factsheets/a144068/index.html>.
  • Child Family Community Australia (CFCA). (2013b). Dissemination of evaluation findings. Retrieved from <www.aifs.gov.au/cfca/pubs/factsheets/a145977/index.html>.
  • Child Family Community Australia (CFCA). (2013c). Evaluation and innovation in family support services. Retrieved from <www.aifs.gov.au/cfca/pubs/factsheets/a145794/index.html>.
  • Child Family Community Australia (CFCA). (2013d). Evidence-based practice and service-based evaluation. Retrieved from <www.aifs.gov.au/cfca/pubs/factsheets/a145838/index.html>.
  • Child Family Community Australia (CFCA). (2013e). Planning for Evaluation I: Basic principles. Retrieved from <www.aifs.gov.au/cfca/pubs/factsheets/a145859/index.html>.
  • Child Family Community Australia (CFCA). (2013f). Planning for Evaluation II: Getting into detail. Retrieved from <www.aifs.gov.au/cfca/pubs/factsheets/a145914/index.html>.
  • Cousins, J. B., Goh, S. C., Clark, S., & Lee, L. E. (2004). Integrating evaluative inquiry into the organisational culture: A review and synthesis of the knowledge base. The Canadian Journal of Program Evaluation, 19(2), 99-141.
  • Cousins, J. B., Goh, S. C., Elliot, C. J., & Bourgeois, I. (2014). Framing the capacity to do and use evaluation. In J. B. Cousins & I. Bourgeois (Eds.), Organizational capacity to do and use evaluation: New Directions for Evaluation, 141, 7-23. doi:10.1002/ev.20076.
  • Danseco, E. (2013). The five Cs for innovating in evaluation capacity building: Lessons from the field. The Canadian Journal of Program Evaluation, 28(2), 107-118.
  • Gardner, F. (2003). Critical reflection in community based evaluation. Qualitative Social Work, 2(2), 197-212.
  • Gardner, F. (2007). Creating a climate for change: Critical reflection and organisations. International Journal of Knowledge, Culture and Change Management, 6(7), 73-80.
  • Gardner, F., & Nunan, C. (2007). How to develop a research culture in a human services organisation: Integrating research and practice with service and policy development. Qualitative Social Work, 6(3), 335-351. doi:10.1177/1473325007080405.
  • Gill, S. (2010). Developing a learning culture in non-profit organisations. Thousand Oaks, CA: Sage Publications.
  • Hanwright, J., & Makinson, S. (2008). Promoting evaluation culture: The development and implementation of an evaluation strategy in the Queensland Department of Education, Training and the Arts. Evaluation Journal of Australasia, 8(1), 20-25.
  • Hoole, E., & Patterson, T. E. (2008). Voices from the field: Evaluation as part of a learning culture. In J. G. Carman & K. A. Fredericks (Eds.), Nonprofits and evaluation: New Directions for Evaluation, 119, 93-113. doi:10.1002/ev.270.
  • Kegeles, S. M., Rebchook, G. M., & Tebbetts, S. (2005). Challenges and facilitators to building program evaluation capacity among community-based organisations. AIDS Education and Prevention, 17(4), 284-299.
  • Kerman, B., Freundlich, M., Lee, J. M., & Brenner, E. (2012). Learning while doing in human services: Becoming a learning organization through organizational change. Administration in Social Work, 36, 234-257. doi:10.1080/03643107.2011.573061.
  • Kishchuk, N., Gauthier, B., Roy, S. N., & Borys, S. (2013). Learning circles for advanced professional development in evaluation. The Canadian Journal of Program Evaluation, 28(1), 87-96.
  • Kotter, J. P. (2007). Leading change: Why transformation efforts fail. Harvard Business Review, 84(1), 96-103.
  • Mayne, J. (2010). Building an evaluative culture: The key to effective evaluation and results management. The Canadian Journal of Program Evaluation, 24(2), 1-30.
  • Mayne, J., & Rist, R. C. (2006). Studies are not enough: The necessary transformation of evaluation. The Canadian Journal of Program Evaluation, 21(3), 93-120.
  • McCoy, A., Rose, D., & Connolly, M. (2013). Developing evaluation cultures in human services organisations. Evaluation Journal of Australasia, 13(1), 15-20.
  • McCoy, A., Rose, D., & Connolly, M. (2014). Approaches to evaluation in Australian child and family welfare organizations. Evaluation and Program Planning, 44, 66-74. doi:10.1016/j.evalprogplan.2014.02.004.
  • Mora, C., & Antonie, R. (2012). Levers supporting program evaluation culture and capacity in Romanian public administration: The role of leadership. Society and Economy, 34(3), 423-432. doi:10.1556/SocEc.34.2012.3.4.
  • Murphy, D. (1999). Developing a culture of evaluation. Paris: TESOL France. Retrieved from <www.tesol-france.org/articles/murphy.pdf>.
  • Owen, J. M. (2003). Evaluation culture: A definition and analysis of its development within organisations. Evaluation Journal of Australasia, 3(1), 43-47.
  • Packard, T., Patti, R., Daly, D., & Tucker-Tatlow, J. (2012). Organizational change for services integration in public human service organizations: Experiences in seven counties. Journal of Health and Human Services Administration, 34(3), 471-525.
  • Parker, R., & Jones, A. (2011). Walking the talk: Facilitating evaluation in a service environment. Retrieved from <www.aifs.gov.au/afrc/pubs/newsletter/frq019/frq019-5.html>.
  • Patel, F. (2012). Embedding an internal evaluation culture: Critical issues for consideration from an innovative model. Studies in Learning, Evaluation Innovation and Development, 9(1), 22-32.
  • Poole, D. L., Davis, J. K., Reisman, J., & Nelson, J. E. (2001). Improving the quality of outcome evaluation plans. Nonprofit Management and Leadership, 11(4), 405-421.
  • Posavac, E. J. (2011). Program evaluation: Methods and case studies (8th ed.). Boston: Prentice Hall.
  • Preskill, H., & Boyle, S. (2008). Insights into evaluation capacity building: motivations, strategies, outcomes, and lessons learned. The Canadian Journal of Program Evaluation, 23(3), 147-174.
  • Preskill, H., & Mack, K. (2013). Building a strategic learning and evaluation system for your organization. Boston: FSG. Retrieved from <www.fsg.org/Portals/0/Uploads/Documents/PDF/Building_an_Evaluation_System.pdf?cpgn=WP%20DL%20-%20Building%20an%20Evaluation%20System>.
  • Preskill, H., & Torres, R. T. (1999). Evaluative inquiry for learning in organizations. Thousand Oaks, CA: Sage Publications.
  • Research in Practice. (2006). Tool 8: Documenting your strategic plan. In Firm foundations: A practical guide to organizational support for the use of research evidence (p. 95).
  • Rodriguez-Bilella, P., & Monterde-Diaz, R. (2012). Evaluation, valuation, negotiation: Some reflections towards a culture of evaluation. The Canadian Journal of Program Evaluation, 25(3), 1-10.
  • Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage Publications.
  • United Nations Office on Drugs and Crime. (2012). Evaluability Assessment Template. Retrieved from <www.unodc.org/documents/evaluation/Guidelines/Evaluability_Assessment_Template.pdf>.
Acknowledgements

Jacqueline Stewart is a freelance consultant in social research.

The author wishes to acknowledge the valuable contributions of Alicia McCoy, Research and Evaluation Manager at Family Life.

The feature image is by David Joyce, CC BY-SA 2.0.

ISBN

978-1-922038-74-6
