Family Relationships Quarterly No. 19

AFRC Newsletter No. 19 – August 2011

Walking the talk: Facilitating evaluation in a service environment

by Robyn Parker and Amanda Jones

The growing emphasis on evaluation may require agencies to rethink some of their organisational practices and to redistribute infrastructure and resources in order to accommodate evaluation procedures and requirements. Evaluation can be complicated, time consuming and resource intensive, and the skills required are not commonly taught as part of everyday practice in the broader child and family services sector. Program evaluations can be conducted solely within the agency, by drawing on the knowledge, skills and capabilities of existing staff; by an external consultant brought in specifically to design and conduct the evaluation; or through some combination of the two. This article outlines how the third of these options is being used to put in place a framework for the ongoing monitoring and outcome evaluation of home-based foster care services.

This article is based on a paper presented at the 6th Australian Family and Community Strengths Conference in Newcastle, NSW, November 2010.

Box 1: Evaluation resources

This article focuses on evaluation within a service setting and builds on a series of resources published by the Australian Family Relationships Clearinghouse in November 2010. AFRC Issues No. 6, Evaluation in Family Support Services, comprises a series of five papers aimed at prompting providers to think carefully and systematically about evaluation, and to guide and support them through the evaluation process. These papers are not intended to turn practitioners into expert evaluators; rather, they aim to build the capacity of practitioners to design and conduct, or collaborate on, evaluations of their programs and services.

These resources are available online: Evaluation in Family Support Services

Program and service providers are becoming more aware of the need to assess the impact of their activities on their clients. Without evaluation, providers cannot know what works for which clients and, as importantly, what does not work for which clients. Clients have a right to expect that the services they receive are effective, funders can rightfully expect their money to be well spent, and practitioners are professionally and ethically bound to ensure that any activities and methods they use are "good practice" and evidence informed.

There are three broad ways that agencies may approach the task of designing and conducting an evaluation of their services or programs:

  • Some agencies may use existing expertise and build internal capabilities (in-house evaluation).
  • Others may prefer to engage an external evaluation consultant to conduct the evaluation with minimal input from agency personnel (external evaluation).
  • Alternatively, a combination of these approaches may be used, wherein an external consultant is engaged to work collaboratively within the agency on the design and implementation of the evaluation (collaborative evaluation).

This article focuses on an application of the third approach, in which an external evaluation consultant was seconded to a large provider of child welfare services to co-design and implement an evaluation framework for assessing the effectiveness of home-based care services. The first two approaches and their benefits/disadvantages are described in Box 2.

Box 2: Approaches to evaluation in a service setting

In-house evaluation

  • Designing and conducting evaluations in-house takes advantage of staff knowledge of and familiarity with the agency's programs, infrastructure and, in particular, its culture, and can draw on existing relationships with clients and stakeholders.
  • Can be less costly in monetary terms than engaging an external evaluator.
  • Can potentially contribute to the development and growth of a culture of evaluation in the agency, in which staff accept the need for evaluation, and are willing to contribute to its design and accommodate its implementation (see Box 3).
  • May impact on service delivery through the redistribution of internal resources and increased workloads on staff.
  • Can be susceptible to subjectivity in the way that data are collected, and in the interpretation and reporting of findings. Findings may therefore be perceived by others as less credible than those of an external evaluation.
  • It is not the case that any evaluation is better than none - a badly conducted evaluation might lead to the conclusion that a program or service is beneficial when it actually has no positive effect or, worse, a negative effect on clients. It is therefore crucial to ensure that the staff member charged with conducting an internal evaluation has sufficient knowledge and training to design a valid evaluation and instil confidence in the validity of the findings.

External evaluation

  • Can overcome some of the difficulties associated with implementing an evaluation plan or project using only internal personnel and resources.
  • Brings an objective perspective.
  • Can bring greater technical expertise and knowledge, for example, in the development of tools and procedures.
  • Can minimise the impact on the workload of agency staff.
  • Can be undertaken in a more timely and efficient manner.
  • May take time for an external consultant to acquire sufficient knowledge and understanding of the client group, the program or service being evaluated, or the broader practice and policy context in which the service is provided.
  • May be more expensive compared to internal staff, but consultancy costs must be weighed against the costs of conducting a poor evaluation - not only in monetary terms but also (and especially) in regard to the possibility of continuing to deliver a service that does not benefit the clients.

Collaborative evaluation

A collaborative model whereby an external evaluator is seconded to or embedded within an agency offers a number of advantages over completely internal or external evaluations. This model is being put into practice in a collaborative project between the Australian Institute of Family Studies and Berry Street, the largest independent child and family welfare organisation in Victoria.1 The overall aim of the secondment was to begin the process of embedding monitoring and evaluation practices and procedures into Berry Street's daily practice - making them part of "how things are done" at Berry Street and building a culture of evaluation (see Box 3).

In being seconded to Berry Street, the external evaluator became a de facto member of staff, and as such was seen as, if not a complete "insider", at least less of an "outsider", thereby helping to break down the perception of "us" versus "them" (Owen & McDonald, 1999). Being in this position helped when existing methods or procedures needed to be challenged - challenges that might be less well received coming from a complete outsider, or that a staff member might be reluctant to voice. It was also possible to prompt close examination of the procedures that form the "nitty gritty" of evaluation, such as asking who will actually collect the data, who will enter it into a database, and whether there is sufficient infrastructure to support these activities. It was important to resolve these issues early in the process, to maintain the integrity of the evaluation and facilitate its implementation.

It is well understood that, for an evaluation to be properly and effectively implemented, it must be accepted by those who are most directly involved or affected. The Berry Street staff member2 working in conjunction with the external evaluator acted as a conduit between the evaluator and the broader agency, including both frontline workers and higher levels of management. This role within Berry Street conferred a degree of legitimate authority in the implementation of the evaluation, particularly as the staff member had significant formal and informal corporate, practical and political knowledge of the agency, its clients, and the sector. This made it easier to negotiate across levels of the agency and to bring together others in the agency whose contributions were vital to the viability and success of the evaluation. Combining these three sources of knowledge and expertise is resulting in a simultaneous "top down/bottom up" approach that will help promote acceptance of the need for the evaluation framework and support its implementation.

Key enablers

On the basis of the secondment experience with Berry Street outlined above, the following tips for getting the most out of an evaluation consultancy are offered:

  • Get overt, concrete support from your management and Board, including commitment of funds and infrastructure resources to facilitate the implementation and acceptance of the evaluation. Develop a strong communication plan that includes informing staff of this governance support.
  • Use the consultant to demystify evaluation, but also identify any internal champions of the evaluation strategy.
  • Help those staff most directly affected understand and accept the need to build evaluation into the agency's "ways of working".
  • Facilitate opportunities for evaluator contact with staff from a range of programs. Less formal conversations about evaluation in general, and attendance at meetings, can help foster relationships and break down resistance to evaluation activities. These discussions may also help managers of other programs in thinking, or rethinking, about how well their programs are performing, and will help the evaluator learn more about the agency. Brief information sessions run by the evaluator may also help to foster interest in and acceptance of the evaluation project. The aim of these would not necessarily be to train staff to be evaluators, but to build their understanding of the role of evaluation in working with clients.
  • Create opportunities for interested staff to participate in the design and implementation of the evaluation. This may require changing or adapting roles and redistributing tasks to allow for their involvement. In the longer term, staff members may acquire the knowledge and expertise to allow future evaluation activities to be undertaken in-house or with minimal involvement from external consultants.
  • Think big, act small. Introducing a sweeping system of evaluating all programs and services will require a lot of change and may add to the workload of already busy people. While the overall aim may be to integrate evaluation practices into the agency, it may be sensible to focus on a single program or service. At Berry Street, developing and implementing the evaluation content and procedures on a smaller scale in order to identify what works well and what potential problems may arise down the track has been an effective approach. That model can then be applied to other programs or services, with appropriate adaptations as required. Ongoing review of its integration and the impact on infrastructure and resources can also be undertaken.
  • If at all possible, take it slowly. Unless the program has already been clearly documented and articulated (in particular its objectives, preferably in the form of a program logic model), time will be needed for a clear picture to be developed of the intended outcomes of the program to be evaluated. While staff working in a program are often clear about what they do and why they do it, it can be difficult to articulate the intentions of the program - that is, what benefits clients are expected to experience - in such a way that the achievement (or otherwise) of those benefits can be assessed. Introducing evaluation as a routine part of the agency's activities may also entail a lot of change for many staff members, and this change process needs to be carefully managed.

Box 3: Culture of evaluation

What is a culture of evaluation?

An organisation that has a culture of evaluation has a known, shared policy on, and common understanding of, the role of evaluation of its programs and services (Murphy, 1999). There is a commitment to using evaluation findings to inform decision-making, and practice review and development (Owen, 2003), thereby ensuring that practice is evidence based.

Why is a culture of evaluation a good thing?

Having a collective mindset about the value of evaluation helps to reinforce reflective thinking and practice, and promotes a focus on what works for clients. It also enables the organisation to respond promptly to requests from stakeholders for information about accountability and effectiveness. Staff confidence can be enhanced through the knowledge that their efforts are effective and evidence based, and new skills and expertise can be acquired along the way.

Final comments

Although there is growing recognition of the ongoing need for evidence that programs and services are effective, it can be difficult for a busy agency to marshal the resources required to integrate or absorb evaluation successfully into regular practice. Funding constraints can exclude the possibility of engaging the services of an external evaluator to carry out the task. By bringing a consultant into the agency as a de facto member of staff, many of the barriers to the acceptance of evaluation activities can be overcome, and the adoption of evaluation thinking and practice as "how we do things here" can be facilitated.


  • Murphy, D. (1999). Developing a culture of evaluation. Paris: TESOL France.
  • Owen, J. (2003). Evaluation culture: A definition and analysis of its development within organisations. Evaluation Journal of Australasia, 3(1), 43-47.

Robyn Parker is a Senior Research Officer at the Australian Institute of Family Studies. Amanda Jones is Senior Manager, Evaluation, Policy and Practice at Berry Street.


1  Berry Street services are provided in four Victorian regions. Home-based and residential foster care are prominent among the programs and services offered, which also include family violence services, therapeutic clinical/counselling services (via "Take Two"), children’s contact centres, various youth and community outreach programs, and Open Place, which provides services for "Forgotten Australians".

2  The Berry Street staff member involved was the Senior Manager, Evaluation, Policy and Research.