Evaluation and innovation in family support services

 

Content type: Practice guide
Published: November 2013

This resource is part of a series on evaluation. 

Evaluation is a formal term for something that program providers and managers, and indeed human beings generally, do informally every day. Practitioners and policy makers make judgements about clients, personnel, policies and programs that lead to further choices and decisions - this is a natural part of everyday working life. The difference between informal and formal evaluation is the absence (informally) or presence (formally) of a systematic way of collecting evidence (Fitzpatrick, Sanders, & Worthen, 2011).


While evaluation can be complex, providers and managers already have the capacity, and many of the resources, needed to evaluate many programs and practices. Where gaps remain, providers may benefit from drawing on external expertise to design, implement or analyse their evaluations.

The outcomes of evaluations, whether formal or informal, can be the catalyst for some level or type of innovative response. In these cases, the identification of ineffective services or gaps in service delivery can lead to changes in program content or management - whether minor or major, simple or complex. These responses, in time, require their own evaluation. Therefore, evaluation and innovation can be seen as part of a cycle of ongoing development and improvement of services.

This paper provides a brief overview of evaluation and innovation in the context of family support services. CFCA provides a number of evaluation resource sheets covering further aspects of program evaluation: broader issues relating to evidence and ethics in evaluation; the basic terms, types and principles of evaluation; getting an evaluation plan under way and keeping it on track; and what happens when the evaluation is complete.

Simply speaking, a program is defined as "... a set of resources and activities directed toward one or more common goals, typically under the direction of a single manager or management team" (Newcomer, Hatry, & Wholey, 2010, p. 5). The term "program" may be used formally to describe the activities undertaken by an organisation (e.g., a management hierarchy of programs and subprograms), and this is likely to be the description you are most familiar with. In terms of program theory, however, "program" may be broadened to include any intervention, whether a project, strategy, event or policy (Funnell & Rogers, 2011).

A practice is a way of working: it comprises the tasks or actions involved in the work, the elements (including those at the organisational level) that contribute to its effective delivery and intended outcomes, and the explanations of how these work. A practice has clear parameters, which means it can be compared against similar practices (Soriano, Clark, & Wise, 2008).

Programs, practices, policies and everything in between are open to evaluation. Evaluation principles and procedures discussed in this series of evaluation resources are likely to be applicable to a range of circumstances.

What is evaluation?

Clients have a right to receive high-quality services that are based on the best evidence of what will help them and their families, and to expect that well-trained, well-qualified and well-supported professionals provide those services. The only way to really know this is through evaluation.

Evaluation is the process of determining whether (and how) your program achieves its objectives. It is more than just the collection of information and data; it is about systematically assessing what you do and how you do it to arrive at a judgement about the "worth, merit or value" of a program (Mertens & Wilson, 2013; Scriven, 2003-04).

Box 1: Evaluation and research - Are they different?

Strictly speaking - yes. Although evaluators and researchers often use similar methods and tools, they differ in their purpose and in the questions they ask. Some key differences are:

  • Although the terms are often used interchangeably, evaluation is about making informed decisions based on a judgement (i.e., taking action), whereas research is more focused on creating and/or advancing knowledge and theory, and may be done simply to satisfy curiosity (the "wouldn't it be interesting to know ..." questions).
  • Researchers often seek to generalise beyond a sample of participants to the general population, whereas evaluators generally focus on what happened for this group of participants and what the findings mean for service provision and management. Any generalisations are made only to other potential participants.
  • Evaluations require working closely with stakeholders and clients (who may be your colleagues if the evaluation is being run internally) in the development and implementation of the evaluation plan, whereas research is often (but not always)* designed and conducted at a distance from the clients and/or participants.
  • Researchers are often able to implement sophisticated designs, whereas evaluators often have limited scope for large-scale or complex designs, especially when retrofitting an evaluation plan to a program that is already underway.
  • Evaluators report to, and answer to, those who commissioned or instigated the evaluation, whereas researchers report to, and answer to, the broader scientific community.

There is, however, a sense in which some types of evaluation can also be seen as research. Since the point of research is often to determine cause-and-effect relationships, and the point of some evaluations is to determine the extent to which the program was the primary reason for the changes observed in participants, using the terms interchangeably is not wholly incorrect. Research may also provide directions that are useful in an evaluative sense, such as when findings suggest the need for program responses or for improvements to an existing program (Mertens & Wilson, 2013).

In practical terms, the differences are less important than the way in which the process is conducted - they both require information to be gathered systematically, thoroughly and carefully.

* You can conduct research by simply adding a single item about a topic of interest to your program feedback form.  
Source: Adapted from Boulmetis & Dutwin (2005); Maring & Davis-Unger (2005); Owen & Rogers (1999); Preskill & Russ-Eft (2005)

The point of a program evaluation is to demonstrate whether participants benefited from attending your program. Furthermore, you need to run the evaluation in such a way that it is clear that participants benefited because of the program and not for some other reason. An evaluation should also be designed so that ways in which the program can be improved are apparent (Weiss, 1998). Evaluation is therefore essentially about quality assurance. Other reasons to evaluate a program are listed in Box 2.

Box 2: Reasons to evaluate a program

The evaluation of a program may be a result of push factors, such as an external mandate from a funding body, or pull factors, such as an internal need to assess whether a program is having desired outcomes. Either way, staff members need to feel that an evaluation will result in useful information that will help them do their jobs in a more effective way. Other reasons to conduct program evaluations include:

  • to assess progress on program outcomes and goals;
  • to find opportunities for continuous program and personnel quality improvement;
  • to answer questions related to efficiency (e.g., How long did it take and how much did it cost?), effectiveness (e.g., What did you do? How well did you do it?), and impact (e.g., Did the program influence the participants' lives?);
  • to justify requests for further support and funding; and
  • to add to the evidence base about which programs are, and which programs aren't, effective, and ensure that resources are not wasted on programs that are not effective (e.g., the Triple P Parenting program has demonstrated benefits; the Scared Straight Juvenile Delinquency program has not).

Source: Adapted from Boulmetis & Dutwin (2005); Centers for Disease Control and Prevention (n.d.).

Types of evaluation

Once you recognise the need for evaluation, the next step is to decide which type of evaluation to conduct. Evaluation references discuss several different types of evaluation. However, there are two broad categories of evaluation that are most relevant to providers of family support services. These are impact evaluation and process evaluation.

Impact evaluation: Does my program help my clients?

An assessment of the effectiveness of a program would be made via an impact evaluation, which examines whether, how well, and for which participants a program has met its objectives. Impact evaluation is most appropriate for evaluating "settled" programs (Owen & Rogers, 1999). It can also provide information as to the degree to which the program was responsible for the observed or reported change. That is, did the participants "change" over the course of the program, and was the program primarily responsible for those changes? Relevant issues include:

  • Have the intended outcomes for participants been attained? (i.e., have participants improved their knowledge, learned skills, changed a behaviour or attitude?)
  • Have there been any unintended outcomes for participants?
  • Did the program show impacts for participants to an extent that justifies the expansion (or replication elsewhere) of the program?
  • Has the program met the needs of its intended clients?

The measurement of change as a result of a program may also be called an "outcome evaluation", a term that is often used as an alternative to impact evaluation. Depending on which book you read, website you refer to, or sector you work in, it may be confusing to see these two terms used in different ways. There is no universally agreed usage of the terms "outcomes" and "impacts" - for example, sometimes outcomes are seen as coming before impacts and sometimes vice versa. The important thing is to consider change as a result of a program over a period of time, to the extent possible - immediately after the program, and in the short, medium and long term (Funnell & Rogers, 2011).
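
The mechanics of measuring change over time can be made concrete with a small worked example. The sketch below (in Python) uses an entirely hypothetical wellbeing scale and invented participant scores; it simply compares each participant's pre-program score with scores at later time points, which is the basic logic of a pre/post outcome measure, not a prescribed analysis for any particular program.

```python
# Minimal sketch: tracking change on a hypothetical wellbeing scale for a
# handful of invented participants, before the program ("pre"), immediately
# after ("post") and at a six-month follow-up. All names and scores are
# made up for illustration only.

participants = {
    "P01": {"pre": 12, "post": 18, "6_months": 17},
    "P02": {"pre": 15, "post": 16, "6_months": 19},
    "P03": {"pre": 10, "post": 14, "6_months": 13},
}

def mean(values):
    """Simple arithmetic mean."""
    values = list(values)
    return sum(values) / len(values)

for point in ("post", "6_months"):
    # Change score for each participant: follow-up score minus pre-program score.
    changes = [scores[point] - scores["pre"] for scores in participants.values()]
    improved = sum(1 for c in changes if c > 0)
    print(f"Average change at {point}: {mean(changes):+.1f} points "
          f"({improved} of {len(changes)} participants improved)")
```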

Process evaluation: How does my program work?

Process evaluation focuses on the characteristics of the program and its implementation. It aims to understand how the program works, and answers questions such as:

  • Is the program being implemented in the way it was intended? (e.g., are groups being run with more participants than the program was designed for?)
  • Is it reaching the people for whom it was designed? Do those who are actually accessing the program have the characteristics of its intended target group?
  • What are the "active ingredients"? (i.e., what are the specific materials, activities or ways of working that bring about change for participants?)
  • Is it being delivered in the most effective way? (e.g., are mixed-sex groups "better" than single-sex?)
  • Which organisational or governance factors are helping or hindering the delivery of a program? (Alston & Bowles, 2003; Nutbeam & Bauman, 2006)

A process evaluation also includes documentation of the activities and procedures, which enhances adaptation and replication, and examination of aspects such as:

  • client characteristics;
  • how clients enter and progress through the program;
  • how many sessions are attended;
  • why clients drop out of programs;
  • service policies and processes;
  • practitioner skills;
  • cross-agency interaction; and
  • governance (Alston & Bowles, 2003).

Process evaluation often takes a back seat to impact evaluation. Yet, without information about how the program is implemented, it can be difficult to understand why participants did or did not gain some benefit from participating in the program (Nutbeam & Bauman, 2006). Perhaps the results are less encouraging because the group sizes are too large, or because one section of the program is less effective than another. A good process evaluation will point to the mechanisms underlying participant change, or lack thereof.
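
As a concrete illustration of the documentation side of process evaluation, the sketch below tallies attendance and completion from a handful of invented session records. The record format, the eight-session program length and the figures are assumptions made for this example only, not part of any particular evaluation framework.

```python
# Minimal sketch: summarising invented attendance records to describe how
# clients progressed through a hypothetical eight-session program.

TOTAL_SESSIONS = 8

records = [
    {"client": "C01", "sessions_attended": 8, "dropped_out": False},
    {"client": "C02", "sessions_attended": 3, "dropped_out": True},
    {"client": "C03", "sessions_attended": 7, "dropped_out": False},
    {"client": "C04", "sessions_attended": 2, "dropped_out": True},
]

completion_rate = sum(1 for r in records if not r["dropped_out"]) / len(records)
average_attendance = sum(r["sessions_attended"] for r in records) / len(records)

print(f"Clients: {len(records)}")
print(f"Completion rate: {completion_rate:.0%}")
print(f"Average sessions attended: {average_attendance:.1f} of {TOTAL_SESSIONS}")
```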

It is just as important to identify whether a program (or parts of it) does not have a positive impact on clients. We rarely talk about what doesn't work, as though it were far less important than what does, even though similar lessons may be learned. If a program (or parts of one) is not shown to benefit its participants, then it is important to identify why this might be the case and either withdraw the program or replace the ineffective components.

There are unfortunately limited ways to share information about what doesn't work. An exception is the Admitting Failure website, created in the spirit of the Engineers Without Borders (Canada) annual Failure Report. Admitting Failure offers a community and a resource to encourage new levels of transparency, collaboration and innovation within civil society.

Further reading about types of evaluation

  • Alston, M., & Bowles, W. (2003). Research for social workers (2nd ed.). Crows Nest, NSW: Allen and Unwin. Chapter 8.
  • Owen, J. M., & Rogers, P. J. (1999). Program evaluation: Forms and approaches (2nd ed.). Crows Nest, NSW: Allen & Unwin. Chapters 3 and 13.
  • South Australian Community Health Research Unit. (2008). Planning and evaluation wizard. Adelaide: Flinders University.

Whatever the evaluation reveals about your program, some action is likely to be required. This may involve making minor or major changes to program content or delivery, re-orienting the program towards a different target group, expanding the program, or scrapping it (or parts of it) altogether. Addressing some of these issues may require an innovative response. The next section discusses innovation and its relationship to evaluation, and presents a model of the process of identifying the need for innovation and the kind of innovative response that might be implemented.

Innovation and service delivery

Innovation is often associated with business, technology and manufacturing; however, in recent years it has become a theme across a range of sectors, including family and relationship services. Similarly, there has been increased emphasis on the evaluation of family and relationship programs, highlighting the need for an evidence base to guide practice and identify areas in which innovative responses may be required. Such responses may involve major or minor adaptations to an existing program or practice, changes to program materials or delivery, or the introduction of an entirely new program or way of working. In any case, the response may be aimed at existing or new client groups, and may address new or emerging issues and experiences.

Ongoing social and demographic changes mean that service delivery must be constantly monitored and evaluated to ensure that organisational and program goals and objectives continue to be met. Evaluation and innovation are therefore likely to become even more central to the long-term performance and sustainability of programs and services.

What is innovation?

Innovation can be applied in different ways to the social services sector. From a community development perspective, for example, Bradford (2003) described innovation as "applying the best ideas in a timely fashion to emergent problems" (p. v). Here, the emphasis is on a new problem or issue. In contrast, innovation may be undertaken to improve or change an existing practice for implementation in a different community or for a different target audience.

Innovation does not necessarily involve making large or substantial changes to programs or practices, nor does any change you introduce have to be "new" per se. It can be useful to think about whether the type of innovation required is something completely new (a breakthrough idea) or something more like an adaptation of, or improvement to, an existing practice or process (an incremental idea) (Jasinski, 2007).

From a broader perspective, innovation can be seen as "fundamentally, a social process that supports good science, strong productivity growth, and a more inclusive society" (Maxwell, 2003, p. 18). While there are common elements in how innovation is defined, the outcomes of innovation can mean different things to different people. A technological innovation, for instance, might be well received for its economic benefits, but less welcomed by the workforce it renders redundant (Bradford, 2003). However, even in the industrial or business worlds, people realise that innovation requires the integration of ideas from a range of sources (Bradford, 2003).

While these definitions suggest that innovation is a response that promotes ongoing improvement, they are either too broad or too limited to be of much value at the practitioner level in the family support services sector. Within the sector, innovation tends to be described in terms of what an innovative practice or activity "looks like". Furthermore, while a dictionary definition of innovation as something new and original is easy to understand, it is not particularly helpful in suggesting how one might actually go about "being innovative" in practice. Two views on innovation follow; these are then integrated into a model depicting how the need for innovation arises and the possible responses to it.

Ways of thinking about innovation

Innovation can be applied at the level of a specific program or practice, or at the organisational level. Essentially, innovation is about transformation. As Jasinski (2007) stated, innovation is:

About ideas and the transformation of those ideas into value creating outcomes ... Innovations include breakthrough ideas that lead to new products or services, and incremental ideas which improve the way processes are undertaken ... Innovation is about the creation of new knowledge and the use of that knowledge. (p. 17)

A project focusing on innovative practice in education across Europe (GHK Consulting, the Danish Technology Institute, & Technopolis, 2008) identified several types of innovation. Three of particular relevance to the family support services sector have been adapted below:

  • Innovation in content - Applying new approaches to the specific topics and issues dealt with in a program or in the individual modules or sections of a program. This may involve the application of a new approach, theory or model, or adding new topics in response to social, demographic or other trends.

Example: Introducing a discussion into marriage and relationship education programs about how the use of social networking sites might affect couple relationships.

  • Innovation in delivery method - Employing new or adapted modes of delivery through partnerships with stakeholders, creating ownership among clients, staff or stakeholders, or through the use of learning approaches that make use of new information and computer technologies (such as the Internet, blogs, vlogs, webinars and other electronic communication platforms).

Examples: Setting up one-stop shops or service hubs that provide access to several types of onsite or outreach services; the provision of online couple and family counselling and therapy or web-based couple relationship education programs for clients; or webinars for knowledge sharing among professionals in different locations.

  • Innovation in forging new partnerships or networks - Sharing and increasing expertise, knowledge and experience, fostering communication and exchange of ideas.

Examples: Connecting post-separation services within and across regions, such as the Family Law Pathways Networks; providing external or cross-agency supervision; combining professional development activities (i.e., across several geographically close organisations).

Knowing when innovation is needed

The need for innovation is likely to be identified when it becomes clear that a service is not meeting one or more of its objectives for:

  • the clients or communities that it serves (e.g., elderly CALD or Indigenous groups, young parents);
  • the agency itself (e.g., meeting client targets); and/or
  • its workforce (e.g., high staff turnover, low morale).

The signal that an innovative response is needed may arise from:

  • a formal program evaluation in which problems are identified in its content or processes;
  • changes in target group characteristics, issues and experiences;
  • the emergence of new groups in the community in need of services;
  • new knowledge or information; and/or
  • the emergence of new issues brought about by broad social, economic, technological or environmental changes.

The process through which the need for innovation and possible responses can be identified is captured in the model shown in Figure 1. Target groups are classified as existing (e.g., couples intending to marry; couples experiencing relationship difficulties; families with problematic parent-child relationships) or new. New target groups typically do not spring into existence fully formed. They may include those who have existed for some time but whose numbers have increased to the point where they form a discrete, identifiable group of a size that warrants a programmatic rather than individual service response (such as some CALD groups or Humanitarian Entrant families; fly-in fly-out families; same-sex parents).

Figure 1. Possible pathways to innovation

The model distinguishes pathways by whether the target group is existing or new, and whether the issues it faces are normative or emerging:

  • Existing target group, normative issues (e.g., family conflict, mental health issues, healthy couple and family relationships): the action is maintenance of current service delivery.
  • Existing target group, emerging issues (e.g., cultural clashes; the impact of emerging technologies on couple and family relationships): the action is an innovative one, in process (incremental) or in product/service (breakthrough), relating to content, delivery and new partnerships.
  • New target group, normative or emerging issues: the action for both is the same as for emerging issues above - an innovative one, in process (incremental) or in product/service (breakthrough), relating to content, delivery and new partnerships.

In the model, issues are termed normative or emerging. Normative refers to issues that are relatively entrenched in society (for instance, family conflict or addictive behaviours). Emerging issues are those that arise out of broad societal trends and changes, such as the difficulties experienced by recently arrived migrant and refugee families or the impact of emerging technologies (such as social networking sites) on couple and family relationships. An emerging issue may, over time, become more widespread or established at a family, community or societal level.

When a new or adapted program, delivery method or staff management practice is implemented, it is useful to know whether those changes have produced any benefits for clients or staff members. Innovation and evaluation are therefore part of an ongoing cycle:

  • evaluation leads to an awareness of the need for change or improvement (innovation);
  • an innovation is designed and implemented to respond to that need; and
  • subsequent evaluation determines how well the innovation has met its objectives - possibly leading to further innovation.

References

  • Alston, M., & Bowles, W. (2003). Research for social workers (2nd ed.). Crows Nest, NSW: Allen & Unwin.
  • Boulmetis, J., & Dutwin, P. (2005). The ABCs of evaluation: Timeless techniques for program and project managers (2nd ed.). San Francisco, CA: Jossey-Bass.
  • Bradford, N. (2003). Cities and communities that work: Innovative practices, enabling policies (Discussion Paper F|32). Ontario: Canadian Policy Research Networks.
  • Centers for Disease Control and Prevention. (n.d.). Introduction to program evaluation for public health programs: A self-study guide. Retrieved from <www.cdc.gov/eval/guide/introduction/index.htm#why>
  • Fitzpatrick, J., Sanders, J., & Worthen, B. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). New Jersey: Pearson Education.
  • Funnell, S., & Rogers, P. (2011). Purposeful program theory: Effective use of theories of change and logic models. San Francisco, CA: Jossey-Bass.
  • GHK Consulting, the Danish Technology Institute, & Technopolis. (2008). Inventory of innovative practices in education for sustainable development. Retrieved from <ec.europa.eu/education/more-information/doc/sustdev_en.pdf>
  • Jasinski, M. (2007). Innovate and integrate: Embedding innovative practices. Canberra: Department of Education, Science and Training. Retrieved from <flexiblelearning.net.au/about/research/>
  • Maring, B., & Davis-Unger, A. (2005). Program evaluation. Seattle, WA: Office of Educational Assessment, University of Washington. Retrieved from <washington.edu/oea/services/research/program_eval/faq.html>
  • Maxwell, J. (2003). Innovation is a social process. Paper prepared for the Statistics Canada Science and Innovation Information Program, March 2003. Retrieved from <www.statcan.gc.ca/pub/88f0006x/88f0006x2003006-eng.pdf>
  • Mertens, D., & Wilson, A. (2013). Program evaluation theory and practice: A comprehensive guide. London: Guilford Press.
  • Newcomer, K., Hatry, H., & Wholey, J. (2010). Planning and designing useful evaluations. In J. Wholey, H. Hatry, & K. Newcomer (Eds.), Handbook of practical program evaluation (3rd ed.). San Francisco, CA: Jossey-Bass.
  • Nutbeam, D., & Bauman, A. (2006). Evaluation in a nutshell: A practical guide to the evaluation of health promotion programs. North Ryde, NSW: McGraw-Hill.
  • Owen, J., & Rogers, P. (1999). Program evaluation: Forms and approaches (2nd ed.). Sydney, NSW: Allen & Unwin.
  • Preskill, H. S., & Russ-Eft, D. F. (2005). Building evaluation capacity: 72 activities for teaching and training. Thousand Oaks, CA: Sage.
  • Scriven, M. (2003-04). Michael Scriven on the differences between evaluation and social science research. The Evaluation Exchange, IX(4). Retrieved from <www.hfrp.org/evaluation/the-evaluation-exchange/issue-archive/reflecting-on-the-past-and-future-of-evaluation/michael-scriven-on-the-differences-between-evaluation-and-social-science-research>
  • Soriano, G., Clark, H., & Wise, S. (2008). Promising Practice Profiles final report. Melbourne: Australian Institute of Family Studies.
  • South Australian Community Health Research Unit. (2008). Planning and evaluation wizard. Adelaide: Flinders University. Retrieved from <www.flinders.edu.au/medicine/sites/pew/pew_home.cfm>
  • Weiss, C. H. (1998). Evaluation (2nd ed.). New Jersey: Prentice-Hall.

Acknowledgements

This paper was first developed and written by Robyn Parker, and published in the Issues Paper series for the Australian Family Relationships Clearinghouse (now part of CFCA Information Exchange). This paper has been updated by Elly Robinson, Manager of the Child Family Community Australia information exchange at the Australian Institute of Family Studies.
