Issues in evaluation of complex social change programs for sexual assault prevention

 


 

Content type: Practice guide
Published: May 2013

Overview

Preventing sexual violence before it occurs is known as primary prevention. Prevention programs that aim for social change, as sexual assault prevention does, can challenge more traditional perceptions of program success and require careful consideration of how success or failure is measured. Broader thinking is required to develop creative, solution-focused evaluation that can be incorporated into program design to enhance sexual assault prevention efforts.

Key messages

  • The most valid, rigorous and reliable information for any evaluation is that which tells stakeholders what they need to know about prevention effectiveness.

  • The complexity of multi-level interventions for sexual assault primary prevention may require consideration of new evaluation techniques.

  • Acceptance of innovative methods of evaluation is required at the policy and funding level to ensure progress towards effective assessment of primary prevention in sexual assault.

  • Evaluators should utilise the approach that best suits the evaluation question they are considering, even if that requires innovative techniques.

Introduction


Across the policy landscape, there is an emphasis on funding programs that have been evaluated as successful and, of course, delivering programs that work makes sense. Evaluation in the field of sexual assault prevention, however, can present many challenges due to the complexity involved, particularly as primary prevention interventions now form a key strategy in reducing sexual and other types of violence against women (Casey & Lindhorst, 2009; Evans, Krogh, & Carmody, 2009; VicHealth, 2007). Sexual assault prevention requires social change, but evaluation in this type of multi-level, change environment is still developing. This means there are opportunities for considering evaluation in a different light, and for thinking about what kind of information is most useful for understanding what makes for successful primary prevention. This paper aims to stimulate thinking around creative, solution-focused evaluation that can be incorporated into program design to enhance sexual assault prevention efforts.

Sexual assault primary prevention works by targeting the complex, systemic causes of this crime. The knowledge base continues to expand about the causes of sexual assault, but the key social factors include gender inequality and social norms around gender roles, violence, and sexual behaviour (Davis, Fujie Parks, & Cohen, 2006; Evans et al., 2009; VicHealth, 2007). Primary prevention strategies can be targeted at aspects of causation, or aim to effect change in individual communities, but these are incremental steps toward broad-scale social change to remove the conditions that lead to sexual assault. Bronfenbrenner's social–ecological model is a prominent example of conceptualising the multi-level causation that underpins a systemic change approach to sexual violence prevention (Quadara & Wall, 2012). The multiplicity of causal factors, and the different layers of influence required to change behaviour at a societal level, make it difficult for evaluators and their stakeholders to identify exactly what outcomes they need to know about in order to decide whether a program is effective, and then how best to extract that information in a sensitive area of research.

For the sexual assault field, particularly in Australia, this type of large-scale, public health approach to prevention is relatively young, so there is little to guide evaluators in this area. Consideration needs to be given to the design of evaluations that can incorporate the complexity of multi-level interventions in the prevention space. These types of evaluations will need to be more sophisticated than mere program feedback from participants (Evans et al., 2009). They will need to work out ways to measure the social changes that will result in reduced sexual assault perpetration. This is a very complex task and consideration of how it will be done must be built into program designs. There may be potential to draw on similar fields of prevention, such as domestic violence, that also aim to effect social change, to assess how evaluations in those fields have addressed the challenges of multi-level strategies. However, context will be a vital factor, as the variables and characteristics of particular settings and communities will always be dynamic and idiosyncratic.

This paper will outline some of the issues involved in evaluating multi-level interventions generally, and some of the particular features of sexual assault that increase the challenges involved in planning these types of evaluations. Designs that can generate contextualised information will offer the best understanding of the effectiveness of an intervention. Evaluators should consider how, within a dynamic environment, the most relevant information can be identified, captured and translated to research and practice. Evaluations of multi-level, social change programs in sexual assault prevention will need to adopt a philosophy of providing the most useful information for stakeholders working to untangle the web of factors that contribute to sexual violence.

Framing sexual assault as a public health issue


Sexual assault and other forms of gender-based violence can be considered both a public health and a social justice issue, with one in five Australian women experiencing sexual violence since the age of 15 (Australian Bureau of Statistics, 2006). The consequences of sexual violence include various short- and long-term health impacts for victim/survivors, such as reproductive and sexual health problems as well as mental health issues like depression and post-traumatic stress disorder (Boyd, 2011).

In terms of sexual and other types of violence, primary prevention interventions are those that aim to prevent violence before it occurs, whereas secondary and tertiary interventions are strategies utilised at later stages when there is a clear risk of violence, or violence has already occurred.

Primary prevention work is often conceptualised within the social-ecological model (Bronfenbrenner, 1977), which frames violence as the product of multiple interacting components and social factors. An ecological understanding of violence against women, including sexual violence, conceptualises the causes of violence as interactions between different factors at a personal, community and social level (often described as microsystem, mesosystem and macrosystem) (Quadara & Wall, 2012). The influence of these factors extends beyond the individual to also shape collective behaviours. The ecological approach identifies the important principle that there is no single cause but a range of issues interacting together and creating risk factors at multiple levels. Each level influences aspects of individual and community life that impact on the social problem under analysis (Trickett, 2009). These various influences become so entrenched that they form part of the culture or tradition of the community and become social norms or behaviour shapers (Davis et al., 2006). These include norms about women and gender roles, the value placed on masculine power, tolerance of violence, constructs of masculinity, and notions of family privacy (Davis et al., 2006).

[Figure: the social-ecological model of violence against women. Source: Quadara & Wall, 2012.]

Primary prevention interventions using the ecological model

Effective primary prevention requires various reinforcing strategies across different levels of influence, delivered across a range of different settings (VicHealth, 2007). The factors that influence violence, and sexual violence in particular, transcend government portfolios and the public and private sectors (VicHealth, 2007). They require social, attitudinal and behavioural change on a large scale, and can only really be addressed by the combined efforts of various stakeholders, which may include different sectors, services and policy drivers.

Primary prevention in public health is often targeted at the community level of the social-ecological model because this is where cultural change can influence individual behaviour as well as harness the community's knowledge and resources. At this level, interventions can be better adapted to a community's specific needs, increasing support and participation (Davis et al., 2006).

Evaluating ecological-based interventions

Measuring these types of interventions is difficult because it requires defining concepts such as "community" and "culture", as well as understanding how organisations and institutions at each level intersect in relation to the desired change (Schensul, 2009). Good evaluation considers data in terms of the experiences and insights of participants, victim/survivors, perpetrators, educators and those delivering services (Davis et al., 2006). In addition, the desired outcomes in these types of change interventions are difficult to measure because such outcomes may be difficult to define or even identify, and may operate simultaneously across levels. There may also be unforeseen "ripple" effects, either positive or negative, which contribute to the impact of the intervention and therefore should be included in evaluation (Trickett, 2009).

Ideas about "knowledge" and "reality" and what these terms mean for different stakeholders, will impact on what is perceived as useful information for an evaluation. Divergent views on what constitutes "valuable" information can detract from the issue of identifying and capturing the information that will best tell stakeholders about the program's effectiveness and impacts. It is therefore important that clear values and objectives are agreed upon at the development phase of an evaluation. Where there are competing ideas between stakeholders, it is important that the evaluator is able to provide clear guidance on what kind of evaluation is best to determine the value of an intervention within all the constraints around it. The approach of identifying what stakeholders need from evaluation has been termed "utilization-focused evaluation" and defined as "evaluation done for and with specific intended primary users, for specific, intended uses" (Patton, 2008, p. 37). By focusing on the purpose of the evaluation, some of the many challenges may be overcome, albeit with clear direction from stakeholders.

Types of evaluation

The type of information produced by evaluation should reflect that its main purpose is to determine the merit, worth or value of something (Patton, 2008). Although, like research more generally, evaluation adds to the knowledge base, evaluation leads more directly to value judgements (Fitzpatrick, Sanders, & Worthen, 2011).

There are different types of evaluation, but each essentially has the purpose of measuring success or otherwise, to enable a weighing up of merit and, in the process, garner knowledge of successful strategies. Evaluation is not solely an exercise in assessing program outcomes, as it can contribute further to understanding of a particular phenomenon. A brief outline of some types of evaluation follows - there are many variations and subsets within these main purposes, but essentially evaluation aims for these functions:

  • Formative evaluations - Often conducted during the development of a program, these can provide direction for planning and aim to provide information for improvement. This allows assessment of program requirements against the goals and priorities of the program, with the main aim usually being to improve an existing program (Stufflebeam & Shinkfield, 2007).
  • Process evaluation - The aim is to identify the successful (or unsuccessful) elements of a program and identify areas of change that can enhance outcomes. Process evaluations can also monitor program delivery, assessing the context of delivery, the types of users of the program and the methods of delivery (Fitzpatrick et al., 2011).
  • Impact or summative evaluation - Considers the outcomes of a settled program and can assist with decisions about whether to continue funding, terminate or disseminate a program (Owen, 2006).

Evaluation of discrete, contained programs or interventions can more easily assess specific outcomes, and can be used to support funding applications or extensions, ensure support from stakeholders, provide feedback to staff involved, and develop the knowledge base further (Parker & Lamont, 2010). However, evaluation of major, multi-level interventions that aim for long-term change becomes much more complicated, and is more likely to need a combination of methodological approaches, potentially requiring greater creativity in design.

The influence of values and research paradigms in evaluation


The world can be viewed in different ways depending on the perspective of each person viewing it. Each person's interest in an evaluation will depend upon their perspective and the emphasis placed on particular values. In research, these perspectives are sometimes termed paradigms. A particular paradigm will guide the way research is carried out and what research questions are asked (Denzin & Lincoln, 2005). What this means is that what questions are asked, and how they are asked, can result in different answers. The intrinsic character of the knowledge sought will be different. No answer carries greater truth than another; they simply reveal different types of information.

The values emphasised in an evaluation question will influence how the evaluation design is formulated. The selection of tools, methodologies and frameworks used to evaluate a particular intervention or program can vary depending on the perspective that is valued by the evaluator and by the relevant stakeholders. This is particularly important when working within a multi-level and interactive prevention environment because evaluating "success" or "failure" may require different types of information. By designing evaluations with the aim of capturing information that will be most useful to stakeholders, different approaches can contribute relevant information to provide an effective evaluation.

Debate about knowledge paradigms

There is a history of debate about whether some kinds of knowledge are better than others (Denzin & Lincoln, 2005; Patton, 2008). Historically, quantitative measures were favoured in evaluation studies and seen as somehow "better" (Patton, 2008). This was based on arguments that experimental data were of higher rigour and validity and as such were more robust and precise (Tomison, 2000). By another view, however, valid and reliable information is that which is useful and meets the needs of stakeholders (Fitzpatrick et al., 2011).

Although quantitative measures have value - depending on the objectives of the evaluation - for a primary prevention intervention in sexual assault this type of data is often not available or not relevant. For example, a prevention intervention should result in fewer sexual assaults, but this is not data that can be easily measured. Measurement would have to take place over a lengthy time period. In addition, sexual assaults are vastly underreported and often only disclosed informally to friends or family. How can such a reduction be counted? A more appropriate measure would be to identify positive changes in the social structures that enable sexual assault to occur. These social structures relate to gender equity and improved couple relationships - things that are difficult to measure in a precise, standardised fashion.

Debate about tools and methodology

A randomised control trial is a trial that measures the differences between participants who have received an intervention and those who haven't. The randomisation factor refers to the random allocation of participants to the control and intervention groups to ensure systematic bias is removed (Parker, 2010). This allows end differences between those in the control group and those in the intervention group to be linked to the intervention rather than to any difference in the selection of the groups (Tomison, 2000). Randomised control trials have often been referred to as the "gold standard" for evaluation and research (Parker, 2010; Patton, 2008) because they are able to attribute cause and effect to aspects of a program and have arguably removed the possibility of bias because of the randomisation factor (Tomison, 2000).
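
To make the randomisation step concrete, the following is a minimal, hypothetical sketch (in Python, with invented participant IDs) of random allocation to intervention and control groups. It illustrates the logic only and is not drawn from the sources cited.

```python
import random

def randomise_allocation(participants, seed=None):
    """Randomly split participant IDs into intervention and control groups.

    Random allocation removes systematic selection bias: any pre-existing
    difference between the groups is due to chance, not to how
    participants were chosen.
    """
    rng = random.Random(seed)      # seeded so the allocation is reproducible
    shuffled = list(participants)  # copy so the input list is untouched
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return {"intervention": shuffled[:midpoint], "control": shuffled[midpoint:]}

# Hypothetical usage with eight invented participant IDs.
groups = randomise_allocation(["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"], seed=42)
print(groups["intervention"], groups["control"])
```

Because allocation is random, end differences between the groups can be attributed to the intervention rather than to how the groups were selected.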

By contrast, a non-experimental method of evaluating may observe how staff or participants operate throughout and after the intervention and what this means for the intervention (Patton, 2008). They may include a case study or interviews with participants. Non-experimental methods may try to capture experiences, feelings or changes that have taken place in communities targeted by the intervention, for example, whether there are more respectful interactions between people or whether changes to organisational policies have impacted on gender inequality. These approaches simply produce different information.

Applying randomised control methods to a community-level prevention intervention could become prohibitively costly due to the need to replicate findings. Where control groups are used, the scope of "control" and the potential number of interacting factors can expand with the size and number of different participant populations being investigated (Nastasi & Hitchcock, 2009). An example may be investigating whether changes at one level (service provision) impacted positively or negatively on users of a service (another level); this will also require investigating whether the intervention was correctly implemented, as this could affect the service provided. Programs may be implemented differently across different sites, with each having different resources (Nastasi & Hitchcock, 2009). It is clear that neat, linear data won't capture these differences.

For the investigation of social change, qualitative tools and methodologies can give rich information and contextualised understanding about the experiences of participants and of impacts at different levels (Nastasi & Hitchcock, 2009; Parker, 2010; Tomison, 2000). Qualitative data can inform understanding about why and how policy outcomes were or were not achieved, as well as identifying unintended outcomes that may or may not be beneficial (DeGroff & Cargo, 2009). A qualitative methodology may give information about the nature of changes produced by an intervention or where improvements to implementation are needed, for example, making an intervention culturally relevant. Qualitative methods in evaluation and research should be considered to have a different purpose, in a sense, from quantitative tools and, if necessary, be used in combination with them. At front of mind should be the aim to produce evidence about a particular phenomenon based on examination of specific contexts, individuals or communities (Brantlinger, Jimenez, Klingner, Pugach, & Richardson, 2005), however this may be achieved.

To measure change at an individual level of sexual assault prevention would still require attention to other levels because of the interactive nature of the causes of sexual assault. Changes in outcomes will be affected by more than one level (Nastasi & Hitchcock, 2009). Analysing interactive factors is also required for understanding when desired change reaches a collective level and is sustainable. Where evaluation is looking at issues of social change that are not easily definable concepts for measurement, maximising the capabilities of all evaluation methodology therefore makes sense.

The complexity of evaluating multi-level prevention programs in sexual assault


The causes of sexual violence have been found to be complex and multi-dimensional (VicHealth, 2007). In acknowledgement of this, many prevention frameworks are now being based on "ecological" models of understanding violence and sexual violence (Carmody, 2009).

The ecological model and sexual assault

Ecological systems should be acknowledged as nested, dynamic systems whose levels interact (Schensul, 2009). A multi-level intervention is directed at change in social and cultural institutions and social relationships, as well as individual behaviour (Schensul, 2009). The aim is to introduce change at strategically selected levels in order to move the identified system towards a particular goal, with the likelihood that there will be interrelating effects among the levels (Schensul, 2009).

For sexual assault prevention, these causes include factors such as gender inequality in access to power and resources, cultural constraints around gender roles, as well as a cultural acceptance of violence as a construct of masculinity (Davis et al., 2006; VicHealth, 2007). The complex and interactive nature of such social causes of violence creates real challenges for evaluation purposes as will be outlined further below.

Evaluating multiple levels of impact

Where an intervention targets various causative levels of a problem there must be measurement at each level in order to assess what effect each influence and interaction has had on the problem being considered (Nastasi & Hitchcock, 2009). There will be stakeholders at each level and issues requiring negotiation between them. These include:

  • reaching agreement on what indicators can demonstrate success;
  • management and accountability of a project that spans different domains; and
  • timeframes for concluding whether change has been sustainable.

All of these are challenging areas. To complicate the picture further, many community services are now contracted out to agencies by government. These include family violence and sexual assault services. This type of contractual arrangement adds another layer of complexity to evaluation design. Services may be delivered by several organisations with different organisational cultures, values, resources and priorities. This adds impetus to the use of participatory, interactive approaches to ensure stakeholders at the service level are involved and able to inform the design of evaluations in a way that is relevant to each organisation.

In addition, there will be a range of different actions and interactions that combine to produce the story of an intervention in a community-level context (Trickett, 2009). Relevant data will include the experiences of those participating in and delivering the intervention. Ripple effects of an intervention - the unintended consequences of an intervention, which may be positive or negative - should be followed up (Trickett, 2009) as without including some kind of assessment of these ripple effects, information necessary to understand the entire impact of an intervention will be missed.

The lack of examples of multi-level evaluations in sexual assault prevention could be framed as an opportunity for conceptual development in the evaluation of sexual assault prevention. It may be worth turning to other areas of public health to consider aspects of what has worked in those fields. For example, successful HIV prevention efforts utilised structures and networks within gay communities to orient behavioural changes towards safer sex practices, and so may have relevance for the design of sexual assault prevention (Casey & Lindhorst, 2009). Evidence of effectiveness in large, multi-level interventions may require more than a single analysis, perhaps a variety of measures incorporated to build the picture (Quinlan, Kane, & Trochim, 2008). This acknowledgement lends itself to serious consideration of research designs that better reflect the contribution that evaluations can make to the sexual violence prevention knowledge base, and that serve stakeholders with more useful information.

Community-level interventions

Community-level intervention is a term commonly used in primary prevention. Community level refers to the meso or organisational level of the ecological model (Bronfenbrenner, 1977) and could be identified as different settings for delivery, for example, a particular neighbourhood or vulnerable population. There is no set definition of "community" in this context other than the ecological setting identified for the particular strategy, such as a specific geographic location. For sexual assault prevention, a community-level intervention may, for example, feature schools as a setting for delivery of awareness raising and attitude change programs (Flood, Fergus, & Heenan, 2009). Other community settings could be local government interventions, sporting clubs or faith centres. Examples from the field of domestic violence can be found in the VicHealth "Respect, Responsibility and Equality Program", which supported five projects aimed at preventing violence against women (VicHealth, 2012). One of these targeted first-time fathers and the workers who interacted with them, as this is recognised as a time of increased risk for violence in families. The "community" included the new fathers, health workers and professionals involved in service delivery (Whitehorse Community Health Centre, 2008).

There is benefit in aiming prevention interventions at this level because, according to the ecological model, the community level is where formal and informal social structures have the most influence on an individual (DeGue et al., 2012). If social change results from the interaction of various agents (Schensul, 2009), then the community level is one level of influence that can have a further impact on individual behaviour. This is particularly important for sexual assault, as communities can be targeted with prevention according to risk, for example, young people at a critical age of sexual development. This enables prevention efforts to be timely and have the most impact in preventing sexual assault (Carmody, 2009).

Another benefit of targeting the community level with interventions is that this is where local knowledge, culture and resources can be incorporated into the program and evaluation so as to enhance acceptance and support for the evaluation. An evaluation involving young people or particular cultural groups needs to have resonance with participants in order to enhance understanding of it and maximise acceptability and participation. The importance of a democratic and inclusive approach to evaluation is considered in the section below on participatory evaluation.

Community-level strategies may include those that aim to change the norms of that particular community, including, for example, risk factors and policies (DeGue et al., 2012). A school-based sexual assault prevention intervention, for instance, should consider the community as comprising different levels that include students, parents, teachers and other stakeholders in the school. Defining a particular community can be a complicating factor in evaluation design, as different meanings can be given to constructs such as "community" and how far it extends (Armstrong & Francis, 2003).

The importance of participatory evaluation

Participatory evaluation engages and incorporates stakeholders such as program users, service implementers and program funders in the design, formulation and undertaking of the evaluation (Squirrel, 2012). The focus is on improving the intervention rather than measuring data. Participatory methods are likely to be an essential ingredient in evaluation design and delivery for complex primary prevention, especially when targeting particular communities or settings. There are various reasons why this is so, including that a participatory approach necessarily develops deeper understanding of the issues being analysed by the evaluation. In addition, participatory approaches incorporate local knowledge and context, thereby making an evaluation more likely to be accepted, supported and relevant to those participating, including the targets of change.

Focusing on what is hoped to be achieved by the evaluation from the design stage can help clarify the goals and objectives of the intervention so that the information produced is useful, focused and informative (Patton, 2008). It also ensures accountability across the stakeholder groups by clearly defining roles and contributions that will be made to the evaluation (Squirrel, 2012). When changes result from interacting interventions and networks, it can be difficult for evaluators to pin down the individual contributions and impacts made by each network or program (DeGroff & Cargo, 2009). Added benefits of including stakeholders in the evaluation's design and planning phase will include their commitment, ownership and an improved understanding of the evaluation's aims and the challenges.

Participatory evaluations look for solutions and improvements rather than attributing failure to particular program aspects (Squirrel, 2012). The aim is to include the various perspectives and needs of the multiple stakeholders affected by and affecting the prevention of sexual assault. Stakeholders may bring criminological, law enforcement, policy, victim/survivor, service delivery and education perspectives, among others. Such is the nature of the multiple social aspects that enable sexual violence. The dynamic nature of participatory evaluation (e.g., roles may change during the evaluation) means that such evaluations should be flexible and inclusive, able to build on initial findings and experiences.

The principles around participatory evaluation incorporate the need to build capacity in the participants involved in the evaluation to enable an inclusive and democratic approach. This is often termed "empowerment evaluation" and aims to increase the likelihood that programs will achieve and sustain results by building capacity and empowering participants to identify and contribute relevant information about an intervention. In doing so, they provide insights and wisdom to inform improvement to policies and programs (Fetterman & Wandersman, 2005).

Features of sexual assault and considerations for evaluation


In acknowledging the importance of an inclusive approach to evaluation, it is also necessary to recognise specific characteristics of sexual assault that have implications for evaluation.

Lack of guidance on promising prevention strategies

There is a lack of detailed, evidence-based knowledge of the societal-level risk factors for sexual assault perpetration, as well as a lack of theoretical guidance in identifying successful community and society level strategies and policies (Casey & Lindhorst, 2009; DeGue et al., 2012). Although risk factors for sexual assault perpetration have been identified, such as peer and organisational cultures and other social norms that support violence (VicHealth, 2007), there is little confirmed evidence of how best to challenge these and activate long-term change to decrease sexual assault.

Although promising practices in prevention are emerging, there has been little confirmation of effectiveness by way of detailed evaluation. One example is bystander prevention training, where the focus is on encouraging and enabling bystanders (often men) to intervene in situations where the behaviour or attitudes of others are supportive of violence or disrespectful to women and can be challenged. Bystander prevention has been a feature of anti-bullying efforts and has been shown to have promise as a strategy (Banyard, Moynihan, & Plante, 2007). However, there has not been much comprehensive evaluation of its impact on the actual reduction of sexual assault (Banyard et al., 2007; Casey & Lindhorst, 2009; VicHealth, 2011).

Trauma and privacy

There are issues of trauma and privacy involved in sexual assault that complicate data collection. Primarily, there is difficulty collecting reliable, numerical data due to the silence surrounding sexual assault. Evaluating sexual assault prevention in a multi-level setting requires engagement with various stakeholders, and an ability to establish relationships tailored to the characteristics of each group involved, for example, traumatised victims. Data collection, and the type of data used, must be ethical and sensitive to the needs of victim/survivors.

Sexual assault is known to be a vastly underreported crime. There are various reasons for this, but stigma, shame and trauma contribute to the silence. This means that measuring the impact of community-level interventions in a purely quantitative sense is likely to be difficult and different to other public health fields that have accurate administrative data such as hospital records or relevant crime data (DeGue et al., 2012). The sensitive nature of the information and the need to respect the privacy of victim/survivors means that this type of administrative data is unlikely to be available and there may be ethical issues to collecting meaningful sexual assault data.

The privacy attached to self-report or confidential survey data on sexual assault may provide a more accessible means for victim/survivors to indicate their experiences and provide a clearer picture of sexual assault prevention. But in order to reach the populations targeted by primary prevention, potentially a large group numerically, self-report or survey data may be expensive to gather. In addition to this, the data will need to be collected over a lengthy period in order to ascertain social change and sustainability over time. Repeated surveying, aside from the expense, may necessarily be invasive of privacy as it will have to track respondents over time. A self-report survey method would require difficult decisions about how long to evaluate for, how to track respondents over time and when a social change can be considered complete.

Ethics

There are also ethical issues in attempting experimental design in sexual assault prevention programs, in that the nature of controlled trials requires that comparison or control groups are established and not given the particular treatment or intervention being studied. Assigning people to potential risk or other negative outcomes by excluding them from the prevention program raises ethical concerns that are difficult to justify. An example could be where prevention programs in schools are being compared with a control group school with no prevention program. Should a sexual assault issue arise at the control school, it would be considered unethical not to respond to a request for assistance to implement prevention or culture change programs, given the potential for harm and the prevalence of violence in the community.

Funding

As with many other challenging social problems, sexual assault prevention projects tend to receive sporadic, project-based funding from governments. Funding may be insufficient to build the most effective evaluation methods into the project. Where there are major social-change objectives, issues of sustainability of change may be neglected if project funding cycles do not allow for analysis of change being maintained over time. The need to create a culture of evaluation is clearly important in order to prioritise effective evaluation. Obviously, evaluation can offer long-term efficiencies for funders by increasing the knowledge base around what works.

Similarities with other fields

Sexual violence prevention can be compared with other fields of prevention where social behaviour change is required and where there may be social stigma attached that requires dealing with sensitive populations. Bullying, domestic violence, substance abuse prevention, and HIV prevention are examples of fields that could be examined to inform evaluation design as they feature similar issues to sexual violence, with individual behaviour change requiring input from the broader community and social levels.

For example, an Australian evaluation of contracted domestic violence services (Carson, Chung, & Day, 2009) provided discussion of issues that may be similar to those found in sexual assault prevention evaluations, as there are many parallels with service provision in the sexual assault and domestic violence sectors. Both are prevalent social issues and both feature traumatised victims who are less likely to report incidents than victims of other crimes, making administrative data, such as police statistics, less useful as a measure. In addition, their study found that there was very little standardisation between the different organisations providing the services. This was despite the contracting government department having funding and reporting requirements in place to overcome this factor. Locational differences in service delivery, and the lack of consistency between sites and organisations, demonstrate an additional complexity in evaluating social care interventions where contracting to non-government organisations is common (Carson et al., 2009).

A few ideas from other fields:

  • A report on evaluating agency-contracted domestic violence programs emphasised the need for "realist evaluation" that incorporates research paradigms that may usually be in opposition. The complexity of administrative, delivery and outcome issues, and the lack of standardisation between the delivering agencies meant a multi-stranded, tailored approach was necessary (Carson et al., 2009).
  • The US Department of Health and Human Services Centers for Disease Control and Prevention (2004) noted in their Physical Activity Evaluation Handbook that an evaluation is not worth doing if the results won't be utilised. Their framework requires evaluations for increasing physical activity to consider multiple, interacting levels and to utilise multiple methods, including document analysis and observation. This publication encourages "thinking outside the box".
  • HIV primary prevention literature discusses research approaches relevant to evaluation in a community context. An example is an "ecology of lives" perspective. The use of an empowerment paradigm is emphasised as a collaborative model and supports "paradigm stretching" activities which include the active involvement of stakeholders in collaboration. From this conceptualisation, evaluation data may be different stories that provide insight on different aspects of an evaluation, for example, experiences of participants, how better to expand collaboration, or the best attributes for those delivering an intervention (Trickett, 2002).

Identifying indicators in complex interventions - the challenge of what to measure

As noted by Schensul (2009), prevention is a challenging concept to identify and measure over time. One of the specific issues with evaluating the success of prevention measures in the sexual assault field is the problem of how to select indicators. Indicators are the factors that can provide a basis of judgement about whether an intervention or program has been successful or not, and what impacts if any the intervention has had. As social change occurs gradually and over time, indicators of effectiveness can be hard to pin down in complex systems that are adapting and changing according to actions and changes within that ecological setting (Parsons, 2007).

At the heart of the issue of evaluating social interventions across disciplines and government departments will be identifying indicators that can be utilised across all areas involved in the intervention, and that can measure what may seem vague social constructs. The term "social indicators" is sometimes used for the measures of change in social phenomena (Armstrong & Francis, 2003). However, these are likely to be variable markers, so the selection of what will be measured must reflect the policy goals of stakeholders and must clarify the criteria used to select such indicators. Social indicators are usually relevant to the measurement of large, social goals (Armstrong & Francis, 2003). They should be drawn from careful consideration of the program objectives and incorporated into an evaluation from the evaluation planning stage. By clearly articulating objectives and indicators in the planning stage, there is then opportunity for stakeholder agreement on what evaluation methods can provide the best measure of these objectives.

Interventions that aim to change social contexts and structures in order to then effect more individual-level change feature elements targeted at the various levels involved. For example, changing the behaviour of individuals across a broad context may require changes at a societal level. This reflects the enormity of the task of selecting indicators. Social norms refer to the structures in a society that enable particular attitudes and behaviours to exist. The behaviour of an individual is influenced by the way in which their peers and broader society will perceive that behaviour. For example, if gender inequality is accepted as socially "normal", then individuals will be influenced by that perception and continue to perpetuate unequal gender behaviour.

If the aim or objective of a program is to measure whether a particular social norm has changed - for example, whether there is now greater gender equality in the workplace - it will require consideration of how this will be measured. For example, will it be by considering equality of pay data, numbers of women on boards of companies or lower reporting of sexual harassment? Although each of these could be measured numerically and collected with some form of administrative data, will any of these factors, if used as indicators, really provide effective data to record contextual change that will impact on the overall goal of gender equality?
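
As a concrete illustration of how such numerical indicators might be computed, here is a small, hypothetical Python sketch using two common headline measures: the median gender pay gap and the proportion of board seats held by women. The formulas are standard; the figures are invented, and, as the paragraph above notes, such numbers alone may not capture contextual change.

```python
def gender_pay_gap(median_male_pay, median_female_pay):
    """Median gender pay gap as a percentage of median male pay.

    A positive value means women are paid less at the median.
    """
    return (median_male_pay - median_female_pay) / median_male_pay * 100

def board_representation(women_on_board, board_size):
    """Proportion of board seats held by women, as a percentage."""
    return women_on_board / board_size * 100

# Entirely hypothetical figures, for illustration only.
print(f"Pay gap: {gender_pay_gap(90_000, 75_500):.1f}%")            # 16.1%
print(f"Board representation: {board_representation(3, 10):.1f}%")  # 30.0%
```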

Another issue that should be noted by policy communities and funders is the question of when a primary prevention program is considered "successful". This is a key factor in establishing the type of data to be collected and the methods to be used in the evaluation. If a primary prevention program is successful, for example, a quantitative measure may see a change in some administrative data such as reports to sexual assault services but another less definable measure may be "improved safety". The concept of success in these examples will be affected by what data are collected. In this sense, "success" can be interpreted in different ways.

This is why embedding discussion between stakeholders throughout the formative, planning stage of the evaluation is vital. This should include what the objectives will be, the methodology used and the change theory or program logic that has been utilised. In other words, clarifying why you are doing what you are doing in the evaluation and how this is related to expected outcomes.1 Without agreement on the concepts that will be appropriate to measure objectives, the impacts may not be meaningful to all involved. By considering early on the evaluation methods that can be used to measure these specific objectives, the evaluation will be strengthened by agreement and understanding of what is being measured, how, and what outcomes will constitute success.

Success may also be something other than the original objectives of an intervention, such as the "ripple effects" discussed earlier. The discovery and follow-up of these inadvertent consequences can inform efforts that focus on intentional change (Trickett, 2009). Such unintentional ripple effects may be difficult to measure statistically, but are important for contextual information.

Evaluating the evaluation data - establishing what is relevant

In evaluating multi-level interventions there is the potential for variation across sites or levels in so far as context and conditions, participants and implementers are involved (Nastasi & Hitchcock, 2009). As noted earlier, this type of variation makes evaluations such as randomised controlled trials difficult, as validity of the data will be reduced by the variable conditions and possible difficulty in reproducing results across sites and settings (Nastasi & Hitchcock, 2009).

Armstrong and Francis (2003) referred to various criteria for evaluating the quality of indicators selected in an evaluation. These include robustness, validity, relevance, appropriateness and manageability, and are outlined briefly below.

  • Robustness or reliability - How stable, reliable and reproducible the indicator is over time, thereby enabling measurement of trends.

  • Validity - Whether the chosen indicator suitably measures the concept. In sexual assault prevention, concepts may be vague and it can be difficult to identify how to validly measure something like "increased gender equality" or a "healthy relationship". A combination of measures may be required to gauge whether there has been an improvement, and the data could be influenced by dynamic or specific events.

  • Relevance - The indicator must reflect the objectives of the intervention that are being measured.

  • Appropriateness - The ability of the indicator to reflect a balanced view of what may be a combination of complex factors impinging on the policy factors or objectives emphasised as requiring change. For example, if an intervention is to change social structures that support gender discrimination in a community, consideration must be given to what is an appropriate indicator of this factor.

  • Manageability - The availability of the data and of the infrastructure to collect and report data.

Ultimately, deciding on the indicators to use to evaluate sexual assault prevention efforts will need to consider what the information obtained is to be used for, and what will constitute "success" based on the objectives of the stakeholders. For example, in sexual assault prevention, will success constitute a change in attitudes towards gender role stereotypes that is sustained for 12 months or over 5 years? Or will it constitute a decrease in social acceptability of male sexual aggression across a particular community, even though this may not have been originally a key objective of an intervention? The emphasis placed on particular values by stakeholders will be the key to deciding appropriateness of indicators, but of course different stakeholders may weigh different aspects according to their own perspective.

In addition, the selection of indicators should consider the issue of sustainability of change - how long after an intervention can or should the effects of change be measured? Should evaluation continue over time to ascertain effectiveness? And if the conditions under which the original changes occurred alter through naturally dynamic processes, how can this be evaluated (Schensul, 2009)? All of these questions must be considered in the design and development of an evaluation, adding to the complexity of measuring data that may not fit neatly into the constraints of numerical measurement.

1 For discussion on the need to incorporate program logic into operational change, see What is Effective Primary Prevention in Sexual Assault? Translating the Evidence for Action (Quadara & Wall, 2012).

Considering evaluation methods


This section looks briefly at just a few qualitative methods of evaluation that may be useful in investigating sexual assault prevention initiatives. Often they can be used in conjunction with quantitative data where it is available and can offer value to the evaluation. Qualitative tools can enable the researcher to gain insight into the contextual and exploratory aspects of an evaluation. This includes the explanatory factors and motivations behind a particular aspect of an intervention, identifying the factors contributing to good or bad service delivery, and the experiences of participants. This is consistent with a pragmatic approach to evaluation that would consider, for example, whom the intervention impacts, how, and what combination of factors must be present for certain effects. Realist evaluation operates on the principle of context as an influence on implementation and outcomes (Ranmuthgala et al., 2011). The tools exemplified here can also identify actions that may improve delivery of a program or policy within a particular context. This is not an exhaustive list, but provides a small sample of techniques that can be considered in selecting the best method or combination for the intervention being studied.

Most Significant Change technique

This is a story-based technique of collecting data that involves stakeholders identifying significant program outcomes to date, collecting those stories from participants and evaluating them systematically in order to use the information to improve programming and judge outcomes. In the case of sexual assault prevention, it may, for example, include data from participants in a bystander intervention or a social norms campaign. It is therefore a technique that is incorporated into the intervention and works throughout its implementation. The aim of Most Significant Change is to focus the programming towards explicitly valued directions and provide information concerning unexpected or most successful outcomes (Dart & Davies, 2003), for example, a change in attitudes to gender roles in an educational setting. This technique is a form of participatory research that can contribute to an understanding of impacts when interventions don't have predictable outcomes, as is likely for social change objectives. It can be used to capture unexpected outcomes from open-ended discussions on various aspects of the change process (Shaw, Greene, & Mark, 2006). It is also possible to combine the Most Significant Change technique with other methods of evaluation in order to complement the information - for example, quantitative evidence of outcomes over a particular area, evidence of "average" experiences across participants, as well as exceptional outcomes (Dart & Davies, 2003). This technique has not been used in sexual assault evaluation to date, but could be useful in prevention efforts in particular settings, for example, to maximise prevention efforts in education by identifying the features of program delivery that are most beneficial. This type of information will be invaluable for honing program design and maximising impact.

An issue that may arise with using this technique is the time taken to craft and undertake the story gathering process - it is unlikely to be suitable for short timeframes. Another problem in a sexual assault situation could be confidentiality or privacy issues. This will depend on the nature of the intervention that is being evaluated and it will require an assessment of whether it is suitable for the particular project at hand.

Case study

A case study is a detailed investigation that looks at data in the format of specific units of analysis or cases (Patton, 2002). Rather than being specifically designed to test something, a case study collates data collected from relevant sources, for example, from interviewing participants in a program. This is then presented in a format that enables illustration of a particular phenomenon or process. The use of data from a range of sources enables a comprehensive picture to be drawn of people's experiences (Patton, 2002). The case study is particularly valuable for demonstrating difficult-to-measure concepts, such as "improved relationships between couples", which would be applicable in sexual assault evaluations. A case study could look at changes in a relationship for a particular couple, or various couples, who had taken part in a program or intervention. A case study would be able to document and illustrate abstract changes such as "improved communication". Where these types of fluid constructs make up the objectives of a program, consultation with experts in the particular field (e.g., relationship counsellors or other appropriate practitioners) may provide insight into appropriate measures of change to operationalise these objectives. The flexible nature of the case study means that it is ideal for investigating issues in depth and through a contextual and potentially causal lens (Hartley, 2004).

One of the key criticisms of case studies is a lack of ability to detect patterns and compare findings across a large population, due to the focused nature of the data (Scriven, 1991); however, this is not the purpose of case studies. Indeed, case studies can be used along with quantitative data collection in order to provide a large population picture combined with deeper, contextual information that can complement a broad intervention such as a public health prevention measure.

Evaluating with visual media

Although some visual media methods may still be considered unconventional, in the sense that they don't provide precise data on outcomes, their use should be explored in order to encourage acceptance among stakeholders and funders for the types of evaluations to which they are most suited.

Visual media includes the use of photography and specific techniques such as photo-interviewing and photo-elicitation. Photo-interviewing is where photos are shown to participants with the aim of getting them to talk about specific issues related to the photograph. Photo-elicitation is the use of particular photos in order to provoke a response (Hurworth, 2004) and to study, for example, attitudes and beliefs about a subject.

One of the key benefits of visual techniques in evaluating sexual assault prevention is their ability to deal with more abstract concepts (Hurworth, Clark, Martin, & Thomsen, 2005). As pointed out earlier, sexual assault aetiology has indicated that sexual assault is facilitated by various social conditions that are by nature abstract and fluid concepts - potentially, changes in these conditions could be documented visually. For example, photographic data could visually capture the effects of policy changes in an organisation, or could be used to elicit discussion responses in relation to improving employee conduct or organisational culture.

The use of visual media can be valuable for carrying out a participatory evaluation, as it secures the perspectives and knowledge of those often most affected by and most involved in the phenomenon being evaluated. Visual media provide a way to encourage participation by engaging participants to photograph aspects of the change being studied where appropriate (Allen, 2009), or by using pictures to elicit responses and generate discussion about particular elements of an intervention.

Examining the effectiveness of visual methods as an evaluation tool for issues around sexuality is difficult because the subject has been socially designated as taboo and private (Allen, 2009). The use of photography may capture some of the more esoteric aspects of attitudes that may otherwise not be discussed because of the nature of the topic (Allen, 2009). In addition, it can provide a pathway to participation for stakeholders who may be otherwise excluded by issues of language barriers or cultural barriers where discussion about sexuality is prohibited (Allen, 2009).

Evaluating interventions with the use of visual media will not negate the need for a theoretical evaluation question to be the focus of the evaluation (Hurworth, 2004). Although further questions may evolve as data are collected, a fundamental question must be at the basis of the development of evaluation objectives and indicators. This will usually be influenced by the paradigm being used to consider the intervention.

A key criticism of the use of visual media for evaluation is that the selection of pictures and their interpretation may be subjective. Of course, this criticism could also potentially be attributed to the selection of statistics studied when using quantitative methods (Hurworth, 2004). A way to overcome the potential for bias is to triangulate the visual material with other data such as interviews, surveys or focus groups. It is important that selection bias, where intentional, is acknowledged, as limitations should be in any method. This should be standard for any study. The importance of rigour and credibility is the same as for any data collection method (Hurworth, 2004) and should be considered as part of evaluation planning.

Document analysis

Documents can be a source of factual information to consider against the objectives of the intervention. This can include brochures or information used to publicise the program, diaries, notebooks or drawings undertaken by participants, or any other relevant documentary information. Document analysis has the advantage of being an efficient and usually relatively low-cost method, and won't display the reactivity that may be experienced with other forms of research that involve people and emotions (Bowen, 2009). This may be important in analysing sexual assault prevention issues, where the trauma and emotions of those involved are potentially highly charged.

Many sexual assaults take place in private settings and involve personal and relationship issues. This may make it difficult to obtain certain information from participants by interview, due to reluctance to discuss material of a sexual nature. The form documentary analysis takes will depend on the objectives of the program: for example, particular media could be analysed over time to gauge change in social attitudes to gender, or organisational policies could be analysed over time to gauge change in gender discrimination in workplaces. The relevance of this method for evaluating sexual assault prevention will be determined very much by the context and objectives of the program being assessed.
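To illustrate how such documentary change over time might be quantified, the sketch below counts, per year, media articles in which coded themes appear. It is a minimal, hypothetical example: the themes, keywords and article snippets are illustrative placeholders, not a validated coding frame, and in practice coding would be developed and tested against the program's objectives.

```python
# A minimal, hypothetical sketch of keyword-based coding of media articles
# over time. The coding frame and corpus below are placeholders only.
from collections import defaultdict

# Hypothetical coding frame: theme -> indicative keywords.
CODING_FRAME = {
    "victim_blaming": ["provoked", "asking for it", "led him on"],
    "perpetrator_accountability": ["charged", "convicted", "held responsible"],
}

# Hypothetical corpus: (year, article text) pairs standing in for a real
# sampling frame of media reports.
articles = [
    (2010, "The court heard she had provoked the attack."),
    (2010, "Police said the victim had been drinking."),
    (2012, "The offender was convicted and held responsible."),
]

def code_articles(corpus, frame):
    """Count, per year, the articles in which each theme's keywords appear."""
    counts = defaultdict(lambda: defaultdict(int))
    for year, text in corpus:
        lowered = text.lower()
        for theme, keywords in frame.items():
            if any(keyword in lowered for keyword in keywords):
                counts[year][theme] += 1
    return counts

# Print theme counts by year to show the direction of change over time.
for year, themes in sorted(code_articles(articles, CODING_FRAME).items()):
    print(year, dict(themes))
```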

Document analysis may also be a useful method where historical or background information is relevant, such as understanding why a program developed in a certain way (Bowen, 2009). This is important where changes over time are being analysed, as would be necessary in a multi-level social change intervention where the sustainability of change is being considered.

Document analysis can also provide a means of triangulating other data sources, such as interviews, by providing a corroborating source of evidence that increases the credibility of the overall data on a particular issue (Bowen, 2009).

Selection bias in the sourcing of documents is possible. For example, specific media reports could be reviewed to assess changes in media representations of sexual assault as indicative of change in community attitudes to sexual violence, but such results may be shaped by which reports are selected. Arguably, an evaluator could select documents that are most likely to show an effect one way or another. This can be ameliorated by clearly justifying the relevance of the documents used to the research problem and purpose, as well as by a robust strategy for selecting samples, such as the sketch that follows.
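One such strategy, sketched below under stated assumptions, is to draw a fixed-size random sample of documents within each year, so that the evaluator cannot cherry-pick reports that favour a particular conclusion. The document identifiers and sample sizes are hypothetical.

```python
# A minimal sketch of stratified random document sampling, assuming a
# catalogued sampling frame of media reports grouped by publication year.
import random

def stratified_sample(doc_ids_by_year, per_year, seed=42):
    """Draw up to `per_year` document IDs at random from each year (stratum)."""
    rng = random.Random(seed)  # a fixed seed keeps the selection auditable
    sample = {}
    for year, ids in sorted(doc_ids_by_year.items()):
        sample[year] = rng.sample(ids, min(per_year, len(ids)))
    return sample

# Hypothetical sampling frame: identifiers for all catalogued reports per year.
frame = {
    2010: [f"2010-{i:03d}" for i in range(120)],
    2012: [f"2012-{i:03d}" for i in range(95)],
}
print(stratified_sample(frame, per_year=10))
```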

Conclusion

Evaluation in the context of sexual violence primary prevention at a multi-level scale is a relatively new science in Australia, and there is little guidance on the best approach to these types of evaluation questions. Across the literature, there is a gap in the evaluation of community- and societal-level strategies to prevent sexual violence (Casey & Lindhorst, 2009; DeGue et al., 2012; VicHealth, 2007). Although rigorous evaluation is clearly a key step in identifying the best interventions for the prevention of sexual assault, finding and representing the most appropriate data is a major challenge. The hidden nature of sexual assault means that administrative data, such as crime statistics, that may be useful in analysing other social issues are of little benefit here, because they do not accurately reflect the actual incidence of sexual violence. To assess strategies implemented at the broadest level of the ecological strata, such as specific policy changes, measuring "success" will require negotiation between stakeholders to agree on the objectives that will determine effectiveness, as there is unlikely to be a straightforward measure of a policy's impact on social change.

Understand the complexity of sexual assault prevention

Acknowledging the complex social factors at work, at multiple levels of influence, in facilitating sexual violence means that the most effective prevention interventions may be the most difficult to evaluate, because each of these factors must be identified and targeted across different levels. Ideally, the effects of an intervention would be measured over time, probably beyond the actual life of the program, in order to demonstrate its sustainability and value (Nastasi & Hitchcock, 2009). This is difficult in practice, usually because program funding, and therefore evaluation funding, is rarely ongoing; an evaluation with a fixed end date will not necessarily be able to answer questions of sustainability.

Consider indicators

Possibly the most difficult aspect of the evaluation task in sexual assault prevention is identifying and defining indicators that suitably demonstrate and measure change objectives, and developing from them a meaningful picture of effectiveness or intervention success (Nastasi & Hitchcock, 2009). The resulting picture of success may be vastly different from what was originally envisaged by the funders or policy developers: original objectives may not have been met, yet some unanticipated benefit to program participants may have been identified. A further challenge arises where interventions occur over multiple sites, as there may be differences in the quality of, or resources for, delivery. Thought is required as to how such differences can be accounted for when comparing findings. Some types of qualitative research would be appropriate for providing context on delivery in different settings.

Create acceptance for innovative evaluation design

For progress in multi-level evaluations for sexual assault prevention, stakeholders and funders need to understand the need for, and utility of, evaluations that use innovative techniques. In other words, acceptance may require the evaluator to communicate to stakeholders why a particular way of evaluating is needed. This engagement may be made more difficult by the competing perspectives of the many stakeholders involved in any multi-level intervention: those financing an intervention may prefer the certainty of facts and figures, while others may be open to different techniques. Some methods that would obtain specific, evidential data - such as case studies or participant observation - may be difficult in an environment where privacy is a major concern, as it will be in many aspects of sexual assault intervention. The chosen evaluation design will need to be clearly communicated to stakeholders so that they understand its benefits; ideally, they will participate in and contribute to its development. This process requires a thorough understanding of the change phenomena being studied, the levels of influence that affect individual behaviour, and the change dynamics at work.

Incorporating innovation

There is little guidance on how to apply a sexual assault prevention evaluation framework to real-life settings and multi-faceted interventions, but clearly any decision about evaluation design will require consideration of how the data will ultimately be used and whom it will benefit. What is apparent is that the complexity faced by evaluators in this sensitive field of prevention will require an openness to innovative research design and method selection, to ensure that the design selected is beneficial and applicable to the situation at hand. As the science of primary prevention develops, evaluation is also evolving to incorporate different units of analysis that more appropriately measure and reflect the effectiveness of interventions at a community and societal level. Acceptance of this evolution, and support for innovative evaluation at the policy and funding level, are required so that progress towards effective assessment of primary prevention can generate important information for the evidence base in the prevention of sexual assault.

To develop a stronger evidence base for sexual assault prevention, evaluation design needs to incorporate an understanding of the complexity of social change. Evaluation will require a flexible and multi-stranded approach in order to produce valid, useful information for stakeholders. This includes the consideration and acceptance of innovative methods and techniques that best suit the evaluation question at hand.

References

  • Allen, L. (2009). Snapped: Researching the sexual cultures of schools using visual methods. International Journal of Qualitative Studies in Education, 22(5), 549-561.
  • Armstrong, A., & Francis, R. (2003). Social indicators - promises and problems: A critical review. Evaluation Journal of Australia, 3(1), 17-26.
  • Australian Bureau of Statistics. (2006). Personal Safety Survey Australia, Reissue (cat. no. 4906.0). Canberra: Australian Bureau of Statistics.
  • Banyard, V. L., Moynihan, M. M., & Plante, E. G. (2007). Bystander education: Bringing a broader community perspective to sexual violence prevention. Journal of Community Psychology, 35, 61-79.
  • Bowen, G. A. (2009). Document analysis as a qualitative research method. Qualitative Research Journal, 9(2), 27-40.
  • Boyd, C. (2011). The impacts of sexual assault on women (ACSSA Resource Sheet). Melbourne: Australian Institute of Family Studies.
  • Brantlinger, E., Jimenez, R., Klingner, J., Pugach, M., & Richardson, V. (2005). Qualitative studies in special education. Exceptional Children, 71(2), 195-207.
  • Bronfenbrenner, U. (1977). Toward an experimental ecology of human development. American Psychologist, 32(7), 513-531.
  • Carmody, M. (2009). Conceptualising the prevention of sexual assault and the role of education (ACSSA Issues No. 10). Melbourne: Australian Institute of Family Studies.
  • Carson, E., Chung, D., & Day, A. (2009). Evaluating contracted domestic violence programs. Evaluation Journal of Australia, 9(1), 10-19.
  • Casey, E., & Lindhorst, T. (2009). Toward a multi-level, ecological approach to the primary prevention of sexual assault: Prevention in peer and community contexts. Trauma, Violence and Abuse, 10(2), 91-114.
  • Dart, J., & Davies, R. (2003). A dialogical, story-based evaluation tool: The Most Significant Change technique. American Journal of Evaluation, 24(2), 137-155.
  • Davis, R., Fujie Parks, L., & Cohen, L. (2006). Sexual violence and the spectrum of prevention: Towards a community solution. Enola, PA: National Sexual Violence Resource Centre.
  • DeGroff, A., & Cargo, M. (2009). Policy implementation: Implications for evaluation. In J. M. Ottoson & P. Hawe (Eds.), Knowledge utilisation, diffusion, implementation, transfer and translation: Implications for evaluation. New Directions for Evaluation, 124, 47-67.
  • DeGue, S., Holt, M., Massetti, G., Matjasko, J., Tharp, A., & Valle, L. (2012). Looking ahead toward community-level strategies to prevent sexual violence. Journal of Women's Health, 21(1), 1-3.
  • Denzin, N., & Lincoln, Y. (Eds.). (2005). The Sage handbook of qualitative research. Thousand Oaks: Sage.
  • Doucet, S., Letourneau, N., & Stoppard, J. (2010). Contemporary paradigms for research related to women's mental health. Health Care for Women International, 31, 296-312.
  • Evans, S., Krogh, C., & Carmody, M. (2009). "Time to get cracking": The challenge of developing best practice in Australian sexual assault prevention education (ACSSA Issues No. 11). Melbourne: Australian Institute of Family Studies.
  • Fetterman, D., & Wandersman, A. (Eds.). (2005). Empowerment evaluation principles in practice. New York: Guildford Press.
  • Fitzpatrick, J., Sanders, J., & Worthen, B. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Upper Saddle River, New Jersey: Pearson.
  • Flood, M., Fergus, L., & Heenan, M. (2009). Respectful relationships education: Violence prevention and respectful relationships in Victorian schools. Melbourne: Department of Education and Early Childhood Development.
  • Hartley, J. (2004). Case study research. In C. Cassell & G. Symon (Eds.), Essential guide to qualitative methods in organizational research (pp. 323-333). London: Sage.
  • Hurworth, R. (2004). The use of the visual medium for program evaluation. In C. Pole (Ed.), Seeing is believing? Approaches to visual research (pp. 163-180). Oxford: Elsevier Ltd.
  • Hurworth, R., Clark, E., Martin, J., & Thomsen. (2005). The use of photo-interviewing: Three examples from health evaluation and research. Evaluation Journal of Australia, 4(new series)(1 & 2), 52-62.
  • Nastasi, B. K., & Hitchcock, J. (2009). Challenges of evaluating multilevel interventions. American Journal of Community Psychology, 43, 360-376.
  • Owen, J. (2006). Program evaluation forms and approaches (3rd ed.). Crows Nest NSW: Allen & Unwin.
  • Parker, R. (2010). Evaluation in family support services (AFRC Issues Paper 6). Melbourne: Australian Institute of Family Studies.
  • Parker, R., & Lamont, A. (2010). Evaluating programs (CAFCA Resource Sheet). Melbourne: Australian Institute of Family Studies.
  • Parsons, B. (2007). The state of methods and tools for social systems change. American Journal of Community Psychology, 39, 405-409.
  • Patton, M. (2002). Qualitative research and evaluation methods (3rd ed.). Thousand Oaks: Sage Publications.
  • Patton, M. (2008). Utilization-Focused Evaluation (4th ed.). Thousand Oaks: Sage Publications.
  • Ponterotto, J. (2005). Qualitative research in counseling psychology: A primer on research paradigms and philosophy of science. Journal of Counseling Psychology, 52(2), 126-136.
  • Quadara, A., & Wall, L. (2012). What is effective primary prevention in sexual assault? Translating the evidence for action (ACSSA Wrap No. 11). Melbourne: Australian Institute of Family Studies.
  • Quinlan, K., Kane, M., & Trochim, W. (2008). Evaluation of large research initiatives: Outcomes, challenges and methodological considerations. Reforming the evaluation of research. New Directions for Evaluation, 118, 61-72.
  • Ranmuthugala, G., Cunningham, F., Plumb, J., Long, J., Georgiou, A., Westbrook, J., & Braithwaite, J. (2011). A realist evaluation of the role of communities of practice in changing healthcare practice. Implementation Science, 6(49).
  • Schensul, J. (2009). Community, culture and sustainability in a multilevel dynamic systems intervention science. American Journal of Community Psychology, 43, 241-256.
  • Scriven, M. (1991). Evaluation thesaurus. California: Sage.
  • Shaw, I., Greene, J., & Mark, M. (2006). The Sage handbook of evaluation. London: Sage.
  • Squirrel, G. (2012). Evaluation in action - Theory and practice for effective evaluation. Lyme Regis: Russell House Publishing.
  • Stufflebeam, D., & Shinkfield, A. (2007). Evaluation, theory, models and applications. San Francisco, CA: Jossey-Bass.
  • Tomison, A. (2000). Evaluating child abuse prevention programs (NCPC Issues No. 12). Melbourne: Australian Institute of Family Studies.
  • Trickett, E. J. (2002). Context, culture and collaboration in AIDS intervention: Ecological ideas for enhancing community impact. The Journal of Primary Prevention, 23(2), 157-174.
  • Trickett, E. J. (2009). Multilevel community-based culturally situated interventions and community impact: An ecological perspective. American Journal of Community Psychology, 43, 257-266.
  • US Department of Health and Human Services, Centers for Disease Control and Prevention. (2004). Physical activity evaluation handbook. Washington, DC: US Department of Health and Human Services.
  • VicHealth. (2007). Preventing violence before it occurs: A framework and background paper to guide the primary prevention of violence against women in Victoria. Melbourne: VicHealth.
  • VicHealth. (2011). Review of bystander approaches in support of preventing violence against women. Melbourne: Victorian Health Promotion Foundation.
  • VicHealth. (2012). The Respect Responsibility and Equality Program: A summary report. Melbourne: VicHealth.
  • Whitehorse Community Health Centre. (2008). Baby makes 3 - Final report. Melbourne: Whitehorse Community Health Centre.
Acknowledgements

Liz Wall is a Research Officer with the Australian Centre for the Study of Sexual Assault.


Thanks to Robyn Parker, Anastasia Powell, Pauline Kenny and Kelly Hand for contributing their expertise for the purposes of feedback, guidance and review of earlier drafts of this publication.

Citation

Wall, L. (2013). Issues in evaluation of complex social change programs for sexual assault prevention (ACSSA Issues No. 14). Melbourne: Australian Centre for the Study of Sexual Assault, Australian Institute of Family Studies.

ISBN

978-1-922038-26-5
