Using a survey to collect data for evaluation: A guide for beginners
About this resource
This practice guide is for people working in child and family support services who are considering using a survey to collect data for program evaluation but have limited experience or training in surveys. It provides general guidance on using surveys, including when and how you might use one and the steps involved in conducting one. The resource also briefly outlines the different methods of data collection that can be used to evaluate a program. This is a companion resource to How to write a survey questionnaire for evaluation: a guide for beginners.
This resource summarises some of the basic steps needed to undertake a survey for evaluation. These include:
- deciding on an evaluation design
- choosing a survey method (and knowing why you are choosing it)
- understanding who your survey sample will be and adopting a sampling technique
- designing a survey questionnaire (i.e. writing your survey questions)
- administering the questionnaire
- analysing the data.
What is program evaluation?
Before considering how a survey can be used to collect data for an evaluation, it is helpful to briefly review what an ‘evaluation’ is and how you might collect data for evaluation.
In general terms, evaluation is the systematic process of assessing what you do and how you do it. The aim is usually to determine the ‘merit, worth, or value’ of a program (Hepler et al., 2010). A program in this context is a relatively standardised set of activities that aim to achieve a specific result. For more information on the definition of, and steps in, program evaluation, refer to the AIFS practice guide: What is evaluation?
Program evaluations commonly focus on program processes and/or program outcomes. ‘Process’ evaluations generally consider the implementation and service delivery aspects of a program, for example, if it was carried out as intended. ‘Outcome’ or ‘impact’ evaluations assess whether a program has met its goals (i.e. achieved its ‘outcomes’) and can also assess how well (or to what degree) and for whom a program has met its goals. Outcome evaluations can also look at whether a program, policy or service has produced any unintended outcomes or changes. You can undertake process and outcome evaluations as separate activities, but it is often beneficial to evaluate program processes and program outcomes at the same time.
Deciding on your evaluation design
A good evaluation starts with a detailed evaluation plan. Before you begin designing your evaluation, you first need to identify what it is that you want to know about your program or service, whether you currently collect any data and, if so, where you get that data. It is also important to consider whether it is appropriate to ask specific questions or speak to specific groups, or if the best course of action is to do an evaluation at all. For more detail about planning your evaluation, refer to the AIFS resource: Planning an evaluation.
Evaluation design builds on your evaluation plan. It involves decisions about suitable evaluation practices and data collection methods and the timing of and responsibilities for completing your evaluation. No single evaluation design applies across all evaluations because design choices are made in the context of the program delivery, existing data/information and the available resources. For more information on how to design your evaluation, refer to the AIFS practice guide: Evaluation design.
The other key step is to consider the ethics of your evaluation plan and design. This helps ensure your evaluation will respect the rights of participants, minimise any potential harm and will be of benefit or value to participants in the evaluation and the recipients of a program or service. More information on ethics is available in the AIFS resource: Ethics in evaluation.
Identify your data sources
There are two broad categories of data that can be used in program evaluation: primary and secondary sources.
- Primary data sources are those collected by the evaluators for the purpose of the evaluation. This category includes survey data, interview and consultation data and observation data.
- Secondary data sources are those that have been collected earlier – often for a different purpose. This includes existing program data such as program reports, recruitment information, attendance logs, performance monitoring and meeting minutes. Secondary data sources can also include other publicly available data such as government records, population data (e.g. from the census or local authorities), and data from national surveys or data sources such as the Australian Early Development Index (AEDI).
Although one category of data isn’t necessarily better than the other, it is important to consider the availability and possible value of secondary data before deciding to collect new data. For example, using secondary data may help you avoid over-burdening evaluation participants with unnecessary research. It can also potentially save on resources, time and effort. Furthermore, some publicly available data may have already been analysed and prepared for use.
Even when collecting primary data, it can be useful to use secondary data for comparison or context. For more information on sources of data, refer to the AIFS practice guide: Data sources in needs assessments.
Decide on your approach to evaluation
For evaluations where you have decided to collect your own data, another important decision is whether you need qualitative data, quantitative data, or a combination of both. This will guide subsequent decisions about how you will collect and analyse your data and how you will report your evaluation results. When choosing your evaluation approach, consider answering the following guiding questions.
- What type of data will best answer your evaluation questions?
- What time and resources will the method require?
- Do your staff have the skills to collect and/or analyse qualitative/quantitative data? If not, is your organisation prepared to invest in staff training to build this capability?
Qualitative and quantitative methods in evaluation
Qualitative methods focus on gathering non-numerical information (such as written, spoken or behavioural responses) from data sources such as interviews, group discussions or focus groups, documents, observation, videos or audio recordings and case reports. Surveys can also be used to collect qualitative data through open-ended questions (Newcomer et al., 2015).
Qualitative methods can provide richness and depth and have the potential to capture a range of perspectives on and experiences of your program (Austin & Sutton, 2014). Qualitative approaches help you understand the ‘why’ of your program (e.g. why are people attending or not attending a program, why is it working or not working). They are also useful when looking for novel insights about your program that you may not think to ask about, want the evaluation insights to be led by the people you are surveying, or are most interested in the lived experience of those involved in the program.
Quantitative methods collect numerical data and/or involve analysis of numerical data (including previous survey data or administrative data). Quantitative data is particularly useful when you have a specific question about your program that can be quantified or represented using numerical values or comparisons across groups of people. Quantitative data helps understand the ‘what’ and ‘how many/how often’ questions of your program (e.g. what is your gender, how many people attended a training?).
More information on qualitative and quantitative methods can be accessed from the AIFS practice guide: Planning an evaluation.
A good program evaluation design usually incorporates both qualitative and quantitative methods. This mixed methods approach involves gathering different types of evidence from various sources and can improve the depth, scope and validity of the findings. That said, a good rule of thumb is to use the simplest methods possible that will provide the data you need to reliably answer your evaluation questions. There is no right or wrong method. Your goal should be to obtain trustworthy, authentic and credible evaluation findings.
When choosing your data collection method for evaluation consider the following:
- The purpose of the evaluation.
- Who will use the evaluation results.
- Your respondents (i.e. your evaluation participants).
- The availability of resources to conduct the evaluation (e.g. existing data, funding, time and expertise).
- The type of data you require to answer your questions.
- The pros and cons of each method.
The following table outlines various data collection methods and their pros and cons. The remainder of this guide will focus on the design and use of surveys for program evaluation.
Source: Adapted from Data Collection for Program Evaluation. Northwest Center for Public Health Practice (nwcphp.org).
What is a survey?
A survey is a method used in research and evaluation. It usually involves one or more data collection tools that ask a series of set questions (written or verbal) to collect information directly from program participants in a systematic and standardised way. ‘Survey’ is the broad term for the whole process: participant recruitment, data collection (usually using a questionnaire), analysis and reporting. People sometimes use the term ‘survey’ to refer to the set of questions asked during a survey, but this data collection tool is more properly known as a ‘questionnaire’ or ‘survey questionnaire’ and is only one part of a survey.
When to use a survey to collect program evaluation data?
The following table provides a general overview about when a survey (and data collection using a survey questionnaire) might be a good choice in an evaluation.
| A survey is a good information gathering tool when you: | A survey may not be your best choice for information gathering if you: |
| --- | --- |
| need generalisable and statistically valid information | do not yet know what information is going to be most important to collect for your evaluation |
| have the capability and resources to manage and analyse the data you collect | want to interact with your respondents to probe, clarify or provide them with more information |
| have contact information for the people you are seeking information from and believe they will be willing to complete the survey, or when you have good reason to believe you can get a large enough sample to answer the evaluation question | want information from people who have limited literacy skills, including children |
| want to collect a broad range of data from a large group of people or several groups of people | need in-depth information about people’s experiences or perspectives |
| want to measure changes in views (e.g. satisfaction) and self-reported actions or states of being over time | only need to gather information from a small number of people |
| are gathering data about knowledge, beliefs, attitudes and behaviours of participants | want to establish a causal link between program interventions and outcomes |
| want to ask the same set of questions (to collect consistent program data) of a group of people (e.g. by using validated tools) | |
| want to collect demographic and descriptive data | |
| want to be able to detect frequencies, trends or patterns in a population | |
It is usually not possible to survey an entire population: its members can be geographically dispersed, hard to reach or unwilling to participate. Even in a relatively small population (such as all the participants in a local program for children and families), it is unlikely you will ever be able to recruit everyone (i.e. the whole population) to complete a survey, especially if you have limited time or resources. Instead, most surveys carried out for an evaluation only ever aim to reach a subset of the population, known as a sample.
When choosing a subset (sample) of your population for your survey, you need to pay attention to the characteristics and size of the group of people you recruit. Ideally, your sample will be representative of the population you are interested in, i.e. the people in the sample will reflect the characteristics and diversity of the wider population. This allows you to have greater confidence that your findings are generalisable to the population of interest. When your sample is not representative, the views or characteristics of a particular group may be over-emphasised in your findings while those of other groups are under-represented or missed altogether. Although it is not always possible to have a fully representative sample – because it can be difficult to recruit a group of people who exactly match the diversity of the larger population – it is important to be aware of how representative of your population your sample is.
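As a quick illustration of this idea, the short Python sketch below compares the make-up of a survey sample with the known make-up of the population. All figures are invented for illustration; in practice you would use your own program and population data.

```python
# Hypothetical figures: all program clients, by location, and the subset
# of clients who actually completed the survey.
population = {"metropolitan": 800, "regional": 200}
sample = {"metropolitan": 90, "regional": 10}

pop_total = sum(population.values())
sample_total = sum(sample.values())

# Compare each group's share of the population with its share of the sample.
for group in population:
    pop_share = population[group] / pop_total
    sample_share = sample[group] / sample_total
    print(f"{group}: {pop_share:.0%} of population, {sample_share:.0%} of sample")
# Here regional clients make up 20% of the population but only 10% of the
# sample, so their views would be under-represented in the findings.
```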
In the section below we briefly describe some techniques for building a survey sample. Note however, that some sampling techniques can be complex and require advanced knowledge of statistical methods. We cannot cover the complexity of sampling in this section, so here we simply introduce some key concepts. For more information on the techniques used in survey sampling, refer to our Further Reading section.
Choose your sampling technique
There are many different sampling techniques, but most fall into two broad types:
- Probability sampling (Statistical Sampling/Proportional Sampling)
- Non-probability sampling (Non-statistical Sampling).
The choice of sampling technique will largely depend on your evaluation questions, the population of interest (e.g. all the participants in a program), resources (including time, funding and level of technical skill) and the desired level of accuracy. When choosing a sampling technique, it is important to understand that creating a representative sample, particularly using probability sampling techniques, commonly requires specialist skills or training. Hence, it should only be considered if you have access to someone with those skills or you can invest in building that capability.
Probability sampling gives every member of the population an equal chance (probability) of being selected for the sample, using random or quasi-random selection techniques. There are four main probability sampling techniques – simple random sampling, systematic sampling, stratified sampling and cluster sampling. The first two are relatively straightforward to implement, while the latter two generally require more specialist skills. For each of these techniques you will need access to a list of all the members of your population.
- Simple random sampling is a common probability sampling technique. For example, a local child and family service might have 1,000 clients and want to survey 100 of them. Simple random sampling could involve assigning each participant a number from one to 1,000 and using a random number generator to pick 100 numbers. The people assigned those numbers would be included in the sample.
- Systematic sampling allows you to draw your random sample from the target population at a regular interval. Using the example above, the 1,000 program participants could be listed in alphabetical order. A systematic sampling technique would involve selecting participants at regular intervals from the list – in this case every 10th person (i.e. your total population divided by your sample size: 1,000 ÷ 100 = 10). To start the intervals, you would randomly pick one number between one and 10 (e.g. number 4). From number 4 onwards, you would then select every 10th person on the list (4, 14, 24, 34, and so on) until you have a sample of 100 people.
- Stratified sampling requires you to group your target population into subgroups (or strata) based on common characteristics such as gender, age or religion. You then randomly select your sample from each subgroup. Let’s again use the example above. Of the 1,000 child and family service clients, 800 live in metropolitan areas and 200 live in regional areas. To ensure both locations are proportionally represented in our sample, we randomly select 10% from each group, i.e. 80 (800 × 0.1) metropolitan and 20 (200 × 0.1) regional clients, to get our total sample of 100 (1,000 × 0.1).
- Cluster sampling also involves dividing the population into subgroups, but each subgroup should have similar characteristics to the whole population. Instead of sampling individuals from each subgroup, you randomly select entire subgroups. So, in our example, let’s assume the child and family service has branches in 10 local government areas (LGAs) across Australia, and you already know these LGAs have similar characteristics. You may not have the capacity to travel to all 10 branches to collect your data, so you could randomly select three branches (your clusters) and survey all the clients at those branches.
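To make the arithmetic in these examples concrete, here is a minimal Python sketch of the first three techniques, using the guide's hypothetical figures (1,000 clients, a sample of 100, an 800/200 metropolitan/regional split). This is an illustration only; real sampling would start from your actual client list.

```python
import random

# Hypothetical client list: IDs 1..1000, where IDs 1-800 are metropolitan
# clients and IDs 801-1000 are regional clients.
clients = list(range(1, 1001))
sample_size = 100

# Simple random sampling: every client has an equal chance of selection.
simple_sample = random.sample(clients, sample_size)

# Systematic sampling: interval = population size / sample size = 10;
# pick a random starting point, then take every 10th client.
interval = len(clients) // sample_size
start = random.randint(0, interval - 1)
systematic_sample = clients[start::interval]

# Stratified sampling: draw 10% from each location subgroup so the sample
# keeps the population's 800/200 metropolitan/regional split.
metro, regional = clients[:800], clients[800:]
stratified_sample = random.sample(metro, 80) + random.sample(regional, 20)

print(len(simple_sample), len(systematic_sample), len(stratified_sample))
# Each technique yields a sample of 100 clients.
```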
Probability sampling allows you to draw a representative sample from your population and reduce sampling bias. This means your results will be more generalisable to your whole population. However, probability sampling can be time-consuming and costly, and it often requires specialist knowledge of the appropriate statistical techniques. It also requires a ‘sampling frame’ – a complete list of the members of your population – along with knowledge of the population’s most relevant characteristics (so that you know whether your sample is representative) (Nikolopoulou, 2023).
Non-probability sampling does not use random sampling techniques, so some members of the population may have a greater chance of being selected. There are many sampling techniques that come under this broad heading.
- Opportunistic or convenience sampling is a common type of non-probability sampling where people are selected based on convenience and their availability. For example, you want to understand the experiences of families participating in a supported playgroup. To do this, you could approach all the families who attend the playgroup on a given day or week and ask if they’d like to participate in the evaluation.
- Self-selection involves participants volunteering to be in a study rather than being directly approached by the researcher or evaluator. For example, you may want to assess how a local parenting program is working for participants. To recruit your survey sample, you may create a poster containing all the necessary information about the survey, asking for volunteers, and place it at the entrance of the service centre. You then include anyone from the program who volunteers for the study in your sample.
- Purposive sampling is a form of sampling where the evaluator uses their expertise to select a sample that is most useful to the purposes of the evaluation. For example, you may want to know more about the experiences of children with disability in your program. In this case, you would select families who have a child with disability for your sample to gather data about their experience of the program.
- Snowball sampling is a form of sampling in which a sample is recruited through referrals. This technique is often used with hard-to-reach populations. For example, say you want to understand the service needs of young people who engage in problem gambling. It is difficult to get a list of all such young people, so you could build your sample through snowball sampling: identify one young person who agrees to participate in the study and have them put you in contact with other young people they know in the area. These new participants may in turn put you in contact with other potentially eligible participants, and so on.
Non-probability sampling techniques are usually faster and less resource intensive than probability sampling and are often more feasible when the full population you are interested in is unknown. However, there is also a much higher potential for bias when using non-probability sampling and the results are usually less generalisable to a wider population. How much this matters depends on the nature of your evaluation questions and the population you wish to survey (Pace, 2021). For more on different sampling techniques, refer to our Further Reading section.
Deciding what questions to ask in a survey
Another important step in your program evaluation is to decide on the type of questions you will include in your data collection tool: the survey questionnaire. We cover the key points of designing a survey questionnaire in the resource How to write a survey questionnaire for evaluation: a guide for beginners, but the text below summarises some of what you need to know when deciding to collect data with a survey questionnaire.
As with the survey as a whole, it is important to be clear about what information you want the survey questionnaire to collect and what you want to measure. Surveys in evaluation (and thus the questions in a survey questionnaire) are typically used to understand the current status or ‘situation’ of a program and/or to assess a program outcome. If you ask the same questions at different time points (such as before and after an intervention), they can also be used to help measure the changes that may be related to a program or service. Read more about pre-test and post-test evaluation designs.
Once you are clear about what you want to measure with your questionnaire, you need to determine whether there is already a suitable questionnaire available or if you will need to develop your own. We recommend that, where possible, you use existing standardised questions and/or validated instruments in your questionnaire. A validated instrument refers to a question (or more often a set of questions) that has been rigorously tested to establish that it is reliable and measures what it is meant to measure. More specifically, validated tools are:
- reliable (they can be relied on to be consistent when used in the same conditions, e.g. a program that is run multiple times)
- valid (will measure what it is supposed to measure)
- standardised (all people who answer the survey are asked the same thing in the same order and with the same response options).
Where probability sampling methods are also used, the findings from a validated instrument may also be generalisable. This means they may be compared to data from other groups who have completed the questionnaire or measure, e.g. the general population, other people receiving a different program or the same program at a different time.
A key benefit of using a validated instrument is that it allows evaluators to compare results across groups, including with earlier evaluation studies and normative data for the population.
For details on choosing validated instruments, refer to the AIFS resource: How to choose an outcomes measurement tool.
Tips on adapting validated measures
When using standardised questionnaires or validated measures it is advisable not to remove questions, change the original question wording or alter the order of the questions. Doing so can mean the measure loses the key benefits of reliability, validity and standardisation. Despite this, there are times when adapting a measure is desirable, for example where there are language or cultural factors that need to be accommodated. In some cases, questionnaire developers may have already adapted and tested their measure. Therefore, if you are considering adapting a measure, you should first contact the questionnaire developer to see what already exists.
However, standardised survey questions or questionnaires may not always be readily available for your purposes (e.g. because you cannot find one that addresses your specific context or evaluation questions). If you can’t find an existing standardised item, you may need to write your own survey questions. Although this means you can tailor questions to your specific needs, writing survey questions can be a complex and time-consuming task and may require technical skills in survey questionnaire development to ensure your questions are reliable and valid.
How to administer a survey questionnaire
You can administer a survey questionnaire in several different ways. Table 3 sets out some pros and cons for some different approaches. Your decision about which approach to use will be informed by factors such as the characteristics, size, geographic location and spread of your population; the purpose of the survey; your timeline; and resourcing (i.e. budget and staff capacity). Some risks, like response bias, may need to be considered in relation to all survey administration, regardless of the approach.
Approaches include:

- hand-delivered or direct questionnaires (i.e. giving people a hardcopy of a survey to complete at the time)
- telephone survey interviews
- face-to-face survey interviews.
Things to consider before you send out your survey questionnaire
In addition to deciding how you will administer your survey and the questions you will ask, there are some other practical things you should consider:
- Be conscious of potential distractions for those completing the survey, particularly when your respondents are parents, carers or children. For more information about collecting data from parents and children, refer to this AIFS resource: Involving children in evaluation.
- Ensure you provide a clear explanation about the purpose of the survey.
- Ensure the privacy of the respondents is protected and that information collected from the participants will not be shared with a third party.
- Ensure you can securely store any data you collect, e.g. hard copies in a locked filing cabinet and online data in secure, password protected folders.
- Indicate how long the survey is likely to take. It is best to do this by giving an indicative time range, e.g. 20–30 minutes.
- Provide contact details in case respondents want to ask questions.
- If the questionnaire is administered in person or on the telephone, ask the participants if they have any questions before you start.
- If the questionnaire will be delivered in person or on the telephone, you may need to provide training to the people administering the survey to ensure they do not lead the respondents to answer in particular ways. You may also need to provide training so interviewers can respond appropriately to questions or participant distress. This can be an added cost but also means that respondent questions or distress can be addressed at the time.
- If the questionnaire will be administered in person, ensure you take appropriate steps to keep both your interviewers and participants safe.
- Do not include any questions in the survey that you cannot confidently explain if someone asks why they are being asked the question.
How to increase your survey response rate
It can sometimes be difficult to collect enough survey responses or involve an adequate proportion of your target population to get meaningful results. Below are some different strategies you can use to increase your response rate and strengthen your evaluation findings.
- Clearly communicate the purpose of the questionnaire, how you plan to use the data, and how the results will be reported. It is also crucial to explain why participation in the survey is important, why respondents’ views are valuable and what, if any, benefits they or the community may receive following the research.
- Advertise the purpose of your survey and the details of who will administer it in advance.
- Ensure participant anonymity (where possible).
- Clarify the length of the survey and approximate time it will take to complete.
- Follow-up with your participants by giving them a reminder to complete your survey (if the survey has been sent out by mail or electronically).
- Provide modest incentives for people who complete your survey, for example, entry to a prize draw or supermarket gift vouchers. If you are delivering a group program, you might host a pizza party with time included to complete a written questionnaire. The key is to ensure the participants feel valued for their contribution and time. However, it is important not to offer something so valuable that a participant might find it difficult to turn down, regardless of whether they want to be involved in the evaluation. Your context will help you judge the right balance. For details on incentives of research participants, refer to NHMRC guidelines: Payment of participants in research: information for researchers, HRECs and other ethics review bodies.
Analysis of survey data
Now that you have collected your program data, the next step is to analyse your data so you can draw conclusions or represent what you’ve learned about your program.
Analysis of survey data requires at least basic statistical skills and knowledge. Data analysis is a large and complex topic that cannot be covered in depth, or even summarised very well, in a beginner’s guide like this. Online resources like How to analyse survey data: Best practices, helpful tips, and our favourite tools can help you with some of the basic concepts of survey data analysis. More complex surveys will often require specialist skills in statistical analysis. However, simple surveys, which are often used in small evaluations of programs for families and children, may often only require relatively simple ‘descriptive analysis’. This is a commonly used form of analysis that includes calculations of frequency (e.g. how many respondents provided a particular answer to a question) and/or calculations of a single number to represent a whole set of responses (i.e. indicators of central tendency).
These can include:
- the mean (the average)
- the median (the middle value of a set of numbers)
- the mode (the value that appears most often in a set of numbers or responses).
For example, simple descriptive analysis might be used to understand who accessed a program, how long they engaged with the service (or how many sessions they attended) and whether they were satisfied with the program. You could use descriptive analysis techniques to show the percentages of clients in various target groups, the average number of sessions they attended or the average time they stayed with the program, and the most common satisfaction ratings for the program. Although there are several paid software packages that can be used for data analysis, for simple descriptive analysis of small groups, software such as Microsoft Excel and Power BI can be used. These programs can also be used to create tables and figures to present your results.
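The three measures of central tendency listed above can also be computed with Python's standard library. The sketch below uses invented session-attendance and satisfaction data purely for illustration:

```python
from statistics import mean, median, mode

# Hypothetical data: sessions attended by 10 program clients, and their
# satisfaction ratings on a 1-5 scale.
sessions = [3, 5, 6, 6, 8, 4, 6, 2, 7, 5]
satisfaction = [4, 5, 4, 3, 4, 5, 4, 4, 2, 5]

print("Average sessions attended (mean):", mean(sessions))        # 5.2
print("Middle value of sessions (median):", median(sessions))     # 5.5
print("Most common satisfaction rating (mode):", mode(satisfaction))  # 4

# A simple frequency count: how many respondents gave each rating.
frequencies = {r: satisfaction.count(r) for r in sorted(set(satisfaction))}
print(frequencies)  # {2: 1, 3: 1, 4: 5, 5: 3}
```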
Surveys are an important method for collecting data for program evaluations. Once you have decided on using a survey method it is crucial to ensure:
- the method is fit for purpose
- the data can be ethically collected
- the data can be collected safely
- the data can be analysed.
Each survey data collection method has advantages and disadvantages. Being aware of these can help you decide what method is most appropriate and ensure that you get the data you need to deliver a meaningful program evaluation.
The following list provides further reading on the topics covered in this resource. It should be used as a supplement to the reference list below.
Australian Institute of Family Studies (AIFS). (2023). Involving children in evaluation (EES Practical Guide). Melbourne: Child Family Community Australia, Australian Institute of Family Studies.
Australian Institute of Family Studies (AIFS) (2019). Identifying evaluation questions (EES Practical Guide). Melbourne: Child Family Community Australia, Australian Institute of Family Studies.
Australian Institute of Family Studies (AIFS). How to choose an outcomes measurement tool (EES Short Article). Melbourne: Child Family Community Australia, Australian Institute of Family Studies.
Australian Institute of Family Studies (AIFS) (2016). How to develop a program logic for planning and evaluation (EES Practical Guide). Melbourne: Child Family Community Australia, Australian Institute of Family Studies.
BetterEvaluation provides a range of resources for different survey methods, including Designing the Face-to-Face Survey and Collecting Evaluation Data: Surveys, as well as advice for planning and designing surveys.
NSW Government. (2021). TEI guide to developing surveys. Sydney: Department of Communities and Justice, Targeted Early Intervention program.
Trochim, W. M. K. (n.d.). Research Methods Knowledge Base: A comprehensive web-based textbook of social research methods. Available at https://conjointly.com/kb/survey-research/
Austin, Z., & Sutton, J. (2014). Qualitative research: Getting started. The Canadian Journal of Hospital Pharmacy, 67(6), 436.
Australian Institute of Family Studies (AIFS). (2020). Planning an evaluation: step by step (EES Practical Guide). Melbourne: Child Family Community Australia, Australian Institute of Family Studies.
Australian Institute of Family Studies (AIFS). (2021). Data sources in needs assessments (Expert Panel Project Practical Guide). Melbourne: Child Family Community Australia, Australian Institute of Family Studies.
Australian Institute of Family Studies (AIFS). (2021). Ethics in evaluation (Expert Panel Project Practical Guide). Melbourne: Child Family Community Australia, Australian Institute of Family Studies.
Australian Institute of Family Studies (AIFS). (2021). Evaluation design (Expert Panel Project Practical Guide). Melbourne: Child Family Community Australia, Australian Institute of Family Studies.
Australian Institute of Family Studies (AIFS). (2021). What is evaluation? (Expert Panel Project Practical Guide). Melbourne: Child Family Community Australia, Australian Institute of Family Studies.
Australian Institute of Family Studies (AIFS). (2016). Using qualitative methods in program evaluation (EES Short Article). Melbourne: Child Family Community Australia, Australian Institute of Family Studies.
Australian Institute of Family Studies (AIFS). (2013). Planning for evaluation I: Basic principles (EES Practical Guide). Melbourne: Child Family Community Australia, Australian Institute of Family Studies.
CAFCA. (2011). Collecting data from parents and children for the purpose of evaluation: Issues for child and family services in disadvantaged communities (Practice Sheet). Communities and Families Clearinghouse Australia.
Hepler, A. N., Guida, F., Messina, M., & Kanu, M. (2010). Program evaluation with vulnerable populations. Service Delivery for Vulnerable Populations: New Directions in Behavioral Health, 355.
Newcomer, K. E., Hatry, H. P., & Wholey, J. S. (Eds.). (2015). Handbook of practical program evaluation. San Francisco, CA: Jossey-Bass & Pfeiffer Imprints, Wiley.
Nikolopoulou, K. (2023, June 22). What Is Probability Sampling? | Types & Examples. Scribbr. Retrieved from https://www.scribbr.com/methodology/probability-sampling/
Northwest Center for Public Health Practice. (n.d.). Data Collection for Program Evaluation.
Pace, D. S. (2021). Probability and non-probability sampling-an entry point for undergraduate researchers. International Journal of Quantitative and Qualitative Research Methods, 9(2), 1–15.
1 Observation is a non-experimental technique where the evaluator systematically observes the participant in their natural setting, for example, observing a parent as they play with their child during a playgroup session and noting particular behaviours or interactions.
Contributions and acknowledgements
This practice guide was written by Dr Megerssa Walo, with key contributions from Dr Stewart Muir, Sharnee Moore, Dr Jasmine B. MacDonald and Dr Lakshmi Neelakantan.
Walo, M. (2023). Using a survey to collect data for evaluation: A guide for beginners. Melbourne: Australian Institute of Family Studies.