Engaging Youth in Evaluation Processes

Picture this: it’s the first session of a popular youth sport program at a busy community facility. Dozens of youth are greeting each other, checking out the space, and mingling with coaches on the bleachers. The excitement is palpable as youth move onto the court. Soon the space is filled with the satisfying sounds of balls bouncing on hardwood, smacking against palms, grazing nets, and rattling backboards. Eager voices rise above the background music as youth encourage (and occasionally needle) their peers, and the air temperature steadily increases as coaches keep the program moving from warm-ups to drills to game-play and finally a participatory debrief. After a hearty cheer, the circle breaks and youth stream into the atrium or waiting area to grab snacks, meet family members, rehash the day’s plays, and continue catching up with friends.

For 10 points: what’s missing from this wonderful window in time?

Congratulations if your first thought was “a pre-program evaluation.” While easy to overlook, a rigorous and theoretically grounded program evaluation is the key to determining the impact of program participation on positive youth outcomes, from physical literacy to essential life skills such as leadership and self-esteem. Even the immediate, experience-based objectives such as satisfaction, engagement, and enjoyment must be evaluated to determine whether program goals are being met.

Program evaluation – too often last on the priority list

So why is the need for program evaluation in youth sport so commonly ignored or brushed aside? In the context of a highly engaging sport program, the importance of program evaluation tends to be eclipsed by other activities. It is easy to assume that since youth are attending the program and participating in the planned activities, the program is achieving its intended impact. Many organizations – 86% of Ontario non-profits, according to a 2018 report – do not have staff with experience or expertise in designing and implementing program evaluations (Ontario Nonprofit Network, 2018). Organizations may not know which outcomes to measure or how to measure them (Salesforce, 2019). Lack of money, appropriate staff, and time have been cited as the most potent barriers to conducting evaluations and using their findings – and increased investment in program evaluation is a known need (Ontario Nonprofit Network, 2018; Technology Affinity Group, 2018). The need for better tools to capture the impact of sport on youth development is also well-documented (Sportanddev, 2011). But perhaps the number one reason for poor or absent program evaluations in the youth sport setting – and the issue addressed in this article – is the difficulty of engaging youth in evaluative processes.

Engaging youth in evaluation – like “catching fish with your hands”

Youth are notoriously difficult to engage in pre- and post-program evaluation. Collecting survey responses can be like catching fish with your hands, with response rates in the field averaging 10-20% (Fryrear, 2015). The effort and hours invested in following up with youth through phone calls, emails, and in-person reminders can yield diminishing returns, and the experience deflates staff when response rates remain low despite everyone’s best efforts. Typical survey platforms cannot compete with Instagram, YouTube, TikTok, and the latest, greatest games and apps. On a typical day, teens spend 6.5 hours using screen media and pre-teens spend 4.5 (Joshi et al., 2019). A well-crafted evaluation survey should take no more than 10 or 15 minutes to complete – roughly 4% of a teen’s daily screen time, or about 6% of a pre-teen’s. Is occasionally spending that sliver of screen time on a sport-related survey too much to ask? We don’t think so – but the statistics disagree.

Evaluation innovation at MLSE LaunchPad

At MLSE LaunchPad, a creative and innovative approach to youth engagement in program evaluation has increased youth and coach buy-in and generated more and better data. Evaluation findings have been used to increase program quality, enhance program impacts, and improve strategies for appealing and reporting to donors.

This article shares several strategies used at MLSE LaunchPad to successfully engage youth in program evaluation processes. We define “youth” broadly, from ages 6 to 29. Our experience concerns diverse urban youth attending free programs at a collaborative Sport For Development facility located in downtown Toronto. Our population includes a high proportion of racialized youth, newcomer youth, and youth from low-income families, with an approximately equal balance of boys and girls. However, these learnings may be applied in a range of settings, including competitive and fee-based programs, and will be useful in any organization where leaders and administrators see the need to engage more youth – and engage youth more effectively – in evaluation processes.

Current practices in the field

Surveys are a common tool for evaluating outcomes in youth sport programs. Surveys may be created from scratch to address specific outcomes of interest to the sport organization, or may incorporate previously developed outcome measures such as a self-esteem scale or physical literacy questionnaire, which have been tested and validated. Surveys are generally delivered using pencil and paper or through online services such as SurveyMonkey, with data later transferred to software such as Excel for further analysis, visualization, and reporting. In some organizations, the onus is on coaches and program leaders to collect the needed data. Other organizations rely on management staff, volunteers, or external evaluators and consultants. In most cases, response rates are very low, incentivization to complete surveys is ineffective, and the burden of labour to boost the number of respondents and manage resulting data is unsustainable.
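Where survey data lands in a spreadsheet or CSV export, even a few lines of code can take some of the pain out of tracking response rates. Below is a minimal sketch in Python with pandas, assuming a registration roster and a survey export that share a participant ID column; the file and column names are hypothetical, not from any particular platform.

```python
import pandas as pd

# Hypothetical file and column names – adjust to match your own exports.
roster = pd.read_csv("program_roster.csv")        # one row per registered youth
responses = pd.read_csv("survey_responses.csv")   # one row per completed survey

# A youth counts as a respondent if their ID appears in the survey export.
responded = roster["participant_id"].isin(responses["participant_id"])

print(f"Response rate: {responded.mean():.0%} ({responded.sum()} of {len(roster)})")
```

Tracking the rate this way, per program and per cycle, makes it easier to see whether follow-up efforts are actually moving the needle.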

Other evaluation techniques employed in the youth sport setting include observed assessments of fundamental movement skills, fitness level, or other outcomes relating to physical performance. Focus groups or interviews with key stakeholders are often used to generate qualitative data, including feedback on program experience and less tangible outcomes that may be more difficult to assess quantitatively, such as how participants intend to apply program learnings outside of the program. Focus group data and a mixed-methods approach can also increase the usefulness of program evaluation results in storytelling and stakeholder engagement.

Typical approaches to program evaluation present several issues relevant to the youth sport setting. Published scales and surveys developed in academic settings tend to be lengthy and may include language not appropriate for youth. It is difficult to locate age-appropriate questionnaires to assess many outcomes of interest in the sport sector. And many surveys use a more clinical or deficit-oriented approach, which can be off-putting for youth and contradicts a Positive Youth Development approach to sport programming (Fraser-Thomas et al., 2005). When tools are not well-designed for their intended purpose, the quality of the resulting data is reduced, and utility is diminished.

Observed movement assessments also present many barriers to implementation. The space, time, and human resources required make this type of evaluation impractical in many settings. Physical “tests” may be viewed by youth as boring or a waste of time; as such, obtaining the required level of effort and engagement may not be possible. Coaches may resent the incursion into program time – time that they see as better spent on skill development or competitive play. Indeed, within an 8-week program cycle, spending one session each on pre- and post-program movement assessment eats up 25% of available program time.

Focus groups also require the diversion of time from program activities, or that youth attend additional sessions to participate. With many youth rushing from school to their sport program, squeezing in dinner and homework before bedtime, and possibly juggling a part-time job, volunteer role, or family responsibilities, additional time commitments may not be feasible. Youth would naturally rather be on the court – or off engaging in their own lives.

MLSE LaunchPad’s MISSION Measurement Model

The “MISSION” acronym encompasses best practices that emerged from three years of experience working in our setting. The principles have been codified and applied to all our research and evaluation work, informing the design and implementation of our evaluation frameworks, processes and tools, as well as our research partnerships.

Minimal – Only survey youth when results will be used in decision-making.

This principle decreases the burden on youth, helps avoid survey fatigue, and encourages high engagement by demonstrating to youth that their feedback leads to changes they can see. The end results are a more effective evaluation process and administrative clarity on how the data will be used – namely, for the specific decision-making purpose they were collected to support.

I-statements – Phrase survey items as personal statements.

In our experience surveying more than 10,000 youth participants, I-statements elicit a stronger personal connection and make more sense to youth respondents because they can easily “try the statement on” – resulting in more honest responses that better reflect youth outcomes. For example, we use “I see myself as a leader” or “I feel I have a lot to be proud of” instead of “Do you see yourself as a leader?” or “Do you feel you have a lot to be proud of?”

Short – Use the fewest survey items needed to achieve meaningful results.

We strive to keep our outcome measures brief, with most consisting of approximately ten items. Where a program wishes to measure two outcomes (e.g. self-esteem and social competence), the resulting survey will be about 20 items long. This cuts down on time spent completing surveys, keeps youth engaged in the process, and reduces the risk of unreliable data from youth scrolling through a long survey as quickly as possible without really reading the items or response options. Limiting the number of outcome measures also leaves room for additional process-related questions – such as level of interest in a new initiative, satisfaction with an existing program, or the best day of the week to schedule a special event – as well as questions of special interest to funders.

Strengths-based – Phrase survey items positively to reinforce positive youth outcomes.

Positively phrased survey items keep the focus on strengths and not on weaknesses, while still allowing a youth to indicate that they do not currently experience the positive attribute being referenced by disagreeing with the statement. Negatively worded items are of questionable utility in surveying youth (Jackson Barnette, 2000), can project a diagnostic sentiment, and create a survey completion experience that can elicit negative emotions or alienate youth. For example, the survey items “I feel respected at my program” or “I feel useful” encourage the respondent to reflect on positive attributes that they may already possess or be developing through the program. This is in stark contrast to the deficit-based items of “I do not feel respected at my program” and “I feel useless.”

Involve coaches – Enlist the coach or youth leader in survey design and delivery.

Evaluation is not a one-person job left only to evaluators, but rather a key part of the program fabric. Front-line staff are among the greatest strengths of quality sport programs. Youth-adult relationships are an important determinant of positive youth outcomes, and many youth develop meaningful, long-term relationships with their coaches and other program leaders. As such, coaches and program leaders are often better positioned than management-level or evaluation staff to pass out tablets for program evaluation purposes or to work through simple surveys with very young participants. At best, data collection is a collaborative effort involving each of these parties – coaches, managers and evaluators – to maximize response rates and data quality, and to ensure instructions are understood.

Online – Collect data digitally to maximize youth engagement and honesty.

Current research suggests youth are more honest with technology than they are face-to-face with another human (Radovic et al., 2018). Given our increased reliance on digital technology for everything from social interaction to banking to education, this is unsurprising. We can capitalize on this reality by collecting data through engaging digital platforms that incorporate elements of gamification and competition, much like youth’s favorite mobile games and apps. A SurveyMonkey questionnaire achieves part of the aim, but its design is bland and unappealing to youth. More enticing, mobile-friendly platforms can increase enjoyment for youth respondents, boost response rates, and win buy-in from staff when youth complete evaluations on their own time – saving valuable program time. At MLSE LaunchPad, two-thirds of youth, on average, complete their pre-program survey independently in advance of the first program session, and we achieve an average response rate of 86%.

No Neutrality – Use yes/no and 4-point scales. Eliminate “unsure” as an option.

A Likert scale is a type of rating scale used to measure attitudes or opinions. Respondents are asked to rate items based on level of agreement, frequency of a thought or action, or importance of the survey item (Iowa State University, 2010). Youth prefer Likert scales over other survey formats, finding them easier to complete (van Laerhoven et al., 2007). However, within a five-point Likert scale, youth tend towards a neutral point when it is offered as an option (Dalal et al., 2014). Based on our experiences, MLSE LaunchPad uses 4-point scales with response options ranging from Strongly Disagree to Strongly Agree, or Yes/No questions in cases where the nuanced response that a Likert scale allows is not required.
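To illustrate how a no-neutral 4-point scale translates into data, here is a minimal scoring sketch in Python. The response labels match the Strongly Disagree to Strongly Agree range described above, but the 1-4 scoring convention and the example items are illustrative assumptions, not MLSE LaunchPad’s actual instrument.

```python
# Map the four response labels to scores; note the absence of a neutral midpoint.
SCALE = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Agree": 3,
    "Strongly Agree": 4,
}

def score_outcome(responses: list[str]) -> float:
    """Average one respondent's item scores into a single outcome score (1-4)."""
    return sum(SCALE[r] for r in responses) / len(responses)

# Example: one youth's answers to a hypothetical three-item measure.
print(score_outcome(["Agree", "Strongly Agree", "Agree"]))  # ~3.33
```

Because every response sits on one side of the midpoint, each completed item tells you whether the youth agrees or disagrees – there is no fence to sit on.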

Take Your Evaluation Courtside

In addition to applying our MISSION measurement model, another key tactic for increasing youth engagement in evaluation processes has been conducting assessments “courtside” wherever possible. This means distributing tablets for survey completion on the sidelines or in the bleachers, rather than in a classroom or waiting area. Youth feel as if they are still part of the action and not missing out on program time, and coaches can better support both on-court and evaluation activities. We have extended this tactic to our qualitative data collection, conducting brief courtside interviews post-program to solicit youth’s reflections. We find that youth, having just stepped off the court, are focused and better able to articulate their program experience.

Applying these practices in your setting

Some of the effective strategies and tactics detailed above admittedly require a high level of resources to execute, including time, space, and technology. Here are several low-resource ideas for implementing some of our key learnings to increase youth engagement in evaluation processes.

1. Focus! Be specific about what you need to evaluate.

Take a minimalist approach to evaluation and measure only what you need to. Stronger results are often generated by focusing evaluation efforts on 1-2 priority outcomes. You are likely to experience higher youth engagement by simply eliminating any surveys or individual questions that are unnecessary. Be clear about what data you require to make decisions relating to future programming, quality improvement and budgeting, and remove everything else.

2. Say it with fewer words.

Do your surveys contain redundant questions? Are individual questions lengthy, packing multiple ideas into a single run-on sentence? Find ways to simplify, editing all surveys with brevity and clarity in mind. Don’t be afraid to play around with your surveys until you get them right. Although results may not be comparable across different versions, improving your survey to support higher engagement is likely more important than staying consistent.

3. Assess (or re-assess) outcome measures with care.

If your program uses a standardized outcome measure to assess youth progress in a key area of positive development, what do you know about that measure? Who developed it, when was it created, and in what contexts has it been used? Has it been validated for use with your demographic, considering age, gender, race, and rural vs. urban populations – and if so, when was the validation completed? Language evolves quickly, and words that made sense to teens in 1985 may be unclear to today’s youth. Consult existing resources to locate youth-friendly outcome measures, or work with an evaluator to develop your survey. See the resources below for some good starting places.

4. Get coaches involved.

Coaches and other front-line staff will engage in data collection more enthusiastically when they know what information they are collecting and how the data will be used, especially if they had a hand in designing the survey content. Coach feedback is important to ensure that the right questions are being asked in the right way, and that evaluation processes do not create barriers to participation for youth. Coach support is also essential to drive home to youth the importance of evaluation, to implement program evaluation efficiently, and to create a culture of evaluation within your organization.

5. Digitize your process.

Digital evaluation processes are not only more engaging for youth, they are also easier and more efficient for staff to manage, and allow for more rapid and thorough utilization of the data generated. Ideally, a single system would be used to manage your participant database, program registration, scheduling, communications, and program evaluation. In the absence of comprehensive software, a good compromise is to centralize different types of data in a common spreadsheet document, including demographics, registration, attendance, and survey responses or other program outcomes. This eliminates redundancies in data collection that can be frustrating for staff and youth, and allows for useful and informative queries – for example, looking at relationships between program type and attendance level, or between gender and program outcomes.
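As a sketch of the kinds of queries a centralized dataset enables, here is how the two example questions above might look in Python with pandas. The single merged table and its column names are hypothetical assumptions for illustration.

```python
import pandas as pd

# Hypothetical centralized dataset: one row per youth per program, combining
# demographics, registration, attendance, and survey-based outcome scores.
df = pd.read_csv("centralized_program_data.csv")

# Relationship between program type and attendance level.
print(df.groupby("program_type")["sessions_attended"].mean())

# Relationship between gender and a program outcome (post-program scores).
print(df.groupby("gender")["self_esteem_post"].mean())
```

Even this simple structure – one row per youth per program, with outcomes alongside demographics – answers questions that are tedious to assemble from scattered files.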

6. Incentivize engagement!

We believe strongly in letting youth know that we value their opinions and feedback by providing intentional and modest incentives for completing evaluation activities. This can range from providing a meal alongside your focus group, to entering all youth who complete pre- and post-program surveys into an end-of-season draw for a piece of sport apparel or equipment. Low-cost incentives include badges, stickers, and certificates for younger participants; small items donated by sponsors, such as low-value gift cards, reusable water bottles or socks; or admission to a special event such as a pizza party or sport-related outing.
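If you run an end-of-season draw like the one described above, the selection itself can be a few transparent lines of code. A minimal sketch in Python, assuming a hypothetical completion-tracking CSV with pre/post flags per youth:

```python
import random

import pandas as pd

# Hypothetical tracking sheet: one row per youth with survey completion flags.
df = pd.read_csv("survey_completion.csv")

# Only youth who completed BOTH the pre- and post-program surveys are eligible.
eligible = df[df["completed_pre"] & df["completed_post"]]

winner = random.choice(eligible["participant_id"].tolist())
print(f"End-of-season draw winner: {winner}")
```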

High evaluation engagement in youth sport – YES, WE CAN!

At MLSE LaunchPad, an experimental approach to program evaluation has led to the strategic model and specific tactics outlined above. Three years of learning and iterative improvement have effectively increased youth engagement in evaluation processes at our facility, with wide-ranging benefits impacting youth, front-line staff, management, and the entire youth sport sector through shared learnings that can be applied in a variety of settings. Fun and evaluation are not mutually exclusive. We encourage you to adopt this attitude and approach in your next program evaluation with the objective of making it a little more enjoyable and engaging for everyone involved. Happy youth, happy you. Don’t hesitate to reach out to our team to start a conversation on these or other research and evaluation issues in youth sport.

Recommended Resources

Assessing Outcomes in Child and Youth Programs: A Practical Handbook – Excellent resources for planning and conducting program evaluations, and a compilation of evaluation tools.

Canadian Evaluation Society – A roster of Credentialed Evaluators practicing in each province, as well as ethical guidelines and program evaluation standards.

MLSE LaunchPad – Reach out to our team to discuss current practices and issues relating to evaluation design and implementation in the youth sport space.

YouthREX – The Youth Research and Evaluation eXchange – A knowledge hub, a library of evaluation tools, and several professional development resources.
