[Image: Young male athletes wearing GPS tracker devices to collect data while participating in drills at a soccer training camp.]

Over coffee, we recently reminisced about the different sporting environments we’ve worked in and how many times we’ve seen expensive technological solutions sit in a corner, collecting dust. Perhaps you can relate to the pattern. A new technology hits the market, and a few marquee teams or athletes adopt it. You truly believe the technology will help you in the same way it’s helping them. You purchase it and anticipate how great it’s going to be. Then, for whatever reason, it doesn’t go as planned. Using the technology is cumbersome. Athletes or staff resist it. You don’t know what the data means, or what you should do differently now that you have it. The value never becomes as apparent as you first anticipated, and eventually you stop using the technology. The dust collection begins.

These days, coaches are inundated with technological options claiming to offer “solutions” for athletes and teams. However, many coaches have limited budgets and can’t afford technological investments that fail. Based on our experiences implementing technology in different applied sport settings, we proposed a critical decision-making framework to apply before implementing technology in sport (Windt et al., 2020).

In this blog, we review 4 key questions for coaches and other decision-makers to ask if they want new technology to help, not hinder, their ability to coach effectively.

Question 1: Is the data useful?

Every day, it seems a new technology hits the market with bold claims and fancy marketing. Many technologies are intriguing, and as a coach, it’s easy to be curious or interested. However, the first thing to consider isn’t whether the technology is exciting, but whether it’ll deliver on its promises to inform decisions.

Practically, coaches should “start with the end in mind” (Covey, 2004) by envisioning their decision-making process, and how the technology could contribute. The technology’s data should inform coaches’ decision-making, not “make” the decision for the coach (Gamble et al., 2020).

Another important consideration is whether coaches could access the same information in another, more affordable way, especially when budgets are tight. For example, global positioning systems (GPS) can provide information about players’ training volumes and physical capabilities, such as maximum velocity, and the information can be aggregated to understand a team’s training progression throughout a microcycle or macrocycle. If the latter (team load progression) is the coach’s priority, consistently collecting session rating of perceived exertion (sRPE) responses from each athlete could provide insight into how the team’s training sessions vary across the training cycle, reducing the need for GPS to answer this specific question. If players’ maximum velocities during match play are the most important question, then GPS prevails as the answer.
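The sRPE alternative described above is simple enough to compute by hand or in a spreadsheet: a session load is typically calculated as the athlete’s post-session RPE multiplied by the session duration in minutes, and team loads can be averaged per session. As a minimal sketch (the athlete ratings and session durations below are hypothetical, purely for illustration):

```python
# Sketch of session RPE (sRPE) load tracking, an affordable alternative
# to GPS for monitoring team-level load progression.
# sRPE load = post-session RPE (0-10 scale) x session duration (minutes).

def srpe_load(rpe: float, duration_min: float) -> float:
    """Session load, in arbitrary units (AU), for one athlete."""
    return rpe * duration_min

def team_session_load(ratings: list[float], duration_min: float) -> float:
    """Average sRPE load across the squad for one session."""
    return sum(srpe_load(r, duration_min) for r in ratings) / len(ratings)

# Hypothetical training week: (athlete RPEs, session duration in minutes)
week = [
    ([6, 7, 5, 6], 90),   # hard session
    ([3, 4, 3, 3], 60),   # recovery session
    ([8, 7, 8, 9], 75),   # match-intensity session
]

for ratings, duration in week:
    print(f"{duration} min session -> {team_session_load(ratings, duration):.0f} AU")
```

Plotting these team averages across a training cycle gives the load-progression picture described above, without any hardware beyond a stopwatch and a question to each athlete.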

If coaches can’t imagine how the information would make life easier with more effective decision-making, or if they can already access similar information through other means, then there’s no need to further consider the technology.

Question 2: Is the data trustworthy?

Technology companies have a bottom line: their own. Often, they ultimately care more about selling their product to increase profits than about openly sharing their products’ imperfections. Given that commercial products vary in accuracy, and none capture information perfectly (Linke et al., 2018; Stone et al., 2020), coaches must consider if marketing promises are to be “sufficiently believed.”

[Image: Female athlete standing looking over her shoulder while wearing motion capture equipment.]

This question has 2 parts. First, can the technology be believed? In the scientific realm, this is about validity and reliability. Broadly, validity assesses if the technology is measuring what it promises to measure, while reliability speaks to its consistency and degree of error when providing measurements (Impellizzeri & Marcora, 2009). To answer this believability question, coaches can search for academic, peer-reviewed papers about the technology, such as the 2 papers referenced in the previous paragraph. Coaches can also speak to someone in a related academic field to recap the available literature on the company. No news or no papers is often not a good sign.

The second question is about sufficiency, after the validity and reliability have been evaluated. Since no measure is perfectly valid or reliable, one must judge whether the errors are small enough that the data can still be sufficiently used for the coaches’ particular purpose. For example, while errors are always present, if a GPS unit is off by an average of 10 metres each day, you’ll be more comfortable relying on it for reviewing a training session’s physical demands than if it’s off by 1000 metres per day. Ultimately, trusting technology is a judgement call, based on understanding a technology’s limitations and weighing them against how precise the data must be to inform your decision.
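One way to frame the sufficiency judgement above is to compare the device’s typical error to the size of the quantity you’re actually measuring. The 8,000 m daily session distance below is a hypothetical figure chosen only to illustrate the comparison:

```python
# Sketch of the "sufficiency" judgement: is the device's typical error
# small relative to the quantity being measured?

def relative_error(error_m: float, measured_m: float) -> float:
    """Device error expressed as a fraction of the measured value."""
    return error_m / measured_m

session_distance_m = 8_000  # hypothetical daily team-session distance

for error_m in (10, 1_000):  # the two error magnitudes from the example
    pct = relative_error(error_m, session_distance_m) * 100
    print(f"{error_m} m error -> {pct:.2f}% of the session distance")
```

A 10 m error on an 8,000 m session is a rounding issue; a 1,000 m error is an eighth of the session and would change any conclusion drawn from the data. How small the fraction must be before you trust the device remains the coach’s call, based on the decision the data is meant to inform.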

Question 3: Can coaches access and use the data effectively?

[Image: Man performing an exercise test on a stationary bike while wearing medical equipment.]

While a main goal of technology is to provide data to help with decision-making, actually getting the necessary data may be simple or unreasonably cumbersome. When evaluating whether a technology is appropriate for your needs, ask for a trial. Trials can help coaches discover if they can access the necessary information and evaluate whether the technology meets their needs. For example, how long does it take to get the data? Is it available live or post-session? Can data be accessed on a cell phone or only on a computer? Is there enough detail? Can data be customized, and how is it displayed? Can the data be exported to compare it to other available information? The importance of each question depends specifically on each coach’s context. Trialing a technological solution can help coaches answer each of these questions (Torres-Ronda & Schelling, 2017).

Question 4: Is the technology usable in real-life situations?

When implemented, what burden will the technology impose on coaches, athletes, and support staff? This may be among the most overlooked questions when considering technological options for sport. Using technology costs 1 or more people their time, energy, or convenience. For example, implementing GPS with a soccer team could easily take up to 4 hours a day: 1 hour to prepare the equipment, 2 hours to monitor the session and assign players to the proper drills, and 1 hour to process data, create reports, and provide interpretation. When deciding if a technology is worth it, don’t just weigh the price; also factor in the cost it’ll demand of everyone involved.

Conclusion

It’s easy to think that new technology will solve the problems we’re having in sport. Coaches and practitioners (especially in amateur sport or in sporting organizations with fewer resources and smaller budgets) may believe that technologies already used in professional environments are the solution. In fact, while professional sport environments often have large budgets and an abundance of technologies at their disposal, technologies can distract from the real performance process and collect dust when implementation fails. Many teams succeed with low budgets and few technologies. In turn, many incredibly well-funded teams fail, even with many technological toys. We hope that by encouraging practitioners to ask these 4 questions, we’ll help others avoid this trap, ensuring that the technology they adopt is a help, not a hindrance.


About the Author(s)

Johann Windt, Ph.D., is the Head of Data Science Performance with the Vancouver Whitecaps FC of Major League Soccer (MLS). He aims to make everyone else’s job more effective and efficient by overseeing data collection, integration, and reporting across football operations, including scouting and recruitment, sport science and medicine, first team football, and player development. Prior to joining the Whitecaps, Johann worked at the United States Olympic and Paralympic Committee as a sports medicine data analyst, where he helped build the Athlete 360 monitoring program. This program was responsible for athlete workload and self-reported wellness monitoring of more than 400 athletes, representing more than a dozen different sporting disciplines. Today, his research and professional interests include how technology and data science can inform organizational decision-making, both in the daily training environment and through longer-term research and innovation.

Ben Sporer, Ph.D., is with the Vancouver Whitecaps FC of Major League Soccer. In 2019, he developed and drove the integrated performance strategy for the football club, aligning the 3 pillars of first team football, player development, and scouting and recruitment while optimizing the daily training environment, research, and innovation. Ben is a physiologist experienced in applying science to practical planning and performance solutions. Over the past 2 decades, he’s worked in multiple sports internationally and professionally and led multidisciplinary teams at 3 Olympic Games. He’s also held senior roles in the Canadian Olympic sport system.

References

Covey, S. R. (2004). The 7 Habits of Highly Effective People: Powerful Lessons in Personal Change. Simon and Schuster.

Gamble, P., Chia, L., & Allen, S. (2020). The illogic of being data-driven: Reasserting control and restoring balance in our relationship with data and technology in football. Science and Medicine in Football, 4(4), 338–341. https://doi.org/10.1080/24733938.2020.1854842

Impellizzeri, F. M., & Marcora, S. M. (2009). Test validation in sport physiology: Lessons learned from clinimetrics. International Journal of Sports Physiology and Performance, 4(2), 269–277. https://doi.org/10.1123/ijspp.4.2.269

Linke, D., Link, D., & Lames, M. (2018). Validation of electronic performance and tracking systems (EPTS) under field conditions. PLOS ONE, 13(7), e0199519. https://doi.org/10.1371/journal.pone.0199519

Stone, J. D., Rentz, L. E., Forsey, J., Ramadan, J., Markwald, R. R., Finomore, V. S., Galster, S. M., Rezai, A., & Hagen, J. A. (2020). Evaluations of commercial sleep technologies for objective monitoring during routine sleeping conditions. Nature and Science of Sleep, 12, 821–842. https://doi.org/10.2147/NSS.S270705

Torres-Ronda, L., & Schelling, X. (2017). Critical process for the implementation of technology in sport organizations. Strength & Conditioning Journal, 39(6), 54. https://doi.org/10.1519/SSC.0000000000000339

Windt, J., MacDonald, K., Taylor, D., Zumbo, B. D., Sporer, B. C., & Martin, D. T. (2020). ‘To tech or not to tech?’ A critical decision-making framework for implementing technology in sport. Journal of Athletic Training, 55(9), 902–910. https://doi.org/10.4085/1062-6050-0540.19


The information presented in SIRC blogs and SIRCuit articles is accurate and reliable as of the date of publication. Developments that occur after the date of publication may impact the current accuracy of the information presented in a previously published blog or article.