The Art of Sport Science

February 13, 2017

Canadian Sport Centre Atlantic - With 2017 in full swing, you might be starting to check in on those resolutions you set in January. Each new year gives us an opportunity to reflect on the previous one and, hopefully, to learn and grow. Last year’s goals are evaluated, resolutions are made and shared, and the promise of improvement is palpable. For the sport community, the start of a new year delivers a new season, perhaps a chance at a new team, a new goal, an improved outlook, and another chance in the pursuit of excellence. We also move one step closer to the next Olympic Games, and for those who work behind the scenes in high performance sport, that puts an added emphasis on the processes involved in getting there. The Rio Olympics demonstrated that innovation, data collection, and the application of science are more prevalent in high performance sport than ever before. Higher, Faster, Stronger. That’s what we’re all aiming for – but are we on the best trajectory to achieve it? Have we given as much thought to the process as we have to the outcome?

When setting a goal, creating a plan, forming new habits, and applying new knowledge, CSCA mental performance consultant Bryce Tully encourages us not only to put our best foot forward, but also to think critically about that next step and where it will take us.


The Art of Sport Science

A perspective article by Bryce Tully (MSc), Canadian Sport Centre Atlantic MPC

“In a world of ubiquitous information and advanced analytical tools, logic alone won’t do. What will distinguish those who thrive will be their ability to understand what makes their fellow woman or man tick, to forge relationships, and to care for others.” – Daniel Pink, author of Drive

Coaches and researchers have suggested for decades that great leadership is both an art and a science. But has our eagerness to collect scientific performance data outpaced our eagerness to master the art of using it? I believe the current trend within high performance sport is to place disproportionate weight on the collection of scientific data, while the organizational and psychological factors essential to its success are largely ignored. Therefore, the next step in modern coaching may rely just as much on data as it does on our ability to use data as a powerful and ethical influencing tool.

I should clarify upfront that this article draws on both scientific evidence and personal anecdotes accumulated over my relatively limited seven years of experience as a mental performance consultant (MPC). However, given the topic area, it only seemed fitting to use a blend of art and science to express my ideas, which I ultimately hope will spark meaningful discussion among those involved in high performance sport.

Let’s start with some context. It’s no secret that over recent years the world of elite sport has become progressively more invested in quantifying performance. Virtually every university, national, and pro-level team in the world has at least one, if not several, data analysts and sport scientists on staff – and with good reason. When used properly, data-driven feedback can promote an array of exciting improvements in areas like motivation, focus, decision making, recovery, sleep, tactics, strength, and more. This would explain why the Arizona Coyotes recently hired 26-year-old John Chayka, a data analyst who previously co-founded a data analytics company called Stathletes, as their general manager. Chayka is the youngest person ever to hold that title in the NHL, a cool 16 years younger than the next youngest GM in the league. It would also explain why the Toronto Maple Leafs hired Kyle Dubas, a 28-year-old stats guru, as their assistant GM, and why hundreds of data analysts were hard at work behind the scenes both during and leading up to the Rio Olympics. In fact, there are now major companies, like Kinduct Technologies, devoted entirely to providing sport organizations with software solutions for managing data – and not just obvious stuff like the famous on-base percentage example from Moneyball, but more granular stuff like heart rate variability, sleep patterns, nutritional intake, wearable device data, and so on. As in other competitive fields, accurate data is crucial because it allows one variable to be correlated with another, and these correlations allow for more informed leadership. It is important, however, that more informed leadership be the final goal, and nothing more. Pursuing perfect correlations (i.e., one-to-one relationships) in the fast-paced climate of high performance sport is usually a bridge to nowhere, and falsely assuming they exist where they don’t is an even bigger blunder.
Therefore, I believe we should be aiming to marry expert coach observations with scientific evidence – because gut impressions on their own will not produce world-class results, and neither will sport science, but effectively integrating the two can be extremely powerful.

From a scientific standpoint, measurable feedback provides the key ingredient to a process that Dr. Anders Ericsson refers to as deliberate practice: the process of providing performers with frequent and accurate measures against a known standard of performance. Ericsson’s research suggests that practicing deliberately is crucial for those pursuing skill expertise and mastery. He suggests that “in the absence of adequate feedback, efficient learning is impossible and improvement only minimal even for highly motivated subjects” (Ericsson, Krampe, & Tesch-Römer, 1993). Ericsson advises coaches to provide frequent, measurable feedback not just about outcomes, but also about the detailed processes and actions that create those outcomes. His research has provided clear evidence that if we combine the basic principles of goal setting with the amazing new information made accessible by sport science, great things can happen. However, in my experience, success on this front often depends on two key underlying factors: (1) the working relationships between staff members (i.e., sport scientists and coaches), and (2) understanding which measures actually matter.
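To make the idea of deliberate practice concrete, here is a minimal sketch of a feedback loop that compares each trial to a known performance standard. The drill, the benchmark time, and the trial times are all invented for illustration:

```python
# Illustrative sketch only: a minimal deliberate-practice feedback loop.
# The benchmark and trial times below are hypothetical.

BENCHMARK_SECONDS = 11.8  # known standard for a hypothetical sprint drill

def trial_feedback(trial_time: float, benchmark: float = BENCHMARK_SECONDS) -> str:
    """Return immediate, measurable feedback for a single trial."""
    gap = trial_time - benchmark
    if gap <= 0:
        return f"{trial_time:.2f}s, met the standard ({abs(gap):.2f}s under)"
    return f"{trial_time:.2f}s, {gap:.2f}s off the standard"

session = [12.31, 12.05, 11.92, 11.76]  # hypothetical trial times
for i, t in enumerate(session, start=1):
    print(f"Trial {i}: {trial_feedback(t)}")
```

The point is not the code itself but the principle Ericsson describes: each trial is measured against an explicit standard, and the feedback is frequent, specific, and immediate.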

The first factor is at the heart of any support staff’s success, and often relies heavily on the individual philosophies and approaches of the people involved. For instance, some individuals require strong scientific evidence to make any decision, often being incapable of deciding without it, whereas others have no trouble making decisions based on gut impressions, anecdotal evidence, and past experiences. Issues can quickly arise, though, if such individual philosophies are not clearly communicated upfront. Personally, I believe the most effective approaches lie somewhere between these two extremes. I’ve also noticed that how open one’s mind is to growth and new ideas often dictates one’s use of sport science, which my colleague Leo Thornley and I have attempted to visualize in the plot below.

The figure above, although purely theoretical, highlights why I believe we need to find ways to better assess and discuss our individual philosophies as they pertain to sport science. So what are the symptoms of differing sport science philosophies? I’ve observed many, but none as damaging as what author Patrick Lencioni refers to as artificial harmony. In his book The Five Dysfunctions of a Team, Lencioni suggests that artificial harmony is born when individuals are more interested in avoiding conflict and preserving the status quo than they are in expressing their true opinions. In the case of sport science, artificial harmony occurs when individuals have different philosophies but keep their true thoughts dormant to avoid socially damaging conflict. According to Lencioni, though, tension is actually more damaging to a culture than structured conflict, and tension is what you end up with if you aren’t willing to be upfront about your philosophies. It’s also important to recognize that having different philosophies on a support staff can be hugely advantageous, as it allows for a wider range of perspectives. If people don’t feel safe expressing their views, however, this advantage will quickly turn into a handicap. Maybe a viable solution is to meet with our colleagues and use a figure like the one above to map out and justify where we stand. Although this type of exercise would be a little uncomfortable, it would oblige everyone involved not only to establish their approach, but also to provide valid reasons why, which could be very productive.

The second underlying factor (i.e., understanding which measures actually matter) relies mostly on not jumping the gun out of excitement over a new measure. Just because you can measure something doesn’t necessarily mean it’s useful or that it should be used as feedback. A more effective strategy is to wait until either the measure is validated, or until you’re certain that it provides useful contextual information about an observation you are already confident in.

From an athlete’s perspective, though, who wouldn’t accept an offer to have their foot contact time precisely measured during a sprint workout, or their heart rate tracked and monitored throughout an entire basketball practice? But herein lies another challenge – at least from a coaching and athlete development standpoint. Our capacity to collect scientific, and often useful, performance data has in many cases outpaced our ability to effectively integrate it into the flow of the daily training environment. The reality is that the degree of knowledge required to make use of big data is so advanced that it’s usually outside the skill set of even the highest-level coaches. In many cases there are huge amounts of complex data being expertly collected and analyzed without anyone skilled enough in the art of knowledge transfer to utilize it (a challenge all too common in the world of scientific research as well). And without a well-thought-out plan, data can easily become distracting, and sometimes damaging, noise. Recent research in this area has even begun warning coaches and sport leaders of the mental and emotional risks associated with misusing or abusing performance data. A study published earlier this year examining the effects of wearable GPS devices on rugby players’ health and well-being stated that “the extent and manner in which GPS analysis is utilized by rugby league coaching staff members needs to be recognized as a dangerous potential contributor to detrimental physical and emotional health amongst working class rugby league players” (Jones, Marshall, & Denison, in press). Other recent studies have suggested that, when used improperly, performance data can actually lead to a decrease in performance because of psychological and emotional factors (Markula & Pringle, 2006; Jones & Toner, in press).

Further, there are countless programs and organizations heavily invested in sport science that have not yet established clear protocols regarding performance data. So what is the purpose of collecting performance data anyway? Maybe it’s for development – that is, to initiate a faster rate of improvement through heightened awareness of performance standards and benchmarks (i.e., to promote deliberate practice and increase motivation). Maybe it’s for monitoring – that is, to simply track progression over time in specific areas, which can be used to make important program adjustments. Or maybe it’s for selection and de-selection – that is, to yield a more reliable means of choosing teams, rosters, etc. The answer here is rarely going to be a singular one, but sport organizations should nevertheless strive to create clarity and establish procedures ensuring their athletes are informed about things like who can access their data, where it’s stored, and what it will be used for. Just as in all other scientific research domains, unclear procedures usually cause big problems – problems like inaccurate self-report scores because nobody is quite sure what might end up being used to make the big decisions, or frustrated athletes who are fed up with the lack of feedback they’re receiving despite having numerous data points collected on them every day. In this particular department, those of us living in the high performance sport bubble could learn a thing or two from those living in the research bubble. The research constructs that exist within academic institutions are there for a reason, and it may be time for us to take a closer look at which ones could be better implemented within sport. The exact direction this should go is a gigantic discussion point, so for now I’m going to leave it be.

As an initial strategy to help maximize the utility of our data collection efforts, though, it may be wise to assess the true quality of the measures we are already collecting before increasing the quantity. That is, we should determine which ones are most impactful and focus our efforts on properly integrating them into the daily training environment with some normalcy. Any credible data collection process should end with a clear interpretation of the results as well as a few practical implications, but I fear in sport we get so eager to collect new measures that this step is rarely accomplished. As the authors of Influencer: The New Science of Leading Change suggest, “a measure won’t drive behaviour if it doesn’t maintain attention, and it certainly won’t maintain attention if it’s rarely assessed – especially if other measures are taken, discussed, and fretted over a hundred times more frequently”. The authors also make reference to the Pareto Principle, which suggests that 80% of results can come from only 20% of effort – as long as you pick the right few measures to narrow your attention and work to implement them properly and frequently. Trying to focus on 25 different measures, however, can result in your efforts and attention being spread too thin, create a vibe of chaos and disorganization, and leave everyone with the impression that performance measures are purely secondary, peripheral details. What we should strive for is a climate of grit in which athletes use reliable measures to set specific goals that are frequently revisited and not easily forgotten. It’s also important to recognize that, to my knowledge, there is no positive correlation between the number of data points collected and performance. In other words, it doesn’t matter how much information you collect; what really matters is how effectively you use it.
I am also not at all naive to the fact that sometimes a lot of data needs to be collected to discover new meaningful measures, but it only makes sense to do this if we commit to fully utilizing those measures once they are discovered, and if everyone is fully aware of the purpose.
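As a rough illustration of picking the “vital few” measures, here is a sketch that ranks candidate measures by the strength of their association with a performance outcome and keeps only the top two for daily feedback. The measure names and every number below are invented for illustration, and real validation would of course require far more data and care than a simple correlation:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient, computed without external libraries."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)

# Hypothetical outcome scores for five athletes.
performance = [1, 2, 3, 4, 5]

# Hypothetical candidate measures, one reading per athlete.
measures = {
    "sleep_hours": [6, 7, 8, 9, 10],
    "hrv_ms": [50, 60, 55, 70, 65],
    "session_rpe": [8, 3, 9, 2, 7],
}

# Rank by absolute correlation: the sign matters for interpretation, not ranking.
ranked = sorted(
    measures,
    key=lambda name: abs(pearson(measures[name], performance)),
    reverse=True,
)

vital_few = ranked[:2]  # focus daily feedback on the strongest signals only
print(vital_few)
```

The design choice worth noticing is the cut at the end: rather than reporting everything, attention is deliberately narrowed to the handful of measures carrying the most signal, which is exactly the Pareto-style focus the Influencer authors describe.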

I think now is also a good time to clarify that I believe wholeheartedly in using performance data to influence behaviour change and stimulate innovation. In fact, the biggest psychological transformation I’ve witnessed thus far as an MPC came as the result of providing the right pieces of feedback, in the right way, at just the right time – all of which was only made possible by the data analyst on our integrated support team. Just like in many other facets of life, the way in which you present information, as well as the time and place you choose to present it, impacts its effectiveness. Let me use an example to illustrate this point. My job requires a healthy amount of air travel, which inevitably means every so often a mechanical issue pops up and everyone has to de-board the plane. In most cases when this happens, the pilot uses the intercom system to politely and apologetically explain the situation. Last March, however, on a flight from Montreal to Halifax, the pilot actually walked out of the cockpit into the middle of the plane and said, “Can everyone hear me ok? I’m really sorry folks, I’ve done everything I can to safely get this plane out of here, but there’s something really important the maintenance crew needs to check out. We are going to have to de-board.” Can you guess what the topic of conversation was among the passengers who de-boarded? Yep, you guessed it. They were enthralled with how caring the pilot was, how meaningful it was that he actually came out and talked to them in person, and, most importantly, how they hoped he would be the pilot on their new flight. Because of a simple leadership tactic, the majority of passengers had actually gained respect for this pilot, despite the fact that he was the guy who had just delayed their travel plans by at least several hours. The point here is that the facts didn’t change – only the method of communicating the facts did.
When it comes to leadership and development, numbers are only as powerful as your efforts in the social sciences allow them to be. Just launching them out over email or bringing them up in passing is much less effective than using them to build relationships and have purposeful conversations. For instance, if you want buy-in and accurate self-report measures on things like sleep and stress, take the time to educate your athletes on how these areas impact performance, and make sure they understand that they won’t be de-selected for reporting honestly. It’s great to send out a cutting-edge report, but in all honesty these reports will go to waste if you don’t begin the journey with a meaningful interpersonal interaction to connect the important dots. This is especially true when collecting longitudinal data that won’t be analyzed for a few years. In this case it’s absolutely vital to inform athletes that they won’t be receiving frequent or immediate feedback on the measures you’re collecting. Otherwise, they will likely assume the data is being used for surveillance rather than development, which can quickly diminish trust and spoil a culture.

It has also become clear to me over the past few years that the misuse of performance data can harm performance just as much as the proper use can help it. For example, a few months ago I travelled to Europe with one of Canada’s developmental national team programs to attend a world championship. Two days out from the competition, the head coach and I attended a training session with one of the athletes. The coach, with the best of intentions, decided to collect data during the session. Naturally, the athlete rushed over to see how he did after each trial, only to find out that he wasn’t doing very well – certainly not as well as he was expecting. By the end of the session, both his confidence and his eagerness had been almost completely depleted. Oddly enough, an almost identical story was told to me by a separate coach who had just returned from Rio. This coach measured all of his athletes’ performances three days before the Olympics began and cited the exact same issue. He even stated that if he could change one thing about his experience in Rio, he would not have measured anything that close to the competition and would instead have simply emphasized feel. Although these are relatively specific examples, they nonetheless highlight how important it is for us to increase our structure and planning as they pertain to performance data, even if that means using it less in some circumstances. Think of it this way – data creates feedback, feedback attracts attention, and attention drives behaviour. It’s absolutely crucial as a coach to understand that what enters the attentional field of an athlete has a major impact on how they think and feel, which in turn influences their performance. If you don’t believe me, just ask the coaches mentioned above!

Before I conclude, I want to offer up some brief future directions as discussion points:

  • A better understanding of individual philosophies. There are many different philosophies among coaches and IST members when it comes to the use of performance data. Therefore, we need to develop specific strategies to first identify, and then work more effectively with, individuals who approach things differently than we do.
  • A better use of the measures that matter most. Collecting data is exciting, and it’s moving fast, but I think we can do a better job of normalizing the measures that drive performance the most. New measures shouldn’t cast a shadow over older measures that we already know are effective. We need to take the time to validate measures and understand their limitations.
  • More rigorous procedures for data collection. We need to establish clear procedures regarding the collection and implementation of performance data as a means of ensuring athletes are fully (or at least more) informed (i.e., education sessions, consent forms, etc.). I wonder, though, would this be too slow for the competitive environment?
  • Transparency as a non-negotiable. For example, if some data is not going to be analyzed for a few years, and may even turn out to be fruitless, everyone needs to understand the plan. Further, if certain data points are going to be used for selection, carding, funding, etc., this must be made clear in advance. But can we realistically put data into perfectly defined boxes?

The great irony here is that the inches and seconds we are looking to gain through measurement can just as easily be lost by not using those measures effectively. It’s also important to remember that athletic performances, no matter how detailed our measuring capacities become, will always consist of immeasurable social, emotional, and psychological feelings that can only be attained through meaningful human interaction. Therefore, it’s time to embrace the art of sport science.

Please feel free to email me at Bryce@cscatlantic with any questions, or simply to share your perspective.

Bryce Tully, MSc
Canadian Sport Centre Atlantic Mental Performance


References

Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), 363-406.

Grenny, J., Patterson, K., Maxfield, D., McMillan, R., & Switzler, A. (2013). Influencer: The new science of leading change. McGraw-Hill Education.

Jones, L., Marshall, P., & Denison, J. (in press). Health and well-being implications surrounding the use of wearable GPS devices in professional rugby league: A Foucauldian disciplinary analysis of the normalised use of a common surveillance aid. Performance Enhancement and Health.

Jones, L., & Toner, J. (in press). Surveillance technologies as ‘instruments of discipline’ in the elite sports coaching context: A cautionary post-structural commentary. Sensoria.

Lencioni, P. (2002). The five dysfunctions of a team. New Jersey: Wiley.

Markula, P., & Pringle, R. (2006). Foucault, sport, and exercise: Power, knowledge, and transforming the self. London: Routledge.

Pink, D. H. (2009). Drive: The surprising truth about what motivates us. Riverhead Books.


Bryce Tully is a member of the sport science staff at the Canadian Sport Centre Atlantic as a mental performance consultant. Bryce works with various national and provincial training groups, including athletes from Canoe Kayak Canada, Canada Basketball, Hockey Canada, and the Nova Scotia Canada Games programs. Bryce received his MSc in Kinesiology from Dalhousie University, where his research explored the topics of attention, imagery, and specificity of practice. Bryce has taught applied sport psychology at Acadia University, and is an instructor for the Advanced Coaching Diploma in collaboration with the Coaching Association of Canada. Bryce’s current interests are biofeedback, building a high performance culture, and the periodization of mental training.