Athlete monitoring practices in elite sport in the United Kingdom

ABSTRACT Athlete monitoring systems (AMS) aid performance optimisation and support illness/injury prevention. Nonetheless, limited information exists on how AMS are employed across elite sports in the United Kingdom. This study explored how athlete monitoring (AM) data, in particular athlete self-report measures, were collected, analysed and disseminated within elite sports. Thirty elite sports practitioners representing 599 athletes responded to a survey on their AM methodologies. The majority (83%, n = 25) utilised an AMS, of whom 84% (n = 21) stated the collection of their AMS data was underpinned by a scientific rationale. Athlete self-report measures (ASRM) were the most commonly employed tool, with muscle soreness, sleep and energy levels amongst the most frequently collected measures. The ubiquitous use of custom single-item ASRM resulted in considerable variability in the questionnaires employed, potentially compromising questionnaire validity. Feedback processes were largely felt to be ineffective, with 44% (n = 11) of respondents indicating that athletes did not receive sufficient feedback. Some respondents indicated that AMS data were never discussed with athletes and/or coaches. Overall, significant disparities exist in the use of athlete monitoring systems between research and elite sports practice, and the athlete, coach and practitioner experience of monitoring risks being poor if these disparities are not addressed.


Introduction
Athlete monitoring systems (AMS) typically capture information relating to an athlete's response to training and training load. A survey of practitioners in elite Australasian sport found that 91% utilised an athlete monitoring system (Taylor et al., 2012), with injury prevention, performance and training programme optimisation amongst the most important stated purposes of an AMS (Halson, 2014; McGuigan et al., 2021). AMS can play an important role in elite sport, as lost training days through illness or injury are a significant issue. At any given time, 36% of elite athletes have a health problem, with 15% reporting substantial health problems weekly that may negatively impact sporting performance (Clarsen et al., 2014).
The use and implementation of AMS have therefore received significant research attention to mitigate the risks of athlete maladaptation (Saw et al., 2017). However, current cross-sport trends in monitoring, recording and analysing elite athletic training and performance either go unreported or have primarily focussed on Australasian or invasion sports (McGuigan et al., 2021; Taylor et al., 2012), with limited insights available for a more diverse range of sports, or for sports in the United Kingdom (UK). Without a clear understanding of current monitoring practices across sports in the UK, it is challenging to provide evidence-based guidance for AMS in elite sport.
Athlete monitoring systems have been reported to include a variety of performance, laboratory or field tests and athlete self-report measures (ASRM) specific to the sport (McGuigan et al., 2021; Taylor et al., 2012). Of 55 respondents to one survey (Taylor et al., 2012), customised ASRM were employed by 84% and collected daily by 55% of practitioners working in elite sport. In contrast, validated ASRM, e.g., the Acute Recovery Stress Scale (Kölling et al., 2020), were most frequently reported in published research (Drew & Finch, 2016; McGuigan et al., 2020) rather than in applied practice (Taylor et al., 2012). The lack of similar data on athlete monitoring practices in elite sport in the UK makes it unclear whether the adoption of custom single-item ASRM follows the same pattern as observed in other contexts (McGuigan et al., 2021; Taylor et al., 2012).
Reasons for the use of custom single-item ASRM have included their ease of administration, their customisation and their proposed sensitivity to changes in athlete health (Drew & Finch, 2016; Taylor et al., 2012). The use of custom single-item, rather than validated, ASRM questionnaires in elite sport has raised concerns pertaining to their validity (Duignan et al., 2020; Jeffries et al., 2020; Saw et al., 2017); however, some researchers have argued that custom single-item ASRM may still be sensitive to changes in athlete health (Burgess, 2017). The response scales used in ASRM reportedly vary, with 1-5 point (Crowcroft et al., 2017) and 1-10 point Likert scales (Montgomery & Hopkins, 2013) and visual analogue scales employed (Gastin et al., 2013). The customisation of single-item ASRM has also led to variance in how self-report questions are constructed and posed to athletes (Crowcroft et al., 2017; Gastin et al., 2013), with no reported consensus on best practice.
The most popular single-item ASRM have reportedly included muscle soreness, perceptions of wellbeing, and sleep quality and quantity (Jeffries et al., 2020; Taylor et al., 2012). Most of these variables have been reported as responsive to changes in athlete health or fatigue status; however, sleep quality has been reported as unresponsive (Saw, Main, Gastin et al., 2015a), with no significant effect on injury incidence (Dennis et al., 2016). AMS data, such as single-item ASRM, are primarily collected through mobile devices (McGuigan et al., 2021), but it is unclear if online data collection positively impacts AMS adherence and engagement. Poor adherence to AMS via mobile devices has been reported where limited or no support from practitioners is available to athletes (AE Saw et al., 2015b). Poor adherence is further exacerbated when technological issues inhibit or complicate data entry (AE Saw et al., 2015c). Consequently, the use of mobile devices might be better viewed as a tool to trigger wider conversations between the athlete and sports personnel (Saw et al., 2017) rather than as a panacea for AMS adherence problems.
Within athlete monitoring datasets, the need to separate signal from noise, discern meaningful change and interpret practical significance has led to a move away from assessing statistical significance and towards methods such as identifying the smallest worthwhile change (Hopkins, 2004). Further, while there has been recent criticism of data analysis tools such as the acute to chronic workload ratio, there has also been discussion of best-practice methodologies for AMS datasets (Saw et al., 2017; Thorpe et al., 2017), with recommendations for data analysis at the individual (Atkinson et al., 2019; Hecksteden et al., 2015) and group level (Thornton et al., 2019). Some studies point towards some of these methods being employed in elite sport (Akenhead & Nassis, 2016; Crowcroft et al., 2017; Gallo et al., 2017), but it is unclear if this shift towards contemporary statistics in applied practice is a result of discrete research projects meant for publication, or if the data analysis changes are embedded in the day-to-day practice of practitioners. This lack of clarity is exacerbated by little current insight into data analysis practices for AMS in elite sport in the UK.
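As a purely illustrative sketch (the scores and loads are hypothetical, and the 0.2 × SD convention follows common descriptions of these methods rather than any dataset from the studies cited), the smallest worthwhile change and a rolling-average acute to chronic workload ratio might be computed as follows:

```python
import statistics

def smallest_worthwhile_change(scores, factor=0.2):
    """SWC estimated as a fraction (conventionally 0.2) of the
    between-athlete standard deviation (after Hopkins, 2004)."""
    return factor * statistics.stdev(scores)

def acute_chronic_ratio(daily_loads, acute_days=7, chronic_days=28):
    """Rolling-average acute:chronic workload ratio over daily loads."""
    acute = sum(daily_loads[-acute_days:]) / acute_days
    chronic = sum(daily_loads[-chronic_days:]) / chronic_days
    return acute / chronic

# Hypothetical data: squad jump-test scores (cm) and 28 days of one
# athlete's session loads (arbitrary units).
squad_scores = [52.1, 49.8, 51.3, 50.5, 48.9, 53.0]
loads = [400, 420, 380, 450, 410, 430, 390] * 4

swc = smallest_worthwhile_change(squad_scores)
acwr = acute_chronic_ratio(loads)
```

A change smaller than `swc` would be treated as trivial, while an `acwr` well above 1.0 is often read as a load spike, although, as noted above, the ratio itself has attracted criticism.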
Once AMS data has been analysed, ensuring athletes receive AMS feedback is important for their continued engagement with the AMS (Neupert et al., 2019) and a step towards developing a supportive AMS culture (Saw et al., 2017). Current AMS feedback practices within elite sport are under-reported, with descriptions of feedback generally occurring daily (Taylor et al., 2012) or sometimes not at all (Barboza et al., 2017). Further clarification of practitioners' approaches to feedback within elite sport are required to enable evidence-based feedback practices to promote athlete AMS engagement.
Overall, there is a lack of clarity on athlete monitoring practices in elite sport in the UK, from initial data collection through to data analysis and feedback. This is problematic, as it is unclear whether issues observed elsewhere, such as the use of custom metrics with unclear validity (Jeffries et al., 2020) and poor athlete adherence (Barboza et al., 2017), risk negatively impacting the ability to prevent athlete maladaptation (Halson, 2014). As the use of athlete monitoring and ASRM is endemic in elite sport, this study aims to give an overview of the athlete monitoring methodologies employed by practitioners working at the coalface of elite sport and to highlight any areas of best practice or concern.

Survey development
Prior to being sent to participants, the survey was reviewed for content validity (Stoszkowski & Collins, 2016) by three applied sports science practitioners (each with >5 years of elite sport experience) and a university sports science academic (>10 years of higher education experience). The survey was also informed by previous research in this area (Taylor et al., 2012). This process resulted in several modifications, with one question removed, two added and several questions altered to enhance their readability. The survey took approximately 20 minutes to complete and was split into broad categories, with the number of questions in each category given in brackets: 1. Background information and data collection (16); 2. Data analysis and feedback (10); 3. Adherence (10); 4. Open-ended questions on monitoring, routed according to whether an AMS was in place, 4a. (7), or not, 4b. (6). For expediency, the majority of questions involved selecting from drop-down menus or checking boxes, for example: How do the athletes record the majority of their self-report wellbeing data? Potential responses included: via mobile phone app, pen and paper, and 'other', where free text was enabled to allow description of the method used.
Closed questions used Likert response scales (Vagias, 2006), for example: Do you feel athletes receive sufficient feedback from the athlete monitoring they complete? Responses were rated on a 5-point Likert scale, with each point anchored from Strongly Disagree through to Strongly Agree (Vagias, 2006). Open-ended questions allowed free-text responses, such as: Please briefly explain how, if relevant, athlete monitoring could be improved in your sport.
Ethical approval was granted by the local University Ethics Committee. Access to participants was agreed through gatekeepers at the relevant organisations. All respondents received electronic information and a full written explanation of the study and were subsequently given the opportunity to give electronic informed consent to participate after they had viewed the study information. Any respondents that indicated they did not consent were directed towards the end of the study. Respondents without an AMS were asked for their opinions on this prior to finishing the survey.

Participants and data analysis
Seventy-five practitioners working with national team athletes (tiers 3-5) (McKay et al., 2022) were invited to participate in a secure online survey (Online Surveys, JISC, Bristol, UK). Participants received a password-protected link to the survey from gatekeepers at their relevant organisations. Two email reminders were sent, at 2 and 4 weeks after the initial invite. The survey response rate was 40% (n = 30). Data from the survey were collated and presented as percentages and frequencies (n). For open-ended questions, key themes were manually coded, analysed and cross-checked with another university researcher in line with previously described practice (Braun & Clarke, 2006). Direct indicative quotes are presented, and participants are identified using codes, e.g., P1.

Monitoring purpose and background information
Thirty sports science and medicine practitioners working across 14 sports completed the survey. The sports represented included: Athletics, Para Athletics, Boxing, Canoeing (sprint and slalom), Para Canoeing, Cycling and Para Cycling, Gymnastics, Hockey, Judo, Rowing, Rugby 7s, Sailing, Swimming, Taekwondo and Triathlon. The sports science and medicine practitioners had 8 ± 5 years (mean ± SD) of experience working in tier 3-5, i.e., national team or high-performance, sports (McKay et al., 2022) and collectively worked with 599 senior national team athletes. Each respondent worked with a different squad within their designated sporting organisation.
AMS were employed by 83% (n = 25) of respondents, of whom 84% (n = 21) felt that there was a clear implementation strategy underpinned by scientific theory. All respondents without an AMS in place (n = 5) indicated a willingness to implement one. Responses to open-ended questions indicated that poor athlete buy-in, logistical issues and being in the process of planning a new AMS were the primary reasons given for having no AMS in place.
I believe we would benefit [from an AMS]. I feel athlete compliance is the issue. (P3)

Remote support to a high volume of athletes [prevents monitoring]. However, we have plans to monitor in the future. (P15)

While 84% (n = 21) of respondents indicated that there was a clear rationale for collecting their athlete monitoring data, 12% (n = 3) reported that there was an insufficient rationale, with 4% (n = 1) unsure:

What are we trying to get out of the data? (P2)

We have some mixed messages coming from managers/coaches/support staff. (P23)

The most common rationales for having an AMS were to reduce illness and injuries, 36% (n = 9), and to maintain or optimise performance, 36% (n = 9).
Respondents indicated that they collected a variety of measures: 96% collected ASRM (Table 1), with performance tests, the second most frequently collected measure, gathered by 84% of respondents. Sleep quality, illness/injury incidence, muscular soreness, sleep duration and energy (Figure 1) were the most widely used ASRM dimensions. Likert scales were the primary method used in ASRM (84%, n = 21), with 5-point (38%, n = 8), 7-point (5%, n = 1) or 10-point (57%, n = 12) scales employed. Other response scales were used by 12% (n = 3) of respondents, including percentages and a "bespoke" scale, with 4% (n = 1) unsure of what method was used. Over half of respondents (57%, n = 12) were unsure why their response-scale length had been selected, with one respondent reporting:

Five seems too little [points on the Likert scale], 10 gives a good range. (P2)

The remaining respondents indicated via open-ended questions their reasons for their response-scale choice: the scale being dictated by the software used, or a feeling that their chosen scale gave them sufficient variance and measure sensitivity.

Data analysis and feedback
Respondents reported a range of methods to assess meaningful change in their data, dependent upon data type. These included standard deviations, 20% (n = 5); raw scores, 16% (n = 4); acute to chronic training load ratios, 4% (n = 1); and the smallest worthwhile change, 12% (n = 3). While 76% (n = 19) of respondents had a defined approach to assess meaningful change within datasets, 24% (n = 6) did not, or were unsure what method was used. One respondent indicated that their analysis method differed: Depending on coach preferences of feedback methods (P26). Where no typical analysis method was reported, respondents indicated that:

Data is [used] as a conversation starter with coaches. (P2)

[Data is] generally assessed by coach visual inspection. (P9)
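In its simplest form, the standard-deviation approach reported by some respondents might resemble the sketch below. This is illustrative only, not any respondent's actual pipeline; the 1.5 SD flagging threshold and the ratings are hypothetical.

```python
import statistics

def flag_asrm(history, today, threshold=1.5):
    """Return today's z-score against the athlete's own history and
    whether it falls more than `threshold` SDs below their mean."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    z = (today - mean) / sd
    return z, z < -threshold

# Hypothetical 10-point "energy" ratings for a single athlete
history = [7, 8, 7, 6, 8, 7, 7, 8]
z, flagged = flag_asrm(history, today=4)  # today's rating is unusually low
```

A flag of this kind would then serve as a conversation starter with the athlete or coach, rather than a decision in itself.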
Overall, 44% (n = 11) of respondents felt that there was insufficient feedback given to athletes, with 20% (n = 5) undecided and 36% (n = 9) feeling that feedback was sufficient. Processes to feed back AMS information to athletes were in place for 84% (n = 21) of respondents. Of the 16% (n = 4) with no feedback process in place, all stated that their athletes did not receive sufficient feedback. Feedback to athletes was primarily provided face-to-face, 57% (n = 12), and by email reports, 24% (n = 5), with the integral feedback dashboard of the Performance Data Management System (Brownlow & McCaig, 2021) used by 9% (n = 2) of respondents, and the remaining 9% (n = 2) using presentations or bespoke written feedback. Table 2 outlines how frequently respondents discussed AMS data within their sport, with the most frequent discussions occurring within the multi-disciplinary team (MDT) on a daily/weekly basis, 92% (n = 23). In comparison, daily/weekly conversations were less frequently held with athletes, 68% (n = 17), and coaches, 64% (n = 16). Athlete monitoring data were reportedly never discussed with athletes and coaches by 4% (n = 1) and 8% (n = 2) of respondents, respectively.

Discussion
Published data on AMS practices in elite sports to date have only provided "snapshots" (Saw et al., 2018), or have focussed on Australasian or team-sport contexts when exploring daily AMS practices in elite sports (McGuigan et al., 2021; Taylor et al., 2012). This study therefore gives an overview of practices in athlete monitoring data collection, analysis and feedback in elite sport in the UK, highlighting current trends and areas that require further consideration.
No AMS was employed by 17% of respondents to this survey. This is higher than the 9% reported by personnel working in elite Australasian sport (Taylor et al., 2012), but could still be an underestimate due to non-response bias. All respondents without an AMS expressed a desire to implement one but, consistent with previous findings (AE Saw et al., 2015c), reported that poor athlete engagement or logistics prevented it. Nonetheless, the vast majority (83%) of sports did have an AMS in place, although 16% of respondents with an AMS reported the lack of a clear implementation strategy underpinned by scientific theory.
Respondents gave broadly equal weighting to the AMS rationales of preventing injury/illness and optimising performance, consistent with previously reported data (Taylor et al., 2012). A lack of a clear rationale for the AMS was described by 12% of respondents, which, to the authors' knowledge, has not been previously reported. Concerns regarding the ability of AMS to effectively deliver injury/illness prediction, prevention and performance optimisation have been discussed (Coyne, Sands et al., 2017), and perhaps these concerns mirror the lack of a clear AMS rationale indicated by some users in this study. A clear rationale for an AMS, potentially explored through a needs-analysis, is part of creating a successful AMS, with the rationale clearly communicated within the sporting organisation (Saw et al., 2017).

Athlete self-report of illness/injury status was not collected by 20% of respondents. The inclusion of self-reported illness/injury data may produce mixed results if not carefully assessed, as self-report data are more susceptible to misdiagnosis than the opinions of clinicians (Gosling et al., 2008). Further, framing an AMS as an injury reduction tool has been argued to be risk averse in a high-performance environment (West et al., 2020). However, an AMS can provide a valuable avenue for athletes to communicate any perceived health issues to their support team (Roos et al., 2013; Starling & Lambert, 2018). Furthermore, while a range of personnel collected AMS data, 44% of respondents did not indicate that athlete monitoring data were collected by their medical team. These findings contradict previous research that has demonstrated a close relationship between the medical team and elite athletes, with periodic health reviews and illness/injury monitoring being standard practice (Dijkstra et al., 2014).
These apparent differences could be due to misinterpretation of the survey questions but, more likely, result from confidential medical information being kept separately from day-to-day athlete monitoring data. While privacy may demand the separation of these datasets, it risks creating a siloed approach to both data management and thinking, which, unless addressed, could prevent a holistic understanding of athlete health and training status (Dijkstra et al., 2014).

Data collection for AMS was primarily completed via mobile devices (72% of respondents). While this might be an intuitive approach, the efficacy of mobile devices as a data collection modality is unclear. In the health literature, some research has shown that mobile platforms improve patient adherence rates to reporting chronic pain in comparison to pen and paper (Stone et al., 2003), whilst other work has shown a mix of questionnaire delivery modes to be more effective (Zuidgeest et al., 2011). No similar analyses of data collection methods appear to have been conducted in elite sport, however, and while technology can aid data collection and analysis, it is unclear whether this technological shift has improved engagement with AMS, especially in the light of poorly reported athlete adherence (Barboza et al., 2017). Athlete monitoring data should trigger wider conversations between athletes and sports personnel (Barboza et al., 2017; Bourdon et al., 2017); if instead the use of online monitoring technology is perceived as a barrier to discussion, as hostile surveillance, or as increasing the effort involved to report concerns (Manley & Williams, 2019), the role of technology in relation to athlete monitoring should be re-assessed.
No respondents employed published and validated ASRM, e.g., the REST-Q (Kellmann & Kallus, 2016). Instead, 96% of respondents used a customised ASRM consisting of multiple single-item questions. Ease of access to the customisable Performance Data Management System (Brownlow & McCaig, 2021) may have facilitated this trend, but the popularity of the single-item ASRM tool has also been observed in other contexts (Duignan et al., 2020; Jeffries et al., 2020). Where lower utilisation rates of single-item ASRM have been observed (McGuigan et al., 2021), this has been attributed to the time and knowledge constraints of the users.
There has been recent debate on the validity of the single-item ASRM (Duignan et al., 2020; Jeffries et al., 2020). Previously, some researchers have suggested they are sensitive to changes in training load (Burgess, 2017). However, this statement appeared to be based on studies in a limited range of sports where both the questions and response scales differed (Buchheit et al., 2013; Gastin et al., 2013; Montgomery & Hopkins, 2013). Therefore, it is unclear if single-item ASRM would be sensitive beyond this context. Given this lack of clarity, it is recommended that practitioners undertake their own validation exercises (Kyprianou et al., 2019; Saw et al., 2017; Windt et al., 2019), as the evidence base underpinning the use of single-item ASRM remains unclear and requires further investigation (Jeffries et al., 2020).
Similar to findings reported elsewhere (Duignan et al., 2020; Gastin et al., 2013), the most commonly employed single-item ASRM were sleep quality, muscle soreness and illness/injury status. Subsequent research has, however, demonstrated poor responsiveness of sleep quality to changes in athlete training status (Saw, Main, Gastin et al., 2015a). The inconsistency between the widespread use of sleep quality as an ASRM in elite sport and its reported lack of sensitivity in that particular study might be explained by the questionable validity and reliability of the REST-Q sleep sub-scales, issues which have been discussed elsewhere (Davis et al., 2007).

The lack of clear guidelines for practitioners constructing custom ASRM risks vagueness or imprecision in question and response design, contrary to best practice in this area (Hughes, 2018). Respondents in this study indicated that they used a range of response scales, with most using a 10-point Likert scale. Despite the majority (57%) of respondents indicating that they did not know why their scale length had been chosen, longer scales were believed by respondents to increase the sensitivity of their measures. Research has, however, demonstrated that increasing Likert scales beyond seven response points does not increase reliability or validity, as it exceeds the discriminatory capacity of the individual. Instead, the optimal number of response points has been suggested to lie between four and seven, anchored by clear written descriptors (Lozano et al., 2008).
Discussions pertaining to best practice for analysing athlete monitoring data have proposed a variety of methods dependent upon the data type, including exponential and rolling averages for training load data (Menaspà, 2016; Murray et al., 2017) and linear and non-linear modelling (Robertson et al., 2017). This study has given a high-level picture of which data analysis methods were most commonly utilised in elite sport. It appeared that few respondents used the analytical approaches discussed above, with standard deviations and raw scores most commonly used to assess meaningful change. While this may superficially indicate a failure to follow best practice, it is perhaps a result of time, resource and/or knowledge constraints, or of practitioners needing to adapt their data analysis style to suit coach-friendly requirements (Sedgwick, 2014). It is important to note that, given the broad array of sports represented within this survey, a wide variety of monitoring measures were employed. It was therefore deemed impractical and, moreover, beyond the scope of this study to explore how each individual metric was analysed.
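To illustrate the rolling and exponentially weighted averages for training load mentioned above, a minimal sketch follows; the daily loads are hypothetical and the EWMA uses the common smoothing factor alpha = 2/(span + 1), rather than any specific formulation from the cited papers.

```python
def rolling_average(daily_loads, window=7):
    """Simple mean of the most recent `window` daily loads."""
    return sum(daily_loads[-window:]) / window

def ewma(daily_loads, span=7):
    """Exponentially weighted moving average: recent days count more."""
    alpha = 2 / (span + 1)
    avg = daily_loads[0]
    for load in daily_loads[1:]:
        avg = alpha * load + (1 - alpha) * avg
    return avg

# Hypothetical week of session loads (arbitrary units)
loads = [300, 320, 310, 500, 480, 300, 310]
ra = rolling_average(loads)  # weights all 7 days equally
ew = ewma(loads)             # reacts more to the most recent sessions
```

Because the EWMA discounts older sessions, it responds faster to recent spikes or tapers than the equally weighted rolling mean.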
Feedback is a cornerstone of an AMS, but processes to feed data back to athletes were not always effective. Athlete feedback was felt to be insufficient by 44% of respondents, consistent with findings reported elsewhere (Barboza et al., 2017). Respondents had the most frequent conversations regarding athlete monitoring data with the MDT, followed by the athlete, then the coach, but some respondents never discussed athlete monitoring data with athletes (4%) or coaches (8%). This finding indicates a significant failure in the communication of athlete monitoring data, which may result from a lack of buy-in from coaches/athletes (Neupert et al., 2019) or from practitioners lacking the time, resources or confidence to analyse and discuss data (Akenhead & Nassis, 2016).
Limitations of this study include non-response bias and transferability (Sedgwick, 2014). Non-response bias was mitigated through survey response deadlines, reminders, and email invites originating from within each organisation to foster trust in how the data would be used. Additionally, closed questions were used to keep the survey brief (Haunberger, 2011). Wider transferability can be cautiously presumed within elite sport, as the survey elicited responses from practitioners working in a broad range of Olympic and Paralympic sports (Smith & McGannon, 2018), with some of the study findings supported by previous research (Taylor et al., 2012). An improved response rate would, however, further strengthen confidence in these findings.
Future research may consider why certain data analysis methods are employed in applied sport. This information can then be used to support practitioners in improving their data analysis practices, particularly when it comes to common statistical issues such as poor or missing data. In addition, future studies may want to explore between-sport differences in athlete monitoring practices, such as Paralympic versus Olympic sports.

Conclusion
Consistent with previous research (Duignan et al., 2020; Taylor et al., 2012), this study found that practitioners in elite sport in the UK widely employed custom AMS, favouring these over published and validated athlete monitoring methods. Some respondents also indicated that their AMS was implemented without an evidence-based approach or a clear rationale for its use, a novel finding given the broad range of sports surveyed. This study has also highlighted continued uncertainty regarding the sensitivity of custom single-item ASRM, divergence from the best-practice data analysis methods of published research, poor perceived athlete adherence, and fragmented feedback processes. In the light of these issues, questions remain on how to ensure AMS remain useful and practical for stakeholders, whilst simultaneously enhancing an athlete's experience of, and ultimately performance in, the elite sport environment.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Funding
The author(s) reported there is no funding associated with the work featured in this article.

Practical implications
• Athlete monitoring systems should have a clearly articulated rationale for their use, supported by an evidence-based implementation strategy.
• Expectations of athlete monitoring data feedback and use should be clearly outlined and agreed (particularly between coach-athlete-practitioners) to prevent poor communication or expectation mismatches.
• Assess and address whether research-advocated best practice is being followed in custom question design, response-scale design and data analysis.