More than metrics: The role of socio-environmental factors in determining the success of athlete monitoring

ABSTRACT The perceived value of athlete monitoring systems (AMS) has recently been questioned. Poor perceptions of AMS are important because, where practitioners lack confidence in monitoring, their ability to influence programming and performance is likely diminished. To address this, researchers have primarily sought to improve factors related to monitoring metrics (e.g., validity) rather than socio-environmental factors (e.g., buy-in). Seventy-five practitioners working with Olympic and Paralympic athletes were invited to take part in a survey about their perceptions of AMS value (response rate: 40%, n = 30). Fifty-two per cent (n = 13) were confident in the sensitivity of their athlete self-report measures, but only 64% (n = 16) indicated their monitoring was underpinned by scientific evidence. A scientific base was associated with improved athlete feedback (rS(23) = 0.487, p = 0.014*) and feedback correlated with athlete monitoring adherence (rS(22) = 0.675, p < 0.001**). If athletes did not complete their monitoring, 52% (n = 13) of respondents felt performance might be compromised. However, most respondents (56%, n = 14) had worked with internationally successful athlete(s) who did not complete their monitoring. While AMS can be a useful tool to aid performance optimisation, its potential value is not always realised. Addressing socio-environmental factors alongside metric-related factors may improve AMS efficacy.


Introduction
Athlete monitoring systems (AMS) are tools used by coaches, multi-disciplinary teams and athletes to collect, analyse and provide information and feedback on the internal and external loads athletes are exposed to and their responses to them (Impellizzeri et al., 2019). Typically, practitioners (sport science and medicine personnel) use AMS with the aim of decreasing injury incidence and enhancing athletic performance, and use the data gathered to support coaches' decision-making (Halson, 2014; Saw et al., 2015c). Recently, aspects of athlete monitoring, such as customised athlete self-report measures (ASRM) (Jeffries et al., 2020), its use as an injury predictor (West et al., 2021), and analysis methods, i.e., the acute to chronic workload ratio (Impellizzeri et al., 2020), have brought AMS under scrutiny. These issues have led some researchers to report that the evidence supporting the efficacy of monitoring systems is not strong (Coyne et al., 2018; Heidari et al., 2018). Poor AMS efficacy matters, as it can result in a myriad of issues (Akenhead & Nassis, 2016; Neupert et al., 2019; Weston, 2018). In particular, practitioners may lack confidence in their AMS to deliver a primary objective: detecting changes in athletic training status and subsequently using this information to improve athletic performance (Halson, 2014). Where practitioners perceive AMS efficacy to be poor, the likelihood they can positively influence training programme decisions is diminished, and the subsequent role of the AMS within the sporting organisation risks becoming unclear (Coyne et al., 2018).
The key reported barriers to AMS efficacy can be broadly split into two categories (Saw et al., 2015b). The first category, metric-related factors, includes measure reliability, validity and scientific underpinning, data analysis and equipment choice. The second category, socio-environmental factors, encapsulates factors external to the metric, spanning both environmental and cultural aspects, e.g., stakeholder buy-in, culture and practitioner expertise (Saw et al., 2015b). To date, research in the area of athlete monitoring has primarily involved a more mechanistic investigation of the metric-related factors, i.e., the science supporting what to monitor, and how best to execute data collection to improve scientific rigour (Bailey, 2019). This focus is important but, arguably, has been driven in part by practitioners' positivist research leanings (Vaughan et al., 2019) and has come at the expense of investigating socio-environmental factors. Consequently, the value of socio-environmental factors, and of interventions to improve AMS efficacy such as education sessions to upskill stakeholders (McGuigan et al., 2023), punitive consequences to improve athletes' adherence (Saw et al., 2015b), or strategies to foster trust and improve buy-in to monitoring, has yet to be fully established, despite their face validity (McGuigan et al., 2023; Neupert et al., 2019).
Socio-environmental factors tend to be less tangible or easily measurable in comparison to metric-related factors, perhaps explaining why they have received less research attention.
Metric-related factors, including AMS design, content and time to complete the AMS, have been previously ranked by athletes as the top three barriers to their compliance and thus AMS efficacy (Saw et al., 2015a). In comparison, research examining practitioner perceptions of AMS efficacy has highlighted socio-environmental factors as the primary cause of poor AMS efficacy (Akenhead & Nassis, 2016; Neupert et al., 2019). For example, in professional soccer, the top two factors negatively impacting practitioner confidence in athlete monitoring were reported as limited human resources and poor coach buy-in (Akenhead & Nassis, 2016). Poor measure validity, a metric-related factor, was ranked third. These disparities in viewpoints likely reflect the different roles and responsibilities of these stakeholders, and the degree of influence they may have over the metric or socio-environmental factors. Few studies outside of professional sport (Akenhead & Nassis, 2016; Weston, 2018) have explored practitioner views of AMS efficacy or focussed on AMS socio-environmental factors. It is, however, increasingly apparent that the nature of the inter-personal relationships formed between practitioners and athletes forms a key part of how an athlete engages with monitoring (McCall et al., 2023).
Stakeholder buy-in, a socio-environmental factor, is vital for the success of athlete monitoring (Akenhead & Nassis, 2016). While the term "buy-in" is perhaps implicitly understood by sports scientists, it has been poorly defined in relation to monitoring. In organisational change, buy-in refers to a continuum of cognitive and behavioural activities related to an individual's commitment to change (Mathews & Crocker, 2014). Buy-in to an AMS could therefore be described as an individual's cognitive (attitudes and beliefs) and behavioural (actions) commitment to the AMS. Examples could include perceptions of AMS value, athlete AMS adherence or reporting truthfulness, and responsiveness of the coaches/practitioners to meaningful changes in training status. Buy-in can therefore arguably provide a general indication of the value stakeholders place in their AMS, and will likely be influenced by both metric-related and socio-environmental factors.
Athlete buy-in to an AMS is central to its success, but attaining buy-in can be problematic (Neupert et al., 2019), and athlete adherence can vary widely (Barboza et al., 2017; Cunniffe et al., 2009). Possible reasons for poor athlete buy-in include engagement differences by sport, variations in organisational infrastructure, inadequate feedback and the dynamics of the coach/athlete/practitioner relationship (Barboza et al., 2017; Jowett & Cockerill, 2003; McCall et al., 2023; Saw et al., 2015a). Indeed, where an AMS has been executed poorly, the consequences can extend beyond problematic athlete buy-in, and potentially negatively impact athlete career progression and mental health (Manley & Williams, 2022). Accordingly, it is important that socio-environmental factors such as buy-in, and how to foster it, particularly in the context of supporting the athlete (McCall et al., 2023), are carefully considered when planning and implementing an AMS.
Coach buy-in to an AMS is also vital, as coaches are primarily responsible for modifying athlete training programmes, and AMS data can help inform this process (Halson, 2014). Coaches do, however, need to assimilate considerable amounts of information before making programmatic decisions, including their own expertise, insights, cognitive biases and understanding of the athlete's training status and history (Collins et al., 2016). Athlete monitoring information therefore forms part of a broader, more complex picture which may influence coach buy-in to AMS. Previously, poor coach buy-in to sports science has been attributed to a failure to translate scientific findings into practice, and to monitoring metrics usurping coaching craft (Buchheit, 2017). Given the negative attitudes of some athletes and coaches towards AMS, the reported problems with buy-in to athlete monitoring specifically are unsurprising (Akenhead & Nassis, 2016), and more consideration is required to understand why this might be the case.
There are growing concerns about the effectiveness of athlete monitoring, and evidence that this may subsequently pose problems establishing both the clarity of purpose and the utility of monitoring for athletes, practitioners and coaches (Coyne et al., 2018; Jeffries et al., 2020). Understanding the perceptions practitioners have of their AMS, and how these are influenced by socio-environmental factors in particular, is an important step towards improving athlete monitoring practices, given the focus on metric-related factors to date (Akenhead & Nassis, 2016). Thus, the aim of this study was to investigate elite sport practitioners' perceptions of their AMS efficacy, with a particular focus on how socio-environmental factors, such as buy-in, may impact practitioner perceptions of AMS efficacy.

Participants and methodology
Seventy-five elite sport practitioners who worked with tier 3-5 Olympic and Paralympic athletes (national level to world class) in the United Kingdom (McKay et al., 2022) were invited to participate in an online, password-protected survey in 2017/18 (Online Surveys, JISC, Bristol), adhering to web survey guidelines (Appendix 1) (Eysenbach, 2004). The survey took ~20 min, with questions primarily answered by checkboxes or Likert scale responses (Neupert et al., 2022). Reminders were sent out after two weeks, and the survey return rate was 40% (n = 30). Respondents were selected through stratified and convenience sampling, and access was gained through gatekeepers at the respective sporting organisations. All respondents received a full written explanation of the study and were given the opportunity to voluntarily agree to participate after viewing the study information. Ethics approval was granted by the Faculty of Business Law and Sport Ethics committee, University of Winchester (Reference: BLS/17/26).

Statistics
As the Likert data were ordinal and not normally distributed, Spearman's correlation coefficient was used to test the strength of relations (SPSS, V26, Armonk, NY: IBM Corp). Significance was set at 0.05 and Bonferroni corrected to 0.017. For clarity, p values reported in bold are significant at the corrected alpha level, and are otherwise reported in non-bold as *p < .05 and **p < .001. The data were separated into two categories which have been previously identified as important for AMS success (Saw et al., 2015b): first, the relation between the scientific underpinning of an AMS and practitioner confidence and actions related to their AMS; and second, factors influencing AMS engagement. Free text data were grouped into key themes and are represented by indicative quotes.
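As an illustration of this analysis, a minimal sketch of a Spearman's rank correlation with a Bonferroni-corrected alpha is shown below. The Likert responses here are invented for demonstration (they are not the study's data), and the corrected threshold of 0.017 implies roughly three planned comparisons (0.05 / 3 ≈ 0.017), which the paper does not state explicitly.

```python
# Sketch of the reported analysis: Spearman's rho on paired ordinal
# Likert responses, tested against a Bonferroni-corrected alpha.
# All data below are illustrative, NOT the study's responses.
from scipy.stats import spearmanr

# Hypothetical 1-5 Likert ratings from 25 respondents, so df = n - 2 = 23
scientific_underpinning = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4, 4, 5, 2, 3, 4,
                           5, 4, 3, 5, 4, 2, 3, 4, 5, 3]
asrm_confidence = [4, 4, 3, 3, 5, 2, 4, 4, 3, 4, 5, 5, 1, 3, 4,
                   4, 4, 2, 5, 4, 2, 3, 3, 5, 3]

rho, p = spearmanr(scientific_underpinning, asrm_confidence)

# Bonferroni correction: divide alpha by the number of comparisons
# (assumed to be 3 here, since 0.05 / 3 ~= 0.017 as reported)
alpha_corrected = 0.05 / 3
df = len(asrm_confidence) - 2

print(f"rS({df}) = {rho:.3f}, p = {p:.3f}, "
      f"significant at corrected alpha: {p < alpha_corrected}")
```

The same rho and p values can be obtained in SPSS via a bivariate Spearman correlation; the manual alpha division simply reproduces the familywise error control described above.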

Background information
Thirty sports science and medicine practitioners completed the survey, each representing a discrete discipline across 14 different Olympic and Paralympic sports, including Athletics, Para Athletics, Boxing, Canoeing (sprint and slalom), Para Canoeing, Cycling and Para Cycling, Gymnastics, Hockey, Judo, Rowing, Rugby 7's, Sailing, Swimming, Taekwondo and Triathlon. Respondents had 8 ± 5 years (mean ± SD) experience of working in elite sport and collectively worked with 599 senior internationally competitive athletes. The majority (83%, n = 25) of respondents employed a customised monitoring system.

Practitioners' perceptions of their athlete monitoring systems
Just over half (52%, n = 13) of respondents were quite or very confident in the sensitivity of their athlete self-report measures (ASRM) to detect meaningful change, with 36% (n = 9) neutral and 12% (n = 3) not confident. Respondents reported that scientific studies underpinned their ASRM in 64% (n = 16) of cases, with 24% (n = 6) disagreeing and 12% (n = 3) neutral. A trend was apparent between respondents expressing confidence in their ASRM sensitivity and reporting that scientific evidence supported their ASRM (rS(23) = 0.398, p = 0.049*). Reasons respondents gave for poor confidence in ASRM included untruthful athlete reporting practices and individual variability complicating the identification of meaningful change.
[My confidence in my ASRM] varies on an individual basis, it all depends on the athlete understanding the need for this system, and them being honest. (P2)

Respondents suggested several ways to address their lack of ASRM confidence, including the production of best practice guidelines and improving the engagement and truthfulness of athlete ASRM reporting.
If the athletes were better engaged this would provide greater [practitioner] confidence in the accuracy of reports. (P25)

Athletes were perceived to be truthful in their reporting practices by 56% (n = 14) of respondents, with the remainder neutral (36%, n = 9) or in disagreement (8%, n = 2). Some factors reported as influencing athlete reporting practices are outlined in Figure 1.
Respondents were divided over whether action was taken, e.g., training programme modifications, in response to the detection of meaningful changes within monitoring scores, with 44% (n = 11) in agreement, 20% (n = 5) neutral and 36% (n = 9) disagreeing. When action was taken, respondents were more likely to report a scientific underpinning to their measures (rS(23) = 0.490, p = 0.013*). Reasons given for not modifying training where meaningful change in ASRM data was detected included the change being intentional and expected, and poor coach buy-in to the monitoring process preventing action being taken. The reasons behind meaningful change in athlete monitoring scores were key, with one practitioner stating:

I would never pull an athlete entirely on the basis of scores. Would need interrogation incl. athlete + practitioner conversation to understand big picture. Scores (−ve) may be intentional. (P26)

Respondents rated the degree to which key stakeholders supported their AMS (Figure 2). The AMS providers and fellow practitioners were perceived as providing full support to ensure their AMS was successful in 87% (n = 20) and 96% (n = 22) of cases, respectively. In comparison, respondents felt fully supported by their management in 74% (n = 17) of cases and by coaches in 43% (n = 10). Two respondents did not rate the support they received.
In relation to expected adherence rates, 64% (n = 16) of respondents indicated that athletes always or very frequently completed their AMS data, with 32% (n = 8) reporting that their AMS was rarely or occasionally completed and 4% (n = 1) unsure. While the majority of respondents (56%, n = 14) felt that poor adherence could be tied to a specific timeframe, e.g., during competitions, there was no consensus on when this primarily occurred during the training calendar. Where practitioners reported that their metrics had a scientific underpinning, there was also a correlation with improved feedback to the athletes (rS(23) = 0.487, p = 0.014*), with the provision of sufficient feedback also associated with improved athlete adherence (rS(22) = 0.675, p < 0.001**). Additionally, reported athlete adherence was significantly correlated with perceptions that athletes had received enough AMS education (rS(22) = 0.547, p = 0.006*), but not with coach AMS education (rS(22) = 0.278, p = 0.188). Nonetheless, over half of respondents reported that athletes (56%, n = 14) and coaches (60%, n = 15) had sufficient AMS education. When asked how to improve athlete adherence, respondents stated more coach-led feedback was required:

If the athlete reports anything it must be followed up, otherwise the trust in the process is gone. (P19)

One respondent suggested that increased athlete education was required. Examples of punitive consequences reported included a reduction in one-to-one coaching sessions, removal from training, no training individualisation and funding withdrawal. Implementation of such consequences was reported by 24% (n = 6) of respondents. However, good adherence levels differed little between those with and without consequences in place, at 67% (n = 4) and 63% (n = 12), respectively.
Just over half of the respondents (52%, n = 13) felt that athletes' performance might be compromised if they did not complete their athlete monitoring, with 48% (n = 12) disagreeing. Furthermore, 56% (n = 14) of respondents reported that they worked with internationally successful athletes (defined as those who had medalled at the Olympics or World Championships/Cups) who did not complete the monitoring required by their sporting organisation. Overall, from all 30 respondents, 60% (n = 18) felt that an improvement in athlete monitoring in their sporting organisation was required, with 30% (n = 9) undecided and 10% (n = 3) disagreeing. When given the opportunity to indicate what improvements they might wish to see, the most popularly cited suggestions included improving the evidence base behind measures, improving data analysis and feedback to athletes, integrating data from other objective sources, and addressing technical issues.
A better understanding of how best to analyse the data and improved strategies to enhance adherence. Improved methods of feedback to coaches and athletes. (P10)

Discussion
Practitioners had mixed perceptions of their AMS. Only 52% indicated that they had confidence in their ASRM sensitivity, with 64% stating their metrics had a scientific underpinning (rS(23) = 0.398, p = 0.049*). This is the first time that the reported trend (Duignan et al., 2020; Jeffries et al., 2020) of a lack of scientific evidence underpinning an ASRM has been quantified. Only 44% of respondents agreed that meaningful changes in monitoring scores resulted in appropriate remedial action, with half of respondents reporting that removal of their AMS would not compromise athlete performance. Overall, the potential of an AMS to positively influence performance appeared not to be fully realised.

Practitioner confidence in monitoring
While 64% of respondents felt that their ASRM had a scientific underpinning, nearly a quarter (24%) disagreed. Having an ASRM with a clear underpinning scientific rationale was associated with improved practitioner confidence in the sensitivity of their measures (rS(23) = 0.398, p = 0.049*), greater athlete feedback (rS(23) = 0.487, p = 0.014*), and improved responsiveness by key stakeholders, e.g., coaches, to changes in training status (rS(23) = 0.490, p = 0.013*). Researchers have indicated that a successful AMS should influence training programming and planning (Halson, 2014). The findings from this study demonstrate the importance of ASRM scientific rigour, i.e., metric-related issues, in improving practitioners' ability to provide athlete feedback and influence training programming in "real-world" practice.
The confidence respondents reported in their ASRM was, however, divided, with only 52% confident in the sensitivity of their ASRM to discern meaningful change. Research in elite football has previously reported low practitioner confidence in monitoring systems due to the perception of athletes manipulating their self-report data and a reduced ability to monitor athletes during competition (Akenhead & Nassis, 2016; Saw et al., 2015b). Similar findings were reported in this study, with difficulty identifying meaningful change within the data and the perception of dishonest athlete reporting practices negatively impacting practitioner confidence in their metrics. Just over half (56%) of respondents from this survey felt athletes completed their monitoring honestly, with the remainder either unsure or reporting that athletes were untruthful. Conflictingly, elite athletes have indicated that they self-report mostly honestly (Neupert et al., 2019) but may "impression manage" (Manley & Williams, 2022).
The primary reasons given for untruthful reporting practices were poor athlete engagement or the potential of training programme consequences (Figure 1). The majority of previously reported reasons for athletes manipulating their ASRM responses have focussed on athlete-related issues, such as social desirability bias (Saw et al., 2015b), fear of inappropriate training programme modifications (Duignan et al., 2019), and avoidance of punitive consequences (Saw et al., 2015b). Putting the onus back on to practitioners to cultivate trusting relationships with athletes has, however, recently been suggested as a method to tackle dishonest reporting practices (McCall et al., 2023). To date, efforts to mitigate untruthful reporting practices have primarily advocated athlete education sessions, although these appear to have a variable impact (Duignan et al., 2019; Neupert et al., 2019). Implementing a social desirability response scale to adjust for bias (Tracey, 2016) may, in part, tackle concerns about data manipulation. Nevertheless, it simultaneously risks alienating athletes and propagating an ethos of hostile surveillance (Manley & Williams, 2022). Given that the manipulation of ASRM responses is reportedly less likely among senior rather than junior team members (Duignan et al., 2019), the apparent pervasiveness of poor athlete reporting practices requires further reflection.
Overall, practitioner confidence in their ASRM can be adversely impacted by a range of socio-environmental and metric-related phenomena (Jeffries et al., 2020; Saw et al., 2017). A fundamental shift in athlete monitoring culture is, however, required to positively influence both the socio-environmental and practitioner confidence issues highlighted by this research. Putting athletes at the centre of monitoring and reframing it as a core principle of athlete healthcare, with a focus on creating a psychologically safe environment, should be explored in future research.

Engagement of end-users with monitoring
Enhancing performance has been described as a primary aim of AMS (Saw et al., 2018). Nevertheless, only half of the respondents in this survey indicated that performance would be compromised if no AMS was in place in their sport. When combined with 58% of respondents reporting that they worked with internationally successful athletes who did not complete their AMS, this inevitably leads to questions about key stakeholder engagement with, and the utility of, AMS.
Figure 2 outlines the degree of support respondents felt they received for their AMS. Fellow practitioners were the most likely to give full support for the AMS (in 96% of cases). Management fully supported 74% of respondents, but 52% of respondents indicated that they did not have full support for their AMS from their coach. This is higher than the 37% of elite football practitioners reporting coach buy-in as a substantial barrier to the efficacy of their athlete monitoring (Akenhead & Nassis, 2016).
Research to date has mainly attributed poor coach engagement with athlete monitoring to failures to provide clear practical messaging, inaccessible scientific language, and internal politics caused by a perception of AMS usurping coaching craft in driving targets, funding, and performance assessment (Buchheit, 2017; Weston, 2018). Thus, despite coaches and practitioners having similar beliefs regarding the purpose and utility of athlete monitoring, these views do not necessarily translate into similar perceived benefits of, nor positive engagement with, athlete monitoring (Weston, 2018). Strategies for achieving coach buy-in should be incorporated into AMS implementation guidelines (Saw et al., 2017) to avoid monitoring failing to meet expectations or causing conflict (Akenhead & Nassis, 2016; Starling & Lambert, 2018).
In this survey, the majority of respondents used a custom AMS. The brevity and sport specificity of custom AMS are often cited as promoters of athlete adherence (Saw et al., 2015b). However, this study and others (Barboza et al., 2017; Saw et al., 2015b) have shown that athlete adherence to monitoring is still problematic. While the figure of 67% of respondents reporting good adherence from this study is broadly similar to the rates of 56% and 79% reported elsewhere (Barboza et al., 2017; Cunniffe et al., 2009), it remains unclear whether custom AMS positively influence athlete adherence.
Perceptions of athlete adherence were, however, significantly related to whether the respondents reported that athletes received sufficient feedback (rS(22) = 0.675, p < 0.001**). This is an important and novel finding, as researchers have previously only proposed a potential relation between feedback and adherence (Barboza et al., 2017). Improving feedback processes may therefore provide a mechanism for practitioners to positively influence athlete adherence, as effective feedback loops can enhance decision-making (Barboza et al., 2017). Just under a quarter (24%) of respondents indicated that their sporting organisation imposed consequences for poor athlete adherence to AMS, typically in the form of training privilege removal. As highlighted elsewhere, these practices can be negatively viewed by athletes, and often have deleterious effects on AMS engagement (Saw et al., 2015b). Conflictingly, while some respondents from this study with no imposed consequences sought to have them implemented, others with consequences in place wanted them removed. These contradictory views should lead practitioners to reflect on the efficacy of coercion as a behaviour change strategy to promote AMS adherence, to avoid a "grass is greener" view. Overall, punitive consequences should be exercised with caution, and with a shared philosophy and consent from the athletes involved.
Based on the results from this study, the potential value of AMS is not always realised, as only half of respondents indicated athlete performance would be compromised if their AMS did not exist, and 58% of respondents reported that they had worked with world-class athletes who did not complete their monitoring. In order for AMS to provide value, sporting organisations should therefore consider how to influence the socio-environmental factors that may impact their AMS as well as the metric-related factors. This finding is particularly important given that the typically positivistic research philosophy of practitioners risks biasing their focus towards metric-related factors (Vaughan et al., 2019), and away from socio-environmental factors. Employing AMS as a method to reduce the uncertainty associated with performance enhancement and illness/injury prevention, rather than as a panacea for injury prevention and performance optimisation, may assist practitioners in situating it as one tool within a more multi-faceted coach decision-making framework.
Limitations in this study include respondent non-response bias and the transferability of findings to other sporting contexts. Similarities between different elite sports settings, particularly within amateur sport, can be cautiously presumed due to the variety of respondents and sports involved in this survey. Practitioners are encouraged to reflect on the applicability of these findings to their own settings (Smith, 2018). Familywise error rate was controlled through the use of Bonferroni corrections. The data were separated into factors relating to athlete AMS adherence and those related to the scientific underpinning of the metric. The partitioning of this data aimed to reduce issues related to multiple statistical comparisons whilst balancing the increased risk of Type 2 errors from using Bonferroni corrections. Finally, the recent dramatic increase in monitoring technology has enhanced the ease with which large volumes of data about an athlete can be collected. Therefore, while it is a limitation that these data were collected in 2017/18, it is now, more than ever, important for practitioners to consider the broader context of monitoring beyond metric-related factors.

Practical applications
The information discussed in this manuscript is most likely to benefit practitioners monitoring elite amateur athletes, but may also generalise to professional sport settings.

These findings are important in the international sport context because they assist practitioners in developing AMS that improve the monitoring experience and deliver better results for key stakeholders. These findings provide an important call to consider socio-environmental factors alongside metric-related factors when evaluating the effectiveness of an AMS.
• Ensure scientific evidence underpins any custom ASRM to promote both practitioner confidence in the metric and athlete feedback.
• Formal AMS can provide value but are not necessarily required to develop internationally successful athletes.
• Socio-environmental factors, such as buy-in, should be considered alongside metric-related factors in an AMS.

Conclusion
Practitioners working across a range of elite sports in the UK reported their perceptions of their AMS efficacy in an online survey. Common issues included a lack of confidence in the sensitivity of ASRM, which correlated with ASRM lacking a scientific underpinning. Difficulties establishing monitoring buy-in with coaches and athletes were also reported. Providing sufficient feedback to athletes was statistically correlated with increased athlete monitoring adherence. The difference between some practitioners' beliefs (that a lack of monitoring compromises performance) and reality (some internationally successful athletes do not complete monitoring) indicates that the efficacy of monitoring should be regularly reviewed to ensure it is providing value, with an eye on both socio-environmental and metric-related factors.

Randomization of items or questionnaires
To prevent biases, items can be randomized or alternated.
The software did not allow items to be randomised.

Adaptive questioning
Use adaptive questioning (certain items, or only conditionally displayed based on responses to other items) to reduce number and complexity of the questions.
Yes, as appropriate (see Methods: Participants and methodology).

Number of items
What was the number of questionnaire items per page? The number of items is an important factor for the completion rate.
This varied depending upon the responses given, but there was a mean of 6 questions per page.
Number of screens (pages)
Over how many pages was the questionnaire distributed? The number of items is an important factor for the completion rate.
Responses were over four pages if no athlete monitoring system was in place and eight pages if a monitoring system was in place.

Completion rate
The number of people submitting the last questionnaire page, divided by the number of people who agreed to participate (or submitted the first survey page). This is only relevant if there is a separate "informed consent" page or if the survey goes over several pages. This is a measure of attrition. Note that "completion" can involve leaving questionnaire items blank. This is not a measure of how completely questionnaires were filled in. (If you need a measure for this, use the term "completeness rate".)
Completion rate was 100%.
Preventing multiple entries from the same individual: cookies used
Indicate whether cookies were used to assign a unique user identifier to each client computer. If so, mention the page on which the cookie was set and read, and how long the cookie was valid. Were duplicate entries avoided by preventing users access to the survey twice, or were duplicate database entries having the same user ID eliminated before analysis? In the latter case, which entries were kept for analysis (e.g., the first entry or the most recent)?
We were able to ascertain if there were potential duplicates via the responses provided. No duplicates were found.

IP check
Indicate whether the IP address of the client computer was used to identify potential duplicate entries from the same user. If so, mention the period of time for which no two entries from the same IP address were allowed (e.g., 24 hours). Were duplicate entries avoided by preventing users with the same IP address access to the survey twice, or were duplicate database entries having the same IP address within a given period of time eliminated before analysis? If the latter, which entries were kept for analysis (e.g., the first entry or the most recent)?
JISC provided a unique identity number for each survey response. Any duplicates were identified by the demographic information given. No duplicates were found.

Log file analysis
Indicate whether other techniques to analyse the log file for identification of multiple entries were used. If so, please describe.
The potential for duplicate entries was identified by key information included in the questionnaire, such as years of experience, sport worked with, etc. No duplicates were found.

Registration
In "closed" (non-open) surveys, users need to log in first and it is easier to prevent duplicate entries from the same user. Describe how this was done. For example, was the survey never displayed a second time once the user had filled it in, or was the username stored together with the survey results and later eliminated? If the latter, which entries were kept for analysis (e.g., the first entry or the most recent)?
N/A

Handling of incomplete questionnaires
Were only completed questionnaires analyzed? Were questionnaires which terminated early (where, for example, users did not go through all questionnaire pages) also analyzed?
There were no incomplete questionnaires.
Questionnaires submitted with an atypical timestamp
Some investigators may measure the time people needed to fill in a questionnaire and exclude questionnaires that were submitted too soon. Specify the timeframe that was used as a cut-off point, and describe how this point was determined.
N/A
Statistical correction
Indicate whether any methods such as weighting of items or propensity scores have been used to adjust for the non-representative sample; if so, please describe the methods.
N/A

Figure 1. Practitioners' perceptions of what factors primarily influenced the honesty of athlete reporting in an AMS.
Them [athletes] simply understanding the WHY (of monitoring). (P2)

Respondents suggested methods to promote adherence, including imposing punitive consequences and rewards:

Write [adherence] into athlete agreement with consequences if not filled in (stick) and modified and individualised training based off it. (carrot) (P8)

Nonetheless, one respondent whose sport did impose consequences for poor adherence commented:

Achieve buy-in instead of it being a programme requirement. (P22)

Figure 2. Respondents rated the degree to which they felt supported by different stakeholders to implement and ensure the ongoing success of their AMS.