Assessment of non-directed computer-use behaviours in the home can indicate early cognitive impairment: A proof of principle longitudinal study

Abstract

Objectives: Computer-use behaviours can provide useful information about an individual's cognitive and functional abilities. However, little research has evaluated unaided and non-directed home computer-use. In this proof of principle study, we explored whether computer-use behaviours recorded during routine home computer-use i) could discriminate between individuals with subjective cognitive decline (SCD) and individuals with mild cognitive impairment (MCI); ii) were associated with cognitive and functional scores; and iii) changed over time.
Methods: Thirty-two participants with SCD (n = 18) or MCI (n = 14) (mean age = 72.53 years; female n = 19) participated in a longitudinal study in which their in-home computer-use behaviour was passively recorded over 7–9 months. Cognitive and functional assessments were completed at three time points: baseline; mid-point (4.5 months); and end point (month 7 to 9).
Results: Individuals with MCI had significantly slower keystroke speed and spent less time on the computer than individuals with SCD. More time spent on the computer was associated with better task switching abilities. Faster keystroke speed was associated with better visual attention, recall, recognition, task inhibition, and task switching. No significant change in computer-use behaviour was detected over the study period.
Conclusion: Passive monitoring of computer-use behaviour shows potential as an indicator of cognitive abilities, and can differentiate between people with SCD and MCI. Future studies should attempt to monitor computer-use behaviours over a longer time period to capture the onset of cognitive decline, and thus could inform timely therapeutic interventions.
Supplemental data for this article can be accessed online at http://dx.doi.org/10.1080/13607863.2022.2036946


Introduction
Subtle changes in instrumental activities of daily living (IADL) may be a marker of the development of a neurodegenerative condition leading to dementia. For instance, difficulties with IADL such as managing finances and taking medication may manifest in the prodromal and preclinical stages (Farias et al., 2013;Jekel et al., 2015;Marshall et al., 2012;Sikkes et al., 2011), and can discriminate between cognitively healthy individuals and individuals with mild cognitive impairment (MCI) (Farias et al., 2009;Rodakowski et al., 2014), as well as being able to predict whether a healthy person will go on to develop MCI (Marshall et al., 2015). However, clinic-based assessments of IADL can only provide episodic information; are highly subjective; and lack temporal precision, intraindividual specificity and ecological validity (Dorsey et al., 2017;Kaye et al., 2011).
Advances in ubiquitous computer software and 'smart home' technologies have made it possible to unobtrusively monitor IADL, providing continuous real-time information about a person's cognitive and functional ability from within their own homes (Gold et al., 2018; Piau et al., 2019). These technologies include sensors distributed around the home (Dodge et al., 2012; Hagler et al., 2010; Hayes et al., 2008), wearable sensors (Kirste et al., 2014; Patel et al., 2012), and software for monitoring computer activities (Kaye et al., 2011, 2014; Seelye et al., 2015, 2018). Personal computer-use is increasingly common in older adults. In the UK, internet use in retired older adults aged 65 to 74 increased from 52% in 2011 to 83.2% in 2019 (Office for National Statistics, 2019b). As such, monitoring older adults' personal computer-use is a particularly viable option for continuously and unobtrusively monitoring functional and cognitive ability. Previous studies have shown that three main aspects of computer-use differ between individuals with cognitive impairment and cognitively healthy controls: time spent on the computer (Kaye et al., 2014; Seelye et al., 2018); frequency, variability and efficiency of mouse movements (Seelye et al., 2015); and keystroke speed (Vizer & Sears, 2015). Furthermore, Stringer et al. (2018) showed that performance on a specific set of computer-use behaviours (including pauses, mouse clicks and typing) could discriminate between individuals with cognitive impairment and cognitively healthy controls, and that these behaviours were associated with performance on cognitive and functional assessments, in particular those related to memory. Previous studies have used either directed (Seelye et al., 2018; Stringer et al., 2018; Vizer & Sears, 2015) or non-directed tasks (Kaye et al., 2014; Seelye et al., 2015).
In studies that have used non-directed tasks, the focus has been on single computer use behaviours such as amount of use (Kaye et al., 2014) or mouse moves (Seelye et al., 2015). Non-directed tasks are more challenging to monitor as the nature of the computer use activity is unknown (or difficult to determine), but they are arguably more useful because they reflect real-world, everyday computer-use. What remains to be explored is the utility of a range of non-directed computer use behaviours for predicting cognitive and functional abilities.
In the present proof of principle study, we evaluated the potential of continuously recorded home computer-use as a marker of the level of, or change in, cognitive and functional ability. To achieve this objective we examined whether this method could show the following expected patterns of behaviour: 1) whether non-directed computer-use behaviour could differentiate between individuals with MCI and individuals with SCD; 2) whether non-directed, continuous computer-use behaviour was associated with cognitive and functional scores measured across three time points; and 3) whether change over time in non-directed computer-use was associated with change in cognitive and functional test scores.

Procedure
This was a proof of principle longitudinal study of in-home computer-use behaviours using custom-made monitoring technologies. Participants were recruited to the study on a rolling basis over a period of 2 months. The length of time participants were in the study ranged from 7 to 9 months (mean = 31.94 weeks, SD = 4.47). Participants completed a battery of cognitive and functional assessments at three testing time points: 1) baseline; 2) mid-point (4.5 months); and 3) end point (month 7 to 9). Cognitive and functional assessments, combined with continuous recording of specific computer activities for the entire study period, were completed in participants' own homes.
Participants were recruited through the UK dementia research registry 'Join Dementia Research', as well as memory clinics and local community groups in the Greater Manchester area. Participants who had taken part in a previous study on assessing computer-use behaviour in controlled settings (Stringer et al., 2018) were also invited to take part. Participants were eligible to take part in the study if they: had the capacity to consent; were 65 years of age or older; were regular computer-users (defined as using a laptop or desktop computer at least once a week); owned a personal computer or laptop that used Microsoft Windows versions 7, 8 or 10; had a home internet connection; and were able to communicate verbally in English.
Participants with MCI referred from memory clinics had all received a clinical diagnosis from a qualified memory specialist based on Petersen's criteria for MCI (Petersen, 2004). Participants who self-referred to the study all reported a diagnosis of MCI given by a specialist memory clinic. Specific clinical subtypes of MCI (i.e. amnestic vs non-amnestic; single vs multiple domain) were not ascertained. SCD participants were identified if they indicated on the ECog (Farias et al., 2008) that they were 'concerned they have a memory or other thinking problem' and their total score was greater than 1.43. This cut-off score corresponds to the upper 95% confidence interval of the mean total ECog scores from a sample of healthy control participants (Stringer et al., 2018), who indicated that they were not 'concerned they have a memory or other thinking problem'.

Global cognitive status
Global cognitive status was assessed using the ACE III (Hsieh et al., 2013): a concise neuropsychological assessment of cognitive functions commonly used in the UK with validated cut-off scores for MCI and dementia. The test includes five cognitive subdomains: attention, memory, verbal fluency, language and visuospatial abilities, which provide a cognitive score out of a maximum of 100 (a higher score indicates better cognitive function).

Functional ability
Subjective ratings of cognitive and functional capacity were obtained using the self and informant versions of the ECog (Farias et al., 2008), which require the informant or the participant to rate the participant's current functional abilities compared to 10 years previously. The 39-item questionnaire assesses cognitively-based functional items across six neurological domains: memory, language, visuospatial abilities, planning, organisation and divided attention. Scores range from 1 ('Better or no change') to 4 ('Consistently much worse'). The informant version was used for the 26 of the 32 participants who had an informant (i.e. someone who knew the participant well, either living with the participant or seeing them in person at least three times per week). The self-report version was used for the other six participants who did not have an informant (MCI n = 2).

Processing speed
The Trail Making Test A (TMT A) (Lezak et al., 2012), and simple reaction time (SRT) and four-choice reaction time (CRT) tasks (Deary et al., 2011), were used to assess cognitive processing speed. Participants completing TMT A are required to draw lines to connect circled numbers in a numerical sequence (i.e. 1-2-3, etc.) as quickly as possible. SRT and CRT means and standard deviations were measured for each participant on the Deary-Liewald reaction time task (Deary et al., 2011).

Episodic memory
Episodic memory was measured using the Free and Cued Selective Reminding Test (FCSRT) (Grober et al., 2009). The FCSRT produces three scores: free recall, total recall and cue efficiency. Free recall (cumulative sum of free recall from three trials, range 0-48) was evaluated for the current analysis because it has been shown to be more sensitive to dementia than the other two measures (Grober et al., 2010).

Recall and recognition
The Doors and People Test was administered to assess verbal and visual recall and recognition (Baddeley et al., 1994). The subtests were administered in the following order: verbal recall (people subtest); visual recall (shapes subtest); verbal recognition (names subtest); visual recognition (doors subtest). Both recognition memory tasks adopt a multiple-alternative forced-choice design. A higher score indicates worse performance. New stimuli for the recall tasks, using different photos and names for the people and altered shapes, were created by the research team for time points two and three. These alternate versions have not been validated. Total age-scaled recall score, total age-scaled recognition score and overall forgetting score were assessed for the current analysis.

Executive function
Executive function was captured using the Trail Making Test B (TMT B) and Digit Span Backwards (DSB) test (Lezak et al., 2012). Participants completing TMT B are required to draw lines to connect circled numbers and letters in an alternating numeric and alphabetic sequence (i.e. 1-A-2-B, etc.) as rapidly as possible. Participants completing DSB are asked to report digit sequences backwards, beginning with a length of two digits and increasing up to eight digits, with two trials at each list length. The test is discontinued after a score of 0 on both trials of any item.
Executive function was also captured using the Color-Word Interference Test (CWIT) (Delis et al., 2001), a more recently developed modification of the Stroop test (Stroop, 1935) that includes four conditions (colour naming, word reading, inhibition and task switching). Completion time (seconds) for each condition was used to calculate an interference and a task switching score (for details on scoring the Stroop test see Scarpina & Tagini, 2017).

Depression and apathy
Baseline measures of depression and apathy were captured using the Geriatric Depression Scale [short form] (GDS) (Yesavage, 1988) and the Starkstein Apathy Scale (Starkstein et al., 1992). Higher scores on these tests indicate a greater level of depression/apathy.

SAMS system architecture
Computer-use behaviours were recorded using custom-made software developed by the SAMS (Software Architecture for Mental Health Self-Management) technical team (for further details of the SAMS software see Gledson et al., 2016). The SAMS recording software captures computer-use activities as a list of time-stamped events. The SAMS desktop logger records all computer activities, including mouse clicks and keystrokes. All alphanumeric keystrokes typed in secure browsers, such as banking or email passwords, are suppressed, but keystroke count and timestamp are still captured. All computer-use data captured by SAMS is immediately encrypted. The software and user interface were developed with input from clinical domain experts and potential end-users, including study participants from initial pilot studies.

SAMS installation and setup
All participants had the SAMS software installed on their home computer. If the computer was used by others in the household, either separate user accounts were set up, or an on-screen prompt would ask the user if they were the participant and only the participant's computer-use would be recorded. This pop-up would occur following a 10-minute period of computer inactivity, with the participant given the option to extend the time between pop-ups to up to 4 hours.
Following the SAMS software setup, a short training session was undertaken to introduce the participant to the software. It was explained that the SAMS software would always run in the background of the computer unless they paused it. A link to the software was available on the desktop and in the windows notification tray (shown in Figure 1(a and b)). If the participant wished to work privately, they could click on the software icon link and a pop-up window would allow them to pause and resume monitoring (shown in Figure 1(c)).
The participants were provided with a technical helpline, which they could call if there was a problem with their computer related to the SAMS software. All participants received a monthly check-up phone call to discuss any computer issues, and to report any days the computer was 'inaccessible' (i.e. planned holiday, no access to computer or computer not working).

Computer-use variables
Although the SAMS recording software is capable of capturing a variety of computer-use behaviours, the current study focussed on mouse clicks, keystroke speed, and computer-use duration, all of which have been previously shown to be associated with cognitive ability (Kaye et al., 2014;Seelye et al., 2015;Stringer et al., 2018;Vizer & Sears, 2015).
The data collected by the SAMS software on day one were not included in the analysis because this included activity from the SAMS technical team when installing the software.
Computer-use duration was recorded across each computer-use 'session': defined as a period of activity on the computer (i.e. mouse moves, clicks, and keystrokes) with a pause of no longer than 15 min. For the longitudinal analysis of change in computer-use over time, and the examination of differences between individuals with MCI and SCD, total daily computer-use was averaged across all days of the study, irrespective of whether the computer was used or reported inaccessible. This method was used as it provides a more feasible, unobtrusive, and less burdensome way of measuring computer-use than relying on participants to report periods when the computer was inaccessible. In addition, the pattern of results obtained using this method of calculating daily computer-use was broadly similar to results obtained if computer-use was calculated only on the days that the computer was accessible (i.e. the participant had not reported that the computer was inaccessible), irrespective of whether the computer was used or not (see Supplementary Tables 1 and 3).
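As an illustration only (not the authors' actual implementation, and with hypothetical function names), the session rule above — activity separated by pauses of no more than 15 minutes — might be sketched as follows:

```python
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=15)  # maximum pause within one session

def session_durations(timestamps):
    """Split a sorted list of event timestamps (mouse moves, clicks,
    keystrokes) into sessions, closing a session whenever the gap
    between consecutive events exceeds 15 minutes. Returns each
    session's duration (first event to last event)."""
    if not timestamps:
        return []
    durations = []
    start = prev = timestamps[0]
    for t in timestamps[1:]:
        if t - prev > SESSION_GAP:       # pause too long: close the session
            durations.append(prev - start)
            start = t                    # new session begins here
        prev = t
    durations.append(prev - start)       # close the final session
    return durations
```

For example, events at 09:00, 09:05 and 09:10 followed by events at 10:00 and 10:02 would yield two sessions (10 minutes and 2 minutes), because the 50-minute pause exceeds the 15-minute threshold.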
The analysis of associations between passive computer-use behaviour and cognitive and functional scores incorporates computer variables measured over temporal bins corresponding to the dates of the cognitive tests for each participant (see between-group comparisons sub-section for details of temporal bins). Within each of the temporal bins, total daily computer-use was averaged across the days when the computer was accessible and used. This method was used to account for the inconsistent and varied daily computer-use across these shorter temporal bin periods, because the data is less skewed by days when there were 0 min of computer-use. In addition, the pattern of results obtained using this method of calculating daily computer-use was broadly similar, with all associations in the same direction, to results obtained if computer-use was calculated only on the days that the computer was accessible (see Supplementary Table 2).
Mouse click frequency was calculated by dividing total mouse clicks (left and right) per day by the total duration of computer-use per day.
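The click-frequency formula above is a simple ratio; a minimal sketch (hypothetical function name, illustrative only) is:

```python
def mouse_click_frequency(total_clicks, use_minutes):
    """Daily mouse click frequency: total clicks (left + right) divided
    by total minutes of computer-use that day. Returns 0.0 on days with
    no recorded use to avoid division by zero."""
    return total_clicks / use_minutes if use_minutes > 0 else 0.0
```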
Keystroke speed was calculated by first identifying distinct bursts of keystroke activity. A burst was defined as a series of at least five consecutive keystrokes with a pause between keystrokes (keystroke up to keystroke down) of no longer than 1.957 s. The 1.957 s pause duration was the upper limit gap (mean gap + 2*SD) between keystrokes on a Word task used in Stringer et al. (2018). Keystroke bursts did not include modifier keys (CTRL, ALT and Shift), because they are used at the same time as other keystrokes and would skew the keystroke speed. As the removal of specific keys could only be applied to known keystrokes, and the key codes of keys typed in secure browsers were suppressed, keystrokes occurring in secure browsers were not included in calculations of keystroke speed. Daily keystroke speed was calculated by dividing the total number of keystrokes in bursts per day by the total duration of bursts per day.
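A minimal sketch of the burst definition above, assuming keystroke timestamps are given in seconds and modifier/suppressed keys have already been removed (hypothetical function name; not the authors' implementation):

```python
MAX_GAP = 1.957   # seconds; upper-limit inter-keystroke gap (Stringer et al., 2018)
MIN_BURST = 5     # minimum consecutive keystrokes to count as a burst

def keystroke_speed(times):
    """Split sorted keystroke timestamps into runs whose consecutive gaps
    are all <= MAX_GAP, keep runs of >= MIN_BURST keystrokes as bursts,
    and return keystrokes per second: total keystrokes in bursts divided
    by total burst duration."""
    bursts, run = [], list(times[:1])
    for t in times[1:]:
        if t - run[-1] <= MAX_GAP:
            run.append(t)                 # still within the same burst
        else:
            bursts.append(run)            # gap too long: close the run
            run = [t]
    if run:
        bursts.append(run)
    valid = [b for b in bursts if len(b) >= MIN_BURST]
    total_keys = sum(len(b) for b in valid)
    total_time = sum(b[-1] - b[0] for b in valid)
    return total_keys / total_time if total_time > 0 else 0.0
```

For example, five keystrokes at 0.5 s intervals form one valid burst (5 keys over 2 s), while a later pair of keystrokes separated from them by a long pause is too short to count.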
To encourage participants to type more, and thus collect more data relating to keystroke speed, participants were asked to complete a weekly diary entry. This involved asking them to write about general feelings during the week and report key life events.

Statistical analysis
Statistical analyses were performed using SPSS version 22 and Stata/SE version 12.1. Outliers were identified in the cognitive data using the non-recursive procedure described by Van Selst and Jolicoeur (1994). Two participants' reaction time data were omitted due to technical problems with the reaction time recording software. One participant's Stroop data was excluded because they were colour blind.
A conventional p value of 0.05 was used because of the small sample size and low power. However, as the study was a proof of principle, we also considered the results in light of a false discovery rate (FDR) correction (Q = 0.2), as described by Benjamini and Hochberg (1995), to account for increased risk of false positives.
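The Benjamini-Hochberg step-up procedure referred to above can be sketched as follows (illustrative only; the authors presumably used their statistical packages rather than hand-rolled code):

```python
def benjamini_hochberg(pvalues, q=0.2):
    """Benjamini-Hochberg (1995) FDR procedure: sort the m p-values,
    find the largest rank k with p_(k) <= (k/m) * q, and declare the
    k smallest p-values significant. Returns a boolean per input
    p-value, in the original order."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * q:
            k = rank                      # step-up: keep the largest passing rank
    significant = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k:
            significant[i] = True
    return significant
```

For example, with four tests at Q = 0.2 the rank thresholds are 0.05, 0.10, 0.15 and 0.20, so p-values of 0.01 and 0.02 survive correction while 0.5 and 0.6 do not.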

Between-group comparisons
To investigate differences between individuals with MCI and SCD, we used multi-level modelling (MLM) to allow for the statistical dependency between multiple observations for the same individuals. We regressed the computer-use and cognitive variables on a variable capturing group membership (SCD vs MCI). This analysis was based on all available data for the full time period of the study. The model was adjusted for variations in age and years of computer-use, as these differed significantly between the two groups.

Associations between computer-use and cognitive/ functional measures
In order to examine correlations between computer-use data and cognitive and functional test scores, computer-use variables were first measured over temporal bins that corresponded to the dates of the cognitive tests for each participant: the first three weeks after the baseline assessment (T1); the week of the mid-point assessment and the two weeks either side (T2); and the three weeks prior to the end point assessment (T3). The three-week timeframe at baseline and end point and the five-week timeframe at mid-point were selected to create a snapshot of computer-use behaviour that balanced capturing enough data whilst also being close enough to the time the cognitive tests were completed. We then used MLM to examine associations between computer-use behaviours and cognitive and functional test scores across the entire study period, again allowing for the statistical dependency between multiple observations for the same individuals, and statistically adjusted for age, educational attainment and years of computer-use.

Change over time
To analyse whether there was any change in computer behaviour and/or cognitive scores over time, we used MLM for repeated measures, treating time from inclusion in the study as a continuous predictor variable and allowing for the statistical dependency between multiple observations per individual. We then adjusted associations for variations in age, educational attainment, and years of computer-use. We considered statistical significance of the adjusted regression coefficient of the time variable (p < 0.05) as evidence for a change over time between baseline and follow-up measurements, with a positive or negative coefficient signalling improvement or deterioration, respectively. The computer-use behaviour data (total computer-use duration, mouse click frequency and keystroke speed) were regressed on the number of days each participant was in the study. The cognitive and functional scores were regressed on the time variables for each participant. The time variable represented the amount of time (in weeks) that passed at each assessment since the baseline assessment. For baseline this was 0 weeks for all participants, for mid-point assessment this ranged between 16 and 21 weeks (mean = 17, SD = 1.54), and for end point this ranged between 20 and 40 weeks (mean = 34, SD = 3.59).

Results
Median days in the study, median days of use, median days the computer was not used, and median days the computer was inaccessible are reported in Table 1.

Between-group comparisons
In line with group categorisation, MCI participants had greater impairment on all of the cognitive and functional assessments compared to the SCD participants, and the majority of these differences were significant (Table 2). These effects held significance after applying the false discovery rate correction. Significant differences between the two groups on the ECog, TMT B and Stroop inhibition did not hold after controlling for age and computer-use experience. Participants with MCI also differed significantly from participants with SCD on two out of three computer behaviours. Participants with MCI spent significantly less time on the computer (p = .026) and had slower keystroke speed (p < .001) compared to individuals with SCD. These effects were significant after controlling for age and computer-use experience and held significance after applying the false discovery rate correction.

Associations between computer-use and cognitive/ functional measures
After the application of the FDR, there was a significant association between time spent on the computer and scores on the Stroop switching test (p = .016) (Table 3). These scores suggest that those who are least impaired on the Stroop switching test spend longer on the computer. There were also significant associations between keystroke speed and: TMT A (p = .028); recall on the Doors and People Test (p < .001); recognition on the Doors and People Test (p < .001); Stroop inhibition (p = .041); and Stroop switching (p = .006). These scores suggest that individuals who are least impaired on these cognitive tasks have faster keystroke speed. These effects remained significant after controlling for age, years of education and computer-use experience. There were no significant associations between mouse click frequency and any of the functional or cognitive test scores after the application of the FDR (Table 3).

Change over time
No change was detected in any of the computer-use behaviours over the course of the study (Table 4). After the application of the FDR, over the study period, there was a significant decrease in recall on the Doors and People Test (p < .001). No change was observed in scores on any of the other cognitive or functional tests. As there was no change detected in any of the computer-use variables, further analysis of associations between change in computer-use behaviour and change in cognitive test scores were not pursued.

Discussion
The results of this study showed that non-directed measures of computer-use, such as duration of use (i.e. minutes per day) and keystroke speed (i.e. key presses per second), were able to discriminate between individuals with MCI and individuals with SCD. Although no change was detected over time in any of the computer-use behaviours, or in most of the cognitive and functional test scores, measures of computer-use duration and keystroke speed were associated with cognitive test scores. Taken together, these results provide proof of principle that recording routine home computer-use could help to differentiate between individuals with MCI and individuals with SCD, and to detect change in cognitive ability.

Participants with MCI had slower typing speeds than those with SCD. These findings are consistent with previous work showing a reduction in typing speed with increased cognitive impairment during semi-directed tasks in a controlled environment [25,26], and show that such effects are also observable for non-directed computer tasks in an uncontrolled home-based setting. Faster typing speed was also associated with better visual attention (as measured by TMT A), better recall and recognition (as measured by the Doors and People Test), and better task inhibition and task switching (as measured by the Stroop task) in the current study. The TMT A, the Doors and People Test recall and recognition scores, and the Stroop task have been shown to be sensitive to early stage dementia of the Alzheimer type (Greene et al., 1996; Hutchison et al., 2010; Shindo et al., 2013), and the task switching version of the Stroop is particularly sensitive to cognitive decline in normal-functioning older adults (Fine et al., 2008). In our previous work we found that ACE III and ECog Memory scores were significant predictors of keystroke speed (Stringer et al., 2018).
Taken together, these results give us confidence that non-directed measures of typing speed provide valid indicators of cognitive function that can help to discriminate between people with MCI and SCD.
Individuals with MCI spent less time on the computer than individuals with SCD. This decreased level of use could indicate that participants with MCI stop using the computer when they find tasks difficult or make mistakes, or that they use the computer less frequently because they have fewer activities that they need or want to do on it. This is consistent with Kaye et al. (2014), who found that people with MCI spent less time on the computer compared with healthy controls.
Computer-use duration was also associated with traditional neuropsychological test scores. Individuals with stronger task switching abilities spent more time on the computer. This suggests that increased ability to switch between computer tasks could reflect conducting multiple computer tasks at once, and so spending more time on the computer to complete these. In support, Tun and colleagues (2010) observed that increased computer-use per week was associated with better task switching performance (Tun & Lachman, 2010). The current study extends these findings by showing a similar pattern of results during non-directed computer-use, using a more temporally precise measure (i.e. daily computer-use).
Mouse click frequency did not differ significantly between the two groups. In our previous work using directed computer tasks we also found no group differences in the number of mouse clicks per minute (Stringer et al., 2018). In that cross-sectional study we did find that cognitively impaired participants executed a higher proportion of mouse clicks compared with healthy controls, but this likely reflects the cognitively impaired group spending a longer time on the computer and possibly making more errors on the semi-directed task; the proportion of mouse clicks is not an appropriate measure for self-directed computer tasks. Taken together, these results suggest that mouse click frequency may not be a particularly useful measure for detecting differences between groups on directed or non-directed tasks.
Computer-use behaviour did not change over time. For the cognitive and functional assessments the only change was a decrease in recall scores on the Doors and People Test, which may be indicative of cognitive decline. The lack of similar change over time on the FCSRT recall test and in the computer-use behaviours could reflect lower sensitivity of these measures to decline in this cognitive domain. Mitchell and Shiri-Feshki (2009) found that conversion rates of MCI to AD dementia were 8.1% per year in specialist clinical settings and 6.8% in community settings. Therefore, given our small sample size and a study period of less than a year, the probability of conversion, as well as the likelihood of detecting it, were low. In order to detect change in IADL using self-chosen computer activities, future studies should examine data over a longer period of time and in a larger sample.
There are some limitations of the study that need to be considered. First, whilst the study provides proof of principle for passive monitoring and can inform the direction of future larger-scale investigations, the study is underpowered and potentially too short to detect all effects.
Second, participants varied in how many days they used their computer, and there were a considerable number of days with no data for some participants. This variability could reduce statistical power, bias parameter estimates, and reduce the representativeness of the sample. Although we attempted to disentangle accessibility and usage in the analysis, gaps in computer-use data are reflective of how some individuals use their computers in real life, and their inclusion therefore makes for a more valid test of proof of principle. Additional data could be collected by also monitoring mobile or wearable devices. This would not only provide digital biomarker data outside of the home, but also inside the home when individuals choose to use a mobile device over a static home computer or laptop. The number of adults over the age of 65 who accessed the Internet on a mobile phone or smartphone outside the home increased from 9% in 2013 to 40% in 2019 (Office for National Statistics, 2019a), suggesting that it will become even more relevant to monitor mobile devices in this age group in the coming years.
Third, there were significant differences in age and years of computer use between the two participant groups. Despite accounting for these covariates within the models, statistical precision may have been improved by matching participants on these criteria.
Fourth, we did not include a cognitively healthy group without concerns about their cognitive abilities, to see whether their computer-use behaviours differed from those with SCD and MCI. We surmised that by focussing on SCD and MCI participants specifically, we may have been able to capture change more easily within a short time frame. In addition, all SCD participants were cognitively healthy according to the ACE III, and so effectively serve as a control for cognitive function when making comparisons to MCI participants. Looking at subtle differences between people with SCD and MCI also expands on previous research that has primarily focussed on differences between healthy controls without subjective decline and people with MCI (Kaye et al., 2014; Seelye et al., 2015, 2018). Nevertheless, also including an objectively and subjectively cognitively healthy control group would be a more comprehensive approach for future research.

Conclusion
In summary, this study provided proof of principle that passive monitoring of time spent on the computer and keystroke speed can differentiate between groups with SCD and MCI. Moreover, keystroke speed was related to a number of neuropsychological test scores and shows potential as an indicator of a person's cognitive status. Importantly, this is true even though participants were engaging in non-directed computer tasks, where the exact nature of the activity was unknown. Such measures of computer-use behaviour could therefore be used to supplement existing means of detecting functional and cognitive decline by collecting information about a person's cognitive status in an unobtrusive way. The next step is to test these relationships in a larger study sample, over a longer period, to gather a better indication of whether computer-use behaviours can capture clinically significant cognitive and/or functional change. It will also be important to develop the SAMS software for touch screen devices such as tablets, smart phones and wearables as their use becomes more ubiquitous amongst older adults.

Data availability statement
The data that support the findings of this study are available from the corresponding author, [GS], upon reasonable request.