Portable markerless hand motion capture system for determining the grasps performed in activities of daily living

ABSTRACT Upper-limb prostheses are either too expensive for many consumers or have a greatly simplified choice of actions. This research aims to enable an improvement to the quality of life for recipients of these devices by providing an understanding of how we use our hands in modern everyday life. To achieve this a taxonomy of grasps used in activities of daily living has been created. A portable motion capture system was developed to collect data. Thirteen participants’ hand movements were recorded (totalling 62 hours and 10 minutes of data). From these data, 38 and 22 grasps were observed from the left and right hands, respectively. The portable system effectively captured natural hand motions, which formed an updated taxonomy of grasps.


Introduction
Modern day upper-limb prostheses are either too expensive for many consumers or have a greatly simplified choice of actions available (Phillips et al. 2015; Kerver et al. 2020; Sensinger and Dosen 2020; Smail et al. 2020). This paper aims to update the currently accepted standard set of grasp taxonomies, potentially offering support for improvements to the design and development of upper-limb prostheses through the utilisation of this newly acquired information. The intended impact of this research is to improve the quality of life for recipients of upper-limb prosthetic devices whilst also offering a reduction in cost, so that they can be made more widely available for amputees. To achieve this goal a taxonomy of the grasps used in current activities of daily living (ADL) will be created, classifying the most commonly used grasps. This taxonomy will provide knowledge of the hand shapes performed during real-world everyday tasks, with the potential to allow upper-limb prostheses to be better tailored to daily activities. When the price of a device has high precedence during its development, this knowledge could help guide cost reductions through a prudent selection of hand shapes biased by their appearance during ADL. The latest taxonomy of grasps was introduced by Feix et al. (2009); however, this only considered two professions (housekeepers and machinists). Due to the limited range of activities studied, there is a need for these taxonomies to be updated to include modern ADL - such as the use of mobile phones and keyboards. By providing this updated taxonomy of grasps, upper-limb prostheses can be designed with new knowledge of the importance of each different hand shape. In previous video-based studies, the need to watch every recording limited the possible length of the recordings, leading to a limited amount of data being collected and used to create the taxonomy. The proposed solution, using a motion capture system to collect the data, will offer the ability to collect more data, due to the potential use of machine learning techniques to process it. Furthermore, making this system portable removes the requirement for performing such data collection within a standard, fixed motion capture environment - allowing for the recording of more natural ADL. The objective of this study is to implement this portable motion capture device to assess the natural hand motions of modern ADL.

Background
In the literature, there are several studies that have classified the range of hand grasps performed by humans (Napier 1956; Cutkosky 1989; Bullock et al. 2013; Feix et al. 2014, 2016; Liu et al. 2014; Vergara et al. 2014; Huang et al. 2015). As stated by Bullock et al. (2013), Schlesinger (1919) was the first to attempt to organise human grasps into set categories: cylindrical, top, hook, palmar, spherical and lateral. Napier (1956) later categorised each of the grasps into two categories, power and precision grasps. Following this, Cutkosky (1989) employed the same taxonomy, showing its possible application to robot manipulators in manufacturing processes.
More recently, Feix et al. (2016) reviewed the existing grasp taxonomies and gave thoughts on an updated, simplified taxonomy. It was found that the 33 grasp types given by the previous taxonomies could be reduced to 17 more general grasp types - arguing that each cell of the taxonomy presented previously (Feix et al. 2009) could be reduced to a standard grasp. It is typical in these research studies that housekeepers and machinists are chosen as subjects (Zheng et al. 2011; Bullock et al. 2013; Dollar 2014). However, Vergara et al. (2014) stated that these studies, looking only at two professions, were biased, going on to study the grasps used in ADL. The intention behind recording two drastically different professions was to show the range of potential tasks performed during ADL without the necessity to record numerous uniquely different situations. Though these results provide an indication of the range of hand shapes which exist, they display a limited view of everyday life for many people - unable to capture the countless daily tasks performed in a rapidly adapting world.
In these studies (Bullock et al. 2013; Feix et al. 2014; Liu et al. 2014; Vergara et al. 2014) a head-mounted video camera is placed on the subject and their hands' actions recorded over a set period of time. After the videos had been captured, trained raters would analyse each of them to determine how many times certain grasps were performed. This method can lead to unreliable results due to its reliance on the judgements of the raters. Additionally, this method of analysis is a long, slow and tedious activity. These inherent issues with this form of analysis have been noted in the literature, motivating studies attempting to rectify the unreliable and taxing nature of this method through the introduction of autonomous analysis techniques for video-recorded hand data (Huang et al. 2015; Cai et al. 2017). New developments have pushed for kinesiology recordings to be performed using motion capture technology and numerically analysed; this has resulted in faster analysis of larger amounts of data and, in turn, allowed for more data to be collected. It is typical for vision-based motion capture systems to be used over other methods, such as gloves. The main reasoning for this is that placing devices on the hand encumbers movement, giving a less natural hand motion.
One machine learning technique often employed for the analysis of numerical data is the application of clustering algorithms. These are an unsupervised learning technique used to find clusters within the data without prior knowledge. To obtain clusters of hand motions in this study, indicative of the grasps performed, a k-means algorithm has been utilised. The k-means algorithm is a method of vector quantisation, comparing each data point to the centroids (means) of the current clusters and updating the clusters with each iteration. Several studies employ a k-means algorithm as a method to identify postures, described by the clusters, within whole body motions (Tanco and Hilton 2000; Gu et al. 2009; Kapsouras and Nikolaidis 2014; Leightley et al. 2014). Thong et al. (2016) discuss the use of a machine learning clustering algorithm (k-means++) for the analysis of the morphology of adolescent idiopathic scoliosis. The study found new, clinically relevant classification groups through the use of machine learning techniques on three-dimensional Cartesian co-ordinate data for 915 recordings of spines. In the study performed by Huang et al. (2015) an unsupervised clustering technique was used to autonomously determine the grasps performed in first-person point-of-view video recordings. The clusters determined by this method were compared to Cutkosky's taxonomy (Cutkosky 1989); it was seen that the method both reproduced the groups in Cutkosky's taxonomy and created new ones.
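The assign-and-update loop of k-means described above can be sketched as follows. This is a minimal, illustrative pure-Python version (an assumption for exposition; the analysis in this work was performed in MATLAB), with two-dimensional toy points standing in for the 60-value grasp vectors:

```python
import math
import random

def kmeans(points, k, iters=100, seed=0):
    """Basic Lloyd's k-means: assign points to the nearest centroid,
    then recompute each centroid as the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: index of the nearest centroid for each point.
        labels = [min(range(k), key=lambda c: math.dist(p, centroids[c]))
                  for p in points]
        # Update step: move each centroid to the mean of its cluster.
        new_centroids = []
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:  # keep the old centroid if the cluster emptied
                new_centroids.append(tuple(sum(x) / len(members)
                                           for x in zip(*members)))
            else:
                new_centroids.append(centroids[c])
        if new_centroids == centroids:  # converged
            break
        centroids = new_centroids
    return labels, centroids

# Two well-separated toy groups stand in for averaged grasp vectors.
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
        (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
labels, centroids = kmeans(data, k=2)
```

The k-means++ variant used in the study differs only in how the initial centroids are seeded (spread out proportionally to squared distance), which reduces the chance of a poor starting configuration.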

Data collection
For this study the use of motion capture devices to collect kinematic data of the hand has been considered. The Leap Motion Controller (LMC) is a markerless optical motion capture device which uses two infra-red cameras and three infra-red LEDs to determine a 22-point virtual image of the hand. The points of the hand captured by the LMC are as follows: the MCP joint, IP joint and tip of the first digit; the MCP, PIP and DIP joints and tips of the second to fifth digits; the centre point of the palm; the CMC joint; and a point opposite the CMC joint in the medial direction. The LMC is supported within the literature: it has proven effective for stroke rehabilitation and musculoskeletal simulation (Khademi et al. 2014; Iosa et al. 2015; Holmes et al. 2016; Fonk et al. 2021), and literature reviewing the employment of an LMC for data collection of hand kinematics provides confident support (Smeragliuolo et al. 2016; Nizamis et al. 2018). Though the LMC has also received some criticism in the literature (Bachmann et al. 2014; Guna et al. 2014), it will be used here due to its high portability, providing an ability to be used during the participants' normal everyday tasks within environments comfortable for them; its ability to work without markers, leading to unencumbered movements; and its non-invasive nature, resulting in natural motions.
The portable motion capture system utilises an LMC, Intel i3 NUC, external battery (2-Power 19 V, 27 Ah), GoPro head strap, LMC GoPro mount, bespoke NUC case and small shoulder carry bag. An exploded diagram of the NUC, battery and bespoke case can be seen in Figure 1, with the completed system being worn as shown in Figure 2. The NUC is a small form factor computer, measuring just over 10 × 10 cm; in this study the NUC7i3DNBE model has been used. The 3D-printed bespoke case holds the NUC and battery and is placed in the small shoulder bag, to be carried by the participant. Connected to the NUC is the LMC, held in place on the participant's forehead using the GoPro head strap with a bespoke 3D-printed mount. To transfer the data a Kingston DataTraveler 50 16 GB USB flash drive was used. The total price of the items used to form the motion capture system was (at the time of purchase) £521.61 ($696.35 at the time of writing). The advantages this system provides over a marker-based motion capture laboratory are: its significant price reduction, its portability, its ability to capture natural and unencumbered movements, its non-invasive and contactless nature and the support seen within the literature - collectively resulting in an appealing motion capture system. The advantages this system provides over video recordings are: a reduction in the time to analyse large collections of data, the elimination of human error during analysis and a wider range of available analysis techniques.
A GoPro head strap was chosen to fix the LMC to the participant's forehead because the environments it is designed to operate within are far harsher than those this system is expected to encounter, and it has a standardised fixing point for which a component can easily be 3D printed. The i3 NUC was chosen as it is the smallest device found able to meet the minimum requirements of the LMC. A 4 GB RAM module was used to meet the requirements of the LMC and 16 GB of storage was added to store the files of the collected data. The 19 V external battery was chosen to power the NUC whilst in use due to its predicted runtime of seven to eight hours - confirmed with real-world tests. This length of time is appropriate for the environment the system is used in, a typical working day being 8 hours. The NUC and battery, housed within the case, weigh approximately 1 kg and can fit in a small shoulder bag for the participant to carry during data collection. The bag chosen provides a snug fit for the system and sits comfortably on the participant. An HTML file is loaded to continuously extract the co-ordinate data of the hand joints from the LMC, through the provided API, using embedded JavaScript code. The data were recorded at a sampling frequency of 120 Hz - imposed by the native frame rate of the LMC. A USB flash drive must be used to transfer the data collected on the NUC to a computer for analysis so that the NUC can remain disconnected from the internet, for security reasons.
Despite its selection for the work, the LMC does present some limitations. The limited number of small infra-red sensors used to capture locations not defined by clear external markers results in an accuracy lower than that which would be expected from a state-of-the-art motion capture system in a laboratory environment. The close proximity of these sensors on a single plane results in a high probability of occlusion occurring during capture. The LMC has been used here due to the ease with which it can be made portable, the support expressed for this device within the medical literature, its ability to capture motion data without markers and the resultant non-invasive nature of the device. When compared against a VICON motion capture system with choreographed hand shapes, the portable motion capture system was found to collect data within a mean of 9.89 mm - calculated as the averaged Euclidean norm distances between the joint positions recorded by the portable system and the VICON system. This level of error was not considered great enough to affect the overall resultant hand shapes to a degree where there would be difficulty in differentiating each specific hand shape. With consideration of the benefits and limitations of the LMC and the created system, this setup was employed for the collection of hand motions in everyday activities.

Data processing and analysis
To detect when a certain grasp is performed, positional data of the digits, relative to the MCP joints and wrist locations, has been used. This creates a local co-ordinate system with the wrist position as the origin, the index metacarpal along one axis, the normal to the anatomical position of the hand along another axis and the remaining axis perpendicular to both of the others. To begin, the new co-ordinate system was centred at the wrist and the second digit metacarpal bone was chosen as the y-axis. Next, the hand was rotated about the y-axis such that a positive direction along the x-axis described a lateral to medial direction across the wrist. The z-axis was then created normal to these two axes, such that a positive direction was defined by the posterior to anterior direction of the hand. The resultant local co-ordinate system can be seen in Figure 3. Analysis within a local co-ordinate system has been employed as the spatial location of the hand has no relevance here; it is the shape of the hand which carries the desired information.
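The frame construction above can be sketched as follows; a minimal pure-Python illustration (the study's processing used MATLAB), assuming an approximate palm-normal vector is available from the capture to seed the z direction:

```python
import math

def sub(a, b): return tuple(ai - bi for ai, bi in zip(a, b))
def dot(a, b): return sum(ai * bi for ai, bi in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])
def unit(v):
    n = math.sqrt(dot(v, v))
    return tuple(vi / n for vi in v)

def local_frame(wrist, index_mcp, palm_normal):
    """Axes of the wrist-centred frame: y along the second metacarpal,
    z in the posterior-to-anterior (palm normal) direction, and x
    completing a right-handed set (lateral to medial)."""
    y = unit(sub(index_mcp, wrist))
    x = unit(cross(y, palm_normal))   # perpendicular to y and the normal
    z = cross(x, y)                   # re-orthogonalised normal
    return x, y, z

def to_local(p, wrist, axes):
    """Express global point p in the local wrist frame."""
    d = sub(p, wrist)
    return tuple(dot(d, a) for a in axes)

# Hypothetical global positions (mm) for illustration only.
wrist = (10.0, 5.0, 2.0)
index_mcp = (10.0, 9.0, 2.0)       # 4 mm along the metacarpal
palm_normal = (0.0, 0.0, 1.0)
axes = local_frame(wrist, index_mcp, palm_normal)
```

In this frame the wrist maps to the origin and the index MCP lies on the y-axis, so the representation depends only on hand shape, not on where the hand is in space.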
In order to categorise frames as transitional or functional hand frames, a definition of a grasp was necessary. The definition of a functional hand shape was decided to be as follows: a position in which each joint is held within one degree for one second or longer. A grasp is then formed by averaging the group of frames which fall within this definition. This definition was chosen based on analysis attempts with various definitions; more relaxed rules (more than one degree or one second) resulted in a longer computational time and a higher number of anomalies appearing in the final data, whilst tighter rules (less than one degree or one second) showed a loss of valid grasps during the reduction process.
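Under one reading of this definition (an assumption made here: stability is measured as the max-min spread of each joint over the run, with per-frame joint angles as inputs and one second equal to 120 frames at the LMC's frame rate), the frame-labelling step can be sketched in Python (illustrative; the study's processing was performed in MATLAB):

```python
FS = 120          # LMC frame rate (Hz)
TOL_DEG = 1.0     # per-joint movement tolerance (degrees)
MIN_FRAMES = FS   # one second of frames

def functional_grasps(frames, tol=TOL_DEG, min_frames=MIN_FRAMES):
    """Return one averaged pose per maximal run of frames in which every
    joint angle stays within `tol` degrees for at least `min_frames`
    frames. Each frame is a tuple of joint angles."""
    grasps, start = [], 0
    n = len(frames)
    while start < n:
        lo = list(frames[start])  # running per-joint minima over the run
        hi = list(frames[start])  # running per-joint maxima over the run
        end = start + 1
        while end < n:
            cand = frames[end]
            new_lo = [min(l, c) for l, c in zip(lo, cand)]
            new_hi = [max(h, c) for h, c in zip(hi, cand)]
            if any(h - l > tol for l, h in zip(new_lo, new_hi)):
                break  # a joint moved too far: the run ends here
            lo, hi = new_lo, new_hi
            end += 1
        if end - start >= min_frames:
            run = frames[start:end]
            # The grasp is the joint-wise average over the stable run.
            grasps.append(tuple(sum(j) / len(run) for j in zip(*run)))
        start = end
    return grasps

# Synthetic example: two stable poses separated by a too-short hold.
data = [(10.0, 20.0)] * 120 + [(50.0, 60.0)] * 50 + [(30.0, 40.0)] * 130
result = functional_grasps(data)
```

In this synthetic sequence only the first and last poses last at least one second, so only two averaged grasps are returned; the 50-frame middle pose is discarded as transitional.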
To then cluster these averaged grasps a k-means++ algorithm was used; this provided the initial grouping of the grasps within the data. Each average grasp was defined by 60 values (inputted into the algorithm): the x, y and z Cartesian localised co-ordinates for each of the hand joints recorded by the LMC. The clusters were determined through the use of squared Euclidean distances. The algorithm has been set up to repeat the clustering 100,000 times, with new initial cluster centroids each time, and output the attempt with the lowest sum of Euclidean distances between the data points in each cluster and their respective centroids. The Calinski-Harabasz index (Caliński and Harabasz 1974), a method commonly employed to determine the optimal number of resultant clusters from an unsupervised learning algorithm, was implemented to provide confidence in the acceptable values of k. The Calinski-Harabasz index calculates the ratio of inter-cluster to intra-cluster dispersion, providing an indication of how dense the clusters are and how well separated each is from the others. For this work, lower values of this index would suggest that the hand shapes categorised within the clusters are closer in shape to each other and that the resultant individual clusters are more uniquely differentiable from one another. Here the Calinski-Harabasz index values showed negligible improvement after a k value of 40 for the left hand and gave the lowest values within 30 to 60 for the right hand. Figures 4 and 5 show the Calinski-Harabasz index values plotted from a k value of 0 to 100 for all of the collected data. There was negligible change to the mean distance between each frame's data points and those of their respective centroids after a k value of 20 and, similarly, between each frame and the other centroids after a k value of 50, for both hands. The correlation between the frames within a cluster and their centroid increased as k was increased, with minimal plateau at higher values. From these evaluation methods it was decided that the acceptable range for k would be above 50 for the left hand and between 30 and 60 for the right hand - with the possibility of a reduction through the merging of similar clusters. Within this accepted range for the number of clusters, the changes to the Calinski-Harabasz index for the left- and right-hand data were approximately 9 and 5, respectively; compared to changes of 326 and 140 outside of these regions, these are relatively negligible. To additionally determine the validity of clusters found from these data, the Euclidean norm distances between each grasp frame's data points and their respective centroids were used. All data points for every grasp frame of a cluster must be within a set distance of the cluster centroid (here found, through experimentation, to be best set to 5 mm) for the k value to be deemed suitable. Any value of k which is within the accepted range and which produces clusters with distances between the clustered grasp frames and their centroids below the set distance would be considered acceptable.
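The index itself can be sketched as follows, using its standard formulation (between-cluster over within-cluster dispersion, scaled by degrees of freedom, under which larger values indicate denser, better-separated clusters); a pure-Python illustration rather than the study's MATLAB implementation:

```python
import math
from collections import defaultdict

def calinski_harabasz(points, labels):
    """Calinski-Harabasz index: the ratio of between-cluster to
    within-cluster dispersion, scaled by the degrees of freedom
    (k - 1 and n - k respectively)."""
    n = len(points)
    dims = len(points[0])
    overall = tuple(sum(p[d] for p in points) / n for d in range(dims))
    clusters = defaultdict(list)
    for p, l in zip(points, labels):
        clusters[l].append(p)
    k = len(clusters)
    between = within = 0.0
    for members in clusters.values():
        m = len(members)
        centroid = tuple(sum(p[d] for p in members) / m for d in range(dims))
        # Between-cluster term: cluster size times squared distance
        # from the cluster centroid to the overall mean.
        between += m * math.dist(centroid, overall) ** 2
        # Within-cluster term: squared distances to the own centroid.
        within += sum(math.dist(p, centroid) ** 2 for p in members)
    return (between / (k - 1)) / (within / (n - k))

# Tight, well-separated clusters score far higher than a poor labelling.
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
good = calinski_harabasz(pts, [0, 0, 1, 1])
bad = calinski_harabasz(pts, [0, 1, 0, 1])
```

Evaluating this index across a sweep of k values, as in Figures 4 and 5, is what supports the choice of an acceptable range for each hand.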
After the initial clusters are formed, a merger script is run, which indicates whether any of the clusters could be combined - leading to a final solution. This looks at reducing the number of groups formed from the data using features specific to the data; in this case the Euclidean norm between each of the group centroids' points was chosen. Each of the cluster centroids is compared to each of the others: the x, y and z values of each point from one centroid are subtracted from those of the other considered centroid and the norm of this distance vector found. If this norm is below a set threshold then the grasps are considered, by the script, similar enough to be combined. Visual inspection could then be performed to determine whether it is reasonable for these highlighted clusters to be combined; alternatively, the script can be set to continue with the merger autonomously.
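The autonomous form of this merger step can be sketched as follows; a hedged pure-Python illustration in which each centroid is treated as one flat co-ordinate vector and the 5 mm figure (borrowed here, as an assumption, from the cluster-validity check) serves as the default threshold:

```python
import math

def merge_similar(centroids, threshold=5.0):
    """Greedily merge pairs of cluster centroids whose Euclidean
    distance falls below `threshold`, replacing each merged pair with
    its mean, until no pair is close enough to combine."""
    merged = [tuple(c) for c in centroids]
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                if math.dist(merged[i], merged[j]) < threshold:
                    # Combine the pair into their midpoint and rescan.
                    mid = tuple((a + b) / 2
                                for a, b in zip(merged[i], merged[j]))
                    merged[i] = mid
                    del merged[j]
                    changed = True
                    break
            if changed:
                break
    return merged

# Toy centroids (mm): the first two are close enough to merge.
cents = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (20.0, 0.0, 0.0)]
result = merge_similar(cents, threshold=5.0)
```

Running the sketch on the toy centroids combines the first two into their midpoint and leaves the distant third untouched, mirroring how the study's merger reduced 60 and 30 initial clusters to 38 and 22 final grasps.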

Methodology
The equipment set-up process is also designed to be as simple as possible. To begin data collection it is only required that the device is attached to the participant and the LMC examined and calibrated using an external monitor; there is no need for measurements to be taken or any additional items to be placed on the participant.
Approval has been granted by the BSREC at the University of Warwick for the execution of this data collection (approved on 4 October 2018 under the BSREC reference REGO-2018-2210). The decided steps for data collection are as follows: (1) Fit the headband, with the LMC secured to it, and the i3 NUC, in its bag with the external battery, to the participant.
(2) Connect the i3 NUC to an external monitor and ask the participant to undertake a few basic tasks they would perform regularly, to test the recognition of the participant's hands. (3) Recapitulate what is required from the participant once the data collection has begun. (4) Open the HTML script, in a browser, on the i3 NUC and disconnect the external monitor. (5) Leave the participant to carry out their day, as if the LMC were not there. (6) Once the experiment has ended, stop the i3 NUC from recording further data and remove the bag and headband from the participant.
Following collection, the data are loaded into MATLAB and reduced to include only frames in which the hands have been recorded by the LMC. Within the remaining data the hands are localised, to be centred at the wrist, and the transitional frames removed - leaving, exclusively, frames in which grasps have been performed. These remaining grasp frames become the inputs to the k-means++ algorithm; the calculated cluster centroids then effectively represent the commonly performed grasps. These centroids may subsequently be merged to form the final groupings. Analysis of these final grasp groupings (the taxonomy) highlights the importance of the grasps seen in ADL by these participants - observing those most often occurring and those held, on average, for the longest period of time.

Results
With the portable motion capture system, a total of 62 hours and 10 minutes of data was collected; an individual breakdown of the participants is shown in Table 1. Recordings were taken over a whole week, collecting various activities in working and recreational environments from healthy participants. Each recording session lasted until the participant no longer wished to continue or until the battery of the system was depleted. Recorded tasks include: working at a computer, housework, cooking, shopping, car repairs and playing the violin. The final centroids for the left and right hand for these subjects are shown in Figures 6 and 7, respectively. The script to decide the value of k, for application of a k-means++ algorithm, outputted optimal k values of 60 and 30 for the left and right hand, respectively - which were reduced to 38 and 22 unique grasps with the execution of the merging script. Figures 8, 9, 10 and 11 show the number of occurrences of each of these grasps and the average number of frames each was held for during their occurrences, in the left and right hand data respectively. The average number of frames per occurrence was found by normalising the total number of frames the grasp appears in by its total number of occurrences during the recordings. Additionally, further information for each of the clusters can be found in Tables 2 and 3.
The final resultant groupings of grasps (the clusters outputted from the k-means++ algorithm followed by a merger of similar clusters) gave average R-squared values of 97.74% and 96.98% for the left and right hand, respectively. These groupings provided cluster mean distances of 8.6055 mm and 8.4725 mm with standard deviations of 4.9202 mm and 6.1191 mm, for the left and right hand respectively. This implies that the collections of hand shapes shown in Figures 6 and 7 resemble the grasps seen in ADL within 8.7 mm, for both the left and right hand.

Table 2. Further left-hand grasp cluster characteristics. The first column gives the label of the cluster; the second, the percentage of the data within which this grasp exists. The third and fourth columns show the total number of times the grasp occurs within the data and the total number of frames it is seen within, respectively. The fifth column provides the average number of frames each grasp is held for per occurrence, rounded to a whole integer.

Viewing these results, it can be seen that the grasp which occurred most frequently throughout the recordings, and therefore during ADL, was grasp 7 for the left hand and grasp 1 for the right hand; both display a closed fist shape. These made up 36.27% and 36.05% of the hand shapes seen for each hand, respectively. Additionally, the second most commonly occurring hand shapes for both hands resemble an open hand. These were grasp 4 for the left hand and grasp 5 for the right hand, occurring for 16.57% and 13.05% of the recorded grasping times, respectively.
A total time of 11 hours 10 minutes and 35.76 seconds was taken to analyse the 62 hours and 10 minutes of recordings. This time comprised, per hand per recording: loading the data into MATLAB, removing the frames in which the hand of interest is not seen, labelling each of the remaining frames as a grasp or transition frame, calculating the averaged Cartesian locations for each of the grasp occurrences, clustering each of these averages, merging clusters deemed to be similar and determining the characteristics of each final cluster. It should be noted that a large amount of this time was taken loading the data; once the raw data had been loaded they were saved as MATLAB variables, to later be loaded more rapidly. The analysis of the data, alone, took 16 minutes 52.44 seconds. This was all performed on a computer with Windows 10, 32 GB RAM and an Intel Core i7-7700 processor running MATLAB R2020b (9.9.0.1467703). It should also be noted that this can be executed without the need for a human to be present; once told the data location, the script performs the aforementioned tasks independently.

Discussion
It has been shown that large amounts of data can be obtained from a variety of common ADL and locations and processed quickly and reliably using the measurement approach presented in this paper. The data collection system was able to be used in all situations it was presented with, had no necessity for markers and gave no notable hindrance during the normal daily tasks performed. Though the device exhibited a low accuracy compared to state-of-the-art motion capture systems, its ability to collect hand motion data without markers was seen as a sufficiently significant advantage alone to warrant its application in this study.
Following collection, the data were successfully aggregated into the typical grasps of ADL. These data show that the closed fist and open hand respectively make up 36.2% and 14.8% of the grasping data, averaged across both hands - totalling an averaged percentage of 51.0% together. It is therefore arguable that prosthetic devices which allow, exclusively, open and close functionality provide acceptable functionality but omit hand shapes which have been seen to be required for approximately half the time during ADL. Viewing the average number of frames the hand shapes were held for shows that a closed fist appeared as the second longest held hand shape for both hands, whilst an open hand appeared as the third longest held shape for the left hand and the longest held for the right. Though the lack of video data prevented the ability to draw arguments for these hand shapes from their interactions with the outside world, the recorded data still showed them to have a significant presence within everyday tasks. The longest held hand shape for the left hand appeared with the index and middle fingers open (possibly suggesting a pointing action or the grasping of a large object with those digits); this hand shape also appeared as the third longest held for the right hand. In spite of its low number of appearances in the data, this hand shape was seen to be held for a relatively long time during each occurrence. Due to this disparity across different measures, the influence of this hand shape would be dictated by the individual applications of these results.
Throughout the 62 hours and 10 minutes of recordings, grasps occurred for a total of 8 hours and 42 minutes for the left hand and 5 hours and 14 minutes for the right hand. Given the limitations of the LMC, and the fact that to be labelled as a functional hand shape frame the hand must remain with all joints within one degree for one second, these times were considered adequate. Repeated hand shapes were prevalent within the final results; however, tests generating greater reductions to the number of clusters showed a loss of some hand shapes with no obvious visual benefits nor any improvements upon observing the Calinski-Harabasz index. Therefore, to further reduce and tailor the results to each desired potential application, it is suggested that solely manual intervention is performed.
Given the higher number of right-hand-dominant participants, the nature of the grasps and the motion capture setup, it could be argued that this greater grasping time for the left hand is due to the right hand being obscured in some frames. Observing the percentage of frames within which the hands were seen, across all of the participants, showed a greater percentage of grasping frames for the left hand; however, for the left-hand-dominant participant alone this was shown to be greater for the right hand. This suggests that hand dominance did have an effect on the collected data. Due to the outlined definition of a functional hand shape, it could be argued that the left-hand data displayed a greater number of hand shapes due to a longer time spent in stasis during actions performed by the right hand. Though this could explain why some hand shapes were more frequently seen within the recordings, it still holds true that they appeared within data from everyday tasks and, in turn, the conclusions drawn from these data remain valid.
The classification of a functional hand shape under the decided definition means that the results reflect several hand gestures as well as grasps. Previous taxonomies have only considered grasps: functional hand shapes occurring whilst an object is held. For example, grasp 8 for the left hand resembles an index pointing gesture - not displayed within previous taxonomies. In addition, various grasps displayed in this taxonomy may appear similar to those found within previous taxonomies, although it could be argued that, in some cases, these are being used as gestures (for example, a closed fist or flat hand). Gestures are also of significant importance to ADL and should not be omitted from a final taxonomy. Not all references to grasps in the results are therefore indicative of object manipulation - the taxonomy produced here includes both the grasps and gestures of ADL. Comparing the new taxonomy to those previously defined in the literature reveals several novel hand shapes which have been identified, by this taxonomy, as relevant for ADL. The additional hand shapes seen were: index finger point (with thumb abduction and adduction), inferior pincer with index and middle finger, peace sign, relaxed hand, fully closed fist, thumb-little finger opposition, thumb up and hyperextension of all digits. Similarly, there are some grasps that exist in the previous taxonomies which the new taxonomy does not deem relevant. These grasps were, when compared to the taxonomy introduced by Feix et al. (2009): parallel extension, thumb-3 finger and extension type. The analysis of the data collected resulted in 16 more groupings of grasps for the left hand, compared to the right. This dissimilarity between the two hands implies a requirement to develop upper-limb prosthetic devices for each hand individually, accounting for the differences seen within the taxonomy. The taxonomy shown here provides a comprehensive display of the grasps relevant to ADL; highlighting both grasps and gestures performed throughout a typical working day, introducing new grasps to those seen in previously defined taxonomies and providing a distinction between the everyday needs of the left and right hands. This newly found information has the potential to aid in the development of upper-limb prostheses, enabling an improvement to the quality of life of recipients and a reduction in the cost of these devices.

Conclusions and future work
The introduced portable motion capture system has been proven to be a viable means of collecting hand motion data during ADL. It was found to collect natural, unencumbered hand movements without issue. The clustering, with subsequent merging, of the collected data gave confident results, reducing the data to 38 hand shapes for the left hand and 22 for the right, representing all of the collected data to within 8.7 mm for both hands. These groups of grasps form the novel taxonomy of grasps introduced here, with information on each group readily available. Given their details, these groups can be manually extracted to reduce the taxonomy to a more concise, customised taxonomy of grasps commonly occurring in ADL to suit specific needs.
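The paper's own clustering pipeline is not reproduced here, but the model selection it relies on (a Calinski-Harabasz sweep over candidate cluster counts, as plotted in Figures 4 and 5) can be sketched with scikit-learn. This is a minimal illustration under stated assumptions: each frame is flattened into a fixed-length pose feature vector, and the function name `select_k_by_ch` and its parameters are hypothetical, not taken from the paper.

```python
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score


def select_k_by_ch(frames, k_max=100, seed=0):
    """Fit k-means for a range of cluster counts and score each partition
    with the Calinski-Harabasz index (higher = better-separated clusters).

    `frames` is an (n_frames, n_features) array of per-frame hand poses.
    Note: the index is undefined for k = 1, so the sweep starts at k = 2.
    """
    scores = {}
    for k in range(2, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(frames)
        scores[k] = calinski_harabasz_score(frames, labels)
    best_k = max(scores, key=scores.get)  # k with the peak index value
    return best_k, scores
```

The resulting cluster centroids would then correspond to representative hand shapes, which could be merged or relabelled manually as described above.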
Future work includes the collection of additional data, aiming to collect over 100 hours of data from a minimum of 20 participants. The majority of participants studied here were right-hand dominant; further data collection and analysis could be used to determine the differences between participants with differing hand dominance. This increase in the amount of data should consolidate the findings of this paper and require no further modifications to the analysis process. During the collection of additional data, questionnaires could be distributed to participants in order to collect qualitative feedback regarding the comfort and intrusiveness of the introduced system. This would reveal the impact the device has during the performance of daily tasks, giving a reference for potential future modifications. Additionally, during future collections, capturing video data alongside the LMC would enable a deeper level of analysis to be performed, identifying interactions with the external environment. However, the knowledge that a video recording of their day is being stored could make participants more hesitant to perform their usual tasks. To counter this, autonomous object detection could be employed. This would eliminate the need to store raw video data, at the cost of an increased computational demand on the system during data collection. If video capture were to be implemented, tests would be necessary to determine the additional computational load placed on the device and, in turn, to measure the possibly reduced capture time due to this addition.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Funding
The work was supported by the Engineering and Physical Sciences Research Council [EP/N509796/1].

Notes on contributors
Callum J. Thornton's research looks at capturing how we use our hands during daily life, in particular aiming to update previous taxonomies of grasps and, through the application of this knowledge, to reduce the cost of upper-limb prostheses. My work entails the capture and analysis of hand motions under various challenging conditions. My research has been part of a wider study, in collaboration with UHCW and UCL, titled "Sensorimotor Prosthesis for the Upper Limb: Evaluation and Design". The overall goal of this study is to collect novel data which allows the development of a low-cost upper-limb prosthesis with haptic feedback.
Michael J. Chappell's research lies mainly in the modelling and analysis of biomedical, pharmacokinetic and biological processes. Much of the emphasis of this work has been on compartmental modelling and the application of techniques in system dynamics, non-linear systems, control theory and system identification. Over recent years my research has centred on techniques for analysing the structural identifiability of non-linear systems, and computer algebra/symbolic computation packages have proved invaluable tools in this context. My research has been performed in close collaboration with academic, industrial and hospital-based research groups, and funding has been received from a variety of research councils including the EPSRC, the BBSRC and the MRC.
Neil D. Evans' research lies mainly in the systems modelling, analysis and control of pharmacokinetic, epidemiological and biomedical processes. I have particular expertise in structural identifiability analysis, which addresses the key problem of determining whether the parameters of a postulated model can be estimated uniquely if perfect data are available. Such analysis is an important prerequisite for parameter estimation and experiment design. As of 2023, I have been awarded (as lead or co-applicant) research grants and fellowships totalling over £13 million and have been Chief and Principal Investigator on a variety of clinical trials. I currently supervise PhD, MD and MSc students, as well as regularly teaching medical students and allied health professionals.

Figure 1. An exploded diagram of the portable data collection system.

Figure 2. An image of the portable system, as it would typically be worn by a participant.

Figure 3. A diagrammatic representation of the local co-ordinate system chosen to represent the hand during data processing.

Figure 4. A plot of the Calinski-Harabasz index values for the left hand, using k values of 1 to 100.

Figure 5. A plot of the Calinski-Harabasz index values for the right hand, using k values of 1 to 100.

Figure 6. The taxonomy of hand shapes for the left hand, provided by line art of the 38 cluster centroids.

Figure 7. The taxonomy of hand shapes for the right hand, provided by line art of the 22 cluster centroids.

Figure 8. A bar chart showing the total number of occurrences of each grasp for the left hand.

Figure 9. A bar chart showing the total number of frames in which each grasp is seen for the left hand, normalised by the total number of occurrences of that grasp for that hand.

Figure 10. A bar chart showing the total number of occurrences of each grasp for the right hand.

Figure 11. A bar chart showing the total number of frames in which each grasp is seen for the right hand, normalised by the total number of occurrences of that grasp for that hand.
Hardwicke is a Consultant in Plastic and Reconstructive Surgery at the University Hospitals of Coventry and Warwickshire NHS Trust (UHCW), Honorary Professor at the University of Warwick and Visiting Professor at Coventry University. I am the Director of the Institute of Applied and Translational Technologies in Surgery at UHCW and have held other leadership positions within the Trust. I completed my Higher Surgical Training within the West Midlands Deanery and at the University of Birmingham, where I held a Lectureship in Plastic Surgery. I was a Royal College of Surgeons Senior Fellow in Microsurgery and Major Trauma at UHCW. I have over 110 peer-reviewed publications on topics within Plastic and Reconstructive Surgery, and have been awarded national and international prizes.

Table 1. A table of the anonymised participant information.

Table 3. A table with further right-hand grasp cluster characteristics. The first column gives the label of the cluster; the second, the percentage of the data within which this grasp exists. The third and fourth columns show the total number of times the grasp occurs within the data and the total number of frames in which it is seen, respectively. The fifth column provides the average number of frames for which the grasp is held each time it occurs, rounded to the nearest integer.