From concept to practice: a scoping review of the application of AI to aphasia diagnosis and management

Abstract

Purpose: Aphasia is an acquired communication disability resulting from impairments in language processing following brain injury, most commonly stroke. People with aphasia experience difficulties in all modalities of language that impact their quality of life. Researchers have therefore investigated the use of artificial intelligence (AI) to deliver innovative solutions in aphasia management and rehabilitation.

Materials and methods: We conducted a scoping review of the use of AI in aphasia research and rehabilitation to explore the evolution of AI applications to aphasia and the progression of technologies over time. We also aimed to identify gaps in the use of AI in aphasia to highlight potential areas where AI might add value. We analysed 77 studies to determine their research objectives and the history and progression of AI techniques in aphasia.

Results: Most studies focus on automated assessment using AI, with recent studies turning to AI for therapy and personalised assistive systems. Starting from prototypes and simulations, the use of AI has progressed to include supervised machine learning, unsupervised machine learning, natural language processing, fuzzy rules, and genetic programming.

Conclusion: Considerable scope remains to align AI technology with aphasia rehabilitation to empower patient-centred, customised rehabilitation and enhanced self-management.

IMPLICATIONS FOR REHABILITATION
Aphasia is an acquired communication disorder that impacts everyday functioning due to impairments in speech, auditory comprehension, reading, and writing. Given this communication burden, researchers have focused on utilising artificial intelligence (AI) methods for assessment, therapy and self-management. From a conceptualisation era in the early 1940s, the application of AI has evolved with significant developments at different points in time. Despite these developments, there are ample opportunities to exploit AI to deliver more advanced applications in self-management and personalised care.


Introduction
Approximately fifteen million strokes occur annually, with up to 40% of stroke survivors diagnosed with aphasia [1,2]. Aphasia is a chronic acquired communication disability commonly caused by damage to the brain after stroke, but it also results from head injury, brain tumours or neurodegeneration. Aphasia impacts individuals differently across the areas of spoken language, auditory comprehension, reading, and writing.
People with aphasia (PWA) experience life-altering psychosocial consequences. They experience changes in relationships, social isolation, and difficulty reintegrating into community life [3][4][5][6]. Consequently, in comparison with non-aphasic stroke survivors, they are more likely to suffer from reduced health-related quality of life, reduced rate of functional recovery, and increased incidence of post-stroke depression [7][8][9]. Due to the complexity of living with chronic aphasia, a comprehensive approach to rehabilitation is needed that addresses both the language impairment and the broader impacts on people's lives, but there are many challenges to achieving this. Aphasia can manifest in highly variable ways, with individuals demonstrating varying degrees of impairment in different areas of communication. It is therefore crucial for rehabilitative interventions and support programs to provide personalised modifications to tackle the large variability in patient presentations. However, these highly personalised and frequently interdisciplinary interventions can place a financial burden on families and the economy, with healthcare costs post-stroke considerably higher in PWA in comparison to patients without aphasia [10,11]. Speech-language pathologists (SLPs) play an important role in the evaluation and classification of aphasia subtypes and in establishing interventions with personalised goals that will meet an individual's communication and well-being needs. However, SLPs have identified barriers to the provision of comprehensive, personalised care. Barriers include staffing shortages and limited availability of community support programmes, which impact discharge planning [12]. In part, this is due to the current model of service delivery that emphasises acute and sub-acute health care, with little funding focused on long-term and community services for people with chronic aphasia [11].
Technology has an increasingly core role in aphasia management. Computer programmes and applications have been widely integrated into therapy and as a tool for supplemental home practice, and assistive devices have benefited from advancing technology [13]. Telepractice has proven to be an additional effective method of service delivery at the individual and group level of intervention [14]. A recent scoping review identified technology as a key approach in enabling the self-management of aphasia [15]. PWA credited technology with augmenting their communication in activities of daily living, increasing access to information, and providing entertainment [16]. However, the current research on the management of aphasia has not been universally translated into accessible and sustainable models of care. Further, the prediction of recovery from aphasia and response to treatment continues to remain elusive [17][18][19].
Artificial intelligence (AI) is a form of advanced technology that has the potential to enable individually tailored, sustainable services that can independently adapt to the heterogeneity of aphasia and the changing needs of people with aphasia over time. AI advancements in recovery and self-management may have a critical role in narrowing the evidence-practice gap caused by healthcare funding shortfalls and the financial limitations of families. However, the current application of AI to aphasia management is limited and has likely not been adequately utilised to improve service quality, access, and reach.
Therefore, in order to systematically explore the research done in this area, we conducted a scoping review, with the aim of describing the use of AI in aphasia to date in terms of technology approaches and their application, trends over time, and identifying potential areas where AI can add value to outline future avenues for research and evidence implementation.

Research design
This scoping review followed the process outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) statement for scoping reviews (Supplementary Appendix A) [20].

Search strategy
Potential publications were identified by conducting a comprehensive search on PubMed and Google Scholar. The key search terms related to AI ["AI", "artificial intelligence", "machine learning", "supervised learning", "unsupervised learning", "classification", "mobile application", "virtual reality"] and to aphasia ["aphasia", "dysphasia", "anomia"]. Since certain mobile applications and virtual reality applications may incorporate AI features, the terms "mobile application" and "virtual reality" were also included in the search process. The publications were assessed for their applicability in the screening stage.
These identified key terms for AI and aphasia were combined using AND. The search was carried out to gather studies from 1980 to early 2022. This process was conducted on 5 May 2022.

Screening
Studies were included if they met the following eligibility criteria:
a. directly related to aphasia (any aetiology: stroke, cancer, progressive, etc.)
b. related to AI, where the technology applications incorporate AI functions
c. full text available in English
d. peer-reviewed, either research papers or conference abstracts
The screening was conducted by the first author (AA) and cross-checked by two further authors (NH and JEP) to confirm inclusion.

Data extraction and analysis
The data extraction process collected article characteristics such as published date, authors, title and the full text. Next, the full text was reviewed to determine the AI technology and techniques used and the objectives of the research. Data extraction for the objectives was conducted by two authors (NH and JEP) independently, and discrepancies were resolved through discussion.
To explore the data at a granular level, we employed a data analysis technique using an interactive dashboard tool [21]. The dashboard was created using the extracted dataset as the input, containing several attributes: date, authors, title, full text, and identified characteristics of the paper (objective, AI techniques and management stage). It allowed interactive visualisations that combined multiple attributes of the dataset, thereby highlighting associations, insights, and patterns. The dashboard was also used for the historical analysis of publications over time, to evaluate the evolution of different AI techniques and the progression of research objectives in aphasia. The dashboard has been published for readers to access.
Figure 1 demonstrates screen captures of the dashboard developed.

Search results and selection
Figure 2 illustrates the search yield and study selection process. The search retrieved 194 publications: 107 from PubMed and 87 from Google Scholar. Once duplicate records were removed, 180 publications were screened. In the first phase of screening, 43 publications were excluded as they did not match the primary inclusion criteria: 24 studies were not related to aphasia and 19 studies were clinical trials without any use of AI. This resulted in 137 studies for which full texts were sought. Full texts were available for 127 studies. These were further screened by the authors to confirm eligibility; ultimately, 77 publications aligned with the defined inclusion criteria of the review. The reasons for rejection are shown in Figure 2.
The agreement between the two authors during the extraction of the research objectives was 58/77 (75%) after the initial round (κ = 0.73, 95% CI 0.62-0.85), a substantial level of agreement. Agreement was challenging because some inference was required to determine the objectives of the research as they relate to aphasia. For example, papers that aimed to improve automatic speech recognition may have done so for the purpose of assessment and diagnosis or for therapy feedback, but more often the long-term aim was not explicitly stated. Following a consensus discussion, the agreement was 100%.

Overview of results
We present the findings of this review first chronologically, outlining the thematic and technical progression of research over time, and then summarise the distribution of AI technology type and the "stages" of aphasia management according to the research objectives.

The evolution of AI applications to aphasia
We identified four time "epochs" of the application of AI to aphasia, illustrated in Figure 3. The earliest retrieved studies (1984-1993) discussed automated therapy software, focusing on the possibility of the software making clinical decisions about therapy tasks and stimuli, and then updating decisions according to patient performance. Some early software prototypes were described, but these were limited to text displays and written input. The practical application of AI technology to aphasia commenced around 1994. From 1994 to 2005, several research studies applied AI techniques to aphasia identification and subtype diagnosis, recovery prediction, early prototypes of assistive software, and computational models of language. Between 2006 and 2014, a broader range of AI approaches was applied to the assessment and differential diagnosis of aphasia and its subtypes. Speech analysis for the purposes of automated treatment feedback was first attempted in this epoch, as was the use of machine learning for lesion-symptom mapping. Most recently (2015-2022), research has moved towards automation and advanced AI, with a wider range of applications emerging, such as conversational agents (chatbots), rehabilitation software, and affect recognition. Advances have been made in deep learning and language models, and these are being more extensively applied within aphasia.

AI technology approaches in aphasia
Figure 4 shows a timeline of publications with labels where specific AI techniques first emerged. The growth in publications since the earliest study retrieved (1984) is approximately exponential and mirrors the progression of AI technology over time. For example, with the rise of deep learning and language models after 2015, aphasia researchers have adopted deep neural networks such as recurrent neural networks and convolutional neural networks [22][23][24][25]. These networks can model sequential data (such as speech) and can retain previous outputs as inputs for the next step. They have been used specifically to improve speech recognition and automated speech assessment. Below, the results are summarised in relation to the main branches of AI: supervised learning, unsupervised learning, natural language processing (NLP), fuzzy rules and optimisation (Figure 5).

Supervised learning
The majority of studies in this review (69%) used supervised machine learning models for predictions and classifications. Since 2015, there has been a significant rise in deep learning models, comprising deep, recurrent, and convolutional neural networks that can handle a large volume of data with increased dimensionality. However, supervised machine learning models depend on labelled datasets, which can be challenging to acquire in real-world settings, including in aphasia.

Natural language processing
NLP was a smaller, but growing, AI approach within the retrieved studies (18/77), with the majority of these studies published since 2014 (12/18). From lexical analysis and basic linguistic analysis techniques, the application of NLP has recently progressed towards language models, sentiment analysis and chatbot technology that use deep learning and novel natural language understanding techniques.
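As an illustration of the basic lexical analysis from which these NLP applications progressed, simple measures such as the type-token ratio (a common proxy for lexical diversity in speech transcripts) can be computed directly from text. The following sketch is illustrative only; the sample transcript and function are invented, not drawn from any retrieved study:

```python
def lexical_measures(transcript: str) -> dict:
    """Compute simple lexical measures from a speech transcript."""
    tokens = transcript.lower().split()
    types = set(tokens)
    return {
        "tokens": len(tokens),                        # total words produced
        "types": len(types),                          # distinct words
        "type_token_ratio": len(types) / len(tokens)  # lexical diversity proxy
    }

# Illustrative fragment of a picture-description transcript (note the repetition)
sample = "the boy is is the boy climbing the tree"
measures = lexical_measures(sample)
```

Repetitions and a restricted vocabulary lower the type-token ratio, which is why measures of this kind have been used as input features for the automated assessment systems described above.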

Unsupervised learning
Unsupervised machine learning techniques do not rely on labelled datasets; instead, the algorithms can self-learn and adapt based on the characteristics of the data. Although this provides abundant research opportunities, the use of unsupervised machine learning in current aphasia research has been limited to date: just 12% of papers in this review. Until the last decade, unsupervised learning was mostly used for data exploration, clustering, and dimensionality reduction purposes. However, recent research exploited the ability of unsupervised learning to simulate semantic and phonological learning representations using self-organising maps (SOMs) [26], highlighting novel opportunities for the use of these approaches in aphasia research.

Fuzzy rules, optimisation
Fuzzy rules and optimisation techniques were prominent before 2015 but are less used in present-day research.

The use of AI relating to stages of aphasia management
The objectives of each study were mapped onto stages of aphasia management during data extraction. We defined three stages, Assessment, Therapy, and Self-Management, to enable an overview of how AI has been used. An additional stage, Discovery, was defined for early or theoretical work that did not nominate an explicit real-world application. Table 1 shows the publication counts based on the categorisation derived in this study; counts for each stage are shown in bold italics.
The majority of the studies in the review yield (35/77) were focused on using AI for diagnostic purposes, with the goals of automating the detection of aphasia, the diagnosis of a subtype, or the classification of aphasia severity. Data used for diagnosis were typically audio recordings, assessment results or language transcriptions. Seven studies explored the use of AI to enhance speech analysis within aphasia, a challenging task for standard speech recognition engines [27]. In three, the purpose was to improve the accuracy of feature recognition from acoustic speech samples in order to improve the identification or classification of aphasia [25,28,29]. The remaining four aimed at providing automated feedback within therapy software, including automated judgements of utterance quality [25] or within immersive VR [30].
Three papers explored the use of AI to predict individual responses to aphasia therapy from data sources including demographic, behavioural and imaging data [26,31]. In one study, a computational model of bilingual language representations was created based on self-organising maps, a lesion simulated, and predictions of therapy response were tested against clinical data, with promising results [26].
Seven retrieved studies simulated aspects of normal or disrupted language processing through computational models, in order to understand the neural foundations of language representations and aphasic errors [32][33][34][35][36][37][38]. Another group of studies examined the neurological correlates of aphasia symptoms through analysis of combined neuroimaging modalities (structural or functional) and pathology data compared to clinical presentations [39][40][41][42]; others went further and explored machine-derived aphasia subtypes from the same data. These three studies focused on Primary Progressive Aphasia subtypes [43][44][45].

Discussion
To our knowledge, there have been no previous explorations of the application of AI to aphasia rehabilitation. Therefore, we undertook a scoping review of studies related to the use of AI in aphasia rehabilitation to describe current applications and highlight future directions for research. The total yield of 77 studies is relatively low given the potential applications of AI in aphasia. One limitation of this scoping review is that it excludes applications of AI within industry that are not reported in the literature. It is probable that AI-based assistive technology developed for the general population is being utilised by PWA and not reported in research; for example, voice assistants/smart speakers use NLP to understand the requests of users, and these have been co-opted by people with other communication disorders for various purposes [46].
Assessment and diagnosis of aphasia was, by far, the most researched use of AI. This might be explained by the fact that categorisation of data, especially into categories developed by humans, is a relatively "clean" problem compared to the complexities of aphasia treatment and real-life communication. Nonetheless, it seems that no assessment systems are yet implemented in clinical practice. One potential barrier could be that publicly available aphasia data (speech and videos) are often not comprehensively annotated, given the complexities associated with impaired language as well as the practical difficulties of managing the large volumes of data being collected. Although several features (acoustics, linguistics, facial expressions, gestures) can be derived from these data, the lack of annotated data poses challenges in developing supervised machine learning models. Efficiency and cost-effectiveness would need to be demonstrated before these systems are implemented.
A small number of studies used AI to explore prognosis, either predicting general recovery of aphasia severity or response to therapy. Several projects are analysing large-scale datasets to improve the accuracy of prognostic algorithms, combining neuroimaging, clinical data and demographic factors [47]. Machine learning is one method of analysing this highly complex and interactional data and applying it to new cases of aphasia.
While many studies in this review trained machine learning models to differentiate traditional aphasia subtypes, an alternative approach explored in a smaller number of studies was unsupervised learning of subtypes. In this approach, no labels were provided, and the machine identified groupings of aphasia cases using patterns in any relevant dimensions; these are not necessarily intuitive to humans. Landrigan et al. (2021) found three profiles in post-stroke aphasia that were primarily distinguished by semantic and phonological processing, with traditional features such as fluency or expressive/receptive abilities distributed across clusters [45]. Within primary progressive aphasia (PPA), initial results suggest there could be five or six distinct PPA subtypes, as opposed to the three clinically derived subtypes currently in use [43]. This data-driven approach may ultimately help explain the heterogeneity of treatment response in aphasia by linking clinical presentation more closely to neuropathology.
Interestingly, while the earliest papers conceptualised therapy software that would emulate clinical decision-making, few retrieved studies focused on applications to aphasia treatment. There is a shortage of funding for aphasia intervention relative to the number of people living with aphasia [48], and intervention is an important area where AI could produce adaptive, personalised therapy software that requires less direct clinician input [49]. For example, while traditional therapy software can use fixed rules to alter the difficulty of the task according to the accuracy and response time of the user [50], as initially envisaged by Katz (1990) [51], AI can allow nuanced adjustments based on a deep understanding of the tasks, stimuli, patient profiles and other dimensions [50]. Constant Therapy is one example where such a system has been implemented in broadly used commercial therapy software [49]. The ability to make ongoing adjustments as aphasia symptoms change over time is important for self-managed therapy or maintenance tasks, particularly when clinician input is not available. Such self-managed maintenance after discharge from therapy services is now recognised as crucial to maintaining gains made in therapy [52].
Recently, several AI technologies have advanced considerably and could have applications for therapy software, immediately and into the future as the technologies progress. NLP is now able to both process and generate complex language of considerable length. Language models built using transformers allow the AI to attend to meaning beyond the phrase or sentence level, even across paragraphs, allowing a greater understanding of context and referential language [54]. Currently, most therapy software is focused on the word level, or, where sentences are used, stimuli and correct responses are pre-programmed. NLP advancements open the door to AI software generating sensible sentence- or paragraph-level stimuli and accurately checking the patient's comprehension against its own parsing. NLP also has the potential to enhance virtual therapists. Virtual therapists have been developed and trialled within aphasia [37,38], and NLP could allow more natural, open-ended, chatbot-style interaction with users. In time, AI-enabled virtual therapists could provide training and correction of conversational language in real time. While virtual reality and virtual therapists are in use within aphasia, they are currently being used independently. The integration of these components with appropriate data fusion techniques could be used to maximise authentic practice opportunities [29]. The integration of chatbot interaction with language models, the most notable example currently being ChatGPT [53], also points to future opportunities for clinicians and those with less severe aphasia to request specific and personalised materials for rehabilitation and practice. For example, GPT-3 and ChatGPT are both capable of generating a list of common sentences using the word "o'clock". These language models are trained on a broad range of texts but can be fine-tuned with more specific training. Similarly, text-to-image generation models may become increasingly accurate at generating visual stimuli suitable for assessment, confrontation naming or communication purposes. Both text and images generated by AI have the advantage of being copyright free in most cases. In the near future, multimodal models that can generate material across images, text, video and audio are likely to become available and could be used to create richer and more stimulating aphasia treatment materials. As the sophistication, accuracy and usability of AI tools expand, more users will be able to access the benefits of AI without requiring programming or technical skills.
In addition to the very few therapy-focused papers in this review, there was also a low number of studies exploring self-management/assistive options. Those that were explored do not appear to have progressed to real-world implementation. The barriers are not clear from the literature, but PWA have identified that existing technology offers a range of options in self-management [55]. AI could potentially further assist in communicatively complex situations. Using NLP, language models can process complex texts and generate summaries with high readability; with training, this process could automate aphasia-friendly text production. Language models can also recognise and correct grammatical errors and, with a greater understanding of topic and context, may also be able to accurately identify and correct paraphasias and other language-based errors. Future applications in aphasia could leverage the abilities of advanced NLP techniques such as transformers, language models and conversational AI to build more robust, customisable alternative and augmentative communication applications to assist with communication difficulties. This will allow the software to learn patterns at an individual level and compensate accordingly.
Another largely unexplored application of AI in aphasia is the emotional domain. Recognising emotional health is crucial in PWA, who are at high risk of depression and anxiety [56]. However, in our review, only one AI study explored the assessment of the emotions of PWA [57]. AI has successfully been utilised in recognising a variety of mental health disorders in non-aphasic speakers [58] and could allow early flagging of mood disorders in PWA based on combined analysis of physical and social activity, facial recognition, eye tracking [59] and linguistic data. Chatbots and conversational agents could then be used to consider options for escalating mental health challenges to relevant healthcare professionals for management.
Challenges remain in applying AI to aphasia. For example, speech recognition is complex, with additional challenges within aphasia being the need to recognise and appropriately take account of paraphasic errors, neologisms, revisions, greater pause times and agrammatism. However, as the sophistication of AI technology advances, it will become more suited to the complexity of communication in PWA. One avenue to improve accuracy is the fusion of data from multiple modalities. For example, while analysis of auditory data alone may provide imperfect word recognition, the addition of facial and gesture recognition, emotion capture and understanding of the visual surroundings of the person may provide enough additional context to improve accuracy. This will lead to applications with a better understanding of individual users, employing concepts such as human-centric AI [60] and digital twins [61].
Digital inclusion is a key factor that needs to be considered for PWA to ensure the accessibility of AI-enabled solutions [62]. Co-design of solutions alongside PWA is one way to ensure the usability and accessibility of the final products [63,64], thereby providing personalised rehabilitation, management and care options for PWA.

Conclusion
Over time, the use of AI in aphasia research has broadened in terms of the technology used as well as how it has been applied to aphasia. Although AI has progressed exponentially over time, the implementation of AI into aphasia management is relatively slow-paced. Many PWA see technology 'as an enabler of self-management, autonomy and life participation' [16], and the ongoing enhancement of AI within aphasia could facilitate access to personalised care and expand opportunities for self-management.

Glossary
Computational models of language
Computational models of language attempt to simulate aspects of the structure and organisation of language in the human brain. These models can then be explored to learn about language processing, storage and disruption. They are used for word prediction tasks in aphasia.

Convolutional neural network
Convolutional Neural Networks (CNNs) are able to "scan" spatial data such as images or spectrograms to identify multiple features. Using these features, they are then able to learn to categorise data through training. Image and video processing with CNNs can be used for gesture and facial expression analysis.
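The "scanning" operation at the core of a CNN is a convolution: a small kernel slides over the input and responds where a feature (here, a vertical edge) is present. A minimal pure-Python sketch, with an invented toy image and kernel for illustration:

```python
def convolve2d(image, kernel):
    """Slide a kernel over a 2D grid, producing a feature map (valid mode)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    feature_map = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # element-wise multiply the kernel with the patch beneath it
            s = sum(kernel[a][b] * image[i + a][j + b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        feature_map.append(row)
    return feature_map

# Toy 4x4 "image" with a vertical edge between columns 1 and 2
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
# Simple vertical-edge detector kernel
kernel = [[-1, 1],
          [-1, 1]]
fmap = convolve2d(image, kernel)
```

In a trained CNN the kernel values are learned rather than hand-set, and many kernels are stacked in layers, but the sliding-window computation is the same.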

Decision tree
Decision trees classify or label data by "splitting" data by multiple features using a threshold. For example, data from an aphasia assessment might be split by high/low fluency, high/low comprehension scores, high/low repetition scores, etc. The thresholds in decision trees are adjusted using algorithms until the optimal categorisation is reached, ideally matching known aphasia subtypes. Decision trees could be used for tasks such as aphasia severity prediction and type classification.
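The fluency/comprehension/repetition splits described above can be sketched as nested threshold checks. The thresholds and 0-10 scale below are invented for illustration and are not clinically validated; in a real decision tree they would be learned from data:

```python
def classify_subtype(fluency, comprehension, repetition):
    """Toy decision tree assigning a classical aphasia subtype label
    from three scores on an assumed 0-10 scale (illustrative only)."""
    if fluency >= 5:                      # split 1: fluent vs non-fluent
        if comprehension >= 5:            # split 2: comprehension
            # split 3: repetition separates anomic from conduction
            return "anomic" if repetition >= 5 else "conduction"
        return "Wernicke's"
    else:                                 # non-fluent branch
        if comprehension >= 5:
            return "Broca's"
        return "global"
```

A learning algorithm would adjust each threshold (here fixed at 5) until the tree's labels best matched the annotated training cases.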

Deep neural network
Deep neural networks contain multiple hidden layers of nodes, which allow learning/processing of highly advanced and complex features, e.g. estimating recovery based on demographic and imaging data. These models could be used for tasks such as aphasia severity prediction and type classification.

Dimensionality
Dimensionality refers to the number of features or variables in a dataset. While a higher number of dimensions increases the information available to classify the data, the number of examples required to adequately train a machine learning model increases exponentially with each additional dimension.

Fuzzy rule systems
Traditional computer/mathematical logic uses binary decision making (e.g. aphasia: true/false) based on fixed thresholds. In contrast, fuzzy rule systems allow sorting of information using degrees of membership across multiple variables that may interact with one another.
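The contrast with binary thresholds can be illustrated with membership functions: instead of a score being "severe" or not, it belongs to each category to a degree between 0 and 1. The category boundaries below are invented for illustration, not clinically derived:

```python
def triangular(x, a, b, c):
    """Triangular membership: 0 at or below a, peaking at 1 at b, 0 at or above c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def severity_memberships(score):
    """Degree of membership in each severity category for a 0-100 score
    (boundary values are illustrative only)."""
    return {
        "severe":   triangular(score, -1, 0, 50),
        "moderate": triangular(score, 25, 50, 75),
        "mild":     triangular(score, 50, 100, 101),
    }

m = severity_memberships(40)
```

A score of 40 is thus mostly "moderate" with some residual "severe" membership, rather than being forced into a single box; fuzzy rules then combine such graded values across variables.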
Genetic programming
Allows a categorisation system to 'evolve' over many iterations from relatively random/inaccurate to an optimised categorisation.
Naïve Bayes
Uses Bayesian probability to develop classification models from training data. These models could be used for tasks such as aphasia severity prediction and type classification.
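A minimal sketch of the idea: count how often each feature appears with each label, then pick the label with the highest probability for a new case. The feature names and subtype labels are hypothetical training data invented for illustration:

```python
from collections import defaultdict

def train_nb(examples):
    """Count label frequencies and per-label feature frequencies from
    (features, label) pairs, where features is a set of present features."""
    label_counts = defaultdict(int)
    feat_counts = defaultdict(lambda: defaultdict(int))
    for feats, label in examples:
        label_counts[label] += 1
        for f in feats:
            feat_counts[label][f] += 1
    return label_counts, feat_counts

def predict_nb(model, feats):
    """Pick the label maximising P(label) * product of P(feature | label),
    with add-one (Laplace) smoothing for unseen features."""
    label_counts, feat_counts = model
    total = sum(label_counts.values())
    best_label, best_p = None, -1.0
    for label, n in label_counts.items():
        p = n / total
        for f in feats:
            p *= (feat_counts[label][f] + 1) / (n + 2)
        if p > best_p:
            best_label, best_p = label, p
    return best_label

# Hypothetical training cases: observed features paired with subtype labels
examples = [
    ({"non_fluent", "good_comprehension"}, "Broca's"),
    ({"non_fluent", "good_comprehension"}, "Broca's"),
    ({"fluent", "poor_comprehension"}, "Wernicke's"),
    ({"fluent", "poor_comprehension"}, "Wernicke's"),
]
model = train_nb(examples)
```

The "naïve" assumption is that features are independent given the label, which lets the per-feature probabilities simply be multiplied together.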

Natural Language Processing
Natural Language Processing is the use of AI to understand and produce unstructured human language (natural language), as opposed to structured, machine-readable text such as programming code. Common examples of NLP in general use are voice assistants and search engine queries. NLP is widely used to assess communication via automation tools and to build text-based systems (therapy software/chatbots) for aphasia.

Neural network
Neural networks are conceptually similar to the human brain in that they contain nodes and connections. Nodes activate other nodes through connections when a particular threshold is reached. Nodes are arranged in layers. An input layer is fed the data (e.g. an image), a series of hidden layers process the data using various methods, usually not comprehensible to humans, and the output layer provides a response (e.g. 0.97 probability that the image contains a 'thumbs down' gesture). During training, the output from pre-labelled training data is compared to the desired output, and the network adjusts the connections until its accuracy improves. These models could be used for tasks such as aphasia severity prediction and type classification.
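The node-and-connection mechanism can be sketched with a single artificial neuron (a perceptron): a node "fires" when its weighted input crosses a threshold, and training nudges the connection weights whenever the output disagrees with the label. The 2D points and labels below are invented toy data:

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Train a single neuron on labelled 2D points.
    data: list of ((x1, x2), label) with label 0 or 1."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            # the node fires (outputs 1) when the weighted input crosses 0
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - out
            # adjust the connections in proportion to the error
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Toy linearly separable training data (labels are illustrative)
data = [((0.0, 0.0), 0), ((0.2, 0.1), 0), ((0.9, 1.0), 1), ((1.0, 0.8), 1)]
w, b = train_perceptron(data)
```

Deep networks chain many such nodes in layers and use gradient-based rules rather than this simple update, but the principle of error-driven weight adjustment is the same.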

Optimisation
Optimisation is the adjustment process of machine learning, where the models continue to learn until the difference between the desired output (the human-labelled data) and the model output is as low as possible. For example, a neural network may initially categorise data as aphasic/non-aphasic only 50% accurately, but improve to 90% after training.

Recurrent neural network
In recurrent neural networks, nodes are able to learn and analyse data that contains a temporal dimension (sequences) by retaining or 'remembering' information from previous data in the set. For example, recurrent neural networks are particularly useful for comprehending the meaning of a sentence by retaining the meaning of each word as it is processed. These models could be used for tasks such as aphasia severity prediction and type classification.

Self-organising maps
Self-organising maps (SOMs) are a tool for sorting data which ultimately allows identification of categories. SOMs represent complex data on a two-dimensional 'map' according to their similarity across many dimensions. During training, the map gradually adjusts to the data (or self-organises). SOMs can be used for profiling and clustering different aphasia types.
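The self-organising process can be sketched in one dimension: each node holds a weight, the best-matching node (and, more weakly, its neighbours) moves towards each sample, and after training the nodes settle near the natural groupings in the data. The scalar samples below are invented to form two clusters:

```python
import random

def train_som(samples, n_nodes=4, epochs=50, lr=0.5):
    """Train a tiny one-dimensional self-organising map on scalar samples."""
    random.seed(0)
    weights = [random.random() for _ in range(n_nodes)]
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)          # decaying learning rate
        for s in samples:
            # best-matching unit: the node closest to the sample
            bmu = min(range(n_nodes), key=lambda i: abs(weights[i] - s))
            for i in range(n_nodes):
                # neighbourhood influence falls off with distance on the map
                influence = 1.0 if i == bmu else (0.5 if abs(i - bmu) == 1 else 0.0)
                weights[i] += rate * influence * (s - weights[i])
    return weights

# Two illustrative clusters of normalised scores, around 0.1 and 0.9
samples = [0.05, 0.1, 0.15, 0.85, 0.9, 0.95]
weights = train_som(samples)
```

A real SOM arranges nodes on a 2D grid and uses a smooth neighbourhood function, but the principle of pulling the winning node and its neighbours towards the data is the same.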

Supervised Learning
In supervised machine learning, labelled datasets (previously annotated by humans) are used to train models, allowing the algorithms to learn the characteristics of the data based on labels. Once trained, the model is used to predict or classify data it has not encountered before, for tasks such as aphasia severity prediction and type classification.
Support Vector Machine
Support Vector Machines analyse training data spatially, with a dimension for each variable or feature, and calculate the optimal path to 'slice' through the data so that it is separated according to the labelled categories. New data can then be classified using the learned information. These models could be used for tasks such as aphasia severity prediction and type classification.

Transformers
Transformers are deep learning models that have a mechanism of differentially weighting the significance of input data based on a concept called self-attention.

Unsupervised learning
Unsupervised machine learning models sort uncategorised datasets without labels or examples being provided. The models learn to explore data based on any suitable dimensions, and results may or may not align with traditional, human classifications but can reveal previously hidden patterns in data. Compared to supervised machine learning models, the key benefit is that unsupervised models do not depend on annotated/pre-labelled datasets. Unsupervised learning can be used for exploration, clustering and profiling of aphasia patients based on their data.
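Clustering of this kind can be sketched with k-means, one of the simplest unsupervised algorithms: alternately assign each point to its nearest centroid and move each centroid to the mean of its cluster, with no labels involved. The severity scores below are invented to form two natural groupings:

```python
def kmeans_1d(points, k=2, iters=20):
    """Minimal k-means clustering on scalar data."""
    # initialise centroids spread across the data range
    lo, hi = min(points), max(points)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid's cluster
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # update step: move each centroid to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Illustrative severity scores forming two natural groupings
scores = [12, 15, 14, 78, 82, 80]
centroids = sorted(kmeans_1d(scores))
```

The algorithm discovers the two groupings from the data alone; whether such machine-derived clusters correspond to recognised aphasia subtypes is exactly the question the unsupervised studies above explore.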

Word embedding
Humans largely learn word meanings by hearing them used in context, rather than through explicit definitions. Similarly, by analysing where a word is used or embedded within many sentences, machines can learn the semantic meaning of words by investigating the neighbouring words (the context), a crucial part of processing and generating natural language. Word embedding models are used in text-based assessment systems and word prediction applications for aphasia.
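The neighbouring-word idea can be sketched with raw co-occurrence counts: each word's vector records which words appear beside it, and words used in similar contexts end up with similar vectors. This is a toy count-based stand-in for learned embeddings, with an invented three-sentence corpus:

```python
from collections import defaultdict
from math import sqrt

def cooccurrence_vectors(sentences, window=1):
    """Build word vectors from neighbouring-word counts."""
    vecs = defaultdict(lambda: defaultdict(int))
    for sent in sentences:
        words = sent.lower().split()
        for i, w in enumerate(words):
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if i != j:
                    vecs[w][words[j]] += 1   # count each neighbour within the window
    return vecs

def cosine(v1, v2):
    """Cosine similarity between two sparse count vectors."""
    keys = set(v1) | set(v2)
    dot = sum(v1[k] * v2[k] for k in keys)
    n1 = sqrt(sum(x * x for x in v1.values()))
    n2 = sqrt(sum(x * x for x in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

# Toy corpus: "cat" and "dog" share contexts, "sky" does not
corpus = ["the cat sat here", "the dog sat here", "the sky is blue"]
vecs = cooccurrence_vectors(corpus)
```

Because "cat" and "dog" occur between the same neighbours, their vectors are nearly identical, while "sky" is further away; trained embedding models learn dense versions of the same contextual signal from far larger corpora.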

Figure 1. Screen capture of the data analysis dashboard.

Figure 2. PRISMA flow chart used for the scoping review.

Figure 3. The evolution of AI in aphasia research.

Figure 4. The evolution of AI techniques.

Figure 5. The AI landscape in aphasia research.

Table 1. Publication counts by categorisation of research objectives.