Research Article

Psychometric properties of the Health Professionals and Aphasia Questionnaire (HPAQ): a new self-assessment tool for evaluating health communication with people with aphasia

Pages 687-705 | Received 09 Nov 2020, Accepted 03 Mar 2021, Published online: 24 Mar 2021

ABSTRACT

Background

Communication partner training (CPT) of health professionals (HP) is recommended in several international guidelines for stroke and aphasia. The effectiveness of CPT is well established, but research and evaluation are needed to support implementation of CPT in different healthcare settings. The Health Professionals and Aphasia Questionnaire (HPAQ) was developed to provide a valid and reliable outcome measure that is feasible for use with HP working with people with aphasia in practice settings as well as in research.

Aims

The aim was to investigate the test–retest reliability and underlying psychometric properties of the HPAQ when administered four weeks apart to HP in neurorehabilitation.

Methods and Procedures

Participants were recruited from a CPT implementation study in Denmark. Health professionals with contact with patients with aphasia were assigned to CPT courses and asked to fill out the questionnaire four weeks before and on the day of their scheduled course. In all, 270 HP responded to the HPAQ. The internal consistency, test–retest reliability, structural validity, and floor and ceiling effects of the HPAQ questions were investigated using descriptive statistics, the intraclass correlation coefficient (ICC), Cronbach’s alpha, and factor analysis.

Outcomes and Results

The overall test–retest reliability of the HPAQ was excellent (ICC = 0.86), and the ICCs of the individual questions ranged between 0.48 and 0.80. Minor ceiling or floor effects were present for some questions. The factor analysis revealed three underlying factors. The first and strongest was a CPT-related knowledge and skill factor represented in all questions except for question 10. The second component was associated with questions probing environmental factors. Test–retest reliability was excellent for the score developed from the variables contributing to these factors. A third factor was associated with expecting or being prepared to make an extra effort when communicating with patients with aphasia. Test–retest reliability for the variables explaining this factor was good. Regarding professions, nurses had the highest ICC on the overall HPAQ, whereas physiotherapists and occupational therapists had the lowest. The overall HPAQ showed excellent or good reliability for all healthcare professionals with six or more years of experience, and fair reliability for respondents with fewer years of experience.

Conclusions

The HPAQ has good reliability and is suitable as an outcome measure in CPT studies aimed at different health professionals working in practice settings with people with aphasia.

Introduction

Communicating with people with aphasia can be difficult and may present challenges in healthcare contexts. Communication problems between healthcare professionals (HP) and patients with aphasia may negatively influence patients’ safety, access to information, and ability to participate in information sharing and decisions relating to their health care (O’Halloran et al., 2012, 2011). Given that successful communication is a collaborative achievement between conversation partners (Perkins & Milroy, 1997), HP may play a crucial role in assisting people with aphasia to gain access to appropriate treatment, care, and therapy if they possess the requisite knowledge and skills to do so.

Communication partner training (CPT) has been demonstrated to be effective in improving the knowledge and skills of communication partners to support people with aphasia in participating in interaction and information exchange (Simmons-Mackie et al., 2010, 2016; Tessier et al., 2020). CPT is an umbrella term for different approaches (cf. Saldert et al., 2018), including individualized or dyadic coaching methods, which aim to improve communication between a person with aphasia and one or more communication partners through training relevant to the specific participants (e.g., Beeke et al., 2013; Lock et al., 2001). Other approaches like Supported Conversation for Adults with Aphasia™ (SCA™) (Kagan, 1998) focus on generic principles and strategies, which may be useful for communication partners and people with different types of aphasia.

Both individualized and generic approaches have been used successfully by health professionals as communication partners (Simmons-Mackie et al., 2010, 2016). Individualized or dyadic approaches have often been applied in long-term care or rehabilitation settings, where patients’ length of stay permits tailoring of the intervention to individual patients and their frequent professional communication partners (Eriksson et al., 2016). Some studies have combined an individualized approach with training of generic strategies (Chu et al., 2018; Genereux et al., 2004; McGilton et al., 2018). Generic approaches including the SCA™ method have typically been applied in hospital settings, where staff are likely to treat or care for a range of patients with aphasia for shorter durations (Cameron et al., 2019; Heard et al., 2017; Horton et al., 2015; Jensen et al., 2014; Simmons-Mackie et al., 2007). Generic approaches are also used in educational settings with students in different health professions (Baylor et al., 2019; Cameron et al., 2018, 2015; Finch et al., 2017; Legg et al., 2005; Saldert et al., 2016).

Evaluation of outcome in CPT studies has taken different forms. Both outcomes pertaining to people with aphasia and outcomes pertaining to their communication partners may be relevant; however, the focus of this study is on outcome measures relating to health professionals. As argued by Saldert et al. (2018), measuring the outcome of CPT interventions is complex. For a measure to have acceptable face validity, it needs to be aligned with the main components and stipulated outcomes of a given CPT intervention. However, CPT approaches differ in goals and intervention components (Cruice et al., 2018), and CPT interventions for health professionals are applied in different settings (acute, rehabilitation, long-term, or educational) with different health professions as target groups. These factors complicate the development of shared benchmark outcome measures for CPT. According to the COSMIN (COnsensus-based Standards for the selection of health Measurement INstruments) initiative, which aims to improve the selection and use of outcome measurement instruments in both research and clinical practice, a range of measurement properties should be evaluated for each outcome measure (Mokkink et al., 2010). The COSMIN taxonomy of measurement properties includes interpretability, responsiveness, and different areas of reliability and validity (Mokkink et al., 2010).

Questionnaires probing health professionals’ self-reported knowledge, skills, or confidence in communicating with people with aphasia seem an acceptable and feasible approach to measuring outcome in studies of CPT intervention with large sample sizes. Several studies in healthcare or educational settings have used a questionnaire approach (e.g., Baylor et al., 2019; Cameron et al., 2019, 2017; Doherty & Lay, 2019; Heard et al., 2017; Jensen et al., 2014; Power et al., 2020). However, the surveys used have often not been evaluated for validity and alignment with intervention goals and components, nor have measurement properties such as their test–retest reliability been assessed. The following questionnaires have been used or appear relevant to evaluating the outcome of CPT interventions with healthcare professionals:

  • Communication-Impairment Questionnaire (CIQ) is an 8-item self-reported survey developed and used as an outcome measure in a dyadic approach to CPT intervention by Genereux et al. (2004). It includes questions probing a healthcare professional’s attitude and perceptions of communication with a specific (named) patient. The questions do not specifically target communication with patients with aphasia, but may be equally suitable for individuals with other types of communication impairments. The CIQ has been further adjusted and used by McGilton et al. (2011) and McGilton et al. (2012), but no published data on its test–retest reliability have been found.

  • Knowledge of Aphasia Questionnaire (KAQ) is a 13-item self-reported survey developed by the Aphasia Institute in Toronto (Simmons-Mackie et al., 2007). It has been used before and after CPT intervention in healthcare settings to evaluate changes in healthcare staff’s knowledge of aphasia, as well as their behavior and attitude towards people with aphasia (Jensen et al., 2014; Simmons-Mackie et al., 2007; Sorin-Peters et al., 2010). However, the test–retest reliability of the survey has not been established.

  • Communicative Access Measure for Stroke for Frontline Practice (CAMS2) (Kagan et al., 2017) is a more recent survey, also developed by the Aphasia Institute in Toronto. It is a carefully developed tool, which includes three questionnaires for evaluating accessibility for people with aphasia at the institutional level, at the level of frontline staff, and from the perspective of patient satisfaction. The CAMS2 consists of 28 items assessing the knowledge and perspectives of healthcare professionals regarding aphasia, including strategies used or not used by staff. The CAMS surveys have been psychometrically evaluated (Kagan et al., 2017). For CAMS2, test–retest reliability was moderate to high for most of the items, with all but one item ranging from .40 to .96. Although the CAMS surveys have been designed to provide an overall evaluation of accessibility and are rather lengthy from an acceptability/feasibility point of view, the questionnaires also have potential as outcome measures probing change at different levels as a consequence of CPT intervention (Isaksen, Fromsejer Heiberg, et al., in preparation).

  • Aphasia Attitudes, Strategies, and Knowledge Survey (AASK) (Power et al., 2020) is a new survey, which consists of 11 items using different response formats: free-text responses, tick-off responses, and ratings on a five-point scale. The survey has been developed to align with key components of the SCA™ method, including knowledge of aphasia, knowledge of relevant communication strategies, and the attitudes and confidence of the respondent in communicating with people with aphasia. The survey has shown strong test–retest reliability with students from different health professions (Power et al., 2020). It appears feasible for use in large studies, especially in educational settings, since the majority of the questions probe respondents’ ability to reproduce knowledge about aphasia and communication strategies. There is less emphasis, however, on self-reported practice in communicating with people with aphasia.

Although self-reported outcomes tend to be either over- or underestimated both by patients (e.g., weight and height) and by health professionals (e.g., knowledge and self-confidence) (Engstrom et al., 2003; Liaw et al., 2012; Saldert et al., 2018), self-report questionnaires appear to be a feasible choice of outcome measure when evaluating large-scale implementation of CPT. However, to our knowledge, only the CAMS2 and AASK questionnaires have been psychometrically investigated. The length of CAMS2 and its focus on communicative access rather than the outcome of CPT intervention may make it less appropriate and feasible as an outcome measure in studies aiming specifically to evaluate changes in professional knowledge, attitude, and behaviour as a consequence of intervention. In the AASK questionnaire, the focus is on reproducible knowledge about aphasia and communication, less so on changes in professional practice. Despite its confirmed reliability and alignment with core elements of the SCA™ method, the AASK may therefore be less suited for evaluating change in studies or clinical applications where CPT intervention is applied in a hospital or similar clinical setting with the aim of changing the practice of trained health professionals.

Given that CPT is recommended in several international guidelines for managing aphasia (e.g., Hebert et al., 2016; Power et al., 2015; Sundhedsstyrelsen, 2011), a valid and reliable self-report questionnaire is needed that is suitable for evaluating the implementation of CPT for health professionals working with people with aphasia in different fields of frontline practice. The current study was part of a project aiming to develop such a tool, suited for measuring outcome in large-scale CPT implementation studies. The focus of this study is on the psychometric properties of the developed tool, the Health Professionals and Aphasia Questionnaire (HPAQ), specifically its internal consistency, test–retest reliability, structural validity, and floor and ceiling effects.

Development of the HPAQ

The HPAQ was developed in multiple stages and based on 27 questions from two existing self-report tools: CAMS2 (Kagan et al., 2017) and an abbreviated and revised version of the KAQ (Simmons-Mackie et al., 2007). New questions formulated by the authors were also added to this pool of potential survey questions for inclusion in the HPAQ. The full development process, including the rationale and details of specific methodological steps, is described in Isaksen, Christensen, et al. (in preparation) and is briefly summarized below.

The content of the potential survey items was aligned with four dimensions of professional competence inspired by Epstein and Hundert (2002): knowledge, skill, attitude, and practice. Furthermore, these competence categories were considered in relation to the components of generic CPT based on the SCA™ method (Kagan, 1998): participants learn basic information about aphasia and communication (knowledge), are motivated through role-play eliciting the feelings associated with not being able to express oneself (attitude), and are introduced to specific communicative strategies, which are practiced in integrative role-play in real-life-like scenarios (skills, generalization into practice). In keeping with the focus on healthcare practice settings, additional questions were included to probe the supportive or non-supportive role of the work environment in relation to using CPT principles with patients with aphasia. All candidate questions were subsequently rated for face validity by an independent expert panel of 10 hospital-based speech-language therapists familiar with providing CPT to health professionals. Finally, cognitive interviews with health professionals were carried out to evaluate and adjust the wording of questions for inclusion in the HPAQ or to discard them.

For the cognitive interviews, 16 frontline staff were recruited who worked with people with aphasia and had mixed professional backgrounds: One nursing assistant, six nurses, two medical doctors, two physiotherapists, and five occupational therapists. Eight of the participating health professionals constituted a “pre-group”, who had not received CPT; the remaining eight constituted a “post-group”, who had received CPT at their current or earlier workplace. This recruitment procedure served to ensure that the questions were understood by health professionals both with and without prior CPT training, since the questionnaire was intended to be used both before and after CPT. At a later stage, four other health professionals piloted an electronic version of the HPAQ.

Prior to this study, responses to either the CAMS2 (Kagan et al., 2017) or an abbreviated and revised version of the KAQ (Simmons-Mackie et al., 2007) from approximately 400 Danish health professionals were analysed for variability and for floor and ceiling effects. Based on these results, many items were discarded from use in the HPAQ. Also based on these analyses, it was decided to use a visual analogue (VA) scale as the response scale for all questions in the HPAQ (see Isaksen, Christensen, et al. (in preparation) for further details). Visual analogue scales have better metrical characteristics than discrete scales; thus, a wider range of statistical methods can be applied to the measurements (Reips & Funke, 2008).

The questionnaire resulting from these multiple steps consisted of 16 items, see Table 1. The questionnaire was developed and evaluated psychometrically in Danish. It was subsequently translated into English and validated by back translation into Danish.

Table 1. The 16 HPAQ questions in English translation, grouped according to their conceptual association with different aspects of professional competence

Aim of study

The aim of the study was to investigate the internal consistency, test–retest reliability, structural validity, and floor and ceiling effects of the HPAQ questions administered to health professionals in neurorehabilitation at two time points four weeks apart. A further objective was to decide whether some items should be removed from the final questionnaire due to low test–retest reliability.

Method

Study design

The design for evaluating the psychometric properties of the HPAQ capitalized on a large clinical implementation project in the Region of Southern Denmark. The participating institutions were two stroke and neurological units (acute and rehabilitation) of the Hospital of Southwest Jutland and four surrounding municipalities. Health professionals who had frontline contact with people with aphasia were assigned to CPT courses, regardless of profession and prior experience. Course assignment was mandatory for all hospital staff and for the majority of the municipality staff. Up to 20 participants were trained in each course, which ran over two days (three hours each day, with two weeks in between).

For the current study of the HPAQ, all course participants were asked to fill out the questionnaire at two baseline time points before their training: approximately four weeks before their scheduled course and immediately before the training began. Between the two baseline measures, no additional promotion took place apart from reminders to complete the questionnaire. The questionnaire was also given twice post-training for outcome evaluation, but those results are not reported here.

Respondents to the HPAQ questionnaire included nurses and nursing assistants, physiotherapists, occupational therapists, medical doctors, secretaries and other staff groups with frontline roles in neurorehabilitation. Background data were collected on demographic details, including profession, work setting, prior professional experience, age and gender.

Prior to the study, ethical approval was obtained from the University of Southern Denmark (project no. 10.052) and the Hospital of Southwest Jutland/Region of Southern Denmark.

Materials and procedures

For practical reasons, the HPAQ was administered in two different versions: an electronic version and a paper and pen version (see the Appendix for the full version of the survey, excluding demographic questions). Four weeks before their scheduled course attendance, respondents received an email with a link to an electronic version of the HPAQ constructed and distributed in the software SurveyXact. On the day of the course, respondents sat in a classroom and were given a paper and pen version of the questionnaire, which they had to fill out and hand in before the actual training began. Completing the questionnaire took approximately 10 minutes.

The electronic version and the paper and pen version were identical in wording and general formatting. However, whereas a 10 cm VA scale was used in the paper and pen version, the exact length of the VA scale in the electronic version depended on the monitor being used and required the use of a slider button displaying the number of the chosen setting (between 0 and 100). Thus, in the electronic version, the participants could see the number at which they placed the slider, as opposed to the blank 10 cm VA scale in the paper and pen version (see further in the study limitations). The electronic version was piloted by four health professionals to explore possible difficulties with responding to it, including using the slider; no indications of any difficulties were found. Regardless of format, both versions yielded a number between 0 and 100: measured by hand with a ruler in the paper and pen version, or registered as the position of the slider in the electronic version.

Data analysis

Descriptive statistics were used to evaluate floor and ceiling effects. A floor or ceiling effect was considered present if more than 15% of the respondents achieved the lowest or highest possible score on a question (Terwee et al., 2007). Data from both assessments were used for this analysis.
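
To make the criterion concrete, the following is a minimal sketch (not the authors’ actual STATA code) of how the 15% floor/ceiling criterion could be computed in Python with pandas, assuming responses are stored in a data frame with one column per HPAQ question and values between 0 and 100; the column names q1–q16 are hypothetical.

import pandas as pd

def floor_ceiling_prevalence(responses: pd.DataFrame, low: float = 0, high: float = 100) -> pd.DataFrame:
    # Percentage of respondents at the lowest/highest possible score per question.
    # A floor or ceiling effect is flagged if more than 15% of respondents hit
    # the scale minimum or maximum (Terwee et al., 2007).
    floor_pct = (responses == low).mean() * 100
    ceiling_pct = (responses == high).mean() * 100
    out = pd.DataFrame({"floor_pct": floor_pct, "ceiling_pct": ceiling_pct})
    out["floor_effect"] = out["floor_pct"] > 15
    out["ceiling_effect"] = out["ceiling_pct"] > 15
    return out

# Hypothetical usage: summary = floor_ceiling_prevalence(df[[f"q{i}" for i in range(1, 17)]])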

Internal consistency describes the interrelatedness among items, assuming the questionnaire to be unidimensional. Cronbach’s α was calculated for the total scale and considered adequate if it ranged from 0.70 to 0.95 (Terwee et al., 2007).
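
For illustration only, a small Python sketch of the standard Cronbach’s alpha formula, α = k/(k−1) × (1 − Σ item variances / variance of the total score); it assumes the same hypothetical data frame of 16 item scores as above and uses simple listwise deletion, which is not necessarily how the authors handled missing responses.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    complete = items.dropna()                 # listwise deletion (simplifying assumption)
    k = complete.shape[1]
    item_variances = complete.var(axis=0, ddof=1)
    total_variance = complete.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)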

Reliability was defined as the proportion of the total variance in the measurements that is due to “true” differences between respondents. For test–retest reliability, the intraclass correlation coefficient (ICC) for paired data was calculated using a one-way analysis of variance (Shrout & Fleiss, 1979). For interpretation of the ICC, the guideline from Cicchetti and Sparrow (1981) was used, which proposes the following interpretation: poor (ICC < 0.40); fair (ICC 0.40–0.60); good (ICC 0.60–0.75); and excellent (ICC 0.75–1.00).
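
As an illustration of the one-way model, the sketch below computes ICC(1,1) as defined by Shrout and Fleiss (1979) from an n × 2 array of paired scores (four weeks before vs. day of the course); it is a simplified reconstruction, not the STATA procedure actually used, and omits the confidence intervals reported in the results.

import numpy as np

def icc_oneway(scores: np.ndarray) -> float:
    # scores: shape (n_subjects, k_measurements); here k = 2 (test and retest).
    # One-way random-effects model: ICC(1,1) = (MSB - MSW) / (MSB + (k - 1) * MSW)
    n, k = scores.shape
    grand_mean = scores.mean()
    subject_means = scores.mean(axis=1)
    ss_between = k * ((subject_means - grand_mean) ** 2).sum()
    ss_within = ((scores - subject_means[:, None]) ** 2).sum()
    ms_between = ss_between / (n - 1)
    ms_within = ss_within / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Interpretation bands (Cicchetti & Sparrow, 1981): <0.40 poor, 0.40-0.60 fair,
# 0.60-0.75 good, 0.75-1.00 excellent.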

Structural validity refers to the degree to which the scores of a questionnaire are an adequate reflection of the dimensionality of the construct to be measured. Factor analysis using principal component analysis was employed in order to explore the underlying components in the survey. Using an eigenvalue criterion of >1.0, three factors were retained, and for visualization purposes, factor loadings below 0.3 were not presented. The data used for the factor analysis were from the second baseline (on the day of the course), as the number of participants was larger. Even though factor analysis is robust to assumption violations in large samples (Flora et al., 2012), we checked the model assumptions graphically.
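
The retention rule can be sketched as follows in Python; this is a generic correlation-matrix PCA with the eigenvalue > 1.0 criterion and unrotated loadings, offered as an approximation of the approach described rather than a reproduction of the authors’ analysis.

import numpy as np
import pandas as pd

def pca_eigen_retention(items: pd.DataFrame, cutoff: float = 1.0):
    # Principal component analysis of the item correlation matrix.
    # Components with eigenvalue > cutoff are retained; loadings are the
    # eigenvectors scaled by the square root of their eigenvalues.
    corr = items.dropna().corr().to_numpy()
    eigenvalues, eigenvectors = np.linalg.eigh(corr)
    order = np.argsort(eigenvalues)[::-1]        # sort components, largest eigenvalue first
    eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]
    keep = eigenvalues > cutoff
    loadings = pd.DataFrame(eigenvectors[:, keep] * np.sqrt(eigenvalues[keep]), index=items.columns)
    return eigenvalues, loadings                 # loadings below 0.3 could be masked for display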

The ICC was calculated for each question and for each of the factors found in the factor analysis. The ICC calculations for the factors were also subdivided by relevant demographic variables, such as profession and work experience within stroke care and rehabilitation. There was only a small amount of missing data; therefore, the data were not imputed. The analyses were conducted in STATA Version 16.0 for Windows.
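
The subgroup analyses can be pictured as grouping the paired scores by a demographic variable and applying the ICC function within each group; a hedged sketch (reusing icc_oneway from above, with hypothetical column names) might look like this.

import pandas as pd

def icc_by_group(df: pd.DataFrame, group_col: str, t1_col: str, t2_col: str) -> pd.Series:
    # ICC(1,1) computed separately within each level of a grouping variable,
    # e.g., profession or years of experience; relies on icc_oneway defined above.
    return df.groupby(group_col).apply(
        lambda g: icc_oneway(g[[t1_col, t2_col]].dropna().to_numpy())
    )

# Hypothetical usage: icc_by_group(df, "profession", "factor1_t1", "factor1_t2")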

Results

Respondent characteristics: descriptive demographics

A total of 270 health professionals responded to the HPAQ at one or both of the two time points. As can be seen in Table 2, the demographics of the respondents at the two time points are more or less the same. There is a majority of women (90–91%), of physio- and occupational therapists (40%), and of respondents working in a rehabilitation setting. Age is almost equally distributed across age groups, as is work experience, although the largest group of respondents has only 0–5 years of work experience within stroke care and rehabilitation (45–48%). Most of the respondents have on average 1–10 contacts with people with aphasia per week (66–70%).

Table 2. Respondent characteristics at four weeks before and on the morning of the day of their scheduled course

Floor and ceiling effects

The floor and ceiling effects for each question at both times of administration can be seen in Table 3. No floor effects were found, but ceiling effects appeared for some of the questions. Question 9 (As health professional, I have a responsibility to make an extra effort when communicating with people with aphasia) was the only question to have a ceiling effect in both questionnaires, and the only question to have a ceiling effect in the questionnaire given on the morning of the course day. Questions 13, 15 and 16 also have a minor ceiling effect, but this effect is only seen in the first questionnaire and is not as pronounced as for question 9.

Table 3. Floor and ceiling effect in the questions given by prevalence of respondents, who achieved the lowest or highest possible score in each questionnaire

Internal consistency

The internal consistency of the overall score was tested with Cronbach’s alpha. The first baseline survey revealed a Cronbach’s alpha of 0.91, and alpha was not below 0.90 when any of the variables were dropped from the model. The second baseline survey revealed similar results (Cronbach’s alpha was 0.91, and alpha was not below 0.89 when any of the variables were dropped from the model).
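
The “alpha if item deleted” check reported here can be reproduced in outline by dropping each item in turn and recomputing alpha; the sketch below reuses the hypothetical cronbach_alpha function from the Data analysis section and is illustrative only.

import pandas as pd

def alpha_if_item_deleted(items: pd.DataFrame) -> pd.Series:
    # Cronbach's alpha recomputed with each item dropped in turn,
    # using the cronbach_alpha sketch defined earlier.
    return pd.Series({col: cronbach_alpha(items.drop(columns=col)) for col in items.columns})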

Reliability

Overall, the ICC for each of the questions is acceptable (see Table 4). Three of the questions (9, 10 and 13) have only fair reliability, whereas the rest have good or excellent reliability according to the interpretation of Cicchetti and Sparrow (1981). The lowest ICCs are found for question 10 (0.48; 95% CI: 0.36–0.59) (I experience the situation as frustrating if communication with people with aphasia is unsuccessful) and question 13 (0.48; 95% CI: 0.36–0.60) (When planning my work, I always take into consideration the communicative problems of people with aphasia), meaning both questions have poor to fair reliability. The highest reliability (0.80; 95% CI: 0.75–0.85) was found for question 2 (How much knowledge do you have about how to communicate best with people with aphasia?).

Table 4. ICC comparing the HPAQ responses four weeks before the course with responses on the morning of the day of the course for each question separately; health professionals at two units of the Hospital of Southwest Jutland and four surrounding municipalities, Denmark, 2019

Structural validity

The factor analysis yielded three factors, see Table 5. The first factor (eigenvalue = 7.37) is connected to nearly all items and may be interpreted as an overall knowledge and skill factor measured by the instrument as a whole. Only question 10, which probes an emotional reaction (frustration when communication is unsuccessful), is not represented by this factor.

Table 5. Factor loadings on the 16 questions of the HPAQ

The second factor (eigenvalue = 1.88) summarizes all scales connected to the work surroundings, colleagues, and management, and appears to capture knowledge-related aspects of the work environment. The last factor, with the smallest eigenvalue (eigenvalue = 1.29), summarizes an expectation of, or readiness for, making an extra effort among HP working with patients with aphasia. This extra effort factor is positively associated with questions 9, 10, and 13, but negatively associated with question 14, in which respondents indicate their satisfaction with the availability of supportive materials in their workplace.

As the HPAQ was intended for use with different groups of health professionals, further analyses were carried out to examine the test–retest reliability of the HPAQ and the robustness of its underlying factors in relation to participants’ profession and their experience with stroke care or neurorehabilitation in terms of the number of years they had worked within the field. In Table 6, the ICC for each of the factors is presented together with the ICC subdivided by profession and years of work experience.

Table 6. Test–retest reliability of the overall HPAQ and each of the three underlying factors split by profession and years of experience within stroke care/rehabilitation

Using the criteria from Cicchetti and Sparrow (1981), the overall ICCs for the variables connected to the knowledge and skill factor and the environmental factor are excellent; for the variables contributing to the third factor, which was related to expecting extra effort, the ICC is good. Regarding profession, nurses have the highest ICC for all three constructs (Knowledge and skill 0.77; Environmental 0.74; Expecting extra effort 0.66), whereas physio- and occupational therapists have the lowest (Knowledge and skill 0.59; Environmental 0.65; Expecting extra effort 0.57). The respondents with 6 or more years of work experience within stroke care and rehabilitation have excellent or good ICCs for all three constructs (Knowledge and skill 0.82; Environmental 0.74; Expecting extra effort 0.64). However, for respondents with less work experience (0–5 years), the overall ICC is only fair.

Discussion

When implementing CPT in healthcare practice settings as well as in large-scale studies, self-report questionnaires seem a feasible choice of outcome measure compared to other options, such as using rating scales to evaluate videotaped interaction (Kagan et al., 2004). Only a few existing surveys have been psychometrically evaluated, i.e., the CAMS2 (Kagan et al., 2017) and the AASK survey (Power et al., 2020). These tools have good or excellent reliability, but they either focus on communicative accessibility or have been evaluated with students rather than trained health professionals working in a clinical setting. The HPAQ was developed in order to provide a relevant outcome measure for CPT implementation and studies carried out in clinical settings. It consists of 16 questions, which have been evaluated by an expert panel as probing important outcomes of CPT in healthcare settings. In this study, the psychometric qualities of the HPAQ were investigated, including the possibility that some questions might need to be discarded from the HPAQ to obtain a reliable measurement tool.

The overall ICC for the HPAQ was found to be sound, and for 13 of the 16 questions the ICC was interpreted as good or excellent (Cicchetti & Sparrow, 1981). The weakest ICCs were found for question 10 and question 13, but reliability was still within an acceptable range. Unlike question 13, question 10 did not contribute to the main construct measured by the HPAQ as revealed by the factor analysis. Hence, it was considered whether to eliminate question 10 from the final version of the questionnaire. This is discussed in further detail below.

The main construct revealed by the factor analysis was related to a general knowledge and skill factor in relation to communicating with patients with aphasia. All questions except for question 10 were associated with the knowledge and skill factor. Questions 1–7, which were intended to probe knowledge and skills, had the highest loadings, but question 8, about confidence in communicating with patients with aphasia, and question 12, about the use of strategies in one’s practice, also showed high loadings. It was unexpected that questions 14–16, which address environmental factors, were associated with the knowledge and skill factor, albeit with lower loadings. This may suggest that awareness of the possibility of enlisting supportive materials or colleagues to assist in communicating with patients with aphasia is part of the knowledge conveyed in CPT.

The main knowledge and skill factor showed good reliability when analyzed in relation to profession: the ICC for this main construct was excellent for nurses and good for nursing assistants, but it was only fair for physio- or occupational therapists. We have no obvious explanation for this. Representatives of these professions participated in the cognitive interviews carried out when developing the HPAQ and did not demonstrate any difficulties in understanding the questions, nor did they comment on a lack of relevance in their wording.

A high test–retest reliability was also found for the main factor for all participants with six or more years of experience in the practice field (range 0.74–0.82), whereas it was only fair (0.49) for participants with 0–5 years of experience. This suggests that the HPAQ is especially suited for evaluating the outcome of CPT for health professionals who have some experience in the field of neurorehabilitation, whereas its reliability is more moderate with recently educated health professionals. The HPAQ questions require respondents to report on their own practice, attitudes, emotions, and work environment in relation to communicating with patients with aphasia. A possible explanation for the lower reliability obtained with recently educated health professionals might be that this target group may not have had enough experience with patients with aphasia to develop a consistent sense of their own practice with this patient group. It may be that the AASK survey (Power et al., 2020) is a more appropriate outcome measure for recently educated health professionals, since the majority of the AASK questions probe respondents’ ability to reproduce knowledge about aphasia and communication strategies and place less emphasis on self-reported practice in communicating with people with aphasia. The AASK survey was not designed specifically for students as a target group, but was intended to align with key components of the SCA™ CPT program. However, in their study of the test–retest reliability of the AASK, Power et al. (2020) utilised a mixed group of students from allied health professions and showed very strong test–retest reliability for the measure used with this group.

Besides the overall knowledge and skill factor, the factor analysis revealed an environmental factor and a weaker factor associated with expecting or being prepared for extra effort. The environmental factor was especially represented in questions 15 and 16, which probe respondents’ perception that colleagues and management prioritize the ability to communicate with patients with aphasia. Question 14, which concerned the availability of supportive materials, was also positively associated with the environmental factor. The third factor, expecting or being prepared for extra effort, was especially evident in question 9, As health professional, I have a responsibility to make an extra effort when communicating with people with aphasia. The “extra effort” factor was also associated with feeling frustrated when communication failed (question 10) and with considering the communicative problems of patients with aphasia when planning one’s work (question 13). One question showed a negative loading on this factor: question 14, At my workplace there are materials readily available for me to support communication with people with aphasia. The negative loading may indicate that HP experience having to make an extra effort if supportive materials are not readily available, or that HP who are prepared to make an extra effort when communicating with patients with aphasia tend to be less satisfied with the availability of supportive materials in their work environment.

Overall, the analyses did not suggest that removing any specific question would significantly enhance the measurement qualities of the HPAQ as a whole. However, question 10, which probes how the participant responds emotionally to communication failure (I experience the situation as frustrating if communication with people with aphasia is unsuccessful), was considered for possible exclusion from the HPAQ for two reasons. Firstly, the item had the lowest test–retest reliability of the 16 questions. Low test–retest reliability suggests that responses are less stable over time, and it is possible that responses about emotional reactions are more variable or more sensitive to recent experiences than responses to questions concerning self-reported knowledge, skills, or practice. In favour of retaining question 10 was the fact that an expert panel of hospital-based speech-language therapists had found the question important to include.

Another issue with question 10 was that it was the only item that did not contribute to the main construct found in the factor analysis, i.e., the general knowledge and skill factor measured with the HPAQ. This suggests that health professionals’ reported degree of frustration with failed communication may not be associated with their knowledge and skill level in communication partner training. Possibly, respondents’ frustration ratings may be influenced by the nature and severity of the specific situations of communicative breakdown recalled when answering the question. By retaining question 10, the relationship between frustration with communication failure and knowledge and skill level might be explored further in future research. Also in favour of retaining question 10 was the fact that it did contribute to the other two factors measured by the HPAQ. Since the overall test–retest reliability of the HPAQ as a whole was quite acceptable, excluding question 10 was not deemed a prerequisite for obtaining an acceptable measurement tool. It was also considered whether question 9 (As health professional, I have a responsibility to make an extra effort when communicating with people with aphasia) should be excluded from the HPAQ because of its ceiling effect. However, as the internal consistency given by Cronbach’s alpha was good, the question was kept in the HPAQ.

Study limitations

Although this study has explored some of the psychometric properties of the HPAQ, other important properties, e.g., the responsiveness of the tool to change after training, need to be explored in future research.

One limitation is that the study evaluated test–retest reliability of the HPAQ based on data derived from a paper and pen administration compared to data from an electronic administration. The electronic VA scale varied in length according to the size of the monitor used, but this was compensated for by having the slider display a number between 0 and 100 according to where the participant placed it. One might speculate that this difference could lead to systematic bias (e.g., always placing the slider at 10, 20, 30, etc.), either weakening or strengthening the correlation between the two data sets. However, in pilot testing of the electronic version with four HP we found no indication of such a bias.

A second limitation relates to the use of self-report raised in the introduction. Self-report measures may not fully reflect the actual situation. Overestimation of knowledge has been ascribed in the health literature to social acceptability (e.g., Liaw et al., 2012). However, studies of conversation partners of people with aphasia have also shown that they sometimes rate themselves lower after a CPT intervention due to greater knowledge or increased awareness of their own communicative patterns (Saldert et al., 2018).

Another limitation is that the exact number of days between filling out the two questionnaires varied from the intended four weeks, because some participants were rescheduled for a later course due to illness or other issues. While this is a potential threat to obtaining good reliability, it did not seem to be a problem in terms of the actual results. The unequal number of participants in the professional groups is another weakness, since it is harder to establish good reliability with a smaller group than with a larger one. However, the highest reliability was found for the nurses, who had the smallest number of participants, so in terms of the results, the unequal number of participants did not contribute to lower reliability of the HPAQ.

Conclusion

The HPAQ is a new 16-item self-report questionnaire intended for assessing the outcome of CPT for HP working with patients with aphasia. The current study found the test–retest reliability of the overall tool to be excellent or good for HP with six or more years of experience and for nursing staff as a group. For respondents with fewer years of experience and for occupational therapists and physiotherapists, test–retest reliability was fair. The study also revealed underlying factors, including a knowledge and skill factor measured by all questions except for one, which probed frustration with failed communication. The HPAQ accordingly appears well suited as an outcome measure in CPT studies carried out in practice settings or in implementation studies.

Supplemental material

Appendix A - Questionnaire


Acknowledgments

This study was funded by the Danish foundation TrygFonden (grant no. 125384). We gratefully acknowledge all health professionals who participated in the study. Particular thanks to Iben Christensen, Anne Mølgaard Olsen, and the team from project KomTil for their involvement in recruitment, data collection, and data organization. The final questionnaire (in Danish and English) can be freely used; it is available in Appendix A and can also be emailed on request to the authors.

Supplementary material

Supplemental data for this article can be accessed here.

Disclosure statement

The authors report no potential conflicts of interest.

Additional information

Funding

This work was supported by the TrygFonden [125384].

References