
Development and Validation of Four Social Scales for the UX Evaluation of Interactive Products

Ehsan Mortazavi, Philippe Doyon-Poulin, Daniel Imbeau & Jean-Marc Robert
Pages 6608-6621 | Received 18 May 2023, Accepted 07 Sep 2023, Published online: 19 Sep 2023

Abstract

The social dimension of interactive products covers all aspects of our relationships with others that are impacted by owning and using such products. Although social features are making their way into a growing number of interactive products, there is no evaluation tool that captures the social dimension of the user experience (UX). This study addressed this shortcoming by developing and validating new social scales based on the UEQ+ framework. We developed four social scales to encompass various aspects within the social dimension. For scale development, 229 participants rated their UX with products having social aspects. Exploratory factor analysis allowed us to identify four sub-dimensions (Identification, Social interaction, Social stimulation, and Social acceptance), each evaluated with four items. For scale validation, 450 participants evaluated the UX of three product categories, using the new social scales, AttrakDiff, and the six UX dimensions of UEQ+. Results of a MANOVA showed that the social scales discriminated between the three categories (F (8, 560) = 20.68, p < 0.001, Pillai’s trace = 0.456). The four social scales developed in this study can be combined with other UX dimensions of the UEQ+ modular framework to provide a comprehensive overview of user interaction with products.

1. Introduction

User experience (UX) is a multidimensional construct that encompasses all aspects of user interaction with products, services, and systemsFootnote1 (Robert & Lesage, Citation2011). It includes pragmatic dimensions, associated with a product’s usability and functionality, as well as hedonic dimensions, associated with users’ psychological well-being (e.g., pleasure and stimulation), which together give a comprehensive picture of user-product interaction (Hassenzahl, Citation2003). As technology advances, new environments and ways of interaction such as virtual reality are introduced, resulting in new UX dimensions that should be considered in UX design and evaluation (Mortazavi et al., Citation2021). Although people use more and more products that connect them to others, the social aspects of UX have not been adequately investigated in UX studies (Hinderks et al., Citation2019; Mortazavi et al., Citation2021). Yet, they have major impacts at a personal level for communication and entertainment purposes, as well as at a business level to improve technology adoption and productivity (Li, Citation2015). An evident example of such products is social media platforms, which are used by 4.7 billion people (Petrosyan, Citation2023). Social media serves purposes beyond basic individual communication, as businesses design marketing strategies that leverage the potential of customers on these platforms to promote and share information about the launch of new products (Zarnadze, Citation2020). Moreover, social features can contribute to improving people’s wellbeing and preventing isolation. A wearable social interaction aid for children with autism is a prominent example of assistive technologies that improve social interaction (Washington et al., Citation2016). Through wearable glasses, children with autism receive real-time social cues during their interactions with others, cues that would otherwise have been challenging for them to discern. This assistance helps them sustain engagement in social interactions with others (Washington et al., Citation2016). Similarly, a social glasses device and a tactile wristband were used to replicate eye contact during conversations between blind and sighted people, which can help blind people by giving them more visual information and enhancing their communication quality (Qiu et al., Citation2020). Even products that were not primarily built for social activities now incorporate them into their core features. For instance, under the influence of social media, social sharing and social competition features are added to fitness trackers to encourage users to exercise more and share their achievements with their community (Zhu et al., Citation2017). Communicating with others, expressing oneself through the possession or use of a product, collaborating with others on a task, and creating an experience with others (i.e., co-experience) are all manifestations of social aspects.

The impact of social aspects on UX has been studied from two perspectives. The first characterizes them as an element of the context of use. Arhippainen and Tähti (Arhippainen & Tähti, Citation2003) identified social and cultural factors as part of the context in user-product interaction that influences UX. They mentioned time pressure, pressure to succeed or fail, and explicit and implicit requirements as examples of social factors (Arhippainen & Tähti, Citation2003). Lallemand and Koenig (Lallemand & Koenig, Citation2020) went a step further and developed the UX context scale (UXCS), a tool that can be used in conjunction with UX evaluation tools to portray the contexts within which users interact with a product. Other people’s presence, interactions with them, and how users feel in that environment are items measured in the UXCS under the social context. However, the UXCS does not evaluate the social aspects of the UX with a product.

The second perspective considers social aspects as a UX dimension that should be measured on its own. To ensure a common understanding, the terms used in this paper are classified as follows: at the highest level, a “dimension” represents a significant factor explaining the UX of a product (Robert, Citation2014), “sub-dimensions” are the constituent elements of each dimension, and “items” are the constituent elements of a sub-dimension. Identification is an example of a UX sub-dimension of the social dimension present in the AttrakDiff questionnaire (Hassenzahl et al., Citation2003). Identification is the degree to which a user relates to a product and expresses oneself through its possession (Hassenzahl, Citation2003). It is described as either self-focused or relationship-focused. The former is concerned with the users’ self-perception as a result of possessing or using a product, whereas the latter reflects their social identity (Yoon et al., Citation2020). Identification has roots in the influence and popularity needs of individuals, i.e., the need to be liked, respected, and regarded as influential to others (Hassenzahl et al., Citation2010), which is in line with Jordan’s notion of socio-pleasure (Jordan, Citation2000). In this study, we followed the second perspective, which considers social aspects as an integral part of UX evaluation tools, treating them as a UX dimension in need of more investigation. Knowing how socially attractive a product is, and how its social features influence the overall UX, are of growing importance and deserve more attention from the research community. As presented in the following section, despite the wealth of available UX evaluation tools, there remains a gap: no single tool comprehensively measures the different aspects of the social dimension without being bound to a specific field of use. This study aims to bridge this gap by developing and validating social scales that can be used in conjunction with other UX evaluation tools.

2. Literature review

Reviewing the UX evaluation tools shows that the social dimension is evaluated under different terms. Table 1 shows subjective UX evaluation tools and the social sub-dimensions they cover. General evaluation tools such as AttrakDiff are applicable to a variety of products, while others, such as the game experience questionnaire (GEQ), are developed for a particular field of use. However, neither type covers all sub-dimensions of the social dimension.

Table 1. UX evaluation tools with social sub-dimensions.

In addition to AttrakDiff, meCUE (Minge et al., Citation2017) is another generally applicable UX evaluation tool that includes a social sub-dimension. It is analogous to AttrakDiff in that it focuses on self-expression and how a product can communicate identity to others, under the UX sub-dimension of “status.” A more comprehensive view of the social dimension of UX is presented by Park et al. (Park et al., Citation2013). Although they did not develop a UX evaluation tool, they defined sociability with three sub-dimensions—social emotion, social value, and friendship—for the UX of mobile phones and services. Sociability was defined as the “degree to which a product satisfies the user’s desire of being sociable.” The sub-dimensions respectively evaluate the product’s ability to enable feeling and sharing emotions socially; its ability to support user values such as social issues and problems; and its ability to enable making relationships with other people. In another study, Ryu and Kim (Ryu & Kim, Citation2019) evaluated sociability with only one item in their questionnaire (without covering the three sub-dimensions) to find out whether or not a medical information system satisfied the user’s social needs. One can notice the limited attention given to different social aspects in general UX evaluation tools. AttrakDiff and meCUE only evaluate identification, and the User Experience Questionnaire (UEQ) (Laugwitz et al., Citation2008)—the most commonly used subjective evaluation tool—does not cover any social aspect (Mortazavi et al., Citation2021). The same goes for UEQ+ (Schrepp & Thomaschewski, Citation2019), a modular framework developed based on UEQ that includes 20 UX dimensions that can be used for the evaluation of different products, yet none of them relates to social aspects.

Considering the impact of culture on UX, the Chinese UX questionnaire was developed based on AttrakDiff (Liu et al., Citation2013). Its authors proposed “conformity” as a new UX dimension that addresses the Eastern cultural emphasis on others’ opinions. Conformity deals with the prevalence of a product in the market and how widely it is used by people. It is to some extent close to social acceptance, another sub-dimension that has rarely been used in UX evaluation (Weiss & Willkomm, Citation2013). Social acceptance is “how a user feels when interacting with a system in relation to the social situation, e.g., how uncomfortable or embarrassed they feel with respect to other people or one’s own norm” (Weiss & Willkomm, Citation2013). The lack of social acceptability negatively affects not just the UX of a product, but also the user’s self and social image, with the risk of stigmatization and misjudgment by others (Koelle et al., Citation2020). Forming an intimate relationship with others when watching smart TVs (Jang & Yi, Citation2019), sharing information via a mobile app (Sun & May, Citation2013), collaborating with other students to co-write a poem (Kantosalo & Riihiaho, Citation2019), and using an interactive public display system to improve social interaction (Kang et al., Citation2022) are all examples of where different social sub-dimensions of UX need to be addressed.

Social interaction in the form of communicating with others, collaborating with teammates, or expressing oneself through game characters has been widely used in the game industry. Multiplayer online games, gaming conventions, and game streaming services are all examples of social interaction media for players and spectators. As a result, UX evaluation tools developed specifically for games include a social dimension, such as the GEQ (IJsselsteijn et al., Citation2013) and the Game User Experience Satisfaction Scale (GUESS) (Phan et al., Citation2016). The GEQ features a separate questionnaire for social presence that covers players’ empathic responses, negative feelings towards others, and behavioral involvement in games (IJsselsteijn et al., Citation2013). Another field where the social dimension has gained attention is education. Interactions within and between learners and instructors, subjective norm, and self-image are social sub-dimensions evaluated in the FASER LX tool, which evaluates learners’ experience in an e-learning system (Safsouf et al., Citation2019). Other researchers used serious games for learning purposes and developed tools like EGameFlow (Fu et al., Citation2009) and the Model for the Evaluation of Educational Games (MEEGA+) (Petri et al., Citation2016) with social sub-dimensions. Reviewing the items of these tools highlights the case-specific nature of their development. Moreover, there is no suggestion of the possibility of a modular use of these tools, except for the GEQ social presence module. As a result, it is unclear whether these tools can be applied to other contexts.

Analyzing the UX evaluation tools showed that the social dimension of UX concerns three categories: product, user, and group (society). The product-related items investigate whether a product can provide social interactions. For example, in EGameFlow, the item “the game supports social interaction between players” highlights the product’s capability of enabling interaction. The user-related items evaluate what users can do with the social features of a product. For instance, the Smart TV UX questionnaire (Jang & Yi, Citation2019) uses the item “I can form an intimate relationship with others by using the smart TV” and the GEQ uses “I empathized with the other(s)” as actions that can be done by users with the social features of products. The group-related items investigate how users/owners of a product are perceived by other people and social norms. The item “I was not worried about other people’s judgement” of the flow sub-dimension of UX in the IVE questionnaire (Tcha-Tokey et al., Citation2016) and the item “Rarely used – widely used” of the Chinese UX questionnaire are examples of this category. Regardless of the naming in different tools, the social sub-dimensions listed in Table 1 include at least one of the three categories of items. However, none of the UX tools covers all the social sub-dimensions.

Clearly, the UX community needs a new tool to evaluate the social dimension of UX when interacting with a product, one that is not bound to a specific application field (e.g., games or education) and that ensures comprehensive coverage. A promising way of achieving this goal is to develop social scales and include them in the UEQ+ framework. The modularity of the UEQ+ enables it to be used in different contexts, allowing the evaluator the flexibility to tailor the tool and evaluate the set of UX dimensions most relevant to the study (Schrepp & Thomaschewski, Citation2019). Researchers can construct new UX dimensions and add them to the UEQ+ framework. For instance, response behavior, response quality, and comprehensibility are new UX dimensions developed for voice assistants (Klein et al., Citation2020). Similarly, haptic and acoustic dimensions were developed for the UX evaluation of household devices (Boos & Brau, Citation2017). In UEQ+, each dimension consists of four items measuring the dimension and a single item that determines the importance of that dimension for the UX evaluation of the product. Each item is evaluated on a 7-point Likert scale with two semantic differential anchor points. The consistency of UEQ+ simplifies the UX evaluation compared to using a combination of UX evaluation tools, each with different rating scales and possibly overlapping dimensions (Schrepp & Thomaschewski, Citation2019). It also provides an overall UX rating for products that can be used for comparisons. All these advantages make UEQ+ a suitable candidate for hosting a UX social dimension that can be used with other UX dimensions for different products.
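To make the scale format concrete, here is a minimal scoring sketch in Python. It assumes the common UEQ-style rescaling of 1–7 ratings to −3..+3 and an importance-weighted aggregate; the class and function names are illustrative, and this is not the official UEQ+ analysis sheet.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Scale:
    """One UEQ+-style dimension: four 7-point semantic-differential items
    plus a single importance rating (names are illustrative)."""
    name: str
    items: List[int]       # four item ratings, each 1..7
    importance: int        # importance rating, 1..7

    def mean_score(self) -> float:
        # Rescale 1..7 to -3..+3, as commonly done for UEQ-style scales
        return sum(i - 4 for i in self.items) / len(self.items)

def weighted_ux_score(scales: List[Scale]) -> float:
    """Importance-weighted overall score across the selected dimensions,
    a sketch of the kind of global rating UEQ+ supports."""
    total_weight = sum(s.importance for s in scales)
    return sum(s.mean_score() * s.importance for s in scales) / total_weight

if __name__ == "__main__":
    scales = [
        Scale("Social interaction", items=[6, 5, 6, 7], importance=6),
        Scale("Efficiency", items=[5, 5, 4, 6], importance=5),
    ]
    print(round(weighted_ux_score(scales), 2))  # 1.55
```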

In this study, we designed, developed, and validated social scales that can be used with the other UX dimensions of the UEQ+ framework. To this end, we followed the dimension development process used in Schrepp and Thomaschewski (Citation2019) for the other UX dimensions of the UEQ+ framework, focusing on the social dimension, and we present the validated questionnaire.

3. Study framework

This study followed three phases: identification, integration, and validation (Figure 1). The identification phase consisted of extracting social sub-dimensions and corresponding items from the literature and from the analysis of the UX evaluation tools, and of having UX experts select a sample of items for each social sub-dimension. These items were used in the integration phase to develop the social scales by performing Exploratory Factor Analysis (EFA) on the responses of 229 participants who rated their UX with interactive products having social characteristics. The validation phase included another survey study with 450 participants who rated their UX with specific products from three different product categories with various levels of social dimension. The results were used to calculate the reliability and validity of the social scales. This study was approved by Polytechnique Montreal’s Research Ethics Board (CER-2122-47-D) and participants read and signed an informed consent form before taking part in the study.

Figure 1. Three phases of the study.


This paper is organized into eight sections. Section 4 presents the identification phase, followed by Section 5, which includes the procedure, methodology, and results of the integration phase. Then, Section 6 presents the validation phase and its main results with the validated social scales. Section 7 puts our results into perspective by comparing them with those of other social scales and presents the limitations of our work. Section 8 concludes the paper with the main take-aways and some propositions for future research work.

3.1. Participants

Two types of participants took part in this study, UX experts in the identification phase and end users for the two rounds of data collection (integration and validation phases). Seven UX experts (two from the industry and five from academia) were brought together to work in an online workshop during the identification phase. They had from 5 to 25 years of work experience in the fields of human-computer interaction (HCI), UX, cognitive ergonomics, or digital accessibility.

End users who filled out questionnaires in the two rounds of data collection were from Canada and recruited through Amazon Mechanical Turk (AMT). The first survey included a sample of 229 participants who had experience in using an interactive product with social aspects. The second data collection captured users’ experience with specific products from three product categories: Social network, Online shopping, and Online banking. To that end, three separate questionnaires were prepared, and 150 participants responded to each category (total n = 450). Table 2 presents the demographic data of experts and end-user participants for each questionnaire in terms of age, gender, and level of education.

Table 2. Demographic data.

The 30–39 age group had the highest number of participants. For all questionnaires, participants were well balanced on gender. Overall, more than 70% of each group had a university-level education.

4. Phase 1 identification

In a previous study, we performed a systematic literature review and an analysis of the UX evaluation tools and their dimensions (Mortazavi et al., Citation2021). As a result, we found that the social dimension can be translated into three main sub-dimensions: identification, sociability, and social acceptance. Identification is the degree to which a user relates to a product and expresses oneself through its use or possession (Hassenzahl, Citation2003). Sociability is the degree to which a product enables communication with others to meet the user’s social needs (Park et al., Citation2013). Social acceptance is “how a user feels when interacting with a system in relation to the social situation, e.g., how uncomfortable or embarrassed they feel with respect to other people or one’s own norm” (Weiss & Willkomm, Citation2013).

An online workshop with seven UX experts took place in December 2021 on the collaborative whiteboard platform Miro. They reviewed the three social sub-dimensions and 24 items, discussed the possibility of merging them, and proposed 12 new items in a 90-minute session. Results of the workshop were reassessed by four of these experts and the number of items was reduced from 36 to 27. Similar-meaning items were removed, and some modifications were made to the wording of the semantic differential poles. Overall, the 27 items were grouped under three social sub-dimensions: identification, sociability, and social acceptance.

5. Phase 2 integration

5.1. Procedure

The first data collection with end users was done via an online questionnaire over a one-month period (March–April 2022). The questionnaire was designed on the QuestionPro website and data collection was done through AMT, with participants receiving a $2 compensation after having completed their response. The questionnaire was written in English and contained 34 questions. The first seven questions covered end users’ demographic data, frequency of using interactive products with social aspects, the name of their selected product, and a brief description of the main usage of the product. The remaining 27 questions were on social items, among which there were seven items on identification, eight on social acceptance, and 12 on sociability. The 27 items, rated on a 7-point semantic differential scale, were presented in five sections, each with a short introductory sentence, similar to Schrepp and Thomaschewski (Citation2019). Participants were asked to evaluate their experience with interactive products having social aspects. They were given examples of such products, like social networks, messengers, forums, online games, and online collaboration tools. However, they were given the freedom to choose any social product with which they would frequently interact. Those who rarely or never used any of the selected products did not qualify to fill out the questionnaire. The average response time was 4 minutes, and responses that took less than 2 minutes were removed. In addition to eliminating incomplete answers, red-herring questionsFootnote2 were used in the questionnaires to catch participants who seemed to have randomly answered the questions. In total, 148 participants (39%) were removed, including 22 (6%) who did not meet the qualifications to answer the questionnaire, 57 (15%) who left the questionnaire incomplete, and 69 (18%) who failed the red-herring questions. This left 229 responses for the analysis. Table 3 shows the variety of interactive products with social aspects that were evaluated by participants, including both software (e.g., websites or applications) and hardware (e.g., handheld devices or gaming consoles).
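This cleaning step can be expressed in a few lines of pandas; the sketch below assumes hypothetical column names (response_time_min, red_herring_answer, red_herring_expected), since the actual QuestionPro/AMT export will be structured differently.

```python
import pandas as pd

# Hypothetical export: one row per respondent, item answers in columns Q...
df = pd.read_csv("survey_responses.csv")
item_cols = [c for c in df.columns if c.startswith("Q")]

cleaned = df[
    (df["response_time_min"] >= 2)                              # too-fast responses
    & df[item_cols].notna().all(axis=1)                         # incomplete questionnaires
    & (df["red_herring_answer"] == df["red_herring_expected"])  # failed attention checks
]
print(f"kept {len(cleaned)} of {len(df)} responses")
```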

Table 3. Frequency of the evaluated products of the 1st data collection.

5.2. Methodology

Exploratory factor analysis (EFA) was used for refining and reducing the items of the first survey. Following the review by the seven UX experts, we assumed that the candidate items describe only the sub-dimension they relate to. With a modular approach in mind for the social scales, we conducted three separate EFAs on identification (7 items), social acceptance (8 items), and sociability (12 items) instead of a single EFA on the 27 items. The structural model was, however, tested in the validation phase. As the initial step of the EFA, we assessed the adequacy of the sample size and the strength of the correlations between items (Field, Citation2013). There are different suggestions for a suitable sample size for factor analysis. Some assert that the total number should be at least 300, while others suggest using a ratio of participants to items such as 10 to 1 or 5 to 1 (Pallant, Citation2020). We adopted the conservative approach of having at least 200 respondents (Hinkin, Citation1998). The suitability of the data for factor analysis was explored using Bartlett’s test and the Kaiser-Meyer-Olkin measure of sampling adequacy (KMO). Multicollinearity was tested by checking that the determinant of the correlation matrix of items was above 0.00001 and that no pair of items had a correlation coefficient greater than 0.9. There should be a reasonable correlation between items measuring the same construct. Therefore, by investigating the correlation matrix, we removed the items displaying several correlations below 0.3 with other items (Field, Citation2013).
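These adequacy checks are straightforward to reproduce; the Python sketch below assumes the item responses for one sub-dimension sit in a pandas DataFrame (respondents × items) and uses the factor_analyzer helpers as a stand-in for the SPSS procedures used in the paper.

```python
import numpy as np
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

def check_efa_suitability(items: pd.DataFrame) -> None:
    """Adequacy checks described above for one block of items
    (e.g., the 12 sociability items)."""
    corr = items.corr()

    chi2, p = calculate_bartlett_sphericity(items)
    _, kmo_overall = calculate_kmo(items)

    print(f"Bartlett chi2 = {chi2:.1f}, p = {p:.4f}  (want p < .01)")
    print(f"KMO = {kmo_overall:.2f}                  (want > .60)")
    print(f"determinant = {np.linalg.det(corr):.6f}  (want > .00001)")

    # Flag items whose correlations with most other items fall below .3
    low_counts = (corr.abs() < 0.3).sum()          # diagonal is 1.0, never counted
    weak = low_counts[low_counts > (len(corr) - 1) / 2]
    if not weak.empty:
        print("candidates for removal:", list(weak.index))
```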

We performed factor extraction and rotation as the main analysis of the EFA (Field, Citation2013). We applied principal axis factoring as the extraction method and used three criteria to determine how many factors to retain: eigenvalues above Kaiser’s criterion of 1, a clear break on the scree plot, and eigenvalues greater than the corresponding values for a randomly generated data matrix of the same size (parallel analysis) (Pallant, Citation2020). Factor analysis is an exploratory tool that assists researchers in making decisions based on different tests (Field, Citation2013). As a result, when the criteria differ, decisions should be based on the knowledge of the researchers in their respective fields. We applied the Varimax rotation technique and analyzed the rotated factor matrix for items with factor loadings above 0.3 (Field, Citation2013). Following the four-items-per-dimension format of the UEQ+ framework (Schrepp & Thomaschewski, Citation2019), we kept the four items with the highest factor loadings for each social sub-dimension. Lastly, the reliability of each scale was tested with Cronbach’s alpha, which refers to the internal consistency of the items of each factor. We used IBM SPSS 28 for the statistical analyses.
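Below is a sketch of the retention criteria and the reliability check, again assuming a DataFrame of item responses. The paper ran principal axis factoring in SPSS; the FactorAnalyzer call uses minres with Varimax rotation as a common stand-in, and the parallel analysis is a simple home-grown version.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

def parallel_analysis(items: pd.DataFrame, n_iter: int = 100, seed: int = 0) -> int:
    """Number of factors whose observed eigenvalue exceeds the mean
    eigenvalue of random data of the same size."""
    rng = np.random.default_rng(seed)
    n, k = items.shape
    observed = np.sort(np.linalg.eigvalsh(items.corr().to_numpy()))[::-1]
    random_ev = np.empty((n_iter, k))
    for i in range(n_iter):
        sim = rng.normal(size=(n, k))
        random_ev[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    return int((observed > random_ev.mean(axis=0)).sum())

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency of one scale (values above .70 are acceptable)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def rotated_loadings(items: pd.DataFrame, n_factors: int) -> pd.DataFrame:
    """Varimax-rotated loadings; minres extraction stands in for the
    principal axis factoring used in the paper."""
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="minres")
    fa.fit(items)
    return pd.DataFrame(fa.loadings_, index=items.columns).round(2)
```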

5.3. Results

The result of the online workshop with experts was a sample of 27 items organized under five introductory sentences, as shown in Table 4. These items formed the basis of the questionnaire submitted to the end-user participants. The sub-dimension column shows the sub-dimension to which the items used in each EFA belong. The sample size of 229 participants was adequate (i.e., n > 200) for performing the EFAs.

Table 4. List of items that were used in the first survey.

Investigating the correlation matrix of the items of each EFA resulted in excluding two items (Q3-1 and Q3-2) for social acceptance and one item (Q3-7) for sociability. For these items, more than half of the correlation coefficients with other items were below 0.3. After their removal, the correlation matrices were recalculated. Results showed determinant values above the recommended value of 0.00001 for all the new correlation matrices. The KMO values exceeded the recommended value of 0.6 (Pallant, Citation2020). Bartlett’s test of sphericity was statistically significant for all sub-dimensions (at the alpha level of 0.01). Overall, all measures confirmed the suitability of the sample data and the absence of multicollinearity (Table 5).

Table 5. Measures for determining the suitability of sample data for EFAs.

The scree plot, parallel analysis, and Kaiser’s criterion retained one factor for identification and one factor for social acceptance. This single factor explained 55.2% of the variance for identification and 54.3% for social acceptance. However, the three criteria did not agree on the number of factors for sociability: parallel analysis and the scree plot indicated one factor, while Kaiser’s criterion suggested retaining two. Considering the items that loaded on each factor, sociability was divided into two factors: social stimulation and social interaction. We defined social stimulation as the qualities of a product that foster social interaction, and social interaction as the ways through which users make contact with others. The two-factor solution explained a total of 48.4% of the variance, with social interaction accounting for 27.7% and social stimulation for 20.7%.

Table 6 shows the rotated factor matrices of the three EFAs with the corresponding factor loadings. The factor loadings represent the contribution each item makes to a factor (Field, Citation2013). The four items with the highest factor loadings were selected for each social sub-dimension. These items had factor loadings greater than 0.6, except for Q3-6. We tested content validity by investigating how theoretically relevant each item was to its factor, which resulted in keeping Q3-6. This item was kept despite its relatively low loading on both factors, so as to conform to the four-item format of the UEQ+ framework.

Table 6. Factor loading of items for the 3 EFAs.

Cronbach’s alpha exceeded the suggested cutoff of 0.7, confirming the reliability of all four social scales (identification, social acceptance, social interaction, and social stimulation), as shown in Table 7.

Table 7. Reliability statistics of the four social scales.

6. Phase 3 validation

The goal of this phase was three-fold: first, to investigate the relationship between the social sub-dimensions using confirmatory factor analysis (CFA) and to test the fit of these new data to the structural model we proposed; second, to validate the scales by comparing their scores with those of other common evaluation tools; and third, to evaluate the social aspects of three product categories with the new social scales.

6.1. Procedure

The data collection was done over one month (May 2022) on the same survey and data collection websites, and participants received a $2 compensation after having completed their response. This time, three separate questionnaires, all written in English, were created with specific software products to be evaluated from three different product categories. These products were Facebook, Instagram, and LinkedIn for the Social network product category; Amazon, Walmart, and eBay for the Online shopping product category; and CIBC, RBC, TD, and Scotiabank for the Online banking product category. We chose these three categories because they represented a range of socially engaging products, with social networks being highly social (e.g., messaging, community interactions), online shopping being moderately social (e.g., product postings, reviews, shared carts), and online banking being low on that continuum. We used software products during the validation phase to facilitate comparison and to avoid introducing a new variable, such as the type of product (i.e., physical vs. software), that could potentially impact the outcomes. Each questionnaire contained the AttrakDiff questionnaire, six UX dimensions of the UEQ+ framework, and our newly developed social scales. This was done to investigate the validity of our social scales and the extent to which they are consistent with existing UX evaluation tools. Following the format of the UEQ+ framework, participants rated the importance of the UX dimensions for the UX evaluation of each product category. Overall, participants answered a total of 89 questions. Due to the similar format, we maintained the same order for both the new scales and the UEQ+ scales for all participants. However, AttrakDiff was administered either before or after the other two questionnaires to mitigate any potential bias arising from respondent boredom and fatigue due to answering many questions. The average response time was 7 minutes for each questionnaire, and we removed responses that took less than 3 minutes. A data cleaning approach similar to that of the first survey was taken: 290 participants (39%) were removed, of which 49 (7%) did not meet the qualifications to answer the questionnaire, 90 (12%) left the questionnaire incomplete, and 151 (20%) failed the red-herring questions, resulting in a total of 450 responses. Table 8 shows the frequency of selection of the products evaluated.

Table 8. Frequency of selection of the products evaluated during the 2nd data collection (n = 150 for each product category).

6.2. Methodology

Prior to conducting the CFA for each product category, we used Mahalanobis distances to identify and remove multivariate outliers from the three samples collected in the second round of data collection (Field, Citation2013). Squared Mahalanobis distances follow a chi-square distribution, and we used the recommended threshold value of p < 0.001 for removing outliers (Tabachnick & Fidell, Citation2019). As a result, 146, 142, and 144 valid responses were used for the Social network, Online shopping, and Online banking product categories, respectively. We ran the CFA for two models. The first model included the four factors of identification, social stimulation, social interaction, and social acceptance. Considering that the two factors of social stimulation and social interaction were measures of the same underlying construct (i.e., sociability), large correlations between them were expected, which is why we used a second-order CFA for sociability (Wang & Wang, Citation2020). As a result, we ran the CFA for the second model on three latent variables: identification, sociability, and social acceptance. We also used modification indices to improve the fit of the model. CFA uses different goodness-of-fit indices to assess the quality of the model fit to the data (Hinkin, Citation1998). In this study, we reported the chi-square statistical significance test, CMIN/DF, the Comparative Fit Index (CFI), the Tucker-Lewis Index (TLI), and the Root Mean Square Error of Approximation (RMSEA). Composite Reliability (CR) was calculated, with values greater than 0.7 indicating acceptable reliability. Convergent validity, which shows to what extent the items of a factor are interrelated, was tested by calculating the Average Variance Extracted (AVE), with values above 0.5 deemed acceptable. Discriminant validity shows the extent to which latent factors are different and was tested by comparing the AVE and Maximum Shared Variance (MSV) values (Jang & Yi, Citation2019).
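The CFA itself was fitted in SPSS Amos; the Python sketch below only illustrates the outlier screening and the reliability/validity quantities named above, with illustrative function names.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2

def drop_multivariate_outliers(data: pd.DataFrame, alpha: float = 0.001) -> pd.DataFrame:
    """Remove rows whose squared Mahalanobis distance exceeds the chi-square
    cutoff at p < .001, with degrees of freedom equal to the number of variables."""
    x = data.to_numpy(dtype=float)
    centered = x - x.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(x, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", centered, inv_cov, centered)  # squared distances
    cutoff = chi2.ppf(1 - alpha, df=x.shape[1])
    return data[d2 <= cutoff]

def composite_reliability(loadings: np.ndarray) -> float:
    """CR from the standardized loadings of one factor (want > .70)."""
    lam = np.asarray(loadings)
    return float(lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum()))

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE from the standardized loadings of one factor (want > .50)."""
    lam = np.asarray(loadings)
    return float((lam ** 2).mean())

def maximum_shared_variance(corr_with_other_factors: np.ndarray) -> float:
    """MSV: the largest squared correlation with the other factors;
    discriminant validity expects AVE > MSV."""
    return float((np.asarray(corr_with_other_factors) ** 2).max())
```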

To further investigate the validity of our scales, we analyzed the correlations of the four social scales with the scales of AttrakDiff and the six scales of the UEQ+ framework to check for redundancy, using a threshold value of 0.8. As stated earlier, in addition to rating the different items on a semantic differential scale, participants rated the importance of each dimension in the UX evaluation of the three product categories. We studied the ratings of the social sub-dimensions to see whether they met our expectation of descending importance ratings across the Social network, Online shopping, and Online banking product categories.

The mean differences in ratings on the social scales between product categories were then examined using Multivariate Analysis of Variance (MANOVA). In this study, the product category was treated as the grouping factor and the four social scales as the dependent variables. The data were checked to confirm that they met the MANOVA assumptions. The minimum sample size for conducting a MANOVA is 20 responses per group (Hair et al., Citation2014). We tested the normality of the sample using the Shapiro-Wilk test, and multivariate normality was checked by calculating Mahalanobis distances (Pallant, Citation2020). We kept equal sample sizes across the groups (Hair et al., Citation2014). Linearity between each pair of dependent variables was checked using scatterplots. Finally, multicollinearity was tested by measuring the correlations among the dependent variables, where values above 0.9 indicate potential multicollinearity (Pallant, Citation2020). We used IBM SPSS 28 for the statistical analyses and IBM SPSS Amos 28 for the CFA.
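A sketch of these checks and of the omnibus test in Python is shown below; statsmodels reports Pillai's trace in its MANOVA output. Column names are hypothetical (one row per participant with the four scale means and the product category), and the published analysis was run in SPSS.

```python
import pandas as pd
from scipy.stats import shapiro
from statsmodels.multivariate.manova import MANOVA

scales = ["identification", "social_interaction", "social_stimulation", "social_acceptance"]

# Hypothetical layout: scale means per participant plus the product category.
df = pd.read_csv("validation_scores.csv")

# Shapiro-Wilk normality check per scale within each category
for category, group in df.groupby("category"):
    for scale in scales:
        w, p = shapiro(group[scale])
        print(f"{category:15s} {scale:20s} W = {w:.3f}, p = {p:.3f}")

# Multicollinearity check: no pairwise correlation among dependent variables above .9
print(df[scales].corr().round(2))

# One-way MANOVA; mv_test() reports Pillai's trace among other statistics
manova = MANOVA.from_formula(
    "identification + social_interaction + social_stimulation + social_acceptance"
    " ~ category",
    data=df,
)
print(manova.mv_test())
```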

6.3. Results

6.3.1. CFA

The results of the three CFAs of the first model showed a high correlation (0.92) between social stimulation and social interaction for the social network product category (Figure 2, left). According to the EFA results, these two dimensions could be represented by a higher-level factor of sociability. Thus, using sociability as a second-order factor, we kept both social interaction and social stimulation in the second model (Figure 2, right). The three latent variables had correlation coefficients above 0.3 but below 0.9, indicating that they were reasonably related, but not multicollinear. Although below the threshold, a correlation coefficient of 0.77 was observed between sociability and social acceptance.

Figure 2. CFA for the first model with four factors (left) and the second model with three factors (right) for social network category.


Table 9 shows the goodness-of-fit measures calculated for the three CFAs on the second model after the modifications. The thresholds shown in the table are from Hu and Bentler (Citation1999) and Wang and Wang (Citation2020). The results confirm that the model fits the data for the three product categories.

Table 9. Goodness of fit measures for the 3 CFAs of the second model.

Table 10 shows factor loadings greater than 0.5 for all items of each factor, which is better than the results of the EFAs (Table 6). The average variance extracted (AVE) of all factors exceeded the cutoff value of 0.5, confirming the convergent validity of the second model (Hair et al., Citation2014). The discriminant validity of the three factors of identification, social acceptance, and sociability was confirmed because the maximum shared variance (MSV) was lower than the AVE for each factor, and the square root of the AVE (in bold in Table 11) was greater than the correlations between the factors (Gefen & Straub, Citation2005).

Table 10. Factor loadings of the CFAs for the 3 product categories.

Table 11. Model validity measures of the CFAs for the 3 product categories (AVE squared root in bold).

Table 12 shows the final four social scales with the corresponding items used for the UX evaluation of the three product categories.

Table 12. Developed social scales and their items.

6.3.2. Comparisons

Table 13 shows the correlations between the four social scales developed in this study and the four scales of the AttrakDiff questionnaire for the three product categories. The highest correlations were observed with the identification and attractiveness scales, particularly for the social network product category. All statistically significant correlations are marked with a star in the table.

Table 13. Correlations between the social scales and those of AttrakDiff (* significant with p < 0.05).

Table 14 shows the correlations between the four social scales developed in this study and the six scales of UEQ+. The four social scales were not redundant, given that all the correlation coefficients were below 0.8. The highest correlations were observed with the Value and Attractiveness scales of the UEQ+. However, the social network product category displayed strong correlations with the pragmatic dimensions of Usefulness, Efficiency, and Intuitive usage.

Table 14. Correlations between the social scales and the UEQ+ scales.

Table 15 shows the average importance rating of each social scale. The highest ratings for all social scales went to the social network product category and the lowest to the online banking product category, as expected.

Table 15. Average importance rating of social scales for the three product categories (from 1: completely irrelevant to 7: very important).

6.3.3. MANOVA

Before conducting the MANOVA to test the differences in social scale ratings between product categories, participants who responded to more than one product category were removed, yielding three independent samples of 106, 98, and 96 participants for the social network, online shopping, and online banking product categories, respectively. Following the Mahalanobis distance calculations, multivariate outliers were eliminated, and further analyses were performed on three independent samples of 95 participants each.

The results of the Shapiro-Wilk test of normality were significant (p < 0.001), showing that the data were not normally distributed. However, the sample size was greater than 30 for each group (i.e., product category), which makes the analysis robust to violations of normality or equality of variance (Pallant, Citation2020). The general pattern of the scatterplots showed a linear relationship for each pair of dependent variables (i.e., the four social scales). Table 16 shows that there was no multicollinearity between the dependent variables, as no pair had a correlation coefficient larger than 0.9; moreover, being greater than 0.2, they were all at least moderately correlated. Box’s test of equality of covariance matrices was significant (p < 0.001), meaning that the observed covariance matrices of the dependent variables were not equal across categories. In sum, assumption testing showed no violations for conducting the MANOVA except for the normality of the data and the homogeneity of the variance-covariance matrices. As our sample sizes exceeded 30, the violation of normality was not a concern, and the results of Pillai’s trace were used for the multivariate test because it is more robust to violations of assumptions (Tabachnick & Fidell, Citation2019).

Table 16. Correlation between the dependent variables.

The results of the one-way MANOVA showed a significant difference between the three product categories on the combined social scales, F (8, 560) = 20.68, p < 0.001, Pillai’s trace = 0.456, partial eta squared = 0.228. Further investigation of each dependent variable showed statistical significance for social interaction and social stimulation using a Bonferroni-adjusted alpha level of 0.0125 (highlighted in Table 17). The Bonferroni adjustment is used to reduce the risk of a Type 1 error when several separate analyses are performed. Table 17 shows the multiple comparisons between product categories for each social scale.
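For clarity, the adjusted threshold follows directly from dividing the conventional alpha level by the number of dependent variables (the four social scales):

$$\alpha_{\text{adjusted}} = \frac{0.05}{4} = 0.0125$$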

Table 17. Significance of mean differences of dependent variables for three product categories.

7. Discussion

This study covered the design, development, and validation of social scales for the UX evaluation of interactive products. The original UEQ+ scales were developed using EFA to identify the four most representative items per dimension. Scale development in this study went further in that we also performed CFA on three different samples to provide more validity measures for the social scales. Results showed good internal consistency, convergent validity, and discriminant validity for the social scales (Tables 7, 10, and 11). Although a good model fit was obtained only for the second model, the CFA confirmed our structural model of the social dimension with the three main sub-dimensions of identification, sociability, and social acceptance.

Identification, social stimulation, social interaction, and social acceptance are the four social scales developed in this study (Table 12). The identification scale is similar to that of AttrakDiff and to the status dimension of meCUE; it deals with users’ personal and social image. Users meet their influence and popularity needs by expressing their identity, influence, and power to others through the use or possession of a product. The social interaction and social stimulation scales are complementary. The social stimulation scale represents the social incentives provided by a product to enable users to be socially active: a socially engaging, encouraging, empowering, and inclusive product enables users to achieve their social interaction goals in the form of communication, collaboration, and sharing of emotions with others. Finally, the social acceptance scale entails the feeling of being accepted by others. It can evoke self-conscious emotions, like pride, that are fulfilled when one is recognized and approved by others.

The social scales were subjected to validation studies to find their relationship with other UX dimensions. Identification, the only social dimension of the AttrakDiff questionnaire, was expected to have strong correlations with the four social scales developed in this study. Our results confirmed this expectation, with correlation coefficients between 0.45 and 0.69 with identification, except for social interaction (r = 0.34) and social stimulation (r = 0.24) for the online banking product category (Table 13). In addition to identification, we observed high correlations between the social scales and the attractiveness dimension of AttrakDiff. Hassenzahl (Hassenzahl, Citation2004) found a strong correlation between identification and beauty. He stated that beauty is social because it communicates identity and can be shared with and approved by others. Considering that beauty is an item in the attractiveness dimension of AttrakDiff, this explains the high correlations of the social scales with attractiveness.

In addition to AttrakDiff, we measured the correlations of the four social scales with the six UX dimensions of UEQ+. In the AttrakDiff, UEQ, and UEQ+ questionnaires, the attractiveness dimension is used for the overall assessment of a product. Boos and Brau (Boos & Brau, Citation2017) calculated the correlations of the acoustic and haptic dimensions with the attractiveness dimension of UEQ+ to demonstrate that their new UX dimensions measured the same construct (i.e., UX). They concluded that the strong correlations with attractiveness validated the two new UX dimensions for the UEQ+ modular framework. Following the same approach, we found correlation coefficients ranging from 0.42 to 0.67 between the four social scales and the attractiveness dimension of UEQ+ (Table 14). However, there were three exceptions: social interaction (r = 0.34) and social stimulation (r = 0.24) for the online banking product category, and social stimulation (r = 0.30) for the online shopping product category. We observed higher correlations between the social scales and the value dimension of UEQ+. This suggests that the social scales are closer in nature to the hedonic dimensions of UX, such as value, than to the pragmatic dimensions of efficiency or intuitive usage. However, high correlations with the pragmatic dimensions were found for the social network product category. Other studies have reported the influence of social features on both hedonic and pragmatic dimensions. For instance, Guo and Li (Guo & Li, Citation2021) found that interactivity, recommendations, and feedback, as social features of social commerce platforms, have an impact on both utilitarian and hedonic values that ultimately influence consumers’ repurchasing intention. The influence of perceived co-experience on enjoyment in social live streaming services (Bründl et al., Citation2017) and the impact of social image on the use of hedonic systems like games (Lin & Bhattacherjee, Citation2010) are additional examples of social aspects that affect user experience.

Validation of the social scales was tested by evaluating the UX of three product categories selected to represent different levels of social aspects. The average importance rating of each scale in the UX evaluation of each product category followed our expectations, with the social network category having the highest importance ratings and online banking the lowest across the four social scales (Table 15). Validating with three product categories addressed Lallemand and Koenig’s (Lallemand & Koenig, Citation2017) concerns about standardized scale development relying on only a single validation study.

The MANOVA results showed a significant difference between the three product categories for the combination of social scales. Further analysis of each pair of categories revealed significant differences for social interaction and social stimulation (Table 17). The greatest difference was found on the social interaction scale between the social network and online banking product categories. This was to be expected, given that the former is designed as a communication platform, whereas the latter is exclusively task-related and rarely incorporates social interaction aspects. The mean differences for the two scales of identification and social acceptance were not significant for any product category when using a Bonferroni-adjusted alpha level of 0.0125. The fact that all of the evaluated products were software applications could have had an impact on these results. It can be expected that the identification and social acceptance dimensions of software products differ from those of physical interactive products that attract more attention from users and spectators in a social situation, such as virtual reality glasses, or even of non-interactive products such as jewelry or designer handbags. However, at the alpha level of 0.05, significant differences could be observed between social network and online banking, which lie at opposite extremes regarding social features.

Overall, the four social scales developed in this study can be used in conjunction with the other dimensions of the UEQ+ modular framework.

The first limitation of this study is that the selected products were all interactive products. Evaluating other products that can develop social UX such as designer clothes, watches, physical board games, and museum pieces could be considered in future studies. The second limitation of this study is that we relied on the users’ reported experience with the selected products. Using laboratory studies and doing specific tasks with better control over the hardware and context could provide more reliable results. However, this would have been possible only at a significantly increased cost with a likely much smaller sample.

8. Conclusion

In this study, we developed and validated four social scales for the UX evaluation of interactive products: identification, social stimulation, social interaction, and social acceptance. These scales are each measured with four items, following the same format as the other scales of the UEQ+ modular framework. They can be used in conjunction with the other dimensions of the framework for the UX evaluation of products with social aspects. Therefore, they have the benefits of UEQ+, such as being freely available, rapid to use, and providing a global score for easier comparisons. We used the results of an online workshop with UX experts to prepare the candidate items for each social scale. We then conducted two rounds of data collection, the first used for scale development and the second for assessing the reliability and validity of the scales. The scales were validated through correlation analysis with the dimensions of the AttrakDiff questionnaire and the six UX dimensions of the UEQ+ framework; results showed high correlations with the identification and attractiveness dimensions of these two UX evaluation tools. Finally, using the social scales developed in this study, we successfully discriminated among three product categories with different levels of social features. This study highlighted the importance of paying more attention to the social dimension of UX. We recommend that future studies evaluate physical interactive products with social dimensions to further validate the social scales.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This research was funded by NSERC (Natural Sciences and Engineering Research Council of Canada) Individual Research Grant awarded to Dr. Jean-Marc Robert [#RGPIN-2018-06733].

Notes on contributors

Ehsan Mortazavi

Ehsan Mortazavi is a PhD candidate in the Department of Mathematics and Industrial Engineering at Polytechnique Montreal. His research focuses on UX subjective evaluation tools, UX dimensions, and cognitive ergonomics.

Philippe Doyon-Poulin

Philippe Doyon-Poulin is an assistant professor in the Department of Mathematics and Industrial Engineering at Polytechnique Montreal. His research focuses on human factors in aviation, error prevention and decision-making with automated systems. He authored more than 25 aircraft certification reports to show compliance on flight deck usability and pilot error.

Daniel Imbeau

Daniel Imbeau is a Full professor in the Department of Mathematics and Industrial Engineering at Polytechnique Montreal. His research focuses primarily on occupational ergonomics, continuous improvement, and human-centered engineering design to improve worker health & safety at work, and productivity.

Jean-Marc Robert

Jean-Marc Robert is an Adjunct professor in the Department of Mathematics and Industrial Engineering at Polytechnique Montreal. He retired in 2020. He taught and conducted research in the fields of Cognitive Ergonomics, Prospective Ergonomics, User Experience, and Human-Computer Interaction.

Notes

1 For the sake of brevity, hereafter, we only use product(s) instead of product(s), service(s), and system(s).

2 In questionnaires, red-herring or attention check questions are used to detect participants who do not read carefully or answer randomly.

References

  • Arhippainen, L., & Tähti, M. (2003). Empirical evaluation of user experience in two adaptive mobile application prototypes [Paper presentation]. Paper Presented at the Mum 2003. Proceedings of the 2nd International Conference on Mobile and Ubiquitous Multimedia, Norrköping, Sweden.
  • Boos, B., & Brau, H. (2017). Erweiterung des UEQ um die Dimensionen Akustik und Haptik. In S. Hess & H. Fischer (Eds.), Mensch und Computer 2017 – Usability Professionals. Gesellschaft für Informatik e.V.
  • Bründl, S., Matt, C., & Hess, T. (2017, June 5–10). Consumer use of social live streaming services: The influence of co-experience and effectance on enjoyment [Paper presentation]. Paper presented at the Proceedings of the 25th European Conference on Information Systems (ECIS), Guimarães, Portugal.
  • Field, A. (2013). Discovering statistics using IBM SPSS statistics (4th ed.). Sage.
  • Fu, F.-L., Su, R.-C., & Yu, S.-C. (2009). EGameFlow: A scale to measure learners’ enjoyment of e-learning games. Computers & Education, 52(1), 101–112. https://doi.org/10.1016/j.compedu.2008.07.004
  • Gefen, D., & Straub, D. (2005). A practical guide to factorial validity using PLS-graph: Tutorial and annotated example, communications of the AIS. Communications of the Association for Information Systems, 16(1), 91–109. https://doi.org/10.17705/1CAIS.01605
  • Guo, J., & Li, L. (2021). Exploring the relationship between social commerce features and consumers' repurchase intentions: The mediating role of perceived value. Frontiers in Psychology, 12, 775056. https://doi.org/10.3389/fpsyg.2021.775056
  • Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2014). Multivariate data analysis (7th ed.). Pearson.
  • Hassenzahl, M. (2003). The thing and I: Understanding the relationship between user and product. In M. A. Blythe, K. Overbeeke, A. F. Monk, & P. C. Wright (Eds.), Funology: From usability to enjoyment (pp. 31–42). Springer Netherlands.
  • Hassenzahl, M. (2004). The interplay of beauty, goodness, and usability in interactive products. Human-Computer Interaction, 19(4), 319–349. https://doi.org/10.1207/s15327051hci1904_2
  • Hassenzahl, M., Burmester, M., & Koller, F. (2003). AttrakDiff: Ein Fragebogen zur Messung wahrgenommener hedonischer und pragmatischer Qualität. In G. Szwillus & J. Ziegler (Eds.), Mensch & Computer 2003: Interaktion in Bewegung (pp. 187–196). Vieweg + Teubner Verlag.
  • Hassenzahl, M., Diefenbach, S., & Göritz, A. (2010). Needs, affect, and interactive products–Facets of user experience. Interacting with Computers, 22(5), 353–362. https://doi.org/10.1016/j.intcom.2010.04.002
  • Hinderks, A., Winter, D., Schrepp, M., & Thomaschewski, J. (2019). Applicability of user experience and usability questionnaires. Journal of Universal Computer Science, 25(13), 1717–1735. https://doi.org/10.3217/jucs-025-13-1717
  • Hinkin, T. R. (1998). A brief tutorial on the development of measures for use in survey questionnaires. Organizational Research Methods, 1(1), 104–121. https://doi.org/10.1177/109442819800100106
  • Hu, L. t., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1–55. https://doi.org/10.1080/10705519909540118
  • IJsselsteijn, W. A., de Kort, Y. A., & Poels, K. (2013). The game experience questionnaire. Eindhoven: Technische Universiteit Eindhoven, 46(1), 1–9. https://research.tue.nl/en/publications/the-game-experience-questionnaire
  • Jang, J., & Yi, M. Y. (2019). Determining and validating smart TV UX factors: A multiple-study approach. International Journal of Human-Computer Studies, 130, 58–72. https://doi.org/10.1016/j.ijhcs.2019.05.001
  • Jordan, P. W. (2000). Designing pleasurable products: An introduction to the new human factors. CRC press.
  • Kang, K., Hengeveld, B., Hummels, C., & Hu, J. (2022). Enhancing social interaction among nursing homes residents with interactive public display systems. International Journal of Human–Computer Interaction, 38(17), 1701–1717. https://doi.org/10.1080/10447318.2021.2016234
  • Kantosalo, A., & Riihiaho, S. (2019). Quantifying co-creative writing experiences. Digital Creativity, 30(1), 23–38. https://doi.org/10.1080/14626268.2019.1575243
  • Klein, A. M., Schrepp, M., Hinderks, A., & Thomaschewski, J. (2020). Measuring user experience quality of voice assistants: Voice communication scales for the UEQ+ framework [Paper presentation]. In A. Rocha, B. E. Perez, F. G. Penalvo, M. D. Miras, & R. Goncalves (Eds.), 2020 15th Iberian Conference on Information Systems and Technologies. IEEE.
  • Koelle, M., Ananthanarayan, S., & Boll, S. (2020). Social acceptability in HCI: A survey of methods, measures, and design strategies [Paper presentation]. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–19). Association for Computing Machinery.
  • Lallemand, C., & Koenig, V. (2017). “How could an intranet be like a friend to me?” – Why standardized UX scales don’t always fit [Paper presentation]. 35th Annual Conference of the European Association of Cognitive Ergonomics (ECCE 2017), September 20–22, 2017, Umea, Sweden.
  • Lallemand, C., & Koenig, V. (2020). Measuring the contextual dimension of user experience: Development of the User Experience Context Scale (UXCS) [Paper presentation]. Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society. Association for Computing Machinery.
  • Laugwitz, B., Held, T., & Schrepp, M. (2008). Construction and evaluation of a user experience questionnaire [Paper presentation]. Symposium of the Austrian HCI and Usability Engineering Group. Springer.
  • Li, H. (2015). Enhancing user experience of enterprise systems for improved employee productivity: A first stage of case study [Paper presentation]. 2nd International Conference on HCI in Business (HCIB 2015), held as part of HCI International 2015, August 2–7, 2015, Los Angeles, CA, United States.
  • Lin, C.-P., & Bhattacherjee, A. (2010). Extending technology usage models to interactive hedonic technologies: A theoretical model and empirical test. Information Systems Journal, 20(2), 163–181. https://doi.org/10.1111/j.1365-2575.2007.00265.x
  • Liu, S., Zheng, X. S., Liu, G., Jian, J., & Peng, K. (2013). Beautiful, usable, and popular: Good experience of interactive products for Chinese users. Science China Information Sciences, 56(5), 1–14. https://doi.org/10.1007/s11432-013-4835-4
  • Minge, M., Thüring, M., Wagner, I., & Kuhr, C. V. (2017). The meCUE questionnaire: A modular tool for measuring user experience [Paper presentation]. International Conference on Ergonomics Modeling, Usability and Special Populations (AHFE 2016), July 27–31, 2016, Walt Disney World, FL, United States.
  • Mortazavi, E., Doyon-Poulin, P., Imbeau, D., Taraghi, M., & Robert, J.-M. (2021). Exploring the landscape of UX subjective evaluation tools and UX dimensions: A systematic literature review (2010–2021) [Manuscript submitted for publication]. Interacting with Computers.
  • Pallant, J. (2020). SPSS survival manual: A step by step guide to data analysis using IBM SPSS (7th ed.). Routledge.
  • Park, J., Han, S. H., Kim, H. K., Cho, Y., & Park, W. (2013). Developing elements of user experience for mobile phones and services: Survey, interview, and observation approaches. Human Factors and Ergonomics in Manufacturing & Service Industries, 23(4), 279–293. https://doi.org/10.1002/hfm.20316
  • Petri, G., von Wangenheim, C. G., & Borgatto, A. F. (2016). MEEGA+: An evolution of a model for the evaluation of educational games (Technical Report: INCoD/GQS.03.2016.E). Brazilian Institute for Digital Convergence.
  • Petrosyan, A. (2023). Worldwide digital population 2023. Statista. https://www.statista.com/statistics/617136/digital-population-worldwide/
  • Phan, M. H., Keebler, J. R., & Chaparro, B. S. (2016). The development and validation of the game user experience satisfaction scale (GUESS). Human Factors, 58(8), 1217–1247. https://doi.org/10.1177/0018720816669646
  • Qiu, S., Hu, J., Han, T., Osawa, H., & Rauterberg, M. (2020). An evaluation of a wearable assistive device for augmenting social interactions. IEEE Access, 8, 164661–164677. https://doi.org/10.1109/ACCESS.2020.3022425
  • Robert, J.-M. (2014). Defining and structuring the dimensions of user experience with interactive products [Paper presentation]. 11th International Conference on Engineering Psychology and Cognitive Ergonomics (EPCE 2014), held as part of HCI International 2014, June 22–27, 2014, Heraklion, Crete, Greece.
  • Robert, J.-M., & Lesage, A. (2011). Designing and evaluating user experience. In G. A. Boy (Ed.), The handbook of human-machine interaction: A human-centered design approach (pp. 321–338). Ashgate.
  • Ryu, H., & Kim, J. (2019). Evaluation of user experience of new defense medical information system. Healthcare Informatics Research, 25(2), 73–81. https://doi.org/10.4258/hir.2019.25.2.73
  • Safsouf, Y., Mansouri, K., & Poirier, F. (2019). Design of a new scale to measure the learner experience in e-learning systems [Paper presentation]. International Conference on e-Learning 2019 (EL 2019), July 17–19, 2019, Porto, Portugal.
  • Schrepp, M., & Thomaschewski, J. (2019). Design and validation of a framework for the creation of user experience questionnaires. International Journal of Interactive Multimedia & Artificial Intelligence, 5(7), 88–95. https://doi.org/10.9781/ijimai.2019.06.006
  • Sun, X., & May, A. (2013). A comparison of field-based and lab-based experiments to evaluate user experience of personalised mobile devices. Advances in Human-Computer Interaction, 2013, 1–9. https://doi.org/10.1155/2013/619767
  • Tabachnick, B. G., & Fidell, L. S. (2019). Using multivariate statistics (7th ed.). Pearson.
  • Tcha-Tokey, K., Christmann, O., Loup-Escande, E., & Richir, S. (2016). Proposition and validation of a questionnaire to measure the user experience in immersive virtual environments. International Journal of Virtual Reality, 16(1), 33–48. https://doi.org/10.20870/IJVR.2016.16.1.2880
  • Wang, J., & Wang, X. (2020). Structural equation modeling: Applications using Mplus (2nd ed.). John Wiley & Sons.
  • Washington, P., Voss, C., Haber, N., Tanaka, S., Daniels, J., Feinstein, C., Winograd, T., & Wall, D. (2016). A wearable social interaction aid for children with autism [Paper presentation]. Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, San Jose, CA, United States.
  • Weiss, B., & Willkomm, S. (2013). User experience and social attribution for an embodied spoken dialog system [Paper presentation]. Workshop on Computers as Social Actors (CASA 2013), co-located with the 13th International Conference on Intelligent Virtual Agents (IVA 2013), September 28, 2013, Edinburgh, United Kingdom.
  • Yoon, J., Kim, C., & Kang, R. (2020). Positive user experience over product usage life cycle and the influence of demographic factors. International Journal of Design, 14(2), 85–102. http://www.ijdesign.org/index.php/IJDesign/article/view/3641
  • Zarnadze, G. (2020). Social interactions impact on product and service development [Paper presentation]. Proceedings of the International Conference on Business Excellence (vol. 14, iss. 1, pp. 324–332). https://doi.org/10.2478/picbe-2020-0031
  • Zhu, Y., Dailey, S. L., Kreitzberg, D., & Bernhardt, J. (2017). “Social Networkout”: Connecting social features of wearable fitness trackers with physical exercise. Journal of Health Communication, 22(12), 974–980. https://doi.org/10.1080/10810730.2017.1382617
