Do you trust ChatGPTs? Effects of the ethical and quality issues of generative AI on travel decisions

ABSTRACT This study investigated the impact of ChatGPT's recommendation quality and ethical concerns on travelers' acceptance, satisfaction, and perceived trustworthiness. Results showed that when quality and ethical concerns were prominent, acceptance of and satisfaction with ChatGPT's recommendations decreased significantly, and these negative effects were mediated by perceived trustworthiness. This study also identified that message framing exposing ChatGPT's errors, and the information types delivered by ChatGPT, acted as moderators of the positive effect of its recommendations. These findings underscore the significance of addressing ethical and quality concerns in using artificial intelligence (AI)-powered chatbots, with implications for AI acceptance and satisfaction.


Introduction
ChatGPT is an artificial intelligence (AI)-powered chatbot that attracted 100 million users worldwide within two months of its launch in November 2022 (Hu, 2023). Chatbots have become increasingly popular in various industries, including the travel industry (Demir & Demir, 2023). ChatGPT, based on generative AI, was designed to help travelers make travel decisions in a variety of ways, such as by offering a constantly available service and personalized, selective information, acting as an intermediary between travelers and travel companies and destinations, and reducing the costs of customer services. ChatGPT is easily accessible through platforms such as social media sites, other websites, and messaging apps, so travelers can access assistance and information from wherever they are.
In spite of these advantages, concerns exist regarding the input of incorrect, untrustworthy, or biased information; the absence of a filtering or verification process for knowledge or information; the inability to assess precision or errors when sources provide conflicting information; poor-quality information provision when there is little or no known information; and weak generation of information on new products and services. It is crucial to acknowledge that AI is not flawless. ChatGPT can often generate errors. Sometimes it is unable to address logical queries, or provides entirely erroneous "facts". This "hallucination" effect poses significant risks, and it is noteworthy that ChatGPT's primary webpage admits to its potential to "occasionally generate incorrect information." It remains unclear to what extent such quality issues affect consumers and how much consumers can trust ChatGPT's responses. However, ChatGPT is still recognized as a game changer in most knowledge-based businesses, because of its ability to provide a wide range of knowledge or information, removing the need for people to visit a physical library, or consult dictionaries or experts (Demir & Demir, 2023; Dwivedi et al., 2023).
Together with the urgent industrial demand to understand customers' reactions to using ChatGPT, a literature review of publications relating to ChatGPT revealed several urgent research gaps. First, there are few studies on customers' reactions to ChatGPT, because it is a newly emerging technology. As mentioned earlier, the quality and ethical issues associated with AI and ChatGPT have become a growing concern for customers (Dwivedi et al., 2023). In addition, even though the quality of a chatbot's recommendations, including the accuracy and relevance of the information it provides, significantly determines travelers' trust in and acceptance of the new technology, and their satisfaction with the experience, few studies have examined these relationships in chatbot use. Some studies have found that poor-quality recommendations by chatbots decrease users' intentions to adopt the technology, and their satisfaction with using the technology (Ashfaq et al., 2020; Zhou et al., 2023). Furthermore, ethical issues, such as the possibility of including erroneous information, privacy concerns, and problems with generalizations, untrustworthy information, and biased answers, can negatively affect travelers' opinions of chatbots' recommendations (Hagendorff, 2020; Kim et al., 2023b). Message framing is regarded as one of the most common features used to adjust customers' attitudes and behaviors (Kim et al., 2023a; Tanner et al., 2008). Therefore, there is a need to empirically test consumers' responses to different scenarios relating to ChatGPT's performance in the tourism context.
This study was thus initiated in response to industrial demand and academic curiosity about customers' responses to this innovative and potentially world-transforming technology. This study aimed to provide insights into how ChatGPT's quality and ethical issues affect travelers' behavior, and how a chatbot can be improved to enhance users' experiences. More specifically, the study had four research objectives: (1) to investigate the effects of the quality and ethical issues of ChatGPT on travelers' acceptance of and satisfaction with ChatGPT's recommendations; (2) to identify the effects of the quality and ethical issues of ChatGPT on the perceived trustworthiness of ChatGPT's recommendations; (3) to explore the mediating role of perceived trustworthiness in the effects of unethical or poor-quality issues on the acceptance of and satisfaction with ChatGPT's recommendations; and (4) to examine the moderating effect of exposure to ChatGPT's errors, because the negative effects of unethical issues and poor-quality responses could potentially be mitigated if ChatGPT's recommendations contain errors.
The results of this study help in understanding the factors that influence the acceptance of ChatGPT's recommendations from a tourist's perspective. In addition to their academic contribution, the findings of this study also have practical implications for chatbot developers and designers seeking to develop more effective and user-friendly chatbots.

AI and chatbots
"AI" refers to computing technology that provides intelligence in machines that can assist human activities (Bulchand-Gidumal, 2022;Nadikattu, 2016).AI technology has been applied in the travel and tourism industry by integrating robots, providing personalized recommendation systems, prediction and forecasting systems, smart travel agents, information on tourism, facilities and services, language translation and conversation applications, and voice recognition (Buhalis & Moldavska, 2022).Although adopting recommendations is contingent on the type of AI, such as Chatbots/virtual agents, robots, and search/booking engines (Huang et al., 2022), AI can facilitate tourists' decision making about future travel plans, including choosing a destination, accommodation, transportation, and activities (Bulchand-Gidumal, 2022).
Chatbots are computer programs developed to communicate with people using AI over the internet (Pillai & Sivathanu, 2020). Adopting natural language processing and machine learning technologies, the software undertakes tasks such as providing information to users, giving rapid responses to questions, assisting with product purchasing, and delivering prompt services to customers (Ashfaq et al., 2020). This is why early investigators stressed that chatbots are among the most useful technologies for facilitating human-computer interactions, because AI's attributes help formulate mutual interactions between computers and humans (Athikkal & Jenq, 2022; Jan et al., 2023; Martin et al., 2020; Shi et al., 2021).
In recent years, the role of chatbots controlled by AI has increased in tourism business operations, including retail, decision-making support, state-of-the-art payment systems, customer services, and online community building (Kim et al., 2023c; Popesku, 2019; Zsarnoczky, 2017). Prior research has argued that several factors influence users' intentions to use chatbots in hospitality and tourism contexts. These factors include anthropomorphism, perceived usefulness, ease of use, intelligence, and trustworthiness (Pillai & Sivathanu, 2020); habits, automation, health consciousness, and social presence (Hasan et al., 2021); and expected performance, habitual chatbot usage, a predisposition to use self-service technologies, hedonic elements in chatbot interactions, human-like chatbot behaviors, and social influences (Melián-González et al., 2021).
However, while users appreciate the speed of accessing basic information through chatbots, concerns about the accuracy of the information provided (Arsenijevic & Jovic, 2019) and potential weaknesses in common sense and flexibility (Lv et al., 2022) can erode trust. The inconvenience of communicating with chatbots can negatively affect users' intentions to use them. Therefore, enhancing chatbots by anthropomorphizing them and incorporating emotional and social cues can lead to more positive customer responses during interactions (Cai et al., 2022). Interestingly, hotel guests tend to appreciate chatbot services for their cost-effectiveness, ability to understand guest preferences, and provision of personalized experiences (Buhalis & Cheng, 2020). However, there are also certain limitations, such as chatbots' inability to handle complex guest queries, limited guest acceptance and awareness, and the absence of creativity, emotion, and a personal touch in chatbot interactions.
In conclusion, chatbots, a byproduct of AI, have gained prominence in a range of industries that offer diverse services. Research highlights the effectiveness of chatbots in human-computer interactions; however, the inconvenience of communicating with chatbots can be a deterrent. Although empirical studies have offered initial insights into chatbot use, managerial guidance on the role of ChatGPT (a new AI chatbot) in tourists' decision-making is limited. We believe that the growing accessibility and efficiency of generative AI technology as a recommendation system will drive further innovation and research in the tourism and hospitality sectors.

Travel recommendation in the digital-technology age
The marketing business environment has been rapidly reshaped by the development of advanced digital technology, software, and applications such as online travel agencies (OTAs), social media platforms, online portal sites, mobile transactions, virtual reality (VR), and ChatGPT, all of which have influenced tourists' decision making (Demir & Demir, 2023; Tussyadiah, 2020; Yu et al., 2023; Zui et al., 2022). For example, because consumers can easily share word-of-mouth (WOM) information about hedonic or utilitarian travel experiences in the form of written and oral communications on social media platforms (e.g. Twitter and Facebook), this has a significant impact on tourists' decision making (Fang et al., 2023). Demir and Demir (2023) also provided empirical evidence that ChatGPT could enhance value co-creation in the travel service setting. They suggested that prior experience of using ChatGPT was a critical moderator as well.
However, there are negative aspects to online reviews, as there is growing concern about so-called "fake news" and reviews by paid online consumers (i.e. ghost writers) (Ayeh et al., 2013). For these reasons, supporting website trust by providing quality information has been strongly emphasized for online travel agencies such as TripAdvisor (Filieri et al., 2015; Zui et al., 2022), in relation to promoting both recommendation adoption and WOM intention. Similarly, Li et al. (2019) proposed that the popularity of user-curated multi-place recommendations on Qyer.com depends on both the recommender (e.g. through identity disclosure and reputation) and recommendation-related heuristic factors (e.g. helpfulness rating and the length of the recommendation).
Consumers are often overwhelmed by online information overload, and increasingly want helpful recommendations to support better consumption decisions. This creates opportunities for offering a new level of recommendations based on technology-mediated systems and personalization, such as mobile-driven personalization practices (Buhalis & Moldavska, 2022; Lei et al., 2022) and personalized day tour routes (Liao & Zheng, 2018). By responding to consumer demand for tailored and personalized travel experiences, prior studies have helped make significant improvements to the recommendation diversity and calculation efficiency of tourism recommendation systems, integrating recent AI technology (Chen et al., 2021).
The potential positive impacts on tourism of AI-based recommendation systems, such as ChatGPT, in reducing online information overload and enhancing personalization are readily apparent. Moreover, ChatGPT is anticipated to become an essential personal assistant that provides a comprehensive range of services, equipping tourists with pragmatic and contemporary guidance (Wong et al., 2023). However, further investigation is needed into why and how travelers adopt such systems. Furthermore, because AI exhibits the characteristics of human intelligence, it seems likely that consumers who have used smartphones for some time and have stronger anthropomorphic tendencies view AI-curated review information on travel destinations more favorably (Martin et al., 2020). In this context, Shi et al. (2021) offered a comprehensive framework showing how different types of cognitive and emotional trust play a pivotal role in linking the impacts of systematic (i.e. efficacy and personalization) and heuristic (i.e. anthropomorphism and social influence) cues on the intention to adopt AI-generated recommendations in travel planning.
However, the potential weaknesses of these systems must be addressed. For example, as consumers tend to be concerned about privacy and information transparency (Lei et al., 2022), they may hesitate to adopt AI-generated recommendations because of the perceived uncertainty of their functions, which may lead to inaccurate decision-making (Kim et al., 2021; Shi et al., 2021). Because the significance of trust in AI-generated recommendations is applicable to ChatGPT, the current study postulated that although AI-based recommendation systems that help travelers make better decisions have obvious strengths, it is also crucial for marketers to understand and mitigate travelers' risk perceptions when considering AI-based recommendations.
The focus of this study was therefore to examine the impact of ChatGPT's issues (including poor-quality and unethical aspects) on travelers' reactions and their acceptance of ChatGPT's recommendations, to increase our understanding of the interactions between advanced chatbots such as ChatGPT and their users in the travel industry.

Message framing effects
The effects of message framing have been widely researched in the tourism and hospitality literature because it is an effective way to alter customers' attitudes and behaviors (Blose et al., 2015; Chi et al., 2021; Grazzini et al., 2018; S. Kim et al., 2022). Among framed messages, those framed in terms of gain or loss have been widely researched in the tourism context. A gain message elicits willingness to buy a product or participate in an activity because it emphasizes the benefits or advantages of using a product or brand, whereas a loss message elicits willingness to buy the product to avoid the psychological discomfort generated by missing out on benefits (Chi et al., 2021; S. Kim et al., 2022). Gain or loss framing can be explained by prospect theory, which illustrates the asymmetric effect of a psychological gain or loss relative to a psychological reference point (Tversky & Kahneman, 1981).
Relevant studies have shown inconsistent results for the effectiveness of gain- or loss-framed messages. Some studies revealed the effectiveness of gain framing (vs loss framing) (Chi et al., 2021; Maheswaran & Meyers-Levy, 1990), whereas other studies found better efficacy for loss framing (vs gain framing) (Blose et al., 2015; Grazzini et al., 2018; Meyerowitz & Chaiken, 1987; Tversky & Kahneman, 1981). The efficacies of the messages differed with the meaning of the messages, their context, and the business area involved. For example, in a study by S. Kim et al. (2022), who examined the effectiveness of 13 types of messages, mixed results were obtained because effectiveness differed according to the types of messages and the wording of the dependent variables. Specifically, in predicting intention to take an international flight, gain (extra mileage provision), gain (upgraded services in the cabin), and loss (coupon) messages were most effective. However, when participants were asked about the persuasiveness of the provided message, gain (extra mileage provision), loss (coupon), and loss (extra mileage provision) messages were most effective, while gain (upgraded services in the cabin) was relatively ineffective. Therefore, this study explored the effect of message framing (positive vs negative performance of ChatGPT) on customers' responses.

Main hypotheses regarding quality issues
We adopted information value as the central mechanism for evaluating ChatGPT's suggestions in a tourism context. Advances in AI-based technology such as ChatGPT are widening the array of tools that consumers can use in their information searches; tools that the travel and tourism industry has made good use of. However, as some scholars (Bushwick & Mukerjee, 2022; Dwivedi et al., 2023) have argued, ChatGPT may provide incorrect information and recommendations that are inferior to those offered in users' reviews and recommendations. Furthermore, Bigman and Gray (2018) found that humans are averse to AI decision-making, especially in relation to ethical issues. In this context, a question remains as to how consumers react to information provided by ChatGPT. Holbrook (1994, p. 22) defined information value as "an interactive relativistic preference experience of information which in essence involves a process of comparative valuation of information." Information value refers to high-quality information that meets the needs and standards of the information source. Because tourism activity involves uncertainty and risk, information value is particularly important in travel decision-making. Consumers seek high-quality information in order to make informed choices about their travel (Jun et al., 2007; Kim et al., 2023c).
Tourists often rely on online reviews to obtain further information and evaluate a product (Chen & Law, 2016). Cheung et al. (2008) argued that consumers base their judgments on the quality, credibility, and trustworthiness of information when evaluating online reviews and recommendations. Furthermore, highly trustworthy or credible information is strongly persuasive and stimulates behavioral change (Eagly et al., 1978; Pornpitakpan, 2004; Zui et al., 2022). Research has found that the trustworthiness and accuracy of online reviews are highly influential, affecting attitudes and behavioral intentions (Ayeh et al., 2013; Filieri & McLeay, 2014). It could therefore be expected that the acceptance of AI recommendations would be reduced when concerns about the trustworthiness of the AI system are salient in the travel recommendation context.
Therefore, based on research suggesting a strong relationship between perceived quality and new technology adoption, we predicted that the salience of poor-quality ChatGPT recommendations would have a negative effect on the outcomes of the recommendations. If consumers perceive that information generated by ChatGPT is inaccurate and of poor quality, they will be less willing to accept the information and follow the recommendations, and hence feel less satisfied with the recommendations. Thus, we proposed the following hypotheses: H1a: Travelers' acceptance of and satisfaction with ChatGPT's recommendations will decrease when the poor quality of ChatGPT recommendations is salient (vs. not).
H2a: Travelers' perceived trustworthiness of ChatGPT's recommendations will decrease when the poor quality of ChatGPT recommendations is salient (vs. not).

Main hypotheses regarding ethical issues
Superior computational abilities and advances in algorithms continue to widen the range of fields that can adopt AI-based technology. However, people respond differently to ethical issues. For example, research on AI aversion suggests that humans are critical of the instructions and recommendations given by algorithms (Bigman & Gray, 2018; Dietvorst et al., 2015). Furthermore, studies have mostly investigated the unethical behaviors of consumers (Kim et al., 2022) and employees (Lanz et al., 2023) in the decision-making context. Therefore, it is timely and important to explore the impact of ethical issues on people's opinions of AI technology.
We predicted that consumers would be less likely to trust ChatGPT-generated recommendations if they are produced in an unethical manner. This argument is based on literature suggesting a relationship between ethical evaluations and the behavioral intention to use a product. Although there is a subset of the literature discussing moral decoupling (i.e. the tendency to ignore ethical aspects when making an overall judgment [e.g. Orth et al., 2019; Xiao et al., 2021]), most of the existing literature strongly suggests that consumers actively consider a firm's ethical behaviors when deciding whether to purchase its products (Creyer, 1997; Lee et al., 2017). This tendency to include ethical considerations in purchasing decisions is magnified when customers do not have a strong brand loyalty to a specific firm (e.g. Creyer, 1997), or when the economic benefits are not directly related to the ethical considerations (e.g. Tanner et al., 2008).
In addition, consumers will be less likely to trust the information provided by ChatGPT when it is of a low ethical standard, since the ethicality and perceived trustworthiness of products are directly related (Eberhardt et al., 2021; Peifer & Newman, 2020). Therefore, we proposed the following hypotheses: H1b: Travelers' acceptance of and satisfaction with ChatGPT's recommendations will decrease when the unethicality of ChatGPT recommendations is salient (vs. not).
H2b: Travelers' perceived trustworthiness of ChatGPT's recommendations will decrease when the unethicality of ChatGPT recommendations is salient (vs. not).

Mediating hypotheses regarding quality and ethical issues
We further predicted that perceived trustworthiness would mediate the relationships between the ethical and quality issues and the acceptance of and satisfaction with ChatGPT's recommendations: H3a: The negative effect of poor quality on the acceptance of and satisfaction with ChatGPT's recommendations will be mediated by perceived trustworthiness.

H3b: The negative effect of poor ethicality on the acceptance of and satisfaction with ChatGPT's recommendations will be mediated by perceived trustworthiness.
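A mediation hypothesis such as H3a/H3b is commonly tested with a bootstrapped indirect effect (the product of the X→M and M→Y paths). The sketch below illustrates the procedure with simulated data standing in for the study's variables; all data, coefficients, and variable names here are hypothetical, and the study's actual analysis may have used a different tool (e.g. a PROCESS-style macro).

```python
# Sketch: bootstrap test of an indirect effect, with simulated data.
# X = condition (0 = control, 1 = poor-quality salient),
# M = perceived trustworthiness, Y = acceptance. All values hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n = 500
x = rng.integers(0, 2, n).astype(float)    # experimental condition
m = 5.0 - 1.2 * x + rng.normal(0, 1, n)    # trust drops in the negative condition
y = 1.0 + 0.8 * m + rng.normal(0, 1, n)    # acceptance follows trust

def slope(pred, extra, out):
    """OLS coefficient of `pred`, controlling for the columns in `extra`."""
    X = np.column_stack([np.ones_like(out), pred] + extra)
    beta, *_ = np.linalg.lstsq(X, out, rcond=None)
    return beta[1]

def indirect(idx):
    a = slope(x[idx], [], m[idx])            # X -> M path
    b = slope(m[idx], [x[idx]], y[idx])      # M -> Y path, controlling for X
    return a * b

boots = [indirect(rng.integers(0, n, n)) for _ in range(2000)]
ci_low, ci_high = np.percentile(boots, [2.5, 97.5])
print(f"bootstrap 95% CI for indirect effect: [{ci_low:.2f}, {ci_high:.2f}]")
```

Because the simulated true indirect effect is (−1.2)(0.8) = −0.96, the 95% confidence interval excludes zero, which is the standard criterion for concluding that the effect of condition on acceptance is mediated by trustworthiness.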

Hypotheses regarding the moderating role of ChatGPT's errors
In this section, we discuss the moderating role of ChatGPT's errors in its recommendations. We predicted that the positive effect of exposure to the positive (versus negative) aspects of ChatGPT on acceptance and satisfaction would diminish when ChatGPT's recommendations include errors.
For several decades, research has consistently demonstrated the presence of "algorithm avoidance," in which people tend to avoid algorithms, preferring human recommendations (Diab et al., 2011; Longoni et al., 2019). This tendency has been observed in various task types, such as choosing medical services or financial investments (Longoni et al., 2019; Önkal et al., 2009).
Several factors can influence the degree of algorithm avoidance, including the task type and individual differences (Logg et al., 2019; Longoni et al., 2019). One such factor is exposure to algorithm errors. For instance, Dietvorst et al. (2015) found that when both humans and algorithms make errors, people are more likely to rely on the human-based recommendations, and lose trust in algorithms more readily.
Based on the theory of algorithm avoidance, we predicted that people would significantly reduce their reliance on ChatGPT's recommendations when an error made by ChatGPT was apparent. Specifically, when this occurred, the additional information value of ChatGPT, regardless of the specific domains and valences, would be reduced significantly, eliminating the negative effect of exposure to the poor-quality information. This is because the negative effect of poor quality can be verified by actual exposure to ChatGPT's errors. In addition, exposure to poor-quality and unethical information also generates negative responses. Finally, positive expectations of ChatGPT stemming from positive reports could generate a disconfirmation effect when an error occurs (Oliver, 1980). In summary, we expected that visiting intention and perceived trustworthiness would be reduced significantly regardless of the issue type (positive vs. negative-ethical vs. negative-quality).
Therefore, formal moderation hypotheses were proposed as follows: H4a: The negative effect of ChatGPT's poor quality aspects on travelers' acceptance of ChatGPT's recommendations is reduced when ChatGPT recommendations contain errors.

H4b: The negative effect of ChatGPT's unethical aspects on travelers' acceptance of ChatGPT's recommendations is reduced when ChatGPT recommendations contain errors.

Overall theoretical framework and empirical studies
Figure 1 presents the overall framework, which specifies the constructs, hypotheses, and empirical studies. The sample size was determined using the G*Power program (Faul et al., 2007) with the criteria of a medium effect size (f = .25), significance level (α = .05), and power (1 − β = .95). Based on these criteria, the minimum total sample sizes for comparing the experimental conditions in Studies 1 and 2 were 252 and 280, respectively. We therefore attempted to collect enough samples to exceed these limits. The empirical studies were conducted in February and March 2023 and in October 2023; none of the participants participated in multiple studies. We strategically selected less-familiar destinations for the key participants to control for the potential effects of familiarity and regional proximity to the destination, following a similar approach to that used by other researchers (e.g. Kim & Seo, 2019).

Study 1: providing initial evidence

Study 1 investigated the impact of ChatGPT's ethical and quality issues on the acceptance of its recommendations, and focused on the effect of these issues on intentions to visit the destinations recommended by ChatGPT. We hypothesized that participants would demonstrate weaker intentions to visit the destinations recommended by ChatGPT when either ethical or quality issues were salient before the recommendation task had been undertaken. To prime the participants to think about ethical and quality issues, we asked them to read a newspaper article related to ChatGPT. This method has been widely used in marketing research (e.g. Galoni et al., 2020) as well as in tourism research (Kim et al., 2023b).
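The a priori sample-size criteria reported for the framework (medium effect f = .25, α = .05, power = .95, with minimum total Ns of 252 for three groups and 280 for four) can be cross-checked with statsmodels' ANOVA power analysis. This is a sketch only; the study itself used G*Power, and statsmodels is assumed available here.

```python
# Sketch: verifying that the reported minimum total sample sizes achieve
# the target power of .95 for a one-way between-subjects ANOVA with a
# medium effect size (Cohen's f = .25) at alpha = .05.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()

# Study 1: three conditions, reported minimum total N = 252
power_s1 = analysis.power(effect_size=0.25, nobs=252, alpha=0.05, k_groups=3)
# Study 2: four conditions, reported minimum total N = 280
power_s2 = analysis.power(effect_size=0.25, nobs=280, alpha=0.05, k_groups=4)

print(round(power_s1, 3), round(power_s2, 3))  # both should reach at least .95
```

Computing achieved power at the reported Ns (rather than re-solving for N) avoids the small rounding differences between tools, since G*Power rounds total N up to a whole number of equally sized groups.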

Method: participants, design, and procedure
Participants in this study were 302 US adults (M_age = 40.31, SD = 12.06; 44.0% female) recruited from Amazon MTurk (a survey participant recruitment service) in exchange for nominal compensation. They were randomly assigned to one of three experimental conditions (salience of ChatGPT: positive vs. negative quality issue vs. negative ethical issue) using a between-subjects design. Initially, the participants were informed that the study consisted of multiple unrelated tasks. First, they were asked to read a newspaper article about ChatGPT, based on the work of Kim et al. (2023b). Participants in the positive condition read an article about the technological superiority of ChatGPT, titled "ChatGPT is the most advanced AI model in existence," as shown in Figure 2. Participants in the quality issue condition were asked to read an article titled "ChatGPT isn't always right and can cause real-world harm," and those in the ethical issue condition read an article titled "Open AI hired Kenyan workers at less than $2/hour to decrease the toxicity of ChatGPT," as shown in Figure 2.
After completing the reading tasks, participants were asked to imagine that they were planning to visit the South Island of New Zealand and were seeking information about their destination, based on Kim et al. (2023b). They were asked to imagine that they had asked ChatGPT to recommend destinations to visit, and that ChatGPT provided ten destinations, as shown in Figure 3. They then rated their visit intention for each recommended place on a 2-item 7-point scale (1 = not at all/very weak, 7 = very much/very strong; Cronbach's α = .956) and their satisfaction with the recommendation on a 7-point scale (1 = not satisfied at all, 7 = very satisfied). They also rated the perceived accuracy (1 = not at all objective/accurate, 7 = very objective/accurate; α = .768; Westbrook et al., 2023) and trustworthiness (1 = not at all credible/trustworthy, 7 = very credible/trustworthy; α = .968; Ewing et al., 2015) of each recommendation on 2-item 7-point scales. Finally, participants were asked to rate the perceived realism of the scenario on a 7-point scale (1 = highly unrealistic, 7 = highly realistic).
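The internal-consistency coefficients reported for these multi-item measures (Cronbach's α) can be computed from a respondents-by-items rating matrix. The sketch below uses illustrative made-up ratings, not the study's actual data.

```python
# Sketch: Cronbach's alpha for a multi-item scale such as the two-item
# visit-intention measure. The ratings below are illustrative only.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale ratings."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five respondents rating two highly consistent 7-point items
ratings = np.array([[6, 7], [5, 5], [2, 3], [7, 7], [4, 4]])
print(round(cronbach_alpha(ratings), 3))  # -> 0.978
```

High alphas such as the .956 reported above indicate that the items move together closely; in the degenerate case of two identical items, the formula returns exactly 1.0.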

Study 2: replication of the previous study
In Study 1, we found a negative effect of exposure to the ethical or quality issues of ChatGPT on the acceptance of ChatGPT's recommendations. Even though the direction of the effect was negative, the overall effect could have been driven mainly by the positive newspaper article about ChatGPT. To test this possibility, we included a control experimental condition in the second study. Specifically, participants in this condition were asked to read a newspaper article about food, which was irrelevant to ChatGPT. In addition, to extend the generalizability of the results, we used a different ethical issue (i.e. plagiarism during program development) and a different travel destination (Hong Kong).

Method: participants, design, and procedure
The participants in this study were 312 US adults (M_age = 42.87, SD = 12.68; 53.6% female) also recruited from Amazon MTurk in exchange for nominal compensation. They were randomly assigned to one of four experimental conditions (salience of ChatGPT: control vs. positive vs. negative quality issue vs. negative ethical issue) using a between-subjects design.
The overall procedure of this study was similar to that of Study 1. First, the participants were asked to read newspaper articles either related or not related to ChatGPT. Participants in the control condition were asked to read an article titled "The secret to making perfect fried chicken and waffles." Participants in the positive and negative quality issue conditions were asked to read the same articles as in Study 1. Participants in the ethical issue condition were asked to read an article titled "Open AI utilized advanced plagiarism to boost the performance of ChatGPT," as shown in Figure 2. After completing the reading tasks, participants were asked to imagine that they were planning to visit Hong Kong and had asked ChatGPT to recommend destinations to visit. ChatGPT initially provided 15 destinations and narrowed them down to four recommendations based on further requests, as shown in Figure 3. They then rated their visit intention for each recommended place (α = .952), along with perceived trustworthiness (α = .901) and perceived realism, using the same scales as in Study 1.

Study 3: testing the moderation effect
In the previous two studies, we found a negative effect of the ethical or quality issues of ChatGPT on acceptance of the chatbot's recommendations, and a mediating role of perceived trustworthiness. In this third study, we examined the moderating effect of ChatGPT's errors on this prediction. We expected the negative effect of exposure to negative (vs. positive) newspaper articles on the acceptance of ChatGPT's recommendations to be significantly reduced when participants were exposed to ChatGPT's errors.

Method: participants, design, and procedure
Participants in this study were 382 US adults (M_age = 42.63, SD = 12.99; 56.5% female) recruited from Amazon MTurk in exchange for a nominal payment. They were randomly assigned to one condition of a 3 (salience of ChatGPT: positive vs. negative quality issue vs. negative ethical issue) × 2 (ChatGPT errors: present vs. absent) between-subjects design. The overall procedure of this study was similar to that of Study 1. First, participants were asked to read the same newspaper articles as those used in Study 1, thereby exposing them to three different articles on ChatGPT.
After completing the reading tasks, participants were asked to imagine that they were planning to visit the North Island of New Zealand and had turned to ChatGPT for recommendations. They were also asked to imagine that ChatGPT provided 20 destinations, as shown in Figure 6. In this situation, we manipulated the presence (versus absence) of ChatGPT errors. Specifically, participants in the error-present condition were informed that the 17th recommended place (i.e. Lake Tekapo) was actually located in the South Island of New Zealand, revealing an error in ChatGPT's recommendations. By contrast, those in the error-absent condition were not provided with this information, as shown in Figure 6.
Afterwards, participants rated their visit intention for each recommended place (α = .960) and perceived trustworthiness (α = .959), using the same items as in Study 1. Finally, they were asked to rate the perceived realism of the foregoing scenario using the same scale as in the previous two studies.
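The internal-consistency coefficients reported throughout the studies (Cronbach's α) can be reproduced from item-level ratings. A minimal pure-Python sketch, using made-up 7-point ratings rather than the study's actual response data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    `items` is a list of item-score lists, one list per scale item,
    each of the same length (the number of respondents).
    """
    k = len(items)                      # number of items
    n = len(items[0])                   # number of respondents

    def variance(xs):                   # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_var = sum(variance(item) for item in items)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return (k / (k - 1)) * (1 - sum_item_var / variance(totals))

# Hypothetical 7-point ratings: 3 items x 5 respondents
items = [
    [5, 6, 4, 7, 5],
    [5, 7, 4, 6, 5],
    [6, 6, 5, 7, 4],
]
print(round(cronbach_alpha(items), 3))  # -> 0.871
```

Values above .90, such as the α = .960 and α = .959 reported here, indicate highly consistent multi-item scales.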

Study 4: testing the moderation effect of ChatGPT's information types on moral decoupling
In previous studies, we found a pattern of moral decoupling in travelers' visit intentions toward places recommended by ChatGPT, whether or not the unethical behavior of ChatGPT's company was salient. Moral decoupling can be defined as "a psychological separation process by which consumers selectively dissociate judgments of morality from judgments of performance" (Bhattacharjee, Berman, & Reed, 2013, p. 1168). For example, visit intention was statistically similar when travelers were exposed to positive (or morally negative) news in Study 2. The same pattern was found when travelers were exposed to errors in Study 3. These results indicate that travelers evaluate ChatGPT's recommendations and information without seriously considering the moral aspects.
In this study, we investigated the boundary conditions for this moral decoupling. Our main focus was on the type of information provided by ChatGPT. ChatGPT can provide specific and concrete information, such as suggesting a particular destination, as well as abstract and general information, such as basic information about visiting places. Previous literature suggests that moral judgment is more salient when people are in an abstract (rather than concrete) thinking mode (Cowan & Yazdanparast, 2019; Napier & Luguri, 2013), resulting in a lower level of moral decoupling. Additionally, the direct benefits of exposure to specific recommendations in concrete situations are also expected to increase moral decoupling (Orth, Hoffmann, & Nickel, 2019).
In summary, we expected that moral decoupling would be higher when ChatGPT provided abstract and general (rather than specific and concrete) information. Specifically, the evaluation of information following positive (or morally negative) news would differ when ChatGPT provided general (rather than specific) information. Furthermore, we predicted that this pattern would not apply to negative news involving non-moral aspects, as moral decoupling is specifically related to the ethical dimension. Therefore, we proposed the following hypotheses:

H5a:
The negative effect of ChatGPT's poor quality aspects on travelers' information evaluation will be the same whether ChatGPT's information contains general or specific information.

H5b:
The negative effect of ChatGPT's unethical aspects on travelers' information evaluation will be stronger when ChatGPT's information contains general (vs. specific) information.

Method: participants, design, and procedure
Participants in this study were 526 US adults (M_age = 42.35, SD = 12.75; 53.6% female) recruited from Amazon MTurk in exchange for a nominal payment. They were randomly assigned to one condition of a 3 (salience of ChatGPT: positive vs. negative quality issue vs. negative ethical issue) × 2 (type of ChatGPT information: specific vs. general) between-subjects design. The overall procedure was similar to that of the previous studies. First, participants were asked to read the same newspaper articles as those used in Study 1. After that, participants were asked to imagine that they were planning to visit the South Island of New Zealand and had turned to ChatGPT for information. The information was manipulated across conditions. Participants in the specific information condition were exposed to 10 different visiting places, as in Study 1. In contrast, those in the general information condition were exposed to information about the visiting places in nine categories, including geography, major cities, and outdoor activities, as shown in Figure 8. Afterwards, all participants rated their information satisfaction (i.e. "How satisfied are you with the information provided by ChatGPT above?") on a 7-point scale (1 = not satisfied at all, 7 = very satisfied). Finally, they were asked to rate the perceived realism of the scenario as well as their familiarity with the suggested destinations in New Zealand on a 7-point scale (1 = not at all familiar, 7 = highly familiar).
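A 3 × 2 between-subjects design such as this one is typically analyzed by comparing cell means and testing the article-by-information-type interaction. A minimal pure-Python sketch of the cell-mean comparison; every rating below is fabricated purely for illustration and simply mimics the pattern H5b predicts:

```python
from statistics import mean

# Hypothetical satisfaction ratings per cell of the
# 3 (article: positive / quality issue / ethical issue)
# x 2 (information type: specific / general) design
cells = {
    ("positive", "specific"):      [6, 6, 5, 7],
    ("positive", "general"):       [6, 5, 6, 6],
    ("quality_issue", "specific"): [4, 3, 4, 5],
    ("quality_issue", "general"):  [4, 4, 3, 4],
    ("ethical_issue", "specific"): [6, 5, 6, 5],  # moral decoupling: still high
    ("ethical_issue", "general"):  [3, 4, 3, 4],  # decoupling breaks down
}

cell_means = {cond: mean(scores) for cond, scores in cells.items()}

# Simple effect of the ethical-issue article within each information type
drops = {}
for info in ("specific", "general"):
    drops[info] = cell_means[("positive", info)] - cell_means[("ethical_issue", info)]
    print(info, round(drops[info], 2))  # specific 0.5 / general 2.25
```

In practice the interaction would be tested with a two-way ANOVA; the larger drop in the general condition is the asymmetry H5b anticipates.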

Summary of empirical studies
This study demonstrated that ChatGPT's poor quality responses and unethical aspects significantly decreased travelers' acceptance of, satisfaction with, and perceived trustworthiness of its recommendations. Consistent with previous studies (Blose et al., 2015; Chi et al., 2021; Grazzini et al., 2018; S. Kim et al., 2022), the efficacy of message framing was important because negative messages led to a decrease in customers' acceptance of the new technology. Similar to previous studies (Demir & Demir, 2023; Kim et al., 2023b; Lv et al., 2022), the findings suggest that travelers are sensitive to ChatGPT's quality and ethical issues when judging its recommendations. They also indicate that travelers' acceptance of and satisfaction with ChatGPT's recommendations decreased significantly when the poor quality issue was salient. Similarly, the perceived trustworthiness of ChatGPT's recommendations decreased when either the unethical or the poor quality issue was salient. These findings are similar to those of prior studies (Kim et al., 2023b, 2023c). This study also showed that perceived trustworthiness mediated the negative effects of unethical or poor quality issues on acceptance of and satisfaction with ChatGPT's recommendations. This result suggests that the perceived trustworthiness of ChatGPT is an essential factor mediating the effect of these issues on travelers' acceptance and satisfaction. Furthermore, this study explored the impact of ChatGPT's errors on the effects of its recommendations and found that the negative effect of ChatGPT on travelers' acceptance and satisfaction decreased when ChatGPT's recommendations contained errors. Finally, Study 4 investigated the boundary conditions for moral decoupling in ChatGPT interactions and suggested that moral decoupling was stronger when ChatGPT offered specific information as opposed to general information.

Theoretical and practical implications
The results of this study have several significant theoretical implications. First, the findings suggest that travelers are sensitive to ChatGPT's quality and ethical issues when judging its recommendations. Therefore, message framing using ChatGPT's quality and ethical issues was valid in adjusting customers' acceptance of the technology, as in previous studies in which gain or loss message framing influenced tourists' psychological mechanisms for decision making (Blose et al., 2015; Grazzini et al., 2018; S. Kim et al., 2023a). The results indicate that travelers' acceptance of and satisfaction with ChatGPT's recommendations decreased significantly when the poor quality issue was salient. Accordingly, positive message framing generated higher levels of positive behavioral intention than negative message framing. This finding also highlights the importance of the quality and ethical aspects of an AI system and provides implications for the broader field of AI development. Demir and Demir (2023) further contended that ethical issues and ChatGPT's influence on tourism enterprises are as serious as data privacy, suggesting that ChatGPT could potentially instigate prejudice or discrimination in its responses. Therefore, service designers should improve users' experiences by ensuring that the system's quality and ethical aspects align with users' expectations so that consumers are satisfied with and can trust the service. In addition, tourism companies need to disclose their customers' records of using AI-powered chatbots and the results generated (Demir & Demir, 2023). This is in line with previous research showing the importance of chatbot accuracy and quality for users' satisfaction and trust (Melián-González et al., 2021; Pillai & Sivathanu, 2020).
Second, this study has enriched the discourse on generative AI chatbots by shedding light on potential adverse effects, such as ethical issues, on users' acceptance or recommendations to other customers. While earlier studies primarily focused on the initial acceptance stage of AI technologies in tourism contexts (Gursoy et al., 2019; Sun et al., 2019), the findings of this study go beyond those of prior research, which simply emphasized the significance of understanding tourists' behaviors following adoption of a new technology (Shi et al., 2021; Xiang et al., 2020).
Third, this study extends our understanding of the role of trustworthiness in new technology adaptation situations. As explained in the previous paragraph, the perceived trustworthiness of ChatGPT's recommendations decreased when either unethical or poor quality issues were salient. In other words, the negative effect of unethical or poor quality issues on acceptance of and satisfaction with ChatGPT's recommendations was mediated by perceived trustworthiness. This finding indicates that trust plays a crucial role in accepting and being satisfied with chatbots' recommendations, particularly when consumers are adopting a new technology. The previous literature on persuasive communication has also emphasized the importance of perceived trustworthiness (e.g. Clementson, 2020; Sparks et al., 2013). Since ChatGPT's answers may not be based on facts and could occasionally disseminate deceptive information, some studies (Kim et al., 2023c; Paul et al., 2023) have underscored the need to investigate consumers' perceptions and attitudes concerning the use of ChatGPT, particularly in the context of trust. By extending our understanding of the role of trustworthiness in novel technology adaptation contexts, the current study sheds light on the importance of building trust in chatbots.
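A mediation pattern of this kind (issue salience → perceived trustworthiness → acceptance) is conventionally tested by bootstrapping the indirect effect a × b, as in PROCESS-style analyses. A pure-Python sketch on simulated data; the variable construction and effect sizes below are invented purely to illustrate the procedure:

```python
import random

random.seed(1)

# Simulated data: X = issue salience (0 = control, 1 = negative issue),
# M = perceived trustworthiness, Y = acceptance of recommendations
n = 200
X = [random.randint(0, 1) for _ in range(n)]
M = [5.5 - 1.5 * x + random.gauss(0, 1) for x in X]                       # a path < 0
Y = [1.0 + 0.8 * m - 0.2 * x + random.gauss(0, 1) for x, m in zip(X, M)]  # b path > 0

def slope(x, y):
    """Simple-regression slope of y on x (the a path: M ~ X)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return num / sum((xi - mx) ** 2 for xi in x)

def partial_b(x, m, y):
    """Coefficient of m in y ~ x + m (the b path), via centered normal equations."""
    mx, mm, my = sum(x) / len(x), sum(m) / len(m), sum(y) / len(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    smm = sum((mi - mm) ** 2 for mi in m)
    sxm = sum((xi - mx) * (mi - mm) for xi, mi in zip(x, m))
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    smy = sum((mi - mm) * (yi - my) for mi, yi in zip(m, y))
    return (smy * sxx - sxy * sxm) / (sxx * smm - sxm ** 2)

def indirect(x, m, y):
    return slope(x, m) * partial_b(x, m, y)  # indirect effect = a * b

# Percentile bootstrap confidence interval for the indirect effect
boots = []
for _ in range(2000):
    idx = [random.randrange(n) for _ in range(n)]
    boots.append(indirect([X[i] for i in idx], [M[i] for i in idx], [Y[i] for i in idx]))
boots.sort()
lo, hi = boots[int(0.025 * 2000)], boots[int(0.975 * 2000)]
print(f"indirect effect = {indirect(X, M, Y):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The indirect effect is deemed significant when the bootstrap confidence interval excludes zero, which is the logic behind the mediation result reported above.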
Fourth, this study extends knowledge of the phenomenon of algorithm aversion in a new interactive AI setting by exploring the impact of ChatGPT's errors on its recommendations' positive effects. The results revealed that the positive effect of ChatGPT on traveler acceptance and satisfaction decreased when ChatGPT made errors. This finding highlights the importance of accuracy and quality when designing an AI chatbot. The overall effect was similar to that identified in the extant literature on algorithm aversion, people's tendency to be less tolerant of AI errors (e.g. Burton et al., 2020; Dietvorst et al., 2015). Therefore, accuracy and quality should be prioritized when designing chatbots to maintain a positive user experience. The findings resonate with the quality concerns about ChatGPT: because the quality of the responses produced by machine learning models, including ChatGPT, is contingent on the training data and the prompts provided to the model, the possibility of errors or omissions exists. In addition, the nature of the training data inevitably creates the potential for bias and requires appropriate measures to mitigate it (Iskender, 2023). By showing that algorithm aversion also applies to interactive AI settings, such as chatbots, and that the impact of errors on the acceptance of and satisfaction with AI recommendations is a crucial factor, this study contributes to the understanding of AI in the travel recommendation context.
Fifth, this study extends our understanding of moral decoupling in travel decision-making. We proposed and tested boundary conditions for moral decoupling (Bhattacharjee et al., 2013). Our results in Study 4 indicated that travelers evaluated travel information highly, even when the product was negatively associated with ethical aspects, especially when ChatGPT provided detailed and specific information. Therefore, this paper offers an initial response to calls for research on the ethical and moral dimensions of ChatGPT (Dwivedi et al., 2023).
Finally, by presenting preliminary evidence of the potential adverse effects of ChatGPT on tourist attitudes and behaviors, this study emphasizes the need for a more mindful approach to technological advancements. It raises concerns pertinent to the evolution of technology and its application in tourism and beyond. There is a pronounced disparity between human learning and AI: AI may lack a holistic comprehension of context and ethical aspects. Such differences are relevant in making informed decisions about the use of AI, and its limitations, particularly in areas where human values, ethics, and creativity are indispensable. This concurs with prior academic emphasis on the importance of critically scrutinizing unchecked technological progression. For instance, Gretzel et al. (2020) advocated restrictions on and democratization of technological developments in tourism to foster equity and sustainability. Fuchs (2009) challenged technological determinism, endorsing constraints on technological advancement to stimulate innovative decision-making. Tribe and Mkono (2017) warned against overdependence on technology in tourism, which could result in e-alienation rather than authentic human interactions. While chatbots and generative AIs offer efficiency and convenience, their limitations must be meticulously evaluated to ensure alignment with human interests and values. In all, this study further underscores how crucial it is to approach new technologies with caution, especially when we are unable to understand the learning processes or inherent value systems driving the solutions proposed by AI.
The results of this study also have several significant practical and managerial implications. They provide valuable insights into how travel service providers or chatbot designers can design and develop ethical and high-quality chatbots that meet travelers' needs and expectations. First, based on the finding that exposure to poor quality issues affects travelers' acceptance of and satisfaction with ChatGPT recommendations, travel service providers using chatbots need to focus on providing accurate and high-quality recommendations. In addition, chatbot designers should ensure that their chatbots have access to accurate and reliable data sources and use sophisticated algorithms to generate relevant and personalized recommendations for individual travelers' needs.
Second, concerning ethical considerations, the findings of this study suggest that travel service providers employing chatbots should guarantee ethical behavior and adherence to ethical principles. Upholding user trust can be accomplished by developing an interface that respects user privacy (Brown et al., 2007; Lee & Cranage, 2011), maintains transparency in decision-making processes, and offers clear and concise explanations for its recommendations. Furthermore, the concept of trust can serve as a critical metric for examining user confidence in such systems and assessing its subsequent influence on users' propensity to adhere to suggestions or use the system for customer service purposes (Paul et al., 2023).
Finally, the negative effect of unethical or poor quality issues on acceptance of and satisfaction with ChatGPT's recommendations was mediated by perceived trustworthiness, implying that online travel agencies (OTAs) using chatbots, and chatbot designers, both need to focus on building trust in their chatbots. Chatbot designers can achieve this by designing chatbots that are trustworthy and ethical: being transparent about their decision-making processes, responding to user feedback and concerns, and providing clear and concise explanations.

Limitations and future research directions
This study had several limitations that suggest directions for future research. First, hypothetical scenarios have been widely used in the previous literature (e.g. Yao et al., 2023) and can provide valuable insights into users' perceptions of and attitudes towards robots, chatbots, or AI (Choi et al., 2020, 2021; Kim et al., 2021, 2022; Zhang et al., 2022). However, hypothetical scenarios may not accurately reflect users' actual experiences with chatbots in the real world. Future studies should address this limitation by conducting field studies in real-life travel settings. Second, the empirical studies were conducted in February and March 2023 using ChatGPT version GPT-3.5. As the study was conducted over a brief period of time, it may not have captured changes in users' perceptions of and attitudes towards different versions of the chatbot. Future studies could be longitudinal, examining how users' perceptions and attitudes toward chatbots change over time. Third, even though this study examined moderating variables such as exposure to ChatGPT's errors, future studies need to examine a wider range of moderators, such as situational differences (including time pressure), individual differences in technology adaptation or processing style, and cultural differences (e.g. Assaker, 2020; Kim et al., 2022). In addition, this study focused on using ChatGPT for travel recommendations. Future studies could investigate the use of chatbots for other types of information searches in travel settings, such as trip planning and budgeting, and in fields such as healthcare information or job search recommendations. Fourth, the significant results of Study 3 may have been driven by the simple association of ChatGPT's quality issues with its errors. Future studies need to explore this issue further. Finally, future studies should focus on developing practical and theoretical strategies to mitigate the negative impact of ChatGPT's poor quality responses and unethical aspects on user outcomes.

Figure 1. Theoretical framework and empirical studies.

Figure 8. Stimuli for Study 4 and results.