
4 - Credibility Assessment of Human–Generative AI Interaction

Published online by Cambridge University Press:  19 September 2025

Dan Wu, Wuhan University, China
Shaobo Liang, Wuhan University, China

Summary

This chapter aims to provide a comprehensive overview of the current state of credibility research in human–generative AI interactions by analyzing literature from various disciplines. It begins by exploring the key dimensions of credibility assessment and provides an overview of two main measurement methods: user-oriented and technology-oriented. The chapter then examines the factors that influence human perceptions of AI-generated content (AIGC), including attributes related to data, systems, algorithms, and user-specific factors. Additionally, it investigates the challenges and ethical considerations involved in assessing credibility in human–generative AI interactions, scrutinizing the potential consequences of misplaced trust in AIGC. These risks include concerns over security, privacy, power dynamics, responsibility, cognitive biases, and the erosion of human autonomy. Emerging approaches and technological solutions aimed at improving credibility assessment in AI systems are also discussed, alongside a focus on domains where AI credibility assessments are critical. Finally, the chapter proposes several directions for future research on AIGC credibility assessments.

Information
Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2025

4 Credibility Assessment of Human–Generative AI Interaction

4.1 Introduction

With the rapid development of artificial intelligence, Human–AI Interaction (HAII) has gradually become a focus of Human–Computer Interaction (HCI) and related cross-disciplinary fields (Amershi et al., Reference Amershi, Weld, Vorvoreanu, Fourney, Nushi, Collisson, Suh, Iqbal, Bennett and Inkpen2019). The emergence of ChatGPT signals that generative artificial intelligence (GAI) based on large language models (LLMs) has entered a new stage of development. In particular, deep learning models are employed to generate human-like content, that is, AI-Generated Content (AIGC), in response to complex and diverse prompts (Lim et al., Reference Lim, Gunasekara, Pallant, Pallant and Pechenkina2023). Human interaction with GAI promises to greatly enhance productivity and creativity and will increasingly permeate everyday life (De Freitas et al., Reference De Freitas, Agarwal, Schmitt and Haslam2023). However, alongside these benefits, GAI inevitably raises a series of technical, socio-cultural, and ethical issues, among which the credibility of GAI remains a research concern that deserves attention in this new era (Longoni et al., Reference Longoni, Fradkin, Cian and Pennycook2022).

Credibility was originally defined as 'believability' or perceived information quality from the perspective of information recipients, but credibility is not necessarily equal to objective information quality (Flanagin & Metzger, Reference Flanagin, Metzger, Kenski and Jamieson2017). Many researchers agree that the concept of credibility is multidimensional, including components such as trustworthiness, expertise, and objectivity (Choi & Stvilia, Reference Choi and Stvilia2015). As technology advances and times move forward, credibility studies also need to pay close attention to the depth of interaction between people and information, digital artifacts, and socio-cultural environments (Shin, Reference Shin2022). The credibility problems of the traditional mass media era are not comparable to those of the Internet era. Similarly, credibility issues in the era of GAI face new challenges brought on by new technologies, new businesses, and new environments (Huschens et al., Reference Huschens, Briesch, Sobania and Rothlauf2023), and credibility research therefore needs to keep up with the times and be critically examined.

The credibility assessment and judgment of AI has become an important topic in research on explainable AI (Wagle et al., Reference Wagle, Kaur, Kamat, Patil and Kotecha2021). While AI technology injects vitality into social development, it also triggers negative problems such as the technological black box (Castelvecchi, Reference Castelvecchi2016), algorithmic discrimination (Shin, Reference Shin2022), the dissemination of misinformation (Zhou et al., Reference Zhou, Zhang, Luo, Parker and De Choudhury2023), and echo chambers (Jeon et al., Reference Jeon, Kim, Park, Ko, Ryu, Kim and Han2024). In particular, the rapid development of GAI in recent years has raised public concerns about privacy, employment opportunities, and loss of control, which in turn affect trust between people and technology, as well as the adoption and use of GAI by individuals and organizations (Wach et al., Reference Wach, Duong, Ejdys, Kazlauskaitė, Korzynski, Mazurek, Paliszkiewicz and Ziemba2023). Credibility assessment of GAI therefore aims to alleviate, to a certain extent, people's concerns about new technologies represented by ChatGPT and to advocate the development of human-centered AI, promoting a harmonious, symbiotic relationship between humans and the new generation of AI. For example, Johnson et al. (Reference Johnson, Goodman, Patrinely, Stone, Zimmerman, Donald, Chang, Berkowitz, Finn and Jahangir2023) suggested that verifying the reliability of content generated by ChatGPT helps in designing models that improve the robustness of an AI system, thus increasing users' perceived credibility of AI.

In view of this, the topic of credibility in human–generative AI interaction needs to be explored further. Although reviews of credibility research in the algorithmic era have appeared in recent years (Alrubaian et al., Reference Alrubaian, Al-Qurishi, Alamri, Al-Rakhami, Hassan and Fortino2018), studies specifically addressing credibility issues from the GAI perspective remain limited. So far, some studies have focused on credibility in users' adoption and use of GAI, and others have empirically explored the trust and reliability of GAI in various contexts. Therefore, the aim of this chapter is to present a clear picture of the current state of credibility research in human–generative AI interaction by analyzing the relevant literature dispersed across various disciplines and to provide a holistic review of measurement instruments, influencing factors, challenges, emerging technologies, and optimization methods for the assessment of AIGC credibility. Finally, the chapter also proposes several directions for further investigation with respect to the limitations of AIGC credibility assessment.

4.2 The Concept of AI Credibility

4.2.1 What Is Credibility?

Credibility is a multifaceted construct that pertains to the degree to which an entity – be it information, an individual, or a system – is perceived as trustworthy and reliable in a specific context (Rieh & Danielson, Reference Rieh and Danielson2007). The foundational definition of credibility often revolves around the term “believability,” signifying the extent to which stakeholders are willing to trust and rely on a given source or system (Fogg & Tseng, Reference Fogg and Tseng1999). However, credibility encompasses a broader array of dimensions beyond mere believability. Credibility is often complex and multidimensional, encompassing a comprehensive evaluation of various characteristics or factors, such as reliability, accuracy, expertise, authority, objectivity, and appeal (Fogg & Tseng, Reference Fogg and Tseng1999; McCroskey & Young, Reference McCroskey and Young1981; Rieh, Reference Rieh2002). From the perspective of the subject being assessed, researchers have classified credibility into categories like advertisement credibility, review credibility, and media credibility (Cheung et al., Reference Cheung, Sia and Kuan2012; Cotte et al., Reference Cotte, Coulter and Moore2005). Furthermore, Flanagin and Metzger (Reference Flanagin and Metzger2007) subdivided credibility into content credibility, source credibility, and design credibility.

Credibility is a key factor for individuals, corporations, governments, and the media in maintaining a good reputation, and it also influences public trust in the broader social structure (Tseng & Fogg, Reference Tseng and Fogg1999). Whether in information dissemination, investment decisions, or policy formulation, credibility often becomes a benchmark for evaluating the success and effectiveness of these activities. As such, credibility has become increasingly important across various sectors, from news reporting (Hofeditz et al., Reference Hofeditz, Mirbabaie, Holstein and Stieglitz2021) and scientific research (Alam & Mohanty, Reference Alam and Mohanty2022) to business marketing (Khan & Mishra, Reference Khan and Mishra2024) and smart healthcare (Aliyeva & Mehdiyev, Reference Aliyeva and Mehdiyev2024; Stevens & Stetson, Reference Stevens and Stetson2023). However, in the age of AI – marked by the rapid proliferation of emerging technologies, the lack of algorithmic transparency, the risks of bias and manipulation, and the globalized, decentralized digital environment – the task of maintaining and enhancing AI credibility presents both significant opportunities and substantial challenges.

There are several similarities between the concept of credibility and the concept of human-centered AI, as both emphasize the central position of users in shaping perceived experience. Some researchers suggest that the design of human-centered AI should pay attention to the influence of AI on people and put the user experience at the center (Shneiderman, Reference Shneiderman2020; Xu, Reference Xu2019). Furthermore, traditional human–computer interaction is actively evolving toward human–generative AI interaction, and the original credibility dimensions can no longer fully capture the connotation of AI credibility. Therefore, it is necessary to revisit the conceptualization of AI credibility in the context of human–generative AI interaction. The integrated framework of credibility evaluation (Hilligoss & Rieh, Reference Hilligoss and Rieh2008), prominence-interpretation theory (Fogg, Reference Fogg2003), the MAIN model of credibility (Sundar, Reference Sundar2008), and other related theories lay a theoretical foundation for expanding the conceptual map of AIGC credibility.

4.2.2 Main Dimensions of AI Credibility

It is necessary to consider the characteristics of the AI when constructing the concept of AIGC credibility. Shin (Reference Shin2022) suggests that the credibility of AIGC should be mapped with some characteristics of AI in a broader scope. At present, researchers generally agree that human-centered AI should be Explainable AI (Capel & Brereton, Reference Capel and Brereton2023), which can be embodied in the characteristics of AI, such as fairness, accountability, transparency, and interpretability. This section elaborates and expands on AI credibility based on the primary dimensions of explainable AI. Table 4.1 summarizes the main dimensions and corresponding concepts of AI credibility.

Table 4.1 Main dimensions of AI credibility

Reliability: The ability of AI to consistently deliver accurate and stable results under various conditions – including flexibility, accessibility, and timeliness – as well as the system's robustness when faced with data changes, failures, or stress. Reliability is crucial in assessing AI credibility, as users expect the system to maintain stable performance even in complex, uncertain, or extreme situations. References: Bedué and Fritzsche (Reference Bedué and Fritzsche2022); Hayashi and Wakabayashi (Reference Hayashi and Wakabayashi2017)

Fairness: Fairness in AI credibility requires that the system not only avoids overt biases but also possesses the ability to detect and correct hidden biases. To ensure AI credibility, developers must rigorously control for bias throughout model design, data collection, training, and testing. References: Mehrabi et al. (Reference Mehrabi, Morstatter, Saxena, Lerman and Galstyan2021); Sambasivan et al. (Reference Sambasivan, Arnesen, Hutchinson, Doshi and Prabhakaran2021)

Accountability: A key component of AI credibility is ensuring clear accountability when errors or failures occur. Whether involving developers, operators, or users, the responsibility framework for AI systems must be well defined so that issues can be traced back to their source and corrective actions taken. References: Busuioc (Reference Busuioc2021); Hallowell et al. (Reference Hallowell, Badger, Sauerbrei, Nellåker and Kerasidou2022)

Transparency: Transparency in AI refers not only to the explainability and comprehensibility of the system's decision-making processes but also to the transparency of information, such as data sources and algorithm choices, and of processes, such as records of system updates or adjustments. A transparent AI system enables users to understand how data is collected, processed, and analyzed, allowing them to better grasp and trust the decision-making flow of the AI. References: Ehsan et al. (Reference Ehsan, Liao, Muller, Riedl and Weisz2021); Vössing et al. (Reference Vössing, Kühl, Lind and Satzger2022)

Security: Security and robustness ensure that AI systems do not make erroneous decisions in abnormal situations, such as when faced with malicious inputs or adversarial attacks, thereby safeguarding user trust. References: Hu et al. (Reference Hu, Kuang, Qin, Li, Zhang, Gao, Li and Li2021)

Ethics: AI decisions must not only be technically accurate but also align with social, ethical, and moral standards. By addressing issues such as privacy protection, eliminating algorithmic bias, and considering the impact on vulnerable groups, AI systems can enhance user trust. References: Reinhardt (Reference Reinhardt2023)

Intelligibility: From the user's perspective, AI outputs need to be understandable, and AI decisions must come with clear explanations accessible to practitioners without a technical background. This allows users to maintain trust in the results while using AI. References: Lim et al. (Reference Lim, Yang, Abdul and Wang2019)

Firstly, the reliability and security of AI systems are paramount, as users expect stable performance and data integrity even in complex or uncertain situations. For example, in the healthcare domain, the accuracy of AIGC will affect patients' trust (Johnson et al., Reference Johnson, Goodman, Patrinely, Stone, Zimmerman, Donald, Chang, Berkowitz, Finn and Jahangir2023). Secondly, transparency and intelligibility are key dimensions of AI credibility, helping users understand the logic and reasoning behind AI decisions and thus reducing fear or distrust of "black box" models (Shin, Reference Shin2023). Thirdly, accountability refers to the presence of clear responsibility mechanisms in AI systems, ensuring that issues can be traced, corrected, and prevented from recurring – an essential aspect of building and maintaining user trust (Hallowell et al., Reference Hallowell, Badger, Sauerbrei, Nellåker and Kerasidou2022). Lastly, fairness and ethics represent two extended dimensions of AI credibility, reflecting the importance of social values and human cultural norms in AI applications. Enhancing AI credibility requires not only technological advancements but also the establishment of strict ethical and fairness standards, ensuring that AI systems make more responsible decisions within various social contexts (Zhang & Zhang, Reference Zhang and Zhang2023).

4.3 Measures of AI Credibility in Human–Generative AI Interaction

Measuring and evaluating AI credibility is a crucial aspect of achieving trustworthy and human-centered AI systems. Through a review of the literature, we classify the measurement of AI credibility into subjective assessments from a user-centric perspective and relatively objective measurements using technical methods.

On the one hand, the user-centered subjective approach primarily measures users' perceived credibility of AI products through questionnaires tailored to specific research contexts and questions (Xiang et al., Reference Xiang, Zhou and Xie2023). For example, to evaluate students' perceived credibility of ChatGPT, Tossell et al. (Reference Tossell, Tenhundfeld, Momen, Cooley and de Visser2024) used version 2 of the updated Multi-Dimensional Measure of Trust (MDMT) questionnaire, whose dimensions include reliability, ability, morality, transparency, and kindness (Ullman & Malle, Reference Ullman and Malle2019). In addition, Tossell et al. (Reference Tossell, Tenhundfeld, Momen, Cooley and de Visser2024) used 7-point Likert scales to evaluate students' trust in ChatGPT, with measurement items adapted from surveys used in military training (Dzindolet et al., Reference Dzindolet, Peterson, Pomranky, Pierce and Beck2003) and autonomous driving research (Tenhundfeld et al., Reference Tenhundfeld, de Visser, Ries, Finomore and Tossell2020). Uzir et al. (Reference Uzir, Bukari, Al Halbusi, Lim, Wahab, Rasul, Thurasamy, Jerin, Chowdhury and Tarofder2023) used a questionnaire covering the dimensions of privacy and security to measure elderly consumers' perceived credibility of smartwatches.

In addition, some researchers assess users' perceived AI credibility along other dimensions. For example, measuring users' propensity to rely on agents in future situations is one of the earliest methods used to assess credibility (Kohn et al., Reference Kohn, De Visser, Wiese, Lee and Shaw2021; Momen et al., Reference Momen, De Visser, Wolsten, Cooley, Walliser and Tossell2023; Monfort et al., Reference Monfort, Graybeal, Harwood, McKnight and Shaw2018). Because trust arises from rational factors, from positive emotions, or from the combined effect of the two, Chen and Park (Reference Chen and Park2021) divide users' trust in intelligent personal assistants into cognitive trust (e.g., the usefulness, reliability, honesty, and integrity of AI) and emotional trust (e.g., the safety, comfort, and satisfaction associated with AI).
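To make the questionnaire-based approach concrete, the sketch below (Python) aggregates 7-point Likert responses into per-dimension perceived-credibility scores. The dimension names echo the MDMT-style dimensions mentioned above, but the item-to-dimension mapping and the response data are purely hypothetical.

```python
import numpy as np

# Hypothetical mapping of questionnaire items to MDMT-style trust dimensions.
# Real instruments define their own item sets and wording.
DIMENSIONS = {
    "reliability":  ["q1", "q2", "q3"],
    "ability":      ["q4", "q5", "q6"],
    "morality":     ["q7", "q8"],
    "transparency": ["q9", "q10"],
    "kindness":     ["q11", "q12"],
}

def credibility_profile(responses):
    """Average 7-point Likert ratings (1-7) into one score per dimension.

    `responses` is a list of dicts, one per participant, mapping item id -> rating.
    Returns a dict mapping dimension -> mean score across items and participants.
    """
    profile = {}
    for dim, items in DIMENSIONS.items():
        ratings = [r[item] for r in responses for item in items if item in r]
        profile[dim] = float(np.mean(ratings)) if ratings else float("nan")
    return profile

# Two hypothetical participants' ratings for items q1..q12.
sample = [
    {f"q{i}": s for i, s in enumerate([6, 5, 6, 4, 5, 5, 6, 6, 3, 4, 5, 5], start=1)},
    {f"q{i}": s for i, s in enumerate([5, 5, 4, 6, 6, 5, 5, 4, 4, 3, 6, 6], start=1)},
]
print(credibility_profile(sample))
```

A per-dimension profile like this, rather than a single overall score, is what allows studies to report, for example, that transparency is rated lower than reliability for the same system.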

On the other hand, relatively objective measurements using technical methods can help researchers and developers quantify and evaluate AI system credibility, thereby enhancing its reliability and safety in practical applications. For example, automated methods may assess an AI system's responsiveness and explainability (Lin et al., Reference Lin, Lee and Celik2021), test model performance on specific tasks (Huang et al., Reference Huang, Sun, Wang, Wu, Zhang, Li, Gao, Huang, Lyu and Zhang2024), and provide quantitative metrics for evaluating the robustness of deep neural networks (Ruan et al., Reference Ruan, Wu, Sun, Huang, Kroening and Kwiatkowska2019). Some researchers also use machine learning techniques to assess AI system explainability (Yang, Reference Yang2019) or employ blockchain technology to enhance data credibility (Distefano et al., Reference Distefano, Di Giacomo and Mazzara2021). Frameworks such as DeepTrust (Cheng et al., Reference Cheng, Nazarian and Bogdan2020) and credibility metrics models (Uslu et al., Reference Uslu, Kaur, Rivera, Durresi, Durresi and Babbar-Sebens2021) have been proposed to measure AI system reliability.
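By way of contrast with questionnaires, the following minimal sketch illustrates one technology-oriented style of measure: estimating a model's empirical robustness as the fraction of predictions that remain unchanged under small random input perturbations. It is a simplified stand-in for the formal robustness metrics cited above, not a reimplementation of them, and the threshold "model" is a placeholder.

```python
import numpy as np

def empirical_robustness(predict, X, epsilon=0.05, n_trials=20, seed=0):
    """Fraction of inputs whose predicted label stays stable under random
    perturbations bounded by `epsilon` in the L-infinity norm.

    `predict` maps an array of inputs to an array of labels.
    """
    rng = np.random.default_rng(seed)
    base = predict(X)
    stable = np.ones(len(X), dtype=bool)
    for _ in range(n_trials):
        noise = rng.uniform(-epsilon, epsilon, size=X.shape)
        stable &= predict(X + noise) == base
    return stable.mean()

# Toy example: a threshold "model" on 1-D inputs; points near the boundary flip.
X = np.linspace(0, 1, 200).reshape(-1, 1)
predict = lambda x: (x[:, 0] > 0.5).astype(int)
print(f"Empirical robustness at eps=0.05: {empirical_robustness(predict, X):.2f}")
```

Scores like this can be tracked across model versions as one quantitative signal of credibility, independent of any user's subjective rating.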

Overall, while numerous studies have highlighted the need to improve the credibility of AI systems, relatively few have explored the quantitative assessment of AI credibility, in particular the contextualized measurement of AIGC credibility and the refinement of credibility dimensions in human–generative AI interaction.

4.4 Influences on Credibility Assessment in Human–Generative AI Interaction

Early research on information credibility assessment identified information sources, cues, and affordances as key factors influencing users’ perceived credibility. Since then, numerous studies on the credibility of HCI have highlighted the impact of technical signifiers in the interaction environment on users’ credibility assessment (Liao & Mak, Reference Liao and Mak2019). For instance, when users search for health information on short video platforms, social media indicators positively influence their perception of credibility (Song et al., Reference Song, Zhao, Yao, Ba and Zhu2021). As HCI evolves into human–generative AI interaction, AI credibility assessment not only involves technical aspects such as system components and algorithm optimization, but also focuses on the practical performance of AI systems across diverse application scenarios and users’ trust perceptions in human–generative AI interaction. Therefore, recent research trends toward a comprehensive consideration of various factors affecting AI credibility assessment, including data, system, algorithm and user factors in addition to information factors. Specific details and examples are provided in Table 4.2.

Table 4.2 Influencing factors of AI credibility

Data and information – Data quality: data acquisition, data processing, and data storage (Hu et al., Reference Hu, Kuang, Qin, Li, Zhang, Gao, Li and Li2021; Liang et al., Reference Liang, Tadesse, Ho, Fei-Fei, Zaharia, Zhang and Zou2022; Zhang & Zhang, Reference Zhang and Zhang2023)
Data and information – Information source: news organizations/media with cognitive authority (Kim & Kim, Reference Kim and Kim2020)
Data and information – Information content: accuracy, authenticity, completeness, and timeliness (Kim et al., Reference Kim, Giroux and Lee2021; Van Bulck & Moons, Reference Van Bulck and Moons2024)
System – System interpretability: audit integrity (Raji et al., Reference Raji, Smart, White, Mitchell, Gebru, Hutchinson, Smith-Loud, Theron and Barnes2020), trust calibration (Zhang et al., Reference Zhang, Liao and Bellamy2020), agency transparency (Araujo et al., Reference Araujo, Helberger, Kruikemeier and De Vreese2020), explanatory element types (Ha & Kim, Reference Ha and Kim2024; Pareek et al., Reference Pareek, van Berkel, Velloso and Goncalves2024)
System – System attribute characteristics: system reliability (Hayashi & Wakabayashi, Reference Hayashi and Wakabayashi2017), system (service) quality (Chen et al., Reference Chen, Lu, Gong and Xiong2023), model performance (Zhang et al., Reference Zhang, Genc, Wang, Ahsen and Fan2021)
System – AI anthropomorphism: anthropomorphic features (Chen & Park, Reference Chen and Park2021), AI voice features (Kim et al., Reference Kim, Merrill, Xu and Kelly2022), AI warmth and ability (Chandra et al., Reference Chandra, Shirish and Srivastava2022)
Algorithm – Algorithm complexity: degree of algorithmic complexity (Lehmann et al., Reference Lehmann, Haubitz, Fügener and Thonemann2022)
Algorithm – Algorithm transparency: algorithmic interpretability (Chen, Reference Chen2024; Grimmelikhuijsen, Reference Grimmelikhuijsen2023; Markus et al., Reference Markus, Kors and Rijnbeek2021), algorithm reliability (Durán & Jongsma, Reference Durán and Jongsma2021)
Algorithm – Algorithm security: algorithm errors (Schmitt et al., Reference Schmitt, Wambsganss, Söllner and Janson2021)
Algorithm – Algorithm fairness: algorithm bias (Bernagozzi et al., Reference Bernagozzi, Srivastava, Rossi and Usmani2021; Winkle et al., Reference Winkle, Melsión, McMillan and Leite2021)
User – Interactive experience: perceived interactive experience (Zhuang et al., Reference Zhuang, Ma, Zhou, Li, Wang, Huang, Zhai and Ying2024)
User – Individual ability: algorithm literacy (Shin, Reference Shin2022)
User – Sociocultural context: social and cultural environment (Chien et al., Reference Chien, Lewis, Sycara, Liu and Kumru2018)

4.4.1 Data and Information-related Attributes

In terms of data factors, data quality significantly impacts the credibility of medical AI. Issues such as data errors and omissions, the lack of standardized metadata, and the prevalence of unstructured data can undermine technical reliability, negatively affecting the credibility of medical AI (Zhang & Zhang, Reference Zhang and Zhang2023). Additionally, aspects of the AI data process (e.g., data design, data archiving and data evaluation) also influence the credibility of the AI model (Liang et al., Reference Liang, Tadesse, Ho, Fei-Fei, Zaharia, Zhang and Zou2022).

As for information, the content quality is a key factor influencing users’ perception of AI credibility. For instance, users’ overall trust in an AI system largely depends on its ability to provide accurate, authentic, complete, and timely information to support their tasks (Kim et al., Reference Kim, Giroux and Lee2021). Moreover, it has been found that content generated by ChatGPT often lacks completeness, which can easily mislead users and diminish its credibility (Van Bulck & Moons, Reference Van Bulck and Moons2024).

4.4.2 System-related Attributes

At the system level, explanation directly affects the transparency of AI, which in turn correlates positively with AI credibility. For example, research has shown that providing users with text-based explanations can enhance their trust in explainable AI systems more effectively than visual explanations (Ha & Kim, Reference Ha and Kim2024). Additionally, the security and reliability of AI systems affect users' perceived credibility. For instance, the service quality of AI chatbots positively influences customer loyalty by enhancing perceived value, cognitive trust, and emotional trust (Chen et al., Reference Chen, Lu, Gong and Xiong2023).

The anthropomorphism of AI is supported by its strong comprehension and innovative capabilities (Pelau et al., Reference Pelau, Dabija and Ene2021), enabling AI systems to grasp the nuances of human–generative AI interaction. The anthropomorphic traits of AI enhance users’ trust, making AI systems with human-like expression styles more approachable and trustworthy (Chen & Park, Reference Chen and Park2021; L. Lu et al., Reference Lu, McDonald, Kelleher, Lee, Chung, Mueller, Vielledent and Yue2022; Wang & Zhao, Reference Wang and Zhao2023). For instance, AI instructors with human-like voices tend to achieve higher perceived credibility among students than those with robotic voices (Kim et al., Reference Kim, Merrill, Xu and Kelly2022). However, humans generally possess greater social appeal, competence, and credibility compared to robots (Beattie et al., Reference Beattie, Edwards, Edwards, Nah, McNealy, Kim and Joo2020; Edwards et al., Reference Edwards, Edwards and Omilion-Hodges2018; Finkel & Krämer, Reference Finkel and Krämer2022).

4.4.3 Algorithm-related Attributes

In the realm of algorithms, specific characteristics such as fairness, accountability, transparency, and explainability are closely linked to trust and performance expectations (Shin, Reference Shin2023). Algorithm transparency can significantly influence users’ trust in the information provided by the algorithm (Grimmelikhuijsen, Reference Grimmelikhuijsen2023; Yeomans et al., Reference Yeomans, Shah, Mullainathan and Kleinberg2019), as well as their confidence in algorithmic outcomes and decision-makers, ultimately impacting their interactive experiences and decision-making processes (Cadario et al., Reference Cadario, Longoni and Morewedge2021). However, when the complexity of an algorithm falls below users’ expectations, increased transparency can actually diminish perceived credibility (Lehmann et al., Reference Lehmann, Haubitz, Fügener and Thonemann2022). Additionally, algorithmic bias can undermine users’ trust in AI systems, with gender bias being a particularly prominent issue in human–generative AI interaction (Bernagozzi et al., Reference Bernagozzi, Srivastava, Rossi and Usmani2021; Winkle et al., Reference Winkle, Melsión, McMillan and Leite2021).

4.4.4 User-related Attributes

In the early years of information credibility theory, it was widely accepted that users' understanding, judgment, and cognitive processing of information cues or components significantly shape the evaluation of information credibility during interaction with computers (Fogg, Reference Fogg2003). In the context of human–generative AI interaction, the interaction experience between users and AI systems likewise influences their evaluation of AI credibility. For example, older adults have had positive experiences watching short medical videos created by large language models, which has enhanced their trust in medical care (Zhuang et al., Reference Zhuang, Ma, Zhou, Li, Wang, Huang, Zhai and Ying2024).

From the user's perspective, algorithm literacy is a key factor influencing the credibility assessment of AI; it represents an advanced stage of both information and digital literacy and manifests a deep understanding of AI (Shin, Reference Shin2022). It is indispensable for forecasting user decisions in human–generative AI interaction (Shin, Reference Shin2022). In addition, social and cultural backgrounds also influence the evaluation of AI credibility (Chien et al., Reference Chien, Lewis, Sycara, Liu and Kumru2018). This aligns with sociocultural perspectives, which suggest that people's evaluations of credibility are constrained by their particular cultural, systemic, and historical backgrounds (Mansour & Francke, Reference Mansour and Francke2017).

4.5 Challenges in Credibility Assessment of Human–Generative AI Interaction

The challenges in assessing AI credibility encompass issues related to transparency, ethics, security, privacy, and rights, as detailed in Table 4.3. AI models generally rely on complex techniques such as machine learning and deep learning, so users cannot directly understand how AI decisions are made (Hamon et al., Reference Hamon, Junklewitz, Malgieri, Hert, Beslay and Sanchez2021). For example, "black box" problems are common in AI systems in healthcare, characterized by a lack of interpretability and potential biases. This situation can clash with clinicians' and patients' expectations for a clear logical chain, thereby undermining trust in AI (Esmaeilzadeh, Reference Esmaeilzadeh2024). Additionally, as the amount of explanatory information provided by AI systems increases, especially in time-sensitive situations, managing information overload and identifying the most relevant details becomes a significant challenge (Ehsan et al., Reference Ehsan, Liao, Muller, Riedl and Weisz2021).

Table 4.3 Challenges in credibility assessment of human–generative AI interaction

Transparency issues – lack of explainability (Ehsan et al., Reference Ehsan, Liao, Muller, Riedl and Weisz2021; Esmaeilzadeh, Reference Esmaeilzadeh2024), technical black box (Schoenherr et al., Reference Schoenherr, Abbas, Michael, Rivas and Anderson2023)
Moral and ethical issues – gender prejudice (Winkle et al., Reference Winkle, Melsión, McMillan and Leite2021), moral conflict (Morley et al., Reference Morley, Machado, Burr, Cowls, Joshi, Taddeo and Floridi2020)
Security and privacy issues – algorithm deviation and error (Kaissis et al., Reference Kaissis, Makowski, Rückert and Braren2020), data abuse (Kaissis et al., Reference Kaissis, Makowski, Rückert and Braren2020), privacy violation (Mou & Meng, Reference Mou and Meng2024)
Power and responsibility issues – responsibility attribution (Leo & Huh, Reference Leo and Huh2020)
Other risk issues – misinformation dissemination (Esmaeilzadeh, Reference Esmaeilzadeh2020; Molina & Sundar, Reference Molina and Sundar2022), cognitive biases (Ehsan et al., Reference Ehsan, Liao, Muller, Riedl and Weisz2021), weakening of human autonomy (Abbass, Reference Abbass2019; Ernst, Reference Ernst2020)

Secondly, moral and ethical issues, such as gender bias (Winkle et al., Reference Winkle, Melsión, McMillan and Leite2021) and moral conflicts (Morley et al., Reference Morley, Machado, Burr, Cowls, Joshi, Taddeo and Floridi2020), must be thoroughly considered in assessing AI credibility. These issues often arise from algorithmic bias. Data security and privacy are also major challenges in AI credibility assessment. The inherent fragility of algorithms can lead to incorrect decisions when processing data, directly impacting the stability and security of AI systems (Zhang & Zhang, Reference Zhang and Zhang2023). Additionally, using extensive data sets for credibility evaluation raises substantial privacy and security concerns. If data is misused, it can severely threaten user privacy and security (Kaissis et al., Reference Kaissis, Makowski, Rückert and Braren2020). For example, users’ normative behaviors and reactions may be exploited by intelligent machines (Leong & Selinger, Reference Leong and Selinger2019) and their designers for monitoring, tracking, or fraudulent activities (Shahriar et al., Reference Shahriar, Allana, Hazratifard and Dara2023), posing a serious threat to personal privacy and potentially resulting in significant privacy violations.

Besides the above challenges, there is also an important issue of how to clarify the attribution of responsibility when AI systems fail or cause harm to users. In particular, this issue is critical and urgent when AI applications directly affect the health and safety of patients (Esmaeilzadeh, Reference Esmaeilzadeh2020), and solving this problem requires a combination of technical, legal, and ethical considerations.

It is important to recognize that misplaced or inappropriate trust in GAI can lead to a variety of potential consequences and risks, including the spread of misinformation (Molina & Sundar, Reference Molina and Sundar2022), cognitive biases (Ehsan et al., Reference Ehsan, Liao, Muller, Riedl and Weisz2021), and reduced human autonomy (Abbass, Reference Abbass2019). For instance, AIGC, while fueling efficient content creation, also risks the spread of disinformation (Shusas, Reference Shusas2024).

4.6 Ways to Enhance the Credibility Assessment in Human–Generative AI Interaction

Due to the complexity and opacity of AI systems, users often find it difficult to understand and trust their decision-making processes and outcomes. Therefore, exploring new methods and technical solutions to enhance the credibility evaluation of AI systems is crucial. Currently, a key approach is to calibrate trust in AI systems and strengthen their robustness using advanced technologies. Techniques such as machine learning (Carvalho et al., Reference Carvalho, Pereira and Cardoso2019), deep learning (Chander et al., Reference Chander, John, Warrier and Gopalakrishnan2024), federated learning (P. Chen et al., Reference Chen, Liu and Lee2022; Lo et al., Reference Lo, Liu, Lu, Wang, Xu, Paik and Zhu2022), and Shapley Additive Explanations (SHAP) (Sabharwal et al., Reference Sabharwal, Miah, Wamba and Cook2024; Trindade Neves et al., Reference Trindade Neves, Aparicio and de Castro Neto2024) are used to improve model transparency and system explanation. Toreini et al. (Reference Toreini, Aitken, Coopamootoo, Elliott, Zelaya and Van Moorsel2020) proposed four classes of technologies to enhance AI credibility: Fairness, Explainability, Auditability, and Safety (FEAS), which should be considered throughout all stages of the system life cycle.
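As one concrete illustration of explanation-oriented techniques such as SHAP, the sketch below (Python, assuming the shap and scikit-learn libraries are installed) ranks the features that drive a toy model's predictions. The dataset and model are placeholders rather than any of the cited systems, and the example shows generic post-hoc explanation rather than the FEAS framework itself.

```python
import numpy as np
import shap  # pip install shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a small model on a bundled public dataset; this stands in for any
# model whose outputs users are asked to trust.
data = load_diabetes()
X, y = data.data, data.target
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features, making the
# model's behavior inspectable after the fact.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])  # shape: (n_samples, n_features)

# Rank features by mean absolute contribution across the explained samples.
mean_abs = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(data.feature_names, mean_abs), key=lambda t: -t[1])[:5]:
    print(f"{name}: mean |SHAP| = {score:.2f}")
```

Exposing such feature attributions to users or auditors is one practical route to the transparency and intelligibility dimensions discussed in Section 4.2.2.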

The human–computer collaborative decision-making design method integrates system decision-making with human experience and cross-domain knowledge, aiming to enhance both the credibility and operational efficiency of AI systems. This approach includes measures such as optimizing configurations or designs to improve user–AI collaboration (Jain et al., Reference Jain, Garg and Khera2023), incorporating domain-specific knowledge to interpret local data errors in AI-assisted decision-making (Zhang et al., Reference Zhang, Liao and Bellamy2020) and enabling users to provide feedback to algorithms (Molina & Sundar, Reference Molina and Sundar2022). These strategies can significantly enhance AI system credibility and ensure its reliable use across various application scenarios. Researchers urge various institutions – including government bodies, accounting firms, insurance companies, non-governmental organizations, civil society organizations, professional groups, and research institutions – to collaborate in exploring new ways to improve the credibility of human-centered AI and advance interpretable AI (Arnold et al., Reference Arnold, Bellamy, Hind, Houde, Mehta, Mojsilović, Nair, Ramamurthy, Olteanu and Piorkowski2019; Shneiderman, Reference Shneiderman2020).

In addition to the above technical approaches, several theoretical frameworks have been proposed to support AI credibility assessment from multiple dimensions. For example, the algorithmic audit framework can be applied across the whole life cycle of AI system assessment (Raji et al., Reference Raji, Smart, White, Mitchell, Gebru, Hutchinson, Smith-Loud, Theron and Barnes2020). The AI Public Trust Model (Knowles & Richards, Reference Knowles and Richards2021) and the AI Trust, Risk and Security Management (AI TRiSM) framework (Habbal et al., Reference Habbal, Ali and Abuzaraida2024) aim to improve the trustworthiness and reliability of AI. The Situation Awareness Framework for Explainable AI (SAFE-AI) (Sanneman & Shah, Reference Sanneman and Shah2022), confidence measure frameworks for explainability (van der Waa et al., Reference van der Waa, Schoonderwoerd, van Diggelen and Neerincx2020), and multidimensional interpretative matrices (Hamon et al., Reference Hamon, Junklewitz, Malgieri, Hert, Beslay and Sanchez2021) can be used to assess the explainable behavior of AI systems.

Several auxiliary evaluation taxonomies have also been proposed to address the ethical problems raised by AI credibility, for example, a taxonomy for evaluating explainability approaches (Sokol & Flach, Reference Sokol and Flach2020), a taxonomy of "dishonest anthropomorphism" in AI robots (Leong & Selinger, Reference Leong and Selinger2019), and a visual evaluation taxonomy of gender bias in AI systems (Bernagozzi et al., Reference Bernagozzi, Srivastava, Rossi and Usmani2021). These auxiliary evaluations can alleviate, to varying degrees, the diverse socio-cultural and technological ethical dilemmas raised by human–generative AI interaction and help users make better use of GAI.

4.7 Domains of Credibility Assessment in Human–Generative AI

With the rapid advancement and widespread application of AI technology, credibility issues in human–generative AI interaction have become increasingly significant across various industries, including healthcare, finance, services, and education. Specific application scenarios such as smart healthcare, autonomous driving, investment forecasting, and intelligent customer service highlight these concerns, as detailed in Table 4.4. This underscores the need for heightened attention to AI credibility within both industry and academia. Addressing these issues is crucial for fostering effective interactions between users and AI technology and advancing the development of trustworthy AI.

Table 4.4 Domains of credibility assessment for human–generative AI

Health – smart medical care (Aliyeva & Mehdiyev, Reference Aliyeva and Mehdiyev2024; Hallowell et al., Reference Hallowell, Badger, Sauerbrei, Nellåker and Kerasidou2022; Stevens & Stetson, Reference Stevens and Stetson2023; Yokoi et al., Reference Yokoi, Eguchi, Fujita and Nakayachi2021), health information (Van Bulck & Moons, Reference Van Bulck and Moons2024; Zalzal et al., Reference Zalzal, Abraham, Cheng and Shah2024), AI health product adoption (Sebastian et al., Reference Sebastian, George and Jackson2023)
Financial industry – investment forecasting (Sabharwal et al., Reference Sabharwal, Miah, Wamba and Cook2024), credit risk (Candello et al., Reference Candello, Soella, Sanctos, Grave and De Brito Filho2023)
Communication – intelligent customer service, intelligent voice (Shin, Reference Shin2022), interpersonal communication (Hohenstein & Jung, Reference Hohenstein and Jung2020)
Media communication – automated journalism (Hofeditz et al., Reference Hofeditz, Mirbabaie, Holstein and Stieglitz2021; Kim & Kim, Reference Kim and Kim2020; Z. Lu et al., Reference Lu, McDonald, Kelleher, Lee, Chung, Mueller, Vielledent and Yue2022; Tandoc Jr et al., Reference Tandoc, Yao and Wu2020)
Transportation – self-driving (Lee & Kolodge, Reference Lee and Kolodge2020; Stettinger et al., Reference Stettinger, Weissensteiner and Khastgir2024)
Education – online learning and teaching (Alam & Mohanty, Reference Alam and Mohanty2022; M. Chen et al., Reference Chen, Liu and Lee2022; Tossell et al., Reference Tossell, Tenhundfeld, Momen, Cooley and de Visser2024; Vincent-Lancrin & Van der Vlies, Reference Vincent-Lancrin and Van der Vlies2020), subject education (Cukurova et al., Reference Cukurova, Luckin and Kent2020)
Philosophy – ethics (Durán & Jongsma, Reference Durán and Jongsma2021; Lee, Reference Lee2022), morality (Giroux et al., Reference Giroux, Kim, Lee and Park2022)
Law – judicial assistance (Hayashi & Wakabayashi, Reference Hayashi and Wakabayashi2017), legal rules (Shank, Reference Shank2021)
Marketing – consumer experience (Alboqami, Reference Alboqami2023; Khan & Mishra, Reference Khan and Mishra2024), consumers' purchase intentions (Uzir et al., Reference Uzir, Bukari, Al Halbusi, Lim, Wahab, Rasul, Thurasamy, Jerin, Chowdhury and Tarofder2023), consumers' willingness to use AI (Yue & Li, Reference Yue and Li2023)
Others – online translation (Bernagozzi et al., Reference Bernagozzi, Srivastava, Rossi and Usmani2021)

Research on AI credibility assessment in the healthcare sector primarily focuses on several key areas. Firstly, as AI applications in disease prediction, diagnosis, treatment, and health management become increasingly prevalent (Holzinger et al., Reference Holzinger, Langs, Denk, Zatloukal and Müller2019; Zahlan et al., Reference Zahlan, Ranjan and Hayes2023), ensuring the credibility of AI outcomes is crucial. Trust in medical AI is considered foundational for the adoption of smart healthcare. Studies have shown that clinicians’ acceptance of AI is influenced by AI credibility (Stevens & Stetson, Reference Stevens and Stetson2023), and there are also differences in how patients perceive trust in both their doctors and AI medical systems (Yokoi et al., Reference Yokoi, Eguchi, Fujita and Nakayachi2021). Secondly, recent research has concentrated on the impact of transparency and explainability of medical models on AI credibility (Albahri et al., Reference Albahri, Duhaim, Fadhel, Alnoor, Baqer, Alzubaidi, Albahri, Alamoodi, Bai and Salhi2023), particularly addressing issues such as the “black box” nature of algorithms that can lead to distrust or even aversion among patients (Zhang & Zhang, Reference Zhang and Zhang2023). Finally, AI systems and tools such as ChatGPT (Van Bulck & Moons, Reference Van Bulck and Moons2024), AI chatbots (Weeks et al., Reference Weeks, Sangha, Cooper, Sedoc, White, Gretz, Toledo, Lahav, Hartner and Martin2023), and AI medical devices (Fehr et al., Reference Fehr, Jaramillo-Gutierrez, Oala, Gröschel, Bierwirth, Balachandran, Werneck-Leite and Lippert2022) are central to assessing AI credibility in the healthcare field. Future research needs to focus on enhancing credibility through human–AI collaboration, addressing privacy, ethical, and responsibility issues in medical practice, and improving AI’s decision-making capabilities.

Research on the AI credibility in education primarily focuses on students’ perceived trust in AI and the exploration of human–AI collaboration in online learning models. Students’ trust in AI teaching tools may affect the effectiveness of online education, and such studies are usually analyzed using user experiments. For instance, the perceived credibility of AI instructors is influenced by AI voice features and their social presence (Kim et al., Reference Kim, Merrill, Xu and Kelly2022). Additionally, personal perceptions and communication styles influence how students perceive the credibility of AI graders in the classroom (Abendschein et al., Reference Abendschein, Lin, Edwards, Edwards and Rijhwani2024). Researchers are also exploring how to foster a collaborative relationship between AI systems and human educators, rather than relying solely on AI or using it as a supplementary tool, to enhance teaching outcomes (M. Chen et al., Reference Chen, Liu and Lee2022; Tossell et al., Reference Tossell, Tenhundfeld, Momen, Cooley and de Visser2024). Meanwhile, large language models like ChatGPT present significant risks for higher education, including the spread of misinformation, a potential decline in students’ critical thinking abilities, and a reduction in the credibility of educational research evidence (M. Chen et al., Reference Chen, Liu and Lee2022; Cukurova et al., Reference Cukurova, Luckin and Kent2020).

The evaluation of AI credibility in the marketing field is predominantly based on empirical research, supplemented by qualitative methods such as interviews. The primary focus of these studies is the impact of credibility on consumers’ AI experience and their willingness to purchase AI products. The perceived quality of user experience is a key factor influencing the credibility of AI systems. When consumers use AI products or platforms, their trust in the AI is shaped by the system’s interaction experience, the accuracy of its recommendations, the credibility of its sources, and its anthropomorphic characteristics (Alboqami, Reference Alboqami2023; Khan & Mishra, Reference Khan and Mishra2024; Kim et al., Reference Kim, Giroux and Lee2021). Moreover, consumers’ perceived credibility of AI has a significant impact on both their intention to use AI and their actual purchasing behavior (Uzir et al., Reference Uzir, Bukari, Al Halbusi, Lim, Wahab, Rasul, Thurasamy, Jerin, Chowdhury and Tarofder2023). Traditional models like the Technology Acceptance Model (TAM) and the Stimulus-Organism-Response (S-O-R) theory provide a theoretical foundation for AI credibility research, though there is a need to extend these theories in the context of human–generative AI interaction (Cheng et al., Reference Cheng, Zhang, Cohen and Mou2022; Wang et al., Reference Wang, Ahmad, Ayassrah, Awwad, Irshad, Ali, Al-Razgan, Khan and Han2023).

In the financial sector, AI credibility assessment has garnered significant attention from scholars, particularly in areas such as market volatility, credit risk evaluation, and fraud detection. Recent research has focused on developing more interpretable models, aiming to enable financial professionals to better understand and validate the reasoning behind AI-driven decisions, thereby enhancing transparency and trust in decision-making processes (Edunjobi & Odejide, Reference Edunjobi and Odejide2024; Sabharwal et al., Reference Sabharwal, Miah, Wamba and Cook2024).

4.8 Future Research Agenda

The issue of credibility as a cross-cutting research area has been the subject of extensive and sustained attention. In the new context of human–generative AI interaction, credibility research will continue to derive new propositions with the development of technology, changes in scenarios, updating of measurement approaches, and adaptive use of theories.

4.8.1 Reconceptualizing the AI Credibility

Compared with earlier research on website credibility and on social media credibility in the Web 2.0 era, the object of AI credibility research has changed considerably, and the emergence of technologies rich in intelligent features may reshape the concept of credibility itself. For example, the perception and evaluation of AIGC credibility differ substantially from earlier credibility measurements of user-generated content (UGC), and the production and dissemination of information content are no longer comparable in speed, scale, or degree of influence. In addition, the digital artifacts of the GAI era and the embodiment of intelligent agents need to be incorporated into the conceptual core of AI credibility. Further, traditional credibility research grounded in individuals urgently needs to move toward a collective perspective: with the development of crowdsourcing, citizen science, and crowd science, the concept of AI credibility needs to account for collective characteristics in order to better construct measurements of credibility in the interactions of different groups of people with GAI.

4.8.2 Examining Technological Advancement in GAI Credibility

Advances in algorithms have created a complex digital environment in which credibility assessment has become even more difficult. People no longer rely only on information cues (e.g., author, credentials, news source) to make credibility judgments. Instead, they make a holistic assessment of the platform, the source, the content, and even the judgments of other users. In this view, how to use new algorithm-driven technologies to improve human capabilities such as decision-making, problem-solving, situational learning, and work performance will be an important future research topic. With the development of GAI, misinformation and disinformation (e.g., fake news, fake videos, fake pictures) can be intentionally created and quickly spread by various social bots. How to combat this dark side of AIGC will be an important topic for future research. While the dark side creates critical problems, the bright side of algorithmic affordances creates promising opportunities for credibility research: GAI could be a powerful tool for filtering misinformation, combating fake news, and supporting laypeople's credibility judgments. When deep learning and other computational methods are used to analyze large-scale data and to understand patterns of credibility judgment, it will be important to incorporate earlier credibility research in which the multiple dimensions of individuals' credibility assessments are characterized and identified.
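As an illustration of how such algorithm-assisted filtering might look in practice, the sketch below uses a zero-shot text classifier from the Hugging Face transformers library to triage claims for human review. The label set, confidence threshold, and example claims are hypothetical, and the output should be read as a prioritization aid for human judgment rather than an authoritative credibility verdict.

```python
from transformers import pipeline  # pip install transformers

# Zero-shot classifier as a lightweight stand-in for a GAI-based credibility filter.
# The default model and the label set below are illustrative assumptions.
classifier = pipeline("zero-shot-classification")

labels = ["well-supported claim", "unverified claim", "likely misinformation"]

claims = [
    "Drinking two liters of water a day cures diabetes.",
    "Regular physical activity is associated with lower cardiovascular risk.",
]

for claim in claims:
    result = classifier(claim, candidate_labels=labels)
    top_label, top_score = result["labels"][0], result["scores"][0]
    # Flag low-confidence or suspicious claims for human review rather than
    # treating the model output as a final credibility verdict.
    flag = "REVIEW" if top_label != "well-supported claim" or top_score < 0.6 else "PASS"
    print(f"[{flag}] {claim!r} -> {top_label} ({top_score:.2f})")
```

Keeping a human reviewer in the loop for anything flagged here reflects the calibrated-trust stance advocated throughout this chapter.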

4.8.3 Evolution of Credibility Measures in Human–Generative AI Interaction

We found that a variety of methodological approaches have been taken to investigate credibility issues. Traditional credibility research methods, such as interviews, focus groups, case studies, ethnography, grounded theory, and content analysis, as well as quantitative methods, such as surveys, experiments, network analyses, sentiment analyses, and data-mining techniques, are often used in combination to assess credibility. Recently, various algorithmic techniques have been developed to detect false or inaccurate information. We call for more mixed-methods analyses in future AI credibility studies, especially analyses that combine the characteristics of the algorithms themselves with the characteristics of the people interacting with them and that use multi-source data to measure AI credibility. Furthermore, credibility researchers could examine AI credibility judgments from neuro-information science perspectives, using EEG/fMRI techniques to conduct in-depth studies of the interaction effects between information cues and judgments.

4.8.4 Building a Human-centered Theoretical Lens of AI Credibility

The development of technology and the richness of research objects place new demands on extending credibility theory. Some traditional dimensions of the credibility concept, such as trustworthiness, expertise, authority, and objectivity, may no longer be sufficient in the context of GAI. The anonymity and "authorless" character of the next-generation Internet increases as machine learning and GAI take on the roles of content creators and gatekeepers of information dissemination. Therefore, future research needs to enrich and deepen the theoretical foundation of credibility by building and testing new dimensions of AIGC credibility, drawing extensively on relevant theories from different disciplines. Some concepts of AI credibility that are considered core dimensions of human judgment, such as fairness, openness, inclusiveness, and diversity, can be integrated into the development of machine learning and AI algorithms. Given that people will increasingly conduct holistic credibility assessments in the context of GAI, humanistic elements will be an important consideration for future AI credibility constructs.

In addition, the ethical and moral issues surrounding the assessment of AI credibility are complex and multifaceted, involving data privacy, attribution of responsibility, transparency and interpretability during system design, testing, and feedback. Current research lacks a clear mechanism for attributing responsibility and does not adequately address the details of user consent in the collection and use of feedback data. Future research should take a humanistic approach by establishing a clear regulatory framework for AI credibility assessment, strengthening accountability mechanisms, and ensuring rigorous ethical scrutiny of user feedback collected through surveys and other approaches.

4.9 Conclusion

Credibility has always been a central topic of concern for information-related fields, and the intelligent era has brought new opportunities and challenges to the assessment of credibility. With the dual empowerment of digital and intelligent technologies, future credibility assessment of AI should focus on diverse human–generative AI interaction scenarios (Appelganc et al., Reference Appelganc, Rieger, Roesler and Manzey2022), with the goal of developing trustworthy AI (Peckham, Reference Peckham2024). This undoubtedly places higher demands on the theoretical and methodological tools of credibility assessment. Today, the credibility of GAI faces a series of ethical, information security, and data governance challenges. This review outlines the conceptual connotation of AI credibility and analyzes the main dimensions of GAI credibility. In terms of research content, it reviews the main measures, influencing factors, challenges, and emerging approaches for AI credibility assessment. We encourage researchers to strengthen interdisciplinary dialogue, exchange, and cooperation; further enrich the theoretical lens and innovate assessment methods; expand the application scenarios of GAI credibility assessment; and pay attention to the role of the human–generative AI interaction experience in credibility assessment.

References

Abbass, H. A. (2019). Social Integration of Artificial Intelligence: Functions, Automation Allocation Logic and Human–Autonomy Trust. Cognitive Computation, 11(2), 159–171.
Abendschein, B., Lin, X., Edwards, C., Edwards, A., & Rijhwani, V. (2024). Credibility and Altered Communication Styles of AI Graders in the Classroom. Journal of Computer Assisted Learning, 40(4), 1766–1776.
Alam, A., & Mohanty, A. (2022). Facial Analytics or Virtual Avatars: Competencies and Design Considerations for Student–Teacher Interaction in AI-powered Online Education for Effective Classroom Engagement. International Conference on Communication, Networks and Computing (pp. 252–265). Springer.
Albahri, A. S., Duhaim, A. M., Fadhel, M. A., Alnoor, A., Baqer, N. S., Alzubaidi, L., Albahri, O. S., Alamoodi, A. H., Bai, J., & Salhi, A. (2023). A Systematic Review of Trustworthy and Explainable Artificial Intelligence in Healthcare: Assessment of Quality, Bias Risk, and Data Fusion. Information Fusion, 96, 156–191.
Alboqami, H. (2023). Trust Me, I’m an Influencer! Causal Recipes for Customer Trust in Artificial Intelligence Influencers in the Retail Industry. Journal of Retailing and Consumer Services, 72, 103242.
Aliyeva, K., & Mehdiyev, N. (2024). Uncertainty-Aware Multi-criteria Decision Analysis for Evaluation of Explainable Artificial Intelligence Methods: A Use Case from the Healthcare Domain. Information Sciences, 657, 119987.
Alrubaian, M., Al-Qurishi, M., Alamri, A., Al-Rakhami, M., Hassan, M. M., & Fortino, G. (2018). Credibility in Online Social Networks: A Survey. IEEE Access, 7, 2828–2855.
Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S., Bennett, P. N., & Inkpen, K. (2019). Guidelines for Human–AI Interaction. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–13). ACM.
Appelganc, K., Rieger, T., Roesler, E., & Manzey, D. (2022). How Much Reliability Is Enough? A Context-specific View on Human Interaction with (Artificial) Agents from Different Perspectives. Journal of Cognitive Engineering and Decision Making, 16, 207–221.
Araujo, T., Helberger, N., Kruikemeier, S., & De Vreese, C. H. (2020). In AI We Trust? Perceptions about Automated Decision-making by Artificial Intelligence. AI & Society, 35(3), 611–623.
Arnold, M., Bellamy, R. K., Hind, M., Houde, S., Mehta, S., Mojsilović, A., Nair, R., Ramamurthy, K. N., Olteanu, A., & Piorkowski, D. (2019). FactSheets: Increasing Trust in AI Services through Supplier’s Declarations of Conformity. IBM Journal of Research and Development, 63(4/5), 6:1–6:13.
Beattie, A., Edwards, A. P., & Edwards, C. (2020). A Bot and a Smile: Interpersonal Impressions of Chatbots and Humans Using Emoji in Computer-mediated Communication. In Nah, S., McNealy, J. E., Kim, J. H., & Joo, J. (eds.), Communicating Artificial Intelligence (AI) (pp. 41–59). Routledge.
Bedué, P., & Fritzsche, A. (2022). Can We Trust AI? An Empirical Investigation of Trust Requirements and Guide to Successful AI Adoption. Journal of Enterprise Information Management, 35(2), 530–549.
Bernagozzi, M., Srivastava, B., Rossi, F., & Usmani, S. (2021). Gender Bias in Online Language Translators: Visualization, Human Perception, and Bias/Accuracy Tradeoffs. IEEE Internet Computing, 25(5), 53–63.
Busuioc, M. (2021). Accountable Artificial Intelligence: Holding Algorithms to Account. Public Administration Review, 81(5), 825–836.
Cadario, R., Longoni, C., & Morewedge, C. K. (2021). Understanding, Explaining, and Utilizing Medical Artificial Intelligence. Nature Human Behaviour, 5(12), 1636–1642.
Candello, H., Soella, G. M., Sanctos, C. S., Grave, M. C., & De Brito Filho, A. A. (2023). “This Means Nothing to Me”: Building Credibility in Conversational Systems. Proceedings of the 5th International Conference on Conversational User Interfaces (pp. 1–6). ACM.
Capel, T., & Brereton, M. (2023). What is Human-centered about Human-centered AI? A Map of the Research Landscape. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1–23). ACM.
Carvalho, D. V., Pereira, E. M., & Cardoso, J. S. (2019). Machine Learning Interpretability: A Survey on Methods and Metrics. Electronics, 8(8), 832.
Castelvecchi, D. (2016). Can We Open the Black Box of AI? Nature News, 538(7623), 20.
Chander, B., John, C., Warrier, L., & Gopalakrishnan, K. (2024). Toward Trustworthy Artificial Intelligence (TAI) in the Context of Explainability and Robustness. ACM Computing Surveys, 57(6), 1–49.
Chandra, S., Shirish, A., & Srivastava, S. C. (2022). To Be or Not to Be … Human? Theorizing the Role of Human-like Competencies in Conversational Artificial Intelligence Agents. Journal of Management Information Systems, 39(4), 969–1005.
Chen, C. (2024). How Consumers Respond to Service Failures Caused by Algorithmic Mistakes: The Role of Algorithmic Interpretability. Journal of Business Research, 176, 114610.
Chen, M., Liu, F., & Lee, Y.-H. (2022). My Tutor Is an AI: The Effects of Involvement and Tutor Type on Perceived Quality, Perceived Credibility, and Use Intention. International Conference on Human–Computer Interaction (pp. 232–244). Springer.
Chen, P., Du, X., Lu, Z., Wu, J., & Hung, P. C. (2022). EVFL: An Explainable Vertical Federated Learning for Data-oriented Artificial Intelligence Systems. Journal of Systems Architecture, 126, 102474.
Chen, Q., Lu, Y., Gong, Y., & Xiong, J. (2023). Can AI Chatbots Help Retain Customers? Impact of AI Service Quality on Customer Loyalty. Internet Research, 33(6), 2205–2243.
Chen, Q. Q., & Park, H. J. (2021). How Anthropomorphism Affects Trust in Intelligent Personal Assistants. Industrial Management & Data Systems, 121(12), 2722–2737.
Cheng, M., Nazarian, S., & Bogdan, P. (2020). There Is Hope after All: Quantifying Opinion and Trustworthiness in Neural Networks. Frontiers in Artificial Intelligence, 3, 54.
Cheng, X., Zhang, X., Cohen, J., & Mou, J. (2022). Human vs. AI: Understanding the Impact of Anthropomorphism on Consumer Response to Chatbots from the Perspective of Trust and Relationship Norms. Information Processing & Management, 59(3), 102940.
Cheung, C. M.-Y., Sia, C.-L., & Kuan, K. K. (2012). Is this Review Believable? A Study of Factors Affecting the Credibility of Online Consumer Reviews from an ELM Perspective. Journal of the Association for Information Systems, 13(8), 2.
Chien, S.-Y., Lewis, M., Sycara, K., Liu, J.-S., & Kumru, A. (2018). The Effect of Culture on Trust in Automation: Reliability and Workload. ACM Transactions on Interactive Intelligent Systems (TiiS), 8(4), 1–31.
Choi, W., & Stvilia, B. (2015). Web Credibility Assessment: Conceptualization, Operationalization, Variability, and Models. Journal of the Association for Information Science and Technology, 66(12), 2399–2414.
Cotte, J., Coulter, R. A., & Moore, M. (2005). Enhancing or Disrupting Guilt: The Role of Ad Credibility and Perceived Manipulative Intent. Journal of Business Research, 58(3), 361–368.
Cukurova, M., Luckin, R., & Kent, C. (2020). Impact of an Artificial Intelligence Research Frame on the Perceived Credibility of Educational Research Evidence. International Journal of Artificial Intelligence in Education, 30(2), 205–235.
De Freitas, J., Agarwal, S., Schmitt, B., & Haslam, N. (2023). Psychological Factors underlying Attitudes toward AI Tools. Nature Human Behaviour, 7(11), 1845–1854.
Distefano, S., Di Giacomo, A., & Mazzara, M. (2021). Trustworthiness for Transportation Ecosystems: The Blockchain Vehicle Information System. IEEE Transactions on Intelligent Transportation Systems, 22(4), 2013–2022.
Durán, J. M., & Jongsma, K. R. (2021). Who Is Afraid of Black Box Algorithms? On the Epistemological and Ethical Basis of Trust in Medical AI. Journal of Medical Ethics, 47(5), 329–335.
Dzindolet, M. T., Peterson, S. A., Pomranky, R. A., Pierce, L. G., & Beck, H. P. (2003). The Role of Trust in Automation Reliance. International Journal of Human–Computer Studies, 58(6), 697–718.
Edunjobi, T. E., & Odejide, O. A. (2024). Theoretical Frameworks in AI for Credit Risk Assessment: Towards Banking Efficiency and Accuracy. International Journal of Scientific Research Updates, 7(1), 92–102.
Edwards, C., Edwards, A., & Omilion-Hodges, L. (2018). Receiving Medical Treatment Plans from a Robot: Evaluations of Presence, Credibility, and Attraction. Companion of the 2018 ACM/IEEE International Conference on Human–Robot Interaction (pp. 101–102). ACM.
Ehsan, U., Liao, Q. V., Muller, M., Riedl, M. O., & Weisz, J. D. (2021). Expanding Explainability: Towards Social Transparency in AI Systems. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 101–102). ACM.
Ernst, C. (2020). Artificial Intelligence and Autonomy: Self-determination in the Age of Automated Systems. Regulating Artificial Intelligence, 53–73.
Esmaeilzadeh, P. (2020). Use of AI-based Tools for Healthcare Purposes: A Survey Study from Consumers’ Perspectives. BMC Medical Informatics and Decision Making, 20, 1–19.
Esmaeilzadeh, P. (2024). Challenges and Strategies for Wide-scale Artificial Intelligence (AI) Deployment in Healthcare Practices: A Perspective for Healthcare Organizations. Artificial Intelligence in Medicine, 151, 102861.
Fehr, J., Jaramillo-Gutierrez, G., Oala, L., Gröschel, M. I., Bierwirth, M., Balachandran, P., Werneck-Leite, A., & Lippert, C. (2022). Piloting a Survey-based Assessment of Transparency and Trustworthiness with Three Medical AI Tools. Healthcare, 10(10), 1923.
Finkel, M., & Krämer, N. C. (2022). Humanoid Robots – Artificial. Human-like. Credible? Empirical Comparisons of Source Credibility Attributions between Humans, Humanoid Robots, and Non-human-like Devices. International Journal of Social Robotics, 14(6), 1397–1411.
Flanagin, A. J., & Metzger, M. J. (2007). The Role of Site Features, User Attributes, and Information Verification Behaviors on the Perceived Credibility of Web-based Information. New Media & Society, 9(2), 319–342.
Flanagin, A. J., & Metzger, M. J. (2017). Digital Media and Perceptions of Source Credibility in Political Communication. In Kenski, K. & Jamieson, K. H. (eds.), The Oxford Handbook of Political Communication (pp. 417–436). Oxford University Press.
Fogg, B. J. (2003). Prominence–Interpretation Theory: Explaining How People Assess Credibility Online. CHI’03 Extended Abstracts on Human Factors in Computing Systems (pp. 722–723). ACM.
Fogg, B. J., & Tseng, H. (1999). The Elements of Computer Credibility. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 80–87). ACM.
Giroux, M., Kim, J., Lee, J. C., & Park, J. (2022). Artificial Intelligence and Declined Guilt: Retailing Morality Comparison between Human and AI. Journal of Business Ethics, 178(4), 1027–1041.
Grimmelikhuijsen, S. (2023). Explaining Why the Computer Says No: Algorithmic Transparency Affects the Perceived Trustworthiness of Automated Decision-making. Public Administration Review, 83(2), 241–262.
Ha, T., & Kim, S. (2024). Improving Trust in AI with Mitigating Confirmation Bias: Effects of Explanation Type and Debiasing Strategy for Decision-Making with Explainable AI. International Journal of Human–Computer Interaction, 40(24), 8562–8573.
Habbal, A., Ali, M. K., & Abuzaraida, M. A. (2024). Artificial Intelligence Trust, Risk and Security Management (AI TRiSM): Frameworks, Applications, Challenges and Future Research Directions. Expert Systems with Applications, 240, 122442.
Hallowell, N., Badger, S., Sauerbrei, A., Nellåker, C., & Kerasidou, A. (2022). “I Don’t Think People Are Ready to Trust These Algorithms at Face Value”: Trust and the Use of Machine Learning Algorithms in the Diagnosis of Rare Disease. BMC Medical Ethics, 23(1), 1–12.
Hamon, R., Junklewitz, H., Malgieri, G., Hert, P. D., Beslay, L., & Sanchez, I. (2021). Impossible Explanations? Beyond Explainable AI in the GDPR from a COVID-19 Use Case Scenario. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 549–559). ACM.
Hayashi, Y., & Wakabayashi, K. (2017). Can AI Become Reliable Source to Support Human Decision Making in a Court Scene? Companion of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (pp. 195–198). ACM.
Hilligoss, B., & Rieh, S. Y. (2008). Developing a Unifying Framework of Credibility Assessment: Construct, Heuristics, and Interaction in Context. Information Processing & Management, 44(4), 1467–1484.
Hofeditz, L., Mirbabaie, M., Holstein, J., & Stieglitz, S. (2021). Do You Trust an AI-Journalist? A Credibility Analysis of News Content with AI-Authorship. ECIS.
Hohenstein, J., & Jung, M. (2020). AI as a Moral Crumple Zone: The Effects of AI-mediated Communication on Attribution and Trust. Computers in Human Behavior, 106, 106190.
Holzinger, A., Langs, G., Denk, H., Zatloukal, K., & Müller, H. (2019). Causability and Explainability of Artificial Intelligence in Medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 9(4), e1312.
Hu, Y., Kuang, W., Qin, Z., Li, K., Zhang, J., Gao, Y., Li, W., & Li, K. (2021). Artificial Intelligence Security: Threats and Countermeasures. ACM Computing Surveys (CSUR), 55(1), 1–36.
Huang, Y., Sun, L., Wang, H., Wu, S., Zhang, Q., Li, Y., Gao, C., Huang, Y., Lyu, W., & Zhang, Y. (2024). Position: TrustLLM: Trustworthiness in Large Language Models. International Conference on Machine Learning (pp. 20166–20270). PMLR.
Huschens, M., Briesch, M., Sobania, D., & Rothlauf, F. (2023). Do You Trust ChatGPT? Perceived Credibility of Human and AI-Generated Content. arXiv preprint arXiv:2309.02524.
Jain, R., Garg, N., & Khera, S. N. (2023). Effective Human–AI Work Design for Collaborative Decision-making. Kybernetes, 52(11), 5017–5040.
Jeon, Y., Kim, J., Park, S., Ko, Y., Ryu, S., Kim, S.-W., & Han, K. (2024). HearHere: Mitigating Echo Chambers in News Consumption through an AI-based Web System. Proceedings of the ACM on Human–Computer Interaction, 8(CSCW1), 1–34.
Johnson, D., Goodman, R., Patrinely, J., Stone, C., Zimmerman, E., Donald, R., Chang, S., Berkowitz, S., Finn, A., & Jahangir, E. (2023). Assessing the Accuracy and Reliability of AI-generated Medical Responses: An Evaluation of the Chat-GPT Model. Research Square.
Kaissis, G. A., Makowski, M. R., Rückert, D., & Braren, R. F. (2020). Secure, Privacy-Preserving and Federated Machine Learning in Medical Imaging. Nature Machine Intelligence, 2(6), 305–311.
Khan, A. W., & Mishra, A. (2024). AI Credibility and Consumer–AI Experiences: A Conceptual Framework. Journal of Service Theory and Practice, 34(1), 66–97.
Kim, J., Giroux, M., & Lee, J. C. (2021). When Do You Trust AI? The Effect of Number Presentation Detail on Consumer Trust and Acceptance of AI Recommendations. Psychology & Marketing, 38(7), 1140–1155.
Kim, J., Merrill, K. Jr, Xu, K., & Kelly, S. (2022). Perceived Credibility of an AI Instructor in Online Education: The Role of Social Presence and Voice Features. Computers in Human Behavior, 136, 107383.
Kim, S., & Kim, B. (2020). A Decision-making Model for Adopting AI-generated News Articles: Preliminary Results. Sustainability, 12(18), 7418.
Knowles, B., & Richards, J. T. (2021). The Sanction of Authority: Promoting Public Trust in AI. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 262–271). ACM.
Kohn, S. C., De Visser, E. J., Wiese, E., Lee, Y.-C., & Shaw, T. H. (2021). Measurement of Trust in Automation: A Narrative Review and Reference Guide. Frontiers in Psychology, 12, 604977.
Lee, J. D., & Kolodge, K. (2020). Exploring Trust in Self-driving Vehicles through Text Analysis. Human Factors, 62(2), 260–277.
Lee, S. S. (2022). Philosophical Evaluation of the Conceptualisation of Trust in the NHS’ Code of Conduct for Artificial Intelligence-driven Technology. Journal of Medical Ethics, 48(4), 272–277.
Lehmann, C. A., Haubitz, C. B., Fügener, A., & Thonemann, U. W. (2022). The Risk of Algorithm Transparency: How Algorithm Complexity Drives the Effects on the Use of Advice. Production and Operations Management, 31(9), 3419–3434.
Leo, X., & Huh, Y. E. (2020). Who Gets the Blame for Service Failures? Attribution of Responsibility toward Robot versus Human Service Providers and Service Firms. Computers in Human Behavior, 113, 106520.
Leong, B., & Selinger, E. (2019). Robot Eyes Wide Shut: Understanding Dishonest Anthropomorphism. Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 299–308). ACM.
Liang, W., Tadesse, G. A., Ho, D., Fei-Fei, L., Zaharia, M., Zhang, C., & Zou, J. (2022). Advances, Challenges and Opportunities in Creating Data for Trustworthy AI. Nature Machine Intelligence, 4(8), 669–677.
Liao, M.-Q., & Mak, A. K. (2019). “Comments are Disabled for This Video”: A Technological Affordances Approach to Understanding Source Credibility Assessment of CSR Information on YouTube. Public Relations Review, 45(5), 101840.
Lim, B. Y., Yang, Q., Abdul, A. M., & Wang, D. (2019). Why These Explanations? Selecting Intelligibility Types for Explanation Goals. IUI Workshops.
Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., & Pechenkina, E. (2023). Generative AI and the Future of Education: Ragnarök or Reformation? A Paradoxical Perspective from Management Educators. The International Journal of Management Education, 21(2), 100790.
Lin, Y.-S., Lee, W.-C., & Celik, Z. B. (2021). What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors. Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (pp. 1027–1035). ACM.
Lo, S. K., Liu, Y., Lu, Q., Wang, C., Xu, X., Paik, H.-Y., & Zhu, L. (2022). Toward Trustworthy AI: Blockchain-based Architecture Design for Accountability and Fairness of Federated Learning Systems. IEEE Internet of Things Journal, 10(4), 3276–3284.
Longoni, C., Fradkin, A., Cian, L., & Pennycook, G. (2022). News from Generative Artificial Intelligence Is Believed Less. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 97–106). ACM.
Lu, L., McDonald, C., Kelleher, T., Lee, S., Chung, Y. J., Mueller, S., Vielledent, M., & Yue, C. A. (2022). Measuring Consumer-perceived Humanness of Online Organizational Agents. Computers in Human Behavior, 128, 107092.
Lu, Z., Li, P., Wang, W., & Yin, M. (2022). The Effects of AI-based Credibility Indicators on the Detection and Spread of Misinformation under Social Influence. Proceedings of the ACM on Human–Computer Interaction, 6(CSCW2), 1–27.
Mansour, A., & Francke, H. (2017). Credibility Assessments of Everyday Life Information on Facebook: A Sociocultural Investigation of a Group of Mothers. Information Research, 22(2).
Markus, A. F., Kors, J. A., & Rijnbeek, P. R. (2021). The Role of Explainability in Creating Trustworthy Artificial Intelligence for Health Care: A Comprehensive Survey of the Terminology, Design Choices, and Evaluation Strategies. Journal of Biomedical Informatics, 113, 103655.
McCroskey, J. C., & Young, T. J. (1981). Ethos and Credibility: The Construct and Its Measurement after Three Decades. Communication Studies, 32(1), 24–34.
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A Survey on Bias and Fairness in Machine Learning. ACM Computing Surveys (CSUR), 54(6), 1–35.
Molina, M. D., & Sundar, S. S. (2022). When AI Moderates Online Content: Effects of Human Collaboration and Interactive Transparency on User Trust. Journal of Computer-mediated Communication, 27(4), zmac010.
Momen, A., De Visser, E., Wolsten, K., Cooley, K., Walliser, J., & Tossell, C. C. (2023). Trusting the Moral Judgments of a Robot: Perceived Moral Competence and Humanlikeness of a GPT-3 Enabled AI. Proceedings of the 56th Hawaii International Conference on System Sciences (pp. 501–510). IEEE.
Monfort, S. S., Graybeal, J. J., Harwood, A. E., McKnight, P. E., & Shaw, T. H. (2018). A Single-item Assessment for Remaining Mental Resources: Development and Validation of the Gas Tank Questionnaire (GTQ). Theoretical Issues in Ergonomics Science, 19(5), 530–552.
Morley, J., Machado, C. C., Burr, C., Cowls, J., Joshi, I., Taddeo, M., & Floridi, L. (2020). The Ethics of AI in Health Care: A Mapping Review. Social Science & Medicine, 260, 113172.
Mou, Y., & Meng, X. (2024). Alexa, It Is Creeping over Me: Exploring the Impact of Privacy Concerns on Consumer Resistance to Intelligent Voice Assistants. Asia Pacific Journal of Marketing and Logistics, 36(2), 261–292.
Pareek, S., van Berkel, N., Velloso, E., & Goncalves, J. (2024). Effect of Explanation Conceptualisations on Trust in AI-assisted Credibility Assessment. Proceedings of the ACM on Human–Computer Interaction, CSCW.
Peckham, J. B. (2024). An AI Harms and Governance Framework for Trustworthy AI. Computer, 57(3), 59–68.
Pelau, C., Dabija, D.-C., & Ene, I. (2021). What Makes an AI Device Human-like? The Role of Interaction Quality, Empathy and Perceived Psychological Anthropomorphic Characteristics in the Acceptance of Artificial Intelligence in the Service Industry. Computers in Human Behavior, 122, 106855.
Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020). Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 33–44). ACM.
Reinhardt, K. (2023). Trust and Trustworthiness in AI Ethics. AI and Ethics, 3(3), 735–744.
Rieh, S. Y. (2002). Judgment of Information Quality and Cognitive Authority in the Web. Journal of the American Society for Information Science and Technology, 53(2), 145–161.
Rieh, S. Y., & Danielson, D. R. (2007). Credibility: A Multidisciplinary Framework. Annual Review of Information Science and Technology, 41(1), 307–364.
Ruan, W., Wu, M., Sun, Y., Huang, X., Kroening, D., & Kwiatkowska, M. (2019). Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for the Hamming Distance. IJCAI-19.
Sabharwal, R., Miah, S. J., Wamba, S. F., & Cook, P. (2024). Extending Application of Explainable Artificial Intelligence for Managers in Financial Organizations. Annals of Operations Research, 1–31.
Sambasivan, N., Arnesen, E., Hutchinson, B., Doshi, T., & Prabhakaran, V. (2021). Re-imagining Algorithmic Fairness in India and Beyond. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 315–328). ACM.
Sanneman, L., & Shah, J. A. (2022). The Situation Awareness Framework for Explainable AI (SAFE-AI) and Human Factors Considerations for XAI Systems. International Journal of Human–Computer Interaction, 38(18–20), 1772–1788.
Schmitt, A., Wambsganss, T., Söllner, M., & Janson, A. (2021). Towards a Trust Reliance Paradox? Exploring the Gap Between Perceived Trust in and Reliance on Algorithmic Advice. ICIS.
Schoenherr, J. R., Abbas, R., Michael, K., Rivas, P., & Anderson, T. D. (2023). Designing AI Using a Human-centered Approach: Explainability and Accuracy toward Trustworthiness. IEEE Transactions on Technology and Society, 4(1), 9–23.
Sebastian, G., George, A., & Jackson, G. Jr (2023). Persuading Patients Using Rhetoric to Improve Artificial Intelligence Adoption: Experimental Study. Journal of Medical Internet Research, 25, e41430.
Shahriar, S., Allana, S., Hazratifard, S. M., & Dara, R. (2023). A Survey of Privacy Risks and Mitigation Strategies in the Artificial Intelligence Life Cycle. IEEE Access, 11, 61829–61854.
Shank, C. E. (2021). Credibility of Soft Law for Artificial Intelligence: Planning and Stakeholder Considerations. IEEE Technology and Society Magazine, 40(4), 25–36.
Shin, D. (2022). How Do People Judge the Credibility of Algorithmic Sources? AI & Society, 1–16.
Shin, D. (2023). Embodying Algorithms, Enactive Artificial Intelligence and the Extended Cognition: You Can See as Much as You Know About Algorithm. Journal of Information Science, 49(1), 18–31.
Shin, D., Rasul, A., & Fotiadis, A. (2022). Why am I Seeing This? Deconstructing Algorithm Literacy through the Lens of Users. Internet Research, 32(4), 1214–1234.
Shneiderman, B. (2020). Bridging the Gap between Ethics and Practice: Guidelines for Reliable, Safe, and Trustworthy Human-centered AI Systems. ACM Transactions on Interactive Intelligent Systems (TiiS), 10(4), 1–31.
Shusas, E. (2024). Designing Better Credibility Indicators: Understanding How Emerging Adults Assess Source Credibility of Misinformation Identification and Labeling. Companion Publication of the 2024 ACM Designing Interactive Systems Conference (pp. 41–44). ACM.
Sokol, K., & Flach, P. (2020). Explainability Fact Sheets: A Framework for Systematic Assessment of Explainable Approaches. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 56–67). ACM.
Song, S., Zhao, Y. C., Yao, X., Ba, Z., & Zhu, Q. (2021). Short Video Apps as a Health Information Source: An Investigation of Affordances, User Experience and Users’ Intention to Continue the Use of TikTok. Internet Research, 31(6), 2120–2142.
Stettinger, G., Weissensteiner, P., & Khastgir, S. (2024). Trustworthiness Assurance Assessment for High-Risk AI-Based Systems. IEEE Access.
Stevens, A. F., & Stetson, P. (2023). Theory of Trust and Acceptance of Artificial Intelligence Technology (TrAAIT): An Instrument to Assess Clinician Trust and Acceptance of Artificial Intelligence. Journal of Biomedical Informatics, 148, 104550.
Sundar, S. S. (2008). The MAIN Model: A Heuristic Approach to Understanding Technology Effects on Credibility. MacArthur Foundation Digital Media and Learning Initiative.
Tandoc, E. C. Jr, Yao, L. J., & Wu, S. (2020). Man vs. Machine? The Impact of Algorithm Authorship on News Credibility. Digital Journalism, 8(4), 548–562.
Tenhundfeld, N. L., de Visser, E. J., Ries, A. J., Finomore, V. S., & Tossell, C. C. (2020). Trust and Distrust of Automated Parking in a Tesla Model X. Human Factors, 62(2), 194–210.
Toreini, E., Aitken, M., Coopamootoo, K., Elliott, K., Zelaya, C. G., & Van Moorsel, A. (2020). The Relationship between Trust in AI and Trustworthy Machine Learning Technologies. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 272–283). ACM.
Tossell, C. C., Tenhundfeld, N. L., Momen, A., Cooley, K., & de Visser, E. J. (2024). Student Perceptions of ChatGPT Use in a College Essay Assignment: Implications for Learning, Grading, and Trust in Artificial Intelligence. IEEE Transactions on Learning Technologies.
Trindade Neves, F., Aparicio, M., & de Castro Neto, M. (2024). The Impacts of Open Data and eXplainable AI on Real Estate Price Predictions in Smart Cities. Applied Sciences, 14(5), 2209.
Tseng, S., & Fogg, B. J. (1999). Credibility and Computing Technology. Communications of the ACM, 42(5), 39–44.
Ullman, D., & Malle, B. F. (2019). Measuring Gains and Losses in Human–Robot Trust: Evidence for Differentiable Components of Trust. 2019 14th ACM/IEEE International Conference on Human–Robot Interaction (HRI) (pp. 618–619). IEEE.
Uslu, S., Kaur, D., Rivera, S. J., Durresi, A., Durresi, M., & Babbar-Sebens, M. (2021). Trustworthy Acceptance: A New Metric for Trustworthy Artificial Intelligence Used in Decision Making in Food–Energy–Water Sectors. International Conference on Advanced Information Networking and Applications (pp. 208–219). Springer International Publishing.
Uzir, M. U. H., Bukari, Z., Al Halbusi, H., Lim, R., Wahab, S. N., Rasul, T., Thurasamy, R., Jerin, I., Chowdhury, M. R. K., & Tarofder, A. K. (2023). Applied Artificial Intelligence: Acceptance-Intention-Purchase and Satisfaction on Smartwatch Usage in a Ghanaian Context. Heliyon, 9(8), e18666.
Van Bulck, L., & Moons, P. (2024). What if Your Patient Switches from Dr. Google to Dr. ChatGPT? A Vignette-based Survey of the Trustworthiness, Value, and Danger of ChatGPT-generated Responses to Health Questions. European Journal of Cardiovascular Nursing, 23(1), 95–98.
Vincent-Lancrin, S., & Van der Vlies, R. (2020). Trustworthy Artificial Intelligence (AI) in Education: Promises and Challenges. OECD Education Working Papers, 218.
Vössing, M., Kühl, N., Lind, M., & Satzger, G. (2022). Designing Transparency for Effective Human–AI Collaboration. Information Systems Frontiers, 24(3), 877–895.
van der Waa, J., Schoonderwoerd, T., van Diggelen, J., & Neerincx, M. (2020). Interpretable Confidence Measures for Decision Support Systems. International Journal of Human–Computer Studies, 144, 102493.
Wach, K., Duong, C. D., Ejdys, J., Kazlauskaitė, R., Korzynski, P., Mazurek, G., Paliszkiewicz, J., & Ziemba, E. (2023). The Dark Side of Generative Artificial Intelligence: A Critical Analysis of Controversies and Risks of ChatGPT. Entrepreneurial Business and Economics Review, 11(2), 7–30.
Wagle, V., Kaur, K., Kamat, P., Patil, S., & Kotecha, K. (2021). Explainable AI for Multimodal Credibility Analysis: Case Study of Online Beauty Health (mis)-Information. IEEE Access, 9, 127985–128022.
Wang, C., Ahmad, S. F., Ayassrah, A. Y. B. A., Awwad, E. M., Irshad, M., Ali, Y. A., Al-Razgan, M., Khan, Y., & Han, H. (2023). An Empirical Evaluation of Technology Acceptance Model for Artificial Intelligence in E-commerce. Heliyon, 9(8).
Wang, X., & Zhao, Y. C. (2023). Understanding Older Adults’ Intention to Use Patient-accessible Electronic Health Records: Based on the Affordance Lens. Frontiers in Public Health, 10, 1075204.
Weeks, R., Sangha, P., Cooper, L., Sedoc, J., White, S., Gretz, S., Toledo, A., Lahav, D., Hartner, A.-M., & Martin, N. M. (2023). Usability and Credibility of a COVID-19 Vaccine Chatbot for Young Adults and Health Workers in the United States: Formative Mixed Methods Study. JMIR Human Factors, 10(1), e40533.
Winkle, K., Melsión, G. I., McMillan, D., & Leite, I. (2021). Boosting Robot Credibility and Challenging Gender Norms in Responding to Abusive Behaviour: A Case for Feminist Robots. Companion of the 2021 ACM/IEEE International Conference on Human–Robot Interaction (pp. 29–37). ACM.
Xiang, H., Zhou, J., & Xie, B. (2023). AI Tools for Debunking Online Spam Reviews? Trust of Younger and Older Adults in AI Detection Criteria. Behaviour & Information Technology, 42(5), 478–497.
Xu, W. (2019). Toward Human-centered AI: A Perspective from Human–Computer Interaction. Interactions, 26(4), 42–46.
Yang, Z. (2019). Fidelity: A Property of Deep Neural Networks to Measure the Trustworthiness of Prediction Results. Proceedings of the 2019 ACM Asia Conference on Computer and Communications Security (pp. 676–678). ACM.
Yeomans, M., Shah, A., Mullainathan, S., & Kleinberg, J. (2019). Making Sense of Recommendations. Journal of Behavioral Decision Making, 32(4), 403–414.
Yokoi, R., Eguchi, Y., Fujita, T., & Nakayachi, K. (2021). Artificial Intelligence Is Trusted Less Than a Doctor in Medical Treatment Decisions: Influence of Perceived Care and Value Similarity. International Journal of Human–Computer Interaction, 37(10), 981–990.
Yue, B., & Li, H. (2023). The Impact of Human–AI Collaboration Types on Consumer Evaluation and Usage Intention: A Perspective of Responsibility Attribution. Frontiers in Psychology, 14, 1277861.
Zahlan, A., Ranjan, R. P., & Hayes, D. (2023). Artificial Intelligence Innovation in Healthcare: Literature Review, Exploratory Analysis, and Future Research. Technology in Society, 102321.
Zalzal, H. G., Abraham, A., Cheng, J., & Shah, R. K. (2024). Can ChatGPT Help Patients Answer Their Otolaryngology Questions? Laryngoscope Investigative Otolaryngology, 9(1), e1193.
Zhang, J., & Zhang, Z.-M. (2023). Ethics and Governance of Trustworthy Medical Artificial Intelligence. BMC Medical Informatics and Decision Making, 23(1), 7.
Zhang, Y., Liao, Q. V., & Bellamy, R. K. (2020). Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-assisted Decision Making. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 295–305). ACM.
Zhang, Z., Genc, Y., Wang, D., Ahsen, M. E., & Fan, X. (2021). Effect of AI Explanations on Human Perceptions of Patient-facing AI-powered Healthcare Systems. Journal of Medical Systems, 45(6), 64.
Zhou, J., Zhang, Y., Luo, Q., Parker, A. G., & De Choudhury, M. (2023). Synthetic Lies: Understanding AI-generated Misinformation and Evaluating Algorithmic and Human Solutions. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1–20). ACM.
Zhuang, N., Ma, Z., Zhou, Y., Li, X., Wang, P., Huang, Z., Zhai, S., & Ying, F. (2024). Alleviating Elderly’s Medical Communication Issue with Personalized LLM-Generated Short-Form Video. International Symposium on World Ecological Design (pp. 763–772). IOS Press.
Table 4.1 Main dimensions of AI credibility
Table 4.2 Influencing factors of AI credibility
Table 4.3 Challenges in credibility assessment of human–generative AI interaction
Table 4.4 Domains of credibility assessment for human–generative AI
