
4 - Credibility Assessment of Human–Generative AI Interaction

Published online by Cambridge University Press: 19 September 2025

Dan Wu, Wuhan University, China
Shaobo Liang, Wuhan University, China

Summary

This chapter provides a comprehensive overview of the current state of credibility research in human–generative AI interactions by analyzing literature from various disciplines. It begins by exploring the key dimensions of credibility assessment and surveys the two main measurement approaches: user-oriented and technology-oriented. The chapter then examines the factors that influence human perceptions of AI-generated content (AIGC), including data-, system-, and algorithm-related attributes as well as user-specific factors. Additionally, it investigates the challenges and ethical considerations involved in assessing credibility in human–generative AI interactions, scrutinizing the potential consequences of misplaced trust in AIGC. These risks include concerns over security, privacy, power dynamics, responsibility, cognitive biases, and the erosion of human autonomy. Emerging approaches and technological solutions for improving credibility assessment in AI systems are also discussed, with particular attention to domains where such assessment is critical. Finally, the chapter proposes several directions for future research on AIGC credibility assessment.
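To make the user-oriented measurement approach concrete: perceived credibility is commonly operationalized as Likert-scale ratings on dimensions such as trustworthiness and expertise, aggregated into per-dimension and overall scores. The Python sketch below is illustrative only; the dimension names, item wording, and scale range are assumptions, not the chapter's actual instrument.

```python
from statistics import mean

# Hypothetical Likert ratings (1-7) a participant might assign to one
# AI-generated answer; the dimension names and item counts here are
# illustrative assumptions, not the chapter's actual instrument.
ratings = {
    "trustworthiness": [6, 5, 7],  # e.g., items: unbiased, honest, reliable
    "expertise":       [4, 5, 5],  # e.g., items: competent, knowledgeable, experienced
}

# User-oriented assessment: aggregate item ratings into per-dimension
# scores, then into an overall perceived-credibility score.
dimension_scores = {dim: mean(items) for dim, items in ratings.items()}
overall = mean(dimension_scores.values())

for dim, score in dimension_scores.items():
    print(f"{dim}: {score:.2f}")
print(f"overall perceived credibility: {overall:.2f}")
```

A technology-oriented method, by contrast, would score the content itself (for example, with an automated fact-checking or provenance model) rather than eliciting user judgments.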

Information

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2025



