
Ethical decision-making for AI in mental health: the Integrated Ethical Approach for Computational Psychiatry (IEACP) framework

Published online by Cambridge University Press:  24 July 2025

Andrea Putica*
Affiliation:
Department of Psychology, Counselling and Therapy, La Trobe University, Melbourne, VIC, Australia; Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, VIC, Australia
Rahul Khanna
Affiliation:
Phoenix Australia – Centre for Posttraumatic Mental Health, Department of Psychiatry, University of Melbourne, Melbourne, VIC, Australia; Department of Psychiatry, Austin Health, Heidelberg, Melbourne, Australia
William Bosl
Affiliation:
School of Nursing and Health Professions, University of San Francisco, San Francisco, CA, USA; Computational Health Informatics Program, Boston Children’s Hospital, Boston, MA, USA; Boston Children’s Hospital, Harvard Medical School, Boston, MA, USA
Sudeep Saraf
Affiliation:
Department of Psychiatry, Alfred Health, Melbourne, VIC, Australia
Juliet Edgcomb
Affiliation:
Mental Health Informatics and Data Science Hub, Semel Institute, University of California Los Angeles, Los Angeles, CA, USA; Division of Child & Adolescent Psychiatry, Department of Psychiatry, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
Corresponding author: Andrea Putica; Email: a.putica@latrobe.edu.au

Abstract

The integration of computational methods into psychiatry presents profound ethical challenges that extend beyond existing guidelines for AI and healthcare. While precision medicine and digital mental health tools offer transformative potential, they also raise concerns about privacy, algorithmic bias, transparency, and the erosion of clinical judgment. This article introduces the Integrated Ethical Approach for Computational Psychiatry (IEACP) framework, developed through a conceptual synthesis of 83 studies. The framework comprises five procedural stages – Identification, Analysis, Decision-making, Implementation, and Review – each informed by six core ethical values – beneficence, autonomy, justice, privacy, transparency, and scientific integrity. By systematically addressing ethical dilemmas inherent in computational psychiatry, the IEACP provides clinicians, researchers, and policymakers with structured decision-making processes that support patient-centered, culturally sensitive, and equitable AI implementation. Through case studies, we demonstrate framework adaptability to real-world applications, underscoring the necessity of ethical innovation alongside technological progress in psychiatric care.

Information

Type
Review Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Introduction

Computational psychiatry integrates insights from psychiatry, neuroscience, and computer science to develop data-driven approaches for diagnosis, prognosis, and treatment of mental health conditions (Corlett & Fletcher, Reference Corlett and Fletcher2014; Friston, Reference Friston2023; Khanna et al., Reference Khanna, Robinson, O’Donnell, Eyre and Smith2022). Clinicians face ethical challenges when implementing these approaches in practice (Espejo, Reiner, & Wenzinger, Reference Espejo, Reiner and Wenzinger2023; McCradden, Hui, & Buchman, Reference McCradden, Hui and Buchman2023). Recent advancements in artificial intelligence (AI), including machine learning and generative language models, have expanded the potential of computational psychiatry to transform mental healthcare (Salazar de Pablo et al., Reference Salazar de Pablo, Studerus, Vaquerizo-Serrano, Irving, Catalan, Oliver and Fusar-Poli2021). While these challenges exist across medicine (Uusitalo, Tuominen, & Arstila, Reference Uusitalo, Tuominen and Arstila2021), the sensitive nature of mental healthcare amplifies the ethical implications of these challenges. Despite existing ethical guidelines for general psychiatric practice and health AI (Solanki, Grundy, & Hussain, Reference Solanki, Grundy and Hussain2023; World Health Organization, 2021), computational psychiatry remains ethically underregulated. Without structured ethical governance, AI risks reinforcing systemic biases, compromising patient autonomy, and exacerbating existing disparities in mental healthcare.

Psychiatric data are sensitive, stigmatizing, and at times subjective, which can complicate informed consent and clinical transparency. Computational models trained on homogeneous datasets can amplify biases, skewing psychiatric diagnoses and treatments, particularly when applied across diverse cultural, gender, and age groups (Capon, Hall, Fry, & Carter, Reference Capon, Hall, Fry and Carter2016; Coley et al., Reference Coley, Johnson, Simon, Cruz and Shortreed2021; Leslie et al., Reference Leslie, Mazumder, Peppin, Wolters and Hagerty2021; Roy, Reference Roy2017). Psychiatry’s reliance on subjective patient experiences means that overemphasis on computational tools may disguise paternalism (Juengst, McGowan, Fishman, & Settersten, Reference Juengst, McGowan, Fishman and Settersten2016) if AI systems’ outputs override patients’ lived experiences. In this context, AI model opacity often further complicates patient-centered decision-making (Chin-Yee & Upshur, Reference Chin-Yee and Upshur2018; Ploug & Holm, Reference Ploug and Holm2016). Erosion in trust can damage the therapeutic alliance, a cornerstone of psychiatric practice, and diminish clinician’s ability to integrate psychosocial and cultural contexts into care (Chin-Yee & Upshur, Reference Chin-Yee and Upshur2018; Tekin, Reference Tekin2014; Walter, Reference Walter2013). Furthermore, rapid evolution of AI demands that ethical frameworks remain adaptable to new technologies affecting symptomatology, diagnosis, and even therapeutic interventions (Barnett et al., Reference Barnett, Torous, Staples, Sandoval, Keshavan and Onnela2018; Starke, De Clercq, Borgwardt, & Elger, Reference Starke, De Clercq, Borgwardt and Elger2021; Torous et al., Reference Torous, Bucci, Bell, Kessing, Faurholt-Jepsen, Whelan and Firth2021). Finally, by emphasizing genetics and biomarkers, established precision medicine frameworks risk reductionism and may overlook the crucial roles of environmental, cultural, and interpersonal determinants in mental health and psychiatric care (Venkatasubramanian & Keshavan, Reference Venkatasubramanian and Keshavan2016).

The digital divide complicates ethical considerations of AI in global psychiatric contexts, particularly in rural, underserved, and impoverished regions. This divide between those with and without access to mobile or internet technologies intersects with cultural variations in mental health conceptualization and treatment. In low- and middle-income countries, primary care providers, who are often the first point of contact for mental health concerns, face numerous challenges in implementing computational tools that align with patient autonomy and local cultural perspectives regarding mental health treatment (Naslund et al., Reference Naslund, Aschbrenner, Araya, Marsch, Unützer, Patel and Bartels2017). For example, cultures may conceptualize epilepsy as a neurological disorder, mental health condition, or spiritual issue (Gilkinson et al., Reference Gilkinson, Kinney, Olaniyan, Murtala, Sipilon, Malunga and Shankar2022). Such variations extend to suicide reporting and help-seeking behaviors, where cultural and systemic factors significantly influence data accuracy and treatment engagement (Monosov et al., Reference Monosov, Zimmermann, Frank, Mathis and Baker2024; Naslund et al., Reference Naslund, Aschbrenner, Araya, Marsch, Unützer, Patel and Bartels2017; Starke et al., Reference Starke, De Clercq, Borgwardt and Elger2021). The tripartite challenge of AI access, cultural understanding, and clinical implementation underscores the need for an ethical framework adaptable to the technical and sociocultural aspects of global mental healthcare delivery.

This article introduces the Integrated Ethical Approach for Computational Psychiatry (IEACP), a framework that addresses computational psychiatry’s ethical challenges while supporting patient-centered interdisciplinary care. We outline the framework’s development, emphasize patient and lived experience integration in ethical decision-making, and demonstrate its application through case studies. We also examine the framework’s implications, limitations, and future directions for global mental health care scalability. The ‘Patient and lived experience involvement methods’ section details our approach to incorporating patient perspectives to evaluate the alignment of computational psychiatric tools with real-world patient needs and experiences.

Framework development methodology

The literature search strategy, inclusion and exclusion criteria, study selection process, data extraction procedures, and the full Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram are provided in the Supplementary Material (Figure S-1).

Development of framework components through structured review

The IEACP Framework was developed through interpretive synthesis of 83 peer-reviewed studies identified in a systematic literature review (see Supplementary Material). Structured data extraction captured information on implementation contexts, ethical challenges, and ethical values. Using repeated rounds of concept clustering, five core framework stages were identified: Identification, Analysis, Decision-making, Implementation, and Review. Each stage includes three commonly observed implementation processes. A stage refers to a broad phase in ethical implementation (e.g. the identification of risks), while a process refers to a recurring, practical approach within that stage (e.g. stakeholder mapping). Values refer to overarching ethical principles, such as privacy, autonomy, or justice, that guide decision-making throughout each stage and process. The framework focuses on the most frequently recurring processes within each stage that reflect clear ethical strategies across diverse settings. This approach aligns with the conceptual framework methodology (Jabareen, Reference Jabareen2009).
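
For readers who wish to operationalize this structure in software, the stage/process/value hierarchy described above can be captured in a few lines of code. The sketch below (Python) is illustrative only; the identifiers are our own shorthand rather than an official schema, and the checklist it generates simply enumerates the stage–process pairs discussed in this section.

```python
# Illustrative sketch: IEACP stages, their recurring processes, and the six core
# values represented as plain Python data (names are shorthand, not an official schema).
from dataclasses import dataclass, field

CORE_VALUES = [
    "beneficence/non-maleficence", "autonomy/informed consent", "justice/equity",
    "privacy/confidentiality", "transparency/explainability", "scientific integrity/validity",
]

@dataclass
class Stage:
    name: str
    processes: list[str]
    values: list[str] = field(default_factory=lambda: list(CORE_VALUES))

IEACP_STAGES = [
    Stage("Identification", ["recognize ethical challenges", "gather implementation data", "map stakeholders"]),
    Stage("Analysis", ["technical evaluation", "compliance assessment", "guideline review"]),
    Stage("Decision-making", ["strategy development", "implementation planning", "consensus building"]),
    Stage("Implementation", ["clinical integration", "staff training", "performance monitoring"]),
    Stage("Review and reflection", ["outcome assessment", "performance evaluation", "framework refinement"]),
]

def audit_checklist() -> list[str]:
    """Expand the framework into stage/process checklist items for a project audit."""
    return [f"{stage.name}: {process}" for stage in IEACP_STAGES for process in stage.processes]

if __name__ == "__main__":
    for item in audit_checklist():
        print(item)
```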

Implementation contexts and ethical challenges revealed recurring, stage-specific implementation processes. For example, three distinct Identification stage processes emerged: systematic recognition of ethical risks (e.g. D’Souza et al., Reference D’Souza, Mathew, Amanullah, Thornton, Mishra, Mohandas, Palatty and Surapaneni2024; privacy concerns), gathering of implementation information (e.g. Clarke, Foltz, & Garrard, Reference Clarke, Foltz and Garrard2020; examining data collection and storage requirements), and stakeholder mapping (e.g. Clarke et al., Reference Clarke, Foltz and Garrard2020; affected parties). Analysis stage processes were derived from studies evaluating ethical considerations, including technical evaluation (e.g. Monteith et al., Reference Monteith, Glenn, Geddes, Achtyes, Whybrow and Bauer2023; data quality), compliance assessment (e.g. Ball, Kalinowski, & Williams, Reference Ball, Kalinowski and Williams2020; regulatory requirements), and guideline review (e.g. D’Souza et al., Reference D’Souza, Mathew, Amanullah, Thornton, Mishra, Mohandas, Palatty and Surapaneni2024; compliance frameworks). Decision-making stage processes emerged from implementation planning approaches, such as strategy development (e.g. Hurley et al., Reference Hurley, Sonig, Herrington, Storch, Lázaro-Muñoz, Blumenthal-Barby and Kostick-Quenet2024; human supervision), implementation planning (e.g. Zhang et al., Reference Zhang, Scandiffio, Younus, Jeyakumar, Karsan, Charow and Wiljer2023; standardization), and consensus building (e.g. Zidaru, Morrow, & Stockley, Reference Zidaru, Morrow and Stockley2021; codesign). Implementation stage processes were reflected in clinical application approaches, including clinical integration (e.g. Torous et al., Reference Torous, Bucci, Bell, Kessing, Faurholt-Jepsen, Whelan and Firth2021; healthcare adoption), staff training (e.g. D’Souza et al., Reference D’Souza, Mathew, Amanullah, Thornton, Mishra, Mohandas, Palatty and Surapaneni2024; workforce preparation), and performance monitoring (e.g. Clarke et al., Reference Clarke, Foltz and Garrard2020; ongoing evaluation). Review stage processes were evident in the studies’ evaluation approaches, including outcome assessment (e.g. Dwyer & Koutsouleris, Reference Dwyer and Koutsouleris2022; clinical translation), performance evaluation (e.g. Monteith et al., Reference Monteith, Glenn, Geddes, Achtyes, Whybrow and Bauer2023; error tracking), and framework refinement (e.g. Fusar-Poli et al., Reference Fusar-Poli, Manchia, Koutsouleris, Leslie, Woopen, Calkins and Andreassen2022; continuous monitoring).

Analysis of the 83 studies identified the following six key ethical values in computational psychiatry: privacy and confidentiality, transparency and explainability, justice and equity, beneficence and non-maleficence, autonomy and informed consent, and scientific integrity and validity. Privacy and confidentiality were most frequently discussed, followed by transparency in clinical decision-making. Justice considerations spanned algorithmic bias and healthcare access, while beneficence, autonomy, and integrity connected to risk assessment, informed consent, and model validation, respectively. The alignment of these ethical values with the IEACP framework stages was determined based on their frequency and implementation context in the literature (see Supplementary Table S-2). A selective set of exemplary references is cited throughout the article.

These six ethical principles were inductively derived through interpretive synthesis of ethical content across the included literature, consistent with conceptual framework development (Jabareen, Reference Jabareen2009). The framework synthesizes established bioethical traditions, including traditional principlism (Beauchamp & Childress, Reference Beauchamp and Childress2001) expressed in the values of beneficence, non-maleficence, autonomy, and justice; information ethics (Floridi et al., Reference Floridi, Cowls, Beltrametti, Chatila, Chazerand, Dignum and Vayena2018) emphasizing privacy protections in algorithmic data processing; virtue epistemology (Zagzebski, Reference Zagzebski1996), highlighting scientific integrity and epistemic responsibility; and epistemic justice theory (Fricker, Reference Fricker2007), with a focus on transparency and algorithmic accountability. This multitheoretical synthesis addresses computational psychiatry’s novel challenges that no single established framework adequately encompasses. Privacy extends beyond traditional confidentiality to encompass algorithmic data processing, model training on sensitive psychiatric information, and digital phenotyping that infers mental states from behavioral patterns. Transparency addresses epistemic opacity inherent in machine learning models where clinical recommendations emerge from processes that may be fundamentally unexplainable even to developers (Petch, Di, & Nelson, Reference Petch, Di and Nelson2022). Scientific integrity ensures methodological rigor in computational models whose predictive validity directly influences psychiatric interventions. These six principles emerged as essential through repeated patterns in the literature addressing ethical challenges in computational psychiatry, with other commonly cited AI ethics principles either conceptually encompassed within the selected six or having limited direct applicability to clinical psychiatric contexts.

Ethical dilemmas in computational psychiatry characteristically arise from irreconcilable conflicts between these principles, requiring contextual navigation rather than algorithmic resolution. Transparency demands for explainable AI outputs can directly conflict with privacy requirements when model explanations necessarily reveal sensitive patient data or demographic patterns. Beneficence-driven early interventions based on suicide risk algorithms may fundamentally override patient autonomy when individuals reject computational predictions of their mental state. Justice requires equitable AI performance across populations, while scientific integrity demands acknowledging when models perform poorly for certain demographic groups, creating tensions between deployment and methodological honesty. Privacy protections may conflict with justice when de-identification procedures disproportionately exclude marginalized populations from model development. The IEACP framework addresses these inherent tensions not by providing predetermined resolutions, but by requiring systematic identification of competing principles during stakeholder mapping, explicit analysis of trade-offs during compliance assessment, collaborative negotiation of acceptable compromises during consensus building, and ongoing monitoring of principle conflicts during implementation review. This procedural approach recognizes that ethical reasoning in computational psychiatry involves navigating irreducible tensions through structured deliberation rather than eliminating conflicts through hierarchical principle ranking, consistent with contextualist approaches to applied ethics (Jonsen & Toulmin, Reference Jonsen and Toulmin1988) and established methods for managing principle conflicts in clinical ethics (Gillon, Reference Gillon2003).

The IEACP framework

The IEACP framework addresses computational psychiatry ethics through five stages with three processes each, guided by six core values. Ethical values do not simply serve as retrospective considerations; they actively shape decision points at both stage transitions and within processes, embedding ethical integrity throughout implementation. Table 1 illustrates how ethical values inform decision points across each stage.

Table 1. Ethical decision-making across the Integrated Ethical Approach for Computational Psychiatry (IEACP) framework

Note: This table outlines the five procedural stages of the Integrated Ethical Approach for Computational Psychiatry (IEACP) framework – Identification, Analysis, Decision-making, Implementation, and Review and Reflection – while highlighting how ethical values underpin each stage. LIME, Local Interpretable Model-agnostic Explanations; SHAP, SHapley Additive exPlanations.

Identification

The Identification stage consists of three processes as follows: recognizing ethical challenges, gathering implementation data, and mapping stakeholders. Recognition is the process of identifying ethical challenges at the intersection of computational psychiatry and clinical ethics, including moral dilemmas that arise from AI-driven decision-making. For example, in schizophrenia prediction models, the recognition process examines the ethical tension between early intervention and the risk of overdiagnosis, focusing on how predictive modeling may inadvertently reinforce diagnostic overshadowing, limit patient autonomy, and exacerbate social stigma. Gathering implementation data involves collecting information on real-world implementation contexts, including technical, clinical, and systemic factors affecting ethical deployment. For example, in machine learning models for depression screening, information gathering involves documenting existing screening workflows across clinical settings, mapping the technological infrastructure needed for ethical deployment, and characterizing the target patient demographics to anticipate potential disparities. Stakeholder mapping identifies all individuals and groups impacted by computational psychiatry tools, a critical process given the complex relationships between clinicians, patients, carers, and automated systems. In an automated suicide risk prediction system, stakeholder mapping identifies key individuals and groups impacted by the tool, including emergency clinicians, at-risk patients, family members, mental health teams, hospital administrators, and healthcare funders, such as government agencies and insurance providers. Special attention is given to populations with variable decision-making capacity, such as individuals with first-episode psychosis, by identifying their interactions with AI tools, mapping support networks, and documenting contexts requiring added protections (Vayena, Blasimme, & Cohen, Reference Vayena, Blasimme and Cohen2018).

Ethical values shape the Identification stage, guiding the recognition of ethical challenges, data collection, and stakeholder mapping to ensure responsible integration of computational psychiatry tools. Identifying algorithmic risks, such as biased data sources, unintended clinical consequences, or ethical dilemmas in psychiatric prediction models, ensures that potential harms are flagged before deployment (beneficence and non-maleficence; Ball et al., Reference Ball, Kalinowski and Williams2020; Fusar-Poli et al., Reference Fusar-Poli, Manchia, Koutsouleris, Leslie, Woopen, Calkins and Andreassen2022). The identification of methodological limitations during instrument development helps prevent flawed assumptions from influencing clinical judgment (scientific integrity; McCradden et al., Reference McCradden, Hui and Buchman2023; Monteith et al., Reference Monteith, Glenn, Geddes, Achtyes, Whybrow and Bauer2023). Pre-implementation data collection identifies ethical risks related to psychiatric information security, storage, and sharing, preventing potential misuse or unauthorized access (privacy and confidentiality; D’Souza et al., Reference D’Souza, Mathew, Amanullah, Thornton, Mishra, Mohandas, Palatty and Surapaneni2024; Upreti, Lind, Elmokashfi, & Yazidi, Reference Upreti, Lind, Elmokashfi and Yazidi2024). Recognizing disparities in how AI systems identify psychiatric risk across different demographic groups is essential to prevent reinforcing existing inequities (justice and equity; Gooding & Kariotis, Reference Gooding and Kariotis2021; Sahin et al., Reference Sahin, Kambeitz-Ilankovic, Wood, Dwyer, Upthegrove and Salokangas2024). Stakeholder mapping ensures that AI-driven tools acknowledge patient rights and decision-making capacity, particularly in populations where autonomy may be impacted, such as individuals with severe mental illness (autonomy and informed consent; Davidson, Reference Davidson2022; Jacobson et al., Reference Jacobson, Bentley, Walton, Wang, Fortgang, Millner and Coppersmith2020). Recognizing when computational psychiatry tools produce opaque decision-making processes ensures that risks related to interpretability are identified (transparency and explainability; D’Souza et al., Reference D’Souza, Mathew, Amanullah, Thornton, Mishra, Mohandas, Palatty and Surapaneni2024; McCradden et al., Reference McCradden, Hui and Buchman2023). This integrated approach embeds ethical rules into the earliest stages of tool development, shifting ethical considerations from retrospective checkpoints to foundational guideposts that shape computational psychiatry’s trajectory from inception.

Analysis

The Analysis stage deepens the examination of ethical considerations identified in the previous stage, moving from recognizing potential challenges to evaluation of their implications and compliance requirements. While the Identification stage maps ethical concerns, the Analysis stage rigorously assesses them against established regulatory, institutional, and professional standards. This involves three key processes: technical evaluation, compliance assessment, and guideline review. Technical evaluation assesses computational methods against predefined benchmarks, ensuring accuracy, sensitivity, specificity, and fairness across diverse populations. This involves evaluating model reliability, demographic fairness, and interpretability within psychiatric contexts. For example, when assessing a depression screening algorithm, technical evaluation determines whether the model performs equitably across age groups, ethnicities, and socioeconomic backgrounds, identifying potential biases and performance disparities. Compliance assessment evaluates alignment with relevant healthcare laws and data protection regulations. In computational psychiatry, this involves reviewing adherence to healthcare laws, data protection frameworks such as the European Union’s General Data Protection Regulation (European Union, 2016) and the United States Health Insurance Portability and Accountability Act (HIPAA, 1996), particularly regarding sensitive mental health data. For example, this might involve assessing whether a mood prediction algorithm appropriately manages patient consent requirements or ensuring diagnostic support systems comply with data privacy regulations in psychiatric settings. Guideline review examines adherence to professional and institutional standards in computational psychiatry. This includes evaluating compliance with frameworks such as the World Health Organization’s Ethics and Governance of Artificial Intelligence for Health (World Health Organization, 2021), as well as institution-specific psychiatric governance policies.
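
As an illustration of the technical evaluation process, the following sketch computes sensitivity and specificity of a screening classifier separately for each demographic group. The column names, grouping variable, and toy data are hypothetical and serve only to show the kind of subgroup comparison described above.

```python
# Minimal sketch of a subgroup performance check for a (hypothetical) depression-
# screening classifier: sensitivity and specificity computed per demographic group.
import pandas as pd
from sklearn.metrics import confusion_matrix

def subgroup_performance(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Return sensitivity/specificity per level of `group_col`.

    Expects columns: `y_true` (observed diagnosis, 0/1) and `y_pred` (model output, 0/1).
    """
    rows = []
    for group, sub in df.groupby(group_col):
        tn, fp, fn, tp = confusion_matrix(sub["y_true"], sub["y_pred"], labels=[0, 1]).ravel()
        rows.append({
            group_col: group,
            "n": len(sub),
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        })
    return pd.DataFrame(rows)

# Usage with toy, purely illustrative data (not study results):
toy = pd.DataFrame({
    "age_band": ["18-29", "18-29", "65+", "65+", "65+", "18-29"],
    "y_true":   [1, 0, 1, 1, 0, 1],
    "y_pred":   [1, 0, 0, 1, 0, 1],
})
print(subgroup_performance(toy, "age_band"))
```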

During Analysis, ethical values structure the evaluation of computational tools against healthcare’s regulatory, institutional, and professional requirements. Assessing computational models against predefined benchmarks for accuracy, sensitivity, specificity, and fairness ensures that potential biases and risks to patient safety are identified before deployment (beneficence and non-maleficence; Ball et al., Reference Ball, Kalinowski and Williams2020; Fusar-Poli et al., Reference Fusar-Poli, Manchia, Koutsouleris, Leslie, Woopen, Calkins and Andreassen2022). Validation and external assessment of model assumptions safeguard against methodological flaws that could compromise clinical decisions (scientific integrity; McCradden et al., Reference McCradden, Hui and Buchman2023; Monteith et al., Reference Monteith, Glenn, Geddes, Achtyes, Whybrow and Bauer2023). Ensuring compliance with healthcare laws and data protection frameworks, such as General Data Protection Regulation (GDPR) and HIPAA, prevents unauthorized data use and security breaches (privacy and confidentiality; D’Souza et al., Reference D’Souza, Mathew, Amanullah, Thornton, Mishra, Mohandas, Palatty and Surapaneni2024; Farmer, Lockwood, Goforth, & Thomas, Reference Farmer, Lockwood, Goforth and Thomas2024). Evaluating demographic performance variations in psychiatric assessments ensures that AI-driven tools do not reinforce inequities in access to care (justice and equity; Singhal et al., Reference Singhal, Cooke, Villareal, Stoddard, Lin and Dempsey2024; Upreti et al., Reference Upreti, Lind, Elmokashfi and Yazidi2024). Reviewing adherence to institutional and professional ethical guidelines safeguards patient autonomy and ensures that AI-assisted decision-making does not override informed consent policies (autonomy and informed consent; Ahmed & Hens, Reference Ahmed and Hens2022; Wouters et al., Reference Wouters, van der Horst, Aalfs, Bralten, Luykx and Zinkstok2024). Maintaining interpretability in analytic outputs allows clinicians and regulators to scrutinize AI decision-making before implementation (transparency and explainability; D’Souza et al., Reference D’Souza, Mathew, Amanullah, Thornton, Mishra, Mohandas, Palatty and Surapaneni2024; McCradden et al., Reference McCradden, Hui and Buchman2023).

Decision-making

The Decision-making stage translates ethical analysis into actionable implementation strategies through three key processes: strategy development, implementation planning, and consensus building. Strategy development formulates approaches to address ethical challenges, ensuring AI tools align with professional and patient-centered considerations, such as restricting suicide risk prediction models to clinician-mediated use to mitigate risks of self-directed harm or unnecessary emergency interventions. Implementation planning translates ethical requirements into structured operational protocols that guide clinical practice, including developing tiered consent mechanisms for depression screening algorithms that dynamically adjust based on patient cognitive capacity, ensuring informed decision-making at different stages of illness progression. Consensus building facilitates structured consultation and collaborative review among clinicians, ethicists, and individuals with lived experience to define ethically defensible risk thresholds in automated psychiatric screening, such as distinguishing passive suicidal ideation from active crisis situations that require immediate intervention. Through iterative stakeholder engagement, this process refines actionable criteria for when algorithmic predictions should trigger clinician review, ensuring decisions optimize predictive accuracy while maintaining patient autonomy, clinical feasibility, and ethical oversight.
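
The kind of clinician-mediated threshold logic that consensus building might produce can be expressed compactly in code. The sketch below is a hypothetical illustration: the tier labels and numeric cutoffs are placeholders to be set through the stakeholder process described above, not validated clinical thresholds.

```python
# Illustrative routing of a (hypothetical) suicide-risk score and ideation type to a
# clinician-mediated response tier; thresholds are placeholders, not clinical cutoffs.
from enum import Enum

class ResponseTier(Enum):
    ROUTINE_FOLLOW_UP = "routine follow-up at next scheduled contact"
    CLINICIAN_REVIEW = "flag for clinician review within 24 hours"
    URGENT_ASSESSMENT = "immediate clinician-led crisis assessment"

def route_alert(risk_score: float, active_ideation: bool,
                review_threshold: float = 0.4, urgent_threshold: float = 0.8) -> ResponseTier:
    """Route an algorithmic prediction to a clinician-mediated response tier."""
    if active_ideation or risk_score >= urgent_threshold:
        return ResponseTier.URGENT_ASSESSMENT
    if risk_score >= review_threshold:
        return ResponseTier.CLINICIAN_REVIEW
    return ResponseTier.ROUTINE_FOLLOW_UP

print(route_alert(0.55, active_ideation=False))  # -> ResponseTier.CLINICIAN_REVIEW
```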

During the Decision-making stage, ethical values structure the translation of analysis into actionable plans, ensuring that strategic interventions align with patient safety, clinical integrity, and equitable outcomes. Defining intervention thresholds and protocols ensures that suicide risk predictions lead to appropriate clinical responses, balancing proactive intervention with harm prevention (beneficence and non-maleficence; Torous et al., Reference Torous, Bucci, Bell, Kessing, Faurholt-Jepsen, Whelan and Firth2021; Wang et al., Reference Wang, Wu, Zhang, He and Huang2024). Developing structured validation standards and operational procedures safeguards against methodological inconsistencies, ensuring AI applications maintain reliability across psychiatric settings (scientific integrity; Kirtley et al., Reference Kirtley, van Mens, Hoogendoorn, Kapur and de Beurs2022; Monteith et al., Reference Monteith, Glenn, Geddes, Achtyes, Whybrow and Bauer2023). Establishing consent frameworks through stakeholder collaboration ensures that AI-driven recommendations respect patient autonomy across varying cognitive capacities (autonomy and informed consent; Jacobson et al., Reference Jacobson, Bentley, Walton, Wang, Fortgang, Millner and Coppersmith2020; Wouters et al., Reference Wouters, van der Horst, Aalfs, Bralten, Luykx and Zinkstok2024). Defining equitable intervention criteria prevents disparities in AI-assisted psychiatric care, ensuring that deployment procedures address demographic considerations (justice and equity; Koutsouleris, Hauser, Skvortsova, & De Choudhury, Reference Koutsouleris, Hauser, Skvortsova and De Choudhury2022; Starke et al., Reference Starke, De Clercq, Borgwardt and Elger2021). Determining data access levels and security protocols ensures psychiatric information remains protected against breaches and unauthorized use (privacy and confidentiality; Parziale & Mascalzoni, Reference Parziale and Mascalzoni2022; Upreti et al., Reference Upreti, Lind, Elmokashfi and Yazidi2024). Establishing clear guidelines for communicating AI system outputs ensures that stakeholders can accurately interpret AI-generated insights, thereby reducing ambiguity and informing structured decision-making (transparency and explainability; Gültekin & Şahin, Reference Gültekin and Şahin2024; Torous et al., Reference Torous, Bucci, Bell, Kessing, Faurholt-Jepsen, Whelan and Firth2021). By embedding ethical values into decision-making at every stage, this structured framework ensures that computational psychiatry tools align with professional standards while maintaining stakeholder trust and clinical applicability.

Implementation

The Implementation stage transitions from decision-making to active deployment, integrating computational psychiatry tools into clinical workflows through three interconnected processes: clinical integration, staff training, and performance monitoring. Clinical integration embeds computational tools into existing mental healthcare systems through systematic protocol development and infrastructure integration. For example, implementing a psychosis relapse prediction algorithm requires three key components. First, establishing empirically validated clinical triggers based on quantifiable indicators (e.g. appointment adherence and medication compliance). Second, securely integrating outputs with electronic health record systems while maintaining data integrity. Third, structuring evidence-based response pathways that escalate from automated surveillance to structured clinical assessments and expedited psychiatric intervention within established governance frameworks. Staff training develops clinical competency in three areas: first, interpreting quantitative risk scores within clinical contexts, incorporating confidence intervals and limitation awareness; second, integrating algorithmic insights with clinical expertise, emphasizing systematic evaluation against patient-specific factors; and third, communicating AI-generated outputs while maintaining the therapeutic alliance. Performance monitoring establishes comprehensive evaluation frameworks to track predictive accuracy, monitor unintended consequences (e.g. demographic disparities), assess care quality impact, and ensure ongoing ethical compliance.
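
As a concrete illustration of clinical integration, the sketch below shows how quantifiable indicators drawn from the electronic health record might be combined with a model output into the staged response pathway described above. Indicator names, weights, and cutoffs are assumptions for illustration, not empirically validated triggers.

```python
# Illustrative trigger logic for a (hypothetical) psychosis relapse-prediction deployment:
# EHR-derived indicators plus the model output map to an escalation level.
from dataclasses import dataclass

@dataclass
class AdherenceSnapshot:
    missed_appointments_90d: int        # missed appointments in the last 90 days
    medication_possession_ratio: float  # 0.0-1.0, from pharmacy refill records
    model_relapse_probability: float    # output of the relapse-prediction model

def escalation_level(s: AdherenceSnapshot) -> str:
    """Map indicators to the staged pathway: surveillance -> assessment -> psychiatric review."""
    if s.model_relapse_probability >= 0.7 and (
        s.missed_appointments_90d >= 2 or s.medication_possession_ratio < 0.6
    ):
        return "expedited psychiatric review"
    if s.model_relapse_probability >= 0.4:
        return "structured clinical assessment"
    return "continue automated surveillance"

print(escalation_level(AdherenceSnapshot(3, 0.5, 0.75)))  # -> expedited psychiatric review
```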

In the Implementation stage, ethical values guide the active deployment of computational psychiatry tools through clinical integration, staff training, and performance monitoring. Clinical teams implement alert systems and response pathways using predetermined risk thresholds, ensuring AI supports rather than overrides clinical judgment (beneficence and non-maleficence; Monaco et al., Reference Monaco, Vignapiano, Piacente, Pagano, Mancuso, Steardo and Corrivetti2024; Tabb & Lemoine, Reference Tabb and Lemoine2021). Validation procedures and quality control measures become part of routine clinical practice through systematic staff training (scientific integrity; Sultan, Scholz, & van den Bos, Reference Sultan, Scholz and van den Bos2023; Zhang et al., Reference Zhang, Scandiffio, Younus, Jeyakumar, Karsan, Charow and Wiljer2023). Dynamic consent processes roll out with documentation systems that adapt to varying levels of patient cognitive capacity throughout illness phases (autonomy and informed consent; Ball et al., Reference Ball, Kalinowski and Williams2020; Wouters et al., Reference Wouters, van der Horst, Aalfs, Bralten, Luykx and Zinkstok2024). Monitoring systems track utilization patterns as standardized access protocols take effect, ensuring equitable tool deployment across patient populations (justice and equity; Lewis et al., Reference Lewis, Chisholm, Connolly, Esplin, Glessner, Gordon and Fullerton2024; Wang et al., Reference Wang, Wu, Zhang, He and Huang2024). Clinical staff receive training in security protocols while system-level protections and access controls safeguard sensitive psychiatric data (privacy and confidentiality; Upreti et al., Reference Upreti, Lind, Elmokashfi and Yazidi2024; Wray et al., Reference Wray, Lin, Austin, McGrath, Hickie, Murray and Visscher2021). Clear formats for sharing AI outputs roll out alongside training in effective communication methods, maintaining transparency throughout implementation (transparency and explainability; Levkovich, Shinan-Altman, & Elyoseph, Reference Levkovich, Shinan-Altman and Elyoseph2024; Wiese & Friston, Reference Wiese and Friston2022). For deep learning systems where algorithmic interpretability is not possible, transparency focuses on post-hoc explanations (e.g. SHapley Additive exPlanations and Local Interpretable Model-agnostic Explanations) and model behavior rather than step-by-step algorithmic logic (Lundberg & Lee, Reference Lundberg and Lee2017; Ribeiro, Singh, & Guestrin, Reference Ribeiro, Singh and Guestrin2016). This coordinated integration of ethical principles into clinical practice transforms theoretical frameworks and planned protocols into living systems that actively safeguard and enhance psychiatric care delivery.
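
To make the post-hoc explanation step concrete, the following sketch uses the shap library (assuming a recent version) to attribute a single prediction of a hypothetical risk model to its input features. The feature names, synthetic data, and model are illustrative only; in deployment such an explanation would accompany the clinician-facing output.

```python
# Minimal sketch of post-hoc explanation with SHAP for a (hypothetical) risk model.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["phq9_total", "sleep_hours", "prior_admissions", "missed_appointments"]
X_background = rng.normal(size=(200, len(feature_names)))       # synthetic background data
y = (X_background[:, 0] + X_background[:, 2] > 0).astype(int)   # synthetic labels for the demo

model = GradientBoostingClassifier().fit(X_background, y)

# Model-agnostic explainer over the predicted probabilities.
explainer = shap.Explainer(model.predict_proba, X_background)
x_patient = rng.normal(size=(1, len(feature_names)))
explanation = explainer(x_patient)

# Per-feature contributions to the positive-class probability for this patient.
for name, value in zip(feature_names, explanation.values[0, :, 1]):
    print(f"{name}: {value:+.3f}")
```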

Review and reflection

The Review and reflection stage provides a structured approach to evaluating real-world impact and refining AI tools over time, examining performance through outcome assessment, performance evaluation, and framework refinement to ensure continuous improvement. Outcome assessment systematically measures the impact of implemented computational psychiatry tools using quantitative and qualitative metrics, tracking clinical outcomes, such as symptom improvement rates, treatment adherence, and changes in healthcare utilization. For example, when assessing an automated depression screening system in primary care, outcome assessment would measure time-to-treatment initiation, referral success rates, patient-reported symptom changes at 6 months, and emergency department utilization for mental health crises. Performance evaluation examines prediction accuracy, response times, and clinical workflow integration. In a psychosis relapse prediction system, this involves assessing how accurately the model identifies early warning signs, how quickly clinical teams respond to alerts, and whether predictions integrate smoothly into routine assessments without disrupting existing workflows. Framework refinement updates protocols based on implementation experience and emerging evidence, such as modifying alert thresholds to reduce false positives or adjusting clinical workflows based on staff feedback. By continuously adapting predictive models, clinical integration protocols, and risk management frameworks, this stage ensures computational psychiatry tools evolve ethically and remain clinically responsible. For example, if an algorithm achieves 85% accuracy in detecting early warning signs but overnight alerts take significantly longer to receive a clinical response, framework refinement would focus on optimizing workflow protocols during off-hours to improve efficiency and clinical impact.
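
A minimal sketch of such review-stage monitoring is shown below: it summarizes alert response latency by time of day and the share of alerts not followed by a confirmed event, using an assumed alert-log structure whose column names are illustrative rather than prescriptive.

```python
# Illustrative review-stage monitoring over a (hypothetical) alert log.
import pandas as pd

def review_metrics(alert_log: pd.DataFrame) -> dict:
    """Summarise alert performance for the Review and reflection stage.

    Expects columns: `alert_time` (datetime), `response_time` (datetime),
    and `confirmed_event` (bool, whether the predicted event actually occurred).
    """
    log = alert_log.copy()
    log["response_minutes"] = (log["response_time"] - log["alert_time"]).dt.total_seconds() / 60
    log["overnight"] = log["alert_time"].dt.hour.isin(range(0, 7))
    return {
        # Share of alerts not followed by a confirmed event (false-alert fraction).
        "false_alert_fraction": 1 - log["confirmed_event"].mean(),
        "median_response_min_daytime": log.loc[~log["overnight"], "response_minutes"].median(),
        "median_response_min_overnight": log.loc[log["overnight"], "response_minutes"].median(),
    }

# Usage with toy, purely illustrative log entries:
toy_log = pd.DataFrame({
    "alert_time": pd.to_datetime(["2025-01-01 02:00", "2025-01-01 14:00", "2025-01-02 03:30"]),
    "response_time": pd.to_datetime(["2025-01-01 05:10", "2025-01-01 14:25", "2025-01-02 07:00"]),
    "confirmed_event": [False, True, True],
})
print(review_metrics(toy_log))
```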

During the Review and reflection stage, ethical values guide the evaluation of how computational psychiatry tools performed, ensuring that their real-world impact aligns with ethical principles and clinical objectives. Assessing documented clinical effects and incident reports ensures that AI-driven interventions minimize harm while optimizing patient care (beneficence and non-maleficence; Levkovich et al., Reference Levkovich, Shinan-Altman and Elyoseph2024; Torous et al., Reference Torous, Bucci, Bell, Kessing, Faurholt-Jepsen, Whelan and Firth2021). Reviewing performance data identifies model reliability across diverse clinical settings, ensuring psychiatric AI applications maintain scientific rigor (scientific integrity; Kleine et al., Reference Kleine, Lermer, Cecil, Heinrich and Gaube2023; Monteith et al., Reference Monteith, Glenn, Geddes, Achtyes, Whybrow and Bauer2023). Examining consent records evaluates whether patients engaged meaningfully with AI-assisted care, ensuring informed decision-making was upheld (autonomy and informed consent; Davidson, Reference Davidson2022; Zidaru et al., Reference Zidaru, Morrow and Stockley2021). Evaluating demographic trends in patient outcomes ensures that psychiatric AI tools do not reinforce disparities in treatment access and effectiveness (justice and equity; Lewis et al., Reference Lewis, Chisholm, Connolly, Esplin, Glessner, Gordon and Fullerton2024; Wang et al., Reference Wang, Wu, Zhang, He and Huang2024). Reviewing security protocols and access logs ensures that psychiatric data remains protected against breaches and misuse (privacy and confidentiality; Upreti et al., Reference Upreti, Lind, Elmokashfi and Yazidi2024; Wray et al., Reference Wray, Lin, Austin, McGrath, Hickie, Murray and Visscher2021). Analyzing stakeholder feedback on AI applications assesses whether decision-making processes remained transparent, interpretable, and clinically actionable (transparency and explainability; Kline, Prichett, McKim, & Palm Reed, Reference Kline, Prichett, McKim and Palm Reed2023; Wiese & Friston, Reference Wiese and Friston2022). Without structured ethical oversight, AI risks amplifying biases, diminishing clinical accountability, and undermining patient trust in mental healthcare. The IEACP framework provides an essential ethical infrastructure to ensure computational psychiatry advances in a way that prioritizes transparency, equity, and patient autonomy. Table 2 operationalizes all IEACP framework processes, providing step-by-step implementation guidance with systematic cultural integration that transforms the conceptual framework into actionable procedures for clinical practice.

Table 2. Operationalization of the Integrated Ethical Approach for Computational Psychiatry (IEACP) framework with cultural adaptations

Note: This table operationalizes the IEACP framework across all 15 processes, providing step-by-step implementation guidance with systematic cultural integration for each ethical principle.

Ethical values: AIC, autonomy/informed consent; BN, beneficence/non-maleficence; JE, justice/equity; PC, privacy/confidentiality; TE, transparency/explainability; SIV, scientific integrity/validity.

Technical terms: AI, artificial intelligence; APA, American Psychological Association; DSM, Diagnostic and Statistical Manual of Mental Disorders; FDA, Food and Drug Administration; HIPAA, Health Insurance Portability and Accountability Act; IRB, Institutional Review Board; IT, information technology; WHO, World Health Organization.

a These thresholds were developed to support practical decision-making and are not intended as rigid cutoffs. They reflect the cumulative risk approach discussed in prior ethical frameworks for digital and AI-enabled mental healthcare (Ball et al., Reference Ball, Kalinowski and Williams2020; Vayena et al., Reference Vayena, Blasimme and Cohen2018).

Patient and lived experience involvement methods

Ensuring computational psychiatry tools align with patient needs requires meaningful involvement, particularly for individuals with severe mental illness (Zima, Edgcomb, & Fortuna, Reference Zima, Edgcomb and Fortuna2024). Methods include adapted consent processes, codesign initiatives, lived experience advisory panels, peer-led focus groups, and representation in ethical oversight committees. Key considerations are outlined in Table 3.

Table 3. Patient considerations within the IEACP framework

Framework applications across computational psychiatry contexts

To illustrate the framework’s utility and adaptability, we examined its application across diverse computational psychiatry contexts. To that end, we analyzed two recent studies employing machine learning for mental health applications. Curtiss et al. (Reference Curtiss, Smoller and Pedrelli2024) developed an ensemble machine learning approach to optimize second-step depression treatment selection. Their study used data from 1,439 patients in the Sequenced Treatment Alternatives to Relieve Depression trial who had failed to achieve remission with initial antidepressant treatment. The authors created models to predict outcomes for seven different second-step treatments, highlighting the complexity of personalized treatment selection in psychiatry. Grimland et al. (Reference Grimland, Benatov, Yeshayahu, Izmaylov, Segal, Gal and Levi-Belz2024) focused on real-time suicide risk prediction in crisis hotline chats. They analyzed 17,654 chat sessions using natural language processing and a theory-driven lexicon of suicide risk factors. To demonstrate a concrete framework application, Table 4 presents illustrative scenarios that extrapolate from documented study limitations and established patterns of algorithmic bias in psychiatric populations.

Table 4. Application of the Integrated Ethical Approach for Computational Psychiatry (IEACP) framework

Note: Framework applications represent illustrative scenarios based on study limitations and known disparities in psychiatric AI to demonstrate concrete ethical decision-making processes. These examples are designed to show how the IEACP framework would guide decision-making in realistic clinical situations.

Ethical values: AIC, autonomy/informed consent; BN, beneficence/non-maleficence; JE, justice/equity; PC, privacy/confidentiality; TE, transparency/explainability; SIV, scientific integrity/validity.

Technical and clinical terms: APA, American Psychiatric Association; AUC, area under curve; CT, clinical trials; EHR, electronic health record; FDA, US Food and Drug Administration; HIPAA, Health Insurance Portability and Accountability Act; HITECH, Health Information Technology for Economic and Clinical Health Act; IRB, Institutional Review Board; SR-BERT, Suicide Risk–Bidirectional Encoder Representations From Transformers; STAR*D, Sequenced Treatment Alternatives to Relieve Depression; VPN, virtual private network.

The framework applications demonstrate the IEACP’s capacity to provide structured ethical guidance for computational psychiatry implementation. To contextualize this contribution within the existing landscape of AI ethics frameworks, we compare the IEACP with prominent approaches, including the WHO’s Ethics and Governance of AI for Health (World Health Organization, 2021), AI4People’s Ethical Framework (Floridi et al., Reference Floridi, Cowls, Beltrametti, Chatila, Chazerand, Dignum and Vayena2018), and IEEE’s Ethically Aligned Design. Table 5 illustrates how the IEACP addresses gaps in existing frameworks by providing the first procedural approach specifically designed for computational psychiatry contexts.

Table 5. Ethical frameworks for AI: A comparative analysis featuring the IEACP

Note: This table compares four ethical frameworks for AI, emphasizing the distinctive procedural and domain-specific features of the Integrated Ethical Approach for Computational Psychiatry (IEACP). While these initiatives are commonly termed ‘frameworks’ in the AI ethics literature, they vary significantly in procedural specificity and implementation guidance, with the IEACP being the only approach providing structured decision-making processes for clinical implementation. Dimensions reflect core ethical values, structural orientation, clinical applicability, and stakeholder integration.

Abbreviations: AI, artificial intelligence; AI/AS, artificial intelligence and autonomous systems; IEEE EAD, Institute of Electrical and Electronics Engineers – Ethically Aligned Design; LMIC, low- and middle-income countries.

Discussion

AI’s growing role in psychiatric care creates unresolved ethical challenges around patient autonomy, algorithmic bias, and clinical accountability. The IEACP framework addresses these challenges by integrating six core ethical values with a structured five-stage process. Unlike existing generalist guidelines for psychiatric ethics (Solanki et al., Reference Solanki, Grundy and Hussain2023; World Health Organization, 2021), the IEACP provides psychiatry-specific guidance, addressing the complexity of fluctuating mental states, clinician-patient dynamics, and the biopsychosocial model of care.

The framework’s dynamic ethical governance acknowledges that challenges in computational psychiatry evolve alongside AI advances and shifting psychiatric paradigms (Barnett et al., Reference Barnett, Torous, Staples, Sandoval, Keshavan and Onnela2018; Starke et al., Reference Starke, De Clercq, Borgwardt and Elger2021). For example, existing AI ethics frameworks often assume stable patient autonomy (Ploug & Holm, Reference Ploug and Holm2016; Vayena et al., Reference Vayena, Blasimme and Cohen2018), whereas IEACP explicitly accounts for fluctuating decision-making capacities in psychiatric populations. Additionally, by incorporating stakeholder mapping and real-time performance monitoring, IEACP moves beyond static, principle-based ethical models to an adaptive framework that can accommodate emerging challenges, such as digital phenotyping, predictive psychiatry, and personalized treatment algorithms (Fusar-Poli et al., Reference Fusar-Poli, Manchia, Koutsouleris, Leslie, Woopen, Calkins and Andreassen2022; McCradden et al., Reference McCradden, Hui and Buchman2023).

A key limitation of the IEACP framework is that it has not yet been empirically validated in real-world psychiatric AI applications. Although derived from a systematic analysis of 83 studies, the framework’s practical utility remains untested. However, similar principle-based ethical frameworks have been widely adopted based on their theoretical grounding rather than empirical validation (Floridi & Cowls, Reference Floridi and Cowls2019; McCradden et al., Reference McCradden, Hui and Buchman2023; Mittelstadt, Reference Mittelstadt2019). The structured, principle-based nature of IEACP ensures its relevance in addressing AI ethics in psychiatry, even in the absence of direct validation.

To bridge this gap, future research should pilot-test the IEACP framework in psychiatric AI implementation. A mixed-method evaluation could involve (1) clinician decision-making studies assessing its applicability in guiding AI-assisted psychiatric interventions, (2) stakeholder engagement studies incorporating patient and clinician perspectives, and (3) real-world validation through case applications in clinical psychiatry. Establishing an iterative refinement process through empirical testing would enhance the framework’s adaptability and ensure alignment with evolving computational psychiatry challenges. Such research will be critical in advancing the IEACP from a theoretically grounded model to an empirically validated tool for ethical AI integration in psychiatric practice.

Conclusion

Computational psychiatry requires ethical approaches that balance innovation with human-centered care. The IEACP framework offers a systematic method for addressing ethical challenges in AI-driven psychiatric applications. Unlike retrospective evaluations, it embeds ethical considerations into AI development, reducing oversight risks and aligning with bioethical principles. Its design supports immediate use and future adaptation. Pilot studies and refinement will enhance its applicability. IEACP may catalyze domain-specific ethical frameworks that preserve psychiatry’s human foundations.

Supplementary material

The supplementary material for this article can be found at http://doi.org/10.1017/S0033291725101311.

Funding statement

This research received no specific grant from any funding agency, commercial, or not-for-profit sectors.

Competing interests

The authors declare none.

References

Ahmed, E., & Hens, K. (2022). Microbiome in precision psychiatry: An overview of the ethical challenges regarding microbiome big data and microbiome-based interventions. AJOB Neuroscience, 13(4), 270–286. https://doi.org/10.1080/21507740.2021.1958096
Ball, T. M., Kalinowski, A., & Williams, L. M. (2020). Ethical implementation of precision psychiatry. Personalized Medicine in Psychiatry, 19–20, 100046. https://doi.org/10.1016/j.pmip.2019.05.003
Barnett, I., Torous, J., Staples, P., Sandoval, L., Keshavan, M., & Onnela, J.-P. (2018). Relapse prediction in schizophrenia through digital phenotyping: A pilot study. Neuropsychopharmacology, 43(8), 1660–1666. https://doi.org/10.1038/s41386-018-0030-z
Beauchamp, T. L., & Childress, J. F. (2001). Principles of biomedical ethics. Oxford University Press.
Capon, H., Hall, W., Fry, C., & Carter, A. (2016). Realising the technological promise of smartphones in addiction research and treatment: An ethical review. International Journal of Drug Policy, 36, 47–57. https://doi.org/10.1016/j.drugpo.2016.05.013
Chin-Yee, B., & Upshur, R. (2018). Clinical judgement in the era of big data and predictive analytics. Journal of Evaluation in Clinical Practice, 24(3), 638–645. https://doi.org/10.1111/jep.12852
Clarke, N., Foltz, P., & Garrard, P. (2020). How to do things with (thousands of) words: Computational approaches to discourse analysis in Alzheimer’s disease. Cortex, 129, 446–463. https://doi.org/10.1016/j.cortex.2020.05.001
Coley, R. Y., Johnson, E., Simon, G. E., Cruz, M., & Shortreed, S. M. (2021). Racial/ethnic disparities in the performance of prediction models for death by suicide after mental health visits. JAMA Psychiatry, 78(7), 726–734. https://doi.org/10.1001/jamapsychiatry.2021.0493
Corlett, P. R., & Fletcher, P. C. (2014). Computational psychiatry: A Rosetta stone linking the brain to mental illness. The Lancet Psychiatry, 1(5), 399–402. https://doi.org/10.1016/S2215-0366(14)70298-6
Curtiss, J., Smoller, J. W., & Pedrelli, P. (2024). Optimizing precision medicine for second-step depression treatment: A machine learning approach. Psychological Medicine, 54(10), 2361–2368. https://doi.org/10.1017/S0033291724000497
Davidson, B. I. (2022). The crossroads of digital phenotyping. General Hospital Psychiatry, 74, 126–132. https://doi.org/10.1016/j.genhosppsych.2020.11.009
D’Souza, R. F., Mathew, M., Amanullah, S., Thornton, J. E., Mishra, V., Mohandas, E., Palatty, P. L., & Surapaneni, K. M. (2024). Navigating merits and limits on the current perspectives and ethical challenges in the utilization of artificial intelligence in psychiatry—An exploratory mixed methods study. Asian Journal of Psychiatry, 97, 104067. https://doi.org/10.1016/j.ajp.2024.104067
Dwyer, D., & Koutsouleris, N. (2022). Annual research review: Translational machine learning for child and adolescent psychiatry. Journal of Child Psychology and Psychiatry, 63(4), 421–443. https://doi.org/10.1111/jcpp.13545
Espejo, G., Reiner, W., & Wenzinger, M. (2023). Exploring the role of artificial intelligence in mental healthcare: Progress, pitfalls, and promises. Cureus, 15(9), e44748. https://doi.org/10.7759/cureus.44748
European Union. (2016). Regulation (EU) 2016/679 (General Data Protection Regulation). Official Journal of the European Union, L 119, 1–88.
Farmer, R. L., Lockwood, A. B., Goforth, A., & Thomas, C. (2024). Artificial intelligence in practice: Opportunities, challenges, and ethical considerations. Professional Psychology: Research and Practice, 56(1), 19–32. https://doi.org/10.1037/pro0000595
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1), 535–545. https://doi.org/10.1162/99608f92.8cd550d1
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198237907.001.0001
Friston, K. (2023). Computational psychiatry: From synapses to sentience. Molecular Psychiatry, 28(1), 256–268. https://doi.org/10.1038/s41380-022-01743-z
Fusar-Poli, P., Manchia, M., Koutsouleris, N., Leslie, D., Woopen, C., Calkins, M. E., … Andreassen, O. A. (2022). Ethical considerations for precision psychiatry: A roadmap for research and clinical practice. European Neuropsychopharmacology, 63, 17–34. https://doi.org/10.1016/j.euroneuro.2022.08.001
Gilkinson, C., Kinney, M., Olaniyan, T., Murtala, B., Sipilon, M., Malunga, A., … Shankar, R. (2022). Perceptions about mental healthcare for people with epilepsy in Africa. Epilepsy & Behavior, 127, 108504. https://doi.org/10.1016/j.yebeh.2021.108504
Gillon, R. (2003). Ethics needs principles—Four can encompass the rest—And respect for autonomy should be “first among equals”. Journal of Medical Ethics, 29(5), 307. https://doi.org/10.1136/jme.29.5.307
Gooding, P., & Kariotis, T. (2021). Ethics and law in research on algorithmic and data-driven technology in mental health care: Scoping review. JMIR Mental Health, 8(6), e24668. https://doi.org/10.2196/24668
Grimland, M., Benatov, J., Yeshayahu, H., Izmaylov, D., Segal, A., Gal, K., & Levi-Belz, Y. (2024). Predicting suicide risk in real-time crisis hotline chats integrating machine learning with psychological factors: Exploring the black box. Suicide and Life-Threatening Behavior, 54(3), 416–424. https://doi.org/10.1111/sltb.13056
Gültekin, M., & Şahin, M. (2024). The use of artificial intelligence in mental health services in Turkey: What do mental health professionals think? Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 18(1). https://doi.org/10.5817/CP2024-1-6
Health Insurance Portability and Accountability Act of 1996, Pub. L. No. 104–191, 42 U.S.C. § 1320d et seq. (1996).
Hurley, M. E., Sonig, A., Herrington, J., Storch, E. A., Lázaro-Muñoz, G., Blumenthal-Barby, J., & Kostick-Quenet, K. (2024). Ethical considerations for integrating multimodal computer perception and neurotechnology. Frontiers in Human Neuroscience, 18, 1332451. https://doi.org/10.3389/fnhum.2024.1332451
Jabareen, Y. (2009). Building a conceptual framework: Philosophy, definitions, and procedure. International Journal of Qualitative Methods, 8(4), 49–62. https://doi.org/10.1177/160940690900800406
Jacobson, N. C., Bentley, K. H., Walton, A., Wang, S. B., Fortgang, R. G., Millner, A. J., … Coppersmith, D. D. L. (2020). Ethical dilemmas posed by mobile health and machine learning in psychiatry research. Bulletin of the World Health Organization, 98(4), 270–276. https://doi.org/10.2471/BLT.19.237107
Juengst, E., McGowan, M. L., Fishman, J. R., & Settersten, R. A. (2016). From “personalized” to “precision” medicine: The ethical and social implications of rhetorical reform in genomic medicine. Hastings Center Report, 46(5), 21–33. https://doi.org/10.1002/hast.614
Khanna, R., Robinson, N., O’Donnell, M., Eyre, H., & Smith, E. (2022). Affective computing in psychotherapy. Advances in Psychiatry and Behavioral Health, 2(1), 95–105. https://doi.org/10.1016/j.ypsc.2022.05.006
Kirtley, O. J., van Mens, K., Hoogendoorn, M., Kapur, N., & de Beurs, D. (2022). Translating promise into practice: A review of machine learning in suicide research and prevention. The Lancet Psychiatry, 9(3), 243–252. https://doi.org/10.1016/S2215-0366(21)00254-6
Kleine, A.-K., Lermer, E., Cecil, J., Heinrich, A., & Gaube, S. (2023). Advancing mental health care with AI-enabled precision psychiatry tools: A patent review. Computers in Human Behavior Reports, 12, 100322. https://doi.org/10.1016/j.chbr.2023.100322
Kline, N. K., Prichett, B., McKim, K. G., & Palm Reed, K. (2023). Interpersonal emotion regulation in betrayal trauma survivors: A preliminary qualitative exploration. Journal of Aggression, Maltreatment and Trauma, 32(4), 631–649. https://doi.org/10.1080/10926771.2022.2133658
Koutsouleris, N., Hauser, T. U., Skvortsova, V., & De Choudhury, M. (2022). From promise to practice: Towards the realisation of AI-informed mental health care. The Lancet Digital Health, 4(11), e829–e840. https://doi.org/10.1016/S2589-7500(22)00153-4
Leslie, D., Mazumder, A., Peppin, A., Wolters, M. K., & Hagerty, A. (2021). Does “AI” stand for augmenting inequality in the era of covid-19 healthcare? BMJ, 372, n304. https://doi.org/10.1136/bmj.n304
Levkovich, I., Shinan-Altman, S., & Elyoseph, Z. (2024). Can large language models be sensitive to culture suicide risk assessment? Journal of Cultural Cognitive Science, 8(3), 275–287. https://doi.org/10.1007/s41809-024-00151-9
Lewis, A. C. F., Chisholm, R. L., Connolly, J. J., Esplin, E. D., Glessner, J., Gordon, A., … Fullerton, S. M. (2024). Managing differential performance of polygenic risk scores across groups: Real-world experience of the eMERGE network. The American Journal of Human Genetics, 111(6), 999–1005. https://doi.org/10.1016/j.ajhg.2024.04.005
Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Advances in neural information processing systems, 30. Curran Associates, Inc. https://proceedings.neurips.cc/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html
McCradden, M., Hui, K., & Buchman, D. Z. (2023). Evidence, ethics and the promise of artificial intelligence in psychiatry. Journal of Medical Ethics, 49(8), 573. https://doi.org/10.1136/jme-2022-108447
Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507. https://doi.org/10.1038/s42256-019-0114-4
Monaco, F., Vignapiano, A., Piacente, M., Pagano, C., Mancuso, C., Steardo, L., … Corrivetti, G. (2024). An advanced artificial intelligence platform for a personalised treatment of eating disorders. Frontiers in Psychiatry, 15, 1414439. https://doi.org/10.3389/fpsyt.2024.1414439
Monosov, I. E., Zimmermann, J., Frank, M. J., Mathis, M. W., & Baker, J. T. (2024). Ethological computational psychiatry: Challenges and opportunities. Current Opinion in Neurobiology, 86, 102881. https://doi.org/10.1016/j.conb.2024.102881
Monteith, S., Glenn, T., Geddes, J. R., Achtyes, E. D., Whybrow, P. C., & Bauer, M. (2023). Challenges and ethical considerations to successfully implement artificial intelligence in clinical medicine and neuroscience: A narrative review. Pharmacopsychiatry, 56(6), 209–213. https://doi.org/10.1055/a-2142-9325
Naslund, J. A., Aschbrenner, K. A., Araya, R., Marsch, L. A., Unützer, J., Patel, V., & Bartels, S. J. (2017). Digital technology for treating and preventing mental disorders in low-income and middle-income countries: A narrative review of the literature. The Lancet Psychiatry, 4(6), 486–500. https://doi.org/10.1016/S2215-0366(17)30096-2
Parziale, A., & Mascalzoni, D. (2022). Digital biomarkers in psychiatric research: Data protection qualifications in a complex ecosystem. Frontiers in Psychiatry, 13, 873392. https://doi.org/10.3389/fpsyt.2022.873392
Petch, J., Di, S., & Nelson, W. (2022). Opening the black box: The promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology, 38(2), 204–213. https://doi.org/10.1016/j.cjca.2021.09.004
Ploug, T., & Holm, S. (2016). Meta consent – A flexible solution to the problem of secondary use of health data. Bioethics, 30(9), 721–732. https://doi.org/10.1111/bioe.12286
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 1135–1144). New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2939672.2939778
Roy, A. L. (2017). Innovation or violation? Leveraging mobile technology to conduct socially responsible community research. American Journal of Community Psychology, 60(3–4), 385–390. https://doi.org/10.1002/ajcp.12187
Sahin, D., Kambeitz-Ilankovic, L., Wood, S., Dwyer, D., Upthegrove, R., Salokangas, R., … PRONIA Study Group. (2024). Algorithmic fairness in precision psychiatry: Analysis of prediction models in individuals at clinical high risk for psychosis. British Journal of Psychiatry, 224(2), 55–65. https://doi.org/10.1192/bjp.2023.141
Salazar de Pablo, G., Studerus, E., Vaquerizo-Serrano, J., Irving, J., Catalan, A., Oliver, D., … Fusar-Poli, P. (2021). Implementing precision psychiatry: A systematic review of individualized prediction models for clinical practice. Schizophrenia Bulletin, 47(2), 284–297. https://doi.org/10.1093/schbul/sbaa120
Singhal, S., Cooke, D. L., Villareal, R. I., Stoddard, J. J., Lin, C.-T., & Dempsey, A. G. (2024). Machine learning for mental health: Applications, challenges, and the clinician’s role. Current Psychiatry Reports, 26, 694–702. https://doi.org/10.1007/s11920-024-01561-w
Solanki, P., Grundy, J., & Hussain, W. (2023). Operationalising ethics in artificial intelligence for healthcare: A framework for AI developers. AI and Ethics, 3(1), 223–240. https://doi.org/10.1007/s43681-022-00195-z
Starke, G., De Clercq, E., Borgwardt, S., & Elger, B. S. (2021). Computing schizophrenia: Ethical challenges for machine learning in psychiatry. Psychological Medicine, 51(15), 2515–2521. https://doi.org/10.1017/S0033291720001683
Sultan, M., Scholz, C., & van den Bos, W. (2023). Leaving traces behind: Using social media digital trace data to study adolescent wellbeing. Computers in Human Behavior Reports, 10, 100281. https://doi.org/10.1016/j.chbr.2023.100281
Tabb, K., & Lemoine, M. (2021). The prospects of precision psychiatry. Theoretical Medicine and Bioethics, 42(5), 193–210. https://doi.org/10.1007/s11017-022-09558-3
Tekin, Ş. (2014). Psychiatric taxonomy: At the crossroads of science and ethics. Journal of Medical Ethics, 40(8), 513. https://doi.org/10.1136/medethics-2014-102339
Torous, J., Bucci, S., Bell, I. H., Kessing, L. V., Faurholt-Jepsen, M., Whelan, P., … Firth, J. (2021). The growing field of digital psychiatry: Current evidence and the future of apps, social media, chatbots, and virtual reality. World Psychiatry, 20(3), 318–335. https://doi.org/10.1002/wps.20883
Upreti, R., Lind, P. G., Elmokashfi, A., & Yazidi, A. (2024). Trustworthy machine learning in the context of security and privacy. International Journal of Information Security, 23(3), 2287–2314. https://doi.org/10.1007/s10207-024-00813-3
Uusitalo, S., Tuominen, J., & Arstila, V. (2021). Mapping out the philosophical questions of AI and clinical practice in diagnosing and treating mental disorders. Journal of Evaluation in Clinical Practice, 27(3), 478–484. https://doi.org/10.1111/jep.13485
Vayena, E., Blasimme, A., & Cohen, I. G. (2018). Machine learning in medicine: Addressing ethical challenges. PLoS Medicine, 15(11), e1002689. https://doi.org/10.1371/journal.pmed.1002689
Venkatasubramanian, G., & Keshavan, M. S. (2016). Biomarkers in psychiatry—A critique. Annals of Neurosciences, 23(1), 3–5. https://doi.org/10.1159/000443549
Walter, H. (2013). The third wave of biological psychiatry. Frontiers in Psychology, 4, 582. https://doi.org/10.3389/fpsyg.2013.00582
Wang, M., Wu, Z., Zhang, X., He, X., & Huang, L. (2024). Computing addiction: Epistemic injustice challenges in the culture of computational psychiatry. Acta Bioethica, 30(2), 694–702. https://doi.org/10.4067/s1726-569x2024000200263
Wiese, W., & Friston, K. J. (2022). AI ethics in computational psychiatry: From the neuroscience of consciousness to the ethics of consciousness. Behavioural Brain Research, 420, 113704. https://doi.org/10.1016/j.bbr.2021.113704
World Health Organization. (2021). Ethics and governance of artificial intelligence for health. Geneva, Switzerland: World Health Organization. https://www.who.int/publications/i/item/9789240029200
Wouters, R. H. P., van der Horst, M. Z., Aalfs, C. M., Bralten, J., Luykx, J. J., & Zinkstok, J. R. (2024). The ethics of polygenic scores in psychiatry: Minefield or opportunity for patient-centered psychiatry? Psychiatric Genetics, 34(2), 31–36. https://journals.lww.com/psychgenetics/fulltext/2024/04000/the_ethics_of_polygenic_scores_in_psychiatry_.1.aspx
Wray, N. R., Lin, T., Austin, J., McGrath, J. J., Hickie, I. B., Murray, G. K., & Visscher, P. M. (2021). From basic science to clinical application of polygenic risk scores: A primer. JAMA Psychiatry, 78(1), 101–109. https://doi.org/10.1001/jamapsychiatry.2020.3049
Zagzebski, L. T. (1996). Virtues of the mind: An inquiry into the nature of virtue and the ethical foundations of knowledge. Cambridge University Press. https://doi.org/10.1017/CBO9781139174763
Zhang, M., Scandiffio, J., Younus, S., Jeyakumar, T., Karsan, I., Charow, R., … Wiljer, D. (2023). The adoption of AI in mental health care–perspectives from mental health professionals: Qualitative descriptive study. JMIR Formative Research, 7, e47847. https://doi.org/10.2196/47847
Zidaru, T., Morrow, E. M., & Stockley, R. (2021). Ensuring patient and public involvement in the transition to AI-assisted mental health care: A systematic scoping review and agenda for design justice. Health Expectations, 24(4), 1072–1124. https://doi.org/10.1111/hex.13299
Zima, B. T., Edgcomb, J. B., & Fortuna, L. R. (2024). Identifying precise targets to improve child mental health care equity: Leveraging advances in clinical research informatics and lived experience. Child and Adolescent Psychiatric Clinics, 33(3), 471–483. https://doi.org/10.1016/j.chc.2024.03.009
Table 1. Ethical decision-making across the Integrated Ethical Approach for Computational Psychiatry (IEACP) framework

Table 2. Operationalization of the Integrated Ethical Approach for Computational Psychiatry (IEACP) framework with cultural adaptations

Table 3. Patient considerations within the IEACP framework

Table 4. Application of the Integrated Ethical Approach for Computational Psychiatry (IEACP) framework

Table 5. Ethical frameworks for AI: A comparative analysis featuring the IEACP
Supplementary material
Putica et al. supplementary material (File, 519.5 KB)