Introduction
Computational psychiatry integrates insights from psychiatry, neuroscience, and computer science to develop data-driven approaches for diagnosis, prognosis, and treatment of mental health conditions (Corlett & Fletcher, Reference Corlett and Fletcher2014; Friston, Reference Friston2023; Khanna et al., Reference Khanna, Robinson, O’Donnell, Eyre and Smith2022). Recent advancements in artificial intelligence (AI), including machine learning and generative language models, have expanded the potential of computational psychiatry to transform mental healthcare (Salazar de Pablo et al., Reference Salazar de Pablo, Studerus, Vaquerizo-Serrano, Irving, Catalan, Oliver and Fusar-Poli2021). Clinicians, however, face ethical challenges when implementing these approaches in practice (Espejo, Reiner, & Wenzinger, Reference Espejo, Reiner and Wenzinger2023; McCradden, Hui, & Buchman, Reference McCradden, Hui and Buchman2023). While such challenges exist across medicine (Uusitalo, Tuominen, & Arstila, Reference Uusitalo, Tuominen and Arstila2021), the sensitive nature of mental healthcare amplifies their ethical implications. Despite existing ethical guidelines for general psychiatric practice and health AI (Solanki, Grundy, & Hussain, Reference Solanki, Grundy and Hussain2023; World Health Organization, 2021), computational psychiatry remains ethically underregulated. Without structured ethical governance, AI risks reinforcing systemic biases, compromising patient autonomy, and exacerbating existing disparities in mental healthcare.
Psychiatric data are sensitive, stigmatizing, and at times subjective, which can complicate informed consent and clinical transparency. Computational models trained on homogeneous datasets can amplify biases, skewing psychiatric diagnoses and treatments, particularly when applied across diverse cultural, gender, and age groups (Capon, Hall, Fry, & Carter, Reference Capon, Hall, Fry and Carter2016; Coley et al., Reference Coley, Johnson, Simon, Cruz and Shortreed2021; Leslie et al., Reference Leslie, Mazumder, Peppin, Wolters and Hagerty2021; Roy, Reference Roy2017). Psychiatry’s reliance on subjective patient experiences means that overemphasis on computational tools may disguise paternalism (Juengst, McGowan, Fishman, & Settersten, Reference Juengst, McGowan, Fishman and Settersten2016) if AI systems’ outputs override patients’ lived experiences. In this context, AI model opacity often further complicates patient-centered decision-making (Chin-Yee & Upshur, Reference Chin-Yee and Upshur2018; Ploug & Holm, Reference Ploug and Holm2016). Erosion of trust can damage the therapeutic alliance, a cornerstone of psychiatric practice, and diminish clinicians’ ability to integrate psychosocial and cultural contexts into care (Chin-Yee & Upshur, Reference Chin-Yee and Upshur2018; Tekin, Reference Tekin2014; Walter, Reference Walter2013). Furthermore, the rapid evolution of AI demands that ethical frameworks remain adaptable to new technologies affecting symptomatology, diagnosis, and even therapeutic interventions (Barnett et al., Reference Barnett, Torous, Staples, Sandoval, Keshavan and Onnela2018; Starke, De Clercq, Borgwardt, & Elger, Reference Starke, De Clercq, Borgwardt and Elger2021; Torous et al., Reference Torous, Bucci, Bell, Kessing, Faurholt-Jepsen, Whelan and Firth2021). Finally, by emphasizing genetics and biomarkers, established precision medicine frameworks risk reductionism and may overlook the crucial roles of environmental, cultural, and interpersonal determinants in mental health and psychiatric care (Venkatasubramanian & Keshavan, Reference Venkatasubramanian and Keshavan2016).
The digital divide complicates ethical considerations of AI in global psychiatric contexts, particularly in rural, underserved, and impoverished regions. This divide between those with and without access to mobile or internet technologies intersects with cultural variations in mental health conceptualization and treatment. In low- and middle-income countries, primary care providers, who are often the first point of contact for mental health concerns, face numerous challenges in implementing computational tools that align with patient autonomy and local cultural perspectives regarding mental health treatment (Naslund et al., Reference Naslund, Aschbrenner, Araya, Marsch, Unützer, Patel and Bartels2017). For example, cultures may conceptualize epilepsy as a neurological disorder, mental health condition, or spiritual issue (Gilkinson et al., Reference Gilkinson, Kinney, Olaniyan, Murtala, Sipilon, Malunga and Shankar2022). Such variations extend to suicide reporting and help-seeking behaviors, where cultural and systemic factors significantly influence data accuracy and treatment engagement (Monosov et al., Reference Monosov, Zimmermann, Frank, Mathis and Baker2024; Naslund et al., Reference Naslund, Aschbrenner, Araya, Marsch, Unützer, Patel and Bartels2017; Starke et al., Reference Starke, De Clercq, Borgwardt and Elger2021). The tripartite challenge of AI access, cultural understanding, and clinical implementation underscores the need for an ethical framework adaptable to the technical and sociocultural aspects of global mental healthcare delivery.
This article introduces the Integrated Ethical Approach for Computational Psychiatry (IEACP), a framework that addresses computational psychiatry’s ethical challenges while supporting patient-centered interdisciplinary care. We outline the framework’s development, emphasize patient and lived experience integration in ethical decision-making, and demonstrate its application through case studies. We also examine the framework’s implications, limitations, and future directions for global mental health care scalability. The ‘Patient and lived experience involvement methods’ section details our approach to incorporating patient perspectives to evaluate the alignment of computational psychiatric tools with real-world patient needs and experiences.
Framework development methodology
The literature search strategy, inclusion and exclusion criteria, study selection process, and data extraction procedures are provided in the Supplementary Material, together with the full Preferred Reporting Items for Systematic Reviews and Meta-Analyses flow diagram (Figure S-1).
Development of framework components through structured review
The IEACP Framework was developed through interpretive synthesis of 83 peer-reviewed studies identified in a systematic literature review (see Supplementary Material). Structured data extraction captured information on implementation contexts, ethical challenges, and ethical values. Through repeated rounds of concept clustering, we identified five core framework stages: Identification, Analysis, Decision-making, Implementation, and Review. Each stage includes three commonly observed implementation processes. A stage refers to a broad phase in ethical implementation (e.g. the identification of risks), while a process refers to a recurring, practical approach within that stage (e.g. stakeholder mapping). Values refer to overarching ethical principles, such as privacy, autonomy, or justice, that guide decision-making throughout each stage and process. The framework focuses on the most frequently recurring processes within each stage that reflect clear ethical strategies across diverse settings. This approach aligns with the conceptual framework methodology (Jabareen, Reference Jabareen2009).
Implementation contexts and ethical challenges revealed recurring, stage-specific implementation processes. For example, three distinct identification stage processes emerged: systematic recognition of ethical risks (e.g. D’Souza et al., Reference D’Souza, Mathew, Amanullah, Thornton, Mishra, Mohandas, Palatty and Surapaneni2024; privacy concerns), gathering of implementation information (e.g. Clarke, Foltz, & Garrard, Reference Clarke, Foltz and Garrard2020; examining data collection and storage requirements), and stakeholder mapping (e.g. Clarke et al., Reference Clarke, Foltz and Garrard2020; affected parties). Analysis stage processes were derived from studies evaluating ethical considerations, including technical evaluation (e.g. Monteith et al., Reference Monteith, Glenn, Geddes, Achtyes, Whybrow and Bauer2023; data quality), compliance assessment (e.g. Ball, Kalinowski, & Williams, Reference Ball, Kalinowski and Williams2020; regulatory requirements), and guideline review (e.g. D’Souza et al., Reference D’Souza, Mathew, Amanullah, Thornton, Mishra, Mohandas, Palatty and Surapaneni2024; compliance frameworks). Decision-making stage processes emerged from implementation planning approaches, such as strategy development (e.g. Hurley et al., Reference Hurley, Sonig, Herrington, Storch, Lázaro-Muñoz, Blumenthal-Barby and Kostick-Quenet2024; human supervision), implementation planning (e.g. Zhang et al., Reference Zhang, Scandiffio, Younus, Jeyakumar, Karsan, Charow and Wiljer2023; standardization), and consensus building (e.g. Zidaru, Morrow, & Stockley, Reference Zidaru, Morrow and Stockley2021; codesign). Implementation stage processes were reflected in clinical application approaches, including clinical integration (e.g. Torous et al., Reference Torous, Bucci, Bell, Kessing, Faurholt-Jepsen, Whelan and Firth2021; healthcare adoption), staff training (e.g. D’Souza et al., Reference D’Souza, Mathew, Amanullah, Thornton, Mishra, Mohandas, Palatty and Surapaneni2024; workforce preparation), and performance monitoring (e.g. Clarke et al., Reference Clarke, Foltz and Garrard2020; ongoing evaluation). Review stage processes were evident in the studies’ evaluation approaches, including outcome assessment (e.g. Dwyer & Koutsouleris, Reference Dwyer and Koutsouleris2022; clinical translation), performance evaluation (e.g. Monteith et al., Reference Monteith, Glenn, Geddes, Achtyes, Whybrow and Bauer2023; error tracking), and framework refinement (e.g. Fusar-Poli et al., Reference Fusar-Poli, Manchia, Koutsouleris, Leslie, Woopen, Calkins and Andreassen2022; continuous monitoring).
Analysis of the 83 studies identified six key ethical values in computational psychiatry: privacy and confidentiality, transparency and explainability, justice and equity, beneficence and non-maleficence, autonomy and informed consent, and scientific integrity and validity. Privacy and confidentiality were most frequently discussed, followed by transparency in clinical decision-making. Justice considerations span algorithmic bias and healthcare access, while beneficence, autonomy, and integrity connect to risk assessment, informed consent, and model validation, respectively. The alignment of these ethical values with the IEACP framework stages was determined based on their frequency and implementation context in the literature (see Supplementary Table S-2). A selective set of exemplary references is included throughout the article.
These six ethical principles were inductively derived through interpretive synthesis of ethical content across the included literature, consistent with conceptual framework development (Jabareen, Reference Jabareen2009). The framework synthesizes established bioethical traditions, including traditional principlism (Beauchamp & Childress, Reference Beauchamp and Childress2001) expressed in the values of beneficence, non-maleficence, autonomy, and justice; information ethics (Floridi et al., Reference Floridi, Cowls, Beltrametti, Chatila, Chazerand, Dignum and Vayena2018) emphasizing privacy protections in algorithmic data processing; virtue epistemology (Zagzebski, Reference Zagzebski1996), highlighting scientific integrity and epistemic responsibility; and epistemic justice theory (Fricker, Reference Fricker2007), with a focus on transparency and algorithmic accountability. This multitheoretical synthesis addresses computational psychiatry’s novel challenges that no single established framework adequately encompasses. Privacy extends beyond traditional confidentiality to encompass algorithmic data processing, model training on sensitive psychiatric information, and digital phenotyping that infers mental states from behavioral patterns. Transparency addresses epistemic opacity inherent in machine learning models where clinical recommendations emerge from processes that may be fundamentally unexplainable even to developers (Petch, Di, & Nelson, Reference Petch, Di and Nelson2022). Scientific integrity ensures methodological rigor in computational models whose predictive validity directly influences psychiatric interventions. These six principles emerged as essential through repeated patterns in the literature addressing ethical challenges in computational psychiatry, with other commonly cited AI ethics principles either conceptually encompassed within the selected six or having limited direct applicability to clinical psychiatric contexts.
Ethical dilemmas in computational psychiatry characteristically arise from irreconcilable conflicts between these principles, requiring contextual navigation rather than algorithmic resolution. Transparency demands for explainable AI outputs can directly conflict with privacy requirements when model explanations necessarily reveal sensitive patient data or demographic patterns. Beneficence-driven early interventions based on suicide risk algorithms may fundamentally override patient autonomy when individuals reject computational predictions of their mental state. Justice requires equitable AI performance across populations, while scientific integrity demands acknowledging when models perform poorly for certain demographic groups, creating tensions between deployment and methodological honesty. Privacy protections may conflict with justice when de-identification procedures disproportionately exclude marginalized populations from model development. The IEACP framework addresses these inherent tensions not by providing predetermined resolutions, but by requiring systematic identification of competing principles during stakeholder mapping, explicit analysis of trade-offs during compliance assessment, collaborative negotiation of acceptable compromises during consensus building, and ongoing monitoring of principle conflicts during implementation review. This procedural approach recognizes that ethical reasoning in computational psychiatry involves navigating irreducible tensions through structured deliberation rather than eliminating conflicts through hierarchical principle ranking, consistent with contextualist approaches to applied ethics (Jonsen & Toulmin, Reference Jonsen and Toulmin1988) and established methods for managing principle conflicts in clinical ethics (Gillon, Reference Gillon2003).
The IEACP framework
The IEACP framework addresses computational psychiatry ethics through five stages with three processes each, guided by six core values. Ethical values do not simply serve as retrospective considerations; they actively shape decision points at both stage transitions and within processes, embedding ethical integrity throughout implementation. Table 1 illustrates how ethical values inform decision points across each stage.
Table 1. Ethical decision-making across the Integrated Ethical Approach for Computational Psychiatry (IEACP) framework

Note: This table outlines the five procedural stages of the Integrated Ethical Approach for Computational Psychiatry (IEACP) framework – Identification, Analysis, Decision-making, Implementation, and Review and Reflection – while highlighting how ethical values underpin each stage. LIME, Local Interpretable Model-agnostic Explanations; SHAP, SHapley Additive exPlanations.
Identification
The Identification stage consists of three processes as follows: recognizing ethical challenges, gathering implementation data, and mapping stakeholders. Recognition is the process of identifying ethical challenges at the intersection of computational psychiatry and clinical ethics, including moral dilemmas that arise from AI-driven decision-making. For example, in schizophrenia prediction models, the recognition process examines the ethical tension between early intervention and the risk of overdiagnosis, focusing on how predictive modeling may inadvertently reinforce diagnostic overshadowing, limit patient autonomy, and exacerbate social stigma. Gathering implementation data involves collecting information on real-world implementation contexts, including technical, clinical, and systemic factors affecting ethical deployment. For example, in machine learning models for depression screening, information gathering involves documenting existing screening workflows across clinical settings, mapping the technological infrastructure needed for ethical deployment, and characterizing the target patient demographics to anticipate potential disparities. Stakeholder mapping identifies all individuals and groups impacted by computational psychiatry tools, a critical process given the complex relationships between clinicians, patients, carers, and automated systems. In an automated suicide risk prediction system, stakeholder mapping identifies key individuals and groups impacted by the tool, including emergency clinicians, at-risk patients, family members, mental health teams, hospital administrators, and healthcare funders, such as government agencies and insurance providers. Special attention is given to populations with variable decision-making capacity, such as individuals with first-episode psychosis, by identifying their interactions with AI tools, mapping support networks, and documenting contexts requiring added protections (Vayena, Blasimme, & Cohen, Reference Vayena, Blasimme and Cohen2018).
Ethical values shape the Identification stage, guiding the recognition of ethical challenges, data collection, and stakeholder mapping to ensure responsible integration of computational psychiatry tools. Identifying algorithmic risks, such as biased data sources, unintended clinical consequences, or ethical dilemmas in psychiatric prediction models, ensures that potential harms are flagged before deployment (beneficence and non-maleficence; Ball et al., Reference Ball, Kalinowski and Williams2020; Fusar-Poli et al., Reference Fusar-Poli, Manchia, Koutsouleris, Leslie, Woopen, Calkins and Andreassen2022). The identification of methodological limitations during instrument development helps prevent flawed assumptions from influencing clinical judgment (scientific integrity; McCradden et al., Reference McCradden, Hui and Buchman2023; Monteith et al., Reference Monteith, Glenn, Geddes, Achtyes, Whybrow and Bauer2023). Pre-implementation data collection identifies ethical risks related to psychiatric information security, storage, and sharing, preventing potential misuse or unauthorized access (privacy and confidentiality; D’Souza et al., Reference D’Souza, Mathew, Amanullah, Thornton, Mishra, Mohandas, Palatty and Surapaneni2024; Upreti, Lind, Elmokashfi, & Yazidi, Reference Upreti, Lind, Elmokashfi and Yazidi2024). Recognizing disparities in how AI systems identify psychiatric risk across different demographic groups is essential to prevent reinforcing existing inequities (justice and equity; Gooding & Kariotis, Reference Gooding and Kariotis2021; Sahin et al., Reference Sahin, Kambeitz-Ilankovic, Wood, Dwyer, Upthegrove and Salokangas2024). Stakeholder mapping ensures that AI-driven tools acknowledge patient rights and decision-making capacity, particularly in populations where autonomy may be impacted, such as individuals with severe mental illness (autonomy and informed consent; Davidson, Reference Davidson2022; Jacobson et al., Reference Jacobson, Bentley, Walton, Wang, Fortgang, Millner and Coppersmith2020). Recognizing when computational psychiatry tools produce opaque decision-making processes ensures that risks related to interpretability are identified (transparency and explainability; D’Souza et al., Reference D’Souza, Mathew, Amanullah, Thornton, Mishra, Mohandas, Palatty and Surapaneni2024; McCradden et al., Reference McCradden, Hui and Buchman2023). This integrated approach embeds ethical rules into the earliest stages of tool development, shifting ethical considerations from retrospective checkpoints to foundational guideposts that shape computational psychiatry’s trajectory from inception.
Analysis
The Analysis stage deepens the examination of ethical considerations identified in the previous stage, moving from recognizing potential challenges to evaluating their implications and compliance requirements. While the Identification stage maps ethical concerns, the Analysis stage rigorously assesses them against established regulatory, institutional, and professional standards. This involves three key processes: technical evaluation, compliance assessment, and guideline review. Technical evaluation assesses computational methods against predefined benchmarks, ensuring accuracy, sensitivity, specificity, and fairness across diverse populations. This involves evaluating model reliability, demographic fairness, and interpretability within psychiatric contexts. For example, when assessing a depression screening algorithm, technical evaluation determines whether the model performs equitably across age groups, ethnicities, and socioeconomic backgrounds, identifying potential biases and performance disparities. Compliance assessment evaluates alignment with relevant healthcare laws and data protection regulations. In computational psychiatry, this involves reviewing adherence to healthcare laws and data protection frameworks, such as the European Union’s General Data Protection Regulation (European Union, 2016) and the United States Health Insurance Portability and Accountability Act (HIPAA, 1996), particularly regarding sensitive mental health data. For example, this might involve assessing whether a mood prediction algorithm appropriately manages patient consent requirements or whether diagnostic support systems comply with data privacy regulations in psychiatric settings. Guideline review examines adherence to professional and institutional standards in computational psychiatry. This includes evaluating compliance with frameworks such as the World Health Organization’s Ethics and Governance of Artificial Intelligence for Health (World Health Organization, 2021), as well as institution-specific psychiatric governance policies.
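To make the technical evaluation process concrete, the following minimal sketch tabulates screening sensitivity and specificity by demographic subgroup. The column names, toy data, and the choice to stratify by age group are illustrative assumptions for demonstration, not components of the IEACP framework.

```python
# Minimal sketch: subgroup performance check for a depression screening model.
# Column names and the toy data are illustrative assumptions only.
import pandas as pd
from sklearn.metrics import confusion_matrix

def subgroup_metrics(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Sensitivity and specificity of binary screening labels per subgroup."""
    rows = []
    for group, sub in df.groupby(group_col):
        tn, fp, fn, tp = confusion_matrix(
            sub["y_true"], sub["y_pred"], labels=[0, 1]
        ).ravel()
        rows.append({
            group_col: group,
            "n": len(sub),
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        })
    return pd.DataFrame(rows)

# Toy example; in practice, marked gaps between subgroups would be documented
# as performance disparities and carried forward into the ethical review.
screening = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
    "y_pred": [1, 0, 0, 1, 0, 1, 1, 0, 0, 0],
    "age_group": ["18-34", "18-34", "18-34", "35-64", "35-64",
                  "35-64", "65+", "65+", "65+", "65+"],
})
print(subgroup_metrics(screening, "age_group"))
```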
During Analysis, ethical values structure the evaluation of computational tools against healthcare’s regulatory, institutional, and professional requirements. Assessing computational models against predefined benchmarks for accuracy, sensitivity, specificity, and fairness ensures that potential biases and risks to patient safety are identified before deployment (beneficence and non-maleficence; Ball et al., Reference Ball, Kalinowski and Williams2020; Fusar-Poli et al., Reference Fusar-Poli, Manchia, Koutsouleris, Leslie, Woopen, Calkins and Andreassen2022). Validation and external assessment of model assumptions safeguard against methodological flaws that could compromise clinical decisions (scientific integrity; McCradden et al., Reference McCradden, Hui and Buchman2023; Monteith et al., Reference Monteith, Glenn, Geddes, Achtyes, Whybrow and Bauer2023). Ensuring compliance with healthcare laws and data protection frameworks, such as General Data Protection Regulation (GDPR) and HIPAA, prevents unauthorized data use and security breaches (privacy and confidentiality; D’Souza et al., Reference D’Souza, Mathew, Amanullah, Thornton, Mishra, Mohandas, Palatty and Surapaneni2024; Farmer, Lockwood, Goforth, & Thomas, Reference Farmer, Lockwood, Goforth and Thomas2024). Evaluating demographic performance variations in psychiatric assessments ensures that AI-driven tools do not reinforce inequities in access to care (justice and equity; Singhal et al., Reference Singhal, Cooke, Villareal, Stoddard, Lin and Dempsey2024; Upreti et al., Reference Upreti, Lind, Elmokashfi and Yazidi2024). Reviewing adherence to institutional and professional ethical guidelines safeguards patient autonomy and ensures that AI-assisted decision-making does not override informed consent policies (autonomy and informed consent; Ahmed & Hens, Reference Ahmed and Hens2022; Wouters et al., Reference Wouters, van der Horst, Aalfs, Bralten, Luykx and Zinkstok2024). Maintaining interpretability in analytic outputs allows clinicians and regulators to scrutinize AI decision-making before implementation (transparency and explainability; D’Souza et al., Reference D’Souza, Mathew, Amanullah, Thornton, Mishra, Mohandas, Palatty and Surapaneni2024; McCradden et al., Reference McCradden, Hui and Buchman2023).
Decision-making
The Decision-making stage translates ethical analysis into actionable implementation strategies through three key processes: strategy development, implementation planning, and consensus building. Strategy development formulates approaches to address ethical challenges, ensuring AI tools align with professional and patient-centered considerations, such as restricting suicide risk prediction models to clinician-mediated use to mitigate risks of self-directed harm or unnecessary emergency interventions. Implementation planning translates ethical requirements into structured operational protocols that guide clinical practice, including developing tiered consent mechanisms for depression screening algorithms that dynamically adjust based on patient cognitive capacity, ensuring informed decision-making at different stages of illness progression. Consensus building facilitates structured consultation and collaborative review among clinicians, ethicists, and individuals with lived experience to define ethically defensible risk thresholds in automated psychiatric screening, such as distinguishing passive suicidal ideation from active crisis situations that require immediate intervention. Through iterative stakeholder engagement, this process refines actionable criteria for when algorithmic predictions should trigger clinician review, ensuring decisions optimize predictive accuracy while maintaining patient autonomy, clinical feasibility, and ethical oversight.
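To illustrate how consensus-derived criteria from this stage might be operationalized, the sketch below encodes hypothetical risk thresholds that route algorithmic predictions to tiers of clinician review. The numeric cutoffs, parameter names, and tier labels are placeholders that would be set during consensus building; they are not values prescribed by the IEACP framework.

```python
# Sketch: encoding consensus-derived intervention thresholds as an auditable
# routing rule. Thresholds and tier names are hypothetical placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskThresholds:
    clinician_review: float = 0.30  # route to routine clinician review
    urgent_review: float = 0.70     # route to same-day clinical contact

def route_prediction(risk_score: float,
                     active_ideation_flag: bool,
                     thresholds: RiskThresholds = RiskThresholds()) -> str:
    """Map a model risk score to a clinician-mediated action tier.

    The model never triggers an intervention on its own: every tier above
    'no_automated_action' terminates in human review, consistent with the
    clinician-mediated use strategy described above.
    """
    if active_ideation_flag or risk_score >= thresholds.urgent_review:
        return "urgent_clinician_review"
    if risk_score >= thresholds.clinician_review:
        return "routine_clinician_review"
    return "no_automated_action"

print(route_prediction(0.42, active_ideation_flag=False))
# -> routine_clinician_review
```

Keeping the thresholds in a single, versioned structure makes them easy to revisit when stakeholder consensus changes, rather than burying them in model code.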
During the Decision-making stage, ethical values structure the translation of analysis into actionable plans, ensuring that strategic interventions align with patient safety, clinical integrity, and equitable outcomes. Defining intervention thresholds and protocols ensures that suicide risk predictions lead to appropriate clinical responses, balancing proactive intervention with harm prevention (beneficence and non-maleficence; Torous et al., Reference Torous, Bucci, Bell, Kessing, Faurholt-Jepsen, Whelan and Firth2021; Wang et al., Reference Wang, Wu, Zhang, He and Huang2024). Developing structured validation standards and operational procedures safeguards against methodological inconsistencies, ensuring AI applications maintain reliability across psychiatric settings (scientific integrity; Kirtley et al., Reference Kirtley, van Mens, Hoogendoorn, Kapur and de Beurs2022; Monteith et al., Reference Monteith, Glenn, Geddes, Achtyes, Whybrow and Bauer2023). Establishing consent frameworks through stakeholder collaboration ensures that AI-driven recommendations respect patient autonomy across varying cognitive capacities (autonomy and informed consent; Jacobson et al., Reference Jacobson, Bentley, Walton, Wang, Fortgang, Millner and Coppersmith2020; Wouters et al., Reference Wouters, van der Horst, Aalfs, Bralten, Luykx and Zinkstok2024). Defining equitable intervention criteria prevents disparities in AI-assisted psychiatric care, ensuring that deployment procedures address demographic considerations (justice and equity; Koutsouleris, Hauser, Skvortsova, & De Choudhury, Reference Koutsouleris, Hauser, Skvortsova and De Choudhury2022; Starke et al., Reference Starke, De Clercq, Borgwardt and Elger2021). Determining data access levels and security protocols ensures psychiatric information remains protected against breaches and unauthorized use (privacy and confidentiality; Parziale & Mascalzoni, Reference Parziale and Mascalzoni2022; Upreti et al., Reference Upreti, Lind, Elmokashfi and Yazidi2024). Establishing clear guidelines for communicating AI system outputs ensures that stakeholders can accurately interpret AI-generated insights, thereby reducing ambiguity and informing structured decision-making (transparency and explainability; Gültekin & Şahin, Reference Gültekin and Şahin2024; Torous et al., Reference Torous, Bucci, Bell, Kessing, Faurholt-Jepsen, Whelan and Firth2021). By embedding ethical values into decision-making at every stage, this structured framework ensures that computational psychiatry tools align with professional standards while maintaining stakeholder trust and clinical applicability.
Implementation
The Implementation stage transitions from decision-making to active deployment, integrating computational psychiatry tools into clinical workflows through three interconnected processes: clinical integration, staff training, and performance monitoring. Clinical integration embeds computational tools into existing mental healthcare systems through systematic protocol development and infrastructure integration. For example, implementing a psychosis relapse prediction algorithm requires three key components: establishing empirically validated clinical triggers based on quantifiable indicators (e.g. appointment adherence and medication compliance); securely integrating outputs with electronic health record systems while maintaining data integrity; and structuring evidence-based response pathways that escalate from automated surveillance to structured clinical assessments and expedited psychiatric intervention within established governance frameworks. Staff training develops clinical competency in three areas: interpreting quantitative risk scores within clinical contexts, incorporating confidence intervals and limitation awareness; integrating algorithmic insights with clinical expertise, emphasizing systematic evaluation against patient-specific factors; and communicating AI-generated outputs while maintaining the therapeutic alliance. Performance monitoring establishes comprehensive evaluation frameworks to track predictive accuracy, monitor unintended consequences (e.g. demographic disparities), assess care quality impact, and ensure ongoing ethical compliance.
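As a simplified illustration of this integration logic, the sketch below combines quantifiable indicators into a trigger level and packages the result as a structured alert intended for an electronic health record. The indicator names, cutoff values, and escalation steps are hypothetical stand-ins for empirically validated criteria.

```python
# Sketch: rule-based clinical triggers and a tiered response pathway for a
# psychosis relapse prediction workflow. All rules and labels are hypothetical.
from datetime import datetime, timezone

ESCALATION_PATHWAY = [
    "automated_surveillance",          # default: passive monitoring only
    "structured_clinical_assessment",  # clinician-led review of the case
    "expedited_psychiatric_review",    # fast-tracked psychiatric intervention
]

def evaluate_triggers(missed_appointments_90d: int,
                      medication_possession_ratio: float) -> int:
    """Return an index into ESCALATION_PATHWAY from quantifiable indicators."""
    level = 0
    if missed_appointments_90d >= 2 or medication_possession_ratio < 0.8:
        level = 1
    if missed_appointments_90d >= 3 and medication_possession_ratio < 0.6:
        level = 2
    return level

def alert_record(patient_id: str, level: int) -> dict:
    """Structured alert payload intended for the electronic health record."""
    return {
        "patient_id": patient_id,
        "escalation_step": ESCALATION_PATHWAY[level],
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "requires_clinician_sign_off": level > 0,
    }

print(alert_record("example-patient", evaluate_triggers(3, 0.55)))
```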
In the Implementation stage, ethical values guide the active deployment of computational psychiatry tools through clinical integration, staff training, and performance monitoring. Clinical teams implement alert systems and response pathways using predetermined risk thresholds, ensuring AI supports rather than overrides clinical judgment (beneficence and non-maleficence; Monaco et al., Reference Monaco, Vignapiano, Piacente, Pagano, Mancuso, Steardo and Corrivetti2024; Tabb & Lemoine, Reference Tabb and Lemoine2021). Validation procedures and quality control measures become part of routine clinical practice through systematic staff training (scientific integrity; Sultan, Scholz, & van den Bos, Reference Sultan, Scholz and van den Bos2023; Zhang et al., Reference Zhang, Scandiffio, Younus, Jeyakumar, Karsan, Charow and Wiljer2023). Dynamic consent processes roll out with documentation systems that adapt to varying levels of patient cognitive capacity throughout illness phases (autonomy and informed consent; Ball et al., Reference Ball, Kalinowski and Williams2020; Wouters et al., Reference Wouters, van der Horst, Aalfs, Bralten, Luykx and Zinkstok2024). Monitoring systems track utilization patterns as standardized access protocols take effect, ensuring equitable tool deployment across patient populations (justice and equity; Lewis et al., Reference Lewis, Chisholm, Connolly, Esplin, Glessner, Gordon and Fullerton2024; Wang et al., Reference Wang, Wu, Zhang, He and Huang2024). Clinical staff receive training in security protocols while system-level protections and access controls safeguard sensitive psychiatric data (privacy and confidentiality; Upreti et al., Reference Upreti, Lind, Elmokashfi and Yazidi2024; Wray et al., Reference Wray, Lin, Austin, McGrath, Hickie, Murray and Visscher2021). Clear formats for sharing AI outputs roll out alongside training in effective communication methods, maintaining transparency throughout implementation (transparency and explainability; Levkovich, Shinan-Altman, & Elyoseph, Reference Levkovich, Shinan-Altman and Elyoseph2024; Wiese & Friston, Reference Wiese and Friston2022). For deep learning systems where algorithmic interpretability is not possible, transparency focuses on post-hoc explanations (e.g. SHapley Additive exPlanations and Local Interpretable Model-agnostic Explanations) and model behavior rather than step-by-step algorithmic logic (Lundberg & Lee, Reference Lundberg and Lee2017; Ribeiro, Singh, & Guestrin, Reference Ribeiro, Singh and Guestrin2016). This coordinated integration of ethical principles into clinical practice transforms theoretical frameworks and planned protocols into living systems that actively safeguard and enhance psychiatric care delivery.
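For illustration, the sketch below produces a SHAP-based post-hoc explanation for a single prediction from a tree-based model. The feature names and training data are synthetic, the model is a stand-in rather than a validated psychiatric tool, and exact output formats vary across versions of the shap package.

```python
# Sketch: post-hoc explanation of one prediction with SHAP values.
# Features, data, and model are synthetic placeholders.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

feature_names = ["phq9_baseline", "sleep_hours", "missed_appointments",
                 "prior_episodes", "medication_adherence"]
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # per-feature contributions (log-odds)

# Rank the features that contributed most to the first patient's prediction,
# as material a clinician could review and discuss with the patient.
patient = 0
order = np.argsort(np.abs(shap_values[patient]))[::-1]
for idx in order[:3]:
    print(f"{feature_names[idx]}: contribution {shap_values[patient][idx]:+.3f}")
```

Such outputs support the communication practices described above, but they describe model behavior rather than rendering an opaque model inherently interpretable.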
Review and reflection
The Review and reflection stage provides a structured approach to evaluating the real-world performance and impact of deployed tools and refining them over time, through three processes: outcome assessment, performance evaluation, and framework refinement. Outcome assessment systematically measures the impact of implemented computational psychiatry tools using quantitative and qualitative metrics, tracking clinical outcomes, such as symptom improvement rates, treatment adherence, and changes in healthcare utilization. For example, when assessing an automated depression screening system in primary care, outcome assessment would measure time-to-treatment initiation, referral success rates, patient-reported symptom changes at 6 months, and emergency department utilization for mental health crises. Performance evaluation examines prediction accuracy, response times, and clinical workflow integration. In a psychosis relapse prediction system, this involves assessing how accurately the model identifies early warning signs, how quickly clinical teams respond to alerts, and whether predictions integrate smoothly into routine assessments without disrupting existing workflows. Framework refinement updates protocols based on implementation experience and emerging evidence, such as modifying alert thresholds to reduce false positives or adjusting clinical workflows based on staff feedback. By continuously adapting predictive models, clinical integration protocols, and risk management frameworks, this stage ensures computational psychiatry tools evolve ethically and remain clinically responsible. For example, if an algorithm achieves 85% accuracy in detecting early warning signs but overnight alerts take significantly longer to receive a clinical response, framework refinement would focus on optimizing workflow protocols during off-hours to improve efficiency and clinical impact.
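To illustrate the kind of review-stage evidence that drives such refinement, the sketch below compares alert-to-response times for daytime versus overnight alerts. The records, field names, and shift boundaries are fabricated for demonstration and do not represent real monitoring data.

```python
# Sketch: review-stage monitoring of alert-to-response latency by shift.
# The records below are fabricated for demonstration.
import pandas as pd

alerts = pd.DataFrame({
    "alert_time": pd.to_datetime(["2024-03-01 10:15", "2024-03-01 23:40",
                                  "2024-03-02 14:05", "2024-03-03 02:20"]),
    "response_time": pd.to_datetime(["2024-03-01 10:50", "2024-03-02 07:10",
                                     "2024-03-02 14:40", "2024-03-03 09:05"]),
})

alerts["response_minutes"] = (
    alerts["response_time"] - alerts["alert_time"]
).dt.total_seconds() / 60
alerts["shift"] = alerts["alert_time"].dt.hour.map(
    lambda h: "overnight" if (h >= 22 or h < 7) else "daytime"
)

# Large gaps between shifts point framework refinement toward off-hours
# workflow protocols rather than toward the prediction model itself.
print(alerts.groupby("shift")["response_minutes"].median())
```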
During the Review and reflection stage, ethical values guide the evaluation of how computational psychiatry tools performed, ensuring that their real-world impact aligns with ethical principles and clinical objectives. Assessing documented clinical effects and incident reports ensures that AI-driven interventions minimize harm while optimizing patient care (beneficence and non-maleficence; Levkovich et al., Reference Levkovich, Shinan-Altman and Elyoseph2024; Torous et al., Reference Torous, Bucci, Bell, Kessing, Faurholt-Jepsen, Whelan and Firth2021). Reviewing performance data identifies model reliability across diverse clinical settings, ensuring psychiatric AI applications maintain scientific rigor (scientific integrity; Kleine et al., Reference Kleine, Lermer, Cecil, Heinrich and Gaube2023; Monteith et al., Reference Monteith, Glenn, Geddes, Achtyes, Whybrow and Bauer2023). Examining consent records evaluates whether patients engaged meaningfully with AI-assisted care, ensuring informed decision-making was upheld (autonomy and informed consent; Davidson, Reference Davidson2022; Zidaru et al., Reference Zidaru, Morrow and Stockley2021). Evaluating demographic trends in patient outcomes ensures that psychiatric AI tools do not reinforce disparities in treatment access and effectiveness (justice and equity; Lewis et al., Reference Lewis, Chisholm, Connolly, Esplin, Glessner, Gordon and Fullerton2024; Wang et al., Reference Wang, Wu, Zhang, He and Huang2024). Reviewing security protocols and access logs ensures that psychiatric data remains protected against breaches and misuse (privacy and confidentiality; Upreti et al., Reference Upreti, Lind, Elmokashfi and Yazidi2024; Wray et al., Reference Wray, Lin, Austin, McGrath, Hickie, Murray and Visscher2021). Analyzing stakeholder feedback on AI applications assesses whether decision-making processes remained transparent, interpretable, and clinically actionable (transparency and explainability; Kline, Prichett, McKim, & Palm Reed, Reference Kline, Prichett, McKim and Palm Reed2023; Wiese & Friston, Reference Wiese and Friston2022). Without structured ethical oversight, AI risks amplifying biases, diminishing clinical accountability, and undermining patient trust in mental healthcare. The IEACP framework provides an essential ethical infrastructure to ensure computational psychiatry advances in a way that prioritizes transparency, equity, and patient autonomy. Table 2 operationalizes all IEACP framework processes, providing step-by-step implementation guidance with systematic cultural integration that transforms the conceptual framework into actionable procedures for clinical practice.
Table 2. Operationalization of the Integrated Ethical Approach for Computational Psychiatry (IEACP) framework with cultural adaptations

Note: This table operationalizes the IEACP framework across all 15 processes, providing step-by-step implementation guidance with systematic cultural integration for each ethical principle.
Ethical values: AIC, autonomy/informed consent; BN, beneficence/non-maleficence; JE, justice/equity; PC, privacy/confidentiality; TE, transparency/explainability; SIV, scientific integrity/validity.
Technical terms: AI, artificial intelligence; APA, American Psychological Association; DSM, Diagnostic and Statistical Manual of Mental Disorders; FDA, Food and Drug Administration; HIPAA, Health Insurance Portability and Accountability Act; IRB, Institutional Review Board; IT, information technology; WHO, World Health Organization.
a These thresholds were developed to support practical decision-making and are not intended as rigid cutoffs. They reflect the cumulative risk approach discussed in prior ethical frameworks for digital and AI-enabled mental healthcare (Ball et al., Reference Ball, Kalinowski and Williams2020; Vayena et al., Reference Vayena, Blasimme and Cohen2018).
Patient and lived experience involvement methods
Ensuring computational psychiatry tools align with patient needs requires meaningful involvement, particularly for individuals with severe mental illness (Zima, Edgcomb, & Fortuna, Reference Zima, Edgcomb and Fortuna2024). Methods include adapted consent processes, codesign initiatives, lived experience advisory panels, peer-led focus groups, and representation in ethical oversight committees. Key considerations are outlined in Table 3.
Table 3. Patient considerations within the IEACP framework

Framework applications across computational psychiatry contexts
To illustrate the framework’s utility and adaptability, we examined its application across diverse computational psychiatry contexts. To that end, we analyzed two recent studies employing machine learning for mental health applications. Curtiss et al. (Reference Curtiss, Smoller and Pedrelli2024) developed an ensemble machine learning approach to optimize second-step depression treatment selection. Their study used data from 1,439 patients in the Sequenced Treatment Alternatives to Relieve Depression trial who had failed to achieve remission with initial antidepressant treatment. The authors created models to predict outcomes for seven different second-step treatments, highlighting the complexity of personalized treatment selection in psychiatry. Grimland et al. (Reference Grimland, Benatov, Yeshayahu, Izmaylov, Segal, Gal and Levi-Belz2024) focused on real-time suicide risk prediction in crisis hotline chats. They analyzed 17,654 chat sessions using natural language processing and a theory-driven lexicon of suicide risk factors. To demonstrate a concrete framework application, Table 4 presents illustrative scenarios that extrapolate from documented study limitations and established patterns of algorithmic bias in psychiatric populations.
Table 4. Application of the Integrated Ethical Approach for Computational Psychiatry (IEACP) framework

Note: Framework applications represent illustrative scenarios based on study limitations and known disparities in psychiatric AI to demonstrate concrete ethical decision-making processes. These examples are designed to show how the IEACP framework would guide decision-making in realistic clinical situations.
Ethical values: AIC, autonomy/informed consent; BN, beneficence/non-maleficence; JE, justice/equity; PC, privacy/confidentiality; TE, transparency/explainability; SIV, scientific integrity/validity.
Technical and clinical terms: APA, American Psychiatric Association; AUC, area under curve; CT, clinical trials; EHR, electronic health record; FDA, US Food and Drug Administration; HIPAA, Health Insurance Portability and Accountability Act; HITECH, Health Information Technology for Economic and Clinical Health Act; IRB, Institutional Review Board; SR-BERT, Suicide Risk–Bidirectional Encoder Representations From Transformers; STAR*D, Sequenced Treatment Alternatives to Relieve Depression; VPN, virtual private network.
The framework applications demonstrate the IEACP’s capacity to provide structured ethical guidance for computational psychiatry implementation. To contextualize this contribution within the existing landscape of AI ethics frameworks, we compare the IEACP with prominent approaches, including the WHO’s Ethics and Governance of AI for Health (World Health Organization, 2021), AI4People’s Ethical Framework (Floridi et al., Reference Floridi, Cowls, Beltrametti, Chatila, Chazerand, Dignum and Vayena2018), and IEEE’s Ethically Aligned Design. Table 5 illustrates how the IEACP addresses gaps in existing frameworks by providing the first procedural approach specifically designed for computational psychiatry contexts.
Table 5. Ethical frameworks for AI: A comparative analysis featuring the IEACP

Note: This table compares four ethical frameworks for AI, emphasizing the distinctive procedural and domain-specific features of the Integrated Ethical Approach for Computational Psychiatry (IEACP). While these initiatives are commonly termed ‘frameworks’ in the AI ethics literature, they vary significantly in procedural specificity and implementation guidance, with the IEACP being the only approach providing structured decision-making processes for clinical implementation. Dimensions reflect core ethical values, structural orientation, clinical applicability, and stakeholder integration.
Abbreviations: AI, artificial intelligence; AI/AS, artificial intelligence and autonomous systems; IEEE EAD, Institute of Electrical and Electronics Engineers – Ethically Aligned Design; LMIC, low- and middle-income countries.
Discussion
AI’s growing role in psychiatric care creates unresolved ethical challenges around patient autonomy, algorithmic bias, and clinical accountability. The IEACP framework addresses these challenges by integrating six core ethical values with a structured five-stage process. Unlike existing generalist guidelines for psychiatric ethics (Solanki et al., Reference Solanki, Grundy and Hussain2023; World Health Organization, 2021), the IEACP provides psychiatry-specific guidance, addressing the complexity of fluctuating mental states, clinician-patient dynamics, and the biopsychosocial model of care.
The framework’s dynamic ethical governance acknowledges that challenges in computational psychiatry evolve alongside AI advances and shifting psychiatric paradigms (Barnett et al., Reference Barnett, Torous, Staples, Sandoval, Keshavan and Onnela2018; Starke et al., Reference Starke, De Clercq, Borgwardt and Elger2021). For example, existing AI ethics frameworks often assume stable patient autonomy (Ploug & Holm, Reference Ploug and Holm2016; Vayena et al., Reference Vayena, Blasimme and Cohen2018), whereas IEACP explicitly accounts for fluctuating decision-making capacities in psychiatric populations. Additionally, by incorporating stakeholder mapping and real-time performance monitoring, IEACP moves beyond static, principle-based ethical models to an adaptive framework that can accommodate emerging challenges, such as digital phenotyping, predictive psychiatry, and personalized treatment algorithms (Fusar-Poli et al., Reference Fusar-Poli, Manchia, Koutsouleris, Leslie, Woopen, Calkins and Andreassen2022; McCradden et al., Reference McCradden, Hui and Buchman2023).
A key limitation of the IEACP framework is that it has not yet been empirically validated in real-world psychiatric AI applications. Although the framework was derived from a systematic analysis of 83 studies, its practical utility remains untested. However, similar principle-based ethical frameworks have been widely adopted based on their theoretical grounding rather than empirical validation (Floridi & Cowls, Reference Floridi and Cowls2019; McCradden et al., Reference McCradden, Hui and Buchman2023; Mittelstadt, Reference Mittelstadt2019). The structured, principle-based nature of IEACP supports its relevance in addressing AI ethics in psychiatry, even in the absence of direct validation.
To bridge this gap, future research should pilot-test the IEACP framework in psychiatric AI implementation. A mixed-method evaluation could involve (1) clinician decision-making studies assessing its applicability in guiding AI-assisted psychiatric interventions, (2) stakeholder engagement studies incorporating patient and clinician perspectives, and (3) real-world validation through case applications in clinical psychiatry. Establishing an iterative refinement process through empirical testing would enhance the framework’s adaptability and ensure alignment with evolving computational psychiatry challenges. Such research will be critical in advancing the IEACP from a theoretically grounded model to an empirically validated tool for ethical AI integration in psychiatric practice.
Conclusion
Computational psychiatry requires ethical approaches that balance innovation with human-centered care. The IEACP framework offers a systematic method for addressing ethical challenges in AI-driven psychiatric applications. Unlike retrospective evaluations, it embeds ethical considerations into AI development, reducing oversight risks and aligning with bioethical principles. Its design supports immediate use and future adaptation. Pilot studies and refinement will enhance its applicability. IEACP may catalyze domain-specific ethical frameworks that preserve psychiatry’s human foundations.
Supplementary material
The supplementary material for this article can be found at http://doi.org/10.1017/S0033291725101311.
Funding statement
This research received no specific grant from any funding agency, commercial, or not-for-profit sectors.
Competing interests
The authors declare none.