
Artificial intelligence-enabled predictive modelling in psychiatry: overview of machine learning applications in mental health research

Published online by Cambridge University Press:  22 August 2025

Gemma Lewin
Affiliation:
A Specialty Trainee (Year 5) in the psychiatry of intellectual disability, working in Leicestershire Partnership NHS Trust, Leicester, UK. She has an interest in the physical health needs of individuals with intellectual disability and is currently undertaking research and a postgraduate certificate in this field.
Emeka Abakasanga
Affiliation:
A research associate in artificial intelligence in the Department of Computer Science at Loughborough University, UK. He holds a PhD in electrical and computer engineering from the Ben-Gurion University of the Negev, Israel. His research focuses on information theory, artificial intelligence for healthcare and industrial applications, signal processing and data science.
Isabel Titcombe
Affiliation:
An undergraduate student at the University of Leicester, UK. She is studying psychology with cognitive neuroscience, graduating in 2025. She aspires to become a clinical psychologist in the future.
Georgina Cosma
Affiliation:
A professor of artificial intelligence and data science in the Department of Computer Science at Loughborough University, UK. She holds a PhD in computer science from the University of Warwick, UK. Her research focuses on artificial intelligence in healthcare, and on neural information processing, modelling and retrieval.
Satheesh Gangadharan*
Affiliation:
A consultant in the psychiatry of intellectual disability, working in Leicestershire Partnership NHS Trust, Leicester, UK. He is also a clinical researcher with interests in intellectual disability, autism and use of artificial intelligence in healthcare. One of his current research projects is focused on the use of machine learning in identification of the clusters and trajectory of multiple long-term conditions in people with intellectual disability.
*
Correspondence Satheesh Gangadharan. Email: s.gangadharan1@nhs.net

Summary

Machine learning, an artificial intelligence (AI) approach, provides scope for developing predictive modelling in mental health. The ability of machine learning algorithms to analyse vast amounts of data and make predictions about the onset or course of mental health problems makes this approach a valuable tool in mental health research of the future. The right use of this approach could improve personalisation and precision of medical and non-medical treatment approaches. However, ensuring the availability of large, good-quality data-sets that represent the diversity of the population, along with the need for openness and transparency of the AI approaches, are some of the challenges that need to be overcome. This article provides an overview of current machine learning applications in mental health research, synthesising literature identified through targeted searches of key databases and expert knowledge to examine research developments and emerging applications of AI-enabled predictive modelling in psychiatry. The article appraises both the potential applications and current challenges of AI-based predictive modelling in psychiatric practice and research.

Information

Type
Article
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Royal College of Psychiatrists

LEARNING OBJECTIVES

After reading this article you will be able to:

  • understand different machine learning approaches with potential use in mental health research

  • evaluate the opportunities these approaches present, along with the challenges

  • appraise the potential use of predictive modelling supported by machine learning approaches in research and mental healthcare.

Artificial intelligence (AI) refers to the capability of machines to mimic human abilities such as communication, reasoning and decision-making with minimal human intervention (Jiang Reference Jiang, Jiang and Zhi2017). In recent years, advancements in AI have increasingly focused on machine learning, a subset of AI that involves systems processing vast data-sets using algorithms to identify patterns, which are revised and refined through a series of training, calibration and testing. Predictive modelling uses these identified patterns to make predictions about new, unseen cases.

In healthcare, machine learning has emerged as a powerful tool for developing predictive models that enhance diagnosis, treatment and patient outcomes. By analysing vast amounts of medical data, machine learning algorithms can uncover patterns and insights that may be missed by traditional methods (Eloranta Reference Eloranta and Boman2022). For example, these systems can identify patterns in electronic health records that may predict treatment response or disease progression.

Although psychiatry relies on the interpretation of subjective and nuanced symptomatology, which would seem to present a challenge for AI models, novel techniques have been developed that address the humanistic elements of psychiatric practice. Examples include the use of natural language processing (NLP) to develop AI-generated rapport, interpret the quality of textual language, and provide summaries of large volumes of medical data in a way that has utility for the end user. Furthermore, the burgeoning area of medical data provides scope for AI to harness continuously captured data from non-traditional technologies, such as wearable devices and smartphones.

Data sources could include social media posts, text messages, pictures or videos and chat rooms, in addition to traditional health data. Extracting large amounts of data from traditional sources such as electronic health records, as well as from the newer sources described above, offers considerable potential to explore emotions expressed in a variety of ways and to map these to patterns that enable identification of conditions or events and monitoring of progress over time. The quality of the data and the explainability (see below) of the machine learning models are key to assessing the reliability and trustworthiness of the interpretations.

Machine learning approaches: supervised, unsupervised and deep learning

Two common machine learning approaches in predictive modelling are supervised learning and unsupervised learning. Supervised learning is goal-oriented and follows a ‘task-driven’ approach, where the system learns from labelled data with a known desired outcome, such as analysing patient records where the diagnosis is known, enabling it to make predictions or classifications on new, unseen data. Unsupervised learning adopts a ‘data-driven’ approach. In this method, the model is not given explicit goals or labelled data; instead it explores data independently, uncovering patterns, relationships or knowledge without prior guidance. This approach is particularly valuable in psychiatry for identifying previously unknown patient subgroups or symptom clusters that may not fit traditional diagnostic categories (Abakasanga Reference Abakasanga, Kousovista and Cosma2024a, Reference Abakasanga, Kousovista and Cosma2024b).
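
To make the distinction concrete, the following minimal sketch (in Python with scikit-learn; the synthetic data, feature counts and variable names are illustrative assumptions, not drawn from any study cited here) fits a supervised classifier to labelled records and an unsupervised clustering model to the same features without labels.

```python
# Minimal sketch of supervised v. unsupervised learning on synthetic data.
# All data and variable names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))             # 200 'patients', 5 clinical features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # known diagnosis label (supervised target)

# Supervised ('task-driven'): learn from labelled data, then predict unseen cases
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))

# Unsupervised ('data-driven'): no labels, discover structure such as subgroups
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("Cluster sizes:", np.bincount(clusters))
```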

Deep learning is an advanced form of machine learning that can utilise both supervised and unsupervised mechanisms and employs multi-layer neural networks for computations. These networks can process complex clinical information through multiple stages, with each layer analysing increasingly complex features. The term ‘deep’ signifies the multiple layers or stages through which data pass, enabling the model to extract hierarchical features as it learns. Although deep learning models require significant computational time for training owing to their complex parameters, they execute predictions rapidly compared with traditional machine learning algorithms (Sarker Reference Sarker2021). This efficiency enables fast clinical decision support in practice.
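
As a hedged illustration of the 'multiple layers' idea, the sketch below fits a small multi-layer network using scikit-learn's MLPClassifier on synthetic data; the layer sizes and data are assumptions chosen for illustration rather than a recommended architecture.

```python
# Minimal sketch of a multi-layer ('deep') network; layer sizes are illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))                         # 500 cases, 20 features
y = (np.sin(X[:, 0]) + X[:, 1] ** 2 > 1).astype(int)   # non-linear synthetic target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Each hidden layer transforms the output of the previous one, extracting
# increasingly abstract features as data pass through the network.
model = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=2000, random_state=1)
model.fit(X_train, y_train)                            # training is the slow step
print("Held-out accuracy:", model.score(X_test, y_test))  # prediction itself is fast
```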

Applications of predictive modelling in psychiatry

Data value in psychiatric diagnosis and treatment

The diagnostic process in psychiatry requires the integration of multiple complex and subjective information sources, which has historically limited the translation of predictive models from research to clinical practice (Rocheteau Reference Rocheteau2023). Clinical trial data have emerged as especially valuable for machine learning applications, providing structured information with clear baseline characteristics, measurable outcomes and controlled interventions that can be used effectively to train predictive models (Koppe Reference Koppe, Meyer-Lindenberg and Durstewitz2021).

Evidence from large-scale clinical trials

The availability of large-scale, clean data-sets for training models is essential for the future. The Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial (Rush Reference Rush, Fava and Wisniewski2004) is an example of a data-set source that has provided a springboard for several research studies utilising machine learning techniques to create predictive models aiding personalised treatment. Perlis (Reference Perlis2013) developed a model using STAR*D data to predict treatment resistance, defined by whether an individual would reach remission following two antidepressant trials, enabling early identification of those at risk. Chekroud et al (2016) identified 25 variables that could predict remission when citalopram was prescribed to treat depression. This model has since been used to predict remission with other antidepressants and has been externally validated using an entirely different data-set to demonstrate generalisability, which is the ability to predict outcomes beyond the original data. Although these examples focus on depression (because of the robust data-sets available), the field is increasingly moving towards transdiagnostic approaches that can identify shared patterns and predict outcomes across multiple mental health conditions, drawing on the dimensional nature of psychiatric symptoms (Hansen 2022; de Lacy Reference de Lacy, Ramshaw and McCauley2023; Rosellini Reference Rosellini, Andrea and Galiano2023; El-Sappagh Reference El-Sappagh, Nazih and Alharbi2025).
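
The sketch below illustrates, in general terms, how a remission-prediction model of this kind might be trained and internally validated; the predictor names, synthetic values and outcome rule are hypothetical and are not taken from STAR*D or from the studies cited above.

```python
# Sketch of remission prediction from baseline trial variables; the column
# names and synthetic values are hypothetical, not taken from STAR*D.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "baseline_severity": rng.normal(20, 5, n),   # e.g. depression rating score
    "illness_duration_y": rng.gamma(2, 3, n),
    "prior_episodes": rng.integers(0, 6, n),
    "employment": rng.integers(0, 2, n),
})
# Synthetic outcome: 1 = remission after treatment, 0 = no remission
y = (df["baseline_severity"] + 2 * df["prior_episodes"]
     < 25 + rng.normal(0, 5, n)).astype(int)

model = GradientBoostingClassifier(random_state=0)
# Internal cross-validation; external validation on an independent data-set
# would still be needed to demonstrate generalisability.
scores = cross_val_score(model, df, y, cv=5, scoring="roc_auc")
print("Mean cross-validated AUC:", round(scores.mean(), 3))
```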

Applications in psychological therapy

In psychotherapy, predictive models have been used to predict the efficacy of a specific modality of therapy for the individual, as well as the relative efficacy when choosing between multiple therapies, or between pharmacological and psychotherapeutic management. Models can further recommend the optimal intensity of therapy, as well as the order in which modules should be delivered for the individual. Beyond treatment selection, predictive models have been created to analyse patient engagement data, including the frequency and duration of engagement with internet-based cognitive–behavioural therapy (CBT), and markers of engagement in language collected from audio or visual transcripts of CBT, where the degree of engagement is used as a predictor of efficacy (Chekroud 2021). Among different therapeutic approaches, CBT models are well represented, probably because the high volume of patients receiving CBT generates the large data-sets that machine learning techniques require.

Natural language processing

Natural language processing is the analysis of language in unstructured text, yielding insights such as sentiment and satisfaction and enabling exploration of large volumes of narrative information within medical records. It is also used in text generation, such as the development of chatbots and AI-generated summaries (Crema Reference Crema, Attardi and Sartiano2022). NLP has been utilised to integrate both use cases to identify language differences in individuals with emotional distress and implement timely interventions that reduced the incidence of anxiety and depression (Sachan Reference Sachan2018). NLP can also identify depression remission states (Hansen 2022). Interactive chatbots, virtual avatars and even robots have been used to simulate rapport building and foster a therapeutic relationship, which has had wide utility, such as improving treatment adherence in the management of schizophrenia (Bain Reference Bain, Shafner and Walling2017) and improving well-being, mood and social connections in individuals with dementia (Yu Reference Yu, Hui and Lee2015).
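
A minimal sketch of the text-analysis side of NLP is shown below: a bag-of-words classifier that flags language suggestive of emotional distress. The example sentences, labels and notion of 'distress' are invented for illustration and do not reproduce any published model.

```python
# Minimal sketch of text classification for distress-related language in free text.
# The example sentences and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I feel hopeless and cannot sleep",
    "Had a good week, enjoying work again",
    "Everything feels pointless lately",
    "Feeling calm and looking forward to the weekend",
]
labels = [1, 0, 1, 0]   # 1 = language suggesting distress, 0 = not

# Convert text to word-frequency features, then fit a simple classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["I can't stop worrying about everything"]))
```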

Several examples of NLP demonstrate the potential of AI-based approaches in identifying and utilising digital markers for symptom severity or as potential indicators of specific conditions, across clinical prediction, symptom monitoring and population surveillance contexts. Hansen et al (2025) conducted a machine learning analysis of electronic health records from 24 449 psychiatric patients, demonstrating that diagnostic progression to schizophrenia or bipolar disorder could be predicted using routine clinical data, although with higher accuracy for the former than the latter. The study suggests that such approaches are feasible for detecting schizophrenia progression in particular, potentially reducing diagnostic delays and duration of untreated illness. At the individual level, NLP approaches have shown promise for real-time symptom assessment. Crocamo et al (2025) conducted a pilot mobile health (mHealth) study with 32 individuals with bipolar disorder, using a mobile app to analyse speech patterns through NLP and acoustic signal processing. The study found that remotely collected speech patterns underlying both linguistic and acoustic features were associated with symptom severity levels and may help differentiate clinical conditions during mood state assessments.

Systematic examination of the field reveals both rapid growth and persistent challenges. Malgaroli et al (Reference Malgaroli, Hull and Zech2023) conducted a systematic review of 102 studies examining NLP applications in mental health interventions, finding rapid growth since 2019, with increased use of large language models and larger sample sizes. The review found that text-based features contributed more to model accuracy than audio markers, and it highlighted limitations, including lack of linguistic diversity, limited reproducibility and population bias in current NLP mental health research.

Beyond clinical settings, NLP has demonstrated utility for monitoring population-level mental health trends. Crocamo et al (2021) conducted sentiment analysis on 3 308 476 English-language COVID-19-related posts (tweets) on the Twitter platform between January and March 2020, finding that negative sentiment gradually increased following key pandemic events, with most active users showing increasingly negative sentiment scores. The study authors suggest integrating social media surveillance as a preventive approach to hinder emotional contagion and support community mental health during health crises. Similarly, Low et al (Reference Low, Rumker and Talkar2020) used NLP to analyse posts from 15 mental health support groups on Reddit during COVID-19, finding that a community for people with health anxiety (r/HealthAnxiety) showed early spikes in pandemic-related posts and that support groups related to attention-deficit hyperactivity disorder, eating disorders and anxiety showed the most negative semantic changes. The study revealed that suicidality and loneliness clusters more than doubled during the pandemic, with mental health support groups becoming linguistically more similar to health anxiety discussions.

Advances in the use of medical data

Medical data can be collated in an active or passive manner. Active data collection refers to the deliberate submission of data, typically via smartphone or computer-based devices; examples include self-reported symptoms, which could contribute to assessment, risk management or monitoring of treatment response. In comparison, passive data are collated automatically through wearable devices or smartphones (AI-READI Consortium 2024). Data types collected include voice recordings, heart rate variability, sleep, GPS data, social media use and textual communication. Predictive modelling can harness these data to identify behavioural patterns and generate individual-level predictions. Saito et al (Reference Saito, Suzuki and Kishi2022) developed a predictive model integrating passive medical data collected on a wearable device with medical examination data to predict the onset of mental illness, using training based on past mental illness insurance claims data. Additionally, predictive models can identify behavioural anomalies for the individual when compared with their own baseline, which is particularly useful for episodic mental health conditions: for example, behavioural patterns gleaned from active and passive medical data have been used to predict relapse risk and enable early detection of relapse in schizophrenia (Henson Reference Henson, D’Mello and Vaidyam2021).
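
The following sketch illustrates the general idea of flagging behavioural anomalies relative to an individual's own baseline using passively collected features; the feature set (sleep, step count, heart rate variability), values and detector choice are assumptions made for illustration, not the method of the studies cited.

```python
# Sketch of detecting behavioural anomalies relative to an individual baseline.
# Features (sleep hours, step count, heart rate variability) are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
# 90 days of baseline passive data for one individual: [sleep_h, steps, hrv_ms]
baseline = np.column_stack([
    rng.normal(7.0, 0.8, 90),
    rng.normal(8000, 1500, 90),
    rng.normal(55, 8, 90),
])

# Fit the detector on the individual's own baseline period
detector = IsolationForest(random_state=2).fit(baseline)

# A new week with disrupted sleep and reduced activity
new_week = np.array([[4.0, 2500, 35]] * 7)
print(detector.predict(new_week))   # -1 flags days that deviate from baseline
```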

Advanced applications and multimodal approaches

Moving beyond the single-modality approaches discussed above, advanced machine learning applications in psychiatry now integrate multiple types of data source simultaneously. The widespread use of digital medical records, data collection from smart devices and the ability to link multiple databases have redefined the use of big data in psychiatry, promising to overcome the limitations of conventional statistical methods in capturing psychiatric phenotypes (Stein Reference Stein, Shoptaw and Vigo2022). Multimodal machine learning approaches, which combine multiple types of data source, have demonstrated success in predicting psychosis risk and social functioning impairments in clinically high-risk individuals. By integrating clinical assessment data, neurocognitive testing results, neurophysiological measurements and magnetic resonance imaging (MRI) scans (Koutsouleris Reference Koutsouleris, Kambeitz-Ilankovic and Ruhrmann2018), these models process multiple sources of patient information simultaneously to make predictions, similar to how clinicians consider multiple factors in their assessment. These models can efficiently identify patterns across varied data sources without requiring separate statistical tests (Chekroud 2021) and continue to evolve as new data become available. The ability to access and manipulate such diverse data promises to advance individualised medicine by matching patients with appropriate therapies (Stein Reference Stein, Shoptaw and Vigo2022). However, these models have primarily been developed using data from Western populations, requiring further validation across different ethnic groups.
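
A simple way to picture multimodal integration is 'early fusion', in which features from each modality are concatenated before a single model is fitted, as in the hedged sketch below; the modality names, dimensions and synthetic outcome are assumptions, and the cited studies used considerably more sophisticated pipelines.

```python
# Sketch of early-fusion multimodal modelling: concatenate features from
# different sources before fitting one classifier. Dimensions are assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 300
clinical  = rng.normal(size=(n, 10))   # e.g. structured clinical ratings
cognitive = rng.normal(size=(n, 15))   # e.g. neurocognitive test scores
imaging   = rng.normal(size=(n, 50))   # e.g. summary MRI-derived measures
y = (clinical[:, 0] + imaging[:, 0] > 0).astype(int)   # synthetic outcome

X = np.hstack([clinical, cognitive, imaging])          # early fusion
scores = cross_val_score(RandomForestClassifier(random_state=3), X, y, cv=5)
print("Mean cross-validated accuracy:", round(scores.mean(), 3))
```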

Deep learning applications in precision psychiatry

The field is progressing towards precision psychiatry through innovative approaches like deep neural networks (DNNs) (Koppe Reference Koppe, Meyer-Lindenberg and Durstewitz2021). The critical need in psychiatry for precision therapies and deeper understanding of neurobiological mechanisms has driven these developments. Although DNNs show promise where traditional approaches have fallen short, their complexity typically demands large samples. However, models can be adapted for smaller data-sets by first training on group data before refining for individual predictions. Understanding DNN internal representations could reveal new insights into pathological mechanisms, as demonstrated by patterns in schizophrenia and autism. Current developments in DNN visualisation techniques may help uncover interpretable multimodal biomarkers, connecting pathophysiological understanding with tailored treatment.

Advantages and challenges of predictive modelling in psychiatry

The applications described above highlight both the potential and limitations of machine learning in psychiatric practice. Figure 1 summarises the key advantages and challenges in psychiatric predictive modelling.

FIG 1 Key advantages and challenges in psychiatric predictive modelling.

Advantages and applications

Predictive modelling in psychiatry offers significant potential benefits alongside notable challenges. The key advantages include the ability to integrate complex clinical data through multimodal machine learning, which can process various information sources simultaneously, from clinical notes to brain scans. These models can enhance clinical decision-making by identifying subtle patterns in patient data, enabling earlier interventions and more efficient resource allocation. They show promise in predicting treatment outcomes, particularly for antidepressants and psychological therapy, and can provide continuous monitoring through wearable devices and automated screening capabilities.

Current implementation challenges and limitations

The literature identifies several limitations in psychiatric predictive modelling. Areas where smaller sample sizes have been an obstacle to robust machine learning application include neurobiological treatments such as transcranial magnetic stimulation (TMS) and electroconvulsive therapy (ECT). When data-sets are small, there is an increased risk of over-reliance on limited data, which can result in model bias and overfitting. Specifically, bias occurs when the model systematically favours certain outcomes, and overfitting happens when the model becomes excessively tailored to the training data, reducing its ability to generalise to new, unseen data. Other factors that can have an impact on the translation of predictive models from research to clinic include logistical and cost-related barriers, for example in the application of predictive models within electrophysiology and neuroimaging. A systematic review of the genetic prediction of psychiatric conditions indicates a high degree of bias and low levels of standardisation between studies, which affects the utility of existing predictive models (Bracher-Smith Reference Bracher-Smith, Crawford and Escott-Price2021). Polygenic risk scores may support the performance of multimodal clinical models but remain insufficient in isolation.
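
The sketch below illustrates overfitting on a small data-set: a flexible model fits the training sample almost perfectly even though the labels are pure noise, so held-out performance collapses towards chance. The data are synthetic by construction.

```python
# Sketch of overfitting on a small data-set: near-perfect training accuracy
# but poor held-out performance. Labels are random noise by construction.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 30))          # small sample, many features
y = rng.integers(0, 2, size=40)        # labels unrelated to the features

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=4)
model = DecisionTreeClassifier(random_state=4).fit(X_train, y_train)

print("Training accuracy:", model.score(X_train, y_train))   # close to 1.0
print("Held-out accuracy:", model.score(X_test, y_test))     # near chance level
```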

Natural language processing considerations

Natural language processing is a particularly promising area in psychiatric predictive modelling, demonstrating effectiveness across clinical prediction, individual symptom monitoring and population-level surveillance. NLP techniques have shown capability in predicting diagnostic progression, analysing speech patterns for mood assessment, and monitoring mental health trends through social media platforms. However, NLP applications face specific challenges, including limited linguistic diversity in training data, difficulties in capturing cultural variations in language expression and the need for robust validation across different demographic groups. The integration of clinical notes and conversational data through NLP offers unique insights into patient experiences, although concerns regarding privacy and the interpretation of complex linguistic patterns remain significant considerations.

Technical and data considerations

High-quality psychiatric data-sets are scarce, and existing electronic health records often contain poorly structured data with inconsistent coding practices. Technical limitations include the risk of overfitting and substantial computational requirements. Data security is particularly important given the sensitive nature of mental health information, and gathering data through personal devices raises additional privacy concerns. The issue of model transparency is vital, as clinicians and patients must understand how these algorithms reach their conclusions to trust and adopt them effectively. Demographic bias is another challenge, as models trained on limited population groups may perform poorly for others, failing to account for cultural, age and gender differences in how mental health problems manifest.

Trade-offs between optimisation and explainability of predictive models

The integration of AI and machine learning into psychiatry introduces a fundamental tension between model optimisation and explainability. Highly optimised models, such as DNNs, can process vast amounts of patient data, yielding highly accurate predictions about mental health conditions and treatment outcomes. However, their complexity often makes it difficult to understand how specific predictions are generated. These sophisticated models can uncover subtle patterns across diverse data sources, ranging from electronic health records to genetic information, potentially surpassing human capabilities in predictive accuracy. In contrast, simpler models such as decision trees or linear regression may offer better explainability but often fail to capture the full complexity of the data, leading to less accurate predictions.

In healthcare, however, transparency and understanding of AI decisions are crucial, as they have a direct impact on patient care. Clinicians and patients need to trust and understand the reasoning behind AI-generated treatment recommendations or diagnoses, making explainability a top priority. To address this need, several explainable AI (XAI) techniques are being explored. Post hoc explainability methods, such as LIME (local interpretable model-agnostic explanations) and SHAP (Shapley additive explanations), offer valuable insights into the decision-making process of complex AI models (Ali Reference Ali, Abuhmed and El-Sappagh2023; El-Sappagh Reference El-Sappagh, Nazih and Alharbi2025). These methods help clinicians understand which features of the data most influenced a given prediction, regardless of the underlying model’s complexity.
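
A minimal sketch of post hoc explanation with SHAP values is shown below, assuming the open-source shap package is installed; the model, synthetic data and feature names (e.g. 'phq9_baseline') are illustrative assumptions rather than a validated clinical model.

```python
# Sketch of post hoc explanation with SHAP values; assumes the 'shap' package
# is installed. Model, data and feature names are synthetic and illustrative.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(5)
feature_names = ["phq9_baseline", "episode_count", "age", "sleep_hours"]
X = rng.normal(size=(200, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # outcome driven by two features

model = GradientBoostingClassifier(random_state=5).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)           # per-case feature contributions

# Average absolute contribution of each feature across cases
for name, value in zip(feature_names, np.abs(shap_values).mean(axis=0)):
    print(f"{name}: {value:.3f}")
```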

Another approach is the use of hybrid models (Ali Reference Ali, Abuhmed and El-Sappagh2023; Pavez Reference Pavez and Allende2024), which combine the simplicity of decision trees or linear models with the power of complex deep learning algorithms. This strategy aims to deliver both enhanced accuracy and improved explainability. It is also important to note that AI models are not intended to replace clinicians, but rather to support them in decision-making. These models work alongside healthcare professionals, providing predictions and recommendations, while the final decisions remain in the hands of qualified clinicians who can evaluate the rationale behind the suggestions generated by AI.

Ultimately, the goal is to develop AI systems that strike a balance between high predictive accuracy and clear, interpretable insights, ensuring that the systems are both trustworthy and effective in clinical settings.

Future directions

Looking ahead, priorities should include developing standardised data collection practices, establishing clear ethical frameworks and ensuring models are validated across diverse populations. Success will depend on balancing technological innovation with robust ethical standards while maintaining trust in mental health services. Despite these challenges, the potential benefits to patient care and service efficiency suggest this remains a valuable area for continued development.

MCQs

Select the single best option for each question stem

  1. Potential uses of machine learning-based predictive modelling in mental healthcare could include:

    a. early diagnosis of conditions

    b. identification of adverse events

    c. personalisation of treatment

    d. all of the above

    e. none of the above.

  2. Challenges to the successful integration of machine learning-based predictive modelling in mental healthcare include:

    a. demographic bias

    b. lack of standardisation

    c. model explainability

    d. model complexity

    e. all of the above.

  3. A strength of the unsupervised machine learning approach is that it:

    a. has good explainability

    b. requires labelled training data

    c. is intuitive and easy to interpret

    d. works better with small data-sets

    e. identifies hidden patterns in data.

  4. Which of the following is not a machine learning approach?

    a. supervised learning

    b. deep learning

    c. natural language processing

    d. neuro-linguistic programming

    e. unsupervised learning.

  5. Types of data that could be used by a machine learning approach include:

    a. coded data in electronic health records

    b. descriptive notes in electronic health records

    c. videos/images

    d. data from wearable devices

    e. all of the above.

MCQ answers

  1. d

  2. e

  3. e

  4. d

  5. e

Data availability

Data availability is not applicable to this article as no new data were created or analysed in this study.

Author contributions

G.L., E.A. and I.T. undertook the review of literature, supervised by G.C. and S.G. An initial draft written by G.L. and I.T. was substantially revised by E.A., G.C. and S.G. All authors reviewed and approved the final draft and are accountable for all aspects of the work.

Funding

This research received no specific grant from any funding agency, commercial or not-for-profit sectors. Leicestershire Partnership NHS Trust and Loughborough University provided support in kind by allowing time for the authors to do this review.

Declaration of interest

None.

References

Abakasanga, E, Kousovista, R, Cosma, G, et al (2024a) Identifying clusters on multiple long-term conditions for adults with learning disabilities. In Artificial Intelligence in Healthcare: First International Conference, AIiH 2024. Proceedings, Part I (Lecture Notes in Computer Science, vol. 14975) (eds X Xie, I Styles, G Powathil, et al): 45–58. Springer.
Abakasanga, E, Kousovista, R, Cosma, G, et al (2024b) Cluster and trajectory analysis of multiple long-term conditions in adults with learning disabilities. In Artificial Intelligence in Healthcare: First International Conference, AIiH 2024. Proceedings, Part I (Lecture Notes in Computer Science, vol. 14975) (eds X Xie, I Styles, G Powathil, et al): 3–16. Springer.
AI-READI Consortium (2024) AI-READI: rethinking AI data collection, preparation and sharing in diabetes research and beyond. Nature Metabolism, 6: 2210–2. doi: 10.1038/s42255-024-01165-x.
Ali, S, Abuhmed, T, El-Sappagh, S, et al (2023) Explainable Artificial Intelligence (XAI): what we know and what is left to attain Trustworthy Artificial Intelligence. Information Fusion, 99: 101805.
Bain, EE, Shafner, L, Walling, DP, et al (2017) Use of a novel artificial intelligence platform on mobile devices to assess dosing compliance in a Phase 2 clinical trial in subjects with schizophrenia. JMIR mHealth and uHealth, 5: 18. doi: 10.2196/mhealth.7030.
Bracher-Smith, M, Crawford, K, Escott-Price, V (2021) Machine learning for genetic prediction of psychiatric disorders: a systematic review. Molecular Psychiatry, 26: 70–9. doi: 10.1038/s41380-020-0825-2.
Chekroud, AM, Zotti, RJ, Shehzad, Z, et al (2016) Cross-trial prediction of treatment outcome in depression: a machine learning approach. Lancet Psychiatry, 3: 243–50. doi: 10.1016/S2215-0366(15)00471-X.
Chekroud, AM, Bondar, J, Delgadillo, J, et al (2021) The promise of machine learning in predicting treatment outcomes in psychiatry. World Psychiatry, 20: 154–70. doi: 10.1002/wps.20882.
Crema, C, Attardi, G, Sartiano, D, et al (2022) Natural language processing in clinical neuroscience and psychiatry: a review. Frontiers in Psychiatry, 13: 946387. doi: 10.3389/fpsyt.2022.946387.
Crocamo, C, Viviani, M, Famiglini, L, et al (2021) Surveilling COVID-19 emotional contagion on Twitter by sentiment analysis. European Psychiatry, 64: e17. doi: 10.1192/j.eurpsy.2021.3.
Crocamo, C, Cioni, R, Canestro, A, et al (2025) Acoustic and natural language markers for bipolar disorder: a pilot, mHealth cross-sectional study. JMIR Formative Research, 9: e65555. doi: 10.2196/65555.
de Lacy, N, Ramshaw, MJ, McCauley, E, et al (2023) Predicting individual cases of major adolescent psychiatric conditions with artificial intelligence. Translational Psychiatry, 13: 314. doi: 10.1038/s41398-023-02599-9.
Eloranta, S, Boman, M (2022) Predictive models for clinical decision making: deep dives in practical machine learning. Journal of Internal Medicine, 292: 278–95. doi: 10.1111/joim.13483.
El-Sappagh, S, Nazih, W, Alharbi, M, et al (2025) Responsible artificial intelligence for mental health disorders: current applications and future challenges. Journal of Disability Research, 4: 129. doi: 10.57197/JDR-2024-0101.
Hansen, L, Zhang, YP, Wolf, D, et al (2022) A generalizable speech emotion recognition model reveals depression and remission. Acta Psychiatrica Scandinavica, 145: 186–99. doi: 10.1111/acps.13388.
Hansen, L, Bernstorff, M, Enevoldsen, K, et al (2025) Predicting diagnostic progression to schizophrenia or bipolar disorder via machine learning. JAMA Psychiatry, 82: 459–69. doi: 10.1001/jamapsychiatry.2024.4702.
Henson, P, D’Mello, R, Vaidyam, A, et al (2021) Anomaly detection to predict relapse risk in schizophrenia. Translational Psychiatry, 11: 28. doi: 10.1038/s41398-020-01123-7.
Jiang, F, Jiang, Y, Zhi, H (2017) Artificial intelligence in healthcare: past, present and future. Stroke and Vascular Neurology, 2: 230–43. doi: 10.1136/svn-2017-000101.
Koppe, G, Meyer-Lindenberg, A, Durstewitz, D (2021) Deep learning for small and big data in psychiatry. Neuropsychopharmacology, 46: 176–90. doi: 10.1038/s41386-020-0767-z.
Koutsouleris, N, Kambeitz-Ilankovic, L, Ruhrmann, S, et al (2018) Prediction models of functional outcomes for individuals in the clinical high-risk state for psychosis or with recent-onset depression: a multimodal, multisite machine learning analysis. JAMA Psychiatry, 75: 1156–72. doi: 10.1001/jamapsychiatry.2018.2165.
Low, D, Rumker, L, Talkar, T, et al (2020) Natural language processing reveals vulnerable mental health support groups and heightened health anxiety on Reddit during COVID-19: observational study. Journal of Medical Internet Research, 22: e22635. doi: 10.2196/22635.
Malgaroli, M, Hull, TD, Zech, JM, et al (2023) Natural language processing for mental health interventions: a systematic review and research framework. Translational Psychiatry, 13: 309. doi: 10.1038/s41398-023-02592-2.
Pavez, J, Allende, H (2024) A hybrid system based on Bayesian networks and deep learning for explainable mental health diagnosis. Applied Sciences, 14: 8283. doi: 10.3390/app14188283.
Perlis, RH (2013) A clinical risk stratification tool for predicting treatment resistance in major depressive disorder. Biological Psychiatry, 74: 7–14. doi: 10.1016/j.biopsych.2012.12.007.
Rocheteau, E (2023) On the role of artificial intelligence in psychiatry. British Journal of Psychiatry, 222: 54–7. doi: 10.1192/bjp.2022.132.
Rosellini, AJ, Andrea, AM, Galiano, CS, et al (2023) Developing transdiagnostic internalizing disorder prognostic indices for outpatient cognitive behavioral therapy. Behavior Therapy, 54: 461–75. doi: 10.1016/j.beth.2022.11.004.
Rush, AJ, Fava, M, Wisniewski, SR, et al (2004) Sequenced treatment alternatives to relieve depression (STAR*D): rationale and design. Controlled Clinical Trials, 25: 119–42. doi: 10.1016/S0197-2456(03)00112-0.
Sachan, D (2018) Self-help robots drive blues away. Lancet Psychiatry, 5: 547. doi: 10.1016/S2215-0366(18)30230-X.
Saito, T, Suzuki, H, Kishi, A (2022) Predictive modelling of mental illness onset using wearable devices and medical examination data: machine learning approach. Frontiers in Digital Health, 4: 861808. doi: 10.3389/fdgth.2022.861808.
Sarker, IH (2021) Deep learning: a comprehensive overview on techniques, taxonomy, applications and research directions. SN Computer Science, 2: 420. doi: 10.1007/s42979-021-00815-1.
Stein, DJ, Shoptaw, SJ, Vigo, DV, et al (2022) Psychiatric diagnosis and treatment in the 21st century: paradigm shifts versus incremental integration. World Psychiatry, 21: 393–414.
Yu, R, Hui, E, Lee, J, et al (2015) Use of a therapeutic, socially assistive pet robot (PARO) in improving mood and stimulating social interaction and communication for people with dementia: study protocol for a randomized controlled trial. JMIR Research Protocols, 4: 45. doi: 10.2196/resprot.4189.