1
Levis M, Levy J, Dimambro M, Dufort V, Ludmer DJ, Goldberg M, Shiner B. Using natural language processing to evaluate temporal patterns in suicide risk variation among high-risk Veterans. Psychiatry Res 2024; 339:116097. PMID: 39083961. DOI: 10.1016/j.psychres.2024.116097.
Abstract
Measuring suicide risk fluctuation remains difficult, especially for patients at high risk of suicide. Our study addressed this issue by leveraging Dynamic Topic Modeling, a natural language processing method that evaluates how topics change over time, to analyze the unstructured electronic health records of high-suicide-risk Veterans Affairs patients. Our sample included all high-risk patients who died by suicide (cases) or did not (controls) in 2017 and 2018. Cases and controls shared the same risk, location, and treatment intervals and received nine months of mental health care during the year before the relevant end date. Each case was matched with five controls. We analyzed case records from diagnosis until death and control records from diagnosis until the matched case's death date. Our final sample included 218 cases and 943 controls. We analyzed the corpus using a Python-based Dynamic Topic Modeling algorithm. We identified five distinct topics: "Medication," "Intervention," "Treatment Goals," "Suicide," and "Treatment Focus." We observed divergent change patterns over time, with pathology-focused care increasing for cases and supportive care increasing for controls. The case topics tended to fluctuate more than the control topics, suggesting the importance of monitoring lability. Our study provides a method for monitoring risk fluctuation and strengthens the groundwork for time-sensitive risk measurement.
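The lability comparison described above can be illustrated with a minimal sketch: given per-interval topic proportions for a patient, one simple fluctuation score is the mean absolute change between consecutive time slices. The trajectories below are hypothetical, and the study's actual DTM pipeline is far more involved; this shows only the comparison logic.

```python
from statistics import mean

def topic_lability(trajectory):
    """Mean absolute change in a topic's proportion between
    consecutive time slices (higher = more fluctuation)."""
    return mean(abs(b - a) for a, b in zip(trajectory, trajectory[1:]))

# Hypothetical per-quarter proportions of a single topic
case_traj = [0.10, 0.30, 0.15, 0.40, 0.20]     # labile trajectory
control_traj = [0.12, 0.14, 0.13, 0.15, 0.14]  # stable trajectory

case_score = topic_lability(case_traj)
control_score = topic_lability(control_traj)
```

Comparing such scores across matched cases and controls is one way to quantify the "monitoring lability" idea in the abstract.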
Affiliation(s)
- Maxwell Levis
- White River Junction VA Medical Center, White River Junction, VT, USA; Geisel School of Medicine at Dartmouth, Hanover, NH, USA
- Joshua Levy
- Geisel School of Medicine at Dartmouth, Hanover, NH, USA
- Monica Dimambro
- White River Junction VA Medical Center, White River Junction, VT, USA
- Vincent Dufort
- White River Junction VA Medical Center, White River Junction, VT, USA
- Dana J Ludmer
- National Institute for the Psychotherapies, New York, NY, USA
- Brian Shiner
- White River Junction VA Medical Center, White River Junction, VT, USA; Geisel School of Medicine at Dartmouth, Hanover, NH, USA; National Center for PTSD Executive Division, White River Junction, VT, USA
2
Zang C, Hou Y, Lyu D, Jin J, Sacco S, Chen K, Aseltine R, Wang F. Accuracy and transportability of machine learning models for adolescent suicide prediction with longitudinal clinical records. Transl Psychiatry 2024; 14:316. PMID: 39085206. PMCID: PMC11291985. DOI: 10.1038/s41398-024-03034-3.
Abstract
Machine learning models trained on real-world data have demonstrated promise in predicting suicide attempts in adolescents. However, their transportability, namely the performance of a model trained on one dataset and applied to different data, is largely unknown, hindering the clinical adoption of these models. Here we developed machine learning-based suicide prediction models from real-world data collected in different contexts (inpatient, outpatient, and all encounters) and for different purposes (administrative claims and electronic health records), and compared their cross-data performance. The three datasets used were the All-Payer Claims Database in Connecticut, the Hospital Inpatient Discharge Database in Connecticut, and the electronic health records data provided by the Kansas Health Information Network. We included 285,320 patients, among whom we identified 3389 (1.2%) suicide attempters; 66% of the attempters were female. Models were trained and evaluated on source datasets and then applied to target datasets. More complex models, particularly deep long short-term memory neural networks, did not outperform simpler regularized logistic regression models in terms of either local or transported performance. Transported models exhibited varying performance, showing drops or even improvements relative to their source performance. While transported models can achieve satisfactory performance, they are usually upper-bounded by the best performance of locally developed models, although they can identify additional cases in target data. Our study uncovers complex transportability patterns and could facilitate the development of suicide prediction models with better performance and generalizability.
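The local-versus-transported evaluation described above can be sketched with a rank-based ROC AUC computed on hypothetical scores; the data and score values below are stand-ins, not the study's.

```python
def auc(scores, labels):
    """Probability that a random positive is ranked above a random
    negative, with ties counted as 0.5 -- equivalent to ROC AUC."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model outputs: "local" = evaluated at the training
# site, "transported" = the same model applied to another site.
local_labels  = [1, 1, 0, 0, 0]
local_scores  = [0.9, 0.7, 0.6, 0.3, 0.2]
target_labels = [1, 1, 0, 0, 0]
target_scores = [0.8, 0.4, 0.5, 0.3, 0.1]

local_auc = auc(local_scores, local_labels)
transported_auc = auc(target_scores, target_labels)
```

Comparing `local_auc` with `transported_auc` across dataset pairs is the essence of the transportability analysis.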
Affiliation(s)
- Chengxi Zang
- Department of Population Health Sciences, Weill Cornell Medicine, Cornell University, Cornell, USA
- Institute of Artificial Intelligence for Digital Health, Weill Cornell Medicine, Cornell University, Cornell, USA
- Yu Hou
- Department of Population Health Sciences, Weill Cornell Medicine, Cornell University, Cornell, USA
- Institute of Artificial Intelligence for Digital Health, Weill Cornell Medicine, Cornell University, Cornell, USA
- Daoming Lyu
- Department of Population Health Sciences, Weill Cornell Medicine, Cornell University, Cornell, USA
- Institute of Artificial Intelligence for Digital Health, Weill Cornell Medicine, Cornell University, Cornell, USA
- Jun Jin
- Department of Statistics, University of Connecticut, Connecticut, USA
- Shane Sacco
- Department of Statistics, University of Connecticut, Connecticut, USA
- Kun Chen
- Department of Statistics, University of Connecticut, Connecticut, USA
- Fei Wang
- Department of Population Health Sciences, Weill Cornell Medicine, Cornell University, Cornell, USA
- Institute of Artificial Intelligence for Digital Health, Weill Cornell Medicine, Cornell University, Cornell, USA
3
Carson NJ, Yang X, Mullin B, Stettenbauer E, Waddington M, Zhang A, Williams P, Rios Perez GE, Cook BL. Predicting adolescent suicidal behavior following inpatient discharge using structured and unstructured data. J Affect Disord 2024; 350:382-387. PMID: 38158050. PMCID: PMC10923087. DOI: 10.1016/j.jad.2023.12.059.
Abstract
BACKGROUND The objective was to develop and assess the performance of an algorithm predicting suicide-related ICD codes within three months of psychiatric discharge. METHODS This prognostic study used a retrospective cohort of EHR data from 2789 youth (12 to 20 years old) hospitalized in a safety net institution in the Northeastern United States. The dataset combined structured data with unstructured data obtained through natural language processing of clinical notes. Machine learning approaches compared gradient boosting to random forest analyses. RESULTS Areas under the ROC and precision-recall curves were 0.88 and 0.17, respectively, for the final gradient boosting model. The cutoff point of the model-generated predicted probabilities of suicide that optimally classified individuals as high risk or not was 0.009. When applying this cutoff (0.009) to the hold-out testing set, the model correctly identified 8 positive cases out of 10 and 418 negative cases out of 548. The corresponding performance metrics were 80 % sensitivity, 76 % specificity, 6 % PPV, 99 % NPV, an F1 score of 0.11, and an accuracy of 76 %. LIMITATIONS The data in this study come from a single health system, possibly introducing bias into the model's algorithm. Thus, the model may have underestimated the incidence of suicidal behavior in the study population. Further research should include EHRs from multiple systems. CONCLUSIONS These performance metrics suggest a benefit to including both unstructured and structured data in the design of predictive algorithms for suicidal behavior, which can be integrated into psychiatric services to help assess risk.
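The reported performance metrics follow directly from the confusion counts given in the abstract (8 of 10 positives and 418 of 548 negatives correctly classified at the 0.009 cutoff); a small sketch reproduces them:

```python
def classification_metrics(tp, fn, tn, fp):
    """Standard screening metrics computed from a 2x2 confusion table."""
    sens = tp / (tp + fn)                 # sensitivity (recall)
    spec = tn / (tn + fp)                 # specificity
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    f1 = 2 * ppv * sens / (ppv + sens)    # harmonic mean of PPV and sensitivity
    acc = (tp + tn) / (tp + fn + tn + fp)
    return sens, spec, ppv, npv, f1, acc

# Counts from the abstract's hold-out test set
sens, spec, ppv, npv, f1, acc = classification_metrics(
    tp=8, fn=2, tn=418, fp=548 - 418)
```

These values round to the abstract's 80% sensitivity, 76% specificity, 6% PPV, 99% NPV, F1 of 0.11, and 76% accuracy; the low PPV despite high sensitivity illustrates how a rare outcome depresses precision.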
Affiliation(s)
- Nicholas J Carson
- Health Equity Research Lab, Cambridge Health Alliance, 1035 Cambridge Street, Cambridge, MA 02139, USA
- Xinyu Yang
- Parexel, 275 Grove St., Suite 101C, Newton, MA 02466, USA
- Brian Mullin
- Health Equity Research Lab, Cambridge Health Alliance, 1035 Cambridge Street, Cambridge, MA 02139, USA
- Marin Waddington
- Division of Gastroenterology at Brigham and Women's Hospital, Resnek Family Center for PSC Research, 75 Francis Street, Boston, MA 02115, USA
- Alice Zhang
- Department of Psychology, New York University, 6 Washington Place, New York, NY 10003, USA
- Peyton Williams
- Health Equity Research Lab, Cambridge Health Alliance, 1035 Cambridge Street, Cambridge, MA 02139, USA
- Gabriel E Rios Perez
- Health Equity Research Lab, Cambridge Health Alliance, 1035 Cambridge Street, Cambridge, MA 02139, USA
- Benjamin Lê Cook
- Health Equity Research Lab, Cambridge Health Alliance, 1035 Cambridge Street, Cambridge, MA 02139, USA
4
Pigoni A, Delvecchio G, Turtulici N, Madonna D, Pietrini P, Cecchetti L, Brambilla P. Machine learning and the prediction of suicide in psychiatric populations: a systematic review. Transl Psychiatry 2024; 14:140. PMID: 38461283. PMCID: PMC10925059. DOI: 10.1038/s41398-024-02852-9.
Abstract
Machine learning (ML) has emerged as a promising tool to enhance suicide prediction. However, because many large-sample studies mixed psychiatric and non-psychiatric populations, a formal psychiatric diagnosis emerged as a strong predictor of suicide risk, overshadowing more subtle risk factors specific to distinct populations. To overcome this limitation, we conducted a systematic review of ML studies evaluating suicidal behaviors exclusively in psychiatric clinical populations. A systematic literature search was performed from inception through November 17, 2022 on PubMed, EMBASE, and Scopus following the PRISMA guidelines. Original research using ML techniques to assess the risk of suicide or predict suicide attempts in psychiatric populations was included. Risk of bias was assessed using the transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD) guidelines. A total of 1032 studies were retrieved, of which 81 satisfied the inclusion criteria and were included in the qualitative synthesis. Clinical and demographic features were the most frequently employed, and random forest, support vector machine, and convolutional neural network algorithms performed better in terms of accuracy than other algorithms when directly compared. Despite heterogeneity in procedures, most studies reported an accuracy of 70% or greater based on features such as previous attempts, severity of the disorder, and pharmacological treatments. Although the evidence reported is promising, ML algorithms for suicide prediction still present limitations, including the lack of neurobiological and imaging data and the lack of external validation samples. Overcoming these issues may lead to the development of models suitable for adoption in clinical practice. Further research is warranted to advance a field that holds the potential to critically impact suicide mortality.
Affiliation(s)
- Alessandro Pigoni
- Social and Affective Neuroscience Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Department of Neurosciences and Mental Health, Fondazione IRCCS Ca' Granda, Ospedale Maggiore Policlinico, Milan, Italy
- Giuseppe Delvecchio
- Department of Neurosciences and Mental Health, Fondazione IRCCS Ca' Granda, Ospedale Maggiore Policlinico, Milan, Italy
- Nunzio Turtulici
- Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy
- Domenico Madonna
- Department of Neurosciences and Mental Health, Fondazione IRCCS Ca' Granda, Ospedale Maggiore Policlinico, Milan, Italy
- Pietro Pietrini
- MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Luca Cecchetti
- Social and Affective Neuroscience Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Paolo Brambilla
- Department of Neurosciences and Mental Health, Fondazione IRCCS Ca' Granda, Ospedale Maggiore Policlinico, Milan, Italy
- Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy
5
Rahayu DS, Khairi AM, Islami CC, Nafi A, Yuliastini NKS. 'Unleashing the guardians: the dynamic triad of AI, social media and school counsellors safeguarding teenage lives from the abyss'. J Public Health (Oxf) 2024; 46:e167-e168. PMID: 37533218. DOI: 10.1093/pubmed/fdad139.
Affiliation(s)
- Dwi Sri Rahayu
- Department of Guidance and Counseling, Faculty of Education, Universitas Negeri Malang, Malang, Jawa Timur 65145, Indonesia
- Department of Guidance and Counseling, Faculty of Training and Education, Universitas Katolik Widya Mandala Surabaya, Madiun, Jawa Timur 63131, Indonesia
- Alfin Miftahul Khairi
- Department of Islamic Guidance and Counseling, Faculty of Ushuluddin and Da'wa, UIN Raden Mas Said Surakarta, Solo, Jawa Tengah 57168, Indonesia
- Chitra Charisma Islami
- Department of Teacher Education for Early Childhood Education, STKIP Muhammadiyah Kuningan, Kuningan, Jawa Barat 45511, Indonesia
- Ahmad Nafi
- Department of Islamic Guidance and Counseling, Faculty of Da'wa and Islamic Communication, IAIN Kudus, Kudus, Jawa Tengah 59322, Indonesia
- Ni Komang Sri Yuliastini
- Department of Guidance and Counseling, Faculty of Training and Education, Universitas PGRI Mahadewa Indonesia, Denpasar, Provinsi Bali 80239, Indonesia
6
Li X, Chen F, Ma L. Exploring the Potential of Artificial Intelligence in Adolescent Suicide Prevention: Current Applications, Challenges, and Future Directions. Psychiatry 2024; 87:7-20. PMID: 38227496. DOI: 10.1080/00332747.2023.2291945.
Abstract
Objective: The global surge in adolescent suicide necessitates the development of innovative and efficacious preventive measures. Traditionally, various approaches have been used, but with limited success. However, rapid advancements in artificial intelligence (AI) have opened new possibilities. This paper reviews the potential and challenges of integrating AI into suicide prevention strategies, focusing on adolescents. Method: This narrative review assesses the impact of AI on suicide prevention strategies, the strategies and cases of AI applications in adolescent suicide prevention, and the challenges faced. Through searches of the PubMed, Web of Science, PsycINFO, and EMBASE databases, 19 relevant articles were included in the review. Results: AI has significantly improved risk assessment and predictive modeling for identifying suicidal behavior. It has enabled the analysis of textual data through natural language processing and fostered novel intervention strategies. Although AI applications, such as chatbots and monitoring systems, show promise, they must navigate challenges like data privacy and ethical considerations. The research underscores the potential of AI to enhance future suicide prevention efforts through personalized interventions and integration with emerging technologies. Conclusion: AI possesses transformative potential for adolescent suicide prevention by offering targeted and adaptive solutions, while also raising crucial ethical and practical considerations. Looking forward, AI can play a critical role in mitigating adolescent suicide rates, marking a new frontier in mental health care.
7
Dutta R, Gkotsis G, Velupillai SU, Downs J, Roberts A, Stewart R, Hotopf M. Identifying features of risk periods for suicide attempts using document frequency and language use in electronic health records. Front Psychiatry 2023; 14:1217649. PMID: 38152362. PMCID: PMC10752595. DOI: 10.3389/fpsyt.2023.1217649.
Abstract
Background Individualising mental healthcare at times when a patient is most at risk of suicide involves shifting research emphasis from static risk factors to those that may be modifiable with interventions. Currently, risk assessment is based on a range of extensively reported stable risk factors, but critical to dynamic suicide risk assessment is an understanding of each individual patient's health trajectory over time. The use of electronic health records (EHRs) and analysis using machine learning has the potential to accelerate progress in developing early warning indicators. Setting EHR data from the South London and Maudsley NHS Foundation Trust (SLaM), which provides secondary mental healthcare for 1.8 million people living in four South London boroughs. Objectives To determine whether the time window proximal to a hospitalised suicide attempt can be discriminated from a distal period of lower risk by analysing the documentation and mental health clinical free text data from EHRs and (i) investigate whether the rate at which EHR documents are recorded per patient is associated with a suicide attempt; (ii) compare document-level word usage between documents proximal and distal to a suicide attempt; and (iii) compare n-gram frequency related to third-person pronoun use proximal and distal to a suicide attempt using machine learning. Methods The Clinical Record Interactive Search (CRIS) system allowed access to de-identified information from the EHRs. CRIS has been linked with Hospital Episode Statistics (HES) data for Admitted Patient Care. We analysed document and event data for patients who had at some point between 1 April 2006 and 31 March 2013 been hospitalised with a HES ICD-10 code related to attempted suicide (X60-X84; Y10-Y34; Y87.0/Y87.2). Findings A total of 8,247 patients were identified as having made a hospitalised suicide attempt. Of these, 3,167 (39.8%) had at least one document available in their EHR prior to their first suicide attempt, and 1,424 (45.0%) of those had been "monitored" by mental healthcare services in the past 30 days. From 60 days prior to a first suicide attempt, the monitoring level (document recording within the past 30 days) increased rapidly, from 35.1% to 45.0%. Documents containing words related to prescribed medications/drugs/overdose/poisoning/addiction had the highest odds of being a risk indicator proximal to a suicide attempt (OR 1.88; precision 0.91 and recall 0.93), and documents with words citing a care plan were associated with the lowest risk for a suicide attempt (OR 0.22; precision 1.00 and recall 1.00). Function words, word sequence, and pronouns were most common in all three representations (uni-, bi-, and tri-gram). Conclusion EHR documentation frequency and language use can be used to distinguish periods distal from and proximal to a suicide attempt. However, in our study 55.0% of patients with documentation prior to their first suicide attempt did not have a record in the preceding 30 days, meaning that a high number are not seen by services at their most vulnerable point.
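The n-gram representations described in objective (iii) can be sketched with a simple frequency counter; the note fragment below is invented for illustration, not drawn from the study's records.

```python
from collections import Counter

def ngram_counts(text, n):
    """Frequency of word n-grams in a lowercased token stream."""
    tokens = text.lower().split()
    return Counter(tuple(tokens[i:i + n])
                   for i in range(len(tokens) - n + 1))

# Hypothetical note fragment illustrating third-person pronoun use
note = "she reports she has taken an overdose and she is not safe"
unigrams = ngram_counts(note, 1)
bigrams = ngram_counts(note, 2)
trigrams = ngram_counts(note, 3)
```

Comparing such counts between documents proximal and distal to an attempt is the basic feature-extraction step behind the pronoun and function-word findings.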
Affiliation(s)
- Rina Dutta
- King’s College London, IoPPN, London, United Kingdom
- South London and Maudsley NHS Foundation Trust, London, United Kingdom
- Johnny Downs
- King’s College London, IoPPN, London, United Kingdom
- South London and Maudsley NHS Foundation Trust, London, United Kingdom
- Angus Roberts
- King’s College London, IoPPN, London, United Kingdom
- Robert Stewart
- King’s College London, IoPPN, London, United Kingdom
- South London and Maudsley NHS Foundation Trust, London, United Kingdom
- Matthew Hotopf
- King’s College London, IoPPN, London, United Kingdom
- South London and Maudsley NHS Foundation Trust, London, United Kingdom
8
Garriga R, Buda TS, Guerreiro J, Omaña Iglesias J, Estella Aguerri I, Matić A. Combining clinical notes with structured electronic health records enhances the prediction of mental health crises. Cell Rep Med 2023; 4:101260. PMID: 37913776. PMCID: PMC10694623. DOI: 10.1016/j.xcrm.2023.101260.
Abstract
An automatic prediction of mental health crises can improve caseload prioritization and enable preventative interventions, improving patient outcomes and reducing costs. We combine structured electronic health records (EHRs) with clinical notes from 59,750 de-identified patients to predict the risk of mental health crisis relapse within the next 28 days. The results suggest that an ensemble machine learning model that relies on structured EHRs and clinical notes when available, and on structured data alone when notes are unavailable, offers superior performance over models trained on either of the two data streams alone. Furthermore, the study provides key takeaways on the amount of clinical-note data required to add value in predictive analytics. This study sheds light on the untapped potential of clinical notes in the prediction of mental health crises and highlights the importance of choosing an appropriate machine learning method for combining structured and unstructured EHRs.
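The fall-back logic, using notes when present and structured data alone otherwise, can be sketched as a toy weighted combination. The study's actual ensemble is a trained machine learning model; the weighting scheme and scores below are illustrative assumptions only.

```python
def ensemble_risk(structured_score, notes_score=None, w_notes=0.5):
    """Combine a structured-EHR risk score with a notes-based score
    when one exists; otherwise fall back to structured data alone."""
    if notes_score is None:
        return structured_score
    return (1 - w_notes) * structured_score + w_notes * notes_score

with_notes = ensemble_risk(0.4, 0.8)  # both data streams available
without_notes = ensemble_risk(0.4)    # no note in the prediction window
```

The design point is that the predictor never fails when notes are missing; it simply degrades to the structured-only estimate.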
Affiliation(s)
- Roger Garriga
- Koa Health, 08019 Barcelona, Spain; Department of Information and Communication Technologies, Universitat Pompeu Fabra, 08018 Barcelona, Spain
9
Rawat BPS, Reisman J, Pogoda TK, Liu W, Rongali S, Aseltine RH, Chen K, Tsai J, Berlowitz D, Yu H, Carlson KF. Intentional Self-Harm Among US Veterans With Traumatic Brain Injury or Posttraumatic Stress Disorder: Retrospective Cohort Study From 2008 to 2017. JMIR Public Health Surveill 2023; 9:e42803. PMID: 37486751. PMCID: PMC10407646. DOI: 10.2196/42803.
Abstract
BACKGROUND Veterans with a history of traumatic brain injury (TBI) and/or posttraumatic stress disorder (PTSD) may be at increased risk of suicide attempts and other forms of intentional self-harm as compared to veterans without TBI or PTSD. OBJECTIVE Using administrative data from the US Veterans Health Administration (VHA), we studied associations between TBI and PTSD diagnoses, and subsequent diagnoses of intentional self-harm among US veterans who used VHA health care between 2008 and 2017. METHODS All veterans with encounters or hospitalizations for intentional self-harm were assigned "index dates" corresponding to the date of the first related visit; among those without intentional self-harm, we randomly selected a date from among the veteran's health care encounters to match the distribution of case index dates over the 10-year period. We then examined the prevalence of TBI and PTSD diagnoses within the 5-year period prior to veterans' index dates. TBI, PTSD, and intentional self-harm were identified using International Classification of Diseases diagnosis and external cause of injury codes from inpatient and outpatient VHA encounters. We stratified analyses by veterans' average yearly VHA utilization in the 5-year period before their index date (low, medium, or high). Variations in prevalence and odds of intentional self-harm diagnoses were compared by veterans' prior TBI and PTSD diagnosis status (TBI only, PTSD only, and comorbid TBI/PTSD) for each VHA utilization stratum. Multivariable models adjusted for age, sex, race, ethnicity, marital status, Department of Veterans Affairs service-connection status, and Charlson Comorbidity Index scores. RESULTS About 6.7 million veterans with at least two VHA visits in the 5-year period before their index dates were included in the analyses; 86,644 had at least one intentional self-harm diagnosis during the study period. During the periods prior to veterans' index dates, 93,866 were diagnosed with TBI only; 892,420 with PTSD only; and 102,549 with comorbid TBI/PTSD. Across all three VHA utilization strata, the prevalence of intentional self-harm diagnoses was higher among veterans diagnosed with TBI, PTSD, or TBI/PTSD than among veterans with neither diagnosis. The observed difference was most pronounced among veterans in the high VHA utilization stratum: the prevalence of intentional self-harm was six times higher among those with comorbid TBI/PTSD (6,778/58,295, 11.63%) than among veterans with neither TBI nor PTSD (21,979/1,144,991, 1.92%). Adjusted odds ratios suggested that, after accounting for potential confounders, veterans with TBI, PTSD, or comorbid TBI/PTSD had higher odds of self-harm compared to veterans without these diagnoses. Among veterans with high VHA utilization, those with comorbid TBI/PTSD were 4.26 (95% CI 4.15-4.38) times more likely to receive diagnoses for intentional self-harm than veterans with neither diagnosis. This pattern was similar for veterans with low and medium VHA utilization. CONCLUSIONS Veterans with TBI and/or PTSD diagnoses, compared to those with neither diagnosis, were substantially more likely to be subsequently diagnosed with intentional self-harm between 2008 and 2017. These associations were most pronounced among veterans who used VHA health care most frequently. These findings suggest a need for suicide prevention efforts targeted at veterans with these diagnoses.
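The unadjusted six-fold difference can be checked directly from the counts reported in the abstract. Note that the crude ratios below differ from the adjusted odds ratio of 4.26, which accounts for confounders via multivariable modeling.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table: a/b = exposed with/without the
    outcome, c/d = unexposed with/without the outcome."""
    return (a / b) / (c / d)

# Counts from the abstract (high VHA utilization stratum):
# comorbid TBI/PTSD: 6,778 self-harm diagnoses among 58,295 veterans
# neither diagnosis: 21,979 self-harm diagnoses among 1,144,991 veterans
exposed_cases, exposed_total = 6778, 58295
unexposed_cases, unexposed_total = 21979, 1144991

prevalence_ratio = ((exposed_cases / exposed_total)
                    / (unexposed_cases / unexposed_total))
unadjusted_or = odds_ratio(
    exposed_cases, exposed_total - exposed_cases,
    unexposed_cases, unexposed_total - unexposed_cases)
```

The prevalence ratio comes out near 6, matching the "six times higher" statement; the unadjusted odds ratio is somewhat larger because the outcome is not rare in the exposed group.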
Affiliation(s)
- Bhanu Pratap Singh Rawat
- Manning College of Information and Computer Sciences, University of Massachusetts Amherst, Amherst, MA, United States
- Joel Reisman
- Center for Healthcare Organization & Implementation Research, VA Boston Healthcare System, Bedford, MA, United States
- Terri K Pogoda
- Center for Healthcare Organization & Implementation Research, VA Boston Healthcare System, Boston, MA, United States
- Boston University School of Public Health, Boston, MA, United States
- Weisong Liu
- Center of Biomedical and Health Research in Data Sciences, University of Massachusetts Lowell, Lowell, MA, United States
- Subendhu Rongali
- Manning College of Information and Computer Sciences, University of Massachusetts Amherst, Amherst, MA, United States
- Robert H Aseltine
- Division of Behavioral Sciences and Community Health, UConn Health, Farmington, CT, United States
- Kun Chen
- Department of Statistics, University of Connecticut, Storrs, CT, United States
- Jack Tsai
- Center of Biomedical and Health Research in Data Sciences, University of Massachusetts Lowell, Lowell, MA, United States
- Dan Berlowitz
- Center of Biomedical and Health Research in Data Sciences, University of Massachusetts Lowell, Lowell, MA, United States
- Hong Yu
- Manning College of Information and Computer Sciences, University of Massachusetts Amherst, Amherst, MA, United States
- Center for Healthcare Organization & Implementation Research, VA Boston Healthcare System, Boston, MA, United States
- Center of Biomedical and Health Research in Data Sciences, University of Massachusetts Lowell, Lowell, MA, United States
- Kathleen F Carlson
- Center to Improve Veteran Involvement in Care, VA Portland Health Care System, Portland, OR, United States
- Oregon Health & Science University-Portland State University School of Public Health, Portland, OR, United States
10
Levis M, Levy J, Dufort V, Russ CJ, Shiner B. Dynamic suicide topic modelling: Deriving population-specific, psychosocial and time-sensitive suicide risk variables from Electronic Health Record psychotherapy notes. Clin Psychol Psychother 2023; 30:795-810. PMID: 36797651. PMCID: PMC11172400. DOI: 10.1002/cpp.2842.
Abstract
In the machine learning subfield of natural language processing, a topic model is a type of unsupervised method used to uncover abstract topics within a corpus of text. Dynamic topic modelling (DTM) is used for capturing change in these topics over time. This retrospective study deploys DTM on a corpus of electronic health record psychotherapy notes and examines whether DTM helps distinguish closely matched patients who did and did not die by suicide. The cohort consists of United States Department of Veterans Affairs (VA) patients diagnosed with Posttraumatic Stress Disorder (PTSD) between 2004 and 2013. Each case (a patient who died by suicide during the year following diagnosis) was matched with five controls (patients who remained alive) who shared psychotherapists and had similar suicide risk based on the VA's suicide prediction algorithm. The cohort was restricted to patients who received psychotherapy for 9+ months after initial PTSD diagnosis (cases = 77; controls = 362). For cases, psychotherapy notes from diagnosis until death were examined. For controls, psychotherapy notes from diagnosis until the matched case's death date were examined. A Python-based DTM algorithm was utilized. Derived topics identified population-specific themes, including PTSD, psychotherapy, medication, communication and relationships. Control topics changed significantly more over time than case topics. Topic differences highlighted engagement, expressivity and therapeutic alliance. This study strengthens the groundwork for deriving population-specific, psychosocial and time-sensitive suicide risk variables.
Collapse
Affiliation(s)
- Maxwell Levis
- White River Junction VA Medical Center, Hartford, Vermont, USA
- Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA
- Joshua Levy
- Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA
- Vincent Dufort
- White River Junction VA Medical Center, Hartford, Vermont, USA
- Carey J. Russ
- White River Junction VA Medical Center, Hartford, Vermont, USA
- Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA
- Brian Shiner
- White River Junction VA Medical Center, Hartford, Vermont, USA
- Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA
- National Center for PTSD Executive Division, Hartford, Vermont, USA
11
Levis M, Levy J, Dent KR, Dufort V, Gobbel GT, Watts BV, Shiner B. Leveraging Natural Language Processing to Improve Electronic Health Record Suicide Risk Prediction for Veterans Health Administration Users. J Clin Psychiatry 2023; 84:22m14568. [PMID: 37341477] [PMCID: PMC11157783] [DOI: 10.4088/jcp.22m14568]
Abstract
Background: Suicide risk prediction models frequently rely on structured electronic health record (EHR) data, including patient demographics and health care usage variables. Unstructured EHR data, such as clinical notes, may improve predictive accuracy by allowing access to detailed information that does not exist in structured data fields. To assess the comparative benefits of including unstructured data, we developed a large case-control dataset matched on a state-of-the-art structured EHR suicide risk algorithm, utilized natural language processing (NLP) to derive a clinical note predictive model, and evaluated to what extent this model provided predictive accuracy over and above existing predictive thresholds. Methods: We developed a matched case-control sample of Veterans Health Administration (VHA) patients in 2017 and 2018. Each case (all patients that died by suicide in that interval, n = 4,584) was matched with 5 controls (patients who remained alive during the treatment year) who shared the same suicide risk percentile. All sample EHR notes were selected and abstracted using NLP methods. We applied machine-learning classification algorithms to the NLP output to develop predictive models. We calculated area under the curve (AUC) and suicide risk concentration to evaluate predictive accuracy overall and for high-risk patients. Results: The best performing NLP-derived models provided 19% overall additional predictive accuracy (AUC = 0.69; 95% CI, 0.67, 0.72) and 6-fold additional risk concentration for patients at the highest risk tier (top 0.1%), relative to the structured EHR model. Conclusions: The NLP-supplemented predictive models provided considerable benefit when compared to conventional structured EHR models. Results support future structured and unstructured EHR risk model integrations.
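The risk-concentration metric in the results (fold-enrichment of decedents within a top risk tier) can be sketched with standard-library Python; the scores and labels below are invented for illustration, not study data:

```python
def risk_concentration(scores, labels, top_frac):
    """Fold-enrichment of cases (label 1) among the top `top_frac`
    of risk scores, relative to the base rate in the whole sample."""
    ranked = sorted(zip(scores, labels), reverse=True)
    k = max(1, round(len(ranked) * top_frac))
    top_rate = sum(lab for _, lab in ranked[:k]) / k
    base_rate = sum(labels) / len(labels)
    return top_rate / base_rate

# Toy sample: 1 case per 5 controls, with loosely informative scores.
scores = [0.9, 0.2, 0.1, 0.3, 0.15, 0.05, 0.8, 0.25, 0.1, 0.2, 0.3, 0.1]
labels = [1,   0,   0,   0,   0,    0,    1,   0,    0,   0,   0,   0]
print(risk_concentration(scores, labels, top_frac=1/6))  # 6.0
```

Here both cases land in the top sixth of scores, so the top tier is 6 times as enriched in cases as the sample overall; a value of 1.0 would mean the model concentrates no risk.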
Affiliation(s)
- Maxwell Levis
- VAMC White River Junction, White River Junction, Vermont
- Department of Psychiatry, Geisel School of Medicine, Hanover, New Hampshire
- Corresponding Author: Maxwell Levis, PhD, White River Junction VA Medical Center, 163 Veterans Dr, White River Junction, VT 05009
- Joshua Levy
- Departments of Pathology and Laboratory Medicine, Geisel School of Medicine, Hanover, New Hampshire
- Kallisse R Dent
- VA Serious Mental Illness Treatment Resource and Evaluation Center, Ann Arbor, Michigan
- Vincent Dufort
- VAMC White River Junction, White River Junction, Vermont
- Glenn T Gobbel
- Department of Biomedical Informatics, Nashville, Tennessee
- Bradley V Watts
- VAMC White River Junction, White River Junction, Vermont
- Department of Psychiatry, Geisel School of Medicine, Hanover, New Hampshire
- VA Office of Systems Redesign and Improvement, White River Junction, Vermont
- Brian Shiner
- VAMC White River Junction, White River Junction, Vermont
- Department of Psychiatry, Geisel School of Medicine, Hanover, New Hampshire
- National Center for PTSD, White River Junction, Vermont
12
A review of natural language processing in the identification of suicidal behavior. J Affect Disord Rep 2023. [DOI: 10.1016/j.jadr.2023.100507]
13
The performance of machine learning models in predicting suicidal ideation, attempts, and deaths: A meta-analysis and systematic review. J Psychiatr Res 2022; 155:579-588. [PMID: 36206602] [DOI: 10.1016/j.jpsychires.2022.09.050]
Abstract
Research has posited that machine learning could improve suicide risk prediction models, which have traditionally performed poorly. This systematic review and meta-analysis evaluated the performance of machine learning models in predicting the longitudinal suicide-related outcomes of ideation, attempts, and deaths, and examined outcome, data, and model types as potential covariates of model performance. Studies were extracted from PubMed, Web of Science, Embase, and PsycINFO. A bivariate mixed effects meta-analysis and meta-regression analyses were performed for studies using machine learning to predict future events of suicidal ideation, attempts, and/or deaths. Risk of bias was assessed for each study using an adaptation of the Prediction model Risk Of Bias Assessment Tool. The narrative review included 56 studies, and analyses examined 54 models from 35 studies. The models achieved a very good pooled AUC of 0.86, sensitivity of 0.66 (95% CI [0.60, 0.72]), and specificity of 0.87 (95% CI [0.84, 0.90]). Pooled AUCs for ideation, attempt, and death were similar at 0.88, 0.87, and 0.84, respectively. Model performance was highly varied; however, meta-regressions did not provide evidence that performance varied by outcome, data, or model type. Findings suggest that machine learning has the potential to improve suicide risk detection, with pooled estimates of machine learning performance comparing favourably to the performance of traditional suicide prediction models. However, more studies with lower risk of bias are necessary to improve the application of machine learning in suicidology.
14
Levis M, Levy J, Dufort V, Gobbel GT, Watts BV, Shiner B. Leveraging unstructured electronic medical record notes to derive population-specific suicide risk models. Psychiatry Res 2022; 315:114703. [PMID: 35841702] [DOI: 10.1016/j.psychres.2022.114703]
Abstract
Electronic medical record (EMR)-based suicide risk prediction methods typically rely on analysis of structured variables such as demographics, visit history, and prescription data. Leveraging unstructured EMR notes may improve predictive accuracy by allowing access to nuanced clinical information. We utilized natural language processing (NLP) to analyze a large EMR note corpus to develop a data-driven suicide risk prediction model. We developed a matched case-control sample of U.S. Department of Veterans Affairs (VA) patients in 2015 and 2016. We randomly matched each case (all patients that died by suicide in that interval, n = 5029) with five controls (patients that remained alive). We processed the note corpus using NLP methods and applied machine-learning classification algorithms to the output. We calculated area under the curve (AUC) and risk tiers to determine predictive accuracy. NLP-derived models demonstrated strong predictive accuracy. Patients who scored within the top 10% of the risk model accounted for up to 29% of suicide decedents. The NLP-derived model compares positively to other leading prediction methods. Our approach is highly implementable, only requiring access to text data and open-source software. Additional studies should evaluate ensemble models incorporating NLP-derived information alongside more typical structured variables.
Affiliation(s)
- Maxwell Levis
- VAMC White River Junction, 163 Veterans Dr., White River Junction VT, 05009 United States; Department of Psychiatry, Geisel School of Medicine, 1 Rope Ferry Rd, Hanover NH, 03755 United States.
- Joshua Levy
- Departments of Pathology and Laboratory Medicine, Geisel School of Medicine, 1 Rope Ferry Rd, Hanover NH, 03755 United States
- Vincent Dufort
- VAMC White River Junction, 163 Veterans Dr., White River Junction VT, 05009 United States
- Glenn T Gobbel
- Department of Biomedical Informatics, 2201 West End Ave, Nashville TN, 37235 United States
- Bradley V Watts
- VAMC White River Junction, 163 Veterans Dr., White River Junction VT, 05009 United States; Department of Psychiatry, Geisel School of Medicine, 1 Rope Ferry Rd, Hanover NH, 03755 United States; VA Office of Systems Redesign and Improvement, 215 North Main Street, White River Junction VT, 05009, United States
- Brian Shiner
- VAMC White River Junction, 163 Veterans Dr., White River Junction VT, 05009 United States; Department of Psychiatry, Geisel School of Medicine, 1 Rope Ferry Rd, Hanover NH, 03755 United States; National Center for PTSD, White River Junction, VT, United States
15
Meerwijk EL, Tamang SR, Finlay AK, Ilgen MA, Reeves RM, Harris AHS. Suicide theory-guided natural language processing of clinical progress notes to improve prediction of veteran suicide risk: protocol for a mixed-method study. BMJ Open 2022; 12:e065088. [PMID: 36002210] [PMCID: PMC9413184] [DOI: 10.1136/bmjopen-2022-065088]
Abstract
INTRODUCTION The state-of-the-art 3-step Theory of Suicide (3ST) describes why people consider suicide and who will act on their suicidal thoughts and attempt suicide. The central concepts of the 3ST (psychological pain, hopelessness, connectedness, and capacity for suicide) are among the most important drivers of suicidal behaviour, but they are missing from clinical suicide risk prediction models in use at the US Veterans Health Administration (VHA). These four concepts are not systematically recorded in structured fields of the VHA's electronic healthcare records. Therefore, this study will develop a domain-specific ontology that will enable automated extraction of these concepts from clinical progress notes using natural language processing (NLP), and test whether NLP-based predictors for these concepts improve the accuracy of existing VHA suicide risk prediction models. METHODS AND ANALYSIS Our mixed-method study has an exploratory sequential design where a qualitative component (aim 1) will inform quantitative analyses (aims 2 and 3). For aim 1, subject matter experts will manually annotate progress notes of clinical encounters with veterans who attempted or died by suicide to develop a domain-specific ontology for the 3ST concepts. During aim 2, we will use NLP to machine-annotate clinical progress notes and derive longitudinal representations for each patient with respect to the presence and intensity of hopelessness, psychological pain, connectedness and capacity for suicide in temporal proximity of suicide attempts and deaths by suicide. These longitudinal representations will be evaluated during aim 3 for their ability to improve existing VHA prediction models of suicide and suicide attempts, STORM (Stratification Tool for Opioid Risk Mitigation) and REACHVET (Recovery Engagement and Coordination for Health - Veterans Enhanced Treatment).
ETHICS AND DISSEMINATION Ethics approval for this study was granted by the Stanford University Institutional Review Board and the Research and Development Committee of the VA Palo Alto Health Care System. Results of the study will be disseminated through several outlets, including peer-reviewed publications and presentations at national conferences.
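The aim-2 extraction step can be caricatured with a naive keyword tagger over note text. The seed terms below are invented stand-ins, not the study's ontology, and the approach is deliberately simplistic:

```python
import re

# Hypothetical seed terms for the four 3ST concepts; a real ontology
# would be far richer and clinician-curated.
CONCEPT_TERMS = {
    "psychological_pain": ["unbearable pain", "anguish", "suffering"],
    "hopelessness": ["hopeless", "no future", "nothing will change"],
    "connectedness": ["supportive family", "close friend", "belong"],
    "capacity": ["access to firearms", "prior attempt", "fearless"],
}

def tag_concepts(note):
    """Return the set of 3ST concepts whose seed terms appear in a note."""
    text = note.lower()
    return {concept for concept, terms in CONCEPT_TERMS.items()
            if any(re.search(r"\b" + re.escape(t) + r"\b", text)
                   for t in terms)}

note = ("Patient reports feeling hopeless and describes anguish; "
        "denies access to firearms.")
print(sorted(tag_concepts(note)))  # ['capacity', 'hopelessness', 'psychological_pain']
```

Note the false positive on the negated phrase ("denies access to firearms"): handling negation, intensity, and context is precisely why the protocol proposes a full ontology-driven NLP pipeline rather than keyword matching.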
Affiliation(s)
- Esther Lydia Meerwijk
- VA Health Services Research & Development, Center for Innovation to Implementation, VA Palo Alto Health Care System, Palo Alto, California, USA
- Suzanne R Tamang
- VA Health Services Research & Development, Center for Innovation to Implementation, VA Palo Alto Health Care System, Palo Alto, California, USA
- Department of Biomedical Data Science, Stanford University, Stanford, California, USA
- Andrea K Finlay
- VA Health Services Research & Development, Center for Innovation to Implementation, VA Palo Alto Health Care System, Palo Alto, California, USA
- Schar School of Policy and Government, George Mason University, Arlington, Virginia, USA
- VA National Center on Homelessness Among Veterans, Durham, North Carolina, USA
- Mark A Ilgen
- Department of Psychiatry, University of Michigan, Ann Arbor, Michigan, USA
- VA Health Services Research & Development, Center for Clinical Management Research, VA Ann Arbor Health Care System, Ann Arbor, Michigan, USA
- Ruth M Reeves
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- VA Health Services Research & Development, VA Tennessee Valley Health Care System, Nashville, Tennessee, USA
- Alex H S Harris
- VA Health Services Research & Development, Center for Innovation to Implementation, VA Palo Alto Health Care System, Palo Alto, California, USA
- Stanford-Surgical Policy Improvement Research and Education Center, Stanford University School of Medicine, Stanford, California, USA
16
Machine learning model to predict mental health crises from electronic health records. Nat Med 2022; 28:1240-1248. [PMID: 35577964] [PMCID: PMC9205775] [DOI: 10.1038/s41591-022-01811-5]
Abstract
The timely identification of patients who are at risk of a mental health crisis can lead to improved outcomes and to the mitigation of burdens and costs. However, the high prevalence of mental health problems means that the manual review of complex patient records to make proactive care decisions is not feasible in practice. Therefore, we developed a machine learning model that uses electronic health records to continuously monitor patients for risk of a mental health crisis over a period of 28 days. The model achieves an area under the receiver operating characteristic curve of 0.797 and an area under the precision-recall curve of 0.159, predicting crises with a sensitivity of 58% at a specificity of 85%. A follow-up 6-month prospective study evaluated our algorithm’s use in clinical practice and observed predictions to be clinically valuable in terms of either managing caseloads or mitigating the risk of crisis in 64% of cases. To our knowledge, this study is the first to continuously predict the risk of a wide range of mental health crises and to explore the added value of such predictions in clinical practice. Machine learning applied on electronic health records can predict mental health crises 28 days in advance and become a clinically valuable tool for managing caseloads and mitigating the risk of crisis.
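The quoted operating point (58% sensitivity at 85% specificity) corresponds to one choice of decision threshold on the model's scores. A minimal sketch of that choice, with made-up scores and labels:

```python
import math

def threshold_for_specificity(scores, labels, target_spec):
    """Choose the score cutoff (predict crisis when score >= cutoff)
    so that at least `target_spec` of the negatives fall below it
    (assumes distinct negative scores, for this toy illustration)."""
    neg = sorted(s for s, lab in zip(scores, labels) if lab == 0)
    idx = min(math.ceil(len(neg) * target_spec), len(neg) - 1)
    return neg[idx]

def sensitivity(scores, labels, cutoff):
    """Fraction of positives scoring at or above the cutoff."""
    pos = [s for s, lab in zip(scores, labels) if lab == 1]
    return sum(s >= cutoff for s in pos) / len(pos)

# Hypothetical scores: 10 non-crisis (label 0) and 4 crisis (label 1) patients.
neg_scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95]
pos_scores = [0.85, 0.92, 0.4, 0.97]
scores = neg_scores + pos_scores
labels = [0] * 10 + [1] * 4

cutoff = threshold_for_specificity(scores, labels, target_spec=0.8)
print(cutoff, sensitivity(scores, labels, cutoff))  # 0.9 0.5
```

Sweeping `target_spec` traces out the ROC curve whose area the abstract reports; the 85%-specificity point is simply the threshold the deployment chose to live with.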
17
Abstract
OBJECTIVE Neuropsychiatric disorders are commonly observed in brain tumor patients, and it is difficult to anticipate these disorders across different types of brain tumors. The goal of the study was to see how well machine learning (ML)-based decision algorithms might predict neuropsychiatric problems in different types of brain tumors. METHODS 145 patients of both genders, aged 25-65 years, with histopathologically confirmed primary brain tumors were included for neuropsychiatric assessment. The datasets of brain tumor patients were employed for building the models. Four different ML decision classification trees/models (J48, Random Forest, Random Tree & Hoeffding Tree) with supervised learning were trained, tested, and validated on class-labeled data of brain tumor patients. The models were compared in order to determine the most accurate classifier for predicting neuropsychiatric problems in various brain tumors. The following categorical attributes from the data of brain tumor patients were included as independent variables (predictors): age, gender, depression, dementia, and brain tumor type. With the ML decision tree/model techniques, a multi-target classification was performed with classes of neuropsychiatric diseases predicted from the selected attributes. RESULTS 86 percent of patients were depressed, and 55 percent were suffering from dementia. Anger was the most often reported neuropsychiatric condition in brain tumor patients (92.41%), followed by sleep disorders (83%), apathy (80%), and mood swings (76.55%). When compared to other tumor types, glioblastoma patients had a higher rate of depression (20%) and dementia (20.25%). The Random Forest and Random Tree models were successful, with an accuracy of up to 94% (10-fold cross-validation), for the prediction of neuropsychiatric disorders in brain tumor patients.
The multiclass targets (neuropsychiatric ailments) showed good precision (0.9-1.0), recall (0.9-1.0), F-measure (0.9-1.0), and ROC area (0.9-1.0) across the decision models. CONCLUSION Random Forest trees can be used to accurately predict neuropsychiatric illnesses. Based on the model output, the ML decision trees will aid the physician in pre-diagnosing mental issues and deciding on the best therapeutic approach to avoid subsequent neuropsychiatric issues in brain tumor patients.
Affiliation(s)
- Saman Shahid
- Department of Sciences & Humanities, National University of Computer & Emerging Sciences (NUCES), Foundation for Advancement of Science and Technology (FAST), Lahore, Pakistan
- Sadaf Iftikhar
- Department of Neurology, King Edward Medical University (KEMU), Mayo Hospital, Lahore, Pakistan
18
Lejeune A, Le Glaz A, Perron PA, Sebti J, Baca-Garcia E, Walter M, Lemey C, Berrouiguet S. Artificial intelligence and suicide prevention: a systematic review. Eur Psychiatry 2022; 65:1-22. [PMID: 35166203] [PMCID: PMC8988272] [DOI: 10.1192/j.eurpsy.2022.8]
Abstract
Background Suicide is one of the main preventable causes of death. Artificial intelligence (AI) could improve methods for assessing suicide risk. The objective of this review is to assess the potential of AI in identifying patients who are at risk of attempting suicide. Methods A systematic review of the literature was conducted on the PubMed, EMBASE, and SCOPUS databases, using relevant keywords. Results The search identified 296 studies. Seventeen studies, published between 2014 and 2020 and matching inclusion criteria, were selected as relevant. Included studies aimed at predicting individual suicide risk or identifying at-risk individuals in a specific population. AI performance was good overall, although variable across different algorithms and application settings. Conclusions AI appears to have a high potential for identifying patients at risk of suicide. The precise use of these algorithms in clinical situations, as well as the ethical issues they raise, remain to be clarified.
Affiliation(s)
- Alban Lejeune
- URCI Mental Health Department, Brest Medical University Hospital, Brest, France
- Aziliz Le Glaz
- URCI Mental Health Department, Brest Medical University Hospital, Brest, France
- Johan Sebti
- Mental Health Department, French Polynesia Hospital, Pirae, French Polynesia
- Michel Walter
- URCI Mental Health Department, Brest Medical University Hospital, Brest, France
- EA 7479 SPURBO, Université de Bretagne Occidentale, Brest, France
- Christophe Lemey
- URCI Mental Health Department, Brest Medical University Hospital, Brest, France
- EA 7479 SPURBO, Université de Bretagne Occidentale, Brest, France
- SPURBO, IMT Atlantique, Lab-STICC, UMR CNRS 6285, F-29238, Brest, France
- Sofian Berrouiguet
- URCI Mental Health Department, Brest Medical University Hospital, Brest, France
- LaTIM, INSERM, UMR 1101, Brest, France
19
Predictive structured-unstructured interactions in EHR models: A case study of suicide prediction. NPJ Digit Med 2022; 5:15. [PMID: 35087182] [PMCID: PMC8795240] [DOI: 10.1038/s41746-022-00558-0]
Abstract
Clinical risk prediction models powered by electronic health records (EHRs) are becoming increasingly widespread in clinical practice. With suicide-related mortality rates rising in recent years, it is becoming increasingly urgent to understand, predict, and prevent suicidal behavior. Here, we compare the predictive value of structured and unstructured EHR data for predicting suicide risk. We find that Naive Bayes Classifier (NBC) and Random Forest (RF) models trained on structured EHR data perform better than those based on unstructured EHR data. An NBC model trained on both structured and unstructured data yields similar performance (AUC = 0.743) to an NBC model trained on structured data alone (0.742, p = 0.668), while an RF model trained on both data types yields significantly better results (AUC = 0.903) than an RF model trained on structured data alone (0.887, p < 0.001), likely due to the RF model’s ability to capture interactions between the two data types. To investigate these interactions, we propose and implement a general framework for identifying specific structured-unstructured feature pairs whose interactions differ between case and non-case cohorts, and thus have the potential to improve predictive performance and increase understanding of clinical risk. We find that such feature pairs tend to capture heterogeneous pairs of general concepts, rather than homogeneous pairs of specific concepts. These findings and this framework can be used to improve current and future EHR-based clinical modeling efforts.
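The proposed framework screens structured-unstructured feature pairs whose co-occurrence differs between case and non-case cohorts. A toy contrast on hypothetical features (not the paper's actual statistic or models) illustrates the idea:

```python
def pair_contrast(records, feat_a, feat_b):
    """Difference in co-occurrence rate of a (structured, unstructured)
    feature pair between case and non-case records. Each record is a
    (features: set[str], is_case: bool) tuple."""
    def rate(group):
        hits = [feat_a in feats and feat_b in feats for feats, _ in group]
        return sum(hits) / len(hits) if hits else 0.0
    cases = [r for r in records if r[1]]
    controls = [r for r in records if not r[1]]
    return rate(cases) - rate(controls)

# Hypothetical structured code "dx:depression" paired with note term "note:firearm".
records = [
    ({"dx:depression", "note:firearm"}, True),
    ({"dx:depression", "note:firearm"}, True),
    ({"dx:depression"}, False),
    ({"note:firearm"}, False),
    ({"dx:depression", "note:firearm"}, False),
    ({"dx:anxiety"}, False),
]
print(pair_contrast(records, "dx:depression", "note:firearm"))  # 0.75
```

Pairs with large contrasts are candidates for the heterogeneous interactions the abstract describes; a tree ensemble such as Random Forest can exploit them implicitly, which is consistent with the RF gain reported when both data types are combined.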
20
Boggs JM, Quintana LM, Powers JD, Hochberg S, Beck A. Frequency of Clinicians' Assessments for Access to Lethal Means in Persons at Risk for Suicide. Arch Suicide Res 2022; 26:127-136. [PMID: 32379012] [DOI: 10.1080/13811118.2020.1761917]
Abstract
OBJECTIVE We measured the frequency of clinicians' assessments for access to lethal means, including firearms and medications, in patients at risk of suicide, using electronic medical and mental health records in outpatient and emergency settings. METHODS We included adult patients who reported suicide ideation on the PHQ-9 depression screener in behavioral health and primary care outpatient settings of a large integrated health system in the U.S. and those with suicidal behavior treated in the emergency department. Two separate natural language processing (NLP) queries were developed on medical record text documentation: (1) assessment for access to firearms (8,994 patients), and (2) assessment for access to medications (4,939 patients). RESULTS Only 35% of patients had documentation of a firearm or medication assessment in the month following treatment for suicidal behavior in the emergency setting. Among those reporting suicidal ideation in the outpatient setting, 31% had documentation of a firearm assessment and 23% of a medication assessment. The accuracy of the estimates was very good for firearm assessment (F1 = 89%) and medication assessment in the outpatient setting (F1 = 91%) and fair for medication assessment in the emergency setting (F1 = 70%), due to more varied documentation styles. CONCLUSIONS Lethal means assessment following report of suicidal ideation or behavior is low in a nonacademic health care setting. Until health systems implement more structured documentation to measure lethal means assessment, such as a discrete data field, NLP methods may be used to conduct research and surveillance of this important prevention practice in real-world settings.
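The F1 values quoted for query validation combine precision and recall against manual chart review. A minimal helper, with illustrative counts (not the study's validation data):

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall from validation counts:
    tp = query hits confirmed by chart review, fp = spurious hits,
    fn = documented assessments the query missed."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# E.g. a query with 89 confirmed hits, 10 false hits, and 12 misses:
print(round(f1_score(tp=89, fp=10, fn=12), 2))  # 0.89
```

Because F1 penalizes both spurious hits and misses, it is a reasonable single number for comparing NLP queries whose documentation styles vary across care settings, as the emergency-setting result here shows.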
21
Bright RA, Rankin SK, Dowdy K, Blok SV, Bright SJ, Palmer LAM. Finding Potential Adverse Events in the Unstructured Text of Electronic Health Care Records: Development of the Shakespeare Method. JMIRx Med 2021; 2:e27017. [PMID: 37725533] [PMCID: PMC10414364] [DOI: 10.2196/27017]
Abstract
BACKGROUND Big data tools provide opportunities to monitor adverse events (AEs; patient harm associated with medical care) in the unstructured text of electronic health care records (EHRs). Writers may explicitly state an apparent association between treatment and adverse outcome ("attributed") or state the simple treatment and outcome without an association ("unattributed"). Many methods for finding AEs in text rely on predefining possible AEs before searching for prespecified words and phrases, or on manual labeling (standardization) by investigators. We developed a method to identify possible AEs, even if unknown or unattributed, without any prespecification or standardization of notes. Our method was inspired by the word-frequency analysis methods used to uncover the true authorship of disputed works credited to William Shakespeare. We chose two use cases, "transfusion" and "time-based." Transfusion was chosen because new transfusion AE types were becoming recognized during the study data period; therefore, we anticipated an opportunity to find unattributed potential AEs (PAEs) in the notes. With the time-based case, we wanted to simulate near real-time surveillance. We chose time periods in the hope of detecting PAEs due to contaminated heparin from mid-2007 to mid-2008 that were announced in early 2008. We hypothesized that the prevalence of contaminated heparin may have been widespread enough to manifest in EHRs through symptoms related to heparin AEs, independent of clinicians' documentation of attributed AEs. OBJECTIVE We aimed to develop a new method to identify attributed and unattributed PAEs using the unstructured text of EHRs. METHODS We used EHRs for adult critical care admissions at a major teaching hospital (2001-2012). For each case, we formed a group of interest and a comparison group. We concatenated the text notes for each admission into one document sorted by date, and deleted replicate sentences and lists.
We identified statistically significant words in the group of interest versus the comparison group. Documents in the group of interest were filtered to those words, followed by topic modeling on the filtered documents to produce topics. For each topic, the three documents with the maximum topic scores were manually reviewed to identify PAEs. RESULTS Topics centered around medical conditions that were unique to or more common in the group of interest, including PAEs. In each use case, most PAEs were unattributed in the notes. Among the transfusion PAEs was unattributed evidence of transfusion-associated cardiac overload and transfusion-related acute lung injury. Some of the PAEs from mid-2007 to mid-2008 were increased unattributed events consistent with AEs related to heparin contamination. CONCLUSIONS The Shakespeare method could be a useful supplement to AE reporting and surveillance of structured EHR data. Future improvements should include automation of the manual review process.
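The word-frequency contrast at the core of the Shakespeare method can be sketched with smoothed log-odds between the group of interest and the comparison group. The documents below are toy examples, and the authors' actual significance testing and topic-modeling pipeline is far richer:

```python
import math
from collections import Counter

def distinctive_words(group_docs, comparison_docs, top_n=3):
    """Rank words by smoothed log-odds of appearing in the group of
    interest versus the comparison group (add-one smoothing)."""
    g = Counter(w for doc in group_docs for w in doc.lower().split())
    c = Counter(w for doc in comparison_docs for w in doc.lower().split())
    vocab = set(g) | set(c)
    g_total, c_total = sum(g.values()), sum(c.values())

    def log_odds(w):
        pg = (g[w] + 1) / (g_total + len(vocab))
        pc = (c[w] + 1) / (c_total + len(vocab))
        return math.log(pg / pc)

    return sorted(vocab, key=log_odds, reverse=True)[:top_n]

group = ["transfusion reaction dyspnea", "dyspnea after transfusion"]
comparison = ["routine recovery", "stable overnight recovery"]
print(distinctive_words(group, comparison, top_n=2))
```

Words that survive this filter are what the method then feeds to topic modeling, so that reviewers inspect topics built only from group-distinctive vocabulary.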
Affiliation(s)
- Roselie A Bright
- US Food and Drug Administration, Silver Spring, MD, United States
- Susan J Bright
- US Food and Drug Administration, Rockville, MD, United States
22
Levis M, Westgate CL, Gui J, Watts BV, Shiner B. Natural language processing of clinical mental health notes may add predictive value to existing suicide risk models. Psychol Med 2021; 51:1382-1391. [PMID: 32063248] [PMCID: PMC8920410] [DOI: 10.1017/s0033291720000173]
Abstract
BACKGROUND This study evaluated whether natural language processing (NLP) of psychotherapy note text provides additional accuracy over and above currently used suicide prediction models. METHODS We used a cohort of Veterans Health Administration (VHA) users diagnosed with post-traumatic stress disorder (PTSD) between 2004 and 2013. Using a case-control design, cases (those who died by suicide during the year following diagnosis) were matched to controls (those who remained alive). After selecting conditional matches based on having shared mental health providers, we chose controls using a 5:1 nearest-neighbor propensity match based on the VHA's structured Electronic Medical Records (EMR)-based suicide prediction model. For cases, psychotherapist notes were collected from diagnosis until death. For controls, psychotherapist notes were collected from diagnosis until the matched case's date of death. After ensuring similar numbers of notes, the final sample included 246 cases and 986 controls. Notes were analyzed using the Sentiment Analysis and Cognition Engine, a Python-based NLP package. The output was evaluated using machine-learning algorithms. The area under the curve (AUC) was calculated to determine the models' predictive accuracy. RESULTS NLP-derived variables offered small but significant predictive improvement (AUC = 0.58) for patients who had longer treatment duration. A small sample size limited predictive accuracy. CONCLUSIONS The study identifies a novel method for measuring suicide risk over time and potentially categorizing patient subgroups with distinct risk sensitivities. Findings suggest leveraging NLP-derived variables from psychotherapy notes offers additional predictive value over and above the VHA's state-of-the-art structured EMR-based suicide prediction model. Replication with a larger non-PTSD-specific sample is required.
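The AUC figures reported throughout these entries equal the Mann-Whitney probability that a randomly chosen case outscores a randomly chosen control, which can be computed directly from scores. Toy scores below, not study data:

```python
def auc(scores, labels):
    """Probability a random case (label 1) outscores a random control
    (label 0), with ties counted as half: the Mann-Whitney form of AUC."""
    pos = [s for s, lab in zip(scores, labels) if lab == 1]
    neg = [s for s, lab in zip(scores, labels) if lab == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.7, 0.7, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0]
print(auc(scores, labels))  # 7.5 / 9, about 0.833
```

An AUC of 0.5 is chance-level ranking; the modest 0.58 reported here means NLP variables alone rank cases only slightly above controls, which is why the paper frames them as a supplement to the structured model rather than a replacement.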
Affiliation(s)
- Maxwell Levis
- White River Junction VA Medical Center, White River Junction, VT, USA
- Geisel School of Medicine at Dartmouth, Hanover, NH, USA
| | | | - Jiang Gui
- Geisel School of Medicine at Dartmouth, Hanover, NH, USA
| | - Bradley V. Watts
- Geisel School of Medicine at Dartmouth, Hanover, NH, USA
- VA Office of Systems Redesign and Improvement, White River Junction, VT, USA
| | - Brian Shiner
- White River Junction VA Medical Center, White River Junction, VT, USA
- Geisel School of Medicine at Dartmouth, Hanover, NH, USA
- VA Office of Systems Redesign and Improvement, White River Junction, VT, USA
- National Center for PTSD Executive Division, White River Junction, VT, USA
23
Gaur M, Aribandi V, Alambo A, Kursuncu U, Thirunarayan K, Beich J, Pathak J, Sheth A. Characterization of time-variant and time-invariant assessment of suicidality on Reddit using C-SSRS. PLoS One 2021; 16:e0250448. PMID: 33999927; PMCID: PMC8128252; DOI: 10.1371/journal.pone.0250448.
Abstract
Suicide is the 10th leading cause of death in the U.S. (1999-2019). However, predicting when someone will attempt suicide has been nearly impossible. In the modern world, many individuals suffering from mental illness seek emotional support and advice on well-known and easily accessible social media platforms such as Reddit. While prior artificial intelligence research has demonstrated the ability to extract valuable information from social media on suicidal thoughts and behaviors, these efforts have not considered both the severity and the temporality of risk. The insights made possible by access to such data have enormous clinical potential, most dramatically envisioned as a trigger to employ timely and targeted interventions (i.e., voluntary and involuntary psychiatric hospitalization) to save lives. In this work, we address this knowledge gap by developing deep learning algorithms to assess suicide risk in terms of severity and temporality from Reddit data based on the Columbia Suicide Severity Rating Scale (C-SSRS). In particular, we employ two deep learning approaches, time-variant and time-invariant modeling, for user-level suicide risk assessment, and evaluate their performance against a clinician-adjudicated gold-standard Reddit corpus annotated based on the C-SSRS. Our results suggest that the time-variant approach outperforms the time-invariant method in assessing suicide-related ideations and supportive behaviors (AUC: 0.78), while the time-invariant model performed better in predicting suicide-related behaviors and suicide attempts (AUC: 0.64). The proposed approach can be integrated with clinical diagnostic interviews to improve suicide risk assessment.
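The core distinction here is that time-invariant modeling pools a user's posts into one order-agnostic representation, while time-variant modeling preserves the posting trajectory. A minimal sketch of that contrast, using hypothetical per-post ordinal severity ratings (stand-ins for C-SSRS-style labels), not the paper's deep learning models:

```python
def time_invariant_features(post_scores):
    """Order-agnostic summary: pool all of a user's posts."""
    return (sum(post_scores) / len(post_scores), max(post_scores))

def time_variant_features(post_scores):
    """Order-aware summary: net change from first to last post,
    and the average step-to-step trend."""
    deltas = [b - a for a, b in zip(post_scores, post_scores[1:])]
    return (post_scores[-1] - post_scores[0], sum(deltas) / len(deltas))

# Two users with identical pooled summaries but opposite trajectories.
escalating = [1, 3, 5, 7]
improving  = [7, 5, 3, 1]
```

The pooled features cannot tell these users apart, while the trajectory features can; that ordering signal is what a time-variant model is able to exploit.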
Affiliation(s)
- Manas Gaur: Artificial Intelligence Institute, University of South Carolina, Columbia, SC, USA
- Vamsi Aribandi: Kno.e.sis Center, Wright State University, Dayton, OH, USA
- Amanuel Alambo: Kno.e.sis Center, Wright State University, Dayton, OH, USA
- Ugur Kursuncu: Artificial Intelligence Institute, University of South Carolina, Columbia, SC, USA
- Jonathan Beich: Department of Psychiatry, Wright State University, Dayton, OH, USA
- Jyotishman Pathak: Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
- Amit Sheth: Kno.e.sis Center, Wright State University, Dayton, OH, USA
24
Fichter MM, Quadflieg N. How precisely can psychotherapists predict the long-term outcome of anorexia nervosa and bulimia nervosa at the end of inpatient treatment? Int J Eat Disord 2021; 54:535-544. PMID: 33320351; DOI: 10.1002/eat.23443.
Abstract
OBJECTIVE To assess the ability of psychotherapists to predict the future outcome of inpatients with anorexia nervosa (AN) and bulimia nervosa (BN). METHOD Psychotherapists rated the prognosis of each patient's eating disorder on several dimensions on a five-point Likert scale at the end of inpatient treatment. Actual outcome was assessed about 10 years after treatment. The sample comprised 1,065 patients treated for AN and 1,192 patients treated for BN. RESULTS Psychotherapists' ratings of their patients' prognosis were no better than chance for good outcome in AN and BN and for poor outcome in BN. Prediction of poor outcome in AN was somewhat better, with approximately two-thirds of predictions correct. In logistic regression analysis, psychotherapists' ratings of the patients' prognosis for AN contributed to the explained variance of long-term outcome, increasing the variance explained from 7% (by conventional predictors) to 8% after including psychotherapists' prognosis. In BN, psychotherapists' prognosis made no significant contribution to overall prediction. DISCUSSION Our current knowledge of risk and protective factors for the course of eating disorders is unsatisfactory. More specialized research is urgently needed.
Affiliation(s)
- Manfred M Fichter: Ludwig-Maximilians-University (LMU), Munich, Department of Psychiatry and Psychotherapy, Munich, Germany; Schoen Klinik Roseneck, affiliated with the Medical Faculty of the University of Munich (LMU), Prien, Germany
- Norbert Quadflieg: Ludwig-Maximilians-University (LMU), Munich, Department of Psychiatry and Psychotherapy, Munich, Germany
25
Kumar P, Nestsiarovich A, Nelson SJ, Kerner B, Perkins DJ, Lambert CG. Imputation and characterization of uncoded self-harm in major mental illness using machine learning. J Am Med Inform Assoc 2021; 27:136-146. PMID: 31651956; PMCID: PMC7647246; DOI: 10.1093/jamia/ocz173.
Abstract
Objective We aimed to impute uncoded self-harm in administrative claims data of individuals with major mental illness (MMI), characterize self-harm incidence, and identify factors associated with coding bias. Materials and Methods The IBM MarketScan database (2003-2016) was used to analyze visit-level self-harm in 10 120 030 patients with ≥2 MMI codes. Five machine learning (ML) classifiers were tested on a balanced data subset, with XGBoost selected for the full dataset. Classification performance was validated via random data mislabeling and comparison with a clinician-derived "gold standard." The incidence of coded and imputed self-harm was characterized by year, patient age, sex, U.S. state, and MMI diagnosis. Results Imputation identified 1 592 703 self-harm events vs 83 113 coded events, with areas under the curve >0.99 for both the balanced and full datasets and 83.5% agreement with the gold standard. The overall coded and imputed self-harm incidences were 0.28% and 5.34%, respectively; incidence varied considerably by age and sex and was highest in individuals with multiple MMI diagnoses. Self-harm undercoding was higher in male than in female individuals and increased with age. Substance abuse, injuries, poisoning, asphyxiation, brain disorders, harmful thoughts, and psychotherapy were the main features used by ML to classify visits. Discussion Only 1 of 19 self-harm events was coded for individuals with MMI. ML demonstrated excellent performance in recovering self-harm visits. Male individuals and seniors with MMI are particularly vulnerable to self-harm undercoding and may be at risk of not receiving appropriate psychiatric care. Conclusions ML can effectively recover unrecorded self-harm in claims data and inform psychiatric epidemiological and observational studies.
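The headline undercoding ratio ("only 1 of 19 self-harm events was coded") follows directly from the figures reported in the abstract, whether computed from event counts or from incidence rates:

```python
# Figures reported in the abstract.
coded_events, imputed_events = 83_113, 1_592_703
coded_incidence, imputed_incidence = 0.28, 5.34  # percent of visits

events_ratio = imputed_events / coded_events          # about 19 imputed per coded event
incidence_ratio = imputed_incidence / coded_incidence # same ratio via incidence rates
```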
Affiliation(s)
- Praveen Kumar: Center for Global Health, Department of Internal Medicine, University of New Mexico Health Sciences Center, Albuquerque, New Mexico, USA; Department of Computer Science, University of New Mexico, Albuquerque, New Mexico, USA
- Anastasiya Nestsiarovich: Center for Global Health, Department of Internal Medicine, University of New Mexico Health Sciences Center, Albuquerque, New Mexico, USA
- Stuart J Nelson: Biomedical Informatics Center, Department of Clinical Research & Leadership, George Washington University, Washington, DC, USA
- Berit Kerner: Semel Institute for Neuroscience and Human Behavior, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California, USA
- Douglas J Perkins: Center for Global Health, Department of Internal Medicine, University of New Mexico Health Sciences Center, Albuquerque, New Mexico, USA
- Christophe G Lambert: Center for Global Health, Department of Internal Medicine, University of New Mexico Health Sciences Center, Albuquerque, New Mexico, USA; Department of Computer Science, University of New Mexico, Albuquerque, New Mexico, USA; Translational Informatics Division, Department of Internal Medicine, University of New Mexico Health Sciences Center, Albuquerque, New Mexico, USA
26
Friedrich O, Seifert J, Schleidgen S. [AI-Based Self-Tracking of the Mind: Philosophical-Ethical Implications]. Psychiatr Prax 2021; 48:S42-S47. PMID: 33652487; DOI: 10.1055/a-1364-5068.
Abstract
OBJECTIVE AI-based applications are increasingly being developed to support users in digitally recording, managing, and changing their emotions, beliefs, and behavior patterns. Such forms of self-tracking in the mental sphere are accompanied by a variety of medical benefits in diagnostics, prevention, and therapy. This article examines which philosophical and ethical implications must be taken into account when weighing these benefits. METHODS First, some AI-based applications for self-tracking of mental characteristics and processes are outlined. Subsequently, relevant philosophical and ethical implications are presented. RESULTS The following aspects prove to be normatively relevant: improvement versus reduction of self-determination; improvement of self-knowledge versus alienation; positive versus negative aspects of self-responsible health care; epistemic challenges of AI applications; and difficulties of conceptual and normative definition within the applications.
Affiliation(s)
- Orsolya Friedrich: Institut für Philosophie, Juniorprofessur für Medizinethik, FernUniversität Hagen
- Johanna Seifert: Institut für Philosophie, Juniorprofessur für Medizinethik, FernUniversität Hagen
- Sebastian Schleidgen: Institut für Philosophie, Juniorprofessur für Medizinethik, FernUniversität Hagen
27
Stapelberg NJ, Randall M, Sveticic J, Fugelli P, Dave H, Turner K. Data mining of hospital suicidal and self-harm presentation records using a tailored evolutionary algorithm. Machine Learning with Applications 2021. DOI: 10.1016/j.mlwa.2020.100012.
28
Cusick M, Adekkanattu P, Campion TR, Sholle ET, Myers A, Banerjee S, Alexopoulos G, Wang Y, Pathak J. Using weak supervision and deep learning to classify clinical notes for identification of current suicidal ideation. J Psychiatr Res 2021; 136:95-102. PMID: 33581461; DOI: 10.1016/j.jpsychires.2021.01.052.
Abstract
Mental health concerns, such as suicidal thoughts, are frequently documented by providers in clinical notes rather than in structured coded data. In this study, we evaluated weakly supervised methods for detecting "current" suicidal ideation in unstructured clinical notes from electronic health record (EHR) systems. Weakly supervised machine learning methods leverage imperfect labels for training, alleviating the burden of creating a large manually annotated dataset. After identifying a cohort of 600 patients at risk for suicidal ideation, we used a rule-based natural language processing (NLP) approach to label the training and validation notes (n = 17,978). Using this large corpus of clinical notes, we trained several statistical machine learning models (a logistic classifier, support vector machines (SVM), and a naive Bayes classifier) and one deep learning model, a text-classification convolutional neural network (CNN), evaluated on a manually reviewed test set (n = 837). The CNN model outperformed all other methods, achieving an overall accuracy of 94% and an F1-score of 0.82 on documents with "current" suicidal ideation. This algorithm correctly identified an additional 42 encounters and 9 patients indicative of suicidal ideation but missing a structured diagnosis code. When applied to a random subset of 5,000 clinical notes, the algorithm classified 0.46% (n = 23) as indicating "current" suicidal ideation, of which 87% were truly indicative on manual review. Implementation of this approach for large-scale document screening may play an important role in point-of-care clinical information systems for targeted suicide prevention interventions and improve research on the pathways from ideation to attempt.
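Weak supervision replaces hand-labeling with imperfect programmatic labels that are good enough to train a downstream classifier. A minimal sketch of a rule-based labeler of the kind described; the regular expressions and phrases here are illustrative assumptions, not the study's actual rules:

```python
import re

# Hypothetical labeling rules: check negated mentions first so that
# "denies suicidal ideation" is not counted as a positive.
DENIAL = re.compile(r"\b(denies|denied|no)\b[^.]*\bsuicidal ideation\b", re.I)
PRESENT = re.compile(r"\bsuicidal ideation\b|\bthoughts of (ending|killing)\b", re.I)

def weak_label(note):
    """Return a noisy 0/1 label for 'current suicidal ideation'."""
    if DENIAL.search(note):
        return 0
    return 1 if PRESENT.search(note) else 0

notes = [
    "Patient reports suicidal ideation with a plan.",
    "Patient denies suicidal ideation at this time.",
    "Follow-up for medication management.",
]
labels = [weak_label(n) for n in notes]
```

Labels produced this way are noisy by design; the point of the weakly supervised setup is that a model trained on many such labels can outperform the rules that generated them.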
Affiliation(s)
- Marika Cusick: Department of Information and Technology Services, Weill Cornell Medicine, New York, USA; Department of Population Health Sciences, Weill Cornell Medicine, New York, USA
- Prakash Adekkanattu: Department of Information and Technology Services, Weill Cornell Medicine, New York, USA
- Thomas R Campion: Department of Information and Technology Services, Weill Cornell Medicine, New York, USA; Department of Population Health Sciences, Weill Cornell Medicine, New York, USA
- Evan T Sholle: Department of Information and Technology Services, Weill Cornell Medicine, New York, USA
- Annie Myers: Department of Population Health Sciences, Weill Cornell Medicine, New York, USA
- Samprit Banerjee: Department of Population Health Sciences, Weill Cornell Medicine, New York, USA
- Yanshan Wang: Division of Digital Health Sciences, Mayo Clinic, MN, USA; Department of Health Sciences Research, Mayo Clinic, MN, USA
- Jyotishman Pathak: Department of Population Health Sciences, Weill Cornell Medicine, New York, USA; Department of Psychiatry, Weill Cornell Medicine, New York, USA
29
Tsui FR, Shi L, Ruiz V, Ryan ND, Biernesser C, Iyengar S, Walsh CG, Brent DA. Natural language processing and machine learning of electronic health records for prediction of first-time suicide attempts. JAMIA Open 2021; 4:ooab011. PMID: 33758800; PMCID: PMC7966858; DOI: 10.1093/jamiaopen/ooab011.
Abstract
OBJECTIVE Limited research exists on predicting first-time suicide attempts, which account for two-thirds of suicide decedents. We aimed to predict first-time suicide attempts using a large data-driven approach that applies natural language processing (NLP) and machine learning (ML) to unstructured (narrative) clinical notes and structured electronic health record (EHR) data. METHODS This case-control study included patients aged 10-75 years seen between 2007 and 2016 in emergency departments and inpatient units. Cases were first-time suicide attempts identified from coded diagnoses; controls were randomly selected patients without suicide attempts, regardless of demographics, at a ratio of nine controls per case. Four data-driven ML models were evaluated using 2 years of historical EHR data prior to the suicide attempt or control index visit, with prediction windows from 7 to 730 days. Patients without any historical notes were excluded. Model accuracy and robustness were evaluated on a blind dataset (30% of the cohort). RESULTS The study cohort included 45 238 patients (5099 cases, 40 139 controls) comprising 54 651 variables from 5.7 million structured records and 798 665 notes. Using both unstructured and structured data resulted in significantly greater accuracy than structured data alone (area under the curve [AUC]: 0.932 vs. 0.901; P < .001). The best-performing model used 1726 variables, with AUC = 0.932 (95% CI, 0.922-0.941). The model was robust across prediction windows and across subgroups defined by demographics, most recent point of clinical contact, and depression diagnosis history. CONCLUSIONS Our large data-driven approach using both structured and unstructured EHR data demonstrated accurate and robust prediction of first-time suicide attempts and has the potential to be deployed across various populations and clinical settings.
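The AUC comparison at the heart of this result (0.932 combined vs. 0.901 structured-only) can be computed directly from ranks: AUC is the probability that a randomly chosen case outscores a randomly chosen control. A self-contained sketch with invented toy scores; only the metric itself is standard:

```python
def auc(labels, scores):
    """Rank-based AUC: fraction of (positive, negative) pairs where the
    positive outscores the negative; ties count half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy cohort: risk scores from a structured-only model vs. a model that
# also uses note-derived NLP features (all values invented).
y          = [1, 1, 1, 0, 0, 0, 0, 0]
structured = [0.9, 0.4, 0.35, 0.8, 0.3, 0.2, 0.5, 0.1]
combined   = [0.9, 0.7, 0.6, 0.5, 0.3, 0.2, 0.4, 0.1]
```

Here the combined scores rank every case above every control (AUC = 1.0), while the structured-only scores do not, mirroring the direction of the study's comparison.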
Affiliation(s)
- Fuchiang R Tsui: Tsui Laboratory, Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania, USA; Department of Anesthesiology and Critical Care Medicine, Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania, USA; Department of Biomedical and Health Informatics, Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania, USA; Department of Anesthesiology and Critical Care, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Lingyun Shi: Tsui Laboratory, Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania, USA; Department of Biomedical and Health Informatics, Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania, USA
- Victor Ruiz: Tsui Laboratory, Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania, USA; Department of Biomedical and Health Informatics, Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania, USA
- Neal D Ryan: Department of Psychiatry, School of Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Candice Biernesser: Department of Psychiatry, School of Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Satish Iyengar: Department of Statistics, School of Arts and Sciences, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Colin G Walsh: Department of Biomedical Informatics, School of Medicine, Vanderbilt University, Nashville, Tennessee, USA
- David A Brent: Department of Psychiatry, School of Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
30
Su C, Aseltine R, Doshi R, Chen K, Rogers SC, Wang F. Machine learning for suicide risk prediction in children and adolescents with electronic health records. Transl Psychiatry 2020; 10:413. PMID: 33243979; PMCID: PMC7693189; DOI: 10.1038/s41398-020-01100-0.
Abstract
Accurate prediction of suicide risk among children and adolescents within an actionable time frame is an important but challenging task. Very few studies have comprehensively considered the available clinical risk factors to produce quantifiable risk scores for estimating short- and long-term suicide risk in the pediatric population. In this paper, we built machine learning models for predicting suicidal behavior among children and adolescents based on their longitudinal clinical records, and for determining short- and long-term risk factors. This retrospective study used deidentified structured electronic health records (EHR) from the Connecticut Children's Medical Center covering the period from 1 October 2011 to 30 September 2016. Clinical records of 41,721 young patients (10-18 years old) were included for analysis. Candidate predictors included demographics, diagnoses, laboratory tests, and medications. Prediction windows ranging from 0 to 365 days were adopted. For each prediction window, candidate predictors were first screened by univariate statistical tests, and a predictive model was then built via a sequential forward feature selection procedure. We grouped the selected predictors and estimated their contributions to risk prediction at different prediction window lengths. The developed models predicted suicidal behavior across all prediction windows, with AUCs varying from 0.81 to 0.86. For all prediction windows, the models detected 53-62% of suicide-positive subjects at 90% specificity. The models performed better with shorter prediction windows, and predictor importance varied across windows, distinguishing short- and long-term risks. Our findings demonstrate that routinely collected EHRs can be used to create accurate predictive models for suicide risk among children and adolescents.
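Sequential forward feature selection, as used above, is a greedy loop: starting from an empty set, repeatedly add whichever candidate feature most improves a validation score, and stop when no candidate helps. A minimal sketch with a made-up scoring function standing in for the study's AUC estimates; the feature names and values are purely illustrative:

```python
def forward_select(features, score_fn, max_features=None):
    """Greedy sequential forward selection over `features` using the
    pluggable `score_fn(subset)`; stops when no candidate improves."""
    selected, best = [], score_fn([])
    candidates = list(features)
    while candidates and (max_features is None or len(selected) < max_features):
        top_score, top_f = max((score_fn(selected + [f]), f) for f in candidates)
        if top_score <= best:
            break  # no remaining feature improves the score
        selected.append(top_f)
        candidates.remove(top_f)
        best = top_score
    return selected, best

# Toy scorer: pretend additive AUC contributions, with one
# uninformative feature ("age") that should never be selected.
VALUE = {"prior_attempt": 0.10, "depression_dx": 0.06,
         "med_count": 0.03, "age": 0.0}
def toy_auc(subset):
    return 0.5 + sum(VALUE[f] for f in set(subset))

chosen, score = forward_select(VALUE, toy_auc)
```

In a real pipeline `score_fn` would refit a model and evaluate it on held-out data for each candidate subset, which is why forward selection is run separately per prediction window.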
Affiliation(s)
- Chang Su: Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
- Robert Aseltine: Division of Behavioral Sciences and Community Health, UConn Health, Farmington, CT, USA; Center for Population Health, UConn Health, Farmington, CT, USA
- Riddhi Doshi: Division of Behavioral Sciences and Community Health, UConn Health, Farmington, CT, USA; Center for Population Health, UConn Health, Farmington, CT, USA
- Kun Chen: Center for Population Health, UConn Health, Farmington, CT, USA; Department of Statistics, University of Connecticut, Storrs, CT, USA
- Steven C Rogers: Center for Population Health, UConn Health, Farmington, CT, USA; Connecticut Children's Medical Center, Hartford, CT, USA
- Fei Wang: Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
31
Pestian J, Santel D, Sorter M, Bayram U, Connolly B, Glauser T, DelBello M, Tamang S, Cohen K. A Machine Learning Approach to Identifying Changes in Suicidal Language. Suicide Life Threat Behav 2020; 50:939-947. PMID: 32484597; DOI: 10.1111/sltb.12642.
Abstract
OBJECTIVE With early identification and intervention, many suicidal deaths are preventable. Tools that include machine learning methods have been able to identify suicidal language. This paper examines the persistence of this suicidal language up to 30 days after discharge from care. METHOD In a multi-center study, 253 subjects were enrolled into either suicidal or control cohorts. Their responses to standardized instruments and interviews were analyzed using machine learning algorithms. Subjects were re-interviewed approximately 30 days later, and their language was compared to the original language to determine the presence of suicidal ideation. RESULTS Language characteristics used to classify suicidality at the initial encounter were still present in speech 30 days later (AUC = 89%; 95% CI: 85-95%; p < .0001), and algorithms trained on the second interviews could also identify the subjects who produced the first interviews (AUC = 85%; 95% CI: 81-90%; p < .0001). CONCLUSIONS This approach explores the stability of suicidal language. Using advanced computational methods, the results show that a patient's language remains similar 30 days after it is first captured, while responses to standard measures change. This can be useful when developing methods that identify the data-based phenotype of a subject.
Affiliation(s)
- John Pestian: Department of Pediatrics, Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Daniel Santel: Department of Pediatrics, Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Michael Sorter: Department of Pediatrics, Division of Psychiatry, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Ulya Bayram: Department of Pediatrics, Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Electrical Engineering and Computer Science, University of Cincinnati, Cincinnati, OH, USA
- Brian Connolly: Department of Pediatrics, Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Tracy Glauser: Department of Pediatrics, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Melissa DelBello: Department of Psychiatry & Behavioral Neuroscience, University of Cincinnati, Cincinnati, OH, USA
- Suzanne Tamang: Department of Biomedical Data Science, Center for Population Health Sciences, Stanford University, Stanford, CA, USA
- Kevin Cohen: Computational Bioscience Program, University of Colorado School of Medicine, Denver, CO, USA
32
Bruen AJ, Wall A, Haines-Delmont A, Perkins E. Exploring Suicidal Ideation Using an Innovative Mobile App-Strength Within Me: The Usability and Acceptability of Setting up a Trial Involving Mobile Technology and Mental Health Service Users. JMIR Ment Health 2020; 7:e18407. PMID: 32985995; PMCID: PMC7551108; DOI: 10.2196/18407.
Abstract
BACKGROUND Suicide is a growing global public health problem that has increased the demand for psychological services to address mental health issues. It is expected that 1 in 6 people on a waiting list for mental health services will attempt suicide. Although suicidal ideation has been shown to be linked to a higher risk of death by suicide, not everybody openly discloses their suicidal thoughts or plans to friends and family or seeks professional help before suicide. Therefore, new methods are needed to track suicide risk in real time, together with a better understanding of the ways in which people communicate or express their suicidality. Considering the dynamic nature of suicidal ideation and the challenges in understanding suicide risk, mobile apps may be well suited to suicide prevention because they can collect real-time data. OBJECTIVE This study aims to report the practicalities and acceptability of setting up and trialing digital technologies within an inpatient mental health setting in the United Kingdom and to highlight their implications for future studies. METHODS Service users were recruited from 6 inpatient wards in the north west of England. Service users who were eligible to participate and provided consent were given an iPhone and a Fitbit for 7 days and were asked to interact with a novel phone app, Strength Within Me (SWiM). Interaction with the app involved journaling (recording daily activities, how these made them feel, and rating their mood) and the option to create safety plans for emotions causing difficulties (identifying strategies that helped with these emotions). Participants also had the option to allow the study to access their personal Facebook account to monitor their social media use and activity. In addition, clinical data (ie, assessments conducted by trained researchers targeting suicidality, depression, and sleep) were also collected.
RESULTS Overall, 43.0% (80/186) of eligible participants were recruited to the study. Of the total sample, 67 participants engaged in journaling, with an average of 8.2 entries per user (SD 8.7). Only 24 participants created safety plans, and the most commonly selected difficult emotion was feeling sad (n=21). This study reports on engagement with the SWiM app, the technical difficulties the research team faced, the importance of building key relationships, and the implications of using Facebook as a source for detecting suicidality. CONCLUSIONS To develop interventions that can be delivered in a timely manner, prediction of suicidality must be given priority. This paper has raised important issues and highlighted lessons learned from implementing a novel mobile app to detect the risk of suicidality among service users in an inpatient setting.
Affiliation(s)
- Ashley Jane Bruen: Department of Primary Care and Mental Health, University of Liverpool, Liverpool, United Kingdom
- Abbie Wall: Department of Primary Care and Mental Health, University of Liverpool, Liverpool, United Kingdom
- Alina Haines-Delmont: Department of Nursing, Faculty of Health, Psychology and Social Care, Manchester Metropolitan University, Manchester, United Kingdom
- Elizabeth Perkins: Department of Primary Care and Mental Health, University of Liverpool, Liverpool, United Kingdom
33
Identifying risk factors for mortality among patients previously hospitalized for a suicide attempt. Sci Rep 2020; 10:15223. PMID: 32938955; PMCID: PMC7495431; DOI: 10.1038/s41598-020-71320-3.
Abstract
Age-adjusted suicide rates in the US have increased over the past two decades across all age groups. The ability to identify risk factors for suicidal behavior is critical to selective and indicated prevention efforts among those at elevated risk of suicide. We used widely available statewide hospitalization data to identify and test the joint predictive power of clinical risk factors associated with death by suicide among patients previously hospitalized for a suicide attempt (N = 19,057). Twenty-eight clinical factors from the prior suicide attempt were significantly associated with the hazard of subsequent suicide mortality. These risk factors and their two-way interactions were used to build a joint predictive model via stepwise regression, in which the predicted individual survival probability was found to be a valid measure of risk of later suicide death. A high-risk group with a four-fold increase in suicide mortality risk was identified based on the out-of-sample predicted survival probabilities. This study demonstrates that the combination of state-level hospital discharge and mortality data can be used to identify suicide attempters who are at high risk of subsequent suicide death.
34
Jiang T, Gradus JL, Rosellini AJ. Supervised Machine Learning: A Brief Primer. Behav Ther 2020; 51:675-687. PMID: 32800297; PMCID: PMC7431677; DOI: 10.1016/j.beth.2020.05.002.
Abstract
Machine learning is increasingly used in mental health research and has the potential to advance our understanding of how to characterize, predict, and treat mental disorders and associated adverse health outcomes (e.g., suicidal behavior). Machine learning offers new tools to overcome challenges for which traditional statistical methods are not well-suited. This paper provides an overview of machine learning with a specific focus on supervised learning (i.e., methods that are designed to predict or classify an outcome of interest). Several common supervised learning methods are described, along with applied examples from the published literature. We also provide an overview of supervised learning model building, validation, and performance evaluation. Finally, challenges in creating robust and generalizable machine learning algorithms are discussed.
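The model building, validation, and evaluation workflow the primer describes can be shown in miniature. In this sketch the one-feature threshold "model" and the synthetic data are stand-ins for any supervised learner; the essential pattern is that the cutoff is chosen on training data and accuracy is reported on held-out data:

```python
import random

def train_threshold(train):
    """Fit a one-feature threshold classifier: pick the cutoff on x
    that maximizes accuracy on the training pairs (x, y)."""
    best_t, best_acc = None, -1.0
    for t in sorted({x for x, _ in train}):
        acc = sum((x >= t) == y for x, y in train) / len(train)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(data, t):
    return sum((x >= t) == y for x, y in data) / len(data)

# Synthetic data: outcome y is 1 exactly when the feature x exceeds 0.5.
random.seed(0)
data = [(x, x > 0.5) for x in (random.random() for _ in range(200))]
random.shuffle(data)
train, test = data[:150], data[150:]  # held-out evaluation split

t = train_threshold(train)
test_acc = accuracy(test, t)
```

Reporting `test_acc` rather than training accuracy is the point: held-out performance is the honest estimate of how the model generalizes, which is also why the primer stresses validation.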
Affiliation(s)
- Jaimie L Gradus
- Boston University School of Public Health; Boston University School of Medicine
- Anthony J Rosellini
- Center for Anxiety and Related Disorders, Boston University; Department of Psychological and Brain Sciences, Boston University
35
Bernert RA, Hilberg AM, Melia R, Kim JP, Shah NH, Abnousi F. Artificial Intelligence and Suicide Prevention: A Systematic Review of Machine Learning Investigations. Int J Environ Res Public Health 2020; 17:E5929. [PMID: 32824149 PMCID: PMC7460360 DOI: 10.3390/ijerph17165929] [Citation(s) in RCA: 70] [Impact Index Per Article: 17.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Accepted: 07/28/2020] [Indexed: 12/12/2022]
Abstract
Suicide is a leading cause of death that defies prediction and challenges prevention efforts worldwide. Artificial intelligence (AI) and machine learning (ML) have emerged as a means of investigating large datasets to enhance risk detection. A systematic review of ML investigations evaluating suicidal behaviors was conducted using PubMed/MEDLINE, PsycINFO, Web of Science, and EMBASE, employing search strings and MeSH terms relevant to suicide and AI. Databases were supplemented by hand-search techniques and Google Scholar. Inclusion criteria: (1) journal article available in English, (2) original investigation, (3) employment of AI/ML, (4) evaluation of a suicide risk outcome. N = 594 records were identified based on abstract search, plus 25 hand-searched reports. N = 461 reports remained after duplicates were removed; n = 316 were excluded after abstract screening. Of n = 149 full-text articles assessed for eligibility, n = 87 were included for quantitative synthesis, grouped according to suicide behavior outcome. Reports varied widely in methodology and outcomes. Results suggest high levels of risk classification accuracy (>90%) and Area Under the Curve (AUC) in the prediction of suicidal behaviors. We report key findings and central limitations in the use of AI/ML frameworks to guide additional research, which holds the potential to impact suicide on a broad scale.
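The review's headline metric, AUC, equals the probability that a randomly chosen positive case is ranked above a randomly chosen negative one. A minimal rank-based sketch (illustrative only, not from the review):

```python
def auc(scores, labels):
    """Rank-based AUC: probability a randomly chosen positive scores
    higher than a randomly chosen negative (ties count one half)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means every positive outranks every negative; 0.5 is chance-level ranking.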
Affiliation(s)
- Rebecca A. Bernert
- Stanford Suicide Prevention Research Laboratory, Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA 94304, USA
- Amanda M. Hilberg
- Stanford Suicide Prevention Research Laboratory, Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA 94304, USA
- Ruth Melia
- Stanford Suicide Prevention Research Laboratory, Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA 94304, USA
- Department of Psychology, National University of Ireland, Galway, Ireland
- Jane Paik Kim
- Stanford Suicide Prevention Research Laboratory, Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA 94304, USA
- Nigam H. Shah
- Department of Medicine, Center for Biomedical Informatics Research, Stanford University School of Medicine, Stanford, CA 94304, USA
- Informatics, Stanford Center for Clinical and Translational Research and Education (Spectrum), Stanford University, Stanford, CA 94304, USA
- Freddy Abnousi
- Facebook, Menlo Park, CA 94025, USA
- Yale University School of Medicine, New Haven, CT 06510, USA
36
Obeid JS, Dahne J, Christensen S, Howard S, Crawford T, Frey LJ, Stecker T, Bunnell BE. Identifying and Predicting Intentional Self-Harm in Electronic Health Record Clinical Notes: Deep Learning Approach. JMIR Med Inform 2020; 8:e17784. [PMID: 32729840 PMCID: PMC7426805 DOI: 10.2196/17784] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2020] [Revised: 04/25/2020] [Accepted: 05/21/2020] [Indexed: 12/20/2022] Open
Abstract
BACKGROUND Suicide is an important public health concern in the United States and around the world. There has been significant work examining machine learning approaches to identify and predict intentional self-harm and suicide using existing data sets. With recent advances in computing, deep learning applications in health care are gaining momentum. OBJECTIVE This study aimed to leverage the information in clinical notes using deep neural networks (DNNs) to (1) improve the identification of patients treated for intentional self-harm and (2) predict future self-harm events. METHODS We extracted clinical text notes from electronic health records (EHRs) of 835 patients with International Classification of Diseases (ICD) codes for intentional self-harm and 1670 matched controls who never had any intentional self-harm ICD codes. The data were divided into training and holdout test sets. We tested a number of algorithms on clinical notes associated with the intentional self-harm codes using the training set, including several traditional bag-of-words-based models and 2 DNN models: a convolutional neural network (CNN) and a long short-term memory model. We also evaluated the predictive performance of the DNNs on a subset of patients who had clinical notes 1 to 6 months before the first intentional self-harm event. Finally, we evaluated the impact of a pretrained model using Word2vec (W2V) on performance. RESULTS The area under the receiver operating characteristic curve (AUC) for the CNN on the phenotyping task (that is, the detection of intentional self-harm in clinical notes concurrent with the events) was 0.999, with an F1 score of 0.985. In the predictive task, the CNN achieved the highest performance with an AUC of 0.882 and an F1 score of 0.769. Although pretraining with W2V shortened the DNN training time, it did not improve performance.
CONCLUSIONS The strong performance on the first task, namely, phenotyping based on clinical notes, suggests that such models could be used effectively for surveillance of intentional self-harm in clinical text in an EHR. The modest performance on the predictive task notwithstanding, the results using DNN models on clinical text alone are competitive with other reports in the literature using risk factors from structured EHR data.
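The F1 scores reported above combine precision and recall computed from confusion counts; a minimal sketch (hypothetical helper, not the study's code):

```python
def f1_from_counts(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from a confusion matrix's
    true-positive, false-positive and false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Unlike accuracy, F1 ignores true negatives, which matters when self-harm notes are a small minority of the corpus.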
Affiliation(s)
- Jihad S Obeid
- Medical University of South Carolina, Charleston, SC, United States
- Jennifer Dahne
- Medical University of South Carolina, Charleston, SC, United States
- Sean Christensen
- Medical University of South Carolina, Charleston, SC, United States
- Samuel Howard
- Medical University of South Carolina, Charleston, SC, United States
- Tami Crawford
- Medical University of South Carolina, Charleston, SC, United States
- Lewis J Frey
- Medical University of South Carolina, Charleston, SC, United States
- Tracy Stecker
- Medical University of South Carolina, Charleston, SC, United States
37
Kim B, Kim Y, Park CHK, Rhee SJ, Kim YS, Leventhal BL, Ahn YM, Paik H. Identifying the Medical Lethality of Suicide Attempts Using Network Analysis and Deep Learning: Nationwide Study. JMIR Med Inform 2020; 8:e14500. [PMID: 32673253 PMCID: PMC7380907 DOI: 10.2196/14500] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2019] [Revised: 01/08/2020] [Accepted: 03/23/2020] [Indexed: 12/30/2022] Open
Abstract
BACKGROUND Suicide is one of the leading causes of death among young and middle-aged people. However, little is understood about the behaviors leading up to actual suicide attempts and whether these behaviors are specific to the nature of suicide attempts. OBJECTIVE The goal of this study was to examine the clusters of behaviors antecedent to suicide attempts to determine if they could be used to assess the potential lethality of the attempt. To accomplish this goal, we developed a deep learning model using the relationships among behaviors antecedent to suicide attempts and the attempts themselves. METHODS This study used data from the Korea National Suicide Survey. We identified 1112 individuals who attempted suicide and completed a psychiatric evaluation in the emergency room. The 15-item Beck Suicide Intent Scale (SIS) was used for assessing antecedent behaviors, and the medical outcomes of the suicide attempts were measured by assessing lethality with the Columbia Suicide Severity Rating Scale (C-SSRS; lethal suicide attempt >3 and nonlethal attempt ≤3). RESULTS Using scores from the SIS, individuals who had lethal and nonlethal attempts comprised two different network nodes with the edges representing the relationships among nodes. Among the antecedent behaviors, the conception of a method's lethality predicted suicidal behaviors with severe medical outcomes. The vectorized relationship values among the elements of antecedent behaviors in our deep learning model (E-GONet) increased performances, such as F1 and area under the precision-recall gain curve (AUPRG), for identifying lethal attempts (up to 3% for F1 and 32% for AUPRG), as compared with other models (mean F1: 0.81 for E-GONet, 0.78 for linear regression, and 0.80 for random forest; mean AUPRG: 0.73 for E-GONet, 0.41 for linear regression, and 0.69 for random forest). 
CONCLUSIONS The relationships among behaviors antecedent to suicide attempts can be used to understand the suicidal intent of individuals and help identify the lethality of potential suicide attempts. Such a model may be useful in prioritizing cases for preventive intervention.
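The network construction here is bespoke to the study (E-GONet), but the underlying idea of edges weighted by relationships among antecedent behaviors can be approximated by simple co-occurrence counting. A hedged sketch with invented item names, not the authors' method:

```python
from itertools import combinations

def cooccurrence_edges(records):
    """Weight each pair of antecedent-behavior items by how many
    individuals report both -- a crude stand-in for the node/edge
    relationships the study's deep model vectorizes."""
    edges = {}
    for items in records:
        for a, b in combinations(sorted(set(items)), 2):
            edges[(a, b)] = edges.get((a, b), 0) + 1
    return edges
```

Each key is an unordered item pair (stored sorted) and each value is the number of individuals exhibiting both behaviors.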
Affiliation(s)
- Bora Kim
- Department of Psychiatry, University of California, San Francisco, San Francisco, CA, United States
- Younghoon Kim
- Center for Supercomputing Applications, Division of Supercomputing, Korea Institute of Science and Technology Information (KISTI), Daejeon, Republic of Korea
- C Hyung Keun Park
- Department of Neuropsychiatry, Seoul National University Hospital, Seoul, Republic of Korea
- Department of Psychiatry and Behavioral Science, Seoul National University College of Medicine, Seoul, Republic of Korea
- Sang Jin Rhee
- Department of Neuropsychiatry, Seoul National University Hospital, Seoul, Republic of Korea
- Department of Psychiatry and Behavioral Science, Seoul National University College of Medicine, Seoul, Republic of Korea
- Young Shin Kim
- Department of Psychiatry, University of California, San Francisco, San Francisco, CA, United States
- Bennett L Leventhal
- Department of Psychiatry, University of California, San Francisco, San Francisco, CA, United States
- Yong Min Ahn
- Department of Neuropsychiatry, Seoul National University Hospital, Seoul, Republic of Korea
- Department of Psychiatry and Behavioral Science, Seoul National University College of Medicine, Seoul, Republic of Korea
- Hyojung Paik
- Center for Supercomputing Applications, Division of Supercomputing, Korea Institute of Science and Technology Information (KISTI), Daejeon, Republic of Korea
38
Haines-Delmont A, Chahal G, Bruen AJ, Wall A, Khan CT, Sadashiv R, Fearnley D. Testing Suicide Risk Prediction Algorithms Using Phone Measurements With Patients in Acute Mental Health Settings: Feasibility Study. JMIR Mhealth Uhealth 2020; 8:e15901. [PMID: 32442152 PMCID: PMC7380988 DOI: 10.2196/15901] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2019] [Revised: 02/21/2020] [Accepted: 02/29/2020] [Indexed: 12/12/2022] Open
Abstract
BACKGROUND Digital phenotyping and machine learning are currently being used to augment or even replace traditional analytic procedures in many domains, including health care. Given the heavy reliance on smartphones and mobile devices around the world, this readily available source of data is an important and highly underutilized source that has the potential to improve mental health risk prediction and prevention and advance mental health globally. OBJECTIVE This study aimed to apply machine learning in an acute mental health setting for suicide risk prediction. This study uses a nascent approach, adding to existing knowledge by using data collected through a smartphone in place of clinical data, which have typically been collected from health care records. METHODS We created a smartphone app called Strength Within Me, which was linked to Fitbit, Apple HealthKit, and Facebook, to collect salient clinical information such as sleep behavior and mood, step frequency and count, and engagement patterns with the phone from a cohort of psychiatric inpatients in an acute mental health setting (n=66). In addition, clinical research interviews were used to assess mood, sleep, and suicide risk. Multiple machine learning algorithms were tested to determine the best fit. RESULTS K-nearest neighbors (KNN; k=2) with uniform weighting and the Euclidean distance metric emerged as the most promising algorithm, with 68% mean accuracy (averaged over 10,000 simulations of splitting the training and testing data via 10-fold cross-validation) and an average area under the curve of 0.65. We applied a combined 5×2 F test to test the model performance of KNN against the baseline classifier that guesses training majority, random forest, support vector machine and logistic regression, and achieved F statistics of 10.7 (P=.009) and 17.6 (P=.003) for training majority and random forest, respectively, rejecting the null of performance being the same.
Therefore, we have taken the first steps in prototyping a system that could continuously and accurately assess the risk of suicide via mobile devices. CONCLUSIONS Predicting suicidality is an under-addressed area of research to which this paper makes a useful contribution. This is part of the first generation of studies to suggest that it is feasible to utilize smartphone-generated user input and passive sensor data to generate a risk algorithm among inpatients at suicide risk. The model reveals fair concordance between phone-derived and research-generated clinical data, and with iterative development, it has the potential for accurate discriminant risk prediction. However, although full automation and independence of clinical judgment or input would be a worthy development for those individuals who are less likely to access specialist mental health services, and for providing a timely response in a crisis situation, the ethical and legal implications of such advances in the field of psychiatry need to be acknowledged.
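The best-performing model reported above, KNN with k=2, uniform weighting, and Euclidean distance, is straightforward to sketch from scratch. This is an illustration of the classifier configuration, not the study's implementation:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=2):
    """Uniform-weight k-nearest-neighbours vote under Euclidean distance.
    With k=2 and a split vote, the nearer neighbour's class wins because
    Counter preserves encounter order for ties."""
    neighbours = sorted((math.dist(xi, x), yi) for xi, yi in zip(train_X, train_y))
    votes = Counter(label for _, label in neighbours[:k])
    return votes.most_common(1)[0][0]
```

In the study this classifier would be wrapped in repeated 10-fold cross-validation to produce the reported mean accuracy.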
Affiliation(s)
- Alina Haines-Delmont
- Faculty of Health, Psychology and Social Care, Manchester Metropolitan University, Manchester, United Kingdom
- Gurdit Chahal
- CLARA Labs, CLARA Analytics, Santa Clara, CA, United States
- Ashley Jane Bruen
- University of Liverpool, Health Services Research, Liverpool, United Kingdom
- Abbie Wall
- University of Liverpool, Health Services Research, Liverpool, United Kingdom
- David Fearnley
- Mersey Care NHS Foundation Trust, Prescot, United Kingdom
39
Senior M, Burghart M, Yu R, Kormilitzin A, Liu Q, Vaci N, Nevado-Holgado A, Pandit S, Zlodre J, Fazel S. Identifying Predictors of Suicide in Severe Mental Illness: A Feasibility Study of a Clinical Prediction Rule (Oxford Mental Illness and Suicide Tool or OxMIS). Front Psychiatry 2020; 11:268. [PMID: 32351413 PMCID: PMC7175991 DOI: 10.3389/fpsyt.2020.00268] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/12/2019] [Accepted: 03/19/2020] [Indexed: 12/23/2022] Open
Abstract
BACKGROUND Oxford Mental Illness and Suicide tool (OxMIS) is a brief, scalable, freely available, structured risk assessment tool to assess suicide risk in patients with severe mental illness (schizophrenia-spectrum disorders or bipolar disorder). OxMIS requires further external validation, but a lack of large-scale cohorts with relevant variables makes this challenging. Electronic health records provide possible data sources for external validation of risk prediction tools. However, they contain large amounts of information within free-text that is not readily extractable. In this study, we examined the feasibility of identifying suicide predictors needed to validate OxMIS in routinely collected electronic health records. METHODS In study 1, we manually reviewed electronic health records of 57 patients with severe mental illness to calculate OxMIS risk scores. In study 2, we examined the feasibility of using natural language processing to scale up this process. We used anonymized free-text documents from the Clinical Record Interactive Search database to train a named entity recognition model, a machine learning technique which recognizes concepts in free-text. The model identified eight concepts relevant for suicide risk assessment: medication (antidepressant/antipsychotic treatment), violence, education, self-harm, benefits receipt, drug/alcohol use disorder, suicide, and psychiatric admission. We assessed model performance in terms of precision (similar to positive predictive value), recall (similar to sensitivity) and F1 statistic (an overall performance measure). RESULTS In study 1, we estimated suicide risk for all patients using the OxMIS calculator, giving a range of 12-month risk estimates from 0.1% to 3.4%. For 13 out of 17 predictors, there was no missing information in electronic health records.
For the remaining 4 predictors, missingness ranged from 7% to 26%; to account for these missing variables, it was possible for OxMIS to estimate suicide risk using a range of scores. In study 2, the named entity recognition model had an overall precision of 0.77, recall of 0.90 and F1 score of 0.83. The concept with the best precision and recall was medication (precision 0.84, recall 0.96) and the weakest were suicide (precision 0.37), and drug/alcohol use disorder (recall 0.61). CONCLUSIONS It is feasible to estimate suicide risk with the OxMIS tool using predictors identified in routine clinical records. Predictors could be extracted using natural language processing. However, electronic health records differ from other data sources, particularly for family history variables, which creates methodological challenges.
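The precision/recall/F1 evaluation of the named entity recognition model can be sketched as exact-match scoring over entity sets; the entity tuples below are invented examples, not data from the study:

```python
def ner_scores(gold, predicted):
    """Exact-match precision, recall and F1 over sets of (text, label)
    entities, the standard evaluation scheme for NER models."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Precision penalizes spurious extractions and recall penalizes missed ones, which is why the paper reports both per concept.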
Affiliation(s)
- Morwenna Senior
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Matthias Burghart
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Rongqin Yu
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Qiang Liu
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Nemanja Vaci
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Smita Pandit
- Oxford Health NHS Foundation Trust, Warneford Hospital, Oxford, United Kingdom
- Jakov Zlodre
- Oxford Health NHS Foundation Trust, Warneford Hospital, Oxford, United Kingdom
- Seena Fazel
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom
40
Thompson E, Spirito A, Frazier E, Thompson A, Hunt J, Wolff J. Suicidal thoughts and behavior (STB) and psychosis-risk symptoms among psychiatrically hospitalized adolescents. Schizophr Res 2020; 218:240-246. [PMID: 31948902 PMCID: PMC7299764 DOI: 10.1016/j.schres.2019.12.037] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/04/2019] [Revised: 11/04/2019] [Accepted: 12/27/2019] [Indexed: 12/26/2022]
Abstract
BACKGROUND Individuals in the early stages of psychosis have a markedly high risk for suicidal thoughts and behavior (STB). It is not well understood if STB among those with psychosis-risk symptoms is accounted for by co-occurring psychopathology (e.g., depression), unique experiences specific to psychosis-spectrum symptomatology (e.g., hallucinations, delusions), or combined effects of different factors. This cross-sectional study explored the link between psychosis-spectrum symptoms, co-occurring disorders, and STB. METHODS This record review included 569 adolescents (mean age = 14.83) admitted to a psychiatric inpatient hospital due to exhibiting behavior indicating they were an imminent threat to themselves or others. Upon intake to the hospital, participants completed a diagnostic interview and self-report measures of suicidal ideation, suicide attempt history, and psychosis-spectrum symptoms. The primary analysis used linear regression to predict suicidal ideation from psychosis-spectrum symptom scores, controlling for known characteristics associated with STB including specific psychiatric disorders (i.e. depressive, anxiety, post-traumatic stress, and psychotic disorders), biological sex, and race. RESULTS Psychosis-spectrum symptoms predicted suicidal ideation above and beyond the significant effects of a depressive disorder diagnosis and sex, as well as the non-significant effects of anxiety, PTSD, full-threshold psychosis, and race. Item-level correlations demonstrated that several psychosis-spectrum symptoms were significantly associated with ideation and lifetime suicide attempts. CONCLUSIONS Results indicate that within this sample of psychiatrically hospitalized youth, psychosis-risk symptoms were uniquely linked to STB. These findings suggest that attention to psychosis-spectrum symptoms, including several specific psychosis-risk experiences, may be clinically important for better assessment and treatment of suicidal youth.
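The primary analysis above is a multi-covariate linear regression. As a one-variable stand-in, the least-squares fit has a closed form; the sketch below uses synthetic numbers and is not the study's model:

```python
def ols_fit(x, y):
    """Closed-form least-squares intercept and slope for one predictor:
    slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return my - slope * mx, slope  # (intercept, slope)
```

Controlling for covariates, as the study does, generalizes this to solving the same normal equations over a multi-column design matrix.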
Affiliation(s)
- Elizabeth Thompson
- Department of Psychiatry and Human Behavior, Warren Alpert Medical School of Brown University, Box G-BH, Providence, RI 02912, United States of America; Bradley Hospital, 1011 Veterans Memorial Pkwy, Riverside, RI 02915, United States of America; Child and Adolescent Psychiatry, Rhode Island Hospital, One Hoppin Street, Coro West Suite 204, Providence, RI 02903, United States of America
- Anthony Spirito
- Department of Psychiatry and Human Behavior, Warren Alpert Medical School of Brown University, Box G-BH, Providence, RI 02912, United States of America; Bradley Hospital, 1011 Veterans Memorial Pkwy, Riverside, RI 02915, United States of America
- Elisabeth Frazier
- Department of Psychiatry and Human Behavior, Warren Alpert Medical School of Brown University, Box G-BH, Providence, RI 02912, United States of America; Bradley Hospital, 1011 Veterans Memorial Pkwy, Riverside, RI 02915, United States of America
- Alysha Thompson
- Department of Psychiatry and Human Behavior, Warren Alpert Medical School of Brown University, Box G-BH, Providence, RI 02912, United States of America; Bradley Hospital, 1011 Veterans Memorial Pkwy, Riverside, RI 02915, United States of America
- Jeffrey Hunt
- Department of Psychiatry and Human Behavior, Warren Alpert Medical School of Brown University, Box G-BH, Providence, RI 02912, United States of America; Bradley Hospital, 1011 Veterans Memorial Pkwy, Riverside, RI 02915, United States of America
- Jennifer Wolff
- Department of Psychiatry and Human Behavior, Warren Alpert Medical School of Brown University, Box G-BH, Providence, RI 02912, United States of America; Bradley Hospital, 1011 Veterans Memorial Pkwy, Riverside, RI 02915, United States of America; Child and Adolescent Psychiatry, Rhode Island Hospital, One Hoppin Street, Coro West Suite 204, Providence, RI 02903, United States of America
41
Agurto C, Cecchi GA, Norel R, Ostrand R, Kirkpatrick M, Baggott MJ, Wardle MC, Wit HD, Bedi G. Detection of acute 3,4-methylenedioxymethamphetamine (MDMA) effects across protocols using automated natural language processing. Neuropsychopharmacology 2020; 45:823-832. [PMID: 31978933 PMCID: PMC7075895 DOI: 10.1038/s41386-020-0620-4] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/23/2019] [Revised: 11/28/2019] [Accepted: 01/08/2020] [Indexed: 11/17/2022]
Abstract
The detection of changes in mental states such as those caused by psychoactive drugs relies on clinical assessments that are inherently subjective. Automated speech analysis may represent a novel method to detect objective markers, which could help improve the characterization of these mental states. In this study, we employed computer-extracted speech features from multiple domains (acoustic, semantic, and psycholinguistic) to assess mental states after controlled administration of 3,4-methylenedioxymethamphetamine (MDMA) and intranasal oxytocin. The training/validation set comprised within-participants data from 31 healthy adults who, over four sessions, were administered MDMA (0.75, 1.5 mg/kg), oxytocin (20 IU), and placebo in randomized, double-blind fashion. Participants completed two 5-min speech tasks during peak drug effects. Analyses included group-level comparisons of drug conditions and estimation of classification at the individual level within this dataset and on two independent datasets. Promising classification results were obtained to detect drug conditions, achieving cross-validated accuracies of up to 87% in training/validation and 92% in the independent datasets, suggesting that the detected patterns of speech variability are associated with drug consumption. Specifically, we found that oxytocin seems to be mostly driven by changes in emotion and prosody, which are mainly captured by acoustic features. In contrast, mental states driven by MDMA consumption appear to manifest in multiple domains of speech. Furthermore, we find that the experimental task has an effect on the speech response within these mental states, which can be attributed to presence or absence of an interaction with another individual. These results represent a proof-of-concept application of the potential of speech to provide an objective measurement of mental states elicited during intoxication.
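The cross-validated accuracies reported here rest on k-fold splitting of the participant data. A minimal deterministic splitter (illustrative; the study's exact validation scheme may differ):

```python
def kfold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross-validation,
    assigning samples to folds round-robin so the split is deterministic."""
    fold = [i % k for i in range(n)]
    for f in range(k):
        test = [i for i in range(n) if fold[i] == f]
        train = [i for i in range(n) if fold[i] != f]
        yield train, test
```

Each sample appears in exactly one test fold, so averaging per-fold accuracy estimates performance on unseen speakers.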
Affiliation(s)
- Carla Agurto
- Computational Biology Center - Neuroscience, IBM T.J. Watson Research Center, Yorktown Heights, NY, USA
- Guillermo A Cecchi
- Computational Biology Center - Neuroscience, IBM T.J. Watson Research Center, Yorktown Heights, NY, USA
- Raquel Norel
- Computational Biology Center - Neuroscience, IBM T.J. Watson Research Center, Yorktown Heights, NY, USA
- Rachel Ostrand
- Computational Biology Center - Neuroscience, IBM T.J. Watson Research Center, Yorktown Heights, NY, USA
- Matthew Kirkpatrick
- Department of Preventive Medicine, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Matthew J Baggott
- Addiction and Pharmacology Research Laboratory, Friends Research Institute, San Francisco, CA, USA
- Margaret C Wardle
- Department of Psychology, University of Illinois at Chicago, Chicago, IL, USA
- Harriet de Wit
- Human Behavioral Pharmacology Laboratory, Department of Psychiatry and Behavioral Neuroscience, University of Chicago, Chicago, IL, USA
- Gillinder Bedi
- Centre for Youth Mental Health, University of Melbourne, and Orygen National Centre of Excellence in Youth Mental Health, Melbourne, Australia
42
Gradus JL, Rosellini AJ, Horváth-Puhó E, Street AE, Galatzer-Levy I, Jiang T, Lash TL, Sørensen HT. Prediction of Sex-Specific Suicide Risk Using Machine Learning and Single-Payer Health Care Registry Data From Denmark. JAMA Psychiatry 2020; 77:25-34. [PMID: 31642880 PMCID: PMC6813578 DOI: 10.1001/jamapsychiatry.2019.2905] [Citation(s) in RCA: 81] [Impact Index Per Article: 20.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/29/2019] [Accepted: 07/28/2019] [Indexed: 01/26/2023]
Abstract
Importance Suicide is a public health problem, with multiple causes that are poorly understood. The increased focus on combining health care data with machine-learning approaches in psychiatry may help advance the understanding of suicide risk. Objective To examine sex-specific risk profiles for death from suicide using machine-learning methods and data from the population of Denmark. Design, Setting, and Participants A case-cohort study nested within 8 national Danish health and social registries was conducted from January 1, 1995, through December 31, 2015. The source population was all persons born or residing in Denmark as of January 1, 1995. Data were analyzed from November 5, 2018, through May 13, 2019. Exposures Exposures included 1339 variables spanning domains of suicide risk factors. Main Outcomes and Measures Death from suicide from the Danish cause of death registry. Results A total of 14 103 individuals died by suicide between 1995 and 2015 (10 152 men [72.0%]; mean [SD] age, 43.5 [18.8] years and 3951 women [28.0%]; age, 47.6 [18.8] years). The comparison subcohort was a 5% random sample (n = 265 183) of living individuals in Denmark on January 1, 1995 (130 591 men [49.2%]; age, 37.4 [21.8] years and 134 592 women [50.8%]; age, 39.9 [23.4] years). With use of classification trees and random forests, sex-specific differences were noted in risk for suicide, with physical health more important to men's suicide risk than women's suicide risk. Psychiatric disorders and possibly associated medications were important to suicide risk, with specific results that may increase clarity in the literature. Generally, diagnoses and medications measured 48 months before suicide were more important indicators of suicide risk than when measured 6 months earlier. Individuals in the top 5% of predicted suicide risk appeared to account for 32.0% of all suicide cases in men and 53.4% of all cases in women. 
Conclusions and Relevance Despite decades of research on suicide risk factors, understanding of suicide remains poor. In this study, the first to date to develop risk profiles for suicide based on data from a full population, apparent consistency with what is known about suicide risk was noted, as well as potentially important, understudied risk factors with evidence of unique suicide risk profiles among specific subpopulations.
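The finding that the top 5% of predicted risk accounted for 32.0% (men) and 53.4% (women) of suicides is a "capture rate": the share of observed cases falling in the highest-risk stratum. A small sketch of how such a figure is computed (hypothetical helper, not the authors' code):

```python
def capture_rate(pred_risk, is_case, top_frac=0.05):
    """Fraction of all observed cases that fall inside the top_frac of
    the population ranked by predicted risk."""
    order = sorted(range(len(pred_risk)), key=lambda i: pred_risk[i], reverse=True)
    k = max(1, int(len(order) * top_frac))
    captured = sum(is_case[i] for i in order[:k])
    total = sum(is_case)
    return captured / total if total else 0.0
```

This is equivalent to the model's sensitivity when the decision threshold is set at the top-5% risk percentile.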
Affiliation(s)
- Jaimie L. Gradus
- Department of Epidemiology, Boston University School of Public Health, Boston, Massachusetts
- Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, Denmark
- Anthony J. Rosellini
- Center for Anxiety and Related Disorders, Department of Psychological and Brain Sciences, Boston University, Boston, Massachusetts
- Amy E. Street
- National Center for Posttraumatic Stress Disorder, Veterans Affairs Boston Healthcare System, Boston, Massachusetts
- Department of Psychiatry, Boston University School of Medicine, Boston, Massachusetts
- Isaac Galatzer-Levy
- Department of Psychiatry, New York University School of Medicine, New York
- AICure, New York, New York
- Tammy Jiang
- Department of Epidemiology, Boston University School of Public Health, Boston, Massachusetts
- Timothy L. Lash
- Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, Denmark
- Department of Epidemiology, Rollins School of Public Health, Emory University, Atlanta, Georgia
- Henrik T. Sørensen
- Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, Denmark
43
Duvall M, North F, Leasure W, Pecina J. Patient portal message characteristics and reported thoughts of self-harm and suicide: A retrospective cohort study. J Telemed Telecare 2019; 27:501-508. [PMID: 31726902 DOI: 10.1177/1357633x19887262] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
INTRODUCTION As use of electronic portal communication with healthcare teams increases, processes that effectively recognize messages containing critical information are needed. This study aims to evaluate whether certain language and other characteristics of patient portal messages are associated with expressions of self-harm and suicidal ideation. METHODS Using patient portal messages sent between 1 January 2013 and 30 June 2017, we searched for words and letter combinations 'suicid' (to identify the words suicide and suicidal), 'depress' (for depression, depressed, depressing), 'harm himself' (or 'herself' or 'myself'), 'hurt himself' ('herself' or 'myself'), 'kill', 'shoot', 'cutting', 'knife', 'gun', 'overdose', 'over dose' and 'jump'. RESULTS Of 831,009 messages, 11,174 contained one or more search terms. We manually reviewed 7,736 messages for content expressing self-harm or suicidality. Of the reviewed messages, 3.2% indicated thoughts of self-harm or suicide and 2.2% suggested active suicidality. Of those expressing any thoughts of self-harm or suicide, 13.4% mentioned a specific plan and 20% were passively suicidal. Messages indicating thoughts of self-harm and suicide were more common in patients who were unmarried, non-white and younger than 18 years. Factors significantly associated with thoughts of self-harm were messages addressed to psychiatry or containing the letter combinations 'suicide', 'die', 'depress' and 'harm/hurt my/her/himself'. DISCUSSION Certain letter combinations and patient portal message characteristics may be associated with expressions of self-harm and suicide. These factors should be considered as we develop systems for effectively screening patient portal messages for critical clinical information.
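The letter-combination search described above amounts to a case-insensitive substring scan. A minimal sketch follows; the example messages are invented, and this is an approximation of the screening idea rather than the study's actual query.

```python
import re

# Letter combinations from the abstract; substring matches catch word variants
# (e.g., 'suicid' matches both 'suicide' and 'suicidal')
TERMS = [
    "suicid", "depress", "harm himself", "harm herself", "harm myself",
    "hurt himself", "hurt herself", "hurt myself", "kill", "shoot",
    "cutting", "knife", "gun", "overdose", "over dose", "jump",
]
pattern = re.compile("|".join(re.escape(t) for t in TERMS), re.IGNORECASE)

def flag_message(text: str) -> list[str]:
    """Return the search terms found in a portal message (case-insensitive)."""
    return [m.group(0).lower() for m in pattern.finditer(text)]

# Hypothetical example messages (not drawn from the study data)
msgs = [
    "I have been feeling depressed and sometimes think about suicide.",
    "Please refill my blood pressure medication.",
]
flagged = [flag_message(m) for m in msgs]
print(flagged)
```

As in the study, a scan like this only triages: the 11,174 term-containing messages still required manual review, since substring hits include false positives ('killing time', 'shooting pain', and so on).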
Affiliation(s)
- Michelle Duvall
- Department of Family Medicine, Mayo Clinic, Rochester, MN, USA
- Frederick North
- Department of Community Internal Medicine, Mayo Clinic, Rochester, MN, USA
- William Leasure
- Department of Psychiatry & Psychology, Mayo Clinic, Rochester, MN, USA
- Jennifer Pecina
- Department of Family Medicine, Mayo Clinic, Rochester, MN, USA

44
Fonseka TM, Bhat V, Kennedy SH. The utility of artificial intelligence in suicide risk prediction and the management of suicidal behaviors. Aust N Z J Psychiatry 2019; 53:954-964. [PMID: 31347389 DOI: 10.1177/0004867419864428] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
OBJECTIVE Suicide is a growing public health concern with a global prevalence of approximately 800,000 deaths per year. The current process of evaluating suicide risk is highly subjective, which can limit the efficacy and accuracy of prediction efforts. Consequently, suicide detection strategies are shifting toward artificial intelligence platforms that can identify patterns within 'big data' to generate risk algorithms that can determine the effects of risk (and protective) factors on suicide outcomes, predict suicide outbreaks and identify at-risk individuals or populations. In this review, we summarize the role of artificial intelligence in optimizing suicide risk prediction and behavior management. METHODS This paper provides a general review of the literature. A literature search was conducted in OVID Medline, EMBASE and PsycINFO databases with coverage from January 1990 to June 2019. Results were restricted to peer-reviewed, English-language articles. Conference and dissertation proceedings, case reports, protocol papers and opinion pieces were excluded. Reference lists were also examined for additional articles of relevance. RESULTS At the individual level, prediction analytics help to identify individuals in crisis to intervene with emotional support, crisis and psychoeducational resources, and alerts for emergency assistance. At the population level, algorithms can identify at-risk groups or suicide hotspots, which help inform resource mobilization, policy reform and advocacy efforts. Artificial intelligence has also been used to support the clinical management of suicide across diagnostics and evaluation, medication management and behavioral therapy delivery. 
Incorporating artificial intelligence into suicide care could offer several advantages, including a time- and resource-effective alternative to clinician-based strategies, adaptability to various settings and demographics, and suitability for use in remote locations with limited access to mental healthcare supports. CONCLUSION Based on the observed benefits to date, artificial intelligence has demonstrated utility within suicide prediction and clinical management efforts and will continue to advance mental healthcare.
Affiliation(s)
- Trehani M Fonseka
- Centre for Mental Health and Krembil Research Centre, University Health Network, Toronto, ON, Canada; Centre for Depression and Suicide Studies, St. Michael's Hospital, Toronto, ON, Canada; School of Social Work, King's University College, Western University, London, ON, Canada
- Venkat Bhat
- Centre for Mental Health and Krembil Research Centre, University Health Network, Toronto, ON, Canada; Centre for Depression and Suicide Studies, St. Michael's Hospital, Toronto, ON, Canada; Department of Psychiatry, University of Toronto, Toronto, ON, Canada
- Sidney H Kennedy
- Centre for Mental Health and Krembil Research Centre, University Health Network, Toronto, ON, Canada; Centre for Depression and Suicide Studies, St. Michael's Hospital, Toronto, ON, Canada; Department of Psychiatry, University of Toronto, Toronto, ON, Canada; Keenan Research Centre for Biomedical Science, Li Ka Shing Knowledge Institute, St. Michael's Hospital, Toronto, ON, Canada

45
Brown RC, Bendig E, Fischer T, Goldwich AD, Baumeister H, Plener PL. Can acute suicidality be predicted by Instagram data? Results from qualitative and quantitative language analyses. PLoS One 2019; 14:e0220623. [PMID: 31504042 PMCID: PMC6736249 DOI: 10.1371/journal.pone.0220623] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2018] [Accepted: 06/27/2019] [Indexed: 12/22/2022] Open
Abstract
BACKGROUND Social media has become increasingly important for communication among young people. It is also often used to communicate suicidal ideation. AIMS To investigate the link between acute suicidality and language use as well as activity on Instagram. METHOD A total of 52 participants, aged on average around 16 years, who had posted pictures of non-suicidal self-injury on Instagram, and reported a lifetime history of suicidal ideation, were interviewed using Instagram messenger. Of those participants, 45.5% reported suicidal ideation on the day of the interview (acute suicidal ideation). Qualitative text analysis (software ATLAS.ti 7) was used to investigate experiences with expressions of active suicidal thoughts on Instagram. Quantitative text analysis of language use in the interviews and directly on Instagram (in picture captions) was performed using the Linguistic Inquiry and Word Count software. Language markers in the interviews and in picture captions, as well as activity on Instagram were added to regression analyses, in order to investigate predictors for current suicidal ideation. RESULTS Most participants (80%) had come across expressions of active suicidal thoughts on Instagram and 25% had expressed active suicidal thoughts themselves. Participants with acute suicidal ideation used significantly more negative emotion words (Cohen's d = 0.66, 95% CI: 0.088-1.232) and words expressing overall affect (Cohen's d = 0.57, 95% CI: 0.001-1.138) in interviews. However, activity and language use on Instagram did not predict acute suicidality. CONCLUSIONS While participants differed with regard to their use of language in interviews, differences in activity and language use on Instagram were not associated with acute suicidality. Other mechanisms of machine learning, like identifying picture content, might be more valuable.
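The effect sizes reported above (Cohen's d with a 95% CI) can be computed as in the following sketch. The sample values are invented stand-ins for LIWC-style negative-emotion word rates; only the formula, not the data, reflects the study.

```python
import math

def cohens_d_ci(x, y, z=1.96):
    """Cohen's d for two independent groups, with an approximate 95% CI."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)  # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled_sd = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    d = (mx - my) / pooled_sd
    # Large-sample approximation to the standard error of d
    se = math.sqrt((nx + ny) / (nx * ny) + d ** 2 / (2 * (nx + ny)))
    return d, d - z * se, d + z * se

# Invented negative-emotion word rates (% of words) for two interview groups
acute = [4.1, 3.8, 5.0, 4.6, 4.2]
non_acute = [3.2, 3.5, 3.0, 3.9, 3.4]
d, lo, hi = cohens_d_ci(acute, non_acute)
print(round(d, 2), round(lo, 2), round(hi, 2))
```

Note how wide the interval is at this sample size; the study's reported CIs (e.g., 0.088 to 1.232 for d = 0.66) show the same pattern, with lower bounds barely above zero.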
Affiliation(s)
- Rebecca C. Brown
- University of Ulm, Department of Child and Adolescent Psychiatry and Psychotherapy, Ulm, Germany
- Eileen Bendig
- University of Ulm, Department of Clinical Psychology and Psychotherapy, Ulm, Germany
- Tin Fischer
- Independent Contributor, Freelance Data Journalist, Berlin, Germany
- A. David Goldwich
- Independent Contributor, Freelance Software Developer, Berlin, Germany
- Harald Baumeister
- University of Ulm, Department of Clinical Psychology and Psychotherapy, Ulm, Germany
- Paul L. Plener
- University of Ulm, Department of Child and Adolescent Psychiatry and Psychotherapy, Ulm, Germany
- Medical University of Vienna, Department for Child and Adolescent Psychiatry, Vienna, Austria

46
Shatte ABR, Hutchinson DM, Teague SJ. Machine learning in mental health: a scoping review of methods and applications. Psychol Med 2019; 49:1426-1448.
Abstract
BACKGROUND This paper aims to synthesise the literature on machine learning (ML) and big data applications for mental health, highlighting current research and applications in practice. METHODS We employed a scoping review methodology to rapidly map the field of ML in mental health. Eight health and information technology research databases were searched for papers covering this domain. Articles were assessed by two reviewers, and data were extracted on the article's mental health application, ML technique, data type, and study results. Articles were then synthesised via narrative review. RESULTS Three hundred papers focusing on the application of ML to mental health were identified. Four main application domains emerged in the literature: (i) detection and diagnosis; (ii) prognosis, treatment and support; (iii) public health; and (iv) research and clinical administration. The most common mental health conditions addressed included depression, schizophrenia, and Alzheimer's disease. ML techniques used included support vector machines, decision trees, neural networks, latent Dirichlet allocation, and clustering. CONCLUSIONS Overall, the application of ML to mental health has demonstrated a range of benefits across the areas of diagnosis, treatment and support, research, and clinical administration. With the majority of studies identified focusing on the detection and diagnosis of mental health conditions, it is evident that there is significant room for the application of ML to other areas of psychology and mental health. The challenges of using ML techniques are discussed, as well as opportunities to improve and advance the field.
Affiliation(s)
- Adrian B R Shatte
- Federation University, School of Science, Engineering & Information Technology, Melbourne, Australia
- Delyse M Hutchinson
- Deakin University, Centre for Social and Early Emotional Development, School of Psychology, Faculty of Health, Geelong, Australia
- Samantha J Teague
- Deakin University, Centre for Social and Early Emotional Development, School of Psychology, Faculty of Health, Geelong, Australia

47
Fiske A, Henningsen P, Buyx A. Your Robot Therapist Will See You Now: Ethical Implications of Embodied Artificial Intelligence in Psychiatry, Psychology, and Psychotherapy. J Med Internet Res 2019; 21:e13216. [PMID: 31094356 PMCID: PMC6532335 DOI: 10.2196/13216] [Citation(s) in RCA: 165] [Impact Index Per Article: 33.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2018] [Revised: 02/21/2019] [Accepted: 02/26/2019] [Indexed: 12/11/2022] Open
Abstract
Background Research in embodied artificial intelligence (AI) has increasing clinical relevance for therapeutic applications in mental health services. With innovations ranging from ‘virtual psychotherapists’ to social robots in dementia care and autism spectrum disorder, to robots for sexual disorders, artificially intelligent virtual and robotic agents are increasingly taking on high-level therapeutic interventions that used to be offered exclusively by highly trained, skilled health professionals. In order to enable responsible clinical implementation, ethical and social implications of the increasing use of embodied AI in mental health need to be identified and addressed. Objective This paper assesses the ethical and social implications of translating embodied AI applications into mental health care across the fields of Psychiatry, Psychology and Psychotherapy. Building on this analysis, it develops a set of preliminary recommendations on how to address ethical and social challenges in current and future applications of embodied AI. Methods Based on a thematic literature search and established principles of medical ethics, an analysis of the ethical and social aspects of currently embodied AI applications was conducted across the fields of Psychiatry, Psychology, and Psychotherapy. To enable a comprehensive evaluation, the analysis was structured around the following three steps: assessment of potential benefits; analysis of overarching ethical issues and concerns; discussion of specific ethical and social issues of the interventions. Results From an ethical perspective, important benefits of embodied AI applications in mental health include new modes of treatment, opportunities to engage hard-to-reach populations, better patient response, and freeing up time for physicians.
Overarching ethical issues and concerns include: harm prevention and various questions of data ethics; a lack of guidance on development of AI applications, their clinical integration and training of health professionals; ‘gaps’ in ethical and regulatory frameworks; the potential for misuse including using the technologies to replace established services, thereby potentially exacerbating existing health inequalities. Specific challenges identified and discussed in the application of embodied AI include: matters of risk-assessment, referrals, and supervision; the need to respect and protect patient autonomy; the role of non-human therapy; transparency in the use of algorithms; and specific concerns regarding long-term effects of these applications on understandings of illness and the human condition. Conclusions We argue that embodied AI is a promising approach across the field of mental health; however, further research is needed to address the broader ethical and societal concerns of these technologies to negotiate best research and medical practices in innovative mental health care. We conclude by indicating areas of future research and developing recommendations for high-priority areas in need of concrete ethical guidance.
Affiliation(s)
- Amelia Fiske
- Institute for History and Ethics of Medicine, Technical University of Munich School of Medicine, Technical University of Munich, Munich, Germany
- Peter Henningsen
- Department of Psychosomatic Medicine and Psychotherapy, Klinikum rechts der Isar at Technical University of Munich, Munich, Germany
- Alena Buyx
- Institute for History and Ethics of Medicine, Technical University of Munich School of Medicine, Technical University of Munich, Munich, Germany

48
Lopez-Castroman J, Moulahi B, Azé J, Bringay S, Deninotti J, Guillaume S, Baca-Garcia E. Mining social networks to improve suicide prevention: A scoping review. J Neurosci Res 2019; 98:616-625. [PMID: 30809836 DOI: 10.1002/jnr.24404] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2018] [Revised: 12/03/2018] [Accepted: 02/07/2019] [Indexed: 12/18/2022]
Abstract
Attention has been drawn to the risks of online social networks (SNs) by reports describing their use to express emotional distress and suicidal ideation or plans. On the Internet, cyberbullying, suicide pacts, Internet addiction, and "extreme" communities seem to increase suicidal behavior (SB). In this study, the scientific literature on SBs and SNs was narratively reviewed. Some authors focus on detecting at-risk populations through data mining, identification of risk factors, and web activity patterns. Others describe prevention practices on the Internet, such as websites, screening, and applications. Targeted interventions through SNs are also contemplated when suicidal ideation is present. Multiple predictive models should be defined, implemented, tested, and combined in order to deal with the risk of SB through an effective decision support system. This endeavor might require a reorganization of care for SN users presenting suicidal ideation.
Affiliation(s)
- Jorge Lopez-Castroman
- INSERM U888, La Colombière Hospital, Montpellier, France; Department of Adult Psychiatry, CHRU Nimes, Nimes, France; Departments of Psychiatry, Media and Internet, and Telecommunication and Networks, University of Montpellier UM, Montpellier, France
- Bilel Moulahi
- Departments of Psychiatry, Media and Internet, and Telecommunication and Networks, University of Montpellier UM, Montpellier, France; LIRMM UMR 5506, Montpellier, France
- Jérôme Azé
- Departments of Psychiatry, Media and Internet, and Telecommunication and Networks, University of Montpellier UM, Montpellier, France; LIRMM UMR 5506, Montpellier, France
- Sandra Bringay
- Departments of Psychiatry, Media and Internet, and Telecommunication and Networks, University of Montpellier UM, Montpellier, France; LIRMM UMR 5506, Montpellier, France; Department of Applied Mathematics and Informatics, Paul-Valery University, Montpellier, France
- Sebastien Guillaume
- INSERM U888, La Colombière Hospital, Montpellier, France; Departments of Psychiatry, Media and Internet, and Telecommunication and Networks, University of Montpellier UM, Montpellier, France; Department of Emergency Psychiatry and Post-Acute Care, Montpellier University Hospital, Montpellier, France
- Enrique Baca-Garcia
- Department of Psychiatry, Fundacion Jimenez Diaz University Hospital, Madrid, Spain; Department of Psychiatry, University Hospital Rey Juan Carlos, Mostoles, Spain; Department of Psychiatry, General Hospital of Villalba, Madrid, Spain; Department of Psychiatry, University Hospital Infanta Elena, Valdemoro, Spain; Department of Psychiatry, Madrid Autonomous University, Madrid, Spain; CIBERSAM (Centro de Investigacion en Salud Mental), Carlos III Institute of Health, Madrid, Spain; Universidad Catolica del Maule, Talca, Chile

49
Carson NJ, Mullin B, Sanchez MJ, Lu F, Yang K, Menezes M, Cook BL. Identification of suicidal behavior among psychiatrically hospitalized adolescents using natural language processing and machine learning of electronic health records. PLoS One 2019; 14:e0211116. [PMID: 30779800 PMCID: PMC6380543 DOI: 10.1371/journal.pone.0211116] [Citation(s) in RCA: 48] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2018] [Accepted: 01/09/2019] [Indexed: 01/01/2023] Open
Abstract
Objective The rapid proliferation of machine learning research using electronic health records to classify healthcare outcomes offers an opportunity to address the pressing public health problem of adolescent suicidal behavior. We describe the development and evaluation of a machine learning algorithm using natural language processing of electronic health records to identify suicidal behavior among psychiatrically hospitalized adolescents. Methods Adolescents hospitalized on a psychiatric inpatient unit in a community health system in the northeastern United States were surveyed for history of suicide attempt in the past 12 months. A total of 73 respondents had electronic health records available prior to the index psychiatric admission. Unstructured clinical notes were downloaded from the year preceding the index inpatient admission. Natural language processing identified phrases from the notes associated with the suicide attempt outcome. We enriched this group of phrases with a clinically focused list of terms representing known risk and protective factors for suicide attempt in adolescents. We then applied the random forest machine learning algorithm to develop a classification model. The model performance was evaluated using sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy. Results The final model had a sensitivity of 0.83, specificity of 0.22, AUC of 0.68, a PPV of 0.42, NPV of 0.67, and an accuracy of 0.47. The terms most highly associated with suicide attempt clustered around terms related to suicide, family members, psychiatric disorders, and psychotropic medications. Conclusion This analysis demonstrates modest success of a natural language processing and machine learning approach to identifying suicide attempt among a small sample of hospitalized adolescents in a psychiatric setting.
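The evaluation metrics reported above all follow from a 2x2 confusion matrix. In this sketch the counts are invented (not the study's actual confusion matrix), but they are chosen so the rounded metrics coincide with the values in the abstract:

```python
def classification_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV, and accuracy from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts: 25 true positives, 35 false positives,
# 5 false negatives, 10 true negatives
m = classification_metrics(tp=25, fp=35, fn=5, tn=10)
print({k: round(v, 2) for k, v in m.items()})
```

Reading the metrics together this way makes the trade-off visible: a sensitivity of 0.83 alongside a specificity of 0.22 means the model flags most attempters but also most non-attempters, which is why the accuracy (0.47) is low despite the high sensitivity.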
Affiliation(s)
- Nicholas J Carson
- Health Equity Research Lab, Cambridge Health Alliance, Cambridge, MA, United States of America; Department of Psychiatry, Harvard Medical School, Boston, MA, United States of America
- Brian Mullin
- Health Equity Research Lab, Cambridge Health Alliance, Cambridge, MA, United States of America
- Maria Jose Sanchez
- Health Equity Research Lab, Cambridge Health Alliance, Cambridge, MA, United States of America; Prevention and Community Health Department, Milken School of Public Health, George Washington University, Washington, D.C., United States of America
- Frederick Lu
- Health Equity Research Lab, Cambridge Health Alliance, Cambridge, MA, United States of America
- Kelly Yang
- Health Equity Research Lab, Cambridge Health Alliance, Cambridge, MA, United States of America; Department of Psychiatry, Albert Einstein College of Medicine, Bronx, NY, United States of America
- Michelle Menezes
- Health Equity Research Lab, Cambridge Health Alliance, Cambridge, MA, United States of America; University of Virginia, Charlottesville, VA, United States of America
- Benjamin Lê Cook
- Health Equity Research Lab, Cambridge Health Alliance, Cambridge, MA, United States of America; Department of Psychiatry, Harvard Medical School, Boston, MA, United States of America

50
Burke TA, Ammerman BA, Jacobucci R. The use of machine learning in the study of suicidal and non-suicidal self-injurious thoughts and behaviors: A systematic review. J Affect Disord 2019; 245:869-884. [PMID: 30699872 DOI: 10.1016/j.jad.2018.11.073] [Citation(s) in RCA: 95] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/10/2018] [Revised: 10/20/2018] [Accepted: 11/11/2018] [Indexed: 12/23/2022]
Abstract
BACKGROUND Machine learning techniques offer promise to improve suicide risk prediction. In the current systematic review, we aimed to review the existing literature on the application of machine learning techniques to predict self-injurious thoughts and behaviors (SITBs). METHOD We systematically searched PsycINFO, PsycARTICLES, ERIC, CINAHL, and MEDLINE for articles published through February 2018. RESULTS Thirty-five articles met criteria to be included in the review. Included articles were reviewed by outcome: suicide death, suicide attempt, suicide plan, suicidal ideation, suicide risk, and non-suicidal self-injury. We observed three general aims in the use of SITB-focused machine learning analyses: (1) improving prediction accuracy, (2) identifying important model indicators (i.e., variable selection) and indicator interactions, and (3) modeling underlying subgroups. For studies with the aim of boosting predictive accuracy, we observed greater prediction accuracy of SITBs than in previous studies using traditional statistical methods. Studies using machine learning for variable selection purposes have both replicated findings of well-known SITB risk factors and identified novel variables that may augment model performance. Finally, some of these studies have allowed for subgroup identification, which in turn has helped to inform clinical cutoffs. LIMITATIONS Limitations of the current review include the relatively small number of included papers, inconsistent reporting procedures resulting in an inability to compare model accuracy across studies, and lack of model validation on external samples. CONCLUSIONS We concluded that leveraging machine learning techniques to further predictive accuracy and identify novel indicators will aid in the prediction and prevention of suicide.
Affiliation(s)
- Taylor A Burke
- Temple University, Department of Psychology, Philadelphia, PA, USA
- Brooke A Ammerman
- University of Notre Dame, Department of Psychology, Notre Dame, IN, USA
- Ross Jacobucci
- University of Notre Dame, Department of Psychology, Notre Dame, IN, USA