1
Perfalk E, Bernstorff M, Danielsen AA, Østergaard SD. Patient trust in the use of machine learning-based clinical decision support systems in psychiatric services: A randomized survey experiment. Eur Psychiatry 2024; 67:e72. PMID: 39450771. DOI: 10.1192/j.eurpsy.2024.1790.
Abstract
BACKGROUND Clinical decision support systems (CDSS) based on machine-learning (ML) models are emerging within psychiatry. If patients do not trust this technology, its implementation may disrupt the patient-clinician relationship. Therefore, the aim was to examine whether receiving basic information about ML-based CDSS increased trust in them. METHODS We conducted an online randomized survey experiment in the Psychiatric Services of the Central Denmark Region. The participating patients were randomized into one of three arms: Intervention = information on clinical decision-making supported by an ML model; Active control = information on a standard clinical decision process; and Blank control = no information. The participants were unaware of the experiment. Subsequently, participants were asked about different aspects of trust and distrust regarding ML-based CDSS. The effect of the intervention was assessed by comparing scores of trust and distrust between the allocation arms. RESULTS Out of 5800 invitees, 992 completed the survey experiment. The intervention increased trust in ML-based CDSS when compared to the active control (mean increase in trust: 5% [95% CI: 1%; 9%], p = 0.0096) and the blank control arm (mean increase in trust: 4% [1%; 8%], p = 0.015). Similarly, the intervention reduced distrust in ML-based CDSS when compared to the active control (mean decrease in distrust: -3% [-1%; -5%], p = 0.021) and the blank control arm (mean decrease in distrust: -4% [-1%; -8%], p = 0.022). No statistically significant differences were observed between the active and the blank control arms. CONCLUSIONS Receiving basic information on ML-based CDSS in hospital psychiatry may increase patient trust in such systems.
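A minimal sketch of the kind of between-arm comparison reported above (difference in mean trust scores with a 95% CI and p-value), assuming a 0-100 trust scale, simulated data, and a Welch t-test; this is illustrative only, not the authors' analysis code or data.

```python
# Compare mean trust scores between two randomized arms with a 95% CI
# and a Welch t-test. All numbers here are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
intervention = rng.normal(62, 20, 330)    # hypothetical trust scores (0-100)
active_control = rng.normal(57, 20, 330)

diff = intervention.mean() - active_control.mean()
se = np.sqrt(intervention.var(ddof=1) / intervention.size
             + active_control.var(ddof=1) / active_control.size)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
t_stat, p_value = stats.ttest_ind(intervention, active_control, equal_var=False)
print(f"mean difference = {diff:.1f} [95% CI: {lo:.1f}; {hi:.1f}], p = {p_value:.4f}")
```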
Affiliation(s)
- Erik Perfalk
- Department of Affective Disorders, Aarhus University Hospital - Psychiatry, Aarhus, Denmark
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Martin Bernstorff
- Department of Affective Disorders, Aarhus University Hospital - Psychiatry, Aarhus, Denmark
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Andreas Aalkjær Danielsen
- Department of Affective Disorders, Aarhus University Hospital - Psychiatry, Aarhus, Denmark
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Søren Dinesen Østergaard
- Department of Affective Disorders, Aarhus University Hospital - Psychiatry, Aarhus, Denmark
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
2
Adams R, Haroz EE, Rebman P, Suttle R, Grosvenor L, Bajaj M, Dayal RR, Maggio D, Kettering CL, Goklish N. Developing a suicide risk model for use in the Indian Health Service. NPJ Ment Health Res 2024; 3:47. PMID: 39414996. PMCID: PMC11484872. DOI: 10.1038/s44184-024-00088-5.
Abstract
We developed and evaluated an electronic health record (EHR)-based model for suicide risk specific to an American Indian patient population. Using EHR data for all patients over 18 with a visit between 1/1/2017 and 10/2/2021, we developed a model for the risk of a suicide attempt or death in the 90 days following a visit. Features included demographics, medications, diagnoses, and scores from relevant screening tools. We compared the predictive performance of logistic regression and random forest models against existing suicide screening, which was augmented to include the history of previous attempts or ideation. During the study, 16,835 patients had 331,588 visits, with 490 attempts and 37 deaths by suicide. The logistic regression and random forest models (area under the ROC curve [AUROC] 0.83 [0.80-0.86] for both models) performed better than enhanced screening (AUROC 0.64 [0.61-0.67]). These results suggest that an EHR-based suicide risk model can add value to existing practices at Indian Health Service clinics.
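The model comparison described above can be sketched as follows: fit logistic regression and random forest classifiers and score each by AUROC on held-out data. The synthetic features, class imbalance, and hyperparameters below are assumptions for illustration, not the study's pipeline.

```python
# Fit two classifiers on a rare-outcome dataset and compare held-out AUROC.
# Synthetic data stands in for EHR-derived features and suicide-risk labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, n_features=40,
                           weights=[0.995], random_state=0)  # ~0.5% positives
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(random_state=0))]:
    model.fit(X_tr, y_tr)
    auroc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUROC = {auroc:.2f}")
```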
Affiliation(s)
- Roy Adams
- Department of Psychiatry and Behavioral Sciences, Johns Hopkins School of Medicine, 1800 Orleans St., Baltimore, MD, 21287, USA
- Emily E Haroz
- Center for Indigenous Health, Department of International Health, Johns Hopkins Bloomberg School of Public Health, 415 N. Washington St., Baltimore, MD, 21205, USA
- Department of Mental Health, Johns Hopkins Bloomberg School of Public Health, 615 N. Wolfe St., Baltimore, MD, 21205, USA
- Paul Rebman
- Department of Mental Health, Johns Hopkins Bloomberg School of Public Health, 615 N. Wolfe St., Baltimore, MD, 21205, USA
- Rose Suttle
- Center for Indigenous Health, Department of International Health, Johns Hopkins Bloomberg School of Public Health, 415 N. Washington St., Baltimore, MD, 21205, USA
- Luke Grosvenor
- Division of Research, Kaiser Permanente Northern California, 4480 Hacienda Dr, Pleasanton, CA, 94588, USA
- Mira Bajaj
- Mass General Brigham McLean, Harvard Medical School, 115 Mill St., Belmont, MA, 02478, USA
- Rohan R Dayal
- Center for Indigenous Health, Department of International Health, Johns Hopkins Bloomberg School of Public Health, 415 N. Washington St., Baltimore, MD, 21205, USA
- Dominick Maggio
- Whiteriver Indian Hospital, 200 W Hospital Dr, Whiteriver, Arizona, USA
- Novalene Goklish
- Center for Indigenous Health, Department of International Health, Johns Hopkins Bloomberg School of Public Health, 415 N. Washington St., Baltimore, MD, 21205, USA
3
Benda N, Desai P, Reza Z, Zheng A, Kumar S, Harkins S, Hermann A, Zhang Y, Joly R, Kim J, Pathak J, Reading Turchioe M. Patient Perspectives on AI for Mental Health Care: Cross-Sectional Survey Study. JMIR Ment Health 2024; 11:e58462. PMID: 39293056. PMCID: PMC11447436. DOI: 10.2196/58462.
Abstract
BACKGROUND The application of artificial intelligence (AI) to health and health care is rapidly increasing. Several studies have assessed the attitudes of health professionals, but far fewer studies have explored the perspectives of patients or the general public. Studies investigating patient perspectives have focused on somatic issues, including those related to radiology, perinatal health, and general applications. Patient feedback has been elicited in the development of specific mental health care solutions, but broader perspectives toward AI for mental health care have been underexplored. OBJECTIVE This study aims to understand public perceptions regarding potential benefits of AI, concerns about AI, comfort with AI accomplishing various tasks, and values related to AI, all pertaining to mental health care. METHODS We conducted a 1-time cross-sectional survey with a nationally representative sample of 500 US-based adults. Participants provided structured responses on their perceived benefits, concerns, comfort, and values regarding AI for mental health care. They could also add free-text responses to elaborate on their concerns and values. RESULTS A plurality of participants (245/497, 49.3%) believed AI may be beneficial for mental health care, but this perspective differed based on sociodemographic variables (all P<.05). Specifically, Black participants (odds ratio [OR] 1.76, 95% CI 1.03-3.05) and those with lower health literacy (OR 2.16, 95% CI 1.29-3.78) perceived AI to be more beneficial, and women (OR 0.68, 95% CI 0.46-0.99) perceived AI to be less beneficial. Participants endorsed concerns about accuracy, possible unintended consequences such as misdiagnosis, the confidentiality of their information, and the loss of connection with their health professional when AI is used for mental health care. A majority of participants (402/500, 80.4%) valued being able to understand individual factors driving their risk, confidentiality, and autonomy as they pertained to the use of AI for their mental health. When asked who was responsible for the misdiagnosis of mental health conditions using AI, 81.6% (408/500) of participants held the health professional responsible. Qualitative results revealed similar concerns related to the accuracy of AI and how its use may impact the confidentiality of patients' information. CONCLUSIONS Future work involving the use of AI for mental health care should investigate strategies for conveying the level of AI's accuracy, factors that drive patients' mental health risks, and how data are used confidentially so that patients can determine with their health professionals when AI may be beneficial. It will also be important in a mental health care context to ensure the patient-health professional relationship is preserved when AI is used.
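The odds ratios reported above (e.g., OR 1.76 for Black participants) are consistent with a logistic regression of perceived benefit on respondent characteristics. A minimal sketch follows; the data and the variable names (benefit, black, low_health_literacy, woman) are invented for illustration and do not reproduce the study's analysis.

```python
# Logistic regression of a binary "AI is beneficial" response on respondent
# characteristics; exponentiated coefficients give odds ratios with 95% CIs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "benefit": rng.integers(0, 2, n),             # 1 = views AI as beneficial
    "black": rng.integers(0, 2, n),
    "low_health_literacy": rng.integers(0, 2, n),
    "woman": rng.integers(0, 2, n),
})

fit = smf.logit("benefit ~ black + low_health_literacy + woman", data=df).fit(disp=0)
table = pd.concat([np.exp(fit.params), np.exp(fit.conf_int())], axis=1)
table.columns = ["OR", "2.5%", "97.5%"]
print(table)
```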
Affiliation(s)
- Natalie Benda
- School of Nursing, Columbia University, New York, NY, United States
- Pooja Desai
- Department of Biomedical Informatics, Columbia University, New York, NY, United States
- Zayan Reza
- Mailman School of Public Health, Columbia University, New York, NY, United States
- Anna Zheng
- Stuyvesant High School, New York, NY, United States
- Shiveen Kumar
- College of Agriculture and Life Sciences, Cornell University, Ithaca, NY, United States
- Sarah Harkins
- School of Nursing, Columbia University, New York, NY, United States
- Alison Hermann
- Department of Psychiatry, Weill Cornell Medicine, New York, NY, United States
- Yiye Zhang
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
- Rochelle Joly
- Department of Obstetrics and Gynecology, Weill Cornell Medicine, New York, NY, United States
- Jessica Kim
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
- Jyotishman Pathak
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
4
Antor E, Owusu-Marfo J, Kissi J. Usability evaluation of electronic health records at the trauma and emergency directorates at the Komfo Anokye teaching hospital in the Ashanti region of Ghana. BMC Med Inform Decis Mak 2024; 24:231. PMID: 39169338. PMCID: PMC11340109. DOI: 10.1186/s12911-024-02636-7.
Abstract
BACKGROUND Electronic health records (EHRs) are currently gaining popularity in emerging economies because they provide options for exchanging patient data, increasing operational efficiency, and improving patient outcomes. This study examines how service providers at Ghana's Komfo Anokye Teaching Hospital adopt and use an electronic health records (EHRs) system. The emphasis is on identifying factors impacting adoption and the problems that healthcare personnel encounter in efficiently using the EHRs system. METHOD A quantitative cross-sectional technique was utilised to collect data from 234 trauma and emergency department staff members via standardised questionnaires. The participants were selected using the purposive sampling method. The Pearson chi-square test was used to examine the relationship between respondents' acceptability and use of EHRs. RESULTS The study discovered that a sizable number of respondents (86.8%) embraced and actively used the EHRs system. However, several issues were noted, including insufficient system training and malfunctions (35.9%), power outages (18.8%), privacy concerns (9.4%), and insufficient maintenance (4.7%). The respondents' comfort in using the electronic health record system (χ2 = 11.30, p = 0.001), system dependability (χ2 = 30.74, p = 0.0001), and the EHRs' ability to reduce patient waiting time (χ2 = 14.39, p = 0.0001) were all strongly associated with their degree of satisfaction with the system. Furthermore, respondents' views that EHRs improve patient care (χ2 = 75.59, p = 0.0001) and support income generation (χ2 = 8.48, p = 0.004) were associated with the acceptability of the electronic health records system. CONCLUSION The study revealed that comfort, reliability, and improved care quality all had an impact on the EHRs system's acceptability and utilization. Challenges, including equipment malfunctions and power outages, were found. Continuous professional training was emphasized as a means of increasing employee confidence, as was the construction of a power backup system to combat disruptions. The need to protect patient data privacy was also highlighted. In conclusion, this study highlights the relevance of EHRs system adoption and usability in healthcare. While the benefits are obvious, addressing obstacles through training, technical support, and infrastructure improvements is critical for increasing system effectiveness.
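For readers unfamiliar with the Pearson chi-square test used above, a minimal sketch follows; the contingency-table counts are invented and only illustrate the mechanics, not the study's data.

```python
# Pearson chi-square test of association between comfort with the EHRs system
# (rows) and satisfaction with it (columns). Counts are hypothetical.
from scipy.stats import chi2_contingency

table = [[150, 30],   # comfortable:     satisfied / not satisfied
         [25, 29]]    # not comfortable: satisfied / not satisfied
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```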
Affiliation(s)
- Edith Antor
- Komfo Anokye Teaching Hospital, Kumasi, Ashanti Region, Ghana
- Joseph Owusu-Marfo
- Department of Epidemiology, Biostatistics and Disease Control, School of Public Health, University for Development Studies (UDS), P. O. Box TL1350, Tamale, Northern Region, Ghana
- Jonathan Kissi
- Department of Health Information Management, School of Allied Health Sciences, College of Health and Allied Sciences, University of Cape-Coast, Cape-Coast, Central Region, Ghana
5
Frost EK, Bosward R, Aquino YSJ, Braunack-Mayer A, Carter SM. Facilitating public involvement in research about healthcare AI: A scoping review of empirical methods. Int J Med Inform 2024; 186:105417. PMID: 38564959. DOI: 10.1016/j.ijmedinf.2024.105417.
Abstract
OBJECTIVE With the recent increase in research into public views on healthcare artificial intelligence (HCAI), the objective of this review is to examine the methods of empirical studies on public views on HCAI. We map how studies provided participants with information about HCAI, and we examine the extent to which studies framed publics as active contributors to HCAI governance. MATERIALS AND METHODS We searched 5 academic databases and Google Advanced for empirical studies investigating public views on HCAI. We extracted information including study aims, research instruments, and recommendations. RESULTS Sixty-two studies were included. Most were quantitative (N = 42). Most (N = 47) reported providing participants with background information about HCAI. Despite this, studies often reported participants' lack of prior knowledge about HCAI as a limitation. Over three quarters (N = 48) of the studies made recommendations that envisaged public views being used to guide governance of AI. DISCUSSION Provision of background information is an important component of facilitating research with publics on HCAI. The high proportion of studies reporting participants' lack of knowledge about HCAI as a limitation reflects the need for more guidance on how information should be presented. A minority of studies adopted technocratic positions that construed publics as passive beneficiaries of AI, rather than as active stakeholders in HCAI design and implementation. CONCLUSION This review draws attention to how public roles in HCAI governance are constructed in empirical studies. To facilitate active participation, we recommend that research with publics on HCAI consider methodological designs that expose participants to diverse information sources.
Affiliation(s)
- Emma Kellie Frost
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia
- Rebecca Bosward
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia
- Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia
- Annette Braunack-Mayer
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia
- Stacy M Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, Faculty of the Arts, Social Sciences, and Humanities, University of Wollongong, Australia
6
Davis M, Dysart GC, Doupnik SK, Hamm ME, Schwartz KTG, George-Milford B, Ryan ND, Melhem NM, Stepp SD, Brent DA, Young JF. Adolescent, Parent, and Provider Perceptions of a Predictive Algorithm to Identify Adolescent Suicide Risk in Primary Care. Acad Pediatr 2024; 24:645-653. PMID: 38190885. PMCID: PMC11056301. DOI: 10.1016/j.acap.2023.12.015.
Abstract
OBJECTIVE To understand adolescent, parent, and provider perceptions of a machine learning algorithm for detecting adolescent suicide risk prior to its implementation in primary care. METHODS We conducted semi-structured, qualitative interviews with adolescents (n = 9), parents (n = 12), and providers (n = 10; a mixture of behavioral health and primary care providers) across two major health systems. Interviews were audio recorded and transcribed, with analyses supported by NVivo. A codebook was developed combining codes derived inductively from interview transcripts and deductively from implementation science frameworks for content analysis. RESULTS Reactions to the algorithm were mixed. While many participants expressed privacy concerns, they believed the algorithm could be clinically useful for identifying adolescents at risk for suicide and facilitating follow-up. Parents' past experiences with their adolescents' suicidal thoughts and behaviors contributed to their openness to the algorithm. Results also aligned with several key Consolidated Framework for Implementation Research domains. For example, providers mentioned barriers inherent to the primary care setting, such as time and resource constraints, likely to impact algorithm implementation. Participants also cited a climate of mistrust of science and health care as a potential barrier. CONCLUSIONS Findings shed light on factors that warrant consideration to promote successful implementation of suicide predictive algorithms in pediatric primary care. By attending to perspectives of potential end users prior to the development and testing of the algorithm, we can ensure that the risk prediction methods will be well-suited to the providers who would be interacting with them and the families who could benefit.
Affiliation(s)
- Molly Davis
- Department of Child and Adolescent Psychiatry and Behavioral Sciences (M Davis, GC Dysart, KTG Schwartz, and JF Young), Children's Hospital of Philadelphia, Philadelphia, Pa; PolicyLab (M Davis, GC Dysart, SK Doupnik, KTG Schwartz, and JF Young), Children's Hospital of Philadelphia, Philadelphia, Pa; Clinical Futures (M Davis and SK Doupnik), Children's Hospital of Philadelphia, Philadelphia, Pa; Department of Psychiatry (M Davis and JF Young), University of Pennsylvania Perelman School of Medicine, Philadelphia, Pa; Penn Implementation Science Center at the Leonard Davis Institute of Health Economics (PISCE@LDI) (M Davis and SK Doupnik), University of Pennsylvania, Philadelphia, Pa.
- Gillian C Dysart
- Department of Child and Adolescent Psychiatry and Behavioral Sciences (M Davis, GC Dysart, KTG Schwartz, and JF Young), Children's Hospital of Philadelphia, Philadelphia, Pa; PolicyLab (M Davis, GC Dysart, SK Doupnik, KTG Schwartz, and JF Young), Children's Hospital of Philadelphia, Philadelphia, Pa
- Stephanie K Doupnik
- PolicyLab (M Davis, GC Dysart, SK Doupnik, KTG Schwartz, and JF Young), Children's Hospital of Philadelphia, Philadelphia, Pa; Clinical Futures (M Davis and SK Doupnik), Children's Hospital of Philadelphia, Philadelphia, Pa; Penn Implementation Science Center at the Leonard Davis Institute of Health Economics (PISCE@LDI) (M Davis and SK Doupnik), University of Pennsylvania, Philadelphia, Pa; Division of General Pediatrics (SK Doupnik), Children's Hospital of Philadelphia, Philadelphia, Pa; Department of Pediatrics (SK Doupnik), University of Pennsylvania Perelman School of Medicine, Philadelphia, Pa
- Megan E Hamm
- Department of Medicine (ME Hamm), University of Pittsburgh, Pittsburgh, Pa
- Karen T G Schwartz
- Department of Child and Adolescent Psychiatry and Behavioral Sciences (M Davis, GC Dysart, KTG Schwartz, and JF Young), Children's Hospital of Philadelphia, Philadelphia, Pa; PolicyLab (M Davis, GC Dysart, SK Doupnik, KTG Schwartz, and JF Young), Children's Hospital of Philadelphia, Philadelphia, Pa
- Brandie George-Milford
- University of Pittsburgh Medical Center Western Psychiatric Hospital (B George-Milford and DA Brent), Pittsburgh, Pa
- Neal D Ryan
- Department of Psychiatry (ND Ryan, NM Melhem, SD Stepp, and DA Brent), University of Pittsburgh School of Medicine, Pittsburgh, Pa; Clinical and Translational Science Institute (ND Ryan), University of Pittsburgh, Pittsburgh, Pa
- Nadine M Melhem
- Department of Psychiatry (ND Ryan, NM Melhem, SD Stepp, and DA Brent), University of Pittsburgh School of Medicine, Pittsburgh, Pa
- Stephanie D Stepp
- Department of Psychiatry (ND Ryan, NM Melhem, SD Stepp, and DA Brent), University of Pittsburgh School of Medicine, Pittsburgh, Pa
- David A Brent
- University of Pittsburgh Medical Center Western Psychiatric Hospital (B George-Milford and DA Brent), Pittsburgh, Pa; Department of Psychiatry (ND Ryan, NM Melhem, SD Stepp, and DA Brent), University of Pittsburgh School of Medicine, Pittsburgh, Pa
- Jami F Young
- Department of Child and Adolescent Psychiatry and Behavioral Sciences (M Davis, GC Dysart, KTG Schwartz, and JF Young), Children's Hospital of Philadelphia, Philadelphia, Pa; PolicyLab (M Davis, GC Dysart, SK Doupnik, KTG Schwartz, and JF Young), Children's Hospital of Philadelphia, Philadelphia, Pa; Department of Psychiatry (M Davis and JF Young), University of Pennsylvania Perelman School of Medicine, Philadelphia, Pa
7
Moy S, Irannejad M, Manning SJ, Farahani M, Ahmed Y, Gao E, Prabhune R, Lorenz S, Mirza R, Klinger C. Patient Perspectives on the Use of Artificial Intelligence in Health Care: A Scoping Review. J Patient Cent Res Rev 2024; 11:51-62. PMID: 38596349. PMCID: PMC11000703. DOI: 10.17294/2330-0698.2029.
Abstract
Purpose Artificial intelligence (AI) technology is being rapidly adopted into many different branches of medicine. Although research has started to highlight the impact of AI on health care, the focus on patient perspectives of AI is scarce. This scoping review aimed to explore the literature on adult patients' perspectives on the use of an array of AI technologies in the health care setting, to inform their design and deployment. Methods This scoping review followed Arksey and O'Malley's framework and the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR). To evaluate patient perspectives, we conducted a comprehensive literature search using eight interdisciplinary electronic databases, including grey literature. Articles published from 2015 to 2022 that focused on patient views regarding AI technology in health care were included. Thematic analysis was performed on the extracted articles. Results Of the 10,571 imported studies, 37 articles were included and extracted. From the 33 peer-reviewed and 4 grey literature articles, the following themes on AI emerged: (i) Patient attitudes, (ii) Influences on patient attitudes, (iii) Considerations for design, and (iv) Considerations for use. Conclusions Patients are key stakeholders essential to the uptake of AI in health care. The findings indicate that patients' needs and expectations are not fully considered in the application of AI in health care. Therefore, there is a need for patient voices in the development of AI in health care.
Affiliation(s)
- Sally Moy
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Mona Irannejad
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Mehrdad Farahani
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Yomna Ahmed
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Ellis Gao
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Radhika Prabhune
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Suzan Lorenz
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Raza Mirza
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Christopher Klinger
- Translational Research Program, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- National Initiative for the Care of the Elderly, Toronto, Canada
8
Giddings R, Joseph A, Callender T, Janes SM, van der Schaar M, Sheringham J, Navani N. Factors influencing clinician and patient interaction with machine learning-based risk prediction models: a systematic review. Lancet Digit Health 2024; 6:e131-e144. PMID: 38278615. DOI: 10.1016/s2589-7500(23)00241-8.
Abstract
Machine learning (ML)-based risk prediction models hold the potential to support the health-care setting in several ways; however, use of such models is scarce. We aimed to review health-care professional (HCP) and patient perceptions of ML risk prediction models in the published literature, to inform future risk prediction model development. Following database and citation searches, we identified 41 articles suitable for inclusion. Article quality varied, with qualitative studies performing strongest. Overall, perceptions of ML risk prediction models were positive. HCPs and patients considered that models have the potential to add benefit in the health-care setting. However, reservations remain; for example, concerns regarding data quality for model development and fears of unintended consequences following ML model use. We identified that public views regarding these models might be more negative than those of HCPs and that concerns (eg, extra demands on workload) were not always borne out in practice. Conclusions are tempered by the low number of patient and public studies, the absence of participant ethnic diversity, and variation in article quality. We identified gaps in knowledge (particularly views from under-represented groups) and in optimum methods for model explanation and alerts, which require future research.
Affiliation(s)
- Rebecca Giddings
- Lungs for Living Research Centre, UCL Respiratory, University College London, London, UK
- Anabel Joseph
- Lungs for Living Research Centre, UCL Respiratory, University College London, London, UK
- Thomas Callender
- Lungs for Living Research Centre, UCL Respiratory, University College London, London, UK
- Sam M Janes
- Lungs for Living Research Centre, UCL Respiratory, University College London, London, UK
- Mihaela van der Schaar
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, UK; The Alan Turing Institute, London, UK
- Jessica Sheringham
- Department of Applied Health Research, University College London, London, UK
- Neal Navani
- Lungs for Living Research Centre, UCL Respiratory, University College London, London, UK
9
Vo V, Chen G, Aquino YSJ, Carter SM, Do QN, Woode ME. Multi-stakeholder preferences for the use of artificial intelligence in healthcare: A systematic review and thematic analysis. Soc Sci Med 2023; 338:116357. PMID: 37949020. DOI: 10.1016/j.socscimed.2023.116357.
Abstract
INTRODUCTION Despite the proliferation of Artificial Intelligence (AI) technology over the last decade, clinician, patient, and public perceptions of its use in healthcare raise a number of ethical, legal and social questions. We systematically review the literature on attitudes towards the use of AI in healthcare from the perspectives of patients, the general public and health professionals to understand these issues from multiple perspectives. METHODOLOGY A search for original research articles using qualitative, quantitative, and mixed methods published between 1 Jan 2001 and 24 Aug 2021 was conducted on six bibliographic databases. Data were extracted and classified into different themes representing views on: (i) knowledge and familiarity of AI, (ii) AI benefits, risks, and challenges, (iii) AI acceptability, (iv) AI development, (v) AI implementation, (vi) AI regulations, and (vii) the Human-AI relationship. RESULTS The final search identified 7,490 records, of which 105 publications were selected based on predefined inclusion/exclusion criteria. While the majority of patients, the general public and health professionals generally had a positive attitude towards the use of AI in healthcare, all groups indicated some perceived risks and challenges. Commonly perceived risks included data privacy; reduced professional autonomy; algorithmic bias; healthcare inequities; and greater burnout from having to acquire AI-related skills. While patients had mixed opinions on whether healthcare workers would suffer job losses due to the use of AI, health professionals strongly indicated that AI would not be able to completely replace them in their professions. Both groups shared similar doubts about AI's ability to deliver empathic care. The need for AI validation, transparency, explainability, and patient and clinical involvement in the development of AI was emphasised. To help successfully implement AI in health care, most participants envisioned that an investment in training and education campaigns was necessary, especially for health professionals. Lack of familiarity, lack of trust, and regulatory uncertainties were identified as factors hindering AI implementation. Regarding AI regulations, key themes included data access and data privacy. While the general public and patients exhibited a willingness to share anonymised data for AI development, there remained concerns about sharing data with insurance or technology companies. One key domain under this theme was the question of who should be held accountable in the case of adverse events arising from using AI. CONCLUSIONS While overall positivity persists in attitudes and preferences toward AI use in healthcare, some prevalent problems require more attention. There is a need to go beyond addressing algorithm-related issues to look at the translation of legislation and guidelines into practice to ensure fairness, accountability, transparency, and ethics in AI.
Affiliation(s)
- Vinh Vo
- Centre for Health Economics, Monash University, Australia
- Gang Chen
- Centre for Health Economics, Monash University, Australia
- Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
- Stacy M Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
- Quynh Nga Do
- Department of Economics, Monash University, Australia
- Maame Esi Woode
- Centre for Health Economics, Monash University, Australia; Monash Data Futures Research Institute, Australia
10
Ho V, Brown Johnson C, Ghanzouri I, Amal S, Asch S, Ross E. Physician- and Patient-Elicited Barriers and Facilitators to Implementation of a Machine Learning-Based Screening Tool for Peripheral Arterial Disease: Preimplementation Study With Physician and Patient Stakeholders. JMIR Cardio 2023; 7:e44732. PMID: 37930755. PMCID: PMC10660241. DOI: 10.2196/44732.
Abstract
BACKGROUND Peripheral arterial disease (PAD) is underdiagnosed, partially due to a high prevalence of atypical symptoms and a lack of physician and patient awareness. Implementing clinical decision support tools powered by machine learning algorithms may help physicians identify high-risk patients for diagnostic workup. OBJECTIVE This study aims to evaluate barriers and facilitators to the implementation of a novel machine learning-based screening tool for PAD among physician and patient stakeholders using the Consolidated Framework for Implementation Research (CFIR). METHODS We performed semistructured interviews with physicians and patients from the Stanford University Department of Primary Care and Population Health, Division of Cardiology, and Division of Vascular Medicine. Participants answered questions regarding their perceptions toward machine learning and clinical decision support for PAD detection. Rapid thematic analysis was performed using templates incorporating codes from CFIR constructs. RESULTS A total of 12 physicians (6 primary care physicians and 6 cardiovascular specialists) and 14 patients were interviewed. Barriers to implementation arose from 6 CFIR constructs: complexity, evidence strength and quality, relative priority, external policies and incentives, knowledge and beliefs about intervention, and individual identification with the organization. Facilitators arose from 5 CFIR constructs: intervention source, relative advantage, learning climate, patient needs and resources, and knowledge and beliefs about intervention. Physicians felt that a machine learning-powered diagnostic tool for PAD would improve patient care but cited limited time and authority in asking patients to undergo additional screening procedures. Patients were interested in having their physicians use this tool but raised concerns about such technologies replacing human decision-making. CONCLUSIONS Patient- and physician-reported barriers toward the implementation of a machine learning-powered PAD diagnostic tool followed four interdependent themes: (1) low familiarity or urgency in detecting PAD; (2) concerns regarding the reliability of machine learning; (3) differential perceptions of responsibility for PAD care among primary care versus specialty physicians; and (4) patient preference for physicians to remain primary interpreters of health care data. Facilitators followed two interdependent themes: (1) enthusiasm for clinical use of the predictive model and (2) willingness to incorporate machine learning into clinical care. Implementation of machine learning-powered diagnostic tools for PAD should leverage provider support while simultaneously educating stakeholders on the importance of early PAD diagnosis. High predictive validity is necessary for machine learning models but not sufficient for implementation.
Affiliation(s)
- Vy Ho
- Division of Vascular Surgery, Department of Surgery, Stanford University School of Medicine, Stanford, CA, United States
- Cati Brown Johnson
- Division of Primary Care and Population Health, Department of Medicine, Stanford University School of Medicine, Stanford, CA, United States
- Ilies Ghanzouri
- Division of Vascular Surgery, Department of Surgery, Stanford University School of Medicine, Stanford, CA, United States
- Saeed Amal
- College of Engineering, Northeastern University, Boston, MA, United States
- Steven Asch
- Division of Primary Care and Population Health, Department of Medicine, Stanford University School of Medicine, Stanford, CA, United States
- Center for Innovation to Implementation, Veterans Affairs Palo Alto Healthcare System, Palo Alto, CA, United States
- Elsie Ross
- Division of Vascular Surgery, Department of Surgery, Stanford University School of Medicine, Stanford, CA, United States
11
Nau CL, Braciszewski JM, Rossom RC, Penfold RB, Coleman KJ, Simon GE, Hong B, Padilla A, Butler RK, Chen A, Waters HC. Assessment of Disruptive Life Events for Individuals Diagnosed With Schizophrenia or Bipolar I Disorder Using Data From a Consumer Credit Reporting Agency. JAMA Psychiatry 2023:2804639. PMID: 37163288. PMCID: PMC10173103. DOI: 10.1001/jamapsychiatry.2023.1179.
Abstract
Importance There is a dearth of population-level data on major disruptive life events (defined here as arrests by a legal authority, address changes, bankruptcy, lien, and judgment filings) for patients with bipolar I disorder (BPI) or schizophrenia, which has limited studies on mental health and treatment outcomes. Objective To conduct a population-level study on disruptive life events by using publicly available data on disruptive life events, aggregated by a consumer credit reporting agency in conjunction with electronic health record (EHR) data. Design, Setting, and Participants This study used EHR data from 2 large, integrated health care systems, Kaiser Permanente Southern California and Henry Ford Health. Cohorts of patients diagnosed from 2007 to 2019 with BPI or schizophrenia were matched 1:1 by age at analysis, age at diagnosis (if applicable), sex, race and ethnicity, and Medicaid status to (1) an active comparison group with diagnoses of major depressive disorder (MDD) and (2) a general health (GH) cohort without diagnoses of BPI, schizophrenia, or MDD. Patients with diagnoses of BPI or schizophrenia and their respective comparison cohorts were matched to public records data aggregated by a consumer credit reporting agency (98% match rate). Analysis took place between November 2020 and December 2022. Main Outcomes and Measures The differences in the occurrence of disruptive life events among patients with BPI or schizophrenia and their comparison groups. Results Of 46 167 patients, 30 008 (65%) had BPI (mean [SD] age, 42.6 [14.2] years) and 16 159 (35%) had schizophrenia (mean [SD] age, 41.4 [15.1] years). The majority of patients were White (30 167 [65%]). In addition, 18 500 patients with BPI (62%) and 6552 patients with schizophrenia (41%) were female. Patients with BPI were more likely to change addresses than patients in either comparison cohort (with the incidence ratio being as high as 1.25 [95% CI, 1.23-1.28] when compared with the GH cohort). Patients with BPI were also more likely to experience any of the financial disruptive life events (odds ratios ranging from 1.15 [95% CI, 1.07-1.24] to 1.50 [95% CI, 1.42-1.58]). The largest differences in disruptive life events were seen in arrests of patients with either BPI or schizophrenia compared with GH peers (3.27 [95% CI, 2.84-3.78] and 3.04 [95% CI, 2.57-3.59], respectively). Patients with schizophrenia had fewer address changes and were less likely to experience a financial event than their matched comparison cohorts. Conclusions and Relevance This study demonstrated that data aggregated by a consumer credit reporting agency can support population-level studies on disruptive life events among patients with BPI or schizophrenia.
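An incidence ratio like the 1.25 reported for address changes can be computed from event counts and person-time, with a Wald 95% CI on the log scale. The sketch below uses hypothetical counts and ignores the matching and covariate adjustment of the study's actual models.

```python
# Rate ratio of cohort A vs. cohort B with a log-scale Wald 95% CI.
# Event counts and person-years are hypothetical placeholders.
import math

def incidence_ratio(events_a, persontime_a, events_b, persontime_b):
    """Return (rate ratio, CI lower, CI upper) for cohort A vs. cohort B."""
    ratio = (events_a / persontime_a) / (events_b / persontime_b)
    se_log = math.sqrt(1 / events_a + 1 / events_b)
    lower = ratio * math.exp(-1.96 * se_log)
    upper = ratio * math.exp(1.96 * se_log)
    return ratio, lower, upper

ratio, lower, upper = incidence_ratio(events_a=1250, persontime_a=30000,
                                      events_b=1000, persontime_b=30000)
print(f"incidence ratio = {ratio:.2f} (95% CI, {lower:.2f}-{upper:.2f})")
```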
Affiliation(s)
- Claudia L Nau
- Department of Research and Evaluation, Kaiser Permanente Southern California, Pasadena
- Robert B Penfold
- Kaiser Permanente Washington Health Research Institute, Seattle, Washington
- Karen J Coleman
- Department of Research and Evaluation, Kaiser Permanente Southern California, Pasadena
- Gregory E Simon
- Kaiser Permanente Washington Health Research Institute, Seattle, Washington
- Benjamin Hong
- Department of Research and Evaluation, Kaiser Permanente Southern California, Pasadena
- Ariadna Padilla
- Department of Research and Evaluation, Kaiser Permanente Southern California, Pasadena
- Rebecca K Butler
- Department of Research and Evaluation, Kaiser Permanente Southern California, Pasadena
- Aiyu Chen
- Department of Research and Evaluation, Kaiser Permanente Southern California, Pasadena
- Heidi C Waters
- Global Value & Real World Evidence, Otsuka Pharmaceutical Development & Commercialization, Inc, Princeton, New Jersey
12
Yarborough BJH, Stumbo SP. A Stakeholder-Informed Ethical Framework to Guide Implementation of Suicide Risk Prediction Models Derived from Electronic Health Records. Arch Suicide Res 2023; 27:704-717. PMID: 35446244. PMCID: PMC9665102. DOI: 10.1080/13811118.2022.2064255.
Abstract
OBJECTIVE Develop a stakeholder-informed ethical framework to provide practical guidance to health systems considering implementation of suicide risk prediction models. METHODS In this multi-method study, patients and family members participating in formative focus groups (n = 4 focus groups, 23 participants), patient advisors, and a bioethics consultant collectively informed the development of a web-based survey; survey results (n = 1,357 respondents) and themes from interviews with stakeholders (patients, health system administrators, clinicians, suicide risk model developers, and a bioethicist) were used to draft the ethical framework. RESULTS Clinical, ethical, operational, and technical issues reiterated by multiple stakeholder groups and corresponding questions for risk prediction model adopters to consider prior to and during suicide risk model implementation are organized within six ethical principles in the resulting stakeholder-informed framework. Key themes include: patients' rights to informed consent and choice to conceal or reveal risk (autonomy); appropriate application of risk models, data and model limitations and consequences including ambiguous risk predictors in opaque models (explainability); selecting actionable risk thresholds (beneficence, distributive justice); access to risk information and stigma (privacy); unanticipated harms (non-maleficence); and planning for expertise and resources to continuously audit models, monitor harms, and redress grievances (stewardship). CONCLUSIONS Enthusiasm for risk prediction in the context of suicide is understandable given the escalating suicide rate in the U.S. Attention to ethical and practical concerns in advance of automated suicide risk prediction model implementation may help avoid unnecessary harms that could thwart the promise of this innovation in suicide prevention.
HIGHLIGHTS
- Patients' desire to consent/opt out of suicide risk prediction models.
- Recursive ethical questioning should occur throughout risk model implementation.
- Risk modeling resources are needed to continuously audit models and monitor harms.
13
Yarborough BJH, Stumbo SP, Schneider J, Richards JE, Hooker SA, Rossom R. Clinical implementation of suicide risk prediction models in healthcare: a qualitative study. BMC Psychiatry 2022; 22:789. PMID: 36517785. PMCID: PMC9748385. DOI: 10.1186/s12888-022-04400-5.
Abstract
BACKGROUND Suicide risk prediction models derived from electronic health records (EHR) are a novel innovation in suicide prevention but there is little evidence to guide their implementation. METHODS In this qualitative study, 30 clinicians and 10 health care administrators were interviewed from one health system anticipating implementation of an automated EHR-derived suicide risk prediction model and two health systems piloting different implementation approaches. Site-tailored interview guides focused on respondents' expectations for and experiences with suicide risk prediction models in clinical practice, and suggestions for improving implementation. Interview prompts and content analysis were guided by Consolidated Framework for Implementation Research (CFIR) constructs. RESULTS Administrators and clinicians found use of the suicide risk prediction model and the two implementation approaches acceptable. Clinicians desired opportunities for early buy-in, implementation decision-making, and feedback. They wanted to better understand how this manner of risk identification enhanced existing suicide prevention efforts. They also wanted additional training to understand how the model determined risk, particularly after patients they expected to see identified by the model were not flagged at-risk and patients they did not expect to see identified were. Clinicians were concerned about having enough suicide prevention resources for potentially increased demand and about their personal liability; they wanted clear procedures for situations when they could not reach patients or when patients remained at-risk over a sustained period. Suggestions for making risk model workflows more efficient and less burdensome included consolidating suicide risk information in a dedicated module in the EHR and populating risk assessment scores and text in clinical notes. CONCLUSION Health systems considering suicide risk model implementation should engage clinicians early in the process to ensure they understand how risk models estimate risk and add value to existing workflows, clarify clinician role expectations, and summarize risk information in a convenient place in the EHR to support high-quality patient care.
Affiliation(s)
- Bobbi Jo H. Yarborough
- Kaiser Permanente Center for Health Research, 3800 N Interstate Ave, Portland, OR 97227, USA
- Scott P. Stumbo
- Kaiser Permanente Center for Health Research, 3800 N Interstate Ave, Portland, OR 97227, USA
- Jennifer Schneider
- Kaiser Permanente Center for Health Research, 3800 N Interstate Ave, Portland, OR 97227, USA
- Julie E. Richards
- Kaiser Permanente Washington Health Research Institute, Seattle, WA, USA
- Health Services Department, University of Washington, Seattle, WA, USA
- Stephanie A. Hooker
- HealthPartners Institute, Minneapolis, MN, USA
- Rebecca Rossom
- HealthPartners Institute, Minneapolis, MN, USA
14
Boggs JM, Kafka JM. A Critical Review of Text Mining Applications for Suicide Research. Curr Epidemiol Rep 2022; 9:126-134. PMID: 35911089. PMCID: PMC9315081. DOI: 10.1007/s40471-022-00293-w.
Abstract
Purpose of Review Applying text mining to suicide research holds a great deal of promise. In this manuscript, literature from 2019 to 2021 is critically reviewed for text mining projects that use electronic health records, social media data, and death records. Recent Findings Text mining has helped identify risk factors for suicide in general and specific populations (e.g., older adults), has been combined with structured variables in EHRs to predict suicide risk, and has been used to track trends in social media suicidal discourse following population level events (e.g., COVID-19, celebrity suicides). Summary Future research should utilize text mining along with data linkage methods to capture more complete information on risk factors and outcomes across data sources (e.g., combining death records and EHRs), evaluate effectiveness of NLP-based intervention programs that use suicide risk prediction, establish standards for reporting accuracy of text mining programs to enable comparison across studies, and incorporate implementation science to understand feasibility, acceptability, and technical considerations.
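As a toy illustration of the accuracy-reporting standards the review calls for, the sketch below flags notes that mention suicide-related terms with a simple pattern and scores the flags against hand labels; the pattern and notes are invented, and the text-mining systems reviewed here are far more sophisticated.

```python
# Rule-based flagging of suicide-related mentions in clinical notes, scored
# against hand labels with precision and recall. Everything here is invented.
import re

PATTERN = re.compile(r"\b(suicid\w*|self[- ]harm\w*)\b", re.IGNORECASE)

notes = [
    ("Patient denies suicidal ideation.", 1),         # mention present
    ("Discussed sleep hygiene and diet.", 0),
    ("History of self-harm in adolescence.", 1),
    ("Patient has thoughts of ending his life.", 1),  # missed by the pattern
]
predictions = [1 if PATTERN.search(text) else 0 for text, _ in notes]
labels = [label for _, label in notes]

true_pos = sum(p == l == 1 for p, l in zip(predictions, labels))
precision = true_pos / max(sum(predictions), 1)
recall = true_pos / max(sum(labels), 1)
print(f"precision = {precision:.2f}, recall = {recall:.2f}")
```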
Affiliation(s)
- Jennifer M Boggs
- Kaiser Permanente Colorado, Institute for Health Research, Aurora, CO, USA
- Julie M Kafka
- Department of Health Behavior, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
15
Yarborough BJH, Stumbo SP, Schneider JL, Richards JE, Hooker SA, Rossom RC. Patient expectations of and experiences with a suicide risk identification algorithm in clinical practice. BMC Psychiatry 2022; 22:494. PMID: 35870919. PMCID: PMC9308306. DOI: 10.1186/s12888-022-04129-1.
Abstract
BACKGROUND Suicide risk prediction models derived from electronic health records (EHR) and insurance claims are a novel innovation in suicide prevention but patient perspectives on their use have been understudied. METHODS In this qualitative study, between March and November 2020, 62 patients were interviewed from three health systems: one anticipating implementation of an EHR-derived suicide risk prediction model and two others piloting different implementation approaches. Site-tailored interview guides focused on patients' perceptions of this technology, concerns, and preferences for and experiences with suicide risk prediction model implementation in clinical practice. A constant comparative analytic approach was used to derive themes. RESULTS Interview participants were generally supportive of suicide risk prediction models derived from EHR data. Concerns included apprehension about inducing anxiety and suicidal thoughts, or triggering coercive treatment, particularly among those who reported prior negative experiences seeking mental health care. Participants who were engaged in mental health care or case management expected to be asked about their suicide risk and largely appreciated suicide risk conversations, particularly by clinicians comfortable discussing suicidality. CONCLUSION Most patients approved of suicide risk models that use EHR data to identify patients at-risk for suicide. As health systems proceed to implement such models, patient-centered care would involve dialogue initiated by clinicians experienced with assessing suicide risk during virtual or in person care encounters. Health systems should proactively monitor for negative consequences that result from risk model implementation to protect patient trust.
Affiliation(s)
- Bobbi Jo H. Yarborough
- Kaiser Permanente Northwest Center for Health Research, 3800 N Interstate Ave, Portland, OR 97227, USA
- Scott P. Stumbo
- Kaiser Permanente Northwest Center for Health Research, 3800 N Interstate Ave, Portland, OR 97227, USA
- Jennifer L. Schneider
- Kaiser Permanente Northwest Center for Health Research, 3800 N Interstate Ave, Portland, OR 97227, USA
- Julie E. Richards
- Kaiser Permanente Washington Health Research Institute, Seattle, WA, USA
- Department of Health Systems and Population Health, University of Washington, Seattle, WA, USA
- Stephanie A. Hooker
- HealthPartners Institute, Minneapolis, MN, USA
- Rebecca C. Rossom
- HealthPartners Institute, Minneapolis, MN, USA
16
Bentley KH, Zuromski KL, Fortgang RG, Madsen EM, Kessler D, Lee H, Nock MK, Reis BY, Castro VM, Smoller JW. Implementing Machine Learning Models for Suicide Risk Prediction in Clinical Practice: Focus Group Study With Hospital Providers. JMIR Form Res 2022; 6:e30946. PMID: 35275075. PMCID: PMC8956996. DOI: 10.2196/30946.
Abstract
Background Interest in developing machine learning models that use electronic health record data to predict patients’ risk of suicidal behavior has recently proliferated. However, whether and how such models might be implemented and useful in clinical practice remain unknown. To ultimately make automated suicide risk–prediction models useful in practice, and thus better prevent patient suicides, it is critical to partner with key stakeholders, including the frontline providers who will be using such tools, at each stage of the implementation process. Objective The aim of this focus group study is to inform ongoing and future efforts to deploy suicide risk–prediction models in clinical practice. The specific goals are to better understand hospital providers’ current practices for assessing and managing suicide risk; determine providers’ perspectives on using automated suicide risk–prediction models in practice; and identify barriers, facilitators, recommendations, and factors to consider. Methods We conducted 10 two-hour focus groups with a total of 40 providers from psychiatry, internal medicine and primary care, emergency medicine, and obstetrics and gynecology departments within an urban academic medical center. Audio recordings of open-ended group discussions were transcribed and coded for relevant and recurrent themes by 2 independent study staff members. All coded text was reviewed and discrepancies were resolved in consensus meetings with doctoral-level staff. Results Although most providers reported using standardized suicide risk assessment tools in their clinical practices, existing tools were commonly described as unhelpful and providers indicated dissatisfaction with current suicide risk assessment methods. Overall, providers’ general attitudes toward the practical use of automated suicide risk–prediction models and corresponding clinical decision support tools were positive. Providers were especially interested in the potential to identify high-risk patients who might be missed by traditional screening methods. Some expressed skepticism about the potential usefulness of these models in routine care; specific barriers included concerns about liability, alert fatigue, and increased demand on the health care system. Key facilitators included presenting specific patient-level features contributing to risk scores, emphasizing changes in risk over time, and developing systematic clinical workflows and provider training. Participants also recommended considering risk-prediction windows, timing of alerts, who will have access to model predictions, and variability across treatment settings. Conclusions Providers were dissatisfied with current suicide risk assessment methods and were open to the use of a machine learning–based risk-prediction system to inform clinical decision-making. They also raised multiple concerns about potential barriers to the usefulness of this approach and suggested several possible facilitators. Future efforts in this area will benefit from incorporating systematic qualitative feedback from providers, patients, administrators, and payers on the use of these new approaches in routine care, especially given the complex, sensitive, and unfortunately still stigmatized nature of suicide risk.
Affiliation(s)
- Kate H Bentley
- Center for Precision Psychiatry, Department of Psychiatry, Massachusetts General Hospital, Boston, MA, United States
- Department of Psychology, Harvard University, Cambridge, MA, United States
- Harvard Medical School, Boston, MA, United States
- Kelly L Zuromski
- Department of Psychology, Harvard University, Cambridge, MA, United States
- Rebecca G Fortgang
- Department of Psychology, Harvard University, Cambridge, MA, United States
- Emily M Madsen
- Center for Precision Psychiatry, Department of Psychiatry, Massachusetts General Hospital, Boston, MA, United States
- Psychiatric and Neurodevelopmental Genetics Unit, Center for Genomic Medicine, Massachusetts General Hospital, Boston, MA, United States
- Daniel Kessler
- Department of Psychology, Harvard University, Cambridge, MA, United States
- Hyunjoon Lee
- Center for Precision Psychiatry, Department of Psychiatry, Massachusetts General Hospital, Boston, MA, United States
- Psychiatric and Neurodevelopmental Genetics Unit, Center for Genomic Medicine, Massachusetts General Hospital, Boston, MA, United States
- Matthew K Nock
- Department of Psychology, Harvard University, Cambridge, MA, United States
- Ben Y Reis
- Harvard Medical School, Boston, MA, United States
- Predictive Medicine Group, Computational Health Informatics Program, Boston Children's Hospital, Boston, MA, United States
- Victor M Castro
- Research Information Science and Computing, Mass General Brigham, Somerville, MA, United States
- Jordan W Smoller
- Center for Precision Psychiatry, Department of Psychiatry, Massachusetts General Hospital, Boston, MA, United States
- Harvard Medical School, Boston, MA, United States
- Psychiatric and Neurodevelopmental Genetics Unit, Center for Genomic Medicine, Massachusetts General Hospital, Boston, MA, United States
17
Luk JW, Pruitt LD, Smolenski DJ, Tucker J, Workman DE, Belsher BE. From everyday life predictions to suicide prevention: Clinical and ethical considerations in suicide predictive analytic tools. J Clin Psychol 2021; 78:137-148. PMID: 34195998. DOI: 10.1002/jclp.23202.
Abstract
Advances in artificial intelligence and machine learning have fueled growing interest in the application of predictive analytics to identify high-risk suicidal patients. Such application will require the aggregation of large-scale, sensitive patient data to help inform complex and potentially stigmatizing health care decisions. This paper provides a description of how suicide prediction is uniquely difficult by comparing it to nonmedical (weather and traffic forecasting) and medical predictions (cancer and human immunodeficiency virus risk), followed by clinical and ethical challenges presented within a risk-benefit conceptual framework. Because the misidentification of suicide risk may be associated with unintended negative consequences, clinicians and policymakers need to carefully weigh the risks and benefits of using suicide predictive analytics across health care populations. Practical recommendations are provided to strengthen the protection of patient rights and enhance the clinical utility of suicide predictive analytics tools.
Affiliation(s)
- Jeremy W Luk
- Psychological Health Center of Excellence, Defense Health Agency, Silver Spring, Maryland, USA
- Larry D Pruitt
- Department of Psychiatry and Behavioral Sciences, VA Puget Sound Healthcare System & University of Washington School of Medicine, Seattle, Washington, USA
- Derek J Smolenski
- Psychological Health Center of Excellence, Defense Health Agency, Silver Spring, Maryland, USA
- Jennifer Tucker
- Psychological Health Center of Excellence, Defense Health Agency, Silver Spring, Maryland, USA
- Don E Workman
- Psychological Health Center of Excellence, Defense Health Agency, Silver Spring, Maryland, USA
- Bradley E Belsher
- Psychological Health Center of Excellence, Defense Health Agency, Silver Spring, Maryland, USA