1. Haroz EE, Rebman P, Goklish N, Garcia M, Suttle R, Maggio D, Clattenburg E, Mega J, Adams R. Performance of Machine Learning Suicide Risk Models in an American Indian Population. JAMA Netw Open 2024;7:e2439269. PMID: 39401036; PMCID: PMC11474420; DOI: 10.1001/jamanetworkopen.2024.39269.
Abstract
Importance: Few suicide risk identification tools have been developed specifically for American Indian and Alaska Native populations, even though these populations face the starkest suicide-related inequities.
Objective: To examine the accuracy of existing machine learning models in a majority American Indian population.
Design, Setting, and Participants: This prognostic study used secondary analysis of electronic health record data collected from January 1, 2017, to December 31, 2021. Existing models from the Mental Health Research Network (MHRN) and Vanderbilt University (VU) were fitted. Models were compared with an augmented screening indicator that included any previous attempt, recent suicidal ideation, or a recent positive suicide risk screen result. The comparison was based on the area under the receiver operating characteristic curve (AUROC). The study was performed in partnership with a tribe and the local Indian Health Service (IHS) unit in the Southwest. All patients were 18 years or older with at least 1 encounter with the IHS unit during the study period. Data were analyzed between October 6, 2022, and July 29, 2024.
Exposures: Suicide attempts or deaths within 90 days.
Main Outcomes and Measures: Model performance was compared based on the ability to distinguish between patients with a suicide attempt or death within 90 days of their last IHS visit and those without this outcome.
Results: Of 16 835 patients (mean [SD] age, 40.0 [17.5] years; 8660 [51.4%] female; 14 251 [84.7%] American Indian), 324 patients (1.9%) had at least 1 suicide attempt, and 37 patients (0.2%) died by suicide. The MHRN model had an AUROC value of 0.81 (95% CI, 0.77-0.85) for 90-day suicide attempts, whereas the VU model had an AUROC value of 0.68 (95% CI, 0.64-0.72) and the augmented screening indicator had an AUROC value of 0.66 (95% CI, 0.63-0.70). Calibration was poor for both models but improved after recalibration.
Conclusions and Relevance: This prognostic study found that existing risk identification models for suicide prevention held promise when applied to new contexts and performed better than relying on a combined indicator of a positive suicide risk screen result, history of attempt, and recent suicidal ideation.
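To make the two quantitative steps in this abstract concrete, the following is a minimal sketch in Python (NumPy and scikit-learn) of an AUROC comparison between a continuous model score and a binary screening indicator, followed by Platt-style logistic recalibration of miscalibrated probabilities. All data, variable names, and effect sizes below are synthetic illustrations, not the authors' code or data, and the paper does not specify which recalibration method was used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 16835
y = rng.binomial(1, 0.02, size=n)  # 90-day attempt/death indicator, ~2% prevalence

# Synthetic "model" probabilities that rank cases well but are miscalibrated.
raw = rng.normal(loc=-4.0 + 2.5 * y, scale=1.0, size=n)
p_model = 1 / (1 + np.exp(-raw))

# Synthetic binary screening indicator (prior attempt / ideation / positive screen).
screen = (rng.random(n) < np.where(y == 1, 0.5, 0.1)).astype(int)

# Discrimination: AUROC of a continuous risk score vs. a binary screen indicator.
print("model AUROC :", round(roc_auc_score(y, p_model), 3))
print("screen AUROC:", round(roc_auc_score(y, screen), 3))

# Recalibration: Platt-style logistic recalibration of the model's logits.
# A monotone transform leaves AUROC unchanged but realigns predicted risk with
# observed event rates; in practice it would be fit on a held-out local sample.
logits = np.log(p_model / (1 - p_model)).reshape(-1, 1)
recalibrated = LogisticRegression().fit(logits, y).predict_proba(logits)[:, 1]
print("recalibrated AUROC:", round(roc_auc_score(y, recalibrated), 3))
```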
Affiliation(s)
- Emily E. Haroz
  - Center for Indigenous Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
  - Department of Mental Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
- Paul Rebman
  - Department of Mental Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
- Novalene Goklish
  - Center for Indigenous Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
- Mitchell Garcia
  - Center for Indigenous Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
- Rose Suttle
  - Center for Indigenous Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
- Dominick Maggio
  - Indian Health Service, US Department of Health and Human Services, Rockville, Maryland
- Eben Clattenburg
  - Indian Health Service, US Department of Health and Human Services, Rockville, Maryland
- Joe Mega
  - Indian Health Service, US Department of Health and Human Services, Rockville, Maryland
- Roy Adams
  - Department of Psychiatry, Johns Hopkins School of Medicine, Baltimore, Maryland
2. Anawati A, Fleming H, Mertz M, Bertrand J, Dumond J, Myles S, Leblanc J, Ross B, Lamoureux D, Patel D, Carrier R, Cameron E. Artificial intelligence and social accountability in the Canadian health care landscape: A rapid literature review. PLOS Digit Health 2024;3:e0000597. PMID: 39264934; PMCID: PMC11392241; DOI: 10.1371/journal.pdig.0000597.
Abstract
Background: Situated within a larger project entitled "Exploring the Need for a Uniquely Different Approach in Northern Ontario: A Study of Socially Accountable Artificial Intelligence," this rapid review provides a broad look at how social accountability, as an equity-oriented health policy strategy, is guiding artificial intelligence (AI) across the Canadian health care landscape, particularly for marginalized regions and populations. The review synthesizes the existing literature to answer the question: How is AI present, and how is it impacted by social accountability, across the health care landscape in Canada?
Methodology: A multidisciplinary expert panel with experience in diverse health care roles and in computer science was assembled from multiple institutions in Northern Ontario to guide the study design and research team. A search strategy was developed that broadly reflected the concepts of social accountability, AI, and health care in Canada. The EMBASE and MEDLINE databases were searched for articles, which were reviewed for inclusion by 2 independent reviewers. Search results, a description of the studies, and a thematic analysis of the included studies were reported as the primary outcomes.
Principal Findings: The search strategy yielded 679 articles, of which 36 relevant studies were included. No studies were identified that were guided by a comprehensive, equity-oriented social accountability strategy. Three major themes emerged from the thematic analysis: (1) designing equity into AI; (2) policies and regulations for AI; and (3) the inclusion of community voices in the implementation of AI in health care. Across the 3 main themes, equity, marginalized populations, and the need for community and partner engagement were frequently referenced, all of which are key concepts of a social accountability strategy.
Conclusion: The findings suggest that, unless there is a course correction, AI in the Canadian health care landscape will worsen the digital divide and health inequity. Social accountability as an equity-oriented strategy for AI could catalyze many of the changes required to prevent a worsening of the digital divide caused by the AI revolution in health care in Canada, and should raise similar concerns for other global contexts.
Affiliation(s)
- Alex Anawati
  - Dr. Gilles Arcand Centre for Health Equity, NOSM University, Thunder Bay/Sudbury, Ontario, Canada
  - Clinical Sciences Division, NOSM University, Sudbury/Thunder Bay, Ontario, Canada
  - Health Sciences North, Sudbury, Ontario, Canada
- Holly Fleming
  - Dr. Gilles Arcand Centre for Health Equity, NOSM University, Thunder Bay/Sudbury, Ontario, Canada
- Megan Mertz
  - Dr. Gilles Arcand Centre for Health Equity, NOSM University, Thunder Bay/Sudbury, Ontario, Canada
- Jillian Bertrand
  - NOSM University, UME Learner, Sudbury/Thunder Bay, Ontario, Canada
- Jennifer Dumond
  - Health Sciences Library, NOSM University, Sudbury/Thunder Bay, Ontario, Canada
- Sophia Myles
  - School of Sociological and Anthropological Studies, University of Ottawa, Ottawa, Ontario, Canada
  - School of Kinesiology and Health Sciences, Laurentian University, Sudbury, Ontario, Canada
- Joseph Leblanc
  - Dr. Gilles Arcand Centre for Health Equity, NOSM University, Thunder Bay/Sudbury, Ontario, Canada
  - Human Sciences Division, NOSM University, Sudbury/Thunder Bay, Ontario, Canada
- Brian Ross
  - Medical Sciences Division, NOSM University, Sudbury/Thunder Bay, Ontario, Canada
- Daniel Lamoureux
  - NOSM University, UME Learner, Sudbury/Thunder Bay, Ontario, Canada
- Div Patel
  - NOSM University, UME Learner, Sudbury/Thunder Bay, Ontario, Canada
- Erin Cameron
  - Dr. Gilles Arcand Centre for Health Equity, NOSM University, Thunder Bay/Sudbury, Ontario, Canada
  - Human Sciences Division, NOSM University, Sudbury/Thunder Bay, Ontario, Canada
3. Liebrenz M, Bhugra D, Alibudbud R, Ventriglio A, Smith A. AI in health care and the fragile pursuit of equity and social justice. Lancet 2024;404:843. PMID: 39216963; DOI: 10.1016/S0140-6736(24)01604-0.
Affiliation(s)
- Michael Liebrenz
  - Department of Forensic Psychiatry, University of Bern, Bern 3012, Switzerland
- Dinesh Bhugra
  - Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK
- Rowalt Alibudbud
  - Department of Sociology and Behavioral Sciences, De La Salle University, Manila, Philippines
- Antonio Ventriglio
  - Department of Clinical and Experimental Medicine, University of Foggia, Foggia, Italy
- Alexander Smith
  - Department of Forensic Psychiatry, University of Bern, Bern 3012, Switzerland
4. Lacsa JEM. Is internet access really the key to achieving AI-driven health equity in the Philippines, or should we focus on direct healthcare investments instead? J Public Health (Oxf) 2024:fdae123. PMID: 38964782; DOI: 10.1093/pubmed/fdae123.
Affiliation(s)
- Jose Eric M Lacsa
  - Department of Sociology and Behavioral Sciences, De La Salle University, 1004 Taft Avenue, Manila, Philippines
5. Nadarzynski T, Knights N, Husbands D, Graham CA, Llewellyn CD, Buchanan T, Montgomery I, Ridge D. Achieving health equity through conversational AI: A roadmap for design and implementation of inclusive chatbots in healthcare. PLOS Digit Health 2024;3:e0000492. PMID: 38696359; PMCID: PMC11065243; DOI: 10.1371/journal.pdig.0000492.
Abstract
Background: The rapid evolution of conversational and generative artificial intelligence (AI) has led to the increased deployment of AI tools in healthcare settings. While these conversational AI tools promise efficiency and expanded access to healthcare services, there are growing ethical and practical concerns, including concerns about inclusivity. This study aimed to identify activities that reduce bias in conversational AI and make its design and implementation more equitable.
Methods: A qualitative research approach was employed to develop an analytical framework based on a content analysis of 17 guidelines on AI use in clinical settings. A stakeholder consultation was subsequently conducted with 33 ethnically diverse community members, AI designers, industry experts, and relevant health professionals to further develop a roadmap for the equitable design and implementation of conversational AI in healthcare. Framework analysis was conducted on the interview data.
Results: A 10-stage roadmap was developed outlining activities relevant to the equitable design and implementation of conversational AI: (1) conception and planning; (2) diversity and collaboration; (3) preliminary research; (4) co-production; (5) safety measures; (6) preliminary testing; (7) healthcare integration; (8) service evaluation and auditing; (9) maintenance; and (10) termination.
Discussion: We have made specific recommendations for increasing the equity of conversational AI as part of healthcare services. These emphasise the importance of a collaborative approach and the involvement of patient groups in navigating the rapid evolution of conversational AI technologies. Further research must assess the impact of the recommended activities on chatbots' fairness and their ability to reduce health inequalities.
Affiliation(s)
- Tom Nadarzynski
  - School of Social Sciences, University of Westminster, London, United Kingdom
- Nicky Knights
  - School of Social Sciences, University of Westminster, London, United Kingdom
- Deborah Husbands
  - School of Social Sciences, University of Westminster, London, United Kingdom
- Cynthia A. Graham
  - Kinsey Institute and Department of Gender Studies, Indiana University, Bloomington, United States of America
- Carrie D. Llewellyn
  - Brighton and Sussex Medical School, University of Sussex, Brighton, United Kingdom
- Tom Buchanan
  - School of Social Sciences, University of Westminster, London, United Kingdom
- Damien Ridge
  - School of Social Sciences, University of Westminster, London, United Kingdom
6. Jennings AM, Cox DJ. Starting the Conversation Around the Ethical Use of Artificial Intelligence in Applied Behavior Analysis. Behav Anal Pract 2024;17:107-122. PMID: 38405299; PMCID: PMC10891004; DOI: 10.1007/s40617-023-00868-z.
Abstract
Artificial intelligence (AI) is increasingly a part of our everyday lives. Though much AI work in healthcare has taken place outside of applied behavior analysis (ABA), researchers within ABA have begun to demonstrate many ways that AI might improve the delivery of ABA services. Though AI offers many exciting advances, absent from the behavior analytic literature thus far is any conversation around the ethical considerations involved in developing, building, and deploying AI technologies. Further, though AI is already coming to ABA, it is unknown to what extent behavior analytic practitioners are familiar (and comfortable) with the use of AI in ABA. The purpose of this article is twofold: first, to describe how existing ethical publications (e.g., the BACB Code of Ethics) do and do not speak to the unique ethical concerns raised by deploying AI in everyday ABA service delivery settings; and second, to raise questions for consideration that might inform future ethical guidelines for developing and using AI in ABA service delivery. In total, we hope this article sparks a proactive dialog around the ethical use of AI in ABA before the field is required to have a reactionary conversation.
Affiliation(s)
- Adrienne M. Jennings
  - Department of Behavioral Science, Daemen University, 4380 Main Street, Amherst, NY 14226, USA
- David J. Cox
  - Institute for Applied Behavioral Science, Endicott College, Beverly, MA, USA
  - RethinkFirst, 49 W 27th St, 8th Floor, New York, NY 10001, USA
7. Hendricks-Sturrup R, Simmons M, Anders S, Aneni K, Wright Clayton E, Coco J, Collins B, Heitman E, Hussain S, Joshi K, Lemieux J, Lovett Novak L, Rubin DJ, Shanker A, Washington T, Waters G, Webb Harris J, Yin R, Wagner T, Yin Z, Malin B. Developing Ethics and Equity Principles, Terms, and Engagement Tools to Advance Health Equity and Researcher Diversity in AI and Machine Learning: Modified Delphi Approach. JMIR AI 2023;2:e52888. PMID: 38875540; PMCID: PMC11041493; DOI: 10.2196/52888.
Abstract
Background: Artificial intelligence (AI) and machine learning (ML) technology design and development continues to be rapid, despite major limitations in its current form as a practice and discipline for addressing all sociohumanitarian issues and complexities. From these limitations emerges an imperative to strengthen AI and ML literacy in underserved communities and to build a more diverse AI and ML design and development workforce engaged in health research.
Objective: AI and ML have the potential to account for and assess a variety of factors that contribute to health and disease and to improve prevention, diagnosis, and therapy. Here, we describe recent activities within the Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) Ethics and Equity Workgroup (EEWG) that led to the development of deliverables that will help put ethics and fairness at the forefront of AI and ML applications to build equity in biomedical research, education, and health care.
Methods: The AIM-AHEAD EEWG was created in 2021, with 3 cochairs and 51 members in year 1 and 2 cochairs and approximately 40 members in year 2. Members in both years included AIM-AHEAD principal investigators, coinvestigators, leadership fellows, and research fellows. The EEWG used a modified Delphi approach, with polling, ranking, and other exercises to facilitate discussions around the tangible steps, key terms, and definitions needed to ensure that ethics and fairness are at the forefront of AI and ML applications to build equity in biomedical research, education, and health care.
Results: The EEWG developed a set of ethics and equity principles, a glossary, and an interview guide. The ethics and equity principles comprise 5 core principles, each with subparts, which articulate best practices for working with stakeholders from historically and presently underrepresented communities. The glossary contains 12 terms and definitions, with particular emphasis on the optimal development, refinement, and implementation of AI and ML in health equity research. To accompany the glossary, the EEWG developed a concept relationship diagram that describes the logical flow of, and relationships between, the definitional concepts. Lastly, the interview guide provides questions that can be used or adapted to garner stakeholder and community perspectives on the principles and glossary.
Conclusions: Ongoing engagement is needed around our principles and glossary to identify and predict potential limitations in their use in AI and ML research settings, especially for institutions with limited resources. This requires time, careful consideration, and honest discussion around what makes an engagement incentive meaningful enough to support and sustain full engagement. By slowing down to meet historically and presently underresourced institutions and communities where they are, and where they are capable of engaging and competing, there is higher potential to achieve the needed diversity, ethics, and equity in AI and ML implementation in health research.
Affiliation(s)
- Malaika Simmons
  - National Alliance Against Disparities in Patient Health, Woodbridge, VA, United States
- Shilo Anders
  - Vanderbilt University Medical Center, Nashville, TN, United States
- Joseph Coco
  - Vanderbilt University Medical Center, Nashville, TN, United States
- Benjamin Collins
  - Vanderbilt University Medical Center, Nashville, TN, United States
- Elizabeth Heitman
  - University of Texas Southwestern Medical Center, Dallas, TX, United States
- Karuna Joshi
  - University of Maryland, Baltimore County, Baltimore, MD, United States
- Anil Shanker
  - Meharry Medical College, Nashville, TN, United States
- Talitha Washington
  - AUC Data Science Initiative, Clark Atlanta University, Atlanta, GA, United States
- Gabriella Waters
  - Center for Equitable AI & Machine Learning Systems, Morgan State University, Baltimore, MD, United States
- Rui Yin
  - University of Florida, Gainesville, FL, United States
- Teresa Wagner
  - SaferCare Texas, University of North Texas Health Science Center, Fort Worth, TX, United States
- Zhijun Yin
  - Vanderbilt University Medical Center, Nashville, TN, United States
- Bradley Malin
  - Vanderbilt University Medical Center, Nashville, TN, United States