1. McMahon GT. The Risks and Challenges of Artificial Intelligence in Endocrinology. J Clin Endocrinol Metab 2024; 109:e1468-e1471. [PMID: 38471009 DOI: 10.1210/clinem/dgae017]
Abstract
Artificial intelligence (AI) holds the promise of addressing many of the challenges healthcare faces, which include a growing burden of illness, an increase in chronic health conditions and disabilities due to aging and epidemiological changes, higher demand for health services, overworked and burned-out clinicians, greater societal expectations, and rising health expenditures. While technological advancements in processing power, memory, storage, and the abundance of data have empowered computers to handle increasingly complex tasks with remarkable success, AI also introduces meaningful risks and challenges. Among these are issues related to accuracy and reliability, bias and equity, errors and accountability, transparency, misuse, and privacy of data. As AI systems continue to rapidly integrate into healthcare settings, it is crucial to recognize the inherent risks they bring. These risks demand careful consideration to ensure the responsible and safe deployment of AI in healthcare.
Affiliation(s)
- Graham T McMahon
- Accreditation Council for Continuing Medical Education, Chicago, IL 60611, USA
- Department of Medical Education and Division of Endocrinology, Metabolism and Molecular Medicine, Northwestern University Feinberg School of Medicine, Chicago, IL 60611, USA
2. Cerrato PL, Halamka JD. How AI drives innovation in cardiovascular medicine. Front Cardiovasc Med 2024; 11:1397921. [PMID: 38737711 PMCID: PMC11082327 DOI: 10.3389/fcvm.2024.1397921]
Abstract
Medicine is entering a new era in which artificial intelligence (AI) and deep learning have a measurable impact on patient care. This impact is especially evident in cardiovascular medicine. While the purpose of this short opinion paper is not to provide an in-depth review of the many applications of AI in cardiovascular medicine, we summarize some of the important advances that have taken place in this domain.
Affiliation(s)
- John D. Halamka
- Mayo Clinic Platform, Mayo Clinic, Rochester, MN, United States
3. Schaekermann M, Spitz T, Pyles M, Cole-Lewis H, Wulczyn E, Pfohl SR, Martin D, Jaroensri R, Keeling G, Liu Y, Farquhar S, Xue Q, Lester J, Hughes C, Strachan P, Tan F, Bui P, Mermel CH, Peng LH, Matias Y, Corrado GS, Webster DR, Virmani S, Semturs C, Liu Y, Horn I, Cameron Chen PH. Health equity assessment of machine learning performance (HEAL): a framework and dermatology AI model case study. EClinicalMedicine 2024; 70:102479. [PMID: 38685924 PMCID: PMC11056401 DOI: 10.1016/j.eclinm.2024.102479]
Abstract
Background Artificial intelligence (AI) has repeatedly been shown to encode historical inequities in healthcare. We aimed to develop a framework to quantitatively assess the performance equity of health AI technologies and to illustrate its utility via a case study. Methods Here, we propose a methodology, complementary to existing fairness metrics, to assess whether health AI technologies prioritise performance for patient populations experiencing worse outcomes. We developed the Health Equity Assessment of machine Learning performance (HEAL) framework, designed to quantitatively assess the performance equity of health AI technologies via a four-step interdisciplinary process to understand and quantify domain-specific criteria, and the resulting HEAL metric. As an illustrative case study (analysis conducted between October 2022 and January 2023), we applied the HEAL framework to a dermatology AI model. A set of 5420 teledermatology cases (store-and-forward cases from patients aged 20 years or older, submitted by primary care providers in the USA and skin cancer clinics in Australia), enriched for diversity in age, sex and race/ethnicity, was used to retrospectively evaluate the AI model's HEAL metric, defined as the likelihood that the AI model performs better for subpopulations with worse average health outcomes as compared to others. The likelihood that AI performance was anticorrelated to pre-existing health outcomes was estimated using bootstrap methods as the probability that the negated Spearman's rank correlation coefficient (i.e., "R") was greater than zero. Positive values of R suggest that subpopulations with poorer health outcomes have better AI model performance. Thus, the HEAL metric, defined as p(R > 0), measures how likely the AI technology is to prioritise performance for subpopulations with worse average health outcomes as compared to others (presented as a percentage below).
Health outcomes were quantified as disability-adjusted life years (DALYs) when grouping by sex and age, and years of life lost (YLLs) when grouping by race/ethnicity. AI performance was measured as top-3 agreement with the reference diagnosis from a panel of 3 dermatologists per case. Findings Across all dermatologic conditions, the HEAL metric was 80.5% for prioritising AI performance of racial/ethnic subpopulations based on YLLs, and 92.1% and 0.0% respectively for prioritising AI performance of sex and age subpopulations based on DALYs. Certain dermatologic conditions were significantly associated with greater AI model performance compared to a reference category of less common conditions. For skin cancer conditions, the HEAL metric was 73.8% for prioritising AI performance of age subpopulations based on DALYs. Interpretation Analysis using the proposed HEAL framework showed that the dermatology AI model prioritised performance for race/ethnicity, sex (all conditions) and age (cancer conditions) subpopulations with respect to pre-existing health disparities. More work is needed to investigate ways of promoting equitable AI performance across age for non-cancer conditions and to better understand how AI models can contribute towards improving equity in health outcomes. Funding Google LLC.
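The HEAL metric as described in the abstract reduces to a concrete computation: bootstrap the Spearman rank correlation between per-subpopulation AI performance and pre-existing health outcomes, negate it, and report the fraction of resamples above zero. A minimal sketch follows; this is not the authors' code, the function and variable names are illustrative, and it assumes outcomes are encoded so that higher values mean better health (so anticorrelation with outcomes yields R > 0):

```python
import numpy as np
from scipy.stats import spearmanr

def heal_metric(performance, health_outcome, n_boot=2000, seed=0):
    """Estimate p(R > 0), where R is the negated Spearman correlation
    between per-subpopulation AI performance and health outcomes
    (encoded so that higher = better health). R > 0 means the model
    performs better for subpopulations with worse outcomes."""
    rng = np.random.default_rng(seed)
    performance = np.asarray(performance, dtype=float)
    health_outcome = np.asarray(health_outcome, dtype=float)
    n = len(performance)
    r_vals = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample subpopulations with replacement
        rho, _ = spearmanr(performance[idx], health_outcome[idx])
        if not np.isnan(rho):             # skip degenerate (constant) resamples
            r_vals.append(-rho)           # negate per the HEAL definition
    return float(np.mean(np.array(r_vals) > 0))
```

On perfectly anticorrelated toy data (best performance in the worst-off group), the estimate approaches 1; real subpopulation data would give intermediate values like the 80.5% reported above.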
Affiliation(s)
- Malcolm Pyles
- Advanced Clinical, Deerfield, IL, USA
- Department of Dermatology, Cleveland Clinic, Cleveland, OH, USA
- Yuan Liu
- Google Health, Mountain View, CA, USA
- Jenna Lester
- Advanced Clinical, Deerfield, IL, USA
- Department of Dermatology, University of California, San Francisco, CA, USA
- Peggy Bui
- Google Health, Mountain View, CA, USA
- Yun Liu
- Google Health, Mountain View, CA, USA
- Ivor Horn
- Google Health, Mountain View, CA, USA
4. Schweikhard FP, Kosanke A, Lange S, Kromrey ML, Mankertz F, Gamain J, Kirsch M, Rosenberg B, Hosten N. Doctor's Orders-Why Radiologists Should Consider Adjusting Commercial Machine Learning Applications in Chest Radiography to Fit Their Specific Needs. Healthcare (Basel) 2024; 12:706. [PMID: 38610129 PMCID: PMC11011470 DOI: 10.3390/healthcare12070706]
Abstract
This retrospective study evaluated a commercial deep learning (DL) software for chest radiographs and explored its performance in different scenarios. A total of 477 patients (284 male, 193 female, mean age 61.4 (44.7-78.1) years) were included. To establish the reference standard, two radiologists performed independent readings for seven diseases, reporting 226 findings in 167 patients. An autonomous DL reading was performed separately and evaluated against this reference standard regarding accuracy, sensitivity and specificity using ROC analysis. The overall average AUC was 0.84 (95%-CI 0.76-0.92) with an optimized DL sensitivity of 85% and specificity of 75.4%. The best results were seen in pleural effusion with an AUC of 0.92 (0.885-0.955) and sensitivity and specificity of 86.4% each. The data also showed a significant influence of sex, age, and comorbidity on the level of agreement between the reference standard and the DL reading. In the exploratory analysis, about 40% of cases could be ruled out correctly when screening for only one specific disease at a sensitivity above 95%. For the combined reading of all abnormalities at once, only marginal workload reduction could be achieved due to insufficient specificity. DL applications like this one bear the prospect of autonomous comprehensive reporting on chest radiographs but for now require human supervision. Radiologists need to consider possible bias in certain patient groups, e.g., the elderly and women. By adjusting threshold values, commercial DL applications could already be deployed for a variety of tasks, e.g., ruling out certain conditions in screening scenarios, offering high potential for workload reduction.
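The threshold-adjustment idea in this abstract — tuning a DL score cutoff so one target condition can be ruled out at very high sensitivity, and measuring how much workload that removes — can be sketched as follows. This is an illustrative reconstruction, not the study's code; the function name and the quantile-based threshold choice are assumptions:

```python
import numpy as np

def rule_out_threshold(scores, labels, min_sensitivity=0.95):
    """Pick the highest score threshold such that flagging cases with
    score >= threshold still catches at least `min_sensitivity` of the
    positives; cases below the threshold are 'ruled out' (unread)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos_scores = np.sort(scores[labels])
    # allow at most floor((1 - min_sensitivity) * n_pos) missed positives
    k = int(np.floor((1 - min_sensitivity) * len(pos_scores)))
    threshold = pos_scores[k]
    flagged = scores >= threshold
    sensitivity = flagged[labels].mean()
    specificity = (~flagged)[~labels].mean()
    ruled_out = (~flagged).mean()  # fraction of studies not needing review
    return threshold, sensitivity, specificity, ruled_out
```

With well-separated score distributions, a large fraction of negatives falls below the cutoff, mirroring the ~40% correct rule-out rate reported above; with poorly separated scores, the rule-out fraction collapses, which is exactly the "insufficient specificity" problem seen in the combined reading.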
Affiliation(s)
- Frank Philipp Schweikhard
- Institute for Diagnostic Radiology and Neuroradiology, University Medicine of Greifswald, 17475 Greifswald, Germany
- Anika Kosanke
- Institute for Diagnostic Radiology and Neuroradiology, University Medicine of Greifswald, 17475 Greifswald, Germany
- Sandra Lange
- Institute for Psychology, University of Greifswald, 17489 Greifswald, Germany
- Marie-Luise Kromrey
- Institute for Diagnostic Radiology and Neuroradiology, University Medicine of Greifswald, 17475 Greifswald, Germany
- Fiona Mankertz
- Institute for Diagnostic Radiology and Neuroradiology, University Medicine of Greifswald, 17475 Greifswald, Germany
- Julie Gamain
- Institute for Diagnostic Radiology and Neuroradiology, University Medicine of Greifswald, 17475 Greifswald, Germany
- Michael Kirsch
- Institute for Diagnostic Radiology and Neuroradiology, University Medicine of Greifswald, 17475 Greifswald, Germany
- Britta Rosenberg
- Institute for Diagnostic Radiology and Neuroradiology, University Medicine of Greifswald, 17475 Greifswald, Germany
- Norbert Hosten
- Institute for Diagnostic Radiology and Neuroradiology, University Medicine of Greifswald, 17475 Greifswald, Germany
5. Quistberg DA. Potential of artificial intelligence in injury prevention research and practice. Inj Prev 2024; 30:89-91. [PMID: 38307714 PMCID: PMC11003389 DOI: 10.1136/ip-2023-045203]
Abstract
There is increasing interest in and use of artificial intelligence algorithms and methods in biomedical research and practice, particularly as the technology has made significant advances in the past decade and become accessible to more disciplines. This editorial briefly reviews this technology and its potential for injury prevention research and practice, proposing ways it can be used to advance the discipline, as well as the potential pitfalls, concerns and biases that accompany it.
Affiliation(s)
- D Alex Quistberg
- Urban Health Collaborative, Drexel University, Philadelphia, Pennsylvania, USA
- Environmental & Occupational Health, Drexel University, Philadelphia, Pennsylvania, USA
6. Fehr J, Citro B, Malpani R, Lippert C, Madai VI. A trustworthy AI reality-check: the lack of transparency of artificial intelligence products in healthcare. Front Digit Health 2024; 6:1267290. [PMID: 38455991 PMCID: PMC10919164 DOI: 10.3389/fdgth.2024.1267290]
Abstract
Trustworthy medical AI requires transparency about the development and testing of underlying algorithms to identify biases and communicate potential risks of harm. Abundant guidance exists on how to achieve transparency for medical AI products, but it is unclear whether publicly available information adequately informs about their risks. To assess this, we retrieved public documentation on the 14 available CE-certified AI-based radiology products in risk class IIb in the EU from vendor websites, scientific publications, and the European EUDAMED database. Using a self-designed survey, we reported on their development, validation, ethical considerations, and deployment caveats, according to trustworthy AI guidelines. We scored each question as 0, 0.5, or 1 to rate whether the required information was "unavailable", "partially available", or "fully available". The transparency of each product was calculated relative to all 55 questions. Transparency scores ranged from 6.4% to 60.9%, with a median of 29.1%. Major transparency gaps included missing documentation on training data, ethical considerations, and limitations for deployment. Ethical aspects like consent, safety monitoring, and GDPR compliance were rarely documented. Furthermore, deployment caveats for different demographics and medical settings were scarce. In conclusion, public documentation of authorized medical AI products in Europe lacks sufficient transparency to inform about safety and risks. We call on lawmakers and regulators to establish legally mandated requirements for public and substantive transparency to fulfill the promise of trustworthy AI for health.
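The scoring scheme in this study is simple enough to state precisely: each of 55 survey questions is rated 0 ("unavailable"), 0.5 ("partially available"), or 1 ("fully available"), and a product's transparency score is the sum expressed as a percentage of the maximum. A minimal sketch of that arithmetic (illustrative only; the survey items themselves are not reproduced here, and the function name is an assumption):

```python
def transparency_score(ratings, n_questions=55):
    """Per-product transparency: sum of question ratings (0 / 0.5 / 1)
    relative to all questions, expressed as a percentage."""
    allowed = {0, 0.5, 1}
    if len(ratings) != n_questions:
        raise ValueError(f"expected {n_questions} ratings, got {len(ratings)}")
    if any(r not in allowed for r in ratings):
        raise ValueError("each rating must be 0, 0.5, or 1")
    return 100.0 * sum(ratings) / n_questions
```

A product with, say, 20 fully documented items, 10 partially documented, and 25 undocumented scores 100 × 25/55 ≈ 45.5%, inside the 6.4%–60.9% range the authors report.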
Affiliation(s)
- Jana Fehr
- Digital Health & Machine Learning, Hasso Plattner Institute, Potsdam, Germany
- Digital Engineering Faculty, University of Potsdam, Potsdam, Germany
- QUEST Center for Responsible Research, Berlin Institute of Health (BIH), Charité Universitätsmedizin Berlin, Berlin, Germany
- Brian Citro
- Independent Researcher, Chicago, IL, United States
- Christoph Lippert
- Digital Health & Machine Learning, Hasso Plattner Institute, Potsdam, Germany
- Digital Engineering Faculty, University of Potsdam, Potsdam, Germany
- Hasso Plattner Institute for Digital Health at Mount Sinai, Icahn School of Medicine at Mount Sinai, New York, NY, United States
- Vince I. Madai
- QUEST Center for Responsible Research, Berlin Institute of Health (BIH), Charité Universitätsmedizin Berlin, Berlin, Germany
- Faculty of Computing, Engineering and the Built Environment, School of Computing and Digital Technology, Birmingham City University, Birmingham, United Kingdom
7. Economou-Zavlanos NJ, Bessias S, Cary MP, Bedoya AD, Goldstein BA, Jelovsek JE, O'Brien CL, Walden N, Elmore M, Parrish AB, Elengold S, Lytle KS, Balu S, Lipkin ME, Shariff AI, Gao M, Leverenz D, Henao R, Ming DY, Gallagher DM, Pencina MJ, Poon EG. Translating ethical and quality principles for the effective, safe and fair development, deployment and use of artificial intelligence technologies in healthcare. J Am Med Inform Assoc 2024; 31:705-713. [PMID: 38031481 PMCID: PMC10873841 DOI: 10.1093/jamia/ocad221]
Abstract
OBJECTIVE The complexity and rapid pace of development of algorithmic technologies pose challenges for their regulation and oversight in healthcare settings. We sought to improve our institution's approach to evaluation and governance of algorithmic technologies used in clinical care and operations by creating an Implementation Guide that standardizes evaluation criteria so that local oversight is performed in an objective fashion. MATERIALS AND METHODS Building on a framework that applies key ethical and quality principles (clinical value and safety, fairness and equity, usability and adoption, transparency and accountability, and regulatory compliance), we created concrete guidelines for evaluating algorithmic technologies at our institution. RESULTS An Implementation Guide articulates evaluation criteria used during review of algorithmic technologies and details what evidence supports the implementation of ethical and quality principles for trustworthy health AI. Application of the processes described in the Implementation Guide can lead to algorithms that are safer as well as more effective, fair, and equitable upon implementation, as illustrated through 4 examples of technologies at different phases of the algorithmic lifecycle that underwent evaluation at our academic medical center. DISCUSSION By providing clear descriptions/definitions of evaluation criteria and embedding them within standardized processes, we streamlined oversight processes and educated communities using and developing algorithmic technologies within our institution. CONCLUSIONS We developed a scalable, adaptable framework for translating principles into evaluation criteria and specific requirements that support trustworthy implementation of algorithmic technologies in patient care and healthcare operations.
Affiliation(s)
- Sophia Bessias
- Duke AI Health, Duke University School of Medicine, Durham, NC 27705, United States
- Michael P Cary
- Duke AI Health, Duke University School of Medicine, Durham, NC 27705, United States
- Duke University School of Nursing, Durham, NC 27710, United States
- Armando D Bedoya
- Duke Health Technology Solutions, Duke University Health System, Durham, NC 27705, United States
- Department of Medicine, Duke University School of Medicine, Durham, NC 27710, United States
- Benjamin A Goldstein
- Duke AI Health, Duke University School of Medicine, Durham, NC 27705, United States
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC 27705, United States
- John E Jelovsek
- Department of Obstetrics and Gynecology, Duke University School of Medicine, Durham, NC 27710, United States
- Cara L O'Brien
- Duke Health Technology Solutions, Duke University Health System, Durham, NC 27705, United States
- Department of Medicine, Duke University School of Medicine, Durham, NC 27710, United States
- Nancy Walden
- Duke AI Health, Duke University School of Medicine, Durham, NC 27705, United States
- Matthew Elmore
- Duke AI Health, Duke University School of Medicine, Durham, NC 27705, United States
- Amanda B Parrish
- Office of Regulatory Affairs and Quality, Duke University School of Medicine, Durham, NC 27705, United States
- Scott Elengold
- Office of Counsel, Duke University, Durham, NC 27701, United States
- Kay S Lytle
- Duke University School of Nursing, Durham, NC 27710, United States
- Duke Health Technology Solutions, Duke University Health System, Durham, NC 27705, United States
- Suresh Balu
- Duke Institute for Health Innovation, Duke University, Durham, NC 27701, United States
- Michael E Lipkin
- Department of Urology, Duke University School of Medicine, Durham, NC 27710, United States
- Afreen Idris Shariff
- Department of Medicine, Duke University School of Medicine, Durham, NC 27710, United States
- Duke Endocrine-Oncology Program, Duke University Health System, Durham, NC 27710, United States
- Michael Gao
- Duke Institute for Health Innovation, Duke University, Durham, NC 27701, United States
- David Leverenz
- Department of Medicine, Duke University School of Medicine, Durham, NC 27710, United States
- Ricardo Henao
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC 27705, United States
- Department of Bioengineering, King Abdullah University of Science and Technology, Thuwal 23955, Saudi Arabia
- David Y Ming
- Department of Medicine, Duke University School of Medicine, Durham, NC 27710, United States
- Duke Department of Pediatrics, Duke University Health System, Durham, NC 27705, United States
- Department of Population Health Sciences, Duke University School of Medicine, Durham, NC 27701, United States
- David M Gallagher
- Department of Medicine, Duke University School of Medicine, Durham, NC 27710, United States
- Michael J Pencina
- Duke AI Health, Duke University School of Medicine, Durham, NC 27705, United States
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC 27705, United States
- Eric G Poon
- Duke Health Technology Solutions, Duke University Health System, Durham, NC 27705, United States
- Department of Medicine, Duke University School of Medicine, Durham, NC 27710, United States
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC 27705, United States
8. Hendricks-Sturrup R, Simmons M, Anders S, Aneni K, Wright Clayton E, Coco J, Collins B, Heitman E, Hussain S, Joshi K, Lemieux J, Lovett Novak L, Rubin DJ, Shanker A, Washington T, Waters G, Webb Harris J, Yin R, Wagner T, Yin Z, Malin B. Developing Ethics and Equity Principles, Terms, and Engagement Tools to Advance Health Equity and Researcher Diversity in AI and Machine Learning: Modified Delphi Approach. JMIR AI 2023; 2:e52888. [PMID: 38875540 PMCID: PMC11041493 DOI: 10.2196/52888]
Abstract
BACKGROUND Artificial intelligence (AI) and machine learning (ML) technology design and development continue at a rapid pace, despite major limitations in the current capacity of the practice and discipline to address all sociohumanitarian issues and complexities. From these limitations emerges an imperative to strengthen AI and ML literacy in underserved communities and build a more diverse AI and ML design and development workforce engaged in health research. OBJECTIVE AI and ML have the potential to account for and assess a variety of factors that contribute to health and disease and to improve prevention, diagnosis, and therapy. Here, we describe recent activities within the Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) Ethics and Equity Workgroup (EEWG) that led to the development of deliverables that will help put ethics and fairness at the forefront of AI and ML applications to build equity in biomedical research, education, and health care. METHODS The AIM-AHEAD EEWG was created in 2021 with 3 cochairs and 51 members in year 1 and 2 cochairs and ~40 members in year 2. Members in both years included AIM-AHEAD principal investigators, coinvestigators, leadership fellows, and research fellows. The EEWG used a modified Delphi approach with polling, ranking, and other exercises to facilitate discussions around the tangible steps, key terms, and definitions needed to ensure that ethics and fairness are at the forefront of AI and ML applications to build equity in biomedical research, education, and health care. RESULTS The EEWG developed a set of ethics and equity principles, a glossary, and an interview guide. The ethics and equity principles comprise 5 core principles, each with subparts, which articulate best practices for working with stakeholders from historically and presently underrepresented communities. The glossary contains 12 terms and definitions, with particular emphasis on optimal development, refinement, and implementation of AI and ML in health equity research. To accompany the glossary, the EEWG developed a concept relationship diagram that describes the logical flow of, and relationships between, the definitional concepts. Lastly, the interview guide provides questions that can be used or adapted to garner stakeholder and community perspectives on the principles and glossary. CONCLUSIONS Ongoing engagement is needed around our principles and glossary to identify and predict potential limitations in their use in AI and ML research settings, especially for institutions with limited resources. This requires time, careful consideration, and honest discussion around what makes an engagement incentive meaningful enough to support and sustain full engagement. By slowing down to meet historically and presently underresourced institutions and communities where they are, and where they are capable of engaging and competing, there is higher potential to achieve the diversity, ethics, and equity needed in AI and ML implementation in health research.
Affiliation(s)
- Malaika Simmons
- National Alliance Against Disparities in Patient Health, Woodbridge, VA, United States
- Shilo Anders
- Vanderbilt University Medical Center, Nashville, TN, United States
- Joseph Coco
- Vanderbilt University Medical Center, Nashville, TN, United States
- Benjamin Collins
- Vanderbilt University Medical Center, Nashville, TN, United States
- Elizabeth Heitman
- University of Texas Southwestern Medical Center, Dallas, TX, United States
- Karuna Joshi
- University of Maryland, Baltimore County, Baltimore, MD, United States
- Anil Shanker
- Meharry Medical College, Nashville, TN, United States
- Talitha Washington
- AUC Data Science Initiative, Clark Atlanta University, Atlanta, GA, United States
- Gabriella Waters
- Morgan State University, Center for Equitable AI & Machine Learning Systems, Baltimore, MD, United States
- Rui Yin
- University of Florida, Gainesville, FL, United States
- Teresa Wagner
- University of North Texas Health Science Center, SaferCare Texas, Fort Worth, TX, United States
- Zhijun Yin
- Vanderbilt University Medical Center, Nashville, TN, United States
- Bradley Malin
- Vanderbilt University Medical Center, Nashville, TN, United States
9. van Breugel M, Fehrmann RSN, Bügel M, Rezwan FI, Holloway JW, Nawijn MC, Fontanella S, Custovic A, Koppelman GH. Current state and prospects of artificial intelligence in allergy. Allergy 2023; 78:2623-2643. [PMID: 37584170 DOI: 10.1111/all.15849]
Abstract
The field of medicine is witnessing exponential growth of interest in artificial intelligence (AI), which enables new research questions and the analysis of larger and new types of data. Nevertheless, applications that go beyond proof of concept and deliver clinical value remain rare, especially in the field of allergy. This narrative review provides a fundamental understanding of the core concepts of AI and critically discusses its limitations and open challenges, such as data availability and bias, along with potential directions to surmount them. We provide a conceptual framework to structure AI applications within this field and discuss forefront case examples. Most of these applications of AI and machine learning in allergy concern supervised learning and unsupervised clustering, with a strong emphasis on diagnosis and subtyping. We share a perspective on guidelines for good AI practice to guide readers in applying it effectively and safely, along with prospects for field advancement and initiatives to increase clinical impact. We anticipate that AI can further deepen our knowledge of disease mechanisms and contribute to precision medicine in allergy.
Affiliation(s)
- Merlijn van Breugel
- Department of Pediatric Pulmonology and Pediatric Allergology, Beatrix Children's Hospital, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Groningen Research Institute for Asthma and COPD (GRIAC), University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- MIcompany, Amsterdam, the Netherlands
- Rudolf S N Fehrmann
- Department of Medical Oncology, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Faisal I Rezwan
- Human Development and Health, Faculty of Medicine, University of Southampton, Southampton, UK
- Department of Computer Science, Aberystwyth University, Aberystwyth, UK
- John W Holloway
- Human Development and Health, Faculty of Medicine, University of Southampton, Southampton, UK
- National Institute for Health and Care Research Southampton Biomedical Research Centre, University Hospitals Southampton NHS Foundation Trust, Southampton, UK
- Martijn C Nawijn
- Groningen Research Institute for Asthma and COPD (GRIAC), University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Department of Pathology and Medical Biology, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Sara Fontanella
- National Heart and Lung Institute, Imperial College London, London, UK
- National Institute for Health and Care Research Imperial Biomedical Research Centre (BRC), London, UK
- Adnan Custovic
- National Heart and Lung Institute, Imperial College London, London, UK
- National Institute for Health and Care Research Imperial Biomedical Research Centre (BRC), London, UK
- Gerard H Koppelman
- Department of Pediatric Pulmonology and Pediatric Allergology, Beatrix Children's Hospital, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Groningen Research Institute for Asthma and COPD (GRIAC), University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
10. Addressing racial disparities in surgical care with machine learning. NPJ Digit Med 2022; 5:152. [PMID: 36180724 PMCID: PMC9525720 DOI: 10.1038/s41746-022-00695-6]
Abstract
There is ample evidence to demonstrate that discrimination against several population subgroups interferes with their ability to receive optimal surgical care. This bias can take many forms, including limited access to medical services, poor quality of care, and inadequate insurance coverage. While such inequalities will require numerous cultural, ethical, and sociological solutions, artificial intelligence-based algorithms may help address the problem by detecting bias in the data sets currently being used to make medical decisions. However, such AI-based solutions are only in early development. The purpose of this commentary is to serve as a call to action to encourage investigators and funding agencies to invest in the development of these digital tools.
11. Parbhoo S, Wawira Gichoya J, Celi LA, de la Hoz MÁA. Operationalising fairness in medical algorithms. BMJ Health Care Inform 2022; 29:bmjhci-2022-100617. [PMID: 35688512 PMCID: PMC9189822 DOI: 10.1136/bmjhci-2022-100617]
Affiliation(s)
- Sonali Parbhoo
- Harvard Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts, USA
- Judy Wawira Gichoya
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia, USA
- Leo Anthony Celi
- Laboratory for Computational Physiology, Harvard-MIT Division of Health Sciences and Technology, Cambridge, Massachusetts, USA
- Division of Pulmonary Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA