1. Young CC, Enichen E, Rao A, Hilker S, Butler A, Laird-Gion J, Succi MD. Pilot Study of Large Language Models as an Age-Appropriate Explanatory Tool for Chronic Pediatric Conditions. medRxiv [Preprint] 2024:2024.08.06.24311544. PMID: 39148860; PMCID: PMC11326333; DOI: 10.1101/2024.08.06.24311544.
Abstract
A gap exists in patient education resources for children with chronic conditions. This pilot study assesses large language models' (LLMs) capacity to deliver developmentally appropriate explanations of chronic conditions to pediatric patients. Two commonly used LLMs generated responses that accurately, appropriately, and effectively communicated complex medical information, making them a potentially valuable tool for enhancing patient understanding and engagement in clinical settings.
Affiliation(s)
- Cameron C. Young: Harvard Medical School, Boston, MA; Medically Engineered Solutions in Healthcare Incubator, Innovation in Operations Research Center, Mass General Brigham, Boston, MA
- Elizabeth Enichen: Harvard Medical School, Boston, MA; Medically Engineered Solutions in Healthcare Incubator, Innovation in Operations Research Center, Mass General Brigham, Boston, MA
- Arya Rao: Harvard Medical School, Boston, MA; Medically Engineered Solutions in Healthcare Incubator, Innovation in Operations Research Center, Mass General Brigham, Boston, MA
- Sidney Hilker: Harvard Medical School, Boston, MA; Boston Children’s Hospital, Boston, MA
- Alex Butler: Harvard Medical School, Boston, MA; Boston Children’s Hospital, Boston, MA
- Jessica Laird-Gion: Harvard Medical School, Boston, MA; Boston Children’s Hospital, Boston, MA
- Marc D. Succi: Harvard Medical School, Boston, MA; Medically Engineered Solutions in Healthcare Incubator, Innovation in Operations Research Center, Mass General Brigham, Boston, MA; Department of Radiology, Massachusetts General Hospital, Boston, MA
2. Pudjiadi AH, Alatas FS, Faizi M, Rusdi, Sulistijono E, Nency YM, Julia M, Baso AJA, Hartoyo E, Susanah S, Wilar R, Nugroho HW, Indrayady, Lubis BM, Haris S, Suparyatha IBG, Amarassaphira D, Monica E, Ongko L. Integration of Artificial Intelligence in Pediatric Education: Perspectives from Pediatric Medical Educators and Residents. Healthc Inform Res 2024;30:244-252. PMID: 39160783; PMCID: PMC11333820; DOI: 10.4258/hir.2024.30.3.244.
Abstract
OBJECTIVES The use of technology has rapidly increased in the past century. Artificial intelligence (AI) and information technology (IT) are now applied in healthcare and medical education. The purpose of this study was to assess the readiness of Indonesian teaching staff and pediatric residents for AI integration into the curriculum. METHODS An anonymous online survey was distributed among teaching staff and pediatric residents from 15 national universities. The questionnaire consisted of two sections: demographic information and questions regarding the use of IT and AI in child health education. Responses were collected using a 5-point Likert scale: strongly disagree, disagree, neutral, agree, and highly agree. RESULTS A total of 728 pediatric residents and 196 teaching staff from 15 national universities participated in the survey. Over half of the respondents were familiar with the terms IT and AI. The majority agreed that IT and AI have simplified the process of learning theories and skills. All participants were in favor of sharing data to facilitate the development of AI and expressed readiness to incorporate IT and AI into their teaching tools. CONCLUSIONS The findings of our study indicate that pediatric residents and teaching staff are ready to implement AI in medical education.
Affiliation(s)
- Antonius Hocky Pudjiadi: Department of Child Health, Faculty of Medicine, Universitas Indonesia, Cipto Mangunkusumo Hospital, Jakarta, Indonesia
- Fatima Safira Alatas: Department of Child Health, Faculty of Medicine, Universitas Indonesia, Cipto Mangunkusumo Hospital, Jakarta, Indonesia
- Muhammad Faizi: Department of Child Health, Faculty of Medicine, Universitas Airlangga, Surabaya, Indonesia
- Rusdi: Department of Child Health, Faculty of Medicine, Universitas Andalas, Padang, Indonesia
- Eko Sulistijono: Department of Child Health, Faculty of Medicine, Universitas Brawijaya, Malang, Indonesia
- Yetty Movieta Nency: Department of Child Health, Faculty of Medicine, Universitas Diponegoro, Semarang, Indonesia
- Madarina Julia: Department of Child Health, Faculty of Medicine, Universitas Gadjah Mada, Yogyakarta, Indonesia
- Edi Hartoyo: Department of Child Health, Faculty of Medicine, Universitas Lambung Mangkurat, Banjarmasin, Indonesia
- Susi Susanah: Department of Child Health, Faculty of Medicine, Universitas Padjadjaran, Sumedang, Indonesia
- Rocky Wilar: Department of Child Health, Faculty of Medicine, Universitas Sam Ratulangi, Manado, Indonesia
- Hari Wahyu Nugroho: Department of Child Health, Faculty of Medicine, Universitas Sebelas Maret, Surakarta, Indonesia
- Indrayady: Department of Child Health, Faculty of Medicine, Universitas Sriwijaya, Palembang, Indonesia
- Bugis Mardina Lubis: Department of Child Health, Faculty of Medicine, Universitas Sumatera Utara, Medan, Indonesia
- Syafruddin Haris: Department of Child Health, Faculty of Medicine, Universitas Syiah Kuala, Aceh, Indonesia
- Daniar Amarassaphira: Department of Child Health, Faculty of Medicine, Universitas Indonesia, Cipto Mangunkusumo Hospital, Jakarta, Indonesia
- Ervin Monica: Department of Child Health, Faculty of Medicine, Universitas Indonesia, Cipto Mangunkusumo Hospital, Jakarta, Indonesia
- Lukito Ongko: Department of Child Health, Faculty of Medicine, Universitas Indonesia, Cipto Mangunkusumo Hospital, Jakarta, Indonesia
3. Di Sarno L, Caroselli A, Tonin G, Graglia B, Pansini V, Causio FA, Gatto A, Chiaretti A. Artificial Intelligence in Pediatric Emergency Medicine: Applications, Challenges, and Future Perspectives. Biomedicines 2024;12:1220. PMID: 38927427; PMCID: PMC11200597; DOI: 10.3390/biomedicines12061220.
Abstract
The dawn of artificial intelligence (AI) in healthcare stands as a milestone in medical innovation. Many medical fields are heavily involved, and pediatric emergency medicine is no exception. We conducted a narrative review structured in two parts. The first part explores the theoretical principles of AI, providing the background needed to approach these state-of-the-art tools with confidence. The second part presents an informative analysis of AI models in pediatric emergencies. We examined PubMed and the Cochrane Library from inception up to April 2024. Key applications include triage optimization, predictive models for traumatic brain injury assessment, and computerized sepsis prediction systems. In each of these domains, AI models outperformed standard methods. The main barriers to widespread adoption include technological challenges, but also ethical issues, age-related differences in data interpretation, and the paucity of comprehensive datasets in the pediatric context. Feasible future research should address the validation of models on prospective datasets with larger patient samples. Furthermore, our analysis shows that it is essential to tailor AI algorithms to specific medical needs, which requires a close partnership between clinicians and developers. Building a shared knowledge platform is therefore a key step.
Affiliation(s)
- Lorenzo Di Sarno: Department of Pediatrics, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Università Cattolica del Sacro Cuore, 00168 Rome, Italy; The Italian Society of Artificial Intelligence in Medicine (SIIAM), 00165 Rome, Italy
- Anya Caroselli: Department of Pediatrics, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
- Giovanna Tonin: Department of Pediatrics, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, 00168 Rome, Italy
- Benedetta Graglia: Department of Pediatrics, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
- Valeria Pansini: Department of Pediatrics, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, 00168 Rome, Italy
- Francesco Andrea Causio: The Italian Society of Artificial Intelligence in Medicine (SIIAM), 00165 Rome, Italy; Section of Hygiene and Public Health, Department of Life Sciences and Public Health, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
- Antonio Gatto: The Italian Society of Artificial Intelligence in Medicine (SIIAM), 00165 Rome, Italy; Department of Pediatrics, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, 00168 Rome, Italy
- Antonio Chiaretti: Department of Pediatrics, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Università Cattolica del Sacro Cuore, 00168 Rome, Italy; The Italian Society of Artificial Intelligence in Medicine (SIIAM), 00165 Rome, Italy
4. Maris MT, Koçar A, Willems DL, Pols J, Tan HL, Lindinger GL, Bak MAR. Ethical use of artificial intelligence to prevent sudden cardiac death: an interview study of patient perspectives. BMC Med Ethics 2024;25:42. PMID: 38575931; PMCID: PMC10996273; DOI: 10.1186/s12910-024-01042-y.
Abstract
BACKGROUND The emergence of artificial intelligence (AI) in medicine has prompted the development of numerous ethical guidelines, while the involvement of patients in the creation of these documents lags behind. As part of the European PROFID project, we explore patient perspectives on the ethical implications of AI in care for patients at increased risk of sudden cardiac death (SCD). AIM To explore patients' perspectives on the ethical use of AI, particularly in clinical decision-making regarding the implantation of an implantable cardioverter-defibrillator (ICD). METHODS Semi-structured, future scenario-based interviews were conducted among patients who had an ICD and/or a heart condition with increased risk of SCD in Germany (n = 9) and the Netherlands (n = 15). We used the principles of the European Commission's Ethics Guidelines for Trustworthy AI to structure the interviews. RESULTS Six themes arose from the interviews: the ability of AI to rectify human doctors' limitations; the objectivity of data; whether AI can serve as a second opinion; AI explainability and patient trust; the importance of the 'human touch'; and the personalization of care. Overall, our results reveal a strong desire among patients for more personalized and patient-centered care in the context of ICD implantation. Participants expressed significant concerns about the further loss of the 'human touch' in healthcare when AI is introduced in clinical settings, an aspect of care they believe is currently inadequately recognized in clinical practice. Participants attribute to doctors the responsibility of evaluating AI recommendations for clinical relevance and aligning them with patients' individual contexts and values, in consultation with the patient. CONCLUSION The 'human touch' that patients exclusively ascribe to human medical practitioners extends beyond sympathy and kindness and has clinical relevance in medical decision-making. Because this cannot be replaced by AI, we suggest that normative research into the 'right to a human doctor' is needed. Furthermore, policies on patient-centered AI integration in clinical practice should encompass the ethics of everyday practice rather than only principle-based ethics. We suggest that an empirical ethics approach grounded in ethnographic research is exceptionally well-suited to pave the way forward.
Affiliation(s)
- Menno T Maris: Department of Ethics, Law and Humanities, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Ayca Koçar: Institute for Healthcare Management and Health Sciences, University of Bayreuth, Bayreuth, Germany
- Dick L Willems: Department of Ethics, Law and Humanities, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Jeannette Pols: Department of Ethics, Law and Humanities, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands; Department of Anthropology, University of Amsterdam, Amsterdam, The Netherlands
- Hanno L Tan: Department of Clinical and Experimental Cardiology, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands; Netherlands Heart Institute, Utrecht, The Netherlands
- Georg L Lindinger: Institute for Healthcare Management and Health Sciences, University of Bayreuth, Bayreuth, Germany
- Marieke A R Bak: Department of Ethics, Law and Humanities, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands; Institute of History and Ethics in Medicine, TUM School of Medicine, Technical University of Munich, Munich, Germany
5. Haley LC, Boyd AK, Hebballi NB, Reynolds EW, Smith KG, Scully PT, Nguyen TL, Bernstam EV, Li LT. Attitudes on Artificial Intelligence use in Pediatric Care From Parents of Hospitalized Children. J Surg Res 2024;295:158-167. PMID: 38016269; DOI: 10.1016/j.jss.2023.10.027.
Abstract
INTRODUCTION Artificial intelligence (AI) may benefit pediatric healthcare, but it also raises ethical and pragmatic questions. Parental support is important for the advancement of AI in pediatric medicine. However, there is little literature describing parental attitudes toward AI in pediatric healthcare, and existing studies do not represent parents of hospitalized children well. METHODS We administered the Attitudes toward Artificial Intelligence in Pediatric Healthcare survey, a validated instrument, to parents of hospitalized children in a single tertiary children's hospital. Surveys were administered by trained study personnel (11/2/2021-5/1/2022), and demographic data were collected. An Attitudes toward Artificial Intelligence in Pediatric Healthcare score, assessing openness toward AI-assisted medicine, was calculated for seven areas of concern. Subgroup analyses using Mann-Whitney U tests assessed the effect of race, gender, education, insurance, length of stay, and intensive care unit (ICU) admission on attitudes toward AI use. RESULTS We approached 90 parents and conducted 76 surveys, for a response rate of 84%. Overall, parents were open to the use of AI in pediatric medicine. Social justice, convenience, privacy, and shared decision-making were important concerns. Attitudes differed most significantly between parents of children admitted to an ICU and parents of children who were not. CONCLUSIONS Parents were overall supportive of AI-assisted healthcare decision-making. In particular, parents of children admitted to an ICU hold significantly different attitudes, and further study is needed to characterize these differences. Parents value transparency, and disclosure pathways should be developed to support this expectation.
Affiliation(s)
- Lauren C Haley: Department of Pediatric Surgery, McGovern Medical School at the University of Texas Health Science Center at Houston, Houston, Texas
- Alexandra K Boyd: Department of Pediatric Surgery, McGovern Medical School at the University of Texas Health Science Center at Houston, Houston, Texas
- Nutan B Hebballi: Department of Pediatric Surgery, McGovern Medical School at the University of Texas Health Science Center at Houston, Houston, Texas
- Eric W Reynolds: Department of Pediatrics, McGovern Medical School at the University of Texas Health Science Center at Houston, Houston, Texas
- Keely G Smith: Department of Pediatrics, McGovern Medical School at the University of Texas Health Science Center at Houston, Houston, Texas
- Peter T Scully: Department of Pediatrics, McGovern Medical School at the University of Texas Health Science Center at Houston, Houston, Texas
- Thao L Nguyen: Department of Pediatrics, McGovern Medical School at the University of Texas Health Science Center at Houston, Houston, Texas
- Elmer V Bernstam: Department of Pediatric Surgery, McGovern Medical School at the University of Texas Health Science Center at Houston, Houston, Texas; School of Biomedical Informatics, University of Texas at Houston, Houston, Texas
- Linda T Li: Division of Pediatric Surgery, Department of Surgery, Icahn School of Medicine at Mount Sinai, New York, New York
6. Berghea EC, Ionescu MD, Gheorghiu RM, Tincu IF, Cobilinschi CO, Craiu M, Bălgrădean M, Berghea F. Integrating Artificial Intelligence in Pediatric Healthcare: Parental Perceptions and Ethical Implications. Children (Basel) 2024;11:240. PMID: 38397353; PMCID: PMC10887612; DOI: 10.3390/children11020240.
Abstract
BACKGROUND Our study aimed to explore how artificial intelligence (AI) is perceived in pediatric medicine, to examine its acceptance among patients (here represented by their adult parents), and to identify the challenges it presents, in order to understand the factors influencing its adoption in clinical settings. METHODS A structured questionnaire was administered to caregivers (parents or grandparents) of children who presented to tertiary pediatric clinics. RESULTS The most significant differences were identified in relation to level of education (e.g., aversion to AI involvement was 22.2% among those with postgraduate degrees, 43.9% among those with university degrees, and 54.5% among those who only completed high school). Respondents' greatest fear regarding the medical use of AI was the possibility of errors occurring (70.1%). CONCLUSIONS The general attitude toward the use of AI can be considered positive, provided that it remains human-supervised and that the technology used is explained in detail by the physician. However, there were large differences among groups (mainly defined by education level) in the way AI is perceived and accepted.
Affiliation(s)
- Elena Camelia Berghea: “Marie S. Curie” Emergency Children’s Clinical Hospital, Carol Davila University of Medicine and Pharmacy, 041451 Bucharest, Romania
- Marcela Daniela Ionescu: “Marie S. Curie” Emergency Children’s Clinical Hospital, Carol Davila University of Medicine and Pharmacy, 041451 Bucharest, Romania
- Radu Marian Gheorghiu: National Institute for Mother and Child Health “Alessandrescu-Rusescu”, Carol Davila University of Medicine and Pharmacy, 041249 Bucharest, Romania
- Iulia Florentina Tincu: Dr. Victor Gomoiu Clinical Children Hospital, Carol Davila University of Medicine and Pharmacy, 022102 Bucharest, Romania
- Claudia Oana Cobilinschi: Sfanta Maria Clinical Hospital, Carol Davila University of Medicine and Pharmacy, 011172 Bucharest, Romania
- Mihai Craiu: National Institute for Mother and Child Health “Alessandrescu-Rusescu”, Carol Davila University of Medicine and Pharmacy, 041249 Bucharest, Romania
- Mihaela Bălgrădean: “Marie S. Curie” Emergency Children’s Clinical Hospital, Carol Davila University of Medicine and Pharmacy, 041451 Bucharest, Romania
- Florian Berghea: Sfanta Maria Clinical Hospital, Carol Davila University of Medicine and Pharmacy, 011172 Bucharest, Romania
7. Racine N, Chow C, Hamwi L, Bucsea O, Cheng C, Du H, Fabrizi L, Jasim S, Johannsson L, Jones L, Laudiano-Dray MP, Meek J, Mistry N, Shah V, Stedman I, Wang X, Riddell RP. Health Care Professionals' and Parents' Perspectives on the Use of AI for Pain Monitoring in the Neonatal Intensive Care Unit: Multisite Qualitative Study. JMIR AI 2024;3:e51535. PMID: 38875686; PMCID: PMC11041412; DOI: 10.2196/51535.
Abstract
BACKGROUND The use of artificial intelligence (AI) for pain assessment has the potential to address historical challenges in infant pain assessment. There is a dearth of information on the perceived benefits of, and barriers to, the implementation of AI for neonatal pain monitoring in the neonatal intensive care unit (NICU) from the perspective of health care professionals (HCPs) and parents. This qualitative analysis provides novel data obtained from 2 large tertiary care hospitals in Canada and the United Kingdom. OBJECTIVE The aim of the study was to explore the perspectives of HCPs and parents regarding the use of AI for pain assessment in the NICU. METHODS In total, 20 HCPs and 20 parents of preterm infants were recruited from February 2020 to October 2022 and consented to interviews about AI use for pain assessment in the NICU, the potential benefits of the technology, and potential barriers to its use. RESULTS The 40 participants included 20 HCPs (17 women and 3 men) with an average of 19.4 (SD 10.69) years of experience in the NICU, and 20 parents (mean age 34.4, SD 5.42 years) of preterm infants who were on average 43 (SD 30.34) days old. Six themes were identified from the HCP interviews: regular use of technology in the NICU, concerns about AI integration, the potential to improve patient care, requirements for implementation, AI as a tool for pain assessment, and ethical considerations. Seven parent themes included the potential for improved care, increased parental distress, support for parents regarding AI, the impact on parent engagement, the importance of human care, requirements for integration, and the desire for choice in its use. A consistent theme was the importance of AI as a tool to inform clinical decision-making, not replace it. CONCLUSIONS HCPs and parents expressed generally positive sentiments about the potential use of AI for pain assessment in the NICU, with HCPs highlighting important ethical considerations. This study identifies critical methodological and ethical perspectives from key stakeholders that should be noted by any team considering the creation and implementation of AI for pain monitoring in the NICU.
Affiliation(s)
- Nicole Racine: School of Psychology, University of Ottawa, Children's Hospital of Eastern Ontario Research Institute, Ottawa, ON, Canada
- Cheryl Chow: Department of Psychology, York University, Toronto, ON, Canada
- Lojain Hamwi: Department of Psychology, York University, Toronto, ON, Canada
- Oana Bucsea: Department of Psychology, York University, Toronto, ON, Canada
- Carol Cheng: Department of Nursing, Mount Sinai Hospital, Toronto, ON, Canada
- Hang Du: Department of Mathematics and Statistics, York University, Toronto, ON, Canada
- Lorenzo Fabrizi: Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Sara Jasim: Department of Psychology, York University, Toronto, ON, Canada
- Laura Jones: Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Maria Pureza Laudiano-Dray: Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Judith Meek: Neonatal Care Unit, University College London Hospitals, London, United Kingdom
- Neelum Mistry: Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Vibhuti Shah: Department of Pediatrics, Mount Sinai Hospital, Toronto, ON, Canada
- Ian Stedman: School of Public Policy and Administration, York University, Toronto, ON, Canada
- Xiaogang Wang: Department of Mathematics and Statistics, York University, Toronto, ON, Canada
8. Coghlan S, Gyngell C, Vears DF. Ethics of artificial intelligence in prenatal and pediatric genomic medicine. J Community Genet 2024;15:13-24. PMID: 37796364; PMCID: PMC10857992; DOI: 10.1007/s12687-023-00678-4.
Abstract
This paper examines the ethics of introducing emerging forms of artificial intelligence (AI) into prenatal and pediatric genomic medicine. Application of genomic AI to these early life settings has not received much attention in the ethics literature. We focus on three contexts: (1) prenatal genomic sequencing for possible fetal abnormalities, (2) rapid genomic sequencing for critically ill children, and (3) reanalysis of genomic data obtained from children for diagnostic purposes. The paper identifies and discusses various ethical issues in the possible application of genomic AI in these settings, especially as they relate to concepts of beneficence, nonmaleficence, respect for autonomy, justice, transparency, accountability, privacy, and trust. The examination will inform the ethically sound introduction of genomic AI in early human life.
Affiliation(s)
- Simon Coghlan: School of Computing and Information Systems (CIS), Centre for AI and Digital Ethics (CAIDE), The University of Melbourne, Grattan St, Melbourne, Victoria 3010, Australia; Australian Research Council Centre of Excellence for Automated Decision Making and Society (ADM+S), Melbourne, Victoria, Australia
- Christopher Gyngell: Biomedical Ethics Research Group, Murdoch Children's Research Institute, The Royal Children's Hospital, 50 Flemington Rd, Parkville, Victoria 3052, Australia; University of Melbourne, Parkville, Victoria 3052, Australia
- Danya F Vears: Biomedical Ethics Research Group, Murdoch Children's Research Institute, The Royal Children's Hospital, 50 Flemington Rd, Parkville, Victoria 3052, Australia; University of Melbourne, Parkville, Victoria 3052, Australia; Centre for Biomedical Ethics and Law, KU Leuven, Kapucijnenvoer 35, 3000 Leuven, Belgium
9. Giddings R, Joseph A, Callender T, Janes SM, van der Schaar M, Sheringham J, Navani N. Factors influencing clinician and patient interaction with machine learning-based risk prediction models: a systematic review. Lancet Digit Health 2024;6:e131-e144. PMID: 38278615; DOI: 10.1016/S2589-7500(23)00241-8.
Abstract
Machine learning (ML)-based risk prediction models hold the potential to support the health-care setting in several ways; however, use of such models remains scarce. We aimed to review health-care professional (HCP) and patient perceptions of ML risk prediction models in the published literature, to inform future risk prediction model development. Following database and citation searches, we identified 41 articles suitable for inclusion. Article quality varied, with qualitative studies performing strongest. Overall, perceptions of ML risk prediction models were positive: HCPs and patients considered that the models have the potential to add benefit in the health-care setting. However, reservations remain, such as concerns regarding data quality for model development and fears of unintended consequences following ML model use. We identified that public views regarding these models might be more negative than those of HCPs, and that concerns (eg, extra demands on workload) were not always borne out in practice. Conclusions are tempered by the low number of patient and public studies, the absence of participant ethnic diversity, and variation in article quality. We identified gaps in knowledge (particularly views from under-represented groups) and in optimum methods for model explanation and alerts, which require future research.
Affiliation(s)
- Rebecca Giddings: Lungs for Living Research Centre, UCL Respiratory, University College London, London, UK
- Anabel Joseph: Lungs for Living Research Centre, UCL Respiratory, University College London, London, UK
- Thomas Callender: Lungs for Living Research Centre, UCL Respiratory, University College London, London, UK
- Sam M Janes: Lungs for Living Research Centre, UCL Respiratory, University College London, London, UK
- Mihaela van der Schaar: Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, UK; The Alan Turing Institute, London, UK
- Jessica Sheringham: Department of Applied Health Research, University College London, London, UK
- Neal Navani: Lungs for Living Research Centre, UCL Respiratory, University College London, London, UK
10. Li LT, Haley LC, Boyd AK, Bernstam EV. Technical/Algorithm, Stakeholder, and Society (TASS) barriers to the application of artificial intelligence in medicine: A systematic review. J Biomed Inform 2023;147:104531. PMID: 37884177; DOI: 10.1016/j.jbi.2023.104531.
Abstract
INTRODUCTION The use of artificial intelligence (AI), particularly machine learning and predictive analytics, has shown great promise in health care. Despite its strong potential, there has been limited use in health care settings. In this systematic review, we aim to determine the main barriers to successful implementation of AI in healthcare and discuss potential ways to overcome these challenges. METHODS We conducted a literature search in PubMed (1/1/2001-1/1/2023). The search was restricted to publications in the English language, and human study subjects. We excluded articles that did not discuss AI, machine learning, predictive analytics, and barriers to the use of these techniques in health care. Using grounded theory methodology, we abstracted concepts to identify major barriers to AI use in medicine. RESULTS We identified a total of 2,382 articles. After reviewing the 306 included papers, we developed 19 major themes, which we categorized into three levels: the Technical/Algorithm, Stakeholder, and Social levels (TASS). These themes included: Lack of Explainability, Need for Validation Protocols, Need for Standards for Interoperability, Need for Reporting Guidelines, Need for Standardization of Performance Metrics, Lack of Plan for Updating Algorithm, Job Loss, Skills Loss, Workflow Challenges, Loss of Patient Autonomy and Consent, Disturbing the Patient-Clinician Relationship, Lack of Trust in AI, Logistical Challenges, Lack of Strategic Plan, Lack of Cost-effectiveness Analysis and Proof of Efficacy, Privacy, Liability, Bias and Social Justice, and Education. CONCLUSION We identified 19 major barriers to the use of AI in healthcare and categorized them into three levels: the Technical/Algorithm, Stakeholder, and Social levels (TASS). Future studies should expand on barriers in pediatric care and focus on developing clearly defined protocols to overcome these barriers.
Collapse
Affiliation(s)
- Linda T Li
- Department of Surgery, Division of Pediatric Surgery, Icahn School of Medicine at Mount Sinai, 1 Gustave L. Levy Pl, New York, NY 10029, United States; McWilliams School of Biomedical Informatics at UT Health Houston, 7000 Fannin St, Suite 600, Houston, TX 77030, United States.
| | - Lauren C Haley
- McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States.
| | - Alexandra K Boyd
- McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States.
| | - Elmer V Bernstam
- McWilliams School of Biomedical Informatics at UT Health Houston, 7000 Fannin St, Suite 600, Houston, TX 77030, United States; McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States.
| |
Collapse
|
11
|
Ahmed L, Constantinidou A, Chatzittofis A. Patients' perspectives related to ethical issues and risks in precision medicine: a systematic review. Front Med (Lausanne) 2023; 10:1215663. [PMID: 37396896 PMCID: PMC10310545 DOI: 10.3389/fmed.2023.1215663] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Accepted: 06/01/2023] [Indexed: 07/04/2023] Open
Abstract
Background Precision medicine is growing due to technological advancements, including next-generation sequencing techniques and artificial intelligence. However, with the application of precision medicine, many ethical issues and potential risks may emerge. Although its benefits and potential harms are relatively well-known to professional societies and practitioners, patients' attitudes toward these potential ethical risks are not well-known. The aim of this systematic review was to focus on patients' perspectives on the ethics and risks that may arise with the application of precision medicine. Methods A systematic search was conducted on 4/1/2023 in the PubMed database for the period 1/1/2012 to 4/1/2023, identifying 914 articles. After initial screening, only 50 articles were found to be relevant. Of these 50 articles, 24 were included in this systematic review; 2 were excluded as not in the English language, 1 was a review, and 23 did not include enough relevant qualitative data regarding our research question. All full texts were evaluated following the PRISMA guidelines for reporting systematic reviews and the Joanna Briggs Institute criteria. Results Eight main themes emerged from the patients' point of view regarding the ethical concerns and risks of precision medicine: privacy and security of patient data, economic impact on patients, possible harms of precision medicine including psychosocial harms, risk of discrimination against certain groups, risks in the process of acquiring informed consent, mistrust in the provider and in medical research, issues with the diagnostic accuracy of precision medicine, and changes in the doctor-patient relationship. Conclusion Ethical issues and potential risks related to the application of precision medicine are important to patients and need to be addressed through patient education, dedicated research, and official policies. Further research is needed to validate these results, and awareness of these findings can guide clinicians in understanding and addressing patients' concerns in clinical practice.
Collapse
Affiliation(s)
- Lawko Ahmed
- Medical School, University of Cyprus, Nicosia, Cyprus
| | | | - Andreas Chatzittofis
- Medical School, University of Cyprus, Nicosia, Cyprus
- Department of Clinical Sciences and Psychiatry, Umeå University, Umeå, Sweden
| |
Collapse
|
12
|
Subasri M, Cressman C, Arje D, Schreyer L, Cooper E, Patel K, Ungar WJ, Barwick M, Denburg A, Hayeems RZ. Translating Precision Health for Pediatrics: A Scoping Review. CHILDREN (BASEL, SWITZERLAND) 2023; 10:897. [PMID: 37238445 PMCID: PMC10217253 DOI: 10.3390/children10050897] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/20/2023] [Revised: 05/09/2023] [Accepted: 05/11/2023] [Indexed: 05/28/2023]
Abstract
Precision health aims to personalize treatment and prevention strategies based on individual genetic differences. While it has significantly improved healthcare for specific patient groups, broader translation faces challenges with evidence development, evidence appraisal, and implementation. These challenges are compounded in child health as existing methods fail to incorporate the physiology and socio-biology unique to childhood. This scoping review synthesizes the existing literature on evidence development, appraisal, prioritization, and implementation of precision child health. PubMed, Scopus, Web of Science, and Embase were searched. The included articles were related to pediatrics, precision health, and the translational pathway. Articles were excluded if they were too narrow in scope. In total, 74 articles identified challenges and solutions for putting pediatric precision health interventions into practice. The literature reinforced the unique attributes of children and their implications for study design and identified major themes for the value assessment of precision health interventions for children, including clinical benefit, cost-effectiveness, stakeholder values and preferences, and ethics and equity. Tackling these identified challenges will require developing international data networks and guidelines, re-thinking methods for value assessment, and broadening stakeholder support for the effective implementation of precision health within healthcare organizations. This research was funded by the SickKids Precision Child Health Catalyst Grant.
Collapse
Affiliation(s)
- Mathushan Subasri
- Child Health Evaluative Sciences Program, The Hospital for Sick Children Research Institute, Toronto, ON M5G 1X8, Canada; (M.S.); (C.C.); (D.A.); (L.S.); (E.C.); (K.P.); (W.J.U.); (M.B.); (A.D.)
| | - Celine Cressman
- Child Health Evaluative Sciences Program, The Hospital for Sick Children Research Institute, Toronto, ON M5G 1X8, Canada; (M.S.); (C.C.); (D.A.); (L.S.); (E.C.); (K.P.); (W.J.U.); (M.B.); (A.D.)
| | - Danielle Arje
- Child Health Evaluative Sciences Program, The Hospital for Sick Children Research Institute, Toronto, ON M5G 1X8, Canada; (M.S.); (C.C.); (D.A.); (L.S.); (E.C.); (K.P.); (W.J.U.); (M.B.); (A.D.)
- Department of Paediatrics, University of Toronto, Toronto, ON M5G 1X8, Canada
| | - Leighton Schreyer
- Child Health Evaluative Sciences Program, The Hospital for Sick Children Research Institute, Toronto, ON M5G 1X8, Canada; (M.S.); (C.C.); (D.A.); (L.S.); (E.C.); (K.P.); (W.J.U.); (M.B.); (A.D.)
| | - Erin Cooper
- Child Health Evaluative Sciences Program, The Hospital for Sick Children Research Institute, Toronto, ON M5G 1X8, Canada; (M.S.); (C.C.); (D.A.); (L.S.); (E.C.); (K.P.); (W.J.U.); (M.B.); (A.D.)
| | - Komal Patel
- Child Health Evaluative Sciences Program, The Hospital for Sick Children Research Institute, Toronto, ON M5G 1X8, Canada; (M.S.); (C.C.); (D.A.); (L.S.); (E.C.); (K.P.); (W.J.U.); (M.B.); (A.D.)
| | - Wendy J. Ungar
- Child Health Evaluative Sciences Program, The Hospital for Sick Children Research Institute, Toronto, ON M5G 1X8, Canada; (M.S.); (C.C.); (D.A.); (L.S.); (E.C.); (K.P.); (W.J.U.); (M.B.); (A.D.)
- Institute for Health Policy, Management and Evaluation, University of Toronto, Toronto, ON M5T 3M6, Canada
| | - Melanie Barwick
- Child Health Evaluative Sciences Program, The Hospital for Sick Children Research Institute, Toronto, ON M5G 1X8, Canada; (M.S.); (C.C.); (D.A.); (L.S.); (E.C.); (K.P.); (W.J.U.); (M.B.); (A.D.)
- Institute for Health Policy, Management and Evaluation, University of Toronto, Toronto, ON M5T 3M6, Canada
| | - Avram Denburg
- Child Health Evaluative Sciences Program, The Hospital for Sick Children Research Institute, Toronto, ON M5G 1X8, Canada; (M.S.); (C.C.); (D.A.); (L.S.); (E.C.); (K.P.); (W.J.U.); (M.B.); (A.D.)
- Institute for Health Policy, Management and Evaluation, University of Toronto, Toronto, ON M5T 3M6, Canada
- Division of Haematology/Oncology, Hospital for Sick Children, University of Toronto, Toronto, ON M5G 1X8, Canada
| | - Robin Z. Hayeems
- Child Health Evaluative Sciences Program, The Hospital for Sick Children Research Institute, Toronto, ON M5G 1X8, Canada; (M.S.); (C.C.); (D.A.); (L.S.); (E.C.); (K.P.); (W.J.U.); (M.B.); (A.D.)
- Institute for Health Policy, Management and Evaluation, University of Toronto, Toronto, ON M5T 3M6, Canada
| |
Collapse
|
13
|
Visram S, Leyden D, Annesley O, Bappa D, Sebire NJ. Engaging children and young people on the potential role of artificial intelligence in medicine. Pediatr Res 2023; 93:440-444. [PMID: 35393524 PMCID: PMC9937917 DOI: 10.1038/s41390-022-02053-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/11/2021] [Revised: 02/15/2022] [Accepted: 03/21/2022] [Indexed: 11/08/2022]
Abstract
INTRODUCTION There is increasing interest in artificial intelligence (AI) and its application to medicine. Less is known about perceptions of AI, notably amongst children and young people (CYP). This workshop investigated attitudes towards AI and its future applications in medicine and healthcare at a specialised paediatric hospital using practical design scenarios. METHOD Twenty-one members of a Young Persons Advisory Group for research contributed to an engagement workshop to ascertain potential opportunities, apprehensions, and priorities. RESULTS When presented with a selection of practical design scenarios, CYP were more open to some applications of AI in healthcare than others. Human-centredness, governance, and trust emerged as early themes, with empathy and safety considered important when introducing AI to healthcare. Educational workshops with practical examples, using AI to help but not replace humans, were suggested to address issues, build trust, and communicate effectively about AI. CONCLUSION Whilst policy guidelines acknowledge the need to include children and young people in developing AI, this requires an enabling environment for human-centred AI that involves children and young people with lived experience of healthcare. Future research should focus on building consensus on enablers for an intelligent healthcare system designed for the next generation, one that fundamentally allows co-creation. IMPACT Children and young people (CYP) want to be included and to share their insights on the development of research into the potential role of artificial intelligence (AI) in medicine and healthcare, and they are more open to some applications of AI than others. Whilst it is acknowledged that a research gap exists on involving and engaging CYP in developing AI policies, there is little in the way of pragmatic, practical guidance for healthcare staff on this topic. This requires research on enabling environments for ongoing digital cooperation to identify and prioritise unmet needs in the application and development of AI.
Collapse
Affiliation(s)
- Sheena Visram
- Department of Computer Science | UCL Interaction Centre, University College London, London, UK.
- DRIVE Centre, Great Ormond Street Hospital for Children, London, UK.
| | - Deirdre Leyden
- Young Persons Advisory Group (YPAG), Great Ormond Street Hospital for Children, London, UK
| | - Oceiah Annesley
- Young Persons Advisory Group (YPAG), Great Ormond Street Hospital for Children, London, UK
| | - Dauda Bappa
- Young Persons Advisory Group (YPAG), Great Ormond Street Hospital for Children, London, UK
| | - Neil J Sebire
- DRIVE Centre, Great Ormond Street Hospital for Children, London, UK
| |
Collapse
|
14
|
Alexander N, Aftandilian C, Guo LL, Plenert E, Posada J, Fries J, Fleming S, Johnson A, Shah N, Sung L. Perspective Toward Machine Learning Implementation in Pediatric Medicine: Mixed Methods Study. JMIR Med Inform 2022; 10:e40039. [DOI: 10.2196/40039] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Revised: 09/15/2022] [Accepted: 10/10/2022] [Indexed: 11/19/2022] Open
Abstract
Background
Given the costs of machine learning implementation, a systematic approach to prioritizing which models to implement into clinical practice may be valuable.
Objective
The primary objective was to determine the health care attributes respondents at 2 pediatric institutions rate as important when prioritizing machine learning model implementation. The secondary objective was to describe their perspectives on implementation using a qualitative approach.
Methods
In this mixed methods study, we distributed a survey to health system leaders, physicians, and data scientists at 2 pediatric institutions. We asked respondents to rank the following 5 attributes in terms of implementation usefulness: the clinical problem is common; the clinical problem causes substantial morbidity and mortality; risk stratification leads to different actions that could reasonably improve patient outcomes; implementation reduces physician workload; and implementation saves money. Important attributes were those ranked as first or second most important. Individual qualitative interviews were conducted with a subsample of respondents.
Results
Among 613 eligible respondents, 275 (44.9%) responded. Qualitative interviews were conducted with 17 respondents. The attributes most often rated as important were risk stratification leading to different actions (205/275, 74.5%) and the clinical problem causing substantial morbidity or mortality (177/275, 64.4%). The attributes considered least important were reducing physician workload and saving money. Qualitative interviews consistently prioritized implementations that improved patient outcomes.
Conclusions
Respondents prioritized machine learning model implementation where risk stratification would lead to different actions and clinical problems that caused substantial morbidity and mortality. Implementations that improved patient outcomes were prioritized. These results can help provide a framework for machine learning model implementation.
Collapse
|
15
|
Scott IA, Carter SM, Coiera E. Exploring stakeholder attitudes towards AI in clinical practice. BMJ Health Care Inform 2021; 28:bmjhci-2021-100450. [PMID: 34887331 PMCID: PMC8663096 DOI: 10.1136/bmjhci-2021-100450] [Citation(s) in RCA: 42] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Accepted: 11/14/2021] [Indexed: 12/31/2022] Open
Abstract
Objectives Different stakeholders may hold varying attitudes towards artificial intelligence (AI) applications in healthcare, which may constrain their acceptance if AI developers fail to take them into account. We set out to ascertain evidence of the attitudes of clinicians, consumers, managers, researchers, regulators and industry towards AI applications in healthcare. Methods We undertook an exploratory analysis of articles whose titles or abstracts contained the terms ‘artificial intelligence’ or ‘AI’ and ‘medical’ or ‘healthcare’ and ‘attitudes’, ‘perceptions’, ‘opinions’, ‘views’, ‘expectations’. Using a snowballing strategy, we searched PubMed and Google Scholar for articles published 1 January 2010 through 31 May 2021. We selected articles relating to non-robotic clinician-facing AI applications used to support healthcare-related tasks or decision-making. Results Across 27 studies, attitudes towards AI applications in healthcare, in general, were positive, more so for those with direct experience of AI, but provided certain safeguards were met. AI applications which automated data interpretation and synthesis were regarded more favourably by clinicians and consumers than those that directly influenced clinical decisions or potentially impacted clinician–patient relationships. Privacy breaches and personal liability for AI-related error worried clinicians, while loss of clinician oversight and inability to fully share in decision-making worried consumers. Both clinicians and consumers wanted AI-generated advice to be trustworthy, while industry groups emphasised AI benefits and wanted more data, funding and regulatory certainty. Discussion Certain expectations of AI applications were common to many stakeholder groups, from which a set of dependencies can be defined. Conclusion Stakeholders differ in some but not all of their attitudes towards AI. Those developing and implementing applications should consider policies and processes that bridge attitudinal disconnects between different stakeholders.
Collapse
Affiliation(s)
- Ian A Scott
- Internal Medicine and Clinical Epidemiology, Princess Alexandra Hospital, Woolloongabba, Queensland, Australia; School of Clinical Medicine, University of Queensland, Brisbane, Queensland, Australia
| | - Stacy M Carter
- Australian Centre for Health Engagement Evidence and Values, School of Health and Society, University of Wollongong, Wollongong, New South Wales, Australia
| | - Enrico Coiera
- Centre for Clinical Informatics, Macquarie University, Sydney, New South Wales, Australia
| |
Collapse
|