1. Muralidharan V, Schamroth J, Youssef A, Celi LA, Daneshjou R. Applied artificial intelligence for global child health: Addressing biases and barriers. PLOS Digit Health 2024; 3:e0000583. PMID: 39172772; PMCID: PMC11340888; DOI: 10.1371/journal.pdig.0000583.
Abstract
Given the potential benefits of artificial intelligence and machine learning (AI/ML) within healthcare, it is critical to consider how these technologies can be deployed in pediatric research and practice. Currently, healthcare AI/ML has not yet adapted to the specific technical considerations related to pediatric data, nor has it adequately addressed the specific vulnerabilities of children and young people (CYP) in relation to AI. While the greatest burden of disease in CYP is firmly concentrated in lower- and middle-income countries (LMICs), existing applied pediatric AI/ML efforts are concentrated in a small number of high-income countries (HICs). In LMICs, use-cases remain primarily in the proof-of-concept stage. This narrative review identifies a number of intersecting challenges that pose barriers to effective AI/ML for CYP globally and explores the shifts needed to make progress across multiple domains. Child-specific technical considerations throughout the AI/ML lifecycle have been largely overlooked thus far, yet these can be critical to model effectiveness. Governance concerns are paramount, with suitable national and international frameworks and guidance required to enable the safe and responsible deployment of advanced technologies that impact the care of CYP and use their data. An ambitious vision for child health demands that the potential benefits of AI/ML are realized universally through greater international collaboration, capacity building, strong oversight, and ultimately diffusing the AI/ML locus of power to empower researchers and clinicians globally. To ensure that AI/ML systems do not exacerbate inequalities in pediatric care, teams researching and developing these technologies in LMICs must ensure that AI/ML research is inclusive of the needs and concerns of CYP and their caregivers.
A broad, interdisciplinary, and human-centered approach to AI/ML is essential for developing tools for healthcare workers delivering care, such that the creation and deployment of ML is grounded in local systems, cultures, and clinical practice. Decisions to invest in developing and testing pediatric AI/ML in resource-constrained settings must always be part of a broader evaluation of the overall needs of a healthcare system, considering the critical building blocks underpinning effective, sustainable, and cost-efficient healthcare delivery for CYP.
Affiliation(s)
- Vijaytha Muralidharan
  - Department of Dermatology, Stanford University, Stanford, California, United States of America
- Joel Schamroth
  - Faculty of Population Health Sciences, University College London, London, United Kingdom
- Alaa Youssef
  - Stanford Center for Artificial Intelligence in Medicine and Imaging, Department of Radiology, Stanford University, Stanford, California, United States of America
- Leo A. Celi
  - Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
  - Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, United States of America
  - Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, United States of America
- Roxana Daneshjou
  - Department of Dermatology, Stanford University, Stanford, California, United States of America
  - Department of Biomedical Data Science, Stanford University, Stanford, California, United States of America
2. Lee L, Salami RK, Martin H, Shantharam L, Thomas K, Ashworth E, Allan E, Yung KW, Pauling C, Leyden D, Arthurs OJ, Shelmerdine SC. "How I would like AI used for my imaging": children and young persons' perspectives. Eur Radiol 2024. PMID: 38900281; DOI: 10.1007/s00330-024-10839-9.
Abstract
OBJECTIVES Artificial intelligence (AI) tools are becoming more available in modern healthcare, particularly in radiology, although less attention has been paid to applications for children and young people. In the development of these tools, it is critical that their views are heard. MATERIALS AND METHODS A national, online survey was publicised to UK schools, universities and charity partners, encouraging any child or young adult to participate. The survey was "live" for one year (June 2022 to June 2023). Questions about views of AI in general, and in specific circumstances (e.g. bone fractures), were asked. RESULTS One hundred and seventy-one eligible responses were received, with a mean age of 19 years (range 6-23 years) and representation across all 4 UK nations. Most respondents agreed or strongly agreed that they wanted to know the accuracy of an AI tool that was being used (122/171, 71.3%), that accuracy was more important than speed (113/171, 66.1%), and that AI should be used with human oversight (110/171, 64.3%). Many respondents (73/171, 42.7%) felt AI would be more accurate at finding problems on bone X-rays than humans, with almost all respondents who had sustained a missed fracture strongly agreeing with that sentiment (12/14, 85.7%). CONCLUSIONS Children and young people in our survey had positive views regarding AI and felt it should be integrated into modern healthcare, but expressed a preference for a "medical professional in the loop" and for accuracy of findings over speed. Key themes regarding information on AI performance and governance were raised and should be considered prior to future AI implementation in paediatric healthcare. CLINICAL RELEVANCE STATEMENT Artificial intelligence (AI) integration into clinical practice must consider all stakeholders, especially paediatric patients, who have largely been ignored. Children and young people favour AI involvement with human oversight and seek assurances of safety, accuracy, and clear accountability in case of failures.
KEY POINTS Paediatric patients' needs and voices are often overlooked in AI tool design and deployment. Children and young people approved of AI if paired with human oversight and reliable performance. Children and young people are key stakeholders in developing and deploying AI tools in paediatrics.
Affiliation(s)
- Lauren Lee
  - Young Persons Advisory Group (YPAG), Great Ormond Street Hospital for Children, London, WC1H 3JH, UK
- Helena Martin
  - Guy's and St Thomas' NHS Foundation Trust, London, UK
- Kate Thomas
  - Royal Hospital for Children & Young People, Edinburgh, Scotland, UK
- Emily Ashworth
  - St George's Hospital, Blackshaw Road, Tooting, London, UK
- Emma Allan
  - Department of Clinical Radiology, Great Ormond Street Hospital for Children, London, WC1H 3JH, UK
- Ka-Wai Yung
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences, Charles Bell House, 43-45 Foley Street, London, W1W 7TY, UK
- Cato Pauling
  - University College London, Gower Street, London, WC1E 6BT, UK
- Deirdre Leyden
  - Young Persons Advisory Group (YPAG), Great Ormond Street Hospital for Children, London, WC1H 3JH, UK
- Owen J Arthurs
  - Department of Clinical Radiology, Great Ormond Street Hospital for Children, London, WC1H 3JH, UK
  - UCL Great Ormond Street Institute of Child Health, Great Ormond Street Hospital for Children, London, WC1N 1EH, UK
  - NIHR Great Ormond Street Hospital Biomedical Research Centre, 30 Guilford Street, Bloomsbury, London, WC1N 1EH, UK
- Susan Cheng Shelmerdine
  - Department of Clinical Radiology, Great Ormond Street Hospital for Children, London, WC1H 3JH, UK
  - UCL Great Ormond Street Institute of Child Health, Great Ormond Street Hospital for Children, London, WC1N 1EH, UK
  - NIHR Great Ormond Street Hospital Biomedical Research Centre, 30 Guilford Street, Bloomsbury, London, WC1N 1EH, UK
3. Visram S, Rogers Y, Sebire NJ. Developing a conceptual framework for the early adoption of healthcare technologies in hospitals. Nat Med 2024; 30:1222-1224. PMID: 38459179; DOI: 10.1038/s41591-024-02860-8.
Affiliation(s)
- Sheena Visram
  - Data Research, Innovation and Virtual Environments (DRIVE), NIHR Great Ormond Street Hospital Biomedical Research Centre, London, UK
  - UCL Interaction Centre, Department of Computer Science, University College London, London, UK
- Yvonne Rogers
  - UCL Interaction Centre, Department of Computer Science, University College London, London, UK
- Neil J Sebire
  - Data Research, Innovation and Virtual Environments (DRIVE), NIHR Great Ormond Street Hospital Biomedical Research Centre, London, UK
4. Muralidharan V, Burgart A, Daneshjou R, Rose S. Recommendations for the use of pediatric data in artificial intelligence and machine learning: ACCEPT-AI. NPJ Digit Med 2023; 6:166. PMID: 37673925; PMCID: PMC10482936; DOI: 10.1038/s41746-023-00898-5.
Abstract
ACCEPT-AI is a framework of recommendations for the safe inclusion of pediatric data in artificial intelligence and machine learning (AI/ML) research. It is built on fundamental ethical principles of pediatric and AI research and incorporates age, consent, assent, communication, equity, protection of data, and technological considerations. ACCEPT-AI is designed to guide researchers, clinicians, regulators, and policymakers, and can be used as an independent tool or as an adjunct to existing AI/ML guidelines.
Affiliation(s)
- V Muralidharan
  - Department of Dermatology, Stanford University, Stanford, USA
- A Burgart
  - Department of Anesthesiology, Perioperative, and Pain Medicine, Stanford University, Stanford, USA
- R Daneshjou
  - Department of Dermatology, Stanford University, Stanford, USA
  - Department of Biomedical Data Science, Stanford University, Stanford, USA
- S Rose
  - Department of Health Policy, Stanford University, Stanford, USA
5. Thai K, Tsiandoulas KH, Stephenson EA, Menna-Dack D, Zlotnik Shaul R, Anderson JA, Shinewald AR, Ampofo A, McCradden MD. Perspectives of Youths on the Ethical Use of Artificial Intelligence in Health Care Research and Clinical Care. JAMA Netw Open 2023; 6:e2310659. PMID: 37126349; PMCID: PMC10152306; DOI: 10.1001/jamanetworkopen.2023.10659.
Abstract
Importance Understanding the views and values of patients is of substantial importance to developing the ethical parameters of artificial intelligence (AI) use in medicine. Thus far, there has been limited study of the views of children and youths. Their perspectives contribute meaningfully to the integration of AI in medicine. Objective To explore the moral attitudes and views of children and youths regarding research and clinical care involving health AI at the point of care. Design, Setting, and Participants This qualitative study recruited participants younger than 18 years during a 1-year period (October 2021 to March 2022) at a large urban pediatric hospital. A total of 44 individuals who were receiving or had previously received care at a hospital or rehabilitation clinic contacted the research team, but 15 were found to be ineligible. Of the 29 who consented to participate, 1 was lost to follow-up, resulting in 28 participants who completed the interview. Exposures Participants were interviewed using vignettes on 3 main themes: (1) health data research, (2) clinical AI trials, and (3) clinical use of AI. Main Outcomes and Measures Thematic description of values surrounding health data research, interventional AI research, and clinical use of AI. Results The 28 participants included 6 children (ages 10-12 years) and 22 youths (ages 13-17 years) (16 female, 10 male, and 3 trans/nonbinary/gender diverse). Mean (SD) age was 15 (2) years. Participants were highly engaged and quite knowledgeable about AI. They expressed a positive view of research intended to help others and had strong feelings about the uses of their health data for AI. Participants expressed appreciation for the vulnerability of potential participants in interventional AI trials and reinforced the importance of respect for their preferences regardless of their decisional capacity.
A strong theme for the prospective use of clinical AI was the desire to maintain bedside interaction between the patient and their physician. Conclusions and Relevance In this study, children and youths reported generally positive views of AI, expressing strong interest and advocacy for their involvement in AI research and inclusion of their voices for shared decision-making with AI in clinical care. These findings suggest the need for more engagement of children and youths in health care AI research and integration.
Affiliation(s)
- Kelly Thai
  - Department of Bioethics, The Hospital for Sick Children, Toronto, Ontario, Canada
  - Genetics & Genome Biology, Peter Gilgan Centre for Research & Learning, Toronto, Ontario, Canada
- Kate H Tsiandoulas
  - Department of Bioethics, The Hospital for Sick Children, Toronto, Ontario, Canada
- Elizabeth A Stephenson
  - Labatt Family Heart Centre, The Hospital for Sick Children, Toronto, Ontario, Canada
  - Department of Paediatrics, University of Toronto, Toronto, Ontario, Canada
- Dolly Menna-Dack
  - Holland Bloorview Kids Rehabilitation Hospital, Toronto, Ontario, Canada
- Randi Zlotnik Shaul
  - Department of Bioethics, The Hospital for Sick Children, Toronto, Ontario, Canada
  - Department of Paediatrics, University of Toronto, Toronto, Ontario, Canada
- James A Anderson
  - Department of Bioethics, The Hospital for Sick Children, Toronto, Ontario, Canada
  - Institute for Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
- Melissa D McCradden
  - Department of Bioethics, The Hospital for Sick Children, Toronto, Ontario, Canada
  - Genetics & Genome Biology, Peter Gilgan Centre for Research & Learning, Toronto, Ontario, Canada
  - Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
6. Morrow E, Zidaru T, Ross F, Mason C, Patel KD, Ream M, Stockley R. Artificial intelligence technologies and compassion in healthcare: A systematic scoping review. Front Psychol 2023; 13:971044. PMID: 36733854; PMCID: PMC9887144; DOI: 10.3389/fpsyg.2022.971044.
Abstract
Background Advances in artificial intelligence (AI) technologies, together with the availability of big data in society, create uncertainties about how these developments will affect healthcare systems worldwide. Compassion is essential for high-quality healthcare, and research shows how prosocial caring behaviors benefit human health and societies. However, the possible association between AI technologies and compassion is underconceptualized and underexplored. Objectives The aim of this scoping review is to provide comprehensive depth and a balanced perspective on the emerging topic of AI technologies and compassion, to inform future research and practice. The review questions were: How is compassion discussed in relation to AI technologies in healthcare? How are AI technologies being used to enhance compassion in healthcare? What are the gaps in current knowledge and unexplored potential? What are the key areas where AI technologies could support compassion in healthcare? Materials and methods A systematic scoping review following the five steps of the Joanna Briggs Institute methodology. Presentation of the scoping review conforms with PRISMA-ScR (Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews). Eligibility criteria were defined according to 3 concept constructs (AI technologies, compassion, healthcare) developed from the literature and informed by medical subject headings (MeSH) and key words for the electronic searches. Sources of evidence were the Web of Science and PubMed databases, for articles published in English, 2011-2022. Articles were screened by title/abstract using inclusion/exclusion criteria. Data extracted (author, date of publication, type of article, aim/context of healthcare, key relevant findings, country) were charted using data tables. Thematic analysis used an inductive-deductive approach to generate code categories from the review questions and the data.
A multidisciplinary team assessed themes for resonance and relevance to research and practice. Results Searches identified 3,124 articles. A total of 197 were included after screening. The number of articles has increased over 10 years (from n = 1 in 2011 to n = 47 in 2021, plus n = 35 from January to August 2022). Overarching themes related to the review questions were: (1) Developments and debates (7 themes): Concerns about AI ethics, healthcare jobs, and loss of empathy; Human-centered design of AI technologies for healthcare; Optimistic speculation that AI technologies will address care gaps; Interrogation of what it means to be human and to care; Recognition of future potential for patient monitoring, virtual proximity, and access to healthcare; Calls for curricula development and healthcare professional education; Implementation of AI applications to enhance health and wellbeing of the healthcare workforce. (2) How AI technologies enhance compassion (10 themes): Empathetic awareness; Empathetic response and relational behavior; Communication skills; Health coaching; Therapeutic interventions; Moral development learning; Clinical knowledge and clinical assessment; Healthcare quality assessment; Therapeutic bond and therapeutic alliance; Providing health information and advice. (3) Gaps in knowledge (4 themes): Educational effectiveness of AI-assisted learning; Patient diversity and AI technologies; Implementation of AI technologies in education and practice settings; Safety and clinical effectiveness of AI technologies. (4) Key areas for development (3 themes): Enriching education, learning and clinical practice; Extending healing spaces; Enhancing healing relationships. Conclusion There is an association between AI technologies and compassion in healthcare, and interest in this association has grown internationally over the last decade.
In a range of healthcare contexts, AI technologies are being used to enhance empathetic awareness; empathetic response and relational behavior; communication skills; health coaching; therapeutic interventions; moral development learning; clinical knowledge and clinical assessment; healthcare quality assessment; therapeutic bond and therapeutic alliance; and to provide health information and advice. The findings inform a reconceptualization of compassion as a human-AI system of intelligent caring comprising six elements: (1) Awareness of suffering (e.g., pain, distress, risk, disadvantage); (2) Understanding the suffering (significance, context, rights, responsibilities etc.); (3) Connecting with the suffering (e.g., verbal, physical, signs and symbols); (4) Making a judgment about the suffering (the need to act); (5) Responding with an intention to alleviate the suffering; (6) Attention to the effect and outcomes of the response. These elements can operate at an individual (human or machine) and collective systems level (healthcare organizations or systems) as a cyclical system to alleviate different types of suffering. New and novel approaches to human-AI intelligent caring could enrich education, learning, and clinical practice; extend healing spaces; and enhance healing relationships. Implications In a complex adaptive system such as healthcare, human-AI intelligent caring will need to be implemented, not as an ideology, but through strategic choices, incentives, regulation, professional education, and training, as well as through joined up thinking about human-AI intelligent caring. Research funders can encourage research and development into the topic of AI technologies and compassion as a system of human-AI intelligent caring. Educators, technologists, and health professionals can inform themselves about the system of human-AI intelligent caring.
Affiliation(s)
- Teodor Zidaru
  - Department of Anthropology, London School of Economics and Political Sciences, London, United Kingdom
- Fiona Ross
  - Faculty of Health, Science, Social Care and Education, Kingston University London, London, United Kingdom
- Cindy Mason
  - Artificial Intelligence Researcher (Independent), Palo Alto, CA, United States
- Melissa Ream
  - Kent Surrey Sussex Academic Health Science Network (AHSN) and the National AHSN Network Artificial Intelligence (AI) Initiative, Surrey, United Kingdom
- Rich Stockley
  - Head of Research and Engagement, Surrey Heartlands Health and Care Partnership, Surrey, United Kingdom
7. Malhotra A, Molloy EJ, Bearer CF, Mulkey SB. Emerging role of artificial intelligence, big data analysis and precision medicine in pediatrics. Pediatr Res 2023; 93:281-283. PMID: 36807652; DOI: 10.1038/s41390-022-02422-z.
Affiliation(s)
- Atul Malhotra
  - Department of Paediatrics, Monash University, Melbourne, VIC, Australia
  - Monash Newborn, Monash Children's Hospital, Melbourne, VIC, Australia
- Eleanor J Molloy
  - Paediatrics, Trinity College, Dublin, Ireland
  - Children's Hospital Ireland at Tallaght, Dublin, Ireland
  - Neonatology, Coombe Women's and Infants University Hospital, Dublin, Ireland
- Cynthia F Bearer
  - Department of Pediatrics, Rainbow Babies & Children's Hospital, UH CMC, Cleveland, OH, USA
- Sarah B Mulkey
  - Prenatal Pediatrics Institute, Children's National Hospital, Washington, DC, USA
  - Department of Neurology, The George Washington University School of Medicine and Health Sciences, Washington, DC, USA
  - Department of Pediatrics, The George Washington University School of Medicine and Health Sciences, Washington, DC, USA