1. Puchades R, Ramos-Ruperto L. Artificial intelligence in clinical practice: Quality and evidence. Rev Clin Esp 2024:S2254-8874(24)00142-5. [PMID: 39510442 DOI: 10.1016/j.rceng.2024.11.001]
Abstract
A revolution is taking place within the field of artificial intelligence (AI) with the emergence of generative AI. Although we are at an early phase at the clinical level, there is an exponential increase in the number of scientific articles that use AI (discriminative and generative) in their methodology. Given the current situation, we may be in an "AI bubble" stage, requiring filters and tools to evaluate its application based on the quality and evidence provided. To this end, initiatives have been developed to establish standards and guidelines for the use of discriminative AI (CONSORT-AI, STARD-AI and others) and, more recently, for generative AI (the CHART collaborative). As a new technology, AI requires scientific regulation to guarantee the efficacy and safety of its applications while maintaining the quality of care: evidence-based AI (IABE).
Affiliation(s)
- R Puchades
- Grupo de trabajo de Medicina Digital de la Sociedad Española de Medicina Interna (SEMI); Servicio de Medicina Interna, Hospital Universitario La Paz, Madrid, Spain.
- L Ramos-Ruperto
- Grupo de trabajo de Medicina Digital de la Sociedad Española de Medicina Interna (SEMI); Servicio de Medicina Interna, Hospital Universitario La Paz, Madrid, Spain
2. Yao MWM, Jenkins J, Nguyen ET, Swanson T, Menabrito M. Patient-Centric In Vitro Fertilization Prognostic Counseling Using Machine Learning for the Pragmatist. Semin Reprod Med 2024. [PMID: 39379046 DOI: 10.1055/s-0044-1791536]
Abstract
Although in vitro fertilization (IVF) has become an extremely effective treatment option for infertility, there is significant underutilization of IVF by patients who could benefit from such treatment. In order for patients to choose to consider IVF treatment when appropriate, it is critical for them to be provided with an accurate, understandable IVF prognosis. Machine learning (ML) can meet the challenge of personalized prognostication based on data available prior to treatment. The development, validation, and deployment of ML prognostic models and related patient counseling report delivery require specialized human and platform expertise. This review article takes a pragmatic approach to review relevant reports of IVF prognostic models and draws from extensive experience meeting patients' and providers' needs with the development of data and model pipelines to implement validated ML models at scale, at the point-of-care. Requirements of using ML-based IVF prognostics at point-of-care will be considered alongside clinical ML implementation factors critical for success. Finally, we discuss health, social, and economic objectives that may be achieved by leveraging combined human expertise and ML prognostics to expand fertility care access and advance health and social good.
3. Bozkurt S, Fereydooni S, Kar I, Diop Chalmers C, Leslie SL, Pathak R, Walling A, Lindvall C, Lorenz K, Quest T, Giannitrapani K, Kavalieratos D. Investigating Data Diversity and Model Robustness of AI Applications in Palliative Care and Hospice: Protocol for Scoping Review. JMIR Res Protoc 2024; 13:e56353. [PMID: 39378420 PMCID: PMC11496913 DOI: 10.2196/56353]
Abstract
BACKGROUND: Artificial intelligence (AI) has become a pivotal element in health care, leading to significant advancements across various medical domains, including palliative care and hospice services. These services focus on improving the quality of life for patients with life-limiting illnesses, and AI's ability to process complex datasets can enhance decision-making and personalize care in these sensitive settings. However, incorporating AI into palliative and hospice care requires careful examination to ensure it reflects the multifaceted nature of these settings.
OBJECTIVE: This scoping review aims to systematically map the landscape of AI in palliative care and hospice settings, focusing on data diversity and model robustness. The goal is to understand AI's role, its clinical integration, and the transparency of its development, ultimately providing a foundation for developing AI applications that adhere to established ethical guidelines and principles.
METHODS: Our scoping review involves six stages: (1) identifying the research question; (2) identifying relevant studies; (3) study selection; (4) charting the data; (5) collating, summarizing, and reporting the results; and (6) consulting with stakeholders. Searches were conducted across databases including MEDLINE (through PubMed), Embase.com, IEEE Xplore, ClinicalTrials.gov, and the Web of Science Core Collection, covering studies from the inception of each database up to November 1, 2023. We used a comprehensive set of search terms to capture relevant studies, and non-English records were excluded if their abstracts were not in English. Data extraction will follow a systematic approach, and stakeholder consultations will refine the findings.
RESULTS: The electronic database searches conducted in November 2023 resulted in 4614 studies. After removing duplicates, 330 studies were selected for full-text review to determine their eligibility based on predefined criteria. The extracted data will be organized into a table to aid in crafting a narrative summary. The review is expected to be completed by May 2025.
CONCLUSIONS: This scoping review will advance the understanding of AI in palliative care and hospice, focusing on data diversity and model robustness. It will identify gaps and guide future research, contributing to the development of ethically responsible and effective AI applications in these settings.
INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/56353.
Affiliation(s)
- Selen Bozkurt
- Department of Biomedical Informatics, Emory University, Atlanta, GA, United States
- Division of Palliative Medicine, Department of Family and Preventive Medicine, Emory University, Atlanta, GA, United States
- Center for Ethics, Emory University, Atlanta, GA, United States
- Irem Kar
- Department of Biostatistics, Ankara University, Ankara, Turkey
- Sharon L Leslie
- Woodruff Health Sciences Center Library, Emory University, Atlanta, GA, United States
- Ravi Pathak
- Division of Palliative Medicine, Department of Family and Preventive Medicine, Emory University, Atlanta, GA, United States
- Anne Walling
- Department of Medicine, Veterans Affairs Greater Los Angeles Health System, Los Angeles, CA, United States
- Department of Medicine, University of California, Los Angeles, CA, United States
- Charlotta Lindvall
- Harvard Medical School, Boston, MA, United States
- Department of Informatics and Analytics, Dana-Farber Cancer Institute, Boston, MA, United States
- Department of Medicine, Brigham and Women's Hospital, Boston, MA, United States
- Department of Psychosocial Oncology and Palliative Care, Dana-Farber Cancer Institute, Boston, MA, United States
- Karl Lorenz
- Primary Care and Population Health, School of Medicine, Stanford University, Stanford, CA, United States
- Center for Innovation to Implementation (Ci2i), Veterans Affairs Palo Alto Health Care System, Palo Alto, CA, United States
- Tammie Quest
- Division of Palliative Medicine, Department of Family and Preventive Medicine, Emory University, Atlanta, GA, United States
- Karleen Giannitrapani
- Primary Care and Population Health, School of Medicine, Stanford University, Stanford, CA, United States
- Center for Innovation to Implementation (Ci2i), Veterans Affairs Palo Alto Health Care System, Palo Alto, CA, United States
- Dio Kavalieratos
- Division of Palliative Medicine, Department of Family and Preventive Medicine, Emory University, Atlanta, GA, United States
4. Murali M, Wiles MD. Large language models and artificial intelligence: the coming storm for academia. Anaesthesia 2024. [PMID: 39316447 DOI: 10.1111/anae.16441]
Affiliation(s)
- Mayur Murali
- Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, Division of Anaesthetics, Pain Medicine and Intensive Care, London, UK
- Matthew D Wiles
- Department of Academic Anaesthesia, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- Centre for Applied Health and Social Care Research, Sheffield Hallam University, Sheffield, UK
5. Graham AD, Kothapalli T, Wang J, Ding J, Tse V, Asbell PA, Yu SX, Lin MC. A machine learning approach to predicting dry eye-related signs, symptoms and diagnoses from meibography images. Heliyon 2024; 10:e36021. [PMID: 39286076 PMCID: PMC11403426 DOI: 10.1016/j.heliyon.2024.e36021]
Abstract
Purpose: To use artificial intelligence to identify relationships between morphological characteristics of the Meibomian glands (MGs), subject factors, clinical outcomes, and subjective symptoms of dry eye.
Methods: A total of 562 infrared meibography images were collected from 363 subjects (170 contact lens wearers, 193 non-wearers). Subjects were 67.2% female and 54.8% Caucasian, and all were 18 years of age or older. A deep learning model was trained to take meibography as input, segment the individual MGs in the images, and learn their detailed morphological features. Morphological characteristics were then combined with clinical and symptom data in prediction models of MG function, tear film stability, ocular surface health, and subjective discomfort and dryness. The models were analyzed to identify the most heavily weighted features used by the algorithm for predictions.
Results: MG morphological characteristics were heavily weighted predictors of eyelid notching and vascularization, MG expressate quality and quantity, tear film stability, corneal staining, and comfort and dryness ratings, with accuracies ranging from 65% to 99%. The number of visible MGs, along with other clinical parameters, predicted MG dysfunction, aqueous deficiency and blepharitis with accuracies ranging from 74% to 85%.
Conclusions: Machine learning-derived MG morphological characteristics were found to be important in predicting multiple signs, symptoms, and diagnoses related to MG dysfunction and dry eye. This deep learning method illustrates the rich clinical information that detailed morphological analysis of the MGs can provide, and shows promise in advancing our understanding of the role of MG morphology in ocular surface health.
Affiliation(s)
- Andrew D Graham
- Vision Science Group, University of California, Berkeley, United States
- Clinical Research Center, School of Optometry, University of California, Berkeley, United States
- Tejasvi Kothapalli
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, United States
- Clinical Research Center, School of Optometry, University of California, Berkeley, United States
- Jiayun Wang
- Vision Science Group, University of California, Berkeley, United States
- Clinical Research Center, School of Optometry, University of California, Berkeley, United States
- Jennifer Ding
- Clinical Research Center, School of Optometry, University of California, Berkeley, United States
- Vivien Tse
- Vision Science Group, University of California, Berkeley, United States
- Clinical Research Center, School of Optometry, University of California, Berkeley, United States
- Penny A Asbell
- Department of Bioengineering, University of Memphis, United States
- Stella X Yu
- International Computer Science Institute, Berkeley, United States
- Meng C Lin
- Vision Science Group, University of California, Berkeley, United States
- Clinical Research Center, School of Optometry, University of California, Berkeley, United States
6. Erskine J, Abrishami P, Bernhard JC, Charter R, Culbertson R, Hiatt JC, Igarashi A, Purcell Jackson G, Lien M, Maddern G, Soon Yau Ng J, Patel A, Rha KH, Sooriakumaran P, Tackett S, Turchetti G, Chalkidou A. An international consensus panel on the potential value of Digital Surgery. BMJ Open 2024; 14:e082875. [PMID: 39242163 PMCID: PMC11381694 DOI: 10.1136/bmjopen-2023-082875]
Abstract
OBJECTIVES: The use of digital technology in surgery is increasing rapidly, with a wide array of new applications from presurgical planning to postsurgical performance assessment. Understanding the clinical and economic value of these technologies is vital for making appropriate health policy and purchasing decisions. We explore the potential value of digital technologies in surgery and produce expert consensus on how to assess this value.
DESIGN: A modified Delphi and consensus conference approach was adopted. Delphi rounds were used to generate priority topics and consensus statements for discussion.
SETTING AND PARTICIPANTS: An international panel of 14 experts was assembled, representing relevant stakeholder groups: clinicians, health economists, health technology assessment experts, policy-makers and industry.
PRIMARY AND SECONDARY OUTCOME MEASURES: A scoping questionnaire was used to generate research questions to be answered. A second questionnaire was used to rate the importance of these research questions. A final questionnaire was used to generate statements for discussion during three consensus conferences. After discussion, the panel voted on their level of agreement from 1 to 9, where 1 = strongly disagree and 9 = strongly agree. Consensus was defined as a mean level of agreement of >7.
RESULTS: Four priority topics were identified: (1) how data are used in digital surgery, (2) the existing evidence base for digital surgical technologies, (3) how digital technologies may assist surgical training and education and (4) methods for the assessment of these technologies. Seven consensus statements were generated and refined, with the final level of consensus ranging from 7.1 to 8.6.
CONCLUSION: Potential benefits of digital technologies in surgery include reducing unwarranted variation in surgical practice, increasing access to surgery and reducing health inequalities. Assessments that consider the value of the entire surgical ecosystem holistically are critical, especially as many digital technologies are likely to interact simultaneously in the operating theatre.
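To make the consensus rule described above concrete, the following minimal sketch checks mean agreement on the 1-9 scale against the >7 threshold. The vote data are hypothetical, not taken from the study.

```python
from statistics import mean

# Hypothetical panel votes (1 = strongly disagree, 9 = strongly agree) per draft statement.
votes = {
    "Statement 1": [8, 9, 7, 8, 9, 8, 7, 9, 8, 8, 7, 9, 8, 8],
    "Statement 2": [6, 7, 5, 8, 7, 6, 7, 8, 6, 7, 7, 6, 8, 7],
}

CONSENSUS_THRESHOLD = 7  # consensus = mean level of agreement > 7

for statement, scores in votes.items():
    avg = mean(scores)
    status = "consensus" if avg > CONSENSUS_THRESHOLD else "no consensus"
    print(f"{statement}: mean agreement {avg:.1f} -> {status}")
```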
Affiliation(s)
- Jamie Erskine
- Market Access, Alira Health, Boston, Massachusetts, USA
- Payam Abrishami
- Erasmus School of Health Policy and Management, National Health Care Institute, Rotterdam, The Netherlands
- Richard Charter
- Health Technology Assessment International, Edmonton, Alberta, Canada
- CHLOE Healthcare Advisory Group, London, UK
- Richard Culbertson
- Louisiana State University Health Sciences Center, New Orleans, Louisiana, USA
- Jo Carol Hiatt
- Health Technology Assessment International, Edmonton, Alberta, Canada
- Gretchen Purcell Jackson
- Intuitive Surgical Inc, Sunnyvale, California, USA
- American Medical Informatics Association, Bethesda, Maryland, USA
- Matthew Lien
- Intuitive Surgical Inc, Sunnyvale, California, USA
- Guy Maddern
- Surgery, The Queen Elizabeth Hospital, University of Adelaide, Woodville, Adelaide, Australia
- Anita Patel
- Anita Patel Health Economics Consulting Ltd, London, UK
- Wolfson Institute of Population Health, Queen Mary University of London, London, UK
- Koon Ho Rha
- Yonsei University Medical Center, Seodaemun-gu, Seoul, Republic of Korea
- Giuseppe Turchetti
- Institute of Management, Scuola Superiore Sant'Anna, Pisa, Toscana, Italy
7. Rakers MM, van Buchem MM, Kucenko S, de Hond A, Kant I, van Smeden M, Moons KGM, Leeuwenberg AM, Chavannes N, Villalobos-Quesada M, van Os HJA. Availability of Evidence for Predictive Machine Learning Algorithms in Primary Care: A Systematic Review. JAMA Netw Open 2024; 7:e2432990. [PMID: 39264624 PMCID: PMC11393722 DOI: 10.1001/jamanetworkopen.2024.32990]
Abstract
Importance: The aging and multimorbid population and health personnel shortages pose a substantial burden on primary health care. While predictive machine learning (ML) algorithms have the potential to address these challenges, concerns include transparency and insufficient reporting of model validation and of the effectiveness of implementation in the clinical workflow.
Objectives: To systematically identify predictive ML algorithms implemented in primary care from peer-reviewed literature and US Food and Drug Administration (FDA) and Conformité Européenne (CE) registration databases, and to ascertain the public availability of evidence, including peer-reviewed literature, gray literature, and technical reports, across the artificial intelligence (AI) life cycle.
Evidence Review: PubMed, Embase, Web of Science, Cochrane Library, Emcare, Academic Search Premier, IEEE Xplore, ACM Digital Library, MathSciNet, AAAI.org (Association for the Advancement of Artificial Intelligence), arXiv, Epistemonikos, PsycINFO, and Google Scholar were searched for studies published between January 2000 and July 2023, with search terms related to AI, primary care, and implementation. The search extended to CE-marked or FDA-approved predictive ML algorithms obtained from relevant registration databases. Three reviewers gathered subsequent evidence involving strategies such as product searches, exploration of references, manufacturer website visits, and direct inquiries to authors and product owners. The extent to which the evidence for each predictive ML algorithm aligned with the Dutch AI predictive algorithm (AIPA) guideline requirements was assessed per AI life cycle phase, producing evidence availability scores.
Findings: The systematic search identified 43 predictive ML algorithms, of which 25 were commercially available and CE-marked or FDA-approved. The predictive ML algorithms spanned multiple clinical domains, but most (27 [63%]) focused on cardiovascular diseases and diabetes. Most (35 [81%]) were published within the past 5 years. The availability of evidence varied across the phases of the predictive ML algorithm life cycle, with evidence reported least for phase 1 (preparation) and phase 5 (impact assessment) (19% and 30%, respectively). Twelve (28%) predictive ML algorithms achieved approximately half of their maximum individual evidence availability score. Overall, predictive ML algorithms from peer-reviewed literature showed higher evidence availability than those from FDA-approved or CE-marked databases (45% vs 29%).
Conclusions and Relevance: The findings indicate an urgent need to improve the availability of evidence regarding the predictive ML algorithms' quality criteria. Adopting the Dutch AIPA guideline could facilitate transparent and consistent reporting of the quality criteria, which could foster trust among end users and facilitate large-scale implementation.
Affiliation(s)
- Margot M Rakers
- Department of Public Health and Primary Care, Leiden University Medical Centre, ZA Leiden, the Netherlands
- National eHealth Living Lab, Leiden University Medical Centre, ZA Leiden, the Netherlands
- Marieke M van Buchem
- Department of Information Technology and Digital Innovation, Leiden University Medical Center, ZA Leiden, the Netherlands
- Sergej Kucenko
- Hamburg University of Applied Sciences, Department of Health Sciences, Ulmenliet 20, Hamburg, Germany
- Anne de Hond
- Department of Digital Health, University Medical Center Utrecht, Utrecht University, Universiteitsweg 100, CG Utrecht, the Netherlands
- Ilse Kant
- Department of Digital Health, University Medical Center Utrecht, Utrecht University, Universiteitsweg 100, CG Utrecht, the Netherlands
- Maarten van Smeden
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Universiteitsweg 100, CG Utrecht, the Netherlands
- Karel G M Moons
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Universiteitsweg 100, CG Utrecht, the Netherlands
- Artuur M Leeuwenberg
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Universiteitsweg 100, CG Utrecht, the Netherlands
- Niels Chavannes
- Department of Public Health and Primary Care, Leiden University Medical Centre, ZA Leiden, the Netherlands
- National eHealth Living Lab, Leiden University Medical Centre, ZA Leiden, the Netherlands
- María Villalobos-Quesada
- Department of Public Health and Primary Care, Leiden University Medical Centre, ZA Leiden, the Netherlands
- National eHealth Living Lab, Leiden University Medical Centre, ZA Leiden, the Netherlands
- Hendrikus J A van Os
- Department of Public Health and Primary Care, Leiden University Medical Centre, ZA Leiden, the Netherlands
- National eHealth Living Lab, Leiden University Medical Centre, ZA Leiden, the Netherlands
8. Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, Pinto Dos Santos D, Tang A, Wald C, Slavotinek J. Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement From the ACR, CAR, ESR, RANZCR & RSNA. J Am Coll Radiol 2024; 21:1292-1310. [PMID: 38276923 DOI: 10.1016/j.jacr.2023.12.005]
Abstract
Artificial intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for their utility and to differentiate safe product offerings from potentially harmful or fundamentally unhelpful ones. This multi-society paper, presenting the views of radiology societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.
Affiliation(s)
- Bibb Allen
- Department of Radiology, Grandview Medical Center, Birmingham, Alabama; American College of Radiology Data Science Institute, Reston, Virginia
- Jaron Chong
- Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Elmar Kotter
- Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Nina Kottler
- Radiology Partners, El Segundo, California; Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, California
- John Mongan
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, California
- Lauren Oakden-Rayner
- Australian Institute for Machine Learning, University of Adelaide, Adelaide, Australia
- Daniel Pinto Dos Santos
- Department of Radiology, University Hospital of Cologne, Cologne, Germany; Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
- An Tang
- Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montréal, Québec, Canada
- Christoph Wald
- Department of Radiology, Lahey Hospital & Medical Center, Burlington, Massachusetts; Tufts University Medical School, Boston, Massachusetts; Commission on Informatics, and Member, Board of Chancellors, American College of Radiology, Virginia
- John Slavotinek
- South Australia Medical Imaging, Flinders Medical Centre Adelaide, Adelaide, Australia; College of Medicine and Public Health, Flinders University, Adelaide, Australia
9. Muralidharan V, Schamroth J, Youssef A, Celi LA, Daneshjou R. Applied artificial intelligence for global child health: Addressing biases and barriers. PLOS Digit Health 2024; 3:e0000583. [PMID: 39172772 PMCID: PMC11340888 DOI: 10.1371/journal.pdig.0000583]
Abstract
Given the potential benefits of artificial intelligence and machine learning (AI/ML) within healthcare, it is critical to consider how these technologies can be deployed in pediatric research and practice. Currently, healthcare AI/ML has not yet adapted to the specific technical considerations related to pediatric data nor adequately addressed the specific vulnerabilities of children and young people (CYP) in relation to AI. While the greatest burden of disease in CYP is firmly concentrated in lower- and middle-income countries (LMICs), existing applied pediatric AI/ML efforts are concentrated in a small number of high-income countries (HICs). In LMICs, use-cases remain primarily in the proof-of-concept stage. This narrative review identifies a number of intersecting challenges that pose barriers to effective AI/ML for CYP globally and explores the shifts needed to make progress across multiple domains. Child-specific technical considerations throughout the AI/ML lifecycle have been largely overlooked thus far, yet these can be critical to model effectiveness. Governance concerns are paramount, with suitable national and international frameworks and guidance required to enable the safe and responsible deployment of advanced technologies impacting the care of CYP and using their data. An ambitious vision for child health demands that the potential benefits of AI/ML are realized universally through greater international collaboration, capacity building, strong oversight, and ultimately diffusing the AI/ML locus of power to empower researchers and clinicians globally. To ensure that AI/ML systems do not exacerbate inequalities in pediatric care, teams researching and developing these technologies in LMICs must ensure that AI/ML research is inclusive of the needs and concerns of CYP and their caregivers. A broad, interdisciplinary, and human-centered approach to AI/ML is essential for developing tools for healthcare workers delivering care, such that the creation and deployment of ML is grounded in local systems, cultures, and clinical practice. Decisions to invest in developing and testing pediatric AI/ML in resource-constrained settings must always be part of a broader evaluation of the overall needs of a healthcare system, considering the critical building blocks underpinning effective, sustainable, and cost-efficient healthcare delivery for CYP.
Affiliation(s)
- Vijaytha Muralidharan
- Department of Dermatology, Stanford University, Stanford, California, United States of America
- Joel Schamroth
- Faculty of Population Health Sciences, University College London, London, United Kingdom
- Alaa Youssef
- Stanford Center for Artificial Intelligence in Medicine and Imaging, Department of Radiology, Stanford University, Stanford, California, United States of America
- Leo A. Celi
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, United States of America
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, United States of America
- Roxana Daneshjou
- Department of Dermatology, Stanford University, Stanford, California, United States of America
- Department of Biomedical Data Science, Stanford University, Stanford, California, United States of America
10. Hogg HDJ, Martindale APL, Liu X, Denniston AK. Clinical Evaluation of Artificial Intelligence-Enabled Interventions. Invest Ophthalmol Vis Sci 2024; 65:10. [PMID: 39106058 PMCID: PMC11309043 DOI: 10.1167/iovs.65.10.10]
Abstract
Artificial intelligence (AI) health technologies are increasingly available for use in real-world care. This emerging opportunity is accompanied by a need for decision makers and practitioners across healthcare systems to evaluate the safety and effectiveness of these interventions against the needs of their own setting. To meet this need, high-quality evidence regarding AI-enabled interventions must be made available, and decision makers in varying roles and settings must be empowered to evaluate that evidence within the context in which they work. This article summarizes good practices across four stages of evidence generation for AI health technologies: study design, study conduct, study reporting, and study appraisal.
Affiliation(s)
- H. D. Jeffry Hogg
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, United Kingdom
- Institute of Applied Health Research, College of Medical and Dental Sciences, University of Birmingham, Birmingham, United Kingdom
- NIHR-Supported Incubator in AI & Digital Healthcare, Birmingham, United Kingdom
- Xiaoxuan Liu
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, United Kingdom
- Institute of Applied Health Research, College of Medical and Dental Sciences, University of Birmingham, Birmingham, United Kingdom
- NIHR-Supported Incubator in AI & Digital Healthcare, Birmingham, United Kingdom
- National Institute for Health and Care Research (NIHR) Birmingham Biomedical Research Centre, United Kingdom
- Alastair K. Denniston
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, United Kingdom
- Institute of Applied Health Research, College of Medical and Dental Sciences, University of Birmingham, Birmingham, United Kingdom
- NIHR-Supported Incubator in AI & Digital Healthcare, Birmingham, United Kingdom
- National Institute for Health and Care Research (NIHR) Birmingham Biomedical Research Centre, United Kingdom
11. Murovec B, Deutsch L, Osredkar D, Stres B. MetaBakery: a Singularity implementation of bioBakery tools as a skeleton application for efficient HPC deconvolution of microbiome metagenomic sequencing data to machine learning ready information. Front Microbiol 2024; 15:1426465. [PMID: 39139377 PMCID: PMC11321593 DOI: 10.3389/fmicb.2024.1426465]
Abstract
In this study, we present MetaBakery (http://metabakery.fe.uni-lj.si), an integrated application designed as a framework for synergistically executing the bioBakery workflow and associated utilities. MetaBakery streamlines the processing of any number of paired or unpaired fastq files, or a mixture of both, with optional compression (gzip, zip, bzip2, xz, or mixed) within a single run. MetaBakery uses programs such as KneadData (https://github.com/bioBakery/kneaddata), MetaPhlAn, HUMAnN and StrainPhlAn, as well as integrated utilities, and extends the original functionality of bioBakery. In particular, it includes MelonnPan for the prediction of metabolites and Mothur for calculation of microbial alpha diversity. Written in Python 3 and C++, the whole pipeline is encapsulated as a Singularity container for efficient execution on various computing infrastructures, including large High-Performance Computing clusters. MetaBakery facilitates crash recovery, efficient re-execution upon parameter changes, and processing of large datasets through subset handling. It is offered in three editions, with bioBakery ingredient versions 4, 3 and 2, and is versatile, transparent, and well documented in the MetaBakery Users' Manual (http://metabakery.fe.uni-lj.si/metabakery_manual.pdf). It provides automatic handling of command line parameters and file formats, and comprehensive hierarchical storage of output to simplify navigation and debugging. MetaBakery filters out potential human contamination and excludes samples with low read counts. It calculates estimates of alpha diversity and represents a comprehensive and augmented re-implementation of the bioBakery workflow. The robustness and flexibility of the system enable efficient exploration of changing parameters and input datasets, increasing its utility for microbiome analysis. Furthermore, we have shown that the MetaBakery tool can be used in modern biostatistical and machine learning approaches, including large-scale microbiome studies.
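For orientation only, the sketch below shows how a Singularity-packaged pipeline of this kind might be driven from Python on an HPC node. The image name, bind paths, and internal entry point are assumptions for illustration, not the actual MetaBakery interface; consult the MetaBakery Users' Manual for the real invocation.

```python
import subprocess
from pathlib import Path

# Hypothetical example of running a Singularity-packaged metagenomics pipeline.
# The image file, bind mounts, and "run_pipeline" entry point are assumed, not documented MetaBakery usage.
image = Path("metabakery_v4.sif")       # hypothetical container image
input_dir = Path("fastq_inputs")        # directory of paired/unpaired fastq(.gz) files
output_dir = Path("metabakery_output")
output_dir.mkdir(exist_ok=True)

cmd = [
    "singularity", "exec",
    "--bind", f"{input_dir.resolve()}:/data/in,{output_dir.resolve()}:/data/out",
    str(image),
    "run_pipeline", "--input", "/data/in", "--output", "/data/out",  # hypothetical entry point
]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    print(result.stderr)
```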
Affiliation(s)
- Boštjan Murovec
- University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia
- Leon Deutsch
- University of Ljubljana, Department of Animal Science, Biotechnical Faculty, Ljubljana, Slovenia
- The NU, The Nu B.V., Leiden, Netherlands
- Damjan Osredkar
- Department of Pediatric Neurology, University Children's Hospital, University Medical Centre Ljubljana, Ljubljana, Slovenia
- University of Ljubljana, Medical Faculty, Ljubljana, Slovenia
- Blaž Stres
- University of Ljubljana, Department of Animal Science, Biotechnical Faculty, Ljubljana, Slovenia
- D13 Department of Catalysis and Chemical Reaction Engineering, National Institute of Chemistry, Ljubljana, Slovenia
- University of Ljubljana, Faculty of Civil and Geodetic Engineering, Ljubljana, Slovenia
- Department of Automation, Biocybernetics and Robotics, Jožef Stefan Institute, Ljubljana, Slovenia
12. Cai YQ, Gong DX, Tang LY, Cai Y, Li HJ, Jing TC, Gong M, Hu W, Zhang ZW, Zhang X, Zhang GW. Pitfalls in Developing Machine Learning Models for Predicting Cardiovascular Diseases: Challenge and Solutions. J Med Internet Res 2024; 26:e47645. [PMID: 38869157 PMCID: PMC11316160 DOI: 10.2196/47645]
Abstract
In recent years, there has been explosive development in artificial intelligence (AI), which has been widely applied in the health care field. As a typical AI technology, machine learning models have emerged with great potential for predicting cardiovascular diseases by leveraging large amounts of medical data for training and optimization, and they are expected to play a crucial role in reducing the incidence and mortality rates of cardiovascular diseases. Although the field has become a research hot spot, there are still many pitfalls that researchers need to pay close attention to. These pitfalls may affect the predictive performance, credibility, reliability, and reproducibility of the studied models, ultimately reducing the value of the research and limiting the prospects for clinical application. Therefore, identifying and avoiding these pitfalls is a crucial task before implementing the research, yet there is currently a lack of a comprehensive summary on this topic. This viewpoint aims to analyze the existing problems in terms of data quality, data set characteristics, model design, and statistical methods, as well as clinical implications, and to provide possible solutions to these problems, such as gathering objective data, improving training, repeating measurements, increasing sample size, preventing overfitting with statistical methods, using specific AI algorithms to address targeted issues, standardizing outcomes and evaluation criteria, and enhancing fairness and replicability. The goal is to offer reference and assistance to researchers, algorithm developers, policy makers, and clinical practitioners.
Affiliation(s)
- Yu-Qing Cai
- The First Hospital of China Medical University, Shenyang, China
- Da-Xin Gong
- Smart Hospital Management Department, The First Hospital of China Medical University, Shenyang, China
- Li-Ying Tang
- The First Hospital of China Medical University, Shenyang, China
- Yue Cai
- The First Hospital of China Medical University, Shenyang, China
- Hui-Jun Li
- Shenyang Medical & Film Science and Technology Co, Ltd, Shenyang, China
- Tian-Ci Jing
- Smart Hospital Management Department, The First Hospital of China Medical University, Shenyang, China
- Wei Hu
- Bayi Orthopedic Hospital, Chengdu, China
- Zhen-Wei Zhang
- China Rongtong Medical & Healthcare Co, Ltd, Chengdu, China
- Xingang Zhang
- Department of Cardiology, The First Hospital of China Medical University, Shenyang, China
- Guang-Wei Zhang
- Smart Hospital Management Department, The First Hospital of China Medical University, Shenyang, China
13. Kale AU, Hogg HDJ, Pearson R, Glocker B, Golder S, Coombe A, Waring J, Liu X, Moore DJ, Denniston AK. Detecting Algorithmic Errors and Patient Harms for AI-Enabled Medical Devices in Randomized Controlled Trials: Protocol for a Systematic Review. JMIR Res Protoc 2024; 13:e51614. [PMID: 38941147 PMCID: PMC11245650 DOI: 10.2196/51614]
Abstract
BACKGROUND: Artificial intelligence (AI) medical devices have the potential to transform existing clinical workflows and ultimately improve patient outcomes. AI medical devices have shown potential for a range of clinical tasks such as diagnostics, prognostics, and therapeutic decision-making such as drug dosing. There is, however, an urgent need to ensure that these technologies remain safe for all populations. Recent literature demonstrates the need for rigorous performance error analysis to identify issues such as algorithmic encoding of spurious correlations (eg, protected characteristics) or specific failure modes that may lead to patient harm. Guidelines for reporting on studies that evaluate AI medical devices require the mention of performance error analysis; however, there is still a lack of understanding around how performance errors should be analyzed in clinical studies, and what harms authors should aim to detect and report.
OBJECTIVE: This systematic review will assess the frequency and severity of AI errors and adverse events (AEs) in randomized controlled trials (RCTs) investigating AI medical devices as interventions in clinical settings. The review will also explore how performance errors are analyzed, including whether the analysis includes the investigation of subgroup-level outcomes.
METHODS: This systematic review will identify and select RCTs assessing AI medical devices. Search strategies will be deployed in MEDLINE (Ovid), Embase (Ovid), Cochrane CENTRAL, and clinical trial registries to identify relevant papers. RCTs identified in bibliographic databases will be cross-referenced with clinical trial registries. The primary outcomes of interest are the frequency and severity of AI errors, patient harms, and reported AEs. Quality assessment of RCTs will be based on version 2 of the Cochrane risk-of-bias tool (RoB 2). Data analysis will include a comparison of error rates and patient harms between study arms, and a meta-analysis of the rates of patient harm in control versus intervention arms will be conducted if appropriate.
RESULTS: The project was registered on PROSPERO in February 2023. Preliminary searches have been completed and the search strategy has been designed in consultation with an information specialist and methodologist. Title and abstract screening started in September 2023. Full-text screening is ongoing, and data collection and analysis began in April 2024.
CONCLUSIONS: Evaluations of AI medical devices have shown promising results; however, reporting of studies has been variable. Detection, analysis, and reporting of performance errors and patient harms are vital to robustly assess the safety of AI medical devices in RCTs. Scoping searches have illustrated that the reporting of harms is variable, often with no mention of AEs. The findings of this systematic review will identify the frequency and severity of AI performance errors and patient harms and generate insights into how errors should be analyzed to account for both overall and subgroup performance.
TRIAL REGISTRATION: PROSPERO CRD42023387747; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=387747.
INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): PRR1-10.2196/51614.
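As an illustration of the kind of between-arm comparison of harm rates described in the methods above, this minimal sketch computes a risk ratio with a Wald-type 95% CI from a single hypothetical trial; the counts are illustrative only and not data from any included study.

```python
import math

# Hypothetical counts of patients experiencing harm in each arm of one trial (illustrative only).
harms_intervention, n_intervention = 4, 200
harms_control, n_control = 9, 200

risk_int = harms_intervention / n_intervention
risk_ctrl = harms_control / n_control
risk_ratio = risk_int / risk_ctrl

# Wald-type 95% CI on the log risk ratio
se_log_rr = math.sqrt(1/harms_intervention - 1/n_intervention + 1/harms_control - 1/n_control)
ci_low = math.exp(math.log(risk_ratio) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(risk_ratio) + 1.96 * se_log_rr)
print(f"Risk ratio {risk_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```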
Affiliation(s)
- Aditya U Kale
- Institute of Inflammation and Ageing, University of Birmingham, Birmingham, United Kingdom
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, United Kingdom
- NIHR Birmingham Biomedical Research Centre, Birmingham, United Kingdom
- NIHR Incubator for AI and Digital Health Research, Birmingham, United Kingdom
- Henry David Jeffry Hogg
- Population Health Science Institute, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne, United Kingdom
- Russell Pearson
- Medicines and Healthcare Products Regulatory Agency, London, United Kingdom
- Ben Glocker
- Kheiron Medical Technologies, London, United Kingdom
- Department of Computing, Imperial College London, London, United Kingdom
- Su Golder
- Department of Health Sciences, University of York, York, United Kingdom
- April Coombe
- Institute of Applied Health Research, University of Birmingham, Birmingham, United Kingdom
- Justin Waring
- Health Services Management Centre, University of Birmingham, Birmingham, United Kingdom
- Xiaoxuan Liu
- Institute of Inflammation and Ageing, University of Birmingham, Birmingham, United Kingdom
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, United Kingdom
- NIHR Birmingham Biomedical Research Centre, Birmingham, United Kingdom
- NIHR Incubator for AI and Digital Health Research, Birmingham, United Kingdom
- David J Moore
- Institute of Applied Health Research, University of Birmingham, Birmingham, United Kingdom
- Alastair K Denniston
- Institute of Inflammation and Ageing, University of Birmingham, Birmingham, United Kingdom
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, United Kingdom
- NIHR Birmingham Biomedical Research Centre, Birmingham, United Kingdom
- NIHR Incubator for AI and Digital Health Research, Birmingham, United Kingdom
14. Yao MWM, Nguyen ET, Retzloff MG, Gago LA, Copland S, Nichols JE, Payne JF, Opsahl M, Cadesky K, Meriano J, Donesky BW, Bird J, Peavey M, Beesley R, Neal G, Bird JS, Swanson T, Chen X, Walmer DK. Improving IVF Utilization with Patient-Centric Artificial Intelligence-Machine Learning (AI/ML): A Retrospective Multicenter Experience. J Clin Med 2024; 13:3560. [PMID: 38930089 PMCID: PMC11204457 DOI: 10.3390/jcm13123560]
Abstract
Objectives: In vitro fertilization (IVF) has the potential to give babies to millions more people globally, yet it continues to be underutilized. We established a globally applicable and locally adaptable IVF prognostics report and framework to support patient-provider counseling and enable validated, data-driven treatment decisions. This study investigates the IVF utilization rates associated with the use of machine learning, center-specific (MLCS) prognostic reports (the Univfy® report) in provider-patient pre-treatment and IVF counseling.
Methods: We used a retrospective cohort comprising 24,238 patients with new patient visits (NPV) from 2016 to 2022 across seven fertility centers in 17 locations in seven US states and Ontario, Canada. We tested the association between Univfy report usage and first intra-uterine insemination (IUI) and/or first IVF usage (a.k.a. conversion) within 180 days, 360 days, and "Ever" of NPV as primary outcomes.
Results: Univfy report usage was associated with higher direct IVF conversion (without prior IUI), with odds ratios (OR) of 3.13 (95% CI 2.83, 3.46), 2.89 (95% CI 2.63, 3.17), and 2.04 (95% CI 1.90, 2.20), and with higher total IVF conversion (with or without prior IUI), with ORs of 3.41 (95% CI 3.09, 3.75), 3.81 (95% CI 3.49, 4.16), and 2.78 (95% CI 2.59, 2.98) in the 180-day, 360-day, and Ever analyses, respectively; p < 0.05. Among patients with Univfy report usage, after accounting for center as a factor, older age was a small yet independent predictor of IVF conversion.
Conclusions: Use of a patient-centric, MLCS-based prognostics report was associated with increased IVF conversion among new fertility patients. Further research to study factors influencing treatment decision making and real-world optimization of patient-centric workflows utilizing the MLCS reports is warranted.
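For readers less familiar with the odds ratios quoted above, the following minimal sketch shows how an OR and its Wald 95% confidence interval are computed from a 2x2 table; the counts are hypothetical and not taken from the study.

```python
import math

# Hypothetical 2x2 table: report usage vs. IVF conversion within 180 days (illustrative only).
converted_with_report, not_converted_with_report = 300, 700
converted_without_report, not_converted_without_report = 120, 880

odds_ratio = (converted_with_report * not_converted_without_report) / (
    not_converted_with_report * converted_without_report
)

# Wald 95% confidence interval on the log odds ratio
se_log_or = math.sqrt(
    1 / converted_with_report
    + 1 / not_converted_with_report
    + 1 / converted_without_report
    + 1 / not_converted_without_report
)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}, {ci_high:.2f})")
```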
Affiliation(s)
- Mylene W. M. Yao
- Department of R&D, Univfy Inc., 117 Main Street, #139, Los Altos, CA 94022, USA
- Elizabeth T. Nguyen
- Department of R&D, Univfy Inc., 117 Main Street, #139, Los Altos, CA 94022, USA
- John E. Nichols
- Piedmont Reproductive Endocrinology Group, Greenville, SC 29615, USA (J.F.P.)
- John F. Payne
- Piedmont Reproductive Endocrinology Group, Greenville, SC 29615, USA (J.F.P.)
- Ken Cadesky
- TRIO Fertility Partners, Toronto, ON M5G 2K4, Canada
- Jim Meriano
- TRIO Fertility Partners, Toronto, ON M5G 2K4, Canada
- Joseph Bird
- My Fertility Center, Chattanooga, TN 37421, USA
- Mary Peavey
- Atlantic Reproductive Medicine, Raleigh, NC 27617, USA
- Gregory Neal
- Fertility Center of San Antonio, San Antonio, TX 78229, USA
- Trevor Swanson
- Department of R&D, Univfy Inc., 117 Main Street, #139, Los Altos, CA 94022, USA
- Xiaocong Chen
- Department of R&D, Univfy Inc., 117 Main Street, #139, Los Altos, CA 94022, USA
15. Wang J, Yu Y, Tan Y, Wan H, Zheng N, He Z, Mao L, Ren W, Chen K, Lin Z, He G, Chen Y, Chen R, Xu H, Liu K, Yao Q, Fu S, Song Y, Chen Q, Zuo L, Wei L, Wang J, Ouyang N, Yao H. Artificial intelligence enables precision diagnosis of cervical cytology grades and cervical cancer. Nat Commun 2024; 15:4369. [PMID: 38778014 PMCID: PMC11111770 DOI: 10.1038/s41467-024-48705-3]
Abstract
Cervical cancer is a significant global health issue, with its prevalence and prognosis highlighting the importance of early screening for effective prevention. This research aimed to create and validate an artificial intelligence cervical cancer screening (AICCS) system for grading cervical cytology. The AICCS system was trained and validated using various datasets, including retrospective, prospective, and randomized observational trial data, involving a total of 16,056 participants. It utilized two artificial intelligence (AI) models: one for detecting cells at the patch level and another for classifying whole-slide images (WSIs). The AICCS consistently showed high accuracy in predicting cytology grades across different datasets. In the prospective assessment, it achieved an area under the curve (AUC) of 0.947, a sensitivity of 0.946, a specificity of 0.890, and an accuracy of 0.892. Remarkably, the randomized observational trial revealed that AICCS-assisted cytopathologists had a significantly higher AUC, specificity, and accuracy than cytopathologists alone, with a notable 13.3% enhancement in sensitivity. Thus, AICCS holds promise as an additional tool for accurate and efficient cervical cancer screening.
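As a reference for the performance metrics quoted above (AUC, sensitivity, specificity, accuracy), the sketch below shows how they can be computed from binary labels and model scores with scikit-learn; the toy arrays are illustrative and not data from the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Hypothetical slide-level labels (1 = abnormal cytology) and model scores (illustrative only).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_score = np.array([0.92, 0.10, 0.85, 0.40, 0.30, 0.55, 0.60, 0.45, 0.88, 0.20])
y_pred = (y_score >= 0.5).astype(int)  # binarize at an assumed 0.5 operating threshold

auc = roc_auc_score(y_true, y_score)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"AUC={auc:.3f} sensitivity={sensitivity:.3f} specificity={specificity:.3f} accuracy={accuracy:.3f}")
```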
Affiliation(s)
- Jue Wang
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Department of Cellular and Molecular Diagnostics Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Yunfang Yu
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Department of Medical Oncology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Phase I Clinical Trial Centre, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Faculty of Medicine, Macau University of Science and Technology, Taipa, Macao, China
- Yujie Tan
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Department of Medical Oncology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Phase I Clinical Trial Centre, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Huan Wan
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Department of Cellular and Molecular Diagnostics Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Nafen Zheng
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Department of Cellular and Molecular Diagnostics Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Zifan He
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Department of Medical Oncology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Phase I Clinical Trial Centre, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Luhui Mao
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Department of Medical Oncology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Phase I Clinical Trial Centre, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Wei Ren
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Department of Medical Oncology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Phase I Clinical Trial Centre, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Kai Chen
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Department of Medical Oncology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Phase I Clinical Trial Centre, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Zhen Lin
- Cells Vision (Guangzhou) Medical Technology Inc., Guangzhou, China
- Gui He
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Department of Cellular and Molecular Diagnostics Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Yongjian Chen
- Dermatology and Venereology Division, Department of Medicine Solna, Center for Molecular Medicine, Karolinska Institutet, Stockholm, Sweden
- Ruichao Chen
- Department of Pathology, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Hui Xu
- Department of Pathology, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
- Kai Liu
- Cells Vision (Guangzhou) Medical Technology Inc., Guangzhou, China
- Qinyue Yao
- Cells Vision (Guangzhou) Medical Technology Inc., Guangzhou, China
- Sha Fu
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Department of Cellular and Molecular Diagnostics Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Yang Song
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Department of Cellular and Molecular Diagnostics Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Qingyu Chen
- Department of Health Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Lina Zuo
- Department of Health Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Liya Wei
- Department of Health Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Jin Wang
- Cells Vision (Guangzhou) Medical Technology Inc., Guangzhou, China
- Nengtai Ouyang
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Department of Cellular and Molecular Diagnostics Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Herui Yao
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Department of Medical Oncology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Phase I Clinical Trial Centre, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Breast Tumor Centre, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
16
|
Kuziemsky CE, Chrimes D, Minshall S, Mannerow M, Lau F. AI Quality Standards in Health Care: Rapid Umbrella Review. J Med Internet Res 2024; 26:e54705. [PMID: 38776538 PMCID: PMC11153979 DOI: 10.2196/54705] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2023] [Revised: 04/03/2024] [Accepted: 04/04/2024] [Indexed: 05/25/2024] Open
Abstract
BACKGROUND In recent years, there has been an upwelling of artificial intelligence (AI) studies in the health care literature. During this period, there has been an increasing number of proposed standards to evaluate the quality of health care AI studies. OBJECTIVE This rapid umbrella review examines the use of AI quality standards in a sample of health care AI systematic review articles published over a 36-month period. METHODS We used a modified version of the Joanna Briggs Institute umbrella review method. Our rapid approach was informed by the practical guide by Tricco and colleagues for conducting rapid reviews. Our search was focused on the MEDLINE database supplemented with Google Scholar. The inclusion criteria were English-language systematic reviews regardless of review type, with mention of AI and health in the abstract, published during a 36-month period. For the synthesis, we summarized the AI quality standards used and issues noted in these reviews drawing on a set of published health care AI standards, harmonized the terms used, and offered guidance to improve the quality of future health care AI studies. RESULTS We selected 33 review articles published between 2020 and 2022 in our synthesis. The reviews covered a wide range of objectives, topics, settings, designs, and results. Over 60 AI approaches across different domains were identified with varying levels of detail spanning different AI life cycle stages, making comparisons difficult. Health care AI quality standards were applied in only 39% (13/33) of the reviews and in 14% (25/178) of the original studies from the reviews examined, mostly to appraise their methodological or reporting quality. Only a handful mentioned the transparency, explainability, trustworthiness, ethics, and privacy aspects. A total of 23 AI quality standard-related issues were identified in the reviews. There was a recognized need to standardize the planning, conduct, and reporting of health care AI studies and address their broader societal, ethical, and regulatory implications. CONCLUSIONS Despite the growing number of AI standards to assess the quality of health care AI studies, they are seldom applied in practice. With increasing desire to adopt AI in different health topics, domains, and settings, practitioners and researchers must stay abreast of and adapt to the evolving landscape of health care AI quality standards and apply these standards to improve the quality of their AI studies.
Collapse
Affiliation(s)
| | - Dillon Chrimes
- School of Health Information Science, University of Victoria, Victoria, BC, Canada
| | - Simon Minshall
- School of Health Information Science, University of Victoria, Victoria, BC, Canada
| | | | - Francis Lau
- School of Health Information Science, University of Victoria, Victoria, BC, Canada
| |
Collapse
|
17
|
Protocol for the development of the Chatbot Assessment Reporting Tool (CHART) for clinical advice. BMJ Open 2024; 14:e081155. [PMID: 38772889 PMCID: PMC11110548 DOI: 10.1136/bmjopen-2023-081155] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/24/2023] [Accepted: 03/26/2024] [Indexed: 05/23/2024] Open
Abstract
INTRODUCTION Large language model (LLM)-linked chatbots are being increasingly applied in healthcare due to their impressive functionality and public availability. Studies have assessed the ability of LLM-linked chatbots to provide accurate clinical advice. However, the methods applied in these Chatbot Assessment Studies are inconsistent due to the lack of reporting standards available, which obscures the interpretation of their study findings. This protocol outlines the development of the Chatbot Assessment Reporting Tool (CHART) reporting guideline. METHODS AND ANALYSIS The development of the CHART reporting guideline will consist of three phases, led by the Steering Committee. During phase one, the team will identify relevant reporting guidelines with artificial intelligence extensions that are published or in development by searching preprint servers, protocol databases, and the Enhancing the Quality and Transparency of health research Network. During phase two, we will conduct a scoping review to identify studies that have addressed the performance of LLM-linked chatbots in summarising evidence and providing clinical advice. The Steering Committee will identify methodology used in previous Chatbot Assessment Studies. Finally, the study team will use checklist items from prior reporting guidelines and findings from the scoping review to develop a draft reporting checklist. We will then perform a Delphi consensus and host two synchronous consensus meetings with an international, multidisciplinary group of stakeholders to refine reporting checklist items and develop a flow diagram. ETHICS AND DISSEMINATION We will publish the final CHART reporting guideline in peer-reviewed journals and will present findings at peer-reviewed meetings. Ethical approval was submitted to the Hamilton Integrated Research Ethics Board and deemed "not required" in accordance with the Tri-Council Policy Statement (TCPS2) for the development of the CHART reporting guideline (#17025). REGISTRATION This study protocol is preregistered with Open Science Framework: https://doi.org/10.17605/OSF.IO/59E2Q.
Collapse
|
18
|
Biccirè FG, Mannhart D, Kakizaki R, Windecker S, Räber L, Siontis GCM. Automatic assessment of atherosclerotic plaque features by intracoronary imaging: a scoping review. Front Cardiovasc Med 2024; 11:1332925. [PMID: 38742173 PMCID: PMC11090039 DOI: 10.3389/fcvm.2024.1332925] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2023] [Accepted: 04/01/2024] [Indexed: 05/16/2024] Open
Abstract
Background The diagnostic performance and clinical validity of automatic intracoronary imaging (ICI) tools for atherosclerotic plaque assessment have not yet been systematically investigated. Methods We performed a scoping review of studies on automatic tools for plaque component assessment by means of optical coherence tomography (OCT) or intravascular ultrasound (IVUS). We summarized study characteristics and reported the specifics and diagnostic performance of the developed tools. Results Overall, 42 OCT and 26 IVUS studies fulfilling the eligibility criteria were found, with the majority published in the last 5 years (86% of the OCT and 73% of the IVUS studies). A convolutional neural network deep-learning method was applied in 71% of OCT and 34% of IVUS studies. Calcium was the most frequent plaque feature analyzed (26/42 of OCT and 12/26 of IVUS studies), and both modalities showed high discriminatory performance in testing sets [range of area under the curve (AUC): 0.91-0.99 for OCT and 0.89-0.98 for IVUS]. Lipid component was investigated only in OCT studies (n = 26, AUC: 0.82-0.86). Fibrous cap thickness or thin-cap fibroatheroma were mainly investigated in OCT studies (n = 8, AUC: 0.82-0.94). Plaque burden was mainly assessed in IVUS studies (n = 15, testing set AUC reported in one study: 0.70). Conclusion A limited number of automatic machine learning-derived tools for ICI analysis are currently available. The majority have been developed for calcium detection on either OCT or IVUS images. The reporting of the development and validation process of automated intracoronary imaging analyses is heterogeneous and lacks critical information. Systematic Review Registration Open Science Framework (OSF), https://osf.io/nps2b/.
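The discriminatory performance figures above are areas under the receiver operating characteristic curve (AUC) for binary plaque-feature detection (for example, calcium present versus absent). As a minimal illustration of how such a figure is computed from model outputs, the hedged Python sketch below uses scikit-learn on hypothetical per-frame labels and probabilities; the variable names and data are placeholders and do not come from any of the reviewed tools.

```python
# Minimal sketch: AUC for a binary plaque-feature detector (hypothetical data).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical ground truth (1 = calcium present) and model probabilities per frame.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_score = np.array([0.92, 0.10, 0.85, 0.60, 0.30, 0.05, 0.77, 0.45, 0.88, 0.20])

auc = roc_auc_score(y_true, y_score)               # area under the ROC curve
fpr, tpr, thresholds = roc_curve(y_true, y_score)  # operating points, e.g. for plotting

print(f"AUC = {auc:.2f}")  # the reviewed OCT studies reported roughly 0.91-0.99 on test sets
```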
Collapse
Affiliation(s)
| | | | | | | | | | - George C. M. Siontis
- Department of Cardiology, Bern University Hospital, University of Bern, Bern, Switzerland
| |
Collapse
|
19
|
Cohen JF, Bossuyt PMM. TRIPOD+AI: an updated reporting guideline for clinical prediction models. BMJ 2024; 385:q824. [PMID: 38626949 DOI: 10.1136/bmj.q824] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 04/19/2024]
Affiliation(s)
- Jérémie F Cohen
- Centre of Research in Epidemiology and Statistics (CRESS), INSERM, EPOPé Research Team, Université Paris Cité, 75014 Paris, France
- Department of General Pediatrics and Pediatric Infectious Diseases, Necker-Enfants Malades Hospital, Assistance Publique-Hôpitaux de Paris, Université Paris Cité, Paris, France
| | - Patrick M M Bossuyt
- Department of Epidemiology and Data Science, Amsterdam University Medical Centres, University of Amsterdam, Amsterdam, Netherlands
| |
Collapse
|
20
|
Collins GS, Moons KGM, Dhiman P, Riley RD, Beam AL, Van Calster B, Ghassemi M, Liu X, Reitsma JB, van Smeden M, Boulesteix AL, Camaradou JC, Celi LA, Denaxas S, Denniston AK, Glocker B, Golub RM, Harvey H, Heinze G, Hoffman MM, Kengne AP, Lam E, Lee N, Loder EW, Maier-Hein L, Mateen BA, McCradden MD, Oakden-Rayner L, Ordish J, Parnell R, Rose S, Singh K, Wynants L, Logullo P. TRIPOD+AI statement: updated guidance for reporting clinical prediction models that use regression or machine learning methods. BMJ 2024; 385:e078378. [PMID: 38626948 PMCID: PMC11019967 DOI: 10.1136/bmj-2023-078378] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 01/17/2024] [Indexed: 04/19/2024]
Affiliation(s)
- Gary S Collins
- Centre for Statistics in Medicine, UK EQUATOR Centre, Nuffield Department of Orthopaedics, Rheumatology, and Musculoskeletal Sciences, University of Oxford, Oxford OX3 7LD, UK
| | - Karel G M Moons
- Julius Centre for Health Sciences and Primary Care, University Medical Centre Utrecht, Utrecht University, Utrecht, Netherlands
| | - Paula Dhiman
- Centre for Statistics in Medicine, UK EQUATOR Centre, Nuffield Department of Orthopaedics, Rheumatology, and Musculoskeletal Sciences, University of Oxford, Oxford OX3 7LD, UK
| | - Richard D Riley
- Institute of Applied Health Research, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- National Institute for Health and Care Research (NIHR) Birmingham Biomedical Research Centre, Birmingham, UK
| | - Andrew L Beam
- Department of Epidemiology, Harvard T H Chan School of Public Health, Boston, MA, USA
| | - Ben Van Calster
- Department of Development and Regeneration, KU Leuven, Leuven, Belgium
- Department of Biomedical Data Science, Leiden University Medical Centre, Leiden, Netherlands
| | - Marzyeh Ghassemi
- Department of Electrical Engineering and Computer Science, Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Xiaoxuan Liu
- Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
| | - Johannes B Reitsma
- Julius Centre for Health Sciences and Primary Care, University Medical Centre Utrecht, Utrecht University, Utrecht, Netherlands
| | - Maarten van Smeden
- Julius Centre for Health Sciences and Primary Care, University Medical Centre Utrecht, Utrecht University, Utrecht, Netherlands
| | - Anne-Laure Boulesteix
- Institute for Medical Information Processing, Biometry and Epidemiology, Faculty of Medicine, Ludwig-Maximilians-University of Munich and Munich Centre of Machine Learning, Germany
| | - Jennifer Catherine Camaradou
- Patient representative, Health Data Research UK patient and public involvement and engagement group
- Patient representative, University of East Anglia, Faculty of Health Sciences, Norwich Research Park, Norwich, UK
| | - Leo Anthony Celi
- Beth Israel Deaconess Medical Center, Boston, MA, USA
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Biostatistics, Harvard T H Chan School of Public Health, Boston, MA, USA
| | - Spiros Denaxas
- Institute of Health Informatics, University College London, London, UK
- British Heart Foundation Data Science Centre, London, UK
| | - Alastair K Denniston
- National Institute for Health and Care Research (NIHR) Birmingham Biomedical Research Centre, Birmingham, UK
- Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
| | - Ben Glocker
- Department of Computing, Imperial College London, London, UK
| | - Robert M Golub
- Northwestern University Feinberg School of Medicine, Chicago, IL, USA
| | | | - Georg Heinze
- Section for Clinical Biometrics, Centre for Medical Data Science, Medical University of Vienna, Vienna, Austria
| | - Michael M Hoffman
- Princess Margaret Cancer Centre, University Health Network, Toronto, ON, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
- Department of Computer Science, University of Toronto, Toronto, ON, Canada
- Vector Institute for Artificial Intelligence, Toronto, ON, Canada
| | | | - Emily Lam
- Patient representative, Health Data Research UK patient and public involvement and engagement group
| | - Naomi Lee
- National Institute for Health and Care Excellence, London, UK
| | - Elizabeth W Loder
- The BMJ, London, UK
- Department of Neurology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | - Lena Maier-Hein
- Department of Intelligent Medical Systems, German Cancer Research Centre, Heidelberg, Germany
| | - Bilal A Mateen
- Institute of Health Informatics, University College London, London, UK
- Wellcome Trust, London, UK
- Alan Turing Institute, London, UK
| | - Melissa D McCradden
- Department of Bioethics, Hospital for Sick Children Toronto, ON, Canada
- Genetics and Genome Biology, SickKids Research Institute, Toronto, ON, Canada
| | - Lauren Oakden-Rayner
- Australian Institute for Machine Learning, University of Adelaide, Adelaide, SA, Australia
| | - Johan Ordish
- Medicines and Healthcare products Regulatory Agency, London, UK
| | - Richard Parnell
- Patient representative, Health Data Research UK patient and public involvement and engagement group
| | - Sherri Rose
- Department of Health Policy and Center for Health Policy, Stanford University, Stanford, CA, USA
| | - Karandeep Singh
- Department of Epidemiology, CAPHRI Care and Public Health Research Institute, Maastricht University, Maastricht, Netherlands
| | - Laure Wynants
- Department of Epidemiology, CAPHRI Care and Public Health Research Institute, Maastricht University, Maastricht, Netherlands
| | - Patricia Logullo
- Centre for Statistics in Medicine, UK EQUATOR Centre, Nuffield Department of Orthopaedics, Rheumatology, and Musculoskeletal Sciences, University of Oxford, Oxford OX3 7LD, UK
| |
Collapse
|
21
|
Kolbinger FR, Veldhuizen GP, Zhu J, Truhn D, Kather JN. Reporting guidelines in medical artificial intelligence: a systematic review and meta-analysis. COMMUNICATIONS MEDICINE 2024; 4:71. [PMID: 38605106 PMCID: PMC11009315 DOI: 10.1038/s43856-024-00492-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2023] [Accepted: 03/27/2024] [Indexed: 04/13/2024] Open
Abstract
BACKGROUND The field of Artificial Intelligence (AI) holds transformative potential in medicine. However, the lack of universal reporting guidelines poses challenges in ensuring the validity and reproducibility of published research studies in this field. METHODS Based on a systematic review of academic publications and reporting standards demanded by both international consortia and regulatory stakeholders as well as leading journals in the fields of medicine and medical informatics, 26 reporting guidelines published between 2009 and 2023 were included in this analysis. Guidelines were stratified by breadth (general or specific to medical fields), underlying consensus quality, and target research phase (preclinical, translational, clinical) and subsequently analyzed regarding the overlap and variations in guideline items. RESULTS AI reporting guidelines for medical research vary with respect to the quality of the underlying consensus process, breadth, and target research phase. Some guideline items such as reporting of study design and model performance recur across guidelines, whereas other items are specific to particular fields and research stages. CONCLUSIONS Our analysis highlights the importance of reporting guidelines in clinical AI research and underscores the need for common standards that address the identified variations and gaps in current guidelines. Overall, this comprehensive overview could help researchers and public stakeholders reinforce quality standards for increased reliability, reproducibility, clinical validity, and public trust in AI research in healthcare. This could facilitate the safe, effective, and ethical translation of AI methods into clinical applications that will ultimately improve patient outcomes.
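Where the analysis compares the "overlap and variations in guideline items", one simple way to quantify overlap between two reporting checklists is the Jaccard similarity of their item sets. The sketch below is illustrative only; the guideline names and item lists are invented placeholders, not the 26 guidelines analyzed in the paper.

```python
# Illustrative sketch: pairwise overlap of reporting-guideline items (placeholder data).
from itertools import combinations

guidelines = {
    "Guideline A": {"study design", "data sources", "model performance", "ethics"},
    "Guideline B": {"study design", "model performance", "code availability"},
    "Guideline C": {"study design", "data sources", "clinical workflow integration"},
}

def jaccard(a: set, b: set) -> float:
    """Share of items appearing in both checklists relative to their union."""
    return len(a & b) / len(a | b)

for (name1, items1), (name2, items2) in combinations(guidelines.items(), 2):
    print(f"{name1} vs {name2}: Jaccard overlap = {jaccard(items1, items2):.2f}")
```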
Collapse
Grants
- UM1 TR004402 NCATS NIH HHS
- JNK is supported by the German Federal Ministry of Health (DEEP LIVER, ZMVI1-2520DAT111), the Max-Eder-Programme of the German Cancer Aid (grant #70113864), the German Federal Ministry of Education and Research (PEARL, 01KD2104C; CAMINO, 01EO2101; SWAG, 01KD2215A; TRANSFORM LIVER, 031L0312A), the German Academic Exchange Service (SECAI, 57616814), the German Federal Joint Committee (Transplant.KI, 01VSF21048), the European Union (ODELIA, 101057091; GENIAL, 101096312), and the National Institute for Health and Care Research (NIHR, NIHR213331) Leeds Biomedical Research Centre.
Collapse
Affiliation(s)
- Fiona R Kolbinger
- Else Kroener Fresenius Center for Digital Health, TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Regenstrief Center for Healthcare Engineering, Purdue University, West Lafayette, IN, USA
- Department of Biostatistics and Health Data Science, Richard M. Fairbanks School of Public Health, Indiana University, Indianapolis, IN, USA
- Indiana University Simon Comprehensive Cancer Center, Indiana University School of Medicine, Indianapolis, IN, USA
| | - Gregory P Veldhuizen
- Else Kroener Fresenius Center for Digital Health, TUD Dresden University of Technology, Dresden, Germany
| | - Jiefu Zhu
- Else Kroener Fresenius Center for Digital Health, TUD Dresden University of Technology, Dresden, Germany
| | - Daniel Truhn
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
| | - Jakob Nikolas Kather
- Else Kroener Fresenius Center for Digital Health, TUD Dresden University of Technology, Dresden, Germany.
- Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany.
- Department of Medicine I, University Hospital Dresden, Dresden, Germany.
- Medical Oncology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany.
| |
Collapse
|
22
|
Flanagin A, Pirracchio R, Khera R, Berkwits M, Hswen Y, Bibbins-Domingo K. Reporting Use of AI in Research and Scholarly Publication-JAMA Network Guidance. JAMA 2024; 331:1096-1098. [PMID: 38451540 DOI: 10.1001/jama.2024.3471] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 03/08/2024]
Affiliation(s)
| | | | - Rohan Khera
- Associate Editor, JAMA
- Yale School of Medicine, New Haven, Connecticut
| | | | - Yulin Hswen
- Associate Editor, JAMA
- University of California, San Francisco
| | | |
Collapse
|
23
|
Macdonald T, Dinnes J, Maniatopoulos G, Taylor-Phillips S, Shinkins B, Hogg J, Dunbar JK, Solebo AL, Sutton H, Attwood J, Pogose M, Given-Wilson R, Greaves F, Macrae C, Pearson R, Bamford D, Tufail A, Liu X, Denniston AK. Target Product Profile for a Machine Learning-Automated Retinal Imaging Analysis Software for Use in English Diabetic Eye Screening: Protocol for a Mixed Methods Study. JMIR Res Protoc 2024; 13:e50568. [PMID: 38536234 PMCID: PMC11007610 DOI: 10.2196/50568] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2023] [Revised: 02/02/2024] [Accepted: 02/13/2024] [Indexed: 04/13/2024] Open
Abstract
BACKGROUND Diabetic eye screening (DES) represents a significant opportunity for the application of machine learning (ML) technologies, which may improve clinical and service outcomes. However, successful integration of ML into DES requires careful product development, evaluation, and implementation. Target product profiles (TPPs) summarize the requirements necessary for successful implementation so these can guide product development and evaluation. OBJECTIVE This study aims to produce a TPP for an ML-automated retinal imaging analysis software (ML-ARIAS) system for use in DES in England. METHODS This work will consist of 3 phases. Phase 1 will establish the characteristics to be addressed in the TPP. A list of candidate characteristics will be generated from the following sources: an overview of systematic reviews of diagnostic test TPPs; a systematic review of digital health TPPs; and the National Institute for Health and Care Excellence's Evidence Standards Framework for Digital Health Technologies. The list of characteristics will be refined and validated by a study advisory group (SAG) made up of representatives from key stakeholders in DES. This includes people with diabetes; health care professionals; health care managers and leaders; and regulators and policy makers. In phase 2, specifications for these characteristics will be drafted following a series of semistructured interviews with participants from these stakeholder groups. Data collected from these interviews will be analyzed using the shortlist of characteristics as a framework, after which specifications will be drafted to create a draft TPP. Following approval by the SAG, in phase 3, the draft will enter an internet-based Delphi consensus study with participants sought from the groups previously identified, as well as ML-ARIAS developers, to ensure feasibility. Participants will be invited to score characteristic and specification pairs on a scale from "definitely exclude" to "definitely include," and suggest edits. The document will be iterated between rounds based on participants' feedback. Feedback on the draft document will be sought from a group of ML-ARIAS developers before its final contents are agreed upon in an in-person consensus meeting. At this meeting, representatives from the stakeholder groups previously identified (minus ML-ARIAS developers, to avoid bias) will be presented with the Delphi results and feedback of the user group and asked to agree on the final contents by vote. RESULTS Phase 1 was completed in November 2023. Phase 2 is underway and expected to finish in March 2024. Phase 3 is expected to be complete in July 2024. CONCLUSIONS The multistakeholder development of a TPP for an ML-ARIAS for use in DES in England will help developers produce tools that serve the needs of patients, health care providers, and their staff. The TPP development process will also provide methods and a template to produce similar documents in other disease areas. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) DERR1-10.2196/50568.
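Phase 3 of the protocol has participants score each characteristic-and-specification pair on a scale from "definitely exclude" to "definitely include" over Delphi rounds. A common way to summarize such ratings is to compute, per item, the proportion of panellists giving "include" responses and to flag items reaching a pre-specified consensus threshold. The sketch below illustrates that bookkeeping with invented ratings and a hypothetical 70% threshold; neither the items nor the threshold comes from the protocol.

```python
# Hedged sketch: tallying Delphi ratings per TPP item (invented data, hypothetical 70% threshold).
from collections import Counter

# 5-point scale: 1 = definitely exclude ... 5 = definitely include
ratings = {
    "sensitivity at least equal to human graders": [5, 5, 4, 4, 5, 3, 4],
    "image-quality feedback to the operator": [2, 3, 4, 2, 3, 3, 2],
}

CONSENSUS_THRESHOLD = 0.70  # assumption for illustration, not stated in the protocol

for item, scores in ratings.items():
    counts = Counter(scores)
    share_include = sum(v for k, v in counts.items() if k >= 4) / len(scores)
    verdict = "retain" if share_include >= CONSENSUS_THRESHOLD else "revise or re-rate next round"
    print(f"{item}: {share_include:.0%} rated include -> {verdict}")
```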
Collapse
Affiliation(s)
- Trystan Macdonald
- Ophthalmology Department, Queen Elizabeth Hospital Birmingham, University Hospitals Birmingham National Health Service Foundation Trust, Birmingham, United Kingdom
- Academic Unit of Ophthalmology, Institute of Inflammation and Aging, College of Medical and Dental Sciences, University of Birmingham, Birmingham, United Kingdom
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, Birmingham, United Kingdom
| | - Jacqueline Dinnes
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, Birmingham, United Kingdom
| | | | | | - Bethany Shinkins
- Warwick Medical School, University of Warwick, Coventry, United Kingdom
| | - Jeffry Hogg
- Population Health Sciences Institute, Faculty of Medical Sciences, The University of Newcastle upon Tyne, Newcastle, United Kingdom
| | | | - Ameenat Lola Solebo
- Population Policy and Practice, University College London Great Ormond Street Institute of Child Health, London, United Kingdom
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
| | | | - John Attwood
- Alder Hey Children's Hospital, Alder Hey Children's Hospital NHS Foundation Trust, Liverpool, United Kingdom
| | | | - Rosalind Given-Wilson
- St. George's University Hospitals National Health Service Foundation Trust, London, United Kingdom
| | - Felix Greaves
- National Institute for Health and Care Excellence, London, United Kingdom
- Faculty of Medicine, School of Public Health, Imperial College London, London, United Kingdom
| | - Carl Macrae
- Nottingham University Business School, University of Nottingham, Nottingham, United Kingdom
| | - Russell Pearson
- Medicines and Healthcare Products Regulatory Agency, London, United Kingdom
| | | | - Adnan Tufail
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Institute of Ophthalmology, University College London, London, United Kingdom
| | - Xiaoxuan Liu
- Ophthalmology Department, Queen Elizabeth Hospital Birmingham, University Hospitals Birmingham National Health Service Foundation Trust, Birmingham, United Kingdom
- Academic Unit of Ophthalmology, Institute of Inflammation and Aging, College of Medical and Dental Sciences, University of Birmingham, Birmingham, United Kingdom
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, Birmingham, United Kingdom
| | - Alastair K Denniston
- Ophthalmology Department, Queen Elizabeth Hospital Birmingham, University Hospitals Birmingham National Health Service Foundation Trust, Birmingham, United Kingdom
- Academic Unit of Ophthalmology, Institute of Inflammation and Aging, College of Medical and Dental Sciences, University of Birmingham, Birmingham, United Kingdom
- National Institute for Health and Care Research Birmingham Biomedical Research Centre, Birmingham, United Kingdom
- Centre for Regulatory Science and Innovation, Birmingham Health Partners, Birmingham, United Kingdom
- National Institute for Health and Care Research Biomedical Research Centre at Moorfields and University College London Institute of Ophthalmology, London, United Kingdom
| |
Collapse
|
24
|
Sánchez-Rosenberg G, Magnéli M, Barle N, Kontakis MG, Müller AM, Wittauer M, Gordon M, Brodén C. ChatGPT-4 generates orthopedic discharge documents faster than humans maintaining comparable quality: a pilot study of 6 cases. Acta Orthop 2024; 95:152-156. [PMID: 38597205 PMCID: PMC10959013 DOI: 10.2340/17453674.2024.40182] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/01/2023] [Accepted: 01/28/2024] [Indexed: 04/11/2024] Open
Abstract
BACKGROUND AND PURPOSE Large language models such as ChatGPT-4 have emerged. They hold the potential to reduce the administrative burden by generating everyday clinical documents, thus allowing the physician to spend more time with the patient. We aimed to assess both the quality and the efficiency of discharge documents generated by ChatGPT-4 in comparison with those produced by physicians. PATIENTS AND METHODS To emulate real-world situations, health records for 6 fictional orthopedic cases were created. Discharge documents for each case were generated by a junior attending orthopedic surgeon and an advanced orthopedic resident. ChatGPT-4 was then prompted to generate the discharge documents using the same health record information. The quality assessment was performed by an expert panel (n = 15) blinded to the source of the documents. As a secondary outcome, the time required to generate the documents was compared, logging the duration of document creation by the physicians and by ChatGPT-4. RESULTS Overall, ChatGPT-4- and physician-generated notes were comparable in quality. Notably, ChatGPT-4 generated discharge documents 10 times faster than the traditional method. 4 hallucination events were found in the ChatGPT-4-generated content, compared with 6 in the physician-produced notes. CONCLUSION ChatGPT-4 creates orthopedic discharge notes faster than physicians, with comparable quality. This shows its great potential for making these documents more efficient to produce in orthopedic care. ChatGPT-4 could significantly reduce the administrative burden on healthcare professionals.
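The abstract does not describe the authors' prompting setup; the sketch below shows only one plausible way to prompt a chat model and log generation time using the OpenAI Python SDK. The model name, system prompt, and case vignette are placeholders, not the study's materials.

```python
# Hedged sketch: timing an LLM-drafted orthopedic discharge note (placeholder prompt and case data).
import time
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

case_summary = (
    "68-year-old with displaced femoral neck fracture treated with hemiarthroplasty; "
    "uneventful postoperative course, mobilising with a walker, follow-up planned in 6 weeks."
)

start = time.perf_counter()
response = client.chat.completions.create(
    model="gpt-4",  # placeholder; any capable chat model could be substituted
    messages=[
        {"role": "system", "content": "Draft a structured orthopedic discharge document."},
        {"role": "user", "content": case_summary},
    ],
)
elapsed = time.perf_counter() - start

draft = response.choices[0].message.content
print(f"Draft generated in {elapsed:.1f} s\n{draft[:200]}...")
```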
Collapse
Affiliation(s)
| | - Martin Magnéli
- Karolinska Institute, Department of Clinical Sciences at Danderyd Hospital, Stockholm, Sweden
| | - Niklas Barle
- Karolinska Institute, Department of Clinical Sciences at Danderyd Hospital, Stockholm, Sweden
| | - Michael G Kontakis
- Department of Surgical Sciences, Orthopedics, Uppsala University Hospital, Uppsala, Sweden
| | - Andreas Marc Müller
- Department of Orthopedic and Trauma Surgery, University Hospital Basel, Switzerland
| | - Matthias Wittauer
- Department of Orthopedic and Trauma Surgery, University Hospital Basel, Switzerland
| | - Max Gordon
- Karolinska Institute, Department of Clinical Sciences at Danderyd Hospital, Stockholm, Sweden
| | - Cyrus Brodén
- Department of Surgical Sciences, Orthopedics, Uppsala University Hospital, Uppsala, Sweden.
| |
Collapse
|
25
|
Schliess F, Affini Dicenzo T, Gaus N, Bourez JM, Stegbauer C, Szecsenyi J, Jacobsen M, Müller-Wieland D, Kulzer B, Heinemann L. The German Fast Track Toward Reimbursement of Digital Health Applications: Opportunities and Challenges for Manufacturers, Healthcare Providers, and People With Diabetes. J Diabetes Sci Technol 2024; 18:470-476. [PMID: 36059268 PMCID: PMC10973846 DOI: 10.1177/19322968221121660] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
BACKGROUND Digital health applications (DiGA) supporting the management of diabetes are among the most commonly available digital health technologies. However, transparent quality assurance of DiGA and clinical proof of a positive healthcare effect are often missing, which creates skepticism among some stakeholders regarding the usage and reimbursement of these applications. METHODS This article reviews the recently established fast-track integration of DiGA into the German reimbursement market, with emphasis on the current impact for manufacturers, healthcare providers, and people with diabetes. The German DiGA fast track is contextualized with corresponding initiatives in Europe. RESULTS The option of provisional prescription and reimbursement of DiGA while proof of a positive healthcare effect is generated in parallel may expedite the adoption of DiGA in Germany and beyond. However, the hurdles for permanent prescription and reimbursement of DiGA are high, and only one of the 12 applications that have achieved this status specifically addresses people with diabetes. CONCLUSION The DiGA fast track needs to be further enhanced to address remaining skepticism and to contribute even more to value-based diabetes care.
Collapse
Affiliation(s)
| | | | | | | | - Constance Stegbauer
- AQUA Institute for Applied Quality Improvement and Research in Healthcare GmbH, Göttingen, Germany
| | - Joachim Szecsenyi
- AQUA Institute for Applied Quality Improvement and Research in Healthcare GmbH, Göttingen, Germany
| | - Malte Jacobsen
- Department of Internal Medicine I, RWTH Aachen University Hospital, Aachen, Germany
| | - Dirk Müller-Wieland
- Department of Internal Medicine I, RWTH Aachen University Hospital, Aachen, Germany
| | | | - Lutz Heinemann
- Profil Institut für Stoffwechselforschung GmbH, Neuss, Germany
| |
Collapse
|
26
|
Cai Y, Cai YQ, Tang LY, Wang YH, Gong M, Jing TC, Li HJ, Li-Ling J, Hu W, Yin Z, Gong DX, Zhang GW. Artificial intelligence in the risk prediction models of cardiovascular disease and development of an independent validation screening tool: a systematic review. BMC Med 2024; 22:56. [PMID: 38317226 PMCID: PMC10845808 DOI: 10.1186/s12916-024-03273-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/16/2023] [Accepted: 01/23/2024] [Indexed: 02/07/2024] Open
Abstract
BACKGROUND A comprehensive overview of artificial intelligence (AI) for cardiovascular disease (CVD) prediction and a screening tool for AI models (AI-Ms) suitable for independent external validation are lacking. This systematic review aims to identify, describe, and appraise AI-Ms for CVD prediction in general and special populations and to develop a new independent validation score (IVS) for evaluating the replicability of AI-Ms. METHODS PubMed, Web of Science, Embase, and the IEEE library were searched up to July 2021. Data extraction and analysis were performed for the populations, distribution, predictors, algorithms, and related characteristics. The risk of bias was evaluated with the Prediction model Risk Of Bias ASsessment Tool (PROBAST). Subsequently, we designed the IVS for model replicability evaluation, with five items scored in five steps: transparency of algorithms, performance of models, feasibility of reproduction, risk of reproduction, and clinical implication. The review is registered in PROSPERO (No. CRD42021271789). RESULTS Of 20,887 screened references, 79 articles (82.5% published in 2017-2021) were included, containing 114 datasets (67 from Europe and North America, and none from Africa). We identified 486 AI-Ms, of which the majority were in the development stage (n = 380), and none had undergone independent external validation. A total of 66 distinct algorithms were found; 36.4% were used only once and only 39.4% more than three times. A large number of different predictors (range 5-52,000, median 21) and a wide span of sample sizes (range 80-3,660,000, median 4466) were observed. All models were at high risk of bias according to PROBAST, primarily due to the incorrect use of statistical methods. The IVS analysis classified only 10 models as "recommended"; 281 and 187 were "not recommended" and "warning," respectively. CONCLUSION AI has led the digital revolution in the field of CVD prediction but remains at an early stage of development owing to defects in research design, reporting, and evaluation systems. The IVS we developed may facilitate independent external validation and the further development of this field.
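The abstract describes the IVS as five items (transparency of algorithms, performance of models, feasibility of reproduction, risk of reproduction, and clinical implication) that together place a model into "recommended", "warning", or "not recommended". The sketch below illustrates that kind of item-based triage with invented per-item scores and hypothetical cut-offs; the published scoring rules and thresholds are not reproduced here.

```python
# Hedged sketch of an IVS-style triage (invented scores and cut-offs, not the published rubric).
ivs_items = {
    "transparency of algorithms": 1,   # 1 = criterion met, 0 = not met (assumed coding)
    "performance of models": 1,
    "feasibility of reproduction": 0,
    "risk of reproduction": 1,
    "clinical implication": 0,
}

total = sum(ivs_items.values())

# Hypothetical mapping from total score to the three categories used in the review.
if total >= 4:
    verdict = "recommended"
elif total >= 2:
    verdict = "warning"
else:
    verdict = "not recommended"

print(f"IVS total = {total}/5 -> {verdict}")
```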
Collapse
Affiliation(s)
- Yue Cai
- China Medical University, Shenyang, 110122, China
| | - Yu-Qing Cai
- China Medical University, Shenyang, 110122, China
| | - Li-Ying Tang
- China Medical University, Shenyang, 110122, China
| | - Yi-Han Wang
- China Medical University, Shenyang, 110122, China
| | - Mengchun Gong
- Digital Health China Co. Ltd, Beijing, 100089, China
| | - Tian-Ci Jing
- Smart Hospital Management Department, the First Hospital of China Medical University, Shenyang, 110001, China
| | - Hui-Jun Li
- Shenyang Medical & Film Science and Technology Co. Ltd., Shenyang, 110001, China
- Enduring Medicine Smart Innovation Research Institute, Shenyang, 110001, China
| | - Jesse Li-Ling
- Institute of Genetic Medicine, School of Life Science, State Key Laboratory of Biotherapy, Sichuan University, Chengdu, 610065, China
| | - Wei Hu
- Bayi Orthopedic Hospital, Chengdu, 610017, China
| | - Zhihua Yin
- Department of Epidemiology, School of Public Health, China Medical University, Shenyang, 110122, China.
| | - Da-Xin Gong
- Smart Hospital Management Department, the First Hospital of China Medical University, Shenyang, 110001, China.
- The Internet Hospital Branch of the Chinese Research Hospital Association, Beijing, 100006, China.
| | - Guang-Wei Zhang
- Smart Hospital Management Department, the First Hospital of China Medical University, Shenyang, 110001, China.
- The Internet Hospital Branch of the Chinese Research Hospital Association, Beijing, 100006, China.
| |
Collapse
|
27
|
Montoya ID, Volkow ND. IUPHAR Review: New strategies for medications to treat substance use disorders. Pharmacol Res 2024; 200:107078. [PMID: 38246477 PMCID: PMC10922847 DOI: 10.1016/j.phrs.2024.107078] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/13/2023] [Revised: 01/11/2024] [Accepted: 01/15/2024] [Indexed: 01/23/2024]
Abstract
Substance use disorders (SUDs) and drug overdose are a public health emergency and safe and effective treatments are urgently needed. Developing new medications to treat them is expensive, time-consuming, and the probability of a compound progressing to clinical trials and obtaining FDA-approval is low. The small number of FDA-approved medications for SUDs reflects the low interest of pharmaceutical companies to invest in this area due to market forces, characteristics of the population (e.g., stigma, and socio-economic and legal disadvantages), and the high bar regulatory agencies set for new medication approval. In consequence, most research on medications is funded by government agencies, such as the National Institute on Drug Abuse (NIDA). Multiple scientific opportunities are emerging that can accelerate the discovery and development of new medications for SUDs. These include fast and efficient tools to screen new molecules, discover new medication targets, use of big data to explore large clinical data sets and artificial intelligence (AI) applications to make predictions, and precision medicine tools to individualize and optimize treatments. This review provides a general description of these new research strategies for the development of medications to treat SUDs with emphasis on the gaps and scientific opportunities. It includes a brief overview of the rising public health toll of SUDs; the justification, challenges, and opportunities to develop new medications; and a discussion of medications and treatment endpoints that are being evaluated with support from NIDA.
Collapse
Affiliation(s)
- Ivan D Montoya
- Division of Therapeutics and Medical Consequences, National Institute on Drug Abuse, 3 White Flint North, North Bethesda, MD 20852, United States.
| | - Nora D Volkow
- National Institute on Drug Abuse, 3 White Flint North, North Bethesda, MD 20852, United States
| |
Collapse
|
28
|
Vieira AGDS, Saconato H, Eid RAC, Nawa RK. ChatGPT: immutable insertion in health research and researchers' lives. EINSTEIN-SAO PAULO 2024; 22:eCE0752. [PMID: 38477797 DOI: 10.31744/einstein_journal/2024ce0752] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2023] [Accepted: 10/18/2023] [Indexed: 03/14/2024] Open
|
29
|
Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, Pinto Dos Santos D, Tang A, Wald C, Slavotinek J. Developing, purchasing, implementing and monitoring AI tools in radiology: Practical considerations. A multi-society statement from the ACR, CAR, ESR, RANZCR & RSNA. J Med Imaging Radiat Oncol 2024; 68:7-26. [PMID: 38259140 DOI: 10.1111/1754-9485.13612] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2023] [Accepted: 11/23/2023] [Indexed: 01/24/2024]
Abstract
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.
Collapse
Affiliation(s)
| | - Bibb Allen
- Department of Radiology, Grandview Medical Center, Birmingham, Alabama, USA
- American College of Radiology Data Science Institute, Reston, Virginia, USA
| | - Jaron Chong
- Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
| | - Elmar Kotter
- Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
| | - Nina Kottler
- Radiology Partners, El Segundo, California, USA
- Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, California, USA
| | - John Mongan
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California, USA
| | - Lauren Oakden-Rayner
- Australian Institute for Machine Learning, University of Adelaide, Adelaide, South Australia, Australia
| | - Daniel Pinto Dos Santos
- Department of Radiology, University Hospital of Cologne, Cologne, Germany
- Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
| | - An Tang
- Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montreal, Quebec, Canada
| | - Christoph Wald
- Department of Radiology, Lahey Hospital & Medical Center, Burlington, Massachusetts, USA
- Tufts University Medical School, Boston, Massachusetts, USA
- Commission On Informatics, and Member, Board of Chancellors, American College of Radiology, Reston, Virginia, USA
| | - John Slavotinek
- South Australia Medical Imaging, Flinders Medical Centre Adelaide, Adelaide, South Australia, Australia
- College of Medicine and Public Health, Flinders University, Adelaide, South Australia, Australia
| |
Collapse
|
30
|
Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, Dos Santos DP, Tang A, Wald C, Slavotinek J. Developing, purchasing, implementing and monitoring AI tools in radiology: practical considerations. A multi-society statement from the ACR, CAR, ESR, RANZCR & RSNA. Insights Imaging 2024; 15:16. [PMID: 38246898 PMCID: PMC10800328 DOI: 10.1186/s13244-023-01541-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2024] Open
Abstract
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools. Key points: • The incorporation of artificial intelligence (AI) in radiological practice demands increased monitoring of its utility and safety. • Cooperation between developers, clinicians, and regulators will allow all involved to address ethical issues and monitor AI performance. • AI can fulfil its promise to advance patient well-being if all steps from development to integration in healthcare are rigorously evaluated.
Collapse
Affiliation(s)
| | - Bibb Allen
- Department of Radiology, Grandview Medical Center, Birmingham, AL, USA
- American College of Radiology Data Science Institute, Reston, VA, USA
| | - Jaron Chong
- Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
| | - Elmar Kotter
- Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
| | - Nina Kottler
- Radiology Partners, El Segundo, CA, USA
- Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, CA, USA
| | - John Mongan
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, USA
| | - Lauren Oakden-Rayner
- Australian Institute for Machine Learning, University of Adelaide, Adelaide, Australia
| | - Daniel Pinto Dos Santos
- Department of Radiology, University Hospital of Cologne, Cologne, Germany
- Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
| | - An Tang
- Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montréal, Québec, Canada
| | - Christoph Wald
- Department of Radiology, Lahey Hospital & Medical Center, Burlington, MA, USA
- Tufts University Medical School, Boston, MA, USA
- Commission On Informatics, and Member, Board of Chancellors, American College of Radiology, Virginia, USA
| | - John Slavotinek
- South Australia Medical Imaging, Flinders Medical Centre Adelaide, Adelaide, Australia
- College of Medicine and Public Health, Flinders University, Adelaide, Australia
| |
Collapse
|
31
|
Zrubka Z, Kertész G, Gulácsi L, Czere J, Hölgyesi Á, Nezhad HM, Mosavi A, Kovács L, Butte AJ, Péntek M. The Reporting Quality of Machine Learning Studies on Pediatric Diabetes Mellitus: Systematic Review. J Med Internet Res 2024; 26:e47430. [PMID: 38241075 PMCID: PMC10837761 DOI: 10.2196/47430] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Revised: 04/29/2023] [Accepted: 11/17/2023] [Indexed: 01/23/2024] Open
Abstract
BACKGROUND Diabetes mellitus (DM) is a major health concern among children with the widespread adoption of advanced technologies. However, concerns are growing about the transparency, replicability, biasedness, and overall validity of artificial intelligence studies in medicine. OBJECTIVE We aimed to systematically review the reporting quality of machine learning (ML) studies of pediatric DM using the Minimum Information About Clinical Artificial Intelligence Modelling (MI-CLAIM) checklist, a general reporting guideline for medical artificial intelligence studies. METHODS We searched the PubMed and Web of Science databases from 2016 to 2020. Studies were included if the use of ML was reported in children with DM aged 2 to 18 years, including studies on complications, screening studies, and in silico samples. In studies following the ML workflow of training, validation, and testing of results, reporting quality was assessed via MI-CLAIM by consensus judgments of independent reviewer pairs. Positive answers to the 17 binary items regarding sufficient reporting were qualitatively summarized and counted as a proxy measure of reporting quality. The synthesis of results included testing the association of reporting quality with publication and data type, participants (human or in silico), research goals, level of code sharing, and the scientific field of publication (medical or engineering), as well as with expert judgments of clinical impact and reproducibility. RESULTS After screening 1043 records, 28 studies were included. The sample size of the training cohort ranged from 5 to 561. Six studies featured only in silico patients. The reporting quality was low, with great variation among the 21 studies assessed using MI-CLAIM. The number of items with sufficient reporting ranged from 4 to 12 (mean 7.43, SD 2.62). The items on research questions and data characterization were reported adequately most often, whereas items on patient characteristics and model examination were reported adequately least often. The representativeness of the training and test cohorts to real-world settings and the adequacy of model performance evaluation were the most difficult to judge. Reporting quality improved over time (r=0.50; P=.02); it was higher than average in prognostic biomarker and risk factor studies (P=.04) and lower in noninvasive hypoglycemia detection studies (P=.006), higher in studies published in medical versus engineering journals (P=.004), and higher in studies sharing any code of the ML pipeline versus not sharing (P=.003). The association between expert judgments and MI-CLAIM ratings was not significant. CONCLUSIONS The reporting quality of ML studies in the pediatric population with DM was generally low. Important details for clinicians, such as patient characteristics; comparison with the state-of-the-art solution; and model examination for valid, unbiased, and robust results, were often the weak points of reporting. To assess their clinical utility, the reporting standards of ML studies must evolve, and algorithms for this challenging population must become more transparent and replicable.
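Reporting quality in this review is operationalized as the number of MI-CLAIM items (out of 17 binary items) judged sufficiently reported per study, and the trend over time is summarized with a correlation coefficient. The sketch below illustrates that bookkeeping on randomly generated placeholder data; the coefficient type is assumed to be Spearman's, and the data and results of the published analysis are not reproduced.

```python
# Hedged sketch: per-study MI-CLAIM totals and their correlation with publication year (invented data).
import numpy as np
from scipy.stats import spearmanr  # the review reports r; the exact coefficient type is assumed here

# Rows = studies, columns = 17 binary MI-CLAIM items (1 = sufficiently reported).
items = np.random.default_rng(0).integers(0, 2, size=(21, 17))
years = np.random.default_rng(1).integers(2016, 2021, size=21)

totals = items.sum(axis=1)  # reporting-quality proxy per study (0-17)
rho, p_value = spearmanr(years, totals)

print(f"mean items reported = {totals.mean():.2f} (SD {totals.std(ddof=1):.2f})")
print(f"correlation of reporting quality with year: rho = {rho:.2f}, p = {p_value:.3f}")
```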
Collapse
Affiliation(s)
- Zsombor Zrubka
- HECON Health Economics Research Center, University Research and Innovation Center, Óbuda University, Budapest, Hungary
| | - Gábor Kertész
- John von Neumann Faculty of Informatics, Óbuda University, Budapest, Hungary
| | - László Gulácsi
- HECON Health Economics Research Center, University Research and Innovation Center, Óbuda University, Budapest, Hungary
| | - János Czere
- Doctoral School of Innovation Management, Óbuda University, Budapest, Hungary
| | - Áron Hölgyesi
- HECON Health Economics Research Center, University Research and Innovation Center, Óbuda University, Budapest, Hungary
- Doctoral School of Molecular Medicine, Semmelweis University, Budapest, Hungary
| | - Hossein Motahari Nezhad
- HECON Health Economics Research Center, University Research and Innovation Center, Óbuda University, Budapest, Hungary
- Doctoral School of Business and Management, Corvinus University of Budapest, Budapest, Hungary
| | - Amir Mosavi
- John von Neumann Faculty of Informatics, Óbuda University, Budapest, Hungary
| | - Levente Kovács
- Physiological Controls Research Center, University Research and Innovation Center, Óbuda University, Budapest, Hungary
| | - Atul J Butte
- Bakar Computational Health Sciences Institute, University of California, San Francisco, CA, United States
| | - Márta Péntek
- HECON Health Economics Research Center, University Research and Innovation Center, Óbuda University, Budapest, Hungary
| |
Collapse
|
32
|
Okada N, Umemura Y, Shi S, Inoue S, Honda S, Matsuzawa Y, Hirano Y, Kikuyama A, Yamakawa M, Gyobu T, Hosomi N, Minami K, Morita N, Watanabe A, Yamasaki H, Fukaguchi K, Maeyama H, Ito K, Okamoto K, Harano K, Meguro N, Unita R, Koshiba S, Endo T, Yamamoto T, Yamashita T, Shinba T, Fujimi S. "KAIZEN" method realizing implementation of deep-learning models for COVID-19 CT diagnosis in real world hospitals. Sci Rep 2024; 14:1672. [PMID: 38243054 PMCID: PMC10799049 DOI: 10.1038/s41598-024-52135-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2023] [Accepted: 01/14/2024] [Indexed: 01/21/2024] Open
Abstract
Numerous COVID-19 diagnostic imaging artificial intelligence (AI) studies exist. However, none of their models were of potential clinical use, primarily owing to methodological defects and the lack of implementation considerations for inference. In this study, all development processes of the deep-learning models were performed according to the strict criteria of the "KAIZEN checklist", which is proposed on the basis of previous AI development guidelines to overcome the deficiencies mentioned above. We developed and evaluated two binary-classification deep-learning models to triage COVID-19: a slice model that examines a computed tomography (CT) slice to find COVID-19 lesions, and a series model that examines a series of CT images to identify an infected patient. We collected 2,400,200 CT slices from twelve emergency centers in Japan. Area under the curve (AUC) and accuracy were calculated to assess classification performance. The inference time of the system comprising these two models was measured. For validation data, the slice and series models recognized COVID-19 with AUCs of 0.989 and 0.982 and accuracies of 95.9% and 93.0%, respectively. For test data, the models' AUCs were 0.958 and 0.953 and their accuracies were 90.0% and 91.4%, respectively. The average inference time per case was 2.83 s. Our deep-learning system achieves accuracy and inference speed high enough for practical use. The system has already been implemented in four hospitals, and implementation at eight more is in progress. We released the application software and implementation code for free in a highly usable state to allow its use in Japan and globally.
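The system pairs a slice-level classifier with a series-level decision. One simple way to combine the two (not necessarily the authors' method) is to aggregate per-slice probabilities into a patient-level score and threshold it; the sketch below illustrates that step with placeholder probabilities, an assumed top-k aggregation rule, and a hypothetical operating threshold.

```python
# Hedged sketch: aggregating per-slice COVID-19 probabilities into a series-level triage decision.
# The aggregation rule (top-k mean) and threshold are assumptions, not the published pipeline.
import numpy as np

slice_probs = np.array([0.03, 0.10, 0.81, 0.92, 0.88, 0.15, 0.07, 0.95, 0.60, 0.04])

k = 3
topk_mean = np.sort(slice_probs)[-k:].mean()  # focus on the most suspicious slices
series_positive = topk_mean >= 0.5            # hypothetical operating threshold

print(f"series score = {topk_mean:.2f} -> "
      f"{'triage as COVID-19 suspect' if series_positive else 'negative'}")
```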
Collapse
Affiliation(s)
| | | | - Shoi Shi
- University of Tsukuba, Tsukuba, Japan
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | - Ken Okamoto
- Juntendo University Urayasu Hospital, Urayasu, Japan
| | | | | | - Ryo Unita
- National Hospital Organization Kyoto Medical Center, Kyoto, Japan
| | | | - Takuro Endo
- International University of Health and Welfare, School of Medicine, Narita Hospital, Narita, Japan
| | | | | | | | | |
Collapse
|
33
|
Verma AA, Trbovich P, Mamdani M, Shojania KG. Grand rounds in methodology: key considerations for implementing machine learning solutions in quality improvement initiatives. BMJ Qual Saf 2024; 33:121-131. [PMID: 38050138 DOI: 10.1136/bmjqs-2022-015713] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2023] [Accepted: 11/04/2023] [Indexed: 12/06/2023]
Abstract
Machine learning (ML) solutions are increasingly entering healthcare. They are complex, sociotechnical systems that include data inputs, ML models, technical infrastructure and human interactions. They have promise for improving care across a wide range of clinical applications but if poorly implemented, they may disrupt clinical workflows, exacerbate inequities in care and harm patients. Many aspects of ML solutions are similar to other digital technologies, which have well-established approaches to implementation. However, ML applications present distinct implementation challenges, given that their predictions are often complex and difficult to understand, they can be influenced by biases in the data sets used to develop them, and their impacts on human behaviour are poorly understood. This manuscript summarises the current state of knowledge about implementing ML solutions in clinical care and offers practical guidance for implementation. We propose three overarching questions for potential users to consider when deploying ML solutions in clinical care: (1) Is a clinical or operational problem likely to be addressed by an ML solution? (2) How can an ML solution be evaluated to determine its readiness for deployment? (3) How can an ML solution be deployed and maintained optimally? The Quality Improvement community has an essential role to play in ensuring that ML solutions are translated into clinical practice safely, effectively, and ethically.
Affiliation(s)
- Amol A Verma: Unity Health Toronto, Toronto, Ontario, Canada; Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada; Laboratory Medicine and Pathobiology, University of Toronto, Toronto, ON, Canada; Medicine, University of Toronto Faculty of Medicine, Toronto, Ontario, Canada
- Patricia Trbovich: Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada; Centre for Quality Improvement and Patient Safety, Department of Medicine, University of Toronto, Toronto, ON, Canada; North York General Hospital, Toronto, ON, Canada
- Muhammad Mamdani: Unity Health Toronto, Toronto, Ontario, Canada; Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada; Medicine, University of Toronto Faculty of Medicine, Toronto, Ontario, Canada
- Kaveh G Shojania: Medicine, University of Toronto Faculty of Medicine, Toronto, Ontario, Canada; Sunnybrook Health Sciences Centre, Toronto, ON, Canada
34
Beam K, Sharma P, Levy P, Beam AL. Artificial intelligence in the neonatal intensive care unit: the time is now. J Perinatol 2024; 44:131-135. [PMID: 37443271 DOI: 10.1038/s41372-023-01719-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/17/2023] [Revised: 06/24/2023] [Accepted: 07/03/2023] [Indexed: 07/15/2023]
Abstract
Artificial intelligence (AI) has the potential to revolutionize neonatal intensive care unit (NICU) care by leveraging the large-scale, high-dimensional data that are generated by NICU patients. There is an emerging recognition that the confluence of technological progress, commercialization pathways, and rich data sets provides a unique opportunity for AI to make a lasting impact on the NICU. In this perspective article, we discuss four broad categories of AI applications in the NICU: imaging interpretation, prediction modeling of electronic health record data, integration of real-time monitoring data, and documentation and billing. By enhancing decision-making, streamlining processes, and improving patient outcomes, AI holds the potential to transform the quality of care for vulnerable newborns, making the excitement surrounding AI advancements well-founded and the potential for significant positive change stronger than ever before.
Affiliation(s)
- Kristyn Beam: Department of Neonatology, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Puneet Sharma: Division of Newborn Medicine, Department of Pediatrics, Boston Children's Hospital, Boston, MA, USA
- Phil Levy: Division of Newborn Medicine, Department of Pediatrics, Boston Children's Hospital, Boston, MA, USA
- Andrew L Beam: Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, MA, USA; Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
35
Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, dos Santos DP, Tang A, Wald C, Slavotinek J. Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement from the ACR, CAR, ESR, RANZCR and RSNA. Radiol Artif Intell 2024; 6:e230513. [PMID: 38251899 PMCID: PMC10831521 DOI: 10.1148/ryai.230513] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2024]
Abstract
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools. This article is simultaneously published in Insights into Imaging (DOI 10.1186/s13244-023-01541-3), Journal of Medical Imaging and Radiation Oncology (DOI 10.1111/1754-9485.13612), Canadian Association of Radiologists Journal (DOI 10.1177/08465371231222229), Journal of the American College of Radiology (DOI 10.1016/j.jacr.2023.12.005), and Radiology: Artificial Intelligence (DOI 10.1148/ryai.230513). Keywords: Artificial Intelligence, Radiology, Automation, Machine Learning Published under a CC BY 4.0 license. ©The Author(s) 2024. Editor's Note: The RSNA Board of Directors has endorsed this article. It has not undergone review or editing by this journal.
Affiliation(s)
- Bibb Allen: Department of Radiology, Grandview Medical Center, Birmingham, AL, USA; American College of Radiology Data Science Institute, Reston, VA, USA
- Jaron Chong: Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Elmar Kotter: Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Nina Kottler: Radiology Partners, El Segundo, CA, USA; Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, CA, USA
- John Mongan: Department of Radiology and Biomedical Imaging, University of California, San Francisco, USA
- Lauren Oakden-Rayner: Australian Institute for Machine Learning, University of Adelaide, Adelaide, Australia
- Daniel Pinto dos Santos: Department of Radiology, University Hospital of Cologne, Cologne, Germany; Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
- An Tang: Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montréal, Québec, Canada
- Christoph Wald: Department of Radiology, Lahey Hospital & Medical Center, Burlington, MA, USA; Tufts University Medical School, Boston, MA, USA; Commission On Informatics, and Member, Board of Chancellors, American College of Radiology, Virginia, USA
- John Slavotinek: South Australia Medical Imaging, Flinders Medical Centre Adelaide, Adelaide, Australia; College of Medicine and Public Health, Flinders University, Adelaide, Australia
36
Collins GS, Whittle R, Bullock GS, Logullo P, Dhiman P, de Beyer JA, Riley RD, Schlussel MM. Open science practices need substantial improvement in prognostic model studies in oncology using machine learning. J Clin Epidemiol 2024; 165:111199. [PMID: 37898461 DOI: 10.1016/j.jclinepi.2023.10.015] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2023] [Revised: 10/06/2023] [Accepted: 10/20/2023] [Indexed: 10/30/2023]
Abstract
OBJECTIVE To describe the frequency of open science practices in a contemporary sample of studies developing prognostic models using machine learning methods in the field of oncology. STUDY DESIGN AND SETTING We conducted a systematic review, searching the MEDLINE database between December 1, 2022, and December 31, 2022, for studies developing a multivariable prognostic model using machine learning methods (as defined by the authors) in oncology. Two authors independently screened records and extracted open science practices. RESULTS We identified 46 publications describing the development of a multivariable prognostic model. The adoption of open science principles was poor. Only one study reported availability of a study protocol, and only one study was registered. Funding statements and conflicts of interest statements were common. Thirty-five studies (76%) provided data sharing statements, with 21 (46%) indicating data were available on request to the authors and seven declaring data sharing was not applicable. Two studies (4%) shared data. Only 12 studies (26%) provided code sharing statements, including 2 (4%) that indicated the code was available on request to the authors. Only 11 studies (24%) provided sufficient information to allow their model to be used in practice. The use of reporting guidelines was rare: eight studies (18%) mentioning using a reporting guideline, with 4 (10%) using the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis Or Diagnosis statement, 1 (2%) using Minimum Information About Clinical Artificial Intelligence Modeling and Consolidated Standards Of Reporting Trials-Artificial Intelligence, 1 (2%) using Strengthening The Reporting Of Observational Studies In Epidemiology, 1 (2%) using Standards for Reporting Diagnostic Accuracy Studies, and 1 (2%) using Transparent Reporting of Evaluations with Nonrandomized Designs. CONCLUSION The adoption of open science principles in oncology studies developing prognostic models using machine learning methods is poor. Guidance and an increased awareness of benefits and best practices of open science are needed for prediction research in oncology.
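The frequencies reported above are simple proportions of the 46 included studies; a small sketch of how such a proportion and an interval around it can be computed is shown below (the 35/46 count for data sharing statements is taken from the abstract, but the Wilson interval is an assumption, not necessarily the review's method):

```python
# Minimal sketch: proportion of studies reporting an open science practice,
# with a Wilson 95% confidence interval (illustrative, not the review's code).
from statsmodels.stats.proportion import proportion_confint

n_studies = 46          # studies included in the review
n_with_practice = 35    # e.g., studies providing a data sharing statement

rate = n_with_practice / n_studies
low, high = proportion_confint(n_with_practice, n_studies, alpha=0.05, method="wilson")
print(f"{rate:.0%} (95% CI {low:.0%} to {high:.0%})")
```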
Affiliation(s)
- Gary S Collins: Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Centre for Statistics in Medicine, University of Oxford, Oxford, United Kingdom
- Rebecca Whittle: Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Centre for Statistics in Medicine, University of Oxford, Oxford, United Kingdom
- Garrett S Bullock: Department of Orthopaedic Surgery, Wake Forest School of Medicine, Winston-Salem, NC, USA; Centre for Sport, Exercise and Osteoarthritis Research Versus Arthritis, University of Oxford, Oxford, United Kingdom
- Patricia Logullo: Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Centre for Statistics in Medicine, University of Oxford, Oxford, United Kingdom
- Paula Dhiman: Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Centre for Statistics in Medicine, University of Oxford, Oxford, United Kingdom
- Jennifer A de Beyer: Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Centre for Statistics in Medicine, University of Oxford, Oxford, United Kingdom
- Richard D Riley: Institute of Applied Health Research, College of Medical and Dental Sciences, University of Birmingham, Birmingham, United Kingdom
- Michael M Schlussel: Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Centre for Statistics in Medicine, University of Oxford, Oxford, United Kingdom
37
Bednorz A, Mak JKL, Jylhävä J, Religa D. Use of Electronic Medical Records (EMR) in Gerontology: Benefits, Considerations and a Promising Future. Clin Interv Aging 2023; 18:2171-2183. [PMID: 38152074 PMCID: PMC10752027 DOI: 10.2147/cia.s400887] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2023] [Accepted: 11/05/2023] [Indexed: 12/29/2023] Open
Abstract
Electronic medical records (EMRs) have many benefits in clinical research in gerontology, enabling data analysis, the development of prognostic tools, and disease risk prediction. EMRs also offer a range of advantages in clinical practice, such as comprehensive medical records, streamlined communication with healthcare providers, remote data access, and rapid retrieval of test results. These advantages ultimately lead to increased efficiency, enhanced patient safety, and improved quality of care in gerontology, including benefits such as reduced medication use and better patient history taking and physical examination assessments. The use of artificial intelligence (AI) and machine learning (ML) approaches on EMRs can further improve disease diagnosis, symptom classification, and support for clinical decision-making. However, there are also challenges related to data quality, data entry errors, and the ethics and safety of using AI in healthcare. This article discusses the future of EMRs in gerontology and the application of AI and ML in clinical research. Ethical and legal issues surrounding data sharing and the need for healthcare professionals to critically evaluate and integrate these technologies are also emphasized. The article concludes by discussing the challenges related to the use of EMRs in research as well as in their primary intended use, daily clinical practice.
Affiliation(s)
- Adam Bednorz: John Paul II Geriatric Hospital, Katowice, Poland; Institute of Psychology, Humanitas Academy, Sosnowiec, Poland
- Jonathan K L Mak: Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Juulia Jylhävä: Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden; Faculty of Social Sciences (Health Sciences) and Gerontology Research Center (GEREC), University of Tampere, Tampere, Finland
- Dorota Religa: Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden; Theme Inflammation and Aging, Karolinska University Hospital, Huddinge, Sweden
38
Huang X, Islam MR, Akter S, Ahmed F, Kazami E, Serhan HA, Abd-Alrazaq A, Yousefi S. Artificial intelligence in glaucoma: opportunities, challenges, and future directions. Biomed Eng Online 2023; 22:126. [PMID: 38102597 PMCID: PMC10725017 DOI: 10.1186/s12938-023-01187-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2023] [Accepted: 12/01/2023] [Indexed: 12/17/2023] Open
Abstract
Artificial intelligence (AI) has shown excellent diagnostic performance in detecting various complex problems related to many areas of healthcare including ophthalmology. AI diagnostic systems developed from fundus images have become state-of-the-art tools in diagnosing retinal conditions and glaucoma as well as other ocular diseases. However, designing and implementing AI models using large imaging data is challenging. In this study, we review different machine learning (ML) and deep learning (DL) techniques applied to multiple modalities of retinal data, such as fundus images and visual fields for glaucoma detection, progression assessment, staging and so on. We summarize findings and provide several taxonomies to help the reader understand the evolution of conventional and emerging AI models in glaucoma. We discuss opportunities and challenges facing AI application in glaucoma and highlight some key themes from the existing literature that may help to explore future studies. Our goal in this systematic review is to help readers and researchers to understand critical aspects of AI related to glaucoma as well as determine the necessary steps and requirements for the successful development of AI models in glaucoma.
Affiliation(s)
- Xiaoqin Huang: Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA
- Md Rafiqul Islam: Business Information Systems, Australian Institute of Higher Education, Sydney, Australia
- Shanjita Akter: School of Computer Science, Taylors University, Subang Jaya, Malaysia
- Fuad Ahmed: Department of Computer Science & Engineering, Islamic University of Technology (IUT), Gazipur, Bangladesh
- Ehsan Kazami: Ophthalmology, General Hospital of Mahabad, Urmia University of Medical Sciences, Urmia, Iran
- Hashem Abu Serhan: Department of Ophthalmology, Hamad Medical Corporations, Doha, Qatar
- Alaa Abd-Alrazaq: AI Center for Precision Health, Weill Cornell Medicine-Qatar, Doha, Qatar
- Siamak Yousefi: Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA; Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, USA
39
Zhong J, Xing Y, Lu J, Zhang G, Mao S, Chen H, Yin Q, Cen Q, Jiang R, Hu Y, Ding D, Ge X, Zhang H, Yao W. The endorsement of general and artificial intelligence reporting guidelines in radiological journals: a meta-research study. BMC Med Res Methodol 2023; 23:292. [PMID: 38093215 PMCID: PMC10717715 DOI: 10.1186/s12874-023-02117-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2023] [Accepted: 12/01/2023] [Indexed: 12/17/2023] Open
Abstract
BACKGROUND Complete reporting is essential for clinical research. However, the endorsement of reporting guidelines in radiological journals is still unclear. Further, as a field that extensively utilizes artificial intelligence (AI), radiology would benefit from the adoption of both general and AI reporting guidelines to enhance the quality and transparency of its research. This study aims to investigate the endorsement of general reporting guidelines and of those for AI applications in medical imaging in radiological journals, and to explore associated journal characteristic variables. METHODS This meta-research study screened journals from the Radiology, Nuclear Medicine & Medical Imaging category, Science Citation Index Expanded of the 2022 Journal Citation Reports, and excluded journals that did not publish original research, were not published in English, or did not have instructions for authors available. The endorsement of fifteen general reporting guidelines and ten AI reporting guidelines was rated using a five-level tool: "active strong", "active weak", "passive moderate", "passive weak", and "none". The association between endorsement and journal characteristic variables was evaluated by logistic regression analysis. RESULTS We included 117 journals. The top-five endorsed reporting guidelines were CONSORT (Consolidated Standards of Reporting Trials, 58.1%, 68/117), PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses, 54.7%, 64/117), STROBE (STrengthening the Reporting of Observational Studies in Epidemiology, 51.3%, 60/117), STARD (Standards for Reporting of Diagnostic Accuracy, 50.4%, 59/117), and ARRIVE (Animal Research Reporting of In Vivo Experiments, 35.9%, 42/117). The most implemented AI reporting guideline was CLAIM (Checklist for Artificial Intelligence in Medical Imaging, 1.7%, 2/117), while the other nine AI reporting guidelines were not mentioned. The Journal Impact Factor quartile and publisher were associated with endorsement of reporting guidelines in radiological journals. CONCLUSIONS The endorsement of general reporting guidelines was suboptimal in radiological journals, and the implementation of reporting guidelines for AI applications in medical imaging was extremely low. Their adoption should be strengthened to facilitate the quality and transparency of radiological study reporting.
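A hedged sketch of the kind of logistic regression described (guideline endorsement regressed on a journal characteristic) follows; the data frame is invented for illustration and does not reproduce the study's variables, coding, or model:

```python
# Minimal sketch: logistic regression of reporting-guideline endorsement on
# Journal Impact Factor quartile (illustrative data, not the study's data).
import pandas as pd
import statsmodels.formula.api as smf

journals = pd.DataFrame({
    "endorsed": [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0],   # 1 = endorses >= 1 guideline
    "jif_quartile": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
})

fit = smf.logit("endorsed ~ jif_quartile", data=journals).fit(disp=0)
print(fit.params)        # log-odds change per quartile step
print(fit.conf_int())    # 95% confidence intervals for the coefficients
```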
Affiliation(s)
- Jingyu Zhong: Department of Imaging, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200336, China
- Yue Xing: Department of Imaging, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200336, China
- Junjie Lu: Department of Epidemiology and Population Health, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Guangcheng Zhang: Department of Orthopedics, Shanghai Sixth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200233, China
- Shiqi Mao: Department of Medical Oncology, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, 200433, China
- Haoda Chen: Department of General Surgery, Pancreatic Disease Center, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Qian Yin: Department of Pathology, Shanghai Sixth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200233, China
- Qingqing Cen: Department of Dermatology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China
- Run Jiang: Department of Pharmacovigilance, Shanghai Hansoh BioMedical Co., Ltd., Shanghai, 201203, China
- Yangfan Hu: Department of Imaging, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200336, China
- Defang Ding: Department of Imaging, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200336, China
- Xiang Ge: Department of Imaging, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200336, China
- Huan Zhang: Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Weiwu Yao: Department of Imaging, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200336, China
40
Wei Q, Tan N, Xiong S, Luo W, Xia H, Luo B. Deep Learning Methods in Medical Image-Based Hepatocellular Carcinoma Diagnosis: A Systematic Review and Meta-Analysis. Cancers (Basel) 2023; 15:5701. [PMID: 38067404 PMCID: PMC10705136 DOI: 10.3390/cancers15235701] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2023] [Revised: 11/25/2023] [Accepted: 11/29/2023] [Indexed: 06/24/2024] Open
Abstract
(1) Background: The aim of our research was to systematically review papers specifically focused on the hepatocellular carcinoma (HCC) diagnostic performance of DL methods based on medical images. (2) Materials: To identify related studies, a comprehensive search was conducted in prominent databases, including Embase, IEEE, PubMed, Web of Science, and the Cochrane Library. The search was limited to studies published before 3 July 2023. The inclusion criteria consisted of studies that either developed or utilized DL methods to diagnose HCC using medical images. To extract data, binary information on diagnostic accuracy was collected to determine the outcomes of interest, namely, the sensitivity, specificity, and area under the curve (AUC). (3) Results: Among the forty-eight initially identified eligible studies, thirty studies were included in the meta-analysis. The pooled sensitivity was 89% (95% CI: 87-91), the specificity was 90% (95% CI: 87-92), and the AUC was 0.95 (95% CI: 0.93-0.97). Analyses of subgroups based on medical image methods (contrast-enhanced and non-contrast-enhanced images), imaging modalities (ultrasound, magnetic resonance imaging, and computed tomography), and comparisons between DL methods and clinicians consistently showed the acceptable diagnostic performance of DL models. The publication bias and high heterogeneity observed between studies and subgroups can potentially result in an overestimation of the diagnostic accuracy of DL methods in medical imaging. (4) Conclusions: To improve future studies, it would be advantageous to establish more rigorous reporting standards that specifically address the challenges associated with DL research in this particular field.
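The pooled estimates above come from a diagnostic test accuracy meta-analysis. A simplified sketch of fixed-effect pooling of per-study sensitivities on the logit scale is shown below with invented counts; the published analysis would typically use a bivariate random-effects model, which this sketch does not reproduce:

```python
# Simplified sketch: inverse-variance pooling of sensitivities on the logit
# scale (fixed-effect; illustrative counts, not the review's data or method).
import numpy as np

tp = np.array([45, 88, 30])          # true positives per study (made-up)
fn = np.array([5, 10, 6])            # false negatives per study (made-up)

sens = tp / (tp + fn)
logit = np.log(sens / (1 - sens))
var = 1 / tp + 1 / fn                # approximate variance of logit(proportion)
w = 1 / var

pooled_logit = np.sum(w * logit) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
pooled = 1 / (1 + np.exp(-pooled_logit))
ci = 1 / (1 + np.exp(-(pooled_logit + np.array([-1.96, 1.96]) * pooled_se)))
print(f"Pooled sensitivity {pooled:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```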
Affiliation(s)
- Qiuxia Wei: Department of Ultrasound, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, 107 West Yanjiang Road, Guangzhou 510120, China; Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, 107 West Yanjiang Road, Guangzhou 510120, China
- Nengren Tan: School of Electronic and Information Engineering, Guangxi Normal University, 15 Qixing District, Guilin 541004, China
- Shiyu Xiong: Department of Ultrasound, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, 107 West Yanjiang Road, Guangzhou 510120, China; Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, 107 West Yanjiang Road, Guangzhou 510120, China
- Wanrong Luo: Department of Ultrasound, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, 107 West Yanjiang Road, Guangzhou 510120, China; Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, 107 West Yanjiang Road, Guangzhou 510120, China
- Haiying Xia: School of Electronic and Information Engineering, Guangxi Normal University, 15 Qixing District, Guilin 541004, China
- Baoming Luo: Department of Ultrasound, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, 107 West Yanjiang Road, Guangzhou 510120, China; Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, 107 West Yanjiang Road, Guangzhou 510120, China
41
Castelo-Branco L, Pellat A, Martins-Branco D, Valachis A, Derksen JWG, Suijkerbuijk KPM, Dafni U, Dellaporta T, Vogel A, Prelaj A, Groenwold RHH, Martins H, Stahel R, Bliss J, Kather J, Ribelles N, Perrone F, Hall PS, Dienstmann R, Booth CM, Pentheroudakis G, Delaloge S, Koopman M. ESMO Guidance for Reporting Oncology real-World evidence (GROW). Ann Oncol 2023; 34:1097-1112. [PMID: 37848160 DOI: 10.1016/j.annonc.2023.10.001] [Citation(s) in RCA: 20] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2023] [Revised: 09/28/2023] [Accepted: 10/04/2023] [Indexed: 10/19/2023] Open
Affiliation(s)
- L Castelo-Branco: Scientific and Medical Division, European Society for Medical Oncology (ESMO), Lugano, Switzerland
- A Pellat: Department of Gastroenterology and Digestive Oncology, Hôpital Cochin AP-HP, Université Paris Cité, Paris; Centre d'Épidémiologie Clinique, Hôtel Dieu, Paris, France
- D Martins-Branco: Scientific and Medical Division, European Society for Medical Oncology (ESMO), Lugano, Switzerland; Université Libre de Bruxelles (ULB), Hôpital Universitaire de Bruxelles (HUB), Institut Jules Bordet, Academic Trials Promoting Team (ATPT), Brussels, Belgium
- A Valachis: Department of Oncology, Faculty of Medicine and Health, Örebro University Hospital, Örebro University, Örebro, Sweden
- J W G Derksen: Julius Center for Health Sciences and Primary Care, Department of Epidemiology and Health Economics, University Medical Centre Utrecht, Utrecht University, Utrecht
- K P M Suijkerbuijk: Department of Medical Oncology, University Medical Centre Utrecht, Utrecht University, Utrecht, The Netherlands
- U Dafni: Laboratory of Biostatistics, Department of Nursing, National and Kapodistrian University of Athens, Athens; Frontier Science Foundation Hellas, Athens, Greece
- T Dellaporta: Frontier Science Foundation Hellas, Athens, Greece
- A Vogel: Department of Gastroenterology, Hepatology and Endocrinology, Medical School of Hannover, Hannover, Germany; Toronto Center of Liver Disease, Toronto General Hospital, University Health Network, Toronto; Princess Margaret Cancer Centre, University of Toronto, Toronto, Canada
- A Prelaj: AI-ON-Lab, Medical Oncology Department, Fondazione IRCCS Istituto Nazionale Tumori, Milan; NEARLab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- R H H Groenwold: Department of Clinical Epidemiology, Leiden University Medical Center, Leiden, The Netherlands
- H Martins: Business Research Unit, ISCTE Business School, ISCTE-IUL, Lisbon, Portugal
- R Stahel: ETOP IBCSG Partners Foundation, Berne, Switzerland
- J Bliss: ICR-CTSU, Division of Clinical Studies, The Institute of Cancer Research, London, UK
- J Kather: Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden; Medical Oncology, National Center for Tumor Diseases, University Hospital Heidelberg, Heidelberg, Germany
- N Ribelles: Medical Oncology Intercenter Unit, Regional and Virgen de la Victoria University Hospitals, IBIMA, Málaga, Spain
- F Perrone: Clinical Trial Unit, Istituto Nazionale Tumori IRCCS Fondazione G. Pascale, Naples, Italy
- P S Hall: Institute of Genetics and Cancer, University of Edinburgh, Edinburgh, UK
- R Dienstmann: Oncoclinicas Precision Medicine, Oncoclinicas Group, São Paulo, Brazil; Oncology Data Science Group, Vall d'Hebron Institute of Oncology, Barcelona, Spain
- C M Booth: Department of Oncology; Department of Public Health Sciences, Queen's University, Kingston, Canada
- G Pentheroudakis: Scientific and Medical Division, European Society for Medical Oncology (ESMO), Lugano, Switzerland
- S Delaloge: Department of Cancer Medicine, Gustave Roussy, Villejuif, France
- M Koopman: Department of Medical Oncology, University Medical Centre Utrecht, Utrecht University, Utrecht, The Netherlands
42
Shin TG, Lee Y, Kim K, Lee MS, Kwon JM. ROMIAE (Rule-Out Acute Myocardial Infarction Using Artificial Intelligence Electrocardiogram Analysis) trial study protocol: a prospective multicenter observational study for validation of a deep learning-based 12-lead electrocardiogram analysis model for detecting acute myocardial infarction in patients visiting the emergency department. Clin Exp Emerg Med 2023; 10:438-445. [PMID: 38012820 PMCID: PMC10790062 DOI: 10.15441/ceem.22.360] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2023] [Revised: 09/06/2023] [Accepted: 09/07/2023] [Indexed: 11/29/2023] Open
Abstract
OBJECTIVE With the development of artificial intelligence (AI), a growing number of methods have achieved outstanding performance in the diagnosis of acute myocardial infarction (AMI) using the electrocardiogram (ECG). However, AI-ECG analysis for detecting AMI has not yet been evaluated in a multicenter prospective design. This prospective multicenter observational study aims to validate an AI-ECG model for detecting AMI in patients visiting the emergency department. METHODS Approximately 9,000 adult patients with chest pain and/or equivalent symptoms of AMI will be enrolled at 18 emergency medical centers in Korea. The AI-ECG analysis algorithm that we developed and validated will be used in this study. The primary endpoint is the diagnosis of AMI on the day of the emergency department visit, and the secondary endpoint is a 30-day major adverse cardiac event. Patient registration began in March 2022 at centers approved by the institutional review board. DISCUSSION This is the first prospective study designed to assess the efficacy of an AI-based 12-lead ECG analysis algorithm for diagnosing AMI in emergency departments across multiple centers. This study may provide insights into the utility of deep learning for detecting AMI on electrocardiograms in emergency departments. Trial registration ClinicalTrials.gov identifier: NCT05435391. Registered on June 28, 2022.
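For orientation, once the adjudicated AMI diagnoses are available, the diagnostic performance of the AI-ECG classification can be summarized from a 2x2 table; a minimal sketch with hypothetical counts (not trial results) follows:

```python
# Minimal sketch: diagnostic indices of an AI-ECG AMI classification against
# the adjudicated diagnosis (hypothetical counts, not results of this trial).
def diagnostic_indices(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),   # detected among true AMI cases
        "specificity": tn / (tn + fp),   # ruled out among non-AMI cases
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

print(diagnostic_indices(tp=240, fp=310, fn=35, tn=8415))
```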
Affiliation(s)
- Tae Gun Shin: Department of Emergency Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Youngjoo Lee: Department of Emergency Medicine, Soonchunhyang University Seoul Hospital, Seoul, Korea
- Kyuseok Kim: Department of Emergency Medicine, CHA Bundang Medical Center, CHA University School of Medicine, Seongnam, Korea
- Min Sung Lee: Medical Research Team, Medical AI Co, Seoul, Korea; Artificial Intelligence and Big Data Research Center, Incheon Sejong Hospital, Incheon, Korea
- Joon-myoung Kwon: Medical Research Team, Medical AI Co, Seoul, Korea; Artificial Intelligence and Big Data Research Center, Incheon Sejong Hospital, Incheon, Korea; Department of Critical Care and Emergency Medicine, Incheon Sejong Hospital, Incheon, Korea
43
Veras M, Dyer JO, Rooney M, Barros Silva PG, Rutherford D, Kairy D. Usability and Efficacy of Artificial Intelligence Chatbots (ChatGPT) for Health Sciences Students: Protocol for a Crossover Randomized Controlled Trial. JMIR Res Protoc 2023; 12:e51873. [PMID: 37999958 DOI: 10.2196/51873] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2023] [Revised: 10/18/2023] [Accepted: 10/20/2023] [Indexed: 11/25/2023] Open
Abstract
BACKGROUND The integration of artificial intelligence (AI) into health sciences students' education holds significant importance. The rapid advancement of AI has opened new horizons in scientific writing and has the potential to reshape human-technology interactions. AI in education may impact critical thinking, leading to unintended consequences that need to be addressed. Understanding the implications of AI adoption in education is essential for ensuring its responsible and effective use, empowering health sciences students to navigate AI-driven technologies' evolving field with essential knowledge and skills. OBJECTIVE This study aims to provide details on the study protocol and the methods used to investigate the usability and efficacy of ChatGPT, a large language model. The primary focus is on assessing its role as a supplementary learning tool for improving learning processes and outcomes among undergraduate health sciences students, with a specific emphasis on chronic diseases. METHODS This single-blinded, crossover, randomized, controlled trial is part of a broader mixed methods study, and the primary emphasis of this paper is on the quantitative component of the overall research. A total of 50 students will be recruited for this study. The alternative hypothesis posits that there will be a significant difference in learning outcomes and technology usability between students using ChatGPT (group A) and those using standard web-based tools (group B) to access resources and complete assignments. Participants will be allocated to sequence AB or BA in a 1:1 ratio using computer-generated randomization. Both arms include students' participation in a writing assignment intervention, with a washout period of 21 days between interventions. The primary outcome is the measure of the technology usability and effectiveness of ChatGPT, whereas the secondary outcome is the measure of students' perceptions and experiences with ChatGPT as a learning tool. Outcome data will be collected up to 24 hours after the interventions. RESULTS This study aims to understand the potential benefits and challenges of incorporating AI as an educational tool, particularly in the context of student learning. The findings are expected to identify critical areas that need attention and help educators develop a deeper understanding of AI's impact on the educational field. By exploring the differences in the usability and efficacy between ChatGPT and conventional web-based tools, this study seeks to inform educators and students on the responsible integration of AI into academic settings, with a specific focus on health sciences education. CONCLUSIONS By exploring the usability and efficacy of ChatGPT compared with conventional web-based tools, this study seeks to inform educators and students about the responsible integration of AI into academic settings. TRIAL REGISTRATION ClinicalTrails.gov NCT05963802; https://clinicaltrials.gov/study/NCT05963802. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) PRR1-10.2196/51873.
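A minimal sketch of a computer-generated 1:1 allocation of 50 participants to sequence AB or BA is shown below; it is illustrative only, and the trial's actual procedure (for example, blocking or a central randomization service) is not specified here:

```python
# Minimal sketch: computer-generated 1:1 allocation to crossover sequence AB
# or BA for 50 participants (illustrative; not the trial's actual procedure).
import random

random.seed(2023)                      # fixed seed so the list is reproducible
sequences = ["AB"] * 25 + ["BA"] * 25  # exact 1:1 balance across 50 students
random.shuffle(sequences)

for participant_id, seq in enumerate(sequences, start=1):
    print(participant_id, seq)
```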
Affiliation(s)
- Mirella Veras: Health Sciences, Carleton University, Ottawa, ON, Canada; Centre for Interdisciplinary Research in Rehabilitation of Greater Montreal, Montréal, QC, Canada
- Joseph-Omer Dyer: École de Réadaptation, Faculté de Médecine, Université de Montréal, Montréal, QC, Canada; Groupe Interdisciplinaire de Recherche sur la Cognition et le Raisonnement Professionnel, Faculty of Medicine, Université de Montréal, Montréal, QC, Canada
- Morgan Rooney: Teaching and Learning Services, Carleton University, Ottawa, ON, Canada
- Derek Rutherford: School of Physiotherapy, Dalhousie University, Halifax, NS, Canada
- Dahlia Kairy: Centre for Interdisciplinary Research in Rehabilitation of Greater Montreal, Montréal, QC, Canada; École de Réadaptation, Faculté de Médecine, Université de Montréal, Montréal, QC, Canada; Institut Universitaire sur la Réadaptation en Déficience Physique de Montréal, Centre Intégré Universitaire de Santé et Services Sociaux du Centre-Sud-de-l'Île-de-Montréal, Montréal, QC, Canada
44
Wang Y, Li N, Chen L, Wu M, Meng S, Dai Z, Zhang Y, Clarke M. Guidelines, Consensus Statements, and Standards for the Use of Artificial Intelligence in Medicine: Systematic Review. J Med Internet Res 2023; 25:e46089. [PMID: 37991819 PMCID: PMC10701655 DOI: 10.2196/46089] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2023] [Revised: 08/21/2023] [Accepted: 09/26/2023] [Indexed: 11/23/2023] Open
Abstract
BACKGROUND The application of artificial intelligence (AI) in the delivery of health care is a promising area, and guidelines, consensus statements, and standards on AI regarding various topics have been developed. OBJECTIVE We performed this study to assess the quality of guidelines, consensus statements, and standards in the field of AI for medicine and to provide a foundation for recommendations about the future development of AI guidelines. METHODS We searched 7 electronic databases from database establishment to April 6, 2022, and screened articles involving AI guidelines, consensus statements, and standards for eligibility. The AGREE II (Appraisal of Guidelines for Research & Evaluation II) and RIGHT (Reporting Items for Practice Guidelines in Healthcare) tools were used to assess the methodological and reporting quality of the included articles. RESULTS This systematic review included 19 guideline articles, 14 consensus statement articles, and 3 standard articles published between 2019 and 2022. Their content involved disease screening, diagnosis, and treatment; AI intervention trial reporting; AI imaging development and collaboration; AI data application; and AI ethics governance and applications. Our quality assessment revealed that the average overall AGREE II score was 4.0 (range 2.2-5.5; 7-point Likert scale) and the mean overall reporting rate of the RIGHT tool was 49.4% (range 25.7%-77.1%). CONCLUSIONS The results indicated important differences in the quality of different AI guidelines, consensus statements, and standards. We made recommendations for improving their methodological and reporting quality. TRIAL REGISTRATION PROSPERO International Prospective Register of Systematic Reviews (CRD42022321360); https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=321360.
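For reference, AGREE II item ratings are usually combined into scaled scores; the sketch below shows the standard per-domain scaling with made-up ratings. The review itself reports 7-point overall assessment scores, so this is illustrative rather than a reproduction of its computation:

```python
# Sketch of the standard AGREE II scaled domain score (made-up ratings; not
# necessarily the exact calculation used in the cited review).
def agree_domain_score(ratings_per_appraiser: list[list[int]]) -> float:
    """ratings_per_appraiser: one list of 1-7 item ratings per appraiser."""
    n_appraisers = len(ratings_per_appraiser)
    n_items = len(ratings_per_appraiser[0])
    obtained = sum(sum(r) for r in ratings_per_appraiser)
    minimum = 1 * n_items * n_appraisers
    maximum = 7 * n_items * n_appraisers
    return 100 * (obtained - minimum) / (maximum - minimum)

print(agree_domain_score([[5, 6, 4], [4, 5, 5]]))   # two appraisers, three items
```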
Affiliation(s)
- Ying Wang: Department of Medical Administration, West China Hospital, Sichuan University, Chengdu, China
- Nian Li: Department of Medical Administration, West China Hospital, Sichuan University, Chengdu, China
- Lingmin Chen: Department of Anesthesiology, National Clinical Research Center for Geriatrics, West China Hospital, Sichuan University, Chengdu, China
- Miaomiao Wu: Department of General Practice, National Clinical Research Center for Geriatrics, International Medical Center, West China Hospital, Sichuan University, Chengdu, China
- Sha Meng: Department of Operation Management, West China Hospital, Sichuan University, Chengdu, China
- Zelei Dai: Department of Radiation Oncology, Cancer Center and State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, China
- Yonggang Zhang: Department of Periodical Press, National Clinical Research Center for Geriatrics, Chinese Evidence-based Medicine Center, Nursing Key Laboratory of Sichuan Province, West China Hospital, Sichuan University, Chengdu, China
- Mike Clarke: Northern Ireland Methodology Hub, Queen's University Belfast, Belfast, United Kingdom
45
Zsidai B, Hilkert AS, Kaarre J, Narup E, Senorski EH, Grassi A, Ley C, Longo UG, Herbst E, Hirschmann MT, Kopf S, Seil R, Tischer T, Samuelsson K, Feldt R. A practical guide to the implementation of AI in orthopaedic research - part 1: opportunities in clinical application and overcoming existing challenges. J Exp Orthop 2023; 10:117. [PMID: 37968370 PMCID: PMC10651597 DOI: 10.1186/s40634-023-00683-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Accepted: 10/21/2023] [Indexed: 11/17/2023] Open
Abstract
Artificial intelligence (AI) has the potential to transform medical research by improving disease diagnosis, clinical decision-making, and outcome prediction. Despite the rapid adoption of AI and machine learning (ML) in other domains and industry, deployment in medical research and clinical practice poses several challenges due to the inherent characteristics and barriers of the healthcare sector. Therefore, researchers aiming to perform AI-intensive studies require a fundamental understanding of the key concepts, biases, and clinical safety concerns associated with the use of AI. Through the analysis of large, multimodal datasets, AI has the potential to revolutionize orthopaedic research, offering new insights into the optimal diagnosis and management of patients affected by musculoskeletal injury and disease. This article is the first in a series introducing fundamental concepts and best practices to guide healthcare professionals and researchers interested in performing AI-intensive orthopaedic research studies. The vast potential of AI in orthopaedics is illustrated through examples involving disease- or injury-specific outcome prediction, medical image analysis, clinical decision support systems and digital twin technology. Furthermore, it is essential to address the role of human involvement in training unbiased, generalizable AI models, their explainability in high-risk clinical settings, and the implementation of expert oversight and clinical safety measures in the event of failure. In conclusion, the opportunities and challenges of AI in medicine are presented to ensure the safe and ethical deployment of AI models for orthopaedic research and clinical application. Level of evidence IV.
Affiliation(s)
- Bálint Zsidai: Sahlgrenska Sports Medicine Center, Gothenburg, Sweden; Department of Orthopaedics, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Ann-Sophie Hilkert: Department of Computer Science and Engineering, Chalmers University of Technology, Gothenburg, Sweden; Medfield Diagnostics AB, Gothenburg, Sweden
- Janina Kaarre: Sahlgrenska Sports Medicine Center, Gothenburg, Sweden; Department of Orthopaedics, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Department of Orthopaedic Surgery, UPMC Freddie Fu Sports Medicine Center, University of Pittsburgh, Pittsburgh, USA
- Eric Narup: Sahlgrenska Sports Medicine Center, Gothenburg, Sweden; Department of Orthopaedics, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Eric Hamrin Senorski: Sahlgrenska Sports Medicine Center, Gothenburg, Sweden; Department of Health and Rehabilitation, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Sportrehab Sports Medicine Clinic, Gothenburg, Sweden
- Alberto Grassi: Department of Orthopaedics, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; IIa Clinica Ortopedica E Traumatologica, IRCCS Istituto Ortopedico Rizzoli, Bologna, Italy
- Christophe Ley: Department of Mathematics, University of Luxembourg, Esch-Sur-Alzette, Luxembourg
- Umile Giuseppe Longo: Department of Orthopaedic and Trauma Surgery, Campus Bio-Medico University, Rome, Italy
- Elmar Herbst: Department of Trauma, Hand and Reconstructive Surgery, University Hospital Münster, Münster, Germany
- Michael T Hirschmann: Department of Orthopedic Surgery and Traumatology, Head Knee Surgery and DKF Head of Research, Kantonsspital Baselland, 4101, Bruderholz, Switzerland
- Sebastian Kopf: Center of Orthopaedics and Traumatology, University Hospital Brandenburg a.d.H., Brandenburg Medical School Theodor Fontane, 14770, Brandenburg a.d.H., Germany; Faculty of Health Sciences Brandenburg, Brandenburg Medical School Theodor Fontane, 14770, Brandenburg a.d.H., Germany
- Romain Seil: Department of Orthopaedic Surgery, Centre Hospitalier Luxembourg and Luxembourg Institute of Health, Luxembourg, Luxembourg
- Thomas Tischer: Clinic for Orthopaedics and Trauma Surgery, Malteser Waldkrankenhaus St. Marien, Erlangen, Germany
- Kristian Samuelsson: Sahlgrenska Sports Medicine Center, Gothenburg, Sweden; Department of Orthopaedics, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Department of Orthopaedics, Sahlgrenska University Hospital, Mölndal, Sweden
- Robert Feldt: Department of Orthopaedics, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
46
Holste G, Oikonomou EK, Mortazavi BJ, Coppi A, Faridi KF, Miller EJ, Forrest JK, McNamara RL, Ohno-Machado L, Yuan N, Gupta A, Ouyang D, Krumholz HM, Wang Z, Khera R. Severe aortic stenosis detection by deep learning applied to echocardiography. Eur Heart J 2023; 44:4592-4604. [PMID: 37611002 PMCID: PMC11004929 DOI: 10.1093/eurheartj/ehad456] [Citation(s) in RCA: 19] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Revised: 06/21/2023] [Accepted: 07/11/2023] [Indexed: 08/25/2023] Open
Abstract
BACKGROUND AND AIMS Early diagnosis of aortic stenosis (AS) is critical to prevent morbidity and mortality but requires skilled examination with Doppler imaging. This study reports the development and validation of a novel deep learning model that relies on two-dimensional (2D) parasternal long axis videos from transthoracic echocardiography without Doppler imaging to identify severe AS, suitable for point-of-care ultrasonography. METHODS AND RESULTS In a training set of 5257 studies (17 570 videos) from 2016 to 2020 [Yale-New Haven Hospital (YNHH), Connecticut], an ensemble of three-dimensional convolutional neural networks was developed to detect severe AS, leveraging self-supervised contrastive pretraining for label-efficient model development. This deep learning model was validated in a temporally distinct set of 2040 consecutive studies from 2021 from YNHH as well as two geographically distinct cohorts of 4226 and 3072 studies, from California and other hospitals in New England, respectively. The deep learning model achieved an area under the receiver operating characteristic curve (AUROC) of 0.978 (95% CI: 0.966, 0.988) for detecting severe AS in the temporally distinct test set, maintaining its diagnostic performance in geographically distinct cohorts [0.952 AUROC (95% CI: 0.941, 0.963) in California and 0.942 AUROC (95% CI: 0.909, 0.966) in New England]. The model was interpretable with saliency maps identifying the aortic valve, mitral annulus, and left atrium as the predictive regions. Among non-severe AS cases, predicted probabilities were associated with worse quantitative metrics of AS suggesting an association with various stages of AS severity. CONCLUSION This study developed and externally validated an automated approach for severe AS detection using single-view 2D echocardiography, with potential utility for point-of-care screening.
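Confidence intervals such as those quoted for the AUROC are often obtained by bootstrapping the test set; a minimal sketch with simulated labels and probabilities (not the study's data or necessarily its method) is shown below:

```python
# Minimal sketch: percentile bootstrap 95% CI for AUROC on a held-out test set
# (simulated labels and probabilities, not the echocardiography model's data).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)                                # made-up labels
y_prob = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, size=500), 0, 1)

aucs = []
for _ in range(1000):
    idx = rng.integers(0, len(y_true), size=len(y_true))             # resample cases
    if len(np.unique(y_true[idx])) < 2:                              # need both classes
        continue
    aucs.append(roc_auc_score(y_true[idx], y_prob[idx]))

print(np.round([np.percentile(aucs, 2.5), np.percentile(aucs, 97.5)], 3))
```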
Affiliation(s)
- Gregory Holste: Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, USA; Section of Cardiovascular Medicine, Department of Internal Medicine, Yale University School of Medicine, 333 Cedar Street, New Haven, CT 06520-8056, USA
- Evangelos K Oikonomou: Section of Cardiovascular Medicine, Department of Internal Medicine, Yale University School of Medicine, 333 Cedar Street, New Haven, CT 06520-8056, USA
- Bobak J Mortazavi: Department of Computer Science & Engineering, Texas A&M University, College Station, TX, USA; Center for Outcomes Research and Evaluation, Yale-New Haven Hospital, 195 Church St 5th Floor, New Haven, CT, USA
- Andreas Coppi: Section of Cardiovascular Medicine, Department of Internal Medicine, Yale University School of Medicine, 333 Cedar Street, New Haven, CT 06520-8056, USA; Center for Outcomes Research and Evaluation, Yale-New Haven Hospital, 195 Church St 5th Floor, New Haven, CT, USA
- Kamil F Faridi: Section of Cardiovascular Medicine, Department of Internal Medicine, Yale University School of Medicine, 333 Cedar Street, New Haven, CT 06520-8056, USA
- Edward J Miller: Section of Cardiovascular Medicine, Department of Internal Medicine, Yale University School of Medicine, 333 Cedar Street, New Haven, CT 06520-8056, USA
- John K Forrest: Section of Cardiovascular Medicine, Department of Internal Medicine, Yale University School of Medicine, 333 Cedar Street, New Haven, CT 06520-8056, USA
- Robert L McNamara: Section of Cardiovascular Medicine, Department of Internal Medicine, Yale University School of Medicine, 333 Cedar Street, New Haven, CT 06520-8056, USA
- Lucila Ohno-Machado: Section of Biomedical Informatics and Data Science, Yale School of Medicine, New Haven, CT, USA
- Neal Yuan: Department of Medicine, University of California San Francisco, San Francisco, CA, USA; Division of Cardiology, San Francisco Veterans Affairs Medical Center, San Francisco, CA, USA
- Aakriti Gupta: Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- David Ouyang: Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA; Division of Artificial Intelligence in Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Harlan M Krumholz: Section of Cardiovascular Medicine, Department of Internal Medicine, Yale University School of Medicine, 333 Cedar Street, New Haven, CT 06520-8056, USA; Center for Outcomes Research and Evaluation, Yale-New Haven Hospital, 195 Church St 5th Floor, New Haven, CT, USA; Department of Health Policy and Management, Yale School of Public Health, New Haven, CT, USA
- Zhangyang Wang: Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, USA
- Rohan Khera: Section of Cardiovascular Medicine, Department of Internal Medicine, Yale University School of Medicine, 333 Cedar Street, New Haven, CT 06520-8056, USA; Center for Outcomes Research and Evaluation, Yale-New Haven Hospital, 195 Church St 5th Floor, New Haven, CT, USA; Section of Biomedical Informatics and Data Science, Yale School of Medicine, New Haven, CT, USA; Section of Health Informatics, Department of Biostatistics, Yale School of Public Health, 60 College St, New Haven, CT, USA
47
Guo W, Lv C, Guo M, Zhao Q, Yin X, Zhang L. Innovative applications of artificial intelligence in zoonotic disease management. SCIENCE IN ONE HEALTH 2023; 2:100045. [PMID: 39077042 PMCID: PMC11262289 DOI: 10.1016/j.soh.2023.100045] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/27/2023] [Accepted: 10/22/2023] [Indexed: 07/31/2024]
Abstract
Zoonotic diseases, transmitted between humans and animals, pose a substantial threat to global public health. In recent years, artificial intelligence (AI) has emerged as a transformative tool in the fight against diseases. This comprehensive review discusses the innovative applications of AI in the management of zoonotic diseases, including disease prediction, early diagnosis, drug development, and future prospects. AI-driven predictive models leverage extensive datasets to predict disease outbreaks and transmission patterns, thereby facilitating proactive public health responses. Early diagnosis benefits from AI-powered diagnostic tools that expedite pathogen identification and containment. Furthermore, AI technologies have accelerated drug discovery by identifying potential drug targets and optimizing candidate drugs. This review addresses these advancements, while also examining the promising future of AI in zoonotic disease control. We emphasize the pivotal role of AI in revolutionizing our approach to managing zoonotic diseases and highlight its potential to safeguard the health of both humans and animals on a global scale.
Affiliation(s)
- Wenqiang Guo: Department of Animal Nutrition and Feed Science, College of Animal Science and Technology, Huazhong Agricultural University, Wuhan 430070, China
- Chenrui Lv: Department of Animal Nutrition and Feed Science, College of Animal Science and Technology, Huazhong Agricultural University, Wuhan 430070, China
- Meng Guo: College of Veterinary Medicine, Henan Agricultural University, Zhengzhou 450046, China
- Qiwei Zhao: Department of Animal Nutrition and Feed Science, College of Animal Science and Technology, Huazhong Agricultural University, Wuhan 430070, China
- Xinyi Yin: Department of Animal Nutrition and Feed Science, College of Animal Science and Technology, Huazhong Agricultural University, Wuhan 430070, China
- Li Zhang: Department of Animal Nutrition and Feed Science, College of Animal Science and Technology, Huazhong Agricultural University, Wuhan 430070, China
48
Tsang B, Gupta A, Takahashi MS, Baffi H, Ola T, Doria AS. Applications of artificial intelligence in magnetic resonance imaging of primary pediatric cancers: a scoping review and CLAIM score assessment. Jpn J Radiol 2023; 41:1127-1147. [PMID: 37395982 DOI: 10.1007/s11604-023-01437-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/05/2022] [Accepted: 04/18/2023] [Indexed: 07/04/2023]
Abstract
PURPOSES: To review the uses of artificial intelligence (AI) for magnetic resonance (MR) imaging assessment of primary pediatric cancers, to identify common literature topics and knowledge gaps, and to assess the adherence of the existing literature to the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) guidelines.
MATERIALS AND METHODS: A scoping literature search of the MEDLINE, EMBASE, and Cochrane databases was performed, including studies with >10 subjects and a mean age of <21 years. Relevant data were summarized into three categories based on AI application: detection; characterization; and treatment and monitoring. Readers independently scored each study using the CLAIM guidelines, and inter-rater reproducibility was assessed using intraclass correlation coefficients.
RESULTS: Twenty-one studies were included. The most common AI application for pediatric cancer MR imaging was tumor diagnosis and detection (13/21 [62%] studies). The most commonly studied tumors were posterior fossa tumors (14/21 [67%] studies). Knowledge gaps included a lack of research in AI-driven tumor staging (0/21 [0%] studies), imaging genomics (1/21 [5%] studies), and tumor segmentation (2/21 [10%] studies). Adherence to the CLAIM guidelines was moderate in the primary studies, with an average (range) of 55% (34%-73%) of CLAIM items reported. Adherence improved over time by publication year.
CONCLUSION: The literature on AI applications of MR imaging in pediatric cancers is limited. The existing literature shows moderate adherence to the CLAIM guidelines, suggesting that better adherence is required in future studies.
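As an illustration of the scoring methodology described above (not the authors' actual code), the short Python sketch below computes per-study CLAIM adherence as the percentage of items reported and a two-way intraclass correlation coefficient, ICC(2,1), for agreement between two hypothetical readers; the item counts and the assumption of 42 applicable CLAIM items are made up for the example.

```python
# Illustrative sketch: CLAIM adherence percentages and inter-rater ICC(2,1).
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """Shrout-Fleiss ICC(2,1), absolute agreement; rows = studies, columns = raters."""
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()   # between-study variation
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()   # between-rater variation
    ss_total = ((scores - grand) ** 2).sum()
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical data: CLAIM items reported for 5 studies by 2 readers,
# assuming 42 applicable CLAIM items per study (all values made up).
counts = np.array([[23, 25], [31, 30], [18, 20], [27, 27], [15, 17]], dtype=float)
adherence = 100 * counts.mean(axis=1) / 42
print("Mean (range) adherence: {:.0f}% ({:.0f}%-{:.0f}%)".format(
    adherence.mean(), adherence.min(), adherence.max()))
print("Inter-rater ICC(2,1):", round(icc_2_1(counts), 2))
```

A high ICC(2,1) on such counts would indicate that the two readers applied the checklist consistently; the adherence summary mirrors the "average (range)" reporting style used in the results.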
Affiliation(s)
- Brian Tsang
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Department of Diagnostic Imaging, Research Institute, The Hospital for Sick Children, Toronto, ON, Canada
| | - Aaryan Gupta
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Department of Diagnostic Imaging, Research Institute, The Hospital for Sick Children, Toronto, ON, Canada
| | - Marcelo Straus Takahashi
- Instituto de Radiologia do Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo (InRad/HC-FMUSP), São Paulo, SP, Brazil
- Instituto da Criança do Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo (ICr/HC-FMUSP), São Paulo, SP, Brazil
- DasaInova, Diagnósticos da América SA (Dasa), São Paulo, SP, Brazil
| | | | - Tolulope Ola
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Department of Diagnostic Imaging, Research Institute, The Hospital for Sick Children, Toronto, ON, Canada
| | - Andrea S Doria
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada.
- Department of Diagnostic Imaging, Research Institute, The Hospital for Sick Children, Toronto, ON, Canada.
| |
49
van Breugel M, Fehrmann RSN, Bügel M, Rezwan FI, Holloway JW, Nawijn MC, Fontanella S, Custovic A, Koppelman GH. Current state and prospects of artificial intelligence in allergy. Allergy 2023; 78:2623-2643. [PMID: 37584170 DOI: 10.1111/all.15849] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Received: 04/20/2023] [Revised: 07/08/2023] [Accepted: 07/31/2023] [Indexed: 08/17/2023]
Abstract
The field of medicine is witnessing exponential growth of interest in artificial intelligence (AI), which enables new research questions and the analysis of larger and new types of data. Nevertheless, applications that go beyond proofs of concept and deliver clinical value remain rare, especially in the field of allergy. This narrative review provides a fundamental understanding of the core concepts of AI and critically discusses its limitations and open challenges, such as data availability and bias, along with potential directions to surmount them. We provide a conceptual framework to structure AI applications within this field and discuss leading case examples. Most of these applications of AI and machine learning in allergy concern supervised learning and unsupervised clustering, with a strong emphasis on diagnosis and subtyping. We share a perspective on guidelines for good AI practice to guide readers in applying AI effectively and safely, along with prospects for advancing the field and initiatives to increase clinical impact. We anticipate that AI can further deepen our knowledge of disease mechanisms and contribute to precision medicine in allergy.
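As a minimal sketch of the unsupervised clustering approach mentioned above (illustrative only, not drawn from the review), the Python snippet below standardizes synthetic patient features, for example log total IgE, blood eosinophils, FEV1 % predicted, and skin-prick-test positives, and compares k-means solutions by silhouette width to suggest candidate allergy subtypes; all feature names and values are assumptions.

```python
# Minimal sketch: k-means phenotyping on synthetic, standardized patient features.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(42)
# Two synthetic patient groups with distinct feature profiles (all values made up):
# columns = log total IgE, blood eosinophils (cells/uL), FEV1 % predicted, SPT positives.
group_a = rng.normal([2.5, 600, 75, 5], [0.3, 150, 8, 1.5], size=(60, 4))
group_b = rng.normal([1.2, 200, 95, 1], [0.3, 80, 6, 0.8], size=(60, 4))
X = StandardScaler().fit_transform(np.vstack([group_a, group_b]))

# Compare candidate numbers of clusters by silhouette width, a common heuristic.
for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(f"k={k}: silhouette={silhouette_score(X, labels):.2f}")
```

In applied work, cluster stability across resamples and clinical interpretability of the resulting subtypes matter at least as much as any internal validity index.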
Affiliation(s)
- Merlijn van Breugel
- Department of Pediatric Pulmonology and Pediatric Allergology, Beatrix Children's Hospital, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Groningen Research Institute for Asthma and COPD (GRIAC), University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- MIcompany, Amsterdam, the Netherlands
| | - Rudolf S N Fehrmann
- Department of Medical Oncology, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
| | | | - Faisal I Rezwan
- Human Development and Health, Faculty of Medicine, University of Southampton, Southampton, UK
- Department of Computer Science, Aberystwyth University, Aberystwyth, UK
| | - John W Holloway
- Human Development and Health, Faculty of Medicine, University of Southampton, Southampton, UK
- National Institute for Health and Care Research Southampton Biomedical Research Centre, University Hospitals Southampton NHS Foundation Trust, Southampton, UK
| | - Martijn C Nawijn
- Groningen Research Institute for Asthma and COPD (GRIAC), University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Department of Pathology and Medical Biology, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
| | - Sara Fontanella
- National Heart and Lung Institute, Imperial College London, London, UK
- National Institute for Health and Care Research Imperial Biomedical Research Centre (BRC), London, UK
| | - Adnan Custovic
- National Heart and Lung Institute, Imperial College London, London, UK
- National Institute for Health and Care Research Imperial Biomedical Research Centre (BRC), London, UK
| | - Gerard H Koppelman
- Department of Pediatric Pulmonology and Pediatric Allergology, Beatrix Children's Hospital, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Groningen Research Institute for Asthma and COPD (GRIAC), University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
| |
50
Peek N, Sujan M, Scott P. Digital health and care: emerging from pandemic times. BMJ Health Care Inform 2023; 30:e100861. [PMID: 37832967 PMCID: PMC10583078 DOI: 10.1136/bmjhci-2023-100861] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 07/23/2023] [Accepted: 09/20/2023] [Indexed: 10/15/2023]
Abstract
In 2020, we published an editorial about the massive disruption of health and care services caused by the COVID-19 pandemic and the rapid changes in digital service delivery, artificial intelligence, and data sharing that were taking place at the time. Now, 3 years later, we describe how these developments have progressed since then, reflect on lessons learnt, and consider key challenges and opportunities ahead by reviewing significant developments reported in the literature. As before, the three key areas we consider are digital transformation of services, realising the potential of artificial intelligence, and wise data sharing to facilitate learning health systems. We conclude that the field of digital health has matured rapidly during the pandemic, but major sociotechnical, evaluation, and trust challenges remain in the development and deployment of new digital services.
Affiliation(s)
- Niels Peek
- Centre for Health Informatics, The University of Manchester, Manchester, UK
- NIHR Applied Research Collaboration Greater Manchester, The University of Manchester, Manchester, UK
| | - Mark Sujan
- Human Factors Everywhere Ltd, Woking, UK
| | - Philip Scott
- Institute of Management and Health, University of Wales Trinity Saint David, Swansea, UK
| |