51
Xu W, Liang X, Chen L, Hong W, Hu X. Biobanks in chronic disease management: A comprehensive review of strategies, challenges, and future directions. Heliyon 2024; 10:e32063. PMID: 38868047; PMCID: PMC11168399; DOI: 10.1016/j.heliyon.2024.e32063.
Abstract
Biobanks, through the collection and storage of patient blood, tissue, genomic, and other biological samples, provide uniquely rich resources for the research and management of chronic diseases such as cardiovascular disease, diabetes, and cancer. These samples contain valuable cellular- and molecular-level information that can be used to decipher disease pathogenesis and to guide the development of novel diagnostic technologies, treatment methods, and personalized medical strategies. This article first outlines the historical evolution of biobanks, their classification, and the impact of technological advancements. It then elaborates on the significant role of biobanks in revealing molecular biomarkers of chronic diseases, translating basic research into clinical applications, and achieving individualized treatment and management. Challenges such as standardization of sample processing, information privacy, and security are also discussed. Finally, from the perspectives of policy support, regulatory improvement, and public participation, the article forecasts future development directions for biobanks and strategies to address these challenges, aiming to safeguard and enhance their unique advantages in supporting chronic disease prevention and treatment.
Affiliation(s)
- Wanna Xu: Shenzhen Center for Chronic Disease Control, Shenzhen Institute of Dermatology, Shenzhen, 518020, China
- Xiongshun Liang: Shenzhen Center for Chronic Disease Control, Shenzhen Institute of Dermatology, Shenzhen, 518020, China
- Lin Chen: Shenzhen Center for Chronic Disease Control, Shenzhen Institute of Dermatology, Shenzhen, 518020, China
- Wenxu Hong: Shenzhen Center for Chronic Disease Control, Shenzhen Institute of Dermatology, Shenzhen, 518020, China
- Xuqiao Hu: Shenzhen Center for Chronic Disease Control, Shenzhen Institute of Dermatology, Shenzhen, 518020, China; Second Clinical Medical College of Jinan University, First Affiliated Hospital of Southern University of Science and Technology (Shenzhen People's Hospital), Shenzhen, China
52
Johnson H, Tipirneni-Sajja A. Explainable AI to Facilitate Understanding of Neural Network-Based Metabolite Profiling Using NMR Spectroscopy. Metabolites 2024; 14:332. PMID: 38921467; PMCID: PMC11205398; DOI: 10.3390/metabo14060332.
Abstract
Neural networks (NNs) are emerging as a rapid and scalable method for quantifying metabolites directly from nuclear magnetic resonance (NMR) spectra, but the nonlinear nature of NNs precludes understanding of how a model makes predictions. This study implements an explainable artificial intelligence algorithm called integrated gradients (IG) to elucidate which regions of input spectra are the most important for the quantification of specific analytes. The approach is first validated in simulated mixture spectra of eight aqueous metabolites and then investigated in experimentally acquired lipid spectra of a reference standard mixture and a murine hepatic extract. The IG method revealed that, like a human spectroscopist, NNs recognize and quantify analytes based on an analyte's respective resonance line-shapes, amplitudes, and frequencies. NNs can compensate for peak overlap and prioritize specific resonances most important for concentration determination. Further, we show how modifying an NN training dataset can affect how a model makes decisions, and we provide examples of how this approach can be used to debug issues with model performance. Overall, results show that the IG technique facilitates a visual and quantitative understanding of how model inputs relate to model outputs, potentially making NNs a more attractive option for targeted and automated NMR-based metabolomics.
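The integrated-gradients attribution described in this abstract can be illustrated on a toy differentiable model. This is a minimal sketch, not the paper's code: the quadratic model, its weights, and the step count are illustrative assumptions standing in for the trained NN on NMR spectra.

```python
import numpy as np

def model(x, w):
    # toy differentiable "model": f(x) = sum(w * x**2)
    return float(np.sum(w * x ** 2))

def grad(x, w):
    # analytic gradient of the toy model with respect to its input
    return 2.0 * w * x

def integrated_gradients(x, baseline, w, steps=200):
    # average the input gradient along the straight path baseline -> x
    # (midpoint rule), then scale by (x - baseline), per the IG definition
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.mean(
        [grad(baseline + a * (x - baseline), w) for a in alphas], axis=0
    )
    return (x - baseline) * avg_grad

w = np.array([1.0, -2.0, 0.5])   # illustrative model weights
x = np.array([0.3, 1.0, -0.7])   # toy "input spectrum"
baseline = np.zeros_like(x)      # all-zero baseline
attr = integrated_gradients(x, baseline, w)

# completeness axiom: attributions sum to f(x) - f(baseline)
print(attr.sum(), model(x, w) - model(baseline, w))
```

The completeness property is what lets each attribution be read as that input region's share of the prediction, which is how the paper interprets spectral regions.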
Affiliation(s)
- Aaryani Tipirneni-Sajja: Magnetic Resonance Imaging and Spectroscopy Lab, Department of Biomedical Engineering, The University of Memphis, Memphis, TN 38152, USA
53
Ye DX, Yu JW, Li R, Hao YD, Wang TY, Yang H, Ding H. The Prediction of Recombination Hotspot Based on Automated Machine Learning. J Mol Biol 2024:168653. PMID: 38871176; DOI: 10.1016/j.jmb.2024.168653.
Abstract
Meiotic recombination plays a pivotal role in genetic evolution. Genetic variation induced by recombination is a crucial factor in generating biodiversity and a driving force for evolution. At present, recombination hotspot prediction methods face challenges related to insufficient feature extraction and limited generalization capabilities. This paper focuses on recombination hotspot prediction methods. We explored deep learning-based recombination hotspot prediction and scrutinized the shortcomings of prevalent models in addressing this task. To address these deficiencies, an automated machine learning approach was used to construct a recombination hotspot prediction model. The model combined sequence information with physicochemical properties by employing TF-IDF-Kmer and DNA composition components to acquire more effective feature data. Experimental results validate the effectiveness of the feature extraction method and the automated machine learning technology used in this study. The final model was validated on three distinct datasets and yielded accuracy rates of 97.14%, 79.71%, and 98.73%, surpassing the current leading models by 2%, 2.56%, and 4%, respectively. In addition, we incorporated tools such as SHAP and AutoGluon to analyze the interpretability of black-box models, delved into the impact of individual features on the results, and investigated the reasons behind the misclassification of samples. Finally, an application for recombination hotspot prediction was built to give researchers easy access to the necessary information and tools. The research outcomes of this paper underscore the enormous potential of automated machine learning methods in gene sequence prediction.
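The TF-IDF-Kmer feature idea can be sketched in a few lines: represent each DNA sequence by TF-IDF-weighted counts of its overlapping k-mers. This is a generic illustration under assumed choices (k = 3, a common smoothed IDF variant, toy sequences), not the paper's exact formulation.

```python
import math
from collections import Counter

def kmers(seq, k=3):
    # overlapping k-mers of a DNA sequence
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def tfidf_kmer(seqs, k=3):
    docs = [Counter(kmers(s, k)) for s in seqs]
    n = len(seqs)
    vocab = sorted(set().union(*docs))
    # document frequency: in how many sequences each k-mer appears
    df = {km: sum(1 for d in docs if km in d) for km in vocab}
    vectors = []
    for d in docs:
        total = sum(d.values())
        # term frequency times smoothed inverse document frequency
        vectors.append([
            (d[km] / total) * math.log((1 + n) / (1 + df[km]))
            for km in vocab
        ])
    return vocab, vectors

vocab, vecs = tfidf_kmer(["ATGCGATGCA", "ATGATGATGA", "GGGCCCGGCC"])
```

Downweighting k-mers shared by every sequence while keeping rare, discriminative ones is the same trade-off TF-IDF makes for words in documents.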
Affiliation(s)
- Dong-Xin Ye: School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Jun-Wen Yu: School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Rui Li: School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Yu-Duo Hao: School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Tian-Yu Wang: School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Hui Yang: Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou, Zhejiang, China
- Hui Ding: School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
54
Liang B, Qin H, Nong X, Zhang X. Classification of Ameloblastoma, Periapical Cyst, and Chronic Suppurative Osteomyelitis with Semi-Supervised Learning: The WaveletFusion-ViT Model Approach. Bioengineering (Basel) 2024; 11:571. PMID: 38927807; PMCID: PMC11200596; DOI: 10.3390/bioengineering11060571.
Abstract
Ameloblastoma (AM), periapical cyst (PC), and chronic suppurative osteomyelitis (CSO) are prevalent maxillofacial diseases with similar imaging characteristics but different treatments, making preoperative differential diagnosis crucial. Existing deep learning methods for diagnosis often require manual delineation of the regions of interest (ROIs), which poses challenges in practical application. We propose a new model, Wavelet Extraction and Fusion Module with Vision Transformer (WaveletFusion-ViT), for automatic diagnosis using CBCT panoramic images. In this study, 539 samples comprising healthy (n = 154), AM (n = 181), PC (n = 102), and CSO (n = 102) cases were acquired by CBCT for classification, with an additional 2000 healthy samples for pre-training the domain-adaptive network (DAN). The WaveletFusion-ViT model was initialized with pre-trained weights obtained from the DAN and further trained using semi-supervised learning (SSL) methods. After five-fold cross-validation, the model achieved average sensitivity, specificity, accuracy, and AUC of 79.60%, 94.48%, 91.47%, and 0.942, respectively. Remarkably, our method achieved 91.47% accuracy using less than 20% labeled samples, surpassing the fully supervised approach's accuracy of 89.05%. Despite these promising results, this study's limitations include a low number of CSO cases and a relatively lower accuracy for this condition, which should be addressed in future research. The approach is innovative in that it deviates from the fully supervised learning paradigm typically employed in previous studies. The WaveletFusion-ViT model combines SSL methods to diagnose three types of CBCT panoramic images effectively using only a small portion of labeled data.
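The wavelet-extraction component rests on standard multi-resolution decomposition. A one-level 2D Haar transform (a generic sketch of the idea, not the paper's exact module) splits an image into a low-frequency LL band plus three detail bands:

```python
import numpy as np

def haar2d_level1(img):
    """One-level 2D Haar decomposition into LL, LH, HL, HH sub-bands.

    Expects an array with even height and width; LL carries the
    low-frequency content, the other bands carry directional detail.
    """
    # average / difference along rows
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    # then along columns
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

# a flat image has energy only in the LL band
ll, lh, hl, hh = haar2d_level1(np.full((4, 4), 3.0))
print(ll)  # all entries 3.0; lh, hl, hh are all zero
```

Feeding such sub-bands alongside the raw image is one common way of fusing frequency-domain information into a vision backbone.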
Affiliation(s)
- Bohui Liang: School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
- Hongna Qin: School of Information and Management, Guangxi Medical University, Nanning 530021, China
- Xiaolin Nong: College & Hospital of Stomatology, Guangxi Medical University, Nanning 530021, China
- Xuejun Zhang: School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
55
Robson B, Cooper R. Glass Box and Black Box Machine Learning Approaches to Exploit Compositional Descriptors of Molecules in Drug Discovery and Aid the Medicinal Chemist. ChemMedChem 2024:e202400169. PMID: 38837320; DOI: 10.1002/cmdc.202400169.
Abstract
The synthetic medicinal chemist plays a vital role in drug discovery. Today there are AI tools to guide next syntheses, but many are "Black Boxes" (BB): one learns little more than the prediction made. There are now also AI methods emphasizing visibility and "explainability" (explainable AI, or XAI) that could help when "compositional data" are used, but they often still start from seemingly arbitrary learned weights and lack familiar probabilistic measures based on observation and counting from the outset. If probabilistic methods were used in a complementary way with BB methods and demonstrated comparable predictive power, they would provide guidelines about which groups to include and avoid in next syntheses and would quantify the relationships in probabilistic terms. These points are demonstrated by a blind-test comparison of two main types of BB method and a probabilistic "Glass Box" (GB) method that is new outside of medicine but appears well suited to the above. Because many probabilities can be involved, emphasis is on the predictive power of its simplest explanatory models. Inactive compounds usually outnumber active ones by orders of magnitude, which is often a problem for machine learning methods; however, the approaches used here appear to work well for such "real-world" data.
Affiliation(s)
- Barry Robson: Ingine Inc., 2723 Rocklyn Road, Cleveland, OH-44122, USA; The Dirac Foundation, c/o The Academy Partnership Ltd., Windrush Park, Witney, OX2929, UK
- Richard Cooper: Oxford Drug Design, Oxford Centre for Innovation, New Rd, Oxford, OX1 3TA, UK; Department of Chemistry, 12 Mansfield Road, Oxford, OX1 1BY, UK
56
Li P, Gao S, Wang Y, Zhou R, Chen G, Li W, Hao X, Zhu T. Utilising intraoperative respiratory dynamic features for developing and validating an explainable machine learning model for postoperative pulmonary complications. Br J Anaesth 2024; 132:1315-1326. PMID: 38637267; DOI: 10.1016/j.bja.2024.02.025.
Abstract
BACKGROUND: Timely detection of modifiable risk factors for postoperative pulmonary complications (PPCs) could inform ventilation strategies that attenuate lung injury. We sought to develop, validate, and internally test machine learning models that use intraoperative respiratory features to predict PPCs.
METHODS: We analysed perioperative data from a cohort comprising patients aged 65 yr and older at an academic medical centre from 2019 to 2023. Two linear and four nonlinear learning models were developed and compared with the current gold-standard risk assessment tool, ARISCAT (Assess Respiratory Risk in Surgical Patients in Catalonia Tool). Shapley additive explanations (SHAP) were used to interpret feature importance and interactions.
RESULTS: Perioperative data were obtained from 10 284 patients who underwent 10 484 operations (mean age [range] 71 [65-98] yr; 42% female). An optimised XGBoost model that used preoperative variables and intraoperative respiratory variables had areas under the receiver operating characteristic curve (AUROCs) of 0.878 (0.866-0.891) and 0.881 (0.879-0.883) in the validation and prospective cohorts, respectively. These models outperformed ARISCAT (AUROC: 0.496-0.533). The intraoperative dynamic features of respiratory system compliance, mechanical power, and driving pressure were identified as key modifiable contributors to PPCs. A simplified XGBoost model including 20 variables generated an AUROC of 0.864 (0.852-0.875) in an internal testing cohort and has been developed into a web-based tool for further external validation (https://aorm.wchscu.cn/).
CONCLUSIONS: These findings suggest that real-time identification of surgical patients' risk of postoperative pulmonary complications could help personalise intraoperative ventilatory strategies and reduce postoperative pulmonary complications.
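The AUROCs used to compare models here have a simple rank-based reading: the probability that a randomly chosen case receives a higher risk score than a randomly chosen non-case (the Mann-Whitney formulation). A minimal computation on toy scores (illustrative only, not the study's code):

```python
def auroc(scores, labels):
    # probability that a positive case is ranked above a negative one,
    # counting ties as half a win
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auroc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # perfect ranking -> 1.0
```

An AUROC near 0.5, like ARISCAT's 0.496-0.533 above, means the score ranks cases no better than chance on this cohort.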
Affiliation(s)
- Peiyi Li: Department of Anesthesiology, West China Hospital, Sichuan University, Chengdu, Sichuan, China; Laboratory of Anesthesia and Critical Care Medicine, National-Local Joint Engineering Research Centre of Translational Medicine of Anesthesiology, West China Hospital, Sichuan University, Chengdu, Sichuan, China; The Research Units of West China (2018RU012)-Chinese Academy of Medical Sciences, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Shuanliang Gao: College of Software Engineering, Chengdu University of Information Technology, Chengdu, Sichuan, China
- Yaqiang Wang: College of Software Engineering, Chengdu University of Information Technology, Chengdu, Sichuan, China; Sichuan Key Laboratory of Software Automatic Generation and Intelligent Service, Chengdu, Sichuan, China
- RuiHao Zhou: Department of Anesthesiology, West China Hospital, Sichuan University, Chengdu, Sichuan, China; Laboratory of Anesthesia and Critical Care Medicine, National-Local Joint Engineering Research Centre of Translational Medicine of Anesthesiology, West China Hospital, Sichuan University, Chengdu, Sichuan, China; The Research Units of West China (2018RU012)-Chinese Academy of Medical Sciences, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Guo Chen: Department of Anesthesiology, West China Hospital, Sichuan University, Chengdu, Sichuan, China; The Research Units of West China (2018RU012)-Chinese Academy of Medical Sciences, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Weimin Li: Department of Respiratory and Critical Care Medicine, West China Hospital, Sichuan University, Chengdu, Sichuan, China; Institute of Respiratory Health, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, Sichuan, China; State Key Laboratory of Respiratory Health and Multimorbidity, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Xuechao Hao: Department of Anesthesiology, West China Hospital, Sichuan University, Chengdu, Sichuan, China; The Research Units of West China (2018RU012)-Chinese Academy of Medical Sciences, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Tao Zhu: Department of Anesthesiology, West China Hospital, Sichuan University, Chengdu, Sichuan, China; The Research Units of West China (2018RU012)-Chinese Academy of Medical Sciences, West China Hospital, Sichuan University, Chengdu, Sichuan, China
57
Hou H, Zhang R, Li J. Artificial intelligence in the clinical laboratory. Clin Chim Acta 2024; 559:119724. PMID: 38734225; DOI: 10.1016/j.cca.2024.119724.
Abstract
Laboratory medicine has become a highly automated medical discipline. Artificial intelligence (AI) applied to laboratory medicine is now gaining increasing attention; it can optimize the entire laboratory workflow and may even revolutionize laboratory medicine in the future. However, only a few commercially available AI models are currently approved for use in clinical laboratories, and they have drawbacks such as high cost, limited accuracy, and the need for manual review of model results. Furthermore, only a limited number of literature reviews comprehensively address the research status, challenges, and future opportunities of AI applications in laboratory medicine. Our article begins with a brief introduction to AI and some of its subsets, then reviews AI models that are currently used in clinical laboratories or have been described in emerging studies, explains the existing challenges associated with their application and possible solutions, and finally provides insights into future opportunities in the field. We highlight the current status of implementation and potential applications of AI models at different stages of the clinical testing process.
Affiliation(s)
- Hanjing Hou: National Center for Clinical Laboratories, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing Hospital/National Center of Gerontology, PR China; National Center for Clinical Laboratories, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, PR China; Beijing Engineering Research Center of Laboratory Medicine, Beijing Hospital, Beijing, PR China
- Rui Zhang: National Center for Clinical Laboratories, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing Hospital/National Center of Gerontology, PR China; National Center for Clinical Laboratories, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, PR China; Beijing Engineering Research Center of Laboratory Medicine, Beijing Hospital, Beijing, PR China
- Jinming Li: National Center for Clinical Laboratories, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing Hospital/National Center of Gerontology, PR China; National Center for Clinical Laboratories, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, PR China; Beijing Engineering Research Center of Laboratory Medicine, Beijing Hospital, Beijing, PR China
58
Kommuru S, Adekunle F, Niño S, Arefin S, Thalvayapati SP, Kuriakose D, Ahmadi Y, Vinyak S, Nazir Z. Role of Artificial Intelligence in the Diagnosis of Gastroesophageal Reflux Disease. Cureus 2024; 16:e62206. PMID: 39006681; PMCID: PMC11240074; DOI: 10.7759/cureus.62206.
Abstract
Gastroesophageal reflux disease (GERD) is a disorder that usually presents with heartburn. GERD is diagnosed clinically, but many patients are misdiagnosed because of atypical presentations. The increased use of artificial intelligence (AI) in healthcare has provided multiple ways of diagnosing and treating patients accurately. In this review, we discuss multiple studies in which AI models were used to diagnose GERD; according to these studies, AI models helped diagnose GERD in patients accurately. Although AI is considered one of the most potent emerging aspects of medicine because of its diagnostic accuracy, it presents limitations of its own, which explains why healthcare providers may hesitate to use AI in patient care. These challenges and limitations should be addressed before AI is fully incorporated into the healthcare system.
Affiliation(s)
- Sravani Kommuru: Medical School, Dr. Pinnamaneni Siddhartha Institute of Medical Sciences & Research Foundation, Vijayawada, IND
- Faith Adekunle: Medical School, American University of the Caribbean, Cupecoy, SXM
- Santiago Niño: Surgery, Colegio Mayor de Nuestra Señora del Rosario, Bogota, COL
- Shamsul Arefin: Internal Medicine, Nottingham University Hospitals NHS Trust, Nottingham, GBR
- Dona Kuriakose: Internal Medicine, Petre Shotadze Tbilisi Medical Academy, Tbilisi, GEO
- Yasmin Ahmadi: Medical School, Royal College of Surgeons in Ireland - Medical University of Bahrain, Busaiteen, BHR
- Suprada Vinyak: Internal Medicine, Wellmont Health System/Norton Community Hospital, Norton, USA
- Zahra Nazir: Internal Medicine, Combined Military Hospital, Quetta, PAK
59
Hase H, Mine Y, Okazaki S, Yoshimi Y, Ito S, Peng TY, Sano M, Koizumi Y, Kakimoto N, Tanimoto K, Murayama T. Sex estimation from maxillofacial radiographs using a deep learning approach. Dent Mater J 2024; 43:394-399. PMID: 38599831; DOI: 10.4012/dmj.2023-253.
Abstract
The purpose of this study was to construct deep learning models for more efficient and reliable sex estimation. Two deep learning models, VGG16 and DenseNet-121, were used in this retrospective study. In total, 600 lateral cephalograms were analyzed. A saliency map was generated by gradient-weighted class activation mapping for each output. The two deep learning models achieved high values for each performance metric: accuracy, sensitivity (recall), precision, F1 score, and area under the receiver operating characteristic curve. Both models showed substantial differences between the positions indicated in saliency maps for male and female images. The positions in saliency maps also differed between VGG16 and DenseNet-121, regardless of sex. This analysis of our proposed system suggests that sex estimation from lateral cephalograms can be achieved with high accuracy using deep learning.
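The gradient-weighted class activation mapping behind these saliency maps reduces, per convolutional layer, to a ReLU of a gradient-weighted sum of feature maps. A numpy sketch on synthetic activations (shapes and values are illustrative assumptions, not the study's networks):

```python
import numpy as np

def grad_cam(activations, gradients):
    # activations, gradients: (K, H, W) feature maps and the gradients
    # of the class score with respect to them
    weights = gradients.mean(axis=(1, 2))  # global-average-pool the gradients
    cam = np.maximum(np.tensordot(weights, activations, axes=1), 0.0)  # ReLU
    if cam.max() > 0:
        cam /= cam.max()  # normalise to [0, 1] for overlay on the radiograph
    return cam

# synthetic example: channel 0 drives the class score, channel 1 does not
acts = np.stack([np.eye(3), np.ones((3, 3))])
grads = np.stack([np.ones((3, 3)), np.zeros((3, 3))])
cam = grad_cam(acts, grads)
print(cam)  # highlights the diagonal pattern of channel 0
```

Upsampled to input resolution, such a map marks the regions the model relied on, which is how the study compares male versus female attention patterns.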
Affiliation(s)
- Hiroki Hase: Department of Medical Systems Engineering, Graduate School of Biomedical and Health Sciences, Hiroshima University
- Yuichi Mine: Department of Medical Systems Engineering, Graduate School of Biomedical and Health Sciences, Hiroshima University; Project Research Center for Integrating Digital Dentistry, Hiroshima University
- Shota Okazaki: Department of Medical Systems Engineering, Graduate School of Biomedical and Health Sciences, Hiroshima University; Project Research Center for Integrating Digital Dentistry, Hiroshima University
- Yuki Yoshimi: Department of Orthodontics and Craniofacial Developmental Biology, Graduate School of Biomedical and Health Sciences, Hiroshima University
- Shota Ito: Department of Orthodontics and Craniofacial Developmental Biology, Graduate School of Biomedical and Health Sciences, Hiroshima University
- Tzu-Yu Peng: School of Dentistry, College of Oral Medicine, Taipei Medical University
- Mizuho Sano: Department of Medical Systems Engineering, Graduate School of Biomedical and Health Sciences, Hiroshima University
- Yuma Koizumi: Department of Orthodontics and Craniofacial Developmental Biology, Graduate School of Biomedical and Health Sciences, Hiroshima University
- Naoya Kakimoto: School of Dentistry, College of Oral Medicine, Taipei Medical University
- Kotaro Tanimoto: Department of Oral and Maxillofacial Radiology, Graduate School of Biomedical and Health Sciences, Hiroshima University
- Takeshi Murayama: Department of Medical Systems Engineering, Graduate School of Biomedical and Health Sciences, Hiroshima University; Project Research Center for Integrating Digital Dentistry, Hiroshima University
60
Lee JH, Kim YT, Lee JB. Identification of dental implant systems from low-quality and distorted dental radiographs using AI trained on a large multi-center dataset. Sci Rep 2024; 14:12606. PMID: 38824187; PMCID: PMC11144187; DOI: 10.1038/s41598-024-63422-z.
Abstract
Most artificial intelligence (AI) studies have attempted to identify dental implant systems (DISs) while excluding low-quality and distorted dental radiographs, limiting their actual clinical use. This study aimed to evaluate the effectiveness of an AI model, trained on a large multi-center dataset, in identifying different types of DIS in low-quality and distorted dental radiographs. Based on a fine-tuned, pre-trained ResNet-50 algorithm, 156,965 panoramic and periapical radiological images were used as training and validation datasets, and 530 low-quality and distorted images of four types (not perpendicular to the axis of the fixture, radiation overexposure, apex of the fixture cut off, or containing foreign bodies) were used as test datasets. Moreover, the accuracy of low-quality and distorted DIS classification was compared between the AI and five periodontists. On the test dataset, the AI model achieved accuracy, precision, recall, and F1 score of 95.05%, 95.91%, 92.49%, and 94.17%, respectively. In contrast, the five periodontists, classifying nine types of DIS from the four types of low-quality and distorted radiographs, achieved a mean overall accuracy of 37.2 ± 29.0%. Within the limitations of this study, AI demonstrated superior accuracy in identifying DIS from low-quality or distorted radiographs, outperforming dental professionals in classification tasks. However, for actual clinical application of AI, extensive standardization research on low-quality and distorted radiographic images is essential.
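The four reported metrics are simple functions of confusion-matrix counts. A generic sketch with toy counts (not the study's data):

```python
def classification_metrics(tp, fp, fn, tn):
    # standard definitions of the four metrics reported above
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # a.k.a. sensitivity
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = classification_metrics(tp=80, fp=20, fn=20, tn=80)
print(acc, prec, rec, f1)  # 0.8 0.8 0.8 0.8
```

Reporting precision and recall alongside accuracy matters in multi-class DIS identification, where per-class errors can hide behind a high overall accuracy.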
Affiliation(s)
- Jae-Hong Lee: Department of Periodontology, Jeonbuk National University College of Dentistry, 567 Baekje-daero, Deokjin-gu, Jeonju, 54896, Korea; Research Institute of Clinical Medicine of Jeonbuk National University-Biomedical Research Institute of Jeonbuk National University Hospital, Jeonju, Korea
- Young-Taek Kim: Department of Periodontology, Ilsan Hospital, National Health Insurance Service, Goyang, Korea
- Jong-Bin Lee: Department of Periodontology, Gangneung-Wonju National University College of Dentistry, Gangneung, Korea
61
Zhu H, Qiao S, Zhao D, Wang K, Wang B, Niu Y, Shang S, Dong Z, Zhang W, Zheng Y, Chen X. Machine learning model for cardiovascular disease prediction in patients with chronic kidney disease. Front Endocrinol (Lausanne) 2024; 15:1390729. PMID: 38863928; PMCID: PMC11165240; DOI: 10.3389/fendo.2024.1390729.
Abstract
INTRODUCTION: Cardiovascular disease (CVD) is the leading cause of death in patients with chronic kidney disease (CKD). This study aimed to develop CVD risk prediction models using machine learning to support clinical decision making and improve patient prognosis.
METHODS: Electronic medical records from patients with CKD at a single center from 2015 to 2020 were used to develop machine learning models for the prediction of CVD. Least absolute shrinkage and selection operator (LASSO) regression was used to select important features predicting the risk of developing CVD. Seven machine learning classification algorithms were used to build models, which were evaluated by receiver operating characteristic curves, accuracy, sensitivity, specificity, and F1 score, and Shapley additive explanations were used to interpret the model results. CVD was defined as composite cardiovascular events including coronary heart disease (coronary artery disease, myocardial infarction, angina pectoris, and coronary artery revascularization), cerebrovascular disease (hemorrhagic stroke and ischemic stroke), death from any cause (cardiovascular death, non-cardiovascular death, or unknown cause of death), congestive heart failure, and peripheral artery disease (aortic aneurysm and aortic or other peripheral arterial revascularization). A cardiovascular event was a composite outcome of multiple cardiovascular events, as determined by reviewing medical records.
RESULTS: This study included 8,894 patients with CKD, with a composite CVD event incidence of 25.9%; a total of 2,304 patients reached this outcome. LASSO regression identified eight important features for predicting the risk of CKD progressing to CVD: age, history of hypertension, sex, antiplatelet drugs, high-density lipoprotein, sodium ions, 24-h urinary protein, and estimated glomerular filtration rate. The model developed using Extreme Gradient Boosting (XGBoost) had an area under the curve of 0.89 in the test set, outperforming the other models and indicating the best CVD predictive performance.
CONCLUSION: This study established a CVD risk prediction model for patients with CKD, based on routine clinical diagnostic and treatment data, with good predictive accuracy. This model is expected to provide a scientific basis for the management and treatment of patients with CKD.
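LASSO selects features by soft-thresholding coefficients to exactly zero, which is why it yields a short list of predictors like the eight above. A minimal coordinate-descent sketch (the objective scaling, penalty value, and orthogonal toy design are illustrative assumptions, not the study's pipeline):

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, iters=100):
    # minimise (1/2n)||y - X b||^2 + lam * ||b||_1 by cyclic coordinate descent
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]  # partial residual
            beta[j] = soft_threshold(X[:, j] @ r, n * lam) / col_sq[j]
    return beta

# orthogonal toy design: only the first feature carries signal; the L1
# penalty shrinks its coefficient from 2.0 to 1.5 and zeroes the rest
X = np.eye(5)
y = np.array([2.0, 0.0, 0.0, 0.0, 0.0])
print(lasso_cd(X, y, lam=0.1))  # [1.5, 0, 0, 0, 0]
```

Features whose coefficients are driven exactly to zero are dropped, leaving a sparse, interpretable predictor set for the downstream classifiers.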
Affiliation(s)
- He Zhu: Department of Nephrology, First Medical Center of Chinese PLA General Hospital, National Key Laboratory of Kidney Diseases, National Clinical Research Center for Kidney Diseases, Beijing Key Laboratory of Kidney Diseases Research, Beijing, China; School of Clinical Medicine, Guangdong Pharmaceutical University, Guangzhou, China
- Shen Qiao: Medical Innovation Research Division of Chinese PLA General Hospital, Beijing, China; National Engineering Research Center of Medical Big Data, PLA General Hospital, Beijing, China
- Delong Zhao: Department of Nephrology, First Medical Center of Chinese PLA General Hospital, National Key Laboratory of Kidney Diseases, National Clinical Research Center for Kidney Diseases, Beijing Key Laboratory of Kidney Diseases Research, Beijing, China
- Keyun Wang: Department of Nephrology, First Medical Center of Chinese PLA General Hospital, National Key Laboratory of Kidney Diseases, National Clinical Research Center for Kidney Diseases, Beijing Key Laboratory of Kidney Diseases Research, Beijing, China
- Bin Wang: Department of Nephrology, First Medical Center of Chinese PLA General Hospital, National Key Laboratory of Kidney Diseases, National Clinical Research Center for Kidney Diseases, Beijing Key Laboratory of Kidney Diseases Research, Beijing, China
- Yue Niu: Department of Nephrology, First Medical Center of Chinese PLA General Hospital, National Key Laboratory of Kidney Diseases, National Clinical Research Center for Kidney Diseases, Beijing Key Laboratory of Kidney Diseases Research, Beijing, China
- Shunlai Shang: Department of Nephrology, First Medical Center of Chinese PLA General Hospital, National Key Laboratory of Kidney Diseases, National Clinical Research Center for Kidney Diseases, Beijing Key Laboratory of Kidney Diseases Research, Beijing, China
- Zheyi Dong: Department of Nephrology, First Medical Center of Chinese PLA General Hospital, National Key Laboratory of Kidney Diseases, National Clinical Research Center for Kidney Diseases, Beijing Key Laboratory of Kidney Diseases Research, Beijing, China
- Weiguang Zhang: Department of Nephrology, First Medical Center of Chinese PLA General Hospital, National Key Laboratory of Kidney Diseases, National Clinical Research Center for Kidney Diseases, Beijing Key Laboratory of Kidney Diseases Research, Beijing, China
- Ying Zheng: Department of Nephrology, First Medical Center of Chinese PLA General Hospital, National Key Laboratory of Kidney Diseases, National Clinical Research Center for Kidney Diseases, Beijing Key Laboratory of Kidney Diseases Research, Beijing, China
- Xiangmei Chen: Department of Nephrology, First Medical Center of Chinese PLA General Hospital, National Key Laboratory of Kidney Diseases, National Clinical Research Center for Kidney Diseases, Beijing Key Laboratory of Kidney Diseases Research, Beijing, China
62
Demuth S, Paris J, Faddeenkov I, De Sèze J, Gourraud PA. Clinical applications of deep learning in neuroinflammatory diseases: A scoping review. Rev Neurol (Paris) 2024:S0035-3787(24)00522-8. [PMID: 38772806 DOI: 10.1016/j.neurol.2024.04.004]
Abstract
BACKGROUND Deep learning (DL) is an artificial intelligence technology that has aroused much excitement for predictive medicine due to its ability to process raw data modalities such as images, text, and time series of signals. OBJECTIVES Here, we intend to give the clinical reader elements to understand this technology, taking neuroinflammatory diseases as an illustrative use case of clinical translation efforts. We reviewed the scope of this rapidly evolving field to gain quantitative insights into which clinical applications concentrate the efforts and which data modalities are most commonly used. METHODS We queried the PubMed database for articles reporting DL algorithms for clinical applications in neuroinflammatory diseases and the radiology.healthairegister.com website for commercial algorithms. RESULTS The review included 148 articles published between 2018 and 2024 and five commercial algorithms. The clinical applications could be grouped as computer-aided diagnosis, individual prognosis, functional assessment, segmentation of radiological structures, and optimization of data acquisition. Our review highlighted important discrepancies in efforts: segmentation of radiological structures and computer-aided diagnosis currently concentrate most efforts, with an overrepresentation of imaging. Various model architectures have been used to address different applications, relatively low data volumes, and diverse data modalities. We report the high-level technical characteristics of the algorithms and narratively synthesize the clinical applications. Predictive performances and some common a priori assumptions on this topic are finally discussed. CONCLUSION The currently reported efforts position DL as an information processing technology that enhances existing modalities of paraclinical investigation and brings perspectives to make innovative ones actionable for healthcare.
Affiliation(s)
- S Demuth: Inserm U1064, CR2TI - Center for Research in Transplantation and Translational Immunology, Nantes University, 44000 Nantes, France; Inserm U1119: biopathologie de la myéline, neuroprotection et stratégies thérapeutiques, University of Strasbourg, 1, rue Eugène-Boeckel - CS 60026, 67084 Strasbourg, France
- J Paris: Inserm U1064, CR2TI - Center for Research in Transplantation and Translational Immunology, Nantes University, 44000 Nantes, France
- I Faddeenkov: Inserm U1064, CR2TI - Center for Research in Transplantation and Translational Immunology, Nantes University, 44000 Nantes, France
- J De Sèze: Inserm U1119: biopathologie de la myéline, neuroprotection et stratégies thérapeutiques, University of Strasbourg, 1, rue Eugène-Boeckel - CS 60026, 67084 Strasbourg, France; Department of Neurology, University Hospital of Strasbourg, 1, avenue Molière, 67200 Strasbourg, France; Inserm CIC 1434 Clinical Investigation Center, University Hospital of Strasbourg, 1, avenue Molière, 67200 Strasbourg, France
- P-A Gourraud: Inserm U1064, CR2TI - Center for Research in Transplantation and Translational Immunology, Nantes University, 44000 Nantes, France; "Data clinic", Department of Public Health, University Hospital of Nantes, Nantes, France
63
Gombolay GY, Silva A, Schrum M, Gopalan N, Hallman-Cooper J, Dutt M, Gombolay M. Effects of explainable artificial intelligence in neurology decision support. Ann Clin Transl Neurol 2024; 11:1224-1235. [PMID: 38581138 PMCID: PMC11093252 DOI: 10.1002/acn3.52036]
Abstract
OBJECTIVE Artificial intelligence (AI)-based decision support systems (DSS) are utilized in medicine, but the underlying decision-making processes are usually unknown. Explainable AI (xAI) techniques provide insight into DSS, but little is known about how to design xAI for clinicians. Here we investigate the impact of various xAI techniques on a clinician's interaction with an AI-based DSS in decision-making tasks, as compared to a general population. METHODS We conducted a randomized, blinded study in which members of the Child Neurology Society and American Academy of Neurology were compared to a general population. Participants received recommendations from a DSS via a random assignment of an xAI intervention (decision tree, crowd-sourced agreement, case-based reasoning, probability scores, counterfactual reasoning, feature importance, templated language, and no explanations). Primary outcomes included test performance and perceived explainability, trust, and social competence of the DSS. Secondary outcomes included compliance, understandability, and agreement per question. RESULTS We included 81 neurology participants and 284 general-population participants. Decision trees were perceived as more explainable by the medical than by the general population (P < 0.01) and as more explainable than probability scores within the medical population (P < 0.001). Increasing neurology experience and perceived explainability degraded performance (P = 0.0214). Performance was not predicted by xAI method but by perceived explainability. INTERPRETATION xAI methods have different impacts on a medical versus general population; thus, xAI is not uniformly beneficial, and there is no one-size-fits-all approach. Further user-centered xAI research targeting clinicians, and the development of personalized DSS for clinicians, is needed.
Affiliation(s)
- Grace Y Gombolay: Department of Pediatrics, Division of Neurology, Children's Healthcare of Atlanta, Emory University School of Medicine, Atlanta, GA, USA
- Andrew Silva: Georgia Institute of Technology, Atlanta, GA, USA
- Jamika Hallman-Cooper: Department of Pediatrics, Division of Neurology, Children's Healthcare of Atlanta, Emory University School of Medicine, Atlanta, GA, USA
- Monideep Dutt: Department of Pediatrics, Division of Neurology, Children's Healthcare of Atlanta, Emory University School of Medicine, Atlanta, GA, USA
- Matthew Gombolay: Department of Pediatrics, Division of Neurology, Children's Healthcare of Atlanta, Emory University School of Medicine, Atlanta, GA, USA
64
Lotter W, Hassett MJ, Schultz N, Kehl KL, Van Allen EM, Cerami E. Artificial Intelligence in Oncology: Current Landscape, Challenges, and Future Directions. Cancer Discov 2024; 14:711-726. [PMID: 38597966 PMCID: PMC11131133 DOI: 10.1158/2159-8290.cd-23-1199]
Abstract
Artificial intelligence (AI) in oncology is advancing beyond algorithm development to integration into clinical practice. This review describes the current state of the field, with a specific focus on clinical integration. AI applications are structured according to cancer type and clinical domain, focusing on the four most common cancers and tasks of detection, diagnosis, and treatment. These applications encompass various data modalities, including imaging, genomics, and medical records. We conclude with a summary of existing challenges, evolving solutions, and potential future directions for the field. SIGNIFICANCE AI is increasingly being applied to all aspects of oncology, where several applications are maturing beyond research and development to direct clinical integration. This review summarizes the current state of the field through the lens of clinical translation along the clinical care continuum. Emerging areas are also highlighted, along with common challenges, evolving solutions, and potential future directions for the field.
Affiliation(s)
- William Lotter: Department of Data Science, Dana-Farber Cancer Institute, Boston, MA, USA; Department of Pathology, Brigham and Women’s Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Michael J. Hassett: Harvard Medical School, Boston, MA, USA; Division of Population Sciences, Dana-Farber Cancer Institute, Boston, MA, USA; Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Nikolaus Schultz: Marie-Josée and Henry R. Kravis Center for Molecular Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA; Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Kenneth L. Kehl: Harvard Medical School, Boston, MA, USA; Division of Population Sciences, Dana-Farber Cancer Institute, Boston, MA, USA; Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Eliezer M. Van Allen: Harvard Medical School, Boston, MA, USA; Division of Population Sciences, Dana-Farber Cancer Institute, Boston, MA, USA; Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA; Cancer Program, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Ethan Cerami: Department of Data Science, Dana-Farber Cancer Institute, Boston, MA, USA; Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
65
Contino S, Cruciata L, Gambino O, Pirrone R. IODeep: An IOD for the introduction of deep learning in the DICOM standard. Comput Methods Programs Biomed 2024; 248:108113. [PMID: 38479148 DOI: 10.1016/j.cmpb.2024.108113]
Abstract
BACKGROUND AND OBJECTIVE In recent years, Artificial Intelligence (AI) and in particular Deep Neural Networks (DNN) became a relevant research topic in biomedical image segmentation due to the availability of more and more data sets along with the establishment of well-known competitions. Despite the popularity of DNN-based segmentation on the research side, these techniques are almost unused in daily clinical practice even though they could effectively support the physician during the diagnostic process. Apart from the issues related to the explainability of the predictions of a neural model, such systems are not integrated in the diagnostic workflow, and a standardization of their use is needed to achieve this goal. METHODS This paper presents IODeep, a new DICOM Information Object Definition (IOD) aimed at storing both the weights and the architecture of a DNN already trained on a particular image dataset that is labeled as regards the acquisition modality, the anatomical region, and the disease under investigation. RESULTS The IOD architecture is presented along with a DNN selection algorithm from the PACS server based on the labels outlined above, and a simple PACS viewer purposely designed for demonstrating the effectiveness of the DICOM integration, while no modifications are required on the PACS server side. A service-based architecture in support of the entire workflow has also been implemented. CONCLUSION IODeep ensures full integration of a trained AI model in a DICOM infrastructure, and it also enables a scenario where a trained model can be either fine-tuned with hospital data or trained in a federated learning scheme shared by different hospitals. In this way AI models can be tailored to the real data produced by a Radiology ward, thus improving the physician's decision-making process. Source code is freely available at https://github.com/CHILab1/IODeep.git.
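The label-driven model selection that IODeep performs against the PACS server can be illustrated with a toy registry. The model names, label fields, and fallback rule below are illustrative assumptions, not the attributes actually defined in the IOD:

```python
# Toy registry of trained models, tagged the way IODeep labels its IOD:
# acquisition modality, anatomical region, and disease under investigation.
# All names and fields here are hypothetical.
MODELS = [
    {"name": "unet_mr_brain_glioma", "modality": "MR",
     "region": "brain", "disease": "glioma"},
    {"name": "unet_ct_lung_nodule", "modality": "CT",
     "region": "lung", "disease": "nodule"},
    {"name": "unet_ct_liver_lesion", "modality": "CT",
     "region": "liver", "disease": "lesion"},
]

def select_model(modality, region, disease=None):
    """Pick a trained model matching a study's labels.

    Modality and region must match exactly; an exact disease match is
    preferred, with a fallback to any model trained for that region.
    """
    candidates = [m for m in MODELS
                  if m["modality"] == modality and m["region"] == region]
    if not candidates:
        return None  # no stored model applies to this study
    exact = [m for m in candidates if m["disease"] == disease]
    return (exact or candidates)[0]
```

In the paper's architecture this matching would run server-side against IODs stored in the PACS rather than an in-memory list.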
Affiliation(s)
- Salvatore Contino: Department of Engineering, University of Palermo, Palermo, 90128, Sicily, Italy
- Luca Cruciata: Department of Engineering, University of Palermo, Palermo, 90128, Sicily, Italy
- Orazio Gambino: Department of Engineering, University of Palermo, Palermo, 90128, Sicily, Italy
- Roberto Pirrone: Department of Engineering, University of Palermo, Palermo, 90128, Sicily, Italy
66
Yu C, Pei H. Dynamic Weighting Translation Transfer Learning for Imbalanced Medical Image Classification. Entropy (Basel) 2024; 26:400. [PMID: 38785649 PMCID: PMC11119260 DOI: 10.3390/e26050400]
Abstract
Medical image diagnosis using deep learning has shown significant promise in clinical medicine. However, it often encounters two major difficulties in real-world applications: (1) domain shift, which invalidates the trained model on new datasets, and (2) class imbalance problems leading to model biases towards majority classes. To address these challenges, this paper proposes a transfer learning solution, named Dynamic Weighting Translation Transfer Learning (DTTL), for imbalanced medical image classification. The approach is grounded in information and entropy theory and comprises three modules: Cross-domain Discriminability Adaptation (CDA), Dynamic Domain Translation (DDT), and Balanced Target Learning (BTL). CDA connects discriminative feature learning between source and target domains using a synthetic discriminability loss and a domain-invariant feature learning loss. The DDT unit develops a dynamic translation process for imbalanced classes between two domains, utilizing a confidence-based selection approach to select the most useful synthesized images to create a pseudo-labeled balanced target domain. Finally, the BTL unit performs supervised learning on the reassembled target set to obtain the final diagnostic model. This paper delves into maximizing the entropy of class distributions, while simultaneously minimizing the cross-entropy between the source and target domains to reduce domain discrepancies. By incorporating entropy concepts into our framework, our method not only significantly enhances medical image classification in practical settings but also innovates the application of entropy and information theory within deep learning and medical image processing realms. Extensive experiments demonstrate that DTTL achieves the best performance compared to existing state-of-the-art methods for imbalanced medical image classification tasks.
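The confidence-based selection step of the DDT unit, which picks the most useful synthesized images to assemble a pseudo-labeled balanced target domain, can be sketched as follows. Synthetic inputs only; the fixed per-class quota and the ranking purely by confidence are simplifying assumptions, not the paper's exact criterion:

```python
import numpy as np

def balance_target(pseudo_labels, confidences, per_class):
    """Confidence-based selection: keep, for each class, the `per_class`
    synthesized samples with the highest pseudo-label confidence, so the
    reassembled target set is class-balanced. Returns sorted indices."""
    pseudo_labels = np.asarray(pseudo_labels)
    confidences = np.asarray(confidences, dtype=float)
    selected = []
    for c in np.unique(pseudo_labels):
        idx = np.where(pseudo_labels == c)[0]
        # order this class's samples by descending confidence
        top = idx[np.argsort(-confidences[idx])][:per_class]
        selected.extend(int(i) for i in top)
    return sorted(selected)
```

The selected subset would then feed the BTL unit's supervised training on the balanced target set.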
Affiliation(s)
- Chenglin Yu: School of Electronic & Information Engineering and Communication Engineering, Guangzhou City University of Technology, Guangzhou 510800, China; Key Laboratory of Autonomous Systems and Networked Control, Ministry of Education, Unmanned Aerial Vehicle Systems Engineering Technology Research Center of Guangdong, South China University of Technology, Guangzhou 510640, China
- Hailong Pei: Key Laboratory of Autonomous Systems and Networked Control, Ministry of Education, Unmanned Aerial Vehicle Systems Engineering Technology Research Center of Guangdong, School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640, China
67
Mansouri Z, Salimi Y, Akhavanallaf A, Shiri I, Teixeira EPA, Hou X, Beauregard JM, Rahmim A, Zaidi H. Deep transformer-based personalized dosimetry from SPECT/CT images: a hybrid approach for [177Lu]Lu-DOTATATE radiopharmaceutical therapy. Eur J Nucl Med Mol Imaging 2024; 51:1516-1529. [PMID: 38267686 PMCID: PMC11043201 DOI: 10.1007/s00259-024-06618-9]
Abstract
PURPOSE Accurate dosimetry is critical for ensuring the safety and efficacy of radiopharmaceutical therapies. In current clinical dosimetry practice, MIRD formalisms are widely employed. However, with the rapid advancement of deep learning (DL) algorithms, there has been an increasing interest in leveraging the calculation speed and automation capabilities for different tasks. We aimed to develop a hybrid transformer-based deep learning (DL) model that incorporates a multiple voxel S-value (MSV) approach for voxel-level dosimetry in [177Lu]Lu-DOTATATE therapy. The goal was to enhance the performance of the model to achieve accuracy levels closely aligned with Monte Carlo (MC) simulations, considered as the standard of reference. We extended our analysis to include MIRD formalisms (SSV and MSV), thereby conducting a comprehensive dosimetry study. METHODS We used a dataset consisting of 22 patients undergoing up to 4 cycles of [177Lu]Lu-DOTATATE therapy. MC simulations were used to generate reference absorbed dose maps. In addition, MIRD formalism approaches, namely, single S-value (SSV) and MSV techniques, were performed. A UNEt TRansformer (UNETR) DL architecture was trained using five-fold cross-validation to generate MC-based dose maps. Co-registered CT images were fed into the network as input, whereas the difference between MC and MSV (MC-MSV) was set as output. DL results are then integrated to MSV to revive the MC dose maps. Finally, the dose maps generated by MSV, SSV, and DL were quantitatively compared to the MC reference at both voxel level and organ level (organs at risk and lesions). RESULTS The DL approach showed slightly better performance (voxel relative absolute error (RAE) = 5.28 ± 1.32) compared to MSV (voxel RAE = 5.54 ± 1.4) and outperformed SSV (voxel RAE = 7.8 ± 3.02). Gamma analysis pass rates were 99.0 ± 1.2%, 98.8 ± 1.3%, and 98.7 ± 1.52% for DL, MSV, and SSV approaches, respectively. 
The computational time for MC was the highest (~2 days for a single-bed SPECT study) compared to MSV, SSV, and DL, whereas the DL-based approach outperformed the other approaches in terms of time efficiency (3 s for a single-bed SPECT). Organ-wise analysis showed absolute percent errors of 1.44 ± 3.05%, 1.18 ± 2.65%, and 1.15 ± 2.5% for the SSV, MSV, and DL approaches, respectively, in lesion-absorbed doses. CONCLUSION A hybrid transformer-based deep learning model was developed for fast and accurate dose map generation, outperforming the MIRD approaches, specifically in heterogeneous regions. The model achieved accuracy close to the MC gold standard and has potential for clinical implementation on large-scale datasets.
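The hybrid scheme described above, where the network learns the MC-MSV residual that is then added back to the fast MSV estimate, together with a voxel relative absolute error (RAE) comparison against the MC reference, can be sketched on toy arrays (not patient data; the RAE definition over nonzero-dose voxels is an assumption):

```python
import numpy as np

def hybrid_dose(msv_dose, predicted_residual):
    """Hybrid dosimetry: the fast MSV estimate plus the network's
    prediction of the MC-MSV residual approximates the MC dose map."""
    return np.asarray(msv_dose, float) + np.asarray(predicted_residual, float)

def voxel_rae(estimate, reference):
    """Mean voxel-wise relative absolute error (%) against the reference,
    computed over voxels with nonzero reference dose."""
    estimate = np.asarray(estimate, float)
    reference = np.asarray(reference, float)
    mask = reference > 0
    return 100.0 * float(np.mean(np.abs(estimate[mask] - reference[mask])
                                 / reference[mask]))
```

A perfect residual prediction drives the hybrid map's RAE to zero, which is why the DL correction can close most of the gap between MSV and MC.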
Affiliation(s)
- Zahra Mansouri: Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Yazdan Salimi: Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Azadeh Akhavanallaf: Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Isaac Shiri: Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Eliluane Pirazzo Andrade Teixeira: Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Xinchi Hou: Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Jean-Mathieu Beauregard: Cancer Research Centre and Department of Radiology and Nuclear Medicine, Université Laval, Quebec City, QC, Canada
- Arman Rahmim: Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland; Department of Nuclear Medicine, University Medical Center Groningen, University of Groningen, 9700 RB, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, DK-500, Odense, Denmark; University Research and Innovation Center, Óbuda University, Budapest, Hungary
68
Tian Z, Cheng Y, Zhao S, Li R, Zhou J, Sun Q, Wang D. Deep learning radiomics-based prediction model of metachronous distant metastasis following curative resection for retroperitoneal leiomyosarcoma: a bicentric study. Cancer Imaging 2024; 24:52. [PMID: 38627828 PMCID: PMC11020328 DOI: 10.1186/s40644-024-00697-5]
Abstract
BACKGROUND Combining conventional radiomics models with deep learning features can result in superior performance in predicting the prognosis of patients with tumors; however, this approach has never been evaluated for the prediction of metachronous distant metastasis (MDM) among patients with retroperitoneal leiomyosarcoma (RLS). Thus, the purpose of this study was to develop and validate a preoperative contrast-enhanced computed tomography (CECT)-based deep learning radiomics model for predicting the occurrence of MDM in patients with RLS undergoing complete surgical resection. METHODS A total of 179 patients who had undergone surgery for the treatment of histologically confirmed RLS were retrospectively recruited from two tertiary sarcoma centers. Semantic segmentation features derived from a convolutional neural network deep learning model as well as conventional hand-crafted radiomics features were extracted from preoperative three-phase CECT images to quantify the sarcoma phenotypes. A conventional radiomics signature (RS) and a deep learning radiomics signature (DLRS) that incorporated hand-crafted radiomics and deep learning features were developed to predict the risk of MDM. Additionally, a deep learning radiomics nomogram (DLRN) was established to evaluate the incremental prognostic significance of the DLRS in combination with clinico-radiological predictors. RESULTS The comparison of the area under the curve (AUC) values in the external validation set, as determined by the DeLong test, demonstrated that the integrated DLRN, DLRS, and RS models all exhibited superior predictive performance compared with that of the clinical model (AUC 0.786 [95% confidence interval 0.649-0.923] vs. 0.822 [0.692-0.952] vs. 0.733 [0.573-0.892] vs. 0.511 [0.359-0.662]; both P < 0.05). The decision curve analyses graphically indicated that utilizing the DLRN for risk stratification provided greater net benefits than those achieved using the DLRS, RS and clinical models. 
Good alignment with the calibration curve indicated that the DLRN also exhibited good performance. CONCLUSIONS The novel CECT-based DLRN developed in this study demonstrated promising performance in the preoperative prediction of the risk of MDM following curative resection in patients with RLS. The DLRN, which outperformed the other three models, could provide valuable information for predicting surgical efficacy and tailoring individualized treatment plans in this patient population. TRIAL REGISTRATION Not applicable.
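The decision curve analysis used above compares models by net benefit across threshold probabilities. A minimal sketch using the standard net-benefit formula, NB = TP/n - FP/n * pt/(1 - pt); the inputs are synthetic, not the study's data:

```python
import numpy as np

def net_benefit(y_true, y_score, pt):
    """Net benefit of a risk model at threshold probability `pt`:
    NB = TP/n - FP/n * pt/(1 - pt). The 'treat all' reference curve is
    obtained by classifying every patient as positive."""
    y_true = np.asarray(y_true, dtype=int)
    y_pred = np.asarray(y_score, dtype=float) >= pt
    n = len(y_true)
    tp = int(np.sum(y_pred & (y_true == 1)))
    fp = int(np.sum(y_pred & (y_true == 0)))
    return tp / n - fp / n * pt / (1 - pt)
```

Plotting this quantity over a range of `pt` values for the DLRN, DLRS, RS, and clinical models would reproduce the kind of comparison the decision curve analysis reports.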
Affiliation(s)
- Zhen Tian: Northern Jiangsu People's Hospital, Clinical Teaching Hospital of Medical School, Nanjing University, Yangzhou, China; Department of General Surgery, Northern Jiangsu People's Hospital, Yangzhou, China
- Yifan Cheng: Northern Jiangsu People's Hospital, Clinical Teaching Hospital of Medical School, Nanjing University, Yangzhou, China; Department of General Surgery, Northern Jiangsu People's Hospital, Yangzhou, China
- Shuai Zhao: Northern Jiangsu People's Hospital, Clinical Teaching Hospital of Medical School, Nanjing University, Yangzhou, China; Department of General Surgery, Northern Jiangsu People's Hospital, Yangzhou, China
- Ruiqi Li: Northern Jiangsu People's Hospital, Clinical Teaching Hospital of Medical School, Nanjing University, Yangzhou, China; Department of General Surgery, Northern Jiangsu People's Hospital, Yangzhou, China
- Jiajie Zhou: Northern Jiangsu People's Hospital, Clinical Teaching Hospital of Medical School, Nanjing University, Yangzhou, China; Department of General Surgery, Northern Jiangsu People's Hospital, Yangzhou, China
- Qiannan Sun: Department of General Surgery, Northern Jiangsu People's Hospital, Yangzhou, China; General Surgery Institute of Yangzhou, Yangzhou University, Yangzhou, China
- Daorong Wang: Northern Jiangsu People's Hospital, Clinical Teaching Hospital of Medical School, Nanjing University, Yangzhou, China; Department of General Surgery, Northern Jiangsu People's Hospital, Yangzhou, China; General Surgery Institute of Yangzhou, Yangzhou University, Yangzhou, China; Yangzhou Key Laboratory of Basic and Clinical Transformation of Digestive and Metabolic Diseases, Yangzhou, China
69
Wang AQ, Karaman BK, Kim H, Rosenthal J, Saluja R, Young SI, Sabuncu MR. A Framework for Interpretability in Machine Learning for Medical Imaging. IEEE Access 2024; 12:53277-53292. [PMID: 39421804 PMCID: PMC11486155 DOI: 10.1109/access.2024.3387702]
Abstract
Interpretability for machine learning models in medical imaging (MLMI) is an important direction of research. However, there is a general sense of murkiness in what interpretability means. Why does the need for interpretability in MLMI arise? What goals does one actually seek to address when interpretability is needed? To answer these questions, we identify a need to formalize the goals and elements of interpretability in MLMI. By reasoning about real-world tasks and goals common in both medical image analysis and its intersection with machine learning, we identify five core elements of interpretability: localization, visual recognizability, physical attribution, model transparency, and actionability. From this, we arrive at a framework for interpretability in MLMI, which serves as a step-by-step guide to approaching interpretability in this context. Overall, this paper formalizes interpretability needs in the context of medical imaging, and our applied perspective clarifies concrete MLMI-specific goals and considerations in order to guide method design and improve real-world usage. Our goal is to provide practical and didactic information for model designers and practitioners, inspire developers of models in the medical imaging field to reason more deeply about what interpretability is achieving, and suggest future directions of interpretability research.
Affiliation(s)
- Alan Q Wang: School of Electrical and Computer Engineering, Cornell University-Cornell Tech, New York City, NY 10044, USA; Department of Radiology, Weill Cornell Medical School, New York City, NY 10065, USA
- Batuhan K Karaman: School of Electrical and Computer Engineering, Cornell University-Cornell Tech, New York City, NY 10044, USA; Department of Radiology, Weill Cornell Medical School, New York City, NY 10065, USA
- Heejong Kim: Department of Radiology, Weill Cornell Medical School, New York City, NY 10065, USA
- Jacob Rosenthal: Department of Radiology, Weill Cornell Medical School, New York City, NY 10065, USA; Weill Cornell/Rockefeller/Sloan Kettering Tri-Institutional M.D.-Ph.D. Program, New York City, NY 10065, USA
- Rachit Saluja: School of Electrical and Computer Engineering, Cornell University-Cornell Tech, New York City, NY 10044, USA; Department of Radiology, Weill Cornell Medical School, New York City, NY 10065, USA
- Sean I Young: Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA 02129, USA; Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA 02139, USA
- Mert R Sabuncu: School of Electrical and Computer Engineering, Cornell University-Cornell Tech, New York City, NY 10044, USA; Department of Radiology, Weill Cornell Medical School, New York City, NY 10065, USA
Collapse
|
70
|
Gullo RL, Brunekreef J, Marcus E, Han LK, Eskreis-Winkler S, Thakur SB, Mann R, Lipman KG, Teuwen J, Pinker K. AI Applications to Breast MRI: Today and Tomorrow. J Magn Reson Imaging 2024:10.1002/jmri.29358. [PMID: 38581127 PMCID: PMC11452568 DOI: 10.1002/jmri.29358] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2023] [Revised: 03/07/2024] [Accepted: 03/09/2024] [Indexed: 04/08/2024] Open
Abstract
There is an unrelenting increase in the demand for breast imaging services, partly explained by continuously expanding imaging indications in breast diagnosis and treatment. As the human workforce providing these services is not growing at the same rate, the implementation of artificial intelligence (AI) in breast imaging has gained significant momentum to maximize workflow efficiency and increase productivity while concurrently improving diagnostic accuracy and patient outcomes. Thus far, the implementation of AI in breast imaging is at the most advanced stage with mammography and digital breast tomosynthesis techniques, followed by ultrasound, whereas the implementation of AI in breast magnetic resonance imaging (MRI) is not moving along as rapidly due to the complexity of MRI examinations and fewer available datasets. Nevertheless, there is persisting interest in AI-enhanced breast MRI applications, even as the use and indications of breast MRI continue to expand. This review presents an overview of the basic concepts of AI imaging analysis and subsequently reviews the use cases for AI-enhanced MRI interpretation, that is, breast MRI triaging and lesion detection, lesion classification, prediction of treatment response, risk assessment, and image quality. Finally, it provides an outlook on the barriers and facilitators for the adoption of AI in breast MRI. LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY: Stage 6.
Collapse
Affiliation(s)
- Roberto Lo Gullo
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
| | - Joren Brunekreef
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
| | - Eric Marcus
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
| | - Lynn K Han
- Weill Cornell Medical College, New York-Presbyterian Hospital, New York, NY, USA
| | - Sarah Eskreis-Winkler
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
| | - Sunitha B Thakur
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
| | - Ritse Mann
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
| | - Kevin Groot Lipman
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
| | - Jonas Teuwen
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
| | - Katja Pinker
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
| |
Collapse
|
71
|
Feng S, Wang S, Liu C, Wu S, Zhang B, Lu C, Huang C, Chen T, Zhou C, Zhu J, Chen J, Xue J, Wei W, Zhan X. Prediction model for spinal cord injury in spinal tuberculosis patients using multiple machine learning algorithms: a multicentric study. Sci Rep 2024; 14:7691. [PMID: 38565845 PMCID: PMC10987632 DOI: 10.1038/s41598-024-56711-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2023] [Accepted: 03/09/2024] [Indexed: 04/04/2024] Open
Abstract
Spinal cord injury (SCI) is a prevalent and serious complication among patients with spinal tuberculosis (STB) that can lead to motor and sensory impairment and potentially paraplegia. This research aims to identify factors associated with SCI in STB patients and to develop a clinically significant predictive model. Clinical data from STB patients at a single hospital were collected and divided into training and validation sets. Univariate analysis was employed to screen clinical indicators in the training set. Multiple machine learning (ML) algorithms were utilized to establish predictive models. Model performance was evaluated and compared using receiver operating characteristic (ROC) curves, area under the curve (AUC), calibration curve analysis, decision curve analysis (DCA), and precision-recall (PR) curves. The optimal model was determined, and a prospective cohort from two other hospitals served as a testing set to assess its accuracy. Model interpretation and variable importance ranking were conducted using the DALEX R package. The model was deployed on the web using the Shiny app. Ten clinical characteristics were utilized for the model. The random forest (RF) model emerged as the optimal choice based on the AUC, PR curves, calibration curve analysis, and DCA, achieving a test set AUC of 0.816. Additionally, MONO was identified as the primary predictor of SCI in STB patients through variable importance ranking. The RF predictive model provides an efficient and swift approach for predicting SCI in STB patients.
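The model comparison described above hinges on the AUC, which for a binary outcome equals the probability that a randomly chosen positive case is scored above a randomly chosen negative case (ties counting half). A minimal sketch of that rank-based computation (a generic illustration, not code from the cited study):

```python
def roc_auc(pos_scores, neg_scores):
    """AUC as the probability that a random positive case outranks
    a random negative case; ties contribute half a win."""
    pairs = len(pos_scores) * len(neg_scores)
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / pairs

# toy predicted risks for injured (positive) vs. uninjured (negative) patients
print(roc_auc([0.9, 0.8, 0.4], [0.5, 0.3, 0.2]))  # -> 0.8888888888888888
```

In practice one would compute this with `sklearn.metrics.roc_auc_score`, but the pairwise definition above is what the reported AUC of 0.816 estimates.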
Collapse
Affiliation(s)
- Sitan Feng
- Department of Spine and Osteopathy Ward, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, People's Republic of China
| | - Shujiang Wang
- Department of Outpatient, General Hospital of Eastern Theater Command, Nanjing, Jiangsu, People's Republic of China
| | - Chong Liu
- Department of Spine and Osteopathy Ward, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, People's Republic of China
| | - Shaofeng Wu
- Department of Spine and Osteopathy Ward, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, People's Republic of China
| | - Bin Zhang
- Department of Spine and Osteopathy Ward, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, People's Republic of China
- Department of Spine Ward, Bei Jing Ji Shui Tan Hospital Gui Zhou Hospital, Guiyang, Guizhou, People's Republic of China
| | - Chunxian Lu
- Department of Spine and Osteopathy Ward, Bai Se People's Hospital, Baise, Guangxi, People's Republic of China
| | - Chengqian Huang
- Department of Spine and Osteopathy Ward, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, People's Republic of China
| | - Tianyou Chen
- Department of Spine and Osteopathy Ward, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, People's Republic of China
| | - Chenxing Zhou
- Department of Spine and Osteopathy Ward, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, People's Republic of China
| | - Jichong Zhu
- Department of Spine and Osteopathy Ward, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, People's Republic of China
| | - Jiarui Chen
- Department of Spine and Osteopathy Ward, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, People's Republic of China
| | - Jiang Xue
- Department of Spine and Osteopathy Ward, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, People's Republic of China
| | - Wendi Wei
- Department of Spine and Osteopathy Ward, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, People's Republic of China
| | - Xinli Zhan
- Department of Spine and Osteopathy Ward, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, People's Republic of China.
| |
Collapse
|
72
|
Lo ZJ, Mak MHW, Liang S, Chan YM, Goh CC, Lai T, Tan A, Thng P, Rodriguez J, Weyde T, Smit S. Development of an explainable artificial intelligence model for Asian vascular wound images. Int Wound J 2024; 21:e14565. [PMID: 38146127 PMCID: PMC10961881 DOI: 10.1111/iwj.14565] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2023] [Accepted: 12/04/2023] [Indexed: 12/27/2023] Open
Abstract
Chronic wounds contribute to a significant healthcare and economic burden worldwide. Wound assessment remains challenging given its complex and dynamic nature. The use of artificial intelligence (AI) and machine learning methods in wound analysis is promising. Explainable modelling can help its integration and acceptance in healthcare systems. We aim to develop an explainable AI model for analysing vascular wound images among an Asian population. Two thousand nine hundred and fifty-seven wound images from a vascular wound image registry at a tertiary institution in Singapore were utilized. The dataset was split into training, validation and test sets. Wound images were classified into four types (neuroischaemic ulcer [NIU], surgical site infection [SSI], venous leg ulcer [VLU], pressure ulcer [PU]), measured with automatic estimation of width, length and depth, and segmented into 18 wound and peri-wound features. Data pre-processing was performed using oversampling and augmentation techniques. Convolutional and deep learning models were utilized for model development. The model was evaluated with accuracy, F1 score and receiver operating characteristic (ROC) curves. Explainability methods were used to interpret AI decision reasoning. A web browser application was developed to demonstrate the results of the wound AI model with explainability. After development, the model was tested on an additional 15,476 unlabelled images to evaluate its effectiveness. After development on the training and validation datasets, the model's performance on unseen labelled images in the test set achieved an AUROC of 0.99 for wound classification with a mean accuracy of 95.9%. For wound measurements, the model achieved an AUROC of 0.97 with a mean accuracy of 85.0% for depth classification, and an AUROC of 0.92 with a mean accuracy of 87.1% for width and length determination. For wound segmentation, an AUROC of 0.95 and a mean accuracy of 87.8% were achieved. Testing on unlabelled images, the model confidence score for wound classification was 82.8% with an explainability score of 60.6%. The confidence score was 87.6% for depth classification with a 68.0% explainability score, while width and length measurement obtained a 93.0% accuracy score with 76.6% explainability. The confidence score for wound segmentation was 83.9%, while explainability was 72.1%. Using explainable AI models, we have developed an algorithm and application for the analysis of vascular wound images from an Asian population with accuracy and explainability. With further development, it can be utilized as a clinical decision support system and integrated into existing healthcare electronic systems.
Collapse
Affiliation(s)
- Zhiwen Joseph Lo
- Department of Surgery, Woodlands Health, Singapore, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
| | | | | | - Yam Meng Chan
- Department of General Surgery, Tan Tock Seng Hospital, Singapore, Singapore
| | - Cheng Cheng Goh
- Wound and Stoma Care, Nursing Speciality, Tan Tock Seng Hospital, Singapore, Singapore
| | - Tina Lai
- Wound and Stoma Care, Nursing Speciality, Tan Tock Seng Hospital, Singapore, Singapore
| | - Audrey Tan
- Wound and Stoma Care, Nursing Speciality, Tan Tock Seng Hospital, Singapore, Singapore
| | - Patrick Thng
- AITIS ‐ Advanced Intelligence and Technology Innovations, London, United Kingdom
| | - Jorge Rodriguez
- AITIS ‐ Advanced Intelligence and Technology Innovations, London, United Kingdom
| | - Tillman Weyde
- AITIS ‐ Advanced Intelligence and Technology Innovations, London, United Kingdom
| | - Sylvia Smit
- AITIS ‐ Advanced Intelligence and Technology Innovations, London, United Kingdom
| |
Collapse
|
73
|
Liao J, Misaki K, Uno T, Futami K, Nakada M, Sakamoto J. Determination of Significant Three-Dimensional Hemodynamic Features for Postembolization Recanalization in Cerebral Aneurysms Through Explainable Artificial Intelligence. World Neurosurg 2024; 184:e166-e177. [PMID: 38246531 DOI: 10.1016/j.wneu.2024.01.076] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2023] [Revised: 01/12/2024] [Accepted: 01/13/2024] [Indexed: 01/23/2024]
Abstract
BACKGROUND Recanalization poses challenges after coil embolization in cerebral aneurysms. Establishing predictive models for postembolization recanalization is important for clinical decision making. However, conventional statistical and machine learning (ML) models may overlook critical parameters during the initial selection process. METHODS In this study, we automated the identification of significant hemodynamic parameters using a PointNet-based deep neural network (DNN), leveraging their three-dimensional spatial features. Further feature analysis was conducted using saliency mapping, an explainable artificial intelligence (XAI) technique. The study encompassed the analysis of velocity, pressure, and wall shear stress in both precoiling and postcoiling models derived from computational fluid dynamics simulations for 58 aneurysms. RESULTS Velocity was identified as the most pivotal parameter, supported by the lowest P value from statistical analysis and the highest area under the receiver operating characteristic curve/precision-recall curve values from the DNN model. Moreover, visual XAI analysis showed that robust injection flow zones with notable impingement points in precoiling models, as well as pronounced interplay between flow dynamics and the coiling plane, were important three-dimensional features for identifying recanalized aneurysms. CONCLUSIONS The combination of DNN and XAI was found to be an accurate and explainable approach, not only for predicting postembolization recanalization but also for discovering previously unknown features.
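Saliency mapping of the kind applied here scores each input by how sensitive the model output is to it. A toy finite-difference version over three scalar features, standing in for the velocity, pressure, and wall shear stress fields (illustrative only; the study uses gradient-based saliency over a PointNet on 3D point clouds):

```python
def saliency(score_fn, x, eps=1e-6):
    """Finite-difference saliency: |d score / d x_i| per input feature,
    a numerical stand-in for backprop-based saliency maps."""
    sal = []
    for i in range(len(x)):
        hi, lo = list(x), list(x)
        hi[i] += eps
        lo[i] -= eps
        sal.append(abs(score_fn(hi) - score_fn(lo)) / (2 * eps))
    return sal

# hypothetical recanalization score in which velocity dominates
score = lambda v: 3.0 * v[0] + 0.5 * v[1] + 0.2 * v[2]
print(saliency(score, [1.0, 1.0, 1.0]))  # approximately [3.0, 0.5, 0.2]
```

Ranking the resulting magnitudes is what singles out velocity as the "most pivotal" feature in this toy setting.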
Collapse
Affiliation(s)
- Jing Liao
- Division of Transdisciplinary Sciences, Graduate School of Frontier Science Initiative, Kanazawa University, Kanazawa, Ishikawa, Japan
| | - Kouichi Misaki
- Department of Neurosurgery, Kanazawa University, Kanazawa, Ishikawa, Japan.
- Takehiro Uno
- Department of Neurosurgery, Kanazawa University, Kanazawa, Ishikawa, Japan
| | - Kazuya Futami
- Department of Neurosurgery, Hokuriku Central Hospital, Oyabe, Toyama, Japan
| | - Mitsutoshi Nakada
- Department of Neurosurgery, Kanazawa University, Kanazawa, Ishikawa, Japan
| | - Jiro Sakamoto
- Division of Mechanical Science and Engineering, Graduate School of Natural Science and Technology, Kanazawa University, Kanazawa, Ishikawa, Japan
| |
Collapse
|
74
|
Yurkovich JT, Evans SJ, Rappaport N, Boore JL, Lovejoy JC, Price ND, Hood LE. The transition from genomics to phenomics in personalized population health. Nat Rev Genet 2024; 25:286-302. [PMID: 38093095 DOI: 10.1038/s41576-023-00674-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/03/2023] [Indexed: 03/21/2024]
Abstract
Modern health care faces several serious challenges, including an ageing population and its inherent burden of chronic diseases, rising costs and marginal quality metrics. By assessing and optimizing the health trajectory of each individual using a data-driven personalized approach that reflects their genetics, behaviour and environment, we can start to address these challenges. This assessment includes longitudinal phenome measures, such as the blood proteome and metabolome, gut microbiome composition and function, and lifestyle and behaviour through wearables and questionnaires. Here, we review ongoing large-scale genomics and longitudinal phenomics efforts and the powerful insights they provide into wellness. We describe our vision for the transformation of the current health care from disease-oriented to data-driven, wellness-oriented and personalized population health.
Collapse
Affiliation(s)
- James T Yurkovich
- Phenome Health, Seattle, WA, USA
- Center for Phenomic Health, The Buck Institute for Research on Aging, Novato, CA, USA
- Department of Bioengineering, University of Texas at Dallas, Richardson, TX, USA
| | - Simon J Evans
- Phenome Health, Seattle, WA, USA
- Center for Phenomic Health, The Buck Institute for Research on Aging, Novato, CA, USA
| | - Noa Rappaport
- Center for Phenomic Health, The Buck Institute for Research on Aging, Novato, CA, USA
- Institute for Systems Biology, Seattle, WA, USA
| | - Jeffrey L Boore
- Phenome Health, Seattle, WA, USA
- Center for Phenomic Health, The Buck Institute for Research on Aging, Novato, CA, USA
| | - Jennifer C Lovejoy
- Phenome Health, Seattle, WA, USA
- Center for Phenomic Health, The Buck Institute for Research on Aging, Novato, CA, USA
- Institute for Systems Biology, Seattle, WA, USA
| | - Nathan D Price
- Institute for Systems Biology, Seattle, WA, USA
- Thorne HealthTech, New York, NY, USA
- Department of Bioengineering, University of Washington, Seattle, WA, USA
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA
| | - Leroy E Hood
- Phenome Health, Seattle, WA, USA.
- Center for Phenomic Health, The Buck Institute for Research on Aging, Novato, CA, USA.
- Institute for Systems Biology, Seattle, WA, USA.
- Department of Bioengineering, University of Washington, Seattle, WA, USA.
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA.
- Department of Immunology, University of Washington, Seattle, WA, USA.
| |
Collapse
|
75
|
Dolezal JM, Kochanny S, Dyer E, Ramesh S, Srisuwananukorn A, Sacco M, Howard FM, Li A, Mohan P, Pearson AT. Slideflow: deep learning for digital histopathology with real-time whole-slide visualization. BMC Bioinformatics 2024; 25:134. [PMID: 38539070 PMCID: PMC10967068 DOI: 10.1186/s12859-024-05758-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2023] [Accepted: 03/20/2024] [Indexed: 05/04/2024] Open
Abstract
Deep learning methods have emerged as powerful tools for analyzing histopathological images, but current methods are often specialized for specific domains and software environments, and few open-source options exist for deploying models in an interactive interface. Experimenting with different deep learning approaches typically requires switching software libraries and reprocessing data, reducing the feasibility and practicality of experimenting with new architectures. We developed a flexible deep learning library for histopathology called Slideflow, a package which supports a broad array of deep learning methods for digital pathology and includes a fast whole-slide interface for deploying trained models. Slideflow includes unique tools for whole-slide image data processing, efficient stain normalization and augmentation, weakly-supervised whole-slide classification, uncertainty quantification, feature generation, feature space analysis, and explainability. Whole-slide image processing is highly optimized, enabling whole-slide tile extraction at 40x magnification in 2.5 s per slide. The framework-agnostic data processing pipeline enables rapid experimentation with new methods built with either Tensorflow or PyTorch, and the graphical user interface supports real-time visualization of slides, predictions, heatmaps, and feature space characteristics on a variety of hardware devices, including ARM-based devices such as the Raspberry Pi.
Collapse
Affiliation(s)
- James M Dolezal
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA.
| | - Sara Kochanny
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
| | - Emma Dyer
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
| | - Siddhi Ramesh
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
| | - Andrew Srisuwananukorn
- Division of Hematology, Department of Internal Medicine, The Ohio State University Comprehensive Cancer Center, Columbus, OH, USA
| | - Matteo Sacco
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
| | - Frederick M Howard
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
| | - Anran Li
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
| | - Prajval Mohan
- Department of Computer Science, University of Chicago, Chicago, IL, USA
| | - Alexander T Pearson
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA.
| |
Collapse
|
76
|
McNamara SL, Yi PH, Lotter W. The clinician-AI interface: intended use and explainability in FDA-cleared AI devices for medical image interpretation. NPJ Digit Med 2024; 7:80. [PMID: 38531952 DOI: 10.1038/s41746-024-01080-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2023] [Accepted: 03/14/2024] [Indexed: 03/28/2024] Open
Abstract
As applications of AI in medicine continue to expand, there is an increasing focus on integration into clinical practice. An underappreciated aspect of this clinical translation is where the AI fits into the clinical workflow, and in turn, the outputs generated by the AI to facilitate clinician interaction in this workflow. For instance, in the canonical use case of AI for medical image interpretation, the AI could prioritize cases before clinician review or even autonomously interpret the images without clinician review. A related aspect is explainability: does the AI generate outputs that help explain its predictions to clinicians? While many clinical AI workflows and explainability techniques have been proposed, a summative assessment of the current scope in clinical practice is lacking. Here, we evaluate the current state of FDA-cleared AI devices for medical image interpretation assistance in terms of intended clinical use, outputs generated, and types of explainability offered. We create a curated database focused on these aspects of the clinician-AI interface, where we find a high frequency of "triage" devices, notable variability in output characteristics across products, and often limited explainability of AI predictions. Altogether, we aim to increase transparency of the current landscape of the clinician-AI interface and highlight the need to rigorously assess which strategies ultimately lead to the best clinical outcomes.
Collapse
Affiliation(s)
| | - Paul H Yi
- University of Maryland Medical Intelligent Imaging (UM2ii) Center, Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
| | - William Lotter
- Harvard Medical School, Boston, MA, USA.
- Department of Data Science, Dana-Farber Cancer Institute, Boston, MA, USA.
- Department of Pathology, Brigham & Women's Hospital, Boston, MA, USA.
| |
Collapse
|
77
|
Ounissi M, Latouche M, Racoceanu D. PhagoStat a scalable and interpretable end to end framework for efficient quantification of cell phagocytosis in neurodegenerative disease studies. Sci Rep 2024; 14:6482. [PMID: 38499658 PMCID: PMC10948879 DOI: 10.1038/s41598-024-56081-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2023] [Accepted: 03/01/2024] [Indexed: 03/20/2024] Open
Abstract
Quantifying the phagocytosis of dynamic, unstained cells is essential for evaluating neurodegenerative diseases. However, measuring rapid cell interactions and distinguishing cells from background make this task very challenging when processing time-lapse phase-contrast video microscopy. In this study, we introduce an end-to-end, scalable, and versatile real-time framework for quantifying and analyzing phagocytic activity. Our proposed pipeline is able to process large datasets and includes a data quality verification module to counteract potential perturbations such as microscope movements and frame blurring. We also propose an explainable cell segmentation module to improve the interpretability of deep learning methods compared to black-box algorithms. This includes two interpretable deep learning capabilities: visual explanation and model simplification. We demonstrate that interpretability in deep learning is not the opposite of high performance, by additionally providing essential deep learning algorithm optimization insights and solutions. Moreover, incorporating interpretable modules results in an efficient architecture design and optimized execution time. We apply this pipeline to quantify and analyze microglial cell phagocytosis in frontotemporal dementia (FTD) and obtain statistically reliable results showing that FTD mutant cells are larger and more aggressive than control cells. The method has been tested and validated on several public benchmarks, achieving state-of-the-art performance. To stimulate translational approaches and future studies, we release an open-source end-to-end pipeline and a unique microglial cell phagocytosis dataset for immune system characterization in neurodegenerative diseases research. This pipeline and the associated dataset are intended to support future advances in this field, promoting the development of efficient and effective interpretable algorithms dedicated to the critical domain of neurodegenerative disease characterization. https://github.com/ounissimehdi/PhagoStat .
Collapse
Affiliation(s)
- Mehdi Ounissi
- CNRS, Inserm, AP-HP, Inria, Paris Brain Institute-ICM, Sorbonne University, 75013, Paris, France
| | - Morwena Latouche
- Inserm, CNRS, AP-HP, Institut du Cerveau, ICM, Sorbonne Université, 75013, Paris, France
- PSL Research university, EPHE, Paris, France
| | - Daniel Racoceanu
- CNRS, Inserm, AP-HP, Inria, Paris Brain Institute-ICM, Sorbonne University, 75013, Paris, France.
| |
Collapse
|
78
|
Fraioli F, Albert N, Boellaard R, Galazzo IB, Brendel M, Buvat I, Castellaro M, Cecchin D, Fernandez PA, Guedj E, Hammers A, Kaplar Z, Morbelli S, Papp L, Shi K, Tolboom N, Traub-Weidinger T, Verger A, Van Weehaeghe D, Yakushev I, Barthel H. Perspectives of the European Association of Nuclear Medicine on the role of artificial intelligence (AI) in molecular brain imaging. Eur J Nucl Med Mol Imaging 2024; 51:1007-1011. [PMID: 38097746 DOI: 10.1007/s00259-023-06553-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/22/2024]
Affiliation(s)
- Francesco Fraioli
- Institute of Nuclear Medicine, University College London Hospitals, 5Th Floor UCH, 235 Euston Rd, London, NW1 2BU, UK.
| | - Nathalie Albert
- Department of Nuclear Medicine, Ludwig-Maximilians-University of Munich, Munich, Germany
| | - Ronald Boellaard
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location VUmc, Amsterdam, The Netherlands
| | | | - Matthias Brendel
- Department of Nuclear Medicine, Ludwig-Maximilians-University of Munich, Munich, Germany
| | - Irene Buvat
- Institut Curie - Inserm Laboratory of Translational Imaging in Oncology, Paris, France
| | - Marco Castellaro
- Department of Information Engineering, University-Hospital of Padova, Padua, Italy
| | - Diego Cecchin
- Nuclear Medicine Unit, Department of Medicine - DIMED, University-Hospital of Padova, Padua, Italy
| | - Pablo Aguiar Fernandez
- CIMUS, Universidade Santiago de Compostela & Nuclear Medicine Dept, Univ. Hospital IDIS, Santiago de Compostela, Spain
| | - Eric Guedj
- Département de Médecine Nucléaire, Aix Marseille Univ, APHM, CNRS, Centrale Marseille, Institut Fresnel, Hôpital de La Timone, CERIMED, Marseille, France
| | - Alexander Hammers
- School of Biomedical Engineering and Imaging Sciences, King's College London St Thomas' Hospital, London, SE1 7EH, UK
| | - Zoltan Kaplar
- Institute of Nuclear Medicine, University College London Hospitals, 5Th Floor UCH, 235 Euston Rd, London, NW1 2BU, UK
| | - Silvia Morbelli
- Nuclear Medicine Unit, AOU Città Della Salute E Della Scienza Di Torino, University of Turin, Turin, Italy
| | - Laszlo Papp
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
| | - Kuangyu Shi
- Lab for Artificial Intelligence and Translational Theranostic, Dept. of Nuclear Medicine, University of Bern, Bern, Switzerland
| | - Nelleke Tolboom
- Department of Radiology and Nuclear Medicine, Utrecht University Medical Center, Utrecht, The Netherlands
| | - Tatjana Traub-Weidinger
- Division of Nuclear Medicine, Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria
| | - Antoine Verger
- Department of Nuclear Medicine and Nancyclotep Imaging Platform, CHRU Nancy, Université de Lorraine, IADI, INSERM U1254, Nancy, France
| | - Donatienne Van Weehaeghe
- Department of Radiology and Nuclear Medicine, Ghent University Hospital, C. Heymanslaan 10, 9000, Ghent, Belgium
| | - Igor Yakushev
- Department of Nuclear Medicine, School of Medicine, Technical University of Munich, Munich, Germany
| | - Henryk Barthel
- Department of Nuclear Medicine, Leipzig University Medical Centre, Leipzig, Germany
| |
Collapse
|
79
|
Zeineldin RA, Karar ME, Burgert O, Mathis-Ullrich F. NeuroIGN: Explainable Multimodal Image-Guided System for Precise Brain Tumor Surgery. J Med Syst 2024; 48:25. [PMID: 38393660 DOI: 10.1007/s10916-024-02037-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2023] [Accepted: 02/03/2024] [Indexed: 02/25/2024]
Abstract
Precise neurosurgical guidance is critical for successful brain surgeries and plays a vital role in all phases of image-guided neurosurgery (IGN). Neuronavigation software enables real-time tracking of surgical tools, ensuring their presentation with high precision in relation to a virtual patient model. Therefore, this work focuses on the development of a novel multimodal IGN system, leveraging deep learning and explainable AI to enhance brain tumor surgery outcomes. The study establishes the clinical and technical requirements of the system for brain tumor surgeries. NeuroIGN adopts a modular architecture, including brain tumor segmentation, patient registration, and explainable output prediction, and integrates open-source packages into an interactive neuronavigational display. The NeuroIGN system components underwent validation and evaluation in both laboratory and simulated operating room (OR) settings. Experimental results demonstrated its accuracy in tumor segmentation and the success of ExplainAI in increasing the trust of medical professionals in deep learning. The proposed system was successfully assembled and set up within 11 min in a pre-clinical OR setting with a tracking accuracy of 0.5 (± 0.1) mm. NeuroIGN was also evaluated as highly useful, with a high frame rate (19 FPS) and real-time ultrasound imaging capabilities. In conclusion, this paper describes not only the development of an open-source multimodal IGN system but also demonstrates the innovative application of deep learning and explainable AI algorithms in enhancing neuronavigation for brain tumor surgeries. By seamlessly integrating pre- and intra-operative patient image data with cutting-edge interventional devices, our experiments underscore the potential for deep learning models to improve the surgical treatment of brain tumors and long-term post-operative outcomes.
Affiliation(s)
- Ramy A Zeineldin
- Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander University Erlangen-Nürnberg, 91052, Erlangen, Germany.
- Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany.
- Faculty of Electronic Engineering (FEE), Menoufia University, Minuf, 32952, Egypt.
- Mohamed E Karar
- Faculty of Electronic Engineering (FEE), Menoufia University, Minuf, 32952, Egypt
- Oliver Burgert
- Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany
- Franziska Mathis-Ullrich
- Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander University Erlangen-Nürnberg, 91052, Erlangen, Germany
|
80
|
Maheswari BU, Sam D, Mittal N, Sharma A, Kaur S, Askar SS, Abouhawwash M. Explainable deep-neural-network supported scheme for tuberculosis detection from chest radiographs. BMC Med Imaging 2024; 24:32. [PMID: 38317098 PMCID: PMC10840197 DOI: 10.1186/s12880-024-01202-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2023] [Accepted: 01/15/2024] [Indexed: 02/07/2024] Open
Abstract
Chest radiographs are examined in typical clinical settings by competent physicians for tuberculosis diagnosis. However, this procedure is time consuming and subjective. Due to the growing usage of machine learning techniques in applied sciences, researchers have begun applying comparable concepts to medical diagnostics, such as tuberculosis screening. In an era of extremely deep neural networks comprising hundreds of convolution layers for feature extraction, we create a shallow CNN for screening of tuberculosis from chest X-rays so that the model can offer appropriate interpretation to support correct diagnosis. The suggested model consists of four convolution-maxpooling layers whose hyperparameters were optimized for peak performance using a Bayesian optimization technique. The model achieved a peak classification accuracy, F1-score, sensitivity, and specificity of 0.95. In addition, the receiver operating characteristic (ROC) curve for the proposed shallow CNN showed a peak area under the curve of 0.976. Moreover, we employed class activation maps (CAM) and Local Interpretable Model-agnostic Explanations (LIME) as explainer systems to assess the transparency and explainability of the model in comparison to a state-of-the-art pre-trained neural network such as DenseNet.
Affiliation(s)
- B Uma Maheswari
- Department of Computer Science and Engineering, St. Joseph's College of Engineering, OMR, Chennai, Tamilnadu, 600119, India
- Dahlia Sam
- Department of Information Technology, Dr. M.G.R Educational and Research Institute, Periyar E.V.R High Road, Vishwas Nagar, Maduravoyal, Chennai, Tamilnadu, 600095, India
- Nitin Mittal
- University Centre for Research and Development, Chandigarh University, Mohali, Punjab, 140413, India
- Abhishek Sharma
- Department of Computer Engineering and Applications, GLA University, Mathura, Uttar Pradesh, 281406, India
- Sandeep Kaur
- Department of Computer Engineering & Technology, Guru Nanak Dev University, Amritsar, Punjab, 143005, India
- S S Askar
- Department of Statistics and Operations Research, College of Science, King Saud University, P.O. Box 2455, Riyadh, 11451, Saudi Arabia
- Mohamed Abouhawwash
- Department of Computational Mathematics, Science, and Engineering (CMSE), College of Engineering, Michigan State University, East Lansing, MI, 48824, USA
- Department of Mathematics, Faculty of Science, Mansoura University, Mansoura, 35516, Egypt
|
81
|
Luu VP, Fiorini M, Combes S, Quemeneur E, Bonneville M, Bousquet PJ. Challenges of artificial intelligence in precision oncology: public-private partnerships including national health agencies as an asset to make it happen. Ann Oncol 2024; 35:154-158. [PMID: 37769849 DOI: 10.1016/j.annonc.2023.09.3106] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2023] [Revised: 07/13/2023] [Accepted: 09/17/2023] [Indexed: 10/03/2023] Open
Affiliation(s)
- V P Luu
- Epidemiology and Innovation Unit, Artificial Intelligence and Cancers Association, Paris, France
- M Fiorini
- Artificial Intelligence and Cancers Association, Paris, France
- S Combes
- E Quemeneur
- France Biotech, Paris, France; Transgene S.A., Illkirch-Graffenstaden, France
- M Bonneville
- Alliance pour la Recherche et l'Innovation des Industries de Santé, Paris, France; Institut Mérieux, Lyon, France
- P J Bousquet
- Health Survey, Data-Science, Assessment Division, Institut National du Cancer, Boulogne Billancourt, France; Aix Marseille University, INSERM, IRD, Economics and Social Sciences Applied to Health & Analysis of Medical Information (SESSTIM), Marseille, France
|
82
|
Gil-Rios MA, Cruz-Aceves I, Hernandez-Aguirre A, Moya-Albor E, Brieva J, Hernandez-Gonzalez MA, Solorio-Meza SE. High-Dimensional Feature Selection for Automatic Classification of Coronary Stenosis Using an Evolutionary Algorithm. Diagnostics (Basel) 2024; 14:268. [PMID: 38337787 PMCID: PMC10855604 DOI: 10.3390/diagnostics14030268] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2023] [Revised: 01/11/2024] [Accepted: 01/23/2024] [Indexed: 02/12/2024] Open
Abstract
In this paper, a novel strategy to perform high-dimensional feature selection using an evolutionary algorithm for the automatic classification of coronary stenosis is introduced. The method involves a feature extraction stage to form a bank of 473 features covering different types such as intensity, texture, and shape. The feature selection task is carried out over this high-dimensional feature bank, whose search space is of size O(2^n) with n = 473. The proposed evolutionary search strategy was compared with different state-of-the-art methods in terms of the Jaccard coefficient and classification accuracy. The highest feature selection rate, along with the best classification performance, was obtained with a subset of four features, representing a 99% discrimination rate. In the last stage, the feature subset was used as input to train a support vector machine, evaluated on an independent testing set. The classification of coronary stenosis is a binary task with positive and negative classes. The highest classification performance was obtained with the four-feature subset in terms of accuracy (0.86) and Jaccard coefficient (0.75). In addition, a second dataset containing 2788 instances was formed from a public image database, yielding an accuracy of 0.89 and a Jaccard coefficient of 0.80. Finally, based on the performance achieved, the four-feature subset can be suitable for use in a clinical decision support system.
Affiliation(s)
- Miguel-Angel Gil-Rios
- Tecnologías de Información, Universidad Tecnológica de León, Blvd. Universidad Tecnológica 225, Col. San Carlos, León 37670, Mexico
- Ivan Cruz-Aceves
- CONACYT, Centro de Investigación en Matemáticas (CIMAT), A.C., Jalisco S/N, Col. Valenciana, Guanajuato 36000, Mexico
- Arturo Hernandez-Aguirre
- Departamento de Computación, Centro de Investigación en Matemáticas (CIMAT), A.C., Jalisco S/N, Col. Valenciana, Guanajuato 36000, Mexico
- Ernesto Moya-Albor
- Facultad de Ingeniería, Universidad Panamericana, Augusto Rodin 498, Ciudad de México 03920, Mexico
- Jorge Brieva
- Facultad de Ingeniería, Universidad Panamericana, Augusto Rodin 498, Ciudad de México 03920, Mexico
- Martha-Alicia Hernandez-Gonzalez
- Unidad Médica de Alta Especialidad (UMAE), Hospital de Especialidades No. 1. Centro Médico Nacional del Bajio, IMSS, Blvd. Adolfo López Mateos esquina Paseo de los Insurgentes S/N, Col. Los Paraisos, León 37320, Mexico
- Sergio-Eduardo Solorio-Meza
- División Ciencias de la Salud, Universidad Tecnológica de México, Campus León, Blvd. Juan Alonso de Torres 1041, Col. San José del Consuelo, León 37200, Mexico
|
83
|
Wang J, Xue L, Jiang J, Liu F, Wu P, Lu J, Zhang H, Bao W, Xu Q, Ju Z, Chen L, Jiao F, Lin H, Ge J, Zuo C, Tian M. Diagnostic performance of artificial intelligence-assisted PET imaging for Parkinson's disease: a systematic review and meta-analysis. NPJ Digit Med 2024; 7:17. [PMID: 38253738 PMCID: PMC10803804 DOI: 10.1038/s41746-024-01012-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2023] [Accepted: 01/10/2024] [Indexed: 01/24/2024] Open
Abstract
Artificial intelligence (AI)-assisted PET imaging is emerging as a promising tool for the diagnosis of Parkinson's disease (PD). We aim to systematically review the diagnostic accuracy of AI-assisted PET in detecting PD. The Ovid MEDLINE, Ovid Embase, Web of Science, and IEEE Xplore databases were systematically searched for related studies that developed an AI algorithm for the diagnosis of PD from PET imaging and were published by August 17, 2023. Binary diagnostic accuracy data were extracted for meta-analysis to derive the outcome of interest: area under the curve (AUC). Twenty-three eligible studies provided sufficient data to construct contingency tables that allowed the calculation of diagnostic accuracy. Specifically, 11 studies distinguished PD from normal controls, with a pooled AUC of 0.96 (95% CI: 0.94-0.97) for presynaptic dopamine (DA) imaging and 0.90 (95% CI: 0.87-0.93) for glucose metabolism (18F-FDG). Thirteen studies distinguished PD from atypical parkinsonism (AP), with a pooled AUC of 0.93 (95% CI: 0.91-0.95) for presynaptic DA, 0.79 (95% CI: 0.75-0.82) for postsynaptic DA, and 0.97 (95% CI: 0.96-0.99) for 18F-FDG. Acceptable diagnostic performance for PD with AI-assisted PET imaging was highlighted across the subgroups. More rigorous reporting standards that take into account the unique challenges of AI research could improve future studies.
Affiliation(s)
- Jing Wang
- Huashan Hospital & Human Phenome Institute, Fudan University, Shanghai, China
- Department of Nuclear Medicine/PET Center, Huashan Hospital, Fudan University, Shanghai, China
- Le Xue
- Department of Nuclear Medicine, the Second Hospital of Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Jiehui Jiang
- Institute of Biomedical Engineering, School of Life Science, Shanghai University, Shanghai, China
- Fengtao Liu
- Department of Neurology, Huashan Hospital, Fudan University, Shanghai, China
- National Clinical Research Center for Aging and Medicine, & National Center for Neurological Disorders, Huashan Hospital, Fudan University, Shanghai, China
- Ping Wu
- Department of Nuclear Medicine/PET Center, Huashan Hospital, Fudan University, Shanghai, China
- Jiaying Lu
- Department of Nuclear Medicine/PET Center, Huashan Hospital, Fudan University, Shanghai, China
- Huiwei Zhang
- Department of Nuclear Medicine/PET Center, Huashan Hospital, Fudan University, Shanghai, China
- Weiqi Bao
- Department of Nuclear Medicine/PET Center, Huashan Hospital, Fudan University, Shanghai, China
- Qian Xu
- Department of Nuclear Medicine/PET Center, Huashan Hospital, Fudan University, Shanghai, China
- Zizhao Ju
- Department of Nuclear Medicine/PET Center, Huashan Hospital, Fudan University, Shanghai, China
- Li Chen
- Department of Ultrasound Medicine, Huashan Hospital, Fudan University, Shanghai, China
- Fangyang Jiao
- Department of Nuclear Medicine/PET Center, Huashan Hospital, Fudan University, Shanghai, China
- Huamei Lin
- Department of Nuclear Medicine/PET Center, Huashan Hospital, Fudan University, Shanghai, China
- Jingjie Ge
- Department of Nuclear Medicine/PET Center, Huashan Hospital, Fudan University, Shanghai, China
- Chuantao Zuo
- Huashan Hospital & Human Phenome Institute, Fudan University, Shanghai, China
- Department of Nuclear Medicine/PET Center, Huashan Hospital, Fudan University, Shanghai, China
- National Clinical Research Center for Aging and Medicine, & National Center for Neurological Disorders, Huashan Hospital, Fudan University, Shanghai, China
- Mei Tian
- Huashan Hospital & Human Phenome Institute, Fudan University, Shanghai, China
- Department of Nuclear Medicine/PET Center, Huashan Hospital, Fudan University, Shanghai, China
|
84
|
Khosravi P, Mohammadi S, Zahiri F, Khodarahmi M, Zahiri J. AI-Enhanced Detection of Clinically Relevant Structural and Functional Anomalies in MRI: Traversing the Landscape of Conventional to Explainable Approaches. J Magn Reson Imaging 2024. [PMID: 38243677 DOI: 10.1002/jmri.29247] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2023] [Revised: 01/05/2024] [Accepted: 01/08/2024] [Indexed: 01/21/2024] Open
Abstract
Anomaly detection in medical imaging, particularly within the realm of magnetic resonance imaging (MRI), stands as a vital area of research with far-reaching implications across various medical fields. This review meticulously examines the integration of artificial intelligence (AI) in anomaly detection for MR images, spotlighting its transformative impact on medical diagnostics. We delve into the forefront of AI applications in MRI, exploring advanced machine learning (ML) and deep learning (DL) methodologies that are pivotal in enhancing the precision of diagnostic processes. The review provides a detailed analysis of preprocessing, feature extraction, classification, and segmentation techniques, alongside a comprehensive evaluation of commonly used metrics. Further, this paper explores the latest developments in ensemble methods and explainable AI, offering insights into future directions and potential breakthroughs. This review synthesizes current insights, offering a valuable guide for researchers, clinicians, and medical imaging experts. It highlights AI's crucial role in improving the precision and speed of detecting key structural and functional irregularities in MRI. Our exploration of innovative techniques and trends furthers MRI technology development, aiming to refine diagnostics, tailor treatments, and elevate patient care outcomes. LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY: Stage 1.
Affiliation(s)
- Pegah Khosravi
- Department of Biological Sciences, New York City College of Technology, CUNY, New York City, New York, USA
- The CUNY Graduate Center, City University of New York, New York City, New York, USA
- Saber Mohammadi
- Department of Biological Sciences, New York City College of Technology, CUNY, New York City, New York, USA
- Department of Biophysics, Tarbiat Modares University, Tehran, Iran
- Fatemeh Zahiri
- Department of Cell and Molecular Sciences, Kharazmi University, Tehran, Iran
- M Khodarahmi
- Javad Zahiri
- Department of Neuroscience, University of California San Diego, San Diego, California, USA
|
85
|
Ciobanu-Caraus O, Aicher A, Kernbach JM, Regli L, Serra C, Staartjes VE. A critical moment in machine learning in medicine: on reproducible and interpretable learning. Acta Neurochir (Wien) 2024; 166:14. [PMID: 38227273 DOI: 10.1007/s00701-024-05892-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2023] [Accepted: 12/14/2023] [Indexed: 01/17/2024]
Abstract
Over the past two decades, advances in computational power and data availability combined with increased accessibility to pre-trained models have led to an exponential rise in machine learning (ML) publications. While ML may have the potential to transform healthcare, this sharp increase in ML research output without focus on methodological rigor and standard reporting guidelines has fueled a reproducibility crisis. In addition, the rapidly growing complexity of these models compromises their interpretability, which currently impedes their successful and widespread clinical adoption. In medicine, where failure of such models may have severe implications for patients' health, the high requirements for accuracy, robustness, and interpretability confront ML researchers with a unique set of challenges. In this review, we discuss the semantics of reproducibility and interpretability, as well as related issues and challenges, and outline possible solutions to counteracting the "black box". To foster reproducibility, standard reporting guidelines need to be further developed and data or code sharing encouraged. Editors and reviewers may equally play a critical role by establishing high methodological standards and thus preventing the dissemination of low-quality ML publications. To foster interpretable learning, the use of simpler models more suitable for medical data can inform the clinician how results are generated based on input data. Model-agnostic explanation tools, sensitivity analysis, and hidden layer representations constitute further promising approaches to increase interpretability. Balancing model performance and interpretability are important to ensure clinical applicability. We have now reached a critical moment for ML in medicine, where addressing these issues and implementing appropriate solutions will be vital for the future evolution of the field.
Affiliation(s)
- Olga Ciobanu-Caraus
- Machine Intelligence in Clinical Neuroscience & Microsurgical Neuroanatomy (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Anatol Aicher
- Machine Intelligence in Clinical Neuroscience & Microsurgical Neuroanatomy (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Julius M Kernbach
- Department of Neuroradiology, University Hospital Heidelberg, Heidelberg, Germany
- Luca Regli
- Machine Intelligence in Clinical Neuroscience & Microsurgical Neuroanatomy (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Carlo Serra
- Machine Intelligence in Clinical Neuroscience & Microsurgical Neuroanatomy (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Victor E Staartjes
- Machine Intelligence in Clinical Neuroscience & Microsurgical Neuroanatomy (MICN) Laboratory, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
|
86
|
Zhao S, Dai G, Li J, Zhu X, Huang X, Li Y, Tan M, Wang L, Fang P, Chen X, Yan N, Liu H. An interpretable model based on graph learning for diagnosis of Parkinson's disease with voice-related EEG. NPJ Digit Med 2024; 7:3. [PMID: 38182737 PMCID: PMC10770376 DOI: 10.1038/s41746-023-00983-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2023] [Accepted: 11/29/2023] [Indexed: 01/07/2024] Open
Abstract
Parkinson's disease (PD) exhibits significant clinical heterogeneity, presenting challenges in the identification of reliable electroencephalogram (EEG) biomarkers. Machine learning techniques have been integrated with resting-state EEG for PD diagnosis, but their practicality is constrained by limited feature interpretability and the stochastic nature of resting-state EEG. The present study proposes a novel and interpretable deep learning model, graph signal processing-graph convolutional networks (GSP-GCNs), using event-related EEG data obtained from a specific task involving vocal pitch regulation for PD diagnosis. By incorporating both local and global information from single-hop and multi-hop networks, our proposed GSP-GCN models achieved an average classification accuracy of 90.2%, a significant improvement of 9.5% over other deep learning models. Moreover, the interpretability analysis revealed discriminative distributions of large-scale EEG networks and the topographic map of microstate MS5 learned by our models, primarily located in the left ventral premotor cortex, superior temporal gyrus, and Broca's area, regions implicated in PD-related speech disorders, reflecting our GSP-GCN models' ability to provide interpretable insights and identify distinctive EEG biomarkers from large-scale networks. These findings demonstrate the potential of interpretable deep learning models coupled with voice-related EEG signals to distinguish PD patients from healthy controls accurately and to elucidate the underlying neurobiological mechanisms.
Affiliation(s)
- Shuzhi Zhao
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Guangyan Dai
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Jingting Li
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Xiaoxia Zhu
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Xiyan Huang
- Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Yongxue Li
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Mingdan Tan
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Lan Wang
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Peng Fang
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xi Chen
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Nan Yan
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Hanjun Liu
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Brain Function and Disease, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
|
87
|
Rahman A, Debnath T, Kundu D, Khan MSI, Aishi AA, Sazzad S, Sayduzzaman M, Band SS. Machine learning and deep learning-based approach in smart healthcare: Recent advances, applications, challenges and opportunities. AIMS Public Health 2024; 11:58-109. [PMID: 38617415 PMCID: PMC11007421 DOI: 10.3934/publichealth.2024004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2023] [Accepted: 12/18/2023] [Indexed: 04/16/2024] Open
Abstract
In recent years, machine learning (ML) and deep learning (DL) have been the leading approaches to solving various challenges, such as disease prediction, drug discovery, and medical image analysis, in intelligent healthcare applications. Given the current progress in ML and DL, both hold promising potential to support healthcare. This study offers an exhaustive survey of ML and DL for the healthcare system, concentrating on vital state-of-the-art features, integration benefits, applications, prospects, and future guidelines. To conduct the research, we searched the most prominent journal and conference databases using distinct keywords to identify relevant scholarly work. First, we summarize the most current, cutting-edge progress in ML-DL-based analysis for smart healthcare. Next, we cover the advancement of various services combining ML and DL, including ML-healthcare, DL-healthcare, and ML-DL-healthcare. We then present ML- and DL-based applications in the healthcare industry. Finally, we highlight open research challenges and recommendations for further study based on our observations.
Affiliation(s)
- Anichur Rahman
- Department of CSE, National Institute of Textile Engineering and Research (NITER), Constituent Institute of the University of Dhaka, Savar, Dhaka-1350
- Department of CSE, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
- Tanoy Debnath
- Department of CSE, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
- Department of CSE, Green University of Bangladesh, 220/D, Begum Rokeya Sarani, Dhaka-1207, Bangladesh
- Dipanjali Kundu
- Department of CSE, National Institute of Textile Engineering and Research (NITER), Constituent Institute of the University of Dhaka, Savar, Dhaka-1350
- Md. Saikat Islam Khan
- Department of CSE, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
- Airin Afroj Aishi
- Department of Computing and Information System, Daffodil International University, Savar, Dhaka, Bangladesh
- Sadia Sazzad
- Department of CSE, National Institute of Textile Engineering and Research (NITER), Constituent Institute of the University of Dhaka, Savar, Dhaka-1350
- Mohammad Sayduzzaman
- Department of CSE, National Institute of Textile Engineering and Research (NITER), Constituent Institute of the University of Dhaka, Savar, Dhaka-1350
- Shahab S. Band
- Department of Information Management, International Graduate School of Artificial Intelligence, National Yunlin University of Science and Technology, Taiwan
|
88
|
Evans H, Snead D. Why do errors arise in artificial intelligence diagnostic tools in histopathology and how can we minimize them? Histopathology 2024; 84:279-287. [PMID: 37921030 DOI: 10.1111/his.15071] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2023] [Revised: 09/22/2023] [Accepted: 09/27/2023] [Indexed: 11/04/2023]
Abstract
Artificial intelligence (AI)-based diagnostic tools can offer numerous benefits to the field of histopathology, including improved diagnostic accuracy, efficiency and productivity. As a result, such tools are likely to have an increasing role in routine practice. However, all AI tools are prone to errors, and these AI-associated errors have been identified as a major risk in the introduction of AI into healthcare. The errors made by AI tools are different, in terms of both cause and nature, to the errors made by human pathologists. As highlighted by the National Institute for Health and Care Excellence, it is imperative that practising pathologists understand the potential limitations of AI tools, including the errors made. Pathologists are in a unique position to be gatekeepers of AI tool use, maximizing patient benefit while minimizing harm. Furthermore, their pathological knowledge is essential to understanding when, and why, errors have occurred and so to developing safer future algorithms. This paper summarises the literature on errors made by AI diagnostic tools in histopathology. These include erroneous errors, data concerns (data bias, hidden stratification, data imbalances, distributional shift, and lack of generalisability), reinforcement of outdated practices, unsafe failure mode, automation bias, and insensitivity to impact. Methods to reduce errors in both tool design and clinical use are discussed, and the practical roles for pathologists in error minimisation are highlighted. This aims to inform and empower pathologists to move safely through this seismic change in practice and help ensure that novel AI tools are adopted safely.
Affiliation(s)
- Harriet Evans
- Histopathology Department, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
- Warwick Medical School, University of Warwick, Coventry, UK
- David Snead
- Histopathology Department, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
- Warwick Medical School, University of Warwick, Coventry, UK
|
89
|
Gaddum O, Chapiro J. An Interventional Radiologist's Primer of Critical Appraisal of Artificial Intelligence Research. J Vasc Interv Radiol 2024; 35:7-14. [PMID: 37769940 DOI: 10.1016/j.jvir.2023.09.020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2023] [Revised: 07/17/2023] [Accepted: 09/18/2023] [Indexed: 10/03/2023] Open
Abstract
Recent advances in artificial intelligence (AI) are expected to cause a significant paradigm shift in all digital data-driven aspects of information gain, processing, and decision making in both clinical healthcare and medical research. The field of interventional radiology (IR) will be enmeshed in this innovation, yet the collective IR expertise in the field of AI remains rudimentary because of lack of training. This primer provides the clinical interventional radiologist with a simple guide for critically appraising AI research and products by identifying 12 fundamental items that should be considered: (a) need for AI technology to address the clinical problem, (b) type of applied AI algorithm, (c) data quality and degree of annotation, (d) reporting of accuracy, (e) applicability of standardized reporting, (f) reproducibility of methodology and data transparency, (g) algorithm validation, (h) interpretability, (i) concrete impact on IR, (j) pathway toward translation to clinical practice, (k) clinical benefit and cost-effectiveness, and (l) regulatory framework.
Affiliation(s)
- Olivia Gaddum
- Division of Interventional Radiology, Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut
| | - Julius Chapiro
- Division of Interventional Radiology, Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut.
90
Samala RK, Drukker K, Shukla-Dave A, Chan HP, Sahiner B, Petrick N, Greenspan H, Mahmood U, Summers RM, Tourassi G, Deserno TM, Regge D, Näppi JJ, Yoshida H, Huo Z, Chen Q, Vergara D, Cha KH, Mazurchuk R, Grizzard KT, Huisman H, Morra L, Suzuki K, Armato SG, Hadjiiski L. AI and machine learning in medical imaging: key points from development to translation. BJR Artificial Intelligence 2024; 1:ubae006. PMID: 38828430. PMCID: PMC11140849. DOI: 10.1093/bjrai/ubae006.
Abstract
Innovation in medical imaging artificial intelligence (AI)/machine learning (ML) demands extensive data collection, algorithmic advancements, and rigorous performance assessments encompassing aspects such as generalizability, uncertainty, bias, fairness, trustworthiness, and interpretability. Achieving widespread integration of AI/ML algorithms into diverse clinical tasks will demand a steadfast commitment to overcoming issues in model design, development, and performance assessment. The complexities of AI/ML clinical translation present substantial challenges, requiring engagement with relevant stakeholders, assessment of cost-effectiveness for user and patient benefit, timely dissemination of information relevant to robust functioning throughout the AI/ML lifecycle, consideration of regulatory compliance, and feedback loops for real-world performance evidence. This commentary addresses several hurdles for the development and adoption of AI/ML technologies in medical imaging. Comprehensive attention to these underlying and often subtle factors is critical not only for tackling the challenges but also for exploring novel opportunities for the advancement of AI in radiology.
Affiliation(s)
- Ravi K Samala
- Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD, 20993, United States
- Karen Drukker
- Department of Radiology, University of Chicago, Chicago, IL, 60637, United States
- Amita Shukla-Dave
- Department of Radiology, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, MI, 48109, United States
- Berkman Sahiner
- Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD, 20993, United States
- Nicholas Petrick
- Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD, 20993, United States
- Hayit Greenspan
- Biomedical Engineering and Imaging Institute, Department of Radiology, Icahn School of Medicine at Mt Sinai, New York, NY, 10029, United States
- Usman Mahmood
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States
- Ronald M Summers
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, 20892, United States
- Georgia Tourassi
- Computing and Computational Sciences Directorate, Oak Ridge National Laboratory, Oak Ridge, TN, 37830, United States
- Thomas M Deserno
- Peter L. Reichertz Institute for Medical Informatics, TU Braunschweig and Hannover Medical School, Braunschweig, Niedersachsen, 38106, Germany
- Daniele Regge
- Radiology Unit, Candiolo Cancer Institute, FPO-IRCCS, Candiolo, 10060, Italy
- Department of Translational Research and of New Surgical and Medical Technologies of the University of Pisa, Pisa, 56126, Italy
- Janne J Näppi
- 3D Imaging Research, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, 02114, United States
- Hiroyuki Yoshida
- 3D Imaging Research, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, 02114, United States
- Zhimin Huo
- Tencent America, Palo Alto, CA, 94306, United States
- Quan Chen
- Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, AZ, 85054, United States
- Daniel Vergara
- Department of Radiology, University of Washington, Seattle, WA, 98195, United States
- Kenny H Cha
- Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD, 20993, United States
- Richard Mazurchuk
- Division of Cancer Prevention, National Cancer Institute, National Institutes of Health, Bethesda, MD, 20892, United States
- Kevin T Grizzard
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, 06510, United States
- Henkjan Huisman
- Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, Gelderland, 6525 GA, Netherlands
- Lia Morra
- Department of Control and Computer Engineering, Politecnico di Torino, Torino, Piemonte, 10129, Italy
- Kenji Suzuki
- Institute of Innovative Research, Tokyo Institute of Technology, Midori-ku, Yokohama, Kanagawa, 226-8503, Japan
- Samuel G Armato
- Department of Radiology, University of Chicago, Chicago, IL, 60637, United States
- Lubomir Hadjiiski
- Department of Radiology, University of Michigan, Ann Arbor, MI, 48109, United States
91
Adeoye J, Su YX. Artificial intelligence in salivary biomarker discovery and validation for oral diseases. Oral Dis 2024; 30:23-37. PMID: 37335832. DOI: 10.1111/odi.14641.
Abstract
Salivary biomarkers can improve the efficacy, efficiency, and timeliness of oral and maxillofacial disease diagnosis and monitoring. Oral and maxillofacial conditions in which salivary biomarkers have been utilized for disease-related outcomes include periodontal diseases, dental caries, oral cancer, temporomandibular joint dysfunction, and salivary gland diseases. However, given the equivocal accuracy of salivary biomarkers during validation, incorporating contemporary analytical techniques for biomarker selection and operationalization from the abundant multi-omics data available may help improve biomarker performance. Artificial intelligence represents one such advanced approach that may optimize the potential of salivary biomarkers to diagnose and manage oral and maxillofacial diseases. Therefore, this review summarized the role and current application of techniques based on artificial intelligence for salivary biomarker discovery and validation in oral and maxillofacial diseases.
Affiliation(s)
- John Adeoye
- Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, University of Hong Kong, Hong Kong SAR, China
- Yu-Xiong Su
- Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, University of Hong Kong, Hong Kong SAR, China
92
Varghese J. Reply to: "Can LLMs improve existing scenario of healthcare?". J Hepatol 2024; 80:e29-e30. PMID: 37827471. DOI: 10.1016/j.jhep.2023.10.004.
Affiliation(s)
- Julian Varghese
- Institute of Medical Informatics, University of Münster, Germany; European Research Center for Information Systems (ERCIS), Germany
93
Rokhshad R, Salehi SN, Yavari A, Shobeiri P, Esmaeili M, Manila N, Motamedian SR, Mohammad-Rahimi H. Deep learning for diagnosis of head and neck cancers through radiographic data: a systematic review and meta-analysis. Oral Radiol 2024; 40:1-20. PMID: 37855976. DOI: 10.1007/s11282-023-00715-5.
Abstract
PURPOSE This study aims to review deep learning applications for detecting head and neck cancer (HNC) using magnetic resonance imaging (MRI) and radiographic data. METHODS Searches of PubMed, Scopus, Embase, Google Scholar, IEEE, and arXiv were carried out through January 2023. The inclusion criteria were implementing head and neck medical images (computed tomography (CT), positron emission tomography (PET), MRI, planar scans, and panoramic X-ray) of human subjects with segmentation, object detection, and classification deep learning models for head and neck cancers. The risk of bias was rated with the quality assessment of diagnostic accuracy studies (QUADAS-2) tool. For the meta-analysis, the diagnostic odds ratio (DOR) was calculated. Deeks' funnel plot was used to assess publication bias. The MIDAS and Metandi packages were used to analyze diagnostic test accuracy in STATA. RESULTS From 1967 studies, 32 were found eligible after the search and screening procedures. According to the QUADAS-2 tool, 7 included studies had a low risk of bias for all domains. Across all included studies, accuracy varied from 82.6 to 100%, specificity ranged from 66.6 to 90.1%, and sensitivity from 74 to 99.68%. Fourteen studies that provided sufficient data were included in the meta-analysis. The pooled sensitivity was 90% (95% CI 0.82-0.94), and the pooled specificity was 92% (95% CI 0.87-0.96). The pooled DOR was 103 (27-251). Publication bias was not detected based on a p-value of 0.75 in the meta-analysis. CONCLUSION Deep learning models can enhance head and neck cancer screening processes with high specificity and sensitivity.
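The pooled DOR reported above is fully determined by the pooled sensitivity and specificity, so the figures can be sanity-checked in a few lines (a minimal sketch; the values are taken from the abstract):

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR: odds of a positive test among the diseased divided by
    the odds of a positive test among the non-diseased."""
    positive_odds = sensitivity / (1.0 - sensitivity)   # TP/FN odds
    negative_odds = (1.0 - specificity) / specificity   # FP/TN odds
    return positive_odds / negative_odds

# pooled estimates from the meta-analysis
dor = diagnostic_odds_ratio(0.90, 0.92)
print(round(dor, 1))  # 103.5, consistent with the reported pooled DOR of 103
```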
Affiliation(s)
- Rata Rokhshad
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group, AI On Health, Berlin, Germany
- Seyyede Niloufar Salehi
- Executive Secretary of Research Committee, Board Director of Scientific Society, Dental Faculty, Azad University, Tehran, Iran
- Amirmohammad Yavari
- Student Research Committee, School of Dentistry, Isfahan University of Medical Sciences, Isfahan, Iran
- Parnian Shobeiri
- School of Medicine, Tehran University of Medical Science, Tehran, Iran
- Mahdieh Esmaeili
- Faculty of Dentistry, Tehran Medical Sciences, Islamic Azad University, Tehran, Iran
- Nisha Manila
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group, AI On Health, Berlin, Germany
- Department of Diagnostic Sciences, Louisiana State University Health Science Center School of Dentistry, Louisiana, USA
- Saeed Reza Motamedian
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group, AI On Health, Berlin, Germany
- Dentofacial Deformities Research Center, Research Institute of Dental Sciences & Department of Orthodontics, School of Dentistry, Shahid Beheshti University of Medical Sciences, Daneshjou Blvd, Tehran, Iran
- Hossein Mohammad-Rahimi
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group, AI On Health, Berlin, Germany
94
Ueda D, Kakinuma T, Fujita S, Kamagata K, Fushimi Y, Ito R, Matsui Y, Nozaki T, Nakaura T, Fujima N, Tatsugami F, Yanagawa M, Hirata K, Yamada A, Tsuboyama T, Kawamura M, Fujioka T, Naganawa S. Fairness of artificial intelligence in healthcare: review and recommendations. Jpn J Radiol 2024; 42:3-15. PMID: 37540463. PMCID: PMC10764412. DOI: 10.1007/s11604-023-01474-3.
Abstract
In this review, we address the issue of fairness in the clinical integration of artificial intelligence (AI) in the medical field. As the clinical adoption of deep learning algorithms, a subfield of AI, progresses, concerns have arisen regarding the impact of AI biases and discrimination on patient health. This review aims to provide a comprehensive overview of concerns associated with AI fairness; discuss strategies to mitigate AI biases; and emphasize the need for cooperation among physicians, AI researchers, AI developers, policymakers, and patients to ensure equitable AI integration. First, we define and introduce the concept of fairness in AI applications in healthcare and radiology, emphasizing the benefits and challenges of incorporating AI into clinical practice. Next, we delve into concerns regarding fairness in healthcare, addressing the various causes of biases in AI and potential concerns such as misdiagnosis, unequal access to treatment, and ethical considerations. We then outline strategies for addressing fairness, such as the importance of diverse and representative data and algorithm audits. Additionally, we discuss ethical and legal considerations such as data privacy, responsibility, accountability, transparency, and explainability in AI. Finally, we present the Fairness of Artificial Intelligence Recommendations in healthcare (FAIR) statement to offer best practices. Through these efforts, we aim to provide a foundation for discussing the responsible and equitable implementation and deployment of AI in healthcare.
Affiliation(s)
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-Machi, Abeno-ku, Osaka, 545-8585, Japan
- Shohei Fujita
- Department of Radiology, University of Tokyo, Bunkyo-ku, Tokyo, Japan
- Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo, Japan
- Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, Sakyoku, Kyoto, Japan
- Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Yusuke Matsui
- Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Kita-ku, Okayama, Japan
- Taiki Nozaki
- Department of Radiology, Keio University School of Medicine, Shinjuku-ku, Tokyo, Japan
- Takeshi Nakaura
- Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, Chuo-ku, Kumamoto, Japan
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Japan
- Fuminari Tatsugami
- Department of Diagnostic Radiology, Hiroshima University, Minami-ku, Hiroshima, Japan
- Masahiro Yanagawa
- Department of Radiology, Osaka University Graduate School of Medicine, Suita City, Osaka, Japan
- Kenji Hirata
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Kita-ku, Sapporo, Hokkaido, Japan
- Akira Yamada
- Department of Radiology, Shinshu University School of Medicine, Matsumoto, Nagano, Japan
- Takahiro Tsuboyama
- Department of Radiology, Osaka University Graduate School of Medicine, Suita City, Osaka, Japan
- Mariko Kawamura
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Bunkyo-ku, Tokyo, Japan
- Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
95
Sigut J, Fumero F, Estévez J, Alayón S, Díaz-Alemán T. In-Depth Evaluation of Saliency Maps for Interpreting Convolutional Neural Network Decisions in the Diagnosis of Glaucoma Based on Fundus Imaging. Sensors (Basel) 2023; 24:239. PMID: 38203101. PMCID: PMC10781365. DOI: 10.3390/s24010239.
Abstract
Glaucoma, a leading cause of blindness, damages the optic nerve, making early diagnosis challenging because the disease initially presents no symptoms. Fundus eye images taken with a non-mydriatic retinograph help diagnose glaucoma by revealing structural changes, including in the optic disc and cup. This research aims to thoroughly analyze saliency maps for interpreting convolutional neural network decisions when diagnosing glaucoma from fundus images. These maps highlight the image regions that most influence the network's decisions. Various network architectures were trained and tested on 739 optic nerve head images, with nine saliency methods used. Some other popular datasets were also used for further validation. The results reveal disparities among saliency maps, with some consensus between the folds corresponding to the same architecture. Concerning the significance of optic disc sectors, there is generally a lack of agreement with standard medical criteria. The background, nasal, and temporal sectors emerge as particularly influential for neural network decisions, with a likelihood of being the most relevant ranging from 14.55% to 28.16% on average across all evaluated datasets. We conclude that saliency maps are usually difficult to interpret, and even the areas indicated as most relevant can be very unintuitive. Their usefulness as an explanatory tool may therefore be compromised, at least in problems such as the one addressed in this study, where the features defining the model prediction are generally not consistently reflected in relevant regions of the saliency maps and cannot always be related to those used as medical standards.
Affiliation(s)
- Jose Sigut
- Department of Computer Science and Systems Engineering, Universidad de La Laguna, Camino San Francisco de Paula, 19, La Laguna, 38203 Santa Cruz de Tenerife, Spain
- Francisco Fumero
- Department of Computer Science and Systems Engineering, Universidad de La Laguna, Camino San Francisco de Paula, 19, La Laguna, 38203 Santa Cruz de Tenerife, Spain
- José Estévez
- Department of Computer Science and Systems Engineering, Universidad de La Laguna, Camino San Francisco de Paula, 19, La Laguna, 38203 Santa Cruz de Tenerife, Spain
- Silvia Alayón
- Department of Computer Science and Systems Engineering, Universidad de La Laguna, Camino San Francisco de Paula, 19, La Laguna, 38203 Santa Cruz de Tenerife, Spain
- Tinguaro Díaz-Alemán
- Department of Ophthalmology, Hospital Universitario de Canarias, Carretera Ofra S/N, La Laguna, 38320 Santa Cruz de Tenerife, Spain
96
García-García S, Cepeda S, Müller D, Mosteiro A, Torné R, Agudo S, de la Torre N, Arrese I, Sarabia R. Mortality Prediction of Patients with Subarachnoid Hemorrhage Using a Deep Learning Model Based on an Initial Brain CT Scan. Brain Sci 2023; 14:10. PMID: 38248225. PMCID: PMC10812955. DOI: 10.3390/brainsci14010010.
Abstract
BACKGROUND Subarachnoid hemorrhage (SAH) entails high morbidity and mortality rates. Convolutional neural networks (CNN) are capable of generating highly accurate predictions from imaging data. Our objective was to predict mortality in SAH patients by processing initial CT scans using a CNN-based algorithm. METHODS We conducted a retrospective multicentric study of a consecutive cohort of patients with SAH. Demographic, clinical and radiological variables were analyzed. Preprocessed baseline CT scan images were used as the input for training using the AUCMEDI framework. Our model's architecture leveraged a DenseNet121 structure, employing transfer learning principles. The output variable was mortality in the first three months. RESULTS Images from 219 patients were processed; 175 for training and validation and 44 for the model's evaluation. Of the patients, 52% (115/219) were female and the median age was 58 (SD = 13.06) years. In total, 18.5% (39/219) had idiopathic SAH. The mortality rate was 28.5% (63/219). The model showed good accuracy at predicting mortality in SAH patients when exclusively using the images of the initial CT scan (accuracy = 74%, F1 = 75% and AUC = 82%). CONCLUSION Modern image processing techniques based on AI and CNN make it possible to predict mortality in SAH patients with high accuracy using CT scan images as the only input. These models might be optimized by including more data and patients, resulting in better training, development and performance on tasks that are beyond the skills of conventional clinical knowledge.
Affiliation(s)
- Sergio García-García
- Neurosurgery Department, Rio Hortega University Hospital, 47012 Valladolid, Spain
- Santiago Cepeda
- Neurosurgery Department, Rio Hortega University Hospital, 47012 Valladolid, Spain
- Dominik Müller
- IT-Infrastructure for Translational Medical Research, University of Augsburg, 86159 Augsburg, Germany
- Alejandra Mosteiro
- Neurosurgery Department, Hospital Clinic de Barcelona, 08036 Barcelona, Spain
- Ramón Torné
- Neurosurgery Department, Hospital Clinic de Barcelona, 08036 Barcelona, Spain
- Silvia Agudo
- Neurosurgery Department, Rio Hortega University Hospital, 47012 Valladolid, Spain
- Natalia de la Torre
- Neurosurgery Department, Rio Hortega University Hospital, 47012 Valladolid, Spain
- Ignacio Arrese
- Neurosurgery Department, Rio Hortega University Hospital, 47012 Valladolid, Spain
- Rosario Sarabia
- Neurosurgery Department, Rio Hortega University Hospital, 47012 Valladolid, Spain
97
Till T, Tschauner S, Singer G, Lichtenegger K, Till H. Development and optimization of AI algorithms for wrist fracture detection in children using a freely available dataset. Front Pediatr 2023; 11:1291804. PMID: 38188914. PMCID: PMC10768054. DOI: 10.3389/fped.2023.1291804.
Abstract
Introduction In the field of pediatric trauma, computer-aided detection (CADe) and computer-aided diagnosis (CADx) systems have emerged, offering a promising avenue for improved patient care. Children with wrist fractures may particularly benefit from machine learning (ML) solutions, since some of these lesions may be overlooked on conventional X-ray due to minimal compression without dislocation, or mistaken for cartilaginous growth plates. In this article, we describe the development and optimization of AI algorithms for wrist fracture detection in children. Methods A team of IT specialists, pediatric radiologists and pediatric surgeons used the freely available GRAZPEDWRI-DX dataset containing annotated pediatric trauma wrist radiographs of 6,091 patients, a total of 10,643 studies (20,327 images). First, a basic object detection model, a You Only Look Once object detector of the seventh generation (YOLOv7), was trained and tested on these data. Then, team decisions were taken to adjust data preparation, image sizes used for training and testing, and the configuration of the detection model. Furthermore, we investigated each of these models using an Explainable Artificial Intelligence (XAI) method called Gradient Class Activation Mapping (Grad-CAM). This method uses saliency maps to visualize where a model directs its attention before classifying and regressing a certain class. Results Mean average precision (mAP) improved when optimizing the pre-processing of the dataset images (maximum increases of +25.51% mAP@0.5 and +39.78% mAP@[0.5:0.95]), as well as the object detection model itself (maximum increases of +13.36% mAP@0.5 and +27.01% mAP@[0.5:0.95]). Generally, when analyzing the resulting models with XAI methods, the model variations scoring higher in terms of mAP paid attention to broader regions of the image, prioritizing detection accuracy over precision compared to the less accurate models.
Discussion This paper supports the implementation of ML solutions for pediatric trauma care. Optimization of a large X-ray dataset and the YOLOv7 model improves the model's ability to detect objects and provide valid diagnostic support to health care specialists. Such optimization protocols must be understood and advocated before comparing ML performance against health care specialists.
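The Grad-CAM step described above reduces to two operations: each feature map receives a weight equal to the spatial average of its gradient, and the saliency map is the ReLU of the weighted sum of the feature maps. A dependency-free sketch of that computation with toy inputs (in practice, activations and gradients would come from a layer of the trained detector):

```python
def grad_cam(activations, gradients):
    """Compute a Grad-CAM saliency map from one convolutional layer.

    activations: K feature maps, each an H x W list of lists
    gradients:   gradients of the class score w.r.t. those maps, same shape
    """
    k_channels = len(activations)
    h, w = len(activations[0]), len(activations[0][0])
    # channel weights: global average pooling of the gradients
    weights = [
        sum(sum(row) for row in gradients[k]) / (h * w)
        for k in range(k_channels)
    ]
    # ReLU of the weighted sum of the activation maps
    return [
        [
            max(0.0, sum(weights[k] * activations[k][i][j]
                         for k in range(k_channels)))
            for j in range(w)
        ]
        for i in range(h)
    ]

# toy example: two 2x2 feature maps; the second channel has negative gradients,
# so it is down-weighted and the ReLU clips its contribution
acts = [[[1.0, 0.0], [0.0, 2.0]], [[0.0, 3.0], [1.0, 0.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]], [[-1.0, -1.0], [-1.0, -1.0]]]
heatmap = grad_cam(acts, grads)
print(heatmap)  # [[1.0, 0.0], [0.0, 2.0]]
```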
Affiliation(s)
- Tristan Till
- Department of Applied Computer Sciences, FH JOANNEUM - University of Applied Sciences, Graz, Austria
- Division of Pediatric Radiology, Department of Radiology, Medical University of Graz, Graz, Austria
- Sebastian Tschauner
- Division of Pediatric Radiology, Department of Radiology, Medical University of Graz, Graz, Austria
- Georg Singer
- Department of Pediatric and Adolescent Surgery, Medical University of Graz, Graz, Austria
- Klaus Lichtenegger
- Department of Applied Computer Sciences, FH JOANNEUM - University of Applied Sciences, Graz, Austria
- Holger Till
- Department of Pediatric and Adolescent Surgery, Medical University of Graz, Graz, Austria
98
Sarao V, Veritti D, De Nardin A, Misciagna M, Foresti G, Lanzetta P. Explainable artificial intelligence model for the detection of geographic atrophy using colour retinal photographs. BMJ Open Ophthalmol 2023; 8:e001411. PMID: 38057106. PMCID: PMC10711821. DOI: 10.1136/bmjophth-2023-001411.
Abstract
OBJECTIVE To develop and validate an explainable artificial intelligence (AI) model for detecting geographic atrophy (GA) via colour retinal photographs. METHODS AND ANALYSIS We conducted a prospective study where colour fundus images were collected from healthy individuals and patients with retinal diseases using an automated imaging system. All images were categorised into three classes: healthy, GA and other retinal diseases, by two experienced retinologists. Simultaneously, an explainable learning model using class activation mapping techniques categorised each image into one of the three classes. The AI system's performance was then compared with manual evaluations. RESULTS A total of 540 colour retinal photographs were collected. The data were divided such that 300 images were used to train the AI model, 120 for validation and 120 for performance testing. In distinguishing between GA and healthy eyes, the model demonstrated a sensitivity of 100%, specificity of 97.5% and an overall diagnostic accuracy of 98.4%. Performance metrics such as the areas under the receiver operating characteristic (AUC-ROC, 0.988) and precision-recall (AUC-PR, 0.952) curves reinforced the model's robust performance. When differentiating GA from other retinal conditions, the model preserved a diagnostic accuracy of 96.8%, a precision of 90.9% and a recall of 100%, leading to an F1-score of 0.952. The AUC-ROC and AUC-PR scores were 0.975 and 0.909, respectively. CONCLUSIONS Our explainable AI model exhibits excellent performance in detecting GA using colour retinal images. With its high sensitivity, specificity and overall diagnostic accuracy, the AI model stands as a powerful tool for the automated diagnosis of GA.
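The F1-score quoted above is simply the harmonic mean of precision and recall, so the reported figure can be verified directly from the other two values:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2.0 * precision * recall / (precision + recall)

# values reported for differentiating GA from other retinal conditions
print(round(f1_score(0.909, 1.0), 3))  # 0.952, matching the reported F1-score
```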
Affiliation(s)
- Valentina Sarao
- Department of Medicine-Ophthalmology, University of Udine, Udine, Italy
- Istituto Europeo di Microchirurgia Oculare (IEMO), Udine, Italy
- Daniele Veritti
- Department of Medicine-Ophthalmology, University of Udine, Udine, Italy
- Axel De Nardin
- Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy
- Micaela Misciagna
- Department of Medicine-Ophthalmology, University of Udine, Udine, Italy
- Gianluca Foresti
- Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy
- Paolo Lanzetta
- Department of Medicine-Ophthalmology, University of Udine, Udine, Italy
- Istituto Europeo di Microchirurgia Oculare (IEMO), Udine, Italy
99
Aggarwal K, Manso Jimeno M, Ravi KS, Gonzalez G, Geethanath S. Developing and deploying deep learning models in brain magnetic resonance imaging: A review. NMR Biomed 2023; 36:e5014. PMID: 37539775. DOI: 10.1002/nbm.5014.
Abstract
Magnetic resonance imaging (MRI) of the brain has benefited from deep learning (DL) to alleviate the burden on radiologists and MR technologists, and improve throughput. The easy accessibility of DL tools has resulted in a rapid increase of DL models and subsequent peer-reviewed publications. However, the rate of deployment in clinical settings is low. Therefore, this review attempts to bring together the ideas from data collection to deployment in the clinic, building on the guidelines and principles that accreditation agencies have espoused. We introduce the need for and the role of DL to deliver accessible MRI. This is followed by a brief review of DL examples in the context of neuropathologies. Based on these studies and others, we collate the prerequisites to develop and deploy DL models for brain MRI. We then delve into the guiding principles to develop good machine learning practices in the context of neuroimaging, with a focus on explainability. A checklist based on the United States Food and Drug Administration's good machine learning practices is provided as a summary of these guidelines. Finally, we review the current challenges and future opportunities in DL for brain MRI.
Affiliation(s)
- Kunal Aggarwal
- Accessible MR Laboratory, Biomedical Engineering and Imaging Institute, Department of Diagnostic, Molecular and Interventional Radiology, Mount Sinai Hospital, New York, USA
- Department of Electrical and Computer Engineering, Technical University Munich, Munich, Germany
- Marina Manso Jimeno
- Department of Biomedical Engineering, Columbia University in the City of New York, New York, New York, USA
- Columbia Magnetic Resonance Research Center, Columbia University in the City of New York, New York, New York, USA
- Keerthi Sravan Ravi
- Department of Biomedical Engineering, Columbia University in the City of New York, New York, New York, USA
- Columbia Magnetic Resonance Research Center, Columbia University in the City of New York, New York, New York, USA
- Gilberto Gonzalez
- Division of Neuroradiology, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Sairam Geethanath
- Accessible MR Laboratory, Biomedical Engineering and Imaging Institute, Department of Diagnostic, Molecular and Interventional Radiology, Mount Sinai Hospital, New York, USA
| |
|
100
|
Marzi SJ, Schilder BM, Nott A, Frigerio CS, Willaime-Morawek S, Bucholc M, Hanger DP, James C, Lewis PA, Lourida I, Noble W, Rodriguez-Algarra F, Sharif JA, Tsalenchuk M, Winchester LM, Yaman Ü, Yao Z, Ranson JM, Llewellyn DJ. Artificial intelligence for neurodegenerative experimental models. Alzheimers Dement 2023; 19:5970-5987. [PMID: 37768001] [DOI: 10.1002/alz.13479] [Received: 04/17/2023] [Revised: 08/11/2023] [Accepted: 08/14/2023] [Indexed: 09/29/2023]
Abstract
INTRODUCTION: Experimental models are essential tools in neurodegenerative disease research. However, the translation of insights and drugs discovered in model systems has proven immensely challenging, marred by high failure rates in human clinical trials.
METHODS: Here we review the application of artificial intelligence (AI) and machine learning (ML) in experimental medicine for dementia research.
RESULTS: Considering the specific challenges of reproducibility and translation between other species or model systems and human biology in preclinical dementia research, we highlight best practices and resources that can be leveraged to quantify and evaluate translatability. We then evaluate how AI and ML approaches could be applied to enhance both cross-model reproducibility and translation to human biology, while sustaining biological interpretability.
DISCUSSION: AI and ML approaches in experimental medicine remain in their infancy. However, they have great potential to strengthen preclinical research and translation if based upon adequate, robust, and reproducible experimental data.
HIGHLIGHTS:
- There are increasing applications of AI in experimental medicine.
- We identified issues in reproducibility, cross-species translation, and data curation in the field.
- Our review highlights data resources and AI approaches as solutions.
- Multi-omics analysis with AI offers exciting future possibilities in drug discovery.
Affiliation(s)
- Sarah J Marzi
- UK Dementia Research Institute, Imperial College London, London, UK
- Department of Brain Sciences, Imperial College London, London, UK
- Brian M Schilder
- UK Dementia Research Institute, Imperial College London, London, UK
- Department of Brain Sciences, Imperial College London, London, UK
- Alexi Nott
- UK Dementia Research Institute, Imperial College London, London, UK
- Department of Brain Sciences, Imperial College London, London, UK
- Magda Bucholc
- School of Computing, Engineering & Intelligent Systems, Ulster University, Derry, UK
- Diane P Hanger
- Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
- Patrick A Lewis
- Royal Veterinary College, London, UK
- Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London, UK
- Wendy Noble
- Faculty of Health and Life Sciences, University of Exeter, Exeter, UK
- Jalil-Ahmad Sharif
- UK Dementia Research Institute, Imperial College London, London, UK
- Department of Brain Sciences, Imperial College London, London, UK
- Maria Tsalenchuk
- UK Dementia Research Institute, Imperial College London, London, UK
- Department of Brain Sciences, Imperial College London, London, UK
- Ümran Yaman
- UK Dementia Research Institute at UCL, London, UK
- David J Llewellyn
- University of Exeter Medical School, Exeter, UK
- Alan Turing Institute, London, UK
|