101
Sambyal AS, Niyaz U, Krishnan NC, Bathula DR. Understanding calibration of deep neural networks for medical image classification. Comput Methods Programs Biomed 2023; 242:107816. [PMID: 37778139] [DOI: 10.1016/j.cmpb.2023.107816]
Abstract
Background and Objective - In the field of medical image analysis, achieving high accuracy is not enough; ensuring well-calibrated predictions is also crucial. Confidence scores of a deep neural network play a pivotal role in explainability by providing insights into the model's certainty, identifying cases that require attention, and establishing trust in its predictions. Consequently, a well-calibrated model becomes paramount in the medical imaging domain, where accurate and reliable predictions are of utmost importance. While there has been significant effort towards training modern deep neural networks to achieve high accuracy on medical imaging tasks, model calibration and the factors that affect it remain under-explored. Methods - To address this, we conducted a comprehensive empirical study that explores model performance and calibration under different training regimes. We considered fully supervised training, which is the prevailing approach in the community, as well as a rotation-based self-supervised method, with and without transfer learning, across various datasets and architecture sizes. Multiple calibration metrics were employed to gain a holistic understanding of model calibration. Results - Our study reveals that factors such as weight distributions and the similarity of learned representations correlate with the calibration trends observed in the models. Notably, models trained using the rotation-based self-supervised pretraining regime exhibit significantly better calibration while achieving comparable or even superior performance compared to fully supervised models across different medical imaging datasets. Conclusion - These findings shed light on the importance of model calibration in medical image analysis and highlight the benefits of incorporating a self-supervised learning approach to improve both performance and calibration.
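The abstract cites "multiple calibration metrics" without naming them; the most widely used is the Expected Calibration Error (ECE). The sketch below is a generic illustration of ECE, not the authors' code; the 10-bin scheme and the toy confidence values are assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error: bin predictions by confidence, then
    average the |accuracy - confidence| gap over bins, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# A perfectly calibrated toy model: 80% confidence, 80% accuracy
conf = np.array([0.8, 0.8, 0.8, 0.8, 0.8])
corr = np.array([1, 1, 1, 1, 0])
print(round(expected_calibration_error(conf, corr), 4))  # prints 0.0
```

A model with high accuracy can still have a large ECE if its confidences are systematically over- or under-stated, which is exactly the failure mode the study probes.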
Affiliation(s)
- Abhishek Singh Sambyal
- Department of Computer Science and Engineering, Indian Institute of Technology Ropar, Rupnagar, 140001, Punjab, India.
- Usma Niyaz
- Department of Computer Science and Engineering, Indian Institute of Technology Ropar, Rupnagar, 140001, Punjab, India.
- Narayanan C Krishnan
- Department of Data Science, Indian Institute of Technology Palakkad, Palakkad, 678532, Kerala, India.
- Deepti R Bathula
- Department of Computer Science and Engineering, Indian Institute of Technology Ropar, Rupnagar, 140001, Punjab, India.
102
Salimi Y, Akhavanallaf A, Mansouri Z, Shiri I, Zaidi H. Real-time, acquisition parameter-free voxel-wise patient-specific Monte Carlo dose reconstruction in whole-body CT scanning using deep neural networks. Eur Radiol 2023; 33:9411-9424. [PMID: 37368113] [PMCID: PMC10667156] [DOI: 10.1007/s00330-023-09839-y]
Abstract
OBJECTIVE We propose a deep learning-guided approach to generate voxel-based absorbed dose maps from whole-body CT acquisitions. METHODS The voxel-wise dose maps corresponding to each source position/angle were calculated using Monte Carlo (MC) simulations considering patient- and scanner-specific characteristics (SP_MC). The dose distribution in a uniform cylinder was computed through MC calculations (SP_uniform). The density map and SP_uniform dose maps were fed into a residual deep neural network (DNN) to predict SP_MC through an image regression task. The whole-body dose maps reconstructed by the DNN and MC were compared in 11 test cases scanned with two tube voltages through transfer learning with/without tube current modulation (TCM). Voxel-wise and organ-wise dose evaluations, namely mean error (ME, mGy), mean absolute error (MAE, mGy), relative error (RE, %), and relative absolute error (RAE, %), were performed. RESULTS The model performance for the 120 kVp and TCM test set in terms of the voxel-wise ME, MAE, RE, and RAE was −0.0302 ± 0.0244 mGy, 0.0854 ± 0.0279 mGy, −1.13 ± 1.41%, and 7.17 ± 0.44%, respectively. The organ-wise errors for the 120 kVp and TCM scenario, averaged over all segmented organs, were −0.144 ± 0.342 mGy (ME), 0.23 ± 0.28 mGy (MAE), −1.11 ± 2.90% (RE), and 2.34 ± 2.03% (RAE). CONCLUSION Our proposed deep learning model is able to generate voxel-level dose maps from a whole-body CT scan with reasonable accuracy, suitable for organ-level absorbed dose estimation. CLINICAL RELEVANCE STATEMENT We proposed a novel method for voxel dose map calculation using deep neural networks. This work is clinically relevant because accurate dose calculation for patients can be carried out within acceptable computational time compared to lengthy Monte Carlo calculations. KEY POINTS • We proposed a deep neural network approach as an alternative to Monte Carlo dose calculation. • Our proposed deep learning model is able to generate voxel-level dose maps from a whole-body CT scan with reasonable accuracy, suitable for organ-level dose estimation. • By generating a dose distribution from a single source position, our model can generate accurate and personalized dose maps for a wide range of acquisition parameters.
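The four reported metrics (ME, MAE, RE, RAE) follow their standard definitions; a minimal NumPy sketch is given below. The formulas are the conventional ones (the abstract does not spell them out) and the three-voxel dose values are hypothetical, not the study's data.

```python
import numpy as np

def dose_error_metrics(pred, ref, eps=1e-6):
    """Voxel-wise agreement between a predicted and a reference (e.g. Monte
    Carlo) dose map: ME, MAE in mGy and RE, RAE in percent."""
    pred = np.asarray(pred, dtype=float)
    ref = np.asarray(ref, dtype=float)
    diff = pred - ref
    me = diff.mean()                                    # mean error (mGy)
    mae = np.abs(diff).mean()                           # mean absolute error (mGy)
    re = 100.0 * (diff / (ref + eps)).mean()            # relative error (%)
    rae = 100.0 * (np.abs(diff) / (ref + eps)).mean()   # relative absolute error (%)
    return me, mae, re, rae

# Hypothetical three-voxel dose maps (mGy), for illustration only
me, mae, re, rae = dose_error_metrics([1.0, 2.0, 3.0], [1.0, 2.0, 2.0])
```

ME preserves the sign of systematic over- or under-estimation, while MAE and RAE summarize magnitude regardless of direction, which is why the abstract reports all four.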
Affiliation(s)
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Azadeh Akhavanallaf
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland.
- Geneva University Neurocenter, Geneva University, CH-1205, Geneva, Switzerland.
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands.
- Department of Nuclear Medicine, University of Southern Denmark, DK-500, Odense, Denmark.
103
He S, Li Y, Zhang C, Li Z, Ren Y, Li T, Wang J. Deep learning technique to detect craniofacial anatomical abnormalities concentrated on middle and anterior of face in patients with sleep apnea. Sleep Med 2023; 112:12-20. [PMID: 37801860] [DOI: 10.1016/j.sleep.2023.09.025]
Abstract
OBJECTIVES The aim of this study is to propose a deep learning-based model using craniofacial photographs for automatic obstructive sleep apnea (OSA) detection and to design explainability tests that investigate important craniofacial regions as well as the reliability of the method. METHODS Five hundred and thirty participants with suspected OSA are subjected to polysomnography. Front and profile craniofacial photographs are captured and randomly segregated into training, validation, and test sets for model development and evaluation. Photographic occlusion tests and visual observations are performed to determine regions at risk of OSA. The number of positive regions in each participant is identified, and their associations with OSA are assessed. RESULTS The model using craniofacial photographs alone yields an accuracy of 0.884 and an area under the receiver operating characteristic curve of 0.881 (95% confidence interval, 0.839-0.922). Using the cutoff point with the maximum sum of sensitivity and specificity, the model exhibits a sensitivity of 0.905 and a specificity of 0.941. The bilateral eyes, nose, mouth and chin, pre-auricular area, and ears contribute the most to disease detection. When photographs that increase the weights of these regions are used, the performance of the model improves. Additionally, different severities of OSA become more prevalent as the number of positive craniofacial regions increases. CONCLUSIONS The results suggest that the deep learning-based model can extract meaningful features that are primarily concentrated in the middle and anterior regions of the face.
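The "cutoff point with the maximum sum of sensitivity and specificity" is the Youden-index criterion. A small illustrative implementation follows, run on made-up scores and labels rather than the study's model outputs.

```python
import numpy as np

def youden_cutoff(scores, labels):
    """Pick the decision threshold that maximizes sensitivity + specificity
    (equivalently, Youden's J = sensitivity + specificity - 1)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    best_t, best_j = None, -np.inf
    for t in np.unique(scores):
        pred = scores >= t
        sens = (pred & (labels == 1)).sum() / max((labels == 1).sum(), 1)
        spec = (~pred & (labels == 0)).sum() / max((labels == 0).sum(), 1)
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Toy scores for two controls and three cases
t, j = youden_cutoff([0.1, 0.3, 0.35, 0.8, 0.9], [0, 0, 1, 1, 1])
```

On this toy data the threshold 0.35 separates the classes perfectly, so J reaches its maximum of 1.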
Affiliation(s)
- Shuai He
- Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, China
- Yingjie Li
- School of Computer Science and Engineering, Beijing Technology and Business University, China
- Chong Zhang
- Department of Big Data Management and Application, School of International Economics and Management, Beijing Technology and Business University, China
- Zufei Li
- Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, China
- Yuanyuan Ren
- Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, China
- Tiancheng Li
- Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, China.
- Jianting Wang
- Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, China.
104
Champendal M, Müller H, Prior JO, Dos Reis CS. A scoping review of interpretability and explainability concerning artificial intelligence methods in medical imaging. Eur J Radiol 2023; 169:111159. [PMID: 37976760] [DOI: 10.1016/j.ejrad.2023.111159]
Abstract
PURPOSE To review eXplainable Artificial Intelligence (XAI) methods available for medical imaging (MI). METHOD A scoping review was conducted following the Joanna Briggs Institute's methodology. The search was performed on PubMed, Embase, CINAHL, Web of Science, bioRxiv, medRxiv, and Google Scholar. Studies published in French and English after 2017 were included. Keyword combinations and descriptors related to explainability and MI modalities were employed. Two independent reviewers screened titles, abstracts, and full texts, resolving differences through discussion. RESULTS 228 studies met the criteria. XAI publications are increasing, targeting MRI (n = 73), radiography (n = 47), and CT (n = 46). Lung (n = 82) and brain (n = 74) pathologies, COVID-19 (n = 48), Alzheimer's disease (n = 25), and brain tumors (n = 15) are the pathologies most often explained. Explanations are presented visually (n = 186), numerically (n = 67), rule-based (n = 11), textually (n = 11), and example-based (n = 6). Commonly explained tasks include classification (n = 89), prediction (n = 47), diagnosis (n = 39), detection (n = 29), segmentation (n = 13), and image quality improvement (n = 6). The explanations provided were most frequently local (78.1%); 5.7% were global, and 16.2% combined both local and global approaches. Post-hoc approaches were predominantly employed. The terminology varied, sometimes indistinctively using explainable (n = 207), interpretable (n = 187), understandable (n = 112), transparent (n = 61), reliable (n = 31), and intelligible (n = 3). CONCLUSION The number of XAI publications in medical imaging is increasing, primarily focusing on applying XAI techniques to MRI, CT, and radiography for classifying and predicting lung and brain pathologies. Visual and numerical output formats are predominantly used. Terminology standardisation remains a challenge, as terms like "explainable" and "interpretable" are sometimes used indistinctively. Future XAI development should consider user needs and perspectives.
Affiliation(s)
- Mélanie Champendal
- School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, CH, Switzerland; Faculty of Biology and Medicine, University of Lausanne, Lausanne, CH, Switzerland.
- Henning Müller
- Informatics Institute, University of Applied Sciences Western Switzerland (HES-SO Valais) Sierre, CH, Switzerland; Medical faculty, University of Geneva, CH, Switzerland.
- John O Prior
- Faculty of Biology and Medicine, University of Lausanne, Lausanne, CH, Switzerland; Nuclear Medicine and Molecular Imaging Department, Lausanne University Hospital (CHUV), Lausanne, CH, Switzerland.
- Cláudia Sá Dos Reis
- School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, CH, Switzerland.
105
Pertuz S, Ortega D, Suarez É, Cancino W, Africano G, Rinta-Kiikka I, Arponen O, Paris S, Lozano A. Saliency of breast lesions in breast cancer detection using artificial intelligence. Sci Rep 2023; 13:20545. [PMID: 37996504] [PMCID: PMC10667547] [DOI: 10.1038/s41598-023-46921-3]
Abstract
The analysis of mammograms using artificial intelligence (AI) has shown great potential for assisting breast cancer screening. We use saliency maps to study the role of breast lesions in the decision-making process of AI systems for breast cancer detection in screening mammograms. We retrospectively collected mammograms from 191 women with screen-detected breast cancer and 191 healthy controls matched by age and mammographic system. Two radiologists manually segmented the breast lesions in the mammograms from CC and MLO views. We estimated the detection performance of four deep learning-based AI systems using the area under the ROC curve (AUC) with a 95% confidence interval (CI). We used automatic thresholding on saliency maps from the AI systems to identify the areas of interest on the mammograms. Finally, we measured the overlap between these areas of interest and the segmented breast lesions using Dice's similarity coefficient (DSC). The detection performance of the AI systems ranged from low to moderate (AUCs from 0.525 to 0.694). The overlap between the areas of interest and the breast lesions was low for all the studied methods (median DSC from 4.2% to 38.0%). The AI system with the highest cancer detection performance (AUC = 0.694, CI 0.662-0.726) showed the lowest overlap (DSC = 4.2%) with breast lesions. The areas of interest found by saliency analysis of the AI systems showed poor overlap with breast lesions. These results suggest that AI systems with the highest performance do not solely rely on localized breast lesions for their decision-making in cancer detection; rather, they incorporate information from large image regions. This work contributes to the understanding of the role of breast lesions in cancer detection using AI.
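The overlap measure used here, Dice's similarity coefficient between a thresholded saliency map and a segmented lesion, can be sketched in a few lines. The fixed 0.5 threshold below is a simplification of the automatic thresholding the authors describe, and the 2x2 arrays are toy stand-ins for a saliency map and a lesion mask.

```python
import numpy as np

def dice_overlap(saliency, lesion_mask, threshold=0.5):
    """Dice similarity coefficient (DSC) between the thresholded area of
    interest of a saliency map and a binary lesion segmentation."""
    area = np.asarray(saliency, dtype=float) >= threshold
    lesion = np.asarray(lesion_mask).astype(bool)
    inter = np.logical_and(area, lesion).sum()
    denom = area.sum() + lesion.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy 2x2 "mammogram": saliency highlights two pixels, lesion covers one
saliency = np.array([[0.9, 0.6], [0.2, 0.1]])
lesion = np.array([[1, 0], [0, 0]])
dsc = dice_overlap(saliency, lesion)  # 2*1 / (2 + 1)
```

A DSC near zero, as reported for the best-performing AI system, means the model's area of interest barely intersects the radiologist-segmented lesion.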
Affiliation(s)
- Said Pertuz
- Escuela de Ingenierías Eléctrica Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga, Colombia
- David Ortega
- Escuela de Ingenierías Eléctrica Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga, Colombia
- Érika Suarez
- Escuela de Ingenierías Eléctrica Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga, Colombia
- William Cancino
- Escuela de Ingenierías Eléctrica Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga, Colombia
- Gerson Africano
- Escuela de Ingenierías Eléctrica Electrónica y de Telecomunicaciones, Universidad Industrial de Santander, Bucaramanga, Colombia
- Irina Rinta-Kiikka
- Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Department of Radiology, Tampere University Hospital, Tampere, Finland
- Otso Arponen
- Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland.
- Department of Radiology, Tampere University Hospital, Tampere, Finland.
- Sara Paris
- Departamento de Imágenes Diagnósticas, Universidad Nacional de Colombia, Bogotá, Colombia
- Alfonso Lozano
- Departamento de Imágenes Diagnósticas, Universidad Nacional de Colombia, Bogotá, Colombia
106
O'Shea R, Manickavasagar T, Horst C, Hughes D, Cusack J, Tsoka S, Cook G, Goh V. Weakly supervised segmentation models as explainable radiological classifiers for lung tumour detection on CT images. Insights Imaging 2023; 14:195. [PMID: 37980637] [PMCID: PMC10657919] [DOI: 10.1186/s13244-023-01542-2]
Abstract
PURPOSE Interpretability is essential for reliable convolutional neural network (CNN) image classifiers in radiological applications. We describe a weakly supervised segmentation model that learns to delineate the target object, trained with only image-level labels ("image contains object" or "image does not contain object"), presenting a different approach towards explainable object detectors for radiological imaging tasks. METHODS A weakly supervised Unet architecture (WSUnet) was trained to learn lung tumour segmentation from image-level labelled data. WSUnet generates voxel probability maps with a Unet and then constructs an image-level prediction by global max-pooling, thereby facilitating image-level training. WSUnet's voxel-level predictions were compared to traditional model interpretation techniques (class activation mapping, integrated gradients and occlusion sensitivity) in CT data from three institutions (training/validation: n = 412; testing: n = 142). Methods were compared using voxel-level discrimination metrics and clinical value was assessed with a clinician preference survey on data from external institutions. RESULTS Despite the absence of voxel-level labels in training, WSUnet's voxel-level predictions localised tumours precisely in both validation (precision: 0.77, 95% CI: [0.76-0.80]; dice: 0.43, 95% CI: [0.39-0.46]), and external testing (precision: 0.78, 95% CI: [0.76-0.81]; dice: 0.33, 95% CI: [0.32-0.35]). WSUnet's voxel-level discrimination outperformed the best comparator in validation (area under precision recall curve (AUPR): 0.55, 95% CI: [0.49-0.56] vs. 0.23, 95% CI: [0.21-0.25]) and testing (AUPR: 0.40, 95% CI: [0.38-0.41] vs. 0.36, 95% CI: [0.34-0.37]). Clinicians preferred WSUnet predictions in most instances (clinician preference rate: 0.72 95% CI: [0.68-0.77]). CONCLUSION Weakly supervised segmentation is a viable approach by which explainable object detection models may be developed for medical imaging. 
CRITICAL RELEVANCE STATEMENT WSUnet learns to segment images at voxel level, training only with image-level labels. A Unet backbone first generates a voxel-level probability map and then extracts the maximum voxel prediction as the image-level prediction. Thus, training uses only image-level annotations, reducing human workload. WSUnet's voxel-level predictions provide a causally verifiable explanation for its image-level prediction, improving interpretability. KEY POINTS • Explainability and interpretability are essential for reliable medical image classifiers. • This study applies weakly supervised segmentation to generate explainable image classifiers. • The weakly supervised Unet inherently explains its image-level predictions at voxel level.
Affiliation(s)
- Robert O'Shea
- Department of Cancer Imaging, King's College London, London, UK.
- Carolyn Horst
- Department of Radiology, Guy's and St Thomas' NHS Foundation Trust, London, UK
- Daniel Hughes
- Department of Cancer Imaging, King's College London, London, UK
- James Cusack
- Department of Radiology, Liverpool University Hospitals NHS Foundation Trust, Liverpool, UK
- Sophia Tsoka
- Department of Natural and Mathematical Sciences, King's College London, London, UK
- Gary Cook
- King's College London & Guy's and St Thomas' PET Centre, Guy's and St Thomas' NHS Foundation Trust, London, UK
- Vicky Goh
- Department of Radiology, Guy's and St Thomas' NHS Foundation Trust, London, UK
107
Dong Z, Shen C, Tang J, Wang B, Liao H. Accuracy of Thoracic Ultrasonography for the Diagnosis of Pediatric Pneumonia: A Systematic Review and Meta-Analysis. Diagnostics (Basel) 2023; 13:3457. [PMID: 37998593] [PMCID: PMC10670251] [DOI: 10.3390/diagnostics13223457]
Abstract
As an emerging imaging technique, thoracic ultrasonography (TUS) is increasingly utilized in the diagnosis of lung diseases in children and newborns, especially in emergency and critical care settings. This systematic review aimed to estimate the diagnostic accuracy of TUS in childhood pneumonia. We searched Embase, PubMed, and Web of Science for studies published until July 2023 that used both TUS and chest radiography (CR) for the diagnosis of pediatric pneumonia. Two researchers independently screened the literature against the inclusion and exclusion criteria, collected the results, and assessed the risk of bias using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool. A total of 26 articles met our inclusion criteria and were included in the final analysis, comprising 22 prospective studies and four retrospective studies. StataMP 14.0 was used for the analysis. The overall pooled sensitivity was 0.95 [95% confidence interval (CI), 0.92-0.97] and the specificity was 0.94 [95% CI, 0.88-0.97], indicating good diagnostic accuracy. Our results indicate that TUS is an effective imaging modality for detecting pediatric pneumonia and a potential alternative to CR, as well as a follow-up tool, owing to its simplicity, versatility, low cost, and lack of radiation hazard.
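Per-study sensitivity and specificity come from each study's 2x2 contingency table. The sketch below uses hypothetical counts and a naive sample-size-weighted pool purely to illustrate where pooled estimates come from; the review's actual pooling would use formal meta-analytic models (e.g. bivariate random effects), which this does not reproduce.

```python
import numpy as np

def sens_spec(tp, fn, tn, fp):
    """Per-study sensitivity and specificity from a 2x2 table
    (index test: TUS; reference standard: chest radiography)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical (TP, FN, TN, FP) counts for three studies, for illustration only
studies = [(90, 5, 40, 4), (45, 3, 30, 2), (60, 2, 25, 3)]
pairs = np.array([sens_spec(*s) for s in studies])
weights = np.array([sum(s) for s in studies], dtype=float)  # naive size weighting
pooled_sens, pooled_spec = (pairs * weights[:, None]).sum(axis=0) / weights.sum()
```

Larger studies pull the pooled estimate toward their own values; formal models additionally account for between-study heterogeneity and the sensitivity-specificity trade-off.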
Affiliation(s)
- Zhenghao Dong
- Department of Thoracic Surgery, West China Hospital, Sichuan University, Chengdu 610041, China; (Z.D.); (C.S.); (B.W.)
- Cheng Shen
- Department of Thoracic Surgery, West China Hospital, Sichuan University, Chengdu 610041, China; (Z.D.); (C.S.); (B.W.)
- Jinhai Tang
- Department of Radiation Oncology, The First Affiliated Hospital of Dalian Medical University, Dalian 116011, China
- Beinuo Wang
- Department of Thoracic Surgery, West China Hospital, Sichuan University, Chengdu 610041, China; (Z.D.); (C.S.); (B.W.)
- Hu Liao
- Department of Thoracic Surgery, West China Hospital, Sichuan University, Chengdu 610041, China; (Z.D.); (C.S.); (B.W.)
108
Thunold HH, Riegler MA, Yazidi A, Hammer HL. A Deep Diagnostic Framework Using Explainable Artificial Intelligence and Clustering. Diagnostics (Basel) 2023; 13:3413. [PMID: 37998548] [PMCID: PMC10670034] [DOI: 10.3390/diagnostics13223413]
Abstract
An important part of diagnostics is to gain insight into the properties that characterize a disease. Machine learning has been used for this purpose, for instance, to identify biomarkers in genomics. However, when patient data are presented as images, identifying properties that characterize a disease becomes far more challenging. A common strategy involves extracting features from the images and analyzing their occurrence in healthy versus pathological images. A limitation of this approach is that the ability to gain new insights into the disease from the data is constrained by the information in the extracted features. Typically, these features are manually extracted by humans, which further limits the potential for new insights. To overcome these limitations, in this paper, we propose a novel framework that provides insights into diseases without relying on handcrafted features or human intervention. Our framework is based on deep learning (DL), explainable artificial intelligence (XAI), and clustering. DL is employed to learn deep patterns, enabling efficient differentiation between healthy and pathological images. XAI visualizes these patterns, and a novel "explanation-weighted" clustering technique is introduced to gain an overview of these patterns across multiple patients. We applied the method to images from the gastrointestinal tract. In addition to real healthy images and real images of polyps, some of the images had synthetic shapes added to represent pathologies other than polyps. The results show that our proposed method was capable of organizing the images based on the reasons they were diagnosed as pathological, achieving high cluster quality and a Rand index close to or equal to one.
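The Rand index used here to score cluster quality has a direct pairwise definition: the fraction of point pairs on which two clusterings agree. A small self-contained sketch (toy labels, not the paper's clustering output):

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Rand index: share of point pairs that are either grouped together in
    both clusterings or separated in both."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in pairs
    )
    return agree / len(pairs)

# Cluster IDs are arbitrary: relabelled but identical partitions score 1
print(rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # prints 1.0
```

A value of 1 therefore means the explanation-weighted clusters reproduce the ground-truth grouping of pathology types exactly.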
Affiliation(s)
- Håvard Horgen Thunold
- Department of Computer Science, Faculty of Technology, Art and Design, Oslo Metropolitan University, 0176 Oslo, Norway; (H.H.T.); (M.A.R.); (A.Y.)
- Michael A. Riegler
- Department of Computer Science, Faculty of Technology, Art and Design, Oslo Metropolitan University, 0176 Oslo, Norway; (H.H.T.); (M.A.R.); (A.Y.)
- Department of Holistic Systems, SimulaMet, 0176 Oslo, Norway
- Anis Yazidi
- Department of Computer Science, Faculty of Technology, Art and Design, Oslo Metropolitan University, 0176 Oslo, Norway; (H.H.T.); (M.A.R.); (A.Y.)
- Hugo L. Hammer
- Department of Computer Science, Faculty of Technology, Art and Design, Oslo Metropolitan University, 0176 Oslo, Norway; (H.H.T.); (M.A.R.); (A.Y.)
- Department of Holistic Systems, SimulaMet, 0176 Oslo, Norway
109
Li M, Jiang Y, Zhang Y, Zhu H. Medical image analysis using deep learning algorithms. Front Public Health 2023; 11:1273253. [PMID: 38026291] [PMCID: PMC10662291] [DOI: 10.3389/fpubh.2023.1273253]
Abstract
In the field of medical image analysis within deep learning (DL), the importance of employing advanced DL techniques cannot be overstated. DL has achieved impressive results in various areas, making it particularly noteworthy for medical image analysis in healthcare. The integration of DL with medical image analysis enables real-time analysis of vast and intricate datasets, yielding insights that significantly enhance healthcare outcomes and operational efficiency in the industry. This extensive review of the existing literature thoroughly examines the most recent DL approaches designed to address the difficulties faced in medical healthcare, focusing in particular on the use of DL algorithms in medical image analysis. Grouping the investigated papers into five categories by technique, we assessed them against a set of critical parameters. Through a systematic categorization of state-of-the-art DL techniques, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), Long Short-Term Memory (LSTM) models, and hybrid models, this study explores their underlying principles, advantages, limitations, methodologies, simulation environments, and datasets. Based on our results, Python was the programming language most frequently used to implement the proposed methods in the investigated papers. Notably, the majority of the scrutinized papers were published in 2021, underscoring the contemporaneous nature of the research. Moreover, this review highlights the forefront advancements in DL techniques and their practical applications within medical image analysis, while also addressing the challenges that hinder the widespread implementation of DL in image analysis within the medical healthcare domain. These insights serve as impetus for future studies aimed at the progressive advancement of image analysis in medical healthcare research. The evaluation metrics employed across the reviewed articles span a broad spectrum, including accuracy, sensitivity, specificity, F-score, robustness, computational complexity, and generalizability.
Affiliation(s)
- Mengfang Li
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Yuanyuan Jiang
- Department of Cardiovascular Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Yanzhou Zhang
- Department of Cardiovascular Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Haisheng Zhu
- Department of Cardiovascular Medicine, Wencheng People’s Hospital, Wencheng, China
110
Yousefpour Shahrivar R, Karami F, Karami E. Enhancing Fetal Anomaly Detection in Ultrasonography Images: A Review of Machine Learning-Based Approaches. Biomimetics (Basel) 2023; 8:519. [PMID: 37999160] [PMCID: PMC10669151] [DOI: 10.3390/biomimetics8070519]
Abstract
Fetal development is a critical phase in prenatal care, demanding the timely identification of anomalies in ultrasound images to safeguard the well-being of both the unborn child and the mother. Medical imaging has played a pivotal role in detecting fetal abnormalities and malformations. However, despite significant advances in ultrasound technology, the accurate identification of irregularities in prenatal images continues to pose considerable challenges, often requiring substantial time and expertise from medical professionals. In this review, we survey recent developments in machine learning (ML) methods applied to fetal ultrasound images. Specifically, we focus on a range of ML algorithms employed in the context of fetal ultrasound, encompassing tasks such as image classification, object recognition, and segmentation. We highlight how these innovative approaches can enhance ultrasound-based fetal anomaly detection and provide insights for future research and clinical implementation. Furthermore, we emphasize the need for further research in this domain to enable more effective ultrasound-based fetal anomaly detection.
Affiliation(s)
- Ramin Yousefpour Shahrivar
- Department of Biology, College of Convergent Sciences and Technologies, Science and Research Branch, Islamic Azad University, Tehran, 14515-775, Iran
- Fatemeh Karami
- Department of Medical Genetics, Applied Biophotonics Research Center, Science and Research Branch, Islamic Azad University, Tehran, 14515-775, Iran
- Ebrahim Karami
- Department of Engineering and Applied Sciences, Memorial University of Newfoundland, St. John’s, NL A1B 3X5, Canada
111
Raghu VK, Lu MT. Chest Radiographs: A New Form of Identification? Radiol Artif Intell 2023; 5:e230397. PMID: 38074776; PMCID: PMC10698601; DOI: 10.1148/ryai.230397.
Affiliation(s)
- Vineet K. Raghu
- Department of Radiology, Cardiovascular Imaging Research Center, Massachusetts General Hospital and Harvard Medical School, 165 Cambridge St, Ste 400, Boston, MA 02114
- Michael T. Lu
- Department of Radiology, Cardiovascular Imaging Research Center, Massachusetts General Hospital and Harvard Medical School, 165 Cambridge St, Ste 400, Boston, MA 02114
112
Ali S, Akhlaq F, Imran AS, Kastrati Z, Daudpota SM, Moosa M. The enlightening role of explainable artificial intelligence in medical & healthcare domains: A systematic literature review. Comput Biol Med 2023; 166:107555. PMID: 37806061; DOI: 10.1016/j.compbiomed.2023.107555.
Abstract
In domains such as medical and healthcare, the interpretability and explainability of machine learning and artificial intelligence systems are crucial for building trust in their results. Errors caused by these systems, such as incorrect diagnoses or treatments, can have severe and even life-threatening consequences for patients. To address this issue, Explainable Artificial Intelligence (XAI) has emerged as a popular area of research, focused on understanding the black-box nature of complex and hard-to-interpret machine learning models. While humans can increase the accuracy of these models through technical expertise, understanding how these models actually function during training can be difficult or even impossible. XAI algorithms such as Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) can provide explanations for these models, improving trust in their predictions by providing feature importance and increasing confidence in the systems. Many articles have been published that propose solutions to medical problems by using machine learning models alongside XAI algorithms to provide interpretability and explainability. In our study, we identified 454 articles published from 2018-2022 and analyzed 93 of them to explore the use of these techniques in the medical domain.
Affiliation(s)
- Subhan Ali
- Department of Computer Science, Norwegian University of Science & Technology (NTNU), Gjøvik, 2815, Norway
- Filza Akhlaq
- Department of Computer Science, Sukkur IBA University, Sukkur, 65200, Sindh, Pakistan
- Ali Shariq Imran
- Department of Computer Science, Norwegian University of Science & Technology (NTNU), Gjøvik, 2815, Norway
- Zenun Kastrati
- Department of Informatics, Linnaeus University, Växjö, 351 95, Sweden
- Muhammad Moosa
- Department of Computer Science, Norwegian University of Science & Technology (NTNU), Gjøvik, 2815, Norway
113
Jimenez-Mesa C, Arco JE, Martinez-Murcia FJ, Suckling J, Ramirez J, Gorriz JM. Applications of machine learning and deep learning in SPECT and PET imaging: General overview, challenges and future prospects. Pharmacol Res 2023; 197:106984. PMID: 37940064; DOI: 10.1016/j.phrs.2023.106984.
Abstract
The integration of positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging techniques with machine learning (ML) algorithms, including deep learning (DL) models, is a promising approach. This integration enhances the precision and efficiency of current diagnostic and treatment strategies while offering invaluable insights into disease mechanisms. In this comprehensive review, we delve into the transformative impact of ML and DL in this domain. Firstly, a brief analysis is provided of how these algorithms have evolved and which are the most widely applied in this domain. Their potential applications in nuclear imaging are then discussed, such as optimization of image acquisition or reconstruction, biomarker identification, multimodal fusion, and the development of diagnostic, prognostic, and disease progression evaluation systems. These applications are possible because ML and DL algorithms can analyse complex patterns and relationships within imaging data and extract quantitative, objective measures. Furthermore, we discuss the challenges in implementation, such as data standardization and limited sample sizes, and explore the clinical opportunities and future horizons, including data augmentation and explainable AI. Together, these factors are propelling the continuous advancement of more robust, transparent, and reliable systems.
Affiliation(s)
- Carmen Jimenez-Mesa
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain
- Juan E Arco
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain; Department of Communications Engineering, University of Malaga, 29010, Spain
- John Suckling
- Department of Psychiatry, University of Cambridge, Cambridge CB21TN, UK
- Javier Ramirez
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain
- Juan Manuel Gorriz
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain; Department of Psychiatry, University of Cambridge, Cambridge CB21TN, UK
114
Hong GS, Jang M, Kyung S, Cho K, Jeong J, Lee GY, Shin K, Kim KD, Ryu SM, Seo JB, Lee SM, Kim N. Overcoming the Challenges in the Development and Implementation of Artificial Intelligence in Radiology: A Comprehensive Review of Solutions Beyond Supervised Learning. Korean J Radiol 2023; 24:1061-1080. PMID: 37724586; PMCID: PMC10613849; DOI: 10.3348/kjr.2023.0393.
Abstract
Artificial intelligence (AI) in radiology is a rapidly developing field with several prospective clinical studies demonstrating its benefits in clinical practice. In 2022, the Korean Society of Radiology held a forum to discuss the challenges and drawbacks in AI development and implementation. Various barriers hinder the successful application and widespread adoption of AI in radiology, such as limited annotated data, data privacy and security, data heterogeneity, imbalanced data, model interpretability, overfitting, and integration with clinical workflows. In this review, some of the various possible solutions to these challenges are presented and discussed; these include training with longitudinal and multimodal datasets, dense training with multitask learning and multimodal learning, self-supervised contrastive learning, various image modifications and syntheses using generative models, explainable AI, causal learning, federated learning with large data models, and digital twins.
Affiliation(s)
- Gil-Sun Hong
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Miso Jang
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sunggu Kyung
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Kyungjin Cho
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Jiheon Jeong
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Grace Yoojin Lee
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Keewon Shin
- Laboratory for Biosignal Analysis and Perioperative Outcome Research, Biomedical Engineering Center, Asan Institute of Lifesciences, Asan Medical Center, Seoul, Republic of Korea
- Ki Duk Kim
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Seung Min Ryu
- Department of Orthopedic Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Joon Beom Seo
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sang Min Lee
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Namkug Kim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
115
Xu J, Xu HL, Cao YN, Huang Y, Gao S, Wu QJ, Gong TT. The performance of deep learning on thyroid nodule imaging predicts thyroid cancer: A systematic review and meta-analysis of epidemiological studies with independent external test sets. Diabetes Metab Syndr 2023; 17:102891. PMID: 37907027; DOI: 10.1016/j.dsx.2023.102891.
Abstract
BACKGROUND AND AIMS: Based on the recent available evidence, it remains controversial whether deep learning (DL) systems add accuracy to thyroid nodule imaging classification. We conducted this study to analyze the current evidence on DL in thyroid nodule imaging diagnosis in both internal and external test sets.
METHODS: PubMed, IEEE, Embase, Web of Science, and the Cochrane Library were searched through the end of December 2022. We included primary epidemiological studies using externally validated DL techniques in image-based thyroid nodule appraisal. This systematic review was registered on PROSPERO (CRD42022362892).
RESULTS: We evaluated evidence from 17 primary epidemiological studies using externally validated DL techniques in image-based thyroid nodule appraisal. Fourteen studies were deemed eligible for meta-analysis. The pooled sensitivity, specificity, and area under the curve (AUC) of these DL algorithms were 0.89 (95% confidence interval 0.87-0.90), 0.84 (0.82-0.86), and 0.93 (0.91-0.95), respectively. For the internal validation set, the pooled sensitivity, specificity, and AUC were 0.91 (0.89-0.93), 0.88 (0.85-0.91), and 0.96 (0.93-0.97), respectively. In the external validation set, the pooled sensitivity, specificity, and AUC were 0.87 (0.85-0.89), 0.81 (0.77-0.83), and 0.91 (0.88-0.93), respectively. Notably, DL algorithms retained strong diagnostic validity in subgroup analyses.
CONCLUSIONS: Current evidence suggests that DL-based imaging achieves diagnostic performance comparable to that of clinicians for differentiating thyroid nodules in both internal and external test sets.
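As a purely illustrative aside on how pooled estimates of this kind are formed, the sketch below computes fixed-effect pooled sensitivity and specificity by summing per-study 2x2 counts. The study counts are invented for illustration and are not data from this review; a meta-analysis like the one above would typically fit a bivariate random-effects model rather than this naive pooling.

```python
# Hedged sketch: naive fixed-effect pooling of hypothetical per-study counts.

def pooled_sensitivity(studies):
    """studies: list of (TP, FN) pairs; returns TP-weighted pooled sensitivity."""
    tp = sum(s[0] for s in studies)
    fn = sum(s[1] for s in studies)
    return tp / (tp + fn)

def pooled_specificity(studies):
    """studies: list of (TN, FP) pairs; returns pooled specificity."""
    tn = sum(s[0] for s in studies)
    fp = sum(s[1] for s in studies)
    return tn / (tn + fp)

if __name__ == "__main__":
    # Three hypothetical studies, (TP, FN) and (TN, FP) respectively.
    sens_counts = [(90, 10), (170, 30), (45, 5)]
    spec_counts = [(80, 20), (160, 40), (40, 10)]
    print(round(pooled_sensitivity(sens_counts), 3))  # 305/350
    print(round(pooled_specificity(spec_counts), 3))  # 280/350
```

Note that this simple pooling weights studies by size only; random-effects models additionally account for between-study heterogeneity.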
Affiliation(s)
- Jin Xu
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
- He-Li Xu
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Yi-Ning Cao
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China; Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Ying Huang
- Department of Ultrasound, Shengjing Hospital of China Medical University, Shenyang, China
- Song Gao
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
- Qi-Jun Wu
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China; Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China; Key Laboratory of Reproductive and Genetic Medicine (China Medical University), National Health Commission, Shenyang, China
- Ting-Ting Gong
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
116
Carmichael J, Abdi S, Balaskas K, Costanza E, Blandford A. The effectiveness of interventions for optometric referrals into the hospital eye service: A review. Ophthalmic Physiol Opt 2023; 43:1510-1523. PMID: 37632154; PMCID: PMC10947293; DOI: 10.1111/opo.13219.
Abstract
PURPOSE: Ophthalmic services are currently under considerable stress; in the UK, ophthalmology departments have the highest number of outpatient appointments of any department within the National Health Service. Recognising the need for intervention, several approaches have been trialled to tackle the high number of false-positive referrals initiated in primary care and seen face to face within the hospital eye service (HES). In this mixed-methods narrative synthesis, we explored interventions in terms of their clinical impact, cost and acceptability to determine whether they are clinically effective, safe and sustainable. A systematic literature search of PubMed, MEDLINE and CINAHL, guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), was used to identify appropriate studies published between December 2001 and December 2022.
RECENT FINDINGS: A total of 55 studies were reviewed. Four main interventions were assessed, with two studies covering more than one type: training and guidelines (n = 8), referral filtering schemes (n = 32), asynchronous teleophthalmology (n = 13) and synchronous teleophthalmology (n = 5). All four approaches demonstrated effectiveness in reducing false-positive referrals to the HES. There was sufficient evidence for stakeholder acceptance and cost-effectiveness of referral filtering schemes; however, the cost comparisons relied on assumptions. Referral filtering and asynchronous teleophthalmology reported moderate rates of false-negative cases (2%-20%), defined as discharged patients who required HES monitoring.
SUMMARY: The effectiveness of interventions varied depending on the outcome and stakeholder considered. More studies are required to explore stakeholder opinions on all interventions. To maximise clinical safety, it may be appropriate to combine more than one approach, such as referral filtering schemes with virtual review of discharged patients to assess the rate of false-negative cases. The implementation of a successful intervention is more complex than a 'one-size-fits-all' approach, and there is room for newer types of interventions, such as artificial intelligence clinical support systems, within the referral pathway.
Affiliation(s)
- Josie Carmichael
- University College London Interaction Centre (UCLIC), UCL, London, UK
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Sarah Abdi
- University College London Interaction Centre (UCLIC), UCL, London, UK
- Konstantinos Balaskas
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Enrico Costanza
- University College London Interaction Centre (UCLIC), UCL, London, UK
- Ann Blandford
- University College London Interaction Centre (UCLIC), UCL, London, UK
117
Cui R, Wang L, Lin L, Li J, Lu R, Liu S, Liu B, Gu Y, Zhang H, Shang Q, Chen L, Tian D. Deep Learning in Barrett's Esophagus Diagnosis: Current Status and Future Directions. Bioengineering (Basel) 2023; 10:1239. PMID: 38002363; PMCID: PMC10669008; DOI: 10.3390/bioengineering10111239.
Abstract
Barrett's esophagus (BE) represents a pre-malignant condition characterized by abnormal cellular proliferation in the distal esophagus. A timely and accurate diagnosis of BE is imperative to prevent its progression to esophageal adenocarcinoma, a malignancy associated with a significantly reduced survival rate. In this digital age, deep learning (DL) has emerged as a powerful tool for medical image analysis and diagnostic applications, showcasing vast potential across various medical disciplines. In this comprehensive review, we meticulously assess 33 primary studies employing varied DL techniques, predominantly featuring convolutional neural networks (CNNs), for the diagnosis and understanding of BE. Our primary focus revolves around evaluating the current applications of DL in BE diagnosis, encompassing tasks such as image segmentation and classification, as well as their potential impact and implications in real-world clinical settings. While the applications of DL in BE diagnosis exhibit promising results, they are not without challenges, such as dataset issues and the "black box" nature of models. We discuss these challenges in the concluding section. Essentially, while DL holds tremendous potential to revolutionize BE diagnosis, addressing these challenges is paramount to harnessing its full capacity and ensuring its widespread application in clinical practice.
Affiliation(s)
- Ruichen Cui
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Lei Wang
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- West China School of Nursing, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Lin Lin
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- West China School of Nursing, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Jie Li
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- West China School of Nursing, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Runda Lu
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Shixiang Liu
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Bowei Liu
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Yimin Gu
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Hanlu Zhang
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Qixin Shang
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Longqi Chen
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
- Dong Tian
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
118
Salvi M, Molinari F, Ciccarelli M, Testi R, Taraglio S, Imperiale D. Quantitative analysis of prion disease using an AI-powered digital pathology framework. Sci Rep 2023; 13:17759. PMID: 37853094; PMCID: PMC10584956; DOI: 10.1038/s41598-023-44782-4.
Abstract
Prion disease is a fatal neurodegenerative disorder characterized by accumulation of an abnormal prion protein (PrPSc) in the central nervous system. To identify PrPSc aggregates for diagnostic purposes, pathologists use immunohistochemical staining of prion protein antibodies on tissue samples. With digital pathology, artificial intelligence can now analyze stained slides. In this study, we developed an automated pipeline for the identification of PrPSc aggregates in tissue samples from the cerebellar and occipital cortex. To the best of our knowledge, this is the first framework to evaluate PrPSc deposition in digital images. We used two strategies: a deep learning segmentation approach using a vision transformer, and a machine learning classification approach with traditional classifiers. Our method was developed and tested on 64 whole slide images (WSIs) from 41 patients definitively diagnosed with prion disease. The results of our study demonstrated that our proposed framework can accurately classify WSIs from a blind test set. Moreover, it can quantify PrPSc distribution and localization throughout the brain. This approach could potentially be extended to evaluate protein expression in other neurodegenerative diseases such as Alzheimer's and Parkinson's. Overall, our pipeline highlights the potential of AI-assisted pathology to provide valuable insights, leading to improved diagnostic accuracy and efficiency.
Affiliation(s)
- Massimo Salvi
- Biolab, PoliTo(BIO)Med Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129, Turin, Italy
- Filippo Molinari
- Biolab, PoliTo(BIO)Med Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129, Turin, Italy
- Mario Ciccarelli
- Biolab, PoliTo(BIO)Med Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129, Turin, Italy
- Roberto Testi
- SC Medicina Legale, ASL Città di Torino, Turin, Italy
- Daniele Imperiale
- SC Neurologia Ospedale Maria Vittoria & Centro Diagnosi Osservazione Malattie Prioniche, ASL Città di Torino, Turin, Italy
119
Thirunavukarasu AJ, Elangovan K, Gutierrez L, Li Y, Tan I, Keane PA, Korot E, Ting DSW. Democratizing Artificial Intelligence Imaging Analysis With Automated Machine Learning: Tutorial. J Med Internet Res 2023; 25:e49949. PMID: 37824185; PMCID: PMC10603560; DOI: 10.2196/49949.
Abstract
Deep learning-based clinical imaging analysis underlies diagnostic artificial intelligence (AI) models, which can match or even exceed the performance of clinical experts, having the potential to revolutionize clinical practice. A wide variety of automated machine learning (autoML) platforms lower the technical barrier to entry to deep learning, extending AI capabilities to clinicians with limited technical expertise, and even autonomous foundation models such as multimodal large language models. Here, we provide a technical overview of autoML with descriptions of how autoML may be applied in education, research, and clinical practice. Each stage of the process of conducting an autoML project is outlined, with an emphasis on ethical and technical best practices. Specifically, data acquisition, data partitioning, model training, model validation, analysis, and model deployment are considered. The strengths and limitations of available code-free, code-minimal, and code-intensive autoML platforms are considered. AutoML has great potential to democratize AI in medicine, improving AI literacy by enabling "hands-on" education. AutoML may serve as a useful adjunct in research by facilitating rapid testing and benchmarking before significant computational resources are committed. AutoML may also be applied in clinical contexts, provided regulatory requirements are met. The abstraction by autoML of arduous aspects of AI engineering promotes prioritization of data set curation, supporting the transition from conventional model-driven approaches to data-centric development. To fulfill its potential, clinicians must be educated on how to apply these technologies ethically, rigorously, and effectively; this tutorial represents a comprehensive summary of relevant considerations.
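The project stages the tutorial enumerates (data acquisition, data partitioning, model training, model validation) can be sketched end to end even without an autoML platform. In the library-free toy below, the nearest-centroid "model" and the synthetic two-cluster data are our own stand-ins, not part of any platform or method described in the tutorial; the point is only to make the partition/train/validate flow concrete.

```python
# Hedged sketch of a partition -> train -> validate pipeline on toy data.
import random

def partition(data, labels, test_frac=0.25, seed=0):
    """Shuffle deterministically and split into train and held-out test sets."""
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    tr, te = idx[:cut], idx[cut:]
    return ([data[i] for i in tr], [labels[i] for i in tr],
            [data[i] for i in te], [labels[i] for i in te])

def train_centroids(xs, ys):
    """'Training': compute one mean feature vector (centroid) per class."""
    sums, counts = {}, {}
    for x, y in zip(xs, ys):
        acc = sums.setdefault(y, [0.0] * len(x))
        for j, v in enumerate(x):
            acc[j] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(centroids, x):
    """Assign the class of the nearest centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda y: sum((a - b) ** 2 for a, b in zip(centroids[y], x)))

def accuracy(centroids, xs, ys):
    """Held-out validation: fraction of correct predictions."""
    return sum(predict(centroids, x) == y for x, y in zip(xs, ys)) / len(ys)

if __name__ == "__main__":
    data = [(0, 0), (1, 0), (0, 1), (1, 1),
            (10, 10), (11, 10), (10, 11), (11, 11)]
    labels = [0, 0, 0, 0, 1, 1, 1, 1]
    xtr, ytr, xte, yte = partition(data, labels)
    model = train_centroids(xtr, ytr)
    print(accuracy(model, xte, yte))  # the clusters are well separated
```

A real project would swap the toy model for a trained network and add the deployment step, but the data-partitioning discipline (fit only on the training fold, report only on the held-out fold) is exactly the practice the tutorial emphasizes.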
Affiliation(s)
- Arun James Thirunavukarasu
- University of Cambridge School of Clinical Medicine, Cambridge, United Kingdom
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore, Singapore
- Kabilan Elangovan
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore, Singapore
- Laura Gutierrez
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore, Singapore
- Yong Li
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore, Singapore
- Iris Tan
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore, Singapore
- Pearse A Keane
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Edward Korot
- Byers Eye Institute, Stanford University, Palo Alto, CA, United States
- Retina Specialists of Michigan, Grand Rapids, MI, United States
- Daniel Shu Wei Ting
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore, Singapore
- Byers Eye Institute, Stanford University, Palo Alto, CA, United States
- Singapore National Eye Centre, Singapore, Singapore
120
Li Z, Wang B, Liang H, Li Y, Zhang Z, Han L. A three-stage eccDNA based molecular profiling significantly improves the identification, prognosis assessment and recurrence prediction accuracy in patients with glioma. Cancer Lett 2023; 574:216369. PMID: 37640198; DOI: 10.1016/j.canlet.2023.216369.
Abstract
Glioblastoma (GBM) progression is influenced by intratumoral heterogeneity. Emerging evidence has emphasized the pivotal role of extrachromosomal circular DNA (eccDNA) in accelerating tumor heterogeneity, particularly in GBM. However, the eccDNA landscape of GBM has not yet been elucidated. In this study, we first identified the eccDNA profiles in GBM and adjacent tissues using circle- and RNA-sequencing data from the same samples. A three-stage model was established based on eccDNA-carried genes that exhibited consistent upregulation and downregulation trends at the mRNA level. Combinations of machine learning algorithms and stacked ensemble models were used to improve the performance and robustness of the three-stage model. In stage 1, a total of 113 combinations of machine learning algorithms were constructed and validated in multiple external cohorts to accurately distinguish between low-grade glioma (LGG) and GBM in patients with glioma. The model with the highest area under the curve (AUC) across all cohorts was selected for interpretability analysis. In stage 2, a total of 101 combinations of machine learning algorithms were established and validated for prognostic prediction in patients with glioma. This prognostic model performed well in multiple glioma cohorts. Recurrent GBM is invariably associated with aggressive and refractory disease. Therefore, accurate prediction of recurrence risk is crucial for developing individualized treatment strategies, monitoring patient status, and improving clinical management. In stage 3, a large-scale GBM cohort (including primary and recurrent GBM samples) was used to fit the GBM recurrence prediction model. Multiple machine learning and stacked ensemble models were fitted to select the model with the best performance. Finally, a web tool was developed to facilitate the clinical application of the three-stage model.
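The stacked-ensemble idea that recurs in each stage above can be illustrated with a minimal sketch: base models make predictions, and a meta-learner is then fit on those predictions against a validation fold. The threshold-rule base "models", the brute-force weight search, and the toy data below are hypothetical simplifications of ours; the paper's actual combinations of machine learning algorithms are far richer.

```python
# Hedged toy illustration of stacking: fit meta-level weights over base models.
from itertools import product

def base_rule(threshold, feature_idx):
    """A trivial base 'model': threshold a single feature."""
    return lambda x: 1 if x[feature_idx] > threshold else 0

def stack_predict(base_models, weights, x, cutoff=0.5):
    """Meta-level prediction: weighted vote over base-model outputs."""
    score = sum(w * m(x) for w, m in zip(weights, base_models))
    return 1 if score >= cutoff * sum(weights) else 0

def fit_meta(base_models, xs, ys, grid=(0.0, 0.5, 1.0)):
    """'Meta-learner': exhaustive search for the weight vector that
    maximizes accuracy on a validation fold."""
    best_w, best_acc = None, -1.0
    for w in product(grid, repeat=len(base_models)):
        if sum(w) == 0:
            continue  # degenerate weighting, skip
        acc = sum(stack_predict(base_models, w, x) == y
                  for x, y in zip(xs, ys)) / len(ys)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

if __name__ == "__main__":
    m1 = base_rule(0.5, 0)  # uses an uninformative feature
    m2 = base_rule(0.5, 1)  # uses the informative feature
    xs = [(0, 0), (0, 1), (1, 0), (1, 1)]
    ys = [0, 1, 0, 1]  # label depends only on feature 1
    weights, acc = fit_meta([m1, m2], xs, ys)
    print(weights, acc)  # the search should down-weight m1
```

In practice the meta-learner is itself a trained model (e.g. a regularized linear model over out-of-fold base predictions), but the division of labor is the same: base learners propose, the meta-level arbitrates.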
Affiliation(s)
- Zesheng Li
- Tianjin Neurological Institute, Key Laboratory of Post-Neuro Injury, Neuro-repair and Regeneration in Central Nervous System, Ministry of Education and Tianjin City, Tianjin Medical University General Hospital, Tianjin, 300052, China
- Bo Wang
- Tianjin Neurological Institute, Key Laboratory of Post-Neuro Injury, Neuro-repair and Regeneration in Central Nervous System, Ministry of Education and Tianjin City, Tianjin Medical University General Hospital, Tianjin, 300052, China
- Hao Liang
- Tianjin Neurological Institute, Key Laboratory of Post-Neuro Injury, Neuro-repair and Regeneration in Central Nervous System, Ministry of Education and Tianjin City, Tianjin Medical University General Hospital, Tianjin, 300052, China
- Ying Li
- Tianjin Neurological Institute, Key Laboratory of Post-Neuro Injury, Neuro-repair and Regeneration in Central Nervous System, Ministry of Education and Tianjin City, Tianjin Medical University General Hospital, Tianjin, 300052, China
- Zhenyu Zhang
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, 480082, China
- Lei Han
- Tianjin Neurological Institute, Key Laboratory of Post-Neuro Injury, Neuro-repair and Regeneration in Central Nervous System, Ministry of Education and Tianjin City, Tianjin Medical University General Hospital, Tianjin, 300052, China
121
van Breugel M, Fehrmann RSN, Bügel M, Rezwan FI, Holloway JW, Nawijn MC, Fontanella S, Custovic A, Koppelman GH. Current state and prospects of artificial intelligence in allergy. Allergy 2023; 78:2623-2643. [PMID: 37584170] [DOI: 10.1111/all.15849]
Abstract
The field of medicine is witnessing an exponential growth of interest in artificial intelligence (AI), which enables new research questions and the analysis of larger and new types of data. Nevertheless, applications that go beyond proof of concepts and deliver clinical value remain rare, especially in the field of allergy. This narrative review provides a fundamental understanding of the core concepts of AI and critically discusses its limitations and open challenges, such as data availability and bias, along with potential directions to surmount them. We provide a conceptual framework to structure AI applications within this field and discuss forefront case examples. Most of these applications of AI and machine learning in allergy concern supervised learning and unsupervised clustering, with a strong emphasis on diagnosis and subtyping. A perspective is shared on guidelines for good AI practice to guide readers in applying it effectively and safely, along with prospects of field advancement and initiatives to increase clinical impact. We anticipate that AI can further deepen our knowledge of disease mechanisms and contribute to precision medicine in allergy.
Affiliation(s)
- Merlijn van Breugel: Department of Pediatric Pulmonology and Pediatric Allergology, Beatrix Children's Hospital, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands; Groningen Research Institute for Asthma and COPD (GRIAC), University Medical Center Groningen, University of Groningen, Groningen, the Netherlands; MIcompany, Amsterdam, the Netherlands
- Rudolf S N Fehrmann: Department of Medical Oncology, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Faisal I Rezwan: Human Development and Health, Faculty of Medicine, University of Southampton, Southampton, UK; Department of Computer Science, Aberystwyth University, Aberystwyth, UK
- John W Holloway: Human Development and Health, Faculty of Medicine, University of Southampton, Southampton, UK; National Institute for Health and Care Research Southampton Biomedical Research Centre, University Hospitals Southampton NHS Foundation Trust, Southampton, UK
- Martijn C Nawijn: Groningen Research Institute for Asthma and COPD (GRIAC), University Medical Center Groningen, University of Groningen, Groningen, the Netherlands; Department of Pathology and Medical Biology, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Sara Fontanella: National Heart and Lung Institute, Imperial College London, London, UK; National Institute for Health and Care Research Imperial Biomedical Research Centre (BRC), London, UK
- Adnan Custovic: National Heart and Lung Institute, Imperial College London, London, UK; National Institute for Health and Care Research Imperial Biomedical Research Centre (BRC), London, UK
- Gerard H Koppelman: Department of Pediatric Pulmonology and Pediatric Allergology, Beatrix Children's Hospital, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands; Groningen Research Institute for Asthma and COPD (GRIAC), University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
122
Sun Z, Lin M, Zhu Q, Xie Q, Wang F, Lu Z, Peng Y. A scoping review on multimodal deep learning in biomedical images and texts. J Biomed Inform 2023; 146:104482. [PMID: 37652343] [PMCID: PMC10591890] [DOI: 10.1016/j.jbi.2023.104482]
Abstract
OBJECTIVE Computer-assisted diagnostic and prognostic systems of the future should be capable of simultaneously processing multimodal data. Multimodal deep learning (MDL), which integrates multiple sources of data such as images and text, has the potential to revolutionize the analysis and interpretation of biomedical data. However, it has only recently attracted researchers' attention. To this end, there is a critical need to conduct a systematic review of this topic, identify the limitations of current work, and explore future directions. METHODS In this scoping review, we aim to provide a comprehensive overview of the current state of the field and identify key concepts, types of studies, and research gaps, focusing on joint learning over biomedical images and texts, the two most commonly available data types in MDL research. RESULTS This study reviewed the current uses of multimodal deep learning on five tasks: (1) report generation, (2) visual question answering, (3) cross-modal retrieval, (4) computer-aided diagnosis, and (5) semantic segmentation. CONCLUSIONS Our results highlight the diverse applications and potential of MDL and suggest directions for future research in the field. We hope our review will facilitate collaboration between the natural language processing (NLP) and medical imaging communities and support the next generation of decision-making and computer-assisted diagnostic system development.
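A common baseline across the image-text systems such reviews cover is late fusion: each modality is encoded separately and the embeddings are concatenated before a shared prediction head. A minimal sketch, with illustrative feature values and weights rather than any reviewed system's parameters:

```python
import numpy as np

def late_fusion_predict(image_feat, text_feat, w, b=0.0):
    """Concatenate image and text embeddings, then apply a logistic head."""
    fused = np.concatenate([image_feat, text_feat])
    return 1.0 / (1.0 + np.exp(-(fused @ w + b)))

img = np.array([0.2, -0.1])  # stands in for pooled image-encoder features
txt = np.array([0.5, 0.3])   # stands in for pooled text-encoder features
w = np.array([1.0, -1.0, 0.5, 0.5])
p = late_fusion_predict(img, txt, w)  # probability in (0, 1)
```

Real MDL systems replace the fixed vectors with learned encoders and often use attention-based fusion instead of plain concatenation, but the joint-representation idea is the same.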
Affiliation(s)
- Zhaoyi Sun: Population Health Sciences, Weill Cornell Medicine, New York, NY 10016, USA
- Mingquan Lin: Population Health Sciences, Weill Cornell Medicine, New York, NY 10016, USA
- Qingqing Zhu: National Center for Biotechnology Information (NCBI), National Library of Medicine (NLM), National Institutes of Health (NIH), Bethesda, MD 20894, USA
- Qianqian Xie: Population Health Sciences, Weill Cornell Medicine, New York, NY 10016, USA
- Fei Wang: Population Health Sciences, Weill Cornell Medicine, New York, NY 10016, USA
- Zhiyong Lu: National Center for Biotechnology Information (NCBI), National Library of Medicine (NLM), National Institutes of Health (NIH), Bethesda, MD 20894, USA
- Yifan Peng: Population Health Sciences, Weill Cornell Medicine, New York, NY 10016, USA
123
Tian C, Zhu H, Meng X, Ma Z, Yuan S, Li W. Research for accurate auxiliary diagnosis of lung cancer based on intracellular fluorescent fingerprint information. J Biophotonics 2023; 16:e202300174. [PMID: 37350031] [DOI: 10.1002/jbio.202300174]
Abstract
Distinctions in the pathological types and genetic subtypes of lung cancer directly affect treatment choices and clinical prognosis in clinical practice. This study used pathological histological sections of surgically removed or biopsied tumor tissue from 36 patients. From this small sample, millions of spectral data points were extracted to investigate the feasibility of employing intracellular fluorescent fingerprint information to diagnose the pathological types and mutational status of lung cancer. The intracellular fluorescent fingerprint information revealed the EGFR gene mutation characteristics in lung cancer, and the area under the curve (AUC) value for the optimal model was 0.98. For the classification of lung cancer pathological types, the macro-average AUC value for the ensemble-learning model was 0.97. Our research contributes new ideas for the pathological diagnosis of lung cancer and offers a quick, easy, and accurate auxiliary diagnostic approach.
Affiliation(s)
- Chongxuan Tian: Department of Biomedical Engineering Institute, School of Control Science and Engineering, Shandong University, Jinan, Shandong, China
- He Zhu: Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Shandong Cancer Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, China
- Xiangwei Meng: Department of Biomedical Engineering Institute, School of Control Science and Engineering, Shandong University, Jinan, Shandong, China
- Zhixiang Ma: Department of Biomedical Engineering Institute, School of Control Science and Engineering, Shandong University, Jinan, Shandong, China
- Shuanghu Yuan: Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Shandong Cancer Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, China; Department of Radiation Oncology, Shandong Cancer Hospital Affiliated to Shandong University, Jinan, Shandong, China; Department of Radiation Oncology, The Affiliated Cancer Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Wei Li: Department of Biomedical Engineering Institute, School of Control Science and Engineering, Shandong University, Jinan, Shandong, China
124
Valente J, António J, Mora C, Jardim S. Developments in Image Processing Using Deep Learning and Reinforcement Learning. J Imaging 2023; 9:207. [PMID: 37888314] [PMCID: PMC10607786] [DOI: 10.3390/jimaging9100207]
Abstract
The growth in the volume of data generated, consumed, and stored, which is estimated to exceed 180 zettabytes in 2025, represents a major challenge both for organizations and for society in general. In addition to being larger, datasets are increasingly complex, bringing new theoretical and computational challenges. Alongside this evolution, data science tools have exploded in popularity over the past two decades due to their myriad of applications when dealing with complex data, their high accuracy, flexible customization, and excellent adaptability. When it comes to images, data analysis presents additional challenges because as the quality of an image increases, which is desirable, so does the volume of data to be processed. Although classic machine learning (ML) techniques are still widely used in different research fields and industries, there has been great interest from the scientific community in the development of new artificial intelligence (AI) techniques. The resurgence of neural networks has boosted remarkable advances in areas such as the understanding and processing of images. In this study, we conducted a comprehensive survey regarding advances in AI design and the optimization solutions proposed to deal with image processing challenges. Despite the good results that have been achieved, there are still many challenges to face in this field of study. In this work, we discuss the main and more recent improvements, applications, and developments when targeting image processing applications, and we propose future research directions in this field of constant and fast evolution.
Affiliation(s)
- Jorge Valente: Techframe-Information Systems, SA, 2785-338 São Domingos de Rana, Portugal
- João António: Techframe-Information Systems, SA, 2785-338 São Domingos de Rana, Portugal
- Carlos Mora: Smart Cities Research Center, Polytechnic Institute of Tomar, 2300-313 Tomar, Portugal
- Sandra Jardim: Smart Cities Research Center, Polytechnic Institute of Tomar, 2300-313 Tomar, Portugal
125
Qin X, Ran T, Chen Y, Zhang Y, Wang D, Zhou C, Zou D. Artificial Intelligence in Endoscopic Ultrasonography-Guided Fine-Needle Aspiration/Biopsy (EUS-FNA/B) for Solid Pancreatic Lesions: Opportunities and Challenges. Diagnostics (Basel) 2023; 13:3054. [PMID: 37835797] [PMCID: PMC10572518] [DOI: 10.3390/diagnostics13193054]
Abstract
Solid pancreatic lesions (SPLs) encompass a variety of benign and malignant diseases and accurate diagnosis is crucial for guiding appropriate treatment decisions. Endoscopic ultrasonography-guided fine-needle aspiration/biopsy (EUS-FNA/B) serves as a front-line diagnostic tool for pancreatic mass lesions and is widely used in clinical practice. Artificial intelligence (AI) is a mathematical technique that automates the learning and recognition of data patterns. Its strong self-learning ability and unbiased nature have led to its gradual adoption in the medical field. In this paper, we describe the fundamentals of AI and provide a summary of reports on AI in EUS-FNA/B to help endoscopists understand and realize its potential in improving pathological diagnosis and guiding targeted EUS-FNA/B. However, AI models have limitations and shortages that need to be addressed before clinical use. Furthermore, as most AI studies are retrospective, large-scale prospective clinical trials are necessary to evaluate their clinical usefulness accurately. Although AI in EUS-FNA/B is still in its infancy, the constant input of clinical data and the advancements in computer technology are expected to make computer-aided diagnosis and treatment more feasible.
Affiliation(s)
- Chunhua Zhou: Department of Gastroenterology, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200025, China
- Duowu Zou: Department of Gastroenterology, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200025, China
126
Miranda F, Choudhari V, Barone S, Anchling L, Hutin N, Gurgel M, Al Turkestani N, Yatabe M, Bianchi J, Aliaga-Del Castillo A, Zupelari-Gonçalves P, Edwards S, Garib D, Cevidanes L, Prieto J. Interpretable artificial intelligence for classification of alveolar bone defect in patients with cleft lip and palate. Sci Rep 2023; 13:15861. [PMID: 37740091] [PMCID: PMC10516946] [DOI: 10.1038/s41598-023-43125-7]
Abstract
Cleft lip and/or palate (CLP) is the most common congenital craniofacial anomaly and requires bone grafting of the alveolar cleft. This study aimed to develop a novel classification algorithm to assess the severity of alveolar bone defects in patients with CLP using three-dimensional (3D) surface models, and to demonstrate, through an interpretable artificial intelligence (AI)-based algorithm, the decisions made by the classifier. Cone-beam computed tomography scans of 194 patients with CLP were used to train and test automatic classification of alveolar bone defect severity. The shape, height, and width of the alveolar bone defect were assessed in automatically segmented maxillary 3D surface models to determine the ground-truth severity index. The novel classifier algorithm renders the 3D surface models from different viewpoints and captures 2D image snapshots that are fed into a 2D Convolutional Neural Network. An interpretable AI algorithm was developed that uses features from each view, aggregated via attention layers, to explain the classification. The precision, recall, and F1-score were 0.823, 0.816, and 0.817, respectively, with agreement ranging from 97.4 to 100% on the severity index within one group difference. The new classifier and interpretable AI algorithm achieved satisfactory accuracy in classifying the severity of alveolar bone defect morphology from 3D surface models of patients with CLP, while graphically displaying the features that informed the deep learning model's classification decision.
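The aggregation step, pooling per-view snapshot features with attention weights, can be sketched as scaled dot-product attention. The shapes and the query vector below are illustrative assumptions, not the paper's exact layer:

```python
import numpy as np

def aggregate_views(view_feats, query):
    """Attention-weighted pooling of per-view feature vectors.

    view_feats: (n_views, d) array, one feature vector per 2D snapshot.
    query: (d,) query vector (learned in a real network).
    Returns the pooled (d,) feature and the per-view attention weights.
    """
    scores = view_feats @ query / np.sqrt(view_feats.shape[1])
    weights = np.exp(scores - scores.max())  # stable softmax
    weights /= weights.sum()
    return weights @ view_feats, weights

feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 3 views, d = 2
pooled, w = aggregate_views(feats, np.array([1.0, 0.0]))
```

Because each view keeps an explicit weight, inspecting `w` shows which viewpoints drove the decision, which is the basis for the interpretability the abstract describes.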
Affiliation(s)
- Felicia Miranda: Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA; Department of Orthodontics, Bauru Dental School, University of São Paulo, Bauru, SP, Brazil
- Vishakha Choudhari: Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Selene Barone: Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA; Department of Health Science, School of Dentistry, Magna Graecia University of Catanzaro, Catanzaro, Italy
- Luc Anchling: Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA; CPE Lyon, Lyon, France
- Nathan Hutin: Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA; CPE Lyon, Lyon, France
- Marcela Gurgel: Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Najla Al Turkestani: Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA; Department of Restorative and Aesthetic Dentistry, Faculty of Dentistry, King Abdulaziz University, Jeddah, Saudi Arabia
- Marilia Yatabe: Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Jonas Bianchi: Department of Orthodontics, University of the Pacific, Arthur A. Dugoni School of Dentistry, San Francisco, CA, USA
- Aron Aliaga-Del Castillo: Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Paulo Zupelari-Gonçalves: Department of Oral and Maxillofacial Surgery, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Sean Edwards: Department of Oral and Maxillofacial Surgery, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Daniela Garib: Department of Orthodontics, Bauru Dental School, University of São Paulo, Bauru, SP, Brazil; Department of Orthodontics, Hospital for Rehabilitation of Craniofacial Anomalies, University of São Paulo, Bauru, SP, Brazil
- Lucia Cevidanes: Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Juan Prieto: Department of Psychiatry, University of North Carolina, Chapel Hill, NC, USA
127
Li B, Zhan C. Distributed Diagnoses Based on Constructing a Private Chain via a Public Network. Entropy (Basel) 2023; 25:1305. [PMID: 37761604] [PMCID: PMC10530034] [DOI: 10.3390/e25091305]
Abstract
Secure online consultations can provide convenient medical services to patients who require experts from different regions. Moreover, this process can save time, which is critical in emergency cases, and cut medical costs. However, medical services demand a high level of privacy protection, which complicates the construction of such a system. A practical approach is to construct a virtual private chain over public networks by means of cryptography and identity verification. For this purpose, novel protocols are proposed to handle package layout, secure transmission, and authorization. By exploiting the special characteristics of this application, two different kinds of encryption channels were designed to support the proposed protocol and ensure the secure transmission of data. Hash values and multiple checks were embedded in the transmission package to detect incomplete data caused by network errors or attacks. Beyond the secure communication of medical information, the Extended Chinese Remainder Theorem was utilized to complete approval during a committee change in emergency situations. Finally, an example case was used to verify the effectiveness of the overall method.
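The authorization mechanism builds on the Chinese Remainder Theorem. The paper's Extended CRT committee-change scheme adds further machinery; shown here, purely as a hypothetical illustration, is plain CRT reconstruction, the step by which a value split into residues held by committee members can be recovered:

```python
from math import prod

def crt_reconstruct(residues, moduli):
    """Recover x (mod prod(moduli)) from its residues via the CRT.

    moduli must be pairwise coprime; in a sharing scheme each committee
    member would hold one (residue, modulus) pair.
    """
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m): modular inverse
    return x % M

secret = 103
moduli = [3, 5, 7]                        # pairwise coprime
shares = [secret % m for m in moduli]     # [1, 3, 5]
recovered = crt_reconstruct(shares, moduli)  # 103
```

Practical schemes (e.g. Asmuth-Bloom style sharing) add thresholds and blinding on top of this reconstruction so that fewer than a quorum of shares reveals nothing.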
Affiliation(s)
- Bing Li: School of Economics, Wuhan University of Technology, Wuhan 430070, China
- Choujun Zhan: School of Computer, South China Normal University, Guangzhou 510631, China
128
Cao G, Zhang M, Wang Y, Zhang J, Han Y, Xu X, Huang J, Kang G. End-to-end automatic pathology localization for Alzheimer's disease diagnosis using structural MRI. Comput Biol Med 2023; 163:107110. [PMID: 37321102] [DOI: 10.1016/j.compbiomed.2023.107110]
Abstract
Structural magnetic resonance imaging (sMRI) is an essential part of the clinical assessment of patients at risk of Alzheimer dementia. One key challenge in sMRI-based computer-aided dementia diagnosis is to localize local pathological regions for discriminative feature learning. Existing solutions predominantly depend on generating saliency maps for pathology localization and handle the localization task independently of the dementia diagnosis task, leading to a complex multi-stage training pipeline that is hard to optimize with weakly-supervised sMRI-level annotations. In this work, we aim to simplify the pathology localization task and construct an end-to-end automatic localization framework (AutoLoc) for Alzheimer's disease diagnosis. To this end, we first present an efficient pathology localization paradigm that directly predicts the coordinate of the most disease-related region in each sMRI slice. Then, we approximate the non-differentiable patch-cropping operation with the bilinear interpolation technique, which eliminates the barrier to gradient backpropagation and thus enables the joint optimization of localization and diagnosis tasks. Extensive experiments on commonly used ADNI and AIBL datasets demonstrate the superiority of our method. Especially, we achieve 93.38% and 81.12% accuracy on Alzheimer's disease classification and mild cognitive impairment conversion prediction tasks, respectively. Several important brain regions, such as rostral hippocampus and globus pallidus, are identified to be highly associated with Alzheimer's disease.
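The key trick, replacing hard patch cropping with bilinear interpolation so gradients can flow back to the predicted coordinates, reduces to the standard four-corner weighting. A minimal NumPy sketch of sampling one continuous location (deep learning frameworks provide batched versions of this operation; the array values here are illustrative):

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Sample img at a continuous location (y, x).

    The four corner weights are smooth functions of (y, x), which is what
    lets a localization network be trained end-to-end through the crop.
    """
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0]
            + (1 - dy) * dx * img[y0, x0 + 1]
            + dy * (1 - dx) * img[y0 + 1, x0]
            + dy * dx * img[y0 + 1, x0 + 1])

img = np.array([[0.0, 1.0],
                [2.0, 3.0]])
center = bilinear_sample(img, 0.5, 0.5)  # 1.5, the mean of the 4 corners
```

A hard crop at integer coordinates has zero gradient with respect to the predicted position; the interpolated version does not, so localization and diagnosis can be optimized jointly as the abstract describes.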
Affiliation(s)
- Gongpeng Cao: Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, No. 10 Xitucheng Road, Haidian District, Beijing, 100876, China
- Manli Zhang: Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, No. 10 Xitucheng Road, Haidian District, Beijing, 100876, China
- Yiping Wang: Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, No. 10 Xitucheng Road, Haidian District, Beijing, 100876, China
- Jing Zhang: Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, No. 10 Xitucheng Road, Haidian District, Beijing, 100876, China
- Ying Han: Department of Neurology, Xuanwu Hospital of Capital Medical University, No. 45 Changchun Street, Xicheng District, Beijing, 100053, China
- Xin Xu: Department of Neurosurgery, Chinese PLA General Hospital, No. 28 Fuxing Road, Haidian District, Beijing, 100853, China
- Jinguo Huang: Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, No. 10 Xitucheng Road, Haidian District, Beijing, 100876, China
- Guixia Kang: Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, No. 10 Xitucheng Road, Haidian District, Beijing, 100876, China
129
Allgaier J, Mulansky L, Draelos RL, Pryss R. How does the model make predictions? A systematic literature review on the explainability power of machine learning in healthcare. Artif Intell Med 2023; 143:102616. [PMID: 37673561] [DOI: 10.1016/j.artmed.2023.102616]
Abstract
BACKGROUND Medical use cases for machine learning (ML) are growing exponentially. The first hospitals are already using ML systems as decision support systems in their daily routine. At the same time, most ML systems are still opaque and it is not clear how these systems arrive at their predictions. METHODS In this paper, we provide a brief overview of the taxonomy of explainability methods and review popular methods. In addition, we conduct a systematic literature search on PubMed to investigate which explainable artificial intelligence (XAI) methods are used in 450 specific medical supervised ML use cases, how the use of XAI methods has emerged recently, and how the precision of describing ML pipelines has evolved over the past 20 years. RESULTS A large fraction of publications with ML use cases do not use XAI methods at all to explain ML predictions. However, when XAI methods are used, open-source and model-agnostic explanation methods are more commonly used, with SHapley Additive exPlanations (SHAP) and Gradient Class Activation Mapping (Grad-CAM) for tabular and image data leading the way. ML pipelines have been described in increasing detail and uniformity in recent years. However, the willingness to share data and code has stagnated at about one-quarter. CONCLUSIONS XAI methods are mainly used when their application requires little effort. The homogenization of reports in ML use cases facilitates the comparability of work and should be advanced in the coming years. Experts who can mediate between the worlds of informatics and medicine will be increasingly in demand as ML systems are deployed, owing to the high complexity of the domain.
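Grad-CAM, one of the two methods this review found dominant, weights each feature-map channel by its spatially averaged gradient and keeps the positive part. A minimal sketch on precomputed activations and gradients (the toy arrays stand in for a real network's conv-layer tensors):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap.

    activations, gradients: (K, H, W) arrays for one conv layer, taken
    from the forward pass and from backpropagating the target class score.
    """
    alpha = gradients.mean(axis=(1, 2))        # one weight per channel
    cam = np.tensordot(alpha, activations, 1)  # weighted channel sum -> (H, W)
    return np.maximum(cam, 0)                  # ReLU: keep positive evidence

acts = np.ones((2, 3, 3))
grads = np.stack([np.ones((3, 3)), -np.ones((3, 3))])  # alpha = [1, -1]
heatmap = grad_cam(acts, grads)  # channels cancel: all zeros here
```

The resulting map is usually upsampled to the input resolution and overlaid on the image; its model-agnostic simplicity (any CNN with a conv layer) is one reason the review finds it so widely adopted.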
Affiliation(s)
- Johannes Allgaier: Institute of Clinical Epidemiology and Biometry, Julius-Maximilians-Universität Würzburg (JMU), Germany
- Lena Mulansky: Institute of Clinical Epidemiology and Biometry, Julius-Maximilians-Universität Würzburg (JMU), Germany
- Rüdiger Pryss: Institute of Clinical Epidemiology and Biometry, Julius-Maximilians-Universität Würzburg (JMU), Germany
130
Tan TF, Dai P, Zhang X, Jin L, Poh S, Hong D, Lim J, Lim G, Teo ZL, Liu N, Ting DSW. Explainable artificial intelligence in ophthalmology. Curr Opin Ophthalmol 2023; 34:422-430. [PMID: 37527200] [DOI: 10.1097/icu.0000000000000983]
Abstract
PURPOSE OF REVIEW Despite the growing scope of artificial intelligence (AI) and deep learning (DL) applications in the field of ophthalmology, most have yet to reach clinical adoption. Beyond model performance metrics, there has been an increasing emphasis on the need for explainability of proposed DL models. RECENT FINDINGS Several explainable AI (XAI) methods have been proposed, and increasingly applied in ophthalmological DL applications, predominantly in medical imaging analysis tasks. SUMMARY We summarize an overview of the key concepts, and categorize some examples of commonly employed XAI methods. Specific to ophthalmology, we explore XAI from a clinical perspective, in enhancing end-user trust, assisting clinical management, and uncovering new insights. We finally discuss its limitations and future directions to strengthen XAI for application to clinical practice.
Affiliation(s)
- Ting Fang Tan: Artificial Intelligence and Digital Innovation Research Group; Singapore National Eye Centre, Singapore General Hospital
- Peilun Dai: Institute of High Performance Computing, A*STAR
- Xiaoman Zhang: Duke-National University of Singapore Medical School, Singapore
- Liyuan Jin: Artificial Intelligence and Digital Innovation Research Group; Duke-National University of Singapore Medical School, Singapore
- Stanley Poh: Singapore National Eye Centre, Singapore General Hospital
- Dylan Hong: Artificial Intelligence and Digital Innovation Research Group
- Joshua Lim: Singapore National Eye Centre, Singapore General Hospital
- Gilbert Lim: Artificial Intelligence and Digital Innovation Research Group
- Zhen Ling Teo: Artificial Intelligence and Digital Innovation Research Group; Singapore National Eye Centre, Singapore General Hospital
- Nan Liu: Artificial Intelligence and Digital Innovation Research Group; Duke-National University of Singapore Medical School, Singapore
- Daniel Shu Wei Ting: Artificial Intelligence and Digital Innovation Research Group; Singapore National Eye Centre, Singapore General Hospital; Duke-National University of Singapore Medical School, Singapore; Byers Eye Institute, Stanford University, Stanford, California, USA
131
Champendal M, Marmy L, Malamateniou C, Sá Dos Reis C. Artificial intelligence to support person-centred care in breast imaging - A scoping review. J Med Imaging Radiat Sci 2023; 54:511-544. [PMID: 37183076] [DOI: 10.1016/j.jmir.2023.04.001]
Abstract
AIM To overview Artificial Intelligence (AI) developments and applications in breast imaging (BI) focused on providing person-centred care in diagnosis and treatment for breast pathologies. METHODS The scoping review was conducted in accordance with the Joanna Briggs Institute methodology. The search was conducted on MEDLINE, Embase, CINAHL, Web of science, IEEE explore and arxiv during July 2022 and included only studies published after 2016, in French and English. Combination of keywords and Medical Subject Headings terms (MeSH) related to breast imaging and AI were used. No keywords or MeSH terms related to patients, or the person-centred care (PCC) concept were included. Three independent reviewers screened all abstracts and titles, and all eligible full-text publications during a second stage. RESULTS 3417 results were identified by the search and 106 studies were included for meeting all criteria. Six themes relating to the AI-enabled PCC in BI were identified: individualised risk prediction/growth and prediction/false negative reduction (44.3%), treatment assessment (32.1%), tumour type prediction (11.3%), unnecessary biopsies reduction (5.7%), patients' preferences (2.8%) and other issues (3.8%). The main BI modalities explored in the included studies were magnetic resonance imaging (MRI) (31.1%), mammography (27.4%) and ultrasound (23.6%). The studies were predominantly retrospective, and some variations (age range, data source, race, medical imaging) were present in the datasets used. CONCLUSIONS The AI tools for person-centred care are mainly designed for risk and cancer prediction and disease management to identify the most suitable treatment. However, further studies are needed for image acquisition optimisation for different patient groups, improvement and customisation of patient experience and for communicating to patients the options and pathways of disease management.
Affiliation(s)
- Mélanie Champendal
- School of Health Sciences HESAV, HES-SO; University of Applied Sciences Western Switzerland: Lausanne, CH.
- Laurent Marmy
- School of Health Sciences HESAV, HES-SO; University of Applied Sciences Western Switzerland: Lausanne, CH.
- Christina Malamateniou
- School of Health Sciences HESAV, HES-SO; University of Applied Sciences Western Switzerland: Lausanne, CH; Department of Radiography, Division of Midwifery and Radiography, School of Health Sciences, University of London, London, UK.
- Cláudia Sá Dos Reis
- School of Health Sciences HESAV, HES-SO; University of Applied Sciences Western Switzerland: Lausanne, CH.
132
Amoroso N, Quarto S, La Rocca M, Tangaro S, Monaco A, Bellotti R. An eXplainability Artificial Intelligence approach to brain connectivity in Alzheimer's disease. Front Aging Neurosci 2023; 15:1238065. PMID: 37719873. PMCID: PMC10501457. DOI: 10.3389/fnagi.2023.1238065.
Abstract
The advent of eXplainable Artificial Intelligence (XAI) has revolutionized the way human experts, especially from non-computational domains, approach artificial intelligence; this is particularly true for clinical applications, where the transparency of the results is often compromised by algorithmic complexity. Here, we investigate how Alzheimer's disease (AD) affects brain connectivity within a cohort of 432 subjects whose T1-weighted brain magnetic resonance imaging (MRI) data were acquired within the Alzheimer's Disease Neuroimaging Initiative (ADNI). In particular, the cohort included 92 patients with AD, 126 normal controls (NC) and 214 subjects with mild cognitive impairment (MCI). We show how graph theory-based models can accurately distinguish these clinical conditions and how Shapley values, borrowed from game theory, can be adopted to make these models intelligible and easy to interpret. Explainability analyses outline the role played by regions such as the putamen and the middle and superior temporal gyri; from a class-related perspective, it is possible to outline specific regions, such as the hippocampus and amygdala for AD and the posterior cingulate and precuneus for MCI. The approach is general and could be adopted to outline how brain connectivity affects specific brain regions.
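Editor's note: the Shapley attribution used in this study assigns each feature the average of its marginal contributions over all feature orderings. A minimal, dependency-free sketch of the exact computation follows; the three-feature linear "connectivity score" model, its weights, and the feature values are invented for illustration and are not taken from the paper.

```python
from itertools import permutations

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f over the features of x.

    Features absent from a coalition are held at their baseline value;
    averaging marginal contributions over all feature orderings yields
    each feature's Shapley value. Exponential cost: toy inputs only.
    """
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        current = list(baseline)
        prev = f(current)
        for i in order:
            current[i] = x[i]      # add feature i to the coalition
            new = f(current)
            phi[i] += new - prev   # marginal contribution of feature i
            prev = new
    return [p / len(perms) for p in phi]

# Hypothetical model: weighted sum of three illustrative graph features.
weights = [0.5, -0.2, 0.8]
model = lambda feats: sum(w * v for w, v in zip(weights, feats))

x = [1.0, 2.0, 3.0]
base = [0.0, 0.0, 0.0]
print(shapley_values(model, x, base))  # for a linear model: w_i * (x_i - b_i), i.e. ~[0.5, -0.4, 2.4]
```

For a linear model the Shapley value reduces to the weight times the feature's deviation from baseline, which makes the sketch easy to verify by hand.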
Affiliation(s)
- Nicola Amoroso
- Dipartimento di Farmacia-Scienze del Farmaco, Università degli Studi di Bari Aldo Moro, Bari, Italy
- Istituto Nazionale di Fisica Nucleare, Sezione di Bari, Bari, Italy
- Silvano Quarto
- Dipartimento Interateneo di Fisica, Università degli Studi di Bari Aldo Moro, Bari, Italy
- Marianna La Rocca
- Istituto Nazionale di Fisica Nucleare, Sezione di Bari, Bari, Italy
- Dipartimento Interateneo di Fisica, Università degli Studi di Bari Aldo Moro, Bari, Italy
- Sabina Tangaro
- Istituto Nazionale di Fisica Nucleare, Sezione di Bari, Bari, Italy
- Dipartimento di Scienze del Suolo, della Pianta e degli Alimenti, Università degli Studi di Bari Aldo Moro, Bari, Italy
- Alfonso Monaco
- Istituto Nazionale di Fisica Nucleare, Sezione di Bari, Bari, Italy
- Dipartimento Interateneo di Fisica, Università degli Studi di Bari Aldo Moro, Bari, Italy
- Roberto Bellotti
- Istituto Nazionale di Fisica Nucleare, Sezione di Bari, Bari, Italy
- Dipartimento Interateneo di Fisica, Università degli Studi di Bari Aldo Moro, Bari, Italy
133
Gao Y, Soh NYT, Liu N, Lim G, Ting D, Cheng LTE, Wong KM, Liew C, Oh HC, Tan JR, Venkataraman N, Goh SH, Yan YY. Application of a deep learning algorithm in the detection of hip fractures. iScience 2023; 26:107350. PMID: 37554447. PMCID: PMC10404720. DOI: 10.1016/j.isci.2023.107350.
Abstract
This paper describes the development of a deep learning model for the prediction of hip fractures on pelvic radiographs (X-rays). Developed using over 40,000 pelvic radiographs from a single institution, the model demonstrated high sensitivity and specificity when applied to a test set of emergency department radiographs. This study approximates the real-world application of a deep learning fracture detection model by including radiographs with suboptimal image quality, other non-hip fractures, and metallic implants, all of which were excluded from prior published work. The study also explores the effect of ethnicity on model performance, as well as the accuracy of the visualization algorithm for fracture localization.
Affiliation(s)
- Yan Gao
- Health Services Research, Changi General Hospital, Singapore Health Services (SingHealth), Singapore, Singapore
- Nicholas Yock Teck Soh
- Department of Diagnostic Radiology, Changi General Hospital, Singapore Health Services (SingHealth), Singapore, Singapore
- Nan Liu
- Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore, Singapore
- Gilbert Lim
- Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore, Singapore
- Daniel Ting
- Singapore Health Services (SingHealth), Duke-NUS Medical School, Singapore, Singapore
- Lionel Tim-Ee Cheng
- Department of Diagnostic Radiology, Singapore General Hospital, Singapore Health Services (SingHealth), Singapore, Singapore
- Radiological Sciences ACP, Duke-NUS Medical School, Singapore, Singapore
- Kang Min Wong
- Department of Diagnostic Radiology, Changi General Hospital, Singapore Health Services (SingHealth), Singapore, Singapore
- Radiological Sciences ACP, Duke-NUS Medical School, Singapore, Singapore
- Charlene Liew
- Department of Diagnostic Radiology, Changi General Hospital, Singapore Health Services (SingHealth), Singapore, Singapore
- Radiological Sciences ACP, Duke-NUS Medical School, Singapore, Singapore
- Hong Choon Oh
- Health Services Research, Changi General Hospital, Singapore Health Services (SingHealth), Singapore, Singapore
- Jin Rong Tan
- Department of Diagnostic Radiology, Singapore General Hospital, Singapore Health Services (SingHealth), Singapore, Singapore
- Narayan Venkataraman
- Department of Medical Informatics, Changi General Hospital, Singapore Health Services (SingHealth), Singapore, Singapore
- Siang Hiong Goh
- Department of Emergency Medicine, Changi General Hospital, Singapore Health Services (SingHealth), Singapore, Singapore
- Yet Yen Yan
- Department of Diagnostic Radiology, Changi General Hospital, Singapore Health Services (SingHealth), Singapore, Singapore
- Radiological Sciences ACP, Duke-NUS Medical School, Singapore, Singapore
134
Iqbal S, Qureshi AN, Alhussein M, Aurangzeb K, Kadry S. A Novel Heteromorphous Convolutional Neural Network for Automated Assessment of Tumors in Colon and Lung Histopathology Images. Biomimetics (Basel) 2023; 8:370. PMID: 37622975. PMCID: PMC10452605. DOI: 10.3390/biomimetics8040370.
Abstract
The automated assessment of tumors in medical image analysis encounters challenges due to the resemblance of colon and lung tumors to non-mitotic nuclei and their heteromorphic characteristics. An accurate assessment of tumor nuclei presence is crucial for determining tumor aggressiveness and grading. This paper proposes a new method called ColonNet, a heteromorphous convolutional neural network (CNN) with a feature grafting methodology categorically configured for analyzing mitotic nuclei in colon and lung histopathology images. The ColonNet model consists of two stages: first, identifying potential mitotic patches within the histopathological imaging areas, and second, categorizing these patches into squamous cell carcinomas, adenocarcinomas (lung), benign (lung), benign (colon), and adenocarcinomas (colon) based on the model's guidelines. We develop and employ our deep CNNs, each capturing distinct structural, textural, and morphological properties of tumor nuclei, to construct the heteromorphous deep CNN. The proposed ColonNet model is evaluated by comparison with state-of-the-art CNNs. The results demonstrate that our model surpasses others on the test set, achieving an impressive F1 score of 0.96, sensitivity and specificity of 0.95, and an area under the accuracy curve of 0.95. These outcomes underscore our hybrid model's superior performance, excellent generalization, and accuracy, highlighting its potential as a valuable tool to support pathologists in diagnostic activities.
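Editor's note: the F1, sensitivity, and specificity figures reported in such studies follow directly from confusion-matrix counts. A generic sketch, using illustrative counts rather than the paper's data:

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity (recall), specificity, precision, and F1 from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    precision = tp / (tp + fp)            # positive predictive value
    f1 = 2 * precision * sensitivity / (precision + sensitivity)  # harmonic mean
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f1": f1}

# Illustrative counts (not from the paper): a balanced test set where the
# classifier gets 95% of each class right.
print(classification_metrics(tp=95, fp=5, tn=95, fn=5))
```

With these made-up counts every metric works out to 0.95, which shows why balanced errors yield matching sensitivity and specificity.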
Affiliation(s)
- Saeed Iqbal
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore 54000, Pakistan
- Adnan N. Qureshi
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore 54000, Pakistan
- Musaed Alhussein
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
- Khursheed Aurangzeb
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
135
Balla Y, Tirunagari S, Windridge D. Pediatrics in Artificial Intelligence Era: A Systematic Review on Challenges, Opportunities, and Explainability. Indian Pediatr 2023; 60:561-569. PMID: 37424120. DOI: 10.1007/s13312-023-2936-8.
Abstract
BACKGROUND The emergence of artificial intelligence (AI) tools such as ChatGPT and Bard is disrupting a broad swathe of fields, including medicine. In pediatric medicine, AI is increasingly being used across multiple subspecialties. However, the practical application of AI still faces a number of key challenges, so a concise overview of the roles of AI across the multiple domains of pediatric medicine is required, which the current study seeks to provide. AIM To systematically assess the challenges, opportunities, and explainability of AI in pediatric medicine. METHODOLOGY A systematic search was carried out on peer-reviewed databases (PubMed Central, Europe PubMed Central) and grey literature using search terms related to machine learning (ML) and AI for the years 2016 to 2022 in the English language. A total of 210 articles were retrieved and screened with PRISMA for abstract, year, language, context, and proximal relevance to the research aims. A thematic analysis was carried out to extract findings from the included studies. RESULTS Twenty articles were selected for data abstraction and analysis, with three consistent themes emerging from these articles. In particular, eleven articles address the current state-of-the-art application of AI in diagnosing and predicting health conditions such as behavioral and mental health, cancer, and syndromic and metabolic diseases. Five articles highlight the specific challenges of AI deployment in pediatric medicine: data security, handling, authentication, and validation. Four articles set out future opportunities for AI to be adopted: the incorporation of Big Data, cloud computing, precision medicine, and clinical decision support systems. These studies collectively critically evaluate the potential of AI in overcoming current barriers to adoption. CONCLUSION AI is proving disruptive within pediatric medicine and is presently associated with challenges, opportunities, and the need for explainability. AI should be viewed as a tool to enhance and support clinical decision-making rather than a substitute for human judgement and expertise. Future research should consequently focus on obtaining comprehensive data to ensure the generalizability of research findings.
Affiliation(s)
- Yashaswini Balla
- Neurosciences Department, Alder Hey Children's NHS Foundation Trust, Liverpool, United Kingdom
- Santosh Tirunagari
- Department of Psychology, Middlesex University, London, United Kingdom. Correspondence to: Dr Santosh Tirunagari, Department of Psychology, Middlesex University, London, United Kingdom.
- David Windridge
- Department of Computer Science, Middlesex University, London, United Kingdom
136
Jiang X, Hu Z, Wang S, Zhang Y. Deep Learning for Medical Image-Based Cancer Diagnosis. Cancers (Basel) 2023; 15:3608. PMID: 37509272. PMCID: PMC10377683. DOI: 10.3390/cancers15143608.
Abstract
(1) Background: The application of deep learning technology to cancer diagnosis based on medical images is one of the research hotspots in artificial intelligence and computer vision. Given the rapid development of deep learning methods, the very high accuracy and timeliness that cancer diagnosis requires, and the inherent particularity and complexity of medical imaging, a comprehensive review of relevant studies is necessary to help readers better understand the current research status and ideas. (2) Methods: Five types of radiological images, namely X-ray, ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), as well as histopathological images, are reviewed in this paper. The basic architecture of deep learning and classical pretrained models are comprehensively reviewed. In particular, advanced neural networks emerging in recent years, including transfer learning, ensemble learning (EL), graph neural networks, and vision transformers (ViT), are introduced. Overfitting prevention methods are summarized, including batch normalization, dropout, weight initialization, and data augmentation. The application of deep learning technology in medical image-based cancer analysis is sorted out. (3) Results: Deep learning has achieved great success in medical image-based cancer diagnosis, showing good results in image classification, image reconstruction, image detection, image segmentation, image registration, and image synthesis. However, the lack of high-quality labeled datasets limits the role of deep learning, which also faces challenges in rare cancer diagnosis, multi-modal image fusion, model explainability, and generalization. (4) Conclusions: There is a need for more public standard databases for cancer. Pre-trained models based on deep neural networks have the potential to be improved, and special attention should be paid to research on multimodal data fusion and the supervised paradigm. Technologies such as ViT, ensemble learning, and few-shot learning will bring surprises to cancer diagnosis based on medical images.
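Editor's note: two of the overfitting-prevention methods this review summarizes, inverted dropout and batch normalization, can be sketched in a few lines of plain Python. This is a simplified forward pass for illustration only, not the implementation used in any reviewed work:

```python
import random

def inverted_dropout(activations, p_drop, rng):
    """Inverted dropout: during training, zero each unit with probability
    p_drop and scale survivors by 1/(1 - p_drop), so the expected activation
    is unchanged and the layer becomes the identity at inference time."""
    keep = 1.0 - p_drop
    return [a / keep if rng.random() >= p_drop else 0.0 for a in activations]

def batch_norm_1d(batch, eps=1e-5):
    """Batch normalization forward pass for a single feature: subtract the
    batch mean and divide by the batch standard deviation (eps for stability)."""
    m = sum(batch) / len(batch)
    var = sum((x - m) ** 2 for x in batch) / len(batch)
    return [(x - m) / (var + eps) ** 0.5 for x in batch]

acts = [0.8, 1.5, -0.3, 2.0]
print(inverted_dropout(acts, p_drop=0.5, rng=random.Random(0)))
print(batch_norm_1d([1.0, 2.0, 3.0, 4.0]))  # zero mean, ~unit variance
```

The scaling by 1/(1 - p_drop) is the detail that lets the same network be used unchanged at test time.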
Grants
- RM32G0178B8 BBSRC, UK
- MC_PC_17171 MRC, UK
- RP202G0230 Royal Society, UK
- AA/18/3/34220 BHF, UK
- RM60G0680 Hope Foundation for Cancer Research, UK
- P202PF11 GCRF, UK
- RP202G0289 Sino-UK Industrial Fund, UK
- P202ED10, P202RE969 LIAS, UK
- P202RE237 Data Science Enhancement Fund, UK
- 24NN201 Fight for Sight, UK
- OP202006 Sino-UK Education Fund, UK
- 2023SJZD125 Major project of philosophy and social science research in colleges and universities in Jiangsu Province, China
Affiliation(s)
- Xiaoyan Jiang
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Zuojin Hu
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
137
Kim SY. Personalized Explanations for Early Diagnosis of Alzheimer's Disease Using Explainable Graph Neural Networks with Population Graphs. Bioengineering (Basel) 2023; 10:701. PMID: 37370632. DOI: 10.3390/bioengineering10060701.
Abstract
Leveraging recent advances in graph neural networks, our study introduces an application of graph convolutional networks (GCNs) within a correlation-based population graph, aiming to enhance Alzheimer's disease (AD) prognosis and illuminate the intricacies of AD progression. This methodological approach exploits the inherent structure and correlations in demographic and neuroimaging data to predict amyloid-beta (Aβ) positivity. To validate our approach, we conducted extensive performance comparisons with conventional machine learning models and a GCN model with randomly assigned edges. The results consistently highlighted the superior performance of the correlation-based GCN model across different sample groups in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, suggesting the importance of accurately reflecting the correlation structure in population graphs for effective pattern recognition and accurate prediction. Furthermore, our exploration of the model's decision-making process using GNNExplainer identified unique sets of biomarkers indicative of Aβ positivity in different groups, shedding light on the heterogeneity of AD progression. This study underscores the potential of our proposed approach for more nuanced AD prognoses, potentially informing more personalized and precise therapeutic strategies. Future research can extend these findings by integrating diverse data sources, employing longitudinal data, and refining the interpretability of the model, which potentially has broad applicability to other complex diseases.
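Editor's note: the graph-convolution step underlying GCN models of this kind is a short computation, H' = ReLU(D^-1/2 (A + I) D^-1/2 H W). A dependency-free sketch of one layer on a toy "population graph" follows; the adjacency, features, and weights are illustrative, not ADNI data, and the paper's exact architecture may differ.

```python
def gcn_layer(adj, feats, weight):
    """One graph convolution: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W),
    i.e. symmetric-normalized neighbor averaging followed by a linear map."""
    n = len(adj)
    # Add self-loops so each node keeps its own features.
    a = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a]
    # Symmetric normalization of the adjacency matrix.
    a_hat = [[a[i][j] / (deg[i] * deg[j]) ** 0.5 for j in range(n)] for i in range(n)]
    d_in, d_out = len(weight), len(weight[0])
    # Aggregate neighbor features: A_hat @ H.
    agg = [[sum(a_hat[i][k] * feats[k][j] for k in range(n)) for j in range(d_in)]
           for i in range(n)]
    # Linear transform plus ReLU: max(0, agg @ W).
    return [[max(0.0, sum(agg[i][k] * weight[k][j] for k in range(d_in)))
             for j in range(d_out)] for i in range(n)]

# Toy graph: three subjects in a path, one feature each, identity weights.
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
feats = [[1.0], [2.0], [3.0]]
print(gcn_layer(adj, feats, [[1.0]]))
```

Each output row mixes a subject's own feature with those of its correlated neighbors, which is exactly why the edge structure of the population graph matters for prediction.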
Affiliation(s)
- So Yeon Kim
- Department of Artificial Intelligence, Ajou University, Suwon 16499, Republic of Korea
- Department of Software and Computer Engineering, Ajou University, Suwon 16499, Republic of Korea
138
Dolezal JM, Wolk R, Hieromnimon HM, Howard FM, Srisuwananukorn A, Karpeyev D, Ramesh S, Kochanny S, Kwon JW, Agni M, Simon RC, Desai C, Kherallah R, Nguyen TD, Schulte JJ, Cole K, Khramtsova G, Garassino MC, Husain AN, Li H, Grossman R, Cipriani NA, Pearson AT. Deep learning generates synthetic cancer histology for explainability and education. NPJ Precis Oncol 2023; 7:49. PMID: 37248379. PMCID: PMC10227067. DOI: 10.1038/s41698-023-00399-4.
Abstract
Artificial intelligence methods including deep neural networks (DNN) can provide rapid molecular classification of tumors from routine histology with accuracy that matches or exceeds human pathologists. Discerning how neural networks make their predictions remains a significant challenge, but explainability tools help provide insights into what models have learned when corresponding histologic features are poorly defined. Here, we present a method for improving explainability of DNN models using synthetic histology generated by a conditional generative adversarial network (cGAN). We show that cGANs generate high-quality synthetic histology images that can be leveraged for explaining DNN models trained to classify molecularly-subtyped tumors, exposing histologic features associated with molecular state. Fine-tuning synthetic histology through class and layer blending illustrates nuanced morphologic differences between tumor subtypes. Finally, we demonstrate the use of synthetic histology for augmenting pathologist-in-training education, showing that these intuitive visualizations can reinforce and improve understanding of histologic manifestations of tumor biology.
Affiliation(s)
- James M Dolezal
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medicine, Chicago, IL, USA
- Rachelle Wolk
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Hanna M Hieromnimon
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medicine, Chicago, IL, USA
- Frederick M Howard
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medicine, Chicago, IL, USA
- Siddhi Ramesh
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medicine, Chicago, IL, USA
- Sara Kochanny
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medicine, Chicago, IL, USA
- Jung Woo Kwon
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Meghana Agni
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Richard C Simon
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Chandni Desai
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Raghad Kherallah
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Tung D Nguyen
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Jefree J Schulte
- Department of Pathology and Laboratory Medicine, University of Wisconsin at Madison, Madison, WI, USA
- Kimberly Cole
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Galina Khramtsova
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Marina Chiara Garassino
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medicine, Chicago, IL, USA
- Aliya N Husain
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Huihua Li
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Robert Grossman
- University of Chicago, Center for Translational Data Science, Chicago, IL, USA
- Nicole A Cipriani
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA.
- Alexander T Pearson
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medicine, Chicago, IL, USA.
139
Zhang T, Bur AM, Kraft S, Kavookjian H, Renslo B, Chen X, Luo B, Wang G. Gender, Smoking History, and Age Prediction from Laryngeal Images. J Imaging 2023; 9:109. PMID: 37367457. DOI: 10.3390/jimaging9060109.
Abstract
Flexible laryngoscopy is commonly performed by otolaryngologists to detect laryngeal diseases and to recognize potentially malignant lesions. Recently, researchers have introduced machine learning techniques to facilitate automated diagnosis using laryngeal images and achieved promising results. The diagnostic performance can be improved when patients' demographic information is incorporated into models. However, the manual entry of patient data is time-consuming for clinicians. In this study, we made the first endeavor to employ deep learning models to predict patient demographic information to improve the detector model's performance. The overall accuracy for gender, smoking history, and age was 85.5%, 65.2%, and 75.9%, respectively. We also created a new laryngoscopic image set for the machine learning study and benchmarked the performance of eight classical deep learning models based on CNNs and Transformers. The results can be integrated into current learning models to improve their performance by incorporating the patient's demographic information.
Affiliation(s)
- Tianxiao Zhang
- Department of Electrical Engineering and Computer Science, University of Kansas, Lawrence, KS 66045, USA
- Andrés M Bur
- Department of Otolaryngology-Head and Neck Surgery, University of Kansas Medical Center, Kansas City, KS 66160, USA
- Shannon Kraft
- Department of Otolaryngology-Head and Neck Surgery, University of Kansas Medical Center, Kansas City, KS 66160, USA
- Hannah Kavookjian
- Department of Otolaryngology-Head and Neck Surgery, University of Kansas Medical Center, Kansas City, KS 66160, USA
- Bryan Renslo
- Department of Otolaryngology-Head and Neck Surgery, University of Kansas Medical Center, Kansas City, KS 66160, USA
- Xiangyu Chen
- Department of Electrical Engineering and Computer Science, University of Kansas, Lawrence, KS 66045, USA
- Bo Luo
- Department of Electrical Engineering and Computer Science, University of Kansas, Lawrence, KS 66045, USA
- Guanghui Wang
- Department of Computer Science, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada
140
Zhao G, Kong D, Xu X, Hu S, Li Z, Tian J. Deep learning-based classification of breast lesions using dynamic ultrasound video. Eur J Radiol 2023; 165:110885. PMID: 37290361. DOI: 10.1016/j.ejrad.2023.110885.
Abstract
PURPOSE We aimed to develop a deep-learning-based classification model using dynamic breast ultrasound video, then evaluate its diagnostic performance in comparison with a classic model based on static ultrasound images and with radiologists of different seniority. METHOD We collected 1000 breast lesions from 888 patients from May 2020 to December 2021. Each lesion contained two static images and two dynamic videos. We divided these lesions randomly into training, validation, and test sets in a 7:2:1 ratio. Two deep learning (DL) models, namely DL-video and DL-image, were developed based on 3D ResNet-50 and 2D ResNet-50 using 2000 dynamic videos and 2000 static images, respectively. Lesions in the test set were evaluated to compare the diagnostic performance of the two models and six radiologists of different seniority. RESULTS The area under the curve of the DL-video model was significantly higher than those of the DL-image model (0.969 vs. 0.925, P = 0.0172) and the six radiologists (0.969 vs. 0.779-0.912, P < 0.05). All radiologists performed better when evaluating the dynamic videos than the static images. Furthermore, radiologists performed better with increasing seniority when reading both images and videos. CONCLUSIONS The DL-video model can discern more detailed spatial and temporal information for accurate classification of breast lesions than the conventional DL-image model and radiologists, and its clinical application can further improve the diagnosis of breast cancer.
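Editor's note: for a finite test set, the area under the ROC curve compared here equals the probability that a randomly chosen positive case is scored above a randomly chosen negative one (ties counting one half). A minimal sketch with made-up scores, not the study's data:

```python
def empirical_auc(pos_scores, neg_scores):
    """Empirical AUC via the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs where the positive is scored higher,
    with ties counted as half a win."""
    wins = 0.0
    for p in pos_scores:
        for q in neg_scores:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Illustrative model scores for malignant (positive) and benign (negative) lesions.
pos = [0.9, 0.8, 0.4]
neg = [0.5, 0.3, 0.2]
print(empirical_auc(pos, neg))  # 8 of 9 pairs ranked correctly: ~0.889
```

This pairwise view makes clear why AUC is insensitive to any monotone rescaling of the model's output scores.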
Affiliation(s)
- Guojia Zhao
- Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, China; Department of Ultrasound, Lin Yi People's Hospital, Linyi, Shandong, China
- Xiangli Xu
- The Second Hospital of Harbin, Harbin, Heilongjiang, China
- Shunbo Hu
- Lin Yi University, Linyi, Shandong, China.
- Ziyao Li
- Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, China.
- Jiawei Tian
- Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, China.
141
Bou Zerdan M, Kassab J, Saba L, Haroun E, Bou Zerdan M, Allam S, Nasr L, Macaron W, Mammadli M, Abou Moussa S, Chaulagain CP. Liquid biopsies and minimal residual disease in lymphoid malignancies. Front Oncol 2023; 13:1173701. PMID: 37228488. PMCID: PMC10203459. DOI: 10.3389/fonc.2023.1173701.
Abstract
Minimal residual disease (MRD) assessment using peripheral blood, instead of bone marrow aspirate/biopsy specimens or biopsies of tissue infiltrated by lymphoid malignancies, is an emerging technique attracting enormous research interest and technological innovation at the current time. In some lymphoid malignancies (particularly ALL), studies have shown that MRD monitoring of the peripheral blood may be an adequate alternative to frequent bone marrow aspirations. However, additional studies investigating the biology of liquid biopsies in ALL and their potential as an MRD marker in larger patient cohorts and in treatment protocols are warranted. Despite the promising data, liquid biopsies in lymphoid malignancies still face limitations, such as standardization of sample collection and processing, determination of the timing and duration of liquid biopsy analysis, and definition of the biological characteristics and specificity of the techniques evaluated, such as flow cytometry, molecular techniques, and next-generation sequencing. The use of liquid biopsy for detection of minimal residual disease in T-cell lymphoma is still experimental, but it has made significant progress in multiple myeloma, for example. Recent attempts to use artificial intelligence may help simplify the testing algorithm and may help avoid inter-observer variation and operator dependency in these highly technically demanding testing processes.
Affiliation(s)
- Maroun Bou Zerdan
- Department of Internal Medicine, State University of New York (SUNY) Upstate Medical University, Syracuse, NY, United States
- Joseph Kassab
- Cleveland Clinic, Research Institute, Cleveland, OH, United States
- Ludovic Saba
- Department of Hematology-Oncology, Myeloma and Amyloidosis Program, Maroone Cancer Center, Cleveland Clinic Florida, Weston, FL, United States
- Elio Haroun
- Department of Medicine, State University of New York (SUNY) Upstate Medical University, New York, NY, United States
- Sabine Allam
- Department of Medicine and Medical Sciences, University of Balamand, Balamand, Lebanon
- Lewis Nasr
- University of Texas MD Anderson Cancer Center, Texas, TX, United States
- Walid Macaron
- University of Texas MD Anderson Cancer Center, Texas, TX, United States
- Mahinbanu Mammadli
- Department of Internal Medicine, State University of New York (SUNY) Upstate Medical University, Syracuse, NY, United States
- Chakra P. Chaulagain
- Department of Hematology-Oncology, Myeloma and Amyloidosis Program, Maroone Cancer Center, Cleveland Clinic Florida, Weston, FL, United States
142
Kocak B, Baessler B, Bakas S, Cuocolo R, Fedorov A, Maier-Hein L, Mercaldo N, Müller H, Orlhac F, Pinto Dos Santos D, Stanzione A, Ugga L, Zwanenburg A. CheckList for EvaluAtion of Radiomics research (CLEAR): a step-by-step reporting guideline for authors and reviewers endorsed by ESR and EuSoMII. Insights Imaging 2023; 14:75. PMID: 37142815. PMCID: PMC10160267. DOI: 10.1186/s13244-023-01415-8.
Abstract
Even though radiomics can hold great potential for supporting clinical decision-making, its current use is mostly limited to academic research, without applications in routine clinical practice. The workflow of radiomics is complex due to several methodological steps and nuances, which often leads to inadequate reporting and evaluation, and poor reproducibility. Available reporting guidelines and checklists for artificial intelligence and predictive modeling include relevant good practices, but they are not tailored to radiomic research. There is a clear need for a complete radiomics checklist for study planning, manuscript writing, and evaluation during the review process to facilitate the repeatability and reproducibility of studies. We here present a documentation standard for radiomic research that can guide authors and reviewers. Our motivation is to improve the quality and reliability and, in turn, the reproducibility of radiomic research. We name the checklist CLEAR (CheckList for EvaluAtion of Radiomics research), to convey the idea of being more transparent. With its 58 items, the CLEAR checklist should be considered a standardization tool providing the minimum requirements for presenting clinical radiomics research. In addition to a dynamic online version of the checklist, a public repository has also been set up to allow the radiomics community to comment on the checklist items and adapt the checklist for future versions. Prepared and revised by an international group of experts using a modified Delphi method, we hope the CLEAR checklist will serve well as a single and complete scientific documentation tool for authors and reviewers to improve the radiomics literature.
Affiliation(s)
- Burak Kocak
- Department of Radiology, University of Health Sciences, Basaksehir Cam and Sakura City Hospital, Basaksehir, Istanbul, 34480, Turkey
- Bettina Baessler
- Institute of Diagnostic and Interventional Radiology, University Hospital Würzburg, Würzburg, Germany
- Spyridon Bakas
- Center for Artificial Intelligence for Integrated Diagnostics (AI2D) & Center for Biomedical Image Computing & Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Renato Cuocolo
- Department of Medicine, Surgery, and Dentistry, University of Salerno, Baronissi, Italy
- Andrey Fedorov
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Lena Maier-Hein
- Division of Intelligent Medical Systems, German Cancer Research Center, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Nathaniel Mercaldo
- Institute for Technology Assessment, Massachusetts General Hospital, Boston, MA, USA
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Henning Müller
- University of Applied Sciences of Western Switzerland (HES-SO Valais), Valais, Switzerland
- Department of Radiology and Medical Informatics, University of Geneva (UniGe), Geneva, Switzerland
- Fanny Orlhac
- Laboratoire d'Imagerie Translationnelle en Oncologie (LITO)-U1288, Institut Curie, Inserm, Université PSL, Orsay, France
- Daniel Pinto Dos Santos
- Department of Radiology, University Hospital of Cologne, Cologne, Germany
- Institute for Diagnostic and Interventional Radiology, Goethe-University Frankfurt Am Main, Frankfurt, Germany
- Arnaldo Stanzione
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Naples, Italy
- Lorenzo Ugga
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Naples, Italy
- Alex Zwanenburg
- OncoRay-National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany
- National Center for Tumor Diseases (NCT), Partner Site Dresden, Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany

143
Borys K, Schmitt YA, Nauta M, Seifert C, Krämer N, Friedrich CM, Nensa F. Explainable AI in medical imaging: An overview for clinical practitioners – Saliency-based XAI approaches. Eur J Radiol 2023; 162:110787. [PMID: 37001254 DOI: 10.1016/j.ejrad.2023.110787] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2023] [Revised: 03/03/2023] [Accepted: 03/14/2023] [Indexed: 03/30/2023]
Abstract
As recent achievements of Artificial Intelligence (AI) have demonstrated significant success and promising results across many fields of application over the last decade, AI has also become an essential part of medical research. Improving data availability, coupled with advances in high-performance computing and innovative algorithms, has increased AI's potential in various aspects. Because AI is rapidly reshaping research and promoting the development of personalized clinical care, its implementation is accompanied by an urgent need for a deep understanding of its inner workings, especially in high-stakes domains. However, such systems can be highly complex and opaque, limiting the possibility of an immediate understanding of the system's decisions. In the medical field, these decisions carry high impact, as physicians and patients can only fully trust AI systems when the origin of their results is reasonably communicated, which simultaneously enables the identification of errors and biases. Explainable AI (XAI), an increasingly important field of research in recent years, promotes the formulation of explainability methods and provides a rationale that allows users to comprehend the results generated by AI systems. In this paper, we investigate the application of XAI in medical imaging, addressing a broad audience, especially healthcare professionals. The content focuses on definitions and taxonomies, standard methods and approaches, advantages, limitations, and examples representing the current state of research regarding XAI in medical imaging. This paper focuses on saliency-based XAI methods, in which the explanation can be provided directly on the input data (the image) and which are naturally of special importance in medical imaging.
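The core idea of a saliency-based explanation can be sketched without any deep-learning framework: perturb each input pixel and record how strongly the model's output score reacts. Everything below (the `model_score` function, the image size, the values) is an invented toy stand-in for illustration, not a method from the article itself.

```python
import numpy as np

def model_score(img):
    # Invented stand-in for a trained classifier's class score: it responds
    # mostly to the brightness of a central patch of the image.
    centre = img[2:6, 2:6]
    return float(np.tanh(centre.sum() - 0.25 * img.sum()))

def saliency_map(img, eps=1e-4):
    """Finite-difference saliency: |d score / d pixel| for every pixel."""
    sal = np.zeros_like(img)
    base = model_score(img)
    for idx in np.ndindex(img.shape):
        bumped = img.copy()
        bumped[idx] += eps
        sal[idx] = abs(model_score(bumped) - base) / eps
    return sal

rng = np.random.default_rng(0)
image = rng.random((8, 8))
sal = saliency_map(image)
# Central pixels drive the score, so their saliency exceeds the corners'.
```

In practice one would use the gradient of the network output with respect to the input (or a refinement such as Grad-CAM) rather than finite differences, but the reading of the resulting heatmap is the same: bright regions are the pixels the prediction depends on most.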
144
Borys K, Schmitt YA, Nauta M, Seifert C, Krämer N, Friedrich CM, Nensa F. Explainable AI in medical imaging: An overview for clinical practitioners – Beyond saliency-based XAI approaches. Eur J Radiol 2023; 162:110786. [PMID: 36990051 DOI: 10.1016/j.ejrad.2023.110786] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2023] [Revised: 03/03/2023] [Accepted: 03/14/2023] [Indexed: 03/30/2023]
Abstract
Driven by recent advances in Artificial Intelligence (AI) and Computer Vision (CV), the implementation of AI systems in the medical domain has increased correspondingly. This is especially true for medical imaging, in which AI aids several imaging-based tasks such as classification, segmentation, and registration. Moreover, AI reshapes medical research and contributes to the development of personalized clinical care. Consequently, alongside its expanding implementation arises the need for an extensive understanding of AI systems, their inner workings, potentials, and limitations, which is what the field of eXplainable AI (XAI) aims to provide. Because medical imaging is mainly associated with visual tasks, most explainability approaches incorporate saliency-based XAI methods. In contrast, in this article we investigate the full potential of XAI methods in medical imaging by specifically focusing on XAI techniques that do not rely on saliency, providing diversified examples. We address a broad audience, particularly healthcare professionals. Moreover, this work aims to establish common ground for understanding and exchange between Deep Learning (DL) builders and healthcare professionals, which is why we aimed for a non-technical overview. The presented XAI methods are divided by their output representation into the following categories: case-based explanations, textual explanations, and auxiliary explanations.
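Of the categories listed above, case-based explanations are perhaps the easiest to sketch: the system justifies a prediction by retrieving the most similar previously diagnosed cases. The feature vectors and labels below are invented toy values, not data from the article.

```python
import numpy as np

# Invented "reference library" of embeddings for previously diagnosed images.
train_feats = np.array([[0.10, 0.20], [0.90, 0.80], [0.85, 0.90], [0.20, 0.10]])
train_labels = ["benign", "malignant", "malignant", "benign"]

def explain_by_case(query, k=2):
    """Return the k nearest reference cases as a case-based explanation."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    order = np.argsort(dists)[:k]
    return [(int(i), train_labels[i], float(dists[i])) for i in order]

cases = explain_by_case(np.array([0.88, 0.85]))
# Both retrieved neighbours are labelled "malignant", so the system can
# show those two prototype cases to the clinician as its justification.
```

A clinician then inspects the retrieved prototype cases rather than an abstract heatmap, which is why case-based explanations are often considered more intuitive for non-technical audiences.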
145
Qian J, Li H, Wang J, He L. Recent Advances in Explainable Artificial Intelligence for Magnetic Resonance Imaging. Diagnostics (Basel) 2023; 13:1571. [PMID: 37174962 PMCID: PMC10178221 DOI: 10.3390/diagnostics13091571] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2023] [Revised: 03/29/2023] [Accepted: 04/26/2023] [Indexed: 05/15/2023] Open
Abstract
Advances in artificial intelligence (AI), especially deep learning (DL), have facilitated magnetic resonance imaging (MRI) data analysis, enabling AI-assisted medical image diagnoses and prognoses. However, most DL models are considered "black boxes". There is an unmet need to demystify DL models so that domain experts can trust these high-performance models. This has given rise to a sub-domain of AI research called explainable artificial intelligence (XAI). In the last decade, many experts have dedicated their efforts to developing novel XAI methods that are competent at visualizing and explaining the logic behind data-driven DL models. However, XAI techniques are still in their infancy for medical MRI image analysis. This study aims to outline the XAI applications that are able to interpret DL models for MRI data analysis. We first introduce several common MRI data modalities. Then, a brief history of DL models is discussed. Next, we highlight XAI frameworks and elaborate on the principles of multiple popular XAI methods. Moreover, studies on XAI applications in MRI image analysis are reviewed across the tissues/organs of the human body. A quantitative analysis is conducted to reveal the insights of MRI researchers on these XAI techniques. Finally, evaluations of XAI methods are discussed. This survey presents recent advances in the XAI domain for explaining the DL models that have been utilized in MRI applications.
Affiliation(s)
- Jinzhao Qian
- Imaging Research Center, Department of Radiology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Department of Computer Science, University of Cincinnati, Cincinnati, OH 45221, USA
- Hailong Li
- Imaging Research Center, Department of Radiology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Department of Radiology, College of Medicine, University of Cincinnati, Cincinnati, OH 45221, USA
- Junqi Wang
- Imaging Research Center, Department of Radiology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Lili He
- Imaging Research Center, Department of Radiology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH 45229, USA
- Department of Computer Science, University of Cincinnati, Cincinnati, OH 45221, USA
- Department of Radiology, College of Medicine, University of Cincinnati, Cincinnati, OH 45221, USA

146
Vrahatis AG, Skolariki K, Krokidis MG, Lazaros K, Exarchos TP, Vlamos P. Revolutionizing the Early Detection of Alzheimer's Disease through Non-Invasive Biomarkers: The Role of Artificial Intelligence and Deep Learning. SENSORS (BASEL, SWITZERLAND) 2023; 23:4184. [PMID: 37177386 PMCID: PMC10180573 DOI: 10.3390/s23094184] [Citation(s) in RCA: 14] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/28/2023] [Revised: 04/19/2023] [Accepted: 04/19/2023] [Indexed: 05/15/2023]
Abstract
Alzheimer's disease (AD) is now classified as a silent pandemic due to concerning current statistics and future predictions. Despite this, no effective treatment or accurate diagnosis currently exists. The negative impacts of invasive techniques and the failure of clinical trials have prompted a shift in research towards non-invasive treatments. In light of this, there is a growing need for early detection of AD through non-invasive approaches. The abundance of data generated by non-invasive techniques such as blood component monitoring, imaging, wearable sensors, and biosensors not only offers a platform for more accurate and reliable biomarker development but also significantly reduces patient pain, psychological impact, risk of complications, and cost. Nevertheless, the computational analysis of the large quantities of data generated, which can provide crucial information for the early diagnosis of AD, remains challenging. Hence, the integration of artificial intelligence and deep learning is critical to addressing these challenges. This work examines the current state of these approaches to AD diagnosis, leveraging the potential of these tools and the vast amount of non-invasive data in order to revolutionize the early detection of AD in line with the principles of a new era of non-invasive medicine.
Affiliation(s)
- Marios G. Krokidis
- Bioinformatics and Human Electrophysiology Laboratory, Department of Informatics, Ionian University, 49100 Corfu, Greece

147
Padmapriya ST, Parthasarathy S. Ethical Data Collection for Medical Image Analysis: a Structured Approach. Asian Bioeth Rev 2023:1-14. [PMID: 37361687 PMCID: PMC10088772 DOI: 10.1007/s41649-023-00250-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2023] [Revised: 03/24/2023] [Accepted: 03/26/2023] [Indexed: 06/28/2023] Open
Abstract
Due to advancements in technologies such as data science and artificial intelligence, healthcare research has gained momentum and is generating new findings and predictions on abnormalities, leading to the diagnosis of diseases or disorders in human beings. On one hand, the extensive application of data science to healthcare research is progressing rapidly; on the other hand, the ethical concerns and the adjoining risks and legal hurdles that data scientists may face in the future slow down its progression. Simply put, the application of data science to ethically guided healthcare research appears to be a dream come true. Hence, in this paper, we discuss the current practices, challenges, and limitations of the data collection process during medical image analysis (MIA) conducted as part of healthcare research, and propose an ethical data collection framework to guide data scientists in addressing possible ethical concerns before commencing data analytics over a medical dataset.
Affiliation(s)
- S. T. Padmapriya
- Department of Applied Mathematics and Computational Science, Thiagarajar College of Engineering, Madurai, India
- Sudhaman Parthasarathy
- Department of Applied Mathematics and Computational Science, Thiagarajar College of Engineering, Madurai, India

148
Lundström C, Lindvall M. Mapping the Landscape of Care Providers' Quality Assurance Approaches for AI in Diagnostic Imaging. J Digit Imaging 2023; 36:379-387. [PMID: 36352164 PMCID: PMC10039170 DOI: 10.1007/s10278-022-00731-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2022] [Revised: 10/26/2022] [Accepted: 10/28/2022] [Indexed: 11/10/2022] Open
Abstract
The discussion on artificial intelligence (AI) solutions in diagnostic imaging has matured in recent years. The potential value of AI adoption is well established, as are the associated risks. Much focus has, rightfully, been on regulatory certification of AI products, with the strong incentive of being an enabling step for the commercial actors. It is, however, becoming evident that regulatory approval is not enough to ensure safe and effective AI usage in the local setting. In other words, care providers need to develop and implement quality assurance (QA) approaches for AI solutions in diagnostic imaging. The domain of AI-specific QA is still in an early development phase. We contribute to this development by describing the current landscape of QA-for-AI approaches in medical imaging, with a focus on radiology and pathology. We map the potential quality threats and review the existing QA approaches in relation to those threats. We propose a practical categorization of QA approaches, based on key characteristics corresponding to means, situation, and purpose. The review highlights the heterogeneity of methods and practices relevant for this domain and points to targets for future research efforts.
Affiliation(s)
- Claes Lundström
- Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
- Sectra AB, Linköping, Sweden

149
Altini N, Puro E, Taccogna MG, Marino F, De Summa S, Saponaro C, Mattioli E, Zito FA, Bevilacqua V. Tumor Cellularity Assessment of Breast Histopathological Slides via Instance Segmentation and Pathomic Features Explainability. Bioengineering (Basel) 2023; 10:396. [PMID: 37106583 PMCID: PMC10135772 DOI: 10.3390/bioengineering10040396] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2023] [Revised: 03/14/2023] [Accepted: 03/19/2023] [Indexed: 04/29/2023] Open
Abstract
The segmentation and classification of cell nuclei are pivotal steps in the pipelines for the analysis of bioimages. Deep learning (DL) approaches are leading the digital pathology field in the context of nuclei detection and classification. Nevertheless, the features that are exploited by DL models to make their predictions are difficult to interpret, hindering the deployment of such methods in clinical practice. On the other hand, pathomic features can be linked to an easier description of the characteristics exploited by the classifiers for making the final predictions. Thus, in this work, we developed an explainable computer-aided diagnosis (CAD) system that can be used to support pathologists in the evaluation of tumor cellularity in breast histopathological slides. In particular, we compared an end-to-end DL approach that exploits the Mask R-CNN instance segmentation architecture with a two-step pipeline, in which features are extracted from the morphological and textural characteristics of the cell nuclei. Classifiers based on support vector machines and artificial neural networks are trained on top of these features to discriminate between tumor and non-tumor nuclei. Afterwards, the SHAP (SHapley Additive exPlanations) explainable artificial intelligence technique was employed to perform a feature importance analysis, which led to an understanding of the features processed by the machine learning models in making their decisions. An expert pathologist validated the employed feature set, corroborating the clinical usability of the model. Even though the models resulting from the two-step pipeline are slightly less accurate than those of the end-to-end approach, the interpretability of their features is clearer and may help build the trust pathologists need to adopt artificial intelligence-based CAD systems in their clinical workflow. To further show the validity of the proposed approach, it was tested on an external validation dataset, collected from IRCCS Istituto Tumori "Giovanni Paolo II" and made publicly available to ease research concerning the quantification of tumor cellularity.
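The feature-importance step described above can be illustrated with a simplified stand-in: permutation importance, which, like SHAP, attributes a model's performance to individual features. The toy linear classifier and the feature names below are invented for illustration, not taken from the paper's pathomic pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented pathomic features per nucleus: [area, eccentricity, contrast].
X = rng.random((200, 3))
# Synthetic labels driven almost entirely by the first feature ("area").
y = (X[:, 0] > 0.5).astype(int)

def model_accuracy(X, y):
    # Fixed toy linear classifier that leans heavily on feature 0.
    w, b = np.array([10.0, 0.1, 0.1]), -5.0
    pred = (X @ w + b > 0).astype(int)
    return float((pred == y).mean())

def permutation_importance(X, y):
    """Importance of each feature = accuracy lost when that column is shuffled."""
    base = model_accuracy(X, y)
    importances = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        importances.append(base - model_accuracy(Xp, y))
    return importances

imp = permutation_importance(X, y)
# Shuffling "area" destroys the signal, so imp[0] dwarfs imp[1] and imp[2].
```

SHAP goes further by producing per-sample, signed attributions with game-theoretic guarantees, but the reading is analogous: features whose perturbation costs the model most are the ones driving its decisions.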
Affiliation(s)
- Nicola Altini
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona n. 4, 70126 Bari, Italy
- Emilia Puro
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona n. 4, 70126 Bari, Italy
- Maria Giovanna Taccogna
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona n. 4, 70126 Bari, Italy
- Francescomaria Marino
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona n. 4, 70126 Bari, Italy
- Simona De Summa
- Molecular Diagnostics and Pharmacogenetics Unit, IRCCS Istituto Tumori “Giovanni Paolo II”, Via O. Flacco n. 65, 70124 Bari, Italy
- Concetta Saponaro
- Laboratory of Preclinical and Translational Research, Centro di Riferimento Oncologico della Basilicata (IRCCS-CROB), Via Padre Pio n. 1, 85028 Rionero in Vulture, Italy
- Eliseo Mattioli
- Pathology Department, IRCCS Istituto Tumori “Giovanni Paolo II”, Via O. Flacco n. 65, 70124 Bari, Italy
- Francesco Alfredo Zito
- Pathology Department, IRCCS Istituto Tumori “Giovanni Paolo II”, Via O. Flacco n. 65, 70124 Bari, Italy
- Vitoantonio Bevilacqua
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona n. 4, 70126 Bari, Italy
- Apulian Bioengineering s.r.l., Via delle Violette n. 14, 70026 Modugno, Italy

150
Al-Jabbar M, Alshahrani M, Senan EM, Ahmed IA. Histopathological Analysis for Detecting Lung and Colon Cancer Malignancies Using Hybrid Systems with Fused Features. Bioengineering (Basel) 2023; 10:383. [PMID: 36978774 PMCID: PMC10045080 DOI: 10.3390/bioengineering10030383] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Revised: 03/05/2023] [Accepted: 03/16/2023] [Indexed: 03/30/2023] Open
Abstract
Lung and colon cancer are among humanity's most common and deadly cancers. In 2020, 4.19 million people were diagnosed with lung or colon cancer, and more than 2.7 million died worldwide. Some people develop lung and colon cancer simultaneously: smoking causes lung cancer and can lead to an abnormal diet, which in turn contributes to colon cancer. There are many techniques for diagnosing lung and colon cancer, most notably biopsy and its analysis in laboratories, which is hampered by the scarcity of health centers and medical staff, especially in developing countries. Moreover, manual diagnosis takes a long time and is subject to differing opinions among doctors. Artificial intelligence techniques can address these challenges. In this study, three strategies were developed, each with two systems, for early diagnosis using histological images of the LC25000 dataset. The histological images were enhanced, and the contrast of affected areas was increased. The GoogLeNet and VGG-19 models in all systems produce high-dimensional features, so redundant and unnecessary features were removed by the PCA method to reduce dimensionality while retaining essential features. The first strategy diagnoses the histological images of the LC25000 dataset with an ANN using the crucial features of the GoogLeNet and VGG-19 models separately. The second strategy uses an ANN with the combined features of GoogLeNet and VGG-19: one system reduces dimensionality before combining the features, while the other combines the high-dimensional features first and then reduces dimensionality. The third strategy uses an ANN with a fusion of CNN-model features (GoogLeNet and VGG-19) and handcrafted features. With the fused VGG-19 and handcrafted features, the ANN reached a sensitivity of 99.85%, a precision of 100%, an accuracy of 99.64%, a specificity of 100%, and an AUC of 99.86%.
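The fuse-then-reduce step of the second strategy can be sketched as follows; the backbone feature matrices here are random stand-ins with made-up dimensions, since the abstract does not give the exact feature sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random stand-ins for per-image deep features from two CNN backbones
# (the study uses GoogLeNet and VGG-19; dimensions here are invented).
feats_a = rng.random((50, 64))
feats_b = rng.random((50, 32))

def pca_reduce(X, n_components):
    """Project X onto its top principal components via SVD."""
    Xc = X - X.mean(axis=0)                        # centre each feature
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T                # scores on top components

# Fuse first, then reduce dimensionality (one of the two system variants).
fused = np.concatenate([feats_a, feats_b], axis=1)
reduced = pca_reduce(fused, n_components=10)
# reduced now holds a compact 10-dimensional descriptor per image,
# suitable as input to an ANN classifier.
```

The alternative variant simply swaps the order: apply `pca_reduce` to each backbone's features separately and concatenate the reduced vectors afterwards.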
Affiliation(s)
- Mohammed Al-Jabbar
- Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
- Mohammed Alshahrani
- Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
- Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen