1
Gunashekar DD, Bielak L, Oerther B, Benndorf M, Nedelcu A, Hickey S, Zamboglou C, Grosu AL, Bock M. Comparison of data fusion strategies for automated prostate lesion detection using mpMRI correlated with whole mount histology. Radiat Oncol 2024; 19:96. PMID: 39080735; PMCID: PMC11287985; DOI: 10.1186/s13014-024-02471-0.
Abstract
BACKGROUND In this work, we compare input-level, feature-level, and decision-level data fusion techniques for the automatic detection of clinically significant prostate cancer lesions (csPCa). METHODS Multiple deep-learning CNN architectures were developed using the U-Net as the baseline. The CNNs take as input either multiparametric MRI images (T2W, ADC, and high b-value) combined with quantitative clinical data (prostate-specific antigen (PSA), PSA density (PSAD), prostate gland volume, and gross tumor volume (GTV)), or mpMRI images alone (n = 118). In addition, co-registered ground-truth data from whole-mount histopathology images (n = 22) were used as a test set for evaluation. RESULTS For early/intermediate/late-level fusion, the CNNs achieved a precision of 0.41/0.51/0.61, a recall of 0.18/0.22/0.25, an average precision of 0.13/0.19/0.27, and F scores of 0.55/0.67/0.76. The Dice-Sørensen coefficient (DSC) was used to evaluate the influence of combining mpMRI with quantitative clinical data on csPCa detection. Comparing against the ground truth the predictions of CNNs trained with mpMRI plus quantitative clinical data versus CNNs trained with mpMRI images alone, we obtained DSCs of 0.30/0.34/0.36 and 0.26/0.33/0.34, respectively. Additionally, we evaluated the influence of each mpMRI input channel on the task of csPCa detection and obtained a DSC of 0.14/0.25/0.28. CONCLUSION The results show that the decision-level fusion network performs best for the task of prostate lesion detection. Combining mpMRI data with quantitative clinical data does not yield significant differences between these networks (p = 0.26/0.62/0.85). CNNs trained with all mpMRI channels outperform CNNs with fewer input channels, which is consistent with current clinical protocols, where the same input is used for PI-RADS lesion scoring.
TRIAL REGISTRATION The trial was registered retrospectively at the German Register for Clinical Studies (DRKS) under proposal numbers 476/14 and 476/19.
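The Dice-Sørensen coefficient used to score the predictions above is straightforward to compute; as an illustrative sketch (not code from the paper), for flat binary masks:

```python
def dice_coefficient(pred, target):
    """Dice-Sørensen coefficient between two flat binary masks (lists of 0/1)."""
    assert len(pred) == len(target), "masks must have equal size"
    # Overlap counted where both masks are positive
    intersection = sum(p and t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Convention: two empty masks agree perfectly
    return 2.0 * intersection / total if total else 1.0
```

A DSC of 1.0 means perfect overlap with the ground truth; 0.0 means no overlap.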
Affiliation(s)
- Deepa Darshini Gunashekar
- Division of Medical Physics, Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany
- Lars Bielak
- Division of Medical Physics, Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany
- Benedict Oerther
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Matthias Benndorf
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Andrea Nedelcu
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Samantha Hickey
- Division of Medical Physics, Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Constantinos Zamboglou
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- German Oncology Center, European University Cyprus, Limassol, Cyprus
- Anca-Ligia Grosu
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany
- Michael Bock
- Division of Medical Physics, Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany
2
Zaridis DI, Mylona E, Tsiknakis N, Tachos NS, Matsopoulos GK, Marias K, Tsiknakis M, Fotiadis DI. ProLesA-Net: A multi-channel 3D architecture for prostate MRI lesion segmentation with multi-scale channel and spatial attentions. Patterns (N Y) 2024; 5:100992. PMID: 39081575; PMCID: PMC11284496; DOI: 10.1016/j.patter.2024.100992.
Abstract
Prostate cancer diagnosis and treatment rely on precise MRI lesion segmentation, which is notably challenging for small (<15 mm) and intermediate (15-30 mm) lesions. Our study introduces ProLesA-Net, a multi-channel 3D deep-learning architecture with multi-scale squeeze-and-excitation and attention gate mechanisms. Tested against six models across two datasets, ProLesA-Net significantly outperformed them in key metrics: the Dice score increased by 2.2%, and the Hausdorff distance and average surface distance improved by 0.5 mm, with recall and precision also improving. Specifically, for lesions under 15 mm, our model showed a notable increase in five key metrics. In summary, ProLesA-Net consistently ranked at the top, demonstrating enhanced performance and stability. This advancement addresses crucial challenges in prostate lesion segmentation, enhancing clinical decision making and expediting treatment processes.
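The squeeze-and-excitation idea that ProLesA-Net builds on can be sketched in a few lines. This is a generic toy illustration, not the paper's implementation: channels are flat lists, and `w1`/`w2` are stand-ins for learned weight matrices.

```python
import math

def squeeze_excite(feature_maps, w1, w2):
    """Toy squeeze-and-excitation: recalibrate channels by learned gates."""
    # Squeeze: global average pooling collapses each channel to one scalar
    z = [sum(ch) / len(ch) for ch in feature_maps]
    # Excitation: small MLP (ReLU then sigmoid) produces per-channel gates in (0, 1)
    h = [max(0.0, sum(w * zj for w, zj in zip(row, z))) for row in w1]
    s = [1.0 / (1.0 + math.exp(-sum(w * hj for w, hj in zip(row, h)))) for row in w2]
    # Scale: reweight every activation in a channel by that channel's gate
    return [[gate * a for a in ch] for gate, ch in zip(s, feature_maps)]
```

In a real network the two weight matrices form a bottleneck (channel reduction then expansion) and are trained end to end; here identity weights suffice to show the gating behavior.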
Affiliation(s)
- Dimitrios I. Zaridis
- Biomedical Research Institute, FORTH, 45110 Ioannina, Greece
- Biomedical Engineering Laboratory, School of Electrical & Computer Engineering, National Technical University of Athens, 9 Iroon Polytechniou St., 15780 Athens, Greece
- Unit of Medical Technology and Intelligent Information Systems, Department of Materials Science and Engineering, University of Ioannina, 45110 Ioannina, Greece
- Eugenia Mylona
- Biomedical Research Institute, FORTH, 45110 Ioannina, Greece
- Unit of Medical Technology and Intelligent Information Systems, Department of Materials Science and Engineering, University of Ioannina, 45110 Ioannina, Greece
- Nikolaos S. Tachos
- Biomedical Research Institute, FORTH, 45110 Ioannina, Greece
- Unit of Medical Technology and Intelligent Information Systems, Department of Materials Science and Engineering, University of Ioannina, 45110 Ioannina, Greece
- George K. Matsopoulos
- Biomedical Engineering Laboratory, School of Electrical & Computer Engineering, National Technical University of Athens, 9 Iroon Polytechniou St., 15780 Athens, Greece
- Kostas Marias
- Computational Biomedicine Laboratory, FORTH, Heraklion, Greece
- Dimitrios I. Fotiadis
- Biomedical Research Institute, FORTH, 45110 Ioannina, Greece
- Unit of Medical Technology and Intelligent Information Systems, Department of Materials Science and Engineering, University of Ioannina, 45110 Ioannina, Greece
3
Talyshinskii A, Hameed BMZ, Ravinder PP, Naik N, Randhawa P, Shah M, Rai BP, Tokas T, Somani BK. Catalyzing Precision Medicine: Artificial Intelligence Advancements in Prostate Cancer Diagnosis and Management. Cancers (Basel) 2024; 16:1809. PMID: 38791888; PMCID: PMC11119252; DOI: 10.3390/cancers16101809.
Abstract
BACKGROUND The aim was to analyze the current state of deep learning (DL)-based prostate cancer (PCa) diagnosis, with a focus on magnetic resonance (MR) prostate reconstruction; PCa detection/stratification/reconstruction; positron emission tomography/computed tomography (PET/CT); androgen deprivation therapy (ADT); prostate biopsy; and the associated challenges and their clinical implications. METHODS A search of the PubMed database was conducted based on inclusion and exclusion criteria for the use of DL methods within the abovementioned areas. RESULTS A total of 784 articles were found, of which 64 were included. Reconstruction of the prostate, detection and stratification of prostate cancer, reconstruction of prostate cancer, and diagnosis on PET/CT, ADT, and biopsy were analyzed in 21, 22, 6, 7, 2, and 6 studies, respectively. Among studies describing DL use for MR-based purposes, datasets acquired at 3 T, 1.5 T, and both 3 T and 1.5 T were used in 18/19/5, 0/1/0, and 3/2/1 studies, respectively. Six of the seven studies analyzing DL for PET/CT diagnosis used data from a single institution. Among the radiotracers, [68Ga]Ga-PSMA-11, [18F]DCFPyL, and [18F]PSMA-1007 were used in 5, 1, and 1 study, respectively. Only two studies that analyzed DL in the context of ADT met the inclusion criteria; both were performed with a single-institution dataset with only manual labeling of training data. The studies analyzing DL for prostate biopsy were performed with single- and multi-institutional datasets; TeUS, TRUS, and MRI were used as input modalities in two, three, and one study, respectively. CONCLUSION DL models in prostate cancer diagnosis show promise but are not yet ready for clinical use due to variability in methods, labels, and evaluation criteria. Conducting additional research while acknowledging all the limitations outlined is crucial for reinforcing the utility and effectiveness of DL-based models in clinical settings.
Affiliation(s)
- Ali Talyshinskii
- Department of Urology and Andrology, Astana Medical University, Astana 010000, Kazakhstan
- Prajwal P. Ravinder
- Department of Urology, Kasturba Medical College, Mangaluru, Manipal Academy of Higher Education, Manipal 576104, India
- Nithesh Naik
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Princy Randhawa
- Department of Mechatronics, Manipal University Jaipur, Jaipur 303007, India
- Milap Shah
- Department of Urology, Aarogyam Hospital, Ahmedabad 380014, India
- Bhavan Prasad Rai
- Department of Urology, Freeman Hospital, Newcastle upon Tyne NE7 7DN, UK
- Theodoros Tokas
- Department of Urology, Medical School, University General Hospital of Heraklion, University of Crete, 14122 Heraklion, Greece
- Bhaskar K. Somani
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Department of Urology, University Hospital Southampton NHS Trust, Southampton SO16 6YD, UK
4
Duan L, Liu Z, Wan F, Dai B. Advantage of whole-mount histopathology in prostate cancer: current applications and future prospects. BMC Cancer 2024; 24:448. PMID: 38605339; PMCID: PMC11007899; DOI: 10.1186/s12885-024-12071-6.
Abstract
BACKGROUND Whole-mount histopathology (WMH) has been a powerful tool for investigating the characteristics of prostate cancer, but its latest advances have not yet been summarized. In this review, we offer a comprehensive exposition of current research utilizing WMH in the diagnosis and treatment of prostate cancer (PCa), summarize the clinical advantages of WMH, and outline future prospects. METHODS An extensive PubMed search was conducted up to February 26, 2023, with the search terms "prostate", "whole-mount", and "large format histology", limited to the last 4 years. Publications included were restricted to those in English; other papers were also cited to contribute to a better understanding. RESULTS WMH exhibits enhanced legibility for pathologists, which improves the efficacy of pathologic examination and provides educational value. It simplifies histopathological registration with medical images and thus serves as a convincing reference standard for investigating imaging indicators and for medical image-based artificial intelligence (AI). Additionally, WMH provides comprehensive histopathological information for tumor volume estimation and post-treatment evaluation, and provides direct pathological data for AI readers. It also offers complete spatial context for estimating the location of both intraprostatic and extraprostatic cancerous regions. CONCLUSIONS WMH provides unique benefits in several aspects of the clinical diagnosis and treatment of PCa, and its utilization facilitates the development and refinement of various clinical technologies. We believe that WMH will play an important role in future clinical applications.
Affiliation(s)
- Lewei Duan
- Department of Urology, Fudan University Shanghai Cancer Center, 200032, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, 200032, Shanghai, China
- Shanghai Genitourinary Cancer Institute, 200032, Shanghai, China
- Zheng Liu
- Department of Urology, Fudan University Shanghai Cancer Center, 200032, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, 200032, Shanghai, China
- Shanghai Genitourinary Cancer Institute, 200032, Shanghai, China
- Fangning Wan
- Department of Urology, Fudan University Shanghai Cancer Center, 200032, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, 200032, Shanghai, China
- Shanghai Genitourinary Cancer Institute, 200032, Shanghai, China
- Bo Dai
- Department of Urology, Fudan University Shanghai Cancer Center, 200032, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, 200032, Shanghai, China
- Shanghai Genitourinary Cancer Institute, 200032, Shanghai, China
5
Yin X, Wang K, Wang L, Yang Z, Zhang Y, Wu P, Zhao C, Zhang J. Algorithms for classification of sequences and segmentation of prostate gland: an external validation study. Abdom Radiol (NY) 2024; 49:1275-1287. PMID: 38436698; DOI: 10.1007/s00261-024-04241-8.
Abstract
OBJECTIVES The aim of the study was to externally validate two AI models: one for the classification of prostate mpMRI sequences and one for segmentation of the prostate gland on T2WI. MATERIALS AND METHODS mpMRI data from 719 patients were retrospectively collected from two hospitals, acquired on nine MR scanners from four different vendors between February 2018 and May 2022. The Med3D pretrained deep-learning architecture was used to perform image classification, and UNet-3D was used to segment the prostate gland. The images were classified into one of nine image types by the mode. The segmentation model was validated using T2WI images, and segmentation accuracy was evaluated by measuring the DSC, VS, and AHD. Finally, the efficacy of the models was compared across different MR field strengths and sequences. RESULTS 20,551 image groups were obtained from 719 MR studies. The classification model accuracy was 99%, with a kappa of 0.932. The precision, recall, and F1 values for the nine image types differed significantly (all P < 0.001). The accuracy for 1.436 T, 1.5 T, and 3.0 T scanners was 87%, 86%, and 98%, respectively (P < 0.001). For the segmentation model, the median DSC was 0.942 to 0.955, the median VS was 0.974 to 0.982, and the median AHD was 5.55 to 6.49 mm; these values also differed significantly across the three magnetic field strengths (all P < 0.001). CONCLUSION The AI models for mpMRI image classification and prostate segmentation demonstrated good performance during external validation, which could enhance efficiency in prostate volume measurement and cancer detection with mpMRI. CLINICAL RELEVANCE STATEMENT These models can greatly improve work efficiency in cancer detection, measurement of prostate volume, and guided biopsies.
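The kappa of 0.932 reported for the classification model is Cohen's kappa, chance-corrected agreement between predicted and true labels. A minimal stdlib sketch (illustrative, not the study's code):

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(y_true)
    assert n == len(y_pred) and n > 0
    # Observed agreement: fraction of exact label matches
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # Chance agreement from the two marginal label distributions
    true_counts, pred_counts = Counter(y_true), Counter(y_pred)
    expected = sum(true_counts[c] * pred_counts[c] for c in true_counts) / (n * n)
    return (observed - expected) / (1 - expected)
```

Kappa is preferred over raw accuracy for multi-class sequence labeling because it discounts agreement expected by chance under the observed label frequencies.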
Affiliation(s)
- Xuemei Yin
- Department of Medical Imaging, First Hospital of Qinhuangdao, 066000, Qinhuangdao City, Hebei Province, China
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, 100050, Beijing, China
- Kexin Wang
- School of Basic Medical Sciences, Capital Medical University, 100052, Beijing, China
- Liang Wang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, 100050, Beijing, China
- Zhenghan Yang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, 100050, Beijing, China
- Yaofeng Zhang
- Beijing Smart Tree Medical Technology Co. Ltd, 100011, Beijing, China
- Pengsheng Wu
- Beijing Smart Tree Medical Technology Co. Ltd, 100011, Beijing, China
- Chenglin Zhao
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, 100050, Beijing, China
- Jun Zhang
- Department of Medical Imaging, First Hospital of Qinhuangdao, 066000, Qinhuangdao City, Hebei Province, China
6
Ferrero A, Ghelichkhan E, Manoochehri H, Ho MM, Albertson DJ, Brintz BJ, Tasdizen T, Whitaker RT, Knudsen BS. HistoEM: A Pathologist-Guided and Explainable Workflow Using Histogram Embedding for Gland Classification. Mod Pathol 2024; 37:100447. PMID: 38369187; DOI: 10.1016/j.modpat.2024.100447.
Abstract
Pathologists have, over several decades, developed criteria for diagnosing and grading prostate cancer. However, this knowledge has not, so far, been included in the design of convolutional neural networks (CNN) for prostate cancer detection and grading. Further, it is not known whether the features learned by machine-learning algorithms coincide with diagnostic features used by pathologists. We propose a framework that enforces algorithms to learn the cellular and subcellular differences between benign and cancerous prostate glands in digital slides from hematoxylin and eosin-stained tissue sections. After accurate gland segmentation and exclusion of the stroma, the central component of the pipeline, named HistoEM, utilizes a histogram embedding of features from the latent space of the CNN encoder. Each gland is represented by 128 feature-wise histograms that provide the input into a second network for benign vs cancer classification of the whole gland. Cancer glands are further processed by a U-Net structured network to separate low-grade from high-grade cancer. Our model demonstrates similar performance compared with other state-of-the-art prostate cancer grading models with gland-level resolution. To understand the features learned by HistoEM, we first rank features based on the distance between benign and cancer histograms and visualize the tissue origins of the 2 most important features. A heatmap of pixel activation by each feature is generated using Grad-CAM and overlaid on nuclear segmentation outlines. We conclude that HistoEM, similar to pathologists, uses nuclear features for the detection of prostate cancer. Altogether, this novel approach can be broadly deployed to visualize computer-learned features in histopathology images.
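The histogram-embedding idea at the core of HistoEM, representing a gland by one histogram per latent feature channel, can be illustrated with a toy sketch (not the authors' implementation; the bin count and value range here are assumptions):

```python
def histogram_embedding(feature_maps, bins=8, lo=0.0, hi=1.0):
    """Summarize a region by one normalized histogram per feature channel.

    feature_maps: list of channels, each a flat list of activation values.
    Returns a list of per-channel histograms (each sums to 1), which can be
    concatenated into a fixed-length vector for a downstream classifier.
    """
    width = (hi - lo) / bins
    embedding = []
    for channel in feature_maps:
        hist = [0] * bins
        for v in channel:
            # Clamp out-of-range values into the first/last bin
            idx = min(bins - 1, max(0, int((v - lo) / width)))
            hist[idx] += 1
        embedding.append([count / len(channel) for count in hist])
    return embedding
```

The appeal of this representation is that it is size-invariant (glands of any area map to the same vector length) and each histogram stays interpretable as the distribution of one learned feature.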
Affiliation(s)
- Alessandro Ferrero
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Elham Ghelichkhan
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Hamid Manoochehri
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Man Minh Ho
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Tolga Tasdizen
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Ross T Whitaker
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
7
Tsui JMG, Kehayias CE, Leeman JE, Nguyen PL, Peng L, Yang DD, Moningi S, Martin N, Orio PF, D'Amico AV, Bredfeldt JS, Lee LK, Guthier CV, King MT. Assessing the Feasibility of Using Artificial Intelligence-Segmented Dominant Intraprostatic Lesion for Focal Intraprostatic Boost With External Beam Radiation Therapy. Int J Radiat Oncol Biol Phys 2024; 118:74-84. PMID: 37517600; DOI: 10.1016/j.ijrobp.2023.07.029.
Abstract
PURPOSE The delineation of dominant intraprostatic gross tumor volumes (GTVs) on multiparametric magnetic resonance imaging (mpMRI) can be subject to interobserver variability. We evaluated whether deep learning artificial intelligence (AI)-segmented GTVs can provide a similar degree of intraprostatic boosting with external beam radiation therapy (EBRT) as radiation oncologist (RO)-delineated GTVs. METHODS AND MATERIALS We identified 124 patients who underwent mpMRI followed by EBRT between 2010 and 2013. A reference GTV was delineated by an RO and approved by a board-certified radiologist. We trained an AI algorithm for GTV delineation on 89 patients, and tested the algorithm on 35 patients, each with at least 1 PI-RADS (Prostate Imaging Reporting and Data System) 4 or 5 lesion (46 total lesions). We then asked 5 additional ROs to independently delineate GTVs on the test set. We compared lesion detectability and geometric accuracy of the GTVs from AI and 5 ROs against the reference GTV. Then, we generated EBRT plans (77 Gy prostate) that boosted each observer-specific GTV to 95 Gy. We compared reference GTV dose (D98%) across observers using a mixed-effects model. RESULTS On a lesion level, AI GTV exhibited a sensitivity of 82.6% and positive predictive value of 86.4%. Respective ranges among the 5 RO GTVs were 84.8% to 95.7% and 95.1% to 100.0%. Among 30 GTVs mutually identified by all observers, no significant differences in Dice coefficient were detected between AI and any of the 5 ROs. Across all patients, only 2 of 5 ROs had a reference GTV D98% that significantly differed from that of AI by 2.56 Gy (P = .02) and 3.20 Gy (P = .003). The presence of false-negative (-5.97 Gy; P < .001) but not false-positive (P = .24) lesions was associated with reference GTV D98%. CONCLUSIONS AI-segmented GTVs demonstrate potential for intraprostatic boosting, although the degree of boosting may be adversely affected by false-negative lesions. Prospective review of AI-segmented GTVs remains essential.
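The D98% endpoint above is a standard dose-volume-histogram metric: the minimum dose received by the hottest 98% of a structure's voxels, i.e. a "cold spot" measure. A toy sketch of the Dx% computation (illustrative only, not planning-system code):

```python
import math

def dose_at_volume(doses, volume_fraction):
    """Dx%: minimum dose received by the hottest `volume_fraction` of voxels.

    doses: flat list of per-voxel doses (Gy) for one structure.
    dose_at_volume(doses, 0.98) gives D98%, the dose exceeded or met
    in 98% of the structure volume.
    """
    ordered = sorted(doses, reverse=True)  # hottest voxels first
    k = max(1, math.ceil(volume_fraction * len(ordered)))
    return ordered[k - 1]
```

Real treatment-planning systems compute this from a dose grid resampled onto the structure, and interpolate between voxels, but the ranking idea is the same.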
Affiliation(s)
- James M G Tsui
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts; Department of Radiation Oncology, McGill University Health Centre, Montreal, Quebec, Canada
- Christopher E Kehayias
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Jonathan E Leeman
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Paul L Nguyen
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Luke Peng
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- David D Yang
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Shalini Moningi
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Neil Martin
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Peter F Orio
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Anthony V D'Amico
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Jeremy S Bredfeldt
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Leslie K Lee
- Department of Radiology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Christian V Guthier
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Martin T King
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
8
Champendal M, Müller H, Prior JO, Dos Reis CS. A scoping review of interpretability and explainability concerning artificial intelligence methods in medical imaging. Eur J Radiol 2023; 169:111159. PMID: 37976760; DOI: 10.1016/j.ejrad.2023.111159.
Abstract
PURPOSE To review eXplainable Artificial Intelligence (XAI) methods available for medical imaging (MI). METHOD A scoping review was conducted following the Joanna Briggs Institute's methodology. The search was performed on PubMed, Embase, CINAHL, Web of Science, bioRxiv, medRxiv, and Google Scholar. Studies published in French and English after 2017 were included. Keyword combinations and descriptors related to explainability and MI modalities were employed. Two independent reviewers screened titles, abstracts, and full texts, resolving differences through discussion. RESULTS 228 studies met the criteria. XAI publications are increasing, targeting MRI (n = 73), radiography (n = 47), and CT (n = 46). Lung (n = 82) and brain (n = 74) pathologies, COVID-19 (n = 48), Alzheimer's disease (n = 25), and brain tumors (n = 15) are the main pathologies explained. Explanations are presented visually (n = 186), numerically (n = 67), rule-based (n = 11), textually (n = 11), and example-based (n = 6). Commonly explained tasks include classification (n = 89), prediction (n = 47), diagnosis (n = 39), detection (n = 29), segmentation (n = 13), and image quality improvement (n = 6). The most frequently provided explanations were local (78.1%); 5.7% were global, and 16.2% combined both local and global approaches. Post-hoc approaches were predominantly employed. The terminology used varied, sometimes indistinctly using explainable (n = 207), interpretable (n = 187), understandable (n = 112), transparent (n = 61), reliable (n = 31), and intelligible (n = 3). CONCLUSION The number of XAI publications in medical imaging is increasing, primarily focusing on applying XAI techniques to MRI, CT, and radiography for classifying and predicting lung and brain pathologies. Visual and numerical output formats are predominantly used. Terminology standardisation remains a challenge, as terms like "explainable" and "interpretable" are sometimes used indistinctly. Future XAI development should consider user needs and perspectives.
Affiliation(s)
- Mélanie Champendal
- School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, Switzerland
- Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland
- Henning Müller
- Informatics Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland
- Medical Faculty, University of Geneva, Geneva, Switzerland
- John O Prior
- Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland
- Nuclear Medicine and Molecular Imaging Department, Lausanne University Hospital (CHUV), Lausanne, Switzerland
- Cláudia Sá Dos Reis
- School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, Switzerland
9
Mullan S, Sonka M. Kernel-weighted contribution: a method of visual attribution for 3D deep learning segmentation in medical imaging. J Med Imaging (Bellingham) 2023; 10:054001. PMID: 37692092; PMCID: PMC10482593; DOI: 10.1117/1.jmi.10.5.054001.
Abstract
Purpose Explaining deep learning model decisions, especially those for medical image segmentation, is a critical step toward the understanding and validation that will enable these powerful tools to see more widespread adoption in healthcare. We introduce kernel-weighted contribution, a visual explanation method for three-dimensional medical image segmentation models that produces accurate and interpretable explanations. Unlike previous attribution methods, kernel-weighted contribution is explicitly designed for medical image segmentation models and assesses feature importance using the relative contribution of each considered activation map to the predicted segmentation. Approach We evaluate our method on a synthetic dataset that provides complete knowledge over input features and a comprehensive explanation quality metric using this ground truth. Our method and three other prevalent attribution methods were applied to five different model layer combinations to explain segmentation predictions for 100 test samples and compared using this metric. Results Kernel-weighted contribution produced superior explanations of obtained image segmentations when applied to both encoder and decoder sections of a trained model as compared to other layer combinations (p < 0.0005). In between-method comparisons, kernel-weighted contribution produced superior explanations compared with other methods using the same model layers in four of five experiments (p < 0.0005) and showed equivalently superior performance to GradCAM++ when only using non-transpose convolution layers of the model decoder (p = 0.008). Conclusion The reported method produced explanations of superior quality uniquely suited to fully utilize the specific architectural considerations present in image and especially medical image segmentation models. Both the synthetic dataset and implementation of our method are available to the research community.
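For context, the CAM/Grad-CAM family that kernel-weighted contribution is compared against builds a heatmap as a weighted sum of activation maps followed by a ReLU. The sketch below is a toy illustration of that generic recipe, not the paper's method; the weights stand in for the per-map importance scores each method derives differently.

```python
def cam_heatmap(activation_maps, weights):
    """Toy class-activation-style heatmap: weighted sum of 2D maps, then ReLU.

    activation_maps: list of 2D maps (lists of rows); weights: one scalar per map.
    """
    rows, cols = len(activation_maps[0]), len(activation_maps[0][0])
    heat = [[0.0] * cols for _ in range(rows)]
    for w, amap in zip(weights, activation_maps):
        for r in range(rows):
            for c in range(cols):
                heat[r][c] += w * amap[r][c]
    # ReLU keeps only regions that positively support the prediction
    return [[max(0.0, v) for v in row] for row in heat]
```

Attribution methods in this family differ mainly in how `weights` are computed (gradients, higher-order terms, or, per the abstract, relative contribution to the predicted segmentation).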
Affiliation(s)
- Sean Mullan
- University of Iowa, Iowa Institute for Biomedical Imaging, Iowa City, Iowa, United States
- Milan Sonka
- University of Iowa, Iowa Institute for Biomedical Imaging, Iowa City, Iowa, United States
10
Hagiwara A, Fujita S, Kurokawa R, Andica C, Kamagata K, Aoki S. Multiparametric MRI: From Simultaneous Rapid Acquisition Methods and Analysis Techniques Using Scoring, Machine Learning, Radiomics, and Deep Learning to the Generation of Novel Metrics. Invest Radiol 2023; 58:548-560. [PMID: 36822661 PMCID: PMC10332659 DOI: 10.1097/rli.0000000000000962] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Revised: 01/10/2023] [Indexed: 02/25/2023]
Abstract
With the recent advancements in rapid imaging methods, higher numbers of contrasts and quantitative parameters can be acquired in less and less time. Some acquisition models simultaneously obtain multiparametric images and quantitative maps to reduce scan times and avoid potential issues associated with the registration of different images. Multiparametric magnetic resonance imaging (MRI) has the potential to provide complementary information on a target lesion and thus overcome the limitations of individual techniques. In this review, we introduce methods to acquire multiparametric MRI data in a clinically feasible scan time with a particular focus on simultaneous acquisition techniques, and we discuss how multiparametric MRI data can be analyzed as a whole rather than each parameter separately. Such data analysis approaches include clinical scoring systems, machine learning, radiomics, and deep learning. Other techniques combine multiple images to create new quantitative maps associated with meaningful aspects of human biology. They include the magnetic resonance g-ratio, the ratio of the inner to the outer diameter of a nerve fiber, and the aerobic glycolytic index, which captures the metabolic status of tumor tissues.
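The MR g-ratio mentioned above is commonly estimated from the myelin volume fraction (MVF) and axon volume fraction (AVF) as g = sqrt(AVF / (AVF + MVF)). A minimal sketch of that arithmetic, assuming MVF and AVF estimates are already available from the underlying quantitative maps:

```python
import math

def mr_g_ratio(mvf, avf):
    """Aggregate MR g-ratio from myelin volume fraction (MVF) and
    axon volume fraction (AVF), using the commonly cited formulation
    g = sqrt(AVF / (AVF + MVF)), i.e. the inner/outer fiber
    diameter ratio."""
    fvf = avf + mvf  # fiber volume fraction
    if fvf == 0:
        raise ValueError("no fiber content in voxel")
    return math.sqrt(avf / fvf)
```

For healthy white matter, with MVF and AVF roughly equal, this yields g of about 0.7, in line with commonly reported values.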
Affiliation(s)
- Akifumi Hagiwara
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
- Shohei Fujita
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Ryo Kurokawa
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Division of Neuroradiology, Department of Radiology, University of Michigan, Ann Arbor, Michigan
- Christina Andica
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
- Koji Kamagata
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
- Shigeki Aoki
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
11
Kim H, Kang SW, Kim JH, Nagar H, Sabuncu M, Margolis DJA, Kim CK. The role of AI in prostate MRI quality and interpretation: Opportunities and challenges. Eur J Radiol 2023; 165:110887. [PMID: 37245342 DOI: 10.1016/j.ejrad.2023.110887] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2023] [Revised: 05/06/2023] [Accepted: 05/20/2023] [Indexed: 05/30/2023]
Abstract
Prostate MRI plays an important role in imaging the prostate gland and surrounding tissues, particularly in the diagnosis and management of prostate cancer. With the widespread adoption of multiparametric magnetic resonance imaging in recent years, concerns surrounding the variability of imaging quality have garnered increased attention. Several factors contribute to the inconsistency of image quality, such as acquisition parameters, scanner differences and interobserver variability. While efforts have been made to standardize image acquisition and interpretation via the development of systems such as PI-RADS and PI-QUAL, these scoring systems still depend on the subjective experience and acumen of humans. Artificial intelligence (AI) has been increasingly used in many applications, including medical imaging, due to its ability to automate tasks and lower human error rates. These advantages have the potential to standardize the tasks of image interpretation and quality control of prostate MRI. Despite its potential, thorough validation is required before the implementation of AI in clinical practice. In this article, we explore the opportunities and challenges of AI, with a focus on the interpretation and quality of prostate MRI.
Affiliation(s)
- Heejong Kim
- Department of Radiology, Weill Cornell Medical College, 525 E 68th St Box 141, New York, NY 10021, United States
- Shin Won Kang
- Research Institute for Future Medicine, Samsung Medical Center, Republic of Korea
- Jae-Hun Kim
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Republic of Korea
- Himanshu Nagar
- Department of Radiation Oncology, Weill Cornell Medical College, 525 E 68th St, New York, NY 10021, United States
- Mert Sabuncu
- Department of Radiology, Weill Cornell Medical College, 525 E 68th St Box 141, New York, NY 10021, United States
- Daniel J A Margolis
- Department of Radiology, Weill Cornell Medical College, 525 E 68th St Box 141, New York, NY 10021, United States
- Chan Kyo Kim
- Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Republic of Korea
12
Chaddad A, Tan G, Liang X, Hassan L, Rathore S, Desrosiers C, Katib Y, Niazi T. Advancements in MRI-Based Radiomics and Artificial Intelligence for Prostate Cancer: A Comprehensive Review and Future Prospects. Cancers (Basel) 2023; 15:3839. [PMID: 37568655 PMCID: PMC10416937 DOI: 10.3390/cancers15153839] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2023] [Revised: 07/25/2023] [Accepted: 07/26/2023] [Indexed: 08/13/2023] Open
Abstract
The use of multiparametric magnetic resonance imaging (mpMRI) has become a common technique used in guiding biopsy and developing treatment plans for prostate lesions. While this technique is effective, non-invasive methods such as radiomics have gained popularity for extracting imaging features to develop predictive models for clinical tasks. The aim is to minimize invasive processes for improved management of prostate cancer (PCa). This study reviews recent research progress in MRI-based radiomics for PCa, including the radiomics pipeline and potential factors affecting personalized diagnosis. The integration of artificial intelligence (AI) with medical imaging is also discussed, in line with the development trend of radiogenomics and multi-omics. The survey highlights the need for more data from multiple institutions to avoid bias and generalize the predictive model. The AI-based radiomics model is considered a promising clinical tool with good prospects for application.
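As a toy illustration of one early step in the radiomics pipeline reviewed here, extracting first-order intensity statistics from a segmented ROI (a tiny subset of what dedicated radiomics toolkits compute), a sketch could be:

```python
import numpy as np

def first_order_features(image, mask, bins=16):
    """Minimal first-order radiomics sketch: intensity statistics
    over a segmented ROI. Feature names and binning are illustrative
    choices, not a standard-conformant implementation."""
    roi = image[mask.astype(bool)]
    # histogram-based probabilities for the entropy feature
    hist, _ = np.histogram(roi, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": float(roi.mean()),
        "std": float(roi.std()),
        "min": float(roi.min()),
        "max": float(roi.max()),
        "entropy": float(-(p * np.log2(p)).sum()),
    }
```

In a full pipeline these first-order values would sit alongside shape and texture features before model building.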
Affiliation(s)
- Ahmad Chaddad
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China
- The Laboratory for Imagery, Vision and Artificial Intelligence, École de Technologie Supérieure (ETS), Montreal, QC H3C 1K3, Canada
- Guina Tan
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China
- Xiaojuan Liang
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China
- Lama Hassan
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China
- Christian Desrosiers
- The Laboratory for Imagery, Vision and Artificial Intelligence, École de Technologie Supérieure (ETS), Montreal, QC H3C 1K3, Canada
- Yousef Katib
- Department of Radiology, Taibah University, Al Madinah 42361, Saudi Arabia
- Tamim Niazi
- Lady Davis Institute for Medical Research, McGill University, Montreal, QC H3T 1E2, Canada
13
Harder FN, Heming CAM, Haider MA. mpMRI Interpretation in Active Surveillance for Prostate Cancer-An overview of the PRECISE score. Abdom Radiol (NY) 2023; 48:2449-2455. [PMID: 37160473 DOI: 10.1007/s00261-023-03912-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2023] [Revised: 03/31/2023] [Accepted: 04/05/2023] [Indexed: 05/11/2023]
Abstract
Active surveillance (AS) is now included in all major guidelines for patients with low-risk PCa and selected patients with intermediate-risk PCa. Several studies have highlighted the potential benefit of multiparametric magnetic resonance imaging (mpMRI) in AS and it has been adopted in some guidelines. However, uncertainty remains about whether serial mpMRI can help to safely reduce the number of required repeat biopsies under AS. In 2017, the European School of Oncology initiated the Prostate Cancer Radiological Estimation of Change in Sequential Evaluation (PRECISE) panel, which proposed the PRECISE scoring system to assess the likelihood of radiological tumor progression on serial mpMRI. The PRECISE scoring system remains the only major system evaluated in multiple publications. In this review article, we discuss the current body of literature investigating the application of PRECISE, as it is not yet an established standard in mpMRI reporting. We delineate the strengths of PRECISE and its potential added value. We also underline potential weaknesses of the PRECISE scoring system, which might be tackled in future versions to further increase its value in AS.
Affiliation(s)
- Felix N Harder
- Institute of Diagnostic and Interventional Radiology, Technical University of Munich, Munich, Germany
- Lunenfeld-Tanenbaum Research Institute, Sinai Health System, 600 University Avenue, Toronto, ON, M5G 1X5, Canada
- Joint Department of Medical Imaging, University Health Network, Sinai Health System and University of Toronto, Toronto, ON, M5G 1X5, Canada
- Carolina A M Heming
- Lunenfeld-Tanenbaum Research Institute, Sinai Health System, 600 University Avenue, Toronto, ON, M5G 1X5, Canada
- Joint Department of Medical Imaging, University Health Network, Sinai Health System and University of Toronto, Toronto, ON, M5G 1X5, Canada
- Radiology Department, Instituto Nacional do Cancer (INCa), Rio de Janeiro, Brazil
- Masoom A Haider
- Lunenfeld-Tanenbaum Research Institute, Sinai Health System, 600 University Avenue, Toronto, ON, M5G 1X5, Canada
- Joint Department of Medical Imaging, University Health Network, Sinai Health System and University of Toronto, Toronto, ON, M5G 1X5, Canada
14
Bhandary S, Kuhn D, Babaiee Z, Fechter T, Benndorf M, Zamboglou C, Grosu AL, Grosu R. Investigation and benchmarking of U-Nets on prostate segmentation tasks. Comput Med Imaging Graph 2023; 107:102241. [PMID: 37201475 DOI: 10.1016/j.compmedimag.2023.102241] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 05/03/2023] [Accepted: 05/03/2023] [Indexed: 05/20/2023]
Abstract
In healthcare, a growing number of physicians and support staff are striving to facilitate personalized radiotherapy regimens for patients with prostate cancer, because individual patient biology is unique and a one-size-fits-all approach is inefficient. A crucial step for customizing radiotherapy planning and gaining fundamental information about the disease is the identification and delineation of targeted structures. However, accurate biomedical image segmentation is time-consuming, requires considerable experience, and is prone to observer variability. In the past decade, the use of deep learning models has increased significantly in the field of medical image segmentation. At present, a vast number of anatomical structures can be demarcated at a clinician's level with deep learning models. These models not only reduce workload but can also offer unbiased characterization of the disease. The main architectures used in segmentation are the U-Net and its variants, which exhibit outstanding performance. However, reproducing results or directly comparing methods is often limited by closed data sources and the large heterogeneity among medical images. With this in mind, our intention is to provide a reliable resource for assessing deep learning models. As an example, we chose the challenging task of delineating the prostate gland in multi-modal images. First, this paper provides a comprehensive review of current state-of-the-art convolutional neural networks for 3D prostate segmentation. Second, utilizing public and in-house CT and MR datasets of varying properties, we created a framework for an objective comparison of automatic prostate segmentation algorithms. The framework was used for rigorous evaluations of the models, highlighting their strengths and weaknesses.
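Benchmarks such as this one typically score segmentation models with the Dice-Sorensen coefficient; a minimal reference implementation for binary masks:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice-Sorensen coefficient between two binary masks: twice the
    intersection over the sum of mask sizes. The small eps keeps the
    empty-vs-empty case defined (it evaluates to ~1)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return (2.0 * inter + eps) / (denom + eps)
```

In practice a benchmark would average this per-case score over a test set, often alongside surface-distance metrics.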
Affiliation(s)
- Shrajan Bhandary
- Cyber-Physical Systems Division, Institute of Computer Engineering, Faculty of Informatics, Technische Universität Wien, Vienna, 1040, Austria.
- Dejan Kuhn
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany; Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany
- Zahra Babaiee
- Cyber-Physical Systems Division, Institute of Computer Engineering, Faculty of Informatics, Technische Universität Wien, Vienna, 1040, Austria
- Tobias Fechter
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany; Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany
- Matthias Benndorf
- Department of Diagnostic and Interventional Radiology, Medical Center University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany
- Constantinos Zamboglou
- Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany; Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany; German Oncology Center, European University, Limassol, 4108, Cyprus
- Anca-Ligia Grosu
- Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany; Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany
- Radu Grosu
- Cyber-Physical Systems Division, Institute of Computer Engineering, Faculty of Informatics, Technische Universität Wien, Vienna, 1040, Austria; Department of Computer Science, State University of New York at Stony Brook, NY, 11794, USA
15
Karagoz A, Alis D, Seker ME, Zeybel G, Yergin M, Oksuz I, Karaarslan E. Anatomically guided self-adapting deep neural network for clinically significant prostate cancer detection on bi-parametric MRI: a multi-center study. Insights Imaging 2023; 14:110. [PMID: 37337101 DOI: 10.1186/s13244-023-01439-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2023] [Accepted: 04/17/2023] [Indexed: 06/21/2023] Open
Abstract
OBJECTIVE To evaluate the effectiveness of a self-adapting deep network, trained on large-scale bi-parametric MRI data, in detecting clinically significant prostate cancer (csPCa) in external multi-center data from men of diverse demographics; to investigate the advantages of transfer learning. METHODS We used two samples: (i) Publicly available multi-center and multi-vendor Prostate Imaging: Cancer AI (PI-CAI) training data, consisting of 1500 bi-parametric MRI scans, along with its unseen validation and testing samples; (ii) In-house multi-center testing and transfer learning data, comprising 1036 and 200 bi-parametric MRI scans. We trained a self-adapting 3D nnU-Net model using probabilistic prostate masks on the PI-CAI data and evaluated its performance on the hidden validation and testing samples and the in-house data with and without transfer learning. We used the area under the receiver operating characteristic (AUROC) curve to evaluate patient-level performance in detecting csPCa. RESULTS The PI-CAI training data had 425 scans with csPCa, while the in-house testing and fine-tuning data had 288 and 50 scans with csPCa, respectively. The nnU-Net model achieved an AUROC of 0.888 and 0.889 on the hidden validation and testing data. The model performed with an AUROC of 0.886 on the in-house testing data, with a slight decrease in performance to 0.870 using transfer learning. CONCLUSIONS The state-of-the-art deep learning method using prostate masks trained on large-scale bi-parametric MRI data provides high performance in detecting csPCa in internal and external testing data with different characteristics, demonstrating the robustness and generalizability of deep learning within and across datasets. 
CLINICAL RELEVANCE STATEMENT A self-adapting deep network, utilizing prostate masks and trained on large-scale bi-parametric MRI data, is effective in accurately detecting clinically significant prostate cancer across diverse datasets, highlighting the potential of deep learning methods for improving prostate cancer detection in clinical practice.
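Patient-level detection performance as used here can be computed from case and non-case scores (for example, the maximum of a lesion probability map per patient) via the Mann-Whitney formulation of AUROC. A minimal sketch, not the authors' evaluation code:

```python
def auroc(scores_pos, scores_neg):
    """Patient-level AUROC via the Mann-Whitney U statistic: the
    probability that a randomly chosen positive case scores higher
    than a randomly chosen negative case, counting ties as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

The O(n*m) pairwise loop is fine for illustration; production code would use a rank-based formulation instead.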
Affiliation(s)
- Ahmet Karagoz
- Department of Computer Engineering, Istanbul Technical University, Istanbul, Turkey
- Artificial Intelligence and Information Technologies, Hevi AI Health, Istanbul, Turkey
- Deniz Alis
- Artificial Intelligence and Information Technologies, Hevi AI Health, Istanbul, Turkey
- Department of Radiology, School of Medicine, Acibadem Mehmet Ali Aydinlar University, Istanbul, Turkey
- Mustafa Ege Seker
- School of Medicine, Acibadem Mehmet Ali Aydinlar University, Istanbul, Turkey
- Gokberk Zeybel
- School of Medicine, Acibadem Mehmet Ali Aydinlar University, Istanbul, Turkey
- Mert Yergin
- Artificial Intelligence and Information Technologies, Hevi AI Health, Istanbul, Turkey
- Ilkay Oksuz
- Department of Computer Engineering, Istanbul Technical University, Istanbul, Turkey
- Ercan Karaarslan
- Department of Radiology, School of Medicine, Acibadem Mehmet Ali Aydinlar University, Istanbul, Turkey
16
de Vries BM, Zwezerijnen GJC, Burchell GL, van Velden FHP, Menke-van der Houven van Oordt CW, Boellaard R. Explainable artificial intelligence (XAI) in radiology and nuclear medicine: a literature review. Front Med (Lausanne) 2023; 10:1180773. [PMID: 37250654 PMCID: PMC10213317 DOI: 10.3389/fmed.2023.1180773] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2023] [Accepted: 04/17/2023] [Indexed: 05/31/2023] Open
Abstract
Rationale Deep learning (DL) has demonstrated remarkable performance in diagnostic imaging for various diseases and modalities and therefore has high potential to be used as a clinical tool. However, current practice shows low deployment of these algorithms in clinical practice, because DL algorithms lack transparency and trust due to their underlying black-box mechanism. For successful deployment, explainable artificial intelligence (XAI) could be introduced to close the gap between medical professionals and the DL algorithms. In this literature review, XAI methods available for magnetic resonance (MR), computed tomography (CT), and positron emission tomography (PET) imaging are discussed and future suggestions are made. Methods PubMed, Embase.com and Clarivate Analytics/Web of Science Core Collection were screened. Articles were considered eligible for inclusion if XAI was used (and well described) to describe the behavior of a DL model used in MR, CT and PET imaging. Results A total of 75 articles were included, of which 54 and 17 articles described post hoc and ad hoc XAI methods, respectively, and 4 articles described both. Major variations in performance are seen between the methods. Overall, post hoc XAI lacks the ability to provide class-discriminative and target-specific explanations. Ad hoc XAI seems to tackle this because of its intrinsic ability to explain. However, quality control of the XAI methods is rarely applied, and therefore systematic comparison between the methods is difficult. Conclusion There is currently no clear consensus on how XAI should be deployed in order to close the gap between medical professionals and DL algorithms for clinical implementation. We advocate for systematic technical and clinical quality assessment of XAI methods. Also, to ensure end-to-end unbiased and safe integration of XAI in clinical workflows, (anatomical) data minimization and quality control methods should be included.
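A simple member of the post hoc, model-agnostic family discussed here is occlusion sensitivity: mask out image patches and measure the drop in the model's score. A minimal sketch with a hypothetical `model_fn` interface:

```python
import numpy as np

def occlusion_map(model_fn, image, patch=4, baseline=0.0):
    """Post hoc, model-agnostic explanation by occlusion: slide a
    patch over the image, replace it with a baseline value, and
    record how much the model's scalar score drops. `model_fn`
    (image -> float) is a hypothetical interface, not a real API."""
    h, w = image.shape
    ref = model_fn(image)
    heat = np.zeros_like(image, dtype=float)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            # large drop => region was important to the score
            heat[i:i + patch, j:j + patch] = ref - model_fn(occluded)
    return heat
```

Such maps are class-discriminative only to the extent the chosen score is, which is one of the limitations of post hoc methods noted above.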
Affiliation(s)
- Bart M. de Vries
- Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Gerben J. C. Zwezerijnen
- Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Ronald Boellaard
- Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
17
Liu F, Zhu J, Lv B, Yang L, Sun W, Dai Z, Gou F, Wu J. Auxiliary Segmentation Method of Osteosarcoma MRI Image Based on Transformer and U-Net. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:9990092. [PMID: 36419505 PMCID: PMC9678467 DOI: 10.1155/2022/9990092] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/17/2022] [Revised: 10/24/2022] [Accepted: 10/28/2022] [Indexed: 07/28/2023]
Abstract
Osteosarcoma is one of the most prevalent malignant bone tumors. The diagnosis and treatment cycle are long and the prognosis is poor, and manually identifying osteosarcoma in magnetic resonance imaging (MRI) takes considerable time. Medical image processing technology has greatly alleviated the problems faced by medical diagnosis. However, MRI images of osteosarcoma are characterized by high noise and blurred edges, and these complex features increase the difficulty of lesion identification. Therefore, this study proposes an osteosarcoma MRI image segmentation method (OSTransnet) based on Transformer and U-Net. The technique primarily addresses fuzzy tumor-edge segmentation and the overfitting brought on by data noise. First, we optimize the dataset by adjusting the spatial distribution of noise and applying rotation-based data augmentation. The tumor is then segmented with an edge-improved model that combines U-Net and a Transformer, compensating for the semantic limitations of U-Net with channel-based transformers. Finally, we add an edge enhancement module (BAB) and a combined loss function to improve the performance of edge segmentation. The method's accuracy and stability are demonstrated by detection and training results based on more than 4,000 MRI images of osteosarcoma, which also show how well the method works as an adjunct to clinical diagnosis and treatment.
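The paper's exact loss and BAB module are not reproduced here, but a common recipe for combined segmentation losses, a weighted sum of binary cross-entropy and soft Dice loss, can be sketched as:

```python
import numpy as np

def combined_loss(pred_prob, target, alpha=0.5, eps=1e-7):
    """Illustrative combined segmentation loss: alpha * BCE plus
    (1 - alpha) * soft Dice loss, a common recipe for class-imbalanced
    lesion masks. This is a generic sketch, not the paper's exact
    loss function."""
    p = np.clip(pred_prob, eps, 1 - eps)  # avoid log(0)
    t = target.astype(float)
    bce = -(t * np.log(p) + (1 - t) * np.log(1 - p)).mean()
    dice = (2 * (p * t).sum() + eps) / (p.sum() + t.sum() + eps)
    return alpha * bce + (1 - alpha) * (1 - dice)
```

The Dice term rewards overlap globally while BCE penalizes per-pixel errors, which is why the pair is popular for sharpening lesion boundaries.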
Affiliation(s)
- Feng Liu
- School of Information Engineering, Shandong Youth University of Political Science, Jinan, Shandong, China
- New Technology Research and Development Center of Intelligent Information Controlling in Universities of Shandong, Jinan 250103, China
- Jun Zhu
- The First People's Hospital of Huaihua, Huaihua 418000, Hunan, China
- Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Huaihua 418000, Hunan, China
- Baolong Lv
- School of Modern Service Management, Shandong Youth University of Political Science, Jinan, China
- Lei Yang
- School of Computer Science and Technology, Shandong Jianzhu University, Jinan, China
- Wenyan Sun
- School of Information Engineering, Shandong Youth University of Political Science, Jinan, Shandong, China
- Zhehao Dai
- Department of Spine Surgery, The Second Xiangya Hospital, Central South University, Changsha 410011, China
- Fangfang Gou
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Jia Wu
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Research Center for Artificial Intelligence, Monash University, Melbourne, Clayton, Victoria 3800, Australia
18
Multi Level Approach for Segmentation of Interstitial Lung Disease (ILD) Patterns Classification Based on Superpixel Processing and Fusion of K-Means Clusters: SPFKMC. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:4431817. [PMID: 36317075 PMCID: PMC9617705 DOI: 10.1155/2022/4431817] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/27/2022] [Revised: 09/23/2022] [Accepted: 09/30/2022] [Indexed: 11/17/2022]
Abstract
During the COVID-19 pandemic, huge numbers of interstitial lung disease (ILD) lung images have been captured, so efficient segmentation techniques are needed to separate the anatomical structures and ILD patterns for disease and infection-level identification. The effectiveness of disease classification depends directly on the accuracy of initial stages such as preprocessing and segmentation. This paper proposes a hybrid segmentation algorithm designed for ILD images that takes advantage of superpixel and K-means clustering approaches. Segmented superpixel images adapt to the irregular local and spatial neighborhoods, which helps improve the performance of K-means clustering-based ILD image segmentation. To overcome the limitation of multiclass belonging, semiadaptive wavelet-based fusion is applied over selected K-means clusters. The performance of the proposed SPFKMC was compared with that of 3-class Fuzzy C-Means clustering (FCM) and K-means clustering in terms of accuracy, Jaccard similarity index (JSI), and Dice similarity coefficient (DSC). The SPFKMC algorithm gives an accuracy of 99.28%, DSC of 98.72%, and JSI of 97.87%; the proposed fused clustering gives better results than traditional K-means clustering segmentation.
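A loose sketch of the superpixel-then-cluster idea, using crude grid blocks as a stand-in for true superpixels and omitting the wavelet-based fusion step:

```python
import numpy as np

def grid_superpixel_kmeans(image, grid=4, k=3, iters=20, seed=0):
    """Toy superpixel + K-means segmentation sketch: tile the image
    into grid blocks (a crude stand-in for real superpixels), take
    each block's mean intensity, run 1-D K-means on those means,
    and paint each block with its cluster label. Illustrative only;
    SPFKMC's superpixels and cluster fusion are not reproduced."""
    h, w = image.shape
    blocks, coords = [], []
    for i in range(0, h, grid):
        for j in range(0, w, grid):
            blocks.append(image[i:i + grid, j:j + grid].mean())
            coords.append((i, j))
    feats = np.array(blocks)
    rng = np.random.default_rng(seed)
    centers = rng.choice(feats, size=k, replace=False)
    for _ in range(iters):
        # assign each block to its nearest center, then update centers
        labels = np.argmin(np.abs(feats[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = feats[labels == c].mean()
    seg = np.zeros((h, w), dtype=int)
    for (i, j), lab in zip(coords, labels):
        seg[i:i + grid, j:j + grid] = lab
    return seg
```

Clustering block means rather than raw pixels is what makes the superpixel step pay off: far fewer samples, each already spatially coherent.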
19
Li C, Li W, Liu C, Zheng H, Cai J, Wang S. Artificial intelligence in multi-parametric magnetic resonance imaging: A review. Med Phys 2022; 49:e1024-e1054. [PMID: 35980348 DOI: 10.1002/mp.15936] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Revised: 08/01/2022] [Accepted: 08/04/2022] [Indexed: 11/06/2022] Open
Abstract
Multi-parametric magnetic resonance imaging (mpMRI) is an indispensable tool in the clinical workflow for the diagnosis and treatment planning of various diseases. Machine learning-based artificial intelligence (AI) methods, especially those adopting the deep learning technique, have been extensively employed to perform mpMRI image classification, segmentation, registration, detection, reconstruction, and super-resolution. The current availability of increasing computational power and fast-improving AI algorithms has empowered numerous computer-based systems for applying mpMRI to disease diagnosis, imaging-guided radiotherapy, patient risk and overall survival time prediction, and the development of advanced quantitative imaging technology for magnetic resonance fingerprinting. However, the wide application of these developed systems in the clinic is still limited by a number of factors, including robustness, reliability, and interpretability. This survey aims to provide an overview for new researchers in the field as well as radiologists, with the hope that they can understand the general concepts, main application scenarios, and remaining challenges of AI in mpMRI.
Affiliation(s)
- Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Wen Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Chenyang Liu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Hairong Zheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Peng Cheng Laboratory, Shenzhen, 518066, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
20
COVLIAS 2.0-cXAI: Cloud-Based Explainable Deep Learning System for COVID-19 Lesion Localization in Computed Tomography Scans. Diagnostics (Basel) 2022; 12:diagnostics12061482. [PMID: 35741292 PMCID: PMC9221733 DOI: 10.3390/diagnostics12061482] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2022] [Revised: 06/07/2022] [Accepted: 06/13/2022] [Indexed: 02/07/2023] Open
Abstract
Background: The previous COVID-19 lung diagnosis system lacks both scientific validation and the role of explainable artificial intelligence (AI) for understanding lesion localization. This study presents a cloud-based explainable AI system, “COVLIAS 2.0-cXAI”, using four kinds of class activation map (CAM) models. Methodology: Our cohort consisted of ~6000 CT slices from two sources (Croatia, 80 COVID-19 patients; Italy, 15 control patients). The COVLIAS 2.0-cXAI design consisted of three stages: (i) automated lung segmentation using a hybrid deep learning ResNet-UNet model with automatic adjustment of Hounsfield units, hyperparameter optimization, and parallel and distributed training, (ii) classification using three kinds of DenseNet (DN) models (DN-121, DN-169, DN-201), and (iii) validation using four kinds of CAM visualization techniques: gradient-weighted class activation mapping (Grad-CAM), Grad-CAM++, score-weighted CAM (Score-CAM), and FasterScore-CAM. The COVLIAS 2.0-cXAI was validated by three trained senior radiologists for its stability and reliability, and the Friedman test was performed on their scores. Results: The ResNet-UNet segmentation model achieved a Dice similarity of 0.96, a Jaccard index of 0.93, a correlation coefficient of 0.99, and a figure-of-merit of 95.99%, while the classifier accuracies for the three DN nets (DN-121, DN-169, and DN-201) were 98%, 98%, and 99% with losses of ~0.003, ~0.0025, and ~0.002 using 50 epochs, respectively. The mean AUC for all three DN models was 0.99 (p < 0.0001). The COVLIAS 2.0-cXAI showed a mean alignment index (MAI) between heatmaps and gold standard of four out of five in 80% of scans, supporting the system for clinical settings. Conclusions: The COVLIAS 2.0-cXAI successfully demonstrated a cloud-based explainable AI system for lesion localization in lung CT scans.
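Dice and Jaccard, both reported here, measure the same overlap on different scales: for any single pair of masks they interconvert as J = D/(2 - D), although averages over many cases need not satisfy the identity exactly:

```python
def dice_to_jaccard(d):
    """For the same pair of binary masks, the Dice (D) and Jaccard (J)
    overlap indices are linked by J = D / (2 - D)."""
    return d / (2.0 - d)

def jaccard_to_dice(j):
    """Inverse relation: D = 2J / (1 + J)."""
    return 2.0 * j / (1.0 + j)
```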