1
Huang L, Ruan S, Xing Y, Feng M. A review of uncertainty quantification in medical image analysis: Probabilistic and non-probabilistic methods. Med Image Anal 2024; 97:103223. [PMID: 38861770] [DOI: 10.1016/j.media.2024.103223]
Abstract
The integration of machine learning models into clinical practice remains limited, despite the many high-performing solutions reported in the literature. A major barrier to widespread adoption is the lack of evidence that these models are reliable. Recently, uncertainty quantification methods have been proposed as a way to measure the reliability of machine learning models and thus increase the interpretability and acceptability of their results. In this review, we provide a comprehensive overview of the main methods proposed to quantify the uncertainty of machine learning models developed for medical image analysis tasks. In contrast to earlier reviews that focused exclusively on probabilistic methods, this review also covers non-probabilistic approaches, offering a more complete survey of uncertainty quantification for machine learning-based medical image analysis. We summarize and discuss medical applications and the corresponding uncertainty evaluation protocols, focusing on the challenges specific to uncertainty in medical image analysis, and we highlight directions for future research. Overall, this review aims to give researchers from both clinical and technical backgrounds a quick yet in-depth understanding of uncertainty quantification research for medical image analysis machine learning models.
Affiliation(s)
- Ling Huang
- Saw Swee Hock School of Public Health, National University of Singapore, Singapore
- Su Ruan
- Quantif, LITIS, University of Rouen Normandy, France
- Yucheng Xing
- Saw Swee Hock School of Public Health, National University of Singapore, Singapore
- Mengling Feng
- Saw Swee Hock School of Public Health, National University of Singapore, Singapore; Institute of Data Science, National University of Singapore, Singapore
2
Liu H, Grothe MJ, Rashid T, Labrador-Espinosa MA, Toledo JB, Habes M. ADCoC: Adaptive Distribution Modeling Based Collaborative Clustering for Disentangling Disease Heterogeneity from Neuroimaging Data. IEEE Transactions on Emerging Topics in Computational Intelligence 2023; 7:308-318. [PMID: 36969108] [PMCID: PMC10038331] [DOI: 10.1109/tetci.2021.3136587]
Abstract
Conventional clustering techniques for neuroimaging applications usually focus on capturing differences between subjects, while neglecting differences between features and the potential bias caused by degraded data quality. In practice, collected neuroimaging data are inevitably contaminated by noise, which may lead to errors in clustering and clinical interpretation. Additionally, most methods ignore the importance of feature grouping for optimal clustering. In this paper, we exploit the underlying heterogeneous clusters of features as weak supervision for improved clustering of subjects, achieved by simultaneously clustering subjects and features via nonnegative matrix tri-factorization. To suppress noise, we further introduce adaptive regularization based on coefficient distribution modeling. In particular, unlike conventional sparsity regularization techniques that assume zero-mean coefficients, we form the distributions from the data of interest so that they better fit the non-negative coefficients. In this manner, the proposed approach is expected to be more effective and robust against noise. We compared the proposed method with standard techniques and recently published methods, demonstrating superior clustering performance on synthetic data with known ground-truth labels. Furthermore, when applying the proposed technique to magnetic resonance imaging (MRI) data from a cohort of patients with Parkinson's disease, we identified two stable and highly reproducible patient clusters characterized by frontal and posterior cortical/medial temporal atrophy patterns, respectively, which also showed corresponding differences in cognitive characteristics.
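The simultaneous clustering of subjects and features described above is an instance of nonnegative matrix tri-factorization, X ≈ F S G^T, where F clusters rows (subjects) and G clusters columns (features). The sketch below is a generic, unregularized version using standard multiplicative updates; it is not the authors' ADCoC algorithm, and the adaptive distribution-based regularization that is the paper's contribution is omitted.

```python
import numpy as np

def nmtf(X, k_rows, k_cols, n_iter=200, eps=1e-9, seed=0):
    """Nonnegative matrix tri-factorization X ~ F @ S @ G.T.

    F (n x k_rows) clusters rows (subjects), G (m x k_cols) clusters
    columns (features), and S (k_rows x k_cols) links the two clusterings.
    Plain multiplicative updates for min ||X - F S G^T||_F^2.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    F = rng.random((n, k_rows))
    S = rng.random((k_rows, k_cols))
    G = rng.random((m, k_cols))
    for _ in range(n_iter):
        F *= (X @ G @ S.T) / (F @ S @ G.T @ G @ S.T + eps)
        G *= (X.T @ F @ S) / (G @ S.T @ F.T @ F @ S + eps)
        S *= (F.T @ X @ G) / (F.T @ F @ S @ G.T @ G + eps)
    return F, S, G

# Toy data: two row-blocks and two column-blocks of different intensity.
X = np.vstack([np.hstack([np.full((5, 4), 5.0), np.full((5, 6), 0.5)]),
               np.hstack([np.full((5, 4), 0.5), np.full((5, 6), 5.0)])])
F, S, G = nmtf(X, k_rows=2, k_cols=2)
err = np.linalg.norm(X - F @ S @ G.T) / np.linalg.norm(X)
row_clusters = F.argmax(axis=1)  # subject cluster assignments
```

On this toy block matrix the row factor separates the two subject groups; in the paper's setting the column clustering of features additionally acts as weak supervision for the subject clustering.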
Affiliation(s)
- Hangfan Liu
- Neuroimage Analytics Laboratory (NAL) and Biggs Institute Neuroimaging Core, Glenn Biggs Institute for Neurodegenerative Disorders, University of Texas Health Science Center at San Antonio, San Antonio, Texas, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, PA 19104, USA
- Michel J Grothe
- Unidad de Trastornos del Movimiento, Servicio de Neurología y Neurofisiología Clínica, Instituto de Biomedicina de Sevilla (IBiS), Hospital Universitario Virgen del Rocío/CSIC/Universidad de Sevilla, Seville, Spain
- Tanweer Rashid
- Neuroimage Analytics Laboratory (NAL) and Biggs Institute Neuroimaging Core, Glenn Biggs Institute for Neurodegenerative Disorders, University of Texas Health Science Center at San Antonio, San Antonio, Texas, USA
- Miguel A Labrador-Espinosa
- Unidad de Trastornos del Movimiento, Servicio de Neurología y Neurofisiología Clínica, Instituto de Biomedicina de Sevilla (IBiS), Hospital Universitario Virgen del Rocío/CSIC/Universidad de Sevilla, Seville, Spain; Centro de Investigación Biomédica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), Madrid, Spain
- Jon B Toledo
- Department of Neurology, University of Florida College of Medicine, Gainesville; Fixel Institute for Neurologic Diseases, University of Florida, Gainesville
- Mohamad Habes
- Neuroimage Analytics Laboratory (NAL) and Biggs Institute Neuroimaging Core, Glenn Biggs Institute for Neurodegenerative Disorders, University of Texas Health Science Center at San Antonio, San Antonio, Texas, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, PA 19104, USA
3
Huang L, Ruan S, Denœux T. Semi-Supervised Multiple Evidence Fusion for Brain Tumor Segmentation. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.02.047]
4
TECM: Transfer learning-based evidential c-means clustering. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.109937]
5
Diao Z, Jiang H, Han XH, Yao YD, Shi T. EFNet: evidence fusion network for tumor segmentation from PET-CT volumes. Phys Med Biol 2021; 66. [PMID: 34555816] [DOI: 10.1088/1361-6560/ac299a]
Abstract
Precise delineation of the target tumor from positron emission tomography-computed tomography (PET-CT) is a key step in clinical practice and radiation therapy. PET-CT co-segmentation uses the complementary information of the two modalities to reduce the uncertainty of single-modal segmentation and thus obtain more accurate results. Current PET-CT segmentation methods based on fully convolutional neural networks (FCNs) mainly adopt image fusion or feature fusion; these fusion strategies do not account for the uncertainty of multi-modal segmentation, and complex feature fusion consumes substantial computing resources, especially when dealing with 3D volumes. In this work, we analyze PET-CT co-segmentation from the perspective of uncertainty and propose the evidence fusion network (EFNet). Trained with the proposed evidence loss, the network outputs a PET result and a CT result that each carry uncertainty; these serve as PET evidence and CT evidence. We then apply evidence fusion to reduce the uncertainty of the single-modal evidence, and the final segmentation result is obtained by fusing the PET evidence and CT evidence. EFNet uses a basic 3D U-Net as its backbone with only simple unidirectional feature fusion. In addition, EFNet can train and predict PET evidence and CT evidence separately, without parallel training of two branch networks. We conduct experiments on soft-tissue sarcoma and lymphoma datasets. Compared with 3D U-Net, our method improves Dice by 8% and 5%, respectively; compared with a complex feature fusion method, it improves Dice by 7% and 2%, respectively. Our results show that in FCN-based PET-CT segmentation, outputting uncertainty as evidence and then fusing that evidence both simplifies the network and improves segmentation results.
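The evidence-fusion step described above rests on Dempster's rule of combination from belief-function theory. The following minimal sketch is not EFNet's implementation: it combines a hypothetical PET mass function and CT mass function for a single voxel over the frame {tumor, background}, where mass on the whole frame encodes each modality's uncertainty.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule: intersect focal sets, accumulate products,
    and renormalize by 1 - K, where K is the mass falling on empty
    (conflicting) intersections."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully disagree")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

T, B = frozenset({"tumor"}), frozenset({"background"})
TB = T | B  # mass on the whole frame = "don't know"

# Hypothetical single-voxel evidence: PET leans tumor, CT is vaguer.
m_pet = {T: 0.6, B: 0.1, TB: 0.3}
m_ct  = {T: 0.4, B: 0.2, TB: 0.4}
m_fused = dempster_combine(m_pet, m_ct)  # tumor mass rises to ~0.71
```

Note how fusion concentrates mass: the fused tumor mass exceeds either source's, and the "don't know" mass shrinks, which is exactly the uncertainty reduction the abstract describes.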
Affiliation(s)
- Zhaoshuo Diao
- Software College, Northeastern University, Shenyang 110819, People's Republic of China
- Huiyan Jiang
- Software College, Northeastern University, Shenyang 110819, People's Republic of China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, People's Republic of China
- Xian-Hua Han
- Graduate School of Sciences and Technology for Innovation, Yamaguchi University, Yamaguchi-shi 7538511, Japan
- Yu-Dong Yao
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, United States of America
- Tianyu Shi
- Software College, Northeastern University, Shenyang 110819, People's Republic of China
6
Yousefirizi F, Jha AK, Brosch-Lenz J, Saboury B, Rahmim A. Toward High-Throughput Artificial Intelligence-Based Segmentation in Oncological PET Imaging. PET Clin 2021; 16:577-596. [PMID: 34537131] [DOI: 10.1016/j.cpet.2021.06.001]
Abstract
Artificial intelligence (AI) techniques for image-based segmentation have garnered much attention in recent years. Convolutional neural networks have shown impressive results and potential toward fully automated segmentation in medical imaging, and particularly PET imaging. To cope with the limited access to annotated data needed in supervised AI methods, given tedious and prone-to-error manual delineations, semi-supervised and unsupervised AI techniques have also been explored for segmentation of tumors or normal organs in single- and bimodality scans. This work reviews existing AI techniques for segmentation tasks and the evaluation criteria for translational AI-based segmentation efforts toward routine adoption in clinical workflows.
Affiliation(s)
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
- Abhinav K Jha
- Department of Biomedical Engineering, Washington University in St. Louis, St Louis, MO 63130, USA; Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO 63110, USA
- Julia Brosch-Lenz
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Bethesda, MD 20892, USA; Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
- Arman Rahmim
- Department of Radiology and Department of Physics, University of British Columbia; BC Cancer Research Institute, 675 West 10th Avenue, Office 6-112, Vancouver, British Columbia V5Z 1L3, Canada
7
8
Li S, Jiang H, Li H, Yao YD. AW-SDRLSE: Adaptive Weighting and Scalable Distance Regularized Level Set Evolution for Lymphoma Segmentation on PET Images. IEEE J Biomed Health Inform 2021; 25:1173-1184. [PMID: 32841130] [DOI: 10.1109/jbhi.2020.3017546]
Abstract
Accurate lymphoma segmentation on Positron Emission Tomography (PET) images is of great importance for medical diagnosis, such as distinguishing benign from malignant lesions. To this end, this paper proposes an adaptive weighting and scalable distance regularized level set evolution (AW-SDRLSE) method for delineating lymphoma boundaries on 2D PET slices. AW-SDRLSE has three important characteristics: 1) a scalable distance regularization term is proposed, with a parameter q that in theory controls the contour's convergence rate and precision; 2) a novel dynamic annular mask is proposed to calculate the mean intensities of local interior and exterior regions and thereby define the region energy term; 3) since level set methods are sensitive to parameters, an adaptive weighting strategy for the length and area energy terms is proposed using local region intensity and boundary direction information. AW-SDRLSE is evaluated on 90 cases of real PET data, achieving a mean Dice coefficient of 0.8796. Comparative results demonstrate the accuracy and robustness of AW-SDRLSE as well as its performance advantages over related level set methods. In addition, experimental results indicate that AW-SDRLSE can serve as a fine-segmentation step that significantly improves lymphoma segmentation results obtained by deep learning (DL) methods.
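The Dice coefficient used above (and throughout the segmentation papers in this list) measures overlap between a predicted mask A and a reference mask B as 2|A∩B| / (|A| + |B|). A minimal, generic illustration on binary masks, not tied to AW-SDRLSE:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|).

    eps guards against division by zero when both masks are empty.
    """
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

pred = np.zeros((8, 8), int)
pred[2:6, 2:6] = 1      # 4x4 = 16 predicted pixels
target = np.zeros((8, 8), int)
target[3:7, 3:7] = 1    # 4x4 = 16 reference pixels, shifted by one
score = dice(pred, target)  # overlap is 3x3 = 9 -> 2*9/32 = 0.5625
```

A score of 1 means perfect overlap and 0 means none; the one-pixel shift here already costs almost half the score, which is why small boundary errors matter so much in lesion segmentation.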
9
10
Sbei A, ElBedoui K, Barhoumi W, Maktouf C. Gradient-based generation of intermediate images for heterogeneous tumor segmentation within hybrid PET/MRI scans. Comput Biol Med 2020; 119:103669. [PMID: 32339115] [DOI: 10.1016/j.compbiomed.2020.103669]
Abstract
Segmentation of tumors from hybrid PET/MRI scans plays an essential role in accurate diagnosis and treatment planning. However, tumor delineation must contend with several challenges, notably heterogeneity and leakage into surrounding tissues with similarly high uptake. To address these issues, we propose an automated method for accurate delineation of tumors in hybrid PET/MRI scans, based mainly on creating intermediate images. First, an automatic detection technique determines a preliminary Interesting Uptake Region (IUR). To overcome the leakage problem, a separation technique is adopted to generate the final IUR. Smart seeds are then provided to the Graph Cut (GC) technique to obtain the tumor map. To create intermediate images that reduce the heterogeneity present in the original images, the tumor map gradient is combined with the gradient image. Lastly, segmentation based on the GCsummax technique is applied to the generated images. The proposed method has been validated on PET phantoms as well as on real-world PET/MRI scans of prostate, liver, and pancreatic tumors. Experimental comparison revealed the superiority of the proposed method over state-of-the-art methods, confirming the crucial role of automatically created intermediate images in addressing the problem of wrongly estimated arc weights for heterogeneous targets.
Affiliation(s)
- Arafet Sbei
- Université de Tunis El Manar, Institut Supérieur d'Informatique, Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), LR16ES06 Laboratoire de recherche en Informatique, Modélisation et Traitement de l'Information et de la Connaissance (LIMTIC), 2 Rue Bayrouni, 2080 Ariana, Tunisia
- Khaoula ElBedoui
- Université de Tunis El Manar, Institut Supérieur d'Informatique, Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), LR16ES06 Laboratoire de recherche en Informatique, Modélisation et Traitement de l'Information et de la Connaissance (LIMTIC), 2 Rue Bayrouni, 2080 Ariana, Tunisia; Université de Carthage, Ecole Nationale d'Ingénieurs de Carthage, 45 Rue des Entrepreneurs, 2035 Tunis-Carthage, Tunisia
- Walid Barhoumi
- Université de Tunis El Manar, Institut Supérieur d'Informatique, Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), LR16ES06 Laboratoire de recherche en Informatique, Modélisation et Traitement de l'Information et de la Connaissance (LIMTIC), 2 Rue Bayrouni, 2080 Ariana, Tunisia; Université de Carthage, Ecole Nationale d'Ingénieurs de Carthage, 45 Rue des Entrepreneurs, 2035 Tunis-Carthage, Tunisia
- Chokri Maktouf
- Nuclear Medicine Department, Pasteur Institute of Tunis, Tunis, Tunisia
11
12
Sun L, Zu C, Shao W, Guang J, Zhang D, Liu M. Reliability-based robust multi-atlas label fusion for brain MRI segmentation. Artif Intell Med 2019; 96:12-24. [DOI: 10.1016/j.artmed.2019.03.004]
13
Lian C, Ruan S, Denoeux T, Li H, Vera P. Joint Tumor Segmentation in PET-CT Images Using Co-Clustering and Fusion Based on Belief Functions. IEEE Transactions on Image Processing 2019; 28:755-766. [PMID: 30296224] [PMCID: PMC8191586] [DOI: 10.1109/tip.2018.2872908]
Abstract
Precise delineation of the target tumor is a key factor in ensuring the effectiveness of radiation therapy. While hybrid positron emission tomography-computed tomography (PET-CT) has become a standard imaging tool in radiation oncology practice, many existing automatic/semi-automatic methods still perform tumor segmentation on mono-modal images. In this paper, a co-clustering algorithm is proposed to concurrently segment 3D tumors in PET-CT images, exploiting the fact that the two complementary imaging modalities combine functional and anatomical information to improve segmentation performance. The theory of belief functions is adopted to model, fuse, and reason with uncertain and imprecise knowledge from noisy and blurry PET-CT images. To ensure reliable segmentation for each modality, the distance metric quantifying clustering distortions and spatial smoothness is iteratively adapted during the clustering procedure. To encourage consistent segmentation between the modalities, a specific context term is added to the clustering objective function. Moreover, during the iterative optimization process, the clustering results for the two modalities are further adjusted via a belief-functions-based information fusion strategy. The proposed method has been evaluated on a data set of 21 paired PET-CT images from non-small cell lung cancer patients. Quantitative and qualitative evaluations show that the proposed method performs well compared with state-of-the-art methods.
14
Pan L, Deng Y. A New Belief Entropy to Measure Uncertainty of Basic Probability Assignments Based on Belief Function and Plausibility Function. Entropy 2018; 20:e20110842. [PMID: 33266566] [PMCID: PMC7512404] [DOI: 10.3390/e20110842]
Abstract
How to measure the uncertainty of a basic probability assignment (BPA) function is an open issue in Dempster–Shafer (D–S) theory. The main contribution of this paper is a new belief entropy for measuring the uncertainty of a BPA. The proposed belief entropy is based on Deng entropy and on the probability interval formed by the lower and upper probabilities. Under certain conditions, it reduces to Shannon entropy. Numerical examples illustrate the effectiveness of the new belief entropy in measuring uncertainty.
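For context, the lower and upper probabilities mentioned above are the belief and plausibility of each hypothesis set A: Bel(A) = Σ_{B⊆A} m(B) and Pl(A) = Σ_{B∩A≠∅} m(B). The sketch below computes this interval and Deng entropy, E_d(m) = −Σ_A m(A) log2(m(A) / (2^|A| − 1)), which the proposed measure builds on; the paper's own entropy is not reproduced here, and the BPA values are hypothetical.

```python
from math import log2

def bel_pl(m, A):
    """Probability interval [Bel(A), Pl(A)] for a hypothesis set A,
    given a BPA m: dict mapping frozenset (focal set) -> mass."""
    bel = sum(w for B, w in m.items() if B <= A)   # B subset of A
    pl = sum(w for B, w in m.items() if B & A)     # B intersects A
    return bel, pl

def deng_entropy(m):
    """Deng entropy: -sum_A m(A) * log2(m(A) / (2^|A| - 1)).

    With all mass on singletons (|A| = 1) this is Shannon entropy,
    since the denominator 2^1 - 1 = 1.
    """
    return -sum(w * log2(w / (2 ** len(A) - 1))
                for A, w in m.items() if w > 0)

frame = frozenset({"a", "b"})
# Hypothetical BPA: some mass committed, some left on the whole frame.
m = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.3, frame: 0.2}
bel_a, pl_a = bel_pl(m, frozenset({"a"}))  # interval [0.5, 0.7]
```

The width Pl(A) − Bel(A) reflects how much mass remains uncommitted; an entropy defined on this interval can therefore distinguish ignorance (mass on large sets) from conflict (mass split across singletons).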
Affiliation(s)
- Yong Deng
- Correspondence: ; Tel.: +86-(028)-6183-0858
15
Liu M, Zhang J, Nie D, Yap PT, Shen D. Anatomical Landmark Based Deep Feature Representation for MR Images in Brain Disease Diagnosis. IEEE J Biomed Health Inform 2018; 22:1476-1485. [PMID: 29994175] [PMCID: PMC6238951] [DOI: 10.1109/jbhi.2018.2791863]
Abstract
Most automated techniques for brain disease diagnosis use hand-crafted (e.g., voxel-based or region-based) biomarkers from structural magnetic resonance (MR) images as feature representations. However, these hand-crafted features are usually high-dimensional or require regions of interest defined by experts. Also, because of a possible mismatch between the hand-crafted features and the subsequent model, existing methods may yield sub-optimal performance in brain disease diagnosis. In this paper, we propose a landmark-based deep feature learning (LDFL) framework to automatically extract patch-based representations from MRI for the diagnosis of Alzheimer's disease. We first identify discriminative anatomical landmarks in MR images in a data-driven manner, and then propose a convolutional neural network for patch-based deep feature learning. We evaluated the proposed method on subjects from three public datasets: the Alzheimer's Disease Neuroimaging Initiative (ADNI-1 and ADNI-2) and the Minimal Interval Resonance Imaging in Alzheimer's Disease (MIRIAD) dataset. Experimental results on both brain disease classification and MR image retrieval demonstrate that LDFL improves performance on both tasks.