1. Huang L, Ruan S, Xing Y, Feng M. A review of uncertainty quantification in medical image analysis: Probabilistic and non-probabilistic methods. Med Image Anal 2024;97:103223. PMID: 38861770. DOI: 10.1016/j.media.2024.103223.
Abstract
The integration of machine learning models into clinical practice remains limited, despite the many high-performing solutions reported in the literature. A major factor hindering widespread adoption is insufficient evidence of these models' reliability. Recently, uncertainty quantification methods have been proposed to quantify the reliability of machine learning models and thus increase the interpretability and acceptability of their results. In this review, we offer a comprehensive overview of the prevailing methods for quantifying the uncertainty inherent in machine learning models developed for various medical imaging tasks. In contrast to earlier reviews that focused exclusively on probabilistic methods, this review also covers non-probabilistic approaches, providing a more holistic survey of uncertainty quantification for machine learning models in medical image analysis. We summarize and discuss medical applications and the corresponding uncertainty evaluation protocols, focusing on the specific challenges of uncertainty in medical image analysis, and we highlight potential directions for future research. Overall, this review aims to give researchers from both clinical and technical backgrounds a quick yet in-depth understanding of research in uncertainty quantification for machine learning models in medical image analysis.
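As a concrete illustration of the probabilistic family discussed above, the following is a minimal sketch of Monte Carlo dropout, assuming a PyTorch model that contains Dropout layers; the model, input, and sample count are placeholders, not details from the review.

```python
# Minimal sketch: Monte Carlo dropout for predictive uncertainty.
import torch
import torch.nn.functional as F

def mc_dropout_uncertainty(model: torch.nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Return the mean softmax prediction and the predictive entropy."""
    model.eval()
    for m in model.modules():  # re-enable stochasticity only in dropout layers
        if isinstance(m, torch.nn.Dropout):
            m.train()
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)                                  # (B, C, ...)
    entropy = -(mean_probs * (mean_probs + 1e-8).log()).sum(dim=1)  # high = uncertain
    return mean_probs, entropy
```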
Affiliations
- Ling Huang: Saw Swee Hock School of Public Health, National University of Singapore, Singapore
- Su Ruan: Quantif, LITIS, University of Rouen Normandy, France
- Yucheng Xing: Saw Swee Hock School of Public Health, National University of Singapore, Singapore
- Mengling Feng: Saw Swee Hock School of Public Health, National University of Singapore, Singapore; Institute of Data Science, National University of Singapore, Singapore
2. Li Y, Li C, Yang T, Chen L, Huang M, Yang L, Zhou S, Liu H, Xia J, Wang S. Multiview deep learning networks based on automated breast volume scanner images for identifying breast cancer in BI-RADS 4. Front Oncol 2024;14:1399296. PMID: 39309734. PMCID: PMC11412795. DOI: 10.3389/fonc.2024.1399296.
Abstract
Objectives: To develop and validate a deep learning (DL)-based automatic segmentation and classification system to distinguish benign from malignant BI-RADS 4 lesions imaged with an automated breast volume scanner (ABVS). Methods: From May to December 2020, patients with BI-RADS 4 lesions from Centre 1 and Centre 2 were retrospectively enrolled and divided into a training set (Centre 1) and an independent test set (Centre 2). All included patients underwent an ABVS examination within one week before biopsy. A two-stage DL framework consisting of an automatic segmentation module and an automatic classification module was developed. The preprocessed ABVS images were input into the segmentation module for BI-RADS 4 lesion segmentation, and the classification model extracted features and output the probability of malignancy. Diagnostic performance was compared across ABVS views (axial, sagittal, coronal, and multiview) and DL architectures (Inception-v3, ResNet-50, and MobileNet). Results: A total of 251 BI-RADS 4 lesions from 216 patients were included (178 in the training set and 73 in the independent test set). The average Dice coefficient, precision, and recall of the segmentation module in the test set were 0.817 ± 0.142, 0.903 ± 0.183, and 0.886 ± 0.187, respectively. The DL model based on multiview ABVS images and Inception-v3 achieved the best performance, with an AUC, sensitivity, specificity, PPV, and NPV of 0.949 (95% CI: 0.945-0.953), 82.14%, 95.56%, 92.00%, and 89.58%, respectively, in the test set. Conclusions: The developed multiview DL model enables automatic segmentation and classification of BI-RADS 4 lesions in ABVS images.
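A hedged sketch of the multiview idea: one encoder per ABVS view, with features concatenated before the malignancy head. Fusion by concatenation, the `make_encoder` factory, and `feat_dim` are illustrative assumptions; the paper itself compares Inception-v3, ResNet-50, and MobileNet backbones.

```python
# Illustrative multiview fusion; not the paper's exact architecture.
import torch
import torch.nn as nn

class MultiViewClassifier(nn.Module):
    def __init__(self, make_encoder, feat_dim: int, n_views: int = 3):
        super().__init__()
        # one encoder per view (e.g., axial, sagittal, coronal)
        self.encoders = nn.ModuleList([make_encoder() for _ in range(n_views)])
        self.head = nn.Linear(feat_dim * n_views, 1)

    def forward(self, views):
        # views: list of (B, C, H, W) tensors, one per view
        feats = [enc(v) for enc, v in zip(self.encoders, views)]
        return torch.sigmoid(self.head(torch.cat(feats, dim=1)))  # P(malignant)
```

In practice, `make_encoder` could wrap any backbone with its classification head removed, so that each view maps to a `feat_dim`-dimensional vector.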
Affiliations
- Yini Li: Department of Ultrasound, The Affiliated Hospital of Southwest Medical University, Sichuan, China
- Cao Li: Department of Radiology, The Affiliated Hospital of Southwest Medical University, Sichuan, China
- Tao Yang: Department of Ultrasound, The Affiliated Hospital of Southwest Medical University, Sichuan, China
- Lingzhi Chen: Department of Ultrasound, The Affiliated Hospital of Southwest Medical University, Sichuan, China
- Mingquan Huang: Department of Breast Surgery, The Affiliated Hospital of Southwest Medical University, Sichuan, China
- Lu Yang: Department of Radiology, The Affiliated Hospital of Southwest Medical University, Sichuan, China
- Shuxian Zhou: Artificial Intelligence Innovation Center, Research Institute of Tsinghua, Guangdong, China
- Huaqing Liu: Artificial Intelligence Innovation Center, Research Institute of Tsinghua, Guangdong, China
- Jizhu Xia: Department of Ultrasound, The Affiliated Hospital of Southwest Medical University, Sichuan, China
- Shijie Wang: Department of Ultrasound, The Affiliated Hospital of Southwest Medical University, Sichuan, China
3. Qin C, Wang Y, Zhang J. URCA: Uncertainty-based region clipping algorithm for semi-supervised medical image segmentation. Comput Methods Programs Biomed 2024;254:108278. PMID: 38878360. DOI: 10.1016/j.cmpb.2024.108278.
Abstract
BACKGROUND AND OBJECTIVE: Training convolutional neural networks on large amounts of labeled data has driven great progress in image segmentation. In medical image segmentation, however, annotation is expensive and time-consuming because pixel-level annotation requires domain experts. Semi-supervised methods combining consistency regularization with pseudo labeling have shown good segmentation performance, but during training the model generates a portion of low-confidence pseudo labels, and semi-supervised segmentation still suffers from distribution bias between labeled and unlabeled data. The objective of this study is to address these challenges and improve the segmentation accuracy of semi-supervised models on medical images. METHODS: We propose an Uncertainty-based Region Clipping Algorithm (URCA) for semi-supervised medical image segmentation, consisting of two main modules. The first computes the uncertainty of the predictions of two diverse sub-networks using Monte Carlo dropout, allowing the model to gradually learn from more reliable targets; to retain model diversity, we use different loss functions for the two branches and apply non-maximum suppression in one of them. The second module generates new samples by masking low-confidence pixels in the original image based on the uncertainty information; these samples encourage the model to produce high-confidence pseudo labels and enlarge the training data distribution. RESULTS: Comprehensive experiments on the ACDC and BraTS2019 benchmarks show that the proposed model outperforms state-of-the-art methods in terms of Dice, HD95, and ASD. It reaches an average Dice score of 87.86% and an HD95 of 4.214 mm on ACDC; for brain tumor segmentation, it reaches an average Dice score of 84.79% and an HD of 10.13 mm. CONCLUSIONS: The proposed method improves the accuracy of semi-supervised medical image segmentation. Extensive experiments on two public medical image datasets covering 2D and 3D modalities demonstrate the superiority of our model. The code is available at: https://github.com/QuintinDong/URCA.
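A hedged sketch of the region-clipping step, assuming per-pixel entropy from repeated stochastic forward passes as the uncertainty signal; the quantile threshold and zero-masking below are illustrative choices, not the paper's exact recipe.

```python
# Sketch: mask out high-uncertainty pixels to build a new training sample.
import torch

def clip_uncertain_regions(image: torch.Tensor, probs_mc: torch.Tensor, q: float = 0.8):
    """image: (B, C, H, W); probs_mc: (T, B, K, H, W) softmax maps from T
    stochastic passes. Zeroes the pixels whose entropy is above the q-quantile."""
    mean_p = probs_mc.mean(dim=0)
    entropy = -(mean_p * (mean_p + 1e-8).log()).sum(dim=1, keepdim=True)  # (B,1,H,W)
    thresh = torch.quantile(entropy.flatten(1), q, dim=1).view(-1, 1, 1, 1)
    keep = (entropy <= thresh).float()
    return image * keep  # confident regions survive; uncertain ones are clipped
```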
Affiliations
- Chendong Qin: University of Shanghai for Science and Technology, School of Opto-Electronic Information and Computer Engineering, Department of Control Science and Engineering, 516 Jungong Road, Shanghai 200093, China
- Yongxiong Wang: University of Shanghai for Science and Technology, School of Opto-Electronic Information and Computer Engineering, Department of Control Science and Engineering, 516 Jungong Road, Shanghai 200093, China
- Jiapeng Zhang: University of Shanghai for Science and Technology, School of Opto-Electronic Information and Computer Engineering, Department of Control Science and Engineering, 516 Jungong Road, Shanghai 200093, China
4. Barekatrezaei S, Kozegar E, Salamati M, Soryani M. Mass detection in automated three dimensional breast ultrasound using cascaded convolutional neural networks. Phys Med 2024;124:103433. PMID: 39002423. DOI: 10.1016/j.ejmp.2024.103433.
Abstract
PURPOSE: Early detection of breast cancer significantly reduces its mortality rate. For this purpose, automated three-dimensional breast ultrasound (3-D ABUS) has recently been used alongside mammography. The 3-D volume produced by this imaging system contains many slices, and the radiologist must review all of them to find a mass, a time-consuming task with a high probability of mistakes. Many computer-aided detection (CADe) systems have therefore been developed to assist radiologists. In this paper, we propose a novel CADe system for mass detection in 3-D ABUS images. METHODS: The proposed system comprises two cascaded convolutional neural networks. The goal of the first network is to achieve the highest possible sensitivity; the second network reduces false positives while maintaining high sensitivity. Both networks use an improved version of the 3-D U-Net architecture in which two types of modified Inception modules are used in the encoder. In the second network, new attention units are also added to the skip connections, which receive the results of the first network as saliency maps. RESULTS: The system was evaluated on a dataset containing 60 3-D ABUS volumes from 43 patients with 55 masses. It achieved a sensitivity of 91.48% with a mean of 8.85 false positives per patient. CONCLUSIONS: The suggested mass detection system is fully automatic, requiring no user interaction. The results indicate that the sensitivity and mean FPs per patient of the CADe system outperform competing techniques.
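To make the reported figures concrete, here is a small sketch of lesion-level evaluation, computing sensitivity and mean false positives per patient; the matching test `is_hit` (for example, an overlap or centroid-distance criterion) is a hypothetical placeholder.

```python
# Sketch: lesion-level sensitivity and mean false positives per patient.
def detection_metrics(patients, is_hit):
    """patients: list of dicts with 'gt' (ground-truth lesions) and 'preds'."""
    tp = fn = 0
    fp_counts = []
    for p in patients:
        matched, fp = set(), 0
        for pred in p["preds"]:
            hits = [i for i, g in enumerate(p["gt"]) if i not in matched and is_hit(pred, g)]
            if hits:
                matched.add(hits[0])   # each ground-truth lesion counted once
            else:
                fp += 1
        tp += len(matched)
        fn += len(p["gt"]) - len(matched)
        fp_counts.append(fp)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    mean_fp = sum(fp_counts) / len(fp_counts) if fp_counts else 0.0
    return sensitivity, mean_fp
```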
Affiliations
- Sepideh Barekatrezaei: School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran
- Ehsan Kozegar: Department of Computer Engineering and Engineering Sciences, Faculty of Technology and Engineering, University of Guilan, Rudsar-Vajargah, Guilan, Iran
- Masoumeh Salamati: Department of Reproductive Imaging, Reproductive Biomedicine Research Center, Royan Institute for Reproductive Biomedicine, ACECR, Tehran, Iran
- Mohsen Soryani: School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran
5. Li Y, Ren Y, Cheng Z, Sun J, Pan P, Chen H. Automatic breast ultrasound (ABUS) tumor segmentation based on global and local feature fusion. Phys Med Biol 2024;69:115039. PMID: 38759673. DOI: 10.1088/1361-6560/ad4d53.
Abstract
Accurate segmentation of tumor regions in automated breast ultrasound (ABUS) images is of paramount importance in computer-aided diagnosis systems. However, the inherent diversity of tumors and imaging interference pose great challenges to ABUS tumor segmentation. In this paper, we propose a global and local feature interaction model combined with graph fusion (GLGM) for 3D ABUS tumor segmentation. In GLGM, we construct a dual-branch encoder-decoder in which both local and global features can be extracted. Besides, a global and local feature fusion module is designed, which employs the deepest semantic interaction to facilitate information exchange between local and global features. Additionally, to improve segmentation of small tumors, a graph convolution-based shallow feature fusion module is designed; it exploits shallow features to enhance the representation of small tumors in both the local and global domains. The proposed method is evaluated on a private and a public ABUS dataset. In the private dataset, small tumors (volume smaller than 1 cm³) account for over 50% of the data. Experimental results show that the proposed GLGM model outperforms several state-of-the-art segmentation models in 3D ABUS tumor segmentation, particularly for small tumors.
Affiliations
- Yanfeng Li: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, People's Republic of China
- Yihan Ren: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, People's Republic of China
- Zhanyi Cheng: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, People's Republic of China
- Jia Sun: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, People's Republic of China
- Pan Pan: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, People's Republic of China
- Houjin Chen: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, People's Republic of China
6. Lambert B, Forbes F, Doyle S, Dehaene H, Dojat M. Trustworthy clinical AI solutions: A unified review of uncertainty quantification in Deep Learning models for medical image analysis. Artif Intell Med 2024;150:102830. PMID: 38553168. DOI: 10.1016/j.artmed.2024.102830.
Abstract
The full acceptance of Deep Learning (DL) models in the clinical field is rather low relative to the quantity of high-performing solutions reported in the literature. End users are particularly reluctant to rely on the opaque predictions of DL models. Uncertainty quantification methods have been proposed as a potential solution to reduce the black-box effect of DL models and increase the interpretability and acceptability of results for the end user. In this review, we propose an overview of existing methods for quantifying the uncertainty associated with DL predictions. We focus on applications to medical image analysis, which present specific challenges due to the high dimensionality and variable quality of images, as well as constraints of real-world clinical routine. We also discuss the concept of structural uncertainty, a corpus of methods to facilitate the alignment of segmentation uncertainty estimates with clinical attention, and the evaluation protocols used to validate the relevance of uncertainty estimates. Finally, we highlight open challenges for uncertainty quantification in the medical field.
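One recurring evaluation protocol for uncertainty estimates is calibration. A minimal sketch of Expected Calibration Error (ECE) follows, assuming equal-width confidence bins; the binning granularity is an arbitrary choice.

```python
# Sketch: Expected Calibration Error over equal-width confidence bins.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins: int = 10) -> float:
    confidences = np.asarray(confidences, dtype=float)  # predicted confidence in [0, 1]
    correct = np.asarray(correct, dtype=float)          # 1 if the prediction was right
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():  # weight each bin's |accuracy - confidence| gap by its size
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece
```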
Affiliations
- Benjamin Lambert: Univ. Grenoble Alpes, Inserm, U1216, Grenoble Institut des Neurosciences, Grenoble, 38000, France; Pixyl Research and Development Laboratory, Grenoble, 38000, France
- Florence Forbes: Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, Grenoble, 38000, France
- Senan Doyle: Pixyl Research and Development Laboratory, Grenoble, 38000, France
- Harmonie Dehaene: Pixyl Research and Development Laboratory, Grenoble, 38000, France
- Michel Dojat: Univ. Grenoble Alpes, Inserm, U1216, Grenoble Institut des Neurosciences, Grenoble, 38000, France
7. Ilesanmi AE, Ilesanmi TO, Ajayi BO. Reviewing 3D convolutional neural network approaches for medical image segmentation. Heliyon 2024;10:e27398. PMID: 38496891. PMCID: PMC10944240. DOI: 10.1016/j.heliyon.2024.e27398.
Abstract
Background: Convolutional neural networks (CNNs) play a pivotal role in aiding clinicians with diagnosis and treatment decisions. The rapid evolution of imaging technology has established three-dimensional (3D) CNNs as a formidable framework for delineating organs and anomalies in medical images, and their prominence in medical image segmentation and classification is steadily growing. We therefore present a comprehensive review encompassing diverse 3D CNN algorithms for the segmentation of organs and anomalies in medical images. Methods: This study systematically reviews recent 3D CNN methodologies. Rigorous screening of abstracts and titles was carried out to establish relevance, and research papers from academic repositories were chosen, analyzed, and appraised against specific criteria. Insights into anomaly and organ segmentation were derived, including details such as network architecture and achieved accuracies. Results: This paper offers an analysis of prevailing trends in 3D CNN segmentation, with in-depth elucidation of essential insights, constraints, observations, and avenues for future exploration. The examination indicates the preponderance of the encoder-decoder network in segmentation tasks, a framework that affords a coherent methodology for the segmentation of medical images. Conclusion: The findings of this study are poised to find application in clinical diagnosis and therapeutic interventions. Despite inherent limitations, CNN algorithms showcase commendable accuracy, solidifying their potential in medical image segmentation and classification.
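A hedged sketch of the encoder-decoder pattern the review finds dominant: a deliberately tiny 3D U-Net-style network with a single skip connection. Channel widths and depth are illustrative only.

```python
# Minimal 3D encoder-decoder with one skip connection (U-Net pattern).
import torch
import torch.nn as nn

def conv_block(cin: int, cout: int) -> nn.Sequential:
    return nn.Sequential(nn.Conv3d(cin, cout, 3, padding=1),
                         nn.BatchNorm3d(cout), nn.ReLU(inplace=True))

class TinyUNet3D(nn.Module):
    def __init__(self, in_ch: int = 1, n_classes: int = 2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)
        self.down = nn.MaxPool3d(2)
        self.enc2 = conv_block(16, 32)
        self.up = nn.ConvTranspose3d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)      # 16 skip channels + 16 upsampled
        self.head = nn.Conv3d(16, n_classes, 1)

    def forward(self, x):
        s1 = self.enc1(x)                   # full-resolution features (skip)
        b = self.enc2(self.down(s1))        # bottleneck at half resolution
        d = self.dec1(torch.cat([self.up(b), s1], dim=1))
        return self.head(d)                 # per-voxel class logits
```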
Affiliations
- Ademola E. Ilesanmi: University of Pennsylvania, 3710 Hamilton Walk, 6th Floor, Philadelphia, PA 19104, United States
- Babatunde O. Ajayi: National Astronomical Research Institute of Thailand, Chiang Mai 50180, Thailand
8. Taddese AA, Tilahun BC, Awoke T, Atnafu A, Mamuye A, Mengiste SA. Deep-learning models for image-based gynecological cancer diagnosis: a systematic review and meta-analysis. Front Oncol 2024;13:1216326. PMID: 38273847. PMCID: PMC10809847. DOI: 10.3389/fonc.2023.1216326.
Abstract
Introduction: Gynecological cancers pose a significant threat to women worldwide, especially in resource-limited settings. Human analysis of images remains the primary method of diagnosis, but it can be inconsistent and inaccurate. Deep learning (DL) can potentially enhance image-based diagnosis by providing objective and accurate results. This systematic review and meta-analysis aimed to summarize recent advances in DL techniques for gynecological cancer diagnosis using various images and to explore their future implications. Methods: The study followed the PRISMA-2 guidelines, and the protocol was registered in PROSPERO. Five databases were searched for articles published from January 2018 to December 2022. Articles that focused on five types of gynecological cancer and used DL for diagnosis were selected. Two reviewers assessed the articles for eligibility and quality using the QUADAS-2 tool. Data were extracted from each study, and the performance of DL techniques for gynecological cancer classification was estimated by pooling and transforming sensitivity and specificity values using a random-effects model. Results: The review included 48 studies, and the meta-analysis included 24. The studies used different images and models to diagnose different gynecological cancers; the most popular models were ResNet, VGGNet, and UNet. DL algorithms showed higher sensitivity but lower specificity than machine learning (ML) methods, and the AUC of the summary receiver operating characteristic plot was higher for DL algorithms than for ML methods. Of the 48 included studies, 41 were at low risk of bias. Conclusion: This review highlights the potential of DL for improving the screening and diagnosis of gynecological cancer, particularly in resource-limited settings. However, the high heterogeneity and variable quality of the studies could affect the validity of the results; further research is necessary to validate these findings and explore the potential of DL in gynecological cancer diagnosis.
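A hedged sketch of the pooling step named in the methods: per-study sensitivities are logit-transformed and combined with a DerSimonian-Laird random-effects model. The continuity correction and transformation are common conventions assumed here, not details confirmed by the paper.

```python
# Sketch: random-effects (DerSimonian-Laird) pooling of sensitivities.
import math

def pool_sensitivity(studies):
    """studies: list of (tp, fn) counts per study. Returns pooled sensitivity."""
    ys, vs = [], []
    for tp, fn in studies:
        tp, fn = tp + 0.5, fn + 0.5        # continuity correction
        ys.append(math.log(tp / fn))       # logit of sensitivity
        vs.append(1.0 / tp + 1.0 / fn)     # variance on the logit scale
    w = [1.0 / v for v in vs]
    y_fixed = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, ys))
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(ys) - 1)) / c) if c > 0 else 0.0  # between-study variance
    w_re = [1.0 / (v + tau2) for v in vs]
    y_re = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
    return 1.0 / (1.0 + math.exp(-y_re))   # back-transform to a proportion
```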
Affiliations
- Asefa Adimasu Taddese: Department of Health Informatics, Institute of Public Health, College of Medicine and Health Sciences, University of Gondar, Gondar, Ethiopia; eHealthlab Ethiopia Research Center, University of Gondar, Gondar, Ethiopia
- Binyam Chakilu Tilahun: Department of Health Informatics, Institute of Public Health, College of Medicine and Health Sciences, University of Gondar, Gondar, Ethiopia; eHealthlab Ethiopia Research Center, University of Gondar, Gondar, Ethiopia
- Tadesse Awoke: Department of Epidemiology and Biostatistics, Institute of Public Health, College of Medicine and Health Sciences, University of Gondar, Gondar, Ethiopia
- Asmamaw Atnafu: eHealthlab Ethiopia Research Center, University of Gondar, Gondar, Ethiopia; Department of Health Systems and Policy, Institute of Public Health, College of Medicine and Health Sciences, University of Gondar, Gondar, Ethiopia
- Adane Mamuye: eHealthlab Ethiopia Research Center, University of Gondar, Gondar, Ethiopia; School of Information Technology and Engineering, Addis Ababa University, Addis Ababa, Ethiopia
- Shegaw Anagaw Mengiste: Department of Business, History and Social Sciences, University of Southeastern Norway, Vestfold, Norway
9. Pan P, Li Y, Chen H, Sun J, Li X, Cheng L. ABUS tumor segmentation via decouple contrastive knowledge distillation. Phys Med Biol 2023;69:015019. PMID: 38052091. DOI: 10.1088/1361-6560/ad1274.
Abstract
Objective: In recent years, deep learning-based methods have become the mainstream for medical image segmentation, and accurate segmentation of automated breast ultrasound (ABUS) tumors plays an essential role in computer-aided diagnosis. However, existing deep learning models typically require large numbers of computations and parameters. Approach: Aiming at this problem, we propose a novel knowledge distillation method for ABUS tumor segmentation. Tumor and non-tumor regions from different cases tend to have similar representations in the feature space. Based on this, we propose to decouple features into positive (tumor) and negative (non-tumor) pairs and design a decoupled contrastive learning method, in which a contrastive loss forces the student network to mimic the tumor and non-tumor features of the teacher network. In addition, we design a ranking loss based on distance ranking in the feature space to address the problem of hard-negative mining in medical image segmentation. Main results: The effectiveness of our knowledge distillation method is evaluated on a private ABUS dataset and a public hippocampus dataset. The experimental results demonstrate that our method achieves state-of-the-art performance in ABUS tumor segmentation. Notably, after distilling knowledge from the teacher network (3D U-Net), the Dice similarity coefficient (DSC) of the student network (small 3D U-Net) is improved by 7%. Moreover, the DSC of the student network (3D HR-Net) reaches 0.780, very close to that of the teacher network, while their parameters are only 6.8% and 12.1% of 3D U-Net, respectively. Significance: This research introduces a novel knowledge distillation method for ABUS tumor segmentation that significantly reduces computational demands while achieving state-of-the-art performance, promising enhanced accuracy and feasibility for computer-aided diagnosis in diverse imaging scenarios.
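A hedged sketch of the decoupled idea: student and teacher features are pooled into tumor (positive) and non-tumor (negative) prototypes using the segmentation mask, then matched with an InfoNCE-style loss. The temperature, pooling, and loss symmetry are assumptions, not the paper's exact formulation.

```python
# Sketch: decoupled contrastive distillation on tumor/non-tumor prototypes.
import torch
import torch.nn.functional as F

def masked_pool(feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # feat: (B, C, D, H, W); mask: (B, 1, D, H, W) in {0, 1}
    return (feat * mask).sum(dim=(2, 3, 4)) / (mask.sum(dim=(2, 3, 4)) + 1e-6)

def decoupled_contrastive_kd(f_student, f_teacher, mask, tau: float = 0.1):
    pos_s, pos_t = masked_pool(f_student, mask), masked_pool(f_teacher, mask)
    neg_s, neg_t = masked_pool(f_student, 1 - mask), masked_pool(f_teacher, 1 - mask)

    def nce(anchor, positive, negative):
        sim_p = F.cosine_similarity(anchor, positive) / tau
        sim_n = F.cosine_similarity(anchor, negative) / tau
        return -torch.log(sim_p.exp() / (sim_p.exp() + sim_n.exp())).mean()

    # pull student prototypes toward the matching teacher prototypes,
    # and away from the opposite class
    return nce(pos_s, pos_t, neg_t) + nce(neg_s, neg_t, pos_t)
```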
Affiliations
- Pan Pan: Beijing Jiaotong University, Shangyuancun No. 3, Haidian, Beijing 100044, People's Republic of China
- Yanfeng Li: Beijing Jiaotong University, Shangyuancun No. 3, Haidian, Beijing 100044, People's Republic of China
- Houjin Chen: Beijing Jiaotong University, Shangyuancun No. 3, Haidian, Beijing 100044, People's Republic of China
- Jia Sun: Beijing Jiaotong University, Shangyuancun No. 3, Haidian, Beijing 100044, People's Republic of China
- Xiaoling Li: Beijing Jiaotong University, Shangyuancun No. 3, Haidian, Beijing 100044, People's Republic of China
- Lin Cheng: Peking University People's Hospital, Haidian, Beijing 100044, People's Republic of China
10. Zhang X, Sisniega A, Zbijewski WB, Lee J, Jones CK, Wu P, Han R, Uneri A, Vagdargi P, Helm PA, Luciano M, Anderson WS, Siewerdsen JH. Combining physics-based models with deep learning image synthesis and uncertainty in intraoperative cone-beam CT of the brain. Med Phys 2023;50:2607-2624. PMID: 36906915. PMCID: PMC10175241. DOI: 10.1002/mp.16351.
Abstract
BACKGROUND Image-guided neurosurgery requires high localization and registration accuracy to enable effective treatment and avoid complications. However, accurate neuronavigation based on preoperative magnetic resonance (MR) or computed tomography (CT) images is challenged by brain deformation occurring during the surgical intervention. PURPOSE To facilitate intraoperative visualization of brain tissues and deformable registration with preoperative images, a 3D deep learning (DL) reconstruction framework (termed DL-Recon) was proposed for improved intraoperative cone-beam CT (CBCT) image quality. METHODS The DL-Recon framework combines physics-based models with deep learning CT synthesis and leverages uncertainty information to promote robustness to unseen features. A 3D generative adversarial network (GAN) with a conditional loss function modulated by aleatoric uncertainty was developed for CBCT-to-CT synthesis. Epistemic uncertainty of the synthesis model was estimated via Monte Carlo (MC) dropout. Using spatially varying weights derived from epistemic uncertainty, the DL-Recon image combines the synthetic CT with an artifact-corrected filtered back-projection (FBP) reconstruction. In regions of high epistemic uncertainty, DL-Recon includes greater contribution from the FBP image. Twenty paired real CT and simulated CBCT images of the head were used for network training and validation, and experiments evaluated the performance of DL-Recon on CBCT images containing simulated and real brain lesions not present in the training data. Performance among learning- and physics-based methods was quantified in terms of structural similarity (SSIM) of the resulting image to diagnostic CT and Dice similarity metric (DSC) in lesion segmentation compared to ground truth. A pilot study was conducted involving seven subjects with CBCT images acquired during neurosurgery to assess the feasibility of DL-Recon in clinical data. RESULTS CBCT images reconstructed via FBP with physics-based corrections exhibited the usual challenges to soft-tissue contrast resolution due to image non-uniformity, noise, and residual artifacts. GAN synthesis improved image uniformity and soft-tissue visibility but was subject to error in the shape and contrast of simulated lesions that were unseen in training. Incorporation of aleatoric uncertainty in synthesis loss improved estimation of epistemic uncertainty, with variable brain structures and unseen lesions exhibiting higher epistemic uncertainty. The DL-Recon approach mitigated synthesis errors while maintaining improvement in image quality, yielding 15%-22% increase in SSIM (image appearance compared to diagnostic CT) and up to 25% increase in DSC in lesion segmentation compared to FBP. Clear gains in visual image quality were also observed in real brain lesions and in clinical CBCT images. CONCLUSIONS DL-Recon leveraged uncertainty estimation to combine the strengths of DL and physics-based reconstruction and demonstrated substantial improvements in the accuracy and quality of intraoperative CBCT. The improved soft-tissue contrast resolution could facilitate visualization of brain structures and support deformable registration with preoperative images, further extending the utility of intraoperative CBCT in image-guided neurosurgery.
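A hedged sketch of the fusion rule described above: per-voxel weights derived from epistemic uncertainty steer the output toward the physics-based FBP reconstruction wherever the CT-synthesis network is unsure. The normalization and the linear mapping from uncertainty to weight are assumptions for illustration.

```python
# Sketch: uncertainty-weighted combination of synthetic CT and FBP images.
import numpy as np

def dl_recon_combine(synth_ct: np.ndarray, fbp: np.ndarray,
                     epistemic_var: np.ndarray) -> np.ndarray:
    u = (epistemic_var - epistemic_var.min()) / (np.ptp(epistemic_var) + 1e-8)
    w_synth = 1.0 - u            # trust the synthesis where uncertainty is low
    return w_synth * synth_ct + (1.0 - w_synth) * fbp
```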
Affiliations
- Xiaoxuan Zhang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Alejandro Sisniega: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Wojciech B. Zbijewski: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Junghoon Lee: Department of Radiation Oncology, Johns Hopkins University, Baltimore, MD 21218, USA
- Craig K. Jones: Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Pengwei Wu: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Runze Han: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Ali Uneri: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Prasad Vagdargi: Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Mark Luciano: Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD 21218, USA
- William S. Anderson: Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD 21218, USA
- Jeffrey H. Siewerdsen: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD 21218, USA; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
11. Chen G, Li L, Dai Y, Zhang J, Yap MH. AAU-Net: An Adaptive Attention U-Net for Breast Lesions Segmentation in Ultrasound Images. IEEE Trans Med Imaging 2023;42:1289-1300. PMID: 36455083. DOI: 10.1109/TMI.2022.3226268.
Abstract
Various deep learning methods have been proposed to segment breast lesions from ultrasound images. However, similar intensity distributions, variable tumor morphologies, and blurred boundaries present challenges for breast lesion segmentation, especially for malignant tumors with irregular shapes. Considering the complexity of ultrasound images, we develop an adaptive attention U-net (AAU-net) to segment breast lesions automatically and stably from ultrasound images. Specifically, we introduce a hybrid adaptive attention module (HAAM), mainly consisting of a channel self-attention block and a spatial self-attention block, to replace the traditional convolution operation. Compared with conventional convolution, the hybrid adaptive attention module captures more features under different receptive fields. Different from existing attention mechanisms, the HAAM module can guide the network to adaptively select more robust representations in the channel and spatial dimensions to cope with more complex breast lesion segmentation. Extensive experiments against several state-of-the-art deep learning segmentation methods on three public breast ultrasound datasets show that our method performs better on breast lesion segmentation. Furthermore, robustness analysis and external experiments demonstrate that the proposed AAU-net generalizes better in breast lesion segmentation, and the HAAM module can be flexibly applied to existing network frameworks. The source code is available at https://github.com/CGPxy/AAU-net.
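A hedged sketch of a hybrid channel-plus-spatial attention block in the spirit of the HAAM: squeeze-excitation-style channel gating followed by a convolutional spatial gate. This is a generic construction, not the module published in the paper.

```python
# Sketch: sequential channel and spatial attention gates.
import torch.nn as nn

class HybridAttention2D(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.spatial_gate = nn.Sequential(nn.Conv2d(channels, 1, 7, padding=3),
                                          nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel_gate(x)     # reweight channels
        return x * self.spatial_gate(x)  # then reweight spatial positions
```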
12. Mújica-Vargas D, Matuz-Cruz M, García-Aquino C, Ramos-Palencia C. Efficient System for Delimitation of Benign and Malignant Breast Masses. Entropy (Basel) 2022;24:1775. PMID: 36554180. PMCID: PMC9777637. DOI: 10.3390/e24121775.
Abstract
In this study, a high-performing scheme is introduced to delimit benign and malignant masses in breast ultrasound images. The proposal builds on the Nonlocal Means filter for image quality improvement, an Intuitionistic Fuzzy C-Means local clustering algorithm for superpixel generation with high adherence to edges, and the DBSCAN algorithm for global clustering of those superpixels to delimit mass regions. The empirical study was performed using two datasets, both with benign and malignant breast tumors. The quantitative results on the BUSI dataset were JSC≥0.907, DM≥0.913, HD≥7.025, and MCR≤6.431 for benign masses and JSC≥0.897, DM≥0.900, HD≥8.666, and MCR≤8.016 for malignant ones, while the MID dataset yielded JSC≥0.890, DM≥0.905, HD≥8.370, and MCR≤7.241 along with JSC≥0.881, DM≥0.898, HD≥8.865, and MCR≤7.808 for benign and malignant masses, respectively. These numerical results show that our proposal outperformed all evaluated state-of-the-art methods in mass delimitation, which the visual results confirm through better edge delimitation of the segmented regions.
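A hedged sketch of the two-stage clustering pipeline: superpixels summarized by mean intensity and normalized centroid, then grouped globally with DBSCAN. SLIC stands in for the paper's intuitionistic fuzzy C-means superpixel step, and the eps/min_samples values are illustrative.

```python
# Sketch: superpixel features clustered globally with DBSCAN.
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import DBSCAN

def cluster_superpixels(image: np.ndarray, n_segments: int = 300):
    """image: 2D float array scaled to [0, 1]."""
    labels = slic(image, n_segments=n_segments, channel_axis=None)
    feats = []
    for lab in np.unique(labels):
        ys, xs = np.nonzero(labels == lab)
        feats.append([image[ys, xs].mean(),           # mean intensity
                      ys.mean() / image.shape[0],     # normalized centroid y
                      xs.mean() / image.shape[1]])    # normalized centroid x
    groups = DBSCAN(eps=0.1, min_samples=3).fit_predict(np.asarray(feats))
    return labels, groups  # superpixel map and its global cluster assignment
```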
Affiliations
- Dante Mújica-Vargas: Departamento de Ciencias Computacionales, Tecnológico Nacional de México, Centro Nacional de Investigación y Desarrollo Tecnológico, Cuernavaca 62490, Morelos, Mexico
- Manuel Matuz-Cruz: Tecnológico Nacional de México, Instituto Tecnológico de Tapachula, Tapachula 30700, Chiapas, Mexico
- Christian García-Aquino: Departamento de Ciencias Computacionales, Tecnológico Nacional de México, Centro Nacional de Investigación y Desarrollo Tecnológico, Cuernavaca 62490, Morelos, Mexico
- Celia Ramos-Palencia: Departamento de Ciencias Computacionales, Tecnológico Nacional de México, Centro Nacional de Investigación y Desarrollo Tecnológico, Cuernavaca 62490, Morelos, Mexico
13. Cao X, Chen H, Li Y, Peng Y, Zhou Y, Cheng L, Liu T, Shen D. Auto-DenseUNet: Searchable neural network architecture for mass segmentation in 3D automated breast ultrasound. Med Image Anal 2022;82:102589. DOI: 10.1016/j.media.2022.102589.
14. Zhang J, Tao X, Jiang Y, Wu X, Yan D, Xue W, Zhuang S, Chen L, Luo L, Ni D. Application of Convolution Neural Network Algorithm Based on Multicenter ABUS Images in Breast Lesion Detection. Front Oncol 2022;12:938413. PMID: 35898876. PMCID: PMC9310547. DOI: 10.3389/fonc.2022.938413.
Abstract
Objective: This study aimed to evaluate a convolutional neural network algorithm for breast lesion detection, developed from multicenter ABUS image data using YOLOv5. Methods: A total of 741 cases with 2,538 ABUS volumes, recruited from 7 hospitals between October 2016 and December 2020, were analyzed. Of these, 452 volumes from 413 cases were used as internal validation data, and 2,086 volumes from 328 cases were used as external validation data. There were 1,178 breast lesions in the 413 patients (161 malignant and 1,017 benign) and 1,936 lesions in the 328 patients (57 malignant and 1,879 benign). The efficiency and accuracy of the algorithm in detecting lesions were analyzed for different allowable false-positive values and lesion sizes, and the indicators were compared between the internal and external validation data. Results: The algorithm had high sensitivity for all categories of lesions in both the internal and external validation data, with overall detection rates of 78.1% and 71.2%, respectively. The algorithm detected more lesions with increasing nodule size (87.4% for lesions ≥10 mm but less than 50% for lesions <10 mm). The detection rate of BI-RADS 4/5 lesions was higher than that of BI-RADS 3 or 2 lesions (96.5% vs 79.7% vs 74.7% internally; 95.8% vs 74.7% vs 88.4% externally), and detection performance was better for malignant than for benign nodules (98.1% vs 74.9% internally; 98.2% vs 70.4% externally). Conclusions: This algorithm showed good detection efficiency in the internal and external validation sets, especially for category 4/5 lesions and malignant lesions, but remains deficient in detecting category 2 and 3 lesions and lesions smaller than 10 mm.
Affiliations
- Jianxing Zhang: Department of Medical Imaging Center, The First Affiliated Hospital, Jinan University, Guangzhou, China; Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Xing Tao: Medical Ultrasound Image Computing Lab, Shenzhen University, Shenzhen, China
- Yanhui Jiang: Medical Ultrasound Image Computing Lab, Shenzhen University, Shenzhen, China
- Xiaoxi Wu: Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Dan Yan: Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Wen Xue: Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Shulian Zhuang: Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Ling Chen: Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Liangping Luo: Department of Medical Imaging Center, The First Affiliated Hospital, Jinan University, Guangzhou, China
- Dong Ni: Medical Ultrasound Image Computing Lab, Shenzhen University, Shenzhen, China
Correspondence: Jianxing Zhang, Liangping Luo, Dong Ni
15. Cheng Z, Li Y, Chen H, Zhang Z, Pan P, Cheng L. DSGMFFN: Deepest semantically guided multi-scale feature fusion network for automated lesion segmentation in ABUS images. Comput Methods Programs Biomed 2022;221:106891. PMID: 35623209. DOI: 10.1016/j.cmpb.2022.106891.
Abstract
BACKGROUND AND OBJECTIVE: Automated breast ultrasound (ABUS) imaging has been widely used in clinical diagnosis, and accurate lesion segmentation in ABUS images is essential for computer-aided diagnosis (CAD) systems. Although deep learning-based approaches have been widely employed in medical image analysis, the large variety of lesions and imaging interference make ABUS lesion segmentation challenging. METHODS: In this paper, we propose a novel deepest semantically guided multi-scale feature fusion network (DSGMFFN) for lesion segmentation in 2D ABUS slices. To cope with the large variety of lesions, a deepest semantically guided decoder (DSGNet) and a multi-scale feature fusion model (MFFM) are designed, in which the deepest semantics is fully utilized to guide decoding and feature fusion: the deepest information is given the highest weight in the feature fusion process and participates in every decoding stage. To address imaging interference, a novel mixed attention mechanism is developed, integrating spatial self-attention and channel self-attention to capture correlations among pixels and channels and highlight the lesion region. RESULTS: The proposed DSGMFFN is evaluated on 3,742 slices of 170 ABUS volumes, achieving a Dice similarity coefficient (DSC) of 84.54% and an intersection over union (IoU) of 73.24%. CONCLUSIONS: The proposed method outperforms state-of-the-art methods in ABUS lesion segmentation, alleviating incorrect segmentation caused by lesion variety and imaging interference.
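A hedged sketch of the guiding idea, assuming equal channel counts across scales: the deepest feature map is upsampled to every decoder scale and fused with the dominant weight, so the deepest semantics participates in each stage. The fixed weights are an illustrative simplification.

```python
# Sketch: deepest-semantics-guided fusion across decoder scales.
import torch.nn.functional as F

def deepest_guided_fuse(feats, deepest, weights=(0.3, 0.7)):
    """feats: list of (B, C, H, W) maps; deepest: (B, C, h, w) bottleneck map."""
    fused = []
    for f in feats:
        d = F.interpolate(deepest, size=f.shape[2:], mode="bilinear",
                          align_corners=False)
        fused.append(weights[0] * f + weights[1] * d)  # deepest dominates
    return fused
```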
Affiliations
- Zhanyi Cheng: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China
- Yanfeng Li: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China
- Houjin Chen: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China
- Zilu Zhang: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China
- Pan Pan: School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China
- Lin Cheng: Center for Breast, People's Hospital of Peking University, Beijing, China
16. Diao Z, Jiang H, Shi T. A unified uncertainty network for tumor segmentation using uncertainty cross entropy loss and prototype similarity. Knowl Based Syst 2022. DOI: 10.1016/j.knosys.2022.108739.