1
Gunashekar DD, Bielak L, Oerther B, Benndorf M, Nedelcu A, Hickey S, Zamboglou C, Grosu AL, Bock M. Comparison of data fusion strategies for automated prostate lesion detection using mpMRI correlated with whole mount histology. Radiat Oncol 2024; 19:96. [PMID: 39080735] [PMCID: PMC11287985] [DOI: 10.1186/s13014-024-02471-0]
Abstract
BACKGROUND In this work, we compare input-level, feature-level, and decision-level data fusion techniques for the automatic detection of clinically significant prostate cancer (csPCa). METHODS Multiple deep learning CNN architectures were developed using the U-Net as the baseline. The CNNs use both multiparametric MRI images (T2W, ADC, and high b-value) and quantitative clinical data (prostate-specific antigen (PSA), PSA density (PSAD), prostate gland volume, and gross tumor volume (GTV)), or only mpMRI images (n = 118), as input. In addition, co-registered ground truth data from whole mount histopathology images (n = 22) were used as a test set for evaluation. RESULTS For early/intermediate/late fusion, the CNNs achieved a precision of 0.41/0.51/0.61, a recall of 0.18/0.22/0.25, an average precision of 0.13/0.19/0.27, and F scores of 0.55/0.67/0.76. The Dice-Sørensen coefficient (DSC) was used to evaluate the influence of combining mpMRI with parametric clinical data for the detection of csPCa. We compared the predictions of the CNNs trained with mpMRI plus parametric clinical data and of the CNNs trained with only mpMRI images against the ground truth, and obtained DSCs of 0.30/0.34/0.36 and 0.26/0.33/0.34, respectively. Additionally, we evaluated the influence of each mpMRI input channel for the task of csPCa detection and obtained a DSC of 0.14/0.25/0.28. CONCLUSION The results show that the decision-level fusion network performs best for the task of prostate lesion detection. Combining mpMRI data with quantitative clinical data does not yield significant differences between these networks (p = 0.26/0.62/0.85). The results also show that CNNs trained with all mpMRI channels outperform CNNs with fewer input channels, which is consistent with current clinical protocols, where the same input is used for PI-RADS lesion scoring.
TRIAL REGISTRATION The trial was registered retrospectively at the German Register for Clinical Studies (DRKS) under proposal numbers 476/14 and 476/19.
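The three fusion levels compared in this abstract can be sketched schematically. A minimal NumPy illustration follows; the toy `encoder`/`head` functions, the array shapes, and the averaging rule are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-ins for three mpMRI channels (T2W, ADC, high b-value).
t2w, adc, hbv = (rng.random((8, 8)) for _ in range(3))

def encoder(x):
    """Placeholder per-modality feature extractor."""
    return x - x.mean()

def head(feats):
    """Placeholder decision head: per-pixel score from stacked features."""
    return np.asarray(feats).mean(axis=0)

# Input-level (early) fusion: concatenate modalities before any processing.
early = head(np.stack([t2w, adc, hbv]))

# Feature-level (intermediate) fusion: encode each modality, then merge features.
intermediate = head([encoder(m) for m in (t2w, adc, hbv)])

# Decision-level (late) fusion: run a full model per modality, average decisions.
late = np.mean([head([encoder(m)]) for m in (t2w, adc, hbv)], axis=0)

print(early.shape, intermediate.shape, late.shape)
```

The only structural difference between the three variants is where the merge happens; in practice the encoder and head would be learned CNN stages rather than the fixed maps used here.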
Affiliation(s)
- Deepa Darshini Gunashekar
- Division of Medical Physics, Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany.
- German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany.
- Lars Bielak
- Division of Medical Physics, Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany
- Benedict Oerther
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Matthias Benndorf
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Andrea Nedelcu
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Samantha Hickey
- Division of Medical Physics, Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Constantinos Zamboglou
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- German Oncology Center, European University Cyprus, Limassol, Cyprus
- Anca-Ligia Grosu
- Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany
- Michael Bock
- Division of Medical Physics, Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany
2
Wang H, Wang T, Hao Y, Ding S, Feng J. Breast tumor segmentation via deep correlation analysis of multi-sequence MRI. Med Biol Eng Comput 2024. [PMID: 39031329] [DOI: 10.1007/s11517-024-03166-0]
Abstract
Precise segmentation of breast tumors from MRI is crucial for breast cancer diagnosis, as it allows for detailed calculation of tumor characteristics such as shape, size, and edges. Current segmentation methodologies face significant challenges in accurately modeling the complex interrelationships inherent in multi-sequence MRI data. This paper presents a hybrid deep network framework with three interconnected modules, aimed at efficiently integrating and exploiting the spatial-temporal features among multiple MRI sequences for breast tumor segmentation. The first module is an advanced multi-sequence encoder with a densely connected architecture, which separates the encoding pathway into multiple streams for individual MRI sequences. To harness the intricate correlations between different sequence features, we propose a sequence-aware and temporal-aware method that fuses the spatial-temporal features of MRI in the second, multi-scale feature embedding module. Finally, the decoder module upsamples the feature maps, refining the resolution to achieve highly precise segmentation of breast tumors. In contrast to other popular methods, the proposed method learns the interrelationships inherent in multi-sequence MRI. We justify the proposed method through extensive experiments. It achieves notable improvements in segmentation performance, with Dice Similarity Coefficient (DSC), Intersection over Union (IoU), and Positive Predictive Value (PPV) scores of 80.57%, 74.08%, and 84.74%, respectively.
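The three metrics reported above follow directly from per-pixel confusion counts. A minimal sketch of their standard definitions for binary masks (the toy masks are illustrative):

```python
import numpy as np

def seg_metrics(pred, gt):
    """Dice (DSC), IoU, and PPV for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()   # true positives
    fp = np.logical_and(pred, ~gt).sum()  # false positives
    fn = np.logical_and(~pred, gt).sum()  # false negatives
    dsc = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    ppv = tp / (tp + fp)
    return dsc, iou, ppv

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
dsc, iou, ppv = seg_metrics(pred, gt)
# tp=2, fp=1, fn=1 -> DSC = 4/6 ≈ 0.667, IoU = 2/4 = 0.5, PPV = 2/3 ≈ 0.667
```

Note that DSC and IoU are monotonically related (DSC = 2·IoU/(1+IoU)), which is why papers often report both only for comparability with prior work.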
Affiliation(s)
- Hongyu Wang
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China.
- Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China.
- Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China.
- Tonghui Wang
- Department of Information Science and Technology, Northwest University, Xi'an, Shaanxi, 710127, China
- Yanfang Hao
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Songtao Ding
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Jun Feng
- Department of Information Science and Technology, Northwest University, Xi'an, Shaanxi, 710127, China.
3
Chen T, Zheng W, Hu H, Luo C, Chen J, Yuan C, Lu W, Chen DZ, Gao H, Wu J. A Corresponding Region Fusion Framework for Multi-Modal Cervical Lesion Detection. IEEE/ACM Trans Comput Biol Bioinform 2024; 21:959-970. [PMID: 35635817] [DOI: 10.1109/tcbb.2022.3178725]
Abstract
Cervical lesion detection (CLD) using colposcopic images of multi-modality (acetic and iodine) is critical to computer-aided diagnosis (CAD) systems for accurate, objective, and comprehensive cervical cancer screening. To robustly capture lesion features and conform with clinical diagnosis practice, we propose a novel corresponding region fusion network (CRFNet) for multi-modal CLD. CRFNet first extracts feature maps and generates proposals for each modality, then performs proposal shifting to obtain corresponding regions under large position shifts between modalities, and finally fuses those region features with a new corresponding channel attention to detect lesion regions on both modalities. To evaluate CRFNet, we build a large multi-modal colposcopic image dataset collected from our collaborative hospital. We show that our proposed CRFNet surpasses known single-modal and multi-modal CLD methods and achieves state-of-the-art performance, especially in terms of Average Precision.
4
Liu S, Xin J, Wu J, Deng Y, Su R, Niessen WJ, Zheng N, van Walsum T. Multi-view Contour-constrained Transformer Network for Thin-cap Fibroatheroma Identification. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.12.041]
5
Huang W, Wang X, Huang Y, Lin F, Tang X. Multi-parametric Magnetic Resonance Imaging Fusion for Automatic Classification of Prostate Cancer. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:471-474. [PMID: 36085623] [DOI: 10.1109/embc48229.2022.9871334]
Abstract
Computer-aided diagnosis (CAD) of prostate cancer (PCa) using multi-parametric magnetic resonance imaging (mp-MRI) has recently gained great research interest. In this work, a fully automatic CAD pipeline for PCa using mp-MRI data is presented. In order to fully explore the mp-MRI data, we systematically investigate three multi-modal medical image fusion strategies in convolutional neural networks, namely input-level fusion, feature-level fusion, and decision-level fusion. Extensive experiments are conducted on two datasets with different PCa-related diagnostic tasks. We identify a pipeline that works best overall for both diagnostic tasks, two important components of which are stacking three adjacent slices as the input and performing decision-level fusion with specific loss weights. Clinical relevance: This work provides a practical method for the automated diagnosis of PCa based on multi-parametric MRI.
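The "stacking three adjacent slices as the input" component can be sketched as a sliding window over the slice axis of a volume. A minimal NumPy illustration; the shapes and the border-replication edge handling are assumptions, not the paper's exact preprocessing:

```python
import numpy as np

def stack_adjacent_slices(volume):
    """Turn a (S, H, W) volume into (S, 3, H, W) inputs, each sample holding
    a slice plus its two neighbours (edge slices replicate their border)."""
    padded = np.concatenate([volume[:1], volume, volume[-1:]], axis=0)
    return np.stack([padded[i:i + 3] for i in range(volume.shape[0])])

vol = np.arange(4 * 2 * 2).reshape(4, 2, 2).astype(float)
stacked = stack_adjacent_slices(vol)
print(stacked.shape)  # (4, 3, 2, 2): per slice, a 3-channel neighbourhood
```

This gives a 2D CNN limited through-plane context at negligible cost, which is a common compromise between purely 2D and fully 3D input.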
6
Dual-attention EfficientNet based on multi-view feature fusion for cervical squamous intraepithelial lesions diagnosis. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.02.009]
7
Jin K, Yan Y, Chen M, Wang J, Pan X, Liu X, Liu M, Lou L, Wang Y, Ye J. Multimodal deep learning with feature level fusion for identification of choroidal neovascularization activity in age-related macular degeneration. Acta Ophthalmol 2022; 100:e512-e520. [PMID: 34159761] [DOI: 10.1111/aos.14928]
Abstract
PURPOSE This study aimed to determine the efficacy of a multimodal deep learning (DL) model using optical coherence tomography (OCT) and optical coherence tomography angiography (OCTA) images for the assessment of choroidal neovascularization (CNV) in neovascular age-related macular degeneration (AMD). METHODS This retrospective, cross-sectional study was performed at multiple centres; the inclusion criteria were age >50 years and a diagnosis of typical neovascular AMD. The OCT and OCTA data for an internal data set and two external data sets were collected. A DL model was developed using a novel feature-level fusion (FLF) method to combine the multimodal data. The results were compared with identification performed by an ophthalmologist. The best model was tested on the two external data sets to show its potential for clinical use. RESULTS Our best model achieved an accuracy of 95.5% and an area under the curve (AUC) of 0.9796 on multimodal inputs for the internal data set, which is comparable to the performance of retinal specialists. The proposed model reached an accuracy of 100.00% with an AUC of 1.0 for the Ningbo data set, and an accuracy of 90.48% with an AUC of 0.9727 for the Jinhua data set. CONCLUSION The FLF method is feasible and highly accurate, and could enhance the power of existing computer-aided diagnosis systems. The bi-modal computer-aided diagnosis (CADx) system for the automated identification of CNV activity is an accurate and promising tool in the realm of public health.
Affiliation(s)
- Kai Jin
- Department of Ophthalmology College of Medicine The Second Affiliated Hospital of Zhejiang University Hangzhou China
- Yan Yan
- Department of Ophthalmology College of Medicine The Second Affiliated Hospital of Zhejiang University Hangzhou China
- Menglu Chen
- Department of Ophthalmology College of Medicine The Second Affiliated Hospital of Zhejiang University Hangzhou China
- Jun Wang
- The School of Biomedical Engineering Shanghai Jiao Tong University Shanghai China
- Xiangji Pan
- Department of Ophthalmology College of Medicine The Second Affiliated Hospital of Zhejiang University Hangzhou China
- Xindi Liu
- Department of Ophthalmology College of Medicine The Second Affiliated Hospital of Zhejiang University Hangzhou China
- Mushui Liu
- College of Computer Science and Technology Zhejiang University Hangzhou China
- Lixia Lou
- Department of Ophthalmology College of Medicine The Second Affiliated Hospital of Zhejiang University Hangzhou China
- Yao Wang
- Department of Ophthalmology College of Medicine The Second Affiliated Hospital of Zhejiang University Hangzhou China
- Juan Ye
- Department of Ophthalmology College of Medicine The Second Affiliated Hospital of Zhejiang University Hangzhou China
8
Chen T, Liu X, Feng R, Wang W, Yuan C, Lu W, He H, Gao H, Ying H, Chen DZ, Wu J. Discriminative Cervical Lesion Detection in Colposcopic Images with Global Class Activation and Local Bin Excitation. IEEE J Biomed Health Inform 2021; 26:1411-1421. [PMID: 34314364] [DOI: 10.1109/jbhi.2021.3100367]
Abstract
Accurate cervical lesion detection (CLD) methods using colposcopic images are highly demanded in computer-aided diagnosis (CAD) for automatic diagnosis of High-grade Squamous Intraepithelial Lesions (HSIL). However, compared to natural scene images, the specific characteristics of colposcopic images, such as low contrast, visual similarity, and ambiguous lesion boundaries, pose difficulties in accurately locating HSIL regions and also significantly impede the performance improvement of existing CLD approaches. To tackle these difficulties and better capture cervical lesions, we develop novel feature enhancing mechanisms from both global and local perspectives, and propose a new discriminative CLD framework, called CervixNet, with a Global Class Activation (GCA) module and a Local Bin Excitation (LBE) module. Specifically, the GCA module learns discriminative features by introducing an auxiliary classifier, and guides our model to focus on HSIL regions while ignoring noisy regions. It globally facilitates the feature extraction process and helps boost feature discriminability. Further, our LBE module excites lesion features in a local manner, and allows the lesion regions to be enhanced in a more fine-grained way by explicitly modelling the inter-dependencies among the bins of the proposal features. Extensive experiments on 9888 clinical colposcopic images verify the superiority of our method (AP.75 = 20.45) over state-of-the-art models on four widely used metrics.
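Modelling inter-dependencies among the bins of a proposal feature can be sketched as squeeze-and-excitation-style gating over the bin axis. The sketch below is a rough analogy only; the gating function, shapes, and weights are illustrative assumptions, not the CervixNet LBE implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def excite_bins(proposal_feat, w):
    """Re-scale each spatial bin of a pooled proposal feature by a gate
    computed from all bins, so bins are weighted according to their
    inter-dependencies (squeeze-and-excitation-style sketch)."""
    c, b = proposal_feat.shape            # channels x bins (e.g. flattened 7x7)
    squeeze = proposal_feat.mean(axis=0)  # per-bin summary, shape (b,)
    gate = sigmoid(w @ squeeze)           # per-bin gates in (0, 1), shape (b,)
    return proposal_feat * gate           # broadcast gating over channels

rng = np.random.default_rng(0)
feat = rng.random((256, 49))              # a 7x7 RoI-pooled proposal, flattened
w = rng.random((49, 49)) * 0.1            # toy stand-in for learned weights
out = excite_bins(feat, w)
print(out.shape)  # (256, 49)
```

In a trained network `w` would be learned end-to-end, so the gating emphasises bins that carry lesion evidence and suppresses background bins.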