1
Zhou H, Hua Z, Gao J, Lin F, Chen Y, Zhang S, Zheng T, Wang Z, Shao H, Li W, Liu F, Li Q, Chen J, Wang X, Zhao F, Qu N, Xie H, Ma H, Zhang H, Mao N. Multitask Deep Learning-Based Whole-Process System for Automatic Diagnosis of Breast Lesions and Axillary Lymph Node Metastasis Discrimination from Dynamic Contrast-Enhanced-MRI: A Multicenter Study. J Magn Reson Imaging 2024;59:1710-1722. PMID: 37497811. DOI: 10.1002/jmri.28913.
Abstract
BACKGROUND: Accurate diagnosis of breast lesions and discrimination of axillary lymph node (ALN) metastases largely depend on radiologist experience.
PURPOSE: To develop a deep learning-based whole-process system (DLWPS) for segmentation and diagnosis of breast lesions and discrimination of ALN metastasis.
STUDY TYPE: Retrospective.
POPULATION: 1760 patients with breast lesions, divided into training and validation sets (1110 patients), an internal test set (476 patients), and an external test set (174 patients).
FIELD STRENGTH/SEQUENCE: 3.0 T; dynamic contrast-enhanced (DCE)-MRI sequence.
ASSESSMENT: The DLWPS comprised segmentation and classification models. The segmentation model was built on the U-Net framework, combining an attention module with an edge feature extraction module. The result of the classification model was the average of the output scores of three networks. Radiologists' diagnostic performance without and with DLWPS assistance was also explored. To reveal the underlying biological basis of the DLWPS, genetic analysis was performed on RNA-sequencing data.
STATISTICAL TESTS: Dice similarity coefficient (DI), area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, and kappa value.
RESULTS: The segmentation model reached a DI of 0.828 and 0.813 in the internal and external test sets, respectively. For breast lesion diagnosis, the DLWPS achieved AUCs of 0.973 in the internal test set and 0.936 in the external test set. For ALN metastasis discrimination, it achieved AUCs of 0.927 and 0.917, respectively. With DLWPS assistance, radiologist agreement improved from 0.547 to 0.794 for breast lesion diagnosis and from 0.848 to 0.892 for ALN metastasis discrimination. Additionally, 10 breast cancers with ALN metastasis were associated with the aerobic electron transport chain and cytoplasmic translation pathways.
DATA CONCLUSION: The performance of the DLWPS indicates that it can assist radiologists in diagnosing breast lesions and in discriminating ALN metastasis from nonmetastasis.
LEVEL OF EVIDENCE: 4
TECHNICAL EFFICACY: Stage 3
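The abstract's classification stage averages the output scores of three networks. A minimal PyTorch sketch of that score-averaging ensemble follows; the paper does not name its three backbones in the abstract, so the resnet18 classifiers here are placeholder assumptions.

```python
# Score-averaging ensemble: the final probability is the mean of three
# networks' softmax outputs. The resnet18 backbones are placeholders; the
# abstract does not name the three architectures.
import torch
import torchvision.models as models

def build_binary_classifier():
    net = models.resnet18(weights=None)
    net.fc = torch.nn.Linear(net.fc.in_features, 2)  # benign vs. malignant
    return net

nets = [build_binary_classifier().eval() for _ in range(3)]

@torch.no_grad()
def ensemble_score(x):
    # x: batch of lesion crops, shape (B, 3, H, W)
    probs = [torch.softmax(net(x), dim=1) for net in nets]
    return torch.stack(probs).mean(dim=0)  # average of the three networks' scores

print(ensemble_score(torch.randn(4, 3, 224, 224)).shape)  # (4, 2)
```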
Affiliation(s)
- Heng Zhou: School of Information and Electronic Engineering, Shandong Technology and Business University, Yantai, Shandong, China
- Zhen Hua: School of Information and Electronic Engineering, Shandong Technology and Business University, Yantai, Shandong, China
- Jing Gao: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Fan Lin: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Yuqian Chen: School of Information and Electronic Engineering, Shandong Technology and Business University, Yantai, Shandong, China
- Shijie Zhang: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Tiantian Zheng: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Zhongyi Wang: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Huafei Shao: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Wenjuan Li: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Fengjie Liu: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Qin Li: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Jingjing Chen: Department of Radiology, Qingdao University Affiliated Hospital, Qingdao, Shandong, China
- Ximing Wang: Department of Radiology, Shandong Provincial Hospital, Jinan, Shandong, China
- Feng Zhao: School of Computer Science and Technology, Shandong Technology and Business University, Yantai, Shandong, China
- Nina Qu: Department of Ultrasound, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Haizhu Xie: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Heng Ma: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Haicheng Zhang: Department of Radiology, and Big Data and Artificial Intelligence Laboratory, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Ning Mao: Department of Radiology, and Big Data and Artificial Intelligence Laboratory, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
2
Ilesanmi AE, Ilesanmi TO, Ajayi BO. Reviewing 3D convolutional neural network approaches for medical image segmentation. Heliyon 2024;10:e27398. PMID: 38496891. PMCID: PMC10944240. DOI: 10.1016/j.heliyon.2024.e27398.
Abstract
Background: Convolutional neural networks (CNNs) play a pivotal role in aiding clinicians with diagnosis and treatment decisions. With the rapid evolution of imaging technology, three-dimensional (3D) CNNs have become a powerful framework for delineating organs and anomalies in medical images, and their prominence in medical image segmentation and classification continues to grow. We therefore present a comprehensive review of 3D CNN algorithms for segmenting anomalies and organs in medical images.
Methods: This study systematically reviews recent 3D CNN methodologies. Abstracts and titles were rigorously screened for relevance, and papers from academic repositories were selected, analyzed, and appraised against specific criteria. For each work, details such as network architecture and reported accuracy were extracted.
Results: This paper provides an overarching analysis of prevailing trends in 3D CNN segmentation, covering key insights, constraints, observations, and avenues for future exploration. The analysis indicates that the encoder-decoder network predominates in segmentation tasks, as it affords a coherent methodology for segmenting medical images.
Conclusion: The findings of this study can inform clinical diagnosis and therapeutic interventions. Despite inherent limitations, CNN algorithms achieve commendable accuracy, solidifying their potential for medical image segmentation and classification.
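The review identifies the encoder-decoder network as the dominant pattern for 3D segmentation. The deliberately tiny PyTorch sketch below illustrates that pattern only; real 3D segmentation networks are far deeper and usually add U-Net-style skip connections.

```python
# A deliberately tiny 3D encoder-decoder illustrating the dominant pattern;
# real networks are deeper and typically add skip connections.
import torch
import torch.nn as nn

class Tiny3DEncoderDecoder(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                          # downsample
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 2, stride=2),  # upsample back
            nn.ReLU(),
            nn.Conv3d(16, n_classes, 1),              # per-voxel class logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

logits = Tiny3DEncoderDecoder()(torch.randn(1, 1, 32, 64, 64))
print(logits.shape)  # (1, 2, 32, 64, 64)
```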
Affiliation(s)
- Ademola E. Ilesanmi: University of Pennsylvania, 3710 Hamilton Walk, 6th Floor, Philadelphia, PA 19104, United States
- Babatunde O. Ajayi: National Astronomical Research Institute of Thailand, Chiang Mai 50180, Thailand
3
Zhao Z, Du S, Xu Z, Yin Z, Huang X, Huang X, Wong C, Liang Y, Shen J, Wu J, Qu J, Zhang L, Cui Y, Wang Y, Wee L, Dekker A, Han C, Liu Z, Shi Z, Liang C. SwinHR: Hemodynamic-powered hierarchical vision transformer for breast tumor segmentation. Comput Biol Med 2024;169:107939. PMID: 38194781. DOI: 10.1016/j.compbiomed.2024.107939.
Abstract
Accurate and automated segmentation of breast tumors in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays a critical role in computer-aided diagnosis and treatment of breast cancer. The task is challenging, however, owing to random variation in tumor size, shape, and appearance, and to the blurred tumor boundaries caused by the inherent heterogeneity of breast cancer. The presence of ill-posed artifacts in DCE-MRI further complicates tumor region annotation. To address these challenges, we propose a scheme (named SwinHR) that integrates prior DCE-MRI knowledge with the temporal-spatial information of breast tumors. The prior knowledge refers to hemodynamic information extracted from multiple DCE-MRI phases, which provides pharmacokinetics information describing metabolic changes of the tumor cells over the scanning time. The Swin Transformer with a hierarchical re-parameterization large-kernel architecture (H-RLK) captures long-range dependencies within DCE-MRI while maintaining computational efficiency through a shifted-window self-attention mechanism. H-RLK extracts high-level features with a wider receptive field, allowing the model to capture contextual information at different levels of abstraction. Extensive experiments on large-scale datasets validate the effectiveness of the proposed SwinHR scheme, demonstrating its superiority over recent state-of-the-art segmentation methods. A subgroup analysis split by MRI scanner, field strength, and tumor size further verifies its generalization. The source code is released at https://github.com/GDPHMediaLab/SwinHR.
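The "hemodynamic information extracted from multiple DCE-MRI phases" can be illustrated with simple wash-in/wash-out maps. The formulas below are a generic sketch, not SwinHR's actual parametric maps, and the phase layout (pre-contrast first) is an assumption.

```python
# Illustrative wash-in / wash-out maps from a DCE series; assumes phases[0]
# is the pre-contrast acquisition. Generic formulas, not SwinHR's actual
# hemodynamic parametric maps.
import numpy as np

def hemodynamic_maps(phases):
    # phases: (T, D, H, W) DCE-MRI series
    pre = phases[0].astype(np.float32) + 1e-6
    peak = phases.mean(axis=(1, 2, 3)).argmax()   # globally brightest phase
    wash_in = (phases[peak] - pre) / pre          # early enhancement
    wash_out = (phases[-1] - phases[peak]) / pre  # late decay or plateau
    return np.stack([wash_in, wash_out])

maps = hemodynamic_maps(np.random.rand(6, 16, 64, 64))
print(maps.shape)  # (2, 16, 64, 64)
```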
Affiliation(s)
- Zhihe Zhao: School of Medicine, South China University of Technology, Guangzhou 510006, China; Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China
- Siyao Du: Department of Radiology, The First Hospital of China Medical University, Shenyang, Liaoning 110001, China
- Zeyan Xu: Department of Radiology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Yunnan Cancer Center, Kunming 650118, China
- Zhi Yin: Department of Radiology, Shanxi Province Cancer Hospital / Shanxi Hospital Affiliated to Cancer Hospital, Chinese Academy of Medical Sciences / Cancer Hospital Affiliated to Shanxi Medical University, Taiyuan 030013, China
- Xiaomei Huang: Department of Medical Imaging, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Xin Huang: Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China; Shantou University Medical College, Shantou 515041, China
- Chinting Wong: Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China
- Yanting Liang: Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China
- Jing Shen: Department of Radiology, Affiliated Zhongshan Hospital of Dalian University, Dalian 116001, China
- Jianlin Wu: Department of Radiology, Affiliated Zhongshan Hospital of Dalian University, Dalian 116001, China
- Jinrong Qu: Department of Radiology, The Affiliated Cancer Hospital of Zhengzhou University & Henan Cancer Hospital, Zhengzhou 450008, China
- Lina Zhang: Department of Radiology, The First Hospital of China Medical University, Shenyang, Liaoning 110001, China
- Yanfen Cui: Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China; Department of Radiology, Shanxi Province Cancer Hospital / Shanxi Hospital Affiliated to Cancer Hospital, Chinese Academy of Medical Sciences / Cancer Hospital Affiliated to Shanxi Medical University, Taiyuan 030013, China
- Ying Wang: Department of Medical Ultrasonics, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou 510120, China
- Leonard Wee: Clinical Data Science, Faculty of Health Medicine Life Sciences, Maastricht University, Maastricht 6229 ET, The Netherlands; Department of Radiation Oncology (Maastro), GROW School of Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht 6229 ET, The Netherlands
- Andre Dekker: Department of Radiation Oncology (Maastro), GROW School of Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht 6229 ET, The Netherlands
- Chu Han: Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China; Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China
- Zaiyi Liu: Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China
- Zhenwei Shi: Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China; Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China
- Changhong Liang: School of Medicine, South China University of Technology, Guangzhou 510006, China; Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China
4
Saikia S, Si T, Deb D, Bora K, Mallik S, Maulik U, Zhao Z. Lesion detection in women breast's dynamic contrast-enhanced magnetic resonance imaging using deep learning. Sci Rep 2023;13:22555. PMID: 38110462. PMCID: PMC10728155. DOI: 10.1038/s41598-023-48553-z.
Abstract
Breast cancer is one of the most common cancers in women and the second leading cause of cancer death in women after lung cancer. Recent technological advances in breast cancer treatment offer hope to millions of women worldwide. Segmentation of breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a necessary task in the diagnosis and detection of breast cancer. U-Net, a popular deep learning model, is extensively used in biomedical image segmentation. This article aims to advance the state of the art through an in-depth analysis of U-Net models for lesion detection in women's breast DCE-MRI. We perform an empirical study of the effectiveness and efficiency of U-Net and its derived deep learning models, including ResUNet, Dense UNet, DUNet, Attention U-Net, UNet++, MultiResUNet, RAUNet, Inception U-Net, and U-Net GAN, for lesion detection in breast DCE-MRI. All models are applied to a benchmark of 100 sagittal T2-weighted fat-suppressed DCE-MRI slices from 20 patients and their performance is compared; a further comparison is conducted against V-Net, W-Net, and DeepLabV3+. The non-parametric Wilcoxon signed-rank test is used to assess the significance of the quantitative results, and Multi-Criteria Decision Analysis (MCDA) is used to evaluate overall performance in terms of accuracy, precision, sensitivity, F1-score, specificity, geometric mean, DSC, and false-positive rate. The RAUNet segmentation model achieved a high accuracy of 99.76%, sensitivity of 85.04%, precision of 90.21%, and Dice similarity coefficient (DSC) of 85.04%, whereas ResUNet achieved 99.62% accuracy, 62.26% sensitivity, 99.56% precision, and 72.86% DSC. ResUNet was found to be the most effective model based on MCDA, while U-Net GAN took the least computational time to perform the segmentation task. Both quantitative and qualitative results demonstrate that the ResUNet model outperforms the other models in segmentation and lesion detection, though the computational time required to achieve these objectives varies.
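The paper's significance analysis can be reproduced in outline with SciPy: a Wilcoxon signed-rank test on paired per-slice Dice scores from two models. The numbers below are synthetic, not the paper's data.

```python
# Paired significance test on per-slice Dice scores of two models, as in the
# study's analysis; the scores below are synthetic, not the paper's data.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
dice_raunet = np.clip(rng.normal(0.85, 0.05, 100), 0, 1)  # 100 test slices
dice_resunet = np.clip(dice_raunet - rng.normal(0.02, 0.03, 100), 0, 1)

stat, p = wilcoxon(dice_raunet, dice_resunet)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4g}")
# A small p suggests the paired Dice difference between models is significant.
```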
Affiliation(s)
- Sudarshan Saikia: Information Technology Department, Oil India Limited, Duliajan, Assam 786602, India
- Tapas Si: AI Innovation Lab, Department of Computer Science & Engineering, University of Engineering & Management, Jaipur, GURUKUL, Jaipur, Rajasthan 303807, India
- Darpan Deb: Department of Computer Application, Christ University, Bengaluru 560029, India
- Kangkana Bora: Department of Computer Science and Information Technology, Cotton University, Guwahati, Assam 781001, India
- Saurav Mallik: Department of Environmental Health, Harvard T. H. Chan School of Public Health, Boston, MA 02115, USA
- Ujjwal Maulik: Department of Computer Science and Engineering, Jadavpur University, Kolkata, India
- Zhongming Zhao: Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX 77030, USA
5
Zhang J, Cui Z, Shi Z, Jiang Y, Zhang Z, Dai X, Yang Z, Gu Y, Zhou L, Han C, Huang X, Ke C, Li S, Xu Z, Gao F, Zhou L, Wang R, Liu J, Zhang J, Ding Z, Sun K, Li Z, Liu Z, Shen D. A robust and efficient AI assistant for breast tumor segmentation from DCE-MRI via a spatial-temporal framework. Patterns (N Y) 2023;4:100826. PMID: 37720328. PMCID: PMC10499873. DOI: 10.1016/j.patter.2023.100826.
Abstract
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) allows screening, follow-up, and diagnosis of breast tumors with high sensitivity. Accurate tumor segmentation from DCE-MRI provides crucial information about tumor location and shape, which significantly influences downstream clinical decisions. In this paper, we aim to develop an artificial intelligence (AI) assistant that automatically segments breast tumors by capturing dynamic changes in multi-phase DCE-MRI with a spatial-temporal framework. The main advantages of our AI assistant are (1) robustness: the model can handle MR data with different phase numbers and imaging intervals, as demonstrated on a large-scale dataset from seven medical centers; and (2) efficiency: the AI assistant reduces the time required for manual annotation by a factor of 20 while maintaining accuracy comparable to that of physicians. More importantly, as the fundamental step in building an AI-assisted breast cancer diagnosis system, our AI assistant will promote the application of AI in more clinical diagnostic practices regarding breast cancer.
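The robustness claim, handling MR data with different phase numbers, implies an architecture indifferent to the length of the phase axis. One generic way to achieve that (a shared per-phase encoder followed by pooling over phases) is sketched below; it is an illustration, not the paper's actual spatial-temporal framework.

```python
# Phase-count invariance via a shared per-phase encoder and pooling over the
# phase axis; a generic sketch, not the paper's architecture.
import torch
import torch.nn as nn

class PhaseInvariantEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.per_phase = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        # x: (B, T, D, H, W) with an arbitrary number of phases T
        b, t = x.shape[:2]
        feats = self.per_phase(x.reshape(b * t, 1, *x.shape[2:]))
        feats = feats.reshape(b, t, *feats.shape[1:])
        return feats.max(dim=1).values  # pooled over phases -> (B, 16, D, H, W)

enc = PhaseInvariantEncoder()
print(enc(torch.randn(1, 5, 8, 32, 32)).shape)  # T = 5
print(enc(torch.randn(1, 7, 8, 32, 32)).shape)  # same model, T = 7
```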
Affiliation(s)
- Jiadong Zhang: School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Zhiming Cui: School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Zhenwei Shi: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong 510080, China
- Yingjia Jiang: Department of Radiology, The Second Xiangya Hospital, Central South University, Hunan 410011, China
- Zhiliang Zhang: School of Medical Imaging, Hangzhou Medical College, Zhejiang 310059, China
- Xiaoting Dai: Department of Radiology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200080, China
- Zhenlu Yang: Department of Radiology, Guizhou Provincial People's Hospital, Guizhou 550002, China
- Yuning Gu: School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Lei Zhou: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Chu Han: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong 510080, China
- Xiaomei Huang: Department of Medical Imaging, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Chenglu Ke: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong 510080, China
- Suyun Li: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong 510080, China
- Zeyan Xu: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong 510080, China
- Fei Gao: School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Luping Zhou: School of Electrical and Information Engineering, The University of Sydney, Sydney, NSW 2006, Australia
- Rongpin Wang: Department of Radiology, Guizhou Provincial People's Hospital, Guizhou 550002, China
- Jun Liu: Department of Radiology, The Second Xiangya Hospital, Central South University, Hunan 410011, China
- Jiayin Zhang: Department of Radiology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200080, China
- Zhongxiang Ding: Department of Radiology, Key Laboratory of Clinical Cancer Pharmacology and Toxicology Research of Zhejiang Province, Hangzhou 310003, China
- Kun Sun: Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200025, China
- Zhenhui Li: Department of Radiology, The Third Affiliated Hospital of Kunming Medical University, Kunming 650118, China
- Zaiyi Liu: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong 510080, China
- Dinggang Shen: School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200230, China; Shanghai Clinical Research and Trial Center, Shanghai 200052, China
6
Machine learning on MRI radiomic features: identification of molecular subtype alteration in breast cancer after neoadjuvant therapy. Eur Radiol 2023;33:2965-2974. PMID: 36418622. DOI: 10.1007/s00330-022-09264-7.
Abstract
OBJECTIVES: Recent studies have revealed changes of molecular subtype in breast cancer (BC) after neoadjuvant therapy (NAT). This study aims to construct a non-invasive model for predicting molecular subtype alteration in breast cancer after NAT.
METHODS: Eighty-two patients with estrogen receptor (ER)-negative/human epidermal growth factor receptor 2 (HER2)-negative or ER-low-positive/HER2-negative breast cancer who underwent NAT and completed baseline MRI were retrospectively recruited between July 2010 and November 2020. Subtype alteration was observed in 21 cases after NAT. A 2D-DenseUNet machine-learning model was built to perform automatic segmentation of breast cancer, and 851 radiomic features were extracted from each MRI sequence (T2-weighted imaging, ADC, DCE, and contrast-enhanced T1-weighted imaging) in both the manual and auto-segmentation masks. All samples were divided into a training set (n = 66) and a test set (n = 16). An XGBoost model with 5-fold cross-validation was used to predict molecular subtype alterations after NAT, and its predictive ability was evaluated by the AUC of the ROC curve, sensitivity, and specificity.
RESULTS: A model consisting of three radiomics features from the manual segmentation of multi-sequence MRI achieved favorable predictive efficacy in identifying molecular subtype alteration after NAT (cross-validation set: AUC = 0.908; independent test set: AUC = 0.864), and the automatic segmentation of BC lesions on the DCE sequence produced good segmentation results (Dice similarity coefficient = 0.720).
CONCLUSIONS: A machine learning model based on baseline MRI proved useful for predicting molecular subtype alterations in breast cancer after NAT.
KEY POINTS: • Machine learning models using an MRI-based radiomics signature can predict molecular subtype alterations in breast cancer after neoadjuvant therapy, which subsequently affect treatment protocols. • The application of deep learning to the automatic segmentation of breast cancer lesions from MRI images shows the potential to replace manual segmentation.
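The classifier stage (XGBoost with 5-fold cross-validation on radiomic features) can be outlined in a few lines. The feature matrix below is synthetic; in the study it would come from radiomics extraction on the segmented lesions.

```python
# XGBoost with 5-fold cross-validation on a radiomics feature matrix; the
# data here are synthetic stand-ins for the study's 851 extracted features.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(82, 851))   # 82 patients x 851 radiomic features
y = rng.integers(0, 2, size=82)  # subtype altered after NAT: yes / no

model = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
aucs = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("5-fold AUC: %.3f +/- %.3f" % (aucs.mean(), aucs.std()))
```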
7
Fully automatic classification of breast lesions on multi-parameter MRI using a radiomics model with minimal number of stable, interpretable features. La Radiologia Medica 2023;128:160-170. PMID: 36670236. DOI: 10.1007/s11547-023-01594-w.
Abstract
PURPOSE: To build an automatic computer-aided diagnosis (CAD) pipeline based on multiparametric magnetic resonance imaging (mpMRI) and explore the role of different imaging features in the classification of breast cancer.
MATERIALS AND METHODS: A total of 222 histopathology-confirmed breast lesions, together with their BI-RADS scores, were included in the analysis. The cohort was randomly split into training (n = 159) and test (n = 63) cohorts, and another 50 lesions were collected as an external cohort. An nnU-Net-based lesion segmentation model was trained to automatically segment lesion ROIs, from which radiomics features were extracted for diffusion-weighted imaging (DWI), T2-weighted imaging (T2WI), and contrast-enhanced (DCE) pharmacokinetic parametric maps. Models based on combinations of sequences were built using support vector machine (SVM) and logistic regression (LR) classifiers, and the performance of these sequence combinations was compared with that of BI-RADS scores. The Dice coefficient and AUC were calculated to evaluate the segmentation and classification results, and decision curve analysis (DCA) was used to assess clinical utility.
RESULTS: The segmentation model achieved a Dice coefficient of 0.831 in the test cohort. The radiomics model used only three features, from apparent diffusion coefficient (ADC) maps, T2WI, and DCE-derived kinetic mapping, and achieved an AUC of 0.946 [0.883-0.990] in the test cohort and 0.842 [0.6856-0.998] in the external cohort, higher than the BI-RADS score with an AUC of 0.872 [0.752-0.975]. The joint model using both the radiomics score and the BI-RADS score achieved the highest test AUC of 0.975 [0.935-1.000], with a sensitivity of 0.920 and a specificity of 0.923.
CONCLUSION: Three radiomics features can be used to construct an automatic radiomics-based pipeline to improve the diagnosis of breast lesions and reduce unnecessary biopsies, especially when used jointly with BI-RADS scores.
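A hedged sketch of the classification stage follows: standardize radiomics features, reduce to a minimal interpretable subset, and fit an SVM (the study also used logistic regression). The univariate SelectKBest step is a stand-in; the paper's feature-selection procedure is more involved.

```python
# Radiomics classification pipeline: standardize, keep a 3-feature subset,
# fit an SVM. SelectKBest is a stand-in for the paper's selection procedure;
# the data are synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(222, 300))   # 222 lesions x multi-sequence radiomics
y = rng.integers(0, 2, size=222)  # benign vs. malignant

clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=3),  # minimal, interpretable set
                    SVC(kernel="rbf", probability=True))
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```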
8
Yang H, Chen Q, Fu K, Zhu L, Jin L, Qiu B, Ren Q, Du H, Lu Y. Boosting medical image segmentation via conditional-synergistic convolution and lesion decoupling. Comput Med Imaging Graph 2022;101:102110. PMID: 36057184. DOI: 10.1016/j.compmedimag.2022.102110.
Abstract
Medical image segmentation is a critical step in pathology assessment and monitoring. Many methods employ deep convolutional neural networks for medical segmentation tasks such as polyp segmentation and skin lesion segmentation, but owing to the inherent difficulty of medical images and tremendous data variation, they usually perform poorly in intractable cases. In this paper, we propose an input-specific network called the conditional-synergistic convolution and lesion decoupling network (CCLDNet) to address these issues. First, in contrast to existing CNN-based methods with stationary convolutions, we propose conditional synergistic convolution (CSConv), which generates a specialist convolution kernel for each lesion. CSConv has dynamic modeling ability and can be leveraged as a basic block to construct other networks for a broad range of vision tasks. Second, we devise a lesion decoupling strategy (LDS) that decouples the original lesion segmentation map into two soft labels, a lesion center label and a lesion boundary label, to reduce the segmentation difficulty. In addition, we use a transformer network as the backbone, further erasing the fixed structure of the standard CNN and empowering the dynamic modeling capability of the whole framework. CCLDNet outperforms state-of-the-art approaches by a large margin on a variety of benchmarks, including polyp segmentation (89.22% Dice score on EndoScene) and skin lesion segmentation (91.15% Dice score on ISIC2018). Our code is available at https://github.com/QianChen98/CCLD-Net.
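The lesion decoupling strategy splits a segmentation map into soft center and boundary labels. One plausible construction using distance transforms is sketched below; the paper's exact soft-label definitions may differ.

```python
# A plausible construction of the soft center/boundary labels using distance
# transforms (an assumption; the paper's exact definitions may differ).
import numpy as np
from scipy.ndimage import distance_transform_edt

def decouple(mask, sigma=3.0):
    # mask: binary (H, W) lesion segmentation map
    d_in = distance_transform_edt(mask)   # distance to background, inside lesion
    center = d_in / (d_in.max() + 1e-6)   # ~1 at lesion core, ~0 near the edge
    d_edge = d_in + distance_transform_edt(1 - mask)      # distance to contour
    boundary = np.exp(-(d_edge ** 2) / (2 * sigma ** 2))  # soft ridge at contour
    return center, boundary

mask = np.zeros((64, 64))
mask[20:40, 20:44] = 1
center, boundary = decouple(mask)
print(center.max(), boundary.max())
```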
Affiliation(s)
- Huakun Yang: College of Information Science and Technology, University of Science and Technology of China, Hefei 230041, China
- Qian Chen: Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China; Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China
- Keren Fu: College of Computer Science, National Key Laboratory of Fundamental Science on Synthetic Vision, Sichuan University, Chengdu 610065, China
- Lei Zhu: Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China; Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China
- Lujia Jin: Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China; Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China
- Bensheng Qiu: College of Information Science and Technology, University of Science and Technology of China, Hefei 230041, China
- Qiushi Ren: Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China; Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China
- Hongwei Du: College of Information Science and Technology, University of Science and Technology of China, Hefei 230041, China
- Yanye Lu: Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China
9
A hybrid hemodynamic knowledge-powered and feature reconstruction-guided scheme for breast cancer segmentation based on DCE-MRI. Med Image Anal 2022;82:102572. PMID: 36055051. DOI: 10.1016/j.media.2022.102572.
Abstract
Automatically and accurately annotating tumors in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), which provides a noninvasive in vivo method to evaluate tumor vasculature architecture based on contrast accumulation and washout, is a crucial step in computer-aided breast cancer diagnosis and treatment. It remains challenging, however, owing to the varying sizes, shapes, appearances, and densities of tumors caused by the high heterogeneity of breast cancer, and to the high dimensionality and ill-posed artifacts of DCE-MRI. In this paper, we propose a hybrid hemodynamic knowledge-powered and feature reconstruction-guided scheme that integrates a pharmacokinetics prior with feature refinement to generate sufficiently adequate features in DCE-MRI for breast cancer segmentation. The pharmacokinetics prior, expressed by the time-intensity curve (TIC), is incorporated into the scheme through an objective function called the dynamic contrast-enhanced prior (DCP) loss, which contains prior knowledge of contrast agent kinetic heterogeneity that is important for optimizing our model parameters. Besides, we design a spatial fusion module (SFM) embedded in the scheme to exploit intra-slice spatial structural correlations, and deploy a spatial-kinetic fusion module (SKFM) to effectively leverage the complementary information extracted from the spatial-kinetic space. Furthermore, considering that low spatial resolution often degrades image quality in DCE-MRI, we integrate a reconstruction autoencoder into the scheme to refine feature maps in an unsupervised manner. Extensive experiments validate the proposed method and show that it outperforms recent state-of-the-art segmentation methods on a breast cancer DCE-MRI dataset. Moreover, to explore generalization to other segmentation tasks on dynamic imaging, we also extend the proposed method to brain segmentation in DSC-MRI sequences. Our source code will be released at https://github.com/AI-medical-diagnosis-team-of-JNU/DCEDuDoFNet.
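The pharmacokinetics prior rests on per-voxel time-intensity curves (TICs). The sketch below computes a simple per-voxel TIC slope from a DCE series; the actual DCP loss built on such curves is defined in the paper, not reproduced here.

```python
# Per-voxel time-intensity curve (TIC) slope from a DCE series; a minimal
# stand-in for the pharmacokinetics prior feeding the DCP loss.
import numpy as np

def tic_slopes(dce):
    # dce: (T, H, W) voxel intensities over T time points
    t = np.arange(dce.shape[0], dtype=np.float32)
    flat = dce.reshape(dce.shape[0], -1).astype(np.float32)
    slope = np.polyfit(t, flat, deg=1)[0]  # least-squares slope per voxel
    return slope.reshape(dce.shape[1:])    # >0 persistent enhancement, <0 wash-out

slopes = tic_slopes(np.random.rand(6, 32, 32))
print(slopes.shape)  # (32, 32)
```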
10
LMA-Net: A lesion morphology aware network for medical image segmentation towards breast tumors. Comput Biol Med 2022;147:105685. DOI: 10.1016/j.compbiomed.2022.105685.
11
Dewangan KK, Dewangan DK, Sahu SP, Janghel R. Breast cancer diagnosis in an early stage using novel deep learning with hybrid optimization technique. Multimedia Tools and Applications 2022;81:13935-13960. PMID: 35233181. PMCID: PMC8874754. DOI: 10.1007/s11042-022-12385-2.
Abstract
Breast cancer is one of the primary causes of death among women around the world, so the recognition and categorization of early-stage breast cancer are necessary to help patients receive suitable treatment. However, mammography offers low sensitivity and efficiency for detecting breast cancer, whereas magnetic resonance imaging (MRI) provides higher sensitivity than mammography for predicting it. In this research, a novel Back Propagation Boosting Recurrent Wienmed model (BPBRW) with Hybrid Krill Herd African Buffalo Optimization (HKH-ABO) is developed to detect breast cancer at an earlier stage using breast MRI images. Initially, the MRI breast images are fed to the system, and an innovative Wienmed filter is established for preprocessing the noisy MRI content. The proposed BPBRW with HKH-ABO then categorizes the breast tumor as benign or malignant. The model is simulated in Python, and its performance is evaluated against prevailing works. The comparison shows that the proposed model produces an improved accuracy of 99.6% with a 0.12% lower error rate.
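The Wienmed preprocessing filter is not specified in the abstract; its name suggests a Wiener/median hybrid. As a rough, clearly labeled stand-in, the sketch below chains SciPy's classical Wiener and median filters; the window sizes are arbitrary assumptions.

```python
# SciPy Wiener filter followed by a median filter as a stand-in for the
# unspecified "Wienmed" preprocessing step; window sizes are assumptions.
import numpy as np
from scipy.signal import wiener
from scipy.ndimage import median_filter

def wienmed_like(image, wiener_win=5, median_win=3):
    denoised = wiener(image.astype(np.float64), mysize=wiener_win)  # adaptive smoothing
    return median_filter(denoised, size=median_win)                 # remove residual speckle

noisy = np.random.rand(128, 128)
print(wienmed_like(noisy).shape)  # (128, 128)
```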
Affiliation(s)
- Kranti Kumar Dewangan: Department of Information Technology, National Institute of Technology, Raipur, Chhattisgarh 492010, India
- Deepak Kumar Dewangan: Department of Information Technology, National Institute of Technology, Raipur, Chhattisgarh 492010, India
- Satya Prakash Sahu: Department of Information Technology, National Institute of Technology, Raipur, Chhattisgarh 492010, India
- Rekhram Janghel: Department of Information Technology, National Institute of Technology, Raipur, Chhattisgarh 492010, India
12
Wang S, Li C, Wang R, Liu Z, Wang M, Tan H, Wu Y, Liu X, Sun H, Yang R, Liu X, Chen J, Zhou H, Ben Ayed I, Zheng H. Annotation-efficient deep learning for automatic medical image segmentation. Nat Commun 2021;12:5915. PMID: 34625565. PMCID: PMC8501087. DOI: 10.1038/s41467-021-26216-9.
Abstract
Automatic medical image segmentation plays a critical role in scientific research and medical care. Existing high-performance deep learning methods typically rely on large training datasets with high-quality manual annotations, which are difficult to obtain in many clinical applications. Here, we introduce Annotation-effIcient Deep lEarning (AIDE), an open-source framework for handling imperfect training datasets. Methodological analyses and empirical evaluations demonstrate that AIDE surpasses conventional fully supervised models, delivering better performance on open datasets with scarce or noisy annotations. We further test AIDE in a real-life case study of breast tumor segmentation. Using three datasets containing 11,852 breast images from three medical centers, AIDE, trained with only 10% of the annotations, consistently produces segmentation maps comparable to those generated by fully supervised counterparts or provided by independent radiologists. This 10-fold gain in the efficiency of using expert labels has the potential to promote a wide range of biomedical applications.
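AIDE's training procedure is described in the paper rather than the abstract. As a generic illustration of learning from scarce annotations, the sketch below implements one common strategy, self-training with confidence-filtered pseudo-labels, and should not be read as AIDE's actual algorithm.

```python
# Self-training with confidence-filtered pseudo-labels: one generic
# annotation-efficient strategy (NOT necessarily what AIDE does internally).
import torch
import torch.nn.functional as F

def self_training_round(model, labeled, unlabeled, optimizer, threshold=0.9):
    # Pseudo-label the unlabeled batches, keeping only confident predictions.
    model.eval()
    pseudo = []
    with torch.no_grad():
        for x in unlabeled:
            probs = torch.softmax(model(x), dim=1)  # (B, C, H, W)
            conf, y_hat = probs.max(dim=1)          # per-pixel confidence
            if conf.mean() > threshold:
                pseudo.append((x, y_hat))
    # Retrain on true labels plus accepted pseudo-labels.
    model.train()
    for x, y in list(labeled) + pseudo:
        optimizer.zero_grad()
        F.cross_entropy(model(x), y).backward()
        optimizer.step()

# Toy usage: a 1-layer conv stands in for a real segmentation network.
model = torch.nn.Conv2d(1, 2, 3, padding=1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
labeled = [(torch.randn(2, 1, 16, 16), torch.randint(0, 2, (2, 16, 16)))]
unlabeled = [torch.randn(2, 1, 16, 16)]
self_training_round(model, labeled, unlabeled, opt)
```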
Affiliation(s)
- Shanshan Wang: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China; Peng Cheng Laboratory, Shenzhen, Guangdong, China; Pazhou Laboratory, Guangzhou, Guangdong, China
- Cheng Li: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
- Rongpin Wang: Department of Medical Imaging, Guizhou Provincial People's Hospital, Guiyang, Guizhou, China
- Zaiyi Liu: Department of Medical Imaging, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong, China
- Meiyun Wang: Department of Medical Imaging, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Hongna Tan: Department of Medical Imaging, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Yaping Wu: Department of Medical Imaging, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Xinfeng Liu: Department of Medical Imaging, Guizhou Provincial People's Hospital, Guiyang, Guizhou, China
- Hui Sun: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
- Rui Yang: Department of Urology, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Xin Liu: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
- Jie Chen: Peng Cheng Laboratory, Shenzhen, Guangdong, China; School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, Shenzhen, Guangdong, China
- Huihui Zhou: Brain Cognition and Brain Disease Institute, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
- Ismail Ben Ayed
- Hairong Zheng: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China