1
Guo Y, Zhou Y. Expansive Receptive Field and Local Feature Extraction Network: Advancing Multiscale Feature Fusion for Breast Fibroadenoma Segmentation in Sonography. Journal of Imaging Informatics in Medicine 2024. [PMID: 38822159] [DOI: 10.1007/s10278-024-01142-6]
Abstract
Fibroadenoma is a common benign breast disease that affects women of all ages. Early diagnosis can greatly improve treatment outcomes and reduce the associated pain. Computer-aided diagnosis (CAD) has great potential to improve diagnostic accuracy and efficiency, but its application in sonography is limited. A network that utilizes expansive receptive fields and local information learning was proposed for the accurate segmentation of breast fibroadenomas in sonography. The architecture comprises the Hierarchical Attentive Fusion module, which conducts local information learning from channel-wise and pixel-wise perspectives, and the Residual Large-Kernel module, which utilizes multiscale large-kernel convolution for global information learning. Additionally, multiscale feature fusion was included in both modules to enhance the stability of the network. Finally, an energy function and a data augmentation method were incorporated to fine-tune the low-level features of medical images and improve data enhancement. The performance of the model was evaluated on a local clinical dataset and a public dataset. A mean pixel accuracy (MPA) of 93.93% and 86.06% and a mean intersection over union (MIoU) of 88.16% and 73.19% were achieved on the clinical and public datasets, respectively, a significant improvement over state-of-the-art methods such as SegFormer (MPA of 89.75% and 78.45%, MIoU of 83.26% and 71.85%, respectively). The proposed feature extraction strategy, combining local pixel-wise learning with an expansive receptive field for global information perception, demonstrates excellent feature learning capabilities. Owing to this powerful local-global feature extraction capability, the network achieves superior segmentation of breast fibroadenoma in sonography, which may be valuable in early diagnosis.
Affiliation(s)
- Yongxin Guo
- Medical College Road, State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, Chongqing, 400016, China
- Chongqing Key Laboratory of Biomedical Engineering, Chongqing Medical University, Chongqing, 400016, China
- Yufeng Zhou
- Medical College Road, State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, Chongqing, 400016, China
- Chongqing Key Laboratory of Biomedical Engineering, Chongqing Medical University, Chongqing, 400016, China
- National Medical Products Administration (NMPA) Key Laboratory for Quality Evaluation of Ultrasonic Surgical Equipment, Donghu New Technology Development Zone, 507 Gaoxin Ave., Wuhan, Hubei, 430075, China
2
Luo J, Zhang H, Zhuang Y, Han L, Chen K, Hua Z, Li C, Lin J. 2S-BUSGAN: A Novel Generative Adversarial Network for Realistic Breast Ultrasound Image with Corresponding Tumor Contour Based on Small Datasets. Sensors (Basel) 2023; 23:8614. [PMID: 37896706] [PMCID: PMC10610581] [DOI: 10.3390/s23208614]
Abstract
Deep learning (DL) models for breast ultrasound (BUS) image analysis face challenges from data imbalance and limited atypical tumor samples. Generative adversarial networks (GANs) address these challenges by providing efficient data augmentation for small datasets. However, current GAN approaches fail to capture the structural features of BUS, and the generated images lack structural legitimacy and realism. Furthermore, generated images require manual annotation before they can be used for different downstream tasks. We therefore propose a two-stage GAN framework, 2s-BUSGAN, for generating annotated BUS images. It consists of a Mask Generation Stage (MGS) and an Image Generation Stage (IGS), generating benign and malignant BUS images with corresponding tumor contours. Moreover, we employ a feature-matching loss (FML) to enhance the quality of generated images and a differential augmentation module (DAM) to improve GAN performance on small datasets. We conducted experiments on two datasets, BUSI and Collected. Results indicate that the quality of generated images is improved compared with traditional GAN methods. Additionally, our generated images were evaluated by ultrasound experts, demonstrating that they can pass for real images. A comparative evaluation showed that our method also outperforms traditional GAN methods when used to train segmentation and classification models. Our method achieved classification accuracies of 69% and 85.7% on the two datasets, respectively, about 3% and 2% higher than the traditional augmentation model. The segmentation model trained on the 2s-BUSGAN-augmented datasets achieved Dice scores of 75% and 73% on the two datasets, respectively, higher than those trained with traditional augmentation methods. Our research tackles the challenges of imbalanced and limited BUS image data, and the 2s-BUSGAN augmentation method holds potential for enhancing deep learning model performance in this field.
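The feature-matching loss mentioned above is, in its common formulation (a hedged sketch of the general technique, not necessarily 2s-BUSGAN's exact variant), a distance between the discriminator's intermediate feature statistics for real and generated batches:

```python
import numpy as np

def feature_matching_loss(real_feats, fake_feats):
    """Squared L2 distance between the batch-mean discriminator features
    of real and generated images, summed over the selected layers.

    real_feats / fake_feats: lists of arrays shaped (batch, features),
    one array per discriminator layer used for matching.
    """
    loss = 0.0
    for fr, ff in zip(real_feats, fake_feats):
        # Match first-order feature statistics (batch means); matching
        # statistics rather than single discriminator outputs stabilizes
        # GAN training on small datasets.
        loss += np.sum((fr.mean(axis=0) - ff.mean(axis=0)) ** 2)
    return loss
```

When the generator's outputs reproduce the real data's feature statistics, the loss vanishes even if individual samples differ, which is why this term rewards realistic structure rather than pixel-level copying.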
Affiliation(s)
- Jie Luo
- College of Biomedical Engineering, Sichuan University, Chengdu 610065, China
- Heqing Zhang
- Department of Ultrasound, West China Hospital, Sichuan University, Chengdu 610065, China
- Yan Zhuang
- College of Biomedical Engineering, Sichuan University, Chengdu 610065, China
- Lin Han
- College of Biomedical Engineering, Sichuan University, Chengdu 610065, China
- Highong Intellimage Medical Technology (Tianjin) Co., Ltd., Tianjin 300480, China
- Ke Chen
- College of Biomedical Engineering, Sichuan University, Chengdu 610065, China
- Zhan Hua
- China-Japan Friendship Hospital, Beijing 100029, China
- Cheng Li
- China-Japan Friendship Hospital, Beijing 100029, China
- Jiangli Lin
- College of Biomedical Engineering, Sichuan University, Chengdu 610065, China
3
Chen Y, Li D, Zhang X, Liu P, Meng F, Jin J, Shen Y. A devised thyroid segmentation with multi-stage modification based on Super-pixel U-Net under insufficient data. Ultrasound in Medicine & Biology 2023; 49:1728-1741. [PMID: 37137743] [DOI: 10.1016/j.ultrasmedbio.2023.03.019]
Abstract
OBJECTIVES The application of deep learning to medical image segmentation has received considerable attention. Nevertheless, when segmenting thyroid ultrasound images, it is difficult to achieve good results with deep learning methods because of the large number of non-thyroid regions and insufficient training data. METHODS In this study, a Super-pixel U-Net, designed by adding a supplementary path to U-Net, was devised to boost thyroid segmentation results. The improved network introduces more information into the network, boosting auxiliary segmentation results. A multi-stage modification is introduced, comprising boundary segmentation, boundary repair, and auxiliary segmentation. To reduce the negative effects of non-thyroid regions on the segmentation, U-Net was used to obtain rough boundary outputs. Subsequently, another U-Net was trained to improve and repair the coverage of the boundary outputs. The Super-pixel U-Net was applied in the third stage to assist in segmenting the thyroid more precisely. Finally, multidimensional indicators were used to compare the segmentation results of the proposed method with those of comparison experiments. DISCUSSION The proposed method achieved an F1 score of 0.9161 and an IoU of 0.9279. It also exhibits better performance in terms of shape similarity, with an average convexity of 0.9395, an average ratio of 0.9109, an average compactness of 0.8976, an average eccentricity of 0.9448, and an average rectangularity of 0.9289. The average area estimation indicator was 0.8857. CONCLUSION The proposed method exhibited superior performance, confirming the improvements of the multi-stage modification and the Super-pixel U-Net.
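For reference, the overlap metrics reported by this and several later entries (F1/Dice and IoU) can be computed from binary masks as follows — a minimal sketch independent of any paper's implementation:

```python
import numpy as np

def overlap_metrics(pred, target):
    """Dice/F1 and IoU for binary segmentation masks (0/1 arrays)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum())  # equals F1 for binary masks
    iou = inter / union                               # Jaccard index
    return dice, iou
```

Note that for binary masks Dice equals F1, and the two metrics are monotonically related via IoU = Dice / (2 − Dice), so rankings of methods agree even when papers report different ones.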
Affiliation(s)
- Yifei Chen
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001 China
- Dandan Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001 China
- Xin Zhang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001 China
- Peng Liu
- Heilongjiang Provincial Key Laboratory of Trace Elements and Human Health, Harbin Medical University, Harbin, 150081 China; Endemic Disease Control Center, Chinese Center for Disease Control and Prevention, Harbin Medical University, Harbin, 150081 China
- Fangang Meng
- Heilongjiang Provincial Key Laboratory of Trace Elements and Human Health, Harbin Medical University, Harbin, 150081 China; Endemic Disease Control Center, Chinese Center for Disease Control and Prevention, Harbin Medical University, Harbin, 150081 China
- Jing Jin
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001 China
- Yi Shen
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001 China
4
Zhang B, Vakanski A, Xian M. BI-RADS-NET-V2: A Composite Multi-Task Neural Network for Computer-Aided Diagnosis of Breast Cancer in Ultrasound Images With Semantic and Quantitative Explanations. IEEE Access 2023; 11:79480-79494. [PMID: 37608804] [PMCID: PMC10443928] [DOI: 10.1109/access.2023.3298569]
Abstract
Computer-aided Diagnosis (CADx) based on explainable artificial intelligence (XAI) can gain the trust of radiologists and effectively improve diagnosis accuracy and consultation efficiency. This paper proposes BI-RADS-Net-V2, a novel machine learning approach for fully automatic breast cancer diagnosis in ultrasound images. The BI-RADS-Net-V2 can accurately distinguish malignant tumors from benign ones and provides both semantic and quantitative explanations. The explanations are provided in terms of clinically proven morphological features used by clinicians for diagnosis and reporting mass findings, i.e., Breast Imaging Reporting and Data System (BI-RADS). The experiments on 1,192 Breast Ultrasound (BUS) images indicate that the proposed method improves the diagnosis accuracy by taking full advantage of the medical knowledge in BI-RADS while providing both semantic and quantitative explanations for the decision.
Affiliation(s)
- Boyu Zhang
- Institute for Interdisciplinary Data Sciences, University of Idaho, Moscow, ID 83844, USA
- Aleksandar Vakanski
- Department of Nuclear Engineering and Industrial Management, University of Idaho, Idaho Falls, ID 83402, USA
- Min Xian
- Department of Computer Science, University of Idaho, Idaho Falls, ID 83402, USA
5
Yang KB, Lee J, Yang J. Multi-class semantic segmentation of breast tissues from MRI images using U-Net based on Haar wavelet pooling. Sci Rep 2023; 13:11704. [PMID: 37474633] [PMCID: PMC10359288] [DOI: 10.1038/s41598-023-38557-0]
Abstract
MRI images used in breast cancer diagnosis are taken in a lying position and are therefore inappropriate for reconstructing the natural breast shape in a standing position. Some studies have proposed methods to present the breast shape in a standing position using an ordinary differential equation of the finite element method. However, it is difficult to obtain meaningful results because breast tissues have different elastic moduli. This study proposed a multi-class semantic segmentation method for breast tissues to reconstruct breast shapes using a U-Net based on Haar wavelet pooling. First, a dataset was constructed by labeling the skin, fat, and fibro-glandular tissues and the background in MRI images taken in a lying position. Next, multi-class semantic segmentation was performed using the U-Net based on Haar wavelet pooling to improve segmentation accuracy for breast tissues. The U-Net effectively extracted breast tissue features while reducing image information loss in the subsampling stage by using multiple sub-bands. In addition, the proposed network is robust to overfitting. It achieved an mIoU of 87.48 for segmenting breast tissues, demonstrating high-accuracy segmentation of tissues with different elastic moduli for reconstructing the natural breast shape.
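Haar wavelet pooling replaces max-pooling with a one-level 2D Haar transform, halving spatial resolution while keeping the detail sub-bands. A minimal single-channel sketch (the paper's exact filter normalization and channel handling may differ):

```python
import numpy as np

def haar_pool(x):
    """One-level 2D Haar decomposition of a single-channel image with
    even height and width. Returns (LL, LH, HL, HH) sub-bands at half
    resolution; a wavelet-pooling layer passes LL (and optionally the
    detail bands) to the next stage instead of a max-pooled map."""
    a = x[0::2, 0::2]  # top-left of each 2x2 block
    b = x[0::2, 1::2]  # top-right
    c = x[1::2, 0::2]  # bottom-left
    d = x[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0  # approximation (low-low)
    lh = (a - b + c - d) / 2.0  # horizontal detail
    hl = (a + b - c - d) / 2.0  # vertical detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh
```

Because this transform is orthonormal, the four sub-bands preserve the input's energy exactly, which is the sense in which subsampling "reduces image information loss" compared with max-pooling.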
Affiliation(s)
- Kwang Bin Yang
- Division of Memory - Memory FAB Team 1, Samsung Electronics, 1 Samsungjeonja-ro, Hwaseong, Gyeonggi, 18448, Republic of Korea
- Jinwon Lee
- Department of Industrial and Management Engineering, Gangneung-Wonju National University, 150 Namwon-ro, Wonju, Gangwon, 26403, Republic of Korea
- Jeongsam Yang
- Department of Industrial Engineering, Ajou University, 206 Worldcup-ro, Suwon, Gyeonggi, 16499, Republic of Korea
6
Marzola F, Lochner P, Naldi A, Lemor R, Stögbauer J, Meiburger KM. Development of a Deep Learning-Based System for Optic Nerve Characterization in Transorbital Ultrasound Images on a Multicenter Data Set. Ultrasound in Medicine & Biology 2023. [PMID: 37357081] [DOI: 10.1016/j.ultrasmedbio.2023.05.011]
Abstract
OBJECTIVE Characterization of the optic nerve through measurement of optic nerve diameter (OND) and optic nerve sheath diameter (ONSD) using transorbital sonography (TOS) has proven to be a useful tool for the evaluation of intracranial pressure (ICP) and multiple neurological conditions. We describe a deep learning-based system for automatic characterization of the optic nerve from B-mode TOS images by automatic measurement of the OND and ONSD. In addition, we determine how the signal-to-noise ratio in two different areas of the image influences system performance. METHODS A UNet was trained as the segmentation model. The training was performed on a multidevice, multicenter data set of 464 TOS images from 110 subjects. Fivefold cross-validation was performed, and the training process was repeated eight times. The final prediction was made as an ensemble of the predictions of the eight single models. Automatic OND and ONSD measurements were compared with the manual measurements taken by an expert with a graphical user interface that mimics a clinical setting. RESULTS A Dice score of 0.719 ± 0.139 was obtained on the whole data set merging the test folds. Pearson's correlation was 0.69 for both OND and ONSD parameters. The signal-to-noise ratio was found to influence segmentation performance, but no clear correlation with diameter measurement performance was determined. CONCLUSION The developed system has a good correlation with manual measurements, proving that it is feasible to create a model capable of automatically analyzing TOS images from multiple devices. The promising results encourage further definition of a standard protocol for the automatization of the OND and ONSD measurement process using deep learning-based methods. The image data and the manual measurements used in this work will be available at 10.17632/kw8gvp8m8x.1.
Affiliation(s)
- Francesco Marzola
- Biolab, Department of Electronics and Communications, Politecnico di Torino, Torino, Italy
- Andrea Naldi
- Neurology Unit, San Giovanni Bosco Hospital, Turin, Italy
- Robert Lemor
- Department of Biomedical Engineering, Saarland University of Applied Sciences, Saarbrücken, Germany
- Kristen M Meiburger
- Biolab, Department of Electronics and Communications, Politecnico di Torino, Torino, Italy
7
Afrin H, Larson NB, Fatemi M, Alizad A. Deep Learning in Different Ultrasound Methods for Breast Cancer, from Diagnosis to Prognosis: Current Trends, Challenges, and an Analysis. Cancers (Basel) 2023; 15:3139. [PMID: 37370748] [PMCID: PMC10296633] [DOI: 10.3390/cancers15123139]
Abstract
Breast cancer is the second-leading cause of mortality among women around the world. Ultrasound (US) is one of the noninvasive imaging modalities used to diagnose breast lesions and monitor the prognosis of cancer patients. It has the highest sensitivity for diagnosing breast masses, but its high operator dependency increases false negativity. Underserved areas lack sufficient US expertise to diagnose breast lesions, resulting in delayed management. Deep learning neural networks may have the potential to facilitate early decision-making by physicians by rapidly yet accurately diagnosing breast lesions and monitoring their prognosis. This article reviews recent research trends in neural networks for breast mass ultrasound, including and beyond diagnosis. We discuss recent original research, analyzing which ultrasound modes and which models have been used for which purposes, and where they show the best performance. Our analysis reveals that lesion classification showed the highest performance of the tasks considered, and that fewer studies addressed prognosis than diagnosis. We also discuss the limitations and future directions of ongoing research on neural networks for breast ultrasound.
Affiliation(s)
- Humayra Afrin
- Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Nicholas B. Larson
- Department of Quantitative Health Sciences, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Mostafa Fatemi
- Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Azra Alizad
- Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Department of Radiology, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
8
Chen Y, Zhang X, Li D, Park H, Li X, Liu P, Jin J, Shen Y. Automatic segmentation of thyroid with the assistance of the devised boundary improvement based on multicomponent small dataset. Appl Intell 2023; 53:1-16. [PMID: 37363389] [PMCID: PMC10015528] [DOI: 10.1007/s10489-023-04540-5]
Abstract
Deep learning has been widely considered in medical image segmentation. However, the difficulty of acquiring medical images and labels can affect the accuracy of segmentation results for deep learning methods. In this paper, an automatic segmentation method is proposed by devising a multicomponent neighborhood extreme learning machine to improve the boundary attention region of the preliminary segmentation results. The neighborhood features are acquired by training U-Nets with a multicomponent small dataset, which consists of original thyroid ultrasound images, Sobel edge images, and superpixel images. Afterward, the neighborhood features are selected by a minimum-redundancy maximum-relevance filter in the designed extreme learning machine, and the selected features are used to train the extreme learning machine to obtain supplementary segmentation results. Finally, the accuracy of the segmentation results is improved by adjusting the boundary attention region of the preliminary segmentation results with the supplementary segmentation results. This method combines the advantages of deep learning and traditional machine learning, boosting thyroid segmentation accuracy with a small dataset in a multigroup test.
Affiliation(s)
- Yifei Chen
- Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001 China
- Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141 Korea
- Xin Zhang
- Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001 China
- Dandan Li
- Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001 China
- HyunWook Park
- Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141 Korea
- Xinran Li
- Mathematics, Harbin Institute of Technology, Harbin, 150001 China
- Peng Liu
- Heilongjiang Provincial Key Laboratory of Trace Elements and Human Health, Harbin Medical University, Harbin, 150081 China
- Jing Jin
- Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001 China
- Yi Shen
- Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001 China
9
A hybrid attentional guidance network for tumors segmentation of breast ultrasound images. Int J Comput Assist Radiol Surg 2023. [PMID: 36853584] [DOI: 10.1007/s11548-023-02849-7]
Abstract
PURPOSE In recent years, breast cancer has become the greatest threat to women. Many studies are dedicated to the precise segmentation of breast tumors, which is indispensable in computer-aided diagnosis. Deep neural networks have achieved accurate image segmentation. However, convolutional layers are biased toward extracting local features and tend to lose global and location information as the network deepens, which decreases breast tumor segmentation accuracy. For this reason, we propose a hybrid attention-guided network (HAG-Net), which we believe will improve the detection rate and segmentation of tumors in breast ultrasound images. METHODS The method is equipped with a multi-scale guidance block (MSG) for guiding the extraction of low-resolution location information. Short multi-head self-attention (S-MHSA) and a convolutional block attention module are used to capture global features and long-range dependencies. Finally, the segmentation results are obtained by fusing multi-scale contextual information. RESULTS We compare against 7 state-of-the-art methods on two publicly available datasets through five random fivefold cross-validations. The highest Dice coefficient, Jaccard index, and detection rate ([Formula: see text]%, [Formula: see text]%, [Formula: see text]% and [Formula: see text]%, [Formula: see text]%, [Formula: see text]%, respectively) obtained on the two publicly available datasets (BUSI and OASUBD) prove the superiority of our method. CONCLUSION HAG-Net can better utilize multi-resolution features to localize breast tumors, demonstrating excellent generalizability and applicability for breast tumor segmentation compared with other state-of-the-art methods.
10
Ma Z, Qi Y, Xu C, Zhao W, Lou M, Wang Y, Ma Y. ATFE-Net: Axial Transformer and Feature Enhancement-based CNN for ultrasound breast mass segmentation. Comput Biol Med 2023; 153:106533. [PMID: 36638617] [DOI: 10.1016/j.compbiomed.2022.106533]
Abstract
Breast mass is one of the main clinical symptoms of breast cancer. Recently, many CNN-based methods for breast mass segmentation have been proposed. However, these methods have difficulty capturing long-range dependencies, causing poor segmentation of large-scale breast masses. In this paper, we propose an axial Transformer and feature enhancement-based CNN (ATFE-Net) for ultrasound breast mass segmentation. Specifically, an axial Transformer (Axial-Trans) module and a Transformer-based feature enhancement (Trans-FE) module are proposed to capture long-range dependencies. The Axial-Trans module calculates self-attention only along the width and height directions of the input feature maps, which reduces the complexity of self-attention significantly from O(n²) to O(n). In addition, the Trans-FE module can enhance feature representation by capturing dependencies between different feature layers, since deeper feature layers have richer semantic information and shallower feature layers have more detailed information. Experimental results show that our ATFE-Net achieved better performance than several state-of-the-art methods on two publicly available breast ultrasound datasets, with Dice coefficients of 82.46% on BUSI and 86.78% on UDIAT.
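Axial attention factorizes 2-D self-attention into a height pass and a width pass, so each pixel attends to H + W positions instead of H × W. A minimal single-head sketch of the height pass (illustrative of the general technique only; ATFE-Net's actual module will differ in detail):

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def axial_attention_height(x, wq, wk, wv):
    """Single-head self-attention along the height axis only.
    x: (H, W, d) feature map; wq/wk/wv: (d, d) projection matrices.
    Each pixel (h, w) attends to the H pixels in its own column, so the
    cost per pixel is O(H) rather than O(H*W) for full 2-D attention."""
    q, k, v = x @ wq, x @ wk, x @ wv
    d = x.shape[-1]
    # scores[w, h, g]: similarity of query row h to key row g in column w
    scores = np.einsum('hwd,gwd->whg', q, k) / np.sqrt(d)
    attn = softmax(scores)
    # weighted sum of values along each column
    return np.einsum('whg,gwd->hwd', attn, v)
```

A full axial block runs this pass and the analogous width pass back to back, giving every pixel an indirect path to every other pixel in two steps.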
Affiliation(s)
- Zhou Ma
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
- Yunliang Qi
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
- Chunbo Xu
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
- Wei Zhao
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
- Meng Lou
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
- Yiming Wang
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
- Yide Ma
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
11
Qureshi I, Yan J, Abbas Q, Shaheed K, Riaz AB, Wahid A, Khan MWJ, Szczuko P. Medical image segmentation using deep semantic-based methods: A review of techniques, applications and emerging trends. Information Fusion 2023. [DOI: 10.1016/j.inffus.2022.09.031]
12
Peng Y, Yu D, Guo Y. MShNet: Multi-scale feature combined with h-network for medical image segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104167]
13
Wang J, Zheng Y, Ma J, Li X, Wang C, Gee J, Wang H, Huang W. Information bottleneck-based interpretable multitask network for breast cancer classification and segmentation. Med Image Anal 2023; 83:102687. [PMID: 36436356] [DOI: 10.1016/j.media.2022.102687]
Abstract
Breast cancer is one of the most common causes of death among women worldwide. Early signs of breast cancer can be an abnormality depicted on breast images (e.g., mammography or breast ultrasonography). However, reliable interpretation of breast images requires intensive labor and physicians with extensive experience. Deep learning is evolving breast imaging diagnosis by offering physicians a second opinion. However, most deep learning-based breast cancer analysis algorithms lack interpretability because of their black-box nature, meaning that domain experts cannot understand why the algorithms predict a label. In addition, most deep learning algorithms are formulated as single-task models that ignore correlations between different tasks (e.g., tumor classification and segmentation). In this paper, we propose an interpretable multitask information bottleneck network (MIB-Net) to accomplish simultaneous breast tumor classification and segmentation. MIB-Net maximizes the mutual information between the latent representations and class labels while minimizing the information shared by the latent representations and inputs. In contrast to existing models, MIB-Net generates a contribution score map that offers an interpretable aid for physicians to understand the model's decision-making process. In addition, MIB-Net implements multitask learning and proposes a dual prior knowledge guidance strategy to enhance deep task correlation. Our evaluations were carried out on three breast image datasets in different modalities. The results show that the proposed framework not only helps physicians better understand the model's decisions but also improves breast tumor classification and segmentation accuracy over representative state-of-the-art models. Our code is available at https://github.com/jxw0810/MIB-Net.
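The information-bottleneck trade-off described above (maximize mutual information with the labels, minimize it with the inputs) is commonly trained through a variational bound: a classification cross-entropy plus a β-weighted KL term that compresses the latent code. The sketch below is a generic illustration of that standard objective under a diagonal-Gaussian posterior, not MIB-Net's actual loss; `beta` and the function name are our own:

```python
import numpy as np

def vib_loss(logits, label, mu, log_var, beta=1e-3):
    """Generic variational information-bottleneck loss for one sample:
    cross-entropy (the term tied to maximizing I(Z; Y)) plus a
    beta-weighted KL divergence between the encoder posterior
    N(mu, diag(exp(log_var))) and the prior N(0, I), which penalizes
    information the code Z retains about the input X."""
    # numerically stable log-softmax for the classification term
    m = logits.max()
    log_probs = logits - (m + np.log(np.exp(logits - m).sum()))
    cross_entropy = -log_probs[label]
    # closed-form KL( N(mu, sigma^2) || N(0, I) ) for diagonal Gaussians
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return cross_entropy + beta * kl
```

Raising `beta` forces a more compressed latent code; the paper's contribution maps can then be read as showing which input regions survive that compression.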
Affiliation(s)
- Junxia Wang
- School of Information Science and Engineering, Shandong Normal University, No. 1 Daxue Road, Changqing District, Jinan 250358, China
- Yuanjie Zheng
- School of Information Science and Engineering, Shandong Normal University, No. 1 Daxue Road, Changqing District, Jinan 250358, China; Shanghai AI Laboratory, No. 701 Yunjin Road, Xuhui District, Shanghai, 200433, China
- Jun Ma
- School of Cyber Science and Engineering, Southeast University, No. 2 Southeast University Road, Jiangning District, Nanjing, 211189, China
- Xinmeng Li
- School of Information Science and Engineering, Shandong Normal University, No. 1 Daxue Road, Changqing District, Jinan 250358, China
- Chongjing Wang
- China Academy of Information and Communications Technology, No. 52 Huayuan North Road, Haidian District, Beijing 100191, China
- James Gee
- Penn Image Computing and Science Laboratory, University of Pennsylvania, PA 19104, USA
- Haipeng Wang
- Institute of Information Fusion, Naval Aviation University, Erma Road Yantai Shandong, Yantai 264001, China
- Wenhui Huang
- School of Information Science and Engineering, Shandong Normal University, No. 1 Daxue Road, Changqing District, Jinan 250358, China
14
|
Mújica-Vargas D, Matuz-Cruz M, García-Aquino C, Ramos-Palencia C. Efficient System for Delimitation of Benign and Malignant Breast Masses. ENTROPY (BASEL, SWITZERLAND) 2022; 24:1775. [PMID: 36554180 PMCID: PMC9777637 DOI: 10.3390/e24121775] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/22/2022] [Revised: 11/23/2022] [Accepted: 11/26/2022] [Indexed: 06/01/2023]
Abstract
In this study, a high-performing scheme is introduced to delimit benign and malignant masses in breast ultrasound images. The proposal is built upon the Nonlocal Means filter for image quality improvement, an Intuitionistic Fuzzy C-Means local clustering algorithm for superpixel generation with high adherence to the edges, and the DBSCAN algorithm for the global clustering of those superpixels to delimit mass regions. The empirical study was performed using two datasets, both with benign and malignant breast tumors. The quantitative results with respect to the BUSI dataset were JSC≥0.907, DM≥0.913, HD≥7.025, and MCR≤6.431 for benign masses and JSC≥0.897, DM≥0.900, HD≥8.666, and MCR≤8.016 for malignant ones, while the MID dataset resulted in JSC≥0.890, DM≥0.905, HD≥8.370, and MCR≤7.241 along with JSC≥0.881, DM≥0.898, HD≥8.865, and MCR≤7.808 for benign and malignant masses, respectively. These numerical results show that our proposal outperformed all the evaluated state-of-the-art methods in mass delimitation, which is confirmed by the visual results, since the segmented regions had better edge delimitation.
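The core of the pipeline above is fuzzy clustering of pixel intensities. As a point of reference, here is a minimal plain fuzzy C-means on 1-D intensities; this is a sketch of the basic algorithm only, since the paper's variant is an intuitionistic, *local* clustering applied to superpixels with DBSCAN on top, none of which is reproduced here.

```python
import numpy as np

# Plain fuzzy C-means (FCM) on 1-D intensities -- a simplified stand-in for
# the Intuitionistic Fuzzy C-Means local clustering described in the paper.
def fuzzy_cmeans(x, c=2, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)        # fuzzy memberships; rows sum to 1
    for _ in range(iters):
        um = u ** m
        # weighted-mean center update
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        # standard FCM membership update: u_ik proportional to d_ik^(-2/(m-1))
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u
```

On intensities drawn from two well-separated modes, the two centers converge near the mode means and each pixel's membership row concentrates on the nearer center.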
Affiliation(s)
- Dante Mújica-Vargas
- Departamento de Ciencias Computacionales, Tecnológico Nacional de México, Centro Nacional de Investigación y Desarrollo Tecnológico, Cuernavaca 62490, Morelos, Mexico
- Manuel Matuz-Cruz
- Tecnológico Nacional de México, Instituto Tecnológico de Tapachula, Tapachula 30700, Chiapas, Mexico
- Christian García-Aquino
- Departamento de Ciencias Computacionales, Tecnológico Nacional de México, Centro Nacional de Investigación y Desarrollo Tecnológico, Cuernavaca 62490, Morelos, Mexico
- Celia Ramos-Palencia
- Departamento de Ciencias Computacionales, Tecnológico Nacional de México, Centro Nacional de Investigación y Desarrollo Tecnológico, Cuernavaca 62490, Morelos, Mexico
15
Applying Deep Learning for Breast Cancer Detection in Radiology. Curr Oncol 2022; 29:8767-8793. [PMID: 36421343 PMCID: PMC9689782 DOI: 10.3390/curroncol29110690] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Revised: 11/12/2022] [Accepted: 11/14/2022] [Indexed: 11/18/2022] Open
Abstract
Recent advances in deep learning have enhanced medical imaging research. Breast cancer is the most prevalent cancer among women, and many applications have been developed to improve its early detection. The purpose of this review is to examine how various deep learning methods can be applied to breast cancer screening workflows. We summarize deep learning methods, data availability, and the different screening methods for breast cancer, including mammography, thermography, ultrasound, and magnetic resonance imaging. We then explore deep learning in diagnostic breast imaging and survey the relevant literature. In conclusion, we discuss some of the limitations and opportunities of integrating artificial intelligence into breast cancer clinical practice.
16
Montaha S, Azam S, Rafid AKMRH, Hasan MZ, Karim A, Hasib KM, Patel SK, Jonkman M, Mannan ZI. MNet-10: A robust shallow convolutional neural network model performing ablation study on medical images assessing the effectiveness of applying optimal data augmentation technique. Front Med (Lausanne) 2022; 9:924979. [PMID: 36052321 PMCID: PMC9424498 DOI: 10.3389/fmed.2022.924979] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2022] [Accepted: 07/19/2022] [Indexed: 11/13/2022] Open
Abstract
Interpretation of medical images with a computer-aided diagnosis (CAD) system is arduous because of the complex structure of cancerous lesions in different imaging modalities, high inter-class resemblance, dissimilar intra-class characteristics, scarcity of medical data, and the presence of artifacts and noise. In this study, these challenges are addressed by developing a shallow convolutional neural network (CNN) model with an optimal configuration, determined through an ablation study that alters the layer structure and hyper-parameters, and by utilizing a suitable augmentation technique. Eight medical datasets with different modalities are investigated, across all of which the proposed model, named MNet-10, yields optimal performance with low computational complexity. The impact of photometric and geometric augmentation techniques on the different datasets is also evaluated. We selected the mammogram dataset for the ablation study because it is one of the most challenging imaging modalities. Before generating the model, the dataset is augmented using the two approaches. A base CNN model is constructed first and applied to both the augmented and non-augmented mammogram datasets, where the highest accuracy is obtained with the photometric dataset. Therefore, the architecture and hyper-parameters of the model are determined by performing an ablation study on the base model using the photometrically augmented mammogram dataset. Afterward, the robustness of the network and the impact of different augmentation techniques are assessed by training the model with the remaining seven datasets.
We obtain a test accuracy of 97.34% on the mammogram, 98.43% on the skin cancer, 99.54% on the brain tumor magnetic resonance imaging (MRI), 97.29% on the COVID chest X-ray, 96.31% on the tympanic membrane, 99.82% on the chest computed tomography (CT) scan, and 98.75% on the breast cancer ultrasound datasets by photometric augmentation and 96.76% on the breast cancer microscopic biopsy dataset by geometric augmentation. Moreover, some elastic deformation augmentation methods are explored with the proposed model using all the datasets to evaluate their effectiveness. Finally, VGG16, InceptionV3, and ResNet50 were trained on the best-performing augmented datasets, and their performance consistency was compared with that of the MNet-10 model. The findings may aid future researchers in medical data analysis involving ablation studies and augmentation techniques.
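The study's central comparison is between photometric augmentation (intensity changes only) and geometric augmentation (spatial changes only). A minimal NumPy sketch of one representative transform from each family follows; the gamma, gain, flip, and rotation choices are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Hedged sketch of the two augmentation families compared in the study.
def photometric_augment(img, gamma=0.8, gain=1.1):
    """Changes intensities only (gamma correction with a gain);
    the image geometry is untouched. Assumes img is in [0, 1]."""
    return np.clip(gain * img ** gamma, 0.0, 1.0)

def geometric_augment(img, k=1, flip=True):
    """Changes geometry only (horizontal flip plus 90-degree rotations);
    the pixel intensities themselves are untouched."""
    out = np.fliplr(img) if flip else img
    return np.rot90(out, k)
```

Note the invariants that distinguish the families: the photometric transform preserves shape but remaps values, while the geometric transform permutes pixel locations, so the multiset of intensities is unchanged.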
Affiliation(s)
- Sidratul Montaha
- Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh
- Sami Azam
- College of Engineering, IT & Environment, Charles Darwin University, Darwin, NT, Australia
- Md. Zahid Hasan
- Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh
- Asif Karim
- College of Engineering, IT & Environment, Charles Darwin University, Darwin, NT, Australia
- Khan Md. Hasib
- Department of Computer Science and Engineering, Ahsanullah University of Science and Technology, Dhaka, Bangladesh
- Shobhit K. Patel
- Department of Computer Engineering, Marwadi University, Rajkot, India
- Mirjam Jonkman
- College of Engineering, IT & Environment, Charles Darwin University, Darwin, NT, Australia
- Zubaer Ibna Mannan
- Department of Smart Computing, Kyungdong University – Global Campus, Sokcho-si, South Korea
17
Breast Tumor Ultrasound Image Segmentation Method Based on Improved Residual U-Net Network. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:3905998. [PMID: 35795762 PMCID: PMC9252688 DOI: 10.1155/2022/3905998] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/25/2022] [Revised: 05/19/2022] [Accepted: 05/31/2022] [Indexed: 11/25/2022]
Abstract
To achieve efficient and accurate breast tumor recognition and diagnosis, this paper proposes a breast tumor ultrasound image segmentation method based on the U-Net framework, combined with residual blocks and an attention mechanism. In this method, residual blocks are introduced into the U-Net network to avoid the degradation of model performance caused by vanishing gradients and to reduce the training difficulty of the deep network. At the same time, considering both spatial and channel attention features, a fusion attention mechanism is introduced into the image analysis model to improve the ability to capture feature information from ultrasound images and to realize accurate recognition and extraction of breast tumors. The experimental results show that the Dice index of the proposed method reaches 0.921, demonstrating excellent image segmentation performance.
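The two ideas this abstract combines, a fused channel-plus-spatial attention gate and a residual (identity-skip) connection, can be sketched on a raw feature map in a few lines. The gate designs below are deliberately simplified assumptions for illustration, not the paper's exact modules.

```python
import numpy as np

# Hedged sketch: channel attention (global-average-pool -> sigmoid gate)
# fused with spatial attention (per-pixel channel mean -> sigmoid gate).
def fused_attention(feat):
    """feat: (C, H, W) feature map. Returns a gated feature map of the
    same shape."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    channel_gate = sigmoid(feat.mean(axis=(1, 2)))     # (C,) one weight per channel
    feat = feat * channel_gate[:, None, None]
    spatial_gate = sigmoid(feat.mean(axis=0))          # (H, W) one weight per pixel
    return feat * spatial_gate[None, :, :]

def residual_attention_block(feat):
    """Identity skip keeps gradients flowing through deep stacks:
    output = input + attention-refined(input)."""
    return feat + fused_attention(feat)
```

The identity skip is what counters the degradation problem mentioned in the abstract: even if the attention branch learns nothing useful, the block can fall back to passing its input through unchanged.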
18
Abstract
Machine learning (ML) methods are pervading an increasing number of fields of application because of their capacity to effectively solve a wide variety of challenging problems. The employment of ML techniques in ultrasound imaging began several years ago, but scientific interest in the topic has increased exponentially in the last few years. The present work reviews the most recent (2019 onwards) implementations of machine learning techniques in two of the most popular ultrasound imaging fields, medical diagnostics and non-destructive evaluation. The former, which covers the greater part of the review, was analyzed by classifying studies according to the human organ investigated and the methodology adopted (e.g., detection, segmentation, and/or classification), while for the latter, some solutions to the detection/classification of material defects or particular patterns are reported. Finally, the main merits of machine learning that emerged from the study analysis are summarized and discussed.
19
Wu C, Wang Z. Robust fuzzy dual-local information clustering with kernel metric and quadratic surface prototype for image segmentation. APPL INTELL 2022. [DOI: 10.1007/s10489-022-03690-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
20
Ukwuoma CC, Urama GC, Qin Z, Bin Heyat MB, Mohammed Khan H, Akhtar F, Masadeh MS, Ibegbulam CS, Delali FL, AlShorman O. Boosting Breast Cancer Classification from Microscopic Images Using Attention Mechanism. 2022 INTERNATIONAL CONFERENCE ON DECISION AID SCIENCES AND APPLICATIONS (DASA) 2022. [DOI: 10.1109/dasa54658.2022.9765013] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Affiliation(s)
- Chiagoziem C. Ukwuoma
- University of Electronic Science and Technology of China, School of Information and Software Engineering, Chengdu, Sichuan, China
- Gilbert C. Urama
- University of Electronic Science and Technology of China, School of Computer Science and Engineering, Chengdu, Sichuan, China
- Zhiguang Qin
- University of Electronic Science and Technology of China, School of Information and Software Engineering, Chengdu, Sichuan, China
- Md Belal Bin Heyat
- Sichuan University, West China Hospital, Department of Orthopedics Surgery, Chengdu, Sichuan, China
- Haider Mohammed Khan
- University of Electronic Science and Technology of China, School of Computer Science and Engineering, Chengdu, Sichuan, China
- Faijan Akhtar
- University of Electronic Science and Technology of China, School of Computer Science and Engineering, Chengdu, Sichuan, China
- Mahmoud S. Masadeh
- Yarmouk University, Hijjawi Faculty for Engineering, Computer Engineering Department, Irbid, Jordan
- Chukwuemeka S. Ibegbulam
- Federal University of Technology, Department of Polymer and Textile Engineering, Owerri, Imo State, Nigeria
- Fiasam Linda Delali
- University of Electronic Science and Technology of China, School of Information and Software Engineering, Chengdu, Sichuan, China
- Omar AlShorman
- Najran University, Faculty of Engineering and AlShrouk Traiding Company, Najran, KSA
21
Jabeen K, Khan MA, Alhaisoni M, Tariq U, Zhang YD, Hamza A, Mickus A, Damaševičius R. Breast Cancer Classification from Ultrasound Images Using Probability-Based Optimal Deep Learning Feature Fusion. SENSORS 2022; 22:s22030807. [PMID: 35161552 PMCID: PMC8840464 DOI: 10.3390/s22030807] [Citation(s) in RCA: 56] [Impact Index Per Article: 28.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/25/2021] [Revised: 01/12/2022] [Accepted: 01/17/2022] [Indexed: 12/11/2022]
Abstract
After lung cancer, breast cancer is the second leading cause of death in women. If breast cancer is detected early, mortality rates in women can be reduced. Because manual breast cancer diagnosis takes a long time, an automated system is required for early cancer detection. This paper proposes a new framework for breast cancer classification from ultrasound images that employs deep learning and the fusion of the best selected features. The proposed framework is divided into five major steps: (i) data augmentation is performed to increase the size of the original dataset for better learning of Convolutional Neural Network (CNN) models; (ii) a pre-trained DarkNet-53 model is considered and the output layer is modified based on the augmented dataset classes; (iii) the modified model is trained using transfer learning and features are extracted from the global average pooling layer; (iv) the best features are selected using two improved optimization algorithms known as reformed differential evaluation (RDE) and reformed gray wolf (RGW); and (v) the best selected features are fused using a new probability-based serial approach and classified using machine learning algorithms. The experiment was conducted on an augmented Breast Ultrasound Images (BUSI) dataset, and the best accuracy was 99.1%. When compared with recent techniques, the proposed framework outperforms them.
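Two of the pipeline steps above, reading a feature vector out of a CNN's global average pooling layer and serially fusing selected feature sets, reduce to simple array operations. The sketch below illustrates only those two steps; the DarkNet-53 backbone, the RDE/RGW selection algorithms, and the probability-based weighting are all omitted, and the function names are assumptions for illustration.

```python
import numpy as np

# Hedged sketch of two steps from the described pipeline.
def global_average_pool(feature_maps):
    """feature_maps: (C, H, W) backbone activations -> (C,) descriptor,
    i.e., one scalar per channel, as read from a global average pooling
    layer."""
    return feature_maps.mean(axis=(1, 2))

def serial_fuse(selected_a, selected_b):
    """Serial fusion: the selected feature vectors are concatenated end to
    end, so the fused dimensionality is the sum of the inputs'."""
    return np.concatenate([selected_a, selected_b])
```

A fused vector like this would then be handed to a conventional classifier, which matches the abstract's final step of classifying the fused features with machine learning algorithms.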
Affiliation(s)
- Kiran Jabeen
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Muhammad Attique Khan
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Majed Alhaisoni
- College of Computer Science and Engineering, University of Ha’il, Ha’il 55211, Saudi Arabia
- Usman Tariq
- College of Computer Engineering and Science, Prince Sattam Bin Abdulaziz University, Al-Kharaj 11942, Saudi Arabia
- Yu-Dong Zhang
- Department of Informatics, University of Leicester, Leicester LE1 7RH, UK
- Ameer Hamza
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Artūras Mickus
- Department of Applied Informatics, Vytautas Magnus University, LT-44404 Kaunas, Lithuania
- Robertas Damaševičius
- Department of Applied Informatics, Vytautas Magnus University, LT-44404 Kaunas, Lithuania
- Correspondence: