1. Ji Z, Che H, Yan Y, Wu J. BAG-Net: a boundary detection and multiple attention-guided network for liver ultrasound image automatic segmentation in ultrasound guided surgery. Phys Med Biol 2024; 69:035015. PMID: 38198733. DOI: 10.1088/1361-6560/ad1cfa.
Abstract
Objective. Automated segmentation of targets in ultrasound (US) images during US-guided liver surgery holds the potential to assist physicians in quickly locating critical areas such as blood vessels and lesions. However, this remains a challenging task, primarily due to the image quality issues associated with US, including blurred edges and low contrast. In addition, studies specifically targeting liver segmentation are relatively scarce, possibly because studying deep abdominal organs under US is difficult. In this paper, we propose a network named BAG-Net to address these challenges and achieve accurate segmentation of liver targets with varying morphologies, including lesions and blood vessels. Approach. The BAG-Net was designed with a boundary detection module together with a position module to locate the target, and multiple attention-guided modules combined with the depth supervision strategy to enhance detailed segmentation of the target area. Main Results. Our method was compared to other approaches and demonstrated superior performance on two liver US datasets. Specifically, the method achieved 93.9% precision, 91.2% recall, 92.4% Dice coefficient, and 86.2% IoU for liver tumor segmentation. Additionally, we evaluated the capability of our network to segment tumors on the breast US dataset (BUSI), where it also achieved excellent results. Significance. Our proposed method was validated to effectively segment liver targets with diverse morphologies, providing suspicious areas for clinicians to identify lesions or other characteristics. In the clinic, the method is anticipated to improve surgical efficiency during US-guided surgery.
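For reference, the precision, recall, Dice coefficient, and IoU figures reported here are standard overlap measures between a predicted mask and a ground-truth mask; a minimal NumPy sketch of how they are commonly computed (an illustration, not code from the paper) is:

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """Precision, recall, Dice, and IoU for two binary masks of the same shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    iou = tp / (tp + fp + fn + eps)
    return precision, recall, dice, iou
```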
Affiliation(s)
- Zihan Ji
- Institute of Biomedical Engineering, Shenzhen International Graduate School, Tsinghua University, People's Republic of China
- Hui Che
- Institute of Biomedical Engineering, Shenzhen International Graduate School, Tsinghua University, People's Republic of China
- Yibo Yan
- Institute of Biomedical Engineering, Shenzhen International Graduate School, Tsinghua University, People's Republic of China
- Jian Wu
- Institute of Biomedical Engineering, Shenzhen International Graduate School, Tsinghua University, People's Republic of China
2. Tagnamas J, Ramadan H, Yahyaouy A, Tairi H. Multi-task approach based on combined CNN-transformer for efficient segmentation and classification of breast tumors in ultrasound images. Vis Comput Ind Biomed Art 2024; 7:2. PMID: 38273164. PMCID: PMC10811315. DOI: 10.1186/s42492-024-00155-w.
Abstract
Accurate segmentation of breast ultrasound (BUS) images is crucial for the early diagnosis and treatment of breast cancer. However, segmenting lesions in BUS images continues to pose significant challenges due to the limitations of convolutional neural networks (CNNs) in capturing long-range dependencies and global context information. Existing methods relying solely on CNNs have struggled to address these issues. Recently, ConvNeXts have emerged as a promising CNN architecture, while transformers have demonstrated outstanding performance in diverse computer vision tasks, including the analysis of medical images. In this paper, we propose a novel breast lesion segmentation network, CS-Net, that combines the strengths of ConvNeXt and Swin Transformer models to enhance the performance of the U-Net architecture. Our network operates on BUS images and adopts an end-to-end approach to perform segmentation. To address the limitations of CNNs, we design a hybrid encoder that incorporates modified ConvNeXt convolutions and Swin Transformer blocks. Furthermore, to better capture spatial and channel attention in the feature maps, we incorporate a Coordinate Attention Module. We also design an Encoder-Decoder Features Fusion Module that fuses low-level features from the encoder with high-level semantic features from the decoder during image reconstruction. Experimental results demonstrate the superiority of our network over state-of-the-art image segmentation methods for BUS lesion segmentation.
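The Coordinate Attention Module mentioned above follows a published attention design that encodes feature maps along the height and width directions separately before re-weighting them. The PyTorch sketch below shows one common formulation of that idea; it is illustrative and not the authors' exact implementation:

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate attention: pool along H and W separately, then re-weight the input."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                        # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)    # (n, c, w, 1)
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                    # attention over rows
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # attention over columns
        return x * a_h * a_w
```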
Affiliation(s)
- Jaouad Tagnamas
- Department of Informatics, Faculty of Sciences Dhar El Mahraz, University of Sidi Mohamed Ben Abdellah, 30000, Fez, Morocco.
- Hiba Ramadan
- Department of Informatics, Faculty of Sciences Dhar El Mahraz, University of Sidi Mohamed Ben Abdellah, 30000, Fez, Morocco
- Ali Yahyaouy
- Department of Informatics, Faculty of Sciences Dhar El Mahraz, University of Sidi Mohamed Ben Abdellah, 30000, Fez, Morocco
- Hamid Tairi
- Department of Informatics, Faculty of Sciences Dhar El Mahraz, University of Sidi Mohamed Ben Abdellah, 30000, Fez, Morocco
3. Guo Y, Chen M, Yang L, Yin H, Yang H, Zhou Y. A neural network with a human learning paradigm for breast fibroadenoma segmentation in sonography. Biomed Eng Online 2024; 23:5. PMID: 38221632. PMCID: PMC10787993. DOI: 10.1186/s12938-024-01198-z.
Abstract
BACKGROUND Breast fibroadenoma poses a significant health concern, particularly for young women. Computer-aided diagnosis has emerged as an effective and efficient method for the early and accurate detection of various solid tumors. Automatic segmentation of breast fibroadenomas is important and could reduce unnecessary biopsies, but it is challenging due to the low image quality and the presence of various artifacts in sonography. METHODS Human learning involves modularizing complete information and then integrating it through dense contextual connections in an intuitive and efficient way. Here, a human learning paradigm was introduced to guide the neural network by using two consecutive phases: a feature fragmentation stage and an information aggregation stage. To optimize this paradigm, three fragmentation attention mechanisms and information aggregation mechanisms were adapted according to the characteristics of sonography. The evaluation was conducted using a local dataset comprising 600 breast ultrasound images from 30 patients at Suining Central Hospital in China. Additionally, a public dataset consisting of 246 breast ultrasound images from Dataset_BUSI and DatasetB was used to further validate the robustness of the proposed network. Segmentation performance and inference speed were assessed by the Dice similarity coefficient (DSC), Hausdorff distance (HD), and training time, and then compared with those of the baseline model (TransUNet) and other state-of-the-art methods. RESULTS Most models guided by the human learning paradigm demonstrated improved segmentation on the local dataset, with the best one (incorporating C3ECA and LogSparse Attention modules) outperforming the baseline model by 0.76% in DSC and 3.14 mm in HD and reducing the training time by 31.25%. Its robustness and efficiency on the public dataset were also confirmed, surpassing TransUNet by 0.42% in DSC and 5.13 mm in HD. CONCLUSIONS Our proposed human learning paradigm has demonstrated superiority and efficiency for ultrasound breast fibroadenoma segmentation across both public and local datasets. This intuitive and efficient learning paradigm, as the core of neural networks, holds immense potential in medical image processing.
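The Hausdorff distance (HD) used above measures the worst-case boundary disagreement between two masks. A small sketch using SciPy's directed_hausdorff is given below; distances are in pixels, and converting to mm by multiplying with the pixel spacing is an assumption about this study's setup rather than something stated in the abstract:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the foreground pixel sets of two binary masks."""
    pts_a = np.argwhere(mask_a > 0)   # (N, 2) pixel coordinates
    pts_b = np.argwhere(mask_b > 0)
    d_ab = directed_hausdorff(pts_a, pts_b)[0]
    d_ba = directed_hausdorff(pts_b, pts_a)[0]
    return max(d_ab, d_ba)
```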
Affiliation(s)
- Yongxin Guo
- State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, 1 Medical College Road, Chongqing, 400016, China
- Chongqing Key Laboratory of Biomedical Engineering, Chongqing Medical University, Chongqing, 400016, China
- Maoshan Chen
- Department of Breast and Thyroid Surgery, Suining Central Hospital, Suining, 629000, China
- Lei Yang
- Department of Breast and Thyroid Surgery, Suining Central Hospital, Suining, 629000, China
- Heng Yin
- Department of Breast and Thyroid Surgery, Suining Central Hospital, Suining, 629000, China
- Hongwei Yang
- Department of Breast and Thyroid Surgery, Suining Central Hospital, Suining, 629000, China
- Yufeng Zhou
- State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, 1 Medical College Road, Chongqing, 400016, China.
- Chongqing Key Laboratory of Biomedical Engineering, Chongqing Medical University, Chongqing, 400016, China.
- National Medical Products Administration (NMPA) Key Laboratory for Quality Evaluation of Ultrasonic Surgical Equipment, 507 Gaoxin Ave., Donghu New Technology Development Zone, Wuhan, 430075, Hubei, China.
4. Song H, Liu C, Li S, Zhang P. TS-GCN: A novel tumor segmentation method integrating transformer and GCN. Math Biosci Eng 2023; 20:18173-18190. PMID: 38052553. DOI: 10.3934/mbe.2023807.
Abstract
As one of the critical branches of medical image processing, the segmentation of breast cancer tumors is of great importance for planning surgical interventions, radiotherapy, and chemotherapy. Breast cancer tumor segmentation faces several challenges, including the inherent complexity and heterogeneity of breast tissue, the presence of various imaging artifacts and noise in medical images, low contrast between the tumor region and healthy tissue, and inconsistent tumor size. Furthermore, existing segmentation methods may not fully capture the rich spatial and contextual information in small regions of breast images, leading to suboptimal performance. In this paper, we propose a novel breast tumor segmentation method, called the transformer and graph convolutional neural (TS-GCN) network, for medical imaging analysis. Specifically, we designed a feature aggregation network to fuse the features extracted from the transformer, GCN, and convolutional neural network (CNN) branches. The CNN branch is designed to extract the image's local deep features, while the transformer and GCN branches better capture the spatial and contextual dependencies among pixels. By leveraging the strengths of the three feature extraction networks, our method achieved superior segmentation performance on the BUSI dataset and dataset B. TS-GCN showed the best performance on several indexes, with an Acc of 0.9373, Dice of 0.9058, IoU of 0.7634, F1 score of 0.9338, and AUC of 0.9692, outperforming other state-of-the-art methods. This segmentation method offers a promising direction for medical image analysis and the diagnosis of other diseases.
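The abstract does not specify the internal design of the feature aggregation network, so the sketch below shows only one plausible way to fuse same-resolution feature maps from CNN, transformer, and GCN branches (concatenation followed by a 1x1 convolution); the module name and channel arguments are hypothetical:

```python
import torch
import torch.nn as nn

class FeatureAggregation(nn.Module):
    """Fuse three same-resolution feature maps, e.g., from CNN, transformer, and GCN branches."""
    def __init__(self, c_cnn: int, c_trans: int, c_gcn: int, c_out: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(c_cnn + c_trans + c_gcn, c_out, kernel_size=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )

    def forward(self, f_cnn, f_trans, f_gcn):
        # All inputs are assumed to share the same spatial size (N, C_i, H, W).
        return self.fuse(torch.cat([f_cnn, f_trans, f_gcn], dim=1))
```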
Affiliation(s)
- Haiyan Song
- The Second Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, China
- Cuihong Liu
- Affiliated Eye Hospital of Shandong University of Traditional Chinese Medicine, Jinan, China
- School of Nursing, Shandong University of Traditional Chinese Medicine, Jinan, China
- Shengnan Li
- The Second Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, China
- Peixiao Zhang
- The Second Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, China
5. Hu K, Zhang X, Lee D, Xiong D, Zhang Y, Gao X. Boundary-Guided and Region-Aware Network With Global Scale-Adaptive for Accurate Segmentation of Breast Tumors in Ultrasound Images. IEEE J Biomed Health Inform 2023; 27:4421-4432. PMID: 37310830. DOI: 10.1109/jbhi.2023.3285789.
Abstract
Breast ultrasound (BUS) image segmentation is a critical procedure in the diagnosis and quantitative analysis of breast cancer. Most existing methods for BUS image segmentation do not effectively utilize the prior information extracted from the images. In addition, breast tumors have very blurred boundaries, various sizes, and irregular shapes, and the images contain considerable noise. Thus, tumor segmentation remains a challenge. In this article, we propose a BUS image segmentation method using a boundary-guided and region-aware network with global scale-adaptive (BGRA-GSA). Specifically, we first design a global scale-adaptive module (GSAM) to extract features of tumors of different sizes from multiple perspectives. GSAM encodes the features at the top of the network in both the channel and spatial dimensions, which can effectively extract multi-scale context and provide global prior information. Moreover, we develop a boundary-guided module (BGM) for fully mining boundary information. BGM guides the decoder to learn the boundary context by explicitly enhancing the extracted boundary features. Simultaneously, we design a region-aware module (RAM) to realize the cross-fusion of diverse layers of breast tumor diversity features, which helps the network improve its learning of the contextual features of tumor regions. These modules enable our BGRA-GSA to capture and integrate rich global multi-scale context, multi-level fine-grained details, and semantic information to facilitate accurate breast tumor segmentation. Finally, experimental results on three publicly available datasets show that our model achieves highly effective segmentation of breast tumors, even with blurred boundaries, various sizes and shapes, and low contrast.
6. Gao Y, Fu X, Chen Y, Guo C, Wu J. Post-pandemic healthcare for COVID-19 vaccine: Tissue-aware diagnosis of cervical lymphadenopathy via multi-modal ultrasound semantic segmentation. Appl Soft Comput 2023; 133:109947. PMID: 36570119. PMCID: PMC9762098. DOI: 10.1016/j.asoc.2022.109947.
Abstract
With the widespread deployment of COVID-19 vaccines around the world, billions of people have benefited from vaccination and thereby avoided infection. However, a huge number of clinical cases have revealed diverse side effects of COVID-19 vaccines, among which cervical lymphadenopathy is one of the most frequent local reactions. Therefore, rapid detection of cervical lymph nodes (LNs) is essential for vaccine recipients' healthcare and for avoiding misdiagnosis in the post-pandemic era. This paper focuses on a novel deep learning-based framework for the rapid diagnosis of cervical lymphadenopathy in COVID-19 vaccine recipients. Existing deep learning-based computer-aided diagnosis (CAD) methods for cervical LN enlargement mostly depend on single-modality images, e.g., grayscale ultrasound (US), color Doppler ultrasound, and CT, and fail to effectively integrate information from multi-source medical images. Meanwhile, both the tissue objects surrounding the cervical LNs and different regions inside the cervical LNs may carry valuable diagnostic knowledge that remains to be mined. In this paper, we propose a Tissue-Aware Cervical Lymph Node Diagnosis method (TACLND) via multi-modal ultrasound semantic segmentation. The method effectively integrates grayscale and color Doppler US images and realizes pixel-level localization of different tissue objects, i.e., lymph, muscle, and blood vessels. With inter-tissue and intra-tissue attention mechanisms applied, our method can enhance the implicit tissue-level diagnostic knowledge in both the spatial and channel dimensions and classify cervical LNs as normal, benign, or malignant. Extensive experiments conducted on our collected cervical LN US dataset demonstrate the effectiveness of our method for both tissue detection and cervical lymphadenopathy diagnosis. Therefore, our proposed framework can ensure efficient diagnosis of vaccine recipients' cervical LNs and assist doctors in discriminating between COVID-related reactive lymphadenopathy and metastatic lymphadenopathy.
Affiliation(s)
- Yue Gao
- School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing, 100876, China
- Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing, 100876, China
- Xiangling Fu
- School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing, 100876, China
- Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing, 100876, China
- Yuepeng Chen
- School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing, 100876, China
- Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing, 100876, China
- Chenyi Guo
- Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
- Ji Wu
- Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
7. Huang K, Zhang Y, Cheng HD, Xing P. Trustworthy Breast Ultrasound Image Semantic Segmentation Based on Fuzzy Uncertainty Reduction. Healthcare (Basel) 2022; 10:2480. PMID: 36554005. PMCID: PMC9778351. DOI: 10.3390/healthcare10122480.
Abstract
Medical image semantic segmentation is essential in computer-aided diagnosis systems. It can separate tissues and lesions in the image and provide valuable information to radiologists and doctors. Breast ultrasound (BUS) imaging has advantages: no radiation, low cost, and portability. However, there are two unfavorable characteristics: (1) the dataset size is often small due to the difficulty of obtaining ground truths, and (2) BUS images are usually of poor quality. Trustworthy BUS image segmentation is urgently needed in breast cancer computer-aided diagnosis systems, especially for fully understanding BUS images and segmenting the breast anatomy, which supports breast cancer risk assessment. The main challenge for this task is the uncertainty in both the pixels and channels of BUS images. In this paper, we propose a Spatial and Channel-wise Fuzzy Uncertainty Reduction Network (SCFURNet) for BUS image semantic segmentation. The proposed architecture can reduce the uncertainty in the original segmentation frameworks. We apply the proposed method to four datasets: (1) a five-category BUS image dataset with 325 images, and (2) three BUS image datasets containing only the tumor category (1830 images in total). The proposed approach is compared with state-of-the-art methods such as U-Net with VGG-16, ResNet-50/ResNet-101, Deeplab, FCN-8s, PSPNet, U-Net with information extension, attention U-Net, and U-Net with the self-attention mechanism. It achieves 2.03%, 1.84%, and 2.88% improvements in the Jaccard index on the three public BUS datasets, and a 6.72% improvement in the tumor category and a 4.32% improvement in overall performance on the five-category dataset compared with the original U-shaped network with ResNet-101, since it can handle the uncertainty effectively and efficiently.
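Pixel-wise fuzzy uncertainty of the kind this paper targets is often quantified with a fuzzy (Shannon-type) entropy over a membership map; the sketch below illustrates that generic measure only, not the authors' SCFURNet module:

```python
import numpy as np

def fuzzy_entropy(membership: np.ndarray, eps: float = 1e-12) -> float:
    """Mean fuzzy entropy of a membership map with values in [0, 1].
    Values near 0.5 are maximally uncertain; values near 0 or 1 contribute little."""
    mu = np.clip(membership, eps, 1.0 - eps)
    h = -(mu * np.log2(mu) + (1.0 - mu) * np.log2(1.0 - mu))
    return float(h.mean())
```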
Affiliation(s)
- Kuan Huang
- Department of Computer Science and Technology, Kean University, Union, NJ 07083, USA
- Yingtao Zhang
- School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
- Heng-Da Cheng
- Department of Computer Science, Utah State University, Logan, UT 84322, USA
- Ping Xing
- Ultrasound Department, The First Affiliated Hospital of Harbin Medical University, Harbin 150001, China
8. Zuo Q, Lu L, Wang L, Zuo J, Ouyang T. Constructing brain functional network by Adversarial Temporal-Spatial Aligned Transformer for early AD analysis. Front Neurosci 2022; 16:1087176. PMID: 36518529. PMCID: PMC9742604. DOI: 10.3389/fnins.2022.1087176.
Abstract
Introduction: The brain functional network can describe the spontaneous activity of nerve cells and reveal the subtle abnormal changes associated with brain disease. It has been widely used for analyzing early Alzheimer's disease (AD) and exploring pathological mechanisms. However, current methods of constructing functional connectivity networks from functional magnetic resonance imaging (fMRI) depend heavily on software toolboxes, which may lead to errors in connection strength estimation and poor performance in disease analysis because of the many subjective settings involved. Methods: To solve this problem, in this paper, a novel Adversarial Temporal-Spatial Aligned Transformer (ATAT) model is proposed to automatically map 4D fMRI into a functional connectivity network for early AD analysis. By incorporating the volume and location of anatomical brain regions, the region-guided feature learning network can roughly focus on local features for each brain region. Also, the spatial-temporal aligned transformer network is developed to adaptively adjust the boundary features of adjacent regions and capture global functional connectivity patterns of distant regions. Furthermore, a multi-channel temporal discriminator is devised to distinguish the joint distributions of the multi-region time series from the generator and the real sample. Results: Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) demonstrated the effectiveness and superior performance of the proposed model in early AD prediction and progression analysis. Discussion: To verify the reliability of the proposed model, the detected important ROIs were compared with clinical studies and show partial consistency. Furthermore, the most significantly altered connectivity reflects the main characteristics associated with AD. Conclusion: Overall, the proposed ATAT provides a new perspective on constructing functional connectivity networks and is able to evaluate disease-related changing characteristics at different stages for neuroscience exploration and clinical disease analysis.
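For context, the conventional toolbox-style pipeline that ATAT aims to replace typically builds the functional connectivity network as a Pearson correlation matrix over ROI time series; a minimal sketch (the ROI count and series length below are arbitrary) is:

```python
import numpy as np

def functional_connectivity(roi_timeseries: np.ndarray) -> np.ndarray:
    """Conventional functional connectivity: Pearson correlation between ROI time series.
    roi_timeseries has shape (n_rois, n_timepoints)."""
    return np.corrcoef(roi_timeseries)

# Example: 90 ROIs, 140 time points of simulated fMRI signal.
rng = np.random.default_rng(0)
ts = rng.standard_normal((90, 140))
fc = functional_connectivity(ts)   # (90, 90) symmetric matrix with unit diagonal
```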
Affiliation(s)
- Qiankun Zuo
- School of Information Engineering, Hubei University of Economics, Wuhan, China
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and the SIAT Branch, Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen, China
- Libin Lu
- School of Mathematics and Computer Science, Wuhan Polytechnic University, Wuhan, China
- Lin Wang
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and the SIAT Branch, Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen, China
- Guangdong-Hong Kong-Macau Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen, China
- Jiahui Zuo
- State Key Laboratory of Petroleum Resource and Prospecting, and Unconventional Petroleum Research Institute, China University of Petroleum, Beijing, China
- Tao Ouyang
- State Key Laboratory of Geomechanics and Geotechnical Engineering, Institute of Rock and Soil Mechanics, Chinese Academy of Sciences, Wuhan, China
9. Chen G, Dai Y, Zhang J. C-Net: Cascaded convolutional neural network with global guidance and refinement residuals for breast ultrasound images segmentation. Comput Methods Programs Biomed 2022; 225:107086. PMID: 36044802. DOI: 10.1016/j.cmpb.2022.107086.
Abstract
BACKGROUND AND OBJECTIVE Breast lesion segmentation is an important step in computer-aided diagnosis systems. However, speckle noise, heterogeneous structure, and similar intensity distributions bring challenges to breast lesion segmentation. METHODS In this paper, we present a novel cascaded convolutional neural network integrating U-net, a bidirectional attention guidance network (BAGNet), and a refinement residual network (RFNet) for lesion segmentation in breast ultrasound images. Specifically, we first use U-net to generate a set of saliency maps containing low-level and high-level image structures. Then, the bidirectional attention guidance network is used to capture the context between global (low-level) and local (high-level) features from the saliency map. The introduction of the global feature map can reduce the interference of surrounding tissue on the lesion regions. Furthermore, we developed a refinement residual network based on the core architecture of U-net to learn the difference between rough saliency feature maps and ground-truth masks. Learning the residuals helps us obtain a more complete lesion mask. RESULTS To evaluate the segmentation performance of the network, we compared it with several state-of-the-art segmentation methods on the public breast ultrasound dataset (BUSIS) using six commonly used evaluation metrics. Our method achieves the highest scores on all six metrics. Furthermore, p-values indicate significant differences between our method and the comparative methods. CONCLUSIONS Experimental results show that our method achieves the most competitive segmentation results. In addition, we applied the network to renal ultrasound image segmentation. In general, our method has good adaptability and robustness for ultrasound image segmentation.
Affiliation(s)
- Gongping Chen
- College of Artificial Intelligence, Nankai University, Tianjin, China.
- Yu Dai
- College of Artificial Intelligence, Nankai University, Tianjin, China.
- Jianxun Zhang
- College of Artificial Intelligence, Nankai University, Tianjin, China
10. Zou K, Tao T, Yuan X, Shen X, Lai W, Long H. An interactive dual-branch network for hard palate segmentation of the oral cavity from CBCT images. Appl Soft Comput 2022. DOI: 10.1016/j.asoc.2022.109549.
11. Akinyelu AA, Zaccagna F, Grist JT, Castelli M, Rundo L. Brain Tumor Diagnosis Using Machine Learning, Convolutional Neural Networks, Capsule Neural Networks and Vision Transformers, Applied to MRI: A Survey. J Imaging 2022; 8:205. PMID: 35893083. PMCID: PMC9331677. DOI: 10.3390/jimaging8080205.
Abstract
Management of brain tumors is based on clinical and radiological information with presumed grade dictating treatment. Hence, a non-invasive assessment of tumor grade is of paramount importance to choose the best treatment plan. Convolutional Neural Networks (CNNs) represent one of the effective Deep Learning (DL)-based techniques that have been used for brain tumor diagnosis. However, they are unable to handle input modifications effectively. Capsule neural networks (CapsNets) are a novel type of machine learning (ML) architecture that was recently developed to address the drawbacks of CNNs. CapsNets are resistant to rotations and affine translations, which is beneficial when processing medical imaging datasets. Moreover, Vision Transformers (ViT)-based solutions have been very recently proposed to address the issue of long-range dependency in CNNs. This survey provides a comprehensive overview of brain tumor classification and segmentation techniques, with a focus on ML-based, CNN-based, CapsNet-based, and ViT-based techniques. The survey highlights the fundamental contributions of recent studies and the performance of state-of-the-art techniques. Moreover, we present an in-depth discussion of crucial issues and open challenges. We also identify some key limitations and promising future research directions. We envisage that this survey shall serve as a good springboard for further study.
Affiliation(s)
- Andronicus A. Akinyelu
- NOVA Information Management School (NOVA IMS), Universidade NOVA de Lisboa, Campus de Campolide, 1070-312 Lisboa, Portugal;
- Department of Computer Science and Informatics, University of the Free State, Phuthaditjhaba 9866, South Africa
- Fulvio Zaccagna
- Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum-University of Bologna, 40138 Bologna, Italy;
- IRCCS Istituto delle Scienze Neurologiche di Bologna, Functional and Molecular Neuroimaging Unit, 40139 Bologna, Italy
- James T. Grist
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, UK;
- Department of Radiology, Oxford University Hospitals NHS Foundation Trust, Oxford OX3 9DU, UK
- Oxford Centre for Clinical Magnetic Research Imaging, University of Oxford, Oxford OX3 9DU, UK
- Institute of Cancer and Genomic Sciences, University of Birmingham, Birmingham B15 2SY, UK
- Mauro Castelli
- NOVA Information Management School (NOVA IMS), Universidade NOVA de Lisboa, Campus de Campolide, 1070-312 Lisboa, Portugal;
- Leonardo Rundo
- Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano, Italy
12. Ding Y, Yang Q, Wang Y, Chen D, Qin Z, Zhang J. MallesNet: A multi-object assistance based network for brachial plexus segmentation in ultrasound images. Med Image Anal 2022; 80:102511. PMID: 35753278. DOI: 10.1016/j.media.2022.102511.
Abstract
Ultrasound-guided injection is widely used to help anesthesiologists perform anesthesia in peripheral nerve blockade (PNB). However, accurately identifying nerve structures in ultrasound images is a daunting task, even for experienced anesthesiologists. In this paper, a Multi-object assistance based Brachial Plexus Segmentation Network, named MallesNet, is proposed to improve nerve segmentation performance in ultrasound images with the assistance of simultaneously segmenting the surrounding anatomical structures (e.g., muscle, vein, and artery). MallesNet follows the framework of Mask R-CNN to implement multi-object identification and segmentation. Moreover, a spatial local contrast feature (SLCF) extraction module is proposed to compute contrast features at different scales and effectively obtain useful features for small objects. A self-attention gate (SAG) is also utilized to capture the spatial relationships in different channels and further re-weight the channels in feature maps, following the design of non-local operations and channel attention. Furthermore, the upsampling mechanism in the original Feature Pyramid Network (FPN) is improved by adopting transposed convolution and skip concatenation to fine-tune the feature maps. The Ultrasound Brachial Plexus Dataset (UBPD) is also introduced to support research on brachial plexus segmentation; it consists of 1055 ultrasound images with four objects (i.e., nerve, artery, vein, and muscle) and their corresponding label masks. Extensive experimental results on the UBPD dataset demonstrate that MallesNet achieves better segmentation performance on the nerve structure, as well as on surrounding structures, in comparison to other competing approaches.
Affiliation(s)
- Yi Ding
- Network and Data Security Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054 China; School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054 China; Ningbo WebKing Technology Joint Stock Co., Ltd, Ningbo, Zhejiang, 315000, China.
- Qiqi Yang
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054 China; Network and Data Security Key Laboratory of China, Chengdu, Sichuan, 610054 China.
- Yiqian Wang
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054 China; Network and Data Security Key Laboratory of China, Chengdu, Sichuan, 610054 China.
- Dajiang Chen
- Network and Data Security Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054 China; School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054 China; Peng Cheng Laboratory, Shenzhen, 518055, China.
- Zhiguang Qin
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054 China; Network and Data Security Key Laboratory of China, Chengdu, Sichuan, 610054 China.
- Jian Zhang
- Center of Anaesthesia surgery, Sichuan Provincial Hospital for Women and Children/Affilated Women and Children's Hospital of Chengdu Medical College, Chengdu, China.
13. End-to-End Convolutional Neural Network Framework for Breast Ultrasound Analysis Using Multiple Parametric Images Generated from Radiofrequency Signals. Appl Sci (Basel) 2022. DOI: 10.3390/app12104942.
Abstract
Breast ultrasound (BUS) is an effective clinical modality for diagnosing breast abnormalities in women. Deep-learning techniques based on convolutional neural networks (CNNs) have been widely used to analyze BUS images. However, the low quality of B-mode images owing to speckle noise, together with a lack of training datasets, makes BUS analysis challenging in clinical applications. In this study, we propose an end-to-end CNN framework for BUS analysis using multiple parametric images generated from radiofrequency (RF) signals. The entropy and phase images, which represent microstructural and anatomical information, respectively, and the traditional B-mode images were used as parametric images in the time domain. In addition, the attenuation image, estimated from the frequency domain using RF signals, was used for the spectral features. Because one set of RF signals from one patient produces multiple images as CNN inputs, the proposed framework overcomes the limitation of small datasets, acting as data augmentation in a broad sense, while providing complementary information to compensate for the low quality of the B-mode images. The experimental results showed that the proposed architecture improved the classification accuracy and recall by 5.5% and 11.6%, respectively, compared with the traditional approach using only B-mode images. The proposed framework can be extended to various other parametric images in both the time and frequency domains using deep neural networks to improve its performance.
14. Magnuska ZA, Theek B, Darguzyte M, Palmowski M, Stickeler E, Schulz V, Kießling F. Influence of the Computer-Aided Decision Support System Design on Ultrasound-Based Breast Cancer Classification. Cancers (Basel) 2022; 14:277. PMID: 35053441. PMCID: PMC8773857. DOI: 10.3390/cancers14020277.
Abstract
Automation of medical data analysis is an important topic in modern cancer diagnostics, aiming at robust and reproducible workflows. Therefore, we used a dataset of breast US images (252 malignant and 253 benign cases) to realize and compare different strategies for CAD support in lesion detection and classification. Eight different datasets (including pre-processed and spatially augmented images) were prepared, and machine learning algorithms (i.e., Viola-Jones; YOLOv3) were trained for lesion detection. The radiomics signature (RS) was derived from detection boxes and compared with RS derived from manually obtained segments. Finally, the classification model was established and evaluated concerning accuracy, sensitivity, specificity, and area under the Receiver Operating Characteristic curve. After training on a dataset including logarithmic derivatives of US images, we found that YOLOv3 obtains better results in breast lesion detection (IoU: 0.544 ± 0.081; LE: 0.171 ± 0.009) than the Viola-Jones framework (IoU: 0.399 ± 0.054; LE: 0.096 ± 0.016). Interestingly, our findings show that the classification model trained with RS derived from detection boxes and the model based on the RS derived from a gold standard manual segmentation are comparable (p-value = 0.071). Thus, deriving radiomics signatures from the detection box is a promising technique for building a breast lesion classification model, and may reduce the need for the lesion segmentation step in the future design of CAD systems.
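The detection IoU reported above is the standard overlap ratio between a predicted box and a ground-truth box; a minimal sketch (the example coordinates are made up) is:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-8)

# Example: predicted vs. ground-truth lesion boxes in pixel coordinates.
print(box_iou((30, 40, 120, 150), (35, 50, 110, 160)))  # ~0.70
```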
Affiliation(s)
- Zuzanna Anna Magnuska
- Institute for Experimental Molecular Imaging, Uniklinik RWTH Aachen and Helmholtz Institute for Biomedical Engineering, Faculty of Medicine, RWTH Aachen University, 52074 Aachen, Germany; (Z.A.M.); (B.T.); (M.D.); (V.S.)
- Benjamin Theek
- Institute for Experimental Molecular Imaging, Uniklinik RWTH Aachen and Helmholtz Institute for Biomedical Engineering, Faculty of Medicine, RWTH Aachen University, 52074 Aachen, Germany; (Z.A.M.); (B.T.); (M.D.); (V.S.)
- Milita Darguzyte
- Institute for Experimental Molecular Imaging, Uniklinik RWTH Aachen and Helmholtz Institute for Biomedical Engineering, Faculty of Medicine, RWTH Aachen University, 52074 Aachen, Germany; (Z.A.M.); (B.T.); (M.D.); (V.S.)
- Moritz Palmowski
- Radiologie Baden-Baden, Beethovenstraße 2, 76530 Baden-Baden, Germany;
- Elmar Stickeler
- Department of Obstetrics and Gynecology, University Clinic Aachen, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany;
- Comprehensive Diagnostic Center Aachen, Uniklinik RWTH Aachen, Pauwelsstr. 30, 52074 Aachen, Germany
- Volkmar Schulz
- Institute for Experimental Molecular Imaging, Uniklinik RWTH Aachen and Helmholtz Institute for Biomedical Engineering, Faculty of Medicine, RWTH Aachen University, 52074 Aachen, Germany; (Z.A.M.); (B.T.); (M.D.); (V.S.)
- Comprehensive Diagnostic Center Aachen, Uniklinik RWTH Aachen, Pauwelsstr. 30, 52074 Aachen, Germany
- Physics Institute III B, RWTH Aachen University, 52074 Aachen, Germany
- Hyperion Hybrid Imaging Systems GmbH, 52074 Aachen, Germany
- Fraunhofer Institute for Digital Medicine MEVIS, Am Fallturm 1, 28359 Bremen, Germany
- Fabian Kießling
- Institute for Experimental Molecular Imaging, Uniklinik RWTH Aachen and Helmholtz Institute for Biomedical Engineering, Faculty of Medicine, RWTH Aachen University, 52074 Aachen, Germany; (Z.A.M.); (B.T.); (M.D.); (V.S.)
- Comprehensive Diagnostic Center Aachen, Uniklinik RWTH Aachen, Pauwelsstr. 30, 52074 Aachen, Germany
- Fraunhofer Institute for Digital Medicine MEVIS, Am Fallturm 1, 28359 Bremen, Germany
15. Chowdary J, Yogarajah P, Chaurasia P, Guruviah V. A Multi-Task Learning Framework for Automated Segmentation and Classification of Breast Tumors From Ultrasound Images. Ultrason Imaging 2022; 44:3-12. PMID: 35128997. PMCID: PMC8902030. DOI: 10.1177/01617346221075769.
Abstract
Breast cancer is one of the most fatal diseases, leading to the death of many women across the world, but early diagnosis can help to reduce the mortality rate. Therefore, an efficient multi-task learning approach is proposed in this work for the automatic segmentation and classification of breast tumors from ultrasound images. The proposed learning approach consists of encoder, decoder, and bridge blocks for segmentation and a dense branch for the classification of tumors. For efficient classification, multi-scale features from different levels of the network are used. Experimental results show that the proposed approach enhances the accuracy and recall of segmentation by 1.08% and 4.13%, and of classification by 1.16% and 2.34%, respectively, compared with methods available in the literature.
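A shared encoder with a segmentation head and a classification head is usually trained with a weighted sum of the two task losses; the sketch below illustrates that general recipe with assumed loss choices and weights, not the authors' exact objective:

```python
import torch
import torch.nn as nn

def multitask_loss(seg_logits, seg_target, cls_logits, cls_target, alpha=1.0, beta=0.5):
    """Joint objective: pixel-wise BCE for the segmentation head
    plus cross-entropy for the classification head, combined with assumed weights."""
    seg_loss = nn.functional.binary_cross_entropy_with_logits(seg_logits, seg_target)
    cls_loss = nn.functional.cross_entropy(cls_logits, cls_target)
    return alpha * seg_loss + beta * cls_loss

# Example with dummy tensors: batch of 2, 1-channel masks at 128x128, 2 tumor classes.
seg_logits = torch.randn(2, 1, 128, 128)
seg_target = torch.randint(0, 2, (2, 1, 128, 128)).float()
cls_logits = torch.randn(2, 2)
cls_target = torch.randint(0, 2, (2,))
print(multitask_loss(seg_logits, seg_target, cls_logits, cls_target))
```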
Affiliation(s)
- Pratheepan Yogarajah
- University of Ulster, Londonderry, UK
- Pratheepan Yogarajah, University of Ulster, Northland Road, Magee Campus, Londonderry, Northern Ireland BT48 7JL, UK.
16. Li Y, Liu Y, Huang L, Wang Z, Luo J. Deep weakly-supervised breast tumor segmentation in ultrasound images with explicit anatomical constraints. Med Image Anal 2021; 76:102315. PMID: 34902792. DOI: 10.1016/j.media.2021.102315.
Abstract
Breast tumor segmentation is an important step in the diagnostic procedure of physicians and computer-aided diagnosis systems. We propose a two-step deep learning framework for breast tumor segmentation in breast ultrasound (BUS) images which requires only a few manual labels. The first step is breast anatomy decomposition handled by a semi-supervised semantic segmentation technique. The input BUS image is decomposed into four breast anatomical structures, namely fat, mammary gland, muscle and thorax layers. Fat and mammary gland layers are used as constrained region to reduce the search space for breast tumor segmentation. The second step is breast tumor segmentation performed in a weakly-supervised learning scenario where only image-level labels are available. Breast tumors are first recognized by a classification network and then segmented by the proposed class activation mapping and deep level set (CAM-DLS) method. For breast anatomy decomposition, the proposed framework achieves Dice similarity coefficient (DSC) of 83.0 ± 11.8%, 84.3 ± 10.0%, 80.7 ± 15.4% and 91.0 ± 11.4% for fat, mammary gland, muscle and thorax layers, respectively. For breast tumor recognition, the proposed framework achieves sensitivity of 95.8%, precision of 92.4%, specificity of 93.9%, accuracy of 94.8% and F1-score of 0.941. For breast tumor segmentation, the proposed framework achieves DSC of 77.3% and intersection-over-union (IoU) of 66.0%. In conclusion, the proposed framework could efficiently perform breast tumor recognition and segmentation simultaneously in a weakly-supervised setting with anatomical constraints.
Affiliation(s)
- Yongshuai Li
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Yuan Liu
- Senior Department of Oncology, The Fifth Medical Center of Chinese PLA General Hospital, Beijing 100039, China
- Lijie Huang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Zhili Wang
- Department of Ultrasound, The First Medical Center of Chinese PLA General Hospital, Beijing 100853, China.
- Jianwen Luo
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China.
17. Huang K, Zhang Y, Cheng HD, Xing P. MSF-GAN: Multi-Scale Fuzzy Generative Adversarial Network for Breast Ultrasound Image Segmentation. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:3193-3196. PMID: 34891920. DOI: 10.1109/embc46164.2021.9630108.
Abstract
Automatic breast ultrasound (BUS) image segmentation is still a challenging task due to poor image quality and inherent speckle noise. In this paper, we propose a novel multi-scale fuzzy generative adversarial network (MSF-GAN) for breast ultrasound image segmentation. The proposed MSF-GAN consists of two networks: a generative network that generates segmentation maps for input BUS images, and a discriminative network that employs a multi-scale fuzzy (MSF) entropy module for discrimination. The major contribution of this paper is applying fuzzy logic and fuzzy entropy in the discriminative network, which can distinguish the uncertainty of segmentation maps from that of ground-truth maps and forces the generative network to achieve better segmentation performance. We evaluate the performance of MSF-GAN on three BUS datasets and compare it with six state-of-the-art deep neural network-based methods in terms of five metrics. MSF-GAN achieves the highest mean IoU of 78.75%, 73.30%, and 71.12% on the three datasets, respectively.
18. Gómez Ó, Mesejo P, Ibáñez Ó, Cordón Ó. Deep architectures for the segmentation of frontal sinuses in X-ray images: Towards an automatic forensic identification system in comparative radiography. Neurocomputing 2021. DOI: 10.1016/j.neucom.2020.10.116.
19. Zou H, Gong X, Luo J, Li T. A robust breast ultrasound segmentation method under noisy annotations. Comput Methods Programs Biomed 2021; 209:106327. PMID: 34428680. DOI: 10.1016/j.cmpb.2021.106327.
Abstract
BACKGROUND AND OBJECTIVE Large-scale training data and accurate annotations are fundamental for current segmentation networks. However, the characteristic artifacts of ultrasound images, such as attenuation, speckle, shadows, and signal dropout, make the annotation task complicated. Further complications arise because the contrast between the region of interest and the background is often low. Without double-checking by professionals, it is hard to guarantee that there are no noisy annotations in segmentation datasets. However, none of the deep learning methods applied to ultrasound segmentation so far can solve this problem. METHOD Given a dataset with poorly labeled masks, including a certain amount of noise, we propose an end-to-end noisy annotation tolerance network (NAT-Net). NAT-Net can detect noise via the proposed noise index (NI) and dynamically correct noisy annotations during the training stage. Simultaneously, the noise index is used to correct the noise along with the output of the learning model. This method does not need any auxiliary clean datasets or prior knowledge of noise distributions, so it is more general, more robust, and easier to apply than existing methods. RESULTS NAT-Net outperforms previous state-of-the-art methods on synthesized data with different noise ratios. For a real-world dataset with more complex noise types, the IoU of NAT-Net is higher than that of state-of-the-art approaches by nearly 6%. Experimental results show that our method also achieves good results compared with existing methods on a clean dataset. CONCLUSION NAT-Net reduces the manual interaction required for data annotation and the dependence on medical personnel. After tumor segmentation, disease diagnosis efficiency is improved, providing an auxiliary strategy for subsequent ultrasound-based medical diagnosis systems.
Affiliation(s)
- Haipeng Zou
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, Sichuan, China.
- Xun Gong
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, Sichuan, China.
- Jun Luo
- Sichuan Academy of Medical Sciences Sichuan Provincial Peoples Hospital, Chengdu, Sichuan, China.
- Tianrui Li
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, Sichuan, China.
20. Wu Y, Zhang R, Zhu L, Wang W, Wang S, Xie H, Cheng G, Wang FL, He X, Zhang H. BGM-Net: Boundary-Guided Multiscale Network for Breast Lesion Segmentation in Ultrasound. Front Mol Biosci 2021; 8:698334. PMID: 34350211. PMCID: PMC8326799. DOI: 10.3389/fmolb.2021.698334.
Abstract
Automatic and accurate segmentation of breast lesion regions from ultrasonography is an essential step for ultrasound-guided diagnosis and treatment. However, developing a desirable segmentation method is very difficult due to strong imaging artifacts in breast ultrasound images, e.g., speckle noise, low contrast, and intensity inhomogeneity. To solve this problem, this paper proposes a novel boundary-guided multiscale network (BGM-Net) to boost the performance of breast lesion segmentation from ultrasound images based on the feature pyramid network (FPN). First, we develop a boundary-guided feature enhancement (BGFE) module to enhance the feature map of each FPN layer by learning a boundary map of breast lesion regions. The BGFE module improves the boundary detection capability of the FPN framework so that weak boundaries in ambiguous regions can be correctly identified. Second, we design a multiscale scheme to leverage information from different image scales in order to tackle ultrasound artifacts. Specifically, we downsample each testing image into a coarse counterpart, and both the testing image and its coarse counterpart are input into BGM-Net to predict fine and coarse segmentation maps, respectively. The segmentation result is then produced by fusing the fine and coarse segmentation maps, so that breast lesion regions are accurately segmented from ultrasound images and false detections are effectively removed owing to boundary feature enhancement and multiscale image information. We validate the performance of the proposed approach on two challenging breast ultrasound datasets, and experimental results demonstrate that our approach outperforms state-of-the-art methods.
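The multiscale scheme described above (predict on the full-resolution image and a downsampled copy, then fuse the two maps) can be sketched as follows; the averaging fusion and the 0.5 scale factor are assumptions for illustration, and `model` stands in for any segmentation network returning logits:

```python
import torch
import torch.nn.functional as F

def multiscale_predict(model, image, coarse_scale=0.5):
    """Run a segmentation model on the full-resolution image and a downsampled copy,
    then fuse the two probability maps at full resolution (simple averaging here)."""
    fine = torch.sigmoid(model(image))
    coarse_in = F.interpolate(image, scale_factor=coarse_scale,
                              mode="bilinear", align_corners=False)
    coarse = torch.sigmoid(model(coarse_in))
    coarse_up = F.interpolate(coarse, size=image.shape[-2:],
                              mode="bilinear", align_corners=False)
    return 0.5 * (fine + coarse_up)
```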
Affiliation(s)
- Yunzhu Wu
- Department of Ultrasound, Shenzhen People’s Hospital, The Second Clinical College of Jinan University, Shenzhen, China
- Ruoxin Zhang
- Department of Gastroenterology, The First Affiliated Hospital of Guangdong Pharmaceutical University, Guangzhou, China
- Lei Zhu
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, United Kingdom
- Weiming Wang
- School of Science and Technology, The Open University of Hong Kong, Hong Kong, China
- Shengwen Wang
- Department of Neurosurgery, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, China
- Haoran Xie
- Department of Computing and Decision Sciences, Lingnan University, Hong Kong, China
- Gary Cheng
- Department of Mathematics and Information Technology, The Education University of Hong Kong, Hong Kong, China
- Fu Lee Wang
- School of Science and Technology, The Open University of Hong Kong, Hong Kong, China
- Xingxiang He
- Department of Gastroenterology, The First Affiliated Hospital of Guangdong Pharmaceutical University, Guangzhou, China
- Hai Zhang
- Department of Ultrasound, Shenzhen People’s Hospital, The Second Clinical College of Jinan University, Shenzhen, China
- The First Affiliated Hospital of Southern University of Science and Technology, Shenzhen, China
21. Mei Y, Jin H, Yu B, Wu E, Yang K. Visual geometry Group-UNet: Deep learning ultrasonic image reconstruction for curved parts. J Acoust Soc Am 2021; 149:2997. PMID: 34241089. DOI: 10.1121/10.0004827.
Abstract
Detecting small defects in curved parts through classical monostatic pulse-echo ultrasonic imaging is known to be a challenge. Hence, a robot-assisted ultrasonic testing system with a track-scan imaging method is studied to improve the detection coverage and contrast of ultrasonic images. To further improve the image resolution, we propose a visual geometry group-UNet (VGG-UNet) deep learning network to optimize the ultrasonic images reconstructed by the track-scan imaging method. VGG-UNet uses VGG to extract advanced information from ultrasonic images and takes advantage of UNet for small-dataset segmentation. A comparison of the reconstructed images on the simulation dataset with the ground truth reveals that the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) can reach 39 dB and 0.99, respectively. Meanwhile, the trained network is also robust against noise and environmental factors according to the experimental results, in which the PSNR and SSIM reach 32 dB and 0.99, respectively. The resolution of ultrasonic images reconstructed by the track-scan imaging method is increased approximately tenfold. All the results verify that the proposed method can improve the resolution of reconstructed ultrasonic images with high computational efficiency.
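PSNR, one of the two fidelity measures quoted above, has a simple closed form; a minimal NumPy sketch is shown below (SSIM is more involved and is available, for example, as skimage.metrics.structural_similarity):

```python
import numpy as np

def psnr(reference: np.ndarray, reconstruction: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference image and its reconstruction."""
    mse = np.mean((reference.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((data_range ** 2) / mse)
```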
Affiliation(s)
- Yujian Mei
- State Key Laboratory of Fluid Power and Mechatronic Systems, Zhejiang University, Hangzhou 310027, People's Republic of China
- Haoran Jin
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Bei Yu
- State Key Laboratory of Fluid Power and Mechatronic Systems, Zhejiang University, Hangzhou 310027, People's Republic of China
- Eryong Wu
- State Key Laboratory of Fluid Power and Mechatronic Systems, Zhejiang University, Hangzhou 310027, People's Republic of China
| | - Keji Yang
- State Key Laboratory of Fluid Power and Mechatronic Systems, Zhejiang University, Hangzhou 310027, People's Republic of China
| |
Collapse
|
22
|
Tumor classification in automated breast ultrasound (ABUS) based on a modified extracting feature network. Comput Med Imaging Graph 2021; 90:101925. [PMID: 33915383 DOI: 10.1016/j.compmedimag.2021.101925] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2020] [Revised: 01/29/2021] [Accepted: 04/05/2021] [Indexed: 11/24/2022]
Abstract
The scanning mechanism of Automated Breast Ultrasound (ABUS) yields consistent images, which gives it unique advantages for breast tumor classification with artificial intelligence. This paper proposes a deep learning method for classifying benign and malignant breast tumors from ABUS sequences. First, Images of Interest (IOI) are extracted and Regions of Interest (ROI) are cropped from the ABUS sequence by two preprocessing deep learning models, an Extracting-IOI model and a Cropping-ROI model. Then, we propose a Shallowly Dilated Convolutional Branch Network (SDCB-Net) and combine it with a VGG16 transfer-learning network to construct a new Shared Extracting Feature Network (SEF-Net) that extracts ROI sequence features. Finally, the correlation features of the ABUS images are extracted and integrated by a GRU Classified Network (GRUC-Net) to achieve accurate breast tumor classification. The final results show a test-set accuracy of 92.86% for classifying benign and malignant ABUS sequences. The method is not only highly accurate but also greatly improves the speed and efficiency of breast tumor classification, which is clinically significant because it allows breast tumors to be detected in more women in a timely manner.
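As a hedged sketch of the general pattern this abstract describes (a shared CNN feature extractor applied to each ROI in the sequence, followed by a GRU that aggregates features for a benign/malignant prediction), the snippet below uses torchvision's VGG16 as a stand-in for SEF-Net. The class name, feature sizes, and sequence length are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch only: per-frame CNN features + GRU sequence classifier.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class SequenceTumorClassifier(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=256, num_classes=2):
        super().__init__()
        self.backbone = vgg16(weights=None).features        # convolutional feature extractor
        self.pool = nn.AdaptiveAvgPool2d(1)                  # (B*T, 512, h, w) -> (B*T, 512, 1, 1)
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):                                     # x: (B, T, 3, H, W) ROI sequence
        b, t = x.shape[:2]
        feats = self.backbone(x.flatten(0, 1))                # run the CNN on every frame
        feats = self.pool(feats).flatten(1).view(b, t, -1)    # (B, T, 512)
        _, h_n = self.gru(feats)                              # final hidden state summarizes the sequence
        return self.head(h_n[-1])                             # (B, num_classes)

logits = SequenceTumorClassifier()(torch.randn(2, 8, 3, 224, 224))
```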
Collapse
|
23
|
Abd El Kader I, Xu G, Shuai Z, Saminu S, Javaid I, Salim Ahmad I. Differential Deep Convolutional Neural Network Model for Brain Tumor Classification. Brain Sci 2021; 11:352. [PMID: 33801994 PMCID: PMC8001442 DOI: 10.3390/brainsci11030352] [Citation(s) in RCA: 35] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2021] [Revised: 03/01/2021] [Accepted: 03/03/2021] [Indexed: 02/02/2023] Open
Abstract
The classification of brain tumors is a difficult task in the field of medical image analysis. Improved algorithms and machine learning technology help radiologists diagnose tumors without surgical intervention. In recent years, deep learning techniques have made excellent progress in the field of medical image processing and analysis. However, classifying brain tumors in magnetic resonance imaging remains difficult: first, because of the complexity of brain structure and the intertwining of tissues within it, and second, because of the high-density nature of the brain. We propose a differential deep convolutional neural network model (differential deep-CNN) to classify different types of brain tumor, including abnormal and normal magnetic resonance (MR) images. Using differential operators in the differential deep-CNN architecture, we derive additional differential feature maps from the original CNN feature maps. This derivation improves the performance of the proposed approach according to the evaluation parameters used. The advantages of the differential deep-CNN model are its analysis of directional pixel patterns in images using contrast calculations and its ability to classify a large database of images with high accuracy and without technical problems. Therefore, the proposed approach gives an excellent overall performance. To train and test the model, we used a dataset of 25,000 brain magnetic resonance imaging (MRI) images, including abnormal and normal images. The experimental results showed that the proposed model achieved an accuracy of 99.25%. This study demonstrates that the proposed differential deep-CNN model can be used to facilitate the automatic classification of brain tumors.
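To make the idea of deriving extra "differential" feature maps concrete, here is a hedged sketch that applies fixed derivative (Sobel-style) kernels depthwise to existing CNN feature maps and concatenates the results with the originals. The kernels, shapes, and function name are illustrative; the paper's actual differential operators may differ.

```python
# Sketch only: append x- and y-derivative maps to a stack of CNN feature maps.
import torch
import torch.nn.functional as F

def differential_maps(feature_maps):
    """feature_maps: (B, C, H, W) -> (B, 3*C, H, W) with derivative maps appended."""
    sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    sobel_y = sobel_x.t()
    c = feature_maps.shape[1]
    kx = sobel_x.expand(c, 1, 3, 3).contiguous()     # one kernel per channel (depthwise conv)
    ky = sobel_y.expand(c, 1, 3, 3).contiguous()
    dx = F.conv2d(feature_maps, kx, padding=1, groups=c)
    dy = F.conv2d(feature_maps, ky, padding=1, groups=c)
    return torch.cat([feature_maps, dx, dy], dim=1)

out = differential_maps(torch.randn(1, 16, 32, 32))  # -> (1, 48, 32, 32)
```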
Collapse
Affiliation(s)
- Isselmou Abd El Kader
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin 300130, China; (Z.S.); (S.S.); (I.J.); (I.S.A.)
| | - Guizhi Xu
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin 300130, China; (Z.S.); (S.S.); (I.J.); (I.S.A.)
| | | | | | | | | |
Collapse
|
24
|
Xue C, Zhu L, Fu H, Hu X, Li X, Zhang H, Heng PA. Global guidance network for breast lesion segmentation in ultrasound images. Med Image Anal 2021; 70:101989. [PMID: 33640719 DOI: 10.1016/j.media.2021.101989] [Citation(s) in RCA: 45] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2020] [Revised: 01/28/2021] [Accepted: 01/29/2021] [Indexed: 12/01/2022]
Abstract
Automatic breast lesion segmentation in ultrasound helps to diagnose breast cancer, one of the dreadful diseases that affect women globally. Segmenting breast regions accurately from ultrasound images is a challenging task due to the inherent speckle artifacts, blurry breast lesion boundaries, and inhomogeneous intensity distributions inside the breast lesion regions. Recently, convolutional neural networks (CNNs) have demonstrated remarkable results in medical image segmentation tasks. However, the convolutional operations in a CNN often focus on local regions, which limits their ability to capture long-range dependencies in the input ultrasound image and degrades breast lesion segmentation accuracy. In this paper, we develop a deep convolutional neural network equipped with a global guidance block (GGB) and breast lesion boundary detection (BD) modules for boosting breast ultrasound lesion segmentation. The GGB utilizes the multi-layer integrated feature map as guidance information to learn long-range non-local dependencies from both the spatial and channel domains. The BD modules learn an additional breast lesion boundary map to refine the boundary quality of the segmentation result. Experimental results on a public dataset and a collected dataset show that our network outperforms other medical image segmentation methods and recent semantic segmentation methods on breast ultrasound lesion segmentation. Moreover, we also show the application of our network to ultrasound prostate segmentation, where our method identifies prostate regions better than state-of-the-art networks.
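The "long-range non-local dependencies" this abstract refers to are usually realized with a self-attention-style block. Below is a hedged, generic spatial non-local block for intuition; it is not the authors' exact GGB design, and the class name and channel sizes are illustrative.

```python
# Sketch only: a simplified spatial non-local (self-attention) block with a residual connection.
import torch
import torch.nn as nn

class SpatialNonLocal(nn.Module):
    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or channels // 2
        self.query = nn.Conv2d(channels, reduced, 1)
        self.key = nn.Conv2d(channels, reduced, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x):                                 # x: (B, C, H, W)
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)      # (B, HW, C')
        k = self.key(x).flatten(2)                        # (B, C', HW)
        attn = self.softmax(q @ k)                        # (B, HW, HW) pairwise affinities
        v = self.value(x).flatten(2).transpose(1, 2)      # (B, HW, C)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out                                     # residual connection

y = SpatialNonLocal(64)(torch.randn(1, 64, 32, 32))
```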
Collapse
Affiliation(s)
- Cheng Xue
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
| | - Lei Zhu
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, United Kingdom.
| | - Huazhu Fu
- Inception Institute of Artificial Intelligence, Abu Dhabi, UAE
| | - Xiaowei Hu
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
| | - Xiaomeng Li
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
| | - Hai Zhang
- Shenzhen People's Hospital, The Second Clinical College of Jinan University, The First Affiliated Hospital of Southern University of Science and Technology, Guangdong Province, China
| | - Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong; Shenzhen Key Laboratory of Virtual Reality and Human Interaction Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
| |
Collapse
|
25
|
Zhou Y, Chen H, Li Y, Liu Q, Xu X, Wang S, Yap PT, Shen D. Multi-task learning for segmentation and classification of tumors in 3D automated breast ultrasound images. Med Image Anal 2020; 70:101918. [PMID: 33676100 DOI: 10.1016/j.media.2020.101918] [Citation(s) in RCA: 87] [Impact Index Per Article: 21.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Revised: 11/22/2020] [Accepted: 11/23/2020] [Indexed: 12/12/2022]
Abstract
Tumor classification and segmentation are two important tasks for computer-aided diagnosis (CAD) using 3D automated breast ultrasound (ABUS) images. However, they are challenging due to the significant shape variation of breast tumors and the fuzzy nature of ultrasound images (e.g., low contrast and low signal-to-noise ratio). Considering the correlation between tumor classification and segmentation, we argue that learning these two tasks jointly can improve the outcomes of both. In this paper, we propose a novel multi-task learning framework for joint segmentation and classification of tumors in ABUS images. The proposed framework consists of two sub-networks: an encoder-decoder network for segmentation and a lightweight multi-scale network for classification. To account for the fuzzy boundaries of tumors in ABUS images, our framework uses an iterative training strategy to refine feature maps with the help of probability maps obtained from previous iterations. Experimental results based on a clinical dataset of 170 3D ABUS volumes collected from 107 patients indicate that the proposed multi-task framework improves tumor segmentation and classification over the single-task learning counterparts.
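A hedged sketch of the joint training idea described above follows: one combined loss with a segmentation term and a classification term, and the previous iteration's probability map concatenated with the input for refinement. The `model` is assumed to accept the concatenated input and return a `(seg_logits, cls_logits)` pair; all names and the loss weighting are illustrative.

```python
# Sketch only: multi-task (segmentation + classification) step with iterative refinement.
import torch
import torch.nn.functional as F

def dice_loss(pred_logits, target, eps=1e-6):
    pred = torch.sigmoid(pred_logits)
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1 - ((2 * inter + eps) / (union + eps)).mean()

def multitask_step(model, volume, mask, label, prev_prob, lam=0.5):
    # Feed back the previous iteration's probability map as an extra input channel.
    x = torch.cat([volume, prev_prob], dim=1)
    seg_logits, cls_logits = model(x)                      # assumed model interface
    loss = dice_loss(seg_logits, mask) + lam * F.cross_entropy(cls_logits, label)
    new_prob = torch.sigmoid(seg_logits).detach()          # probability map for the next pass
    return loss, new_prob
```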
Collapse
Affiliation(s)
- Yue Zhou
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China; Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, 27599, USA
| | - Houjin Chen
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China.
| | - Yanfeng Li
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
| | - Qin Liu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, 27599, USA
| | - Xuanang Xu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, 27599, USA
| | - Shu Wang
- Peking University People's Hospital, Beijing 100044, China
| | - Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, 27599, USA.
| | - Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea.
| |
Collapse
|
26
|
|
27
|
Shen CC, Yang JE. Estimation of Ultrasound Echogenicity Map from B-Mode Images Using Convolutional Neural Network. SENSORS (BASEL, SWITZERLAND) 2020; 20:s20174931. [PMID: 32878199 PMCID: PMC7506733 DOI: 10.3390/s20174931] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/06/2020] [Revised: 08/24/2020] [Accepted: 08/29/2020] [Indexed: 06/11/2023]
Abstract
In ultrasound B-mode imaging, speckle noise decreases the accuracy of estimating the tissue echogenicity of imaged targets from the amplitude of the echo signals. In addition, since the granular size of the speckle pattern is affected by the point spread function (PSF) of the imaging system, the resolution of the B-mode image remains limited, and the boundaries of tissue structures often become blurred. This study proposed a convolutional neural network (CNN) to remove speckle noise and improve image spatial resolution in order to reconstruct an ultrasound tissue echogenicity map. The CNN model is trained using an in silico simulation dataset and tested with experimentally acquired images. Results indicate that the proposed CNN method can effectively eliminate the speckle noise in the background of the B-mode images while retaining the contours and edges of the tissue structures. The contrast and the contrast-to-noise ratio of the reconstructed echogenicity map increased from 0.22/2.72 to 0.33/44.14, and the lateral and axial resolutions also improved from 5.9/2.4 to 2.9/2.0, respectively. Compared with other post-processing filtering methods, the proposed CNN method provides a better approximation to the original tissue echogenicity by completely removing speckle noise and improving the image resolution, together with the capability for real-time implementation.
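For intuition about the contrast and contrast-to-noise ratio (CNR) figures quoted above, here is a hedged sketch of one common way these metrics are computed between a target region and the background of a B-mode image. The exact definitions used in the paper may differ; the image and region masks below are placeholders.

```python
# Sketch only: contrast and CNR between a target region and background region.
import numpy as np

def contrast_and_cnr(image, target_mask, background_mask):
    t = image[target_mask]
    b = image[background_mask]
    contrast = abs(t.mean() - b.mean()) / (t.mean() + b.mean())       # one common definition
    cnr = abs(t.mean() - b.mean()) / np.sqrt(t.var() + b.var())
    return contrast, cnr

img = np.random.rand(128, 128)                                         # placeholder B-mode image
tgt = np.zeros_like(img, dtype=bool); tgt[40:60, 40:60] = True         # placeholder target ROI
bg = np.zeros_like(img, dtype=bool); bg[90:110, 90:110] = True         # placeholder background ROI
print(contrast_and_cnr(img, tgt, bg))
```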
Collapse
|
28
|
Lei B, Huang S, Li H, Li R, Bian C, Chou YH, Qin J, Zhou P, Gong X, Cheng JZ. Self-co-attention neural network for anatomy segmentation in whole breast ultrasound. Med Image Anal 2020; 64:101753. [DOI: 10.1016/j.media.2020.101753] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2020] [Revised: 05/27/2020] [Accepted: 06/06/2020] [Indexed: 11/25/2022]
|
29
|
Zhang E, Seiler S, Chen M, Lu W, Gu X. BIRADS features-oriented semi-supervised deep learning for breast ultrasound computer-aided diagnosis. Phys Med Biol 2020; 65:125005. [PMID: 32155605 DOI: 10.1088/1361-6560/ab7e7d] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
We propose a novel BIRADS-SSDL network that integrates clinically approved breast lesion characteristics (BIRADS features) into task-oriented semi-supervised deep learning (SSDL) for accurate diagnosis of ultrasound (US) images with a small training dataset. Breast US images are converted to BIRADS-oriented feature maps (BFMs) using a distance transformation coupled with a Gaussian filter. Then, the converted BFMs are used as the input of an SSDL network, which performs unsupervised stacked convolutional auto-encoder (SCAE) image reconstruction guided by lesion classification. This integrated multi-task learning allows the SCAE to extract image features under the constraints of the lesion classification task, while lesion classification is achieved by feeding the SCAE encoder features to a convolutional network. We trained the BIRADS-SSDL network with an alternating learning strategy that balances the reconstruction error and the classification label prediction error. To demonstrate the effectiveness of our approach, we evaluated it using two breast US image datasets. We compared the performance of the BIRADS-SSDL network with conventional SCAE and SSDL methods that use the original images as inputs, as well as with an SCAE that uses BFMs as inputs. The experimental results on the two breast US datasets show that BIRADS-SSDL ranked best among the four networks, with classification accuracies of around 94.23 ± 3.33% and 84.38 ± 3.11% on the two datasets. In experiments across the two datasets, collected from two different institutions and US devices, the developed BIRADS-SSDL generalized across the different US devices and institutions without overfitting to a single dataset and achieved satisfactory results. Furthermore, we investigated the performance of the proposed method by varying the model training strategies, lesion boundary accuracy, and Gaussian filter parameters. The experimental results showed that a pre-training strategy can help speed up model convergence during training but does not improve classification accuracy on the testing dataset. The classification accuracy decreases as the segmentation accuracy decreases. The proposed BIRADS-SSDL achieves the best results among the compared methods in each case and has the capacity to handle multiple different datasets under one model. Compared with state-of-the-art methods, BIRADS-SSDL could be promising for effective breast US computer-aided diagnosis using small datasets.
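The BFM conversion step (a distance transformation coupled with a Gaussian filter) can be sketched with standard SciPy tools as below. This is only an illustration of the general operation; the paper's exact mapping from lesion boundary to BFM, the filter parameters, and the function name are assumptions.

```python
# Sketch only: boundary-oriented feature map from a lesion mask via a signed
# distance transform followed by Gaussian smoothing.
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def boundary_feature_map(lesion_mask, sigma=3.0):
    """lesion_mask: binary 2D array (1 inside the lesion, 0 outside)."""
    dist_inside = distance_transform_edt(lesion_mask)        # distance to boundary, inside
    dist_outside = distance_transform_edt(1 - lesion_mask)   # distance to boundary, outside
    signed = dist_inside - dist_outside                      # signed distance map
    return gaussian_filter(signed, sigma=sigma)              # smoothed boundary-oriented map

mask = np.zeros((128, 128)); mask[40:80, 50:90] = 1          # placeholder lesion mask
bfm = boundary_feature_map(mask)
```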
Collapse
Affiliation(s)
- Erlei Zhang
- College of Information Science and Technology, Northwest University, Xi'an 710069, People's Republic of China; Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
| | | | | | | | | |
Collapse
|
30
|
Chan HP, Samala RK, Hadjiiski LM. CAD and AI for breast cancer-recent development and challenges. Br J Radiol 2020; 93:20190580. [PMID: 31742424 PMCID: PMC7362917 DOI: 10.1259/bjr.20190580] [Citation(s) in RCA: 80] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2019] [Revised: 11/13/2019] [Accepted: 11/17/2019] [Indexed: 12/15/2022] Open
Abstract
Computer-aided diagnosis (CAD) has been a popular area of research and development in the past few decades. In CAD, machine learning methods and multidisciplinary knowledge and techniques are used to analyze patient information, and the results can be used to assist clinicians in their decision-making process. CAD may analyze imaging information alone or in combination with other clinical data. It may provide the analyzed information directly to the clinician or correlate the analyzed results with the likelihood of certain diseases based on statistical modeling of past cases in the population. CAD systems can be developed to provide decision support for many applications in patient care, such as lesion detection, characterization, cancer staging, treatment planning and response assessment, and recurrence and prognosis prediction. The new state-of-the-art machine learning technique, known as deep learning (DL), has revolutionized speech and text recognition as well as computer vision. The potential for major breakthroughs by DL in medical image analysis and other CAD applications for patient care has brought about unprecedented excitement about applying CAD, or artificial intelligence (AI), to medicine in general and to radiology in particular. In this paper, we provide an overview of recent developments in CAD using DL in breast imaging and discuss some challenges and practical issues that may impact the advancement of artificial intelligence and its integration into clinical workflow.
Collapse
Affiliation(s)
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, MI, United States
| | - Ravi K. Samala
- Department of Radiology, University of Michigan, Ann Arbor, MI, United States
| | | |
Collapse
|
31
|
Ding H, Pan Z, Cen Q, Li Y, Chen S. Multi-scale fully convolutional network for gland segmentation using three-class classification. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.10.097] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
32
|
|