1. Oza U, Gohel B, Kumar P, Oza P. Presegmenter Cascaded Framework for Mammogram Mass Segmentation. Int J Biomed Imaging 2024; 2024:9422083. PMID: 39155940; PMCID: PMC11329304; DOI: 10.1155/2024/9422083.
Abstract
Accurate segmentation of breast masses in mammogram images is essential for early cancer diagnosis and treatment planning. Several deep learning (DL) models have been proposed for whole mammogram segmentation and mass patch/crop segmentation. However, current DL models for breast mammogram mass segmentation face several limitations, including false positives (FPs), false negatives (FNs), and challenges with the end-to-end approach. This paper presents a novel two-stage end-to-end cascaded breast mass segmentation framework that incorporates a saliency map of potential mass regions to guide the DL models for breast mass segmentation. The first-stage segmentation model of the cascade framework is used to generate a saliency map to establish a coarse region of interest (ROI), effectively narrowing the focus to probable mass regions. The proposed presegmenter attention (PSA) blocks are introduced in the second-stage segmentation model to enable dynamic adaptation to the most informative regions within the mammogram images based on the generated saliency map. Comparative analysis of the Attention U-net model with and without the cascade framework is provided in terms of dice scores, precision, recall, FP rates (FPRs), and FN outcomes. Experimental results consistently demonstrate enhanced breast mass segmentation performance by the proposed cascade framework across all three datasets: INbreast, CSAW-S, and DMID. The cascade framework shows superior segmentation performance by improving the dice score by about 6% for the INbreast dataset, 3% for the CSAW-S dataset, and 2% for the DMID dataset. Similarly, the FN outcomes were reduced by 10% for the INbreast dataset, 19% for the CSAW-S dataset, and 4% for the DMID dataset. Moreover, the proposed cascade framework's performance is validated with varying state-of-the-art segmentation models such as DeepLabV3+ and Swin transformer U-net. The presegmenter cascade framework has the potential to improve segmentation performance and mitigate FNs when integrated with any medical image segmentation framework, irrespective of the choice of the model.
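The comparison above rests on standard overlap metrics between predicted and ground-truth masks (Dice, precision, recall, FPR). As a point of reference, a minimal NumPy sketch of those metrics is shown below; the array names and toy masks are illustrative and not taken from the paper.

```python
import numpy as np

def mask_metrics(pred, target, eps=1e-7):
    """Dice, precision, recall, and false-positive rate for binary masks.

    pred, target: 0/1 arrays of the same shape (illustrative names,
    not the paper's code)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    fpr = fp / (fp + tn + eps)
    return {"dice": dice, "precision": precision, "recall": recall, "fpr": fpr}

# toy 4x4 example
pred = np.array([[0, 1, 1, 0]] * 4)
gt = np.array([[0, 1, 0, 0]] * 4)
print(mask_metrics(pred, gt))
```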
Affiliation(s)
- Urvi Oza
- Computer Science, Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, Gujarat, India
- Bakul Gohel
- Computer Science, Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, Gujarat, India
- Pankaj Kumar
- Computer Science & Engineering, Nirma University, Ahmedabad, Gujarat, India
- Parita Oza
- Computer Science & Engineering, Nirma University, Ahmedabad, Gujarat, India
2. Nalla V, Pouriyeh S, Parizi RM, Trivedi H, Sheng QZ, Hwang I, Seyyed-Kalantari L, Woo M. Deep learning for computer-aided abnormalities classification in digital mammogram: A data-centric perspective. Curr Probl Diagn Radiol 2024; 53:346-352. PMID: 38302303; DOI: 10.1067/j.cpradiol.2024.01.007.
Abstract
Breast cancer is the most common type of cancer in women, and early abnormality detection using mammography can significantly improve breast cancer survival rates. Diverse datasets are required to improve the training and validation of deep learning (DL) systems for autonomous breast cancer diagnosis. However, only a small number of mammography datasets are publicly available. This constraint has created challenges when comparing different DL models using the same dataset. The primary contribution of this study is the comprehensive description of a selection of currently available public mammography datasets. The information available on publicly accessible datasets is summarized and their usability reviewed to enable more effective models to be developed for breast cancer detection and to improve understanding of existing models trained using these datasets. This study aims to bridge the existing knowledge gap by offering researchers and practitioners a valuable resource to develop and assess DL models in breast cancer diagnosis.
Affiliation(s)
- Vineela Nalla
- Department of Information Technology, Kennesaw State University, Kennesaw, Georgia, USA
- Seyedamin Pouriyeh
- Department of Information Technology, Kennesaw State University, Kennesaw, Georgia, USA
- Reza M Parizi
- Decentralized Science Lab, Kennesaw State University, Marietta, GA, USA
- Hari Trivedi
- Department of Radiology and Imaging Services, Emory University, Atlanta, Georgia, USA
- Quan Z Sheng
- School of Computing, Macquarie University, Sydney, Australia
- Inchan Hwang
- School of Data Science and Analytics, Kennesaw State University, Kennesaw, Georgia, USA
- Laleh Seyyed-Kalantari
- Department of Electrical Engineering and Computer Science, York University, Toronto, Ontario, Canada
- MinJae Woo
- School of Data Science and Analytics, Kennesaw State University, Kennesaw, Georgia, USA
3. Zhong Y, Piao Y, Tan B, Liu J. A multi-task fusion model based on a residual-Multi-layer perceptron network for mammographic breast cancer screening. Comput Methods Programs Biomed 2024; 247:108101. PMID: 38432087; DOI: 10.1016/j.cmpb.2024.108101.
Abstract
BACKGROUND AND OBJECTIVE: Deep learning approaches are being increasingly applied for medical computer-aided diagnosis (CAD). However, these methods generally target only specific image-processing tasks, such as lesion segmentation or benign state prediction. For the breast cancer screening task, single feature extraction models are generally used, which directly extract only those potential features from the input mammogram that are relevant to the target task. This can lead to the neglect of other important morphological features of the lesion as well as other auxiliary information from the internal breast tissue. To obtain more comprehensive and objective diagnostic results, in this study, we developed a multi-task fusion model that combines multiple specific tasks for CAD of mammograms. METHODS: We first trained a set of separate, task-specific models, including a density classification model, a mass segmentation model, and a lesion benignity-malignancy classification model, and then developed a multi-task fusion model that incorporates all of the mammographic features from these different tasks to yield comprehensive and refined prediction results for breast cancer diagnosis. RESULTS: The experimental results showed that our proposed multi-task fusion model outperformed other related state-of-the-art models in both breast cancer screening tasks in the publicly available datasets CBIS-DDSM and INbreast, achieving a competitive screening performance with area-under-the-curve scores of 0.92 and 0.95, respectively. CONCLUSIONS: Our model not only allows an overall assessment of lesion types in mammography but also provides intermediate results related to radiological features and potential cancer risk factors, indicating its potential to offer comprehensive workflow support to radiologists.
Affiliation(s)
- Yutong Zhong
- School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun 130022, PR China
- Yan Piao
- School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun 130022, PR China
- Baolin Tan
- Technology Co. LTD, Shenzhen 518000, PR China
- Jingxin Liu
- Department of Radiology, China-Japan Union Hospital, Jilin University, Changchun 130033, PR China
4. Oza P, Oza U, Oza R, Sharma P, Patel S, Kumar P, Gohel B. Digital mammography dataset for breast cancer diagnosis research (DMID) with breast mass segmentation analysis. Biomed Eng Lett 2024; 14:317-330. PMID: 38374902; PMCID: PMC10874363; DOI: 10.1007/s13534-023-00339-y.
Abstract
Purpose: In the last two decades, computer-aided detection and diagnosis (CAD) systems have been created to help radiologists discover and diagnose lesions observed on breast imaging tests. These systems can serve as a second-opinion tool for the radiologist. However, developing algorithms for identifying and diagnosing breast lesions relies heavily on mammographic datasets. Many existing databases do not meet all the needs of research and study, such as mammographic masks, radiology reports, breast composition, etc. This paper aims to introduce and describe a new mammographic database. Methods: The proposed dataset comprises mammograms with several types of lesions, such as masses, calcifications, architectural distortions, and asymmetries. In addition, a radiologist report is provided for each mammogram, describing details of the breast such as breast density, a description of any abnormality present, and the condition of the skin, nipple, and pectoral muscles. Results: We present results of a commonly used segmentation framework trained on our proposed dataset. We used the information regarding the class of abnormalities (benign or malignant) and breast tissue density provided with each mammogram to analyze the segmentation model's performance with respect to these parameters. Conclusion: The presented dataset provides diverse mammogram images for developing and training models for breast cancer diagnosis applications.
Affiliation(s)
- Urvi Oza
- Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, India
- Rajiv Oza
- Rad Imaging, X-Ray and Sonography Clinic, Ahmedabad, India
- Paawan Sharma
- Pandit Deendayal Energy University, Gandhinagar, India
- Samir Patel
- Pandit Deendayal Energy University, Gandhinagar, India
- Bakul Gohel
- Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, India
5. Wang L. Mammography with deep learning for breast cancer detection. Front Oncol 2024; 14:1281922. PMID: 38410114; PMCID: PMC10894909; DOI: 10.3389/fonc.2024.1281922.
Abstract
X-ray mammography is currently considered the gold standard method for breast cancer screening; however, it has limitations in terms of sensitivity and specificity. With the rapid advancements in deep learning techniques, it is possible to customize mammography for each patient, providing more accurate information for risk assessment, prognosis, and treatment planning. This paper aims to study the recent achievements of deep learning-based mammography for breast cancer detection and classification. This review highlights the potential of deep learning-assisted X-ray mammography to improve the accuracy of breast cancer screening. While the potential benefits are clear, it is essential to address the challenges associated with implementing this technology in clinical settings. Future research should focus on refining deep learning algorithms, ensuring data privacy, improving model interpretability, and establishing generalizability in order to successfully integrate deep learning-assisted mammography into routine breast cancer screening programs. It is hoped that the research findings will assist investigators, engineers, and clinicians in developing more effective breast imaging tools that provide accurate diagnosis, sensitivity, and specificity for breast cancer.
Affiliation(s)
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen, China
6. Aliniya P, Nicolescu M, Nicolescu M, Bebis G. Improved Loss Function for Mass Segmentation in Mammography Images Using Density and Mass Size. J Imaging 2024; 10:20. PMID: 38249005; PMCID: PMC10816853; DOI: 10.3390/jimaging10010020.
Abstract
Mass segmentation is one of the fundamental tasks used when identifying breast cancer due to the comprehensive information it provides, including the location, size, and border of the masses. Despite significant improvements in performance on this task, certain properties of the data, such as pixel class imbalance and the diverse appearance and sizes of masses, remain challenging. Recently, there has been a surge in articles proposing to address pixel class imbalance through the formulation of the loss function. While demonstrating an enhancement in performance, they mostly fail to address the problem comprehensively. In this paper, we propose a new perspective on the calculation of the loss that enables the binary segmentation loss to incorporate sample-level information and region-level losses in a hybrid loss setting. We propose two variations of the loss that include mass size and density in the loss calculation. We also introduce a single-loss variant that uses mass size and density to enhance focal loss. We tested the proposed method on the benchmark CBIS-DDSM and INbreast datasets. Our approach outperformed the baseline and state-of-the-art methods on both datasets.
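The underlying idea, re-weighting a pixel-wise loss by sample-level properties such as relative mass size, can be illustrated very compactly. The PyTorch sketch below shows a binary focal loss with an extra per-sample weight; the weighting heuristic, function names, and parameter values are assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def weighted_focal_loss(logits, target, sample_weight, gamma=2.0, alpha=0.25):
    """Binary focal loss with an additional per-sample weight.

    logits, target: (N, 1, H, W); sample_weight: (N,), e.g. derived from
    relative mass size or breast density (illustrative, not the paper's code)."""
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * target + (1 - p) * (1 - target)           # probability of the true class
    alpha_t = alpha * target + (1 - alpha) * (1 - target)
    focal = alpha_t * (1 - p_t) ** gamma * bce           # standard focal term
    focal = focal.mean(dim=(1, 2, 3))                    # reduce to one value per sample
    return (sample_weight * focal).mean()

# toy usage: smaller masses receive larger weights (assumed heuristic)
logits = torch.randn(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) > 0.9).float()
mass_frac = target.mean(dim=(1, 2, 3))                   # fraction of mass pixels
weight = 1.0 / (mass_frac + 1e-3)
print(weighted_focal_loss(logits, target, weight).item())
```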
Affiliation(s)
- Parvaneh Aliniya
- Computer Science and Engineering Department, College of Engineering, University of Nevada, Reno, NV 89557, USA
- Mircea Nicolescu
- Computer Science and Engineering Department, College of Engineering, University of Nevada, Reno, NV 89557, USA
7. Zhang J, Wu J, Zhou XS, Shi F, Shen D. Recent advancements in artificial intelligence for breast cancer: Image augmentation, segmentation, diagnosis, and prognosis approaches. Semin Cancer Biol 2023; 96:11-25. PMID: 37704183; DOI: 10.1016/j.semcancer.2023.09.001.
Abstract
Breast cancer is a significant global health burden, with increasing morbidity and mortality worldwide. Early screening and accurate diagnosis are crucial for improving prognosis. Radiographic imaging modalities such as digital mammography (DM), digital breast tomosynthesis (DBT), magnetic resonance imaging (MRI), ultrasound (US), and nuclear medicine techniques are commonly used for breast cancer assessment, and histopathology (HP) serves as the gold standard for confirming malignancy. Artificial intelligence (AI) technologies show great potential for quantitative representation of medical images to effectively assist in segmentation, diagnosis, and prognosis of breast cancer. In this review, we overview the recent advancements of AI technologies for breast cancer, including 1) improving image quality by data augmentation, 2) fast detection and segmentation of breast lesions and diagnosis of malignancy, 3) biological characterization of the cancer, such as staging and subtyping, by AI-based classification technologies, and 4) prediction of clinical outcomes such as metastasis, treatment response, and survival by integrating multi-omics data. We then summarize large-scale databases available to help train robust, generalizable, and reproducible deep learning models. Furthermore, we discuss the challenges faced by AI in real-world applications, including data curation, model interpretability, and practice regulations. Finally, we expect that clinical implementation of AI will provide important guidance for patient-tailored management.
Affiliation(s)
- Jiadong Zhang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Jiaojiao Wu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xiang Sean Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China
8. Liao C, Wen X, Qi S, Liu Y, Cao R. FSE-Net: feature selection and enhancement network for mammogram classification. Phys Med Biol 2023; 68:195001. PMID: 37712226; DOI: 10.1088/1361-6560/acf559.
Abstract
Objective. Early detection and diagnosis allow for intervention and treatment at an early stage of breast cancer. Despite recent advances in computer-aided diagnosis systems based on convolutional neural networks for breast cancer diagnosis, improving the classification performance of mammograms remains a challenge due to the various sizes of breast lesions and the difficulty of extracting small-lesion features. To obtain more accurate classification results, many studies choose to directly classify region of interest (ROI) annotations, but labeling ROIs is labor intensive. The purpose of this research is to design a novel network that automatically classifies a mammogram image as cancer or non-cancer, aiming to mitigate the above challenges and help radiologists perform mammogram diagnosis more accurately. Approach. We propose a novel feature selection and enhancement network (FSE-Net) to fully exploit the features of mammogram images, which requires only mammogram images and image-level labels without any bounding boxes or masks. Specifically, to obtain more contextual information, an effective feature selection module is proposed to adaptively select receptive fields and fuse features from receptive fields of different scales. Moreover, a feature enhancement module is designed to explore the correlation between feature maps of different resolutions and to enhance the representation capacity of low-resolution feature maps with high-resolution feature maps. Main results. The performance of the proposed network has been evaluated on the CBIS-DDSM and INbreast datasets. It achieves an accuracy of 0.806 with an AUC of 0.866 on the CBIS-DDSM dataset and an accuracy of 0.956 with an AUC of 0.974 on the INbreast dataset. Significance. Through extensive experiments and saliency map visualization analysis, the proposed network achieves satisfactory performance in the mammogram classification task and can roughly locate suspicious regions to assist in the final prediction for the entire image.
Affiliation(s)
- Caiqing Liao
- College of Software Engineering, Taiyuan University of Technology, Taiyuan 030600, People's Republic of China
- Xin Wen
- College of Software Engineering, Taiyuan University of Technology, Taiyuan 030600, People's Republic of China
- Shuman Qi
- College of Software Engineering, Taiyuan University of Technology, Taiyuan 030600, People's Republic of China
- Yanan Liu
- College of Software Engineering, Taiyuan University of Technology, Taiyuan 030600, People's Republic of China
- Rui Cao
- College of Software Engineering, Taiyuan University of Technology, Taiyuan 030600, People's Republic of China
9. Alam MS, Wang D, Liao Q, Sowmya A. A Multi-Scale Context Aware Attention Model for Medical Image Segmentation. IEEE J Biomed Health Inform 2023; 27:3731-3739. PMID: 37015493; DOI: 10.1109/jbhi.2022.3227540.
Abstract
Medical image segmentation is critical for efficient diagnosis of diseases and treatment planning. In recent years, convolutional neural networks (CNN)-based methods, particularly U-Net and its variants, have achieved remarkable results on medical image segmentation tasks. However, they do not always work consistently on images with complex structures and large variations in regions of interest (ROI). This could be due to the fixed geometric structure of the receptive fields used for feature extraction and repetitive down-sampling operations that lead to information loss. To overcome these problems, the standard U-Net architecture is modified in this work by replacing the convolution block with a dilated convolution block to extract multi-scale context features with varying sizes of receptive fields, and adding a dilated inception block between the encoder and decoder paths to alleviate the problem of information recession and the semantic gap between features. Furthermore, the input of each dilated convolution block is added to the output through a squeeze and excitation unit, which alleviates the vanishing gradient problem and improves overall feature representation by re-weighting the channel-wise feature responses. The original inception block is modified by reducing the size of the spatial filter and introducing dilated convolution to obtain a larger receptive field. The proposed network was validated on three challenging medical image segmentation tasks with varying size ROIs: lung segmentation on chest X-ray (CXR) images, skin lesion segmentation on dermoscopy images and nucleus segmentation on microscopy cell images. Improved performance compared to state-of-the-art techniques demonstrates the effectiveness and generalisability of the proposed Dilated Convolution and Inception blocks-based U-Net (DCI-UNet).
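For readers unfamiliar with the building blocks named above, the sketch below shows a generic dilated convolution block whose output channels are re-weighted by a squeeze-and-excitation unit and whose input is added back through a shortcut, as described in the abstract. The layer sizes, dilation rates, and overall arrangement are illustrative assumptions, not the exact DCI-UNet configuration.

```python
import torch
import torch.nn as nn

class SEUnit(nn.Module):
    """Squeeze-and-excitation: re-weight channels using global context."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)

class DilatedConvBlock(nn.Module):
    """Parallel dilated convolutions (multi-scale receptive fields) fused and
    gated by SE, with a 1x1 shortcut; illustrative layout only."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, 1)
        self.se = SEUnit(out_ch)
        self.skip = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        y = torch.cat([branch(x) for branch in self.branches], dim=1)
        y = self.se(self.fuse(y))
        return y + self.skip(x)   # input added back via the shortcut

x = torch.randn(1, 32, 128, 128)
print(DilatedConvBlock(32, 64)(x).shape)   # torch.Size([1, 64, 128, 128])
```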
10. Jiang X, Hu Z, Wang S, Zhang Y. Deep Learning for Medical Image-Based Cancer Diagnosis. Cancers (Basel) 2023; 15:3608. PMID: 37509272; PMCID: PMC10377683; DOI: 10.3390/cancers15143608.
Abstract
(1) Background: The application of deep learning technology to cancer diagnosis based on medical images is one of the research hotspots in the fields of artificial intelligence and computer vision. Given the rapid development of deep learning methods, the high accuracy and timeliness required for cancer diagnosis, and the inherent particularity and complexity of medical imaging, a comprehensive review of relevant studies is necessary to help readers better understand the current research status and ideas. (2) Methods: Five radiological imaging modalities, namely X-ray, ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), as well as histopathological images, are reviewed in this paper. The basic architecture of deep learning and classical pretrained models are comprehensively reviewed. In particular, advanced neural networks emerging in recent years, including transfer learning, ensemble learning (EL), graph neural networks, and vision transformers (ViT), are introduced. Overfitting prevention methods are summarized, including batch normalization, dropout, weight initialization, and data augmentation. The application of deep learning technology in medical image-based cancer analysis is sorted out. (3) Results: Deep learning has achieved great success in medical image-based cancer diagnosis, showing good results in image classification, image reconstruction, image detection, image segmentation, image registration, and image synthesis. However, the lack of high-quality labeled datasets limits the role of deep learning, which also faces challenges in rare cancer diagnosis, multi-modal image fusion, model explainability, and generalization. (4) Conclusions: There is a need for more public standard databases for cancer. Pretrained models based on deep neural networks have the potential to be improved, and special attention should be paid to research on multimodal data fusion and the supervised paradigm. Technologies such as ViT, ensemble learning, and few-shot learning will bring surprises to cancer diagnosis based on medical images.
Grants
- RM32G0178B8 BBSRC
- MC_PC_17171 MRC, UK
- RP202G0230 Royal Society, UK
- AA/18/3/34220 BHF, UK
- RM60G0680 Hope Foundation for Cancer Research, UK
- P202PF11 GCRF, UK
- RP202G0289 Sino-UK Industrial Fund, UK
- P202ED10, P202RE969 LIAS, UK
- P202RE237 Data Science Enhancement Fund, UK
- 24NN201 Fight for Sight, UK
- OP202006 Sino-UK Education Fund, UK
- RM32G0178B8 BBSRC, UK
- 2023SJZD125 Major project of philosophy and social science research in colleges and universities in Jiangsu Province, China
Affiliation(s)
- Xiaoyan Jiang
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Zuojin Hu
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
11. Müjdat Tiryaki V. Mass segmentation and classification from film mammograms using cascaded deep transfer learning. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104819.
12. Zhao F, Huang K, Sun Z, Chen X, He X, Wang B, Xin C. Consistent Learning-Based Breast Tumor Segmentation and Its Application in Sentinel Lymph Node Metastasis Prediction. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38083326; DOI: 10.1109/embc40787.2023.10340091.
Abstract
Accurate staging of lymph nodes provides crucial diagnostic information for breast cancer patients, and segmentation plays an important role by localizing and visualizing the breast tumor of interest. Nevertheless, current segmentation methods perform only moderately when facing a large span of tumor sizes, degraded image quality, blurred tumor boundaries, and the noise introduced during manual annotation. Therefore, we develop a Multi-scale RepVGG-based Segmentation Network (MPSegNet) to segment breast tumors from MR images. In particular, we construct a consistent learning framework for the MPSegNet to alleviate the impact of noisy labels on segmentation results. The rationale is that different views covering the same breast tumor are supposed to generate identical segmentation predictions. We then predict sentinel lymph node (SLN) metastasis given the segmented breast tumors and evaluate the relationship between predictive performance and tumor segmentations under different consistencies. The results show the superiority of our method over other state-of-the-art methods. High consistency among multiple views can boost segmentation performance during consistent learning. However, the optimal segmentation does not produce the best SLN metastatic prediction results, implying that the dependence of classification upon segmentation needs to be investigated further. Clinical Relevance: This study facilitates more accurate segmentation of breast tumors with consistent learning and provides an initial analysis of the relationship between tumor segmentation and subsequent prediction of SLN metastasis, which has potential significance for the precise medical care of breast cancer patients.
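The consistent-learning idea, that different views of the same tumor should yield the same segmentation, can be written as a supervised loss plus an agreement term. The sketch below is a generic formulation with assumed names and weighting, not the MPSegNet code.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, view_a, view_b, target):
    """Soft Dice loss on one view plus an MSE agreement term between the
    predictions for two views of the same case; shapes (N, 1, H, W).
    The 0.5 weighting is an illustrative choice."""
    pred_a = torch.sigmoid(model(view_a))
    pred_b = torch.sigmoid(model(view_b))

    inter = (pred_a * target).sum(dim=(1, 2, 3))
    denom = pred_a.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1 - (2 * inter + 1) / (denom + 1)

    agree = F.mse_loss(pred_a, pred_b)   # predictions should coincide across views
    return dice.mean() + 0.5 * agree

# toy usage with a stand-in "segmentation network"
model = torch.nn.Conv2d(1, 1, 3, padding=1)
a = torch.randn(2, 1, 64, 64)
b = a + 0.05 * torch.randn_like(a)       # second "view" of the same case
t = (torch.rand(2, 1, 64, 64) > 0.8).float()
print(consistency_loss(model, a, b, t).item())
```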
13. Ru J, Lu B, Chen B, Shi J, Chen G, Wang M, Pan Z, Lin Y, Gao Z, Zhou J, Liu X, Zhang C. Attention guided neural ODE network for breast tumor segmentation in medical images. Comput Biol Med 2023; 159:106884. PMID: 37071938; DOI: 10.1016/j.compbiomed.2023.106884.
Abstract
Breast cancer is the most common cancer in women. Ultrasound is a widely used screening tool owing to its portability and ease of operation, and DCE-MRI can highlight lesions more clearly and reveal the characteristics of tumors. Both are noninvasive and radiation-free modalities for the assessment of breast cancer. Doctors make diagnoses and give further instructions based on the sizes, shapes, and textures of the breast masses shown on medical images, so automatic tumor segmentation via deep neural networks can, to some extent, assist doctors. To address challenges faced by popular deep neural networks, such as large numbers of parameters, lack of interpretability, and overfitting, we propose a segmentation network named Att-U-Node that uses attention modules to guide a neural ODE-based framework. Specifically, the network uses ODE blocks to form an encoder-decoder structure, with feature modeling by a neural ODE completed at each level. In addition, we propose an attention module that calculates the attention coefficient and generates a refined attention feature for the skip connection. Three publicly available breast ultrasound image datasets (BUSI, BUS, and OASBUD) and a private breast DCE-MRI dataset are used to assess the efficiency of the proposed model; we also upgrade the model to 3D for tumor segmentation using data selected from the public QIN Breast DCE-MRI collection. The experiments show that the proposed model achieves competitive results compared with related methods while mitigating the common problems of deep neural networks.
Affiliation(s)
- Jintao Ru
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Beichen Lu
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Buran Chen
- Department of Thyroid and Breast Surgery, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Jialin Shi
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Gaoxiang Chen
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Meihao Wang
- Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China; Key Laboratory of Intelligent Medical Imaging of Wenzhou, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Zhifang Pan
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China; Zhejiang Engineering Research Center of Intelligent Medicine, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Yezhi Lin
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China; Key Laboratory of Intelligent Treatment and Life Support for Critical Diseases of Zhejiang Province, Wenzhou, 325000, People's Republic of China
- Zhihong Gao
- Zhejiang Engineering Research Center of Intelligent Medicine, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Jiejie Zhou
- Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Xiaoming Liu
- School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, 430065, People's Republic of China
- Chen Zhang
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
14. Pan S, Liu X, Xie N, Chong Y. EG-TransUNet: a transformer-based U-Net with enhanced and guided models for biomedical image segmentation. BMC Bioinformatics 2023; 24:85. PMID: 36882688; PMCID: PMC9989586; DOI: 10.1186/s12859-023-05196-1.
Abstract
Although various methods based on convolutional neural networks have improved the performance of biomedical image segmentation to meet the precision requirements of medical image segmentation tasks, deep learning-based medical image segmentation methods still need to solve the following problems: (1) difficulty in extracting discriminative features of the lesion region during encoding, due to its variable size and shape; (2) difficulty in effectively fusing the spatial and semantic information of the lesion region during decoding, due to redundant information and the semantic gap. In this paper, we use attention-based Transformers in both the encoder and decoder stages to improve feature discrimination at the level of spatial detail and semantic location through multihead self-attention. To this end, we propose an architecture called EG-TransUNet, which includes three transformer-improved modules: a progressive enhancement module, channel spatial attention, and semantic guidance attention. The proposed EG-TransUNet architecture allows us to capture object variability with improved results on different biomedical datasets. EG-TransUNet outperformed other methods on two popular colonoscopy datasets (Kvasir-SEG and CVC-ClinicDB), achieving 93.44% and 95.26% mDice, respectively. Extensive experiments and visualization results demonstrate that our method advances performance on five medical segmentation datasets with better generalization ability.
Affiliation(s)
- Shaoming Pan
- The State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University, Wuhan, China
- Xin Liu
- The State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University, Wuhan, China
- Ningdi Xie
- The State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University, Wuhan, China
- Yanwen Chong
- The State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University, Wuhan, China
15. Das HS, Das A, Neog A, Mallik S, Bora K, Zhao Z. Breast cancer detection: Shallow convolutional neural network against deep convolutional neural networks based approach. Front Genet 2023; 13:1097207. PMID: 36685963; PMCID: PMC9846574; DOI: 10.3389/fgene.2022.1097207.
Abstract
Introduction: Of all the cancers that afflict women, breast cancer (BC) has the second-highest mortality rate and is believed to be a primary cause of the high death rate. Breast cancer is the most common cancer that affects women globally. There are two types of breast tumors: benign (less harmful and unlikely to progress to breast cancer) and malignant (very dangerous, with aberrant cells that can result in cancer). Methods: To find breast abnormalities such as masses and micro-calcifications, trained and experienced radiologists typically examine mammographic images. This study focuses on computer-aided diagnosis to help radiologists make more precise diagnoses of breast cancer. It aims to compare and examine the performance of the proposed shallow convolutional neural network architectures, with different specifications, against pre-trained deep convolutional neural network architectures trained on mammography images. In the first approach, mammogram images are pre-processed and the resulting data are fed to three shallow convolutional neural networks with representational differences. In the second approach, transfer learning via fine-tuning is used to feed the same collection of images into the pre-trained convolutional neural networks VGG19, ResNet50, MobileNet-v2, Inception-v3, Xception, and Inception-ResNet-v2. Results: In our experiments with two datasets, the accuracies for the CBIS-DDSM dataset are 80.4% and 89.2%, and for the INbreast dataset 87.8% and 95.1%, respectively. Discussion: It can be concluded from the experimental findings that the deep network-based approach with precise fine-tuning outperforms all other state-of-the-art techniques in experiments on both datasets.
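The second, transfer-learning pipeline follows the usual recipe of swapping the classifier head of an ImageNet-pretrained backbone and fine-tuning part of the network. Below is a minimal PyTorch/torchvision sketch of that recipe using ResNet50, one of the backbones named above; the frozen/unfrozen split, learning rate, and 2-class head are illustrative assumptions, not the paper's setup, and the pretrained weights are downloaded by torchvision.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet50 and replace its classification head
# with a 2-class head (e.g. benign vs. malignant); illustrative only.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in backbone.parameters():
    p.requires_grad = False                              # freeze early features
backbone.fc = nn.Linear(backbone.fc.in_features, 2)      # new, trainable head

# Fine-tune: optionally unfreeze the last residual stage as well
for p in backbone.layer4.parameters():
    p.requires_grad = True

optimizer = torch.optim.Adam(
    (p for p in backbone.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)                          # toy mammogram patches
y = torch.randint(0, 2, (4,))
loss = criterion(backbone(x), y)
loss.backward()
optimizer.step()
print(loss.item())
```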
Affiliation(s)
- Himanish Shekhar Das
- Department of Computer Science and Information Technology, Cotton University, Guwahati, India
- Akalpita Das
- Department of Computer Science and Engineering, GIMT Guwahati, Guwahati, India
- Anupal Neog
- Department of AI and Machine Learning COE, IQVIA, Bengaluru, Karnataka, India
- Saurav Mallik
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Department of Environmental Health, Harvard T. H. Chan School of Public Health, Boston, MA, United States
- Department of Pharmacology and Toxicology, University of Arizona, Tucson, AZ, United States
- Kangkana Bora
- Department of Computer Science and Information Technology, Cotton University, Guwahati, India
- Zhongming Zhao
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Department of Pathology and Laboratory Medicine, McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX, United States
16. Karri M, Annavarapu CSR, Acharya UR. Explainable multi-module semantic guided attention based network for medical image segmentation. Comput Biol Med 2022; 151:106231. PMID: 36335811; DOI: 10.1016/j.compbiomed.2022.106231.
Abstract
Automated segmentation of medical images is crucial for disease diagnosis and treatment planning. Medical image segmentation has been improved by convolutional neural network (CNN) models. Unfortunately, these are still limited in scenarios where the segmentation target has large variations in size, boundary, position, and shape. Moreover, current CNNs have low explainability, restricting their use in clinical decisions. In this paper, we make substantial use of various attention mechanisms in a CNN model and present an explainable multi-module semantic guided attention based network (MSGA-Net) for explainable and highly accurate medical image segmentation, which considers the most significant spatial regions, boundaries, scales, and channels. Specifically, we present a multi-scale attention module (MSA) to extract the most salient features at various scales from medical images. We then propose a semantic region-guided attention mechanism (SRGA) including location attention (LAM), channel-wise attention (CWA), and edge attention (EA) modules to extract the most important spatial, channel-wise, and boundary-related features for the regions of interest. Moreover, we present a sequence of fine-tuning steps with the SRGA module to gradually weight the significance of the regions of interest while simultaneously reducing noise. In this work, we experimented with three different types of medical images: dermoscopic images (HAM10000 dataset), multi-organ CT images (CHAOS 2019 dataset), and brain tumor MRI images (BraTS 2020 dataset). Extensive experiments on all image types revealed that our proposed MSGA-Net substantially improved overall performance on all metrics over the existing models. Moreover, the displayed attention feature maps offer more explainability than those of state-of-the-art models.
Affiliation(s)
- Meghana Karri
- Computer Science and Engineering Department, Indian Institute of Technology (ISM), Dhanbad, 826004, Jharkhand, India.
- U Rajendra Acharya
- Ngee Ann Polytechnic, Department of Electronics and Computer Engineering, 599489, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
17. Connected-SegNets: A Deep Learning Model for Breast Tumor Segmentation from X-ray Images. Cancers (Basel) 2022; 14:4030. PMID: 36011022; PMCID: PMC9406420; DOI: 10.3390/cancers14164030.
Abstract
Inspired by Connected-UNets, this study proposes a deep learning model, called Connected-SegNets, for breast tumor segmentation from X-ray images. In the proposed model, two SegNet architectures are connected with skip connections between their layers. Moreover, the cross-entropy loss function of the original SegNet has been replaced by an intersection over union (IoU) loss function in order to make the proposed model more robust against noise during the training process. As part of data preprocessing, a histogram equalization technique, called contrast limited adaptive histogram equalization (CLAHE), is applied to all datasets to enhance the compressed regions and smooth the distribution of the pixels. Additionally, two image augmentation methods, namely rotation and flipping, are used to increase the amount of training data and to prevent overfitting. The proposed model has been evaluated on two publicly available datasets, specifically INbreast and the curated breast imaging subset of the digital database for screening mammography (CBIS-DDSM). The proposed model has also been evaluated using a private dataset obtained from Cheng Hsin General Hospital in Taiwan. The experimental results show that the proposed Connected-SegNets model outperforms the state-of-the-art methods in terms of Dice score and IoU score. The proposed Connected-SegNets produces a maximum Dice score of 96.34% on the INbreast dataset, 92.86% on the CBIS-DDSM dataset, and 92.25% on the private dataset. Furthermore, the experimental results show that the proposed model achieves the highest IoU scores of 91.21%, 87.34%, and 83.71% on INbreast, CBIS-DDSM, and the private dataset, respectively.
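An IoU loss of the kind described above is typically written as a differentiable "soft" intersection-over-union over predicted probabilities. The PyTorch sketch below is a generic formulation of that idea; the smoothing constant and reduction are illustrative choices, not taken from the paper.

```python
import torch

def soft_iou_loss(logits, target, eps=1.0):
    """1 - soft IoU between predicted probabilities and a binary mask.
    logits, target: (N, 1, H, W); eps smooths the empty-mask case."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = (prob + target - prob * target).sum(dim=(1, 2, 3))
    return (1 - (inter + eps) / (union + eps)).mean()

# toy usage
logits = torch.randn(2, 1, 128, 128, requires_grad=True)
mask = (torch.rand(2, 1, 128, 128) > 0.7).float()
loss = soft_iou_loss(logits, mask)
loss.backward()
print(loss.item())
```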
18. Rajasree PM, Jatti A, Santosh D, Desai U, Krishnappa VD. Breast Masses Detection and Segmentation in Full-Field Digital Mammograms using Unified Convolution Neural Network. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:1002-1007. PMID: 36085669; DOI: 10.1109/embc48229.2022.9871866.
Abstract
Breast cancer has been a primary cause of mortality in women between their twenties and sixties worldwide, and early detection and treatment allow patients to receive definitive care and reduce the mortality rate. Recent research indicates that even the most experienced physicians face limitations, so a great deal of work has been carried out to develop automated mechanisms for segmenting and classifying the affected area and the type of cancer; however, this is still considered highly challenging due to the low signal-to-noise ratio and the variability of tumors in shape, size, and location. Furthermore, mammographic mass segmentation and detection are usually performed as separate tasks, and the convolutional neural network is a widely adopted architecture for both. In this research, we have designed and developed a unified CNN architecture that performs segmentation and detection of breast masses. The unified CNN architecture introduces a novel convolution module, combined through an additional offset, that targets a high-level feature map to achieve high prediction performance. A Random Region Selection (RRS) mechanism is applied as the data augmentation approach to select the boundary region of the affected area, robust model training is designed and optimized, and ROI pooling is utilized for boundary detection in the images. The unified CNN is evaluated using metrics such as the true positive rate at a given number of false positives per image (FPI) and the Dice index on the INbreast dataset, and a comparative analysis is carried out against various existing methods.
19.
20. AlEisa HN, Touiti W, Ali ALHussan A, Ben Aoun N, Ejbali R, Zaied M, Saadia A. Breast Cancer Classification Using FCN and Beta Wavelet Autoencoder. Comput Intell Neurosci 2022; 2022:8044887. PMID: 35785059; PMCID: PMC9246636; DOI: 10.1155/2022/8044887.
Abstract
In this paper, a new breast cancer classification approach based on Fully Convolutional Networks (FCNs) and a Beta Wavelet Autoencoder (BWAE) is presented. The FCN, as a powerful image segmentation model, is used to extract the relevant information from mammography images, identifying the relevant zones to model, while the BWAE models the extracted information for these zones. In fact, the wavelet autoencoder has proven superior to the majority of feature extraction approaches. The fusion of these two techniques improves the feature extraction phase by keeping and modeling only the features that are relevant and useful for the identification and description of breast masses. The experimental results show the effectiveness of our proposed method, which gives very encouraging results in comparison with state-of-the-art approaches on the same mammographic image database. A precision of 94% for benign and 93% for malignant cases was achieved, with a recall of 92% for benign and 95% for malignant cases. For the normal case, we reached a rate of 100%.
Affiliation(s)
- Hussah Nasser AlEisa
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Wajdi Touiti
- Research Team in Intelligent Machines, National School of Engineers of Gabes, B. P. W 6072, Gabes, Tunisia
- Amel Ali ALHussan
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Najib Ben Aoun
- College of Computer Science and Information Technology, Al Baha University, Al Baha, Saudi Arabia
- REGIM-Lab, Research Groups in Intelligent Machines, National School of Engineers of Sfax (ENIS), University of Sfax, Sfax, Tunisia
- Ridha Ejbali
- Research Team in Intelligent Machines, National School of Engineers of Gabes, B. P. W 6072, Gabes, Tunisia
- Mourad Zaied
- Research Team in Intelligent Machines, National School of Engineers of Gabes, B. P. W 6072, Gabes, Tunisia
- Ayesha Saadia
- Department of Computer Science, Faculty of Computing and Artificial Intelligence, Air University, PAF Complex, Islamabad, Pakistan
21. Liu Z, Yuan H, Wang H. CAM-Wnet: an effective solution for accurate pulmonary embolism segmentation. Med Phys 2022; 49:5294-5303. PMID: 35609213; DOI: 10.1002/mp.15719.
Abstract
BACKGROUND: The morbidity of pulmonary embolism (PE) is exceeded only by that of coronary heart disease and hypertension. Early detection, early diagnosis, and timely treatment are the keys to effectively reducing the risk of death. Nevertheless, PE segmentation is still a challenging task, and automatic segmentation of PE is particularly important. On the one hand, manual segmentation of PE from a CT sequence is very time-consuming and prone to misdiagnosis. On the other hand, an accurate contour of the location, volume, and shape of a PE can help radiotherapists carry out targeted treatment and thus greatly increase the survival rate of patients. Therefore, developing an automatic and efficient PE segmentation approach is an urgent demand in clinical diagnosis. PURPOSE: Accurate segmentation of PE is critical for its diagnosis. However, it remains a difficult problem in medical image processing due to factors such as incongruent sizes and shapes of emboli regions and low contrast between embolisms and other tissues. To address this, a deep neural network (CAM-Wnet) that incorporates coordinate attention mechanisms and pyramid pooling modules is proposed to segment PE end-to-end from CT images. METHODS: The CAM-Wnet architecture is composed of a coarse U-Net and a subdivision U-Net stacked on top of each other. The coarse U-Net uses a pre-trained VGG-19 as an encoder, which transfers the features learned from ImageNet to other tasks, and coordinate attention residual blocks are introduced into the decoder of the coarse network to obtain a wider range of semantic information and capture the correlation between channels. The product of the input image and the preliminary mask is then fed into the subdivision U-Net for a second round of feature distillation; the encoder and decoder of the subdivision U-Net are both constructed from coordinate attention residual blocks. Pyramid pooling modules are used between the encoder and the decoder of both U-Net architectures to exploit global context information and further enhance feature extraction. Finally, an improved focal loss function is used to train the network and further improve segmentation. RESULTS: We tested the proposed architecture on doctors' manual contours from the China-Japan Friendship Hospital dataset and calculated precision, recall, IoU, and F1-score to evaluate segmentation accuracy. The segmentation precision for PE was 0.9703, recall was 0.963, IoU was 0.9353, and F1-score was 0.9665. The experimental results show the effectiveness of the proposed method for automatically and accurately segmenting emboli in lung CT images. We also tested the method on the public LiTS dataset, which demonstrates its effectiveness and generalization ability. CONCLUSIONS: CAM-Wnet obtains more global and semantic information through multi-scale pooling and attention mechanisms. Experimental results show that the proposed method effectively improves PE segmentation in lung CT images and could be applied to assist doctors in clinical treatment.
Affiliation(s)
- Zhenhong Liu
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100091
- Hongfang Yuan
- Department of Information Science and Technology, Beijing University of Chemical Technology, Beijing, 100029
- Huaqing Wang
- Department of Mechanical and Electrical Engineering, University of Chemical Technology, Beijing, 100029
22. Wei W, Tao H, Chen W, Wu X. Automatic recognition of micronucleus by combining attention mechanism and AlexNet. BMC Med Inform Decis Mak 2022; 22:138. PMID: 35585543; PMCID: PMC9116712; DOI: 10.1186/s12911-022-01875-w.
Abstract
Background: A micronucleus (MN) is an abnormal fragment in a human cell caused by disorders in the mechanism regulating chromosome segregation. It can be used as a biomarker for genotoxicity, tumor risk, and tumor malignancy. The in vitro micronucleus assay is a commonly used method to detect micronuclei; however, it is time-consuming, and the visual scoring can be inconsistent. Methods: To alleviate this issue, we proposed a computer-aided diagnosis method combining convolutional neural networks and visual attention for micronucleus recognition. The backbone of our model is AlexNet without any dense layers, pretrained on the ImageNet dataset. Two attention modules are applied to extract cell image features and generate attention maps highlighting the region of interest to improve the interpretability of the network. Given the problems in the dataset, we leverage data augmentation and focal loss to alleviate their impact. Results: Experiments show that the proposed network yields better performance with fewer parameters. The AP, F1, and AUC values reach 0.932, 0.811, and 0.995, respectively. Conclusion: The proposed network can effectively recognize micronuclei and can play an auxiliary role in clinical diagnosis by doctors.
Affiliation(s)
- Weiyi Wei
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
- Hong Tao
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
- Wenxia Chen
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
- Xiaoqin Wu
- Radiology Department, Gansu Provincial Center for Disease Control and Prevention, Lanzhou, China
Collapse
|
23
|
Karthik R, Menaka R, M H, Won D. Contour-enhanced attention CNN for CT-based COVID-19 segmentation. PATTERN RECOGNITION 2022; 125:108538. [PMID: 35068591 PMCID: PMC8767763 DOI: 10.1016/j.patcog.2022.108538] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Revised: 09/14/2021] [Accepted: 01/14/2022] [Indexed: 05/14/2023]
Abstract
Accurate detection of COVID-19 is one of the challenging research topics in today's healthcare sector for controlling the coronavirus pandemic. Automatic data-powered insights for COVID-19 localization from medical imaging modalities such as chest CT scans tremendously augment clinical care. In this research, a Contour-aware Attention Decoder CNN is proposed to precisely segment COVID-19-infected tissues. It introduces a novel attention scheme to extract boundary and shape cues from CT contours and leverages these features in refining the infected areas. For every decoded pixel, the attention module harvests contextual information in its spatial neighborhood from the contour feature maps. As a result of incorporating such rich structural details into decoding via dense attention, the CNN is able to capture even intricate morphological details. The decoder is also augmented with a Cross Context Attention Fusion Upsampling module to robustly reconstruct deep semantic features back to a high-resolution segmentation map. It employs a novel pixel-precise attention model that draws on relevant encoder features to aid effective upsampling. The proposed CNN was evaluated on 3D scans from the MosMedData and Jun Ma benchmark datasets. It achieved state-of-the-art performance with a high dice similarity coefficient of 85.43% and a recall of 88.10%.
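As a hedged aside on the boundary cues mentioned above, the sketch below derives a simple contour map from a CT slice with fixed Sobel kernels; this is only one possible source of contour features and is an assumption for illustration, not the attention scheme of the cited paper.

```python
import torch
import torch.nn.functional as F

def sobel_contours(x):
    """Gradient-magnitude contour map for a single-channel image batch (B, 1, H, W)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)              # vertical-gradient kernel
    gx = F.conv2d(x, kx, padding=1)
    gy = F.conv2d(x, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

edges = sobel_contours(torch.randn(1, 1, 128, 128))
```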
Collapse
Affiliation(s)
- R Karthik
- Centre for Cyber Physical Systems (CCPS), Vellore Institute of Technology, Chennai, India
| | - R Menaka
- Centre for Cyber Physical Systems (CCPS), Vellore Institute of Technology, Chennai, India
| | - Hariharan M
- School of Computing Sciences and Engineering, Vellore Institute of Technology, Chennai, India
| | - Daehan Won
- System Sciences and Industrial Engineering, Binghamton University, United States
| |
Collapse
|
24
|
Wu Y, Wu J, Dou Y, Rubert N, Wang Y, Deng J. A deep learning fusion model with evidence-based confidence level analysis for differentiation of malignant and benign breast tumors using dynamic contrast enhanced MRI. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103319] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
25
|
Marin T, Zhuo Y, Lahoud RM, Tian F, Ma X, Xing F, Moteabbed M, Liu X, Grogg K, Shusharina N, Woo J, Ma C, Chen YLE, El Fakhri G. Deep learning-based GTV contouring modeling inter- and intra- observer variability in sarcomas. Radiother Oncol 2022; 167:269-276. [PMID: 34808228 PMCID: PMC8934266 DOI: 10.1016/j.radonc.2021.09.034] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2021] [Revised: 09/21/2021] [Accepted: 09/29/2021] [Indexed: 02/03/2023]
Abstract
BACKGROUND AND PURPOSE The delineation of the gross tumor volume (GTV) is a critical step in radiation therapy treatment planning. The delineation procedure is typically performed manually, which raises two major issues: cost and reproducibility. Delineation is a time-consuming process that is subject to inter- and intra-observer variability. While methods have been proposed to predict GTV contours, typical approaches ignore this variability and therefore fail to utilize the valuable confidence information offered by multiple contours. MATERIALS AND METHODS In this work we propose an automatic GTV contouring method for soft-tissue sarcomas from X-ray computed tomography (CT) images, using deep learning and integrating inter- and intra-observer variability in the learned model. Sixty-eight patients with soft tissue and bone sarcomas were considered in this evaluation, all of whom underwent pre-operative CT imaging for GTV delineation. Four radiation oncologists and radiologists each performed three contouring trials for all patients. We quantify variability by defining confidence levels based on the frequency of inclusion of a given voxel in the GTV and use a deep convolutional neural network to learn GTV confidence maps. RESULTS Results were compared to confidence maps from the four readers as well as ground-truth consensus contours established jointly by all readers. The resulting continuous Dice score between predicted and true confidence maps was 87%, and the Hausdorff distance was 14 mm. CONCLUSION The results demonstrate the ability of the proposed method to predict accurate contours while utilizing variability, and as such it can be used to improve the clinical workflow.
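The confidence-map construction and the continuous Dice score mentioned above can be sketched as follows; the exact definitions used by the authors may differ, so treat this as an illustrative assumption.

```python
import numpy as np

def confidence_map(masks):
    """masks: (n_trials, D, H, W) binary contours -> per-voxel inclusion frequency."""
    return masks.mean(axis=0)

def soft_dice(p, q, eps=1e-8):
    """Continuous (soft) Dice between two confidence maps with values in [0, 1]."""
    return float(2.0 * (p * q).sum() / (p.sum() + q.sum() + eps))

trials = np.random.rand(12, 4, 32, 32) > 0.5      # e.g. 4 readers x 3 trials, toy data
print(soft_dice(confidence_map(trials[:6]), confidence_map(trials[6:])))
```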
Collapse
Affiliation(s)
- Thibault Marin
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America,Harvard Medical School, Boston MA, 02115, United States of America
| | - Yue Zhuo
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America,Harvard Medical School, Boston MA, 02115, United States of America
| | - Rita Maria Lahoud
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America,Harvard Medical School, Boston MA, 02115, United States of America
| | - Fei Tian
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America,Harvard Medical School, Boston MA, 02115, United States of America
| | - Xiaoyue Ma
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America,Harvard Medical School, Boston MA, 02115, United States of America
| | - Fangxu Xing
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America,Harvard Medical School, Boston MA, 02115, United States of America
| | - Maryam Moteabbed
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America,Department of Radiation Oncology, Massachusetts General Hospital, Boston MA, 02114, United States of America,Harvard Medical School, Boston MA, 02115, United States of America
| | - Xiaofeng Liu
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America,Harvard Medical School, Boston MA, 02115, United States of America
| | - Kira Grogg
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America,Harvard Medical School, Boston MA, 02115, United States of America
| | - Nadya Shusharina
- Department of Radiation Oncology, Massachusetts General Hospital, Boston MA, 02114, United States of America,Harvard Medical School, Boston MA, 02115, United States of America
| | - Jonghye Woo
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America,Harvard Medical School, Boston MA, 02115, United States of America
| | - Chao Ma
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America,Harvard Medical School, Boston MA, 02115, United States of America
| | - Yen-Lin E. Chen
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America,Department of Radiation Oncology, Massachusetts General Hospital, Boston MA, 02114, United States of America,Harvard Medical School, Boston MA, 02115, United States of America
| | - Georges El Fakhri
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston MA, 02114, United States of America,Harvard Medical School, Boston MA, 02115, United States of America,Corresponding author,
| |
Collapse
|
26
|
Xu C, Qi Y, Wang Y, Lou M, Pi J, Ma Y. ARF-Net: An Adaptive Receptive Field Network for breast mass segmentation in whole mammograms and ultrasound images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103178] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
27
|
Connected-UNets: a deep learning architecture for breast mass segmentation. NPJ Breast Cancer 2021; 7:151. [PMID: 34857755 PMCID: PMC8640011 DOI: 10.1038/s41523-021-00358-x] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Accepted: 11/01/2021] [Indexed: 12/19/2022] Open
Abstract
Breast cancer analysis implies that radiologists inspect mammograms to detect suspicious breast lesions and identify mass tumors. Artificial intelligence techniques offer automatic systems for breast mass segmentation to assist radiologists in their diagnosis. With the rapid development of deep learning and its application to medical imaging challenges, UNet and its variants are among the state-of-the-art models for medical image segmentation and have shown promising performance on mammography. In this paper, we propose an architecture, called Connected-UNets, which connects two UNets using additional modified skip connections. We integrate Atrous Spatial Pyramid Pooling (ASPP) into the two standard UNets to emphasize the contextual information within the encoder-decoder network architecture. We also apply the proposed architecture to the Attention UNet (AUNet) and the Residual UNet (ResUNet). We evaluated the proposed architectures on two publicly available datasets, the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) and INbreast, and additionally on a private dataset. Experiments were also conducted with synthetic data generated by the cycle-consistent Generative Adversarial Network (CycleGAN) model between two unpaired datasets to augment and enhance the images. Qualitative and quantitative results show that the proposed architecture can achieve better automatic mass segmentation, with high Dice scores of 89.52%, 95.28%, and 95.88% and Intersection over Union (IoU) scores of 80.02%, 91.03%, and 92.27% on CBIS-DDSM, INbreast, and the private dataset, respectively.
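For readers unfamiliar with the ASPP block cited above, a minimal sketch follows; the dilation rates and projection layer are common defaults assumed here, not the exact Connected-UNets configuration.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Hedged sketch of Atrous Spatial Pyramid Pooling: parallel dilated convolutions."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)  # fuse the branches

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

print(ASPP(256, 64)(torch.randn(1, 256, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```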
Collapse
|
28
|
Liu R, Liu M, Sheng B, Li H, Li P, Song H, Zhang P, Jiang L, Shen D. NHBS-Net: A Feature Fusion Attention Network for Ultrasound Neonatal Hip Bone Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3446-3458. [PMID: 34106849 DOI: 10.1109/tmi.2021.3087857] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Ultrasound is widely used for diagnosing developmental dysplasia of the hip (DDH) because it does not use radiation. Due to its low cost and convenience, 2-D ultrasound is still the most common examination in DDH diagnosis. In clinical usage, the complexity of both ultrasound image standardization and measurement leads to a high error rate for sonographers. Automatic segmentation of key structures in the hip joint can be used to develop a standard plane detection method that helps sonographers decrease this error rate. However, current automatic segmentation methods still face challenges in robustness and accuracy. Thus, we propose, for the first time, a neonatal hip bone segmentation network (NHBS-Net) for the segmentation of seven key structures. We design three improvements, an enhanced dual attention module, a two-class feature fusion module, and a coordinate convolution output head, to help segment the different structures. Compared with current state-of-the-art networks, NHBS-Net achieves outstanding accuracy and generalizability in our experiments. Additionally, image standardization is a common need in ultrasonography, and segmentation-based standard plane detection is tested on a 50-image standard dataset. The experiments show that our method can help healthcare workers decrease their error rate from 6%-10% to 2%. In addition, the segmentation performance on another ultrasound dataset (fetal heart) demonstrates the generalization ability of our network.
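The coordinate convolution output head mentioned above can be sketched as a convolution applied to a feature map augmented with normalized x/y coordinate channels; the layer name and normalization below are assumptions for illustration.

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Hedged CoordConv sketch: concatenate coordinate channels before convolving."""
    def __init__(self, in_ch, out_ch, **conv_kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + 2, out_ch, **conv_kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, ys, xs], dim=1))

head = CoordConv2d(32, 7, kernel_size=1)   # e.g. one channel per key structure
print(head(torch.randn(2, 32, 64, 64)).shape)
```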
Collapse
|
29
|
Wang S, Li C, Wang R, Liu Z, Wang M, Tan H, Wu Y, Liu X, Sun H, Yang R, Liu X, Chen J, Zhou H, Ben Ayed I, Zheng H. Annotation-efficient deep learning for automatic medical image segmentation. Nat Commun 2021; 12:5915. [PMID: 34625565 PMCID: PMC8501087 DOI: 10.1038/s41467-021-26216-9] [Citation(s) in RCA: 33] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Accepted: 09/22/2021] [Indexed: 01/17/2023] Open
Abstract
Automatic medical image segmentation plays a critical role in scientific research and medical care. Existing high-performance deep learning methods typically rely on large training datasets with high-quality manual annotations, which are difficult to obtain in many clinical applications. Here, we introduce Annotation-effIcient Deep lEarning (AIDE), an open-source framework to handle imperfect training datasets. Methodological analyses and empirical evaluations are conducted, and we demonstrate that AIDE surpasses conventional fully-supervised models by presenting better performance on open datasets possessing scarce or noisy annotations. We further test AIDE in a real-life case study for breast tumor segmentation. Three datasets containing 11,852 breast images from three medical centers are employed, and AIDE, utilizing 10% training annotations, consistently produces segmentation maps comparable to those generated by fully-supervised counterparts or provided by independent radiologists. The 10-fold enhanced efficiency in utilizing expert labels has the potential to promote a wide range of biomedical applications.
Collapse
Affiliation(s)
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China.
- Peng Cheng Laboratory, Shenzhen, Guangdong, China.
- Pazhou Laboratory, Guangzhou, Guangdong, China.
| | - Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China.
| | - Rongpin Wang
- Department of Medical Imaging, Guizhou Provincial People's Hospital, Guiyang, Guizhou, China
| | - Zaiyi Liu
- Department of Medical Imaging, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong, China
| | - Meiyun Wang
- Department of Medical Imaging, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, Zhengzhou, Henan, China
| | - Hongna Tan
- Department of Medical Imaging, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, Zhengzhou, Henan, China
| | - Yaping Wu
- Department of Medical Imaging, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, Zhengzhou, Henan, China
| | - Xinfeng Liu
- Department of Medical Imaging, Guizhou Provincial People's Hospital, Guiyang, Guizhou, China
| | - Hui Sun
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
| | - Rui Yang
- Department of Urology, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
| | - Xin Liu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
| | - Jie Chen
- Peng Cheng Laboratory, Shenzhen, Guangdong, China
- School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, Shenzhen, Guangdong, China
| | - Huihui Zhou
- Brain Cognition and Brain Disease Institute, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
| | | | - Hairong Zheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China.
| |
Collapse
|
30
|
Tardy M, Mateus D. Looking for Abnormalities in Mammograms With Self- and Weakly Supervised Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2711-2722. [PMID: 33417539 DOI: 10.1109/tmi.2021.3050040] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Early breast cancer screening through mammography produces millions of images worldwide every year. Despite the volume of data generated, these images are not systematically associated with standardized labels. Current protocols encourage giving a malignancy probability to each studied breast but do not require the explicit and burdensome annotation of the affected regions. In this work, we address the problem of abnormality detection in the context of such weakly annotated datasets. We combine domain knowledge about the pathology and clinically available image-wise labels to propose a mixed self- and weakly supervised learning framework for abnormality reconstruction. We also introduce an auxiliary classification task based on the reconstructed regions to improve explainability. We work with high-resolution imaging that enables our network to capture different findings, including masses, micro-calcifications, distortions, and asymmetries, unlike most state-of-the-art works that mainly focus on masses. We use the popular INbreast dataset as well as our private multi-manufacturer dataset for validation, and we compare our method against multiple state-of-the-art methods in segmentation, detection, and classification. Our results include an image-wise AUC of up to 0.86, an overall region detection true positive rate of 0.93, and a pixel-wise F1 score of 64% on malignant masses.
Collapse
|
31
|
Pi J, Qi Y, Lou M, Li X, Wang Y, Xu C, Ma Y. FS-UNet: Mass segmentation in mammograms using an encoder-decoder architecture with feature strengthening. Comput Biol Med 2021; 137:104800. [PMID: 34507155 DOI: 10.1016/j.compbiomed.2021.104800] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2021] [Revised: 08/20/2021] [Accepted: 08/21/2021] [Indexed: 11/18/2022]
Abstract
Breast mass segmentation in mammograms is still a challenging and clinically valuable task. In this paper, we propose an effective and lightweight segmentation model based on convolutional neural networks to automatically segment breast masses in whole mammograms. Specifically, we first developed feature strengthening modules to enhance relevant information about masses and other tissues and improve the representation power of low-resolution feature layers with high-resolution feature maps. Second, we applied a parallel dilated convolution module to capture the features of different scales of masses and fully extract information about the edges and internal texture of the masses. Third, a mutual information loss function was employed to optimise the accuracy of the prediction results by maximising the mutual information between the prediction results and the ground truth. Finally, the proposed model was evaluated on both available INbreast and CBIS-DDSM datasets, and the experimental results indicated that our method achieved excellent segmentation performance in terms of dice coefficient, intersection over union, and sensitivity metrics.
Collapse
Affiliation(s)
- Jiande Pi
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, 730000, China
| | - Yunliang Qi
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, 730000, China
| | - Meng Lou
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, 730000, China
| | - Xiaorong Li
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, 730000, China
| | - Yiming Wang
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, 730000, China
| | - Chunbo Xu
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, 730000, China
| | - Yide Ma
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, 730000, China.
| |
Collapse
|
32
|
Zhao W, Wang R, Qi Y, Lou M, Wang Y, Yang Y, Deng X, Ma Y. BASCNet: Bilateral adaptive spatial and channel attention network for breast density classification in the mammogram. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.103073] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
|
33
|
Breast Cancer Segmentation Methods: Current Status and Future Potentials. BIOMED RESEARCH INTERNATIONAL 2021; 2021:9962109. [PMID: 34337066 PMCID: PMC8321730 DOI: 10.1155/2021/9962109] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Revised: 05/14/2021] [Accepted: 06/11/2021] [Indexed: 12/24/2022]
Abstract
Early breast cancer detection is one of the most important issues that need to be addressed worldwide, as it can help increase the survival rate of patients. Mammograms have been used to detect breast cancer in the early stages; detection at an early stage can also drastically reduce treatment costs. The detection of tumours in the breast depends on segmentation techniques. Segmentation plays a significant role in image analysis and includes detection, feature extraction, classification, and treatment, and it helps physicians quantify the volume of tissue in the breast for treatment planning. In this work, we have grouped segmentation methods into three categories: classical segmentation, which includes region-, threshold-, and edge-based segmentation; machine learning segmentation, both supervised and unsupervised; and deep learning segmentation. The findings of our study revealed that region-based segmentation is the most frequent classical approach, with region growing the most frequently used technique, and that the median filter is a robust tool for removing noise. Moreover, the MIAS database is frequently used in classical segmentation methods. Meanwhile, in machine learning segmentation, unsupervised methods are more frequently used, and U-Net is frequently used for mammogram image segmentation because it does not require many annotated images compared with other deep learning models. Furthermore, the reviewed papers revealed that it is possible to train a deep learning model without performing any preprocessing or postprocessing, and the popularity of U-Net is also supported by the availability of high-performance GPU computing, which makes it easy to train networks with more layers. Additionally, we identified widely used mammogram databases, of which 3 are public and 28 are private.
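Since the review singles out region growing as the most common classical technique, a small illustrative sketch of the idea is given below; the 4-connectivity and intensity tolerance are assumptions, not a method from any specific reviewed paper.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=0.1):
    """Grow a region from a seed pixel, absorbing 4-connected neighbours whose
    intensity stays within `tol` of the seed intensity (illustrative sketch)."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    ref = img[seed]
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < h and 0 <= c < w) or mask[r, c] or abs(img[r, c] - ref) > tol:
            continue
        mask[r, c] = True
        queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return mask

img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0
print(region_grow(img, (30, 30), tol=0.5).sum())   # 400 pixels of the bright square
```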
Collapse
|
34
|
Lou M, Qi Y, Meng J, Xu C, Wang Y, Pi J, Ma Y. DCANet: Dual contextual affinity network for mass segmentation in whole mammograms. Med Phys 2021; 48:4291-4303. [PMID: 34061371 DOI: 10.1002/mp.15010] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2021] [Revised: 04/27/2021] [Accepted: 05/25/2021] [Indexed: 12/26/2022] Open
Abstract
PURPOSE Breast mass segmentation in mammograms remains a crucial yet challenging topic in computer-aided diagnosis systems. Existing algorithms mainly use mass-centered patches to achieve mass segmentation, which is time-consuming and unstable in clinical diagnosis. Therefore, we aim to perform fully automated mass segmentation directly in whole mammograms with deep learning solutions. METHODS In this work, we propose a novel dual contextual affinity network (a.k.a. DCANet) for mass segmentation in whole mammograms. Based on the encoder-decoder structure, two lightweight yet effective contextual affinity modules are proposed: the global-guided affinity module (GAM) and the local-guided affinity module (LAM). The former aggregates the features integrated over all positions and captures long-range contextual dependencies, aiming to enhance the feature representations of homogeneous regions. The latter emphasizes semantic information around each position and exploits contextual affinity based on the local field-of-view, aiming to improve the distinction among heterogeneous regions. RESULTS The proposed DCANet is evaluated on two public mammographic databases, the DDSM and INbreast, achieving Dice similarity coefficients (DSC) of 85.95% and 84.65%, respectively. Both segmentation performance and computational efficiency outperform the current state-of-the-art methods. CONCLUSION Based on extensive qualitative and quantitative analyses, we believe that the proposed fully automated approach is sufficiently robust to provide fast and accurate diagnoses for clinical breast mass segmentation.
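The long-range contextual dependencies described for the global-guided module are commonly captured with a non-local (self-attention) block; the sketch below is such a generic block, assumed for illustration rather than the exact GAM design.

```python
import torch
import torch.nn as nn

class GlobalAffinity(nn.Module):
    """Hedged sketch of a global affinity block: every position attends to all positions."""
    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or channels // 2
        self.query = nn.Conv2d(channels, reduced, 1)
        self.key = nn.Conv2d(channels, reduced, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)             # (b, hw, c')
        k = self.key(x).flatten(2)                               # (b, c', hw)
        v = self.value(x).flatten(2).transpose(1, 2)             # (b, hw, c)
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out                                           # residual connection

print(GlobalAffinity(64)(torch.randn(1, 64, 16, 16)).shape)
```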
Collapse
Affiliation(s)
- Meng Lou
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
| | - Yunliang Qi
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
| | - Jie Meng
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
| | - Chunbo Xu
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
| | - Yiming Wang
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
| | - Jiande Pi
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
| | - Yide Ma
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
| |
Collapse
|
35
|
Yang Z, Zhao YQ, Liao M, Di SH, Zeng YZ. Semi-automatic liver tumor segmentation with adaptive region growing and graph cuts. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102670] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
36
|
|
37
|
Gao F, Qiao K, Yan B, Wu M, Wang L, Chen J, Shi D. Hybrid network with difference degree and attention mechanism combined with radiomics (H-DARnet) for MVI prediction in HCC. Magn Reson Imaging 2021; 83:27-40. [PMID: 34147593 DOI: 10.1016/j.mri.2021.06.018] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Revised: 02/05/2021] [Accepted: 06/15/2021] [Indexed: 12/12/2022]
Abstract
MVI is a risk assessment factor related to hepatocellular carcinoma (HCC) recurrence after hepatectomy or liver transplantation. The goal of this paper is to study the preoperative diagnosis of microvascular invasion (MVI) using a deep learning algorithm applied to non-contrast T2-weighted magnetic resonance imaging (MRI) images instead of pathological images. Herein, an ensemble learning algorithm named H-DARnet, based on the difference degree and an attention mechanism combined with radiomics, is proposed for MVI prediction. Our hybrid network combines fine-grained, high-level semantic, and radiomics features and exhibits a rich multilevel-feature architecture composed of global, local, and prior knowledge with suitable complementarity. The total loss function comprises two regularization terms, the triplet loss and the cross-entropy loss, applied to the triplet network and SE-DenseNet, respectively. A hard triplet sample selection strategy for the triplet network and data augmentation for small-scale liver image datasets are indispensable in convolutional neural network (CNN) training. For 200 patch-level test samples (135 positive samples and 65 negative samples), our method obtained the best prediction results: the AUC, sensitivity, and specificity were 0.826, 79.5%, and 73.8%, respectively. The experimental results show that MVI can be predicted from MRI images and that the proposed method outperforms other deep learning algorithms and hand-crafted feature algorithms. The proposed ensemble learning algorithm is shown to be an effective method for MVI prediction.
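The two-term objective described above, a triplet loss on embeddings plus a cross-entropy loss on class logits, can be sketched as follows; the margin and weighting are illustrative assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn

triplet = nn.TripletMarginLoss(margin=1.0)
xent = nn.CrossEntropyLoss()

def combined_loss(anchor, positive, negative, logits, labels, w=0.5):
    """Hedged sketch: weighted sum of a triplet margin loss and cross-entropy."""
    return w * triplet(anchor, positive, negative) + (1 - w) * xent(logits, labels)

emb = lambda: torch.randn(4, 128)   # toy embeddings for anchor/positive/negative
loss = combined_loss(emb(), emb(), emb(), torch.randn(4, 2), torch.randint(0, 2, (4,)))
```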
Collapse
Affiliation(s)
- Fei Gao
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, ZhengZhou, China
| | - Kai Qiao
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, ZhengZhou, China
| | - Bin Yan
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, ZhengZhou, China.
| | - Minghui Wu
- Department of Radiology, Henan Provincial People's Hospital, ZhengZhou, China
| | - Linyuan Wang
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, ZhengZhou, China
| | - Jian Chen
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, ZhengZhou, China
| | - Dapeng Shi
- Department of Radiology, Henan Provincial People's Hospital, ZhengZhou, China
| |
Collapse
|
38
|
He X, Deng Y, Fang L, Peng Q. Multi-Modal Retinal Image Classification With Modality-Specific Attention Network. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1591-1602. [PMID: 33625978 DOI: 10.1109/tmi.2021.3059956] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Recently, automatic diagnostic approaches have been widely used to classify ocular diseases. Most of these approaches are based on a single imaging modality (e.g., fundus photography or optical coherence tomography (OCT)), which usually only reflect the oculopathy to a certain extent, and neglect the modality-specific information among different imaging modalities. This paper proposes a novel modality-specific attention network (MSAN) for multi-modal retinal image classification, which can effectively utilize the modality-specific diagnostic features from fundus and OCT images. The MSAN comprises two attention modules to extract the modality-specific features from fundus and OCT images, respectively. Specifically, for the fundus image, ophthalmologists need to observe local and global pathologies at multiple scales (e.g., from microaneurysms at the micrometer level, optic disc at millimeter level to blood vessels through the whole eye). Therefore, we propose a multi-scale attention module to extract both the local and global features from fundus images. Moreover, large background regions exist in the OCT image, which is meaningless for diagnosis. Thus, a region-guided attention module is proposed to encode the retinal layer-related features and ignore the background in OCT images. Finally, we fuse the modality-specific features to form a multi-modal feature and train the multi-modal retinal image classification network. The fusion of modality-specific features allows the model to combine the advantages of fundus and OCT modality for a more accurate diagnosis. Experimental results on a clinically acquired multi-modal retinal image (fundus and OCT) dataset demonstrate that our MSAN outperforms other well-known single-modal and multi-modal retinal image classification methods.
Collapse
|
39
|
Jiang M, Han L, Sun H, Li J, Bao N, Li H, Zhou S, Yu T. Cross-modality image feature fusion diagnosis in breast cancer. Phys Med Biol 2021; 66. [PMID: 33784653 DOI: 10.1088/1361-6560/abf38b] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2021] [Accepted: 03/30/2021] [Indexed: 01/22/2023]
Abstract
Considering the complementarity of mammography and breast MRI, feature fusion diagnosis based on cross-modality images was explored to improve the accuracy of breast cancer diagnosis. 201 patients with both mammography and breast MRI were collected retrospectively, including 117 cases of benign lesions and 84 cases of malignant ones. Two feature optimization strategies, SFFS-1 and SFFS-2, were defined based on the sequential floating forward selection (SFFS) method. Each strategy was used to analyze the diagnostic performance of single-modality images and then to study the feature fusion diagnosis of cross-modality images. Three feature fusion approaches were compared: optimizing MRI features and then fusing those of mammography; optimizing mammography features and then fusing those of MRI; and selecting the effective features from the whole feature set (mammography and MRI). Support vector machine, Naive Bayes, and K-nearest neighbor classifiers were employed and finally integrated to obtain better performance. The average accuracy and area under the ROC curve (AUC) of MRI (88.56%, 0.90 for SFFS-1; 88.39%, 0.89 for SFFS-2) were better than those of mammography (84.25%, 0.84 for SFFS-1; 80.43%, 0.80 for SFFS-2). Furthermore, compared with a single modality, cross-modality feature fusion improved the average accuracy and AUC from 85.40% and 0.86 to 89.66% and 0.91. Classifier integration further improved the accuracy and AUC from 90.49% and 0.92 to 92.37% and 0.97. Cross-modality image feature fusion can achieve better diagnostic performance than a single modality, feature selection strategy SFFS-1 is more efficient than SFFS-2, and classifier integration can further improve diagnostic accuracy.
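The wrapper-style feature selection described above can be approximated with scikit-learn's plain sequential forward selection around an SVM; note that this is a simplified stand-in, since true floating selection (SFFS) additionally allows conditional removal of already-selected features (available in libraries such as mlxtend). The data and parameter choices below are toy assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

# Hedged sketch: greedy forward selection as a stand-in for floating (SFFS) selection.
X, y = make_classification(n_samples=200, n_features=30, n_informative=6, random_state=0)
selector = SequentialFeatureSelector(
    SVC(kernel="linear"), n_features_to_select=8, direction="forward", cv=5
)
selector.fit(X, y)
print(selector.get_support(indices=True))   # indices of the retained features
```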
Collapse
Affiliation(s)
- Mingkuan Jiang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, People's Republic of China
| | - Lu Han
- Department of Radiology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Shenyang, People's Republic of China
| | - Hang Sun
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, People's Republic of China
| | - Jing Li
- Department of Radiology, Affiliated Hospital of Guizhou Medical University, Guiyang, People's Republic of China
| | - Nan Bao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, People's Republic of China
| | - Hong Li
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, People's Republic of China
| | - Shi Zhou
- Department of Radiology, Affiliated Hospital of Guizhou Medical University, Guiyang, People's Republic of China
| | - Tao Yu
- Department of Radiology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Shenyang, People's Republic of China
| |
Collapse
|
40
|
Tang P, Zu C, Hong M, Yan R, Peng X, Xiao J, Wu X, Zhou J, Zhou L, Wang Y. DA-DSUnet: Dual Attention-based Dense SU-net for automatic head-and-neck tumor segmentation in MRI images. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.12.085] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
|
41
|
Li C, Xu J, Liu Q, Zhou Y, Mou L, Pu Z, Xia Y, Zheng H, Wang S. Multi-View Mammographic Density Classification by Dilated and Attention-Guided Residual Learning. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2021; 18:1003-1013. [PMID: 32012021 DOI: 10.1109/tcbb.2020.2970713] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Breast density is widely adopted to reflect the likelihood of early breast cancer development. Existing methods of mammographic density classification either require steps of manual operations or achieve only moderate classification accuracy due to the limited model capacity. In this study, we present a radiomics approach based on dilated and attention-guided residual learning for the task of mammographic density classification. The proposed method was instantiated with two datasets, one clinical dataset and one publicly available dataset, and classification accuracies of 88.7 and 70.0 percent were obtained, respectively. Although the classification accuracy of the public dataset was lower than the clinical dataset, which was very likely related to the dataset size, our proposed model still achieved a better performance than the naive residual networks and several recently published deep learning-based approaches. Furthermore, we designed a multi-stream network architecture specifically targeting at analyzing the multi-view mammograms. Utilizing the clinical dataset, we validated that multi-view inputs were beneficial to the breast density classification task with an increase of at least 2.0 percent in accuracy and the different views lead to different model classification capacities. Our method has a great potential to be further developed and applied in computer-aided diagnosis systems. Our code is available at https://github.com/lich0031/Mammographic_Density_Classification.
Collapse
|
42
|
Hou X, Bai Y, Xie Y, Li Y. Mass segmentation for whole mammograms via attentive multi-task learning framework. Phys Med Biol 2021; 66. [PMID: 33882475 DOI: 10.1088/1361-6560/abfa35] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2020] [Accepted: 04/21/2021] [Indexed: 01/19/2023]
Abstract
Mass segmentation in the mammogram is a necessary and challenging task in the computer-aided diagnosis of breast cancer. Most existing methods tended to segment the mass by manually or automatically extracting mass-centered image patches. However, manual patch extraction is time-consuming, and automatic patch extraction can introduce errors that affect the performance of subsequent segmentation. In order to improve the efficiency of mass segmentation and reduce segmentation errors, we proposed a novel mass segmentation method based on an attentive multi-task learning network (MTLNet), an end-to-end model that accurately segments the mass in the whole mammogram directly, without extracting mass-centered image patches in advance. In MTLNet, we applied group convolution to the feature extraction network, which not only reduces the redundancy of the network but also improves its capacity for feature learning. Secondly, an attention mechanism is added to the backbone to highlight the feature channels that contain rich information. Eventually, the multi-task learning framework is employed in the model, which reduces the risk of overfitting and enables the model not only to segment the mass but also to classify and locate it. We used five-fold cross-validation to evaluate the performance of the proposed method under detection and segmentation tasks on the two public mammographic datasets INbreast and CBIS-DDSM, and our method achieved a Dice index of 0.826 on INbreast and 0.863 on CBIS-DDSM.
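The parameter saving from group convolution mentioned above is easy to verify with a short sketch; the channel counts and group number are arbitrary illustrative choices, not the MTLNet configuration.

```python
import torch
import torch.nn as nn

# Hedged sketch: grouped convolution splits channels into independent groups,
# cutting weights roughly by the group count relative to a full convolution.
full = nn.Conv2d(64, 64, kernel_size=3, padding=1)               # 64*64*9 + 64 weights
grouped = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=8)  # 64*8*9 + 64 weights
x = torch.randn(1, 64, 32, 32)
print(sum(p.numel() for p in full.parameters()),      # 36928
      sum(p.numel() for p in grouped.parameters()),   # 4672
      grouped(x).shape)                                # same output shape
```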
Collapse
Affiliation(s)
- Xuan Hou
- Northwestern Polytechnical University, Xi'an, 710072, CHINA
| | - Yunpeng Bai
- Northwestern Polytechnical University, Xi'an, CHINA
| | - Yefan Xie
- Northwestern Polytechnical University, Xi'an, CHINA
| | - Ying Li
- Northwestern Polytechnical University, Xi'an, CHINA
| |
Collapse
|
43
|
Adaptive channel and multiscale spatial context network for breast mass segmentation in full-field mammograms. APPL INTELL 2021. [DOI: 10.1007/s10489-021-02297-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
44
|
Zahoor S, Lali IU, Khan MA, Javed K, Mehmood W. Breast Cancer Detection and Classification using Traditional Computer Vision Techniques: A Comprehensive Review. Curr Med Imaging 2021; 16:1187-1200. [PMID: 32250226 DOI: 10.2174/1573405616666200406110547] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2019] [Revised: 12/25/2019] [Accepted: 01/03/2020] [Indexed: 11/22/2022]
Abstract
Breast cancer is a common and dangerous disease among women, and many women around the world have died of it. However, diagnosis at an initial stage can save a woman's life. There are several techniques and methods to diagnose cancer in breast tissue. Image processing, machine learning, and deep learning methods and techniques for diagnosing breast cancer are presented in this paper. This work should help in adopting better choices and reliable methods for diagnosing breast cancer at an initial stage to save women's lives. Different techniques are used in the phases of Computer-Aided Diagnosis (CAD) systems, such as preprocessing, segmentation, feature extraction, and classification, to detect breast masses, microcalcifications, and malignant cells. We have reported a detailed analysis of different techniques and methods with their usage and performance measurements. From the reported results, it is concluded that, for breast cancer survival, it is essential to improve the methods and techniques for diagnosing it at an initial stage by improving the results of Computer-Aided Diagnosis systems. Furthermore, the segmentation and classification phases remain challenging for researchers seeking to diagnose breast cancer accurately. Therefore, more advanced tools and techniques are still essential for the accurate diagnosis and classification of breast cancer.
Collapse
Affiliation(s)
- Saliha Zahoor
- Department of Computer Science, University of Gujrat, Gujrat, Pakistan
| | - Ikram Ullah Lali
- Department of Information Technology, University of Education, Lahore, Pakistan
| | - Muhammad Attique Khan
- Department of Computer Science, HITEC University, Museum Road Taxila, Rawalpindi, Pakistan
| | - Kashif Javed
- Department of Robotics, SMME NUST, Islamabad, Pakistan
| | - Waqar Mehmood
- Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
| |
Collapse
|
45
|
Wang Y, Wang S, Chen J, Wu C. Whole mammographic mass segmentation using attention mechanism and multiscale pooling adversarial network. J Med Imaging (Bellingham) 2020; 7:054503. [PMID: 33102621 DOI: 10.1117/1.jmi.7.5.054503] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2020] [Accepted: 09/28/2020] [Indexed: 12/24/2022] Open
Abstract
Purpose: Since a breast mass is a clear sign of breast cancer, its precise segmentation is of great significance for the diagnosis of breast cancer. However, current diagnosis relies mainly on radiologists, who spend time extracting features manually, which inevitably reduces the efficiency of diagnosis. Therefore, designing an automatic segmentation method is urgently necessary for the accurate segmentation of breast masses. Approach: We propose an effective attention mechanism and multiscale pooling conditional generative adversarial network (AM-MSP-cGAN), which achieves accurate automatic mass segmentation in whole mammograms. In AM-MSP-cGAN, U-Net is used as the generator network, with an attention mechanism (AM) incorporated into it, which allows U-Net to pay more attention to the target mass regions without additional cost. As the discriminator network, a convolutional neural network with a multiscale pooling module is used to learn more meticulous features from masses with rough and fuzzy boundaries. The proposed model is trained and tested on two public datasets: CBIS-DDSM and INbreast. Results: Compared with other state-of-the-art methods, AM-MSP-cGAN achieves better segmentation results in terms of the dice similarity coefficient (Dice) and Hausdorff distance metrics, achieving top scores of 84.49% and 5.01 on CBIS-DDSM, and 83.92% and 5.81 on INbreast, respectively. Qualitative and quantitative experiments therefore illustrate that the proposed model is effective and robust for mass segmentation in whole mammograms. Conclusions: The proposed deep learning model is suitable for the automatic segmentation of breast masses and provides technical assistance for subsequent pathological structure analysis.
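For reference, the two metrics quoted above can be computed for binary masks as sketched below; restricting the Hausdorff distance to contour points, as many implementations do, is omitted here for brevity, so treat this as an illustrative assumption.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary masks (hedged sketch)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

def hausdorff(pred, gt):
    """Symmetric Hausdorff distance over foreground pixel coordinates."""
    p, g = np.argwhere(pred), np.argwhere(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

a = np.zeros((32, 32), bool); a[8:16, 8:16] = True
b = np.zeros((32, 32), bool); b[10:18, 10:18] = True
print(dice(a, b), hausdorff(a, b))
```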
Collapse
Affiliation(s)
- Yuehang Wang
- Jilin University, College of Software, Changchun, China
| | - Shengsheng Wang
- Jilin University, College of Computer Science and Technology, Changchun, China
| | - Juan Chen
- Jilin University, College of Computer Science and Technology, Changchun, China
| | - Chun Wu
- Jilin University, College of Software, Changchun, China
| |
Collapse
|
46
|
Fujioka T, Yashima Y, Oyama J, Mori M, Kubota K, Katsuta L, Kimura K, Yamaga E, Oda G, Nakagawa T, Kitazume Y, Tateishi U. Deep-learning approach with convolutional neural network for classification of maximum intensity projections of dynamic contrast-enhanced breast magnetic resonance imaging. Magn Reson Imaging 2020; 75:1-8. [PMID: 33045323 DOI: 10.1016/j.mri.2020.10.003] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2020] [Revised: 08/27/2020] [Accepted: 10/06/2020] [Indexed: 02/05/2023]
Abstract
PURPOSE We aimed to evaluate deep learning approach with convolutional neural networks (CNNs) to discriminate between benign and malignant lesions on maximum intensity projections of dynamic contrast-enhanced breast magnetic resonance imaging (MRI). METHODS We retrospectively gathered maximum intensity projections of dynamic contrast-enhanced breast MRI of 106 benign (including 22 normal) and 180 malignant cases for training and validation data. CNN models were constructed to calculate the probability of malignancy using CNN architectures (DenseNet121, DenseNet169, InceptionResNetV2, InceptionV3, NasNetMobile, and Xception) with 500 epochs and analyzed that of 25 benign (including 12 normal) and 47 malignant cases for test data. Two human readers also interpreted these test data and scored the probability of malignancy for each case using Breast Imaging Reporting and Data System. Sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC) were calculated. RESULTS The CNN models showed a mean AUC of 0.830 (range, 0.750-0.895). The best model was InceptionResNetV2. This model, Reader 1, and Reader 2 had sensitivities of 74.5%, 72.3%, and 78.7%; specificities of 96.0%, 88.0%, and 80.0%; and AUCs of 0.895, 0.823, and 0.849, respectively. No significant difference arose between the CNN models and human readers (p > 0.125). CONCLUSION Our CNN models showed comparable diagnostic performance in differentiating between benign and malignant lesions to human readers on maximum intensity projection of dynamic contrast-enhanced breast MRI.
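The transfer-learning setup described above, an ImageNet-pretrained backbone with its classification head replaced for a two-class benign/malignant problem, can be sketched as follows; the torchvision weights argument is version-dependent and the training loop is omitted, so treat this purely as an illustrative assumption rather than the authors' pipeline.

```python
import torch.nn as nn
from torchvision import models

# Hedged sketch: reuse an ImageNet-pretrained DenseNet121 and swap its head
# for two output classes (benign vs. malignant). Requires torchvision >= 0.13;
# older releases use models.densenet121(pretrained=True) instead.
backbone = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
backbone.classifier = nn.Linear(backbone.classifier.in_features, 2)
```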
Collapse
Affiliation(s)
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
| | - Yuka Yashima
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
| | - Jun Oyama
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
| | - Mio Mori
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan.
| | - Kazunori Kubota
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan; Department of Radiology, Dokkyo Medical University, Tochigi, Japan
| | - Leona Katsuta
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
| | - Koichiro Kimura
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
| | - Emi Yamaga
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
| | - Goshi Oda
- Department of Surgery, Breast Surgery, Tokyo Medical and Dental University, Tokyo, Japan
| | - Tsuyoshi Nakagawa
- Department of Surgery, Breast Surgery, Tokyo Medical and Dental University, Tokyo, Japan
| | - Yoshio Kitazume
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
| | - Ukihide Tateishi
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
| |
Collapse
|