1. Sun L, Han B, Jiang W, Liu W, Liu B, Tao D, Yu Z, Li C. Multi-scale region selection network in deep features for full-field mammogram classification. Med Image Anal 2025; 100:103399. [PMID: 39615148] [DOI: 10.1016/j.media.2024.103399] [Received: 02/12/2024] [Revised: 11/14/2024] [Accepted: 11/17/2024] [Indexed: 12/16/2024]
Abstract
Early diagnosis and treatment of breast cancer can effectively reduce mortality. Since mammography is one of the most commonly used methods for early diagnosis of breast cancer, classifying mammogram images is an important task for computer-aided diagnosis (CAD) systems. With the development of deep learning in CAD, deep convolutional neural networks (CNNs) have been shown to classify breast tumor patches with high quality, which has led most previous CNN-based full-field mammography classification methods to rely on region of interest (ROI) or segmentation annotations so that the model can locate and focus on small tumor regions. However, this dependence on ROIs greatly limits the development of CAD, because obtaining a large number of reliable ROI annotations is expensive and difficult. Some full-field mammography classification algorithms avoid the dependence on ROIs through multi-stage training or multiple feature extractors, which increases the model's computational cost and feature redundancy. To reduce the cost of model training and make full use of the feature extraction capability of CNNs, we propose a multi-scale region selection network (MRSN) in deep features, trained end to end to classify full-field mammograms without ROI or segmentation annotations. Inspired by multiple-instance learning and patch classifiers, MRSN filters the feature information and retains only the features of the tumor region, bringing the performance of the full-field image classifier closer to that of a patch classifier. MRSN first scores different regions under different dimensions to obtain the location of tumor regions, then selects a few high-scoring regions as feature representations of the entire image, allowing the model to focus on the tumor region.
Experiments on two public datasets and one private dataset demonstrate that the proposed MRSN achieves state-of-the-art performance.
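The core selection step described here, scoring regions and keeping only a few high scorers as the whole-image representation, can be sketched in plain Python. Function names and toy values below are illustrative assumptions, not the authors' code; in the real model both the region features and the scores are learned.

```python
# Hedged sketch of MRSN-style region selection: score feature-map regions,
# keep the top-k highest-scoring ones, and average them so the image-level
# representation is dominated by tumor-like regions.

def select_top_k_regions(region_features, region_scores, k=2):
    """Return the mean of the k highest-scoring region feature vectors."""
    ranked = sorted(zip(region_scores, region_features),
                    key=lambda pair: pair[0], reverse=True)
    chosen = [feat for _, feat in ranked[:k]]
    dim = len(chosen[0])
    return [sum(f[i] for f in chosen) / len(chosen) for i in range(dim)]

# Four 3-dim region features; regions 1 and 3 score highest (tumor-like).
feats = [[0.1, 0.2, 0.1], [0.9, 0.8, 0.7], [0.2, 0.1, 0.3], [0.7, 0.9, 0.8]]
scores = [0.05, 0.92, 0.10, 0.88]
rep = select_top_k_regions(feats, scores, k=2)
```

Because only the selected regions contribute to `rep`, background regions cannot dilute the representation, which is the stated motivation for the method.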
Affiliation(s)
- Luhao Sun: Breast Cancer Center, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China
- Bowen Han: School of Computer Science and Technology, Tongji University, Shanghai 201804, China
- Wenzong Jiang: The College of Oceanography and Space Informatics, China University of Petroleum (East China), Qingdao 266580, China
- Weifeng Liu: The College of Control Science and Engineering, China University of Petroleum (East China), Qingdao 266580, China
- Baodi Liu: The College of Control Science and Engineering, China University of Petroleum (East China), Qingdao 266580, China
- Dapeng Tao: The School of Information Science and Engineering, Yunnan University, Yunnan 650504, China; Yunnan United Vision Technology Co., Ltd., Yunnan 650504, China
- Zhiyong Yu: Breast Cancer Center, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China
- Chao Li: Breast Cancer Center, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China
2. Meenakshi Devi P, Muna A, Ali Y, Sumanth V. Effective BCDNet-based breast cancer classification model using hybrid deep learning with VGG16-based optimal feature extraction. BMC Med Imaging 2025; 25:12. [PMID: 39780045] [PMCID: PMC11707918] [DOI: 10.1186/s12880-024-01538-4] [Received: 08/31/2024] [Accepted: 12/17/2024] [Indexed: 01/11/2025]
Abstract
PROBLEM Breast cancer is a leading cause of death among women, and early detection is crucial for improving survival rates. Manual breast cancer diagnosis is time-consuming and subjective, and previous CAD models mostly depend on handcrafted visual features that are difficult to generalize across ultrasound images acquired with different techniques. Earlier works have used imaging tools such as mammography and MRI; however, these are costlier and less portable than ultrasound imaging, which is a non-invasive method commonly used for breast cancer screening. Hence, this paper presents a novel deep learning model, BCDNet, for classifying breast tumors as benign or malignant using ultrasound images. AIM The primary aim of the study is to design an effective breast cancer diagnosis model that can accurately classify tumors in their early stages, thus reducing mortality rates. The model optimizes its weights and parameters using the RPAOSM-ESO algorithm to enhance accuracy and minimize the false negative rate. METHODS The BCDNet model uses transfer learning from a pre-trained VGG16 network for feature extraction and employs an AHDNAM classification approach, which includes ASPP, DTCN, 1DCNN, and an attention mechanism. The RPAOSM-ESO algorithm fine-tunes the weights and parameters. RESULTS The RPAOSM-ESO-BCDNet-based breast cancer diagnosis model achieved an accuracy of 94.5%, higher than previous models such as DTCN (88.2%), 1DCNN (89.6%), MobileNet (91.3%), and ASPP-DTC-1DCNN-AM (93.8%), indicating that the designed RPAOSM-ESO-BCDNet produces more accurate classifications than the previous models. CONCLUSION The BCDNet model, with its feature extraction and classification techniques optimized by the RPAOSM-ESO algorithm, shows promise in accurately classifying breast tumors from ultrasound images.
The study suggests that the model could be a valuable tool for early detection of breast cancer, potentially saving lives and reducing the burden on healthcare systems.
Affiliation(s)
- Meenakshi Devi P: Department of Information Technology, K.S.R. College of Engineering, Tiruchengode, Tamilnadu, 637215, India
- Muna A: Centre for Research Impact & Outcome, Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, 140401, India
- Yasser Ali: Chitkara Centre for Research and Development, Chitkara University, Baddi, Himachal Pradesh, 174103, India
- Sumanth V: Department of Information Technology, Manipal Institute of Technology Bengaluru, Manipal Academy of Higher Education, Manipal, Karnataka, 576104, India
3. Pérez-Núñez JR, Rodríguez C, Vásquez-Serpa LJ, Navarro C. The Challenge of Deep Learning for the Prevention and Automatic Diagnosis of Breast Cancer: A Systematic Review. Diagnostics (Basel) 2024; 14:2896. [PMID: 39767257] [PMCID: PMC11675111] [DOI: 10.3390/diagnostics14242896] [Received: 10/17/2024] [Revised: 11/24/2024] [Accepted: 12/18/2024] [Indexed: 01/11/2025]
Abstract
OBJECTIVES This review aims to evaluate several convolutional neural network (CNN) models applied to breast cancer detection, to identify and categorize CNN variants in recent studies, and to analyze their specific strengths, limitations, and challenges. METHODS Using PRISMA methodology, this review examines studies that focus on deep learning techniques, specifically CNN, for breast cancer detection. Inclusion criteria encompassed studies from the past five years, with duplicates and those unrelated to breast cancer excluded. A total of 62 articles from the IEEE, SCOPUS, and PubMed databases were analyzed, exploring CNN architectures and their applicability in detecting this pathology. RESULTS The review found that CNN models with advanced architecture and greater depth exhibit high accuracy and sensitivity in image processing and feature extraction for breast cancer detection. CNN variants that integrate transfer learning proved particularly effective, allowing the use of pre-trained models with less training data required. However, challenges include the need for large, labeled datasets and significant computational resources. CONCLUSIONS CNNs represent a promising tool in breast cancer detection, although future research should aim to create models that are more resource-efficient and maintain accuracy while reducing data requirements, thus improving clinical applicability.
Affiliation(s)
- Jhelly-Reynaluz Pérez-Núñez: Facultad de Ingeniería de Sistemas e Informática, Universidad Nacional Mayor de San Marcos (UNMSM), Lima 15081, Peru; (C.R.); (L.-J.V.-S.); (C.N.)
4. Huang W, Zhang L, Wang Z, Wang L. Exploring Inherent Consistency for Semi-Supervised Anatomical Structure Segmentation in Medical Imaging. IEEE Transactions on Medical Imaging 2024; 43:3731-3741. [PMID: 38743533] [DOI: 10.1109/tmi.2024.3400840] [Indexed: 05/16/2024]
Abstract
Due to the exorbitant expense of obtaining labeled data in the field of medical image analysis, semi-supervised learning has emerged as a favorable method for the segmentation of anatomical structures. Although semi-supervised learning techniques have shown great potential in this field, existing methods only utilize image-level spatial consistency to impose unsupervised regularization on data in label space. Considering that anatomical structures often possess inherent anatomical properties that have not been focused on in previous works, this study introduces the inherent consistency into semi-supervised anatomical structure segmentation. First, the prediction and the ground-truth are projected into an embedding space to obtain latent representations that encapsulate the inherent anatomical properties of the structures. Then, two inherent consistency constraints are designed to leverage these inherent properties by aligning these latent representations. The proposed method is plug-and-play and can be seamlessly integrated with existing methods, thereby collaborating to improve segmentation performance and enhance the anatomical plausibility of the results. To evaluate the effectiveness of the proposed method, experiments are conducted on three public datasets (ACDC, LA, and Pancreas). Extensive experimental results demonstrate that the proposed method exhibits good generalizability and outperforms several state-of-the-art methods.
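The alignment idea in this abstract, projecting both the prediction and the ground truth through a shared embedding and penalizing disagreement between the latent vectors, can be sketched as follows. The toy linear embedding and the mean-squared penalty are assumptions for illustration, not the paper's actual design.

```python
# Illustrative inherent-consistency constraint (not the paper's code):
# embed prediction and ground truth with the SAME projection, then
# penalize the distance between the resulting latent representations.

def embed(x, weights):
    """Toy linear embedding: latent[j] = sum_i x[i] * weights[i][j]."""
    dims_out = len(weights[0])
    return [sum(x[i] * weights[i][j] for i in range(len(x)))
            for j in range(dims_out)]

def consistency_loss(pred, target, weights):
    z_pred, z_true = embed(pred, weights), embed(target, weights)
    return sum((a - b) ** 2 for a, b in zip(z_pred, z_true)) / len(z_pred)

W = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]              # fixed 3 -> 2 projection
loss_same = consistency_loss([1, 0, 1], [1, 0, 1], W)  # identical masks
loss_diff = consistency_loss([1, 0, 1], [0, 1, 1], W)  # disagreeing masks
```

Because the penalty lives in the embedding space rather than the label space, it can encode structural properties of the anatomy that plain pixel-wise consistency misses, which is the point the abstract makes.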
5. Naeem OB, Saleem Y. CSA-Net: Channel and Spatial Attention-Based Network for Mammogram and Ultrasound Image Classification. J Imaging 2024; 10:256. [PMID: 39452419] [PMCID: PMC11508210] [DOI: 10.3390/jimaging10100256] [Received: 08/12/2024] [Revised: 10/12/2024] [Accepted: 10/14/2024] [Indexed: 10/26/2024]
Abstract
Breast cancer persists as a critical global health concern, underscoring the need for reliable diagnostic strategies that improve patient survival rates. To address this challenge, a computer-aided diagnostic methodology for breast cancer classification is proposed, employing an architecture that combines a pre-trained EfficientNet-B0 model with channel and spatial attention mechanisms. The efficacy of leveraging attention mechanisms for breast cancer classification is investigated. The proposed model performs well in classification tasks, showing significant improvements once attention mechanisms are integrated. The model is also versatile across imaging modalities, robustly classifying breast lesions not only in mammograms but also in ultrasound images during cross-modality evaluation. It achieved an accuracy of 99.9% for binary classification on the mammogram dataset and 92.3% on the cross-modality multi-class dataset. The experimental results emphasize the superiority of the proposed method over current state-of-the-art approaches for breast cancer classification.
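CSA-Net's exact attention design is not spelled out in this abstract; a squeeze-and-excitation-style channel attention, a common form of the channel branch, can be sketched as follows. Every detail below (pooling choice, sigmoid gate, toy activations) is an assumption for illustration.

```python
import math

# Minimal channel-attention sketch: pool each channel to a scalar
# ("squeeze"), squash it to a weight in (0, 1) with a sigmoid gate, and
# rescale the channel's activations by that weight.

def channel_attention(feature_maps):
    """feature_maps: list of channels, each a 2D list of activations."""
    weights = []
    for ch in feature_maps:
        pooled = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        weights.append(1.0 / (1.0 + math.exp(-pooled)))  # sigmoid
    scaled = [[[v * w for v in row] for row in ch]
              for ch, w in zip(feature_maps, weights)]
    return scaled, weights

maps = [[[1.0, 1.0], [1.0, 1.0]],   # strongly activated channel
        [[0.0, 0.0], [0.0, 0.0]]]   # silent channel
scaled, w = channel_attention(maps)
```

The active channel receives a larger weight than the silent one, so informative channels are amplified relative to uninformative ones, which is the intuition behind attention-based reweighting.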
Affiliation(s)
- Osama Bin Naeem: Department of Electrical Engineering, University of Engineering and Technology, Lahore-Narowal Campus, Narowal 51600, Pakistan; Department of Computer Engineering, University of Engineering and Technology, Lahore 39161, Pakistan
- Yasir Saleem: Department of Computer Engineering, University of Engineering and Technology, Lahore 39161, Pakistan
6. Han B, Sun L, Li C, Yu Z, Jiang W, Liu W, Tao D, Liu B. Deep Location Soft-Embedding-Based Network With Regional Scoring for Mammogram Classification. IEEE Transactions on Medical Imaging 2024; 43:3137-3148. [PMID: 38625766] [DOI: 10.1109/tmi.2024.3389661] [Indexed: 04/18/2024]
Abstract
Early detection and treatment of breast cancer can significantly reduce patient mortality, and mammography is an effective method for early screening. Computer-aided diagnosis (CAD) of mammography based on deep learning can assist radiologists in making more objective and accurate judgments. However, existing methods often depend on datasets with manual segmentation annotations. In addition, because of the large image sizes and small lesion proportions, many methods that do not use regions of interest (ROIs) rely on multi-scale and multi-feature fusion models. These shortcomings increase the labor, money, and computational overhead of applying the model. Therefore, a deep location soft-embedding-based network with regional scoring (DLSEN-RS) is proposed. DLSEN-RS is an end-to-end mammography image classification method containing only one feature extractor; it relies on positional embedding (PE) and aggregation pooling (AP) modules to locate lesion areas without bounding boxes, transfer learning, or multi-stage training. In particular, the introduced PE and AP modules are versatile across various CNN models and improve the model's tumor localization and diagnostic accuracy for mammography images. Experiments on the public INbreast and CBIS-DDSM datasets show that DLSEN-RS performs favorably against previous state-of-the-art mammographic image classification methods.
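One plausible reading of an aggregation-pooling step, and this is an assumption rather than the paper's actual AP module, is a score-weighted average of region features: softmax the regional scores so that high-scoring lesion regions dominate the image-level representation without any bounding box.

```python
import math

# Hedged sketch of score-weighted aggregation pooling (illustrative only):
# regions with higher scores contribute more to the pooled feature vector.

def aggregation_pool(region_features, region_scores):
    exps = [math.exp(s) for s in region_scores]
    total = sum(exps)
    weights = [e / total for e in exps]            # softmax over regions
    dim = len(region_features[0])
    return [sum(w * f[i] for w, f in zip(weights, region_features))
            for i in range(dim)]

feats = [[0.0, 0.0], [1.0, 2.0]]                    # background vs lesion-like
pooled_flat = aggregation_pool(feats, [0.0, 0.0])   # equal scores: plain mean
pooled_peaked = aggregation_pool(feats, [0.0, 8.0]) # lesion region dominates
```

With uniform scores the operation reduces to average pooling; as one score grows it approaches max-style selection, which is why soft score-weighted pooling is a common middle ground.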
7. Bhalla D, Rangarajan K, Chandra T, Banerjee S, Arora C. Reproducibility and Explainability of Deep Learning in Mammography: A Systematic Review of Literature. Indian J Radiol Imaging 2024; 34:469-487. [PMID: 38912238] [PMCID: PMC11188703] [DOI: 10.1055/s-0043-1775737] [Indexed: 06/25/2024]
Abstract
Background Although abundant literature is currently available on the use of deep learning for breast cancer detection in mammography, the quality of such literature is widely variable. Purpose To evaluate published literature on breast cancer detection in mammography for reproducibility and to ascertain best practices for model design. Methods The PubMed and Scopus databases were searched to identify records that described the use of deep learning to detect lesions or classify images into cancer or noncancer. A modified Quality Assessment of Diagnostic Accuracy Studies (mQUADAS-2) tool was developed for this review and applied to the included studies. Results of the reported studies (area under the receiver operating characteristic curve [AUC], sensitivity, specificity) were recorded. Results A total of 12,123 records were screened, of which 107 fit the inclusion criteria. Training and test datasets, the key idea behind each model architecture, and results were recorded for these studies. Based on the mQUADAS-2 assessment, 103 studies had a high risk of bias due to nonrepresentative patient selection. Four studies were of adequate quality; three of these trained their own model and one used a commercial network, with ensemble models used in two. Common strategies for model training included patch classifiers, image classification networks (ResNet in 67%), and object detection networks (RetinaNet in 67%). The highest reported AUC was 0.927 ± 0.008 on a screening dataset, rising to 0.945 (0.919-0.968) on an enriched subset. Higher AUC (0.955) and specificity (98.5%) were reached when combined radiologist and artificial intelligence readings were used than with either alone. None of the studies provided explainability beyond localization accuracy, and none examined the interaction between AI and radiologists in a real-world setting.
Conclusion While deep learning holds much promise in mammography interpretation, evaluation in reproducible clinical settings and explainable networks are the need of the hour.
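The AUC figures this review compares can be read as rank statistics: the AUC is the probability that a randomly chosen cancer case is scored higher than a randomly chosen non-cancer case. A minimal pairwise implementation makes that interpretation concrete (toy scores below, not study data):

```python
# Pairwise ROC AUC: fraction of (positive, negative) pairs in which the
# positive case outranks the negative one; ties count as half a win.

def roc_auc(scores_pos, scores_neg):
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

auc = roc_auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2])  # 8 of 9 pairs correctly ordered
```

This pairwise form is equivalent to the area under the ROC curve and explains why an AUC of 0.5 means chance-level ranking.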
Affiliation(s)
- Deeksha Bhalla: Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Krithika Rangarajan: Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Tany Chandra: Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Subhashis Banerjee: Department of Computer Science and Engineering, Indian Institute of Technology, New Delhi, India
- Chetan Arora: Department of Computer Science and Engineering, Indian Institute of Technology, New Delhi, India
8. Qian N, Jiang W, Wu X, Zhang N, Yu H, Guo Y. Lesion attention guided neural network for contrast-enhanced mammography-based biomarker status prediction in breast cancer. Computer Methods and Programs in Biomedicine 2024; 250:108194. [PMID: 38678959] [DOI: 10.1016/j.cmpb.2024.108194] [Received: 12/06/2023] [Revised: 04/13/2024] [Accepted: 04/21/2024] [Indexed: 05/01/2024]
Abstract
BACKGROUND AND OBJECTIVE Accurate identification of molecular biomarker status is crucial in cancer diagnosis, treatment, and prognosis. Studies have demonstrated that medical images can be used for non-invasive prediction of biomarker statuses, and the biomarker status-associated features extracted from medical images are essential for developing such non-invasive prediction models. Contrast-enhanced mammography (CEM) is a promising imaging technique for breast cancer diagnosis. This study aims to develop a neural network-based method to extract biomarker-related image features from CEM images and to evaluate the potential of CEM for non-invasive biomarker status prediction. METHODS An end-to-end convolutional neural network taking whole-breast images as inputs was proposed to extract CEM features for biomarker status prediction in breast cancer. The network focuses on lesion regions and flexibly extracts image features from lesion and peri-tumor regions through supervised learning with a smooth L1-based consistency constraint. An image-level weakly supervised segmentation network, based on a Vision Transformer with cross attention that contrasts images of breasts with lesions against the contralateral breast images, was developed for automatic lesion segmentation. Finally, prediction models were developed after selecting significant features and applying random forest-based classification. Results are reported using the area under the curve (AUC), accuracy, sensitivity, and specificity. RESULTS A dataset from 1203 breast cancer patients was used to develop and evaluate the proposed method. Compared to variants without lesion attention or with only lesion regions as inputs, the proposed method performed better at biomarker status prediction.
Specifically, it achieved an AUC of 0.71 (95% confidence interval [CI]: 0.65, 0.77) for Ki-67 and 0.73 (95% CI: 0.65, 0.80) for human epidermal growth factor receptor 2 (HER2). CONCLUSIONS A lesion attention-guided neural network was proposed in this work to extract CEM image features for biomarker status prediction in breast cancer. The promising results demonstrate the potential of CEM for non-invasively predicting biomarker statuses in breast cancer.
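The smooth L1-based consistency constraint mentioned in this abstract uses a standard loss: quadratic near zero, linear in the tails. The elementwise application to two feature vectors and the transition point `beta = 1.0` below are assumptions for illustration.

```python
# Standard smooth L1 (Huber-style) loss, applied elementwise as a
# consistency penalty between two feature vectors.

def smooth_l1(x, beta=1.0):
    ax = abs(x)
    return 0.5 * x * x / beta if ax < beta else ax - 0.5 * beta

def consistency(feat_a, feat_b):
    return sum(smooth_l1(a - b) for a, b in zip(feat_a, feat_b)) / len(feat_a)

small = consistency([0.2, 0.1], [0.0, 0.0])  # quadratic regime
large = consistency([3.0, 0.0], [0.0, 0.0])  # linear regime
```

The linear tail keeps large feature disagreements from dominating the gradient the way squared error would, which is the usual reason for choosing smooth L1 in a consistency term.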
Affiliation(s)
- Nini Qian: Department of Biomedical Engineering, Medical School, Tianjin University, Tianjin 300072, China; State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China
- Wei Jiang: Department of Biomedical Engineering, Medical School, Tianjin University, Tianjin 300072, China; State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China; Department of Radiotherapy, Yantai Yuhuangding Hospital, Shandong 264000, China
- Xiaoqian Wu: Department of Radiation Oncology, The Affiliated Hospital of Qingdao University, Qingdao 266071, China
- Ning Zhang: Department of Biomedical Engineering, Medical School, Tianjin University, Tianjin 300072, China; State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China
- Hui Yu: Department of Biomedical Engineering, Medical School, Tianjin University, Tianjin 300072, China; State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China
- Yu Guo: Department of Biomedical Engineering, Medical School, Tianjin University, Tianjin 300072, China; State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China
9. Pan X, Wang P, Jia S, Wang Y, Liu Y, Zhang Y, Jiang C. Multi-contrast learning-guided lightweight few-shot learning scheme for predicting breast cancer molecular subtypes. Med Biol Eng Comput 2024; 62:1601-1613. [PMID: 38316663] [DOI: 10.1007/s11517-024-03031-0] [Received: 06/10/2023] [Accepted: 12/27/2023] [Indexed: 02/07/2024]
Abstract
Invasive gene expression profiling studies have revealed prognostically significant breast cancer subtypes: normal-like, luminal, HER-2 enriched, and basal-like, defined in large part by human epidermal growth factor receptor 2 (HER-2), progesterone receptor (PR), and estrogen receptor (ER) status. Although dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is widely employed in the screening and therapy of breast cancer, noninvasively predicting breast cancer molecular subtypes remains challenging because labeled data are extremely scarce. In this paper, a novel few-shot learning scheme combining a lightweight contrastive convolutional neural network (LC-CNN) and a multi-contrast learning strategy (MCLS) is developed to predict the molecular subtype of breast cancer from DCE-MRI. MCLS constructs One-vs-Rest and One-vs-One classification tasks to address the inter-class similarity among the normal-like, luminal, HER-2 enriched, and basal-like subtypes. Extensive experiments demonstrate the superiority of the proposed scheme over state-of-the-art methods. Furthermore, the scheme achieves competitive results on few samples because the joint LC-CNN and MCLS excavate contrastive correlations between pairs of DCE-MRI scans.
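The One-vs-Rest and One-vs-One task construction that MCLS relies on can be enumerated directly. The subtype names come from the abstract; everything else below is an illustrative sketch (only the task enumeration is shown, not the training itself).

```python
from itertools import combinations

# Build every One-vs-Rest (one class against all others) and One-vs-One
# (each unordered class pair) binary task from the four molecular subtypes.

SUBTYPES = ["normal-like", "luminal", "HER-2 enriched", "basal-like"]

def build_tasks(classes):
    ovr = [(c, [other for other in classes if other != c]) for c in classes]
    ovo = list(combinations(classes, 2))
    return ovr, ovo

ovr_tasks, ovo_tasks = build_tasks(SUBTYPES)
```

With four classes this yields 4 One-vs-Rest tasks and 6 One-vs-One tasks; training a binary model per task is the standard way to decompose a hard multi-class problem with similar classes.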
Affiliation(s)
- Xiang Pan: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China; Cancer Center, Faculty of Health Sciences, University of Macau, Macau, SAR, China
- Pei Wang: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
- Shunyuan Jia: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
- Yihang Wang: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
- Yuan Liu: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
- Yan Zhang: Department of Oncology, Wuxi Maternal and Child Health Care Hospital, Jiangnan University, Wuxi, China
- Chunjuan Jiang: Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Shanghai, China
10. Zhong Y, Piao Y, Tan B, Liu J. A multi-task fusion model based on a residual multi-layer perceptron network for mammographic breast cancer screening. Computer Methods and Programs in Biomedicine 2024; 247:108101. [PMID: 38432087] [DOI: 10.1016/j.cmpb.2024.108101] [Received: 04/20/2023] [Revised: 01/13/2024] [Accepted: 02/23/2024] [Indexed: 03/05/2024]
Abstract
BACKGROUND AND OBJECTIVE Deep learning approaches are being increasingly applied for medical computer-aided diagnosis (CAD). However, these methods generally target only specific image-processing tasks, such as lesion segmentation or benign state prediction. For the breast cancer screening task, single feature extraction models are generally used, which directly extract only those potential features from the input mammogram that are relevant to the target task. This can lead to the neglect of other important morphological features of the lesion as well as other auxiliary information from the internal breast tissue. To obtain more comprehensive and objective diagnostic results, in this study, we developed a multi-task fusion model that combines multiple specific tasks for CAD of mammograms. METHODS We first trained a set of separate, task-specific models, including a density classification model, a mass segmentation model, and a lesion benignity-malignancy classification model, and then developed a multi-task fusion model that incorporates all of the mammographic features from these different tasks to yield comprehensive and refined prediction results for breast cancer diagnosis. RESULTS The experimental results showed that our proposed multi-task fusion model outperformed other related state-of-the-art models in both breast cancer screening tasks in the publicly available datasets CBIS-DDSM and INbreast, achieving a competitive screening performance with area-under-the-curve scores of 0.92 and 0.95, respectively. CONCLUSIONS Our model not only allows an overall assessment of lesion types in mammography but also provides intermediate results related to radiological features and potential cancer risk factors, indicating its potential to offer comprehensive workflow support to radiologists.
Affiliation(s)
- Yutong Zhong: School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun 130022, PR China
- Yan Piao: School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun 130022, PR China
- Baolin Tan: Technology Co. LTD, Shenzhen 518000, PR China
- Jingxin Liu: Department of Radiology, China-Japan Union Hospital, Jilin University, Changchun 130033, PR China
11. Guo Y, Zhang H, Yuan L, Chen W, Zhao H, Yu QQ, Shi W. Machine learning and new insights for breast cancer diagnosis. J Int Med Res 2024; 52:3000605241237867. [PMID: 38663911] [PMCID: PMC11047257] [DOI: 10.1177/03000605241237867] [Received: 08/21/2023] [Accepted: 02/21/2024] [Indexed: 04/28/2024]
Abstract
Breast cancer (BC) is the most prominent form of cancer among females worldwide. Current methods of BC detection include X-ray mammography, ultrasound, computed tomography, magnetic resonance imaging, positron emission tomography, and breast thermographic techniques. More recently, machine learning (ML) tools have been increasingly employed in diagnostic medicine for their high efficiency in detection and intervention. Imaging features and mathematical analyses can be used to generate ML models that stratify, differentiate, and detect benign and malignant breast lesions. Given its marked advantages, radiomics is a frequently used tool in recent research and clinics. Artificial neural networks and deep learning (DL) are novel forms of ML that evaluate data using computer simulation of the human brain. DL directly processes unstructured information, such as images, sounds, and language, and performs precise clinical image stratification, medical record analyses, and tumour diagnosis. Herein, this review summarizes prior investigations on the application of medical images for the detection and intervention of BC using radiomics, ML, and DL, with the aim of providing guidance to scientists regarding the use of artificial intelligence and ML in research and the clinic.
Affiliation(s)
- Ya Guo: Department of Oncology, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Heng Zhang: Department of Laboratory Medicine, Shandong Daizhuang Hospital, Jining, Shandong Province, China
- Leilei Yuan: Department of Oncology, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Weidong Chen: Department of Oncology, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Haibo Zhao: Department of Oncology, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Qing-Qing Yu: Phase I Clinical Research Centre, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Wenjie Shi: Molecular and Experimental Surgery, University Clinic for General-, Visceral-, Vascular- and Trans-Plantation Surgery, Medical Faculty University Hospital Magdeburg, Otto von Guericke University, Magdeburg, Germany
12. Zeng J, Gao X, Gao L, Yu Y, Shen L, Pan X. Recognition of rare antinuclear antibody patterns based on a novel attention-based enhancement framework. Brief Bioinform 2024; 25:bbad531. [PMID: 38279651] [PMCID: PMC10818137] [DOI: 10.1093/bib/bbad531] [Received: 09/28/2023] [Revised: 12/17/2023] [Accepted: 12/19/2023] [Indexed: 01/28/2024]
Abstract
Antinuclear antibody (ANA) pattern recognition is a widely applied technology for routine ANA screening in clinical laboratories. In recent years, the application of deep learning methods to recognizing ANA patterns has witnessed remarkable advancements. However, the majority of studies in this field have focused on classifying the most common ANA patterns, while another subset has concentrated on detecting mitotic metaphase cells; to date, no prior research has been specifically dedicated to identifying rare ANA patterns. In the present paper, we introduce a novel attention-based enhancement framework designed to recognize rare ANA patterns in ANA indirect immunofluorescence images. Specifically, we selected the best-performing algorithm as our target detection network through comparative experiments and further enhanced it through a series of optimizations. An attention mechanism was then introduced to help the network learn faster and extract more essential, distinctive features for the target patterns. The proposed approach obtained 86.40% precision, 82.75% recall, an 84.24% F1 score, and 84.64% mean average precision on a 9-category rare ANA pattern detection task on our dataset. Finally, we evaluated the model's potential as a medical technologist assistant and observed that technologists' performance improved after referring to the model's predictions. These promising results highlight its potential as an efficient and reliable tool to assist medical technologists in clinical practice.
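The detection metrics quoted in this abstract follow the standard definitions; a small helper makes the relationships explicit. The counts below are made up, chosen only to exercise the formulas (note that the study's own F1 may be averaged per class, so it need not equal the F1 of the pooled precision and recall).

```python
# Precision, recall, and F1 from true-positive, false-positive, and
# false-negative counts; F1 is the harmonic mean of precision and recall.

def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=20)
```

Because F1 is a harmonic mean, it is pulled toward the weaker of precision and recall, which is why it is preferred over a plain average for imbalanced detection tasks like rare-pattern recognition.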
Affiliation(s)
- Junxiang Zeng
- Department of Clinical Laboratory, Xinhua Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Faculty of Medical Laboratory Science, College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Institute of Artificial Intelligence Medicine, Shanghai Academy of Experimental Medicine, Shanghai, China
- Xiupan Gao
- Department of Clinical Laboratory, Xinhua Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Limei Gao
- Department of Immunology and Rheumatology, Xinhua Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Youyou Yu
- Department of Clinical Laboratory, Xinhua Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Lisong Shen
- Department of Clinical Laboratory, Xinhua Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Faculty of Medical Laboratory Science, College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Institute of Artificial Intelligence Medicine, Shanghai Academy of Experimental Medicine, Shanghai, China
- Xiujun Pan
- Department of Clinical Laboratory, Xinhua Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
13
Chaiyarin S, Rojbundit N, Piyabenjarad P, Limpitigranon P, Wisitthipakdeekul S, Nonthasaen P, Achararit P. Neural architecture search for medicine: A survey. INFORMATICS IN MEDICINE UNLOCKED 2024; 50:101565. [DOI: 10.1016/j.imu.2024.101565] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2025] Open
14
Radha K, Yepuganti K, Saritha S, Kamireddy C, Bavirisetti DP. Unfolded deep kernel estimation-attention UNet-based retinal image segmentation. Sci Rep 2023; 13:20712. [PMID: 38001149 PMCID: PMC10674026 DOI: 10.1038/s41598-023-48039-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2023] [Accepted: 11/21/2023] [Indexed: 11/26/2023] Open
Abstract
Retinal vessel segmentation is a critical process in the automated analysis of fundus images to screen for and diagnose diabetic retinopathy, a widespread complication of diabetes that causes sudden vision loss. Automated retinal vessel segmentation can help to detect these changes more accurately and quickly than manual evaluation by an ophthalmologist. The proposed approach aims to precisely segment blood vessels in retinal images while reducing the complexity and computational cost of the segmentation procedure. This can help to improve the accuracy and reliability of retinal image analysis and assist in diagnosing various eye diseases. Attention U-Net is an essential architecture for retinal image segmentation in diabetic retinopathy that has obtained promising results in improving segmentation accuracy, especially where training data and ground truth are limited. The approach combines a U-Net with an attention mechanism, to focus on the relevant regions of the input image, and the unfolded deep kernel estimation (UDKE) method, to enhance the performance of semantic segmentation models. Extensive experiments were carried out on the STARE, DRIVE, and CHASE_DB datasets, and the proposed method achieved good performance compared to existing methods.
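The attention mechanism in Attention U-Net is an additive attention gate applied to the skip connection. A minimal NumPy sketch of that gate, with illustrative shapes rather than the paper's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, w_x, w_g, psi):
    """Additive attention gate in the style of Attention U-Net (sketch).

    x: (C, N) skip-connection features, g: (C, N) gating features from the
    decoder, w_x, w_g: (F, C) projections, psi: (1, F) scoring vector.
    Returns the gated skip features and the attention coefficients.
    """
    q = np.maximum(w_x @ x + w_g @ g, 0.0)   # additive attention, ReLU
    alpha = sigmoid(psi @ q)                 # (1, N) coefficients in (0, 1)
    return x * alpha, alpha                  # suppress irrelevant positions

# Hypothetical example: 4 channels, 6 spatial positions, bottleneck of 3.
rng = np.random.default_rng(1)
x = rng.normal(size=(4, 6))
g = rng.normal(size=(4, 6))
w_x = rng.normal(size=(3, 4))
w_g = rng.normal(size=(3, 4))
psi = rng.normal(size=(1, 3))
gated, alpha = attention_gate(x, g, w_x, w_g, psi)
```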
Affiliation(s)
- K Radha
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, India
- Karuna Yepuganti
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, India
- Saladi Saritha
- School of Electronics Engineering, VIT-AP University, Amaravathi, Andhra Pradesh, India
- Chinmayee Kamireddy
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, India
- Durga Prasad Bavirisetti
- Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway.
15
Zhong Y, Piao Y, Zhang G. Multi-view fusion-based local-global dynamic pyramid convolutional cross-transformer network for density classification in mammography. Phys Med Biol 2023; 68:225012. [PMID: 37827166 DOI: 10.1088/1361-6560/ad02d7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2023] [Accepted: 10/12/2023] [Indexed: 10/14/2023]
Abstract
Object. Breast density is an important indicator of breast cancer risk. However, existing methods for breast density classification do not fully utilise the multi-view information produced by mammography and thus have limited classification accuracy. Method. In this paper, we propose a multi-view fusion network, denoted local-global dynamic pyramidal-convolution transformer network (LG-DPTNet), for breast density classification in mammography. First, for single-view feature extraction, we develop a dynamic pyramid convolutional network that adaptively learns global and local features. Second, we address the shortcomings of traditional multi-view fusion methods with a cross-transformer that integrates fine-grained information and global contextual information from different views, thereby providing accurate predictions for the network. Finally, we use an asymmetric focal loss function instead of the traditional cross-entropy loss during training to address class imbalance in public datasets, further improving the performance of the model. Results. We evaluated the effectiveness of our method on two publicly available mammography datasets, CBIS-DDSM and INbreast, and achieved areas under the curve (AUC) of 96.73% and 91.12%, respectively. Conclusion. Our experiments demonstrate that the devised fusion model utilises the information contained in multiple views more effectively than existing models and exhibits classification performance superior to that of baseline and state-of-the-art methods.
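The asymmetric focal loss mentioned in this abstract down-weights easy examples, with separate focusing parameters for positive and negative samples. A minimal binary-classification sketch (the exact form and parameters used in the paper may differ):

```python
import numpy as np

def asymmetric_focal_loss(p, y, gamma_pos=0.0, gamma_neg=2.0, eps=1e-7):
    """Binary asymmetric focal loss, averaged over samples (sketch).

    p: predicted probabilities of the positive class, y: {0, 1} labels.
    gamma_neg > gamma_pos down-weights easy negatives more aggressively,
    which is the usual remedy when negatives dominate the dataset.
    """
    p = np.clip(p, eps, 1.0 - eps)
    pos = -y * (1.0 - p) ** gamma_pos * np.log(p)          # positive term
    neg = -(1.0 - y) * p ** gamma_neg * np.log(1.0 - p)    # negative term
    return np.mean(pos + neg)
```

With both focusing parameters at zero this reduces to ordinary cross-entropy; raising `gamma_neg` shrinks the contribution of confidently classified negatives.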
Affiliation(s)
- Yutong Zhong
- Electronic Information Engineering School, Changchun University of Science and Technology, Changchun, People's Republic of China
- Yan Piao
- Electronic Information Engineering School, Changchun University of Science and Technology, Changchun, People's Republic of China
- Guohui Zhang
- Department of Pneumoconiosis Diagnosis and Treatment Center, Occupational Preventive and Treatment Hospital in Jilin Province, Changchun, People's Republic of China
16
Sahu A, Das PK, Meher S. Recent advancements in machine learning and deep learning-based breast cancer detection using mammograms. Phys Med 2023; 114:103138. [PMID: 37914431 DOI: 10.1016/j.ejmp.2023.103138] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/11/2023] [Revised: 07/22/2023] [Accepted: 09/14/2023] [Indexed: 11/03/2023] Open
Abstract
OBJECTIVE Mammogram-based automatic breast cancer detection plays a primary role in accurate cancer diagnosis and treatment planning to save valuable lives. Mammography is a basic yet efficient test for screening breast cancer. Very few comprehensive surveys have been presented that analyze methods for detecting breast cancer with mammograms. In this article, our objective is to give an overview of recent advancements in machine learning (ML) and deep learning (DL)-based breast cancer detection systems. METHODS We give a structured framework to categorize mammogram-based breast cancer detection techniques. Several publicly available mammogram databases and different performance measures are also mentioned. RESULTS After deliberate investigation, we find that most works classify breast tumors either as normal-abnormal or malignant-benign rather than into three classes. Furthermore, DL-based features are more significant than hand-crafted features. However, transfer learning is preferred over other approaches as it yields better performance on small datasets, unlike classical DL techniques. SIGNIFICANCE AND CONCLUSION In this article, we present recent advancements in artificial intelligence (AI)-based breast cancer detection systems. Furthermore, a number of challenging issues and possible research directions are mentioned, which will help researchers pursue further work in this field.
Affiliation(s)
- Adyasha Sahu
- Department of Electronics and Communication Engineering, National Institute of Technology, Rourkela, Odisha, 769008, India.
- Pradeep Kumar Das
- School of Electronics Engineering (SENSE), VIT Vellore, Tamil Nadu, 632014, India.
- Sukadev Meher
- Department of Electronics and Communication Engineering, National Institute of Technology, Rourkela, Odisha, 769008, India.
17
Liao C, Wen X, Qi S, Liu Y, Cao R. FSE-Net: feature selection and enhancement network for mammogram classification. Phys Med Biol 2023; 68:195001. [PMID: 37712226 DOI: 10.1088/1361-6560/acf559] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2023] [Accepted: 08/30/2023] [Indexed: 09/16/2023]
Abstract
Objective. Early detection and diagnosis allow for intervention and treatment at an early stage of breast cancer. Despite recent advances in computer-aided diagnosis systems based on convolutional neural networks for breast cancer diagnosis, improving the classification performance of mammograms remains a challenge due to the various sizes of breast lesions and the difficulty of extracting small lesion features. To obtain more accurate classification results, many studies choose to directly classify region of interest (ROI) annotations, but labeling ROIs is labor intensive. The purpose of this research is to design a novel network that automatically classifies mammogram images as cancer or no cancer, aiming to mitigate the above challenges and help radiologists perform mammogram diagnosis more accurately. Approach. We propose a novel feature selection and enhancement network (FSE-Net) to fully exploit the features of mammogram images, which requires only mammogram images and image-level labels without any bounding boxes or masks. Specifically, to obtain more contextual information, an effective feature selection module is proposed to adaptively select receptive fields and fuse features from receptive fields of different scales. Moreover, a feature enhancement module is designed to explore the correlation between feature maps of different resolutions and to enhance the representation capacity of low-resolution feature maps with high-resolution feature maps. Main results. The performance of the proposed network has been evaluated on the CBIS-DDSM and INbreast datasets. It achieves an accuracy of 0.806 with an AUC of 0.866 on CBIS-DDSM and an accuracy of 0.956 with an AUC of 0.974 on INbreast. Significance. Through extensive experiments and saliency map visualization analysis, the proposed network achieves satisfactory performance in the mammogram classification task and can roughly locate suspicious regions to assist in the final prediction of the entire image.
Affiliation(s)
- Caiqing Liao
- College of Software Engineering, Taiyuan University of Technology, Taiyuan 030600, People's Republic of China
- Xin Wen
- College of Software Engineering, Taiyuan University of Technology, Taiyuan 030600, People's Republic of China
- Shuman Qi
- College of Software Engineering, Taiyuan University of Technology, Taiyuan 030600, People's Republic of China
- Yanan Liu
- College of Software Engineering, Taiyuan University of Technology, Taiyuan 030600, People's Republic of China
- Rui Cao
- College of Software Engineering, Taiyuan University of Technology, Taiyuan 030600, People's Republic of China
18
Sivamurugan J, Sureshkumar G. Applying dual models on optimized LSTM with U-net segmentation for breast cancer diagnosis using mammogram images. Artif Intell Med 2023; 143:102626. [PMID: 37673584 DOI: 10.1016/j.artmed.2023.102626] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2022] [Revised: 07/08/2023] [Accepted: 07/09/2023] [Indexed: 09/08/2023]
Abstract
BACKGROUND OF THE STUDY Breast cancer is the most fatal disease widely affecting women; it develops when cancerous lumps grow from the cells of the breast. Self-examination and regular medical check-ups help detect the disease earlier and enhance the survival rate. Hence, an automated breast cancer detection system for mammograms can assist clinicians in the patient's treatment. The categorization of breast cancer remains challenging for investigators and researchers, and advances in deep learning have drawn attention to its advantages for medical imaging, especially breast cancer detection. AIM This work develops a novel hybrid model for breast cancer diagnosis with the support of an optimized deep learning architecture. METHODS The required images are gathered from benchmark datasets. These images undergo three pre-processing approaches, "Median Filtering, Histogram Equalization, and morphological operation", which remove unwanted regions from the images. The pre-processed images are then passed to an optimized U-Net-based tumor segmentation phase to obtain accurate segmented results, with certain U-Net parameters optimized by "Adapted Black Widow Optimization (A-BWO)". Detection is then performed in two ways, denoted model 1 and model 2. In model 1, the segmented tumors are used to extract significant patterns with the "Gray-Level Co-occurrence Matrix (GLCM) and Local Gradient Pattern (LGP)"; these patterns are fed to the "Dual Model accessed Optimized Long Short-Term Memory (DM-OLSTM)" to perform breast cancer detection and obtain detection score 1. In model 2, the same segmented tumors are given to different CNN variants, "VGG19, ResNet150, and Inception"; the deep features extracted from the three CNN-based approaches are fused into a single set and inserted into the DM-OLSTM to obtain detection score 2. In the final phase of the hybrid model, score 1 and score 2 are averaged to get the final detection output. RESULTS The offered DM-OLSTM model achieves an accuracy of 96% and an F1-score of 95%. CONCLUSION Experimental analysis on the benchmark datasets proves that the recommended methodology achieves better performance; hence, the designed model is helpful for detecting breast cancer in real-time applications.
Affiliation(s)
- J Sivamurugan
- Department of Computer Science and Engineering, School of Engineering & Technology, Pondicherry University (Karaikal Campus), Karaikal-609605, Puducherry UT, India.
- G Sureshkumar
- Department of Computer Science and Engineering, School of Engineering & Technology, Pondicherry University (Karaikal Campus), Karaikal-609605, Puducherry UT, India
19
Wang SH, Chen G, Zhong X, Lin T, Shen Y, Fan X, Cao L. Global development of artificial intelligence in cancer field: a bibliometric analysis range from 1983 to 2022. Front Oncol 2023; 13:1215729. [PMID: 37519796 PMCID: PMC10382324 DOI: 10.3389/fonc.2023.1215729] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Accepted: 06/26/2023] [Indexed: 08/01/2023] Open
Abstract
Background Artificial intelligence (AI) is now widely applied in the cancer field. The aim of this study is to explore the hotspots and trends of AI in cancer research. Methods The retrieval term included four topic words ("tumor," "cancer," "carcinoma," and "artificial intelligence"), which were searched in the Web of Science database from January 1983 to December 2022. We then documented and processed all data, including country, continent, Journal Impact Factor, and so on, using bibliometric software. Results A total of 6,920 papers were collected and analyzed. We present the annual publications and citations, most productive countries/regions, most influential scholars, the collaborations of journals and institutions, and the research focus and hotspots in AI-based cancer research. Conclusion This study systematically summarizes the current state of AI in cancer research so as to lay the foundation for future research.
Affiliation(s)
- Sui-Han Wang
- Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Guoqiao Chen
- Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Xin Zhong
- Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Tianyu Lin
- Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Yan Shen
- Department of General Surgery, The First People's Hospital of Yu Hang District, Hangzhou, China
- Xiaoxiao Fan
- Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Liping Cao
- Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
20
Razali NF, Isa IS, Sulaiman SN, A. Karim NK, Osman MK. CNN-Wavelet scattering textural feature fusion for classifying breast tissue in mammograms. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104683] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/24/2023]
21
Quintana GI, Li Z, Vancamberg L, Mougeot M, Desolneux A, Muller S. Exploiting Patch Sizes and Resolutions for Multi-Scale Deep Learning in Mammogram Image Classification. Bioengineering (Basel) 2023; 10:bioengineering10050534. [PMID: 37237603 DOI: 10.3390/bioengineering10050534] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2023] [Revised: 04/20/2023] [Accepted: 04/25/2023] [Indexed: 05/28/2023] Open
Abstract
Recent progress in deep learning (DL) has revived interest in DL-based computer-aided detection or diagnosis (CAD) systems for breast cancer screening. Patch-based approaches are one of the main state-of-the-art techniques for 2D mammogram image classification, but they are intrinsically limited by the choice of patch size, as there is no single patch size that is adapted to all lesion sizes. In addition, the impact of input image resolution on performance is not yet fully understood. In this work, we study the impact of patch size and image resolution on classifier performance for 2D mammograms. To leverage the advantages of different patch sizes and resolutions, a multi-patch-size classifier and a multi-resolution classifier are proposed. These new architectures perform multi-scale classification by combining different patch sizes and input image resolutions. The AUC is increased by 3% on the public CBIS-DDSM dataset and by 5% on an internal dataset. Compared with a baseline single-patch-size, single-resolution classifier, our multi-scale classifier reaches AUCs of 0.809 and 0.722 on the two datasets, respectively.
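A multi-patch-size classifier of the kind this abstract describes can be caricatured as scoring patches at several sizes and fusing the most suspicious patch scores per scale. The aggregation below (top-k mean per scale, then averaging across scales) and the `patch_scorer` callable are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def multiscale_image_score(image, patch_scorer, patch_sizes, top_k=3):
    """Fuse patch-level scores from several patch sizes into one image-level
    score (hypothetical aggregation sketch).

    image: 2D array; patch_scorer: callable mapping a patch to a float score;
    patch_sizes: side lengths to tile the image with (non-overlapping here).
    """
    h, w = image.shape
    scale_scores = []
    for size in patch_sizes:
        scores = []
        for i in range(0, h - size + 1, size):
            for j in range(0, w - size + 1, size):
                scores.append(patch_scorer(image[i:i + size, j:j + size]))
        top = sorted(scores, reverse=True)[:top_k]   # most suspicious patches
        scale_scores.append(sum(top) / len(top))
    return sum(scale_scores) / len(scale_scores)     # fuse scales by averaging

# Toy example: a bright 2x2 "lesion" in an otherwise empty 8x8 image,
# scored by mean intensity (a stand-in for a real patch classifier).
image = np.zeros((8, 8))
image[0:2, 0:2] = 1.0
score = multiscale_image_score(image, lambda p: float(p.mean()),
                               patch_sizes=[2, 4], top_k=1)
```

Small patch sizes respond strongly to the small lesion while large ones dilute it, which is exactly the trade-off the multi-scale fusion is meant to balance.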
Affiliation(s)
- Gonzalo Iñaki Quintana
- GE HealthCare, 283 Rue de la Minière, 78530 Buc, France
- ENS Paris-Saclay, Centre Borelli, 91190 Gif-sur-Yvette, France
- Zhijin Li
- GE HealthCare, 283 Rue de la Minière, 78530 Buc, France
- Agnès Desolneux
- ENS Paris-Saclay, Centre Borelli, 91190 Gif-sur-Yvette, France
- Serge Muller
- GE HealthCare, 283 Rue de la Minière, 78530 Buc, France
22
Aldughayfiq B, Ashfaq F, Jhanjhi NZ, Humayun M. YOLO-Based Deep Learning Model for Pressure Ulcer Detection and Classification. Healthcare (Basel) 2023; 11:healthcare11091222. [PMID: 37174764 PMCID: PMC10178524 DOI: 10.3390/healthcare11091222] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2023] [Revised: 04/15/2023] [Accepted: 04/22/2023] [Indexed: 05/15/2023] Open
Abstract
Pressure ulcers are significant healthcare concerns affecting millions of people worldwide, particularly those with limited mobility. Early detection and classification of pressure ulcers are crucial in preventing their progression and reducing associated morbidity and mortality. In this work, we present a novel approach that uses YOLOv5, an advanced and robust object detection model, to detect and classify pressure ulcers into four stages and non-pressure ulcers. We also utilize data augmentation techniques to expand our dataset and strengthen the resilience of our model. Our approach shows promising results, achieving an overall mean average precision of 76.9% and class-specific mAP50 values ranging from 66% to 99.5%. Compared to previous studies that primarily utilize CNN-based algorithms, our approach provides a more efficient and accurate solution for the detection and classification of pressure ulcers. The successful implementation of our approach has the potential to improve the early detection and treatment of pressure ulcers, resulting in better patient outcomes and reduced healthcare costs.
Affiliation(s)
- Bader Aldughayfiq
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
- Farzeen Ashfaq
- School of Computer Science, SCS, Taylor's University, Subang Jaya 47500, Malaysia
- N Z Jhanjhi
- School of Computer Science, SCS, Taylor's University, Subang Jaya 47500, Malaysia
- Mamoona Humayun
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
23
Jakkaladiki SP, Maly F. An efficient transfer learning based cross model classification (TLBCM) technique for the prediction of breast cancer. PeerJ Comput Sci 2023; 9:e1281. [PMID: 37346575 PMCID: PMC10280457 DOI: 10.7717/peerj-cs.1281] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2022] [Accepted: 02/16/2023] [Indexed: 06/23/2023]
Abstract
Breast cancer has been the most life-threatening disease for women in the last few decades. The high mortality rate among women due to breast cancer stems from low awareness and the limited number of medical facilities able to detect the disease in its early stages. In the recent era, the situation has changed with the help of many technological advancements and medical equipment for observing breast cancer development. Machine learning techniques such as support vector machines (SVM), logistic regression, and random forests have been used to analyze images of cancer cells on different data sets. Although particular techniques have performed well on smaller data sets, their accuracy on most data still falls short of what is needed for real-time medical environments. In the proposed research, state-of-the-art deep learning techniques, namely a transfer learning based cross-model classification (TLBCM) combining a convolutional neural network (CNN) with transfer learning, a residual network (ResNet), and DenseNet, are proposed for efficient prediction of breast cancer with a minimized error rate. The convolutional neural network and transfer learning are the most prominent techniques for predicting the main features in the data set. Sensitive data are protected using a cyber-physical system (CPS) while the images are used virtually over the network. A CPS acts as a virtual connection between humans and networks; while data are transferred in the network, they must be monitored using the CPS. The ResNet transforms the data across many layers without compromising the minimum error rate, and the DenseNet mitigates the vanishing gradient problem. The experiments are carried out on the Breast Cancer Wisconsin (Diagnostic) and Breast Cancer Histopathological (BreakHis) data sets. The convolutional neural network with transfer learning achieved a validation accuracy of 98.3%. The results of the proposed methods show the highest classification rate between benign and malignant data. The proposed method improves the efficiency and speed of classification, which is more convenient for discovering breast cancer in earlier stages than previously proposed methodologies.
24
Garg S, Singh P. Transfer Learning Based Lightweight Ensemble Model for Imbalanced Breast Cancer Classification. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2023; 20:1529-1539. [PMID: 35536810 DOI: 10.1109/tcbb.2022.3174091] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Automated classification of breast cancer can often save lives, as manual detection is usually time-consuming and expensive. Over the last decade, deep learning techniques have been the most widely used for automatic classification of breast cancer from histopathology images. This paper performs binary and multi-class classification of breast cancer using a transfer learning-based ensemble model. To analyze the correctness and reliability of the proposed model, we used the imbalanced IDC dataset and the imbalanced BreakHis dataset in the binary-class scenario, and the balanced BACH dataset for multi-class classification. A lightweight shallow CNN model with batch normalization to accelerate convergence is aggregated with a lightweight MobileNetV2 to improve learning and adaptability. The aggregated output is fed into a multilayer perceptron to complete the final classification task. An experimental study on all three datasets was performed and compared with recent works. We fine-tuned three different pre-trained models (ResNet50, InceptionV4, and MobileNetV2) and compared them with the proposed lightweight ensemble model in terms of execution time, number of parameters, model size, etc. In both evaluation phases, our model outperforms the alternatives on all three datasets.
25
Castro E, Costa Pereira J, Cardoso JS. Symmetry-based regularization in deep breast cancer screening. Med Image Anal 2023; 83:102690. [PMID: 36446314 DOI: 10.1016/j.media.2022.102690] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 10/28/2022] [Accepted: 11/09/2022] [Indexed: 11/23/2022]
Abstract
Breast cancer is the most common and lethal form of cancer in women. Recent efforts have focused on developing accurate neural network-based computer-aided diagnosis systems for screening to help anticipate this disease. The ultimate goal is to reduce mortality and improve quality of life after treatment. Due to the difficulty in collecting and annotating data in this domain, data scarcity is - and will continue to be - a limiting factor. In this work, we present a unified view of different regularization methods that incorporate domain-known symmetries in the model. Three general strategies were followed: (i) data augmentation, (ii) invariance promotion in the loss function, and (iii) the use of equivariant architectures. Each of these strategies encodes different priors on the functions learned by the model and can be readily introduced in most settings. Empirically we show that the proposed symmetry-based regularization procedures improve generalization to unseen examples. This advantage is verified in different scenarios, datasets and model architectures. We hope that both the principle of symmetry-based regularization and the concrete methods presented can guide development towards more data-efficient methods for breast cancer screening as well as other medical imaging domains.
Affiliation(s)
- Eduardo Castro
- INESC TEC, Campus da Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal; Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal.
- Jose Costa Pereira
- INESC TEC, Campus da Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal; Huawei Technologies R&D, Noah's Ark Lab, Gridiron building, 1 Pancras Square, 5th floor, London N1C 4AG, United Kingdom
- Jaime S Cardoso
- INESC TEC, Campus da Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal; Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
26
Kong Z, Ouyang H, Cao Y, Huang T, Ahn E, Zhang M, Liu H. Automated periodontitis bone loss diagnosis in panoramic radiographs using a bespoke two-stage detector. Comput Biol Med 2023; 152:106374. [PMID: 36512876 DOI: 10.1016/j.compbiomed.2022.106374] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2022] [Revised: 11/02/2022] [Accepted: 11/27/2022] [Indexed: 11/30/2022]
Abstract
Periodontitis is a serious oral disease that can lead to severe conditions such as bone loss and tooth loss if left untreated. Diagnosis of radiographic bone loss (RBL) is critical for the staging and treatment of periodontitis. Unfortunately, RBL diagnosis by examining panoramic radiographs is time-consuming, so the demand for automated image analysis is urgent. However, existing deep learning methods have limited diagnostic accuracy and certain difficulties in implementation. Hence, we propose a novel two-stage periodontitis detection convolutional neural network (PDCNN), in which we optimize the detector with an anchor-free encoding that allows fast and accurate prediction. We also introduce a proposal-connection module in our detector that excludes less relevant regions of interest (ROIs), making the network focus on more relevant ROIs to improve detection accuracy. Furthermore, we introduce a large-scale, high-resolution panoramic radiograph dataset that captures various complex cases with professional periodontitis annotations. Experiments on our panoramic-image dataset show that the proposed approach achieved an RBL classification accuracy of 0.762, outperforming state-of-the-art detectors such as Faster R-CNN and YOLO-v4. We conclude that the proposed method successfully improves RBL detection performance. The dataset and our code have been released on GitHub (https://github.com/PuckBlink/PDCNN).
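Filtering region proposals, as the proposal-connection module described above does in learned form, ultimately rests on measuring overlap between ROIs. A minimal sketch of IoU-based ROI filtering (a plain geometric stand-in, not the paper's learned module):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def filter_proposals(proposals, reference, min_iou=0.3):
    """Keep only the ROIs that overlap a reference region enough."""
    return [p for p in proposals if iou(p, reference) >= min_iou]
```

Proposals with little or no overlap with the region of interest are discarded, so downstream classification sees only the relevant ROIs.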
Affiliation(s)
- Zhengmin Kong
- School of Electrical Engineering and Automation, Wuhan University, Wuhan, 430072, China.
- Hui Ouyang
- School of Electrical Engineering and Automation, Wuhan University, Wuhan, 430072, China.
- Yiyuan Cao
- Department of Radiology, Zhongnan Hospital of Wuhan University, Wuhan, 430071, China.
- Tao Huang
- College of Science and Engineering, James Cook University, Queensland, Australia
- Euijoon Ahn
- College of Science and Engineering, James Cook University, Queensland, Australia
- Maoqi Zhang
- The State Key Laboratory Breeding Base of Basic Science of Stomatology & Key Laboratory for Oral Biomedicine of Ministry of Education, School and Hospital of Stomatology, Wuhan University, Wuhan, 430079, China; Taikang Center for Life and Medical Sciences, Wuhan University, Wuhan, 430079, China
- Huan Liu
- The State Key Laboratory Breeding Base of Basic Science of Stomatology & Key Laboratory for Oral Biomedicine of Ministry of Education, School and Hospital of Stomatology, Wuhan University, Wuhan, 430079, China; Taikang Center for Life and Medical Sciences, Wuhan University, Wuhan, 430079, China
27
Chen X, Zhang Y, Zhou J, Wang X, Liu X, Nie K, Lin X, He W, Su MY, Cao G, Wang M. Diagnosis of architectural distortion on digital breast tomosynthesis using radiomics and deep learning. Front Oncol 2022; 12:991892. [PMID: 36582788 PMCID: PMC9792864 DOI: 10.3389/fonc.2022.991892] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2022] [Accepted: 11/14/2022] [Indexed: 12/14/2022] Open
Abstract
Purpose: To implement two artificial intelligence (AI) methods, radiomics and deep learning, to build diagnostic models for patients presenting with architectural distortion on digital breast tomosynthesis (DBT) images. Materials and Methods: A total of 298 patients were identified from a retrospective review, all with pathologically confirmed diagnoses (175 malignant and 123 benign). The BI-RADS scores of DBT were obtained from the radiology reports and classified into 2, 3, 4A, 4B, 4C, and 5. The architectural distortion areas on craniocaudal (CC) and mediolateral oblique (MLO) views were manually outlined as the region of interest (ROI) for the radiomics analysis. Features were extracted using PyRadiomics, and a support vector machine (SVM) was applied to select important features and build the classification model. Deep learning was performed using the ResNet50 algorithm, with a binary output of malignancy versus benignity, and the Gradient-weighted Class Activation Mapping (Grad-CAM) method was utilized to localize the suspicious areas. The predicted malignancy probability was used to construct ROC curves, compared by the DeLong test; the binary diagnosis was made using a threshold of ≥ 0.5 as malignant. Results: The majority of malignant lesions had BI-RADS scores of 4B, 4C, or 5 (148/175 = 84.6%). In the benign group, a substantial number of patients also had BI-RADS ≥ 4B (56/123 = 45.5%), and the majority had BI-RADS ≥ 4A (102/123 = 82.9%). The radiomics model built using the combined CC+MLO features yielded an area under the curve (AUC) of 0.82, with a sensitivity of 0.78, specificity of 0.68, and accuracy of 0.74. Using only CC features, the AUC was 0.77; using only MLO features, the AUC was 0.72. The deep-learning model yielded an AUC of 0.61, significantly lower than all radiomics models (p<0.01), presumably because the entire image was used as input. Grad-CAM could localize the architectural distortion areas. Conclusion: The radiomics model can achieve satisfactory diagnostic accuracy, and the high specificity in the benign group can be used to avoid unnecessary biopsies. Deep learning can be used to localize the architectural distortion areas, which may provide an automatic method for ROI delineation to facilitate the development of a fully automatic computer-aided diagnosis system using combined AI strategies.
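The kind of first-order radiomic features that PyRadiomics extracts from an outlined ROI can be illustrated with a small NumPy sketch (a simplification for illustration only; PyRadiomics itself computes many more feature classes, with its own discretization settings):

```python
import numpy as np

def first_order_features(image, mask, bins=32):
    """Compute a few first-order radiomic features (of the kind PyRadiomics
    reports) from the pixels inside a binary ROI mask."""
    roi = image[mask.astype(bool)]
    mean = roi.mean()
    var = roi.var()
    # skewness of the intensity distribution inside the ROI
    skew = ((roi - mean) ** 3).mean() / (var ** 1.5 + 1e-12)
    # histogram-based entropy of the discretized intensities
    hist, _ = np.histogram(roi, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()
    return {"mean": mean, "variance": var, "skewness": skew, "entropy": entropy}
```

Feature vectors of this kind, stacked per patient, are what an SVM-based model would then select from and classify.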
Affiliation(s)
- Xiao Chen
- Department of Radiology, Key Laboratory of Intelligent Medical Imaging of Wenzhou, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Yang Zhang
- Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, NJ, United States; Department of Radiological Sciences, University of California, Irvine, Irvine, CA, United States
- Jiahuan Zhou
- Department of Radiology, Yuyao Hospital of Traditional Chinese Medicine, Ningbo, China
- Xiao Wang
- Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, NJ, United States
- Xinmiao Liu
- School of Laboratory Medicine and Life Sciences, Wenzhou Medical University, Wenzhou, China
- Ke Nie
- Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, NJ, United States
- Xiaomin Lin
- Department of Radiology, Key Laboratory of Intelligent Medical Imaging of Wenzhou, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Wenwen He
- Department of Radiology, Key Laboratory of Intelligent Medical Imaging of Wenzhou, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Min-Ying Su
- Department of Radiological Sciences, University of California, Irvine, Irvine, CA, United States; Department of Medical Imaging and Radiological Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan. *Correspondence: Min-Ying Su; Guoquan Cao; Meihao Wang
- Guoquan Cao
- Department of Radiology, Key Laboratory of Intelligent Medical Imaging of Wenzhou, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China. *Correspondence: Min-Ying Su; Guoquan Cao; Meihao Wang
- Meihao Wang
- Department of Radiology, Key Laboratory of Intelligent Medical Imaging of Wenzhou, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China. *Correspondence: Min-Ying Su; Guoquan Cao; Meihao Wang
28
Multi-instance learning based on spatial continuous category representation for case-level meningioma grading in MRI images. APPL INTELL 2022. [DOI: 10.1007/s10489-022-04114-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
29
Jiang J, Peng J, Hu C, Jian W, Wang X, Liu W. Breast cancer detection and classification in mammogram using a three-stage deep learning framework based on PAA algorithm. Artif Intell Med 2022; 134:102419. [PMID: 36462904 DOI: 10.1016/j.artmed.2022.102419] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Revised: 07/20/2022] [Accepted: 10/02/2022] [Indexed: 12/13/2022]
Abstract
In recent years, deep learning has been used to develop automatic breast cancer detection and classification tools to assist doctors. In this paper, we propose a three-stage deep learning framework based on an anchor-free object detection algorithm, the Probabilistic Anchor Assignment (PAA), to improve diagnostic performance by automatically detecting breast lesions (i.e., masses and calcifications) and further classifying mammograms as benign or malignant. First, a single-stage PAA-based detector comprehensively finds suspicious breast lesions in the mammogram. Second, we design a two-branch ROI detector to further classify and regress these lesions, aiming to reduce the number of false positives; in this stage, we also introduce a threshold-adaptive post-processing algorithm that uses breast density information. Finally, lesions are classified as benign or malignant by an ROI classifier that combines local ROI features and global image features. In addition, given the strong correlation between the task of the PAA detection head and the task of whole-mammogram classification, we add an image classifier that uses the same global image features to perform image classification. The image classifier and the ROI classifier jointly guide training to enhance feature extraction and further improve classification performance. We integrated three public mammogram datasets (CBIS-DDSM, INbreast, MIAS) to train and test our model and compared our framework with recent state-of-the-art methods. The results show that the proposed method can improve radiologists' diagnostic efficiency by automatically detecting and classifying breast lesions and classifying mammograms as benign or malignant.
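The false-positive reduction stage of such detection pipelines typically prunes overlapping candidate boxes; a standard non-maximum suppression step (a generic sketch of the common technique, not this paper's threshold-adaptive algorithm) can be written as:

```python
import numpy as np

def nms(boxes, scores, iou_thr=0.5):
    """Standard non-maximum suppression: greedily keep the highest-scoring
    box, then drop remaining boxes whose IoU with it exceeds iou_thr.

    boxes:  (N, 4) array of [x1, y1, x2, y2]
    scores: (N,) detection confidences
    returns list of kept indices
    """
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # intersection of the best box with all remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thr]
    return keep
```

A density-aware variant, as the abstract describes, would make `iou_thr` or the score cutoff a function of the breast density category.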
Affiliation(s)
- Jiale Jiang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, Guangdong, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen University, Shenzhen 518060, Guangdong, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen 518060, Guangdong, China
- Junchuan Peng
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, Guangdong, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen University, Shenzhen 518060, Guangdong, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen 518060, Guangdong, China
- Chuting Hu
- Department of Breast and Thyroid Surgery, The Second People's Hospital of Shenzhen, Shenzhen 518035, Guangdong, China
- Wenjing Jian
- Department of Breast and Thyroid Surgery, The Second People's Hospital of Shenzhen, Shenzhen 518035, Guangdong, China
- Xianming Wang
- Department of Breast and Thyroid Surgery, South China Hospital Affiliated to Shenzhen University, Shenzhen 518111, Guangdong, China.
- Weixiang Liu
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, Guangdong, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen University, Shenzhen 518060, Guangdong, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen 518060, Guangdong, China; College of Mechatronics and Control Engineering, Shenzhen University, Shenzhen 518060, Guangdong, China.
30
Lou Q, Li Y, Qian Y, Lu F, Ma J. Mammogram classification based on a novel convolutional neural network with efficient channel attention. Comput Biol Med 2022; 150:106082. [PMID: 36195044 DOI: 10.1016/j.compbiomed.2022.106082] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2022] [Revised: 08/09/2022] [Accepted: 09/03/2022] [Indexed: 11/03/2022]
Abstract
Early, accurate mammography screening and diagnosis can reduce breast cancer mortality. Although CNN-based breast cancer computer-aided diagnosis (CAD) systems have achieved significant results in recent years, precise diagnosis of lesions in mammograms remains a challenge due to the low signal-to-noise ratio (SNR) and physiological characteristics of the images. Many researchers have achieved excellent detection performance by supplying region of interest (ROI) annotations, but ROI annotation requires a great deal of manual labor, time, and resources. We propose a two-stage method that combines image preprocessing and model optimization to address these challenges. First, we propose the breast database preprocessing (BDP) method to preprocess INbreast, obtaining INbreast†; the only label required is the benign or malignant label of each mammogram, with no manual annotation such as ROIs. Second, we apply focal loss to ECA-Net50, an improved model based on ResNet50 with an efficient channel attention (ECA) module. Our method adaptively extracts the key features of mammograms while addressing hard-to-classify samples and unbalanced categories. On INbreast†, our method achieves an AUC of 0.960, an accuracy of 0.929, and a recall of 0.928; its precision of 0.883 is an improvement of 0.254 over ResNet50. In addition, we use Grad-CAM to visualize the effect of our model: the heatmaps extracted by our method focus more on lesion regions. Both numerical and visualization experiments demonstrate that our method achieves satisfactory performance.
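The focal loss used here to handle hard-to-classify samples and class imbalance can be sketched for the binary case (a minimal NumPy version; the `alpha` and `gamma` defaults follow the original focal-loss formulation, not necessarily this study's settings):

```python
import numpy as np

def binary_focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p: predicted probability of the positive class, shape (N,)
    y: binary labels in {0, 1}, shape (N,)
    The (1 - p_t)**gamma factor down-weights well-classified examples so
    training concentrates on the hard ones.
    """
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    at = np.where(y == 1, alpha, 1 - alpha)  # class-balancing weight
    return float(np.mean(-at * (1 - pt) ** gamma * np.log(pt)))
```

With `gamma=0` this reduces to class-weighted cross-entropy; increasing `gamma` shrinks the contribution of easy examples.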
Affiliation(s)
- Qiong Lou
- School of Science, Zhejiang University of Science and Technology, Hangzhou 310012, China
- Yingying Li
- School of Science, Zhejiang University of Science and Technology, Hangzhou 310012, China
- Yaguan Qian
- School of Science, Zhejiang University of Science and Technology, Hangzhou 310012, China
- Fang Lu
- School of Science, Zhejiang University of Science and Technology, Hangzhou 310012, China
- Jinlian Ma
- School of Microelectronics, Shandong University, Jinan 250101, China; Shenzhen Research Institute of Shandong University, A301 Virtual University Park in South District of Shenzhen, China.
31
Madani M, Behzadi MM, Nabavi S. The Role of Deep Learning in Advancing Breast Cancer Detection Using Different Imaging Modalities: A Systematic Review. Cancers (Basel) 2022; 14:5334. [PMID: 36358753 PMCID: PMC9655692 DOI: 10.3390/cancers14215334] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2022] [Revised: 10/23/2022] [Accepted: 10/25/2022] [Indexed: 12/02/2022] Open
Abstract
Breast cancer is among the most common and fatal diseases for women, and no permanent treatment has been discovered. Thus, early detection is a crucial step to control and cure breast cancer that can save the lives of millions of women. For example, in 2020, more than 65% of breast cancer patients were diagnosed in an early stage of cancer, from which all survived. Although early detection is the most effective approach for cancer treatment, breast cancer screening conducted by radiologists is very expensive and time-consuming. More importantly, conventional methods of analyzing breast cancer images suffer from high false-detection rates. Different breast cancer imaging modalities are used to extract and analyze the key features affecting the diagnosis and treatment of breast cancer. These imaging modalities can be divided into subgroups such as mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. Radiologists or pathologists analyze images produced by these methods manually, which leads to an increase in the risk of wrong decisions for cancer detection. Thus, the utilization of new automatic methods to analyze all kinds of breast screening images to assist radiologists to interpret images is required. Recently, artificial intelligence (AI) has been widely utilized to automatically improve the early detection and treatment of different types of cancer, specifically breast cancer, thereby enhancing the survival chance of patients. Advances in AI algorithms, such as deep learning, and the availability of datasets obtained from various imaging modalities have opened an opportunity to surpass the limitations of current breast cancer analysis methods. In this article, we first review breast cancer imaging modalities, and their strengths and limitations. Then, we explore and summarize the most recent studies that employed AI in breast cancer detection using various breast imaging modalities. 
In addition, we report the datasets available for these imaging modalities, which are important for developing AI-based algorithms and training deep learning models. In conclusion, this review aims to provide a comprehensive resource for researchers working in breast cancer imaging analysis.
Affiliation(s)
- Mohammad Madani
- Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Mohammad Mahdi Behzadi
- Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Sheida Nabavi
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
32
Liu W, Shu X, Zhang L, Li D, Lv Q. Deep Multiscale Multi-Instance Networks With Regional Scoring for Mammogram Classification. IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE 2022. [DOI: 10.1109/tai.2021.3136146] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Affiliation(s)
- Wenjie Liu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Xin Shu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Lei Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Dong Li
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Qing Lv
- Department of Galactophore Surgery, West China Hospital, Sichuan University, Chengdu, P.R. China
33
Breast Cancer Semantic Segmentation for Accurate Breast Cancer Detection with an Ensemble Deep Neural Network. Neural Process Lett 2022. [DOI: 10.1007/s11063-022-10856-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
34
Wimmer M, Sluiter G, Major D, Lenis D, Berg A, Neubauer T, Buhler K. Multi-Task Fusion for Improving Mammography Screening Data Classification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:937-950. [PMID: 34788218 DOI: 10.1109/tmi.2021.3129068] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Machine learning and deep learning methods have become essential for computer-assisted prediction in medicine, with a growing number of applications also in the field of mammography. Typically these algorithms are trained for a specific task, e.g., the classification of lesions or the prediction of a mammogram's pathology status. To obtain a comprehensive view of a patient, models which were all trained for the same task(s) are subsequently ensembled or combined. In this work, we propose a pipeline approach, where we first train a set of individual, task-specific models and subsequently investigate the fusion thereof, which is in contrast to the standard model ensembling strategy. We fuse model predictions and high-level features from deep learning models with hybrid patient models to build stronger predictors on patient level. To this end, we propose a multi-branch deep learning model which efficiently fuses features across different tasks and mammograms to obtain a comprehensive patient-level prediction. We train and evaluate our full pipeline on public mammography data, i.e., DDSM and its curated version CBIS-DDSM, and report an AUC score of 0.962 for predicting the presence of any lesion and 0.791 for predicting the presence of malignant lesions on patient level. Overall, our fusion approaches improve AUC scores significantly by up to 0.04 compared to standard model ensembling. Moreover, by providing not only global patient-level predictions but also task-specific model results that are related to radiological features, our pipeline aims to closely support the reading workflow of radiologists.
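One simple way to realize patient-level fusion of per-image, per-task scores is max-pooling over a patient's mammograms followed by a weighted mean over tasks (an illustrative baseline under assumed pooling and weighting choices, not the paper's learned multi-branch fusion network):

```python
import numpy as np

def patient_level_fusion(task_scores, task_weights=None):
    """Fuse per-image, per-task scores into one patient-level prediction.

    task_scores: dict mapping task name -> array of per-mammogram scores.
    Per task, the patient score is the max over the patient's images (a
    finding on any view counts); the final score is a weighted mean over
    tasks. Max-pooling and uniform weights are illustrative choices.
    """
    names = sorted(task_scores)
    per_task = np.array([np.max(task_scores[t]) for t in names])
    if task_weights is None:
        task_weights = np.ones(len(names)) / len(names)
    return float(per_task @ task_weights)
```

A trained fusion model would replace the fixed weighted mean with a learned combiner over the task-specific features.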
35
Wang Y, Wang Z, Feng Y, Zhang L. WDCCNet: Weighted Double-Classifier Constraint Neural Network for Mammographic Image Classification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:559-570. [PMID: 34606448 DOI: 10.1109/tmi.2021.3117272] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
The early detection and timely treatment of breast cancer can save lives. Mammography is one of the most efficient approaches to screening for early breast cancer, and an automatic mammographic image classification method could improve the work efficiency of radiologists. Current deep learning-based methods typically use the traditional softmax loss to optimize the feature extraction part, which aims to learn the features of mammographic images. However, previous studies have shown that the feature extraction part cannot learn discriminative features from complex data using the standard softmax loss. In this paper, we design a new architecture and propose corresponding loss functions. Specifically, we develop a double-classifier network architecture that constrains the extracted features' distribution by changing the classifiers' decision boundaries. We then propose a double-classifier constraint loss function to constrain the decision boundaries so that the feature extraction part can learn discriminative features. Furthermore, by taking advantage of the two-classifier architecture, the network can detect difficult-to-classify samples; we propose a weighted double-classifier constraint method to make the feature extraction part pay more attention to learning the features of such samples. Our method can easily be applied to an existing convolutional neural network to improve mammographic image classification performance. We conducted extensive experiments evaluating our methods on three public benchmark mammographic image datasets, and the results show that our methods outperform many similar methods and state-of-the-art methods on these benchmarks. Our code and weights can be found on GitHub.
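The idea of using two classifier heads over one shared feature extractor to flag difficult samples can be sketched as disagreement-based sample weighting (the `1 + L1-distance` rule is an illustrative assumption, not the paper's exact weighted constraint loss):

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def hard_sample_weights(logits_a, logits_b):
    """Weight samples by the disagreement between two classifier heads that
    share one feature extractor: where the heads' probability outputs
    differ, the sample is treated as difficult and up-weighted.

    logits_a, logits_b: (N, C) logits from the two heads
    returns (N,) per-sample weights >= 1
    """
    pa, pb = softmax(logits_a), softmax(logits_b)
    return 1.0 + np.abs(pa - pb).sum(axis=1)
```

These weights would multiply the per-sample loss so training focuses on examples the two decision boundaries cannot agree on.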
36
Mahmood T, Li J, Pei Y, Akhtar F, Rehman MU, Wasti SH. Breast lesions classifications of mammographic images using a deep convolutional neural network-based approach. PLoS One 2022; 17:e0263126. [PMID: 35085352 PMCID: PMC8794221 DOI: 10.1371/journal.pone.0263126] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2021] [Accepted: 01/12/2022] [Indexed: 11/18/2022] Open
Abstract
Breast cancer is one of the deadliest illnesses among women globally. Its detection requires accurate mammography interpretation and analysis, which is challenging for radiologists owing to the intricate anatomy of the breast and low image quality. Advances in deep learning-based models have significantly improved the detection, localization, risk assessment, and categorization of breast lesions. This study proposes a novel deep learning-based convolutional neural network (ConvNet) that significantly reduces human error in diagnosing malignant breast tissue. Our methodology is most effective at eliciting task-specific features, as feature learning is coupled with the classification task to achieve higher performance in automatically classifying suspicious regions in mammograms as benign or malignant. To evaluate the model's validity, 322 raw mammogram images from the Mammographic Image Analysis Society (MIAS) dataset and 580 from a private dataset were obtained to extract in-depth features, intensity information, and the likelihood of malignancy. Both datasets were substantially improved through preprocessing, synthetic data augmentation, and transfer learning techniques. The experimental findings indicate that the proposed approach achieved a training accuracy of 0.98, test accuracy of 0.97, high sensitivity of 0.99, and an AUC of 0.99 in classifying breast masses on mammograms. The developed model achieved promising performance that helps the clinician in the speedy assessment of mammography, breast mass diagnosis, treatment planning, and follow-up of disease progression. Moreover, it has immense potential over retrospective approaches in consistent feature extraction and precise lesion classification.
Affiliation(s)
- Tariq Mahmood
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Division of Science and Technology, Department of Information Sciences, University of Education, Lahore, Pakistan
- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Beijing Engineering Research Center for IoT Software and Systems, Beijing, China
- Yan Pei
- Computer Science Division, University of Aizu, Aizuwakamatsu, Fukushima, Japan
- Faheem Akhtar
- Department of Computer Science, Sukkur IBA University, Sukkur, Pakistan
- Mujeeb Ur Rehman
- Radiology Department, Continental Medical College and Hayat Memorial Teaching Hospital, Lahore, Pakistan
- Shahbaz Hassan Wasti
- Division of Science and Technology, Department of Information Sciences, University of Education, Lahore, Pakistan
37
Deep convolutional neural networks for computer-aided breast cancer diagnostic: a survey. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06804-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
38
Huang W, Shu X, Wang Z, Zhang L, Chen C, Xu J, Yi Z. Feature Pyramid Network With Level-Aware Attention for Meningioma Segmentation. IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE 2022. [DOI: 10.1109/tetci.2022.3146965] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
39
Rehman KU, Li J, Pei Y, Yasin A, Ali S, Saeed Y. Architectural Distortion-Based Digital Mammograms Classification Using Depth Wise Convolutional Neural Network. BIOLOGY 2021; 11:15. [PMID: 35053013 PMCID: PMC8773233 DOI: 10.3390/biology11010015] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Revised: 12/15/2021] [Accepted: 12/17/2021] [Indexed: 01/29/2023]
Abstract
Architectural distortion (AD) is the third most common suspicious appearance on a mammogram, indicating abnormal regions. AD detection from mammograms is challenging due to its subtle, varying asymmetry on the breast mass and its small size. Automatic detection of abnormal AD regions in mammograms by computer algorithms at the initial stages could help radiologists and doctors. Detection of the star-shaped AD ROIs is affected by noise and object location, which degrades classification performance and reduces accuracy; computer vision techniques can automatically remove the noise and detect the location of objects with varying patterns. The current study addresses the gap in detecting AD ROIs (regions of interest) from mammograms using computer vision techniques. We propose an automated computer-aided diagnostic system based on architectural distortion, using computer vision and deep learning to predict breast cancer from digital mammograms. The proposed mammogram classification framework comprises four steps: image preprocessing, augmentation and pixel-wise image segmentation; AD ROI detection; training of deep learning and machine learning networks; and classification of AD ROIs into malignant and benign classes. The proposed method was evaluated on three databases of mammogram images, PINUM, CBIS-DDSM, and DDSM, using computer vision and a depth-wise 2D V-net 64 convolutional neural network, achieving accuracies of 0.95, 0.97, and 0.98, respectively. Experimental results reveal that our proposed method outperforms ShuffleNet, MobileNet, SVM, K-NN, RF, and previous studies.
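The depth-wise convolution at the core of such networks convolves each channel with its own single filter, with no mixing across channels; a minimal NumPy sketch (cross-correlation form, as used in deep learning frameworks, with valid padding):

```python
import numpy as np

def depthwise_conv2d(x, kernels):
    """Depth-wise 2D convolution with valid padding: each input channel is
    filtered independently by its own k x k kernel.

    x:       (H, W, C) input
    kernels: (k, k, C) one kernel per channel
    returns  (H - k + 1, W - k + 1, C)
    """
    H, W, C = x.shape
    k = kernels.shape[0]
    out = np.zeros((H - k + 1, W - k + 1, C))
    for c in range(C):                      # channels are processed independently
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[i, j, c] = np.sum(x[i:i + k, j:j + k, c] * kernels[:, :, c])
    return out
```

In a separable block this would be followed by a 1x1 pointwise convolution that mixes channels, which is what keeps the parameter count low.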
Affiliation(s)
- Khalil ur Rehman
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China; (K.u.R.); (J.L.); (A.Y.); (S.A.); (Y.S.)
- Jianqiang Li
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China; (K.u.R.); (J.L.); (A.Y.); (S.A.); (Y.S.)
- Beijing Engineering Research Center for IoT Software and Systems, Beijing 100124, China
- Yan Pei
- Computer Science Division, University of Aizu, Aizuwakamatsu 965-8580, Fukushima, Japan
- Anaa Yasin
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China; (K.u.R.); (J.L.); (A.Y.); (S.A.); (Y.S.)
- Saqib Ali
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China; (K.u.R.); (J.L.); (A.Y.); (S.A.); (Y.S.)
- Yousaf Saeed
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China; (K.u.R.); (J.L.); (A.Y.); (S.A.); (Y.S.)
40
Liu W, Zhang L, Dai G, Zhang X, Li G, Yi Z. Deep Neural Network with Structural Similarity Difference and Orientation-based Loss for Position Error Classification in The Radiotherapy of Graves' Ophthalmopathy Patients. IEEE J Biomed Health Inform 2021; 26:2606-2614. [PMID: 34941537 DOI: 10.1109/jbhi.2021.3137451] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Identifying position errors for Graves' ophthalmopathy (GO) patients using electronic portal imaging device (EPID) transmission fluence maps is helpful in monitoring treatment. However, most existing models extract features only from dose difference maps computed from EPID images, which do not fully characterize the positional errors. In addition, position error has a three-dimensional spatial nature, which has not been explored in previous work. To address these problems, we propose a deep neural network (DNN) model with a structural-similarity-difference and orientation-based loss, consisting of a feature extraction network and a feature enhancement network. To capture more information, three types of Structural SIMilarity (SSIM) sub-index maps are computed to enhance the luminance, contrast, and structural features of EPID images, respectively. These maps and the dose difference maps are fed into different networks to extract radiomic features. To acquire the spatial features of the position errors, an orientation-based loss function is proposed for optimal training: it makes the data distribution more consistent with realistic 3D space by integrating the error deviations of the predicted values in the left-right, superior-inferior, and anterior-posterior directions. Experimental results on a constructed dataset demonstrate the effectiveness of the proposed model compared with related models and existing state-of-the-art methods.
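The three SSIM sub-indices (luminance, contrast, and structure) can be computed globally for a pair of images as follows; this is the standard SSIM decomposition with its usual stabilizing constants, whereas the paper uses the local, per-region map versions of the same terms:

```python
import numpy as np

def ssim_components(x, y, L=1.0):
    """Global luminance, contrast, and structure terms of SSIM for two
    images with dynamic range L. SSIM itself is their product."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    C3 = C2 / 2
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = ((x - mx) * (y - my)).mean()          # covariance
    lum = (2 * mx * my + C1) / (mx ** 2 + my ** 2 + C1)
    con = (2 * sx * sy + C2) / (sx ** 2 + sy ** 2 + C2)
    struc = (sxy + C3) / (sx * sy + C3)
    return lum, con, struc
```

Replacing the global means and variances with windowed local statistics yields the per-pixel sub-index maps the abstract refers to.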
41
Busaleh M, Hussain M, Aboalsamh HA, Amin FE. Breast Mass Classification Using Diverse Contextual Information and Convolutional Neural Network. BIOSENSORS 2021; 11:419. [PMID: 34821634 PMCID: PMC8615673 DOI: 10.3390/bios11110419] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/01/2021] [Revised: 10/14/2021] [Accepted: 10/22/2021] [Indexed: 06/13/2023]
Abstract
Masses are one of the early signs of breast cancer, and the survival rate of women with breast cancer can be improved if masses are correctly identified as benign or malignant. However, their classification is challenging due to the similarity in texture patterns of the two types of mass, and existing methods for this problem have low sensitivity and specificity. Based on the hypothesis that diverse contextual information around a mass region is a strong indicator for discriminating benign from malignant masses, and on the idea of the ensemble classifier, we introduce a computer-aided system for this problem. The system uses multiple regions of interest (ROIs) encompassing a mass region to model diverse contextual information, a single ResNet-50 model (or its density-specific modification) as a backbone for local decisions, and stacking with an SVM as the meta-classifier to predict the final decision. A data augmentation technique is introduced for fine-tuning the backbone model. The system was thoroughly evaluated on the benchmark CBIS-DDSM dataset using its provided data-split protocol, achieving a sensitivity of 98.48% and a specificity of 92.31%. Furthermore, the system gives higher performance when trained and tested on data from a specific breast-density BI-RADS class. The system does not need to fine-tune or train multiple CNN models; it introduces diverse contextual information through multiple ROIs. The comparison shows that the method outperforms state-of-the-art methods for classifying mass regions as benign or malignant. It will help radiologists reduce their burden and enhance their sensitivity in predicting malignant masses.
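The pipeline described above — per-ROI decisions from a shared backbone, combined by stacking — can be sketched with synthetic scores. The paper stacks with an SVM; to keep this sketch dependency-free, a tiny logistic regression stands in as the meta-classifier, and the ROI scores and labels below are simulated, not real CBIS-DDSM outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-ROI malignancy scores from a shared backbone:
# each mass is described by 4 ROI scores (4 context sizes around the mass).
n = 200
labels = rng.integers(0, 2, n)
roi_scores = np.clip(rng.normal(0.25 + 0.5 * labels[:, None], 0.1, (n, 4)), 0, 1)

# Stacking meta-classifier trained on the stacked per-ROI scores
# (logistic regression by gradient descent; the paper uses an SVM here).
w, b = np.zeros(4), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(roi_scores @ w + b)))
    grad = p - labels
    w -= 0.1 * roi_scores.T @ grad / n
    b -= 0.1 * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(roi_scores @ w + b))) > 0.5).astype(int)
accuracy = (pred == labels).mean()
```

The point of the design is that diversity comes from the ROIs, not from training several CNNs: one backbone scores every context size, and only the cheap meta-classifier combines them.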
Affiliation(s)
- Mariam Busaleh
- Department of Computer Science, CCIS, King Saud University, Riyadh 11451, Saudi Arabia
- Muhammad Hussain
- Department of Computer Science, CCIS, King Saud University, Riyadh 11451, Saudi Arabia
- Hatim A. Aboalsamh
- Department of Computer Science, CCIS, King Saud University, Riyadh 11451, Saudi Arabia
- Fazal-e-Amin
- Department of Software Engineering, CCIS, King Saud University, Riyadh 11543, Saudi Arabia
|
42
|
Mahmood T, Li J, Pei Y, Akhtar F. An Automated In-Depth Feature Learning Algorithm for Breast Abnormality Prognosis and Robust Characterization from Mammography Images Using Deep Transfer Learning. BIOLOGY 2021; 10:859. [PMID: 34571736 PMCID: PMC8468800 DOI: 10.3390/biology10090859] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/28/2021] [Revised: 08/25/2021] [Accepted: 08/27/2021] [Indexed: 01/17/2023]
Abstract
BACKGROUND Diagnosing breast cancer masses and calcification clusters has paramount significance in mammography, as it helps mitigate the disease's complications and treat it at an early stage. However, a wrong mammogram interpretation may lead to unnecessary biopsy of false-positive findings, which reduces the patient's survival chances. Consequently, approaches that learn to discern breast masses can reduce the number of misinterpretations and incorrect diagnoses. Conventionally used classification models focus on feature extraction techniques specific to a particular problem, based on domain information. Deep learning strategies are becoming promising alternatives that address many of the challenges of feature-based approaches. METHODS This study introduces a convolutional neural network (ConvNet)-based deep learning method to extract features at varying densities and to discern normal and suspicious regions of mammograms. Two experiments were carried out for accurate diagnosis and classification. The first experiment consisted of five end-to-end pre-trained and fine-tuned deep convolutional neural networks (DCNNs). In the second experiment, the in-depth features extracted from the ConvNet were used to train a support vector machine, achieving excellent performance. The DCNNs employed include VGGNet, GoogLeNet, MobileNet, ResNet, and DenseNet, among the most frequently used image interpretation and classification architectures. Moreover, this study covers data cleaning, preprocessing, and data augmentation to improve mass-recognition accuracy. The efficacy of all models was evaluated by training and testing on three mammography datasets, with remarkable results.
RESULTS Our deep learning ConvNet+SVM model obtained a discriminative training accuracy of 97.7% and a validation accuracy of 97.8%; by contrast, VGGNet16 yielded 90.2%, VGGNet19 93.5%, GoogLeNet 63.4%, MobileNetV2 82.9%, ResNet50 75.1%, and DenseNet121 72.9%. CONCLUSIONS The proposed model is suited to conventional pathological practice and can conceivably reduce the pathologist's strain in predicting clinical outcomes from patients' mammography images.
Affiliation(s)
- Tariq Mahmood
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China
- Division of Science and Technology, University of Education, Lahore 54000, Pakistan
- Jianqiang Li
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China
- Beijing Engineering Research Center for IoT Software and Systems, Beijing 100124, China
- Yan Pei
- Computer Science Division, University of Aizu, Aizuwakamatsu 965-8580, Japan
- Faheem Akhtar
- Department of Computer Science, Sukkur IBA University, Sukkur 65200, Pakistan
|
43
|
Das A, Narayan Mohanty M, Kumar Mallick P, Tiwari P, Muhammad K, Zhu H. Breast cancer detection using an ensemble deep learning method. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.103009] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
44
|
Xiao B, Sun H, Meng Y, Peng Y, Yang X, Chen S, Yan Z, Zheng J. Classification of microcalcification clusters in digital breast tomosynthesis using ensemble convolutional neural network. Biomed Eng Online 2021; 20:71. [PMID: 34320986 PMCID: PMC8317331 DOI: 10.1186/s12938-021-00908-1] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2021] [Accepted: 07/15/2021] [Indexed: 01/17/2023] Open
Abstract
BACKGROUND The classification of benign and malignant microcalcification clusters (MCs) is an important task for computer-aided diagnosis (CAD) of digital breast tomosynthesis (DBT) images. Owing to its imaging method, DBT has anisotropic resolution: intra-slice and inter-slice resolutions differ considerably. In addition, the sharpness of MCs varies across the slices of a DBT volume; the clearest slice is called the focus slice. These characteristics limit the performance of CAD algorithms based on standard 3D convolutional neural networks (CNNs). METHODS To make full use of these characteristics of DBT, we proposed a new ensemble CNN, which consists of a 2D ResNet34 and an anisotropic 3D ResNet that extract the 2D focus-slice features and the 3D contextual features of MCs, respectively. Moreover, anisotropic 3D convolution is used to build the 3D ResNet to avoid the influence of DBT anisotropy. RESULTS The proposed method was evaluated on 495 MCs in DBT images of 275 patients, collected from our collaborating hospital. Using a decision-level ensemble strategy, the area under the receiver operating characteristic (ROC) curve (AUC) and the accuracy for classifying benign and malignant MCs were 0.8837 and 82.00%, significantly higher than the results of the 2D ResNet34 (AUC: 0.8264, ACC: 76.00%) and the anisotropic 3D ResNet (AUC: 0.8455, ACC: 76.00%) alone. Compared with radiomics-based classification of 3D features, the AUC of the deep learning method with the decision-level ensemble strategy improved by 0.0435, and the F1 score improved from 79.37% to 85.71%.
More importantly, the sensitivity increased from 78.13% to 84.38% and the specificity from 66.67% to 77.78%, effectively reducing false positives. CONCLUSION The results demonstrate that the ensemble CNN can effectively integrate 2D and 3D features, improve the classification of benign and malignant MCs in DBT, and reduce false positives.
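The anisotropic 3D convolution idea — restricting the kernel to 1×K×K so that the low inter-slice resolution is never mixed into in-plane features — can be illustrated with a naive numpy convolution. This is illustrative only; the paper builds such convolutions into a 3D ResNet rather than using them standalone:

```python
import numpy as np

def aniso_conv3d(vol, kernel2d):
    # "Anisotropic" 3D convolution with a 1xKxK kernel: effectively a 2D
    # convolution applied slice by slice, so each output slice depends only
    # on its own input slice and inter-slice blurring is avoided.
    d, h, w = vol.shape
    k = kernel2d.shape[0]
    pad = k // 2
    out = np.zeros((d, h, w))
    vp = np.pad(vol, ((0, 0), (pad, pad), (pad, pad)))
    for z in range(d):              # slices processed independently
        for i in range(h):
            for j in range(w):
                out[z, i, j] = (vp[z, i:i+k, j:j+k] * kernel2d).sum()
    return out
```

A quick way to see the anisotropy: perturbing one slice of the input changes only the corresponding output slice, which would not hold for a full KxKxK kernel.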
Affiliation(s)
- Bingbing Xiao
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Haotian Sun
- University of Science and Technology of China, Hefei, China
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- You Meng
- Department of Breast Surgery, The Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou, China
- Gusu School, Nanjing Medical University, Suzhou, China
- Yunsong Peng
- University of Science and Technology of China, Hefei, China
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- Xiaodong Yang
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- Shuangqing Chen
- Gusu School, Nanjing Medical University, Suzhou, China
- Department of Radiology, The Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou, China
- Zhuangzhi Yan
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Jian Zheng
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
|
45
|
Rehman KU, Li J, Pei Y, Yasin A, Ali S, Mahmood T. Computer Vision-Based Microcalcification Detection in Digital Mammograms Using Fully Connected Depthwise Separable Convolutional Neural Network. SENSORS 2021; 21:s21144854. [PMID: 34300597 PMCID: PMC8309805 DOI: 10.3390/s21144854] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Revised: 07/12/2021] [Accepted: 07/12/2021] [Indexed: 01/21/2023]
Abstract
Microcalcification clusters (MCs) in mammograms are one of the major signs of breast cancer. However, detecting microcalcifications in mammograms is a challenging task for radiologists because of their tiny size and scattered location within dense breast tissue. Automatic CAD systems need to predict breast cancer at an early stage to support clinical work. The inter-cluster gap, noise between individual MCs, and the location of individual objects can affect classification performance and reduce the true-positive rate. In this study, we propose a computer-vision-based FC-DSCNN CAD system for detecting microcalcification clusters in mammograms and classifying them as malignant or benign. The computer vision method automatically controls noise and background color contrast and detects MC objects directly from mammograms, which increases the classification performance of the neural network. The breast cancer classification framework has four steps: image preprocessing and augmentation, RGB-to-grayscale channel transformation, microcalcification region segmentation, and MC ROI classification using the FC-DSCNN to predict malignant and benign cases. The proposed method was evaluated on 3568 DDSM and 2885 PINUM mammogram images with automatic feature extraction, obtaining true-positive ratios of 0.97 at 2.35 false positives per image and 0.99 at 2.45 false positives per image, respectively. Experimental results demonstrate that the proposed method outperforms traditional and previous approaches.
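A depthwise separable convolution, the building block the FC-DSCNN name refers to, factors a standard convolution into a per-channel spatial filter followed by a 1×1 channel-mixing step. A naive numpy sketch; the shapes and sizes below are arbitrary examples, not the paper's architecture:

```python
import numpy as np

def depthwise_separable(x, dw_kernels, pw_weights):
    # x: (C, H, W); dw_kernels: (C, k, k), one spatial filter per channel;
    # pw_weights: (C_out, C), a 1x1 convolution mixing channels.
    # Parameters: C*k*k + C_out*C versus C_out*C*k*k for a full convolution.
    c, h, w = x.shape
    k = dw_kernels.shape[1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    dw = np.zeros((c, h, w))
    for ch in range(c):              # depthwise: each channel filtered alone
        for i in range(h):
            for j in range(w):
                dw[ch, i, j] = (xp[ch, i:i+k, j:j+k] * dw_kernels[ch]).sum()
    # pointwise: 1x1 convolution = channel mixing at every pixel
    return np.einsum('oc,chw->ohw', pw_weights, dw)
```

For example, with 16 input channels, 32 output channels, and 3×3 kernels, the separable form needs 16·9 + 32·16 = 656 weights against 32·16·9 = 4608 for a full convolution, which is why depthwise separable networks stay compact.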
Affiliation(s)
- Khalil ur Rehman
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China
- Jianqiang Li
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China
- Beijing Engineering Research Center for IoT Software and Systems, Beijing 100124, China
- Yan Pei
- Computer Science Division, University of Aizu, Aizuwakamatsu, Fukushima 965-8580, Japan
- Anaa Yasin
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China
- Saqib Ali
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China
- Tariq Mahmood
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China
- Division of Science and Technology, University of Education, Lahore 54000, Pakistan
|
46
|
Wang Y, Feng Y, Zhang L, Wang Z, Lv Q, Yi Z. Deep adversarial domain adaptation for breast cancer screening from mammograms. Med Image Anal 2021; 73:102147. [PMID: 34246849 DOI: 10.1016/j.media.2021.102147] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2020] [Revised: 11/10/2020] [Accepted: 06/23/2021] [Indexed: 02/05/2023]
Abstract
The early detection of breast cancer greatly increases the chances that the right decision for a successful treatment plan will be made. Deep learning approaches are used in breast cancer screening and have achieved promising results when a large-scale labeled dataset is available for training. However, they may suffer from a dramatic decrease in performance when annotated data are limited. In this paper, we propose a method called deep adversarial domain adaptation (DADA) to improve the performance of breast cancer screening using mammography. Specifically, our aim is to extract the knowledge from a public dataset (source domain) and transfer the learned knowledge to improve the detection performance on the target dataset (target domain). Because of the different distributions of the source and target domains, the proposed method adopts an adversarial learning technique to perform domain adaptation using the two domains. Specifically, the adversarial procedure is trained by taking advantage of the disagreement of two classifiers. To evaluate the proposed method, the public well-labeled image-level dataset Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) is employed as the source domain. Mammography samples from the West China Hospital were collected to construct our target domain dataset, and the samples are annotated at case-level based on the corresponding pathological reports. The experimental results demonstrate the effectiveness of the proposed method compared with several other state-of-the-art automatic breast cancer screening approaches.
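The "disagreement of two classifiers" that drives the adversarial procedure is typically measured as the L1 distance between the two classifiers' predicted distributions on target samples, as in the maximum classifier discrepancy formulation: the classifiers are trained to maximize it on the target domain while the feature extractor is trained to minimize it. A sketch of that discrepancy measure (the exact DADA loss may differ):

```python
import numpy as np

def softmax(z):
    # numerically stable row-wise softmax
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def discrepancy(logits1, logits2):
    # Mean L1 distance between the two classifiers' predicted distributions.
    # Zero when the classifiers agree on every sample; large near target
    # samples that fall outside the source support.
    return np.abs(softmax(logits1) - softmax(logits2)).mean()
```

Intuitively, target features that only one classifier can label confidently yield high discrepancy, so pushing the extractor to minimize it drags target features toward regions where the source-trained classifiers agree.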
Affiliation(s)
- Yan Wang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China; Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore
- Yangqin Feng
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore
- Lei Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
- Zizhou Wang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
- Qing Lv
- Department of Galactophore Surgery, West China Hospital, Sichuan University, Chengdu 610041, PR China
- Zhang Yi
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
|
47
|
Yuan Y, Zhang L, Wang L, Huang H. Multi-level Attention Network for Retinal Vessel Segmentation. IEEE J Biomed Health Inform 2021; 26:312-323. [PMID: 34129508 DOI: 10.1109/jbhi.2021.3089201] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Automatic vessel segmentation in the fundus images plays an important role in the screening, diagnosis, treatment, and evaluation of various cardiovascular and ophthalmologic diseases. However, due to the limited well-annotated data, varying size of vessels, and intricate vessel structures, retinal vessel segmentation has become a long-standing challenge. In this paper, a novel deep learning model called AACA-MLA-D-UNet is proposed to fully utilize the low-level detailed information and the complementary information encoded in different layers to accurately distinguish the vessels from the background with low model complexity. The architecture of the proposed model is based on U-Net, and the dropout dense block is proposed to preserve maximum vessel information between convolution layers and mitigate the over-fitting problem. The adaptive atrous channel attention module is embedded in the contracting path to sort the importance of each feature channel automatically. After that, the multi-level attention module is proposed to integrate the multi-level features extracted from the expanding path, and use them to refine the features at each individual layer via attention mechanism. The proposed method has been validated on the three publicly available databases, i.e. the DRIVE, STARE, and CHASE DB1. The experimental results demonstrate that the proposed method can achieve better or comparable performance on retinal vessel segmentation with lower model complexity. Furthermore, the proposed method can also deal with some challenging cases and has strong generalization ability.
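The channel-attention step — automatically sorting the importance of feature channels — can be sketched in squeeze-and-excitation style: global average pooling per channel, a small bottleneck MLP, and a sigmoid gate. This is a generic illustration only; the paper's adaptive atrous channel attention module additionally uses atrous convolutions, which are omitted here:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    # x: (C, H, W). Squeeze: global average pool per channel.
    # Excite: bottleneck MLP (w1: (C//r, C), w2: (C, C//r)) with ReLU,
    # then a sigmoid gate in (0, 1) rescales each channel.
    s = x.mean(axis=(1, 2))                   # (C,) channel descriptors
    a = sigmoid(w2 @ np.maximum(w1 @ s, 0))   # (C,) attention weights
    return x * a[:, None, None]
```

Because the gates lie in (0, 1), unimportant channels are attenuated rather than hard-pruned, and the whole module stays differentiable end to end.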
|
48
|
Sánchez-Cauce R, Pérez-Martín J, Luque M. Multi-input convolutional neural network for breast cancer detection using thermal images and clinical data. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 204:106045. [PMID: 33784548 DOI: 10.1016/j.cmpb.2021.106045] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/04/2020] [Accepted: 03/05/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE Breast cancer is the most common cancer in women. While mammography is the most widely used screening technique for the early detection of this disease, it has several disadvantages such as radiation exposure or high economic cost. Recently, multiple authors studied the ability of machine learning algorithms for early diagnosis of breast cancer using thermal images, showing that thermography can be considered as a complementary test to mammography, or even as a primary test under certain circumstances. Moreover, although some personal and clinical data are considered risk factors of breast cancer, none of these works considered that information jointly with thermal images. METHODS We propose a novel approach for early detection of breast cancer combining thermal images of different views with personal and clinical data, building a multi-input classification model which exploits the benefits of convolutional neural networks for image analysis. First, we searched for structures using only thermal images. Next, we added the clinical data as a new branch of each of these structures, aiming to improve its performance. RESULTS We applied our method to the most widely used public database of breast thermal images, the Database for Mastology Research with Infrared Image. The best model achieves a 97% accuracy and an area under the ROC curve of 0.99, with a specificity of 100% and a sensitivity of 83%. CONCLUSIONS After studying the impact of thermal images and personal and clinical data on multi-input convolutional neural networks for breast cancer diagnosis, we conclude that: (1) adding the lateral views to the front view improves the performance of the classification model, and (2) including personal and clinical data helps the model to recognize sick patients.
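The multi-input design — one CNN branch per thermal view plus a branch for personal and clinical data, fused before the final decision — reduces to a late-fusion classifier. A toy numpy sketch; the feature sizes, weights, and branch shapes are made up for illustration and are not the paper's architecture:

```python
import numpy as np

def fuse_predict(img_feats, clinical, w_img, w_clin, w_out):
    # img_feats: one CNN feature vector per thermal view (e.g. front,
    # left lateral, right lateral); clinical: numeric risk-factor vector.
    # Late fusion: project each branch, concatenate, then classify jointly.
    branches = [w @ f for w, f in zip(w_img, img_feats)]
    h = np.concatenate(branches + [w_clin @ clinical])
    z = w_out @ np.maximum(h, 0.0)            # shared hidden layer + readout
    return 1.0 / (1.0 + np.exp(-z))           # probability patient is sick
```

The abstract's two conclusions map directly onto this structure: adding lateral views means more `img_feats` branches, and adding clinical data means the extra `w_clin @ clinical` branch in the concatenation.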
Affiliation(s)
- Raquel Sánchez-Cauce
- Department of Artificial Intelligence, Universidad Nacional de Educación a Distancia (UNED), Juan del Rosal, 16, 28040 Madrid, Spain
- Jorge Pérez-Martín
- Department of Artificial Intelligence, Universidad Nacional de Educación a Distancia (UNED), Juan del Rosal, 16, 28040 Madrid, Spain
- Manuel Luque
- Department of Artificial Intelligence, Universidad Nacional de Educación a Distancia (UNED), Juan del Rosal, 16, 28040 Madrid, Spain
|
49
|
Zheng J, Sun H, Wu S, Jiang K, Peng Y, Yang X, Zhang F, Li M. 3D Context-Aware Convolutional Neural Network for False Positive Reduction in Clustered Microcalcifications Detection. IEEE J Biomed Health Inform 2021; 25:764-773. [PMID: 32750942 DOI: 10.1109/jbhi.2020.3003316] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
False positive (FP) reduction is indispensable for clustered microcalcification (MC) detection in digital breast tomosynthesis (DBT), since there may be excessive false candidates after the detection stage. Considering that DBT volumes have anisotropic resolution, we proposed a novel 3D context-aware convolutional neural network (CNN) to reduce FPs, which consists of a 2D intra-slice feature extraction branch and a 3D inter-slice feature fusion branch. In particular, 3D anisotropic convolutions were designed to learn representations from DBT volumes, and inter-slice information fusion is performed only at the feature-map level, which avoids the influence of the anisotropic resolution of the DBT volume. The proposed method was evaluated on a large-scale Chinese women population of 877 cases with 1754 DBT volumes and compared with 8 related methods. Experimental results show that the proposed network achieved the best performance, with an accuracy of 92.68% for FP reduction, an AUC of 97.65%, and 0.0512 FPs per DBT volume at a sensitivity of 90%. This also demonstrates that making full use of the 3D contextual information of DBT volumes can improve the performance of the classification algorithm.
|
50
|
Shen Y, Wu N, Phang J, Park J, Liu K, Tyagi S, Heacock L, Kim SG, Moy L, Cho K, Geras KJ. An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization. Med Image Anal 2021; 68:101908. [PMID: 33383334 PMCID: PMC7828643 DOI: 10.1016/j.media.2020.101908] [Citation(s) in RCA: 64] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Revised: 11/12/2020] [Accepted: 11/13/2020] [Indexed: 12/12/2022]
Abstract
Medical images differ from natural images in significantly higher resolutions and smaller regions of interest. Because of these differences, neural network architectures that work well for natural images might not be applicable to medical image analysis. In this work, we propose a novel neural network model to address these unique properties of medical images. This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions. It then applies another higher-capacity network to collect details from chosen regions. Finally, it employs a fusion module that aggregates global and local information to make a prediction. While existing methods often require lesion segmentation during training, our model is trained with only image-level labels and can generate pixel-level saliency maps indicating possible malignant findings. We apply the model to screening mammography interpretation: predicting the presence or absence of benign and malignant lesions. On the NYU Breast Cancer Screening Dataset, our model outperforms (AUC = 0.93) ResNet-34 and Faster R-CNN in classifying breasts with malignant findings. On the CBIS-DDSM dataset, our model achieves performance (AUC = 0.858) on par with state-of-the-art approaches. Compared to ResNet-34, our model is 4.1x faster for inference while using 78.4% less GPU memory. Furthermore, we demonstrate, in a reader study, that our model surpasses radiologist-level AUC by a margin of 0.11.
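The weakly supervised selection step — scoring regions with a cheap global network, then sending only the top-k regions to the high-capacity local network — can be sketched as a top-k patch search over a saliency map. Scoring a patch by the sum of its saliency values is an assumption for illustration; the paper's selection criterion may differ:

```python
import numpy as np

def topk_regions(saliency, k=2, patch=2):
    # saliency: low-resolution map from the low-capacity "global" network.
    # Score every patch-sized window and return the k highest-scoring
    # top-left coordinates; the high-capacity "local" network would then
    # be applied only to the corresponding full-resolution crops.
    h, w = saliency.shape
    scored = []
    for i in range(0, h - patch + 1):
        for j in range(0, w - patch + 1):
            scored.append((saliency[i:i+patch, j:j+patch].sum(), (i, j)))
    scored.sort(key=lambda t: -t[0])
    return [ij for _, ij in scored[:k]]
```

This is what keeps GPU memory low for high-resolution mammograms: the expensive network never sees the full image, only the few crops the saliency map nominates.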
Affiliation(s)
- Yiqiu Shen
- Center for Data Science, New York University, 60 5th Ave, New York, NY 10011, USA
- Nan Wu
- Center for Data Science, New York University, 60 5th Ave, New York, NY 10011, USA
- Jason Phang
- Center for Data Science, New York University, 60 5th Ave, New York, NY 10011, USA
- Jungkyu Park
- Department of Radiology, NYU School of Medicine, 530 1st Ave, New York, NY 10016, USA
- Kangning Liu
- Center for Data Science, New York University, 60 5th Ave, New York, NY 10011, USA
- Sudarshini Tyagi
- Department of Computer Science, Courant Institute of Mathematical Sciences, New York University, 251 Mercer St, New York, NY 10012, USA
- Laura Heacock
- Department of Radiology, NYU School of Medicine, 530 1st Ave, New York, NY 10016, USA; Perlmutter Cancer Center, NYU Langone Health, 160 E 34th St, New York, NY 10016, USA
- S Gene Kim
- Department of Radiology, NYU School of Medicine, 530 1st Ave, New York, NY 10016, USA; Center for Advanced Imaging Innovation and Research, NYU Langone Health, 660 1st Ave, New York, NY 10016, USA; Perlmutter Cancer Center, NYU Langone Health, 160 E 34th St, New York, NY 10016, USA
- Linda Moy
- Department of Radiology, NYU School of Medicine, 530 1st Ave, New York, NY 10016, USA; Center for Advanced Imaging Innovation and Research, NYU Langone Health, 660 1st Ave, New York, NY 10016, USA; Perlmutter Cancer Center, NYU Langone Health, 160 E 34th St, New York, NY 10016, USA
- Kyunghyun Cho
- Center for Data Science, New York University, 60 5th Ave, New York, NY 10011, USA; Department of Computer Science, Courant Institute of Mathematical Sciences, New York University, 251 Mercer St, New York, NY 10012, USA
- Krzysztof J Geras
- Center for Data Science, New York University, 60 5th Ave, New York, NY 10011, USA; Department of Radiology, NYU School of Medicine, 530 1st Ave, New York, NY 10016, USA; Center for Advanced Imaging Innovation and Research, NYU Langone Health, 660 1st Ave, New York, NY 10016, USA
|