1
Darbandi MR, Darbandi M, Darbandi S, Bado I, Hadizadeh M, Khorram Khorshid HR. Artificial intelligence breakthroughs in pioneering early diagnosis and precision treatment of breast cancer: A multimethod study. Eur J Cancer 2024; 209:114227. [PMID: 39053289] [DOI: 10.1016/j.ejca.2024.114227]
Abstract
This article delves into the potential of artificial intelligence (AI) to enhance early breast cancer (BC) detection for improved treatment outcomes and patient care. Utilizing a multimethod approach comprising literature review and experiments, the study systematically reviewed 310 articles utilizing 30 diverse datasets. Among the techniques assessed, the recurrent neural network (RNN) emerged as the most accurate, achieving 98.58% accuracy, followed by genetic principles (GP), transfer learning (TL), and artificial neural networks (ANNs), with accuracies exceeding 96%. While conventional machine learning (ML) methods demonstrated accuracies above 90%, deep learning (DL) techniques outperformed them. Evaluation of BC diagnostic models using the Wisconsin breast cancer dataset (WBCD) highlighted logistic regression (LR) and support vector machine (SVM) as the most accurate predictors, with minimal errors for clinical data. Conversely, decision trees (DT) exhibited higher error rates due to overfitting, emphasizing the importance of algorithm selection for complex datasets. Analysis of ultrasound images underscored the significance of preprocessing, while histopathological image analysis using convolutional neural networks (CNNs) demonstrated robust classification capabilities. These findings underscore the transformative potential of ML and DL in BC diagnosis, offering automated, accurate, and accessible diagnostic tools. Collaboration among stakeholders is crucial for further advancements in BC detection methods.
Affiliation(s)
- Mahsa Darbandi
- Fetal Health Research Center, Hope Generation Foundation, Tehran, Iran.
- Sara Darbandi
- Gene Therapy and Regenerative Medicine Research Center, Hope Generation Foundation, Tehran, Iran.
- Igor Bado
- Department of Oncological Sciences, Tisch Cancer Institute, New York, USA.
- Mohammad Hadizadeh
- Cancer Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
- Hamid Reza Khorram Khorshid
- Genetics Research Center, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran; Personalized Medicine and Genometabolics Research Center, Hope Generation Foundation, Tehran, Iran.
2
Wang Z, Wu M, Liu Q, Wang X, Yan C, Song T. Multiclassification of Hepatic Cystic Echinococcosis by Using Multiple Kernel Learning Framework and Ultrasound Images. Ultrasound Med Biol 2024; 50:1034-1044. [PMID: 38679514] [DOI: 10.1016/j.ultrasmedbio.2024.03.018]
Abstract
To properly treat and care for hepatic cystic echinococcosis (HCE), it is essential to make an accurate diagnosis before treatment. OBJECTIVE The objective of this study was to assess the diagnostic accuracy of computer-aided diagnosis techniques in classifying HCE ultrasound images into five subtypes. METHODS A total of 1820 HCE ultrasound images collected from 967 patients were included in the study. A multi-kernel learning method was developed to learn the texture and depth features of the ultrasound images, and the combined kernel functions were built into a support vector machine (MK-SVM) for the classification task. The experimental results were evaluated using five-fold cross-validation. Finally, our approach was compared with three other machine learning algorithms: the decision tree classifier, random forest, and gradient boosting decision tree. RESULTS Among all the methods used in the study, the MK-SVM achieved the highest accuracy, 96.6%, on the fused feature set. CONCLUSION The multi-kernel learning method effectively learns different image features from ultrasound images by utilizing various kernels. The MK-SVM method, which learns texture features and depth features separately and then combines them, has significant application value in HCE classification tasks.
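The multi-kernel idea in this abstract can be sketched as a weighted sum of per-feature-set kernels fed to an SVM with a precomputed kernel. This is a minimal illustration only: the 0.5/0.5 weighting, the RBF/linear kernel choices, and the synthetic "texture" and "depth" features are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

rng = np.random.default_rng(0)
# Stand-ins for texture features and deep features of 100 images.
texture = rng.normal(size=(100, 16))
deep = rng.normal(size=(100, 32))
labels = (texture[:, 0] + deep[:, 0] > 0).astype(int)

def combined_kernel(A_tex, A_deep, B_tex, B_deep, w=0.5):
    """Weighted sum of an RBF kernel on texture features and a
    linear kernel on deep features (sum of PSD kernels is PSD)."""
    return w * rbf_kernel(A_tex, B_tex) + (1 - w) * linear_kernel(A_deep, B_deep)

K_train = combined_kernel(texture, deep, texture, deep)
clf = SVC(kernel="precomputed").fit(K_train, labels)
train_acc = clf.score(K_train, labels)
print(round(train_acc, 2))
```

In practice the kernel weight `w` would itself be tuned (e.g., by cross-validation), which is the part multiple-kernel-learning frameworks automate.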
Affiliation(s)
- Zhengye Wang
- Center for Disease Control and Prevention, Xinjiang Production and Construction Corps, Urumqi, China; Ultrasound Department, State Key Laboratory of Pathogenesis, Prevention and Treatment of High Incidence Disease in Central Asia, The First Affiliated Hospital of Xinjiang Medical University, Urumqi, China
- Miao Wu
- College of Medical Engineering and Technology, Xinjiang Medical University, Urumqi, China
- Qian Liu
- Basic Medical College, Xinjiang Medical University, Urumqi, China
- Xiaorong Wang
- Ultrasound Department, State Key Laboratory of Pathogenesis, Prevention and Treatment of High Incidence Disease in Central Asia, The First Affiliated Hospital of Xinjiang Medical University, Urumqi, China
- Chuanbo Yan
- College of Medical Engineering and Technology, Xinjiang Medical University, Urumqi, China
- Tao Song
- Ultrasound Department, State Key Laboratory of Pathogenesis, Prevention and Treatment of High Incidence Disease in Central Asia, The First Affiliated Hospital of Xinjiang Medical University, Urumqi, China.
3
Xu P, Zhao J, Wan M, Song Q, Su Q, Wang D. Classification of multi-feature fusion ultrasound images of breast tumor within category 4 using convolutional neural networks. Med Phys 2024; 51:4243-4257. [PMID: 38436433] [DOI: 10.1002/mp.16946]
Abstract
BACKGROUND Breast tumors are a serious threat to women's health, and ultrasound (US) is a common and economical method for diagnosing breast cancer. Breast Imaging Reporting and Data System (BI-RADS) category 4 has the highest false-positive rate, about 30%, among the five categories. The classification task within BI-RADS category 4 is challenging and has not been fully studied. PURPOSE This work aimed to use convolutional neural networks (CNNs) to classify breast tumors from B-mode images within category 4, overcoming the dependence on operator and artifacts. Additionally, this work intended to take full advantage of morphological and textural features in breast tumor US images to improve classification accuracy. METHODS First, original US images coming directly from the hospital were cropped and resized. Of 1385 B-mode US BI-RADS category 4 images, biopsy confirmed 503 benign and 882 malignant tumor samples. Then, a K-means clustering algorithm and sliding-window entropy computation were applied to the US images. Because the original B-mode images, K-means clustering images, and entropy images each represent different characteristics of malignant and benign tumors, they were fused in three-channel form into a multi-feature fusion image dataset. The training, validation, and test sets contained 969, 277, and 139 images, respectively. With transfer learning, 11 CNN models including DenseNet and ResNet were investigated. Finally, by comparing the accuracy, precision, recall, F1-score, and area under the curve (AUC) of the results, the models with better performance were selected. Normality of the data was assessed by the Shapiro-Wilk test. The DeLong test and independent t-test were used to evaluate significant differences in AUC and the other metrics. The false discovery rate was used to ultimately evaluate the advantage of the CNN with the highest evaluation metrics. In addition, anti-log compression was studied but showed no improvement in the CNNs' classification results.
RESULTS With multi-feature fusion images, DenseNet121 achieved the highest accuracy, 80.22 ± 1.45%, compared with the other CNNs, with precision of 77.97 ± 2.89% and AUC of 0.82 ± 0.01. Multi-feature fusion improved the accuracy of DenseNet121 by 1.87% over classification of the original B-mode images (p < 0.05). CONCLUSION CNNs with multi-feature fusion show good potential for reducing the false-positive rate within category 4, making the diagnosis of category 4 breast tumors more accurate and precise.
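The three-channel fusion described above can be sketched by stacking the original grey-level image, a K-means-clustered version, and a sliding-window entropy map as one RGB-like network input. The window size (5×5), cluster count (k=3), histogram binning, and the synthetic image are illustrative assumptions, not the study's parameters.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float32)  # stand-in B-mode image

# Channel 2: K-means clustering of pixel intensities (k=3 assumed),
# replacing each pixel by its cluster centre.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(img.reshape(-1, 1))
clustered = km.cluster_centers_[km.labels_].reshape(img.shape)

# Channel 3: Shannon entropy over a sliding window (5x5 assumed, 16 bins).
def local_entropy(a, w=5):
    pad = w // 2
    padded = np.pad(a, pad, mode="edge")
    out = np.zeros_like(a)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            win = padded[i:i + w, j:j + w]
            hist, _ = np.histogram(win, bins=16, range=(0, 256))
            p = hist[hist > 0] / win.size
            out[i, j] = -(p * np.log2(p)).sum()
    return out

entropy = local_entropy(img)
fused = np.stack([img, clustered, entropy], axis=-1)  # (H, W, 3) network input
print(fused.shape)
```

The fused array then plays the role of an ordinary three-channel image for a pretrained CNN such as DenseNet121.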
Affiliation(s)
- Pengfei Xu
- Department of Biomedical Engineering, Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, China
- Jing Zhao
- The Second Hospital of Jilin University, Changchun, China
- Mingxi Wan
- Department of Biomedical Engineering, Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, China
- Qing Song
- The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China
- Qiang Su
- Department of Oncology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Diya Wang
- Department of Biomedical Engineering, Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, China
4
AlZoubi A, Eskandari A, Yu H, Du H. Explainable DCNN Decision Framework for Breast Lesion Classification from Ultrasound Images Based on Cancer Characteristics. Bioengineering (Basel) 2024; 11:453. [PMID: 38790320] [PMCID: PMC11117892] [DOI: 10.3390/bioengineering11050453]
Abstract
In recent years, deep convolutional neural networks (DCNNs) have shown promising performance in medical image analysis, including breast lesion classification in 2D ultrasound (US) images. Despite the outstanding performance of DCNN solutions, explaining their decisions remains an open investigation. Yet, the explainability of DCNN models has become essential for healthcare systems to accept and trust the models. This paper presents a novel framework for explaining DCNN classification decisions of lesions in ultrasound images using the saliency maps linking the DCNN decisions to known cancer characteristics in the medical domain. The proposed framework consists of three main phases. First, DCNN models for classification in ultrasound images are built. Next, selected methods for visualization are applied to obtain saliency maps on the input images of the DCNN models. In the final phase, the visualization outputs and domain-known cancer characteristics are mapped. The paper then demonstrates the use of the framework for breast lesion classification from ultrasound images. We first follow the transfer learning approach and build two DCNN models. We then analyze the visualization outputs of the trained DCNN models using the EGrad-CAM and Ablation-CAM methods. We map the DCNN model decisions of benign and malignant lesions through the visualization outputs to the characteristics such as echogenicity, calcification, shape, and margin. A retrospective dataset of 1298 US images collected from different hospitals is used to evaluate the effectiveness of the framework. The test results show that these characteristics contribute differently to the benign and malignant lesions' decisions. Our study provides the foundation for other researchers to explain the DCNN classification decisions of other cancer types.
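The framework above builds on saliency maps such as EGrad-CAM and Ablation-CAM. A model-agnostic cousin of these methods, occlusion sensitivity, can be sketched in a few lines: slide a patch over the image, mask it, and record how much the classifier's score drops. The toy "classifier" scoring mean brightness of a bright square is an illustrative assumption standing in for a trained DCNN.

```python
import numpy as np

def occlusion_saliency(img, score_fn, patch=8):
    """Saliency by occlusion: score drop when each patch is zeroed out."""
    base = score_fn(img)
    sal = np.zeros_like(img, dtype=float)
    for i in range(0, img.shape[0], patch):
        for j in range(0, img.shape[1], patch):
            masked = img.copy()
            masked[i:i + patch, j:j + patch] = 0.0
            sal[i:i + patch, j:j + patch] = base - score_fn(masked)
    return sal

img = np.zeros((32, 32))
img[8:16, 8:16] = 1.0                  # a bright "lesion"
score = lambda x: x.mean()             # stand-in classifier score
sal = occlusion_saliency(img, score)   # saliency concentrates on the bright region
```

Mapping such maps to domain-known characteristics (echogenicity, calcification, shape, margin) is then a matter of checking where the high-saliency regions fall relative to those structures.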
Affiliation(s)
- Alaa AlZoubi
- School of Computing, University of Derby, Derby DE3 16B, UK
- Ali Eskandari
- School of Computing, University of Derby, Derby DE3 16B, UK
- Harry Yu
- School of Computing, University of Derby, Derby DE3 16B, UK
- Hongbo Du
- School of Computing, The University of Buckingham, Buckingham MK18 1EG, UK
5
Guo Y, Zhang H, Yuan L, Chen W, Zhao H, Yu QQ, Shi W. Machine learning and new insights for breast cancer diagnosis. J Int Med Res 2024; 52:3000605241237867. [PMID: 38663911] [PMCID: PMC11047257] [DOI: 10.1177/03000605241237867]
Abstract
Breast cancer (BC) is the most prominent form of cancer among females all over the world. Current methods of BC detection include X-ray mammography, ultrasound, computed tomography, magnetic resonance imaging, positron emission tomography and breast thermographic techniques. More recently, machine learning (ML) tools have been increasingly employed in diagnostic medicine for their high efficiency in detection and intervention. Imaging features and mathematical analyses can then be used to generate ML models, which stratify, differentiate and detect benign and malignant breast lesions. Given its marked advantages, radiomics is a frequently used tool in recent research and clinics. Artificial neural networks and deep learning (DL) are novel forms of ML that evaluate data using computer simulation of the human brain. DL directly processes unstructured information, such as images, sounds and language, and performs precise clinical image stratification, medical record analyses and tumour diagnosis. Herein, this review thoroughly summarizes prior investigations on the application of medical images for the detection and intervention of BC using radiomics, DL and ML. The aim is to provide guidance to scientists regarding the use of artificial intelligence and ML in research and the clinic.
Affiliation(s)
- Ya Guo
- Department of Oncology, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Heng Zhang
- Department of Laboratory Medicine, Shandong Daizhuang Hospital, Jining, Shandong Province, China
- Leilei Yuan
- Department of Oncology, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Weidong Chen
- Department of Oncology, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Haibo Zhao
- Department of Oncology, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Qing-Qing Yu
- Phase I Clinical Research Centre, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Wenjie Shi
- Molecular and Experimental Surgery, University Clinic for General-, Visceral-, Vascular- and Trans-Plantation Surgery, Medical Faculty University Hospital Magdeburg, Otto-von Guericke University, Magdeburg, Germany
6
Tian R, Lu G, Tang S, Sang L, Ma H, Qian W, Yang W. Benign and malignant classification of breast tumor ultrasound images using conventional radiomics and transfer learning features: A multicenter retrospective study. Med Eng Phys 2024; 125:104117. [PMID: 38508797] [DOI: 10.1016/j.medengphy.2024.104117]
Abstract
This study aims to establish an effective benign and malignant classification model for breast tumor ultrasound images by using conventional radiomics and transfer learning features. We collaborated with a local hospital and collected a base dataset (Dataset A) consisting of 1050 cases of single lesion 2D ultrasound images from patients, with a total of 593 benign and 357 malignant tumor cases. The experimental approach comprises three main parts: conventional radiomics, transfer learning, and feature fusion. Furthermore, we assessed the model's generalizability by utilizing multicenter data obtained from Datasets B and C. The results from conventional radiomics indicated that the SVM classifier achieved the highest balanced accuracy of 0.791, while XGBoost obtained the highest AUC of 0.854. For transfer learning, we extracted deep features from ResNet50, Inception-v3, DenseNet121, MNASNet, and MobileNet. Among these models, MNASNet, with 640-dimensional deep features, yielded the optimal performance, with a balanced accuracy of 0.866, AUC of 0.937, sensitivity of 0.819, and specificity of 0.913. In the feature fusion phase, we trained SVM, ExtraTrees, XGBoost, and LightGBM with early fusion features and evaluated them with weighted voting. This approach achieved the highest balanced accuracy of 0.964 and AUC of 0.981. Combining conventional radiomics and transfer learning features demonstrated clear advantages over using individual features for breast tumor ultrasound image classification. This automated diagnostic model can ease patient burden and provide additional diagnostic support to radiologists. The performance of this model encourages future prospective research in this domain.
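The final fusion step above, training several classifiers on the fused features and combining them by weighted voting, can be sketched as averaging predicted probabilities weighted by each model's score. The classifier trio (SVC, ExtraTrees, and logistic regression standing in for the gradient-boosting models), the weighting scheme, and the synthetic data are illustrative assumptions, not the study's tuned configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the fused radiomics + deep feature matrix.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

models = [SVC(probability=True, random_state=0),
          ExtraTreesClassifier(random_state=0),
          LogisticRegression(max_iter=1000)]
weights, probs = [], []
for m in models:
    m.fit(Xtr, ytr)
    weights.append(m.score(Xtr, ytr))        # proxy for a validation score
    probs.append(m.predict_proba(Xte)[:, 1]) # predicted malignancy probability

# Weighted soft vote: probability average weighted by model quality.
w = np.array(weights) / np.sum(weights)
fused_prob = np.average(np.vstack(probs), axis=0, weights=w)
pred = (fused_prob >= 0.5).astype(int)
acc = (pred == yte).mean()
print(round(acc, 2))
```

In the study the weights would come from held-out validation performance rather than training accuracy, which is used here only to keep the sketch short.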
Affiliation(s)
- Ronghui Tian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Guoxiu Lu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Department of Nuclear Medicine, General Hospital of Northern Theatre Command, Shenyang, China
- Shiting Tang
- Department of Orthopedics, Joint Surgery and Sports Medicine, The First Hospital of China Medical University, Shenyang, China
- Liang Sang
- Department of Ultrasound, The First Hospital of China Medical University, Shenyang, China
- He Ma
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Wei Qian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Wei Yang
- Department of Radiology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Shenyang, China.
7
Wang L. Mammography with deep learning for breast cancer detection. Front Oncol 2024; 14:1281922. [PMID: 38410114] [PMCID: PMC10894909] [DOI: 10.3389/fonc.2024.1281922]
Abstract
X-ray mammography is currently considered the gold standard for breast cancer screening; however, it has limitations in terms of sensitivity and specificity. With rapid advancements in deep learning techniques, it is possible to customize mammography for each patient, providing more accurate information for risk assessment, prognosis, and treatment planning. This paper studies recent achievements of deep learning-based mammography for breast cancer detection and classification, and highlights the potential of deep learning-assisted X-ray mammography to improve the accuracy of breast cancer screening. While the potential benefits are clear, it is essential to address the challenges associated with implementing this technology in clinical settings. Future research should focus on refining deep learning algorithms, ensuring data privacy, improving model interpretability, and establishing generalizability to successfully integrate deep learning-assisted mammography into routine breast cancer screening programs. It is hoped that the research findings will assist investigators, engineers, and clinicians in developing more effective breast imaging tools that provide accurate diagnosis, sensitivity, and specificity for breast cancer.
Affiliation(s)
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen, China
8
Tagnamas J, Ramadan H, Yahyaouy A, Tairi H. Multi-task approach based on combined CNN-transformer for efficient segmentation and classification of breast tumors in ultrasound images. Vis Comput Ind Biomed Art 2024; 7:2. [PMID: 38273164] [PMCID: PMC10811315] [DOI: 10.1186/s42492-024-00155-w]
Abstract
Accurate segmentation of breast ultrasound (BUS) images is crucial for early diagnosis and treatment of breast cancer. However, segmenting lesions in BUS images continues to pose significant challenges because of the limitations of convolutional neural networks (CNNs) in capturing long-range dependencies and global context information; existing methods relying solely on CNNs have struggled to address these issues. Recently, ConvNeXts have emerged as a promising CNN architecture, while transformers have demonstrated outstanding performance in diverse computer vision tasks, including the analysis of medical images. In this paper, we propose a novel breast lesion segmentation network, CS-Net, that combines the strengths of the ConvNeXt and Swin Transformer models to enhance the performance of the U-Net architecture. Our network operates on BUS images and adopts an end-to-end approach to segmentation. To address the limitations of CNNs, we design a hybrid encoder that incorporates modified ConvNeXt convolutions and the Swin Transformer, and, to better capture spatial and channel attention in feature maps, we incorporate a Coordinate Attention Module. In addition, we design an Encoder-Decoder Features Fusion Module that facilitates the fusion of low-level features from the encoder with high-level semantic features from the decoder during image reconstruction. Experimental results demonstrate the superiority of our network over state-of-the-art image segmentation methods for BUS lesion segmentation.
Affiliation(s)
- Jaouad Tagnamas
- Department of Informatics, Faculty of Sciences Dhar El Mahraz, University of Sidi Mohamed Ben Abdellah, 30000, Fez, Morocco.
- Hiba Ramadan
- Department of Informatics, Faculty of Sciences Dhar El Mahraz, University of Sidi Mohamed Ben Abdellah, 30000, Fez, Morocco
- Ali Yahyaouy
- Department of Informatics, Faculty of Sciences Dhar El Mahraz, University of Sidi Mohamed Ben Abdellah, 30000, Fez, Morocco
- Hamid Tairi
- Department of Informatics, Faculty of Sciences Dhar El Mahraz, University of Sidi Mohamed Ben Abdellah, 30000, Fez, Morocco
9
Eida S, Fukuda M, Katayama I, Takagi Y, Sasaki M, Mori H, Kawakami M, Nishino T, Ariji Y, Sumi M. Metastatic Lymph Node Detection on Ultrasound Images Using YOLOv7 in Patients with Head and Neck Squamous Cell Carcinoma. Cancers (Basel) 2024; 16:274. [PMID: 38254765] [PMCID: PMC10813890] [DOI: 10.3390/cancers16020274]
Abstract
Ultrasonography is the preferred modality for detailed evaluation of enlarged lymph nodes (LNs) identified on computed tomography and/or magnetic resonance imaging, owing to its high spatial resolution. However, the diagnostic performance of ultrasonography depends on the examiner's expertise. To support the ultrasonographic diagnosis, we developed YOLOv7-based deep learning models for metastatic LN detection on ultrasonography and compared their detection performance with that of highly experienced radiologists and less experienced residents. We enrolled 462 B- and D-mode ultrasound images of 261 metastatic and 279 non-metastatic histopathologically confirmed LNs from 126 patients with head and neck squamous cell carcinoma. The YOLOv7-based B- and D-mode models were optimized using B- and D-mode training and validation images and their detection performance for metastatic LNs was evaluated using B- and D-mode testing images, respectively. The D-mode model's performance was comparable to that of radiologists and superior to that of residents' reading of D-mode images, whereas the B-mode model's performance was higher than that of residents but lower than that of radiologists on B-mode images. Thus, YOLOv7-based B- and D-mode models can assist less experienced residents in ultrasonographic diagnoses. The D-mode model could raise the diagnostic performance of residents to the same level as experienced radiologists.
Affiliation(s)
- Sato Eida
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
- Motoki Fukuda
- Department of Oral Radiology, Osaka Dental University, 1-5-17 Otemae, Chuo-ku, Osaka 540-0008, Japan
- Ikuo Katayama
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
- Yukinori Takagi
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
- Miho Sasaki
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
- Hiroki Mori
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
- Maki Kawakami
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
- Tatsuyoshi Nishino
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
- Yoshiko Ariji
- Department of Oral Radiology, Osaka Dental University, 1-5-17 Otemae, Chuo-ku, Osaka 540-0008, Japan
- Misa Sumi
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
10
AlZoubi A, Lu F, Zhu Y, Ying T, Ahmed M, Du H. Classification of breast lesions in ultrasound images using deep convolutional neural networks: transfer learning versus automatic architecture design. Med Biol Eng Comput 2024; 62:135-149. [PMID: 37735296] [PMCID: PMC10758370] [DOI: 10.1007/s11517-023-02922-y]
Abstract
Deep convolutional neural networks (DCNNs) have demonstrated promising performance in classifying breast lesions in 2D ultrasound (US) images. Existing approaches typically use pre-trained models based on architectures designed for natural images with transfer learning; fewer attempts have been made to design customized architectures specifically for this purpose. This paper presents a comprehensive evaluation of transfer-learning-based solutions and automatically designed networks, analyzing the accuracy and robustness of different recognition models in three respects. First, we develop six different DCNN models (BNet, GNet, SqNet, DsNet, RsNet, IncReNet) based on transfer learning. Second, we adapt the Bayesian optimization method to optimize a CNN network (BONet) for classifying breast lesions. A retrospective dataset of 3034 US images collected from various hospitals is then used for evaluation. Extensive tests show that the BONet outperforms the other models, exhibiting higher accuracy (83.33%), a lower generalization gap (1.85%), shorter training time (66 min), and less model complexity (approximately 0.5 million weight parameters). We also compare the diagnostic performance of all models against that of three experienced radiologists. Finally, we explore the use of saliency maps to explain the classification decisions made by different models. Our investigation shows that saliency maps can assist in comprehending the classification decisions.
Affiliation(s)
- Alaa AlZoubi
- School of Computing and Engineering, University of Derby, Derby, DE22 3AW, UK.
- Feng Lu
- Department of Ultrasound, Shuguang Hospital affiliated to Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Yicheng Zhu
- Department of Ultrasound, Pudong New Area People's Hospital affiliated to Shanghai University of Medicine and Health Sciences, Shanghai, 201200, China
- Tao Ying
- Department of Ultrasound, Sixth People's Hospital, Shanghai, China
- Mohmmed Ahmed
- School of Computing, The University of Buckingham, Buckingham, MK18 1EG, UK
- Hongbo Du
- School of Computing, The University of Buckingham, Buckingham, MK18 1EG, UK
11
Marcinkevičs R, Reis Wolfertstetter P, Klimiene U, Chin-Cheong K, Paschke A, Zerres J, Denzinger M, Niederberger D, Wellmann S, Ozkan E, Knorr C, Vogt JE. Interpretable and intervenable ultrasonography-based machine learning models for pediatric appendicitis. Med Image Anal 2024; 91:103042. [PMID: 38000257] [DOI: 10.1016/j.media.2023.103042]
Abstract
Appendicitis is among the most frequent reasons for pediatric abdominal surgeries. Previous decision support systems for appendicitis have focused on clinical, laboratory, scoring, and computed tomography data and have ignored abdominal ultrasound, despite its noninvasive nature and widespread availability. In this work, we present interpretable machine learning models for predicting the diagnosis, management and severity of suspected appendicitis using ultrasound images. Our approach utilizes concept bottleneck models (CBM) that facilitate interpretation and interaction with high-level concepts understandable to clinicians. Furthermore, we extend CBMs to prediction problems with multiple views and incomplete concept sets. Our models were trained on a dataset comprising 579 pediatric patients with 1709 ultrasound images accompanied by clinical and laboratory data. Results show that our proposed method enables clinicians to utilize a human-understandable and intervenable predictive model without compromising performance or requiring time-consuming image annotation when deployed. For predicting the diagnosis, the extended multiview CBM attained an AUROC of 0.80 and an AUPR of 0.92, performing comparably to similar black-box neural networks trained and tested on the same dataset.
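The concept bottleneck idea above can be sketched as a two-stage predictor: first map features to human-interpretable concepts, then diagnose from the predicted concepts alone, so that a clinician can intervene by correcting a concept. The two concepts, the synthetic data, and the logistic-regression stages are illustrative assumptions, not the paper's concept set or architecture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 10))                 # stand-in image features
concepts = (X[:, :2] > 0).astype(int)          # e.g. "wall thickening", "free fluid" (assumed)
y = (concepts.sum(axis=1) == 2).astype(int)    # diagnosis determined by the concepts

# Stage 1: one predictor per concept (features -> concept).
concept_models = [LogisticRegression().fit(X, concepts[:, k]) for k in range(2)]

# Stage 2: diagnosis from predicted concepts only (the "bottleneck").
C_hat = np.column_stack([m.predict(X) for m in concept_models])
head = LogisticRegression().fit(C_hat, y)

# Intervention: a clinician overrides concept 0 for one case and re-predicts.
case = C_hat[:1].copy()
case[0, 0] = 1 - case[0, 0]
print(head.predict(case))
```

Because the diagnosis head sees only the concepts, flipping one concept value directly shows how that concept drives the prediction, which is the interpretability-and-intervention property the paper exploits.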
Affiliation(s)
- Ričards Marcinkevičs
- Department of Computer Science, ETH Zurich, Universitätstrasse 6, Zürich, 8092, Switzerland
- Patricia Reis Wolfertstetter
- Department of Pediatric Surgery and Pediatric Orthopedics, Hospital St. Hedwig of the Order of St. John of God, University Children's Hospital Regensburg (KUNO), Steinmetzstrasse 1-3, Regensburg, 93049, Germany; Faculty of Medicine, University of Regensburg, Franz-Josef-Strauss-Allee 11, Regensburg, 93053, Germany
- Ugne Klimiene
- Department of Computer Science, ETH Zurich, Universitätstrasse 6, Zürich, 8092, Switzerland
- Kieran Chin-Cheong
- Department of Computer Science, ETH Zurich, Universitätstrasse 6, Zürich, 8092, Switzerland
- Alyssia Paschke
- Faculty of Medicine, University of Regensburg, Franz-Josef-Strauss-Allee 11, Regensburg, 93053, Germany
- Julia Zerres
- Faculty of Medicine, University of Regensburg, Franz-Josef-Strauss-Allee 11, Regensburg, 93053, Germany
- Markus Denzinger
- Department of Pediatric Surgery and Pediatric Orthopedics, Hospital St. Hedwig of the Order of St. John of God, University Children's Hospital Regensburg (KUNO), Steinmetzstrasse 1-3, Regensburg, 93049, Germany; Faculty of Medicine, University of Regensburg, Franz-Josef-Strauss-Allee 11, Regensburg, 93053, Germany
- David Niederberger
- Department of Computer Science, ETH Zurich, Universitätstrasse 6, Zürich, 8092, Switzerland
- Sven Wellmann
- Faculty of Medicine, University of Regensburg, Franz-Josef-Strauss-Allee 11, Regensburg, 93053, Germany; Division of Neonatology, Hospital St. Hedwig of the Order of St. John of God, University Children's Hospital Regensburg (KUNO), Steinmetzstrasse 1-3, Regensburg, 93049, Germany
- Ece Ozkan
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 43 Vassar Street, Cambridge, 02139, USA
- Christian Knorr
- Department of Pediatric Surgery and Pediatric Orthopedics, Hospital St. Hedwig of the Order of St. John of God, University Children's Hospital Regensburg (KUNO), Steinmetzstrasse 1-3, Regensburg, 93049, Germany
- Julia E Vogt
- Department of Computer Science, ETH Zurich, Universitätstrasse 6, Zürich, 8092, Switzerland
12
Zhou G, Mosadegh B. Distilling Knowledge From an Ensemble of Vision Transformers for Improved Classification of Breast Ultrasound. Acad Radiol 2024; 31:104-120. [PMID: 37666747 DOI: 10.1016/j.acra.2023.08.006] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2023] [Revised: 07/20/2023] [Accepted: 08/05/2023] [Indexed: 09/06/2023]
Abstract
RATIONALE AND OBJECTIVES To develop a deep learning model for the automated classification of breast ultrasound images as benign or malignant. More specifically, the application of vision transformers, ensemble learning, and knowledge distillation is explored for breast ultrasound classification. MATERIALS AND METHODS Single-view, B-mode ultrasound images were curated from the publicly available Breast Ultrasound Image (BUSI) dataset, which has categorical ground-truth labels (benign vs malignant) assigned by radiologists, with malignant cases confirmed by biopsy. The performance of vision transformers (ViT) is compared to that of convolutional neural networks (CNN), followed by a comparison between supervised, self-supervised, and randomly initialized ViTs. Subsequently, an ensemble of 10 independently trained ViTs, in which the ensemble output is the unweighted average of the individual models' outputs, is compared to the performance of each ViT alone. Finally, a single ViT is trained to emulate the ensembled ViTs using knowledge distillation. RESULTS Under five-fold cross-validation on this dataset, ViTs outperform CNNs, and self-supervised ViTs outperform supervised and randomly initialized ViTs. The ensemble model achieves an area under the receiver operating characteristic curve (AuROC) and area under the precision-recall curve (AuPRC) of 0.977 and 0.965 on the test set, outperforming the average AuROC and AuPRC of the independently trained ViTs (0.958 ± 0.05 and 0.931 ± 0.016). The distilled ViT achieves an AuROC and AuPRC of 0.972 and 0.960. CONCLUSION Transfer learning and ensemble learning each offer increased performance independently and can be combined sequentially to further improve the final model. Furthermore, a single vision transformer can be trained to match the performance of an ensemble of vision transformers using knowledge distillation.
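The ensemble-then-distill recipe (unweighted averaging of independently trained models, then training a single student on the ensemble's soft outputs) can be sketched with logistic models on synthetic features. This is a hypothetical numpy toy of the recipe, not the paper's ViT pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-ins for per-image feature vectors and benign/malignant labels.
n, d, n_models = 600, 10, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

def train(A, t, steps=1500, lr=0.5):
    """Gradient-descent logistic regression; t may be hard (0/1) or soft targets."""
    w = np.zeros(A.shape[1])
    for _ in range(steps):
        w -= lr * A.T @ (sigmoid(A @ w) - t) / len(A)
    return w

# Ensemble: independently trained members (bootstrap resampling stands in for
# independent initialisations); the ensemble output is their unweighted average.
members = []
for _ in range(n_models):
    idx = rng.integers(0, n, n)
    members.append(train(X[idx], y[idx]))
p_ensemble = np.mean([sigmoid(X @ w) for w in members], axis=0)

# Distillation: a single student model fits the ensemble's soft outputs,
# so one forward pass at deployment approximates the whole ensemble.
w_student = train(X, p_ensemble)
p_student = sigmoid(X @ w_student)
```

The design point the paper exploits is that the student costs one model at inference time while inheriting most of the ensemble's accuracy gain.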
Affiliation(s)
- Bobak Mosadegh
- Dalio Institute of Cardiovascular Imaging, Department of Radiology, Weill Cornell Medicine, New York, New York
13
Harrison P, Hasan R, Park K. State-of-the-Art of Breast Cancer Diagnosis in Medical Images via Convolutional Neural Networks (CNNs). J Healthc Inform Res 2023; 7:387-432. [PMID: 37927373 PMCID: PMC10620373 DOI: 10.1007/s41666-023-00144-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2022] [Revised: 08/14/2023] [Accepted: 08/22/2023] [Indexed: 11/07/2023]
Abstract
Early detection of breast cancer is crucial for a better prognosis. Various studies have been conducted in which tumor lesions are detected and localized on images. This narrative review covers studies across five image modalities: histopathological, mammogram, magnetic resonance imaging (MRI), ultrasound, and computed tomography (CT) images, distinguishing it from other reviews that cover fewer modalities. The goal is to gather the necessary information, such as pre-processing techniques and CNN-based diagnosis techniques for the five modalities, in one place for future studies. Each modality has pros and cons: mammograms can give a high false-positive rate for radiographically dense breasts; ultrasound's low soft-tissue contrast can lead to false detections at an early stage; and MRI provides a three-dimensional volumetric image but is expensive and cannot be used as a routine test. Studies were manually reviewed using particular inclusion and exclusion criteria; as a result, 91 recent studies from 2017 to 2022 that classify and detect tumor lesions on breast cancer images across the five image modalities were included. For histopathological images, the maximum accuracy achieved was around 99 % and the maximum sensitivity 97.29 %, using DenseNet, ResNet34, and ResNet50 architectures. For mammogram images, the maximum accuracy achieved was 96.52 % using a customized CNN architecture. For MRI, the maximum accuracy achieved was 98.33 % using a customized CNN architecture. For ultrasound, the maximum accuracy achieved was around 99 % using DarkNet-53, ResNet-50, G-CNN, and VGG. For CT, the maximum sensitivity achieved was 96 % using the Xception architecture.
Histopathological and ultrasound images achieved higher accuracy, around 99 %, using ResNet34, ResNet50, DarkNet-53, G-CNN, and VGG compared to other modalities, for one or more of the following reasons: use of pre-trained architectures with pre-processing techniques, use of modified architectures with pre-processing techniques, use of two-stage CNNs, and a higher number of studies available for artificial intelligence (AI)/machine learning (ML) researchers to reference. One gap we found is that only a single image modality is used for CNN-based diagnosis; in the future, a multiple-image-modality approach could be used to design a CNN architecture with higher accuracy.
Affiliation(s)
- Pratibha Harrison
- Department of Computer and Information Science, University of Massachusetts Dartmouth, 285 Old Westport Rd, North Dartmouth, 02747 MA USA
- Rakib Hasan
- Department of Mechanical Engineering, Khulna University of Engineering & Technology, PhulBari Gate, Khulna, 9203 Bangladesh
- Kihan Park
- Department of Mechanical Engineering, University of Massachusetts Dartmouth, 285 Old Westport Rd, North Dartmouth, 02747 MA USA
14
Liang Z, Chen K, Luo T, Jiang W, Wen J, Zhao L, Song W. HTC-Net: Hashimoto's thyroiditis ultrasound image classification model based on residual network reinforced by channel attention mechanism. Health Inf Sci Syst 2023; 11:24. [PMID: 37234207 PMCID: PMC10205956 DOI: 10.1007/s13755-023-00225-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2023] [Accepted: 04/23/2023] [Indexed: 05/27/2023] Open
Abstract
Convolutional neural networks (CNN) are efficient at extracting and aggregating local features in the spatial dimension of images. However, capturing the inapparent texture information of low-echo areas in ultrasound images is not easy, and it is especially challenging for early lesion recognition in Hashimoto's thyroiditis (HT) ultrasound images. In this paper, HTC-Net, an HT ultrasound image classification model based on a residual network reinforced by a channel attention mechanism, is proposed. HTC-Net strengthens the features of the important channels through a reinforced channel attention mechanism, by which high-level semantic information is enhanced and low-level semantic information is suppressed. The residual network helps HTC-Net focus on the key local areas of the ultrasound images while attending to global semantic information. Furthermore, to address the uneven distribution caused by the large number of difficult-to-classify samples in the data sets, a new feature loss function, TanCELoss, with a dynamically adjusted weight factor is constructed. TanCELoss helps HTC-Net gradually transform difficult-to-classify samples into easy-to-classify ones and improves the balance of the sample distribution. The experiments are based on data sets collected by the Endocrinology Departments of four branches of Guangdong Provincial Hospital of Chinese Medicine. Both quantitative testing and visualization results show that HTC-Net achieves state-of-the-art performance for early lesion recognition in HT ultrasound images. HTC-Net has great application value, especially when only small data samples are available.
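A channel attention mechanism of the kind the abstract describes (strengthening important channels, suppressing others) can be sketched in squeeze-and-excitation style. This is a generic numpy illustration with invented shapes, not HTC-Net's actual block:

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
relu = lambda z: np.maximum(z, 0.0)

def channel_attention(feat, W1, W2):
    """Squeeze-and-excitation style channel attention on a (C, H, W) feature map:
    global average pooling, a two-layer bottleneck, and per-channel rescaling."""
    squeeze = feat.mean(axis=(1, 2))           # (C,) channel descriptors
    excite = sigmoid(W2 @ relu(W1 @ squeeze))  # (C,) weights in (0, 1)
    return feat * excite[:, None, None]        # reweight each channel

C, H, W = 16, 8, 8
reduction = 4
W1 = rng.normal(size=(C // reduction, C)) * 0.1  # bottleneck down-projection
W2 = rng.normal(size=(C, C // reduction)) * 0.1  # bottleneck up-projection
feat = rng.normal(size=(C, H, W))

out = channel_attention(feat, W1, W2)
# A residual connection, as in HTC-Net's residual backbone, keeps the
# original features alongside the attended ones.
out_residual = feat + out
```

Since the excitation weights lie in (0, 1), each channel is only ever attenuated or preserved relative to its input; the residual path supplies the unmodified signal.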
Affiliation(s)
- Zhipeng Liang
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, 510006 China
- Kang Chen
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, 510006 China
- Tianchun Luo
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, 510006 China
- Wenchao Jiang
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, 510006 China
- Jianxuan Wen
- Department of Endocrinology, Guangdong Provincial Hospital of Chinese Medicine, Guangzhou, 510120 China
- Ling Zhao
- Department of Endocrinology, Guangdong Provincial Hospital of Chinese Medicine, Guangzhou, 510120 China
- Wei Song
- Department of Endocrinology, Guangdong Provincial Hospital of Chinese Medicine, Guangzhou, 510120 China
15
Saleh GA, Batouty NM, Gamal A, Elnakib A, Hamdy O, Sharafeldeen A, Mahmoud A, Ghazal M, Yousaf J, Alhalabi M, AbouEleneen A, Tolba AE, Elmougy S, Contractor S, El-Baz A. Impact of Imaging Biomarkers and AI on Breast Cancer Management: A Brief Review. Cancers (Basel) 2023; 15:5216. [PMID: 37958390 PMCID: PMC10650187 DOI: 10.3390/cancers15215216] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2023] [Revised: 10/13/2023] [Accepted: 10/21/2023] [Indexed: 11/15/2023] Open
Abstract
Breast cancer stands out as the most frequently identified malignancy, ranking as the fifth leading cause of global cancer-related deaths. The American College of Radiology (ACR) introduced the Breast Imaging Reporting and Data System (BI-RADS) as a standard terminology facilitating communication between radiologists and clinicians; however, an update is now imperative to encompass the latest imaging modalities developed subsequent to the 5th edition of BI-RADS. Within this review article, we provide a concise history of BI-RADS, delve into advanced mammography techniques, ultrasonography (US), magnetic resonance imaging (MRI), PET/CT images, and microwave breast imaging, and subsequently furnish comprehensive, updated insights into Molecular Breast Imaging (MBI), diagnostic imaging biomarkers, and the assessment of treatment responses. This endeavor aims to enhance radiologists' proficiency in catering to the personalized needs of breast cancer patients. Lastly, we explore the augmented benefits of artificial intelligence (AI), machine learning (ML), and deep learning (DL) applications in segmenting, detecting, and diagnosing breast cancer, as well as the early prediction of the response of tumors to neoadjuvant chemotherapy (NAC). By assimilating state-of-the-art computer algorithms capable of deciphering intricate imaging data and aiding radiologists in rendering precise and effective diagnoses, AI has profoundly revolutionized the landscape of breast cancer radiology. Its vast potential holds the promise of bolstering radiologists' capabilities and ameliorating patient outcomes in the realm of breast cancer management.
Affiliation(s)
- Gehad A. Saleh
- Diagnostic and Interventional Radiology Department, Faculty of Medicine, Mansoura University, Mansoura 35516, Egypt
- Nihal M. Batouty
- Diagnostic and Interventional Radiology Department, Faculty of Medicine, Mansoura University, Mansoura 35516, Egypt
- Abdelrahman Gamal
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Ahmed Elnakib
- Electrical and Computer Engineering Department, School of Engineering, Penn State Erie, The Behrend College, Erie, PA 16563, USA
- Omar Hamdy
- Surgical Oncology Department, Oncology Centre, Mansoura University, Mansoura 35516, Egypt
- Ahmed Sharafeldeen
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ali Mahmoud
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Jawad Yousaf
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Marah Alhalabi
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Amal AbouEleneen
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Ahmed Elsaid Tolba
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- The Higher Institute of Engineering and Automotive Technology and Energy, New Heliopolis, Cairo 11829, Egypt
- Samir Elmougy
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Sohail Contractor
- Department of Radiology, University of Louisville, Louisville, KY 40202, USA
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
16
Vogel-Minea CM, Bader W, Blohmer JU, Duda V, Eichler C, Fallenberg EM, Farrokh A, Golatta M, Gruber I, Hackelöer BJ, Heil J, Madjar H, Marzotko E, Merz E, Müller-Schimpfle M, Mundinger A, Ohlinger R, Peisker U, Schäfer FK, Schulz-Wendtland R, Solbach C, Warm M, Watermann D, Wojcinski S, Dudwiesus H, Hahn M. Best Practice Guideline - DEGUM Recommendations on Breast Ultrasound. Ultraschall Med 2023; 44:520-536. [PMID: 37072031 DOI: 10.1055/a-2020-9904] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Alongside mammography, breast ultrasound is an important and well-established method in the assessment of breast lesions. With this "Best Practice Guideline", the DEGUM breast ultrasound (in German, "Mammasonografie") working group intends to describe the additional and optional application modalities for the diagnostic confirmation of breast findings and to express DEGUM recommendations in this Part II, in addition to the current dignity criteria and assessment categories published in Part I, in order to facilitate the differential diagnosis of ambiguous lesions. The "Best Practice Guideline" aims to meet the requirements for quality assurance and to ensure quality-controlled performance of breast ultrasound. The most important aspects of quality assurance are explained in this Part II.
Affiliation(s)
- Claudia Maria Vogel-Minea
- Brustzentrum, Diagnostische und Interventionelle Senologie, Rottal-Inn Kliniken Eggenfelden, Eggenfelden, Germany
- Werner Bader
- Zentrum für Frauenheilkunde, Brustzentrum, Universitätsklinikum OWL der Universität Bielefeld, Campus Klinikum Bielefeld, Bielefeld, Germany
- Jens-Uwe Blohmer
- Klinik für Gynäkologie mit Brustzentrum, Charité Universitätsmedizin Berlin, Berlin, Germany
- Volker Duda
- Senologische Diagnostik, Universitätsklinikum Gießen und Marburg, Marburg, Germany
- Christian Eichler
- Klinik für Brusterkrankungen, St Franziskus-Hospital Münster GmbH, Münster, Germany
- Eva Maria Fallenberg
- Department of Diagnostic and Interventional Radiology, Technical University of Munich Hospital Rechts der Isar, Munich, Germany
- André Farrokh
- Klinik für Gynäkologie und Geburtshilfe, Universitätsklinikum Schleswig-Holstein, Kiel, Germany
- Michael Golatta
- Sektion Senologie, Universitäts-Frauenklinik Heidelberg, Heidelberg, Germany
- Brustzentrum Heidelberg, Klinik St. Elisabeth, Heidelberg, Germany
- Ines Gruber
- Frauenklinik, Department für Frauengesundheit, Universitätsklinikum Tübingen, Tübingen, Germany
- Jörg Heil
- Sektion Senologie, Universitäts-Frauenklinik Heidelberg, Heidelberg, Germany
- Brustzentrum Heidelberg, Klinik St. Elisabeth, Heidelberg, Germany
- Helmut Madjar
- Gynäkologie und Senologie, Praxis für Gynäkologie, Wiesbaden, Germany
- Ellen Marzotko
- Mammadiagnostik, Frauenheilkunde und Geburtshilfe, Praxis, Erfurt, Germany
- Eberhard Merz
- Frauenheilkunde, Zentrum für Ultraschall und Pränatalmedizin, Frankfurt, Germany
- Markus Müller-Schimpfle
- DKG-Brustzentrum, Klinik für Radiologie, Neuroradiologie und Nuklearmedizin, varisano Klinikum Frankfurt Höchst, Frankfurt am Main, Germany
- Alexander Mundinger
- Brustzentrum Osnabrück - Bildgebende und interventionelle Mamma Diagnostik, Franziskus Hospital Harderberg, Niels Stensen Kliniken, Georgsmarienhütte, Germany
- Ralf Ohlinger
- Interdisziplinäres Brustzentrum, Universitätsmedizin Greifswald, Klinik für Frauenheilkunde und Geburtshilfe, Greifswald, Germany
- Uwe Peisker
- BrustCentrum Aachen-Kreis Heinsberg, Hermann-Josef Krankenhaus, Akademisches Lehrkrankenhaus der RWTH-Aachen, Erkelenz, Germany
- Fritz Kw Schäfer
- Bereich Mammadiagnostik und Interventionen, Universitätsklinikum Schleswig-Holstein, Kiel, Germany
- Christine Solbach
- Senologie, Klinik für Frauenheilkunde und Geburtshilfe, Universitätsklinikum Frankfurt, Frankfurt, Germany
- Mathias Warm
- Brustzentrum, Krankenhaus Holweide, Kliniken der Stadt Köln, Koeln, Germany
- Dirk Watermann
- Frauenklinik, Evangelisches Diakoniekrankenhaus, Freiburg, Germany
- Sebastian Wojcinski
- Zentrum für Frauenheilkunde, Brustzentrum, Universitätsklinikum OWL Bielefeld, Bielefeld, Germany
- Markus Hahn
- Frauenklinik, Department für Frauengesundheit, Universität Tübingen, Tübingen, Germany
17
Lyu S, Cheung RCC. Efficient and Automatic Breast Cancer Early Diagnosis System Based on the Hierarchical Extreme Learning Machine. Sensors (Basel) 2023; 23:7772. [PMID: 37765827 PMCID: PMC10535771 DOI: 10.3390/s23187772] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/03/2023] [Revised: 08/27/2023] [Accepted: 09/07/2023] [Indexed: 09/29/2023]
Abstract
Breast cancer is the leading type of cancer in women, causing nearly 600,000 deaths every year globally. Although tumors can remain localized within the breast, they can spread to other body parts, causing more harm; early diagnosis can therefore help reduce the risks of this cancer. However, breast cancer diagnosis is complicated, involving imaging such as MRI or ultrasound, BI-RADS assessment, or even needle aspiration and cytology with input from specialists. In settings such as mass examinations of large populations, reviewing the images is also a large workload. In this work, we present an efficient and automatic diagnosis system based on the hierarchical extreme learning machine (H-ELM) that makes a primary diagnosis from breast ultrasound images. For ease of deployment, the system works with PNG images and general medical software within the H-ELM framework, and it is easily trained and applied. Furthermore, the system only requires small-scale ultrasound images of 28×28 pixels, reducing resource requirements and supporting applications with low-resolution images. The experimental results show that the system achieves 86.13% accuracy in the classification of breast cancer on ultrasound images from the public breast ultrasound images (BUSI) dataset, without other auxiliary information or supervision, which is higher than conventional deep learning methods on the same dataset. Moreover, the training time is greatly reduced, to only 5.31 s, and consumes few resources. These results indicate that the system could be helpful for precise and efficient early diagnosis of breast cancer from primary examination results.
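The core of an extreme learning machine, on which H-ELM stacks further layers, is a fixed random hidden layer whose output weights are solved in closed form rather than by backpropagation, which is why training is so fast. A minimal numpy sketch of one such layer on synthetic stand-ins for flattened 28×28 images (not the paper's H-ELM implementation):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-ins for flattened 28x28 ultrasound images (784 features) with
# binary benign/malignant labels; the real system uses BUSI images.
n, d = 500, 784
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

def elm_fit(X, y, n_hidden=800, ridge=1e-3, rng=rng):
    """Extreme learning machine: the hidden layer is random and fixed;
    only the output weights are solved for, in closed form."""
    W = rng.normal(size=(X.shape[1], n_hidden)) / np.sqrt(X.shape[1])
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)  # random nonlinear features
    # Ridge-regularised least squares for the output weights (no backprop).
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return (np.tanh(X @ W + b) @ beta > 0.5).astype(float)

W, b, beta = elm_fit(X, y)
acc = (elm_predict(X, W, b, beta) == y).mean()  # training accuracy
```

Because the only "training" is one linear solve, the seconds-scale training time the abstract reports is plausible; H-ELM additionally stacks autoencoder-style random layers before this final solve.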
Affiliation(s)
- Songyang Lyu
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong
- Ray C C Cheung
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong
18
Tian R, Yu M, Liao L, Zhang C, Zhao J, Sang L, Qian W, Wang Z, Huang L, Ma H. An effective convolutional neural network for classification of benign and malignant breast and thyroid tumors from ultrasound images. Phys Eng Sci Med 2023; 46:995-1013. [PMID: 37195403 DOI: 10.1007/s13246-023-01262-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2022] [Accepted: 04/16/2023] [Indexed: 05/18/2023]
Abstract
Breast and thyroid cancers are the two most common cancers among women worldwide. Early clinical diagnosis of breast and thyroid cancers often utilizes ultrasonography, but most ultrasound images of breast and thyroid cancer lack specificity, which reduces the accuracy of ultrasound-based clinical diagnosis. This study develops an effective convolutional neural network (E-CNN) for the classification of benign and malignant breast and thyroid tumors from ultrasound images. Two-dimensional (2D) ultrasound images of 1052 breast tumors were collected, and 8245 2D tumor images were obtained from 76 thyroid cases. We performed tenfold cross-validation on the breast and thyroid data, with mean classification accuracies of 0.932 and 0.902, respectively. In addition, the proposed E-CNN was applied to classify and evaluate 9297 mixed images (breast and thyroid images); the mean classification accuracy was 0.875, and the mean area under the curve (AUC) was 0.955. Using data of the same modality, we transferred the breast model to classify the typical tumor images of the 76 thyroid patients. The fine-tuned model achieved a mean classification accuracy of 0.945 and a mean AUC of 0.958. Meanwhile, the transferred thyroid model achieved a mean classification accuracy of 0.932 and a mean AUC of 0.959 on the 1052 breast tumor images. The experimental results demonstrate the ability of the E-CNN to learn the features of breast and thyroid tumors and classify them, and show that it is promising to classify benign and malignant tumors from ultrasound images with a model transferred within the same modality.
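The tenfold cross-validation protocol used above (train on nine folds, evaluate on the held-out fold, average the accuracies) can be sketched with a simple logistic classifier on synthetic features. This is an illustrative numpy toy of the evaluation protocol, not the E-CNN itself:

```python
import numpy as np

rng = np.random.default_rng(4)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-ins for per-image features with benign/malignant labels.
n, d = 300, 12
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(int)

def train_logistic(A, t, steps=1000, lr=0.5):
    w = np.zeros(A.shape[1])
    for _ in range(steps):
        w -= lr * A.T @ (sigmoid(A @ w) - t) / len(A)
    return w

def kfold_accuracy(X, y, k=10):
    """Mean held-out accuracy over k folds (tenfold cross-validation)."""
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        w = train_logistic(X[train], y[train])
        accs.append(((sigmoid(X[test] @ w) > 0.5).astype(int) == y[test]).mean())
    return float(np.mean(accs))

mean_acc = kfold_accuracy(X, y, k=10)
```

Averaging over folds uses every sample for evaluation exactly once, which is why the paper reports mean accuracies rather than a single split's score.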
Affiliation(s)
- Ronghui Tian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, Liaoning, China
- Miao Yu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, Liaoning, China
- Lingmin Liao
- Department of Ultrasound, The Second Affiliated Hospital of Nanchang University, Nanchang, 330006, Jiangxi, China
- Jiangxi Key Laboratory of Clinical and Translational Cancer Research, Nanchang, 330006, China
- Chunquan Zhang
- Department of Ultrasound, The Second Affiliated Hospital of Nanchang University, Nanchang, 330006, Jiangxi, China
- Jiali Zhao
- Department of Ultrasound, The Second Affiliated Hospital of Nanchang University, Nanchang, 330006, Jiangxi, China
- Department of Oncology, The Second Affiliated Hospital of Nanchang University, Nanchang, 330006, Jiangxi, China
- Jiangxi Key Laboratory of Clinical and Translational Cancer Research, Nanchang, 330006, China
- Liang Sang
- Department of Ultrasound, The First Hospital of China Medical University, Shenyang, 110001, Liaoning, China
- Wei Qian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, Liaoning, China
- Zhiguo Wang
- Department of Nuclear Medicine, General Hospital of Northern Theatre Command, Shenyang, 110016, Liaoning, China
- Long Huang
- Department of Oncology, The Second Affiliated Hospital of Nanchang University, Nanchang, 330006, Jiangxi, China
- Jiangxi Key Laboratory of Clinical and Translational Cancer Research, Nanchang, 330006, China
- He Ma
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, Liaoning, China
- National University of Singapore (Suzhou) Research Institute, Suzhou, 215123, China
19
GadAllah MT, Mohamed AENA, Hefnawy AA, Zidan HE, El-Banby GM, Mohamed Badawy S. Convolutional Neural Networks Based Classification of Segmented Breast Ultrasound Images – A Comparative Preliminary Study. 2023 Intelligent Methods, Systems, and Applications (IMSA) 2023. [DOI: 10.1109/imsa58542.2023.10217585] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/02/2023]
Affiliation(s)
- Abd El-Naser A. Mohamed
- Menoufia University, Faculty of Electronic Engineering, Electronics and Electrical Communications Engineering Department, Menoufia, Egypt
- Alaa A. Hefnawy
- Electronics Research Institute (ERI), Computers and Systems Department, Cairo, Egypt
- Hassan E. Zidan
- Electronics Research Institute (ERI), Computers and Systems Department, Cairo, Egypt
- Ghada M. El-Banby
- Menoufia University, Faculty of Electronic Engineering, Industrial Electronics and Control Engineering Department, Menoufia, Egypt
- Samir Mohamed Badawy
- Menoufia University, Faculty of Electronic Engineering, Industrial Electronics and Control Engineering Department, Menoufia, Egypt
20
Meng M, Li H, Zhang M, He G, Wang L, Shen D. Reducing the number of unnecessary biopsies for mammographic BI-RADS 4 lesions through a deep transfer learning method. BMC Med Imaging 2023; 23:82. [PMID: 37312026 DOI: 10.1186/s12880-023-01023-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Accepted: 05/23/2023] [Indexed: 06/15/2023] Open
Abstract
BACKGROUND In clinical practice, reducing unnecessary biopsies for mammographic BI-RADS 4 lesions is crucial. The objective of this study was to explore the potential value of deep transfer learning (DTL), based on different fine-tuning strategies for Inception V3, in reducing the number of unnecessary biopsies that residents need to perform for mammographic BI-RADS 4 lesions. METHODS A total of 1980 patients with breast lesions were included, comprising 1473 benign lesions (including 185 women with bilateral breast lesions) and 692 malignant lesions collected and confirmed by clinical pathology or biopsy. The breast mammography images were randomly divided into three subsets, a training set, a testing set, and validation set 1, at a ratio of 8:1:1. We constructed a DTL model for the classification of breast lesions based on Inception V3 and attempted to improve its performance with 11 fine-tuning strategies. Mammography images from 362 patients with pathologically confirmed BI-RADS 4 breast lesions were employed as validation set 2. Two images from each lesion were tested, and a trial was counted as correct if at least one of the two images was judged correctly. We used precision (Pr), recall (Rc), F1 score (F1), and the area under the receiver operating characteristic curve (AUROC) as the performance metrics of the DTL model on validation set 2. RESULTS The S5 model achieved the best fit to the data. The Pr, Rc, F1, and AUROC of S5 were 0.90, 0.90, 0.90, and 0.86, respectively, for Category 4. The proportions of lesions downgraded by S5 were 90.73%, 84.76%, and 80.19% for categories 4A, 4B, and 4C, respectively. The overall proportion of BI-RADS 4 lesions downgraded by S5 was 85.91%. There was no significant difference between the classification results of the S5 model and the pathological diagnosis (P = 0.110).
CONCLUSION The proposed S5 model can serve as an effective approach for reducing the number of unnecessary biopsies that residents need to conduct for mammographic BI-RADS 4 lesions and may have other important clinical uses.
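Fine-tuning strategies of the kind compared in this study differ mainly in which pretrained layers are frozen and which receive gradient updates. A toy numpy sketch of that idea with an invented two-layer network (not Inception V3, and not the paper's S5 strategy):

```python
import numpy as np

rng = np.random.default_rng(5)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# A toy two-layer network standing in for a pretrained backbone (layer 1)
# plus a classification head (layer 2).
n, d, h = 200, 10, 16
X = rng.normal(size=(n, d))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

W1 = rng.normal(size=(d, h)) * 0.5  # "pretrained" backbone weights
W2 = rng.normal(size=h) * 0.1       # head to be trained

def train(X, y, W1, W2, freeze_backbone=True, steps=500, lr=0.3):
    """One fine-tuning strategy: freeze the backbone and update only the head.
    With freeze_backbone=False, both layers receive gradient updates."""
    W1, W2 = W1.copy(), W2.copy()
    for _ in range(steps):
        H = np.tanh(X @ W1)
        p = sigmoid(H @ W2)
        err = p - y                      # dL/dlogit for cross-entropy
        W2 -= lr * H.T @ err / len(X)
        if not freeze_backbone:
            gH = np.outer(err, W2) * (1 - H ** 2)  # backprop through tanh
            W1 -= lr * X.T @ gH / len(X)
    return W1, W2

W1_frozen, W2_head = train(X, y, W1, W2, freeze_backbone=True)
W1_full, W2_full = train(X, y, W1, W2, freeze_backbone=False)
```

Strategies in between (freezing only the earliest blocks, or using smaller learning rates for deeper blocks) interpolate between these two extremes, which is essentially the design space the 11 strategies explore.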
Affiliation(s)
- Mingzhu Meng
- Department of Radiology, The Affiliated Changzhou No 2 People's Hospital of Nanjing Medical University, Changzhou, 213164, Jiangsu Province, P. R. China
- Hong Li
- Department of Radiology, The Second Affiliated Hospital of Soochow University, Suzhou, 215004, Jiangsu Province, P. R. China
- Ming Zhang
- Department of Radiology, The Affiliated Changzhou No 2 People's Hospital of Nanjing Medical University, Changzhou, 213164, Jiangsu Province, P. R. China
- Guangyuan He
- Department of Radiology, The Affiliated Changzhou No 2 People's Hospital of Nanjing Medical University, Changzhou, 213164, Jiangsu Province, P. R. China
- Long Wang
- Department of Radiology, The Affiliated Changzhou No 2 People's Hospital of Nanjing Medical University, Changzhou, 213164, Jiangsu Province, P. R. China
- Dong Shen
- Department of Radiology, The Affiliated Changzhou No 2 People's Hospital of Nanjing Medical University, Changzhou, 213164, Jiangsu Province, P. R. China
21
Singla R, Hu R, Ringstrom C, Lessoway V, Reid J, Nguan C, Rohling R. The Kidneys Are Not All Normal: Transplanted Kidneys and Their Speckle Distributions. Ultrasound Med Biol 2023; 49:1268-1274. [PMID: 36842904 DOI: 10.1016/j.ultrasmedbio.2023.01.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/13/2022] [Revised: 12/21/2022] [Accepted: 01/19/2023] [Indexed: 05/11/2023]
Abstract
OBJECTIVE Modelling ultrasound speckle to characterise tissue properties has generated considerable interest. As speckle depends on the underlying tissue architecture, modelling it may aid in tasks such as segmentation or disease detection. For the transplanted kidney, where ultrasound is used to investigate dysfunction, it is unknown which statistical distribution best characterises such speckle. This applies to each region of the transplanted kidney: the cortex, the medulla and the central echogenic complex. Furthermore, it is unclear how these distributions vary with patient variables such as age, sex, body mass index, primary disease or donor type. These traits may influence speckle modelling given their influence on kidney anatomy. We investigate these two questions. METHODS B-mode images from n = 821 kidney transplant recipients (one image per recipient) were automatically segmented into the cortex, medulla and central echogenic complex using a neural network. Seven distinct probability distributions were fitted to each region's histogram, and statistical analysis was performed. DISCUSSION The Rayleigh and Nakagami distributions had model parameters that differed significantly between the three regions (p ≤ 0.05). Although both had excellent goodness of fit, the Nakagami had higher Kullback-Leibler divergence. Recipient age correlated weakly with scale in the cortex (Ω: ρ = 0.11, p = 0.004), while body mass index correlated weakly with shape in the medulla (m: ρ = 0.08, p = 0.04). Neither sex, primary disease nor donor type exhibited any correlation. CONCLUSION We propose that the Nakagami distribution be used to characterise transplanted kidneys regionally, independent of disease aetiology and most patient characteristics.
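Nakagami fitting of the kind described above is commonly done by moment matching: for envelope samples x, the scale is Ω = E[x²] and the shape is m = (E[x²])² / Var(x²). A minimal sketch on simulated Rayleigh speckle (the m = 1 special case of the Nakagami), assuming this standard "inverse normalized variance" estimator rather than the authors' exact fitting procedure:

```python
# Moment-based Nakagami parameter estimation for an envelope sample x:
#   scale  Omega = E[x^2]
#   shape  m     = (E[x^2])^2 / Var(x^2)
# Demonstrated on simulated fully developed speckle (envelope of a complex
# circular Gaussian signal), which follows a Rayleigh law, i.e. Nakagami m = 1.
import math
import random

def nakagami_fit(x):
    x2 = [v * v for v in x]
    omega = sum(x2) / len(x2)                       # scale: mean power
    var_x2 = sum((v - omega) ** 2 for v in x2) / len(x2)
    m = omega * omega / var_x2                      # shape: inverse normalized variance
    return m, omega

random.seed(0)
sigma = 1.0
env = [math.hypot(random.gauss(0, sigma), random.gauss(0, sigma))
       for _ in range(20000)]
m, omega = nakagami_fit(env)  # expect m near 1.0 and omega near 2*sigma**2
```

Values of m below 1 correspond to pre-Rayleigh (sparse scatterer) statistics and values above 1 to post-Rayleigh statistics, which is what makes the shape parameter useful for regional tissue characterisation.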
Affiliation(s)
- Rohit Singla
- School of Biomedical Engineering, University of British Columbia, Vancouver, British Columbia, Canada
- Ricky Hu
- Faculty of Medicine, Queen's University, Kingston, Ontario, Canada
- Cailin Ringstrom
- Electrical and Computer Engineering, University of British Columbia, Vancouver, British Columbia, Canada
- Victoria Lessoway
- Electrical and Computer Engineering, University of British Columbia, Vancouver, British Columbia, Canada
- Janice Reid
- Electrical and Computer Engineering, University of British Columbia, Vancouver, British Columbia, Canada
- Christopher Nguan
- Urologic Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Robert Rohling
- Electrical and Computer Engineering, University of British Columbia, Vancouver, British Columbia, Canada; Mechanical Engineering, University of British Columbia, Vancouver, British Columbia, Canada
22
Dan Q, Zheng T, Liu L, Sun D, Chen Y. Ultrasound for Breast Cancer Screening in Resource-Limited Settings: Current Practice and Future Directions. Cancers (Basel) 2023; 15:2112. [PMID: 37046773] [PMCID: PMC10093585] [DOI: 10.3390/cancers15072112]
Abstract
Breast cancer (BC) is the most prevalent cancer among women globally. Cancer screening can reduce mortality and improve women's health. In developed countries, mammography (MAM) has been the primary tool for population-based BC screening for several decades. However, it is usually unavailable in low-resource settings because of the lack of equipment, personnel, and time necessary to conduct and interpret the examinations. Ultrasound (US), which has high detection sensitivity in younger women and women with dense breasts, has become a supplement to MAM for breast examination. Some guidelines suggest using US as the primary screening tool in settings where MAM is unavailable or infeasible, but global recommendations have not yet reached a consensus. With the development of smart devices and artificial intelligence (AI) in medical imaging, clinical applications and preclinical studies have shown the potential of US combined with AI in BC screening. Nevertheless, few comprehensive reviews have focused on the role of US in screening for BC in underserved conditions, especially from technological, economic, and global perspectives. This work presents the benefits, limitations, advances, and future directions of BC screening with technology-assisted and resource-appropriate strategies, which may help implement screening initiatives in resource-limited countries.
Affiliation(s)
- Qing Dan
- Department of Ultrasound, Peking University Shenzhen Hospital, Shenzhen Peking University-The Hong Kong University of Science and Technology Medical Center, Shenzhen 518036, China
- Tingting Zheng
- Department of Ultrasound, Peking University Shenzhen Hospital, Shenzhen Peking University-The Hong Kong University of Science and Technology Medical Center, Shenzhen 518036, China
- Li Liu
- Department of Ultrasound, Peking University Shenzhen Hospital, Shenzhen Peking University-The Hong Kong University of Science and Technology Medical Center, Shenzhen 518036, China
- Desheng Sun
- Department of Ultrasound, Peking University Shenzhen Hospital, Shenzhen Peking University-The Hong Kong University of Science and Technology Medical Center, Shenzhen 518036, China
- Yun Chen
- Department of Ultrasound, Peking University Shenzhen Hospital, Shenzhen Peking University-The Hong Kong University of Science and Technology Medical Center, Shenzhen 518036, China
23
Zhu Z, Wang SH, Zhang YD. A Survey of Convolutional Neural Network in Breast Cancer. Comput Model Eng Sci 2023; 136:2127-2172. [PMID: 37152661] [PMCID: PMC7614504] [DOI: 10.32604/cmes.2023.025484]
Abstract
Problems Cancer is one of the most feared diseases worldwide: it is a major obstacle to improving life expectancy and one of the biggest causes of death before the age of 70 in 112 countries. Among all cancers, breast cancer is the most common cancer in women. Aims A large number of clinical trials have shown that diagnosing breast cancer at an early stage gives patients more treatment options and improves treatment outcomes and survival. Accordingly, many diagnostic methods for breast cancer exist, such as computer-aided diagnosis (CAD). Methods We present a comprehensive review of the diagnosis of breast cancer based on the convolutional neural network (CNN), compiled from a large body of recent papers. First, we introduce several imaging modalities. The structure of the CNN is given in the second part. After that, we introduce some public breast cancer datasets. Then, we divide the diagnosis of breast cancer into three tasks: 1. classification; 2. detection; 3. segmentation. Conclusion Although CNN-based diagnosis has achieved great success, some limitations remain. (i) There are too few good datasets: a good public breast cancer dataset must address many aspects, such as professional medical knowledge, privacy issues, financial issues, and dataset size. (ii) When the dataset is very large, CNN-based models need substantial computation and time to complete the diagnosis. (iii) Small datasets easily cause overfitting.
Affiliation(s)
- Yu-Dong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK
24
Xue P, Si M, Qin D, Wei B, Seery S, Ye Z, Chen M, Wang S, Song C, Zhang B, Ding M, Zhang W, Bai A, Yan H, Dang L, Zhao Y, Rezhake R, Zhang S, Qiao Y, Qu Y, Jiang Y. Unassisted Clinicians Versus Deep Learning-Assisted Clinicians in Image-Based Cancer Diagnostics: Systematic Review With Meta-analysis. J Med Internet Res 2023; 25:e43832. [PMID: 36862499] [PMCID: PMC10020907] [DOI: 10.2196/43832]
Abstract
BACKGROUND A number of publications have demonstrated that deep learning (DL) algorithms matched or outperformed clinicians in image-based cancer diagnostics, but these algorithms are frequently framed as opponents rather than partners. Despite the great potential of the clinician-in-the-loop DL approach, no study has systematically quantified the diagnostic accuracy of clinicians with and without the assistance of DL in image-based cancer identification. OBJECTIVE We systematically quantified the diagnostic accuracy of clinicians with and without the assistance of DL in image-based cancer identification. METHODS PubMed, Embase, IEEE Xplore, and the Cochrane Library were searched for studies published between January 1, 2012, and December 7, 2021. Any study design was permitted that compared unassisted clinicians and DL-assisted clinicians in cancer identification using medical imaging. Studies using medical waveform-data graphics material and those investigating image segmentation rather than classification were excluded. Studies providing binary diagnostic accuracy data and contingency tables were included for meta-analysis. Two subgroups were defined and analyzed: cancer type and imaging modality. RESULTS In total, 9796 studies were identified, of which 48 were deemed eligible for systematic review. Twenty-five of these studies compared unassisted and DL-assisted clinicians and provided sufficient data for statistical synthesis. We found a pooled sensitivity of 83% (95% CI 80%-86%) for unassisted clinicians and 88% (95% CI 86%-90%) for DL-assisted clinicians. Pooled specificity was 86% (95% CI 83%-88%) for unassisted clinicians and 88% (95% CI 85%-90%) for DL-assisted clinicians. The pooled sensitivity and specificity values for DL-assisted clinicians were higher than those for unassisted clinicians, at ratios of 1.07 (95% CI 1.05-1.09) and 1.03 (95% CI 1.02-1.05), respectively. Similar diagnostic performance by DL-assisted clinicians was also observed across the predefined subgroups. CONCLUSIONS The diagnostic performance of DL-assisted clinicians appears better than that of unassisted clinicians in image-based cancer identification. However, caution should be exercised, because the evidence provided in the reviewed studies does not cover all the minutiae involved in real-world clinical practice. Combining qualitative insights from clinical practice with data-science approaches may improve DL-assisted practice, although further research is required. TRIAL REGISTRATION PROSPERO CRD42021281372; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=281372.
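The per-study quantities pooled above come from 2x2 contingency tables. A minimal sketch of that per-study step and of the assisted-versus-unassisted ratio, using hypothetical counts (the review itself pools studies with formal meta-analytic models, which this does not attempt):

```python
# Per-study sensitivity and specificity from a 2x2 contingency table, and the
# DL-assisted / unassisted ratios reported in the review. The counts below are
# hypothetical; real pooling uses random-effects meta-analysis across studies.

def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# One hypothetical study, read without and then with DL assistance.
sens_u, spec_u = sens_spec(tp=83, fn=17, tn=86, fp=14)   # unassisted clinicians
sens_a, spec_a = sens_spec(tp=88, fn=12, tn=88, fp=12)   # DL-assisted clinicians

sens_ratio = sens_a / sens_u   # > 1 means assistance improved sensitivity
spec_ratio = spec_a / spec_u   # > 1 means assistance improved specificity
```

With these toy counts the ratios come out slightly above 1, mirroring the direction (though not the pooled magnitude) of the review's findings.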
Affiliation(s)
- Peng Xue
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Mingyu Si
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Dongxu Qin
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Bingrui Wei
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Samuel Seery
- Faculty of Health and Medicine, Division of Health Research, Lancaster University, Lancaster, United Kingdom
- Zichen Ye
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Mingyang Chen
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Sumeng Wang
- Department of Cancer Epidemiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Cheng Song
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Bo Zhang
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ming Ding
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Wenling Zhang
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Anying Bai
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Huijiao Yan
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Le Dang
- Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yuqian Zhao
- Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science & Technology of China, Sichuan, China
- Remila Rezhake
- Affiliated Cancer Hospital, The 3rd Affiliated Teaching Hospital of Xinjiang Medical University, Xinjiang, China
- Shaokai Zhang
- Henan Cancer Hospital, Affiliated Cancer Hospital of Zhengzhou University, Henan, China
- Youlin Qiao
- Center for Global Health, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yimin Qu
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yu Jiang
- Department of Epidemiology and Biostatistics, School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
25
A novel deep learning model for breast lesion classification using ultrasound images: a multicenter data evaluation. Phys Med 2023; 107:102560. [PMID: 36878133] [DOI: 10.1016/j.ejmp.2023.102560]
Abstract
PURPOSE Breast cancer is one of the leading causes of cancer death in women. Early diagnosis is the most critical key to disease screening, control, and reducing mortality. A robust diagnosis relies on the correct classification of breast lesions. While breast biopsy is the "gold standard" for assessing both the activity and degree of breast cancer, it is an invasive and time-consuming approach. METHOD The primary objective of the current study was to develop a novel deep-learning architecture based on the InceptionV3 network to classify ultrasound breast lesions. The main modifications of the proposed architecture were converting the InceptionV3 modules to residual inception modules, increasing their number, and altering the hyperparameters. In addition, we used a combination of five datasets (three public datasets and two prepared from different imaging centers) for training and evaluating the model. RESULTS The dataset was split into training (80%) and test (20%) sets. The model achieved 0.83, 0.77, 0.8, 0.81, 0.81, 0.18, and 0.77 for precision, recall, F1 score, accuracy, AUC, root mean squared error, and Cronbach's α on the test set, respectively. CONCLUSIONS This study illustrates that the improved InceptionV3 can robustly classify breast tumors, potentially reducing the need for biopsy in many cases.
26
Alsubai S, Alqahtani A, Sha M. Genetic hyperparameter optimization with Modified Scalable-Neighbourhood Component Analysis for breast cancer prognostication. Neural Netw 2023; 162:240-257. [PMID: 36913821] [DOI: 10.1016/j.neunet.2023.02.035]
Abstract
Breast cancer is common among women and can result in mortality when left untreated. Early detection is vital, so that suitable treatment can stop the cancer from spreading and save lives. Traditional detection is a time-consuming process. With the evolution of data mining (DM), the healthcare industry can benefit in predicting the disease, as DM permits physicians to determine the significant attributes for diagnosis. Although conventional techniques have used DM-based methods to identify breast cancer, their prediction rates were limited. Moreover, parametric softmax classifiers have been a common choice in conventional works with fixed classes, particularly when large labelled datasets are available during training. Nevertheless, this becomes an issue for open-set cases, where new classes are encountered with only a few instances from which to learn a generalized parametric classifier. Thus, the present study aims to implement a non-parametric strategy by optimizing the embedding of features rather than parametric classifiers. This research utilizes a deep CNN (deep convolutional neural network) and Inception V3 for learning visual features that preserve the neighbourhood outline in semantic space, relying on the NCA (Neighbourhood Component Analysis) criterion. To overcome its bottleneck, the study proposes MS-NCA (Modified Scalable-Neighbourhood Component Analysis), which relies on a non-linear objective function to perform feature fusion by optimizing the distance-learning objective; this confers the ability to compute inner feature products without performing mapping, which increases the scalability of MS-NCA. Finally, G-HPO (Genetic Hyper-parameter Optimization) is proposed. Here, a new stage in the algorithm lengthens the chromosome, encoding additional hyperparameters for the subsequent multi-layer XGBoost (eXtreme Gradient Boosting), NB (Naïve Bayes), and RF (Random Forest) models that identify normal and affected cases of breast cancer, for which optimized hyperparameter values are determined. This process improves the classification rate, as confirmed by the analytical results.
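The genetic hyperparameter search described above can be illustrated with a toy evolutionary loop: a population of hyperparameter settings is scored, truncated to the fittest, and mutated over generations. The gene bounds and the smooth synthetic fitness surrogate below are hypothetical stand-ins for the paper's cross-validated objective, not its actual chromosome encoding:

```python
# Toy genetic hyperparameter optimization: evolve a population of candidate
# settings (here, two random-forest-style genes) by truncation selection and
# mutation. The fitness function is a synthetic surrogate that peaks at
# n_estimators=300, max_depth=8; a real G-HPO would score cross-validated
# accuracy of the downstream classifier instead.
import random

random.seed(42)
BOUNDS = {"n_estimators": (10, 500), "max_depth": (1, 12)}  # hypothetical genes

def fitness(ind):
    return (-((ind["n_estimators"] - 300) / 500) ** 2
            - ((ind["max_depth"] - 8) / 12) ** 2)

def random_ind():
    return {k: random.randint(lo, hi) for k, (lo, hi) in BOUNDS.items()}

def mutate(ind):
    child = dict(ind)
    k = random.choice(list(BOUNDS))
    lo, hi = BOUNDS[k]
    child[k] = min(hi, max(lo, child[k] + random.randint(-30, 30)))
    return child

pop = [random_ind() for _ in range(20)]
best0 = max(pop, key=fitness)            # best of the initial population
for _ in range(40):                      # generations
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                 # truncation selection (elitist)
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(pop, key=fitness)
```

Because the top half of each generation survives unchanged, the best fitness is monotonically non-decreasing, which is the property the assertions below check.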
Affiliation(s)
- Shtwai Alsubai
- College of Computer Engineering and Sciences, Prince Sattam Bin AbdulAziz University, Al Kharj, Saudi Arabia
- Abdullah Alqahtani
- College of Computer Engineering and Sciences, Prince Sattam Bin AbdulAziz University, Al Kharj, Saudi Arabia
- Mohemmed Sha
- College of Computer Engineering and Sciences, Prince Sattam Bin AbdulAziz University, Al Kharj, Saudi Arabia
27
Xie L, Liu Z, Pei C, Liu X, Cui YY, He NA, Hu L. Convolutional neural network based on automatic segmentation of peritumoral shear-wave elastography images for predicting breast cancer. Front Oncol 2023; 13:1099650. [PMID: 36865812] [PMCID: PMC9970986] [DOI: 10.3389/fonc.2023.1099650]
Abstract
Objective Our aim was to develop dual-modal CNN models combining conventional ultrasound (US) images and shear-wave elastography (SWE) of the peritumoral region to improve prediction of breast cancer. Method We retrospectively collected US images and SWE data of 1271 ACR BI-RADS 4 breast lesions from 1116 female patients (mean age ± standard deviation, 45.40 ± 9.65 years). The lesions were divided into three subgroups based on the maximum diameter (MD): ≤15 mm; >15 mm and ≤25 mm; >25 mm. We recorded lesion stiffness (SWV1) and the 5-point average stiffness of the peritumoral tissue (SWV5). The CNN models were built based on the segmentation of different widths of peritumoral tissue (0.5 mm, 1.0 mm, 1.5 mm, 2.0 mm) and the internal SWE image of the lesions. All single-parameter CNN models, dual-modal CNN models, and quantitative SWE parameters in the training cohort (971 lesions) and the validation cohort (300 lesions) were assessed by receiver operating characteristic (ROC) curve analysis. Results The US + 1.0 mm SWE model achieved the highest area under the ROC curve (AUC) in the subgroup of lesions with MD ≤15 mm in both the training (0.94) and the validation cohorts (0.91). In the subgroups with MD between 15 and 25 mm and above 25 mm, the US + 2.0 mm SWE model achieved the highest AUCs in both the training cohort (0.96 and 0.95, respectively) and the validation cohort (0.93 and 0.91, respectively). Conclusion The dual-modal CNN models based on the combination of US and peritumoral region SWE images allow accurate prediction of breast cancer.
Affiliation(s)
- Li Xie
- Department of Ultrasound, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Zhen Liu
- Department of Computing, Hebin Intelligent Robots Co., Ltd., Hefei, China
- Chong Pei
- Department of Respiratory and Critical Care Medicine, The First People's Hospital of Hefei City, The Third Affiliated Hospital of Anhui Medical University, Hefei, China
- Xiao Liu
- Department of Ultrasound, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Ya-yun Cui
- Department of Ultrasound, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Nian-an He (corresponding author)
- Department of Ultrasound, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Lei Hu (corresponding author)
- Department of Ultrasound, The First Affiliated Hospital of University of Science and Technology of China (USTC), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
28
Lin ZM, Wang TT, Zhu JY, Xu YY, Chen F, Huang PT. A nomogram based on combining clinical features and contrast enhanced ultrasound is not able to identify Her-2 over-expressing cancer from other breast cancers. Front Oncol 2023; 13:1035645. [PMID: 36776315] [PMCID: PMC9909531] [DOI: 10.3389/fonc.2023.1035645]
Abstract
Objective The aim of this study was to evaluate whether a predictive model based on a contrast-enhanced ultrasound (CEUS)-based nomogram and clinical features (Clin) could differentiate Her-2-overexpressing breast cancers from other breast cancers. Methods A total of 152 pathology-proven breast cancers, including 55 Her-2-overexpressing cancers and 97 other cancers from two units that underwent preoperative CEUS examination, were included and divided into training (n = 102) and validation cohorts (n = 50). Multivariate regression analysis was used to identify independent indicators for developing predictive nomogram models. The area under the receiver operating characteristic curve (AUC) was calculated to establish the diagnostic performance of the different predictive models, and the corresponding sensitivities and specificities of the different models at the cutoff nomogram value were compared. Results In the training cohort, 7 clinical features (menstruation, larger tumor size, higher CA153 level, BMI, diastolic pressure, heart rate, and upper outer quadrant (OUQ) location) plus enlargement on CEUS, all with P < 0.2 in the univariate analysis, were submitted to the multivariate analysis. By incorporating clinical information and the enlargement CEUS pattern, independently significant indicators of Her-2 overexpression were used for further predictive modeling as follows: Model I, nomogram model based on clinical features (Clin); Model II, nomogram model combining clinical features and enlargement (Clin + Enlargement); Model III, nomogram model based on typical clinical features combined with enlargement (MC + BMI + diastolic pressure (DP) + OUQ + Enlargement). Model II achieved an AUC of 0.776 at a nomogram cutoff score of 190, which was higher than that of the other models in the training cohort, without significant differences (all P > 0.05). In the test cohort, the diagnostic efficiency of the predictive models was poor (all AUC < 0.6). In addition, the sensitivity and specificity did not differ significantly between Models I and II (all P > 0.05) in either the training or the test cohort, and Clin exhibited an AUC similar to that of Model III (P = 0.12). Moreover, Model III exhibited a higher sensitivity (70.0%) than the other models with similar AUC and specificity, but only in the test cohort. Conclusion The main finding of this study was that a predictive model based on a CEUS-based nomogram and clinical features could not differentiate Her-2-overexpressing breast cancers from other breast cancers.
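The AUC values used to compare the nomogram models above have a simple rank interpretation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case (the Mann-Whitney form). A minimal sketch on hypothetical nomogram scores:

```python
# AUC as the Mann-Whitney probability: count, over all positive/negative
# pairs, how often the positive case outscores the negative one (ties count
# half). The scores below are hypothetical, not the study's nomogram outputs.

def auc(scores_pos, scores_neg):
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical nomogram scores: positives = Her-2-overexpressing, negatives = other.
pos = [210, 195, 240, 188, 205]
neg = [170, 182, 199, 160, 175, 190]
a = auc(pos, neg)  # 1.0 = perfect separation, 0.5 = chance level
```

An AUC near 0.5, as the abstract reports for the test cohort, means the score ranks positives above negatives no better than a coin flip.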
Affiliation(s)
- Zi-mei Lin
- Department of Ultrasound in Medicine, The Second Affiliated Hospital Zhejiang University School of Medicine, Hangzhou, China
- Ting-ting Wang
- Department of Ultrasound in Medicine, The Second Affiliated Hospital Zhejiang University School of Medicine, Hangzhou, China
- Jun-Yan Zhu
- Department of Ultrasound, The First Affiliated Hospital of Zhejiang Chinese Medical University, Hangzhou, China
- Yong-yuan Xu
- Department of Ultrasound in Medicine, The Second Affiliated Hospital Zhejiang University School of Medicine, Hangzhou, China
- Fen Chen
- Department of Ultrasound, The First Affiliated Hospital of Zhejiang Chinese Medical University, Hangzhou, China
- Pin-tong Huang (corresponding author)
- Department of Ultrasound in Medicine, The Second Affiliated Hospital Zhejiang University School of Medicine, Hangzhou, China
29
Efficient Breast Cancer Diagnosis from Complex Mammographic Images Using Deep Convolutional Neural Network. Comput Intell Neurosci 2023; 2023:7717712. [PMID: 36909966] [PMCID: PMC9998154] [DOI: 10.1155/2023/7717712]
Abstract
Medical image analysis places significant focus on breast cancer, which poses a serious threat to women's health and contributes to many fatalities. Early and precise diagnosis of breast cancer through digital mammograms can significantly improve the accuracy of disease detection. Computer-aided diagnosis (CAD) systems must analyze the medical imagery and perform detection, segmentation, and classification to assist radiologists in accurately detecting breast lesions. However, early-stage cancer detection in mammography is difficult. The deep convolutional neural network has demonstrated exceptional results and is considered a highly effective tool in this field. This study proposes a computational framework for diagnosing breast cancer using a ResNet-50 convolutional neural network to classify mammogram images. To train on and classify the INbreast dataset into benign or malignant categories, the framework utilizes transfer learning from ResNet-50 pretrained on ImageNet. The results reveal that the proposed framework achieved an outstanding classification accuracy of 93%, surpassing other models trained on the same dataset. This approach facilitates early diagnosis and classification of malignant and benign breast lesions, potentially saving lives and resources. These outcomes highlight that deep convolutional neural network algorithms can be trained to achieve highly accurate results on various mammograms, along with the capacity to enhance medical tools by reducing the error rate in screening mammograms.
30
Rana M, Bhushan M. Machine learning and deep learning approach for medical image analysis: diagnosis to detection. Multimed Tools Appl 2022; 82:1-39. [PMID: 36588765] [PMCID: PMC9788870] [DOI: 10.1007/s11042-022-14305-w]
Abstract
Computer-aided detection using deep learning (DL) and machine learning (ML) is showing tremendous growth in the medical field. Medical images are the primary source of the information required for disease diagnosis. Detecting disease at an early stage, using various modalities, is one of the most important factors in decreasing mortality from cancer and tumors. Modalities help radiologists and doctors study the internal structure of the detected disease and retrieve the required features. ML has limitations with present modalities when data volumes are very large, whereas DL works efficiently with any amount of data; DL can therefore be considered an enhancement of ML, using multilayered neural networks to extract more information from the datasets used. This study presents a systematic literature review of ML and DL applications for the detection and classification of multiple diseases. A detailed analysis of 40 primary studies from well-known journals and conferences between January 2014 and 2022 was performed. The review provides an overview of ML- and DL-based approaches for detecting and classifying multiple diseases, medical imaging modalities, tools and techniques used for evaluation, and descriptions of datasets. Further, experiments are performed using an MRI dataset to provide a comparative analysis of ML classifiers and DL models. This study will assist the healthcare community by enabling medical practitioners and researchers to choose an appropriate diagnosis technique for a given disease with reduced time and high accuracy.
Affiliation(s)
- Meghavi Rana
- School of Computing, DIT University, Dehradun, India
- Megha Bhushan
- School of Computing, DIT University, Dehradun, India

31
Baek J, O'Connell AM, Parker KJ. Improving breast cancer diagnosis by incorporating raw ultrasound parameters into machine learning. Machine Learning: Science and Technology 2022; 3:045013. [PMID: 36698865 PMCID: PMC9855672 DOI: 10.1088/2632-2153/ac9bcc]
Abstract
Improving the diagnostic accuracy of ultrasound breast examinations remains an important goal. In this study, we propose a biophysical feature-based machine learning method for breast cancer detection that improves performance beyond a benchmark deep learning algorithm and furthermore provides a color overlay visual map of the probability of malignancy within a lesion. This overall framework is termed disease-specific imaging. Previously, 150 breast lesions were segmented and classified using a modified fully convolutional network and a modified GoogLeNet, respectively. In this study, multiparametric analysis was performed within the contoured lesions. Features were extracted from ultrasound radiofrequency, envelope, and log-compressed data based on biophysical and morphological models. A support vector machine with a Gaussian kernel constructed a nonlinear hyperplane, and we calculated the distance between the hyperplane and each lesion's data point in multiparametric feature space. This distance quantitatively assesses a lesion and suggests a probability of malignancy that is color-coded and overlaid onto B-mode images. Training and evaluation were performed on in vivo patient data. The overall accuracy for the most common types and sizes of breast lesions in our study exceeded 98.0% for classification, with an area under the receiver operating characteristic curve of 0.98, more precise than the performance of radiologists and a deep learning system. Further, the correlation between the probability and the Breast Imaging Reporting and Data System (BI-RADS) enables a quantitative guideline for predicting breast cancer. We therefore anticipate that the proposed framework can help radiologists achieve more accurate and convenient breast cancer classification and detection.
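The core classification step described above — a Gaussian-kernel SVM whose signed distance to the hyperplane is mapped to a malignancy probability — can be sketched as follows. The synthetic features and the sigmoid mapping are our assumptions for illustration, not the authors' actual measurements or calibration:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for multiparametric lesion features
# (5 hypothetical biophysical/morphological measurements per lesion);
# 0 = benign, 1 = malignant.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (40, 5)),   # benign lesions
               rng.normal(1.5, 1.0, (40, 5))])  # malignant lesions
y = np.array([0] * 40 + [1] * 40)

# Gaussian (RBF) kernel SVM: constructs a nonlinear hyperplane in feature space.
X_std = StandardScaler().fit_transform(X)
clf = SVC(kernel="rbf", gamma="scale").fit(X_std, y)

# Signed distance of each lesion's feature vector to the hyperplane;
# squashing it through a sigmoid gives a malignancy pseudo-probability
# that could be color-coded and overlaid onto the B-mode image.
distance = clf.decision_function(X_std)
probability = 1.0 / (1.0 + np.exp(-distance))
```

In the paper's framework this per-lesion probability is rendered as a color map within the contoured lesion; here it is simply a vector of scores.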
Affiliation(s)
- Jihye Baek
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, United States of America
- Avice M O'Connell
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, United States of America
- Kevin J Parker
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, United States of America (author to whom any correspondence should be addressed)

32
Breast Cancer Classification by Using Multi-Headed Convolutional Neural Network Modeling. Healthcare (Basel) 2022; 10:2367. [PMID: 36553891 PMCID: PMC9777990 DOI: 10.3390/healthcare10122367]
Abstract
Breast cancer is one of the most widely recognized diseases after skin cancer. Though it can occur in anyone, it is undeniably more common in women. Several analytical techniques, such as breast MRI, X-ray, thermography, mammography, ultrasound, etc., are used to identify it. In this study, artificial intelligence was used to rapidly detect breast cancer by analyzing ultrasound images from the Breast Ultrasound Images Dataset (BUSI), which consists of three categories: benign, malignant, and normal. The dataset comprises grayscale and masked ultrasound images of diagnosed patients. Validation tests were performed for quantitative outcomes using the performance measures for each procedure. The proposed framework proved effective: raw-image evaluation alone gave a 78.97% test accuracy and masked-image evaluation gave 81.02% test precision, which could reduce human error in the diagnostic cycle. Additionally, the described framework achieves higher accuracy when a multi-headed CNN is used with the two processed datasets (masked and original images), where accuracy rose to 92.31% (±2) with a Mean Squared Error (MSE) loss of 0.05. This work primarily contributes to identifying the usefulness of a multi-headed CNN when working with two different types of data input. Finally, a web interface was built to make the model usable by non-technical personnel.
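A multi-headed CNN of the kind described — one input branch per image type (original and masked), fused before a shared classifier — can be sketched as follows. All layer sizes are assumptions for illustration; this is not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class MultiHeadCNN(nn.Module):
    """Minimal two-headed CNN sketch: one branch for the original
    ultrasound image, one for the masked image; branch features are
    concatenated for a 3-class prediction (benign/malignant/normal)."""

    def __init__(self, n_classes: int = 3):
        super().__init__()

        def branch() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )

        self.raw_branch = branch()    # head 1: original image
        self.mask_branch = branch()   # head 2: masked image
        self.classifier = nn.Linear(32 + 32, n_classes)

    def forward(self, raw: torch.Tensor, masked: torch.Tensor) -> torch.Tensor:
        features = torch.cat(
            [self.raw_branch(raw), self.mask_branch(masked)], dim=1)
        return self.classifier(features)

model = MultiHeadCNN()
raw = torch.randn(4, 1, 64, 64)     # batch of original grayscale patches
masked = torch.randn(4, 1, 64, 64)  # corresponding masked patches
logits = model(raw, masked)
```

The design point is that each head can specialize on its input type while the fused representation drives a single decision.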
33
Meng M, Zhang M, Shen D, He G. Differentiation of breast lesions on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) using deep transfer learning based on DenseNet201. Medicine (Baltimore) 2022; 101:e31214. [PMID: 36397422 PMCID: PMC9666147 DOI: 10.1097/md.0000000000031214]
Abstract
In order to achieve better performance, artificial intelligence is used in breast cancer diagnosis. In this study, we evaluated the efficacy of different fine-tuning strategies of deep transfer learning (DTL) based on the DenseNet201 model for differentiating malignant from benign lesions on breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). We collected 4260 images of benign lesions and 4140 images of malignant lesions pertaining to pathologically confirmed cases. The benign and malignant groups were each randomly divided into a training set and a testing set at a ratio of 9:1. A DTL model based on DenseNet201 was established, and the effectiveness of 4 fine-tuning strategies (S0, S1, S2, and S3) was compared. Additionally, DCE-MRI images of 48 breast lesions were selected to verify the robustness of the model. Ten images were obtained for each lesion, and the classification was considered correct if more than 5 were correctly classified. The metrics for model performance evaluation included accuracy (Ac) in the training and testing sets and precision (Pr), recall rate (Rc), F1 score (f1), and area under the receiver operating characteristic curve (AUROC) in the validation set. The Ac of all 4 fine-tuning strategies reached 100.00% in the training set. The S2 strategy exhibited good convergence in the testing set, with an Ac of 98.01%, higher than those of S0 (93.10%), S1 (90.45%), and S3 (93.90%). The average classification Pr, Rc, f1, and AUROC of S2 in the validation set (89.00%, 80.00%, 0.81, and 0.79, respectively) were higher than those of S0 (76.00%, 67.00%, 0.69, and 0.65), S1 (60.00%, 60.00%, 0.60, and 0.66), and S3 (77.00%, 73.00%, 0.74, and 0.72). The degree of agreement between S2 and the histopathological method for differentiating benign from malignant breast lesions was high (κ = 0.749).
The S2 strategy can improve the robustness of the DenseNet201 model in relatively small breast DCE-MRI datasets, and this is a reliable method to increase the Ac of discriminating benign from malignant breast lesions on DCE-MRI.
Affiliation(s)
- Mingzhu Meng
- Department of Radiology, The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, China
- Ming Zhang
- Department of Radiology, The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, China
- Dong Shen
- Department of Radiology, The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, China
- Guangyuan He
- Department of Radiology, The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, China
- * Correspondence: Guangyuan He, Department of Radiology, The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, No. 68 Gehuzhong Rd, Changzhou 213164, Jiangsu Province, China (e-mail: )

34
Balancing regional and global information: An interactive segmentation framework for ultrasound breast lesion. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103723]
35
Liu FF, Pei Y. MicroRNA192 Promotes Metastasis and Invasion of Breast Cancer via Targeting Tensin1 and Enhancing Cell Division Control Protein 42 Homolog (CDC42) Expression. J Biomater Tiss Eng 2022. [DOI: 10.1166/jbt.2022.3068]
Abstract
We aimed to dissect the biological impacts and mechanisms of MicroRNA192 (miR-192) in breast cancer metastasis and invasion. Tumor tissues from patients and breast cancer cells were used to measure miR-192 levels via RT-PCR. miR-192 mimics, a miR-192 inhibitor, si-Tensin1, and the corresponding negative controls were transfected into cells, followed by analysis of cell invasion by transwell assay and of CDC42 levels by western blot. Afterwards, a tumor transplantation model was established to assess malignancy progression and migration. Human miR-192 accounted for approximately 14% of the overexpressed miRNAs. Overexpression of miR-192 promoted malignant cell invasion, while knockdown of endogenous miR-192 significantly decreased it, suggesting that miR-192 promotes the invasive phenotype of breast cancer cells in vitro. In contrast to the control group, tumor metastasis was significantly provoked in the miR-192 overexpression group. miR-192 directly targeted and suppressed the expression of Tensin1, enhanced malignant invasiveness by regulating Cdc42, and correlated with patient survival. A high miR-192 level is related to malignant invasiveness and metastatic behavior, as well as poor prognosis, in patients with breast cancer via activation of Cdc42 and targeting of Tensin1.
Affiliation(s)
- Fang-Fang Liu
- Department of Thyroid and Breast Surgery, The Affiliated Huai'an Hospital of Xuzhou Medical University, Huaian, Jiangsu, 223002, China
- Yin Pei
- Department of Nuclear Medicine, The First Hospital of Hebei Medicine University, Shijiazhuang, Hebei, 050000, China

36
Gong X, Zhao X, Fan L, Li T, Guo Y, Luo J. BUS-net: a bimodal ultrasound network for breast cancer diagnosis. Int J Mach Learn Cyb 2022. [DOI: 10.1007/s13042-022-01596-6]
37
An optimized deep learning architecture for breast cancer diagnosis based on improved marine predators algorithm. Neural Comput Appl 2022; 34:18015-18033. [PMID: 35698722 PMCID: PMC9175533 DOI: 10.1007/s00521-022-07445-5]
Abstract
Breast cancer is the second leading cause of death in women; therefore, effective early detection of this cancer can reduce its mortality rate. Breast cancer detection and classification in the early phases of development may allow for optimal therapy. Convolutional neural networks (CNNs) have enhanced tumor detection and classification efficiency in medical imaging compared to traditional approaches. This paper proposes a novel classification model for breast cancer diagnosis based on a hybridized CNN and an improved optimization algorithm, along with transfer learning, to help radiologists detect abnormalities efficiently. The marine predators algorithm (MPA) is the optimization algorithm we used, and we improve it using the opposition-based learning strategy to cope with the implied weaknesses of the original MPA. The improved marine predators algorithm (IMPA) is used to find the best values for the hyperparameters of the CNN architecture. The proposed method uses a pretrained CNN model called ResNet50 (residual network). This model is hybridized with the IMPA algorithm, resulting in an architecture called IMPA-ResNet50. Our evaluation is performed on two mammographic datasets, the mammographic image analysis society (MIAS) and curated breast imaging subset of DDSM (CBIS-DDSM) datasets. The proposed model was compared with other state-of-the-art approaches. The obtained results showed that the proposed model outperforms the compared state-of-the-art approaches, which are beneficial to classification performance, achieving 98.32% accuracy, 98.56% sensitivity, and 98.68% specificity on the CBIS-DDSM dataset and 98.88% accuracy, 97.61% sensitivity, and 98.40% specificity on the MIAS dataset. 
To evaluate the performance of IMPA in finding optimal values for the hyperparameters of the ResNet50 architecture, it was compared with four other optimization algorithms: the gravitational search algorithm (GSA), Harris hawks optimization (HHO), the whale optimization algorithm (WOA), and the original MPA algorithm. These counterpart algorithms were also hybridized with the ResNet50 architecture, producing models named GSA-ResNet50, HHO-ResNet50, WOA-ResNet50, and MPA-ResNet50, respectively. The results indicated that the proposed IMPA-ResNet50 achieved better performance than its counterparts.
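The opposition-based learning strategy used to strengthen MPA can be illustrated in isolation. The 1-D objective below is a hypothetical stand-in for validation error over a single CNN hyperparameter; the real IMPA objective requires training ResNet50:

```python
import numpy as np

def opposite(pop: np.ndarray, low: float, high: float) -> np.ndarray:
    """Opposition-based learning: for each candidate x in [low, high],
    its 'opposite' solution is low + high - x."""
    return low + high - pop

# Hypothetical objective: pretend validation error is minimized at 0.3.
def objective(x):
    return (x - 0.3) ** 2

rng = np.random.default_rng(1)
low, high = 0.0, 1.0
pop = rng.uniform(low, high, size=8)               # random initial population
merged = np.concatenate([pop, opposite(pop, low, high)])

# Keep the 8 best of the 16 candidates: the resulting population can never
# be worse than the purely random one, which is the weakness OBL addresses.
survivors = merged[np.argsort(objective(merged))[:8]]
```

In the paper this opposition step augments the MPA population when searching CNN hyperparameters; here it is reduced to the select-the-better-half mechanic.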
38
Wang Q, Chen H, Luo G, Li B, Shang H, Shao H, Sun S, Wang Z, Wang K, Cheng W. Performance of novel deep learning network with the incorporation of the automatic segmentation network for diagnosis of breast cancer in automated breast ultrasound. Eur Radiol 2022; 32:7163-7172. [PMID: 35488916 DOI: 10.1007/s00330-022-08836-x]
Abstract
OBJECTIVE To develop a novel deep learning network (DLN) incorporating an automatic segmentation network (ASN) for morphological analysis, and to determine its performance in diagnosing breast cancer in automated breast ultrasound (ABUS). METHODS A total of 769 breast tumors were enrolled in this study and randomly divided into a training set (600) and a test set (169). The novel DLNs (ResNet34 v2, ResNet50 v2, and ResNet101 v2) add a new ASN to the traditional ResNet networks and extract morphological information about breast tumors. Accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), area under the receiver operating characteristic (ROC) curve (AUC), and average precision (AP) were calculated. The diagnostic performance of the novel DLNs was compared with that of two radiologists with different levels of experience. RESULTS The ResNet34 v2 model had higher specificity (76.81%) and PPV (82.22%) than the other two, the ResNet50 v2 model had higher accuracy (78.11%) and NPV (72.86%), and the ResNet101 v2 model had higher sensitivity (85.00%). According to the AUCs and APs, the novel ResNet101 v2 model produced the best result (AUC 0.85 and AP 0.90) of the six DLNs compared. Compared with the novice radiologist, the novel DLNs performed better: the three novel DLNs increased the F1 score from 0.77 to 0.78, 0.81, and 0.82, respectively. However, their diagnostic performance was worse than that of the experienced radiologist. CONCLUSIONS The novel DLNs performed better than traditional DLNs and may help novice radiologists improve their diagnostic performance for breast cancer in ABUS. KEY POINTS • A novel automatic segmentation network to extract morphological information was successfully developed and implemented with ResNet deep learning networks.
• The novel deep learning networks in our research performed better than the traditional deep learning networks in the diagnosis of breast cancer using ABUS images. • The novel deep learning networks in our research may be useful for novice radiologists to improve diagnostic performance.
Affiliation(s)
- Qiucheng Wang, He Chen, Bo Li, Haitao Shang, Hua Shao, Wen Cheng
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Gongning Luo, Zhongshuai Wang, Kuanquan Wang
- School of Computer Science and Technology, Harbin Institute of Technology, No. 92, Xidazhi Street, Nangang District, Harbin, Heilongjiang Province, China
- Shanshan Sun
- Department of Breast Surgery, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China

39
Parida PK, Dora L, Swain M, Agrawal S, Panda R. Data science methodologies in smart healthcare: a review. Health and Technology 2022. [DOI: 10.1007/s12553-022-00648-9]
40
Deep learning in image-based breast and cervical cancer detection: a systematic review and meta-analysis. NPJ Digit Med 2022; 5:19. [PMID: 35169217 PMCID: PMC8847584 DOI: 10.1038/s41746-022-00559-z]
Abstract
Accurate early detection of breast and cervical cancer is vital for treatment success. Here, we conduct a meta-analysis to assess the diagnostic performance of deep learning (DL) algorithms for early breast and cervical cancer identification. Four subgroups are also investigated: cancer type (breast or cervical), validation type (internal or external), imaging modalities (mammography, ultrasound, cytology, or colposcopy), and DL algorithms versus clinicians. Thirty-five studies are deemed eligible for systematic review, 20 of which are meta-analyzed, with a pooled sensitivity of 88% (95% CI 85–90%), specificity of 84% (79–87%), and AUC of 0.92 (0.90–0.94). Acceptable diagnostic performance with analogous DL algorithms was highlighted across all subgroups. Therefore, DL algorithms could be useful for detecting breast and cervical cancer using medical imaging, having equivalent performance to human clinicians. However, this tentative assertion is based on studies with relatively poor designs and reporting, which likely caused bias and overestimated algorithm performance. Evidence-based, standardized guidelines around study methods and reporting are required to improve the quality of DL research.
41
Hejduk P, Marcon M, Unkelbach J, Ciritsis A, Rossi C, Borkowski K, Boss A. Fully automatic classification of automated breast ultrasound (ABUS) imaging according to BI-RADS using a deep convolutional neural network. Eur Radiol 2022; 32:4868-4878. [PMID: 35147776 PMCID: PMC9213284 DOI: 10.1007/s00330-022-08558-0]
Abstract
Purpose The aim of this study was to develop and test a post-processing technique for detection and classification of lesions according to the BI-RADS atlas in automated breast ultrasound (ABUS) based on deep convolutional neural networks (dCNNs). Methods and materials In this retrospective study, 645 ABUS datasets from 113 patients were included; 55 patients had lesions classified as high malignancy probability. Lesions were categorized as BI-RADS 2 (no suspicion of malignancy), BI-RADS 3 (probability of malignancy < 3%), or BI-RADS 4/5 (probability of malignancy > 3%). A deep convolutional neural network was trained, after data augmentation, with images of lesions and normal breast tissue, and a sliding-window approach for lesion detection was implemented. The algorithm was applied to a test dataset containing 128 images, and performance was compared with the readings of 2 experienced radiologists. Results Calculations performed on single images showed an accuracy of 79.7% and an AUC of 0.91 [95% CI: 0.85-0.96] for categorization according to BI-RADS. Moderate agreement between the dCNN and the ground truth was achieved (κ: 0.57 [95% CI: 0.50-0.64]), which is comparable with human readers. Analysis of the whole dataset improved categorization accuracy to 90.9% with an AUC of 0.91 [95% CI: 0.77-1.00], while achieving almost perfect agreement with the ground truth (κ: 0.82 [95% CI: 0.69-0.95]), performing on par with human readers. Furthermore, the object localization technique allowed detection of the lesion position slice-wise. Conclusions Our results show that a dCNN can be trained to detect and distinguish lesions in ABUS according to the BI-RADS classification with accuracy similar to experienced radiologists. Key Points • A deep convolutional neural network (dCNN) was trained for classification of ABUS lesions according to the BI-RADS atlas. • A sliding-window approach allows accurate automatic detection and classification of lesions in ABUS examinations.
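The sliding-window lesion-detection step can be sketched as below. Window size, stride, and the mean-intensity scoring are illustrative stand-ins for the trained dCNN's per-patch probability:

```python
import numpy as np

def sliding_windows(image: np.ndarray, win: int = 32, stride: int = 16):
    """Yield (row, col, patch) over a sliding-window scan; each patch
    would be fed to the trained classifier for BI-RADS scoring."""
    h, w = image.shape
    for r in range(0, h - win + 1, stride):
        for c in range(0, w - win + 1, stride):
            yield r, c, image[r:r + win, c:c + win]

# Toy localization: mean patch intensity stands in for the dCNN's
# lesion probability on a simulated bright lesion.
img = np.zeros((128, 128))
img[40:72, 60:92] = 1.0  # simulated 32x32 lesion
scores = [(r, c, patch.mean()) for r, c, patch in sliding_windows(img)]
best_r, best_c, best_score = max(scores, key=lambda s: s[2])
```

The highest-scoring window localizes the lesion; repeating the scan per slice yields the slice-wise positions mentioned in the abstract.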
Affiliation(s)
- Patryk Hejduk, Magda Marcon, Alexander Ciritsis, Cristina Rossi, Karol Borkowski, Andreas Boss
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistr. 100, 8091, Zurich, Switzerland
- Jan Unkelbach
- Department of Radiation Oncology, University Hospital Zurich, Rämistr. 100, 8091, Zurich, Switzerland

42
Chowdary J, Yogarajah P, Chaurasia P, Guruviah V. A Multi-Task Learning Framework for Automated Segmentation and Classification of Breast Tumors From Ultrasound Images. Ultrasonic Imaging 2022; 44:3-12. [PMID: 35128997 PMCID: PMC8902030 DOI: 10.1177/01617346221075769]
Abstract
Breast cancer is one of the most fatal diseases, leading to the death of many women across the world, but early diagnosis can help reduce the mortality rate. An efficient multi-task learning approach is therefore proposed in this work for the automatic segmentation and classification of breast tumors from ultrasound images. The proposed network consists of encoder, decoder, and bridge blocks for segmentation and a dense branch for tumor classification. For efficient classification, multi-scale features from different levels of the network are used. Experimental results show that the proposed approach improves segmentation accuracy and recall by 1.08% and 4.13%, and classification accuracy and recall by 1.16% and 2.34%, respectively, over methods available in the literature.
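The encoder-decoder-plus-dense-branch design can be sketched roughly as follows. Layer sizes, and the use of two encoder stages as the "multi-scale" features, are illustrative assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Sketch of a multi-task network: a shared encoder, a decoder head
    producing a segmentation mask, and a dense branch producing a
    benign/malignant prediction from pooled multi-scale features."""

    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.MaxPool2d(2),
                                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        # Decoder (segmentation head): upsample back to input resolution.
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(32, 1, 3, padding=1),
        )
        # Dense branch (classification head) on pooled multi-scale features.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.dense = nn.Linear(16 + 32, 2)

    def forward(self, x):
        f1 = self.enc1(x)                 # shallow features
        f2 = self.enc2(f1)                # deeper features
        seg = self.decoder(f2)            # (N, 1, H, W) mask logits
        ms = torch.cat([self.pool(f1).flatten(1),
                        self.pool(f2).flatten(1)], dim=1)
        cls = self.dense(ms)              # (N, 2) class logits
        return seg, cls

net = MultiTaskNet()
seg, cls = net(torch.randn(2, 1, 64, 64))
```

Training would combine a segmentation loss on `seg` with a classification loss on `cls`, which is the essence of the multi-task setup.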
Affiliation(s)
- Pratheepan Yogarajah
- University of Ulster, Northland Road, Magee Campus, Londonderry, Northern Ireland BT48 7JL, UK

43
Two-Stage Segmentation Framework Based on Distance Transformation. Sensors 2021; 22:250. [PMID: 35009793 PMCID: PMC8749866 DOI: 10.3390/s22010250]
Abstract
With the rise of deep learning, using deep learning to segment lesions and assist in diagnosis has become an effective means of advancing clinical medical analysis. However, the partial volume effect of organ tissues leads to unclear, blurred ROI edges in medical images, making high-accuracy segmentation of lesions or organs challenging. In this paper, we assume that the distance map obtained by applying a distance transformation to the ROI edge can serve as a weight map that makes the network pay more attention to learning the ROI edge region. To this end, we design a novel framework that flexibly embeds the distance map into a two-stage network to improve left atrium MRI segmentation performance. Furthermore, a series of distance-map generation methods are proposed and studied to explore how best to express the weights that assist network learning. We conduct thorough experiments to verify the effectiveness of the proposed segmentation framework, and the experimental results demonstrate that our hypothesis is feasible.
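The central idea — turning the distance to the ROI edge into a weight map — can be sketched with SciPy. The exponential weighting is one illustrative choice among the generation methods the paper studies, not necessarily its exact formula:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy binary ROI mask standing in for a left atrium segmentation label.
mask = np.zeros((64, 64), dtype=np.uint8)
mask[20:44, 20:44] = 1

# Distance of every pixel to the ROI edge: combine the distance transform
# of the mask (distance to background) with that of its complement
# (distance to foreground); one of the two is always zero.
dist_to_edge = distance_transform_edt(mask) + distance_transform_edt(1 - mask)

# One possible weight map: weight decays with distance from the edge,
# so pixels near the blurred ROI boundary dominate the segmentation loss.
weight = np.exp(-dist_to_edge / 5.0)
```

During training, such a map would multiply the per-pixel loss so the network concentrates on the ambiguous edge region.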
44
Assari Z, Mahloojifar A, Ahmadinejad N. A bimodal BI-RADS-guided GoogLeNet-based CAD system for solid breast masses discrimination using transfer learning. Comput Biol Med 2021; 142:105160. [PMID: 34995955 DOI: 10.1016/j.compbiomed.2021.105160]
Abstract
Numerous solid breast masses require sophisticated analysis to establish a differential diagnosis. Consequently, complementary modalities such as ultrasound imaging are frequently required to further evaluate mammographically detected masses. Radiologists mentally integrate complementary information from images acquired from the same patient to make a more conclusive and effective diagnosis, but this has always been a challenging task. This paper details a novel bimodal GoogLeNet-based CAD system that addresses the challenges of combining information from mammographic and sonographic images for solid breast mass classification. In the proposed framework, each modality is initially trained using two distinct monomodal models; then, using the high-level feature maps extracted from both modalities, a bimodal model is trained. To fully exploit the BI-RADS descriptors, different image content representations of each mass are obtained and used as input images. In addition, a two-step transfer learning strategy has been proposed using an ImageNet pre-trained GoogLeNet model, two publicly available databases, and our collected dataset. Our bimodal model achieves the best recognition results, with sensitivity, specificity, F1-score, Matthews correlation coefficient, area under the receiver operating characteristic curve, and accuracy of 90.91%, 89.87%, 90.32%, 80.78%, 95.82%, and 90.38%, respectively. These promising results indicate that the proposed CAD system can facilitate bimodal suspicious mass analysis and thus contribute significantly to improving breast cancer diagnostic performance.
Affiliation(s)
- Zahra Assari, Ali Mahloojifar
- Department of Biomedical Engineering, Faculty of Electrical and Computer Engineering, Tarbiat Modares University, Tehran, Iran
- Nasrin Ahmadinejad
- Medical Imaging Center, Cancer Research Institute, Imam Khomeini Hospital Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Tehran University of Medical Sciences (TUMS), Tehran, Iran

45
Meraj T, Alosaimi W, Alouffi B, Rauf HT, Kumar SA, Damaševičius R, Alyami H. A quantization assisted U-Net study with ICA and deep features fusion for breast cancer identification using ultrasonic data. PeerJ Comput Sci 2021; 7:e805. [PMID: 35036531 PMCID: PMC8725669 DOI: 10.7717/peerj-cs.805]
Abstract
Breast cancer is one of the leading causes of death in women worldwide, and its rapid increase has brought about more accessible diagnostic resources. The ultrasonic modality for breast cancer diagnosis is relatively cost-effective and valuable. Lesion isolation in ultrasonic images is a challenging task because of noise and the intensity similarity between lesions and surrounding tissue. Accurate detection of breast lesions in ultrasonic images can reduce death rates. In this research, a quantization-assisted U-Net approach for segmentation of breast lesions is proposed. It comprises two steps: (1) U-Net segmentation and (2) quantization. The quantization assists the U-Net-based segmentation in isolating exact lesion areas from sonography images. The Independent Component Analysis (ICA) method then extracts features from the isolated lesions, which are fused with deep automatic features. Public ultrasonic datasets, the Breast Ultrasound Images Dataset (BUSI) and the Open Access Database of Raw Ultrasonic Signals (OASBUD), are used for evaluation and comparison. The same features were extracted from the OASBUD data; however, classification was performed after feature regularization using the lasso method. The obtained results allow us to propose a computer-aided diagnosis (CAD) system for breast cancer identification using ultrasonic modalities.
Affiliation(s)
- Talha Meraj
- Department of Computer Science, COMSATS University Islamabad-Wah Campus, Wah Cantt, Pakistan
- Wael Alosaimi
- Department of Information Technology, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
- Bader Alouffi
- Department of Computer Science, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
- Hafiz Tayyab Rauf
- Department of Computer Science, Faculty of Engineering & Informatics, University of Bradford, Bradford, United Kingdom
- Swarn Avinash Kumar
- Department of Information Technology, Indian Institute of Information Technology, Uttar Pradesh, Jhalwa, Prayagraj, India
- Hashem Alyami
- Department of Computer Science, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
46
Ma H, Tian R, Li H, Sun H, Lu G, Liu R, Wang Z. Fus2Net: a novel Convolutional Neural Network for classification of benign and malignant breast tumor in ultrasound images. Biomed Eng Online 2021; 20:112. [PMID: 34794443 PMCID: PMC8600702 DOI: 10.1186/s12938-021-00950-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/27/2021] [Accepted: 11/04/2021] [Indexed: 12/03/2022]
Abstract
BACKGROUND The rapid development of artificial intelligence technology has improved the capability of automatic breast cancer diagnosis compared to traditional machine learning methods. A Convolutional Neural Network (CNN) can automatically select high-efficiency features, which helps to raise the level of computer-aided diagnosis (CAD). It can improve the performance of distinguishing benign from malignant breast ultrasound (BUS) tumor images, making rapid breast tumor screening possible. RESULTS The classification model was evaluated on a separate dataset of 100 BUS tumor images (50 benign and 50 malignant cases) that was not used in network training. Evaluation indicators include accuracy, sensitivity, specificity, and area under the curve (AUC). The Fus2Net model achieved an accuracy of 92%, a sensitivity of 95.65%, a specificity of 88.89%, and an AUC of 0.97 for classifying BUS tumor images. CONCLUSIONS The experiment compared existing CNN classification architectures, and the customized Fus2Net architecture offers better overall performance. The results demonstrate that the proposed Fus2Net classification method can better assist radiologists in diagnosing benign and malignant BUS tumor images. METHODS Existing public datasets are small, and the data suffer from class imbalance. In this paper, we provide a relatively larger dataset with a total of 1052 ultrasound images, including 696 benign and 356 malignant images, collected from a local hospital. We propose a novel CNN named Fus2Net for benign/malignant classification of BUS tumor images; it contains two self-designed feature extraction modules. To evaluate how the classifier generalizes on the experimental dataset, we employed the training set (646 benign and 306 malignant cases) for tenfold cross-validation. Meanwhile, to address the dataset imbalance, the training data were augmented before being fed into Fus2Net. In the experiment, we used hyperparameter fine-tuning and regularization techniques to ensure Fus2Net convergence.
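The tenfold protocol on an imbalanced training set (646 benign, 306 malignant) can be sketched with a stratified split; this is an illustrative sketch, not the authors' code:

```python
import numpy as np

def stratified_folds(y: np.ndarray, k: int = 10, seed: int = 0) -> list:
    """Split sample indices into k folds that preserve the class ratio,
    so each validation fold sees roughly the same benign/malignant mix."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(k)]
    for cls in np.unique(y):
        # shuffle this class's indices, then deal them round-robin
        idx = rng.permutation(np.flatnonzero(y == cls))
        for i, j in enumerate(idx):
            folds[i % k].append(j)
    return [np.array(sorted(f)) for f in folds]

# 646 benign (label 0) and 306 malignant (label 1), as in the split above
y = np.array([0] * 646 + [1] * 306)
folds = stratified_folds(y)
```

Each fold then serves once as the validation set while the remaining nine are used for training (with augmentation applied only to the training portion).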
Affiliation(s)
- He Ma
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning 110169, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
- Ronghui Tian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning 110169, China
- Hong Li
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning 110169, China
- Hang Sun
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning 110169, China
- Guoxiu Lu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning 110169, China
- Department of Nuclear Medicine, General Hospital of Northern Theater Command, No. 83 Wenhua Road, Shenhe District, Shenyang, Liaoning 110016, China
- Ruibo Liu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning 110169, China
- Zhiguo Wang
- Department of Nuclear Medicine, General Hospital of Northern Theater Command, No. 83 Wenhua Road, Shenhe District, Shenyang, Liaoning 110016, China
47
Karthik R, Menaka R, Kathiresan G, Anirudh M, Nagharjun M. Gaussian Dropout Based Stacked Ensemble CNN for Classification of Breast Tumor in Ultrasound Images. Ing Rech Biomed 2021. [DOI: 10.1016/j.irbm.2021.10.002] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Indexed: 10/20/2022]
48
Zhang G, Zhao K, Hong Y, Qiu X, Zhang K, Wei B. SHA-MTL: soft and hard attention multi-task learning for automated breast cancer ultrasound image segmentation and classification. Int J Comput Assist Radiol Surg 2021; 16:1719-1725. [PMID: 34254225 DOI: 10.1007/s11548-021-02445-7] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Received: 03/11/2021] [Accepted: 06/28/2021] [Indexed: 02/01/2023]
Abstract
Purpose The automatic analysis of ultrasound images facilitates the diagnosis of breast cancer effectively and objectively. However, owing to the characteristics of ultrasound images, performing this analysis automatically remains challenging. We hypothesize that an algorithm will extract lesion regions and distinguish categories more easily if it is guided to focus on the lesion regions. Method We propose a multi-task learning (SHA-MTL) model based on soft and hard attention mechanisms for simultaneous segmentation and binary classification of breast ultrasound (BUS) images. The SHA-MTL model consists of a dense CNN encoder and an upsampling decoder, connected by attention-gated (AG) units with a soft attention mechanism. Cross-validation experiments are performed on BUS datasets with category and mask labels, and comprehensive analyses are performed for the two tasks. Results We assess the SHA-MTL model on a public BUS image dataset. For the segmentation task, the sensitivity and DICE of the SHA-MTL model on lesion regions increased by 2.27% and 1.19%, respectively, compared with the single-task model. The classification accuracy and F1 score increased by 2.45% and 3.82%, respectively. Conclusion The results validate the effectiveness of our model and indicate that the SHA-MTL model requires less a priori knowledge to achieve better results than other recent models. We therefore conclude that paying more attention to the lesion region of BUS images aids the discrimination of lesion types.
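A soft attention gate of the kind the AG units use can be sketched as follows (weights, shapes, and the elementwise gating form are illustrative assumptions, not the SHA-MTL architecture itself):

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(features: np.ndarray, gating: np.ndarray,
                   w_f: np.ndarray, w_g: np.ndarray) -> np.ndarray:
    """Soft attention: a gating signal (e.g. from the decoder) produces
    a (0, 1) attention map that re-weights encoder features, pushing
    the model to focus on lesion regions."""
    attn = sigmoid(features @ w_f + gating @ w_g)  # attention map in (0, 1)
    return features * attn
```

With zero weights the gate passes half of every feature through, illustrating that the map acts as a learned per-feature scaling rather than a hard mask.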
Affiliation(s)
- Guisheng Zhang
- College of Intelligence and Information Technology, Shandong University of Traditional Chinese Medicine, Jinan, 250355, China
- Center for Medical Artificial Intelligence, Shandong University of Traditional Chinese Medicine, Qingdao, 266112, China
- Kehui Zhao
- The Second Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, 250000, China
- Yanfei Hong
- Center for Medical Artificial Intelligence, Shandong University of Traditional Chinese Medicine, Qingdao, 266112, China
- Xiaoyu Qiu
- The Library, Shandong University of Traditional Chinese Medicine, Jinan, 250355, China
- Kuixing Zhang
- College of Intelligence and Information Technology, Shandong University of Traditional Chinese Medicine, Jinan, 250355, China
- Center for Medical Artificial Intelligence, Shandong University of Traditional Chinese Medicine, Qingdao, 266112, China
- Benzheng Wei
- Center for Medical Artificial Intelligence, Shandong University of Traditional Chinese Medicine, Qingdao, 266112, China
- Qingdao Academy of Chinese Medical Sciences, Shandong University of Traditional Chinese Medicine, Qingdao, 266112, China
49
Irfan R, Almazroi AA, Rauf HT, Damaševičius R, Nasr EA, Abdelgawad AE. Dilated Semantic Segmentation for Breast Ultrasonic Lesion Detection Using Parallel Feature Fusion. Diagnostics (Basel) 2021; 11:1212. [PMID: 34359295 PMCID: PMC8304124 DOI: 10.3390/diagnostics11071212] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Received: 03/25/2021] [Revised: 04/16/2021] [Accepted: 04/27/2021] [Indexed: 12/15/2022]
Abstract
Breast cancer is becoming more dangerous by the day, and the death rate in developing countries is rapidly increasing. Early detection of breast cancer is therefore critical to lowering the death rate. Several researchers have worked on breast cancer segmentation and classification using various imaging modalities. Ultrasonic imaging is one of the most cost-effective imaging techniques, with a higher sensitivity for diagnosis. The proposed study segments ultrasonic breast lesion images using a Dilated Semantic Segmentation Network (Di-CNN) combined with a morphological erosion operation. For feature extraction, we used the deep neural network DenseNet201 with transfer learning. We also propose a 24-layer CNN that uses transfer-learning-based feature extraction to further validate and ensure enriched features with target intensity. To classify the nodules, the feature vectors obtained from DenseNet201 and the 24-layer CNN were combined using parallel fusion. The proposed methods were evaluated using 10-fold cross-validation on various vector combinations. The accuracy of the CNN-activated and DenseNet201-activated feature vectors with a Support Vector Machine (SVM) classifier was 90.11% and 98.45%, respectively. At 98.9% accuracy, the fused feature vector with SVM outperformed the other algorithms. Compared with recent algorithms, the proposed algorithm achieves a better breast cancer diagnosis rate.
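Parallel fusion of the two feature vectors before the SVM amounts to per-sample concatenation; a minimal sketch (the feature dimensions are illustrative, not taken from the paper):

```python
import numpy as np

def parallel_fuse(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """Fuse two per-sample feature matrices by concatenating along the
    feature axis, the simplest form of parallel fusion, before handing
    the result to a classifier such as an SVM."""
    assert feat_a.shape[0] == feat_b.shape[0], "sample counts must match"
    return np.concatenate([feat_a, feat_b], axis=1)

# e.g. DenseNet201 features fused with a custom CNN's features
# (1920 and 256 are hypothetical dimensions for illustration)
fused = parallel_fuse(np.zeros((4, 1920)), np.zeros((4, 256)))
```

The classifier then sees one longer vector per image, so complementary information from both networks contributes to the decision boundary.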
Affiliation(s)
- Rizwana Irfan
- Department of Information Technology, College of Computing and Information Technology at Khulais, University of Jeddah, Jeddah 21959, Saudi Arabia
- Abdulwahab Ali Almazroi
- Department of Information Technology, College of Computing and Information Technology at Khulais, University of Jeddah, Jeddah 21959, Saudi Arabia
- Hafiz Tayyab Rauf
- Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent ST4 2DE, UK
- Robertas Damaševičius
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
- Emad Abouel Nasr
- Industrial Engineering Department, College of Engineering, King Saud University, Riyadh 11421, Saudi Arabia
- Abdelatty E. Abdelgawad
- Industrial Engineering Department, College of Engineering, King Saud University, Riyadh 11421, Saudi Arabia
50
Ramachandran A, Kathavarayan Ramu S. Neural Network Pattern Recognition of Ultrasound Image Gray Scale Intensity Histograms of Breast Lesions to Differentiate Between Benign and Malignant Lesions: Analytical Study. JMIR Biomedical Engineering 2021; 6:e23808. [PMID: 38907375 PMCID: PMC11041429 DOI: 10.2196/23808] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 08/24/2020] [Revised: 03/04/2021] [Accepted: 04/04/2021] [Indexed: 01/23/2023]
Abstract
BACKGROUND The use of ultrasound-based radiomic features to differentiate between benign and malignant breast lesions with machine learning is an active research area. The mean echogenicity ratio has been used for the diagnosis of malignant breast lesions; however, gray scale intensity histogram values as a single radiomic feature for detecting malignant breast lesions with machine learning algorithms have not yet been explored. OBJECTIVE This study aims to assess the utility of a simple convolutional neural network in classifying benign and malignant breast lesions using the gray scale intensity values of the lesion. METHODS An open-access online dataset of 200 ultrasonogram breast lesions was collected, and regions of interest were drawn over the lesions. The gray scale intensity values of the lesions were extracted. An input file containing the values and an output file containing the lesions' diagnoses were created. The convolutional neural network was trained on these files and tested on the whole dataset. RESULTS The trained convolutional neural network had an accuracy of 94.5% and a precision of 94%. The sensitivity and specificity were 94.9% and 94.1%, respectively. CONCLUSIONS Simple neural networks, which are cheap and easy to use, can be applied to diagnose malignant breast lesions from gray scale intensity values obtained from ultrasonogram images in low-resource settings with minimal personnel.
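Extracting the gray scale intensity histogram of a lesion ROI, the single feature type used as network input above, can be sketched as follows (the 256-bin resolution and normalization are assumptions, not details from the paper):

```python
import numpy as np

def intensity_histogram(roi: np.ndarray, bins: int = 256) -> np.ndarray:
    """Normalized gray scale intensity histogram of a lesion region of
    interest: a fixed-length vector suitable as input to a small
    classifier network, regardless of the ROI's size."""
    hist, _ = np.histogram(roi, bins=bins, range=(0, 256))
    return hist / hist.sum()

# tiny illustrative ROI: pixel intensities 0, 10, 10, 255
roi = np.array([[0, 10], [10, 255]], dtype=np.uint8)
h = intensity_histogram(roi)
```

Because the histogram has a fixed length independent of lesion size, ROIs of different shapes all map to comparable input vectors.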
Affiliation(s)
- Aswathi Ramachandran
- Shivabalan Kathavarayan Ramu
- Mahatma Gandhi Medical College and Research Institute, Puducherry, India
- All India Institute of Medical Sciences, New Delhi, India