1. Gómez-Flores W, Gregorio-Calas MJ, Coelho de Albuquerque Pereira W. BUS-BRA: A breast ultrasound dataset for assessing computer-aided diagnosis systems. Med Phys 2024;51:3110-3123. [PMID: 37937827] [DOI: 10.1002/mp.16812]
Abstract
PURPOSE Computer-aided diagnosis (CAD) systems on breast ultrasound (BUS) aim to increase the efficiency and effectiveness of breast screening, helping specialists to detect and classify breast lesions. CAD system development requires a set of annotated images, including lesion segmentation, biopsy results to specify benign and malignant cases, and BI-RADS categories to indicate the likelihood of malignancy. In addition, standardized partitions of training, validation, and test sets promote reproducibility and fair comparisons between different approaches. Thus, we present a publicly available BUS dataset whose novelty lies in its substantially larger number of cases carrying the above-mentioned annotations and in the inclusion of standardized partitions for objectively assessing and comparing CAD systems. ACQUISITION AND VALIDATION METHODS The BUS dataset comprises 1875 anonymized images from 1064 female patients acquired with four ultrasound scanners during systematic studies at the National Institute of Cancer (Rio de Janeiro, Brazil). The dataset includes biopsy-proven tumors divided into 722 benign and 342 malignant cases. A senior ultrasonographer performed a BI-RADS assessment in categories 2 to 5 and manually outlined the breast lesions to obtain ground-truth segmentations. Furthermore, 5- and 10-fold cross-validation partitions are provided to standardize the training and test sets used to evaluate and reproduce CAD systems. Finally, to validate the utility of the BUS dataset, an evaluation framework is implemented to assess the performance of deep neural networks for segmenting and classifying breast lesions. DATA FORMAT AND USAGE NOTES The BUS dataset is publicly available for academic and research purposes through an open-access repository under the name BUS-BRA: A Breast Ultrasound Dataset for Assessing CAD Systems. BUS images and reference segmentations are saved as Portable Network Graphics (PNG) files, and the dataset information is stored in separate Comma-Separated Values (CSV) files. POTENTIAL APPLICATIONS The BUS-BRA dataset can be used to develop and assess artificial intelligence-based lesion detection and segmentation methods, as well as the classification of BUS images into pathological classes and BI-RADS categories. Other potential applications include developing image processing methods such as despeckle filtering and contrast enhancement to improve image quality, and feature engineering for image description.
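As an illustration of how the standardized partitions described above might be consumed, the following Python sketch loads one cross-validation fold from a local copy of the dataset. The directory layout and the CSV column names ("ID", "Pathology", "Fold5") are assumptions made for illustration, not the dataset's documented schema.

```python
# Hypothetical loader for a BUS-BRA-style layout: PNG images and masks plus a
# CSV of case metadata carrying precomputed cross-validation fold assignments.
import csv
from pathlib import Path

from PIL import Image

DATA_DIR = Path("BUS-BRA")  # assumed local copy of the dataset


def load_fold(csv_path: Path, fold: int, split: str = "train"):
    """Yield (image, mask, label) triples for one 5-fold partition."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            in_test = int(row["Fold5"]) == fold  # assumed column convention
            if (split == "test") != in_test:
                continue
            img = Image.open(DATA_DIR / "Images" / f"{row['ID']}.png")
            mask = Image.open(DATA_DIR / "Masks" / f"{row['ID']}.png")
            label = 1 if row["Pathology"].lower() == "malignant" else 0
            yield img, mask, label
```

Reading the folds from the published CSV rather than re-splitting locally is precisely what makes results comparable across CAD systems.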
Affiliation(s)
- Wilfrido Gómez-Flores
- Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional, Tamaulipas, Mexico
2. Zhou G, Mosadegh B. Distilling Knowledge From an Ensemble of Vision Transformers for Improved Classification of Breast Ultrasound. Acad Radiol 2024;31:104-120. [PMID: 37666747] [DOI: 10.1016/j.acra.2023.08.006]
Abstract
RATIONALE AND OBJECTIVES To develop a deep learning model for the automated classification of breast ultrasound images as benign or malignant. More specifically, the application of vision transformers, ensemble learning, and knowledge distillation to breast ultrasound classification is explored. MATERIALS AND METHODS Single-view, B-mode ultrasound images were curated from the publicly available Breast Ultrasound Image (BUSI) dataset, which has categorical ground-truth labels (benign vs. malignant) assigned by radiologists, with malignant cases confirmed by biopsy. The performance of vision transformers (ViTs) is compared to that of convolutional neural networks (CNNs), followed by a comparison between supervised, self-supervised, and randomly initialized ViTs. Subsequently, an ensemble of 10 independently trained ViTs, in which the ensemble output is the unweighted average of the outputs of the individual models, is compared to the performance of each ViT alone. Finally, we train a single ViT to emulate the ensemble using knowledge distillation. RESULTS Under five-fold cross-validation on this dataset, ViTs outperform CNNs, and self-supervised ViTs outperform supervised and randomly initialized ViTs. The ensemble model achieves an area under the receiver operating characteristic curve (AuROC) and area under the precision-recall curve (AuPRC) of 0.977 and 0.965 on the test set, outperforming the average AuROC and AuPRC of the independently trained ViTs (0.958 ± 0.05 and 0.931 ± 0.016). The distilled ViT achieves an AuROC and AuPRC of 0.972 and 0.960. CONCLUSION Transfer learning and ensemble learning each offer increased performance independently and can be combined sequentially to further improve the final model. Furthermore, a single vision transformer can be trained to match the performance of an ensemble of vision transformers using knowledge distillation.
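The two central mechanisms here, unweighted ensemble averaging and knowledge distillation, can be sketched compactly in PyTorch. The temperature, the loss weighting, and the choice to average logits rather than probabilities are assumptions, not the authors' reported settings.

```python
# Hedged sketch: average an ensemble of trained ViTs, then distill it into a
# single student with a tempered KL term plus ordinary cross-entropy.
import torch
import torch.nn.functional as F


def ensemble_logits(models, x):
    """Unweighted average of the member models' outputs."""
    with torch.no_grad():
        return torch.stack([m(x) for m in models]).mean(dim=0)


def distillation_loss(student_logits, teacher_logits, labels, T=3.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients match the hard-label term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```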
Affiliation(s)
- Bobak Mosadegh
- Dalio Institute of Cardiovascular Imaging, Department of Radiology, Weill Cornell Medicine, New York, New York
3. Tasnim J, Hasan MK. CAM-QUS guided self-tuning modular CNNs with multi-loss functions for fully automated breast lesion classification in ultrasound images. Phys Med Biol 2023;69:015018. [PMID: 38056017] [DOI: 10.1088/1361-6560/ad1319]
Abstract
Objective. Breast cancer is the major cause of cancer death among women worldwide. Deep learning-based computer-aided diagnosis (CAD) systems for classifying lesions in breast ultrasound images can support the early detection of breast cancer and enhance survival chances. Approach. This paper presents a completely automated BUS diagnosis system built from modular convolutional neural networks tuned with novel loss functions. The proposed network comprises a dynamic channel input enhancement network, an attention-guided InceptionV3-based feature extraction network, a classification network, and a parallel feature transformation network that maps deep features into the quantitative ultrasound (QUS) feature space. These networks function together to improve classification accuracy by increasing the separation of benign and malignant class-specific features while enriching them simultaneously. Unlike traditional approaches based on the categorical cross-entropy (CCE) loss alone, our method uses two additional novel losses, a class activation mapping (CAM)-based loss and a QUS feature-based loss, to enable the overall network to learn clinically valuable lesion shape- and texture-related properties, focusing primarily on the lesion area, in support of explainable AI (XAI). Main results. Experiments on four public datasets, one private dataset, and a combined breast ultrasound dataset are used to validate our strategy. The suggested technique obtains an accuracy of 97.28%, sensitivity of 93.87%, and F1-score of 95.42% on dataset 1 (BUSI), and an accuracy of 91.50%, sensitivity of 89.38%, and F1-score of 89.31% on the combined dataset, consisting of 1494 images collected from hospitals in five demographic locations using four ultrasound systems from different manufacturers. These results outperform techniques reported in the literature by a considerable margin. Significance. The proposed CAD system provides a diagnosis from the auto-focused lesion area of B-mode BUS images, avoiding any explicit requirement for segmentation or region-of-interest extraction, and thus can be a handy tool for making accurate and reliable diagnoses even in unspecialized healthcare centers.
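A loose sketch of a three-term objective in the spirit of this abstract is shown below: categorical cross-entropy, a CAM term rewarding activation inside the lesion, and a regression term pulling the transformed deep features toward QUS references. The exact formulations and weights are assumptions, not the paper's definitions.

```python
# Illustrative multi-loss objective: CCE + CAM-alignment + QUS-feature terms.
import torch
import torch.nn.functional as F


def combined_loss(logits, labels, cam, lesion_mask, qus_pred, qus_ref,
                  w_cam=0.1, w_qus=0.1):
    ce = F.cross_entropy(logits, labels)
    # Normalize each CAM and reward the activation mass inside the lesion mask.
    cam = cam / (cam.sum(dim=(1, 2), keepdim=True) + 1e-8)
    cam_loss = 1.0 - (cam * lesion_mask).sum(dim=(1, 2)).mean()
    # Pull the parallel network's output toward reference QUS features.
    qus_loss = F.mse_loss(qus_pred, qus_ref)
    return ce + w_cam * cam_loss + w_qus * qus_loss
```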
Affiliation(s)
- Jarin Tasnim
- Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka 1205, Bangladesh
- Md Kamrul Hasan
- Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka 1205, Bangladesh
4. Chowa SS, Azam S, Montaha S, Payel IJ, Bhuiyan MRI, Hasan MZ, Jonkman M. Graph neural network-based breast cancer diagnosis using ultrasound images with optimized graph construction integrating the medically significant features. J Cancer Res Clin Oncol 2023;149:18039-18064. [PMID: 37982829] [PMCID: PMC10725367] [DOI: 10.1007/s00432-023-05464-w]
Abstract
PURPOSE An automated computerized approach can aid radiologists in the early diagnosis of breast cancer. In this study, a novel method is proposed for classifying breast tumors as benign or malignant from ultrasound images, using a Graph Neural Network (GNN) model that utilizes clinically significant features. METHOD Ten informative features are extracted from the region of interest (ROI), based on the radiologists' diagnostic markers. The significance of the features is evaluated using density plots and the t-test. A feature table is generated in which each row represents an individual image, treated as a node; edges between nodes are established by calculating the Spearman correlation coefficient. A graph dataset is generated and fed into the GNN model. The model is configured through an ablation study and Bayesian optimization. The optimized model is then evaluated with different correlation thresholds to obtain the highest performance with a sparser graph. Performance consistency is validated with k-fold cross-validation. The impact of utilizing ROIs and handcrafted features for breast tumor classification is evaluated by comparing the model's performance with that obtained from Histogram of Oriented Gradients (HOG) descriptor features computed over the entire ultrasound image. Lastly, a clustering-based analysis is performed to generate a new filtered graph that considers weak and strong relationships between nodes, based on their similarities. RESULTS The results indicate that with a threshold value of 0.95, the GNN model achieves the highest test accuracy of 99.48%, precision and recall of 100%, and F1 score of 99.28%, reducing the number of edges by 85.5%. The GNN model's performance is 86.91% when no threshold is applied to the graph generated from HOG descriptor features. Different threshold values for the Spearman correlation score are tested and the performance compared. No significant differences are observed between the previous graph and the filtered graph. CONCLUSION The proposed approach may aid radiologists in effectively diagnosing breast cancer and learning its tumor patterns.
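The graph-construction step lends itself to a short sketch: each image is a node carrying its ten features, and an edge survives only when the Spearman correlation between two nodes' feature vectors clears the threshold. The 0.95 threshold comes from the abstract; the rest is an assumed minimal implementation, not the authors' code.

```python
# Build an edge list from an (n_images, n_features) table via Spearman correlation.
import numpy as np
from scipy.stats import spearmanr


def build_edges(features: np.ndarray, threshold: float = 0.95):
    # spearmanr treats columns as variables, so transpose to correlate images.
    corr, _ = spearmanr(features.T)
    corr = np.atleast_2d(corr)
    # Upper triangle only: no self-loops or duplicate undirected edges.
    i, j = np.where(np.triu(corr, k=1) >= threshold)
    return list(zip(i.tolist(), j.tolist()))
```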
Affiliation(s)
- Sadia Sultana Chowa
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Sami Azam
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Sidratul Montaha
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Israt Jahan Payel
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1216, Bangladesh
- Md Rahad Islam Bhuiyan
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Md Zahid Hasan
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1216, Bangladesh
- Mirjam Jonkman
- Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
5. Kondo S, Satoh M, Nishida M, Sakano R, Takagi K. Ceusia-Breast: computer-aided diagnosis with contrast enhanced ultrasound image analysis for breast lesions. BMC Med Imaging 2023;23:114. [PMID: 37644398] [PMCID: PMC10466705] [DOI: 10.1186/s12880-023-01072-9]
Abstract
BACKGROUND In recent years, contrast-enhanced ultrasonography (CEUS) has been used for various applications in breast diagnosis. The superiority of CEUS over conventional B-mode imaging in the ultrasound diagnosis of breast lesions has been widely confirmed in clinical practice. On the other hand, there have been many proposals for computer-aided diagnosis of breast lesions on B-mode ultrasound images, but few for CEUS. We propose a semi-automatic, machine learning-based classification method for CEUS of breast lesions. METHODS The proposed method extracts spatial and temporal features from CEUS videos and classifies breast tumors as benign or malignant using linear support vector machines (SVM) with a combination of selected optimal features. Tumor regions are extracted using guidance information specified by the examiners, after which morphological and texture features of the tumor regions are obtained from the B-mode and CEUS images, together with time-intensity curve (TIC) features from the CEUS video. SVM classifiers then classify the breast tumors as benign or malignant. During SVM training, many features are prepared and the useful ones are selected. We name our proposed method "Ceusia-Breast" (Contrast Enhanced UltraSound Image Analysis for BREAST lesions). RESULTS The experimental results on 119 subjects show that the area under the receiver operating characteristic curve, accuracy, precision, and recall are 0.893, 0.816, 0.841, and 0.920, respectively. The classification performance of our method is better than that of conventional methods using only B-mode images. In addition, we confirm that the selected features are consistent with the CEUS guidelines for breast tumor diagnosis. Furthermore, in an experiment on the operator dependency of the guidance information, the intra-operator and inter-operator kappa coefficients are 1.0 and 0.798, respectively. CONCLUSION The experimental results show a significant improvement in classification performance compared to conventional methods using only B-mode images. We also confirm that the selected features relate to findings considered important in clinical practice. Furthermore, we verify the intra- and inter-examiner correlation of the guidance input for region extraction and confirm that both are in strong agreement.
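The classification stage maps onto a standard scikit-learn pattern: feature selection wrapped around a linear SVM, evaluated with cross-validation. The ANOVA F-score criterion and the value of k below are stand-ins; the paper performs its own optimal feature selection during SVM training.

```python
# Stand-in for the linear-SVM-with-selected-features stage, cross-validated on AUC.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def evaluate(features: np.ndarray, labels: np.ndarray, k: int = 20):
    clf = make_pipeline(
        StandardScaler(),
        SelectKBest(f_classif, k=k),  # keep the k most informative features
        SVC(kernel="linear"),
    )
    return cross_val_score(clf, features, labels, cv=5, scoring="roc_auc")
```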
6. GadAllah MT, Mohamed AENA, Hefnawy AA, Zidan HE, El-Banby GM, Mohamed Badawy S. Convolutional Neural Networks Based Classification of Segmented Breast Ultrasound Images – A Comparative Preliminary Study. 2023 Intelligent Methods, Systems, and Applications (IMSA) 2023. [DOI: 10.1109/imsa58542.2023.10217585]
Affiliation(s)
- Abd El-Naser A. Mohamed
- Menoufia University, Faculty of Electronic Engineering, Electronics and Electrical Communications Engineering Department, Menoufia, Egypt
- Alaa A. Hefnawy
- Electronics Research Institute (ERI), Computers and Systems Department, Cairo, Egypt
- Hassan E. Zidan
- Electronics Research Institute (ERI), Computers and Systems Department, Cairo, Egypt
- Ghada M. El-Banby
- Menoufia University, Faculty of Electronic Engineering, Industrial Electronics and Control Engineering Department, Menoufia, Egypt
- Samir Mohamed Badawy
- Menoufia University, Faculty of Electronic Engineering, Industrial Electronics and Control Engineering Department, Menoufia, Egypt
7. Cruz-Ramos C, García-Avila O, Almaraz-Damian JA, Ponomaryov V, Reyes-Reyes R, Sadovnychiy S. Benign and Malignant Breast Tumor Classification in Ultrasound and Mammography Images via Fusion of Deep Learning and Handcraft Features. Entropy (Basel) 2023;25:991. [PMID: 37509938] [PMCID: PMC10378567] [DOI: 10.3390/e25070991]
Abstract
Breast cancer is a disease that affects women in countries around the world. Its underlying cause is particularly challenging to determine, and early detection is necessary for reducing the death rate, given the high risks associated with the disease. Treatment in the early period can increase life expectancy and quality of life for women. Computer-aided diagnostic (CAD) systems can diagnose benign and malignant breast cancer lesions using image processing technologies and tools, helping specialist doctors obtain a more precise view with fewer steps by giving a second opinion. This study presents a novel CAD system for automated breast cancer diagnosis. The proposed method consists of several stages. In the preprocessing stage, an image is segmented and a mask of the lesion is obtained; in the next stage, deep learning features are extracted by a CNN, specifically DenseNet-201. Additionally, handcrafted features (Histogram of Oriented Gradients (HOG)-based, ULBP-based, perimeter area, area, eccentricity, and circularity) are obtained from the image. The designed hybrid system uses the CNN architecture to extract deep learning features alongside traditional methods that compute several handcrafted features following the medical properties of the disease, with the purpose of later fusion via the proposed statistical criteria. During the fusion stage, where deep learning and handcrafted features are analyzed, genetic algorithms as well as a mutual information selection algorithm, followed by several classifiers (XGBoost, AdaBoost, multilayer perceptron (MLP)) based on stochastic measures, are applied to choose the most informative group of features. The experimental validation covered two modalities of the CAD design, corresponding to two types of medical studies, mammography (MG) and ultrasound (US), using the mini-DDSM (Digital Database for Screening Mammography) and BUSI (Breast Ultrasound Images Dataset) databases. The novel CAD systems were evaluated and compared with recent state-of-the-art systems, demonstrating better performance on commonly used criteria, obtaining an ACC of 97.6%, PRE of 98%, Recall of 98%, F1-Score of 98%, and IBA of 95% on the abovementioned datasets.
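One of the fusion-and-selection paths described above, mutual information ranking over concatenated deep and handcrafted features followed by one of the named classifiers, might look like the sketch below. The percentile and AdaBoost settings are assumptions, and the genetic-algorithm branch is omitted.

```python
# Fuse deep and handcrafted feature vectors, select by mutual information,
# and classify with AdaBoost (one of the three classifiers named above).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import SelectPercentile, mutual_info_classif
from sklearn.pipeline import make_pipeline


def fused_classifier(deep_feats, handcrafted_feats, labels, percentile=30):
    X = np.concatenate([deep_feats, handcrafted_feats], axis=1)
    clf = make_pipeline(
        SelectPercentile(mutual_info_classif, percentile=percentile),
        AdaBoostClassifier(n_estimators=200),
    )
    return clf.fit(X, labels)
```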
Affiliation(s)
- Clara Cruz-Ramos
- Escuela Superior de Ingenieria Mecanica y Electrica-Culhuacan, Instituto Politecnico Nacional, Santa Ana Ave. # 1000, Mexico City 04430, Mexico
- Oscar García-Avila
- Escuela Superior de Ingenieria Mecanica y Electrica-Culhuacan, Instituto Politecnico Nacional, Santa Ana Ave. # 1000, Mexico City 04430, Mexico
- Jose-Agustin Almaraz-Damian
- Escuela Superior de Ingenieria Mecanica y Electrica-Culhuacan, Instituto Politecnico Nacional, Santa Ana Ave. # 1000, Mexico City 04430, Mexico
- Volodymyr Ponomaryov
- Escuela Superior de Ingenieria Mecanica y Electrica-Culhuacan, Instituto Politecnico Nacional, Santa Ana Ave. # 1000, Mexico City 04430, Mexico
- Rogelio Reyes-Reyes
- Escuela Superior de Ingenieria Mecanica y Electrica-Culhuacan, Instituto Politecnico Nacional, Santa Ana Ave. # 1000, Mexico City 04430, Mexico
- Sergiy Sadovnychiy
- Instituto Mexicano del Petroleo, Lazaro Cardenas Ave. # 152, Mexico City 07730, Mexico
8. Chen H, Ma M, Liu G, Wang Y, Jin Z, Liu C. Breast Tumor Classification in Ultrasound Images by Fusion of Deep Convolutional Neural Network and Shallow LBP Feature. J Digit Imaging 2023;36:932-946. [PMID: 36720840] [PMCID: PMC10287618] [DOI: 10.1007/s10278-022-00711-x]
Abstract
Breast cancer is one of the most dangerous and common cancers in women, making it a major research topic in medical science. To assist physicians in pre-screening for breast cancer and reduce unnecessary biopsies, breast ultrasound and computer-aided diagnosis (CAD) have been used to distinguish between benign and malignant tumors. In this study, we propose a CAD system for tumor diagnosis using a multi-channel fusion method and a feature extraction structure based on multi-feature fusion of breast ultrasound (BUS) images. In the preprocessing stage, the multi-channel fusion method performs the color conversion of the BUS image so that it carries richer information. In the feature extraction stage, the pre-trained ResNet50 network is selected as the backbone, three levels of features are combined via adaptive spatial feature fusion (ASFF), and, finally, shallow local binary pattern (LBP) texture features are fused in. A support vector machine (SVM) was used for comparative analysis. A retrospective analysis was carried out, and 1615 breast tumor images (572 benign and 1043 malignant) confirmed by pathological examination were collected. After data processing and augmentation, on an independent test set of 874 breast ultrasound images (457 benign and 417 malignant), the accuracy, precision, recall, specificity, F1 score, and AUC of our method were 96.91%, 98.75%, 94.72%, 98.91%, 0.97, and 0.991, respectively. The results show that integrating shallow LBP texture features with multi-level deep features can more effectively improve the comprehensive performance of breast tumor diagnosis and has strong clinical application value. Compared with past methods, our proposed method is expected to enable automatic diagnosis of breast tumors and provide an auxiliary tool for radiologists to accurately diagnose breast diseases.
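The shallow branch of such a fusion scheme is easy to sketch: a uniform LBP histogram computed from the grayscale image, concatenated with a deep feature vector before the classifier. The P, R, and bin choices below follow common LBP practice rather than the paper's reported values.

```python
# Uniform LBP histogram (shallow texture branch) fused with a deep feature vector.
import numpy as np
from skimage.feature import local_binary_pattern


def lbp_histogram(gray: np.ndarray, P: int = 8, R: float = 1.0) -> np.ndarray:
    codes = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2  # P+1 uniform patterns plus one bin for the rest
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist


def fuse(deep_vec: np.ndarray, gray: np.ndarray) -> np.ndarray:
    return np.concatenate([deep_vec, lbp_histogram(gray)])
```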
Affiliation(s)
- Hua Chen
- School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
- Minglun Ma
- School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
- Gang Liu
- School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
- Ying Wang
- The Second Hospital of Hebei Medical University, Shijiazhuang, 050000, China
- Zhihao Jin
- School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
- Chong Liu
- School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
9. Song M, Kim Y. Unsupervised learning method via triple reconstruction for the classification of ultrasound breast lesions. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103782]
10. Gu Y, Xu W, Lin B, An X, Tian J, Ran H, Ren W, Chang C, Yuan J, Kang C, Deng Y, Wang H, Luo B, Guo S, Zhou Q, Xue E, Zhan W, Zhou Q, Li J, Zhou P, Chen M, Gu Y, Chen W, Zhang Y, Li J, Cong L, Zhu L, Wang H, Jiang Y. Deep learning based on ultrasound images assists breast lesion diagnosis in China: a multicenter diagnostic study. Insights Imaging 2022;13:124. [PMID: 35900608] [PMCID: PMC9334487] [DOI: 10.1186/s13244-022-01259-8]
Abstract
BACKGROUND Studies on deep learning (DL)-based models in breast ultrasound (US) remain at an early stage due to a lack of large datasets for training and independent test sets for verification. We aimed to develop a DL model for differentiating benign from malignant breast lesions on US using a large multicenter dataset and to explore the model's ability to assist radiologists. METHODS A total of 14,043 US images from 5012 women were prospectively collected from 32 hospitals. To develop the DL model, the patients from 30 hospitals were randomly divided into a training cohort (n = 4149) and an internal test cohort (n = 466). The remaining 2 hospitals (n = 397) served as the external test cohorts (ETC). We compared the model with the prospective Breast Imaging Reporting and Data System assessment and with five radiologists. We also explored the model's ability to assist the radiologists using two different methods. RESULTS The model demonstrated excellent diagnostic performance on the ETC, with a high area under the receiver operating characteristic curve (AUC, 0.913), sensitivity (88.84%), specificity (83.77%), and accuracy (86.40%). In the comparison set, the AUC was similar to that of the expert (p = 0.5629) and one experienced radiologist (p = 0.2112) and significantly higher than that of three inexperienced radiologists (p < 0.01). After model assistance, the accuracies and specificities of the radiologists improved substantially without loss of sensitivity. CONCLUSIONS The DL model yielded satisfactory predictions in distinguishing benign from malignant breast lesions and showed potential value in improving radiologists' diagnosis of breast lesions.
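For reference, the four figures reported for the external test cohorts (AUC, sensitivity, specificity, accuracy) can be reproduced from predicted scores and labels with a few lines of scikit-learn; the 0.5 operating threshold below is an assumption, not the study's chosen operating point.

```python
# Compute the reported metric set from predicted scores and ground-truth labels.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score


def summarize(y_true, y_score, thr=0.5):
    y_pred = (np.asarray(y_score) >= thr).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "auc": roc_auc_score(y_true, y_score),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }
```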
Affiliation(s)
- Yang Gu
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
- Wen Xu
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
- Bin Lin
- Department of Medical Imaging Advanced Research, Beijing Research Institute, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Beijing, China
- Xing An
- Department of Medical Imaging Advanced Research, Beijing Research Institute, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Beijing, China
- Jiawei Tian
- Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, China
- Haitao Ran
- Department of Ultrasound, The Second Affiliated Hospital of Chongqing Medical University and Chongqing Key Laboratory of Ultrasound Molecular Imaging, Chongqing, China
- Weidong Ren
- Department of Ultrasound, Shengjing Hospital of China Medical University, Shenyang, China
- Cai Chang
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center, Shanghai, China
- Jianjun Yuan
- Department of Ultrasonography, Henan Provincial People's Hospital, Zhengzhou, China
- Chunsong Kang
- Department of Ultrasound, Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Taiyuan, China
- Youbin Deng
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College of Huazhong University of Science and Technology, Wuhan, China
- Hui Wang
- Department of Ultrasound, China-Japan Union Hospital of Jilin University, Changchun, China
- Baoming Luo
- Department of Ultrasound, The Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, China
- Shenglan Guo
- Department of Ultrasonography, First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Qi Zhou
- Department of Medical Ultrasound, The Second Affiliated Hospital, School of Medicine, Xi'an Jiaotong University, Xi'an, China
- Ensheng Xue
- Department of Ultrasound, Union Hospital of Fujian Medical University, Fujian Institute of Ultrasound Medicine, Fuzhou, China
- Weiwei Zhan
- Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University, School of Medicine, Shanghai, China
- Qing Zhou
- Department of Ultrasonography, Renmin Hospital of Wuhan University, Wuhan, China
- Jie Li
- Department of Ultrasound, Qilu Hospital, Shandong University, Jinan, 250012, China
- Ping Zhou
- Department of Ultrasound, The Third Xiangya Hospital of Central South University, Changsha, China
- Man Chen
- Department of Ultrasound Medicine, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ying Gu
- Department of Ultrasonography, The Affiliated Hospital of Guizhou Medical University, Guiyang, China
- Wu Chen
- Department of Ultrasound, The First Hospital of Shanxi Medical University, Taiyuan, China
- Yuhong Zhang
- Department of Ultrasound, The Second Hospital of Dalian Medical University, Dalian, China
- Jianchu Li
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
- Longfei Cong
- Department of Medical Imaging Advanced Research, Beijing Research Institute, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Beijing, China
- Lei Zhu
- Department of Medical Imaging Advanced Research, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Shenzhen, China
- Hongyan Wang
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
- Yuxin Jiang
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
11. Bio-Imaging-Based Machine Learning Algorithm for Breast Cancer Detection. Diagnostics (Basel) 2022;12:1134. [PMID: 35626290] [PMCID: PMC9140096] [DOI: 10.3390/diagnostics12051134]
Abstract
Breast cancer is one of the most widespread diseases in women worldwide. It leads to the second-highest mortality rate in women, especially in European countries. It occurs when malignant lumps that are cancerous start to grow in the breast cells. Accurate and early diagnosis can help increase survival rates against this disease. A computer-aided detection (CAD) system is necessary for radiologists to differentiate between normal and abnormal cell growth. This research consists of two parts: the first part is a brief overview of the different image modalities, such as ultrasound, histography, and mammography, drawing on a wide range of research databases to access various publications. The second part evaluates different machine learning techniques used to estimate breast cancer recurrence rates. The first step is preprocessing, including eliminating missing values, removing data noise, and applying transformations. The dataset is divided as follows: 60% is used for training and the remaining 40% for testing. We focus on minimizing Type I errors (false-positive rate, FPR) and Type II errors (false-negative rate, FNR) to improve accuracy and sensitivity. Our proposed model uses machine learning techniques such as the support vector machine (SVM), logistic regression (LR), and K-nearest neighbor (KNN) to achieve better accuracy in breast cancer classification. Furthermore, we attain the highest accuracy of 97.7% with 0.01 FPR, 0.03 FNR, and an area under the ROC curve (AUC) score of 0.99. The results show that our proposed model successfully classifies breast tumors while overcoming previous research limitations. Finally, we summarize the paper with the future trends and challenges of classification and segmentation in breast cancer detection.
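The evaluation protocol, a 60/40 split with SVM, LR, and KNN compared on accuracy, reduces to a short scikit-learn sketch. Default hyperparameters and plain scaling stand in for the paper's full preprocessing pipeline.

```python
# Compare the three named classifiers under a stratified 60/40 train/test split.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def compare(X, y, seed=0):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.4, stratify=y, random_state=seed
    )
    models = {
        "SVM": SVC(),
        "LR": LogisticRegression(max_iter=1000),
        "KNN": KNeighborsClassifier(),
    }
    return {
        name: make_pipeline(StandardScaler(), m).fit(X_tr, y_tr).score(X_te, y_te)
        for name, m in models.items()
    }
```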
12. Pi Y, Yang P, Wei J, Zhao Z, Cai H, Yi Z. Fusing deep and handcrafted features for intelligent recognition of uptake patterns on thyroid scintigraphy. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2021.107531]
13. Garg V, Sahoo A, Saxena V. A cognitive approach to endometrial tuberculosis identification using hierarchical deep fusion method. Soft Comput 2021. [DOI: 10.1007/s00500-021-06474-x]
14. Method for Diagnosis of Acute Lymphoblastic Leukemia Based on ViT-CNN Ensemble Model. Comput Intell Neurosci 2021;2021:7529893. [PMID: 34471407] [PMCID: PMC8405335] [DOI: 10.1155/2021/7529893]
Abstract
Acute lymphoblastic leukemia (ALL) is a deadly cancer that not only affects adults but also accounts for about 25% of childhood cancers. Timely and accurate diagnosis is an important premise for effective treatment and improved survival. Since leukemic B-lymphoblast cells (cancer cells) under the microscope are morphologically very similar to normal B-lymphoid precursors (normal cells), it is difficult to distinguish between the two. We therefore propose a ViT-CNN ensemble model to classify cancer-cell and normal-cell images and assist in the diagnosis of acute lymphoblastic leukemia. The ViT-CNN ensemble model combines a vision transformer model and a convolutional neural network (CNN) model. The vision transformer is an image classification model based entirely on the transformer structure, whose feature extraction differs completely from that of the CNN model, so the ensemble can extract features from cell images in two entirely different ways to achieve better classification results. In addition, because the dataset used in this article is unbalanced and contains a certain amount of noise, we propose a difference enhancement-random sampling (DERS) data augmentation method, create a new balanced dataset, and use the symmetric cross-entropy loss function to reduce the impact of the noise. The classification accuracy of the ViT-CNN ensemble model on the test set reached 99.03%, and experimental comparison shows that it outperforms other models. The proposed method can accurately distinguish between cancer cells and normal cells and can serve as an effective method for the computer-aided diagnosis of acute lymphoblastic leukemia.
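The noise-robust loss mentioned above is sketched below in PyTorch: the usual cross-entropy plus a reverse term with a clamped log of the one-hot target. The alpha/beta weights and the clamp value follow the common symmetric cross-entropy recipe, not necessarily this paper's settings.

```python
# Symmetric cross-entropy: CE plus a reverse CE term that tolerates label noise.
import torch
import torch.nn.functional as F


def symmetric_cross_entropy(logits, labels, num_classes, alpha=0.1, beta=1.0):
    ce = F.cross_entropy(logits, labels)
    pred = F.softmax(logits, dim=1).clamp(min=1e-7)
    one_hot = F.one_hot(labels, num_classes).float().clamp(min=1e-4)
    rce = (-pred * one_hot.log()).sum(dim=1).mean()  # reverse cross-entropy
    return alpha * ce + beta * rce
```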
15. Rehman KU, Li J, Pei Y, Yasin A, Ali S, Mahmood T. Computer Vision-Based Microcalcification Detection in Digital Mammograms Using Fully Connected Depthwise Separable Convolutional Neural Network. Sensors (Basel) 2021;21:4854. [PMID: 34300597] [PMCID: PMC8309805] [DOI: 10.3390/s21144854]
Abstract
Microcalcification clusters in mammograms are one of the major signs of breast cancer. However, detecting microcalcifications in mammograms is a challenging task for radiologists due to their tiny size and scattered locations within dense breast composition. Automatic CAD systems need to predict breast cancer at an early stage to support clinical work. The intercluster gap, noise between individual MCs, and the location of individual objects can affect classification performance and may reduce the true-positive rate. In this study, we propose a computer-vision-based FC-DSCNN CAD system for detecting microcalcification clusters in mammograms and classifying them into malignant and benign classes. The computer vision method automatically controls the noise and background color contrast and detects the MC object directly from the mammogram, which increases the classification performance of the neural network. The breast cancer classification framework has four steps: image preprocessing and augmentation, RGB-to-grayscale channel transformation, microcalcification region segmentation, and MC ROI classification using the FC-DSCNN to predict malignant and benign cases. The proposed method was evaluated on 3568 DDSM and 2885 PINUM mammogram images with automatic feature extraction, obtaining a true-positive ratio of 0.97 at 2.35 false positives per image and 0.99 at 2.45 false positives per image, respectively. Experimental results demonstrate that the performance of the proposed method exceeds that of traditional and previous approaches.
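The building block implied by the architecture's name, a depthwise separable convolution, is sketched below in PyTorch: a per-channel 3x3 convolution followed by a 1x1 pointwise convolution. Channel widths and the BN/ReLU arrangement are assumptions, not the FC-DSCNN's published configuration.

```python
# Depthwise separable convolution block: depthwise 3x3 then pointwise 1x1.
import torch.nn as nn


def depthwise_separable(in_ch: int, out_ch: int, stride: int = 1) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1,
                  groups=in_ch, bias=False),      # depthwise: one filter per channel
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),  # pointwise: mix channels
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )
```

Against a standard 3x3 convolution, this factorization cuts parameters and multiply-adds roughly by a factor of the kernel area, which is the usual motivation for using it in mammogram-scale networks.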
Affiliation(s)
- Khalil ur Rehman
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China
- Jianqiang Li
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China
- Beijing Engineering Research Center for IoT Software and Systems, Beijing 100124, China
- Yan Pei
- Computer Science Division, University of Aizu, Aizuwakamatsu, Fukushima 965-8580, Japan
- Anaa Yasin
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China
- Saqib Ali
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China
- Tariq Mahmood
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China
- Division of Science and Technology, University of Education, Lahore 54000, Pakistan
16. Badawy SM, Mohamed AENA, Hefnawy AA, Zidan HE, GadAllah MT, El-Banby GM. Classification of Breast Ultrasound Images Based on Convolutional Neural Networks - A Comparative Study. 2021 International Telecommunications Conference (ITC-Egypt) 2021. [DOI: 10.1109/itc-egypt52936.2021.9513972]