1
Xu P, Zhao J, Wan M, Song Q, Su Q, Wang D. Classification of multi-feature fusion ultrasound images of breast tumor within category 4 using convolutional neural networks. Med Phys 2024; 51:4243-4257. PMID: 38436433. DOI: 10.1002/mp.16946.
Abstract
BACKGROUND Breast tumors pose a serious threat to women's health. Ultrasound (US) is a common and economical method for the diagnosis of breast cancer. Breast Imaging Reporting and Data System (BI-RADS) category 4 has the highest false-positive rate of the five categories, at about 30%. The classification task within BI-RADS category 4 is challenging and has not been fully studied. PURPOSE This work aimed to use convolutional neural networks (CNNs) to classify breast tumors from B-mode images within category 4, overcoming the dependence on the operator and on imaging artifacts. Additionally, it takes full advantage of the morphological and textural features in breast tumor US images to improve classification accuracy. METHODS First, original US images obtained directly from the hospital were cropped and resized. Of 1385 B-mode US BI-RADS category 4 images, biopsy confirmed 503 as benign and 882 as malignant. Then, a K-means clustering algorithm and sliding-window entropy calculation were applied to the US images. Because the original B-mode images, K-means clustering images, and entropy images capture complementary characteristics of malignant and benign tumors, the three were fused as a three-channel multi-feature fusion image dataset. The training, validation, and test sets contained 969, 277, and 139 images, respectively. With transfer learning, 11 CNN models including DenseNet and ResNet were investigated. Finally, the models with better performance were selected by comparing accuracy, precision, recall, F1-score, and area under the curve (AUC). The normality of the data was assessed by the Shapiro-Wilk test. The DeLong test and independent t-test were used to evaluate the significance of differences in AUC and the other metrics. The false discovery rate was used to confirm the advantage of the CNN with the highest evaluation metrics. In addition, anti-log compression was studied but showed no improvement in CNN classification results. RESULTS With multi-feature fusion images, DenseNet121 achieved the highest accuracy among the CNNs, 80.22 ± 1.45%, with a precision of 77.97 ± 2.89% and an AUC of 0.82 ± 0.01. Multi-feature fusion improved the accuracy of DenseNet121 by 1.87% over classification of the original B-mode images (p < 0.05). CONCLUSION CNNs with multi-feature fusion show good potential for reducing the false-positive rate within US BI-RADS category 4, making the diagnosis of category 4 breast tumors more accurate and precise.
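As a rough illustration of the fusion step described in this abstract (not the authors' exact pipeline), the Python sketch below builds one three-channel image from a B-mode frame; the cluster count, window radius, and normalization choices are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

def fuse_features(bmode: np.ndarray, k: int = 4, win_radius: int = 5) -> np.ndarray:
    """bmode: 2-D grayscale US image in [0, 1]. Returns an H x W x 3 fusion image."""
    ch1 = bmode                                       # channel 1: original B-mode image
    # Channel 2: K-means clustering of pixel intensities (k is an assumed value).
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
        bmode.reshape(-1, 1))
    ch2 = labels.reshape(bmode.shape) / (k - 1)       # cluster map normalized to [0, 1]
    # Channel 3: Shannon entropy over a sliding window (radius is an assumed value).
    ch3 = entropy(img_as_ubyte(bmode), disk(win_radius))
    ch3 = ch3 / max(ch3.max(), 1e-7)
    return np.stack([ch1, ch2, ch3], axis=-1)
```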
Affiliation(s)
- Pengfei Xu
- Department of Biomedical Engineering, Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, China
- Jing Zhao
- The Second Hospital of Jilin University, Changchun, China
- Mingxi Wan
- Department of Biomedical Engineering, Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, China
- Qing Song
- The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China
- Qiang Su
- Department of Oncology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Diya Wang
- Department of Biomedical Engineering, Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, China
2
Gómez-Flores W, Gregorio-Calas MJ, Coelho de Albuquerque Pereira W. BUS-BRA: A breast ultrasound dataset for assessing computer-aided diagnosis systems. Med Phys 2024; 51:3110-3123. PMID: 37937827. DOI: 10.1002/mp.16812.
Abstract
PURPOSE Computer-aided diagnosis (CAD) systems for breast ultrasound (BUS) aim to increase the efficiency and effectiveness of breast screening, helping specialists detect and classify breast lesions. CAD system development requires a set of annotated images, including lesion segmentations, biopsy results specifying benign and malignant cases, and BI-RADS categories indicating the likelihood of malignancy. In addition, standardized partitions of training, validation, and test sets promote reproducibility and fair comparisons between different approaches. We therefore present a publicly available BUS dataset whose novelty lies in the substantially larger number of cases with the above annotations and in the inclusion of standardized partitions for objectively assessing and comparing CAD systems. ACQUISITION AND VALIDATION METHODS The BUS dataset comprises 1875 anonymized images from 1064 female patients acquired with four ultrasound scanners during systematic studies at the National Institute of Cancer (Rio de Janeiro, Brazil). The dataset includes biopsy-proven tumors divided into 722 benign and 342 malignant cases. A senior ultrasonographer performed a BI-RADS assessment in categories 2 to 5 and manually outlined the breast lesions to obtain ground-truth segmentations. Furthermore, 5- and 10-fold cross-validation partitions are provided to standardize the training and test sets for evaluating and reproducing CAD systems. Finally, to validate the utility of the BUS dataset, an evaluation framework is implemented to assess the performance of deep neural networks for segmenting and classifying breast lesions. DATA FORMAT AND USAGE NOTES The BUS dataset is publicly available for academic and research purposes through an open-access repository under the name BUS-BRA: A Breast Ultrasound Dataset for Assessing CAD Systems. BUS images and reference segmentations are saved as Portable Network Graphics (PNG) files, and the dataset information is stored in separate comma-separated values (CSV) files. POTENTIAL APPLICATIONS The BUS-BRA dataset can be used to develop and assess artificial-intelligence-based lesion detection and segmentation methods, and the classification of BUS images into pathological classes and BI-RADS categories. Other potential applications include developing image processing methods such as despeckle filtering and contrast enhancement to improve image quality, and feature engineering for image description.
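A hypothetical loading loop for the PNG/CSV layout and 5-fold partition described above; the file name bus_data.csv, the folder names, and the columns ID, Pathology, and Fold are illustrative assumptions, so check the repository's actual schema before use.

```python
import pandas as pd
from PIL import Image

df = pd.read_csv("bus_data.csv")                       # per-image metadata (assumed name)
for fold in range(5):                                  # the standardized 5-fold partition
    train = df[df["Fold"] != fold]                     # assumed fold-assignment column
    test = df[df["Fold"] == fold]
    for _, row in test.iterrows():
        image = Image.open(f"Images/{row['ID']}.png")  # BUS image
        mask = Image.open(f"Masks/{row['ID']}.png")    # reference segmentation
        label = row["Pathology"]                       # benign / malignant
```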
Affiliation(s)
- Wilfrido Gómez-Flores
- Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional, Tamaulipas, Mexico
3
Zhou J, Hou Z, Lu H, Wang W, Zhao W, Wang Z, Zheng D, Wang S, Tang W, Qu X. A deep supervised transformer U-shaped full-resolution residual network for the segmentation of breast ultrasound image. Med Phys 2023; 50:7513-7524. PMID: 37816131. DOI: 10.1002/mp.16765.
Abstract
PURPOSE Breast ultrasound (BUS) is an important breast imaging tool. Automatic BUS image segmentation can measure breast tumor size objectively and reduce doctors' workload. In this article, we propose a deep supervised transformer U-shaped full-resolution residual network (DSTransUFRRN) to segment BUS images. METHODS In the proposed method, a full-resolution residual stream and a deep supervision mechanism were introduced into TransU-Net. The residual stream preserves full-resolution features from different levels and enhances feature fusion, the deep supervision suppresses gradient dispersion, and the transformer module suppresses irrelevant features and improves feature extraction. Two datasets were used for training and evaluation: dataset A included 980 BUS image samples and dataset B 163. RESULTS Cross-validation was conducted. For dataset A, the proposed DSTransUFRRN achieved a significantly higher Dice score (91.04 ± 0.86%) than all compared methods (p < 0.05). For dataset B, the Dice score was lower than for dataset A due to the small number of samples, but the Dice score of DSTransUFRRN (88.15 ± 2.11%) remained significantly higher than that of the compared methods (p < 0.05). CONCLUSIONS In this study, we proposed DSTransUFRRN for BUS image segmentation. The proposed method achieved significantly higher accuracy than the compared previous methods.
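For reference, the Dice similarity coefficient reported above can be computed for binary masks as in this small NumPy sketch (the epsilon guarding empty masks is a common convention, not taken from the paper).

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

print(dice(np.ones((4, 4)), np.eye(4)))   # 0.4 for this toy pair
```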
Affiliation(s)
- Jiale Zhou
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Zuoxun Hou
- Beijing Institute of Mechanics & Electricity, Beijing, China
- Hongyan Lu
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Wenhan Wang
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Wanchen Zhao
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Zenan Wang
- Department of Gastroenterology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Dezhi Zheng
- Research Institute for Frontier Science, Beihang University, Beijing, China
- Shuai Wang
- School of Computer Science and Engineering, Beihang University, Beijing, China
- Wenzhong Tang
- School of Computer Science and Engineering, Beihang University, Beijing, China
- Xiaolei Qu
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
4
Teoh YX, Othmani A, Lai KW, Goh SL, Usman J. Stratifying knee osteoarthritis features through multitask deep hybrid learning: Data from the osteoarthritis initiative. Comput Methods Programs Biomed 2023; 242:107807. PMID: 37778138. DOI: 10.1016/j.cmpb.2023.107807.
Abstract
BACKGROUND AND OBJECTIVE Knee osteoarthritis (OA) is a debilitating musculoskeletal disorder that causes functional disability. Automatic knee OA diagnosis has great potential to enable timely and early intervention that may reverse the degenerative process of knee OA. Yet it is a tedious task, given the heterogeneity of the disorder. Most proposed techniques address a single OA diagnostic task based on the Kellgren-Lawrence (KL) standard, a composite score of only a few imaging features (i.e., osteophytes, joint-space narrowing, and subchondral bone changes), so only one key disease pattern is tackled. The KL standard fails to represent the disease patterns of individual OA features, particularly osteophytes, joint-space narrowing, and pain intensity, which play a fundamental role in OA manifestation. In this study, we aim to develop a multitask model using convolutional neural network (CNN) feature extractors and machine learning classifiers to detect nine important OA features from plain radiography: KL grade, knee osteophytes (both knees; medial femoral: OSFM, medial tibial: OSTM, lateral femoral: OSFL, and lateral tibial: OSTL), joint-space narrowing (medial: JSM, and lateral: JSL), and patient-reported pain intensity. METHODS We proposed a new feature extraction method that replaces the fully connected layer with a global average pooling (GAP) layer. A comparative analysis was conducted to compare the efficacy of 16 different CNN feature extractors and three machine learning classifiers. RESULTS Experimental results revealed the potential of CNN feature extractors for multitask diagnosis. The optimal model consisted of the VGG16-GAP feature extractor and a KNN classifier. This model not only outperformed the other tested models but also outperformed state-of-the-art methods, with higher balanced accuracy, higher Cohen's kappa, higher F1, and lower mean squared error (MSE) in the prediction of seven OA features. CONCLUSIONS The proposed model demonstrates pain prediction on plain radiographs, as well as eight OA-related bony features. Future work should focus on exploring additional radiological manifestations of OA and their relation to therapeutic interventions.
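A minimal sketch of the optimal pairing named above: VGG16 truncated at a global average pooling layer feeding a KNN classifier. The input size, k = 5, untrained weights, and random placeholder data are assumptions (the study used transfer learning, i.e., pretrained weights, and one classifier per OA feature).

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from sklearn.neighbors import KNeighborsClassifier

# pooling="avg" appends a GAP layer in place of the fully connected head.
extractor = VGG16(include_top=False, weights=None, pooling="avg",
                  input_shape=(224, 224, 3))

X = np.random.rand(8, 224, 224, 3).astype("float32")   # placeholder radiograph patches
y_kl = np.random.randint(0, 5, size=8)                 # placeholder KL grades 0-4

feats = extractor.predict(X)                           # 8 x 512 GAP feature vectors
kl_clf = KNeighborsClassifier(n_neighbors=5).fit(feats, y_kl)
```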
Affiliation(s)
- Yun Xin Teoh
- Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, 50603, Malaysia; LISSI, Université Paris-Est Créteil, Vitry sur Seine, 94400, France
- Alice Othmani
- LISSI, Université Paris-Est Créteil, Vitry sur Seine, 94400, France
- Khin Wee Lai
- Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, 50603, Malaysia
- Siew Li Goh
- Sports Medicine Unit, Faculty of Medicine, Universiti Malaya, Kuala Lumpur, 50603, Malaysia; Centre for Epidemiology and Evidence-Based Practice, Faculty of Medicine, Universiti Malaya, Kuala Lumpur, 50603, Malaysia
- Juliana Usman
- Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, 50603, Malaysia
5
Yang L, Zhang B, Ren F, Gu J, Gao J, Wu J, Li D, Jia H, Li G, Zong J, Zhang J, Yang X, Zhang X, Du B, Wang X, Li N. Rapid Segmentation and Diagnosis of Breast Tumor Ultrasound Images at the Sonographer Level Using Deep Learning. Bioengineering (Basel) 2023; 10:1220. PMID: 37892950. PMCID: PMC10604599. DOI: 10.3390/bioengineering10101220.
Abstract
BACKGROUND Breast cancer is one of the most common malignant tumors in women. Noninvasive ultrasound examination can identify mammary-gland-related diseases and copes well with dense breast tissue, making it a preferred method for breast cancer screening and of significant clinical value. However, ultrasound diagnosis of breast nodules or masses is performed by a doctor in real time, which is time-consuming and subjective. Junior doctors are prone to missed diagnoses, especially in remote areas or grass-roots hospitals with limited medical resources, which poses great risks to patients' health. There is therefore an urgent need for fast and accurate ultrasound image analysis algorithms to assist diagnosis. METHODS We propose a convolutional-neural-network-based assisted-diagnosis method for breast ultrasound images that can effectively improve diagnostic speed and the early screening rate of breast cancer. Our method consists of two stages: tumor recognition and tumor classification. (1) Attention-based semantic segmentation is used to identify the location and size of the tumor; (2) the identified nodules are cropped to construct a training dataset, on which a convolutional neural network is trained to classify breast nodules as benign or malignant. We collected 2057 images from 1131 patients as the training and validation dataset, and 100 images from patients with accurate pathological criteria were used as the test dataset. RESULTS The experimental results on this dataset show that the MIoU of tumor location recognition is 0.89 and the average accuracy of benign and malignant diagnosis is 97%. The diagnostic performance of the developed system is essentially consistent with that of senior doctors and is superior to that of junior doctors. In addition, the system can provide the doctor with a preliminary diagnosis, enabling a faster final diagnosis. CONCLUSION Our proposed method can effectively improve diagnostic speed and the early screening rate of breast cancer. The system provides a valuable aid for the ultrasonic diagnosis of breast cancer.
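A sketch of the hand-off between the two stages, assuming stage 1 outputs a binary mask: the nodule's bounding box is cropped with a margin and resized for the stage-2 classifier. The margin and target size are assumed values, not the paper's settings.

```python
import numpy as np
import cv2

def crop_nodule(image: np.ndarray, mask: np.ndarray,
                margin: int = 16, size: int = 224) -> np.ndarray:
    """Crop the masked nodule (plus margin) and resize it for the classifier."""
    ys, xs = np.nonzero(mask)                            # pixels flagged as tumor
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin, image.shape[0])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin, image.shape[1])
    return cv2.resize(image[y0:y1, x0:x1], (size, size)) # stage-2 input patch
```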
Affiliation(s)
- Lei Yang
- Strategic Support Force Medical Center, Beijing 100024, China
- Baichuan Zhang
- Chongqing Zhijian Life Technology Co., Ltd., Chongqing 400039, China
- Fei Ren
- State Key Laboratory of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100049, China
- Jianwen Gu
- Strategic Support Force Medical Center, Beijing 100024, China
- Jiao Gao
- Strategic Support Force Medical Center, Beijing 100024, China
- Jihua Wu
- Strategic Support Force Medical Center, Beijing 100024, China
- Dan Li
- Strategic Support Force Medical Center, Beijing 100024, China
- Huaping Jia
- Strategic Support Force Medical Center, Beijing 100024, China
- Guangling Li
- Central Medical District of Chinese PLA General Hospital, Beijing 100080, China
- Jing Zong
- Strategic Support Force Medical Center, Beijing 100024, China
- Jing Zhang
- Strategic Support Force Medical Center, Beijing 100024, China
- Xiaoman Yang
- Strategic Support Force Medical Center, Beijing 100024, China
- Xueyuan Zhang
- Chongqing Zhijian Life Technology Co., Ltd., Chongqing 400039, China
- Baolin Du
- Chongqing Zhijian Life Technology Co., Ltd., Chongqing 400039, China
- Xiaowen Wang
- Chongqing Zhijian Life Technology Co., Ltd., Chongqing 400039, China
- Na Li
- Chongqing Zhijian Life Technology Co., Ltd., Chongqing 400039, China
6
Qu X, Ren C, Wang Z, Fan S, Zheng D, Wang S, Lin H, Jiang J, Xing W. Complex Transformer Network for Single-Angle Plane-Wave Imaging. Ultrasound Med Biol 2023; 49:2234-2246. PMID: 37544831. DOI: 10.1016/j.ultrasmedbio.2023.07.005.
Abstract
OBJECTIVE Plane-wave imaging (PWI) is a high-frame-rate imaging technique that sacrifices image quality. Deep learning can potentially enhance plane-wave image quality, but processing complex in-phase and quadrature (IQ) data and suppressing incoherent signals pose challenges. To address these challenges, we present a complex transformer network (CTN) that integrates complex convolution and complex self-attention (CSA) modules. METHODS The CTN operates in a four-step process: delaying the complex IQ data of a 0° single-angle plane wave for each pixel as the CTN input; extracting reconstruction features with a complex convolution layer; suppressing irrelevant features derived from incoherent signals with two CSA modules; and forming output images with another complex convolution layer. The training labels are generated by minimum variance (MV) beamforming. RESULTS Simulation, phantom, and in vivo experiments revealed that CTN produced images of comparable or even higher quality than MV, with much shorter computation time. Evaluation metrics included contrast ratio, contrast-to-noise ratio, generalized contrast-to-noise ratio, and lateral and axial full width at half-maximum, which were -11.59 dB, 1.16, 0.68, 278 μm, and 329 μm for the simulation and 9.87 dB, 0.96, 0.62, 357 μm, and 305 μm for the phantom experiment, respectively. In vivo experiments further indicated that CTN could significantly improve details that were vague or even invisible in DAS and MV images. After GPU acceleration, the CTN runtime (76.03 ms) was comparable to that of delay-and-sum (DAS, 61.24 ms). CONCLUSION The proposed CTN significantly improved image contrast, resolution, and previously unclear details relative to the MV beamformer, making it an efficient tool for high-frame-rate imaging.
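The complex convolution the CTN builds on can be emulated with two real-valued convolutions, following (a + bi)(c + di) = (ac - bd) + (ad + bc)i. This PyTorch sketch is a generic complex layer under that identity; channel counts and kernel size are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex 2-D convolution via two real convolutions (bias omitted for clarity)."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False)  # real weights
        self.conv_i = nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False)  # imaginary weights

    def forward(self, x_r: torch.Tensor, x_i: torch.Tensor):
        real = self.conv_r(x_r) - self.conv_i(x_i)   # ac - bd
        imag = self.conv_r(x_i) + self.conv_i(x_r)   # ad + bc
        return real, imag

# Toy usage on delayed IQ data shaped (batch, channels, height, width).
iq_r, iq_i = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
out_r, out_i = ComplexConv2d(1, 8)(iq_r, iq_i)
```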
Affiliation(s)
- Xiaolei Qu
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Chujian Ren
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Zihao Wang
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Shuangchun Fan
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Dezhi Zheng
- Research Institute for Frontier Science, Beihang University, Beijing, China
- Shuai Wang
- Research Institute for Frontier Science, Beihang University, Beijing, China
- Hongxiang Lin
- Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, China
- Jue Jiang
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, USA
- Weiwei Xing
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
7
Shareef B, Xian M, Vakanski A, Wang H. Breast Ultrasound Tumor Classification Using a Hybrid Multitask CNN-Transformer Network. Med Image Comput Comput Assist Interv 2023; 14223:344-353. PMID: 38601088. PMCID: PMC11006090. DOI: 10.1007/978-3-031-43901-8_33.
Abstract
Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification. Although convolutional neural networks (CNNs) have demonstrated reliable performance in tumor classification, they have inherent limitations in modeling global and long-range dependencies due to the localized nature of convolution operations. Vision Transformers have an improved capability of capturing global contextual information but may distort local image patterns due to the tokenization operations. In this study, we propose Hybrid-MT-ESTAN, a multitask deep neural network that performs BUS tumor classification and segmentation with a hybrid architecture composed of CNN and Swin Transformer components. The proposed approach was compared to nine BUS classification methods and evaluated using seven quantitative metrics on a dataset of 3,320 BUS images. The results indicate that Hybrid-MT-ESTAN achieved the highest accuracy, sensitivity, and F1 score of 82.7%, 86.4%, and 86.0%, respectively.
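A schematic of the multitask idea above: one shared feature map feeds both a segmentation head and a classification head. The encoder is deliberately omitted and all dimensions are placeholders; this is not the Hybrid-MT-ESTAN architecture itself.

```python
import torch
import torch.nn as nn

class MultitaskHead(nn.Module):
    """Shared feature map -> per-pixel mask logits plus one tumor-class logit."""
    def __init__(self, feat_ch: int = 64):
        super().__init__()
        self.seg_head = nn.Conv2d(feat_ch, 1, kernel_size=1)   # segmentation branch
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat_ch, 1))                             # benign/malignant branch

    def forward(self, feats: torch.Tensor):
        return self.seg_head(feats), self.cls_head(feats)

mask_logits, cls_logit = MultitaskHead()(torch.randn(2, 64, 56, 56))
```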
Affiliation(s)
- Bryar Shareef
- Department of Computer Science, University of Idaho, Idaho Falls, Idaho 83402, USA
- Min Xian
- Department of Computer Science, University of Idaho, Idaho Falls, Idaho 83402, USA
- Aleksandar Vakanski
- Department of Computer Science, University of Idaho, Idaho Falls, Idaho 83402, USA
- Haotian Wang
- Department of Computer Science, University of Idaho, Idaho Falls, Idaho 83402, USA
8
Chen F, Han H, Wan P, Liao H, Liu C, Zhang D. Joint Segmentation and Differential Diagnosis of Thyroid Nodule in Contrast-Enhanced Ultrasound Images. IEEE Trans Biomed Eng 2023; 70:2722-2732. PMID: 37027278. DOI: 10.1109/tbme.2023.3262842.
Abstract
OBJECTIVE Microvascular perfusion can be observed in real time with contrast-enhanced ultrasound (CEUS), a novel ultrasound technology for visualizing the dynamic patterns of parenchymal perfusion. Automatic lesion segmentation and differential diagnosis of malignant and benign nodules based on CEUS are crucial but challenging tasks for computer-aided diagnosis of thyroid nodules. METHODS To tackle these two challenges concurrently, we present Trans-CEUS, a spatial-temporal transformer-based CEUS analysis model for joint learning of the two tasks. Specifically, a dynamic Swin Transformer encoder and multi-level collaborative feature learning are combined with a U-Net to achieve accurate segmentation of lesions with ambiguous boundaries in CEUS. In addition, a transformer-based global spatial-temporal fusion module is proposed to capture long-range enhancement perfusion in dynamic CEUS and promote differential diagnosis. RESULTS Empirical results on clinical data showed that the Trans-CEUS model achieved not only good lesion segmentation, with a high Dice similarity coefficient of 82.41%, but also a superior diagnostic accuracy of 86.59%. CONCLUSION AND SIGNIFICANCE This research is novel in being the first to incorporate the transformer into CEUS analysis, and it shows promising results on dynamic CEUS datasets for both segmentation and diagnosis of thyroid nodules.
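As a stand-in for the paper's global spatial-temporal fusion, the sketch below fuses per-frame CEUS features over the time axis with a stock transformer encoder; the clip length, feature dimension, and layer settings are assumptions.

```python
import torch
import torch.nn as nn

frames = torch.randn(2, 30, 256)           # (batch, frames in CEUS clip, per-frame features)
fusion = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True),
    num_layers=2)
fused = fusion(frames).mean(dim=1)         # pooled long-range perfusion descriptor
```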
9
Al-Hejri AM, Al-Tam RM, Fazea M, Sable AH, Lee S, Al-antari MA. ETECADx: Ensemble Self-Attention Transformer Encoder for Breast Cancer Diagnosis Using Full-Field Digital X-ray Breast Images. Diagnostics (Basel) 2022; 13:89. PMID: 36611382. PMCID: PMC9818801. DOI: 10.3390/diagnostics13010089.
Abstract
Early detection of breast cancer is essential to reduce the mortality rate among women. In this paper, a new AI-based computer-aided diagnosis (CAD) framework called ETECADx is proposed by fusing the benefits of ensemble transfer learning of convolutional neural networks with the self-attention mechanism of the vision transformer (ViT) encoder. Accurate and precise high-level deep features are generated by the backbone ensemble network, while the transformer encoder is used to predict breast cancer probabilities in two approaches: Approach A (binary classification) and Approach B (multi-classification). The proposed CAD system is built on the benchmark public multi-class INbreast dataset, and private real breast cancer images, collected and annotated by expert radiologists, are used to validate its prediction performance. Promising evaluation results are achieved on the INbreast mammograms, with overall accuracies of 98.58% and 97.87% for the binary and multi-class approaches, respectively. Compared with the individual backbone networks, the proposed ensemble learning model improves breast cancer prediction performance by 6.6% for the binary and 4.6% for the multi-class approach. The hybrid ETECADx shows further improvements of 8.1% and 6.2% for binary and multi-class diagnosis, respectively, when the ViT-based ensemble backbone network is used. On the real breast images used for validation, the proposed CAD system achieves encouraging prediction accuracies of 97.16% for the binary and 89.40% for the multi-class approach. ETECADx can classify the lesions of a single mammogram in an average of 0.048 s. Such promising performance could be useful in practical CAD applications, providing a second supporting opinion for distinguishing various breast cancer malignancies.
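A schematic of the ensemble-plus-encoder idea: deep features from two backbone CNNs become tokens that a self-attention encoder fuses before a binary head (Approach A). The backbones, dimensions, and head are placeholders, not the ETECADx design itself.

```python
import torch
import torch.nn as nn

f1 = torch.randn(4, 1280)                                  # features from backbone CNN 1
f2 = torch.randn(4, 2048)                                  # features from backbone CNN 2

proj1, proj2 = nn.Linear(1280, 256), nn.Linear(2048, 256)  # project to a shared token size
tokens = torch.stack([proj1(f1), proj2(f2)], dim=1)        # (batch, 2 tokens, 256)

encoder = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
logits = nn.Linear(256, 2)(encoder(tokens).mean(dim=1))    # binary diagnosis (Approach A)
```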
Affiliation(s)
- Aymen M. Al-Hejri
- School of Computational Sciences, Swami Ramanand Teerth Marathwada University, Nanded 431606, Maharashtra, India
- Faculty of Administrative and Computer Sciences, University of Albaydha, Albaydha, Yemen
- Riyadh M. Al-Tam
- School of Computational Sciences, Swami Ramanand Teerth Marathwada University, Nanded 431606, Maharashtra, India
- Faculty of Administrative and Computer Sciences, University of Albaydha, Albaydha, Yemen
- Muneer Fazea
- Department of Radiology, Al-Ma’amon Diagnostic Center, Sana’a, Yemen
- Department of Radiology, School of Medicine, Ibb University of Medical Sciences, Ibb, Yemen
- Archana Harsing Sable
- School of Computational Sciences, Swami Ramanand Teerth Marathwada University, Nanded 431606, Maharashtra, India
- Soojeong Lee
- Department of Computer Engineering, College of Software and Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
- Mugahed A. Al-antari
- Department of Artificial Intelligence, College of Software and Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
10
A Hybrid Workflow of Residual Convolutional Transformer Encoder for Breast Cancer Classification Using Digital X-ray Mammograms. Biomedicines 2022; 10:2971. PMID: 36428538. PMCID: PMC9687367. DOI: 10.3390/biomedicines10112971.
Abstract
Breast cancer, which attacks the glandular epithelium of the breast, is the second most common kind of cancer in women after lung cancer, and it affects a significant number of people worldwide. Building on the advantages of the residual convolutional network and the transformer encoder with multilayer perceptron (MLP), this study proposes a novel hybrid deep learning computer-aided diagnosis (CAD) system for breast lesions. While the backbone residual deep learning network creates the deep features, the transformer classifies breast cancer via its self-attention mechanism. The proposed CAD system can recognize breast cancer in two scenarios: Scenario A (binary classification) and Scenario B (multi-classification). Data collection and preprocessing, patch image creation and splitting, and artificial-intelligence-based breast lesion identification are components of the execution framework applied consistently in both scenarios. The effectiveness of the proposed AI model is compared against three separate deep learning models: a custom CNN, VGG16, and ResNet50. Two datasets, CBIS-DDSM and DDSM, are used to construct and test the proposed CAD system. Five-fold cross-validation of the test data is used to evaluate the accuracy of the performance results. The suggested hybrid CAD system achieves encouraging evaluation results, with overall accuracies of 100% and 95.80% for the binary and multiclass prediction challenges, respectively. The experimental results reveal that the proposed hybrid AI model can reliably distinguish benign and malignant breast tissue, which is important for radiologists to recommend further investigation of abnormal mammograms and provide the optimal treatment plan.
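The five-fold cross-validation protocol mentioned above, in generic scikit-learn form; LogisticRegression stands in for the hybrid CAD model, and the random features and labels are placeholders.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X = np.random.rand(100, 64)                    # placeholder deep features
y = np.random.randint(0, 2, size=100)          # placeholder benign/malignant labels

accs = []
for tr, te in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])   # stand-in classifier
    accs.append(accuracy_score(y[te], clf.predict(X[te])))
print(f"mean fold accuracy: {np.mean(accs):.3f}")
```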