1. Qian N, Jiang W, Wu X, Zhang N, Yu H, Guo Y. Lesion attention guided neural network for contrast-enhanced mammography-based biomarker status prediction in breast cancer. Comput Methods Programs Biomed 2024;250:108194. [PMID: 38678959] [DOI: 10.1016/j.cmpb.2024.108194]
Abstract
BACKGROUND AND OBJECTIVE: Accurate identification of molecular biomarker statuses is crucial in cancer diagnosis, treatment, and prognosis. Studies have demonstrated that medical images can be used for non-invasive prediction of biomarker statuses, and the biomarker status-associated features extracted from medical images are essential in developing such prediction models. Contrast-enhanced mammography (CEM) is a promising imaging technique for breast cancer diagnosis. This study aims to develop a neural network-based method to extract biomarker-related image features from CEM images and to evaluate the potential of CEM for non-invasive biomarker status prediction.
METHODS: An end-to-end convolutional neural network taking whole-breast images as inputs was proposed to extract CEM features for biomarker status prediction in breast cancer. The network focuses on lesion regions and flexibly extracts image features from lesion and peri-tumor regions through supervised learning with a smooth-L1-based consistency constraint. An image-level, weakly supervised segmentation network based on a Vision Transformer, with cross attention contrasting images of breasts with lesions against the contralateral breast images, was developed for automatic lesion segmentation. Finally, prediction models were developed after further selection of significant features and random forest-based classification. Results were reported using the area under the curve (AUC), accuracy, sensitivity, and specificity.
RESULTS: A dataset from 1203 breast cancer patients was used to develop and evaluate the proposed method. Compared with a variant without lesion attention and a variant using only lesion regions as inputs, the proposed method performed better at biomarker status prediction, achieving an AUC of 0.71 (95% confidence interval [CI]: 0.65, 0.77) for Ki-67 and 0.73 (95% CI: 0.65, 0.80) for human epidermal growth factor receptor 2 (HER2).
CONCLUSIONS: A lesion attention-guided neural network was proposed to extract CEM image features for biomarker status prediction in breast cancer. The promising results demonstrate the potential of CEM for non-invasive prediction of biomarker statuses in breast cancer.
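The smooth-L1 consistency constraint mentioned above is compact enough to sketch; a minimal PyTorch illustration (not the authors' code; the two feature branches and the weight lam are assumptions for illustration):

```python
import torch
import torch.nn.functional as F

def consistency_loss(lesion_feats: torch.Tensor,
                     whole_feats: torch.Tensor) -> torch.Tensor:
    # Smooth L1 (Huber-like) penalty pulling whole-image features
    # toward the features pooled from the lesion region.
    return F.smooth_l1_loss(whole_feats, lesion_feats)

# Toy usage with random stand-in features (batch of 4, 128-dim):
lesion_feats = torch.randn(4, 128)
whole_feats = lesion_feats + 0.1 * torch.randn(4, 128)
logits, labels = torch.randn(4, 2), torch.randint(0, 2, (4,))
lam = 0.5  # hypothetical trade-off weight
loss = F.cross_entropy(logits, labels) + lam * consistency_loss(lesion_feats, whole_feats)
print(loss.item())
```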
Affiliations
- Nini Qian: Department of Biomedical Engineering, Medical School, Tianjin University, Tianjin 300072, China; State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China
- Wei Jiang: Department of Biomedical Engineering, Medical School, Tianjin University, Tianjin 300072, China; State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China; Department of Radiotherapy, Yantai Yuhuangding Hospital, Shandong 264000, China
- Xiaoqian Wu: Department of Radiation Oncology, The Affiliated Hospital of Qingdao University, Qingdao 266071, China
- Ning Zhang: Department of Biomedical Engineering, Medical School, Tianjin University, Tianjin 300072, China; State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China
- Hui Yu: Department of Biomedical Engineering, Medical School, Tianjin University, Tianjin 300072, China; State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China
- Yu Guo: Department of Biomedical Engineering, Medical School, Tianjin University, Tianjin 300072, China; State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China
2. Chen J, Wen Z, Yang X, Jia J, Zhang X, Pian L, Zhao P. Ultrasound-Based Radiomics for the Classification of Henoch-Schönlein Purpura Nephritis in Children. Ultrason Imaging 2024;46:110-120. [PMID: 38140769] [DOI: 10.1177/01617346231220000]
Abstract
Henoch-Schönlein purpura nephritis (HSPN) is one of the most common kidney diseases in children. The current diagnosis and classification of HSPN depend on pathological biopsy, which is seriously limited by its invasive and high-risk nature. The aim of this study was to explore the potential of a radiomics model for evaluating the histopathological classification of HSPN based on ultrasound (US) images. A total of 440 patients with biopsy-proven Henoch-Schönlein purpura nephritis were analyzed retrospectively and grouped into two histopathological categories: those without glomerular crescent formation (ISKDC grades I-II) and those with glomerular crescent formation (ISKDC grades III-V). The patients were randomly assigned to a training cohort (n = 308) or a validation cohort (n = 132) at a ratio of 7:3. A sonologist manually drew the regions of interest (ROIs), including the cortex and medulla, on the ultrasound images of the right kidney. Ultrasound radiomics features were then extracted using the Pyradiomics package, and their dimensionality was reduced with Spearman correlation coefficients and the least absolute shrinkage and selection operator (LASSO) method. In total, 105 radiomics features were extracted from the US images of each patient, and 14 features were ultimately selected for machine learning analysis. Three radiomics models using k-nearest neighbor (KNN), logistic regression (LR), and support vector machine (SVM) classifiers were established for HSPN classification, and their predictive performance was assessed with the receiver operating characteristic (ROC) curve. Of the three classifiers, the SVM classifier performed best in the validation cohort [area under the curve (AUC) = 0.870 (95% CI, 0.795-0.944), sensitivity = 0.706, specificity = 0.950]. US-based radiomics had good predictive value for HSPN classification and can serve as a noninvasive tool to evaluate the severity of renal pathology and crescent formation in children with HSPN.
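The feature-selection-plus-classifier pipeline described above follows a standard radiomics pattern; a minimal scikit-learn sketch with synthetic stand-ins for the 105 features (not the authors' code):

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(440, 105))            # 440 patients x 105 radiomics features (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=440) > 0).astype(int)

def drop_correlated(X, threshold=0.9):
    """Keep one feature from each highly Spearman-correlated pair."""
    rho, _ = spearmanr(X)
    corr = np.abs(rho)
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return np.array(keep)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
keep = drop_correlated(X_tr)
lasso = LassoCV(cv=5).fit(StandardScaler().fit_transform(X_tr[:, keep]), y_tr)
selected = keep[lasso.coef_ != 0]          # LASSO keeps features with nonzero weight
clf = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X_tr[:, selected], y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te[:, selected])[:, 1]))
```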
Affiliations
- Jie Chen: Department of Ultrasound Medical, The First Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China; Department of Ultrasound Medical, The First Affiliated Hospital of Henan University of Chinese Medicine, Zhengzhou, China
- Zeying Wen: Department of Radiology, The First Affiliated Hospital of Henan University of Chinese Medicine, Zhengzhou, China
- Xiaoqing Yang: Department of Pathology, The First Affiliated Hospital of Henan University of Chinese Medicine, Zhengzhou, China
- Jie Jia: Department of Ultrasound Medical, The First Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Xiaodong Zhang: Department of Ultrasound Medical, The First Affiliated Hospital of Henan University of Chinese Medicine, Zhengzhou, China
- Linping Pian: Department of Ultrasound Medical, The First Affiliated Hospital of Henan University of Chinese Medicine, Zhengzhou, China
- Ping Zhao: Department of Ultrasound Medical, The First Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
3. Pawłowska A, Ćwierz-Pieńkowska A, Domalik A, Jaguś D, Kasprzak P, Matkowski R, Fura Ł, Nowicki A, Żołek N. Curated benchmark dataset for ultrasound based breast lesion analysis. Sci Data 2024;11:148. [PMID: 38297002] [PMCID: PMC10830496] [DOI: 10.1038/s41597-024-02984-z]
Abstract
A new detailed dataset of breast ultrasound scans (BrEaST), containing images of benign and malignant lesions as well as examples of normal tissue, is presented. The dataset consists of 256 breast scans collected from 256 patients. Each scan was manually annotated and labeled by a radiologist experienced in breast ultrasound examination. In particular, each tumor was identified in the image using a freehand annotation and labeled according to BI-RADS features and lexicon. The histopathological classification of the tumor was also provided for patients who underwent biopsy. The BrEaST dataset is the first breast ultrasound dataset containing patient-level labels, image-level annotations, and tumor-level labels, with all cases confirmed by follow-up care or core-needle biopsy results. To enable research into breast disease detection, tumor segmentation, and classification, the BrEaST dataset is made publicly available under the CC-BY 4.0 license.
Affiliations
- Anna Pawłowska: Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106, Warsaw, Poland
- Anna Ćwierz-Pieńkowska: Maria Sklodowska-Curie National Institute of Oncology - National Research Institute, Branch in Krakow, ul. Garncarska 11, 31-115, Kraków, Poland
- Agnieszka Domalik: Maria Sklodowska-Curie National Institute of Oncology - National Research Institute, Branch in Krakow, ul. Garncarska 11, 31-115, Kraków, Poland
- Dominika Jaguś: Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106, Warsaw, Poland
- Piotr Kasprzak: Breast Unit, Lower Silesian Oncology, Pulmonology and Hematology Center, pl. Ludwika Hirszfelda 12, 53-413, Wrocław, Poland
- Rafał Matkowski: Breast Unit, Lower Silesian Oncology, Pulmonology and Hematology Center, pl. Ludwika Hirszfelda 12, 53-413, Wrocław, Poland; Department of Oncology, Wrocław Medical University, Wrocław, Poland
- Łukasz Fura: Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106, Warsaw, Poland
- Andrzej Nowicki: Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106, Warsaw, Poland
- Norbert Żołek: Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106, Warsaw, Poland
4. Tasnim J, Hasan MK. CAM-QUS guided self-tuning modular CNNs with multi-loss functions for fully automated breast lesion classification in ultrasound images. Phys Med Biol 2023;69:015018. [PMID: 38056017] [DOI: 10.1088/1361-6560/ad1319]
Abstract
Objective. Breast cancer is the major cause of cancer death among women worldwide. Deep learning-based computer-aided diagnosis (CAD) systems for classifying lesions in breast ultrasound (BUS) images can help materialise the early detection of breast cancer and enhance survival chances.
Approach. This paper presents a fully automated BUS diagnosis system with modular convolutional neural networks tuned with novel loss functions. The proposed network comprises a dynamic channel input enhancement network, an attention-guided InceptionV3-based feature extraction network, a classification network, and a parallel feature transformation network that maps deep features into a quantitative ultrasound (QUS) feature space. These networks function together to improve classification accuracy by increasing the separation of benign and malignant class-specific features while enriching them simultaneously. Unlike traditional approaches based on the categorical cross-entropy (CCE) loss alone, our method uses two additional novel losses, a class activation mapping (CAM)-based loss and a QUS feature-based loss, to enable the overall network to learn clinically valued lesion shape- and texture-related properties focused primarily on the lesion area, in support of explainable AI (XAI).
Main results. Experiments on four public datasets, one private dataset, and a combined breast ultrasound dataset are used to validate our strategy. The suggested technique obtains an accuracy of 97.28%, sensitivity of 93.87%, and F1-score of 95.42% on dataset 1 (BUSI), and an accuracy of 91.50%, sensitivity of 89.38%, and F1-score of 89.31% on the combined dataset, which consists of 1494 images collected from hospitals in five demographic locations using four ultrasound systems from different manufacturers. These results outperform techniques reported in the literature by a considerable margin.
Significance. The proposed CAD system provides a diagnosis from the auto-focused lesion area of B-mode BUS images, avoiding any explicit requirement for segmentation or region-of-interest extraction, and can thus be a handy tool for making accurate and reliable diagnoses even in unspecialized healthcare centers.
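The multi-loss idea above can be written generically; a hedged PyTorch sketch (not the authors' formulation: their CAM- and QUS-based losses are more elaborate, and all weights and tensor shapes here are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def total_loss(logits, labels, cam, lesion_prior, qus_pred, qus_target,
               w_cam=0.1, w_qus=0.1):
    ce = F.cross_entropy(logits, labels)
    # Encourage the class activation map to concentrate on the lesion.
    cam_loss = F.mse_loss(cam, lesion_prior)
    # Map deep features into the quantitative-ultrasound feature space.
    qus_loss = F.mse_loss(qus_pred, qus_target)
    return ce + w_cam * cam_loss + w_qus * qus_loss

# Toy shapes: batch 2, 2 classes, 7x7 CAM, 8 QUS features.
loss = total_loss(torch.randn(2, 2), torch.randint(0, 2, (2,)),
                  torch.rand(2, 7, 7), torch.rand(2, 7, 7),
                  torch.randn(2, 8), torch.randn(2, 8))
print(loss.item())
```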
Affiliations
- Jarin Tasnim: Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka 1205, Bangladesh
- Md Kamrul Hasan: Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka 1205, Bangladesh
5. Chowa SS, Azam S, Montaha S, Payel IJ, Bhuiyan MRI, Hasan MZ, Jonkman M. Graph neural network-based breast cancer diagnosis using ultrasound images with optimized graph construction integrating the medically significant features. J Cancer Res Clin Oncol 2023;149:18039-18064. [PMID: 37982829] [PMCID: PMC10725367] [DOI: 10.1007/s00432-023-05464-w]
Abstract
PURPOSE: An automated computerized approach can aid radiologists in the early diagnosis of breast cancer. In this study, a novel method is proposed for classifying breast tumors as benign or malignant from ultrasound images, using a Graph Neural Network (GNN) model built on clinically significant features.
METHOD: Ten informative features are extracted from the region of interest (ROI), based on the radiologists' diagnostic markers. The significance of the features is evaluated using density plots and the t-test statistical analysis method. A feature table is generated in which each row represents an individual image, treated as a node, and the edges between nodes are determined by calculating the Spearman correlation coefficient. A graph dataset is generated and fed into the GNN model. The model is configured through an ablation study and Bayesian optimization, then evaluated with different correlation thresholds to obtain the highest performance with a shallow graph; performance consistency is validated with k-fold cross-validation. The impact of utilizing ROIs and handcrafted features for breast tumor classification is evaluated by comparing the model's performance with that obtained using Histogram of Oriented Gradients (HOG) descriptor features from the entire ultrasound image. Lastly, a clustering-based analysis is performed to generate a new filtered graph that accounts for weak and strong relationships between nodes based on their similarities.
RESULTS: With a threshold value of 0.95, the GNN model achieves the highest test accuracy of 99.48%, precision and recall of 100%, and an F1 score of 99.28%, while reducing the number of edges by 85.5%. With no threshold, the model's performance on the graph generated from HOG descriptor features is 86.91%. Different threshold values for the Spearman correlation score are tested and their performance compared. No significant differences are observed between the original graph and the filtered graph.
CONCLUSION: The proposed approach may aid radiologists in effectively diagnosing breast cancer and learning its tumor patterns.
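The graph construction described above is easy to sketch: nodes are per-image feature vectors, and edges connect pairs whose Spearman correlation exceeds a threshold. A minimal illustration with synthetic features (not the authors' code):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
features = rng.normal(size=(20, 10))   # 20 images x 10 handcrafted features (synthetic)
threshold = 0.95                       # the abstract's best-performing threshold

edges = []
for i in range(len(features)):
    for j in range(i + 1, len(features)):
        rho, _ = spearmanr(features[i], features[j])
        if rho > threshold:
            edges.append((i, j))
print(f"{len(edges)} edges at threshold {threshold}")
# The (nodes, edges) pair would then feed a GNN library, e.g.
# torch_geometric.data.Data(x=..., edge_index=...) in PyTorch Geometric.
```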
Affiliations
- Sadia Sultana Chowa: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Sami Azam: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Sidratul Montaha: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Israt Jahan Payel: Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1216, Bangladesh
- Md Rahad Islam Bhuiyan: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Md Zahid Hasan: Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1216, Bangladesh
- Mirjam Jonkman: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
6. Gao J, Xu L, Wan M. Incremental learning for an evolving stream of medical ultrasound images via counterfactual thinking. Comput Med Imaging Graph 2023;109:102290. [PMID: 37647830] [DOI: 10.1016/j.compmedimag.2023.102290]
Abstract
Although traditional deep learning (DL) approaches provide promising accuracy and efficiency in medical ultrasound image analysis, they cannot replace the physician in making a diagnosis, since a DL model is only appropriate in static application scenarios. Currently, most DL-based models are incapable of learning new tasks in dynamic clinical environments because of catastrophic forgetting of old tasks. To address this problem, we propose an incremental classifier for medical ultrasound images that is sequentially trained on evolving tasks via counterfactual thinking. Specifically, the proposed model consists of a feature extractor and a classifier to which new classes can be added at any time during training. Toward a more discriminative model in the continual learning setting, a contrastive strategy is designed to leverage fine-grained information by generating a series of counterfactual regions. For model optimization, we design a multi-task loss made up of a knowledge distillation loss, a cross-entropy loss, and a contrastive loss. This objective jointly enjoys the merits of less forgetting, better accuracy, and fine-grained information utilization. A newly collected dataset with 52 medical ultrasound classification tasks is used to demonstrate the effectiveness of our method. The proposed approach achieves 76.59%, 11.67%, and 7.93% in terms of average incremental accuracy, forgetting rate, and feature retention, respectively.
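A hedged PyTorch sketch of the multi-task objective named above; the contrastive term here is a generic InfoNCE stand-in rather than the paper's counterfactual-region formulation, and all weights and shapes are assumptions:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Soften both distributions with temperature T; the KL term
    # preserves the old model's knowledge on previously seen classes.
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T

def info_nce(z1, z2, tau=0.1):
    # Generic contrastive loss over paired embeddings.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

student_logits = torch.randn(8, 12)    # 12 classes seen so far
teacher_logits = torch.randn(8, 10)    # old model knew 10 classes
labels = torch.randint(0, 12, (8,))
z1, z2 = torch.randn(8, 64), torch.randn(8, 64)

loss = (F.cross_entropy(student_logits, labels)
        + distillation_loss(student_logits[:, :10], teacher_logits)
        + 0.5 * info_nce(z1, z2))
print(loss.item())
```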
Affiliations
- Junling Gao: The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, PR China
- Lei Xu: The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, PR China; Xi'an Hospital of Traditional Chinese Medicine, Xi'an 710021, PR China
- Mingxi Wan: The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, PR China
7. Guo Y, Jiang R, Gu X, Cheng HD, Garg H. A Novel Fuzzy Relative-Position-Coding Transformer for Breast Cancer Diagnosis Using Ultrasonography. Healthcare (Basel) 2023;11:2530. [PMID: 37761727] [PMCID: PMC10531413] [DOI: 10.3390/healthcare11182530]
Abstract
Breast cancer is a leading cause of death in women worldwide, and early detection is crucial for successful treatment. Computer-aided diagnosis (CAD) systems have been developed to assist doctors in identifying breast cancer on ultrasound images. In this paper, we propose a novel fuzzy relative-position-coding (FRPC) Transformer to classify breast ultrasound (BUS) images for breast cancer diagnosis. The proposed FRPC Transformer combines the self-attention mechanism of Transformer networks with fuzzy relative-position coding to capture global and local features of BUS images. The performance of the proposed method is evaluated on one benchmark dataset and compared with existing Transformer approaches using various metrics. The experiments show that the proposed method achieves higher accuracy, sensitivity, specificity, and F1 score (all 90.52%) and a larger area under the receiver operating characteristic (ROC) curve (0.91) than the original Transformer model (89.54% for each metric and 0.89, respectively). Overall, the proposed FRPC Transformer is a promising approach for breast cancer diagnosis, with potential applications in clinical practice and a possible contribution to the early detection of breast cancer.
Affiliations
- Yanhui Guo: Department of Computer Science, University of Illinois, Springfield, IL 62703, USA
- Ruquan Jiang: Department of Pediatrics, Xinxiang Medical University, Xinxiang 453003, China
- Xin Gu: School of Information Science and Technology, North China University of Technology, Beijing 100144, China
- Heng-Da Cheng: Department of Computer Science, Utah State University, Logan, UT 84322, USA
- Harish Garg: School of Mathematics, Thapar Institute of Engineering and Technology, Deemed University, Patiala 147004, Punjab, India
8. Zhang B, Vakanski A, Xian M. BI-RADS-NET-V2: A Composite Multi-Task Neural Network for Computer-Aided Diagnosis of Breast Cancer in Ultrasound Images With Semantic and Quantitative Explanations. IEEE Access 2023;11:79480-79494. [PMID: 37608804] [PMCID: PMC10443928] [DOI: 10.1109/access.2023.3298569]
Abstract
Computer-aided diagnosis (CADx) based on explainable artificial intelligence (XAI) can gain the trust of radiologists and effectively improve diagnostic accuracy and consultation efficiency. This paper proposes BI-RADS-Net-V2, a novel machine learning approach for fully automatic breast cancer diagnosis in ultrasound images. BI-RADS-Net-V2 can accurately distinguish malignant tumors from benign ones and provides both semantic and quantitative explanations. The explanations are given in terms of clinically proven morphological features used by clinicians for diagnosing and reporting mass findings, i.e., the Breast Imaging Reporting and Data System (BI-RADS). Experiments on 1,192 breast ultrasound (BUS) images indicate that the proposed method improves diagnostic accuracy by taking full advantage of the medical knowledge in BI-RADS while providing both semantic and quantitative explanations for the decision.
Affiliations
- Boyu Zhang: Institute for Interdisciplinary Data Sciences, University of Idaho, Moscow, ID 83844, USA
- Aleksandar Vakanski: Department of Nuclear Engineering and Industrial Management, University of Idaho, Idaho Falls, ID 83402, USA
- Min Xian: Department of Computer Science, University of Idaho, Idaho Falls, ID 83402, USA
9. Zafar A, Tanveer J, Ali MU, Lee SW. BU-DLNet: Breast Ultrasonography-Based Cancer Detection Using Deep-Learning Network Selection and Feature Optimization. Bioengineering (Basel) 2023;10:825. [PMID: 37508852] [PMCID: PMC10376009] [DOI: 10.3390/bioengineering10070825]
Abstract
Early detection of breast lesions and distinguishing between malignant and benign lesions are critical for breast cancer (BC) prognosis. Breast ultrasonography (BU) is an important radiological imaging modality for the diagnosis of BC. This study proposes a BU image-based framework for the diagnosis of BC in women. Various pre-trained networks are used to extract deep features from the BU images. Ten wrapper-based optimization algorithms, including the marine predator algorithm, generalized normal distribution optimization, the slime mold algorithm, the equilibrium optimizer (EO), manta-ray foraging optimization, atom search optimization, Harris hawks optimization, Henry gas solubility optimization, the pathfinder algorithm, and poor and rich optimization, were employed to compute the optimal subset of deep features using a support vector machine classifier. Furthermore, a network selection algorithm was employed to determine the best pre-trained network. An online BU dataset was used to test the proposed framework. After comprehensive testing and analysis, the EO algorithm produced the highest classification rate for each pre-trained model: it achieved the highest classification accuracy of 96.79% using only a deep feature vector of size 562 from the ResNet-50 model, and Inception-ResNet-v2 had the second-highest classification accuracy of 96.15% with the EO algorithm. Finally, the results of the proposed framework are compared with those in the literature.
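The core pattern above, deep features from a pre-trained network feeding an SVM, is straightforward to sketch; a minimal illustration that omits the wrapper-based metaheuristic feature selection (inputs are random stand-ins for breast ultrasound images, not the authors' pipeline):

```python
import torch
from torchvision.models import resnet50, ResNet50_Weights
from sklearn.svm import SVC

model = resnet50(weights=ResNet50_Weights.DEFAULT)
model.fc = torch.nn.Identity()      # expose the 2048-d feature vector
model.eval()

images = torch.rand(16, 3, 224, 224)          # stand-in BU images
labels = torch.randint(0, 2, (16,)).numpy()   # benign / malignant
with torch.no_grad():
    feats = model(images).numpy()

clf = SVC().fit(feats, labels)                # feature selection would go here
print("train accuracy:", clf.score(feats, labels))
```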
Affiliations
- Amad Zafar: Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Jawad Tanveer: Department of Computer Science and Engineering, Sejong University, Seoul 05006, Republic of Korea
- Muhammad Umair Ali: Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Seung Won Lee: Department of Precision Medicine, School of Medicine, Sungkyunkwan University, Suwon 16419, Republic of Korea
10. Dubey AK, Chabert GL, Carriero A, Pasche A, Danna PSC, Agarwal S, Mohanty L, Sharma N, Yadav S, Jain A, Kumar A, Kalra MK, Sobel DW, Laird JR, Singh IM, Singh N, Tsoulfas G, Fouda MM, Alizad A, Kitas GD, Khanna NN, Viskovic K, Kukuljan M, Al-Maini M, El-Baz A, Saba L, Suri JS. Ensemble Deep Learning Derived from Transfer Learning for Classification of COVID-19 Patients on Hybrid Deep-Learning-Based Lung Segmentation: A Data Augmentation and Balancing Framework. Diagnostics (Basel) 2023;13:1954. [PMID: 37296806] [DOI: 10.3390/diagnostics13111954]
Abstract
BACKGROUND AND MOTIVATION: Lung computed tomography (CT) is high-resolution and well adopted in the intensive care unit (ICU) for COVID-19 disease classification. Most artificial intelligence (AI) systems do not undergo generalization and are typically overfitted; such trained AI systems are not practical for clinical settings and therefore do not give accurate results when run on unseen data sets. We hypothesize that ensemble deep learning (EDL) is superior to deep transfer learning (TL) in both non-augmented and augmented frameworks.
METHODOLOGY: The system consists of a cascade of quality control, ResNet-UNet-based hybrid deep learning for lung segmentation, and seven TL-based classification models followed by five types of EDL. To test our hypothesis, five different data combinations (DC) were designed using two multicenter cohorts, Croatia (80 COVID) and Italy (72 COVID and 30 controls), leading to 12,000 CT slices. As part of generalization, the system was tested on unseen data and statistically tested for reliability and stability.
RESULTS: Using the K5 (80:20) cross-validation protocol on the balanced and augmented dataset, the five DC datasets improved TL mean accuracy by 3.32%, 6.56%, 12.96%, 47.1%, and 2.78%, respectively. The five EDL systems showed accuracy improvements of 2.12%, 5.78%, 6.72%, 32.05%, and 2.40%, validating our hypothesis. All statistical tests proved positive for reliability and stability.
CONCLUSION: EDL showed superior performance to TL systems on both (a) unbalanced, unaugmented and (b) balanced, augmented datasets, for both (i) seen and (ii) unseen paradigms, validating both of our hypotheses.
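One common EDL building block consistent with the description above is soft voting over fine-tuned TL models; a minimal PyTorch sketch with stand-in member models (the paper's five EDL variants are not specified here):

```python
import torch
import torch.nn.functional as F

def ensemble_predict(models, x):
    """Average class probabilities across ensemble members (soft voting)."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(x), dim=1) for m in models])
    return probs.mean(dim=0).argmax(dim=1)

# Toy members: three linear "models" over flattened 32x32 inputs, 2 classes.
models = [torch.nn.Sequential(torch.nn.Flatten(),
                              torch.nn.Linear(3 * 32 * 32, 2))
          for _ in range(3)]
x = torch.rand(4, 3, 32, 32)
print(ensemble_predict(models, x))
```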
Affiliations
- Arun Kumar Dubey: Bharati Vidyapeeth's College of Engineering, New Delhi 110063, India
- Gian Luca Chabert: Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09123 Cagliari, Italy
- Alessandro Carriero: Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09123 Cagliari, Italy
- Alessio Pasche: Department of Radiology, "Maggiore della Carità" Hospital, University of Piemonte Orientale, Via Solaroli 17, 28100 Novara, Italy
- Pietro S C Danna: Department of Radiology, "Maggiore della Carità" Hospital, University of Piemonte Orientale, Via Solaroli 17, 28100 Novara, Italy
- Sushant Agarwal: Advanced Knowledge Engineering Centre, GBTI, Roseville, CA 95661, USA
- Lopamudra Mohanty: ABES Engineering College, Ghaziabad 201009, India; Department of Computer Science Engineering, Bennett University, Greater Noida 201310, India
- Neeraj Sharma: School of Biomedical Engineering, Indian Institute of Technology (BHU), Varanasi 221005, India
- Sarita Yadav: Bharati Vidyapeeth's College of Engineering, New Delhi 110063, India
- Achin Jain: Bharati Vidyapeeth's College of Engineering, New Delhi 110063, India
- Ashish Kumar: Department of Computer Science Engineering, Bennett University, Greater Noida 201310, India
- Mannudeep K Kalra: Department of Radiology, Massachusetts General Hospital, Boston, MA 02115, USA
- David W Sobel: Men's Health Centre, Miriam Hospital Providence, Providence, RI 02906, USA
- John R Laird: Heart and Vascular Institute, Adventist Health St. Helena, St. Helena, CA 94574, USA
- Inder M Singh: Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
- Narpinder Singh: Department of Food Science and Technology, Graphic Era, Deemed to be University, Dehradun 248002, India
- George Tsoulfas: Department of Surgery, Aristoteleion University of Thessaloniki, 54124 Thessaloniki, Greece
- Mostafa M Fouda: Department of Electrical and Computer Engineering, Idaho State University, Pocatello, ID 83209, USA
- Azra Alizad: Department of Physiology & Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- George D Kitas: Academic Affairs, Dudley Group NHS Foundation Trust, Dudley DY1 2HQ, UK
- Narendra N Khanna: Department of Cardiology, Indraprastha APOLLO Hospitals, New Delhi 110001, India
- Klaudija Viskovic: Department of Radiology and Ultrasound, University Hospital for Infectious Diseases, 10000 Zagreb, Croatia
- Melita Kukuljan: Department of Interventional and Diagnostic Radiology, Clinical Hospital Center Rijeka, 51000 Rijeka, Croatia
- Mustafa Al-Maini: Allergy, Clinical Immunology & Rheumatology Institute, Toronto, ON L4Z 4C4, Canada
- Ayman El-Baz: Biomedical Engineering Department, University of Louisville, Louisville, KY 40292, USA
- Luca Saba: Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09123 Cagliari, Italy
- Jasjit S Suri: Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
11. Chen H, Ma M, Liu G, Wang Y, Jin Z, Liu C. Breast Tumor Classification in Ultrasound Images by Fusion of Deep Convolutional Neural Network and Shallow LBP Feature. J Digit Imaging 2023;36:932-946. [PMID: 36720840] [PMCID: PMC10287618] [DOI: 10.1007/s10278-022-00711-x]
Abstract
Breast cancer is one of the most dangerous and common cancers in women, which makes it a major research topic in medical science. To assist physicians in pre-screening for breast cancer and reduce unnecessary biopsies, breast ultrasound and computer-aided diagnosis (CAD) have been used to distinguish between benign and malignant tumors. In this study, we proposed a CAD system for tumor diagnosis on breast ultrasound (BUS) images using a multi-channel fusion method and a feature extraction structure based on multi-feature fusion. In the pre-processing stage, the multi-channel fusion method performs the color conversion of the BUS image so that it contains richer information. In the feature extraction stage, the pre-trained ResNet50 network is selected as the base network, three levels of features are combined via adaptive spatial feature fusion (ASFF), and finally shallow local binary pattern (LBP) texture features are fused in. A support vector machine (SVM) was used for comparative analysis. A retrospective analysis was carried out on 1615 breast tumor images (572 benign and 1043 malignant) confirmed by pathological examination. After data processing and augmentation, on an independent test set of 874 breast ultrasound images (457 benign and 417 malignant), the accuracy, precision, recall, specificity, F1 score, and AUC of our method were 96.91%, 98.75%, 94.72%, 98.91%, 0.97, and 0.991, respectively. The results show that integrating shallow LBP texture features with multi-level deep features more effectively improves the overall performance of breast tumor diagnosis and has strong clinical application value. Compared with previous methods, the proposed method is expected to enable automatic diagnosis of breast tumors and provide radiologists an auxiliary tool for accurately diagnosing breast diseases.
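A minimal sketch (not the authors' code) of fusing shallow LBP texture features with deep CNN features ahead of an SVM, as described above; the ASFF multi-level fusion is omitted, and the image is a random stand-in:

```python
import numpy as np
import torch
from torchvision.models import resnet50, ResNet50_Weights
from skimage.feature import local_binary_pattern

backbone = resnet50(weights=ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

gray = np.random.rand(224, 224).astype(np.float32)    # stand-in BUS image

# Shallow branch: uniform LBP histogram (10 bins for P=8, R=1).
lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

# Deep branch: 2048-d ResNet-50 feature of the 3-channel image.
x = torch.from_numpy(gray).expand(3, -1, -1).unsqueeze(0)
with torch.no_grad():
    deep = backbone(x).squeeze(0).numpy()

fused = np.concatenate([deep, hist])    # fused vector goes to an SVM downstream
print(fused.shape)
```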
Affiliations
- Hua Chen: School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
- Minglun Ma: School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
- Gang Liu: School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
- Ying Wang: The Second Hospital of Hebei Medical University, Shijiazhuang, 050000, China
- Zhihao Jin: School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
- Chong Liu: School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
12. Kumar S, Sengupta S, Ali I, Gupta MK, Lalhlenmawia H, Azizov S, Kumar D. Identification and exploration of quinazoline-1,2,3-triazole inhibitors targeting EGFR in lung cancer. J Biomol Struct Dyn 2023;41:11353-11372. [PMID: 37114510] [DOI: 10.1080/07391102.2023.2204360]
Abstract
Epidermal growth factor receptor (EGFR) signaling enhances lung cancer development; because secreted growth factors cannot permeate the cell membrane, they act through specialized signal transduction pathways. The purpose of this study is to identify a novel anticancer agent that inhibits EGFR and reduces the likelihood of lung cancer. A series of triazole-substituted quinazoline hybrid compounds were designed with ChemDraw software and docked against five different crystallographic EGFR tyrosine kinase domains (TKDs). PyRx, AutoDock Vina, and Discovery Studio Visualizer were used for docking and visualization. Molecule-14, Molecule-16, Molecule-19, Molecule-20, and Molecule-38 showed significant affinity, and Molecule-19 showed excellent binding affinity (-12.4 kcal/mol) with the crystallographic EGFR tyrosine kinase. The superimposition of the co-crystallized ligand with the hit compound shows a similar conformation at the active site of EGFR (PDB ID: 4HJO), indicating excellent coupling and pharmaceutical activity. The hit compound showed a good bioavailability score (0.55) with no signs of carcinogenic, mutagenic, or reproductive toxicity properties. MD simulation and MM-GBSA demonstrated good stability and binding free energy, indicating that the hit (Molecule-19) may be used as a lead compound. Molecule-19 also showed good ADME properties, a good bioavailability score, and synthetic accessibility with few signs of toxicity. It was observed that Molecule-19 may be a novel and potential inhibitor of EGFR with fewer side effects than the reference molecule. Additionally, the molecular dynamics simulation revealed the stable nature of the protein-ligand interaction and provided information about the amino acid residues involved in binding. Overall, this study led to the identification of potential EGFR inhibitors with favorable pharmacokinetic properties. We believe that the outcome of this study can help to develop more potent drug-like molecules to tackle human lung cancer.
Affiliations
- Sunil Kumar: Department of Pharmaceutical Chemistry, School of Pharmaceutical Sciences, Shoolini University, Solan, Himachal Pradesh, India
- Sounok Sengupta: Department of Pharmacology, School of Pharmaceutical Sciences, Shoolini University, Solan, Himachal Pradesh, India
- Iqra Ali: Department of Biosciences, COMSATS University Islamabad, Islamabad, Pakistan
- Manoj K Gupta: Department of Chemistry, School of Basic Sciences, Central University of Haryana, Mahendergarh, Haryana, India
- H Lalhlenmawia: Department of Pharmacy, Regional Institute of Paramedical and Nursing Sciences, Aizawl, Mizoram, India
- Shavkatjon Azizov: Laboratory of Biological Active Macromolecular Systems, Institute of Bioorganic Chemistry, Academy of Sciences Uzbekistan, Tashkent, Uzbekistan; Department of Pharmaceutical Chemistry, Tashkent Pharmaceutical Institute, Tashkent, Uzbekistan
- Deepak Kumar: Department of Pharmaceutical Chemistry, School of Pharmaceutical Sciences, Shoolini University, Solan, Himachal Pradesh, India
13. Gu Y, Xu W, Liu T, An X, Tian J, Ran H, Ren W, Chang C, Yuan J, Kang C, Deng Y, Wang H, Luo B, Guo S, Zhou Q, Xue E, Zhan W, Zhou Q, Li J, Zhou P, Chen M, Gu Y, Chen W, Zhang Y, Li J, Cong L, Zhu L, Wang H, Jiang Y. Ultrasound-based deep learning in the establishment of a breast lesion risk stratification system: a multicenter study. Eur Radiol 2023;33:2954-2964. [PMID: 36418619] [DOI: 10.1007/s00330-022-09263-8]
Abstract
OBJECTIVES: To establish a breast lesion risk stratification system using ultrasound images that predicts breast malignancy and assesses Breast Imaging Reporting and Data System (BI-RADS) categories simultaneously.
METHODS: This multicenter study prospectively collected a dataset of ultrasound images from 5012 patients at thirty-two hospitals from December 2018 to December 2020. A deep learning (DL) model was developed to perform binary categorization (benign vs. malignant) and BI-RADS categorization (2, 3, 4a, 4b, 4c, and 5) simultaneously. The training set of 4212 patients and the internal test set of 416 patients came from thirty hospitals; the remaining two hospitals, with 384 patients, served as an external test set. Three experienced radiologists performed a reader study on 324 patients randomly selected from the test sets. We compared the performance of the DL model with that of the three radiologists and with their consensus.
RESULTS: On the external test set, the DL model achieved areas under the receiver operating characteristic curve (AUCs) of 0.980 and 0.945 for the binary and six-way categorizations, respectively. On the reader study set, the DL BI-RADS categories achieved a similar AUC (0.901 vs. 0.933, p = 0.0632), sensitivity (90.98% vs. 95.90%, p = 0.1094), and accuracy (83.33% vs. 79.01%, p = 0.0541), but higher specificity (78.71% vs. 68.81%, p = 0.0012) than the consensus of the three radiologists.
CONCLUSIONS: The DL model performed well in distinguishing benign from malignant breast lesions and yielded outcomes similar to those of experienced radiologists, indicating its potential applicability in clinical diagnosis.
KEY POINTS:
- The DL model can perform binary categorization of benign and malignant breast lesions and six-way BI-RADS categorization (categories 2, 3, 4a, 4b, 4c, and 5) simultaneously.
- The DL model showed acceptable agreement with radiologists for the classification of breast lesions.
- The DL model performed well in distinguishing benign from malignant breast lesions and shows promise in helping reduce unnecessary biopsies of BI-RADS 4a lesions.
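A hedged PyTorch sketch of the dual-output design described above: a shared backbone with one head for benign/malignant and one head for the six BI-RADS categories. The ResNet-18 backbone is an assumption for illustration, not the paper's architecture:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class DualHeadNet(nn.Module):
    def __init__(self):
        super().__init__()
        base = resnet18(weights=None)       # random init for a shape demo
        base.fc = nn.Identity()             # expose the 512-d feature
        self.backbone = base
        self.binary_head = nn.Linear(512, 2)   # benign vs. malignant
        self.birads_head = nn.Linear(512, 6)   # BI-RADS 2, 3, 4a, 4b, 4c, 5

    def forward(self, x):
        f = self.backbone(x)
        return self.binary_head(f), self.birads_head(f)

model = DualHeadNet().eval()
with torch.no_grad():
    bin_logits, birads_logits = model(torch.rand(2, 3, 224, 224))
print(bin_logits.shape, birads_logits.shape)   # (2, 2) and (2, 6)
```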
Affiliations
- Yang Gu: Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
- Wen Xu: Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
- Ting Liu: Department of Medical Imaging Advanced Research, Beijing Research Institute, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Beijing, China
- Xing An: Department of Medical Imaging Advanced Research, Beijing Research Institute, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Beijing, China
- Jiawei Tian: Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, China
- Haitao Ran: Department of Ultrasound, The Second Affiliated Hospital of Chongqing Medical University & Chongqing Key Laboratory of Ultrasound Molecular Imaging, Chongqing, China
- Weidong Ren: Department of Ultrasound, Shengjing Hospital of China Medical University, Shenyang, China
- Cai Chang: Department of Medical Ultrasound, Fudan University Shanghai Cancer Center, Fudan University, Shanghai, China
- Jianjun Yuan: Department of Ultrasonography, Henan Provincial People's Hospital, Zhengzhou, China
- Chunsong Kang: Department of Ultrasound, Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Taiyuan, China
- Youbin Deng: Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College of Huazhong University of Science and Technology, Wuhan, China
- Hui Wang: Department of Ultrasound, China-Japan Union Hospital of Jilin University, Changchun, China
- Baoming Luo: Department of Ultrasound, The Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Shenglan Guo: Department of Ultrasonography, First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Qi Zhou: Department of Medical Ultrasound, The Second Affiliated Hospital, School of Medicine, Xi'an Jiaotong University, Xi'an, China
- Ensheng Xue: Department of Ultrasound, Union Hospital of Fujian Medical University, Fujian Institute of Ultrasound Medicine, Fuzhou, China
- Weiwei Zhan: Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University, School of Medicine, Shanghai, China
- Qing Zhou: Department of Ultrasonography, Renmin Hospital of Wuhan University, Wuhan, China
- Jie Li: Department of Ultrasound, Qilu Hospital, Shandong University, Jinan, China
- Ping Zhou: Department of Ultrasound, The Third Xiangya Hospital of Central South University, Changsha, China
- Man Chen: Department of Ultrasound Medicine, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ying Gu: Department of Ultrasonography, The Affiliated Hospital of Guizhou Medical University, Guiyang, China
- Wu Chen: Department of Ultrasound, The First Hospital of Shanxi Medical University, Taiyuan, China
- Yuhong Zhang: Department of Ultrasound, The Second Hospital of Dalian Medical University, Dalian, China
- Jianchu Li: Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
- Longfei Cong: Department of Medical Imaging Advanced Research, Beijing Research Institute, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Beijing, China
- Lei Zhu: Department of Medical Imaging Advanced Research, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Shenzhen, China
- Hongyan Wang: Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
- Yuxin Jiang: Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
14. Patient-specific method for predicting epileptic seizures based on DRSN-GRU. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104449]
15. Differential Diagnosis of DCIS and Fibroadenoma Based on Ultrasound Images: a Difference-Based Self-Supervised Approach. Interdiscip Sci 2023;15:262-272. [PMID: 36656448] [DOI: 10.1007/s12539-022-00547-7]
Abstract
Differentiation of ductal carcinoma in situ (DCIS, a precancerous lesion of the breast) from fibroadenoma (FA) using ultrasonography is significant for the early prevention of malignant breast tumors. Radiomics-based artificial intelligence (AI) can provide additional diagnostic information but usually requires extensive labeling effort by clinicians with specialized knowledge. This study aims to investigate the feasibility of differentially diagnosing DCIS and FA using ultrasound radiomics-based AI techniques and, further, to explore a novel approach that can reduce labeling effort without sacrificing diagnostic performance. We included 461 DCIS and 651 FA patients, of whom 139 DCIS and 181 FA patients constituted a prospective test cohort. First, various feature-engineering-based machine learning (FEML) and deep learning (DL) approaches were developed. Then, we designed a difference-based self-supervised (DSS) learning approach that requires only FA samples for training. The DSS approach consists of three steps: (1) pretraining a Bootstrap Your Own Latent (BYOL) model using FA images, (2) reconstructing images using the encoder and decoder of the pretrained model, and (3) distinguishing DCIS from FA based on the differences between the original and reconstructed images. The experimental results showed that the trained FEML and DL models achieved a highest AUC of 0.7935 (95% confidence interval, 0.7900-0.7969) on the prospective test cohort, indicating that the developed models are effective in assisting the differentiation of DCIS from FA on ultrasound images. Furthermore, the DSS model achieved an AUC of 0.8172 (95% confidence interval, 0.8124-0.8219), indicating that it outperforms the conventional radiomics-based AI models and is more competitive.
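Step (3) above reduces to scoring images by reconstruction error; a minimal sketch with a toy autoencoder standing in for the BYOL-pretrained encoder-decoder (all sizes and the scoring rule are assumptions):

```python
import torch
import torch.nn as nn

autoencoder = nn.Sequential(            # stand-in for the trained model
    nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU(),
    nn.Linear(128, 64 * 64), nn.Unflatten(1, (1, 64, 64)))

def anomaly_score(x):
    """Mean per-pixel difference between input and reconstruction;
    a model trained only on FA should reconstruct DCIS images worse."""
    with torch.no_grad():
        recon = autoencoder(x)
    return (x - recon).abs().flatten(1).mean(dim=1)

x = torch.rand(4, 1, 64, 64)            # stand-in ultrasound patches
print(anomaly_score(x))                 # threshold these scores to classify
```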
16. Xie W, Wang C, Lin Z, Luo X, Chen W, Xu M, Liang L, Liu X, Wang Y, Luo H, Cheng M. Multimodal fusion diagnosis of depression and anxiety based on CNN-LSTM model. Comput Med Imaging Graph 2022;102:102128. [PMID: 36272311] [DOI: 10.1016/j.compmedimag.2022.102128]
Abstract
BACKGROUND: In recent years, more and more people have suffered from depression and anxiety. These conditions are hard to spot and can be very dangerous. Currently, the Self-Reported Anxiety Scale (SAS) and Self-Reported Depression Scale (SDS) are commonly used for initial screening for depression and anxiety disorders. However, the information contained in these two scales is limited, while subjects' symptoms are varied and complex, which results in inconsistency between questionnaire evaluation results and clinicians' diagnoses. To fully mine the scale data, we propose a method to extract features from the facial expressions and movements captured in video recorded while subjects fill in the scales. We then combine the facial expression, movement, and scale information to establish a multimodal framework that improves the accuracy and robustness of depression and anxiety diagnosis.
METHODS: We collect the subjects' scale results and the videos recorded while they fill in the scales. Given the two scales, SAS and SDS, we construct a model with two branches, where each branch processes the multimodal data of SAS and SDS, respectively. In each branch, we first build a convolutional neural network (CNN) to extract the facial expression features in each video frame. Second, we establish a long short-term memory (LSTM) network to further embed the facial expression features and build connections between frames, generating the movement features in the video. Third, we transform the scale scores into one-hot format and feed them into the corresponding branch of the network to further mine the information in the multimodal data. Finally, we fuse the embeddings of the two branches to generate inference results for depression and anxiety.
RESULTS AND CONCLUSIONS: Based on the SAS and SDS score results, our multimodal model further mines the video information and reaches an accuracy of 0.946 in diagnosing depression and anxiety. This study demonstrates the feasibility of using our CNN-LSTM-based multimodal model for initial screening and diagnosis of depression and anxiety disorders with high diagnostic performance.
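A hedged PyTorch sketch of one branch of the CNN-LSTM design above: a CNN embeds each frame, an LSTM aggregates the frame sequence, and the result is fused with the one-hot scale score. All sizes are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class ScaleBranch(nn.Module):
    def __init__(self, n_scale_levels=4, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(            # per-frame expression embedding
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())       # 16-d per frame
        self.lstm = nn.LSTM(16, 32, batch_first=True)    # movement over time
        self.fc = nn.Linear(32 + n_scale_levels, n_classes)

    def forward(self, frames, scale_onehot):
        b, t = frames.shape[:2]
        f = self.cnn(frames.flatten(0, 1)).view(b, t, -1)  # CNN on each frame
        _, (h, _) = self.lstm(f)                           # last hidden state
        return self.fc(torch.cat([h[-1], scale_onehot], dim=1))

model = ScaleBranch()
out = model(torch.rand(2, 8, 3, 64, 64), torch.eye(4)[:2])  # 8-frame clips
print(out.shape)    # (2, 2) logits; a second branch would be fused likewise
```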
Affiliations
- Wanqing Xie: Department of Intelligent Medical Engineering, School of Biomedical Engineering, Anhui Medical University, Hefei, China; Department of Psychology, School of Mental Health and Psychological Sciences, Anhui Medical University, Hefei, China; Suzhou Fanhan Information Technology Company, Ltd, Suzhou, China
- Chen Wang: College of the Mathematical Sciences, Harbin Engineering University, Harbin, China
- Zhixiong Lin: Department of Psychiatry, Affiliated Hospital of Guangdong Medical University, Zhanjiang, China
- Xudong Luo: Department of Psychiatry, Affiliated Hospital of Guangdong Medical University, Zhanjiang, China
- Wenqian Chen: College of the Mathematical Sciences, Harbin Engineering University, Harbin, China
- Manzhu Xu: Department of Biological Sciences, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Lizhong Liang: Department of Psychiatry, Affiliated Hospital of Guangdong Medical University, Zhanjiang, China; School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Xiaofeng Liu: Suzhou Fanhan Information Technology Company, Ltd, Suzhou, China; Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, USA
- Yanzhong Wang: School of Population Health & Environmental Sciences, Faculty of Life Science and Medicine, King's College London, London, UK
- Hui Luo: Marine Biomedical Research Institute of Guangdong Medical University, Zhanjiang 510240, China
- Mingmei Cheng: Department of Intelligent Medical Engineering, School of Biomedical Engineering, Anhui Medical University, Hefei, China; Department of Psychology, School of Mental Health and Psychological Sciences, Anhui Medical University, Hefei, China
17. Khan AI, Kim MJ, Dutta P. Fine-tuning-based Transfer Learning for Characterization of Adeno-Associated Virus. J Signal Process Syst 2022;94:1515-1529. [PMID: 36742147] [PMCID: PMC9897492] [DOI: 10.1007/s11265-022-01758-3]
Abstract
Accurate and precise identification of adeno-associated virus (AAV) vectors plays an important role in dose-dependent gene therapy. Although solid-state nanopore techniques can potentially characterize AAV vectors by capturing ionic current, existing data analysis techniques fall short of identifying them from their ionic current profiles. Recently introduced machine learning methods such as deep convolutional neural networks (CNNs), developed for image identification tasks, can be applied to such classification. However, with the small dataset available for the problem at hand, it is not possible to train a deep neural network from scratch for accurate classification of AAV vectors. To circumvent this, we applied a pre-trained deep CNN (GoogleNet) to capture basic features from ionic current signals and subsequently used fine-tuning-based transfer learning to classify AAV vectors. The proposed method is very generic, as it requires minimal preprocessing and no handcrafted features. Our results indicate that fine-tuning-based transfer learning can achieve an average classification accuracy between 90% and 99% across three realizations, with a very small standard deviation. The results also indicate that the classification accuracy depends on the electric field applied across the nanopore and on the time frame used for data segmentation. We also found that fine-tuning the deep network outperforms feature extraction-based classification for the resistive-pulse dataset. To expand the usefulness of fine-tuning-based transfer learning, we tested two other pre-trained deep networks (ResNet50 and InceptionV3) for the classification of AAVs. Overall, fine-tuning-based transfer learning from pre-trained deep networks is very effective for classification, though deep networks such as ResNet50 and InceptionV3 take significantly longer to train than GoogleNet.
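The fine-tuning recipe described above is standard; a minimal torchvision sketch (not the authors' code; the class count, optimizer settings, and input shapes are assumptions):

```python
import torch
import torch.nn as nn
from torchvision.models import googlenet, GoogLeNet_Weights

model = googlenet(weights=GoogLeNet_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 3)   # e.g., 3 AAV classes (assumed)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One toy fine-tuning step on stand-in inputs (real inputs would be
# image-like representations built from nanopore current segments):
x, y = torch.rand(4, 3, 224, 224), torch.randint(0, 3, (4,))
model.train()
out = model(x)
logits = out.logits if not torch.is_tensor(out) else out  # train mode adds aux outputs
loss = nn.functional.cross_entropy(logits, y)
loss.backward()
optimizer.step()
print(loss.item())
```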
Affiliations
- Aminul Islam Khan: School of Mechanical and Materials Engineering, Washington State University, Pullman, WA, 99164, USA
- Min Jun Kim: Department of Mechanical Engineering, Southern Methodist University, Dallas, TX, 75275, USA
- Prashanta Dutta: School of Mechanical and Materials Engineering, Washington State University, Pullman, WA, 99164, USA
18
|
Özdemir Ö, Sönmez EB. Attention mechanism and mixup data augmentation for classification of COVID-19 Computed Tomography images. JOURNAL OF KING SAUD UNIVERSITY. COMPUTER AND INFORMATION SCIENCES 2022; 34:6199-6207. [PMID: 38620953 PMCID: PMC8280602 DOI: 10.1016/j.jksuci.2021.07.005] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/22/2021] [Revised: 07/01/2021] [Accepted: 07/07/2021] [Indexed: 12/21/2022]
Abstract
Coronavirus disease is spreading quickly all over the world, and the emergency situation remains out of control. The latest achievements of deep learning algorithms suggest using deep convolutional neural networks to implement computer-aided diagnostic systems for the automatic classification of COVID-19 CT images. In this paper, we propose to employ a feature-wise attention layer to enhance the discriminative features obtained by convolutional networks. Moreover, the baseline performance of the network is further improved using the mixup data augmentation technique. This work compares the proposed attention-based model against stacked attention networks, and traditional versus mixup data augmentation approaches. We found that the feature-wise attention extension, besides outperforming the stacked attention variants, achieves remarkable improvements over the baseline convolutional neural networks. Specifically, the ResNet50 architecture extended with a feature-wise attention layer obtained an accuracy score of 95.57%, which, to the best of our knowledge, sets the state of the art on the challenging COVID-CT dataset.
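As a point of reference for the augmentation step, here is a minimal sketch of mixup (Zhang et al., 2018) as it could be applied in a CT-image training loop; the alpha value and the surrounding model/optimizer are assumptions, not the paper's exact configuration.

```python
# Hedged sketch of mixup data augmentation for an image classifier.
import numpy as np
import torch
import torch.nn.functional as F

def mixup_batch(x, y, alpha=0.2):
    """Blend a batch with a shuffled copy of itself.

    Returns the mixed images, both label sets, and the mixing weight lam,
    so the loss can be interpolated the same way as the inputs.
    """
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0))
    mixed_x = lam * x + (1.0 - lam) * x[perm]
    return mixed_x, y, y[perm], lam

def mixup_loss(logits, y_a, y_b, lam):
    # Interpolate the cross-entropy between the two label assignments.
    return (lam * F.cross_entropy(logits, y_a)
            + (1.0 - lam) * F.cross_entropy(logits, y_b))

# Usage inside a training step (model and optimizer assumed defined):
#   mixed_x, y_a, y_b, lam = mixup_batch(images, labels)
#   loss = mixup_loss(model(mixed_x), y_a, y_b, lam)
```

Because the mixed image is a convex combination of two training samples, the loss must be interpolated with the same weight lam; that coupling is the whole trick.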
Collapse
Affiliation(s)
- Özgür Özdemir
- Computer Engineering Department, Istanbul Bilgi University, Turkey
| | | |
Collapse
|
19
|
Shia WC, Hsu FR, Dai ST, Guo SL, Chen DR. Semantic Segmentation of the Malignant Breast Imaging Reporting and Data System Lexicon on Breast Ultrasound Images by Using DeepLab v3. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22145352. [PMID: 35891030 PMCID: PMC9323504 DOI: 10.3390/s22145352] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/13/2022] [Revised: 07/14/2022] [Accepted: 07/15/2022] [Indexed: 05/26/2023]
Abstract
In this study, an advanced semantic segmentation method based on a deep convolutional neural network was applied to identify the Breast Imaging Reporting and Data System (BI-RADS) lexicon in breast ultrasound images, thereby facilitating image interpretation and diagnosis by providing radiologists with an objective second opinion. A total of 684 images (380 benign and 308 malignant tumours) from 343 patients (190 benign and 153 malignant breast tumour patients) were analysed in this study. Six malignancy-related standardised BI-RADS features were selected after analysis. The DeepLab v3+ architecture with four decoder networks was used, and their semantic segmentation performance was evaluated and compared. DeepLab v3+ with the ResNet-50 decoder showed the best performance in semantic segmentation, with a mean accuracy and mean intersection over union (IU) of 44.04% and 34.92%, respectively. The weighted IU was 84.36%. For diagnostic performance, the area under the curve was 83.32%. This study aimed to automate identification of the malignant BI-RADS lexicon on breast ultrasound images to facilitate diagnosis and improve its quality. The evaluation showed that DeepLab v3+ with the ResNet-50 decoder is suitable for this problem, offering a better balance of performance and computational resource usage than a fully connected network and the other decoders.
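For orientation, the following is a minimal sketch of a comparable setup using torchvision's built-in DeepLab v3 with a ResNet-50 backbone. Note the caveats: torchvision ships DeepLab v3 rather than v3+, and the class count (six BI-RADS features plus background) is our reading of the abstract, not the authors' released code.

```python
# Hedged sketch: DeepLab-style semantic segmentation with a ResNet-50
# backbone via torchvision; input size and class count are illustrative.
import torch
from torchvision import models

NUM_CLASSES = 7  # 6 malignancy-related BI-RADS features + background (assumed)

model = models.segmentation.deeplabv3_resnet50(
    weights=None,  # train the segmentation head from scratch
    weights_backbone=models.ResNet50_Weights.IMAGENET1K_V1,
    num_classes=NUM_CLASSES,
)
model.eval()

# A forward pass returns a dict; 'out' holds per-pixel class logits.
dummy = torch.randn(1, 3, 512, 512)      # placeholder ultrasound image
with torch.no_grad():
    logits = model(dummy)["out"]         # (1, NUM_CLASSES, 512, 512)
pred = logits.argmax(dim=1)              # per-pixel BI-RADS label map
```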
Collapse
Affiliation(s)
- Wei-Chung Shia
- Molecular Medicine Laboratory, Department of Research, Changhua Christian Hospital, Changhua 500, Taiwan
- Department of Information Engineering and Computer Science, Feng Chia University, Taichung 407, Taiwan; (F.-R.H.); (S.-T.D.)
| | - Fang-Rong Hsu
- Department of Information Engineering and Computer Science, Feng Chia University, Taichung 407, Taiwan; (F.-R.H.); (S.-T.D.)
| | - Seng-Tong Dai
- Department of Information Engineering and Computer Science, Feng Chia University, Taichung 407, Taiwan; (F.-R.H.); (S.-T.D.)
| | - Shih-Lin Guo
- Comprehensive Breast Cancer Center, Changhua Christian Hospital, Changhua 500, Taiwan;
| | - Dar-Ren Chen
- Comprehensive Breast Cancer Center, Changhua Christian Hospital, Changhua 500, Taiwan;
- School of Medicine, Chung Shan Medical University, Taichung 402, Taiwan
| |
Collapse
|
20
|
Image Moment-Based Features for Mass Detection in Breast US Images via Machine Learning and Neural Network Classification Models. INVENTIONS 2022. [DOI: 10.3390/inventions7020042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Differentiating between malignant and benign masses using machine learning for the recognition of breast ultrasound (BUS) images is a technique with good accuracy and precision, which helps doctors make a correct diagnosis. The method proposed in this paper integrates Hu's moments into the analysis of the breast tumor. The extracted features feed a k-nearest neighbor (k-NN) classifier and a radial basis function neural network (RBFNN) to classify breast tumors as benign or malignant. The raw images and the tumor masks provided as ground truth belong to the public digital BUS images database. Metrics such as accuracy, sensitivity, precision, and F1-score were used to evaluate the segmentation results and to select the Hu's moments with the best capacity to discriminate between malignant and benign breast tissues in BUS images. Regarding that selection, the k-NN classifier reached 85% accuracy for moment M1 and 80% for moment M5, while the RBFNN reached an accuracy of 76% for M1. The proposed method might be used to assist the clinical identification of breast cancer by providing a good combination of segmentation and Hu's moments.
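To illustrate the feature side of this approach, here is a minimal sketch of extracting log-scaled Hu moments from binary tumor masks and feeding them to a k-NN classifier. The synthetic ellipse/blob masks, k value, and split are illustrative stand-ins for the BUS database, not the paper's protocol.

```python
# Hedged sketch: Hu's invariant moments + k-NN on synthetic tumor masks.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def hu_features(mask):
    """Return the 7 log-scaled Hu moments of a binary mask."""
    hu = cv2.HuMoments(cv2.moments(mask)).flatten()
    # Log transform compresses the moments' very wide dynamic range.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# Synthetic stand-ins: smooth ellipses for "benign", ragged convex blobs
# for "malignant" (illustration only, not real BUS ground truth).
rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):
    for _ in range(40):
        mask = np.zeros((128, 128), np.uint8)
        if label == 0:
            cv2.ellipse(mask, (64, 64), (30, 18),
                        int(rng.integers(0, 180)), 0, 360, 255, -1)
        else:
            pts = rng.integers(20, 108, size=(8, 2)).astype(np.int32)
            cv2.fillPoly(mask, [cv2.convexHull(pts)], 255)
        X.append(hu_features(mask))
        y.append(label)

# Even/odd split into train and held-out sets.
knn = KNeighborsClassifier(n_neighbors=5).fit(X[::2], y[::2])
print("held-out accuracy:", knn.score(X[1::2], y[1::2]))
```

Because Hu moments are invariant to translation, scale, and rotation, they describe mask shape rather than position, which is what makes them usable as tumor-shape features here.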
Collapse
|
21
|
Hsieh YH, Hsu FR, Dai ST, Huang HY, Chen DR, Shia WC. Incorporating the Breast Imaging Reporting and Data System Lexicon with a Fully Convolutional Network for Malignancy Detection on Breast Ultrasound. Diagnostics (Basel) 2021; 12:66. [PMID: 35054233 PMCID: PMC8774546 DOI: 10.3390/diagnostics12010066] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Revised: 12/21/2021] [Accepted: 12/25/2021] [Indexed: 11/16/2022] Open
Abstract
In this study, we applied semantic segmentation using a fully convolutional deep learning network to identify characteristics of the Breast Imaging Reporting and Data System (BI-RADS) lexicon in breast ultrasound images, to facilitate the clinical classification of malignant tumors. Among 378 images (204 benign and 174 malignant images) from 189 patients (102 benign and 87 malignant breast tumor patients), we identified seven malignant characteristics related to the BI-RADS lexicon in breast ultrasound. The mean accuracy and mean IU of the semantic segmentation were 32.82% and 28.88%, respectively. The weighted intersection over union was 85.35%, and the area under the curve was 89.47%, showing better performance than similar semantic segmentation networks, SegNet and U-Net, on the same dataset. Our results suggest that a deep learning network combined with the BI-RADS lexicon can be an important supplemental tool when using ultrasound to diagnose breast malignancy.
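Since mean IU and weighted IU are the headline metrics here and in the preceding entry, a short sketch of how mean per-class IU is computed from two label maps may help; the toy 2-class maps are illustrative.

```python
# Hedged sketch: mean intersection-over-union (IU) from a confusion matrix.
import numpy as np

def mean_iu(pred, target, num_classes):
    """Mean per-class IoU between two integer label maps."""
    # Confusion matrix via bincount: rows = ground truth, cols = prediction.
    idx = target.ravel() * num_classes + pred.ravel()
    cm = np.bincount(idx, minlength=num_classes ** 2)
    cm = cm.reshape(num_classes, num_classes)
    tp = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - tp  # TP + FP + FN per class
    return (tp / np.maximum(union, 1)).mean()

# Toy example with two classes (0 = background, 1 = BI-RADS feature).
target = np.array([[0, 0, 1], [1, 1, 1], [0, 1, 1]])
pred   = np.array([[0, 1, 1], [1, 1, 1], [0, 0, 1]])
print(mean_iu(pred, target, num_classes=2))
```

Mean IU averages the per-class ratios equally, so rare lexicon classes drag it down; weighted IU instead weights classes by pixel frequency, which is why the weighted figures (85.35% here) are far higher than the mean ones.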
Collapse
Affiliation(s)
- Yung-Hsien Hsieh
- Department of Information Engineering and Computer Science, Feng Chia University, Taichung 40724, Taiwan; (Y.-H.H.); (F.-R.H.); (S.-T.D.)
| | - Fang-Rong Hsu
- Department of Information Engineering and Computer Science, Feng Chia University, Taichung 40724, Taiwan; (Y.-H.H.); (F.-R.H.); (S.-T.D.)
| | - Seng-Tong Dai
- Department of Information Engineering and Computer Science, Feng Chia University, Taichung 40724, Taiwan; (Y.-H.H.); (F.-R.H.); (S.-T.D.)
| | - Hsin-Ya Huang
- Comprehensive Breast Cancer Center, Changhua Christian Hospital, Changhua 500, Taiwan;
| | - Dar-Ren Chen
- Comprehensive Breast Cancer Center, Changhua Christian Hospital, Changhua 500, Taiwan;
- School of Medicine, Chung Shan Medical University, Taichung 40201, Taiwan
| | - Wei-Chung Shia
- Department of Information Engineering and Computer Science, Feng Chia University, Taichung 40724, Taiwan; (Y.-H.H.); (F.-R.H.); (S.-T.D.)
- Molecular Medicine Laboratory, Department of Research, Changhua Christian Hospital, Changhua 500, Taiwan
| |
Collapse
|
22
|
Meraj T, Alosaimi W, Alouffi B, Rauf HT, Kumar SA, Damaševičius R, Alyami H. A quantization assisted U-Net study with ICA and deep features fusion for breast cancer identification using ultrasonic data. PeerJ Comput Sci 2021; 7:e805. [PMID: 35036531 PMCID: PMC8725669 DOI: 10.7717/peerj-cs.805] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2021] [Accepted: 11/12/2021] [Indexed: 06/14/2023]
Abstract
Breast cancer is one of the leading causes of death in women worldwide; the rapid increase in breast cancer has brought about more accessible diagnosis resources. The ultrasonic modality for breast cancer diagnosis is relatively cost-effective and valuable. Lesion isolation in ultrasonic images is a challenging task owing to the intensity similarity between lesions and the surrounding tissue. Accurate detection of breast lesions in ultrasonic breast images can reduce death rates. In this research, a quantization-assisted U-Net approach for the segmentation of breast lesions is proposed. It comprises two steps: (1) U-Net and (2) quantization. Quantization assists the U-Net-based segmentation in isolating exact lesion areas from sonography images. The independent component analysis (ICA) method is then applied to the isolated lesions to extract features, which are fused with deep automatic features. Public ultrasonic-modality datasets, namely the Breast Ultrasound Images Dataset (BUSI) and the Open Access Database of Raw Ultrasonic Signals (OASBUD), are used for evaluation and comparison. The same features were extracted from the OASBUD data; however, classification was performed after feature regularization using the lasso method. The obtained results allow us to propose a computer-aided diagnosis (CAD) system for breast cancer identification using ultrasonic modalities.
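The feature side of this pipeline (ICA features, fusion with deep features, lasso-style selection) can be sketched as follows. The dimensions, the synthetic data, and the use of an L1-penalized logistic classifier as the lasso step are assumptions standing in for the paper's configuration.

```python
# Hedged sketch: FastICA features from lesion patches, fused with
# placeholder deep features, then L1 ("lasso") regularized selection.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples = 120
patches = rng.normal(size=(n_samples, 64 * 64))  # flattened lesion patches (synthetic)
deep_feats = rng.normal(size=(n_samples, 256))   # stand-in for CNN features
labels = rng.integers(0, 2, size=n_samples)      # 0 = benign, 1 = malignant

# ICA projects the lesion patches onto statistically independent components.
ica = FastICA(n_components=32, random_state=0)
ica_feats = ica.fit_transform(patches)

# Feature fusion: simple concatenation of ICA and deep features.
fused = np.hstack([ica_feats, deep_feats])

# The L1 penalty zeroes out uninformative fused features, acting as the
# regularization/selection step before classification.
clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
clf.fit(fused, labels)
print("features retained by the lasso penalty:",
      np.count_nonzero(clf[-1].coef_))
```

Shrinking C strengthens the penalty and retains fewer features, which is the knob such a pipeline would tune on validation data.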
Collapse
Affiliation(s)
- Talha Meraj
- Department of Computer Science, COMSATS University Islamabad-Wah Campus, Wah Cantt, Pakistan
| | - Wael Alosaimi
- Department of Information Technology, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
| | - Bader Alouffi
- Department of Computer Science, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
| | - Hafiz Tayyab Rauf
- Department of Computer Science, Faculty of Engineering & Informatics, University of Bradford, Bradford, United Kingdom
| | - Swarn Avinash Kumar
- Department of Information Technology, Indian Institute of Information Technology, Uttar Pradesh, Jhalwa, Prayagraj, India
| | | | - Hashem Alyami
- Department of Computer Science, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
| |
Collapse
|
23
|
Han J, Wang D, Li Z, Dey N, Crespo RG, Shi F. Plantar pressure image classification employing residual-network model-based conditional generative adversarial networks: a comparison of normal, planus, and talipes equinovarus feet. Soft comput 2021. [DOI: 10.1007/s00500-021-06073-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|