1. Jaiswal A, Gianchandani N, Singh D, Kumar V, Kaur M. Classification of the COVID-19 infected patients using DenseNet201 based deep transfer learning. J Biomol Struct Dyn 2020; 39:5682-5689. [PMID: 32619398] [DOI: 10.1080/07391102.2020.1788642] [Citation(s) in RCA: 201]
Abstract
Deep learning models are widely used in the automatic analysis of radiological images. These techniques can train the weights of networks on large datasets as well as fine-tune the weights of pre-trained networks on small datasets. Owing to the small COVID-19 dataset available, pre-trained neural networks can be used for the diagnosis of coronavirus. However, the application of these techniques to chest CT images has so far been very limited. Hence, the main aim of this paper is to use pre-trained deep learning architectures as an automated tool for the detection and diagnosis of COVID-19 in chest CT. A DenseNet201-based deep transfer learning (DTL) model is proposed to classify patients as COVID-19 (+) or COVID-19 (-). The proposed model extracts features by using its own weights learned on the ImageNet dataset along with a convolutional neural structure. Extensive experiments are performed to evaluate the performance of the proposed DTL model on COVID-19 chest CT scan images. Comparative analyses reveal that the proposed DTL-based COVID-19 classification model outperforms the competitive approaches.
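The recipe this abstract describes, an ImageNet-pretrained DenseNet201 backbone with a small classification head for the binary CT task, can be sketched in a few lines. Below is a minimal illustration in TensorFlow/Keras; the ct_scans/ directory layout and all hyperparameters are assumptions, not the paper's settings.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet201

# ImageNet-pretrained backbone used as a frozen feature extractor.
base = DenseNet201(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # COVID-19 (+) vs COVID-19 (-)
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical layout: ct_scans/covid/*.png and ct_scans/non_covid/*.png
# (input scaling such as densenet.preprocess_input is omitted for brevity).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "ct_scans", image_size=(224, 224), batch_size=32, label_mode="binary")
model.fit(train_ds, epochs=10)
```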
2. Islam MM, Karray F, Alhajj R, Zeng J. A Review on Deep Learning Techniques for the Diagnosis of Novel Coronavirus (COVID-19). IEEE Access 2021; 9:30551-30572. [PMID: 34976571] [PMCID: PMC8675557] [DOI: 10.1109/access.2021.3058537] [Citation(s) in RCA: 119]
Abstract
The novel coronavirus (COVID-19) outbreak has created a calamitous situation all over the world and has become one of the most acute and severe ailments of the past hundred years. The prevalence of COVID-19 is rapidly rising every day throughout the globe. Although no vaccines for this pandemic have been discovered yet, deep learning techniques have proved themselves to be a powerful tool in the arsenal used by clinicians for the automatic diagnosis of COVID-19. This paper aims to overview recently developed systems based on deep learning techniques using different medical imaging modalities such as Computed Tomography (CT) and X-ray. The review specifically discusses the systems developed for COVID-19 diagnosis using deep learning techniques and provides insights into the well-known datasets used to train these networks. It also highlights the data partitioning techniques and various performance measures developed by researchers in this field. A taxonomy is drawn to categorize the recent works for proper insight. Finally, we conclude by addressing the challenges associated with the use of deep learning methods for COVID-19 detection and probable future trends in this research area. The aim of this paper is to facilitate experts (medical or otherwise) and technicians in understanding how deep learning techniques are used in this regard and how they can be further utilized to combat the outbreak of COVID-19.
3. Li F, Liu Z, Chen H, Jiang M, Zhang X, Wu Z. Automatic Detection of Diabetic Retinopathy in Retinal Fundus Photographs Based on Deep Learning Algorithm. Transl Vis Sci Technol 2019; 8:4. [PMID: 31737428] [PMCID: PMC6855298] [DOI: 10.1167/tvst.8.6.4] [Citation(s) in RCA: 56]
Abstract
PURPOSE To achieve automatic diabetic retinopathy (DR) detection in retinal fundus photographs through a deep transfer learning approach using the Inception-v3 network. METHODS A total of 19,233 color digital fundus images were retrospectively obtained from 5278 adult patients presenting for DR screening. Of these, 8816 images passed image-quality review and were graded as no apparent DR (1374 images), mild nonproliferative DR (NPDR) (2152 images), moderate NPDR (2370 images), severe NPDR (1984 images), and proliferative DR (PDR) (936 images) by eight retinal experts according to the International Clinical Diabetic Retinopathy severity scale. After image preprocessing, 7935 DR images were selected from the above categories as the training dataset, while the remaining images were used as the validation dataset. We introduced a 10-fold cross-validation strategy to assess and optimize our model. We also selected the public, independent Messidor-2 dataset to test the performance of our model. For discrimination between no referral (no apparent DR and mild NPDR) and referral (moderate NPDR, severe NPDR, and PDR), we also computed prediction accuracy, sensitivity, specificity, area under the receiver operating characteristic curve (AUC), and κ value. RESULTS The proposed approach achieved a high classification accuracy of 93.49% (95% confidence interval [CI], 93.13%-93.85%), with a 96.93% sensitivity (95% CI, 96.35%-97.51%) and a 93.45% specificity (95% CI, 93.12%-93.79%), while the AUC was up to 0.9905 (95% CI, 0.9887-0.9923) on the independent test dataset. The κ value of our best model was 0.919, while the three experts had κ values of 0.906, 0.931, and 0.914, respectively. CONCLUSIONS This approach could automatically detect DR with excellent sensitivity, accuracy, and specificity and could aid in making referral recommendations for further evaluation and treatment with high reliability. TRANSLATIONAL RELEVANCE This approach has great value in early DR screening using retinal fundus photographs.
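As a rough sketch of the fine-tuning setup described above (Inception-v3 transferred to five DR grades), the snippet below freezes the early layers and retrains the rest; the unfreezing depth, optimizer, and head are assumptions for illustration, not the study's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3))
for layer in base.layers[:-50]:   # keep early layers frozen, fine-tune the top
    layer.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.4),
    # no apparent DR, mild NPDR, moderate NPDR, severe NPDR, PDR
    layers.Dense(5, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(1e-3, momentum=0.9),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```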
4. Wang S, Li Z, Yu Y, Xu J. Folding Membrane Proteins by Deep Transfer Learning. Cell Syst 2017; 5:202-211.e3. [PMID: 28957654] [PMCID: PMC5637520] [DOI: 10.1016/j.cels.2017.09.001] [Citation(s) in RCA: 45]
Abstract
Computational elucidation of membrane protein (MP) structures is challenging, partially due to the lack of sufficient solved structures for homology modeling. Here, we describe a high-throughput deep transfer learning method that first predicts MP contacts by learning from non-MPs and then predicts 3D structure models using the predicted contacts as distance restraints. Tested on 510 non-redundant MPs, our method has contact prediction accuracy at least 0.18 better than existing methods, predicts correct folds for 218 MPs, and generates 3D models with root-mean-square deviation (RMSD) less than 4 and 5 Å for 57 and 108 MPs, respectively. A rigorous blind test in the continuous automated model evaluation project shows that our method predicted high-resolution 3D models for two recent test MPs of 210 residues with RMSD ∼2 Å. We estimate that our method could predict correct folds for 1,345-1,871 reviewed human multi-pass MPs, including a few hundred new folds, which shall facilitate the discovery of drugs targeting MPs.
5. Kandaswamy C, Silva LM, Alexandre LA, Santos JM. High-Content Analysis of Breast Cancer Using Single-Cell Deep Transfer Learning. J Biomol Screen 2016; 21:252-259. [PMID: 26746583] [DOI: 10.1177/1087057115623451] [Citation(s) in RCA: 44]
Abstract
High-content analysis has revolutionized cancer drug discovery by identifying substances that alter the phenotype of a cell, preventing tumor growth and metastasis. The high-resolution biofluorescence images from such assays allow precise quantitative measures, enabling the distinction of small-molecule effects on a host cell from those on a tumor. In this work, we are particularly interested in applying deep neural networks (DNNs), a cutting-edge machine learning method, to the classification of compounds into chemical mechanisms of action (MOAs). Compound classification has previously been performed using image-based profiling methods, sometimes combined with feature-reduction methods such as principal component analysis or factor analysis. In this article, we map the input features of each cell to a particular MOA class without using any treatment-level profiles or feature-reduction methods. To the best of our knowledge, this is the first application of DNNs in this domain leveraging single-cell information. Furthermore, we use deep transfer learning (DTL) to alleviate the computationally demanding effort of searching the huge parameter space of a DNN. Results show that, using this approach, we obtain a 30% speedup and a 2% accuracy improvement.
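The core transfer step this abstract relies on, reusing weights learned on a source task to initialize a target network instead of re-searching the parameter space from scratch, looks roughly like the sketch below; the layer sizes, feature counts, and label counts are illustrative, not the paper's.

```python
from tensorflow.keras import layers, models

def make_mlp(n_features, n_classes):
    """Small dense network over per-cell image features (sizes illustrative)."""
    return models.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(256, activation="relu"),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

source = make_mlp(n_features=500, n_classes=8)    # trained on the source MOA task
# ... source.fit(...) on source-task single-cell data ...

target = make_mlp(n_features=500, n_classes=12)   # new MOA label set
src_hidden = [l for l in source.layers if isinstance(l, layers.Dense)][:-1]
dst_hidden = [l for l in target.layers if isinstance(l, layers.Dense)][:-1]
for s, d in zip(src_hidden, dst_hidden):
    d.set_weights(s.get_weights())  # transfer hidden layers; output layer stays new
# target.fit(...) then starts from transferred weights rather than random ones.
```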
6. Im S, Hyeon J, Rha E, Lee J, Choi HJ, Jung Y, Kim TJ. Classification of Diffuse Glioma Subtype from Clinical-Grade Pathological Images Using Deep Transfer Learning. Sensors (Basel) 2021; 21:3500. [PMID: 34067934] [PMCID: PMC8156672] [DOI: 10.3390/s21103500] [Citation(s) in RCA: 21]
Abstract
Diffuse gliomas are the most common primary brain tumors and vary considerably in their morphology, location, genetic alterations, and response to therapy. In 2016, the World Health Organization (WHO) provided new guidelines for making an integrated diagnosis of diffuse gliomas that incorporates both morphologic and molecular features. In this study, we demonstrate how deep learning approaches can be used for the automatic classification of glioma subtypes and grading using whole-slide images obtained from routine clinical practice. A deep transfer learning method using the ResNet50V2 model was trained to classify subtypes and grades of diffuse gliomas according to the WHO's new 2016 classification. The balanced accuracy of the diffuse glioma subtype classification model with majority voting was 0.8727. These results highlight an emerging role for deep learning in the future practice of pathologic diagnosis.
7. Bo L, Zhang Z, Jiang Z, Yang C, Huang P, Chen T, Wang Y, Yu G, Tan X, Cheng Q, Li D, Liu Z. Differentiation of Brain Abscess From Cystic Glioma Using Conventional MRI Based on Deep Transfer Learning Features and Hand-Crafted Radiomics Features. Front Med (Lausanne) 2021; 8:748144. [PMID: 34869438] [PMCID: PMC8636043] [DOI: 10.3389/fmed.2021.748144] [Citation(s) in RCA: 20]
Abstract
Objectives: To develop and validate a model for distinguishing brain abscess from cystic glioma by combining deep transfer learning (DTL) features and hand-crafted radiomics (HCR) features in conventional T1-weighted imaging (T1WI) and T2-weighted imaging (T2WI). Methods: This single-center retrospective analysis involved 188 patients with pathologically proven brain abscess (102) or cystic glioma (86). One thousand DTL features and 105 HCR features were extracted from the T1WI and T2WI of the patients. Three feature selection methods and four classifiers, namely k-nearest neighbors (KNN), random forest classifier (RFC), logistic regression (LR), and support vector machine (SVM), were compared for distinguishing brain abscess from cystic glioma. The best feature combination and classifier were chosen according to quantitative metrics including area under the curve (AUC), Youden index, and accuracy. Results: In most cases, deep learning-based radiomics (DLR) features, i.e., DTL features combined with HCR features, yielded a higher accuracy than HCR or DTL features alone for distinguishing brain abscesses from cystic gliomas. The AUC values of the model established on the DLR features in T2WI were 0.86 (95% CI: 0.81, 0.91) in the training cohort and 0.85 (95% CI: 0.75, 0.95) in the test cohort, respectively. Conclusions: The model established with the DLR features can distinguish brain abscess from cystic glioma efficiently, providing a useful, inexpensive, convenient, and non-invasive method for differential diagnosis. This is the first time conventional MRI radiomics has been applied to identify these diseases, and the combination of HCR and DTL features achieves impressive performance.
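A schematic of the comparison pipeline above, concatenating the DTL and HCR feature vectors into the fused "DLR" representation and scoring the four classifiers by cross-validated AUC, could look like the following; the random arrays merely stand in for the precomputed per-patient features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
dtl = rng.normal(size=(188, 1000))    # stand-in for the 1000 DTL features
hcr = rng.normal(size=(188, 105))     # stand-in for the 105 HCR features
y = rng.integers(0, 2, size=188)      # 1 = abscess, 0 = cystic glioma

X = np.hstack([dtl, hcr])             # fused "DLR" feature vector
classifiers = [("KNN", KNeighborsClassifier()),
               ("RFC", RandomForestClassifier(random_state=0)),
               ("LR", LogisticRegression(max_iter=5000)),
               ("SVM", SVC(probability=True))]
for name, clf in classifiers:
    pipe = make_pipeline(StandardScaler(), clf)   # scale, then classify
    auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {auc:.2f}")
```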
8. Mahmood T, Li J, Pei Y, Akhtar F. An Automated In-Depth Feature Learning Algorithm for Breast Abnormality Prognosis and Robust Characterization from Mammography Images Using Deep Transfer Learning. Biology (Basel) 2021; 10:859. [PMID: 34571736] [PMCID: PMC8468800] [DOI: 10.3390/biology10090859] [Citation(s) in RCA: 16]
Abstract
BACKGROUND Diagnosing breast cancer masses and calcification clusters has paramount significance in mammography, which aids in mitigating the disease's complexities and curing it at early stages. However, a wrong mammogram interpretation may lead to an unnecessary biopsy of false-positive findings, which reduces the patient's survival chances. Consequently, approaches that learn to discern breast masses can reduce the number of misconceptions and incorrect diagnoses. Conventionally used classification models focus on feature extraction techniques specific to a particular problem based on domain information. Deep learning strategies are becoming promising alternatives to solve the many challenges of feature-based approaches. METHODS This study introduces a convolutional neural network (ConvNet)-based deep learning method to extract features at varying densities and discern normal and suspected regions in mammography. Two different experiments were carried out to make an accurate diagnosis and classification. The first experiment consisted of five end-to-end pre-trained and fine-tuned deep convolutional neural networks (DCNNs), which are among the most frequently used image interpretation and classification models, including VGGNet, GoogLeNet, MobileNet, ResNet, and DenseNet. In the second experiment, the in-depth features extracted by the ConvNet were used to train a support vector machine algorithm to achieve excellent performance. Moreover, this study covers data cleaning, preprocessing, and data augmentation to improve mass recognition accuracy. The efficacy of all models is evaluated by training and testing on three mammography datasets, exhibiting remarkable results. RESULTS Our deep learning ConvNet+SVM model obtained a discriminative training accuracy of 97.7% and a validation accuracy of 97.8%; in contrast, VGGNet16 yielded 90.2%, VGGNet19 93.5%, GoogLeNet 63.4%, MobileNetV2 82.9%, ResNet50 75.1%, and DenseNet121 72.9%. CONCLUSIONS The proposed model's improvement and validation are appropriate for conventional pathological practices, conceivably reducing the pathologist's strain in predicting clinical outcomes by analyzing patients' mammography images.
9. Gouda W, Almurafeh M, Humayun M, Jhanjhi NZ. Detection of COVID-19 Based on Chest X-rays Using Deep Learning. Healthcare (Basel) 2022; 10:343. [PMID: 35206957] [PMCID: PMC8872326] [DOI: 10.3390/healthcare10020343] [Citation(s) in RCA: 15]
Abstract
The coronavirus disease (COVID-19) is rapidly spreading around the world. Early diagnosis and isolation of COVID-19 patients have proven crucial in slowing the disease's spread. One of the best options for detecting COVID-19 reliably and easily is to use deep learning (DL) strategies. Two different DL approaches based on a pretrained neural network model (ResNet-50) for COVID-19 detection using chest X-ray (CXR) images are proposed in this study. Augmenting, enhancing, normalizing, and resizing CXR images to a fixed size are all part of the preprocessing stage. This research proposes a DL method for classifying CXR images based on an ensemble employing multiple runs of a modified version of the ResNet-50. The proposed system is evaluated against two publicly available benchmark datasets that are frequently used by researchers: COVID-19 Image Data Collection (IDC) and CXR Images (Pneumonia). Based on the performance results obtained, the proposed system demonstrates its dominance over existing methods such as VGG or DenseNet, with values exceeding 99.63% in many metrics, including accuracy, precision, recall, F1-score, and area under the curve (AUC).
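The ensemble step described above, multiple independently trained runs of a modified ResNet-50 combined at prediction time, reduces to soft voting over the runs' class probabilities. A minimal sketch, with hypothetical checkpoint files:

```python
import numpy as np
import tensorflow as tf

# Hypothetical checkpoints from independent training runs of the modified ResNet-50.
paths = ["resnet50_run1.keras", "resnet50_run2.keras", "resnet50_run3.keras"]
runs = [tf.keras.models.load_model(p) for p in paths]

def ensemble_predict(batch):
    """Soft voting: average the class probabilities over all runs."""
    probs = np.stack([m.predict(batch, verbose=0) for m in runs])
    return probs.mean(axis=0)
```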
10. Wang H, Yu Y, Cai Y, Chen L, Chen X. A Vehicle Recognition Algorithm Based on Deep Transfer Learning with a Multiple Feature Subspace Distribution. Sensors (Basel) 2018; 18:4109. [PMID: 30477172] [PMCID: PMC6308963] [DOI: 10.3390/s18124109] [Citation(s) in RCA: 14]
Abstract
Vehicle detection is a key component of environmental sensing systems for Intelligent Vehicles (IVs). Traditional shallow-model and offline-learning-based vehicle detection methods cannot satisfy the real-world challenges of environmental complexity and scene dynamics. Focusing on these problems, this work proposes a vehicle detection algorithm based on a multiple-feature-subspace-distribution deep model with online transfer learning. Based on the multiple feature subspace distribution hypothesis, a deep model is established in which multiple Restricted Boltzmann Machines (RBMs) construct the lower layers and a Deep Belief Network (DBN) composes the superstructure. For this deep model, an unsupervised feature extraction method based on sparse constraints is applied. Then, a transfer learning method with online sample generation is proposed based on the deep model. Finally, the entire classifier is retrained online with supervised learning. The experiments are conducted on the KITTI road image datasets. The performance of the proposed method is compared with many state-of-the-art methods, and the results demonstrate that the proposed deep transfer learning-based algorithm outperforms them.
11. A Comparison of Computer-Aided Diagnosis Schemes Optimized Using Radiomics and Deep Transfer Learning Methods. Bioengineering (Basel) 2022; 9:256. [PMID: 35735499] [PMCID: PMC9219621] [DOI: 10.3390/bioengineering9060256] [Citation(s) in RCA: 11]
Abstract
Objective: Radiomics and deep transfer learning are two popular technologies used to develop computer-aided detection and diagnosis (CAD) schemes of medical images. This study aims to investigate and compare the advantages and potential limitations of applying these two technologies in developing CAD schemes. Methods: A relatively large and diverse retrospective dataset including 3000 digital mammograms was assembled, in which 1496 images depicted malignant lesions and 1504 images depicted benign lesions. Two CAD schemes were developed to classify breast lesions. The first scheme was developed in four steps: applying an adaptive multi-layer topographic region growing algorithm to segment lesions, computing initial radiomics features, applying a principal component algorithm to generate an optimal feature vector, and building a support vector machine classifier. The second CAD scheme was built on a pre-trained residual net architecture (ResNet50) as a transfer learning model to classify breast lesions. Both CAD schemes were trained and tested using a 10-fold cross-validation method. Several score fusion methods were also investigated to classify breast lesions. CAD performance was evaluated and compared by the areas under the ROC curve (AUC). Results: The ResNet50 model-based CAD scheme yielded AUC = 0.85 ± 0.02, which was significantly higher than the radiomics feature-based CAD scheme with AUC = 0.77 ± 0.02 (p < 0.01). Additionally, the fusion of classification scores generated by the two CAD schemes did not further improve classification performance. Conclusion: This study demonstrates that deep transfer learning is more efficient for developing CAD schemes and enables higher lesion classification performance than CAD schemes developed using radiomics-based technology.
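Score fusion, the last technique mentioned in the Methods, simply combines the per-lesion outputs of the two CAD schemes before thresholding. One plausible rule, a weighted average (an assumption; the paper tested several fusion methods), is sketched below with toy scores:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def fuse_scores(radiomics_scores, resnet_scores, w=0.5):
    """Weighted average of the two schemes' per-lesion malignancy scores."""
    return w * np.asarray(radiomics_scores) + (1 - w) * np.asarray(resnet_scores)

# Toy per-lesion labels and scores, for illustration only.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
s_radiomics = np.array([0.7, 0.4, 0.6, 0.8, 0.3, 0.5, 0.9, 0.2])
s_resnet = np.array([0.8, 0.2, 0.7, 0.9, 0.4, 0.3, 0.8, 0.1])
print("fused AUC:", roc_auc_score(y_true, fuse_scores(s_radiomics, s_resnet)))
```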
12. Xuan J, Ke B, Ma W, Liang Y, Hu W. Spinal disease diagnosis assistant based on MRI images using deep transfer learning methods. Front Public Health 2023; 11:1044525. [PMID: 36908475] [PMCID: PMC9998513] [DOI: 10.3389/fpubh.2023.1044525] [Citation(s) in RCA: 10]
Abstract
Introduction: In light of the potential problems of missed diagnosis and misdiagnosis of spinal diseases caused by differences in experience and by fatigue, this paper investigates the use of artificial intelligence technology for the auxiliary diagnosis of spinal diseases. Methods: Clinically experienced doctors used the LabelImg tool to label the MRIs of 604 patients. Then, to select an appropriate object detection algorithm, deep transfer learning models of YOLOv3, YOLOv5, and PP-YOLOv2 were created and trained on the Baidu PaddlePaddle framework. The experimental results showed that the PP-YOLOv2 model achieved 90.08% overall accuracy in the diagnosis of normal, IVD bulge, and spondylolisthesis, 27.5 and 3.9 percentage points higher than YOLOv3 and YOLOv5, respectively. Finally, an intelligent spine assistant diagnostic software based on the PP-YOLOv2 model was created and made available to the doctors in spine and osteopathic surgery at Guilin People's Hospital. Results and discussion: The software automatically provides an auxiliary diagnosis in 14.5 s on a standard computer, much faster than the roughly 10 min doctors typically need to diagnose a human spine, and its accuracy of 98% is comparable to that of experienced doctors across the diagnostic methods compared. It significantly improves doctors' working efficiency, reduces missed diagnoses and misdiagnoses, and demonstrates the efficacy of the developed intelligent spinal auxiliary diagnosis software.
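The deployed detector is PP-YOLOv2 on PaddlePaddle, but the inference pattern for this family of one-stage detectors can be illustrated with the public YOLOv5 hub API, used here purely as a stand-in; the weights file and image path are hypothetical.

```python
import torch

# Load custom-trained weights through the YOLOv5 hub entry point (a stand-in for
# the paper's PP-YOLOv2 deployment; "spine_best.pt" is a hypothetical file).
model = torch.hub.load("ultralytics/yolov5", "custom", path="spine_best.pt")

results = model("patient_mri_slice.png")  # boxes, class labels, confidences
results.print()                           # e.g. normal / IVD bulge / spondylolisthesis
detections = results.pandas().xyxy[0]     # detections as a pandas DataFrame
```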
13. Tariq H, Rashid M, Javed A, Zafar E, Alotaibi SS, Zia MYI. Performance Analysis of Deep-Neural-Network-Based Automatic Diagnosis of Diabetic Retinopathy. Sensors (Basel) 2021; 22:205. [PMID: 35009747] [PMCID: PMC8749542] [DOI: 10.3390/s22010205] [Citation(s) in RCA: 9]
Abstract
Diabetic retinopathy (DR) is a human eye disease that affects people suffering from diabetes. It causes damage to their eyes, including vision loss. It is treatable; however, it takes a long time to diagnose and may require many eye exams. Early detection of DR may prevent or delay the vision loss. Therefore, a robust, automatic, computer-based diagnosis of DR is essential. Currently, deep neural networks are being utilized in numerous medical areas to diagnose various diseases. Consequently, deep transfer learning is utilized in this article. We employ five convolutional-neural-network-based designs (AlexNet, GoogleNet, Inception V4, Inception ResNet V2, and ResNeXt-50). A collection of DR pictures is created, and the collected images are labeled with an appropriate treatment approach. This automates the diagnosis and assists patients through subsequent therapies. Furthermore, to identify the severity of DR retina pictures, we use our own dataset to train deep convolutional neural networks (CNNs). Experimental results reveal that, of all the pre-trained models, Se-ResNeXt-50 obtains the best classification accuracy of 97.53% for our dataset. Moreover, we perform five different experiments on each CNN architecture. As a result, a minimum accuracy of 84.01% is achieved for the five-grade classification.
14. Lehmler SJ, Saif-Ur-Rehman M, Tobias G, Iossifidis I. Deep transfer learning compared to subject-specific models for sEMG decoders. J Neural Eng 2022; 19. [PMID: 36206722] [DOI: 10.1088/1741-2552/ac9860] [Citation(s) in RCA: 9]
Abstract
Objective. Accurate decoding of surface electromyography (sEMG) is pivotal for muscle-to-machine interfaces and their applications, e.g., rehabilitation therapy. sEMG signals have high inter-subject variability due to various factors, including skin thickness, body fat percentage, and electrode placement. Deep learning algorithms require long training times and tend to overfit if only few samples are available. In this study, we aim to investigate methods to calibrate deep learning models to a new user when only a limited amount of training data is available. Approach. Two methods are commonly used in the literature: subject-specific modeling and transfer learning. Here, we investigate the effectiveness of transfer learning using weight initialization for the recalibration of two different pretrained deep learning models on new subjects' data and compare their performance to subject-specific models. We evaluate the two models on three publicly available databases (Non-Invasive Adaptive Prosthetics databases 2-4) and compare the performance of both calibration schemes in terms of accuracy, required training data, and calibration time. Main results. On average over all settings, our transfer learning approach improves by 5 percentage points on the pretrained models without fine-tuning and by 12 percentage points on the subject-specific models, while being trained for 22% fewer epochs on average. Our results indicate that transfer learning enables faster learning on fewer training samples than user-specific models. Significance. To the best of our knowledge, this is the first comparison of subject-specific modeling and transfer learning. These approaches are ubiquitously used in the field of sEMG decoding, but the lack of comparative studies until now has made it difficult for scientists to assess appropriate calibration schemes. Our results guide engineers evaluating similar use cases.
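The weight-initialization scheme compared above amounts to loading pretrained decoder weights before fine-tuning on the new subject's few calibration recordings, instead of starting from random weights. A minimal PyTorch sketch, with the architecture, channel counts, and checkpoint name all assumed:

```python
import torch
import torch.nn as nn

class EMGDecoder(nn.Module):
    """Toy sEMG gesture decoder over (batch, channels, time) windows."""
    def __init__(self, n_channels=12, n_gestures=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, n_gestures),
        )

    def forward(self, x):
        return self.net(x)

model = EMGDecoder()
# Transfer: initialize from weights pretrained on other subjects (hypothetical file).
model.load_state_dict(torch.load("pretrained_multi_subject.pt"))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # gentle fine-tuning
# ... then train for a few epochs on the new subject's calibration data ...
```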
15. Hashimoto H, Kameda S, Maezawa H, Oshino S, Tani N, Khoo HM, Yanagisawa T, Yoshimine T, Kishima H, Hirata M. A Swallowing Decoder Based on Deep Transfer Learning: AlexNet Classification of the Intracranial Electrocorticogram. Int J Neural Syst 2020; 31:2050056. [PMID: 32938263] [DOI: 10.1142/s0129065720500562] [Citation(s) in RCA: 9]
Abstract
To realize a brain-machine interface to assist swallowing, neural signal decoding is indispensable. Eight participants with temporal-lobe intracranial electrode implants for epilepsy were asked to swallow during electrocorticogram (ECoG) recording. Raw ECoG signals, or certain frequency bands of the ECoG power, were converted into images whose vertical axis was electrode number and whose horizontal axis was time in milliseconds, and these images were used as training data. The data were classified with four labels (Rest, Mouth open, Water injection, and Swallowing). Deep transfer learning was carried out using AlexNet, with power in the high-γ band (75-150 Hz) as the training set. Accuracy reached 74.01%, sensitivity reached 82.51%, and specificity reached 95.38%. Using the raw ECoG signals, however, the accuracy obtained was 76.95%, comparable to that of the high-γ power. We demonstrated that a version of AlexNet pre-trained with visually meaningful images can be used for transfer learning of visually meaningless images made up of ECoG signals. Moreover, we could achieve high decoding accuracy using the raw ECoG signals, allowing us to dispense with the conventional extraction of high-γ power. Thus, the images derived from the raw ECoG signals were equivalent to those derived from the high-γ band for deep transfer learning.
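The signal-to-image conversion at the heart of this study, rendering an (electrodes x time) ECoG window as a grayscale image and replicating it across three channels so a pretrained AlexNet can consume it, can be sketched as follows; the window size and four-class head are illustrative.

```python
import numpy as np
import torch
import torch.nn.functional as F
import torchvision.models as tvm

ecog = np.random.randn(60, 224)  # stand-in for a 60-electrode x 224-ms window
img = (ecog - ecog.min()) / (ecog.max() - ecog.min() + 1e-8)  # scale to [0, 1]

x = torch.tensor(img, dtype=torch.float32)[None, None]        # 1 x 1 x H x W
x = F.interpolate(x, size=(224, 224), mode="bilinear", align_corners=False)
x = x.repeat(1, 3, 1, 1)                                      # gray -> 3 channels

alexnet = tvm.alexnet(weights=tvm.AlexNet_Weights.IMAGENET1K_V1)
alexnet.classifier[6] = torch.nn.Linear(4096, 4)  # Rest/Mouth open/Water/Swallowing
logits = alexnet(x)
```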
16. Werner J, Kronberg RM, Stachura P, Ostermann PN, Müller L, Schaal H, Bhatia S, Kather JN, Borkhardt A, Pandyra AA, Lang KS, Lang PA. Deep Transfer Learning Approach for Automatic Recognition of Drug Toxicity and Inhibition of SARS-CoV-2. Viruses 2021; 13:610. [PMID: 33918368] [PMCID: PMC8066066] [DOI: 10.3390/v13040610] [Citation(s) in RCA: 8]
Abstract
Severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) causes COVID-19 and is responsible for the ongoing pandemic. Screening of potential antiviral drugs against SARS-CoV-2 depends on in vitro experiments, which are based on the quantification of the virus titer. Here, we used virus-induced cytopathic effects (CPE) in brightfield microscopy of SARS-CoV-2-infected monolayers to quantify the virus titer. Images were classified using deep transfer learning (DTL) that fine-tunes the last layers of a pre-trained ResNet18 (ImageNet). To exclude toxic concentrations of potential drugs, the network was expanded to include a toxicity score (TOX) that detected cell death (CPETOXnet). With this analytic tool, the inhibitory effects of chloroquine, hydroxychloroquine, remdesivir, and emetine were validated. Taken together, we developed a simple method and provided an open-access implementation to quantify SARS-CoV-2 titers and drug toxicity in experimental settings, which may be adaptable to assays with other viruses. The quantification of virus titers from brightfield images could accelerate the experimental approach for antiviral testing.
17. Ren B, Wu Y, Huang L, Zhang Z, Huang B, Zhang H, Ma J, Li B, Liu X, Wu G, Zhang J, Shen L, Liu Q, Ni J. Deep transfer learning of structural magnetic resonance imaging fused with blood parameters improves brain age prediction. Hum Brain Mapp 2022; 43:1640-1656. [PMID: 34913545] [PMCID: PMC8886664] [DOI: 10.1002/hbm.25748] [Citation(s) in RCA: 7]
Abstract
Machine learning has been applied to neuroimaging data for estimating brain age and capturing early cognitive impairment in neurodegenerative diseases. Blood parameters such as neurofilament light chain are associated with aging. To improve brain age predictive accuracy, we constructed a model based on both brain structural magnetic resonance imaging (sMRI) and blood parameters. Healthy subjects (n = 93; 37 males; aged 50-85 years) were recruited. A deep learning network was first pretrained on a large set of MRI scans (n = 1,481; 659 males; aged 50-85 years) downloaded from multiple open-source datasets to provide initial weights for our recruited dataset. Evaluating the network on the recruited dataset resulted in a mean absolute error (MAE) of 4.91 years and a high correlation (r = .67, p < .001) against chronological age. The sMRI data were then combined with five blood biochemical indicators (GLU, TG, TC, ApoA1, and ApoB) and nine dementia-associated biomarkers (ApoE genotype, HCY, NFL, TREM2, Aβ40, Aβ42, T-tau, TIMP1, and VLDLR) to construct a bilinear fusion model, which achieved a more accurate prediction of brain age (MAE, 3.96 years; r = .76, p < .001). Notably, the fusion model achieved a greater improvement in the group of older subjects (70-85 years). Extracted attention maps of the network showed that the amygdala, pallidum, and olfactory regions were effective for age estimation. Mediation analysis further showed that brain structural features and blood parameters provided independent and significant impact. The constructed age prediction model may have promising potential in the evaluation of brain health based on MRI and blood parameters.
18. Souaidi M, El Ansari M. Multi-Scale Hybrid Network for Polyp Detection in Wireless Capsule Endoscopy and Colonoscopy Images. Diagnostics (Basel) 2022; 12:2030. [PMID: 36010380] [PMCID: PMC9407378] [DOI: 10.3390/diagnostics12082030] [Citation(s) in RCA: 7]
Abstract
The trade-off between speed and precision is a key consideration in the detection of small polyps in wireless capsule endoscopy (WCE) images. In this paper, we propose a hybrid network of an Inception v4 architecture-based single-shot multibox detector (Hyb-SSDNet) to detect small polyp regions in both WCE and colonoscopy frames. Medical privacy concerns are considered the main barriers to WCE image acquisition. To satisfy the object detection requirements, we enlarged the training datasets and investigated deep transfer learning techniques. The Hyb-SSDNet framework adopts inception blocks to alleviate the inherent limitations of the convolution operation in incorporating contextual features and semantic information into deep networks. It consists of three main components: (a) multi-scale encoding of small polyp regions, (b) use of the Inception v4 backbone to enhance contextual features in the shallow and middle layers, and (c) concatenation of weighted features of mid-level feature maps, giving them more importance for extracting semantic information. The fused feature map is then delivered to the next layer, followed by downsampling blocks that generate new pyramidal layers. Finally, the feature maps are fed to multibox detectors, consistent with the SSD process based on the VGG16 network. The Hyb-SSDNet achieved a 93.29% mean average precision (mAP) and a testing speed of 44.5 FPS on the WCE dataset. This work shows that deep learning has the potential to drive future research in polyp detection and classification tasks.
19. Jones MA, Faiz R, Qiu Y, Zheng B. Improving mammography lesion classification by optimal fusion of handcrafted and deep transfer learning features. Phys Med Biol 2022; 67. [PMID: 35130517] [PMCID: PMC8935657] [DOI: 10.1088/1361-6560/ac5297] [Citation(s) in RCA: 7]
Abstract
Objective. Handcrafted radiomics features or deep learning model-generated automated features are commonly used to develop computer-aided diagnosis (CAD) schemes of medical images. The objective of this study is to test the hypothesis that handcrafted and automated features contain complementary classification information and that fusion of these two types of features can improve CAD performance. Approach. We retrospectively assembled a dataset involving 1535 lesions (740 malignant and 795 benign). Regions of interest (ROIs) surrounding suspicious lesions were extracted, and two types of features were computed from each ROI. The first set includes 40 radiomics features, and the second includes automated features computed from a VGG16 network using a transfer learning method. A single-channel ROI image is converted to a three-channel pseudo-ROI image by stacking the original image, a bilateral-filtered image, and a histogram-equalized image. Two VGG16 models, one using pseudo-ROIs and one using three stacked original ROIs without pre-processing, are used to extract automated features. Five linear support vector machines (SVMs) are built using the optimally selected feature vectors from the handcrafted features, the two sets of VGG16 model-generated automated features, and the fusion of handcrafted features with each set of automated features, respectively. Main results. Using 10-fold cross-validation, the fusion SVM using pseudo-ROIs yields the highest lesion classification performance, with an area under the ROC curve (AUC) of 0.756 ± 0.042, significantly higher than those yielded by the other SVMs trained using handcrafted or automated features only (p < 0.05). Significance. This study demonstrates that both handcrafted and automated features contain useful information for classifying breast lesions, and that fusing these two types of features can further increase CAD performance.
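The pseudo-ROI construction described in the Approach, stacking the original grayscale ROI with a bilateral-filtered and a histogram-equalized copy to fill VGG16's three input channels, is easy to reproduce; the filter parameters and file name below are illustrative assumptions.

```python
import cv2
import numpy as np

def make_pseudo_roi(roi_gray: np.ndarray) -> np.ndarray:
    """Build an H x W x 3 pseudo-ROI from a single-channel uint8 ROI."""
    bilateral = cv2.bilateralFilter(roi_gray, 9, 75, 75)  # edge-preserving smoothing
    equalized = cv2.equalizeHist(roi_gray)                # contrast enhancement
    return np.dstack([roi_gray, bilateral, equalized])

roi = cv2.imread("lesion_roi.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
pseudo_roi = make_pseudo_roi(roi)                         # feed this to VGG16
```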
20. Rezaeijo SM, Ghorvei M, Mofid B. Predicting breast cancer response to neoadjuvant chemotherapy using ensemble deep transfer learning based on CT images. J Xray Sci Technol 2021; 29:835-850. [PMID: 34219704] [DOI: 10.3233/xst-210910] [Citation(s) in RCA: 6]
Abstract
OBJECTIVE To develop an ensemble deep transfer learning model of CT images for predicting pathologic complete response (pCR) in breast cancer patients undergoing neoadjuvant chemotherapy (NAC). METHODS The data were obtained from the public dataset 'QIN-Breast' of The Cancer Imaging Archive (TCIA). CT images were gathered before and after the first cycle of NAC. CT images of 121 breast cancer patients were used to train and test the model. Among these patients, 58 achieved a pCR and 63 showed a non-pCR, based on pathology examination of surgical results after NAC. The dataset was split into training and testing subsets with a ratio of 7:3. In addition, the number of training samples was increased from 656 to 1,968 by image augmentation. Two deep transfer learning models, DenseNet201 and ResNet152V2, and an ensemble model concatenating the two, were trained and tested using the CT images. RESULTS The ensemble model obtained the highest accuracy of 100% on the testing dataset. Furthermore, we obtained the best performance of 100% in recall, precision, and F1-score for the ensemble model, supporting the view that the ensemble results in a better-generalized model and a more efficient framework. Although differences of only 0.004 and 0.003 were seen between the AUCs of the two base models (DenseNet201 and ResNet152V2) and the proposed ensemble, such an increase in model quality is critical in medical research. t-SNE revealed that, in the proposed ensemble, no points were clustered into the wrong class. These results demonstrate the strong performance of the proposed ensemble. CONCLUSION The study concludes that the ensemble model predicts breast cancer response to first-cycle NAC better than the DenseNet201 and ResNet152V2 models alone.
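The concatenation ensemble described in the Methods can be expressed as a two-branch Keras model whose pooled backbone features are joined before a shared head; the input size and head width below are assumptions, not the paper's exact configuration.

```python
from tensorflow.keras import Model, layers
from tensorflow.keras.applications import DenseNet201, ResNet152V2

inp = layers.Input(shape=(224, 224, 3))
d = DenseNet201(weights="imagenet", include_top=False)(inp)
r = ResNet152V2(weights="imagenet", include_top=False)(inp)
d = layers.GlobalAveragePooling2D()(d)
r = layers.GlobalAveragePooling2D()(r)

x = layers.Concatenate()([d, r])                # fuse the two backbones' features
x = layers.Dense(256, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid")(x)  # pCR vs non-pCR
ensemble = Model(inp, out)
```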
21. Brima Y, Atemkeng M, Tankio Djiokap S, Ebiele J, Tchakounté F. Transfer Learning for the Detection and Diagnosis of Types of Pneumonia including Pneumonia Induced by COVID-19 from Chest X-ray Images. Diagnostics (Basel) 2021; 11:1480. [PMID: 34441414] [PMCID: PMC8394302] [DOI: 10.3390/diagnostics11081480] [Citation(s) in RCA: 5]
Abstract
Accurate early diagnosis of COVID-19 viral pneumonia, primarily in asymptomatic people, is essential to reduce the spread of the disease, the burden on healthcare capacity, and the overall death rate. It is therefore vital to design affordable and accessible solutions that distinguish pneumonia caused by COVID-19 from other types of pneumonia. In this work, we propose a reliable deep transfer learning approach that requires little computation and converges faster. Experimental results demonstrate that our proposed framework for transfer learning is a potentially effective approach to detect and diagnose types of pneumonia from chest X-ray images, with a test accuracy of 94.0%.
22. Diagnosis of COVID-19 from X-rays using combined CNN-RNN architecture with transfer learning. BenchCouncil Transactions on Benchmarks, Standards and Evaluations 2023:100088. [PMCID: PMC10010001] [DOI: 10.1016/j.tbench.2023.100088] [Citation(s) in RCA: 4]
Abstract
Combating the COVID-19 pandemic has emerged as one of the most pressing issues in global healthcare. Accurate and fast diagnosis of COVID-19 cases is required for the right medical treatment to control this pandemic. Chest radiography imaging techniques are more effective than the reverse-transcription polymerase chain reaction (RT-PCR) method in detecting coronavirus. Because of the limited availability of medical images, transfer learning is well suited to classifying patterns in medical images. This paper presents a combined architecture of a convolutional neural network (CNN) and a recurrent neural network (RNN) to diagnose COVID-19 patients from chest X-rays. The deep transfer learning backbones used in this experiment are VGG19, DenseNet121, InceptionV3, and Inception-ResNetV2, where the CNN extracts complex features from samples that the RNN then classifies. In our experiments, the VGG19-RNN architecture outperformed all other networks in terms of accuracy. Finally, decision-making regions of images were visualized using gradient-weighted class activation mapping (Grad-CAM). The system achieved promising results compared to other existing systems and may be validated in the future when more samples become available. The experiment demonstrated a good alternative method for medical staff to diagnose COVID-19. All the data used during the study are openly available from the Mendeley data repository at https://data.mendeley.com/datasets/mxc6vb7svm. For further research, we have made the source code publicly available at https://github.com/Asraf047/COVID19-CNN-RNN.
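The CNN-RNN pairing used here, convolutional features flattened into a sequence that a recurrent layer classifies, can be sketched with the best-performing VGG19 backbone; the LSTM width and three-class head are assumptions for illustration, not the paper's exact setup.

```python
from tensorflow.keras import Model, layers
from tensorflow.keras.applications import VGG19

inp = layers.Input(shape=(224, 224, 3))
feat = VGG19(weights="imagenet", include_top=False)(inp)  # 7 x 7 x 512 feature map
seq = layers.Reshape((49, 512))(feat)                     # 49-step feature sequence
x = layers.LSTM(128)(seq)                                 # RNN classifies the sequence
out = layers.Dense(3, activation="softmax")(x)            # e.g. COVID/pneumonia/normal
model = Model(inp, out)
```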
23. Fakieh B, AL-Ghamdi ASALM, Ragab M. Optimal Deep Stacked Sparse Autoencoder Based Osteosarcoma Detection and Classification Model. Healthcare (Basel) 2022; 10:1040. [PMID: 35742091] [PMCID: PMC9222514] [DOI: 10.3390/healthcare10061040] [Citation(s) in RCA: 3]
Abstract
Osteosarcoma is a kind of bone cancer that generally starts to develop in the long bones of the legs and arms. Because of the increasing occurrence of cancer and patient-specific treatment options, the detection and classification of cancer have become a difficult process. The manual recognition of osteosarcoma necessitates expert knowledge and is time consuming. Earlier identification of osteosarcoma can reduce the death rate. With the development of new technologies, automated detection models can be exploited for medical image classification, decreasing reliance on experts and enabling timely identification. In recent times, a number of Computer-Aided Detection (CAD) systems have become available in the literature for the segmentation and detection of osteosarcoma in medical images. In this view, this research work develops a wind-driven optimization with deep transfer learning enabled osteosarcoma detection and classification (WDODTL-ODC) method. The presented WDODTL-ODC model aims to determine the presence of osteosarcoma in biomedical images. To accomplish this, the model involves Gaussian filtering (GF)-based pre-processing and contrast enhancement techniques. In addition, deep transfer learning using a SqueezeNet model is utilized as a feature extractor. At last, the Wind Driven Optimization (WDO) algorithm with a deep-stacked sparse auto-encoder (DSSAE) is employed for the classification process. The simulation outcomes demonstrate that the WDODTL-ODC technique outperformed the existing models in the detection of osteosarcoma in biomedical images.
24. Chen KY, Shin J, Hasan MAM, Liaw JJ, Yuichi O, Tomioka Y. Fitness Movement Types and Completeness Detection Using a Transfer-Learning-Based Deep Neural Network. Sensors (Basel) 2022; 22:5700. [PMID: 35957257] [PMCID: PMC9371130] [DOI: 10.3390/s22155700] [Citation(s) in RCA: 2]
Abstract
Fitness is important in people's lives. Good fitness habits can improve cardiopulmonary capacity, increase concentration, prevent obesity, and effectively reduce the risk of death. Home fitness does not require large equipment; instead, dumbbells, yoga mats, and horizontal bars are used to complete fitness exercises, and it effectively avoids contact with other people, so it is very popular. People who work out at home use social media to obtain fitness knowledge, but what can be learned this way is limited. Incomplete or incorrect movements are likely to lead to injury, and a cheap, timely, and accurate fitness detection system can reduce the risk of fitness injuries and effectively improve people's fitness awareness. In the past, many studies have engaged in the detection of fitness movements, among which detection based on wearable devices, body nodes, and image deep learning has achieved good performance. However, a wearable device cannot detect a variety of fitness movements, may hinder the exercise of the fitness user, and has a high cost. Both body-node-based and image-deep-learning-based methods have lower costs, but each has some drawbacks. Therefore, this paper uses a method based on deep transfer learning to establish a fitness database, after which a deep neural network is trained to detect the type and completeness of fitness movements. We used YOLOv4 and MediaPipe to instantly detect fitness movements and stored the 1D fitness signals of the movements to build the database. Finally, an MLP was used to classify the 1D signal waveforms. In the classification of fitness movement types, the mAP was 99.71%, accuracy 98.56%, precision 97.9%, recall 98.56%, and F1-score 98.23%, which is quite high performance. In the classification of fitness movement completeness, accuracy was 92.84%, precision 92.85%, recall 92.84%, and F1-score 92.83%. The average FPS in detection was 17.5. Experimental results show that our method achieves higher accuracy compared to other methods.
25. Buragohain A, Mali B, Saha S, Singh PK. A deep transfer learning based approach to detect COVID-19 waste. Internet Technology Letters 2022; 5:e327. [PMCID: PMC8646505] [DOI: 10.1002/itl2.327] [Citation(s) in RCA: 2]
Abstract
COVID-19, or novel coronavirus disease, has not only created a pandemic but also another kind of problem: a new group of wastes, called COVID-19 waste. COVID-19 waste includes masks, hand gloves, sanitizer bottles, Personal Protective Equipment (PPE) kits, syringes used to vaccinate people, etc. These wastes are now polluting every continent and ocean, and improper disposal of such wastes may increase the rate of spread of contamination. In this regard, we built a detection model able to detect some of the COVID-19 wastes, considering masks, hand gloves, and syringes as the initial classes to be detected. We collected the dataset manually, annotated the images with these three classes, and then trained different CNN models to compare their accuracies on our dataset. The best model was EfficientDet D0, which gives a mean average precision of 0.82. Further, we developed a UI to deploy the model, where general users can upload images and detect the wastes while controlling the detection threshold.