1. Yin S, Ding N, Ji Y, Qiao Z, Yuan J, Chi J, Jin L. The value of CT radiomics combined with deep transfer learning in predicting the nature of gallbladder polypoid lesions. Acta Radiol 2024:2841851241245970. PMID: 38623640. DOI: 10.1177/02841851241245970.
Abstract
BACKGROUND: Computed tomography (CT) radiomics combined with deep transfer learning was used to identify cholesterol and adenomatous gallbladder polyps, which are not well characterized before surgery. PURPOSE: To investigate the potential of various machine learning models, incorporating radiomics and deep transfer learning, in predicting the nature of cholesterol and adenomatous gallbladder polyps. MATERIAL AND METHODS: A retrospective analysis was conducted on clinical and imaging data from 100 patients with cholesterol or adenomatous polyps confirmed by surgery and pathology at our hospital between September 2015 and February 2023. Preoperative contrast-enhanced CT radiomics features combined with deep learning features were used, and t-tests and least absolute shrinkage and selection operator (LASSO) cross-validation were employed for feature selection. Eleven machine learning algorithms were then used to construct prediction models, and the area under the receiver operating characteristic (ROC) curve (AUC), accuracy, and F1 measure were used to assess model performance, which was validated in a validation group. RESULTS: The logistic regression algorithm gave the most effective prediction of polyp type based on 10 radiomics and deep learning features, achieving the highest AUC (0.85 in the validation group, 95% confidence interval = 0.68-1.0). The accuracy (0.83) and F1 measure (0.76) in the validation group also indicated strong performance. CONCLUSION: The machine learning model combining radiomics and deep learning features from contrast-enhanced CT proves valuable in predicting the nature of cholesterol and adenomatous gallbladder polyps, providing a more reliable basis for their preoperative diagnosis and treatment.
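The two-stage feature selection this abstract describes (univariate t-tests followed by LASSO) can be sketched for its first stage. The Welch form of the t statistic, the function names, and the toy feature dictionary below are illustrative assumptions, not details taken from the paper:

```python
import math

def welch_t(xs, ys):
    """Welch's t statistic between two groups of feature values."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((v - mx) ** 2 for v in xs) / (nx - 1)
    vy = sum((v - my) ** 2 for v in ys) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

def filter_features(features, labels, top_k):
    """Rank features by |t| between the two classes and keep the top_k.

    `features` is a dict {name: per-patient values}; `labels` is a parallel
    list of 0/1 class labels (e.g. cholesterol vs. adenomatous).
    """
    scores = {}
    for name, values in features.items():
        g0 = [v for v, y in zip(values, labels) if y == 0]
        g1 = [v for v, y in zip(values, labels) if y == 1]
        scores[name] = abs(welch_t(g0, g1))
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

The surviving features would then go into LASSO with cross-validation for the final selection, as the abstract states.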
Affiliation(s)
- Shengnan Yin
- Department of Radiology, Suzhou Ninth People's Hospital, Suzhou Ninth Hospital Affiliated to Soochow University, Suzhou, PR China
- Ning Ding
- Department of Radiology, Suzhou Ninth People's Hospital, Suzhou Ninth Hospital Affiliated to Soochow University, Suzhou, PR China
- Yiding Ji
- Department of Radiology, Suzhou Ninth People's Hospital, Suzhou Ninth Hospital Affiliated to Soochow University, Suzhou, PR China
- Zhenguo Qiao
- Department of Gastroenterology, Suzhou Ninth People's Hospital, Suzhou Ninth Hospital Affiliated to Soochow University, Suzhou, PR China
- Jianmao Yuan
- Department of General Surgery, Suzhou Ninth People's Hospital, Suzhou Ninth Hospital Affiliated to Soochow University, Suzhou, PR China
- Jing Chi
- Department of Radiology, Suzhou Ninth People's Hospital, Suzhou Ninth Hospital Affiliated to Soochow University, Suzhou, PR China
- Long Jin
- Department of Radiology, Suzhou Ninth People's Hospital, Suzhou Ninth Hospital Affiliated to Soochow University, Suzhou, PR China
2. Xu T, Zhao K, Hu Y, Li L, Wang W, Wang F, Zhou Y, Li J. Transferable non-invasive modal fusion-transformer (NIMFT) for end-to-end hand gesture recognition. J Neural Eng 2024; 21:026034. PMID: 38565124. DOI: 10.1088/1741-2552/ad39a5.
Abstract
Objective. Recent studies have shown that integrating inertial measurement unit (IMU) signals with surface electromyography (sEMG) can greatly improve hand gesture recognition (HGR) performance in applications such as prosthetic control and rehabilitation training. However, current deep learning models for multimodal HGR encounter difficulties in invasive modal fusion, complex feature extraction from heterogeneous signals, and limited inter-subject model generalization. To address these challenges, this study aims to develop an end-to-end, inter-subject transferable model that utilizes non-invasively fused sEMG and acceleration (ACC) data. Approach. The proposed non-invasive modal fusion-transformer (NIMFT) model uses patch embedding based on 1D convolutional neural networks for local information extraction and employs a multi-head cross-attention (MCA) mechanism to non-invasively integrate sEMG and ACC signals, stabilizing the variability induced by sEMG. The architecture underwent detailed ablation studies after hyperparameter tuning. Transfer learning was employed by fine-tuning a pre-trained model on new subjects, and a comparative analysis was performed between the fine-tuned and subject-specific models. Additionally, the performance of NIMFT was compared to state-of-the-art fusion models. Main results. The NIMFT model achieved recognition accuracies of 93.91%, 91.02%, and 95.56% on the three action sets in the Ninapro DB2 dataset. The proposed embedding method and MCA outperformed the traditional invasive modal fusion transformer by 2.01% (embedding) and 1.23% (fusion), respectively. Compared with subject-specific models, the fine-tuned model exhibited the highest average accuracy improvement of 2.26%, achieving a final accuracy of 96.13%. Moreover, the NIMFT model demonstrated superior accuracy, recall, precision, and F1-score compared to the latest modal fusion models of similar scale. Significance. NIMFT is a novel end-to-end HGR model that uses a non-invasive MCA mechanism to integrate long-range intermodal information effectively. Compared to recent modal fusion models, it performs better in inter-subject experiments, and through transfer learning it offers higher training efficiency and accuracy than subject-specific approaches.
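The cross-attention fusion at the heart of NIMFT reduces, in a single-head sketch, to queries drawn from one modality attending over keys and values from the other. The random projection matrices below stand in for learned weights, and the single head and dimensions are illustrative, not the paper's configuration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_tokens, context_tokens, d_k):
    """Single-head cross-attention: queries from one modality (e.g. sEMG
    patch embeddings) attend over keys/values from the other (e.g. ACC)."""
    rng = np.random.default_rng(0)  # stand-in for learned weights
    Wq = rng.standard_normal((query_tokens.shape[-1], d_k))
    Wk = rng.standard_normal((context_tokens.shape[-1], d_k))
    Wv = rng.standard_normal((context_tokens.shape[-1], d_k))
    Q, K, V = query_tokens @ Wq, context_tokens @ Wk, context_tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (n_query, n_context)
    return attn @ V                         # fused representation
```

A multi-head version runs several such maps in parallel and concatenates the results.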
Affiliation(s)
- Tianxiang Xu
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- Kunkun Zhao
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- Yuxiang Hu
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- Liang Li
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- Wei Wang
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- Fulin Wang
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- Nanjing PANDA Electronics Equipment Co., Ltd, Nanjing 210033, People's Republic of China
- Yuxuan Zhou
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- Jianqing Li
- School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
3. Guo Y, Zhang J, Sun B, Wang Y. Adversarial Deep Transfer Learning in Fault Diagnosis: Progress, Challenges, and Future Prospects. Sensors (Basel) 2023; 23:7263. PMID: 37631799. PMCID: PMC10459647. DOI: 10.3390/s23167263.
Abstract
Deep Transfer Learning (DTL) signifies a novel paradigm in machine learning, merging the strengths of deep learning in feature representation with the merits of transfer learning in knowledge transfer. This synergistic integration has propelled DTL to the forefront of research and development within the Intelligent Fault Diagnosis (IFD) field. While the early DTL paradigms, reliant on fine-tuning, demonstrated effectiveness, they encountered considerable obstacles in complex domains. In response to these challenges, Adversarial Deep Transfer Learning (ADTL) emerged. This review first categorizes ADTL into non-generative and generative models. The former expands upon traditional DTL, focusing on the efficient transfer of features and mapping relationships, while the latter employs technologies such as Generative Adversarial Networks (GANs) to facilitate feature transformation. A thorough examination of recent advancements of ADTL in the IFD field follows. The review concludes by summarizing the current challenges and future directions for DTL in fault diagnosis, including issues such as data imbalance, negative transfer, and adversarial training stability. Through this cohesive analysis, the review aims to offer valuable insights and guidance for the optimization and implementation of ADTL in real-world industrial scenarios.
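Non-generative adversarial DTL of the kind this review surveys is typically built on a gradient reversal layer (as in DANN): an identity map in the forward pass whose backward pass flips the gradient sign, so the feature extractor learns to confuse a domain discriminator. A minimal sketch, with the class name and `lam` scaling chosen here for illustration:

```python
import numpy as np

class GradReverse:
    """Gradient reversal layer for adversarial domain adaptation:
    identity in the forward pass, but scales the incoming gradient by
    -lam in the backward pass, pushing the upstream feature extractor
    toward domain-invariant features."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # reversed, scaled gradient
```

In a full model this layer sits between the shared feature extractor and the domain classifier; the label classifier bypasses it.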
Affiliation(s)
- Jundong Zhang
- College of Marine Engineering, Dalian Maritime University, Dalian 116026, China; (Y.G.); (B.S.); (Y.W.)
4. Ji Y, Gao Y, Bao R, Li Q, Liu D, Sun Y, Ye Y. Prediction of COVID-19 Patients' Emergency Room Revisit using Multi-Source Transfer Learning. IEEE Int Conf Healthc Inform 2023; 2023:138-144. PMID: 38486663. PMCID: PMC10939709. DOI: 10.1109/ichi57859.2023.00028.
Abstract
The coronavirus disease 2019 (COVID-19) has led to a global pandemic of significant severity. In addition to its high level of contagiousness, COVID-19 can have a heterogeneous clinical course, ranging from asymptomatic carriers to severe and potentially life-threatening complications. Many patients have to revisit the emergency room (ER) within a short time after discharge, which significantly increases the workload for medical staff. Early identification of such patients is crucial for helping physicians focus on treating life-threatening cases. In this study, we obtained Electronic Health Records (EHRs) of 3,210 encounters from 13 affiliated ERs within the University of Pittsburgh Medical Center between March 2020 and January 2021. We leveraged a Natural Language Processing tool, ScispaCy, to extract clinical concepts and used the 1,001 most frequent concepts to develop 7-day revisit models for COVID-19 patients in ERs. Because the data were collected from 13 ERs, distributional differences among sites could affect model development. To address this issue, we employed a classic deep transfer learning method, the Domain Adversarial Neural Network (DANN), and evaluated different modeling strategies: the Multi-DANN algorithm (which considers the source differences), the Single-DANN algorithm (which does not), and three baseline methods using only source data, only target data, and a mixture of source and target data. Results showed that the Multi-DANN models outperformed the Single-DANN and baseline models in predicting revisits of COVID-19 patients to the ER within 7 days after discharge (median AUROC = 0.8 vs. 0.5). Notably, the Multi-DANN strategy effectively addressed the heterogeneity among multiple source domains and improved the adaptation of source data to the target domain. Moreover, the high performance of the Multi-DANN models indicates that EHRs are informative for developing a prediction model to identify COVID-19 patients who are very likely to revisit an ER within 7 days after discharge.
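The median AUROC the study reports can be computed without any curve plotting, via the Mann-Whitney formulation: the probability that a random positive outranks a random negative. This small function is a generic sketch of the metric, not the authors' evaluation code:

```python
def auroc(labels, scores):
    """AUROC as the probability that a randomly chosen positive is scored
    above a randomly chosen negative; ties count as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A score of 0.5 (the Single-DANN median above) is what a random ranking achieves.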
Affiliation(s)
- Yuelyu Ji
- Department of Information Science, School of Computing and Information, University of Pittsburgh, Pittsburgh, USA
- Yuhe Gao
- Department of Biomedical Informatics, School of Medicine, University of Pittsburgh, Pittsburgh, USA
- Runxue Bao
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, USA
- Qi Li
- School of Business, State University of New York at New Paltz, New Paltz, USA
- Disheng Liu
- Department of Information Science, School of Computing and Information, University of Pittsburgh, Pittsburgh, USA
- Yiming Sun
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, USA
- Ye Ye
- Department of Biomedical Informatics, School of Medicine, University of Pittsburgh, Pittsburgh, USA
5. Liao Y, Xiang Y, Zheng M, Wang J. DeepMiceTL: a deep transfer learning based prediction of mice cardiac conduction diseases using early electrocardiograms. Brief Bioinform 2023; 24:bbad109. PMID: 36935112. PMCID: PMC10422927. DOI: 10.1093/bib/bbad109.
Abstract
Cardiac conduction disease is a major cause of morbidity and mortality worldwide. Early detection of these diseases has considerable clinical significance, since preventive treatment succeeds best before more severe arrhythmias occur. However, developing such early screening tools is challenging due to the lack of early electrocardiograms (ECGs) recorded before symptoms occur in patients. Mouse models are widely used in cardiac arrhythmia research. The goal of this paper is to develop deep learning models that predict cardiac conduction diseases in mice using their early ECGs. We hypothesize that mutant mice present subtle abnormalities in their early ECGs before severe arrhythmias present, and that these subtle patterns can be detected by deep learning even though they are difficult to identify by eye. We propose a deep transfer learning model, DeepMiceTL, which leverages knowledge from human ECGs to learn mouse ECG patterns. We further apply Bayesian optimization and k-fold cross-validation to tune the hyperparameters of DeepMiceTL. Our results show that DeepMiceTL achieves promising performance (F1-score: 83.8%, accuracy: 84.8%) in predicting the occurrence of cardiac conduction diseases from early mouse ECGs. This study is among the first efforts to use state-of-the-art deep transfer learning to identify ECG patterns during the early course of cardiac conduction disease in mice. Our approach not only could help cardiac conduction disease research in mice, but also suggests the feasibility of early clinical diagnosis of human cardiac conduction diseases and other cardiac arrhythmias using deep transfer learning in the future.
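The k-fold cross-validation used alongside Bayesian optimization here partitions the data so that every sample is validated exactly once. A minimal index-level sketch (the interleaved fold assignment is an arbitrary choice for illustration; in practice folds are usually shuffled or stratified):

```python
def k_fold_indices(n_samples, k):
    """Partition sample indices into k folds; each fold serves once as the
    validation set while the remaining k-1 folds form the training set."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    splits = []
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        splits.append((sorted(train), sorted(val)))
    return splits
```

Hyperparameter tuning then scores each candidate configuration by its average validation metric across the k splits.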
Affiliation(s)
- Ying Liao
- Department of Industrial, Manufacturing & Systems Engineering, Texas Tech University, Lubbock, Texas, USA
- Yisha Xiang
- Department of Industrial Engineering, University of Houston, Houston, Texas, USA
- Mingjie Zheng
- Department of Pediatrics, McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Jun Wang
- Department of Pediatrics, McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, Texas, USA
6. Islam MM, Islam MZ, Asraf A, Al-Rakhami MS, Ding W, Sodhro AH. Diagnosis of COVID-19 from X-rays using combined CNN-RNN architecture with transfer learning. BenchCouncil Transactions on Benchmarks, Standards and Evaluations 2023:100088. DOI: 10.1016/j.tbench.2023.100088.
Abstract
Combating the COVID-19 pandemic has emerged as one of the most pressing issues in global healthcare. Accurate and fast diagnosis of COVID-19 cases is required for the right medical treatment to control this pandemic. Chest radiography imaging techniques are more effective than the reverse-transcription polymerase chain reaction (RT-PCR) method in detecting coronavirus. Due to the limited availability of medical images, transfer learning is better suited to classifying patterns in medical images. This paper presents a combined architecture of convolutional neural network (CNN) and recurrent neural network (RNN) to diagnose COVID-19 patients from chest X-rays. The deep transfer models used in this experiment are VGG19, DenseNet121, InceptionV3, and Inception-ResNetV2, where the CNN extracts complex features from samples and the RNN classifies them. In our experiments, the VGG19-RNN architecture outperformed all other networks in terms of accuracy. Finally, the decision-making regions of images were visualized using gradient-weighted class activation mapping (Grad-CAM). The system achieved promising results compared to other existing systems and may be validated further when more samples become available. The experiment demonstrated a good alternative method for medical staff to diagnose COVID-19. All the data used during the study are openly available from the Mendeley data repository at https://data.mendeley.com/datasets/mxc6vb7svm. For further research, we have made the source code publicly available at https://github.com/Asraf047/COVID19-CNN-RNN.
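The Grad-CAM visualization step mentioned above weights each feature map of the last convolutional layer by the globally pooled gradient of the class score, sums them, and applies a ReLU. A numpy sketch of that computation (the array shapes are assumptions for illustration, not from the paper's pipeline):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from one conv layer.

    activations, gradients: arrays of shape (channels, H, W), the feature
    maps and the gradients of the class score w.r.t. those maps.
    """
    weights = gradients.mean(axis=(1, 2))                # alpha_k: pooled gradients
    cam = np.einsum('k,khw->hw', weights, activations)   # weighted sum of maps
    cam = np.maximum(cam, 0)                             # ReLU keeps positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                            # normalize to [0, 1]
    return cam
```

The resulting heatmap is upsampled to the X-ray's resolution and overlaid to show which regions drove the prediction.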
7. Han Y, Wang Z, Chen A, Ali I, Cai J, Ye S, Wei Z, Li J. A deep transfer learning-based protocol accelerates full quantum mechanics calculation of protein. Brief Bioinform 2023; 24:6901901. PMID: 36516300. DOI: 10.1093/bib/bbac532.
Abstract
Effective full quantum mechanics (FQM) calculation of proteins remains a grand challenge and of great interest in computational biology, with substantial applications in drug discovery, protein dynamics simulation, and protein folding. However, the huge computational complexity of existing QM methods impedes their application in large systems. Here, we design a transfer-learning-based deep learning (TDL) protocol for effective FQM calculations (TDL-FQM) on proteins. By incorporating a transfer-learning algorithm into a deep neural network (DNN), the TDL-FQM protocol can perform calculations at any given accuracy using models trained from small high-precision datasets together with knowledge learned from a large amount of low-level calculations. A high-level double-hybrid DFT functional and a high-quality basis set are used in this work as a case study to evaluate the performance of TDL-FQM: the 15 selected proteins are predicted with a mean absolute error of 0.01 kcal/mol/atom for potential energy and an average root mean square error of 1.47 kcal/mol/Å for atomic forces. The proposed TDL-FQM approach accelerates the FQM calculation more than thirty thousand times on average and yields even larger efficiency gains as protein size increases. The ability to transfer knowledge from one task to related problems demonstrates that TDL-FQM overcomes the limitations of a standard DNN and can predict proteins with high precision, addressing the challenge of high-precision prediction in large chemical and biological systems.
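The two error metrics reported (MAE for potential energy, in kcal/mol/atom; RMSE for atomic forces, in kcal/mol/Å) follow the standard definitions. These generic helpers mirror those definitions rather than the authors' pipeline:

```python
import math

def mae(pred, ref):
    """Mean absolute error, e.g. over per-atom potential energies."""
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(ref)

def rmse(pred, ref):
    """Root mean square error, e.g. over atomic force components."""
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref))
```

RMSE penalizes large outliers more heavily than MAE, which is why force errors are typically reported as RMSE.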
Affiliation(s)
- Yanqiang Han
- National Key Laboratory of Science and Technology on Micro/Nano Fabrication, Shanghai Jiao Tong University, Shanghai 200240, China
- Key Laboratory for Thin Film and Microfabrication of Ministry of Education, Department of Micro/Nano-electronics, Shanghai Jiao Tong University, Shanghai 200240, China
- Zhilong Wang
- National Key Laboratory of Science and Technology on Micro/Nano Fabrication, Shanghai Jiao Tong University, Shanghai 200240, China
- Key Laboratory for Thin Film and Microfabrication of Ministry of Education, Department of Micro/Nano-electronics, Shanghai Jiao Tong University, Shanghai 200240, China
- An Chen
- National Key Laboratory of Science and Technology on Micro/Nano Fabrication, Shanghai Jiao Tong University, Shanghai 200240, China
- Key Laboratory for Thin Film and Microfabrication of Ministry of Education, Department of Micro/Nano-electronics, Shanghai Jiao Tong University, Shanghai 200240, China
- Imran Ali
- National Key Laboratory of Science and Technology on Micro/Nano Fabrication, Shanghai Jiao Tong University, Shanghai 200240, China
- Key Laboratory for Thin Film and Microfabrication of Ministry of Education, Department of Micro/Nano-electronics, Shanghai Jiao Tong University, Shanghai 200240, China
- Junfei Cai
- National Key Laboratory of Science and Technology on Micro/Nano Fabrication, Shanghai Jiao Tong University, Shanghai 200240, China
- Key Laboratory for Thin Film and Microfabrication of Ministry of Education, Department of Micro/Nano-electronics, Shanghai Jiao Tong University, Shanghai 200240, China
- Simin Ye
- National Key Laboratory of Science and Technology on Micro/Nano Fabrication, Shanghai Jiao Tong University, Shanghai 200240, China
- Key Laboratory for Thin Film and Microfabrication of Ministry of Education, Department of Micro/Nano-electronics, Shanghai Jiao Tong University, Shanghai 200240, China
- Zhiyun Wei
- Shanghai Key Laboratory of Maternal Fetal Medicine, Shanghai First Maternity and Infant Hospital, School of Medicine, Tongji University, Shanghai 200092, China
- Jinjin Li
- National Key Laboratory of Science and Technology on Micro/Nano Fabrication, Shanghai Jiao Tong University, Shanghai 200240, China
- Key Laboratory for Thin Film and Microfabrication of Ministry of Education, Department of Micro/Nano-electronics, Shanghai Jiao Tong University, Shanghai 200240, China
8. Xuan J, Ke B, Ma W, Liang Y, Hu W. Spinal disease diagnosis assistant based on MRI images using deep transfer learning methods. Front Public Health 2023; 11:1044525. PMID: 36908475. PMCID: PMC9998513. DOI: 10.3389/fpubh.2023.1044525.
Abstract
Introduction: In light of the potential for missed diagnoses and misdiagnoses of spinal diseases caused by differences in experience and by fatigue, this paper investigates the use of artificial intelligence for the auxiliary diagnosis of spinal diseases. Methods: Clinically experienced doctors used the LabelImg tool to label the MRIs of 604 patients. Then, to select an appropriate object detection algorithm, deep transfer learning models based on YOLOv3, YOLOv5, and PP-YOLOv2 were created and trained on the Baidu PaddlePaddle framework. The experimental results showed that the PP-YOLOv2 model achieved 90.08% overall accuracy in diagnosing normal cases, IVD bulges, and spondylolisthesis, which was 27.5% and 3.9% higher than YOLOv3 and YOLOv5, respectively. Finally, intelligent spine-assistant diagnostic software with visualization, based on the PP-YOLOv2 model, was created and made available to the doctors in spine and osteopathy surgery at Guilin People's Hospital. Results and discussion: The software automatically provides auxiliary diagnoses in 14.5 s on a standard computer, far faster than a doctor's typical 10-minute spinal diagnosis, and its 98% accuracy is comparable to that of experienced doctors across the diagnostic methods compared. It significantly improves doctors' working efficiency, reduces missed diagnoses and misdiagnoses, and demonstrates the efficacy of the developed intelligent spinal auxiliary diagnosis software.
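Detectors such as the PP-YOLOv2 model used here score predicted boxes against labeled ground truth by intersection-over-union, the standard overlap criterion behind detection accuracy figures like the one above. A minimal sketch (corner-coordinate `(x1, y1, x2, y2)` box convention assumed):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)   # zero when boxes don't overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A detection typically counts as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.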
Affiliation(s)
- Junbo Xuan
- Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin, China
- School of Artificial Intelligence, Nanning College for Vocational Technology, Nanning, China
- Baoyi Ke
- Department of Spine and Osteopathy Surgery, Guilin People's Hospital, Guilin, China
- Wenyu Ma
- Department of Spine and Osteopathy Surgery, Guilin People's Hospital, Guilin, China
- Yinghao Liang
- School of Artificial Intelligence, Nanning College for Vocational Technology, Nanning, China
- Wei Hu
- Department of Spine and Osteopathy Surgery, Guilin People's Hospital, Guilin, China
9. Chola C, Muaad AY, Bin Heyat MB, Benifa JVB, Naji WR, Hemachandran K, Mahmoud NF, Samee NA, Al-Antari MA, Kadah YM, Kim TS. BCNet: A Deep Learning Computer-Aided Diagnosis Framework for Human Peripheral Blood Cell Identification. Diagnostics (Basel) 2022; 12:2815. PMID: 36428875. PMCID: PMC9689932. DOI: 10.3390/diagnostics12112815.
Abstract
Blood cells carry important information that can be used to represent a person's current state of health. Timely and precise identification of different types of blood cells is essential to cutting the infection risks that people face on a daily basis. BCNet is an artificial intelligence (AI)-based deep learning (DL) framework, proposed on the basis of transfer learning with a convolutional neural network, to rapidly and automatically identify blood cells in an eight-class identification scenario: Basophil, Eosinophil, Erythroblast, Immature Granulocytes, Lymphocyte, Monocyte, Neutrophil, and Platelet. To establish the dependability and viability of BCNet, exhaustive experiments consisting of five-fold cross-validation tests were carried out. Using the transfer learning strategy, we conducted in-depth experiments on the proposed BCNet architecture and tested it with three optimizers: ADAM, RMSprop (RMSP), and stochastic gradient descent (SGD). Meanwhile, the performance of the proposed BCNet was directly compared on the same dataset with the state-of-the-art deep learning models DenseNet, ResNet, Inception, and MobileNet. Across the different optimizers, the BCNet framework demonstrated better classification performance with the ADAM and RMSP optimizers. The best evaluation performance was achieved with the RMSP optimizer: 98.51% accuracy and a 96.24% F1-score. Compared with the baseline model, BCNet improved the prediction accuracy by 1.94%, 3.33%, and 1.65% with the ADAM, RMSP, and SGD optimizers, respectively. The proposed BCNet model outperformed the DenseNet, ResNet, Inception, and MobileNet models on the testing time for a single blood cell image by 10.98, 4.26, 2.03, and 0.21 msec, respectively. In comparison with the most recent deep learning models, the BCNet model was able to generate encouraging outcomes. Such a recognition rate, which improves the detection performance for blood cells, is valuable for the advancement of healthcare facilities.
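The optimizer comparison above (ADAM vs. RMSP vs. SGD) comes down to how each scales the gradient step. Single-parameter sketches of the SGD and RMSProp update rules, with illustrative default hyperparameters rather than the paper's settings:

```python
import math

def sgd_step(w, grad, lr=0.01):
    """Plain SGD: step against the gradient at a fixed learning rate."""
    return w - lr * grad

def rmsprop_step(w, grad, state, lr=0.01, rho=0.9, eps=1e-8):
    """RMSProp: divide the step by a running RMS of past gradients, so
    parameters with consistently large gradients take smaller steps.
    Returns the updated weight and the updated running average."""
    state = rho * state + (1 - rho) * grad ** 2
    return w - lr * grad / (math.sqrt(state) + eps), state
```

ADAM adds a momentum-style running mean of the gradient on top of RMSProp's scaling, which is why the two often behave similarly in practice.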
Affiliation(s)
- Channabasava Chola
- Department of Electronics and Information Convergence Engineering, College of Electronics and Information, Kyung Hee University, Suwon-si 17104, Republic of Korea
- Abdullah Y. Muaad
- Department of Studies in Computer Science, University of Mysore, Manasagangothri, Mysore 570006, India
- Md Belal Bin Heyat
- IoT Research Center, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
- Centre for VLSI and Embedded System Technologies, International Institute of Information Technology, Hyderabad 500032, India
- Department of Science and Engineering, Novel Global Community Educational Foundation, Hebersham, NSW 2770, Australia
- J. V. Bibal Benifa
- Department of Computer Science and Engineering, Indian Institute of Information Technology Kottayam, Kerala 686635, India
- Wadeea R. Naji
- Department of Studies in Computer Science, University of Mysore, Manasagangothri, Mysore 570006, India
- K. Hemachandran
- Department of Artificial Intelligence, Woxsen University, Hyderabad 502345, India
- Noha F. Mahmoud
- Rehabilitation Sciences Department, Health and Rehabilitation Sciences College, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Nagwan Abdel Samee (corresponding author)
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Mugahed A. Al-Antari (corresponding author)
- Department of Artificial Intelligence, College of Software and Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
- Yasser M. Kadah (corresponding author)
- Electrical and Computer Engineering Department, King Abdulaziz University, Jeddah 22254, Saudi Arabia
- Biomedical Engineering Department, Cairo University, Giza 12613, Egypt
- Tae-Seong Kim (corresponding author)
- Department of Electronics and Information Convergence Engineering, College of Electronics and Information, Kyung Hee University, Suwon-si 17104, Republic of Korea
Collapse
|
10
|
Meng M, Zhang M, Shen D, He G. Differentiation of breast lesions on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) using deep transfer learning based on DenseNet201. Medicine (Baltimore) 2022; 101:e31214. [PMID: 36397422 PMCID: PMC9666147 DOI: 10.1097/md.0000000000031214] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/19/2022] Open
Abstract
In order to achieve better performance, artificial intelligence is used in breast cancer diagnosis. In this study, we evaluated the efficacy of different fine-tuning strategies of deep transfer learning (DTL) based on the DenseNet201 model to differentiate malignant from benign lesions on breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). We collected 4260 images of benign lesions and 4140 images of malignant lesions of the breast pertaining to pathologically confirmed cases. The benign and malignant groups were randomly divided into a training set and a testing set at a ratio of 9:1. A DTL model based on DenseNet201 was established, and the effectiveness of four fine-tuning strategies (S0-S3) was compared. Additionally, DCE-MRI images of 48 breast lesions were selected to verify the robustness of the model. Ten images were obtained for each lesion, and the classification was considered correct if more than 5 images were correctly classified. The metrics for model performance evaluation included accuracy (Ac) in the training and testing sets, and precision (Pr), recall rate (Rc), F1 score (F1), and area under the receiver operating characteristic curve (AUROC) in the validation set. The Ac of all four fine-tuning strategies reached 100.00% in the training set. The S2 strategy exhibited good convergence in the testing set, where its Ac was 98.01%, higher than those of S0 (93.10%), S1 (90.45%), and S3 (93.90%). The average classification Pr, Rc, F1, and AUROC of S2 in the validation set (89.00%, 80.00%, 0.81, and 0.79, respectively) were higher than those of S0 (76.00%, 67.00%, 0.69, and 0.65), S1 (60.00%, 60.00%, 0.60, and 0.66), and S3 (77.00%, 73.00%, 0.74, and 0.72). The degree of coincidence between S2 and the histopathological method for differentiating between benign and malignant breast lesions was high (κ = 0.749).
The S2 strategy improves the robustness of the DenseNet201 model on relatively small breast DCE-MRI datasets and is a reliable way to increase the accuracy of discriminating benign from malignant breast lesions on DCE-MRI.
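Fine-tuning strategies like S0-S3 differ mainly in which pretrained layers are frozen versus retrained. The hypothetical two-layer NumPy stand-in below (not the paper's DenseNet201 model) shows the mechanics: with the "backbone" frozen, only the classification head receives gradient updates.

```python
import numpy as np

def finetune(W1, W2, X, y, freeze_backbone=True, lr=0.5, epochs=300):
    """Fine-tune a tiny 2-layer net. freeze_backbone=True mimics a strategy
    that keeps pretrained 'backbone' weights (W1) fixed and trains only the
    classification head (W2); False updates both (full fine-tuning)."""
    W1, W2 = W1.copy(), W2.copy()
    for _ in range(epochs):
        H = np.tanh(X @ W1)                    # backbone features
        p = 1 / (1 + np.exp(-(H @ W2)))        # head prediction
        d = (p - y) / len(y)                   # dLoss/dlogit (cross-entropy)
        gW2 = H.T @ d
        if not freeze_backbone:                # backprop through tanh layer
            gW1 = X.T @ ((d[:, None] * W2[None, :]) * (1 - H ** 2))
            W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (80, 6)), rng.normal(1, 1, (80, 6))])
y = np.r_[np.zeros(80), np.ones(80)]
W1_pre = rng.normal(scale=0.5, size=(6, 8))    # stands in for pretrained weights
W2_pre = rng.normal(scale=0.5, size=8)
W1_f, W2_f = finetune(W1_pre, W2_pre, X, y, freeze_backbone=True)
W1_u, _ = finetune(W1_pre, W2_pre, X, y, freeze_backbone=False)
acc = float(np.mean(((np.tanh(X @ W1_f) @ W2_f) > 0) == (y > 0.5)))
```

In the frozen case the backbone weights come out unchanged; which layers to unfreeze is exactly the design choice the abstract's strategies explore.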
Affiliation(s)
- Mingzhu Meng: Department of Radiology, The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, China
- Ming Zhang: Department of Radiology, The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, China
- Dong Shen: Department of Radiology, The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, China
- Guangyuan He (corresponding author): Department of Radiology, The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, No. 68 Gehuzhong Rd, Changzhou 213164, Jiangsu Province, China

11
Lehmler SJ, Saif-Ur-Rehman M, Tobias G, Iossifidis I. Deep transfer learning compared to subject-specific models for sEMG decoders. J Neural Eng 2022; 19. [PMID: 36206722 DOI: 10.1088/1741-2552/ac9860] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Accepted: 10/07/2022] [Indexed: 12/24/2022]
Abstract
Objective. Accurate decoding of surface electromyography (sEMG) is pivotal for muscle-to-machine interfaces and their applications, e.g., rehabilitation therapy. sEMG signals have high inter-subject variability due to various factors, including skin thickness, body fat percentage, and electrode placement. Deep learning algorithms require long training times and tend to overfit if only few samples are available. In this study, we investigate methods to calibrate deep learning models to a new user when only a limited amount of training data is available. Approach. Two methods are commonly used in the literature: subject-specific modeling and transfer learning. We investigate the effectiveness of transfer learning using weight initialization for the recalibration of two different pretrained deep learning models on new subjects' data and compare their performance to subject-specific models. We evaluate the two models on three publicly available databases (Non-Invasive Adaptive Prosthetics databases 2-4) and compare both calibration schemes in terms of accuracy, required training data, and calibration time. Main results. On average over all settings, our transfer learning approach improves by 5 percentage points on the pretrained models without fine-tuning, and by 12 percentage points on the subject-specific models, while being trained for 22% fewer epochs on average. Our results indicate that transfer learning enables faster learning on fewer training samples than user-specific models. Significance. To the best of our knowledge, this is the first comparison of subject-specific modeling and transfer learning for sEMG decoding. Both approaches are ubiquitously used in the field, but the lack of comparative studies has made it difficult for scientists to assess appropriate calibration schemes. Our results guide engineers evaluating similar use cases.
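The weight-initialization transfer scheme above can be sketched with a toy decoder: pretrain on a "source" subject, then start the new subject's calibration from those weights instead of from scratch, counting the epochs each scheme needs. Subject data, dimensions, and thresholds below are synthetic placeholders, not the paper's setup.

```python
import numpy as np

def epochs_to_fit(w0, X, y, target=0.9, lr=0.3, max_epochs=500):
    """Train logistic weights from initialization w0; return the number of
    epochs needed to reach `target` training accuracy (a rough proxy for
    the calibration time compared in the abstract)."""
    w = w0.copy()
    for epoch in range(1, max_epochs + 1):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
        if np.mean((X @ w > 0) == (y > 0.5)) >= target:
            return epoch
    return max_epochs

rng = np.random.default_rng(3)

def make_subject(shift):
    """Synthetic per-subject 'sEMG features'; `shift` mimics inter-subject
    variability (electrode placement, skin thickness, etc.)."""
    X = np.vstack([rng.normal(-1 + shift, 1, (60, 5)),
                   rng.normal(1 + shift, 1, (60, 5))])
    y = np.r_[np.zeros(60), np.ones(60)]
    return X, y

Xa, ya = make_subject(0.0)                    # "source" subject
Xb, yb = make_subject(0.3)                    # "new" subject, shifted statistics
w_src = np.zeros(5)                           # pretrain on the source subject
for _ in range(300):
    p = 1 / (1 + np.exp(-Xa @ w_src))
    w_src -= 0.3 * Xa.T @ (p - ya) / len(ya)
scratch = epochs_to_fit(np.zeros(5), Xb, yb)  # subject-specific from scratch
transfer = epochs_to_fit(w_src, Xb, yb)       # initialized from source weights
```

Because the source weights already point in roughly the right direction, transfer calibration never needs more epochs than training from scratch on this toy task.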
Affiliation(s)
- Stephan Johann Lehmler: Institute of Computer Science, University of Applied Science Ruhr West, Mülheim an der Ruhr, Germany; Faculty of Electrical Engineering and Information Technology, Ruhr-University, Bochum, Germany
- Muhammad Saif-Ur-Rehman: Institute of Computer Science, University of Applied Science Ruhr West, Mülheim an der Ruhr, Germany
- Ioannis Iossifidis: Institute of Computer Science, University of Applied Science Ruhr West, Mülheim an der Ruhr, Germany

12
Bhuiyan MR, Abdullah J. Detection on Cell Cancer Using the Deep Transfer Learning and Histogram Based Image Focus Quality Assessment. Sensors (Basel) 2022; 22:7007. [PMID: 36146356 PMCID: PMC9504738 DOI: 10.3390/s22187007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/27/2022] [Revised: 08/15/2022] [Accepted: 08/26/2022] [Indexed: 06/16/2023]
Abstract
In recent years, the number of studies using whole-slide images (WSIs) of histopathology slides has expanded significantly. For the development and validation of artificial intelligence (AI) systems, glass slides from retrospective cohorts, including patient follow-up data, have been digitized, and it has become crucial to verify that the quality of such resources meets the minimum requirements for future AI development. The need for automated quality control is one of the obstacles preventing the clinical implementation of digital pathology workflows. When a scanner misjudges the focus of an image, the resulting visual blur can render the scanned slide useless. Moreover, when scanned at a resolution of 20× or higher, the image of a scanned slide is often enormous. Therefore, for digital pathology to be clinically relevant, computational algorithms must rapidly and reliably measure an image's focus quality and decide whether it requires re-scanning. We propose a metric for evaluating the quality of digital pathology images that uses a sum of even-derivative filter bases to generate a human-visual-system-like kernel, described as the inverse of the lens's point spread function. This kernel is applied to a digital pathology image to recover high-frequency image content degraded by the scanner's optics and to assess patch-level focus quality. Through several studies, we demonstrate that our technique correlates with ground-truth z-level data better than previous methods and is computationally efficient. Using deep learning techniques, our system is also able to identify positive and negative cancer cells in images.
We further extend our technique to create a local slide-level focus-quality heatmap, which can be utilized for automated slide quality control, and we illustrate our method's value in clinical scan quality control by comparing it to subjective slide quality ratings. The proposed method, GoogleNet, VGGNet, and ResNet achieved accuracy values of 98.5%, 94.5%, 94.0%, and 95.0%, respectively.
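A common patch-level focus measure in the same spirit (high-frequency energy drops as blur increases) is the variance of a Laplacian response. The sketch below uses the textbook 5-point Laplacian and a box blur to simulate defocus; it illustrates the principle, not the authors' even-derivative filter bank.

```python
import numpy as np

def laplacian_variance(img):
    """Patch-level focus score: variance of a discrete Laplacian response.
    Sharper (in-focus) patches contain more high-frequency energy, hence a
    higher score."""
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def box_blur(img, k=5):
    """Simple k x k mean filter to simulate scanner defocus."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))          # high-frequency "tissue" texture
blurred = box_blur(sharp)             # out-of-focus version of the same patch
```

A quality-control loop would threshold this score per patch and flag slides with too many low-scoring patches for re-scanning.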
13
Liu C, Zheng X, Bao Z, He Z, Gao M, Song W. A Novel Deep Transfer Learning Method for Intelligent Fault Diagnosis Based on Variational Mode Decomposition and Efficient Channel Attention. Entropy (Basel) 2022; 24:1087. [PMID: 36010751 PMCID: PMC9407064 DOI: 10.3390/e24081087] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/04/2022] [Revised: 08/02/2022] [Accepted: 08/05/2022] [Indexed: 06/15/2023]
Abstract
In recent years, deep learning has been applied to intelligent fault diagnosis and has achieved great success. However, the fault diagnosis method of deep learning assumes that the training dataset and the test dataset are obtained under the same operating conditions. This condition can hardly be met in real application scenarios. Additionally, signal preprocessing technology also has an important influence on intelligent fault diagnosis. How to effectively relate signal preprocessing to a transfer diagnostic model is a challenge. To solve the above problems, we propose a novel deep transfer learning method for intelligent fault diagnosis based on Variational Mode Decomposition (VMD) and Efficient Channel Attention (ECA). In the proposed method, the VMD adaptively matches the optimal center frequency and finite bandwidth of each mode to achieve effective separation of signals. To fuse the mode features more effectively after VMD decomposition, ECA is used to learn channel attention. The experimental results show that the proposed signal preprocessing and feature fusion module can increase the accuracy and generality of the transfer diagnostic model. Moreover, we comprehensively analyze and compare our method with state-of-the-art methods at different noise levels, and the results show that our proposed method has better robustness and generalization performance.
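The ECA step used above can be sketched as: global-average-pool each channel, mix neighboring channels with a small 1-D convolution (local cross-channel interaction, no dimensionality reduction), and gate the feature maps with a sigmoid. The kernel weights below are fixed placeholders; in the real module they are learned.

```python
import numpy as np

def eca(features, kernel_size=3):
    """Efficient Channel Attention (ECA) sketch in NumPy for a (C, H, W)
    feature tensor, e.g. the fused mode features after VMD decomposition."""
    C, H, W = features.shape
    squeeze = features.mean(axis=(1, 2))              # (C,) channel descriptor
    pad = kernel_size // 2
    padded = np.pad(squeeze, pad, mode="edge")
    kernel = np.ones(kernel_size) / kernel_size       # placeholder for learned weights
    conv = np.array([padded[i:i + kernel_size] @ kernel for i in range(C)])
    gate = 1 / (1 + np.exp(-conv))                    # per-channel attention in (0, 1)
    return features * gate[:, None, None]             # reweight the channels

x = np.random.default_rng(0).random((8, 4, 4))        # 8 "mode" channels
out = eca(x)
```

The attention gate rescales each channel without changing the tensor shape, so it drops into a diagnostic backbone between the VMD feature maps and the classifier.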
Affiliation(s)
- Caiming Liu: School of Electronic Information, Hangzhou Dianzi University, Hangzhou 310018, China; Zhejiang Provincial Key Lab of Equipment Electronics, Hangzhou 310018, China
- Xiaorong Zheng: School of Electronic Information, Hangzhou Dianzi University, Hangzhou 310018, China; Zhejiang Provincial Key Lab of Equipment Electronics, Hangzhou 310018, China
- Zhengyi Bao: School of Electronic Information, Hangzhou Dianzi University, Hangzhou 310018, China; Zhejiang Provincial Key Lab of Equipment Electronics, Hangzhou 310018, China
- Zhiwei He: School of Electronic Information, Hangzhou Dianzi University, Hangzhou 310018, China; Zhejiang Provincial Key Lab of Equipment Electronics, Hangzhou 310018, China
- Mingyu Gao: School of Electronic Information, Hangzhou Dianzi University, Hangzhou 310018, China; Zhejiang Provincial Key Lab of Equipment Electronics, Hangzhou 310018, China
- Wenlong Song: Tianneng Battery Group Co., Ltd., Changxing 313100, China

14
Chen KY, Shin J, Hasan MAM, Liaw JJ, Yuichi O, Tomioka Y. Fitness Movement Types and Completeness Detection Using a Transfer-Learning-Based Deep Neural Network. Sensors (Basel) 2022; 22:5700. [PMID: 35957257 PMCID: PMC9371130 DOI: 10.3390/s22155700] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/08/2022] [Revised: 07/25/2022] [Accepted: 07/27/2022] [Indexed: 06/15/2023]
Abstract
Fitness is important in people's lives. Good fitness habits can improve cardiopulmonary capacity, increase concentration, prevent obesity, and effectively reduce the risk of death. Home fitness does not require large equipment; it uses dumbbells, yoga mats, and horizontal bars to complete fitness exercises, and it effectively avoids contact with other people, so it is widely popular. People who work out at home use social media to obtain fitness knowledge, but their learning ability is limited, and performing movements incompletely is likely to lead to injury. A cheap, timely, and accurate fitness detection system can reduce the risk of fitness injuries and effectively improve people's fitness awareness. Many previous studies have addressed the detection of fitness movements, among which methods based on wearable devices, body nodes, and image deep learning have achieved good performance. However, a wearable device cannot detect a variety of fitness movements, may hinder the user's exercise, and has a high cost; body-node-based and image-deep-learning-based methods have lower costs, but each has drawbacks. Therefore, this paper used a method based on deep transfer learning to establish a fitness database, after which a deep neural network was trained to detect the type and completeness of fitness movements. We used YOLOv4 and MediaPipe to detect fitness movements in real time and stored the 1D signal of each movement to build the database. Finally, an MLP was used to classify the 1D fitness signal waveforms. For classification of fitness movement types, the mAP was 99.71%, accuracy was 98.56%, precision was 97.9%, recall was 98.56%, and the F1-score was 98.23%, which is quite high performance. For fitness movement completeness classification, accuracy was 92.84%, precision was 92.85%, recall was 92.84%, and the F1-score was 92.83%. The average FPS during detection was 17.5.
Experimental results show that our method achieves higher accuracy than other methods.
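The completeness metrics quoted above follow the standard precision/recall/F1 definitions, which can be computed directly from the confusion-matrix counts; the labels below are made-up illustrations.

```python
import numpy as np

def precision_recall_f1(y_true, y_pred):
    """Precision, recall, and F1-score for a binary decision such as
    'movement complete (1) vs. incomplete (0)'."""
    tp = np.sum((y_pred == 1) & (y_true == 1))   # true positives
    fp = np.sum((y_pred == 1) & (y_true == 0))   # false positives
    fn = np.sum((y_pred == 0) & (y_true == 1))   # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical labels for eight repetitions of one exercise.
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0])
p, r, f = precision_recall_f1(y_true, y_pred)    # 0.75, 0.75, 0.75
```

The abstract's near-identical accuracy/precision/recall values suggest a fairly balanced class distribution, where these metrics converge.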
Affiliation(s)
- Kuan-Yu Chen: School of Computer Science and Engineering, The University of Aizu, Aizuwakamatsu 9658580, Fukushima, Japan; Department of Information and Communication Engineering, Chaoyang University of Technology, Taichung 41349, Taiwan
- Jungpil Shin: School of Computer Science and Engineering, The University of Aizu, Aizuwakamatsu 9658580, Fukushima, Japan
- Md. Al Mehedi Hasan: School of Computer Science and Engineering, The University of Aizu, Aizuwakamatsu 9658580, Fukushima, Japan
- Jiun-Jian Liaw: Department of Information and Communication Engineering, Chaoyang University of Technology, Taichung 41349, Taiwan
- Okuyama Yuichi: School of Computer Science and Engineering, The University of Aizu, Aizuwakamatsu 9658580, Fukushima, Japan
- Yoichi Tomioka: School of Computer Science and Engineering, The University of Aizu, Aizuwakamatsu 9658580, Fukushima, Japan

15
Danala G, Maryada SK, Islam W, Faiz R, Jones M, Qiu Y, Zheng B. A Comparison of Computer-Aided Diagnosis Schemes Optimized Using Radiomics and Deep Transfer Learning Methods. Bioengineering (Basel) 2022; 9. [PMID: 35735499 DOI: 10.3390/bioengineering9060256] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2022] [Revised: 05/25/2022] [Accepted: 06/13/2022] [Indexed: 01/29/2023] Open
Abstract
Objective: Radiomics and deep transfer learning are two popular technologies used to develop computer-aided detection and diagnosis (CAD) schemes for medical images. This study aims to investigate and compare the advantages and potential limitations of applying these two technologies to developing CAD schemes. Methods: A relatively large and diverse retrospective dataset of 3000 digital mammograms was assembled, in which 1496 images depicted malignant lesions and 1504 depicted benign lesions. Two CAD schemes were developed to classify breast lesions. The first scheme was developed in four steps: applying an adaptive multi-layer topographic region growing algorithm to segment lesions, computing initial radiomics features, applying a principal component algorithm to generate an optimal feature vector, and building a support vector machine classifier. The second CAD scheme was built on a pretrained residual network architecture (ResNet50) as a transfer learning model to classify breast lesions. Both CAD schemes were trained and tested using 10-fold cross-validation, and several score fusion methods were also investigated. CAD performance was evaluated and compared by the area under the ROC curve (AUC). Results: The ResNet50-based CAD scheme yielded AUC = 0.85 ± 0.02, significantly higher than the radiomics-feature-based CAD scheme with AUC = 0.77 ± 0.02 (p < 0.01). Additionally, fusing the classification scores generated by the two CAD schemes did not further improve classification performance. Conclusion: This study demonstrates that using deep transfer learning is more efficient for developing CAD schemes and enables higher lesion classification performance than CAD schemes developed with radiomics-based technology.
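The "optimal feature vector" step of the first CAD scheme is a principal component projection of the initial radiomics features. A minimal SVD-based PCA sketch (standard technique, not the authors' code; the data here is a synthetic low-rank "radiomics" matrix):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project a (samples x features) matrix onto its top principal
    components via SVD, producing a compact feature vector per sample."""
    Xc = X - X.mean(axis=0)                           # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T                   # scores on top components

rng = np.random.default_rng(0)
# 50 lesions x 20 radiomics features, intrinsically rank-3.
X = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 20))
Z = pca_reduce(X, 3)                                  # 3 components per lesion
```

The reduced matrix `Z` would then feed the SVM classifier in the radiomics pipeline.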
16
Li W, Li W, Qin Z, Tan L, Huang L, Liu F, Xiao C. Deep Transfer Learning for Ni-Based Superalloys Microstructure Recognition on γ' Phase. Materials (Basel) 2022; 15:4251. [PMID: 35744305 DOI: 10.3390/ma15124251] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Revised: 06/02/2022] [Accepted: 06/13/2022] [Indexed: 11/17/2022]
Abstract
Ni-based superalloys are widely used to manufacture the critical hot-end components of aviation jet engines and various industrial gas turbines. The analysis of Ni-based superalloy microstructures is an important research task during the design and development of superalloys. Historically, microstructure information could only be interpreted by experts; image segmentation and recognition are emerging techniques for automatically accelerating microstructure analysis. Although deep learning techniques have achieved satisfactory performance, they usually suffer from poor generalization, i.e., they perform worse on a new dataset. In this paper, a deep transfer learning method that needs only a small number of labeled images is proposed for microstructure recognition of the γ′ phase. To evaluate its effectiveness, we prepared two Ni-based superalloys in-house at temperatures of 900 °C and 1000 °C, and manually annotated two datasets, named W-900 and W-1000. Experimental results demonstrate that the proposed method needs only 3 and 5 labeled images to achieve state-of-the-art segmentation accuracy for the transfer from W-900 to W-1000 and from W-1000 to W-900, respectively, while enjoying the advantage of fast convergence. In addition, simple and effective software for Ni-based superalloy microstructure recognition of the γ′ phase was developed to improve the efficiency of materials experts, which will greatly facilitate the design of new Ni-based superalloys and even other multicomponent alloys.
17
Jones MA, Faiz R, Qiu Y, Zheng B. Improving mammography lesion classification by optimal fusion of handcrafted and deep transfer learning features. Phys Med Biol 2022; 67:10.1088/1361-6560/ac5297. [PMID: 35130517 PMCID: PMC8935657 DOI: 10.1088/1361-6560/ac5297] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Accepted: 02/07/2022] [Indexed: 12/20/2022]
Abstract
Objective. Handcrafted radiomics features or deep learning model-generated automated features are commonly used to develop computer-aided diagnosis (CAD) schemes for medical images. The objective of this study is to test the hypothesis that handcrafted and automated features contain complementary classification information, and that fusing these two types of features can improve CAD performance. Approach. We retrospectively assembled a dataset involving 1535 lesions (740 malignant and 795 benign). Regions of interest (ROIs) surrounding suspicious lesions were extracted, and two types of features were computed from each ROI: 40 radiomics features, and automated features computed from a VGG16 network using a transfer learning method. A single-channel ROI image was converted to a three-channel pseudo-ROI image by stacking the original image, a bilateral-filtered image, and a histogram-equalized image. Two VGG16 models, one using pseudo-ROIs and one using three stacked copies of the original ROI without pre-processing, were used to extract automated features. Five linear support vector machines (SVMs) were built using the optimally selected feature vectors from the handcrafted features, the two sets of VGG16 model-generated automated features, and the fusion of the handcrafted features with each set of automated features, respectively. Main results. Using 10-fold cross-validation, the fusion SVM using pseudo-ROIs yields the highest lesion classification performance, with an area under the ROC curve of AUC = 0.756 ± 0.042, significantly higher than those of the other SVMs trained using handcrafted or automated features only (p < 0.05). Significance. This study demonstrates that both handcrafted and automated features contain useful information for classifying breast lesions, and that fusing these two types of features can further increase CAD performance.
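The pseudo-ROI construction above can be sketched as follows. To keep the example dependency-free, the bilateral filter is replaced by a simple box blur, so this is an approximation of the paper's pipeline, not a reproduction.

```python
import numpy as np

def hist_equalize(img, bins=256):
    """Global histogram equalization of a grayscale image in [0, 1]:
    map each pixel through the empirical CDF of its intensity."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum() / img.size
    idx = np.clip((img * bins).astype(int), 0, bins - 1)
    return cdf[idx]

def pseudo_rgb(roi):
    """Stack original, smoothed, and equalized copies into the 3-channel
    pseudo-ROI fed to VGG16 (box blur stands in for the bilateral filter)."""
    pad = np.pad(roi, 1, mode="edge")
    smooth = sum(pad[dy:dy + roi.shape[0], dx:dx + roi.shape[1]]
                 for dy in range(3) for dx in range(3)) / 9.0
    return np.stack([roi, smooth, hist_equalize(roi)], axis=0)

roi = np.random.default_rng(0).random((32, 32))   # toy single-channel ROI
x = pseudo_rgb(roi)                               # (3, 32, 32) pseudo-RGB input
```

The point of the stacking is that an ImageNet-pretrained VGG16 expects three channels, so each channel can carry a different enhancement of the same grayscale ROI.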
Affiliation(s)
- Meredith A. Jones (corresponding author): School of Biomedical Engineering, University of Oklahoma, Norman, OK 73019, USA
- Rowzat Faiz: School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Yuchen Qiu: School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Bin Zheng: School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA

18
Gouda W, Almurafeh M, Humayun M, Jhanjhi NZ. Detection of COVID-19 Based on Chest X-rays Using Deep Learning. Healthcare (Basel) 2022; 10:343. [PMID: 35206957 PMCID: PMC8872326 DOI: 10.3390/healthcare10020343] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2021] [Revised: 01/27/2022] [Accepted: 01/30/2022] [Indexed: 01/09/2023] Open
Abstract
The coronavirus disease (COVID-19) is rapidly spreading around the world, and early diagnosis and isolation of COVID-19 patients have proven crucial in slowing its spread. One of the best options for detecting COVID-19 reliably and easily is to use deep learning (DL) strategies. Two DL approaches based on a pretrained neural network model (ResNet-50) for COVID-19 detection using chest X-ray (CXR) images are proposed in this study. Augmenting, enhancing, normalizing, and resizing CXR images to a fixed size are all part of the preprocessing stage. This research proposes a DL method for classifying CXR images based on an ensemble employing multiple runs of a modified version of ResNet-50. The proposed system is evaluated against two publicly available benchmark datasets frequently used by researchers: the COVID-19 Image Data Collection (IDC) and CXR Images (Pneumonia). Based on the performance results obtained, the proposed system surpasses existing methods such as VGG and DenseNet, with values exceeding 99.63% on many metrics, including accuracy, precision, recall, F1-score, and area under the curve (AUC).
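The ensemble step, combining multiple runs of the same modified ResNet-50, is typically done by averaging the runs' predicted class probabilities before taking the argmax. The scores below are made-up illustrations, not outputs of the paper's model.

```python
import numpy as np

def ensemble_predict(prob_runs):
    """Average class probabilities over independently trained runs of the
    same network, then pick the highest-probability class per sample."""
    mean_probs = np.mean(prob_runs, axis=0)       # (n_samples, n_classes)
    return mean_probs.argmax(axis=1)

# Three hypothetical runs scoring four chest X-rays as [normal, COVID-19].
runs = np.array([
    [[0.9, 0.1], [0.4, 0.6], [0.2, 0.8], [0.6, 0.4]],
    [[0.8, 0.2], [0.6, 0.4], [0.3, 0.7], [0.7, 0.3]],
    [[0.7, 0.3], [0.3, 0.7], [0.1, 0.9], [0.8, 0.2]],
])
labels = ensemble_predict(runs)                   # -> [0, 1, 1, 0]
```

Averaging smooths out run-to-run variance from random initialization and data shuffling, which is the usual motivation for multi-run ensembles.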
Affiliation(s)
- Walaa Gouda: Department of Computer Engineering and Network, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Aljouf, Saudi Arabia
- Maram Almurafeh: Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Aljouf, Saudi Arabia
- Mamoona Humayun: Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Aljouf, Saudi Arabia
- Noor Zaman Jhanjhi: School of Computer Science and Engineering (SCE), Taylor's University, Subang Jaya 47500, Selangor, Malaysia

19
Abstract
This study proposes a new predictive segmentation method for liver tumor detection using computed tomography (CT) liver images. In the medical imaging field, the exact localization of metastasis lesions after acquisition poses persistent problems both for diagnostic aid and for treatment effectiveness; improving the diagnostic process is therefore crucial to increase the chance of successful management and therapeutic follow-up. The proposed procedure is a computerized approach based on an encoder-decoder structure that provides volumetric analysis of pathologic tumors. Specifically, we developed an automatic algorithm for liver tumor segmentation using the SegNet and U-Net architectures on metastasis CT images. We collected a dataset of 200 pathologically confirmed metastatic cancer cases, from which a total of 8,297 CT image slices were used to develop and optimize the proposed segmentation architecture. The model was trained and validated using 170 and 30 cases (85% and 15% of the CT image data), respectively. Study results demonstrate the strength of the proposed approach, which achieves excellent segmentation performance as evaluated by the following indices: F1-score = 0.9573, recall = 0.9520, IoU = 0.9654, and binary cross-entropy = 0.0032 (p < 0.05). In comparison to state-of-the-art techniques, the proposed method yields a higher precision rate in localizing metastatic tumors.
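The overlap indices reported above can be computed directly from binary masks; on masks, F1 equals the Dice coefficient. The masks below are toy illustrations.

```python
import numpy as np

def seg_metrics(pred, truth):
    """Overlap metrics for binary segmentation masks: intersection-over-union
    (IoU), recall, and F1 (equivalent to the Dice coefficient)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    iou = inter / union
    recall = inter / truth.sum()
    precision = inter / pred.sum()
    f1 = 2 * precision * recall / (precision + recall)
    return iou, recall, f1

truth = np.zeros((8, 8), dtype=int); truth[2:6, 2:6] = 1   # 16-pixel "lesion"
pred = np.zeros((8, 8), dtype=int);  pred[3:7, 2:6] = 1    # shifted prediction
iou, recall, f1 = seg_metrics(pred, truth)                 # 0.6, 0.75, 0.75
```

These per-slice scores are usually averaged over the validation cases to produce the summary numbers quoted in the abstract.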
Affiliation(s)
- Hanene Sahli: Laboratory of Signal Image and Energy Mastery (SIME), LR13ES03, University of Tunis, ENSIT, 1008, Tunis, Tunisia
- Amine Ben Slama: Laboratory of Biophysics and Medical Technologies, LR13ES07, University of Tunis El Manar, ISTMT, 1006, Tunis, Tunisia
- Salam Labidi: Laboratory of Biophysics and Medical Technologies, LR13ES07, University of Tunis El Manar, ISTMT, 1006, Tunis, Tunisia

20
Buragohain A, Mali B, Saha S, Singh PK. A deep transfer learning based approach to detect COVID‐19 waste. Internet Technology Letters 2022; 5:e327. [PMCID: PMC8646505 DOI: 10.1002/itl2.327] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Revised: 10/04/2021] [Accepted: 10/06/2021] [Indexed: 06/14/2023]
Abstract
COVID‐19, or novel coronavirus disease, has not only created a pandemic but also generated a new category of waste, known as COVID‐19 waste: masks, hand gloves, sanitizer bottles, Personal Protective Equipment (PPE) kits, syringes used to vaccinate people, and so on. These wastes are now polluting every continent and ocean, and improper disposal of them may increase the rate of spread of contamination. In this regard, we built a detection model able to detect some categories of COVID‐19 waste, considering masks, hand gloves, and syringes as the initial classes. We collected the dataset manually, annotated the images with these three classes, and then trained different CNN models to compare their accuracies on our dataset. The best model was EfficientDet D0, which gives a mean average precision of 0.82. Furthermore, we developed a UI to deploy the model, where general users can upload images and detect the wastes while controlling the detection threshold.
Affiliation(s)
- Ayushman Buragohain: Department of CSE, Central Institute of Technology Kokrajhar, Kokrajhar, Assam, India
- Bhabesh Mali: Department of CSE, Central Institute of Technology Kokrajhar, Kokrajhar, Assam, India
- Santanu Saha: Department of CSE, Central Institute of Technology Kokrajhar, Kokrajhar, Assam, India
- Pranav Kumar Singh: Department of CSE, Central Institute of Technology Kokrajhar, Kokrajhar, Assam, India

21
Tariq H, Rashid M, Javed A, Zafar E, Alotaibi SS, Zia MYI. Performance Analysis of Deep-Neural-Network-Based Automatic Diagnosis of Diabetic Retinopathy. Sensors (Basel) 2021; 22:205. [PMID: 35009747 PMCID: PMC8749542 DOI: 10.3390/s22010205] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/20/2021] [Revised: 12/13/2021] [Accepted: 12/22/2021] [Indexed: 06/14/2023]
Abstract
Diabetic retinopathy (DR) is an eye disease that affects people suffering from diabetes, causing damage to their eyes, including vision loss. It is treatable; however, it takes a long time to diagnose and may require many eye exams. Early detection of DR may prevent or delay vision loss, so a robust, automatic, computer-based diagnosis of DR is essential. Deep neural networks are currently utilized in numerous medical areas to diagnose various diseases; consequently, deep transfer learning is utilized in this article. We employ five convolutional-neural-network-based designs (AlexNet, GoogleNet, Inception V4, Inception ResNet V2, and ResNeXt-50). A collection of DR pictures is created, and the created collections are labeled with an appropriate treatment approach; this automates the diagnosis and assists patients through subsequent therapies. Furthermore, in order to identify the severity of DR in retina pictures, we use our own dataset to train deep convolutional neural networks (CNNs). Experimental results reveal that the pretrained model Se-ResNeXt-50 obtains the best classification accuracy, 97.53%, on our dataset out of all pretrained models. Moreover, we perform five different experiments on each CNN architecture; as a result, a minimum accuracy of 84.01% is achieved for the five-class classification.
Affiliation(s)
- Hassan Tariq
- Department of Electrical Engineering, School of Engineering, University of Management and Technology (UMT), Lahore 54770, Pakistan
- Muhammad Rashid
- Department of Computer Engineering, Umm Al-Qura University, Makkah 21955, Saudi Arabia
- Asfa Javed
- Department of Electrical Engineering, School of Engineering, University of Management and Technology (UMT), Lahore 54770, Pakistan
- Eeman Zafar
- Department of Electrical Engineering, School of Engineering, University of Management and Technology (UMT), Lahore 54770, Pakistan
- Saud S. Alotaibi
- Department of Information Systems, Umm Al-Qura University, Makkah 21955, Saudi Arabia
22
Ren B, Wu Y, Huang L, Zhang Z, Huang B, Zhang H, Ma J, Li B, Liu X, Wu G, Zhang J, Shen L, Liu Q, Ni J. Deep transfer learning of structural magnetic resonance imaging fused with blood parameters improves brain age prediction. Hum Brain Mapp 2021; 43:1640-1656. [PMID: 34913545] [PMCID: PMC8886664] [DOI: 10.1002/hbm.25748]
Abstract
Machine learning has been applied to neuroimaging data for estimating brain age and capturing early cognitive impairment in neurodegenerative diseases. Blood parameters such as neurofilament light chain are associated with aging. To improve brain-age predictive accuracy, we constructed a model based on both brain structural magnetic resonance imaging (sMRI) and blood parameters. Healthy subjects (n = 93; 37 males; aged 50–85 years) were recruited. A deep learning network was first pretrained on a large set of MRI scans (n = 1,481; 659 males; aged 50–85 years) downloaded from multiple open-source datasets, to provide weights for our recruited dataset. Evaluating the network on the recruited dataset resulted in a mean absolute error (MAE) of 4.91 years and a high correlation (r = .67, p < .001) against chronological age. The sMRI data were then combined with five blood biochemical indicators (GLU, TG, TC, ApoA1, and ApoB) and nine dementia-associated biomarkers (ApoE genotype, HCY, NFL, TREM2, Aβ40, Aβ42, T-tau, TIMP1, and VLDLR) to construct a bilinear fusion model, which achieved a more accurate prediction of brain age (MAE, 3.96 years; r = .76, p < .001). Notably, the fusion model achieved greater improvement in the group of older subjects (70–85 years). Extracted attention maps of the network showed that the amygdala, pallidum, and olfactory region were effective for age estimation. Mediation analysis further showed that brain structural features and blood parameters made independent and significant contributions. The constructed age prediction model may have promising potential for evaluating brain health based on MRI and blood parameters.
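The bilinear fusion idea (combining imaging features with blood parameters so a downstream head sees all cross-modal interactions) can be illustrated with an outer-product sketch. The dimensions, feature values, and readout weights below are hypothetical; this is a schematic, not the authors' network.

```python
import numpy as np

def bilinear_fuse(img_feat, blood_feat):
    """Outer product of the two modality vectors captures every
    pairwise imaging x blood interaction; flatten for a linear head."""
    return np.outer(img_feat, blood_feat).ravel()

img_feat = np.array([0.2, 1.5, 0.7])    # e.g. pooled sMRI features (toy)
blood_feat = np.array([1.0, 0.3])       # e.g. normalized blood markers (toy)

fused = bilinear_fuse(img_feat, blood_feat)   # length 3 * 2 = 6

# A linear readout on the fused vector would then regress brain age.
w = np.full(fused.size, 0.1)
age_pred = 60.0 + fused @ w
```

The fused vector grows multiplicatively with the two feature dimensions, so real implementations typically apply pooling or a low-rank factorization after the outer product.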
Affiliation(s)
- Bingyu Ren
- Shenzhen Key Laboratory of Marine Biotechnology and Ecology, College of Life Sciences and Oceanography, Shenzhen University, Shenzhen, China
- Yingtong Wu
- Medical AI Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Liumei Huang
- Shenzhen Key Laboratory of Marine Biotechnology and Ecology, College of Life Sciences and Oceanography, Shenzhen University, Shenzhen, China
- Zhiguo Zhang
- MIND Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Bingsheng Huang
- Medical AI Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, China
- Huajie Zhang
- Shenzhen Key Laboratory of Marine Biotechnology and Ecology, College of Life Sciences and Oceanography, Shenzhen University, Shenzhen, China
- Jinting Ma
- Medical AI Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Bing Li
- Medical AI Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Xukun Liu
- Shenzhen Key Laboratory of Marine Biotechnology and Ecology, College of Life Sciences and Oceanography, Shenzhen University, Shenzhen, China
- Guangyao Wu
- Radiology Department, Shenzhen University General Hospital and Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, China
- Jian Zhang
- Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, China; Health Science Center, Shenzhen University, Shenzhen, China
- Liming Shen
- Shenzhen Key Laboratory of Marine Biotechnology and Ecology, College of Life Sciences and Oceanography, Shenzhen University, Shenzhen, China
- Qiong Liu
- Shenzhen Key Laboratory of Marine Biotechnology and Ecology, College of Life Sciences and Oceanography, Shenzhen University, Shenzhen, China; Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, China; Shenzhen Bay Laboratory, Shenzhen, China
- Jiazuan Ni
- Shenzhen Key Laboratory of Marine Biotechnology and Ecology, College of Life Sciences and Oceanography, Shenzhen University, Shenzhen, China
23
Bo L, Zhang Z, Jiang Z, Yang C, Huang P, Chen T, Wang Y, Yu G, Tan X, Cheng Q, Li D, Liu Z. Differentiation of Brain Abscess From Cystic Glioma Using Conventional MRI Based on Deep Transfer Learning Features and Hand-Crafted Radiomics Features. Front Med (Lausanne) 2021; 8:748144. [PMID: 34869438] [PMCID: PMC8636043] [DOI: 10.3389/fmed.2021.748144]
Abstract
Objectives: To develop and validate a model for distinguishing brain abscess from cystic glioma by combining deep transfer learning (DTL) features and hand-crafted radiomics (HCR) features from conventional T1-weighted imaging (T1WI) and T2-weighted imaging (T2WI). Methods: This single-center retrospective analysis involved 188 patients with pathologically proven brain abscess (n = 102) or cystic glioma (n = 86). One thousand DTL features and 105 HCR features were extracted from the T1WI and T2WI of the patients. Three feature selection methods and four classifiers, namely k-nearest neighbors (KNN), random forest classifier (RFC), logistic regression (LR), and support vector machine (SVM), were compared for distinguishing brain abscess from cystic glioma. The best feature combination and classifier were chosen according to quantitative metrics including area under the curve (AUC), Youden index, and accuracy. Results: In most cases, deep learning-based radiomics (DLR) features, i.e., DTL features combined with HCR features, yielded higher accuracy than HCR or DTL features alone for distinguishing brain abscesses from cystic gliomas. The AUC values of the model established on the DLR features in T2WI were 0.86 (95% CI: 0.81, 0.91) in the training cohort and 0.85 (95% CI: 0.75, 0.95) in the test cohort. Conclusions: The model established with the DLR features can distinguish brain abscess from cystic glioma efficiently, providing a useful, inexpensive, convenient, and non-invasive method for differential diagnosis. This is the first time that conventional MRI radiomics has been applied to identify these diseases, and the combination of HCR and DTL features achieves impressive performance.
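Model selection in studies like this one leans on AUC, the Youden index, and accuracy, all of which are simple to compute from raw scores. A minimal sketch (the labels and scores are toy values for illustration):

```python
import numpy as np

def roc_auc(y_true, scores):
    """Rank-based AUC: probability that a random positive case
    receives a higher score than a random negative case (ties = 0.5)."""
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

def youden_index(y_true, scores, thr):
    """Sensitivity + specificity - 1 at a given decision threshold."""
    pred = scores >= thr
    sensitivity = pred[y_true == 1].mean()
    specificity = (~pred[y_true == 0]).mean()
    return sensitivity + specificity - 1.0

y = np.array([0, 0, 1, 1, 1, 0])
s = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.2])
auc = roc_auc(y, s)          # 8 of 9 positive/negative pairs ranked correctly
j = youden_index(y, s, 0.3)
```

Scanning `youden_index` over all candidate thresholds recovers the operating point usually reported alongside the AUC.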
Affiliation(s)
- Linlin Bo
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Zijian Zhang
- Department of Oncology, Xiangya Hospital, Central South University, Changsha, China
- Zekun Jiang
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Chao Yang
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Pu Huang
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Tingyin Chen
- Department of Network Information Center, Xiangya Hospital, Central South University, Changsha, China
- Yifan Wang
- Department of Network Information Center, Xiangya Hospital, Central South University, Changsha, China
- Gang Yu
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Xiao Tan
- Department of Oncology, Xiangya Hospital, Central South University, Changsha, China
- Quan Cheng
- Department of Neurosurgery, Xiangya Hospital, Central South University, Changsha, China; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Dengwang Li
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Zhixiong Liu
- Department of Neurosurgery, Xiangya Hospital, Central South University, Changsha, China; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
24
Li J, Wang P, Zhou Y, Liang H, Lu Y, Luan K. A novel classification method of lymph node metastasis in colorectal cancer. Bioengineered 2021; 12:2007-2021. [PMID: 34024255] [PMCID: PMC8806456] [DOI: 10.1080/21655979.2021.1930333]
Abstract
Colorectal cancer lymph node metastasis, which is highly associated with the patient's cancer recurrence and survival rate, has been the focus of many therapeutic strategies. Identifying lymph node metastasis in colorectal cancer is a key factor in the treatment of patients with this disease. The popular neural-network methods for classifying lymph node metastasis, however, show limitations: the available low-level features are inadequate for classification, and radiologists are unable to quickly review the images. In the present work, an automatic classification method based on deep transfer learning was proposed. Specifically, the method resolved the problem of repetition of low-level features and combined these features with high-level features into a new feature map for classification, using a merged layer that merges all features transmitted from previous layers into the map of the first fully connected layer. With a dataset collected from Harbin Medical University Cancer Hospital, the experiment involved a sample of 3,364 patients; among these samples, 1,646 were positive and 1,718 were negative. The experimental results showed that the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were 0.8732, 0.8746, 0.8746, and 0.8728, respectively, and the accuracy and AUC were 0.8358 and 0.8569, respectively. These results demonstrate that our method significantly outperformed previous classification methods for colorectal cancer lymph node metastasis without increasing the depth or width of the model.
Affiliation(s)
- Jin Li
- College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, China
- Peng Wang
- College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, China
- Yang Zhou
- College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, China
- Department of Radiology, Harbin Medical University Cancer Hospital, Harbin, Heilongjiang Province, China
- Hong Liang
- College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, China
- Yang Lu
- College of Information and Electrical Engineering, Heilongjiang Bayi Agricultural University, Daqing, Heilongjiang Province, China
- Kuan Luan
- College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, China
25
Mahmood T, Li J, Pei Y, Akhtar F. An Automated In-Depth Feature Learning Algorithm for Breast Abnormality Prognosis and Robust Characterization from Mammography Images Using Deep Transfer Learning. Biology (Basel) 2021; 10:859. [PMID: 34571736] [PMCID: PMC8468800] [DOI: 10.3390/biology10090859]
Abstract
BACKGROUND Diagnosing breast cancer masses and calcification clusters has paramount significance in mammography, which aids in mitigating the disease's complexities and curing it at early stages. However, a wrong mammogram interpretation may lead to an unnecessary biopsy of false-positive findings, which reduces the patient's survival chances. Consequently, approaches that learn to discern breast masses can reduce the number of misconceptions and incorrect diagnoses. Conventionally used classification models focus on feature extraction techniques specific to a particular problem based on domain information. Deep learning strategies are becoming promising alternatives that overcome many of the challenges of feature-based approaches. METHODS This study introduces a convolutional neural network (ConvNet)-based deep learning method to extract features at varying densities and discern normal and suspected regions in mammography. Two different experiments were carried out to achieve accurate diagnosis and classification. The first experiment consisted of five end-to-end pre-trained and fine-tuned deep convolutional neural networks (DCNNs); DCNNs, including VGGNet, GoogLeNet, MobileNet, ResNet, and DenseNet, are the most frequently used image interpretation and classification methods. In the second experiment, the in-depth features extracted from the ConvNet were also used to train a support vector machine algorithm, achieving excellent performance. Moreover, this study covers data cleaning, preprocessing, and data augmentation to improve mass recognition accuracy. The efficacy of all models was evaluated by training and testing on three mammography datasets and exhibited remarkable results.
RESULTS Our deep learning ConvNet+SVM model obtained a discriminative training accuracy of 97.7% and a validation accuracy of 97.8%; by contrast, VGGNet16 yielded 90.2%, VGGNet19 93.5%, GoogLeNet 63.4%, MobileNetV2 82.9%, ResNet50 75.1%, and DenseNet121 72.9%. CONCLUSIONS The proposed model's improvement and validation make it appropriate for conventional pathological practice, where it could conceivably reduce the pathologist's strain in predicting clinical outcomes from patients' mammography images.
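The second experiment's pattern (deep ConvNet features feeding a support vector machine) can be sketched with a hand-rolled linear SVM trained by subgradient descent on the regularized hinge loss. The "deep features" here are hypothetical Gaussian blobs standing in for real extracted features, and the hyperparameters are illustrative.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=300):
    """Linear SVM via subgradient descent on the regularized hinge
    loss. Labels y must be in {-1, +1}; rows of X are feature vectors."""
    w, b = np.zeros(X.shape[1]), 0.0
    n = len(y)
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1.0                     # margin violators
        w -= lr * (lam * w - (y[viol, None] * X[viol]).sum(0) / n)
        b += lr * y[viol].sum() / n
    return w, b

rng = np.random.default_rng(1)
# Toy stand-in for ConvNet features of two classes.
X = np.vstack([rng.normal(-2.0, 1.0, (50, 5)),
               rng.normal(2.0, 1.0, (50, 5))])
y = np.array([-1] * 50 + [1] * 50)

w, b = train_linear_svm(X, y)
accuracy = np.mean(np.sign(X @ w + b) == y)
```

Swapping the SVM head onto fixed deep features is attractive when labeled data are scarce, since only a linear decision boundary in feature space must be learned.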
Affiliation(s)
- Tariq Mahmood
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China
- Division of Science and Technology, University of Education, Lahore 54000, Pakistan
- Jianqiang Li
- The School of Software Engineering, Beijing University of Technology, Beijing 100024, China
- Beijing Engineering Research Center for IoT Software and Systems, Beijing 100124, China
- Yan Pei
- Computer Science Division, University of Aizu, Aizuwakamatsu 965-8580, Japan
- Faheem Akhtar
- Department of Computer Science, Sukkur IBA University, Sukkur 65200, Pakistan
26
Brima Y, Atemkeng M, Tankio Djiokap S, Ebiele J, Tchakounté F. Transfer Learning for the Detection and Diagnosis of Types of Pneumonia including Pneumonia Induced by COVID-19 from Chest X-ray Images. Diagnostics (Basel) 2021; 11:1480. [PMID: 34441414] [PMCID: PMC8394302] [DOI: 10.3390/diagnostics11081480]
Abstract
Accurate early diagnosis of COVID-19 viral pneumonia, particularly in asymptomatic people, is essential to reduce the spread of the disease, the burden on healthcare capacity, and the overall death rate. It is essential to design affordable and accessible solutions to distinguish pneumonia caused by COVID-19 from other types of pneumonia. In this work, we propose a reliable approach based on deep transfer learning that requires few computations and converges faster. Experimental results demonstrate that our proposed transfer learning framework is an effective approach to detect and diagnose types of pneumonia from chest X-ray images, with a test accuracy of 94.0%.
Affiliation(s)
- Yusuf Brima
- African Institute for Mathematical Sciences (AIMS), Kigali P.O. Box 7150, Rwanda
- Marcellin Atemkeng
- Department of Mathematics, Rhodes University, Grahamstown 6140, South Africa
- Stive Tankio Djiokap
- Department of Arts, Technology and Heritage, Institute of Fine Arts, University of Dschang, Foumban P.O. Box 31, Cameroon
- Jaures Ebiele
- African Institute for Mathematical Sciences (AIMS), Kigali P.O. Box 7150, Rwanda
- Franklin Tchakounté
- Department of Mathematics and Computer Science, Faculty of Science, University of Ngaoundéré, Ngaoundéré P.O. Box 454, Cameroon
27
Chen Z, Zhang X, Huang W, Gao J, Zhang S. Cross Modal Few-Shot Contextual Transfer for Heterogenous Image Classification. Front Neurorobot 2021; 15:654519. [PMID: 34108871] [PMCID: PMC8180855] [DOI: 10.3389/fnbot.2021.654519]
Abstract
Deep transfer learning aims at dealing with challenges in new tasks with insufficient samples. However, in few-shot learning scenarios, the low diversity of the few known training samples makes them prone to being dominated by specificity, leading to one-sided local features instead of the reliable global features of the actual categories they belong to. To alleviate this difficulty, we propose a cross-modal few-shot contextual transfer method that leverages contextual information as a supplement and learns context-aware transfer in few-shot image classification scenes, fully utilizing the information in heterogeneous data. The similarity measure in the image classification task is reformulated by fusing textual semantic modal information with visual semantic modal information extracted from images; this serves as a supplement and helps to inhibit sample specificity. In addition, to better extract local visual features and reorganize the recognition pattern, the deep transfer scheme is also used to reuse a powerful extractor from the pre-trained model. Simulation experiments show that introducing cross-modal and intra-modal contextual information can effectively suppress the deviation of defining category features with few samples and improve the accuracy of few-shot image classification tasks.
Affiliation(s)
- Zhikui Chen
- The School of Software Technology, Dalian University of Technology, Dalian, China
- The Key Laboratory for Ubiquitous Network and Service Software of Liaoning Province, Dalian, China
- Xu Zhang
- The School of Software Technology, Dalian University of Technology, Dalian, China
- Wei Huang
- Department of Critical Care Medicine, First Affiliated Hospital of Dalian Medical University, Dalian, China
- Jing Gao
- The School of Software Technology, Dalian University of Technology, Dalian, China
- The Key Laboratory for Ubiquitous Network and Service Software of Liaoning Province, Dalian, China
- Suhua Zhang
- The School of Software Technology, Dalian University of Technology, Dalian, China
28
Im S, Hyeon J, Rha E, Lee J, Choi HJ, Jung Y, Kim TJ. Classification of Diffuse Glioma Subtype from Clinical-Grade Pathological Images Using Deep Transfer Learning. Sensors (Basel) 2021; 21:3500. [PMID: 34067934] [PMCID: PMC8156672] [DOI: 10.3390/s21103500]
Abstract
Diffuse gliomas are the most common primary brain tumors and vary considerably in their morphology, location, genetic alterations, and response to therapy. In 2016, the World Health Organization (WHO) provided new guidelines for making an integrated diagnosis of diffuse gliomas that incorporates both morphologic and molecular features. In this study, we demonstrate how deep learning approaches can be used for automatic classification of glioma subtypes and grading using whole-slide images obtained from routine clinical practice. A deep transfer learning method using the ResNet50V2 model was trained to classify subtypes and grades of diffuse gliomas according to the WHO's new 2016 classification. The balanced accuracy of the diffuse glioma subtype classification model with majority voting was 0.8727. These results highlight an emerging role for deep learning in the future practice of pathologic diagnosis.
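Whole-slide pipelines of this kind classify many small patches and then aggregate them into a slide-level call by majority voting. A minimal sketch of that aggregation step (the label strings are hypothetical, not the study's class names):

```python
from collections import Counter

def slide_label(patch_predictions):
    """Aggregate per-patch CNN predictions into one slide-level
    diagnosis by majority vote."""
    return Counter(patch_predictions).most_common(1)[0][0]

# Toy per-patch predictions for one slide.
patches = ["astrocytoma", "oligodendroglioma",
           "astrocytoma", "astrocytoma", "glioblastoma"]
label = slide_label(patches)   # "astrocytoma" wins with 3 of 5 votes
```

Voting makes the slide-level decision robust to a minority of misclassified patches, which is why balanced accuracy with voting is typically higher than raw patch accuracy.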
Affiliation(s)
- Sanghyuk Im
- Department of Neurosurgery, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
- Jonghwan Hyeon
- School of Computing, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Eunyoung Rha
- Department of Plastic and Reconstructive Surgery, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
- Janghyeon Lee
- School of Computing, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Ho-Jin Choi
- School of Computing, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Yuchae Jung
- School of Computing, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Tae-Jung Kim
- Department of Hospital Pathology, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
- Correspondence: Tel.: +82-2-3779-2157
29
Werner J, Kronberg RM, Stachura P, Ostermann PN, Müller L, Schaal H, Bhatia S, Kather JN, Borkhardt A, Pandyra AA, Lang KS, Lang PA. Deep Transfer Learning Approach for Automatic Recognition of Drug Toxicity and Inhibition of SARS-CoV-2. Viruses 2021; 13:610. [PMID: 33918368] [PMCID: PMC8066066] [DOI: 10.3390/v13040610]
Abstract
Severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) causes COVID-19 and is responsible for the ongoing pandemic. Screening of potential antiviral drugs against SARS-CoV-2 depends on in vitro experiments, which are based on the quantification of the virus titer. Here, we used virus-induced cytopathic effects (CPE) in brightfield microscopy of SARS-CoV-2-infected monolayers to quantify the virus titer. Images were classified using deep transfer learning (DTL) that fine-tunes the last layers of a pre-trained ResNet18 (ImageNet). To exclude toxic concentrations of potential drugs, the network was expanded to include a toxic score (TOX) that detected cell death (CPETOXnet). With this analytic tool, the inhibitory effects of chloroquine, hydroxychloroquine, remdesivir, and emetine were validated. Taken together, we developed a simple method and provided an open-access implementation to quantify SARS-CoV-2 titers and drug toxicity in experimental settings, which may be adaptable to assays with other viruses. The quantification of virus titers from brightfield images could accelerate the experimental approach to antiviral testing.
Affiliation(s)
- Julia Werner
- Department of Molecular Medicine II, Medical Faculty, Heinrich-Heine-University, 40225 Düsseldorf, Germany
- Raphael M. Kronberg
- Department of Molecular Medicine II, Medical Faculty, Heinrich-Heine-University, 40225 Düsseldorf, Germany
- Mathematical Modelling of Biological Systems, Heinrich-Heine-University, 40225 Düsseldorf, Germany
- Pawel Stachura
- Department of Molecular Medicine II, Medical Faculty, Heinrich-Heine-University, 40225 Düsseldorf, Germany
- Philipp N. Ostermann
- Institute of Virology, Medical Faculty, Heinrich-Heine-University, 40225 Düsseldorf, Germany
- Lisa Müller
- Institute of Virology, Medical Faculty, Heinrich-Heine-University, 40225 Düsseldorf, Germany
- Heiner Schaal
- Institute of Virology, Medical Faculty, Heinrich-Heine-University, 40225 Düsseldorf, Germany
- Sanil Bhatia
- Department of Pediatric Oncology, Hematology and Clinical Immunology, Medical Faculty, Center of Child and Adolescent Health, Heinrich-Heine-University, 40225 Düsseldorf, Germany
- Jakob N. Kather
- Department of Medicine III, University Hospital RWTH Aachen, 52074 Aachen, Germany
- Arndt Borkhardt
- Department of Pediatric Oncology, Hematology and Clinical Immunology, Medical Faculty, Center of Child and Adolescent Health, Heinrich-Heine-University, 40225 Düsseldorf, Germany
- Aleksandra A. Pandyra
- Department of Pediatric Oncology, Hematology and Clinical Immunology, Medical Faculty, Center of Child and Adolescent Health, Heinrich-Heine-University, 40225 Düsseldorf, Germany
- Karl S. Lang
- Institute of Immunology, Medical Faculty, University of Duisburg-Essen, 45147 Essen, Germany
- Philipp A. Lang
- Department of Molecular Medicine II, Medical Faculty, Heinrich-Heine-University, 40225 Düsseldorf, Germany
- Correspondence:
30
Islam MM, Karray F, Alhajj R, Zeng J. A Review on Deep Learning Techniques for the Diagnosis of Novel Coronavirus (COVID-19). IEEE Access 2021; 9:30551-30572. [PMID: 34976571] [PMCID: PMC8675557] [DOI: 10.1109/access.2021.3058537]
Abstract
The novel coronavirus (COVID-19) outbreak has raised a calamitous situation all over the world and has become one of the most acute and severe ailments of the past hundred years. The prevalence of COVID-19 is rapidly rising every day throughout the globe. Although no vaccines for this pandemic have been discovered yet, deep learning techniques have proved themselves to be a powerful tool in the arsenal used by clinicians for the automatic diagnosis of COVID-19. This paper aims to overview the recently developed systems based on deep learning techniques using different medical imaging modalities such as computed tomography (CT) and X-ray. This review specifically discusses the systems developed for COVID-19 diagnosis using deep learning techniques and provides insights into well-known datasets used to train these networks. It also highlights the data partitioning techniques and various performance measures developed by researchers in this field. A taxonomy is drawn to categorize the recent works for proper insight. Finally, we conclude by addressing the challenges associated with the use of deep learning methods for COVID-19 detection and probable future trends in this research area. The aim of this paper is to facilitate experts (medical or otherwise) and technicians in understanding how deep learning techniques are used in this regard and how they can potentially be further utilized to combat the outbreak of COVID-19.
Affiliation(s)
- Md. Milon Islam
- Centre for Pattern Analysis and Machine Intelligence, Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Fakhri Karray
- Centre for Pattern Analysis and Machine Intelligence, Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Reda Alhajj
- Department of Computer Science, University of Calgary, Calgary, AB T2N 1N4, Canada
- Jia Zeng
- Institute for Personalized Cancer Therapy, MD Anderson Cancer Center, Houston, TX 77030, USA
31
Rezaeijo SM, Ghorvei M, Mofid B. Predicting breast cancer response to neoadjuvant chemotherapy using ensemble deep transfer learning based on CT images. J Xray Sci Technol 2021; 29:835-850. [PMID: 34219704] [DOI: 10.3233/xst-210910]
Abstract
OBJECTIVE To develop an ensemble deep transfer learning model of CT images for predicting pathologic complete response (pCR) in breast cancer patients undergoing neoadjuvant chemotherapy (NAC). METHODS The data were obtained from the public dataset 'QIN-Breast' of The Cancer Imaging Archive (TCIA). CT images were gathered before and after the first cycle of NAC. CT images of 121 breast cancer patients were used to train and test the model; among these patients, 58 achieved a pCR and 63 showed a non-pCR based on pathology examination of surgical results after NAC. The dataset was split into training and testing subsets with a ratio of 7:3. In addition, the number of training samples was increased from 656 to 1,968 by image augmentation. Two deep transfer learning models, DenseNet201 and ResNet152V2, and an ensemble model formed by concatenating the two, were trained and tested on the CT images. RESULTS The ensemble model obtained the highest accuracy of 100% on the testing dataset, along with the best recall, precision, and F1-score of 100%. This supports the view that the ensemble yields a better-generalized model and a more efficient framework. Although only 0.004 and 0.003 differences were seen between the AUCs of the two base models (DenseNet201 and ResNet152V2) and the proposed ensemble, this increase in model quality is critical in medical research. t-SNE revealed that in the proposed ensemble no points were clustered into the wrong class. These results demonstrate the strong performance of the proposed ensemble. CONCLUSION The study concluded that the ensemble model can predict breast cancer response to first-cycle NAC better than the DenseNet201 and ResNet152V2 models alone.
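The "ensemble formed by concatenation" pattern can be sketched as joining the two backbones' feature vectors before a shared classifier head. The feature values and dimensions below are hypothetical stand-ins, not outputs of the actual DenseNet201/ResNet152V2 models:

```python
import numpy as np

def concat_ensemble(feat_a, feat_b):
    """Concatenation ensemble: the downstream classifier sees both
    backbones' representations side by side in one vector."""
    return np.concatenate([feat_a, feat_b])

f_densenet = np.array([0.1, 0.9, 0.3])   # stand-in for DenseNet-style features
f_resnet = np.array([0.4, 0.2])          # stand-in for ResNet-style features

fused = concat_ensemble(f_densenet, f_resnet)
# A logistic head on `fused` would then score pCR vs. non-pCR.
```

Unlike score averaging, concatenation lets the trained head weight each backbone's features individually, which is one reason concatenation ensembles can edge out their base models.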
Affiliation(s)
- Seyed Masoud Rezaeijo
- Department of Medical Physics, Faculty of Medical Sciences, Tarbiat Modares University, Tehran, Iran
- Mohammadreza Ghorvei
- Department of Electrical and Computer Engineering, Tarbiat Modares University, Tehran, Iran
- Bahram Mofid
- Department of Radiation Oncology, Faculty of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran

32
Hashimoto H, Kameda S, Maezawa H, Oshino S, Tani N, Khoo HM, Yanagisawa T, Yoshimine T, Kishima H, Hirata M. A Swallowing Decoder Based on Deep Transfer Learning: AlexNet Classification of the Intracranial Electrocorticogram. Int J Neural Syst 2020; 31:2050056. [PMID: 32938263 DOI: 10.1142/s0129065720500562] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/30/2023]
Abstract
To realize a brain-machine interface to assist swallowing, neural signal decoding is indispensable. Eight participants with temporal-lobe intracranial electrode implants for epilepsy were asked to swallow during electrocorticogram (ECoG) recording. Raw ECoG signals, or the power in certain frequency bands of the ECoG, were converted into images whose vertical axis was electrode number and whose horizontal axis was time in milliseconds, and these images were used as training data. The data were classified with four labels (Rest, Mouth open, Water injection, and Swallowing). Deep transfer learning was carried out using AlexNet, with power in the high-γ band (75-150 Hz) as the training set. Accuracy reached 74.01%, sensitivity 82.51%, and specificity 95.38%. Using the raw ECoG signals, however, the accuracy obtained was 76.95%, comparable to that of the high-γ power. We demonstrated that a version of AlexNet pre-trained on visually meaningful images can be used for transfer learning on visually meaningless images made from ECoG signals. Moreover, high decoding accuracy was achieved with the raw ECoG signals, allowing us to dispense with the conventional extraction of high-γ power. Thus, for deep transfer learning, images derived from the raw ECoG signals were equivalent to those derived from the high-γ band.
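The image construction the authors describe (electrode number on the vertical axis, time in milliseconds on the horizontal axis) and a crude high-γ power extraction can be sketched in NumPy. The sampling rate, array sizes, and FFT-mask "filter" below are illustrative assumptions rather than the study's signal processing pipeline:

```python
import numpy as np

fs = 1000                       # assumed sampling rate (Hz); illustrative
n_elec, n_ms = 16, 512          # electrodes x milliseconds, matching the paper's image axes
rng = np.random.default_rng(0)
ecog = rng.normal(size=(n_elec, n_ms))   # fake raw multichannel ECoG

def high_gamma_power(sig, fs, lo=75.0, hi=150.0):
    """Crude 75-150 Hz band power via FFT masking (a stand-in for a proper filter)."""
    spec = np.fft.rfft(sig, axis=-1)
    freqs = np.fft.rfftfreq(sig.shape[-1], d=1.0 / fs)
    spec[..., (freqs < lo) | (freqs > hi)] = 0.0          # zero everything outside the band
    band = np.fft.irfft(spec, n=sig.shape[-1], axis=-1)   # back to the time domain
    return band ** 2                                       # instantaneous power

img_raw = ecog                          # "raw signal" image: electrodes x time
img_hg = high_gamma_power(ecog, fs)     # "high-gamma power" image, same shape
print(img_raw.shape == img_hg.shape)    # True
```

Either image would then be fed to the pre-trained CNN as a training example for the four swallowing-related labels.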
Affiliation(s)
- Hiroaki Hashimoto
- Department of Neurological Diagnosis and Restoration, Graduate School of Medicine, Osaka University, Yamadaoka 2-2, Suita, Osaka 565-0871, Japan; Department of Neurosurgery, Otemae Hospital, Chuo-Ku Otemae 1-5-34, Osaka, Osaka 540-0008, Japan; Endowed Research Department of Clinical Neuroengineering, Global Center for Medical Engineering and Informatics, Osaka University, Yamadaoka 2-2, Suita, Osaka 565-0871, Japan
- Seiji Kameda
- Department of Neurological Diagnosis and Restoration, Graduate School of Medicine, Osaka University, Yamadaoka 2-2, Suita, Osaka 565-0871, Japan
- Hitoshi Maezawa
- Department of Neurological Diagnosis and Restoration, Graduate School of Medicine, Osaka University, Yamadaoka 2-2, Suita, Osaka 565-0871, Japan
- Satoru Oshino
- Department of Neurosurgery, Graduate School of Medicine, Osaka University, Yamadaoka 2-2, Suita, Osaka 565-0871, Japan
- Naoki Tani
- Department of Neurosurgery, Graduate School of Medicine, Osaka University, Yamadaoka 2-2, Suita, Osaka 565-0871, Japan
- Hui Ming Khoo
- Department of Neurosurgery, Graduate School of Medicine, Osaka University, Yamadaoka 2-2, Suita, Osaka 565-0871, Japan
- Takufumi Yanagisawa
- Department of Neurosurgery, Graduate School of Medicine, Osaka University, Yamadaoka 2-2, Suita, Osaka 565-0871, Japan
- Toshiki Yoshimine
- Endowed Research Department of Clinical Neuroengineering, Global Center for Medical Engineering and Informatics, Osaka University, Yamadaoka 2-2, Suita, Osaka 565-0871, Japan
- Haruhiko Kishima
- Department of Neurosurgery, Graduate School of Medicine, Osaka University, Yamadaoka 2-2, Suita, Osaka 565-0871, Japan
- Masayuki Hirata
- Department of Neurological Diagnosis and Restoration, Graduate School of Medicine, Osaka University, Yamadaoka 2-2, Suita, Osaka 565-0871, Japan; Endowed Research Department of Clinical Neuroengineering, Global Center for Medical Engineering and Informatics, Osaka University, Yamadaoka 2-2, Suita, Osaka 565-0871, Japan; Department of Neurosurgery, Graduate School of Medicine, Osaka University, Yamadaoka 2-2, Suita, Osaka 565-0871, Japan

33
Jaiswal A, Gianchandani N, Singh D, Kumar V, Kaur M. Classification of the COVID-19 infected patients using DenseNet201 based deep transfer learning. J Biomol Struct Dyn 2020; 39:5682-5689. [PMID: 32619398 DOI: 10.1080/07391102.2020.1788642] [Citation(s) in RCA: 184] [Impact Index Per Article: 46.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
Abstract
Deep learning models are widely used in the automatic analysis of radiological images. These techniques can train network weights on large datasets as well as fine-tune the weights of pre-trained networks on small datasets. Given the small COVID-19 dataset available, pre-trained neural networks can be used for the diagnosis of coronavirus; however, application of these techniques to chest CT images has so far been very limited. Hence, the main aim of this paper is to use pre-trained deep learning architectures as an automated tool for the detection and diagnosis of COVID-19 in chest CT. A DenseNet201-based deep transfer learning (DTL) model is proposed to classify patients as COVID-19(+) or COVID-19(-). The proposed model extracts features using weights learned on the ImageNet dataset together with a convolutional neural structure. Extensive experiments are performed to evaluate the performance of the proposed DTL model on COVID-19 chest CT scan images. Comparative analyses reveal that the proposed DTL-based COVID-19 classification model outperforms the competitive approaches. Communicated by Ramaswamy H. Sarma.
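The final step of such a DTL pipeline, a small classification head trained on frozen pre-trained features, can be illustrated with plain logistic regression in NumPy. The feature dimension, synthetic labels, and learning rate below are assumptions for illustration; the paper trains on real DenseNet201 features of chest CT scans:

```python
import numpy as np

rng = np.random.default_rng(7)

# Frozen "backbone" features for 100 fake scans (32-d here purely for illustration;
# DenseNet201's pooled features are much wider), plus synthetic binary labels.
X = rng.normal(size=(100, 32))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # linearly separable toy labels

# Train a logistic-regression head by gradient descent on the log-loss.
w = np.zeros(32)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of class "+"
    g = p - y                                 # gradient of log-loss w.r.t. the logits
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

acc = np.mean(((X @ w + b) > 0) == (y == 1))
print("training accuracy:", acc)
```

The appeal of the frozen-backbone approach is that only this small head is fit to the scarce COVID-19 data, while the feature extractor keeps its ImageNet-learned weights.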
Affiliation(s)
- Aayush Jaiswal
- Department of Computer Science and Engineering, School of Computing and Information Technology, Manipal University Jaipur, Jaipur, India
- Neha Gianchandani
- Department of Computer Science and Engineering, School of Computing and Information Technology, Manipal University Jaipur, Jaipur, India
- Dilbag Singh
- Department of Computer Science and Engineering, School of Computing and Information Technology, Manipal University Jaipur, Jaipur, India
- Vijay Kumar
- Department of Computer Science and Engineering, National Institute of Technology Hamirpur, Hamirpur, India
- Manjit Kaur
- Department of Computer and Communication Engineering, School of Computing and Information Technology, Manipal University Jaipur, Jaipur, India

34
Li F, Liu Z, Chen H, Jiang M, Zhang X, Wu Z. Automatic Detection of Diabetic Retinopathy in Retinal Fundus Photographs Based on Deep Learning Algorithm. Transl Vis Sci Technol 2019; 8:4. [PMID: 31737428 PMCID: PMC6855298 DOI: 10.1167/tvst.8.6.4] [Citation(s) in RCA: 56] [Impact Index Per Article: 11.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2019] [Accepted: 09/02/2019] [Indexed: 12/11/2022] Open
Abstract
Purpose To achieve automatic diabetic retinopathy (DR) detection in retinal fundus photographs through a deep transfer learning approach using the Inception-v3 network. Methods A total of 19,233 color fundus images were retrospectively obtained from 5278 adult patients presenting for DR screening. Of these, 8816 images passed image-quality review and were graded as no apparent DR (1374 images), mild nonproliferative DR (NPDR) (2152 images), moderate NPDR (2370 images), severe NPDR (1984 images), or proliferative DR (PDR) (936 images) by eight retinal experts according to the International Clinical Diabetic Retinopathy severity scale. After image preprocessing, 7935 DR images were selected from the above categories as the training dataset, while the remaining images were used as the validation dataset. We introduced a 10-fold cross-validation strategy to assess and optimize our model, and selected the publicly available, independent Messidor-2 dataset to test its performance. For discrimination between no referral (no apparent DR and mild NPDR) and referral (moderate NPDR, severe NPDR, and PDR), we computed prediction accuracy, sensitivity, specificity, area under the receiver operating characteristic curve (AUC), and κ value. Results The proposed approach achieved a high classification accuracy of 93.49% (95% confidence interval [CI], 93.13%–93.85%), with 96.93% sensitivity (95% CI, 96.35%–97.51%) and 93.45% specificity (95% CI, 93.12%–93.79%), while the AUC reached 0.9905 (95% CI, 0.9887–0.9923) on the independent test dataset. The κ value of our best model was 0.919, while the three experts had κ values of 0.906, 0.931, and 0.914, independently. Conclusions This approach can automatically detect DR with excellent sensitivity, accuracy, and specificity, and could aid in making referral recommendations for further evaluation and treatment with high reliability.
Translational Relevance This approach has great value in early DR screening using retinal fundus photographs.
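The referral/no-referral dichotomy used for the metrics above (moderate NPDR or worse counts as referral) reduces to thresholding the five-point grade. A minimal sketch with made-up grades, assuming the usual 0-4 grade encoding:

```python
import numpy as np

# DR grades: 0 = no apparent DR, 1 = mild NPDR, 2 = moderate NPDR,
# 3 = severe NPDR, 4 = PDR. "Referral" = grade >= 2, as in the paper.
truth = np.array([0, 1, 2, 3, 4, 0, 2, 1, 4, 3])   # expert grades (made up)
pred  = np.array([0, 2, 2, 3, 4, 0, 1, 1, 4, 2])   # model grades (made up)

ref_true = truth >= 2
ref_pred = pred >= 2

tp = np.sum(ref_pred & ref_true)     # referral correctly flagged
tn = np.sum(~ref_pred & ~ref_true)   # no-referral correctly cleared
fp = np.sum(ref_pred & ~ref_true)    # false alarm
fn = np.sum(~ref_pred & ref_true)    # missed referral

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / truth.size
print(round(sensitivity, 2), round(specificity, 2), round(accuracy, 2))  # 0.83 0.75 0.8
```

The paper's reported sensitivity and specificity are exactly these quantities computed over its validation and Messidor-2 grades.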
Affiliation(s)
- Feng Li
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Zheng Liu
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Hua Chen
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Minshan Jiang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Xuedian Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Zhizheng Wu
- Department of Precision Mechanical Engineering, Shanghai University, Shanghai 200072, China

35
Wang H, Yu Y, Cai Y, Chen L, Chen X. A Vehicle Recognition Algorithm Based on Deep Transfer Learning with a Multiple Feature Subspace Distribution. Sensors (Basel) 2018; 18:s18124109. [PMID: 30477172 PMCID: PMC6308963 DOI: 10.3390/s18124109] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/19/2018] [Revised: 11/19/2018] [Accepted: 11/21/2018] [Indexed: 11/16/2022]
Abstract
Vehicle detection is a key component of environmental sensing systems for Intelligent Vehicles (IVs). Traditional shallow-model and offline-learning-based vehicle detection methods cannot satisfy the real-world challenges of environmental complexity and scene dynamics. Focusing on these problems, this work proposes a vehicle detection algorithm based on a multiple-feature-subspace-distribution deep model with online transfer learning. Based on the multiple feature subspace distribution hypothesis, a deep model is established in which multiple Restricted Boltzmann Machines (RBMs) construct the lower layers and a Deep Belief Network (DBN) composes the superstructure. For this deep model, an unsupervised feature extraction method based on sparse constraints is applied. Then, a transfer learning method with online sample generation is proposed based on the deep model. Finally, the entire classifier is retrained online with supervised learning. Experiments were conducted on the KITTI road image datasets, and the proposed deep transfer learning-based algorithm is shown to outperform existing state-of-the-art methods.
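The RBM building block of the lower layers can be sketched with one step of contrastive divergence (CD-1) in NumPy; layer sizes, the learning rate, and the binary-unit assumption are illustrative, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_vis, n_hid = 64, 32                      # visible/hidden layer sizes (illustrative)
W = rng.normal(scale=0.01, size=(n_vis, n_hid))
b_v = np.zeros(n_vis)                      # visible bias (unused in this single update)
b_h = np.zeros(n_hid)                      # hidden bias

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_v, b_h, lr=0.1):
    """One contrastive-divergence (CD-1) weight update for a binary RBM."""
    p_h0 = sigmoid(v0 @ W + b_h)                        # hidden probabilities given data
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)  # sampled hidden states
    p_v1 = sigmoid(h0 @ W.T + b_v)                      # one-step reconstruction
    p_h1 = sigmoid(p_v1 @ W + b_h)                      # hidden probs of reconstruction
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / v0.shape[0]
    return W

batch = (rng.random((8, n_vis)) < 0.5).astype(float)    # a batch of fake binary patches
W = cd1_step(batch, W, b_v, b_h)
print(W.shape)  # (64, 32)
```

Stacking several such RBMs, then fine-tuning the stack as a DBN with labels, is the standard construction the abstract's "lower layers plus DBN superstructure" refers to.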
Affiliation(s)
- Hai Wang
- School of Automotive and Traffic Engineering, Jiangsu University, Zhenjiang 212013, China
- Yijie Yu
- School of Automotive and Traffic Engineering, Jiangsu University, Zhenjiang 212013, China
- Yingfeng Cai
- Automotive Engineering Research Institute, Jiangsu University, Zhenjiang 212013, China
- Long Chen
- Automotive Engineering Research Institute, Jiangsu University, Zhenjiang 212013, China
- Xiaobo Chen
- Automotive Engineering Research Institute, Jiangsu University, Zhenjiang 212013, China

36
Wang S, Li Z, Yu Y, Xu J. Folding Membrane Proteins by Deep Transfer Learning. Cell Syst 2017; 5:202-211.e3.
Abstract
Computational elucidation of membrane protein (MP) structures is challenging, partly owing to the lack of sufficient solved structures for homology modeling. Here, we describe a high-throughput deep transfer learning method that first predicts MP contacts by learning from non-MPs and then predicts 3D structure models using the predicted contacts as distance restraints. Tested on 510 non-redundant MPs, our method has contact prediction accuracy at least 0.18 better than existing methods, predicts correct folds for 218 MPs, and generates 3D models with root-mean-square deviation (RMSD) less than 4 and 5 Å for 57 and 108 MPs, respectively. A rigorous blind test in the continuous automated model evaluation project shows that our method predicted high-resolution 3D models for two recent test MPs of 210 residues with RMSD ∼2 Å. We estimate that our method could predict correct folds for 1,345-1,871 reviewed human multi-pass MPs, including a few hundred new folds, which should facilitate the discovery of drugs targeting MPs.
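The "predicted contacts as distance restraints" step can be sketched as thresholding a predicted contact map into pairwise restraints. The 0.5 probability cutoff, 8 Å upper bound, and minimum sequence separation of 6 below are common conventions assumed for illustration, not necessarily this paper's exact settings:

```python
import numpy as np

rng = np.random.default_rng(3)
L = 50                                        # sequence length (illustrative)
P = rng.random((L, L))
P = (P + P.T) / 2                             # symmetric fake contact probabilities

# Keep confident predictions with sequence separation >= 6 and probability >= 0.5
# as upper-bound distance restraints, e.g. "residue pair (i, j) closer than 8 Angstroms",
# of the kind fed to 3D model building.
restraints = [(i, j, 8.0)
              for i in range(L)
              for j in range(i + 6, L)
              if P[i, j] >= 0.5]
print(len(restraints) > 0)  # True
```

A folding engine then searches for conformations satisfying as many of these restraints as possible, which is how a good contact map becomes a 3D model.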
Affiliation(s)
- Sheng Wang
- Toyota Technological Institute at Chicago, Chicago, IL 60637, USA; Department of Human Genetics, University of Chicago, Chicago, IL 60637, USA; Computational Bioscience Research Center (CBRC), King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia
- Zhen Li
- Toyota Technological Institute at Chicago, Chicago, IL 60637, USA; Department of Computer Science, University of Hong Kong, Hong Kong
- Yizhou Yu
- Department of Computer Science, University of Hong Kong, Hong Kong
- Jinbo Xu
- Toyota Technological Institute at Chicago, Chicago, IL 60637, USA

37
Kandaswamy C, Silva LM, Alexandre LA, Santos JM. High-Content Analysis of Breast Cancer Using Single-Cell Deep Transfer Learning. J Biomol Screen 2016; 21:252-259. [PMID: 26746583 DOI: 10.1177/1087057115623451] [Citation(s) in RCA: 53] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2015] [Accepted: 11/30/2015] [Indexed: 01/17/2023]
Abstract
High-content analysis has revolutionized cancer drug discovery by identifying substances that alter the phenotype of a cell in ways that prevent tumor growth and metastasis. The high-resolution biofluorescence images from such assays allow precise quantitative measures, enabling the distinction of small molecules of a host cell from a tumor. In this work, we are particularly interested in applying deep neural networks (DNNs), a cutting-edge machine learning method, to the classification of compounds into chemical mechanisms of action (MOAs). Compound classification has previously been performed using image-based profiling methods, sometimes combined with feature reduction methods such as principal component analysis or factor analysis. In this article, we map the input features of each cell to a particular MOA class without using any treatment-level profiles or feature reduction methods. To the best of our knowledge, this is the first application of DNNs in this domain leveraging single-cell information. Furthermore, we use deep transfer learning (DTL) to alleviate the intensive and computationally demanding effort of searching the huge parameter space of a DNN. Results show that, using this approach, we obtain a 30% speedup and a 2% accuracy improvement.
Affiliation(s)
- Chetak Kandaswamy
- Instituto de Engenharia Biomédica (INEB), Porto, Portugal; Departamento de Engenharia Eletrotécnica e de Computadores, Faculdade de Engenharia da Universidade do Porto, Porto, Portugal
- Luís M Silva
- Instituto de Engenharia Biomédica (INEB), Porto, Portugal; Instituto de Investigação e Inovação em Saúde, Universidade do Porto, Porto, Portugal; Departamento de Matemática, Universidade de Aveiro, Aveiro, Portugal
- Luís A Alexandre
- Universidade da Beira Interior, Instituto de Telecomunicações, Covilhã, Portugal
- Jorge M Santos
- Instituto de Engenharia Biomédica (INEB), Porto, Portugal; Instituto de Investigação e Inovação em Saúde, Universidade do Porto, Porto, Portugal; Departamento de Matemática, Instituto Superior de Engenharia do Instituto Politécnico do Porto, Porto, Portugal