1. Yu Y, Chen R, Yi J, Huang K, Yu X, Zhang J, Song C. Non-invasive prediction of axillary lymph node dissection exemption in breast cancer patients post-neoadjuvant therapy: A radiomics and deep learning analysis on longitudinal DCE-MRI data. Breast 2024; 77:103786. PMID: 39137488; PMCID: PMC11369401; DOI: 10.1016/j.breast.2024.103786.
Abstract
PURPOSE In breast cancer (BC) patients with clinical axillary lymph node metastasis (cN+) undergoing neoadjuvant therapy (NAT), precise axillary lymph node (ALN) assessment dictates the therapeutic strategy, and a reliable non-invasive method for assessing ALN status in these patients is critically needed. MATERIALS AND METHODS A retrospective analysis was conducted on 160 BC patients undergoing NAT at Fujian Medical University Union Hospital. We analyzed baseline and two-cycle reassessment dynamic contrast-enhanced MRI (DCE-MRI) images, extracting 3668 radiomic and 4096 deep learning features and computing 1834 delta-radiomic and 2048 delta-deep learning features. Light Gradient Boosting Machine (LightGBM), Support Vector Machine (SVM), Random Forest, and Multilayer Perceptron (MLP) algorithms were employed to develop risk models, which were evaluated using 10-fold cross-validation. RESULTS Of the patients, 61 (38.13%) achieved ypN0 status post-NAT. Univariate and multivariable logistic regression analyses revealed molecular subtype and Ki67 as pivotal predictors of achieving ypN0 post-NAT. The SVM-based "Data Amalgamation" model, which integrates radiomic features, deep learning features, and clinical data, exhibited an outstanding AUC of 0.986 (95% CI: 0.954-1.000), surpassing the other models. CONCLUSION Our study illuminates the challenges and opportunities of breast cancer management post-NAT. With an SVM-based "Data Amalgamation" model, we propose a path toward accurate, dynamic ALN assessment, offering potential for personalized therapeutic strategies in BC.
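The abstract reports 1834 delta-radiomic features derived from the baseline and two-cycle reassessment scans. The exact delta definition is not stated in the abstract; a common choice is the relative change per feature, sketched here as an illustrative assumption:

```python
import numpy as np

def delta_features(baseline: np.ndarray, reassessment: np.ndarray) -> np.ndarray:
    """Relative change of each radiomic feature between two timepoints.

    (post - pre) / pre is one common delta-radiomics convention; the
    paper's exact formula is not given in the abstract.
    """
    eps = 1e-8  # guard against division by zero for near-zero features
    return (reassessment - baseline) / (baseline + eps)

# three hypothetical feature values at baseline and after two NAT cycles
baseline = np.array([1.0, 2.0, 4.0])
post = np.array([0.5, 2.0, 6.0])
print(delta_features(baseline, post))  # relative change per feature
```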
Affiliation(s)
- Yushuai Yu: Department of Breast Surgery, Fujian Medical University Union Hospital, Fuzhou, Fujian Province, 350001, China; Department of Breast Surgery, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian Province, 350014, China
- Ruiliang Chen: Department of Breast Surgery, Fujian Medical University Union Hospital, Fuzhou, Fujian Province, 350001, China
- Jialu Yi: Department of Breast Surgery, Fujian Medical University Union Hospital, Fuzhou, Fujian Province, 350001, China
- Kaiyan Huang: Department of Breast and Thyroid Surgery, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian Province, 362000, China
- Xin Yu: Department of Breast Surgery, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian Province, 350014, China
- Jie Zhang: Department of Breast Surgery, Fujian Medical University Union Hospital, Fuzhou, Fujian Province, 350001, China
- Chuangui Song: Department of Breast Surgery, Fujian Medical University Union Hospital, Fuzhou, Fujian Province, 350001, China; Department of Breast Surgery, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian Province, 350014, China
2. Sannasi Chakravarthy SR, Bharanidharan N, Vinothini C, Vinoth Kumar V, Mahesh TR, Guluwadi S. Adaptive Mish activation and ranger optimizer-based SEA-ResNet50 model with explainable AI for multiclass classification of COVID-19 chest X-ray images. BMC Med Imaging 2024; 24:206. PMID: 39123118; PMCID: PMC11313131; DOI: 10.1186/s12880-024-01394-2.
Abstract
COVID-19 is a significant global health crisis that has profoundly affected lifestyles. Detecting such diseases from similar thoracic anomalies in medical images is challenging, so an end-to-end automated system is needed in clinical practice. To this end, the work proposes a Squeeze-and-Excitation Attention-based ResNet50 (SEA-ResNet50) model for detecting COVID-19 from chest X-ray data. The idea is to improve the residual units of ResNet50 with a squeeze-and-excitation attention mechanism. For further enhancement, the Ranger optimizer and an adaptive Mish activation function are employed to improve the feature learning of the SEA-ResNet50 model. For evaluation, two publicly available COVID-19 radiographic datasets are utilized. The chest X-ray input images are augmented during experimentation for robust evaluation against four output classes: normal, pneumonia, lung opacity, and COVID-19. A comparative study then pits SEA-ResNet50 against the VGG-16, Xception, ResNet18, ResNet50, and DenseNet121 architectures. The proposed SEA-ResNet50 framework, together with the Ranger optimizer and adaptive Mish activation, provided maximum classification accuracies of 98.38% (multiclass) and 99.29% (binary classification) compared with the existing CNN architectures, and achieved the highest Kappa validation scores of 0.975 (multiclass) and 0.98 (binary classification). Furthermore, saliency maps of the abnormal regions are visualized using an explainable artificial intelligence (XAI) model, enhancing interpretability in disease diagnosis.
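The core architectural idea, a squeeze-and-excitation (SE) block that recalibrates channel responses inside the residual units, can be sketched in plain NumPy. Shapes, weights, and the reduction ratio `r` below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def squeeze_excite(feature_map, w1, w2):
    """Channel recalibration as in Squeeze-and-Excitation networks.

    feature_map: (C, H, W); w1: (C//r, C); w2: (C, C//r).
    """
    # Squeeze: global average pool over spatial dims -> per-channel descriptor
    z = feature_map.mean(axis=(1, 2))                  # (C,)
    # Excitation: bottleneck MLP, ReLU then sigmoid gating
    s = np.maximum(w1 @ z, 0.0)                        # (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))             # (C,), each in (0, 1)
    # Scale: reweight each channel of the original map
    return feature_map * gate[:, None, None]

C, r = 8, 4  # illustrative channel count and reduction ratio
x = rng.standard_normal((C, 6, 6))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
out = squeeze_excite(x, w1, w2)
print(out.shape)  # same shape as the input map
```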
Affiliation(s)
- S R Sannasi Chakravarthy: Department of Electronics and Communication Engineering, Bannari Amman Institute of Technology, Sathyamangalam, India
- N Bharanidharan: School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, 632014, India
- C Vinothini: Department of Computer Science and Engineering, Dayananda Sagar College of Engineering, Bangalore, India
- Venkatesan Vinoth Kumar: School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, 632014, India
- T R Mahesh: Department of Computer Science and Engineering, JAIN (Deemed-to-Be University), Bengaluru, 562112, India
- Suresh Guluwadi: Adama Science and Technology University, Adama, 302120, Ethiopia
3. Wang X, Li S, Pun CM, Guo Y, Xu F, Gao H, Lu H. A Parkinson's Auxiliary Diagnosis Algorithm Based on a Hyperparameter Optimization Method of Deep Learning. IEEE/ACM Transactions on Computational Biology and Bioinformatics 2024; 21:912-923. PMID: 37027659; DOI: 10.1109/tcbb.2023.3246961.
Abstract
Parkinson's disease is a common neurodegenerative disease worldwide, especially among middle-aged and elderly people. Clinical examination remains the main diagnostic route, but its results are not ideal, especially in the early stage of the disease. In this paper, a Parkinson's auxiliary diagnosis algorithm based on a hyperparameter optimization method of deep learning is proposed. The diagnosis system uses ResNet50 for feature extraction and Parkinson's classification and comprises three parts: speech signal processing, an improvement of the Artificial Bee Colony (ABC) algorithm, and hyperparameter optimization of ResNet50. The improved algorithm, called the Gbest Dimension Artificial Bee Colony (GDABC) algorithm, introduces a "range pruning strategy" to narrow the search scope and a "dimension adjustment strategy" that adjusts the global best (gbest) solution dimension by dimension. The diagnosis system reaches over 96% accuracy on the validation set of the Mobile Device Voice Recordings at King's College London (MDVR-CKL) dataset. Compared with current Parkinson's sound diagnosis methods and other optimization algorithms, our auxiliary diagnosis system shows better classification performance on the dataset within limited time and resources.
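For orientation, here is a deliberately simplified sketch of the baseline Artificial Bee Colony search that GDABC refines, run on a toy objective. It keeps only the employed-bee and scout phases (no onlooker phase, and a candidate source may be perturbed against itself); the paper's range-pruning and dimension-adjustment strategies are not reproduced:

```python
import numpy as np

def abc_minimize(f, bounds, n_food=10, iters=100, limit=10, seed=0):
    """Very simplified Artificial Bee Colony minimization (illustrative only)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    foods = rng.uniform(lo, hi, size=(n_food, dim))   # food sources = candidates
    fit = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)              # stagnation counters
    for _ in range(iters):
        # employed-bee phase: perturb one dimension toward a random neighbor
        for i in range(n_food):
            k = int(rng.integers(n_food))             # neighbor (may equal i here)
            j = int(rng.integers(dim))
            cand = foods[i].copy()
            cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
            cand = np.clip(cand, lo, hi)
            if f(cand) < fit[i]:                      # greedy selection
                foods[i], fit[i], trials[i] = cand, f(cand), 0
            else:
                trials[i] += 1
        # scout phase: abandon sources that stopped improving
        for i in np.where(trials > limit)[0]:
            foods[i] = rng.uniform(lo, hi, size=dim)
            fit[i] = f(foods[i])
            trials[i] = 0
    best = foods[np.argmin(fit)]
    return best, float(fit.min())

sphere = lambda x: float(np.sum(x ** 2))              # toy objective, minimum at 0
best, val = abc_minimize(sphere, (np.full(2, -5.0), np.full(2, 5.0)))
print(val)  # should be close to 0
```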
4. Guo L, Zhou C, Xu J, Huang C, Yu Y, Lu G. Deep Learning for Chest X-ray Diagnosis: Competition Between Radiologists with or Without Artificial Intelligence Assistance. Journal of Imaging Informatics in Medicine 2024; 37:922-934. PMID: 38332402; PMCID: PMC11169143; DOI: 10.1007/s10278-024-00990-6.
Abstract
This study assessed whether a deep learning algorithm can help radiologists achieve improved efficiency and accuracy in chest radiograph diagnosis. We adopted a deep learning algorithm to concurrently detect the presence of normal findings and 13 different abnormalities in chest radiographs and evaluated its performance in assisting radiologists. Each competing radiologist had to determine the presence or absence of these signs, in the assisted condition with the aid of the label provided by the AI. The 100 radiographs were randomly divided into two sets: one read without AI assistance (control group) and one read with AI assistance (test group). The accuracy, false-positive rate, false-negative rate, and reading time of 111 radiologists (29 senior, 32 intermediate, and 50 junior) were evaluated. Each image read started from an initial score of 14 points, with 1 point deducted per incorrect answer and no deduction for a correct answer; the final score for each reader was computed automatically by the backend, and mean scores were compared between the two groups. The average score of the 111 radiologists was 597 (587-605) in the control group and 619 (612-626) in the test group (P < 0.001). The time spent on the control and test groups was 3279 (2972-3941) and 1926 (1710-2432) s, respectively (P < 0.001). Performance in the two groups was also evaluated by the area under the receiver operating characteristic curve (AUC). With AI assistance, radiologists performed better at recognizing normal findings, pulmonary fibrosis, heart shadow enlargement, mass, pleural effusion, and pulmonary consolidation, with AUCs of 1.0, 0.950, 0.991, 1.0, 0.993, and 0.982, respectively. Radiologists alone performed better at recognizing aortic calcification (0.993), calcification (0.933), cavity (0.963), nodule (0.923), pleural thickening (0.957), and rib fracture (0.987). This competition verified the positive effect of deep learning methods in assisting radiologists in interpreting chest X-rays: AI assistance can improve both the efficacy and efficiency of radiologists.
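The competition's scoring rule (each image starts at 14 points, 1 point deducted per incorrect sign judgement) reduces to a one-line sum; the per-image error counts below are hypothetical:

```python
def reader_score(errors_per_image):
    """Score under the competition's rule: 14 points per image read,
    minus 1 point per incorrect sign judgement on that image."""
    return sum(14 - e for e in errors_per_image)

# a hypothetical reader making 3, 0, and 1 errors on three images
print(reader_score([3, 0, 1]))  # 38
```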
Affiliation(s)
- Lili Guo: Department of Radiology, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huai'an, 223300, China
- Changsheng Zhou: Department of Medical Imaging, Jinling Hospital, Medical School of Nanjing University, Nanjing, 210002, China
- Jingxu Xu: Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, 100080, China
- Chencui Huang: Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, 100080, China
- Yizhou Yu: Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, 100080, China
- Guangming Lu: Department of Medical Imaging, Jinling Hospital, Medical School of Nanjing University, Nanjing, 210002, China
5. Hoffer O, Brzezinski RY, Ganim A, Shalom P, Ovadia-Blechman Z, Ben-Baruch L, Lewis N, Peled R, Shimon C, Naftali-Shani N, Katz E, Zimmer Y, Rabin N. Smartphone-based detection of COVID-19 and associated pneumonia using thermal imaging and a transfer learning algorithm. Journal of Biophotonics 2024:e202300486. PMID: 38253344; DOI: 10.1002/jbio.202300486.
Abstract
COVID-19-related pneumonia is typically diagnosed using chest X-ray or computed tomography images, but these modalities are largely confined to hospitals. In contrast, thermal cameras are portable, inexpensive devices that can be connected to smartphones, so they can be used to detect and monitor medical conditions outside hospitals. Herein, a smartphone-based application using thermal images of the human back was developed for COVID-19 detection. Image analysis using a deep learning algorithm yielded a sensitivity of 88.7% and a specificity of 92.3%. The findings support the future use of noninvasive thermal imaging in primary screening for COVID-19 and associated pneumonia.
Affiliation(s)
- Oshrit Hoffer: School of Electrical Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Rafael Y Brzezinski: Neufeld Cardiac Research Institute, Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel; Tamman Cardiovascular Research Institute, Leviev Heart Center, Sheba Medical Center Tel Hashomer, Ramat Gan, Israel; Internal Medicine "C" and "E", Tel Aviv Medical Center, Tel Aviv, Israel; Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Adam Ganim: School of Electrical Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Perry Shalom: School of Software Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Zehava Ovadia-Blechman: School of Medical Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Lital Ben-Baruch: School of Electrical Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Nir Lewis: Neufeld Cardiac Research Institute, Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel; Tamman Cardiovascular Research Institute, Leviev Heart Center, Sheba Medical Center Tel Hashomer, Ramat Gan, Israel
- Racheli Peled: Neufeld Cardiac Research Institute, Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel; Tamman Cardiovascular Research Institute, Leviev Heart Center, Sheba Medical Center Tel Hashomer, Ramat Gan, Israel
- Carmi Shimon: School of Electrical Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Nili Naftali-Shani: Neufeld Cardiac Research Institute, Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel; Tamman Cardiovascular Research Institute, Leviev Heart Center, Sheba Medical Center Tel Hashomer, Ramat Gan, Israel
- Eyal Katz: School of Electrical Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Yair Zimmer: School of Medical Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Neta Rabin: Department of Industrial Engineering, Tel-Aviv University, Tel Aviv, Israel
6. Tangruangkiat S, Chaiwongkot N, Pamarapa C, Rawangwong T, Khunnarong A, Chainarong C, Sathapanawanthana P, Hiranrat P, Keerativittayayut R, Sungkarat W, Phonlakrai M. Diagnosis of focal liver lesions from ultrasound images using a pretrained residual neural network. J Appl Clin Med Phys 2024; 25:e14210. PMID: 37991141; PMCID: PMC10795428; DOI: 10.1002/acm2.14210.
Abstract
OBJECTIVE This study aims to develop a ResNet50-based deep learning model for focal liver lesion (FLL) classification in ultrasound images and to compare its performance with other models and prior research. METHODOLOGY We retrospectively collected 581 ultrasound images from the Chulabhorn Hospital's HCC surveillance and screening project (2010-2018). The dataset comprised five classes: non-FLL, hepatic cyst (Cyst), hemangioma (HMG), focal fatty sparing (FFS), and hepatocellular carcinoma (HCC). We conducted 5-fold cross-validation after random dataset partitioning, enhancing the training data with augmentation. Our models used modified pre-trained ResNet50, GGN, ResNet18, and VGG16 architectures. Model performance, assessed via confusion matrices for sensitivity, specificity, and accuracy, was compared across models and with prior studies. RESULTS ResNet50 outperformed the other models, achieving a 5-fold cross-validation accuracy of 87 ± 2.2%. VGG16 showed similar performance but higher uncertainty. In the testing phase, the pre-trained ResNet50 excelled in classifying non-FLL, Cyst, and FFS. Compared with earlier work, ResNet50 surpassed prior methods such as two-layered feed-forward neural networks (FFNN) and CNN+ReLU in FLL diagnosis. CONCLUSION ResNet50 exhibited good performance in FLL diagnosis, especially for HCC classification, suggesting its potential for computer-aided FLL diagnosis. However, further refinement is required for HCC and HMG classification in future studies.
Affiliation(s)
- Sutthirak Tangruangkiat: School of Radiological Technology, Faculty of Health Science Technology, Chulabhorn Royal Academy, Bangkok, Thailand
- Napatsorn Chaiwongkot: School of Radiological Technology, Faculty of Health Science Technology, Chulabhorn Royal Academy, Bangkok, Thailand
- Chayanon Pamarapa: School of Radiological Technology, Faculty of Health Science Technology, Chulabhorn Royal Academy, Bangkok, Thailand
- Thanatcha Rawangwong: School of Radiological Technology, Faculty of Health Science Technology, Chulabhorn Royal Academy, Bangkok, Thailand
- Araya Khunnarong: School of Radiological Technology, Faculty of Health Science Technology, Chulabhorn Royal Academy, Bangkok, Thailand
- Chanyanuch Chainarong: School of Radiological Technology, Faculty of Health Science Technology, Chulabhorn Royal Academy, Bangkok, Thailand
- Preeyanun Sathapanawanthana: School of Radiological Technology, Faculty of Health Science Technology, Chulabhorn Royal Academy, Bangkok, Thailand
- Pantajaree Hiranrat: Sonographer School, Faculty of Health Science Technology, Chulabhorn Royal Academy, Bangkok, Thailand
- Ruedeerat Keerativittayayut: School of Radiological Technology, Faculty of Health Science Technology, Chulabhorn Royal Academy, Bangkok, Thailand
- Witaya Sungkarat: School of Radiological Technology, Faculty of Health Science Technology, Chulabhorn Royal Academy, Bangkok, Thailand
- Monchai Phonlakrai: School of Radiological Technology, Faculty of Health Science Technology, Chulabhorn Royal Academy, Bangkok, Thailand
7. Tang J, Liang Y, Jiang Y, Liu J, Zhang R, Huang D, Pang C, Huang C, Luo D, Zhou X, Li R, Zhang K, Xie B, Hu L, Zhu F, Xia H, Lu L, Wang H. A multicenter study on two-stage transfer learning model for duct-dependent CHDs screening in fetal echocardiography. NPJ Digit Med 2023; 6:143. PMID: 37573426; PMCID: PMC10423245; DOI: 10.1038/s41746-023-00883-y.
Abstract
Duct-dependent congenital heart diseases (CHDs) are a serious form of CHD with a low detection rate, especially in underdeveloped countries and areas. Although existing studies have developed models for fetal heart structure identification, there is a lack of comprehensive evaluation of the long axis of the aorta. In this study, a total of 6698 images and 48 videos are collected to develop and test a two-stage deep transfer learning model named DDCHD-DenseNet for screening critical duct-dependent CHDs. The model achieves a sensitivity of 0.973, 0.843, 0.769, and 0.759, and a specificity of 0.985, 0.967, 0.956, and 0.759, respectively, on the four multicenter test sets. It is expected to be employed as a potential automatic screening tool for hierarchical care and computer-aided diagnosis. Our two-stage strategy effectively improves the robustness of the model and can be extended to screen for other fetal heart development defects.
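The per-test-set sensitivities and specificities reported above follow the standard screening definitions; a minimal sketch with hypothetical counts (not taken from the paper):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Standard screening metrics from confusion-matrix counts."""
    sens = tp / (tp + fn)  # true-positive rate: affected fetuses flagged
    spec = tn / (tn + fp)  # true-negative rate: normal hearts cleared
    return sens, spec

# hypothetical counts for one test set
print(sensitivity_specificity(36, 1, 130, 2))
```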
Affiliation(s)
- Jiajie Tang: Department of Medical Ultrasonics/Institute of Pediatrics, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China; School of Information Management, Wuhan University, Wuhan, China
- Yongen Liang: Department of Medical Ultrasonics/Institute of Pediatrics, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
- Yuxuan Jiang: Department of Medical Ultrasonics/Institute of Pediatrics, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China; School of Information Management, Wuhan University, Wuhan, China
- Jinrong Liu: Department of Medical Ultrasonics/Institute of Pediatrics, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
- Rui Zhang: Department of Medical Ultrasonics/Institute of Pediatrics, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
- Danping Huang: Department of Medical Ultrasonics/Institute of Pediatrics, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
- Chengcheng Pang: Cardiovascular Pediatrics/Guangdong Cardiovascular Institute/Medical Big Data Center, Guangdong Provincial People's Hospital, Guangzhou, China
- Chen Huang: Department of Medical Ultrasonics/Shenzhen Longgang Maternal and Child Health Hospital, Shenzhen, China
- Dongni Luo: Department of Medical Ultrasonics/Institute of Pediatrics, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
- Xue Zhou: Department of Medical Ultrasonics/Institute of Pediatrics, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
- Ruizhuo Li: Department of Medical Ultrasonics/Institute of Pediatrics, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China; School of Medicine, Southern China University of Technology, Guangzhou, China
- Kanghui Zhang: School of Information Management, Wuhan University, Wuhan, China
- Bingbing Xie: School of Information Management, Wuhan University, Wuhan, China
- Lianting Hu: Cardiovascular Pediatrics/Guangdong Cardiovascular Institute/Medical Big Data Center, Guangdong Provincial People's Hospital, Guangzhou, China
- Fanfan Zhu: School of Information Management, Wuhan University, Wuhan, China
- Huimin Xia: Department of Medical Ultrasonics/Institute of Pediatrics, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
- Long Lu: Department of Medical Ultrasonics/Institute of Pediatrics, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China; School of Information Management, Wuhan University, Wuhan, China; Center for Healthcare Big Data Research, The Big Data Institute, Wuhan University, Wuhan, China; School of Public Health, Wuhan University, Wuhan, China
- Hongying Wang: Department of Medical Ultrasonics/Institute of Pediatrics, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
8. Ukwuoma CC, Cai D, Heyat MBB, Bamisile O, Adun H, Al-Huda Z, Al-Antari MA. Deep learning framework for rapid and accurate respiratory COVID-19 prediction using chest X-ray images. Journal of King Saud University - Computer and Information Sciences 2023; 35:101596. PMID: 37275558; PMCID: PMC10211254; DOI: 10.1016/j.jksuci.2023.101596.
Abstract
COVID-19 is a contagious disease that affects the human respiratory system. Infected individuals may develop serious illness, and complications may result in death. Detecting COVID-19 from essentially identical thoracic anomalies in medical images is time-consuming, laborious, and prone to human error. This study proposes an end-to-end deep learning framework based on deep feature concatenation and a multi-head self-attention network. Feature concatenation involves fine-tuning pre-trained DenseNet, VGG-16, and InceptionV3 backbones, each trained on the large-scale ImageNet dataset, whereas the multi-head self-attention network is adopted for performance gain. End-to-end training and evaluation are conducted on the COVID-19_Radiography_Dataset for binary and multi-class scenarios. The proposed model achieved overall accuracies of 96.33% and 98.67% and F1-scores of 92.68% and 98.67% for the multi-class and binary scenarios, respectively. The study also highlights the difference between feature concatenation and the highest-performing individual model in accuracy (98.0% vs. 96.33%) and F1-score (97.34% vs. 95.10%). Furthermore, saliency maps of the employed attention mechanism, focused on the abnormal regions, are visualized using explainable artificial intelligence (XAI) technology. The proposed framework outperformed other recent deep learning models on the same dataset for COVID-19 prediction.
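A plain-NumPy sketch of the multi-head self-attention operation such a framework applies on top of concatenated backbone features; the token count, model width, and random weights below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # stable softmax
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, wq, wk, wv, n_heads):
    """Scaled dot-product self-attention over a set of feature tokens.

    x: (n_tokens, d_model); wq, wk, wv: (d_model, d_model).
    """
    n, d = x.shape
    dh = d // n_heads
    q, k, v = x @ wq, x @ wk, x @ wv
    # split into heads: (n_heads, n_tokens, dh)
    split = lambda m: m.reshape(n, n_heads, dh).transpose(1, 0, 2)
    q, k, v = split(q), split(k), split(v)
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(dh))  # (h, n, n)
    out = attn @ v                                          # (h, n, dh)
    return out.transpose(1, 0, 2).reshape(n, d)             # merge heads

rng = np.random.default_rng(0)
n, d = 4, 8  # illustrative: 4 feature tokens of width 8
x = rng.standard_normal((n, d))
wq, wk, wv = (rng.standard_normal((d, d)) for _ in range(3))
y = multi_head_self_attention(x, wq, wk, wv, n_heads=2)
print(y.shape)  # (4, 8)
```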
Affiliation(s)
- Chiagoziem C Ukwuoma: The College of Nuclear Technology and Automation Engineering, Chengdu University of Technology, Sichuan, 610059, China
- Dongsheng Cai: The College of Nuclear Technology and Automation Engineering, Chengdu University of Technology, Sichuan, 610059, China
- Md Belal Bin Heyat: IoT Research Center, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, Guangdong 518060, China
- Olusola Bamisile: Sichuan Industrial Internet Intelligent Monitoring and Application Engineering Technology Research Center, Chengdu University of Technology, China
- Humphrey Adun: Department of Mechanical and Energy Systems Engineering, Cyprus International University, Nicosia, North Nicosia, Cyprus
- Zaid Al-Huda: School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, Sichuan, China
- Mugahed A Al-Antari: Department of Artificial Intelligence, College of Software & Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
9. Kabir T, Chen L, Walji MF, Giancardo L, Jiang X, Shams S. Dental CLAIRES: Contrastive LAnguage Image REtrieval Search for Dental Research. AMIA Jt Summits Transl Sci Proc 2023; 2023:300-309. PMID: 37350885; PMCID: PMC10283104.
Abstract
Learning about diagnostic features and related clinical information from dental radiographs is important for dental research, but the lack of expert-annotated data and convenient search tools poses challenges. Our objective is to design a search tool driven by a user's free-text query for oral health research. The proposed framework, Dental CLAIRES (Contrastive LAnguage Image REtrieval Search for dental research), uses periapical radiographs and associated clinical details, such as periodontal diagnosis and demographic information, to retrieve the images that best match a text query. We applied contrastive representation learning, which finds images described by the user's text by maximizing the similarity score of positive (true) pairs and minimizing the score of negative (random) pairs. Our model achieved a hit@3 ratio of 96% and a Mean Reciprocal Rank (MRR) of 0.82. We also designed a graphical user interface that lets researchers interactively verify the model's performance.
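The reported hit@3 and MRR metrics can be computed from a text-to-image similarity matrix whose diagonal holds the true pairs, mirroring the contrastive training setup; the small matrix below is synthetic, purely for illustration:

```python
import numpy as np

def rank_of_truth(sim):
    """Rank (1 = best) of the true image for each text query.

    sim: (n_queries, n_images) similarity matrix; ground truth is the
    diagonal pairing (query i matches image i).
    """
    truth = np.diag(sim)
    # count images scoring strictly higher than the true pair, then +1
    return (sim > truth[:, None]).sum(axis=1) + 1

def hit_at_k(sim, k=3):
    """Fraction of queries whose true image ranks in the top k."""
    return float(np.mean(rank_of_truth(sim) <= k))

def mrr(sim):
    """Mean Reciprocal Rank of the true image over all queries."""
    return float(np.mean(1.0 / rank_of_truth(sim)))

sim = np.array([[0.9, 0.2, 0.1],
                [0.3, 0.8, 0.4],
                [0.6, 0.7, 0.5]])
print(hit_at_k(sim, k=3), mrr(sim))
```

Here the third query's true image ranks third (two distractors score higher), so hit@3 is still 1.0 while MRR drops below 1.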
Affiliation(s)
- Tanjida Kabir: University of Texas Health Science Center at Houston, School of Biomedical Informatics, Houston, Texas, USA
- Luyao Chen: University of Texas Health Science Center at Houston, School of Biomedical Informatics, Houston, Texas, USA
- Muhammad F Walji: Department of Diagnostic and Biomedical Sciences, The University of Texas Health Science Center at Houston, School of Dentistry, Houston, Texas, USA
- Luca Giancardo: University of Texas Health Science Center at Houston, School of Biomedical Informatics, Houston, Texas, USA
- Xiaoqian Jiang: University of Texas Health Science Center at Houston, School of Biomedical Informatics, Houston, Texas, USA
- Shayan Shams: University of Texas Health Science Center at Houston, School of Biomedical Informatics, Houston, Texas, USA; Department of Applied Data Science, San Jose State University, San Jose, California, USA
10. Riedel P, von Schwerin R, Schaudt D, Hafner A, Späte C. ResNetFed: Federated Deep Learning Architecture for Privacy-Preserving Pneumonia Detection from COVID-19 Chest Radiographs. Journal of Healthcare Informatics Research 2023; 7:203-224. PMID: 37359194; PMCID: PMC10265567; DOI: 10.1007/s41666-023-00132-7.
Abstract
Personal health data is subject to privacy regulations, making it challenging to apply centralized data-driven methods in healthcare, where training data is often person-specific. Federated Learning (FL) promises a decentralized solution to this problem: siloed data is used for model training without leaving its silo, ensuring data privacy. In this paper, we investigate the viability of the federated approach using the detection of COVID-19 pneumonia as a use case. We use 1411 individual chest radiographs, sourced from the public data repository COVIDx8, comprising 753 normal lung findings and 658 COVID-19-related pneumonias. We partition the data unevenly across five separate data silos to reflect a typical FL scenario. For the binary image classification of these radiographs, we propose ResNetFed, a pre-trained ResNet50 model modified for federation so that it supports Differential Privacy, along with a customized FL strategy for training with COVID-19 radiographs. The experimental results show that ResNetFed clearly outperforms locally trained ResNet50 models: owing to the uneven distribution of data across the silos, the locally trained models perform significantly worse (mean accuracies of 63% and 82.82%, respectively). In particular, ResNetFed shows excellent performance in underpopulated data silos, achieving up to 34.9 percentage points higher accuracy than local ResNet50 models. ResNetFed thus provides a federated solution that can assist initial COVID-19 screening in medical centers in a privacy-preserving manner.
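Federated training of this kind typically aggregates silo models with federated averaging, weighting each silo by its data volume. The paper describes its FL strategy as customized and the abstract does not spell it out, so the following FedAvg sketch with synthetic one-layer "models" is only illustrative:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: aggregate silo models weighted by silo size.

    client_weights: one list of parameter arrays per silo (same shapes);
    client_sizes: number of training images held by each silo.
    """
    total = float(sum(client_sizes))
    coeffs = [n / total for n in client_sizes]
    return [sum(c * w[layer] for c, w in zip(coeffs, client_weights))
            for layer in range(len(client_weights[0]))]

# two silos with uneven data, as in the paper's setting
wa = [np.array([1.0, 1.0])]   # silo A parameters (hypothetical)
wb = [np.array([3.0, 5.0])]   # silo B parameters (hypothetical)
merged = fedavg([wa, wb], client_sizes=[100, 300])
print(merged[0])  # weighted toward the larger silo: [2.5 4. ]
```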
Affiliation(s)
- Pascal Riedel
- Institute for Informatics, University of Applied Sciences, Prittwitzstraße 10, Ulm, 89075 Baden-Württemberg, Germany
- Reinhold von Schwerin
- Institute for Informatics, University of Applied Sciences, Prittwitzstraße 10, Ulm, 89075 Baden-Württemberg, Germany
- Daniel Schaudt
- Institute for Informatics, University of Applied Sciences, Prittwitzstraße 10, Ulm, 89075 Baden-Württemberg, Germany
- Alexander Hafner
- Institute for Informatics, University of Applied Sciences, Prittwitzstraße 10, Ulm, 89075 Baden-Württemberg, Germany
- Christian Späte
- Transferzentrum für Digitalisierung, Analytics & Data Science Ulm (DASU), Ensingerstraße 4, Ulm, 89073 Baden-Württemberg, Germany
|
11
|
Zhang X, Dong X, Saripan MIB, Du D, Wu Y, Wang Z, Cao Z, Wen D, Liu Y, Marhaban MH. Deep learning PET/CT-based radiomics integrates clinical data: A feasibility study to distinguish between tuberculosis nodules and lung cancer. Thorac Cancer 2023. [PMID: 37183577 DOI: 10.1111/1759-7714.14924] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2023] [Revised: 04/21/2023] [Accepted: 04/22/2023] [Indexed: 05/16/2023] Open
Abstract
BACKGROUND Radiomic diagnosis models generally consider only a single dimension of information, which limits their diagnostic accuracy and reliability. Integrating multiple dimensions of information into a deep learning model has the potential to improve its diagnostic capabilities. The purpose of this study was to evaluate the performance of a deep learning model in distinguishing tuberculosis (TB) nodules from lung cancer (LC) based on deep learning features, radiomic features, and clinical information. METHODS Positron emission tomography (PET) and computed tomography (CT) image data from 97 patients with LC and 77 patients with TB nodules were collected. One hundred radiomic features were extracted from both PET and CT imaging using the pyradiomics platform, and 2048 deep learning features were obtained through a residual neural network approach. Four models were compared: a traditional machine learning model with radiomic features as input (traditional radiomics), a deep learning model with image features as separate input (deep convolutional neural network [DCNN]), a deep learning model with two inputs of radiomic features and deep learning features (radiomics-DCNN), and a deep learning model with inputs of radiomic features, deep learning features, and clinical information (integrated model). The models were evaluated using area under the curve (AUC), sensitivity, accuracy, specificity, and F1-score metrics. RESULTS In the classification of TB nodules versus LC, the integrated model achieved an AUC of 0.84 (0.82-0.88), sensitivity of 0.85 (0.80-0.88), and specificity of 0.84 (0.83-0.87), performing better than the other models. CONCLUSION The integrated model was the best classification model for the diagnosis of TB nodules and solid LC.
Affiliation(s)
- Xiaolei Zhang
- Faculty of Engineering, Universiti Putra Malaysia, Serdang, Malaysia
- Department of Biomedical Engineering, Chengde Medical University, Chengde, Hebei, China
- Xianling Dong
- Department of Biomedical Engineering, Chengde Medical University, Chengde, Hebei, China
- Hebei International Research Center of Medical Engineering and Hebei Provincial Key Laboratory of Nerve Injury and Repair, Chengde Medical University, Chengde, Hebei, China
- Dongyang Du
- School of Biomedical Engineering and Guangdong Province Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
- Yanjun Wu
- Department of Biomedical Engineering, Chengde Medical University, Chengde, Hebei, China
- Zhongxiao Wang
- Department of Biomedical Engineering, Chengde Medical University, Chengde, Hebei, China
- Zhendong Cao
- Department of Radiology, the Affiliated Hospital of Chengde Medical University, Chengde, China
- Dong Wen
- Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing, China
- Yanli Liu
- Department of Biomedical Engineering, Chengde Medical University, Chengde, Hebei, China
|
12
|
Rajpal A, Sehra K, Mishra A, Chetty G. A low-resolution real-time face recognition using extreme learning machine and its variants. THE IMAGING SCIENCE JOURNAL 2023. [DOI: 10.1080/13682199.2023.2183544] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/05/2023]
Affiliation(s)
- Ankit Rajpal
- Department of Computer Science, University of Delhi, Delhi, India
- Khushwant Sehra
- Department of Electronic Science, University of Delhi South Campus, Delhi, India
- Anurag Mishra
- Department of Electronics, Deen Dayal Upadhyaya College, University of Delhi, Delhi, India
- Girija Chetty
- Faculty of Science and Technology, University of Canberra, Bruce, ACT, Australia
|
13
|
Sigalingging X, Prakosa SW, Leu JS, Hsieh HY, Avian C, Faisal M. SCANet: Implementation of Selective Context Adaptation Network in Smart Farming Applications. SENSORS (BASEL, SWITZERLAND) 2023; 23:1358. [PMID: 36772398 PMCID: PMC9921277 DOI: 10.3390/s23031358] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Revised: 01/18/2023] [Accepted: 01/19/2023] [Indexed: 06/18/2023]
Abstract
In the last decade, deep learning has enjoyed the spotlight as a game-changing addition to smart farming and precision agriculture. This development has been observed predominantly in developed countries, while in developing countries most farmers, especially smallholders, have not enjoyed such wide and deep adoption of these new technologies. In this paper we attempt to improve the image classification component of smart farming and precision agriculture. Agricultural commodities tend to possess distinctive textural details on their surfaces, which we attempt to exploit. We propose a deep learning-based approach called the Selective Context Adaptation Network (SCANet). SCANet performs a feature enhancement strategy by leveraging level-wise information and employing a context selection mechanism. By exploiting the contextual correlation features of crop images, our approach demonstrates the effectiveness of the context selection mechanism. The proposed scheme achieves 88.72% accuracy and outperforms existing approaches. Our model is evaluated on a cocoa bean dataset constructed from a real cocoa bean industry scene in Indonesia.
|
14
|
Ukwuoma CC, Qin Z, Heyat MBB, Akhtar F, Smahi A, Jackson JK, Furqan Qadri S, Muaad AY, Monday HN, Nneji GU. Automated Lung-Related Pneumonia and COVID-19 Detection Based on Novel Feature Extraction Framework and Vision Transformer Approaches Using Chest X-ray Images. Bioengineering (Basel) 2022; 9:709. [PMID: 36421110 PMCID: PMC9687434 DOI: 10.3390/bioengineering9110709] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2022] [Revised: 11/04/2022] [Accepted: 11/16/2022] [Indexed: 11/22/2022] Open
Abstract
According to research, classifiers and detectors are less accurate when images are blurry, have low contrast, or have other flaws, which raises questions about a machine learning model's ability to recognize items effectively. The chest X-ray has proven to be the preferred modality for medical imaging, as it contains much information about a patient; its interpretation is nevertheless quite difficult. The goal of this research is to construct a reliable deep-learning model capable of producing high classification accuracy on chest X-ray images for lung diseases. To enable a thorough study of the chest X-ray image, the suggested framework first derives richer features using an ensemble technique, then applies global second-order pooling to derive higher-level global features of the images. Furthermore, the images are split into patches with position embeddings added before the patches are analyzed individually via a vision transformer approach. The proposed model yielded 96.01% sensitivity, 96.20% precision, and 98.00% accuracy on the COVID-19 Radiography Dataset, while achieving 97.84% accuracy, 96.76% sensitivity, and 96.80% precision on the Covid-ChestX-ray-15k dataset. The experimental findings reveal that the presented models outperform traditional deep learning models and other state-of-the-art approaches reported in the literature.
Affiliation(s)
- Chiagoziem C. Ukwuoma
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Zhiguang Qin
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Md Belal Bin Heyat
- IoT Research Center, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
- Centre for VLSI and Embedded System Technologies, International Institute of Information Technology, Hyderabad 500032, India
- Department of Science and Engineering, Novel Global Community Educational Foundation, Hebersham, NSW 2770, Australia
- Faijan Akhtar
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Abla Smahi
- School of Electronic and Computer Engineering, Peking University Shenzhen Graduate School, Peking University, Shenzhen 518060, China
- Jehoiada K. Jackson
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Syed Furqan Qadri
- Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou 311121, China
- Happy N. Monday
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Grace U. Nneji
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
|
15
|
Koh SJT, Nafea M, Nugroho H. Towards edge devices implementation: deep learning model with visualization for COVID-19 prediction from chest X-ray. ADVANCES IN COMPUTATIONAL INTELLIGENCE 2022; 2:33. [PMID: 36187081 PMCID: PMC9516511 DOI: 10.1007/s43674-022-00044-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/02/2021] [Revised: 05/17/2022] [Accepted: 09/01/2022] [Indexed: 10/29/2022]
Abstract
Due to the global outbreak of COVID-19, countries around the world faced shortages of resources (i.e., testing kits, medicine). Quick diagnosis of COVID-19 and isolation of patients are crucial in curbing the pandemic, especially in rural areas, because the disease is highly contagious and can spread easily. To assist doctors, several studies have proposed initial detection of COVID-19 cases using radiological images. In this paper, we propose an alternative method for analyzing chest X-ray images that provides an efficient and accurate diagnosis of COVID-19 and can run on edge devices, acting as an enabler for deploying the deep learning model in practical applications. Convolutional neural network models, fine-tuned to predict COVID-19 and pneumonia infection from chest X-ray images, are developed by adopting transfer learning techniques. The developed model yielded an accuracy of 98.13%, sensitivity of 97.7%, and specificity of 99.1%. To highlight the important regions in the X-ray images that direct the model to its prediction, we adopted Gradient-weighted Class Activation Mapping (Grad-CAM). The generated heat maps were then compared with X-ray images annotated by board-certified radiologists, and the findings strongly correlated with clinical evidence. For practical deployment, we implemented the trained model on edge devices (NCS2), achieving a 90% improvement in inference speed compared to CPU. This shows that the developed model has the potential to be implemented on the edge, for example in primary care clinics and rural areas that are not well-equipped or do not have access to stable internet connections.
Affiliation(s)
- Shaline Jia Thean Koh
- Present Address: Department of Electrical and Electronic Engineering, University of Nottingham Malaysia, Semenyih, 43500 Malaysia
- Marwan Nafea
- Present Address: Department of Electrical and Electronic Engineering, University of Nottingham Malaysia, Semenyih, 43500 Malaysia
- Hermawan Nugroho
- Present Address: Department of Electrical and Electronic Engineering, University of Nottingham Malaysia, Semenyih, 43500 Malaysia
|
16
|
Zhang X, Zheng L, Tan Z, Li S. Loop Closure Detection Based on Residual Network and Capsule Network for Mobile Robot. SENSORS (BASEL, SWITZERLAND) 2022; 22:7137. [PMID: 36236235 PMCID: PMC9573234 DOI: 10.3390/s22197137] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Revised: 07/10/2022] [Accepted: 07/17/2022] [Indexed: 06/16/2023]
Abstract
Loop closure detection based on a residual network (ResNet) and a capsule network (CapsNet) is proposed to address the problems of low accuracy and poor robustness for mobile robot simultaneous localization and mapping (SLAM) in complex scenes. First, the residual network of a feature coding strategy is introduced to extract the shallow geometric features and deep semantic features of images, reduce the amount of image noise information, accelerate the convergence speed of the model, and solve the problems of gradient disappearance and network degradation of deep neural networks. Then, the dynamic routing mechanism of the capsule network is optimized through the entropy peak density, and a vector is used to represent the spatial position relationship between features, which can improve the ability of image feature extraction and expression to optimize the overall performance of networks. Finally, the optimized residual network and capsule network are fused to retain the differences and correlations between features, and the global feature descriptors and feature vectors are combined to calculate the similarity of image features for loop closure detection. The experimental results show that the proposed method can achieve loop closure detection for mobile robots in complex scenes, such as view changes, illumination changes, and dynamic objects, and improve the accuracy and robustness of mobile robot SLAM.
Affiliation(s)
- Xin Zhang
- School of Mechanical Engineering, Shenyang Ligong University, Shenyang 110159, China
- Shenyang Institute of Computing Technology Co., Ltd., Chinese Academy of Sciences, Shenyang 110168, China
- Software College, Northeastern University, Shenyang 110169, China
- Liaomo Zheng
- Shenyang Institute of Computing Technology Co., Ltd., Chinese Academy of Sciences, Shenyang 110168, China
- Zhenhua Tan
- Software College, Northeastern University, Shenyang 110169, China
- Suo Li
- School of Mechanical Engineering, Shenyang Ligong University, Shenyang 110159, China
|
17
|
A Novel Lightweight Approach to COVID-19 Diagnostics Based on Chest X-ray Images. J Clin Med 2022; 11:jcm11195501. [PMID: 36233368 PMCID: PMC9571927 DOI: 10.3390/jcm11195501] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2022] [Revised: 09/03/2022] [Accepted: 09/15/2022] [Indexed: 11/22/2022] Open
Abstract
Background: This paper presents a novel lightweight approach based on machine learning methods supporting COVID-19 diagnostics based on X-ray images. The presented schema offers effective and quick diagnosis of COVID-19. Methods: Real data (X-ray images) from hospital patients were used in this study. All labels, namely those that were COVID-19 positive and negative, were confirmed by a PCR test. Feature extraction was performed using a convolutional neural network, and the subsequent classification of samples used Random Forest, XGBoost, LightGBM and CatBoost. Results: The LightGBM model was the most effective in classifying patients on the basis of features extracted from X-ray images, with an accuracy of 1.00, a precision of 1.00, a recall of 1.00 and an F1-score of 1.00. Conclusion: The proposed schema can potentially be used as a support for radiologists to improve the diagnostic process. The presented approach is efficient and fast. Moreover, it is not excessively complex computationally.
|
18
|
Yang J, Zhang L, Tang X. CrodenseNet: An efficient parallel cross DenseNet for COVID-19 infection detection. Biomed Signal Process Control 2022; 77:103775. [DOI: 10.1016/j.bspc.2022.103775] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Revised: 03/28/2022] [Accepted: 04/27/2022] [Indexed: 11/02/2022]
|
19
|
Efficient Data-Driven Crop Pest Identification Based on Edge Distance-Entropy for Sustainable Agriculture. SUSTAINABILITY 2022. [DOI: 10.3390/su14137825] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/16/2023]
Abstract
Human agricultural activities are always accompanied by pests and diseases, which have caused great losses in crop production. Intelligent algorithms based on deep learning have made progress in the field of pest control, but relying on large amounts of data to drive them consumes substantial resources, which is not conducive to the sustainable development of smart agriculture. The research in this paper starts with the data and is committed to finding efficient data, solving the data dilemma, and supporting sustainable agricultural development. This paper proposes an Edge Distance-Entropy data evaluation method, which can be used to select efficient crop pest data; data consumption is reduced by 5% to 15% compared with existing methods. The experimental results demonstrate that this method can obtain efficient crop pest data, achieving the full-data effect while using only about 60% of the data. Compared with other data evaluation methods, the method proposed in this paper achieves state-of-the-art results. This work addresses the dilemma of existing intelligent pest-control algorithms relying on large amounts of data, and has important practical significance for realizing the sustainable development of modern smart agriculture.
|
20
|
Pandey SK, Bhandari AK, Singh H. A transfer learning based deep learning model to diagnose covid-19 CT scan images. HEALTH AND TECHNOLOGY 2022; 12:845-866. [PMID: 35698586 PMCID: PMC9177227 DOI: 10.1007/s12553-022-00677-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2022] [Accepted: 05/20/2022] [Indexed: 12/15/2022]
Abstract
To save human lives during pandemic conditions, an effective automated method is needed. When the available resources become insufficient to handle the patient load, a fast and reliable method is required that analyses patient medical data with high efficiency and accuracy within time constraints. In this manuscript, an effective and efficient deep-learning method is proposed for the exact diagnosis of whether a patient is coronavirus disease 2019 (covid-19) positive or negative. To achieve a correct diagnosis with high accuracy, pre-processed segmented images are used for the deep-learning analysis. In the first step, the X-ray or computed tomography (CT) image of a covid-19-infected person is analysed with various image segmentation schemes: simple thresholding at 0.3, simple thresholding at 0.6, multiple thresholding (between 26 and 230), and Otsu's algorithm. Comparative analysis of these methods shows that Otsu's algorithm is a simple and optimal scheme for improving the segmented binary image from a diagnostic point of view. Otsu's segmentation scheme gives more precise values than the other methods on image quality parameters such as accuracy, sensitivity, F-measure, precision, and specificity. For image classification, the ResNet-50, MobileNet, and VGG-16 deep learning models give accuracies of 70.24%, 72.95%, and 83.18%, respectively, on non-segmented CT scan images, and 75.08%, 80.12%, and 99.28%, respectively, on Otsu-segmented CT scan images. A comparative study finds that VGG-16 with Otsu-segmented CT images gives a very high accuracy of 99.28%. Based on this diagnosis, an arterial blood gas (ABG) analysis is first performed; from the diagnosis and the ABG report, the patient's severity level can be determined and the appropriate treatment protocol followed immediately to save the patient's life. Compared with existing works, this novel deep-learning-based method reduces complexity, takes much less time, and has greater accuracy for the exact diagnosis of covid-19.
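Otsu's algorithm, singled out above as the preferred segmentation scheme, picks the grey-level threshold that maximizes the between-class variance of the image histogram. The snippet below is a minimal illustrative sketch of that idea on a toy bimodal "image", not the authors' pipeline; the function name and toy data are made up.

```python
import numpy as np

def otsu_threshold(image, levels=256):
    """Return the grey level that maximizes between-class variance (Otsu)."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()                      # normalized histogram
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0          # class means
        mu1 = (np.arange(t, levels) * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy bimodal "image": dark background (~30) and a bright region (~200)
img = np.concatenate([np.full(500, 30), np.full(100, 200)])
t = otsu_threshold(img)
mask = img >= t   # binary segmentation
```

Any threshold between the two histogram modes separates them perfectly here, and the search lands on the first such level.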
Affiliation(s)
- Sanat Kumar Pandey
- Department of Electronics and Communication Engineering, National Institute of Technology Patna, Bihar, India
- Ashish Kumar Bhandari
- Department of Electronics and Communication Engineering, National Institute of Technology Patna, Bihar, India
- Himanshu Singh
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tiruchirappalli, India
|
21
|
Ho TT, Tran KD, Huang Y. FedSGDCOVID: Federated SGD COVID-19 Detection under Local Differential Privacy Using Chest X-ray Images and Symptom Information. SENSORS (BASEL, SWITZERLAND) 2022; 22:3728. [PMID: 35632136 PMCID: PMC9147951 DOI: 10.3390/s22103728] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/19/2022] [Revised: 05/09/2022] [Accepted: 05/10/2022] [Indexed: 12/15/2022]
Abstract
Coronavirus (COVID-19) has created an unprecedented global crisis because of its detrimental effect on the global economy and health. COVID-19 cases have been rapidly increasing, with no sign of stopping. As a result, test kits and accurate detection models are in short supply. Early identification of COVID-19 patients will help decrease the infection rate. Thus, developing an automatic algorithm that enables the early detection of COVID-19 is essential. Moreover, patient data are sensitive, and they must be protected to prevent malicious attackers from revealing information through model updates and reconstruction. In this study, we present a privacy-preserving federated learning system for COVID-19 detection that does not share data among data owners. First, we constructed a federated learning system using chest X-ray images and symptom information, with the goal of developing a decentralized model across multiple hospitals without sharing data. We found that adding spatial pyramid pooling to a 2D convolutional neural network improves accuracy on chest X-ray images. Second, we found that the accuracy of federated learning for COVID-19 identification decreases significantly for non-independent and identically distributed (Non-IID) data. We then proposed a strategy to improve the model's accuracy on Non-IID data by increasing the total number of clients, the parallelism (client fraction), and the computation per client. Finally, for our federated learning model, we applied differentially private stochastic gradient descent (DP-SGD) to improve the privacy of patient data. We also proposed a strategy to maintain the robustness of federated learning to ensure the security and accuracy of the model.
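The DP-SGD step mentioned above clips each per-example gradient to a fixed norm and adds calibrated Gaussian noise before averaging, bounding any single patient's influence on a model update. Below is a minimal illustrative sketch of that clip-and-noise rule, not the authors' implementation; `dp_sgd_step` and the toy gradients are hypothetical, and a real system would also track the cumulative privacy budget.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_mult=1.1, rng=None):
    """One DP-SGD aggregation: clip each example's gradient, sum, add noise."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = max(np.linalg.norm(g), 1e-12)          # avoid divide-by-zero
        clipped.append(g * min(1.0, clip_norm / norm))  # scale down if too large
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

grads = [np.array([3.0, 4.0]),    # norm 5.0 -> clipped to norm 1.0
         np.array([0.3, 0.4])]    # norm 0.5 -> left unchanged
update = dp_sgd_step(grads)
update_nonoise = dp_sgd_step(grads, noise_mult=0.0)  # clipping effect only
```

With the noise disabled, the update is just the mean of the clipped gradients, which makes the clipping bound easy to verify.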
Affiliation(s)
- Trang-Thi Ho
- Research Center for Information Technology Innovation, Academia Sinica, Taipei 10607, Taiwan
|
22
|
Aslan MF, Sabanci K, Durdu A, Unlersen MF. COVID-19 diagnosis using state-of-the-art CNN architecture features and Bayesian Optimization. Comput Biol Med 2022; 142:105244. [PMID: 35077936 PMCID: PMC8770389 DOI: 10.1016/j.compbiomed.2022.105244] [Citation(s) in RCA: 40] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2021] [Revised: 01/17/2022] [Accepted: 01/17/2022] [Indexed: 12/16/2022]
Abstract
The 2019 coronavirus outbreak, called COVID-19, which originated in Wuhan, negatively affected the lives of millions of people, and many people died from this infection. To prevent the spread of the disease, various restriction decisions have been taken all over the world, and the number of COVID-19 tests has been increased to quarantine infected people. However, due to problems in the supply of RT-PCR tests and the ease of obtaining computed tomography and X-ray images, imaging-based methods have become very popular in the diagnosis of COVID-19, and studies using these images to classify COVID-19 have increased. This paper presents a classification method for chest images in the COVID-19 Radiography Database using features extracted by popular Convolutional Neural Network (CNN) models (AlexNet, ResNet18, ResNet50, Inceptionv3, Densenet201, Inceptionresnetv2, MobileNetv2, GoogleNet). The determination of the hyperparameters of Machine Learning (ML) algorithms by Bayesian optimization and ANN-based image segmentation are the two main contributions of this study. First, lung segmentation is performed automatically from the raw image with Artificial Neural Networks (ANNs). To ensure data diversity, data augmentation is applied to the COVID-19 class, which has fewer samples than the other two classes. These images are then applied as input to the CNN models. The features extracted from each CNN model are given as input to four ML algorithms, namely Support Vector Machine (SVM), k-Nearest Neighbors (k-NN), Naive Bayes (NB), and Decision Tree (DT), for classification. To achieve the most successful classification accuracy, the hyperparameters of each ML algorithm are determined using Bayesian optimization. With classification using these hyperparameters, the highest accuracy, 96.29%, is obtained with the DenseNet201 model and the SVM algorithm. The Sensitivity, Precision, Specificity, MCC, and F1-Score values for this configuration are 0.9642, 0.9642, 0.9812, 0.9641, and 0.9453, respectively. These results show that ML methods with optimal hyperparameters can produce successful results.
Affiliation(s)
- Muhammet Fatih Aslan
- Electrical and Electronics Engineering, Karamanoglu Mehmetbey University, Karaman, Turkey
- Kadir Sabanci
- Electrical and Electronics Engineering, Karamanoglu Mehmetbey University, Karaman, Turkey
- Akif Durdu
- Electrical and Electronics Engineering, Konya Technical University, Konya, Turkey
|
23
|
Deep Ensemble Learning-Based Models for Diagnosis of COVID-19 from Chest CT Images. Healthcare (Basel) 2022; 10:healthcare10010166. [PMID: 35052328 PMCID: PMC8776223 DOI: 10.3390/healthcare10010166] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2021] [Revised: 01/11/2022] [Accepted: 01/13/2022] [Indexed: 12/11/2022] Open
Abstract
Novel coronavirus (COVID-19) has been endangering human health and life since 2019. The timely quarantine, diagnosis, and treatment of infected people are the most necessary and important work. The most widely used method of detecting COVID-19 is real-time polymerase chain reaction (RT-PCR). Along with RT-PCR, computed tomography (CT) has become a vital technique in diagnosing and managing COVID-19 patients. COVID-19 reveals a number of radiological signatures that can be easily recognized through chest CT. These signatures must be analyzed by radiologists. It is, however, an error-prone and time-consuming process. Deep Learning-based methods can be used to perform automatic chest CT analysis, which may shorten the analysis time. The aim of this study is to design a robust and rapid medical recognition system to identify positive cases in chest CT images using three Ensemble Learning-based models. There are several techniques in Deep Learning for developing a detection system. In this paper, we employed Transfer Learning. With this technique, we can apply the knowledge obtained from a pre-trained Convolutional Neural Network (CNN) to a different but related task. In order to ensure the robustness of the proposed system for identifying positive cases in chest CT images, we used two Ensemble Learning methods namely Stacking and Weighted Average Ensemble (WAE) to combine the performances of three fine-tuned Base-Learners (VGG19, ResNet50, and DenseNet201). For Stacking, we explored 2-Levels and 3-Levels Stacking. The three generated Ensemble Learning-based models were trained on two chest CT datasets. A variety of common evaluation measures (accuracy, recall, precision, and F1-score) are used to perform a comparative analysis of each method. 
The experimental results show that the WAE method provides the most reliable performance, achieving a high recall value, which is desirable in medical applications, since failing to identify a truly infected patient poses the greater risk.
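The Weighted Average Ensemble (WAE) idea above can be sketched in a few lines: each base learner's class probabilities are combined with weights, e.g. proportional to validation performance. This is an illustrative sketch, not the paper's code; the weights and probability values below are made up, and the three arrays merely stand in for the VGG19/ResNet50/DenseNet201 base learners.

```python
import numpy as np

def weighted_average_ensemble(prob_maps, weights):
    """Combine class-probability outputs of several base learners.

    prob_maps: (n_models, n_samples, n_classes) predicted probabilities
    weights:   per-model weights (normalized internally)
    """
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return np.tensordot(w, np.asarray(prob_maps, dtype=float), axes=1)

# Three hypothetical base learners, one sample, two classes (e.g. COVID vs. normal)
p_a = [[0.6, 0.4]]
p_b = [[0.8, 0.2]]
p_c = [[0.5, 0.5]]
fused = weighted_average_ensemble([p_a, p_b, p_c], weights=[0.3, 0.5, 0.2])
pred = fused.argmax(axis=1)
```

Tuning the weights toward high-recall base learners is one simple way to bias the ensemble toward the recall behaviour the abstract highlights.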
|
24
|
Attallah O. A computer-aided diagnostic framework for coronavirus diagnosis using texture-based radiomics images. Digit Health 2022; 8:20552076221092543. [PMID: 35433024 PMCID: PMC9005822 DOI: 10.1177/20552076221092543] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2022] [Accepted: 03/21/2022] [Indexed: 12/14/2022] Open
Abstract
The accurate and rapid detection of the novel coronavirus infection is very important to prevent the fast spread of the disease, thereby reducing the negative effects that have influenced many industrial sectors, especially healthcare. Artificial intelligence techniques, in particular deep learning, could help in the fast and precise diagnosis of coronavirus from computed tomography images. Most artificial intelligence-based studies used the original computed tomography images to build their models; however, the integration of texture-based radiomics images and deep learning techniques could improve diagnostic accuracy for the novel coronavirus disease. This study proposes a computer-assisted diagnostic framework based on multiple deep learning and texture-based radiomics approaches. It first trains three Residual Network (ResNet) deep learning models with two types of texture-based radiomics images, the discrete wavelet transform and the gray-level covariance matrix, instead of the original computed tomography images. Then, it fuses the texture-based radiomics deep feature sets extracted from each using the discrete cosine transform. Thereafter, it further combines the fused texture-based radiomics deep features obtained from the three convolutional neural networks. Finally, three support vector machine classifiers are utilized for the classification procedure. The proposed method is validated experimentally on the benchmark severe acute respiratory syndrome coronavirus 2 computed tomography image dataset. The accuracies attained indicate that using texture-based radiomics (gray-level covariance matrix, discrete wavelet transform) images for training ResNet-18 (83.22%, 74.9%), ResNet-50 (80.94%, 78.39%), and ResNet-101 (80.54%, 77.99%) is better than using the original computed tomography images (70.34%, 76.51%, and 73.42%) for ResNet-18, ResNet-50, and ResNet-101, respectively.
Furthermore, the sensitivity, specificity, accuracy, precision, and F1-score achieved by the proposed computer-assisted diagnostic framework after the two fusion steps are 99.47%, 99.72%, 99.60%, 99.72%, and 99.60%, which shows that combining the texture-based radiomics deep features obtained from the three ResNets boosted its performance. Thus, fusing multiple texture-based radiomics deep features mined from several convolutional neural networks is better than using only one type of radiomics approach and a single convolutional neural network. The performance of the proposed computer-assisted diagnostic framework allows it to be used by radiologists to attain a fast and accurate diagnosis.
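For orientation, the texture representation this entry trains on (its "gray-level covariance matrix" appears to correspond to the gray-level co-occurrence matrix standard in texture radiomics) can be sketched with a minimal numpy computation; this is an illustrative sketch of the usual co-occurrence definition, not the authors' implementation:

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one pixel offset (dy, dx):
    counts how often gray level i is adjacent to gray level j."""
    dy, dx = offset
    h, w = image.shape
    m = np.zeros((levels, levels), dtype=np.int64)
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[image[y, x], image[y2, x2]] += 1
    return m

# Tiny 4-gray-level image; a CT slice would be quantized the same way.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
g = glcm(img, levels=4)   # 12 horizontal neighbour pairs in a 4x4 image
```

In the cited framework such co-occurrence images (one per offset) replace the raw CT slices as CNN input.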
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
25
Verma AK, Vamsi I, Saurabh P, Sudha R, G R S, S R. Wavelet and deep learning-based detection of SARS-nCoV from thoracic X-ray images for rapid and efficient testing. EXPERT SYSTEMS WITH APPLICATIONS 2021; 185:115650. [PMID: 34366576 PMCID: PMC8327617 DOI: 10.1016/j.eswa.2021.115650] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/21/2021] [Revised: 06/02/2021] [Accepted: 07/20/2021] [Indexed: 05/07/2023]
Abstract
This paper proposes a wavelet- and artificial intelligence-enabled rapid and efficient testing procedure for patients with Severe Acute Respiratory Coronavirus Syndrome (SARS-nCoV) through a deep learning approach applied to thoracic X-ray images. Presently, the virus infection is diagnosed primarily by real-time Reverse Transcriptase-Polymerase Chain Reaction (rRT-PCR), based on its genetic prints; this procedure takes a substantial amount of time to identify and diagnose infected patients. The proposed research uses wavelet-based convolutional neural network architectures to detect SARS-nCoV. The CNN is pre-trained on ImageNet and trained end-to-end using thoracic X-ray images. To execute the Discrete Wavelet Transform (DWT), mother wavelet functions from different families, namely Haar, Daubechies, Symlet, Biorthogonal, Coiflet, and Discrete Meyer, were considered. Two-level decomposition via the DWT is adopted to extract prominent features (peripheral and subpleural ground-glass opacities, often in the lower lobes) explicitly from thoracic X-ray images while suppressing noise, further enhancing the signal-to-noise ratio. The proposed wavelet-based deep learning models, for both two-class instances (COVID vs. Normal) and four-class instances (COVID-19 vs. PNA bacterial vs. PNA viral vs. Normal), were validated on publicly available databases using the k-Fold Cross-Validation (k-Fold CV) technique. In addition to these X-ray images, images of recent COVID-19 patients were further used to examine the model's practicality and real-time feasibility in combating the current pandemic. It was observed that the Symlet 7 approximation component with two-level decomposition achieved the highest test accuracy of 98.87%, followed by Biorthogonal 2.6 with 98.73%.
While the test accuracy for Symlet 7 and Biorthogonal 2.6 is high, Haar and Daubechies with two levels demonstrated excellent validation accuracy on unseen data. It was also observed that the precision, recall, and dice similarity coefficient for four-class instances were 98%, 98%, and 99%, respectively, using the proposed algorithm.
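The two-level decomposition described here can be illustrated with a minimal 2-D Haar DWT in numpy (a sketch only, not the paper's pipeline; a real implementation would typically use a wavelet library such as PyWavelets, which also provides the Symlet and Biorthogonal families the paper evaluates):

```python
import numpy as np

def haar_dwt2(x):
    """One level of the 2-D Haar DWT: approximation (LL) plus
    horizontal/vertical/diagonal detail sub-bands (LH, HL, HH)."""
    lo = (x[0::2, :] + x[1::2, :]) / np.sqrt(2)   # row low-pass
    hi = (x[0::2, :] - x[1::2, :]) / np.sqrt(2)   # row high-pass
    ll = (lo[:, 0::2] + lo[:, 1::2]) / np.sqrt(2)
    lh = (lo[:, 0::2] - lo[:, 1::2]) / np.sqrt(2)
    hl = (hi[:, 0::2] + hi[:, 1::2]) / np.sqrt(2)
    hh = (hi[:, 0::2] - hi[:, 1::2]) / np.sqrt(2)
    return ll, (lh, hl, hh)

img = np.arange(64, dtype=float).reshape(8, 8)    # stand-in for an X-ray patch
ll1, (lh1, hl1, hh1) = haar_dwt2(img)             # level 1: four 4x4 sub-bands
ll2, _ = haar_dwt2(ll1)                           # level 2 approximation: 2x2
```

The orthonormal Haar filters preserve energy across sub-bands, and it is the level-2 approximation component that would feed the CNN in the scheme above.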
Affiliation(s)
- Amar Kumar Verma
- Department of Electrical and Electronics, Birla Institute of Technology and Science-Pilani, Hyderabad Campus, 500078, India
- Inturi Vamsi
- Department of Mechanical Engineering, Birla Institute of Technology and Science-Pilani, Hyderabad Campus, 500078, India
- Prerna Saurabh
- Department of Computer Science and Engineering, Vellore Institute of Technology-Vellore Campus, Tamil Nadu, 632014, India
- Radhika Sudha
- Department of Electrical and Electronics, Birla Institute of Technology and Science-Pilani, Hyderabad Campus, 500078, India
- Sabareesh G R
- Department of Mechanical Engineering, Birla Institute of Technology and Science-Pilani, Hyderabad Campus, 500078, India
- Rajkumar S
- Department of Computer Science and Engineering, Vellore Institute of Technology-Vellore Campus, Tamil Nadu, 632014, India
26
Zhao W, Jiang W, Qiu X. Fine-Tuning Convolutional Neural Networks for COVID-19 Detection from Chest X-ray Images. Diagnostics (Basel) 2021; 11:1887. [PMID: 34679585 PMCID: PMC8535063 DOI: 10.3390/diagnostics11101887] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2021] [Revised: 09/30/2021] [Accepted: 10/10/2021] [Indexed: 12/24/2022] Open
Abstract
As the COVID-19 pandemic continues to ravage the world, the use of chest X-ray (CXR) images as a complementary screening strategy to reverse transcription-polymerase chain reaction (RT-PCR) testing continues to grow, owing to their routine clinical application to respiratory diseases. We performed extensive convolutional neural network (CNN) fine-tuning experiments and found that models pretrained on larger out-of-domain datasets show improved performance, suggesting that prior knowledge learned from out-of-domain training also transfers to X-ray images. With appropriate hyperparameter selection, we found that higher-resolution images carry more clinical information, and that the use of mixup in training improved model performance. The experiments showed that our proposed transfer learning approach yields state-of-the-art results. Furthermore, we evaluated the performance of our model with a small amount of downstream training data and found that it still performed well in COVID-19 identification. We also explored the mechanism of model detection using the gradient-weighted class activation mapping (Grad-CAM) method to interpret the detection of radiology images. The results helped us understand how the model detects COVID-19, which can be used to discover new visual features and assist radiologists in screening.
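The mixup augmentation this entry credits for part of its gains can be sketched in a few lines (illustrative numpy code, not the authors' training pipeline; the beta-distribution parameter alpha=0.2 and the toy images are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Mixup: train on a convex combination of two images and their
    one-hot labels, with the weight drawn from Beta(alpha, alpha)."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x_covid, y_covid = np.ones((4, 4)), np.array([1.0, 0.0])     # toy "COVID" image
x_normal, y_normal = np.zeros((4, 4)), np.array([0.0, 1.0])  # toy "normal" image
x_mix, y_mix = mixup(x_covid, y_covid, x_normal, y_normal)
```

Because the same weight blends pixels and labels, the mixed label stays a valid probability vector, which regularizes the CNN toward linear behaviour between training examples.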
Affiliation(s)
- Wentao Zhao
- College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310023, China; (W.Z.); (X.Q.)
- School of Intelligent Transportation, Zhejiang Institute of Mechanical & Electrical Engineering, Hangzhou 310053, China
- Wei Jiang
- College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310023, China; (W.Z.); (X.Q.)
- Xinguo Qiu
- College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310023, China; (W.Z.); (X.Q.)
27
A Hybrid Deep Learning Approach for COVID-19 Diagnosis via CT and X-ray Medical Images. IOCA 2021 2021. [DOI: 10.3390/ioca2021-10909] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
28
Moses DA. Deep learning applied to automatic disease detection using chest X-rays. J Med Imaging Radiat Oncol 2021; 65:498-517. [PMID: 34231311 DOI: 10.1111/1754-9485.13273] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2021] [Accepted: 06/08/2021] [Indexed: 12/24/2022]
Abstract
Deep learning (DL) has shown rapid advancement and considerable promise when applied to the automatic detection of diseases using chest X-rays (CXRs). This is important given the widespread use of CXRs across the world in diagnosing significant pathologies, and the lack of trained radiologists to report them. This review article introduces the basic concepts of DL as applied to CXR image analysis, including basic deep neural network (DNN) structure, the use of transfer learning and the application of data augmentation. It then reviews the current literature on how DNN models have been applied to the detection of common CXR abnormalities (e.g. lung nodules, pneumonia, tuberculosis and pneumothorax) over the last few years, including DL approaches employed for the classification of multiple different diseases (multi-class classification). The performance of different techniques and models and their comparison with human observers are presented. Some of the challenges facing DNN models, including their future implementation and relationships to radiologists, are also discussed.
Affiliation(s)
- Daniel A Moses
- Graduate School of Biomedical Engineering, Faculty of Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Department of Medical Imaging, Prince of Wales Hospital, Sydney, New South Wales, Australia
29
Jangam E, Annavarapu CSR. A stacked ensemble for the detection of COVID-19 with high recall and accuracy. Comput Biol Med 2021; 135:104608. [PMID: 34247135 PMCID: PMC8241584 DOI: 10.1016/j.compbiomed.2021.104608] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Revised: 06/04/2021] [Accepted: 06/22/2021] [Indexed: 12/24/2022]
Abstract
The main challenges for the automatic detection of the coronavirus disease (COVID-19) from computed tomography (CT) scans of an individual are: a lack of large datasets, ambiguity in the characteristics of COVID-19 and detection techniques having low sensitivity (or recall). Hence, developing diagnostic techniques with high recall and automatic feature extraction using the available data is crucial for controlling the spread of COVID-19. This paper proposes a novel stacked ensemble capable of detecting COVID-19 from a patient's chest CT scans with high recall and accuracy. A systematic approach for designing a stacked ensemble from pre-trained computer vision models using transfer learning (TL) is presented, along with a novel diversity measure that yields a stacked ensemble with high recall and accuracy. The stacked ensemble proposed in this paper considers four pre-trained computer vision models: the visual geometry group (VGG)-19, residual network (ResNet)-101, densely connected convolutional network (DenseNet)-169 and wide residual network (WideResNet)-50-2. The proposed model was trained and evaluated with three different chest CT scan datasets. As recall is more important than precision here, the trade-offs between recall and precision were explored in the context of COVID-19, and the optimal recommended threshold values were found for each dataset.
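The stacking idea can be illustrated with a toy numpy sketch (illustrative only: the "base learners" below are simple stand-ins for the pre-trained CNNs, the meta-learner is a hand-rolled logistic regression rather than the authors' design, and the Gaussian toy data replaces real CT features):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 5-feature "scans": the positive class has a higher mean intensity.
X = np.concatenate([rng.normal(0.0, 1.0, (100, 5)),
                    rng.normal(1.5, 1.0, (100, 5))])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Two simple base learners standing in for pre-trained CNNs:
# each maps a scan to a probability-like score.
def base_mean(x): return 1 / (1 + np.exp(-x.mean(axis=1)))
def base_max(x):  return 1 / (1 + np.exp(-(x.max(axis=1) - 1.5)))

# Stack the base-learner outputs as meta-features.
Z = np.column_stack([base_mean(X), base_max(X)])

# Meta-learner: logistic regression fitted by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(Z @ w + b)))
    g = p - y
    w -= 0.1 * Z.T @ g / len(y)
    b -= 0.1 * g.mean()

pred = (1 / (1 + np.exp(-(Z @ w + b)))) > 0.5
recall = pred[y == 1].mean()   # sensitivity on the positive class
```

Lowering the 0.5 decision threshold trades precision for recall, which is the threshold tuning the paper performs per dataset.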
Affiliation(s)
- Ebenezer Jangam
- Department of Information Technology, VRSiddhartha Engineering College, Vijayawada, Andhra Pradesh, India; Department of Computer Science and Engineering, Indian Institute of Technology (ISM), Dhanbad, Jharkhand, India