1. Ma D, Li C, Du T, Qiao L, Tang D, Ma Z, Shi L, Lu G, Meng Q, Chen Z, Grzegorzek M, Sun H. PHE-SICH-CT-IDS: A benchmark CT image dataset for evaluation semantic segmentation, object detection and radiomic feature extraction of perihematomal edema in spontaneous intracerebral hemorrhage. Comput Biol Med 2024; 173:108342. [PMID: 38522249] [DOI: 10.1016/j.compbiomed.2024.108342]
Abstract
BACKGROUND AND OBJECTIVE Intracerebral hemorrhage is one of the diseases with the highest mortality and poorest prognosis worldwide. Spontaneous intracerebral hemorrhage (SICH) typically presents acutely; prompt radiological examination is crucial for diagnosing, localizing, and quantifying the hemorrhage. Early detection and accurate segmentation of perihematomal edema (PHE) are critical for guiding appropriate clinical intervention and improving patient prognosis. However, the development and assessment of computer-aided diagnostic methods for PHE segmentation and detection are hampered by the scarcity of publicly accessible brain CT image datasets. METHODS This study establishes a publicly available CT dataset, PHE-SICH-CT-IDS, for perihematomal edema in spontaneous intracerebral hemorrhage. The dataset comprises 120 brain CT scans and 7,022 CT images, along with the corresponding medical information of the patients. To demonstrate its effectiveness, classical algorithms for semantic segmentation, object detection, and radiomic feature extraction are evaluated. The experimental results confirm the suitability of PHE-SICH-CT-IDS for assessing the performance of segmentation, detection, and radiomic feature extraction methods. RESULTS Numerous experiments with classical machine learning and deep learning methods demonstrate how different segmentation and detection methods perform on PHE-SICH-CT-IDS. The highest precision achieved in semantic segmentation is 76.31%, while object detection attains a maximum precision of 97.62%. The experiments on radiomic feature extraction and analysis confirm that PHE-SICH-CT-IDS is suitable for evaluating image features and highlight the predictive value of these features for the prognosis of SICH patients.
CONCLUSION To the best of our knowledge, this is the first publicly available dataset for PHE in SICH, comprising various data formats suitable for applications across diverse medical scenarios. We believe PHE-SICH-CT-IDS will attract researchers to explore novel algorithms, providing valuable support for clinicians and patients in the clinical setting. PHE-SICH-CT-IDS is freely published for non-commercial purposes at https://figshare.com/articles/dataset/PHE-SICH-CT-IDS/23957937.
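Segmentation benchmarks like this one are typically scored with overlap measures; the abstract reports precision, but the Dice coefficient is the most common companion metric for mask-vs-mask agreement. A minimal sketch (the `dice` helper and the toy masks are illustrative, not from the paper):

```python
# Dice similarity coefficient for binary segmentation masks, a standard
# overlap score for evaluating PHE (or any) segmentation output.
def dice(pred, truth):
    """pred, truth: same-length flat lists of 0/1 pixel labels."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:          # both masks empty: define as perfect agreement
        return 1.0
    return 2.0 * inter / total

pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 0, 0, 0]
print(dice(pred, truth))    # 2*2 / (3+2) = 0.8
```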
Affiliation(s)
- Deguo Ma
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Tianming Du
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Lin Qiao
- Shengjing Hospital, China Medical University, Shenyang, China
- Dechao Tang
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Zhiyu Ma
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Liyu Shi
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Guotao Lu
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Qingtao Meng
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Zhihao Chen
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Marcin Grzegorzek
- Institute of Medical Informatics, University of Luebeck, Luebeck, Germany
- Hongzan Sun
- Shengjing Hospital, China Medical University, Shenyang, China
2. Ozaltin O, Yeniay O, Subasi A. OzNet: A New Deep Learning Approach for Automated Classification of COVID-19 Computed Tomography Scans. Big Data 2023; 11:420-436. [PMID: 36927081] [DOI: 10.1089/big.2022.0042]
Abstract
Coronavirus disease 2019 (COVID-19) is spreading rapidly around the world. Automated classification of computed tomography (CT) scans can therefore alleviate the workload of experts, which increased considerably during the pandemic. Convolutional neural network (CNN) architectures are successful for the classification of medical images. In this study, we developed a new deep CNN architecture called OzNet and compared it with pretrained architectures, namely AlexNet, DenseNet201, GoogleNet, NASNetMobile, ResNet-50, SqueezeNet, and VGG-16. We also compared the classification performance of three preprocessing methods against raw CT scans: discrete wavelet transform (DWT), intensity adjustment, and grayscale-to-RGB color conversion. The architectures' performance increased when the DWT preprocessing method was used rather than the raw data set, and the results of the CNN algorithms on COVID-19 CT scans processed with the DWT are extremely promising. The proposed DWT-OzNet achieved a high classification performance of more than 98.8% for each calculated metric.
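The abstract does not specify the wavelet family or decomposition depth used; as a minimal illustration of what a DWT preprocessing step computes, here is a single-level 2D Haar transform on a toy even-sized image (all names and values are illustrative, not OzNet's actual pipeline):

```python
# Single-level 2D Haar DWT on an even-sized grayscale image (list of lists).
# Produces the four subbands a DWT preprocessing step would feed to a CNN.
def haar_dwt2(img):
    LL, LH, HL, HH = [], [], [], []
    for i in range(0, len(img), 2):
        ll, lh, hl, hh = [], [], [], []
        for j in range(0, len(img[0]), 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            ll.append((a + b + c + d) / 2)  # approximation (low-low)
            lh.append((a - b + c - d) / 2)  # horizontal detail
            hl.append((a + b - c - d) / 2)  # vertical detail
            hh.append((a - b - c + d) / 2)  # diagonal detail
        LL.append(ll)
        LH.append(lh)
        HL.append(hl)
        HH.append(hh)
    return LL, LH, HL, HH

LL, LH, HL, HH = haar_dwt2([[1, 2], [3, 4]])
print(LL, LH, HL, HH)  # [[5.0]] [[-1.0]] [[-2.0]] [[0.0]]
```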
Affiliation(s)
- Oznur Ozaltin
- Department of Statistics, Institute of Science, Hacettepe University, Ankara, Turkey
- Ozgur Yeniay
- Department of Statistics, Institute of Science, Hacettepe University, Ankara, Turkey
- Abdulhamit Subasi
- Institute of Biomedicine, Faculty of Medicine, University of Turku, Turku, Finland
- Department of Computer Science, College of Engineering, Effat University, Jeddah, Saudi Arabia
3. Chang TY, Huang CK, Weng CH, Chen JY. Feature-based deep neural network approach for predicting mortality risk in patients with COVID-19. Engineering Applications of Artificial Intelligence 2023; 124:106644. [PMID: 37366394] [PMCID: PMC10277846] [DOI: 10.1016/j.engappai.2023.106644]
Abstract
In this study, we integrate a deep neural network (DNN) with hybrid approaches (feature selection and instance clustering) to build models for predicting mortality risk in patients with COVID-19. We use cross-validation to evaluate these prediction models, including the feature-based DNN, cluster-based DNN, plain DNN, and a neural network (multi-layer perceptron). A COVID-19 dataset with 12,020 instances and 10-fold cross-validation are used to evaluate the prediction models. The experimental results show that the proposed feature-based DNN model, with Recall of 98.62%, F1-score of 91.99%, Accuracy of 91.41%, and a False Negative Rate of 1.38%, outperforms the original prediction model (neural network). Furthermore, the proposed approach uses only the top 5 features to build a DNN prediction model whose performance matches that of the model built with all 57 features. The novelty of this study is the integration of feature selection, instance clustering, and DNN techniques to improve prediction performance; the resulting model, built with far fewer features, performs much better than the original prediction models on many metrics while maintaining high prediction performance.
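The recall, F1-score, accuracy, and false-negative-rate figures above all derive from confusion-matrix counts. A minimal sketch with toy counts (the numbers below are illustrative, not the paper's data):

```python
# Standard classification metrics computed from confusion-matrix counts:
# true positives (tp), false positives (fp), true negatives (tn),
# false negatives (fn).
def metrics(tp, fp, tn, fn):
    recall    = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1        = 2 * precision * recall / (precision + recall)
    accuracy  = (tp + tn) / (tp + fp + tn + fn)
    fnr       = fn / (fn + tp)          # false negative rate = 1 - recall
    return recall, precision, f1, accuracy, fnr

r, p, f1, acc, fnr = metrics(tp=90, fp=10, tn=80, fn=20)
print(round(r, 3), round(f1, 3), round(acc, 3), round(fnr, 3))  # 0.818 0.857 0.85 0.182
```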
Affiliation(s)
- Thing-Yuan Chang
- Department of Information Management, National Chin-Yi University of Technology, Taichung 41130, Taiwan, Republic of China
- Cheng-Kui Huang
- Department of Business Administration, National Chung Cheng University, 168, University Rd., Min-Hsiung, Chia-Yi, Taiwan, Republic of China
- Cheng-Hsiung Weng
- Department of Information Management, National Chin-Yi University of Technology, Taichung 41130, Taiwan, Republic of China
- Department of Information Management, National Changhua University of Education, Changhua City, Changhua County, Taiwan, Republic of China
- Jing-Yuan Chen
- Department of Information Management, National Chin-Yi University of Technology, Taichung 41130, Taiwan, Republic of China
4. Lee MH, Shomanov A, Kudaibergenova M, Viderman D. Deep Learning Methods for Interpretation of Pulmonary CT and X-ray Images in Patients with COVID-19-Related Lung Involvement: A Systematic Review. J Clin Med 2023; 12:3446. [PMID: 37240552] [DOI: 10.3390/jcm12103446]
Abstract
SARS-CoV-2 is a novel virus that has been affecting the global population by spreading rapidly and causing severe complications, which require prompt and elaborate emergency treatment. Automatic tools to diagnose COVID-19 could be an important and useful aid, and radiologists and clinicians could rely on interpretable AI technologies for the diagnosis and monitoring of COVID-19 patients. This paper provides a comprehensive analysis of state-of-the-art deep learning techniques for COVID-19 classification. Previous studies are methodically evaluated, and a summary of the proposed convolutional neural network (CNN)-based classification approaches is presented. The reviewed papers present a variety of CNN models and architectures developed to provide accurate and quick automatic tools for diagnosing COVID-19 from CT scan or X-ray images. In this systematic review, we focus on the critical components of the deep learning approach: network architecture, model complexity, parameter optimization, explainability, and dataset/code availability. The literature search yielded a large number of studies over the course of the virus's spread, and we summarize their efforts. State-of-the-art CNN architectures, with their strengths and weaknesses, are discussed with respect to diverse technical and clinical evaluation metrics so that current AI studies can be safely implemented in medical practice.
Affiliation(s)
- Min-Ho Lee
- School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
- Adai Shomanov
- School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
- Madina Kudaibergenova
- School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
- Dmitriy Viderman
- School of Medicine, Nazarbayev University, 5/1 Kerey and Zhanibek Khandar Str., Astana 010000, Kazakhstan
5. Rabie AH, Mohamed AM, Abo-Elsoud MA, Saleh AI. A new Covid-19 diagnosis strategy using a modified KNN classifier. Neural Comput Appl 2023; 35:1-25. [PMID: 37362572] [PMCID: PMC10153048] [DOI: 10.1007/s00521-023-08588-9]
Abstract
Covid-19 is a very dangerous disease that has spread more rapidly than any previous disease, and it has been a crisis threatening the world since its first appearance in December 2019. Because no vaccine has yet proved sufficiently effective, rapid and accurate diagnosis of this disease is essential so that medical staff can identify infected cases and isolate them to prevent further loss of life. In this paper, a Covid-19 diagnostic strategy (CDS) is introduced as a new classification strategy consisting of two basic phases: a feature selection phase (FSP) and a diagnosis phase (DP). In the first phase, FSP, the best set of features in laboratory test findings for Covid-19 patients is selected using enhanced gray wolf optimization (EGWO). EGWO combines both wrapper and filter selection techniques and accordingly includes two stages: a filter stage (FS) and a wrapper stage (WS). FS uses several different filter methods, while WS uses a wrapper method called binary gray wolf optimization (BGWO). The second phase, DP, aims to give a fast and more accurate diagnosis using a hybrid diagnosis methodology (HDM) based on the features selected in FSP. The HDM consists of two phases: a weighting patient phase (WP2) and a diagnostic patient phase (DP2). WP2 calculates the degree to which each patient in the testing dataset belongs to each class category, using naïve Bayes (NB) as a weighting method. K-nearest neighbor (KNN) is then used in DP2, with the weighted patients in the testing dataset serving as a new training dataset, to give rapid and more accurate detection. As the experimental results show, the suggested CDS outperforms other strategies, with accuracy, precision, recall (or sensitivity), and F-measure of 99%, 88%, 90%, and 91%, respectively.
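The WP2/DP2 pipeline can be sketched on toy one-feature data: naïve Bayes produces a class-belonging weight for each test patient, and KNN then votes over the weighted patients. Everything below (the data, helper names, and choice of k) is illustrative, not the paper's implementation:

```python
import math

# Toy sketch of the HDM's two phases: Gaussian naive Bayes gives each test
# patient a class-1 belonging weight (WP2), then KNN votes over the
# weighted patients (DP2). Single numeric feature for brevity.
def gauss_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit_stats(train):
    """Per-class mean/variance and priors for a single feature."""
    stats, priors = {}, {}
    for c in (0, 1):
        vals = [x for x, y in train if y == c]
        m = sum(vals) / len(vals)
        stats[c] = (m, sum((v - m) ** 2 for v in vals) / len(vals))
        priors[c] = len(vals) / len(train)
    return stats, priors

def nb_weight(x, stats, priors):
    """WP2: posterior degree of belonging to class 1."""
    like = {c: priors[c] * gauss_pdf(x, *stats[c]) for c in (0, 1)}
    return like[1] / (like[0] + like[1])

def knn_on_weights(w, weighted, k=3):
    """DP2: majority label of the k weighted patients nearest to weight w."""
    near = sorted(weighted, key=lambda p: abs(p[0] - w))[:k]
    return 1 if 2 * sum(lab for _, lab in near) > k else 0

train = [(1.0, 0), (1.2, 0), (0.9, 0), (3.0, 1), (3.2, 1), (2.9, 1)]
stats, priors = fit_stats(train)
test = [3.1, 1.1, 2.8, 1.3]
weights = [nb_weight(x, stats, priors) for x in test]
weighted = [(w, round(w)) for w in weights]    # provisional NB labels
print(knn_on_weights(weights[0], weighted))    # patient at 3.1 -> class 1
```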
Affiliation(s)
- Asmaa H. Rabie
- Computers and Control Department Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Alaa M. Mohamed
- Delta Higher Institute for Engineering and Technology, Talkha, Mansoura, Egypt
- M. A. Abo-Elsoud
- Electronics and Communication Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Ahmed I. Saleh
- Computers and Control Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
6. Sunnetci KM, Alkan A. Biphasic majority voting-based comparative COVID-19 diagnosis using chest X-ray images. Expert Systems with Applications 2023; 216:119430. [PMID: 36570382] [PMCID: PMC9767662] [DOI: 10.1016/j.eswa.2022.119430]
Abstract
The COVID-19 pandemic has been affecting the world since December 2019, and the number of infected is increasing rapidly. Chest X-ray images are clinical adjuncts that can be used in the diagnosis of COVID-19 disease. Because of the rapid worldwide spread of COVID-19 and the limited number of expert radiologists, the proposed method uses automatic rather than manual diagnosis. In the paper, COVID-19 Positive/Negative (2275 Positive, 4626 Negative) and Normal/Pneumonia (2313 Normal, 2313 Pneumonia) cases are diagnosed using chest X-ray images, with 80% of the images used for training and 20% for validation. In the proposed method, six different classifiers are trained on chest X-ray images, and the five most successful are used in both phases. In Phase-1 and Phase-2, image features are extracted using the Bag of Features method for Cosine K-Nearest Neighbor (KNN), Linear Discriminant, Logistic Regression, Bagged Trees Ensemble, and Medium Gaussian Support Vector Machine (SVM), excluding SqueezeNet deep learning (K = 2000 and K = 1500 for Phase-1 and Phase-2, respectively). In both phases, the five most successful classifiers are determined and images are classified with the help of the Majority Voting (Mathematical Evaluation) method. The application of the proposed method is designed for users to diagnose COVID-19 Positive, Normal, and Pneumonia. The results show that accuracy values obtained by the Majority Voting (Mathematical Evaluation) method for Phase-1 and Phase-2 are 99.86% and 99.28%, respectively, indicating that the accuracy of the whole system is 99.63%. The classification performance metrics Specificity (%), Precision (%), Recall (%), F1 Score (%), Area Under Curve (AUC), and Matthews Correlation Coefficient (MCC) are 99.98, 99.83, 99.07, 99.51, 0.9974, and 0.9855 for Phase-1, and 99.73, 99.69, 98.63, 99.23, 0.9928, and 0.9518 for Phase-2. For the whole system, Specificity (%), Precision (%), Recall (%), F1 Score (%), AUC, and MCC are 99.88, 99.78, 98.90, 99.40, 0.9956, and 0.9720, respectively. Compared with studies in the literature, the proposed model performs better than its counterparts, achieving the best performance metrics reported for the dataset used, and the biphasic majority voting technique makes it more reliable. On the other hand, although there are tens of thousands of studies on this subject, the usability of those models is debatable since most lack graphical user interface applications. In artificial intelligence technologies, the usability of the developed models matters as much as their performance, because the models are generally used by people who are less knowledgeable about artificial intelligence.
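The majority-voting step itself is simple to sketch. The classifier names below follow the abstract, while the predicted labels are illustrative:

```python
from collections import Counter

# Majority voting across the five best classifiers' predictions for one image.
def majority_vote(predictions):
    return Counter(predictions).most_common(1)[0][0]

phase1 = {                      # Phase-1: COVID-19 Positive vs Negative
    "Cosine KNN": "Positive",
    "Linear Discriminant": "Positive",
    "Logistic Regression": "Negative",
    "Bagged Trees": "Positive",
    "Medium Gaussian SVM": "Positive",
}
print(majority_vote(list(phase1.values())))  # Positive
```

With an odd number of voters over two classes, a strict majority always exists, which is one reason five classifiers per phase is a convenient choice.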
Affiliation(s)
- Kubilay Muhammed Sunnetci
- Department of Electrical and Electronics Engineering, Osmaniye Korkut Ata University, Osmaniye, Turkey
- Department of Electrical and Electronics Engineering, Kahramanmaraş Sütçü İmam University, Kahramanmaraş, Turkey
- Ahmet Alkan
- Department of Electrical and Electronics Engineering, Kahramanmaraş Sütçü İmam University, Kahramanmaraş, Turkey
7. Detection and classification of COVID-19 by using faster R-CNN and mask R-CNN on CT images. Neural Comput Appl 2023; 35:13597-13611. [PMCID: PMC10014413] [DOI: 10.1007/s00521-023-08450-y]
Abstract
The coronavirus (COVID-19) pandemic has a devastating impact on people's daily lives and healthcare systems. The rapid spread of this virus should be stopped by early detection of infected patients through efficient screening. Artificial intelligence techniques are used for accurate disease detection in computed tomography (CT) images. This article aims to develop a process that can accurately diagnose COVID-19 using deep learning techniques on CT images. The presented method begins with the creation of an original dataset of 4,000 CT images collected from Yozgat Bozok University. The faster R-CNN and mask R-CNN methods are then trained and tested on this dataset to categorize patients with COVID-19 and pneumonia infections. In this study, results are compared using a VGG-16 backbone for the faster R-CNN model and ResNet-50 and ResNet-101 backbones for mask R-CNN. The faster R-CNN model has an accuracy rate of 93.86%, with a ROI (region of interest) classification loss of 0.061 per ROI. At the conclusion of the final training, the mask R-CNN model achieves mAP (mean average precision) values of 97.72% and 95.65% for ResNet-50 and ResNet-101, respectively. Results for five folds are obtained by applying cross-validation. With training, our model performs better than the industry-standard baselines and can help with automated COVID-19 severity quantification in CT images.
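The mAP values above rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes, the standard overlap test in detectors such as faster R-CNN and mask R-CNN. A minimal IoU sketch on toy boxes (not the study's data):

```python
# Intersection-over-union for axis-aligned boxes given as (x1, y1, x2, y2).
# A prediction typically counts as a true positive when IoU >= 0.5.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7, about 0.1428
```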
8. Classification of Covid-19 misinformation on social media based on neuro-fuzzy and neural network: A systematic review. Neural Comput Appl 2023; 35:699-717. [PMID: 36159189] [PMCID: PMC9488884] [DOI: 10.1007/s00521-022-07797-y]
Abstract
The spread of Covid-19 misinformation on social media had significant real-world consequences and has raised fears among internet users since the pandemic began. Researchers from all over the world have shown an interest in developing deception classification methods to reduce the issue. Despite numerous obstacles that can thwart these efforts, the researchers aim to create an automated, stable, accurate, and effective mechanism for misinformation classification. In this paper, a systematic literature review is conducted to analyse the state of the art in the classification of misinformation on social media. IEEE Xplore, SpringerLink, ScienceDirect, Scopus, Taylor & Francis, Wiley, and Google Scholar are used as databases to find relevant papers from 2018 to 2021. Firstly, the study reviews the history of the issues surrounding Covid-19 misinformation and its effects on social media users. Secondly, various neuro-fuzzy and neural network classification methods are identified. Thirdly, the strengths, limitations, and challenges of neuro-fuzzy and neural network approaches are examined for misinformation classification, especially in the case of Covid-19. Finally, the hybrid neuro-fuzzy and neural network method that is most efficient in terms of accuracy is identified. The study concludes by suggesting a hybrid ANFIS-DNN model for improving Covid-19 misinformation classification. The results of this study can serve as a roadmap for future research on misinformation classification.
9. Sun W, Pang Y, Zhang G. CCT: Lightweight compact convolutional transformer for lung disease CT image classification. Front Physiol 2022; 13:1066999. [DOI: 10.3389/fphys.2022.1066999]
Abstract
Computed tomography (CT) imaging results are an important criterion for the diagnosis of lung disease, as CT images can clearly show the characteristics of lung lesions, and early, accurate detection of lung diseases helps clinicians improve patient care. In this study, we therefore used a lightweight compact convolutional transformer (CCT) to build a prediction model for lung disease classification using chest CT images. We added a position offset term and changed the attention mechanism of the transformer encoder to an axial attention module, which improved the model's classification performance along both the height and width dimensions. The model effectively classifies COVID-19, community-acquired pneumonia, and normal conditions on the CC-CCII dataset, outperforming comparable models on the test set with an accuracy of 98.5% and a sensitivity of 98.6%. The results show that our method achieves a larger receptive field on CT images, which benefits the classification of CT images, and thus the method can provide adequate assistance to clinicians.
10. Aslan N, Ozmen Koca G, Kobat MA, Dogan S. Multi-classification deep CNN model for diagnosing COVID-19 using iterative neighborhood component analysis and iterative ReliefF feature selection techniques with X-ray images. Chemometrics and Intelligent Laboratory Systems 2022; 224:104539. [PMID: 35368832] [PMCID: PMC8964480] [DOI: 10.1016/j.chemolab.2022.104539]
Abstract
BACKGROUND The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) disease has seriously affected worldwide health and remains an important concern as the number of infected patients and the death rate increase rapidly. Early diagnosis is very important to hinder the spread of the coronavirus. Therefore, this article is intended to help radiologists automatically detect COVID-19 early on X-ray images. Iterative Neighborhood Component Analysis (INCA) and Iterative ReliefF (IRF) feature selection methods are applied to improve the performance criteria of trained deep Convolutional Neural Networks (CNN). MATERIALS AND METHODS The COVID-19 dataset consists of a total of 15153 X-ray images from 4961 patient cases. The work includes thirteen different deep CNN architectures. Normalized lung X-ray image data for each deep CNN model are analyzed to classify disease status into the categories Normal, Viral Pneumonia, and COVID-19. The INCA and IRF feature selection methods are applied to the trained CNNs to improve the analysis and forecasting results and enable faster, more accurate decisions. RESULTS Thirteen different deep CNN experiments and evaluations are successfully performed using 80% of the lung X-ray images for training and 20% for testing. The highest predictive values are obtained using INCA feature selection with the VGG16 network: mean accuracy, sensitivity, F-score, precision, MCC, Dice, Jaccard, and specificity of 99.14%, 97.98%, 99.58%, 98.80%, 97.81%, 98.83%, 97.68%, and 99.56%, respectively. This study demonstrates the useful application of deep CNN models to classify COVID-19 in X-ray images.
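The core Relief update behind ReliefF can be sketched on toy data: a feature gains weight when it separates each sample from its nearest miss (other class) more than from its nearest hit (same class). The iterative variant the paper uses adds re-weighting rounds not shown here; all data below are illustrative:

```python
# Basic Relief feature weighting: for each sample, compare its per-feature
# distance to the nearest same-class sample (hit) and the nearest
# other-class sample (miss). Informative features accumulate weight.
def relief(data):
    n_feat = len(data[0][0])
    w = [0.0] * n_feat
    for i, (x, y) in enumerate(data):
        others = [p for j, p in enumerate(data) if j != i]
        dist = lambda q: sum((a - b) ** 2 for a, b in zip(x, q[0]))
        hit = min((p for p in others if p[1] == y), key=dist)[0]
        miss = min((p for p in others if p[1] != y), key=dist)[0]
        for f in range(n_feat):
            w[f] += abs(x[f] - miss[f]) - abs(x[f] - hit[f])
    return w

# Feature 0 separates the classes; feature 1 is constant noise.
data = [([0.0, 0.5], 0), ([0.1, 0.5], 0), ([1.0, 0.5], 1), ([0.9, 0.5], 1)]
print(relief(data))  # feature 0 accumulates weight, feature 1 stays at 0.0
```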
Affiliation(s)
- Narin Aslan
- Department of Mechatronics Engineering, Faculty of Technology, Firat University, Elazig, Turkey
- Gonca Ozmen Koca
- Department of Mechatronics Engineering, Faculty of Technology, Firat University, Elazig, Turkey
- Mehmet Ali Kobat
- Department of Cardiology, Firat University Hospital, Elazig, Turkey
- Sengul Dogan
- Department of Digital Forensics Engineering, Faculty of Technology, Firat University, Elazig, Turkey
11. Hassan F, Albahli S, Javed A, Irtaza A. A Robust Framework for Epidemic Analysis, Prediction and Detection of COVID-19. Front Public Health 2022; 10:805086. [PMID: 35602122] [PMCID: PMC9120631] [DOI: 10.3389/fpubh.2022.805086]
Abstract
Covid-19 has become a pandemic that affects many individuals daily worldwide and has caused widespread disruption in numerous countries, namely the US, Italy, India, and Saudi Arabia. Timely detection of this infectious disease is mandatory to prevent its quick spread globally and locally, and timely forecasts of COVID-19 cases are significant for governments to cope well with disease control. The common symptoms of COVID are fever and dry cough, similar to the normal flu. The disease is devastating and spreads quickly, affecting individuals of all ages, particularly aged people and those with feeble immune systems. The standard detection method, the real-time polymerase chain reaction (RT-PCR) test, has shortcomings: it takes a long time and generates many false positives. Consequently, a robust framework is needed for the detection and estimation of COVID cases globally. To achieve these goals, we propose a novel technique to analyze, predict, and detect COVID-19 infection. We made dependable estimates of significant pandemic parameters and predicted infection and potential washout time frames for numerous countries, using a publicly available dataset compiled by the Johns Hopkins Center covering 21 April 2020 to 27 June 2020. We employed a simple calculation for fast estimates of the COVID model and estimated the parameters of the Gaussian curve using least-squares parameter curve fitting for numerous countries in distinct areas. Forecasts depend on the potential outcomes of a Gaussian time evolution, with the central limit theorem of the data justifying the Covid prediction. For the Gaussian distribution, the parameters, namely the peak time and width, are adjusted using a statistical χ2 fit for the aim of doubling times after 21 April 2020. Moreover, for the detection of COVID-19, we propose a novel technique employing two features, the Histogram of Oriented Gradients and the Scale-Invariant Feature Transform, and design a CNN-based architecture named COVIDDetectorNet for classification. We fed the extracted features into the proposed COVIDDetectorNet to detect COVID-19, viral pneumonia, and other lung infections, obtaining accuracies of 96.51%, 92.62%, and 86.53% for two, three, and four classes, respectively. Experimental outcomes illustrate that our method is reliable for the forecasting and detection of COVID-19 disease.
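The Gaussian-curve fitting step can be sketched as follows. This is a plain least-squares fit of a parabola to log-counts rather than the paper's χ2 procedure, and the data are synthetic; it simply shows how the peak time μ and width σ fall out of the fitted coefficients:

```python
import math

# Recover a Gaussian's peak time (mu) and width (sigma) from noise-free
# synthetic counts by fitting ln y = a*x^2 + b*x + c with least squares
# (normal equations solved by Cramer's rule), then reading off
# mu = -b/(2a) and sigma = sqrt(-1/(2a)).
def fit_gaussian(xs, ys):
    zs = [math.log(y) for y in ys]
    n = len(xs)
    Sx = [sum(x ** k for x in xs) for k in range(5)]
    Sz = [sum(z * x ** k for x, z in zip(xs, zs)) for k in range(3)]
    M = [[Sx[4], Sx[3], Sx[2]], [Sx[3], Sx[2], Sx[1]], [Sx[2], Sx[1], n]]
    r = [Sz[2], Sz[1], Sz[0]]
    det = lambda m: (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                     - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                     + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det(M)
    col = lambda k: det([[r[i] if j == k else M[i][j] for j in range(3)]
                         for i in range(3)])
    a, b = col(0) / D, col(1) / D
    return -b / (2 * a), math.sqrt(-1 / (2 * a))  # (peak time, width)

xs = list(range(11))
ys = [100 * math.exp(-((x - 5) ** 2) / (2 * 2 ** 2)) for x in xs]
mu, sigma = fit_gaussian(xs, ys)
print(round(mu, 6), round(sigma, 6))  # recovers mu = 5, sigma = 2
```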
Affiliation(s)
- Farman Hassan
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
- Saleh Albahli
- Department of Information Technology, College of Computer, Qassim University, Buraydah, Saudi Arabia
- *Correspondence: Saleh Albahli
- Ali Javed
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
- Department of Computer Science and Engineering, Oakland University, Detroit, MI, United States
- Aun Irtaza
- Department of Computer and Electrical Engineering, University of Michigan, Dearborn, MI, United States
12. Muhammad U, Hoque MZ, Oussalah M, Keskinarkaus A, Seppänen T, Sarder P. SAM: Self-augmentation mechanism for COVID-19 detection using chest X-ray images. Knowl Based Syst 2022; 241:108207. [PMID: 35068707] [PMCID: PMC8762871] [DOI: 10.1016/j.knosys.2022.108207]
Abstract
COVID-19 is a rapidly spreading viral disease and has affected over 100 countries worldwide. The numbers of casualties and cases of infection have escalated particularly in countries with weakened healthcare systems. Currently, reverse transcription-polymerase chain reaction (RT-PCR) is the test of choice for diagnosing COVID-19. However, current evidence suggests that COVID-19-infected patients mostly develop a lung infection after coming into contact with the virus, so chest X-ray (i.e., radiography) and chest CT can be a surrogate in some countries where PCR is not readily available. This has prompted the scientific community to detect COVID-19 infection from X-ray images, and recently proposed machine learning methods offer great promise for fast and accurate detection. Deep learning with convolutional neural networks (CNNs) has been successfully applied to radiological imaging to improve the accuracy of diagnosis. However, performance remains limited due to the lack of representative X-ray images in public benchmark datasets. To alleviate this issue, we propose a self-augmentation mechanism that performs data augmentation in the feature space rather than in the data space using reconstruction independent component analysis (RICA). Specifically, a unified architecture is proposed that contains a deep convolutional neural network (CNN), a feature augmentation mechanism, and a bidirectional LSTM (BiLSTM). The CNN provides high-level features extracted at the pooling layer, from which the augmentation mechanism chooses the most relevant features and generates low-dimensional augmented features. Finally, the BiLSTM classifies the processed sequential information. We conducted experiments on three publicly available databases to show that the proposed approach achieves state-of-the-art results, with accuracies of 97%, 84% and 98%.
Explainability analysis has been carried out using feature visualization through PCA projection and t-SNE plots.
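The feature-space augmentation idea can be illustrated with a minimal numpy sketch: pooled CNN features are mapped to a low-dimensional space and jittered there, yielding extra training samples without touching the images. The paper learns this mapping with reconstruction ICA and classifies with a BiLSTM; here a fixed random projection and Gaussian jitter stand in for both the learned mapping and the rest of the pipeline, and every dimension is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_space_augment(feats, out_dim=64, n_aug=3, noise=0.05):
    """Project pooled CNN features to a low-dimensional space and jitter
    them there, producing augmented samples in feature space rather than
    image space. A fixed random projection stands in for the RICA-learned
    mapping; out_dim, n_aug and noise are illustrative choices."""
    w = rng.standard_normal((feats.shape[1], out_dim)) / np.sqrt(feats.shape[1])
    z = feats @ w                                    # low-dimensional embedding
    aug = [z + noise * rng.standard_normal(z.shape) for _ in range(n_aug)]
    return np.vstack([z] + aug)                      # originals + augmented copies

pooled = rng.standard_normal((10, 512))  # e.g. features from a CNN pooling layer
aug = feature_space_augment(pooled)
print(aug.shape)  # (40, 64)
```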
Affiliation(s)
- Usman Muhammad
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
- Md Ziaul Hoque
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
- Mourad Oussalah
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland; Medical Imaging, Physics, and Technology (MIPT), Faculty of Medicine, University of Oulu, Finland
- Anja Keskinarkaus
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
- Tapio Seppänen
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
- Pinaki Sarder
- Department of Pathology and Anatomical Sciences, University at Buffalo, USA
13
Subramanian N, Elharrouss O, Al-Maadeed S, Chowdhury M. A review of deep learning-based detection methods for COVID-19. Comput Biol Med 2022; 143:105233. [PMID: 35180499 PMCID: PMC8798789 DOI: 10.1016/j.compbiomed.2022.105233] [Citation(s) in RCA: 42] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2021] [Revised: 01/10/2022] [Accepted: 01/10/2022] [Indexed: 12/16/2022]
Abstract
COVID-19 is a fast-spreading pandemic, and early detection is crucial for stopping the spread of infection. Lung images are used in the detection of coronavirus infection: chest X-ray (CXR) and computed tomography (CT) images are available for the detection of COVID-19. Deep learning methods have proven efficient and better performing in many computer vision and medical imaging applications. With the rise of the COVID-19 pandemic, researchers are using deep learning methods to detect coronavirus infection in lung images. In this paper, the currently available deep learning methods used to detect coronavirus infection in lung images are surveyed. The available methodologies, the public datasets, the datasets used by each method, and the evaluation metrics are summarized to help future researchers. The evaluation metrics used by the methods are comprehensively compared.
Affiliation(s)
- Nandhini Subramanian
- Qatar University College of Engineering, Computer Science and Engineering, Qatar.
- Omar Elharrouss
- Qatar University College of Engineering, Computer Science and Engineering, Qatar
- Somaya Al-Maadeed
- Qatar University College of Engineering, Computer Science and Engineering, Qatar
- Muhammed Chowdhury
- Qatar University College of Engineering, Computer Science and Engineering, Qatar
14
Liu H, Cui G, Luo Y, Guo Y, Zhao L, Wang Y, Subasi A, Dogan S, Tuncer T. Artificial Intelligence-Based Breast Cancer Diagnosis Using Ultrasound Images and Grid-Based Deep Feature Generator. Int J Gen Med 2022; 15:2271-2282. [PMID: 35256855 PMCID: PMC8898057 DOI: 10.2147/ijgm.s347491] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2021] [Accepted: 01/11/2022] [Indexed: 01/30/2023] Open
Abstract
Purpose Breast cancer is a prominent cancer type with high mortality. Early detection of breast cancer could serve to improve clinical outcomes. Ultrasonography is a digital imaging technique used to differentiate benign and malignant tumors. Several artificial intelligence techniques have been suggested in the literature for breast cancer detection using breast ultrasonography (BUS). Nowadays, deep learning methods in particular have been applied to biomedical images to achieve high classification performance. Patients and Methods This work presents a new deep feature generation technique for breast cancer detection using BUS images. Sixteen widely known pre-trained CNN models are used in this framework as feature generators. In the feature generation phase, the input image is divided into rows and columns, and the deep feature generators (pre-trained models) are applied to each row and column; the method is therefore called a grid-based deep feature generator. The proposed generator calculates the error value of each deep feature generator and then selects the best three feature vectors as the final feature vector. In the feature selection phase, iterative neighborhood component analysis (INCA) chooses 980 features as the optimal number of features. Finally, these features are classified using a deep neural network (DNN). Results The developed grid-based deep feature generation image classification model reached 97.18% classification accuracy on the ultrasonic images for three classes, namely malignant, benign, and normal. Conclusion The findings clearly indicate that the proposed grid deep feature generator and INCA-based feature selection model successfully classified breast ultrasonic images.
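The grid idea itself is simple to sketch: slice the image into row and column strips and run a feature generator on each strip, then concatenate. Below, a toy summary-statistics extractor stands in for the 16 pre-trained CNNs, and the error-based generator selection and INCA steps are omitted; the strip count and extractor are assumptions for illustration.

```python
import numpy as np

def grid_features(img, extractor, n_grid=4):
    """Grid-based generation: run the extractor on n_grid horizontal and
    n_grid vertical strips of the image and concatenate the results."""
    h, w = img.shape
    parts = [extractor(img[i * h // n_grid:(i + 1) * h // n_grid, :])
             for i in range(n_grid)]                      # row strips
    parts += [extractor(img[:, j * w // n_grid:(j + 1) * w // n_grid])
              for j in range(n_grid)]                     # column strips
    return np.concatenate(parts)

# Toy extractor (mean/std/max) standing in for a pre-trained CNN.
toy = lambda patch: np.array([patch.mean(), patch.std(), patch.max()])
img = np.random.rand(64, 64)
print(grid_features(img, toy).shape)  # 8 strips x 3 stats -> (24,)
```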
Affiliation(s)
- Haixia Liu
- Department of Ultrasound, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Guozhong Cui
- Department of Surgical Oncology, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Yi Luo
- Medical Statistics Room, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Yajie Guo
- Department of Ultrasound, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Lianli Zhao
- Department of Internal Medicine Teaching and Research Group, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Yueheng Wang
- Department of Ultrasound, The Second Hospital of Hebei Medical University, Shijiazhuang, Hebei Province, 050000, People's Republic of China
- Abdulhamit Subasi
- Institute of Biomedicine, Faculty of Medicine, University of Turku, Turku, 20520, Finland; Department of Computer Science, College of Engineering, Effat University, Jeddah, 21478, Saudi Arabia
- Sengul Dogan
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, 23119, Turkey
- Turker Tuncer
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, 23119, Turkey
15
Dhere A, Sivaswamy J. COVID detection from Chest X-Ray Images using multi-scale attention. IEEE J Biomed Health Inform 2022; 26:1496-1505. [PMID: 35157603 DOI: 10.1109/jbhi.2022.3151171] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Deep learning based methods have shown great promise in achieving accurate automatic detection of Coronavirus Disease 2019 (COVID-19) from Chest X-Ray (CXR) images. However, incorporating explainability into these solutions remains relatively unexplored. We present a hierarchical classification approach for separating normal, non-COVID pneumonia (NCP) and COVID cases using CXR images, and demonstrate that the proposed method achieves clinically consistent explanations. We achieve this using a novel multi-scale attention architecture called Multi-scale Attention Residual Learning (MARL) and a new loss function based on conicity for training the proposed architecture. The proposed classification strategy has two stages. The first stage uses a model derived from DenseNet to separate pneumonia cases from normal cases, while the second stage uses the MARL architecture to discriminate between COVID and NCP cases. With five-fold cross validation, the proposed method achieves 93%, 96.28%, and 84.51% accuracy, respectively, over three public datasets for normal vs. NCP vs. COVID classification, which is competitive with state-of-the-art methods. We also provide explanations in the form of GradCAM attributions, which are well aligned with expert annotations. The attributions also clearly indicate that MARL deems the peripheral regions of the lungs to be more important for COVID cases, while central regions are seen as more important in NCP cases. This observation matches the criteria described by radiologists in the clinical literature, thereby attesting to the utility of the derived explanations.
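The two-stage decision logic can be sketched independently of the networks. Here `stage1` and `stage2` are arbitrary callables standing in for the DenseNet-derived and MARL models, and the toy intensity-threshold classifiers are purely illustrative.

```python
import numpy as np

def hierarchical_predict(img, stage1, stage2):
    """Two-stage pipeline: stage 1 separates normal from pneumonia, and
    only pneumonia cases reach stage 2, which separates COVID from
    non-COVID pneumonia (NCP)."""
    if stage1(img) == "normal":
        return "normal"
    return stage2(img)  # "covid" or "ncp"

# Toy stand-in classifiers keyed on pixel statistics (illustrative only;
# the paper's stages are a DenseNet-derived model and the MARL network).
s1 = lambda x: "normal" if x.mean() < 0.5 else "pneumonia"
s2 = lambda x: "covid" if x.max() > 0.9 else "ncp"

print(hierarchical_predict(np.full((4, 4), 0.2), s1, s2))  # normal
```

A design note: splitting the problem this way lets each stage specialize on one decision boundary, which is what enables the stage-specific attention maps described above.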
16
Feng Y, Yang X, Qiu D, Zhang H, Wei D, Liu J. PCXRNet: Pneumonia diagnosis from Chest X-Ray Images using Condense attention block and Multiconvolution attention block. IEEE J Biomed Health Inform 2022; 26:1484-1495. [PMID: 35120015 DOI: 10.1109/jbhi.2022.3148317] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Coronavirus disease 2019 (COVID-19) has become a global pandemic. Many recognition approaches based on convolutional neural networks have been proposed for COVID-19 chest X-ray images, but only a few of them make good use of the potential inter- and intra-relationships of feature maps. Considering this limitation, this paper proposes an attention-based convolutional neural network, called PCXRNet, for the diagnosis of pneumonia using chest X-ray images. To utilize the information from the channels of the feature maps, we added a novel condense attention module (CDSE) comprising two steps: a condensation step and a squeeze-excitation step. Unlike traditional channel attention modules, CDSE first downsamples the feature map channel by channel to condense the information, followed by the squeeze-excitation step, in which the channel weights are calculated. To make the model pay more attention to informative spatial parts of every feature map, we propose a multi-convolution spatial attention module (MCSA), which reduces the number of parameters and introduces more nonlinearity. The CDSE and MCSA complement each other in series to tackle the problem of redundancy in feature maps and provide useful information from and between feature maps. We used the ChestXRay2017 dataset to explore the internal structure of PCXRNet, and the proposed network was applied to COVID-19 diagnosis. Additional experiments were conducted on a tuberculosis dataset to verify the effectiveness of PCXRNet. The network achieves an accuracy of 94.619%, recall of 94.753%, precision of 95.286%, and F1-score of 94.996% on the COVID-19 dataset.
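A numpy sketch of the condense-then-excite idea (not the paper's exact CDSE): each feature map is first average-pooled (condensation), then a squeeze-excitation MLP turns the condensed maps into per-channel weights that rescale the input. The pooling factor and MLP weight shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cdse(fmap, w1, w2, pool=2):
    """Condense-attention sketch: average-pool each feature map (the
    condensation step), then squeeze to one value per channel and run an
    FC-ReLU-FC-sigmoid excitation to get channel weights."""
    c, h, w = fmap.shape
    cond = fmap.reshape(c, h // pool, pool, w // pool, pool).mean(axis=(2, 4))
    s = cond.mean(axis=(1, 2))                  # squeeze: one value per channel
    e = sigmoid(np.maximum(s @ w1, 0.0) @ w2)   # excitation MLP
    return fmap * e[:, None, None]              # reweight the original channels

fmap = rng.random((16, 8, 8))
out = cdse(fmap, rng.standard_normal((16, 4)), rng.standard_normal((4, 16)))
print(out.shape)  # (16, 8, 8)
```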
17
Islam MK, Habiba SU, Khan TA, Tasnim F. COV-RadNet: A Deep Convolutional Neural Network for Automatic Detection of COVID-19 from Chest X-Rays and CT Scans. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE UPDATE 2022; 2:100064. [PMID: 36039092 PMCID: PMC9404230 DOI: 10.1016/j.cmpbup.2022.100064] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/31/2021] [Revised: 08/19/2022] [Accepted: 08/23/2022] [Indexed: 05/07/2023]
Abstract
With the increasing severity of the COVID-19 pandemic, the world faces a critical fight to cope with the impacts on human health, education and the economy. The ongoing battle with the novel coronavirus places a high priority on diagnosing and providing rapid treatment to patients. The rapid growth of COVID-19 has broken the healthcare systems of affected countries, creating shortages of ICUs, test kits, ventilation support systems, etc. This paper aims at finding an automatic COVID-19 detection approach that will assist medical practitioners in diagnosing the disease quickly and effectively. A deep convolutional neural network, 'COV-RadNet', is proposed to detect COVID-positive, viral pneumonia, lung opacity and normal, healthy people by analyzing their chest radiographic (X-ray and CT scan) images. A data augmentation technique is applied to balance the 'COVID-19 Radiography Dataset' to make the classifier more robust to the classification task. We applied a transfer learning approach using four deep learning based models, VGG16, VGG19, ResNet152 and ResNeXt101, to detect COVID-19 from chest X-ray images. We achieved 97% classification accuracy using the proposed COV-RadNet model for COVID/viral pneumonia/lung opacity/normal, 99.5% accuracy for COVID/viral pneumonia/normal, and 99.72% accuracy for COVID vs. non-COVID. Using chest CT scan images, we found 99.25% accuracy in classifying between COVID and non-COVID classes. Among the pre-trained models, ResNeXt101 showed the highest accuracy of 98.5% for multiclass classification (COVID, viral pneumonia, lung opacity and normal).
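The transfer-learning setup, frozen pre-trained features plus a newly trained classification head, can be sketched without the heavyweight backbones. Here a fixed random projection stands in for the ImageNet-pre-trained VGG/ResNeXt features, and a least-squares linear head stands in for the fine-tuned classification layers; all shapes and the fitting method are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen "backbone": a fixed random projection standing in for pre-trained
# convolutional features (VGG16/ResNeXt in the paper); it is never updated.
w_backbone = rng.standard_normal((256, 64))
backbone = lambda x: np.maximum(x @ w_backbone, 0.0)

# New-task data: 100 samples, 4 classes (COVID/viral pneumonia/opacity/normal).
X = rng.standard_normal((100, 256))
y = rng.integers(0, 4, 100)

# Trainable head: fit one linear layer on the frozen features only -- the
# core of the transfer-learning recipe (least squares stands in for
# gradient-based fine-tuning of the classification head).
F = backbone(X)
Y = np.eye(4)[y]                                   # one-hot targets
w_head, *_ = np.linalg.lstsq(F, Y, rcond=None)
pred = (backbone(X) @ w_head).argmax(axis=1)
print(pred.shape)  # (100,)
```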
Affiliation(s)
- Md Khairul Islam
- Khulna University of Engineering & Technology, Khulna, 9203, Khulna, Bangladesh
- Sultana Umme Habiba
- Khulna University of Engineering & Technology, Khulna, 9203, Khulna, Bangladesh
- Tahsin Ahmed Khan
- Khulna University of Engineering & Technology, Khulna, 9203, Khulna, Bangladesh
- Farzana Tasnim
- International Islamic University Chittagong, Kumira, 4318, Chittagong, Bangladesh
18
Gudigar A, Raghavendra U, Nayak S, Ooi CP, Chan WY, Gangavarapu MR, Dharmik C, Samanth J, Kadri NA, Hasikin K, Barua PD, Chakraborty S, Ciaccio EJ, Acharya UR. Role of Artificial Intelligence in COVID-19 Detection. SENSORS (BASEL, SWITZERLAND) 2021; 21:8045. [PMID: 34884045 PMCID: PMC8659534 DOI: 10.3390/s21238045] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Revised: 11/26/2021] [Accepted: 11/26/2021] [Indexed: 12/15/2022]
Abstract
The global pandemic of coronavirus disease (COVID-19) has caused millions of deaths and affected the livelihood of many more people. Early and rapid detection of COVID-19 is a challenging task for the medical community, but it is also crucial in stopping the spread of the SARS-CoV-2 virus. Prior substantiation of artificial intelligence (AI) in various fields of science has encouraged researchers to further address this problem. Various medical imaging modalities including X-ray, computed tomography (CT) and ultrasound (US) using AI techniques have greatly helped to curb the COVID-19 outbreak by assisting with early diagnosis. We carried out a systematic review on state-of-the-art AI techniques applied with X-ray, CT, and US images to detect COVID-19. In this paper, we discuss approaches used by various authors and the significance of these research efforts, the potential challenges, and future trends related to the implementation of an AI system for disease detection during the COVID-19 pandemic.
Affiliation(s)
- Anjan Gudigar
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- U Raghavendra
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Sneha Nayak
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Chui Ping Ooi
- School of Science and Technology, Singapore University of Social Sciences, Singapore 599494, Singapore
- Wai Yee Chan
- Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, Kuala Lumpur 50603, Malaysia
- Mokshagna Rohit Gangavarapu
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Chinmay Dharmik
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Jyothi Samanth
- Department of Cardiovascular Technology, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal 576104, India
- Nahrizul Adib Kadri
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603, Malaysia
- Khairunnisa Hasikin
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603, Malaysia
- Prabal Datta Barua
- Cogninet Brain Team, Cogninet Australia, Sydney, NSW 2010, Australia
- School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Toowoomba, QLD 4350, Australia
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
- Subrata Chakraborty
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
- Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia
- Edward J. Ciaccio
- Department of Medicine, Columbia University Medical Center, New York, NY 10032, USA
- U. Rajendra Acharya
- School of Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung 41354, Taiwan
- International Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto 860-8555, Japan
19
Kaur P, Harnal S, Tiwari R, Alharithi FS, Almulihi AH, Noya ID, Goyal N. A Hybrid Convolutional Neural Network Model for Diagnosis of COVID-19 Using Chest X-ray Images. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:12191. [PMID: 34831960 PMCID: PMC8618754 DOI: 10.3390/ijerph182212191] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/23/2021] [Revised: 11/15/2021] [Accepted: 11/15/2021] [Indexed: 12/23/2022]
Abstract
COVID-19 has been declared a pandemic; it has a fast rate of infection and has impacted lives and national economies due to forced lockdowns. Its detection using RT-PCR requires a long time, during which infections grow exponentially, and shortages of testing kits in many countries create havoc. This work proposes a new image processing-based technique for healthcare systems, named "C19D-Net", to detect "COVID-19" infection from "Chest X-Ray" (XR) images, which can help radiologists improve their accuracy in detecting COVID-19. The proposed system extracts deep learning (DL) features by applying the InceptionV4 architecture, and a multiclass SVM classifier classifies and detects COVID-19 infection into four different classes. A dataset of 1900 chest XR images was collected from two publicly accessible databases. Images are pre-processed with proper scaling and fed regularly to the proposed model. In extensive tests, the proposed model ("C19D-Net") achieved the highest COVID-19 detection accuracy of 96.24% for four classes, 95.51% for three classes, and 98.1% for two classes. The proposed method outperforms most recently published methods in terms of "precision", "accuracy", "F1-score" and "recall". As a result, in the present COVID-19 situation, the proposed "C19D-Net" can be employed where test kits are in short supply, helping radiologists improve their accuracy in detecting COVID-19 patients through XR images.
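At inference time, the deep-features-into-SVM pipeline reduces to scoring features against one hyperplane per class and taking the argmax. In the sketch below a random projection stands in for the InceptionV4 extractor and a pre-fitted weight matrix for the trained multiclass SVM (margin training omitted), so every shape is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for the InceptionV4 deep-feature extractor: flatten plus a fixed
# random projection (the real features come from the pre-trained network).
proj = rng.standard_normal((28 * 28, 32))
extract = lambda img: img.ravel() @ proj

def ovr_predict(feats, w):
    """One-vs-rest decision rule of a linear multiclass SVM at test time:
    score against each class hyperplane and take the argmax. Training and
    margins are omitted; w is assumed already fitted."""
    return (feats @ w).argmax(axis=-1)

imgs = rng.random((5, 28, 28))
F = np.stack([extract(im) for im in imgs])
w = rng.standard_normal((32, 4))            # four classes, as in the paper
labels = ovr_predict(F, w)
print(labels.shape)  # (5,)
```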
Affiliation(s)
- Prabhjot Kaur
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, Punjab, India
- Shilpi Harnal
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, Punjab, India
- Rajeev Tiwari
- Department of Systemics, School of Computer Science, University of Petroleum and Energy Studies, Dehradun 248007, Uttarakhand, India
- Fahd S. Alharithi
- Department of Computer Science, College of Computers and Information Technology, Taif University, Taif 21944, Saudi Arabia
- Ahmed H. Almulihi
- Department of Computer Science, College of Computers and Information Technology, Taif University, Taif 21944, Saudi Arabia
- Irene Delgado Noya
- Higher Polytechnic School/Industrial Organization Engineering, Universidad Europea del Atlántico, 39011 Santander, Spain
- Department of Project Management, Universidad Internacional Iberoamericana, Campeche 24560, Mexico
- Nitin Goyal
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, Punjab, India
20
Yang H, Ni J, Gao J, Han Z, Luan T. A novel method for peanut variety identification and classification by Improved VGG16. Sci Rep 2021; 11:15756. [PMID: 34344983 PMCID: PMC8333428 DOI: 10.1038/s41598-021-95240-y] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2021] [Accepted: 07/21/2021] [Indexed: 11/09/2022] Open
Abstract
Crop variety identification is an essential link in seed detection, phenotype collection and scientific breeding. This paper takes peanut as an example to explore a new method for crop variety identification. Peanut is a crucial oil and cash crop; the yield and quality of different peanut varieties differ, so it is necessary to identify and classify them. Traditional image-processing methods of peanut variety identification need to extract many features and suffer from defects such as strong subjectivity and insufficient generalization ability. Based on deep learning technology, this paper improved the deep convolutional neural network VGG16 and applied the improved VGG16 to the identification and classification of 12 peanut varieties. First, the peanut pod images of the 12 varieties, obtained by a scanner, were preprocessed with gray-scaling, binarization, and ROI extraction to form a peanut pod dataset with a total of 3365 images. A series of improvements were then made to VGG16: the F6 and F7 fully connected layers were removed; a Conv6 layer and a global average pooling layer were added; the three convolutional layers of conv5 were changed into a depth concatenation; and Batch Normalization (BN) layers were added to the model. In addition, fine-tuning was carried out on the improved VGG16 by adjusting the locations of the BN layers and the number of filters for Conv6. Finally, the improved VGG16 model's training and test results were compared with those of the classic models AlexNet, VGG16, GoogLeNet, ResNet18, ResNet50, SqueezeNet, DenseNet201 and MobileNetv2 to verify its superiority. The average accuracy of the improved VGG16 on the peanut pod test set was 96.7%, which was 8.9% higher than that of VGG16 and 1.6-12.3% higher than that of the other classic models. In addition, supplementary experiments were carried out to prove the robustness and generality of the improved VGG16.
The improved VGG16 was applied to the identification and classification of seven corn grain varieties with the same method, and an average accuracy of 90.1% was achieved. The experimental results show that the improved VGG16 proposed in this paper can identify and classify peanut pods of different varieties, proving the feasibility of convolutional neural networks for variety identification and classification. The model proposed in this experiment has positive significance for exploring other crop variety identification and classification tasks.
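Two of the architectural changes, replacing the F6/F7 fully connected layers with global average pooling and adding batch normalization, are easy to show in isolation. This numpy sketch uses inference-style BN without learned scale/shift and arbitrary feature-map shapes, not the paper's exact layers.

```python
import numpy as np

def global_average_pool(fmaps):
    """Global average pooling: one scalar per channel, the replacement for
    VGG16's large F6/F7 fully connected layers."""
    return fmaps.mean(axis=(-2, -1))

def batch_norm(x, eps=1e-5):
    """Inference-style batch normalization over the batch axis (learned
    scale/shift omitted for brevity)."""
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

feats = np.random.rand(8, 512, 7, 7)   # a batch of conv5-style feature maps
pooled = global_average_pool(feats)
print(pooled.shape)  # (8, 512)
```

Replacing the FC layers with GAP drops most of VGG16's parameters, which is consistent with the accuracy-per-parameter motivation the abstract describes.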
Affiliation(s)
- Haoyan Yang
- College of Animation and Communication, Qingdao Agricultural University, Qingdao, 266109, Shandong, China
- Jiangong Ni
- College of Science and Information Science, Qingdao Agricultural University, Qingdao, 266109, Shandong, China
- Jiyue Gao
- College of Science and Information Science, Qingdao Agricultural University, Qingdao, 266109, Shandong, China
- Zhongzhi Han
- College of Science and Information Science, Qingdao Agricultural University, Qingdao, 266109, Shandong, China
- Tao Luan
- College of Animation and Communication, Qingdao Agricultural University, Qingdao, 266109, Shandong, China