101
PCA-Based Incremental Extreme Learning Machine (PCA-IELM) for COVID-19 Patient Diagnosis Using Chest X-Ray Images. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:9107430. [PMID: 35800685 PMCID: PMC9253873 DOI: 10.1155/2022/9107430] [Received: 03/25/2022] [Accepted: 04/29/2022]
Abstract
The novel coronavirus, first reported in December 2019, has created a pandemic with very adverse consequences for people's daily life, healthcare, and the world's economy. According to the World Health Organization's most recent statistics, COVID-19 has become a worldwide pandemic, and the numbers of infected persons and fatalities are growing at an alarming rate. An effective system for early detection of COVID-19 patients is essential to curb further spread of the virus from affected persons. Therefore, to identify positive cases early and to support radiologists in the automatic diagnosis of COVID-19 from X-ray images, a novel method, PCA-IELM, is proposed based on principal component analysis (PCA) and the incremental extreme learning machine. The key contribution of the suggested method is that it combines the benefits of PCA and the incremental extreme learning machine: PCA-IELM reduces the input dimension by extracting the most important information from an image, so the technique can effectively increase COVID-19 patient prediction performance. In addition, PCA-IELM has a faster training speed than a multi-layer neural network. The proposed approach was tested on a chest X-ray image dataset of COVID-19 patients. The experimental results indicate that PCA-IELM outperforms PCA-SVM and PCA-ELM in terms of accuracy (98.11%), precision (96.11%), recall (97.50%), F1-score (98.50%), and training speed.
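The two ingredients this abstract combines, PCA for dimension reduction and an incremental ELM that adds hidden nodes one at a time, can be sketched in a few lines of numpy. The data, node count, and activation below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for flattened chest X-ray features: 200 samples, 64 dimensions
# (sizes are illustrative assumptions, not the paper's setup).
X = rng.normal(size=(200, 64))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(float)

# PCA: project onto the top-k principal components via SVD.
k = 10
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T  # reduced inputs, shape (200, k)

# Incremental ELM (I-ELM style): add random hidden nodes one at a time and
# solve each node's output weight in closed form against the current residual.
def add_node(Z, residual, rng):
    w, b = rng.normal(size=Z.shape[1]), rng.normal()
    h = np.tanh(Z @ w + b)            # activation of the new hidden node
    beta = (h @ residual) / (h @ h)   # least-squares output weight
    return (w, b, beta), residual - beta * h

residual = y - y.mean()               # fit around the label mean
nodes = []
for _ in range(30):
    node, residual = add_node(Z, residual, rng)
    nodes.append(node)

mse = float(np.mean(residual ** 2))
print(round(mse, 4))
```

Because each output weight is a least-squares projection, the residual norm can only shrink as nodes are added, which is what makes the incremental scheme fast to train.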
102
Li P, Hao H, Mao X, Xu J, Lv Y, Chen W, Ge D, Zhang Z. Convolutional neural network-based applied research on the enrichment of heavy metals in the soil-rice system in China. ENVIRONMENTAL SCIENCE AND POLLUTION RESEARCH INTERNATIONAL 2022; 29:53642-53655. [PMID: 35290576 DOI: 10.1007/s11356-022-19640-x] [Received: 11/13/2021] [Accepted: 03/05/2022]
Abstract
The enrichment of heavy metals in the soil-rice system is affected by various factors, which hampers the prediction of heavy metal concentrations. In this research, a prediction model (CNN-HM) of heavy metal concentrations in rice was constructed based on convolutional neural network (CNN) technology and 17 environmental factors. For comparison, other machine learning models, such as multiple linear regression, Bayesian ridge regression, support vector machine, and backpropagation neural networks, were applied. Furthermore, the LH-OAT method was used to evaluate the sensitivity of CNN-HM to each environmental factor. The results showed that the R2 values of CNN-HM for Cd, Pb, Cr, As, and Hg were 0.818, 0.709, 0.688, 0.462, and 0.816, respectively, and both the MAE and RMAE values were acceptable. The sensitivity analysis showed that the concentrations of Cd and Pb, mechanical composition, soil pH, and altitude were the main sensitive features for CNN-HM. Compared with CNN-HM based on all input features, the performance of the quick prediction model that was based on the sensitive features did not degrade significantly, thereby indicating that CNN-HM has stronger stability and robustness. The quick prediction model has extensive application value for timely prediction of the enrichment of heavy metals in emergencies. This study demonstrated the effectiveness and practicability of CNNs in predicting heavy metal enrichment in the soil-rice system and provided a new perspective and solution for heavy metal prediction.
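The abstract evaluates CNN-HM with R2, MAE, and RMSE-style errors; these metrics are straightforward to compute. The measured and predicted concentrations below are toy values, not data from the paper:

```python
import numpy as np

# Toy measured vs. predicted Cd concentrations (mg/kg); values are
# illustrative assumptions only.
y_true = np.array([0.12, 0.30, 0.45, 0.22, 0.60, 0.18])
y_pred = np.array([0.15, 0.28, 0.40, 0.25, 0.55, 0.20])

ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
r2 = 1.0 - ss_res / ss_tot
mae = float(np.mean(np.abs(y_true - y_pred)))
rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
print(round(r2, 3), round(mae, 3), round(rmse, 3))
```

Note that MAE is always at most RMSE, so reporting both, as the paper does, bounds the error distribution from two sides.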
Affiliation(s)
- Panpan Li
- College of Computer, National University of Defense Technology, Changsha, 410005, People's Republic of China

- Huijuan Hao
- College of Resources and Environment, Hunan Agricultural University, Changsha, 410128, People's Republic of China
- Risk Assessment Laboratory for Environmental Factors of Agro-Product Quality Safety, Ministry of Agriculture and Villages, Changsha, 410005, People's Republic of China

- Xiaoguang Mao
- College of Computer, National University of Defense Technology, Changsha, 410005, People's Republic of China

- Jianjun Xu
- College of Computer, National University of Defense Technology, Changsha, 410005, People's Republic of China

- Yuntao Lv
- Risk Assessment Laboratory for Environmental Factors of Agro-Product Quality Safety, Ministry of Agriculture and Villages, Changsha, 410005, People's Republic of China

- Wanming Chen
- Risk Assessment Laboratory for Environmental Factors of Agro-Product Quality Safety, Ministry of Agriculture and Villages, Changsha, 410005, People's Republic of China

- Dabing Ge
- College of Resources and Environment, Hunan Agricultural University, Changsha, 410128, People's Republic of China

- Zhuo Zhang
- College of Information and Communication Technology, Guangzhou College of Commerce, Guangzhou, 510000, People's Republic of China.
103
Latent Low-Rank Projection Learning with Graph Regularization for Feature Extraction of Hyperspectral Images. REMOTE SENSING 2022. [DOI: 10.3390/rs14133078]
Abstract
Owing to their rich spectral information, hyperspectral images (HSIs) have been successfully applied in many fields. However, some problems of concern also limit their further application, such as high dimensionality and expensive labeling. To address these issues, this paper presents an unsupervised latent low-rank projection learning with graph regularization (LatLRPL) method for feature extraction and classification of HSIs. Discriminative features are extracted in the latent space by decomposing the latent low-rank matrix into two different matrices, while graph regularization preserves the intrinsic subspace structures. Different from graph embedding-based methods that need two phases to obtain the low-dimensional projections, one step is enough for LatLRPL because it constructs an integrated projection learning model, reducing the complexity and simultaneously improving the robustness. To improve performance, a simple but effective strategy is exploited: conducting a local weighted average on the pixels in a sliding window of the HSI. Experiments on the Indian Pines and Pavia University datasets demonstrate the superiority of the proposed LatLRPL method.
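The sliding-window preprocessing step mentioned above can be sketched directly. The sketch below uses uniform weights over the window, which is an assumption; the paper's exact weighting scheme may differ:

```python
import numpy as np

def local_weighted_average(hsi, win=3):
    """Replace each pixel's spectrum with an average over a spatial window.
    Uniform weights are an assumption; hsi has shape (rows, cols, bands)."""
    r = win // 2
    padded = np.pad(hsi, ((r, r), (r, r), (0, 0)), mode="edge")
    out = np.zeros(hsi.shape, dtype=float)
    for dy in range(win):          # accumulate the win*win shifted copies
        for dx in range(win):
            out += padded[dy:dy + hsi.shape[0], dx:dx + hsi.shape[1], :]
    return out / (win * win)

# Toy hyperspectral cube: 8x8 pixels, 5 bands (sizes are illustrative).
cube = np.random.default_rng(1).random((8, 8, 5))
smoothed = local_weighted_average(cube)
print(smoothed.shape)
```

Spatial averaging of neighboring spectra suppresses pixel-level noise, which is why it helps before projection learning.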
104
Deep learning for necrosis detection using canine perivascular wall tumour whole slide images. Sci Rep 2022; 12:10634. [PMID: 35739267 PMCID: PMC9226022 DOI: 10.1038/s41598-022-13928-1] [Received: 03/13/2022] [Accepted: 05/30/2022]
Abstract
Necrosis seen in histopathology whole slide images is a major criterion contributing to the tumour grade score, which in turn determines treatment options. However, conventional manual assessment suffers from poor inter-operator reproducibility, which impacts grading precision. To address this, automatic necrosis detection using AI may be used to assess necrosis for the final scoring that contributes towards the clinical grade. Using deep learning, we describe a novel approach for automating necrosis detection in whole slide images, tested on a canine soft tissue sarcoma (cSTS) dataset consisting of canine perivascular wall tumours (cPWTs). A patch-based deep learning approach was developed in which different variations of training a DenseNet-161 convolutional neural network architecture were investigated, as well as a stacking ensemble. An optimised DenseNet-161 with post-processing produced a hold-out test F1-score of 0.708, demonstrating state-of-the-art performance. This represents the first automated necrosis detection method in the cSTS domain, and specifically for detecting necrosis in cPWTs, marking a significant step towards reproducible and reliable necrosis assessment for improving the precision of tumour grading.
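A patch-based pipeline like the one described first tiles the whole slide image before classifying each patch. A minimal tiling sketch, with toy sizes standing in for the much larger patches real WSI pipelines use:

```python
import numpy as np

def tile_patches(slide, patch=4, stride=4):
    """Split a (H, W, C) image array into patches, the tiling step a
    patch-based WSI pipeline performs before feeding patches to a CNN.
    Patch size here is a toy value; real WSI patches are typically 224+ px."""
    H, W, C = slide.shape
    patches = [
        slide[y:y + patch, x:x + patch]
        for y in range(0, H - patch + 1, stride)
        for x in range(0, W - patch + 1, stride)
    ]
    return np.stack(patches)

slide = np.zeros((8, 8, 3))       # toy stand-in for a slide region
tiles = tile_patches(slide)
print(tiles.shape)                # (num_patches, patch, patch, channels)
```

Patch-level predictions are then aggregated (and post-processed, as the paper notes) to produce the slide-level necrosis map.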
105
A Band Selection Approach for Hyperspectral Image Based on a Modified Hybrid Rice Optimization Algorithm. Symmetry (Basel) 2022. [DOI: 10.3390/sym14071293]
Abstract
Hyperspectral image (HSI) analysis has become one of the most active topics in remote sensing, as it can provide powerful assistance in sensing a large-scale environment. Nevertheless, the large number of highly correlated and redundant bands in HSI data poses a massive challenge for image recognition and classification. Hybrid Rice Optimization (HRO) is a novel meta-heuristic whose population is divided into three groups with approximately equal numbers of individuals according to self-equilibrium and symmetry, and it has been successfully applied to band selection. However, primary HRO has some limitations with respect to the local search for better solutions, which may result in a promising solution being overlooked. Therefore, this paper proposes a modified HRO (MHRO) based on an opposition-based-learning (OBL) strategy and differential evolution (DE) operators for band selection. Firstly, OBL is adopted in the initialization phase of MHRO to increase the diversity of the population. Then, the exploitation ability is enhanced by embedding DE operators into the search process at each iteration. Experimental results verify that the proposed method is superior in both classification accuracy and the number of selected bands compared to the other algorithms considered in the paper.
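The two modifications MHRO adds, opposition-based initialization and DE operators, can be sketched concretely. The fitness function, bounds, population size, and donor indices below are toy assumptions standing in for a real band-subset evaluation:

```python
import numpy as np

rng = np.random.default_rng(2)
dim, pop, lo, hi = 5, 10, 0.0, 1.0

def sphere(x):                       # toy fitness to minimise; a stand-in
    return np.sum(x ** 2, axis=-1)   # for evaluating a band subset

# Opposition-based learning: for each random individual x also generate its
# opposite (lo + hi - x), then keep the best half of the combined pool.
X = rng.uniform(lo, hi, size=(pop, dim))
both = np.vstack([X, lo + hi - X])
X = both[np.argsort(sphere(both))[:pop]]

# DE/rand/1 mutation plus binomial crossover embedded in the search step
# (the guaranteed-crossover index of full DE is omitted for brevity).
F, CR = 0.5, 0.9
i, a, b, c = 0, 1, 2, 3              # illustrative target and donor indices
mutant = X[a] + F * (X[b] - X[c])
trial = np.where(rng.random(dim) < CR, mutant, X[i])
if sphere(trial) < sphere(X[i]):     # greedy selection
    X[i] = trial
print(X.shape)
```

OBL doubles the candidate pool at no extra sampling cost, while the DE step gives each iteration a directed local move, addressing exactly the exploitation weakness the abstract identifies.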
106
High-Precision Seedling Detection Model Based on Multi-Activation Layer and Depth-Separable Convolution Using Images Acquired by Drones. DRONES 2022. [DOI: 10.3390/drones6060152]
Abstract
Crop seedling detection is an important task in the seedling stage of crops in precision agriculture. In this paper, we propose a high-precision lightweight object detection network model based on a multi-activation layer and a depth-separable convolution module to detect crop seedlings, aiming to improve on the accuracy of traditional artificial intelligence methods. The dataset used in this paper was collected from Shahe Town, Laizhou City, Yantai City, Shandong Province, China; because the dataset is small, various image augmentation methods were applied. Experimental results on this dataset show that the proposed method can effectively improve seedling detection accuracy, with the F1 score and mAP reaching 0.95 and 0.89, respectively, which are the best values among the compared models. To verify the generalization performance of the model, we also conducted validation on a maize seedling dataset, and the experimental results confirmed its generalization ability. To apply the proposed method in real agricultural scenarios, we encapsulated the model on a Jetson board and built smart hardware that can quickly detect seedlings.
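The reason depth-separable convolution makes such a model lightweight is a simple parameter-count argument, sketched below (biases ignored for clarity; the channel sizes are illustrative):

```python
# A standard k x k convolution with c_in -> c_out channels versus a
# depthwise k x k convolution followed by a 1x1 pointwise convolution.
def std_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    return k * k * c_in + c_in * c_out   # depthwise + pointwise

k, c_in, c_out = 3, 64, 128
print(std_conv_params(k, c_in, c_out))        # 73728
print(separable_conv_params(k, c_in, c_out))  # 8768
```

For this 3x3, 64-to-128-channel case the separable form needs roughly 8x fewer parameters, which is what makes deployment on a small board such as a Jetson practical.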
107
Malakar S, Roy SD, Das S, Sen S, Velásquez JD, Sarkar R. Computer Based Diagnosis of Some Chronic Diseases: A Medical Journey of the Last Two Decades. ARCHIVES OF COMPUTATIONAL METHODS IN ENGINEERING : STATE OF THE ART REVIEWS 2022; 29:5525-5567. [PMID: 35729963 PMCID: PMC9199478 DOI: 10.1007/s11831-022-09776-x] [Received: 03/16/2022] [Accepted: 05/22/2022]
Abstract
Disease prediction from diagnostic reports and pathological images using artificial intelligence (AI) and machine learning (ML) is one of the fastest emerging applications of recent years. Researchers are striving to achieve near-perfect results using advanced hardware technologies in combination with AI- and ML-based approaches, and as a result a large number of AI- and ML-based methods are found in the literature. A systematic survey describing the state-of-the-art disease prediction methods, specifically chronic disease prediction algorithms, provides a clear picture of the recent models developed in this field and helps researchers identify the research gaps that remain. To this end, this paper reviews the approaches in the literature designed for predicting chronic diseases such as breast cancer, lung cancer, leukemia, heart disease, diabetes, chronic kidney disease, and liver disease. The advantages and disadvantages of the various techniques are thoroughly explained, and a detailed performance comparison of the different methods is presented. Finally, the survey concludes by highlighting some future research directions in this field that can be addressed in forthcoming research attempts.
Affiliation(s)
- Samir Malakar
- Department of Computer Science, Asutosh College, Kolkata, India

- Soumya Deep Roy
- Department of Metallurgical and Material Engineering, Jadavpur University, Kolkata, India

- Soham Das
- Department of Metallurgical and Material Engineering, Jadavpur University, Kolkata, India

- Swaraj Sen
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, India

- Juan D. Velásquez
- Department of Industrial Engineering, University of Chile, Santiago, Chile
- Instituto Sistemas Complejos de Ingeniería (ISCI), Santiago, Chile

- Ram Sarkar
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, India
108
Sun M, Xu L, Luo R, Lu Y, Jia W. Fast Location and Recognition of Green Apple Based on RGB-D Image. FRONTIERS IN PLANT SCIENCE 2022; 13:864458. [PMID: 35755709 PMCID: PMC9218757 DOI: 10.3389/fpls.2022.864458] [Received: 01/28/2022] [Accepted: 04/26/2022]
Abstract
In green apple harvesting and yield estimation, factors such as fruit color, light, and the orchard environment make accurate recognition and fast location of the target fruit a tremendous challenge for the vision system. In this article, we improve a density peak cluster segmentation algorithm for RGB images with the help of the gradient field of depth images to locate and recognize target fruit. Specifically, the image depth information is used to analyze the gradient field of the target image, and the vorticity center and a two-dimensional plane projection are constructed to achieve accurate center location. Next, an optimized density peak clustering algorithm is applied to segment the target image: kernel density estimation is utilized to optimize the segmentation algorithm, and a double sort algorithm is applied to efficiently obtain an accurate segmentation area of the target image. Finally, the segmentation area containing the circle center is taken as the target fruit area, and the maximum value method is employed to determine the radius; these two results are merged to achieve contour fitting of the target fruit. The method requires no iteration, no trained classifier, and few samples, which greatly improves operating efficiency. The experimental results show that the presented method significantly improves accuracy and efficiency and deserves further promotion.
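Density peak clustering ranks candidate cluster centers by two quantities: a local density rho (here estimated with a Gaussian kernel, matching the kernel density estimation the abstract mentions) and delta, the distance to the nearest point of higher density. A minimal sketch on toy 2-D points standing in for image pixels:

```python
import numpy as np

def density_peaks(points, dc=1.5):
    """Compute rho (Gaussian-kernel local density) and delta (distance to the
    nearest point of higher density) for density peak clustering."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    rho = np.exp(-(d / dc) ** 2).sum(axis=1) - 1.0   # subtract the self-term
    delta = np.empty(len(points))
    for i in range(len(points)):
        higher = rho > rho[i]
        delta[i] = d[i, higher].min() if higher.any() else d[i].max()
    return rho, delta

pts = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5],   # dense cluster A
                [5.0, 5.0], [5.5, 5.0], [5.0, 5.5],   # dense cluster B
                [10.0, 0.0]])                         # isolated point
rho, delta = density_peaks(pts)
centers = np.argsort(rho * delta)[-2:]   # centers have large rho AND delta
print(sorted(centers.tolist()))          # one center per dense cluster
```

Points with high density but small delta are cluster members; the isolated point has large delta but negligible density, so it is never chosen as a center.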
Affiliation(s)
- Meili Sun
- School of Information Science and Engineering, Shandong Normal University, Jinan, China

- Liancheng Xu
- School of Information Science and Engineering, Shandong Normal University, Jinan, China

- Rong Luo
- State Key Laboratory of Biobased Materials and Green Papermaking, Qilu University of Technology (Shandong Academy of Sciences), Jinan, China

- Yuqi Lu
- School of Information Science and Engineering, Shandong Normal University, Jinan, China

- Weikuan Jia
- School of Information Science and Engineering, Shandong Normal University, Jinan, China
- Key Laboratory of Facility Agriculture Measurement and Control Technology and Equipment of Machinery Industry, Zhenjiang, China
109
Deep Feature Extraction for Cymbidium Species Classification Using Global–Local CNN. HORTICULTURAE 2022. [DOI: 10.3390/horticulturae8060470]
Abstract
Cymbidium is the most famous and widely distributed genus in the Orchidaceae family, with extremely high ornamental and economic value. With the continuous development of the Cymbidium industry in recent years, it has become increasingly difficult to classify, identify, develop, and utilize these orchids. In this study, a convolutional neural network-based classification model, GL-CNN, is proposed to solve the problem of Cymbidium classification. First, the image set was expanded by four methods (mirror rotation, salt-and-pepper noise, image sharpening, and random angle flip), and then a cascade fusion strategy was used to fit the multiscale features obtained from the two branches. Comparing the performance of GL-CNN with four other classic models (AlexNet, ResNet50, GoogleNet, and VGG16), the results showed that GL-CNN achieves the highest classification prediction accuracy, with a value of 94.13%. This model can effectively detect different species of Cymbidium and provides a reference for the identification of Cymbidium germplasm resources.
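One of the four augmentations listed, salt-and-pepper noise, is simple to sketch: corrupt a random fraction of pixels to pure black or white. The image size and noise amount below are illustrative assumptions:

```python
import numpy as np

def salt_and_pepper(img, amount=0.05, rng=None):
    """Corrupt a random fraction `amount` of pixels: half to black (pepper),
    half to white (salt). img is a uint8 array of shape (H, W, C)."""
    if rng is None:
        rng = np.random.default_rng()
    out = img.copy()
    mask = rng.random(img.shape[:2])       # one draw per pixel location
    out[mask < amount / 2] = 0             # pepper
    out[mask > 1 - amount / 2] = 255       # salt
    return out

img = np.full((32, 32, 3), 128, dtype=np.uint8)   # flat gray test image
noisy = salt_and_pepper(img, amount=0.1, rng=np.random.default_rng(3))
print(int((noisy != 128).any(axis=-1).sum()))     # number of corrupted pixels
```

Training on such corrupted copies makes the classifier less sensitive to sensor noise while leaving the label unchanged.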
110
Buyukarikan B, Ulker E. Classification of physiological disorders in apples fruit using a hybrid model based on convolutional neural network and machine learning methods. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07350-x]
111
Zadeh FA, Ardalani MV, Salehi AR, Jalali Farahani R, Hashemi M, Mohammed AH. An Analysis of New Feature Extraction Methods Based on Machine Learning Methods for Classification Radiological Images. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:3035426. [PMID: 35634075 PMCID: PMC9131703 DOI: 10.1155/2022/3035426] [Received: 01/01/2022] [Revised: 02/02/2022] [Accepted: 03/08/2022]
Abstract
The lungs are COVID-19's most important focus, as the disease induces inflammatory changes in the lungs that can lead to respiratory insufficiency. The reduced supply of oxygen to human cells negatively impacts the body, and multiorgan failure with a high mortality rate may occur in certain circumstances. Radiological pulmonary evaluation is a vital part of therapy for the critically ill patient with COVID-19, but evaluating radiological imagery is a specialized activity that requires a radiologist, so applying artificial intelligence to radiological images is an essential topic. Using a deep machine learning technique to identify morphological differences in the lungs of COVID-19-infected patients could yield promising results on digital chest X-ray images, since minor differences that are not detectable or apparent to the human eye may be detected using computer vision algorithms. This paper uses machine learning methods to diagnose COVID-19 on chest X-rays, with very promising findings. The dataset includes COVID-19-enhanced X-ray images for disease detection, gathered from two publicly accessible datasets. Feature extraction is done using gray level co-occurrence matrix methods, and k-nearest neighbor, support vector machine, linear discriminant analysis, naïve Bayes, and convolutional neural network methods are used to classify patients. According to the findings, the convolutional neural network, which requires less human involvement in the imaging pipeline, outperforms the other traditional machine learning approaches.
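The gray level co-occurrence matrix (GLCM) named here counts how often pairs of gray levels occur at a fixed pixel offset; Haralick-style statistics of that matrix then become classifier features. A minimal sketch on a toy image (the image and the two statistics shown are illustrative, not the paper's full feature set):

```python
import numpy as np

def glcm(img, levels=4, dx=1, dy=0):
    """Gray level co-occurrence matrix for one offset (dx, dy), plus two
    common texture statistics. img holds integer gray levels in [0, levels)."""
    P = np.zeros((levels, levels))
    H, W = img.shape
    for y in range(H - dy):
        for x in range(W - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1   # count the level pair
    P /= P.sum()                                     # normalise to probabilities
    i, j = np.indices(P.shape)
    contrast = float((P * (i - j) ** 2).sum())
    homogeneity = float((P / (1.0 + np.abs(i - j))).sum())
    return P, contrast, homogeneity

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P, contrast, homogeneity = glcm(img)
print(round(contrast, 3), round(homogeneity, 3))
```

Contrast grows with large gray-level jumps between neighbors, while homogeneity rewards near-diagonal mass, so together they summarize texture smoothness.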
Affiliation(s)
- Mohammadreza Vazifeh Ardalani
- Robotics Research Laboratory, Center of Excellence in Experimental Solid Mechanics and Dynamics, School of Mechanical Engineering, Iran University of Science and Technology, Tehran, Iran

- Ali Rezaei Salehi
- Industrial Engineering Department, Technical and Engineering Faculty, University of Science and Culture, Tehran, Iran

- Mandana Hashemi
- School of Industrial and Information Engineering, Politecnico di Milano University, Milan, Italy

- Adil Hussein Mohammed
- Department of Communication and Computer Engineering, Faculty of Engineering, Cihan University-Erbil, Erbil, Kurdistan Region, Iraq
112
Fu L, Li S, Sun Y, Mu Y, Hu T, Gong H. Lightweight-Convolutional Neural Network for Apple Leaf Disease Identification. FRONTIERS IN PLANT SCIENCE 2022; 13:831219. [PMID: 35685005 PMCID: PMC9171387 DOI: 10.3389/fpls.2022.831219] [Received: 12/15/2021] [Accepted: 04/22/2022]
Abstract
Because apples are widely consumed worldwide, it is extremely important to prevent and control disease in apple trees. In this research, we designed convolutional neural networks (CNNs) for five diseases that affect apple tree leaves, based on the AlexNet model. First, the coarse-grained features of the disease are extracted in the model using dilated convolution, which helps to maintain a large receptive field while reducing the number of parameters. A parallel convolution module is added to extract leaf disease features at multiple scales. Subsequently, a shortcut connection across the series of 3 × 3 convolutions allows the model to deal with additional nonlinearities. Further, an attention mechanism is added to all aggregated output modules to better fit channel features and reduce the impact of a complex background on model performance. Finally, the two fully connected layers are replaced by global pooling to reduce the number of model parameters while ensuring that features are not lost. The final recognition accuracy of the model is 97.36%, and the size of the model is 5.87 MB. In comparison with five other models, our model design is reasonable and has good robustness; the results show that the proposed model is lightweight and can identify apple leaf diseases with high accuracy.
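The receptive-field benefit of dilated convolution claimed above follows from a standard formula: a kernel of size k with dilation d acts like a kernel of size k + (k - 1)(d - 1) while keeping only k*k weights. The layer stack below is illustrative, not the paper's architecture:

```python
# Effective kernel size of a dilated convolution, and the receptive field
# of a stack of stride-1 conv layers given as (kernel, dilation) pairs.
def effective_kernel(k, d):
    return k + (k - 1) * (d - 1)

def stacked_receptive_field(layers):
    rf = 1
    for k, d in layers:
        rf += effective_kernel(k, d) - 1   # each layer widens the field
    return rf

print(effective_kernel(3, 2))                        # a 3x3 conv dilated by 2
print(stacked_receptive_field([(3, 1), (3, 2), (3, 4)]))
```

Three 3x3 layers with dilations 1, 2, 4 see a 15-pixel-wide context using the parameters of three plain 3x3 convolutions, which is exactly the "large receptive field, few parameters" trade-off the model exploits.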
Affiliation(s)
- Lili Fu
- College of Information Technology, Jilin Agricultural University, Changchun, China

- Shijun Li
- College of Electronic and Information Engineering, Wuzhou University, Wuzhou, China

- Yu Sun
- College of Information Technology, Jilin Agricultural University, Changchun, China
- Jilin Province Agricultural Internet of Things Technology Collaborative Innovation Center, Changchun, China
- Jilin Province Intelligent Environmental Engineering Research Center, Changchun, China
- Jilin Province Information Technology and Intelligent Agricultural Engineering Research Center, Changchun, China

- Ye Mu
- College of Information Technology, Jilin Agricultural University, Changchun, China
- Jilin Province Agricultural Internet of Things Technology Collaborative Innovation Center, Changchun, China
- Jilin Province Intelligent Environmental Engineering Research Center, Changchun, China
- Jilin Province Information Technology and Intelligent Agricultural Engineering Research Center, Changchun, China

- Tianli Hu
- College of Information Technology, Jilin Agricultural University, Changchun, China
- Jilin Province Agricultural Internet of Things Technology Collaborative Innovation Center, Changchun, China
- Jilin Province Intelligent Environmental Engineering Research Center, Changchun, China
- Jilin Province Information Technology and Intelligent Agricultural Engineering Research Center, Changchun, China

- He Gong
- College of Information Technology, Jilin Agricultural University, Changchun, China
- Jilin Province Agricultural Internet of Things Technology Collaborative Innovation Center, Changchun, China
- Jilin Province Intelligent Environmental Engineering Research Center, Changchun, China
- Jilin Province Information Technology and Intelligent Agricultural Engineering Research Center, Changchun, China
113
Abstract
A total of 8.46 million tons of date fruit are produced annually around the world. The date fruit is considered a high-value confectionery and fruit crop, and the hot arid zones of Southwest Asia, North Africa, and the Middle East are its major producers. Production of dates was 1.8 million tons in 1961, increased to 2.8 million tons in 1985, was recorded at 5.4 million tons in 2001, and has recently reached 8.46 million tons. A common problem in the industry is the absence of an autonomous system for the classification of date fruit, resulting in reliance on manual expertise alone, which is often laborious, expensive, and biased. Recently, machine learning (ML) techniques have been employed in areas such as agriculture and fruit farming and have brought great convenience to human life: an automated system based on ML can carry out the fruit classification and sorting tasks that were previously handled by human experts. In various fields, convolutional neural networks (CNNs) have achieved impressive results in image classification. Considering the success of CNNs and transfer learning in other image classification problems, this research employs a similar approach and proposes an efficient date classification model. A dataset of eight different classes of date fruit was created to train the proposed model, and different techniques were applied, such as image augmentation, a decayed learning rate, model checkpointing, and hybrid weight adjustment, to increase the accuracy rate. The results show that the proposed model, based on the MobileNetV2 architecture, achieved 99% accuracy. The proposed model was also compared with other existing models such as AlexNet, VGG16, InceptionV3, ResNet, and MobileNetV2, and it performed better than all of them in terms of accuracy.
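One of the training tricks listed, a decayed learning rate, is commonly implemented as a step schedule. The initial rate, decay factor, and step size below are illustrative assumptions, not values from the paper:

```python
# Step-decay schedule: halve the learning rate every `epochs_per_drop` epochs.
def decayed_lr(initial_lr, epoch, drop=0.5, epochs_per_drop=10):
    return initial_lr * drop ** (epoch // epochs_per_drop)

schedule = [decayed_lr(1e-3, e) for e in (0, 9, 10, 25, 30)]
print(schedule)
```

Large early steps let training move quickly toward a good region, while the shrinking rate lets the fine-tuned MobileNetV2 weights settle without oscillating.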
114
COVID-19 Chest X-ray Classification and Severity Assessment Using Convolutional and Transformer Neural Networks. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12104861]
Abstract
The coronavirus pandemic started in Wuhan, China in December 2019 and put millions of people in a difficult situation. This fatal virus spread to over 227 countries, the number of infected patients increased to over 400 million cases, and it caused over 6 million deaths worldwide. Given the serious consequences of this virus, it is necessary to develop a detection method that can respond quickly to prevent the spread of COVID-19. Using chest X-ray images to detect COVID-19 is one promising technique; however, with a large number of COVID-19 infections every day, the number of radiologists available to diagnose chest X-ray images is not sufficient. A computer-aided system that helps doctors instantly and automatically determine COVID-19 cases is needed. Recently, with the emergence of deep learning methods for medical and biomedical uses, convolutional neural network and transformer applications for chest X-ray images can supplement COVID-19 testing. In this paper, we classify three types of chest X-ray (normal, pneumonia, and COVID-19) using deep learning methods on a customized dataset, and we also carry out an experiment on the COVID-19 severity assessment task using a tailored dataset. Five deep learning models were used in our experiments: DenseNet121, ResNet50, InceptionNet, Swin Transformer, and a hybrid EfficientNet-DOLG neural network. The results indicate that chest X-rays and deep learning can be reliable methods for supporting doctors in COVID-19 identification and severity assessment tasks.
115
Tell Me More: Automating Emojis Classification for Better Accessibility and Emotional Context Recognition. FUTURE INTERNET 2022. [DOI: 10.3390/fi14050142]
Abstract
Users of web or chat social networks typically use emojis (e.g., smilies, memes, hearts) to convey in their textual interactions the emotions underlying the context of the communication, aiming for better interpretability, especially for short polysemous phrases. Semantic-based context recognition tools, employed in any chat or social network, can directly comprehend text-based emoticons (i.e., emojis created from a combination of symbols and characters) and translate them into audio information (e.g., text-to-speech readers for individuals with vision impairment). On the other hand, for a comprehensive understanding of the semantic context, image-based emojis require image-recognition algorithms. This study aims to explore and compare different classification methods for pictograms, applied to emojis collected from Internet sources. Each emoji is labeled according to the basic Ekman model of six emotional states. The first step involves extraction of emoji features through convolutional neural networks, which are then used to train conventional supervised machine learning classifiers for purposes of comparison. The second experimental step broadens the comparison to deep learning networks. The results reveal that both the conventional and deep learning classification approaches accomplish the goal effectively, with deep transfer learning exhibiting a highly satisfactory performance, as expected.
116
Lu Y, Du J, Liu P, Zhang Y, Hao Z. Image Classification and Recognition of Rice Diseases: A Hybrid DBN and Particle Swarm Optimization Algorithm. Front Bioeng Biotechnol 2022; 10:855667. [PMID: 35573246 PMCID: PMC9091375 DOI: 10.3389/fbioe.2022.855667] [Received: 01/15/2022] [Accepted: 03/28/2022]
Abstract
Rice blast, rice sheath blight, and rice brown spot have become the most prevalent diseases in the cold areas of northern China. In order to further improve the accuracy and efficiency of rice disease diagnosis, a framework for automatic classification and recognition of rice diseases is proposed in this study. First, we constructed a training and testing data set including 1,500 images of rice blast, 1,500 images of rice sheath blight, and 1,500 images of rice brown spot, and 1,100 healthy images were collected from the rice experimental field. Second, the deep belief network (DBN) model is designed to include 15 hidden restricted Boltzmann machine layers and a support vector machine (SVM) optimized with switching particle swarm optimization (SPSO). It is noted that the developed DBN and SPSO-SVM can simultaneously learn three proposed features, including color, texture, and shape, to recognize the disease type from the region of interest obtained by preprocessing the disease images. The proposed model leads to a hit rate of 91.37%, accuracy of 94.03%, and a false measurement rate of 8.63% with the 10-fold cross-validation strategy. The value of the area under the receiver operating characteristic curve (AUC) is 0.97, much higher than that of conventional machine learning models. The simulation results show that the DBN and SPSO-SVM models can effectively extract the image features of rice diseases during recognition, and have good anti-interference and robustness.
Affiliation(s)
- Yang Lu
- College of Information and Electrical Engineering, Heilongjiang Bayi Agricultural University, Daqing, China
- *Correspondence: Yang Lu,
- Jiaojiao Du
- College of Information and Electrical Engineering, Heilongjiang Bayi Agricultural University, Daqing, China
- Pengfei Liu
- College of Information and Electrical Engineering, Heilongjiang Bayi Agricultural University, Daqing, China
- Yong Zhang
- School of Physics and Electronic Engineering, Northeast Petroleum University, Daqing, China
- Zhiqiang Hao
- Key Laboratory for Metallurgical Equipment and Control of Ministry of Education, Wuhan University of Science and Technology, Wuhan, China
|
117
|
COV-DLS: Prediction of COVID-19 from X-Rays Using Enhanced Deep Transfer Learning Techniques. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:6216273. [PMID: 35422979 PMCID: PMC9002900 DOI: 10.1155/2022/6216273] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/22/2021] [Accepted: 02/11/2022] [Indexed: 12/12/2022]
Abstract
In this paper, modifications in neoteric architectures such as VGG16, VGG19, ResNet50, and InceptionV3 are proposed for the classification of COVID-19 using chest X-rays. The proposed architectures termed "COV-DLS" consist of two phases: heading model construction and classification. The heading model construction phase utilizes four modified deep learning architectures, namely Modified-VGG16, Modified-VGG19, Modified-ResNet50, and Modified-InceptionV3. An attempt is made to modify these neoteric architectures by incorporating the average pooling and dense layers. The dropout layer is also added to prevent the overfitting problem. Two dense layers with different activation functions are also added. Thereafter, the output of these modified models is applied during the classification phase, when COV-DLS are applied on a COVID-19 chest X-ray image data set. Classification accuracy of 98.61% is achieved by Modified-VGG16, 97.22% by Modified-VGG19, 95.13% by Modified-ResNet50, and 99.31% by Modified-InceptionV3. COV-DLS outperforms existing deep learning models in terms of accuracy and F1-score.
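The modification the abstract describes (average pooling and dense layers appended to a backbone, with dropout against overfitting) can be sketched as a single forward pass. The weights below are random placeholders, not trained parameters, and the layer sizes are illustrative assumptions rather than the COV-DLS configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def modified_head(feature_maps, n_classes=3, p_drop=0.5, train=True):
    """Sketch of a classification head appended to a VGG/ResNet backbone:
    global average pooling -> dense+ReLU -> dropout -> dense+softmax."""
    x = feature_maps.mean(axis=(1, 2))              # global average pooling
    W1 = rng.normal(0, 0.05, (x.shape[1], 64))
    h = np.maximum(0, x @ W1)                       # dense + ReLU
    if train:                                       # inverted dropout
        h *= (rng.random(h.shape) >= p_drop) / (1 - p_drop)
    W2 = rng.normal(0, 0.05, (64, n_classes))
    logits = h @ W2
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)

# Placeholder backbone output: batch of 4, 7x7 maps with 512 channels.
probs = modified_head(rng.random((4, 7, 7, 512)))
```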
|
118
|
Mahmood A, Singh SK, Tiwari AK. Pre-trained deep learning-based classification of jujube fruits according to their maturity level. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07213-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
|
119
|
Let AI Perform Better Next Time—A Systematic Review of Medical Imaging-Based Automated Diagnosis of COVID-19: 2020–2022. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12083895] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
The pandemic of COVID-19 has caused millions of infections, which has led to a great loss all over the world, socially and economically. Due to the false-negative rate and the time-consuming characteristic of the Reverse Transcription Polymerase Chain Reaction (RT-PCR) tests, diagnosing based on X-ray images and Computed Tomography (CT) images has been widely adopted to confirm positive COVID-19 RT-PCR tests. Since the very beginning of the pandemic, researchers in the artificial intelligence area have proposed a large number of automatic diagnosing models, hoping to assist radiologists and improve the diagnosing accuracy. However, after two years of development, there are still few models that can actually be applied in real-world scenarios. Numerous problems have emerged in the research of the automated diagnosis of COVID-19. In this paper, we present a systematic review of these diagnosing models. A total of 179 proposed models are involved. First, we compare the medical image modalities (CT or X-ray) for COVID-19 diagnosis from both the clinical perspective and the artificial intelligence perspective. Then, we classify existing methods into two types—image-level diagnosis (i.e., classification-based methods) and pixel-level diagnosis (i.e., segmentation-based models). For both types of methods, we define universal model pipelines and analyze the techniques that have been applied in each step of the pipeline in detail. In addition, we also review some commonly adopted public COVID-19 datasets. More importantly, we present an in-depth discussion of the existing automated diagnosis models and note a total of three significant problems: biased model performance evaluation; inappropriate implementation details; and a low reproducibility, reliability and explainability. For each point, we give corresponding recommendations on how we can avoid making the same mistakes and let AI perform better in the next pandemic.
|
120
|
Determination of the Severity and Percentage of COVID-19 Infection through a Hierarchical Deep Learning System. J Pers Med 2022; 12:jpm12040535. [PMID: 35455654 PMCID: PMC9027976 DOI: 10.3390/jpm12040535] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Revised: 03/21/2022] [Accepted: 03/24/2022] [Indexed: 12/18/2022] Open
Abstract
The coronavirus disease 2019 (COVID-19) has caused millions of deaths and one of the greatest health crises of all time. In this disease, one of the most important aspects is the early detection of the infection to avoid the spread. In addition to this, it is essential to know how the disease progresses in patients, to improve patient care. This contribution presents a novel method based on a hierarchical intelligent system, that analyzes the application of deep learning models to detect and classify patients with COVID-19 using both X-ray and chest computed tomography (CT). The methodology was divided into three phases, the first being the detection of whether or not a patient suffers from COVID-19, the second step being the evaluation of the percentage of infection of this disease and the final phase is to classify the patients according to their severity. Stratification of patients suffering from COVID-19 according to their severity using automatic systems based on machine learning on medical images (especially X-ray and CT of the lungs) provides a powerful tool to help medical experts in decision making. In this article, a new contribution is made to a stratification system with three severity levels (mild, moderate and severe) using a novel histogram database (which defines how the infection is in the different CT slices for a patient suffering from COVID-19). The first two phases use CNN Densenet-161 pre-trained models, and the last uses SVM with LDA supervised learning algorithms as classification models. The initial stage detects the presence of COVID-19 through X-ray multi-class (COVID-19 vs. No-Findings vs. Pneumonia) and the results obtained for accuracy, precision, recall, and F1-score values are 88%, 91%, 87%, and 89%, respectively. 
The second stage estimates the percentage of COVID-19 infection in the slices of the CT scans of a patient; the results on the evaluation metrics are a Pearson correlation coefficient of 0.95, an MAE of 5.14 and an RMSE of 8.47. The last stage finally classifies a patient into one of three degrees of severity as a function of the global infection of the lungs, and the results achieved are 95% accurate.
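The SVM-with-LDA severity stage can be illustrated with a two-class Fisher discriminant on synthetic stand-ins for the paper's histogram features. This is a toy sketch under those assumptions (binary rather than three severity levels, synthetic data), not the published system.

```python
import numpy as np

rng = np.random.default_rng(0)

def fisher_lda(X0, X1):
    """Two-class Fisher discriminant: projection direction and threshold."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter (sum of per-class scatter matrices).
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    thr = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
    return w, thr

# Synthetic "histogram" features for mild vs. severe slices (placeholders).
X0 = rng.normal(0, 1, (100, 5))
X1 = rng.normal(2, 1, (100, 5))
w, thr = fisher_lda(X0, X1)
acc = 0.5 * ((X0 @ w < thr).mean() + (X1 @ w > thr).mean())
```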
|
121
|
AlexNet Convolutional Neural Network for Disease Detection and Classification of Tomato Leaf. ELECTRONICS 2022. [DOI: 10.3390/electronics11060951] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
With limited retrieval of reserves and restricted capability in plant pathology, automation of processes becomes essential. All over the world, farmers are struggling to prevent various harm from bacteria or pathogens such as viruses, fungi, worms, protozoa, and insects. Deep learning is currently widely used across a wide range of applications, including desktop, web, and mobile. In this study, the authors attempt to implement a modified AlexNet-based CNN architecture on the Android platform to predict tomato diseases from leaf images. A dataset of 18,345 training images and 4,585 testing images was used to create the predictive model. The images are separated into ten tomato leaf disease labels, each 64 × 64 RGB pixels. The best model, using the Adam optimizer with a learning rate of 0.0005, 75 epochs, a batch size of 128, and a categorical cross-entropy loss function, has a high model accuracy with an average of 98%, a precision of 0.98, a recall value of 0.99, and an F1-score of 0.98 with a loss of 0.1331, so the classification results are good and very precise.
|
122
|
Shahi TB, Sitaula C, Neupane A, Guo W. Fruit classification using attention-based MobileNetV2 for industrial applications. PLoS One 2022; 17:e0264586. [PMID: 35213643 PMCID: PMC8880666 DOI: 10.1371/journal.pone.0264586] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2021] [Accepted: 02/13/2022] [Indexed: 11/18/2022] Open
Abstract
Recent deep learning methods for fruit classification have achieved promising performance. However, these methods have heavy-weight architectures, and hence require greater storage and expensive training operations due to their large number of training parameters. There is a necessity to explore lightweight deep learning models without compromising classification accuracy. In this paper, we propose a lightweight deep learning model using the pre-trained MobileNetV2 model and an attention module. First, the convolution features are extracted to capture high-level object-based information. Second, an attention module is used to capture the interesting semantic information. The convolution and attention modules are then combined to fuse both the high-level object-based information and the interesting semantic information, which is followed by fully connected layers and a softmax layer. Evaluation of our proposed method, which leverages a transfer learning approach, on three public fruit-related benchmark datasets shows that it outperforms four recent deep learning methods with a smaller number of trainable parameters and superior classification accuracy. Our model has great potential to be adopted by industries closely related to the fruit growing, retailing, or processing chain for automatic fruit identification and classification in the future.
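One common form of attention module used with MobileNetV2-style features is squeeze-and-excitation channel attention; the sketch below assumes that form (the paper's exact module may differ) and uses random placeholder weights rather than learned ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_attention(feats, r=4):
    """Squeeze-and-excitation-style channel attention on NHWC feature maps:
    global average pool -> small MLP -> sigmoid gates -> channel reweighting."""
    b, h, w, c = feats.shape
    s = feats.mean(axis=(1, 2))                   # squeeze: (batch, channels)
    W1 = rng.normal(0, 0.1, (c, c // r))          # reduction weights
    W2 = rng.normal(0, 0.1, (c // r, c))          # expansion weights
    z = np.maximum(0, s @ W1) @ W2                # excitation MLP
    gate = 1 / (1 + np.exp(-z))                   # sigmoid gates in (0, 1)
    return feats * gate[:, None, None, :]         # reweight each channel

out = channel_attention(rng.random((2, 7, 7, 32)))
```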
Affiliation(s)
- Tej Bahadur Shahi
- School of Engineering and Technology, Central Queensland University, North Rockhampton, QLD, Australia
- * E-mail:
- Chiranjibi Sitaula
- Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, VIC, Australia
- Arjun Neupane
- School of Engineering and Technology, Central Queensland University, North Rockhampton, QLD, Australia
- William Guo
- School of Engineering and Technology, Central Queensland University, North Rockhampton, QLD, Australia
|
123
|
Polat H. Multi-task semantic segmentation of CT images for COVID-19 infections using DeepLabV3+ based on dilated residual network. Phys Eng Sci Med 2022; 45:443-455. [PMID: 35286619 PMCID: PMC8919169 DOI: 10.1007/s13246-022-01110-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2021] [Revised: 02/07/2022] [Accepted: 02/08/2022] [Indexed: 11/28/2022]
Abstract
COVID-19 is a deadly outbreak that has been declared a public health emergency of international concern. The massive damage of the disease to public health, social life, and the global economy increases the importance of alternative rapid diagnosis and follow-up methods. The RT-PCR assay, which is considered the gold standard in diagnosing the disease, is complicated, expensive, time-consuming, prone to contamination, and may give false-negative results. These drawbacks reinforce the trend toward medical imaging techniques such as computed tomography (CT). Typical visual signs such as ground-glass opacity (GGO) and consolidation in CT images allow for quantitative assessment of the disease. In this context, this study aims at segmenting infected lung CT images with a residual-network-based DeepLabV3+, a redesigned convolutional neural network (CNN) model. In order to evaluate the robustness of the proposed model, three different segmentation tasks, Task-1, Task-2, and Task-3, were applied. Task-1 represents binary segmentation into lung (infected and non-infected tissues) and background. Task-2 represents multi-class segmentation into lung (non-infected tissue), COVID (GGO, consolidation, and pleural effusion irregularities gathered under a single class), and background. Finally, the segmentation in which each lesion type is considered as a separate class is defined as Task-3. COVID-19 imaging data for each segmentation task consist of 100 single-slice CT scans from over 40 diagnosed patients. The performance of the model was evaluated using the Dice similarity coefficient (DSC), intersection over union (IoU), sensitivity, specificity, and accuracy by performing five-fold cross-validation. The average DSC performance for the three segmentation tasks was 0.98, 0.858, and 0.616, respectively. The experimental results demonstrate that the proposed method has robust performance and great potential in evaluating COVID-19 infection.
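The headline segmentation metrics here (DSC and IoU) reduce to a few lines; the snippet below is a straightforward NumPy rendering of their standard definitions for binary masks.

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    """Dice similarity coefficient and intersection-over-union for two
    binary segmentation masks of the same shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, target).sum() + eps)
    return dice, iou

a = np.zeros((4, 4), int); a[:2] = 1     # predicted mask: top half
b = np.zeros((4, 4), int); b[1:3] = 1    # ground truth: middle rows
dice, iou = dice_and_iou(a, b)           # overlap is a single row
```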
Affiliation(s)
- Hasan Polat
- Department of Electrical and Energy, Bingol University, Selahaddin-i Eyyübi Mah. Aydınlık Cad No:1, 12000, Bingöl, Turkey.
|
124
|
The Self-Supervised Spectral–Spatial Vision Transformer Network for Accurate Prediction of Wheat Nitrogen Status from UAV Imagery. REMOTE SENSING 2022. [DOI: 10.3390/rs14061400] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
Nitrogen (N) fertilizer is routinely applied by farmers to increase crop yields. At present, farmers often over-apply N fertilizer in some locations or at certain times because they do not have high-resolution crop N status data. N-use efficiency can be low, with the remaining N lost to the environment, resulting in higher production costs and environmental pollution. Accurate and timely estimation of N status in crops is crucial to improving cropping systems’ economic and environmental sustainability. Destructive approaches based on plant tissue analysis are time consuming and impractical over large fields. Recent advances in remote sensing and deep learning have shown promise in addressing the aforementioned challenges in a non-destructive way. In this work, we propose a novel deep learning framework: a self-supervised spectral–spatial attention-based vision transformer (SSVT). The proposed SSVT introduces a Spectral Attention Block (SAB) and a Spatial Interaction Block (SIB), which allows for simultaneous learning of both spatial and spectral features from UAV digital aerial imagery, for accurate N status prediction in wheat fields. Moreover, the proposed framework introduces local-to-global self-supervised learning to help train the model from unlabelled data. The proposed SSVT has been compared with five state-of-the-art models including: ResNet, RegNet, EfficientNet, EfficientNetV2, and the original vision transformer on both testing and independent datasets. The proposed approach achieved high accuracy (0.96) with good generalizability and reproducibility for wheat N status estimation.
|
125
|
Zhang Z, Flores P, Friskop A, Liu Z, Igathinathane C, Han X, Kim HJ, Jahan N, Mathew J, Shreya S. Enhancing Wheat Disease Diagnosis in a Greenhouse Using Image Deep Features and Parallel Feature Fusion. FRONTIERS IN PLANT SCIENCE 2022; 13:834447. [PMID: 35371139 PMCID: PMC8965652 DOI: 10.3389/fpls.2022.834447] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/13/2021] [Accepted: 01/27/2022] [Indexed: 06/14/2023]
Abstract
Since the assessment of wheat diseases (e.g., leaf rust and tan spot) via visual observation is subjective and inefficient, this study focused on developing an automatic, objective, and efficient diagnosis approach. For each plant, color, and color-infrared (CIR) images were collected in a paired mode. An automatic approach based on the image processing technique was developed to crop the paired images to have the same region, after which a developed semiautomatic webtool was used to expedite the dataset creation. The webtool generated the dataset from either image and automatically built the corresponding dataset from the other image. Each image was manually categorized into one of the three groups: control (disease-free), disease light, and disease severity. After the image segmentation, handcrafted features (HFs) were extracted from each format of images, and disease diagnosis results demonstrated that the parallel feature fusion had higher accuracy over features from either type of image. Performance of deep features (DFs) extracted through different deep learning (DL) models (e.g., AlexNet, VGG16, ResNet101, GoogLeNet, and Xception) on wheat disease detection was compared, and those extracted by ResNet101 resulted in the highest accuracy, perhaps because deep layers extracted finer features. In addition, parallel deep feature fusion generated a higher accuracy over DFs from a single-source image. DFs outperformed HFs in wheat disease detection, and the DFs coupled with parallel feature fusion resulted in diagnosis accuracies of 75, 84, and 71% for leaf rust, tan spot, and leaf rust + tan spot, respectively. The methodology developed directly for greenhouse applications, to be used by plant pathologists, breeders, and other users, can be extended to field applications with future tests on field data and model fine-tuning.
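Parallel feature fusion in the simple sense used above (features from the color and CIR images combined per sample) can be sketched as normalise-then-concatenate. The feature arrays below are synthetic placeholders for the handcrafted and deep features; the paper's exact fusion scheme may differ.

```python
import numpy as np

def parallel_fuse(*feature_blocks, eps=1e-8):
    """Z-score normalise each feature source, then concatenate the blocks
    into one fused feature vector per sample."""
    normed = []
    for F in feature_blocks:
        mu, sd = F.mean(axis=0), F.std(axis=0)
        normed.append((F - mu) / (sd + eps))
    return np.hstack(normed)

rng = np.random.default_rng(0)
hf_color = rng.random((10, 12)) * 255    # handcrafted features (color image)
df_cir = rng.random((10, 48))            # deep features (CIR image)
fused = parallel_fuse(hf_color, df_cir)  # one 60-dim vector per plant
```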
Affiliation(s)
- Zhao Zhang
- Key Laboratory of Modern Precision Agriculture System Integration Research, Ministry of Education, China Agricultural University, Beijing, China
- Key Lab of Agricultural Information Acquisition Technology, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing, China
- Paulo Flores
- Department of Agricultural and Biosystems Engineering, North Dakota State University, Fargo, ND, United States
- Andrew Friskop
- Department of Plant Sciences, North Dakota State University, Fargo, ND, United States
- Zhaohui Liu
- Department of Plant Sciences, North Dakota State University, Fargo, ND, United States
- C. Igathinathane
- Department of Agricultural and Biosystems Engineering, North Dakota State University, Fargo, ND, United States
- X. Han
- Department of Biosystems Engineering, College of Agriculture and Life Sciences, Kangwon National University, Chuncheon, South Korea
- Interdisciplinary Program in Smart Agriculture, College of Agriculture and Life Sciences, Kangwon National University, Chuncheon, South Korea
- H. J. Kim
- Interdisciplinary Program in Smart Agriculture, College of Agriculture and Life Sciences, Kangwon National University, Chuncheon, South Korea
- Department of Biosystems and Biomaterials Engineering, College of Agriculture and Life Sciences, Seoul National University, Seoul, South Korea
- N. Jahan
- Department of Agricultural and Biosystems Engineering, North Dakota State University, Fargo, ND, United States
- J. Mathew
- Department of Agricultural and Biosystems Engineering, North Dakota State University, Fargo, ND, United States
- S. Shreya
- Department of Electrical and Computer Engineering, North Dakota State University, Fargo, ND, United States
|
126
|
Cubical Homology-Based Machine Learning: An Application in Image Classification. AXIOMS 2022. [DOI: 10.3390/axioms11030112] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Persistent homology is a powerful tool in topological data analysis (TDA) to efficiently compute, study, and encode multi-scale topological features, and it is being increasingly used in digital image classification. The topological features represent a number of connected components, cycles, and voids that describe the shape of data. Persistent homology extracts the birth and death of these topological features through a filtration process. The lifespan of these features can be represented using persistence diagrams (topological signatures). Cubical homology is a more efficient method for extracting topological features from a 2D image and uses a collection of cubes to compute the homology, which fits the digital image structure of grids. In this research, we propose a cubical homology-based algorithm for extracting topological features from 2D images to generate their topological signatures. Additionally, we propose a novel score measure, which measures the significance of each of the sub-simplices in terms of persistence. In addition, the gray-level co-occurrence matrix (GLCM) and contrast limited adaptive histogram equalization (CLAHE) are used as supplementary methods for extracting features. Supervised machine learning models are trained on selected image datasets to study the efficacy of the extracted topological features. Among the eight tested models with six published image datasets of varying pixel sizes, classes, and distributions, our experiments demonstrate that cubical homology-based machine learning with the deep residual network (ResNet 1D) and Light Gradient Boosting Machine (lightGBM) shows promise with the extracted topological features.
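The connected-component (0-dimensional) part of cubical persistence can be illustrated with a union-find pass over the sublevel-set filtration of a grayscale image: components are born at local minima and die when they merge into an older component (the elder rule). This toy version is for intuition only; it omits the higher-dimensional cycles and the scoring used in the paper.

```python
import numpy as np

def persistence_0d(img):
    """0-dimensional persistence pairs (birth, death) of the sublevel-set
    filtration of a grayscale image, via union-find over 4-neighbours."""
    h, w = img.shape
    order = np.argsort(img, axis=None)       # add pixels in value order
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]    # path halving
            i = parent[i]
        return i

    for flat in order:
        r, c = divmod(int(flat), w)
        v = img[r, c]
        parent[(r, c)] = (r, c)
        birth[(r, c)] = v
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (nr, nc) in parent:
                a, b = find((r, c)), find((nr, nc))
                if a != b:
                    # elder rule: the younger component dies at value v
                    young, old = (a, b) if birth[a] >= birth[b] else (b, a)
                    pairs.append((birth[young], v))
                    parent[young] = old
    # the surviving component is born at the global minimum, never dies
    pairs.append((img.min(), np.inf))
    return pairs

img = np.array([[0., 5., 1.],
                [5., 5., 5.],
                [2., 5., 9.]])
pairs = persistence_0d(img)   # includes the pairs (1, 5) and (2, 5)
```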
|
127
|
Sethy PK, Behera SK. Automatic classification with concatenation of deep and handcrafted features of histological images for breast carcinoma diagnosis. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:9631-9643. [DOI: 10.1007/s11042-021-11756-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Revised: 09/27/2021] [Accepted: 11/22/2021] [Indexed: 08/02/2023]
|
128
|
Development of a computer-aided tool for detection of COVID-19 pneumonia from CXR images using machine learning algorithm. JOURNAL OF RADIATION RESEARCH AND APPLIED SCIENCES 2022. [PMCID: PMC8841229 DOI: 10.1016/j.jrras.2022.02.002] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
The novel coronavirus (SARS-CoV-2) is spreading rapidly worldwide, and it has become a great risk for human beings. To curb community transmission of this virus, rapid detection and identification of affected people via a quick diagnostic process are necessary. Studies have shown that most COVID-19 victims endure lung disease. For rapid identification of affected patients, chest CT scans and X-ray images have been reported to be suitable techniques. However, chest X-ray (CXR) is more convenient than CT imaging because it has faster imaging times and is also simple and cost-effective. The literature shows that transfer learning is one of the most successful techniques to analyze chest X-ray images and correctly identify various types of pneumonia. Since SVM can provide good results even with a small data set, in this study we used the SVM machine learning algorithm to diagnose COVID-19 from chest X-ray images. RGB image processing and SqueezeNet models were used to obtain more images for diagnosis from the available data set. Our adopted model shows an accuracy of 98.8% in detecting COVID-19-affected patients from CXR images. It is expected that our proposed computer-aided detection tool (CAT) will play a key role in reducing the spread of infectious diseases in society through a faster patient screening process.
|
129
|
Sethy PK. Identification of wheat tiller based on AlexNet-feature fusion. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:8309-8316. [DOI: 10.1007/s11042-022-12286-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/07/2021] [Revised: 11/22/2021] [Accepted: 01/14/2022] [Indexed: 08/02/2023]
|
130
|
Subhalakshmi RT, Balamurugan SAA, Sasikala S. Deep learning based fusion model for COVID-19 diagnosis and classification using computed tomography images. CONCURRENT ENGINEERING, RESEARCH, AND APPLICATIONS 2022; 30:116-127. [PMID: 35382156 PMCID: PMC8968394 DOI: 10.1177/1063293x211021435] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
Recently, the COVID-19 pandemic has grown drastically, with only a limited quantity of rapid testing kits available. Therefore, automated COVID-19 diagnosis models are essential to identify the existence of disease from radiological images. Earlier studies have focused on the development of Artificial Intelligence (AI) techniques using X-ray images for COVID-19 diagnosis. This paper aims to develop a Deep Learning Based MultiModal Fusion technique called DLMMF for COVID-19 diagnosis and classification from Computed Tomography (CT) images. The proposed DLMMF model operates on three main processes, namely Wiener Filtering (WF) based pre-processing, feature extraction, and classification. The proposed model incorporates the fusion of deep features using VGG16 and Inception v4 models. Finally, a Gaussian Naïve Bayes (GNB) based classifier is applied to identify and classify the test CT images into distinct class labels. The experimental validation of the DLMMF model takes place using the open-source COVID-CT dataset, which comprises a total of 760 CT images. The experimental outcome showed superior performance with a maximum sensitivity of 96.53%, specificity of 95.81%, accuracy of 96.81% and F-score of 96.73%.
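The final GNB stage has a compact closed form: per-class Gaussian likelihoods for each feature plus class priors. The sketch below fits it on synthetic stand-ins for the fused deep features; only the classifier itself reflects the abstract, the data are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def gnb_fit(X, y):
    """Gaussian Naive Bayes: per-class feature means, variances, priors."""
    return {c: (X[y == c].mean(0),
                X[y == c].var(0) + 1e-9,       # variance floor for stability
                np.mean(y == c))
            for c in np.unique(y)}

def gnb_predict(stats, X):
    """Pick the class maximising log-likelihood plus log-prior."""
    scores, keys = [], list(stats.keys())
    for c in keys:
        mu, var, prior = stats[c]
        ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(1)
        scores.append(ll + np.log(prior))
    return np.array(keys)[np.argmax(scores, axis=0)]

# Placeholder "fused deep features" for non-COVID (0) vs COVID (1) CT images.
X = np.vstack([rng.normal(0, 1, (80, 6)), rng.normal(3, 1, (80, 6))])
y = np.repeat([0, 1], 80)
stats = gnb_fit(X, y)
acc = np.mean(gnb_predict(stats, X) == y)
```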
Affiliation(s)
- RT Subhalakshmi
- Department of Information Technology, Sethu Institute of Technology, Virudhunagar, Tamil Nadu, India
- S Appavu alias Balamurugan
- Department of Computer Science, Central University of Tamil Nadu, Thiruvarur, Tamil Nadu, India
- S Appavu alias Balamurugan, Department of Computer Science, Central University of Tamil Nadu, Thiruvarur – 610 005, Tamilnadu, India.
- S Sasikala
- Department of Computer Science and Engineering, Velammal College of Engineering and Technology, Madurai, Tamil Nadu, India
|
131
|
Abstract
Cereals are an important and major source of the human diet. They constitute more than two-thirds of the world's food source and cover more than 56% of the world's cultivatable land. These important sources of food are affected by a variety of damaging diseases, causing significant loss in annual production. In this regard, detection of diseases at an early stage and quantification of their severity have acquired the urgent attention of researchers worldwide. One emerging and popular approach for this task is the utilization of machine learning techniques. In this work, we have identified the most common and damaging diseases affecting cereal crop production, and we also reviewed 45 works performed on the detection and classification of various diseases occurring on six cereal crops within the past five years. In addition, we identified and summarised numerous publicly available datasets for each cereal crop, the scarcity of which we identified as the main challenge faced in researching the application of machine learning to cereal crop disease detection. In this survey, we identified deep convolutional neural networks trained on hyperspectral data as the most effective approach for early detection of diseases, and transfer learning as the most commonly used training method and the one yielding the best results.
|
132
|
Two-Stage Hybrid Approach of Deep Learning Networks for Interstitial Lung Disease Classification. BIOMED RESEARCH INTERNATIONAL 2022; 2022:7340902. [PMID: 35155680 PMCID: PMC8826206 DOI: 10.1155/2022/7340902] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/02/2021] [Revised: 01/14/2022] [Accepted: 01/21/2022] [Indexed: 11/18/2022]
Abstract
High-resolution computed tomography (HRCT) images in interstitial lung disease (ILD) screening can help improve healthcare quality. However, most of the earlier ILD classification work involves time-consuming manual identification of the region of interest (ROI) from the lung HRCT image before applying the deep learning classification algorithm. This paper has developed a two-stage hybrid approach of deep learning networks for ILD classification. A conditional generative adversarial network (c-GAN) has segmented the lung part from the HRCT images at the first stage. The c-GAN with multiscale feature extraction module has been used for accurate lung segmentation from the HRCT images with lung abnormalities. At the second stage, a pretrained ResNet50 has been used to extract the features from the segmented lung image for classification into six ILD classes using the support vector machine classifier. The proposed two-stage algorithm takes a whole HRCT as input eliminating the need for extracting the ROI and classifies the given HRCT image into an ILD class. The performance of the proposed two-stage deep learning network-based ILD classifier has improved considerably due to the stage-wise improvement of deep learning algorithm performance.
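The second-stage SVM on pooled ResNet50 features can be illustrated with a linear SVM trained by subgradient descent on the hinge loss. This is a sketch of the classifier family on synthetic features, not the authors' trained multi-class model.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_svm(X, y, lam=0.01, lr=0.05, epochs=300):
    """Linear SVM via subgradient descent on the regularised hinge loss.
    Labels y must be in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                     # margin violators
        gw = lam * w - (y[mask, None] * X[mask]).sum(0) / len(X)
        gb = -y[mask].sum() / len(X)
        w -= lr * gw
        b -= lr * gb
    return w, b

# Placeholder pooled features from segmented lungs of two ILD classes.
X = np.vstack([rng.normal(-1, 1, (60, 8)), rng.normal(1, 1, (60, 8))])
y = np.repeat([-1.0, 1.0], 60)
w, b = linear_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
```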
|
133
|
Ha YJ, Lee G, Yoo M, Jung S, Yoo S, Kim J. Feasibility study of multi-site split learning for privacy-preserving medical systems under data imbalance constraints in COVID-19, X-ray, and cholesterol dataset. Sci Rep 2022; 12:1534. [PMID: 35087165 PMCID: PMC8795162 DOI: 10.1038/s41598-022-05615-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2021] [Accepted: 01/11/2022] [Indexed: 11/09/2022] Open
Abstract
It seems as though progressively more people are in the race to upload content, data, and information online, and hospitals have not neglected this trend either. Hospitals are now at the forefront of multi-site medical data sharing, providing ground-breaking advancements in the way health records are shared and patients are diagnosed. Sharing of medical data is essential in modern medical research. Yet, as with all data-sharing technology, the challenge is to balance improved treatment with protecting patients' personal information. This paper provides a novel split learning algorithm, coined "multi-site split learning", which enables a secure transfer of medical data between multiple hospitals without fear of exposing personal data contained in patient records. It also explores the effects of varying the number of end-systems and the ratio of data imbalance on deep learning performance. A guideline for the optimal configuration of split learning that ensures privacy of patient data whilst maintaining performance is given empirically. We argue the benefits of our multi-site split learning algorithm, especially regarding privacy preservation, using CT scans of COVID-19 patients, X-ray bone scans, and cholesterol-level medical data.
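The core privacy mechanism of split learning can be sketched in a few lines: each hospital runs only the first half of the network locally, and only the cut-layer ("smashed") activations travel to the shared server half. Layer sizes, weights, and the two toy "hospital" batches below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(42)
W_client = rng.normal(scale=0.1, size=(16, 8))   # client-side (hospital) weights
W_server = rng.normal(scale=0.1, size=(8, 2))    # server-side weights

def client_forward(x):
    """Runs at the hospital; raw patient records never leave this function."""
    return np.maximum(x @ W_client, 0.0)          # ReLU cut-layer activations

def server_forward(smashed):
    """Runs at the central server on smashed activations only."""
    logits = smashed @ W_server
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)       # softmax class probabilities

# two sites with differently sized (imbalanced) local batches
batches = [rng.normal(size=(6, 16)), rng.normal(size=(2, 16))]
probs = [server_forward(client_forward(b)) for b in batches]
```

Training would additionally send gradients back across the cut; only activations and gradients, never raw data, cross the site boundary.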
Affiliation(s)
- Yoo Jeong Ha: Korea University, School of Electrical Engineering, Seoul 02841, Republic of Korea
- Gusang Lee: Korea University, School of Electrical Engineering, Seoul 02841, Republic of Korea
- Minjae Yoo: Korea University, School of Electrical Engineering, Seoul 02841, Republic of Korea
- Soyi Jung: Hallym University, School of Software, Chuncheon 24252, Republic of Korea
- Seehwan Yoo: Department of Mobile Systems Engineering, Dankook University, Yongin 16890, Republic of Korea
- Joongheon Kim: Korea University, School of Electrical Engineering, Seoul 02841, Republic of Korea
|
134
|
COVID-19 Detection in Chest X-ray Images Using a New Channel Boosted CNN. Diagnostics (Basel) 2022; 12:diagnostics12020267. [PMID: 35204358 PMCID: PMC8871483 DOI: 10.3390/diagnostics12020267] [Citation(s) in RCA: 28] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2021] [Revised: 01/07/2022] [Accepted: 01/16/2022] [Indexed: 02/01/2023] Open
Abstract
COVID-19 is a respiratory illness that has affected a large population worldwide and continues to have devastating consequences. It is imperative to detect COVID-19 at the earliest opportunity to limit the spread of infection. In this work, we developed a new CNN architecture, STM-RENet, to interpret radiographic patterns in X-ray images. The proposed STM-RENet is a block-based CNN that employs the idea of split-transform-merge in a new way, through a new convolutional block, STM, that implements region- and edge-based operations both separately and jointly. The systematic use of region and edge implementations in combination with convolutional operations helps in exploring region homogeneity, intensity inhomogeneity, and boundary-defining features. The learning capacity of STM-RENet is further enhanced in the new CB-STM-RENet, which exploits channel boosting and learns textural variations to effectively screen X-ray images for COVID-19 infection. Channel boosting is realised by generating auxiliary channels from two additional CNNs using transfer learning, which are then concatenated to the original channels of the proposed STM-RENet. The proposed CB-STM-RENet shows a significant performance improvement over standard CNNs on three datasets, especially on the stringent CoV-NonCoV-15k dataset. The good detection rate (97%), accuracy (96.53%), and reasonable F-score (95%) of the proposed technique suggest that it can be adapted to detect COVID-19-infected patients.
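The channel-boosting step described above is, at its core, a channel-wise concatenation of auxiliary feature maps with the original channels. A minimal sketch, with NCHW shapes and channel counts that are illustrative assumptions rather than the paper's configuration:

```python
import numpy as np

def boost_channels(original, aux_maps):
    """Concatenate auxiliary channels (from transfer-learned CNNs) to the
    original channels along the channel axis (NCHW layout assumed)."""
    return np.concatenate([original, *aux_maps], axis=1)

x = np.zeros((1, 3, 8, 8))                  # original image channels
aux1 = np.ones((1, 16, 8, 8))               # channels from auxiliary CNN #1
aux2 = np.ones((1, 16, 8, 8))               # channels from auxiliary CNN #2
boosted = boost_channels(x, [aux1, aux2])   # boosted input for the main CNN
```

The main network then trains on the boosted tensor, so it can exploit representations the auxiliary networks learned on related data.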
|
135
|
Lu W, Du R, Niu P, Xing G, Luo H, Deng Y, Shu L. Soybean Yield Preharvest Prediction Based on Bean Pods and Leaves Image Recognition Using Deep Learning Neural Network Combined With GRNN. FRONTIERS IN PLANT SCIENCE 2022; 12:791256. [PMID: 35095964 PMCID: PMC8792930 DOI: 10.3389/fpls.2021.791256] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/08/2021] [Accepted: 12/08/2021] [Indexed: 06/14/2023]
Abstract
Soybean yield is a highly complex trait determined by multiple factors such as genotype, environment, and their interactions, and the earlier it can be predicted during the growing season, the better. Accurate soybean yield prediction is important for germplasm innovation and for improving planting environment factors. Until now, however, soybean yield has been determined by manual weighing after harvest, which is time-consuming, costly, and imprecise. This paper proposes an in-field soybean yield prediction method based on image recognition of bean pods and leaves using a deep learning algorithm combined with a generalized regression neural network (GRNN). A faster region-convolutional neural network (Faster R-CNN), feature pyramid network (FPN), single shot multibox detector (SSD), and You Only Look Once (YOLOv3) were evaluated for bean pod recognition, achieving precisions of 86.2%, 89.8%, 80.1%, and 87.4% at 13, 7, 24, and 39 frames per second (FPS), respectively. YOLOv3 was therefore selected, considering both recognition precision and speed. To enhance detection performance, YOLOv3 was improved by changing the IoU loss function, applying an anchor-frame clustering algorithm, and utilizing a partial neural network structure, which raised recognition precision to 90.3%. To improve yield prediction precision further, leaves were identified and counted, and pods were classified by the improved YOLOv3 into one-, two-, three-, four-, and five-seed types, because seed weight varies by pod type. In addition, soybean seed number prediction models for each planter were built using PLSR, BP, and GRNN, taking the counts of each pod type and the leaf count as inputs; their prediction accuracies were 96.24%, 96.97%, and 97.50%, respectively. Finally, the soybean yield of each planter was obtained by accumulating the weight of all soybean pod types, and the average accuracy was up to 97.43%.
The results show that it is feasible to predict the soybean yield of plants in situ with high precision by fusing the number of leaves and the numbers of the different soybean pod types recognized by a deep neural network combined with a GRNN, which can speed up germplasm innovation and the optimization of planting environmental factors.
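The GRNN used in the final prediction step is essentially kernel-weighted regression: the output is a Gaussian-weighted average of the training targets. A minimal sketch, where the input layout (pod-type counts plus leaf count per planter), the toy training data, and the bandwidth sigma are illustrative assumptions:

```python
import numpy as np

def grnn_predict(x, train_X, train_y, sigma=0.5):
    """GRNN prediction: Gaussian-kernel-weighted average of training targets."""
    d2 = ((train_X - x) ** 2).sum(axis=1)    # squared distances to patterns
    w = np.exp(-d2 / (2.0 * sigma ** 2))     # Gaussian kernel weights
    return float(w @ train_y / w.sum())      # weighted average of targets

# toy training set: [n_1seed, n_2seed, n_3seed, n_leaves] -> total seed count
train_X = np.array([[10, 5, 2, 40], [20, 8, 4, 60], [5, 2, 1, 25]], float)
train_y = np.array([36.0, 64.0, 16.0])
pred = grnn_predict(np.array([10, 5, 2, 40], float), train_X, train_y, sigma=1.0)
```

Unlike back-propagation networks, a GRNN has no iterative training; sigma is its only tunable parameter.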
Affiliation(s)
- Wei Lu: College of Artificial Intelligence, Nanjing Agricultural University, Nanjing, China
- Rongting Du: College of Artificial Intelligence, Nanjing Agricultural University, Nanjing, China
- Pengshuai Niu: College of Artificial Intelligence, Nanjing Agricultural University, Nanjing, China
- Guangnan Xing: College of Agriculture, Nanjing Agricultural University, Nanjing, China
- Hui Luo: College of Artificial Intelligence, Nanjing Agricultural University, Nanjing, China
- Yiming Deng: College of Engineering, Michigan State University, East Lansing, MI, United States
- Lei Shu: College of Artificial Intelligence, Nanjing Agricultural University, Nanjing, China
|
136
|
Underwater Fish Detection and Counting Using Mask Regional Convolutional Neural Network. WATER 2022. [DOI: 10.3390/w14020222] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Fish production has become a bottleneck in the development of fish farming, and one of the issues encountered throughout the hatching process is the counting procedure. Previous research has mainly depended on non-machine-learning-based and machine-learning-based counting methods and has been unable to provide precise results. In this work, we used a robotic eye camera to capture shrimp photos on a shrimp farm to train the model. The image data were classified into three categories based on the density of shrimps: low, medium, and high. We used a parameter calibration strategy to discover the appropriate parameters and proposed an improved Mask Regional Convolutional Neural Network (Mask R-CNN) model. As a result, the enhanced Mask R-CNN model can reach an accuracy rate of up to 97.48%.
|
137
|
Nasiri H, Alavi SA. A Novel Framework Based on Deep Learning and ANOVA Feature Selection Method for Diagnosis of COVID-19 Cases from Chest X-Ray Images. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:4694567. [PMID: 35013680 PMCID: PMC8742147 DOI: 10.1155/2022/4694567] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/12/2021] [Accepted: 12/20/2021] [Indexed: 12/12/2022]
Abstract
Background and Objective. The new coronavirus disease (known as COVID-19) was first identified in Wuhan and quickly spread worldwide, wreaking havoc on the economy and people's everyday lives. As the number of COVID-19 cases is rapidly increasing, a reliable detection technique is needed to identify affected individuals and care for them in the early stages of COVID-19 and reduce the virus's transmission. The most accessible method for COVID-19 identification is Reverse Transcriptase-Polymerase Chain Reaction (RT-PCR); however, it is time-consuming and has false-negative results. These limitations encouraged us to propose a novel framework based on deep learning that can aid radiologists in diagnosing COVID-19 cases from chest X-ray images. Methods. In this paper, a pretrained network, DenseNet169, was employed to extract features from X-ray images. Features were chosen by a feature selection method, i.e., analysis of variance (ANOVA), to reduce computations and time complexity while overcoming the curse of dimensionality to improve accuracy. Finally, selected features were classified by the eXtreme Gradient Boosting (XGBoost). The ChestX-ray8 dataset was employed to train and evaluate the proposed method. Results and Conclusion. The proposed method reached 98.72% accuracy for two-class classification (COVID-19, No-findings) and 92% accuracy for multiclass classification (COVID-19, No-findings, and Pneumonia). The proposed method's precision, recall, and specificity rates on two-class classification were 99.21%, 93.33%, and 100%, respectively. Also, the proposed method achieved 94.07% precision, 88.46% recall, and 100% specificity for multiclass classification. The experimental results show that the proposed framework outperforms other methods and can be helpful for radiologists in the diagnosis of COVID-19 cases.
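The ANOVA feature-selection step described above ranks each feature by the ratio of its between-class to within-class variance (the one-way F-statistic) and keeps the top-scoring features. A minimal numpy sketch; the synthetic data and feature count are illustrative assumptions standing in for the paper's DenseNet169 features:

```python
import numpy as np

def anova_f_scores(X, y):
    """One-way ANOVA F-statistic for every feature (column) of X."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    ss_between = sum((y == c).sum() * (X[y == c].mean(axis=0) - overall) ** 2
                     for c in classes)
    ss_within = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0)
                    for c in classes)
    df_b, df_w = len(classes) - 1, len(y) - len(classes)
    return (ss_between / df_b) / (ss_within / df_w)

def select_top_k(X, y, k):
    """Indices of the k highest-scoring features."""
    return np.argsort(anova_f_scores(X, y))[::-1][:k]

rng = np.random.default_rng(0)
y = np.array([0] * 20 + [1] * 20)      # two classes: COVID-19 / No-findings
X = rng.normal(size=(40, 6))           # six synthetic "deep features"
X[:, 2] += y * 5.0                     # feature 2 carries the class signal
best = select_top_k(X, y, 1)
```

The selected columns would then be passed to the XGBoost classifier in place of the full feature set.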
Affiliation(s)
- Hamid Nasiri: Department of Computer Engineering, Amirkabir University of Technology, Tehran, Iran
- Seyed Ali Alavi: Electrical and Computer Engineering Department, Semnan University, Semnan, Iran
|
138
|
Ensemble Averaging of Transfer Learning Models for Identification of Nutritional Deficiency in Rice Plant. ELECTRONICS 2022. [DOI: 10.3390/electronics11010148] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
Computer vision-based automation has become popular in detecting and monitoring plants' nutrient deficiencies in recent times. The predictive models developed by various researchers were designed so that they could be used in embedded systems, keeping in mind the limited availability of computational resources. Nevertheless, the enormous popularity of smartphone technology has opened the door for ordinary farmers to access high computing resources. To facilitate smartphone users, this study proposes a framework in which high-end systems are hosted in the cloud, where processing can be done, and farmers interact with the cloud-based system. With the availability of high computational power, many studies have focused on applying convolutional neural network-based deep learning (CNN-based DL) architectures, including transfer learning (TL) models, to agricultural research. Ensembling various TL architectures has the potential to improve the performance of predictive models to a great extent. In this work, six TL architectures, viz. InceptionV3, ResNet152V2, Xception, DenseNet201, InceptionResNetV2, and VGG19, are considered, and various ensembles of them are used to carry out the task of deficiency diagnosis in rice plants. Two publicly available datasets from Mendeley and Kaggle are used in this study. The ensemble-based architecture enhanced the highest classification accuracy from 99.17% to 100% on the Mendeley dataset and from 90% to 92% on the Kaggle dataset.
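Ensemble averaging of transfer-learning models is "soft voting": the class-probability outputs of the member models are averaged and the argmax taken. A minimal sketch; the probability values below are made up for illustration and do not come from the paper:

```python
import numpy as np

def ensemble_average(prob_list):
    """Average the class-probability outputs of several models (soft voting)."""
    return np.mean(np.stack(prob_list), axis=0)

def ensemble_predict(prob_list):
    """Predicted class index per sample from the averaged probabilities."""
    return np.argmax(ensemble_average(prob_list), axis=1)

p_inception = np.array([[0.60, 0.30, 0.10]])   # e.g. InceptionV3 softmax output
p_resnet    = np.array([[0.20, 0.70, 0.10]])   # e.g. ResNet152V2
p_xception  = np.array([[0.10, 0.80, 0.10]])   # e.g. Xception
label = ensemble_predict([p_inception, p_resnet, p_xception])
```

Averaging tends to cancel the uncorrelated errors of individual models, which is why the ensemble can beat its best member.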
|
139
|
Vijayakumar K, Rajinikanth V, Kirubakaran MK. Automatic detection of breast cancer in ultrasound images using Mayfly algorithm optimized handcrafted features. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2022; 30:751-766. [PMID: 35527619 DOI: 10.3233/xst-221136] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
BACKGROUND The incidence rate of breast cancer among women is progressively rising, and early diagnosis is necessary to detect and cure the disease. OBJECTIVE To develop a novel automated disease detection framework to examine Breast-Ultrasound-Images (BUI). METHODS This scheme includes the following stages: (i) image acquisition and resizing, (ii) Gaussian filter-based pre-processing, (iii) handcrafted feature extraction, (iv) optimal feature selection with the Mayfly Algorithm (MA), and (v) binary classification and validation. The dataset includes BUI extracted from 133 normal, 445 benign, and 210 malignant cases. Each BUI is resized to 256×256×1 pixels, and the resized BUIs are used to develop and test the new scheme. Handcrafted feature-based cancer detection is employed, considering features such as entropies, the Local-Binary-Pattern (LBP), and Hu moments. To avoid over-fitting, a feature reduction procedure is also implemented with MA, and the reduced feature subset is used to train and validate the classifiers developed in this research. RESULTS Experiments were performed to classify BUIs between (i) normal and benign, (ii) normal and malignant, and (iii) benign and malignant cases. The results show that a classification accuracy of > 94%, precision of > 92%, sensitivity of > 92%, and specificity of > 90% are achieved with the developed framework. CONCLUSION In this work, a machine-learning scheme is employed to detect and classify the disease using BUI and achieves promising results. In future work, we will test the feasibility of adding a deep-learning method to this framework to further improve detection accuracy.
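Two of the handcrafted features named above can be sketched in a few lines of numpy: Shannon entropy of the grey-level histogram and basic 3x3 local binary pattern (LBP) codes. This is a simplified illustration, not the paper's exact feature pipeline (which also includes Hu moments and Mayfly-based selection):

```python
import numpy as np

def shannon_entropy(img, bins=256):
    """Shannon entropy (bits) of an 8-bit image's grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def lbp_codes(img):
    """Basic 3x3 local binary pattern codes for interior pixels."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= (n >= c).astype(np.uint8) << bit   # one bit per neighbour
    return codes

flat = np.full((8, 8), 100, dtype=np.uint8)   # a featureless test patch
codes = lbp_codes(flat)
```

A histogram of the LBP codes, concatenated with entropy values and Hu moments, would form the feature vector passed to the selection and classification stages.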
Affiliation(s)
- K Vijayakumar: Department of Computer Science and Engineering, St. Joseph's Institute of Technology, Chennai, Tamilnadu, India
- V Rajinikanth: Department of Electronics and Instrumentation Engineering, St. Joseph's College of Engineering, Chennai, Tamilnadu, India
- M K Kirubakaran: Department of Information Technology, St. Joseph's Institute of Technology, Chennai, Tamilnadu, India
|
140
|
Rajesh Kannan S, Sivakumar J, Ezhilarasi P. Automatic detection of COVID-19 in chest radiographs using serially concatenated deep and handcrafted features. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2022; 30:231-244. [PMID: 34924434 DOI: 10.3233/xst-211050] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Since the occurrence rate of infectious diseases in the human community is gradually rising for varied reasons, appropriate diagnosis and treatment are essential to control their spread. The recently discovered COVID-19 is one such contagious disease, which has infected numerous people globally and is being contained through several diagnostic and management actions. Medical image-supported diagnosis of COVID-19 infection is an approved clinical practice. This research aims to develop a new Deep Learning Method (DLM) to detect COVID-19 infection using chest X-rays. The proposed work implemented two methods for detecting COVID-19 infection: (i) Firefly Algorithm (FA)-optimized deep features and (ii) combined deep and machine features optimized with FA. A 5-fold cross-validation method is employed to train and test the detection methods. The performance of this system is analyzed individually, confirming that the deep feature-based technique achieves a detection accuracy of > 92% with an SVM-RBF classifier, and that combining deep and machine features achieves > 96% accuracy with a Fine KNN classifier. In the future, this technique may play a vital role in testing and validating X-ray images collected from patients suffering from infectious diseases.
Affiliation(s)
- J Sivakumar: St. Joseph's College of Engineering, OMR, Chennai, India
- P Ezhilarasi: St. Joseph's College of Engineering, OMR, Chennai, India
|
141
|
Sanket S, Vergin Raja Sarobin M, Jani Anbarasi L, Thakor J, Singh U, Narayanan S. Detection of novel coronavirus from chest X-rays using deep convolutional neural networks. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:22263-22288. [PMID: 34512112 PMCID: PMC8423603 DOI: 10.1007/s11042-021-11257-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/21/2020] [Revised: 06/06/2021] [Accepted: 07/08/2021] [Indexed: 05/11/2023]
Abstract
With over 172 million people infected with the novel coronavirus (COVID-19) globally and the numbers increasing exponentially, the dire need for a fast diagnostic system keeps surging. With a shortage of test kits and a disease made deadlier by its rapidly mutating and contagious nature, overworked physicians need a fast diagnostic method to cater to the soaring number of infected patients. Laboratory testing has turned out to be arduous and cost-ineffective, requiring a well-equipped laboratory for analysis. This paper proposes a convolutional neural network (CNN)-based model for the analysis and detection of COVID-19, dubbed CovCNN, which uses the patient's chest X-ray images for the diagnosis of COVID-19, with the aim of assisting medical practitioners in expediting the diagnostic process under high-workload conditions. The proposed CovCNN model incorporates a novel deep-CNN architecture with multiple folds of CNN, utilizing depthwise convolutions with varying dilation rates to efficiently extract diversified features from chest X-rays. The dataset comprised 657 chest X-rays, of which 219 were images of patients infected with COVID-19 and the remaining were images of non-COVID-19 (i.e., normal or COVID-19-negative) patients. Further, performance on the dataset using different pre-trained models has been analyzed based on the loss and accuracy curves. The experimental results show that the highest classification accuracy (98.4%) is achieved using the proposed CovCNN model.
Affiliation(s)
- Shashwat Sanket: School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India
- M. Vergin Raja Sarobin: School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India
- L. Jani Anbarasi: School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India
- Jayraj Thakor: School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India
- Urmila Singh: School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India
- Sathiya Narayanan: School of Electronics Engineering, Vellore Institute of Technology, Chennai, India
|
142
|
Demir F, Demir K, Şengür A. DeepCov19Net: Automated COVID-19 Disease Detection with a Robust and Effective Technique Deep Learning Approach. NEW GENERATION COMPUTING 2022; 40:1053-1075. [PMID: 35035024 PMCID: PMC8753945 DOI: 10.1007/s00354-021-00152-0] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/15/2021] [Accepted: 12/26/2021] [Indexed: 05/17/2023]
Abstract
The new type of coronavirus disease, called COVID-19, which has spread from Wuhan, China since the beginning of 2020, has caused many deaths and cases in most countries and has reached a global pandemic scale. In addition to test kits, X-ray imaging of the lungs has frequently been used in the detection of COVID-19 cases. In the proposed method, a novel approach based on a deep learning model named DeepCovNet was utilized to classify chest X-ray images containing COVID-19, normal (healthy), and pneumonia classes. The convolutional-autoencoder model, which has convolutional layers in its encoder and decoder blocks, was trained from scratch on the processed chest X-ray images for deep feature extraction. The distinctive features were selected from the deep feature set with a novel and robust algorithm named SDAR. In the classification stage, an SVM classifier with various kernel functions was used to evaluate the classification performance of the proposed method, and the hyperparameters of the SVM classifier were optimized with a Bayesian algorithm to increase classification accuracy. Specificity, sensitivity, precision, and F-score were used as performance metrics in addition to accuracy, which was the main criterion. The proposed method, with an accuracy of 99.75%, outperformed the other deep learning-based approaches.
Affiliation(s)
- Fatih Demir: Biomedical Department, Vocational School of Technical Sciences, Firat University, Elazig, Turkey
- Kürşat Demir: Mechatronics Engineering Department, Technology Faculty, Firat University, Elazig, Turkey
- Abdulkadir Şengür: Electrical-Electronics Engineering Department, Technology Faculty, Firat University, Elazig, Turkey
|
143
|
Malik H, Anees T. BDCNet: multi-classification convolutional neural network model for classification of COVID-19, pneumonia, and lung cancer from chest radiographs. MULTIMEDIA SYSTEMS 2022; 28:815-829. [PMID: 35068705 PMCID: PMC8763428 DOI: 10.1007/s00530-021-00878-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/16/2021] [Accepted: 12/07/2021] [Indexed: 05/08/2023]
Abstract
Globally, coronavirus disease (COVID-19) has badly affected the medical system and the economy. The deadly COVID-19 sometimes shares symptoms with other chest diseases such as pneumonia and lung cancer, which can mislead doctors in diagnosing coronavirus. Frontline doctors and researchers are working assiduously to find a rapid and automatic process for the detection of COVID-19 at the initial stage, to save human lives. However, the clinical diagnosis of COVID-19 is highly subjective and variable. The objective of this study is to implement a multi-classification algorithm based on a deep learning (DL) model for identifying COVID-19, pneumonia, and lung cancer from chest radiographs. In the present study, we propose a model combining Vgg-19 and convolutional neural networks (CNN), named BDCNet, and apply it to different publicly available benchmark databases to diagnose COVID-19 and other chest diseases. To the best of our knowledge, this is the first study to diagnose these three chest diseases in a single deep learning model. We also computed and compared the classification accuracy of our proposed model with four well-known pre-trained models: ResNet-50, Vgg-16, Vgg-19, and Inception-v3. Our proposed model achieved an AUC of 0.9833 (with an accuracy of 99.10%, a recall of 98.31%, a precision of 99.9%, and an f1-score of 99.09%) in classifying the different chest diseases. Moreover, the CNN-based pre-trained models VGG-16, VGG-19, ResNet-50, and Inception-v3 achieved multi-disease classification accuracies of 97.35%, 97.14%, 97.15%, and 95.10%, respectively. The results revealed that our proposed model performs remarkably well compared with its competitor approaches, thus providing significant assistance to diagnostic radiographers and health experts.
Affiliation(s)
- Hassaan Malik: Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan; Department of Computer Science, National College of Business Administration & Economics Sub Campus Multan, Multan 60000, Pakistan
- Tayyaba Anees: Department of Software Engineering, University of Management and Technology, Lahore 54000, Pakistan
|
144
|
Wang C, Liu S, Wang Y, Xiong J, Zhang Z, Zhao B, Luo L, Lin G, He P. Application of Convolutional Neural Network-Based Detection Methods in Fresh Fruit Production: A Comprehensive Review. FRONTIERS IN PLANT SCIENCE 2022; 13:868745. [PMID: 35651761 PMCID: PMC9149381 DOI: 10.3389/fpls.2022.868745] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Accepted: 03/03/2022] [Indexed: 05/12/2023]
Abstract
As one of the representative algorithms of deep learning, the convolutional neural network (CNN), with the advantages of local perception and parameter sharing, has developed rapidly. CNN-based detection technology has been widely used in computer vision, natural language processing, and other fields. Fresh fruit production is an important socioeconomic activity, and CNN-based deep learning detection technology has been successfully applied to its key stages. To the best of our knowledge, this is the first review covering the whole fresh fruit production process. We first introduce the network architecture and implementation principle of CNNs and describe the training process of a CNN-based deep learning model in detail. A large number of articles were investigated that have made breakthroughs, using CNN-based deep learning detection technology, in important stages of fresh fruit production, including fruit flower detection, fruit detection, fruit harvesting, and fruit grading. CNN-based object detection is elaborated from data acquisition to model training, and different CNN-based detection methods are compared for each stage of fresh fruit production. The investigation shows that improved CNN deep learning models can realize their full detection potential when combined with the characteristics of each stage of fruit production. It also suggests that CNN-based detection may, in the future, overcome the challenges posed by environmental issues, the exploration of new areas, and the execution of multiple tasks in fresh fruit production.
Affiliation(s)
- Chenglin Wang: Faculty of Modern Agricultural Engineering, Kunming University of Science and Technology, Kunming, China; School of Intelligent Manufacturing Engineering, Chongqing University of Arts and Sciences, Chongqing, China
- Suchun Liu: School of Intelligent Manufacturing Engineering, Chongqing University of Arts and Sciences, Chongqing, China
- Yawei Wang: School of Intelligent Manufacturing Engineering, Chongqing University of Arts and Sciences, Chongqing, China
- Juntao Xiong (corresponding author): College of Mathematics and Informatics, South China Agricultural University, Guangzhou, China
- Zhaoguo Zhang (corresponding author): Faculty of Modern Agricultural Engineering, Kunming University of Science and Technology, Kunming, China
- Bo Zhao: Chinese Academy of Agricultural Mechanization Sciences, Beijing, China
- Lufeng Luo: School of Mechatronic Engineering and Automation, Foshan University, Foshan, China
- Guichao Lin: School of Mechanical and Electrical Engineering, Zhongkai University of Agriculture and Engineering, Guangzhou, China
- Peng He: School of Electronic and Information Engineering, Taizhou University, Taizhou, China
|
145
|
Sarv Ahrabi S, Piazzo L, Momenzadeh A, Scarpiniti M, Baccarelli E. Exploiting probability density function of deep convolutional autoencoders' latent space for reliable COVID-19 detection on CT scans. THE JOURNAL OF SUPERCOMPUTING 2022; 78:12024-12045. [PMID: 35228777 PMCID: PMC8867464 DOI: 10.1007/s11227-022-04349-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 01/30/2022] [Indexed: 05/04/2023]
Abstract
We present a probabilistic method for classifying chest computed tomography (CT) scans as COVID-19 or non-COVID-19. To this end, we design and train, in an unsupervised manner, a deep convolutional autoencoder (DCAE) on a selected training data set composed only of COVID-19 CT scans. Once the model is trained, the encoder can generate the compact hidden representation (the hidden feature vectors) of the training data set. Afterwards, we exploit the obtained hidden representation to build up the target probability density function (PDF) of the training data set by means of kernel density estimation (KDE). Subsequently, in the test phase, we feed a test CT into the trained encoder to produce the corresponding hidden feature vector, and then we utilise the target PDF to compute the corresponding PDF value of the test image. Finally, this value is compared to a threshold to assign the COVID-19 or non-COVID-19 label to the test image. We numerically check our approach's performance (i.e. test accuracy and training times) by comparing it with those of some state-of-the-art methods.
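The decision rule described above can be sketched compactly: fit a Gaussian KDE on the latent vectors the encoder produces for COVID-19 training scans, then label a test scan by thresholding its log-density. The latent dimension, bandwidth, synthetic vectors, and midpoint threshold below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def kde_log_density(z, train_z, bandwidth=1.0):
    """Log of the Gaussian-KDE density of latent vector z, numerically stable."""
    d = train_z.shape[1]
    sq = ((train_z - z) ** 2).sum(axis=1) / (2.0 * bandwidth ** 2)
    log_k = -sq - 0.5 * d * np.log(2.0 * np.pi * bandwidth ** 2)
    m = log_k.max()
    return m + np.log(np.exp(log_k - m).mean())    # log-mean-exp over kernels

def classify(z, train_z, threshold, bandwidth=1.0):
    """High density under the COVID-19 PDF -> COVID-19 label."""
    dens = kde_log_density(z, train_z, bandwidth)
    return "COVID-19" if dens >= threshold else "non-COVID-19"

rng = np.random.default_rng(1)
train_z = rng.normal(size=(200, 4))        # stand-in for encoder latent vectors
near, far = np.zeros(4), np.full(4, 8.0)   # in-distribution vs outlier latent
threshold = 0.5 * (kde_log_density(near, train_z) + kde_log_density(far, train_z))
```

In practice the threshold would be calibrated on validation data rather than taken as a midpoint between two example points.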
Affiliation(s)
- Sima Sarv Ahrabi: Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Via Eudossiana 18, 00184 Roma, Italy
- Lorenzo Piazzo: Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Via Eudossiana 18, 00184 Roma, Italy
- Alireza Momenzadeh: Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Via Eudossiana 18, 00184 Roma, Italy
- Michele Scarpiniti: Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Via Eudossiana 18, 00184 Roma, Italy
- Enzo Baccarelli: Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Via Eudossiana 18, 00184 Roma, Italy
|
146
|
Suganyadevi S, Seethalakshmi V, Balasamy K. A review on deep learning in medical image analysis. INTERNATIONAL JOURNAL OF MULTIMEDIA INFORMATION RETRIEVAL 2022; 11:19-38. [PMID: 34513553 PMCID: PMC8417661 DOI: 10.1007/s13735-021-00218-1] [Citation(s) in RCA: 48] [Impact Index Per Article: 24.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/02/2021] [Revised: 08/06/2021] [Accepted: 08/09/2021] [Indexed: 05/02/2023]
Abstract
Ongoing improvements in AI, particularly in deep learning techniques, are helping to identify, classify, and quantify patterns in clinical images. Deep learning is the fastest-growing field in artificial intelligence and has recently been applied effectively in many areas, including medicine. A brief outline is given of studies across the regions of application: neurological, brain, retinal, pulmonary, digital pathology, breast, cardiac, bone, abdominal, and musculoskeletal imaging. For information exploration, knowledge deployment, and knowledge-based prediction, deep learning networks can be successfully applied to big data. This paper presents fundamental information and state-of-the-art deep learning approaches in the field of medical image processing and analysis. Its primary goals are to survey research on medical image processing and to define and address the key guidelines identified.
Affiliation(s)
- S. Suganyadevi
- Department of ECE, KPR Institute of Engineering and Technology, Coimbatore, India
- V. Seethalakshmi
- Department of ECE, KPR Institute of Engineering and Technology, Coimbatore, India
- K. Balasamy
- Department of IT, Dr. Mahalingam College of Engineering and Technology, Coimbatore, India
|
147
|
Loraksa C, Mongkolsomlit S, Nimsuk N, Uscharapong M, Kiatisevi P. Effectiveness of Learning Systems from Common Image File Types to Detect Osteosarcoma Based on Convolutional Neural Networks (CNNs) Models. J Imaging 2021; 8:jimaging8010002. [PMID: 35049843 PMCID: PMC8779891 DOI: 10.3390/jimaging8010002] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2021] [Revised: 12/13/2021] [Accepted: 12/23/2021] [Indexed: 12/18/2022] Open
Abstract
Osteosarcoma is a rare bone cancer that is more common in children than in adults and has a high chance of metastasizing to the patient's lungs. Because cases are rare, the disease is difficult to diagnose, and lung nodules are hard to detect at an early stage. Convolutional neural networks (CNNs) can be applied effectively to early-stage detection from CT-scanned images. Transferring patients from small hospitals to the specialized cancer hospital, Lerdsin Hospital, complicates information sharing because of privacy and safety regulations; CD-ROM media was permitted for transferring patients' data to Lerdsin Hospital. Digital Imaging and Communications in Medicine (DICOM) files could not be stored on the CD-ROM, so they must be converted into other common image formats, such as BMP, JPG, and PNG. Because image quality can affect the accuracy of CNN models, this research studies the effect of different image formats experimentally. Three popular medical CNN models, VGG-16, ResNet-50, and MobileNet-V2, are used for osteosarcoma detection. Positive- and negative-class images were collected from Lerdsin Hospital; 80% of the images are used as the training dataset, while the rest are used to validate the trained models. Limited training data is simulated by reducing the number of images in the training dataset. Each model is trained and validated on the three image formats, resulting in 54 test cases. F1-score and accuracy are calculated and compared to assess model performance. VGG-16 is the most robust model across all formats, and PNG is the preferred image format, followed by BMP and JPG, respectively.
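The DICOM-to-common-format conversion step this abstract depends on can be sketched as follows. This is a minimal illustration assuming the pixel array has already been read (e.g. via pydicom's `dcmread(...).pixel_array`); the function name and min-max normalization are choices made here for illustration, not taken from the paper.

```python
import numpy as np
from PIL import Image

def dicom_pixels_to_image(pixels: np.ndarray, path: str) -> None:
    """Normalize a DICOM pixel array to 8-bit grayscale and save it.

    `pixels` would typically come from pydicom: pydicom.dcmread(f).pixel_array.
    The output format (PNG/BMP/JPG) is inferred by Pillow from the file
    extension, mirroring the format comparison studied in the paper.
    """
    lo, hi = pixels.min(), pixels.max()
    if hi == lo:
        scaled = np.zeros_like(pixels, dtype=np.uint8)
    else:
        # Min-max scale the (often 12/16-bit) intensities into 0..255.
        scaled = ((pixels - lo) / (hi - lo) * 255).astype(np.uint8)
    Image.fromarray(scaled).save(path)

# Example: a synthetic 16-bit CT-like slice saved losslessly as PNG.
slice16 = np.random.default_rng(0).integers(0, 4096, (64, 64)).astype(np.uint16)
dicom_pixels_to_image(slice16, "slice.png")
```

Note that PNG and BMP are lossless while JPG is lossy, which is consistent with the paper's finding that PNG is the preferred format.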
Affiliation(s)
- Chanunya Loraksa
- Medical Engineering, Faculty of Engineering, Thammasat University, Pathum Thani 12121, Thailand
- Correspondence: ; Tel.: +66-(0)63-241-5888
- Nitikarn Nimsuk
- Medical Engineering, Faculty of Engineering, Thammasat University, Pathum Thani 12121, Thailand
- Meenut Uscharapong
- Department of Medical Services, Lerdsin Hospital, Ministry of Public Health in Thailand, Bangkok 10500, Thailand
- Piya Kiatisevi
- Department of Medical Services, Lerdsin Hospital, Ministry of Public Health in Thailand, Bangkok 10500, Thailand
|
148
|
Meshram V, Patil K. FruitNet: Indian fruits image dataset with quality for machine learning applications. Data Brief 2021; 40:107686. [PMID: 34917715 PMCID: PMC8668825 DOI: 10.1016/j.dib.2021.107686] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2021] [Revised: 12/02/2021] [Accepted: 12/03/2021] [Indexed: 11/25/2022] Open
Abstract
Fast and precise fruit classification and quality grading is an unmet need of the agriculture business and an open research problem that continues to attract researchers. Machine learning and deep learning techniques have shown very promising results on classification and object detection problems, and a clean, well-curated dataset is the elementary requirement for building accurate and robust machine learning models for real-time environments. With this objective, we have created an image dataset, with quality labels, of Indian fruits that are highly consumed or exported. Accordingly, we considered six fruits, namely apple, banana, guava, lime, orange, and pomegranate. The dataset is divided into three folders, (1) good-quality fruits, (2) bad-quality fruits, and (3) mixed-quality fruits, each consisting of six fruit subfolders. In total, more than 19,500 images in processed format are available in the dataset. We strongly believe the proposed dataset will be very helpful for training, testing, and validating fruit classification or recognition machine learning models.
|
149
|
Khan M, Mehran MT, Haq ZU, Ullah Z, Naqvi SR, Ihsan M, Abbass H. Applications of artificial intelligence in COVID-19 pandemic: A comprehensive review. EXPERT SYSTEMS WITH APPLICATIONS 2021; 185:115695. [PMID: 34400854 PMCID: PMC8359727 DOI: 10.1016/j.eswa.2021.115695] [Citation(s) in RCA: 51] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Revised: 05/14/2021] [Accepted: 07/28/2021] [Indexed: 05/06/2023]
Abstract
During the global public health emergency caused by novel coronavirus disease 2019 (COVID-19), researchers and medical experts worked day and night to find new technologies to mitigate the pandemic. Recent studies have shown that artificial intelligence (AI) has been successfully employed in the health sector for various healthcare procedures. This study comprehensively reviews research and development on state-of-the-art applications of AI for combating the COVID-19 pandemic. Relevant literature was retrieved from citation databases including ScienceDirect and Google Scholar, along with preprints from arXiv, medRxiv, and bioRxiv. Recent advances in AI-based technologies are critically reviewed and summarized; the challenges associated with their use are highlighted; and, based on updated studies and critical analysis, research gaps and future recommendations are identified and discussed. The review compares various machine learning (ML) and deep learning (DL) methods, identifies the dominant AI-based techniques and the ML and DL methods most used for COVID-19 detection, diagnosis, screening, classification, drug repurposing, prediction, and forecasting, and offers insight into where current research is heading. Recent AI research and development has greatly improved COVID-19 screening, diagnostics, and prediction, enabling better scale-up, timelier responses, and more reliable and efficient outcomes, sometimes outperforming humans in certain healthcare tasks. This review will help researchers, healthcare institutes and organizations, government officials, and policymakers gain new insights into how AI can help control the COVID-19 pandemic and drive further research on mitigating the outbreak.
Affiliation(s)
- Muzammil Khan
- School of Chemical & Materials Engineering, National University of Sciences & Technology, H-12, Islamabad 44000, Pakistan
- Muhammad Taqi Mehran
- School of Chemical & Materials Engineering, National University of Sciences & Technology, H-12, Islamabad 44000, Pakistan
- Zeeshan Ul Haq
- School of Chemical & Materials Engineering, National University of Sciences & Technology, H-12, Islamabad 44000, Pakistan
- Zahid Ullah
- School of Chemical & Materials Engineering, National University of Sciences & Technology, H-12, Islamabad 44000, Pakistan
- Salman Raza Naqvi
- School of Chemical & Materials Engineering, National University of Sciences & Technology, H-12, Islamabad 44000, Pakistan
- Mehreen Ihsan
- Peshawar Medical College, Peshawar, Khyber Pakhtunkhwa 25000, Pakistan
- Haider Abbass
- National Cyber Security Auditing and Evaluation Lab, National University of Sciences & Technology, MCS Campus, Rawalpindi 43600, Pakistan
|
150
|
Arias-Garzón D, Alzate-Grisales JA, Orozco-Arias S, Arteaga-Arteaga HB, Bravo-Ortiz MA, Mora-Rubio A, Saborit-Torres JM, Serrano JÁM, de la Iglesia Vayá M, Cardona-Morales O, Tabares-Soto R. COVID-19 detection in X-ray images using convolutional neural networks. MACHINE LEARNING WITH APPLICATIONS 2021; 6:100138. [PMID: 34939042 PMCID: PMC8378046 DOI: 10.1016/j.mlwa.2021.100138] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2021] [Revised: 08/10/2021] [Accepted: 08/10/2021] [Indexed: 12/15/2022] Open
Abstract
The COVID-19 global pandemic affects healthcare and lifestyles worldwide, and its early detection is critical for controlling the spread of cases and mortality. The leading diagnostic test is reverse transcription polymerase chain reaction (RT-PCR), but its turnaround time and cost are high, so other fast and accessible diagnostic tools are needed. Inspired by recent research correlating the presence of COVID-19 with findings in chest X-ray images, this paper's approach uses existing deep learning models (VGG19 and U-Net) to process these images and classify them as positive or negative for COVID-19. The proposed system involves a preprocessing stage with lung segmentation, which removes surroundings that offer no relevant information for the task and may produce biased results; after this initial stage comes the classification model, trained under the transfer-learning scheme; and finally, analysis and interpretation of results via heat-map visualization. The best models achieved a COVID-19 detection accuracy of around 97%.
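The transfer-learning classification stage described above can be sketched as follows. This is a minimal Keras illustration, not the paper's exact configuration: the head layers, optimizer, and layer sizes are assumptions, `weights=None` is used here only to keep the sketch self-contained (the paper's transfer-learning scheme implies ImageNet pretraining, i.e. `weights="imagenet"`), and the U-Net lung segmentation is assumed to happen upstream so that inputs are already masked chest X-rays.

```python
import tensorflow as tf

# VGG19 backbone with a small binary head for COVID-19 positive/negative.
# Inputs: lung-segmented chest X-rays resized to 224x224 RGB.
base = tf.keras.applications.VGG19(weights=None, include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone during transfer learning

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(COVID-19 positive)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

Freezing the backbone and training only the head is the standard first phase of transfer learning on small medical datasets; selected backbone layers can then be unfrozen for fine-tuning at a lower learning rate.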
Affiliation(s)
- Daniel Arias-Garzón
- Department of Electronics and Industrial Automation, Universidad Autónoma de Manizales, Manizales 170001, Colombia
- Simon Orozco-Arias
- Department of Computer Science, Universidad Autónoma de Manizales, Manizales 170001, Colombia
- Department of Systems and Informatics, Universidad de Caldas, Manizales 170004, Colombia
- Mario Alejandro Bravo-Ortiz
- Department of Electronics and Industrial Automation, Universidad Autónoma de Manizales, Manizales 170001, Colombia
- Alejandro Mora-Rubio
- Department of Electronics and Industrial Automation, Universidad Autónoma de Manizales, Manizales 170001, Colombia
- Jose Manuel Saborit-Torres
- Unidad Mixta de Imagen Biomédica FISABIO-CIPF, Fundación para el Fomento de la Investigación Sanitario y Biomédica de la Comunidad Valenciana, Valencia 46020, Spain
- Joaquim Ángel Montell Serrano
- Unidad Mixta de Imagen Biomédica FISABIO-CIPF, Fundación para el Fomento de la Investigación Sanitario y Biomédica de la Comunidad Valenciana, Valencia 46020, Spain
- Maria de la Iglesia Vayá
- Unidad Mixta de Imagen Biomédica FISABIO-CIPF, Fundación para el Fomento de la Investigación Sanitario y Biomédica de la Comunidad Valenciana, Valencia 46020, Spain
- Oscar Cardona-Morales
- Department of Electronics and Industrial Automation, Universidad Autónoma de Manizales, Manizales 170001, Colombia
- Reinel Tabares-Soto
- Department of Electronics and Industrial Automation, Universidad Autónoma de Manizales, Manizales 170001, Colombia
|