51. Bhosale YH, Patnaik KS. Bio-medical imaging (X-ray, CT, ultrasound, ECG), genome sequences applications of deep neural network and machine learning in diagnosis, detection, classification, and segmentation of COVID-19: a Meta-analysis & systematic review. Multimedia Tools and Applications 2023:1-54. PMID: 37362676; PMCID: PMC10015538; DOI: 10.1007/s11042-023-15029-1
Abstract
This review investigates how deep machine learning (DML) has dealt with the COVID-19 epidemic and provides recommendations for future COVID-19 research. Although vaccines for this epidemic have been developed, DL methods have proven to be a valuable asset in radiologists' arsenals for the automated assessment of COVID-19. This detailed review discusses the techniques and applications developed for COVID-19 findings using DL systems. It also provides insights into notable datasets used to train neural networks, data partitioning, and various performance measurement metrics. A PRISMA taxonomy was formed based on pretrained (45 systems) and hybrid/custom (17 systems) models with radiography modalities. In total, 62 systems were selected from the studied articles, covering X-ray (32), CT (19), ultrasound (7), ECG (2), and genome sequence (2) modalities. We begin by assessing the present state of DL and conclude with its significant limitations, which include poor interpretability, weak generalization, learning from incompletely labeled data, and data privacy. Moreover, DML can be utilized to detect COVID-19 and distinguish it from other COPD illnesses. This literature review found many DL-based systems for fighting COVID-19, and we expect the article to help speed up the adoption of DL by COVID-19 researchers, including medical practitioners, radiology technicians, and data engineers.
Affiliations
- Yogesh H. Bhosale, Computer Science and Engineering Department, Birla Institute of Technology, Mesra, Ranchi, India
- K. Sridhar Patnaik, Computer Science and Engineering Department, Birla Institute of Technology, Mesra, Ranchi, India
52. Deep belief network assisted quadratic logit boost classifier for brain tumor detection using MR images. Biomedical Signal Processing and Control 2023. DOI: 10.1016/j.bspc.2022.104415
53. Remesh KM, Nair LR. A novel technique for the detection of Covid-19 patients with the applications of three-way decisions using variance-based criterion. Microprocessors and Microsystems 2023; 97:104758. PMID: 36619210; PMCID: PMC9811918; DOI: 10.1016/j.micpro.2023.104758
Abstract
The spread of the COVID-19 pandemic represents a serious threat for scientists and academics, health professionals, and governments, and constant efforts are being made to establish effective approaches to diagnosis, therapy, and control of the spread of the pandemic. The model uses a flexible formulation: normal priors are placed on the parameters, and assumptions about the transition probabilities between patient categories over time are stated explicitly. Hospital wards are classified into the Intensive Care Unit (ICU) and Regular Wards (RW), with patients ultimately Recovered (R) or Deceased (D), and the formulation may be truncated to encode particular hypotheses with an epidemiological interpretation. The principles of three-way decision theory are used to anticipate and diagnose COVID-19: patients are classified into one of three zones based on their symptoms (positive, negative, or boundary), and treatment is recommended where necessary. The thresholds that distinguish the three zones are determined using a variance-based criterion. The study examines the impact of non-pharmaceutical interventions using data gathered during the second wave of the pandemic in Trivandrum, India. According to the discrepancy metrics devised to assess and compare models, the three-way decision model fits well and gives good predictive performance, especially for RW and ICU patients, reaching 95 percent accuracy; values were calculated over 10 days to demonstrate the temporal behaviour of the expected daily reproduction number R.
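The three-way classification described above reduces, at prediction time, to comparing each patient's estimated infection probability against two thresholds. A minimal sketch, assuming illustrative threshold values rather than the variance-derived ones from the paper:

```python
def three_way_decision(p, alpha=0.7, beta=0.3):
    """Assign a case to one of three zones from its estimated infection
    probability p.  alpha and beta are illustrative placeholders; the paper
    derives them from a variance-based criterion."""
    if p >= alpha:
        return "positive"   # accept: treat as infected
    if p <= beta:
        return "negative"   # reject: treat as not infected
    return "boundary"       # defer: gather more evidence before deciding

# Example: three patients with different estimated probabilities.
zones = [three_way_decision(p) for p in (0.92, 0.15, 0.55)]
```

The boundary zone is what distinguishes three-way from ordinary binary decisions: borderline cases are deferred rather than forced into a positive or negative call.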
Affiliations
- K M Remesh, Division of Computer Science and Engineering, School of Engineering, CUSAT, Kochi, India
- Latha R Nair, Division of Computer Science and Engineering, School of Engineering, CUSAT, Kochi, India
54. Dhivya S, Mohanavalli S, Kavitha S. Automated carcinoma classification using efficient nuclei-based patch selection and deep learning techniques. Journal of Intelligent & Fuzzy Systems 2023. DOI: 10.3233/jifs-222136
Abstract
Although breast cancer is considered a fatal disease among women, it can be treated successfully if diagnosed at its earliest stage. Digitized histopathology slides are the gold standard for tumor diagnosis, yet manual reading remains tedious owing to their structural complexity. With computer-aided diagnosis, this time- and computation-intensive manual procedure can be handled by an automated classification system. Feature extraction and classification are challenging because these images contain complex structures and overlapping nuclei. A novel nuclei-based patch extraction method is proposed to obtain non-overlapping nuclei patches from the breast tumor dataset. An ensemble of pre-trained models extracts discriminating features from the identified and augmented non-overlapping nuclei patches; the features are then fused using a p-norm pooling technique and classified with a LightGBM classifier under 10-fold cross-validation. The results show an overall improvement in accuracy, sensitivity, specificity, and precision. The proposed framework yields an accuracy of 98.3% for binary classification and 95.1% for multi-class classification on the ICIAR 2018 dataset.
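The p-norm pooling step named above can be sketched directly: the per-model feature vectors are fused element-wise by a power mean. The value of p here is an illustrative assumption, not the one tuned in the paper:

```python
def p_norm_pool(feature_sets, p=3):
    """Fuse per-model feature vectors into one vector via p-norm pooling:
    fused[j] = (mean_i |f_i[j]|**p) ** (1/p).
    p=1 reduces to average pooling; large p approaches max pooling."""
    n = len(feature_sets)
    dim = len(feature_sets[0])
    return [(sum(abs(f[j]) ** p for f in feature_sets) / n) ** (1.0 / p)
            for j in range(dim)]

# Example: fuse 2-dimensional features from two hypothetical backbone models.
fused = p_norm_pool([[0.2, 0.9], [0.4, 0.7]], p=3)
```

The fused vector would then be passed to the downstream classifier (LightGBM in the paper).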
Affiliations
- S. Dhivya, Department of Information Technology, Sri Sivasubramaniya Nadar College of Engineering, Chennai, Tamil Nadu, India
- S. Mohanavalli, Department of Information Technology, Sri Sivasubramaniya Nadar College of Engineering, Chennai, Tamil Nadu, India
- S. Kavitha, Department of Computer Science and Engineering, Sri Sivasubramaniya Nadar College of Engineering, Chennai, Tamil Nadu, India
55. Prabhu A, Shobha Rani N, Basavaraju H. An orientation independent vision based weight estimation model for Alphonso mangoes. Journal of Intelligent & Fuzzy Systems 2023. DOI: 10.3233/jifs-223510
Abstract
Weight is one of the most essential factors in classifying and qualitatively evaluating mangoes for various industrial uses. To meet grading requirements during industrial processing, this paper presents an orientation-independent weight estimation method for the mango cultivar "Alphonso". Size and geometry are treated as the key variables in estimating weight. Based on the visual fruit geometry, generalized hand-crafted local and global features and conventional features are computed and passed through the proposed feature selection methodology to identify an optimal feature set, which is then used in regression analysis to predict weight. Four regression models are compared in the experimental trials: MLR and SVR with linear, RBF, and polynomial kernels. A self-collected mango database with two orientations per sample was acquired with a CCD camera, and three weight estimation settings are analysed: orientation 1 alone, orientation 2 alone, and both orientations combined. The SVR RBF kernel yields the highest correlation between predicted and actual weights, and the experiments demonstrate that orientation 1 is symmetric to orientation 2: with a correlation coefficient of R2 = 0.99 for SVR-RBF in all three settings, the correlation between predicted and actual weights is nearly identical.
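As a rough illustration of the regression stage, the following sketch fits an RBF-kernel model to toy geometry-to-weight pairs. It uses kernel ridge regression as a close stand-in for SVR with an RBF kernel, and every data value and hyperparameter (gamma, lam) is invented for illustration, not taken from the paper:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_rbf_regressor(X, y, gamma=0.5, lam=1e-3):
    """Kernel ridge regression with an RBF kernel, a stand-in for the
    paper's SVR-RBF model.  Returns a prediction function."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xnew: rbf_kernel(Xnew, X, gamma) @ alpha

# Toy example: predict mango weight (g) from two geometry features (cm).
X = np.array([[9.0, 7.0], [10.5, 8.0], [12.0, 9.0], [8.5, 6.5]])
y = np.array([210.0, 260.0, 320.0, 190.0])
predict = fit_rbf_regressor(X, y)
```

With a small ridge term the model nearly interpolates the training pairs, mirroring the near-perfect R2 the paper reports on its own features.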
Affiliations
- Akshatha Prabhu, Department of Computer Science, Amrita School of Computing, Mysuru Campus, Amrita Vishwa Vidyapeetham, India
- N. Shobha Rani, Department of Computer Science, Amrita School of Computing, Mysuru Campus, Amrita Vishwa Vidyapeetham, India
- H.T. Basavaraju, Department of Computer Science, Yuvaraja College, Mysuru, India
56. Zhang T, Gao Z, Liu Z, Hussain SF, Waqas M, Halim Z, Li Y. Infrared ship target segmentation based on Adversarial Domain Adaptation. Knowledge-Based Systems 2023. DOI: 10.1016/j.knosys.2023.110344
57. Tiwari S, Chanak P, Singh SK. A Review of the Machine Learning Algorithms for Covid-19 Case Analysis. IEEE Transactions on Artificial Intelligence 2023; 4:44-59. PMID: 36908643; PMCID: PMC9983698; DOI: 10.1109/tai.2022.3142241
Abstract
This article surveys how machine learning (ML) algorithms and applications have been used in COVID-19 research and related tasks. Among the available traditional methods for predicting the international COVID-19 epidemic, researchers and authorities have given most attention to simple statistical and epidemiological methodologies. The inadequacy and scarcity of medical testing for diagnosis is one of the key challenges in preventing the spread of COVID-19, and statistical improvements have resolved it only partially. ML offers a wide range of intelligence-based approaches, frameworks, and tools to cope with the issues of the medical industry, and this article investigates their application to COVID-19 outbreak difficulties. Its main goals are to 1) examine the impact of data type and data nature, as well as obstacles in data processing, for COVID-19; 2) convey the importance of intelligent approaches such as ML for the COVID-19 pandemic; 3) review the development of improved ML algorithms and types of ML for COVID-19 prognosis; 4) examine the effectiveness and influence of various strategies in the COVID-19 pandemic; and 5) highlight open issues in COVID-19 diagnosis in order to motivate academics to extend this research into additional COVID-19-affected domains.
Affiliations
- Shrikant Tiwari, Department of Computer Science and Engineering, Indian Institute of Technology (BHU), Varanasi 221005, India
- Prasenjit Chanak, Department of Computer Science and Engineering, Indian Institute of Technology (BHU), Varanasi 221005, India
- Sanjay Kumar Singh, Department of Computer Science and Engineering, Indian Institute of Technology (BHU), Varanasi 221005, India
58. Sigalingging X, Prakosa SW, Leu JS, Hsieh HY, Avian C, Faisal M. SCANet: Implementation of Selective Context Adaptation Network in Smart Farming Applications. Sensors (Basel) 2023; 23:1358. PMID: 36772398; PMCID: PMC9921277; DOI: 10.3390/s23031358
Abstract
In the last decade, deep learning has enjoyed the spotlight as a game-changing addition to smart farming and precision agriculture. This development has been observed predominantly in developed countries, while in developing countries most farmers, especially those with smallholder farms, have not enjoyed such wide and deep adoption of these new technologies. This paper aims to improve the image classification component of smart farming and precision agriculture by exploiting the textural details that agricultural commodities tend to possess on their surfaces. We propose a deep learning based approach called the Selective Context Adaptation Network (SCANet), which performs feature enhancement by leveraging level-wise information and employing a context selection mechanism. By exploiting contextual correlation features of the crop images, the proposed approach demonstrates the effectiveness of the context selection mechanism: it achieves 88.72% accuracy and outperforms existing approaches. The model is evaluated on a cocoa bean dataset constructed from a real cocoa bean industry scene in Indonesia.
59. Kabilesh S, Mohanapriya D, Suseendhar P, Indra J, Gunasekar T, Senthilvel N. Research on Artificial Intelligence based Fruit Disease Identification System (AI-FDIS) with the Internet of Things (IoT). Journal of Intelligent & Fuzzy Systems 2023. DOI: 10.3233/jifs-222017
Abstract
Monitoring fruit quality, volume, and development on the plantation is critical to ensuring that fruits are harvested at the optimal time. Fruits are most susceptible to disease while they are actively growing, so early detection of fruit diseases can safeguard and enhance agricultural productivity. On a large farm it is hard to inspect each tree personally, and farmers and their workers struggle to examine such large areas regularly to protect fruit trees from illness and weather conditions. Image processing combined with the Internet of Things (IoT) has applications in many such fields, and with the advent of precision farming a new way of thinking about agriculture has emerged, incorporating cutting-edge technological innovations. Detecting fruit diseases in their early stages is one of the modern farmer's biggest challenges: infections that are not identified in time can cause a drop in income. This paper therefore presents an Artificial Intelligence Based Fruit Disease Identification System (AI-FDIS) built around a drone with a high-accuracy camera, substantial computing capability, and connectivity for precision farming. It can monitor large agricultural areas precisely, identify diseased plants, and decide which chemical to spray and the precise dosage to use. The drone is connected to a cloud server that receives images and generates information from them, including crop production projections, and the farm base can interface with the system through a user-friendly Human-Robot Interface (HRI). The method can cover a vast area of farmland daily, and the agricultural drone reduces environmental impact while boosting crop productivity.
Affiliations
- S.K. Kabilesh, Department of Electronics and Communication Engineering, Jai Shriram Engineering College, Tirupur, Tamilnadu, India
- D. Mohanapriya, Department of Electronics and Communication Engineering, Jai Shriram Engineering College, Tirupur, Tamilnadu, India
- P. Suseendhar, Department of Electronics and Communication Engineering, Karpagam University, Coimbatore, Tamilnadu, India
- J. Indra, Department of Information Technology, KPR Institute of Engineering and Technology, Coimbatore, Tamilnadu, India
- T. Gunasekar, Department of Electrical and Electronics Engineering, Kongu Engineering College, Perundurai, Tamilnadu, India
- N. Senthilvel, Electronics and Communication Engineering, Veltech Multitech Dr. Rangarajan Dr. Sakunthala Engineering College, India
60. Jeon D, Kang Y, Lee S, Choi S, Sung Y, Lee TH, Kim C. Digitalizing breeding in plants: A new trend of next-generation breeding based on genomic prediction. Frontiers in Plant Science 2023; 14:1092584. PMID: 36743488; PMCID: PMC9892199; DOI: 10.3389/fpls.2023.1092584
Abstract
As the world's population grows and food needs diversify, the demand for cereals and horticultural crops with beneficial traits increases, and suitable cultivars and innovative breeding methods must be developed to meet it. Breeding methods have changed over time following advances in genetics. With the advent of new sequencing technology in the early 21st century, predictive breeding such as genomic selection (GS) emerged as large-scale genomic information became available. GS shows good predictive ability for selecting individuals with traits of interest, even quantitative traits, by using various types of whole genome-scanning markers, breaking away from the limitations of marker-assisted selection (MAS). In this review, we briefly describe the history of breeding techniques, each breeding method, the various statistical models applied to GS, and methods to increase GS efficiency. We also propose and define the term digital breeding: developing predictive breeding methods such as GS at a higher level, aiming to minimize human intervention by automating breeding design, propagating breeding populations, and making selections that account for varied environments, climates, and topography during the breeding process. We further classify the phases of digital breeding according to the technologies and methods applied in each phase. This review provides an understanding of, and a direction for, the future evolution of plant breeding.
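The core of genomic selection as described above, predicting breeding values for new individuals from genome-wide markers, can be sketched as a ridge regression in the style of rrBLUP. The marker matrix, phenotypes, and shrinkage value below are illustrative assumptions, not data from the review:

```python
import numpy as np

def genomic_predict(M_train, y_train, M_new, lam=1.0):
    """rrBLUP-style genomic selection sketch: estimate marker effects by
    ridge regression, b = (M'M + lam*I)^-1 M'y, then score new genotypes
    as GEBV = M_new @ b.  In practice lam comes from variance components;
    1.0 is an illustrative default."""
    p = M_train.shape[1]
    b = np.linalg.solve(M_train.T @ M_train + lam * np.eye(p),
                        M_train.T @ y_train)
    return M_new @ b
```

Candidates would then be ranked by their predicted values (GEBVs) and the top fraction selected as parents for the next cycle.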
Affiliations
- Donghyun Jeon, Plant Computational Genomics Laboratory, Department of Science in Smart Agriculture Systems, Chungnam National University, Daejeon, Republic of Korea
- Yuna Kang, Plant Computational Genomics Laboratory, Department of Crop Science, Chungnam National University, Daejeon, Republic of Korea
- Solji Lee, Plant Computational Genomics Laboratory, Department of Crop Science, Chungnam National University, Daejeon, Republic of Korea
- Sehyun Choi, Plant Computational Genomics Laboratory, Department of Crop Science, Chungnam National University, Daejeon, Republic of Korea
- Yeonjun Sung, Plant Computational Genomics Laboratory, Department of Science in Smart Agriculture Systems, Chungnam National University, Daejeon, Republic of Korea
- Tae-Ho Lee, Genomics Division, National Institute of Agricultural Sciences, Jeonju, Republic of Korea
- Changsoo Kim, Plant Computational Genomics Laboratory, Department of Science in Smart Agriculture Systems and Department of Crop Science, Chungnam National University, Daejeon, Republic of Korea
61. Vinod DN, Prabaharan SRS. COVID-19-The Role of Artificial Intelligence, Machine Learning, and Deep Learning: A Newfangled. Archives of Computational Methods in Engineering 2023; 30:2667-2682. PMID: 36685135; PMCID: PMC9843670; DOI: 10.1007/s11831-023-09882-4
Abstract
The first infections with the novel coronavirus (COVID-19) were found in Wuhan, China, in December 2019. The COVID-19 epidemic has since spread to more than 220 nations and territories globally and has influenced every part of our day-to-day lives. As of 9 March 2022, a total of 447,882,185 infected COVID-19 cases (6,007,317 deaths) had been reported worldwide. The numbers of infected cases and deaths still increase substantially and do not indicate a controlled situation. The scope of this paper is to present a comprehensive and comparative analysis of the existing machine learning (ML), deep learning (DL), and artificial intelligence (AI) based approaches used in responding to the COVID-19 epidemic and diagnosing its severe impacts. The paper provides, first, an overview of COVID-19 infection and the highlights of this article; second, an overview of the various enabling innovations, drawing on different resources, for stopping the spread of COVID-19; third, a comparison of existing COVID-19 prediction methods in the literature, with a focus on ML, DL, and AI-driven techniques and their performance metrics; and finally, a discussion of the results and future scope.
Affiliations
- Dasari Naga Vinod, Department of Electronics and Communication Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Avadi, Chennai, Tamil Nadu 600062, India
- S. R. S. Prabaharan, Sathyabama Centre for Advanced Studies, Sathyabama Institute of Science and Technology, Rajiv Gandhi Salai, Chennai, Tamil Nadu 600119, India
62. Wali A, Ali S, Naseer A, Karim S, Alamgir Z. Computer-aided COVID-19 diagnosis: a possibility? Journal of Experimental & Theoretical Artificial Intelligence 2023. DOI: 10.1080/0952813x.2023.2165722
Affiliations
- Aamir Wali, FAST School of Computing, National University of Computer and Emerging Sciences, Faisal Town, Lahore, Pakistan
- Shahroze Ali, FAST School of Computing, National University of Computer and Emerging Sciences, Faisal Town, Lahore, Pakistan
- Asma Naseer, FAST School of Computing, National University of Computer and Emerging Sciences, Faisal Town, Lahore, Pakistan
- Saira Karim, FAST School of Computing, National University of Computer and Emerging Sciences, Faisal Town, Lahore, Pakistan
- Zareen Alamgir, FAST School of Computing, National University of Computer and Emerging Sciences, Faisal Town, Lahore, Pakistan
63. Zhao Y, Yang Y, Xu X, Sun C. Precision detection of crop diseases based on improved YOLOv5 model. Frontiers in Plant Science 2023; 13:1066835. PMID: 36699833; PMCID: PMC9868932; DOI: 10.3389/fpls.2022.1066835
Abstract
Accurate identification of crop diseases can effectively improve crop yield. Most crop diseases present as small, densely packed targets, often occluded, with different diseases appearing similar, and current target detection algorithms are not effective at distinguishing them. This paper therefore proposes an improved model based on YOLOv5s. First, the CSP structure of the original model in the feature fusion stage is improved: a lightweight structure reduces the model parameters, while feature information from different layers is extracted through multiple branches. A structure named CAM is proposed that extracts global and local features of each network layer separately; it better fuses semantically and scale-inconsistent features and enhances the network's extraction of global information. To increase the number of positive samples during training, one more grid is added to the original three used to predict the target, and the formula for the prediction-box centre offset is modified so that a better offset is obtained when the target centre falls on a special point of the grid. To address incorrect scaling of the prediction box during training, an improved DIoU loss function replaces the GIoU loss used in the original YOLOv5s. Finally, the improved model is trained using transfer learning. The results show that it achieves the best mean average precision (mAP) compared with the Faster R-CNN, SSD, YOLOv3, YOLOv4, YOLOv4-tiny, and YOLOv5s models: its mAP, F1 score, and recall are 95.92%, 0.91, and 87.89%, improvements over YOLOv5s of 4.58%, 5%, and 4.78%, respectively. The detection speed of the improved model is 40.01 FPS, which meets the requirement for real-time detection. Overall, the improved model outperforms the original in several respects, with stronger robustness and higher accuracy, and provides better detection of crop diseases.
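The DIoU loss mentioned in the abstract penalizes both poor overlap and centre misalignment between predicted and ground-truth boxes. A minimal sketch for axis-aligned boxes, following the standard Distance-IoU definition rather than the paper's exact implementation:

```python
def diou_loss(box_a, box_b):
    """Distance-IoU loss for boxes given as (x1, y1, x2, y2):
    loss = 1 - IoU + d^2 / c^2, where d is the distance between box
    centres and c the diagonal of the smallest enclosing box."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection and union areas.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # Squared distance between box centres.
    d2 = ((ax1 + ax2) / 2 - (bx1 + bx2) / 2) ** 2 \
       + ((ay1 + ay2) / 2 - (by1 + by2) / 2) ** 2
    # Squared diagonal of the smallest enclosing box.
    c2 = (max(ax2, bx2) - min(ax1, bx1)) ** 2 \
       + (max(ay2, by2) - min(ay1, by1)) ** 2
    return 1.0 - iou + d2 / c2
```

Unlike GIoU, the centre-distance term keeps a useful gradient even when one box encloses the other, which is why DIoU converges faster for badly scaled predictions.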
64. Hu K, Liu Y, Nie J, Zheng X, Zhang W, Liu Y, Xie T. Rice pest identification based on multi-scale double-branch GAN-ResNet. Frontiers in Plant Science 2023; 14:1167121. PMID: 37123817; PMCID: PMC10140523; DOI: 10.3389/fpls.2023.1167121
Abstract
Rice production is crucial to food security, and how rice pests and diseases can be effectively prevented and detected in time is a hot topic in smart agriculture. Deep learning has become the preferred method for rice pest identification because of its excellent performance, especially its autonomous learning of image features. In the natural environment, however, datasets are small and vulnerable to complex backgrounds, which easily leads to problems such as overfitting and makes fine features difficult to extract during training. To solve these problems, a multi-scale double-branch rice pest identification model based on a generative adversarial network and an improved ResNet is proposed. Building on the ResNet model, the ConvNeXt residual block is introduced to optimize the calculation ratio of the residual blocks, and a double-branch structure is constructed that adjusts the convolution kernel size of each branch to extract disease features of different sizes from the input images. Pre-processing methods such as random brightness and motion blur, together with augmentation methods such as mirroring, cropping, and scaling, are used to expand a dataset of 5,932 rice disease images captured in the natural environment to 20,000 images. The new model is trained on this dataset to identify four common rice diseases. The experiments show that the recognition accuracy of the new model improves by 2.66% over the original ResNet, and that under the same conditions it performs best against classical networks such as AlexNet, VGG, DenseNet, ResNet, and Transformer, with a recognition accuracy as high as 99.34%. The model has good generalization ability and excellent robustness: by using the multi-scale double-branch structure it addresses the small-dataset overfitting and complex-background problems in rice pest identification and greatly improves recognition accuracy, providing a strong solution for crop pest and disease identification.
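The geometric augmentations listed above (mirroring, cropping, scaling) can be sketched without an imaging library by treating an image as a 2-D list of pixel values; the brightness and motion-blur pre-processing steps are omitted here because they need real image tooling. All parameter choices are illustrative:

```python
import random

def augment(img, seed=None):
    """Return three geometric variants of a 2-D pixel grid: a horizontal
    mirror, a random crop of ~3/4 size, and a 2x nearest-neighbour upscale.
    Crop offsets are drawn with a seedable RNG for reproducibility."""
    rng = random.Random(seed)
    h, w = len(img), len(img[0])
    mirrored = [row[::-1] for row in img]
    top, left = rng.randint(0, h // 4), rng.randint(0, w // 4)
    cropped = [row[left:left + w - w // 4]
               for row in img[top:top + h - h // 4]]
    scaled = [[img[i // 2][j // 2] for j in range(2 * w)]
              for i in range(2 * h)]
    return mirrored, cropped, scaled
```

Applying such transforms to each source photo is what lets a few thousand field images be expanded into a training set several times larger.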
Affiliations
- Kui Hu, School of Computer and Information Engineering and Research Center of Smart Forestry Cloud, Central South University of Forestry and Technology, Changsha, China
- YongMin Liu (corresponding author), School of Computer and Information Engineering and Research Center of Smart Forestry Cloud, Central South University of Forestry and Technology, Changsha, China
- Jiawei Nie, School of Animal Science, South China Agricultural University, Guangzhou, China
- Xinying Zheng, Business School of Hunan Normal University, Changsha, China
- Wei Zhang, School of Computer and Information Engineering and Research Center of Smart Forestry Cloud, Central South University of Forestry and Technology, Changsha, China
- Yuan Liu, School of Computer and Information Engineering and Research Center of Smart Forestry Cloud, Central South University of Forestry and Technology, Changsha, China
- TianQiang Xie, School of Computer and Information Engineering and Research Center of Smart Forestry Cloud, Central South University of Forestry and Technology, Changsha, China
65. Ma X, Tong J, Huang W, Lin H. Characteristic mango price forecasting using combined deep-learning optimization model. PLoS One 2023; 18:e0283584. PMID: 37053221; PMCID: PMC10101496; DOI: 10.1371/journal.pone.0283584
Abstract
Accurate product price forecasting supports scientific decision-making and precise industrial planning. As a characteristic fruit that drives regional development, the mango makes price prediction significant to several economies; however, owing to the strong volatility of mango prices, forecasting is vulnerable to uncertainty and very challenging. In this study, a combined deep-learning forecasting model based on a back-propagation (BP) neural network and a long short-term memory (LSTM) network is proposed. Using daily mango price data from a large fruit wholesale trading center in China from January 2nd, 2014, to April 18th, 2022, mango price changes are learned and predicted to support the fruit industry. The results show that the root mean-square error, mean absolute percentage error, and R2 coefficient of determination of the BP-LSTM combination model are 0.0175, 0.14%, and 0.9998, respectively. The predictions of the combined model are better than those of the separate BP and LSTM models; it best fits the actual price profile and generalizes better.
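The three scores reported for the BP-LSTM model (RMSE, MAPE, and the R2 coefficient of determination) are straightforward to compute from a forecast series; a minimal sketch:

```python
import math

def forecast_metrics(actual, predicted):
    """Return (RMSE, MAPE in %, R^2) for paired price series.
    MAPE assumes no actual value is zero, which holds for prices."""
    n = len(actual)
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)
    mape = 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n
    mean = sum(actual) / n
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    r2 = 1.0 - ss_res / ss_tot
    return rmse, mape, r2
```

Lower RMSE and MAPE and an R2 near 1 correspond to the close fit the combined model reports against the actual price profile.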
Affiliation(s)
- Xiaoya Ma
- Department of Logistics Management and Engineering, Nanning Normal University, Nanning, Guangxi, China
- Guangxi Key Lab of Human-Machine Interaction and Intelligent Decision, Nanning Normal University, Nanning, Guangxi, China
- Jin Tong
- Department of Logistics Management and Engineering, Nanning Normal University, Nanning, Guangxi, China
- Wu Huang
- School of Business Administration, Zhongnan University of Economics and Law, Wuhan, Hubei, China
- Haitao Lin
- Yuxi Normal University, Yuxi, Yunnan, China
66
Chow LS, Tang GS, Solihin MI, Gowdh NM, Ramli N, Rahmat K. Quantitative and Qualitative Analysis of 18 Deep Convolutional Neural Network (CNN) Models with Transfer Learning to Diagnose COVID-19 on Chest X-Ray (CXR) Images. SN Computer Science 2023; 4:141. [PMID: 36624807 PMCID: PMC9813876 DOI: 10.1007/s42979-022-01545-8]
Abstract
Coronavirus disease 2019 (COVID-19) is a disease caused by a novel strain of coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), that severely affects the lungs. Our study aims to combine quantitative and qualitative analysis of convolutional neural network (CNN) models to diagnose COVID-19 on chest X-ray (CXR) images. We investigated 18 state-of-the-art CNN models with transfer learning: AlexNet, DarkNet-19, DarkNet-53, DenseNet-201, GoogLeNet, Inception-ResNet-v2, Inception-v3, MobileNet-v2, NasNet-Large, NasNet-Mobile, ResNet-18, ResNet-50, ResNet-101, ShuffleNet, SqueezeNet, VGG-16, VGG-19, and Xception. Their performances were evaluated quantitatively using six assessment metrics: specificity, sensitivity, precision, negative predictive value (NPV), accuracy, and F1-score. The top four models with accuracy above 90% are VGG-16, ResNet-101, VGG-19, and SqueezeNet, with accuracies between 90.7% and 94.3% and F1-scores between 90.8% and 94.3%. VGG-16 scored the highest accuracy of 94.3% and F1-score of 94.3%. Majority voting with all 18 CNN models and with the top four models produced accuracies of 93.0% and 94.0%, respectively. The top four and bottom three models were chosen for the qualitative analysis. Gradient-weighted class activation mapping (Grad-CAM) was used to visualize the significant regions of activation behind each image classification decision. Two certified radiologists performed blinded subjective voting on the Grad-CAM images in comparison with their diagnoses. The qualitative analysis showed that SqueezeNet is the model closest to the diagnosis of the two certified radiologists. It demonstrated a competitively good accuracy of 90.7% and F1-score of 90.8% with 111 times fewer parameters than VGG-16 while running 7.7 times faster. Therefore, this study recommends both VGG-16 and SqueezeNet as additional tools for the diagnosis of COVID-19.
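The majority-voting step described above can be sketched as follows; the model names and the predicted labels are placeholders, not results from the paper:

```python
from collections import Counter

def majority_vote(predictions):
    """Return the label predicted by most models (ties broken by first seen)."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-model predictions for one chest X-ray.
votes = {
    "VGG-16": "COVID-19",
    "ResNet-101": "COVID-19",
    "VGG-19": "normal",
    "SqueezeNet": "COVID-19",
}
print(majority_vote(list(votes.values())))  # -> COVID-19
```

The same function works whether the ensemble has 4 or 18 members, which is why the two voting configurations in the paper can share one mechanism.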
Affiliation(s)
- Li Sze Chow
- Department of Electrical and Electronic Engineering, Faculty of Engineering, Technology and Built Environment, UCSI University, 1, Jalan Puncak Menara Gading, Taman Connaught, Cheras, 56000 Kuala Lumpur, Malaysia
- Goon Sheng Tang
- Department of Electrical and Electronic Engineering, Faculty of Engineering, Technology and Built Environment, UCSI University, 1, Jalan Puncak Menara Gading, Taman Connaught, Cheras, 56000 Kuala Lumpur, Malaysia
- Mahmud Iwan Solihin
- Department of Mechanical and Mechatronics Engineering, Faculty of Engineering, Technology and Built Environment, UCSI University, 1, Jalan Puncak Menara Gading, Taman Connaught, Cheras, 56000 Kuala Lumpur, Malaysia
- Nadia Muhammad Gowdh
- Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, 50603 Kuala Lumpur, Malaysia
- Norlisah Ramli
- Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, 50603 Kuala Lumpur, Malaysia
- Kartini Rahmat
- Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, 50603 Kuala Lumpur, Malaysia
67
Patnaik V, Mohanty M, Subudhi AK. Identification of healthy biological leafs using hybrid-feature classifier. The Imaging Science Journal 2022. [DOI: 10.1080/13682199.2022.2157533]
Affiliation(s)
- Vijaya Patnaik
- Department of ECE, ITER, SOA Deemed to be University, Odisha, India
- Monalisa Mohanty
- Department of ECE, ITER, SOA Deemed to be University, Odisha, India
68
Zhu D, Feng Q, Zhang J, Yang W. Cotton disease identification method based on pruning. Frontiers in Plant Science 2022; 13:1038791. [PMID: 36589068 PMCID: PMC9795023 DOI: 10.3389/fpls.2022.1038791]
Abstract
Deep convolutional neural networks (DCNN) have shown promising performance in plant disease recognition. However, these networks cannot be deployed on resource-limited smart devices due to their vast parameters and computations. To address deployability when developing cotton disease identification applications for mobile/smart devices, we compress the disease recognition models using a pruning algorithm. The algorithm uses the γ coefficient in the Batch Normalization layer to prune channels and thereby compress the DCNN. To further improve the accuracy of the model, we suggest two strategies in combination with transfer learning: compression after transfer learning, or transfer learning after compression. In our experiments, the source dataset is the well-known PlantVillage dataset, while the target dataset is a cotton disease image set containing images collected from the Internet and taken in the field. We select VGG16, ResNet164, and DenseNet40 as the compressed models for comparison. The experimental results show that transfer learning after compression overall surpasses its counterpart. When the compression rate is set to 80%, the accuracies of the compressed versions of VGG16, ResNet164, and DenseNet40 are 90.77%, 96.31%, and 97.23%, respectively, with only 0.30M, 0.43M, and 0.26M parameters, respectively. Among the compressed models, DenseNet40 has the highest accuracy and the fewest parameters. The best model (DenseNet40-80%-T) prunes 75.70% of the parameters and cuts 65.52% of the computations, with a model size of only 2.2 MB. Compared with the compression-after-transfer-learning version, its accuracy is improved by 0.74%. We further developed a cotton disease recognition app on the Android platform based on this model; on the test phone, the average time to identify a single image is just 87 ms.
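The pruning criterion described above, ranking channels by the magnitude of the Batch Normalization γ coefficient, can be sketched in a simplified form; the γ values and the single-layer setting are illustrative assumptions, not values from the paper:

```python
def select_channels(gammas, compression_rate):
    """Keep the channels with the largest |gamma|; prune the rest.

    compression_rate is the fraction of channels to remove, mirroring
    the 80% setting discussed in the abstract above.
    """
    n_keep = max(1, round(len(gammas) * (1 - compression_rate)))
    ranked = sorted(range(len(gammas)), key=lambda i: abs(gammas[i]), reverse=True)
    return sorted(ranked[:n_keep])

# Illustrative gamma coefficients for an 8-channel BN layer.
gammas = [0.91, 0.02, 0.45, 0.01, 0.63, 0.05, 0.88, 0.03]
print(select_channels(gammas, 0.8))  # indices of the surviving channels
```

Small |γ| means the BN layer scales that channel's activations toward zero, so those channels contribute little and are the natural candidates for removal.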
Affiliation(s)
- Dongqin Zhu
- School of Mechanical and Electrical Engineering, Gansu Agricultural University, Lanzhou, China
- Quan Feng
- School of Mechanical and Electrical Engineering, Gansu Agricultural University, Lanzhou, China
- Jianhua Zhang
- Agricultural Information Institute, Chinese Academy of Agricultural Sciences, Beijing, China
- National Nanfan Research Institute, Chinese Academy of Agricultural Sciences, Sanya, China
- Wanxia Yang
- School of Mechanical and Electrical Engineering, Gansu Agricultural University, Lanzhou, China
69
Yadav PK, Burks T, Frederick Q, Qin J, Kim M, Ritenour MA. Citrus disease detection using convolution neural network generated features and Softmax classifier on hyperspectral image data. Frontiers in Plant Science 2022; 13:1043712. [PMID: 36570926 PMCID: PMC9768035 DOI: 10.3389/fpls.2022.1043712]
Abstract
Identification and segregation of citrus fruit with diseases and peel blemishes are required to preserve market value. Previously developed machine vision approaches could only distinguish cankerous from non-cankerous citrus, while this research focused on detecting eight different peel conditions on citrus fruit using hyperspectral imaging (HSI) and an AI-based classification algorithm. The objectives of this paper were: (i) selecting the five most discriminating bands among 92 using PCA, (ii) training and testing a custom convolutional neural network (CNN) model for classification with the selected bands, and (iii) comparing the CNN's performance on the five PCA-selected bands against five randomly selected bands. A hyperspectral imaging system from earlier work was used to acquire reflectance images in the spectral region from 450 to 930 nm (92 spectral bands). Ruby Red grapefruits with normal, cankerous, and five other common peel conditions, including greasy spot, insect damage, melanose, scab, and wind scar, were tested. A novel CNN based on the VGG-16 architecture was developed for feature extraction, with SoftMax for classification. The PCA-selected bands were 666.15, 697.54, 702.77, 849.24, and 917.25 nm, which resulted in an average accuracy, sensitivity, and specificity of 99.84%, 99.84%, and 99.98%, respectively. Ten trials of five randomly selected bands resulted in only slightly lower performance, with accuracy, sensitivity, and specificity of 98.87%, 98.43%, and 99.88%, respectively. These results demonstrate that an AI-based algorithm can successfully classify eight different peel conditions. The findings reported herein can be used as a precursor to develop a machine vision-based, real-time peel condition classification system for citrus processing.
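As a rough illustration of band selection, the sketch below ranks bands by per-band variance, a simplified stand-in for the paper's PCA-based selection (which scores bands by their principal-component loadings); the toy reflectance values are invented:

```python
def top_variance_bands(cube, k=5):
    """Rank spectral bands by variance and return the k most variable.

    cube: list of bands, each a flat list of reflectance values.
    This is a variance-ranking stand-in, not the PCA used in the paper.
    """
    def variance(values):
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / len(values)

    scores = [(variance(band), i) for i, band in enumerate(cube)]
    return sorted(i for _, i in sorted(scores, reverse=True)[:k])

# Toy 6-band "image" with 4 pixels per band.
cube = [
    [0.1, 0.1, 0.1, 0.1],  # flat band, low information
    [0.2, 0.8, 0.3, 0.9],  # highly variable band
    [0.5, 0.5, 0.6, 0.5],
    [0.0, 1.0, 0.0, 1.0],  # highly variable band
    [0.4, 0.4, 0.4, 0.4],
    [0.3, 0.7, 0.2, 0.6],
]
print(top_variance_bands(cube, k=2))
```

Either criterion serves the same goal: shrinking 92 bands to a handful so the downstream CNN sees a compact, informative input.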
Affiliation(s)
- Pappu Kumar Yadav
- Department of Agricultural and Biological Engineering, University of Florida, Gainesville, FL, United States
- Thomas Burks
- Department of Agricultural and Biological Engineering, University of Florida, Gainesville, FL, United States
- Quentin Frederick
- Department of Agricultural and Biological Engineering, University of Florida, Gainesville, FL, United States
- Jianwei Qin
- USDA/ARS Environmental Microbial and Food Safety Laboratory, Beltsville Agricultural Research Center, Beltsville, MD, United States
- Moon Kim
- USDA/ARS Environmental Microbial and Food Safety Laboratory, Beltsville Agricultural Research Center, Beltsville, MD, United States
- Mark A. Ritenour
- Department of Horticultural Sciences, University of Florida, Fort Pierce, FL, United States
70
Rice plant disease classification using dilated convolutional neural network with global average pooling. Ecol Modell 2022. [DOI: 10.1016/j.ecolmodel.2022.110166]
71
Dual_Pachi: Attention-based dual path framework with intermediate second order-pooling for Covid-19 detection from chest X-ray images. Comput Biol Med 2022; 151:106324. [PMID: 36423531 PMCID: PMC9671873 DOI: 10.1016/j.compbiomed.2022.106324]
Abstract
Numerous machine learning and image processing algorithms, most recently deep learning, allow the recognition and classification of COVID-19 disease in medical images. However, feature extraction, or the semantic gap between the low-level visual information collected by imaging modalities and high-level semantics, is the fundamental shortcoming of these techniques. Moreover, several techniques focus only on first-order feature extraction from the chest X-ray, making the resulting models less accurate and robust. This study presents Dual_Pachi: an attention-based dual-path framework with intermediate second-order pooling for more accurate and robust chest X-ray feature extraction for COVID-19 detection. Dual_Pachi consists of four main building blocks. Block one converts the received chest X-ray image to CIE LAB coordinates (the L and AB channels, which are separated at the first three layers of a modified Inception V3 architecture). Block two further exploits the global features extracted from block one via global second-order pooling, while block three focuses on the low-level visual information and the high-level semantics of the chest X-ray features using multi-head self-attention and an MLP layer without sacrificing performance. Finally, the fourth block is the classification block, where classification is done using fully connected layers and SoftMax activation. Dual_Pachi is designed and trained in an end-to-end manner. According to the results, Dual_Pachi outperforms traditional deep learning models and other state-of-the-art approaches described in the literature, with an accuracy of 0.96656 (Data_A) and 0.97867 (Data_B) for the full Dual_Pachi approach and 0.95987 (Data_A) and 0.968 (Data_B) for the variant without the attention block. A Grad-CAM-based visualization is also built to highlight where the applied attention mechanism concentrates.
72
VegNet: Dataset of vegetable quality images for machine learning applications. Data Brief 2022; 45:108657. [DOI: 10.1016/j.dib.2022.108657]
73
Momeny M, Neshat AA, Jahanbakhshi A, Mahmoudi M, Ampatzidis Y, Radeva P. Grading and fraud detection of saffron via learning-to-augment incorporated Inception-v4 CNN. Food Control 2022. [DOI: 10.1016/j.foodcont.2022.109554]
74
Nguyen-Trong K, Nguyen-Hoang K. Multi-modal approach for COVID-19 detection using coughs and self-reported symptoms. Journal of Intelligent & Fuzzy Systems 2022. [DOI: 10.3233/jifs-222863]
Abstract
COVID-19 (coronavirus disease of 2019) is one of the most challenging healthcare crises of the twenty-first century. The pandemic has had many negative impacts on all aspects of life and livelihoods. Despite the recent development of relevant vaccines, such as Pfizer/BioNTech mRNA, AstraZeneca, and Moderna, the emergence of new virus mutations and their fast infection rates still pose significant threats to public health. In this context, early detection of the disease is an important factor in reducing its effects and quickly controlling the spread of the pandemic. Nevertheless, many countries still rely on methods that are either expensive and time-consuming (i.e., reverse-transcription polymerase chain reaction) or uncomfortable and difficult for self-testing (i.e., rapid antigen nasal tests). Recently, deep learning methods have been proposed as a potential solution for COVID-19 analysis. However, previous works usually focus on a single symptom, which can omit critical information for disease diagnosis. Therefore, in this study, we propose a multi-modal method to detect COVID-19 using cough sounds and self-reported symptoms. The proposed method consists of five neural networks to deal with different input features: CNN-biLSTM for MFCC features, EfficientNetV2 for Mel spectrogram images, an MLP for self-reported symptoms, C-YAMNet for cough detection, and RNNoise for noise canceling. Experimental results demonstrate that our method outperformed other state-of-the-art methods with a high AUC, accuracy, and F1-score of 98.6%, 96.9%, and 96.9% on the testing set.
Affiliation(s)
- Khanh Nguyen-Trong
- Faculty of Information Technology, Posts and Telecommunications Institute of Technology, Hanoi, Viet Nam
- Khoi Nguyen-Hoang
- Faculty of Information Technology, Posts and Telecommunications Institute of Technology, Hanoi, Viet Nam
75
Zhu X, Shen D, Wang R, Zheng Y, Su S, Chen F. Maturity Grading and Identification of Camellia oleifera Fruit Based on Unsupervised Image Clustering. Foods 2022; 11:3800. [PMID: 36496609 PMCID: PMC9736105 DOI: 10.3390/foods11233800]
Abstract
Maturity grading and identification of Camellia oleifera are prerequisites to determining proper harvest maturity windows and safeguarding the yield and quality of Camellia oil. One problem in Camellia oleifera production and research is the worldwide confusion regarding the grading and identification of fruit maturity. To solve this problem, a Camellia oleifera fruit maturity grading and identification model based on the unsupervised image clustering model DeepCluster has been developed in the current study. The proposed model includes two branches: a maturity grading branch and a maturity identification branch. The model jointly learns the parameters of both branches, using the maturity clusters assigned by the grading branch as pseudo-labels to update the parameters of the identification branch. The maturity grading experiment was conducted using a training set consisting of 160 Camellia oleifera fruit samples and 2628 digital images collected with a smartphone. The proposed model graded the samples and images in the training set into three maturity levels: unripe (47 samples and 883 images), ripe (62 samples and 1005 images), and overripe (51 samples and 740 images). The results suggest that there was a significant difference among the maturity stages graded by the proposed method with respect to seed oil content, seed soluble protein content, seed soluble sugar content, seed starch content, dry seed weight, and moisture content. The maturity identification experiment was conducted using a testing set of 160 digital images (50 unripe, 60 ripe, and 50 overripe) collected with a smartphone. According to the results, the overall accuracy of maturity identification for Camellia oleifera fruit was 91.25%. Moreover, a gradient-weighted class activation mapping (Grad-CAM) visualization analysis reveals that the peel, crack, and seed regions were the critical regions for maturity identification. Our results corroborate a maturity grading and identification application of unsupervised image clustering techniques and are supported by additional physical and quality properties of maturity. The current findings may facilitate the harvesting of Camellia oleifera fruits, which is especially critical for improving Camellia oil production and quality.
Affiliation(s)
- Xueyan Zhu
- School of Technology, Beijing Forestry University, Beijing 100083, China
- Beijing Laboratory of Urban and Rural Ecological Environment, Beijing Forestry University, Beijing 100083, China
- Deyu Shen
- School of Technology, Beijing Forestry University, Beijing 100083, China
- Beijing Laboratory of Urban and Rural Ecological Environment, Beijing Forestry University, Beijing 100083, China
- Ruipeng Wang
- School of Technology, Beijing Forestry University, Beijing 100083, China
- Beijing Laboratory of Urban and Rural Ecological Environment, Beijing Forestry University, Beijing 100083, China
- Yili Zheng
- School of Technology, Beijing Forestry University, Beijing 100083, China
- Beijing Laboratory of Urban and Rural Ecological Environment, Beijing Forestry University, Beijing 100083, China
- Shuchai Su
- Key Laboratory of Silviculture and Conversation, Ministry of Education, Beijing Forestry University, Beijing 100083, China
- Fengjun Chen
- School of Technology, Beijing Forestry University, Beijing 100083, China
- Beijing Laboratory of Urban and Rural Ecological Environment, Beijing Forestry University, Beijing 100083, China
76
Lanjewar MG, Shaikh AY, Parab J. Cloud-based COVID-19 disease prediction system from X-Ray images using convolutional neural network on smartphone. Multimedia Tools and Applications 2022; 82:1-30. [PMID: 36467434 PMCID: PMC9684956 DOI: 10.1007/s11042-022-14232-w]
Abstract
COVID-19 has engulfed over 200 nations through direct or indirect human-to-human transmission. Reverse transcription-polymerase chain reaction (RT-PCR) has been endorsed as the standard COVID-19 diagnostic procedure but has caveats such as low sensitivity, the need for a skilled workforce, and long turnaround times. Coronavirus infection shows significant manifestations in chest X-ray (CX-Ray) images, which can thus be a viable option for an alternative COVID-19 diagnostic strategy. An automatic COVID-19 detection system can be developed to detect the disease, reducing strain on the healthcare system. This paper discusses a real-time convolutional neural network (CNN) based system for COVID-19 prediction from CX-Ray images on the cloud. The implemented CNN model displays exemplary results, with training accuracy of 99.94% and validation accuracy reaching 98.81%. A confusion matrix was used to assess the model's outcome, which achieved 99% precision, 98% recall, a 99% F1 score, a 100% training area under the curve (AUC), and a 98.3% validation AUC. The same CX-Ray dataset was also used to predict COVID-19 with deep convolutional neural networks (DCNN) such as ResNet50, VGG19, InceptionV3, and Xception. The prediction outcome demonstrated that the present CNN was more capable than the DCNN models. The efficient CNN model was deployed to a Platform as a Service (PaaS) cloud.
Affiliation(s)
- Madhusudan G. Lanjewar
- School of Physical and Applied Sciences, Goa University, Taleigao Plateau, Goa, 403206 India
- Arman Yusuf Shaikh
- School of Physical and Applied Sciences, Goa University, Taleigao Plateau, Goa, 403206 India
- Jivan Parab
- School of Physical and Applied Sciences, Goa University, Taleigao Plateau, Goa, 403206 India
77
Xu M, Yoon S, Jeong Y, Park DS. Transfer learning for versatile plant disease recognition with limited data. Frontiers in Plant Science 2022; 13:1010981. [PMID: 36507376 PMCID: PMC9726777 DOI: 10.3389/fpls.2022.1010981]
Abstract
Deep learning has witnessed significant improvement in recent years in recognizing plant diseases from their corresponding images. To achieve decent performance, current deep learning models tend to require a large-scale dataset. However, collecting a dataset is expensive and time-consuming, so limited data is one of the main challenges to reaching the desired recognition accuracy. Although transfer learning is heavily discussed and verified as an effective and efficient method to mitigate this challenge, most proposed methods focus on one or two specific datasets. In this paper, we propose a novel transfer learning strategy that delivers high performance for versatile plant disease recognition across multiple plant disease datasets. Our transfer learning strategy differs from the current popular one in the following ways. First, PlantCLEF2022, a large-scale plant dataset with 2,885,052 images and 80,000 classes, is utilized to pre-train the model. Second, we adopt a vision transformer (ViT) model instead of a convolutional neural network. Third, the ViT model undergoes transfer learning twice to save computation. Fourth, the model is first pre-trained on ImageNet with a self-supervised loss function and then on PlantCLEF2022 with a supervised loss function. We apply our method to 12 plant disease datasets, and the experimental results suggest that it surpasses the popular strategy by a clear margin across different dataset settings. Specifically, our proposed method achieves a mean testing accuracy of 86.29% over the 12 datasets in a 20-shot case, 12.76 points higher than the current state-of-the-art method's accuracy of 73.53%. Furthermore, our method outperforms other methods on one plant growth stage prediction dataset and one weed recognition dataset. To encourage the community and related applications, we have made our code and pre-trained model public.
Affiliation(s)
- Mingle Xu
- Department of Electronics Engineering, Jeonbuk National University, Jeonbuk, South Korea
- Core Research Institute of Intelligent Robots, Jeonbuk National University, Jeonbuk, South Korea
- Sook Yoon
- Department of Computer Engineering, Mokpo National University, Jeonnam, South Korea
- Yongchae Jeong
- Department of Electronics Engineering, Jeonbuk National University, Jeonbuk, South Korea
- Dong Sun Park
- Department of Electronics Engineering, Jeonbuk National University, Jeonbuk, South Korea
- Core Research Institute of Intelligent Robots, Jeonbuk National University, Jeonbuk, South Korea
78
Li Z, Chen P, Shuai L, Wang M, Zhang L, Wang Y, Mu J. A Copy Paste and Semantic Segmentation-Based Approach for the Classification and Assessment of Significant Rice Diseases. Plants (Basel, Switzerland) 2022; 11:3174. [PMID: 36432903 PMCID: PMC9695445 DOI: 10.3390/plants11223174]
Abstract
The accurate segmentation of significant rice diseases and assessment of the degree of disease damage are key to early diagnosis and intelligent monitoring, and are the core of accurate pest control and information management. Deep learning applied to rice disease detection and segmentation can significantly improve the accuracy of disease detection and identification, but requires a large number of training samples to determine the optimal parameters of the model. This study proposes a lightweight network based on copy-paste and semantic segmentation for accurate disease region segmentation and severity assessment. First, a dataset for significant rice disease segmentation was selected and collated from three open-source datasets, containing 450 sample images across three categories: rice leaf bacterial blight, blast, and brown spot. Then, to increase the diversity of samples, a data augmentation method, rice leaf disease copy paste (RLDCP), was proposed that expands the collected disease samples using the copy-and-paste concept. The new RSegformer model was then trained by building on the lightweight semantic segmentation network Segformer: replacing its backbone network, combining an attention mechanism, and changing the upsampling operator, so that the model could better balance local and global information, speed up training, and reduce overfitting. The results show that RLDCP effectively improves the accuracy and generalisation performance of the semantic segmentation model compared with traditional data augmentation methods, improving the MIoU by about 5% with a dataset only twice the size. RSegformer achieves 85.38% MIoU at a model size of 14.36 M. The method proposed in this paper can quickly, easily, and accurately identify disease occurrence areas, their species, and the degree of disease damage, providing a reference for timely and effective rice disease control.
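The copy-and-paste idea behind RLDCP can be sketched on toy 2D arrays; the images, mask, and paste location below are invented for illustration and stand in for real leaf photographs and lesion masks:

```python
import copy

def copy_paste(target, patch, mask, top, left):
    """Paste the masked pixels of `patch` onto `target` at (top, left).

    A minimal 2D sketch of copy-paste augmentation: only pixels where
    mask == 1 (the "lesion") are transferred, mirroring how a disease
    region can be copied onto another leaf image to create a new sample.
    """
    out = copy.deepcopy(target)
    for r, row in enumerate(patch):
        for c, value in enumerate(row):
            if mask[r][c]:
                out[top + r][left + c] = value
    return out

leaf = [[0] * 5 for _ in range(5)]   # healthy leaf (all background)
lesion = [[7, 7], [7, 0]]            # 2x2 crop containing a lesion
lesion_mask = [[1, 1], [1, 0]]       # which crop pixels belong to the lesion
augmented = copy_paste(leaf, lesion, lesion_mask, top=1, left=2)
print(augmented)
```

Pasting the same mask into the new image's label map keeps image and annotation consistent, which is what lets this trick enlarge a segmentation dataset without new labeling work.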
Affiliation(s)
- Zhiyong Li
- College of Information Engineering, Sichuan Agricultural University, Ya’an 625000, China
- Sichuan Key Laboratory of Agricultural Information Engineering, Ya’an 625000, China
- Peng Chen
- College of Information Engineering, Sichuan Agricultural University, Ya’an 625000, China
- Sichuan Key Laboratory of Agricultural Information Engineering, Ya’an 625000, China
- Luyu Shuai
- College of Information Engineering, Sichuan Agricultural University, Ya’an 625000, China
- Sichuan Key Laboratory of Agricultural Information Engineering, Ya’an 625000, China
- Mantao Wang
- College of Information Engineering, Sichuan Agricultural University, Ya’an 625000, China
- Sichuan Key Laboratory of Agricultural Information Engineering, Ya’an 625000, China
- Liang Zhang
- College of Information Engineering, Sichuan Agricultural University, Ya’an 625000, China
- Sichuan Key Laboratory of Agricultural Information Engineering, Ya’an 625000, China
- Yuchao Wang
- College of Mechanical and Electrical Engineering, Sichuan Agricultural University, Ya’an 625000, China
- Jiong Mu
- College of Information Engineering, Sichuan Agricultural University, Ya’an 625000, China
- Sichuan Key Laboratory of Agricultural Information Engineering, Ya’an 625000, China
79
Calibrated bagging deep learning for image semantic segmentation: A case study on COVID-19 chest X-ray image. PLoS One 2022; 17:e0276250. [DOI: 10.1371/journal.pone.0276250]
Abstract
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) causes coronavirus disease 2019 (COVID-19). Imaging tests such as chest X-ray (CXR) and computed tomography (CT) can provide useful information to clinical staff for facilitating a diagnosis of COVID-19 in a more efficient and comprehensive manner. As a breakthrough of artificial intelligence (AI), deep learning has been applied to perform COVID-19 infection region segmentation and disease classification by analyzing CXR and CT data. However, prediction uncertainty of deep learning models for these tasks, which is very important to safety-critical applications like medical image processing, has not been comprehensively investigated. In this work, we propose a novel ensemble deep learning model through integrating bagging deep learning and model calibration to not only enhance segmentation performance, but also reduce prediction uncertainty. The proposed method has been validated on a large dataset that is associated with CXR image segmentation. Experimental results demonstrate that the proposed method can improve the segmentation performance, as well as decrease prediction uncertainty.
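The combination of model calibration and bagging can be sketched as temperature-scaled softmax probabilities averaged across ensemble members; this is a minimal illustration with invented logits and a hand-picked temperature, not the paper's exact calibration scheme:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature scaling: T > 1 softens overconfident predictions.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def calibrated_bagging(member_logits, temperature):
    """Average the calibrated probabilities of bagged ensemble members."""
    probs = [softmax(logits, temperature) for logits in member_logits]
    n = len(probs)
    return [sum(p[i] for p in probs) / n for i in range(len(probs[0]))]

# Hypothetical per-pixel logits (lung vs. background) from 3 bagged models.
members = [[2.0, 0.5], [1.5, 0.8], [2.2, 0.3]]
print(calibrated_bagging(members, temperature=2.0))
```

Averaging over bagged members reduces variance, while the temperature keeps each member's probabilities honest, which is how the two ingredients jointly lower prediction uncertainty.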
80
Ukwuoma CC, Qin Z, Agbesi VK, Ejiyi CJ, Bamisile O, Chikwendu IA, Tienin BW, Hossin MA. LCSB-inception: Reliable and effective light-chroma separated branches for Covid-19 detection from chest X-ray images. Comput Biol Med 2022; 150:106195. [PMID: 37859288 PMCID: PMC9561436 DOI: 10.1016/j.compbiomed.2022.106195]
Abstract
According to the World Health Organization, an estimated more than five million infections and 355,000 deaths have been recorded worldwide since the emergence of the coronavirus disease (COVID-19). Various researchers have developed interesting and effective deep learning frameworks to tackle this disease. However, poor feature extraction from chest X-ray images and the high computational cost of the available models make an accurate and fast COVID-19 detection framework difficult to build. The major purpose of this study is therefore to offer an approach for extracting COVID-19 features from chest X-rays that is accurate, efficient, and less computationally expensive than earlier work. To achieve this goal, we explored the Inception V3 deep artificial neural network. This study proposes LCSB-Inception: a two-path (L and AB channel) Inception V3 network along the first three convolutional layers. The RGB input image is first transformed to CIE LAB coordinates; the L channel is aimed at learning the textural and edge features of the chest X-ray, and the AB channel at learning its color variations. The filters are split evenly between the achromatic L branch and the AB branch (50% L, 50% AB), which saves between one-third and one-half of the parameters in the divided branches. We further introduce global second-order pooling at the last two convolutional blocks for image feature extraction that is more robust than conventional max-pooling. The detection accuracy of LCSB-Inception is further improved by applying the Contrast Limited Adaptive Histogram Equalization (CLAHE) image enhancement technique to the input images before feeding them to the network.
The proposed LCSB-Inception network is evaluated with two loss functions (categorical smooth loss and categorical cross-entropy) and two learning rates, using accuracy, precision, sensitivity, specificity, F1-score, and AUC score on the chestX-ray-15k (Data_1) and COVID-19 Radiography (Data_2) datasets. According to the experimental findings, the proposed models produce strong results, with an accuracy of 0.97867 on Data_1 and 0.98199 on Data_2. Based on these results, the suggested models outperform conventional deep learning models and other state-of-the-art techniques presented in the literature for COVID-19 identification.
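The achromatic/chroma split and the contrast-enhancement preprocessing can be illustrated with plain NumPy. Note the simplifications: the paper uses a CIE LAB conversion and CLAHE (typically via OpenCV), whereas this sketch substitutes an ITU-R BT.601-style luma/colour-difference split and global histogram equalization (CLAHE additionally tiles the image and clips the histogram); it only shows the general idea.

```python
import numpy as np

def luma_chroma_split(rgb):
    """Split an RGB image into an achromatic channel and chroma channels.
    Simplified stand-in for the paper's CIE LAB conversion, using the
    BT.601 luma and two colour-difference channels.
    rgb: float array in [0, 1], shape (H, W, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # achromatic ("L-like") branch
    cb, cr = b - y, r - y                   # chroma ("AB-like") branch
    return y, np.stack([cb, cr], axis=-1)

def hist_equalize(gray, bins=256):
    """Global histogram equalization on a [0, 1] grayscale image.
    (CLAHE, as used in the paper, additionally tiles the image and clips
    the histogram; this shows only the basic equalization step.)"""
    hist, edges = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum() / gray.size          # cumulative distribution
    return np.interp(gray, edges[1:], cdf)   # remap intensities by CDF
```

Each branch of the network would then consume one of the two outputs of `luma_chroma_split`, after contrast enhancement of the achromatic channel.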
Affiliation(s)
- Chiagoziem C Ukwuoma, School of Information and Software Engineering, University of Electronic Science and Technology of China, Sichuan, PR China
- Zhiguang Qin, School of Information and Software Engineering, University of Electronic Science and Technology of China, Sichuan, PR China
- Victor Kwaku Agbesi, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Sichuan, PR China
- Chukwuebuka J Ejiyi, School of Information and Software Engineering, University of Electronic Science and Technology of China, Sichuan, PR China
- Olusola Bamisile, Sichuan Industrial Internet Intelligent Monitoring and Application Engineering Technology Research Center, Chengdu University of Technology, Chenghua District, Chengdu, Sichuan, PR China
- Ijeoma A Chikwendu, School of Information and Communication Engineering, University of Electronic Science and Technology of China, Sichuan, PR China
- Bole W Tienin, School of Information and Communication Engineering, University of Electronic Science and Technology of China, Sichuan, PR China
- Md Altab Hossin, School of Innovation and Entrepreneurship, Chengdu University, No. 2025, Chengluo Avenue, 610106, Chengdu, Sichuan, PR China
|
81
|
Kanda PS, Xia K, Kyslytysna A, Owoola EO. Tomato Leaf Disease Recognition on Leaf Images Based on Fine-Tuned Residual Neural Networks. Plants (Basel) 2022; 11:2935. [PMID: 36365386 PMCID: PMC9653987 DOI: 10.3390/plants11212935]
Abstract
Humans depend heavily on agriculture, which is the main source of prosperity. The various plant diseases that farmers must contend with pose many challenges in crop production. The main issues to be taken into account for maximizing productivity are the recognition and prevention of plant diseases. Early diagnosis of plant disease is essential for maximizing agricultural yield as well as saving costs and reducing crop loss. In addition, computerizing the whole process makes it simple to implement. In this paper, an intelligent method based on deep learning is presented to recognize nine common tomato diseases. To this end, a residual neural network algorithm is presented to recognize tomato diseases. This research is carried out at four levels of diversity, including depth size, discriminative learning rates, training and validation data split ratios, and batch sizes. For the experimental analysis, five network depths are used to measure the accuracy of the network. Based on the experimental results, the proposed method achieved the highest F1 score of 99.5%, which outperformed most previous competing methods in tomato leaf disease recognition. Further testing of our method on the Flavia leaf image dataset resulted in a 99.23% F1 score. However, the method had a drawback: some of the false predictions confused tomato early blight and tomato late blight, two classes with only fine-grained distinctions.
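"Discriminative learning rates" means giving the early, more generic layers of a pre-trained network smaller learning rates than the later, task-specific layers. A minimal sketch of the idea, assuming a geometric spacing with the fastai-style decay factor 2.6 and hypothetical layer-group names; the paper's exact values are not stated in the abstract.

```python
def discriminative_lrs(group_names, max_lr=1e-3, decay=2.6):
    """Assign per-group learning rates for fine-tuning: earlier (more
    generic) layer groups get smaller rates, the classification head the
    largest. decay=2.6 is a common heuristic default, not the paper's."""
    n = len(group_names)
    return {name: max_lr / decay ** (n - 1 - i)
            for i, name in enumerate(group_names)}

# Hypothetical layer groups of a residual network, stem -> head.
lrs = discriminative_lrs(["stem", "block1", "block2", "block3", "head"])
# The head trains at max_lr; the stem trains ~2.6**4 (about 46x) slower.
```

These per-group rates would then be passed to the optimizer as separate parameter groups.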
|
82
|
Mukhiddinov M, Muminov A, Cho J. Improved Classification Approach for Fruits and Vegetables Freshness Based on Deep Learning. Sensors (Basel) 2022; 22:8192. [PMID: 36365888 PMCID: PMC9653939 DOI: 10.3390/s22218192]
Abstract
Classification of fruit and vegetable freshness plays an essential role in the food industry. Freshness is a fundamental measure of fruit and vegetable quality that directly affects the physical health and purchasing motivation of consumers. In addition, it is a significant determinant of market price; thus, it is imperative to study the freshness of fruits and vegetables. Owing to similarities in color, texture, and external environmental changes, such as shadows, lighting, and complex backgrounds, the automatic recognition and classification of fruits and vegetables using machine vision is challenging. This study presents a deep-learning system for multiclass fruit and vegetable categorization based on an improved YOLOv4 model that first recognizes the object type in an image before classifying it into one of two categories: fresh or rotten. The proposed system involves the development of an optimized YOLOv4 model, the creation of an image dataset of fruits and vegetables, data augmentation, and performance evaluation. Furthermore, the backbone of the proposed model was enhanced using the Mish activation function for more precise and rapid detection. In a complete experimental evaluation, the proposed method obtained an average precision of 50.4%, higher than the original YOLOv4 (49.3%) and YOLOv3 (41.7%). The proposed system has outstanding prospects for the construction of an autonomous and real-time fruit and vegetable classification system for the food industry and marketplaces and can also help visually impaired people to choose fresh food and avoid food poisoning.
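The Mish activation mentioned above has a compact standard definition, x * tanh(softplus(x)); it is smooth and non-monotonic, which is why YOLOv4-style backbones use it in place of Leaky ReLU. A minimal numerically stable NumPy version:

```python
import numpy as np

def mish(x):
    """Mish activation: x * tanh(softplus(x)).
    np.logaddexp(0, x) computes softplus = log(1 + exp(x)) without
    overflowing for large x."""
    return x * np.tanh(np.logaddexp(0.0, x))
```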
|
83
|
Contrasting EfficientNet, ViT, and gMLP for COVID-19 Detection in Ultrasound Imagery. J Pers Med 2022; 12:1707. [PMID: 36294846 PMCID: PMC9605641 DOI: 10.3390/jpm12101707]
Abstract
A timely diagnosis of coronavirus is critical in order to control the spread of the virus. To aid in this, we propose in this paper a deep learning-based approach for detecting coronavirus patients using ultrasound imagery. We propose to exploit the transfer learning of an EfficientNet model pre-trained on the ImageNet dataset for the classification of ultrasound images of suspected patients. In particular, we contrast the results of EfficientNet-B2 with the results of ViT and gMLP. We then show the results of the three models learning from scratch, i.e., without transfer learning. We view the detection problem from a multiclass classification perspective by classifying images as COVID-19, pneumonia, or normal. In the experiments, we evaluated the models on a publicly available ultrasound dataset consisting of 261 recordings (202 videos + 59 images) belonging to 216 distinct patients. The best results were obtained using EfficientNet-B2 with transfer learning. In particular, we obtained precision, recall, and F1 scores of 95.84%, 99.88%, and 97.41%, respectively, for detecting the COVID-19 class. EfficientNet-B2 with transfer learning presented an overall accuracy of 96.79%, outperforming gMLP and ViT, which achieved accuracies of 93.03% and 92.82%, respectively.
|
84
|
Albahli S, Masood M. Efficient attention-based CNN network (EANet) for multi-class maize crop disease classification. Frontiers in Plant Science 2022; 13:1003152. [PMID: 36311068 PMCID: PMC9597248 DOI: 10.3389/fpls.2022.1003152]
Abstract
Maize leaf disease significantly reduces the quality and overall crop yield. Therefore, it is crucial to monitor and diagnose illnesses during the growth season to take necessary actions. However, accurate identification is challenging to achieve, as the existing automated methods are computationally complex or perform well only on images with a simple background, whereas realistic field conditions include a lot of background noise that makes this task difficult. In this study, we present an end-to-end learning CNN architecture, the Efficient Attention Network (EANet), based on the EfficientNetv2 model, to identify multi-class maize crop diseases. To further enhance the capacity of the feature representation, we introduce a spatial-channel attention mechanism to focus on affected locations and help the detection network accurately recognize multiple diseases. We trained the EANet model using focal loss to overcome the class-imbalanced data issue and transfer learning to enhance network generalization. We evaluated the presented approach on publicly available datasets with samples captured under various challenging environmental conditions, such as varying backgrounds, non-uniform light, and chrominance variances. Our approach showed an overall accuracy of 99.89% for the categorization of various maize crop diseases. The experimental and visual findings reveal that our model performs better than conventional CNNs, and that the attention mechanism properly accentuates the disease-relevant information while ignoring the background noise.
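The focal loss used above has the standard form from Lin et al., -alpha_t * (1 - p_t)^gamma * log(p_t): the (1 - p_t)^gamma factor down-weights easy, well-classified examples so training focuses on hard and minority-class samples. A minimal binary NumPy sketch (the defaults gamma=2, alpha=0.25 are the common ones, not necessarily the paper's):

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=0.25, eps=1e-12):
    """Binary focal loss.
    probs:   predicted probability of the positive class, in (0, 1).
    targets: 0/1 labels of the same shape."""
    p_t = np.where(targets == 1, probs, 1.0 - probs)   # prob of true class
    a_t = np.where(targets == 1, alpha, 1.0 - alpha)   # class weighting
    return np.mean(-a_t * (1.0 - p_t) ** gamma * np.log(p_t + eps))
```

With gamma = 0 and alpha = 0.5 this reduces to (half of) the ordinary cross-entropy, which makes the down-weighting effect easy to verify.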
Affiliation(s)
- Saleh Albahli, Department of Information Technology, College of Computer, Qassim University, Buraydah, Saudi Arabia
- Momina Masood, Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
|
85
|
Sinwar D, Dhaka VS, Tesfaye BA, Raghuwanshi G, Kumar A, Maakar SK, Agrawal S. Artificial Intelligence and Deep Learning Assisted Rapid Diagnosis of COVID-19 from Chest Radiographical Images: A Survey. Contrast Media & Molecular Imaging 2022; 2022:1306664. [PMID: 36304775 PMCID: PMC9581633 DOI: 10.1155/2022/1306664]
Abstract
Artificial Intelligence (AI) has been applied successfully in many real-life domains to solve complex problems. With the invention of Machine Learning (ML) paradigms, it has become convenient for researchers to predict outcomes based on past data. Nowadays, ML is acting as the biggest weapon against the COVID-19 pandemic by detecting symptomatic cases at an early stage and warning people about its future effects. COVID-19 spread globally in such a short period partly because of the shortage of testing facilities and delays in test reports. To address this challenge, AI can be effectively applied to produce fast as well as cost-effective solutions. Many researchers have come up with AI-based solutions for preliminary diagnosis using chest CT images, respiratory sound analysis, comparison of the voices of symptomatic persons with those of asymptomatic ones, and so forth. Some AI-based applications claim good accuracy in predicting the chances of being COVID-19-positive. Within a short period, a great deal of research has been published on the identification of COVID-19. This paper carefully examines and presents a comprehensive survey of more than 110 papers from various reputed sources, namely Springer, IEEE, Elsevier, MDPI, arXiv, and medRxiv. Most of the papers selected for this survey present candid work to detect and classify COVID-19 using deep-learning-based models from chest X-rays and CT scan images. We hope that this survey covers most of the work and provides insights to the research community for proposing efficient as well as accurate solutions for fighting the pandemic.
Affiliation(s)
- Deepak Sinwar, Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur, India
- Vijaypal Singh Dhaka, Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur, India
- Biniyam Alemu Tesfaye, Department of Computer Science, College of Informatics, Bule Hora University, Bule Hora, Ethiopia
- Ghanshyam Raghuwanshi, Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur, India
- Ashish Kumar, Department of Mathematics and Statistics, Manipal University Jaipur, Jaipur, India
- Sunil Kr. Maakar, School of Computing Science & Engineering, Galgotias University, Greater Noida, India
- Sanjay Agrawal, Department of Electrical Engineering, Rajkiya Engineering College, Akbarpur, Ambedkar Nagar, India
|
86
|
Bommu S, M AK, Babburu K, N S, Thalluri LN, G VG, Gopalan A, Mallapati PK, Guha K, Mohammad HR, S SK. Smart City IoT System Network Level Routing Analysis and Blockchain Security Based Implementation. Journal of Electrical Engineering & Technology 2022; 18:1351-1368. [PMID: 37521954 PMCID: PMC9549033 DOI: 10.1007/s42835-022-01239-4]
Abstract
This paper demonstrates network-level performance analysis and implementation of a smart city Internet of Things (IoT) system with Infrastructure as a Service (IaaS) level cloud computing architecture. The smart city IoT network topology performance is analyzed at the simulation level using the NS3 simulator by extracting most of the performance-deciding parameters. The performance-enhanced smart city topology is practically implemented in an IaaS-level architecture. The intended smart city IoT system can monitor principal parameters such as video surveillance with a thermal camera (to identify people infected with viruses such as COVID-19), transport, water quality, solar radiation, sound pollution, air quality (O3, NO2, CO, particles), parking zones, iconic places, E-suggestions, and PRO information over a low-power wide-area network in a 61.88 km × 61.88 km range. We primarily address the IoT network-level routing and quality of service (QoS) challenges and the implementation-level security challenges. The simulation-level network topology analysis is performed to improve routing and QoS. Blockchain technology-based decentralization is adopted to enrich the IoT system's performance in terms of security.
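The security property blockchain contributes here is tamper evidence: each block's hash covers its payload and the previous block's hash, so altering any stored sensor reading breaks the chain. A minimal illustrative hash chain using only the standard library; the sensor names and fields are invented for the example, and the paper's actual deployment (consensus, networking) is not modelled.

```python
import hashlib
import json

def make_block(index, payload, prev_hash):
    """A minimal block for an IoT sensor ledger: the SHA-256 hash covers
    the payload and the previous block's hash."""
    body = json.dumps({"index": index, "payload": payload,
                       "prev": prev_hash}, sort_keys=True)
    return {"index": index, "payload": payload, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def chain_valid(chain):
    """Recompute every hash and check each block links to its predecessor."""
    for i, blk in enumerate(chain):
        body = json.dumps({"index": blk["index"], "payload": blk["payload"],
                           "prev": blk["prev"]}, sort_keys=True)
        if blk["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        if i > 0 and blk["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# Hypothetical air-quality readings appended to the ledger.
genesis = make_block(0, {"sensor": "air_quality", "no2_ppb": 21}, "0" * 64)
chain = [genesis,
         make_block(1, {"sensor": "air_quality", "no2_ppb": 24},
                    genesis["hash"])]
```

Editing any payload after the fact invalidates the stored hash, which is how decentralized verification detects tampering.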
Affiliation(s)
- Samuyelu Bommu, Department of Electronics and Communication Engineering, PVP Siddhartha Institute of Technology, Vijayawada, 520007 Andhra Pradesh India
- Aravind Kumar M, Department of Electronics and Communication Engineering, West Godavari Institute of Science and Engineering, Avapadu, Prakasraopalem, East Godavari District, 534112 Andhra Pradesh India
- Kiranmai Babburu, Department of Electronics and Communication Engineering, Lendi Institute of Engineering and Technology, Vizianagaram, 535005 Andhra Pradesh India
- Srikanth N, Department of Electronics and Communication Engineering, St. Peters Engineering College, Medchal, 500043 Telangana India
- Lakshmi Narayana Thalluri, Department of Electronics and Communication Engineering, Andhra Loyola Institute of Engineering and Technology, Dr. A. P. J. Abdul Kalam Research Forum, Vijayawada, 520008 Andhra Pradesh India
- V. Ganesh G, Department of Electronics and Communication Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, 522502 Andhra Pradesh India
- Anitha Gopalan, Department of Electronics and Communication Engineering, Saveetha School of Engineering, SIMATS, Thandalam, Chennai, 602105 Tamilnadu India
- Purna Kishore Mallapati, Department of Electronics and Communication Engineering, KKR & KSR Institute Of Technology & Science, Vinjanampadu, Guntur, 522017 Andhra Pradesh India
- Koushik Guha, National MEMS Design Center, Department of Electronics and Communication Engineering, National Institute of Technology, Silchar, 788010 Assam India
- Hayath Rajvee Mohammad, Department of Electronics and Communication Engineering, PBR VITS, Kavali, 524201 Andhra Pradesh India
- S. Kiran S, Department of Electronics and Communication Engineering, Lendi Institute of Engineering and Technology, Vizianagaram, 535005 Andhra Pradesh India
|
87
|
Citakoglu H, Coşkun Ö. Comparison of hybrid machine learning methods for the prediction of short-term meteorological droughts of Sakarya Meteorological Station in Turkey. Environmental Science and Pollution Research International 2022; 29:75487-75511. [PMID: 35655018 DOI: 10.1007/s11356-022-21083-3]
Abstract
Drought is a harmful natural disaster with various negative effects on many aspects of life. In this research, short-term meteorological droughts were predicted with hybrid machine learning models using monthly precipitation data (1960-2020) from Sakarya Meteorological Station, located in the northwest of Turkey. The standardized precipitation index (SPI), which depends only on precipitation data, was used as the drought index, and 1-, 3-, and 6-month time scales for short-term droughts were considered. In the prediction models, the drought index at t + 1 was the output variable, predicted from the t, t - 1, t - 2, and t - 3 input variables. Artificial neural networks (ANN), the adaptive neuro-fuzzy inference system (ANFIS), Gaussian process regression (GPR), support vector machine regression (SVMR), and the k-nearest neighbors (KNN) algorithm were employed as stand-alone machine learning methods. Variational mode decomposition (VMD), discrete wavelet transform (DWT), and empirical mode decomposition (EMD) were utilized as pre-processing techniques to create hybrid models. Six different performance criteria were used to assess model performance. The hybrid models using the pre-processing techniques were found to be more successful than the stand-alone models. The hybrid VMD-GPR model yielded the best results for the 1-month time scale (NSE = 0.9345, OI = 0.9438, R2 = 0.9367) and the 3-month time scale (NSE = 0.9528, OI = 0.9559, R2 = 0.9565), and the hybrid DWT-ANN model for the 6-month time scale (NSE = 0.9398, OI = 0.9483, R2 = 0.9450). Considering the entire set of performance criteria, the decomposition success of VMD was higher than that of DWT and EMD.
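The supervised setup described above, predicting the index at t + 1 from its values at t, t - 1, t - 2, and t - 3, amounts to building a lagged feature matrix from the (possibly VMD/DWT/EMD-decomposed) monthly series. A minimal NumPy sketch of that step; the downstream regressors (GPR, ANN, etc.) and the decompositions themselves are not reproduced here.

```python
import numpy as np

def lag_features(series, lags=(0, 1, 2, 3)):
    """Build input/output pairs for one-step-ahead prediction:
    X[t] = (series[t], series[t-1], series[t-2], series[t-3]),
    y[t] = series[t+1].
    Returns X of shape (n_samples, len(lags)) and y of shape (n_samples,)."""
    series = np.asarray(series, dtype=float)
    max_lag = max(lags)
    rows = range(max_lag, len(series) - 1)
    X = np.array([[series[t - l] for l in lags] for t in rows])
    y = series[max_lag + 1:]
    return X, y
```

`X` and `y` can then be fed to any regressor, either on the raw SPI series (stand-alone models) or on each decomposed component (hybrid models).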
Affiliation(s)
- Hatice Citakoglu, Department of Civil Engineering, Erciyes University, Kayseri, Turkey
- Ömer Coşkun, Turkish General Directorate of State Hydraulic Works (DSI), Kayseri, Turkey
|
88
|
Sun M, Xu L, Chen X, Ji Z, Zheng Y, Jia W. BFP Net: Balanced Feature Pyramid Network for Small Apple Detection in Complex Orchard Environment. Plant Phenomics 2022; 2022:9892464. [PMID: 36320456 PMCID: PMC9595048 DOI: 10.34133/2022/9892464]
Abstract
Despite significant achievements in the detection of target fruits, small fruit detection remains a great challenge, especially for immature small green fruits covering only a few pixels. The closeness in color between the fruit skin and the background greatly increases the difficulty of locating small target fruits in the natural orchard environment. In this paper, we propose a balanced feature pyramid network (BFP Net) for small apple detection. This network balances the information mapped to small apples from two perspectives: multiple-scale fruits on the different layers of the FPN, and the characteristics of a new extended feature from the output of ResNet50 conv1. Specifically, we design a weight-like feature fusion architecture on the lateral connection and top-down structure to alleviate the small-scale information imbalance across the different layers of the FPN. Moreover, a new extended layer from ResNet50 conv1 is embedded into the lowest layer of the standard FPN, and a decoupled-aggregated module is devised on this new extended layer to complement spatial location information and relieve the problem of locating small apples. In addition, a feature Kullback-Leibler distillation loss is introduced to transfer favorable knowledge from the teacher model to the student model. Experimental results show that the APS of our method reaches 47.0%, 42.2%, and 35.6% on the GreenApple, MinneApple, and Pascal VOC benchmarks, respectively. Overall, our method is not only slightly better than some state-of-the-art methods but also has good generalization performance.
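A feature Kullback-Leibler distillation loss, in its generic form, softens the teacher's and student's feature responses into distributions and penalizes their KL divergence. A minimal NumPy sketch under assumptions: softmax over a flattened feature vector and a softening temperature tau=4 are common distillation choices, not necessarily the paper's exact formulation.

```python
import numpy as np

def softmax(z, tau=1.0):
    z = np.asarray(z, dtype=float) / tau
    z = z - z.max()            # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def feature_kl_distillation(teacher_feat, student_feat, tau=4.0):
    """KL(p_teacher || p_student) between temperature-softened feature
    distributions: the signal that transfers the teacher's knowledge to
    the student network."""
    p = softmax(teacher_feat, tau)
    q = softmax(student_feat, tau)
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))
```

The loss is zero when the student reproduces the teacher's feature distribution and grows as the two diverge; in training it is added to the ordinary detection loss.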
Affiliation(s)
- Meili Sun, School of Information Science and Engineering, Shandong Normal University, Jinan, China; Key Laboratory of Facility Agriculture Measurement and Control Technology and Equipment of Machinery Industry, Zhenjiang 212013, China
- Liancheng Xu, School of Information Science and Engineering, Shandong Normal University, Jinan, China
- Xiude Chen, National Engineering Research Center for Apple, Shandong Agriculture University, Taian 271018, China
- Ze Ji, School of Engineering, Cardiff University, Cardiff CF24 3AA, UK
- Yuanjie Zheng, School of Information Science and Engineering, Shandong Normal University, Jinan, China
- Weikuan Jia, School of Information Science and Engineering, Shandong Normal University, Jinan, China; Key Laboratory of Facility Agriculture Measurement and Control Technology and Equipment of Machinery Industry, Zhenjiang 212013, China
|
89
|
Multiscale voting mechanism for rice leaf disease recognition under natural field conditions. Int J Intell Syst 2022. [DOI: 10.1002/int.23081]
|
90
|
Malik H, Anees T, Din M, Naeem A. CDC_Net: multi-classification convolutional neural network model for detection of COVID-19, pneumothorax, pneumonia, lung Cancer, and tuberculosis using chest X-rays. Multimedia Tools and Applications 2022; 82:13855-13880. [PMID: 36157356 PMCID: PMC9485026 DOI: 10.1007/s11042-022-13843-7]
Abstract
Coronavirus (COVID-19) has severely harmed the healthcare system and economy throughout the world. COVID-19 has symptoms similar to other chest disorders such as lung cancer (LC), pneumothorax, tuberculosis (TB), and pneumonia, which might mislead clinical professionals in detecting the coronavirus. This motivates us to design a model to classify multiple chest infections. A chest X-ray is the most ubiquitous disease-diagnosis procedure in medical practice; as a result, chest X-ray examinations are the primary diagnostic tool for all of these chest infections. For the sake of saving human lives, paramedics and researchers are working tirelessly to establish a precise and reliable method for diagnosing COVID-19 at an early stage. However, COVID-19's medical diagnosis is exceedingly idiosyncratic and varied. A multi-classification method based on a deep learning (DL) model is developed and tested in this work to automatically classify COVID-19, LC, pneumothorax, TB, and pneumonia from chest X-ray images. COVID-19 and the other chest tract disorders are diagnosed using a convolutional neural network (CNN) model called CDC_Net that incorporates residual-network ideas and dilated convolution. For this study, we used this model in conjunction with publicly available benchmark data to identify these diseases. For the first time, a single deep learning model has been used to diagnose five different chest ailments. In terms of classification accuracy, recall, precision, and F1-score, we compared the proposed model to three CNN-based pre-trained models: VGG-19, ResNet-50, and Inception v3. An AUC of 0.9953 was attained by CDC_Net in identifying various chest diseases (with an accuracy of 99.39%, a recall of 98.13%, and a precision of 99.42%).
The pre-trained VGG-19, ResNet-50, and Inception v3 models achieved accuracies of 95.61%, 96.15%, and 95.16%, respectively, in classifying multiple chest diseases. Using chest X-rays, the proposed model was found to be highly accurate in diagnosing chest diseases, and on our testing data set it showed significant performance compared to competing methods. Statistical analyses of the datasets using McNemar's and ANOVA tests also confirmed the robustness of the proposed model.
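The accuracy, precision, recall, and F1-score figures compared above all derive from the per-class confusion-matrix counts (here written one-vs-rest for a single class):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall (sensitivity), and F1 from the
    one-vs-rest confusion-matrix counts of a single class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
    }
```

For multi-class results such as CDC_Net's, these per-class values are typically averaged (macro or weighted) across the five disease classes.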
Affiliation(s)
- Hassaan Malik, Department of Computer Science, University of Management and Technology, Lahore, 54000 Pakistan
- Tayyaba Anees, Department of Software Engineering, University of Management and Technology, Lahore, 54000 Pakistan
- Muizzud Din, Department of Computer Science, Ghazi University, Dera Ghazi Khan, 32200 Pakistan
- Ahmad Naeem, Department of Computer Science, University of Management and Technology, Lahore, 54000 Pakistan
|
91
|
Li Y, Xue J, Wang K, Zhang M, Li Z. Surface Defect Detection of Fresh-Cut Cauliflowers Based on Convolutional Neural Network with Transfer Learning. Foods 2022; 11:2915. [PMID: 36141042 PMCID: PMC9498786 DOI: 10.3390/foods11182915]
Abstract
A fresh-cut cauliflower surface defect detection and classification model based on a convolutional neural network with transfer learning is proposed to address the low efficiency of the traditional manual detection of fresh-cut cauliflower surface defects. A total of 4790 images of fresh-cut cauliflower were collected in four categories: healthy, diseased, browning, and mildewed. In this study, the pre-trained MobileNet model was fine-tuned to improve training speed and accuracy. The model was optimized by selecting the best combination of training hyper-parameters and adjusting the number of frozen layers; the parameters downloaded from ImageNet were optimally integrated with the parameters trained on our own model. Test results were compared against VGG19, InceptionV3, and NASNetMobile. Experimental results showed that the MobileNet model's loss value was 0.033, its accuracy was 99.27%, and its F1 score was 99.24% on the test set when the learning rate was set to 0.001, dropout to 0.5, and the number of frozen layers to 80. Compared with the other models, this model had better capability and stronger robustness and was more suitable for the surface defect detection of fresh-cut cauliflower, and the experimental results demonstrated the method's feasibility.
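Freezing the first N layers means the early, ImageNet-derived feature extractors keep their weights while only the later layers (and new head) are updated on the target data. A framework-agnostic sketch of that partition, with hypothetical layer names; in practice one would set the corresponding flags (e.g. layer trainability) in the chosen DL framework.

```python
def split_trainable(layer_names, n_frozen):
    """Partition an ordered list of layers into frozen and trainable sets
    for transfer learning: the first n_frozen layers keep their pre-trained
    (ImageNet) weights fixed, the rest are fine-tuned on the target data.
    The paper reports n_frozen = 80 as best for its MobileNet."""
    return list(layer_names[:n_frozen]), list(layer_names[n_frozen:])

# Hypothetical ordered layer names of a ~100-layer backbone.
layers = [f"conv_{i}" for i in range(100)]
frozen, trainable = split_trainable(layers, 80)
```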
Affiliation(s)
- Yaodi Li, College of Agricultural Engineering, Shanxi Agricultural University, Jinzhong 030801, China
- Jianxin Xue, College of Agricultural Engineering, Shanxi Agricultural University, Jinzhong 030801, China (Correspondence; Tel.: +86-133-1344-0069)
- Kai Wang, College of Agricultural Engineering, Shanxi Agricultural University, Jinzhong 030801, China
- Mingyue Zhang, College of Agricultural Engineering, Shanxi Agricultural University, Jinzhong 030801, China
- Zezhen Li, College of Food Science and Engineering, Shanxi Agricultural University, Jinzhong 030801, China
|
92
|
Bhosale YH, Patnaik KS. Application of Deep Learning Techniques in Diagnosis of Covid-19 (Coronavirus): A Systematic Review. Neural Process Lett 2022; 55:1-53. [PMID: 36158520 PMCID: PMC9483290 DOI: 10.1007/s11063-022-11023-0]
Abstract
Covid-19 is now one of the most intense and severe illnesses of the twenty-first century. It has already endangered the lives of millions of people worldwide due to its acute pulmonary effects. Image-based diagnostic techniques like X-ray, CT, and ultrasound are commonly employed to get a quick and reliable assessment of the clinical condition. Identifying Covid-19 from such clinical scans is exceedingly time-consuming, labor-intensive, and susceptible to human error. As a result, radiography imaging approaches using Deep Learning (DL) are consistently employed to achieve great results. Various artificial-intelligence-based systems have been developed for the early prediction of coronavirus using radiography pictures. Specific DL methods such as CNNs and RNNs noticeably extract extremely critical characteristics, primarily in diagnostic imaging, and recent coronavirus studies have made significant use of these techniques on radiography image scans. The disease, as well as the present pandemic, was studied using public and private data. A total of 64 pre-trained and custom DL models, organized by imaging modality as a taxonomy, are selected from the studied articles. The constraints relevant to DL-based techniques are sample selection, network architecture, training with a minimal annotated database, and security issues. The review also covers causal agents, pathophysiology, immunological reactions, and the epidemiology of the illness. DL-based Covid-19 detection systems are the key focus of this review article, which is intended to help accelerate Covid-19 research.
Affiliation(s)
- Yogesh H. Bhosale
- Department of Computer Science and Engineering, Birla Institute of Technology, Mesra, Ranchi 835215 India
- K. Sridhar Patnaik
- Department of Computer Science and Engineering, Birla Institute of Technology, Mesra, Ranchi 835215 India
|
93
|
Li P, Hao H, Bai Y, Li Y, Mao X, Xu J, Liu M, Lv Y, Chen W, Ge D. Convolutional neural networks-based health risk modelling of some heavy metals in a soil-rice system. THE SCIENCE OF THE TOTAL ENVIRONMENT 2022; 838:156466. [PMID: 35690189 DOI: 10.1016/j.scitotenv.2022.156466] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/12/2022] [Revised: 05/29/2022] [Accepted: 05/31/2022] [Indexed: 06/15/2023]
Abstract
The long-term consumption of heavy-metal-rich rice can cause serious harm to human health. However, existing health risk assessment (HRA) can only be performed after the rice has been harvested, making it a passive and lagging approach. This study is the first to explore the feasibility of health risk (HR) prediction, proposing the indirect model CNNHR-IND and the direct model CNNHR-DIR based on convolutional neural network (CNN) technology. The dataset comprised 390 pairs of soil-rice samples collected from You County, China, with 17 environmental covariates. The R2 values of CNNHR-IND for non-carcinogenic and carcinogenic risks were 0.578 and 0.554, respectively, and those of CNNHR-DIR were 0.647 and 0.574. Both models performed well, with CNNHR-DIR achieving the higher estimation accuracy. Spatial autocorrelation analysis indicated that CNNHR-DIR introduced no systematic bias in its health-risk predictions, confirming the soundness of the model, and a sensitivity analysis further confirmed its generalizability and robustness. This study demonstrates the feasibility of HR prediction and the potential of CNN technology in HRA, and is significant for early risk warnings in rice planting and the sustainable development of public health.
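The indirect-versus-direct modelling idea (predict the grain metal content and then apply the exposure formula, versus predicting the hazard quotient end-to-end) can be sketched with simple linear stand-ins for the CNNs; the covariates, coefficients, and exposure constants below are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy covariates (e.g. soil pH, organic matter, soil Cd) for n sites --
# simplified stand-ins for the study's 17 environmental covariates.
n = 200
X = rng.normal(size=(n, 3))

# Synthetic "ground truth": rice-grain Cd content driven by the covariates.
true_w = np.array([0.4, -0.2, 0.7])
grain_cd = np.exp(0.3 * (X @ true_w))        # mg/kg, always positive

# Non-carcinogenic hazard quotient: HQ = (C * IR) / (BW * RfD).
IR, BW, RfD = 0.389, 60.0, 1e-3              # intake (kg/day), body weight (kg), reference dose
hq = grain_cd * IR / (BW * RfD)

def fit_linear(X, y):
    """Least-squares fit with intercept -- a linear stand-in for a trained CNN."""
    A = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Xq: np.hstack([Xq, np.ones((len(Xq), 1))]) @ coef

def r2(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

# Indirect route (CNNHR-IND idea): covariates -> grain Cd -> HQ via the formula.
hq_indirect = fit_linear(X, grain_cd)(X) * IR / (BW * RfD)
# Direct route (CNNHR-DIR idea): covariates -> HQ in a single model.
hq_direct = fit_linear(X, hq)(X)

r2_ind, r2_dir = r2(hq, hq_indirect), r2(hq, hq_direct)
print(f"indirect R2: {r2_ind:.3f}, direct R2: {r2_dir:.3f}")
```

Because the toy HQ is a fixed multiple of the grain concentration, the two routes coincide here; in the study the nonlinearity of the CNNs and of the real exposure pathway is what separates them.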
Affiliation(s)
- Panpan Li
- College of Computer, National University of Defense Technology, Changsha 410003, PR China
- Huijuan Hao
- College of Resources and Environment, Hunan Agricultural University, Changsha 410128, PR China; Risk Assessment Laboratory for Environmental Factors of Agro-product Quality Safety (Changsha), Ministry of Agriculture and Rural Affairs, Changsha 410005, PR China
- Yang Bai
- General Hospital of Northern Theater Command, Shenyang 110000, PR China
- Yuanyuan Li
- Hunan Pinbiao Huace Testing Technology Co., Ltd, Changsha 410100, PR China
- Xiaoguang Mao
- College of Computer, National University of Defense Technology, Changsha 410003, PR China
- Jianjun Xu
- College of Computer, National University of Defense Technology, Changsha 410003, PR China
- Meng Liu
- General Hospital of Northern Theater Command, Shenyang 110000, PR China
- Yuntao Lv
- Risk Assessment Laboratory for Environmental Factors of Agro-product Quality Safety (Changsha), Ministry of Agriculture and Rural Affairs, Changsha 410005, PR China
- Wanming Chen
- Risk Assessment Laboratory for Environmental Factors of Agro-product Quality Safety (Changsha), Ministry of Agriculture and Rural Affairs, Changsha 410005, PR China
- Dabing Ge
- College of Resources and Environment, Hunan Agricultural University, Changsha 410128, PR China
|
94
|
Sunsuhi G, Albin Jose S. An Adaptive Eroded Deep Convolutional neural network for brain image segmentation and classification using Inception ResnetV2. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103863] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
|
95
|
Sarv Ahrabi S, Momenzadeh A, Baccarelli E, Scarpiniti M, Piazzo L. How much BiGAN and CycleGAN-learned hidden features are effective for COVID-19 detection from CT images? A comparative study. THE JOURNAL OF SUPERCOMPUTING 2022; 79:2850-2881. [PMID: 36042937 PMCID: PMC9411851 DOI: 10.1007/s11227-022-04775-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Accepted: 08/10/2022] [Indexed: 06/15/2023]
Abstract
Bidirectional generative adversarial networks (BiGANs) and cycle generative adversarial networks (CycleGANs) are two emerging machine learning models that, up to now, have mainly been used as generative models, i.e., to generate output data sampled from a target probability distribution. However, these models are also equipped with encoding modules which, after weakly supervised training, could in principle be exploited to extract hidden features from the input data. How these extracted features can be effectively exploited for classification tasks is still an unexplored field. Motivated by this consideration, in this paper we develop and numerically test a novel inference engine that relies on BiGAN- and CycleGAN-learned hidden features to distinguish COVID-19 from other lung diseases in computed tomography (CT) scans. The main contributions of the paper are twofold. First, we develop a kernel density estimation (KDE)-based inference method which, in the training phase, leverages the hidden features extracted by BiGANs and CycleGANs to estimate the (a priori unknown) probability density function (PDF) of the CT scans of COVID-19 patients and, in the inference phase, uses it as a target COVID-PDF for the detection of COVID disease. Second, we numerically evaluate and compare the classification accuracies of the implemented BiGAN and CycleGAN models against those of state-of-the-art methods that rely on unsupervised training of convolutional autoencoders (CAEs) for feature extraction. The performance comparisons cover a spectrum of training loss functions and distance metrics. The classification accuracies of the proposed CycleGAN-based (resp., BiGAN-based) models outperform those of the benchmark CAE-based models by about 16% (resp., 14%).
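The KDE-based inference scheme can be sketched as follows, with synthetic 2-D vectors standing in for the BiGAN/CycleGAN-encoded CT features; the bandwidth, threshold quantile, and feature distributions are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in encoder outputs: in the paper these would be BiGAN/CycleGAN
# encoder features of CT scans; here we draw synthetic 2-D features.
covid_feats = rng.normal(loc=0.0, scale=1.0, size=(300, 2))   # target class
other_feats = rng.normal(loc=4.0, scale=1.0, size=(300, 2))   # other lung diseases

def kde_logpdf(train, x, bandwidth=0.5):
    """Gaussian kernel density estimate of log p(x) from training features."""
    d = train.shape[1]
    diff = x[:, None, :] - train[None, :, :]                  # (m, n, d)
    sq = np.sum(diff ** 2, axis=-1) / (2 * bandwidth ** 2)
    log_k = -sq - d * np.log(bandwidth * np.sqrt(2 * np.pi))
    return np.logaddexp.reduce(log_k, axis=1) - np.log(len(train))

# Estimate the target COVID-PDF from COVID features only, then detect:
# a scan is flagged COVID if its density exceeds a low quantile of the
# training densities (an illustrative thresholding rule).
threshold = np.quantile(kde_logpdf(covid_feats, covid_feats), 0.05)

test_x = np.vstack([rng.normal(0.0, 1.0, (50, 2)),            # COVID-like
                    rng.normal(4.0, 1.0, (50, 2))])           # other diseases
truth = np.r_[np.ones(50, dtype=bool), np.zeros(50, dtype=bool)]
pred_covid = kde_logpdf(covid_feats, test_x) > threshold
accuracy = (pred_covid == truth).mean()
print(f"accuracy: {accuracy:.2f}")
```

Note that only COVID-class features are needed to fit the target PDF, which mirrors the weakly supervised character of the approach.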
Affiliation(s)
- Sima Sarv Ahrabi
- Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Via Eudossiana 18, 00184 Roma, Italy
- Alireza Momenzadeh
- Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Via Eudossiana 18, 00184 Roma, Italy
- Enzo Baccarelli
- Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Via Eudossiana 18, 00184 Roma, Italy
- Michele Scarpiniti
- Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Via Eudossiana 18, 00184 Roma, Italy
- Lorenzo Piazzo
- Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Via Eudossiana 18, 00184 Roma, Italy
|
96
|
Sahleh A, Salahi M. Improved robust nonparallel support vector machines. INTERNATIONAL JOURNAL OF DATA SCIENCE AND ANALYTICS 2022. [DOI: 10.1007/s41060-022-00356-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/15/2022]
|
97
|
Chacon WDC, dos Santos Alves MJ, Monteiro AR, González SYG, Ayala Valencia G. Image analysis applied to control postharvest maturity of papayas (Carica papaya L.). J FOOD PROCESS PRES 2022. [DOI: 10.1111/jfpp.16999] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Affiliation(s)
- Germán Ayala Valencia
- Department of Chemical and Food Engineering, Federal University of Santa Catarina, Florianópolis, SC, Brazil
|
98
|
In-Field Automatic Identification of Pomegranates Using a Farmer Robot. SENSORS 2022; 22:s22155821. [PMID: 35957377 PMCID: PMC9370860 DOI: 10.3390/s22155821] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/22/2022] [Revised: 07/31/2022] [Accepted: 08/02/2022] [Indexed: 02/04/2023]
Abstract
Ground vehicles equipped with vision-based perception systems can provide a rich source of information for precision agriculture tasks in orchards, including fruit detection and counting, phenotyping, plant growth and health monitoring. This paper presents a semi-supervised deep learning framework for automatic pomegranate detection using a farmer robot equipped with a consumer-grade camera. In contrast to standard deep-learning methods that require time-consuming and labor-intensive image labeling, the proposed system relies on a novel multi-stage transfer learning approach, whereby a pre-trained network is fine-tuned for the target task using images of fruits in controlled conditions, and then it is progressively extended to more complex scenarios towards accurate and efficient segmentation of field images. Results of experimental tests, performed in a commercial pomegranate orchard in southern Italy, are presented using the DeepLabv3+ (Resnet18) architecture, and they are compared with those that were obtained based on conventional manual image annotation. The proposed framework allows for accurate segmentation results, achieving an F1-score of 86.42% and IoU of 97.94%, while relieving the burden of manual labeling.
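The multi-stage idea above (train on labeled controlled-condition images, then pseudo-label the unlabeled field images with that model and retrain) can be sketched with a toy nearest-centroid classifier on 1-D synthetic features; all distributions and numbers here are illustrative assumptions, not the paper's DeepLabv3+ setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D stand-ins for image features: "controlled" images are easy to
# separate; "field" images are shifted (different lighting/background).
n = 2000
ctrl_fruit, ctrl_bg = rng.normal(0.0, 1.0, n), rng.normal(4.0, 1.0, n)
field_fruit, field_bg = rng.normal(1.5, 1.0, n), rng.normal(5.5, 1.0, n)

def predict(x, c_fruit, c_bg):
    """Nearest-centroid rule: True = fruit."""
    return np.abs(x - c_fruit) < np.abs(x - c_bg)

def field_accuracy(c_fruit, c_bg):
    ok_fruit = predict(field_fruit, c_fruit, c_bg).mean()
    ok_bg = (~predict(field_bg, c_fruit, c_bg)).mean()
    return (ok_fruit + ok_bg) / 2

# Stage 1: train on labeled controlled-condition images only.
c_f, c_b = ctrl_fruit.mean(), ctrl_bg.mean()
acc_stage1 = field_accuracy(c_f, c_b)

# Stage 2: pseudo-label the unlabeled field images with the stage-1 model,
# then retrain on them -- no manual field annotation is needed.
field_all = np.concatenate([field_fruit, field_bg])
pseudo = predict(field_all, c_f, c_b)
c_f2, c_b2 = field_all[pseudo].mean(), field_all[~pseudo].mean()
acc_stage2 = field_accuracy(c_f2, c_b2)

print(f"stage 1: {acc_stage1:.3f}, stage 2: {acc_stage2:.3f}")
```

The retrained centroids adapt to the field distribution, which is the same mechanism that lets the paper's progressively fine-tuned network cope with complex orchard scenes without manual field labels.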
|
99
|
Rasheed J, Shubair RM. Screening Lung Diseases Using Cascaded Feature Generation and Selection Strategies. Healthcare (Basel) 2022; 10:healthcare10071313. [PMID: 35885839 PMCID: PMC9317294 DOI: 10.3390/healthcare10071313] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2022] [Revised: 07/13/2022] [Accepted: 07/13/2022] [Indexed: 12/15/2022] Open
Abstract
The global COVID-19 pandemic is still causing a health emergency in several parts of the world. Apart from standard testing techniques to identify positive cases, auxiliary tools based on artificial intelligence can help with the identification and containment of the disease, and the need for alternative smart diagnostic tools to combat the pandemic has become more urgent. In this study, a smart auxiliary framework based on machine learning (ML) is proposed; it can help medical practitioners identify COVID-19-affected patients, distinguishing them from patients with pneumonia and from healthy individuals, and can help monitor the status of COVID-19 cases using X-ray images. We investigated the application of transfer-learning (TL) networks and various feature-selection techniques for improving the classification accuracy of ML classifiers. Three TL networks were tested to generate relevant features from images: AlexNet, ResNet101, and SqueezeNet. The generated features were further refined by applying feature-selection methods, including iterative neighborhood component analysis (iNCA), iterative chi-square (iChi2), and iterative maximum relevance-minimum redundancy (iMRMR). Finally, classification was performed using convolutional neural network (CNN), linear discriminant analysis (LDA), and support vector machine (SVM) classifiers. Moreover, the study exploited the stationary wavelet (SW) transform to mitigate overfitting by decomposing each training image up to three levels, and enhanced the dataset using data-augmentation operations including random rotation, translation, and shear. The analysis revealed that the combination of AlexNet, ResNet101, SqueezeNet, iChi2, and SVM was very effective in the classification of X-ray images, producing a classification accuracy of 99.2%; AlexNet, ResNet101, and SqueezeNet along with iChi2 and the proposed CNN yielded 99.0% accuracy. The results showed that the cascaded feature-generation and selection strategies significantly affected the performance of the classifier.
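The cascade (concatenate features from several pretrained extractors, rank them with a chi-square score, then train a classifier on the survivors) can be sketched with synthetic features; the extractors are random stand-ins for AlexNet/ResNet101/SqueezeNet, a single chi-square pass replaces the iterative iChi2, and a least-squares linear rule replaces the SVM:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins for deep features: three "extractors" each emit 10
# features; only a few carry class signal (COVID vs. pneumonia, say).
n = 400
y = np.repeat([0, 1], n // 2)

def extractor(n_informative):
    f = rng.normal(size=(n, 10))
    f[:, :n_informative] += y[:, None] * 1.5      # informative columns
    return f

# Cascade step 1: concatenate the features from all three extractors.
feats = np.hstack([extractor(2), extractor(3), extractor(1)])

# Cascade step 2: chi-square ranking on median-binarized features.
def chi2_score(col, y):
    b = (col > np.median(col)).astype(int)
    obs = np.array([[np.sum((b == i) & (y == j)) for j in (0, 1)]
                    for i in (0, 1)], dtype=float)
    exp = obs.sum(1, keepdims=True) * obs.sum(0, keepdims=True) / obs.sum()
    return np.sum((obs - exp) ** 2 / exp)

scores = np.array([chi2_score(feats[:, j], y) for j in range(feats.shape[1])])
top = np.argsort(scores)[::-1][:6]                # keep the 6 best features

# Cascade step 3: train a linear classifier (stand-in for the SVM).
A = np.hstack([feats[:, top], np.ones((n, 1))])
w, *_ = np.linalg.lstsq(A, 2 * y - 1, rcond=None)
pred = (A @ w > 0).astype(int)
train_acc = (pred == y).mean()
print(f"train accuracy: {train_acc:.3f}")
```

The ranking step discards the uninformative columns from all three extractors before the classifier ever sees them, which is the point of cascading generation and selection.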
Affiliation(s)
- Jawad Rasheed
- Department of Software Engineering, Nisantasi University, Istanbul 34398, Turkey
- Raed M. Shubair
- Department of Electrical and Computer Engineering, New York University (NYU), Abu Dhabi 129188, United Arab Emirates
|
100
|
Zhou H, Deng J, Cai D, Lv X, Wu BM. Effects of Image Dataset Configuration on the Accuracy of Rice Disease Recognition Based on Convolution Neural Network. FRONTIERS IN PLANT SCIENCE 2022; 13:910878. [PMID: 35865283 PMCID: PMC9295741 DOI: 10.3389/fpls.2022.910878] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Accepted: 05/10/2022] [Indexed: 06/02/2023]
Abstract
In recent years, the convolutional neural network has been the most widely used deep learning algorithm in the field of plant disease diagnosis and has performed well in classification. In practice, however, some specific issues have not received adequate attention. For instance, the same pathogen may cause similar or different symptoms when infecting plant leaves, and it may likewise cause similar or disparate symptoms on different parts of the plant. Questions therefore arise naturally: should images showing different symptoms of the same disease be placed in one class or in two separate classes in the image database? And how do different classification schemes affect the results of image recognition? In this study, taking rice leaf blast and neck blast caused by Magnaporthe oryzae and rice sheath blight caused by Rhizoctonia solani as examples, three experiments were designed to explore how database configuration affects recognition accuracy when recognizing different symptoms of the same disease on the same plant part, similar symptoms of the same disease on different parts, and different symptoms on different parts. The results suggested that when the symptoms of the same disease were the same or similar, regardless of whether they occurred on the same plant part, training on a combined class of these images gave better performance than training on separate classes. When the difference between symptoms was obvious, classification was relatively easy, and both separate and combined training achieved relatively high recognition accuracy. The results also, to a certain extent, indicated that the greater the number of images in the training dataset, the higher the average classification accuracy.
|