1
Zhou H, Hu Y, Liu S, Zhou G, Xu J, Chen A, Wang Y, Li L, Hu Y. A Precise Framework for Rice Leaf Disease Image-Text Retrieval Using FHTW-Net. Plant Phenomics 2024; 6:0168. PMID: 38666226; PMCID: PMC11045261; DOI: 10.34133/plantphenomics.0168.
Abstract
Cross-modal retrieval for rice leaf diseases is crucial for disease prevention, providing agricultural experts with data-driven decision support to address disease threats and safeguard rice production. To overcome the limitations of current crop leaf disease retrieval frameworks, we focused on four common rice leaf diseases and established the first cross-modal rice leaf disease retrieval dataset (CRLDRD). We brought cross-modal retrieval to the domain of rice leaf disease retrieval and proposed FHTW-Net, a framework for rice leaf disease image-text retrieval. To address the challenge of matching diverse image categories with complex text descriptions during retrieval, we first employed ViT and BERT to extract fine-grained image and text feature sequences enriched with contextual information. Subsequently, two-way mixed self-attention (TMS) was introduced to enhance both image and text feature sequences, with the aim of uncovering important semantic information in both modalities. We then developed a false-negative elimination-hard negative mining (FNE-HNM) strategy to facilitate in-depth exploration of semantic connections between the modalities; this strategy eliminates false negatives and selects challenging negative samples to constrain the model within the triplet loss function. Finally, we introduced the warm-up bat algorithm (WBA) for learning-rate optimization, which improves the model's convergence speed and accuracy. Experimental results demonstrated that FHTW-Net outperforms state-of-the-art models: in image-to-text retrieval it achieved R@1, R@5, and R@10 accuracies of 83.5%, 92%, and 94%, respectively, while in text-to-image retrieval it achieved 82.5%, 98%, and 98.5%. FHTW-Net offers advanced technical support and algorithmic guidance for cross-modal retrieval of rice leaf diseases.
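The FNE-HNM strategy described above builds on a standard triplet loss with hard-negative mining. A minimal, stdlib-only sketch of that base mechanism (illustrative only, not the authors' code, and omitting the false-negative elimination step):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def triplet_loss_hard_negative(img_emb, txt_emb, margin=0.2):
    """For each matched (image i, text i) pair, mine the hardest
    non-matching text as the negative and apply a hinge (triplet) loss."""
    losses = []
    for i, img in enumerate(img_emb):
        pos = cosine(img, txt_emb[i])
        # hardest negative: the most similar *non-matching* text
        neg = max(cosine(img, t) for j, t in enumerate(txt_emb) if j != i)
        losses.append(max(0.0, margin + neg - pos))
    return sum(losses) / len(losses)
```

When positives are perfectly aligned and negatives orthogonal, the hinge is inactive and the loss is zero; the loss grows as hard negatives approach the positive pair.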
Affiliation(s)
- Hongliang Zhou, Yufan Hu, Shuai Liu, Guoxiong Zhou, Jiaxin Xu, Aibin Chen: College of Computer and Information Engineering, Central South University of Forestry and Technology, Changsha 410004, Hunan, China
- Yanfeng Wang: National University of Defense Technology, Changsha 410015, Hunan, China
- Liujun Li: Department of Soil and Water Systems, University of Idaho, Moscow, ID 83844, USA
- Yahui Hu: Plant Protection Research Institute, Academy of Agricultural Sciences, Changsha 410125, Hunan, China
2
Wang W, Zhu A, Wei H, Yu L. A novel method for vegetable and fruit classification based on using diffusion maps and machine learning. Curr Res Food Sci 2024; 8:100737. PMID: 38681525; PMCID: PMC11046067; DOI: 10.1016/j.crfs.2024.100737.
Abstract
Vegetable and fruit classification can help all links of agricultural product circulation carry out inventory management, logistics planning, and supply chain coordination, improving the efficiency and response speed of the supply chain. At present, however, vegetables and fruits are classified mainly by hand, which inevitably introduces human subjective factors and leads to errors and misjudgments. In response to this problem, this research proposes an efficient and reproducible model for classifying multiple vegetables and fruits using handcrafted features. In the proposed model, preprocessing operations such as Gaussian filtering, grayscale conversion, and binarization are first performed on the images of vegetables and fruits to improve their quality. Statistical texture features, wavelet transform features, and shape features representing the vegetable and fruit categories are then extracted from the preprocessed images. Diffusion maps are used for feature dimension reduction, removing redundant information from the combined feature vector, and five machine learning methods are used to classify the vegetables and fruits. The proposed method was rigorously verified experimentally; the results show that the SVM classifier achieves 96.25% classification accuracy, demonstrating that the method can help improve the quality and management of vegetables and fruits and provide strong support for agricultural production and the supply chain.
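The preprocessing operations named in this abstract (Gaussian filtering, binarization) are standard image-processing steps. A stdlib-only sketch of the general idea, not the authors' implementation:

```python
def gaussian_blur(img, kernel=((1, 2, 1), (2, 4, 2), (1, 2, 1)), norm=16):
    """3x3 Gaussian smoothing of a 2-D grayscale image (list of lists),
    clamping coordinates at the borders."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp at borders
                    xx = min(max(x + dx, 0), w - 1)
                    acc += img[yy][xx] * kernel[dy + 1][dx + 1]
            out[y][x] = acc / norm
    return out

def binarize(img, threshold=128):
    """Threshold a grayscale image to a binary (0/1) image."""
    return [[1 if p >= threshold else 0 for p in row] for row in img]
```

A uniform image passes through the blur unchanged, which is a quick sanity check on the kernel normalization.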
Affiliation(s)
- Wenbo Wang, Aimin Zhu, Hongjiang Wei, Lijuan Yu: School of Management, Shenyang University of Technology, 110870, Shenyang, China
3
Behera SK, Mahakud R, Panigrahi M, Sethy PK, Pati R. Diagnosis of retinal damage using Resnet rescaling and support vector machine (Resnet-RS-SVM): a case study from an Indian hospital. Int Ophthalmol 2024; 44:174. PMID: 38613630; DOI: 10.1007/s10792-024-03058-0.
Abstract
PURPOSE This study aims to address the challenge of identifying retinal damage in medical applications through a computer-aided diagnosis (CAD) approach. Data was collected from four prominent eye hospitals in India for analysis and model development. METHODS Data was collected from Silchar Medical College and Hospital (SMCH), Aravind Eye Hospital (Tamil Nadu), LV Prasad Eye Hospital (Hyderabad), and Medanta (Gurugram). A modified version of the ResNet-101 architecture, named ResNet-RS, was utilized for retinal damage identification. In this modified architecture, the last layer's softmax function was replaced with a support vector machine (SVM). The resulting model, termed ResNet-RS-SVM, was trained and evaluated on each hospital's dataset individually and collectively. RESULTS The proposed ResNet-RS-SVM model achieved high accuracies across the datasets from the different hospitals: 99.17% for Aravind, 98.53% for LV Prasad, 98.33% for Medanta, and 100% for SMCH. When considering all hospitals collectively, the model attained an accuracy of 97.19%. CONCLUSION The findings demonstrate the effectiveness of the ResNet-RS-SVM model in accurately identifying retinal damage in diverse datasets collected from multiple eye hospitals in India. This approach presents a promising advancement in computer-aided diagnosis for improving the detection and management of retinal diseases.
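Replacing a network's final softmax with an SVM, as this abstract describes, means training a margin classifier on the features the network produces. The toy hinge-loss SVM below is an illustrative stdlib-only sketch of that classifier stage, not the paper's ResNet-RS-SVM model:

```python
def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Tiny linear SVM trained with sub-gradient descent on the hinge loss.
    X: list of feature vectors (e.g. penultimate-layer activations),
    y: labels in {-1, +1}."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # point inside margin: hinge sub-gradient
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:           # outside margin: only regularization shrinks w
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```

On linearly separable toy features this converges to a separating hyperplane within a few epochs.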
Affiliation(s)
- Santi Kumari Behera: Department of Computer Science and Engineering, VSSUT Burla, Burla, 768018, India
- Rina Mahakud: Department of Computer Science and Engineering, ITER, SOA University, Bhubaneswar, Odisha, India
- Millee Panigrahi: Department of Electronics and Telecommunication Engineering, Trident Academy of Technology, Bhubaneswar, Odisha, India
- Rasmikanta Pati: Department of Basic Science and Humanities, Sambalpur University Institute of Information Technology, Burla, Odisha, India
4
Biradar N, Hosalli G. Segmentation and detection of crop pests using novel U-Net with hybrid deep learning mechanism. Pest Management Science 2024. PMID: 38506377; DOI: 10.1002/ps.8083.
Abstract
OBJECTIVE In India, agriculture is the backbone of the economy because of the increasing demand for agricultural products. However, agricultural production has been affected by the presence of pests in crops. Several methods have been developed for crop pest detection, but they have failed to achieve satisfactory results. The proposed study therefore uses a new hybrid deep learning mechanism for segmenting and detecting pests in crops. METHOD Image collection, preprocessing, segmentation, and detection are the steps involved in the proposed study. Preprocessing consists of three steps: image rescaling, equalized joint histogram based contrast enhancement (Eq-JH-CE), and bendlet transform based denoising (BT-D). Next, the preprocessed images are segmented using the DenseNet-77 UNet model, in which the complexity of the conventional UNet model is mitigated by hybridizing it with DenseNet-77. Once segmentation is done, the crop pests are detected and classified by a novel Convolutional Slice-Attention based Gated Recurrent Unit (CS-AGRU) model, a combination of a convolutional neural network (CNN) and a gated recurrent unit (GRU); the study hybridized these models for their efficiency in order to achieve better accuracy. A slice-attention mechanism is also applied over the proposed model to fetch relevant feature information and thereby enhance computational efficiency. RESULT The approach was implemented in Python. It achieves an accuracy of 99.52%, IoU of 99.1%, precision of 98.88%, recall of 99.53%, F1-score of 99.35%, and FNR of 0.011, outperforming existing techniques. DISCUSSION Identifying and classifying pests helps farmers anticipate potential threats to their crops. By knowing which pests are prevalent in their region or are likely to infest certain crops, farmers can implement preventive measures such as planting pest-resistant varieties, using crop rotation, or deploying traps and barriers. © 2024 Society of Chemical Industry.
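The "slice attention" used in CS-AGRU is the authors' design; the generic dot-product attention it builds on can be sketched with the standard library alone (illustrative, with an assumed fixed query vector rather than learned parameters):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def slice_attention(slices, query):
    """Attention over feature 'slices': score each slice against a query
    vector, softmax the scores, and return the weighted sum of slices."""
    scores = [sum(q * s for q, s in zip(query, sl)) for sl in slices]
    weights = softmax(scores)
    dim = len(slices[0])
    return [sum(w * sl[d] for w, sl in zip(weights, slices))
            for d in range(dim)]
```

A query strongly aligned with one slice drives its weight toward 1, so the output approaches that slice, which is the "attend to the relevant features" behavior the abstract alludes to.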
Affiliation(s)
- Nagaveni Biradar, Girisha Hosalli: Department of Computer Science & Engineering, Rao Bahadhur Y Mahabaleswarappa Engineering College, VTU, Belagavi, Karnataka, India
5
Heng Q, Yu S, Zhang Y. A new AI-based approach for automatic identification of tea leaf disease using deep neural network based on hybrid pooling. Heliyon 2024; 10:e26465. PMID: 38434404; PMCID: PMC10906319; DOI: 10.1016/j.heliyon.2024.e26465.
Abstract
Diseases in tea leaves can directly affect both production efficiency and the quality of the product. This diagnostic procedure can now be automated with artificial intelligence tools, and a number of approaches have been proposed to meet this need; current research efforts focus on improving diagnostic accuracy and expanding the variety of tea leaf diseases covered. This article proposes a new method for accurately diagnosing tea leaf diseases using artificial intelligence techniques. In the proposed method, the input images are preprocessed to remove redundant information, and a hybrid pooling-based convolutional neural network (CNN) is employed to extract image features. The pooling layers of the CNN model are randomly adjusted to use either max pooling or average pooling functions, a strategy that can enhance the efficiency of the CNN-based feature extraction model. After feature extraction, a weighted random forest (WRF) model is used to detect tea leaf diseases. In this classification model, each tree in the random forest is given a weight depending on how well it performs, and the outputs of the decision trees together with their weights determine the final diagnosis; the cuckoo search optimization (CSO) method is used to assign the weight of each tree. The Tea Sickness Dataset (TSD) was used to evaluate the method's effectiveness. The findings show that the proposed approach has an average accuracy of 92.47% in identifying seven different tea leaf diseases; the recall and accuracy metrics are 92.35 and 92.26, respectively, indicating improvements over earlier techniques.
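The weighted random forest described here combines per-tree predictions with per-tree weights. A minimal sketch of that weighted-vote step (in the paper the weights come from an optimizer such as cuckoo search; here they are simply given):

```python
from collections import defaultdict

def weighted_vote(predictions, weights):
    """Combine per-tree class predictions using per-tree weights.
    predictions: list of class labels, one per tree;
    weights: matching list of non-negative tree weights."""
    scores = defaultdict(float)
    for label, w in zip(predictions, weights):
        scores[label] += w
    return max(scores, key=scores.get)
```

With equal weights this reduces to ordinary majority voting; a single high-weight tree can override a majority of low-weight trees.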
Affiliation(s)
- Qidong Heng: School of Public Administration, Beijing City University, Beijing, 100083, China
- Sibo Yu: School of Information Science and Engineering, Beijing City University, Beijing, 100083, China
- Yandong Zhang: Anxi College of Tea Science (Anxi Campus), Fujian Agriculture and Forestry University, Fuzhou, 350000, Fujian, China
6
Myslicka M, Kawala-Sterniuk A, Bryniarska A, Sudol A, Podpora M, Gasz R, Martinek R, Kahankova Vilimkova R, Vilimek D, Pelc M, Mikolajewski D. Review of the application of the most current sophisticated image processing methods for the skin cancer diagnostics purposes. Arch Dermatol Res 2024; 316:99. PMID: 38446274; DOI: 10.1007/s00403-024-02828-1.
Abstract
This paper presents the most current and innovative solutions applying modern digital image processing methods to skin cancer diagnostics. Skin cancer is one of the most common types of cancer: in the USA alone, an estimated one in five people will develop skin cancer, and this trend is constantly increasing. Implementation of new, non-invasive methods plays a crucial role in both identification and prevention of skin cancer, and early diagnosis and treatment are needed in order to decrease the number of deaths due to this disease. The paper also contains information on the most common skin cancer types as well as mortality and epidemiological data for Poland, Europe, Canada, and the USA. It covers the most efficient and modern image recognition methods based on artificial intelligence currently applied for diagnostic purposes, presenting both professional, sophisticated solutions and inexpensive ones. This is a review paper covering the period from 2017 to 2022 for both solutions and statistics; the authors focused on the latest data, mostly because rapid technological development and the growing number of new methods positively affect diagnosis and prognosis.
Affiliation(s)
- Maria Myslicka: Faculty of Medicine, Wroclaw Medical University, J. Mikulicza-Radeckiego 5, 50-345, Wroclaw, Poland
- Aleksandra Kawala-Sterniuk, Anna Bryniarska, Michal Podpora, Rafal Gasz: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland
- Adam Sudol: Faculty of Natural Sciences and Technology, University of Opole, Dmowskiego 7-9, 45-368, Opole, Poland
- Radek Martinek, Radana Kahankova Vilimkova: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland; Department of Cybernetics and Biomedical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, Ostrava, 70800, Czech Republic
- Dominik Vilimek: Department of Cybernetics and Biomedical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, Ostrava, 70800, Czech Republic
- Mariusz Pelc: Institute of Computer Science, University of Opole, Oleska 48, 45-052, Opole, Poland; School of Computing and Mathematical Sciences, University of Greenwich, Old Royal Naval College, Park Row, SE10 9LS, London, UK
- Dariusz Mikolajewski: Institute of Computer Science, Kazimierz Wielki University in Bydgoszcz, ul. Kopernika 1, 85-074, Bydgoszcz, Poland; Neuropsychological Research Unit, 2nd Clinic of the Psychiatry and Psychiatric Rehabilitation, Medical University in Lublin, Gluska 1, 20-439, Lublin, Poland
7
Abhisheka B, Biswas SK, Purkayastha B. HBMD-Net: Feature Fusion Based Breast Cancer Classification with Class Imbalance Resolution. J Imaging Inform Med 2024:10.1007/s10278-024-01046-5. PMID: 38409609; DOI: 10.1007/s10278-024-01046-5.
Abstract
Breast cancer, a widespread global disease, represents a significant threat to women's health and lives, ranking among the most dangerous malignant tumors they face. Many researchers have proposed computer-aided diagnosis systems for classifying breast cancer. The majority of these approaches rely primarily on deep learning (DL) methods, which are not entirely reliable: they overlook the necessity of incorporating both local and global information for precise tumor detection, even though such subtle nuances are crucial for precise breast cancer classification. In addition, only a limited number of breast cancer datasets are publicly available, and those that are tend to be imbalanced. This paper therefore presents the hybrid breast mass detection network (HBMD-Net) to address two critical challenges: class imbalance and the recognition that relying solely on either global or local features falls short of precise tumor classification. To overcome class imbalance, HBMD-Net incorporates the borderline synthetic minority over-sampling technique (BSMOTE). Simultaneously, it employs a feature fusion approach: ResNet50 extracts deep features that provide global information, while handcrafted features derived using the histogram of oriented gradients (HOG) provide local information. In addition, ROI segmentation has been implemented to avoid misclassifications. This integrated strategy substantially enhances breast cancer classification performance. Moreover, the proposed method integrates the block matching and 3D (BM3D) denoising filter to effectively eliminate multiplicative noise, which further improves the performance of the system. The proposed HBMD-Net was evaluated on two breast ultrasound (BUS) datasets, BUSI and UDIAT, and demonstrated satisfactory performance, achieving accuracies of 99.14% and 94.49%, respectively.
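BSMOTE extends SMOTE, which synthesizes new minority-class samples by interpolating between existing ones. A stdlib-only sketch of the plain SMOTE interpolation idea (Borderline-SMOTE additionally restricts synthesis to borderline points; this is not the paper's implementation):

```python
import random

def smote_like_oversample(minority, n_new, seed=0):
    """SMOTE-style synthesis: each new sample is a random interpolation
    between two randomly chosen minority-class samples."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([ai + t * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic
```

Every synthetic point lies on a segment between two real minority samples, so the oversampled class stays inside its original convex hull.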
Affiliation(s)
- Barsha Abhisheka, Saroj Kr Biswas: Computer Science and Engineering, NIT Silchar, Silchar, 788010, Assam, India
8
Shafik W, Tufail A, De Silva Liyanage C, Apong RAAHM. Using transfer learning-based plant disease classification and detection for sustainable agriculture. BMC Plant Biol 2024; 24:136. PMID: 38408925; PMCID: PMC10895770; DOI: 10.1186/s12870-024-04825-y.
Abstract
Subsistence farmers and global food security depend on sufficient food production, which aligns with the UN's "Zero Hunger," "Climate Action," and "Responsible Consumption and Production" sustainable development goals. Available methods for early disease detection and classification face overfitting and fine-feature-extraction complexities during training, and it remains uncertain how early signs of green attacks can be identified and classified. Most pest and disease symptoms appear on plant leaves and fruits, yet their diagnosis by experts in the laboratory is expensive, tedious, labor-intensive, and time-consuming; how plant pests and diseases can be appropriately detected and prevented in time thus remains an open question in smart, sustainable agriculture. In recent years, deep transfer learning has demonstrated tremendous advances in the recognition accuracy of object detection and image classification systems, since these frameworks utilize previously acquired knowledge to solve similar problems more effectively and quickly. In this research, we therefore introduce two plant disease detection (PDDNet) models, early fusion (AE) and lead voting ensemble (LVE), integrated with nine pre-trained convolutional neural networks (CNNs) and fine-tuned by deep feature extraction for efficient plant disease identification and classification. The experiments were carried out on 15 classes of the popular PlantVillage dataset, which has 54,305 image samples of different plant disease species in 38 categories. Hyperparameter fine-tuning was done with popular pre-trained models, including DenseNet201, ResNet101, ResNet50, GoogleNet, AlexNet, ResNet18, EfficientNetB7, NASNetMobile, and ConvNeXtSmall. We tested these CNNs on the stated plant disease detection and classification problem, both independently and as part of an ensemble. In the final phase, a logistic regression (LR) classifier is utilized to determine the performance of various CNN model combinations, and a comparative analysis was performed against classifiers, deep learning methods, and similar state-of-the-art studies. The experiments demonstrated that PDDNet-AE and PDDNet-LVE achieved accuracies of 96.74% and 97.79%, respectively, compared to current CNNs when tested on several plant diseases, depicting exceptional robustness and generalization capabilities and mitigating current concerns in plant disease detection and classification.
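Using a logistic regression classifier over base-model outputs, as in the final phase described here, is a form of stacking. A stdlib-only sketch of such a meta-classifier (illustrative, trained on made-up toy scores, not the paper's pipeline):

```python
import math

def train_logistic(X, y, lr=0.5, epochs=500):
    """Logistic-regression meta-classifier trained with gradient descent.
    In a stacking ensemble, each row of X holds the base models' scores
    for one sample; y holds 0/1 labels."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi  # gradient of the log loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict_proba(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

The meta-classifier learns which combinations of base-model scores indicate each class, rather than trusting any single model.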
Affiliation(s)
- Wasswa Shafik, Ali Tufail: School of Digital Science, Universiti Brunei Darussalam, Tungku Link, Gadong, BE1410, Brunei
9
Rethemiotaki I. Brain tumour detection from magnetic resonance imaging using convolutional neural networks. Contemp Oncol (Pozn) 2024; 27:230-241. PMID: 38405206; PMCID: PMC10883197; DOI: 10.5114/wo.2023.135320.
Abstract
Introduction: The aim of this work is to detect and classify brain tumours using computational intelligence techniques on magnetic resonance imaging (MRI) images. Material and methods: A dataset of 3264 MRI brain images in 4 categories (unspecified glioma, meningioma, pituitary, and healthy brain) was used in this study. Twelve convolutional neural networks (GoogleNet, MobileNetV2, Xception, DenseNet-BC, ResNet 50, SqueezeNet, ShuffleNet, VGG-16, AlexNet, Enet, EfficientNetB0, and MobileNetV2 with meta pseudo-labels) were used to classify gliomas, meningiomas, pituitary tumours, and healthy brains to find the most appropriate model. The experiments included image preprocessing and hyperparameter tuning, and the performance of each neural network was evaluated based on accuracy, precision, recall, and F-measure for each type of brain tumour. Results: The experimental results show that the MobileNetV2 convolutional neural network (CNN) model was able to diagnose brain tumours with 99% accuracy, 98% recall, and a 99% F1 score. On the validation data, however, the GoogleNet CNN model has the highest accuracy (97%) among the CNNs and seems to be the best choice for brain tumour classification. Conclusions: The results of this work highlight the importance of artificial intelligence and machine learning for brain tumour prediction. Furthermore, this study achieved the highest accuracy in brain tumour classification to date, and it is also the only study to compare the performance of so many neural networks simultaneously.
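The evaluation metrics named here (precision, recall, F-measure) derive from per-class confusion counts; a small stdlib-only helper illustrating the standard formulas:

```python
def precision_recall_f1(tp, fp, fn):
    """Per-class metrics from confusion counts: true positives (tp),
    false positives (fp), and false negatives (fn)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

F1 is the harmonic mean of precision and recall, so it equals both when they coincide.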
Affiliation(s)
- Irene Rethemiotaki: School of Electrical and Computer Engineering, Technical University of Crete, Chania, Crete, Greece
10
He Y, Zhang G, Gao Q. A novel ensemble learning method for crop leaf disease recognition. Front Plant Sci 2024; 14:1280671. PMID: 38264019; PMCID: PMC10804852; DOI: 10.3389/fpls.2023.1280671.
Abstract
Deep learning models have been widely applied to crop disease recognition. There are many types of crops and diseases, each potentially possessing distinct discriminative features, which poses a great challenge to the generalization performance of recognition models and makes it very difficult to build a unified model that achieves optimal recognition performance on all crops and diseases. To solve this problem, we propose a novel ensemble learning method for crop leaf disease recognition (named ELCDR). Unlike the traditional voting strategy of ensemble learning, ELCDR assigns different weights to the models based on their feature extraction performance. In ELCDR, a model's feature extraction performance is measured by the distribution of the feature vectors of the training set: if a model distinguishes more feature differences between categories, it receives a higher weight during ensemble learning. We conducted experiments on disease images of four kinds of crops. Compared with the optimal single-model recognition method, ELCDR improves accuracy by as much as 1.5 (apple), 0.88 (corn), 2.25 (grape), and 1.5 (rice) percentage points; compared with the voting strategy of ensemble learning, it improves accuracy by as much as 1.75 (apple), 1.25 (corn), 0.75 (grape), and 7 (rice) percentage points. ELCDR also improves precision, recall, and F1 measure. These experiments provide evidence of the effectiveness of ELCDR for crop leaf disease recognition.
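ELCDR's weighting idea, as summarized above, rewards models whose training-set feature vectors separate the classes well. One simple, hypothetical way to score such separability (an illustrative sketch, not the authors' exact measure) is the ratio of between-class centroid distance to within-class spread:

```python
import math

def euclid(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def centroid(vectors):
    n = len(vectors)
    return [sum(v[d] for v in vectors) / n for d in range(len(vectors[0]))]

def separability_weight(features_by_class):
    """Score a model by how far apart its per-class feature centroids are
    relative to the average within-class spread (higher = better).
    features_by_class: {class_label: [feature vectors]}."""
    cents = {c: centroid(vs) for c, vs in features_by_class.items()}
    labels = list(cents)
    between = [euclid(cents[a], cents[b])
               for i, a in enumerate(labels) for b in labels[i + 1:]]
    within = [euclid(v, cents[c])
              for c, vs in features_by_class.items() for v in vs]
    return (sum(between) / len(between)) / (sum(within) / len(within) + 1e-9)
```

A model whose classes form tight, well-separated clusters scores higher than one whose classes overlap, and such scores could then be normalized into ensemble weights.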
Affiliation(s)
- Yun He, Quan Gao: School of Big Data, Yunnan Agricultural University, Kunming, China; Key Laboratory for Crop Production and Intelligent Agriculture of Yunnan Province, Yunnan Agricultural University, Kunming, China
- Guangchuan Zhang: Key Laboratory for Crop Production and Intelligent Agriculture of Yunnan Province, Yunnan Agricultural University, Kunming, China; School of Mechanical and Electrical Engineering, Yunnan Agricultural University, Kunming, China
11
Dong X, Zhao K, Wang Q, Wu X, Huang Y, Wu X, Zhang T, Dong Y, Gao Y, Chen P, Liu Y, Chen D, Wang S, Yang X, Yang J, Wang Y, Gao Z, Wu X, Bai Q, Li S, Hao G. PlantPAD: a platform for large-scale image phenomics analysis of disease in plant science. Nucleic Acids Res 2024; 52:D1556-D1568. PMID: 37897364; PMCID: PMC10767946; DOI: 10.1093/nar/gkad917.
Abstract
Plant disease is a huge burden that can cause yield losses of up to 100% and thus reduce food security. Smart diagnosis of diseases with plant phenomics is crucial for recovering most of this yield loss, but it usually requires sufficient image information; hence, phenomics is being pursued as an independent discipline to enable the development of high-throughput phenotyping for plant disease. However, sharing large-scale image data is often hampered by incompatibilities in the formats and descriptions provided by different communities, limiting multidisciplinary research. To this end, we built the Plant Phenomics Analysis of Disease (PlantPAD) platform with large-scale information on disease. The platform contains 421,314 images covering 63 crops and 310 diseases. Compared to other databases, PlantPAD has extensive, well-annotated image data and in-depth disease information, and offers pre-trained deep-learning models for accurate plant disease diagnosis. PlantPAD supports valuable applications across multiple disciplines, including intelligent disease diagnosis, disease education, and efficient disease detection and control; three example applications demonstrate its easy-to-use and convenient functions. PlantPAD is mainly oriented towards biologists, computer scientists, plant pathologists, farm managers and pesticide scientists, who can easily explore multidisciplinary research to fight plant diseases. PlantPAD is freely available at http://plantpad.samlab.cn.
Affiliation(s)
- Xinyu Dong: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Kejun Zhao: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Qi Wang: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China; Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China; Text Computing & Cognitive Intelligence Engineering Research Center of National Education Ministry, Guizhou University, Guiyang 550025, Guizhou, China
- Xingcai Wu: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Yuanqin Huang: National Key Laboratory of Green Pesticide, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Center for Research and Development of Fine Chemicals, Guizhou University, Guiyang 550025, China
- Xue Wu: National Key Laboratory of Green Pesticide, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Center for Research and Development of Fine Chemicals, Guizhou University, Guiyang 550025, China
- Tianhan Zhang: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Yawen Dong: National Key Laboratory of Green Pesticide, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Center for Research and Development of Fine Chemicals, Guizhou University, Guiyang 550025, China
- Yangyang Gao: National Key Laboratory of Green Pesticide, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Center for Research and Development of Fine Chemicals, Guizhou University, Guiyang 550025, China
- Panfeng Chen: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Yingwei Liu: National Key Laboratory of Green Pesticide, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Center for Research and Development of Fine Chemicals, Guizhou University, Guiyang 550025, China
- Dongyu Chen: National Key Laboratory of Green Pesticide, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Center for Research and Development of Fine Chemicals, Guizhou University, Guiyang 550025, China
- Shuang Wang: National Key Laboratory of Green Pesticide, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Center for Research and Development of Fine Chemicals, Guizhou University, Guiyang 550025, China
- Xiaoyan Yang: National Key Laboratory of Green Pesticide, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Center for Research and Development of Fine Chemicals, Guizhou University, Guiyang 550025, China
- Jing Yang: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Yong Wang: Department of Plant Pathology, Agriculture College, Guizhou University, Guiyang 550025, Guizhou, China
- Zhenran Gao: New Rural Development Research Institute, Guizhou University, Guiyang 550025, Guizhou, China
- Xian Wu: National Key Laboratory of Green Pesticide, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Center for Research and Development of Fine Chemicals, Guizhou University, Guiyang 550025, China
- Qingrong Bai: National Key Laboratory of Green Pesticide, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Center for Research and Development of Fine Chemicals, Guizhou University, Guiyang 550025, China
- Shaobo Li: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Gefei Hao: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China; National Key Laboratory of Green Pesticide, Key Laboratory of Green Pesticide and Agricultural Bioengineering, Ministry of Education, Center for Research and Development of Fine Chemicals, Guizhou University, Guiyang 550025, China
12. Kaplun D, Deka S, Bora A, Choudhury N, Basistha J, Purkayastha B, Mazumder IZ, Gulvanskii V, Sarma KK, Misra DD. An intelligent agriculture management system for rainfall prediction and fruit health monitoring. Sci Rep 2024; 14:512. PMID: 38177254; PMCID: PMC10766985; DOI: 10.1038/s41598-023-49186-y.
Abstract
Contrary to popular belief, agriculture is becoming more data-driven, with artificial intelligence and the Internet of Things (IoT) playing crucial roles. This paper describes an intelligent agriculture management system for rainfall prediction and fruit health monitoring, driven by the integrated processing of measurements from various sensors combined as an IoT pack. The proposed AI-aided system uses a Convolutional Neural Network (CNN) with a long short-term memory (LSTM) layer for rainfall prediction, and a CNN with a SoftMax layer, along with several pre-trained deep learning models, for fruit health monitoring. A further model that combines rainfall prediction and fruit health recognition is designed using a CNN + LSTM with a multi-head self-attention mechanism, which proves to be effective. The entire system is cloud-resident and available for use through an application.
Affiliation(s)
- Dmitrii Kaplun: Artificial Intelligence Research Institute, China University of Mining and Technology, Xuzhou, 221116, China
- Surajit Deka: Department of Electronics and Communication Engineering, Gauhati University, Guwahati, Assam, 781014, India
- Arunabh Bora: Department of Electronics and Communication Engineering, Gauhati University, Guwahati, Assam, 781014, India
- Nupur Choudhury: Department of Electronics and Communication Engineering, Gauhati University, Guwahati, Assam, 781014, India
- Jyotishman Basistha: Department of Electronics and Communication Engineering, Gauhati University, Guwahati, Assam, 781014, India
- Bhaswadeep Purkayastha: Department of Electronics and Communication Engineering, Gauhati University, Guwahati, Assam, 781014, India
- Ifthikaruz Zaman Mazumder: Department of Electronics and Communication Engineering, Gauhati University, Guwahati, Assam, 781014, India
- Vyacheslav Gulvanskii: Mobile Information Systems Laboratory, Saint Petersburg Electrotechnical University "LETI", St. Petersburg, 197022, Russia; Department of Automation and Control Processes, Saint Petersburg Electrotechnical University "LETI", St. Petersburg, 197022, Russia
- Kandarpa Kumar Sarma: Department of Electronics and Communication Engineering, Gauhati University, Guwahati, Assam, 781014, India
- Debashis Dev Misra: Department of Computer Science and Engineering, Assam Downtown University, Guwahati, Assam, 781026, India
13. Nan G, Li H, Du H, Liu Z, Wang M, Xu S. A Semantic Segmentation Method Based on AS-Unet++ for Power Remote Sensing of Images. Sensors (Basel) 2024; 24:269. PMID: 38203131; PMCID: PMC10781366; DOI: 10.3390/s24010269.
Abstract
In order to achieve automatic planning of power transmission lines, a key step is to precisely recognize the feature information of remote sensing images. Considering that the feature information lies at different depths and the feature distribution is not uniform, a semantic segmentation method based on a new AS-Unet++ is proposed in this paper. First, the atrous spatial pyramid pooling (ASPP) and squeeze-and-excitation (SE) modules are added to the traditional Unet, so that the receptive field is expanded and important features are enhanced; this network is called AS-Unet. Second, an AS-Unet++ structure is built using different layers of AS-Unet, such that the feature extraction parts of each AS-Unet layer are stacked together. Compared with Unet, the proposed AS-Unet++ automatically learns features at different depths and determines the depth with optimal performance. Once the optimal number of network layers is determined, the excess layers can be pruned, which greatly reduces the number of trained parameters. The experimental results show that the overall recognition accuracy of AS-Unet++ is significantly improved compared to Unet.
Affiliation(s)
| | | | - Haibo Du
- School of Electrical Engineering and Automation, Hefei University of Technology, Hefei 230009, China; (G.N.); (H.L.); (Z.L.); (M.W.); (S.X.)
| | | | | | | |
14. Tasnim J, Hasan MK. CAM-QUS guided self-tuning modular CNNs with multi-loss functions for fully automated breast lesion classification in ultrasound images. Phys Med Biol 2023; 69:015018. PMID: 38056017; DOI: 10.1088/1361-6560/ad1319.
Abstract
Objective. Breast cancer is the major cause of cancer death among women worldwide. Deep learning-based computer-aided diagnosis (CAD) systems for classifying lesions in breast ultrasound (BUS) images can help materialise the early detection of breast cancer and enhance survival chances. Approach. This paper presents a completely automated BUS diagnosis system with modular convolutional neural networks tuned with novel loss functions. The proposed network comprises a dynamic channel input enhancement network, an attention-guided InceptionV3-based feature extraction network, a classification network, and a parallel feature transformation network that maps deep features into quantitative ultrasound (QUS) feature space. These networks function together to improve classification accuracy by increasing the separation of benign and malignant class-specific features and enriching them simultaneously. Unlike traditional approaches based solely on categorical cross-entropy (CCE) loss, our method uses two additional novel losses, a class activation mapping (CAM)-based loss and a QUS feature-based loss, to enable the overall network to learn the extraction of clinically valued lesion shape- and texture-related properties, focusing primarily on the lesion area, for explainable AI (XAI). Main results. Experiments on four public, one private, and a combined breast ultrasound dataset are used to validate our strategy. The suggested technique obtains an accuracy of 97.28%, sensitivity of 93.87%, and F1-score of 95.42% on dataset 1 (BUSI), and an accuracy of 91.50%, sensitivity of 89.38%, and F1-score of 89.31% on the combined dataset, consisting of 1494 images collected from hospitals in five demographic locations using four ultrasound systems from different manufacturers. These results outperform techniques reported in the literature by a considerable margin. Significance. The proposed CAD system provides a diagnosis from the auto-focused lesion area of B-mode BUS images, avoiding any explicit requirement for segmentation or region-of-interest extraction, and thus can be a handy tool for making accurate and reliable diagnoses even in unspecialized healthcare centers.
Affiliation(s)
- Jarin Tasnim: Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka 1205, Bangladesh
- Md Kamrul Hasan: Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka 1205, Bangladesh
15. Pandit P, Sagar A, Ghose B, Dey P, Paul M, Alqadhi S, Mallick J, Almohamad H, Abdo HG. Hybrid time series models with exogenous variable for improved yield forecasting of major Rabi crops in India. Sci Rep 2023; 13:22240. PMID: 38097613; PMCID: PMC10721813; DOI: 10.1038/s41598-023-49544-w.
Abstract
Accurate and timely prediction of crop yield plays a crucial role in planning, management, and decision-making within the agricultural sector. In this investigation, using area under irrigation (%) as an exogenous variable, we assess the suitability of different hybrid models, namely ARIMAX (Autoregressive Integrated Moving Average with eXogenous Regressor)-TDNN (Time-Delay Neural Network), ARIMAX-NLSVR (Non-Linear Support Vector Regression), ARIMAX-WNN (Wavelet Neural Network), ARIMAX-CNN (Convolutional Neural Network), ARIMAX-RNN (Recurrent Neural Network), and ARIMAX-LSTM (Long Short-Term Memory), as compared to their individual counterparts for yield forecasting of major Rabi crops in India. The accuracy of the ARIMA model has also been considered as a benchmark. Empirical outcomes reveal that the ARIMAX-LSTM hybrid combination outperforms all other time series models in terms of root mean square error (RMSE) and mean absolute percentage error (MAPE). For these models, average improvements in RMSE and MAPE of 10.41% and 12.28%, respectively, have been observed over all other competing models, and of 15.83% and 18.42%, respectively, over the benchmark ARIMA model. The incorporation of area under irrigation (%) as an exogenous variable in the ARIMAX framework, together with the inbuilt capability of the LSTM model to process complex non-linear patterns, significantly enhances forecasting accuracy. The performance supremacy of the other hybrid models over their individual counterparts is also evident. The results further caution against generalizing the performance of individual models to their hybrid structures.
Affiliation(s)
- Pramit Pandit: Department of Agricultural Statistics & Computer Application, Rabindra Nath Tagore Agriculture College, Birsa Agricultural University, Ranchi, 834006, India
- Atish Sagar: Department of Agricultural Engineering, Rabindra Nath Tagore Agriculture College, Birsa Agricultural University, Ranchi, 834006, India
- Bikramjeet Ghose: Department of Agricultural Statistics, Bidhan Chandra Krishi Viswavidyalaya, Mohanpur, 741252, India
- Prithwiraj Dey: Agricultural and Food Engineering Department, Indian Institute of Technology Kharagpur, Kharagpur, 721302, India
- Moumita Paul: Department of Agricultural Statistics, Bidhan Chandra Krishi Viswavidyalaya, Mohanpur, 741252, India
- Saeed Alqadhi: Department of Civil Engineering, College of Engineering, King Khalid University, Abha, Kingdom of Saudi Arabia
- Javed Mallick: Department of Civil Engineering, College of Engineering, King Khalid University, Abha, Kingdom of Saudi Arabia
- Hussein Almohamad: Department of Geography, College of Arabic Language and Social Studies, Qassim University, Buraydah, 51452, Saudi Arabia
- Hazem Ghassan Abdo: Geography Department, Faculty of Arts and Humanities, Tartous University, Tartous, Syria
16. Zhao X, Wang J, Wang J, Wang J, Hong R, Shen T, Liu Y, Liang Y. DTLR-CS: Deep tensor low rank channel cross fusion neural network for reproductive cell segmentation. PLoS One 2023; 18:e0294727. PMID: 38032913; PMCID: PMC10688749; DOI: 10.1371/journal.pone.0294727.
Abstract
In recent years, with the development of deep learning technology, deep neural networks have been widely used in the field of medical image segmentation. The U-shaped Network (U-Net) is a fully convolutional segmentation network proposed for medical images and has gradually become the most commonly used segmentation architecture in the medical field. The encoder of U-Net is mainly used to capture context information in the image, which plays an important role in the performance of the semantic segmentation algorithm. However, U-Net with simple skip connections performs unstably in global multi-scale modelling and is prone to semantic gaps in feature fusion. Motivated by this, in this work we propose a Deep Tensor Low Rank Channel Cross Fusion Neural Network (DTLR-CS) to replace the simple skip connections in U-Net. To avoid spatial compression and to solve the high-rank problem, we designed a tensor low-rank module to generate a large number of low-rank tensors containing context features. To reduce semantic differences, we introduced a cross-fusion connection module, which consists of a channel cross-fusion sub-module and a feature connection sub-module. Experiments have shown that the proposed network achieves accurate cell segmentation performance.
Affiliation(s)
- Xia Zhao: Reproductive Medicine Center, Zhongda Hospital, Southeast University, Nanjing, Jiangsu Province, China
- Jiahui Wang: School of Medicine, Southeast University, Nanjing, Jiangsu Province, China
- Jing Wang: Reproductive Medicine Center, Zhongda Hospital, Southeast University, Nanjing, Jiangsu Province, China
- Jing Wang: Reproductive Medicine Center, Zhongda Hospital, Southeast University, Nanjing, Jiangsu Province, China
- Renyun Hong: Reproductive Medicine Center, Zhongda Hospital, Southeast University, Nanjing, Jiangsu Province, China
- Tao Shen: Reproductive Medicine Center, Zhongda Hospital, Southeast University, Nanjing, Jiangsu Province, China
- Yi Liu: School of Medicine, Southeast University, Nanjing, Jiangsu Province, China
- Yuanjiao Liang: Reproductive Medicine Center, Zhongda Hospital, Southeast University, Nanjing, Jiangsu Province, China
17. Tarek Z, Elhoseny M, Alghamdi MI, El-Hasnony IM. Leveraging three-tier deep learning model for environmental cleaner plants production. Sci Rep 2023; 13:19499. PMID: 37945683; PMCID: PMC10636176; DOI: 10.1038/s41598-023-43465-4.
Abstract
The world's population is expected to exceed 9 billion people by 2050, necessitating a 70% increase in agricultural output and food production to meet the demand. Due to resource shortages, climate change, the COVID-19 pandemic, and highly harsh socioeconomic predictions, such a demand is challenging to meet without computational and forecasting methods. Machine learning has grown alongside big data and high-performance computing technologies to open up new data-intensive scientific opportunities in the multidisciplinary agri-technology area. Diseases and pests are natural threats throughout the plant's developmental period, from seed production to seedling growth. This paper introduces an early diagnosis framework for plant diseases based on fog computing and an edge environment, using IoT sensor measurements and communication technologies. The effectiveness of employing pre-trained CNN architectures as feature extractors in identifying plant illnesses has been studied; the standard pre-trained CNN model AlexNet is employed as the feature extractor. The obtained deep features are then reduced by a revised version of the grey wolf optimization (GWO) algorithm, whose efficiency was confirmed through experiments. The selected feature subset was used to train an SVM classifier. Ten datasets for different plants are utilized to assess the proposed model. According to the findings, the proposed model achieved better outcomes on all datasets: averaged over all datasets, its accuracy is 93.84%, compared to 85.49%, 87.89%, and 87.04% for AlexNet, GoogleNet, and the SVM, respectively.
Affiliation(s)
- Zahraa Tarek: Faculty of Computers and Information Science, Mansoura University, Mansoura, Egypt
- Mohamed Elhoseny: Faculty of Computers and Information Science, Mansoura University, Mansoura, Egypt; College of Computing and Informatics, University of Sharjah, Sharjah, United Arab Emirates
- Mohammed I Alghamdi: Department of Computer Science, Al-Baha University, Al Bahah, Kingdom of Saudi Arabia
- Ibrahim M El-Hasnony: Faculty of Computers and Information Science, Mansoura University, Mansoura, Egypt
18. Ahmad I, Merla A, Ali F, Shah B, AlZubi AA, AlZubi MA. A deep transfer learning approach for COVID-19 detection and exploring a sense of belonging with Diabetes. Front Public Health 2023; 11:1308404. PMID: 38026271; PMCID: PMC10657998; DOI: 10.3389/fpubh.2023.1308404.
Abstract
COVID-19 is an epidemic disease that results in death and significantly affects older adults and those afflicted with chronic medical conditions. Diabetes medication and high blood glucose levels are significant predictors of COVID-19-related death and disease severity. Diabetic individuals, particularly those with preexisting comorbidities or geriatric patients, are at a higher risk of COVID-19 infection, including hospitalization, ICU admission, and death, than those without diabetes. Everyone's lives have been significantly changed by the COVID-19 outbreak. Identifying patients infected with COVID-19 in a timely manner is critical to overcoming this challenge. The Real-Time Polymerase Chain Reaction (RT-PCR) diagnostic assay is currently the gold standard for COVID-19 detection. However, RT-PCR is a time-consuming and costly technique requiring a lab kit that is difficult to obtain during crises and epidemics. This work proposes the CIDICXR-Net50 model, a ResNet-50-based transfer learning (TL) method for COVID-19 detection via chest X-ray (CXR) image classification. The presented model is developed by substituting the final ResNet-50 classifier layer with a new classification head. The model is trained on 3,923 chest X-ray images comprising a substantial dataset of 1,360 viral pneumonia, 1,363 normal, and 1,200 COVID-19 CXR images. The proposed model's performance is evaluated against the results of six other innovative pre-trained models. The proposed CIDICXR-Net50 model attained 99.11% accuracy on the provided dataset while maintaining 99.15% precision and recall. This study also explores potential relationships between COVID-19 and diabetes.
Affiliation(s)
- Ijaz Ahmad: Digital Transition, Innovation and Health Service, Leonardo da Vinci Telematic University, Chieti, Italy
- Arcangelo Merla: Department of Engineering and Geology (INGEO), University "G. d'Annunzio" Chieti-Pescara, Pescara, Italy
- Farman Ali: Department of Computer Science and Engineering, School of Convergence, College of Computing and Informatics, Sungkyunkwan University, Seoul, Republic of Korea
- Babar Shah: College of Technological Innovation, Zayed University, Dubai, United Arab Emirates
- Ahmad Ali AlZubi: Department of Computer Science, Community College, King Saud University, Riyadh, Saudi Arabia
- Mallak Ahmad AlZubi: Faculty of Medicine, Jordan University of Science and Technology, Irbid, Jordan
19. Ning H, Liu S, Zhu Q, Zhou T. Convolutional neural network in rice disease recognition: accuracy, speed and lightweight. Front Plant Sci 2023; 14:1269371. PMID: 38023901; PMCID: PMC10646333; DOI: 10.3389/fpls.2023.1269371.
Abstract
There are many rice diseases, which have serious negative effects on rice growth and final yield, so it is very important to identify the categories of rice diseases and control them. In the past, identifying rice disease types depended entirely on manual work, which required a high level of human experience; this approach often could not achieve the desired effect and was difficult to popularize on a large scale. Convolutional neural networks are good at extracting localized features from input data, converting low-level shape and texture features into high-level semantic features. Models trained with convolutional neural network technology on existing data can extract common features of the data and give the framework generalization ability, and applying ensemble learning or transfer learning techniques can further improve model performance. In recent years, convolutional neural network technology has been applied to the automatic recognition of rice diseases, which reduces the manpower burden and ensures recognition accuracy. In this paper, the applications of convolutional neural network technology to rice disease recognition are summarized, and the fruitful achievements in recognition accuracy, speed, and mobile device deployment are described. The paper also elaborates on the lightweighting of convolutional neural networks for real-time applications and mobile deployments, and on the various improvements to datasets and model structures that enhance recognition performance.
Affiliation(s)
- Hongwei Ning: College of Information and Network Engineering, Anhui Science and Technology University, Bengbu, Anhui, China
- Sheng Liu: Information Network Security College, Yunnan Police College, Kunming, Yunnan, China
- Qifei Zhu: Information Network Security College, Yunnan Police College, Kunming, Yunnan, China
- Teng Zhou: Mechanical and Electrical Engineering College, Hainan University, Haikou, Hainan, China
20. Dai Q, Tao Y, Liu D, Zhao C, Sui D, Xu J, Shi T, Leng X, Lu M. Ultrasound radiomics models based on multimodal imaging feature fusion of papillary thyroid carcinoma for predicting central lymph node metastasis. Front Oncol 2023; 13:1261080. PMID: 38023240; PMCID: PMC10643192; DOI: 10.3389/fonc.2023.1261080.
Abstract
Objective. This retrospective study aimed to establish ultrasound radiomics models to predict central lymph node metastasis (CLNM) based on the fusion of preoperative multimodal ultrasound imaging features of primary papillary thyroid carcinoma (PTC). Methods. In total, 498 cases of unifocal PTC were randomly divided into a training set of 348 cases and a validation set of 150 cases; in addition, a testing set contained 120 cases of PTC collected at different times. Post-operative histopathology was the gold standard for CLNM. The models were built as follows: regions of interest were segmented in the PTC ultrasound images; multimodal ultrasound image features were extracted by a 50-layer deep residual neural network, followed by feature selection and fusion; classification was then performed using three classical classifiers: adaptive boosting (AB), linear discriminant analysis (LDA), and support vector machine (SVM). The performances of the unimodal models (Unimodal-AB, Unimodal-LDA, and Unimodal-SVM) and the multimodal models (Multimodal-AB, Multimodal-LDA, and Multimodal-SVM) were evaluated and compared. Results. The Multimodal-SVM model achieved better predictive performance than the other models (P < 0.05). For the Multimodal-SVM model, the areas under the receiver operating characteristic curves (AUCs) were 0.910 (95% CI, 0.894-0.926) on the validation set and 0.851 (95% CI, 0.833-0.869) on the testing set. The AUCs of the Multimodal-SVM model were 0.920 (95% CI, 0.881-0.959) in the cN0 subgroup-1 cases and 0.828 (95% CI, 0.769-0.887) in the cN0 subgroup-2 cases. Conclusion. Ultrasound radiomics models based only on multimodal ultrasound images of the PTC have high clinical value in predicting CLNM and can provide a reference for treatment decisions.
Affiliation(s)
- Quan Dai: Department of Ultrasound, Sichuan Clinical Research Center for Cancer, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, Affiliated Cancer Hospital of University of Electronic Science and Technology of China, Medicine & Laboratory of Translational Research in Ultrasound Theranostics, Chengdu, China
- Yi Tao: Department of Ultrasound, West China Hospital of Sichuan University, Chengdu, China
- Dongmei Liu: Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, China
- Chen Zhao: Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, China
- Dong Sui: State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China; School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, Beijing, China
- Jinshun Xu: Department of Ultrasound, Sichuan Clinical Research Center for Cancer, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, Affiliated Cancer Hospital of University of Electronic Science and Technology of China, Medicine & Laboratory of Translational Research in Ultrasound Theranostics, Chengdu, China
- Tiefeng Shi: Department of General Surgery, The Second Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, China
- Xiaoping Leng: Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, China
- Man Lu: Department of Ultrasound, Sichuan Clinical Research Center for Cancer, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, Affiliated Cancer Hospital of University of Electronic Science and Technology of China, Medicine & Laboratory of Translational Research in Ultrasound Theranostics, Chengdu, China
Collapse
21
Lin S, Li J, Huang D, Cheng Z, Xiang L, Ye D, Weng H. Early Detection of Rice Blast Using a Semi-Supervised Contrastive Unpaired Translation Iterative Network Based on UAV Images. Plants (Basel) 2023; 12:3675. [PMID: 37960032 PMCID: PMC10647743 DOI: 10.3390/plants12213675] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/14/2023] [Revised: 10/16/2023] [Accepted: 10/23/2023] [Indexed: 11/15/2023]
Abstract
Rice blast has caused major production losses in rice, so its early detection plays a crucial role in global food security. In this study, a semi-supervised contrastive unpaired translation iterative network based on unmanned aerial vehicle (UAV) images is designed for rice blast detection. It incorporates multiple critic contrastive unpaired translation networks to generate fake images with different disease levels through an iterative process of data augmentation. These generated fake images, along with real images, are then used to establish a detection network called RiceBlastYolo. Notably, the RiceBlastYolo model integrates an improved feature pyramid network (FPN) and a general soft-labeling approach. The results show that the detection precision of RiceBlastYolo is 99.51% at an intersection-over-union threshold of 0.5 (IoU 0.5) and the average precision is 98.75% over IoU 0.5-0.9. The precision and recall rates are 98.23% and 99.99%, respectively, both higher than those of common detection models (YOLO, YOLACT, YOLACT++, Mask R-CNN, and Faster R-CNN). External data further verified the model's ability. The findings demonstrate that the proposed model can accurately identify rice blast under field-scale conditions.
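The IoU 0.5 criterion above is the standard box-overlap test for counting a detection as correct. A small pure-Python sketch with hypothetical box coordinates:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2):
    overlap area divided by union area. Detections are typically counted
    as correct when IoU against the ground-truth box is >= 0.5."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A predicted lesion box against a hypothetical ground-truth box.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```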
Affiliation(s)
- Shaodan Lin
- College of Mechanical and Electrical Engineering, Fujian Agriculture and Forestry University, Fuzhou 350002, China; (S.L.); (D.H.); (Z.C.)
- College of Mechanical and Intelligent Manufacturing, Fujian Chuanzheng Communications College, Fuzhou 350007, China
- Jiayi Li
- College of Mechanical and Electrical Engineering, Fujian Agriculture and Forestry University, Fuzhou 350002, China; (S.L.); (D.H.); (Z.C.)
- Fujian Key Laboratory of Agricultural Information Sensing Technology, Fuzhou 350002, China
- Deyao Huang
- College of Mechanical and Electrical Engineering, Fujian Agriculture and Forestry University, Fuzhou 350002, China; (S.L.); (D.H.); (Z.C.)
- Fujian Key Laboratory of Agricultural Information Sensing Technology, Fuzhou 350002, China
- Zuxin Cheng
- College of Mechanical and Electrical Engineering, Fujian Agriculture and Forestry University, Fuzhou 350002, China; (S.L.); (D.H.); (Z.C.)
- College of Agriculture, Fujian Agriculture and Forestry University, Fuzhou 350002, China
- Lirong Xiang
- Department of Biological and Agricultural Engineering, North Carolina State University, Raleigh, NC 27606, USA
- Dapeng Ye
- College of Mechanical and Electrical Engineering, Fujian Agriculture and Forestry University, Fuzhou 350002, China; (S.L.); (D.H.); (Z.C.)
- Fujian Key Laboratory of Agricultural Information Sensing Technology, Fuzhou 350002, China
- Agricultural Artificial Intelligence Research Center, College of Future Technology, Fujian Agriculture and Forestry University, Fuzhou 350007, China
- Haiyong Weng
- College of Mechanical and Electrical Engineering, Fujian Agriculture and Forestry University, Fuzhou 350002, China; (S.L.); (D.H.); (Z.C.)
- Fujian Key Laboratory of Agricultural Information Sensing Technology, Fuzhou 350002, China
- Agricultural Artificial Intelligence Research Center, College of Future Technology, Fujian Agriculture and Forestry University, Fuzhou 350007, China
22
Ni M, Xin X, Yu G, Liu Y, Gong Y. Research on the Application of Integrated Learning Models in Oilfield Production Forecasting. ACS Omega 2023; 8:39583-39595. [PMID: 37901481 PMCID: PMC10601073 DOI: 10.1021/acsomega.3c05422] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/26/2023] [Accepted: 09/25/2023] [Indexed: 10/31/2023]
Abstract
Forecasting oil production is crucial in oilfield management. Multifeature-based modeling methods are currently widely used, but they are not universally applicable because actual conditions differ from oilfield to oilfield. In this paper, a time series forecasting method based on an integrated learning model is proposed. It combines the advantages of linear and nonlinear models and is concerned only with the internal characteristics of the production curve itself, without considering other factors. The method processes the production history data using singular spectrum analysis, trains the autoregressive integrated moving average (ARIMA) model and Prophet, trains a wavelet neural network, and then forecasts oil production. The method is validated on historical production data from the J oilfield in China from 2011 to 2021 and compared with single models, the Arps decline model, and mainstream time series forecasting models. The results show that in early-stage prediction the error of the integrated learning model differs little from that of the other models, but in late-stage prediction the integrated model remains stable while the other models show noticeably larger fluctuations. The model in this article can therefore make stable and accurate predictions.
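The integrated model combines several member forecasters. One common combination scheme, shown here purely as an illustrative stand-in (the paper's exact weighting is not specified in the abstract), weights each member inversely to its backtest error; all data below are hypothetical:

```python
def inverse_error_weights(backtests, actual):
    """Weight each member inversely to its mean absolute backtest error,
    so better-performing members contribute more to the ensemble."""
    maes = {m: sum(abs(p - a) for p, a in zip(preds, actual)) / len(actual)
            for m, preds in backtests.items()}
    inv = {m: 1.0 / e for m, e in maes.items()}
    total = sum(inv.values())
    return {m: v / total for m, v in inv.items()}

def combine(forecasts, weights):
    """Weighted sum of member forecasts at each horizon step."""
    steps = len(next(iter(forecasts.values())))
    return [sum(weights[m] * forecasts[m][t] for m in forecasts) for t in range(steps)]

# Hypothetical backtests of a linear and a nonlinear member on known production.
actual = [100.0, 98.0, 96.0]
backtests = {"linear": [101.0, 99.0, 97.0], "nonlinear": [102.0, 100.0, 98.0]}
w = inverse_error_weights(backtests, actual)
print(combine({"linear": [95.0], "nonlinear": [93.0]}, w))
```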
Affiliation(s)
- MingCheng Ni
- School of Petroleum Engineering, Yangtze University, Wuhan, Hubei 430100, China
- XianKang Xin
- School of Petroleum Engineering, Yangtze University, Wuhan, Hubei 430100, China
- Hubei Provincial Key Laboratory of Oil and Gas Drilling and Production Engineering (Yangtze University), Wuhan, Hubei 430100, China
- School of Petroleum Engineering, Yangtze University: National Engineering Research Center for Oil and Gas Drilling and Completion Technology, Wuhan, Hubei 430100, China
- GaoMing Yu
- School of Petroleum Engineering, Yangtze University, Wuhan, Hubei 430100, China
- Hubei Provincial Key Laboratory of Oil and Gas Drilling and Production Engineering (Yangtze University), Wuhan, Hubei 430100, China
- School of Petroleum Engineering, Yangtze University: National Engineering Research Center for Oil and Gas Drilling and Completion Technology, Wuhan, Hubei 430100, China
- Yu Liu
- School of Petroleum Engineering, Yangtze University, Wuhan, Hubei 430100, China
- YuGang Gong
- School of Petroleum Engineering, Yangtze University, Wuhan, Hubei 430100, China
23
Bai Y, Sun X, Ji Y, Fu W, Duan X. Lightweight 3D Dense Autoencoder Network for Hyperspectral Remote Sensing Image Classification. Sensors (Basel) 2023; 23:8635. [PMID: 37896728 PMCID: PMC10610785 DOI: 10.3390/s23208635] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/27/2023] [Revised: 10/16/2023] [Accepted: 10/20/2023] [Indexed: 10/29/2023]
Abstract
The lack of labeled training samples restricts improvement in Hyperspectral Remote Sensing Image (HRSI) classification accuracy with deep learning methods. To improve HRSI classification accuracy when few training samples are available, a Lightweight 3D Dense Autoencoder Network (L3DDAN) is proposed. Structurally, the L3DDAN is designed as a stacked autoencoder consisting of an encoder and a decoder. The encoder is a hybrid combination of 3D convolutional operations and a 3D dense block for extracting deep features from raw data. The decoder, composed of 3D deconvolution operations, is designed to reconstruct the data. The L3DDAN is trained first by unsupervised learning without labeled samples and then by supervised learning with a small number of labeled samples. The network composed of the fine-tuned encoder and the trained classifier is used for classification tasks. Extensive comparative experiments on three benchmark HRSI datasets demonstrate that the proposed framework, with fewer trainable parameters, maintains superior performance over the other eight state-of-the-art algorithms when only a few training samples are available. The proposed L3DDAN can be applied to HRSI classification tasks such as vegetation classification. Future work will focus mainly on reducing training time and on applications to more real-world datasets.
Affiliation(s)
- Yang Bai
- Information and Communication School, Guilin University of Electronic Technology, Guilin 541004, China; (Y.B.); (Y.J.); (W.F.); (X.D.)
- Guangxi Key Laboratory of Precision Navigation Technology and Application, Guilin University of Electronic Technology, Guilin 541004, China
- Xiyan Sun
- Information and Communication School, Guilin University of Electronic Technology, Guilin 541004, China; (Y.B.); (Y.J.); (W.F.); (X.D.)
- Guangxi Key Laboratory of Precision Navigation Technology and Application, Guilin University of Electronic Technology, Guilin 541004, China
- National & Local Joint Engineering Research Center of Satellite Navigation Positioning and Location Service, Guilin 541004, China
- Yuanfa Ji
- Information and Communication School, Guilin University of Electronic Technology, Guilin 541004, China; (Y.B.); (Y.J.); (W.F.); (X.D.)
- Guangxi Key Laboratory of Precision Navigation Technology and Application, Guilin University of Electronic Technology, Guilin 541004, China
- Wentao Fu
- Information and Communication School, Guilin University of Electronic Technology, Guilin 541004, China; (Y.B.); (Y.J.); (W.F.); (X.D.)
- National & Local Joint Engineering Research Center of Satellite Navigation Positioning and Location Service, Guilin 541004, China
- Xiaoyu Duan
- Information and Communication School, Guilin University of Electronic Technology, Guilin 541004, China; (Y.B.); (Y.J.); (W.F.); (X.D.)
24
Sun C, Zhou X, Zhang M, Qin A. SE-VisionTransformer: Hybrid Network for Diagnosing Sugarcane Leaf Diseases Based on Attention Mechanism. Sensors (Basel) 2023; 23:8529. [PMID: 37896622 PMCID: PMC10611343 DOI: 10.3390/s23208529] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/14/2023] [Revised: 10/11/2023] [Accepted: 10/11/2023] [Indexed: 10/29/2023]
Abstract
Sugarcane is an important raw material for sugar and chemical production. In recent years, however, various sugarcane diseases have emerged, severely impacting the national economy. To address the issue of identifying diseases in sugarcane leaf sections, this paper proposes the SE-VIT hybrid network. Unlike traditional methods that feed images directly to a classification model, this paper compares thresholding, K-means, and support vector machine (SVM) algorithms for extracting leaf lesions from images; because SVM segments these lesions most accurately, it is ultimately selected for the task. The paper introduces the SE attention module into ResNet-18 (CNN), enhancing the learning of inter-channel weights, and incorporates multi-head self-attention (MHSA) after the pooling layer. Finally, with the inclusion of 2D relative positional encoding, accuracy is improved by 5.1%, precision by 3.23%, and recall by 5.17%. The SE-VIT hybrid network model achieves an accuracy of 97.26% on the PlantVillage dataset. Additionally, when compared to four classical neural network models, SE-VIT demonstrates significantly higher accuracy and precision, reaching 89.57% accuracy. The method proposed in this paper can therefore provide technical support for intelligent management of sugarcane plantations and offer insights for addressing plant diseases with limited datasets.
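The SE attention module can be sketched in pure Python: squeeze each channel to a scalar, pass the scalars through two small fully connected layers, and rescale the channels. The feature map and FC weights below are hypothetical placeholders for what training would supply:

```python
import math

def se_reweight(feature_map, w1, w2):
    """Squeeze-and-Excitation channel attention (pure-Python sketch).
    feature_map: list of channels, each a 2D list of floats.
    w1: reduction FC weights, shape [hidden][channels].
    w2: expansion FC weights, shape [channels][hidden]."""
    # Squeeze: global average pooling turns each channel into one scalar.
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feature_map]
    # Excitation: FC -> ReLU -> FC -> sigmoid yields one weight per channel.
    hidden = [max(0.0, sum(w * x for w, x in zip(row, z))) for row in w1]
    s = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden)))) for row in w2]
    # Scale: each channel is multiplied by its attention weight.
    return [[[v * s[c] for v in row] for row in ch] for c, ch in enumerate(feature_map)]

# Hypothetical 2-channel 2x2 feature map and hand-picked FC weights.
fmap = [[[1.0, 1.0], [1.0, 1.0]], [[2.0, 2.0], [2.0, 2.0]]]
out = se_reweight(fmap, w1=[[1.0, 1.0]], w2=[[1.0], [-1.0]])
print(out[0][0][0], out[1][0][0])
```

In a real network the two FC layers are learned end to end; here they are fixed so the channel weights can be checked by hand.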
Affiliation(s)
- Cuimin Sun
- School of Computer and Electronic Information Engineering, Guangxi University, Nanning 530004, China; (X.Z.); (M.Z.); (A.Q.)
25
Chen H, Han Y, Liu Y, Liu D, Jiang L, Huang K, Wang H, Guo L, Wang X, Wang J, Xue W. Classification models for Tobacco Mosaic Virus and Potato Virus Y using hyperspectral and machine learning techniques. Front Plant Sci 2023; 14:1211617. [PMID: 37915507 PMCID: PMC10617679 DOI: 10.3389/fpls.2023.1211617] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/27/2023] [Accepted: 10/03/2023] [Indexed: 11/03/2023]
Abstract
Tobacco Mosaic Virus (TMV) and Potato Virus Y (PVY) pose significant threats to crop production, and non-destructive, accurate surveillance is crucial to effective disease control. In this study, we propose the adoption of hyperspectral and machine learning technologies to discern the type and severity of tobacco leaves affected by PVY and TMV infection. Initially, we applied three preprocessing methods: Multivariate Scattering Correction (MSC), Standard Normal Variate (SNV), and Savitzky-Golay smoothing (SavGol), to correct the full-length leaf spectral data (350-2500 nm). Subsequently, we employed two classifiers, support vector machine (SVM) and random forest (RF), to establish supervised classification models, including binary classification models (healthy/diseased leaves, or PVY/TMV-infected leaves) and six-class classification models (healthy leaves and various severity levels of diseased leaves). Based on the core evaluation indices, our models achieved accuracies of 91-100% in binary classification. In general, SVM demonstrated superior performance to RF in distinguishing leaves infected with PVY and TMV. Different combinations of preprocessing methods and classifiers showed distinct capabilities in the six-class classification. Notably, SavGol combined with SVM gave excellent performance in identifying different PVY severity levels, with 98.1% average precision, and also achieved a high recognition rate (96.2%) in classifying the different TMV severity levels. The results further highlighted that the effective wavelengths captured by SVM, 700 nm and 1800 nm, would be valuable for estimating disease severity levels. Our study underscores the efficacy of integrating hyperspectral technology and machine learning, showcasing their potential for accurate and non-destructive monitoring of plant viral diseases.
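Of the three preprocessing methods, SNV is the simplest to state: each spectrum is centred and scaled by its own mean and standard deviation, which suppresses scatter effects. A pure-Python sketch with hypothetical reflectance values:

```python
def snv(spectrum):
    """Standard Normal Variate: transform one spectrum to zero mean and
    unit standard deviation (sample std, n-1 denominator)."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    std = (sum((x - mean) ** 2 for x in spectrum) / (n - 1)) ** 0.5
    return [(x - mean) / std for x in spectrum]

# Hypothetical reflectance values from one leaf spectrum.
print(snv([0.2, 0.4, 0.6, 0.8]))
```

Each spectrum is corrected independently, so SNV needs no reference spectrum, unlike MSC.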
Affiliation(s)
- Haitao Chen
- Tobacco Research Institute of Chongqing Company, Chongqing, China
- Yujing Han
- Tobacco Research Institute, Chinese Academy of Agricultural Sciences, Qingdao, China
- Yongchang Liu
- Tobacco Research Institute, Chinese Academy of Agricultural Sciences, Qingdao, China
- Dongyang Liu
- Science and Technology Department of Sichuan Liangshan Company, Liangshan Yi Autonomous Prefecture, Xichang, China
- Lianqiang Jiang
- Science and Technology Department of Sichuan Liangshan Company, Liangshan Yi Autonomous Prefecture, Xichang, China
- Kun Huang
- Science and Technology Department of Yunnan Honghe Company, Hani-Yi Autonomous of Honghe Prefecture, Mile, China
- Hongtao Wang
- Tobacco Research Institute, Chinese Academy of Agricultural Sciences, Qingdao, China
- Leifeng Guo
- Agricultural Information Institute, Chinese Academy of Agricultural Sciences, Beijing, China
- Xinwei Wang
- Tobacco Research Institute, Chinese Academy of Agricultural Sciences, Qingdao, China
- Jie Wang
- Tobacco Research Institute, Chinese Academy of Agricultural Sciences, Qingdao, China
- Wenxin Xue
- Tobacco Research Institute, Chinese Academy of Agricultural Sciences, Qingdao, China
26
Xu M, Kim H, Yang J, Fuentes A, Meng Y, Yoon S, Kim T, Park DS. Embracing limited and imperfect training datasets: opportunities and challenges in plant disease recognition using deep learning. Front Plant Sci 2023; 14:1225409. [PMID: 37810377 PMCID: PMC10557492 DOI: 10.3389/fpls.2023.1225409] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/19/2023] [Accepted: 08/30/2023] [Indexed: 10/10/2023]
Abstract
Recent advancements in deep learning have brought significant improvements to plant disease recognition. However, achieving satisfactory performance often requires high-quality training datasets, which are challenging and expensive to collect. Consequently, the practical application of current deep learning-based methods in real-world scenarios is hindered by the scarcity of high-quality datasets. In this paper, we argue that embracing poor datasets is viable and aim to explicitly define the challenges associated with using them. To delve into this topic, we analyze the characteristics of high-quality datasets, namely large-scale images and the desired annotations, and contrast them with the limited and imperfect nature of poor datasets. Challenges arise when the training datasets deviate from these characteristics. To provide a comprehensive understanding, we propose a novel and informative taxonomy that categorizes these challenges. Furthermore, we offer a brief overview of existing studies and approaches that address them. We point out that our paper sheds light on the importance of embracing poor datasets, enhances the understanding of the associated challenges, and contributes to the ambitious objective of deploying deep learning in real-world applications. To facilitate progress, we finally describe several outstanding questions and point out potential future directions. Although our primary focus is on plant disease recognition, we emphasize that the principles of embracing and analyzing poor datasets are applicable to a wider range of domains, including agriculture. Our project is publicly available at https://github.com/xml94/EmbracingLimitedImperfectTrainingDatasets.
Affiliation(s)
- Mingle Xu
- Department of Electronic Engineering, Core Research Institute of Intelligent Robots, Jeonbuk National University, Jeonju, Republic of Korea
- Hyongsuk Kim
- Department of Electronic Engineering, Core Research Institute of Intelligent Robots, Jeonbuk National University, Jeonju, Republic of Korea
- Jucheng Yang
- College of Artificial Intelligence, Tianjin University of Science and Technology, Tianjin, China
- Alvaro Fuentes
- Department of Electronic Engineering, Core Research Institute of Intelligent Robots, Jeonbuk National University, Jeonju, Republic of Korea
- Yao Meng
- Department of Electronic Engineering, Core Research Institute of Intelligent Robots, Jeonbuk National University, Jeonju, Republic of Korea
- Sook Yoon
- Department of Computer Engineering, Mokpo National University, Muan, Republic of Korea
- Taehyun Kim
- National Institute of Agricultural Sciences, Wanju, Republic of Korea
- Dong Sun Park
- Department of Electronic Engineering, Core Research Institute of Intelligent Robots, Jeonbuk National University, Jeonju, Republic of Korea
27
Attallah O. RiPa-Net: Recognition of Rice Paddy Diseases with Duo-Layers of CNNs Fostered by Feature Transformation and Selection. Biomimetics (Basel) 2023; 8:417. [PMID: 37754168 PMCID: PMC10527565 DOI: 10.3390/biomimetics8050417] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2023] [Revised: 08/31/2023] [Accepted: 09/05/2023] [Indexed: 09/28/2023] Open
Abstract
Rice paddy diseases significantly reduce the quantity and quality of crops, so it is essential to recognize them quickly and accurately for prevention and control. Deep learning (DL)-based computer-assisted expert systems are encouraging approaches to solving this issue and dealing with the dearth of subject-matter specialists in this area. Nonetheless, a major generalization obstacle is posed by the small discrepancies between the various classes of paddy diseases. Numerous studies have used features taken from a single deep layer of an individual complex DL architecture with many deep layers and parameters, and all have relied on spatial knowledge alone to learn recognition models trained with a large number of features. This study suggests a pipeline called "RiPa-Net" based on three lightweight CNNs that can identify and categorize nine paddy diseases as well as healthy paddy. The suggested pipeline gathers features from two different layers of each of the CNNs. Moreover, it applies the dual-tree complex wavelet transform (DTCWT) to the deep features of the first layer to obtain spectral-temporal information. It also fuses the deep features of the first layer of the three CNNs using principal component analysis (PCA) and discrete cosine transform (DCT), which reduce the dimension of the first-layer features. The second layer's spatial deep features are then combined with these fused time-frequency deep features. After that, a feature selection process is introduced to reduce the size of the feature vector and choose only those features that have a significant impact on the recognition process, thereby further reducing recognition complexity. According to the results, combining deep features from two layers of different lightweight CNNs can improve recognition accuracy. Performance also improves as a result of the acquired spatial-spectral-temporal information used to learn the models. Using 300 features, the cubic support vector machine (SVM) achieves an outstanding accuracy of 97.5%. The competitive ability of the suggested pipeline is confirmed by comparing the experimental results with findings from previous research on the recognition of paddy diseases.
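The DCT-based reduction of first-layer deep features can be illustrated with a generic type-II DCT that keeps only the first few coefficients; the feature vector and the normalization below are illustrative assumptions, not the paper's exact settings:

```python
import math

def dct2_reduce(features, k):
    """Unnormalized type-II DCT of a feature vector; keeping the first k
    coefficients gives a compact spectral summary, a common
    dimensionality-reduction step."""
    n = len(features)
    coeffs = [
        sum(x * math.cos(math.pi / n * (i + 0.5) * u) for i, x in enumerate(features))
        for u in range(n)
    ]
    return coeffs[:k]

# Hypothetical deep-feature vector reduced from 8 to 3 values.
print(dct2_reduce([1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0], 3))
```

The zeroth coefficient is the sum of the inputs, and symmetric inputs have vanishing odd coefficients, which makes the transform easy to sanity-check.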
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt
28
Shi H, Guo J, Deng Y, Qin Z. Machine learning-based anomaly detection of groundwater microdynamics: case study of Chengdu, China. Sci Rep 2023; 13:14718. [PMID: 37679353 PMCID: PMC10485069 DOI: 10.1038/s41598-023-38447-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2023] [Accepted: 07/08/2023] [Indexed: 09/09/2023] Open
Abstract
Detection of subsurface hydrodynamic anomalies plays a significant role in groundwater resource management and environmental monitoring. In this paper, based on groundwater level, atmospheric pressure, and precipitation data from the Chengdu area of China, a method for detecting outliers that considers the factors affecting groundwater levels is proposed. By analyzing the factors affecting groundwater levels at the monitoring site and eliminating them, simplified groundwater data are obtained. We apply sl-Pauta (self-learning-based Pauta criterion), iForest (isolation forest), OCSVM (one-class SVM), and KNN to synthetic data with known outliers to test and evaluate the effectiveness of the four techniques. Finally, the four methods are applied to outlier detection in the simplified groundwater levels. The results show that in outlier detection on the synthesized data, the OCSVM method performs best, with a precision of 88.89%, a recall of 91.43%, an F1 score of 90.14%, and an AUC of 95.66%. In outlier detection on the simplified groundwater levels, a qualitative analysis of the displacement data within the field of view indicates that iForest and OCSVM outperform KNN. The proposed method of considering the factors affecting groundwater levels can improve the efficiency and accuracy of detecting outliers in groundwater level data.
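The Pauta criterion underlying sl-Pauta is the classical 3-sigma rule. A minimal sketch on hypothetical groundwater levels (the paper's self-learning refinement is not reproduced here):

```python
def pauta_outliers(values):
    """Pauta (3-sigma) criterion: flag points more than three standard
    deviations from the mean as outliers. Returns their indices."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [i for i, v in enumerate(values) if abs(v - mean) > 3 * std]

# Hypothetical daily groundwater levels (metres) with one spike at the end.
levels = [12.1, 12.0, 12.2, 12.1, 11.9, 12.0, 12.1, 12.0,
          11.8, 12.2, 12.0, 12.1, 11.9, 12.0, 12.1, 18.5]
print(pauta_outliers(levels))
```

Note that the rule needs a reasonably long series: with very few samples a single outlier inflates the standard deviation enough to hide itself.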
Affiliation(s)
- Haoxin Shi
- State Key Laboratory of Geohazard Prevention and Geoenvironment Protection, Chengdu University of Technology, Chengdu, 610059, China
- College of Construction Engineering, Jilin University, Changchun, 130026, China
- Jian Guo
- State Key Laboratory of Geohazard Prevention and Geoenvironment Protection, Chengdu University of Technology, Chengdu, 610059, China
- Yuandong Deng
- College of New Energy and Environment, Jilin University, Changchun, 130026, China
- Zixuan Qin
- State Key Laboratory of Geohazard Prevention and Geoenvironment Protection, Chengdu University of Technology, Chengdu, 610059, China
29
Arslan M, Haider A, Khurshid M, Abu Bakar SSU, Jani R, Masood F, Tahir T, Mitchell K, Panchagnula S, Mandair S. From Pixels to Pathology: Employing Computer Vision to Decode Chest Diseases in Medical Images. Cureus 2023; 15:e45587. [PMID: 37868395 PMCID: PMC10587792 DOI: 10.7759/cureus.45587] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/19/2023] [Indexed: 10/24/2023] Open
Abstract
Radiology has been a pioneer in the healthcare industry's digital transformation, incorporating digital imaging systems like picture archiving and communication system (PACS) and teleradiology over the past thirty years. This shift has reshaped radiology services, positioning the field at a crucial junction for potential evolution into an integrated diagnostic service through artificial intelligence and machine learning. These technologies offer advanced tools for radiology's transformation. The radiology community has advanced computer-aided diagnosis (CAD) tools using machine learning techniques, notably deep learning convolutional neural networks (CNNs), for medical image pattern recognition. However, the integration of CAD tools into clinical practice has been hindered by challenges in workflow integration, unclear business models, and limited clinical benefits, despite development dating back to the 1990s. This comprehensive review focuses on detecting chest-related diseases through techniques like chest X-rays (CXRs), magnetic resonance imaging (MRI), nuclear medicine, and computed tomography (CT) scans. It examines the utilization of computer-aided programs by researchers for disease detection, addressing key areas: the role of computer-aided programs in disease detection advancement, recent developments in MRI, CXR, radioactive tracers, and CT scans for chest disease identification, research gaps for more effective development, and the incorporation of machine learning programs into diagnostic tools.
Affiliation(s)
- Muhammad Arslan
- Department of Emergency Medicine, Royal Infirmary of Edinburgh, National Health Service (NHS) Lothian, Edinburgh, GBR
- Ali Haider
- Department of Allied Health Sciences, The University of Lahore, Gujrat Campus, Gujrat, PAK
- Mohsin Khurshid
- Department of Microbiology, Government College University Faisalabad, Faisalabad, PAK
- Rutva Jani
- Department of Internal Medicine, C. U. Shah Medical College and Hospital, Gujarat, IND
- Fatima Masood
- Department of Internal Medicine, Gulf Medical University, Ajman, ARE
- Tuba Tahir
- Department of Business Administration, Iqra University, Karachi, PAK
- Kyle Mitchell
- Department of Internal Medicine, University of Science, Arts and Technology, Olveston, MSR
- Smruthi Panchagnula
- Department of Internal Medicine, Ganni Subbalakshmi Lakshmi (GSL) Medical College, Hyderabad, IND
- Satpreet Mandair
- Department of Internal Medicine, Medical University of the Americas, Charlestown, KNA
30
Pham TD. Prediction of Five-Year Survival Rate for Rectal Cancer Using Markov Models of Convolutional Features of RhoB Expression on Tissue Microarray. IEEE/ACM Trans Comput Biol Bioinform 2023; 20:3195-3204. [PMID: 37155403 DOI: 10.1109/tcbb.2023.3274211] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
The ability to predict survival in cancer is clinically important because the finding can help patients and physicians make optimal treatment decisions. Artificial intelligence in the context of deep learning has been increasingly realized by the informatics-oriented medical community as a powerful machine-learning technology for cancer research, diagnosis, prediction, and treatment. This paper presents the combination of deep learning, data coding, and probabilistic modeling for predicting five-year survival in a cohort of patients with rectal cancer using images of RhoB expression on biopsies. Using about one-third of the patients' data for testing, the proposed approach achieved 90% prediction accuracy, which is much higher than the direct use of the best pretrained convolutional neural network (70%) and the best coupling of a pretrained model and support vector machines (70%).
31
Sharma M, Kumar CJ, Talukdar J, Singh TP, Dhiman G, Sharma A. Identification of rice leaf diseases and deficiency disorders using a novel DeepBatch technique. Open Life Sci 2023; 18:20220689. [PMID: 37663670 PMCID: PMC10473464 DOI: 10.1515/biol-2022-0689] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2023] [Revised: 06/25/2023] [Accepted: 07/27/2023] [Indexed: 09/05/2023] Open
Abstract
Rice is one of the most widely consumed foods in the world. Various diseases and deficiency disorders impact the rice crop's growth, thereby hampering the rice yield. Proper crop monitoring is therefore very important for the early diagnosis of diseases and deficiency disorders. Such diagnosis requires specialized manpower, which is neither scalable nor accessible to all farmers. To address this issue, machine learning and deep learning (DL)-driven automated systems are designed, which may help farmers diagnose diseases and deficiency disorders in crops so that proper care can be taken in time. Various studies have used transfer learning (TL) models in the recent past, and recent studies have further improved diagnosis performance by ensembling various TL models. However, in all these DL-based studies, the region of interest is not segmented beforehand, and infected-region extraction is left for the DL model to handle automatically. This article therefore proposes a novel framework for the diagnosis of rice-infected leaves based on DL-based segmentation with a bitwise logical AND operation, followed by DL-based classification. The rice diseases covered in this study are bacterial leaf blight, brown spot, and leaf smut. Nutrient deficiencies of nitrogen (N), phosphorus (P), and potassium (K) were also included. The results of the experiments conducted on these datasets showed that the performance of DeepBatch was significantly improved compared to the conventional technique.
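The bitwise logical AND step restricts the classifier's input to the segmented lesion region. A minimal sketch on a hypothetical grayscale patch and binary mask:

```python
def apply_mask(image, mask):
    """Bitwise AND of a grayscale image with a binary segmentation mask:
    pixels where the mask is 0 are zeroed, isolating the infected region
    before it is passed to the classifier."""
    return [
        [pixel & (255 if m else 0) for pixel, m in zip(img_row, mask_row)]
        for img_row, mask_row in zip(image, mask)
    ]

# Hypothetical 2x3 grayscale patch and its predicted lesion mask.
img = [[200, 150, 90], [60, 30, 240]]
mask = [[1, 0, 1], [0, 1, 1]]
print(apply_mask(img, mask))
```

ANDing an 8-bit pixel with 255 leaves it unchanged, while ANDing with 0 clears it, so the operation acts as a per-pixel gate.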
Affiliation(s)
- Mayuri Sharma
- Department of CSE, Assam Royal Global University, Guwahati, Assam, India
- Thipendra Pal Singh
- School of Computer Science Engineering & Technology, Bennett University, Greater Noida, India
- Gaurav Dhiman
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon
- Department of Computer Science and Engineering, University Centre for Research and Development, Chandigarh University, Gharuan, 140413, Mohali, India
- Department of Computer Science and Engineering, Graphic Era Deemed to be University, Dehradun, 248002, India
- Division of Research and Development, Lovely Professional University, Phagwara, India
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, India
- Department of Computer Science, Government Bikram College of Commerce, Patiala, India
- Ashutosh Sharma
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, India
- School of Computer Science, University of Petroleum and Energy Studies, Dehradun, India
32
Ranđelović P, Đorđević V, Miladinović J, Prodanović S, Ćeran M, Vollmann J. High-throughput phenotyping for non-destructive estimation of soybean fresh biomass using a machine learning model and temporal UAV data. Plant Methods 2023; 19:89. [PMID: 37633921] [PMCID: PMC10463513] [DOI: 10.1186/s13007-023-01054-6] [Received: 05/03/2023] [Accepted: 07/15/2023]
Abstract
BACKGROUND Biomass accumulation as a growth indicator can be significant in achieving high and stable soybean yields. More robust genotypes have a better potential for exploiting available resources such as water or sunlight. Biomass data implemented as a new trait in soybean breeding programs could be beneficial in the selection of varieties that are more competitive against weeds and have better radiation use efficiency. The standard techniques for biomass determination are invasive, inefficient, and restricted to a single time point per plot. Machine learning models (MLMs) based on multispectral (MS) images were created to overcome these issues and provide a non-destructive, fast, and accurate tool for in-season estimation of soybean fresh biomass (FB). The MS photos were taken during two growing seasons of 10 soybean varieties, using a six-sensor digital camera mounted on an unmanned aerial vehicle (UAV). For model calibration, canopy cover (CC), plant height (PH), and 31 vegetation indices (VIs) were extracted from the images and used as predictors in the random forest (RF) and partial least squares regression (PLSR) algorithms. To create a more efficient model, highly correlated VIs were excluded so that only the triangular greenness index (TGI) and green chlorophyll index (GCI) remained. RESULTS More precise results with a lower mean absolute error (MAE) were obtained with RF (MAE = 0.17 kg/m2) than with PLSR (MAE = 0.20 kg/m2). High accuracy in the prediction of soybean FB was achieved using only four predictors (CC, PH, and two VIs). The selected model was additionally tested in a two-year trial on an independent set of soybean genotypes in drought-simulation environments. The results showed that soybean grown under drought conditions accumulated less biomass than the control, which was expected due to the limited resources. CONCLUSION The research proved that soybean FB can be successfully predicted using UAV photos and MLMs. Filtering out highly correlated variables reduced the final number of predictors, improving the efficiency of remote biomass estimation. The additional testing in an independent environment proved that the model is capable of distinguishing different values of soybean FB arising from drought. The assessed variability in FB indicates the robustness and effectiveness of the proposed model as a novel tool for the non-destructive estimation of soybean FB.
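The modelling workflow above, four predictors (CC, PH, TGI, GCI) feeding a random forest regressor scored by mean absolute error, can be sketched with scikit-learn. Everything below is a synthetic stand-in, not the study's UAV measurements; the predictor/response relationship and noise level are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
# Synthetic stand-ins for the four retained predictors:
# canopy cover (CC), plant height (PH), and the two vegetation indices (TGI, GCI).
X = rng.uniform(0, 1, size=(n, 4))
# Hypothetical fresh-biomass response (kg/m^2) with a little noise.
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.5 * X[:, 2] + 0.3 * X[:, 3] + rng.normal(0, 0.05, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
mae = mean_absolute_error(y_te, model.predict(X_te))
print(f"MAE = {mae:.3f} kg/m^2")
```

Swapping `RandomForestRegressor` for a PLSR implementation (e.g. `sklearn.cross_decomposition.PLSRegression`) and comparing MAE values reproduces the shape of the comparison reported in the abstract.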
Affiliation(s)
- Predrag Ranđelović
- Institute of Field and Vegetable Crops, Maksima Gorkog 30, 21000, Novi Sad, Serbia
- Vuk Đorđević
- Institute of Field and Vegetable Crops, Maksima Gorkog 30, 21000, Novi Sad, Serbia
- Jegor Miladinović
- Institute of Field and Vegetable Crops, Maksima Gorkog 30, 21000, Novi Sad, Serbia
- Slaven Prodanović
- Faculty of Agriculture, Department of Genetics, Plant Breeding and Seed Science, University of Belgrade, Nemanjina 6, 11080, Zemun-Belgrade, Serbia
- Marina Ćeran
- Institute of Field and Vegetable Crops, Maksima Gorkog 30, 21000, Novi Sad, Serbia
- Johann Vollmann
- Department of Crop Sciences, Institute of Plant Breeding, University of Natural Resources and Life Sciences Vienna, Konrad Lorenz Str. 24, 3430, Tulln an der Donau, Austria
33
Kumar A, Yadav DP, Kumar D, Pant M, Pant G. Multi-scale feature fusion-based lightweight dual stream transformer for detection of paddy leaf disease. Environ Monit Assess 2023; 195:1020. [PMID: 37548778] [DOI: 10.1007/s10661-023-11628-5] [Received: 12/24/2022] [Accepted: 07/22/2023]
Abstract
Traditionally, rice leaf disease identification relies on a visual examination of abnormalities or on analytical results obtained by culturing bacteria in a research lab. Such visual evaluation is qualitative and error-prone, whereas an artificial neural network system is fast and more accurate. Several studies using traditional machine learning and deep convolutional neural networks (CNNs) have attempted to overcome these issues; still, these methods lack semantic, contextual global and local feature extraction, which limits their efficiency. Hence, in the present study, a multi-scale feature fusion-based RDTNet has been designed. RDTNet contains two modules: the first extracts features at three scales from the local binary pattern (LBP), grayscale, and histogram of oriented gradients (HOG) images, and the second extracts semantic global and local features through transformer and convolution blocks. Furthermore, the computing cost is reduced by dividing the query into two parts and feeding them to the convolution and transformer blocks. The results indicate that the proposed method achieves a very high average precision, F1-score, and accuracy of 99.55%, 99.54%, and 99.53%, respectively, suggesting improved classification accuracy from multi-scale features and the transformer. The model has also been validated on other datasets, confirming that it can be used for real-time rice disease diagnosis. In the future, such models can be used for monitoring other crops, including wheat, tomato, and potato.
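The LBP input channel used by the first module can be illustrated with a minimal pure-NumPy local binary pattern. This is the textbook 8-neighbour variant; the paper's exact LBP radius and sampling settings are not specified here, so treat it as a generic sketch:

```python
import numpy as np

def lbp_8neighbors(gray: np.ndarray) -> np.ndarray:
    """Basic 8-neighbour local binary pattern on a 2-D grayscale array.

    Each pixel gets an 8-bit code: one bit per neighbour, set when the
    neighbour is >= the centre pixel (edge pixels use replicated padding).
    """
    padded = np.pad(gray.astype(np.int32), 1, mode="edge")
    center = padded[1:-1, 1:-1]
    codes = np.zeros_like(center)
    # Clockwise neighbour offsets, each contributing one bit of the code.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        h, w = padded.shape
        neighbor = padded[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (neighbor >= center).astype(np.int32) << bit
    return codes

# On a flat patch every neighbour ties with the centre, so every bit is set.
codes = lbp_8neighbors(np.full((4, 4), 7, dtype=np.uint8))
```

In a multi-scale pipeline like the one described, the LBP map would be computed alongside the grayscale and HOG images and fed into the feature-extraction branch.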
Affiliation(s)
- Ajitesh Kumar
- Department of Computer Engineering & Applications, G.L.A. University, Mathura (U.P.), India
- Dhirendra Prasad Yadav
- Department of Computer Engineering & Applications, G.L.A. University, Mathura (U.P.), India
- Deepak Kumar
- Department of Computer Science, NIT Meghalaya, Shillong, India
- Manu Pant
- Department of Biotechnology, Graphic Era (Deemed to be University), Dehradun, India
- Gaurav Pant
- Department of Microbiology, Graphic Era (Deemed to be University), Dehradun, India
34
Sahoo S, Mishra S, Panda B, Bhoi AK, Barsocchi P. An Augmented Modulated Deep Learning Based Intelligent Predictive Model for Brain Tumor Detection Using GAN Ensemble. Sensors (Basel) 2023; 23:6930. [PMID: 37571713] [PMCID: PMC10422344] [DOI: 10.3390/s23156930] [Received: 05/26/2023] [Revised: 07/25/2023] [Accepted: 07/28/2023]
Abstract
Brain tumor detection at the initial stage is becoming an intricate task for clinicians worldwide, and diagnosis in the later stages is rigorous, which is a serious concern. Although there are pragmatic clinical tools and multiple machine learning (ML) models for the effective diagnosis of patients, these models still provide limited accuracy and take immense time for patient screening. Hence, there is a need for a more precise model to detect brain tumors at the beginning stages and aid clinicians in diagnosis, making brain tumor assessment more reliable. In this research, a performance analysis of the impact of different generative adversarial networks (GANs) on the early detection of brain tumors is presented. Based on it, a novel hybrid enhanced predictive convolutional neural network (CNN) model using a hybrid GAN ensemble is proposed. Brain tumor image data are augmented using a GAN ensemble and fed for classification to a hybrid modulated CNN. The outcome is generated through a soft-voting approach, where the final prediction is based on the GAN that computes the highest value for the different performance metrics. This analysis demonstrated that evaluation with a progressive-growing generative adversarial network (PGGAN) architecture produced the best result. PGGAN outperformed the others, with accuracy, precision, recall, F1-score, and negative predictive value (NPV) of 98.85%, 98.45%, 97.2%, 98.11%, and 98.09%, respectively, along with a very low latency of 3.4 s. The PGGAN model enhanced the overall performance of identifying brain cell tissues in real time. Therefore, the results suggest that brain tumor detection using PGGAN augmentation with the proposed modulated CNN generates optimum performance under the soft-voting approach.
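The soft-voting step described above, averaging the class probabilities produced by models trained on differently GAN-augmented data and taking the arg-max, is simple to state in NumPy. The model names and probability values below are hypothetical placeholders:

```python
import numpy as np

# Hypothetical class-probability outputs from CNNs trained on data augmented
# by three different GANs (rows = samples, columns = classes).
probs_dcgan = np.array([[0.6, 0.4], [0.2, 0.8]])
probs_wgan  = np.array([[0.7, 0.3], [0.4, 0.6]])
probs_pggan = np.array([[0.9, 0.1], [0.1, 0.9]])

# Soft voting: average the probabilities, then take the arg-max class.
avg = (probs_dcgan + probs_wgan + probs_pggan) / 3.0
prediction = avg.argmax(axis=1)
```

Unlike hard (majority) voting, soft voting lets a confident model outvote two lukewarm ones, which is why it is the usual choice when calibrated probabilities are available.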
Affiliation(s)
- Saswati Sahoo
- School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, India
- Sushruta Mishra
- School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, India
- Baidyanath Panda
- LTIMindtree, 1 American Row, 3rd Floor, Hartford, CT 06103, USA
- Akash Kumar Bhoi
- Directorate of Research, Sikkim Manipal University, Gangtok 737102, India
- KIET Group of Institutions, Delhi-NCR, Ghaziabad 201206, India
- Institute of Information Science and Technologies, National Research Council, 56124 Pisa, Italy
- Paolo Barsocchi
- Institute of Information Science and Technologies, National Research Council, 56124 Pisa, Italy
35
Liu K, Wang J, Zhang K, Chen M, Zhao H, Liao J. A Lightweight Recognition Method for Rice Growth Period Based on Improved YOLOv5s. Sensors (Basel) 2023; 23:6738. [PMID: 37571522] [PMCID: PMC10422421] [DOI: 10.3390/s23156738] [Received: 05/31/2023] [Revised: 07/19/2023] [Accepted: 07/25/2023]
Abstract
The identification of the growth and development period of rice is of great significance for achieving high-yield, high-quality rice. However, the acquisition of rice growth period information mainly relies on manual observation, which suffers from low efficiency and strong subjectivity. To solve these problems, a lightweight recognition method, Small-YOLOv5, based on an improved YOLOv5s, is proposed to automatically identify the growth period of rice. First, the MobileNetV3 backbone feature extraction network was used to replace the YOLOv5s backbone to reduce the model size and the number of model parameters, thus improving the detection speed of the model. Second, in the feature fusion stage of YOLOv5s, a more lightweight convolution method, GsConv, was introduced to replace the standard convolution. The computational cost of GsConv is about 60-70% of the standard convolution, but its contribution to the model's learning ability is no less. Based on GsConv, a lightweight neck network was built to reduce the complexity of the network model while maintaining accuracy. To verify the performance of Small-YOLOv5, we tested it on a self-built dataset of rice growth periods. The results show that, compared with YOLOv5s (5.0) on this dataset, the number of model parameters was reduced by 82.4%, GFLOPs decreased by 85.9%, and the model volume was reduced by 86.0%. The mAP (0.5) of the improved model was 98.7%, only 0.8% lower than that of the original YOLOv5s. Compared with the mainstream lightweight model YOLOv5s-MobileNetV3-Small, the number of model parameters was decreased by 10.0%, the volume was reduced by 9.6%, the mAP (0.5:0.95) improved by 5.0% to reach 94.7%, and the recall rate improved by 1.5% to reach 98.9%. These experimental comparisons verify the effectiveness and superiority of the model.
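As a rough back-of-the-envelope illustration of why a GsConv-style layer is cheaper than a standard convolution, parameter counts can be compared directly. The layer structure assumed here (a standard convolution to half the output channels followed by a depthwise convolution on those channels) and the channel sizes are assumptions for illustration; the 60-70% figure in the abstract refers to computational cost and depends on the exact layer design:

```python
def conv_params(c_in: int, c_out: int, k: int = 3) -> int:
    """Parameter count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def gsconv_style_params(c_in: int, c_out: int, k: int = 3) -> int:
    """Assumed GsConv-style mix: a standard conv to c_out/2 channels,
    then a depthwise conv on those channels (channel shuffle is free)."""
    half = c_out // 2
    return conv_params(c_in, half, k) + half * k * k

c_in, c_out = 128, 256   # hypothetical neck-layer channel sizes
std = conv_params(c_in, c_out)
gs = gsconv_style_params(c_in, c_out)
print(f"standard: {std}, gsconv-style: {gs}, ratio: {gs / std:.2f}")
```

The saving comes from the depthwise stage, whose cost scales with the channel count rather than the product of input and output channels.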
Affiliation(s)
- Kaixuan Liu
- College of Engineering, Anhui Agricultural University, Hefei 230036, China
- Jie Wang
- Anhui Provincial Rural Comprehensive Economic Information Center, Hefei 230031, China
- Kai Zhang
- College of Engineering, Anhui Agricultural University, Hefei 230036, China
- Minhui Chen
- College of Engineering, Anhui Agricultural University, Hefei 230036, China
- Haonan Zhao
- College of Engineering, Anhui Agricultural University, Hefei 230036, China
- Juan Liao
- College of Engineering, Anhui Agricultural University, Hefei 230036, China
- Hefei Institute of Technology Innovation Engineering, Chinese Academy of Sciences, Hefei 230094, China
36
Wang S, Khan A, Lin Y, Jiang Z, Tang H, Alomar SY, Sanaullah M, Bhatti UA. Deep reinforcement learning enables adaptive-image augmentation for automated optical inspection of plant rust. Front Plant Sci 2023; 14:1142957. [PMID: 37484461] [PMCID: PMC10360175] [DOI: 10.3389/fpls.2023.1142957] [Received: 01/12/2023] [Accepted: 05/29/2023]
Abstract
This study proposes an adaptive image augmentation scheme using deep reinforcement learning (DRL) to improve the performance of a deep learning-based automated optical inspection system. The study addresses the inconsistent performance of single image augmentation methods by introducing a DRL algorithm, DQN, to select the most suitable augmentation method for each image. The proposed approach extracts geometric and pixel indicators to form states and uses the DeepLab-v3+ model to verify the augmented images and generate rewards. Image augmentation methods are treated as actions, and the DQN algorithm selects the best method based on the images and the segmentation model. The study demonstrates that the proposed framework outperforms any single image augmentation method and achieves better segmentation performance than other semantic segmentation models. The framework has practical implications for developing more accurate and robust automated optical inspection systems, which are critical for ensuring product quality in various industries. Future research can explore the generalizability and scalability of the proposed framework to other domains and applications. The code for this application is available at https://github.com/lynnkobe/Adaptive-Image-Augmentation.git.
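The action-selection idea above, treating each augmentation method as a discrete action and letting learned value estimates pick one per image, reduces at decision time to epsilon-greedy selection over Q-values. This is a minimal sketch only: the action names and Q-values are made up, and the DQN network that would produce the Q-values from the image state is omitted:

```python
import random

# Candidate augmentation methods treated as discrete actions (hypothetical set).
ACTIONS = ["flip", "rotate", "color_jitter", "gaussian_noise", "crop"]

def epsilon_greedy(q_values, epsilon=0.1, rng=random.Random(0)):
    """Pick an augmentation action: explore with probability epsilon,
    otherwise exploit the highest current value estimate."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

q = [0.2, 0.9, 0.1, 0.4, 0.3]          # hypothetical per-action Q-values
best = epsilon_greedy(q, epsilon=0.0)  # greedy choice at inference time
```

During training the reward for the chosen action would come, as the abstract describes, from how well the segmentation model handles the augmented image.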
Affiliation(s)
- Shiyong Wang
- School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou, China
- Asad Khan
- Metaverse Research Institute, School of Computer Science and Cyber Engineering, Guangzhou University, Guangzhou, China
- Ying Lin
- School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou, China
- Zhuo Jiang
- College of Food Science, South China Agricultural University, Guangzhou, China
- Hao Tang
- School of Information and Communication Engineering, Hainan University, Haikou, China
- Muhammad Sanaullah
- Department of Computer Science, Bahauddin Zakariya University, Multan, Pakistan
- Uzair Aslam Bhatti
- School of Information and Communication Engineering, Hainan University, Haikou, China
37
Merrouchi M, Benyoussef Y, Skittou M, Atifi K, Gadi T. ConvCoroNet: a deep convolutional neural network optimized with iterative thresholding algorithm for Covid-19 detection using chest X-ray images. J Biomol Struct Dyn 2023:1-14. [PMID: 37354142] [DOI: 10.1080/07391102.2023.2227726] [Received: 12/30/2022] [Accepted: 06/15/2023]
Abstract
Covid-19 is a global pandemic. Early and accurate detection of positive cases prevents further spread of the epidemic and helps treat infected patients rapidly. During the peak of the epidemic there was a shortage of Covid-19 test kits, and RT-PCR-based testing takes considerable time to yield a diagnosis; hence the need for fast, accurate, and low-cost methods to replace or supplement it. Because Covid-19 is a respiratory disease, and chest X-ray images are often used to diagnose pneumonia, these images can play an important role in Covid-19 detection. In this article, we propose ConvCoroNet, a deep convolutional neural network model optimized with a new method based on an iterative thresholding algorithm, to detect coronavirus automatically from chest X-ray images. ConvCoroNet is trained on a dataset prepared by collecting chest X-ray images of Covid-19, pneumonia, and normal cases from publicly available datasets. The experimental results of our proposed model show a high accuracy of 99.50%, sensitivity of 98.80%, and specificity of 99.85% when detecting Covid-19 from chest X-ray images. ConvCoroNet achieves promising results in the automatic detection of Covid-19 from chest X-ray images and may help radiologists by reducing the examination time of X-ray images. Communicated by Ramaswamy H. Sarma.
Affiliation(s)
- M Merrouchi
- Faculty of Science and Technology, Hassan First, Settat, Morocco
- Y Benyoussef
- National School of Applied Sciences, Hassan First, Berrechid, Morocco
- M Skittou
- Faculty of Science and Technology, Hassan First, Settat, Morocco
- K Atifi
- Faculty of Science and Technology, Hassan First, Settat, Morocco
- T Gadi
- Faculty of Science and Technology, Hassan First, Settat, Morocco
38
Bovenizer W, Chetthamrongchai P. A comprehensive systematic and bibliometric review of the IoT-based healthcare systems. Cluster Comput 2023; 26:1-27. [PMID: 37359057] [PMCID: PMC10251338] [DOI: 10.1007/s10586-023-04047-1] [Received: 12/22/2022] [Revised: 04/28/2023] [Accepted: 05/19/2023]
Abstract
In the healthcare sector, the growth of technology has had a huge effect. When introduced to healthcare, the Internet of Things (IoT) can simplify care by helping physicians closely track their patients, supporting rapid recovery. Elderly patients should be monitored intensively, and their loved ones kept periodically informed of their wellbeing; using IoT in healthcare can therefore simplify the lives of physicians and patients alike. Hence, this study presents a comprehensive review of intelligent IoT-based embedded healthcare systems. Papers on intelligent IoT-based healthcare systems published up to December 2022 are studied, and research directions are suggested for future researchers. The study's contribution is to identify strategies for the future deployment of new generations of IoT-based health technology. The findings reveal that IoT can help governments strengthen society's health and economic relations, but that, owing to its novel operating principles, IoT requires modern security infrastructure. This study is helpful for prevalent and useful electronic healthcare services, health experts, and clinicians.
Affiliation(s)
- Wimalyn Bovenizer
- College of Digital Innovation Technology, Rangsit University, Pathum Thani, Thailand
39
Chen T, Wang R, Du J, Chen H, Zhang J, Dong W, Zhang M. CMRD-Net: a deep learning-based Cnaphalocrocis medinalis damage symptom rotated detection framework for in-field survey. Front Plant Sci 2023; 14:1180716. [PMID: 37360701] [PMCID: PMC10285459] [DOI: 10.3389/fpls.2023.1180716] [Received: 03/06/2023] [Accepted: 05/03/2023]
Abstract
The damage symptoms of Cnaphalocrocis medinalis (C. medinalis) are an important evaluation index for pest prevention and control. However, due to the various shapes, arbitrary orientations, and heavy overlaps of C. medinalis damage symptoms under complex field conditions, generic object detection methods based on horizontal bounding boxes cannot achieve satisfactory results. To address this problem, we develop a C. medinalis damage symptom rotated detection framework called CMRD-Net. It mainly consists of a horizontal-to-rotated region proposal network (H2R-RPN) and a rotated-to-rotated region convolutional neural network (R2R-RCNN). First, the H2R-RPN extracts rotated region proposals, combined with adaptive positive sample selection that solves the hard definition of positive samples caused by oriented instances. Second, the R2R-RCNN performs feature alignment based on the rotated proposals and exploits oriented-aligned features to detect the damage symptoms. Experimental results on our constructed dataset show that the proposed method outperforms state-of-the-art rotated object detection algorithms, achieving 73.7% average precision (AP). Additionally, the results demonstrate that our method is more suitable than horizontal detection methods for in-field surveys of C. medinalis.
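A rotated detector like the one above represents each symptom as a box with an angle, (cx, cy, w, h, θ), rather than an axis-aligned rectangle; converting that parameterization to corner points is the basic geometric primitive behind proposal generation and feature alignment. The sketch below is generic geometry, not CMRD-Net code:

```python
import math

def rotated_box_corners(cx, cy, w, h, theta):
    """Corner coordinates of a rotated box.

    (cx, cy) is the centre, (w, h) the side lengths, theta the rotation
    in radians (counter-clockwise). Returns the four corners in order.
    """
    c, s = math.cos(theta), math.sin(theta)
    half = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    # Rotate each half-extent offset, then translate by the centre.
    return [(cx + x * c - y * s, cy + x * s + y * c) for x, y in half]

# A 4x2 box rotated by 90 degrees becomes a 2x4 box.
corners = rotated_box_corners(0.0, 0.0, 4.0, 2.0, math.pi / 2)
```

Overlap between two such boxes (for matching proposals to ground truth) is then computed on the resulting polygons rather than with the usual axis-aligned IoU.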
Affiliation(s)
- Tianjiao Chen
- Institute of Intelligent Machines, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China
- Science Island Branch, University of Science and Technology of China, Hefei, China
- Rujing Wang
- Institute of Intelligent Machines, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China
- Science Island Branch, University of Science and Technology of China, Hefei, China
- Institutes of Physical Science and Information Technology, Anhui University, Hefei, China
- Jianming Du
- Institute of Intelligent Machines, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China
- Hongbo Chen
- Institute of Intelligent Machines, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China
- Science Island Branch, University of Science and Technology of China, Hefei, China
- Jie Zhang
- Institute of Intelligent Machines, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China
- Wei Dong
- Agricultural Economy and Information Research Institute, Anhui Academy of Agricultural Sciences, Hefei, China
- Meng Zhang
- Jingxian Plant Protection Station, Jingxian Plantation Technology Extension Center, Xuancheng, China
40
Das S, Ayus I, Gupta D. A comprehensive review of COVID-19 detection with machine learning and deep learning techniques. Health Technol (Berl) 2023; 13:1-14. [PMID: 37363343] [PMCID: PMC10244837] [DOI: 10.1007/s12553-023-00757-z] [Received: 02/08/2022] [Accepted: 05/14/2023]
Abstract
Purpose The first transmission of coronavirus to humans started in Wuhan city, China, and took the shape of a pandemic called Corona Virus Disease 2019 (COVID-19), posing a principal threat to the entire world. Researchers are trying to apply artificial intelligence (machine learning or deep learning models) for the efficient detection of COVID-19. This review explores the existing machine learning (ML) and deep learning (DL) models used for COVID-19 detection, with the aim of presenting a compact overview of the application of artificial intelligence to research experts and helping them explore future scopes of improvement. Methods Researchers have used various machine learning, deep learning, and combined machine-and-deep-learning models for extracting significant features and classifying various health conditions in COVID-19 patients, utilizing different image modalities such as CT scan and X-ray. This study collected over 200 research papers from repositories such as Google Scholar, PubMed, and Web of Science; after several levels of scrutiny, 50 research articles were finally selected. Results In the listed articles, the ML/DL models showed accuracies of 99% and above when classifying COVID-19. This study also presents various clinical applications of this research and underlines the importance of machine and deep learning models in medical diagnosis. Conclusion It is evident that ML/DL models have made significant progress in recent years, but limitations remain. Overfitting is one such limitation, which can lead to incorrect predictions and overburdening of the models.
The research community must continue to work towards finding ways to overcome these limitations and make machine and deep learning models even more effective and efficient. Through this ongoing research and development, we can expect even greater advances in the future.
Affiliation(s)
- Sreeparna Das
- Department of Computer Science and Engineering, National Institute of Technology Arunachal Pradesh, Jote, Arunachal Pradesh 791113, India
- Ishan Ayus
- Department of Computer Science and Engineering, ITER, Siksha ‘O’ Anusandhan Deemed to be University, Bhubaneswar, Odisha 751030, India
- Deepak Gupta
- Department of Computer Science and Engineering, Motilal Nehru National Institute of Technology Allahabad, Prayagraj, UP 211004, India
41
Hassoun A, Kamiloglu S, Garcia-Garcia G, Parra-López C, Trollman H, Jagtap S, Aadil RM, Esatbeyoglu T. Implementation of relevant fourth industrial revolution innovations across the supply chain of fruits and vegetables: A short update on Traceability 4.0. Food Chem 2023; 409:135303. [PMID: 36586255] [DOI: 10.1016/j.foodchem.2022.135303] [Received: 10/04/2022] [Revised: 11/29/2022] [Accepted: 12/21/2022]
Abstract
Food Traceability 4.0 refers to the application of fourth-industrial-revolution (Industry 4.0) technologies to ensure food authenticity, safety, and high food quality. Growing interest in food traceability has led to the development of a wide range of chemical, biomolecular, isotopic, chromatographic, and spectroscopic methods with varied performance and success rates. This review gives an update on the application of Traceability 4.0 in the fruit and vegetable sector, focusing on relevant Industry 4.0 enablers, especially artificial intelligence, the Internet of Things, blockchain, and big data. The results show that Traceability 4.0 has significant potential to improve the quality and safety of many fruits and vegetables, enhance transparency, reduce the costs of food recalls, and decrease waste and loss. However, due to their high implementation costs and lack of adaptability to industrial environments, most of these advanced technologies have not yet gone beyond the laboratory scale. Further research is therefore anticipated to overcome current limitations for large-scale applications.
42
Rajinikanth V, Vincent PMDR, Gnanaprakasam CN, Srinivasan K, Chang CY. Brain Tumor Class Detection in Flair/T2 Modality MRI Slices Using Elephant-Herd Algorithm Optimized Features. Diagnostics (Basel) 2023; 13:1832. [PMID: 37296683] [DOI: 10.3390/diagnostics13111832] [Received: 03/30/2023] [Revised: 05/08/2023] [Accepted: 05/19/2023]
Abstract
Advances in science and technology have enabled several improvements in computing facilities, including the implementation of automation in multi-specialty hospitals. This research aims to develop an efficient deep-learning-based brain-tumor (BT) detection scheme to detect tumors in FLAIR- and T2-modality magnetic-resonance-imaging (MRI) slices. Axial-plane brain MRI slices are used to test and verify the scheme, and its reliability is also verified on clinically collected MRI slices. The proposed scheme involves the following stages: (i) pre-processing the raw MRI image, (ii) deep-feature extraction using pretrained schemes, (iii) watershed-algorithm-based BT segmentation and mining of shape features, (iv) feature optimization using the elephant-herding algorithm (EHA), and (v) binary classification and verification using three-fold cross-validation. The BT-classification task is accomplished using (a) individual features, (b) dual deep features, and (c) integrated features, with each experiment conducted separately on the chosen BRATS and TCIA benchmark MRI slices. This research indicates that the integrated feature-based scheme achieves a classification accuracy of 99.6667% with a support-vector-machine (SVM) classifier. Further, the performance of the scheme is verified on noise-attacked MRI slices, where better classification results are also achieved.
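The final stage above, binary SVM classification verified with three-fold cross-validation, can be sketched with scikit-learn. The feature vectors below are random synthetic stand-ins (two well-separated Gaussian classes), not the optimized deep/shape features from the study:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Synthetic stand-ins for optimized feature vectors: 60 "normal" and
# 60 "tumor" samples drawn from two clearly separated distributions.
X0 = rng.normal(0.0, 1.0, size=(60, 10))
X1 = rng.normal(3.0, 1.0, size=(60, 10))
X = np.vstack([X0, X1])
y = np.array([0] * 60 + [1] * 60)

# Three-fold cross-validation with an RBF-kernel SVM, as in stage (v).
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=cv)
print("fold accuracies:", scores.round(3))
```

Stratified folds keep the class balance identical across the three splits, which matters when reporting a single averaged accuracy figure.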
Affiliation(s)
- Venkatesan Rajinikanth: Department of Computer Science and Engineering, Division of Research and Innovation, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai 602105, India
- P M Durai Raj Vincent: School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India
- C N Gnanaprakasam: Department of Electronics and Instrumentation Engineering, St. Joseph's College of Engineering, OMR, Chennai 600119, India
- Kathiravan Srinivasan: School of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632014, India
- Chuan-Yu Chang: Department of Computer Science and Information Engineering, National Yunlin University of Science and Technology, Yunlin 64002, Taiwan; Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu 310401, Taiwan
43
Kulkarni S, Rabidas R. Fully convolutional network for automated detection and diagnosis of mammographic masses. Multimed Tools Appl 2023:1-22. [PMID: 37362703 PMCID: PMC10169189 DOI: 10.1007/s11042-023-14757-8]
Abstract
Breast cancer, though rare in males, is very frequent in females and has a high mortality rate that can be reduced if the disease is detected and diagnosed at an early stage. In this paper, a U-Net-based deep learning architecture is therefore proposed for the detection of breast masses and their characterization as benign or malignant. Detection is evaluated on two benchmark datasets, INbreast and DDSM, achieving a true positive rate of 99.64% at 0.25 false positives per image (FPs/I) on INbreast and 97.36% at 0.38 FPs/I on DDSM. For mass characterization, an accuracy of 97.39% with an AUC of 0.97 is obtained on INbreast, and 96.81% with an AUC of 0.96 on DDSM. The measured results compare favourably with state-of-the-art techniques.
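The detection figures quoted above (true positive rate at a given number of false positives per image) are a standard FROC-style summary. A hedged sketch of how such a pair is computed, with made-up per-image counts; the field names are illustrative, not from the paper:

```python
def detection_stats(per_image):
    """Compute true-positive rate and false positives per image (FPs/I).

    per_image: list of dicts with keys
      'tp' (detections matching a ground-truth mass),
      'fp' (detections matching nothing),
      'gt' (ground-truth masses in the image).
    """
    tp = sum(d["tp"] for d in per_image)
    fp = sum(d["fp"] for d in per_image)
    gt = sum(d["gt"] for d in per_image)
    tpr = tp / gt if gt else 0.0
    fppi = fp / len(per_image)
    return tpr, fppi

# Toy example: 4 images, 5 ground-truth masses, 4 found, 1 false alarm.
images = [
    {"tp": 1, "fp": 0, "gt": 1},
    {"tp": 2, "fp": 1, "gt": 2},
    {"tp": 1, "fp": 0, "gt": 2},
    {"tp": 0, "fp": 0, "gt": 0},
]
tpr, fppi = detection_stats(images)  # tpr = 0.8, fppi = 0.25
```

Sweeping the detector's confidence threshold and recomputing this pair at each setting traces out the full FROC curve.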
Affiliation(s)
- Sujata Kulkarni: Department of Electronics & Communication Engineering, Assam University, Silchar, 788010, Assam, India
- Rinku Rabidas: Department of Electronics & Communication Engineering, Assam University, Silchar, 788010, Assam, India
44
Akinyelu AA, Bah B. COVID-19 Diagnosis in Computerized Tomography (CT) and X-ray Scans Using Capsule Neural Network. Diagnostics (Basel) 2023; 13:1484. [PMID: 37189585 DOI: 10.3390/diagnostics13081484]
Abstract
This study proposes a deep-learning-based solution (named CapsNetCovid) for COVID-19 diagnosis using a capsule neural network (CapsNet). CapsNets are robust to image rotations and affine transformations, which is advantageous when processing medical imaging datasets. The study presents a performance analysis of CapsNets on standard images and their augmented variants for binary and multi-class classification. CapsNetCovid was trained and evaluated on two COVID-19 datasets of CT and X-ray images, and additionally evaluated on eight augmented datasets. The proposed model achieved a classification accuracy, precision, sensitivity, and F1-score of 99.929%, 99.887%, 100%, and 99.319%, respectively, on the CT images, and 94.721%, 93.864%, 92.947%, and 93.386%, respectively, on the X-ray images. The study also compares CapsNetCovid with CNN, DenseNet121, and ResNet50 in terms of their ability to correctly classify randomly transformed and rotated CT and X-ray images without data augmentation; under these conditions CapsNetCovid outperforms all three. We hope that this research will improve the decision making and diagnostic accuracy of medical professionals when diagnosing COVID-19.
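At the core of any CapsNet is the "squash" non-linearity, which shrinks a capsule's output vector to a length below 1 while preserving its direction, so the length can be read as a probability. A minimal pure-Python version of that standard function is shown below; the CapsNetCovid architecture itself is not reproduced here.

```python
import math

def squash(s, eps=1e-9):
    """Capsule squashing: v = (|s|^2 / (1 + |s|^2)) * (s / |s|)."""
    norm = math.sqrt(sum(x * x for x in s))
    scale = (norm ** 2 / (1.0 + norm ** 2)) / (norm + eps)
    return [scale * x for x in s]

v = squash([3.0, 4.0])  # input norm 5 -> output norm 25/26 (~0.96)
out_norm = math.sqrt(sum(x * x for x in v))
```

Because the squashed length saturates toward 1 rather than growing without bound, agreement between many low-level capsules is what produces a confident high-level capsule.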
Affiliation(s)
- Andronicus A Akinyelu: Research Centre, African Institute for Mathematical Sciences (AIMS) South Africa, Cape Town 7945, South Africa; Department of Computer Science and Informatics, University of the Free State, Phuthaditjhaba 9866, South Africa
- Bubacarr Bah: Research Centre, African Institute for Mathematical Sciences (AIMS) South Africa, Cape Town 7945, South Africa; Department of Mathematical Sciences, Stellenbosch University, Cape Town 7945, South Africa
45
Chithambarathanu M, Jeyakumar MK. Survey on crop pest detection using deep learning and machine learning approaches. Multimed Tools Appl 2023:1-34. [PMID: 37362671 PMCID: PMC10088765 DOI: 10.1007/s11042-023-15221-3]
Abstract
Effective pest management and control are among the most important elements of commercial food standards. Crop pests can have a huge impact on crop quality and productivity, so it is critical to develop new tools that diagnose pest disease before it causes major crop loss. Crop abnormalities, pests, and dietetic deficiencies have usually been diagnosed by human experts, which is both costly and time-consuming. To resolve these issues, this survey presents a clear overview of recent research on crop pest and pathogen identification using machine learning techniques such as Random Forest (RF), Support Vector Machine (SVM), Decision Tree (DT), and Naive Bayes (NB), as well as deep learning methods such as Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), Deep Convolutional Neural Network (DCNN), and Deep Belief Network (DBN). The outlined strategies increase crop productivity while providing a high level of crop protection. The survey covers modern approaches for monitoring agricultural fields for pest detection, defines plant pest detection for identifying and categorising pests of citrus, rice, and cotton, and describes numerous ways of detecting them. These methods enable automatic monitoring of vast areas, thereby lowering human error and effort.
Affiliation(s)
- M. Chithambarathanu: Department of Computer Science and Engineering, Noorul Islam Centre for Higher Education, Kumaracoil, Tamil Nadu, India
- M. K. Jeyakumar: Department of Computer Applications, Noorul Islam Centre for Higher Education, Kumaracoil, Tamil Nadu, India
46
Neupane C, Pereira M, Koirala A, Walsh KB. Fruit Sizing in Orchard: A Review from Caliper to Machine Vision with Deep Learning. Sensors (Basel) 2023; 23:3868. [PMID: 37112207 PMCID: PMC10144371 DOI: 10.3390/s23083868]
Abstract
Forward estimates of harvest load require information on fruit size as well as number. The task of sizing fruit and vegetables has been automated in the packhouse, progressing from mechanical methods to machine vision over the last three decades. This shift is now occurring for size assessment of fruit on trees, i.e., in the orchard. This review focuses on: (i) allometric relationships between fruit weight and lineal dimensions; (ii) measurement of fruit lineal dimensions with traditional tools; (iii) measurement of fruit lineal dimensions with machine vision, with attention to the issues of depth measurement and recognition of occluded fruit; (iv) sampling strategies; and (v) forward prediction of fruit size (at harvest). Commercially available capability for in-orchard fruit sizing is summarized, and further developments of in-orchard fruit sizing by machine vision are anticipated.
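Point (i), the allometric relationship between fruit weight and a lineal dimension, is typically modelled as a power law W = aL^b and fitted by ordinary least squares on log-transformed data. A small sketch with synthetic, noiseless measurements; the coefficients and the cube-law assumption are illustrative, not values from the review:

```python
import math

def fit_power_law(lengths, weights):
    """Fit W = a * L**b by ordinary least squares in log-log space."""
    xs = [math.log(l) for l in lengths]
    ys = [math.log(w) for w in weights]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope of log W on log L gives the exponent b ...
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    # ... and the intercept gives log a.
    a = math.exp(my - b * mx)
    return a, b

# Synthetic fruit: weight grows with the cube of diameter, a = 0.52.
diams = [40.0, 50.0, 60.0, 70.0, 80.0]
weights = [0.52 * d ** 3 for d in diams]
a, b = fit_power_law(diams, weights)  # recovers a ~ 0.52, b ~ 3.0
```

With such a fitted relation, a lineal dimension measured in-orchard (by caliper or machine vision) converts directly into an estimated fruit weight for harvest-load forecasting.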
47
Hari P, Singh MP. A lightweight convolutional neural network for disease detection of fruit leaves. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08496-y]
48
Zhang C, Wang J, Yan T, Lu X, Lu G, Tang X, Huang B. An instance-based deep transfer learning method for quality identification of Longjing tea from multiple geographical origins. Complex Intell Syst 2023. [DOI: 10.1007/s40747-023-01024-4]
Abstract
For practitioners, accurate and automatic vision-based quality identification of Longjing tea is crucial. Owing to the high similarity between classes, the classification accuracy of traditional image processing combined with machine learning algorithms is not satisfactory, while high-performance deep learning methods require large amounts of annotated data whose collection and labeling is time-consuming and monotonous. To gain as much useful knowledge as possible from related tasks, an instance-based deep transfer learning method for the quality identification of Longjing tea is proposed. The method consists of two steps: (i) a MobileNet V2 model is trained on a hybrid training dataset containing all labeled samples from the source and target domains and then used as a feature extractor, and (ii) the extracted features are input into the proposed multiclass TrAdaBoost algorithm for training and identification. Longjing tea images from three geographical origins, West Lake, Qiantang, and Yuezhou, are collected, with four grades per origin. The Longjing tea from West Lake, which has more labeled samples, is regarded as the source domain; the other two origins, with only limited labeled samples, form the target domains. Comparative experiments show that the best-performing configuration is the MobileNet V2 feature extractor trained on the hybrid dataset combined with multiclass TrAdaBoost using a linear support vector machine (SVM). The overall quality-identification accuracy is 93.6% and 91.5% on the two target-domain datasets, respectively. The proposed method achieves accurate quality identification of Longjing tea with limited samples and can provide heuristics for designing image-based tea quality identification systems.
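The instance-based transfer in TrAdaBoost rests on one weight update: misclassified source-domain instances are down-weighted (they look less transferable), while misclassified target-domain instances are up-weighted, AdaBoost-style. A hedged single-round sketch of that update, using the factors from the original binary TrAdaBoost; the paper's multiclass variant and SVM base learner are not reproduced:

```python
import math

def tradaboost_step(w_src, w_tgt, err_src, err_tgt, epsilon, n_rounds):
    """One TrAdaBoost weight update.

    err_src / err_tgt: 1 if the instance was misclassified this round, else 0.
    epsilon: weighted error of the base learner on the target domain (< 0.5).
    """
    # Fixed factor that shrinks the weight of misclassified source instances.
    beta = 1.0 / (1.0 + math.sqrt(2.0 * math.log(len(w_src)) / n_rounds))
    # AdaBoost-style factor that grows misclassified target weights.
    beta_t = epsilon / (1.0 - epsilon)
    new_src = [w * beta ** e for w, e in zip(w_src, err_src)]
    new_tgt = [w * beta_t ** (-e) for w, e in zip(w_tgt, err_tgt)]
    return new_src, new_tgt

# Two source and two target instances; the first of each is misclassified.
src, tgt = tradaboost_step(
    w_src=[1.0, 1.0], w_tgt=[1.0, 1.0],
    err_src=[1, 0], err_tgt=[1, 0],
    epsilon=0.25, n_rounds=10,
)
```

Over successive rounds, source instances that keep disagreeing with the target task fade out of the training distribution, which is what lets the limited Qiantang and Yuezhou samples benefit from the larger West Lake set.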
49
Bhosale YH, Patnaik KS. Bio-medical imaging (X-ray, CT, ultrasound, ECG), genome sequences applications of deep neural network and machine learning in diagnosis, detection, classification, and segmentation of COVID-19: a meta-analysis & systematic review. Multimed Tools Appl 2023:1-54. [PMID: 37362676 PMCID: PMC10015538 DOI: 10.1007/s11042-023-15029-1]
Abstract
This review investigates how deep machine learning (DML) has been applied to the COVID-19 epidemic and provides recommendations for future COVID-19 research. Although vaccines for this epidemic have been developed, DL methods have proven to be a valuable asset in radiologists' arsenals for the automated assessment of COVID-19. This detailed review discusses the techniques and applications developed for COVID-19 findings using DL systems, and provides insights into notable datasets used to train neural networks, data partitioning, and various performance measurement metrics. A PRISMA taxonomy was formed based on pretrained (45 systems) and hybrid/custom (17 systems) models with radiography modalities. In total, 62 systems are selected from the studied articles, covering X-ray (32), CT (19), ultrasound (7), ECG (2), and genome-sequence (2) modalities. We begin by assessing the present state of DL and conclude with its significant limitations, which include a lack of interpretability, limited generalization, learning from incompletely labeled data, and data privacy concerns. Moreover, DML can be utilized to detect and classify COVID-19 among other COPD illnesses. This literature review has identified many DL-based systems developed to fight COVID-19, and we expect it will help speed up DL work for COVID-19 researchers, including medical and radiology technicians and data engineers.
Affiliation(s)
- Yogesh H. Bhosale: Computer Science and Engineering Department, Birla Institute of Technology, Mesra, Ranchi, India
- K. Sridhar Patnaik: Computer Science and Engineering Department, Birla Institute of Technology, Mesra, Ranchi, India
50
Vinay Kumar V, Grace Kanmani Prince P. Deep belief network assisted quadratic logit boost classifier for brain tumor detection using MR images. Biomed Signal Process Control 2023; 81:104415. [DOI: 10.1016/j.bspc.2022.104415]