1.
Xiong P, Zhang C, He L, Zhan X, Han Y. Deep learning-based rice pest detection research. PLoS One 2024; 19:e0313387. PMID: 39509376; PMCID: PMC11542820; DOI: 10.1371/journal.pone.0313387.
Abstract
With increasing pressure on global food security, effective detection and management of rice pests have become crucial. Traditional pest detection methods are not only time-consuming and labor-intensive but also often fail to achieve real-time monitoring and rapid response. This study addresses rice pest detection with deep learning techniques to enhance agricultural productivity and sustainability. The research uses the IP102 large-scale pest benchmark dataset, introduced at CVPR 2019, from which 9,663 images covering eight rice pest classes were drawn, split 8:2 between training and testing. By optimizing the YOLOv8 model, incorporating the CBAM (Convolutional Block Attention Module) attention mechanism, and adding BiFPN (Bidirectional Feature Pyramid Network) feature fusion, detection accuracy in complex agricultural environments was significantly improved. Experimental results show that the improved YOLOv8 model achieved mAP@0.5 and mAP@0.5:0.95 scores of 98.8% and 78.6%, respectively, increases of 2.8 and 2.35 percentage points over the original model. This study confirms the potential of deep learning in pest detection and provides a new technological approach for future agricultural pest management.
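For readers unfamiliar with the attention module named here, the following is a minimal PyTorch sketch of CBAM as commonly defined (Woo et al., 2018). The abstract does not state where the authors wire it into YOLOv8, so the module is shown in isolation under that assumption.

```python
# Minimal CBAM sketch (standard formulation); its integration point inside
# YOLOv8 is an assumption, so the module is shown standalone.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Shared MLP applied to both average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, both multiplicative."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = x * self.ca(x)
        return x * self.sa(x)
```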
Affiliation(s)
- Peng Xiong: Wuhan Polytechnic University, Wuhan, Hubei, China
- Cong Zhang: Wuhan Polytechnic University, Wuhan, Hubei, China
- Linfeng He: Wuhan Polytechnic University, Wuhan, Hubei, China
- Xiaoyun Zhan: Wuhan Polytechnic University, Wuhan, Hubei, China
- Yuantao Han: Wuhan Polytechnic University, Wuhan, Hubei, China
2.
Wu R, He F, Rong Z, Liang Z, Xu W, Ni F, Dong W. TP-Transfiner: high-quality segmentation network for tea pest. Front Plant Sci 2024; 15:1411689. PMID: 39193216; PMCID: PMC11347396; DOI: 10.3389/fpls.2024.1411689.
Abstract
Prompt detection and control of tea pests is crucial for safeguarding tea production quality. Because of the limited feature extraction ability of traditional CNN-based methods, they struggle with inaccurate and inefficient detection of pests in dense and mimicry scenarios. This study proposes an end-to-end tea pest detection and segmentation framework, TeaPest-Transfiner (TP-Transfiner), based on Mask Transfiner, to address the challenge of detecting and segmenting pests in such scenarios. To overcome the weak feature extraction and limited accuracy of traditional convolution modules, this study adopts three strategies. First, a deformable attention block is integrated into the model, combining deformable convolution with self-attention that uses only the key-content term. Second, the FPN architecture in the backbone network is replaced with a more effective feature-aligned pyramid network (FaPN). Last, focal loss is employed to balance positive and negative samples during training, with its parameters adapted to the dataset distribution. Furthermore, to address the scarcity of tea pest images, a dataset called TeaPestDataset is constructed, containing 1,752 images spanning 29 species of tea pests. Experimental results on TeaPestDataset show that the proposed TP-Transfiner model achieves state-of-the-art performance, attaining a detection precision (AP50) of 87.211% and a segmentation precision of 87.381%. Notably, the model improves segmentation average precision (mAP) by 9.4% and reduces model size by 30% compared with the state-of-the-art CNN-based model Mask R-CNN. At the same time, TP-Transfiner's lightweight module fusion maintains fast inference and a compact model size, demonstrating practical potential for pest control in tea gardens, especially in dense and mimicry scenarios.
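The focal loss mentioned above is standard enough to sketch. A minimal PyTorch version follows (Lin et al., 2017); the alpha and gamma values are illustrative defaults, not the parameters the authors adapted to their dataset distribution.

```python
# Hedged sketch of binary focal loss; alpha/gamma are illustrative defaults,
# not the paper's tuned values.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t), averaged over samples."""
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)          # prob of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()    # down-weight easy examples
```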
Affiliation(s)
- Ruizhao Wu: College of Informatics, Huazhong Agricultural University, Wuhan, China
- Feng He: College of Informatics, Huazhong Agricultural University, Wuhan, China; Engineering Research Center of Intelligent Technology for Agriculture, Ministry of Education, College of Informatics, Huazhong Agricultural University, Wuhan, China
- Ziyang Rong: College of Informatics, Huazhong Agricultural University, Wuhan, China; Engineering Research Center of Intelligent Technology for Agriculture, Ministry of Education, College of Informatics, Huazhong Agricultural University, Wuhan, China
- Zhixue Liang: School of Computer Science, Wuhan University, Wuhan, China
- Wenxing Xu: College of Plant Science & Technology, Huazhong Agricultural University, Wuhan, China
- Fuchuan Ni: Engineering Research Center of Intelligent Technology for Agriculture, Ministry of Education, College of Informatics, Huazhong Agricultural University, Wuhan, China
- Wenyong Dong: School of Computer Science, Wuhan University, Wuhan, China
3.
Bery S, Brown-Brandl TM, Jones BT, Rohrer GA, Sharma SR. Determining the Presence and Size of Shoulder Lesions in Sows Using Computer Vision. Animals (Basel) 2023; 14:131. PMID: 38200862; PMCID: PMC10777999; DOI: 10.3390/ani14010131.
Abstract
Shoulder sores predominantly arise in breeding sows and often result in untimely culling. Reported prevalence rates vary widely, spanning 5% to 50% depending on the type of crate flooring, the animal's body condition, and any existing injury that causes lameness. These lesions are not only a welfare concern but also an economic burden due to the labor and medication needed for treatment. The objective of this study was to evaluate computer vision techniques for detecting shoulder lesions and determining their size. A Microsoft Kinect V2 camera captured top-down depth and RGB images of sows in farrowing crates, with RGB images collected at a resolution of 1920 × 1080. To ensure the best view of the lesions, images were selected in which sows lay on their right or left side with all legs extended. A total of 824 RGB images from 70 sows with lesions at various stages of development were identified and annotated. Three deep-learning object detection models, YOLOv5, YOLOv8, and Faster R-CNN, pre-trained on the COCO and ImageNet datasets, were implemented to localize the lesion area. YOLOv5 was the best predictor, detecting lesions with an mAP@0.5 of 0.92. To estimate lesion area, pixel-level segmentation was carried out on the localized region using traditional image processing techniques, such as Otsu's binarization and adaptive thresholding, alongside deep-learning segmentation models based on the U-Net architecture. In conclusion, this study demonstrates the potential of computer vision techniques to detect and assess the size of shoulder lesions in breeding sows, providing a promising avenue for improving sow welfare and reducing economic losses.
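The Otsu step in the traditional pipeline is simple to illustrate. Below is a minimal OpenCV sketch of binarizing a detector-localized lesion crop; the file name and Gaussian preprocessing are assumptions for the sketch, not the authors' exact pipeline.

```python
# Illustrative Otsu binarization of a localized lesion crop (OpenCV).
# The file name and blur step are assumptions, not the paper's pipeline.
import cv2

crop = cv2.imread("lesion_crop.png", cv2.IMREAD_GRAYSCALE)   # hypothetical crop
blurred = cv2.GaussianBlur(crop, (5, 5), 0)                  # suppress speckle noise
# Otsu automatically selects the threshold minimizing intra-class variance.
t, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
lesion_area_px = int((mask > 0).sum())                       # pixel count as area proxy
print(f"Otsu threshold {t:.0f}, lesion area {lesion_area_px} px")
```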
Affiliation(s)
- Shubham Bery: Department of Biological Systems Engineering, University of Nebraska-Lincoln, Lincoln, NE 68583, USA
- Tami M. Brown-Brandl: Department of Biological Systems Engineering, University of Nebraska-Lincoln, Lincoln, NE 68583, USA
- Bradley T. Jones: Genetics and Breeding Research Unit, USDA-ARS U.S. Meat Animal Research Center, Clay Center, NE 68933, USA
- Gary A. Rohrer: Genetics and Breeding Research Unit, USDA-ARS U.S. Meat Animal Research Center, Clay Center, NE 68933, USA
- Sudhendu Raj Sharma: Department of Biological Systems Engineering, University of Nebraska-Lincoln, Lincoln, NE 68583, USA
4.
Li X, Wang L, Miao H, Zhang S. Aphid Recognition and Counting Based on an Improved YOLOv5 Algorithm in a Climate Chamber Environment. Insects 2023; 14:839. PMID: 37999038; PMCID: PMC10671967; DOI: 10.3390/insects14110839.
Abstract
Because of changing light intensity, varying degrees of aphid aggregation, and the small scale of the targets in a climate chamber environment, accurately identifying and counting aphids remains challenging. This paper proposes an improved CNN-based YOLOv5 model for aphid recognition and counting. First, to reduce overfitting caused by insufficient data, the proposed model expands the aphid dataset with an image augmentation method combining Mosaic and GridMask. Second, a convolutional block attention module (CBAM) is introduced into the backbone to improve recognition accuracy for small aphid targets. The bi-directional feature pyramid network (BiFPN) feature fusion method is then employed in the YOLOv5 neck, further improving recognition accuracy and speed; in addition, a Transformer structure is introduced in front of the detection head to investigate the impact of aphid aggregation and light intensity on recognition accuracy. Experiments show that, with these methods combined, the model reaches a recognition accuracy and recall of 99.1%, an mAP@0.5 of 99.3%, and an inference time of 9.4 ms, significantly better than other YOLO-series networks. It is also robust in practical recognition tasks and can serve as a reference for pest prevention and control in climate chambers.
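GridMask, one half of the augmentation pair, can be sketched in a few lines of NumPy. The grid period and mask ratio below are assumed values, and the published method also randomizes the grid offset, which this minimal version omits.

```python
# Minimal GridMask-style augmentation; d and ratio are assumed values, and the
# random grid offset of the full method is omitted for brevity.
import numpy as np

def grid_mask(image, d=40, ratio=0.5):
    """Zero out a regular grid of square patches of side d*ratio, period d."""
    h, w = image.shape[:2]
    mask = np.ones((h, w), dtype=image.dtype)
    k = int(d * ratio)                       # side length of each masked square
    for y in range(0, h, d):
        for x in range(0, w, d):
            mask[y:y + k, x:x + k] = 0
    # Broadcast the mask over color channels when present.
    return image * (mask[..., None] if image.ndim == 3 else mask)
```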
Affiliation(s)
| | | | - Hong Miao
- College of Mechanical Engineering, Yangzhou University, Yangzhou 225127, China
| | | |
5.
Wu S, Wang J, Liu L, Chen D, Lu H, Xu C, Hao R, Li Z, Wang Q. Enhanced YOLOv5 Object Detection Algorithm for Accurate Detection of Adult Rhynchophorus ferrugineus. Insects 2023; 14:698. PMID: 37623408; PMCID: PMC10455671; DOI: 10.3390/insects14080698.
Abstract
The red palm weevil (RPW, Rhynchophorus ferrugineus) is an invasive and highly destructive pest that poses a serious threat to palm plants. To improve the efficiency of managing adult RPWs, this paper proposes an enhanced YOLOv5 object detection algorithm based on attention mechanisms. First, detection of small targets is strengthened by adding a convolutional layer to the YOLOv5 backbone and forming a quadruple down-sampling layer by splicing and down-sampling the convolutional layers. Second, the Squeeze-and-Excitation (SE) and Convolutional Block Attention Module (CBAM) attention mechanisms are inserted directly before the SPPF structure to improve the model's feature extraction. Then, 2,600 images of RPWs in different scenes and forms are collected and organized for data support, divided into training, validation, and test sets at a ratio of 7:2:1. Finally, experiments demonstrate that the enhanced YOLOv5 algorithm achieves an average precision of 90.1% (mAP@0.5) and a precision of 93.8% (P), a significant improvement over related models. In conclusion, the enhanced model brings higher detection accuracy and real-time performance to the RPW pre-detection system, helping practitioners take timely preventive and control measures to avoid serious infestation. It also offers scalability to other pest pre-detection systems: with a corresponding dataset and training, the algorithm can be adapted to detect other pests, broadening its applications in agricultural pest monitoring and control.
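Of the two attention blocks inserted before SPPF, the SE block is the simpler to sketch. A minimal PyTorch version follows (Hu et al., 2018); the reduction factor is an assumed default, and the placement inside YOLOv5 is taken from the abstract rather than a verified implementation.

```python
# Hedged sketch of a Squeeze-and-Excitation block; reduction=16 is an
# assumed default, not the paper's setting.
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: one value per channel
        self.fc = nn.Sequential(              # excitation: per-channel weights
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                          # reweight feature channels
```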
Affiliation(s)
- Shuai Wu: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Jianping Wang: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Li Liu: Hainan Key Laboratory of Tropical Oil Crops Biology, Coconut Research Institute of Chinese Academy of Tropical Agricultural Sciences, Wenchang 571339, China
- Danyang Chen: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China; Shunde Innovation School, University of Science and Technology Beijing, Foshan 528399, China
- Huimin Lu: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China; Shunde Innovation School, University of Science and Technology Beijing, Foshan 528399, China
- Chao Xu: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Rui Hao: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Zhao Li: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Qingxuan Wang: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
6.
Kang H, Ai L, Zhen Z, Lu B, Man Z, Yi P, Li M, Lin L. A Novel Deep Learning Model for Accurate Pest Detection and Edge Computing Deployment. Insects 2023; 14:660. PMID: 37504666; PMCID: PMC10380246; DOI: 10.3390/insects14070660.
Abstract
This work proposes and implements an attention-mechanism-enhanced single-stage object detection model for rice pest detection. A multi-scale feature fusion network was first constructed to improve predictive accuracy on pests of different scales. Attention mechanisms were then introduced so the model focuses more on the pest regions in images, significantly enhancing performance. Additionally, a compact network trained via knowledge distillation was designed for edge computing scenarios, achieving high inference speed while maintaining high accuracy. Experimental verification on the IDADP dataset shows that the model outperforms current state-of-the-art object detection models in precision, recall, accuracy, mAP, and FPS, achieving an mAP of 87.5% and 56 FPS and significantly outperforming the comparison models. These results demonstrate the effectiveness and superiority of the proposed method.
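The distillation component can be sketched with the classic soft-target loss of Hinton et al. (2015). In the version below, the temperature and mixing weight are illustrative, not the paper's settings, and any detection-specific distillation details are beyond what the abstract states.

```python
# Classic knowledge-distillation loss sketch (Hinton et al., 2015);
# T and alpha are illustrative, not the paper's tuned values.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend soft-target KL against the teacher with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                               # T^2 restores gradient scale
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```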
Affiliation(s)
- Huangyi Kang: College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
- Luxin Ai: College of Plant Protection, China Agricultural University, Beijing 100083, China
- Zengyi Zhen: College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
- Baojia Lu: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Zhangli Man: College of Plant Protection, China Agricultural University, Beijing 100083, China
- Pengyu Yi: College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
- Manzhou Li: College of Plant Protection, China Agricultural University, Beijing 100083, China
- Li Lin: College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China