1
Xiong H, Li J, Wang T, Zhang F, Wang Z. EResNet-SVM: an overfitting-relieved deep learning model for recognition of plant diseases and pests. Journal of the Science of Food and Agriculture 2024; 104:6018-6034. [PMID: 38483173] [DOI: 10.1002/jsfa.13462]
Abstract
BACKGROUND Accurate recognition and early warning of plant diseases and pests are prerequisites for their intelligent prevention and control. Because affected plants show similar phenotypes once diseases and pests occur, and because of interference from the external environment, traditional deep learning models often overfit when recognizing the phenotypes of plant diseases and pests, leading to slow network convergence as well as low recognition accuracy. RESULTS Motivated by these problems, the present study proposes a deep learning model, EResNet-support vector machine (SVM), that alleviates overfitting in the recognition and classification of plant diseases and pests. First, the feature extraction capability of the model is improved by adding feature extraction layers to the convolutional neural network. Second, order-reduction modules are embedded and a sparsely activated function is introduced to reduce model complexity and alleviate overfitting. Finally, a classifier fusing an SVM with fully connected layers is introduced to transform the original non-linear classification problem into a linear classification problem in a high-dimensional space, further alleviating overfitting and improving the recognition accuracy for plant diseases and pests. Ablation experiments demonstrate that the fused structure effectively alleviates overfitting and improves recognition accuracy. Recognition results for typical plant diseases and pests show that the proposed EResNet-SVM model achieves 99.30% test accuracy across eight conditions (seven plant diseases and one normal state), 5.90% higher than the original ResNet18. Compared with the classic AlexNet, GoogLeNet, Xception, SqueezeNet and DenseNet201 models, EResNet-SVM improves accuracy by 5.10%, 7%, 8.10%, 6.20% and 1.90%, respectively. The test accuracy of the EResNet-SVM model for six insect pests is 100%, 3.90% higher than that of the original ResNet18 model. CONCLUSION This research provides useful references for alleviating overfitting in deep learning, as well as theoretical and technical support for the intelligent detection and control of plant diseases and pests. © 2024 Society of Chemical Industry.
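The SVM-plus-fully-connected classifier idea above, mapping deep features into a high-dimensional space where classes become linearly separable, can be illustrated with a minimal hinge-loss linear classifier. This sketch is not the authors' EResNet-SVM: it only trains a binary linear SVM by subgradient descent on fixed feature vectors, and the data and hyperparameters are made up for illustration.

```python
def train_linear_svm(feats, labels, lam=0.01, lr=0.1, epochs=200):
    """Train a binary linear SVM (labels in {-1, +1}) on fixed feature
    vectors via subgradient descent on the regularized hinge loss."""
    dim = len(feats[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(feats, labels):
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:  # point violates the margin: hinge gradient step
                w = [wi - lr * (lam * wi - y * xi) for wi, xi in zip(w, x)]
                b += lr * y
            else:           # point is safe: only the regularizer acts
                w = [wi - lr * lam * wi for wi in w]
    return w, b

def svm_predict(w, b, x):
    """Sign of the linear decision function."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
```

In the paper's setting the inputs would be the CNN's extracted feature vectors rather than raw pixels; the linear decision boundary in that feature space is what makes the classification problem effectively linear.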
Affiliation(s)
- Haitao Xiong
- College of Mechanical and Electrical Engineering, Qingdao Agricultural University, Qingdao, China
- Juan Li
- College of Mechanical and Electrical Engineering, Qingdao Agricultural University, Qingdao, China
- Tiewei Wang
- College of Mechanical and Electrical Engineering, Qingdao Agricultural University, Qingdao, China
- Jimo District Water Conservancy Bureau of Qingdao City, Qingdao, China
- Fan Zhang
- College of Mechanical and Electrical Engineering, Qingdao Agricultural University, Qingdao, China
- Ziyang Wang
- College of Mechanical and Electrical Engineering, Qingdao Agricultural University, Qingdao, China
2
Yang S, Zhou G, Feng Y, Zhang J, Jia Z. SRNet-YOLO: A model for detecting tiny and very tiny pests in cotton fields based on super-resolution reconstruction. Frontiers in Plant Science 2024; 15:1416940. [PMID: 39184581] [PMCID: PMC11341441] [DOI: 10.3389/fpls.2024.1416940]
Abstract
Introduction Effective pest management is important during the natural growth phases of cotton in the wild. Cotton fields are infested during growth with "tiny pests" (smaller than 32×32 pixels) and "very tiny pests" (smaller than 16×16 pixels), which common object detection models struggle to detect accurately, in turn hindering sound agricultural decisions. Methods In this study, we propose a framework for detecting "tiny pests" and "very tiny pests" in wild cotton fields, named SRNet-YOLO. SRNet-YOLO comprises a YOLOv8 feature extraction module, a feature-map super-resolution reconstruction module (FM-SR), and a fusion mechanism based on BiFormer attention (BiFormerAF). Specifically, the FM-SR module operates at the feature-map level to recover important feature detail: it reconstructs the P5-layer feature map to the size of the P3 layer. We then designed the BiFormerAF module to fuse this reconstructed layer with the P3 layer, which greatly improves detection performance; its purpose is to compensate for feature loss that may occur during reconstruction. Additionally, to validate the performance of our method for detecting "tiny pests" and "very tiny pests" in cotton fields, we built a large dataset, named Cotton-Yellow-Sticky-2023, of pests collected on yellow sticky traps. Results Comprehensive experiments demonstrate that the proposed framework achieves exceptional performance. Our method achieved 78.2% mAP on the "tiny pests" test set, surpassing leading detection models such as YOLOv3, YOLOv5, YOLOv7 and YOLOv8 by 6.9%, 7.2%, 5.7% and 4.1%, respectively. Meanwhile, our results on "very tiny pests" reached 57% mAP, 32.2% higher than YOLOv8. To verify the generalizability of the model, we also ran experiments on the low-resolution Yellow Sticky Traps dataset, where it still achieved the highest mAP, 92.8%. Discussion These experimental results indicate that our model not only helps solve the tiny-pest problem in cotton fields but also generalizes well and can be used to detect tiny pests in other crops.
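The FM-SR idea of reconstructing the P5 feature map to the P3 size is, spatially, a 4× enlargement (stride 32 to stride 8). The paper's module is learned; the sketch below only reproduces the shape change with nearest-neighbour upsampling, and the input values are made up.

```python
def upsample_nearest(fmap, scale=4):
    """Nearest-neighbour upsampling of a 2-D feature map (list of rows).
    Illustrates the P5 -> P3 size relationship (a 4x spatial enlargement);
    the real FM-SR module learns this reconstruction instead."""
    out = []
    for row in fmap:
        wide = []
        for v in row:
            wide.extend([v] * scale)          # repeat each value along x
        out.extend([list(wide) for _ in range(scale)])  # repeat rows along y
    return out
```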
Affiliation(s)
- Sen Yang
- School of Computer Science and Technology, Xinjiang University, Urumqi, China
- The Key Laboratory of Signal Detection and Processing, Xinjiang Uygur Autonomous Region, Xinjiang University, Urumqi, China
- Gang Zhou
- School of Computer Science and Technology, Xinjiang University, Urumqi, China
- The Key Laboratory of Signal Detection and Processing, Xinjiang Uygur Autonomous Region, Xinjiang University, Urumqi, China
- Yuwei Feng
- School of Computer Science and Technology, Xinjiang University, Urumqi, China
- The Key Laboratory of Signal Detection and Processing, Xinjiang Uygur Autonomous Region, Xinjiang University, Urumqi, China
- Jiang Zhang
- School of Computer Science and Technology, Xinjiang University, Urumqi, China
- The Key Laboratory of Signal Detection and Processing, Xinjiang Uygur Autonomous Region, Xinjiang University, Urumqi, China
- Zhenhong Jia
- School of Computer Science and Technology, Xinjiang University, Urumqi, China
- The Key Laboratory of Signal Detection and Processing, Xinjiang Uygur Autonomous Region, Xinjiang University, Urumqi, China
3
Sun K, Liu C, Han J, Zhang J, Qi Y. Phenotypic detection of flax plants based on improved Flax-YOLOv5. Frontiers in Plant Science 2024; 15:1404772. [PMID: 39055359] [PMCID: PMC11269193] [DOI: 10.3389/fpls.2024.1404772]
Abstract
Accurate detection and counting of flax plant organs are crucial for obtaining phenotypic data and are the cornerstone of flax variety selection and management strategies. In this study, a Flax-YOLOv5 model is proposed for obtaining flax plant phenotypic data. Building on the original YOLOv5x feature extraction network, the network structure was extended with the BiFormer module, which seamlessly integrates bi-directional encoders and converters, enabling the model to focus on key features in an adaptive-query manner and thereby improving its computational performance and efficiency. In addition, we introduced the SIoU function to compute the regression loss, which effectively addresses the mismatch between predicted and ground-truth boxes. Images of flax plants grown in Lanzhou were collected to produce the training, validation, and test sets; on the validation set, the mean average precision (mAP@0.5) was 99.29%. On the test set, the correlation coefficients (R) between the model's predictions and the manually measured number of flax fruits, plant height, main stem length, and number of main stem divisions were 99.59%, 99.53%, 99.05%, and 92.82%, respectively. This study provides a stable and reliable method for detecting and quantifying flax phenotypic characteristics and offers a new technical approach for selecting and breeding improved varieties.
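The phenotype validation step reports correlation coefficients (R) between model predictions and manual measurements. A minimal Pearson-correlation helper, as one standard way such an R is computed (this is not taken from the paper's code):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between predicted and manually
    measured phenotype values (e.g. fruit counts or plant heights)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```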
Affiliation(s)
- Kai Sun
- College of Information Science and Technology, Gansu Agricultural University, Lanzhou, China
- Chengzhong Liu
- College of Information Science and Technology, Gansu Agricultural University, Lanzhou, China
- Junying Han
- College of Information Science and Technology, Gansu Agricultural University, Lanzhou, China
- Jianping Zhang
- Crop Research Institute, Gansu Academy of Agricultural Sciences, Lanzhou, China
- Yanni Qi
- Crop Research Institute, Gansu Academy of Agricultural Sciences, Lanzhou, China
4
Xiong B, Li D, Zhang Q, Desneux N, Luo C, Hu Z. Image detection model construction of Apolygus lucorum and Empoasca spp. based on improved YOLOv5. Pest Management Science 2024; 80:2577-2586. [PMID: 38243837] [DOI: 10.1002/ps.7964]
Abstract
BACKGROUND The polyphagous mirid bug Apolygus lucorum (Meyer-Dür) and the green leafhopper Empoasca spp. Walsh are small, widely distributed pests of many economically important crops, especially kiwifruit. Conventional monitoring methods are expensive, laborious and error-prone, and current deep learning methods recognize these pests poorly. This study proposes a new deep-learning-based YOLOv5s_HSSE model to automatically detect and count them on sticky card traps. RESULTS A database of 1502 images was built, all collected from kiwi orchards at multiple locations and times. We trained the YOLOv5s model to detect and count the pests, then changed its activation function to hard-swish, introduced the SIoU loss function, and added the squeeze-and-excitation attention mechanism to form the new YOLOv5s_HSSE model. On the test dataset, the model's mean average precision was 95.9%, its recall was 93.9% and its throughput was 155 frames per second, all higher than those of other single-stage deep learning models such as SSD, YOLOv3 and YOLOv4. CONCLUSION The proposed YOLOv5s_HSSE model can identify and count A. lucorum and Empoasca spp., providing a new, efficient and accurate monitoring method. Pest detection will benefit from broader applications of deep learning. © 2024 Society of Chemical Industry.
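One of the model changes above, the hard-swish activation, has a standard closed form, x·ReLU6(x + 3)/6. A minimal sketch of that function (not the authors' code):

```python
def hard_swish(x):
    """Hard-swish activation: x * ReLU6(x + 3) / 6 -- a cheap,
    piecewise-linear approximation of the swish activation."""
    relu6 = min(max(x + 3.0, 0.0), 6.0)
    return x * relu6 / 6.0
```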
Affiliation(s)
- Bo Xiong
- State Key Laboratory of Crop Stress Biology for Arid Areas, Key Laboratory of Plant Protection Resources and Pest Management of Ministry of Education, Key Laboratory of Integrated Pest Management on the Loess Plateau of Ministry of Agriculture and Rural Affairs, College of Plant Protection, Northwest A&F University, Yangling, China
- Delu Li
- State Key Laboratory of Crop Stress Biology for Arid Areas, Key Laboratory of Plant Protection Resources and Pest Management of Ministry of Education, Key Laboratory of Integrated Pest Management on the Loess Plateau of Ministry of Agriculture and Rural Affairs, College of Plant Protection, Northwest A&F University, Yangling, China
- Qi Zhang
- State Key Laboratory of Crop Stress Biology for Arid Areas, Key Laboratory of Plant Protection Resources and Pest Management of Ministry of Education, Key Laboratory of Integrated Pest Management on the Loess Plateau of Ministry of Agriculture and Rural Affairs, College of Plant Protection, Northwest A&F University, Yangling, China
- Chen Luo
- State Key Laboratory of Crop Stress Biology for Arid Areas, Key Laboratory of Plant Protection Resources and Pest Management of Ministry of Education, Key Laboratory of Integrated Pest Management on the Loess Plateau of Ministry of Agriculture and Rural Affairs, College of Plant Protection, Northwest A&F University, Yangling, China
- Zuqing Hu
- State Key Laboratory of Crop Stress Biology for Arid Areas, Key Laboratory of Plant Protection Resources and Pest Management of Ministry of Education, Key Laboratory of Integrated Pest Management on the Loess Plateau of Ministry of Agriculture and Rural Affairs, College of Plant Protection, Northwest A&F University, Yangling, China
5
Zhou T, Zhan W, Xiong M. A series of methods incorporating deep learning and computer vision techniques in the study of fruit fly (Diptera: Tephritidae) regurgitation. Frontiers in Plant Science 2024; 14:1337467. [PMID: 38288408] [PMCID: PMC10822896] [DOI: 10.3389/fpls.2023.1337467]
Abstract
In this study, we explored the potential of fruit fly regurgitation as a window into complex behaviors, such as predation and defense mechanisms, with implications for species-specific control measures that can enhance fruit quality and yield. We leverage deep learning and computer vision technologies to propose three distinct methodologies that advance the recognition, extraction, and trajectory tracking of fruit fly regurgitation; these methods show promise for broader applications in insect behavioral studies. Our evaluations indicate that the I3D model achieved a top-1 accuracy of 96.3% in regurgitation recognition, a notable improvement over the C3D and X3D models. Segmentation of the regurgitated substance via a combined U-Net and CBAM framework attains an MIoU of 90.96%, outperforming standard network models. Furthermore, we used threshold segmentation and OpenCV to precisely quantify the regurgitated liquid, while the integration of the YOLOv5 and DeepSort algorithms provided 99.8% accuracy in fruit fly detection and tracking. The success of these methods suggests their efficacy for fruit fly regurgitation research and their potential as a comprehensive tool for interdisciplinary insect behavior analysis, leading to more efficient, non-destructive insect control strategies in agricultural settings.
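Quantifying the regurgitated liquid by threshold segmentation reduces, at its simplest, to counting foreground pixels after binarization. A toy sketch of that step (the paper uses OpenCV; this stdlib-only version and its threshold value are illustrative):

```python
def threshold_area(gray, thresh):
    """Binary-threshold a grayscale image (list of rows of intensities)
    and return the number of foreground pixels -- a proxy for the area
    of the segmented regurgitated liquid."""
    return sum(1 for row in gray for px in row if px >= thresh)
```

In practice the pixel count would then be converted to physical area using the camera's spatial calibration.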
Affiliation(s)
- Tongzhou Zhou
- Department of Computing, The Hong Kong Polytechnic University, Hong Kong, Hong Kong SAR, China
- Wei Zhan
- School of Computer Science, Yangtze University, Jingzhou, China
- Mengyuan Xiong
- School of Computer Science, Yangtze University, Jingzhou, China
6
Li X, Wang L, Miao H, Zhang S. Aphid Recognition and Counting Based on an Improved YOLOv5 Algorithm in a Climate Chamber Environment. Insects 2023; 14:839. [PMID: 37999038] [PMCID: PMC10671967] [DOI: 10.3390/insects14110839]
Abstract
Because of changes in light intensity, varying degrees of aphid aggregation, and the small scale of targets in the climate chamber environment, accurately identifying and counting aphids remains a challenge. In this paper, an improved CNN-based YOLOv5 aphid detection model is proposed for aphid recognition and counting. First, to reduce overfitting caused by insufficient data, the proposed YOLOv5 model uses an image enhancement method combining Mosaic and GridMask to expand the aphid dataset. Second, a convolutional block attention module (CBAM) is introduced into the backbone layer to improve recognition accuracy for small aphid targets. The feature fusion method of the bi-directional feature pyramid network (BiFPN) is then employed to enhance the YOLOv5 neck, further improving recognition accuracy and speed; in addition, a Transformer structure is introduced in front of the detection head to investigate the impact of aphid aggregation and light intensity on recognition accuracy. Experiments show that, by fusing the proposed methods, the model reaches 99.1% recognition accuracy and recall, 99.3% mAP@0.5, and an inference time of 9.4 ms, significantly better than other YOLO-series networks. Moreover, it is robust in practical recognition tasks and can serve as a reference for pest prevention and control in climate chambers.
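GridMask augmentation removes a regular grid of square regions from the training image. A simplified sketch under assumed parameters (the published GridMask also randomizes grid offsets and rotation, which are omitted here):

```python
def grid_mask(img, unit=4, ratio=0.5):
    """GridMask-style augmentation sketch: zero out a square block of side
    int(unit * ratio) in the top-left corner of every unit x unit cell,
    so the network cannot rely on any single local region."""
    d = int(unit * ratio)
    out = [row[:] for row in img]          # copy, leave the input intact
    for i in range(len(out)):
        for j in range(len(out[0])):
            if i % unit < d and j % unit < d:
                out[i][j] = 0
    return out
```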
Affiliation(s)
- Hong Miao
- College of Mechanical Engineering, Yangzhou University, Yangzhou 225127, China
7
Sun Y, Zhan W, Dong T, Guo Y, Liu H, Gui L, Zhang Z. Real-Time Recognition and Detection of Bactrocera minax (Diptera: Trypetidae) Grooming Behavior Using Body Region Localization and Improved C3D Network. Sensors (Basel, Switzerland) 2023; 23:6442. [PMID: 37514739] [PMCID: PMC10386511] [DOI: 10.3390/s23146442]
Abstract
Pest management has long been a critical aspect of crop protection. Insect behavior is of great research value as an important indicator for assessing insect characteristics, and insect behavior research increasingly relies on quantifying behavior; traditional manual observation and analysis can no longer meet the requirements of data volume and observation time. In this paper, we propose a method based on body-region localization combined with an improved 3D convolutional neural network to recognize six grooming behaviors of Bactrocera minax: head grooming, foreleg grooming, fore-mid leg grooming, mid-hind leg grooming, hind leg grooming, and wing grooming. The overall recognition accuracy reached 93.46%. We compared the detection model's results with manual observations; the average difference was about 12%, showing that the model approaches the level of manual observation. Additionally, recognition with this method takes only one-third of the time required for manual observation, making it suitable for real-time detection. Experimental data demonstrate that the method effectively eliminates interference from the walking behavior of Bactrocera minax, enabling efficient, automated detection of grooming behavior, and thus offers a convenient means of studying pest characteristics in crop protection.
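The comparison with manual observation ("the average difference was about 12%") is presumably a mean relative difference over the observed behavior quantities. A small helper in that spirit (the exact metric used by the authors is not specified here, so this is an assumed formulation):

```python
def mean_relative_difference(model_vals, manual_vals):
    """Average relative difference (as a fraction) between automated
    behavior measurements and the corresponding manual observations,
    taking the manual values as the reference."""
    diffs = [abs(m - g) / g for m, g in zip(model_vals, manual_vals)]
    return sum(diffs) / len(diffs)
```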
Affiliation(s)
- Yong Sun
- School of Computer Science, Yangtze University, Jingzhou 434023, China
- Jingzhou Yingtuo Technology Co., Ltd., Jingzhou 434023, China
- Wei Zhan
- School of Computer Science, Yangtze University, Jingzhou 434023, China
- Tianyu Dong
- School of Computer Science, Yangtze University, Jingzhou 434023, China
- Yuheng Guo
- School of Computer Science, Yangtze University, Jingzhou 434023, China
- Hu Liu
- School of Computer Science, Yangtze University, Jingzhou 434023, China
- Lianyou Gui
- College of Agriculture, Yangtze University, Jingzhou 434023, China
- Zhiliang Zhang
- School of Computer Science, Yangtze University, Jingzhou 434023, China
8
Shen J, Zhang L, Yang L, Xu H, Chen S, Ji J, Huang S, Liang H, Dong C, Lou X. Testing a Method Based on an Improved UNet and Skeleton Thinning Algorithm to Obtain Branch Phenotypes of Tall and Valuable Trees Using Abies beshanzuensis as the Research Sample. Plants (Basel, Switzerland) 2023; 12:2444. [PMID: 37447004] [DOI: 10.3390/plants12132444]
Abstract
Sudden changes in the morphological characteristics of trees are closely related to plant health, and automated phenotypic measurements can improve the efficiency of plant health monitoring, thus aiding the conservation of old and valuable trees. The irregular distribution of branches and the influence of the natural environment make it very difficult to monitor the status of branches in the field. To address branch phenotype monitoring of tall, valuable plants in the field environment, this paper proposes an improved UNet model that accurately extracts the trunk and branches. It also proposes an algorithm that measures branch length and inclination angle from the trunk and branches separated in the previous stage: the skeleton line of each branch is found via digital image morphological processing and the Zhang-Suen thinning algorithm, the number of skeleton pixels gives the branch length, and a straight line fitted using Euclidean distance yields each branch's inclination angle. These measurements make it possible to monitor changes in branch length and inclination angle and to determine whether branch breakage or external stress events have occurred. We evaluated the method on video images of Abies beshanzuensis; the experimental results show that the proposed algorithm outperforms other target segmentation algorithms, reaching 94.30% MIoU. The coefficient of determination (R2) is higher than 0.89 for both the branch length and the inclination angle. In summary, the algorithm proposed in this paper can effectively segment the branches of tall plants and measure their length and inclination angle in a field environment, providing an effective way to monitor the health of valuable plants.
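The branch measurements described above, pixel count for length and a fitted line for inclination, can be sketched for a single thinned skeleton as follows. This is a simplified least-squares fit, not the authors' implementation, and it assumes the branch is not vertical:

```python
import math

def branch_length_and_angle(points):
    """Given (x, y) pixel coordinates of one thinned branch skeleton,
    approximate branch length by the pixel count and inclination by the
    angle of a least-squares line fit, in degrees from the x-axis."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    slope = sxy / sxx  # assumes a non-vertical branch (sxx > 0)
    return n, math.degrees(math.atan(slope))
```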
Affiliation(s)
- Jiahui Shen
- College of Mathematics and Computer Science, Zhejiang A & F University, Hangzhou 311300, China
- Key Laboratory of State Forestry and Grassland Administration on Forestry Sensing Technology and Intelligent Equipment, Zhejiang A & F University, Hangzhou 311300, China
- Key Laboratory of Forestry Intelligent Monitoring and Information Technology Research of Zhejiang Province, Zhejiang A & F University, Hangzhou 311300, China
- Lihong Zhang
- Longquan Forestry Bureau, Longquan 323700, China
- Laibang Yang
- Hangzhou Ganzhi Technology Co., Ltd., Lin'an 311300, China
- Hao Xu
- Zhejiang Forestry Bureau, Hangzhou 310000, China
- Sheng Chen
- Center for Forest Resource Monitoring of Zhejiang Province, Hangzhou 310000, China
- Jingyong Ji
- Longquan Forestry Bureau, Longquan 323700, China
- Siqi Huang
- Longquan Urban Forestry Workstation, Longquan 323700, China
- Hao Liang
- College of Mathematics and Computer Science, Zhejiang A & F University, Hangzhou 311300, China
- Key Laboratory of State Forestry and Grassland Administration on Forestry Sensing Technology and Intelligent Equipment, Zhejiang A & F University, Hangzhou 311300, China
- Key Laboratory of Forestry Intelligent Monitoring and Information Technology Research of Zhejiang Province, Zhejiang A & F University, Hangzhou 311300, China
- Chen Dong
- College of Mathematics and Computer Science, Zhejiang A & F University, Hangzhou 311300, China
- Key Laboratory of State Forestry and Grassland Administration on Forestry Sensing Technology and Intelligent Equipment, Zhejiang A & F University, Hangzhou 311300, China
- Key Laboratory of Forestry Intelligent Monitoring and Information Technology Research of Zhejiang Province, Zhejiang A & F University, Hangzhou 311300, China
- Xiongwei Lou
- College of Mathematics and Computer Science, Zhejiang A & F University, Hangzhou 311300, China
- Key Laboratory of State Forestry and Grassland Administration on Forestry Sensing Technology and Intelligent Equipment, Zhejiang A & F University, Hangzhou 311300, China
- Key Laboratory of Forestry Intelligent Monitoring and Information Technology Research of Zhejiang Province, Zhejiang A & F University, Hangzhou 311300, China
9
Zhou X, Zou X, Tang W, Yan Z, Meng H, Luo X. Unstructured road extraction and roadside fruit recognition in grape orchards based on a synchronous detection algorithm. Frontiers in Plant Science 2023; 14:1103276. [PMID: 37332733] [PMCID: PMC10272741] [DOI: 10.3389/fpls.2023.1103276]
Abstract
Accurate road extraction and recognition of roadside fruit in complex orchard environments are essential prerequisites for robotic fruit picking and walking-behavior decisions. In this study, a novel algorithm was proposed for unstructured road extraction and synchronous roadside fruit recognition, with wine grapes and non-structured orchards as the research objects. First, a preprocessing method tailored to field orchards was proposed to reduce interference from adverse factors in the operating environment; it comprises four parts: interception of regions of interest, bilateral filtering, logarithmic space transformation, and image enhancement based on the MSRCR algorithm. Subsequently, analysis of the enhanced image enabled optimization of the gray factor, and a road-region extraction method based on dual-space fusion was proposed through color-channel enhancement and gray-factor optimization. Furthermore, a YOLO model suitable for grape cluster recognition in the wild was selected, and its parameters were optimized to enhance recognition of randomly distributed grapes. Finally, a fusion recognition framework was established in which the road extraction result is taken as input and the parameter-optimized YOLO model identifies roadside fruits, realizing synchronous road extraction and roadside fruit detection. Experimental results demonstrate that the proposed preprocessing reduces the impact of interfering factors in complex orchard environments and enhances the quality of road extraction. Using the optimized YOLOv7 model, the precision, recall, mAP, and F1-score for roadside fruit cluster detection were 88.9%, 89.7%, 93.4%, and 89.3%, respectively, all higher than those of the YOLOv5 model and more suitable for roadside grape recognition. Compared with the grape detection algorithm alone, the proposed synchronous algorithm increased the number of fruit identifications by 23.84% and the detection speed by 14.33%. This research enhances the perception ability of robots and provides solid support for behavioral decision systems.
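One part of the preprocessing pipeline, the logarithmic space transformation, is conventionally written s = c·log(1 + r), with c chosen to rescale the output to the 0-255 range. A sketch under that standard assumption (the paper's exact constants are not given here):

```python
import math

def log_transform(gray, c=None):
    """Logarithmic intensity transformation s = c * log(1 + r), used to
    lift detail in dark regions; the default c maps input 255 back to 255."""
    if c is None:
        c = 255.0 / math.log(256.0)
    return [[c * math.log(1.0 + px) for px in row] for row in gray]
```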
Affiliation(s)
- Xinzhao Zhou
- College of Mechanical and Electrical Engineering, Shihezi University, Shihezi, China
- Foshan-Zhongke Innovation Research Institute of Intelligent Agriculture, Foshan, China
- Xiangjun Zou
- Foshan-Zhongke Innovation Research Institute of Intelligent Agriculture, Foshan, China
- Foshan Sino-tech Industrial Technology Research Institute, Foshan, China
- Wei Tang
- Foshan-Zhongke Innovation Research Institute of Intelligent Agriculture, Foshan, China
- Zhiwei Yan
- Foshan-Zhongke Innovation Research Institute of Intelligent Agriculture, Foshan, China
- Hewei Meng
- College of Mechanical and Electrical Engineering, Shihezi University, Shihezi, China
- Xiwen Luo
- College of Mechanical and Electrical Engineering, Shihezi University, Shihezi, China
- College of Engineering, South China Agricultural University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Agricultural Artificial Intelligence (GDKL-AAI), Guangzhou, China
10
Huang Y, Luo Y, Cao Y, Lin X, Wei H, Wu M, Yang X, Zhao Z. Damage Detection of Unwashed Eggs through Video and Deep Learning. Foods 2023; 12:2179. [PMID: 37297424] [DOI: 10.3390/foods12112179]
Abstract
Broken eggs can be harmful to human health and are also unfavorable for transportation and production. This study proposes a video-based model for the real-time detection of broken unwashed eggs in dynamic scenes. A system capable of continuously rotating and translating eggs was designed to expose the entire surface of each egg. We improved YOLOv5 by adding CA to the backbone network and fusing BiFPN and GSConv into the neck, and trained the improved model on intact and broken eggs. To judge the category of eggs accurately while they are in motion, ByteTrack was used to track the eggs and assign an ID to each one; the detection results of different video frames from YOLOv5 were associated by ID, and the egg category was decided by the method of five consecutive frames. The experimental results show that, compared to the original YOLOv5, the improved model raises the precision of detecting broken eggs by 2.2%, recall by 4.4%, and mAP@0.5 by 4.1%. In field experiments, the improved YOLOv5 combined with ByteTrack achieved 96.4% accuracy in video detection of broken eggs. The video-based model can detect eggs that are constantly in motion, making it better suited to practical detection than a single-image model. This study also provides a reference for research on video-based non-destructive testing.
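The "five consecutive frames" rule for deciding a tracked egg's category can be sketched as a run-length vote over the per-frame labels of one track ID. This is a simplified reading of the rule, not the authors' code:

```python
def decide_by_consecutive_frames(frame_labels, needed=5):
    """Decide a tracked egg's class once the same label has been seen in
    `needed` consecutive frames of its track; return that label, or None
    if no label has yet run long enough."""
    run_label, run_len = None, 0
    for label in frame_labels:
        if label == run_label:
            run_len += 1
        else:
            run_label, run_len = label, 1
        if run_len >= needed:
            return run_label
    return None
```

Associating per-frame detections by track ID before voting is what lets a momentary misclassification in one frame be outvoted by its neighbors.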
Affiliation(s)
- Yuan Huang
- College of Engineering, South China Agricultural University, Guangzhou 510642, China
- Yangfan Luo
- College of Engineering, South China Agricultural University, Guangzhou 510642, China
- Yangyang Cao
- College of Engineering, South China Agricultural University, Guangzhou 510642, China
- Xu Lin
- College of Engineering, South China Agricultural University, Guangzhou 510642, China
- Hongfei Wei
- College of Engineering, South China Agricultural University, Guangzhou 510642, China
- Mengcheng Wu
- College of Engineering, South China Agricultural University, Guangzhou 510642, China
- Xiaonan Yang
- College of Engineering, South China Agricultural University, Guangzhou 510642, China
- Zuoxi Zhao
- College of Engineering, South China Agricultural University, Guangzhou 510642, China
- Key Laboratory of Key Technology on Agricultural Machine and Equipment, South China Agricultural University, Ministry of Education, Guangzhou 510642, China
11
Wang M, Fu B, Fan J, Wang Y, Zhang L, Xia C. Sweet potato leaf detection in a natural scene based on faster R-CNN with a visual attention mechanism and DIoU-NMS. Ecological Informatics 2022. [DOI: 10.1016/j.ecoinf.2022.101931]
12
Momeny M, Jahanbakhshi A, Neshat AA, Hadipour-Rokni R, Zhang YD, Ampatzidis Y. Detection of citrus black spot disease and ripeness level in orange fruit using learning-to-augment incorporated deep networks. Ecological Informatics 2022. [DOI: 10.1016/j.ecoinf.2022.101829]
13
Rookognise: Acoustic detection and identification of individual rooks in field recordings using multi-task neural networks. Ecological Informatics 2022. [DOI: 10.1016/j.ecoinf.2022.101818]