1
Pan Y, Yu X, Dong J, Zhao Y, Li S, Jin X. Classification of field wheat varieties based on a lightweight G-PPW-VGG11 model. Frontiers in Plant Science 2024; 15:1375245. PMID: 38831908; PMCID: PMC11145979; DOI: 10.3389/fpls.2024.1375245. Received 01/23/2024; accepted 04/15/2024.
Abstract
Introduction In agriculture, especially wheat cultivation, farmers often use multi-variety planting strategies to reduce monoculture-related harvest risks. However, the subtle morphological differences among wheat varieties make accurate discrimination technically challenging. Traditional variety classification methods, reliant on expert knowledge, are inefficient for modern intelligent agricultural management. Numerous existing classification models are computationally complex, memory-intensive, and difficult to deploy on mobile devices effectively. This study introduces G-PPW-VGG11, an innovative lightweight convolutional neural network model, to address these issues. Methods G-PPW-VGG11 ingeniously combines partial convolution (PConv) and partially mixed depthwise separable convolution (PMConv), reducing computational complexity and feature redundancy. Simultaneously, incorporating ECANet, an efficient channel attention mechanism, enables precise leaf information capture and effective background noise suppression. Additionally, G-PPW-VGG11 replaces traditional VGG11's fully connected layers with two pointwise convolutional layers and a global average pooling layer, significantly reducing memory footprint and enhancing nonlinear expressiveness and training efficiency. Results Rigorous testing showed G-PPW-VGG11's superior performance, with an impressive 93.52% classification accuracy and only 1.79MB memory usage. Compared to VGG11, G-PPW-VGG11 showed a 5.89% increase in accuracy, 35.44% faster inference, and a 99.64% reduction in memory usage. G-PPW-VGG11 also surpasses traditional lightweight networks in classification accuracy and inference speed. Notably, G-PPW-VGG11 was successfully deployed on Android and its performance evaluated in real-world settings. The results showed an 84.67% classification accuracy with an average time of 291.04ms per image. 
Discussion This validates the model's feasibility for practical agricultural wheat variety classification, establishing a foundation for intelligent management. For future research, the trained model and complete dataset are made publicly available.
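The classifier-head replacement described in the Methods (two pointwise convolutional layers plus global average pooling in place of VGG11's fully connected layers) can be sketched in plain NumPy. The channel widths and the 10-class output below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def pointwise_conv(x, w):
    """1x1 convolution: x is (C_in, H, W), w is (C_out, C_in)."""
    return np.tensordot(w, x, axes=([1], [0]))  # -> (C_out, H, W)

def gap_head(features, w1, w2):
    """Two pointwise conv layers + global average pooling, replacing FC layers."""
    x = np.maximum(pointwise_conv(features, w1), 0.0)  # 1x1 conv + ReLU
    x = pointwise_conv(x, w2)                          # 1x1 conv -> per-class maps
    return x.mean(axis=(1, 2))                         # GAP -> per-class scores

rng = np.random.default_rng(0)
feats = rng.standard_normal((512, 7, 7))      # hypothetical VGG feature map
w1 = rng.standard_normal((256, 512)) * 0.01   # assumed channel widths
w2 = rng.standard_normal((10, 256)) * 0.01    # assumed 10 output classes
scores = gap_head(feats, w1, w2)
print(scores.shape)  # (10,)
```

Because the head has no fixed-size flatten step, its parameter count is independent of the spatial size of the feature map, which is where the large memory saving over fully connected layers comes from.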
Affiliation(s)
- Yu Pan
- College of Mechanical and Electrical Engineering, Shihezi University, Shihezi, China
- Institute of Crop Sciences, Chinese Academy of Agricultural Sciences, Beijing, China
- Xinjiang Production and Construction Corps Key Laboratory of Modern Agricultural Machinery, Shihezi, China
- Engineering Research Center for Production Mechanization of Oasis Characteristic Cash Crop, Ministry of Education, Shihezi, China
- Xun Yu
- Institute of Crop Sciences, Chinese Academy of Agricultural Sciences, Beijing, China
- State Key Laboratory of Crop Gene Resources and Breeding, Chinese Academy of Agricultural Sciences, Beijing, China
- Jihua Dong
- College of Mechanical and Electrical Engineering, Shihezi University, Shihezi, China
- Institute of Crop Sciences, Chinese Academy of Agricultural Sciences, Beijing, China
- Engineering Research Center for Production Mechanization of Oasis Characteristic Cash Crop, Ministry of Education, Shihezi, China
- Yonghang Zhao
- College of Mechanical and Electrical Engineering, Shihezi University, Shihezi, China
- Xinjiang Production and Construction Corps Key Laboratory of Modern Agricultural Machinery, Shihezi, China
- Engineering Research Center for Production Mechanization of Oasis Characteristic Cash Crop, Ministry of Education, Shihezi, China
- Shuanming Li
- College of Mechanical and Electrical Engineering, Shihezi University, Shihezi, China
- Xinjiang Production and Construction Corps Key Laboratory of Modern Agricultural Machinery, Shihezi, China
- Engineering Research Center for Production Mechanization of Oasis Characteristic Cash Crop, Ministry of Education, Shihezi, China
- Xiuliang Jin
- Institute of Crop Sciences, Chinese Academy of Agricultural Sciences, Beijing, China
- State Key Laboratory of Crop Gene Resources and Breeding, Chinese Academy of Agricultural Sciences, Beijing, China
2
Peng J, Ouyang C, Peng H, Hu W, Wang Y, Jiang P. MultiFuseYOLO: Redefining Wine Grape Variety Recognition through Multisource Information Fusion. Sensors (Basel, Switzerland) 2024; 24:2953. PMID: 38733058; PMCID: PMC11086123; DOI: 10.3390/s24092953. Received 03/03/2024; revised 05/03/2024; accepted 05/04/2024.
Abstract
Based on current research on the wine grape variety recognition task, it has been found that traditional deep learning models relying on only a single feature (e.g., fruit or leaf) for classification can face great challenges, especially when there is a high degree of similarity between varieties. To effectively distinguish these similar varieties, this study proposes a multisource information fusion method centered on the SynthDiscrim algorithm, aiming to achieve more comprehensive and accurate wine grape variety recognition. First, this study optimizes and improves the YOLOv7 model and proposes a novel target detection and recognition model called WineYOLO-RAFusion, which significantly improves fruit localization precision and recognition compared with the traditional deep learning models YOLOv5, YOLOX, and YOLOv7. Second, building upon the WineYOLO-RAFusion model, this study incorporated the method of multisource information fusion into the model, ultimately forming the MultiFuseYOLO model. Experiments demonstrated that MultiFuseYOLO significantly outperformed other commonly used models in terms of precision, recall, and F1 score, reaching 0.854, 0.815, and 0.833, respectively. Moreover, the method improved the precision for the hard-to-distinguish Chardonnay and Sauvignon Blanc varieties, increasing precision from 0.512 to 0.813 for Chardonnay and from 0.533 to 0.775 for Sauvignon Blanc. In conclusion, the MultiFuseYOLO model offers a reliable and comprehensive solution to the task of wine grape variety identification, especially for distinguishing visually similar varieties and realizing high-precision identification.
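The abstract does not spell out how SynthDiscrim combines sources, so the snippet below is only a generic score-level (late) fusion sketch with made-up class probabilities. It illustrates how a second source (leaf features) can rescue two varieties that a fruit-only classifier finds nearly indistinguishable.

```python
import numpy as np

def late_fusion(fruit_probs, leaf_probs, w_fruit=0.6):
    """Weighted score-level fusion of two single-source classifiers;
    the 0.6 weight is an assumed hyperparameter, not from the paper."""
    fused = w_fruit * fruit_probs + (1.0 - w_fruit) * leaf_probs
    return fused / fused.sum()  # renormalize to a distribution

fruit = np.array([0.50, 0.45, 0.05])  # fruit model: classes 0 and 1 are close
leaf  = np.array([0.80, 0.15, 0.05])  # leaf model separates them better
fused = late_fusion(fruit, leaf)
print(fused.argmax())  # 0
```

With the fruit model alone the margin between the top two classes is only 0.05; after fusion it widens to roughly 0.29, which is the basic mechanism by which multisource fusion lifts precision on look-alike varieties.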
Affiliation(s)
- Jialiang Peng
- College of Information and Intelligence, Hunan Agricultural University, Changsha 410128, China
- Cheng Ouyang
- College of Information and Intelligence, Hunan Agricultural University, Changsha 410128, China
- Hao Peng
- College of Information and Intelligence, Hunan Agricultural University, Changsha 410128, China
- Wenwu Hu
- College of Mechanical and Electrical Engineering, Hunan Agricultural University, Changsha 410128, China
- Yi Wang
- College of Information and Intelligence, Hunan Agricultural University, Changsha 410128, China
- Ping Jiang
- College of Mechanical and Electrical Engineering, Hunan Agricultural University, Changsha 410128, China
3
Kim YT, Ha STT, In BC. Development of a longevity prediction model for cut roses using hyperspectral imaging and a convolutional neural network. Frontiers in Plant Science 2024; 14:1296473. PMID: 38273951; PMCID: PMC10809400; DOI: 10.3389/fpls.2023.1296473. Received 09/18/2023; accepted 12/19/2023.
Abstract
Introduction: Hyperspectral imaging (HSI) and deep learning techniques have been widely applied to predict postharvest quality and shelf life in multiple horticultural crops such as vegetables, mushrooms, and fruits; however, few studies show the application of these techniques to evaluate the quality issues of cut flowers. Therefore, in this study, we developed a non-contact and rapid detection technique for the emergence of gray mold disease (GMD) and the potential longevity of cut roses using deep learning techniques based on HSI data.
Methods: Cut flowers of two rose cultivars ('All For Love' and 'White Beauty') underwent either dry transport (which impairs cut flower hydration), ethylene exposure, or Botrytis cinerea inoculation, in order to identify the characteristic light wavelengths that are closely correlated with plant physiological states based on HSI. The flower bud of cut roses was selected for HSI measurement and for the development of a vase life prediction model utilizing YOLOv5.
Results and discussion: The HSI results revealed that spectral reflectance between 470 and 680 nm was strongly correlated with gray mold disease (GMD), whereas reflectance between 700 and 900 nm was strongly correlated with flower wilting or vase life. To develop a YOLOv5 prediction model for anticipating flower longevity, the vase life of cut roses was classed into two categories, over 5 d (+5D) and under 5 d (-5D), based on a flower quality grading standard. A total of 3000 images from HSI were forwarded to the YOLOv5 model for training and prediction of GMD and vase life of cut flowers. Validation of the prediction model using independent data confirmed its high predictive accuracy in evaluating the vase life of both 'All For Love' (r2 = 0.86) and 'White Beauty' (r2 = 0.83) cut flowers. The YOLOv5 model also accurately detected and classified GMD in the cut rose flowers based on the image data. Our results demonstrate that the combination of HSI and deep learning is a reliable method for detecting early GMD infection and evaluating the longevity of cut roses.
Affiliation(s)
- Byung-Chun In
- Department of Smart Horticultural Science, Andong National University, Andong, Republic of Korea
4
da Silva Ribeiro JE, dos Santos Coêlho E, de Oliveira AKS, Correia da Silva AG, de Araújo Rangel Lopes W, de Almeida Oliveira PH, Freire da Silva E, Barros Júnior AP, Maria da Silveira L. Artificial neural network approach for predicting the sesame (Sesamum indicum L.) leaf area: A non-destructive and accurate method. Heliyon 2023; 9:e17834. PMID: 37501953; PMCID: PMC10368775; DOI: 10.1016/j.heliyon.2023.e17834. Received 06/12/2023; revised 06/21/2023; accepted 06/28/2023.
Abstract
Estimating leaf area with a nondestructive method is paramount for successive evaluations in the same plant with precision and speed, without requiring high-cost equipment. Thus, the objective of this work was to construct models to estimate leaf area using artificial neural network (ANN) and regression models, and to compare which is the most effective for predicting leaf area in sesame. A total of 11,000 leaves of four sesame cultivars were collected. Then, the leaf length (L), leaf width (W), and actual leaf area (LA) were quantified. For the ANN model, leaf length and width were used as the input variables of the network, with hidden layers, and leaf area as the desired output parameter. For the linear regression models, leaf dimensions were considered independent variables, and the actual leaf area was the dependent variable. The criteria for choosing the best models were the lowest root mean squared error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE), and the highest coefficient of determination (R2). Among the linear regression models, the equation ŷ = 0.515 + 0.584·L·W was considered the most suitable for estimating the leaf area of sesame. In modeling with ANNs, the best results were found for the 2-3-1 model, with two input variables (L and W), three neurons in the hidden layer, and one output variable (LA). The ANN model was more accurate than the regression models, recording the lowest errors and highest R2 in the training phase (RMSE: 0.0040; MAE: 0.0027; MAPE: 0.0587; R2: 0.9834) and in the test phase (RMSE: 0.0106; MAE: 0.0029; MAPE: 0.0611; R2: 0.9828). Thus, the ANN method is the most suitable and accurate for predicting the leaf area of sesame.
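The selected regression equation, ŷ = 0.515 + 0.584·L·W, can be applied directly to field measurements; the leaf dimensions in the example are hypothetical values for illustration.

```python
def sesame_leaf_area(length_cm, width_cm):
    """Leaf area estimate from the reported best linear model:
    LA = 0.515 + 0.584 * L * W."""
    return 0.515 + 0.584 * length_cm * width_cm

# Hypothetical leaf: 8 cm long, 4 cm wide
area = sesame_leaf_area(8.0, 4.0)
print(round(area, 3))  # 19.203
```

This is the nondestructive workflow the paper describes: measure L and W with a ruler, never detach the leaf, and read the area off the fitted model.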
5
Chandel NS, Rajwade YA, Dubey K, Chandel AK, Subeesh A, Tiwari MK. Water Stress Identification of Winter Wheat Crop with State-of-the-Art AI Techniques and High-Resolution Thermal-RGB Imagery. Plants (Basel, Switzerland) 2022; 11:3344. PMID: 36501383; PMCID: PMC9741210; DOI: 10.3390/plants11233344. Received 10/08/2022; revised 11/25/2022; accepted 11/27/2022.
Abstract
Timely crop water stress detection can help precision irrigation management and minimize yield loss. A two-year study was conducted on non-invasive winter wheat water stress monitoring using state-of-the-art computer vision and thermal-RGB imagery inputs. Field treatment plots were irrigated using two irrigation systems (flood and sprinkler) at four rates (100, 75, 50, and 25% of crop evapotranspiration [ETc]). A total of 3200 images under different treatments were captured at critical growth stages, that is, 20, 35, 70, 95, and 108 days after sowing, using a custom-developed thermal-RGB imaging system. Crop and soil response measurements of canopy temperature (Tc), relative water content (RWC), soil moisture content (SMC), and relative humidity (RH) were significantly affected by the irrigation treatments, showing the lowest Tc (22.5 ± 2 °C) and highest RWC (90%) and SMC (25.7 ± 2.2%) for 100% ETc, and the highest Tc (28 ± 3 °C) and lowest RWC (74%) and SMC (20.5 ± 3.1%) for 25% ETc. The RGB and thermal imagery were then used as inputs to feature-extraction-based deep learning models (AlexNet, GoogLeNet, Inception V3, MobileNet V2, ResNet50), while RWC, SMC, Tc, and RH were the inputs to function-approximation models (artificial neural network (ANN), kernel nearest neighbor (KNN), logistic regression (LR), support vector machine (SVM), and long short-term memory (DL-LSTM)) to classify stressed/non-stressed crops. Among the feature-extraction-based models, ResNet50 outperformed the others, showing a discriminant accuracy of 96.9% with RGB and 98.4% with thermal imagery inputs. Overall, classification accuracy was higher for thermal imagery than for RGB imagery inputs. The DL-LSTM had the highest discriminant accuracy of 96.7% and the least error among the function-approximation models for classifying stress/non-stress.
The study suggests that computer vision coupled with thermal-RGB imagery can be instrumental in high-throughput mitigation and management of crop water stress.
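The function-approximation route described above maps the four crop/soil measurements (Tc, RWC, SMC, RH) to a stress class. As a minimal stand-in for the paper's ANN/KNN/LR/SVM/LSTM models, the sketch below uses a single logistic unit with made-up weights; the Tc, RWC, and SMC inputs echo the mean treatment values reported in the abstract, and the RH value is assumed.

```python
import numpy as np

def stress_probability(tc, rwc, smc, rh, w, b):
    """Logistic-regression stand-in for the function-approximation models:
    maps (Tc, RWC, SMC, RH) measurements to a water-stress probability."""
    z = np.dot(w, np.array([tc, rwc, smc, rh])) + b
    return 1.0 / (1.0 + np.exp(-z))

# Made-up weights: warmer canopy and drier leaf/soil push toward "stressed"
w = np.array([0.8, -0.2, -0.3, -0.01])
b = 0.0
p_full = stress_probability(22.5, 90.0, 25.7, 60.0, w, b)     # 100% ETc plot
p_deficit = stress_probability(28.0, 74.0, 20.5, 60.0, w, b)  # 25% ETc plot
print(p_deficit > p_full)  # True
```

In practice the weights would be learned from labeled plots; the point of the sketch is only the input/output shape of the function-approximation models.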
Affiliation(s)
- Narendra S. Chandel
- Agricultural Mechanization Division, ICAR—Central Institute of Agricultural Engineering, Bhopal 462038, MP, India
- Yogesh A. Rajwade
- Irrigation and Drainage Engineering Division, ICAR—Central Institute of Agricultural Engineering, Bhopal 462038, MP, India
- Kumkum Dubey
- Agricultural Mechanization Division, ICAR—Central Institute of Agricultural Engineering, Bhopal 462038, MP, India
- Abhilash K. Chandel
- Department of Biological Systems Engineering, Virginia Tech Tidewater AREC, Suffolk, VA 23437, USA
- Center for Advanced Innovation in Agriculture (CAIA), Virginia Tech, Blacksburg, VA 24061, USA
- A. Subeesh
- Agricultural Mechanization Division, ICAR—Central Institute of Agricultural Engineering, Bhopal 462038, MP, India
- Mukesh K. Tiwari
- College of Agricultural Engineering and Technology, Anand Agricultural University, Godhra 389001, GJ, India
6
Vishal MK, Saluja R, Aggrawal D, Banerjee B, Raju D, Kumar S, Chinnusamy V, Sahoo RN, Adinarayana J. Leaf Count Aided Novel Framework for Rice (Oryza sativa L.) Genotypes Discrimination in Phenomics: Leveraging Computer Vision and Deep Learning Applications. Plants (Basel, Switzerland) 2022; 11:2663. PMID: 36235529; PMCID: PMC9614605; DOI: 10.3390/plants11192663. Received 06/30/2022; revised 08/02/2022; accepted 08/26/2022.
Abstract
Drought is a detrimental factor to gaining higher yields in rice (Oryza sativa L.), especially amid the rising occurrence of drought across the globe. To combat this situation, it is essential to develop novel drought-resilient varieties. Therefore, screening of drought-adaptive genotypes is required with high precision and high throughput. In contemporary emerging science, high throughput plant phenotyping (HTPP) is a crucial technology that attempts to break the bottleneck of traditional phenotyping, in which screening significant genotypes is a tedious task and measuring various plant traits is prone to human error. Owing to the potential advantage of HTPP over traditional phenotyping, image-based traits, also known as i-traits, were used in our study to discriminate 110 genotypes grown for genome-wide association study experiments under controlled (well-watered) and drought-stress (limited water) conditions, in a phenomics experiment in a controlled environment using RGB images. Our proposed framework non-destructively estimated drought-adaptive plant traits from the images, such as the number of leaves, convex hull, plant-aspect ratio (plant spread), and similarly associated geometrical and morphological traits, for analyzing and discriminating genotypes. The results showed that a single trait, the number of leaves, can also be used for discriminating genotypes. This critical drought-adaptive trait was associated with plant size, architecture, and biomass. In this work, the number of leaves and other characteristics were estimated non-destructively from top-view images of the rice plant for each genotype. The estimation of the number of leaves for each rice plant was conducted with the deep learning model YOLO (You Only Look Once). The leaves were counted by detecting the corresponding visible leaf tips in the rice plant.
The detection accuracy was 86-92% for large plants with dense to moderate spread, and 98% for small plants with sparse spread. With this framework, susceptible genotypes (MTU1010, PUSA-1121, and similar genotypes) and drought-resistant genotypes (Heera, Anjali, Dular, and similar genotypes) were grouped into the respective drought-susceptible and drought-tolerant groups of the core set, based on the number of leaves and on leaf emergence during the peak drought-stress period. Moreover, it was found that the number of leaves was significantly associated with other pertinent morphological, physiological, and geometrical traits. The other geometrical traits were measured from the RGB images with the help of computer vision.
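The counting-by-detection idea above (leaf count = number of visible leaf tips the detector finds) reduces to thresholding detection confidences. The detection tuples below are hypothetical detector output, not data from the study.

```python
def count_leaves(detections, conf_threshold=0.5):
    """Count visible leaf tips from detector output: each detection is a
    (x, y, w, h, confidence) tuple; the leaf count is the number of
    detections at or above the confidence threshold."""
    return sum(1 for *box, conf in detections if conf >= conf_threshold)

# Hypothetical YOLO-style detections for one top-view rice plant image
dets = [(10, 12, 5, 5, 0.92), (40, 8, 6, 5, 0.81),
        (22, 30, 5, 6, 0.47), (55, 41, 4, 4, 0.66)]
print(count_leaves(dets))  # 3
```

The confidence threshold trades missed tips against double counting, which is presumably where the reported 86-98% accuracy spread between dense and sparse canopies comes from.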
Affiliation(s)
- Rohit Saluja
- CSE, Indian Institute of Technology Bombay, Mumbai 400076, India
- Indian Institute of Information Technology, Hyderabad 500032, India
- Biplab Banerjee
- CSRE, Indian Institute of Technology Bombay, Mumbai 400076, India
- Dhandapani Raju
- Indian Council of Agricultural Research—Indian Agricultural Research Institute, Pusa, New Delhi 110012, India
- Sudhir Kumar
- Indian Council of Agricultural Research—Indian Agricultural Research Institute, Pusa, New Delhi 110012, India
- Viswanathan Chinnusamy
- Indian Council of Agricultural Research—Indian Agricultural Research Institute, Pusa, New Delhi 110012, India
- Rabi Narayan Sahoo
- Indian Council of Agricultural Research—Indian Agricultural Research Institute, Pusa, New Delhi 110012, India
7
Liu KH, Yang MH, Huang ST, Lin C. Plant Species Classification Based on Hyperspectral Imaging via a Lightweight Convolutional Neural Network Model. Frontiers in Plant Science 2022; 13:855660. PMID: 35498669; PMCID: PMC9044035; DOI: 10.3389/fpls.2022.855660. Received 01/15/2022; accepted 03/01/2022.
Abstract
In recent years, many image-based approaches have been proposed to classify plant species. Most methods utilized red green blue (RGB) imaging materials and designed custom features to classify the plant images using machine learning algorithms. Those works primarily focused on analyzing single-leaf images instead of live-crown images. Without considering the additional features of the leaves' color and spatial pattern, they failed to handle cases that contained leaves similar in appearance, due to the limited spectral information of RGB imaging. To tackle this dilemma, this study proposes a novel framework that combines hyperspectral imaging (HSI) and deep learning techniques for plant image classification. We built a plant image dataset containing 1,500 images of 30 different plant species taken by a 470-900 nm hyperspectral camera and designed a lightweight convolutional neural network (CNN) model (LtCNN) to perform image classification. Several state-of-the-art CNN classifiers were chosen for comparison. The impact of using different band combinations as the network input was also investigated. Results show that using simulated RGB images achieves a kappa coefficient of nearly 0.90, while using the combination of 3-band RGB and 3-band near-infrared images improves it to 0.95. It is also found that the proposed LtCNN can obtain a satisfactory plant classification performance (kappa = 0.95) using critical spectral features of the green edge (591 nm), red-edge (682 nm), and near-infrared (762 nm) bands. This study also demonstrates the excellent adaptability of the LtCNN model in recognizing leaf features of plant live-crown images while using a relatively smaller number of training samples than complex CNN models such as AlexNet, GoogLeNet, and VGGNet.
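Assembling the critical-band input (591, 682, and 762 nm) from a hyperspectral cube is a nearest-wavelength lookup. The 5 nm sampling grid below is an assumption for illustration; the paper does not state the camera's spectral resolution.

```python
import numpy as np

def pick_bands(cube, wavelengths, targets):
    """Select the cube bands whose wavelengths are nearest to the targets (nm).
    cube is (H, W, n_bands); wavelengths is the per-band wavelength axis."""
    idx = [int(np.argmin(np.abs(wavelengths - t))) for t in targets]
    return cube[..., idx], idx

# Hypothetical 470-900 nm cube sampled every 5 nm
wl = np.arange(470, 901, 5, dtype=float)
cube = np.zeros((4, 4, wl.size))
subset, idx = pick_bands(cube, wl, [591, 682, 762])
print([float(wl[i]) for i in idx])  # [590.0, 680.0, 760.0]
```

The resulting 3-band slice plays the same role as the simulated RGB or near-infrared combinations compared in the study: a compact network input carrying the most discriminative spectral features.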
Affiliation(s)
- Keng-Hao Liu
- Department of Mechanical and Electro-Mechanical Engineering, National Sun Yat-sen University, Kaohsiung, Taiwan
- Meng-Hsien Yang
- Department of Mechanical and Electro-Mechanical Engineering, National Sun Yat-sen University, Kaohsiung, Taiwan
- Sheng-Ting Huang
- Department of Mechanical and Electro-Mechanical Engineering, National Sun Yat-sen University, Kaohsiung, Taiwan
- Chinsu Lin
- Department of Forestry and Natural Resources, National Chiayi University, Chiayi, Taiwan
8
Feng ZH, Wang LY, Yang ZQ, Zhang YY, Li X, Song L, He L, Duan JZ, Feng W. Hyperspectral Monitoring of Powdery Mildew Disease Severity in Wheat Based on Machine Learning. Frontiers in Plant Science 2022; 13:828454. PMID: 35386677; PMCID: PMC8977770; DOI: 10.3389/fpls.2022.828454. Received 12/03/2021; accepted 01/20/2022.
Abstract
Powdery mildew has a negative impact on wheat growth and restricts yield formation. Therefore, accurate monitoring of the disease is of great significance for the prevention and control of powdery mildew to protect world food security. The canopy spectral reflectance was obtained using a ground-based hyperspectrometer during the flowering and filling periods of wheat, and the Savitzky-Golay method was then used to smooth the measured spectral data, which served as the original reflectance (OR). Firstly, the OR was spectrally transformed using the mean centralization (MC), multivariate scattering correction (MSC), and standard normal variate transform (SNV) methods. Secondly, the feature bands of the above four spectral data sets were extracted through a combination of the competitive adaptive reweighted sampling (CARS) and successive projections algorithm (SPA) algorithms. Finally, partial least squares regression (PLSR), support vector regression (SVR), and random forest regression (RFR) were used to construct an optimal monitoring model for the wheat powdery mildew disease index (mean disease index, mDI). The results showed that, after Pearson correlation analysis, two-band optimization combinations, and comparison of machine learning modeling methods, the MC spectral data had the best comprehensive performance and was the better method for pretreating the disease spectral data. The transformed spectral data combined with the CARS-SPA algorithm extracted the characteristic bands more effectively: the number of screened bands was larger than the number extracted from the OR data, and the band positions were more evenly distributed. In the comparison of machine learning modeling methods, the RFR model performed best (coefficient of determination, R2 = 0.741-0.852), while the SVR and PLSR models performed similarly (R2 = 0.733-0.836).
Taken together, the estimation accuracy of the MC spectral transformation combined with the RFR model (MC-RFR) was the highest, with a model R2 of 0.849-0.852, a root mean square error (RMSE) of 2.084-2.177, and a mean absolute error (MAE) of 1.684-1.777. Compared with the OR combined with the RFR model (OR-RFR), R2 increased by 14.39%, and RMSE and MAE decreased by 23.9% and 27.87%, respectively. The monitoring accuracy at the flowering stage was also better than at the grain-filling stage, owing to the relative stability of the canopy structure at flowering. In summary, preprocessing the spectral data with MC (which does not change the shape of the spectral curve), extracting characteristic bands with the CARS and SPA algorithms, and modeling with RFR to enhance the synergy between multiple variables produce a model (MC-CARS-SPA-RFR) that better extracts the covariant relationship between the canopy spectrum and the disease, thereby improving the monitoring accuracy of wheat powdery mildew. The results of this study provide ideas and methods for realizing high-precision remote sensing monitoring of crop disease status.
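Mean centralization (MC), the best-performing pretreatment above, is commonly read in chemometrics as subtracting each wavelength's mean over all sample spectra, which removes a common offset without changing the shape of any spectral curve. A minimal sketch with toy values, assuming that reading:

```python
import numpy as np

def mean_center(spectra):
    """Mean centralization (MC): subtract the per-wavelength mean
    computed over all sample spectra. spectra is (n_samples, n_bands)."""
    return spectra - spectra.mean(axis=0, keepdims=True)

X = np.array([[0.2, 0.4, 0.6],
              [0.4, 0.6, 0.8]])  # 2 toy spectra x 3 bands
Xc = mean_center(X)
print(Xc)  # each band column now averages to zero
```

Because only a constant per-band offset is removed, each centered spectrum keeps the exact shape of the original curve, which matches the property the abstract emphasizes about MC.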
Affiliation(s)
- Zi-Heng Feng
- State Key Laboratory of Wheat and Maize Crop Science, CIMMYT-China Wheat and Maize Joint Research Center, Henan Agricultural University, Zhengzhou, China
- Information and Management Science College, Henan Agricultural University, Zhengzhou, China
- Lu-Yuan Wang
- State Key Laboratory of Wheat and Maize Crop Science, CIMMYT-China Wheat and Maize Joint Research Center, Henan Agricultural University, Zhengzhou, China
- Zhe-Qing Yang
- State Key Laboratory of Wheat and Maize Crop Science, CIMMYT-China Wheat and Maize Joint Research Center, Henan Agricultural University, Zhengzhou, China
- Yan-Yan Zhang
- State Key Laboratory of Wheat and Maize Crop Science, CIMMYT-China Wheat and Maize Joint Research Center, Henan Agricultural University, Zhengzhou, China
- Xiao Li
- College of Science, Henan Agricultural University, Zhengzhou, China
- Li Song
- State Key Laboratory of Wheat and Maize Crop Science, CIMMYT-China Wheat and Maize Joint Research Center, Henan Agricultural University, Zhengzhou, China
- Li He
- State Key Laboratory of Wheat and Maize Crop Science, CIMMYT-China Wheat and Maize Joint Research Center, Henan Agricultural University, Zhengzhou, China
- Jian-Zhao Duan
- State Key Laboratory of Wheat and Maize Crop Science, CIMMYT-China Wheat and Maize Joint Research Center, Henan Agricultural University, Zhengzhou, China
- Wei Feng
- State Key Laboratory of Wheat and Maize Crop Science, CIMMYT-China Wheat and Maize Joint Research Center, Henan Agricultural University, Zhengzhou, China
9
Jung DH, Kim JD, Kim HY, Lee TS, Kim HS, Park SH. A Hyperspectral Data 3D Convolutional Neural Network Classification Model for Diagnosis of Gray Mold Disease in Strawberry Leaves. Frontiers in Plant Science 2022; 13:837020. PMID: 35360322; PMCID: PMC8963811; DOI: 10.3389/fpls.2022.837020. Received 12/16/2021; accepted 02/21/2022.
Abstract
Gray mold disease is one of the most frequently occurring diseases in strawberries. Given that it spreads rapidly, rapid countermeasures are necessary through the development of early diagnosis technology. In this study, hyperspectral images were taken of strawberry leaves that had been inoculated with the gray mold fungus; these images were classified into healthy and infected areas as seen by the naked eye. Areas where the infection spread after time had elapsed were classified as the asymptomatic class. Square regions of interest (ROIs) with a dimensionality of 16 × 16 × 150 were acquired as training data, including infected, asymptomatic, and healthy areas. Then, 2D and 3D data were used in the development of a convolutional neural network (CNN) classification model. An effective wavelength analysis was performed before the development of the CNN model. The classification model developed with 2D training data showed a classification accuracy of 0.74, while the model that used 3D data achieved an accuracy of 0.84, indicating that the 3D data produced slightly better performance. When classifying between healthy and asymptomatic areas for early diagnosis, the two CNN models showed a classification accuracy of 0.73 on the asymptomatic areas. To increase the accuracy of classifying asymptomatic areas, a model was developed by smoothing the spectral data and appending its first and second derivatives; the results showed that this increased the asymptomatic classification accuracy to 0.77 and reduced the misclassification of asymptomatic areas as healthy areas. Based on these results, it is concluded that the proposed 3D CNN classification model can be used as an early diagnosis sensor for gray mold disease, since it produces immediate on-site analysis results from hyperspectral images of leaves.
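The smoothing-plus-derivative expansion that improved asymptomatic classification can be sketched as below. A moving average stands in for whatever smoothing the authors used, and stacking the derivatives as extra channels is an assumed layout; the 150-band length matches the ROI dimensionality above.

```python
import numpy as np

def expand_spectrum(spectrum, window=5):
    """Smooth a 1-D spectrum (moving average as a simple stand-in) and
    append its first and second derivatives as extra feature channels."""
    kernel = np.ones(window) / window
    smooth = np.convolve(spectrum, kernel, mode="same")
    d1 = np.gradient(smooth)   # first derivative: slope of the curve
    d2 = np.gradient(d1)       # second derivative: curvature
    return np.stack([smooth, d1, d2])  # (3, n_bands)

spec = np.sin(np.linspace(0, 3, 150))  # toy 150-band spectrum
feats = expand_spectrum(spec)
print(feats.shape)  # (3, 150)
```

Derivative features emphasize where the reflectance curve bends rather than its absolute level, which is plausibly why they help separate asymptomatic tissue whose raw spectrum still looks healthy.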
10
Dai F, Wang F, Yang D, Lin S, Chen X, Lan Y, Deng X. Detection Method of Citrus Psyllids With Field High-Definition Camera Based on Improved Cascade Region-Based Convolution Neural Networks. Frontiers in Plant Science 2022; 12:816272. PMID: 35140732; PMCID: PMC8819152; DOI: 10.3389/fpls.2021.816272. Received 11/16/2021; accepted 12/06/2021.
Abstract
Citrus psyllid is the only insect vector of citrus Huanglongbing (HLB), which is the most destructive disease in the citrus industry. There is no effective treatment for HLB, so detecting citrus psyllids as soon as possible is the key prevention measure for citrus HLB. It is time-consuming and laborious to search for citrus psyllids through artificial patrol, which is inconvenient for the management of citrus orchards. With the development of artificial intelligence technology, a computer vision method instead of the artificial patrol can be adopted for orchard management to reduce the cost and time. The citrus psyllid is small in shape and gray in color, similar to the stem, stump, and withered part of the leaves, leading to difficulty for the traditional target detection algorithm to achieve a good recognition effect. In this work, in order to make the model have good generalization ability under outdoor light condition, a high-definition camera to collect data set of citrus psyllids and citrus fruit flies under natural light condition was used, a method to increase the number of small target pests in citrus based on semantic segmentation algorithm was proposed, and the cascade region-based convolution neural networks (R-CNN) (convolutional neural network) algorithm was improved to enhance the recognition effect of small target pests using multiscale training, combining CBAM attention mechanism with high-resolution feature retention network high-resoultion network (HRNet) as feature extraction network, adding sawtooth atrous spatial pyramid pooling (ASPP) structure to fully extract high-resolution features from different scales, and adding feature pyramid networks (FPN) structure for feature fusion at different scales. To mine difficult samples more deeply, an online hard sample mining strategy was adopted in the process of model sampling. 
The results show that the trained, improved cascade R-CNN algorithm achieves an average recognition accuracy of 88.78% for citrus psyllids. Compared with VGG16, ResNet50, and other common networks, the improved small-target recognition algorithm obtains the highest recognition performance. Experimental results also show that the improved cascade R-CNN algorithm performs well not only in citrus psyllid identification but also on other small targets such as citrus fruit flies, which makes it feasible to detect small target pests with a field high-definition camera.
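The online hard example mining strategy mentioned above amounts to ranking region proposals by their loss and training only on the hardest ones. A minimal sketch, assuming per-proposal losses are already computed (the `keep_ratio` parameter is an illustrative assumption, not a value from the paper):

```python
def select_hard_examples(losses, keep_ratio=0.25):
    """Online hard example mining: keep only the highest-loss proposals
    so gradient updates focus on the samples the model finds difficult."""
    k = max(1, int(len(losses) * keep_ratio))
    # Rank proposal indices by loss, descending, and keep the top k.
    ranked = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    return sorted(ranked[:k])
```

In a detector, the selected indices would then gate which proposals contribute to the backward pass for that batch.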
Affiliation(s)
- Fen Dai
- College of Electronic Engineering, College of Artificial Intelligence, South China Agricultural University, Guangzhou, China
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticide Spraying Technology, Guangzhou, China
- Guangdong Laboratory for Lingnan Modern Agriculture, Guangzhou, China
- Guangdong Engineering Technology Research Center of Smart Agriculture, Guangzhou, China
- Fengcheng Wang
- College of Electronic Engineering, College of Artificial Intelligence, South China Agricultural University, Guangzhou, China
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticide Spraying Technology, Guangzhou, China
- Dongzi Yang
- College of Electronic Engineering, College of Artificial Intelligence, South China Agricultural University, Guangzhou, China
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticide Spraying Technology, Guangzhou, China
- Shaoming Lin
- College of Electronic Engineering, College of Artificial Intelligence, South China Agricultural University, Guangzhou, China
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticide Spraying Technology, Guangzhou, China
- Xin Chen
- College of Electronic Engineering, College of Artificial Intelligence, South China Agricultural University, Guangzhou, China
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticide Spraying Technology, Guangzhou, China
- Guangdong Laboratory for Lingnan Modern Agriculture, Guangzhou, China
- Guangdong Engineering Technology Research Center of Smart Agriculture, Guangzhou, China
- Yubin Lan
- College of Electronic Engineering, College of Artificial Intelligence, South China Agricultural University, Guangzhou, China
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticide Spraying Technology, Guangzhou, China
- Guangdong Laboratory for Lingnan Modern Agriculture, Guangzhou, China
- Guangdong Engineering Technology Research Center of Smart Agriculture, Guangzhou, China
- Xiaoling Deng
- College of Electronic Engineering, College of Artificial Intelligence, South China Agricultural University, Guangzhou, China
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticide Spraying Technology, Guangzhou, China
- Guangdong Laboratory for Lingnan Modern Agriculture, Guangzhou, China
- Guangdong Engineering Technology Research Center of Smart Agriculture, Guangzhou, China
|
11
|
Yang D, Wang F, Hu Y, Lan Y, Deng X. Citrus Huanglongbing Detection Based on Multi-Modal Feature Fusion Learning. FRONTIERS IN PLANT SCIENCE 2021; 12:809506. [PMID: 35027917 PMCID: PMC8751206 DOI: 10.3389/fpls.2021.809506] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Accepted: 12/06/2021] [Indexed: 06/14/2023]
Abstract
Citrus Huanglongbing (HLB), also named citrus greening disease, occurs worldwide and is known as the cancer of citrus because it has no effective treatment. The symptoms of HLB are similar to those of nutritional deficiency or other diseases, so methods based on single-source information, such as RGB images or hyperspectral data alone, cannot achieve strong detection performance. In this study, a multi-modal feature fusion network, combining an RGB image network and a hyperspectral band extraction network, was proposed to recognize HLB among four categories (HLB, suspected HLB, Zn-deficient, and healthy). This manuscript introduces three contributions: a dimension-reduction scheme for hyperspectral data based on a soft attention mechanism, a feature fusion proposal based on a bilinear fusion method, and auxiliary classifiers to extract more useful information. The multi-modal feature fusion network can effectively classify the above four types of citrus leaves and outperforms single-modal classifiers. In experiments, the highest recognition accuracy of the multi-modal network was 97.89% even though the amount of data was modest (1,325 images of the four aforementioned types and 1,325 pieces of hyperspectral data), while the single-modal network using only RGB images achieved 87.98% and the single-modal network using only hyperspectral information achieved 89%. The results show that the proposed multi-modal network, implementing the concept of multi-source information fusion, provides a better way to detect citrus HLB and citrus deficiency.
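The two ingredients named in the abstract, soft attention for band selection and bilinear fusion of the two modality features, can be illustrated in miniature. This is a hedged sketch of the general techniques, not the paper's implementation; the scoring of bands and the feature dimensions are assumptions:

```python
import math

def soft_attention_weights(band_scores):
    """Softmax over per-band scores: a soft attention weighting that can
    drive dimension reduction by emphasizing informative spectral bands."""
    m = max(band_scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in band_scores]
    total = sum(exps)
    return [e / total for e in exps]

def bilinear_fusion(rgb_feat, spectral_feat):
    """Bilinear fusion: flattened outer product of the two modality
    feature vectors, capturing pairwise cross-modal interactions."""
    return [a * b for a in rgb_feat for b in spectral_feat]
```

In a full network these operations would act on learned embeddings, with the fused vector feeding the final classifier.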
Affiliation(s)
- Dongzi Yang
- College of Electronic Engineering, College of Artificial Intelligence, South China Agricultural University, Guangzhou, China
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticide Spraying Technology, Guangzhou, China
- Fengcheng Wang
- College of Electronic Engineering, College of Artificial Intelligence, South China Agricultural University, Guangzhou, China
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticide Spraying Technology, Guangzhou, China
- Yuqi Hu
- College of Electronic Engineering, College of Artificial Intelligence, South China Agricultural University, Guangzhou, China
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticide Spraying Technology, Guangzhou, China
- Yubin Lan
- College of Electronic Engineering, College of Artificial Intelligence, South China Agricultural University, Guangzhou, China
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticide Spraying Technology, Guangzhou, China
- Guangdong Laboratory for Lingnan Modern Agriculture, Guangzhou, China
- Guangdong Engineering Technology Research Center of Smart Agriculture, Guangzhou, China
- Xiaoling Deng
- College of Electronic Engineering, College of Artificial Intelligence, South China Agricultural University, Guangzhou, China
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticide Spraying Technology, Guangzhou, China
- Guangdong Laboratory for Lingnan Modern Agriculture, Guangzhou, China
- Guangdong Engineering Technology Research Center of Smart Agriculture, Guangzhou, China
|
12
|
Wöber W, Mehnen L, Sykacek P, Meimberg H. Investigating Explanatory Factors of Machine Learning Models for Plant Classification. PLANTS (BASEL, SWITZERLAND) 2021; 10:plants10122674. [PMID: 34961145 PMCID: PMC8708324 DOI: 10.3390/plants10122674] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/22/2021] [Revised: 11/24/2021] [Accepted: 12/01/2021] [Indexed: 06/12/2023]
Abstract
Recent progress in machine learning and deep learning has enabled plant and crop detection through systematic inspection of leaf shapes and other morphological characters for identification systems in precision farming. However, the models used for this approach tend to become black boxes, in the sense that it is difficult to trace the characters on which a classification is based. Their interpretability is therefore limited, and the explanatory factors may not correspond to reasonable visible characters. We investigate the explanatory factors of recent machine learning and deep learning models for plant classification tasks. Based on a Daucus carota and a Beta vulgaris image dataset, we implement plant classification models and compare them by predictive performance as well as explainability. As a default model for comparison, we implemented a feed-forward convolutional neural network. To evaluate performance, we also trained an unsupervised Bayesian Gaussian process latent variable model as well as a convolutional autoencoder for feature extraction, relying on a support vector machine for classification. The explanatory factors of all models were extracted and analyzed. The experiments show that the feed-forward convolutional neural network (98.24% and 96.10% mean accuracy) outperforms the Bayesian Gaussian process latent variable pipeline (92.08% and 94.31% mean accuracy) as well as the convolutional autoencoder pipeline (92.38% and 93.28% mean accuracy) in terms of classification accuracy, although the difference is not significant for the Beta vulgaris images. Additionally, we found that the neural network used biologically uninterpretable image regions for the plant classification task. In contrast, the unsupervised learning models rely on explainable visual characters. We conclude that supervised convolutional neural networks must be used carefullyly to ensure biological interpretability. We recommend unsupervised machine learning, careful feature investigation, and statistical feature analysis for biological applications.
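The two-stage pipeline the authors compare, unsupervised feature extraction followed by a separate classifier, can be sketched with much simpler stand-ins: power-iteration PCA in place of the GPLVM or autoencoder, and a nearest-class-mean rule in place of the SVM. Everything here is illustrative, assuming 2-D numeric inputs; it is not the paper's implementation:

```python
def top_component(X, iters=100):
    """Leading principal component via power iteration: an unsupervised
    feature extractor standing in for the GPLVM / autoencoder stage."""
    n, d = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - mu[j] for j in range(d)] for row in X]
    v = [1.0] * d
    for _ in range(iters):
        # One step of power iteration: v <- normalize((X^T X) v).
        dots = [sum(xi * vi for xi, vi in zip(row, v)) for row in Xc]
        w = [sum(Xc[i][j] * dots[i] for i in range(n)) for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mu, v

def project(row, mu, v):
    """1-D feature: centered projection onto the leading component."""
    return sum((row[j] - mu[j]) * v[j] for j in range(len(row)))

def nearest_mean_classifier(features, labels):
    """Per-class mean of the extracted features (SVM stand-in);
    returns a function that labels a new 1-D feature."""
    sums, counts = {}, {}
    for f, lab in zip(features, labels):
        sums[lab] = sums.get(lab, 0.0) + f
        counts[lab] = counts.get(lab, 0) + 1
    means = {lab: sums[lab] / counts[lab] for lab in sums}
    return lambda f: min(means, key=lambda lab: abs(f - means[lab]))
```

A key point of the paper survives even in this toy: the extracted feature (here the principal axis) can be inspected directly, unlike the internal activations of an end-to-end network.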
Affiliation(s)
- Wilfried Wöber
- Department of Integrative Biology and Biodiversity Research, Institute of Integrative Conservation Research, University of Natural Resources and Life Sciences, Gregor Mendel Str. 33, 1080 Vienna, Austria
- Department Industrial Engineering, University of Applied Sciences Technikum Wien, Höchstädtplatz 6, 1200 Vienna, Austria
- Lars Mehnen
- Department Computer Science, University of Applied Sciences Technikum Wien, Höchstädtplatz 6, 1200 Vienna, Austria
- Peter Sykacek
- Department of Biotechnology, Institute of Computational Biology, University of Natural Resources and Life Sciences, Muthgasse 18, 1190 Vienna, Austria
- Harald Meimberg
- Department of Integrative Biology and Biodiversity Research, Institute of Integrative Conservation Research, University of Natural Resources and Life Sciences, Gregor Mendel Str. 33, 1080 Vienna, Austria
|
13
|
DBA_SSD: A Novel End-to-End Object Detection Algorithm Applied to Plant Disease Detection. INFORMATION 2021. [DOI: 10.3390/info12110474] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/11/2023] Open
Abstract
In response to the difficulty of plant leaf disease detection and classification, this study proposes a novel plant leaf disease detection method called deep block attention SSD (DBA_SSD) for disease identification and disease-degree classification of plant leaves. We propose three plant leaf detection methods: squeeze-and-excitation SSD (Se_SSD), deep block SSD (DB_SSD), and DBA_SSD. Se_SSD fuses the SSD feature extraction network with a channel attention mechanism, DB_SSD improves the VGG feature extraction network, and DBA_SSD fuses the improved VGG network with the channel attention mechanism. To reduce training time and accelerate the training process, the convolutional layers of the VGG model pretrained on the ImageNet dataset are transferred to this model, and the collected plant leaf disease image dataset is randomly divided into training, validation, and test sets in the ratio of 8:1:1. We chose the PlantVillage dataset after careful consideration because it contains images relevant to the domain of interest. This dataset consists of images of 14 plants, including apples, tomatoes, strawberries, peppers, and potatoes, as well as the leaves of other plants. In addition, data augmentation methods, such as histogram equalization and horizontal flipping, were used to expand the image data. The performance of the three improved algorithms is compared and analyzed in the same environment against the classical target detection algorithms YOLOv4, YOLOv3, Faster R-CNN, and YOLOv4-tiny. Experiments show that DBA_SSD outperforms the other two improved algorithms, and its performance in the comparative analysis is superior to the other target detection algorithms.
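The channel attention fused into Se_SSD and DBA_SSD follows the squeeze-and-excitation pattern: globally pool each channel, pass the pooled vector through two small fully connected layers, and rescale every channel by the resulting sigmoid gate. A minimal pure-Python sketch on a (C, H, W) feature map; the weight matrices `w1` and `w2` are caller-supplied placeholders, and the shapes are illustrative rather than the paper's:

```python
import math

def squeeze_excite(feature_map, w1, w2):
    """Squeeze-and-excitation channel attention: global average pooling
    per channel, FC -> ReLU -> FC -> sigmoid, then channel-wise rescaling."""
    # Squeeze: global average pool each (H, W) channel to one scalar.
    pooled = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
              for ch in feature_map]
    # Excitation: two small fully connected layers produce per-channel gates.
    hidden = [max(0.0, sum(w * p for w, p in zip(row, pooled))) for row in w1]
    gates = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
             for row in w2]
    # Rescale: multiply every channel by its gate in (0, 1).
    return [[[v * g for v in row] for row in ch]
            for ch, g in zip(feature_map, gates)]
```

With zero weights in the second layer every gate is sigmoid(0) = 0.5, which makes the block easy to sanity-check; trained weights instead learn which channels to amplify or suppress.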
|
14
|
Meta-Learning for Few-Shot Plant Disease Detection. Foods 2021; 10:foods10102441. [PMID: 34681490 PMCID: PMC8536056 DOI: 10.3390/foods10102441] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2021] [Revised: 10/11/2021] [Accepted: 10/12/2021] [Indexed: 11/28/2022] Open
Abstract
Plant diseases can harm crop growth, and crop production has a deep impact on the food supply. Although existing works adopt Convolutional Neural Networks (CNNs) to detect plant diseases such as Apple Scab and Squash Powdery Mildew, those methods have limitations because they rely on a large amount of manually labeled data. Collecting enough labeled data is often impractical because plant pathogens are variable and farm environments make data collection difficult. Methods based on deep learning suffer from low accuracy and confidence when facing few-shot samples. In this paper, we propose local feature matching conditional neural adaptive processes (LFM-CNAPS), a meta-learning method that aims at detecting plant diseases of unseen categories with only a few annotated examples and visualizes the input regions that are 'important' for predictions. To train our network, we contribute the Miniplantdisease-Dataset, which contains 26 plant species and 60 plant diseases. Comprehensive experiments demonstrate that the proposed LFM-CNAPS method outperforms existing methods.
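The few-shot setting described above, classifying unseen disease categories from only a few annotated examples, is commonly illustrated with a nearest-prototype rule: average the embeddings of the few labeled support images per class, then assign each query to the closest class mean. This is a standard few-shot baseline sketch, not the authors' CNAPS-based adaptation; embeddings and class names are illustrative:

```python
def class_prototypes(support_embeddings, support_labels):
    """Mean embedding per class from the few labeled support examples."""
    grouped = {}
    for emb, lab in zip(support_embeddings, support_labels):
        grouped.setdefault(lab, []).append(emb)
    return {lab: [sum(dim) / len(embs) for dim in zip(*embs)]
            for lab, embs in grouped.items()}

def classify_query(query_embedding, prototypes):
    """Assign the query to the class with the nearest prototype
    (squared Euclidean distance)."""
    def sq_dist(proto):
        return sum((q - p) ** 2 for q, p in zip(query_embedding, proto))
    return min(prototypes, key=lambda lab: sq_dist(prototypes[lab]))
```

In a real few-shot pipeline the embeddings would come from a feature extractor trained across many episodes, so that new disease categories separate well in embedding space.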
|