1
Li Y, Yuan N, Luo S, Yang K, Fang S, Peng Y, Gong Y. Abundance considerations for modeling yield of rapeseed at the flowering stage. Front Plant Sci 2023; 14:1188216. PMID: 37575912; PMCID: PMC10420083; DOI: 10.3389/fpls.2023.1188216.
Abstract
Introduction: To stabilize the edible oil market, it is necessary to determine the oil yield in advance, so accurate and fast technology for estimating rapeseed yield is of great significance in agricultural production. Because rapeseed has a long flowering period and petal colors that clearly distinguish it from other crops, the flowering period deserves careful consideration in crop classification and yield estimation. Methods: A field experiment was conducted to obtain unmanned aerial vehicle (UAV) multispectral images. Field measurements consisted of the reflectance of flowers, leaves, and soils at the flowering stage and rapeseed yield at physiological maturity. Moreover, GF-1 and Sentinel-2 satellite images were collected to compare the applicability of the yield estimation methods. The abundances of different rapeseed organs were extracted by spectral mixture analysis (SMA) and multiplied by vegetation indices (VIs) to estimate yield. Results: At the UAV scale, the product of VIs and leaf abundance (AbdLF) was closely related to rapeseed yield and outperformed the VI-only models for yield estimation, with coefficients of determination (R2) above 0.78. The yield estimation models using the products of the normalized difference yellowness index (NDYI) and enhanced vegetation index (EVI) with AbdLF had the highest accuracy, with coefficients of variation (CVs) below 10%. At the satellite scale, most estimation models based on the product of VIs and AbdLF also improved on the VI-only models. The models using the products of AbdLF with the renormalized difference VI (RDVI) and EVI (RDVI×AbdLF and EVI×AbdLF) showed steady improvement, with CVs below 13.1%.
Furthermore, the yield estimation models using the products of AbdLF with the normalized difference VI (NDVI), visible atmospherically resistant index (VARI), RDVI, and EVI performed consistently at both UAV and satellite scales. Discussion: The results showed that incorporating SMA can mitigate the limitations of using only VIs to retrieve rapeseed yield at the flowering stage. Our results indicate that the abundance of rapeseed leaves can be a potential indicator for yield prediction during the flowering stage.
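The abundance-times-VI idea above can be sketched in a few lines of linear spectral mixture analysis. The endmember spectra below are invented for illustration, not the paper's measurements:

```python
import numpy as np

# Hypothetical endmember reflectance for four bands
# (rows: green, red, red edge, NIR; columns: flower, leaf, soil).
E = np.array([
    [0.35, 0.10, 0.05],
    [0.30, 0.08, 0.10],
    [0.40, 0.30, 0.15],
    [0.45, 0.50, 0.20],
])

def unmix(pixel):
    """Linear SMA: least-squares abundances, clipped and normalized to sum to 1."""
    a, *_ = np.linalg.lstsq(E, pixel, rcond=None)
    a = np.clip(a, 0.0, None)
    return a / a.sum()

def ndvi(pixel):
    red, nir = pixel[1], pixel[3]
    return (nir - red) / (nir + red)

# A mixed pixel that is 20% flower, 60% leaf, 20% soil.
pixel = E @ np.array([0.2, 0.6, 0.2])
abd = unmix(pixel)                 # recovered [flower, leaf, soil] abundances
abd_lf = abd[1]                    # leaf abundance (AbdLF)
predictor = ndvi(pixel) * abd_lf   # a VI x AbdLF yield predictor
```

In the study, such predictors are regressed against measured plot yields; in practice a constrained unmixing (e.g. non-negative least squares) would replace the simple clip-and-normalize shown here.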
Affiliation(s)
- Yan Gong
- School of Remote Sensing and Information Engineering, Wuhan University, Wuhan, China
2
Li D, Sun X, Jia Y, Yao Z, Lin P, Chen Y, Zhou H, Zhou Z, Wu K, Shi L, Li J. A longan yield estimation approach based on UAV images and deep learning. Front Plant Sci 2023; 14:1132909. PMID: 36950357; PMCID: PMC10025382; DOI: 10.3389/fpls.2023.1132909.
Abstract
Longan yield estimation is an important practice before the longan harvest. Statistical longan yield data provide an important reference for market pricing and for improving harvest efficiency, and can directly determine the economic benefits of longan orchards. At present, compiling longan yield statistics requires high labor costs. For the task of longan yield estimation, this study combined deep learning and regression analysis to propose a method for calculating longan yield in complex natural environments. First, a UAV was used to collect video images of the longan canopy at the mature stage. Second, the CF-YD model and SF-YD model were constructed to identify Cluster_Fruits and Single_Fruits, respectively, automatically counting the targets directly from images. Third, based on sample data collected from real orchards, a regression analysis was carried out between the target quantities detected by the models and the true target quantities, and estimation models were constructed for the number of Cluster_Fruits on a single longan tree and the number of Single_Fruits on a single Cluster_Fruit. Finally, an error analysis was conducted between the manual counts and the estimation models: the average error rate for the number of Cluster_Fruits was 2.66%, while that for the number of Single_Fruits was 2.99%. The results show that the proposed method is effective at estimating longan yields and can provide guidance for improving the efficiency of longan fruit harvests.
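The calibration-and-error-analysis step described above amounts to regressing detected counts against manual counts and reporting a mean error rate. A toy computation (all numbers invented, not the paper's field data):

```python
import numpy as np

manual   = np.array([52, 61, 48, 70, 55], dtype=float)  # hand-counted fruits
detected = np.array([49, 57, 46, 65, 52], dtype=float)  # model detections

# Linear regression: estimate the true count from the detector's output.
slope, intercept = np.polyfit(detected, manual, deg=1)
estimated = slope * detected + intercept

# Average error rate of the calibrated estimates vs. manual counts (percent).
error_rate = np.mean(np.abs(estimated - manual) / manual) * 100
```

The same recipe applies at both levels reported in the abstract (clusters per tree and single fruits per cluster), each with its own fitted line.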
Affiliation(s)
- Denghui Li
- College of Engineering, South China Agricultural University, Guangzhou, China
- Guangdong Laboratory for Lingnan Modern Agriculture, Guangzhou, China
- Xiaoxuan Sun
- Key Laboratory of South China Agricultural Plant Molecular Analysis and Genetic Improvement, Guangdong Provincial Key Laboratory of Applied Botany, South China Botanical Garden, Chinese Academy of Sciences, Guangzhou, China
- South China National Botanical Garden, Guangzhou, China
- University of Chinese Academy of Sciences, Beijing, China
- Yuhang Jia
- College of Engineering, South China Agricultural University, Guangzhou, China
- Zhongwei Yao
- College of Engineering, South China Agricultural University, Guangzhou, China
- Peiyi Lin
- College of Engineering, South China Agricultural University, Guangzhou, China
- Yingyi Chen
- College of Engineering, South China Agricultural University, Guangzhou, China
- Haobo Zhou
- College of Engineering, South China Agricultural University, Guangzhou, China
- Zhengqi Zhou
- College of Engineering, South China Agricultural University, Guangzhou, China
- Kaixuan Wu
- College of Engineering, South China Agricultural University, Guangzhou, China
- Linlin Shi
- College of Engineering, South China Agricultural University, Guangzhou, China
- Jun Li
- College of Engineering, South China Agricultural University, Guangzhou, China
- Guangdong Laboratory for Lingnan Modern Agriculture, Guangzhou, China
3
Chen C, Nie J, Ma M, Shi X. DNA Origami Nanostructure Detection and Yield Estimation Using Deep Learning. ACS Synth Biol 2023; 12:524-532. PMID: 36696234; DOI: 10.1021/acssynbio.2c00533.
Abstract
DNA origami is a milestone in DNA nanotechnology. It is a robust and efficient way to construct arbitrary two- and three-dimensional nanostructures, whose shapes and sizes vary. To characterize them, atomic force microscopes, transmission electron microscopes, and other microscopes are utilized. However, identifying the various origami nanostructures still depends heavily on the experience of researchers. In this study, we used a deep learning method (an improved YOLOX) to detect multiple DNA origami structures and estimate their yield. We designed a feature enhancement fusion network with an attention mechanism and investigated the related parameters. Experiments conducted to verify the proposed method showed that its detection accuracy was higher than that of other methods. The method can detect DNA origami and estimate its yield in complex environments, with detection speeds in the millisecond range.
Affiliation(s)
- Congzhou Chen
- College of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, China
- Jinyan Nie
- Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
- Mingyuan Ma
- School of Computer Science, Peking University, Beijing 100871, China
- Xiaolong Shi
- Institute of Computing Science and Technology, Guangzhou University, Guangzhou 510006, China
4
Lang P, Zhang L, Huang C, Chen J, Kang X, Zhang Z, Tong Q. Integrating environmental and satellite data to estimate county-level cotton yield in Xinjiang Province. Front Plant Sci 2023; 13:1048479. PMID: 36743573; PMCID: PMC9889829; DOI: 10.3389/fpls.2022.1048479.
Abstract
Accurate and timely estimation of cotton yield over large areas is essential for precision agriculture, facilitating the operation of commodity markets and guiding agronomic management practices. Remote sensing (RS) and crop models are effective means of predicting cotton yield in the field. Satellite vegetation indices (VIs) can describe crop yield variations over large areas but cannot take the exact environmental impact into consideration. Climate variables (CVs), which reflect the spatial heterogeneity of large regions, can provide environmental information for better estimation of cotton yield. In this study, the most important VIs and CVs for estimating county-level cotton yield across Xinjiang Province were identified. We found that the VIs of canopy structure and chlorophyll content, and the CVs of moisture, were the most significant factors for cotton growth. For yield estimation, we utilized four approaches: least absolute shrinkage and selection operator regression (LASSO), support vector regression (SVR), random forest regression (RFR), and long short-term memory (LSTM). Due to its ability to capture long-term temporal features, LSTM performed best, with an R2 of 0.76, a root mean square error (RMSE) of 150 kg/ha, and a relative RMSE (rRMSE) of 8.67%; moreover, an additional 10% of the variance could be explained by adding CVs to the VIs. For within-season yield estimation using LSTM, predictions made 2 months before harvest were the most accurate (R2 = 0.65, RMSE = 220 kg/ha, rRMSE = 15.97%). Our study demonstrated the feasibility of yield estimation and early prediction at the county level over large cotton cultivation areas by integrating satellite and environmental data.
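As a reminder of the accuracy metrics quoted above, RMSE and relative RMSE have standard definitions; a minimal sketch with invented placeholder yields (kg/ha), not the study's county-level data:

```python
import numpy as np

observed  = np.array([1700.0, 1850.0, 1600.0, 1900.0])  # invented yields, kg/ha
predicted = np.array([1650.0, 1900.0, 1580.0, 1820.0])

rmse  = np.sqrt(np.mean((predicted - observed) ** 2))   # root mean square error
rrmse = rmse / np.mean(observed) * 100                  # relative RMSE, percent
```

rRMSE normalizes by the mean observed yield, which is what makes values comparable across regions with different yield levels.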
Affiliation(s)
- Ping Lang
- State Key Laboratory of Remote Sensing Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Lifu Zhang
- State Key Laboratory of Remote Sensing Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Changping Huang
- State Key Laboratory of Remote Sensing Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Jiahua Chen
- State Key Laboratory of Remote Sensing Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Xiaoyan Kang
- State Key Laboratory of Remote Sensing Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, China
- Ze Zhang
- Xinjiang Production and Construction Corps Oasis Eco-Agriculture Key Laboratory, College of Agriculture, Shihezi University, Shihezi, China
- Qingxi Tong
- State Key Laboratory of Remote Sensing Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, China
5
Rançon F, Keresztes B, Deshayes A, Tardif M, Abdelghafour F, Fontaine G, Da Costa JP, Germain C. Designing a Proximal Sensing Camera Acquisition System for Vineyard Applications: Results and Feedback on 8 Years of Experiments. Sensors (Basel) 2023; 23:847. PMID: 36679645; PMCID: PMC9865571; DOI: 10.3390/s23020847.
Abstract
The potential of image proximal sensing for agricultural applications has been a prolific scientific subject in the recent literature. Its main appeal lies in sensing precise information about plant status that is harder or impossible to extract from lower-resolution downward-looking sources such as satellite or drone imagery. Yet many theoretical and practical problems arise when dealing with proximal sensing, especially on perennial crops such as vineyards. Indeed, vineyards exhibit challenging physical obstacles and many degrees of variability in their layout. In this paper, we present the design of a mobile camera suited to vineyards and harsh experimental conditions, as well as the results and assessments of 8 years' worth of studies using that camera. These projects ranged from in-field yield estimation (berry counting) to disease detection, providing new insights into typical viticulture problems that could also be generalized to orchard crops. Recommendations are then provided through small case studies, such as the difficulties of framing plots with different structures or of mounting the sensor on a moving vehicle. While the results stress the obvious importance and strong benefits of a thorough experimental design, they also indicate some inescapable pitfalls, illustrating the need for more robust image analysis algorithms and better databases. We believe sharing that experience with the scientific community can only benefit the future development of these innovative approaches.
Affiliation(s)
- Florian Rançon
- IMS Laboratory, CNRS UMR 5218, University of Bordeaux, Talence Campus, F-33400 Talence, France
- Bordeaux Sciences Agro, F-33175 Gradignan, France
- Barna Keresztes
- IMS Laboratory, CNRS UMR 5218, University of Bordeaux, Talence Campus, F-33400 Talence, France
- Aymeric Deshayes
- IMS Laboratory, CNRS UMR 5218, University of Bordeaux, Talence Campus, F-33400 Talence, France
- Malo Tardif
- IMS Laboratory, CNRS UMR 5218, University of Bordeaux, Talence Campus, F-33400 Talence, France
- Florent Abdelghafour
- INRAE, Institut Agro, ITAP, University of Montpellier, F-34196 Montpellier, France
- Gael Fontaine
- IMS Laboratory, CNRS UMR 5218, University of Bordeaux, Talence Campus, F-33400 Talence, France
- Jean-Pierre Da Costa
- IMS Laboratory, CNRS UMR 5218, University of Bordeaux, Talence Campus, F-33400 Talence, France
- Bordeaux Sciences Agro, F-33175 Gradignan, France
- Christian Germain
- IMS Laboratory, CNRS UMR 5218, University of Bordeaux, Talence Campus, F-33400 Talence, France
- Bordeaux Sciences Agro, F-33175 Gradignan, France
6
Cheng E, Zhang B, Peng D, Zhong L, Yu L, Liu Y, Xiao C, Li C, Li X, Chen Y, Ye H, Wang H, Yu R, Hu J, Yang S. Wheat yield estimation using remote sensing data based on machine learning approaches. Front Plant Sci 2022; 13:1090970. PMID: 36618627; PMCID: PMC9816798; DOI: 10.3389/fpls.2022.1090970.
Abstract
Accurate predictions of wheat yields are essential to farmers' production plans and to the international trade in wheat. However, only poor approximations of the productivity of wheat crops in China can be obtained using traditional linear regression models based on vegetation indices and yield observations. In this study, Sentinel-2 (multispectral) and ZY-1 02D (hyperspectral) data were used together with 15,709 gridded yield measurements (at a resolution of 5 m × 5 m) to predict the winter wheat yield. The estimates were based on four mainstream data-driven approaches: Long Short-Term Memory (LSTM), Random Forest (RF), Gradient Boosting Decision Tree (GBDT), and Support Vector Regression (SVR). The method giving the best estimate of the winter wheat yield was determined, and the accuracy of estimates based on multispectral and hyperspectral data was compared. The results showed that the LSTM model, with an RMSE of 0.201 t/ha, performed better than the RF (RMSE = 0.260 t/ha), GBDT (RMSE = 0.306 t/ha), and SVR (RMSE = 0.489 t/ha) methods. Estimates based on the ZY-1 02D hyperspectral data were more accurate than those based on the 30-m Sentinel-2 data: RMSE = 0.237 t/ha for the ZY-1 02D data, about a 5% improvement on the RMSE of 0.307 t/ha for the 30-m Sentinel-2 data. However, the 10-m Sentinel-2 data performed even better, giving an RMSE of 0.219 t/ha. In addition, the greenness vegetation index SR (simple ratio index) outperformed the traditional vegetation indices. The results highlight the potential of the shortwave infrared bands to replace the visible and near-infrared bands for predicting crop yields. Our study demonstrates the advantages of the deep learning method LSTM over machine learning methods in terms of its ability to make accurate estimates of the winter wheat yield.
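The simple ratio index mentioned above is commonly defined as the NIR/red band ratio; a minimal sketch with invented reflectance values (not the study's data), alongside NDVI for comparison:

```python
import numpy as np

red = np.array([0.08, 0.10, 0.12])   # invented red-band reflectance
nir = np.array([0.45, 0.40, 0.36])   # invented NIR-band reflectance

sr   = nir / red                     # simple ratio index (SR)
ndvi = (nir - red) / (nir + red)     # normalized difference vegetation index
```

SR and NDVI carry the same band information, but SR is unbounded while NDVI saturates toward 1 over dense canopies, which is one reason the two can rank pixels differently.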
Affiliation(s)
- Enhui Cheng
- Key Laboratory of Digital Earth Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, China
- College of Resource and Environment, University of Chinese Academy of Sciences, Beijing, China
- Bing Zhang
- Key Laboratory of Digital Earth Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, China
- College of Resource and Environment, University of Chinese Academy of Sciences, Beijing, China
- Dailiang Peng
- Key Laboratory of Digital Earth Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, China
- International Research Center of Big Data for Sustainable Development Goals, Beijing, China
- Le Yu
- Ministry of Education Key Laboratory for Earth System Modeling, Department of Earth System Science, Institute for Global Change Studies, Tsinghua University, Beijing, China
- Yao Liu
- Land Satellite Remote Sensing Application Center, Ministry of Natural Resources of China, Beijing, China
- Chenchao Xiao
- Land Satellite Remote Sensing Application Center, Ministry of Natural Resources of China, Beijing, China
- Cunjun Li
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- Xiaoyi Li
- Aerospace ShuWei High Tech. Co., Ltd., Beijing, China
- Yue Chen
- Aerospace ShuWei High Tech. Co., Ltd., Beijing, China
- Huichun Ye
- Key Laboratory of Digital Earth Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, China
- International Research Center of Big Data for Sustainable Development Goals, Beijing, China
- Hongye Wang
- Cultivated Land Quality Monitoring and Protection Center, Ministry of Agriculture and Rural Affairs, Beijing, China
- Ruyi Yu
- Key Laboratory of Digital Earth Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, China
- Jinkang Hu
- Key Laboratory of Digital Earth Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, China
- College of Resource and Environment, University of Chinese Academy of Sciences, Beijing, China
- Songlin Yang
- Key Laboratory of Digital Earth Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, China
- College of Resource and Environment, University of Chinese Academy of Sciences, Beijing, China
7
Hassan SI, Alam MM, Zia MYI, Rashid M, Illahi U, Su’ud MM. Rice Crop Counting Using Aerial Imagery and GIS for the Assessment of Soil Health to Increase Crop Yield. Sensors (Basel) 2022; 22:8567. PMID: 36366269; PMCID: PMC9659203; DOI: 10.3390/s22218567.
Abstract
Rice is one of the vital foods consumed in most countries throughout the world. To estimate the yield, crop counting is used to indicate improper growth, identify loam land, and control weeds. As the demand for food supplies increases, it is becoming necessary to grow crops healthily, precisely, and proficiently. Traditional counting methods have numerous disadvantages, such as long delay times and high sensitivity, and are easily disturbed by noise. In this research, rice plants are detected and counted using an unmanned aerial vehicle (UAV), aerial images, and a geographic information system (GIS). The technique is implemented over forty acres of rice crop in Tando Adam, Sindh, Pakistan. To validate the performance of the proposed system, the obtained results are compared with standard plant count techniques and approved by an agronomist after testing the soil and monitoring the rice crop count in each acre of the rice cropland. The results show that the proposed system is precise, detects rice crops accurately, differentiates them from other objects, and estimates soil health based on plant counting data; however, in the case of clusters, the counting is performed in semi-automated mode.
Affiliation(s)
- Syeda Iqra Hassan
- Department of Electronics and Electrical Engineering, Universiti Kuala Lumpur British Malaysian Institute (UniKL BMI), Batu 8, Jalan Sungai Pusu, Gombak 53100, Malaysia
- National Centre for Big Data and Cloud Computing, Ziauddin University, Karachi 74600, Pakistan
- Department of Electrical Engineering, Ziauddin University, Karachi 74600, Pakistan
- Muhammad Mansoor Alam
- Faculty of Computing, Riphah International University, Islamabad 46000, Pakistan
- Faculty of Computing and Informatics, Multimedia University, Cyberjaya 63100, Malaysia
- Malaysian Institute of Information Technology, University of Kuala Lumpur, Kuala Lumpur 50250, Malaysia
- Faculty of Engineering and Information Technology, School of Computer Science, University of Technology, Sydney 2006, Australia
- Muhammad Rashid
- Department of Computer Engineering, Umm Al Qura University, Makkah 21955, Saudi Arabia
- Usman Illahi
- Department of Electrical Engineering, FET, Gomal University, Dera Ismail Khan 29050, Pakistan
- Mazliham Mohd Su’ud
- Faculty of Computing and Informatics, Multimedia University, Cyberjaya 63100, Malaysia
- Water and Engineering Section, MFI, Universiti Kuala Lumpur Malaysian France Institute (UniKL MFI), Section 14, Jalan Damai, Seksyen 14, Bandar Baru Bangi 43650, Malaysia
8
Wang L, Zhao Y, Xiong Z, Wang S, Li Y, Lan Y. Fast and precise detection of litchi fruits for yield estimation based on the improved YOLOv5 model. Front Plant Sci 2022; 13:965425. PMID: 36017261; PMCID: PMC9396223; DOI: 10.3389/fpls.2022.965425.
Abstract
The fast and precise detection of dense litchi fruits and the determination of their maturity are of great practical significance for yield estimation in litchi orchards and for robot harvesting. Factors such as a complex growth environment, dense distribution, and random occlusion by leaves, branches, and other litchi fruits easily cause computer vision predictions to deviate from the actual values. This study proposed a fast and precise litchi fruit detection method and application software based on an improved You Only Look Once version 5 (YOLOv5) model, which can be used for the detection and yield estimation of litchi in orchards. First, a dataset of litchi at different maturity levels was established. Second, the YOLOv5s model was chosen as the base of the improved model. ShuffleNet v2 was used as the improved backbone network, which was then fine-tuned to simplify the model structure. In the feature fusion stage, the CBAM module was introduced to further refine the litchi's effective feature information. Considering the small size of dense litchi fruits, an input size of 1,280 × 1,280 was used while optimizing the network structure. To evaluate the performance of the proposed method, we performed ablation experiments and compared it with other models on the test set. The results showed that the improved model achieved a 3.5% improvement in mean average precision (mAP) and a 62.77% reduction in model size compared with the original model. The improved model is 5.1 MB and runs at 78.13 frames per second (FPS) at a confidence threshold of 0.5. The model performs well in precision and robustness in different scenarios. In addition, we developed an Android application for litchi counting and yield estimation based on the improved model; in application tests, the correlation coefficient R2 between the application's estimates and the actual results was 0.9879.
In summary, our improved method achieves high precision, a lightweight model, and fast detection at large scales. The method can provide technical means for portable yield estimation and for the visual recognition systems of litchi harvesting robots.
Affiliation(s)
- Lele Wang
- College of Electronic Engineering, College of Artificial Intelligence, South China Agricultural University, Guangzhou, China
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticides Spraying Technology, Guangzhou, China
- Yingjie Zhao
- College of Electronic Engineering, College of Artificial Intelligence, South China Agricultural University, Guangzhou, China
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticides Spraying Technology, Guangzhou, China
- Zhangjun Xiong
- College of Electronic Engineering, College of Artificial Intelligence, South China Agricultural University, Guangzhou, China
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticides Spraying Technology, Guangzhou, China
- Shizhou Wang
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticides Spraying Technology, Guangzhou, China
- School of Agricultural Engineering and Food Science, Shandong University of Technology, Zibo, China
- Yuanhong Li
- College of Electronic Engineering, College of Artificial Intelligence, South China Agricultural University, Guangzhou, China
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticides Spraying Technology, Guangzhou, China
- Yubin Lan
- College of Electronic Engineering, College of Artificial Intelligence, South China Agricultural University, Guangzhou, China
- Guangdong Laboratory for Lingnan Modern Agriculture, Guangzhou, China
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticides Spraying Technology, Guangzhou, China
- School of Agricultural Engineering and Food Science, Shandong University of Technology, Zibo, China
- Department of Biological and Agricultural Engineering, Texas A&M University, College Station, TX, United States
9
Parr B, Legg M, Alam F. Analysis of Depth Cameras for Proximal Sensing of Grapes. Sensors (Basel) 2022; 22:4179. PMID: 35684799; DOI: 10.3390/s22114179.
Abstract
This work investigates the performance of five depth cameras in relation to their potential for grape yield estimation. The technologies used by these cameras include structured light (Kinect V1), active infrared stereoscopy (RealSense D415), time of flight (Kinect V2 and Kinect Azure), and LiDAR (Intel L515). To evaluate their suitability for grape yield estimation, a range of factors were investigated including their performance in and out of direct sunlight, their ability to accurately measure the shape of the grapes, and their potential to facilitate counting and sizing of individual berries. The depth cameras’ performance was benchmarked using high-resolution photogrammetry scans. All the cameras except the Kinect V1 were able to operate in direct sunlight. Indoors, the RealSense D415 camera provided the most accurate depth scans of grape bunches, with a 2 mm average depth error relative to photogrammetric scans. However, its performance was reduced in direct sunlight. The time of flight and LiDAR cameras provided depth scans of grapes that had about an 8 mm depth bias. Furthermore, the individual berries manifested in the scans as pointed shape distortions. This led to an underestimation of berry sizes when applying the RANSAC sphere fitting but may help with the detection of individual berries with more advanced algorithms. Applying an opaque coating to the surface of the grapes reduced the observed distance bias and shape distortion. This indicated that these are likely caused by the cameras’ transmitted light experiencing diffused scattering within the grapes. More work is needed to investigate if this distortion can be used for enhanced measurement of grape properties such as ripeness and berry size.
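The RANSAC sphere fitting mentioned above is a generic point-cloud technique; a self-contained sketch on a synthetic "berry" (all coordinates and thresholds are invented, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_sphere(pts):
    """Algebraic least-squares sphere fit.

    Uses |p|^2 = 2 c . p + (r^2 - |c|^2), which is linear in the
    unknowns (c, r^2 - |c|^2); returns (center, radius).
    """
    A = np.c_[2 * pts, np.ones(len(pts))]
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    val = sol[3] + center @ center
    radius = np.sqrt(val) if val > 0 else 0.0
    return center, radius

def ransac_sphere(pts, n_iter=200, thresh=0.5):
    """Repeatedly fit spheres to 4-point samples; keep the most-supported one."""
    best_inliers, best = 0, None
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 4, replace=False)]
        c, r = fit_sphere(sample)
        d = np.abs(np.linalg.norm(pts - c, axis=1) - r)
        inliers = int((d < thresh).sum())
        if inliers > best_inliers:
            best_inliers, best = inliers, (c, r)
    return best

# Synthetic berry: 300 points on a 6 mm-radius sphere centred at (10, 20, 30),
# plus 30 scattered outlier points.
u = rng.normal(size=(300, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
surface = np.array([10.0, 20.0, 30.0]) + 6.0 * u
outliers = rng.uniform(0, 40, size=(30, 3))
cloud = np.vstack([surface, outliers])

center, radius = ransac_sphere(cloud)
```

With the pointed shape distortions the study reports, the distance residuals of such a fit would be biased, which is consistent with the berry-size underestimation described above.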
10
Wang C, Wang Y, Liu S, Lin G, He P, Zhang Z, Zhou Y. Study on Pear Flowers Detection Performance of YOLO-PEFL Model Trained With Synthetic Target Images. Front Plant Sci 2022; 13:911473. PMID: 35747884; PMCID: PMC9209761; DOI: 10.3389/fpls.2022.911473.
Abstract
Accurate detection of pear flowers is an important measure for pear orchard yield estimation, which plays a vital role in improving pear yield and predicting pear price trends. This study proposed an improved YOLOv4 model, called the YOLO-PEFL model, for accurate pear flower detection in the natural environment. Pear flower targets were artificially synthesized from the surface features of real pear flowers, and the synthetic targets together with the backgrounds of the original pear flower images were used as the inputs of the YOLO-PEFL model. ShuffleNetv2, embedded with the SENet (Squeeze-and-Excitation Networks) module, replaced the original backbone network of the YOLOv4 model to form the backbone of the YOLO-PEFL model. The parameters of the YOLO-PEFL model were fine-tuned to change the size of the initial anchor frame. The experimental results showed that the average precision of the YOLO-PEFL model was 96.71%, the model size was reduced by about 80%, and the average detection time was 0.027 s. Compared with the YOLOv4 model and the YOLOv4-tiny model, the YOLO-PEFL model performed better in model size, detection accuracy, and detection speed, which effectively reduced deployment cost and improved efficiency. This implies the proposed YOLO-PEFL model can accurately detect pear flowers with high efficiency in the natural environment.
Affiliation(s)
- Chenglin Wang
- Faculty of Modern Agricultural Engineering, Kunming University of Science and Technology, Kunming, China
- College of Intelligent Manufacturing Engineering, Chongqing University of Arts and Sciences, Chongqing, China
- Yawei Wang
- College of Intelligent Manufacturing Engineering, Chongqing University of Arts and Sciences, Chongqing, China
- Suchwen Liu
- College of Intelligent Manufacturing Engineering, Chongqing University of Arts and Sciences, Chongqing, China
- Guichao Lin
- School of Mechanical and Electrical Engineering, Zhongkai University of Agriculture and Engineering, Guangzhou, China
- *Correspondence: Guichao Lin
- Peng He
- School of Electronic and Information Engineering, Taizhou University, Taizhou, China
- Zhaoguo Zhang
- Faculty of Modern Agricultural Engineering, Kunming University of Science and Technology, Kunming, China
- *Correspondence: Zhaoguo Zhang
- Yi Zhou
- College of Intelligent Manufacturing Engineering, Chongqing University of Arts and Sciences, Chongqing, China
11
Hein NT, Ciampitti IA, Jagadish SVK. Bottlenecks and opportunities in field-based high-throughput phenotyping for heat and drought stress. J Exp Bot 2021; 72:5102-5116. PMID: 33474563; PMCID: PMC8272563; DOI: 10.1093/jxb/erab021. Received 10/15/2020; accepted 01/18/2021.
Abstract
Flowering and grain-filling stages are highly sensitive to heat and drought stress exposure, leading to significant loss in crop yields. Therefore, phenotyping to enhance resilience to these abiotic stresses is critical for sustaining genetic gains in crop improvement programs. However, traditional methods for screening traits related to these stresses are slow, laborious, and often expensive. Remote sensing provides opportunities to introduce low-cost, less biased, high-throughput phenotyping methods to capture large genetic diversity to facilitate enhancement of stress resilience in crops. This review focuses on four key physiological traits and processes that are critical in understanding crop responses to drought and heat stress during reproductive and grain-filling periods. Specifically, these traits include: (i) time of day of flowering, to escape these stresses during flowering; (ii) optimizing photosynthetic efficiency; (iii) storage and translocation of water-soluble carbohydrates; and (iv) yield and yield components to provide in-season yield estimates. Moreover, we provide an overview of current advances in remote sensing in capturing these traits, and discuss the limitations with existing technology as well as future direction of research to develop high-throughput phenotyping approaches. In the future, phenotyping these complex traits will require sensor advancement, high-quality imagery combined with machine learning methods, and efforts in transdisciplinary science to foster integration across disciplines.
Affiliation(s)
- Nathan T Hein
- Department of Agronomy, Kansas State University, Manhattan, KS, USA
12
Maheswari P, Raja P, Apolo-Apolo OE, Pérez-Ruiz M. Intelligent Fruit Yield Estimation for Orchards Using Deep Learning Based Semantic Segmentation Techniques-A Review. Front Plant Sci 2021; 12:684328. PMID: 34249054; PMCID: PMC8267528; DOI: 10.3389/fpls.2021.684328. Received 03/23/2021; accepted 05/31/2021.
Abstract
Smart farming employs intelligent systems across every domain of agriculture to obtain sustainable economic growth from the available resources using advanced technologies. Deep Learning (DL) is a sophisticated artificial neural network architecture that provides state-of-the-art results in smart farming applications. One of the main tasks in this domain is yield estimation. Manual yield estimation faces many hurdles: it is labor-intensive and time-consuming and produces imprecise results. These issues motivate the development of intelligent fruit yield estimation systems that help farmers decide on harvesting, marketing, and related activities. Semantic segmentation combined with DL achieves promising results in fruit detection and localization by performing pixel-based prediction. This paper reviews the literature on fruit yield estimation using DL-based semantic segmentation architectures. It also discusses the challenges that arise during intelligent fruit yield estimation, such as sampling, collection, annotation and data augmentation, fruit detection, and counting. Results show that fruit yield estimation employing DL-based semantic segmentation techniques performs better than earlier techniques because human cognition is incorporated into the architecture. Future directions, such as customizing DL architectures for smartphone applications to predict yield and developing more comprehensive models that handle challenging situations such as occlusion, overlap, and illumination variation, are also discussed.
Affiliation(s)
- Prabhakar Maheswari
- School of Mechanical Engineering, SASTRA Deemed University, Thanjavur, India
- Purushothaman Raja
- School of Mechanical Engineering, SASTRA Deemed University, Thanjavur, India
- Orly Enrique Apolo-Apolo
- Departamento de Ingeniería Aeroespacial y Mecánica de Fluidos, Área de Ingeniería Agroforestal, Universidad de Sevilla, Seville, Spain
- Manuel Pérez-Ruiz
- Departamento de Ingeniería Aeroespacial y Mecánica de Fluidos, Área de Ingeniería Agroforestal, Universidad de Sevilla, Seville, Spain
13
Boatswain Jacques AA, Adamchuk VI, Park J, Cloutier G, Clark JJ, Miller C. Towards a Machine Vision-Based Yield Monitor for the Counting and Quality Mapping of Shallots. Front Robot AI 2021; 8:627067. PMID: 34046434; PMCID: PMC8146908; DOI: 10.3389/frobt.2021.627067. Received 11/08/2020; accepted 02/04/2021.
Abstract
In comparison to field crops such as cereals, cotton, hay, and grain, specialty crops often require more resources, are usually more sensitive to sudden changes in growth conditions, and are known to produce higher-value products. Providing quality and quantity assessment of specialty crops during harvesting is crucial for securing higher returns and improving management practices. Technical advancements in computer and machine vision have improved the detection, quality assessment, and yield estimation processes for various fruit crops, but similar methods capable of exporting a detailed yield map for vegetable crops have yet to be fully developed. A machine vision-based yield monitor was designed to perform size categorization and continuous counting of shallots in situ during the harvesting process. Coupled with software developed in Python, the system is composed of a video logger and a global navigation satellite system. Computer vision analysis is performed within the tractor while an RGB camera collects real-time video data of the crops under natural sunlight conditions. Vegetables are first segmented using watershed segmentation, detected on the conveyor, and then classified by size. The system detected shallots in a subsample of the dataset with a precision of 76%. The software was also evaluated on its ability to classify the shallots into three size categories. The best performance was achieved in the large class (73%), followed by the small class (59%) and the medium class (44%). Based on these results, the occasional occlusion of vegetables and inconsistent lighting conditions were the main factors that hindered performance. Although further enhancements are envisioned for the prototype system, its modular and novel design permits the mapping of a selection of other horticultural crops. Moreover, it has the potential to benefit many producers of small vegetable crops by providing them with useful harvest information in real time.
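The counting step rests on separating touching produce in a binary mask. A minimal sketch of the marker-extraction idea that typically precedes watershed segmentation (distance transform plus local maxima), using SciPy only; the synthetic mask geometry, window size, and size-class threshold are illustrative assumptions rather than the paper's actual parameters:

```python
import numpy as np
from scipy import ndimage as ndi

# Synthetic binary mask: two overlapping round "shallots".
yy, xx = np.indices((80, 80))
blob1 = (yy - 28) ** 2 + (xx - 28) ** 2 < 16 ** 2
blob2 = (yy - 44) ** 2 + (xx - 52) ** 2 < 20 ** 2
mask = blob1 | blob2

# Distance transform: each foreground pixel's distance to the background.
distance = ndi.distance_transform_edt(mask)

# One marker per vegetable: local maxima of the distance map.
local_max = (distance == ndi.maximum_filter(distance, size=15)) & (distance > 5)
markers, n = ndi.label(local_max)

# Proxy radius at each marker gives a crude size classification.
radii = ndi.maximum(distance, markers, index=range(1, n + 1))
classes = ["small" if r < 18 else "large" for r in radii]
print(n, classes)
```

In practice the markers would seed a watershed of `-distance` so that each touching vegetable gets its own labeled region before counting.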
Affiliation(s)
- Amanda A Boatswain Jacques
- Precision Agriculture and Sensor Systems Laboratory (PASS), Department of Bioresource Engineering, McGill University, Sainte-Anne-de-Bellevue, QC, Canada
- Viacheslav I Adamchuk
- Precision Agriculture and Sensor Systems Laboratory (PASS), Department of Bioresource Engineering, McGill University, Sainte-Anne-de-Bellevue, QC, Canada
- Jaesung Park
- Precision Agriculture and Sensor Systems Laboratory (PASS), Department of Bioresource Engineering, McGill University, Sainte-Anne-de-Bellevue, QC, Canada
- James J Clark
- Department of Electrical and Computer Engineering, McGill University, Montreal, QC, Canada
- Connor Miller
- Precision Agriculture and Sensor Systems Laboratory (PASS), Department of Bioresource Engineering, McGill University, Sainte-Anne-de-Bellevue, QC, Canada
14
Liu Z, Xu Z, Bi R, Wang C, He P, Jing Y, Yang W. Estimation of Winter Wheat Yield in Arid and Semiarid Regions Based on Assimilated Multi-Source Sentinel Data and the CERES-Wheat Model. Sensors (Basel) 2021; 21:1247. PMID: 33578703; PMCID: PMC7916384; DOI: 10.3390/s21041247. Received 12/21/2020; accepted 02/04/2021.
Abstract
The farmland area in arid and semiarid regions accounts for about 40% of the total area of farmland in the world, and it continues to increase. It is critical for global food security to predict the crop yield in arid and semiarid regions. To improve the prediction of crop yields in arid and semiarid regions, we explored data assimilation-crop modeling strategies for estimating the yield of winter wheat under different water stress conditions across different growing areas. We incorporated leaf area index (LAI) and soil moisture derived from multi-source Sentinel data with the CERES-Wheat model using ensemble Kalman filter data assimilation. According to different water stress conditions, different data assimilation strategies were applied to estimate winter wheat yields in arid and semiarid areas. Sentinel data provided LAI and soil moisture data with higher frequency (<14 d) and higher precision, with root mean square errors (RMSE) of 0.9955 m2 m−2 and 0.0305 cm3 cm−3, respectively, for data assimilation-crop modeling. The temporal continuity of the CERES-Wheat model and the spatial continuity of the remote sensing images obtained from the Sentinel data were integrated using the assimilation method. The RMSE of LAI and soil water obtained by the assimilation method were lower than those simulated by the CERES-Wheat model, which were reduced by 0.4458 m2 m−2 and 0.0244 cm3 cm−3, respectively. Assimilation of LAI independently estimated yield with high precision and efficiency in irrigated areas for winter wheat, with RMSE and absolute relative error (ARE) of 427.57 kg ha−1 and 6.07%, respectively. However, in rain-fed areas of winter wheat under water stress, assimilating both LAI and soil moisture achieved the highest accuracy in estimating yield (RMSE = 424.75 kg ha−1, ARE = 9.55%) by modifying the growth and development of the canopy simultaneously and by promoting soil water balance. 
Sentinel data can provide high temporal and spatial resolution data for deriving LAI and soil moisture in the study area, thereby improving the estimation accuracy of the assimilation model at a regional scale. In the arid and semiarid region of the southeastern Loess Plateau, assimilating LAI alone obtained high-precision yield estimates for winter wheat in irrigated areas, whereas in rain-fed areas both LAI and soil moisture had to be assimilated to achieve high-precision yield estimation.
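The assimilation scheme described above is the ensemble Kalman filter. A generic sketch of one stochastic EnKF analysis step (perturbed observations) on a toy two-variable state of LAI and soil moisture; the ensemble size, error statistics, and observed value are illustrative assumptions, not numbers from the CERES-Wheat experiments:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_std, H, rng):
    """One stochastic EnKF analysis step.
    ensemble: (n_members, n_state); H: (n_obs, n_state) observation operator."""
    n = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)            # state anomalies
    Y = X @ H.T                                     # predicted-observation anomalies
    P_xy = X.T @ Y / (n - 1)                        # state/observation cross-covariance
    P_yy = Y.T @ Y / (n - 1) + np.eye(H.shape[0]) * obs_std ** 2
    K = P_xy @ np.linalg.inv(P_yy)                  # Kalman gain
    obs_pert = obs + rng.normal(0.0, obs_std, size=(n, H.shape[0]))
    return ensemble + (obs_pert - ensemble @ H.T) @ K.T

# Toy state [LAI, soil moisture]; a remotely sensed LAI of 3.0 is assimilated.
rng = np.random.default_rng(0)
forecast = np.column_stack([rng.normal(2.0, 0.5, 100),     # LAI (m2 m-2)
                            rng.normal(0.25, 0.05, 100)])  # soil moisture (cm3 cm-3)
H = np.array([[1.0, 0.0]])                                 # only LAI is observed
analysis = enkf_update(forecast, np.array([3.0]), 0.2, H, rng)
print(forecast[:, 0].mean().round(2), analysis[:, 0].mean().round(2))
```

The update pulls the ensemble mean toward the observation and shrinks the spread of the observed component; in the paper's rain-fed case, the observation vector would also include soil moisture.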
Affiliation(s)
- Zhengchun Liu
- College of Resource and Environment, Shanxi Agricultural University, Taigu 030801, China
- National Experimental Teaching Demonstration Center for Agricultural Resources and Environment, Shanxi Agricultural University, Taigu 030801, China
- Zhanjun Xu
- College of Resource and Environment, Shanxi Agricultural University, Taigu 030801, China
- National Experimental Teaching Demonstration Center for Agricultural Resources and Environment, Shanxi Agricultural University, Taigu 030801, China
- Rutian Bi
- College of Resource and Environment, Shanxi Agricultural University, Taigu 030801, China
- National Experimental Teaching Demonstration Center for Agricultural Resources and Environment, Shanxi Agricultural University, Taigu 030801, China
- *Correspondence: Rutian Bi; Tel.: +86-0354-6288322
- Chao Wang
- College of Agriculture, Shanxi Agricultural University, Taigu 030801, China
- Peng He
- College of Resource and Environment, Shanxi Agricultural University, Taigu 030801, China
- National Experimental Teaching Demonstration Center for Agricultural Resources and Environment, Shanxi Agricultural University, Taigu 030801, China
- Yaodong Jing
- College of Resource and Environment, Shanxi Agricultural University, Taigu 030801, China
- National Experimental Teaching Demonstration Center for Agricultural Resources and Environment, Shanxi Agricultural University, Taigu 030801, China
- Wude Yang
- College of Agriculture, Shanxi Agricultural University, Taigu 030801, China
|
15
|
Fu H, Wang C, Cui G, She W, Zhao L. Ramie Yield Estimation Based on UAV RGB Images. Sensors (Basel) 2021; 21:669. PMID: 33477949; PMCID: PMC7833380; DOI: 10.3390/s21020669. Received 11/03/2020; accepted 01/15/2021.
Abstract
Timely and accurate crop growth monitoring and yield estimation are important for field management. The traditional sampling method used for estimation of ramie yield is destructive. Thus, this study proposed a new method for estimating ramie yield based on field phenotypic data obtained from unmanned aerial vehicle (UAV) images. A UAV platform carrying RGB cameras was employed to collect ramie canopy images during the whole growth period. The vegetation indices (VIs), plant number, and plant height were extracted from the UAV-based images, and these data were then incorporated to establish a yield estimation model. Among all of the UAV-based image data, we found that the structure features (plant number and plant height) reflected ramie yield better than the spectral features, and among the structure features, plant number was the most useful index for monitoring yield, with a correlation coefficient of 0.6. By fusing multiple characteristic parameters, the yield estimation model based on multiple linear regression was clearly more accurate than the stepwise linear regression model, with a determination coefficient of 0.66 and a relative root mean square error of 1.592 kg. Our study reveals that it is feasible to monitor crop growth based on UAV images and that the fusion of phenotypic data can improve the accuracy of yield estimations.
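The fused yield model above is a multiple linear regression on plant number and plant height. A minimal sketch of such a fit via ordinary least squares on synthetic plot data; the "true" coefficients and noise level are invented for illustration and do not come from the study:

```python
import numpy as np

# Synthetic plot data standing in for the UAV-derived structure features.
rng = np.random.default_rng(42)
n = 60
plant_number = rng.integers(80, 160, n).astype(float)     # plants per plot
plant_height = rng.normal(1.6, 0.2, n)                    # canopy height, m
plot_yield = 0.02 * plant_number + 1.5 * plant_height + rng.normal(0.0, 0.2, n)

# Multiple linear regression via ordinary least squares.
X = np.c_[plant_number, plant_height, np.ones(n)]         # design matrix + intercept
coef, *_ = np.linalg.lstsq(X, plot_yield, rcond=None)
pred = X @ coef
ss_res = float(((plot_yield - pred) ** 2).sum())
ss_tot = float(((plot_yield - plot_yield.mean()) ** 2).sum())
r2 = 1.0 - ss_res / ss_tot                                # determination coefficient
rmse = float(np.sqrt(((plot_yield - pred) ** 2).mean()))
print(round(r2, 2), round(rmse, 3))
```

The paper's stepwise variant would instead add or drop predictors one at a time based on a significance criterion.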
Affiliation(s)
- Hongyu Fu
- Ramie Research Institute of Hunan Agricultural University, College of Agricultural, Hunan Agricultural University, Changsha 410128, China
- Chufeng Wang
- Macro Agriculture Research Institute, College of Resource and Environment, Huazhong Agricultural University, 1 Shizishan Street, Wuhan 430000, China
- Guoxian Cui
- Ramie Research Institute of Hunan Agricultural University, College of Agricultural, Hunan Agricultural University, Changsha 410128, China
- *Correspondence: Guoxian Cui
- Wei She
- Ramie Research Institute of Hunan Agricultural University, College of Agricultural, Hunan Agricultural University, Changsha 410128, China
- Liang Zhao
- Ramie Research Institute of Hunan Agricultural University, College of Agricultural, Hunan Agricultural University, Changsha 410128, China
|
16
|
Mekhalfi ML, Nicolò C, Ianniello I, Calamita F, Goller R, Barazzuol M, Melgani F. Vision System for Automatic On-Tree Kiwifruit Counting and Yield Estimation. Sensors (Basel) 2020; 20:4214. PMID: 32751295; PMCID: PMC7435641; DOI: 10.3390/s20154214. Received 07/10/2020; accepted 07/28/2020.
Abstract
Yield estimation is an essential preharvest practice among most large-scale farming companies, since it enables the predetermination of essential logistics to be allocated (i.e., transportation means, supplies, labor force, among others). An overestimation may thus incur further costs, whereas an underestimation entails potential crop waste. More interestingly, an accurate yield estimation enables stakeholders to better place themselves in the market. Computer-aided precision farming is therefore set to play a pivotal role in this respect. Kiwifruit represents a major produce in several countries (e.g., Italy, China, and New Zealand). However, to date, the relevant literature remains short of a complete and automatic system for kiwifruit yield estimation. In this paper, we present a fully automatic and noninvasive computer vision system for kiwifruit yield estimation across a given orchard. It consists mainly of an optical sensor mounted on a minitractor that surveys the orchard of interest at a low pace. Afterwards, the acquired images are fed to a pipeline that incorporates image preprocessing, stitching, and fruit counting stages and outputs an estimated fruit count and yield. Experimental results on two large kiwifruit orchards confirm the plausibility of the proposed system (errors of 6% and 15%). The proposed yield estimation solution has been in commercial use for about 2 years. Compared with the traditional manual yield estimation carried out by kiwifruit companies, it was demonstrated to save a significant amount of time and to cut down on estimation errors, especially in large-scale farming.
Affiliation(s)
- Mohamed Lamine Mekhalfi
- Metacortex S.r.l., Via dei Campi 27, 38050 Torcegno, Italy
- *Correspondence: Mohamed Lamine Mekhalfi
- Carlo Nicolò
- Metacortex S.r.l., Via dei Campi 27, 38050 Torcegno, Italy
- Ivan Ianniello
- Metacortex S.r.l., Via dei Campi 27, 38050 Torcegno, Italy
- Federico Calamita
- Metacortex S.r.l., Via dei Campi 27, 38050 Torcegno, Italy
- Rino Goller
- Metacortex S.r.l., Via dei Campi 27, 38050 Torcegno, Italy
- Maurizio Barazzuol
- Metacortex S.r.l., Via dei Campi 27, 38050 Torcegno, Italy
- Farid Melgani
- Department of Information Engineering and Computer Science, University of Trento, Via Sommarive, 9, 38123 Trento, Italy
|
17
|
Leclerc M, Adamchuk V, Park J, Lachapelle-T. X. Development of Willow Tree Yield-Mapping Technology. Sensors (Basel) 2020; 20:2650. PMID: 32384703; PMCID: PMC7249128; DOI: 10.3390/s20092650. Received 02/26/2020; accepted 05/03/2020.
Abstract
With today’s environmental challenges, developing sustainable energy sources is crucial. From this perspective, woody biomass has been, and continues to be, a significant research interest. The goal of this research was to develop new technology for mapping the yield of willow trees grown in a short-rotation forestry (SRF) system. The system gathered the physical characteristics of willow trees on the go, while the trees were being harvested. Features assessed include the number of trees harvested and their diameter. To complete this task, a machine-vision system featuring an RGB-D stereovision camera was built. The system tagged these data with the corresponding geographical coordinates using a Global Navigation Satellite System (GNSS) receiver. The proposed yield-mapping system showed promising detection results considering the complex background and variable light conditions encountered outdoors. Of the 40 randomly selected and manually observed trees in a row, 36 were successfully detected, yielding a 90% detection rate. Across all trees within the scenes, the correct detection rate was 71.8%, since the system was sensitive to branches and falsely detected them as trees. Manual validation of the diameter estimation function showed a poor coefficient of determination and a root mean square error (RMSE) of 10.7 mm.
Affiliation(s)
- Maxime Leclerc
- Department of Bioresource Engineering, Faculty of Agricultural and Environmental Sciences, McGill University, Ste-Anne-de-Bellevue, QC H9X 3V9, Canada
- Viacheslav Adamchuk
- Department of Bioresource Engineering, Faculty of Agricultural and Environmental Sciences, McGill University, Ste-Anne-de-Bellevue, QC H9X 3V9, Canada
- *Correspondence: Viacheslav Adamchuk
- Jaesung Park
- Department of Bioresource Engineering, Faculty of Agricultural and Environmental Sciences, McGill University, Ste-Anne-de-Bellevue, QC H9X 3V9, Canada
|
18
|
Zhou C, Ye H, Hu J, Shi X, Hua S, Yue J, Xu Z, Yang G. Automated Counting of Rice Panicle by Applying Deep Learning Model to Images from Unmanned Aerial Vehicle Platform. Sensors (Basel) 2019; 19:3106. PMID: 31337086; PMCID: PMC6679257; DOI: 10.3390/s19143106. Received 06/03/2019; accepted 07/11/2019.
Abstract
The number of panicles per unit area is a common indicator of rice yield and is of great significance to yield estimation, breeding, and phenotype analysis. Traditional counting methods have various drawbacks, such as long delay times and high subjectivity, and they are easily perturbed by noise. To improve the accuracy of rice detection and counting in the field, we developed and implemented a panicle detection and counting system that is based on improved region-based fully convolutional networks, and we use the system to automate rice-phenotype measurements. The field experiments were conducted in target areas to train and test the system and used a light rotor-based unmanned aerial vehicle equipped with a high-definition RGB camera to collect images. The trained model achieved a precision of 0.868 on a held-out test set, which demonstrates the feasibility of this approach. The algorithm can deal with the irregular edges of rice panicles, the significantly different appearance between varieties and growing periods, the interference due to color overlap between panicles and leaves, and the variations in illumination intensity and shading effects in the field. The result is more accurate and efficient recognition of rice panicles, which facilitates rice breeding. Overall, the approach of training deep learning models on increasingly large and publicly available image datasets presents a clear path toward smartphone-assisted crop disease diagnosis on a global scale.
Affiliation(s)
- Chengquan Zhou
- Institute of Agricultural Equipment, Zhejiang Academy of Agricultural Sciences (ZAAS), Hangzhou 310000, China
- Hongbao Ye
- Institute of Agricultural Equipment, Zhejiang Academy of Agricultural Sciences (ZAAS), Hangzhou 310000, China
- Jun Hu
- Institute of Agricultural Equipment, Zhejiang Academy of Agricultural Sciences (ZAAS), Hangzhou 310000, China
- Xiaoyan Shi
- Institute of Agricultural Equipment, Zhejiang Academy of Agricultural Sciences (ZAAS), Hangzhou 310000, China
- Shan Hua
- Institute of Agricultural Equipment, Zhejiang Academy of Agricultural Sciences (ZAAS), Hangzhou 310000, China
- Jibo Yue
- Key Laboratory of Quantitative Remote Sensing in Agriculture of Ministry of Agriculture P. R. China, Beijing Research Center for Information Technology in Agriculture, Beijing 100089, China
- Key Laboratory of Agri-informatics, Ministry of Agriculture, Beijing 100089, China
- Zhifu Xu
- Institute of Agricultural Equipment, Zhejiang Academy of Agricultural Sciences (ZAAS), Hangzhou 310000, China
- Guijun Yang
- Key Laboratory of Quantitative Remote Sensing in Agriculture of Ministry of Agriculture P. R. China, Beijing Research Center for Information Technology in Agriculture, Beijing 100089, China
- Key Laboratory of Agri-informatics, Ministry of Agriculture, Beijing 100089, China
|
19
|
Di Gennaro SF, Toscano P, Cinat P, Berton A, Matese A. A Low-Cost and Unsupervised Image Recognition Methodology for Yield Estimation in a Vineyard. Front Plant Sci 2019; 10:559. PMID: 31130974; PMCID: PMC6509744; DOI: 10.3389/fpls.2019.00559. Received 10/31/2018; accepted 04/12/2019.
Abstract
Yield prediction is a key factor in optimizing vineyard management and achieving the desired grape quality. Classical yield estimation methods, which consist of manual sampling within the field on a limited number of plants before harvest, are time-consuming and frequently insufficient to obtain representative yield data. Non-invasive machine vision methods are therefore being investigated to assess and implement a rapid grape yield estimation tool. This study aimed at an automated estimation of yield, in terms of cluster number and size, from high-resolution RGB images (20 MP) taken with a low-cost UAV platform in zones representative of the vigor variability within an experimental vineyard. The flight campaigns were conducted under different light conditions and canopy cover levels in the 2017 and 2018 crop seasons. An unsupervised recognition algorithm was applied to derive cluster number and size, which were used to estimate yield per vine. The results for the number of clusters detected in different conditions, and the weight estimation for each vigor zone, are presented. The segmentation achieved cluster-detection performance of over 85% under partial leaf removal and full-ripeness conditions, and allowed grapevine yield to be estimated with more than 84% accuracy several weeks before harvest. The application of innovative technologies in field phenotyping, such as UAVs, high-resolution cameras, and visual computing algorithms, enabled a new methodology for assessing yield that can save time and provide an accurate estimate compared with the manual method.
Affiliation(s)
- Salvatore Filippo Di Gennaro
- Institute of Biometeorology, National Research Council (CNR-IBIMET), Florence, Italy
- *Correspondence: Salvatore Filippo Di Gennaro, Piero Toscano
- Piero Toscano
- Institute of Biometeorology, National Research Council (CNR-IBIMET), Florence, Italy
- Paolo Cinat
- Institute of Biometeorology, National Research Council (CNR-IBIMET), Florence, Italy
- Andrea Berton
- Institute of Clinical Physiology, National Research Council (CNR-IFC), Pisa, Italy
- Alessandro Matese
- Institute of Biometeorology, National Research Council (CNR-IBIMET), Florence, Italy
|
20
|
Yu B, Shang S. Multi-Year Mapping of Major Crop Yields in an Irrigation District from High Spatial and Temporal Resolution Vegetation Index. Sensors (Basel) 2018; 18:3787. PMID: 30404139; PMCID: PMC6263990; DOI: 10.3390/s18113787. Received 09/22/2018; accepted 11/01/2018.
Abstract
Crop yield estimation is important for formulating informed regional and national food trade policies. The introduction of remote sensing in agricultural monitoring makes accurate estimation of regional crop yields possible. However, remote sensing images and crop distribution maps with coarse spatial resolution usually cause inaccuracy in yield estimation due to the existence of mixed pixels. This study aimed to estimate the annual yields of maize and sunflower in Hetao Irrigation District in North China using 30 m spatial resolution HJ-1A/1B CCD images and high accuracy multi-year crop distribution maps. The Normalized Difference Vegetation Index (NDVI) time series obtained from HJ-1A/1B CCD images was fitted with an asymmetric logistic curve to calculate daily NDVI and phenological characteristics. Eight random forest (RF) models using different predictors were developed for maize and sunflower yield estimation, respectively, where predictors of each model were a combination of NDVI series and/or phenological characteristics. We calibrated all RF models with measured crop yields at sampling points in two years (2014 and 2015), and validated the RF models with statistical yields of four counties in six years. Results showed that the optimal model for maize yield estimation was the model using NDVI series from the 120th to the 210th day in a year with 10 days' interval as predictors, while that for sunflower was the model using the combination of three NDVI characteristics, three phenological characteristics, and two curve parameters as predictors. The selected RF models could estimate multi-year regional crop yields accurately, with the average values of root-mean-square error and the relative error of 0.75 t/ha and 6.1% for maize, and 0.40 t/ha and 10.1% for sunflower, respectively. 
Moreover, the yields of maize and sunflower could be estimated fairly well from NDVI series ending 50 days before crop harvest, which indicates the possibility of forecasting crop yield before harvest.
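The interpolation step above fits an asymmetric logistic curve to sparse NDVI observations to obtain a daily series. A sketch using a common double-logistic parameterization from the phenology literature (the paper's exact functional form is not reproduced here); the synthetic observation dates and parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, base, amp, k1, t1, k2, t2):
    """Asymmetric seasonal NDVI curve: a rising and a falling logistic
    with independent slopes (k1, k2) and inflection dates (t1, t2)."""
    rise = 1.0 / (1.0 + np.exp(-k1 * (t - t1)))
    fall = 1.0 / (1.0 + np.exp(-k2 * (t - t2)))
    return base + amp * (rise - fall)

# Synthetic NDVI observations every 8 days over the season (day of year).
t_obs = np.arange(120.0, 281.0, 8.0)
rng = np.random.default_rng(3)
ndvi = double_logistic(t_obs, 0.15, 0.6, 0.12, 160.0, 0.10, 240.0)
ndvi += rng.normal(0.0, 0.02, t_obs.size)

# Fit the curve, then evaluate it daily to obtain the interpolated series.
p0 = (0.1, 0.5, 0.1, 150.0, 0.1, 230.0)
popt, _ = curve_fit(double_logistic, t_obs, ndvi, p0=p0, maxfev=10000)
daily_ndvi = double_logistic(np.arange(120.0, 281.0), *popt)
print(round(popt[3], 1), round(float(daily_ndvi.max()), 2))
```

The fitted inflection dates and curve parameters are exactly the kind of phenological characteristics the paper feeds into its random forest models alongside the NDVI values themselves.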
Affiliation(s)
- Bing Yu
- State Key Laboratory of Hydroscience and Engineering, Tsinghua University, Beijing 100084, China.
- Songhao Shang
- State Key Laboratory of Hydroscience and Engineering, Tsinghua University, Beijing 100084, China.
21
Zhou C, Liang D, Yang X, Yang H, Yue J, Yang G. Wheat Ears Counting in Field Conditions Based on Multi-Feature Optimization and TWSVM. Front Plant Sci 2018; 9:1024. [PMID: 30057587 PMCID: PMC6053621 DOI: 10.3389/fpls.2018.01024] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Received: 04/10/2018] [Accepted: 06/25/2018] [Indexed: 05/05/2023]
Abstract
The number of wheat ears in the field is important data for predicting crop growth and estimating crop yield, and as such is receiving ever-increasing research attention. To obtain such data, we propose a novel algorithm that uses computer vision to accurately recognize wheat ears in a digital image. First, red-green-blue images acquired by a manned ground vehicle are selected based on light intensity to ensure that the method is robust with respect to illumination. Next, the selected images are cropped so that the target remains identifiable in the retained parts. The simple linear iterative clustering method, which is based on superpixel theory, is then used to generate patches from the selected images. Each patch is manually labeled as one of two categories: wheat ears or background. The color feature "Color Coherence Vectors," the texture feature "Gray Level Co-Occurrence Matrix," and a special image feature, the "Edge Histogram Descriptor," are then extracted from these patches to form a high-dimensional "feature matrix." Because each feature plays a different role in the classification process, a feature-weighting fusion based on kernel principal component analysis is used to redistribute the feature weights. Finally, a twin-support-vector-machine segmentation (TWSVM-Seg) model is trained to learn the differences between the two types of patches from these features; the trained TWSVM-Seg model then classifies each pixel of a test sample and outputs the result as a binary image, thereby segmenting it. Next, we use a statistical function in MATLAB to obtain a precise count of ears. To verify these counts, we compare them with field measurements of the wheat plots. 
The result of applying the proposed algorithm to ground-shooting image data sets correlates strongly (with a precision of 0.79-0.82) with the data obtained by manual counting. An average running time of 0.1 s is required to successfully extract the correct number of ears from the background, which shows that the proposed algorithm is computationally efficient. These results indicate that the proposed method provides accurate phenotypic data on wheat seedlings.
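As a minimal illustration of one of the texture descriptors named above, a Gray Level Co-Occurrence Matrix and two features commonly derived from it can be computed in plain Python. The two toy 4x4 patches are invented; the paper's actual feature pipeline also fuses color and edge descriptors:

```python
def glcm(image, levels, dx=1, dy=0):
    """Gray Level Co-Occurrence Matrix for one pixel offset (dx, dy),
    with counts normalised so the matrix entries sum to 1."""
    h, w = len(image), len(image[0])
    m = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[image[y][x]][image[ny][nx]] += 1
                total += 1
    return [[v / total for v in row] for row in m]

def glcm_features(m):
    """Contrast and energy, two standard GLCM texture features."""
    n = len(m)
    contrast = sum(m[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
    energy = sum(v * v for row in m for v in row)
    return contrast, energy

# Toy 4-level patches: a uniform region vs a checkerboard-like region.
flat = [[1, 1, 1, 1]] * 4
rough = [[0, 3, 0, 3], [3, 0, 3, 0], [0, 3, 0, 3], [3, 0, 3, 0]]
```

A uniform patch yields zero contrast and maximal energy, while the checkerboard yields high contrast and low energy, which is the kind of separation that lets a classifier distinguish textured ear regions from smoother background.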
Affiliation(s)
- Chengquan Zhou
- School of Electronics and Information Engineering, Anhui University, Hefei, China
- Key Laboratory of Quantitative Remote Sensing in Agriculture of Ministry of Agriculture P. R. China, Beijing Research Center for Information Technology in Agriculture, Beijing, China
- Dong Liang
- School of Electronics and Information Engineering, Anhui University, Hefei, China
- Xiaodong Yang
- Key Laboratory of Quantitative Remote Sensing in Agriculture of Ministry of Agriculture P. R. China, Beijing Research Center for Information Technology in Agriculture, Beijing, China
- National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- Hao Yang
- Key Laboratory of Quantitative Remote Sensing in Agriculture of Ministry of Agriculture P. R. China, Beijing Research Center for Information Technology in Agriculture, Beijing, China
- National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- Jibo Yue
- Key Laboratory of Quantitative Remote Sensing in Agriculture of Ministry of Agriculture P. R. China, Beijing Research Center for Information Technology in Agriculture, Beijing, China
- International Institute for Earth System Science, Nanjing University, Nanjing, China
- Guijun Yang
- Key Laboratory of Quantitative Remote Sensing in Agriculture of Ministry of Agriculture P. R. China, Beijing Research Center for Information Technology in Agriculture, Beijing, China
- National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- *Correspondence: Guijun Yang
22
Rahnemoonfar M, Sheppard C. Deep Count: Fruit Counting Based on Deep Simulated Learning. Sensors (Basel) 2017; 17:s17040905. [PMID: 28425947 PMCID: PMC5426829 DOI: 10.3390/s17040905] [Citation(s) in RCA: 122] [Impact Index Per Article: 17.4] [Received: 02/18/2017] [Revised: 04/05/2017] [Accepted: 04/07/2017] [Indexed: 11/17/2022]
Abstract
Recent years have witnessed significant advances in computer vision research based on deep learning. The success of these methods largely depends on the availability of a large number of training samples, and labeling training samples is an expensive process. In this paper, we present a deep convolutional neural network for yield estimation trained on simulated data. Knowing the exact number of fruits, flowers, and trees helps farmers make better decisions on cultivation practices, plant disease prevention, and the size of the harvest labor force. The current practice of yield estimation, based on manual counting of fruits or flowers by workers, is very time-consuming and expensive, and it is not practical for large fields. Automatic yield estimation based on robotic agriculture provides a viable solution in this regard. Our network is trained entirely on synthetic data and tested on real data. To capture features at multiple scales, we used a modified version of the Inception-ResNet architecture. Our algorithm counts efficiently even if fruits are under shadow, occluded by foliage or branches, or if there is some degree of overlap amongst fruits. Experimental results show a 91% average test accuracy on real images and 93% on synthetic images.
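The simulated-training idea can be illustrated with a toy generator: random red discs on a green background come with a known count, providing labeled samples at no annotation cost. Everything here (canvas size, colours, disc radii) is an invented stand-in for the paper's rendered training images:

```python
import random

def synthetic_sample(width=64, height=64, max_fruit=10, seed=None):
    """Generate one (image, count) training pair: n random red discs
    ("fruit") drawn on a green background, with n returned as the label."""
    rng = random.Random(seed)
    n = rng.randint(1, max_fruit)
    img = [[(20, 90, 30) for _ in range(width)] for _ in range(height)]
    for _ in range(n):
        cx, cy = rng.randrange(width), rng.randrange(height)
        r = rng.randint(2, 5)
        for y in range(max(0, cy - r), min(height, cy + r + 1)):
            for x in range(max(0, cx - r), min(width, cx + r + 1)):
                if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
                    img[y][x] = (200, 40, 40)  # red "fruit" pixel
    return img, n
```

A counting network would then be trained to regress n from such images, with occlusion and overlap arising naturally when discs intersect.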
Affiliation(s)
- Maryam Rahnemoonfar
- Department of Computer Science, Texas A&M University-Corpus Christi, Corpus Christi, TX 78412, USA.
- Clay Sheppard
- Department of Computer Science, Texas A&M University-Corpus Christi, Corpus Christi, TX 78412, USA.
23
Stein M, Bargoti S, Underwood J. Image Based Mango Fruit Detection, Localisation and Yield Estimation Using Multiple View Geometry. Sensors (Basel) 2016; 16:s16111915. [PMID: 27854271 PMCID: PMC5134574 DOI: 10.3390/s16111915] [Citation(s) in RCA: 153] [Impact Index Per Article: 19.1] [Received: 10/14/2016] [Revised: 11/09/2016] [Accepted: 11/09/2016] [Indexed: 11/16/2022]
Abstract
This paper presents a novel multi-sensor framework to efficiently identify, track, localise and map every piece of fruit in a commercial mango orchard. A multiple viewpoint approach is used to solve the problem of occlusion, thus avoiding the need for labour-intensive field calibration to estimate actual yield. Fruit are detected in images using a state-of-the-art faster R-CNN detector, and pair-wise correspondences are established between images using trajectory data provided by a navigation system. A novel LiDAR component automatically generates image masks for each canopy, allowing each fruit to be associated with the corresponding tree. The tracked fruit are triangulated to locate them in 3D, enabling a number of spatial statistics per tree, row or orchard block. A total of 522 trees and 71,609 mangoes were scanned on a Calypso mango orchard near Bundaberg, Queensland, Australia, with 16 trees counted by hand for validation, both on the tree and after harvest. The results show that single, dual and multi-view methods can all provide precise yield estimates, but only the proposed multi-view approach can do so without calibration, with an error rate of only 1.36% for individual trees.
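The triangulation step — locating each tracked fruit in 3-D from rays cast by multiple camera poses — reduces to a small least-squares problem. A self-contained sketch follows; the two example rays are invented, and the paper's full pipeline additionally involves the faster R-CNN detections, the navigation-system poses, and the LiDAR canopy masks:

```python
def solve3(A, b):
    """Solve a 3x3 linear system A x = b by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    out = []
    for k in range(3):
        M = [row[:] for row in A]
        for i in range(3):
            M[i][k] = b[i]
        out.append(det(M) / d)
    return out

def triangulate(rays):
    """Least-squares point closest to all rays (origin o, unit direction d):
    minimising sum ||(I - d d^T)(x - o)||^2 over x gives the normal
    equations A x = b with A = sum(I - d d^T) and b = sum((I - d d^T) o)."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for o, d in rays:
        for i in range(3):
            for j in range(3):
                p = (1.0 if i == j else 0.0) - d[i] * d[j]
                A[i][j] += p
                b[i] += p * o[j]
    return solve3(A, b)

# Two invented camera rays that intersect at the fruit position (1, 2, 3).
rays = [((0.0, 2.0, 3.0), (1.0, 0.0, 0.0)),
        ((1.0, 0.0, 3.0), (0.0, 1.0, 0.0))]
fruit = triangulate(rays)
```

With more than two views the same normal equations average out detection noise, which is what makes the per-tree spatial statistics possible.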
Affiliation(s)
- Madeleine Stein
- Division of Automatic Control Department of Electrical Engineering, Linköping University, Linköping SE-581 83, Sweden.
- Suchet Bargoti
- The Australian Centre for Field Robotics (ACFR), Department of Aerospace, Mechanical and Mechatronic Engineering (AMME), The University of Sydney, Sydney, NSW 2006, Australia.
- James Underwood
- The Australian Centre for Field Robotics (ACFR), Department of Aerospace, Mechanical and Mechatronic Engineering (AMME), The University of Sydney, Sydney, NSW 2006, Australia.