1
Tan H, Zhao X, Zhai C, Fu H, Chen L, Yang M. Design and experiments with a SLAM system for low-density canopy environments in greenhouses based on an improved Cartographer framework. Frontiers in Plant Science 2024; 15:1276799. PMID: 38362453; PMCID: PMC10867628; DOI: 10.3389/fpls.2024.1276799. Received 08/13/2023; accepted 01/03/2024.
Abstract
To address the problem that the low-density canopy of greenhouse crops degrades the robustness and accuracy of simultaneous localization and mapping (SLAM) algorithms, a greenhouse map construction method for agricultural robots based on multiline LiDAR was investigated. Building on the Cartographer framework, this paper proposes a map construction and localization method based on spatial downsampling. Taking suspended tomato plants grown in greenhouses as the research object, an adaptive filtering point cloud projection (AF-PCP) SLAM algorithm was designed. Using wheel odometry, 16-line LiDAR point cloud data based on adaptive vertical projection were linearly interpolated to construct a map and perform high-precision pose estimation in a greenhouse with a low-density canopy. Experiments were carried out in canopy environments with leaf area densities (LADs) of 2.945-5.301 m²/m³. The results showed that the AF-PCP SLAM algorithm increased the average mapped area of the crop rows by 155.7% compared with the Cartographer algorithm. The mean error and coefficient of variation of the crop row length were 0.019 m and 0.217%, respectively, 77.9% and 87.5% lower than those of the Cartographer algorithm. The average maximum void length was 0.124 m, 72.8% lower than that of the Cartographer algorithm. Localization experiments were carried out at speeds of 0.2 m/s, 0.4 m/s, and 0.6 m/s; the average relative localization errors were 0.026 m, 0.029 m, and 0.046 m, respectively, with a standard deviation below 0.06 m. Compared with a dead-reckoning (track deduction) algorithm, the proposed algorithm reduced the average localization error by 79.9%.
These results show that the proposed framework can map and localize robots precisely even in low-density canopy greenhouse environments, highlighting its promise for the autonomous navigation of agricultural robots.
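The AF-PCP implementation is not public; purely as an illustration of the spatial-downsampling idea it builds on, the following sketch (with a hypothetical height band and cell size) projects a 3D cloud onto the ground plane and keeps one averaged point per grid cell:

```python
import numpy as np

def project_and_downsample(points, z_min=0.2, z_max=1.5, cell=0.05):
    """Keep points inside a height band, project them onto the XY plane,
    and keep one averaged point per 2D grid cell (spatial downsampling)."""
    pts = np.asarray(points, dtype=float)
    band = pts[(pts[:, 2] >= z_min) & (pts[:, 2] <= z_max)]  # height filter
    xy = band[:, :2]                                         # vertical projection
    keys = np.floor(xy / cell).astype(int)                   # grid-cell index per point
    cells = {}
    for k, p in zip(map(tuple, keys), xy):
        cells.setdefault(k, []).append(p)
    return np.array([np.mean(v, axis=0) for v in cells.values()])

cloud = np.array([[0.0, 0.0, 0.5], [0.01, 0.01, 0.6],  # same cell, inside band
                  [1.0, 1.0, 0.5],                      # different cell
                  [2.0, 2.0, 3.0]])                     # above band, dropped
print(project_and_downsample(cloud).shape)  # (2, 2)
```

The band limits and 5 cm cell are placeholders; a real pipeline would tune them to the canopy geometry.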
Affiliation(s)
- Haoran Tan: College of Engineering, China Agricultural University, Beijing, China; Intelligent Equipment Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- Xueguan Zhao: Intelligent Equipment Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China; National Engineering Research Center for Information Technology in Agriculture, Beijing, China; Beijing PAIDE Science and Technology Development Co., Ltd, Beijing, China
- Changyuan Zhai: Intelligent Equipment Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China; National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- Hao Fu: Intelligent Equipment Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- Liping Chen: Intelligent Equipment Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China; National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- Minli Yang: College of Engineering, China Agricultural University, Beijing, China
2
Popescu D, Dinca A, Ichim L, Angelescu N. New trends in detection of harmful insects and pests in modern agriculture using artificial neural networks: a review. Frontiers in Plant Science 2023; 14:1268167. PMID: 38023916; PMCID: PMC10652400; DOI: 10.3389/fpls.2023.1268167. Received 07/27/2023; accepted 10/11/2023.
Abstract
Modern and precision agriculture is constantly evolving, and the use of technology has become a critical factor in improving crop yields and protecting plants from harmful insects and pests. The use of neural networks is emerging as a new trend in modern agriculture that enables machines to learn and recognize patterns in data. In recent years, researchers and industry experts have been exploring the use of neural networks for detecting harmful insects and pests in crops, allowing farmers to act early and mitigate damage. This paper provides an overview of new trends in modern agriculture for harmful insect and pest detection using neural networks. Through a systematic review, the benefits and challenges of this technology are highlighted, as well as the approaches researchers are taking to improve its effectiveness. Specifically, the review focuses on the use of ensembles of neural networks, pest databases, modern software, and innovative modified architectures for pest detection. The review analyses research papers published between 2015 and 2022, with the new-trend analysis focusing on work from 2020 to 2022. The study concludes by emphasizing the significance of ongoing research and development of neural network-based pest detection systems for maintaining sustainable and efficient agricultural production.
Affiliation(s)
- Dan Popescu: Faculty of Automatic Control and Computers, University Politehnica of Bucharest, Bucharest, Romania
- Alexandru Dinca: Faculty of Automatic Control and Computers, University Politehnica of Bucharest, Bucharest, Romania
- Loretta Ichim: Faculty of Automatic Control and Computers, University Politehnica of Bucharest, Bucharest, Romania
- Nicoleta Angelescu: Faculty of Electrical Engineering, Electronics, and Information Technology, University Valahia of Targoviste, Targoviste, Romania
3
Gupta H, Andreasson H, Lilienthal AJ, Kurtser P. Robust scan registration for navigation in forest environments using low-resolution LiDAR sensors. Sensors 2023; 23:4736. PMID: 37430655; DOI: 10.3390/s23104736. Received 03/30/2023; revised 04/30/2023; accepted 05/11/2023.
Abstract
Automated forest machines are becoming important because the complex and dangerous working conditions faced by human operators are leading to a labor shortage. This study proposes a new method for robust SLAM and tree mapping in forestry conditions using low-resolution LiDAR sensors. Our method relies on tree detection to perform scan registration and pose correction using only low-resolution (16- or 32-channel) LiDAR sensors or narrow-field-of-view solid-state LiDARs, without additional sensory modalities such as GPS or an IMU. We evaluate our approach on three datasets (two private and one public) and demonstrate improved navigation accuracy, scan registration, tree localization, and tree diameter estimation compared with current approaches in forestry machine automation. Our results show that the proposed method yields robust scan registration using detected trees, outperforming generalized feature-based registration algorithms such as the Fast Point Feature Histogram, with a reduction in RMSE of more than 3 m for the 16-channel LiDAR sensor. For the solid-state LiDAR, the algorithm achieves a similar RMSE of 3.7 m. Additionally, our adaptive pre-processing and heuristic approach to tree detection increased the number of detected trees by 13% compared with the current approach of using fixed radius-search parameters for pre-processing. Our automated tree trunk diameter estimation method yields a mean absolute error of 4.3 cm (RMSE = 6.5 cm) for the local map and complete trajectory maps.
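The paper's full pipeline is not reproduced here; one building block it evaluates, estimating a trunk's diameter from a horizontal slice of LiDAR points, can be sketched with an algebraic (Kåsa) least-squares circle fit:

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit.
    Solves x^2 + y^2 = 2ax + 2by + c; returns center (a, b) and radius r."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)

# Points on a trunk cross-section of radius 0.15 m centred at (1, 2).
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = np.column_stack([1 + 0.15 * np.cos(t), 2 + 0.15 * np.sin(t)])
a, b, r = fit_circle(pts)
print(round(2 * r, 3))  # diameter -> 0.3
```

Real scans are noisy and partial (the sensor sees only one side of the trunk), which is why the paper's adaptive pre-processing matters; this fit is the idealized core step.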
Affiliation(s)
- Himanshu Gupta: Centre for Applied Autonomous Sensor Systems, Örebro University, 702 81 Örebro, Sweden
- Henrik Andreasson: Centre for Applied Autonomous Sensor Systems, Örebro University, 702 81 Örebro, Sweden
- Achim J Lilienthal: Centre for Applied Autonomous Sensor Systems, Örebro University, 702 81 Örebro, Sweden; Perception for Intelligent Systems, Technical University of Munich, 80992 Munich, Germany
- Polina Kurtser: Centre for Applied Autonomous Sensor Systems, Örebro University, 702 81 Örebro, Sweden; Department of Radiation Science, Radiation Physics, Umeå University, 901 87 Umeå, Sweden
4
Wang L, Wei H. Winding pathway understanding based on angle projections in a field environment. Applied Intelligence 2022. DOI: 10.1007/s10489-022-04325-2.
5
Dong W, Roy P, Peng C, Isler V. Ellipse R-CNN: learning to infer elliptical object from clustering and occlusion. IEEE Transactions on Image Processing 2021; 30:2193-2206. PMID: 33471755; DOI: 10.1109/tip.2021.3050673.
Abstract
Images of heavily occluded objects in cluttered scenes, such as fruit clusters in trees, are hard to segment. To further retrieve the 3D size and 6D pose of each individual object in such cases, bounding boxes are not reliable across multiple views, since only a small portion of the object's geometry is captured. We introduce the first CNN-based ellipse detector, called Ellipse R-CNN, to represent and infer occluded objects as ellipses. We first propose a robust and compact ellipse regression based on the Mask R-CNN architecture for elliptical object detection. Our method can infer the parameters of multiple elliptical objects even when they are occluded by neighboring objects. For better occlusion handling, we exploit refined feature regions for the regression stage and integrate the U-Net structure to learn different occlusion patterns and compute the final detection score. The correctness of the ellipse regression is validated through experiments on synthetic data of clustered ellipses. We further demonstrate, quantitatively and qualitatively, that our approach outperforms the state-of-the-art model (i.e., Mask R-CNN followed by ellipse fitting) and its three variants on both synthetic and real datasets of occluded and clustered elliptical objects.
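Ellipse R-CNN regresses ellipse parameters with a CNN; as a much simpler illustration of the (cx, cy, a, b, θ) parametrization it predicts (and not the paper's method), the same parameters can be recovered from points sampled inside an ellipse via first and second moments:

```python
import numpy as np

def ellipse_from_points(pts):
    """Estimate (cx, cy, a, b, theta) from points sampled uniformly inside an
    ellipse. For a filled ellipse the covariance eigenvalues are (a^2/4, b^2/4),
    so the semi-axes are twice the square roots of the eigenvalues."""
    c = pts.mean(axis=0)
    cov = np.cov((pts - c).T)
    w, v = np.linalg.eigh(cov)              # eigenvalues in ascending order
    a, b = 2 * np.sqrt(w[::-1])             # semi-axes, major axis first
    theta = np.arctan2(v[1, -1], v[0, -1])  # orientation of the major axis
    return c[0], c[1], a, b, theta

# Uniform samples inside an axis-aligned ellipse (a=3, b=1) centred at (5, -2).
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, (4000, 2))
u = u[(u**2).sum(axis=1) <= 1]              # rejection-sample the unit disk
pts = u * [3, 1] + [5, -2]
cx, cy, a, b, theta = ellipse_from_points(pts)
print(f"{cx:.0f} {cy:.0f}")  # prints "5 -2"
```

The moment estimate needs many interior points, whereas the paper's detector works from a single occluded image region; the sketch only clarifies what the five regressed parameters mean.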
6
Lowe T, Moghadam P, Edwards E, Williams J. Canopy density estimation in perennial horticulture crops using 3D spinning LiDAR SLAM. Journal of Field Robotics 2021. DOI: 10.1002/rob.22006.
Affiliation(s)
- Thomas Lowe: Robotics and Autonomous Systems, CSIRO Data61, Brisbane, Queensland, Australia
- Peyman Moghadam: Robotics and Autonomous Systems, CSIRO Data61, Brisbane, Queensland, Australia
- Everard Edwards: CSIRO Agriculture & Food, Waite Campus, South Australia, Australia
- Jason Williams: Robotics and Autonomous Systems, CSIRO Data61, Brisbane, Queensland, Australia
7
Mobile LiDAR scanning system combined with canopy morphology extracting methods for tree crown parameters evaluation in orchards. Sensors 2021; 21:339. PMID: 33419182; PMCID: PMC7825505; DOI: 10.3390/s21020339. Received 11/25/2020; revised 12/25/2020; accepted 01/03/2021.
Abstract
To meet the demand for canopy morphological parameter measurements in orchards, a mobile scanning system was designed based on a 3D simultaneous localization and mapping (SLAM) algorithm. The system uses a lightweight LiDAR-inertial measurement unit (LiDAR-IMU) state estimator and a rotation-constrained optimization algorithm to reconstruct a point cloud map of the orchard. Statistical outlier removal (SOR) filtering and Euclidean clustering algorithms are then used to segment the orchard point cloud after the ground has been separated out, and a k-nearest neighbour (KNN) search is used to restore the filtered point cloud. Finally, the height of the fruit trees and the volume of the canopy are obtained by point cloud statistics and the 3D alpha-shape algorithm. To verify the algorithm, tracked robots equipped with LiDAR and an IMU were used in a standardized orchard. Experiments show that the system can reconstruct the orchard point cloud environment with high accuracy and can obtain the point cloud information of all fruit trees in the orchard. The accuracy of point cloud-based segmentation of fruit trees in the orchard is 95.4%. The R² and root mean square error (RMSE) values for crown height are 0.93682 and 0.04337, respectively, and the corresponding values for canopy volume are 0.8406 and 1.5738. In summary, this system achieves good evaluation of orchard crown information and has important application value in the intelligent measurement of fruit trees.
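The SOR filtering step mentioned above has a compact definition: drop every point whose mean distance to its k nearest neighbours exceeds the global mean of those distances by more than a set number of standard deviations. A brute-force numpy sketch (k and the threshold ratio are illustrative, not the paper's settings):

```python
import numpy as np

def sor_filter(pts, k=5, std_ratio=1.0):
    """Statistical Outlier Removal: keep points whose mean k-NN distance is
    within (global mean + std_ratio * global std). O(n^2), for small clouds."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                      # ignore self-distance
    knn_mean = np.sort(d, axis=1)[:, :k].mean(axis=1)
    thresh = knn_mean.mean() + std_ratio * knn_mean.std()
    return pts[knn_mean <= thresh]

rng = np.random.default_rng(1)
cloud = rng.normal(0, 0.05, (50, 3))                 # dense cluster
cloud = np.vstack([cloud, [[5.0, 5.0, 5.0]]])        # one far outlier
print(len(sor_filter(cloud)))  # 50
```

Production pipelines use a k-d tree for the neighbour search instead of the full distance matrix; the statistic and threshold are the same.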
8
Gené-Mola J, Llorens J, Rosell-Polo JR, Gregorio E, Arnó J, Solanelles F, Martínez-Casasnovas JA, Escolà A. Assessing the performance of RGB-D sensors for 3D fruit crop canopy characterization under different operating and lighting conditions. Sensors 2020; 20:7072. PMID: 33321817; PMCID: PMC7764794; DOI: 10.3390/s20247072. Received 11/09/2020; revised 12/02/2020; accepted 12/07/2020.
Abstract
The use of 3D sensors combined with appropriate data processing and analysis has provided tools to optimise agricultural management through the application of precision agriculture. The recent development of low-cost RGB-Depth cameras has presented an opportunity to introduce 3D sensors into the agricultural community. However, because these sensors are sensitive to highly illuminated environments, it is necessary to know under which conditions RGB-D sensors are capable of operating. This work presents a methodology to evaluate the performance of RGB-D sensors under different lighting and distance conditions, considering both geometrical and spectral (colour and NIR) features. The methodology was applied to evaluate the performance of the Microsoft Kinect v2 sensor in an apple orchard. The results show that sensor resolution and precision decreased significantly under moderate to high ambient illuminance (>2000 lx). However, this effect was minimised when measurements were conducted closer to the target. In contrast, illuminance levels below 50 lx affected the quality of colour data and may require the use of artificial lighting. The methodology was useful for characterizing sensor performance throughout the full range of ambient conditions in commercial orchards. Although the Kinect v2 was originally developed for indoor conditions, it performed well under a range of outdoor conditions.
Affiliation(s)
- Jordi Gené-Mola: Research Group in AgroICT & Precision Agriculture, Department of Agricultural and Forest Engineering, Universitat de Lleida (UdL)–Agrotecnio Centre, Lleida, 25198 Catalonia, Spain
- Jordi Llorens: Research Group in AgroICT & Precision Agriculture, Department of Agricultural and Forest Engineering, Universitat de Lleida (UdL)–Agrotecnio Centre, Lleida, 25198 Catalonia, Spain
- Joan R. Rosell-Polo: Research Group in AgroICT & Precision Agriculture, Department of Agricultural and Forest Engineering, Universitat de Lleida (UdL)–Agrotecnio Centre, Lleida, 25198 Catalonia, Spain
- Eduard Gregorio: Research Group in AgroICT & Precision Agriculture, Department of Agricultural and Forest Engineering, Universitat de Lleida (UdL)–Agrotecnio Centre, Lleida, 25198 Catalonia, Spain
- Jaume Arnó: Research Group in AgroICT & Precision Agriculture, Department of Agricultural and Forest Engineering, Universitat de Lleida (UdL)–Agrotecnio Centre, Lleida, 25198 Catalonia, Spain
- Francesc Solanelles: Department of Agriculture, Livestock, Fisheries and Food, Generalitat de Catalunya, Lleida, 25198 Catalunya, Spain
- José A. Martínez-Casasnovas: Research Group in AgroICT & Precision Agriculture, Department of Environmental and Soil Sciences, Universitat de Lleida (UdL)–Agrotecnio Centre, Lleida, 25198 Catalonia, Spain
- Alexandre Escolà: Research Group in AgroICT & Precision Agriculture, Department of Agricultural and Forest Engineering, Universitat de Lleida (UdL)–Agrotecnio Centre, Lleida, 25198 Catalonia, Spain
9
Abstract
Self-localization in highly dynamic environments is still a challenging problem for humanoid robots with limited computational resources. In this paper, we propose a dual-channel unscented particle filter (DC-UPF)-based localization method to address it. A key novelty of this approach is that it employs a dual-channel switch mechanism in the measurement-updating procedure of the particle filter, coping with sparse visual features during motion, and it leverages data from a camera, a walking odometer, and an inertial measurement unit. Extensive experiments with a NAO robot demonstrate that DC-UPF outperforms UPF and Monte Carlo localization with regard to accuracy.
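The DC-UPF itself is not public; the measurement-update and resampling cycle that its dual-channel switch selects between can be sketched with a minimal 1-D particle filter (all quantities here are illustrative):

```python
import numpy as np

def pf_step(particles, weights, control, meas, meas_std, rng):
    """One predict / measurement-update / resample cycle of a 1-D particle filter."""
    # Predict: apply the control input plus process noise.
    particles = particles + control + rng.normal(0, 0.05, len(particles))
    # Update: weight each particle by the Gaussian likelihood of the measurement.
    weights = weights * np.exp(-0.5 * ((meas - particles) / meas_std) ** 2)
    weights = weights / weights.sum()
    # Resample proportionally to weight, then reset to uniform weights.
    idx = rng.choice(len(particles), len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

rng = np.random.default_rng(2)
particles = rng.uniform(0, 10, 500)        # unknown start in a 10 m corridor
weights = np.full(500, 1 / 500)
true_pos = 3.0
for _ in range(10):                        # stationary robot, repeated range readings
    meas = true_pos + rng.normal(0, 0.1)
    particles, weights = pf_step(particles, weights, 0.0, meas, 0.1, rng)
print(particles.mean())  # estimate concentrates near the true position (3.0)
```

In the paper's scheme, the switch chooses which sensor channel drives the update step when visual features are sparse; the cycle itself is the standard one shown here.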
10
Verbiest R, Ruysen K, Vanwalleghem T, Demeester E, Kellens K. Automation and robotics in the cultivation of pome fruit: where do we stand today? Journal of Field Robotics 2020. DOI: 10.1002/rob.22000.
Affiliation(s)
- Rafaël Verbiest: Department of Mechanical Engineering, ACRO Research Group, KU Leuven, Diepenbeek, Belgium
- Kris Ruysen: Department of Environment and Technology, Research Center for Fruit (pcfruit) npo, Sint-Truiden, Belgium
- Tanja Vanwalleghem: Department of Environment and Technology, Research Center for Fruit (pcfruit) npo, Sint-Truiden, Belgium
- Eric Demeester: Department of Mechanical Engineering, ACRO Research Group, KU Leuven, Diepenbeek, Belgium
- Karel Kellens: Department of Mechanical Engineering, ACRO Research Group, KU Leuven, Diepenbeek, Belgium
11
Tremblay J, Béland M, Gagnon R, Pomerleau F, Giguère P. Automatic three-dimensional mapping for tree diameter measurements in inventory operations. Journal of Field Robotics 2020. DOI: 10.1002/rob.21980.
Affiliation(s)
- Martin Béland: Department of Geomatics Sciences, Université Laval, Québec, Canada
- Richard Gagnon: Centre de Recherche Industrielle du Québec, Québec, Canada
12
Coupel-Ledru A, Pallas B, Delalande M, Boudon F, Carrié E, Martinez S, Regnard JL, Costes E. Multi-scale high-throughput phenotyping of apple architectural and functional traits in orchard reveals genotypic variability under contrasted watering regimes. Horticulture Research 2019; 6:52. PMID: 31044079; PMCID: PMC6491481; DOI: 10.1038/s41438-019-0137-3. Received 11/06/2018; revised 01/17/2019; accepted 01/23/2019.
Abstract
Despite previous reports on the genotypic variation of architectural and functional traits in fruit trees, phenotyping large populations in the field remains challenging. In this study, we used high-throughput phenotyping methods on an apple tree core collection (1000 individuals) grown under contrasted watering regimes. First, architectural phenotyping was achieved using T-LiDAR scans for estimating convex and alpha hull volumes and the silhouette to total leaf area ratio (STAR). Second, a semi-empirical index (I_PL) was computed from chlorophyll fluorescence measurements as a proxy for leaf photosynthesis. Finally, thermal infrared and multispectral airborne imaging was used for computing canopy temperature variations, water deficit, and vegetation indices. All traits estimated by these methods were compared with low-throughput in planta measurements. Vegetation indices and alpha hull volumes were significantly correlated with tree leaf area and trunk cross-sectional area, while I_PL values showed strong correlations with photosynthesis measurements collected on an independent leaf dataset. By contrast, correlations between stomatal conductance and canopy temperature estimated from airborne images were lower, emphasizing discrepancies across measurement scales. High heritability values were obtained for almost all the traits except leaf photosynthesis, likely due to large intra-tree variation. Genotypic means were used in a clustering procedure that defined six classes of architectural and functional combinations. Differences between groups showed several combinations of architectural and functional traits, suggesting independent genetic controls. This study demonstrates the feasibility and relevance of combining multi-scale high-throughput methods and paves the way to exploring the genetic bases of architectural and functional variation in woody crops under field conditions.
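The study derives crown volumes from 3D convex and alpha hulls of T-LiDAR points; as a reduced 2-D analogue of the convex-hull step (not the authors' code), projected crown area can be computed with a monotone-chain hull and the shoelace formula:

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            # Pop while the last turn is clockwise or collinear.
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) -
                                   (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return half(pts) + half(pts[::-1])  # lower hull + upper hull

def polygon_area(verts):
    """Shoelace formula for a simple polygon."""
    return abs(sum(x1 * y2 - x2 * y1 for (x1, y1), (x2, y2) in
                   zip(verts, verts[1:] + verts[:1]))) / 2

# Crown points projected to the ground plane: unit-square corners + interior points.
crown = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5), (0.2, 0.8)]
print(polygon_area(convex_hull(crown)))  # 1.0
```

The 3D case replaces the polygon area with a hull volume (e.g. via a Qhull binding), and the alpha hull tightens the envelope around concave crowns, which the convex hull overestimates.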
Affiliation(s)
- Aude Coupel-Ledru: UMR AGAP, Univ Montpellier, CIRAD, INRA, Montpellier SupAgro, 34398 Montpellier Cedex 5, France; present address: University of Bristol, School of Biological Sciences, Life Science Building, 24 Tyndall Avenue, Bristol BS8 1TQ, UK
- Benoît Pallas: UMR AGAP, Univ Montpellier, CIRAD, INRA, Montpellier SupAgro, 34398 Montpellier Cedex 5, France
- Magalie Delalande: UMR AGAP, Univ Montpellier, CIRAD, INRA, Montpellier SupAgro, 34398 Montpellier Cedex 5, France
- Frédéric Boudon: UMR AGAP, Univ Montpellier, CIRAD, INRA, Montpellier SupAgro, 34398 Montpellier Cedex 5, France; CIRAD, 34398 Montpellier Cedex 5, France
- Emma Carrié: UMR AGAP, Univ Montpellier, CIRAD, INRA, Montpellier SupAgro, 34398 Montpellier Cedex 5, France
- Sébastien Martinez: UMR AGAP, Univ Montpellier, CIRAD, INRA, Montpellier SupAgro, 34398 Montpellier Cedex 5, France
- Jean-Luc Regnard: UMR AGAP, Univ Montpellier, CIRAD, INRA, Montpellier SupAgro, 34398 Montpellier Cedex 5, France
- Evelyne Costes: UMR AGAP, Univ Montpellier, CIRAD, INRA, Montpellier SupAgro, 34398 Montpellier Cedex 5, France