1. Zhou J, Cui M, Wu Y, Gao Y, Tang Y, Jiang B, Wu M, Zhang J, Hou L. Detection of maize stem diameter by using RGB-D cameras' depth information under selected field condition. Frontiers in Plant Science 2024;15:1371252. [PMID: 38711601] [PMCID: PMC11070473] [DOI: 10.3389/fpls.2024.1371252]
Abstract
Stem diameter is a critical phenotypic parameter for maize, integral to yield prediction and lodging resistance assessment. Traditionally, this parameter has been quantified by manual measurement, a tedious and laborious process. To address these challenges, this study introduces a non-invasive field-based system that uses depth information from RGB-D cameras to measure maize stem diameter, offering a practical solution for rapid and non-destructive phenotyping. First, RGB images, depth images, and 3D point clouds of maize stems were captured with an RGB-D camera, and precise alignment between the RGB and depth images was achieved. The contours of maize stems were then delineated using 2D image processing techniques, followed by extraction of the stem's skeletal structure with a thinning-based skeletonization algorithm. Within the regions of interest on the maize stems, horizontal lines were constructed through points on the skeleton, yielding 2D pixel coordinates at the intersections of these lines with the stem contours. A back-projection from 2D pixel coordinates to 3D world coordinates was then computed by combining the depth data with the camera's intrinsic parameters, and the 3D world coordinates were mapped onto the 3D point cloud using rigid transformations. Finally, the maize stem diameter was determined by calculating the Euclidean distance between pairs of 3D world coordinate points. The method demonstrated a Mean Absolute Percentage Error (MAPE) of 3.01%, a Mean Absolute Error (MAE) of 0.75 mm, a Root Mean Square Error (RMSE) of 1.07 mm, and a coefficient of determination (R²) of 0.96, ensuring accurate measurement of maize stem diameter. This research provides a new method for precise and efficient crop phenotypic analysis and offers theoretical groundwork for the advancement of precision agriculture.
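The core geometric step, back-projecting the contour intersection pixels through the depth map and taking the Euclidean distance between the resulting 3D points, can be illustrated with a short sketch. This is not the authors' code: the pinhole deprojection below uses made-up intrinsics and pixel/depth values purely for illustration.

```python
import numpy as np

def deproject(u, v, z, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth z (m) to 3D camera coordinates
    via the pinhole model: X = (u - cx) * z / fx, Y = (v - cy) * z / fy."""
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

# Hypothetical intrinsics and the two contour/horizontal-line intersection pixels.
fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0   # placeholder values, not from the paper
left  = deproject(300, 250, 0.82, fx, fy, cx, cy)   # left edge of the stem
right = deproject(318, 250, 0.83, fx, fy, cx, cy)   # right edge of the stem

stem_diameter_mm = np.linalg.norm(right - left) * 1000.0
print(f"estimated stem diameter: {stem_diameter_mm:.1f} mm")
```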
Affiliation(s)
- Jing Zhou: College of Information Technology, Jilin Agricultural University, Changchun, China
- Mingren Cui: College of Information Technology, Jilin Agricultural University, Changchun, China
- Yushan Wu: College of Information Technology, Jilin Agricultural University, Changchun, China
- Yudi Gao: College of Information Technology, Jilin Agricultural University, Changchun, China
- Yijia Tang: College of Information Technology, Jilin Agricultural University, Changchun, China
- Bowen Jiang: College of Information Technology, Jilin Agricultural University, Changchun, China
- Min Wu: College of Information Technology, Jilin Agricultural University, Changchun, China
- Jian Zhang: Faculty of Agronomy, Jilin Agricultural University, Changchun, China; Department of Biology, University of British Columbia, Okanagan, Kelowna, BC, Canada
- Lixin Hou: College of Information Technology, Jilin Agricultural University, Changchun, China
2. Zuo Z, Mu J, Li W, Bu Q, Mao H, Zhang X, Han L, Ni J. Study on the detection of water status of tomato (Solanum lycopersicum L.) by multimodal deep learning. Frontiers in Plant Science 2023;14:1094142. [PMID: 37324706] [PMCID: PMC10264697] [DOI: 10.3389/fpls.2023.1094142]
Abstract
Water plays a very important role in the growth of tomato (Solanum lycopersicum L.), and detecting the water status of tomato is the key to precise irrigation. The objective of this study was to detect the water status of tomato by fusing RGB, NIR, and depth image information through deep learning. Five irrigation levels were set to cultivate tomatoes in different water states, with irrigation amounts of 150%, 125%, 100%, 75%, and 50% of the reference evapotranspiration calculated by a modified Penman-Monteith equation. The water status of the tomatoes was divided into five categories: severe irrigation deficit, slight irrigation deficit, moderate irrigation, slight over-irrigation, and severe over-irrigation. RGB images, depth images, and NIR images of the upper part of the tomato plant were used as datasets. The datasets were used to train and test tomato water status detection models built with single-modal and multimodal deep learning networks. In the single-modal networks, two CNNs, VGG-16 and ResNet-50, were each trained on a single RGB image, depth image, or NIR image, for a total of six cases. In the multimodal networks, two or more of the RGB, depth, and NIR images were trained with VGG-16 or ResNet-50, for a total of 20 combinations. Results showed that the accuracy of tomato water status detection based on single-modal deep learning ranged from 88.97% to 93.09%, while the accuracy based on multimodal deep learning ranged from 93.09% to 99.18%; multimodal deep learning significantly outperformed single-modal deep learning. The optimal model used a multimodal network with ResNet-50 for RGB images and VGG-16 for depth and NIR images. This study provides a novel method for non-destructive detection of tomato water status and a reference for precise irrigation management.
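As an illustration of the kind of fusion the study describes, here is a minimal late-fusion sketch pairing a ResNet-50 branch for RGB with VGG-16 branches for depth and NIR. The wiring (feature concatenation, replicating single-channel inputs to three channels) is an assumption for demonstration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultimodalWaterStatusNet(nn.Module):
    """Late-fusion sketch: ResNet-50 encodes RGB; VGG-16 encodes depth and NIR
    (each replicated to 3 channels); features are concatenated and classified
    into the five water-status classes."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.rgb_net = models.resnet50(weights=None)
        self.rgb_net.fc = nn.Identity()             # yields 2048-d features
        self.depth_net = models.vgg16(weights=None).features
        self.nir_net = models.vgg16(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)         # 512-d per VGG branch
        self.head = nn.Linear(2048 + 512 + 512, num_classes)

    def forward(self, rgb, depth, nir):
        f_rgb = self.rgb_net(rgb)
        f_d = self.pool(self.depth_net(depth.repeat(1, 3, 1, 1))).flatten(1)
        f_n = self.pool(self.nir_net(nir.repeat(1, 3, 1, 1))).flatten(1)
        return self.head(torch.cat([f_rgb, f_d, f_n], dim=1))

model = MultimodalWaterStatusNet()
logits = model(torch.randn(2, 3, 224, 224),   # RGB batch
               torch.randn(2, 1, 224, 224),   # depth batch
               torch.randn(2, 1, 224, 224))   # NIR batch
print(logits.shape)  # torch.Size([2, 5])
```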
Affiliation(s)
- Zhiyu Zuo: School of Agricultural Engineering, Jiangsu University, Zhenjiang, China; Key Laboratory of Modern Agricultural Equipment and Technology, Ministry of Education/High-tech Key Laboratory of Agricultural Equipment and Intelligence of Jiangsu Province, Jiangsu University, Zhenjiang, China
- Jindong Mu: School of Agricultural Engineering, Jiangsu University, Zhenjiang, China
- Wenjie Li: School of Agricultural Engineering, Jiangsu University, Zhenjiang, China
- Quan Bu: School of Agricultural Engineering, Jiangsu University, Zhenjiang, China
- Hanping Mao: School of Agricultural Engineering, Jiangsu University, Zhenjiang, China; Key Laboratory of Modern Agricultural Equipment and Technology, Ministry of Education/High-tech Key Laboratory of Agricultural Equipment and Intelligence of Jiangsu Province, Jiangsu University, Zhenjiang, China
- Xiaodong Zhang: School of Agricultural Engineering, Jiangsu University, Zhenjiang, China
- Lvhua Han: School of Agricultural Engineering, Jiangsu University, Zhenjiang, China
- Jiheng Ni: Key Laboratory of Modern Agricultural Equipment and Technology, Ministry of Education/High-tech Key Laboratory of Agricultural Equipment and Intelligence of Jiangsu Province, Jiangsu University, Zhenjiang, China
3. Neupane C, Pereira M, Koirala A, Walsh KB. Fruit Sizing in Orchard: A Review from Caliper to Machine Vision with Deep Learning. Sensors 2023;23:3868. [PMID: 37112207] [PMCID: PMC10144371] [DOI: 10.3390/s23083868]
Abstract
Forward estimates of harvest load require information on fruit size as well as number. The task of sizing fruit and vegetables has been automated in the packhouse, progressing from mechanical methods to machine vision over the last three decades. This shift is now occurring for size assessment of fruit on trees, i.e., in the orchard. This review focuses on: (i) allometric relationships between fruit weight and lineal dimensions; (ii) measurement of fruit lineal dimensions with traditional tools; (iii) measurement of fruit lineal dimensions with machine vision, with attention to the issues of depth measurement and recognition of occluded fruit; (iv) sampling strategies; and (v) forward prediction of fruit size (at harvest). Commercially available capability for in-orchard fruit sizing is summarized, and further developments of in-orchard fruit sizing by machine vision are anticipated.
4. Song P, Li Z, Yang M, Shao Y, Pu Z, Yang W, Zhai R. Dynamic detection of three-dimensional crop phenotypes based on a consumer-grade RGB-D camera. Frontiers in Plant Science 2023;14:1097725. [PMID: 36778701] [PMCID: PMC9911875] [DOI: 10.3389/fpls.2023.1097725]
Abstract
Introduction: Nondestructive detection of crop phenotypic traits in the field is very important for crop breeding. Ground-based mobile platforms equipped with sensors can efficiently and accurately obtain crop phenotypic traits. In this study, we propose a dynamic 3D data acquisition method, suitable for various crops in the field, that uses a consumer-grade RGB-D camera installed on a ground-based movable platform to dynamically collect RGB and depth image sequences of the crop canopy. Methods: A scale-invariant feature transform (SIFT) operator was used to detect adjacent data frames acquired by the RGB-D camera, in order to calculate the coarse point cloud alignment matrix and the displacement distance between adjacent images. The data frames used for point cloud matching were selected according to the calculated displacement distance. Then, the colored ICP (iterative closest point) algorithm was used to determine the fine matching matrix and generate point clouds of the crop row. A clustering method was applied to segment the point cloud of each plant from the crop row point cloud, and 3D phenotypic traits, including plant height, leaf area, and projected area of individual plants, were measured. Results and discussion: We compared the results against LiDAR- and image-based 3D reconstruction methods, with experiments carried out on corn, tobacco, cotton, and Bletilla striata at the seedling stage. The measurements of plant height (R² = 0.9~0.96, RMSE = 0.015~0.023 m), leaf area (R² = 0.8~0.86, RMSE = 0.0011~0.0041 m²), and projected area (R² = 0.96~0.99) correlated strongly with manual measurements. Additionally, 3D reconstruction results at different moving speeds, at different times throughout the day, and in different scenes were verified. The method can be applied to dynamic detection at moving speeds up to 0.6 m/s and achieves acceptable results in the daytime as well as at night. The proposed method thus improves the efficiency of individual crop 3D point cloud extraction with acceptable accuracy, offering a feasible solution for outdoor 3D phenotyping of crop seedlings.
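The fine-registration stage can be sketched with Open3D's colored ICP. The SIFT-based coarse matrix from the paper is stood in for here by an `init` argument, and the voxel size and iteration count are placeholder values.

```python
import numpy as np
import open3d as o3d

def pairwise_colored_icp(source, target, init=np.eye(4), voxel=0.01):
    """Refine an initial alignment between two colored point clouds with
    colored ICP, roughly mirroring the coarse-to-fine registration described."""
    src = source.voxel_down_sample(voxel)
    tgt = target.voxel_down_sample(voxel)
    for pcd in (src, tgt):  # colored ICP needs normals; estimate them on both
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    result = o3d.pipelines.registration.registration_colored_icp(
        src, tgt, voxel * 1.4, init,
        o3d.pipelines.registration.TransformationEstimationForColoredICP(),
        o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=50))
    return result.transformation
```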
Affiliation(s)
- Peng Song: National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research (Wuhan), Huazhong Agricultural University, Wuhan, China
- Zhengda Li: National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research (Wuhan), Huazhong Agricultural University, Wuhan, China
- Meng Yang: National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research (Wuhan), Huazhong Agricultural University, Wuhan, China
- Yang Shao: National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research (Wuhan), Huazhong Agricultural University, Wuhan, China
- Zhen Pu: National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research (Wuhan), Huazhong Agricultural University, Wuhan, China
- Wanneng Yang: National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research (Wuhan), Huazhong Agricultural University, Wuhan, China
- Ruifang Zhai: College of Informatics, Huazhong Agricultural University, Wuhan, China
5. Banana Pseudostem Width Detection Based on Kinect V2 Depth Sensor. Computational Intelligence and Neuroscience 2022;2022:3083647. [PMID: 36203728] [PMCID: PMC9532068] [DOI: 10.1155/2022/3083647]
Abstract
This study used a Kinect V2 sensor to collect three-dimensional point cloud data of banana pseudostems and developed an automatic method for measuring pseudostem width. Banana plants were selected as the research object in a plantation in Fusui, Guangxi. Mobile measurements were carried out at a distance of 1 m from each plant using a field operation platform with the Kinect V2 as the acquisition device. To eliminate background data and improve processing speed, a cascade classifier was used to recognize banana pseudostems in the depth image and extract the region of interest (ROI), which was then converted to a colored point cloud using the color image. The point cloud was sparsified by down-sampling, and noise was removed according to a classification into large-scale and small-scale noise. Finally, the stem point cloud was segmented along the y-axis, and the difference between the maximum and minimum values in the x-axis direction of each segment was taken as its horizontal width. The center point of each segment was used to fit the slope of the stem centerline, and the average horizontal width was corrected to the stem diameter. Test results show an average measurement error of only 2.7 mm, an average relative error of 1.34%, and a measurement time of only about 300 ms. The approach offers an effective solution for automatic and rapid measurement of stem width in banana and similar plants.
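A compact sketch of the slicing and slope-correction logic, assuming a roughly cylindrical stem: the horizontal width of a stem tilted by angle θ from vertical overestimates the true diameter by a factor of 1/cos θ, so the mean width is multiplied by cos θ. The slice count and the cosine correction are my reading of the abstract, not the paper's exact procedure.

```python
import numpy as np

def stem_diameter(points, n_slices=10):
    """Slice the stem cloud along y, take (max x - min x) per slice as the
    horizontal width, fit the centerline slope from slice centers, and correct
    the mean horizontal width to a diameter measured normal to the stem axis."""
    y_edges = np.linspace(points[:, 1].min(), points[:, 1].max(), n_slices + 1)
    widths, centers = [], []
    for lo, hi in zip(y_edges[:-1], y_edges[1:]):
        s = points[(points[:, 1] >= lo) & (points[:, 1] < hi)]
        if len(s) < 5:
            continue
        widths.append(s[:, 0].max() - s[:, 0].min())
        centers.append(s[:, :2].mean(axis=0))
    centers = np.asarray(centers)
    slope = np.polyfit(centers[:, 1], centers[:, 0], 1)[0]  # dx per dy
    tilt = np.arctan(slope)                 # stem inclination from vertical
    return np.mean(widths) * np.cos(tilt)   # width normal to the stem axis
```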
6. Xu R, Li C. A Review of High-Throughput Field Phenotyping Systems: Focusing on Ground Robots. Plant Phenomics 2022;2022:9760269. [PMID: 36059604] [PMCID: PMC9394113] [DOI: 10.34133/2022/9760269]
Abstract
Manual assessment of plant phenotypes in the field can be labor-intensive and inefficient. High-throughput field phenotyping systems, and in particular robotic systems, play an important role in automating data collection and in measuring novel, fine-scale phenotypic traits that were previously unattainable by humans. The main goal of this paper is to review the state of the art of high-throughput field phenotyping systems, with a focus on autonomous ground robotic systems. The paper first provides a brief review of nonautonomous ground phenotyping systems, including tractors, manually pushed or motorized carts, gantries, and cable-driven systems. A detailed review of autonomous ground phenotyping robots follows, covering the robots' main components (mobile platforms, sensors, manipulators, computing units, and software), the navigation algorithms and simulation tools developed for phenotyping robots, and the applications of phenotyping robots in measuring plant phenotypic traits and collecting phenotyping datasets. The paper closes with a discussion of current major challenges and future research directions.
Affiliation(s)
- Rui Xu: Bio-Sensing and Instrumentation Laboratory, College of Engineering, The University of Georgia, Athens, USA
- Changying Li: Bio-Sensing and Instrumentation Laboratory, College of Engineering, The University of Georgia, Athens, USA; Phenomics and Plant Robotics Center, The University of Georgia, Athens, USA
7. Li D, Li J, Xiang S, Pan A. PSegNet: Simultaneous Semantic and Instance Segmentation for Point Clouds of Plants. Plant Phenomics 2022;2022:9787643. [PMID: 35693119] [PMCID: PMC9157368] [DOI: 10.34133/2022/9787643]
Abstract
Phenotyping of plant growth improves the understanding of complex genetic traits and eventually expedites the development of modern breeding and intelligent agriculture. In phenotyping, segmentation of 3D point clouds of plant organs such as leaves and stems contributes to automatic growth monitoring and reflects the extent of stress received by the plant. In this work, we first propose Voxelized Farthest Point Sampling (VFPS), a novel point cloud downsampling strategy, to prepare our plant dataset for training deep neural networks. Then, a deep learning network, PSegNet, is specially designed for segmenting point clouds of several plant species. The effectiveness of PSegNet originates from three new modules: the Double-Neighborhood Feature Extraction Block (DNFEB), the Double-Granularity Feature Fusion Module (DGFFM), and the Attention Module (AM). After training on the plant dataset prepared with VFPS, the network can simultaneously perform semantic segmentation and leaf instance segmentation for three plant species. Compared with several mainstream networks such as PointNet++, ASIS, SGPN, and PlantNet, PSegNet obtained the best segmentation results both quantitatively and qualitatively. In semantic segmentation, PSegNet achieved 95.23%, 93.85%, 94.52%, and 89.90% for mean Prec, Rec, F1, and IoU, respectively. In instance segmentation, it achieved 88.13%, 79.28%, 83.35%, and 89.54% for mPrec, mRec, mCov, and mWCov, respectively.
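The VFPS idea, voxel thinning followed by farthest point sampling, can be sketched generically as below; the exact details of the paper's strategy may differ, and the voxel size and sample count are placeholders.

```python
import numpy as np

def voxelized_fps(points, voxel=0.005, n_samples=4096):
    """Sketch of a VFPS-style downsampler: first thin the cloud to one point
    per voxel, then run farthest point sampling on the survivors."""
    # Voxel thinning: keep the first point landing in each voxel.
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    pts = points[np.sort(idx)]
    # Farthest point sampling, starting from the first surviving point.
    n = min(n_samples, len(pts))
    chosen = np.zeros(n, dtype=np.int64)
    dist = np.full(len(pts), np.inf)
    for i in range(1, n):
        dist = np.minimum(dist, np.linalg.norm(pts - pts[chosen[i - 1]], axis=1))
        chosen[i] = int(np.argmax(dist))
    return pts[chosen]
```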
Affiliation(s)
- Dawei Li: State Key Laboratory for Modification of Chemical Fibers and Polymer Materials, College of Information Sciences and Technology, Donghua University, Shanghai 201620, China; Engineering Research Center of Digitized Textile & Fashion Technology, Ministry of Education, Donghua University, Shanghai 201620, China
- Jinsheng Li: College of Information Sciences and Technology, Donghua University, Shanghai 201620, China
- Shiyu Xiang: College of Information Sciences and Technology, Donghua University, Shanghai 201620, China
- Anqi Pan: Engineering Research Center of Digitized Textile & Fashion Technology, Ministry of Education, Donghua University, Shanghai 201620, China; College of Information Sciences and Technology, Donghua University, Shanghai 201620, China
8. In Situ Measuring Stem Diameters of Maize Crops with a High-Throughput Phenotyping Robot. Remote Sensing 2022. [DOI: 10.3390/rs14041030]
Abstract
Robotic High-Throughput Phenotyping (HTP) technology has been a powerful tool for selecting high-quality crop varieties among large quantities of traits. Due to the advantages of multi-view observation and high accuracy, ground HTP robots have been widely studied in recent years. In this paper, we study an ultra-narrow wheeled robot equipped with RGB-D cameras for inter-row maize HTP. The challenges of the narrow operating space, intense illumination changes, and messy cross-leaf interference in rows of maize crops are considered. An in situ, inter-row stem diameter measurement method for HTP robots is proposed. To this end, we first introduce the stem diameter measurement pipeline, in which a convolutional neural network is employed to detect stems and the point cloud is analyzed to estimate the stem diameters. Second, we present a clustering strategy based on DBSCAN for extracting stem point clouds under the condition that the stem is shaded by dense leaves. Third, we present a point cloud filling strategy to fill the stem region with missing depth values due to occlusion by other organs. Finally, we employ the convex hull and plane projection of the point cloud to estimate the stem diameters. The results show that the R² and RMSE of stem diameter measurement are up to 0.72 and 2.95 mm, demonstrating the method's effectiveness.
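The DBSCAN-based stem extraction step might look like the following sketch with scikit-learn, where the largest density-connected cluster is taken as the stem and everything else is treated as leaf fragments or noise; eps and min_samples are placeholder parameters, and the subsequent convex-hull diameter estimation is omitted.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def extract_stem_points(points, eps=0.02, min_samples=20):
    """Cluster a candidate stem region with DBSCAN and keep the largest
    cluster, discarding leaf fragments and depth noise as outliers (-1)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    valid = labels[labels >= 0]
    if valid.size == 0:
        return np.empty((0, 3))
    biggest = np.bincount(valid).argmax()
    return points[labels == biggest]
```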
9. Tagarakis AC, Filippou E, Kalaitzidis D, Benos L, Busato P, Bochtis D. Proposing UGV and UAV Systems for 3D Mapping of Orchard Environments. Sensors 2022;22:1571. [PMID: 35214470] [PMCID: PMC8877329] [DOI: 10.3390/s22041571]
Abstract
Over the last few decades, consumer-grade RGB-D (red green blue-depth) cameras have gained popularity for several applications in agricultural environments. Notably, these cameras are used for spatial mapping that can serve robot localization and navigation. Mapping the environment for targeted robotic applications in agricultural fields is a particularly challenging task, owing to the high spatial and temporal variability, possible unfavorable light conditions, and the unpredictable nature of these environments. The aim of the present study was to investigate the use of RGB-D cameras and an unmanned ground vehicle (UGV) for autonomously mapping the environment of commercial orchards and providing information about tree height and canopy volume. The results from the ground-based mapping system were compared with three-dimensional (3D) orthomosaics acquired by an unmanned aerial vehicle (UAV). Overall, both sensing methods led to similar height measurements, while tree volume was more accurately calculated from the RGB-D cameras, as the 3D point cloud captured by the ground system was far more detailed. Finally, fusion of the two datasets provided the most precise representation of the trees.
Affiliation(s)
- Aristotelis C. Tagarakis: Institute for Bio-Economy and Agri-Technology (IBO), Centre for Research and Technology-Hellas (CERTH), 6th km Charilaou-Thermi Rd, GR 57001 Thessaloniki, Greece
- Evangelia Filippou: Institute for Bio-Economy and Agri-Technology (IBO), Centre for Research and Technology-Hellas (CERTH), 6th km Charilaou-Thermi Rd, GR 57001 Thessaloniki, Greece
- Damianos Kalaitzidis: Institute for Bio-Economy and Agri-Technology (IBO), Centre for Research and Technology-Hellas (CERTH), 6th km Charilaou-Thermi Rd, GR 57001 Thessaloniki, Greece
- Lefteris Benos: Institute for Bio-Economy and Agri-Technology (IBO), Centre for Research and Technology-Hellas (CERTH), 6th km Charilaou-Thermi Rd, GR 57001 Thessaloniki, Greece
- Patrizia Busato: Department of Agriculture, Forestry and Food Science (DISAFA), University of Turin, Largo Braccini 2, 10095 Grugliasco, Italy
- Dionysis Bochtis: Institute for Bio-Economy and Agri-Technology (IBO), Centre for Research and Technology-Hellas (CERTH), 6th km Charilaou-Thermi Rd, GR 57001 Thessaloniki, Greece; FarmB Digital Agriculture P.C., Doiranis 17, GR 54639 Thessaloniki, Greece
10. Enhancing the Tracking of Seedling Growth Using RGB-Depth Fusion and Deep Learning. Sensors 2021;21:8425. [PMID: 34960519] [PMCID: PMC8708901] [DOI: 10.3390/s21248425]
Abstract
The use of high-throughput phenotyping with imaging and machine learning to monitor seedling growth is a challenging yet intriguing subject in plant research. It has recently been addressed with low-cost RGB imaging sensors and deep learning during the daytime. RGB-Depth imaging devices are also available at low cost, which opens opportunities to extend the monitoring of seedlings across both day and night. In this article, we investigate the added value of fusing RGB imaging with depth imaging for the task of seedling growth stage monitoring. We propose a deep learning architecture, along with RGB-Depth fusion, to categorize the first three stages of seedling growth. Results show an average improvement of 5% in correct recognition rate compared with the sole use of RGB images during the day. The best performance is obtained with early fusion of RGB and depth. Depth is also shown to enable detection of the growth stage in the absence of light.
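Early fusion of RGB and depth, the best-performing variant here, amounts to stacking depth as an extra input channel and widening the network's first convolution. A generic sketch, with ResNet-18 standing in for the paper's own architecture, which the abstract does not specify:

```python
import torch
import torch.nn as nn
from torchvision import models

# Early fusion: stack depth as a fourth input channel and widen the first
# convolution of a standard ResNet-18 accordingly.
net = models.resnet18(weights=None)
net.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)
net.fc = nn.Linear(net.fc.in_features, 3)  # three seedling growth stages

rgb = torch.randn(8, 3, 224, 224)
depth = torch.randn(8, 1, 224, 224)
logits = net(torch.cat([rgb, depth], dim=1))  # shape (8, 3)
```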
11. Automatic Identification and Monitoring of Plant Diseases Using Unmanned Aerial Vehicles: A Review. Remote Sensing 2021. [DOI: 10.3390/rs13193841]
Abstract
Disease diagnosis is one of the major tasks for increasing food production in agriculture. Although precision agriculture (PA) takes less time and provides a more precise application of agricultural activities, disease detection using an Unmanned Aerial System (UAS) remains a challenging task. Several Unmanned Aerial Vehicles (UAVs) and sensors have been used for this purpose, but UAV platforms and their peripherals have their own limitations in accurately diagnosing plant diseases. Several types of image processing software are available for vignetting correction and orthorectification. The training and validation of datasets are important parts of data analysis. Currently, different algorithms and architectures of machine learning models are used to classify and detect plant diseases; these models help in image segmentation and feature extraction to interpret results. Researchers also fit the values of vegetation indices, such as the Normalized Difference Vegetation Index (NDVI) and the Crop Water Stress Index (CWSI), acquired from multispectral and hyperspectral sensors, into statistical models to deliver results. There are still various obstacles to the automatic detection of plant diseases, as imaging sensors are limited by their spectral bandwidth, resolution, background image noise, and so on. The future of crop health monitoring using UAVs should include gimbals carrying multiple sensors, large datasets for training and validation, and the development of site-specific irradiance systems. This review briefly highlights the advantages for growers of the automatic detection of plant diseases.
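For reference, NDVI, the most commonly cited of these indices, is computed per pixel from the NIR and red reflectance bands:

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel on reflectance
    bands; eps guards against division by zero on dark pixels."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# A healthy canopy pixel reflects strongly in NIR relative to red:
print(ndvi(np.array([0.50]), np.array([0.08])))  # ~0.72
```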
12. Multiple Sensor Synchronization with the RealSense RGB-D Camera. Sensors 2021;21:6276. [PMID: 34577483] [PMCID: PMC8472203] [DOI: 10.3390/s21186276]
Abstract
When reconstructing a 3D object, it is difficult to obtain accurate 3D geometric information using a single camera. In order to capture detailed geometric information of a 3D object, it is necessary to increase the number of cameras capturing it. However, the cameras must be synchronized so that they capture frames simultaneously; if they are incorrectly synchronized, many artifacts appear in the reconstructed 3D object. The RealSense RGB-D camera, commonly used for obtaining geometric information of 3D objects, provides synchronization modes to mitigate synchronization errors. However, these modes can only synchronize the depth cameras, and hardware constraints on stable data transmission limit the number of cameras that can be synchronized from a single host. In this paper, we therefore propose a novel synchronization method that synchronizes an arbitrary number of RealSense cameras by adjusting the number of hosts to support stable data transmission. Our method establishes a master-slave architecture to synchronize the system clocks of the hosts. While synchronizing the system clocks, the delays arising from the synchronization process are estimated so that the difference between the system clocks is minimized. With the system clocks synchronized, cameras connected to different hosts can be synchronized based on the timestamps of the data received by the hosts. Our method thus synchronizes the RealSense cameras to simultaneously capture accurate 3D information of an object at a constant frame rate without dropping frames.
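Once the host clocks are aligned, pairing frames across hosts reduces to nearest-timestamp matching. A minimal sketch, with a tolerance chosen here arbitrarily at roughly half a frame period at 60 fps; the paper's actual matching rule may differ:

```python
import numpy as np

def match_frames(ts_a, ts_b, tol=0.008):
    """Pair frames from two hosts by nearest timestamp (seconds, already
    corrected onto a common clock); pairs further apart than tol are dropped."""
    ts_a, ts_b = np.asarray(ts_a), np.asarray(ts_b)
    pairs = []
    for i, t in enumerate(ts_a):
        j = int(np.argmin(np.abs(ts_b - t)))
        if abs(ts_b[j] - t) <= tol:
            pairs.append((i, j))
    return pairs

print(match_frames([0.000, 0.033, 0.066], [0.001, 0.035, 0.070]))
# [(0, 0), (1, 1), (2, 2)]
```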
13. Nasir A, Ullah MO, Yousaf MH, Aziz MA. Acquisition of 3-D trajectories with labeling support for multi-species insects under unconstrained flying conditions. Ecological Informatics 2021. [DOI: 10.1016/j.ecoinf.2021.101381]
14. Outdoor Mobile Mapping and AI-Based 3D Object Detection with Low-Cost RGB-D Cameras: The Use Case of On-Street Parking Statistics. Remote Sensing 2021. [DOI: 10.3390/rs13163099]
Abstract
A successful application of low-cost 3D cameras in combination with artificial intelligence (AI)-based 3D object detection algorithms to outdoor mobile mapping would offer great potential for numerous mapping, asset inventory, and change detection tasks in the context of smart cities. This paper presents a mobile mapping system mounted on an electric tricycle and a procedure for creating on-street parking statistics, which allow government agencies and policy makers to verify and adjust parking policies in different city districts. Our method combines georeferenced red-green-blue-depth (RGB-D) imagery from two low-cost 3D cameras with state-of-the-art 3D object detection algorithms for extracting and mapping parked vehicles. Our investigations demonstrate the suitability of the latest generation of low-cost 3D cameras for real-world outdoor applications with respect to supported ranges, depth measurement accuracy, and robustness under varying lighting conditions. In an evaluation of suitable algorithms for detecting vehicles in the noisy and often incomplete 3D point clouds from RGB-D cameras, the 3D object detection network PointRCNN, which extends region-based convolutional neural networks (R-CNNs) to 3D point clouds, clearly outperformed all other candidates. The results of a mapping mission with 313 parking spaces show that our method is capable of reliably detecting parked cars with a precision of 100% and a recall of 97%. It can be applied to unslotted and slotted parking and different parking types including parallel, perpendicular, and angle parking.
15. Liu PL, Chang CC, Lin JH, Kobayashi Y. Simple benchmarking method for determining the accuracy of depth cameras in body landmark location estimation: Static upright posture as a measurement example. PLoS One 2021;16:e0254814. [PMID: 34288917] [PMCID: PMC8294549] [DOI: 10.1371/journal.pone.0254814]
Abstract
To evaluate the postures in ergonomics applications, studies have proposed the use of low-cost, marker-less, and portable depth camera-based motion tracking systems (DCMTSs) as a potential alternative to conventional marker-based motion tracking systems (MMTSs). However, a simple but systematic method for examining the estimation errors of various DCMTSs is lacking. This paper proposes a benchmarking method for assessing the estimation accuracy of depth cameras for full-body landmark location estimation. A novel alignment board was fabricated to align the coordinate systems of the DCMTSs and MMTSs. The data from an MMTS were used as a reference to quantify the error of using a DCMTS to identify target locations in a 3-D space. To demonstrate the proposed method, the full-body landmark location tracking errors were evaluated for a static upright posture using two different DCMTSs. For each landmark, we compared each DCMTS (Kinect system and RealSense system) with an MMTS by calculating the Euclidean distances between symmetrical landmarks. The evaluation trials were performed twice. The agreement between the tracking errors of the two evaluation trials was assessed using intraclass correlation coefficient (ICC). The results indicate that the proposed method can effectively assess the tracking performance of DCMTSs. The average errors (standard deviation) for the Kinect system and RealSense system were 2.80 (1.03) cm and 5.14 (1.49) cm, respectively. The highest average error values were observed in the depth orientation for both DCMTSs. The proposed method achieved high reliability with ICCs of 0.97 and 0.92 for the Kinect system and RealSense system, respectively.
Affiliation(s)
- Pin-Ling Liu: Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu, Taiwan
- Chien-Chi Chang: Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu, Taiwan
- Jia-Hua Lin: Washington State Department of Labor and Industries, Olympia, Washington, United States of America
- Yoshiyuki Kobayashi: Human Augmentation Research Center, National Institute of Advanced Industrial Science and Technology, Tokyo, Japan
16. Bonci A, Cen Cheng PD, Indri M, Nabissi G, Sibona F. Human-Robot Perception in Industrial Environments: A Survey. Sensors 2021;21:1571. [PMID: 33668162] [PMCID: PMC7956747] [DOI: 10.3390/s21051571]
Abstract
Perception capability is of significant importance for human-robot interaction. Forthcoming industrial environments will require a high level of automation, flexible and adaptive enough to comply with increasingly fast-paced and low-cost market demands. Autonomous and collaborative robots able to adapt to varying and dynamic conditions of the environment, including the presence of human beings, will have an ever-greater role in this context. However, if the robot is not aware of the human position and intention, a shared workspace between robots and humans may decrease productivity and lead to human safety issues. This paper presents a survey of sensory equipment useful for human detection and action recognition in industrial environments, with an overview of different sensors and perception techniques. Various types of robotic systems commonly used in industry, such as fixed-base manipulators, collaborative robots, mobile robots, and mobile manipulators, are considered, analyzing the most useful sensors and methods for perceiving and reacting to the presence of human operators in industrial cooperative and collaborative applications. The paper also introduces two proofs of concept developed by the authors for future collaborative robotic applications that benefit from enhanced human perception and interaction capabilities. The first concerns fixed-base collaborative robots and proposes a solution for human safety in tasks requiring collision avoidance or moving-obstacle detection. The second proposes a collaborative behavior implementable on autonomous mobile robots pursuing assigned tasks within an industrial space shared with human operators.
Affiliation(s)
- Andrea Bonci: Dipartimento di Ingegneria dell'Informazione (DII), Università Politecnica delle Marche, 60131 Ancona, Italy
- Pangcheng David Cen Cheng: Dipartimento di Elettronica e Telecomunicazioni (DET), Politecnico di Torino, 10129 Torino, Italy
- Marina Indri: Dipartimento di Elettronica e Telecomunicazioni (DET), Politecnico di Torino, 10129 Torino, Italy
- Giacomo Nabissi: Dipartimento di Ingegneria dell'Informazione (DII), Università Politecnica delle Marche, 60131 Ancona, Italy
- Fiorella Sibona: Dipartimento di Elettronica e Telecomunicazioni (DET), Politecnico di Torino, 10129 Torino, Italy
17. Deery DM, Jones HG. Field Phenomics: Will It Enable Crop Improvement? Plant Phenomics 2021;2021:9871989. [PMID: 34549194] [PMCID: PMC8433881] [DOI: 10.34133/2021/9871989]
Abstract
Field phenomics has been identified as a promising enabling technology to assist plant breeders with the development of improved cultivars for farmers. Yet, despite much investment, there are few examples demonstrating the application of phenomics within a plant breeding program. We review recent progress in field phenomics and highlight the importance of targeting breeders' needs, rather than perceived technology needs, through developing and enhancing partnerships between phenomics researchers and plant breeders.
Affiliation(s)
- Hamlyn G. Jones: CSIRO Agriculture and Food, Canberra, ACT, Australia; Division of Plant Sciences, University of Dundee, UK; School of Agriculture and Environment, University of Western Australia, Australia
18. Gené-Mola J, Llorens J, Rosell-Polo JR, Gregorio E, Arnó J, Solanelles F, Martínez-Casasnovas JA, Escolà A. Assessing the Performance of RGB-D Sensors for 3D Fruit Crop Canopy Characterization under Different Operating and Lighting Conditions. Sensors 2020;20:7072. [PMID: 33321817] [PMCID: PMC7764794] [DOI: 10.3390/s20247072]
Abstract
The use of 3D sensors combined with appropriate data processing and analysis has provided tools to optimise agricultural management through the application of precision agriculture. The recent development of low-cost RGB-Depth cameras has presented an opportunity to introduce 3D sensors into the agricultural community. However, due to the sensitivity of these sensors to highly illuminated environments, it is necessary to know under which conditions RGB-D sensors are capable of operating. This work presents a methodology to evaluate the performance of RGB-D sensors under different lighting and distance conditions, considering both geometrical and spectral (colour and NIR) features. The methodology was applied to evaluate the performance of the Microsoft Kinect v2 sensor in an apple orchard. The results show that sensor resolution and precision decreased significantly under middle to high ambient illuminance (>2000 lx). However, this effect was minimised when measurements were conducted closer to the target. In contrast, illuminance levels below 50 lx affected the quality of colour data and may require the use of artificial lighting. The methodology was useful for characterizing sensor performance throughout the full range of ambient conditions in commercial orchards. Although Kinect v2 was originally developed for indoor conditions, it performed well under a range of outdoor conditions.
Affiliation(s)
- Jordi Gené-Mola: Research Group in AgroICT & Precision Agriculture, Department of Agricultural and Forest Engineering, Universitat de Lleida (UdL)–Agrotecnio Centre, Lleida, 25198 Catalonia, Spain
- Jordi Llorens: Research Group in AgroICT & Precision Agriculture, Department of Agricultural and Forest Engineering, Universitat de Lleida (UdL)–Agrotecnio Centre, Lleida, 25198 Catalonia, Spain
- Joan R. Rosell-Polo: Research Group in AgroICT & Precision Agriculture, Department of Agricultural and Forest Engineering, Universitat de Lleida (UdL)–Agrotecnio Centre, Lleida, 25198 Catalonia, Spain
- Eduard Gregorio: Research Group in AgroICT & Precision Agriculture, Department of Agricultural and Forest Engineering, Universitat de Lleida (UdL)–Agrotecnio Centre, Lleida, 25198 Catalonia, Spain
- Jaume Arnó: Research Group in AgroICT & Precision Agriculture, Department of Agricultural and Forest Engineering, Universitat de Lleida (UdL)–Agrotecnio Centre, Lleida, 25198 Catalonia, Spain
- Francesc Solanelles: Department of Agriculture, Livestock, Fisheries and Food, Generalitat de Catalunya, Lleida, 25198 Catalunya, Spain
- José A. Martínez-Casasnovas: Research Group in AgroICT & Precision Agriculture, Department of Environmental and Soil Sciences, Universitat de Lleida (UdL)–Agrotecnio Centre, Lleida, 25198 Catalonia, Spain
- Alexandre Escolà: Research Group in AgroICT & Precision Agriculture, Department of Agricultural and Forest Engineering, Universitat de Lleida (UdL)–Agrotecnio Centre, Lleida, 25198 Catalonia, Spain
19. Evaluation of Vineyard Cropping Systems Using On-Board RGB-Depth Perception. Sensors 2020;20:6912. [PMID: 33287285] [PMCID: PMC7730935] [DOI: 10.3390/s20236912]
Abstract
A non-destructive measuring technique was applied to test major vine geometric traits on measurements collected by a contactless sensor. Three-dimensional optical sensors have evolved over the past decade, and these advancements may be useful in improving phenomics technologies for other crops, such as woody perennials. Red, green and blue-depth (RGB-D) cameras, namely the Microsoft Kinect, have had a significant influence on recent computer vision and robotics research. In this experiment, an adaptable mobile platform was used to acquire depth images for the non-destructive assessment of branch volume (pruning weight) and its relation to grape yield in vineyard crops. Vineyard yield prediction provides useful insights about the anticipated yield to the winegrower, guiding strategic decisions to accomplish optimal quantity and efficiency and supporting decision-making. A Kinect v2 system mounted on an on-ground electric vehicle was capable of producing precise 3D point clouds of vine rows under six different management cropping systems. The generated models demonstrated strong consistency between the 3D images and vine structures for the actual physical parameters when average values were calculated. Correlation of Kinect branch volume with pruning weight (dry biomass) yielded a high coefficient of determination (R² = 0.80), and in the study of vineyard yield the measured volume showed a good power-law relationship (R² = 0.87). However, owing to the limited ability of most depth cameras to properly reconstruct the 3D shape of small details, the results for each treatment were not consistent when calculated separately. Nonetheless, the Kinect v2 has tremendous potential as a 3D sensor for proximal sensing operations in agricultural applications, benefiting from its high frame rate, low price in comparison with other depth cameras, and high robustness.
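A power-law relationship such as the volume-yield one reported here is typically fitted by linear regression in log-log space. A generic sketch on synthetic data (the values below are invented, not the study's):

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = a * x^b by linear regression in log-log space and report the
    R^2 of the log-log fit, as is common for volume-yield relationships."""
    lx, ly = np.log(x), np.log(y)
    b, log_a = np.polyfit(lx, ly, 1)
    pred = b * lx + log_a
    ss_res = np.sum((ly - pred) ** 2)
    ss_tot = np.sum((ly - ly.mean()) ** 2)
    return np.exp(log_a), b, 1 - ss_res / ss_tot

# Synthetic branch-volume vs yield data, for illustration only.
rng = np.random.default_rng(0)
vol = rng.uniform(0.5, 5.0, 30)
yield_kg = 2.0 * vol ** 0.8 * np.exp(rng.normal(0, 0.05, 30))
print(fit_power_law(vol, yield_kg))  # (a, b, R^2)
```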
20. A Vision-Based Approach for Sidewalk and Walkway Trip Hazards Assessment. International Journal of Environmental Research and Public Health 2020;17:8438. [PMID: 33202633] [PMCID: PMC7696567] [DOI: 10.3390/ijerph17228438]
Abstract
Tripping hazards on sidewalks cause many falls annually, and the inspection and repair of these hazards cost cities millions of dollars. Currently, there is no efficient and cost-effective method for monitoring sidewalks to identify possible tripping hazards. In this paper, a new portable device is proposed that uses an Intel RealSense D415 RGB-D camera to monitor sidewalks, detect hazards, and extract relevant features of the hazards. The paper first analyzes the environmental factors contributing to the device's error and compares different regression techniques for calibrating the camera; Gaussian Process Regression models yielded the most accurate predictions, with Mean Absolute Errors (MAEs) below 0.09 mm. In the second phase, a novel segmentation algorithm is proposed that combines edge detection and region-growing techniques to detect true tripping hazards. Several examples are provided to visualize the output of the proposed method.
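A calibration of this kind can be sketched with scikit-learn's Gaussian process tools; the synthetic bias model and kernel choice below are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Calibration sketch: learn a correction from raw camera depth to reference
# depth. The data below is synthetic (an assumed quadratic bias plus noise).
rng = np.random.default_rng(1)
raw_depth = rng.uniform(0.4, 1.0, (200, 1))                 # metres
true_depth = raw_depth[:, 0] + 0.002 * raw_depth[:, 0] ** 2 \
             + rng.normal(0, 0.0005, 200)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(raw_depth, true_depth)

corrected, std = gpr.predict(np.array([[0.7]]), return_std=True)
print(corrected, std)  # calibrated depth with predictive uncertainty
```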
21. An Efficient Processing Approach for Colored Point Cloud-Based High-Throughput Seedling Phenotyping. Remote Sensing 2020. [DOI: 10.3390/rs12101540]
Abstract
Plant height and leaf area are important morphological properties of leafy vegetable seedlings, and they can be particularly useful for plant growth and health research. The traditional measurement scheme is time-consuming and not suitable for continuously monitoring plant growth and health. Individual vegetable seedling quick segmentation is the prerequisite for high-throughput seedling phenotype data extraction at individual seedling level. This paper proposes an efficient learning- and model-free 3D point cloud data processing pipeline to measure the plant height and leaf area of every single seedling in a plug tray. The 3D point clouds are obtained by a low-cost red–green–blue (RGB)-Depth (RGB-D) camera. Firstly, noise reduction is performed on the original point clouds through the processing of useable-area filter, depth cut-off filter, and neighbor count filter. Secondly, the surface feature histograms-based approach is used to automatically remove the complicated natural background. Then, the Voxel Cloud Connectivity Segmentation (VCCS) and Locally Convex Connected Patches (LCCP) algorithms are employed for individual vegetable seedling partition. Finally, the height and projected leaf area of respective seedlings are calculated based on segmented point clouds and validation is carried out. Critically, we also demonstrate the robustness of our method for different growth conditions and species. The experimental results show that the proposed method could be used to quickly calculate the morphological parameters of each seedling and it is practical to use this approach for high-throughput seedling phenotyping.
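The early filtering stages (depth cut-off plus neighbor-based noise removal) map naturally onto standard Open3D operations. A sketch under assumed parameter values, with statistical outlier removal standing in for the paper's neighbor count filter:

```python
import numpy as np
import open3d as o3d

def preprocess(pcd, z_min=0.3, z_max=1.2, nb_neighbors=20, std_ratio=2.0):
    """Depth cut-off plus neighbor-based noise removal, in the spirit of the
    pipeline's first stage (parameter values here are placeholders)."""
    pts = np.asarray(pcd.points)
    keep = (pts[:, 2] > z_min) & (pts[:, 2] < z_max)   # depth cut-off filter
    pcd = pcd.select_by_index(np.where(keep)[0].tolist())
    # Statistical outlier removal discards points with abnormally distant neighbors.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors, std_ratio)
    return pcd.voxel_down_sample(voxel_size=0.002)
```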
22. Kurtser P, Ringdahl O, Rotstein N, Berenstein R, Edan Y. In-Field Grape Cluster Size Assessment for Vine Yield Estimation Using a Mobile Robot and a Consumer Level RGB-D Camera. IEEE Robotics and Automation Letters 2020. [DOI: 10.1109/lra.2020.2970654]
23. Kang H, Chen C. Fruit Detection and Segmentation for Apple Harvesting Using Visual Sensor in Orchards. Sensors 2019;19:4599. [PMID: 31652634] [PMCID: PMC6832306] [DOI: 10.3390/s19204599]
Abstract
Autonomous harvesting shows a promising prospect in the future development of the agriculture industry, while the vision system is one of the most challenging components of autonomous harvesting technologies. This work proposes a multi-function network to perform real-time detection and semantic segmentation of apples and branches in orchard environments using a visual sensor. The developed detection and segmentation network utilises atrous spatial pyramid pooling and a gate feature pyramid network to enhance the feature extraction ability of the network. To improve the real-time computational performance of the network model, a lightweight backbone network based on the residual network architecture is developed. From the experimental results, the detection and segmentation network with the ResNet-101 backbone performed best on the detection and segmentation tasks, achieving an F1 score of 0.832 on the detection of apples, and 87.6% and 77.2% on the semantic segmentation of apples and branches, respectively. The network model with the lightweight backbone showed the best computational efficiency: it achieved an F1 score of 0.827 on the detection of apples, and 86.5% and 75.7% on the segmentation of apples and branches, respectively, with a weights size of 12.8 M and a computation time of 32 ms. The experimental results show that the detection and segmentation network can effectively perform real-time detection and segmentation of apples and branches in orchards.
Affiliation(s)
- Hanwen Kang: Laboratory of Motion Generation and Analysis, Faculty of Engineering, Monash University, Clayton, VIC 3800, Australia
- Chao Chen: Laboratory of Motion Generation and Analysis, Faculty of Engineering, Monash University, Clayton, VIC 3800, Australia
24. Franchetti B, Ntouskos V, Giuliani P, Herman T, Barnes L, Pirri F. Vision Based Modeling of Plants Phenotyping in Vertical Farming under Artificial Lighting. Sensors 2019;19:4378. [PMID: 31658728] [PMCID: PMC6848939] [DOI: 10.3390/s19204378]
Abstract
In this paper, we present a novel method for vision-based plant phenotyping in indoor vertical farming under artificial lighting. The method combines 3D plant modeling and deep segmentation of the upper leaves over a growth period of 25-30 days. The novelty of our approach is in providing 3D reconstruction, leaf segmentation, geometric surface modeling, and deep network estimation for weight prediction to effectively measure plant growth in terms of three relevant phenotype features: height, weight, and leaf area. To verify the soundness of the proposed method alongside the vision-based measurements, we also harvested the plants at specific times to take manual measurements, collecting a large amount of data; in particular, we manually collected 2592 data points related to the plant phenotype and 1728 images of the plants. A substantial set of experiments shows that the vision-based methods ensure fairly accurate prediction of the considered features, providing a way to predict plant behavior under specific conditions without resorting to human measurement.
Affiliation(s)
- Valsamis Ntouskos: Alcor Lab, DIAG, Sapienza University of Rome, Via Ariosto 25, 00185 Rome, Italy
- Tiara Herman: Agricola Moderna, Viale Col di Lana 8, 20136 Milan, Italy
- Luke Barnes: Agricola Moderna, Viale Col di Lana 8, 20136 Milan, Italy
- Fiora Pirri: Alcor Lab, DIAG, Sapienza University of Rome, Via Ariosto 25, 00185 Rome, Italy
25.
Abstract
Numerous sensors have been developed over time for precision agriculture; though, only recently have these sensors been incorporated into the new realm of unmanned aircraft systems (UAS). This UAS technology has allowed for a more integrated and optimized approach to various farming tasks such as field mapping, plant stress detection, biomass estimation, weed management, inventory counting, and chemical spraying, among others. These systems can be highly specialized depending on the particular goals of the researcher or farmer, yet many aspects of UAS are similar. All systems require an underlying platform—or unmanned aerial vehicle (UAV)—and one or more peripherals and sensing equipment such as imaging devices (RGB, multispectral, hyperspectral, near infra-red, RGB depth), gripping tools, or spraying equipment. Along with these wide-ranging peripherals and sensing equipment comes a great deal of data processing. Common tools to aid in this processing include vegetation indices, point clouds, machine learning models, and statistical methods. With any emerging technology, there are also a few considerations that need to be analyzed like legal constraints, economic trade-offs, and ease of use. This review then concludes with a discussion on the pros and cons of this technology, along with a brief outlook into future areas of research regarding UAS technology in agriculture.
26. Shan Z, Li R, Schwertfeger S. RGBD-Inertial Trajectory Estimation and Mapping for Ground Robots. Sensors 2019;19:2251. [PMID: 31096683] [PMCID: PMC6567327] [DOI: 10.3390/s19102251]
Abstract
Using camera sensors for ground robot Simultaneous Localization and Mapping (SLAM) has many benefits over laser-based approaches, such as lower cost and higher robustness. RGBD sensors promise the best of both worlds: dense data from cameras, with depth information. This paper proposes to fuse RGBD and IMU data in a visual SLAM system, called VINS-RGBD, built upon the open-source VINS-Mono software. The paper analyses the VINS approach and highlights its observability problems. We then extend the VINS-Mono system to make use of the depth data during the initialization process as well as during the VIO (Visual Inertial Odometry) phase. Furthermore, we integrate a mapping system based on subsampled depth data and octree filtering to achieve real-time mapping, including loop closing. We provide the software as well as datasets for evaluation. Our extensive experiments were performed with hand-held, wheeled, and tracked robots in different environments. We show that ORB-SLAM2 fails for our application and that our VINS-RGBD approach is superior to VINS-Mono.
Affiliation(s)
- Zeyong Shan: School of Information Science & Technology, ShanghaiTech University, Shanghai 201210, China; Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai 200050, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Ruijian Li: School of Information Science & Technology, ShanghaiTech University, Shanghai 201210, China
- Sören Schwertfeger: School of Information Science & Technology, ShanghaiTech University, Shanghai 201210, China
Collapse
|