1
Chen H, Liu S, Wang C, Wang C, Gong K, Li Y, Lan Y. Point Cloud Completion of Plant Leaves under Occlusion Conditions Based on Deep Learning. Plant Phenomics 2023;5:0117. PMID: 38239737; PMCID: PMC10795496; DOI: 10.34133/plantphenomics.0117.
Abstract
Three-dimensional point cloud technology enables non-invasive measurement of plant phenotypic parameters and can furnish important data for plant breeding, agricultural production, and diverse research applications. Nevertheless, plant point clouds captured with depth sensors and similar tools often suffer from missing and incomplete data owing to the 2.5D nature of the imaging and to leaf occlusion, which obstructs accurate extraction of phenotypic parameters. Hence, this study presents a Point Fractal Network-based solution for incomplete point clouds of flowering Chinese cabbage. We constructed a point cloud dataset of flowering Chinese cabbage leaves and trained the network on it. The findings demonstrate that the network is stable and robust: it effectively completes leaf point clouds across diverse morphologies, missing ratios, and multi-missing scenarios. A novel framework is also presented for 3D plant reconstruction from a single-view RGB-D (red, green, blue, and depth) image; this method leverages deep learning to complete localized incomplete leaf point clouds acquired by RGB-D cameras under occlusion. Additionally, leaf area parameters extracted from a triangular mesh were compared with measured values. Before point cloud completion, the R² of the estimated leaf area of flowering Chinese cabbage against the standard reference value was 0.9162, with a root mean square error (RMSE) of 15.88 cm² and an average relative error of 22.11%. After completion, the estimate improved significantly, with an R² of 0.9637, an RMSE of 6.79 cm², and an average relative error of 8.82%. The accuracy of phenotypic parameter estimation is thus enhanced substantially, enabling efficient retrieval of such parameters. This development offers a fresh perspective for non-destructive identification of plant phenotypes.
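The abstract does not detail the mesh computation; as an illustration only, once a completed leaf is triangulated, the area estimate reduces to summing per-triangle areas via the cross product (a minimal sketch, not the authors' code; the function name and data are hypothetical):

```python
import numpy as np

def mesh_area(vertices, triangles):
    """Total surface area of a triangular mesh: 0.5 * |cross product| per triangle."""
    v = np.asarray(vertices, dtype=float)
    t = np.asarray(triangles, dtype=int)
    a, b, c = v[t[:, 0]], v[t[:, 1]], v[t[:, 2]]
    cross = np.cross(b - a, c - a)
    return 0.5 * np.linalg.norm(cross, axis=1).sum()

# Unit square in the XY plane, split into two triangles -> area 1.0
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
tris = [(0, 1, 2), (0, 2, 3)]
print(mesh_area(verts, tris))  # 1.0
```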
Affiliation(s)
- Haibo Chen
- Experimental Basis and Practical Training Center, South China Agricultural University, Guangzhou, China
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticides Spraying Technology, Guangzhou, China
- Shengbo Liu
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticides Spraying Technology, Guangzhou, China
- College of Electronics Engineering (College of Artificial Intelligence), South China Agricultural University, Guangzhou, China
- Congyue Wang
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticides Spraying Technology, Guangzhou, China
- College of Electronics Engineering (College of Artificial Intelligence), South China Agricultural University, Guangzhou, China
- Chaofeng Wang
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticides Spraying Technology, Guangzhou, China
- College of Electronics Engineering (College of Artificial Intelligence), South China Agricultural University, Guangzhou, China
- Kangye Gong
- College of Engineering, South China Agricultural University, Guangzhou, China
- Yuanhong Li
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticides Spraying Technology, Guangzhou, China
- College of Electronics Engineering (College of Artificial Intelligence), South China Agricultural University, Guangzhou, China
- Yubin Lan
- National Center for International Collaboration Research on Precision Agricultural Aviation Pesticides Spraying Technology, Guangzhou, China
- College of Electronics Engineering (College of Artificial Intelligence), South China Agricultural University, Guangzhou, China
- Department of Biological and Agricultural Engineering, Texas A&M University, College Station, TX, United States
2
Ma Z, Du R, Xie J, Sun D, Fang H, Jiang L, Cen H. Phenotyping of Silique Morphology in Oilseed Rape Using Skeletonization with Hierarchical Segmentation. Plant Phenomics 2023;5:0027. PMID: 36939450; PMCID: PMC10017417; DOI: 10.34133/plantphenomics.0027.
Abstract
Silique morphology is an important trait that determines the yield of oilseed rape (Brassica napus L.). Segmenting siliques and quantifying their traits are challenging because of the complicated structure of an oilseed rape plant at the reproductive stage. This study developed an accurate method that combines a skeletonization algorithm with hierarchical segmentation (SHS) to separate siliques from the whole plant using 3-dimensional (3D) point clouds. We combined the L1-median skeleton with random sample consensus to iteratively extract skeleton points and optimized the skeleton using information such as distance, angle, and direction from neighborhood points. Density-based spatial clustering of applications with noise (DBSCAN) and a weighted unidirectional graph were used to achieve hierarchical segmentation of siliques. Using the SHS, we automatically quantified the silique number (SN), silique length (SL), and silique volume (SV) based on geometric rules. The proposed method was tested on oilseed rape plants at the mature stage grown in a greenhouse and in the field. Our method showed good performance in silique segmentation and phenotypic extraction, with R² values of 0.922 and 0.934 for SN and total SL, respectively. Additionally, SN, total SL, and total SV were significantly correlated with per-plant yield, with R values of 0.935, 0.916, and 0.897, respectively. Overall, the SHS algorithm is accurate, efficient, and robust for segmenting siliques and extracting silique morphological parameters, and it is promising for high-throughput silique phenotyping in oilseed rape breeding.
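The silique length (SL) rule is not spelled out in the abstract; assuming length is accumulated along the ordered skeleton points of a silique, the geometric core can be sketched as (illustrative only; names and data are hypothetical):

```python
import numpy as np

def polyline_length(points):
    """Length along a skeleton polyline: sum of distances between consecutive points."""
    p = np.asarray(points, dtype=float)
    return np.linalg.norm(np.diff(p, axis=0), axis=1).sum()

# A bent silique skeleton: two 3 cm segments -> 6 cm total
skeleton = [(0, 0, 0), (0, 0, 3), (0, 3, 3)]
print(polyline_length(skeleton))  # 6.0
```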
Affiliation(s)
- Zhihong Ma
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou 310058, P.R. China
- Key Laboratory of Spectroscopy Sensing, Ministry of Agriculture and Rural Affairs, Hangzhou 310058, P.R. China
- Ruiming Du
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou 310058, P.R. China
- Key Laboratory of Spectroscopy Sensing, Ministry of Agriculture and Rural Affairs, Hangzhou 310058, P.R. China
- Jiayang Xie
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou 310058, P.R. China
- Key Laboratory of Spectroscopy Sensing, Ministry of Agriculture and Rural Affairs, Hangzhou 310058, P.R. China
- Dawei Sun
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou 310058, P.R. China
- Key Laboratory of Spectroscopy Sensing, Ministry of Agriculture and Rural Affairs, Hangzhou 310058, P.R. China
- Hui Fang
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou 310058, P.R. China
- Key Laboratory of Spectroscopy Sensing, Ministry of Agriculture and Rural Affairs, Hangzhou 310058, P.R. China
- Lixi Jiang
- Institute of Crop Science and Zhejiang Key Laboratory of Crop Germplasm, Zhejiang University, Hangzhou 310058, P.R. China
- Haiyan Cen
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou 310058, P.R. China
- Key Laboratory of Spectroscopy Sensing, Ministry of Agriculture and Rural Affairs, Hangzhou 310058, P.R. China
3
Liu Y, Yuan H, Zhao X, Fan C, Cheng M. Fast reconstruction method of three-dimension model based on dual RGB-D cameras for peanut plant. Plant Methods 2023;19:17. PMID: 36843020; PMCID: PMC9969713; DOI: 10.1186/s13007-023-00998-z.
Abstract
BACKGROUND Plant shape and structure are important factors in peanut breeding research. Constructing a three-dimensional (3D) model provides an effective digital tool for comprehensive and quantitative analysis of peanut plant structure, and speed and accuracy are perennial goals of plant 3D model reconstruction research. RESULTS We propose a reconstruction method based on dual RGB-D cameras that builds a 3D model of the peanut plant quickly and accurately. Two Kinect v2 cameras were placed mirror-symmetrically on either side of the peanut plant, and the acquired point cloud data were filtered twice to remove noise. After rotation and translation based on the corresponding geometric relationship, the point clouds acquired by the two Kinect v2 cameras were converted to the same coordinate system and spliced into the 3D structure of the peanut plant. The experiment was conducted on twenty potted peanut plants at various growth stages. Plant height, width, length, and volume were calculated from the reconstructed 3D models, with manual measurements taken alongside. Model accuracy was evaluated through a synthetic coefficient generated by averaging the accuracy of the four traits. The test results show that the average accuracy of the peanut plant 3D models reconstructed by this method is 93.42%. A comparative experiment with the iterative closest point (ICP) algorithm, a widely used 3D modeling algorithm, was additionally implemented to test the rapidity of this method: the proposed method is 2.54 times faster at comparable accuracy. CONCLUSIONS The reconstruction method described in this paper can rapidly and accurately establish a 3D model of the peanut plant and also meets the modeling requirements of breeding processes for other species. This study offers a potential tool to further explore 3D models for improving plant traits and agronomic qualities.
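The splicing step applies a fixed rotation and translation derived from the two-camera geometry. A minimal sketch of that coordinate transform, with an assumed 180° mirror arrangement (the actual calibration values are not given in the abstract):

```python
import numpy as np

def to_world(points, R, t):
    """Map points from a camera frame into a common frame: p' = R @ p + t."""
    return np.asarray(points, dtype=float) @ np.asarray(R, dtype=float).T + np.asarray(t, dtype=float)

# Hypothetical extrinsics: the second camera rotated 180 degrees about Z,
# offset 2 m along Y relative to the common frame.
R2 = np.array([[-1.0,  0.0, 0.0],
               [ 0.0, -1.0, 0.0],
               [ 0.0,  0.0, 1.0]])
t2 = np.array([0.0, 2.0, 0.0])
cloud_cam2 = np.array([[1.0, 0.0, 0.5]])
print(to_world(cloud_cam2, R2, t2))  # [[-1.   2.   0.5]]
```

Merging the two views is then a concatenation of each camera's transformed cloud.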
Affiliation(s)
- Yadong Liu
- College of Mechanical and Electrical Engineering, Hebei Agricultural University, Baoding, 071001, China
- Hongbo Yuan
- College of Mechanical and Electrical Engineering, Hebei Agricultural University, Baoding, 071001, China
- Xin Zhao
- College of Mechanical and Electrical Engineering, Hebei Agricultural University, Baoding, 071001, China
- Caihu Fan
- College of Mechanical and Electrical Engineering, Hebei Agricultural University, Baoding, 071001, China
- Man Cheng
- College of Mechanical and Electrical Engineering, Hebei Agricultural University, Baoding, 071001, China
4
Song P, Li Z, Yang M, Shao Y, Pu Z, Yang W, Zhai R. Dynamic detection of three-dimensional crop phenotypes based on a consumer-grade RGB-D camera. Frontiers in Plant Science 2023;14:1097725. PMID: 36778701; PMCID: PMC9911875; DOI: 10.3389/fpls.2023.1097725.
Abstract
INTRODUCTION Nondestructive detection of crop phenotypic traits in the field is very important for crop breeding. Ground-based mobile platforms equipped with sensors can efficiently and accurately obtain crop phenotypic traits. In this study, we propose a dynamic 3D data acquisition method suitable for various crops in the field, using a consumer-grade RGB-D camera installed on a ground-based movable platform to dynamically collect RGB and depth image sequences of the crop canopy. METHODS A scale-invariant feature transform (SIFT) operator was used to match adjacent data frames acquired by the RGB-D camera, yielding a coarse point cloud alignment matrix and the displacement distance between adjacent images. Data frames for point cloud matching were selected according to the calculated displacement distance. The colored ICP (iterative closest point) algorithm was then used to determine the fine matching matrix and generate point clouds of the crop row. A clustering method segmented the point cloud of each plant from the crop row point cloud, and 3D phenotypic traits, including plant height, leaf area, and projected area of individual plants, were measured. RESULTS AND DISCUSSION We compared LIDAR and image-based 3D reconstruction methods, with experiments on corn, tobacco, cotton, and Bletilla striata at the seedling stage. The results show that measurements of plant height (R² = 0.9~0.96, RMSE = 0.015~0.023 m), leaf area (R² = 0.8~0.86, RMSE = 0.0011~0.0041 m²), and projected area (R² = 0.96~0.99) correlate strongly with manual measurements. Additionally, 3D reconstruction at different moving speeds, at different times throughout the day, and in different scenes was verified. The results show that the method supports dynamic detection at moving speeds up to 0.6 m/s and achieves acceptable results in the daytime as well as at night. Thus, the proposed method can improve the efficiency of individual crop 3D point cloud data extraction with acceptable accuracy, offering a feasible solution for outdoor 3D phenotyping of crop seedlings.
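The colored ICP fine-matching step alternates correspondence search with a closed-form rigid fit. As background, the fit step alone (the Kabsch/SVD solution for known correspondences) can be sketched as follows; this is a generic illustration, not the authors' implementation:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst,
    given known correspondences (the inner step of point-to-point ICP)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known 30-degree rotation and a small translation exactly
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
src = np.random.default_rng(0).random((20, 3))
dst = src @ R_true.T + np.array([0.1, -0.2, 0.3])
R, t = rigid_align(src, dst)
print(np.allclose(src @ R.T + t, dst))  # True
```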
Affiliation(s)
- Peng Song
- National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research (Wuhan), Huazhong Agricultural University, Wuhan, China
- Zhengda Li
- National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research (Wuhan), Huazhong Agricultural University, Wuhan, China
- Meng Yang
- National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research (Wuhan), Huazhong Agricultural University, Wuhan, China
- Yang Shao
- National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research (Wuhan), Huazhong Agricultural University, Wuhan, China
- Zhen Pu
- National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research (Wuhan), Huazhong Agricultural University, Wuhan, China
- Wanneng Yang
- National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research (Wuhan), Huazhong Agricultural University, Wuhan, China
- Ruifang Zhai
- College of Informatics, Huazhong Agricultural University, Wuhan, China
5
Lou M, Lu J, Wang L, Jiang H, Zhou M. Growth parameter acquisition and geometric point cloud completion of lettuce. Frontiers in Plant Science 2022;13:947690. PMID: 36247622; PMCID: PMC9558259; DOI: 10.3389/fpls.2022.947690.
Abstract
The plant factory is a form of controlled environment agriculture (CEA) that offers a promising solution to the worldwide problem of food security. Plant growth parameters need to be acquired for process control and yield estimation in plant factories. In this paper, we propose a fast and non-destructive framework for extracting growth parameters. First, a time-of-flight (ToF) camera (Microsoft Kinect V2) is used to obtain a point cloud from the top view, from which the lettuce point cloud is separated. Based on the growth characteristics of lettuce, a geometric method is proposed to complete the incomplete lettuce point cloud. The completed point cloud shows a high linear correlation with actual plant height (R² = 0.961), leaf area (R² = 0.964), and fresh weight (R² = 0.911), a significant improvement over the untreated point cloud. The results suggest that the proposed point cloud completion method has the potential to tackle the problem of obtaining plant growth parameters from a single 3D view with occlusion.
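The abstract does not give its height formula; one common way to read plant height off a segmented top-view cloud, shown here as an assumption-laden sketch (function and data are hypothetical), is a high z-percentile above a known ground elevation:

```python
import numpy as np

def plant_height(points, ground_z, q=100.0):
    """Plant height from a segmented cloud: the q-th z-percentile minus the
    ground elevation (use q slightly below 100 to reject stray points)."""
    z = np.asarray(points, dtype=float)[:, 2]
    return float(np.percentile(z, q) - ground_z)

# Three points of a tiny lettuce cloud, z in metres; tallest leaf at 0.12 m
cloud = [(0.0, 0.0, 0.02), (0.01, 0.0, 0.10), (0.0, 0.01, 0.12)]
print(plant_height(cloud, ground_z=0.0))  # 0.12
```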
Affiliation(s)
- Mingzhao Lou
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- Jinke Lu
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- Key Laboratory of Intelligent Equipment and Robotics for Agriculture of Zhejiang Province, Zhejiang University, Hangzhou, China
- Le Wang
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- Key Laboratory of Intelligent Equipment and Robotics for Agriculture of Zhejiang Province, Zhejiang University, Hangzhou, China
- Huanyu Jiang
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- Key Laboratory of Intelligent Equipment and Robotics for Agriculture of Zhejiang Province, Zhejiang University, Hangzhou, China
- Mingchuan Zhou
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- Key Laboratory of Intelligent Equipment and Robotics for Agriculture of Zhejiang Province, Zhejiang University, Hangzhou, China
6
Banana Pseudostem Width Detection Based on Kinect V2 Depth Sensor. Computational Intelligence and Neuroscience 2022;2022:3083647. PMID: 36203728; PMCID: PMC9532068; DOI: 10.1155/2022/3083647.
Abstract
This study used a Kinect V2 sensor to collect three-dimensional point cloud data of banana pseudostems and developed an automatic method for measuring pseudostem width. Banana plants in a plantation in Fusui, Guangxi, were selected as the research object. Mobile measurement of the pseudostems was carried out at a distance of 1 m from the plants using a field operation platform with the Kinect V2 as the collection device. To eliminate background data and improve processing speed, a cascade classifier was used to recognize banana pseudostems in the depth image and extract the region of interest (ROI), which was combined with the color image to form a colored point cloud. The point cloud was then sparsified by down-sampling, and noise was removed separately for large-scale and small-scale noise. Finally, the stem point cloud was segmented along the y-axis, and for each segment the difference between the maximum and minimum values in the x-axis direction was taken as its horizontal width. The center points of the segments were used to fit the slope of the stem centerline, and the average horizontal width was corrected to the stem diameter. The test results show that the average measurement error was only 2.7 mm, the average relative error was 1.34%, and the measurement time was only about 300 ms. The method could provide an effective solution for automatic, rapid measurement of stem width in banana and similar plants.
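The final correction step turns a per-slice horizontal width into a stem diameter using the fitted centerline slope. A minimal sketch of the geometry (a horizontal cut through a cylinder tilted by angle θ is wider by a factor of 1/cos θ; the values here are hypothetical):

```python
import math

def corrected_diameter(horizontal_width, slope):
    """Correct a horizontal slice width to the true stem diameter.
    `slope` is dx/dy of the fitted centerline (0 for a vertical stem);
    the horizontal cut of a tilted cylinder is wider by 1/cos(theta)."""
    theta = math.atan(slope)
    return horizontal_width * math.cos(theta)

print(corrected_diameter(100.0, 0.0))  # 100.0 (vertical stem: no correction)
# A stem leaning 30 degrees: a 115.47 mm horizontal width corrects to ~100 mm
print(round(corrected_diameter(115.47, math.tan(math.pi / 6)), 1))  # 100.0
```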
7
Zhang Q, Zhang X, Wu Y, Li X. TMSCNet: A three-stage multi-branch self-correcting trait estimation network for RGB and depth images of lettuce. Frontiers in Plant Science 2022;13:982562. PMID: 36119576; PMCID: PMC9470961; DOI: 10.3389/fpls.2022.982562.
Abstract
Growth traits, such as fresh weight, diameter, and leaf area, are pivotal indicators of growth status and the basis for quality evaluation of lettuce. Manual measurement of lettuce traits, though time-consuming, laborious, and inefficient, is still the mainstream approach. In this study, a three-stage multi-branch self-correcting trait estimation network (TMSCNet) for RGB and depth images of lettuce was proposed. The TMSCNet consists of five models: two master models preliminarily estimate the fresh weight (FW), dry weight (DW), height (H), diameter (D), and leaf area (LA) of lettuce, and three auxiliary models automatically correct the preliminary estimates. For comparison, typical convolutional neural networks (CNNs) widely adopted in botany research were used. The results showed that the TMSCNet estimates fit the measurements well, with coefficient of determination (R²) values of 0.9514, 0.9696, 0.9129, 0.8481, and 0.9495, normalized root mean square error (NRMSE) values of 15.63, 11.80, 11.40, 10.18, and 14.65%, and a normalized mean squared error (NMSE) value of 0.0826, outperforming the compared methods as well as previous studies on lettuce trait estimation. The proposed method not only fully considers the correlation between different traits, designing a novel self-correcting structure on this basis, but also covers more lettuce traits than previous studies. The results indicate that the TMSCNet is an effective method for estimating lettuce traits and can be extended to high-throughput settings. Code is available at https://github.com/lxsfight/TMSCNet.git.
Affiliation(s)
- Qinjian Zhang
- School of Mechanical Electrical Engineering, Beijing Information Science and Technology University, Beijing, China
- Xiangyan Zhang
- School of Mechanical Electrical Engineering, Beijing Information Science and Technology University, Beijing, China
- Yalin Wu
- Lushan Botanical Garden, Chinese Academy of Sciences, Jiujiang, China
- Xingshuai Li
- School of Mechanical Electrical Engineering, Beijing Information Science and Technology University, Beijing, China
8
Paturkar A, Sen Gupta G, Bailey D. Plant trait measurement in 3D for growth monitoring. Plant Methods 2022;18:59. PMID: 35505428; PMCID: PMC9063380; DOI: 10.1186/s13007-022-00889-9.
Abstract
BACKGROUND There is a demand for non-destructive plant phenotyping systems that can precisely measure plant traits for growth monitoring. In this study, the growth of chilli plants (Capsicum annuum L.) was monitored in outdoor conditions. A non-destructive solution is proposed for growth monitoring in 3D using a single mobile phone camera and a structure-from-motion algorithm. A method to measure leaf length and leaf width when the leaf is curled is also proposed. Plant traits such as number of leaves, stem height, leaf length, and leaf width were measured from the reconstructed and segmented 3D models at different growth stages. RESULTS The accuracy of the proposed system was measured by comparing values derived from the 3D plant model with manual measurements. The results demonstrate that the proposed system has the potential to non-destructively monitor plant growth in outdoor conditions with high precision compared to state-of-the-art systems. CONCLUSIONS This study demonstrated that the proposed trait-calculation methods can monitor plant growth in outdoor conditions.
Affiliation(s)
- Abhipray Paturkar
- Department of Mechanical and Electrical Engineering, School of Food and Advanced Technology, Massey University, Palmerston North, New Zealand
- Gourab Sen Gupta
- Department of Mechanical and Electrical Engineering, School of Food and Advanced Technology, Massey University, Palmerston North, New Zealand
- Donald Bailey
- Department of Mechanical and Electrical Engineering, School of Food and Advanced Technology, Massey University, Palmerston North, New Zealand
9
Buxbaum N, Lieth JH, Earles M. Non-destructive Plant Biomass Monitoring With High Spatio-Temporal Resolution via Proximal RGB-D Imagery and End-to-End Deep Learning. Frontiers in Plant Science 2022;13:758818. PMID: 35498682; PMCID: PMC9043900; DOI: 10.3389/fpls.2022.758818.
Abstract
Plant breeders, scientists, and commercial producers commonly use growth rate as an integrated signal of crop productivity and stress. Growth monitoring is often done destructively, by harvesting plants at different growth stages and weighing each individual. In plant breeding and research applications, and more recently in commercial settings, non-destructive growth monitoring uses computer vision to segment plants from the image background, in 2D or 3D, and relates these image-based features to destructive biomass measurements. Recent advances in machine learning have improved image-based localization and detection of plants, but such techniques are not well suited to predicting biomass under significant self-occlusion or occlusion from neighboring plants, as encountered in leafy green production in controlled environment agriculture. To enable biomass prediction under occluded growing conditions, we developed an end-to-end deep learning approach that directly predicts lettuce plant biomass from color and depth image data provided by a low-cost, commercially available sensor. Testing the proposed deep neural network on lettuce production, we observed a mean prediction error of 7.3% on a comprehensive test dataset of 864 individuals, substantially outperforming previous work on plant biomass estimation. The modeling approach is robust to the busy, occluded scenes often found in commercial leafy green production and requires only measured mass values for training. We then demonstrate that this level of accuracy allows rapid, non-destructive detection of changes in biomass accumulation due to experimentally induced stress in as little as 2 days. Using this method, growers may observe and react to changes in plant-environment interactions in near real time. Moreover, we expect such a sensitive technique for non-destructive biomass estimation to enable novel research and breeding for improved productivity and yield under stress.
Affiliation(s)
- Nicolas Buxbaum
- Department of Biological and Agricultural Engineering, University of California, Davis, Davis, CA, United States
- Johann Heinrich Lieth
- Department of Plant Sciences, University of California, Davis, Davis, CA, United States
- Mason Earles
- Department of Biological and Agricultural Engineering, University of California, Davis, Davis, CA, United States
- Department of Viticulture and Enology, University of California, Davis, Davis, CA, United States
10
Soetedjo A, Hendriarianti E. Plant Leaf Detection and Counting in a Greenhouse during Day and Nighttime Using a Raspberry Pi NoIR Camera. Sensors 2021;21(19):6659. PMID: 34640979; PMCID: PMC8512127; DOI: 10.3390/s21196659.
Abstract
Non-destructive machine vision is an effective way to monitor plant growth, but lighting changes and complicated backgrounds in outdoor environments make it a challenging task. In this paper, a low-cost camera system using a NoIR (no infrared filter) camera and a Raspberry Pi module is employed to detect and count the leaves of ramie plants in a greenhouse. The infrared camera captures leaf images during the day and at night for precise evaluation, and the infrared images allow Otsu thresholding to be used for efficient leaf detection. A combination of multiple thresholds is introduced to increase detection performance. Two approaches, based on static images and on image sequences, are proposed. A watershed algorithm is then employed to separate the leaves of a plant. The experimental results show that the proposed leaf detection using static images achieves high recall, precision, and F1 scores of 0.9310, 0.9053, and 0.9167, respectively, with an execution time of 551 ms. Using sequences of images increases these to 0.9619, 0.9505, and 0.9530, respectively, with an execution time of 516.30 ms. The proposed leaf counting achieves a difference in count (DiC) and absolute DiC (ABS_DiC) of 2.02 and 2.23, respectively, with an execution time of 545.41 ms. Moreover, on benchmark image datasets the foreground-background dice (FBD), DiC, and ABS_DiC are all within the average values of existing techniques. The results suggest that the proposed system provides a promising method for real-time implementation.
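Otsu thresholding, the detection core named above, picks the gray level that maximizes between-class variance of the intensity histogram. A generic NumPy sketch (not the paper's code; the synthetic image is hypothetical):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: return the threshold maximizing between-class variance
    of a 0..255 grayscale histogram."""
    hist, _ = np.histogram(np.asarray(gray).ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))    # class-0 cumulative mean
    mu_t = mu[-1]                         # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

# Two well-separated intensity clusters: the threshold splits them cleanly
img = np.concatenate([np.full(500, 40), np.full(500, 200)])
t = otsu_threshold(img)
print((img > t).sum())  # 500  (only the bright cluster is foreground)
```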
Affiliation(s)
- Aryuanto Soetedjo
- Department of Electrical Engineering, National Institute of Technology (ITN), Malang 65145, East Java, Indonesia
- Evy Hendriarianti
- Department of Environmental Engineering, National Institute of Technology (ITN), Malang 65145, East Java, Indonesia
11
Xiang L, Nolan TM, Bao Y, Elmore M, Tuel T, Gai J, Shah D, Wang P, Huser NM, Hurd AM, McLaughlin SA, Howell SH, Walley JW, Yin Y, Tang L. Robotic Assay for Drought (RoAD): an automated phenotyping system for brassinosteroid and drought responses. The Plant Journal 2021;107:1837-1853. PMID: 34216161; DOI: 10.1111/tpj.15401.
Abstract
Brassinosteroids (BRs) are a group of plant steroid hormones involved in regulating growth, development, and stress responses. Many components of the BR pathway have previously been identified and characterized. However, BR phenotyping experiments are typically performed in a low-throughput manner, such as on Petri plates. Additionally, the BR pathway affects drought responses, but drought experiments are time-consuming and difficult to control. To mitigate these issues and increase throughput, we developed the Robotic Assay for Drought (RoAD) system to perform BR and drought response experiments on soil-grown Arabidopsis plants. RoAD is equipped with a robotic arm, a rover, a bench scale, a precisely controlled watering system, an RGB camera, and a laser profilometer. It performs daily weighing, watering, and imaging tasks and can administer BR response assays by watering plants with Propiconazole (PCZ), a BR biosynthesis inhibitor. We developed image processing algorithms for plant segmentation and phenotypic trait extraction to accurately measure traits including plant area, plant volume, leaf length, and leaf width. We then applied machine learning algorithms that use the extracted phenotypic parameters to identify image-derived traits distinguishing control, drought-treated, and PCZ-treated plants. We carried out PCZ and drought experiments on a set of BR mutants and Arabidopsis accessions with altered BR responses. Finally, we extended the RoAD assays to BR response assays using PCZ in Zea mays (maize) plants. This study establishes an automated, non-invasive robotic imaging system as a tool to accurately measure morphological and growth-related traits of Arabidopsis and maize plants in 3D, providing insights into the BR-mediated control of plant growth and stress responses.
Affiliation(s)
- Lirong Xiang
  - Department of Agricultural and Biosystems Engineering, Iowa State University, Ames, IA, 50011, USA
- Trevor M Nolan
  - Department of Genetics, Development and Cell Biology, Iowa State University, Ames, IA, 50011, USA
  - Plant Sciences Institutes, Iowa State University, Ames, IA, 50011, USA
- Yin Bao
  - Department of Agricultural and Biosystems Engineering, Iowa State University, Ames, IA, 50011, USA
- Mitch Elmore
  - Department of Plant Pathology and Microbiology, Iowa State University, Ames, IA, 50011, USA
- Taylor Tuel
  - Department of Agricultural and Biosystems Engineering, Iowa State University, Ames, IA, 50011, USA
- Jingyao Gai
  - Department of Agricultural and Biosystems Engineering, Iowa State University, Ames, IA, 50011, USA
- Dylan Shah
  - Department of Agricultural and Biosystems Engineering, Iowa State University, Ames, IA, 50011, USA
- Ping Wang
  - Department of Genetics, Development and Cell Biology, Iowa State University, Ames, IA, 50011, USA
- Nicole M Huser
  - Department of Genetics, Development and Cell Biology, Iowa State University, Ames, IA, 50011, USA
- Ashley M Hurd
  - Department of Genetics, Development and Cell Biology, Iowa State University, Ames, IA, 50011, USA
- Sean A McLaughlin
  - Department of Genetics, Development and Cell Biology, Iowa State University, Ames, IA, 50011, USA
- Stephen H Howell
  - Department of Genetics, Development and Cell Biology, Iowa State University, Ames, IA, 50011, USA
  - Plant Sciences Institutes, Iowa State University, Ames, IA, 50011, USA
- Justin W Walley
  - Plant Sciences Institutes, Iowa State University, Ames, IA, 50011, USA
  - Department of Plant Pathology and Microbiology, Iowa State University, Ames, IA, 50011, USA
- Yanhai Yin
  - Department of Genetics, Development and Cell Biology, Iowa State University, Ames, IA, 50011, USA
  - Plant Sciences Institutes, Iowa State University, Ames, IA, 50011, USA
- Lie Tang
  - Department of Agricultural and Biosystems Engineering, Iowa State University, Ames, IA, 50011, USA
  - Plant Sciences Institutes, Iowa State University, Ames, IA, 50011, USA
12
Teng X, Zhou G, Wu Y, Huang C, Dong W, Xu S. Three-Dimensional Reconstruction Method of Rapeseed Plants in the Whole Growth Period Using RGB-D Camera. SENSORS 2021; 21:s21144628. [PMID: 34300368 PMCID: PMC8309581 DOI: 10.3390/s21144628] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/19/2021] [Revised: 06/21/2021] [Accepted: 06/29/2021] [Indexed: 11/16/2022]
Abstract
The three-dimensional reconstruction method using an RGB-D camera offers a good balance between hardware cost and point cloud quality. However, owing to limitations of its inherent structure and imaging principle, the acquired point cloud suffers from heavy noise and is difficult to register. This paper proposes a 3D reconstruction method using the Azure Kinect to address these inherent problems. Color, depth, and near-infrared images of the target are captured from six perspectives with the Azure Kinect sensor against a black background. The binarization result of the 8-bit infrared image is multiplied with the RGB-D image alignment result provided by Microsoft, which removes ghosting and most of the background noise. A neighborhood extreme filtering method is proposed to filter out abrupt points in the depth image, removing floating noise points and most outlier noise before the point cloud is generated; a pass-through filter then eliminates the remaining outliers. An improved method based on the classic iterative closest point (ICP) algorithm is presented to merge the multi-view point clouds: by repeatedly reducing both the down-sampling grid size and the distance threshold between corresponding points, the point clouds of each view are registered three times until an integral color point cloud is obtained. Extensive experiments on rapeseed plants show that the registration success rate is 92.5%, the point cloud accuracy is 0.789 mm, a complete scan takes 302 s, and color restoration is good. Compared with a laser scanner, the proposed method achieves comparable reconstruction accuracy at a significantly higher reconstruction speed, with much lower hardware cost when building an automatic scanning system.
This research demonstrates a low-cost, high-precision 3D reconstruction technique with the potential for wide use in non-destructive phenotypic measurement of rapeseed and other crops.
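One plausible reading of the neighborhood extreme filtering step above is a per-pixel jump test against the 4-neighborhood of the depth image; the jump threshold here is illustrative, not from the paper.

```python
def neighborhood_extreme_filter(depth, jump=50):
    """Zero out depth pixels whose value differs from every 4-neighbour by
    more than `jump` (likely a floating/flying point), keeping the rest."""
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for i in range(h):
        for j in range(w):
            neigh = [depth[x][y]
                     for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                     if 0 <= x < h and 0 <= y < w]
            if neigh and min(abs(depth[i][j] - n) for n in neigh) > jump:
                out[i][j] = 0  # abrupt point: drop before point cloud generation
    return out

# A lone 2000 mm spike in a 1000 mm neighbourhood is removed
grid = [[1000, 1000, 1000], [1000, 2000, 1000], [1000, 1000, 1000]]
print(neighborhood_extreme_filter(grid, jump=500)[1][1])  # 0
```

Pixels set to zero are simply skipped when the depth image is back-projected into a point cloud.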
Affiliation(s)
- Xiaowen Teng
  - College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
  - Key Laboratory of Agricultural Equipment for the Middle and Lower Reaches of the Yangtze River, Ministry of Agriculture, Wuhan 430070, China
- Guangsheng Zhou
  - College of Plant Science & Technology, Huazhong Agricultural University, Wuhan 430070, China
- Yuxuan Wu
  - College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
  - Key Laboratory of Agricultural Equipment for the Middle and Lower Reaches of the Yangtze River, Ministry of Agriculture, Wuhan 430070, China
- Chenglong Huang
  - College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
  - Key Laboratory of Agricultural Equipment for the Middle and Lower Reaches of the Yangtze River, Ministry of Agriculture, Wuhan 430070, China
- Wanjing Dong
  - College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
  - Key Laboratory of Agricultural Equipment for the Middle and Lower Reaches of the Yangtze River, Ministry of Agriculture, Wuhan 430070, China
- Shengyong Xu
  - College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
  - Key Laboratory of Agricultural Equipment for the Middle and Lower Reaches of the Yangtze River, Ministry of Agriculture, Wuhan 430070, China
  - Correspondence: ; Tel.: +86-134-7629-3548
13
Miao T, Wen W, Li Y, Wu S, Zhu C, Guo X. Label3DMaize: toolkit for 3D point cloud data annotation of maize shoots. Gigascience 2021; 10:6272094. [PMID: 33963385 PMCID: PMC8105162 DOI: 10.1093/gigascience/giab031] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2020] [Revised: 03/10/2021] [Accepted: 04/12/2021] [Indexed: 01/31/2023] Open
Abstract
Background: The 3D point cloud is the most direct and effective data form for studying plant structure and morphology. In point cloud studies, the segmentation of individual plants into organs directly determines the accuracy of organ-level phenotype estimation and the reliability of 3D plant reconstruction. However, highly accurate, automatic, and robust point cloud segmentation approaches for plants are unavailable, so high-throughput segmentation of many shoots is challenging. Although deep learning could feasibly solve this issue, software tools for 3D point cloud annotation to construct training datasets are lacking. Results: We propose a top-down point cloud segmentation algorithm for maize shoots using optimal transportation distance. We apply our point cloud annotation toolkit for maize shoots, Label3DMaize, to achieve semi-automatic point cloud segmentation and annotation of maize shoots at different growth stages through a series of operations: stem segmentation, coarse segmentation, fine segmentation, and sample-based segmentation. The toolkit takes ∼4–10 minutes to segment a maize shoot, and only 10–20% of that time if coarse segmentation alone is required. Fine segmentation is more detailed than coarse segmentation, especially at organ connection regions, and the accuracy of coarse segmentation can reach 97.2% of that of fine segmentation. Conclusion: Label3DMaize integrates point cloud segmentation algorithms with manual interactive operations, realizing semi-automatic point cloud segmentation of maize shoots at different growth stages. The toolkit provides a practical data annotation tool for further online segmentation research based on deep learning and is expected to promote automatic point cloud processing for various plants.
Affiliation(s)
- Teng Miao
  - College of Information and Electrical Engineering, Shenyang Agricultural University, Dongling Road, Shenhe District, Liaoning Province, Shenyang 110161, China
- Weiliang Wen
  - Beijing Research Center for Information Technology in Agriculture, 11# Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
  - National Engineering Research Center for Information Technology in Agriculture, 11# Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
  - Beijing Key Lab of Digital Plant, 11# Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
- Yinglun Li
  - National Engineering Research Center for Information Technology in Agriculture, 11# Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
  - Beijing Key Lab of Digital Plant, 11# Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
- Sheng Wu
  - Beijing Research Center for Information Technology in Agriculture, 11# Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
  - National Engineering Research Center for Information Technology in Agriculture, 11# Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
  - Beijing Key Lab of Digital Plant, 11# Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
- Chao Zhu
  - College of Information and Electrical Engineering, Shenyang Agricultural University, Dongling Road, Shenhe District, Liaoning Province, Shenyang 110161, China
- Xinyu Guo
  - Beijing Research Center for Information Technology in Agriculture, 11# Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
  - National Engineering Research Center for Information Technology in Agriculture, 11# Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
  - Beijing Key Lab of Digital Plant, 11# Shuguang Huayuan Middle Road, Haidian District, Beijing 100097, China
14
Ma Z, Sun D, Xu H, Zhu Y, He Y, Cen H. Optimization of 3D Point Clouds of Oilseed Rape Plants Based on Time-of-Flight Cameras. SENSORS (BASEL, SWITZERLAND) 2021; 21:664. [PMID: 33477933 PMCID: PMC7833437 DOI: 10.3390/s21020664] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/17/2020] [Revised: 01/07/2021] [Accepted: 01/16/2021] [Indexed: 12/31/2022]
Abstract
Three-dimensional (3D) structure is an important morphological trait for describing plant growth and biotic/abiotic stress responses. Various methods have been developed for obtaining 3D plant data, but data quality and equipment cost are the main factors limiting their adoption. Here, we propose a method to improve the quality of 3D plant data acquired with the time-of-flight (TOF) camera Kinect V2. A k-dimensional (k-d) tree was used to build spatial topological relationships for point searching. Background noise points were then removed with a minimum oriented bounding box (MOBB) combined with a pass-through filter, while outliers and flying pixels were removed based on viewpoints and surface normals. After smoothing with a bilateral filter, the 3D plant data were registered and meshed, and the mesh patches were adjusted to eliminate layered points. After optimization, the mesh patches were closer together: the average distance between patches was 1.88 × 10⁻³ m and the average angle was 17.64°, which are 54.97% and 48.33% of the values before optimization, respectively. The proposed method performed better in reducing noise and the local layered-point phenomenon, and it can help to determine 3D structural parameters more accurately from point clouds and mesh models.
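The neighbour-count style of outlier removal that recurs in these pipelines can be sketched as follows; this brute-force version stands in for the paper's k-d tree search (which returns the same result faster), and the radius/threshold values are illustrative.

```python
import math

def radius_outlier_filter(points, radius, min_neighbors):
    """Keep only points having at least `min_neighbors` other points
    within `radius` (brute force; a k-d tree accelerates the same query)."""
    kept = []
    for i, p in enumerate(points):
        count = sum(1 for j, q in enumerate(points)
                    if i != j and math.dist(p, q) <= radius)
        if count >= min_neighbors:
            kept.append(p)
    return kept

cluster = [(0.0, 0.0, 0.0), (0.003, 0.0, 0.0), (0.0, 0.003, 0.0),
           (0.003, 0.003, 0.0), (0.0, 0.0, 0.003)]
noisy = cluster + [(1.0, 1.0, 1.0)]                # one isolated noise point
print(len(radius_outlier_filter(noisy, 0.01, 3)))  # 5
```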
Affiliation(s)
- Zhihong Ma
  - College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou 310058, China
  - Key Laboratory of Spectroscopy Sensing, Ministry of Agriculture and Rural Affairs, Hangzhou 310058, China
- Dawei Sun
  - College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou 310058, China
  - Key Laboratory of Spectroscopy Sensing, Ministry of Agriculture and Rural Affairs, Hangzhou 310058, China
- Haixia Xu
  - College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou 310058, China
  - Key Laboratory of Spectroscopy Sensing, Ministry of Agriculture and Rural Affairs, Hangzhou 310058, China
- Yueming Zhu
  - College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou 310058, China
  - Key Laboratory of Spectroscopy Sensing, Ministry of Agriculture and Rural Affairs, Hangzhou 310058, China
- Yong He
  - College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou 310058, China
  - Key Laboratory of Spectroscopy Sensing, Ministry of Agriculture and Rural Affairs, Hangzhou 310058, China
  - State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, Hangzhou 310027, China
- Haiyan Cen
  - College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou 310058, China
  - Key Laboratory of Spectroscopy Sensing, Ministry of Agriculture and Rural Affairs, Hangzhou 310058, China
  - State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, Hangzhou 310027, China
15
Zhang L, Xu Z, Xu D, Ma J, Chen Y, Fu Z. Growth monitoring of greenhouse lettuce based on a convolutional neural network. HORTICULTURE RESEARCH 2020; 7:124. [PMID: 32821407 PMCID: PMC7395764 DOI: 10.1038/s41438-020-00345-6] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/29/2020] [Revised: 04/20/2020] [Accepted: 05/17/2020] [Indexed: 05/24/2023]
Abstract
Growth-related traits, such as aboveground biomass and leaf area, are critical indicators to characterize the growth of greenhouse lettuce. Currently, nondestructive methods for estimating growth-related traits are subject to limitations in that the methods are susceptible to noise and heavily rely on manually designed features. In this study, a method for monitoring the growth of greenhouse lettuce was proposed by using digital images and a convolutional neural network (CNN). Taking lettuce images as the input, a CNN model was trained to learn the relationship between images and the corresponding growth-related traits, i.e., leaf fresh weight (LFW), leaf dry weight (LDW), and leaf area (LA). To compare the results of the CNN model, widely adopted methods were also used. The results showed that the values estimated by CNN had good agreement with the actual measurements, with R2 values of 0.8938, 0.8910, and 0.9156 and normalized root mean square error (NRMSE) values of 26.00, 22.07, and 19.94%, outperforming the compared methods for all three growth-related traits. The obtained results showed that the CNN demonstrated superior estimation performance for the flat-type cultivars of Flandria and Tiberius compared with the curled-type cultivar of Locarno. Generalization tests were conducted by using images of Tiberius from another growing season. The results showed that the CNN was still capable of achieving accurate estimation of the growth-related traits, with R2 values of 0.9277, 0.9126, and 0.9251 and NRMSE values of 22.96, 37.29, and 27.60%. The results indicated that a CNN with digital images is a robust tool for the monitoring of the growth of greenhouse lettuce.
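For reference, the agreement metrics quoted above can be computed as follows; this assumes NRMSE is the RMSE normalised by the mean of the measured values, a common convention that the abstract does not spell out.

```python
def r2_nrmse(actual, predicted):
    """Coefficient of determination (R2) and mean-normalised RMSE in %."""
    n = len(actual)
    mean_a = sum(actual) / n
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    r2 = 1.0 - ss_res / ss_tot
    rmse = (ss_res / n) ** 0.5
    return r2, 100.0 * rmse / mean_a

measured = [10.0, 20.0, 30.0, 40.0]
print(r2_nrmse(measured, measured))  # (1.0, 0.0) for a perfect fit
```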
Affiliation(s)
- Lingxian Zhang
  - China Agricultural University, Beijing, 100083 China
  - Key Laboratory of Agricultural Informationization Standardization, Ministry of Agriculture and Rural Affairs, Beijing, China
- Zanyu Xu
  - China Agricultural University, Beijing, 100083 China
- Dan Xu
  - Institute of Environment and Sustainable Development in Agriculture, Chinese Academy of Agricultural Sciences, Beijing, 100081 China
- Juncheng Ma
  - Institute of Environment and Sustainable Development in Agriculture, Chinese Academy of Agricultural Sciences, Beijing, 100081 China
- Yingyi Chen
  - China Agricultural University, Beijing, 100083 China
- Zetian Fu
  - China Agricultural University, Beijing, 100083 China
16
An Efficient Processing Approach for Colored Point Cloud-Based High-Throughput Seedling Phenotyping. REMOTE SENSING 2020. [DOI: 10.3390/rs12101540] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Plant height and leaf area are important morphological properties of leafy vegetable seedlings and are particularly useful for plant growth and health research. Traditional measurement schemes are time-consuming and unsuitable for continuous monitoring of plant growth and health. Rapid segmentation of individual vegetable seedlings is a prerequisite for extracting high-throughput phenotype data at the individual-seedling level. This paper proposes an efficient, learning- and model-free 3D point cloud processing pipeline to measure the plant height and leaf area of every seedling in a plug tray. The 3D point clouds are obtained by a low-cost red-green-blue-depth (RGB-D) camera. First, noise is reduced on the original point clouds by a usable-area filter, a depth cut-off filter, and a neighbor-count filter. Second, a surface-feature-histogram-based approach automatically removes the complicated natural background. Then, the Voxel Cloud Connectivity Segmentation (VCCS) and Locally Convex Connected Patches (LCCP) algorithms are employed to partition individual vegetable seedlings. Finally, the height and projected leaf area of each seedling are calculated from the segmented point clouds and validated. Critically, we also demonstrate the robustness of our method across different growth conditions and species. The experimental results show that the proposed method can quickly calculate the morphological parameters of each seedling and is practical for high-throughput seedling phenotyping.
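The final per-seedling measurements reduce to simple geometry on the segmented cloud; a minimal sketch, with the grid cell size and coordinates chosen for illustration:

```python
def height_and_projected_area(points, ground_z=0.0, cell=0.005):
    """Plant height as the highest point above the tray plane; projected
    leaf area as occupied XY grid cells times the cell area (m^2)."""
    height = max(z for _, _, z in points) - ground_z
    occupied = {(int(x // cell), int(y // cell)) for x, y, _ in points}
    return height, len(occupied) * cell * cell

seedling = [(0.001, 0.001, 0.00), (0.006, 0.001, 0.02),
            (0.001, 0.006, 0.05), (0.006, 0.006, 0.10)]
h, a = height_and_projected_area(seedling)
print(h)  # 0.1
```

Rasterising the XY footprint into cells, rather than counting raw points, keeps the area estimate insensitive to point density.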
17
Non-Destructive Measurement of Three-Dimensional Plants Based on Point Cloud. PLANTS 2020; 9:plants9050571. [PMID: 32365673 PMCID: PMC7285297 DOI: 10.3390/plants9050571] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/26/2020] [Revised: 04/19/2020] [Accepted: 04/27/2020] [Indexed: 11/17/2022]
Abstract
In agriculture, information about the spatial distribution of plant growth is valuable for many applications. Quantitative study of plant characteristics plays an important role in research on plant growth and development, and non-destructive, machine-vision-based measurement of plant height remains one of the difficulties. In this paper, we propose a methodology for three-dimensional (3D) reconstruction of growing plants with the Kinect v2.0 and explore the measurement of growth parameters from the resulting 3D point cloud. The strategy comprises three steps. First, the 3D point cloud data are preprocessed: 3D plant registration is completed through point cloud outlier filtering and surface smoothing. Second, the locally convex connected patches method is used to segment the leaves and stem from the plant model; feature boundary points are extracted from the leaf point cloud, and a contour extraction algorithm yields the feature boundary lines. Finally, leaf length and width are calculated by Euclidean distance, leaf area by a surface integral method, and plant height by vertical distance. The results show that this automatic plant information extraction scheme is effective and that the measurement accuracy meets measurement standards. The established 3D plant model is key to studying whole-plant information; it reduces the inaccuracy that occlusion introduces into the description of leaf shape and is conducive to studying the real growth status of the plant.
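The leaf-area step (a surface integral over the reconstructed mesh) amounts to summing triangle areas; a minimal sketch:

```python
def mesh_area(vertices, triangles):
    """Total surface area: sum over triangles of 0.5 * |(B-A) x (C-A)|."""
    total = 0.0
    for ia, ib, ic in triangles:
        ax, ay, az = vertices[ia]
        bx, by, bz = vertices[ib]
        cx, cy, cz = vertices[ic]
        ux, uy, uz = bx - ax, by - ay, bz - az       # edge B - A
        vx, vy, vz = cx - ax, cy - ay, cz - az       # edge C - A
        nx = uy * vz - uz * vy                       # cross product u x v
        ny = uz * vx - ux * vz
        nz = ux * vy - uy * vx
        total += 0.5 * (nx * nx + ny * ny + nz * nz) ** 0.5
    return total

# Unit square split into two triangles has area 1.0
quad = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
print(mesh_area(quad, [(0, 1, 2), (0, 2, 3)]))  # 1.0
```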
18
Path Tracking Control of Field Information-Collecting Robot Based on Improved Convolutional Neural Network Algorithm. SENSORS 2020; 20:s20030797. [PMID: 32024030 PMCID: PMC7038679 DOI: 10.3390/s20030797] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/10/2020] [Revised: 01/28/2020] [Accepted: 01/29/2020] [Indexed: 11/17/2022]
Abstract
Because of the narrow row spacing of corn and the lack of light in the field caused by occluding branches, leaves, and weeds in the middle and late growth stages, it is generally difficult for machinery to move between rows, and real-time observation of corn growth is impossible. To solve this problem, a robot for collecting information between corn rows is designed. First, a mathematical model of the robot is established using the designed control system. Second, an improved convolutional neural network model is proposed for training and learning, and the driving path is fitted by detecting and identifying corn rhizomes. Next, the multi-body dynamics simulation software RecurDyn/Track is used to establish a dynamic model of the robot moving on soft soil, and a control system developed in MATLAB/Simulink is used for joint simulation experiments. Simulation results show that the sliding-mode variable structure control method achieves better control results. Finally, experiments on firm ground and in a simulated field environment show that the field information-collecting robot runs stably and deviates little from the planned path. The robot is well suited to field plant protection, the control of corn diseases and insect pests, and the realization of human-machine separation.
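The sliding-mode variable structure idea can be illustrated with a generic boundary-layer control law; the sliding surface, gains, and boundary-layer width below are illustrative choices, not the paper's.

```python
def sliding_mode_control(error, d_error, lam=2.0, k=5.0, phi=0.05):
    """u = -k * sat(s / phi) with sliding surface s = de + lam * e.
    Saturation replaces sign() to reduce chattering near s = 0."""
    s = d_error + lam * error
    sat = max(-1.0, min(1.0, s / phi))
    return -k * sat

print(sliding_mode_control(1.0, 0.0))  # -5.0 (far from the surface: full effort)
```

Inside the boundary layer |s| < phi the control scales linearly with s, which is what suppresses the high-frequency switching of the raw sign-based law.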
19
Olas JJ, Fichtner F, Apelt F. All roads lead to growth: imaging-based and biochemical methods to measure plant growth. JOURNAL OF EXPERIMENTAL BOTANY 2020; 71:11-21. [PMID: 31613967 PMCID: PMC6913701 DOI: 10.1093/jxb/erz406] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/28/2019] [Accepted: 08/28/2019] [Indexed: 05/31/2023]
Abstract
Plant growth is a highly complex biological process that involves innumerable interconnected biochemical and signalling pathways. Many different techniques have been developed to measure growth, unravel the various processes that contribute to plant growth, and understand how a complex interaction between genotype and environment determines the growth phenotype. Despite this complexity, the term 'growth' is often simplified by researchers; depending on the method used for quantification, growth is viewed as an increase in plant or organ size, a change in cell architecture, or an increase in structural biomass. In this review, we summarise the cellular and molecular mechanisms underlying plant growth, highlight state-of-the-art imaging and non-imaging-based techniques to quantitatively measure growth, including a discussion of their advantages and drawbacks, and suggest a terminology for growth rates depending on the type of technique used.
Affiliation(s)
- Justyna Jadwiga Olas
  - University of Potsdam, Institute of Biochemistry and Biology, Karl-Liebknecht-Straße, Haus, Potsdam, Germany
- Franziska Fichtner
  - Max Planck Institute of Molecular Plant Physiology, Am Mühlenberg, Potsdam, Germany
- Federico Apelt
  - Max Planck Institute of Molecular Plant Physiology, Am Mühlenberg, Potsdam, Germany
20
Estimating Pasture Biomass and Canopy Height in Brazilian Savanna Using UAV Photogrammetry. REMOTE SENSING 2019. [DOI: 10.3390/rs11202447] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The Brazilian territory contains approximately 160 million hectares of pastures, and techniques are needed to automate their management and increase their production. This technical note has two objectives: first, to estimate canopy height using unmanned aerial vehicle (UAV) photogrammetry; second, to propose an equation for estimating the biomass of Brazilian savanna (Cerrado) pastures based on UAV canopy height. Four experimental units of Panicum maximum cv. BRS Tamani were evaluated. Herbage mass sampling, height measurements, and UAV image collection were performed simultaneously. The UAVs were flown at a height of 50 m, and images were generated with a mean ground sample distance (GSD) of approximately 1.55 cm. The forage canopy height estimated by UAV was calculated as the difference between the digital surface model (DSM) and the digital terrain model (DTM). The R2 between ruler height and UAV height was 0.80; between green biomass (GB, kg ha−1) and ruler height, 0.81; and between green biomass and UAV height, 0.74. UAV photogrammetry proved to be a promising technique for estimating height and biomass in Brazilian Panicum maximum cv. BRS Tamani pastures located in the endangered Brazilian savanna (Cerrado) biome.
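The canopy height computation described above is a per-cell raster difference; a minimal sketch over toy DSM/DTM grids:

```python
def canopy_height_model(dsm, dtm):
    """Canopy height per raster cell: surface model minus terrain model."""
    return [[s - t for s, t in zip(srow, trow)]
            for srow, trow in zip(dsm, dtm)]

dsm = [[101.0, 101.5], [100.75, 102.0]]   # surface elevations (m)
dtm = [[100.0, 100.0], [100.0, 100.0]]    # bare-terrain elevations (m)
print(canopy_height_model(dsm, dtm))  # [[1.0, 1.5], [0.75, 2.0]]
```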
21
Measurement Method Based on Multispectral Three-Dimensional Imaging for the Chlorophyll Contents of Greenhouse Tomato Plants. SENSORS 2019; 19:s19153345. [PMID: 31366151 PMCID: PMC6696012 DOI: 10.3390/s19153345] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/02/2019] [Revised: 07/25/2019] [Accepted: 07/28/2019] [Indexed: 11/26/2022]
Abstract
Nondestructive plant growth measurement is essential for research on plant growth and health. A nondestructive system for retrieving plant information must measure both morphological and physiological characteristics, but most systems use two independent measurement subsystems for the two types. In this study, a highly integrated, multispectral, three-dimensional (3D) nondestructive measurement system for greenhouse tomato plants was designed using a Kinect sensor, an SOC710 hyperspectral imager, an electric rotary table, and other components. A heterogeneous-sensor image registration technique based on the Fourier transform was proposed to register the SOC710 multispectral reflectance in the Kinect depth-image coordinate system. Furthermore, a 3D multi-view RGB-D image reconstruction method based on pose estimation and self-calibration of the Kinect sensor was developed to reconstruct a multispectral 3D point cloud model of the tomato plant. In an experiment measuring canopy chlorophyll, relative chlorophyll content was estimated with soil and plant analyzer development (SPAD) measurement models built from both the 3D multi-view point cloud model and a single-view point cloud model, and their performance was compared and analyzed. The results revealed that the model built from characteristic variables of the multi-view point cloud was superior to the one built from single-view variables. The multispectral 3D reconstruction approach can therefore reconstruct a plant multispectral 3D point cloud model, improving on the traditional two-dimensional image-based SPAD measurement method and enabling precise, efficient, high-throughput measurement of plant chlorophyll.
22
Martinez-Guanter J, Ribeiro Á, Peteinatos GG, Pérez-Ruiz M, Gerhards R, Bengochea-Guevara JM, Machleb J, Andújar D. Low-Cost Three-Dimensional Modeling of Crop Plants. SENSORS (BASEL, SWITZERLAND) 2019; 19:E2883. [PMID: 31261757 PMCID: PMC6651267 DOI: 10.3390/s19132883] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/22/2019] [Revised: 06/20/2019] [Accepted: 06/26/2019] [Indexed: 12/03/2022]
Abstract
Plant modeling can provide a more detailed overview of the basis of plant development throughout the life cycle. Three-dimensional processing algorithms are rapidly expanding in plant phenotyping programmes and in decision-making for agronomic management. Several methods have already been tested, but for practical implementation the trade-off between equipment cost, the computational resources required, and the fidelity and accuracy of end-detail reconstruction needs to be assessed and quantified. This study examined the suitability of two low-cost systems for plant reconstruction. First, a low-cost Structure from Motion (SfM) technique was used to create 3D models of crop plants. In the second method, an acquisition and reconstruction algorithm using an RGB-Depth Kinect v2 sensor was tested with a similar image acquisition procedure. The information was processed to create a dense point cloud, from which a 3D polygon mesh representing each scanned plant was built. The selected plants came from three crops (maize, sugar beet, and sunflower) with structural and biological differences. Parameters measured from the models were validated against ground-truth data of plant height, leaf area index, and plant dry biomass using regression methods. The results showed strong consistency, with good correlations between values calculated from the models and the ground-truth information. Although the values were always estimated accurately, differences between the methods and among the crops were found. The SfM method gave slightly better results for end-detail reconstruction and the accuracy of height estimation. Although the SfM processing algorithm is relatively fast, creating the 3D models from RGB-D information is faster still. Both methods thus produced robust results and show great potential for both indoor and outdoor scenarios.
Consequently, these low-cost systems for 3D modeling are suitable for many situations requiring model generation, and they offer a favourable time-cost relationship.
Collapse
Affiliation(s)
- Jorge Martinez-Guanter
- Department of Aerospace Engineering and Fluids Mechanics, Escuela Técnica Superior de Ingeniería Agronómica (ETSIA), Universidad de Sevilla, 41013 Sevilla, Spain
- Ángela Ribeiro
- Centre for Automation and Robotics, CSIC-UPM, Arganda del Rey, 28500 Madrid, Spain
- Gerassimos G Peteinatos
- Department of Weed Science, Institute of Phytomedicine, University of Hohenheim, Otto-Sander-Straße 5, 70599 Stuttgart, Germany
- Manuel Pérez-Ruiz
- Department of Aerospace Engineering and Fluids Mechanics, Escuela Técnica Superior de Ingeniería Agronómica (ETSIA), Universidad de Sevilla, 41013 Sevilla, Spain
- Roland Gerhards
- Department of Weed Science, Institute of Phytomedicine, University of Hohenheim, Otto-Sander-Straße 5, 70599 Stuttgart, Germany
- Jannis Machleb
- Department of Weed Science, Institute of Phytomedicine, University of Hohenheim, Otto-Sander-Straße 5, 70599 Stuttgart, Germany
- Dionisio Andújar
- Centre for Automation and Robotics, CSIC-UPM, Arganda del Rey, 28500 Madrid, Spain
23
Zhao C, Zhang Y, Du J, Guo X, Wen W, Gu S, Wang J, Fan J. Crop Phenomics: Current Status and Perspectives. FRONTIERS IN PLANT SCIENCE 2019; 10:714. [PMID: 31214228 PMCID: PMC6557228 DOI: 10.3389/fpls.2019.00714] [Citation(s) in RCA: 132] [Impact Index Per Article: 26.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/30/2018] [Accepted: 05/14/2019] [Indexed: 05/19/2023]
Abstract
Reliable, automatic, multifunctional, and high-throughput phenotyping technologies are increasingly considered important tools for the rapid advancement of genetic gain in breeding programs. With the rapid development of high-throughput phenotyping technologies, research in this area is entering a new era called 'phenomics.' The crop phenotyping community not only needs to build a multi-domain, multi-level, and multi-scale crop phenotyping big database, but also to research technical systems for phenotypic trait identification and develop bioinformatics technologies for information extraction from the overwhelming amounts of omics data. Here, we provide an overview of crop phenomics research in two parts, from phenotypic data collection through various sensors to phenomics analysis. Finally, we discuss the challenges and prospects of crop phenomics in order to suggest new methods of mining genes associated with important agronomic traits and to propose new intelligent solutions for precision breeding.
24
High-Throughput Phenotyping Analysis of Potted Soybean Plants Using Colorized Depth Images Based on A Proximal Platform. REMOTE SENSING 2019. [DOI: 10.3390/rs11091085] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Canopy color and structure can strongly reflect plant functions. Color characteristics, plant height, and canopy breadth are important aspects of the canopy phenotype of soybean plants. High-throughput phenotyping systems with imaging capabilities providing color and depth information can rapidly acquire data on soybean plants, making it possible to quantify and monitor soybean canopy development. The goal of this study was to develop a 3D imaging approach to quantitatively analyze soybean canopy development under natural light conditions. Thus, a Kinect sensor-based high-throughput phenotyping (HTP) platform was developed for soybean plant phenotyping. To calculate color traits accurately, the distortion of the color images was first registered in accordance with the principle of three primary colors and color constancy. Then, the registered color images were applied to the depth images for the reconstruction of a colorized three-dimensional canopy structure. Furthermore, the 3D point cloud of the soybean canopies was extracted from the background according to an adjusted threshold, and the area of each individual potted soybean plant in the depth images was segmented for the calculation of phenotypic traits. Finally, color indices, plant height and canopy breadth were assessed based on the 3D point cloud of the soybean canopies. The results showed that the maximum registration errors for the R, G, and B bands in the dataset were 1.26%, 1.09%, and 0.75%, respectively. Correlation analysis between the sensor and manual measurements yielded R2 values of 0.99, 0.89, and 0.89 for plant height, canopy breadth in the west-east (W–E) direction, and canopy breadth in the north-south (N–S) direction, and R2 values of 0.82, 0.79, and 0.80 for the color indices h, s, and i, respectively.
Given these results, the proposed approaches provide new opportunities for the identification of the quantitative traits that control canopy structure in genetic/genomic studies or for soybean yield prediction in breeding programs.
25
Wu S, Wen W, Xiao B, Guo X, Du J, Wang C, Wang Y. An Accurate Skeleton Extraction Approach From 3D Point Clouds of Maize Plants. FRONTIERS IN PLANT SCIENCE 2019; 10:248. [PMID: 30899271 PMCID: PMC6416182 DOI: 10.3389/fpls.2019.00248] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/25/2018] [Accepted: 02/14/2019] [Indexed: 05/27/2023]
Abstract
Accurate and high-throughput determination of plant morphological traits is essential for phenotyping studies. Nowadays, there are many approaches to acquiring high-quality three-dimensional (3D) point clouds of plants. However, it is difficult to accurately estimate phenotyping parameters from these point clouds across the whole growth cycle of maize plants. In this paper, an accurate skeleton extraction approach is proposed to bridge the gap between 3D point clouds and phenotypic trait estimation of maize plants. The algorithm first uses point cloud clustering and color-difference denoising to reduce the noise of the input point clouds. Next, the Laplacian contraction algorithm is applied to shrink the points. Then the key points representing the skeleton of the plant are selected through adaptive sampling, and neighboring points are connected to form a plant skeleton composed of semantic organs. Finally, skeleton points that deviate from the input point cloud are calibrated by building a step-forward local coordinate along the tangent direction of the original points. The proposed approach successfully generates an accurately extracted skeleton from the 3D point cloud and helps to estimate phenotyping parameters of maize plants with high precision. Experimental verification of the skeleton extraction process, tested using three maize cultivars at different growth stages, demonstrates that the extracted skeleton matches the input point cloud well. Compared with morphological parameters derived from 3D digitizing data, the NRMSE of leaf length, leaf inclination angle, leaf top length, leaf azimuthal angle, leaf growth height, and plant height, estimated using the extracted plant skeleton, are 5.27%, 8.37%, 5.12%, 4.42%, 1.53%, and 0.83%, respectively, which could meet the needs of phenotyping analysis. The time required to process a single maize plant is below 100 s.
The proposed approach may play an important role in further maize research and applications, such as genotype-to-phenotype study, geometric reconstruction, functional structural maize modeling, and dynamic growth animation.
Affiliation(s)
- Sheng Wu
- Beijing Research Center for Information Technology in Agriculture, Beijing, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- Weiliang Wen
- Beijing Research Center for Information Technology in Agriculture, Beijing, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- Boxiang Xiao
- Beijing Research Center for Information Technology in Agriculture, Beijing, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- Xinyu Guo
- Beijing Research Center for Information Technology in Agriculture, Beijing, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- Jianjun Du
- Beijing Research Center for Information Technology in Agriculture, Beijing, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- Chuanyu Wang
- Beijing Research Center for Information Technology in Agriculture, Beijing, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- Yongjian Wang
- Beijing Research Center for Information Technology in Agriculture, Beijing, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing, China
26
A Novel Mobile Structured Light System in Food 3D Reconstruction and Volume Estimation. SENSORS 2019; 19:s19030564. [PMID: 30700041 PMCID: PMC6386919 DOI: 10.3390/s19030564] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/20/2018] [Revised: 01/16/2019] [Accepted: 01/28/2019] [Indexed: 11/17/2022]
Abstract
Over the past ten years, diabetes has rapidly become more prevalent in all age demographics, and especially in children. Improved dietary assessment techniques are necessary for epidemiological studies that investigate the relationship between diet and disease. Current nutritional research is hindered by the low accuracy of traditional dietary intake estimation methods used for portion size assessment. This paper presents the development and validation of a novel instrumentation system for accurately measuring the dietary intake of diabetic patients. The instrument uses a mobile Structured Light System (SLS), which measures the food volume and portion size of a patient's diet in daily living conditions. The SLS allows for the accurate determination of the volume and portion size of a scanned food item. Once the volume of a food item is calculated, the nutritional content of the item can be estimated using existing nutritional databases. The system design includes a volume estimation algorithm and a hardware add-on that consists of a laser module and a diffraction lens. The experimental results demonstrate an improvement of around 40% in the accuracy of the volume or portion size measurement when compared to manual calculation. The limitations and shortcomings of the system are discussed in this manuscript.
27
Itakura K, Kamakura I, Hosoi F. Three-Dimensional Monitoring of Plant Structural Parameters and Chlorophyll Distribution. SENSORS (BASEL, SWITZERLAND) 2019; 19:E413. [PMID: 30669537 PMCID: PMC6359203 DOI: 10.3390/s19020413] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/13/2018] [Revised: 01/16/2019] [Accepted: 01/18/2019] [Indexed: 11/16/2022]
Abstract
Image analysis is widely used for accurate and efficient plant monitoring. Plants have complex three-dimensional (3D) structures; hence, 3D image acquisition and analysis are useful for determining the status of plants. Here, 3D images of plants were reconstructed using a photogrammetric approach called "structure from motion". Chlorophyll content is an important parameter that determines the status of plants, and it was estimated from 3D plant images containing color information. To observe changes in chlorophyll content and plant structure, a potted plant was kept for five days under a water stress condition and its 3D images were taken once a day. As a result, the normalized Red value and the chlorophyll content were correlated, with a high R² value (0.81). The absolute error of the chlorophyll content estimation in cross-validation studies was 4.0 × 10⁻² μg/mm². At the same time, the structural parameters (i.e., the leaf inclination angle and the azimuthal angle) were calculated, so that changes in the plant's status could be monitored simultaneously in terms of both chlorophyll content and structure. By combining these parameters in plant image analysis, early detection of plant stressors, such as water stress, becomes possible.
Affiliation(s)
- Kenta Itakura
- Graduate School, University of Tokyo, Tokyo 113-8657, Japan
- Fumiki Hosoi
- Graduate School, University of Tokyo, Tokyo 113-8657, Japan
28
Leaf Area Estimation of Reconstructed Maize Plants Using a Time-of-Flight Camera Based on Different Scan Directions. ROBOTICS 2018. [DOI: 10.3390/robotics7040063] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2023] Open
Abstract
The leaf area is an important plant parameter for plant status and crop yield. In this paper, a low-cost time-of-flight camera, the Kinect v2, was mounted on a robotic platform to acquire 3D data of maize plants in a greenhouse. The robotic platform drove through the maize rows and acquired 3D images that were later registered and stitched. Three different maize row reconstruction approaches were compared: merging point clouds generated from both sides of the row in both directions, merging point clouds scanned from just one side, and merging point clouds scanned from opposite directions of the row. The resulting point cloud was subsampled and rasterized, and the normals were computed and re-oriented with a Fast Marching algorithm. Poisson surface reconstruction was then applied to the point cloud, and new vertices and faces generated by the algorithm were removed. The results showed that aligning and merging four point clouds per row and merging two point clouds scanned from the same side generated very similar average mean absolute percentage errors of 8.8% and 7.8%, respectively. The worst error, 32.3%, resulted from merging the two point clouds scanned from both sides in opposite directions.
29
Three-Dimensional Reconstruction of Soybean Canopies Using Multisource Imaging for Phenotyping Analysis. REMOTE SENSING 2018. [DOI: 10.3390/rs10081206] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
Geometric three-dimensional (3D) reconstruction has emerged as a powerful tool for plant phenotyping and plant breeding. Although laser scanning is one of the most intensely used sensing techniques for 3D reconstruction projects, it still has many limitations, such as a high investment cost. To overcome such limitations, in the present study, a low-cost, novel, and efficient imaging system consisting of a red-green-blue (RGB) camera and a photonic mixer detector (PMD) was developed, and its usability for plant phenotyping was demonstrated via a 3D reconstruction of a soybean plant that contains color information. To reconstruct soybean canopies, a density-based spatial clustering of applications with noise (DBSCAN) algorithm was used to extract canopy information from the raw 3D point cloud. Principal component analysis (PCA) and iterative closest point (ICP) algorithms were then used to register the multisource images for the 3D reconstruction of a soybean plant from both the side and top views. We then assessed phenotypic traits such as plant height and the greenness index based on the deviations of test samples. The results showed that, compared with manual measurements, the side-view-based assessment yielded a determination coefficient (R2) of 0.9890 for the estimation of soybean height and an R2 of 0.6059 for the estimation of the soybean canopy greenness index; the top-view-based assessment yielded an R2 of 0.9936 for the estimation of soybean height and an R2 of 0.8864 for the estimation of soybean canopy greenness. Together, the results indicated that an assembled 3D imaging device applying the algorithms developed in this study could be used as a reliable and robust platform for plant phenotyping, and potentially for automated and high-throughput applications under both natural light and indoor conditions.
30
Liu J, Yuan Y, Zhou Y, Zhu X, Syed TN. Experiments and Analysis of Close-Shot Identification of On-Branch Citrus Fruit with RealSense. SENSORS 2018; 18:s18051510. [PMID: 29751594 PMCID: PMC5982123 DOI: 10.3390/s18051510] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/10/2018] [Revised: 05/04/2018] [Accepted: 05/08/2018] [Indexed: 12/01/2022]
Abstract
Fruit recognition based on depth information has been a hot topic due to its advantages. However, present equipment and methods cannot meet the requirements for rapid and reliable close-shot recognition and location of fruits for robot harvesting. To solve this problem, we propose a recognition algorithm for citrus fruit based on RealSense. The method effectively utilizes depth point cloud data in a close-shot range of 160 mm and the different geometric features of fruit and leaf to recognize fruits via an intersection curve cut by a depth sphere. Experiments on close-shot recognition of six varieties of fruit under different conditions were carried out. The detection rates under little occlusion and adhesion ranged from 80% to 100%. However, severe occlusion and adhesion still had a great influence on the overall success rate of on-branch fruit recognition, which was 63.8%. The size of the fruit had a more noticeable impact on the success rate of detection. Moreover, because detection uses close-shot near-infrared imaging, there was no obvious difference in recognition between bright and dark conditions. The advantages of close-shot limited target detection with RealSense, fast foreground and background removal, and the simplicity of the algorithm with high precision may contribute to highly real-time vision-servo operations of harvesting robots.
Affiliation(s)
- Jizhan Liu
- Key Laboratory of Modern Agricultural Equipment and Technology, Ministry of Education, Jiangsu University, Jiangsu 212013, China
- Yan Yuan
- Key Laboratory of Modern Agricultural Equipment and Technology, Ministry of Education, Jiangsu University, Jiangsu 212013, China
- Yao Zhou
- College of Information Science and Technology, Nanjing Forestry University, Nanjing 210037, China
- Xinxin Zhu
- Key Laboratory of Modern Agricultural Equipment and Technology, Ministry of Education, Jiangsu University, Jiangsu 212013, China
- Tabinda Naz Syed
- Key Laboratory of Modern Agricultural Equipment and Technology, Ministry of Education, Jiangsu University, Jiangsu 212013, China