1
Peng X, Wang K, Zhang Z, Geng N, Zhang Z. A Point-Cloud Segmentation Network Based on SqueezeNet and Time Series for Plants. J Imaging 2023;9:258. [PMID: 38132676] [PMCID: PMC10743816] [DOI: 10.3390/jimaging9120258]
Abstract
The phenotyping of plant growth enriches our understanding of intricate genetic characteristics, paving the way for advancements in modern breeding and precision agriculture. Within the domain of phenotyping, segmenting 3D point clouds of plant organs is the basis of extracting plant phenotypic parameters. In this study, we introduce a novel method for point-cloud downsampling that adeptly mitigates the challenges posed by sample imbalances. In subsequent developments, we architect a deep learning framework founded on the principles of SqueezeNet for the segmentation of plant point clouds. In addition, we also use the time series as input variables, which effectively improves the segmentation accuracy of the network. Based on semantic segmentation, the MeanShift algorithm is employed to execute instance segmentation on the point-cloud data of crops. In semantic segmentation, the average Precision, Recall, F1-score, and IoU of maize reached 99.35%, 99.26%, 99.30%, and 98.61%, and the average Precision, Recall, F1-score, and IoU of tomato reached 97.98%, 97.92%, 97.95%, and 95.98%. In instance segmentation, the accuracy of maize and tomato reached 98.45% and 96.12%. This research holds the potential to advance the fields of plant phenotypic extraction, ideotype selection, and precision agriculture.
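A note on the instance-segmentation step described above: clustering the points of one semantic class with MeanShift can be sketched with scikit-learn. The toy point cloud and the bandwidth value below are illustrative assumptions, not the authors' data or settings.

```python
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(0)
# Toy stand-in for the points of one semantic class (e.g. "leaf"):
# two organs, each a compact 3D blob, well separated in space.
leaf_a = rng.normal(loc=(0.0, 0.0, 0.0), scale=0.1, size=(200, 3))
leaf_b = rng.normal(loc=(5.0, 5.0, 2.0), scale=0.1, size=(200, 3))
points = np.vstack([leaf_a, leaf_b])

# MeanShift groups the class-level points into organ instances;
# the bandwidth would be tuned per crop in practice.
labels = MeanShift(bandwidth=1.0).fit_predict(points)
n_instances = len(np.unique(labels))
print(n_instances)  # the two blobs are far apart, so 2 instances
```

The bandwidth is the one free parameter here: too small and one organ splits into several instances, too large and touching organs merge.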
Affiliation(s)
- Nan Geng
- College of Information Engineering, Northwest A&F University, Yangling 712100, China; (X.P.)
2
Wei B, Ma X, Guan H, Yu M, Yang C, He H, Wang F, Shen P. Dynamic simulation of leaf area index for the soybean canopy based on 3D reconstruction. Ecol Inform 2023. [DOI: 10.1016/j.ecoinf.2023.102070]
3
Paturkar A, Sen Gupta G, Bailey D. Plant trait measurement in 3D for growth monitoring. Plant Methods 2022;18:59. [PMID: 35505428] [PMCID: PMC9063380] [DOI: 10.1186/s13007-022-00889-9]
Abstract
BACKGROUND There is a demand for non-destructive systems in plant phenotyping which could precisely measure plant traits for growth monitoring. In this study, the growth of chilli plants (Capsicum annuum L.) was monitored in outdoor conditions. A non-destructive solution is proposed for growth monitoring in 3D using a single mobile phone camera based on a structure from motion algorithm. A method to measure leaf length and leaf width when the leaf is curled is also proposed. Various plant traits such as number of leaves, stem height, leaf length, and leaf width were measured from the reconstructed and segmented 3D models at different plant growth stages. RESULTS The accuracy of the proposed system is measured by comparing the values derived from the 3D plant model with manual measurements. The results demonstrate that the proposed system has potential to non-destructively monitor plant growth in outdoor conditions with high precision, when compared to the state-of-the-art systems. CONCLUSIONS In conclusion, this study demonstrated that the methods proposed to calculate plant traits can monitor plant growth in outdoor conditions.
Affiliation(s)
- Abhipray Paturkar
- Department of Mechanical and Electrical Engineering, School of Food and Advanced Technology, Massey University, Palmerston North, New Zealand.
- Gourab Sen Gupta
- Department of Mechanical and Electrical Engineering, School of Food and Advanced Technology, Massey University, Palmerston North, New Zealand
- Donald Bailey
- Department of Mechanical and Electrical Engineering, School of Food and Advanced Technology, Massey University, Palmerston North, New Zealand
4
Wu S, Wen W, Gou W, Lu X, Zhang W, Zheng C, Xiang Z, Chen L, Guo X. A miniaturized phenotyping platform for individual plants using multi-view stereo 3D reconstruction. Front Plant Sci 2022;13:897746. [PMID: 36003825] [PMCID: PMC9393617] [DOI: 10.3389/fpls.2022.897746]
Abstract
Plant phenotyping is essential in plant breeding and management. High-throughput data acquisition and automatic phenotype extraction are common concerns in plant phenotyping. Despite the development of phenotyping platforms and the realization of high-throughput three-dimensional (3D) data acquisition in tall plants, such as maize, handling small-size plants with complex structural features remains a challenge. This study developed a miniaturized shoot phenotyping platform, MVS-Pheno V2, focusing on low plant shoots. The platform is an improvement of MVS-Pheno V1 and was developed based on multi-view stereo 3D reconstruction. It has the following four components: hardware, wireless communication and control, a data acquisition system, and a data processing system. The hardware places the rotating unit on top of the platform, keeping the plant static during rotation. A novel local network was established to realize wireless communication and control, thus preventing cable twining. The data processing system was developed to calibrate point clouds and extract phenotypes, including plant height, leaf area, projected area, shoot volume, and compactness. This study used three cultivars of wheat shoots at four growth stages to test the performance of the platform. The mean absolute percentage error of point cloud calibration was 0.585%. The squared correlation coefficient R2 was 0.9991, 0.9949, and 0.9693 for plant height, leaf length, and leaf width, respectively. The root mean squared error (RMSE) was 0.6996, 0.4531, and 0.1174 cm for plant height, leaf length, and leaf width, respectively. The MVS-Pheno V2 platform provides an alternative solution for high-throughput phenotyping of low individual plants and is especially suitable for shoot architecture-related plant breeding and management studies.
Affiliation(s)
- Sheng Wu
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- Weiliang Wen
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- Wenbo Gou
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- Xianju Lu
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- Wenqi Zhang
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- Chenxi Zheng
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- Zhiwei Xiang
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- Liping Chen
- Intelligent Equipment Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- *Correspondence: Liping Chen
- Xinyu Guo
- Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing, China
- Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in Agriculture, Beijing, China
- College of Agricultural Engineering, Jiangsu University, Zhenjiang, China
5
Automatic leaf segmentation and overlapping leaf separation using stereo vision. Array 2021. [DOI: 10.1016/j.array.2021.100099]
6
Estimation of Winter Wheat Yield from UAV-Based Multi-Temporal Imagery Using Crop Allometric Relationship and SAFY Model. Drones 2021. [DOI: 10.3390/drones5030078]
Abstract
Crop yield prediction and estimation play essential roles in the precision crop management system. The Simple Algorithm for Yield Estimation (SAFY) has been applied to Unmanned Aerial Vehicle (UAV)-based data to provide high-spatial-resolution yield prediction and estimation for winter wheat. However, this crop model relies on the relationship between crop leaf weight and biomass, which considers only the contribution of leaves in the final biomass and yield calculation. This study developed the modified SAFY-height model by incorporating an allometric relationship between ground-measured crop height and biomass. A piecewise linear regression model is used to establish the relationship between crop height and biomass. The parameters of the modified SAFY-height model are calibrated using ground measurements. The calibrated modified SAFY-height model is then applied to the UAV-based photogrammetric point-cloud-derived crop height and effective leaf area index (LAIe) maps to predict winter wheat yield. The growing accumulated temperature turning point of the allometric relationship between crop height and biomass is 712 °C. The modified SAFY-height model, relative to the traditional SAFY, provided more accurate yield estimation for areas with LAI higher than 1.01 m2/m2. The RMSE and RRMSE are improved by 3.3% and 0.5%, respectively.
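A note on the piecewise linear regression used above: with a known breakpoint (the abstract reports a turning point at 712 °C of accumulated temperature), a continuous two-segment line can be fitted by ordinary least squares on a hinge basis. The data below are synthetic and only illustrate the fitting step, not the study's measurements.

```python
import numpy as np

def fit_piecewise_linear(x, y, breakpoint):
    """Least-squares fit of the continuous two-segment model
    y = b0 + b1*x + b2*max(x - breakpoint, 0)."""
    hinge = np.maximum(x - breakpoint, 0.0)
    X = np.column_stack([np.ones_like(x), x, hinge])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # b0, slope before the break, added slope after it

# Synthetic height-biomass style data whose slope changes at x = 712.
x = np.linspace(0.0, 1400.0, 200)
y = np.where(x < 712.0, 0.01 * x, 0.01 * 712.0 + 0.05 * (x - 712.0))
coef = fit_piecewise_linear(x, y, 712.0)
print(coef)  # approximately [0.0, 0.01, 0.04]
```

The hinge parameterization keeps the two segments joined at the breakpoint, which a separate fit per segment would not guarantee.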
7
EasyIDP: A Python Package for Intermediate Data Processing in UAV-Based Plant Phenotyping. Remote Sens 2021. [DOI: 10.3390/rs13132622]
Abstract
Unmanned aerial vehicle (UAV) and structure from motion (SfM) photogrammetry techniques are now widely used for field-based, high-throughput plant phenotyping, but some of the intermediate processes in the workflow remain manual. For example, geographic information system (GIS) software is used to manually assess the 2D/3D field reconstruction quality and to crop regions of interest (ROIs) from the whole field. In addition, extracting phenotypic traits from raw UAV images can be preferable to extracting them directly from the digital orthomosaic (DOM). Currently, no easy-to-use tools are available to implement these tasks for commonly used commercial SfM software, such as Pix4D and Agisoft Metashape. Hence, an open-source software package called easy intermediate data processor (EasyIDP; MIT license) was developed to decrease the workload in the intermediate data processing mentioned above. The functions of the proposed package include (1) an ROI cropping module, assisting in reconstruction quality assessment and cropping ROIs from the whole field, and (2) an ROI reversing module, projecting ROIs onto the relative raw images. The results showed that both the cropping and reversing modules work as expected. Moreover, the effects of ROI height selection and reversed ROI position on raw images on the reverse calculation were discussed. This tool shows great potential for decreasing the workload of data annotation for machine learning applications.
8
Abstract
The use of 3D plant models for high-throughput phenotyping is increasingly becoming a preferred method for many plant science researchers. Numerous camera-based imaging systems and reconstruction algorithms have been developed for the 3D reconstruction of plants. However, it is still challenging to build an imaging system with high-quality results at a low cost. Useful comparative information for existing imaging systems and their improvements is also limited, making it challenging for researchers to make data-based selections. The objective of this study is to explore possible solutions to address these issues. We introduce two novel systems for plants of various sizes, as well as a pipeline to generate high-quality 3D point clouds and meshes. The higher accuracy and efficiency of the proposed systems make them potentially valuable tools for enhancing high-throughput phenotyping, integrating 3D traits for increased resolution and measuring traits that are not amenable to 2D imaging approaches. The study shows that the phenotypic traits derived from the 3D models are highly correlated with manually measured phenotypic traits (R2 > 0.91). Moreover, we present a systematic analysis of different settings of the imaging systems and a comparison with the traditional system, which provide recommendations for plant scientists to improve the accuracy of 3D reconstruction. In summary, our proposed imaging systems are suggested for 3D reconstruction of plants, and the analysis of the different settings in this paper can be used for designing new customized imaging systems and improving their accuracy.
9
Lu X, Ono E, Lu S, Zhang Y, Teng P, Aono M, Shimizu Y, Hosoi F, Omasa K. Reconstruction method and optimum range of camera-shooting angle for 3D plant modeling using a multi-camera photography system. Plant Methods 2020;16:118. [PMID: 32874194] [PMCID: PMC7457534] [DOI: 10.1186/s13007-020-00658-6]
Abstract
BACKGROUND Measurement of plant structure is useful in monitoring plant conditions and understanding the responses of plants to environmental changes. 3D imaging technologies, especially the passive SfM (structure from motion) algorithm combined with a multi-camera photography (MCP) system, have been studied as a means of measuring plant structure owing to their low cost, close range, and rapid image capture. However, reconstruction of 3D plant models with complex structure is a time-consuming process, and some systems have failed to reconstruct 3D models properly. Therefore, an MCP-based SfM system was developed, and an appropriate reconstruction method and the optimal range of camera-shooting angles were investigated. RESULTS An MCP system which utilizes 10 cameras and a rotary table for the plant was developed. The 3D mesh model of a single leaf reconstructed using a set of images taken at each viewing zenith angle (VZA) from 12° (C2 camera) to 60° (C6 camera) by the MCP-based SfM system had fewer undetected or unstable regions than those from other VZAs. The 3D mesh model of a whole plant, which merged 3D dense point-cloud models built from a set of images taken at each appropriate VZA (Method 1), had high accuracy. The Method 1 error percentages for leaf area, leaf length, leaf width, stem height, and stem width are in the range of 2.6-4.4%, 0.2-2.2%, 1.0-4.9%, 1.9-2.8%, and 2.6-5.7%, respectively, and the error of the leaf inclination angle was less than 5°. Conversely, the 3D mesh model of a whole plant built directly from a set of images taken at all appropriate VZAs (Method 2) had lower accuracy than that of Method 1. For Method 2, the error percentages of leaf area, leaf length, and leaf width are in the range of 3.1-13.3%, 0.4-3.3%, and 1.6-8.6%, respectively. It was difficult to obtain the error percentages of stem height and stem width because some information was missing in this model. In addition, Method 2 required 1.97 times the computational time of Method 1. CONCLUSIONS In this study, we determined the optimal shooting angles for the MCP-based SfM system developed. We found that it is better, in terms of computational time and accuracy, to merge partial 3D models from images taken at each appropriate VZA and then construct the complete 3D model (Method 1), rather than to construct the 3D model using images taken at all appropriate VZAs (Method 2). This is because incorporating incomplete images when matching feature points can reduce the accuracy of 3D models and increase the computational time of 3D model reconstruction.
Affiliation(s)
- Xingtong Lu
- Graduate School of Agricultural and Life Sciences, The University of Tokyo, 1-1-1 Yayoi, Bunkyo, Tokyo 113-0032 Japan
- National Institute for Environmental Studies, 16-2 Onogawa, Tsukuba, Ibaraki 305-8506 Japan
- Eiichi Ono
- Faculty of Agriculture, Takasaki University of Health and Welfare, 54 Nakaorui-machi, Takasaki, Gunma 370-0033 Japan
- Shan Lu
- School of Geographical Sciences, Northeast Normal University, 5268 Renmin Street, Changchun, 130024 China
- Yu Zhang
- Graduate School of Agricultural and Life Sciences, The University of Tokyo, 1-1-1 Yayoi, Bunkyo, Tokyo 113-0032 Japan
- Research Center for Agricultural Information Technology, National Agriculture and Food Research Organization, 3-1-1 Kannondai, Tsukuba, Ibaraki 305-8517 Japan
- Poching Teng
- Graduate School of Agricultural and Life Sciences, The University of Tokyo, 1-1-1 Yayoi, Bunkyo, Tokyo 113-0032 Japan
- Mitsuko Aono
- National Institute for Environmental Studies, 16-2 Onogawa, Tsukuba, Ibaraki 305-8506 Japan
- Yo Shimizu
- Graduate School of Agricultural and Life Sciences, The University of Tokyo, 1-1-1 Yayoi, Bunkyo, Tokyo 113-0032 Japan
- Faculty of Agriculture, Takasaki University of Health and Welfare, 54 Nakaorui-machi, Takasaki, Gunma 370-0033 Japan
- Fumiki Hosoi
- Graduate School of Agricultural and Life Sciences, The University of Tokyo, 1-1-1 Yayoi, Bunkyo, Tokyo 113-0032 Japan
- Kenji Omasa
- Graduate School of Agricultural and Life Sciences, The University of Tokyo, 1-1-1 Yayoi, Bunkyo, Tokyo 113-0032 Japan
- Faculty of Agriculture, Takasaki University of Health and Welfare, 54 Nakaorui-machi, Takasaki, Gunma 370-0033 Japan
- National Institute for Environmental Studies, 16-2 Onogawa, Tsukuba, Ibaraki 305-8506 Japan
10
Performances Evaluation of a Low-Cost Platform for High-Resolution Plant Phenotyping. Sensors 2020;20:3150. [PMID: 32498361] [PMCID: PMC7308841] [DOI: 10.3390/s20113150]
Abstract
This study aims to test the performances of a low-cost and automatic phenotyping platform, consisting of a Red-Green-Blue (RGB) commercial camera scanning objects on rotating plates and the reconstruction of main plant phenotypic traits via the structure from motion approach (SfM). The precision of this platform was tested in relation to three-dimensional (3D) models generated from images of potted maize, tomato and olive tree, acquired at a different frequency (steps of 4°, 8° and 12°) and quality (4.88, 6.52 and 9.77 µm/pixel). Plant and organs heights, angles and areas were extracted from the 3D models generated for each combination of these factors. Coefficient of determination (R2), relative Root Mean Square Error (rRMSE) and Akaike Information Criterion (AIC) were used as goodness-of-fit indexes to compare the simulated to the observed data. The results indicated that while the best performances in reproducing plant traits were obtained using 90 images at 4.88 µm/pixel (R2 = 0.81, rRMSE = 9.49% and AIC = 35.78), this corresponded to an unviable processing time (from 2.46 h to 28.25 h for herbaceous plants and olive trees, respectively). Conversely, 30 images at 4.88 µm/pixel resulted in a good compromise between a reliable reconstruction of considered traits (R2 = 0.72, rRMSE = 11.92% and AIC = 42.59) and processing time (from 0.50 h to 2.05 h for herbaceous plants and olive trees, respectively). In any case, the results pointed out that this input combination may vary based on the trait under analysis, which can be more or less demanding in terms of input images and time according to the complexity of its shape (R2 = 0.83, rRSME = 10.15% and AIC = 38.78). These findings highlight the reliability of the developed low-cost platform for plant phenotyping, further indicating the best combination of factors to speed up the acquisition and elaboration process, at the same time minimizing the bias between observed and simulated data.
11
Zhu R, Sun K, Yan Z, Yan X, Yu J, Shi J, Hu Z, Jiang H, Xin D, Zhang Z, Li Y, Qi Z, Liu C, Wu X, Chen Q. Analysing the phenotype development of soybean plants using low-cost 3D reconstruction. Sci Rep 2020;10:7055. [PMID: 32341432] [PMCID: PMC7184763] [DOI: 10.1038/s41598-020-63720-2]
Abstract
With the development of digital agriculture, 3D reconstruction technology has been widely used to analyse crop phenotypes. To date, most research on 3D reconstruction of field crops has been limited to analysis of population characteristics. Therefore, in this study, we propose a method based on low-cost 3D reconstruction technology to analyse the phenotype development during the whole growth period. Based on the phenotypic parameters extracted from the 3D reconstruction model, we identified the "phenotypic fingerprint" of the relevant phenotypes throughout the whole growth period of soybean plants and completed analysis of the plant growth patterns using a logistic growth model. The phenotypic fingerprint showed that, before the R3 period, the growth of the five varieties was similar. After the R5 period, the differences among the five cultivars gradually increased. This result indicates that the phenotypic fingerprint can accurately reveal the patterns of phenotypic changes. The logistic growth model of soybean plants revealed the time points of maximum growth rate of the five soybean varieties, and this information can provide a basis for developing guidelines for water and fertiliser application to crops. These findings will provide effective guidance for breeding and field management of soybean and other crops.
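A note on the logistic growth model mentioned above: for the symmetric logistic K/(1 + exp(-r(t - t0))), the growth rate dy/dt peaks at t = t0, so fitting the curve directly yields the time of maximum growth rate. The sketch below uses scipy on synthetic data, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    # K: asymptote, r: growth rate, t0: inflection point (max growth rate)
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic "plant height vs. days after sowing" observations.
t = np.linspace(0.0, 100.0, 50)
y = logistic(t, 90.0, 0.12, 55.0)

# Fit the three parameters; t0 estimates the time of maximum growth rate.
(K, r, t0), _ = curve_fit(logistic, t, y, p0=[80.0, 0.1, 50.0])
```

Comparing the fitted t0 across cultivars is one way to express the timing differences the abstract describes.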
Affiliation(s)
- Rongsheng Zhu
- College of Arts and Sciences, Northeast Agricultural University, Harbin, 150030, China
- Kai Sun
- College of Engineering, Northeast Agricultural University, Harbin, 150030, China
- Zhuangzhuang Yan
- College of Engineering, Northeast Agricultural University, Harbin, 150030, China
- Xuehui Yan
- College of Engineering, Northeast Agricultural University, Harbin, 150030, China
- Jianglin Yu
- College of Engineering, Northeast Agricultural University, Harbin, 150030, China
- Jia Shi
- College of Engineering, Northeast Agricultural University, Harbin, 150030, China
- Zhenbang Hu
- College of Agriculture, Northeast Agricultural University, Harbin, 150030, China
- Hongwei Jiang
- College of Agriculture, Northeast Agricultural University, Harbin, 150030, China
- Dawei Xin
- College of Agriculture, Northeast Agricultural University, Harbin, 150030, China
- Zhanguo Zhang
- College of Arts and Sciences, Northeast Agricultural University, Harbin, 150030, China
- Yang Li
- College of Arts and Sciences, Northeast Agricultural University, Harbin, 150030, China
- Zhaoming Qi
- College of Agriculture, Northeast Agricultural University, Harbin, 150030, China
- Chunyan Liu
- College of Agriculture, Northeast Agricultural University, Harbin, 150030, China
- Xiaoxia Wu
- College of Agriculture, Northeast Agricultural University, Harbin, 150030, China
- Qingshan Chen
- College of Agriculture, Northeast Agricultural University, Harbin, 150030, China
12
Wu S, Wen W, Wang Y, Fan J, Wang C, Gou W, Guo X. MVS-Pheno: A Portable and Low-Cost Phenotyping Platform for Maize Shoots Using Multiview Stereo 3D Reconstruction. Plant Phenomics 2020;2020:1848437. [PMID: 33313542] [PMCID: PMC7706320] [DOI: 10.34133/2020/1848437]
Abstract
Plant phenotyping technologies play important roles in plant research and agriculture. Detailed phenotypes of individual plants can guide the optimization of shoot architecture for plant breeding and are useful to analyze the morphological differences in response to environments for crop cultivation. Accordingly, high-throughput phenotyping technologies for individual plants grown in field conditions are urgently needed, and MVS-Pheno, a portable and low-cost phenotyping platform for individual plants, was developed. The platform is composed of four major components: a semiautomatic multiview stereo (MVS) image acquisition device, a data acquisition console, data processing and phenotype extraction software for maize shoots, and a data management system. The platform's device is detachable and adjustable according to the size of the target shoot. Image sequences for each maize shoot can be captured within 60-120 seconds; 3D point clouds of shoots are then reconstructed using MVS-based commercial software, and phenotypic traits at the organ and individual-plant levels are extracted by the software. The correlation coefficients (R2) between the extracted and manually measured plant height, leaf width, and leaf area values are 0.99, 0.87, and 0.93, respectively. A data management system has also been developed to store and manage the acquired raw data, reconstructed point clouds, agronomic information, and resulting phenotypic traits. The platform offers an alternative solution for high-throughput phenotyping of field-grown plants, which is especially useful for large populations or experiments across many different ecological regions.
Affiliation(s)
- Sheng Wu
- Beijing Research Center for Information Technology in Agriculture, Beijing 100097, China
- National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
- Beijing Key Lab of Digital Plant, Beijing 100097, China
- Weiliang Wen
- Beijing Research Center for Information Technology in Agriculture, Beijing 100097, China
- National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
- Beijing Key Lab of Digital Plant, Beijing 100097, China
- Yongjian Wang
- Beijing Research Center for Information Technology in Agriculture, Beijing 100097, China
- National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
- Beijing Key Lab of Digital Plant, Beijing 100097, China
- Jiangchuan Fan
- Beijing Research Center for Information Technology in Agriculture, Beijing 100097, China
- National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
- Beijing Key Lab of Digital Plant, Beijing 100097, China
- Chuanyu Wang
- Beijing Research Center for Information Technology in Agriculture, Beijing 100097, China
- National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
- Beijing Key Lab of Digital Plant, Beijing 100097, China
- Wenbo Gou
- Beijing Research Center for Information Technology in Agriculture, Beijing 100097, China
- National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
- Beijing Key Lab of Digital Plant, Beijing 100097, China
- Xinyu Guo
- Beijing Research Center for Information Technology in Agriculture, Beijing 100097, China
- National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
- Beijing Key Lab of Digital Plant, Beijing 100097, China
13
Ding Z, Xu H, Chen G, Wang Z, Chi W, Zhang H, Wang Z, Sun L, Yang G, Wen Y. Three-dimensional reconstruction method based on bionic active sensing in precision assembly. Appl Opt 2020;59:846-856. [PMID: 32225217] [DOI: 10.1364/ao.59.000846]
Abstract
With the prevailing application of new materials and higher requirements for the quality and efficiency of production in the equipment manufacturing industry, traditional assembly methods can hardly meet the needs of large-scale production, especially in the field of high-precision assembly. Robot assembly guided by visual perception has become a key research topic in the field of engineering technology. It requires higher accuracy of robot visual perception and of the control over force, position, and so on. However, in 3C assembly, most products are made of transparent materials such as glass. Because of the transparency and specular reflection of the surface, 3D reconstruction of transparent objects is a very difficult problem in computer vision, and traditional visual perception methods are not accurate enough. The present research proposes a bionic active sensing algorithm for 3D perception and reconstruction and realizes high-precision 3D reconstruction by applying a registration algorithm. The purpose is to solve the problems existing in traditional visual perception methods, such as difficulties in achieving active sensing, low accuracy of point-cloud registration, and complex computation. The results of the experiments show that the present method is efficient and accurate in 3D reconstruction. It reduces the planar reconstruction error to 0.064 mm and the surface reconstruction error to 0.177 mm.
14
Itakura K, Hosoi F. Automatic method for segmenting leaves by combining 2D and 3D image-processing techniques. Appl Opt 2020;59:545-551. [PMID: 32225339] [DOI: 10.1364/ao.59.000545]
Abstract
In this study, a method to automatically segment plant leaves from three-dimensional (3D) images using structure from motion is proposed. First, leaves in the 3D images are roughly segmented using a region-growing method in which near points with distances less than 0.2 cm are assigned to the same group. By repeating this process, the leaves not touching each other can be segmented. Then, each segmented leaf is projected onto two-dimensional (2D) images, and the watershed algorithm is executed. This process successfully segments overlapping leaves.
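A note on the rough segmentation step above: assigning points closer than 0.2 cm to the same group and repeating amounts to taking the connected components of a distance graph. One way to sketch this (not the authors' implementation) is DBSCAN with min_samples=1, whose clusters are exactly those components; the toy point cloud below is illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two toy "leaves": flat 3D point patches sampled on a 0.1 cm grid,
# separated by 3 cm (far more than the 0.2 cm grouping distance).
g = np.arange(0.0, 1.0, 0.1)
xx, yy = np.meshgrid(g, g)
leaf1 = np.column_stack([xx.ravel(), yy.ravel(), np.zeros(xx.size)])
leaf2 = leaf1 + np.array([3.0, 0.0, 0.0])
cloud = np.vstack([leaf1, leaf2])

# eps=0.2 cm, min_samples=1: clusters are exactly the groups of points
# reachable through steps of at most 0.2 cm (distance connectivity).
labels = DBSCAN(eps=0.2, min_samples=1).fit_predict(cloud)
n_leaves = len(set(labels))
print(n_leaves)  # the patches are not connected, so 2 groups
```

Leaves that touch would fall into one component here, which is exactly the case the paper resolves with the 2D watershed step.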
15
Novel and Automatic Rice Thickness Extraction Based on Photogrammetry Using Rice Edge Features. SENSORS 2019; 19:s19245561. [PMID: 31888287 PMCID: PMC6960983 DOI: 10.3390/s19245561] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/31/2019] [Revised: 12/10/2019] [Accepted: 12/13/2019] [Indexed: 11/26/2022]
Abstract
The dimensions of phenotyping parameters such as the thickness of rice play an important role in rice quality assessment and phenotyping research. The objective of this study was to propose an automatic method for extracting rice thickness. The method was based on the principle of binocular stereovision while avoiding the difficulty of directly matching corresponding points for 3D reconstruction caused by the lack of texture on rice. Firstly, the shape features of the edge, instead of texture, were used to match the corresponding points along the rice edge. Secondly, the height of the rice edge was obtained by space intersection. Finally, the thickness of rice was extracted based on the assumption that the average height of the edges of multiple rice grains is half the grain thickness. In experiments on six kinds of rice or grain, the thickness extraction errors were no more than the 0.1 mm upper limit specified in the national industry standard. The results proved that edge features can be used to extract rice thickness and validated the effectiveness of the proposed thickness extraction algorithm, which provides technical support for the extraction of phenotyping parameters by crop researchers.
16
Sun G, Ding Y, Wang X, Lu W, Sun Y, Yu H. Nondestructive Determination of Nitrogen, Phosphorus and Potassium Contents in Greenhouse Tomato Plants Based on Multispectral Three-Dimensional Imaging. SENSORS 2019; 19:s19235295. [PMID: 31805657 PMCID: PMC6928753 DOI: 10.3390/s19235295] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/05/2019] [Revised: 11/29/2019] [Accepted: 11/29/2019] [Indexed: 11/16/2022]
Abstract
Measurement of plant nitrogen (N), phosphorus (P), and potassium (K) levels is important for determining precise fertilization management approaches for crops cultivated in greenhouses. To accurately, rapidly, stably, and nondestructively measure the NPK levels in tomato plants, a nondestructive determination method based on multispectral three-dimensional (3D) imaging was proposed. Multiview RGB-D images and multispectral images were synchronously collected, and the plant multispectral reflectance was registered to the depth coordinates according to Fourier transform principles. Based on Kinect sensor pose estimation and self-calibration, the unified transformation of the multiview point-cloud coordinate system was realized. Finally, the iterative closest point (ICP) algorithm was used for the precise registration of multiview point clouds and the reconstruction of plant multispectral 3D point-cloud models. Using the normalized grayscale similarity coefficient, the degree of spectral overlap, and the Hausdorff distance set, the accuracy of the reconstructed multispectral 3D point clouds was quantitatively evaluated; the average values were 0.9116, 0.9343, and 0.41 cm, respectively. The results indicated that the multispectral reflectance could be registered to the Kinect depth coordinates accurately based on Fourier transform principles and that the reconstruction accuracy of the multispectral 3D point-cloud model met the model reconstruction needs of tomato plants. Using back-propagation artificial neural network (BPANN), support vector machine regression (SVMR), and Gaussian process regression (GPR) methods, determination models for the NPK contents in tomato plants based on the reflectance characteristics of plant multispectral 3D point-cloud models were separately constructed. The relative errors (RE) of the N content predicted by the BPANN, SVMR, and GPR models were 2.27%, 7.46%, and 4.03%, respectively.
The REs of the P content were 3.32%, 8.92%, and 8.41%, and the REs of the K content were 3.27%, 5.73%, and 3.32%, respectively. These models provided highly efficient and accurate measurements of the NPK contents in tomato plants, and their determination performance was more stable than that of single-view models.
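The ICP registration step referenced above can be illustrated with a minimal point-to-point variant: nearest-neighbour matching followed by a Kabsch/SVD rigid-transform solve. This is a sketch on synthetic data under assumed parameters, not the authors' implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    """Minimal point-to-point ICP: iteratively match nearest neighbours,
    then solve the best-fit rigid transform via SVD (Kabsch)."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(src)           # nearest-neighbour correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:           # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        # Accumulate the overall transform applied to the source cloud
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src

# Synthetic test: a small misalignment (3 deg rotation + translation).
rng = np.random.default_rng(1)
target = rng.uniform(-1, 1, (200, 3))
a = np.deg2rad(3.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
source = target @ R_true.T + np.array([0.03, -0.02, 0.01])
_, _, aligned = icp(source, target)
err = np.linalg.norm(aligned - target, axis=1).mean()
```

For the large, multi-view clouds of the paper, a coarse pre-alignment (here, the sensor pose estimate) is what keeps this local refinement from falling into a wrong minimum.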
Affiliation(s)
- Guoxiang Sun
- College of Engineering, Nanjing Agricultural University, Nanjing 210031, China; (Y.D.); (X.W.); (W.L.); (Y.S.); (H.Y.)
- Jiangsu Province Engineering Lab for Modern Facility Agriculture Technology & Equipment, Nanjing 210031, China
- Correspondence: ; Tel.: +86-25-5860-6585
- Yongqian Ding
- College of Engineering, Nanjing Agricultural University, Nanjing 210031, China; (Y.D.); (X.W.); (W.L.); (Y.S.); (H.Y.)
- Jiangsu Province Engineering Lab for Modern Facility Agriculture Technology & Equipment, Nanjing 210031, China
- Xiaochan Wang
- College of Engineering, Nanjing Agricultural University, Nanjing 210031, China; (Y.D.); (X.W.); (W.L.); (Y.S.); (H.Y.)
- Jiangsu Province Engineering Lab for Modern Facility Agriculture Technology & Equipment, Nanjing 210031, China
- Wei Lu
- College of Engineering, Nanjing Agricultural University, Nanjing 210031, China; (Y.D.); (X.W.); (W.L.); (Y.S.); (H.Y.)
- Jiangsu Province Engineering Lab for Modern Facility Agriculture Technology & Equipment, Nanjing 210031, China
- Ye Sun
- College of Engineering, Nanjing Agricultural University, Nanjing 210031, China; (Y.D.); (X.W.); (W.L.); (Y.S.); (H.Y.)
- Hongfeng Yu
- College of Engineering, Nanjing Agricultural University, Nanjing 210031, China; (Y.D.); (X.W.); (W.L.); (Y.S.); (H.Y.)
17
Kang WH, Hwang I, Jung DH, Kim D, Kim J, Kim JH, Park KS, Son JE. Time Change in Spatial Distributions of Light Interception and Photosynthetic Rate of Paprika Estimated by Ray-tracing Simulation. ACTA ACUST UNITED AC 2019. [DOI: 10.12791/ksbec.2019.28.4.279] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Affiliation(s)
- Woo Hyun Kang
- Department of Plant Science and Research Inst. of Agricultural and Life Sci., Seoul National University, Seoul 08826, Korea
- Inha Hwang
- Department of Plant Science and Research Inst. of Agricultural and Life Sci., Seoul National University, Seoul 08826, Korea
- Dae Ho Jung
- Department of Plant Science and Research Inst. of Agricultural and Life Sci., Seoul National University, Seoul 08826, Korea
- Dongpil Kim
- Department of Plant Science and Research Inst. of Agricultural and Life Sci., Seoul National University, Seoul 08826, Korea
- Jaewoo Kim
- Department of Plant Science and Research Inst. of Agricultural and Life Sci., Seoul National University, Seoul 08826, Korea
- Jin Hyun Kim
- Protected Horticulture Research Institute, National Institute of Horticultural and Herbal Science, Haman 52054, Korea
- Kyoung Sub Park
- Department of Horticultural Science, Mokpo National University, Muan 58554, Korea
18
Measurement Method Based on Multispectral Three-Dimensional Imaging for the Chlorophyll Contents of Greenhouse Tomato Plants. SENSORS 2019; 19:s19153345. [PMID: 31366151 PMCID: PMC6696012 DOI: 10.3390/s19153345] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/02/2019] [Revised: 07/25/2019] [Accepted: 07/28/2019] [Indexed: 11/26/2022]
Abstract
Nondestructive plant growth measurement is essential for researching plant growth and health. A nondestructive measurement system to retrieve plant information includes the measurement of morphological and physiological information, but most systems use two independent measurement systems for the two types of characteristics. In this study, a highly integrated, multispectral, three-dimensional (3D) nondestructive measurement system for greenhouse tomato plants was designed. The system used a Kinect sensor, an SOC710 hyperspectral imager, an electric rotary table, and other components. A heterogeneous sensing image-registration technique based on the Fourier transform was proposed and used to register the SOC710 multispectral reflectance in the Kinect depth image coordinate system. Furthermore, a 3D multiview RGB-D image-reconstruction method based on the pose estimation and self-calibration of the Kinect sensor was developed to reconstruct a multispectral 3D point-cloud model of the tomato plant. In an experiment on canopy chlorophyll, the relative chlorophyll content was estimated with soil and plant analyzer development (SPAD) measurement models built on a 3D multispectral point-cloud model and on a single-view point-cloud model, and their performance was compared and analyzed. The results revealed that the measurement model established using characteristic variables from the multiview point-cloud model was superior to the one established using variables from the single-view point-cloud model. Therefore, the multispectral 3D reconstruction approach is able to reconstruct the plant multispectral 3D point-cloud model, which improves on the traditional two-dimensional image-based SPAD measurement method and enables precise, efficient, high-throughput measurement of plant chlorophyll.
19
Ge L, Yang Z, Sun Z, Zhang G, Zhang M, Zhang K, Zhang C, Tan Y, Li W. A Method for Broccoli Seedling Recognition in Natural Environment Based on Binocular Stereo Vision and Gaussian Mixture Model. SENSORS 2019; 19:s19051132. [PMID: 30845680 PMCID: PMC6427649 DOI: 10.3390/s19051132] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/28/2019] [Revised: 02/27/2019] [Accepted: 02/28/2019] [Indexed: 02/04/2023]
Abstract
Illumination in the natural environment is uncontrollable, and the field background is complex and changeable, all of which leads to poor-quality broccoli seedling images. The colors of weeds and broccoli seedlings are close, especially under weedy conditions. These factors strongly affect the stability, speed, and accuracy of broccoli seedling recognition based on traditional 2D image-processing technologies. Broccoli seedlings stand higher than the soil background and weeds because of the growth advantage of transplanted crops. A method for broccoli seedling recognition in natural environments based on binocular stereo vision and a Gaussian Mixture Model is proposed in this paper. Firstly, binocular images of broccoli seedlings were obtained by an integrated, portable, and low-cost binocular camera. The left and right images were then rectified, and a disparity map of the rectified images was obtained by the Semi-Global Matching (SGM) algorithm. The original 3D dense point cloud was reconstructed using the disparity map and the left camera's intrinsic parameters. To reduce the operation time, a non-uniform grid sampling method was used to obtain a sparse point cloud. After that, Gaussian Mixture Model (GMM) clustering was applied, and the broccoli seedling points were recognized from the sparse point cloud. An outlier-filtering algorithm based on k-nearest neighbors (KNN) was applied to remove discrete points from the recognized broccoli seedling points. Finally, an ideal point cloud of broccoli seedlings was obtained and the seedlings recognized. The experimental results show that the SGM algorithm meets the matching requirements for broccoli images in the natural environment, with an average operation time of 138 ms, and is superior to the Sum of Absolute Differences (SAD) and Sum of Squared Differences (SSD) algorithms.
The recognition results of GMM clustering outperform K-means and Fuzzy c-means, with an average running time of 51 ms. For a pair of images with a resolution of 640×480, the total running time of the proposed method is 578 ms, and the correct recognition rate is 97.98% over 247 pairs of images. The average sensitivity is 85.91%, and the theoretical envelope-box volume averages 95.66% of the measured envelope-box volume. The method provides a low-cost, real-time, and high-accuracy solution for crop recognition in natural environments.
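As a toy illustration of the GMM clustering idea used above, the sketch below fits a from-scratch 1-D EM mixture to synthetic point heights, exploiting the height advantage of the seedlings over soil and weeds. The data and component count are assumptions for illustration, not the paper's pipeline:

```python
import numpy as np

def gmm_1d(x, k=2, iters=200):
    """Tiny EM for a 1-D Gaussian mixture, used here to split point
    heights into low (soil/weeds) and high (seedling) components."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # spread-out init
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = resp.sum(0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(0) / nk
        var = np.maximum((resp * (x[:, None] - mu) ** 2).sum(0) / nk, 1e-8)
    return resp.argmax(1), mu

# Synthetic heights (m): soil/weeds near the ground, seedling canopy higher.
heights = np.concatenate([
    np.random.default_rng(2).normal(0.02, 0.01, 300),   # soil & weeds
    np.random.default_rng(3).normal(0.15, 0.02, 100)])  # seedling canopy
labels, means = gmm_1d(heights)
seedling = labels == means.argmax()   # component with the larger mean height
```

The paper clusters full 3D coordinates rather than height alone; the 1-D case just shows why a mixture model separates the transplanted seedlings cleanly.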
Affiliation(s)
- Luzhen Ge
- College of Engineering, China Agricultural University, Qinghua Rd.(E) No.17, Haidian District, Beijing 100083, China.
- Zhilun Yang
- College of Engineering, China Agricultural University, Qinghua Rd.(E) No.17, Haidian District, Beijing 100083, China.
- Zhe Sun
- College of Engineering, China Agricultural University, Qinghua Rd.(E) No.17, Haidian District, Beijing 100083, China.
- Gan Zhang
- College of Engineering, China Agricultural University, Qinghua Rd.(E) No.17, Haidian District, Beijing 100083, China.
- Ming Zhang
- College of Engineering, China Agricultural University, Qinghua Rd.(E) No.17, Haidian District, Beijing 100083, China.
- Kaifei Zhang
- College of Engineering, China Agricultural University, Qinghua Rd.(E) No.17, Haidian District, Beijing 100083, China.
- Chunlong Zhang
- College of Engineering, China Agricultural University, Qinghua Rd.(E) No.17, Haidian District, Beijing 100083, China.
- Yuzhi Tan
- College of Engineering, China Agricultural University, Qinghua Rd.(E) No.17, Haidian District, Beijing 100083, China.
- Wei Li
- College of Engineering, China Agricultural University, Qinghua Rd.(E) No.17, Haidian District, Beijing 100083, China.
20
Itakura K, Kamakura I, Hosoi F. Three-Dimensional Monitoring of Plant Structural Parameters and Chlorophyll Distribution. SENSORS (BASEL, SWITZERLAND) 2019; 19:E413. [PMID: 30669537 PMCID: PMC6359203 DOI: 10.3390/s19020413] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/13/2018] [Revised: 01/16/2019] [Accepted: 01/18/2019] [Indexed: 11/16/2022]
Abstract
Image analysis is widely used for accurate and efficient plant monitoring. Plants have complex three-dimensional (3D) structures; hence, 3D image acquisition and analysis are useful for determining the status of plants. Here, 3D images of plants were reconstructed using a photogrammetric approach called "structure from motion". Chlorophyll content, an important indicator of plant status, was estimated from 3D images of plants with color information. To observe changes in chlorophyll content and plant structure, a potted plant was kept for five days under water stress and its 3D images were taken once a day. The normalized Red value and the chlorophyll content were correlated, with a high R² value (0.81). The absolute error of the chlorophyll content estimation in cross-validation was 4.0 × 10⁻² μg/mm². At the same time, structural parameters (the leaf inclination angle and the azimuthal angle) were calculated, enabling simultaneous monitoring of changes in the plant's chlorophyll content and structure. Combining these parameters in plant image analysis makes early detection of plant stressors, such as water stress, possible.
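The relationship reported above between the normalized Red value and chlorophyll can be sketched with an ordinary least-squares fit. The synthetic slope and noise level below are assumptions for illustration, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
rgb = rng.uniform(20, 240, (60, 3))          # synthetic leaf-pixel colors
norm_red = rgb[:, 0] / rgb.sum(axis=1)       # normalized Red value R/(R+G+B)

# Assumed trend: chlorophyll falls as normalized Red rises (stressed leaves).
chl = 0.08 - 0.1 * norm_red + rng.normal(0, 0.002, 60)   # ug/mm^2

# Least-squares line and coefficient of determination
slope, intercept = np.polyfit(norm_red, chl, 1)
pred = slope * norm_red + intercept
ss_res = ((chl - pred) ** 2).sum()
ss_tot = ((chl - chl.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
```

Normalizing by R+G+B is what gives the index some robustness to overall illumination changes between the daily captures.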
Affiliation(s)
- Kenta Itakura
- Graduate School, University of Tokyo, Tokyo 113-8657, Japan.
- Fumiki Hosoi
- Graduate School, University of Tokyo, Tokyo 113-8657, Japan.
21
Itakura K, Hosoi F. Automatic Leaf Segmentation for Estimating Leaf Area and Leaf Inclination Angle in 3D Plant Images. SENSORS (BASEL, SWITZERLAND) 2018; 18:E3576. [PMID: 30360406 PMCID: PMC6210333 DOI: 10.3390/s18103576] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/07/2018] [Revised: 10/19/2018] [Accepted: 10/20/2018] [Indexed: 11/23/2022]
Abstract
Automatic and efficient plant monitoring enables accurate plant management. Constructing three-dimensional (3D) models of plants and acquiring their spatial information is an effective way to obtain plant structural parameters. Here, 3D images of leaves constructed from multiple scenes taken from different positions were segmented automatically for the retrieval of leaf areas and inclination angles. First, for the initial segmentation, leaf images were viewed from the top, and the leaves in the top-view images were segmented using the distance transform and the watershed algorithm. Next, the segmented leaf regions were shrunk by 90% to produce a seed region for each leaf. Each seed region was re-projected onto the 3D images, and each leaf was segmented by expanding its seed region with the 3D information. After leaf segmentation, the area and inclination angle of each leaf were estimated accurately via a voxel-based calculation. This method for automatic plant structure analysis supports accurate and efficient plant breeding and growth management.
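A 2-D sketch of the distance-transform-plus-watershed step described above, run on a synthetic mask of two overlapping "leaves" (illustrative only; the markers are placed by hand here, whereas real pipelines find them as distance-map maxima):

```python
import numpy as np
from scipy import ndimage

# Two overlapping disks as a stand-in for touching leaves.
yy, xx = np.mgrid[0:60, 0:100]
mask = ((yy - 30) ** 2 + (xx - 35) ** 2 < 20 ** 2) | \
       ((yy - 30) ** 2 + (xx - 65) ** 2 < 20 ** 2)

# Distance transform: pixels deep inside a leaf get high values.
dist = ndimage.distance_transform_edt(mask)

# Markers at the two leaf centres; background gets its own label.
markers = np.zeros(mask.shape, dtype=np.int16)
markers[30, 35], markers[30, 65] = 1, 2
markers[~mask] = 3

# Watershed on the inverted distance map splits the touching region
# along the low-distance ridge between the two leaves.
cost = np.uint8(255 * (dist.max() - dist) / dist.max())
labels = ndimage.watershed_ift(cost, markers)
```

The paper then re-projects each 2-D segment back into the 3-D cloud to finish the per-leaf segmentation.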
Affiliation(s)
- Kenta Itakura
- Graduate School, University of Tokyo, Tokyo 113-8657, Japan.
- Fumiki Hosoi
- Graduate School, University of Tokyo, Tokyo 113-8657, Japan.
22
He JQ, Harrison RJ, Li B. A novel 3D imaging system for strawberry phenotyping. PLANT METHODS 2017; 13:93. [PMID: 29176998 PMCID: PMC5688821 DOI: 10.1186/s13007-017-0243-x] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/31/2017] [Accepted: 10/23/2017] [Indexed: 05/20/2023]
Abstract
BACKGROUND Accurate and quantitative phenotypic data are vital in plant breeding programmes for assessing the performance of genotypes and making selections. Traditional strawberry phenotyping relies on the human eye to assess most external fruit quality attributes, which is time-consuming and subjective. 3D imaging is a promising high-throughput technique that allows multiple external fruit quality attributes to be measured simultaneously. RESULTS A low-cost multi-view stereo (MVS) imaging system was developed, which captured data from 360° around a target strawberry fruit. A 3D point cloud of the sample was derived and analysed with custom-developed software to estimate berry height, length, width, volume, calyx size, colour, and achene number. Analysis of these traits in 100 fruits showed good concordance with manual assessment methods. CONCLUSION This study demonstrates the feasibility of an MVS-based 3D imaging system for the rapid and quantitative phenotyping of seven agronomically important external strawberry traits. With further improvement, this method could be applied in strawberry breeding programmes as a cost-effective phenotyping technique.
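Size and volume traits of the kind listed above are commonly read off a berry's point cloud via its bounding extent and convex hull. The sketch below does this for a synthetic ellipsoidal "berry" with assumed dimensions, not the paper's data or software:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Synthetic berry surface: points on an ellipsoid with semi-axes
# 12.5 x 12.5 x 15 mm (assumed dimensions for illustration).
rng = np.random.default_rng(0)
pts = rng.normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # project onto unit sphere
pts *= np.array([12.5, 12.5, 15.0])                 # scale to ellipsoid (mm)

height = np.ptp(pts[:, 2])       # berry height from the cloud's z-extent
volume = ConvexHull(pts).volume  # mm^3; analytic ellipsoid volume is ~9817
```

The hull volume slightly underestimates the true volume (it is inscribed in the smooth surface), an error that shrinks as the number of reconstructed points grows.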
Affiliation(s)
- Joe Q. He
- NIAB EMR, New Road, East Malling, ME19 6BJ UK
- University of Reading, Whiteknights, Reading, RG6 6AH UK
- Richard J. Harrison
- NIAB EMR, New Road, East Malling, ME19 6BJ UK
- University of Reading, Whiteknights, Reading, RG6 6AH UK
- Bo Li
- NIAB EMR, New Road, East Malling, ME19 6BJ UK
23
Kravanja J, Žganec M, Žganec-Gros J, Dobrišek S, Štruc V. Robust Depth Image Acquisition Using Modulated Pattern Projection and Probabilistic Graphical Models. SENSORS (BASEL, SWITZERLAND) 2016; 16:E1740. [PMID: 27775570 PMCID: PMC5087525 DOI: 10.3390/s16101740] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/09/2016] [Revised: 10/09/2016] [Accepted: 10/10/2016] [Indexed: 11/16/2022]
Abstract
Depth image acquisition with structured light approaches in outdoor environments is a challenging problem due to external factors, such as ambient sunlight, which commonly affect the acquisition procedure. This paper presents a novel structured light sensor designed specifically for operation in outdoor environments. The sensor exploits a modulated sequence of structured light projected onto the target scene to counteract environmental factors and estimate a spatial distortion map in a robust manner. The correspondence between the projected pattern and the estimated distortion map is then established using a probabilistic framework based on graphical models. Finally, the depth image of the target scene is reconstructed using a number of reference frames recorded during the calibration process. We evaluate the proposed sensor on experimental data in indoor and outdoor environments and present comparative experiments with other existing methods, as well as commercial sensors.
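Once the pattern-to-distortion correspondence is established, depth recovery in such projector-camera (or stereo) systems reduces to triangulation: with focal length f (pixels) and baseline B (metres), depth is Z = f·B/d for disparity d. A minimal sketch with assumed parameters (not the sensor's actual calibration):

```python
import numpy as np

# Assumed calibration values, for illustration only.
f = 600.0   # focal length in pixels
B = 0.075   # projector-camera baseline in metres

# Disparity map (pixels) -> depth map (metres) via Z = f * B / d.
disparity = np.array([[30.0, 45.0],
                      [60.0, 90.0]])
depth = f * B / disparity
```

The inverse relationship means depth resolution degrades quadratically with distance, which is one reason robust disparity estimation matters outdoors.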
Affiliation(s)
- Jaka Kravanja
- Alpineon d.o.o., Ulica Iga Grudna 15, Ljubljana SI-1000, Slovenia.
- Mario Žganec
- Alpineon d.o.o., Ulica Iga Grudna 15, Ljubljana SI-1000, Slovenia.
- Simon Dobrišek
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana SI-1000, Slovenia.
- Vitomir Štruc
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana SI-1000, Slovenia.