1
Giang TTH, Ryoo YJ. Pruning Points Detection of Sweet Pepper Plants Using 3D Point Clouds and Semantic Segmentation Neural Network. Sensors (Basel) 2023; 23:4040. [PMID: 37112381] [PMCID: PMC10144461] [DOI: 10.3390/s23084040]
Abstract
Automation in agriculture can save labor and raise productivity. Our research aims to enable robots to prune sweet pepper plants automatically in smart farms. In previous work, we studied detecting plant parts with a semantic segmentation neural network. In this work, we extend that approach to detect the pruning points of leaves in 3D space using 3D point clouds, so that robot arms can move to these positions and cut the leaves. We propose a method to create 3D point clouds of sweet peppers by combining a semantic segmentation neural network, the ICP algorithm, and ORB-SLAM3, a visual SLAM system, with a LiDAR camera. The resulting point cloud consists of the plant parts recognized by the neural network. We also present a method to detect leaf pruning points in 2D images and in 3D space from these point clouds, and we use the PCL library to visualize the point clouds and the pruning points. Extensive experiments demonstrate the stability and correctness of the method.
Affiliation(s)
- Truong Thi Huong Giang: Department of Electrical Engineering, Mokpo National University, Muan 58554, Jeonnam, Republic of Korea
- Young-Jae Ryoo: Department of Electrical and Control Engineering, Mokpo National University, Muan 58554, Jeonnam, Republic of Korea
2
Tagarakis AC, Filippou E, Kalaitzidis D, Benos L, Busato P, Bochtis D. Proposing UGV and UAV Systems for 3D Mapping of Orchard Environments. Sensors (Basel) 2022; 22:1571. [PMID: 35214470] [PMCID: PMC8877329] [DOI: 10.3390/s22041571]
Abstract
In recent decades, consumer-grade RGB-D (red-green-blue-depth) cameras have gained popularity for several applications in agricultural environments. Notably, these cameras can be used for spatial mapping that serves robot localization and navigation. Mapping the environment for targeted robotic applications in agricultural fields is a particularly challenging task, owing to high spatial and temporal variability, potentially unfavorable lighting conditions, and the unpredictable nature of these environments. The aim of the present study was to investigate the use of RGB-D cameras and an unmanned ground vehicle (UGV) for autonomously mapping the environment of commercial orchards, as well as for providing information about tree height and canopy volume. The results from the ground-based mapping system were compared with three-dimensional (3D) orthomosaics acquired by an unmanned aerial vehicle (UAV). Overall, both sensing methods yielded similar height measurements, while tree volume was more accurately calculated from the RGB-D cameras, as the 3D point cloud captured by the ground system was far more detailed. Finally, fusing the two datasets provided the most precise representation of the trees.
Affiliation(s)
- Aristotelis C. Tagarakis (corresponding author): Institute for Bio-Economy and Agri-Technology (IBO), Centre for Research and Technology-Hellas (CERTH), 6th km Charilaou-Thermi Rd, GR 57001 Thessaloniki, Greece
- Evangelia Filippou: Institute for Bio-Economy and Agri-Technology (IBO), Centre for Research and Technology-Hellas (CERTH), 6th km Charilaou-Thermi Rd, GR 57001 Thessaloniki, Greece
- Damianos Kalaitzidis: Institute for Bio-Economy and Agri-Technology (IBO), Centre for Research and Technology-Hellas (CERTH), 6th km Charilaou-Thermi Rd, GR 57001 Thessaloniki, Greece
- Lefteris Benos: Institute for Bio-Economy and Agri-Technology (IBO), Centre for Research and Technology-Hellas (CERTH), 6th km Charilaou-Thermi Rd, GR 57001 Thessaloniki, Greece
- Patrizia Busato: Department of Agriculture, Forestry and Food Science (DISAFA), University of Turin, Largo Braccini 2, 10095 Grugliasco, Italy
- Dionysis Bochtis (corresponding author): Institute for Bio-Economy and Agri-Technology (IBO), Centre for Research and Technology-Hellas (CERTH), 6th km Charilaou-Thermi Rd, GR 57001 Thessaloniki, Greece; FarmB Digital Agriculture P.C., Doiranis 17, GR 54639 Thessaloniki, Greece
3
Li W, Li Y, Darwish W, Tang S, Hu Y, Chen W. A Range-Independent Disparity-Based Calibration Model for Structured Light Pattern-Based RGBD Sensor. Sensors (Basel) 2020; 20:639. [PMID: 31979266] [PMCID: PMC7038339] [DOI: 10.3390/s20030639]
Abstract
Consumer-grade RGBD sensors that provide both colour and depth information have many potential applications, such as robot control, localization, and mapping, owing to their low cost and simple operation. However, the depth measurement provided by consumer-grade RGBD sensors is still inadequate for many high-precision applications, such as rich 3D reconstruction, accurate object recognition, and precise localization, because their systematic errors increase exponentially with ranging distance. Most existing calibration models for depth measurement must be carried out at multiple distances. In this paper, we reveal how the infrared (IR) camera and IR projector each contribute to the overall non-centrosymmetric distortion of a structured light pattern-based RGBD sensor. We then propose a new two-step, disparity-based calibration method for RGBD sensors that is range-independent and covers the full frame. Three independent models calibrate the three main components of RGBD sensor error: IR camera distortion, IR projection distortion, and IR cone-caused bias. Experiments show that the proposed method provides precise calibration results over the full range and full frame of the depth measurement. The offset in the edge area at long range (8 m) is reduced from 86 cm to 30 cm, and the relative error from 11% to 3% of the range distance. Overall, at far range the proposed method improves depth accuracy by 70% in the central region of the depth frame and by 65% in the edge region.
Affiliation(s)
- Wenbin Li (corresponding author): Shenzhen Research Institute, The Hong Kong Polytechnic University, Shenzhen 518057, China; Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Yaxin Li: Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Walid Darwish: Geomatics Engineering Lab, Civil Engineering Department, Faculty of Engineering, Cairo University, Cairo 12613, Egypt; Department of Electronic and Informatics, Faculty of Engineering, Vrije Universiteit Brussel, 1050 Brussels, Belgium
- Shengjun Tang: Guangdong Key Laboratory of Urban Informatics & Shenzhen Key Laboratory of Spatial Smart Sensing and Services & Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ) & Research Institute for Smart Cities, School of Architecture and Urban Planning, Shenzhen University, Shenzhen 518050, China
- Yuling Hu: Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Wu Chen: Shenzhen Research Institute, The Hong Kong Polytechnic University, Shenzhen 518057, China
4
Model-Based 3D Pose Estimation of a Single RGB Image Using a Deep Viewpoint Classification Neural Network. Applied Sciences (Basel) 2019. [DOI: 10.3390/app9122478]
Abstract
This paper presents a model-based approach for 3D pose estimation from a single RGB image, allowing the 3D scene model to be kept up to date using a low-cost camera. A pre-learned image model of the target scene is first reconstructed from a training RGB-D video. Next, the model is analyzed using the proposed multiple principal analysis to label the viewpoint class of each training RGB image and to construct a training dataset for a deep viewpoint classification neural network (DVCNN). For all training images in a viewpoint class, the DVCNN estimates their membership probabilities and defines the template of the class as the image with the highest probability. To reconstruct the scene in 3D space, a pose estimation algorithm then uses the template information to estimate the pose parameters and depth map of a single RGB image captured by navigating the camera to a specific viewpoint. The pose estimation algorithm is thus the key to updating the state of the 3D scene. In contrast to conventional pose estimation algorithms, which rely on sparse features, our approach enhances the quality of the reconstructed 3D scene point cloud through template-to-frame registration. Finally, we verify the reconstruction system on publicly available benchmark datasets and compare it with state-of-the-art pose estimation algorithms. The results indicate that our approach outperforms the compared methods in pose estimation accuracy.
5
Li C, Lu B, Zhang Y, Liu H, Qu Y. 3D Reconstruction of Indoor Scenes via Image Registration. Neural Processing Letters 2018. [DOI: 10.1007/s11063-018-9781-0]
6
Zhang F, Lei T, Li J, Cai X, Shao X, Chang J, Tian F. Real-Time Calibration and Registration Method for Indoor Scene with Joint Depth and Color Camera. International Journal of Pattern Recognition and Artificial Intelligence 2018. [DOI: 10.1142/s0218001418540216]
Abstract
Traditional vision-based registration technologies require either precisely designed markers or rich texture information in the captured video scenes; such vision-based methods have high computational complexity, while hardware-based registration technologies lack accuracy. In this paper, we therefore propose a novel registration method that takes advantage of an RGB-D camera to obtain depth information in real time: a binocular system combining a Time-of-Flight (ToF) camera and a commercial color camera is constructed to realize three-dimensional registration. First, we calibrate the binocular system to obtain the relative poses of the two cameras. The systematic errors are fitted and corrected using B-spline curves. To reduce anomalies and random noise, an elimination algorithm and an improved bilateral filtering algorithm are proposed to optimize the depth map. To meet the system's real-time requirement, these steps are further accelerated by parallel computing with CUDA. Then, a Camshift-based tracking algorithm is applied to capture the real object registered in the video stream, and the position and orientation of the object are tracked according to the correspondence between the color image and the 3D data. Finally, experiments conducted with our binocular system demonstrate the feasibility and effectiveness of the method.
Affiliation(s)
- Fengquan Zhang: Beijing Key Laboratory on Integration and Analysis of Large-Scale Stream Data, North China University of Technology, Beijing, P. R. China; State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, P. R. China
- Tingshen Lei: Beijing Key Laboratory on Integration and Analysis of Large-Scale Stream Data, North China University of Technology, Beijing, P. R. China
- Jinhong Li: Beijing Key Laboratory on Integration and Analysis of Large-Scale Stream Data, North China University of Technology, Beijing, P. R. China
- Xingquan Cai: Beijing Key Laboratory on Integration and Analysis of Large-Scale Stream Data, North China University of Technology, Beijing, P. R. China
- Xuqiang Shao: State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, P. R. China; School of Control and Computer Engineering, North China Electric Power University, Baoding, P. R. China
- Jian Chang: National Centre for Computer Animation, Bournemouth University, Poole, UK
- Feng Tian: State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, P. R. China; School of Computer and Information Technology, Northeast Petroleum University, Daqing, P. R. China
7
Calibrate Multiple Consumer RGB-D Cameras for Low-Cost and Efficient 3D Indoor Mapping. Remote Sensing 2018. [DOI: 10.3390/rs10020328]
8
Wang K, Zhang G, Xia S. Templateless Non-Rigid Reconstruction and Motion Tracking With a Single RGB-D Camera. IEEE Transactions on Image Processing 2017; 26:5966-5979. [PMID: 28816672] [DOI: 10.1109/tip.2017.2740624]
Abstract
We present a novel templateless approach for nonrigid reconstruction and motion tracking using a single RGB-D camera. Without any template prior, our system achieves accurate reconstruction and tracking of considerably deformable objects. To robustly register an input sequence of partial depth scans with dynamic motion, we propose an efficient local-to-global hierarchical optimization framework inspired by traditional structure-from-motion. The framework consists of two stages: local nonrigid bundle adjustment and global optimization. To eliminate error accumulation during nonrigid registration of loop motion sequences, we split the full sequence into several segments and apply local nonrigid bundle adjustment to align each segment. Global optimization is then adopted to combine all segments and handle drift through a loop-closure constraint. By fitting to the input partial data, a deforming 3D model sequence of the dynamic object is finally generated. Experiments on both synthetic and real test datasets, and comparisons with the state of the art, demonstrate that our approach handles considerable motion robustly and efficiently and reconstructs high-quality 3D model sequences without drift.
9
Safaei F. Optimization of Camera Arrangement Using Correspondence Field to Improve Depth Estimation. IEEE Transactions on Image Processing 2017; 26:3038-3050. [PMID: 28436863] [DOI: 10.1109/tip.2017.2695102]
Abstract
Stereo matching algorithms attempt to estimate depth from the images obtained by two cameras. In most cases, the arrangement of cameras (their locations and orientations with respect to the scene) is determined based on human experience. In this paper, it is shown that the camera arrangement can be optimized using the concept of correspondence field (CF) for better acquisition of depth. Specifically, this paper demonstrates the relationship between the CF of a pair of cameras and depth estimation accuracy and presents a method to optimize their arrangement based on the gradient of the CF. The experimental results show that a pair of cameras optimized by the proposed method can improve the accuracy of depth estimation by as much as 30% compared with the conventional camera arrangements.
10
Darwish W, Tang S, Li W, Chen W. A New Calibration Method for Commercial RGB-D Sensors. Sensors (Basel) 2017; 17:1204. [PMID: 28538695] [PMCID: PMC5492766] [DOI: 10.3390/s17061204]
Abstract
Commercial RGB-D sensors such as the Kinect and Structure Sensor have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high-quality 3D data are required, e.g., 3D building models of centimeter-level accuracy, accurate and reliable calibration of these sensors is essential. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured-light concept. Additionally, a new automatic method is proposed for calibrating all RGB-D parameters, including the internal calibration parameters of all cameras, the baseline between the infrared and RGB cameras, and the depth error model. Compared with traditional calibration methods, the new model shows a significant improvement in depth precision at both near and far ranges.
Affiliation(s)
- Walid Darwish: Department of Land Surveying & Geo-Informatics, The Hong Kong Polytechnic University, Hung Hom 999077, Hong Kong, China
- Shengjun Tang: Department of Land Surveying & Geo-Informatics, The Hong Kong Polytechnic University, Hung Hom 999077, Hong Kong, China; State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, 129 Luoyu Road, Wuhan 430079, China; Shenzhen Key Laboratory of Spatial Smart Sensing and Services & The Key Laboratory for Geo-Environment Monitoring of Coastal Zone of the National Administration of Surveying, Mapping and GeoInformation, Shenzhen University, Shenzhen 518060, China
- Wenbin Li: Department of Land Surveying & Geo-Informatics, The Hong Kong Polytechnic University, Hung Hom 999077, Hong Kong, China
- Wu Chen: Department of Land Surveying & Geo-Informatics, The Hong Kong Polytechnic University, Hung Hom 999077, Hong Kong, China
11
Liu H, Zhou Q, Yang J, Jiang T, Liu Z, Li J. Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback. Sensors (Basel) 2017; 17:321. [PMID: 28208781] [PMCID: PMC5336081] [DOI: 10.3390/s17020321]
Abstract
An imaging sensor-based intelligent Light-Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as those based on photosensitive resistors or infrared sensors, an imaging sensor can achieve a finer perception of the environmental light and can therefore guide more precise lighting control. Before the system operates, a large set of typical imaging lighting data for the desk application is first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed for these datasets, from which cluster benchmarks of the objective LEEMs are obtained. Third, both a single-LEEM-based control and a multiple-LEEMs-based control are developed to realize optimal luminance tuning. When the system operates, it first captures a lighting image using a wearable camera. It then computes the objective LEEMs of the captured image and compares them with the cluster benchmarks. Finally, the single-LEEM-based or multiple-LEEMs-based control is applied to achieve an optimal lighting effect. Extensive experimental results show that the proposed system can tune the LED lamp automatically in response to changes in environmental luminance.
Affiliation(s)
- Haoting Liu: School of Biological Science and Medical Engineering, Beihang University, Beijing 100191, China
- Qianxiang Zhou: School of Biological Science and Medical Engineering, Beihang University, Beijing 100191, China
- Jin Yang: Astronaut Research and Training Center of China, Beijing 100094, China
- Ting Jiang: Astronaut Research and Training Center of China, Beijing 100094, China
- Zhizhen Liu: Astronaut Research and Training Center of China, Beijing 100094, China
- Jie Li: Astronaut Research and Training Center of China, Beijing 100094, China