1
Ye F, Jia G, Wang Y, Chen X, Xi J. Kinematic and Joint Compliance Modeling Method to Improve Position Accuracy of a Robotic Vision System. Sensors (Basel) 2024; 24:2559. [PMID: 38676176] [PMCID: PMC11053926] [DOI: 10.3390/s24082559] [Received: 03/19/2024] [Revised: 04/08/2024] [Accepted: 04/12/2024]
Abstract
In the field of robotic automation, achieving high position accuracy in robotic vision systems (RVSs) is a pivotal challenge that directly impacts the efficiency and effectiveness of industrial applications. This study introduces a comprehensive modeling approach that integrates kinematic and joint compliance factors to significantly enhance the position accuracy of such a system. First, we develop a unified kinematic model that effectively reduces the complexity and error accumulation associated with the calibration of robotic systems. At the heart of our approach is a joint compliance model that meticulously accounts for the joint connector, the external load, and the self-weight of the robotic links. By employing a novel 3D rotary laser sensor for precise error measurement and model calibration, our method offers a streamlined and efficient solution for accurately integrating vision systems into robotic operations. The efficacy of the proposed models is validated through experiments on a FANUC LR Mate 200iD robot, showing notable improvements in the position accuracy of the robotic vision system. Our findings contribute a framework for the calibration and error compensation of RVSs, with significant potential for automated tasks requiring high precision.
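The joint-compliance idea summarized above can be illustrated with a generic torsional-spring model (deflection delta_theta = tau / k fed back into forward kinematics). This is a planar two-link sketch with hypothetical lengths, masses, and stiffnesses, not the paper's actual model or the FANUC robot's parameters:

```python
import numpy as np

# Hypothetical planar 2-link arm: link lengths (m), link masses (kg),
# torsional joint stiffnesses (N*m/rad). All values are illustrative.
L = np.array([0.4, 0.3])
m = np.array([5.0, 3.0])
k = np.array([2.0e4, 1.5e4])
g = 9.81

def fk(theta):
    """Planar forward kinematics: end-effector (x, y)."""
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([L[0] * np.cos(t1) + L[1] * np.cos(t12),
                     L[0] * np.sin(t1) + L[1] * np.sin(t12)])

def gravity_torques(theta):
    """Static joint torques from link self-weight (point masses at link midpoints)."""
    t1, t12 = theta[0], theta[0] + theta[1]
    x_c1 = 0.5 * L[0] * np.cos(t1)                       # centre of link 1
    x_c2 = L[0] * np.cos(t1) + 0.5 * L[1] * np.cos(t12)  # centre of link 2
    tau1 = -g * (m[0] * x_c1 + m[1] * x_c2)              # moment about joint 1
    tau2 = -g * m[1] * (x_c2 - L[0] * np.cos(t1))        # moment about joint 2
    return np.array([tau1, tau2])

def compliant_fk(theta_cmd):
    """Commanded angles plus torsional-spring deflection delta = tau / k."""
    delta = gravity_torques(theta_cmd) / k
    return fk(theta_cmd + delta)
```

Comparing fk(theta) with compliant_fk(theta) exposes the sub-millimetre gravity droop that a purely kinematic model misses, which is the kind of error a joint compliance model compensates.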
Affiliation(s)
- Fan Ye
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Guangpeng Jia
- China National Heavy Duty Truck Group Co., Ltd., No. 777 Hua’ao Road, Innovation Zone, Jinan 250101, China
- Yukun Wang
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Xiaobo Chen
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Juntong Xi
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
2
Välimäki T, Garigipati B, Ghabcheloo R. Motion-Based Extrinsic Sensor-to-Sensor Calibration: Effect of Reference Frame Selection for New and Existing Methods. Sensors (Basel) 2023; 23:3740. [PMID: 37050800] [PMCID: PMC10098754] [DOI: 10.3390/s23073740] [Received: 03/02/2023] [Revised: 03/23/2023] [Accepted: 03/31/2023]
Abstract
This paper studies the effect of reference frame selection in sensor-to-sensor extrinsic calibration when formulated as a motion-based hand-eye calibration problem. Because the sensor trajectories typically contain some composition of noise, the aim is to determine which selection strategies work best under which noise conditions. Different reference selection options are tested under varying noise conditions in simulations, and the findings are validated with real data from the KITTI dataset. The study covers four state-of-the-art methods as well as two proposed cost functions for nonlinear optimization. One of the proposed cost functions incorporates outlier rejection; it significantly improved calibration performance in the presence of outliers and either matched or outperformed the other algorithms under the remaining noise conditions. However, the performance gain from reference frame selection was found to be larger than that from algorithm selection. In addition, we show that with realistic noise, the reference frame selection method commonly used in the literature is inferior to the other tested options, and that relative error metrics are not reliable for determining which method achieves the best calibration performance.
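The reference-frame choice under study can be made concrete: motion-based calibration consumes relative motions computed from absolute sensor poses, and those motions differ depending on whether each pose is referenced to the first frame or to the previous one. A minimal sketch with 4x4 homogeneous transforms (function names are illustrative):

```python
import numpy as np

def inv(T):
    """Inverse of a 4x4 homogeneous transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def relative_motions(poses, reference="previous"):
    """Turn absolute poses T_0..T_n into relative motions.
    reference='first':    A_i = T_0^-1 T_i   (all against frame 0)
    reference='previous': A_i = T_{i-1}^-1 T_i (consecutive frames)"""
    if reference == "first":
        return [inv(poses[0]) @ T for T in poses[1:]]
    return [inv(a) @ b for a, b in zip(poses, poses[1:])]
```

The paper's point is that which of these choices works best depends on the noise acting on the trajectory, so the choice should be benchmarked rather than fixed by convention.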
3
Hao M, Yang B, Ru C, Yue C, Huang Z, Zhai R, Sun Y, Wang Y, Dai C. Modeling and Compensation of Positioning Error in Micromanipulation. Micromachines 2023; 14:779. [PMID: 37421012] [DOI: 10.3390/mi14040779] [Received: 02/20/2023] [Revised: 03/18/2023] [Accepted: 03/18/2023]
Abstract
To improve the positioning accuracy of a micromanipulation system, a comprehensive error model is first established that accounts for nonlinear microscope imaging distortion, camera installation error, and the mechanical displacement error of the motorized stage. A novel error compensation method is then proposed, with distortion compensation coefficients obtained by the Levenberg-Marquardt optimization algorithm combined with the derived nonlinear imaging model. The compensation coefficients for camera installation error and mechanical displacement error are derived from the rigid-body translation technique and an image stitching algorithm. To validate the error compensation model, single-shot and cumulative error tests were designed. The experimental results show that after error compensation, displacement errors were kept within 0.25 μm when moving in a single direction and within 0.02 μm per 1000 μm when moving in multiple directions.
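The Levenberg-Marquardt distortion fit can be sketched for a standard two-coefficient radial model q = p * (1 + k1*r^2 + k2*r^4); the model and the hand-rolled LM loop below are generic illustrations, not the paper's exact imaging model:

```python
import numpy as np

def distort(p, k):
    """Radial distortion of points p (N, 2) centred on the principal point."""
    r2 = np.sum(p**2, axis=1, keepdims=True)
    return p * (1.0 + k[0] * r2 + k[1] * r2**2)

def fit_distortion_lm(p, q, iters=20, lam=1e-3):
    """Levenberg-Marquardt estimate of (k1, k2) from ideal points p
    and observed distorted points q."""
    k = np.zeros(2)
    for _ in range(iters):
        res = (q - distort(p, k)).ravel()
        r2 = np.sum(p**2, axis=1, keepdims=True)
        # Jacobian of the model w.r.t. (k1, k2), stacked over x and y
        J = np.hstack([(p * r2).reshape(-1, 1), (p * r2**2).reshape(-1, 1)])
        step = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ res)
        k_new = k + step
        res_new = (q - distort(p, k_new)).ravel()
        if np.linalg.norm(res) <= np.linalg.norm(res_new):
            lam *= 10.0            # step rejected: increase damping
        else:
            k, lam = k_new, lam * 0.1  # step accepted: relax towards Gauss-Newton
    return k
```

Because this particular model is linear in (k1, k2), LM converges in a couple of accepted steps; the damping loop matters once the distortion centre or higher-order terms are estimated too.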
Affiliation(s)
- Miao Hao
- School of Mechanical and Electrical Engineering, Soochow University, Suzhou 215137, China
- Bin Yang
- The Reproductive Medicine Centre, The First Affiliated Hospital of Suzhou University, Suzhou 215031, China
- Changhai Ru
- School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou 215009, China
- Chunfeng Yue
- Suzhou Boundless Medical Technology Co., Ltd., Suzhou 215163, China
- Zongjie Huang
- Suzhou Boundless Medical Technology Co., Ltd., Suzhou 215163, China
- Rongan Zhai
- School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200444, China
- Yu Sun
- Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, ON M5S 3G8, Canada
- Yong Wang
- School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou 215009, China
- Changsheng Dai
- School of Mechanical Engineering, Dalian University of Technology, Dalian 116081, China
4
Zhang X, Yao M, Cheng Q, Liang G, Fan F. A novel hand-eye calibration method of picking robot based on TOF camera. Frontiers in Plant Science 2023; 13:1099033. [PMID: 36733593] [PMCID: PMC9888730] [DOI: 10.3389/fpls.2022.1099033] [Received: 11/15/2022] [Accepted: 12/22/2022]
Abstract
To improve the stability of hand-eye calibration in fruit-picking scenes, a simple optimization-based hand-eye calibration method for a picking robot with a TOF (Time of Flight) camera is proposed. The TOF depth camera is fixed at the end of the robot, and the robot is operated to photograph the calibration board from different poses while each photographing pose is recorded, ensuring that every group of pictures is clear and complete. Imaging the calibration board with the TOF camera yields multiple sets of depth maps and corresponding point cloud data, i.e., the "eye" data. A circle-center extraction and positioning algorithm extracts the circle centers on each calibration board, and a circle-center sorting method based on vector angles and centroid coordinates is designed to resolve the center ambiguities caused by factors such as lens distortion, uneven illumination, and differing photographing poses. Using the tool center point of the end effector, the coordinates of the circle centers at the four corners of each calibration board are located in turn in the robot end coordinate system, giving the "hand" data. The hand-eye parameters are solved with the SVD method and iteratively optimized by redistributing the weight coefficients of the marker points according to the point residuals, which improves the accuracy and stability of the hand-eye calibration; the proposed method also localizes gross errors better in environments with large gross errors. To verify the feasibility of the method, an indoor picking experiment was simulated in which peaches were identified and positioned by combining deep learning and 3D vision.
The experimental platform uses a JAKA six-axis robot and a TuYang depth camera. The experimental results show that the method is simple to operate, has good stability, uses a calibration board that is easy to manufacture and low in cost, and meets work accuracy requirements.
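The SVD solve with residual-based reweighting described above can be sketched as a weighted Kabsch alignment between matched 3D points (e.g. circle centers in the camera frame vs the robot frame), wrapped in an iteratively-reweighted loop; the inverse-residual weighting below is a stand-in, not necessarily the authors' exact scheme:

```python
import numpy as np

def weighted_rigid_transform(P, Q, w):
    """Weighted least-squares R, t with Q ~ R @ P + t (Kabsch/SVD)."""
    w = w / w.sum()
    cp = (w[:, None] * P).sum(axis=0)          # weighted centroids
    cq = (w[:, None] * Q).sum(axis=0)
    H = (w[:, None] * (P - cp)).T @ (Q - cq)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # det(R) = +1 guaranteed
    return R, cq - R @ cp

def irls_rigid_transform(P, Q, iters=5, eps=1e-9):
    """Iteratively reweight points by inverse residual to damp gross errors."""
    w = np.ones(len(P))
    for _ in range(iters):
        R, t = weighted_rigid_transform(P, Q, w)
        res = np.linalg.norm(Q - (P @ R.T + t), axis=1)
        w = 1.0 / (res + eps)                  # simple inverse-residual weights
    return R, t
```

A marker with a gross error keeps a large residual across iterations, so its weight collapses and the remaining points dominate the fit.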
Affiliation(s)
- Xiangsheng Zhang
- Key Laboratory of Advanced Process Control for Light Industry, Ministry of Education, Jiangnan University, Wuxi, Jiangsu, China
- Meng Yao
- Key Laboratory of Advanced Process Control for Light Industry, Ministry of Education, Jiangnan University, Wuxi, Jiangsu, China
- Qi Cheng
- Key Laboratory of Advanced Process Control for Light Industry, Ministry of Education, Jiangnan University, Wuxi, Jiangsu, China
- Gunan Liang
- Visual Algorithm R&D Department, XINJIE Electronic Limited Company, Wuxi, Jiangsu, China
- Feng Fan
- Key Laboratory of Advanced Process Control for Light Industry, Ministry of Education, Jiangnan University, Wuxi, Jiangsu, China
5
Enebuse I, Ibrahim BKSMK, Foo M, Matharu RS, Ahmed H. Accuracy evaluation of hand-eye calibration techniques for vision-guided robots. PLoS One 2022; 17:e0273261. [PMID: 36260640] [PMCID: PMC9581431] [DOI: 10.1371/journal.pone.0273261] [Received: 03/17/2022] [Accepted: 08/04/2022]
Abstract
Hand-eye calibration is an important step in controlling a vision-guided robot in applications such as part assembly, bin picking, and inspection. Many methods for estimating hand-eye transformations have been proposed in the literature, with varying degrees of complexity and accuracy. However, the success of a vision-guided application is strongly affected by the accuracy of the hand-eye calibration between the vision system and the robot. This accuracy depends on several factors, such as the rotation and translation noise and the rotation and translation motion ranges, which must be considered during calibration. Previous studies and benchmarking of the proposed algorithms have largely focused on the combined effect of rotation and translation noise. This study provides insight into the impact of rotation and translation noise acting in isolation on hand-eye calibration accuracy, deviating from the most common assessment based on pose noise (combined rotation and translation noise). We also evaluate the impact of the robot motion range used during the calibration operation, which is rarely considered. We provide a quantitative evaluation using six commonly used algorithms from an implementation perspective, comparatively analysing their performance through simulation case studies and experimental validation on a physical Universal Robots UR5e. Our results show that the algorithms respond differently as the noise conditions vary rather than following a general trend. For example, the simultaneous methods are more resistant to rotation noise, whereas the separate methods are better at dealing with translation noise. Additionally, while increasing the robot rotation motion span during calibration enhances the accuracy of the separate methods, it has a negative effect on the simultaneous methods.
Conversely, increasing the translation motion range improves the accuracy of the simultaneous methods but degrades the accuracy of the separate methods. These findings suggest that these conditions should be considered when benchmarking algorithms or performing a calibration for enhanced accuracy.
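The isolated noise conditions studied here are straightforward to reproduce: perturb only the rotation of a pose by a small random axis-angle, or only its translation by Gaussian noise. A sketch (noise magnitudes hypothetical):

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix from an axis-angle pair (Rodrigues' formula)."""
    a = axis / np.linalg.norm(axis)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def add_rotation_noise(T, sigma_deg, rng):
    """Perturb only the rotation part of a 4x4 pose; translation untouched."""
    axis = rng.standard_normal(3)
    angle = np.deg2rad(sigma_deg) * rng.standard_normal()
    Tn = T.copy()
    Tn[:3, :3] = rodrigues(axis, angle) @ T[:3, :3]
    return Tn

def add_translation_noise(T, sigma, rng):
    """Perturb only the translation part; sigma in the pose's length unit."""
    Tn = T.copy()
    Tn[:3, 3] += sigma * rng.standard_normal(3)
    return Tn
```

Feeding poses perturbed by only one of these functions into a calibration pipeline reproduces the isolated rotation-noise and translation-noise conditions the paper benchmarks.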
Affiliation(s)
- Ikenna Enebuse
- Centre for Future Transport and Cities, Coventry University, Coventry, United Kingdom
- Mathias Foo
- School of Engineering, University of Warwick, Coventry, United Kingdom
- Ranveer S. Matharu
- Centre for Future Transport and Cities, Coventry University, Coventry, United Kingdom
- Hafiz Ahmed
- Nuclear Futures Institute, Bangor University, Bangor, United Kingdom
6
Research on the Hand–Eye Calibration Method of Variable Height and Analysis of Experimental Results Based on Rigid Transformation. Applied Sciences-Basel 2022. [DOI: 10.3390/app12094415]
Abstract
In an eye-to-hand calibration system, camera imaging exhibits the "large up close, small from afar" phenomenon, so a single hand–eye calibration makes the manipulator suitable only for grasping objects of the same height; the calibration result cannot be applied to products of variable height. Based on the pinhole camera model and the rigid transformation model between coordinate systems, a calibration height parameter is introduced, and the relationship between the parameters of the rigid transformation matrix from the image coordinate system to the robot coordinate system and the sampling height is established. In the experiment, the camera is first calibrated to eliminate the influence of lens distortion on imaging quality, with the influence of calibration height initially ignored. Then the robot coordinate system and image coordinate system of the calibration plate at different heights are calibrated using the four-point calibration method, and the parameters of the rigid transformation matrix at each height (H) are calculated. Finally, experimental analysis fits a strong linear relationship between the rigid-transformation parameters and the calibration height. By analyzing the random error of the experiment, a linear relationship between calibration height and pixel density is further established, and the systematic error of the experimental process is analyzed in depth. The experimental results show that a hand–eye calibration system based on this linear relationship is precise, is suitable for grabbing products of any height, and has a positioning error of less than 0.08%.
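The fitted linear relationship can be sketched directly: calibrate an image-to-robot transform at several heights, fit each matrix parameter as a linear function of height H, and interpolate the transform for a new product height. All numeric values below are hypothetical placeholders, not the paper's data:

```python
import numpy as np

# Hypothetical: each row is the flattened 2x3 image->robot transform
# [[a, b, tx], [c, d, ty]] calibrated at one height H (mm).
heights = np.array([0.0, 20.0, 40.0, 60.0])
params = np.array([
    [0.200, 0.001, 105.0, -0.001, 0.200, 58.0],
    [0.196, 0.001, 106.1, -0.001, 0.196, 58.9],
    [0.192, 0.001, 107.2, -0.001, 0.192, 59.8],
    [0.188, 0.001, 108.3, -0.001, 0.188, 60.7],
])

# Fit each parameter linearly in H: p(H) = slope*H + intercept.
A = np.vstack([heights, np.ones_like(heights)]).T   # (4, 2) design matrix
coef, *_ = np.linalg.lstsq(A, params, rcond=None)   # row 0: slopes, row 1: intercepts

def transform_at(H):
    """Predict the 2x3 image->robot transform at an arbitrary height H."""
    return (coef[0] * H + coef[1]).reshape(2, 3)

def image_to_robot(uv, H):
    """Map a pixel (u, v) to robot (x, y) for a product of height H."""
    return transform_at(H) @ np.array([uv[0], uv[1], 1.0])
```

Once the per-parameter lines are fitted, grasping a product of a previously uncalibrated height only requires evaluating `transform_at(H)` instead of re-running the calibration.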
7
Sarabandi S, Porta JM, Thomas F. Hand-Eye Calibration Made Easy Through a Closed-Form Two-Stage Method. IEEE Robotics and Automation Letters 2022. [DOI: 10.1109/lra.2022.3146943]
8
Abstract
A classic hand-eye system involves hand-eye calibration and robot-world and hand-eye calibration. Because hand-eye calibration can solve only the hand-eye transformation, this study aims to determine the robot-world and hand-eye transformations simultaneously based on the robot-world and hand-eye equation. According to whether the rotation part and the translation part of the equation are decoupled, methods can be divided into separable solutions and simultaneous solutions. Separable solutions solve the rotation part before the translation part, so the estimation errors of the rotation are transferred to the translation. In this study, a method was proposed that keeps rotation and translation coupled in the calculation; it involves a closed-form solution based on the Kronecker product and an iterative solution based on the Gauss–Newton algorithm. Feasibility was tested using simulated and real data, and the superiority was verified by comparison with results obtained by an available method. Finally, we improved a method that solves the singularity problem caused by the parameterization of the rotation matrix, which can be widely used in robot-world and hand-eye calibration. The results show that the prediction errors of rotation and translation based on the proposed method can be reduced to 0.26° and 1.67 mm, respectively.
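A common Kronecker-product baseline for the robot-world and hand-eye equation AX = ZB can be sketched as follows; note this sketch recovers the rotations from an SVD nullspace and then solves the translations by least squares, so it is a simplified stand-in rather than the authors' coupled formulation:

```python
import numpy as np

def solve_ax_zb(As, Bs):
    """Closed-form solve of A X = Z B for X (hand-eye) and Z (robot-world).
    As, Bs are lists of (R, t) pairs. Rotations via the Kronecker-product
    nullspace, translations via linear least squares."""
    I3 = np.eye(3)
    # R_A R_X = R_Z R_B  ->  [I (x) R_A | -(R_B^T (x) I)] [vec R_X; vec R_Z] = 0
    K = np.vstack([np.hstack([np.kron(I3, Ra), -np.kron(Rb.T, I3)])
                   for (Ra, _), (Rb, _) in zip(As, Bs)])
    v = np.linalg.svd(K)[2][-1]            # basis of the 1-D nullspace
    Rx = v[:9].reshape(3, 3, order='F')    # column-major vec convention
    Rz = v[9:].reshape(3, 3, order='F')
    Rx = Rx / np.cbrt(np.linalg.det(Rx))   # strip the unknown scale, det -> +1
    Rz = Rz / np.cbrt(np.linalg.det(Rz))

    def to_so3(M):                         # project back onto SO(3)
        U, _, Vt = np.linalg.svd(M)
        return U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    Rx, Rz = to_so3(Rx), to_so3(Rz)

    # R_A t_X + t_A = R_Z t_B + t_Z  ->  [R_A | -I] [t_X; t_Z] = R_Z t_B - t_A
    C = np.vstack([np.hstack([Ra, -I3]) for (Ra, _) in As])
    d = np.concatenate([Rz @ tb - ta for (_, ta), (_, tb) in zip(As, Bs)])
    t = np.linalg.lstsq(C, d, rcond=None)[0]
    return (Rx, t[:3]), (Rz, t[3:])
```

Because the rotation system is homogeneous, the nullspace vector is defined only up to scale; dividing each 3x3 block by the cube root of its determinant restores a proper rotation before the SO(3) projection.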
9
Pedrosa E, Oliveira M, Lau N, Santos V. A General Approach to Hand–Eye Calibration Through the Optimization of Atomic Transformations. IEEE Transactions on Robotics 2021. [DOI: 10.1109/tro.2021.3062306]
10
Pachtrachai K, Vasconcelos F, Edwards P, Stoyanov D. Learning to Calibrate - Estimating the Hand-eye Transformation Without Calibration Objects. IEEE Robotics and Automation Letters 2021. [DOI: 10.1109/lra.2021.3098942]
11
Abstract
A robot can identify the position of a target and complete a grasp based on a hand–eye calibration algorithm, through which the relationship between the robot coordinate system and the camera coordinate system is established. The accuracy of the hand–eye calibration algorithm affects the real-time performance of the visual servo system and the robot manipulation. The traditional calibration technique is based on the mathematical model AX = XB, in which X represents the relationship between the camera coordinate system (A) and the robot coordinate system (B). The traditional solution for the transformation matrix has limitations and instability. To address this, an optimized neural-network-based hand–eye calibration method was developed to establish a non-linear relationship between robot coordinates and pixel coordinates that can compensate for the nonlinear distortion of the camera lens. The learning process of the hand–eye calibration model can be interpreted as B = f(A), i.e., the coordinate transformation relationship trained by the neural network. An accurate hand–eye calibration model is finally obtained by continuously optimizing the network structure and parameters during training. The accuracy and stability of the method were verified by experiments on a robot grasping system.
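The traditional AX = XB pipeline mentioned above can be sketched in its classic separable form: recover the rotation by aligning the axis-angle vectors of the A and B motions (an SVD/Wahba step), then solve the translation linearly. This is a textbook baseline, not the paper's neural-network method:

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix from an axis-angle pair (Rodrigues' formula)."""
    a = axis / np.linalg.norm(axis)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def log_so3(R):
    """Axis-angle vector of a rotation (angle assumed away from 0 and pi)."""
    angle = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return angle * v / (2 * np.sin(angle))

def solve_ax_xb(As, Bs):
    """Separable AX = XB: rotation by aligning motion axes (SVD), then translation."""
    # R_A R_X = R_X R_B implies log(R_A) = R_X log(R_B)
    alpha = np.array([log_so3(Ra) for Ra, _ in As])
    beta = np.array([log_so3(Rb) for Rb, _ in Bs])
    U, _, Vt = np.linalg.svd(beta.T @ alpha)   # Wahba's problem
    Rx = Vt.T @ np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)]) @ U.T
    # translation: (R_A - I) t_X = R_X t_B - t_A, stacked over all motions
    C = np.vstack([Ra - np.eye(3) for Ra, _ in As])
    d = np.concatenate([Rx @ tb - ta for (_, ta), (_, tb) in zip(As, Bs)])
    return Rx, np.linalg.lstsq(C, d, rcond=None)[0]
```

The instability the abstract alludes to shows up here: at least two motions with non-parallel rotation axes are required, and noise in the rotation step propagates directly into the translation solve, which is what motivates learned alternatives.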
12
Pachtrachai K, Vasconcelos F, Dwyer G, Hailes S, Stoyanov D. Hand-Eye Calibration With a Remote Centre of Motion. IEEE Robotics and Automation Letters 2019. [DOI: 10.1109/lra.2019.2924845]
13
Ali I, Suominen O, Gotchev A, Morales ER. Methods for Simultaneous Robot-World-Hand-Eye Calibration: A Comparative Study. Sensors 2019; 19:2837. [PMID: 31242714] [PMCID: PMC6631330] [DOI: 10.3390/s19122837] [Received: 06/01/2019] [Revised: 06/19/2019] [Accepted: 06/21/2019]
Abstract
In this paper, we propose two novel methods for robot-world-hand–eye calibration and provide a comparative analysis against six state-of-the-art methods. We examine the calibration problem from two alternative geometrical interpretations, called ‘hand–eye’ and ‘robot-world-hand–eye’, respectively. The study analyses the effects of specifying the objective function as a pose-error or a reprojection-error minimization problem. We provide three real and three simulated datasets with rendered images as part of the study. In addition, we propose a robotic-arm error modeling approach to be used with the simulated datasets for generating realistic responses. The tests on simulated data are performed both in ideal cases and with pseudo-realistic robotic arm pose and visual noise. Our methods show significant improvement and robustness on many metrics in various scenarios compared to state-of-the-art methods.
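The pose-error objective mentioned here reduces to two standard metrics worth stating precisely: the geodesic rotation error arccos((tr(R_err) - 1) / 2) and the translation error norm. A small sketch:

```python
import numpy as np

def pose_error(T_est, T_true):
    """Rotation error (radians, geodesic distance on SO(3)) and
    translation error (length units) between two 4x4 poses."""
    R_err = T_est[:3, :3].T @ T_true[:3, :3]
    cos_a = np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0)
    rot_err = np.arccos(cos_a)
    trans_err = np.linalg.norm(T_est[:3, 3] - T_true[:3, 3])
    return rot_err, trans_err
```

Reprojection-error objectives instead measure pixel distances after projecting calibration points through the estimated transforms, which is what the comparison in this paper contrasts against the pose-error formulation above.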
Affiliation(s)
- Ihtisham Ali
- Faculty of Information Technology and Communication, Tampere University, 33720 Tampere, Finland
- Olli Suominen
- Faculty of Information Technology and Communication, Tampere University, 33720 Tampere, Finland
- Atanas Gotchev
- Faculty of Information Technology and Communication, Tampere University, 33720 Tampere, Finland
- Emilio Ruiz Morales
- Fusion for Energy (F4E), ITER Delivery Department, Remote Handling Project Team, 08019 Barcelona, Spain