1. Zhou K, Huang X, Li S, Li G. Convolutional neural network-based pose mapping estimation as an alternative to traditional hand-eye calibration. Rev Sci Instrum 2023; 94:065002. [PMID: 37862475] [DOI: 10.1063/5.0147783]
Abstract
The vision system is a key technology for automating industrial robots, and the accuracy of hand-eye calibration is crucial in determining the relationship between the camera and the robot end. Parallel robots are widely used in automated assembly because of their high positioning accuracy and large carrying capacity, but traditional hand-eye calibration methods may not be applicable owing to their limited motion range and the resulting accuracy problems. To address this issue, we propose solving the hand-eye calibration problem with a nonlinear pose-mapping estimation method, and we construct a 1-D pose estimation convolutional neural network (PECNN) whose performance is validated through experiments and discussion. The PECNN learns an end-to-end mapping from the variation of the target-object pose to the variation of the robot end pose. Our experiments show that the proposed hand-eye calibration method has high accuracy and can be applied to the automated assembly tasks of vision-guided parallel robots. Moreover, the method is applicable to most parallel and serial robots.
Affiliation(s)
- Kuai Zhou
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Xiang Huang
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Shuanggao Li
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Gen Li
- Suzhou Research Institute, Nanjing University of Aeronautics and Astronautics, Suzhou, China
2. Jeung D, Jung K, Lee HJ, Hong J. Augmented reality-based surgical guidance for wrist arthroscopy with bone-shift compensation. Comput Methods Programs Biomed 2023; 230:107323. [PMID: 36608430] [DOI: 10.1016/j.cmpb.2022.107323]
Abstract
BACKGROUND AND OBJECTIVES The intraoperative joint condition differs from preoperative CT/MR because of the motion applied during surgery, leading to an inaccurate approach to surgical targets. This study aims to provide real-time augmented reality (AR)-based surgical guidance for wrist arthroscopy based on a bone-shift model, validated through an in vivo computed tomography (CT) study. METHODS To accurately visualize concealed wrist bones on the intra-articular arthroscopic image, we propose a surgical guidance system with a novel bone-shift compensation method using noninvasive fiducial markers. First, to measure the effect of traction during surgery, two noninvasive fiducial markers were attached before surgery. In addition, two virtual link models connecting the wrist bones were implemented. When wrist traction occurs during the operation, the movement of the fiducial markers is measured, and bone-shift compensation moves the virtual links in the direction of the traction. The proposed bone-shift compensation method was verified with in vivo CT data from 10 participants. Finally, to introduce AR, camera calibration of the arthroscope parameters was performed, and a patient-specific template was used for registration between the patient and the wrist bone model. As a result, a virtual bone model with three-dimensional information could be accurately projected onto a two-dimensional arthroscopic image plane. RESULTS The proposed method estimated the position of the wrist bones in the traction state within a margin of 1.4 mm. After bone-shift compensation, the target point error was reduced by 33.6% in the lunate, 63.3% in the capitate, 55.0% in the scaphoid, and 74.8% in the trapezoid compared with preoperative wrist CT. In addition, a phantom experiment simulating the real surgical environment was conducted. The AR display expanded the field of view (FOV) of the arthroscope and helped visualize the anatomical structures around the bones.
CONCLUSIONS This study demonstrated that the proposed method successfully handles the AR error caused by wrist traction. In addition, the method allowed accurate AR visualization of the concealed bones and expansion of the limited FOV of the arthroscope. The proposed bone-shift compensation can also be applied to other joints, such as the knee or shoulder, by representing their bone movements with corresponding virtual links. Moreover, the movement of the joint skin during surgery can be measured using noninvasive fiducial markers in the same manner as for the wrist joint.
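The compensation idea above can be sketched numerically: translate each preoperative bone position along the measured fiducial-marker displacement, scaled by a per-bone coupling factor standing in for the virtual links. A minimal, hypothetical numpy sketch (the function name, weights, and coordinates are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def compensate_bone_shift(bone_positions, marker_before, marker_after, link_weights):
    """Shift each preoperative bone position along the measured traction
    direction by a per-bone weight (a stand-in for virtual-link coupling)."""
    traction = marker_after - marker_before          # measured fiducial displacement
    return {name: pos + w * traction
            for (name, pos), w in zip(bone_positions.items(), link_weights)}

# Hypothetical example: the capitate follows traction more than the lunate.
bones = {"lunate": np.array([0.0, 0.0, 0.0]),
         "capitate": np.array([0.0, 10.0, 0.0])}
shifted = compensate_bone_shift(bones,
                                np.array([0.0, 0.0, 50.0]),   # marker before traction
                                np.array([0.0, 0.0, 54.0]),   # marker after traction
                                [0.4, 0.8])                   # hypothetical weights
```

The per-bone weights here are pure placeholders; the paper derives the bone motion from its two virtual-link models rather than from fixed scalar factors.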
Affiliation(s)
- Deokgi Jeung
- Department of Robotics and Mechatronics Engineering, DGIST, Daegu, South Korea
- Kyunghwa Jung
- Department of Robotics and Mechatronics Engineering, DGIST, Daegu, South Korea; Korea Research Institute of Standards and Science, Daejeon, South Korea
- Hyun-Joo Lee
- Department of Orthopaedic Surgery, School of Medicine, Kyungpook National University, Kyungpook National University Hospital, Daegu, South Korea
- Jaesung Hong
- Department of Robotics and Mechatronics Engineering, DGIST, Daegu, South Korea
3. Wu J, Wang M, Fourati H, Li H, Zhu Y, Zhang C, Jiang Y, Hu X, Liu M. Generalized n-Dimensional Rigid Registration: Theory and Applications. IEEE Trans Cybern 2023; 53:927-940. [PMID: 35507617] [DOI: 10.1109/tcyb.2022.3168938]
Abstract
The generalized rigid registration problem in high-dimensional Euclidean spaces is studied. The loss function is minimized with an equivalent error formulation based on the Cayley formula. A closed-form linear least-squares solution to the problem is derived that also generates the registration covariances, i.e., the uncertainty of rotation and translation, providing accurate probabilistic descriptions. Simulation results confirm the correctness of the proposed method and demonstrate its efficiency in computation time compared with previous algorithms based on singular value decomposition (SVD) and linear matrix inequalities (LMIs). The proposed scheme is then applied to an interpolation problem on the special Euclidean group SE(n) with covariance-preserving functionality. Finally, experiments on covariance-aided lidar mapping show practical superiority in robotic navigation.
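For context, the SVD baseline the abstract compares against is the classic closed-form rigid registration (the Kabsch solution). A minimal numpy sketch of that baseline, not the paper's Cayley-based solver:

```python
import numpy as np

def rigid_register_svd(P, Q):
    """Classic SVD (Kabsch) solution to min ||R p_i + t - q_i||^2.
    P, Q are n x d arrays of corresponding points; works in any dimension d."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                        # d x d cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(H.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic 3-D check: recover a known rotation and translation.
rng = np.random.default_rng(0)
P = rng.normal(size=(30, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R, t = rigid_register_svd(P, Q)
```

Unlike the paper's method, this baseline returns only the point estimate, with no covariance of the recovered rotation and translation.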
4.
Abstract
A classic hand-eye system involves hand-eye calibration and robot-world and hand-eye calibration. Because hand-eye calibration can solve only the hand-eye transformation, this study determines the robot-world and hand-eye transformations simultaneously from the robot-world and hand-eye equation. Depending on whether the rotation and translation parts of the equation are decoupled, existing methods divide into separable solutions and simultaneous solutions. Separable solutions solve the rotation part before the translation part, so the estimated rotation errors propagate into the translation. In this study, a method is proposed that keeps rotation and translation coupled; it comprises a closed-form solution based on the Kronecker product and an iterative solution based on the Gauss-Newton algorithm. Feasibility was tested with simulated and real data, and superiority was verified by comparison with an available method. Finally, we improve a method that resolves the singularity caused by parameterizing the rotation matrix, which can be widely used in robot-world and hand-eye calibration. The results show that the prediction errors of rotation and translation based on the proposed method can be reduced to $0.26^\circ$ and $1.67$ mm, respectively.
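The Kronecker-product closed form described above can be sketched for the robot-world and hand-eye equation AX = YB: linearize the rotation equation with vec(AXB) = (Bᵀ ⊗ A) vec(X), take the null vector of the stacked system, then solve the translations by linear least squares. A hedged numpy sketch on synthetic data (this is the generic construction, not necessarily the authors' exact formulation):

```python
import numpy as np

def project_so3(M):
    """Nearest rotation matrix to M, via SVD."""
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:
        U[:, -1] *= -1
        R = U @ Vt
    return R

def solve_ax_yb(As, Bs):
    """Closed-form A_i X = Y B_i (4x4 homogeneous transforms):
    rotation part R_Ai R_X = R_Y R_Bi is linearized with the Kronecker
    product, then translations follow from linear least squares."""
    K, I3 = [], np.eye(3)
    for A, B in zip(As, Bs):
        # (I ⊗ R_A) vec(R_X) - (R_B^T ⊗ I) vec(R_Y) = 0  (column-major vec)
        K.append(np.hstack([np.kron(I3, A[:3, :3]), -np.kron(B[:3, :3].T, I3)]))
    _, _, Vt = np.linalg.svd(np.vstack(K))
    w = Vt[-1]                                  # 18-vector spanning the null space
    Mx = w[:9].reshape(3, 3, order="F")
    My = w[9:].reshape(3, 3, order="F")
    alpha = np.sign(np.linalg.det(Mx)) * abs(np.linalg.det(Mx)) ** (1 / 3)
    Rx, Ry = project_so3(Mx / alpha), project_so3(My / alpha)
    # Translation: R_Ai t_X - t_Y = R_Y t_Bi - t_Ai, stacked least squares.
    C = np.vstack([np.hstack([A[:3, :3], -I3]) for A in As])
    d = np.concatenate([Ry @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t = np.linalg.lstsq(C, d, rcond=None)[0]
    X = np.eye(4); X[:3, :3], X[:3, 3] = Rx, t[:3]
    Y = np.eye(4); Y[:3, :3], Y[:3, 3] = Ry, t[3:]
    return X, Y

def rand_rot(rng):
    return project_so3(rng.normal(size=(3, 3)))

# Synthetic check: build B_i = Y^-1 A_i X from known X, Y and recover them.
rng = np.random.default_rng(1)
X_true = np.eye(4); X_true[:3, :3], X_true[:3, 3] = rand_rot(rng), rng.normal(size=3)
Y_true = np.eye(4); Y_true[:3, :3], Y_true[:3, 3] = rand_rot(rng), rng.normal(size=3)
As = []
for _ in range(6):
    A = np.eye(4); A[:3, :3], A[:3, 3] = rand_rot(rng), rng.normal(size=3)
    As.append(A)
Bs = [np.linalg.inv(Y_true) @ A @ X_true for A in As]
X_est, Y_est = solve_ax_yb(As, Bs)
```

Because rotation and translation are still solved in sequence here, this sketch corresponds to the separable family the abstract critiques; the paper's coupled solution estimates both parts jointly.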
5. Wang G, Li WL, Jiang C, Zhu DH, Xie H, Liu XJ, Ding H. Simultaneous Calibration of Multicoordinates for a Dual-Robot System by Solving the AXB = YCZ Problem. IEEE Trans Robot 2021. [DOI: 10.1109/tro.2020.3043688]
6. Sun Y, Pan B, Guo Y, Fu Y, Niu G. Vision-based hand-eye calibration for robot-assisted minimally invasive surgery. Int J Comput Assist Radiol Surg 2020; 15:2061-2069. [PMID: 32808149] [DOI: 10.1007/s11548-020-02245-5]
Abstract
PURPOSE Knowledge of the laparoscope vision can greatly improve operating room (OR) efficiency. For vision-based computer-assisted surgery, hand-eye calibration establishes the coordinate relationship between the laparoscope and the robot slave arm. While significant advances have been made in hand-eye calibration in recent years, an efficient algorithm for minimally invasive surgical robots remains a challenge; in particular, estimating the hand-eye transformation without an external calibration object in the abdominal environment is a critical problem. METHODS We propose a novel hand-eye calibration algorithm for robot-assisted minimally invasive surgery (RMIS) that relies purely on the surgical instrument already present in the operating scenario. Our model is formed by the geometric information of the surgical instrument and the remote center-of-motion (RCM) constraint. We also extend the algorithm to a stereo laparoscope model. RESULTS Validations on synthetic simulations and an experimental surgical robot system were conducted to evaluate the proposed method. The results show that the method can perform hand-eye calibration without a calibration object. CONCLUSION A vision-based hand-eye calibration is developed. We demonstrate the feasibility of performing hand-eye calibration using only the components of the surgical robot system, improving the efficiency of the surgical OR.
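One ingredient of the RCM constraint mentioned above is that all instrument-shaft axes should pass (nearly) through a fixed trocar point. A minimal numpy sketch that recovers such a point from a bundle of 3-D lines by linear least squares (illustrative only; the paper's full model also exploits the instrument geometry):

```python
import numpy as np

def fit_rcm_point(anchors, directions):
    """Least-squares point closest to a bundle of 3-D lines a_i + s d_i:
    minimize sum_i ||(I - d_i d_i^T)(p - a_i)||^2 via its normal equations.
    A quick check that instrument axes share a remote center of motion."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for a, d in zip(anchors, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)       # projector orthogonal to the line
        A += P
        b += P @ a
    return np.linalg.solve(A, b)

# Synthetic lines through a common point (a hypothetical trocar at [1, 2, 3]).
rng = np.random.default_rng(2)
rcm = np.array([1.0, 2.0, 3.0])
dirs = rng.normal(size=(5, 3))
anchors = [rcm - 4.0 * d / np.linalg.norm(d) for d in dirs]
est = fit_rcm_point(anchors, dirs)
```

With noisy shaft estimates the residual of this fit also gives a usable measure of how well the RCM constraint is satisfied.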
Affiliation(s)
- Yanwen Sun
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
- Bo Pan
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
- Yongchen Guo
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
- Yili Fu
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
- Guojun Niu
- School of Mechanical Engineering and Automation, Zhejiang Sci-Tech University, Hangzhou, China
7. Xiang G, Su J. Interactive Natural Motion Planning for Robot Systems Based on Representation Space. Int J Soc Robot 2020. [DOI: 10.1007/s12369-019-00552-9]
8. Zhang Q, Gao GQ. Hand-eye calibration and grasping pose calculation with motion error compensation and vertical-component correction for 4-R(2-SS) parallel robot. Int J Adv Robot Syst 2020. [DOI: 10.1177/1729881420909012]
Abstract
Owing to the motion constraints of the 4-R(2-SS) parallel robot, it is difficult to accurately calculate the translation component of hand-eye calibration with existing model-solving methods. Additionally, camera calibration error, robot motion error, and invalid calibration motion poses make fast and accurate online hand-eye calibration difficult. We therefore propose a hand-eye calibration method with motion-error compensation and vertical-component correction for the 4-R(2-SS) parallel robot by improving the existing eye-to-hand model and its solving method. Firstly, the single-camera eye-to-hand model is improved, and the robot motion error in the improved model is compensated to reduce the influence of camera calibration error and robot motion error on model accuracy. Secondly, the vertical component of the hand-eye calibration is corrected based on the vertical constraint between the calibration plate and the end effector, so that the pose and motion error in calibrating the 4-R(2-SS) parallel robot are calculated accurately. Thirdly, a nontrivial-solution constraint of the eye-to-hand model is constructed and used to remove invalid calibration motion poses and to plan the calibration motion. Finally, the proposed method was verified in experiments with a fruit-sorting system based on the 4-R(2-SS) parallel robot. Compared with random motion and the existing model and solving method, the average online calibration time with planned motion decreases by 29.773 s, and the average calibration error with the improved model and solving method decreases by 151.293. The proposed method effectively improves the accuracy and efficiency of hand-eye calibration for the 4-R(2-SS) parallel robot and further enables accurate and fast grasping.
Affiliation(s)
- Qian Zhang
- School of Electrical and Information Engineering, Jiangsu University, Zhenjiang, China
- Guo-Qin Gao
- School of Electrical and Information Engineering, Jiangsu University, Zhenjiang, China
9. Lee S, Shim S, Ha HG, Lee H, Hong J. Simultaneous Optimization of Patient-Image Registration and Hand-Eye Calibration for Accurate Augmented Reality in Surgery. IEEE Trans Biomed Eng 2020; 67:2669-2682. [PMID: 31976878] [DOI: 10.1109/tbme.2020.2967802]
Abstract
OBJECTIVE Augmented reality (AR) navigation using a position sensor in endoscopic surgery relies on the quality of patient-image registration and hand-eye calibration. Conventional methods collect the necessary data and compute the two output transformation matrices separately. However, the AR display setting during surgery generally differs from that of the preoperative processes. Although conventional methods can identify optimal solutions under the initial conditions, AR display errors are unavoidable during surgery owing to the inherent computational complexity of AR processes, such as error accumulation over successive matrix multiplications and tracking errors of the position sensor. METHODS We propose simultaneously optimizing patient-image registration and hand-eye calibration in an AR environment before surgery. The relationship between the endoscope and a virtual object to be overlaid is first calculated from an endoscopic image, which also serves as a reference during optimization. After including the tracking information from the position sensor, patient-image registration and hand-eye calibration are optimized in the least-squares sense. RESULTS Experiments with synthetic data verify that the proposed method is less sensitive to computation and tracking errors. A phantom experiment with a position sensor was also conducted: the accuracy of the proposed method is significantly higher than that of the conventional method. CONCLUSION The AR accuracy of the proposed method is compared with that of conventional methods, and its superiority is verified. SIGNIFICANCE This study demonstrates that the proposed method has substantial potential for improving AR navigation accuracy.
10. Li W, Dong M, Lu N, Lou X, Sun P. Simultaneous Robot-World and Hand-Eye Calibration without a Calibration Object. Sensors (Basel) 2018; 18:3949. [PMID: 30445680] [PMCID: PMC6263626] [DOI: 10.3390/s18113949]
Abstract
An extended robot-world and hand-eye calibration method is proposed in this paper to estimate the transformation between the camera and the robot. The approach suits mobile or medical robotics applications in which precise, expensive, or unsterile calibration objects, or sufficient movement space, cannot be made available at the work site. Firstly, a mathematical model formulates the robot-gripper-to-camera and robot-base-to-world rigid transformations using the Kronecker product. Subsequently, sparse bundle adjustment is introduced to optimize the robot-world and hand-eye calibration together with the reconstruction results. Finally, a validation experiment with two kinds of real data sets demonstrates the effectiveness and accuracy of the proposed approach. The relative translation error of the rigid transformation is less than 8/10,000 for a Denso robot over a movement range of 1.3 m × 1.3 m × 1.2 m, and the mean distance-measurement error after three-dimensional reconstruction is 0.13 mm.
Affiliation(s)
- Wei Li
- Institute of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Mingli Dong
- Key Laboratory of the Ministry of Education for Optoelectronic Measurement Technology and Instrument, Beijing Information Science and Technology University, Beijing 100192, China
- Naiguang Lu
- Institute of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Key Laboratory of the Ministry of Education for Optoelectronic Measurement Technology and Instrument, Beijing Information Science and Technology University, Beijing 100192, China
- Xiaoping Lou
- Key Laboratory of the Ministry of Education for Optoelectronic Measurement Technology and Instrument, Beijing Information Science and Technology University, Beijing 100192, China
- Peng Sun
- Institute of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Key Laboratory of the Ministry of Education for Optoelectronic Measurement Technology and Instrument, Beijing Information Science and Technology University, Beijing 100192, China
11. Tang H, Liu Y, Wang H. Constraint Gaussian Filter With Virtual Measurement for On-Line Camera-Odometry Calibration. IEEE Trans Robot 2018. [DOI: 10.1109/tro.2018.2805312]
12. Wang Z, Liu Z, Ma Q, Cheng A, Liu YH, Kim S, Deguet A, Reiter A, Kazanzides P, Taylor RH. Vision-Based Calibration of Dual RCM-Based Robot Arms in Human-Robot Collaborative Minimally Invasive Surgery. IEEE Robot Autom Lett 2018. [DOI: 10.1109/lra.2017.2737485]
13. Shah M, Bostelman R, Legowik S, Hong T. Calibration of mobile manipulators using 2D positional features. Measurement 2018; 124. [PMID: 30996508] [PMCID: PMC6463307] [DOI: 10.1016/j.measurement.2018.04.024]
Abstract
Robotic manipulators are increasingly being attached to automatic ground vehicles (AGVs) to improve the efficiency of assembly in manufacturing systems. However, calibrating these mobile manipulators is difficult because the offset between the robotic manipulator and the AGV is often unknown. This paper provides a novel, simple, and low-cost method for calibrating and measuring the performance of mobile manipulators using data collected from a laser retroreflector that digitally detects the horizontal two-dimensional (2D) positions of reflectors on an artifact, together with a navigation system that provides the heading angle and 2D position of the AGV. The method is presented mathematically as a closed-form solution to the positional component of the 2D robot-world/hand-eye calibration problem AX = YB. The method is then applied to simulated data as well as data collected in a laboratory setting, and compared with other methods.
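The positional component of the planar AX = YB problem is linear in the unknown offsets and in (cos θ_Y, sin θ_Y), so it admits a least-squares solution. A hedged numpy sketch of that idea on synthetic data (this mirrors the general construction, not necessarily the paper's exact closed form):

```python
import numpy as np

def rot2(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def solve_positional_2d(R_As, t_As, t_Bs):
    """Positional part of planar AX = YB:
    R_Ai tX + t_Ai = R_Y t_Bi + tY, linear in (tX, tY, cos thY, sin thY)."""
    rows, rhs = [], []
    for R_A, t_A, t_B in zip(R_As, t_As, t_Bs):
        # R_Y t_B = M @ [c, s] with M built from t_B's components
        M = np.array([[t_B[0], -t_B[1]], [t_B[1], t_B[0]]])
        rows.append(np.hstack([R_A, -np.eye(2), -M]))
        rhs.append(-t_A)
    u, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    c, s = u[4:] / np.linalg.norm(u[4:])     # re-impose the unit-circle constraint
    return u[:2], u[2:4], np.arctan2(s, c)

# Synthetic check with known offsets and world rotation angle 0.7 rad.
tX_true, tY_true, thY = np.array([0.5, -0.2]), np.array([3.0, 1.0]), 0.7
rng = np.random.default_rng(5)
R_As = [rot2(a) for a in rng.uniform(-np.pi, np.pi, 4)]
t_As = [rng.normal(size=2) for _ in range(4)]
t_Bs = [rot2(-thY) @ (R_A @ tX_true + t_A - tY_true)
        for R_A, t_A in zip(R_As, t_As)]
tX, tY, th = solve_positional_2d(R_As, t_As, t_Bs)
```

At least three poses with distinct AGV headings are needed for the six unknowns; the variables here (tX, tY, thY) are illustrative names, not the paper's notation.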
Affiliation(s)
- Mili Shah
- Department of Mathematics and Statistics, Loyola University Maryland, 4501 North Charles Street, Baltimore, MD 21210, United States
- Intelligent Systems Division, National Institute of Standards and Technology, 100 Bureau Drive, Gaithersburg, MD 20899, United States
- Roger Bostelman
- Intelligent Systems Division, National Institute of Standards and Technology, 100 Bureau Drive, Gaithersburg, MD 20899, United States
- Le2i, Université de Bourgogne, BP 47870, 21078 Dijon, France
- Steven Legowik
- Robotic Research, LLC, 555 Quince Orchard Road, Gaithersburg, MD 20878, United States
- Tsai Hong
- Intelligent Systems Division, National Institute of Standards and Technology, 100 Bureau Drive, Gaithersburg, MD 20899, United States
14. Xu J, Chen R, Liu S, Guan Y. Self-recalibration of a robot-assisted structured-light-based measurement system. Appl Opt 2017; 56:8857-8865. [PMID: 29131165] [DOI: 10.1364/ao.56.008857]
Abstract
The structured-light-based measurement method is widely employed in numerous fields. For industrial inspection, however, the measurement system must be moved to different viewpoints to achieve complete scanning of a workpiece and to overcome occlusion. Moreover, frequent reconfiguration of the measurement system may be needed depending on the size of the measured object, making self-recalibration of the extrinsic parameters indispensable. To this end, this paper proposes an automatic self-recalibration and reconstruction method in which a robot arm moves the measurement system for complete scanning; self-recalibration is achieved through fundamental-matrix calculations and point-cloud registration, without an accurate calibration gauge. Experimental results demonstrate the feasibility and accuracy of the method.
15. Wang L, Wang T, Tang P, Hu L, Liu W, Han Z, Hao M, Liu H, Wang K, Zhao Y, Guo N, Cao Y, Li C. A new hand-eye calibration approach for fracture reduction robot. Comput Assist Surg (Abingdon) 2017; 22:113-119. [PMID: 28938847] [DOI: 10.1080/24699322.2017.1379254]
Abstract
OBJECTIVE Hand-eye calibration determines the transformation between the end-effector and the camera marker of the robot, but the robot movement required by traditional methods is time-consuming, inaccurate, and sometimes unavailable. The method presented in this article completes the calibration without any movement and is therefore better suited to clinical applications. METHODS Instead of solving the classic nonlinear equation AX = XB, we collected points on the X and Y axes of the tool coordinate system (TCS) with a visual probe and fitted them using the singular value decomposition (SVD) algorithm. The transformation was then obtained from the data of the tool center point (TCP). A comparison test was conducted to verify the performance of the method. RESULTS The average translation and orientation errors of the new method are 0.12 ± 0.122 mm and 0.18 ± 0.112°, respectively, versus 0.357 ± 0.347 mm and 0.416 ± 0.234° for the traditional method. CONCLUSIONS The high accuracy of the method indicates that it is a good candidate for medical robots, which usually need to work in a sterile environment.
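Fitting a coordinate axis to probed points with SVD, as described above, amounts to taking the first principal direction of the centered point set. A minimal numpy sketch of that fitting step (illustrative, using synthetic exactly-collinear points; not the article's full TCS pipeline):

```python
import numpy as np

def fit_axis_direction(points):
    """Fit the dominant direction of points probed along one axis of the
    tool coordinate system: first right singular vector of the centered set."""
    centered = points - points.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered)
    return Vt[0]                              # unit vector along the fitted axis

# Synthetic probe samples along a known unit direction.
true_dir = np.array([1.0, 2.0, 2.0]) / 3.0
pts = np.outer(np.linspace(-5.0, 5.0, 20), true_dir)
d = fit_axis_direction(pts)
```

The recovered direction is defined only up to sign; with two fitted axes (X and Y of the TCS) plus the TCP, the full frame, and hence the hand-eye transformation, follows.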
Affiliation(s)
- Lifeng Wang
- School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Tianmiao Wang
- School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Peifu Tang
- Department of Orthopaedics, Chinese PLA General Hospital, Beijing, China
- Lei Hu
- School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Wenyong Liu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China
- Zhonghao Han
- School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Ming Hao
- Department of Orthopaedics, Chinese PLA General Hospital, Beijing, China
- Hongpeng Liu
- School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Kun Wang
- Department of Orthopaedics, Chinese PLA General Hospital, Beijing, China
- Yanpeng Zhao
- Department of Orthopaedics, Chinese PLA General Hospital, Beijing, China
- Na Guo
- School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Yanxiang Cao
- Department of Orthopaedics, Chinese PLA General Hospital, Beijing, China
- Changsheng Li
- Department of Biomedical Engineering, National University of Singapore, Singapore
16. Taylor Z, Nieto J. Motion-Based Calibration of Multimodal Sensor Extrinsics and Timing Offset Estimation. IEEE Trans Robot 2016. [DOI: 10.1109/tro.2016.2596771]
17.
Abstract
In this paper, we present a new approach to visual servoing using lines. It is based on a theoretical and geometrical study of the main line representations, which allows us to define a new representation, the so-called binormalized Plücker coordinates. These are particularly well suited for visual servoing. Indeed, they allow the definition of an image line alignment concept. Moreover, the control law which realizes such an alignment has several properties: partial decoupling between rotation and translation, analytical inversion of the motion equations and global asymptotic stability conditions. This control law was validated both in simulation and experimentally in the specific case of an orthogonal trihedron.
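Plücker coordinates represent a line by a unit direction u and a moment m = p × u satisfying u · m = 0; the binormalized form further splits m into a unit part and a scalar depth. A small numpy sketch of this construction (an illustration of the representation only, not the paper's visual-servoing control law):

```python
import numpy as np

def plucker_from_points(p1, p2):
    """Plücker coordinates of the line through p1 and p2: unit direction u
    and moment m = p1 x u (with u . m = 0 by construction). The moment is
    split into a unit normal n and a depth h = ||m||, following the idea of
    the binormalized representation."""
    u = (p2 - p1) / np.linalg.norm(p2 - p1)
    m = np.cross(p1, u)
    h = np.linalg.norm(m)                    # distance of the line to the origin
    n = m / h if h > 0 else m                # unit part of the moment
    return u, n, h

# Line through (1, 0, 0) with direction (0, 1, 0): moment points along z.
u, n, h = plucker_from_points(np.array([1.0, 0.0, 0.0]),
                              np.array([1.0, 1.0, 0.0]))
```

Note that (u, n) is independent of which two points on the line are chosen, which is what makes the representation convenient for defining image-line alignment.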
Affiliation(s)
- Nicolas Andreff
- LaRAMA - UBP, Institut Français de Mécanique Avancée, BP 265, 63175 Aubière Cedex, France
- Bernard Espiau
- INRIA Rhône-Alpes and GRAVIR-IMAG, 655 av. de l'Europe, 38330 Montbonnot Saint Martin, France
- Radu Horaud
- INRIA Rhône-Alpes and GRAVIR-IMAG, 655 av. de l'Europe, 38330 Montbonnot Saint Martin, France
18. Hu JS, Chang YJ. Simultaneous Hand-Eye-Workspace and Camera Calibration using Laser Beam Projection. Int J Autom Smart Technol 2014. [DOI: 10.5875/ausmt.v4i1.205]
19.

20. Ernst F, Richter L, Matthäus L, Martens V, Bruder R, Schlaefer A, Schweikard A. Non-orthogonal tool/flange and robot/world calibration. Int J Med Robot 2012; 8:407-20. [DOI: 10.1002/rcs.1427]
Affiliation(s)
- Lars Matthäus
- Eemagine Medical Imaging Solutions GmbH, 10243 Berlin, Germany
- Volker Martens
- Institute for Robotics and Cognitive Systems, University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck, Germany
- Ralf Bruder
- Institute for Robotics and Cognitive Systems, University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck, Germany
21. Aristos D, Tzafestas S. A Method for the Registration of a Known CAD Model into the Workspace Frame of a Robot. Int J Artif Intell Tools 2011. [DOI: 10.1142/s0218213010000194]
Abstract
In many robotic applications it is required to manipulate a specific rigid object whose CAD model is known a priori but whose position and orientation in space are unknown. This category of tasks includes piercing, painting, or welding at specific points on the rigid object's surface. For such tasks to become feasible, the CAD data of the object must be registered into the robot's workspace frame, so that the robot arm knows the position and orientation of the rigid body with respect to its own flange or base coordinate frame. Achieving this goal requires combining several techniques from image processing, 3D modeling, and robot kinematics. This paper provides a convenient combination of such methods for the successful online registration of a rigid object's CAD data to the robot's workspace frame.
Affiliation(s)
- Dimitris Aristos
- Intelligent Robotics and Automation Laboratory, School of Electrical and Computer Engineering, National Technical University of Athens, Zographou, Athens 15773, Greece
- Spyros Tzafestas
- Intelligent Robotics and Automation Laboratory, School of Electrical and Computer Engineering, National Technical University of Athens, Zographou, Athens 15773, Greece
22. Chen C, Schonfeld D. A particle filtering framework for joint video tracking and pose estimation. IEEE Trans Image Process 2010; 19:1625-1634. [PMID: 20215081] [DOI: 10.1109/tip.2010.2043009]
Abstract
A method is introduced to track an object's motion and estimate its pose directly from 2-D image sequences. The scale-invariant feature transform (SIFT) is used to extract corresponding feature points from the image sequences. We demonstrate that pose estimation from the corresponding feature points can be formulated as a solution to Sylvester's equation, and we show that this solution is equivalent to the classical SVD method for 3D-3D pose estimation. However, whereas the classical SVD method cannot be used for pose estimation directly from 2-D image sequences, our Sylvester-equation formulation provides a new approach to such pose estimation. Smooth video tracking and pose estimation are finally obtained by using the solution of Sylvester's equation within the importance sampling density of a particle filtering framework. Computer simulations on synthetic data and real-world videos demonstrate the robustness and speed of our method compared with similar object tracking and pose estimation methods.
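Sylvester's equation AX + XB = C, central to the formulation above, can be solved generically by vectorization with the Kronecker product: (I ⊗ A + Bᵀ ⊗ I) vec(X) = vec(C) in column-major vec convention. A minimal numpy sketch of such a generic solver (not the paper's pose-estimation pipeline):

```python
import numpy as np

def solve_sylvester(A, B, C):
    """Solve A X + X B = C via vectorization:
    (I ⊗ A + B^T ⊗ I) vec(X) = vec(C), using column-major (Fortran) vec.
    Solvable when A and -B share no eigenvalue."""
    n, m = A.shape[0], B.shape[0]
    K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
    x = np.linalg.solve(K, C.flatten(order="F"))
    return x.reshape(n, m, order="F")

# Round-trip check with a known solution.
A = np.array([[3.0, 1.0], [0.0, 2.0]])
B = np.array([[4.0, 0.0], [1.0, 5.0]])
X_true = np.array([[1.0, -1.0], [2.0, 0.5]])
C = A @ X_true + X_true @ B
X = solve_sylvester(A, B, C)
```

The O(n³m³) cost of the Kronecker approach is acceptable only for small matrices, which suffices for the 3×3 systems arising in rigid pose estimation.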
Affiliation(s)
- Chong Chen
- Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA.
23
Comparing calibration approaches for 3D ultrasound probes. Int J Comput Assist Radiol Surg 2008; 4:203-13. [DOI: 10.1007/s11548-008-0258-x] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2008] [Accepted: 09/14/2008] [Indexed: 10/21/2022]
24
Abstract
This paper presents new vector quantization based methods for selecting well-suited data for hand-eye calibration from a given sequence of hand and eye movements. Data selection can improve the accuracy of classic hand-eye calibration, and can make calibration feasible in the first place in situations where the standard approach of manually selecting positions is inconvenient or even impossible, especially when using continuously recorded data. A variety of methods is proposed, which differ from each other in the dimensionality of the vector quantization relative to the degrees of freedom of the rotation representation, and in how the rotation angle is incorporated. The performance of the proposed vector quantization based data selection methods is evaluated using data obtained from a manually moved optical tracking system (hand) and an endoscopic camera (eye).
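The data-selection idea can be illustrated with a minimal sketch, assuming k-means vector quantization over unit-quaternion rotation representations with deterministic farthest-point initialization. This is a simplification of the paper's VQ variants, and all names are ours.

```python
import numpy as np

def select_calibration_motions(quats, k, iters=20):
    """Pick k well-spread relative motions for hand-eye calibration by
    vector quantization (k-means) over unit quaternions (N x 4 array).
    Returns the indices of the motions closest to the k codebook vectors."""
    q = np.asarray(quats, dtype=float)
    q = q * np.where(q[:, :1] >= 0, 1.0, -1.0)  # resolve quaternion double cover
    # Deterministic farthest-point initialization of the codebook
    centers = [q[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(q - c, axis=1) for c in centers], axis=0)
        centers.append(q[d.argmax()])
    centers = np.array(centers)
    # Lloyd iterations: assign motions to codebook vectors, update means
    for _ in range(iters):
        d = np.linalg.norm(q[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = q[labels == j].mean(axis=0)
    # One representative motion per cluster
    d = np.linalg.norm(q[:, None, :] - centers[None, :, :], axis=2)
    return np.unique(d.argmin(axis=0))
```

Quantizing in quaternion space is only one of the rotation representations the paper compares; the same scheme applies to axis-angle or rotation-axis vectors of different dimensionality.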
Affiliation(s)
- Jochen Schmidt
- Centre for Artificial Intelligence Research, Auckland University of Technology, Auckland, New Zealand
- Heinrich Niemann
- Lehrstuhl für Mustererkennung, Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
25
Abstract
While a robot moves, online hand-eye calibration to determine the relative pose between the robot gripper/end-effector and the sensors mounted on it is very important in a vision-guided robot system. During online hand-eye calibration, it is impossible to perform motion planning to avoid degenerate motions and small rotations, which may lead to unreliable calibration results. This paper proposes an adaptive motion selection algorithm for online hand-eye calibration, featuring dynamic threshold determination for motion selection to obtain reliable hand-eye calibration results. Simulation and real experiments demonstrate the effectiveness of our method.
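The underlying hand-eye problem is the classical AX = XB formulation. A minimal sketch of one standard batch solution (rotation-axis alignment by SVD, then stacked least squares for translation) is shown below; this is not the paper's adaptive motion-selection algorithm, and the function names are ours.

```python
import numpy as np

def rotation_axis(R):
    # Unit rotation axis from the skew-symmetric part of R
    # (valid for rotation angles strictly between 0 and pi).
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return v / np.linalg.norm(v)

def hand_eye_AX_XB(As, Bs):
    """Solve A_i X = X B_i for the hand-eye transform X from pairs of
    robot motions A_i and sensor motions B_i (4x4 homogeneous matrices).
    Rotation: the motion axes satisfy a_i = R_X b_i and are aligned by SVD.
    Translation: stacked least squares on (R_Ai - I) t_X = R_X t_Bi - t_Ai."""
    a = np.stack([rotation_axis(A[:3, :3]) for A in As], axis=1)  # 3 x N
    b = np.stack([rotation_axis(B[:3, :3]) for B in Bs], axis=1)
    U, _, Vt = np.linalg.svd(b @ a.T)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R_X = Vt.T @ D @ U.T
    M = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    v = np.concatenate([R_X @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t_X, *_ = np.linalg.lstsq(M, v, rcond=None)
    X = np.eye(4)
    X[:3, :3] = R_X
    X[:3, 3] = t_X
    return X
```

The degeneracies the abstract warns about are visible here: near-parallel rotation axes make the SVD step ill-conditioned, and small rotation angles make `rotation_axis` noisy, which is why motion selection matters for online calibration.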
26
Malm H, Heyden A. Extensions of Plane-Based Calibration to the Case of Translational Motion in a Robot Vision Setting. IEEE T ROBOT 2006. [DOI: 10.1109/tro.2005.862477] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
27
Renaud P, Andreff N, Lavest JM, Dhome M. Simplifying the kinematic calibration of parallel mechanisms using vision-based metrology. IEEE T ROBOT 2006. [DOI: 10.1109/tro.2005.861482] [Citation(s) in RCA: 77] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
28
Shi F, Zhang J, Liu Y, Zhao Z. A hand-eye robotic model for total knee replacement surgery. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2005; 8:122-30. [PMID: 16685951 DOI: 10.1007/11566489_16] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
This paper presents a hand-eye robotic model for total knee replacement (TKR) surgery. Unlike existing robot-assisted TKR surgery, the proposed model is a surgical robot combined with a movable hand-eye navigation system, which uses the full potential of both computer-assisted systems. Without using CT images or landmark pins in the patient's bones, it can directly measure the mechanical axis with high precision. This system provides a new approach to minimally invasive surgery. Experimental results show that the proposed model is promising for future application.
Affiliation(s)
- Fanhuai Shi
- Inst. Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200030, P. R. China.
29
Schmidt J, Vogt F, Niemann H. Calibration–Free Hand–Eye Calibration: A Structure–from–Motion Approach. ACTA ACUST UNITED AC 2005. [DOI: 10.1007/11550518_9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/03/2023]
30
An Approach to Improve Online Hand-Eye Calibration. PATTERN RECOGNITION AND IMAGE ANALYSIS 2005. [DOI: 10.1007/11492429_78] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
31
Boctor EM, Iordachita I, Fichtinger G, Hager GD. Real-Time Quality Control of Tracked Ultrasound. ACTA ACUST UNITED AC 2005; 8:621-30. [PMID: 16685898 DOI: 10.1007/11566465_77] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
The overwhelming majority of intra-operative hazard situations in tracked ultrasound (US) systems are attributed to failure of registration between the tracking and imaging coordinate frames. We introduce a novel methodology for real-time, in vivo quality control of tracked US systems, in order to capture registration failures during the clinical procedure. In effect, we dynamically recalibrate the tracked US system for rotation, scale factor, and in-plane position offset up to a scale factor. We detect any unexpected change in these parameters by capturing discrepancies in the resulting calibration matrix, thereby assuring the quality (accuracy and consistency) of the tracked system. No phantom is used for the recalibration. The quality-control task is performed in the background, transparently to the clinical user, while the subject is being scanned. We present the concept, mathematical formulation, and in vitro experimental evaluation. This new method can play an important role in guaranteeing accurate, consistent, and reliable performance of tracked ultrasound.
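The idea of capturing discrepancies in the calibration matrix can be illustrated with a simple drift check between a reference and a re-estimated calibration. This is a hedged sketch only; the thresholds, units, and function name are our assumptions, not the paper's.

```python
import numpy as np

def calibration_drift(T_ref, T_now, rot_tol_deg=1.0, trans_tol_mm=2.0):
    """Return True if the current calibration matrix T_now deviates from
    the reference T_ref by more than the given rotation (degrees) or
    translation (millimeters) tolerances. Both inputs are 4x4 homogeneous
    transforms; tolerances are illustrative values."""
    # Relative rotation between the two calibrations, and its angle
    dR = T_ref[:3, :3].T @ T_now[:3, :3]
    cos_ang = np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0)
    ang_deg = np.degrees(np.arccos(cos_ang))
    # Translation offset between the two calibrations
    dt = np.linalg.norm(T_now[:3, 3] - T_ref[:3, 3])
    return bool(ang_deg > rot_tol_deg or dt > trans_tol_mm)
```

Run in the background on each recalibration, such a check can flag registration failures without interrupting the scan, in the spirit of the quality-control loop the abstract describes.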
Affiliation(s)
- Emad M Boctor
- Engineering Research Center, Johns Hopkins University, USA.
32
Viswanathan A, Boctor EM, Taylor RH, Hager G, Fichtinger G. Immediate Ultrasound Calibration with Three Poses and Minimal Image Processing. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION – MICCAI 2004 2004. [DOI: 10.1007/978-3-540-30136-3_55] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
33
Li Y, Chen S. Automatic recalibration of an active structured light vision system. ACTA ACUST UNITED AC 2003. [DOI: 10.1109/tra.2003.808859] [Citation(s) in RCA: 72] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]