1. Burton W, Crespo IR, Andreassen T, Pryhoda M, Jensen A, Myers C, Shelburne K, Banks S, Rullkoetter P. Fully automatic tracking of native glenohumeral kinematics from stereo-radiography. Comput Biol Med 2023; 163:107189. [PMID: 37393783] [DOI: 10.1016/j.compbiomed.2023.107189]
Abstract
The current work introduces a system for fully automatic tracking of native glenohumeral kinematics in stereo-radiography sequences. The proposed method first applies convolutional neural networks to obtain segmentation and semantic key point predictions in biplanar radiograph frames. Preliminary bone pose estimates are computed by solving a non-convex optimization problem with semidefinite relaxations to register digitized bone landmarks to semantic key points. Initial poses are then refined by registering computed tomography-based digitally reconstructed radiographs to captured scenes, which are masked by segmentation maps to isolate the shoulder joint. A particular neural net architecture which exploits subject-specific geometry is also introduced to improve segmentation predictions and increase robustness of subsequent pose estimates. The method is evaluated by comparing predicted glenohumeral kinematics to manually tracked values from 17 trials capturing 4 dynamic activities. Median orientation differences between predicted and ground truth poses were 1.7° and 8.6° for the scapula and humerus, respectively. Joint-level kinematics differences were less than 2° in 65%, 13%, and 63% of frames for the X, Y, and Z orientation DoFs based on Euler angle decompositions. Automation of kinematic tracking can increase the scalability of tracking workflows in research, clinical, or surgical applications.
Affiliations:
- William Burton, Ignacio Rivero Crespo, Thor Andreassen, Moira Pryhoda, Casey Myers, Kevin Shelburne, Paul Rullkoetter: Center for Orthopaedic Biomechanics, University of Denver, 2155 E. Wesley Ave., Denver, CO 80210, USA
- Andrew Jensen, Scott Banks: Department of Mechanical and Aerospace Engineering, University of Florida, 939 Center Dr., Gainesville, FL 32611, USA
2. Fast and robust active camera relocalization in the wild for fine-grained change detection. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.04.102]
3. Wu J, Liu M, Huang Y, Jin C, Wu Y, Yu C. SE(n)++: An Efficient Solution to Multiple Pose Estimation Problems. IEEE Transactions on Cybernetics 2022; 52:3829-3840. [PMID: 32877345] [DOI: 10.1109/tcyb.2020.3015039]
Abstract
In robotic applications, many pose problems involve solving for the homogeneous transformation on the special Euclidean group SE(n). However, due to the nonconvexity of SE(n), many existing solvers treat rotation and translation separately, and their computational efficiency remains unsatisfactory. A new technique called SE(n)++ is proposed in this article that exploits a novel mapping from SE(n) to SO(n+1). The mapping transforms the coupling between rotation and translation into a unified formulation on the Lie group and yields better analytical results and computational performance. Specifically, three major pose problems are considered: point-cloud registration, hand-eye calibration, and SE(n) synchronization. Experimental validations on open datasets confirm the effectiveness of the proposed SE(n)++ method.
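For context, the point-cloud registration problem benchmarked here is classically solved in closed form by the SVD-based Kabsch/Umeyama method, which recovers rotation and translation separately — exactly the decoupling SE(n)++ aims to avoid. A sketch of that classical baseline (not the SE(n)++ algorithm itself):

```python
import numpy as np

def register_points(P, Q):
    """Least-squares rigid fit Q ≈ R @ P + t for 3xN point sets (Kabsch/Umeyama)."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (Q - cq) @ (P - cp).T               # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflections
    R = U @ D @ Vt
    t = cq - R @ cp
    return R, t.ravel()
```

The determinant correction keeps the solution in SO(3) rather than O(3), a detail that matters for degenerate or noisy configurations.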
4
|
Liu Y, Chen G, Knoll A. Globally Optimal Vertical Direction Estimation in Atlanta World. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2022; 44:1949-1962. [PMID: 32986545 DOI: 10.1109/tpami.2020.3027047] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
In man-made environments, most objects and structures are organized in the form of orthogonal and parallel planes. These planes can be approximated under an Atlanta world assumption, in which plane normals are represented by Atlanta frames: one vertical frame and multiple horizontal frames. Conventionally, given a set of inputs such as surface normals, the Atlanta frame estimation problem can be solved by a branch-and-bound (BnB) algorithm. However, the runtime of the BnB algorithm increases greatly as the dimensionality (i.e., the number of horizontal frames) grows. In this paper, we estimate only the vertical direction instead of all Atlanta frames at once. Accordingly, we propose a vertical direction estimation method that considers the relationship between the vertical frame and the horizontal frames. Concretely, our approach employs a BnB algorithm to search for the vertical direction, thereby guaranteeing global optimality without requiring prior knowledge of the number of Atlanta frames. To guarantee convergence, four novel bounds, derived by mapping a 3D hemisphere to a 2D region, are investigated. We verify the feasibility of the proposed method on various challenging synthetic and real-world data.
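The core BnB idea — searching a 2D parametrization of the hemisphere while bounding an inlier count by the angular radius of each box — can be sketched as follows. This is a simplified single-bound illustration (corner-based box radius, sign-invariant residuals), not the paper's four bounds:

```python
import heapq
import itertools
import numpy as np

def sph(theta, phi):
    """Unit direction from polar angle theta (from +z) and azimuth phi."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def residuals(normals, d):
    """Sign-invariant angle between each normal and the candidate axis d."""
    return np.arccos(np.clip(np.abs(normals @ d), -1.0, 1.0))

def bnb_vertical(normals, tol=np.radians(3), max_iter=3000):
    """Best-first BnB over the upper hemisphere maximizing vertical inliers."""
    root = (0.0, np.pi / 2, 0.0, 2 * np.pi)  # (theta_lo, theta_hi, phi_lo, phi_hi)

    def bounds(box):
        t0, t1, p0, p1 = box
        c = sph((t0 + t1) / 2, (p0 + p1) / 2)
        # box "radius": largest angle from the centre to a corner direction
        corners = [sph(t, p) for t, p in itertools.product((t0, t1), (p0, p1))]
        r = max(np.arccos(np.clip(c @ q, -1.0, 1.0)) for q in corners)
        res = residuals(normals, c)
        # upper bound: residual could shrink by at most r anywhere in the box
        return int(np.sum(res <= tol + r)), int(np.sum(res <= tol)), c

    ub, lb, c = bounds(root)
    best_lb, best_dir = lb, c
    heap = [(-ub, root)]
    for _ in range(max_iter):
        if not heap:
            break
        neg_ub, box = heapq.heappop(heap)
        if -neg_ub <= best_lb:
            break  # no remaining box can beat the incumbent
        t0, t1, p0, p1 = box
        tm, pm = (t0 + t1) / 2, (p0 + p1) / 2
        for child in [(t0, tm, p0, pm), (t0, tm, pm, p1),
                      (tm, t1, p0, pm), (tm, t1, pm, p1)]:
            ub, lb, c = bounds(child)
            if lb > best_lb:
                best_lb, best_dir = lb, c
            if ub > best_lb:
                heapq.heappush(heap, (-ub, child))
    return best_dir, best_lb
```

The paper's contribution lies in tighter bounds than the crude corner-radius slack used above, which is what keeps the search tractable without knowing the number of horizontal frames.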
6. Yu Y, Guan B, Sun X, Li Z, Fraundorfer F. Rotation alignment of a camera-IMU system using a single affine correspondence. Applied Optics 2021; 60:7455-7465. [PMID: 34613035] [DOI: 10.1364/ao.431909]
Abstract
We propose an accurate and easy-to-implement method for rotation alignment of a camera-inertial measurement unit (IMU) system using only a single affine correspondence in the minimal case. The known initial rotation angles between the camera and the IMU are utilized; thus, the alignment model can be formulated as a polynomial equation system based on homography constraints by expressing the rotation matrix in a first-order approximation. Solving this equation system recovers the rotation alignment parameters. Furthermore, more accurate alignment can be achieved through joint optimization over multiple stereo image pairs. The proposed method requires neither additional auxiliary equipment nor particular camera motion. Experimental results on synthetic data and two real-world datasets demonstrate that our method is efficient and precise for camera-IMU rotation alignment.
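The first-order rotation approximation mentioned here is the standard small-angle linearization R ≈ I + [w]×, which is what turns the homography constraints into polynomials in the unknown correction angles. A quick numerical check of the approximation quality (the angle values are illustrative, not from the paper):

```python
import numpy as np

def skew(w):
    """Skew-symmetric cross-product matrix [w]x."""
    wx, wy, wz = w
    return np.array([[0.0, -wz, wy],
                     [wz, 0.0, -wx],
                     [-wy, wx, 0.0]])

def rodrigues(w):
    """Exact rotation from an axis-angle vector w (Rodrigues' formula)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = skew(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

w = np.array([0.02, -0.01, 0.015])   # small correction angles (rad)
R_exact = rodrigues(w)
R_lin = np.eye(3) + skew(w)          # first-order model: R ≈ I + [w]x
err = np.abs(R_exact - R_lin).max()  # error is second-order in |w|
```

Because the residual of the linearization is O(|w|²), the approximation is tight precisely in the setting the abstract describes: known initial angles with only small unknown corrections.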
7. Vision-guided fine-operation of robot and its application in eight-puzzle game. International Journal of Intelligent Robotics and Applications 2021. [DOI: 10.1007/s41315-021-00186-z]
8. Li W, Fan J, Li S, Tian Z, Zheng Z, Ai D, Song H, Yang J. Calibrating 3D Scanner in the Coordinate System of Optical Tracker for Image-To-Patient Registration. Front Neurorobot 2021; 15:636772. [PMID: 34054454] [PMCID: PMC8160243] [DOI: 10.3389/fnbot.2021.636772]
Abstract
Three-dimensional scanners have been widely applied in image-guided surgery (IGS) given their potential to solve the image-to-patient registration problem. A reliable calibration between the 3D scanner and an external tracker is especially important for these applications. This study proposes a novel method for calibrating the extrinsic parameters of a 3D scanner in the coordinate system of an optical tracker. We attached an optical marker to the 3D scanner and designed a dedicated 3D benchmark for calibration. We then proposed a two-step calibration method, based on point-set registration and nonlinear optimization, to obtain the extrinsic matrix of the 3D scanner, using the repeat scan registration error (RSRE) as the cost function in the optimization. We evaluated the method on a recaptured verification dataset through RSRE and Chamfer distance (CD). Compared with calibration based on a 2D checkerboard, the proposed method achieved a lower RSRE (1.73 mm vs. 2.10, 1.94, and 1.83 mm) and CD (2.83 mm vs. 3.98, 3.46, and 3.17 mm). We also constructed a surgical navigation system to further explore the application of the tracked 3D scanner in image-to-patient registration, and conducted a phantom study to verify the accuracy of the proposed method and analyze the relationship between calibration accuracy and target registration error (TRE). The proposed scanner-based image-to-patient registration was also compared with fiducial-based registration using TRE and operation time (OT). The proposed method improved registration efficiency (50.72 ± 6.04 vs. 212.97 ± 15.91 s in the head phantom study). Although its TRE met clinical requirements, its accuracy was lower than that of fiducial-based registration (1.79 ± 0.17 mm vs. 0.92 ± 0.16 mm in the head phantom study). We summarize the limitations of the scanner-based image-to-patient registration method and discuss its possible development.
Affiliations:
- Wenjie Li, Jingfan Fan, Shaowen Li, Zhao Zheng, Danni Ai, Jian Yang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Zhaorui Tian: Ariemedi Medical Technology (Beijing) Co., Ltd., Beijing, China
- Hong Song: School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
9. Tian FP, Feng W, Zhang Q, Wang X, Sun J, Loia V, Liu ZQ. Active Camera Relocalization from a Single Reference Image without Hand-Eye Calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence 2019; 41:2791-2806. [PMID: 31689178] [DOI: 10.1109/tpami.2018.2870646]
Abstract
This paper studies active relocalization of the 6D camera pose from a single reference image, a new and challenging problem in computer vision and robotics. Straightforward active camera relocalization (ACR) is a tricky and expensive task that requires elaborate hand-eye calibration on precision robotic platforms. In this paper, we show that high-quality camera relocalization can be achieved in an active and much easier way. We propose a hand-eye calibration-free approach to actively relocating the camera to the same 6D pose that produced the input reference image. We theoretically prove that, given bounded unknown hand-eye pose displacement, this approach rapidly reduces the relative rotation and translation between the current camera and the reference one to the identity matrix and a zero vector, respectively. Based on these findings, we develop an effective ACR algorithm with a fast convergence rate, reliable accuracy, and robustness. Extensive experiments validate the effectiveness and feasibility of our approach in both laboratory tests and challenging real-world applications in fine-grained change monitoring of cultural heritage sites.
10. Zhao Z. Simultaneous robot-world and hand-eye calibration by the alternative linear programming. Pattern Recognit Lett 2019. [DOI: 10.1016/j.patrec.2018.08.023]
11. Zhang Y, Qiu Z, Zhang X. Calibration method for hand-eye system with rotation and translation couplings. Applied Optics 2019; 58:5375-5387. [PMID: 31504005] [DOI: 10.1364/ao.58.005375]
Abstract
This paper develops a novel hand-eye calibration method for hand-eye systems with rotation and translation coupling terms. First, a nonlinear camera model with distortion terms and a model of a hand-eye system with rotation and translation coupling terms are established. Based on a non-linear optimization method and a reverse projection method, a decoupling calibration method for a lower-degree-of-freedom hand-eye system is proposed. Then the path planning for the calibration process is carried out. Based on the analysis of coupling constraints and hand-eye system motion constraints, three types of hand-eye calibration paths with high efficiency and easy operation are developed. In addition, the influence of key parameters on hand-eye calibration accuracy is analyzed. Finally, calibration experiments and parametric influence experiments are carried out. The results demonstrate that the proposed method is effective and practical for calibrating the hand-eye system.
12
|
Ali I, Suominen O, Gotchev A, Morales ER. Methods for Simultaneous Robot-World-Hand-Eye Calibration: A Comparative Study. SENSORS 2019; 19:s19122837. [PMID: 31242714 PMCID: PMC6631330 DOI: 10.3390/s19122837] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/01/2019] [Revised: 06/19/2019] [Accepted: 06/21/2019] [Indexed: 11/16/2022]
Abstract
In this paper, we propose two novel methods for robot-world-hand-eye calibration and provide a comparative analysis against six state-of-the-art methods. We examine the calibration problem from two alternative geometrical interpretations, called "hand-eye" and "robot-world-hand-eye," respectively. The study analyses the effect of specifying the objective function as a pose-error or a reprojection-error minimization problem. We provide three real and three simulated datasets with rendered images as part of the study. In addition, we propose a robotic-arm error modeling approach to be used with the simulated datasets for generating realistic responses. The tests on simulated data are performed both in ideal cases and with pseudo-realistic robotic-arm pose and visual noise. Our methods show significant improvement and robustness on many metrics in various scenarios compared to state-of-the-art methods.
Affiliations:
- Ihtisham Ali, Olli Suominen, Atanas Gotchev: Faculty of Information Technology and Communication, Tampere University, 33720 Tampere, Finland
- Emilio Ruiz Morales: Fusion for Energy (F4E), ITER Delivery Department, Remote Handling Project Team, 08019 Barcelona, Spain
13. Neurofuzzy c-Means Network-Based SCARA Robot for Head Gimbal Assembly (HGA) Circuit Inspection. Computational Intelligence and Neuroscience 2018; 2018:4952389. [PMID: 30627142] [PMCID: PMC6305037] [DOI: 10.1155/2018/4952389]
Abstract
Decision and control of a SCARA robot in an HGA (head gimbal assembly) inspection line is a very challenging issue in hard disk drive (HDD) manufacturing. The HGA circuit, called the slider FOS, is the part of the HDD used for reading and writing data on the disk, and it has very small dimensions (45 × 64 µm). Accuracy plays an important role in this inspection, and classification of defects is crucial for assigning the action of the SCARA robot: the robot moves inspected parts into corresponding boxes divided into five groups, namely "Good," "Bridging," "Missing," "Burn," and "No connection." A general image processing technique, blob analysis, combined with neurofuzzy c-means (NFC) clustering and a branch-and-bound (BNB) search for the best structure among all candidates, was proposed to increase the performance of the entire robotic system. Results from three clustering techniques (K-means, the Kohonen network, and neurofuzzy c-means) were compared to show the effectiveness of the proposed algorithm. Training results from the 30x microscope inspection with 300 samples show that the best clustering accuracy, 99.67%, is achieved by NFC clustering with the following features: area, moment of inertia, and perimeter; the testing results show 92.21% accuracy for the conventional Kohonen network. The results exhibit the improvement in clustering when the neural network is applied, and represent progress in applying neurorobotics to industry. This system has been implemented successfully in the HDD production line at Seagate Technology (Thailand) Co. Ltd.
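The fuzzy c-means core that NFC builds on alternates between membership and centre updates; the neural and BNB components of the paper are omitted here. A plain numpy sketch (fuzziness m = 2 assumed, names ours):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: alternate membership and centre updates.

    X: (N, d) samples; c: number of clusters; m: fuzziness exponent.
    Returns (centers, U) where U[i, k] is the membership of sample k in cluster i.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                          # memberships sum to 1 per sample
    for _ in range(iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.maximum(d, 1e-12)                # avoid division by zero
        U = d ** (-2.0 / (m - 1.0))             # standard FCM membership update
        U /= U.sum(axis=0)
    return centers, U
```

Hard labels for routing parts into bins would then come from `U.argmax(axis=0)`, with the soft memberships available as a confidence signal.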
14
|
Li W, Dong M, Lu N, Lou X, Sun P. Simultaneous Robot⁻World and Hand⁻Eye Calibration without a Calibration Object. SENSORS 2018; 18:s18113949. [PMID: 30445680 PMCID: PMC6263626 DOI: 10.3390/s18113949] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/09/2018] [Revised: 10/29/2018] [Accepted: 11/05/2018] [Indexed: 12/03/2022]
Abstract
An extended robot–world and hand–eye calibration method is proposed in this paper to evaluate the transformation relationship between the camera and robot device. This approach could be performed for mobile or medical robotics applications, where precise, expensive, or unsterile calibration objects, or enough movement space, cannot be made available at the work site. Firstly, a mathematical model is established to formulate the robot-gripper-to-camera rigid transformation and robot-base-to-world rigid transformation using the Kronecker product. Subsequently, a sparse bundle adjustment is introduced for the optimization of robot–world and hand–eye calibration, as well as reconstruction results. Finally, a validation experiment including two kinds of real data sets is designed to demonstrate the effectiveness and accuracy of the proposed approach. The translation relative error of rigid transformation is less than 8/10,000 by a Denso robot in a movement range of 1.3 m × 1.3 m × 1.2 m. The distance measurement mean error after three-dimensional reconstruction is 0.13 mm.
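The Kronecker-product formulation rests on the identity vec(AXB) = (Bᵀ ⊗ A) vec(X) (column-major vec), which turns matrix pose equations of the robot-world/hand-eye kind into linear systems in the stacked unknowns. A numerical check on a generic matrix equation (synthetic data, not the paper's calibration pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
X_true = rng.standard_normal((3, 3))
C = A @ X_true @ B

# vec(A X B) = (B^T kron A) vec(X), using column-major ("F") vectorization
M = np.kron(B.T, A)
x = np.linalg.solve(M, C.flatten(order="F"))
X = x.reshape(3, 3, order="F")
```

In the calibration setting, stacking such linearized equations over many robot poses gives an overdetermined system solved by least squares before the bundle-adjustment refinement the abstract describes.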
Affiliations:
- Wei Li: Institute of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Mingli Dong, Xiaoping Lou: Key Laboratory of the Ministry of Education for Optoelectronic Measurement Technology and Instrument, Beijing Information Science and Technology University, Beijing 100192, China
- Naiguang Lu, Peng Sun: Institute of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China; Key Laboratory of the Ministry of Education for Optoelectronic Measurement Technology and Instrument, Beijing Information Science and Technology University, Beijing 100192, China
15. Stereo Camera Head-Eye Calibration Based on Minimum Variance Approach Using Surface Normal Vectors. Sensors 2018; 18:3706. [PMID: 30384481] [PMCID: PMC6263920] [DOI: 10.3390/s18113706]
Abstract
This paper presents a stereo camera-based head-eye calibration method that aims to find the globally optimal transformation between a robot’s head and its eye. This method is highly intuitive and simple, so it can be used in a vision system for humanoid robots without any complex procedures. To achieve this, we introduce an extended minimum variance approach for head-eye calibration using surface normal vectors instead of 3D point sets. The presented method considers both positional and orientational error variances between visual measurements and kinematic data in head-eye calibration. Experiments using both synthetic and real data show the accuracy and efficiency of the proposed method.
16
|
Luo X, Mori K, Peters TM. Advanced Endoscopic Navigation: Surgical Big Data, Methodology, and Applications. Annu Rev Biomed Eng 2018; 20:221-251. [PMID: 29505729 DOI: 10.1146/annurev-bioeng-062117-120917] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
Interventional endoscopy (e.g., bronchoscopy, colonoscopy, laparoscopy, cystoscopy) is a widely performed procedure that involves either diagnosis of suspicious lesions or guidance for minimally invasive surgery in a variety of organs within the body cavity. Endoscopy may also be used to guide the introduction of certain items (e.g., stents) into the body. Endoscopic navigation systems seek to integrate big data with multimodal information (e.g., computed tomography, magnetic resonance images, endoscopic video sequences, ultrasound images, external trackers) relative to the patient's anatomy, control the movement of medical endoscopes and surgical tools, and guide the surgeon's actions during endoscopic interventions. Nevertheless, it remains challenging to realize the next generation of context-aware navigated endoscopy. This review presents a broad survey of various aspects of endoscopic navigation, particularly with respect to the development of endoscopic navigation techniques. First, we investigate big data with multimodal information involved in endoscopic navigation. Next, we focus on numerous methodologies used for endoscopic navigation. We then review different endoscopic procedures in clinical applications. Finally, we discuss novel techniques and promising directions for the development of endoscopic navigation.
Affiliations:
- Xiongbiao Luo: Department of Computer Science, Fujian Key Laboratory of Computing and Sensing for Smart City, Xiamen University, Xiamen 361005, China
- Kensaku Mori: Department of Intelligent Systems, Graduate School of Informatics, Nagoya University, Nagoya 464-8601, Japan
- Terry M Peters: Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
17. A computationally efficient method for hand-eye calibration. Int J Comput Assist Radiol Surg 2017; 12:1775-1787. [PMID: 28726116] [PMCID: PMC5608875] [DOI: 10.1007/s11548-017-1646-x]
Abstract
Purpose: Surgical robots with cooperative control and semiautonomous features have shown increasing clinical potential, particularly for repetitive tasks under imaging and vision guidance. Effective performance of an autonomous task requires accurate hand-eye calibration so that the transformation between the robot coordinate frame and the camera coordinates is well defined. In practice, due to changes of surgical instruments, online hand-eye calibration must be performed regularly. To ensure seamless execution of the surgical procedure without affecting the normal surgical workflow, fast and efficient hand-eye calibration methods are needed.
Methods: We present a computationally efficient iterative method for hand-eye calibration. A dual quaternion is introduced to represent the rigid transformation, and a two-step iterative method is proposed to recover the real and dual parts of the dual quaternion simultaneously, and thus the rotation and translation of the transformation.
Results: The proposed method was applied to determine the rigid transformation between a stereo laparoscope and a robot manipulator. Experimental and simulation results show that convergence improves to 3 iterations from more than 30 for a standard optimization method, illustrating the effectiveness and efficiency of the proposed approach.
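The dual-quaternion representation referred to here packs a rigid transform into a pair (q_r, q_d) with q_d = ½ t q_r, so transforms compose with a single multiplication and rotation/translation stay coupled. A minimal sketch of the representation itself (not the paper's two-step iteration; quaternions as [w, x, y, z]):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of two quaternions [w, x, y, z]."""
    pw, pv = p[0], p[1:]
    qw, qv = q[0], q[1:]
    return np.concatenate(([pw * qw - pv @ qv],
                           pw * qv + qw * pv + np.cross(pv, qv)))

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def dq_from_rt(q_rot, t):
    """Dual quaternion (real, dual) for rotation q_rot followed by translation t."""
    qd = 0.5 * qmul(np.concatenate(([0.0], t)), q_rot)
    return q_rot, qd

def dq_translation(qr, qd):
    """Recover the translation: t = 2 * qd * conj(qr)."""
    return 2.0 * qmul(qd, qconj(qr))[1:]

def dq_mul(a, b):
    """Compose rigid transforms: (ar, ad) * (br, bd)."""
    return qmul(a[0], b[0]), qmul(a[0], b[1]) + qmul(a[1], b[0])
```

Hand-eye solvers exploit exactly this structure: the real part carries the rotation constraint and the dual part the coupled translation constraint, which is what lets both be estimated simultaneously.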