1
Zhou L, Wang R, Zhang L. Accurate Robot Arm Attitude Estimation Based on Multi-View Images and Super-Resolution Keypoint Detection Networks. Sensors (Basel) 2024; 24:305. [PMID: 38203167] [PMCID: PMC10781322] [DOI: 10.3390/s24010305]
Abstract
Robot arm monitoring is often required in intelligent industrial scenarios. A two-stage method for robot arm attitude estimation based on multi-view images is proposed. In the first stage, a super-resolution keypoint detection network (SRKDNet) is proposed. The SRKDNet incorporates a subpixel convolution module in the backbone neural network, which can output high-resolution heatmaps for keypoint detection without significantly increasing computational resource consumption. Efficient virtual- and real-data sampling and SRKDNet training methods are put forward: the SRKDNet is trained on generated virtual data and fine-tuned on real sample data. This decreases the time and manpower consumed in collecting data in real scenarios and achieves better generalization on real data. A coarse-to-fine dual-SRKDNet detection mechanism is proposed and verified, in which full-view and close-up SRKDNets first detect the keypoints and then refine the results. The keypoint detection accuracy, PCK@0.15, for the real robot arm reaches up to 96.07%. In the second stage, an equation system involving the camera imaging model, the robot arm kinematic model and keypoints with different confidence values is established to solve for the unknown rotation angles of the joints. The proposed confidence-based keypoint screening scheme makes full use of the information redundancy of the multi-view images to ensure attitude estimation accuracy. Experiments on a real UR10 robot arm under three views demonstrate an average joint-angle estimation error of 0.53 degrees, which is superior to that of the comparison methods.
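To make the subpixel-convolution idea concrete, the following is a minimal sketch of a pixel-shuffle heatmap head, assuming PyTorch; the channel counts, upscale factor, and class name are illustrative and not the published SRKDNet architecture.

```python
# Hedged sketch: a subpixel-convolution (pixel-shuffle) heatmap head.
# Channel counts and the upscale factor are illustrative only.
import torch
import torch.nn as nn

class SubpixelHeatmapHead(nn.Module):
    """Upscales backbone features into high-resolution keypoint heatmaps
    with nn.PixelShuffle instead of costly transposed convolutions."""
    def __init__(self, in_channels: int, num_keypoints: int, upscale: int = 4):
        super().__init__()
        # Emit num_keypoints * upscale^2 channels, then rearrange them
        # into an (upscale x)-larger spatial grid per keypoint.
        self.conv = nn.Conv2d(in_channels, num_keypoints * upscale ** 2,
                              kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(upscale)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.shuffle(self.conv(feats))

# 256-channel 64x64 backbone features -> 16 keypoint heatmaps at 256x256.
head = SubpixelHeatmapHead(in_channels=256, num_keypoints=16, upscale=4)
print(head(torch.randn(1, 256, 64, 64)).shape)  # torch.Size([1, 16, 256, 256])
```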
Affiliation(s)
- Liyan Zhang
- College of Mechanical & Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
2
Bobrow TL, Golhar M, Vijayan R, Akshintala VS, Garcia JR, Durr NJ. Colonoscopy 3D video dataset with paired depth from 2D-3D registration. Med Image Anal 2023; 90:102956. [PMID: 37713764] [PMCID: PMC10591895] [DOI: 10.1016/j.media.2023.102956]
Abstract
Screening colonoscopy is an important clinical application for several 3D computer vision techniques, including depth estimation, surface reconstruction, and missing region detection. However, the development, evaluation, and comparison of these techniques in real colonoscopy videos remain largely qualitative due to the difficulty of acquiring ground truth data. In this work, we present a Colonoscopy 3D Video Dataset (C3VD) acquired with a high-definition clinical colonoscope and high-fidelity colon models for benchmarking computer vision methods in colonoscopy. We introduce a novel multimodal 2D-3D registration technique to register optical video sequences with ground truth rendered views of a known 3D model. The different modalities are registered by transforming optical images to depth maps with a Generative Adversarial Network and aligning edge features with an evolutionary optimizer. This registration method achieves an average translation error of 0.321 millimeters and an average rotation error of 0.159 degrees in simulation experiments where error-free ground truth is available. The method also leverages video information, improving registration accuracy by 55.6% for translation and 60.4% for rotation compared to single-frame registration. Twenty-two short video sequences were registered to generate 10,015 total frames with paired ground truth depth, surface normals, optical flow, occlusion, six-degree-of-freedom pose, coverage maps, and 3D models. The dataset also includes screening videos acquired by a gastroenterologist with paired ground truth pose and 3D surface models. The dataset and registration source code are available at https://durr.jhu.edu/C3VD.
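The edge-alignment step can be illustrated with a deliberately simplified toy. The paper optimizes a 6-DoF pose against rendered views; the sketch below, assuming NumPy and SciPy, only aligns a 2D edge map under translation and rotation with an evolutionary optimizer, and every shape and parameter in it is invented.

```python
# Toy edge alignment with an evolutionary optimizer (2D stand-in for the
# paper's 6-DoF registration). Cost = mean distance from warped moving-edge
# pixels to the nearest fixed-edge pixel, via a distance transform.
import numpy as np
from scipy.ndimage import distance_transform_edt, rotate, shift
from scipy.optimize import differential_evolution

def edge_cost(params, moving, dist_to_fixed):
    dx, dy, theta = params
    warped = rotate(moving, np.degrees(theta), reshape=False, order=1)
    warped = shift(warped, (dy, dx), order=1)
    return (warped * dist_to_fixed).sum() / (warped.sum() + 1e-9)

fixed = np.zeros((128, 128), bool)
fixed[40:90, 60] = True          # an "L"-shaped synthetic edge
fixed[40, 60:100] = True
moving = shift(rotate(fixed.astype(float), 5, reshape=False, order=1),
               (6, -4), order=1)

dist = distance_transform_edt(~fixed)   # distance to nearest fixed edge
res = differential_evolution(edge_cost,
                             bounds=[(-15, 15), (-15, 15), (-0.3, 0.3)],
                             args=(moving, dist), seed=0)
print("recovered (dx, dy, theta):", res.x)  # approx. undoes the +5 deg, (6, -4) warp
```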
Affiliation(s)
- Taylor L Bobrow
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Mayank Golhar
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Rohan Vijayan
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Venkata S Akshintala
- Division of Gastroenterology and Hepatology, Johns Hopkins Medicine, Baltimore, MD 21287, USA
- Juan R Garcia
- Department of Art as Applied to Medicine, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Nicholas J Durr
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
3
Badilla-Solórzano J, Ihler S, Gellrich NC, Spalthoff S. Improving instrument detection for a robotic scrub nurse using multi-view voting. Int J Comput Assist Radiol Surg 2023; 18:1961-1968. [PMID: 37530904] [PMCID: PMC10589190] [DOI: 10.1007/s11548-023-03002-0]
Abstract
PURPOSE A basic task of a robotic scrub nurse is surgical instrument detection. Deep learning techniques could potentially address this task; nevertheless, their performance is subject to some degree of error, which could render them unsuitable for real-world applications. In this work, we aim to demonstrate how the combination of a trained instrument detector with an instance-based voting scheme that considers several frames and viewpoints is enough to guarantee a strong improvement in the instrument detection task. METHODS We exploit the typical setup of a robotic scrub nurse to collect RGB data and point clouds from different viewpoints. Using trained Mask R-CNN models, we obtain predictions from each view. We propose a multi-view voting scheme based on predicted instances that combines the gathered data and predictions to produce a reliable map of the location of the instruments in the scene. RESULTS Our approach reduces the number of errors by more than 82% compared with the single-view case. On average, the data from five viewpoints are sufficient to infer the correct instrument arrangement with our best model. CONCLUSION Our approach can drastically improve an instrument detector's performance. Our method is practical and can be applied during an actual medical procedure without negatively affecting the surgical workflow. Our implementation and data are made available for the scientific community ( https://github.com/Jorebs/Multi-view-Voting-Scheme ).
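The voting step can be sketched as follows, assuming NumPy: per-view instance labels (e.g., back-projected from Mask R-CNN masks onto the point cloud) are binned into a shared voxel grid and resolved by confidence-weighted majority vote. Function, label, and parameter names are hypothetical, not the authors' implementation.

```python
# Hedged sketch of instance-based multi-view voting over a voxel grid.
from collections import Counter, defaultdict
import numpy as np

def multi_view_vote(detections, voxel=0.02):
    """detections: per view, a tuple (points Nx3, labels len-N, scores len-N)."""
    votes = defaultdict(Counter)
    for points, labels, scores in detections:
        keys = np.floor(points / voxel).astype(int)
        for key, label, score in zip(map(tuple, keys), labels, scores):
            votes[key][label] += float(score)      # confidence-weighted vote
    # Majority label per occupied voxel.
    return {key: c.most_common(1)[0][0] for key, c in votes.items()}

# Two views say "scalpel", one low-confidence view says "forceps":
views = [
    (np.array([[0.10, 0.20, 0.0]]), ["scalpel"], [0.9]),
    (np.array([[0.11, 0.20, 0.0]]), ["forceps"], [0.4]),
    (np.array([[0.10, 0.21, 0.0]]), ["scalpel"], [0.8]),
]
print(multi_view_vote(views))   # the vote settles on "scalpel"
```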
Affiliation(s)
- Sontje Ihler
- Institute of Mechatronic Systems, Leibniz University Hannover, Garbsen, Germany
- Simon Spalthoff
- Department of Cranio-Maxillofacial Surgery, Hannover Medical School, Hannover, Germany
4
Zhou J, Ji Z, Li Y, Liu X, Yao W, Qin Y. High-Precision Calibration of a Monocular-Vision-Guided Handheld Line-Structured-Light Measurement System. Sensors (Basel) 2023; 23:6469. [PMID: 37514761] [PMCID: PMC10385695] [DOI: 10.3390/s23146469]
Abstract
Due to the advantages of simple construction, easy application and good environmental suitability, handheld structured-light measurement systems have broad application prospects in 3D measurement. Here, a monocular-vision-guided line-structured-light measurement system is developed, in which the posture of the handheld device is obtained via a specifically designed target attached to it; no marker points need to be attached to the object under inspection. The key to the system calibration is obtaining the coordinate transformation matrix from the sensor to the featured-target coordinate system. The mathematical model of the system is first established. Then, an improved multi-view calibration method is proposed, in which image pairs are selected to improve accuracy. With this method, the maximum relative error of the measured stair heights is reduced from 0.48% to 0.16%. Measurement results for specific parts further verify the effectiveness of the proposed system and calibration method.
Affiliation(s)
- Jingbo Zhou
- School of Mechanical Engineering, Hebei University of Science and Technology, Shijiazhuang 050018, China
- Zhaohui Ji
- School of Mechanical Engineering, Hebei University of Science and Technology, Shijiazhuang 050018, China
- Yuehua Li
- School of Mechanical Engineering, Hebei University of Science and Technology, Shijiazhuang 050018, China
- Xiaohong Liu
- School of Mechanical Engineering, Hebei University of Science and Technology, Shijiazhuang 050018, China
- Wenhao Yao
- Tangshan Yinglai Technology Co., Ltd., Tangshan 063000, China
- Yafang Qin
- School of Mechanical Engineering, Hebei University of Science and Technology, Shijiazhuang 050018, China
5
Zhou K, Huang X, Li S, Li G. Convolutional neural network-based pose mapping estimation as an alternative to traditional hand-eye calibration. Rev Sci Instrum 2023; 94:065002. [PMID: 37862475] [DOI: 10.1063/5.0147783]
Abstract
The vision system is a crucial technology for realizing the automation and intelligence of industrial robots, and the accuracy of hand-eye calibration is crucial in determining the relationship between the camera and the robot end. Parallel robots are widely used in automated assembly due to their high positioning accuracy and large carrying capacity, but traditional hand-eye calibration methods may not be applicable because of their limited motion range and the resulting accuracy problems. To address this issue, we propose a pose nonlinear-mapping estimation method to solve the hand-eye calibration problem and construct a 1-D pose-estimation convolutional neural network (PECNN) whose strong performance is demonstrated through experiments and discussion. The PECNN achieves an end-to-end mapping from the variation of the target object pose to the variation of the robot end pose. Our experiments show that the proposed hand-eye calibration method has high accuracy and can be applied to the automated assembly tasks of vision-guided parallel robots. Moreover, the method is also applicable to most parallel and serial robots.
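As an illustration only (the published PECNN architecture is not reproduced in the abstract), a 1-D CNN that regresses the robot end-pose variation from the target-object pose variation could be sketched as follows, assuming PyTorch and 6-vector poses (x, y, z, rx, ry, rz):

```python
# Hypothetical 1-D CNN for end-to-end pose-variation mapping.
import torch
import torch.nn as nn

class PoseMappingCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 6, 6),   # regress the 6-DoF end-pose variation
        )

    def forward(self, delta_object_pose: torch.Tensor) -> torch.Tensor:
        # (batch, 6) -> add a channel dimension for Conv1d
        return self.net(delta_object_pose.unsqueeze(1))

model = PoseMappingCNN()
pred = model(torch.randn(8, 6))                        # 8 sample variations
loss = nn.functional.mse_loss(pred, torch.randn(8, 6))
loss.backward()                                        # trainable end to end
```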
Affiliation(s)
- Kuai Zhou
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Xiang Huang
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Shuanggao Li
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Gen Li
- Suzhou Research Institute, Nanjing University of Aeronautics and Astronautics, Suzhou, China
6
Li X, Xiao Y, Wang B, Ren H, Zhang Y, Ji J. Automatic targetless LiDAR–camera calibration: a survey. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10317-y]
7
Wang J, Ye S, Shi S, New TH. Hand-eye calibration for an unfocused light-field camera. J Opt Soc Am A Opt Image Sci Vis 2022; 39:1946-1957. [PMID: 36520690] [DOI: 10.1364/josaa.469703]
Abstract
A calibration framework is established for an unfocused light-field camera and a robotic arm. Using Gaussian optics and light-field imaging principles, the mapping relationship between a point light source and its corresponding plenoptic disc feature is established, and the intrinsic and extrinsic parameters of the unfocused light-field camera are calculated through nonlinear optimization. Transformation matrices for the eye-to-hand and eye-in-hand configurations are subsequently solved and validated by applying them to an industrial light-field camera-robotic arm system. With the proposed calibration method, 3D reconstruction of a calibration board in different poses is demonstrated and the calibration uncertainty is discussed in detail.
8
Enebuse I, Ibrahim BKSMK, Foo M, Matharu RS, Ahmed H. Accuracy evaluation of hand-eye calibration techniques for vision-guided robots. PLoS One 2022; 17:e0273261. [PMID: 36260640] [PMCID: PMC9581431] [DOI: 10.1371/journal.pone.0273261]
Abstract
Hand-eye calibration is an important step in controlling a vision-guided robot in applications such as part assembly, bin picking and inspection operations. Many methods for estimating hand-eye transformations have been proposed in the literature, with varying degrees of complexity and accuracy. However, the success of a vision-guided application is highly impacted by the accuracy of the hand-eye calibration between the vision system and the robot. The level of this accuracy depends on several factors, such as the rotation and translation noise and the rotation and translation motion ranges used during calibration. Previous studies and benchmarking of the proposed algorithms have largely focused on the combined effect of rotation and translation noise. This study provides insight into the impact of rotation and translation noise acting in isolation on hand-eye calibration accuracy, deviating from the most common method of assessing accuracy based on pose noise (combined rotation and translation noise). We also evaluated the impact of the robot motion range used during the hand-eye calibration operation, which is rarely considered. We provide a quantitative evaluation using six commonly used algorithms from an implementation perspective, and comparatively analyse their performance through simulation case studies and experimental validation on a physical Universal Robots UR5e. Our results show that the algorithms respond differently as the noise conditions vary rather than following a general trend. For example, the simultaneous methods are more resistant to rotation noise, whereas the separate methods are better at dealing with translation noise. Additionally, while increasing the robot rotation motion span during calibration enhances the accuracy of the separate methods, it has a negative effect on the simultaneous methods. Conversely, increasing the translation motion range improves the accuracy of the simultaneous methods but degrades the accuracy of the separate methods. These findings suggest that these conditions should be considered when benchmarking algorithms or performing a calibration process for enhanced accuracy.
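The noise-isolation protocol can be sketched as follows, assuming NumPy and SciPy: synthesize motion pairs (A_i, B_i) that satisfy A_iX = XB_i exactly, then perturb rotation and translation independently so each noise source can be benchmarked in isolation. Helper names are hypothetical; any AX = XB solver can consume the output.

```python
# Hedged sketch: generate hand-eye motion pairs with rotation-only or
# translation-only noise injected into B.
import numpy as np
from scipy.spatial.transform import Rotation as R

def rand_se3(rng, trans_scale=0.5):
    T = np.eye(4)
    T[:3, :3] = R.random(random_state=rng).as_matrix()
    T[:3, 3] = rng.uniform(-trans_scale, trans_scale, 3)
    return T

def make_pair(X, rng, rot_noise_deg=0.0, trans_noise=0.0):
    A = rand_se3(rng)                       # robot hand motion
    B = np.linalg.inv(X) @ A @ X            # exact camera motion: AX = XB
    # Inject the two noise types separately.
    n = R.from_rotvec(np.radians(rot_noise_deg) * rng.standard_normal(3))
    B[:3, :3] = n.as_matrix() @ B[:3, :3]
    B[:3, 3] += trans_noise * rng.standard_normal(3)
    return A, B

rng = np.random.default_rng(42)
X = rand_se3(rng)                           # ground-truth hand-eye transform
rot_only = [make_pair(X, rng, rot_noise_deg=0.5) for _ in range(20)]
trans_only = [make_pair(X, rng, trans_noise=0.001) for _ in range(20)]
```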
Affiliation(s)
- Ikenna Enebuse
- Centre for Future Transport and Cities, Coventry University, Coventry, United Kingdom
- Mathias Foo
- School of Engineering, University of Warwick, Coventry, United Kingdom
- Ranveer S. Matharu
- Centre for Future Transport and Cities, Coventry University, Coventry, United Kingdom
- Hafiz Ahmed
- Nuclear Futures Institute, Bangor University, Bangor, United Kingdom
9
Halim J, Eichler P, Krusche S, Bdiwi M, Ihlenfeldt S. No-code robotic programming for agile production: A new markerless-approach for multimodal natural interaction in a human-robot collaboration context. Front Robot AI 2022; 9:1001955. [PMID: 36274910] [PMCID: PMC9583918] [DOI: 10.3389/frobt.2022.1001955]
Abstract
Industrial robots and cobots are widely deployed in most industrial sectors. However, robotic programming still requires considerable time and effort for small batch sizes, and it demands specific expertise and special training, especially when various robotic platforms are involved. Existing low-code or no-code robotic programming solutions are expensive and scarce. This work proposes a novel approach to no-code robotic programming for end-users with little or no expertise in industrial robotics. The proposed method ensures intuitive and fast robotic programming by utilizing a finite state machine with three layers of natural interactions based on hand gestures, finger gestures, and voice recognition. The implemented system combines intelligent computer vision and voice control capabilities. Using the vision system, the human can transfer the spatial information of 3D points, lines, and trajectories using hand and finger gestures, while the voice recognition system assists the user in parametrizing robot parameters and interacting with the robot's state machine. Furthermore, the proposed method is validated and compared with state-of-the-art "hand-guiding" cobot devices in real-world experiments. The results obtained are promising and indicate the capability of this novel approach for real-world deployment in an industrial context.
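A minimal sketch of the three-layer interaction idea, assuming a table-driven finite state machine in Python; all state and event names are invented for illustration:

```python
# Hypothetical FSM driven by hand-gesture, finger-gesture and voice events.
TRANSITIONS = {
    ("IDLE",         ("hand",   "point")):     "DEFINE_POINT",
    ("DEFINE_POINT", ("finger", "pinch")):     "RECORD_POINT",
    ("RECORD_POINT", ("voice",  "next")):      "DEFINE_POINT",
    ("RECORD_POINT", ("voice",  "set speed")): "PARAMETRIZE",
    ("PARAMETRIZE",  ("voice",  "run")):       "EXECUTE",
}

def step(state, event):
    """Advance the FSM; unrecognized events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "IDLE"
for event in [("hand", "point"), ("finger", "pinch"),
              ("voice", "run"),            # ignored in RECORD_POINT
              ("voice", "set speed"), ("voice", "run")]:
    state = step(state, event)
    print(event, "->", state)
```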
Affiliation(s)
- Jayanto Halim
- Department of Cognitive Human-Machine System, Fraunhofer Institute for Machine Tools and Forming Technology, Chemnitz, Germany
- Paul Eichler
- Department of Cognitive Human-Machine System, Fraunhofer Institute for Machine Tools and Forming Technology, Chemnitz, Germany
- Sebastian Krusche
- Department of Cognitive Human-Machine System, Fraunhofer Institute for Machine Tools and Forming Technology, Chemnitz, Germany
- Mohamad Bdiwi
- Department of Cognitive Human-Machine System, Fraunhofer Institute for Machine Tools and Forming Technology, Chemnitz, Germany
- Steffen Ihlenfeldt
- Department of Production System and Factory Automation, Fraunhofer Institute for Machine Tools and Forming Technology, Chemnitz, Germany
10
Significance of Camera Pixel Error in the Calibration Process of a Robotic Vision System. Appl Sci (Basel) 2022; 12:6406. [DOI: 10.3390/app12136406]
Abstract
Although robotic vision systems offer a promising technology for rapid and reconfigurable in-process 3D inspection of complex and large parts in contemporary manufacturing, measurement accuracy poses a challenge for their wide deployment. One of the key issues in adopting a robotic vision system is understanding the extent of its measurement errors, which are directly correlated with the calibration process. In this paper, possible sources of practical and inherent measurement uncertainty involved in the calibration process of a robotic vision system are discussed. The system considered in this work consists of an image sensor mounted on an industrial robot manipulator with six degrees of freedom. Based on a series of experimental tests and computer simulations, the paper gives a comprehensive performance comparison of different calibration approaches and shows the impact of measurement uncertainties on the calibration process. The error sensitivity analysis shows that minor uncertainties in the calibration process can significantly affect the accuracy of the robotic vision system. Further investigation suggests that errors in the image calibration patterns have a more adverse effect on the hand–eye calibration process than angular errors in the robot joints.
11
Maiseli BJ. Optimization of chamfer masks using Farey sequences and kernel dimensionality. Sci Rep 2022; 12:7639. [PMID: 35538162] [PMCID: PMC9090818] [DOI: 10.1038/s41598-022-11807-3]
Abstract
Farey sequences have captured the attention of several researchers because of their wide applications in polygonal approximation, generation of Ford circles, and shape analysis. In this work, we extend the applications of these sequences to optimize chamfer masks for computation of distance maps in images. Compared with previous methods, the proposed method can more effectively generate optimal weights from larger chamfer masks without considering multiple and rather complex defining variables of the masks. Furthermore, our work demonstrates the relationship between size of the chamfer kernel, Farey sequence, and optimal weights of the chamfer mask. This interesting relationship, which may be useful in various image processing and computer vision tasks, has never been revealed by any other previous study. Results from the current research may advance our understanding on the applications of Farey sequences in computational geometry and vision-related tasks. To allow reproducibility of the results, implementation codes and datasets can be accessed in the public repository at https://www.mathworks.com/matlabcentral/fileexchange/71652-optimization-of-chamfer-masks.
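For reference, the Farey sequence F_n itself can be generated with the classic next-term recurrence: given consecutive terms a/b < c/d, the next term is (kc - a)/(kd - b) with k = floor((n + b)/d). A short sketch of that standard algorithm (not the paper's optimization code):

```python
# Generate the Farey sequence F_n of irreducible fractions in [0, 1].
from fractions import Fraction

def farey(n):
    a, b, c, d = 0, 1, 1, n          # first two terms: 0/1 and 1/n
    seq = [Fraction(a, b)]
    while c <= n:
        seq.append(Fraction(c, d))
        k = (n + b) // d             # k = floor((n + b) / d)
        a, b, c, d = c, d, k * c - a, k * d - b
    return seq

print([str(f) for f in farey(5)])
# ['0', '1/5', '1/4', '1/3', '2/5', '1/2', '3/5', '2/3', '3/4', '4/5', '1']
```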
Affiliation(s)
- Baraka Jacob Maiseli
- Department of Electronics & Telecommunications Engineering, College of Information & Communication Technologies, University of Dar es Salaam, 14113, Dar es Salaam, Tanzania.
12
Wu J, Liu M, Huang Y, Jin C, Wu Y, Yu C. SE(n)++: An Efficient Solution to Multiple Pose Estimation Problems. IEEE Trans Cybern 2022; 52:3829-3840. [PMID: 32877345] [DOI: 10.1109/tcyb.2020.3015039]
Abstract
In robotic applications, many pose problems involve solving the homogeneous transformation based on the special Euclidean group SE(n). However, due to the nonconvexity of SE(n), many of these solvers treat rotation and translation separately, and the computational efficiency is still unsatisfactory. A new technique called SE(n)++ is proposed in this article that exploits a novel mapping from SE(n) to SO(n+1). The mapping transforms the coupling between rotation and translation into a unified formulation on the Lie group and gives better analytical results and computational performance. Specifically, three major pose problems are considered in this article: point-cloud registration, hand-eye calibration, and SE(n) synchronization. Experimental validations on open datasets have confirmed the effectiveness of the proposed SE(n)++ method.
13
Xu K, Jiang B, Moghekar A, Kazanzides P, Boctor E. AutoInFocus, a new paradigm for ultrasound-guided spine intervention: a multi-platform validation study. Int J Comput Assist Radiol Surg 2022; 17:911-920. [PMID: 35334043] [DOI: 10.1007/s11548-022-02583-6]
Abstract
PURPOSE Ultrasound-guided spine interventions often suffer from the insufficient visualization of key anatomical structures due to the complex shapes of the self-shadowing vertebrae. Therefore, we propose an ultrasound imaging paradigm, AutoInFocus (automatic insonification optimization with controlled ultrasound), to improve the key structure visibility. METHODS A phased-array probe is used in conjunction with a motion platform to image a controlled workspace, and the resulting images from multiple insonification angles are combined to reveal the target anatomy. This idea is first evaluated in simulation and then realized as a robotic platform and a miniaturized patch device. A spine phantom (CIRS) and its CT scan were used in the evaluation experiments to quantitatively and qualitatively analyze the advantages of the proposed method over the traditional approach. RESULTS We showed in simulation that the proposed system setup increased the visibility of interspinous space boundary, a key feature for lumbar puncture guidance, from 44.13 to 67.73% on average, and the 3D spine surface coverage from 14.31 to 35.87%, compared to traditional imaging setup. We also demonstrated the feasibility of both robotic and patch-based realizations in a spine phantom study. CONCLUSION This work lays the foundation for a new imaging paradigm that leverages redundant and controlled insonification to allow for imaging optimization of the complex vertebrae anatomy, making it possible for high-quality visualization of key anatomies during ultrasound-guided spine interventions.
Affiliation(s)
- Keshuai Xu
- Department of Computer Science, Johns Hopkins University, Baltimore, 21218, MD, USA
- Baichuan Jiang
- Department of Computer Science, Johns Hopkins University, Baltimore, 21218, MD, USA
- Abhay Moghekar
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, 21205, MD, USA
- Peter Kazanzides
- Department of Computer Science, Johns Hopkins University, Baltimore, 21218, MD, USA
- Emad Boctor
- Department of Computer Science, Johns Hopkins University, Baltimore, 21218, MD, USA
14
Wang J, Yue C, Wang G, Gong Y, Li H, Yao W, Kuang S, Liu W, Wang J, Su B. Task Autonomous Medical Robot for Both Incision Stapling and Staples Removal. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3141452]
15
Sarabandi S, Porta JM, Thomas F. Hand-Eye Calibration Made Easy Through a Closed-Form Two-Stage Method. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3146943]
16
Abstract
A classic hand-eye system involves hand-eye calibration and robot-world and hand-eye calibration. Insofar as hand-eye calibration can solve only the hand-eye transformation, this study aims to determine the robot-world and hand-eye transformations simultaneously based on the robot-world and hand-eye equation. According to whether the rotation part and the translation part of the equation are decoupled, the methods can be divided into separable solutions and simultaneous solutions. The separable solutions solve the rotation part before the translation part, so the estimation errors of the rotation are transferred to the translation. In this study, a method is proposed for calculation with rotation and translation coupled; it involves a closed-form solution based on the Kronecker product and an iterative solution based on the Gauss–Newton algorithm. Feasibility was further tested using simulated and real data, and superiority was verified by comparison with the results obtained by an available method. Finally, we improve a method that solves the singularity problem caused by the parameterization of the rotation matrix, which can be widely used in robot-world and hand-eye calibration. The results show that the prediction errors of rotation and translation based on the proposed method can be reduced to 0.26° and 1.67 mm, respectively.
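A minimal sketch of a Kronecker-product closed form for AX = YB follows, in the spirit of the approach the abstract describes (the published algorithm's details may differ). It uses the identity vec(MNP) = (P^T ⊗ M) vec(N) to turn R_A R_X = R_Y R_B into a null-space problem, then recovers both translations by least squares; assumes NumPy and SciPy.

```python
# Hedged sketch: closed-form AX = YB via the Kronecker product.
import numpy as np
from scipy.spatial.transform import Rotation as Rot

def orthonormalize(M):
    """Project onto SO(3), fixing the null vector's unknown scale/sign."""
    U, _, Vt = np.linalg.svd(M)
    Rm = U @ Vt
    return Rm if np.linalg.det(Rm) > 0 else -Rm

def solve_axyb(As, Bs):
    # Rotations: vec(RA RX) = vec(RY RB) gives, per pair,
    #   (I3 kron RA) vec(RX) - (RB^T kron I3) vec(RY) = 0.
    rows = []
    for A, B in zip(As, Bs):
        rows.append(np.hstack([np.kron(np.eye(3), A[:3, :3]),
                               -np.kron(B[:3, :3].T, np.eye(3))]))
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    v = Vt[-1]                                   # null vector, up to scale
    RX = orthonormalize(v[:9].reshape(3, 3, order="F"))
    RY = orthonormalize(v[9:].reshape(3, 3, order="F"))
    # Translations: RA tX - tY = RY tB - tA, solved jointly in least squares.
    M = np.vstack([np.hstack([A[:3, :3], -np.eye(3)]) for A in As])
    rhs = np.hstack([RY @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t = np.linalg.lstsq(M, rhs, rcond=None)[0]
    X, Y = np.eye(4), np.eye(4)
    X[:3, :3], X[:3, 3] = RX, t[:3]
    Y[:3, :3], Y[:3, 3] = RY, t[3:]
    return X, Y

# Synthetic check with noise-free data.
rng = np.random.default_rng(0)
def rand_T():
    T = np.eye(4)
    T[:3, :3] = Rot.random(random_state=rng).as_matrix()
    T[:3, 3] = rng.uniform(-1, 1, 3)
    return T

X_true, Y_true = rand_T(), rand_T()
As = [rand_T() for _ in range(10)]
Bs = [np.linalg.inv(Y_true) @ A @ X_true for A in As]   # B = Y^-1 A X
X_est, Y_est = solve_axyb(As, Bs)
print(np.allclose(X_est, X_true, atol=1e-6),
      np.allclose(Y_est, Y_true, atol=1e-6))            # True True
```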
17
Pedrosa E, Oliveira M, Lau N, Santos V. A General Approach to Hand–Eye Calibration Through the Optimization of Atomic Transformations. IEEE Trans Robot 2021. [DOI: 10.1109/tro.2021.3062306]
18
In Situ Visualization for 3D Ultrasound-Guided Interventions with Augmented Reality Headset. Bioengineering (Basel) 2021; 8:131. [PMID: 34677204] [PMCID: PMC8533537] [DOI: 10.3390/bioengineering8100131]
Abstract
Augmented Reality (AR) headsets have become the most ergonomic and efficient visualization devices to support complex manual tasks performed under direct vision. Their ability to provide hands-free interaction with the augmented scene makes them perfect for manual procedures such as surgery. This study demonstrates the reliability of an AR head-mounted display (HMD), conceived for surgical guidance, in navigating in-depth high-precision manual tasks guided by a 3D ultrasound imaging system. The integration between the AR visualization system and the ultrasound imaging system provides the surgeon with real-time intra-operative information on unexposed soft tissues that are spatially registered with the surrounding anatomic structures. The efficacy of the AR guiding system was quantitatively assessed with an in vitro study simulating a biopsy intervention aimed at determining the level of accuracy achievable. In the experiments, 10 subjects were asked to perform the biopsy on four spherical lesions of decreasing sizes (10, 7, 5, and 3 mm). The experimental results showed that 80% of the subjects were able to successfully perform the biopsy on the 5 mm lesion, with a 2.5 mm system accuracy. The results confirmed that the proposed integrated system can be used for navigation during in-depth high-precision manual tasks.
19
Cottam DS, Campbell AC, Davey PC, Kent P, Elliott BC, Alderson JA. Functional calibration does not improve the concurrent validity of magneto-inertial wearable sensor-based thorax and lumbar angle measurements when compared with retro-reflective motion capture. Med Biol Eng Comput 2021; 59:2253-2262. [PMID: 34529184] [DOI: 10.1007/s11517-021-02440-9]
Abstract
Magneto-inertial measurement unit (MIMU) systems allow calculation of simple sensor-to-sensor Euler angles, though this process does not address sensor-to-segment alignment, which is important for deriving meaningful MIMU-based kinematics. Functional sensor-to-segment calibrations have improved concurrent validity for elbow and knee angle measurements but have not yet been comprehensively investigated for trunk or sport-specific movements. This study aimed to determine the influence of MIMU functional calibration on thorax and lumbar joint angles during uni-planar and multi-planar, sport-specific tasks. It was hypothesised that functionally calibrating segment axes prior to angle decomposition would produce smaller differences than a non-functional method when both approaches were compared with concurrently collected 3D retro-reflective derived angles. Movements of 10 fast-medium cricket bowlers were simultaneously recorded by MIMUs and retro-reflective motion capture. Joint angles derived from four different segment definitions were compared, with three incorporating functionally defined axes. Statistical parametric mapping and root mean squared differences (RMSD) quantified measurement differences one-dimensionally and zero-dimensionally, respectively. Statistical parametric mapping found no significant differences between MIMU and retro-reflective data for any method across bowling and uni-planar trunk movements. The RMSDs for the functionally calibrated methods and non-functional method were not significantly different. Functional segment calibration may be unnecessary for MIMU-based measurement of thorax and lumbar joint angles.
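The "simple sensor-to-sensor Euler angles" that the abstract contrasts with functionally calibrated angles can be sketched as follows, assuming each MIMU reports its orientation as a rotation matrix in a shared global frame; names and the angle sequence are illustrative.

```python
# Sensor-to-sensor Euler angles without any sensor-to-segment calibration.
import numpy as np
from scipy.spatial.transform import Rotation as R

def sensor_to_sensor_euler(R_upper, R_lower, seq="ZXY"):
    """Orientation of the upper sensor expressed in the lower sensor's
    frame, decomposed into Euler angles (degrees)."""
    rel = R_lower.T @ R_upper
    return R.from_matrix(rel).as_euler(seq, degrees=True)

R_thorax = R.from_euler("ZXY", [30, 5, -10], degrees=True).as_matrix()
R_lumbar = R.from_euler("ZXY", [10, 0, 0], degrees=True).as_matrix()
print(sensor_to_sensor_euler(R_thorax, R_lumbar))
```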
Affiliation(s)
- Daniel S Cottam
- Australian Institute of Sport, Leverrier St, Bruce, 2602, Australian Capital Territory, Australia
- Amity C Campbell
- School of Physiotherapy and Exercise Science, Curtin University, Kent St, Bentley, Western Australia, 6102, Australia
- Paul C Davey
- School of Physiotherapy and Exercise Science, Curtin University, Kent St, Bentley, Western Australia, 6102, Australia
- Peter Kent
- School of Physiotherapy and Exercise Science, Curtin University, Kent St, Bentley, Western Australia, 6102, Australia; Department of Sports Science and Clinical Biomechanics, University of Southern Denmark, Odense, Denmark
- Bruce C Elliott
- School of Human Sciences (Exercise and Sport Science), University of Western Australia, 35 Stirling Hwy, Crawley, Western Australia, 6009, Australia
- Jacqueline A Alderson
- School of Human Sciences (Exercise and Sport Science), University of Western Australia, 35 Stirling Hwy, Crawley, Western Australia, 6009, Australia; Minderoo Tech & Policy Lab (UWA Law School), University of Western Australia, 35 Stirling Hwy, Crawley, WA, 6009, Australia; Sports Performance Research Institute New Zealand (SPRINZ), Faculty of Health and Environmental Sciences, Auckland University of Technology, Auckland, New Zealand
20
Yu Y, Guan B, Sun X, Li Z, Fraundorfer F. Rotation alignment of a camera-IMU system using a single affine correspondence. Appl Opt 2021; 60:7455-7465. [PMID: 34613035] [DOI: 10.1364/ao.431909]
Abstract
We propose an accurate and easy-to-implement method for the rotation alignment of a camera-inertial measurement unit (IMU) system using only a single affine correspondence in the minimal case. The known initial rotation angles between the camera and IMU are utilized; thus, the alignment model can be formulated as a polynomial equation system based on homography constraints by expressing the rotation matrix in a first-order approximation. By solving the equation system, we can recover the rotation alignment parameters. Furthermore, more accurate alignment results can be achieved with the joint optimization of multiple stereo image pairs. The proposed method does not require additional auxiliary equipment or any particular camera motion. Experimental results on synthetic data and two real-world datasets demonstrate that our method is efficient and precise for camera-IMU rotation alignment.
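The first-order approximation the abstract refers to is the standard linearization R(θ) ≈ I + [θ]×, which makes the homography constraints polynomial (here linear) in the small correction angles θ. A quick numerical check of the approximation quality, assuming NumPy and SciPy:

```python
# First-order rotation approximation: R approx I + skew(theta) for small theta.
import numpy as np
from scipy.spatial.transform import Rotation as R

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

theta = np.array([0.01, -0.02, 0.015])        # small misalignment (rad)
R_exact = R.from_rotvec(theta).as_matrix()
R_linear = np.eye(3) + skew(theta)
print(np.abs(R_exact - R_linear).max())       # O(|theta|^2 / 2), here ~3e-4
```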
21
A Flexible Baseline Measuring System Based on Optics for Airborne DPOS. Sensors (Basel) 2021; 21:5333. [PMID: 34450775] [PMCID: PMC8398224] [DOI: 10.3390/s21165333]
Abstract
Three-dimensional imaging for multi-node interferometric synthetic aperture radar (InSAR) or multi-task imaging sensors has become the prevailing trend in aerial remote sensing, and it requires multi-node motion information to carry out motion compensation. A distributed position and orientation system (DPOS) can provide multi-node motion information for InSAR through transfer alignment. However, due to wing deformation, the relative spatial relationship between the nodes changes, which lowers the accuracy of the transfer alignment. As a result, the flexible baseline between the nodes affects the interferometric phase error compensation and further deteriorates the imaging quality. This paper proposes an optics-based flexible baseline measuring system, which achieves non-contact measurement and overcomes the difficulty of building an accurate wing deformation model. An accuracy test conducted in the laboratory showed that the measurement error of the baseline under static and dynamic conditions was within 0.3 mm and 0.67 mm, respectively.
22
Wang G, Li WL, Jiang C, Zhu DH, Xie H, Liu XJ, Ding H. Simultaneous Calibration of Multicoordinates for a Dual-Robot System by Solving the AXB = YCZ Problem. IEEE Trans Robot 2021. [DOI: 10.1109/tro.2020.3043688]
23
Liu S, Huang WL, Gordon C, Armand M. Automated Implant Resizing for Single-Stage Cranioplasty. IEEE Robot Autom Lett 2021; 6:6624-6631. [PMID: 34395869] [DOI: 10.1109/lra.2021.3095286]
Abstract
Patient-specific customized cranial implants (CCIs) are designed to fill the bony voids in the cranial and craniofacial skeleton. The current clinical approach during single-stage cranioplasty involves a surgeon modifying an oversized CCI to fit a patient's skull defect. The manual process, however, can be imprecise and time-consuming. This paper presents an automated surgical workflow with a robotic workstation for intraoperative CCI modification that provides higher resizing accuracy compared to the manual approach. We proposed a 2-scan method for intraoperative patient-to-CT registration using reattachable fiducial markers to address the registration issue caused by the clinical draping requirement. First, the draped defected skull was 3D scanned and registered to the CT space using our proposed 2-scan registration method. Next, our algorithm generates a robot cutting toolpath based on the 3D defect model. The robot then performs automatic 3D scanning to localize the implant and resizes the implant to match the cranial defect. We evaluated the implant resizing accuracy of the proposed paradigm against the resizing accuracy of the manual approach by an expert surgeon on two plastic skulls and two cadavers. The evaluation results showed that our system was able to decrease the bone gap distance by more than 60% and 30% on plastic skulls and cadavers respectively compared to the manual approach, indicating lower risk of post-surgical complication and better aesthetic restoration.
Affiliation(s)
- Shuya Liu
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Wei-Lun Huang
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Chad Gordon
- Department of Plastic & Reconstructive Surgery, the Section of Neuroplastic & Reconstructive Surgery, Johns Hopkins School of Medicine, Baltimore, MD 21205, USA
- Mehran Armand
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Orthopedic Surgery, Johns Hopkins School of Medicine, Baltimore, MD 21205, USA
24
Hu X, Baena FRY, Cutolo F. Head-Mounted Augmented Reality Platform for Markerless Orthopaedic Navigation. IEEE J Biomed Health Inform 2021; 26:910-921. [PMID: 34115600] [DOI: 10.1109/jbhi.2021.3088442]
Abstract
Visual augmented reality (AR) has the potential to improve the accuracy, efficiency and reproducibility of computer-assisted orthopaedic surgery (CAOS). AR head-mounted displays (HMDs) further allow non-eye-shift target observation and an egocentric view. Recently, a markerless tracking and registration (MTR) algorithm was proposed to avoid the artificial markers that are conventionally pinned into the target anatomy for tracking, as their use prolongs the surgical workflow, introduces human-induced errors, and necessitates additional surgical invasion in patients. However, such an MTR-based method has neither been explored for surgical applications nor integrated into current AR HMDs, making ergonomic HMD-based markerless AR CAOS navigation hard to achieve. To these aims, we present a versatile, device-agnostic and accurate HMD-based AR platform. Our software platform, supporting both video see-through (VST) and optical see-through (OST) modes, integrates two proposed fast calibration procedures using a specially designed calibration tool. According to the camera-based evaluation, our AR platform achieves a display error of 6.31 ± 2.55 arcmin for VST and 7.72 ± 3.73 arcmin for OST. A proof-of-concept markerless surgical navigation system to assist in femoral bone drilling was then developed based on the platform and the Microsoft HoloLens 1. According to the user study, both the VST and OST markerless navigation systems are reliable, with the OST system providing the best usability. The measured navigation error is 4.90 ± 1.04 mm and 5.96 ± 2.22° for the VST system and 4.36 ± 0.80 mm and 5.65 ± 1.42° for the OST system.
25
Vijayan RC, Han R, Wu P, Sheth NM, Ketcha MD, Vagdargi P, Vogt S, Kleinszig G, Osgood GM, Siewerdsen JH, Uneri A. Development of a fluoroscopically guided robotic assistant for instrument placement in pelvic trauma surgery. J Med Imaging (Bellingham) 2021; 8:035001. [PMID: 34124283] [PMCID: PMC8189698] [DOI: 10.1117/1.jmi.8.3.035001]
Abstract
Purpose: A method for fluoroscopic guidance of a robotic assistant is presented for instrument placement in pelvic trauma surgery. The solution uses fluoroscopic images acquired in the standard clinical workflow and helps avoid the repeat fluoroscopy commonly performed during implant guidance. Approach: Images acquired from a mobile C-arm are used to perform 3D-2D registration of both the patient (via patient CT) and the robot (via a CAD model of a surgical instrument attached to its end effector, e.g., a drill guide), guiding the robot to target trajectories defined in the patient CT. The proposed approach avoids C-arm gantry motion, instead manipulating the robot to acquire disparate views of the instrument. Phantom and cadaver studies were performed to determine operating parameters and assess the accuracy of the proposed approach in aligning a standard drill guide instrument. Results: The proposed approach achieved average drill guide tip placement accuracy of 1.57 ± 0.47 mm and angular alignment of 0.35 ± 0.32 deg in phantom studies. The errors remained within 2 mm and 1 deg in cadaver experiments, comparable to the margins of error provided by surgical trackers (but operating without the need for external tracking). Conclusions: By operating at a fixed fluoroscopic perspective and eliminating the need for encoded C-arm gantry movement, the proposed approach simplifies and expedites the registration of image-guided robotic assistants and can be used with simple, non-calibrated, non-encoded, and non-isocentric C-arm systems to accurately guide a robotic device in a manner that is compatible with the surgical workflow.
Affiliation(s)
- Rohan C. Vijayan
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Runze Han
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Pengwei Wu
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Niral M. Sheth
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Michael D. Ketcha
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Prasad Vagdargi
- Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
- Greg M. Osgood
- Johns Hopkins Medicine, Department of Orthopaedic Surgery, Baltimore, Maryland, United States
- Jeffrey H. Siewerdsen
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
- Ali Uneri
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
26
Cai J, Lei T. An Autonomous Positioning Method of Tube-to-Tubesheet Welding Robot Based on Coordinate Transformation and Template Matching. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3050741]
27
Liang G, Chen F, Liang Y, Feng Y, Wang C, Wu X. A Manufacturing-Oriented Intelligent Vision System Based on Deep Neural Network for Object Recognition and 6D Pose Estimation. Front Neurorobot 2021; 14:616775. [PMID: 33488378] [PMCID: PMC7817625] [DOI: 10.3389/fnbot.2020.616775]
Abstract
Nowadays, intelligent robots are widely applied in the manufacturing industry, in various working places or assembly lines. In most manufacturing tasks, determining the category and pose of parts is important, yet challenging, due to complex environments. This paper presents a new two-stage intelligent vision system based on a deep neural network with RGB-D image inputs for object recognition and 6D pose estimation. A dense-connected network fusing multi-scale features is first built to segment the objects from the background. The 2D pixels and 3D points in cropped object regions are then fed into a pose estimation network to make object pose predictions based on fusion of color and geometry features. By introducing the channel and position attention modules, the pose estimation network presents an effective feature extraction method, by stressing important features whilst suppressing unnecessary ones. Comparative experiments with several state-of-the-art networks conducted on two well-known benchmark datasets, YCB-Video and LineMOD, verified the effectiveness and superior performance of the proposed method. Moreover, we built a vision-guided robotic grasping system based on the proposed method using a Kinova Jaco2 manipulator with an RGB-D camera installed. Grasping experiments proved that the robot system can effectively implement common operations such as picking up and moving objects, thereby demonstrating its potential to be applied in all kinds of real-time manufacturing applications.
Affiliation(s)
- Guoyuan Liang
- Center for Intelligent and Biomimetic Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems (No.2019B121205007), Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Fan Chen
- Center for Intelligent and Biomimetic Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yu Liang
- Center for Intelligent and Biomimetic Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yachun Feng
- Center for Intelligent and Biomimetic Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems (No.2019B121205007), Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Can Wang
- Center for Intelligent and Biomimetic Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems (No.2019B121205007), Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xinyu Wu
- Center for Intelligent and Biomimetic Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems (No.2019B121205007), Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
28
Richter F, Lu J, Orosco RK, Yip MC. Robotic Tool Tracking Under Partially Visible Kinematic Chain: A Unified Approach. IEEE Trans Robot 2021. [DOI: 10.1109/tro.2021.3111441]
29
Özgüner O, Shkurti T, Huang S, Hao R, Jackson RC, Newman WS, Çavuşoğlu MC. Camera-Robot Calibration for the da Vinci® Robotic Surgery System. IEEE Trans Autom Sci Eng 2020; 17:2154-2161. [PMID: 33746640] [PMCID: PMC7978174] [DOI: 10.1109/tase.2020.2986503]
Abstract
The development of autonomous or semi-autonomous surgical robots stands to improve the performance of existing teleoperated equipment, but requires fine hand-eye calibration between the free-moving endoscopic camera and patient-side manipulator arms (PSMs). A novel method of solving this problem for the da Vinci® robotic surgical system and kinematically similar systems is presented. First, a series of image-processing and optical-tracking operations are performed to compute the coordinate transformation between the endoscopic camera view frame and an optical-tracking marker permanently affixed to the camera body. Then, the kinematic properties of the PSM are exploited to compute the coordinate transformation between the kinematic base frame of the PSM and an optical marker permanently affixed thereto. Using these transformations, it is then possible to compute the spatial relationship between the PSM and the endoscopic camera using only one tracker snapshot of the two markers. The effectiveness of this calibration is demonstrated by successfully guiding the PSM end effector to points of interest identified through the camera. Additional tests on a surgical task, namely grasping a surgical needle, are also performed to validate the proposed method. The resulting visually-guided robot positioning accuracy is better than the earlier hand-eye calibration results reported in the literature for the da Vinci® system, while supporting intraoperative update of the calibration and requiring only devices that are already commonly used in the surgical environment.
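The single-snapshot composition described above reduces to plain homogeneous-matrix algebra: once the camera-view-to-camera-marker and PSM-base-to-PSM-marker transforms are known, one tracker reading of both markers closes the chain. The sketch below uses the convention T_a_b = pose of frame b in frame a; all names are illustrative, not the authors' code.

```python
# Hedged sketch of chaining one tracker snapshot into the hand-eye result.
import numpy as np

def inv(T):
    """Invert a 4x4 homogeneous transform without a general matrix inverse."""
    Rm, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3], Ti[:3, 3] = Rm.T, -Rm.T @ t
    return Ti

def psm_base_from_camera(T_base_psmMarker,     # from PSM kinematics (offline)
                         T_camMarker_cam,      # from image processing (offline)
                         T_tracker_psmMarker,  # tracker snapshot
                         T_tracker_camMarker): # tracker snapshot
    """Camera-view frame expressed in the PSM base frame:
    base <- psmMarker <- tracker <- camMarker <- camera view."""
    return (T_base_psmMarker @ inv(T_tracker_psmMarker)
            @ T_tracker_camMarker @ T_camMarker_cam)
```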
Affiliation(s)
- Orhan Özgüner
- Department of Electrical Engineering and Computer Science, Case Western Reserve University, Cleveland, OH
- Thomas Shkurti
- Department of Electrical Engineering and Computer Science, Case Western Reserve University, Cleveland, OH
- Siqi Huang
- Department of Electrical Engineering and Computer Science, Case Western Reserve University, Cleveland, OH
- Ran Hao
- Department of Electrical Engineering and Computer Science, Case Western Reserve University, Cleveland, OH
- Russell C Jackson
- Department of Electrical Engineering and Computer Science, Case Western Reserve University, Cleveland, OH
- Wyatt S Newman
- Department of Electrical Engineering and Computer Science, Case Western Reserve University, Cleveland, OH
- M Cenk Çavuşoğlu
- Department of Electrical Engineering and Computer Science, Case Western Reserve University, Cleveland, OH
30
Sun Y, Pan B, Guo Y, Fu Y, Niu G. Vision-based hand-eye calibration for robot-assisted minimally invasive surgery. Int J Comput Assist Radiol Surg 2020; 15:2061-2069. [PMID: 32808149] [DOI: 10.1007/s11548-020-02245-5]
Abstract
PURPOSE Knowledge of the laparoscope's view can greatly improve operating room (OR) efficiency. In vision-based computer-assisted surgery, hand-eye calibration establishes the coordinate relationship between the laparoscope and the robot slave arm. While significant advances have been made in hand-eye calibration in recent years, an efficient algorithm for minimally invasive surgical robots remains a major challenge; in particular, estimating the hand-eye transformation without an external calibration object in the abdominal environment is still a critical problem. METHODS We propose a novel hand-eye calibration algorithm for robot-assisted minimally invasive surgery (RMIS) that relies purely on the surgical instrument already present in the operating scene. Our model is formed from the geometric information of the surgical instrument and the remote center-of-motion (RCM) constraint. We also extend the algorithm to a stereo laparoscope model. RESULTS Synthetic simulations and experiments on a surgical robot system were conducted to evaluate the proposed method, showing that it can perform hand-eye calibration without a calibration object. CONCLUSION A vision-based hand-eye calibration method is developed. We demonstrate the feasibility of performing hand-eye calibration using components of the surgical robot system itself, improving surgical OR efficiency.
Affiliation(s)
- Yanwen Sun, Bo Pan, Yongchen Guo, Yili Fu: State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
- Guojun Niu: School of Mechanical Engineering and Automation, Zhejiang Sci-Tech University, Hangzhou, China
31
Zhang Q, Gao GQ. Hand–eye calibration and grasping pose calculation with motion error compensation and vertical-component correction for 4-R(2-SS) parallel robot. INT J ADV ROBOT SYST 2020. [DOI: 10.1177/1729881420909012] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Due to the motion constraints of the 4-R(2-SS) parallel robot, it is difficult to accurately calculate the translation component of the hand–eye calibration using existing model-solving methods. Additionally, camera calibration error, robot motion error, and invalid calibration motion poses make fast, accurate online hand–eye calibration difficult. We therefore propose a hand–eye calibration method with motion-error compensation and vertical-component correction for the 4-R(2-SS) parallel robot by improving the existing eye-to-hand model and its solving method. Firstly, the eye-to-hand model for a single camera is improved, and the robot motion error in the improved model is compensated to reduce the influence of camera calibration error and robot motion error on model accuracy. Secondly, the vertical component of the hand–eye calibration is corrected based on the vertical constraint between the calibration plate and the end effector, so that the pose and motion error in the calibration of the 4-R(2-SS) parallel robot can be calculated accurately. Thirdly, a nontrivial-solution constraint for the eye-to-hand model is constructed and used to remove invalid calibration motion poses and to plan the calibration motion. Finally, the proposed method was verified by experiments on a fruit-sorting system based on the 4-R(2-SS) parallel robot. Compared with random motion and the existing model and solving method, the average time of online calibration based on planned motion decreases by 29.773 s, and the average calibration error based on the improved model and solving method decreases by 151.293. The proposed method effectively improves the accuracy and efficiency of hand–eye calibration for the 4-R(2-SS) parallel robot and enables accurate, fast grasping.
Affiliation(s)
- Qian Zhang, Guo-Qin Gao: School of Electrical and Information Engineering, Jiangsu University, Zhenjiang, China
32
Ambiguity-Free Optical-Inertial Tracking for Augmented Reality Headsets. SENSORS 2020; 20:s20051444. [PMID: 32155808 PMCID: PMC7085738 DOI: 10.3390/s20051444] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/13/2020] [Revised: 03/04/2020] [Accepted: 03/04/2020] [Indexed: 01/19/2023]
Abstract
The increasing capability of computing power and mobile graphics has made possible the release of self-contained augmented reality (AR) headsets featuring efficient head-anchored tracking solutions. Ego-motion estimation based on well-established infrared tracking of markers ensures sufficient accuracy and robustness. Unfortunately, wearable visible-light stereo cameras with a short baseline operating under uncontrolled lighting conditions suffer from tracking failures and ambiguities in pose estimation. To improve the accuracy of optical self-tracking and its resilience to marker occlusions, degraded camera calibrations, and inconsistent lighting, in this work we propose a sensor-fusion approach based on Kalman filtering that integrates optical tracking data with inertial tracking data when computing motion correlation. To measure the improvement in AR overlay accuracy, experiments were performed with a custom-made AR headset designed to support complex manual tasks performed under direct vision. The results show that the proposed solution improves head-mounted display (HMD) tracking accuracy by one third and improves robustness, since the orientation of the target scene is still captured when some of the markers are occluded or when the optical tracking yields unstable or ambiguous results due to the limitations of head-anchored stereo tracking cameras under uncontrollable lighting.
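As a hedged, heavily simplified sketch of the fusion idea (a single-axis linear Kalman filter with assumed noise parameters; the headset's actual filter operates on full 6-DoF poses):

```python
import numpy as np

class OrientationFuser:
    """Minimal 1-axis Kalman filter fusing gyro rates with optical yaw fixes.

    Hypothetical simplification of optical-inertial fusion: the gyro drives
    the prediction, and optical tracking (when a marker is visible)
    provides the absolute correction.
    """
    def __init__(self, q_gyro=1e-4, r_optical=1e-2):
        self.theta = 0.0      # fused yaw estimate (rad)
        self.P = 1.0          # estimate variance
        self.q = q_gyro       # process noise (gyro random walk)
        self.r = r_optical    # optical measurement noise

    def predict(self, gyro_rate, dt):
        self.theta += gyro_rate * dt
        self.P += self.q * dt

    def update(self, optical_yaw):
        K = self.P / (self.P + self.r)           # Kalman gain
        self.theta += K * (optical_yaw - self.theta)
        self.P *= (1.0 - K)
```

Gyro samples drive predict() at the IMU rate, while update() runs only on frames where the markers are reliably detected, so occlusions and ambiguous optical poses simply pause the corrections instead of corrupting the estimate.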
33
Chan WP, Pan MKXJ, Croft EA, Inaba M. An Affordance and Distance Minimization Based Method for Computing Object Orientations for Robot Human Handovers. Int J Soc Robot 2020. [DOI: 10.1007/s12369-019-00546-7] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
34
Schlette C, Buch AG, Hagelskjær F, Iturrate I, Kraft D, Kramberger A, Lindvig AP, Mathiesen S, Petersen HG, Rasmussen MH, Savarimuthu TR, Sloth C, Sørensen LC, Thulesen TN. Towards robot cell matrices for agile production – SDU Robotics' assembly cell at the WRC 2018. Adv Robot 2019. [DOI: 10.1080/01691864.2019.1686422] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Affiliation(s)
- C. Schlette, A. G. Buch, F. Hagelskjær, I. Iturrate, D. Kraft, A. Kramberger, A. P. Lindvig, S. Mathiesen, H. G. Petersen, T. R. Savarimuthu, C. Sloth, L. C. Sørensen, T. N. Thulesen: SDU Robotics, Maersk Mc-Kinney Moller Institute (MMMI), University of Southern Denmark (SDU), Odense, Denmark
- M. H. Rasmussen: SDU Mechanical Engineering, Institute for Technology and Innovation (ITI), University of Southern Denmark (SDU), Odense, Denmark
35
Zhao Z. Simultaneous robot-world and hand-eye calibration by the alternative linear programming. Pattern Recognit Lett 2019. [DOI: 10.1016/j.patrec.2018.08.023] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
36
Robust and Accurate Hand-Eye Calibration Method Based on Schur Matric Decomposition. SENSORS 2019; 19:s19204490. [PMID: 31623249 PMCID: PMC6832585 DOI: 10.3390/s19204490] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/04/2019] [Revised: 10/12/2019] [Accepted: 10/14/2019] [Indexed: 12/02/2022]
Abstract
To improve the accuracy and robustness of hand–eye calibration, a hand–eye calibration method based on Schur matrix decomposition is proposed in this paper. The accuracy of hand–eye calibration methods depends strongly on the quality of the observation data, so preprocessing the observation data is essential. As with traditional two-step hand–eye calibration methods, we first solve for the rotation parameters, after which the translation vector can be determined immediately. A general solution is obtained from a single observation through Schur matrix decomposition, reducing the degrees of freedom from three to two. Observation-data preprocessing is one of the basic unresolved problems in hand–eye calibration; a discriminant equation for deleting outliers is deduced from the Schur matrix decomposition, and the preprocessing problem is then solved using outlier detection, which significantly improves robustness. The proposed method was validated by both simulations and experiments. The results show that the prediction errors of rotation and translation were 0.06 arcmin and 1.01 mm, respectively, and the proposed method performed much better in outlier detection. A minimal configuration for a unique solution is proven from a new perspective.
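As a hedged aside on the two-step structure this abstract refers to (the generic hand–eye decomposition, not the paper's Schur-specific derivation): writing the hand–eye equation as $AX = XB$ and separating rotation from translation gives

$$R_A R_X = R_X R_B, \qquad (R_A - I)\,t_X = R_X t_B - t_A,$$

so once the rotation $R_X$ is known, stacking the second equation over all motion pairs yields a linear least-squares problem that immediately determines the translation $t_X$.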
37
Pachtrachai K, Vasconcelos F, Dwyer G, Hailes S, Stoyanov D. Hand-Eye Calibration With a Remote Centre of Motion. IEEE Robot Autom Lett 2019. [DOI: 10.1109/lra.2019.2924845] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
38
Zhang Y, Qiu Z, Zhang X. Calibration method for hand-eye system with rotation and translation couplings. APPLIED OPTICS 2019; 58:5375-5387. [PMID: 31504005 DOI: 10.1364/ao.58.005375] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/28/2019] [Accepted: 06/11/2019] [Indexed: 06/10/2023]
Abstract
This paper develops a novel hand-eye calibration method for hand-eye systems with rotation and translation coupling terms. First, a nonlinear camera model with distortion terms and a model of a hand-eye system with rotation and translation coupling terms are established. Based on a nonlinear optimization method and a reverse projection method, a decoupling calibration method for a lower-degree-of-freedom hand-eye system is proposed. Then, path planning for the calibration process is carried out: based on an analysis of the coupling constraints and the motion constraints of the hand-eye system, three types of hand-eye calibration paths with high efficiency and easy operation are developed. In addition, the influence of key parameters on hand-eye calibration accuracy is analyzed. Finally, calibration experiments and parametric-influence experiments are carried out. The results demonstrate that the proposed method is effective and practical for calibrating the hand-eye system.
39
Ali I, Suominen O, Gotchev A, Morales ER. Methods for Simultaneous Robot-World-Hand-Eye Calibration: A Comparative Study. SENSORS 2019; 19:s19122837. [PMID: 31242714 PMCID: PMC6631330 DOI: 10.3390/s19122837] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/01/2019] [Revised: 06/19/2019] [Accepted: 06/21/2019] [Indexed: 11/16/2022]
Abstract
In this paper, we propose two novel methods for robot–world–hand–eye calibration and provide a comparative analysis against six state-of-the-art methods. We examine the calibration problem from two alternative geometrical interpretations, called ‘hand–eye’ and ‘robot–world–hand–eye’, respectively. The study analyses the effect of specifying the objective function as a pose-error or a reprojection-error minimization problem. We provide three real and three simulated datasets with rendered images as part of the study. In addition, we propose a robotic-arm error-modeling approach to be used with the simulated datasets for generating realistic responses. The tests on simulated data are performed both in ideal cases and with pseudo-realistic robotic-arm pose and visual noise. Our methods show significant improvement and robustness on many metrics in various scenarios compared to state-of-the-art methods.
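As a hedged sketch of the kind of pseudo-realistic pose noise such simulated benchmarks inject (the parameter values and helper name are assumptions; the paper's robotic-arm error model is more structured):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def perturb_pose(T, rot_noise_deg=0.05, trans_noise_mm=0.1, rng=None):
    """Apply small random rotation/translation errors to a 4x4 pose.

    A sketch of injecting pseudo-realistic robot-arm noise into
    simulated ground-truth poses; magnitudes here are placeholders.
    """
    rng = np.random.default_rng() if rng is None else rng
    dR = R.from_rotvec(rng.normal(0, np.deg2rad(rot_noise_deg), 3)).as_matrix()
    dt = rng.normal(0, trans_noise_mm / 1000.0, 3)  # metres
    Tn = T.copy()
    Tn[:3, :3] = dR @ T[:3, :3]   # small left-multiplied rotation error
    Tn[:3, 3] = T[:3, 3] + dt     # additive translation error
    return Tn
```

Sweeping the noise magnitudes then produces the ideal-case and pseudo-realistic test conditions the abstract describes.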
Affiliation(s)
- Ihtisham Ali, Olli Suominen, Atanas Gotchev: Faculty of Information Technology and Communication, Tampere University, 33720 Tampere, Finland
- Emilio Ruiz Morales: Fusion for Energy (F4E), ITER Delivery Department, Remote Handling Project Team, 08019 Barcelona, Spain
40
Wei Z, Zou W, Zhang G, Zhao K. Extrinsic parameters calibration of multi-camera with non-overlapping fields of view using laser scanning. OPTICS EXPRESS 2019; 27:16719-16737. [PMID: 31252894 DOI: 10.1364/oe.27.016719] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/14/2019] [Accepted: 05/21/2019] [Indexed: 06/09/2023]
Abstract
An extrinsic parameter calibration method for multiple cameras with non-overlapping fields of view (FOV) using laser scanning is presented. Firstly, two lasers are mounted on a multi-degree-of-freedom manipulator and can scan objects freely with their projected line-structured light. By controlling the movement of the manipulator, the line-structured light is projected into the field of view of one of the cameras, and the light-plane equation in that camera's coordinate frame is calibrated using a target. The manipulator is then moved several times by small amounts to change the position of the structured light in the camera's field of view, and the light plane is calibrated at each position. The light-plane equations in the manipulator coordinate frame are solved via hand-eye calibration. Secondly, the light planes are projected into the fields of view of the other cameras to be calibrated; the light-plane equations in each camera's coordinate frame are calibrated, and the extrinsic parameters between each camera frame and the manipulator frame are computed, which realizes the extrinsic calibration of all the cameras. The proposed method links the non-overlapping cameras through laser scanning and effectively solves multi-camera extrinsic calibration under long working distances and complex ambient lighting.
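A hedged note on the geometry this method leans on: a plane $\pi = (n, d)$ with $n^\top x + d = 0$ re-expresses under a rigid transform $T$ as $\pi' = T^{-\top}\pi$, which is the step implicitly needed when carrying the calibrated light plane between camera and manipulator frames. A minimal sketch (function name assumed):

```python
import numpy as np

def transform_plane(T, plane):
    """Re-express a plane under a rigid transform.

    plane: (4,) array [nx, ny, nz, d] for n.x + d = 0 in the source frame.
    T: 4x4 transform mapping source-frame points into the target frame.
    Returns the plane coefficients in the target frame (pi' = T^{-T} pi).
    """
    return np.linalg.inv(T).T @ plane
```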
41
A Novel Indirect Calibration Approach for Robot Positioning Error Compensation Based on Neural Network and Hand-Eye Vision. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9091940] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
It is well known that most industrial robots have excellent repeatability in positioning. However, their absolute position errors are relatively poor, in some cases reaching several millimeters, which makes it difficult to apply robot systems to vehicle assembly lines that require small position errors. In this paper, we study a method to reduce the absolute position error of robots using machine vision and a neural network. The position/orientation of the robot tool-end is compensated using a vision-based approach combined with a neural network, and a novel indirect calibration approach is presented for gathering the data used to train the network. In simulation, the proposed compensation algorithm reduced the positional error by 98%, to an average absolute position error of 0.029 mm. Applying the algorithm in an actual robot experiment reduced the error by 50.3%, to an average of 1.79 mm.
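As a hedged sketch of the compensation idea (the placeholder data and the scikit-learn regressor are assumptions; the paper's network architecture and training data come from its indirect calibration procedure):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Learn the mapping from commanded tool-end pose to measured positional
# error, then subtract the predicted error from future commands.
# The arrays below are placeholders for calibration measurements.
commanded = np.random.rand(500, 6)               # x, y, z, rx, ry, rz
measured_error = np.random.rand(500, 3) * 1e-3   # position errors (m)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
model.fit(commanded, measured_error)

target = np.random.rand(1, 6)
compensated_xyz = target[0, :3] - model.predict(target)[0]  # corrected command
```

In this scheme the network never replaces the kinematic model; it only predicts the residual error to subtract from the commanded position.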
42
Di K, Yang S, Wang W, Yan F, Xing H, Jiang J, Jiang Y. Optimizing Evasive Strategies for an Evader with Imperfect Vision Capacity. J INTELL ROBOT SYST 2019. [DOI: 10.1007/s10846-019-00996-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
43
Nguyen H, Pham QC. On the Covariance of $\boldsymbol X$ in $\boldsymbol A\boldsymbol X = \boldsymbol X\boldsymbol B$. IEEE T ROBOT 2018. [DOI: 10.1109/tro.2018.2861905] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
44
Viglialoro R, Esposito N, Condino S, Cutolo F, Guadagni S, Gesi M, Ferrari M, Ferrari V. Augmented Reality to Improve Surgical Simulation. Lessons Learned Towards the Design of a Hybrid Laparoscopic Simulator for Cholecystectomy. IEEE Trans Biomed Eng 2018; 66:2091-2104. [PMID: 30507490 DOI: 10.1109/tbme.2018.2883816] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Abstract
Hybrid surgical simulators based on Augmented Reality (AR) combine the advantages of box trainers and Virtual Reality simulators. This paper reports the results of a long development stage of a hybrid simulator for laparoscopic cholecystectomy that integrates real and virtual components. We first outline the specifications of the AR simulator and then explain the implementation strategy, which is based on a careful selection of the simulated anatomical components and characterized by real-time tracking of both the target anatomy and the laparoscope. The former is tracked by means of an electromagnetic field generator, while the latter requires an additional camera for video tracking. The new system was evaluated in terms of AR visualization accuracy, realism, and hardware robustness. The results show that the AR visualization accuracy is adequate for training purposes, and the qualitative evaluation confirms the robustness and realism of the simulator. The AR simulator satisfies all the initial specifications in terms of anatomical appearance, modularity, reusability, minimization of spare-part costs, and the ability to record surgical errors and to track the Calot's triangle and the laparoscope in real time. The proposed system could be an effective training tool for learning the identification and isolation of Calot's triangle in laparoscopic cholecystectomy. Moreover, the presented strategy could be applied to simulate other surgical procedures involving the identification and isolation of generic tubular structures, such as blood vessels, the biliary tree, and nerves, which are not directly visible.
45
Li W, Dong M, Lu N, Lou X, Sun P. Simultaneous Robot–World and Hand–Eye Calibration without a Calibration Object. SENSORS 2018; 18:s18113949. [PMID: 30445680 PMCID: PMC6263626 DOI: 10.3390/s18113949] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/09/2018] [Revised: 10/29/2018] [Accepted: 11/05/2018] [Indexed: 12/03/2022]
Abstract
An extended robot–world and hand–eye calibration method is proposed in this paper to evaluate the transformation relationship between the camera and a robot device. This approach suits mobile or medical robotics applications, where precise, expensive, or unsterile calibration objects, or sufficient movement space, cannot be made available at the work site. Firstly, a mathematical model is established to formulate the robot-gripper-to-camera and robot-base-to-world rigid transformations using the Kronecker product. Subsequently, a sparse bundle adjustment is introduced to optimize both the robot–world and hand–eye calibration and the reconstruction results. Finally, a validation experiment on two kinds of real datasets is designed to demonstrate the effectiveness and accuracy of the proposed approach. The relative translation error of the rigid transformation is less than 8/10,000 for a Denso robot in a movement range of 1.3 m × 1.3 m × 1.2 m, and the mean distance-measurement error after three-dimensional reconstruction is 0.13 mm.
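As a hedged sketch of the Kronecker-product rotation step for the robot–world/hand–eye equation $AX = ZB$ (a generic linear formulation with an assumed function name; the paper additionally refines the result with sparse bundle adjustment):

```python
import numpy as np

def solve_rotations_ax_zb(RA_list, RB_list):
    """Solve the rotation part of AX = ZB via Kronecker products.

    Stacks vec(RA_i @ RX) - vec(RZ @ RB_i) = 0 as
    (I kron RA_i) vec(RX) - (RB_i^T kron I) vec(RZ) = 0
    (column-major vec) and takes the SVD null vector.
    Needs at least two independent robot motions.
    """
    rows = [np.hstack([np.kron(np.eye(3), RA), -np.kron(RB.T, np.eye(3))])
            for RA, RB in zip(RA_list, RB_list)]
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    v = Vt[-1]                               # null-space direction
    RX = v[:9].reshape(3, 3, order="F")      # un-vec, column-major
    RZ = v[9:].reshape(3, 3, order="F")

    def to_so3(M):
        # Project onto SO(3): orthonormalize and force det = +1.
        U, _, Vt2 = np.linalg.svd(M)
        R = U @ Vt2
        return R * np.sign(np.linalg.det(R))

    return to_so3(RX), to_so3(RZ)
```

The translations then follow from the linear relation $R_A t_X + t_A = R_Z t_B + t_Z$ stacked over all poses, before any bundle-adjustment refinement.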
Affiliation(s)
- Wei Li: Institute of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Mingli Dong, Xiaoping Lou: Key Laboratory of the Ministry of Education for Optoelectronic Measurement Technology and Instrument, Beijing Information Science and Technology University, Beijing 100192, China
- Naiguang Lu, Peng Sun: Institute of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China; Key Laboratory of the Ministry of Education for Optoelectronic Measurement Technology and Instrument, Beijing Information Science and Technology University, Beijing 100192, China
46
Stereo Camera Head-Eye Calibration Based on Minimum Variance Approach Using Surface Normal Vectors. SENSORS 2018; 18:s18113706. [PMID: 30384481 PMCID: PMC6263920 DOI: 10.3390/s18113706] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/06/2018] [Revised: 10/19/2018] [Accepted: 10/29/2018] [Indexed: 11/17/2022]
Abstract
This paper presents a stereo camera-based head-eye calibration method that aims to find the globally optimal transformation between a robot’s head and its eye. This method is highly intuitive and simple, so it can be used in a vision system for humanoid robots without any complex procedures. To achieve this, we introduce an extended minimum variance approach for head-eye calibration using surface normal vectors instead of 3D point sets. The presented method considers both positional and orientational error variances between visual measurements and kinematic data in head-eye calibration. Experiments using both synthetic and real data show the accuracy and efficiency of the proposed method.
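As a hedged sketch of estimating a rotation from surface-normal correspondences (an unweighted Wahba/Kabsch-style solution; the paper's minimum-variance method additionally weights positional and orientational error variances):

```python
import numpy as np

def rotation_from_normals(N_cam, N_kin):
    """Estimate the head-eye rotation aligning surface normals.

    N_cam, N_kin: (k, 3) unit normals of the same planes observed in the
    camera frame and predicted from kinematics. Solves
    min_R sum ||N_cam_i - R N_kin_i||^2 via SVD.
    """
    H = N_kin.T @ N_cam                  # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # det = +1
    return Vt.T @ D @ U.T                # rotation mapping kinematic -> camera
```

Using normals rather than 3D point sets sidesteps the translation component entirely, since directions are unaffected by it.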
47
Leonard S, Sinha A, Reiter A, Ishii M, Gallia GL, Taylor RH, Hager GD. Evaluation and Stability Analysis of Video-Based Navigation System for Functional Endoscopic Sinus Surgery on In Vivo Clinical Data. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:2185-2195. [PMID: 29993881 DOI: 10.1109/tmi.2018.2833868] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Functional endoscopic sinus surgery (FESS) is one of the most common outpatient surgical procedures performed in the head and neck region. It is used to treat chronic sinusitis, a disease characterized by inflammation in the nose and surrounding paranasal sinuses, affecting about 15% of the adult population. During FESS, the nasal cavity is visualized using an endoscope, and instruments are used to remove tissues that are often within a millimeter of critical anatomical structures, such as the optic nerve, carotid arteries, and nasolacrimal ducts. To maintain orientation and to minimize the risk of damage to these structures, surgeons use surgical navigation systems to visualize the 3-D position of their tools on patients' preoperative Computed Tomographies (CTs). This paper presents an image-based method for enhanced endoscopic navigation. The main contributions are: (1) a system that enables a surgeon to asynchronously register a sequence of endoscopic images to a CT scan with higher accuracy than other reported solutions using no additional hardware; (2) the ability to report the robustness of the registration; and (3) evaluation on in vivo human data. The system also enables the overlay of anatomical structures, visible, or occluded, on top of video images. The methods are validated on four different data sets using multiple evaluation metrics. First, for experiments on synthetic data, we observe a mean absolute position error of 0.21mm and a mean absolute orientation error of 2.8° compared with ground truth. Second, for phantom data, we observe a mean absolute position error of 0.97mm and a mean absolute orientation error of 3.6° compared with the same motion tracked by an electromagnetic tracker. Third, for cadaver data, we use fiducial landmarks and observe an average reprojection distance error of 0.82mm. Finally, for in vivo clinical data, we report an average ICP residual error of 0.88mm in areas that are not composed of erectile tissue and an average ICP residual error of 1.09mm in areas that are composed of erectile tissue.
48
Zhang HK, Cheng A, Kim Y, Ma Q, Chirikjian GS, Boctor EM. Phantom with multiple active points for ultrasound calibration. J Med Imaging (Bellingham) 2018; 5:045001. [PMID: 30525061 PMCID: PMC6257090 DOI: 10.1117/1.jmi.5.4.045001] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2017] [Accepted: 10/10/2018] [Indexed: 11/14/2022] Open
Abstract
Accurate tracking and localization of ultrasound (US) images are used in various computer-assisted interventions. US calibration is a preoperative procedure to recover the transformation bridging the tracking-sensor and US-image coordinate systems. Although many calibration phantom designs have been proposed, a limitation that hinders the resulting calibration accuracy is the US elevational beam thickness. Previous studies proposed an active-echo (AE) calibration concept to overcome this limitation by utilizing dynamic active US feedback from a single-PZT-element phantom, which assists in placing the phantom within the US elevational plane. However, the search for the elevational midplane is time-consuming and requires dedicated hardware to enable the AE functionality. Extending this active phantom, we present a US calibration concept and an associated mathematical framework enabling fast and accurate US calibration using multiple active points. The proposed calibration simplifies the procedure by minimizing the number of midplane searches, shortening the calibration time. The concept is demonstrated with a configuration in which a robot arm mechanically tracks the US probe. We validated the concept through simulation and experiment, achieving submillimeter calibration accuracy. This result indicates that the multiple active-point phantom has the potential to provide superior calibration performance for applications requiring high tracking accuracy.
Affiliation(s)
- Haichong K. Zhang, Alexis Cheng, Younsu Kim: The Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
- Qianli Ma, Gregory S. Chirikjian: The Johns Hopkins University, Department of Mechanical Engineering, Baltimore, Maryland, United States
- Emad M. Boctor: The Johns Hopkins University, Departments of Computer Science, Electrical and Computer Engineering, and Radiology, Baltimore, Maryland, United States
49
Pachtrachai K, Vasconcelos F, Chadebecq F, Allan M, Hailes S, Pawar V, Stoyanov D. Adjoint Transformation Algorithm for Hand-Eye Calibration with Applications in Robotic Assisted Surgery. Ann Biomed Eng 2018; 46:1606-1620. [PMID: 30051249 PMCID: PMC6154014 DOI: 10.1007/s10439-018-2097-4] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2017] [Accepted: 07/17/2018] [Indexed: 11/30/2022]
Abstract
Hand–eye calibration aims at determining the unknown rigid transformation between the coordinate systems of a robot arm and a camera. Existing hand–eye algorithms using closed-form solutions followed by iterative non-linear refinement provide accurate calibration results within a broad range of robotic applications. However, in the context of surgical robotics hand–eye calibration is still a challenging problem due to the required accuracy within the millimetre range, coupled with a large displacement between endoscopic cameras and the robot end-effector. This paper presents a new method for hand–eye calibration based on the adjoint transformation of twist motions that solves the problem iteratively through alternating estimations of rotation and translation. We show that this approach converges to a solution with a higher accuracy than closed form initializations within a broad range of synthetic and real experiments. We also propose a stereo hand–eye formulation that can be used in the context of both our proposed method and previous state-of-the-art closed form solutions. Experiments with real data are conducted with a stereo laparoscope, the KUKA robot arm manipulator, and the da Vinci surgical robot, showing that both our new alternating solution and the explicit representation of stereo camera hand–eye relations contribute to a higher calibration accuracy.
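For orientation, a hedged sketch of the twist relation behind the adjoint formulation (the standard screw-theory form, not the paper's full algorithm): with hand–eye transform $X = (R_X, t_X)$ and corresponding camera and end-effector twists $\xi_a = (v_a, \omega_a)$ and $\xi_b = (v_b, \omega_b)$,

$$\xi_a = \mathrm{Ad}_X\,\xi_b, \qquad \mathrm{Ad}_X = \begin{bmatrix} R_X & \hat{t}_X R_X \\ 0 & R_X \end{bmatrix},$$

where $\hat{t}_X$ is the skew-symmetric matrix of $t_X$. This is the Lie-algebra counterpart of $A = XBX^{-1}$, and alternating between the rotation and translation blocks yields the iterative scheme described above.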
Affiliation(s)
- Krittin Pachtrachai, Francisco Vasconcelos, François Chadebecq, Danail Stoyanov: Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS) and the Department of Computer Science, University College London, London, UK
- Max Allan: Intuitive Surgical, Sunnyvale, CA, USA
- Stephen Hailes, Vijay Pawar: Department of Computer Science, University College London, London, UK
50
Pachtrachai K, Vasconcelos F, Dwyer G, Pawar V, Hailes S, Stoyanov D. CHESS—Calibrating the Hand-Eye Matrix With Screw Constraints and Synchronization. IEEE Robot Autom Lett 2018. [DOI: 10.1109/lra.2018.2800088] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]