1. Qian L, Song T, Unberath M, Kazanzides P. AR-Loupe: Magnified Augmented Reality by Combining an Optical See-Through Head-Mounted Display and a Loupe. IEEE Trans Vis Comput Graph 2022;28:2550-2562. [PMID: 33170780] [DOI: 10.1109/tvcg.2020.3037284]
Abstract
Head-mounted loupes can increase the user's visual acuity to observe the details of an object. Optical see-through head-mounted displays (OST-HMDs), on the other hand, can provide virtual augmentations registered with real objects. In this article, we propose AR-Loupe, which combines the advantages of loupes and OST-HMDs to offer augmented reality in the user's magnified field of vision. Specifically, AR-Loupe integrates a commercial OST-HMD, the Magic Leap One, and binocular Galilean magnifying loupes, with customized 3D-printed attachments. We model the combination of the user's eye, the OST-HMD screen, and the optical loupe as a pinhole camera. The calibration of AR-Loupe involves interactive view segmentation and an adapted version of the stereo single point active alignment method (Stereo-SPAAM). We conducted a two-phase multi-user study to evaluate AR-Loupe. Users were able to achieve sub-millimeter accuracy (0.82 mm) on average, significantly smaller than with normal AR guidance (1.49 mm). The mean calibration time was 268.46 s. With real objects enlarged through optical magnification and the registered augmentation, AR-Loupe can aid users in high-precision tasks with better visual acuity and higher accuracy.
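The calibration family this paper adapts (SPAAM) reduces, at its core, to a direct linear transform: each time the user aligns an on-screen crosshair with a tracked 3D point, one 2D-3D correspondence is collected, and the 3×4 eye-display projection is recovered from the stack of alignments. Below is a minimal single-eye sketch, assuming NumPy and illustrative variable names; the paper's Stereo-SPAAM additionally couples the two eyes, which is not reproduced here.

```python
import numpy as np

def spaam_projection(points_3d, points_2d):
    """Recover the 3x4 eye-display projection G from SPAAM alignments
    via the direct linear transform (DLT).
    points_3d: (N, 3) tracked 3D positions (tracker frame);
    points_2d: (N, 2) screen pixels the user aligned them with.
    Needs N >= 6 non-degenerate correspondences."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # G is the null vector of A: the right singular vector belonging
    # to the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    G = Vt[-1].reshape(3, 4)
    return G / np.linalg.norm(G[2, :3])   # fix the arbitrary scale

def project(G, p):
    """Screen pixel predicted for a 3D point p under calibration G."""
    q = G @ np.append(p, 1.0)
    return q[:2] / q[2]
```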

2. Rapid Calibration of the Projector in Structured Light Systems Based on Brox Optical Flow Estimation. Photonics 2022. [DOI: 10.3390/photonics9060375]
Abstract
In this work, we propose a rapid calibration technique for locating the projector in a structured light measurement system. Using Brox optical flow, calibrating the three-dimensional (3D) coordinates of the projector requires only two images, captured before and after the motion of the calibration plate. The calibration principle, presented geometrically, depicts the relation between the position of the projector, the camera, and the optical flow caused by the movement of the calibration plate. Important influences on accuracy, such as environmental noise and the localization error of the camera and the calibration plate, are discussed and illustrated by numerical simulations. The simulation results show that the relative errors of the projector calibration are less than 0.8% and 1% when the calibration images are polluted by Gaussian noise with an SNR of 40 dB and 20 dB, respectively. An actual experiment measuring a square standard block and a circular thin plate verifies the proposed method's feasibility and practicality. The results show that the height distributions of the two specimens are in good agreement with their true values, with maximum absolute errors of 0.1 mm and 0.08 mm, respectively.
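The measurement step here is a dense optical flow between the two calibration-plate images. The Brox method used in the paper is only available in OpenCV's CUDA contrib module, so this sketch substitutes Farnebäck flow to illustrate the two-image idea; the file names are placeholders.

```python
import cv2
import numpy as np

# Two images of the calibration plate, before and after its known motion.
img0 = cv2.imread("plate_before.png", cv2.IMREAD_GRAYSCALE)
img1 = cv2.imread("plate_after.png", cv2.IMREAD_GRAYSCALE)

# Dense flow: flow[y, x] = (dx, dy), the displacement of each pixel
# induced by the plate's movement.
flow = cv2.calcOpticalFlowFarneback(img0, img1, None,
                                    0.5,  # pyramid scale
                                    4,    # pyramid levels
                                    21,   # window size
                                    5,    # iterations
                                    7,    # poly_n
                                    1.5,  # poly_sigma
                                    0)    # flags

# In the paper's geometric model, this per-pixel displacement, together
# with the known plate translation and the camera pose, constrains the
# 3D position of the projector.
print("mean pixel displacement:", np.linalg.norm(flow, axis=2).mean())
```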

3. Ueda T, Iwai D, Sato K. IlluminatedZoom: spatially varying magnified vision using periodically zooming eyeglasses and a high-speed projector. Opt Express 2021;29:16377-16395. [PMID: 34154202] [DOI: 10.1364/oe.427616]
Abstract
Spatial zooming and magnification, which control the size of only a portion of a scene while maintaining its context, is an essential interaction technique in augmented reality (AR) systems. It has been applied in various AR applications, including surgical navigation, visual search support, and human behavior control. However, spatial zooming has so far been implemented only on video see-through displays and has not been supported by optical see-through displays; achieving spatial zooming of an observed real scene with near-eye optics is not trivial. This paper presents the first optical see-through spatial zooming glasses, which enable interactive control of the perceived sizes of real-world appearances in a spatially varying manner. The key to our technique is the combination of periodically fast-zooming eyeglasses and a synchronized high-speed projector. We stack two electrically focus-tunable lenses (ETLs) for each eyeglass and sweep their focal lengths to modulate the magnification periodically from one (unmagnified) to a higher value (magnified) at 60 Hz, in a manner that prevents the user from perceiving the modulation. We use a 1,000 fps high-speed projector to provide high-resolution spatial illumination of the real scene around the user. A portion of the scene that is to appear magnified is illuminated by the projector when the magnification is greater than one, while the rest is illuminated when the magnification is equal to one. Through experiments, we demonstrate spatial zooming results of up to 30% magnification using a prototype system. Our technique has the potential to expand the application field of spatial zooming interaction in optical see-through AR.
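A back-of-the-envelope look at the synchronization budget the paper exploits, under its stated numbers (60 Hz magnification sweep, 1,000 fps projector): roughly 16-17 projector frames fall within each sweep period, and each frame can be routed to the magnified or unmagnified part of the phase. The duty split below is an assumption for illustration.

```python
SWEEP_HZ = 60      # focal-length modulation rate of the stacked ETLs
PROJ_FPS = 1000    # high-speed projector frame rate

frames_per_sweep = PROJ_FPS / SWEEP_HZ   # ~16.7 frames per period

def magnified_phase(frame_index, duty=0.5):
    """True if this projector frame lands in the (assumed) part of the
    sweep where the ETL stack yields magnification > 1; such frames
    illuminate the region that should appear magnified."""
    phase = (frame_index % frames_per_sweep) / frames_per_sweep
    return phase < duty

schedule = [magnified_phase(i) for i in range(17)]  # one sweep period
```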

4. Off-Line Camera-Based Calibration for Optical See-Through Head-Mounted Displays. Appl Sci (Basel) 2019. [DOI: 10.3390/app10010193]
Abstract
In recent years, the market entry of self-contained optical see-through headsets with integrated multi-sensor capabilities has led the way to innovative, technology-driven augmented reality applications and has encouraged the adoption of these devices even in highly challenging medical and industrial settings. Despite this, the display calibration process of consumer-level systems is still sub-optimal, particularly for applications that require high accuracy in the spatial alignment between computer-generated elements and the real-world scene. State-of-the-art manual and automated calibration procedures designed to estimate all the projection parameters are too complex for real application cases outside laboratory environments. This paper describes an off-line fast calibration procedure that requires only a camera to observe a planar pattern displayed on the see-through display. The camera that replaces the user's eye must be placed within the eye-motion-box of the see-through display. The method exploits standard camera calibration and computer vision techniques to estimate the projection parameters of the display model for a generic position of the camera. At execution time, the projection parameters can then be refined through a planar homography that encapsulates the shift and scaling effect associated with the estimated relative translation from the old camera position to the current user's eye position. Compared to classical SPAAM techniques, which still rely on the human element, and to other camera-based calibration procedures, the proposed technique is flexible and easy to replicate in both laboratory environments and real-world settings.
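The refinement step can be sketched with the classical plane-induced homography: a pure translation t of the eye relative to the display's virtual image plane (unit normal n at distance d) warps the calibrated projection by H = K(I + t nᵀ/d)K⁻¹, which is exactly a shift-and-scale. All numbers and the sign convention below are assumptions, not values from the paper.

```python
import numpy as np

def refine_homography(K, t, n=np.array([0.0, 0.0, 1.0]), d=2.0):
    """Homography induced by translating the viewpoint by t (metres)
    relative to a plane with unit normal n at distance d:
    H = K (I + t n^T / d) K^-1  (sign convention assumed)."""
    H = K @ (np.eye(3) + np.outer(t, n) / d) @ np.linalg.inv(K)
    return H / H[2, 2]

# Intrinsics from the one-off camera-based calibration (illustrative).
K = np.array([[1400.0,    0.0, 640.0],
              [   0.0, 1400.0, 360.0],
              [   0.0,    0.0,   1.0]])
# Eye shifted 3 mm right and 1 mm back inside the eye-motion box.
H = refine_homography(K, t=np.array([0.003, 0.0, 0.001]))
```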

5. Grubert J, Itoh Y, Moser K, Swan JE. A Survey of Calibration Methods for Optical See-Through Head-Mounted Displays. IEEE Trans Vis Comput Graph 2018;24:2649-2662. [PMID: 28961115] [DOI: 10.1109/tvcg.2017.2754257]
Abstract
Optical see-through head-mounted displays (OST HMDs) are a major output medium for Augmented Reality and have seen significant growth in popularity and usage among the general public due to the growing release of consumer-oriented models, such as the Microsoft HoloLens. Unlike Virtual Reality headsets, OST HMDs inherently support the addition of computer-generated graphics directly into the light path between a user's eyes and their view of the physical world. As with most Augmented and Virtual Reality systems, the physical position of an OST HMD is typically determined by an external or embedded 6-degree-of-freedom tracking system. However, in order to properly render virtual objects that are perceived as spatially aligned with the physical environment, it is also necessary to accurately measure the position of the user's eyes within the tracking system's coordinate frame. For over 20 years, researchers have proposed various calibration methods to determine this eye position. To date, however, there has been no comprehensive overview of these procedures and their requirements. Hence, this paper surveys the field of calibration methods for OST HMDs. Specifically, it provides insights into the fundamentals of calibration techniques, and presents an overview of both manual and automatic approaches, as well as evaluation methods and metrics. Finally, it identifies opportunities for future research.

6. Wang J, Suenaga H, Liao H, Hoshi K, Yang L, Kobayashi E, Sakuma I. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation. Comput Med Imaging Graph 2014;40:147-59. [PMID: 25465067] [DOI: 10.1016/j.compmedimag.2014.11.003]
Abstract
Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ and to be transformable to a specified location for image overlay. However, neither the generation of a 3D image with high geometric fidelity nor the quantitative evaluation of a 3D image's geometric accuracy has been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. Evaluation of the 3D image rendering performance at 2560×1600 elemental image resolution shows rendering speeds of 50-60 frames per second (fps) for surface models and 5-8 fps for large medical volumes. Evaluation of the undistorted 3D image after calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system's usability.
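A heavily simplified, CPU-only sketch of the computer-generated integral imaging idea (the paper's pipeline runs on the GPU and renders full surface and volume models, not point sets): each lenslet is treated as a pinhole in the z = 0 plane, and every scene point is mapped through each lens centre onto the display panel behind it. All geometry parameters are illustrative.

```python
import numpy as np

def render_elemental_images(points, lens_pitch=1.0, gap=3.0,
                            grid=16, elem_px=32):
    """Toy CGII renderer. points: (N, 3) scene points with z > 0 in
    front of the lens array (lens centres in the z = 0 plane); the
    display panel lies at z = -gap. Returns a (grid, grid) array of
    elem_px x elem_px binary elemental images."""
    imgs = np.zeros((grid, grid, elem_px, elem_px))
    pix = lens_pitch / elem_px              # panel pixel size
    for i in range(grid):
        for j in range(grid):
            c = np.array([(i - grid / 2 + 0.5) * lens_pitch,
                          (j - grid / 2 + 0.5) * lens_pitch, 0.0])
            for p in points:
                if p[2] <= 0:
                    continue
                t = (p[2] + gap) / p[2]     # ray p -> c, extended to panel
                hit = p + t * (c - p)
                u = int(round((hit[0] - c[0]) / pix + elem_px / 2))
                v = int(round((hit[1] - c[1]) / pix + elem_px / 2))
                if 0 <= u < elem_px and 0 <= v < elem_px:
                    imgs[i, j, v, u] = 1.0
    return imgs
```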
Affiliation(s)
- Junchen Wang: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Hideyuki Suenaga: Department of Oral-Maxillofacial Surgery, Dentistry and Orthodontics, The University of Tokyo Hospital, Tokyo, Japan
- Hongen Liao: Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Kazuto Hoshi: Department of Oral-Maxillofacial Surgery, Dentistry and Orthodontics, The University of Tokyo Hospital, Tokyo, Japan
- Liangjing Yang: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Etsuko Kobayashi: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Ichiro Sakuma: Graduate School of Engineering, The University of Tokyo, Tokyo, Japan

7. Efficient stereo image geometrical reconstruction at arbitrary camera settings from a single calibration. Med Image Comput Comput Assist Interv 2014;17:440-7. [PMID: 25333148] [DOI: 10.1007/978-3-319-10404-1_55]
Abstract
Camera calibration is central to obtaining a quantitative image-to-physical-space mapping from stereo images acquired in the operating room (OR). A practical challenge for cameras mounted to the operating microscope is maintenance of image calibration as the surgeon's field-of-view is repeatedly changed (in terms of zoom and focal settings) throughout a procedure. Here, we present an efficient method for sustaining a quantitative image-to-physical space relationship for arbitrary image acquisition settings (S) without the need for camera re-calibration. Essentially, we warp images acquired at S into the equivalent data acquired at a reference setting, S(0), using deformation fields obtained with optical flow by successively imaging a simple phantom. Closed-form expressions for the distortions were derived from which 3D surface reconstruction was performed based on the single calibration at S(0). The accuracy of the reconstructed surface was 1.05 mm and 0.59 mm along and perpendicular to the optical axis of the operating microscope on average, respectively, for six phantom image pairs, and was 1.26 mm and 0.71 mm for images acquired with a total of 47 arbitrary settings during three clinical cases. The technique is presented in the context of stereovision; however, it may also be applicable to other types of video image acquisitions (e.g., endoscope) because it does not rely on any a priori knowledge about the camera system itself, suggesting the method is likely of considerable significance.
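The core trick, once the deformation field between setting S and the reference S0 is in hand, is a plain image warp so that the single calibration at S0 keeps applying. A sketch with OpenCV's remap, assuming the flow was precomputed (e.g., from the phantom images) and is defined on the reference pixel grid; names are illustrative.

```python
import cv2
import numpy as np

def warp_to_reference(img_S, flow_S0_to_S):
    """Warp an image taken at arbitrary zoom/focus setting S into the
    reference setting S0. flow_S0_to_S[y, x] = (dx, dy) maps each S0
    pixel to its location in the S image, so cv2.remap pulls the S
    pixels back onto the S0 grid where the calibration is valid."""
    h, w = flow_S0_to_S.shape[:2]
    gy, gx = np.mgrid[0:h, 0:w].astype(np.float32)
    map_x = gx + flow_S0_to_S[..., 0]
    map_y = gy + flow_S0_to_S[..., 1]
    return cv2.remap(img_S, map_x, map_y, cv2.INTER_LINEAR)
```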

8. Kumar AN, Miga MI, Pheiffer TS, Chambless LB, Thompson RC, Dawant BM. Persistent and automatic intraoperative 3D digitization of surfaces under dynamic magnifications of an operating microscope. Med Image Anal 2014;19:30-45. [PMID: 25189364] [DOI: 10.1016/j.media.2014.07.004]
Abstract
One of the major challenges impeding advancement in image-guided surgical (IGS) systems is soft-tissue deformation during surgical procedures. These deformations reduce the utility of the patient's preoperative images and may produce inaccuracies in the application of preoperative surgical plans. Solutions to compensate for tissue deformation include the acquisition of intraoperative tomographic images of the whole organ for direct displacement measurement, and techniques that combine intraoperative organ surface measurements with computational biomechanical models to predict subsurface displacements. The latter solution has the advantage of being less expensive and amenable to the surgical workflow. Several modalities, such as textured laser scanners, conoscopic holography, and stereo-pair cameras, have been proposed for the intraoperative 3D estimation of organ surfaces to drive patient-specific biomechanical models for the intraoperative update of preoperative images. Though each modality has its respective advantages and disadvantages, stereo-pair camera approaches used within a standard operating microscope are the focus of this article. A new method that permits the automatic and near real-time estimation of 3D surfaces (at 1 Hz) under varying magnifications of the operating microscope is proposed. This method has been evaluated on a CAD phantom object and on full-length neurosurgery video sequences (∼1 h) acquired intraoperatively by the proposed stereovision system. To the best of our knowledge, this type of validation study on full-length brain tumor surgery videos has not been done before. The method for estimating the unknown magnification factor of the operating microscope achieves accuracy within 0.02 of the theoretical value on a CAD phantom and within 0.06 on 4 clinical videos of entire brain tumor surgeries. When compared to a laser range scanner, the proposed method for reconstructing 3D surfaces intraoperatively achieves root mean square errors (surface-to-surface distance) in the 0.28-0.81 mm range on the phantom object and in the 0.54-1.35 mm range on 4 clinical cases. The digitization accuracy of the presented stereovision methods indicates that the operating microscope can be used to deliver the persistent intraoperative input required by computational biomechanical models to update the patient's preoperative images and facilitate active surgical guidance.
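The downstream reconstruction step of such a stereo microscope pipeline can be sketched with stock OpenCV: semi-global block matching on a rectified pair, then reprojection to 3D with the calibrated disparity-to-depth matrix. The paper's magnification-factor estimation is not reproduced here; file names and matcher settings are placeholders.

```python
import cv2
import numpy as np

# Rectified left/right microscope views (placeholder paths), plus the
# 4x4 disparity-to-depth matrix Q from cv2.stereoRectify at calibration.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
Q = np.load("Q.npy")

sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                             blockSize=5)
disp = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point

xyz = cv2.reprojectImageTo3D(disp, Q)   # (H, W, 3) surface coordinates
surface_points = xyz[disp > 0]          # keep pixels with valid disparity
```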
Affiliation(s)
- Ankur N Kumar: Vanderbilt University, Department of Electrical Engineering, Nashville, TN 37235, USA
- Michael I Miga: Vanderbilt University, Department of Biomedical Engineering, Nashville, TN 37235, USA
- Thomas S Pheiffer: Vanderbilt University, Department of Biomedical Engineering, Nashville, TN 37235, USA
- Lola B Chambless: Vanderbilt University Medical Center, Department of Neurological Surgery, Nashville, TN 37232, USA
- Reid C Thompson: Vanderbilt University Medical Center, Department of Neurological Surgery, Nashville, TN 37232, USA
- Benoit M Dawant: Vanderbilt University, Department of Electrical Engineering, Nashville, TN 37235, USA

9. Ji S, Fan X, Roberts DW, Hartov A, Paulsen KD. Cortical surface shift estimation using stereovision and optical flow motion tracking via projection image registration. Med Image Anal 2014;18:1169-83. [PMID: 25077845] [DOI: 10.1016/j.media.2014.07.001]
Abstract
Stereovision is an important intraoperative imaging technique that noninvasively captures the exposed parenchymal surface during open cranial surgery. Estimating cortical surface shift efficiently and accurately is critical to compensate for brain deformation in the operating room (OR). In this study, we present an automatic and robust registration technique based on optical flow (OF) motion tracking to compensate for cortical surface displacement throughout surgery. Stereo images of the cortical surface were acquired at multiple time points after dural opening to reconstruct three-dimensional (3D) texture intensity-encoded cortical surfaces. A local coordinate system was established with its z-axis parallel to the average surface normal direction of the cortical surface reconstructed immediately after dural opening, in order to produce two-dimensional (2D) projection images. A dense displacement field between the two projection images was determined directly from OF motion tracking, without the need for feature identification or tracking. The starting and end points of the displacement vectors on the two cortical surfaces were then obtained by inverting the spatial mapping, producing the full 3D displacement of the exposed cortical surface. We evaluated the technique with images obtained from digital phantoms and 18 surgical cases: 10 involved independent measurements of feature locations acquired with a tracked stylus for accuracy comparisons, and 4 of the remaining 8 involved stereo image acquisitions at three or more time points during surgery to illustrate utility throughout a procedure. Results from the digital phantom images were very accurate (0.05 pixels). In the 10 surgical cases with independently digitized point locations, the average agreement between feature coordinates derived from the cortical surface reconstructions was 1.7-2.1 mm relative to those determined with the tracked stylus probe. The agreement in feature displacement tracking was also comparable to tracked probe data (difference in displacement magnitude was <1 mm on average). The average magnitude of cortical surface displacement was 7.9 ± 5.7 mm (range 0.3-24.4 mm) across all patient cases, with the displacement component along gravity being 5.2 ± 6.0 mm relative to a lateral movement of 2.4 ± 1.6 mm. Thus, our technique appears to be sufficiently accurate and computationally efficient (typically ∼15 s) for application in the OR.
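The projection-image construction can be sketched directly from the abstract's description: build a local frame whose z-axis is the average surface normal, project the textured surface orthographically into its xy-plane, run 2D optical flow between the two projection images, and invert the mapping using the stored depth. A sketch of the frame and projection, with illustrative names:

```python
import numpy as np

def projection_frame(normals):
    """Local frame whose z-axis is the average normal of the cortical
    surface reconstructed right after dural opening. Rows of the
    returned matrix are the local x, y, z axes."""
    z = normals.mean(axis=0)
    z /= np.linalg.norm(z)
    a = np.array([1.0, 0.0, 0.0])
    if abs(z @ a) > 0.9:                # avoid a near-parallel seed axis
        a = np.array([0.0, 1.0, 0.0])
    x = np.cross(a, z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z])

def project_to_plane(points, R, origin):
    """Orthographic (u, v) coordinates of surface points in the local
    frame, plus depth w; keeping w makes the 2D->3D mapping invertible
    after optical-flow tracking on the projection images."""
    local = (points - origin) @ R.T
    return local[:, :2], local[:, 2]
```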
Affiliation(s)
- Songbai Ji: Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA; Geisel School of Medicine, Dartmouth College, Hanover, NH 03755, USA
- Xiaoyao Fan: Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA
- David W Roberts: Geisel School of Medicine, Dartmouth College, Hanover, NH 03755, USA; Dartmouth Hitchcock Medical Center, Lebanon, NH 03756, USA
- Alex Hartov: Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA
- Keith D Paulsen: Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA; Geisel School of Medicine, Dartmouth College, Hanover, NH 03755, USA

10. Wang J, Suenaga H, Hoshi K, Yang L, Kobayashi E, Sakuma I, Liao H. Augmented Reality Navigation With Automatic Marker-Free Image Registration Using 3-D Image Overlay for Dental Surgery. IEEE Trans Biomed Eng 2014;61:1295-304. [DOI: 10.1109/tbme.2014.2301191]

11. März K, Franz AM, Seitel A, Winterstein A, Bendl R, Zelzer S, Nolden M, Meinzer HP, Maier-Hein L. MITK-US: real-time ultrasound support within MITK. Int J Comput Assist Radiol Surg 2013;9:411-20. [DOI: 10.1007/s11548-013-0962-z]

12.

13. Tamadazte B, Marchand E, Dembélé S, Le Fort-Piat N. CAD Model-based Tracking and 3D Visual-based Control for MEMS Microassembly. Int J Rob Res 2010. [DOI: 10.1177/0278364910376033]
Abstract
This paper investigates sequential robotic microassembly for the construction of 3D micro-electro-mechanical system (MEMS) structures using a 3D visual servoing approach. Previous solutions proposed in the literature for this kind of problem are based on 2D visual control because of the lack of precise and robust 3D measurements from the work scene. In this paper, the relevance of the proposed real-time 3D visual tracking method and 3D vision-based control law is demonstrated. The 3D poses of the MEMS parts are supplied in real time by a computer-aided design (CAD) model-based tracking algorithm. This algorithm is sufficiently accurate and robust to enable precise regulation of the 3D error toward zero using the proposed pose-based visual servoing approach. Experiments on a microrobotic setup have been carried out to achieve assemblies of two or more 400 μm × 400 μm × 100 μm silicon micro-objects by their respective 97 μm × 97 μm × 100 μm notches, with an assembly clearance of 1 μm to 5 μm. The different microassembly processes are performed with a mean error of 0.3 μm in position and 0.35 × 10⁻² rad in orientation.
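The control side rests on the classical pose-based visual servoing law: with the tracker supplying the object pose in real time, the 6D pose error is driven to zero exponentially by a proportional velocity command. A simplified sketch follows; the paper's exact control law and gains are not reproduced.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pbvs_twist(T_current, T_goal, lam=0.5):
    """One proportional PBVS step. T_current, T_goal: 4x4 homogeneous
    poses of the micro-object from the CAD-model-based tracker.
    Returns a (6,) twist (vx, vy, vz, wx, wy, wz) expressed in the
    current object frame."""
    T_err = np.linalg.inv(T_current) @ T_goal
    t_err = T_err[:3, 3]                                     # translation error
    r_err = Rotation.from_matrix(T_err[:3, :3]).as_rotvec()  # axis-angle error
    return lam * np.concatenate([t_err, r_err])
```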
Affiliation(s)
- B. Tamadazte: Femto-St Institute, UMR CNRS 6174 - UFC / ENSMM / UTBM, Automatic Control and Micro-Mechatronic Systems Department, Besançon, France
- E. Marchand: INRIA Rennes-Bretagne Atlantique, IRISA, Lagadic, France
- S. Dembélé: Femto-St Institute, UMR CNRS 6174 - UFC / ENSMM / UTBM, Automatic Control and Micro-Mechatronic Systems Department, Besançon, France
- N. Le Fort-Piat: Femto-St Institute, UMR CNRS 6174 - UFC / ENSMM / UTBM, Automatic Control and Micro-Mechatronic Systems Department, Besançon, France

14. Kockro RA, Tsai YT, Ng I, Hwang P, Zhu C, Agusanto K, Hong LX, Serra L. Dex-ray: augmented reality neurosurgical navigation with a handheld video probe. Neurosurgery 2010;65:795-807; discussion 807-8. [PMID: 19834386] [DOI: 10.1227/01.neu.0000349918.36700.1c]
Abstract
OBJECTIVE: We developed an augmented reality system that enables intraoperative image guidance by using 3-dimensional (3D) graphics overlaid on a video stream. We call this system DEX-Ray and report on its development and the initial intraoperative experience in 12 cases.
METHODS: DEX-Ray consists of a tracked handheld probe that integrates a lipstick-size video camera. The camera looks over the probe's tip into the surgical field. The camera's video stream is augmented with coregistered, multimodality 3D graphics and landmarks obtained during neurosurgical planning with 3D workstations. The handheld probe functions as a navigation device to view and point and as an interaction device to adjust the 3D graphics. We tested the system's accuracy in the laboratory and evaluated it intraoperatively with a series of tumor and vascular cases.
RESULTS: DEX-Ray provided an accurate and real-time video-based augmented reality display. The system could be seamlessly integrated into the surgical workflow. The see-through effect revealing 3D information below the surgically exposed surface proved to be of significant value, especially during the macroscopic phase of an operation, providing easily understandable structural navigational information. Navigation in deep and narrow surgical corridors was limited by the camera's resolution and light sensitivity.
CONCLUSION: The system was perceived as an improved navigational experience because the augmented see-through effect allowed direct understanding of the surgical anatomy beyond the visible surface and direct guidance toward surgical targets.
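The overlay step of such a video-based AR probe is, in essence, projecting coregistered 3D planning data into the tracked camera and drawing it on the live frame. A sketch using OpenCV's projectPoints, with the pose assumed to come from the tracking system; all names are illustrative.

```python
import cv2
import numpy as np

def overlay_landmarks(frame, pts_world, rvec, tvec, K, dist):
    """Draw coregistered 3D planning landmarks on a live video frame.
    rvec, tvec: world-to-camera pose from the probe's tracker;
    K, dist: intrinsics and distortion from camera calibration."""
    pts2d, _ = cv2.projectPoints(pts_world.astype(np.float32),
                                 rvec, tvec, K, dist)
    for u, v in pts2d.reshape(-1, 2):
        cv2.circle(frame, (int(round(u)), int(round(v))),
                   4, (0, 255, 0), -1)     # filled green dot
    return frame
```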
Affiliation(s)
- Ralf A Kockro: Department of Neurosurgery, University Hospital Zürich, Zürich, Switzerland

15. Figl M, Rueckert D, Hawkes D, Casula R, Hu M, Pedro O, Zhang DP, Penney G, Bello F, Edwards P. Image guidance for robotic minimally invasive coronary artery bypass. Comput Med Imaging Graph 2010;34:61-8. [DOI: 10.1016/j.compmedimag.2009.08.002]

16. Sielhorst T, Feuerstein M, Navab N. Advanced Medical Displays: A Literature Review of Augmented Reality. J Disp Technol 2008. [DOI: 10.1109/jdt.2008.2001575]

17. Traub J, Sielhorst T, Heining SM, Navab N. Advanced Display and Visualization Concepts for Image Guided Surgery. J Disp Technol 2008. [DOI: 10.1109/jdt.2008.2006510]

18. García J, Thoranaghatte R, Marti G, Zheng G, Caversaccio M, González Ballester MA. Calibration of a surgical microscope with automated zoom lenses using an active optical tracker. Int J Med Robot 2008;4:87-93. [PMID: 18275035] [DOI: 10.1002/rcs.180]
Abstract
BACKGROUND: In this paper, we present a new method for the calibration of a microscope and its registration using an active optical tracker.
METHODS: In practice, both operations are done simultaneously by moving an active optical marker within the field of view of the two devices. The IR LEDs composing the marker are first segmented from the microscope images. Knowing their corresponding three-dimensional (3D) positions in the optical tracker reference system, it is possible to find the transformation matrix between the reference frames of the two devices. Registration and calibration parameters can be extracted directly from that transformation. In addition, since the zoom and focus can be modified by the surgeon during the operation, we propose a spline-based method to update the camera model to the new settings.
RESULTS: The proposed technique is currently being used in an augmented reality system for image-guided surgery in the fields of ear, nose and throat (ENT) and craniomaxillofacial surgery.
CONCLUSIONS: The results have proved to be accurate, and the technique is a fast, dynamic and reliable way to calibrate and register the two devices in an OR environment.
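The spline-based update can be sketched as fitting each intrinsic parameter against the zoom (and, analogously, focus) setting at a handful of calibrated positions, then evaluating the fit at whatever setting the surgeon selects. The sample values below are invented for illustration.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Off-line calibrations at sampled zoom positions (illustrative values).
zoom_samples = np.array([1.0, 1.5, 2.0, 3.0, 4.0])
focal_samples = np.array([2400., 3550., 4700., 7100., 9500.])  # pixels

focal_of_zoom = CubicSpline(zoom_samples, focal_samples)

def intrinsics_at(zoom, cx=640.0, cy=512.0):
    """Camera matrix interpolated to the surgeon's current zoom; the
    principal point is held fixed here for simplicity."""
    f = float(focal_of_zoom(zoom))
    return np.array([[f, 0.0, cx],
                     [0.0, f, cy],
                     [0.0, 0.0, 1.0]])
```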
Affiliation(s)
- Jaime García: MEM Research Centre, University of Bern, Switzerland

19. Krempien R, Hoppe H, Kahrs L, Daeuber S, Schorr O, Eggers G, Bischof M, Munter MW, Debus J, Harms W. Projector-based augmented reality for intuitive intraoperative guidance in image-guided 3D interstitial brachytherapy. Int J Radiat Oncol Biol Phys 2007;70:944-52. [PMID: 18164834] [DOI: 10.1016/j.ijrobp.2007.10.048]
Abstract
PURPOSE: The aim of this study is to implement augmented reality in real-time image-guided interstitial brachytherapy to allow intuitive real-time intraoperative orientation.
METHODS AND MATERIALS: The developed system consists of a common video projector, two high-resolution charge-coupled device (CCD) cameras, and an off-the-shelf notebook. The projector was used as a scanning device, projecting coded-light patterns to register the patient and to superimpose the operating field with planning data and additional information in arbitrary colors. Subsequent movements of the non-fixed patient were detected by stereoscopically tracking passive markers attached to the patient.
RESULTS: In a first clinical study, we evaluated the whole process chain from image acquisition to data projection and determined the overall accuracy in 10 patients undergoing implantation. The described method enabled the surgeon to visualize planning data on top of any preoperatively segmented and triangulated surface (skin) with a direct line of sight during the operation. Furthermore, the tracking system allowed dynamic adjustment of the data to the patient's current position, eliminating the need for rigid fixation. Because of soft-part displacement, we obtained an average deviation of 1.1 mm when moving the patient, whereas changing the projector's position resulted in an average deviation of 0.9 mm. The mean deviation of all needles of an implant was 1.4 mm (range, 0.3-2.7 mm).
CONCLUSIONS: The developed low-cost augmented-reality system proved to be accurate and feasible for interstitial brachytherapy. The system meets clinical demands and enables intuitive real-time intraoperative orientation and monitoring of needle implantation.
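Using the projector as a scanning device typically means projecting binary coded-light (e.g., Gray code) stripe patterns and decoding, per camera pixel, which projector column it sees; those correspondences drive the surface registration. A sketch of the pattern-generation half, under the assumption that Gray-code column coding is the scheme in play:

```python
import numpy as np

def gray_code_patterns(width, height):
    """Stripe patterns encoding each projector column in binary-
    reflected Gray code, most significant bit first. Projecting these
    and thresholding the camera images recovers, per camera pixel, the
    projector column illuminating it."""
    n_bits = int(np.ceil(np.log2(width)))
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)             # binary-reflected Gray code
    patterns = []
    for b in reversed(range(n_bits)):
        stripe = ((gray >> b) & 1).astype(np.uint8) * 255
        patterns.append(np.tile(stripe, (height, 1)))
    return patterns

pats = gray_code_patterns(1024, 768)      # 10 patterns for 1024 columns
```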
Affiliation(s)
- Robert Krempien: Department of Radiation Oncology, University of Heidelberg, Heidelberg, Germany

20.
Abstract
Contemporary imaging modalities can now provide the surgeon with high-quality three- and four-dimensional images depicting not only normal anatomy and pathology, but also vascularity and function. A key component of image-guided surgery (IGS) is the ability to register multi-modal pre-operative images to each other and to the patient. The other important component of IGS is the ability to track instruments in real time during the procedure and to display them as part of a realistic model of the operative volume. Stereoscopic, virtual- and augmented-reality techniques have been implemented to enhance the visualization and guidance process. For the most part, IGS relies on the assumption that the pre-operatively acquired images used to guide the surgery accurately represent the morphology of the tissue during the procedure. This assumption may not necessarily hold, and so intra-operative real-time imaging modalities such as interventional MRI, ultrasound, video and electrophysiological recordings are often employed to ameliorate this situation. Although IGS is now in extensive routine clinical use in neurosurgery and is gaining ground in other surgical disciplines, many drawbacks remain that must be overcome before it can be employed in more general minimally-invasive procedures. This review covers the roots of IGS in neurosurgery, provides examples of its use outside the brain, discusses the infrastructure required for the successful implementation of IGS approaches, and outlines the challenges that must be overcome for IGS to advance further.
Affiliation(s)
- Terry M Peters: Robarts Research Institute, University of Western Ontario, PO Box 5015, 100 Perth Drive, London, ON N6A 5K8, Canada