1. Macedo MCF, Apolinario AL. Occlusion Handling in Augmented Reality: Past, Present and Future. IEEE Trans Vis Comput Graph 2023; 29:1590-1609. [PMID: 34613916] [DOI: 10.1109/tvcg.2021.3117866]
Abstract
One of the main goals of many augmented reality applications is the seamless integration of a real scene with additional virtual data. To fully achieve that goal, such applications must typically provide high-quality real-world tracking, run in real time, and handle the mutual occlusion problem: estimating the position of the virtual data in the real scene and rendering the virtual content accordingly. In this survey, we focus on the occlusion handling problem in augmented reality applications and provide a detailed review of 161 articles published in this field between January 1992 and August 2020. To do so, we present a historical overview of the most common strategies employed to determine the depth order between real and virtual objects, to visualize hidden objects in a real scene, and to build occlusion-capable visual displays. Moreover, we survey the state-of-the-art techniques, highlight recent research trends, discuss the current open problems of occlusion handling in augmented reality, and suggest directions for future research.
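Across the strategies this survey covers, model- and depth-based occlusion handling ultimately reduces to a per-pixel depth comparison between the sensed real scene and the rendered virtual content. The NumPy sketch below is a minimal illustration of that comparison, not a reproduction of any specific method from the survey; the function name and toy scene are invented.

```python
import numpy as np

def composite_with_occlusion(color_real, color_virtual, depth_real, depth_virtual):
    """Per-pixel occlusion handling: show a virtual fragment only where it is
    closer to the camera than the sensed real-world surface.

    color_real    : (H, W, 3) camera image
    color_virtual : (H, W, 3) rendered virtual content
    depth_real    : (H, W) depth map of the real scene (e.g., from an RGB-D sensor)
    depth_virtual : (H, W) z-buffer of the virtual render (np.inf where empty)
    """
    virtual_wins = depth_virtual < depth_real   # virtual fragment is in front
    mask = virtual_wins[..., None]              # broadcast over the RGB channels
    return np.where(mask, color_virtual, color_real)

# Toy example: a virtual square half-hidden behind a real wall at 2 m.
H, W = 4, 4
color_real = np.zeros((H, W, 3)); color_virtual = np.ones((H, W, 3))
depth_real = np.full((H, W), 2.0)
depth_virtual = np.full((H, W), np.inf)
depth_virtual[:, :2] = 1.0   # left half sits in front of the wall
depth_virtual[:, 2:] = 3.0   # right half sits behind the wall
out = composite_with_occlusion(color_real, color_virtual, depth_real, depth_virtual)
print(out[0, 0], out[0, 3])  # left pixel shows virtual, right pixel shows real
```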
2. Martin-Gomez A, Weiss J, Keller A, Eck U, Roth D, Navab N. The Impact of Focus and Context Visualization Techniques on Depth Perception in Optical See-Through Head-Mounted Displays. IEEE Trans Vis Comput Graph 2022; 28:4156-4171. [PMID: 33979287] [DOI: 10.1109/tvcg.2021.3079849]
Abstract
Estimating the depth of virtual content has proven to be a challenging task in Augmented Reality (AR) applications. Existing studies have shown that the visual system makes use of multiple depth cues to infer the distance of objects, occlusion being one of the most important ones. The ability to generate appropriate occlusions becomes particularly important for AR applications that require the visualization of augmented objects placed below a real surface. Examples are medical scenarios in which anatomical information must be visualized within the patient's body. In this regard, existing works have proposed several focus and context (F+C) approaches to aid users in visualizing such content using Video See-Through (VST) Head-Mounted Displays (HMDs). However, the implementation of these approaches in Optical See-Through (OST) HMDs remains an open question due to the additive characteristics of the display technology. In this article, we design and conduct the first user study that compares depth estimation between VST and OST HMDs using existing in-situ visualization methods. Our results show that these visualizations cannot be directly transferred to OST displays without increasing error in depth perception tasks. To close this gap, we perform a structured decomposition of the visual properties of AR F+C methods to find best-performing combinations. We propose the use of chromatic shadows and hatching approaches transferred from computer graphics. In a second study, we perform a factorized analysis of these combinations, showing that varying the shading type and using colored shadows can lead to better depth estimation when using OST HMDs.
3. Zollmann S, Langlotz T, Grasset R, Lo WH, Mori S, Regenbrecht H. Visualization Techniques in Augmented Reality: A Taxonomy, Methods and Patterns. IEEE Trans Vis Comput Graph 2021; 27:3808-3825. [PMID: 32275601] [DOI: 10.1109/tvcg.2020.2986247]
Abstract
In recent years, the development of Augmented Reality (AR) frameworks has made AR application development accessible to developers without an expert AR background. With this development, new application fields for AR are on the rise, and with them an increased need for visualization techniques that suit a wide range of application areas. It is therefore becoming more important for a wider audience to gain a solid understanding of existing AR visualization techniques. In this article we provide a taxonomy of existing work on visualization techniques in AR. The taxonomy aims to give researchers and developers without an in-depth background in Augmented Reality the information they need to successfully apply visualization techniques in Augmented Reality environments. We also describe the required components and methods and analyze common patterns.
4. Wachs JP, Kirkpatrick AW, Tisherman SA. Procedural Telementoring in Rural, Underdeveloped, and Austere Settings: Origins, Present Challenges, and Future Perspectives. Annu Rev Biomed Eng 2021; 23:115-139. [PMID: 33770455] [DOI: 10.1146/annurev-bioeng-083120-023315]
Abstract
Telemedicine is perhaps the most rapidly growing area in health care. Approximately 15 million Americans receive medical assistance remotely every year. Yet rural communities face significant challenges in securing subspecialist care. In the United States, 25% of the population resides in rural areas, where less than 15% of physicians work. Current surgery residency programs do not adequately prepare surgeons for rural practice. Telementoring, wherein a remote expert guides a less experienced caregiver, has been proposed to address this challenge. Nonetheless, existing mentoring technologies are not widely available to rural communities, due to a lack of infrastructure and mentor availability. For this reason, some clinicians prefer simpler and more reliable technologies. This article presents past and current telementoring systems, with a focus on rural settings, and proposes a set of requirements for such systems. We conclude with a perspective on the future of telementoring systems and the integration of artificial intelligence within those systems.
Affiliation(s)
- Juan P Wachs
- School of Industrial Engineering, Purdue University, West Lafayette, Indiana 47907, USA
- Andrew W Kirkpatrick
- Departments of Critical Care Medicine, Surgery, and Medicine; Snyder Institute for Chronic Diseases; and the Trauma Program, University of Calgary and Alberta Health Services, Calgary, Alberta T2N 2T9, Canada; Tele-Mentored Ultrasound Supported Medical Interaction (TMUSMI) Research Group, Foothills Medical Centre, Calgary, Alberta T2N 2T9, Canada
- Samuel A Tisherman
- Department of Surgery and the Program in Trauma, University of Maryland School of Medicine, Baltimore, Maryland 21201, USA
5. Andrews CM, Henry AB, Soriano IM, Southworth MK, Silva JR. Registration Techniques for Clinical Applications of Three-Dimensional Augmented Reality Devices. IEEE J Transl Eng Health Med 2020; 9:4900214. [PMID: 33489483] [PMCID: PMC7819530] [DOI: 10.1109/jtehm.2020.3045642]
Abstract
Many clinical procedures would benefit from direct and intuitive real-time visualization of anatomy, surgical plans, or other information crucial to the procedure. Three-dimensional augmented reality (3D-AR) is an emerging technology that has the potential to assist physicians with spatial reasoning during clinical interventions. The most intriguing applications of 3D-AR involve visualizations of anatomy or surgical plans that appear directly on the patient. However, commercially available 3D-AR devices have spatial localization errors that are too large for many clinical procedures. For this reason, a variety of approaches for improving 3D-AR registration accuracy have been explored. The focus of this review is on the methods, accuracy, and clinical applications of registering 3D-AR devices with the clinical environment. The works cited represent a variety of approaches for registering holograms to patients, including manual registration, computer vision-based registration, and registrations that incorporate external tracking systems. Evaluations of user accuracy when performing clinically relevant tasks suggest that accuracies of approximately 2 mm are feasible. 3D-AR device limitations due to the vergence-accommodation conflict or other factors attributable to the headset hardware add on the order of 1.5 mm of error compared to conventional guidance. Continued improvements to 3D-AR hardware will decrease these sources of error.
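Most of the registration pipelines reviewed here, whether manual, vision-based, or tracker-based, reduce at some stage to a rigid alignment of corresponding point sets. A minimal sketch of the standard SVD-based least-squares solution is shown below; it is an illustrative building block run on invented data, not a reproduction of any cited system.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.
    src, dst: (N, 3) arrays of corresponding fiducial coordinates."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Usage: align hologram-space fiducials to tracker-space fiducials (synthetic).
rng = np.random.default_rng(0)
pts = rng.normal(size=(6, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
obs = pts @ R_true.T + np.array([10.0, -2.0, 5.0])
R, t = rigid_register(pts, obs)
fre = np.sqrt(np.mean(np.sum((pts @ R.T + t - obs) ** 2, axis=1)))
print(f"fiducial registration error: {fre:.2e}")       # ~0 for noise-free data
```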
Affiliation(s)
- Christopher M. Andrews
- Department of Biomedical Engineering, McKelvey School of Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- SentiAR, Inc., St. Louis, MO 63108, USA
- Jonathan R. Silva
- Department of Biomedical Engineering, McKelvey School of Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
6. Qian L, Wu JY, DiMaio SP, Navab N, Kazanzides P. A Review of Augmented Reality in Robotic-Assisted Surgery. IEEE Trans Med Robot Bionics 2020. [DOI: 10.1109/tmrb.2019.2957061]
7.
8. Augmented visualization with depth perception cues to improve the surgeon's performance in minimally invasive surgery. Med Biol Eng Comput 2018; 57:995-1013. [PMID: 30511205] [DOI: 10.1007/s11517-018-1929-6]
Abstract
Minimally invasive techniques, such as laparoscopy and radiofrequency ablation of tumors, bring important advantages in surgery: by minimizing incisions on the patient's body, they can reduce the hospitalization period and the risk of postoperative complications. Unfortunately, they come with drawbacks for surgeons, who have a restricted view of the operation area through an indirect access and the 2D images provided by a camera inserted in the body. Augmented reality can provide an "X-ray vision" of the patient's anatomy by visualizing the internal organs in place, freeing surgeons from the task of mentally mapping content from CT images onto the operative scene. We present a navigation system that supports surgeons in the preoperative and intraoperative phases, and an augmented reality system that superimposes virtual organs on the patient's body together with depth and distance information. We implemented a combination of visual and audio cues that allows the surgeon to improve intervention precision and avoid the risk of damaging anatomical structures. The test scenarios demonstrated the efficacy and accuracy of the system. Moreover, tests in the operating room suggested some modifications to the tracking system to make it more robust to occlusions.
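One simple way to realize the distance cues this abstract describes is to drive both a color ramp and an audio alert rate from the tool-to-structure distance. The sketch below is a generic illustration under assumed thresholds, not the authors' implementation; names such as `beep_interval_s` and the threshold values are hypothetical.

```python
import numpy as np

def distance_cues(tool_tip, structure_pts, warn_mm=20.0, danger_mm=5.0):
    """Map the minimum distance from the tool tip to a critical structure
    onto a traffic-light color and an audio beep interval."""
    d = np.min(np.linalg.norm(structure_pts - tool_tip, axis=1))
    if d <= danger_mm:
        color, beep_interval_s = (1.0, 0.0, 0.0), 0.1      # red, fast beeps
    elif d <= warn_mm:
        a = (d - danger_mm) / (warn_mm - danger_mm)        # interpolate amber
        color, beep_interval_s = (1.0, a, 0.0), 0.1 + 0.9 * a
    else:
        color, beep_interval_s = (0.0, 1.0, 0.0), None     # green, silent
    return d, color, beep_interval_s

# Hypothetical vessel sampled as a polyline of 3D points (mm).
vessel = np.array([[0.0, 0.0, z] for z in np.linspace(0, 50, 11)])
print(distance_cues(np.array([3.0, 0.0, 25.0]), vessel))   # inside danger zone
print(distance_cues(np.array([40.0, 0.0, 25.0]), vessel))  # safely far away
```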
9. Hettig J, Engelhardt S, Hansen C, Mistelbauer G. AR in VR: assessing surgical augmented reality visualizations in a steerable virtual reality environment. Int J Comput Assist Radiol Surg 2018; 13:1717-1725. [DOI: 10.1007/s11548-018-1825-4]
10. Haouchine N, Kuang W, Cotin S, Yip M. Vision-Based Force Feedback Estimation for Robot-Assisted Surgery Using Instrument-Constrained Biomechanical Three-Dimensional Maps. IEEE Robot Autom Lett 2018. [DOI: 10.1109/lra.2018.2810948]
11. Real-time surgical tool tracking and pose estimation using a hybrid cylindrical marker. Int J Comput Assist Radiol Surg 2017; 12:921-930. [PMID: 28342105] [PMCID: PMC5447333] [DOI: 10.1007/s11548-017-1558-9]
Abstract
PURPOSE: To provide an integrated visualisation of intraoperative ultrasound and endoscopic images for intraoperative guidance, real-time tracking of the ultrasound probe is required. State-of-the-art methods are suitable for planar targets, whereas most laparoscopic ultrasound probes are cylindrical objects. A tracking framework for cylindrical objects with a large workspace would improve the usability of intraoperative ultrasound guidance.

METHODS: A hybrid marker design that combines circular dots and chessboard vertices is proposed to facilitate the tracking of cylindrical tools. The circular dots placed over the curved surface are used for pose estimation. The chessboard vertices provide additional information for resolving the pose ambiguity that arises from using planar model points under a monocular camera. Furthermore, temporal information between consecutive images is used to minimise tracking failures while maintaining real-time computational performance.

RESULTS: Detailed validation confirms that our hybrid marker provides a large workspace for different tool sizes (6-14 mm in diameter). The tracking framework allows translational movements between 40 and 185 mm along the depth direction and rotational motion around three local orthogonal axes up to [Formula: see text]. Comparative studies with the current state of the art confirm that our approach outperforms existing methods, providing nearly 100% detection rates and accurate pose estimation with mean errors of 2.8 mm and 0.72[Formula: see text]. The tracking algorithm runs at 20 frames per second for [Formula: see text] image resolution videos.

CONCLUSION: Experiments show that the proposed hybrid marker can be applied to a wide range of surgical tools with superior detection rates and pose estimation accuracies. Both the qualitative and quantitative results demonstrate that our framework can be used not only to assist intraoperative ultrasound guidance but also to track general surgical tools in MIS.
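Marker-based tool tracking of this kind ultimately solves a Perspective-n-Point (PnP) problem: recovering the tool pose from detected keypoints whose 3D positions on the marker are known. The OpenCV sketch below isolates that step with an invented point layout and assumed camera intrinsics; it does not reproduce the paper's hybrid dot/vertex detection, and the planar-layout pose ambiguity the abstract mentions is exactly why a plain solver like this benefits from the extra chessboard cues.

```python
import numpy as np
import cv2

# Hypothetical 3D keypoint layout on the marker, in the tool frame (mm).
model_pts = np.array([[0, 0, 0], [20, 0, 0], [20, 10, 0], [0, 10, 0],
                      [10, 5, 0]], dtype=np.float64)
K = np.array([[800.0, 0.0, 320.0],       # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                        # assume undistorted images

# Synthesize detections for a known ground-truth pose, then recover it.
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([5.0, -10.0, 300.0])
img_pts, _ = cv2.projectPoints(model_pts, rvec_true, tvec_true, K, dist)

ok, rvec, tvec = cv2.solvePnP(model_pts, img_pts, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
print(ok, np.round(tvec.ravel(), 2))      # ~ [5, -10, 300]
```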
12.
13. Marcus HJ, Pratt P, Hughes-Hallett A, Cundy TP, Marcus AP, Yang GZ, Darzi A, Nandi D. Comparative effectiveness and safety of image guidance systems in neurosurgery: a preclinical randomized study. J Neurosurg 2015; 123:307-13. [PMID: 25909567] [DOI: 10.3171/2014.10.jns141662]
Abstract
OBJECT: Over the last decade, image guidance systems have been widely adopted in neurosurgery. Nonetheless, the evidence supporting the use of these systems in surgery remains limited. The aim of this study was to simultaneously compare the effectiveness and safety of various image guidance systems against standard surgery.

METHODS: In this preclinical, randomized study, 50 novice surgeons were allocated to one of five groups: 1) no image guidance, 2) triplanar display, 3) always-on solid overlay, 4) always-on wire mesh overlay, and 5) on-demand inverse realism overlay. Each participant was asked to identify a basilar tip aneurysm in a validated model head. The primary outcomes were time to task completion (in seconds) and tool path length (in mm). The secondary outcomes were recognition of an unexpected finding (i.e., a surgical clip) and subjective depth perception measured on a Likert scale.

RESULTS: The time to task completion and tool path length were significantly lower with any form of image guidance than with no image guidance (p < 0.001 and p = 0.003, respectively). The tool path distance was also lower in groups using augmented reality than with the triplanar display (p = 0.010). The always-on solid overlay resulted in the greatest inattentional blindness (20% recognition of the unexpected finding). Wire mesh and on-demand overlays mitigated, but did not negate, inattentional blindness and were comparable to the triplanar display (40% recognition of the unexpected finding in all groups). Wire mesh and inverse realism overlays also resulted in better subjective depth perception than the always-on solid overlay (p = 0.031 and p = 0.008, respectively).

CONCLUSIONS: New augmented reality platforms may improve performance in less-experienced surgeons. However, all image display modalities, including existing triplanar displays, carry a risk of inattentional blindness.
Affiliation(s)
- Hani J Marcus
- The Hamlyn Centre for Robotic Surgery, Institute of Global Health Innovation, and Department of Neurosurgery, Imperial College Healthcare NHS Trust, London, United Kingdom
- Philip Pratt
- The Hamlyn Centre for Robotic Surgery, Institute of Global Health Innovation
- Thomas P Cundy
- The Hamlyn Centre for Robotic Surgery, Institute of Global Health Innovation
- Adam P Marcus
- Faculty of Medicine, Imperial College London, United Kingdom
- Guang-Zhong Yang
- The Hamlyn Centre for Robotic Surgery, Institute of Global Health Innovation
- Ara Darzi
- The Hamlyn Centre for Robotic Surgery, Institute of Global Health Innovation
- Dipankar Nandi
- Department of Neurosurgery, Imperial College Healthcare NHS Trust, London, United Kingdom
14. Marcus H, Nandi D, Darzi A, Yang GZ. Surgical Robotics Through a Keyhole: From Today's Translational Barriers to Tomorrow's “Disappearing” Robots. IEEE Trans Biomed Eng 2013; 60:674-81. [DOI: 10.1109/tbme.2013.2243731]
15. Vitiello V, Lee SL, Cundy TP, Yang GZ. Emerging robotic platforms for minimally invasive surgery. IEEE Rev Biomed Eng 2012; 6:111-26. [PMID: 23288354] [DOI: 10.1109/rbme.2012.2236311]
Abstract
Recent technological advances in surgery have resulted in the development of a range of new techniques that have reduced patient trauma, shortened hospitalization, and improved diagnostic accuracy and therapeutic outcome. Despite the many appreciated benefits of minimally invasive surgery (MIS) compared to traditional approaches, there are still significant drawbacks associated with conventional MIS including poor instrument control and ergonomics caused by rigid instrumentation and its associated fulcrum effect. The use of robot assistance has helped to realize the full potential of MIS with improved consistency, safety and accuracy. The development of articulated, precision tools to enhance the surgeon's dexterity has evolved in parallel with advances in imaging and human-robot interaction. This has improved hand-eye coordination and manual precision down to micron scales, with the capability of navigating through complex anatomical pathways. In this review paper, clinical requirements and technical challenges related to the design of robotic platforms for flexible access surgery are discussed. Allied technical approaches and engineering challenges related to instrument design, intraoperative guidance, and intelligent human-robot interaction are reviewed. We also highlight emerging designs and research opportunities in the field by assessing the current limitations and open technical challenges for the wider clinical uptake of robotic platforms in MIS.
16. Cheung CL, Wedlake C, Moore J, Pautler SE, Peters TM. Fused video and ultrasound images for minimally invasive partial nephrectomy: a phantom study. Med Image Comput Comput Assist Interv 2010; 13:408-15. [PMID: 20879426] [DOI: 10.1007/978-3-642-15711-0_51]
Abstract
The shift to minimally invasive abdominal surgery has increased reliance on image guidance during surgical procedures. However, these images are most often presented independently, increasing the surgeon's cognitive workload and potentially lengthening procedure time. When warm ischemia of an organ is involved, time is an important factor to consider. To address these limitations, we present a more intuitive visualization that combines the images in a common augmented reality environment. In this paper, we assess surgeon performance under the guidance of the conventional visualization system and of our fusion system, using a phantom study that mimics tumour resection in partial nephrectomy. The RMS error between the fused images was 2.43 mm, which is sufficient for our purposes. A faster planning time for the resection was achieved with our fusion visualization system. This result is a positive step towards decreasing the risks associated with long procedure times in minimally invasive abdominal interventions.
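Fusion accuracy in studies like this one is commonly summarized as the RMS distance between corresponding landmarks in the two registered modalities. A small worked sketch follows, with invented coordinates (the landmark values below are hypothetical, not the study's data).

```python
import numpy as np

def rms_error(a, b):
    """Root-mean-square distance between corresponding landmark sets (N, 3)."""
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

# Hypothetical landmark pairs from the video and ultrasound views (mm).
video_pts = np.array([[10.0, 5.0, 2.0], [30.0, 8.0, 1.0], [22.0, 15.0, 4.0]])
us_pts    = np.array([[11.5, 6.0, 3.5], [28.0, 9.0, 2.5], [23.0, 13.0, 2.0]])
print(f"RMS error: {rms_error(video_pts, us_pts):.2f} mm")  # ~2.7 mm here
```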
Affiliation(s)
- Carling L Cheung
- Imaging Research Laboratories, Robarts Research Institute, Ontario, Canada
17. Liao H, Inomata T, Sakuma I, Dohi T. 3-D Augmented Reality for MRI-Guided Surgery Using Integral Videography Autostereoscopic Image Overlay. IEEE Trans Biomed Eng 2010; 57:1476-86. [DOI: 10.1109/tbme.2010.2040278]
18. Pratt P, Stoyanov D, Visentini-Scarzanella M, Yang GZ. Dynamic Guidance for Robotic Surgery Using Image-Constrained Biomechanical Models. Med Image Comput Comput Assist Interv 2010; 13:77-85. [DOI: 10.1007/978-3-642-15705-9_10]
19. Linte CA, White J, Eagleson R, Guiraudon GM, Peters TM. Virtual and Augmented Medical Imaging Environments: Enabling Technology for Minimally Invasive Cardiac Interventional Guidance. IEEE Rev Biomed Eng 2010; 3:25-47. [DOI: 10.1109/rbme.2010.2082522]
20. Motion compensated SLAM for image guided surgery. Med Image Comput Comput Assist Interv 2010; 13:496-504. [PMID: 20879352] [DOI: 10.1007/978-3-642-15745-5_61]
Abstract
The effectiveness and clinical benefits of image guided surgery are well established for procedures where there is manageable tissue motion. In minimally invasive cardiac, gastrointestinal, or abdominal surgery, large scale tissue deformation prohibits accurate registration and fusion of pre- and intraoperative data. Vision based techniques such as structure from motion and simultaneous localization and mapping are capable of recovering 3D structure and laparoscope motion. Current research in the area generally assumes the environment is static, which is difficult to satisfy in most surgical procedures. In this paper, a novel framework for simultaneous online estimation of laparoscopic camera motion and tissue deformation in a dynamic environment is proposed. The method only relies on images captured by the laparoscope to sequentially and incrementally generate a dynamic 3D map of tissue motion that can be co-registered with pre-operative data. The theoretical contribution of this paper is validated with both simulated and ex vivo data. The practical application of the technique is further demonstrated on in vivo procedures.
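The vision-only machinery beneath such a framework can be illustrated by its simplest rigid building block: tracking features across two laparoscope frames, estimating relative camera motion from the essential matrix, and triangulating a sparse map. The OpenCV sketch below shows that static-scene baseline only; the paper's actual contribution, simultaneously estimating tissue deformation on top of it, is not reproduced here, and the function name is invented.

```python
import numpy as np
import cv2

def two_view_structure(img1, img2, K):
    """Track ORB features across two frames, estimate relative camera pose from
    the essential matrix, and triangulate a sparse 3D map (static scene assumed)."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float64([k1[m.queryIdx].pt for m in matches])
    p2 = np.float64([k2[m.trainIdx].pt for m in matches])
    E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at origin
    P2 = K @ np.hstack([R, t])                          # second camera, up to scale
    pts4 = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
    return R, t, (pts4[:3] / pts4[3]).T                 # up-to-scale 3D points

# Usage: R, t, pts3d = two_view_structure(frame_a, frame_b, K) on grayscale frames.
```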
21. Hansen C, Wieferich J, Ritter F, Rieder C, Peitgen HO. Illustrative visualization of 3D planning models for augmented reality in liver surgery. Int J Comput Assist Radiol Surg 2009; 5:133-41. [PMID: 20033519] [DOI: 10.1007/s11548-009-0365-3]
Abstract
PURPOSE: Augmented reality (AR) is gaining acceptance in the operating room. However, a meaningful augmentation of the surgical view with a 3D visualization of planning data that allows reliable comparison of distances and spatial relations remains an open problem.

METHODS: We introduce methods for intraoperative visualization of 3D planning models that extend illustrative rendering and AR techniques. We aim to reduce the visual complexity of 3D planning models and accentuate spatial relations between relevant objects. The main contribution of our work is an advanced silhouette algorithm for 3D planning models (distance-encoding silhouettes) combined with procedural textures (distance-encoding surfaces). In addition, we present a method for illustrative visualization of resection surfaces.

RESULTS: The developed algorithms have been embedded in a clinical prototype that has been evaluated in the operating room. To verify the expressiveness of our illustration methods, we performed a user study under controlled conditions. The study revealed a clear advantage in distance assessment with the proposed illustrative approach in comparison to classical rendering techniques.

CONCLUSION: The presented illustration methods are beneficial for distance assessment in surgical AR. To increase the safety of interventions with the proposed approach, reducing inaccuracies in tracking and registration is a subject of our current research.
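The distance-encoding silhouette idea can be caricatured in a few lines: find the view-dependent silhouette edges of a mesh, then modulate each stroke's width and color by its distance to a reference structure. The sketch below is a toy reconstruction under assumed conventions (consistent triangle winding, an invented encoding ramp), not the authors' algorithm.

```python
import numpy as np

def silhouette_edges(verts, faces, cam_pos):
    """Find mesh edges shared by one front-facing and one back-facing triangle,
    as seen from cam_pos. verts: (V, 3) coordinates; faces: (F, 3) vertex indices."""
    tri = verts[faces]                                     # (F, 3, 3)
    n = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    front = np.einsum('ij,ij->i', n, cam_pos - tri[:, 0]) > 0
    edge_owners = {}
    for f, (a, b, c) in enumerate(faces):
        for e in [(a, b), (b, c), (c, a)]:
            edge_owners.setdefault(tuple(sorted(e)), []).append(f)
    return [e for e, fs in edge_owners.items()
            if len(fs) == 2 and front[fs[0]] != front[fs[1]]]

def encode_distance(edge, verts, target, d_max=50.0):
    """Distance encoding: nearer silhouette edges get wider, redder strokes."""
    d = min(np.linalg.norm(verts[i] - target) for i in edge)
    a = np.clip(d / d_max, 0.0, 1.0)
    return {'width': 3.0 * (1.0 - a) + 0.5, 'color': (1.0 - a, a, 0.0)}

# Tiny usage on a tetrahedron, with an invented target point.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
for e in silhouette_edges(verts, faces, cam_pos=np.array([3.0, 3.0, 3.0])):
    print(e, encode_distance(e, verts, target=np.array([0.0, 0.0, 0.0]), d_max=2.0))
```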
Affiliation(s)
- Christian Hansen
- Fraunhofer MEVIS, Institute for Medical Image Computing, Bremen, Germany.
22.
Abstract
The use of focused energy delivery in robot-assisted surgery for atrial fibrillation requires accurate prescription of ablation paths. In this paper, an original framework based on fusing human and machine vision is presented to provide gaze-contingent control in robot-assisted surgery. With the proposed method, binocular eye tracking is used to estimate the surgeon's 3D fixations, which are further refined by considering the camera geometry and the consistency of image features at the reprojected fixations. Nonparametric clustering is then used to optimize the point distribution and produce an accurate ablation path. For experimental validation, a study in which eight subjects prescribed an ablation path on the right atrium of the heart using only gaze control is presented. The accuracy of the proposed method is validated using a phantom heart model with known 3D ground truth.
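The nonparametric clustering step can be illustrated with a simple mean-shift pass, one common nonparametric choice that is assumed here purely for illustration: each noisy 3D fixation is shifted toward the local density mode, tightening the scatter into a usable ablation path. The bandwidth and synthetic fixations below are invented.

```python
import numpy as np

def mean_shift(points, bandwidth=5.0, iters=30):
    """Shift each 3D fixation toward the local density mode using a Gaussian
    kernel; noisy fixations collapse onto a cleaner ablation path."""
    modes = points.copy()
    for _ in range(iters):
        for i, p in enumerate(modes):
            w = np.exp(-np.sum((points - p) ** 2, axis=1) / (2 * bandwidth ** 2))
            modes[i] = (w[:, None] * points).sum(axis=0) / w.sum()
    return modes

# Hypothetical fixations scattered around a curved path on the atrium (mm).
rng = np.random.default_rng(1)
t = np.linspace(0, np.pi, 40)
path = np.c_[30 * np.cos(t), 30 * np.sin(t), 5 * t]
fixations = path + rng.normal(scale=2.0, size=path.shape)
smoothed = mean_shift(fixations, bandwidth=6.0)
print(np.round(smoothed[:3], 1))   # denoised points near the underlying path
```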
23. Sielhorst T, Feuerstein M, Navab N. Advanced Medical Displays: A Literature Review of Augmented Reality. J Display Technol 2008. [DOI: 10.1109/jdt.2008.2001575]
24. Stoyanov D, Mylonas GP, Lerotic M, Chung AJ, Yang GZ. Intra-Operative Visualizations: Perceptual Fidelity and Human Factors. J Display Technol 2008. [DOI: 10.1109/jdt.2008.926497]