1. Fischer M, Rosenberg J, Leuze C, Hargreaves B, Daniel B. The Impact of Occlusion on Depth Perception at Arm's Length. IEEE Transactions on Visualization and Computer Graphics 2023;29:4494-4502. [PMID: 37782607] [DOI: 10.1109/tvcg.2023.3320239]
Abstract
This paper investigates the accuracy of Augmented Reality (AR) technologies, particularly commercially available optical see-through displays, in depicting virtual content inside the human body for surgical planning. Their inherent limitations result in inaccuracies in perceived object positioning. We examine how occlusion, specifically by opaque surfaces, affects the perceived depth of virtual objects at arm's-length working distances. A custom apparatus with a half-silvered mirror was developed that, unlike commercial displays, provides accurate depth cues other than occlusion. We carried out a study contrasting our apparatus with a HoloLens 2, involving a depth estimation task under varied surface complexities and illuminations. In addition, we explored the effects of creating a virtual "hole" in the surface. Subjects' depth estimation accuracy and confidence were assessed. Results showed more depth estimation variation with the HoloLens and significant depth error beneath complex occluding surfaces. However, creating a virtual hole significantly reduced depth errors and increased subjects' confidence, irrespective of whether accuracy improved. These findings have important implications for the design and use of mixed-reality technologies in surgical applications, and in industrial applications such as using virtual content to guide maintenance or repair of components hidden beneath the opaque outer surface of equipment. A free copy of this paper and all supplemental materials are available at https://bit.ly/3YbkwjU.
2. Macedo MCF, Apolinario AL. Occlusion Handling in Augmented Reality: Past, Present and Future. IEEE Transactions on Visualization and Computer Graphics 2023;29:1590-1609. [PMID: 34613916] [DOI: 10.1109/tvcg.2021.3117866]
Abstract
One of the main goals of many augmented reality applications is to provide a seamless integration of a real scene with additional virtual data. To fully achieve that goal, such applications must typically provide high-quality real-world tracking, support real-time performance, and handle the mutual occlusion problem, estimating the position of the virtual data in the real scene and rendering the virtual content accordingly. In this survey, we focus on the occlusion handling problem in augmented reality applications and provide a detailed review of 161 articles published in this field between January 1992 and August 2020. To do so, we present a historical overview of the most common strategies employed to determine the depth order between real and virtual objects, to visualize hidden objects in a real scene, and to build occlusion-capable visual displays. Moreover, we look at the state-of-the-art techniques, highlight the recent research trends, discuss the current open problems of occlusion handling in augmented reality, and suggest future directions for research.
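As context for the depth-ordering strategies this survey reviews, here is a minimal sketch of the classic depth-map test: a virtual pixel is shown only where the virtual fragment lies in front of the real surface. This is a generic illustration in Python/NumPy, not code from any surveyed system; the array conventions and the source of the real-scene depth map are assumptions.

```python
# Generic illustration of depth-based mutual occlusion handling:
# show a virtual pixel only where the virtual fragment is closer
# to the camera than the real surface at that pixel.
import numpy as np

def composite_with_occlusion(real_rgb, real_depth, virt_rgb, virt_depth):
    """real_rgb/virt_rgb: (H, W, 3) images; real_depth/virt_depth: (H, W).

    Depths are assumed metric, with np.inf where the real depth is unknown
    (e.g. missing sensor data) or where no virtual fragment was rendered.
    """
    virtual_in_front = virt_depth < real_depth        # per-pixel depth test
    return np.where(virtual_in_front[..., None], virt_rgb, real_rgb)
```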
3. In-situ or side-by-side? A user study on augmented reality maintenance instructions in blind areas. Comput Ind 2023. [DOI: 10.1016/j.compind.2022.103795]
4. Navab N, Martin-Gomez A, Seibold M, Sommersperger M, Song T, Winkler A, Yu K, Eck U. Medical Augmented Reality: Definition, Principle Components, Domain Modeling, and Design-Development-Validation Process. J Imaging 2022;9:4. [PMID: 36662102] [PMCID: PMC9866223] [DOI: 10.3390/jimaging9010004]
Abstract
Three decades after the first work on Medical Augmented Reality (MAR) was presented to the international community, and ten years after the deployment of the first MAR solutions into operating rooms, its exact definition, basic components, systematic design, and validation still lack a detailed discussion. This paper defines the basic components of any Augmented Reality (AR) solution and extends them to exemplary Medical Augmented Reality Systems (MARS). We use some of the original MARS applications developed at the Chair for Computer Aided Medical Procedures and deployed into medical schools for teaching anatomy and into operating rooms for telemedicine and surgical guidance over the last decades to identify the corresponding basic components. The paper does not discuss all past or existing solutions; it aims only to define the principal components, discuss the particular domain modeling for MAR and its design-development-validation process, and provide exemplary cases through past in-house developments of such solutions.
Affiliation(s)
- Nassir Navab
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Alejandro Martin-Gomez
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Matthias Seibold
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, CH-8008 Zurich, Switzerland
- Michael Sommersperger
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Tianyu Song
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Alexander Winkler
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- Department of General, Visceral, and Transplant Surgery, Ludwig-Maximilians-University Hospital, DE-80336 Munich, Germany
- Kevin Yu
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
- medPhoton GmbH, AT-5020 Salzburg, Austria
- Ulrich Eck
- Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, DE-85748 Garching, Germany
5. Martin-Gomez A, Weiss J, Keller A, Eck U, Roth D, Navab N. The Impact of Focus and Context Visualization Techniques on Depth Perception in Optical See-Through Head-Mounted Displays. IEEE Transactions on Visualization and Computer Graphics 2022;28:4156-4171. [PMID: 33979287] [DOI: 10.1109/tvcg.2021.3079849]
Abstract
Estimating the depth of virtual content has proven to be a challenging task in Augmented Reality (AR) applications. Existing studies have shown that the visual system makes use of multiple depth cues to infer the distance of objects, occlusion being one of the most important ones. The ability to generate appropriate occlusions becomes particularly important for AR applications that require the visualization of augmented objects placed below a real surface. Examples are medical scenarios in which anatomical information needs to be visualized within the patient's body. In this regard, existing works have proposed several focus and context (F+C) approaches to aid users in visualizing this content using Video See-Through (VST) Head-Mounted Displays (HMDs). However, the implementation of these approaches in Optical See-Through (OST) HMDs remains an open question due to the additive characteristics of the display technology. In this article, we design and conduct, for the first time, a user study that compares depth estimation between VST and OST HMDs using existing in-situ visualization methods. Our results show that these visualizations cannot be directly transferred to OST displays without increasing error in depth perception tasks. To close this gap, we perform a structured decomposition of the visual properties of AR F+C methods to find best-performing combinations. We propose the use of chromatic shadows and hatching approaches transferred from computer graphics. In a second study, we perform a factorized analysis of these combinations, showing that varying the shading type and using colored shadows can lead to better depth estimation when using OST HMDs.
6. Gu W, Martin-Gomez A, Cho SM, Osgood G, Bracke B, Josewski C, Knopf J, Unberath M. The impact of visualization paradigms on the detectability of spatial misalignment in mixed reality surgical guidance. Int J Comput Assist Radiol Surg 2022;17:921-927. [DOI: 10.1007/s11548-022-02602-6]
7. Cheng D, Hou Q, Li Y, Zhang T, Li D, Huang Y, Liu Y, Wang Q, Hou W, Yang T, Feng Z, Wang Y. Optical design and pupil swim analysis of a compact, large EPD and immersive VR head mounted display. Optics Express 2022;30:6584-6602. [PMID: 35299440] [DOI: 10.1364/oe.452747]
Abstract
Virtual reality head-mounted displays (VR-HMDs) are crucial to the Metaverse, which has recently become one of the most popular terms on the internet. The Metaverse provides basic infrastructure and an entry point for the next evolution of social interaction, and VR-HMDs have already been widely used in many fields. VR-HMDs with traditional aspherical or Fresnel optics are not suitable for long-term use because of their image quality, system size, and weight. In this study, we designed and developed a compact, lightweight VR-HMD with a large exit pupil diameter (EPD) using catadioptric optics. The mathematical formulation for designing the catadioptric VR optics is derived, and we explain why this kind of immersive VR optics can achieve a compact size and a large EPD simultaneously. Various catadioptric forms are systematically proposed and compared. The design achieves a diagonal field of view (FOV) of 96° at -1 diopter, with an EPD of 10 mm at 11 mm eye relief (ERF). The overall length (OAL) of the system is less than 20 mm. A prototype of a compact catadioptric VR-HMD system was successfully developed.
8. Willett W, Aseniero BA, Carpendale S, Dragicevic P, Jansen Y, Oehlberg L, Isenberg P. Perception! Immersion! Empowerment! Superpowers as Inspiration for Visualization. IEEE Transactions on Visualization and Computer Graphics 2022;28:22-32. [PMID: 34587071] [DOI: 10.1109/tvcg.2021.3114844]
Abstract
We explore how the lens of fictional superpowers can help characterize how visualizations empower people and provide inspiration for new visualization systems. Researchers and practitioners often tout visualizations' ability to "make the invisible visible" and to "enhance cognitive abilities." Meanwhile, superhero comics and other modern fiction often depict characters with similarly fantastic abilities that allow them to see and interpret the world in ways that transcend traditional human perception. We investigate the intersection of these domains, and show how the language of superpowers can be used to characterize existing visualization systems and suggest opportunities for new and empowering ones. We introduce two frameworks: the first characterizes seven underlying mechanisms that form the basis for a variety of visual superpowers portrayed in fiction; the second identifies seven ways in which visualization tools and interfaces can instill a sense of empowerment in the people who use them. Building on these observations, we illustrate a diverse set of "visualization superpowers" and highlight opportunities for the visualization community to create new systems and interactions that empower new experiences with data. Material and illustrations are available under CC-BY 4.0 at osf.io/8yhfz.
9. Neves CA, Leuze C, Gomez AM, Navab N, Blevins N, Vaisbuch Y, McNab JA. Augmented Reality for Retrosigmoid Craniotomy Planning. Skull Base Surg 2021;83:e564-e573. [DOI: 10.1055/s-0041-1735509]
Abstract
While medical imaging data have traditionally been viewed on two-dimensional (2D) displays, augmented reality (AR) allows physicians to project medical imaging data onto patients' bodies to locate important anatomy. We present a surgical AR application to plan the retrosigmoid craniotomy, a standard approach to access the posterior fossa and the internal auditory canal. As a simple and accurate alternative to surface landmarks and conventional surgical navigation systems, our AR application augments the surgeon's vision to guide the optimal location of cortical bone removal. In this work, two surgeons performed a retrosigmoid approach 14 times on eight cadaver heads. In each case, the surgeon manually aligned a computed tomography (CT)-derived virtual rendering of the sigmoid sinus on the real cadaveric head using a see-through AR display, allowing the surgeon to plan and perform the craniotomy accordingly. Postprocedure CT scans were acquired to assess the accuracy of the retrosigmoid craniotomies with respect to their intended location relative to the dural sinuses. The two surgeons had mean margins of davg = 0.6 ± 4.7 mm and davg = 3.7 ± 2.3 mm between the osteotomy border and the dural sinuses over all their cases, respectively, with only positive margins in 12 of the 14 cases. The intended surgical approach to the internal auditory canal was successfully achieved in all cases using the proposed method, and the relatively small and consistent margins suggest that our system has the potential to be a valuable tool for planning a variety of similar skull-base procedures.
Affiliation(s)
- Caio A. Neves
- Department of Otolaryngology, Stanford School of Medicine, Stanford, United States
- Faculty of Medicine, University of Brasília, Brasília, Brazil
- Christoph Leuze
- Department of Radiology, Stanford School of Medicine, Stanford, United States
- Alejandro M. Gomez
- Chair for Computer Aided Medical Procedures and Augmented Reality, Department of Informatics, Technical University of Munich, Germany
- Laboratory for Computer Aided Medical Procedures, Whiting School of Engineering, Johns Hopkins University, Baltimore, USA
- Nassir Navab
- Chair for Computer Aided Medical Procedures and Augmented Reality, Department of Informatics, Technical University of Munich, Germany
- Laboratory for Computer Aided Medical Procedures, Whiting School of Engineering, Johns Hopkins University, Baltimore, USA
- Nikolas Blevins
- Department of Otolaryngology, Stanford School of Medicine, Stanford, United States
- Yona Vaisbuch
- Department of Otolaryngology, Stanford School of Medicine, Stanford, United States
- Jennifer A. McNab
- Department of Radiology, Stanford School of Medicine, Stanford, United States
10. Novel Multimodal, Multiscale Imaging System with Augmented Reality. Diagnostics (Basel) 2021;11:441. [PMID: 33806547] [PMCID: PMC7999725] [DOI: 10.3390/diagnostics11030441]
Abstract
A novel multimodal, multiscale imaging system with augmented reality capability was developed and characterized. The system offers 3D color reflectance imaging, 3D fluorescence imaging, and augmented reality in real time. Multiscale fluorescence imaging was enabled by developing and integrating an in vivo fiber-optic microscope. Real-time ultrasound-fluorescence multimodal imaging used optically tracked fiducial markers for registration, and tomographic data were also incorporated using optically tracked fiducial markers. Furthermore, we characterized system performance and registration accuracy in a benchtop setting. The multiscale fluorescence imaging facilitated assessing the functional status of tissues, extending the minimal resolution of fluorescence imaging to ~17.5 µm. The system achieved a mean target registration error of less than 2 mm for registering fluorescence images to ultrasound images and an MRI-based 3D model, which is within the clinically acceptable range. The low latency and high frame rate of the prototype system show promise for applying the reported techniques in clinically relevant settings in the future.
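For readers unfamiliar with the metric, here is a hedged sketch of how a mean target registration error (TRE) like the sub-2-mm figure above is typically computed: apply the estimated rigid transform to target points from one modality and measure the mean distance to their counterparts in the other. The Kabsch/Umeyama-style SVD fit below is a standard method and an assumption, not necessarily the registration used in this paper.

```python
# Standard rigid registration (Kabsch) plus mean TRE computation.
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst, both (N, 3)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, c_dst - R @ c_src

def mean_tre(targets_src, targets_dst, R, t):
    """Mean Euclidean error of registered target points, in input units (e.g. mm)."""
    registered = targets_src @ R.T + t
    return np.linalg.norm(registered - targets_dst, axis=1).mean()
```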
11. Qian L, Wu JY, DiMaio SP, Navab N, Kazanzides P. A Review of Augmented Reality in Robotic-Assisted Surgery. IEEE Transactions on Medical Robotics and Bionics 2020. [DOI: 10.1109/tmrb.2019.2957061]
12. Bork F. [Interactive augmented reality systems: Aid for personalized patient education and rehabilitation]. Unfallchirurg 2019;121:286-292. [PMID: 29383388] [DOI: 10.1007/s00113-018-0458-y]
Abstract
BACKGROUND During patient education, information exchange plays a critical role both for patient compliance during medical or rehabilitative treatment and for obtaining informed consent for an operative procedure. OBJECTIVE In this article, the augmented reality system "Magic Mirror" is highlighted as a supplementary tool for patient education, rehabilitation, and anatomical education. MATERIAL AND METHODS The Magic Mirror system allows the user to inspect both a detailed model of the 3-dimensional anatomy of the human body and volumetric slice images in a virtual mirror environment. RESULTS First preliminary results from the areas of rehabilitation and anatomy learning indicate the broad potential of the Magic Mirror. The system also offers interesting advantages for patient education compared with traditional methods of information exchange. CONCLUSION Novel technologies such as augmented reality are a door opener for many innovations in medicine. In the future, patient-specific systems such as the Magic Mirror will be used increasingly in areas such as patient education and rehabilitation. To maximize the benefits of such systems, further evaluation studies are necessary to identify the best use cases and to start an iterative optimization process for these systems.
Affiliation(s)
- F Bork
- Fakultät für Informatik, Technische Universität München, Boltzmannstr. 3, 85748 Garching bei München, Germany
13. Kardong-Edgren S, Farra SL, Alinier G, Young HM. A Call to Unify Definitions of Virtual Reality. Clin Simul Nurs 2019. [DOI: 10.1016/j.ecns.2019.02.006]
14. Jung S, Lee J, Biocca F, Kim JW. Augmented Reality in the Health Domain: Projecting Spatial Augmented Reality Visualizations on a Perceiver's Body for Health Communication Effects. Cyberpsychology, Behavior, and Social Networking 2019;22:142-150. [PMID: 30668138] [DOI: 10.1089/cyber.2018.0028]
Abstract
An experiment is reported that studied the effects of spatial embodiment in augmented reality on medical attitudes about the self. College students (N = 90) viewed public service announcements (PSAs) with overlaid virtual fetuses and X-ray images of lungs on various interfaces representing embodiment: a two-dimensional screen, a three-dimensional (3D) mannequin, and the participants' own bodies (3D). Results indicated that PSA messages with richer embodied interfaces increase the sense of "being there," also known as spatial presence (SP); this in turn leads to increased negative emotion regarding smoking cigarettes and an increased willingness to engage with a cigarette-cessation campaign. When SP mediates the dual-process model, only affective attitudes increase the behavioral intention to engage with the campaign.
Affiliation(s)
- Soyoung Jung
- S.I. Newhouse School of Public Communications, Syracuse University, Syracuse, New York
- M.I.N.D. Lab, Digital Design, School of Art & Design, College of Architecture & Design, New Jersey Institute of Technology, Newark, New Jersey
- Jiyoung Lee
- S.I. Newhouse School of Public Communications, Syracuse University, Syracuse, New York
- Frank Biocca
- Department of Informatics, Ying Wu College of Computing, New Jersey Institute of Technology, Newark, New Jersey
- Ji Won Kim
- S.I. Newhouse School of Public Communications, Syracuse University, Syracuse, New York
15. Cheema MN, Nazir A, Sheng B, Li P, Qin J, Kim J, Feng DD. Image-Aligned Dynamic Liver Reconstruction Using Intra-Operative Field of Views for Minimal Invasive Surgery. IEEE Trans Biomed Eng 2018;66:2163-2173. [PMID: 30507524] [DOI: 10.1109/tbme.2018.2884319]
Abstract
During hepatic minimally invasive surgery (MIS), 3-D reconstruction of the liver surface by interpreting the geometry of its soft tissues is attracting attention. One of the major issues to be addressed in MIS is liver deformation, which severely inhibits free sight and dexterity of tissue manipulation and causes the intra-operative morphology and soft-tissue motion to differ from the pre-operative shape. While many applications focus on 3-D reconstruction of rigid or semi-rigid scenes, the techniques applied in hepatic MIS must be able to cope with a dynamic and deformable environment. We propose an efficient technique for liver surface reconstruction based on structure from motion to handle liver deformation. The reconstructed liver will assist surgeons in visualizing the liver surface more efficiently and with better depth perception. We use the intra-operative fields of view to generate a 3-D template mesh from a dense keypoint cloud. We estimate liver deformation by finding the best correspondence between the 3-D template and a reconstructed liver image to calculate translational and rotational motions. Our technique then fine-tunes the deformed surface by adding smoothness using shading cues. To date, this technique has not been used to solve the human liver deformation problem. Our approach is tested and validated with synthetic as well as real in vivo data, which reveal that reconstruction accuracy can be enhanced using our approach even in challenging laparoscopic environments.
16. Ma L, Zhao Z, Zhang B, Jiang W, Fu L, Zhang X, Liao H. Three-dimensional augmented reality surgical navigation with hybrid optical and electromagnetic tracking for distal intramedullary nail interlocking. Int J Med Robot 2018;14:e1909. [PMID: 29575601] [DOI: 10.1002/rcs.1909]
Affiliation(s)
- Longfei Ma
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Zhe Zhao
- Department of Orthopedics Surgery, Beijing Tsinghua Changgung Hospital, Beijing, China
- Boyu Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Weipeng Jiang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Ligong Fu
- Department of Orthopedics Surgery, Beijing Tsinghua Changgung Hospital, Beijing, China
- Xinran Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Hongen Liao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
17. Qian L, Barthel A, Johnson A, Osgood G, Kazanzides P, Navab N, Fuerst B. Comparison of optical see-through head-mounted displays for surgical interventions with object-anchored 2D-display. Int J Comput Assist Radiol Surg 2017;12:901-910. [PMID: 28343301] [DOI: 10.1007/s11548-017-1564-y]
Abstract
PURPOSE Optical see-through head-mounted displays (OST-HMDs) feature an unhindered and instantaneous view of the surgery site and can enable a mixed reality experience for surgeons during procedures. In this paper, we present a systematic approach to identify the criteria for evaluating OST-HMD technologies for specific clinical scenarios that benefit from using an object-anchored 2D display visualizing medical information. METHODS Criteria for evaluating the performance of OST-HMDs for visualization of medical information and its usage are identified and proposed. These include text readability, contrast perception, task load, frame rate, and system lag. We compare three commercially available OST-HMDs, which are representative of currently available head-mounted display technologies. A multi-user study and an offline experiment are conducted to evaluate their performance. RESULTS Statistical analysis demonstrates that the Microsoft HoloLens performs best among the three tested OST-HMDs in terms of contrast perception, task load, and frame rate, while the ODG R-7 offers similar text readability. The integration of indoor localization and fiducial tracking on the HoloLens provides significantly less system lag in a relatively motionless scenario. CONCLUSIONS With ever more OST-HMDs appearing on the market, the proposed criteria could be used to evaluate their suitability for mixed reality surgical intervention. Currently, the Microsoft HoloLens may be more suitable than the ODG R-7 and Epson Moverio BT-200 for clinical usability in terms of the evaluated criteria. To the best of our knowledge, this is the first paper that presents a methodology and conducts experiments to evaluate and compare OST-HMDs for their use as an object-anchored 2D display during interventions.
Affiliation(s)
- Long Qian
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Alexander Barthel
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
- Alex Johnson
- Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Greg Osgood
- Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Peter Kazanzides
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Nassir Navab
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
- Bernhard Fuerst
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
18.
Abstract
Minimally invasive surgery (MIS) poses visual challenges to surgeons. In MIS, binocular disparity is not freely available to surgeons, who are required to mentally rebuild the 3-dimensional (3D) patient anatomy from a limited number of monoscopic visual cues. The insufficient depth cues in the MIS environment can cause surgeons to misjudge spatial depth, which can lead to performance errors, thus jeopardizing patient safety. In this article, we first discuss natural human depth perception by exploring the main depth cues available to surgeons in open procedures. Subsequently, we reveal which depth cues are lost in MIS and how surgeons compensate for the incomplete depth presentation. Next, we explore some of the available solutions for improving depth presentation to surgeons, reviewing innovative approaches (multiple 2D camera assembly, shadow introduction) and devices (3D monitors, head-mounted devices, and auto-stereoscopic monitors) for 3D image presentation from the past few years.
Affiliation(s)
- Bin Zheng
- University of Alberta, Edmonton, Alberta, Canada
19. Maas S, Ingler M, Overhoff HM. Using smart glasses for ultrasound diagnostics. Current Directions in Biomedical Engineering 2015. [DOI: 10.1515/cdbme-2015-0049]
Abstract
Ultrasound has been established as a diagnostic tool in a wide range of applications. Especially for beginners, aligning sectional images with the patient's spatial anatomy can be cumbersome. A direct view onto the patient's anatomy while viewing ultrasound images may help to overcome unergonomic examination.
To solve these issues, an affordable augmented reality system using smart glasses was created that displays a (virtual) ultrasound image beneath a (real) ultrasound transducer.
Affiliation(s)
- Stefan Maas
- Westfälische Hochschule, Department of Electrical Engineering and Applied Natural Sciences, Neidenburger Straße 43, 45877 Gelsenkirchen, Germany
- Marvin Ingler
- Westfälische Hochschule, Department of Electrical Engineering and Applied Natural Sciences, Neidenburger Straße 43, 45877 Gelsenkirchen, Germany
- Heinrich Martin Overhoff
- Westfälische Hochschule, Department of Electrical Engineering and Applied Natural Sciences, Neidenburger Straße 43, 45877 Gelsenkirchen, Germany
20.
Abstract
Rapid prototyping (RP) technologies have found many uses in dentistry, and especially oral and maxillofacial surgery, due to their ability to promote product development while reducing cost and, in theory, to build a part of any degree of complexity. This paper provides an overview of RP technologies for maxillofacial reconstruction, covering both fundamentals and applications of the technologies. Key fundamentals of RP technologies, including their history, characteristics, and principles, are reviewed. A number of RP applications to the main fields of oral and maxillofacial surgery, including restoration of maxillofacial deformities and defects, reduction of functional bone tissues, correction of dento-maxillofacial deformities, and fabrication of maxillofacial prostheses, are discussed. The most remarkable challenges for the development of RP-assisted maxillofacial surgery and promising solutions are also elaborated.
Affiliation(s)
- Qian Peng
- Xiangya Stomatological Hospital, Central South University, Changsha, Hunan 410008, China
21. Abhari K, Baxter JSH, Chen ECS, Khan AR, Peters TM, de Ribaupierre S, Eagleson R. Training for planning tumour resection: augmented reality and human factors. IEEE Trans Biomed Eng 2014;62:1466-1477. [PMID: 25546854] [DOI: 10.1109/tbme.2014.2385874]
Abstract
Planning surgical interventions is a complex task, demanding a high degree of perceptual, cognitive, and sensorimotor skill to reduce intra- and post-operative complications. This process requires spatial reasoning to coordinate between the preoperatively acquired medical images and patient reference frames. In the case of neurosurgical interventions, traditional approaches to planning tend to focus on providing a means for visualizing medical images, but rarely support transformation between different spatial reference frames. Thus, surgeons often rely on their previous experience and intuition as their sole guide to perform mental transformations. In the case of junior residents, this may lead to longer operation times or an increased chance of error under additional cognitive demands. In this paper, we introduce a mixed augmented-/virtual-reality system to facilitate training for planning a common neurosurgical procedure, brain tumour resection. The proposed system is designed and evaluated with human factors explicitly in mind, alleviating the difficulty of mental transformation. Our results indicate that, compared to conventional planning environments, the proposed system greatly improves nonclinicians' performance, independent of the sensorimotor tasks performed. Furthermore, the use of the proposed system by clinicians resulted in a significant reduction in time to perform clinically relevant tasks. These results demonstrate the role of mixed-reality systems in assisting residents to develop the spatial reasoning skills needed for planning brain tumour resection, improving patient outcomes.
22. Zhu E, Hadadgar A, Masiello I, Zary N. Augmented reality in healthcare education: an integrative review. PeerJ 2014;2:e469. [PMID: 25071992] [PMCID: PMC4103088] [DOI: 10.7717/peerj.469]
Abstract
Background. The effective development of healthcare competencies poses great educational challenges. A possible approach to providing learning opportunities is the use of augmented reality (AR), where virtual learning experiences can be embedded in a real physical context. The aim of this study was to provide a comprehensive overview of the current state of the art in terms of user acceptance, the AR applications developed, and the effect of AR on the development of competencies in healthcare. Methods. We conducted an integrative review. Integrative reviews are the broadest type of research review method, allowing for the inclusion of various research designs to more fully understand a phenomenon of concern. Our review included multi-disciplinary research publications in English reported until 2012. Results. 2529 research papers were found in ERIC, CINAHL, Medline, PubMed, Web of Science and SpringerLink. Three qualitative, 20 quantitative and 2 mixed studies were included. Using a thematic analysis, we describe three aspects related to research, technology and education. This study showed that AR has been applied to a wide range of topics in healthcare education. Furthermore, acceptance of AR as a learning technology was reported among learners, as was its potential for improving different types of competencies. Discussion. AR is still considered a novelty in the literature. Most of the studies reported early prototypes. The designed AR applications also lacked an explicit pedagogical theoretical framework. Finally, the learning strategies adopted were of the traditional style 'see one, do one and teach one' and did not integrate clinical competencies to ensure patient safety.
Affiliation(s)
- Egui Zhu
- Department of Learning, Informatics, Management and Ethics (LIME), Karolinska Institutet, Stockholm, Sweden
- Faculty of Education, Hubei University, China
- Arash Hadadgar
- Department of Learning, Informatics, Management and Ethics (LIME), Karolinska Institutet, Stockholm, Sweden
- Italo Masiello
- Department of Learning, Informatics, Management and Ethics (LIME), Karolinska Institutet, Stockholm, Sweden
- Nabil Zary
- Department of Learning, Informatics, Management and Ethics (LIME), Karolinska Institutet, Stockholm, Sweden
23. Lapeer RJ, Jeffrey SJ, Dao JT, García GG, Chen M, Shickell SM, Rowland RS, Philpott CM. Using a passive coordinate measurement arm for motion tracking of a rigid endoscope for augmented-reality image-guided surgery. Int J Med Robot 2013;10:65-77. [DOI: 10.1002/rcs.1513]
Affiliation(s)
- Rudy J. Lapeer
- School of Computing Sciences, University of East Anglia, Norwich, UK
- Josh T. Dao
- School of Computing Sciences, University of East Anglia, Norwich, UK
- Minsi Chen
- School of Computing and Mathematics, University of Derby, UK
- Carl M. Philpott
- James Paget University Hospital, Gorleston, and Norwich Medical School, University of East Anglia, Norwich, UK
24. Linte CA, Davenport KP, Cleary K, Peters C, Vosburgh KG, Navab N, Edwards PE, Jannin P, Peters TM, Holmes DR, Robb RA. On mixed reality environments for minimally invasive therapy guidance: systems architecture, successes and challenges in their implementation from laboratory to clinic. Comput Med Imaging Graph 2013;37:83-97. [PMID: 23632059] [PMCID: PMC3796657] [DOI: 10.1016/j.compmedimag.2012.12.002]
Abstract
Mixed reality environments for medical applications have been explored and developed over the past three decades in an effort to enhance the clinician's view of anatomy and facilitate the performance of minimally invasive procedures. These environments must faithfully represent the real surgical field and require seamless integration of pre- and intra-operative imaging, surgical instrument tracking, and display technology into a common framework centered around and registered to the patient. However, in spite of their reported benefits, few mixed reality environments have been successfully translated into clinical use. Several challenges that contribute to the difficulty in integrating such environments into clinical practice are presented here and discussed in terms of both technical and clinical limitations. This article should raise awareness among both developers and end-users toward facilitating a greater application of such environments in the surgical practice of the future.
25. Ferroli P, Tringali G, Acerbi F, Schiariti M, Broggi M, Aquino D, Broggi G. Advanced 3-Dimensional Planning in Neurosurgery. Neurosurgery 2013;72 Suppl 1:54-62. [DOI: 10.1227/neu.0b013e3182748ee8]
Abstract
During the past decades, medical applications of virtual reality technology have developed rapidly, ranging from a research curiosity to a commercially and clinically important area of medical informatics and technology. With the aid of new technologies, the user is able to process large data sets to create accurate and almost realistic reconstructions of anatomic structures and related pathologies. As a result, a 3-dimensional (3-D) representation is obtained, and surgeons can explore the brain for planning or training. Further improvements, such as a feedback system, increase the interaction between users and models by creating a virtual environment. Its use for advanced 3-D planning in neurosurgery is described. Different systems of medical image volume rendering have been used and analyzed for advanced 3-D planning: one is a commercial "ready-to-go" system (Dextroscope, Bracco, Volume Interaction, Singapore), whereas the others are open-source software (3D Slicer, FSL, and FreeSurfer). Different neurosurgeons at our institution experienced how advanced 3-D planning before surgery facilitated and increased their understanding of the complex anatomic and pathological relationships of the lesion. They all agreed that the preoperative experience of virtually planning the approach was helpful during the operative procedure. Virtual reality for advanced 3-D planning in neurosurgery has achieved considerable realism as a result of the processing power of modern computers. Although it has been found useful for facilitating the understanding of complex anatomic relationships, further effort is needed to increase the quality of the interaction between the user and the model.
Affiliation(s)
- Francesco Acerbi
- Neuroradiology Unit, Fondazione IRCCS Istituto Neurologico Carlo Besta, Milano, Italy
- Domenico Aquino
- Neuroradiology Unit, Fondazione IRCCS Istituto Neurologico Carlo Besta, Milano, Italy
26. Lee S, Lee J, Lee A, Park N, Lee S, Song S, Seo A, Lee H, Kim JI, Eom K. Augmented reality intravenous injection simulator based 3D medical imaging for veterinary medicine. Vet J 2012;196:197-202. [PMID: 23103217] [DOI: 10.1016/j.tvjl.2012.09.015]
Abstract
Augmented reality (AR) is a technology which enables users to see the real world with virtual objects superimposed upon or composited with it. AR simulators have been developed and used in human medicine, but not in veterinary medicine. The aim of this study was to develop an AR intravenous (IV) injection simulator to train veterinary and pre-veterinary students to perform canine venipuncture. Computed tomographic (CT) images of a beagle dog were acquired using a 64-channel multidetector scanner. The CT images were transformed into volumetric data sets using an image segmentation method and were converted into a stereolithography format for creating 3D models. An AR-based interface was developed for an AR simulator for IV injection. Veterinary and pre-veterinary student volunteers were randomly assigned to an AR-trained group or a control group trained using more traditional methods (n = 20/group; n = 8 pre-veterinary and n = 12 veterinary students in each group), and their proficiency at IV injection technique in live dogs was assessed after training was completed. Students were also asked to complete a questionnaire administered after using the simulator. The group trained using the AR simulator was more proficient at IV injection technique in real dogs than the control group (P ≤ 0.01). The students agreed that they learned the IV injection technique through the AR simulator. Although the system used in this study needs to be modified before it can be adopted for veterinary educational use, AR simulation has been shown to be a very effective tool for training medical personnel. Using the technology reported here, veterinary AR simulators could be developed for future use in veterinary education.
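The CT-to-stereolithography step described above (segmented volume to 3D surface mesh) can be illustrated with generic open-source tools. The paper does not specify its implementation, so the libraries (scikit-image, numpy-stl), the threshold level, and the voxel spacing below are illustrative assumptions.

```python
# Sketch of converting a segmented CT volume into an STL surface mesh.
import numpy as np
from skimage import measure
from stl import mesh  # numpy-stl

def segmented_volume_to_stl(volume, level, voxel_spacing, out_path):
    """Extract an isosurface from a (segmented) CT volume and save it as STL."""
    # Marching cubes turns the thresholded volume into a triangle mesh,
    # with vertices scaled to physical units via the voxel spacing.
    verts, faces, _normals, _values = measure.marching_cubes(
        volume, level=level, spacing=voxel_spacing)
    surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
    surface.vectors[:] = verts[faces]   # (n_faces, 3 vertices, 3 coords)
    surface.save(out_path)

# Hypothetical usage: a binary vein mask with 0.625 x 0.5 x 0.5 mm voxels.
# segmented_volume_to_stl(vein_mask.astype(np.float32), 0.5, (0.625, 0.5, 0.5), "vein.stl")
```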
Affiliation(s)
- S Lee
- Department of Veterinary Radiology and Diagnostic Imaging, College of Veterinary Medicine, Konkuk University, Seoul 143-701, Republic of Korea
27. Kersten-Oertel M, Jannin P, Collins DL. DVV: a taxonomy for mixed reality visualization in image guided surgery. IEEE Transactions on Visualization and Computer Graphics 2012;18:332-352. [PMID: 21383411] [DOI: 10.1109/tvcg.2011.50]
Abstract
Mixed reality visualizations are increasingly studied for use in image guided surgery (IGS) systems, yet few mixed reality systems have been introduced for daily use into the operating room (OR). This may be the result of several factors: the systems are developed from a technical perspective, are rarely evaluated in the field, and/or lack consideration of the end user and the constraints of the OR. We introduce the Data, Visualization processing, View (DVV) taxonomy, which defines each of the major components required to implement a mixed reality IGS system. We propose that these components be considered and used as validation criteria for introducing a mixed reality IGS system into the OR. A taxonomy of IGS visualization systems is a step toward developing a common language that will help developers and end users discuss and understand the constituents of a mixed reality visualization system, facilitating a greater presence of future systems in the OR. We evaluate the DVV taxonomy based on its goodness of fit and completeness. We demonstrate the utility of the DVV taxonomy by classifying 17 state-of-the-art research papers in the domain of mixed reality visualization IGS systems. Our classification shows that few IGS visualization systems' components have been validated, and even fewer have been evaluated.
Affiliation(s)
- Marta Kersten-Oertel
- McConnell Brain Imaging Centre, Montreal Neurological Institute (MNI), 3801 University St, Montréal, QC H3A 2B4, Canada
28. A guide to stereoscopic 3D displays in medicine. Acad Radiol 2011;18:1035-1048. [PMID: 21652229] [DOI: 10.1016/j.acra.2011.04.005]
Abstract
Stereoscopic displays can potentially improve many aspects of medicine. However, weighing the advantages and disadvantages of such displays remains difficult, and more insight is needed to evaluate whether stereoscopic displays are worth adopting. In this article, we begin with a review of monocular and binocular depth cues. We then apply this knowledge to examine how stereoscopic displays can potentially benefit diagnostic imaging, medical training, and surgery. It is apparent that the binocular depth information afforded by stereo displays 1) aids the detection of diagnostically relevant shapes, orientations, and positions of anatomical features, especially when monocular cues are absent or unreliable; 2) helps novice surgeons orient themselves in the surgical landscape and perform complicated tasks; and 3) improves the three-dimensional anatomical understanding of students with low visual-spatial skills. The drawbacks of stereo displays are also discussed, including extra eyewear, potential three-dimensional misperceptions, and the hurdle of overcoming familiarity with existing techniques. Finally, we list suggested guidelines for the optimal use of stereo displays. We provide a concise guide for medical practitioners who want to assess the potential benefits of stereo displays before adopting them.
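As a quantitative aside (not taken from the article), the binocular disparity cue it reviews follows a simple geometric relation; the worked example below, at an assumed typical viewing distance, shows the magnitude of angular disparity involved.

```latex
% Angular disparity \delta for interocular distance I, viewing distance d,
% and a small depth offset \Delta d << d (illustrative values assumed):
\[
  \delta \;\approx\; \frac{I\,\Delta d}{d^{2}},
  \qquad
  I = 65\,\mathrm{mm},\; d = 0.5\,\mathrm{m},\; \Delta d = 10\,\mathrm{mm}
  \;\Rightarrow\;
  \delta \approx 2.6\times10^{-3}\,\mathrm{rad} \approx 9\,\mathrm{arcmin}.
\]
```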
29. Liao H, Inomata T, Sakuma I, Dohi T. 3-D Augmented Reality for MRI-Guided Surgery Using Integral Videography Autostereoscopic Image Overlay. IEEE Trans Biomed Eng 2010;57:1476-1486. [DOI: 10.1109/tbme.2010.2040278]
30. Thoranaghatte R, Garcia J, Caversaccio M, Widmer D, Gonzalez Ballester MA, Nolte LP, Zheng G. Landmark-based augmented reality system for paranasal and transnasal endoscopic surgeries. Int J Med Robot 2010;5:415-422. [PMID: 19623600] [DOI: 10.1002/rcs.273]
Abstract
BACKGROUND In this paper we present a landmark-based augmented reality (AR) endoscope system for endoscopic paranasal and transnasal surgeries, along with fast and automatic calibration and registration procedures for the endoscope. METHODS Preoperatively, the surgeon selects natural landmarks or defines new landmarks in the CT volume. After proper registration of the preoperative CT to the patient, these landmarks are overlaid on the endoscopic video stream. The specified name of the landmark, along with its selected colour and its distance from the endoscope tip, is also augmented. The endoscope optics are calibrated and registered by fast and automatic methods. The accuracy of the system is evaluated in metallic-grid and cadaver set-ups. RESULTS The root mean square (RMS) error of the system is 0.8 mm in a controlled laboratory set-up (metallic grid) and was 2.25 mm during cadaver studies. CONCLUSIONS A novel landmark-based AR endoscope system is implemented and its accuracy evaluated. Augmented landmarks help the surgeon to orientate and navigate in the surgical field. The studies prove the capability of the system for the proposed application. Further clinical studies are planned in the near future.
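A minimal sketch of the two quantities the system augments per landmark: its 2D overlay position in the endoscope image, and its distance from the endoscope tip. It assumes the landmark has already been registered into the camera frame and that K is the intrinsic matrix from a calibration step; this is illustrative, not the authors' code, and the tip offset is a hypothetical parameter.

```python
# Pinhole projection of a registered landmark plus tip-to-landmark distance.
import numpy as np

def augment_landmark(p_cam, K, tip_cam=np.zeros(3)):
    """p_cam: landmark in endoscope-camera coordinates (e.g. mm);
    K: 3x3 camera intrinsic matrix; tip_cam: endoscope tip position
    in the same frame (assumed at the camera origin by default)."""
    u, v, w = K @ p_cam                          # homogeneous image point
    pixel = (u / w, v / w)                       # where to draw the overlay
    distance = np.linalg.norm(p_cam - tip_cam)   # augmented range readout
    return pixel, distance
```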
Affiliation(s)
- Ramesh Thoranaghatte
- ARTORG Research Centre, University of Bern, Stauffacherstrasse 78, Bern, Switzerland
31. Yoshinaga T, Horiguchi T, Miyazaki W, Masuda K. Development of 3D space-sharing interface using augmented reality technology for domestic tele-echography. Annu Int Conf IEEE Eng Med Biol Soc 2009;2009:6103-6106. [PMID: 19965260] [DOI: 10.1109/iembs.2009.5334928]
Abstract
We propose a domestic tele-echography system linking the patient's home and a hospital, motivated by the growing number of patients in an aging society and enabled by recent progress in portable echography. In previous research, the three-dimensional position of the ultrasound probe was difficult to specify because the remote doctor observes the patient through a video camera. We therefore developed a system that reproduces the probe position and angle using ARToolKit, with a GUI built on OpenGL. Only a USB camera and two markers, one for the body surface and one for the probe, are necessary to record and transfer the three-dimensional position of the probe. We also designed a doctor-side interface including the echogram, the patient scene, and CG to instruct probe operation. Evaluation experiments showed that the guided position was sufficient to reproduce the echogram for diagnosis.
Affiliation(s)
- Takashi Yoshinaga
- Graduate School of Bio-Applications and Systems Engineering, Tokyo University of Agriculture & Technology, Koganei, Tokyo, Japan
32. Brown W, Satava R, Rosen J. Virtual reality and surgical training: Simulating the future. Minimally Invasive Therapy 1994. [DOI: 10.3109/13645709409153003]
33. Cheng D, Wang Y, Hua H, Talha MM. Design of an optical see-through head-mounted display with a low f-number and large field of view using a freeform prism. Applied Optics 2009;48:2655-2668. [PMID: 19424386] [DOI: 10.1364/ao.48.002655]
Abstract
It has been a challenge to design an optical see-through head-mounted display (OST-HMD) that has a wide field of view (FOV) and low f-number (f/#) while maintaining a compact, lightweight, and nonintrusive form factor. In this paper, we present an OST-HMD design using a wedge-shaped freeform prism cemented with a freeform lens. The prism, consisting of three freeform surfaces (FFSs), serves as the near-eye viewing optics that magnifies the image displayed through a microdisplay, and the freeform lens is an auxiliary element attached to the prism in order to maintain a nondistorted see-through view of a real-world scene. Both the freeform prism and the lens utilize plastic materials to achieve light weight. The overall dimension of the optical system per eye is no larger than 25 mm by 22 mm by 12 mm, and the weight is 8 g. Based on a 0.61 in. microdisplay, our system demonstrates a diagonal FOV of 53.5 degrees and an f/# of 1.875, with an 8 mm exit pupil diameter and an 18.25 mm eye relief.
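As a quick consistency check of the reported specifications (an aside, not a calculation from the paper), the f-number and exit pupil diameter together imply the effective focal length of the viewing optics:

```latex
% Effective focal length implied by the stated f/# and exit pupil diameter:
\[
  f \;=\; (f/\#) \times \mathrm{EPD} \;=\; 1.875 \times 8\,\mathrm{mm} \;=\; 15\,\mathrm{mm}.
\]
```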
Affiliation(s)
- Dewen Cheng
- Department of Optoelectronic Engineering, Beijing Institute of Technology, Beijing 100081, China
34. Cognitive and learning sciences in biomedical and health instructional design: A review with lessons for biomedical informatics education. J Biomed Inform 2008;42:176-197. [PMID: 19135173] [DOI: 10.1016/j.jbi.2008.12.002]
Abstract
Theoretical and methodological advances in the cognitive and learning sciences can greatly inform curriculum and instruction in biomedicine, as well as educational programs in biomedical informatics. They do so by addressing issues such as the processes related to comprehension of medical information, clinical problem-solving and decision-making, and the role of technology. This paper reviews these theories and methods from the cognitive and learning sciences and their role in addressing current and future needs in designing curricula, largely using illustrative examples drawn from medical education. The lessons of this past work are also applicable, however, to biomedical and health professional curricula in general, and to biomedical informatics training in particular. We summarize empirical studies conducted over two decades on the role of memory, knowledge organization, and reasoning, as well as studies of problem-solving and decision-making in medical areas that inform curricular design. The results of this research contribute to the design of more informed curricula based on empirical findings about how people learn and think, and more specifically, how expertise is developed. Similarly, the study of practice can also help to shape theories of human performance, technology-based learning, and scientific and professional collaboration that extend beyond the domain of medicine. Just as biomedical science has revolutionized health care practice, research in the cognitive and learning sciences provides a scientific foundation for education in biomedicine, the health professions, and biomedical informatics.
|
35
|
Lapeer R, Chen MS, Gonzalez G, Linney A, Alusi G. Image-enhanced surgical navigation for endoscopic sinus surgery: evaluating calibration, registration and tracking. Int J Med Robot 2008; 4:32-45. [DOI: 10.1002/rcs.175] [Citation(s) in RCA: 29] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
|
36
|
Lerotic M, Chung AJ, Mylonas G, Yang GZ. Pq-space based non-photorealistic rendering for augmented reality. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2007; 10:102-9. [PMID: 18044558 DOI: 10.1007/978-3-540-75759-7_13] [Citation(s) in RCA: 41] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
The increasing use of robot-assisted minimally invasive surgery (MIS) provides an ideal environment for using Augmented Reality (AR) to perform image-guided surgery. Seamless synthesis of AR depends on a number of factors relating to the way in which virtual objects appear and visually interact with a real environment. Traditional overlaid AR approaches generally suffer from a loss of depth perception. This paper presents a new AR method for robot-assisted MIS, which uses a novel pq-space based non-photorealistic rendering technique to provide see-through vision of the embedded virtual object whilst maintaining salient anatomical details of the exposed anatomical surface. Experimental results with both phantom and in vivo lung lobectomy data demonstrate the visual realism achieved by the proposed method and its accuracy in providing high-fidelity AR depth perception.
Affiliation(s)
- Mirna Lerotic
- Institute of Biomedical Engineering, Imperial College, London SW7 2AZ, UK.
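Here p and q denote the surface depth gradients (p = ∂z/∂x, q = ∂z/∂y). A minimal sketch of the general idea, assuming a per-pixel depth map and a hypothetical opacity mapping (not the paper's exact shading model):

```python
import numpy as np

def pq_opacity(depth, k=4.0):
    """Opacity from pq-space gradients: strong surface gradients (salient
    anatomical detail) stay opaque, while flat regions become translucent
    so the embedded virtual object shows through. The exponential mapping
    and the constant k are hypothetical, not the paper's exact model."""
    q, p = np.gradient(depth.astype(float))   # gradients along rows, cols
    alpha = 1.0 - np.exp(-k * np.hypot(p, q))
    return np.clip(alpha, 0.0, 1.0)

def composite(surface_rgb, virtual_rgb, depth):
    """Per-pixel blend of the real surface over the virtual object."""
    a = pq_opacity(depth)[..., None]
    return a * surface_rgb + (1.0 - a) * virtual_rgb
```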
|
37
|
Vogt S, Khamene A, Sauer F. Reality Augmentation for Medical Procedures: System Architecture, Single Camera Marker Tracking, and System Evaluation. Int J Comput Vis 2006. [DOI: 10.1007/s11263-006-7938-1] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
|
38
|
Castro A, Frauel Y, Tepichín E, Javidi B. Pose estimation from a two-dimensional view by use of composite correlation filters and neural networks. APPLIED OPTICS 2003; 42:5882-5890. [PMID: 14577541 DOI: 10.1364/ao.42.005882] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
We present a technique to estimate the pose of a three-dimensional object from a two-dimensional view. We first compute the correlation between the unknown image and several synthetic-discriminant-function filters constructed with known views of the object. We consider both linear and nonlinear correlations. The filters are constructed in such a way that the obtained correlation values depend on the pose parameters. We show that this dependence is not perfectly linear, in particular for nonlinear correlation. Therefore we use a two-layer neural network to retrieve the pose parameters from the correlation values. We demonstrate the technique by simultaneously estimating the in-plane and out-of-plane orientations of an airplane within an 8-deg portion. We show that a nonlinear correlation is necessary to identify the object and also to estimate its pose. On the other hand, linear correlation is more accurate and more robust. A combination of linear and nonlinear correlations gives the best results.
Affiliation(s)
- Albertina Castro
- Instituto Nacional de Astrofísica, Optica y Electrónica, Apdo. Postal 51, Puebla, Puebla, 72000, México.
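A minimal sketch of this pipeline, assuming flattened grayscale views and pre-trained network weights (all names hypothetical; only the linear-correlation branch is shown):

```python
import numpy as np

def sdf_filter(train_views, targets):
    """Synthetic-discriminant-function filter: a linear combination of the
    flattened training views (n, h*w) whose central correlation with view i
    equals targets[i]. Classic SDF construction, simplified here."""
    G = train_views @ train_views.T        # Gram matrix of inner products
    coeffs = np.linalg.solve(G, targets)
    return coeffs @ train_views            # filter of shape (h*w,)

def correlation_values(image, filters):
    """Central linear correlation of the unknown view with each filter.
    (The paper also uses nonlinear correlation; omitted in this sketch.)"""
    x = image.ravel()
    return np.array([f @ x for f in filters])

def pose_from_correlations(c, W1, b1, W2, b2):
    """Two-layer network mapping correlation values to pose parameters,
    since the dependence is not perfectly linear. Weights are assumed to
    have been trained on views of known pose (hypothetical here)."""
    return W2 @ np.tanh(W1 @ c + b1) + b2
```

By designing the target correlation values to vary with pose, the correlation outputs become (approximately) pose-encoding features, which the small network then decodes.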
|
39
|
An Augmented Reality Navigation System with a Single-Camera Tracker: System Design and Needle Biopsy Phantom Trial. ACTA ACUST UNITED AC 2002. [DOI: 10.1007/3-540-45787-9_15] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register]
|
40
|
Tamura S, Hirano M, Chen X, Sato Y, Narumi Y, Hori M, Takahashi S, Nakamura H. Intrabody three-dimensional position sensor for an ultrasound endoscope. IEEE Trans Biomed Eng 2002; 49:1187-94. [PMID: 12374344 DOI: 10.1109/tbme.2002.803517] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
To avoid or reduce X-ray exposure in endoscopic examinations and therapy, we are developing, as an alternative to conventional two-dimensional X-ray fluoroscopy, an intrabody navigation system that can directly measure and visualize the three-dimensional (3-D) position of the tip and the trace of an ultrasound endoscope. The proposed system can identify the 3-D location and direction of the endoscope probe inserted into the body to furnish endoscopic images. A marker transducer(s) placed on the surface of the body transmits ultrasound pulses, which are visualized as a marker synchronized to the scanning of the endoscope. The marker's position (the direction and distance of the marker transducer(s) outside the body relative to the scanning probe inside the body) is detected and measured in the scanned image of the ultrasound endoscope. Further, an optical localizer locates the marker transducer(s) with six degrees of freedom. Thus, the proposed method performs inside-body 3-D localization by exploiting the inherent image reconstruction function of the ultrasound endoscope, and can be used with currently available commercial ultrasound image scanners. The system may be envisaged as a kind of global positioning system for intrabody navigation.
Affiliation(s)
- Shinichi Tamura
- Division of Interdisciplinary Image Analysis, Osaka University Medical School, Suita City, Japan.
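Geometrically, the localization combines the marker's world position (from the optical localizer) with its measured direction and distance in the scan plane. A simplified sketch, assuming the probe orientation is already known (the actual system recovers it from the scan geometry itself):

```python
import numpy as np

def probe_tip_world(marker_world, marker_in_scan, R_probe):
    """Locate the endoscope probe in world coordinates.
    marker_world   : marker transducer position from the optical localizer
                     (the transducer sits on the body surface).
    marker_in_scan : the marker's position measured in the ultrasound
                     scan-plane frame (direction and distance as seen in
                     the image; z = 0 within the slice).
    R_probe        : 3x3 probe orientation in the world frame; treating it
                     as given is a simplification for this sketch."""
    return np.asarray(marker_world) - R_probe @ np.asarray(marker_in_scan)

# Example: the marker appears 40 mm away, 30 degrees off the probe axis.
theta = np.radians(30.0)
marker_in_scan = 40.0 * np.array([np.sin(theta), np.cos(theta), 0.0])
tip = probe_tip_world([120.0, 45.0, 300.0], marker_in_scan, np.eye(3))
print(tip)   # world position of the scanning probe
```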
|
41
|
Rosenthal M, State A, Lee J, Hirota G, Ackerman J, Keller K, Pisano E, Jiroutek M, Muller K, Fuchs H. Augmented reality guidance for needle biopsies: an initial randomized, controlled trial in phantoms. (A preliminary version of this paper was presented at the Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2001 conference in Utrecht, The Netherlands; Rosenthal et al., 2001.) Med Image Anal 2002; 6:313-20. [PMID: 12270235 DOI: 10.1016/s1361-8415(02)00088-9] [Citation(s) in RCA: 65] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
We report the results of a randomized, controlled trial to compare the accuracy of standard ultrasound-guided needle biopsy to biopsies performed using a 3D Augmented Reality (AR) guidance system. A board-certified radiologist conducted 50 core biopsies of breast phantoms, with biopsies randomly assigned to one of the methods in blocks of five biopsies each. The raw ultrasound data from each biopsy was recorded. Another board-certified radiologist, blinded to the actual biopsy guidance mechanism, evaluated the ultrasound recordings and determined the distance of the biopsy from the ideal position. A repeated measures analysis of variance indicated that the head-mounted display method led to a statistically significantly smaller mean deviation from the desired target than did the standard display method (2.48 mm for control versus 1.62 mm for augmented reality, p<0.02). This result suggests that AR systems can offer improved accuracy over traditional biopsy guidance methods.
Affiliation(s)
- Michael Rosenthal
- Department of Computer Science, University of North Carolina at Chapel Hill, CB 3175, Chapel Hill, NC 27599-3175, USA.
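The abstract does not spell out the randomization generator; the sketch below shows one plausible reading of "blocks of five biopsies each" (hypothetical, illustrative only):

```python
import random

def block_randomize(n_biopsies=50, block_size=5, methods=("standard", "AR")):
    """One reading of the assignment scheme: 50 biopsies allocated in
    blocks of five, with the guidance method drawn at random per block and
    balanced overall. Not the authors' documented procedure."""
    n_blocks = n_biopsies // block_size
    labels = list(methods) * (n_blocks // len(methods))
    random.shuffle(labels)
    return [m for m in labels for _ in range(block_size)]

schedule = block_randomize()   # e.g. ['AR', 'AR', ..., 'standard', ...]
```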
|
42
|
|
43
|
Sauer F, Khamene A, Bascle B, Rubino GJ. A Head-Mounted Display System for Augmented Reality Image Guidance: Towards Clinical Evaluation for iMRI-guided Neurosurgery. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION – MICCAI 2001 2001. [DOI: 10.1007/3-540-45468-3_85] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
|
44
|
Nikou C, Digioia AM, Blackwell M, Jaramaz B, Kanade T. Augmented reality imaging technology for orthopaedic surgery. ACTA ACUST UNITED AC 2000. [DOI: 10.1016/s1048-6666(00)80047-6] [Citation(s) in RCA: 16] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
|
45
|
Rothbaum BO, Hodges LF. The use of virtual reality exposure in the treatment of anxiety disorders. Behav Modif 1999; 23:507-25. [PMID: 10533438 DOI: 10.1177/0145445599234001] [Citation(s) in RCA: 51] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
One possible alternative to standard in vivo exposure may be virtual reality exposure. Virtual reality integrates real-time computer graphics, body tracking devices, visual displays, and other sensory input devices to immerse a participant in a computer-generated virtual environment. Virtual reality exposure (VRE) is potentially an efficient and cost-effective treatment of anxiety disorders. VRE therapy has been successful in reducing the fear of heights in the first known controlled study of virtual reality in the treatment of a psychological disorder. Outcome was assessed on measures of anxiety, avoidance, attitudes, and distress. Significant group differences were found on all measures such that the VRE group was significantly improved at posttreatment but the control group was unchanged. The efficacy of virtual reality exposure therapy was also supported for the fear of flying in a case study. The potential for virtual reality exposure treatment for these and other disorders is explored.
|
46
|
Abstract
PURPOSE: To guide treatment for macular diseases and to facilitate real-time image measurement and comparison, investigations were initiated to permit overlay of previously stored photographic and angiographic images directly onto the real-time slit-lamp biomicroscopic fundus image.
DESIGN: Experimental study in model eyes, and preliminary observations in human subjects.
METHODS: A modified, binocular video slit lamp interfaced to a personal computer and framegrabber allows for image acquisition and rendering of stored images overlaid onto the real-time slit-lamp biomicroscopic fundus image. Development proceeds with rendering on a computer monitor, while construction is completed on a miniature display interfaced directly with one of the slit-lamp oculars. Registration and tracking are performed with in-house-developed software.
MAIN OUTCOME MEASURES: Tracking speed and accuracy, ergonomic acceptability.
RESULTS: Computer-vision algorithms permit robust montaging, tracking, registration, and rendering of previously stored photographic and angiographic images onto the real-time slit-lamp fundus biomicroscopic image. In model eyes and in preliminary studies in a human eye, optimized registration permits near-video-rate image overlay with updates at 3 to 10 Hz and misregistration errors on the order of 1 to 5 pixels.
CONCLUSIONS: A prototype for ophthalmic augmented reality (image overlay) is presented. The current hardware/software implementation allows for robust performance.
Affiliation(s)
- J W Berger
- Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia 19104, USA.
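The registration and tracking software is in-house and not described at the algorithmic level; as an illustrative stand-in for the per-update registration step, the sketch below estimates the overlay offset by normalized cross-correlation of a stored fundus patch against the live frame:

```python
import numpy as np

def ncc_register(ref_patch, live_frame, stride=2):
    """Toy stand-in for the described tracking/registration step: find the
    translation of a stored fundus patch within the live slit-lamp frame by
    normalized cross-correlation. This only illustrates the kind of
    registration the overlay repeats at each 3-10 Hz update."""
    ph, pw = ref_patch.shape
    r = (ref_patch - ref_patch.mean()) / (ref_patch.std() + 1e-9)
    best_score, best_xy = -np.inf, (0, 0)
    for y in range(0, live_frame.shape[0] - ph + 1, stride):
        for x in range(0, live_frame.shape[1] - pw + 1, stride):
            w = live_frame[y:y + ph, x:x + pw]
            score = (r * (w - w.mean())).sum() / (w.std() * r.size + 1e-9)
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy   # render the stored image at this offset
```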
|
47
|
Raghavan V, Molineros J, Sharma R. Interactive evaluation of assembly sequences using augmented reality. ACTA ACUST UNITED AC 1999. [DOI: 10.1109/70.768177] [Citation(s) in RCA: 63] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
48
|
Abstract
The objective of this article is to provide scientists, engineers and clinicians with an up-to-date overview on the current state of development in the area of three-dimensional ultrasound (3-DUS) and to serve as a reference for individuals who wish to learn more about 3-DUS imaging. The sections will review the state of the art with respect to 3-DUS imaging, methods of data acquisition, analysis and display approaches. Clinical sections summarize patient research study results to date with discussion of applications by organ system. The basic algorithms and approaches to visualization of 3-D and 4-D ultrasound data are reviewed, including issues related to interactivity and user interfaces. The implications of recent developments for future ultrasound imaging/visualization systems are considered. Ultimately, an improved understanding of ultrasound data offered by 3-DUS may make it easier for primary care physicians to understand complex patient anatomy. Tertiary care physicians specializing in ultrasound can further enhance the quality of patient care by using high-speed networks to review volume ultrasound data at specialization centers. Access to volume data and expertise at specialization centers affords more sophisticated analysis and review, further augmenting patient diagnosis and treatment.
Affiliation(s)
- T R Nelson
- Department of Radiology, University of California San Diego, La Jolla 92093-0610, USA.
|
49
|
Sato Y, Nakamoto M, Tamaki Y, Sasama T, Sakita I, Nakajima Y, Monden M, Tamura S. Image guidance of breast cancer surgery using 3-D ultrasound images and augmented reality visualization. IEEE TRANSACTIONS ON MEDICAL IMAGING 1998; 17:681-693. [PMID: 9874292 DOI: 10.1109/42.736019] [Citation(s) in RCA: 65] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
This paper describes augmented reality visualization for the guidance of breast-conservative cancer surgery using ultrasonic images acquired in the operating room just before surgical resection. Using an optical three-dimensional (3-D) position sensor, the position and orientation of each ultrasonic cross section are precisely measured so that geometrically accurate 3-D tumor models can be reconstructed from the acquired ultrasonic images. Similarly, the 3-D position and orientation of a video camera are obtained to integrate video and ultrasonic images in a geometrically accurate manner. Superimposing the 3-D tumor models onto live video images of the patient's breast enables the surgeon to perceive the exact 3-D position of the tumor, including irregular cancer invasions which cannot be perceived by touch, as if it were visible through the breast skin. Using the resultant visualization, the surgeon can determine the region for surgical resection in a more objective and accurate manner, thereby minimizing the risk of a relapse and maximizing breast conservation. The system was shown to be effective in experiments using phantom and clinical data.
Affiliation(s)
- Y Sato
- Division of Functional Diagnostic Imaging, Biomedical Research Center, Osaka University Medical School, Suita, Japan.
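The geometric core is a chain of rigid transforms: ultrasound pixel to world coordinates via the tracked probe pose, then world to camera for overlay on live video. A minimal sketch under assumed calibration inputs (names and calibration parameters are illustrative, supplied in practice by the tracking setup):

```python
import numpy as np

def us_pixel_to_world(px, py, mm_per_px, T_img_to_world):
    """Map an ultrasound pixel into world coordinates using the optically
    tracked pose of the image plane. T_img_to_world is the 4x4 pose of the
    ultrasound slice in the world frame; mm_per_px is the image scale."""
    p = np.array([px * mm_per_px, py * mm_per_px, 0.0, 1.0])  # slice: z = 0
    return (T_img_to_world @ p)[:3]

def world_to_video(p_world, K, T_world_to_cam):
    """Project a reconstructed tumor-model point into the tracked video
    camera so the 3-D model can be superimposed on the live image."""
    p_cam = (T_world_to_cam @ np.append(p_world, 1.0))[:3]
    u, v, w = K @ p_cam
    return u / w, v / w
```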
|
50
|
Tang SL, Kwoh CK, Teo MY, Sing NW, Ling KV. Augmented reality systems for medical applications. IEEE ENGINEERING IN MEDICINE AND BIOLOGY MAGAZINE : THE QUARTERLY MAGAZINE OF THE ENGINEERING IN MEDICINE & BIOLOGY SOCIETY 1998; 17:49-58. [PMID: 9604701 DOI: 10.1109/51.677169] [Citation(s) in RCA: 78] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Affiliation(s)
- S L Tang
- School of Mechanical and Production Engineering, Nanyang Technological University, Singapore
|