1. Sakano Y, Ando H. Conditions of a Multi-View 3D Display for Accurate Reproduction of Perceived Glossiness. IEEE Trans Vis Comput Graph 2022;28:3336-3350. PMID: 33651695. DOI: 10.1109/tvcg.2021.3063182.
Abstract
Visualizing objects as they are perceived in the real world is often critical in our daily experiences. We previously focused on objects' surface glossiness visualized with a 3D display and found that a multi-view 3D display reproduces perceived glossiness more accurately than a 2D display. This improvement of glossiness reproduction can be explained by the fact that a glossy surface visualized by a multi-view 3D display appropriately provides luminance differences between the two eyes and luminance changes accompanying the viewer's lateral head motion. In the present study, to determine the requirements of a multi-view 3D display for the accurate reproduction of perceived glossiness, we developed a simulator of a multi-view 3D display to independently and simultaneously manipulate the viewpoint interval and the magnitude of the optical inter-view crosstalk. Using the simulator, we conducted a psychophysical experiment and found that glossiness reproduction is most accurate when the viewpoint interval is small and there is just a small (but not too small) amount of crosstalk. We proposed a simple yet perceptually valid model that quantitatively predicts the reproduction accuracy of perceived glossiness.
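The inter-view crosstalk manipulated in this study can be illustrated with a common symmetric-leakage model, in which each rendered view is blended with a fraction of its two neighboring views. This is a minimal sketch under that assumption, not the simulator described in the paper:

```python
import numpy as np

def apply_crosstalk(views, alpha):
    """Blend each view with a fraction `alpha` of each adjacent view.

    views: array of shape (n_views, H, W) holding per-view luminance.
    alpha: fraction of luminance leaking in from each neighboring view.
    Edge views clamp to themselves. Illustrative assumption only.
    """
    n = len(views)
    out = np.empty_like(views, dtype=float)
    for k in range(n):
        left = views[k - 1] if k > 0 else views[k]
        right = views[k + 1] if k < n - 1 else views[k]
        out[k] = (1 - 2 * alpha) * views[k] + alpha * (left + right)
    return out

# Toy example: three uniform 1x1 views with distinct luminance levels.
views = np.array([[[0.2]], [[0.5]], [[0.8]]])
mixed = apply_crosstalk(views, alpha=0.1)
```

With `alpha = 0.1`, the middle view stays at 0.5 (its neighbors average to its own level), while the edge views shift toward their single neighbor.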
2. Lin L, Gao Y, Aung ZM, Xu H, Wang B, Yang X, Chai G, Xie L. Preliminary reports of augmented-reality assisted craniofacial bone fracture reduction. J Plast Reconstr Aesthet Surg 2022;75:e1-e8. DOI: 10.1016/j.bjps.2022.06.105.
3.
Abstract
In order to estimate the shape of objects, the visual system must refer to shape-related regularities in the (retinal) image. For opaque objects, many such regularities have already been identified, but most of them cannot simply be transferred to transparent objects, because they are not available there at all or are available only in a substantially modified form. We here consider three potentially relevant regularities specific to transparent objects: optical background distortions due to refraction, changes in chromaticity and brightness due to absorption, and multiple mirror images due to specular reflection. Using computer simulations, we first analyze under which conditions these regularities may be used as shape cues. We further investigate experimentally how shape perception depends on the availability of these potential cues in realistic scenes under natural viewing conditions. Our results show that the shape of transparent objects was perceived both less accurately and less precisely than in the opaque case. Furthermore, the influence of individual image regularities varied considerably depending on the properties of both object and scene. This suggests that in the transparent case, what kind of information is usable as a shape cue depends on a complex interplay of properties of the transparent object and the surrounding scene.
Affiliation(s)
- Nick Schlüter
- Institut für Psychologie, Christian-Albrechts-Universität zu Kiel, Kiel, Germany
- Franz Faul
- Institut für Psychologie, Christian-Albrechts-Universität zu Kiel, Kiel, Germany
4. Lichtenberg N, Lawonn K. Auxiliary Tools for Enhanced Depth Perception in Vascular Structures. Adv Exp Med Biol 2019;1138:103-113. DOI: 10.1007/978-3-030-14227-8_8.
5. An HTML Tool for Production of Interactive Stereoscopic Compositions. J Med Syst 2016;40:265. DOI: 10.1007/s10916-016-0616-0.
6. Choi H, Cho B, Masamune K, Hashizume M, Hong J. An effective visualization technique for depth perception in augmented reality-based surgical navigation. Int J Med Robot 2015;12:62-72. PMID: 25951494. DOI: 10.1002/rcs.1657.
Abstract
BACKGROUND: Depth perception is a major issue in augmented reality (AR)-based surgical navigation. We propose an AR and virtual reality (VR) switchable visualization system with distance information, and evaluate its performance in a surgical navigation set-up.
METHODS: To improve depth perception, seamless switching from AR to VR was implemented. In addition, the minimum distance between the tip of the surgical tool and the nearest organ was provided in real time. To evaluate the proposed techniques, five physicians and 20 non-medical volunteers participated in experiments.
RESULTS: Targeting error, time taken, and number of collisions were measured in simulation experiments. There was a statistically significant difference between a simple AR technique and the proposed technique.
CONCLUSIONS: We confirmed that depth perception in AR could be improved by the proposed seamless switching between AR and VR, and providing an indication of the minimum distance also facilitated the surgical tasks.
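The minimum tool-tip-to-organ distance reported in real time by the system above can be sketched as a nearest-point query against a point-sampled organ surface. This brute-force version is an illustrative assumption; a real navigation system would use a KD-tree or precomputed distance field for real-time rates:

```python
import numpy as np

def min_tip_distance(tip, organ_points):
    """Minimum Euclidean distance from the tool tip to a point-sampled
    organ surface. Brute-force sketch, not the authors' implementation.
    """
    tip = np.asarray(tip, dtype=float)
    pts = np.asarray(organ_points, dtype=float)
    return float(np.sqrt(((pts - tip) ** 2).sum(axis=1)).min())

# Hypothetical example: tip 5 mm above a flat patch of surface samples.
surface = [(x, y, 0.0) for x in range(-5, 6) for y in range(-5, 6)]
d = min_tip_distance((0.0, 0.0, 5.0), surface)
```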
Affiliation(s)
- Hyunseok Choi
- Department of Robotics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, Republic of Korea
- Byunghyun Cho
- Department of Advanced Medical Initiatives, Faculty of Medical Sciences, Kyushu University, Fukuoka, Japan
- Ken Masamune
- Institute of Advanced Biomedical Engineering and Science, Tokyo Women's Medical University, Tokyo, Japan
- Makoto Hashizume
- Department of Advanced Medical Initiatives, Faculty of Medical Sciences, Kyushu University, Fukuoka, Japan
- Jaesung Hong
- Department of Robotics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, Republic of Korea
7. Wang J, Kreiser M, Wang L, Navab N, Fallavollita P. Augmented depth perception visualization in 2D/3D image fusion. Comput Med Imaging Graph 2014;38:744-752. PMID: 25066009. DOI: 10.1016/j.compmedimag.2014.06.015.
Abstract
2D/3D image fusion applications are widely used in endovascular interventions. Complaints from interventionists about existing state-of-the-art visualization software usually concern the strong compromise between 2D and 3D visibility or the lack of depth perception. In this paper, we investigate several concepts for improving the image fusion visualization currently found in the operating room. First, a contour-enhanced visualization is used to circumvent hidden information in the X-ray image. Second, an occlusion and depth color-coding scheme is considered to improve depth perception. To validate our visualization technique, both phantom and clinical data are considered. An evaluation is performed in the form of a questionnaire completed by 24 participants: ten clinicians and fourteen non-clinicians. Results indicate that the occlusion correction method provides 100% correctness when determining the true position of an aneurysm in X-ray. Further, when an RGB or RB color-depth encoding is integrated into the image fusion, both perception and intuitiveness are improved.
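The RB color-depth encoding mentioned above can be sketched as a linear interpolation from red (near) to blue (far). The mapping below is an illustrative assumption, not the authors' exact scheme:

```python
def depth_to_rb(depth, d_near, d_far):
    """Map a depth value to an (R, G, B) tuple: red for the nearest
    depth, blue for the farthest, linearly interpolated and clamped.
    Illustrative sketch of an RB color-depth encoding.
    """
    t = (depth - d_near) / (d_far - d_near)
    t = min(max(t, 0.0), 1.0)  # clamp to [0, 1]
    return (1.0 - t, 0.0, t)

near_color = depth_to_rb(0.0, 0.0, 100.0)    # nearest sample
far_color = depth_to_rb(100.0, 0.0, 100.0)   # farthest sample
mid_color = depth_to_rb(50.0, 0.0, 100.0)    # halfway
```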
Affiliation(s)
- Jian Wang
- Chair for Computer Aided Medical Procedures, Fakultät für Informatik, Technische Universität München, Germany
- Lejing Wang
- Chair for Computer Aided Medical Procedures, Fakultät für Informatik, Technische Universität München, Germany
- Nassir Navab
- Chair for Computer Aided Medical Procedures, Fakultät für Informatik, Technische Universität München, Germany
- Pascal Fallavollita
- Chair for Computer Aided Medical Procedures, Fakultät für Informatik, Technische Universität München, Germany
8. Kersten-Oertel M, Chen SJS, Collins DL. An evaluation of depth enhancing perceptual cues for vascular volume visualization in neurosurgery. IEEE Trans Vis Comput Graph 2014;20:391-403. PMID: 24434220. DOI: 10.1109/tvcg.2013.240.
Abstract
Cerebral vascular images obtained through angiography are used by neurosurgeons for diagnosis, surgical planning, and intraoperative guidance. The intricate branching and furcations of the vessels, however, make the task of understanding the spatial three-dimensional layout of these images challenging. In this paper, we present empirical studies on the effect of different perceptual cues (fog, pseudo-chromadepth, kinetic depth, and depicting edges), both individually and in combination, on the depth perception of cerebral vascular volumes, and compare these to the cue of stereopsis. Two experiments with novices and one experiment with experts were performed. The results with novices showed that the pseudo-chromadepth and fog cues were stronger than stereopsis. Furthermore, adding the stereopsis cue to the other cues did not improve relative depth perception in cerebral vascular volumes. In contrast to novices, the experts also performed well with the edge cue. For both novice and expert subjects, pseudo-chromadepth and fog allow the best relative depth perception. By using such cues to improve depth perception of cerebral vasculature, we may improve diagnosis, surgical planning, and intraoperative guidance.
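The fog cue evaluated above is essentially depth-cueing: samples farther from the viewer are attenuated toward the background so nearer vessels appear brighter. A minimal linear version, assuming a simple attenuation model rather than the study's exact fog implementation:

```python
def apply_fog(intensity, depth, d_near, d_far):
    """Attenuate a sample's intensity linearly with depth, so nearer
    structures render brighter. Illustrative depth-cueing sketch; the
    study's actual fog model may differ (e.g., exponential falloff).
    """
    t = (depth - d_near) / (d_far - d_near)
    t = min(max(t, 0.0), 1.0)  # clamp to the [d_near, d_far] range
    return intensity * (1.0 - t)

bright_near = apply_fog(1.0, 0.0, 0.0, 10.0)   # nearest: unattenuated
halfway = apply_fog(1.0, 5.0, 0.0, 10.0)       # mid-range: half bright
dim_far = apply_fog(1.0, 10.0, 0.0, 10.0)      # farthest: fully faded
```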
9. Liu X, Jiang H, Lang Y, Wang H, Sun N. A novel stereoscopic projection display system for CT images of fractures. Exp Ther Med 2013;5:1677-1682. PMID: 23837053. PMCID: PMC3702712. DOI: 10.3892/etm.2013.1044.
Abstract
The present study proposed a novel projection display system based on a virtual reality enhancement environment. The proposed system displays stereoscopic images of fractures and enhances the computed tomography (CT) images. The diagnosis and treatment of fractures primarily depend on the post-processing of CT images. However, two-dimensional (2D) images do not show overlapping structures in fractures since they are displayed without visual depth and these structures are too small to be simultaneously observed by a group of clinicians. Stereoscopic displays may solve this problem and allow clinicians to obtain more information from CT images. Hardware with which to generate stereoscopic images was designed. This system utilized the conventional equipment found in meeting rooms. The off-axis algorithm was adopted to convert the CT images into stereo image pairs, which were used as the input for a stereo generator. The final stereoscopic images were displayed using a projection system. Several CT fracture images were imported into the system for comparison with traditional 2D CT images. The results showed that the proposed system aids clinicians in group discussions by producing large stereoscopic images. The results demonstrated that the enhanced stereoscopic CT images generated by the system appear clearer and smoother, such that the sizes, displacement and shapes of bone fragments are easier to assess. Certain fractures that were previously not visible on 2D CT images due to vision overlap became vividly evident in the stereo images. The proposed projection display system efficiently, economically and accurately displayed three-dimensional (3D) CT images. The system may help clinicians improve the diagnosis and treatment of fractures.
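The off-axis algorithm mentioned above generates a stereo pair by shifting each eye's camera laterally while keeping an asymmetric viewing frustum whose convergence plane coincides with the screen. This textbook sketch computes the per-eye frustum bounds at the near plane; the parameter values are hypothetical, and the paper's implementation details may differ:

```python
def off_axis_frustum(eye_offset, screen_half_width, screen_dist, near):
    """Left/right frustum bounds at the near clipping plane for one eye
    shifted laterally by `eye_offset` from the screen center. Standard
    off-axis (asymmetric frustum) stereo geometry.
    """
    scale = near / screen_dist  # similar triangles: near plane vs screen
    left = (-screen_half_width - eye_offset) * scale
    right = (screen_half_width - eye_offset) * scale
    return left, right

ipd = 0.064  # assumed interpupillary distance in meters
# Right eye: +ipd/2 offset; 0.4 m screen half-width at 1.0 m, near 0.1 m
l, r = off_axis_frustum(ipd / 2, 0.4, 1.0, 0.1)
```

The left eye uses the mirrored offset (`-ipd / 2`), yielding a frustum asymmetric in the opposite direction; the two renders form the stereo pair fed to the display.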
Affiliation(s)
- Xiujuan Liu
- Department of CT Room, 1st Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang 150001
10. Kersten-Oertel M, Jannin P, Collins DL. DVV: a taxonomy for mixed reality visualization in image guided surgery. IEEE Trans Vis Comput Graph 2012;18:332-352. PMID: 21383411. DOI: 10.1109/tvcg.2011.50.
Abstract
Mixed reality visualizations are increasingly studied for use in image guided surgery (IGS) systems, yet few mixed reality systems have been introduced for daily use into the operating room (OR). This may be the result of several factors: the systems are developed from a technical perspective, are rarely evaluated in the field, and/or lack consideration of the end user and the constraints of the OR. We introduce the Data, Visualization processing, View (DVV) taxonomy which defines each of the major components required to implement a mixed reality IGS system. We propose that these components be considered and used as validation criteria for introducing a mixed reality IGS system into the OR. A taxonomy of IGS visualization systems is a step toward developing a common language that will help developers and end users discuss and understand the constituents of a mixed reality visualization system, facilitating a greater presence of future systems in the OR. We evaluate the DVV taxonomy based on its goodness of fit and completeness. We demonstrate the utility of the DVV taxonomy by classifying 17 state-of-the-art research papers in the domain of mixed reality visualization IGS systems. Our classification shows that few IGS visualization systems' components have been validated and even fewer are evaluated.
Affiliation(s)
- Marta Kersten-Oertel
- McConnell Brain Imaging Center at the Montreal Neurological Institute (MNI), 3801 University St, Montréal, QC H3A 2B4, Canada
11. van Beurden MHPH, IJsselsteijn WA, Juola JF. Effectiveness of stereoscopic displays in medicine: a review. 3D Research 2012. DOI: 10.1007/3dres.01(2012)3.
12. Zhang Q, Eagleson R, Peters TM. Volume visualization: a technical overview with a focus on medical applications. J Digit Imaging 2011;24:640-664. PMID: 20714917. DOI: 10.1007/s10278-010-9321-6.
Abstract
With the increasing availability of high-resolution isotropic three- or four-dimensional medical datasets from sources such as magnetic resonance imaging, computed tomography, and ultrasound, volumetric image visualization techniques have increased in importance. Over the past two decades, a number of new algorithms and improvements have been developed for practical clinical image display. More recently, further efficiencies have been attained by designing and implementing volume-rendering algorithms on graphics processing units (GPUs). In this paper, we review volumetric image visualization pipelines, algorithms, and medical applications. We also illustrate our algorithm implementation and evaluation results, and address the advantages and drawbacks of each algorithm in terms of image quality and efficiency. Within the outlined literature review, we have integrated our research results relating to new visualization, classification, enhancement, and multimodal data dynamic rendering. Finally, we illustrate issues related to modern GPU working pipelines, and their applications in volume visualization domain.
Affiliation(s)
- Qi Zhang
- Imaging Research Laboratories, Robarts Research Institute, University of Western Ontario, London, ON, Canada
13. Chung JR, Sung C, Mayerich D, Kwon J, Miller DE, Huffman T, Keyser J, Abbott LC, Choe Y. Multiscale exploration of mouse brain microstructures using the knife-edge scanning microscope brain atlas. Front Neuroinform 2011;5:29. PMID: 22275895. PMCID: PMC3254184. DOI: 10.3389/fninf.2011.00029.
Abstract
Connectomics is the study of the full connection matrix of the brain. Recent advances in high-throughput, high-resolution 3D microscopy methods have enabled the imaging of whole small animal brains at a sub-micrometer resolution, potentially opening the road to full-blown connectomics research. One of the first such instruments to achieve whole-brain-scale imaging at sub-micrometer resolution is the Knife-Edge Scanning Microscope (KESM). KESM whole-brain data sets now include Golgi (neuronal circuits), Nissl (soma distribution), and India ink (vascular networks). KESM data can contribute greatly to connectomics research, since they fill the gap between lower resolution, large volume imaging methods (such as diffusion MRI) and higher resolution, small volume methods (e.g., serial sectioning electron microscopy). Furthermore, KESM data are by their nature multiscale, ranging from the subcellular to the whole organ scale. Due to this, visualization alone is a huge challenge, before we even start worrying about quantitative connectivity analysis. To solve this issue, we developed a web-based neuroinformatics framework for efficient visualization and analysis of the multiscale KESM data sets. In this paper, we will first provide an overview of KESM, then discuss in detail the KESM data sets and the web-based neuroinformatics framework, which is called the KESM brain atlas (KESMBA). Finally, we will discuss the relevance of the KESMBA to connectomics research, and identify challenges and future directions.
Affiliation(s)
- Ji Ryang Chung
- Department of Computer Science and Engineering, Texas A&M University, College Station, TX, USA
14. Wieczorek M, Aichert A, Fallavollita P, Kutter O, Ahmadi A, Wang L, Navab N. Interactive 3D visualization of a single-view X-ray image. Med Image Comput Comput Assist Interv 2011;14:73-80. PMID: 22003602. DOI: 10.1007/978-3-642-23623-5_10.
Abstract
In this paper, we present an interactive X-Ray perceptual visualization technique (IXPV) to improve 3D perception in standard single-view X-Ray images. Based on a priori knowledge from CT data, we re-introduce lost depth information into the original single-view X-Ray image without jeopardizing information of the original X-Ray. We propose a novel approach that is suitable for correct fusion of intraoperative X-Ray and ultrasound, co-visualization of X-Ray and surgical tools, and for improving the 3D perception of standard radiographs. Phantom and animal cadaver datasets were used during experimentation to demonstrate the impact of our technique. Results from a questionnaire completed by 11 clinicians and computer scientists demonstrate the added value of introduced depth cues directly in an X-Ray image. In conclusion, we propose IXPV as a futuristic alternative to the standard radiographic image found in today's clinical setting.
Affiliation(s)
- Matthias Wieczorek
- Chair for Computer Aided Medical Procedures (CAMP), Technische Universität München, Munich, Germany
15. Bruckner S, Gröller E. Enhancing depth-perception with flexible volumetric halos. IEEE Trans Vis Comput Graph 2007;13:1344-1351. PMID: 17968083. DOI: 10.1109/tvcg.2007.70555.
Abstract
Volumetric data commonly has high depth complexity which makes it difficult to judge spatial relationships accurately. There are many different ways to enhance depth perception, such as shading, contours, and shadows. Artists and illustrators frequently employ halos for this purpose. In this technique, regions surrounding the edges of certain structures are darkened or brightened which makes it easier to judge occlusion. Based on this concept, we present a flexible method for enhancing and highlighting structures of interest using GPU-based direct volume rendering. Our approach uses an interactively defined halo transfer function to classify structures of interest based on data value, direction, and position. A feature-preserving spreading algorithm is applied to distribute seed values to neighboring locations, generating a controllably smooth field of halo intensities. These halo intensities are then mapped to colors and opacities using a halo profile function. Our method can be used to annotate features at interactive frame rates.
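The halo idea described above, darkening a band around a structure's silhouette so occlusion is easier to judge, can be sketched in 2D as a dilation of the structure mask followed by darkening the band outside it. This is an illustrative toy version; the paper's method operates in 3D with a halo transfer function and a feature-preserving spreading step on the GPU:

```python
import numpy as np

def add_halo(image, mask, width=2, strength=0.5):
    """Darken a `width`-pixel band around a structure's silhouette.

    image: 2D float array of intensities.
    mask: 2D bool array marking the structure of interest.
    Dilates the mask by brute-force shifting, keeps only the band
    outside the structure, and darkens it by `strength`.
    """
    halo = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    for dy in range(-width, width + 1):
        for dx in range(-width, width + 1):
            shifted = np.zeros_like(mask)
            ys = slice(max(dy, 0), h + min(dy, 0))
            xs = slice(max(dx, 0), w + min(dx, 0))
            ys_src = slice(max(-dy, 0), h + min(-dy, 0))
            xs_src = slice(max(-dx, 0), w + min(-dx, 0))
            shifted[ys, xs] = mask[ys_src, xs_src]
            halo |= shifted
    halo &= ~mask  # keep only the band outside the structure itself
    out = image.astype(float).copy()
    out[halo] *= (1.0 - strength)
    return out

# Toy example: a single structure pixel on a mid-gray background.
img = np.full((7, 7), 0.6)
m = np.zeros((7, 7), dtype=bool)
m[3, 3] = True
result = add_halo(img, m, width=1, strength=0.5)
```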
Affiliation(s)
- Stefan Bruckner
- Institute of Computer Graphics and Algorithms, Vienna University of Technology, Austria