1
Windoffer R, Schwarz N, Yoon S, Piskova T, Scholkemper M, Stegmaier J, Bönsch A, Di Russo J, Leube R. Quantitative mapping of keratin networks in 3D. eLife 2022; 11:75894. [PMID: 35179484; PMCID: PMC8979588; DOI: 10.7554/elife.75894]
Abstract
Mechanobiology requires precise quantitative information on processes taking place in specific 3D microenvironments. Connecting the abundance of microscopical, molecular, biochemical, and cell mechanical data with defined topologies has turned out to be extremely difficult. Establishing such structural and functional 3D maps needed for biophysical modeling is a particular challenge for the cytoskeleton, which consists of long and interwoven filamentous polymers coordinating subcellular processes and interactions of cells with their environment. To date, useful tools are available for the segmentation and modeling of actin filaments and microtubules, but comprehensive tools for mapping intermediate filament organization are still lacking. In this work, we describe a workflow to model and examine the complete 3D arrangement of the keratin intermediate filament cytoskeleton in canine, murine, and human epithelial cells, both in vitro and in vivo. Numerical models are derived from confocal Airyscan high-resolution 3D imaging of fluorescence-tagged keratin filaments. They are interrogated and annotated at different length scales using different modes of visualization, including immersive virtual reality. In this way, information is provided on network organization at the subcellular level, including mesh arrangement, density, and isotropic configuration, as well as details on filament morphology such as bundling, curvature, and orientation. We show that the comparison of these parameters helps to identify, in quantitative terms, similarities and differences of keratin network organization in epithelial cell types defining subcellular domains, notably basal, apical, lateral, and perinuclear systems. The described approach and the presented data are pivotal for generating mechanobiological models that can be experimentally tested.
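The abstract does not spell out how the filament-level parameters are computed. As a rough illustration only, a traced filament can be treated as an ordered 3D polyline and per-segment orientation plus a discrete curvature estimate derived from its node coordinates; the sketch below makes that assumption and is not the authors' published pipeline (all function names and the toy data are invented for illustration).

```python
# Minimal sketch (not the authors' pipeline): per-segment length, orientation,
# and a discrete curvature estimate for a filament given as a 3D polyline.
import numpy as np

def filament_stats(points: np.ndarray):
    """points: (N, 3) array of ordered node coordinates along one filament."""
    seg = np.diff(points, axis=0)                 # segment vectors
    seg_len = np.linalg.norm(seg, axis=1)         # segment lengths
    unit = seg / seg_len[:, None]                 # unit tangents per segment
    # Orientation: polar angle of each segment relative to the z (apico-basal) axis.
    polar = np.degrees(np.arccos(np.clip(unit[:, 2], -1.0, 1.0)))
    # Discrete curvature at interior nodes: turning angle divided by mean segment length.
    cos_turn = np.clip(np.einsum("ij,ij->i", unit[:-1], unit[1:]), -1.0, 1.0)
    curvature = np.arccos(cos_turn) / (0.5 * (seg_len[:-1] + seg_len[1:]))
    return seg_len.sum(), polar, curvature

if __name__ == "__main__":
    # Toy helix as a stand-in for a traced keratin filament.
    t = np.linspace(0, 4 * np.pi, 200)
    pts = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])
    total_len, polar, curv = filament_stats(pts)
    print(f"length={total_len:.2f}, mean curvature={curv.mean():.3f} rad/unit")
```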
Affiliation(s)
- Reinhard Windoffer
- Institute of Molecular and Cellular Anatomy, RWTH Aachen University, Aachen, Germany
- Nicole Schwarz
- Institute of Molecular and Cellular Anatomy, RWTH Aachen University, Aachen, Germany
- Sungjun Yoon
- Institute of Molecular and Cellular Anatomy, RWTH Aachen University, Aachen, Germany
- Teodora Piskova
- Institute of Molecular and Cellular Anatomy, RWTH Aachen University, Aachen, Germany
- Johannes Stegmaier
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
- Andrea Bönsch
- Visual Computing Institute, RWTH Aachen University, Aachen, Germany
- Jacopo Di Russo
- Interdisciplinary Centre for Clinical Research, RWTH Aachen University, Aachen, Germany
- Rudolf Leube
- Institute of Molecular and Cellular Anatomy, RWTH Aachen University, Aachen, Germany
2
Gao Z, Wang H, Lv H, Wang M, Qi Y. Evaluating the Effects of Non-Isomorphic Rotation on 3D Manipulation Tasks in Mixed Reality Simulation. IEEE Transactions on Visualization and Computer Graphics 2022; 28:1261-1273. [PMID: 32746279; DOI: 10.1109/tvcg.2020.3010247]
Abstract
As a hyper-natural interaction technique in 3D user interfaces, non-isomorphic rotation has been considered an effective approach for rotation tasks, where a static or dynamic control-display gain can be applied to amplify or attenuate a rotation. However, it is not clear whether non-isomorphic rotation can benefit 6-degree-of-freedom (6-DOF) manipulation tasks in AR and VR. In this article, we extended the usability studies of non-isomorphic rotation from rotation-only tasks to 6-DOF manipulation tasks and analyzed the collected data using a 2-component model. Using a mixed reality (MR) simulation approach, we also investigated whether environment (AR or VR) had an impact on 3D manipulation tasks. The results reveal that although both static and dynamic non-isomorphic rotation techniques could save time and effort in ballistic phases, only dynamic non-isomorphic rotation was significantly faster than isomorphic rotation. Interestingly, while environment had no significant impact on overall user performance, we found evidence that it could affect fine-tuning in correction phases. We also found that most participants preferred AR over VR, indicating that environmental visual realism could be helpful to improve user experience.
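Non-isomorphic rotation is typically realized by scaling the angle of the controller's incremental rotation by a control-display gain. The sketch below illustrates that general idea only; the SciPy-based formulation, the gain function, and its parameters are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch of non-isomorphic rotation: the frame-to-frame hand rotation is
# expressed as a rotation vector and its angle is scaled by a control-display gain.
import numpy as np
from scipy.spatial.transform import Rotation as R

def amplified_delta(prev_hand: R, curr_hand: R, gain: float) -> R:
    """Scale the incremental hand rotation by a (static or dynamic) CD gain."""
    delta = curr_hand * prev_hand.inv()        # incremental rotation this frame
    rotvec = delta.as_rotvec()                 # axis * angle (radians)
    return R.from_rotvec(gain * rotvec)        # amplified (>1) or attenuated (<1)

def dynamic_gain(angular_speed: float, g_min=1.0, g_max=2.5, v_ref=2.0) -> float:
    """Illustrative dynamic gain: grows with angular speed (rad/s), clamped to [g_min, g_max]."""
    return float(np.clip(g_min + (g_max - g_min) * angular_speed / v_ref, g_min, g_max))

# Usage per frame: object_rot = amplified_delta(prev, curr, dynamic_gain(speed)) * object_rot
```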
3
Fonnet A, Prie Y. Survey of Immersive Analytics. IEEE Transactions on Visualization and Computer Graphics 2021; 27:2101-2122. [PMID: 31352344; DOI: 10.1109/tvcg.2019.2929033]
Abstract
Immersive analytics (IA) is a new term referring to the use of immersive technologies for data analysis. Yet such applications are not new, and numerous contributions have been made in the last three decades. However, no survey reviewing all these contributions is available. Here we propose a survey of IA from the early nineties until the present day, describing how rendering technologies, data, sensory mapping, and interaction means have been used to build IA systems, as well as how these systems have been evaluated. The conclusions that emerge from our analysis are that multi-sensory aspects of IA are under-exploited, that the 3DUI and VR communities' knowledge of immersive interaction is not sufficiently utilised, and that the IA community should focus on converging towards best practices as well as aim for real-life IA systems.
4
Mirhosseini S, Gutenko I, Ojal S, Marino J, Kaufman A. Immersive Virtual Colonoscopy. IEEE Transactions on Visualization and Computer Graphics 2019; 25:2011-2021. [PMID: 30762554; DOI: 10.1109/tvcg.2019.2898763]
Abstract
Virtual colonoscopy (VC) is a non-invasive screening tool for colorectal polyps which employs volume visualization of a colon model reconstructed from a CT scan of the patient's abdomen. We present an immersive analytics system for VC which enhances and improves the traditional desktop VC through the use of VR technologies. Our system, using a head-mounted display (HMD), includes all of the standard VC features, such as the volume rendered endoluminal fly-through, measurement tool, bookmark modes, electronic biopsy, and slice views. The use of VR immersion, stereo, and wider field of view and field of regard has a positive effect on polyp search and analysis tasks in our immersive VC system, a volumetric-based immersive analytics application. Navigation includes enhanced automatic speed and direction controls, based on the user's head orientation, in conjunction with physical navigation for exploration of local proximity. In order to accommodate the resolution and frame rate requirements for HMDs, new rendering techniques have been developed, including mesh-assisted volume raycasting and a novel lighting paradigm. Feedback and further suggestions from expert radiologists show the promise of our system for immersive analysis for VC and encourage new avenues for exploring the use of VR in visualization systems for medical diagnosis.
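The automatic speed and direction controls are described only at a high level. One plausible reading is that forward speed along the colon centerline is modulated by how well the user's head orientation aligns with the local path direction; the snippet below sketches that reading with invented parameter values and is not the authors' navigation code.

```python
# Illustrative sketch (not the paper's implementation) of head-orientation-based
# fly-through: forward speed scales with the alignment of gaze and centerline.
import numpy as np

def fly_through_step(position, gaze_dir, centerline_dir, v_max=0.05, dt=1 / 90):
    gaze = gaze_dir / np.linalg.norm(gaze_dir)
    path = centerline_dir / np.linalg.norm(centerline_dir)
    alignment = max(np.dot(gaze, path), 0.0)   # 0 when looking sideways or backwards
    speed = v_max * alignment                  # slow down when inspecting the wall
    return position + speed * dt * path        # advance along the centerline

# Example: one 90 Hz frame of automatic forward motion.
pos = fly_through_step(np.zeros(3),
                       gaze_dir=np.array([0.1, 0.0, 1.0]),
                       centerline_dir=np.array([0.0, 0.0, 1.0]))
print(pos)
```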
5
Lopes DS, Parreira PDDF, Paulo SF, Nunes V, Rego PA, Neves MC, Rodrigues PS, Jorge JA. On the utility of 3D hand cursors to explore medical volume datasets with a touchless interface. J Biomed Inform 2017; 72:140-149. [PMID: 28720438; DOI: 10.1016/j.jbi.2017.07.009]
Abstract
Analyzing medical volume datasets requires interactive visualization so that users can extract anatomo-physiological information in real time. Conventional volume rendering systems rely on 2D input devices, such as mice and keyboards, which are known to hamper 3D analysis, as users often struggle to obtain the desired orientation and only achieve it after several attempts. In this paper, we address which 3D analysis tools are better performed with 3D hand cursors operating on a touchless interface compared to 2D input devices running on a conventional WIMP interface. The main goals of this paper are to explore the capabilities of (simple) hand gestures to facilitate sterile manipulation of 3D medical data on a touchless interface, without resorting to wearables, and to evaluate the surgical feasibility of the proposed interface with senior surgeons (N=5) and interns (N=2). To this end, we developed a touchless interface controlled via hand gestures and body postures to rapidly rotate and position medical volume images in three dimensions, where each hand acts as an interactive 3D cursor. User studies were conducted with laypeople, while informal evaluation sessions were carried out with senior surgeons, radiologists, and professional biomedical engineers. Results demonstrate its usability: the proposed touchless interface provides better spatial awareness and more fluent interaction with the 3D volume than traditional 2D input devices, requiring fewer attempts to achieve the desired orientation because it avoids the composition of several cumulative rotations that is typically necessary in WIMP interfaces. However, tasks requiring precision, such as clipping plane visualization and tagging, are best performed with mouse-based systems due to noise, incorrect gesture detection, and problems in skeleton tracking that need to be addressed before tests in real medical environments can be performed.
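As an informal illustration of the hand-as-3D-cursor idea, one simple mapping converts the tracked hand position, taken relative to the torso, into yaw and pitch of the displayed volume. The sketch below assumes such a mapping and a generic skeleton tracker providing metric positions; it is not the interface described in the paper, and the sensitivity value is invented.

```python
# Hedged sketch: map the hand offset in the torso frame to yaw/pitch of the volume,
# so sweeping the hand rotates the 3D dataset without touching any device.
import numpy as np

def hand_to_rotation(hand_pos, torso_pos, sensitivity=2.0):
    """Return (yaw, pitch) in radians from the hand offset in the torso frame."""
    offset = np.asarray(hand_pos, dtype=float) - np.asarray(torso_pos, dtype=float)
    yaw = sensitivity * np.arctan2(offset[0], offset[2])    # left/right sweep
    pitch = sensitivity * np.arctan2(offset[1], offset[2])  # up/down sweep
    return yaw, pitch

yaw, pitch = hand_to_rotation(hand_pos=[0.20, 0.05, 0.45], torso_pos=[0.0, 0.0, 0.0])
print(f"yaw={np.degrees(yaw):.1f} deg, pitch={np.degrees(pitch):.1f} deg")
```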
Affiliation(s)
- Daniel Simões Lopes
- INESC-ID Lisboa, IST Taguspark, Avenida Professor Cavaco Silva, 2744-016 Porto Salvo, Portugal.
- Soraia Figueiredo Paulo
- INESC-ID Lisboa, IST Taguspark, Avenida Professor Cavaco Silva, 2744-016 Porto Salvo, Portugal.
- Vitor Nunes
- Surgery Department, Hospital Prof. Doutor Fernando Fonseca, E.P.E., IC19, 2720-276 Amadora, Portugal.
- Paulo Amaral Rego
- Hip Surgery Unit, Orthopedic Surgery Department, Hospital Beatriz Ângelo, Av. Carlos Teixeira, 3, 2674-514 Loures, Portugal; Department of Orthopaedic Surgery, Hospital da Luz, Avenida Lusíada, 100, 1500-650 Lisboa, Portugal.
- Manuel Cassiano Neves
- Department of Pediatric & Adolescent Orthopaedic Surgery, Hospital CUF Descobertas, Rua Mário Botas, Parque das Nações, 1998-018 Lisboa, Portugal.
- Pedro Silva Rodrigues
- Oral Implantology Group, Clínica Universitária Egas Moniz, Rua D. João IV Nº 23ª, 2800-114 Almada, Portugal.
- Joaquim Armando Jorge
- INESC-ID Lisboa, IST Taguspark, Avenida Professor Cavaco Silva, 2744-016 Porto Salvo, Portugal; Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal.
6
Ragan ED, Bowman DA, Kopper R, Stinson C, Scerbo S, McMahan RP. Effects of Field of View and Visual Complexity on Virtual Reality Training Effectiveness for a Visual Scanning Task. IEEE Transactions on Visualization and Computer Graphics 2015; 21:794-807. [PMID: 26357242; DOI: 10.1109/tvcg.2015.2403312]
Abstract
Virtual reality training systems are commonly used in a variety of domains, and it is important to understand how the realism of a training simulation influences training effectiveness. We conducted a controlled experiment to test the effects of display and scenario properties on training effectiveness for a visual scanning task in a simulated urban environment. The experiment varied the levels of field of view and visual complexity during a training phase and then evaluated scanning performance with the simulator's highest levels of fidelity and scene complexity. To assess scanning performance, we measured target detection and adherence to a prescribed strategy. The results show that both field of view and visual complexity significantly affected target detection during training; a higher field of view led to better performance, and higher visual complexity worsened performance. Additionally, adherence to the prescribed visual scanning strategy during assessment was best when the level of visual complexity during training matched that of the assessment conditions, providing evidence that similar visual complexity was important for learning the technique. The results also demonstrate that task performance during training was not always a sufficient measure of mastery of an instructed technique. That is, if learning a prescribed strategy or skill is the goal of a training exercise, performance in a simulation may not be an appropriate indicator of effectiveness outside of training; evaluation in a more realistic setting may be necessary.
7
[Current reporting in radiology: what will happen tomorrow?]. Radiologe 2014; 54:45-52. [PMID: 24402724; DOI: 10.1007/s00117-013-2540-3]
Abstract
CLINICAL/METHODICAL ISSUE: Reporting in radiology faces considerable changes in the near future that will be influenced by a broader understanding of the task and increasing technological possibilities.
STANDARD RADIOLOGICAL METHODS: Until now, a radiological report could be regarded as a text phrased by a radiologist after viewing imaging data.
METHODICAL INNOVATIONS: New solutions will be opened up by advances in the visualization of large datasets and in extracting, analyzing, and communicating metadata, as well as by improved integration and interpretation of clinical information.
PERFORMANCE: Virtual reality, texture analysis, growing networks, semantic annotation, data mining, and context-based presentation have the potential to extensively change the everyday working routine.
ACHIEVEMENTS: Although many of these developments are still in a laboratory phase, the impact on the reporting process can already be predicted.
PRACTICAL RECOMMENDATIONS: As the leading community in information analysis and technology, radiology as a discipline should strive to lead and shape these impending changes.
8
Laha B, Bowman DA, Socha JJ. Effects of VR system fidelity on analyzing isosurface visualization of volume datasets. IEEE Transactions on Visualization and Computer Graphics 2014; 20:513-522. [PMID: 24650978; DOI: 10.1109/tvcg.2014.20]
Abstract
Volume visualization is an important technique for analyzing datasets from a variety of different scientific domains. Volume data analysis is inherently difficult because volumes are three-dimensional, dense, and unfamiliar, requiring scientists to precisely control the viewpoint and to make precise spatial judgments. Researchers have proposed that more immersive (higher fidelity) VR systems might improve task performance with volume datasets, and significant results tied to different components of display fidelity have been reported. However, more information is needed to generalize these results to different task types, domains, and rendering styles. We visualized isosurfaces extracted from synchrotron microscopic computed tomography (SR-μCT) scans of beetles, in a CAVE-like display. We ran a controlled experiment evaluating the effects of three components of system fidelity (field of regard, stereoscopy, and head tracking) on a variety of abstract task categories that are applicable to various scientific domains, and also compared our results with those from our prior experiment using 3D texture-based rendering. We report many significant findings. For example, for search and spatial judgment tasks with isosurface visualization, a stereoscopic display provides better performance, but for tasks with 3D texture-based rendering, displays with higher field of regard were more effective, independent of the levels of the other display components. We also found that systems with high field of regard and head tracking improve performance in spatial judgment tasks. Our results extend existing knowledge and produce new guidelines for designing VR systems to improve the effectiveness of volume data analysis.
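For readers unfamiliar with the preprocessing step, isosurface extraction from a volumetric scan is commonly done with marching cubes. The minimal scikit-image example below runs on a synthetic volume and only illustrates the general technique; it does not reproduce the authors' SR-μCT processing or their CAVE rendering pipeline.

```python
# Minimal marching-cubes example on a synthetic volume (illustration only).
import numpy as np
from skimage import measure

# Synthetic stand-in volume: a smooth spherical blob instead of a real CT stack.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = np.exp(-(x**2 + y**2 + z**2) / (2 * 12.0**2))

# Extract the isosurface mesh at a chosen density threshold.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```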