1. Hong J, Hnatyshyn R, Santos EAD, Maciejewski R, Isenberg T. A Survey of Designs for Combined 2D+3D Visual Representations. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2888-2902. PMID: 38648152. DOI: 10.1109/tvcg.2024.3388516.
Abstract
We examine visual representations of data that combine both 2D and 3D data mappings. Combining 2D and 3D representations is a common technique that allows viewers to understand multiple facets of the data with which they are interacting. While 3D representations focus on the spatial character of the data or a dedicated 3D data mapping, 2D representations often show abstract data properties and take advantage of the unique benefits of mapping to a plane. Many systems have used combinations of both types of data mappings effectively, yet there is no systematic review of methods for linking 2D and 3D representations. We systematically survey the relationships between 2D and 3D visual representations in major visualization venues (IEEE VIS, IEEE TVCG, and EuroVis) from 2012 to 2022. We closely examined 105 articles in which 2D and 3D representations are connected visually, interactively, or through animation, and characterized these approaches by their visual environment, the relationships between their visual representations, and their possible layouts. From this analysis, we introduce a design space and provide design guidelines for effectively linking 2D and 3D visual representations.
2. Fouché G, Argelaguet F, Faure E, Kervrann C. Immersive and interactive visualization of 3D spatio-temporal data using a space time hypercube: Application to cell division and morphogenesis analysis. Frontiers in Bioinformatics 2023; 3:998991. PMID: 36969798. PMCID: PMC10031126. DOI: 10.3389/fbinf.2023.998991.
Abstract
The analysis of multidimensional time-varying datasets faces challenges, notably regarding the representation of the data and the visualization of temporal variations. We propose an extension of the well-known Space-Time Cube (STC) visualization technique to visualize time-varying 3D spatial data, taking advantage of the interaction capabilities of Virtual Reality (VR). First, we propose the Space-Time Hypercube (STH) as an abstraction for 3D temporal data, extending the STC concept. Second, using the example of an embryo development imaging dataset, we detail the construction and visualization of an STC based on a user-driven projection of the spatial and temporal information. This projection yields a 3D STC visualization, which can also encode additional numerical and categorical data. Additionally, we propose a set of tools that allow the user to filter and manipulate the 3D STC, benefiting from the visualization, exploration, and interaction possibilities offered by VR. Finally, we evaluated the proposed visualization method in the context of 3D temporal cell imaging data analysis through a user study (n = 5) reporting the feedback of five biologists. These domain experts also accompanied the application design as consultants, providing insights on how the STC visualization could be used to explore complex 3D temporal morphogenesis data.
Affiliation(s)
- Gwendal Fouché
- Inria de l’Université de Rennes, IRISA, CNRS, Rennes, France
- Emmanuel Faure
- LIRMM, Université Montpellier, CNRS, Montpellier, France
- Charles Kervrann
- Inria de l’Université de Rennes, Rennes, France
- UMR144 CNRS Institut Curie, PSL Research University, Sorbonne Universités, Paris, France
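As a rough illustration of the space-time-cube construction described in the abstract above, the sketch below projects each 3D frame along a user-chosen spatial axis and stacks the 2D results over time. This is not the authors' pipeline: the function name, the NumPy representation, and the choice of a maximum-intensity projection are assumptions made for the example.

```python
import numpy as np

def space_time_cube(volumes: np.ndarray, axis: int = 2) -> np.ndarray:
    """Collapse a time-varying 3D volume into a space-time cube.

    volumes: array of shape (T, X, Y, Z), one 3D frame per time step.
    axis:    spatial axis to project away (the user-driven viewpoint choice).
    Returns an array whose first dimension is time and whose remaining two
    dimensions are the projected 2D slice, i.e., a 3D block of 2D space + time.
    """
    # Maximum-intensity projection of each frame along the chosen spatial axis;
    # +1 skips the leading time axis.
    return volumes.max(axis=axis + 1)

# Toy usage: 10 time steps of a 32x32x32 volume.
frames = np.random.rand(10, 32, 32, 32)
print(space_time_cube(frames, axis=2).shape)  # (10, 32, 32)
```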
3. Wen Z, Zeng W, Weng L, Liu Y, Xu M, Chen W. Effects of View Layout on Situated Analytics for Multiple-View Representations in Immersive Visualization. IEEE Transactions on Visualization and Computer Graphics 2023; 29:440-450. PMID: 36170396. DOI: 10.1109/tvcg.2022.3209475.
Abstract
Multiple-view (MV) representations, which enable multi-perspective exploration of large and complex data, are often employed on 2D displays. The technique also shows great potential for addressing complex analytic tasks in immersive visualization. However, the design space of MV representations in immersive visualization remains largely unexplored. In this paper, we offer a new perspective on this line of research by examining the effects of view layout for MV representations on situated analytics. Specifically, we disentangle situated analytics into situatedness, concerning the spatial relationship between visual representations and physical referents, and analytics, concerning cross-view data analysis including filtering, refocusing, and connecting tasks. Through an in-depth analysis of existing layout paradigms, we summarize design trade-offs for achieving high situatedness and effective analytics simultaneously. We then distill a list of design requirements for a layout that balances situatedness and analytics, and develop a prototype system with an automatic layout adaptation method that fulfills these requirements. The method mainly comprises a cylindrical paradigm for an egocentric reference frame and a force-directed method for proper view-view, view-user, and view-referent proximities and high view visibility. We conducted a formal user study comparing layouts produced by our method with linked and embedded layouts. Quantitative results show that participants finished filtering- and connecting-centered tasks significantly faster with our layouts, and user feedback confirms the high usability of the prototype system.
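To make the layout idea above concrete, here is a small sketch, not the authors' implementation, that places views on a cylinder around the user and relaxes their angular positions with simple forces: views linked by cross-view analysis attract each other, while all views keep a minimum angular separation. All constants and the pairing of views are invented for the example.

```python
import math
import random

def relax_on_cylinder(angles, related, iters=200, k_attract=0.02, k_repel=0.05, min_sep=0.35):
    """Force-directed relaxation of view angles (radians) on a cylinder around the user.

    angles:  initial angular position of each view.
    related: pairs (i, j) of views linked by cross-view analysis (filtering, connecting).
    """
    angles = list(angles)
    for _ in range(iters):
        forces = [0.0] * len(angles)
        # Attraction between related views: a shorter angular gap eases cross-view reading.
        for i, j in related:
            d = math.remainder(angles[j] - angles[i], math.tau)  # signed gap in [-pi, pi]
            forces[i] += k_attract * d
            forces[j] -= k_attract * d
        # Repulsion below a minimum separation: keeps every view visible and unoccluded.
        for i in range(len(angles)):
            for j in range(i + 1, len(angles)):
                d = math.remainder(angles[j] - angles[i], math.tau)
                if abs(d) < min_sep:
                    push = k_repel * (min_sep - abs(d)) * (1 if d < 0 else -1)
                    forces[i] += push
                    forces[j] -= push
        angles = [a + f for a, f in zip(angles, forces)]
    return angles

views = [random.uniform(0, math.tau) for _ in range(5)]
print(relax_on_cylinder(views, related=[(0, 1), (1, 2)]))
```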
4. Deng Z, Weng D, Liu S, Tian Y, Xu M, Wu Y. A survey of urban visual analytics: Advances and future directions. Computational Visual Media 2022; 9:3-39. PMID: 36277276. PMCID: PMC9579670. DOI: 10.1007/s41095-022-0275-7.
Abstract
Developing effective visual analytics systems demands careful characterization of domain problems and integration of visualization techniques and computational models. Urban visual analytics has already achieved remarkable success in tackling urban problems and providing fundamental services for smart cities. To promote further academic research and assist the development of industrial urban analytics systems, we comprehensively review urban visual analytics studies from four perspectives. In particular, we identify 8 urban domains and 22 types of popular visualizations, analyze 7 types of computational methods, and categorize existing systems into 4 types based on their integration of visualization techniques and computational models. We conclude with potential research directions and opportunities.
Affiliation(s)
- Zikun Deng
- State Key Lab of CAD & CG, Zhejiang University, Hangzhou, 310058 China
- Di Weng
- Microsoft Research Asia, Beijing, 100080 China
- Shuhan Liu
- State Key Lab of CAD & CG, Zhejiang University, Hangzhou, 310058 China
- Yuan Tian
- State Key Lab of CAD & CG, Zhejiang University, Hangzhou, 310058 China
- Mingliang Xu
- School of Information Engineering, Zhengzhou University, Zhengzhou, China
- Henan Institute of Advanced Technology, Zhengzhou University, Zhengzhou, 450001 China
- Yingcai Wu
- State Key Lab of CAD & CG, Zhejiang University, Hangzhou, 310058 China
5. Wagner J, Stuerzlinger W, Nedel L. The Effect of Exploration Mode and Frame of Reference in Immersive Analytics. IEEE Transactions on Visualization and Computer Graphics 2022; 28:3252-3264. PMID: 33606632. DOI: 10.1109/tvcg.2021.3060666.
Abstract
The design space for user interfaces for Immersive Analytics applications is vast. Designers can combine navigation and manipulation to enable data exploration with ego- or exocentric views, have the user operate at different scales, or use different forms of navigation with varying levels of physical movement. This freedom results in a multitude of different viable approaches. Yet, there is no clear understanding of the advantages and disadvantages of each choice. Our goal is to investigate the affordances of several major design choices, to enable both application designers and users to make better decisions. In this article, we assess two main factors, exploration mode and frame of reference, consequently also varying visualization scale and physical movement demand. To isolate each factor, we implemented nine different conditions in a Space-Time Cube visualization use case and asked 36 participants to perform multiple tasks. We analyzed the results in terms of performance and qualitative measures and correlated them with participants' spatial abilities. While egocentric room-scale exploration significantly reduced mental workload, exocentric exploration improved performance in some tasks. Combining navigation and manipulation made tasks easier by reducing workload, temporal demand, and physical effort.
6. Sereno M, Wang X, Besancon L, McGuffin MJ, Isenberg T. Collaborative Work in Augmented Reality: A Survey. IEEE Transactions on Visualization and Computer Graphics 2022; 28:2530-2549. PMID: 33085619. DOI: 10.1109/tvcg.2020.3032761.
Abstract
In Augmented Reality (AR), users perceive virtual content anchored in the real world. It is used in medicine, education, games, navigation, maintenance, product design, and visualization, in both single-user and multi-user scenarios. Multi-user AR has received limited attention from researchers, even though AR has been in development for more than two decades. We present the state of existing work at the intersection of AR and Computer-Supported Collaborative Work (AR-CSCW), combining a systematic survey approach with an exploratory, opportunistic literature search. We categorize 65 papers along the dimensions of space, time, role symmetry (whether the roles of users are symmetric), technology symmetry (whether the hardware platforms of users are symmetric), and output and input modalities. We derive design considerations for collaborative AR environments and identify under-explored research topics, including heterogeneous hardware considerations and 3D data exploration. This survey is useful for newcomers to the field, readers interested in an overview of CSCW in AR applications, and domain experts seeking up-to-date information.
7. Chu X, Xie X, Ye S, Lu H, Xiao H, Yuan Z, Zhu-Tian C, Zhang H, Wu Y. TIVEE: Visual Exploration and Explanation of Badminton Tactics in Immersive Visualizations. IEEE Transactions on Visualization and Computer Graphics 2022; 28:118-128. PMID: 34596547. DOI: 10.1109/tvcg.2021.3114861.
Abstract
Tactic analysis is a major topic in badminton, as the effective use of tactics is key to winning. A tactic in badminton is defined as a sequence of consecutive strokes. Most existing methods use statistical models to find sequential patterns of strokes and apply 2D visualizations, such as glyphs and statistical charts, to explore and analyze the discovered patterns. However, in badminton, spatial information like the shuttle trajectory, which is inherently 3D, is the core of a tactic. The lack of sufficient spatial awareness in 2D visualizations has largely limited tactic analysis in badminton. In this work, we collaborate with domain experts to study the tactic analysis of badminton in a 3D environment and propose an immersive visual analytics system, TIVEE, to assist users in exploring and explaining badminton tactics at multiple levels. Users can first explore various tactics from the third-person perspective using an unfolded visual presentation of stroke sequences. After selecting a tactic of interest, users can switch to the first-person perspective to perceive the detailed kinematic characteristics and explain its effects on the game result. The effectiveness and usefulness of TIVEE are demonstrated through case studies and an expert interview.
8. Latif S, Tarner H, Beck F. Talking Realities: Audio Guides in Virtual Reality Visualizations. IEEE Computer Graphics and Applications 2022; 42:73-83. PMID: 33560980. DOI: 10.1109/mcg.2021.3058129.
Abstract
Building upon the ideas of storytelling and explorable explanations, we introduce Talking Realities, a concept for producing data-driven interactive narratives in virtual reality. It combines an audio narrative with an immersive visualization to communicate analysis results. The narrative is automatically produced using template-based natural language generation and adapts to data and user interactions. The synchronized animation of visual elements in accordance with the audio connects the two representations. In addition, we discuss various modes of explanation ranging from fully guided tours to free exploration of the data. We demonstrate the applicability of our concept by developing a virtual reality visualization for air traffic data. Furthermore, generalizability is exhibited by sketching mock-ups for two more application scenarios in the context of information and scientific visualization.
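The template-based natural language generation mentioned above can be pictured with a minimal, hypothetical sketch: a narration template is filled from the data record currently in the user's view, and the resulting sentence could then be handed to any text-to-speech engine. The template wording and the example record are invented, not taken from the Talking Realities system.

```python
# Minimal template-based narration for a data-driven audio guide (illustrative only).
TEMPLATE = (
    "In {year}, {airport} handled {flights:,} flights, "
    "a {direction} of {change:.1f} percent compared with the previous year."
)

def narrate(record: dict) -> str:
    """Fill the narration template from the data record the user is looking at."""
    fields = dict(record)
    fields["direction"] = "rise" if record["change"] >= 0 else "drop"
    fields["change"] = abs(record["change"])
    return TEMPLATE.format(**fields)

# Invented example record; the returned sentence could be voiced by any TTS engine.
print(narrate({"year": 2019, "airport": "Frankfurt", "flights": 513912, "change": -1.8}))
```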
9. Shi L, Hu J, Tan Z, Tao J, Ding J, Jin Y, Wu Y, Thompson P. MV²Net: Multi-Variate Multi-View Brain Network Comparison over Uncertain Data. IEEE Transactions on Visualization and Computer Graphics 2021; PP:4640-4657. PMID: 34283716. DOI: 10.1109/tvcg.2021.3098123.
Abstract
Visually identifying effective bio-markers from human brain networks poses non-trivial challenges to the field of data visualization and analysis. Existing methods in the literature and in neuroscience practice are generally limited to the study of individual connectivity features in the brain (e.g., the strength of neural connection among brain regions). Pairwise comparisons between contrasting subject groups (e.g., the diseased and the healthy controls) are normally performed, and the underlying neuroimaging and brain network construction process is assumed to have 100% fidelity. Yet, real-world user requirements on brain network visual comparison run counter to these assumptions. In this work, we present MV²Net, a visual analytics system that tightly integrates multi-variate, multi-view visualization for brain network comparison with an interactive wrangling mechanism to deal with data uncertainty. On the analysis side, the system integrates multiple extraction methods for diffusion and geometric connectivity features of brain networks, an anomaly detection algorithm for data quality assessment, and single- and multi-connection feature selection methods for bio-marker detection. On the visualization side, novel designs are introduced that optimize network comparisons among contrasting subject groups and related connectivity features. Our design provides level-of-detail comparisons, from juxtaposed and explicit-coding views for subject group comparisons, to a high-order composite view for correlating network comparisons, and to a fiber tract detail view for voxel-level comparisons. The proposed techniques were informed by and evaluated in expert studies, as well as through case analyses of diffusion and geometric bio-markers of certain neurological diseases. Results from these experiments demonstrate the effectiveness and superiority of MV²Net over state-of-the-art approaches.
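For context, the pairwise group comparison the abstract refers to, testing each connectivity feature between a patient group and a control group and correcting for multiple comparisons, can be sketched as a generic statistical recipe. This is not MV²Net's actual pipeline; the function and the simulated data are illustrative only.

```python
import numpy as np
from scipy import stats

def compare_groups(patients: np.ndarray, controls: np.ndarray, alpha: float = 0.05):
    """Per-connection Welch t-test with Benjamini-Hochberg FDR correction.

    patients, controls: arrays of shape (n_subjects, n_connections), one connectivity
    feature (e.g., connection strength) per brain-region pair.
    Returns indices of connections that differ significantly between the groups.
    """
    _, p = stats.ttest_ind(patients, controls, axis=0, equal_var=False)
    order = np.argsort(p)
    m = len(p)
    thresholds = alpha * np.arange(1, m + 1) / m        # BH step-up thresholds
    passed = p[order] <= thresholds
    k = int(np.max(np.nonzero(passed)[0])) + 1 if passed.any() else 0
    return order[:k]                                    # all k smallest p-values are rejected

rng = np.random.default_rng(0)
sig = compare_groups(rng.normal(0.0, 1, (20, 90)), rng.normal(0.3, 1, (20, 90)))
print(len(sig), "connections flagged")
```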
10. Butcher PWS, John NW, Ritsos PD. VRIA: A Web-Based Framework for Creating Immersive Analytics Experiences. IEEE Transactions on Visualization and Computer Graphics 2021; 27:3213-3225. PMID: 31944959. DOI: 10.1109/tvcg.2020.2965109.
Abstract
We present VRIA, a Web-based framework for creating Immersive Analytics (IA) experiences in Virtual Reality. VRIA is built upon WebVR, A-Frame, React, and D3.js, and offers a visualization creation workflow that enables users of different levels of expertise to rapidly develop Immersive Analytics experiences for the Web. The use of these open-standards Web-based technologies allows us to implement VR experiences in a browser and offers strong synergies with popular visualization libraries through the HTML Document Object Model (DOM). This makes VRIA ubiquitous and platform-independent. Moreover, by using WebVR's progressive enhancement, the experiences VRIA creates are accessible on a plethora of devices. We elaborate on our motivation for focusing on open-standards Web technologies, present the VRIA creation workflow, and detail the underlying mechanics of our framework. We also report on techniques and optimizations necessary for implementing Immersive Analytics experiences on the Web, discuss the scalability implications of our framework, and present a series of use-case applications to demonstrate the various features of VRIA. Finally, we discuss current limitations of our framework, the lessons learned from its development, and outline further extensions.
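As a toy illustration of the declarative, browser-based approach described above, and explicitly not VRIA's grammar or API, the sketch below generates a minimal A-Frame scene in which each data value becomes a 3D bar. The element names follow stock A-Frame; the data, styling, and output file name are arbitrary.

```python
# Generate a minimal A-Frame bar chart as a static HTML page (illustrative only).
values = [3.0, 1.5, 4.2, 2.8]

bars = "\n".join(
    f'      <a-box position="{i * 0.6} {v / 2} -3" width="0.4" depth="0.4" '
    f'height="{v}" color="#4cc3d9"></a-box>'
    for i, v in enumerate(values)
)

page = f"""<!DOCTYPE html>
<html>
  <head>
    <script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
{bars}
      <a-plane rotation="-90 0 0" width="6" height="6" color="#7bc8a4"></a-plane>
      <a-sky color="#ececec"></a-sky>
    </a-scene>
  </body>
</html>"""

with open("bar_chart_vr.html", "w") as f:  # open the file in a WebXR-capable browser
    f.write(page)
```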
11. Kraus M, Klein K, Fuchs J, Keim D, Schreiber F, Sedlmair M, Rhyne TM. The Value of Immersive Visualization. IEEE Computer Graphics and Applications 2021; 41:125-132. PMID: 34264822. DOI: 10.1109/mcg.2021.3075258.
Abstract
In recent years, research on immersive environments has experienced a new wave of interest, and immersive analytics has been established as a new research field. Every year, a vast number of techniques, applications, and user studies are published that focus on employing immersive environments for visualizing and analyzing data. Nevertheless, immersive analytics is still a relatively unexplored field that needs more basic research in many aspects and is still viewed with skepticism. Rightly so, because in our opinion, many researchers do not fully exploit the possibilities offered by immersive environments and, on the contrary, sometimes even overestimate the power of immersive visualizations. Although a growing body of papers has demonstrated individual advantages of immersive analytics for specific tasks and problems, the general benefit of using immersive environments for effective analytic tasks remains controversial. In this article, we reflect on when and how immersion may be appropriate for analysis and present four guiding scenarios. We report on our experiences, discuss the landscape of assessment strategies, and point out the directions where we believe immersive visualizations have the greatest potential.
12. Wagner J, Stuerzlinger W, Nedel L. Comparing and Combining Virtual Hand and Virtual Ray Pointer Interactions for Data Manipulation in Immersive Analytics. IEEE Transactions on Visualization and Computer Graphics 2021; 27:2513-2523. PMID: 33750698. DOI: 10.1109/tvcg.2021.3067759.
Abstract
In this work, we evaluate two standard interaction techniques for Immersive Analytics environments: virtual hands, with actions such as grabbing and stretching, and virtual ray pointers, with actions assigned to controller buttons. We also consider a third option: seamlessly integrating both modes and allowing the user to alternate between them without explicit mode switches. Easy-to-use interaction with data visualizations in Virtual Reality enables analysts to intuitively query or filter the data, in addition to the benefit of multiple perspectives and stereoscopic 3D display. While many VR-based Immersive Analytics systems employ one of the studied interaction modes, the effect of this choice is unknown. Considering that each has different advantages, we compared the three conditions through a controlled user study in the spatio-temporal data domain. We did not find significant differences between hands and ray-casting in task performance, workload, or interactivity patterns. Yet, 60% of the participants preferred the mixed mode and benefited from it by choosing the best alternative for each low-level task. This mode significantly reduced completion times by 23% for the most demanding task, at the cost of a 5% decrease in overall success rates.
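The two interaction modes compared above reduce, at their core, to two different selection tests, sketched here as a simplification rather than the study's actual implementation: a virtual hand grabs the closest object within reach of the hand position, while a virtual ray pointer picks the first object hit by a controller ray. Object shapes are approximated as spheres, and all thresholds are invented.

```python
import numpy as np

def hand_grab(hand_pos, centers, radii, reach=0.08):
    """Virtual hand: index of the closest sphere whose surface is within reach, else None."""
    gaps = np.linalg.norm(centers - hand_pos, axis=1) - radii
    i = int(np.argmin(gaps))
    return i if gaps[i] <= reach else None

def ray_pick(origin, direction, centers, radii):
    """Virtual ray: index of the sphere hit by the ray, nearest by closest approach, else None."""
    d = direction / np.linalg.norm(direction)
    oc = centers - origin
    t = oc @ d                                # distance along the ray to the closest approach
    miss2 = np.sum(oc * oc, axis=1) - t ** 2  # squared distance from the ray axis
    hit = np.where((miss2 <= radii ** 2) & (t > 0))[0]
    return int(hit[np.argmin(t[hit])]) if hit.size else None

centers = np.array([[0.0, 1.0, -2.0], [0.5, 1.2, -1.0]])
radii = np.array([0.1, 0.1])
print(hand_grab(np.array([0.52, 1.2, -1.03]), centers, radii))                          # 1
print(ray_pick(np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, -1.0]), centers, radii))  # 0
```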
13. Fonnet A, Prie Y. Survey of Immersive Analytics. IEEE Transactions on Visualization and Computer Graphics 2021; 27:2101-2122. PMID: 31352344. DOI: 10.1109/tvcg.2019.2929033.
Abstract
Immersive analytics (IA) is a new term referring to the use of immersive technologies for data analysis. Yet such applications are not new, and numerous contributions have been made over the last three decades. However, no survey reviewing all these contributions is available. Here we propose a survey of IA from the early nineties until the present day, describing how rendering technologies, data, sensory mapping, and interaction means have been used to build IA systems, as well as how these systems have been evaluated. The conclusions that emerge from our analysis are that multi-sensory aspects of IA are under-exploited, that the 3DUI and VR communities' knowledge of immersive interaction is not sufficiently utilized, and that the IA community should focus on converging towards best practices and aim for real-life IA systems.
14. Yang Y, Cordeil M, Beyer J, Dwyer T, Marriott K, Pfister H. Embodied Navigation in Immersive Abstract Data Visualization: Is Overview+Detail or Zooming Better for 3D Scatterplots? IEEE Transactions on Visualization and Computer Graphics 2021; 27:1214-1224. PMID: 33048730. DOI: 10.1109/tvcg.2020.3030427.
Abstract
Abstract data has no natural scale, so interactive data visualizations must provide techniques that allow the user to choose their viewpoint and scale. Such techniques are well established in desktop visualization tools; the two most common are zoom+pan and overview+detail. However, how best to enable the analyst to navigate and view abstract data at different levels of scale in immersive environments has not previously been studied. We report the findings of the first systematic study of immersive navigation techniques for 3D scatterplots. We tested four conditions that represent our best attempt to adapt standard 2D navigation techniques to data visualization in an immersive environment while still providing standard immersive navigation through physical movement and teleportation. We compared room-sized visualization versus a zooming interface, each with and without an overview. We find significant differences in participants' response times and accuracy for a number of standard visual analysis tasks. Both zoom and overview provide benefits over standard locomotion support alone (i.e., physical movement and pointer teleportation); however, which variation is superior depends on the task. We obtain a more nuanced understanding of the results by analyzing them in terms of a time-cost model for the different components of navigation: way-finding, travel, number of travel steps, and context switching.
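The time-cost model mentioned at the end of the abstract can be written as a simple additive formula; the form below and its parameter names are an illustrative reading, not the authors' exact formulation.

```python
def navigation_time(t_wayfind, t_per_step, n_steps, t_switch, n_switches):
    """Additive cost of one navigation episode:
    way-finding + (travel steps x time per step) + (context switches x switch cost)."""
    return t_wayfind + n_steps * t_per_step + n_switches * t_switch

# E.g., a zooming interface may need fewer travel steps but more context switches than
# room-scale walking; plugging in measured averages makes the two comparable.
print(navigation_time(t_wayfind=2.0, t_per_step=1.5, n_steps=4, t_switch=0.8, n_switches=3))  # 10.4 s
```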
15. Ye S, Chen Z, Chu X, Wang Y, Fu S, Shen L, Zhou K, Wu Y. ShuttleSpace: Exploring and Analyzing Movement Trajectory in Immersive Visualization. IEEE Transactions on Visualization and Computer Graphics 2021; 27:860-869. PMID: 33048712. DOI: 10.1109/tvcg.2020.3030392.
Abstract
We present ShuttleSpace, an immersive analytics system to assist experts in analyzing trajectory data in badminton. Trajectories in sports, such as the movements of players and balls, contain rich information on player behavior and thus have been widely analyzed by coaches and analysts to improve players' performance. However, existing visual analytics systems often present trajectories in court diagrams that are abstractions of reality, making it difficult for experts to imagine the situation on the court and understand why a player acted in a certain way. With recent developments in immersive technologies such as virtual reality (VR), experts gradually have the opportunity to see, feel, explore, and understand these 3D trajectories from the player's perspective. Yet, little research has studied how to support immersive analysis of sports data from such a perspective. Specific challenges are rooted in data presentation (e.g., how to seamlessly combine 2D and 3D visualizations) and interaction (e.g., how to naturally interact with data without keyboard and mouse) in VR. To address these challenges, we worked closely with domain experts who have worked for a top national badminton team to design ShuttleSpace. Our system leverages 1) peripheral vision to combine 2D and 3D visualizations and 2) the VR controller to support natural interactions via a stroke metaphor. We demonstrate the effectiveness of ShuttleSpace through three case studies conducted by the experts, yielding useful insights. We further conducted interviews with the experts, whose feedback confirms that our first-person immersive analytics system is suitable and useful for analyzing badminton data.
16. Exploring Multiple and Coordinated Views for Multilayered Geospatial Data in Virtual Reality. Information 2020. DOI: 10.3390/info11090425.
Abstract
Virtual reality (VR) headsets offer a large and immersive workspace for displaying visualizations with stereoscopic vision, as compared to traditional environments with monitors or printouts. The controllers for these devices further allow direct three-dimensional interaction with the virtual environment. In this paper, we make use of these advantages to implement a novel multiple and coordinated view (MCV) system in the form of a vertical stack showing tilted layers of geospatial data. In a formal study based on a use case from urbanism that requires cross-referencing four layers of geospatial urban data, we compared it against more conventional systems similarly implemented in VR: a simpler grid of layers, and a single map that allows switching between layers. Performance and oculometric analyses showed a slight advantage of the two spatial-multiplexing methods (the grid or the stack) over the temporal multiplexing of blitting (switching between layers). Subgrouping the participants based on their preferences, characteristics, and behavior enabled a more nuanced analysis, allowing us to establish links between, for example, saccadic information, experience with video games, and the preferred system. In conclusion, we found that none of the three systems is optimal, and that a choice of different MCV systems should be provided to optimally engage users.
17. Batch A, Cunningham A, Cordeil M, Elmqvist N, Dwyer T, Thomas BH, Marriott K. There Is No Spoon: Evaluating Performance, Space Use, and Presence with Expert Domain Users in Immersive Analytics. IEEE Transactions on Visualization and Computer Graphics 2020; 26:536-546. PMID: 31484124. DOI: 10.1109/tvcg.2019.2934803.
Abstract
Immersive analytics turns the very space surrounding the user into a canvas for data analysis, supporting human cognitive abilities in myriad ways. We present the results of a design study, contextual inquiry, and longitudinal evaluation involving professional economists using a Virtual Reality (VR) system for multidimensional visualization to explore actual economic data. Results from our preregistered evaluation highlight the varied use of space depending on context (exploration vs. presentation), the organization of space to support work, and the impact of immersion on navigation and orientation in the 3D analysis space.
18. Zhu-Tian C, Zeng W, Yang Z, Yu L, Fu CW, Qu H. LassoNet: Deep Lasso-Selection of 3D Point Clouds. IEEE Transactions on Visualization and Computer Graphics 2020; 26:195-204. PMID: 31425100. DOI: 10.1109/tvcg.2019.2934332.
Abstract
Selection is a fundamental task in the exploratory analysis and visualization of 3D point clouds. Prior research on selection methods was based mainly on heuristics such as local point density, limiting their applicability to general data. Specific challenges are rooted in the great variability of point clouds (e.g., dense vs. sparse), viewpoints (e.g., occluded vs. non-occluded), and lassos (e.g., small vs. large). In this work, we introduce LassoNet, a new deep neural network for lasso selection of 3D point clouds, which attempts to learn a latent mapping from viewpoint and lasso to point cloud regions. To achieve this, we couple user-target points with viewpoint and lasso information through a 3D coordinate transform and naive selection, and improve the method's scalability via intention filtering and farthest point sampling. A hierarchical network is trained using a dataset with over 30K lasso-selection records on two different point cloud datasets. We conduct a formal user study to compare LassoNet with two state-of-the-art lasso-selection methods. The evaluations confirm that our approach improves selection effectiveness and efficiency across different combinations of 3D point clouds, viewpoints, and lasso selections. Project website: https://LassoNet.github.io.
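Of the building blocks named above, farthest point sampling is a standard, well-defined step, sketched below in NumPy; the intention filtering and the hierarchical network itself are outside the scope of this sketch.

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, k: int, seed: int = 0) -> np.ndarray:
    """Pick k indices from an (N, 3) point cloud, greedily maximizing mutual distance."""
    rng = np.random.default_rng(seed)
    n = len(points)
    chosen = np.empty(k, dtype=int)
    chosen[0] = rng.integers(n)                       # arbitrary starting point
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for i in range(1, k):
        chosen[i] = int(np.argmax(dist))              # farthest from all points chosen so far
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[i]], axis=1))
    return chosen

cloud = np.random.rand(10000, 3)
print(farthest_point_sampling(cloud, 256).shape)  # (256,)
```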
19. Filho JAW, Stuerzlinger W, Nedel L. Evaluating an Immersive Space-Time Cube Geovisualization for Intuitive Trajectory Data Exploration. IEEE Transactions on Visualization and Computer Graphics 2020; 26:514-524. PMID: 31581085. DOI: 10.1109/tvcg.2019.2934415.
Abstract
A Space-Time Cube enables analysts to clearly observe spatio-temporal features in movement trajectory datasets in geovisualization. However, its general usability is impacted by a lack of depth cues, a reported steep learning curve, and the requirement for efficient 3D navigation. In this work, we investigate a Space-Time Cube in the Immersive Analytics domain. Based on a review of previous work, and after selecting an appropriate exploration metaphor, we built a prototype environment in which the cube is coupled to a virtual representation of the analyst's real desk, and zooming and panning in space and time are intuitively controlled using mid-air gestures. We compared our immersive environment to a desktop-based implementation in a user study with 20 participants across 7 tasks of varying difficulty, which targeted different user interface features. To investigate how performance is affected in the presence of clutter, we explored two scenarios with different numbers of trajectories. While quantitative performance was similar for the majority of tasks, large differences appeared when we analyzed the patterns of interaction and considered subjective metrics. The immersive version of the Space-Time Cube received higher usability scores and much higher user preference, and was rated as having a lower mental workload, without causing participants discomfort in 25-minute-long VR sessions.
20. Thomas BH. Virtual Reality for Information Visualization Might Just Work This Time. Frontiers in Robotics and AI 2019; 6:84. PMID: 33501099. PMCID: PMC7806101. DOI: 10.3389/frobt.2019.00084.
Affiliation(s)
- Bruce H Thomas
- IVE: Australian Research Centre for Interactive and Virtual Environments, School of Information Technology and Mathematical Sciences, University of South Australia, Adelaide, SA, Australia