1. Borhani Z, Sharma P, Ortega FR. Survey of Annotations in Extended Reality Systems. IEEE Transactions on Visualization and Computer Graphics 2024; 30:5074-5096. [PMID: 37352090] [DOI: 10.1109/tvcg.2023.3288869]
Abstract
Annotation in 3D user interfaces such as Augmented Reality (AR) and Virtual Reality (VR) is a challenging and promising area; however, no surveys currently review these contributions. To provide a survey of annotations for Extended Reality (XR) environments, we conducted a structured literature review of papers that used annotation in their AR/VR systems between 2001 and 2021. Our review process consists of several filtering steps, which resulted in 103 XR publications with a focus on annotation. We classified these papers based on display technologies, input devices, annotation types, the target object under annotation, collaboration types, modalities, and collaborative technologies. Such a survey of annotation in XR is an invaluable resource for researchers and newcomers. Finally, we provide a database of the collected information for each reviewed paper, including applications, display technologies and their annotators, input devices, modalities, annotation types, interaction techniques, collaboration types, and tasks. This database provides rapid access to the collected data and lets users search and filter the required information. This survey offers a starting point for anyone interested in researching annotation in XR environments.
2. Friedl-Knirsch J, Stach C, Pointecker F, Anthes C, Roth D. A Study on Collaborative Visual Data Analysis in Augmented Reality with Asymmetric Display Types. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2633-2643. [PMID: 38437119] [DOI: 10.1109/tvcg.2024.3372103]
Abstract
Collaboration is a key aspect of immersive visual data analysis. Because co-located collaborators remain visible, augmented reality is often useful in such collaborative scenarios. However, different types of technology are available for augmenting the real environment. While specific devices are constantly being developed, each device type provides different premises for collaborative visual data analysis. In our work, we combine handheld, optical see-through, and video see-through displays to explore and understand the impact of these different device types on collaborative immersive analytics. We conducted a mixed-methods collaborative user study in which groups of three performed a shared data analysis task in augmented reality, with each user working on a different device, to explore differences in collaborative behaviour, user experience, and usage patterns. Both quantitative and qualitative data revealed differences in user experience and usage patterns. Regarding collaboration, the display types influenced how well participants could take part in the collaborative data analysis; nevertheless, there was no measurable effect on verbal communication.
3. Combe T, Chardonnet JR, Merienne F, Ovtcharova J. CAVE and HMD: distance perception comparative study. Virtual Reality 2023; 27:1-11. [PMID: 37360808] [PMCID: PMC10054200] [DOI: 10.1007/s10055-023-00787-y]
Abstract
This paper analyses user experience with two different immersive device categories: a CAVE automatic virtual environment (CAVE) and a head-mounted display (HMD). While most past studies focused on one of these devices to characterize user experience, we fill the gap in comparative studies by conducting investigations with both devices using the same application, method, and analysis. Through this study, we highlight the differences in user experience, in terms of visualization and interaction, induced by using either technology. We performed two experiments, each focusing on a specific aspect of the devices employed. The first relates to distance perception when walking and the possible influence of the HMD's weight, an issue that does not arise with CAVE systems since they do not require wearing any heavy equipment. Past studies found that weight may impact distance perception. Several walking distances were considered. Results revealed that the HMD's weight does not induce significant differences over short distances (above three meters). In the second experiment, we focused on distance perception over short distances. We considered that the HMD's screen being closer to the user's eyes than in CAVE systems might induce substantial distance perception differences, especially for short-distance interaction. We designed a task in which users had to move an object from one place to another at several distances using the CAVE and an HMD. Results revealed significant underestimation compared to reality, as in past work, but no significant differences between the immersive devices. These results provide a better understanding of the differences between the two emblematic virtual reality displays.
Affiliation(s)
- Théo Combe: Arts et Métiers Institute of Technology, LISPEN, HESAM Université, UBFC, 2 Rue Thomas Dumorey, 71100 Chalon-sur-Saône, France
- Jean-Rémy Chardonnet: Arts et Métiers Institute of Technology, LISPEN, HESAM Université, UBFC, 2 Rue Thomas Dumorey, 71100 Chalon-sur-Saône, France
- Frédéric Merienne: Arts et Métiers Institute of Technology, LISPEN, HESAM Université, UBFC, 2 Rue Thomas Dumorey, 71100 Chalon-sur-Saône, France
- Jivka Ovtcharova: IMI, Karlsruhe Institute of Technology, Kriegsstraße 77, 76133 Karlsruhe, Germany
4. Bueckle A, Qing C, Luley S, Kumar Y, Pandey N, Börner K. The HRA Organ Gallery Affords Immersive Superpowers for Building and Exploring the Human Reference Atlas with Virtual Reality. bioRxiv 2023:2023.02.13.528002. [PMID: 36824790] [PMCID: PMC9949060] [DOI: 10.1101/2023.02.13.528002]
Abstract
The Human Reference Atlas (HRA, https://humanatlas.io), funded by the NIH Human Biomolecular Atlas Program (HuBMAP, https://commonfund.nih.gov/hubmap) and other projects, engages 17 international consortia to create a spatial reference of the healthy adult human body at single-cell resolution. The specimen, biological structure, and spatial data that define the HRA are disparate in nature and benefit from a visually explicit method of data integration. Virtual reality (VR) offers unique means to enable users to explore complex data structures in a three-dimensional (3D) immersive environment. In a 2D desktop application, the 3D spatiality and real-world size of the atlas's 3D reference organs are hard to understand. If viewed in VR, the spatiality of the organs and tissue blocks mapped to the HRA can be explored at their true size and in a way that goes beyond traditional 2D user interfaces. Added 2D and 3D visualizations can then provide data-rich context. In this paper, we present the HRA Organ Gallery, a VR application to explore the atlas in an integrated VR environment. Presently, the HRA Organ Gallery features 55 3D reference organs and 1,203 mapped tissue blocks from 292 demographically diverse donors and 15 providers, linking to 5,000+ datasets; it also features prototype visualizations of cell type distributions and 3D protein structures. We outline our plans to support two biological use cases: on-ramping novice and expert users to HuBMAP data available via the Data Portal (https://portal.hubmapconsortium.org), and quality assurance/quality control (QA/QC) for HRA data providers. Code and onboarding materials are available at https://github.com/cns-iu/ccf-organ-vr-gallery#readme.
Affiliation(s)
- Andreas Bueckle: Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN 47408, USA
- Catherine Qing: Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN 47408, USA; Department of Humanities & Sciences, Stanford University, Stanford, CA 94305, USA
- Shefali Luley: Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN 47408, USA
- Yash Kumar: Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN 47408, USA
- Naval Pandey: Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN 47408, USA
- Katy Börner: Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN 47408, USA
5. Bueckle A, Qing C, Luley S, Kumar Y, Pandey N, Börner K. The HRA Organ Gallery affords immersive superpowers for building and exploring the Human Reference Atlas with virtual reality. Frontiers in Bioinformatics 2023; 3:1162723. [PMID: 37181487] [PMCID: PMC10174312] [DOI: 10.3389/fbinf.2023.1162723]
Abstract
The Human Reference Atlas (HRA, https://humanatlas.io), funded by the NIH Human Biomolecular Atlas Program (HuBMAP, https://commonfund.nih.gov/hubmap) and other projects, engages 17 international consortia to create a spatial reference of the healthy adult human body at single-cell resolution. The specimen, biological structure, and spatial data that define the HRA are disparate in nature and benefit from a visually explicit method of data integration. Virtual reality (VR) offers unique means to enable users to explore complex data structures in a three-dimensional (3D) immersive environment. In a 2D desktop application, the 3D spatiality and real-world size of the atlas's 3D reference organs are hard to understand. If viewed in VR, the spatiality of the organs and tissue blocks mapped to the HRA can be explored at their true size and in a way that goes beyond traditional 2D user interfaces. Added 2D and 3D visualizations can then provide data-rich context. In this paper, we present the HRA Organ Gallery, a VR application to explore the atlas in an integrated VR environment. Presently, the HRA Organ Gallery features 55 3D reference organs and 1,203 mapped tissue blocks from 292 demographically diverse donors and 15 providers, linking to 6,000+ datasets; it also features prototype visualizations of cell type distributions and 3D protein structures. We outline our plans to support two biological use cases: on-ramping novice and expert users to HuBMAP data available via the Data Portal (https://portal.hubmapconsortium.org), and quality assurance/quality control (QA/QC) for HRA data providers. Code and onboarding materials are available at https://github.com/cns-iu/hra-organ-gallery-in-vr.
Affiliation(s)
- Andreas Bueckle: Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, United States
- Catherine Qing: Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, United States; Department of Humanities and Sciences, Stanford University, Stanford, CA, United States
- Shefali Luley: Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, United States
- Yash Kumar: Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, United States
- Naval Pandey: Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, United States
- Katy Börner: Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, United States
6. Schneider S, Maruhn P, Dang NT, Pala P, Cavallo V, Bengler K. Pedestrian Crossing Decisions in Virtual Environments: Behavioral Validity in CAVEs and Head-Mounted Displays. Human Factors 2022; 64:1210-1226. [PMID: 33529060] [DOI: 10.1177/0018720820987446]
Abstract
OBJECTIVE: To contribute to the validation of virtual reality (VR) as a tool for analyzing pedestrian behavior, we compared two types of high-fidelity pedestrian simulators to a test track. BACKGROUND: While VR has become a popular tool in pedestrian research, it is uncertain to what extent simulator studies evoke the same behavior as nonvirtual environments. METHOD: An identical experimental procedure was replicated in a CAVE automatic virtual environment (CAVE), a head-mounted display (HMD), and on a test track. In each group, 30 participants were instructed to step forward whenever they felt the gap between two approaching vehicles was adequate for crossing. RESULTS: Our analyses revealed distinct effects for the three environments. Overall acceptance was highest on the test track. In both simulators, crossings were initiated later, but a relationship between gap size and crossing initiation was apparent only in the CAVE. In contrast to the test track, vehicle speed significantly affected acceptance rates and safety margins in both simulators. CONCLUSION: For a common decision task, the results obtained in virtual environments deviate from those in a nonvirtual test bed. The consistency of differences indicates that restrictions apply when predicting real-world behavior based on VR studies. In particular, the higher susceptibility to speed effects warrants further investigation, since it implies that differences in perceptual processing alter experimental outcomes. APPLICATION: Our observations should inform the conclusions drawn from future research in pedestrian simulators, for example by accounting for a higher sensitivity to speed variations and a greater uncertainty associated with crossing decisions.
7. Joos L, Jaeger-Honz S, Schreiber F, Keim DA, Klein K. Visual Comparison of Networks in VR. IEEE Transactions on Visualization and Computer Graphics 2022; 28:3651-3661. [PMID: 36048995] [DOI: 10.1109/tvcg.2022.3203001]
Abstract
Networks are an important means for the representation and analysis of data in a variety of research and application areas. While there are many efficient methods to create layouts for networks to support their visual analysis, approaches for the comparison of networks are still underexplored. Especially when it comes to the comparison of weighted networks, an important task in areas such as biology and biomedicine, there is a lack of efficient visualization approaches. With the availability of affordable high-quality virtual reality (VR) devices, such as head-mounted displays (HMDs), the research field of immersive analytics has emerged and shown great potential for using the new technology for visual data exploration. However, the use of immersive technology for the comparison of networks is still underexplored. With this work, we explore how weighted networks can be visually compared in an immersive VR environment and investigate how visual representations can benefit from the extended 3D design space. For this purpose, we develop different encodings for 3D node-link diagrams supporting the visualization of two networks within a single representation and evaluate them in a pilot user study. We incorporated the results into a more extensive user study comparing node-link representations with matrix representations, each encoding two networks simultaneously. The data and tasks designed for our experiments are similar to those occurring in real-world scenarios. Our evaluation shows significantly better results for the node-link representations, which is contrary to comparable 2D experiments and indicates a high potential for using VR for the visual comparison of networks.
8.
Abstract
Recent research in the area of immersive analytics has demonstrated the utility of augmented reality (AR) for data analysis. However, there is a lack of research on how to facilitate engaging, embodied, and interactive AR graph visualization. In this paper, we explored the design space for combining the capabilities of AR with node-link diagrams to create immersive data visualizations. We first systematically described the design rationale and design process of the mobile-based AR graph, including its layout, interactions, and aesthetics. Then, we validated the AR concept by conducting a user study with 36 participants to examine users' behaviors with an AR graph and a 2D graph. The results of our study showed the feasibility of using an AR graph to present data relations and also revealed interaction challenges in terms of effectiveness and usability on mobile devices. Third, we iterated on the AR graph by implementing embodied interactions with hand gestures and addressing the connection between physical objects and the digital graph. This study is the first step in our research, aiming to guide the design of immersive AR data visualization applications in the future.
9. Taylor S, Soneji S. Bioinformatics and the Metaverse: Are We Ready? Frontiers in Bioinformatics 2022; 2:863676. [PMID: 36304263] [PMCID: PMC9580841] [DOI: 10.3389/fbinf.2022.863676]
Abstract
COVID-19 forced humanity to think about new ways of working globally without being physically present with other people, and eXtended Reality (XR) systems (defined as Virtual Reality, Augmented Reality, and Mixed Reality) offer a potentially elegant solution. Previously seen mainly as a gaming technology, XR is now being investigated by commercial and research institutions as a way to solve real-world problems in training, simulation, mental health, data analysis, and the study of disease progression. More recently, large corporations such as Microsoft and Meta have announced they are developing the Metaverse as a new paradigm for interacting with the digital world. This article looks at how visualization can leverage the Metaverse in bioinformatics research, the pros and cons of this technology, and what the future may hold.
Affiliation(s)
- Stephen Taylor: Analysis, Visualization and Informatics Group, MRC Weatherall Institute of Computational Biology, MRC Weatherall Institute of Molecular Medicine, Oxford, United Kingdom
- Shamit Soneji: Division of Molecular Hematology, Department of Laboratory Medicine, Faculty of Medicine, BMC, Lund University, Lund, Sweden; Lund Stem Cell Center, Faculty of Medicine, BMC, Lund University, Lund, Sweden
10. Danyluk K, Ulusoy T, Wei W, Willett W. Touch and Beyond: Comparing Physical and Virtual Reality Visualizations. IEEE Transactions on Visualization and Computer Graphics 2022; 28:1930-1940. [PMID: 32915741] [DOI: 10.1109/tvcg.2020.3023336]
Abstract
We compare physical and virtual reality (VR) versions of simple data visualizations and explore how the addition of virtual annotation and filtering tools affects how viewers solve basic data analysis tasks. We report on two studies, inspired by previous examinations of data physicalizations. The first study examines differences in how viewers interact with physical hand-scale, virtual hand-scale, and virtual table-scale visualizations, and the impact the different forms have on viewers' problem-solving behavior. The second study examines how interactive annotation and filtering tools might support new modes of use that transcend the limitations of physical representations. Our results highlight challenges associated with virtual reality representations and hint at the potential of interactive annotation and filtering tools in VR visualizations.
12. Bueckle A, Buehling K, Shih PC, Börner K. 3D virtual reality vs. 2D desktop registration user interface comparison. PLoS One 2021; 16:e0258103. [PMID: 34705835] [PMCID: PMC8550408] [DOI: 10.1371/journal.pone.0258103]
Abstract
Working with organs and extracted tissue blocks is an essential task in many medical surgery and anatomy environments. In order to prepare specimens from human donors for further analysis, wet-bench workers must properly dissect human tissue and collect metadata for downstream analysis, including information about the spatial origin of tissue. The Registration User Interface (RUI) was developed to allow stakeholders in the Human Biomolecular Atlas Program (HuBMAP) to register tissue blocks, i.e., to record the size, position, and orientation of human tissue data with regard to reference organs. The RUI has been used by tissue mapping centers across the HuBMAP consortium to register a total of 45 kidney, spleen, and colon tissue blocks, with planned support for 17 organs in the near future. In this paper, we compare three setups for registering one 3D tissue block object to another 3D reference organ (target) object. The first setup is a 2D Desktop implementation featuring a traditional screen, mouse, and keyboard interface. The remaining setups are both virtual reality (VR) versions of the RUI: VR Tabletop, where users sit at a physical desk that is replicated in virtual space, and VR Standup, where users stand upright while performing their tasks. All three setups were implemented using the Unity game engine. We then ran a user study in which 42 human subjects completed 14 increasingly difficult and then 30 identical tasks in sequence, reporting position accuracy, rotation accuracy, completion time, and satisfaction. All study materials were made available in support of future study replication, alongside videos documenting our setups. We found that while VR Tabletop and VR Standup users are about three times as fast and about a third more accurate in terms of rotation than 2D Desktop users (for the sequence of 30 identical tasks), there are no significant differences between the three setups for position accuracy when normalized by the height of the virtual kidney across setups. When extrapolating from the 2D Desktop setup with a 113-mm-tall kidney, the absolute performance values for the 2D Desktop version (22.6 seconds per task, 5.88 degrees rotation accuracy, and 1.32 mm position accuracy after 8.3 tasks in the series of 30 identical tasks) confirm that the 2D Desktop interface is well suited for allowing users in HuBMAP to register tissue blocks at a speed and accuracy that meets the needs of experts performing tissue dissection. In addition, the 2D Desktop setup is cheaper, easier to learn, and more practical for wet-bench environments than the VR setups.
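To make the registration data concrete, the record captured per tissue block can be pictured as follows. This is a minimal sketch with hypothetical names (Vec3, TissueBlockRegistration, and all field names are illustrative assumptions, not the actual RUI/HuBMAP metadata schema):

```typescript
// Sketch of a per-block registration record as the abstract describes it:
// size, position, and orientation relative to a reference organ.
// All names are hypothetical; the real RUI/HuBMAP schema differs.
interface Vec3 { x: number; y: number; z: number; }

interface TissueBlockRegistration {
  referenceOrgan: "kidney" | "spleen" | "colon"; // organs registered so far
  sizeMm: Vec3;      // block dimensions in millimetres
  positionMm: Vec3;  // block centre relative to the organ's origin
  rotationDeg: Vec3; // orientation as Euler angles in degrees
}

const example: TissueBlockRegistration = {
  referenceOrgan: "kidney",
  sizeMm: { x: 10, y: 10, z: 10 },
  positionMm: { x: 22.5, y: 81.0, z: 13.2 },
  rotationDeg: { x: 0, y: 45, z: 0 },
};
```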
Affiliation(s)
- Andreas Bueckle: Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, Indiana, United States of America
- Kilian Buehling: Research Group Knowledge and Technology Transfer, Fakultät Wirtschaftswissenschaften, Technische Universität Dresden, Dresden, Germany
- Patrick C. Shih: Department of Informatics, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, Indiana, United States of America
- Katy Börner: Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, Indiana, United States of America; Department of Information and Library Science, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, Indiana, United States of America
14. Butcher PWS, John NW, Ritsos PD. VRIA: A Web-Based Framework for Creating Immersive Analytics Experiences. IEEE Transactions on Visualization and Computer Graphics 2021; 27:3213-3225. [PMID: 31944959] [DOI: 10.1109/tvcg.2020.2965109]
Abstract
We present VRIA, a Web-based framework for creating Immersive Analytics (IA) experiences in Virtual Reality. VRIA is built upon WebVR, A-Frame, React, and D3.js, and offers a visualization creation workflow that enables users of different levels of expertise to rapidly develop Immersive Analytics experiences for the Web. The use of these open-standards Web-based technologies allows us to implement VR experiences in a browser and offers strong synergies with popular visualization libraries through the HTML Document Object Model (DOM). This makes VRIA ubiquitous and platform-independent. Moreover, through WebVR's progressive enhancement, the experiences VRIA creates are accessible on a plethora of devices. We elaborate on our motivation for focusing on open-standards Web technologies, present the VRIA creation workflow, and detail the underlying mechanics of our framework. We also report on techniques and optimizations necessary for implementing Immersive Analytics experiences on the Web, discuss scalability implications of our framework, and present a series of use case applications to demonstrate the various features of VRIA. Finally, we discuss current limitations of our framework, the lessons learned from its development, and outline further extensions.
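As a concrete illustration of the kind of browser-based pipeline described above (not VRIA's actual API; the data and function names are invented for this sketch), a D3 scale can drive A-Frame entities directly through the DOM:

```typescript
// Minimal sketch: a 3D bar chart built from A-Frame DOM entities sized by a
// D3 linear scale. Assumes a page that loads A-Frame and contains <a-scene>.
import * as d3 from "d3";

interface Datum { label: string; value: number; }

function renderBars(scene: Element, data: Datum[]): void {
  // Map data values to bar heights in metres, as D3 would in a 2D chart.
  const height = d3.scaleLinear()
    .domain([0, d3.max(data, (d) => d.value) ?? 1])
    .range([0, 1.5]);

  data.forEach((d, i) => {
    const bar = document.createElement("a-box"); // A-Frame box primitive
    bar.setAttribute("width", "0.2");
    bar.setAttribute("depth", "0.2");
    bar.setAttribute("height", String(height(d.value)));
    // Lift each bar by half its height so it stands on the ground plane.
    bar.setAttribute("position", `${i * 0.3} ${height(d.value) / 2} -2`);
    scene.appendChild(bar);
  });
}

renderBars(document.querySelector("a-scene")!, [
  { label: "A", value: 3 },
  { label: "B", value: 7 },
  { label: "C", value: 5 },
]);
```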
15. Wagner J, Stuerzlinger W, Nedel L. Comparing and Combining Virtual Hand and Virtual Ray Pointer Interactions for Data Manipulation in Immersive Analytics. IEEE Transactions on Visualization and Computer Graphics 2021; 27:2513-2523. [PMID: 33750698] [DOI: 10.1109/tvcg.2021.3067759]
Abstract
In this work, we evaluate two standard interaction techniques for Immersive Analytics environments: virtual hands, with actions such as grabbing and stretching, and virtual ray pointers, with actions assigned to controller buttons. We also consider a third option: seamlessly integrating both modes and allowing the user to alternate between them without explicit mode switches. Easy-to-use interaction with data visualizations in Virtual Reality enables analysts to intuitively query or filter the data, in addition to the benefit of multiple perspectives and stereoscopic 3D display. While many VR-based Immersive Analytics systems employ one of the studied interaction modes, the effect of this choice is unknown. Considering that each has different advantages, we compared the three conditions through a controlled user study in the spatio-temporal data domain. We did not find significant differences between hands and ray-casting in task performance, workload, or interactivity patterns. Yet, 60% of the participants preferred the mixed mode and benefited from it by choosing the best alternative for each low-level task. This mode significantly reduced completion times by 23% for the most demanding task, at the cost of a 5% decrease in overall success rates.
16. Fonnet A, Prie Y. Survey of Immersive Analytics. IEEE Transactions on Visualization and Computer Graphics 2021; 27:2101-2122. [PMID: 31352344] [DOI: 10.1109/tvcg.2019.2929033]
Abstract
Immersive analytics (IA) is a new term referring to the use of immersive technologies for data analysis. Yet such applications are not new, and numerous contributions have been made over the last three decades. However, no survey reviewing all these contributions is available. Here we propose a survey of IA from the early nineties until the present day, describing how rendering technologies, data, sensory mappings, and interaction means have been used to build IA systems, as well as how these systems have been evaluated. The conclusions that emerge from our analysis are that multi-sensory aspects of IA are under-exploited, that the 3DUI and VR community's knowledge regarding immersive interaction is not sufficiently utilised, and that the IA community should focus on converging towards best practices as well as aim for real-life IA systems.
17. Ens B, Goodwin S, Prouzeau A, Anderson F, Wang FY, Gratzl S, Lucarelli Z, Moyle B, Smiley J, Dwyer T. Uplift: A Tangible and Immersive Tabletop System for Casual Collaborative Visual Analytics. IEEE Transactions on Visualization and Computer Graphics 2021; 27:1193-1203. [PMID: 33074810] [DOI: 10.1109/tvcg.2020.3030334]
Abstract
Collaborative visual analytics leverages social interaction to support data exploration and sensemaking. These processes are typically imagined as formalised, extended activities between groups of dedicated experts, requiring expertise with sophisticated data analysis tools. However, many professional domains would benefit from support for short 'bursts' of data exploration between a subset of stakeholders with a diverse breadth of knowledge. Such 'casual collaborative' scenarios require engaging features to draw users' attention, along with intuitive, 'walk-up and use' interfaces. This paper presents Uplift, a novel prototype system to support 'casual collaborative visual analytics' for a campus microgrid, co-designed with local stakeholders. An elicitation workshop with key members of the building management team revealed that relevant knowledge is distributed among multiple experts in their team, each using bespoke analysis tools. Uplift combines an engaging 3D model on a central tabletop display with intuitive tangible interaction, as well as augmented-reality, mid-air data visualisation, to support casual collaborative visual analytics in this complex domain. Evaluations with expert stakeholders from the building management and energy domains were conducted during and after our prototype development; they indicate that Uplift is successful as an engaging backdrop for casual collaboration. Experts see high potential in such a system to bring together diverse knowledge holders and reveal complex interactions between structural, operational, and financial aspects of their domain. Such systems have further potential in other domains that require collaborative discussion or demonstration of models, forecasts, or cost-benefit analyses to high-level stakeholders.
18. Lee B, Hu X, Cordeil M, Prouzeau A, Jenny B, Dwyer T. Shared Surfaces and Spaces: Collaborative Data Visualisation in a Co-located Immersive Environment. IEEE Transactions on Visualization and Computer Graphics 2021; 27:1171-1181. [PMID: 33048740] [DOI: 10.1109/tvcg.2020.3030450]
Abstract
Immersive technologies offer new opportunities to support collaborative visual data analysis by providing each collaborator a personal, high-resolution view of a flexible shared visualisation space through a head-mounted display. However, most prior studies of collaborative immersive analytics have focused on how groups interact with surface interfaces such as tabletops and wall displays. This paper reports on a study in which teams of three co-located participants were given flexible visualisation authoring tools, allowing a great deal of control in how they structured their shared workspace. They did so using a prototype system we call FIESTA: the Free-roaming Immersive Environment to Support Team-based Analysis. Unlike traditional visualisation tools, FIESTA allows users to freely position authoring interfaces and visualisation artefacts anywhere in the virtual environment, either on virtual surfaces or suspended within the interaction space. Our participants solved visual analytics tasks on a multivariate data set, doing so individually and collaboratively by creating a large number of 2D and 3D visualisations. Their behaviours suggest that the usage of surfaces is coupled with the type of visualisation used: participants often used walls to organise 2D visualisations but positioned 3D visualisations in the space around them. Outside of tightly coupled collaboration, participants followed social protocols and did not interact with visualisations that did not belong to them, even when these were outside their owner's personal workspace.
19. Jouppila T, Tiainen T. Nurses' Participation in the Design of an Intensive Care Unit: The Use of Virtual Mock-Ups. HERD: Health Environments Research & Design Journal 2020; 14:301-312. [PMID: 32672071] [DOI: 10.1177/1937586720935407]
Abstract
BACKGROUND: Co-design with multiple tools is useful when end users' knowledge is important, especially when designers work with people unfamiliar with design. Many studies have highlighted the importance of nurses' participation in design, and such participation requires the development of techniques and tools to facilitate collaboration. This article analyzes how nurses participated in designing a general intensive care unit in a walk-in virtual environment (VE) and examines how their work-related knowledge can be transferred to the design process of spaces. METHOD: In this action research study, the design process was conducted by using virtual mock-ups, which were evaluated by multi-occupational groups in a walk-in VE. Nurses were the largest occupational group. Their work processes were under modification, since existing multi-patient rooms were being redesigned as single-patient rooms. The design of single-patient rooms was performed in three iterative cycles in the walk-in VE. RESULTS: The nurses could specify their requirements in the walk-in VE, and their suggestions were incorporated into the architectural design process. The nurses were satisfied with their role in the design process. CONCLUSION: Co-design with virtual mock-ups in a walk-in VE is appropriate when designing new healthcare facilities and when the opinions of workers are important. Virtual mock-ups in a walk-in VE can be used collaboratively, facilitating simultaneous feedback from multiple users. Virtual reality (VR) technology has evolved, and changes can be made rapidly and at a lower cost. Another advantage of VR is that it allows one to design larger spaces, thus providing larger layouts of facilities for evaluation.
Affiliation(s)
- Tiina Jouppila: The Hospital District of South Ostrobothnia, Seinäjoki, Finland
- Tarja Tiainen: Faculty of Information Technology and Communication, Tampere University, Finland
20. Simonetto P, Archambault D, Kobourov S. Event-Based Dynamic Graph Visualisation. IEEE Transactions on Visualization and Computer Graphics 2020; 26:2373-2386. [PMID: 30575538] [DOI: 10.1109/tvcg.2018.2886901]
Abstract
Dynamic graph drawing algorithms take as input a series of timeslices that standard force-directed algorithms can exploit to compute a layout. Often, however, dynamic graphs are expressed as a series of events in which the nodes and edges have real coordinates along the time dimension that are not confined to discrete timeslices. Current techniques for dynamic graph drawing impose a set of timeslices on this event-based data in order to draw the dynamic graph, but it is unclear how many timeslices should be selected: too many timeslices slow the computation of the layout, while too few obscure important temporal features such as causality. To address these limitations, we introduce a novel model for drawing event-based dynamic graphs and the first dynamic graph drawing algorithm, DynNoSlice, capable of drawing dynamic graphs in this model. DynNoSlice is an offline, force-directed algorithm that draws event-based dynamic graphs in the space-time cube (2D+time). We also present a method to extract representative small multiples from the space-time cube. To demonstrate the advantages of our approach, DynNoSlice is compared with state-of-the-art timeslicing methods in a metrics-based experiment. Finally, we present case studies of event-based dynamic data visualised with the new model and algorithm.
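The essential difference from timesliced models is that a node's position is defined continuously over time. A minimal sketch of such an event-based representation (the names and the use of linear interpolation are illustrative assumptions, not DynNoSlice's actual code):

```typescript
// Sketch: in the event-based model, each node is a trajectory of control
// points in the 2D+time space-time cube; its drawn position at any query
// time is interpolated rather than read from a discrete timeslice.
interface ControlPoint { x: number; y: number; t: number; }
type Trajectory = ControlPoint[]; // control points sorted by time t

function positionAt(traj: Trajectory, t: number): { x: number; y: number } {
  if (t <= traj[0].t) return { x: traj[0].x, y: traj[0].y };
  for (let i = 1; i < traj.length; i++) {
    const a = traj[i - 1];
    const b = traj[i];
    if (t <= b.t) {
      const u = (t - a.t) / (b.t - a.t); // interpolation weight in [0, 1]
      return { x: a.x + u * (b.x - a.x), y: a.y + u * (b.y - a.y) };
    }
  }
  const last = traj[traj.length - 1];
  return { x: last.x, y: last.y };
}
```

A force-directed optimiser in this model moves the control points themselves, so the layout is computed once for the whole time interval instead of once per timeslice.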
21. Saktheeswaran A, Srinivasan A, Stasko J. Touch? Speech? or Touch and Speech? Investigating Multimodal Interaction for Visual Network Exploration and Analysis. IEEE Transactions on Visualization and Computer Graphics 2020; 26:2168-2179. [PMID: 32012017] [DOI: 10.1109/tvcg.2020.2970512]
Abstract
Interaction plays a vital role during visual network exploration as users need to engage with both elements in the view (e.g., nodes, links) and interface controls (e.g., sliders, dropdown menus). Particularly as the size and complexity of a network grow, interactive displays supporting multimodal input (e.g., touch, speech, pen, gaze) exhibit the potential to facilitate fluid interaction during visual network exploration and analysis. While multimodal interaction with network visualization seems like a promising idea, many open questions remain. For instance, do users actually prefer multimodal input over unimodal input, and if so, why? Does it enable them to interact more naturally, or does having multiple modes of input confuse users? To answer such questions, we conducted a qualitative user study in the context of a network visualization tool, comparing speech- and touch-based unimodal interfaces to a multimodal interface combining the two. Our results confirm that participants strongly prefer multimodal input over unimodal input attributing their preference to: 1) the freedom of expression, 2) the complementary nature of speech and touch, and 3) integrated interactions afforded by the combination of the two modalities. We also describe the interaction patterns participants employed to perform common network visualization operations and highlight themes for future multimodal network visualization systems to consider.
22. Batch A, Cunningham A, Cordeil M, Elmqvist N, Dwyer T, Thomas BH, Marriott K. There Is No Spoon: Evaluating Performance, Space Use, and Presence with Expert Domain Users in Immersive Analytics. IEEE Transactions on Visualization and Computer Graphics 2020; 26:536-546. [PMID: 31484124] [DOI: 10.1109/tvcg.2019.2934803]
Abstract
Immersive analytics turns the very space surrounding the user into a canvas for data analysis, supporting human cognitive abilities in myriad ways. We present the results of a design study, contextual inquiry, and longitudinal evaluation involving professional economists using a Virtual Reality (VR) system for multidimensional visualization to explore actual economic data. Results from our preregistered evaluation highlight the varied use of space depending on context (exploration vs. presentation), the organization of space to support work, and the impact of immersion on navigation and orientation in the 3D analysis space.
23. Skarbez R, Polys NF, Ogle JT, North C, Bowman DA. Immersive Analytics: Theory and Research Agenda. Front Robot AI 2019; 6:82. [PMID: 33501097] [PMCID: PMC7805807] [DOI: 10.3389/frobt.2019.00082]
Abstract
Advances in a variety of computing fields, including “big data,” machine learning, visualization, and augmented/mixed/virtual reality, have combined to give rise to the emerging field of immersive analytics, which investigates how these new technologies support analysis and decision making. Thus far, we feel that immersive analytics research has been somewhat ad hoc, possibly because there is not yet an organizing framework for immersive analytics research. In this paper, we address this lack by proposing a definition for immersive analytics and identifying some general research areas and specific research questions that will be important for the development of this field. We also present three case studies that, while all being examples of what we would consider immersive analytics, present different challenges and opportunities. These serve to demonstrate the breadth of immersive analytics and illustrate how the framework proposed in this paper applies to practical research.
Affiliation(s)
- Richard Skarbez: Center for Human-Computer Interaction, Virginia Tech, Blacksburg, VA, United States
- Nicholas F Polys: Center for Human-Computer Interaction, Virginia Tech, Blacksburg, VA, United States
- J Todd Ogle: Center for Human-Computer Interaction, Virginia Tech, Blacksburg, VA, United States
- Chris North: Center for Human-Computer Interaction, Virginia Tech, Blacksburg, VA, United States
- Doug A Bowman: Center for Human-Computer Interaction, Virginia Tech, Blacksburg, VA, United States
24. Thomas BH. Virtual Reality for Information Visualization Might Just Work This Time. Front Robot AI 2019; 6:84. [PMID: 33501099] [PMCID: PMC7806101] [DOI: 10.3389/frobt.2019.00084]
Affiliation(s)
- Bruce H Thomas: IVE: Australian Research Centre for Interactive and Virtual Environments, School of Information Technology and Mathematical Sciences, University of South Australia, Adelaide, SA, Australia
25. Ivanov A, Danyluk K, Jacob C, Willett W. A Walk Among the Data. IEEE Computer Graphics and Applications 2019; 39:19-28. [PMID: 30762534] [DOI: 10.1109/mcg.2019.2898941]
Abstract
We examine the potential of immersive unit visualizations: interactive virtual environments populated with objects representing individual items in a dataset. Our virtual reality prototype highlights how immersive unit visualizations can allow viewers to examine data at multiple scales, support immersive exploration, and create affective personal experiences with data.
26. Buschel W, Vogt S, Dachselt R. Augmented Reality Graph Visualizations. IEEE Computer Graphics and Applications 2019; 39:29-40. [PMID: 30735987] [DOI: 10.1109/mcg.2019.2897927]
Abstract
Three-dimensional node-link diagrams are an important class of visualization for immersive analysis. Yet, there is little knowledge on how to visualize edges to support efficient analysis. We present an exploration of the design space for edge styles and discuss the results of a user study comparing six different edge variants.
27. Klein K, Sommer B, Nim HT, Flack A, Safi K, Nagy M, Feyer SP, Zhang Y, Rehberg K, Gluschkow A, Quetting M, Fiedler W, Wikelski M, Schreiber F. Fly with the flock: immersive solutions for animal movement visualization and analytics. J R Soc Interface 2019; 16:20180794. [PMID: 30940026] [PMCID: PMC6505562] [DOI: 10.1098/rsif.2018.0794]
Abstract
Understanding the movement of animals is important for a wide range of scientific interests, including migration, disease spread, collective movement behaviour, and the analysis of motion in relation to dynamic changes of the environment such as wind and thermal lifts. In particular, the three-dimensional (3D) spatio-temporal nature of bird movement data, which is widely available at high temporal and spatial resolution and in large volumes, presents a natural opportunity to explore the potential of immersive analytics (IA). We investigate the requirements and benefits of a wide range of immersive environments for explorative visualization and analytics of 3D movement data, in particular regarding design considerations for such 3D immersive environments, and present prototypes for IA solutions. Tailored to biologists studying bird movement data, the immersive solutions enable geo-locational time-series data to be investigated interactively, allowing experts to visually explore interesting angles of a flock and its behaviour in the context of the environment. The 3D virtual world presents the audience with engaging and interactive content, allowing users to 'fly with the flock', with the potential to provide an intuitive overview of often complex datasets and the opportunity to formulate and at least qualitatively assess hypotheses. This work also contributes to ongoing research efforts to promote better understanding of bird migration and the associated environmental factors at the global scale, thereby providing a visual vehicle for driving public awareness of environmental issues and bird migration patterns.
Affiliation(s)
- Karsten Klein: Department of Computer and Information Science, University of Konstanz, Fach 76, 78457 Konstanz, Germany; Faculty of Information Technology, Monash University, Melbourne, Australia
- Björn Sommer: Department of Computer and Information Science, University of Konstanz, Fach 76, 78457 Konstanz, Germany; School of Design, Royal College of Art, London, UK
- Hieu T. Nim: Faculty of Information Technology, Monash University, Melbourne, Australia
- Andrea Flack: Max-Planck-Institute for Ornithology, Radolfzell, Germany; Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Konstanz, Germany
- Kamran Safi: Max-Planck-Institute for Ornithology, Radolfzell, Germany; Department of Biology, University of Konstanz, Konstanz, Germany
- Máté Nagy: Max-Planck-Institute for Ornithology, Radolfzell, Germany; Department of Biology, University of Konstanz, Konstanz, Germany; Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Konstanz, Germany; MTA-ELTE Statistical and Biological Physics Research Group, Hungarian Academy of Sciences, Budapest, Hungary
- Stefan P. Feyer: Department of Computer and Information Science, University of Konstanz, Fach 76, 78457 Konstanz, Germany
- Ying Zhang: Department of Computer and Information Science, University of Konstanz, Fach 76, 78457 Konstanz, Germany
- Kim Rehberg: Department of Computer and Information Science, University of Konstanz, Fach 76, 78457 Konstanz, Germany
- Alexej Gluschkow: Department of Computer and Information Science, University of Konstanz, Fach 76, 78457 Konstanz, Germany
- Martin Wikelski: Max-Planck-Institute for Ornithology, Radolfzell, Germany; Department of Biology, University of Konstanz, Konstanz, Germany; Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Konstanz, Germany
- Falk Schreiber: Department of Computer and Information Science, University of Konstanz, Fach 76, 78457 Konstanz, Germany; Faculty of Information Technology, Monash University, Melbourne, Australia; Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Konstanz, Germany
28. Sicat R, Li J, Choi J, Cordeil M, Jeong WK, Bach B, Pfister H. DXR: A Toolkit for Building Immersive Data Visualizations. IEEE Transactions on Visualization and Computer Graphics 2019; 25:715-725. [PMID: 30136991] [DOI: 10.1109/tvcg.2018.2865152]
Abstract
This paper presents DXR, a toolkit for building immersive data visualizations based on the Unity development platform. Over the past years, immersive data visualizations in augmented and virtual reality (AR, VR) have been emerging as a promising medium for data sense-making beyond the desktop. However, creating immersive visualizations remains challenging, often requiring complex low-level programming and tedious manual encoding of data attributes to geometric and visual properties. This can hinder the iterative idea-to-prototype process, especially for developers without experience in 3D graphics, AR, and VR programming. With DXR, developers can efficiently specify visualization designs using a concise declarative visualization grammar inspired by Vega-Lite. DXR further provides a GUI for quick and easy edits and previews of visualization designs in situ, i.e., while immersed in the virtual world. DXR also provides reusable templates and customizable graphical marks, enabling unique and engaging visualizations. We demonstrate the flexibility of DXR through several examples spanning a wide range of applications.
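The declarative grammar is easiest to appreciate through an example. The spec below is paraphrased in the style of the paper's Vega-Lite-inspired examples; the exact field names, mark vocabulary, and data file are assumptions here, so consult the DXR documentation for the real grammar:

```typescript
// Sketch of a DXR-style declarative spec: data fields are mapped to the
// geometric and visual channels of a 3D mark (a Unity prefab in DXR).
// Field names and values are illustrative, not verified against DXR.
const spec = {
  data: { url: "hurricane_track.csv" }, // tabular data source
  mark: "sphere",                       // 3D geometric mark
  encoding: {
    x: { field: "longitude", type: "quantitative" },
    y: { field: "altitude", type: "quantitative" },
    z: { field: "latitude", type: "quantitative" },
    size: { field: "windSpeed", type: "quantitative" },
  },
};
```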
29. Patnaik B, Batch A, Elmqvist N. Information Olfactation: Harnessing Scent to Convey Data. IEEE Transactions on Visualization and Computer Graphics 2018; 25:726-736. [PMID: 30137003] [DOI: 10.1109/tvcg.2018.2865237]
Abstract
Olfactory feedback for analytical tasks is a virtually unexplored area in spite of the advantages it offers for information recall, feature identification, and location detection. Here we introduce the concept of information olfactation as the fragrant sibling of information visualization and discuss how scent can be used to convey data. Building on a review of the human olfactory system and mirroring common visualization practice, we propose olfactory marks, the substrate in which they exist, and the olfactory channels that are available to designers. To exemplify this idea, we present viScent: a six-scent stereo olfactory display capable of conveying olfactory glyphs of varying temperature and direction, as well as a corresponding software system that integrates the display with a traditional visualization display. Finally, we present three applications that make use of the viScent system: a 2D graph visualization, a 2D line and point chart, and an immersive analytics graph visualization in 3D virtual reality. We close the paper with a review of possible extensions of viScent and applications of information olfactation for general visualization beyond the examples in this paper.
30. Yang Y, Dwyer T, Jenny B, Marriott K, Cordeil M, Chen H. Origin-Destination Flow Maps in Immersive Environments. IEEE Transactions on Visualization and Computer Graphics 2018; 25:693-703. [PMID: 30136995] [DOI: 10.1109/tvcg.2018.2865192]
Abstract
Immersive virtual- and augmented-reality headsets can overlay a flat image against any surface or hang virtual objects in the space around the user. The technology is rapidly improving and may, in the long term, replace traditional flat-panel displays in many situations. When displays are no longer intrinsically flat, how should we use the space around the user for abstract data visualisation? In this paper, we ask this question with respect to origin-destination flow data in a global geographic context. We report on the findings of three studies exploring different spatial encodings for flow maps. The first experiment focuses on different 2D and 3D encodings for flows on flat maps. We find that participants are significantly more accurate with raised flow paths whose height is proportional to flow distance, but fastest with traditional straight-line 2D flows. In our second and third experiments, we compared flat maps, 3D globes, and a novel interactive design we call MapsLink, involving a pair of linked flat maps. We find that participants took significantly more time with MapsLink than with other flow maps, while the 3D globe with raised flows was the fastest, most accurate, and most preferred method. Our work suggests that careful use of the third spatial dimension can resolve visual clutter in complex flow maps.
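The raised-path encoding that the first experiment found most accurate can be sketched as follows; the parabolic profile and scale factor are illustrative assumptions, not the paper's exact construction:

```typescript
// Sketch: sample a raised flow path whose apex height is proportional to the
// origin-destination distance, with height zero at both endpoints.
interface Point2 { x: number; y: number; }
interface Point3 { x: number; y: number; z: number; }

function raisedFlowPath(o: Point2, d: Point2, samples = 32, k = 0.25): Point3[] {
  const dist = Math.hypot(d.x - o.x, d.y - o.y);
  const apex = k * dist; // height grows with flow distance
  const path: Point3[] = [];
  for (let i = 0; i <= samples; i++) {
    const u = i / samples;
    path.push({
      x: o.x + u * (d.x - o.x),
      y: o.y + u * (d.y - o.y),
      z: apex * 4 * u * (1 - u), // parabola: 0 at the ends, apex at midpoint
    });
  }
  return path;
}
```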
31. Bach B, Sicat R, Beyer J, Cordeil M, Pfister H. The Hologram in My Hand: How Effective is Interactive Exploration of 3D Visualizations in Immersive Tangible Augmented Reality? IEEE Transactions on Visualization and Computer Graphics 2018; 24:457-467. [PMID: 28866590] [DOI: 10.1109/tvcg.2017.2745941]
Abstract
We report on a controlled user study comparing three visualization environments for common 3D exploration. Our environments differ in how they exploit natural human perception and interaction capabilities. We compare an augmented-reality head-mounted display (Microsoft HoloLens), a handheld tablet, and a desktop setup. The head-mounted HoloLens display projects stereoscopic images of virtual content into the user's real world and allows for in-situ interaction at the spatial position of the 3D hologram. The tablet supports interaction with 3D content through touch, spatial positioning, and tangible markers; however, 3D content is still presented on a 2D surface. Our hypothesis is that visualization environments that better match human perceptual and interaction capabilities to the task at hand improve the understanding of 3D visualizations. To better understand the space of display and interaction modalities in visualization environments, we first propose a classification based on three dimensions: perception, interaction, and the spatial and cognitive proximity of the two. Each technique in our study is located at a different position along these three dimensions. We asked 15 participants to perform four tasks, each with different levels of difficulty for spatial perception and degrees of freedom for interaction. Our results show that each of the tested environments is more effective for certain tasks, but that the desktop environment is generally still the fastest and most precise in almost all cases.
32. Srinivasan A, Stasko J. Orko: Facilitating Multimodal Interaction for Visual Exploration and Analysis of Networks. IEEE Transactions on Visualization and Computer Graphics 2018; 24:511-521. [PMID: 28866579] [DOI: 10.1109/tvcg.2017.2745219]
Abstract
Data visualization systems have predominantly been developed for WIMP-based direct manipulation interfaces. Only recently have other forms of interaction begun to appear, such as natural language or touch-based interaction, though these usually operate independently of one another. Prior evaluations of natural language interfaces for visualization have indicated potential value in combining direct manipulation and natural language as complementary interaction techniques. We hypothesize that truly multimodal interfaces for visualization, those providing users with freedom of expression via both natural language and touch-based direct manipulation input, may provide an effective and engaging user experience. Unfortunately, however, little work has been done to explore such multimodal visualization interfaces. To address this gap, we have created an architecture and a prototype visualization system called Orko that facilitates both natural language and direct manipulation input. Specifically, Orko focuses on the domain of network visualization, one that has largely relied on WIMP-based interfaces and direct manipulation interaction and has little or no prior research exploring natural language interaction. We report results from an initial evaluation study of Orko and use our observations to discuss opportunities and challenges for future work in multimodal network visualization interfaces.