1. Tong W, Shigyo K, Yuan LP, Fan M, Pong TC, Qu H, Xia M. VisTellAR: Embedding Data Visualization to Short-Form Videos Using Mobile Augmented Reality. IEEE Transactions on Visualization and Computer Graphics 2025; 31:1862-1874. [PMID: 38427541] [DOI: 10.1109/tvcg.2024.3372104]
Abstract
With the rise of short-form video platforms and the increasing availability of data, we see the potential for people to share short-form videos embedded with data in situ (e.g., daily steps when running) to increase the credibility and expressiveness of their stories. However, creating and sharing such videos in situ is challenging since it involves multiple steps and skills (e.g., data visualization creation and video editing), especially for amateurs. By conducting a formative study (N=10) using three design probes, we collected motivations and design requirements. We then built VisTellAR, a mobile AR authoring tool, to help amateur video creators embed data visualizations in short-form videos in situ. A two-day user study shows that participants (N=12) successfully created various videos with data visualizations in situ and confirmed that the tool was easy to use and learn. AR pre-stage authoring helped people set up data visualizations in the real scene and enriched their storytelling with camera movements and interactions involving gestures and physical objects.
2. Zhu Q, Lu T, Guo S, Ma X, Yang Y. CompositingVis: Exploring Interactions for Creating Composite Visualizations in Immersive Environments. IEEE Transactions on Visualization and Computer Graphics 2025; 31:591-601. [PMID: 39250414] [DOI: 10.1109/tvcg.2024.3456210]
Abstract
Composite visualization is a widely embraced design that combines multiple visual representations into an integrated view. However, composite visualizations in immersive environments are traditionally created asynchronously, outside of the immersive space, by experienced experts. In this work, we aim to empower users to create composite visualizations within immersive environments through embodied interactions. This can provide a flexible and fluid experience with immersive visualization and has the potential to facilitate understanding of the relationships between visualization views. We begin by developing a design space of embodied interactions for creating various types of composite visualizations with consideration of data relationships. Drawing inspiration from people's natural experience of manipulating physical objects, we design interactions based on combinations of 3D manipulations in immersive environments. Building upon the design space, we present a series of case studies showcasing how these interactions can create different kinds of composite visualizations in virtual reality. Subsequently, we conduct a user study to evaluate the usability of the derived interaction techniques and the user experience of creating composite visualizations through embodied interactions. We find that empowering users to create composite visualizations through embodied interactions enables them to flexibly leverage different visualization views for understanding and communicating the relationships between views, which underscores the potential of several future application scenarios.
3. In S, Lin T, North C, Pfister H, Yang Y. This is the Table I Want! Interactive Data Transformation on Desktop and in Virtual Reality. IEEE Transactions on Visualization and Computer Graphics 2024; 30:5635-5650. [PMID: 37506003] [DOI: 10.1109/tvcg.2023.3299602]
Abstract
Data transformation is an essential step in data science. While experts primarily use programming to transform their data, there is an increasing need to support non-programmers with user interface-based tools. With the rapid development in interaction techniques and computing environments, we report our empirical findings about the effects of interaction techniques and environments on performing data transformation tasks. Specifically, we studied the potential benefits of direct interaction and virtual reality (VR) for data transformation. We compared gesture interaction versus a standard WIMP user interface, each on the desktop and in VR. With the tested data and tasks, we found time performance was similar between desktop and VR. Meanwhile, VR demonstrates preliminary evidence to better support provenance and sense-making throughout the data transformation process. Our exploration of performing data transformation in VR also provides initial affirmation for enabling an iterative and fully immersive data science workflow.
4. Cortes CAT, Thurow S, Ong A, Sharples JJ, Bednarz T, Stevens G, Favero DD. Analysis of Wildfire Visualization Systems for Research and Training: Are They Up for the Challenge of the Current State of Wildfires? IEEE Transactions on Visualization and Computer Graphics 2024; 30:4285-4303. [PMID: 37030767] [DOI: 10.1109/tvcg.2023.3258440]
Abstract
Wildfires affect many regions across the world. The accelerated progression of global warming has amplified their frequency and scale, deepening their impact on human life, the economy, and the environment. The temperature rise has been driving wildfires to behave unpredictably compared to those previously observed, challenging researchers and fire management agencies to understand the factors behind this behavioral change. Furthermore, this change has rendered fire personnel training outdated, leaving it unable to adequately prepare personnel to respond to these new fires. Immersive visualization can play a key role in tackling the growing issue of wildfires. This survey therefore reviews studies that use immersive and non-immersive data visualization techniques to depict wildfire behavior and to train first responders and planners, and identifies the most useful characteristics of these systems. While these studies support knowledge creation for certain situations, there is still scope to comprehensively improve immersive systems to address the unforeseen dynamics of wildfires.
5. Seraji MR, Piray P, Zahednejad V, Stuerzlinger W. Analyzing User Behaviour Patterns in a Cross-Virtuality Immersive Analytics System. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2613-2623. [PMID: 38470602] [DOI: 10.1109/tvcg.2024.3372129]
Abstract
Recent work in immersive analytics suggests benefits for systems that support work across both 2D and 3D data visualizations, i.e., cross-virtuality analytics systems. Here, we introduce HybridAxes, an immersive visual analytics system that enables users to conduct their analysis either in 2D on desktop monitors or in 3D within an immersive AR environment - while enabling them to seamlessly switch and transfer their graphs between modes. Our user study results show that the cross-virtuality sub-systems in HybridAxes complement each other well in helping the users in their data-understanding journey. We show that users preferred using the AR component for exploring the data, while they used the desktop to work on more detail-intensive tasks. Despite encountering some minor challenges in switching between the two virtuality modes, users consistently rated the whole system as highly engaging, user-friendly, and helpful in streamlining their analytics processes. Finally, we present suggestions for designers of cross-virtuality visual analytics systems and identify avenues for future work.
6. Butcher PWS, Batch A, Saffo D, MacIntyre B, Elmqvist N, Ritsos PD, Rhyne TM. Is Native Naïve? Comparing Native Game Engines and WebXR as Immersive Analytics Development Platforms. IEEE Computer Graphics and Applications 2024; 44:91-98. [PMID: 38905026] [DOI: 10.1109/mcg.2024.3367422]
Abstract
Native game engines have long been the 3-D development platform of choice for research in mixed and augmented reality. For this reason, they have also been adopted in many immersive visualization and immersive analytics systems and toolkits. However, with the rapid improvements of WebXR and related open technologies, this choice may not always be optimal for future visualization research. In this article, we investigate common assumptions about native game engines versus WebXR and find that while native engines still have an advantage in many areas, WebXR is rapidly catching up and is superior for many immersive analytics applications.
7. Minh Tran TT, Brown S, Weidlich O, Billinghurst M, Parker C. Wearable Augmented Reality: Research Trends and Future Directions from Three Major Venues. IEEE Transactions on Visualization and Computer Graphics 2023; 29:4782-4793. [PMID: 37782599] [DOI: 10.1109/tvcg.2023.3320231]
Abstract
Wearable Augmented Reality (AR) has attracted considerable attention in recent years, as evidenced by the growing number of research publications and industry investments. With swift advancements and a multitude of interdisciplinary research areas within wearable AR, a comprehensive review is crucial for integrating the current state of the field. In this paper, we present a review of 389 research papers on wearable AR, published between 2018 and 2022 in three major venues: ISMAR, TVCG, and CHI. Drawing inspiration from previous works by Zhou et al. and Kim et al., which summarized AR research at ISMAR over the past two decades (1998-2017), we categorize the papers into different topics and identify prevailing trends. One notable finding is that wearable AR research is increasingly geared towards enabling broader consumer adoption. From our analysis, we highlight key observations related to potential future research areas essential for capitalizing on this trend and achieving widespread adoption. These include addressing challenges in Display, Tracking, Interaction, and Applications, and exploring emerging frontiers in Ethics, Accessibility, Avatar and Embodiment, and Intelligent Virtual Agents.
8. Mota R, Ferreira N, Silva JD, Horga M, Lage M, Ceferino L, Alim U, Sharlin E, Miranda F. A Comparison of Spatiotemporal Visualizations for 3D Urban Analytics. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1277-1287. [PMID: 36166521] [DOI: 10.1109/tvcg.2022.3209474]
Abstract
Recent technological innovations have led to an increase in the availability of 3D urban data, such as shadow, noise, solar potential, and earthquake simulations. These spatiotemporal datasets create opportunities for new visualizations to engage experts from different domains to study the dynamic behavior of urban spaces in this underexplored dimension. However, designing 3D spatiotemporal urban visualizations is challenging, as it requires visual strategies to support analysis of time-varying data tied to the city geometry. Although different visual strategies have been used in 3D urban visual analytics, the question of how effective these visual designs are at supporting spatiotemporal analysis on building surfaces remains open. To investigate this, we first contribute a series of analytical tasks elicited from interviews with practitioners from three urban domains. We also contribute a quantitative user study comparing the effectiveness of four representative visual designs used to visualize 3D spatiotemporal urban data: spatial juxtaposition, temporal juxtaposition, linked view, and embedded view. Participants performed a series of tasks that required them to identify extreme values on building surfaces over time. Tasks varied in granularity for both the space and time dimensions. Our results demonstrate that participants were more accurate using plot-based visualizations (linked view, embedded view) but faster using color-coded visualizations (spatial juxtaposition, temporal juxtaposition). Our results also show that, with increasing task complexity, plot-based visualizations better preserve efficiency (time, accuracy) compared to color-coded visualizations. Based on our findings, we present a set of takeaways with design recommendations for 3D spatiotemporal urban visualizations for researchers and practitioners. Lastly, we report on a series of interviews with four practitioners, including their feedback and suggestions for further work on visualizations to support 3D spatiotemporal urban data analysis.
9. Sperrle F, Ceneda D, El-Assady M. Lotse: A Practical Framework for Guidance in Visual Analytics. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1124-1134. [PMID: 36215348] [DOI: 10.1109/tvcg.2022.3209393]
Abstract
Co-adaptive guidance aims to enable efficient human-machine collaboration in visual analytics, as proposed by multiple theoretical frameworks. This paper bridges the gap between such conceptual frameworks and practical implementation by introducing an accessible model of guidance and an accompanying guidance library, mapping theory into practice. We contribute a model of system-provided guidance based on design templates and derived strategies. We instantiate the model in a library called Lotse that allows specifying guidance strategies in definition files and generates running code from them. Lotse is the first guidance library using such an approach. It supports the creation of reusable guidance strategies to retrofit existing applications with guidance and fosters the creation of general guidance strategy patterns. We demonstrate its effectiveness through first-use case studies with VA researchers of varying guidance design expertise and find that they are able to effectively and quickly implement guidance with Lotse. Further, we analyze our framework's cognitive dimensions to evaluate its expressiveness and outline a summary of open research questions for aligning guidance practice with its intricate theory.
10. Tong W, Chen Z, Xia M, Lo LYH, Yuan L, Bach B, Qu H. Exploring Interactions with Printed Data Visualizations in Augmented Reality. IEEE Transactions on Visualization and Computer Graphics 2023; 29:418-428. [PMID: 36166542] [DOI: 10.1109/tvcg.2022.3209386]
Abstract
This paper presents a design space of interaction techniques to engage with visualizations that are printed on paper and augmented through Augmented Reality. Paper sheets are widely used to deploy visualizations and provide a rich set of tangible affordances for interactions, such as touch, folding, tilting, or stacking. At the same time, augmented reality can dynamically update visualization content to provide commands such as pan, zoom, filter, or detail on demand. This paper is the first to provide a structured approach to mapping possible actions with the paper to interaction commands. This design space and the findings of a controlled user study have implications for future designs of augmented reality systems involving paper sheets and visualizations. Through workshops (N=20) and ideation, we identified 81 interactions that we classify in three dimensions: 1) commands that can be supported by an interaction, 2) the specific parameters provided by an (inter)action with paper, and 3) the number of paper sheets involved in an interaction. We tested user preference and viability of 11 of these interactions with a prototype implementation in a controlled study (N=12, HoloLens 2) and found that most of the interactions are intuitive and engaging to use. We summarize interactions (e.g., tilt to pan) that have strong affordances to complement "point" for data exploration, physical limitations and properties of paper as a medium, cases requiring redundancy and shortcuts, and other implications for design.
11.
Abstract
Recent research in the area of immersive analytics has demonstrated the utility of augmented reality for data analysis. However, there is a lack of research on how to facilitate engaging, embodied, and interactive AR graph visualization. In this paper, we explored the design space for combining the capabilities of AR with node-link diagrams to create immersive data visualizations. We first systematically described the design rationale and the design process of the mobile-based AR graph, including the layout, interactions, and aesthetics. Then, we validated the AR concept by conducting a user study with 36 participants to examine users’ behaviors with an AR graph and a 2D graph. The results of our study showed the feasibility of using an AR graph to present data relations and also revealed interaction challenges in terms of effectiveness and usability on mobile devices. Third, we iterated on the AR graph by implementing embodied interactions with hand gestures and addressing the connection between physical objects and the digital graph. This study is the first step in our research, aiming to guide the design of immersive AR data visualization applications in the future.
12. The Hitchhiker’s Guide to Fused Twins: A Review of Access to Digital Twins In Situ in Smart Cities. Remote Sensing 2022. [DOI: 10.3390/rs14133095]
Abstract
Smart Cities already surround us, and yet they are still incomprehensibly far from directly impacting everyday life. While current Smart Cities are often inaccessible, the experience of everyday citizens may be enhanced with a combination of the emerging technologies Digital Twins (DTs) and Situated Analytics. DTs represent their Physical Twin (PT) in the real world via models, simulations, (remotely) sensed data, context awareness, and interactions. However, interaction requires appropriate interfaces to address the complexity of the city. Ultimately, leveraging the potential of Smart Cities requires going beyond assembling the DT to be comprehensive and accessible. Situated Analytics allows for the anchoring of city information in its spatial context. We advance the concept of embedding the DT into the PT through Situated Analytics to form Fused Twins (FTs). This fusion allows access to data in the location where it is generated, in an embodied context that can make the data more understandable. Prototypes of FTs are rapidly emerging from different domains, but Smart Cities represent the context with the most potential for FTs in the future. This paper reviews DTs, Situated Analytics, and Smart Cities as the foundations of FTs. Regarding DTs, we define five components (physical, data, analytical, virtual, and Connection Environments) that we relate to several cognates (i.e., similar but different terms) from the existing literature. Regarding Situated Analytics, we review the effects of user embodiment on cognition and cognitive load. Finally, we classify existing partial examples of FTs from the literature, address their construction from Augmented Reality, Geographic Information Systems, Building/City Information Models, and DTs, and provide an overview of future directions.
13. Exploration and Assessment of Interaction in an Immersive Analytics Module: A Software-Based Comparison. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12083817]
Abstract
The focus of computer systems in the field of visual analytics is to make results clear and understandable; however, enhancing human-computer interaction (HCI) in the field is less investigated. Data visualization and visual analytics (VA) are usually performed in traditional desktop settings with mouse interaction. These methods are based on the window, icon, menu, and pointer (WIMP) interface, which often results in information clutter that is difficult to analyze and understand, especially for novice users. Researchers believe that introducing adequate, natural interaction techniques to the field is necessary for building effective and enjoyable visual analytics systems. This work introduces a novel virtual reality (VR) module for performing basic visual analytics tasks and aims to explore new interaction techniques in the field. A pilot study was conducted to measure the time it takes students to perform basic analytics tasks using the developed VR module and to compare it to the time it takes them to perform the same tasks using a traditional desktop, in order to assess the effectiveness of the VR module in enhancing students' performance. The results show that novice users (participants with less programming experience) took about 50% less time to complete tasks using the developed VR module compared to a programming language, namely R. Experts (participants with advanced programming experience) took about the same time to complete tasks under both conditions (R and VR).
14. Chu X, Xie X, Ye S, Lu H, Xiao H, Yuan Z, Zhu-Tian C, Zhang H, Wu Y. TIVEE: Visual Exploration and Explanation of Badminton Tactics in Immersive Visualizations. IEEE Transactions on Visualization and Computer Graphics 2022; 28:118-128. [PMID: 34596547] [DOI: 10.1109/tvcg.2021.3114861]
Abstract
Tactic analysis is a major issue in badminton, as the effective use of tactics is key to winning. A tactic in badminton is defined as a sequence of consecutive strokes. Most existing methods use statistical models to find sequential patterns of strokes and apply 2D visualizations such as glyphs and statistical charts to explore and analyze the discovered patterns. However, in badminton, spatial information like the shuttle trajectory, which is inherently 3D, is the core of a tactic. The lack of sufficient spatial awareness in 2D visualizations has largely limited the tactic analysis of badminton. In this work, we collaborate with domain experts to study the tactic analysis of badminton in a 3D environment and propose an immersive visual analytics system, TIVEE, to assist users in exploring and explaining badminton tactics at multiple levels. Users can first explore various tactics from the third-person perspective using an unfolded visual presentation of stroke sequences. By selecting a tactic of interest, users can turn to the first-person perspective to perceive the detailed kinematic characteristics and explain its effects on the game result. The effectiveness and usefulness of TIVEE are demonstrated by case studies and an expert interview.
15. Latif S, Tarner H, Beck F. Talking Realities: Audio Guides in Virtual Reality Visualizations. IEEE Computer Graphics and Applications 2022; 42:73-83. [PMID: 33560980] [DOI: 10.1109/mcg.2021.3058129]
Abstract
Building upon the ideas of storytelling and explorable explanations, we introduce Talking Realities, a concept for producing data-driven interactive narratives in virtual reality. It combines an audio narrative with an immersive visualization to communicate analysis results. The narrative is automatically produced using template-based natural language generation and adapts to data and user interactions. The synchronized animation of visual elements in accordance with the audio connects the two representations. In addition, we discuss various modes of explanation ranging from fully guided tours to free exploration of the data. We demonstrate the applicability of our concept by developing a virtual reality visualization for air traffic data. Furthermore, generalizability is exhibited by sketching mock-ups for two more application scenarios in the context of information and scientific visualization.
16. Bressa N, Korsgaard H, Tabard A, Houben S, Vermeulen J. What's the Situation with Situated Visualization? A Survey and Perspectives on Situatedness. IEEE Transactions on Visualization and Computer Graphics 2022; 28:107-117. [PMID: 34587065] [DOI: 10.1109/tvcg.2021.3114835]
Abstract
Situated visualization is an emerging concept within visualization in which data is visualized in situ, where it is relevant to people. The concept has gained interest from multiple research communities, including visualization, human-computer interaction (HCI), and augmented reality. This has led to a range of explorations and applications of the concept; however, this early work has focused on the operational aspect of situatedness, leading to inconsistent adoption of the concept and terminology. First, we contribute a literature survey in which we analyze 44 papers that explicitly use the term "situated visualization" to provide an overview of the research area: how it defines situated visualization, common application areas and technology used, as well as the types of data and visualizations. Our survey shows that research on situated visualization has focused on technology-centric approaches that foreground a spatial understanding of situatedness. Second, we contribute five perspectives on situatedness (space, time, place, activity, and community) that together expand on the prevalent notion of situatedness in the corpus. We draw from six case studies and prior theoretical developments in HCI. Each perspective develops a generative way of looking at and working with situatedness in design and research. We outline future directions, including considering technology, material, and aesthetics, leveraging the perspectives for design, and methods for stronger engagement with target audiences. We conclude with opportunities to consolidate situated visualization research.
17. Butcher PWS, John NW, Ritsos PD. VRIA: A Web-Based Framework for Creating Immersive Analytics Experiences. IEEE Transactions on Visualization and Computer Graphics 2021; 27:3213-3225. [PMID: 31944959] [DOI: 10.1109/tvcg.2020.2965109]
Abstract
We present VRIA, a Web-based framework for creating Immersive Analytics (IA) experiences in Virtual Reality. VRIA is built upon WebVR, A-Frame, React and D3.js, and offers a visualization creation workflow which enables users, of different levels of expertise, to rapidly develop Immersive Analytics experiences for the Web. The use of these open-standards Web-based technologies allows us to implement VR experiences in a browser and offers strong synergies with popular visualization libraries, through the HTML Document Object Model (DOM). This makes VRIA ubiquitous and platform-independent. Moreover, by using WebVR's progressive enhancement, the experiences VRIA creates are accessible on a plethora of devices. We elaborate on our motivation for focusing on open-standards Web technologies, present the VRIA creation workflow and detail the underlying mechanics of our framework. We also report on techniques and optimizations necessary for implementing Immersive Analytics experiences on the Web, discuss scalability implications of our framework, and present a series of use case applications to demonstrate the various features of VRIA. Finally, we discuss current limitations of our framework, the lessons learned from its development, and outline further extensions.
18. Fonnet A, Prie Y. Survey of Immersive Analytics. IEEE Transactions on Visualization and Computer Graphics 2021; 27:2101-2122. [PMID: 31352344] [DOI: 10.1109/tvcg.2019.2929033]
Abstract
Immersive analytics (IA) is a new term referring to the use of immersive technologies for data analysis. Yet such applications are not new, and numerous contributions have been made in the last three decades. However, no survey reviewing all these contributions is available. Here we propose a survey of IA from the early nineties until the present day, describing how rendering technologies, data, sensory mappings, and interaction means have been used to build IA systems, as well as how these systems have been evaluated. The conclusions that emerge from our analysis are that multi-sensory aspects of IA are under-exploited; that the 3DUI and VR communities' knowledge regarding immersive interaction is not sufficiently utilised; and that the IA community should converge towards best practices and aim for real-life IA systems.
19. Narechania A, Srinivasan A, Stasko J. NL4DV: A Toolkit for Generating Analytic Specifications for Data Visualization from Natural Language Queries. IEEE Transactions on Visualization and Computer Graphics 2021; 27:369-379. [PMID: 33048704] [DOI: 10.1109/tvcg.2020.3030378]
Abstract
Natural language interfaces (NLIs) have shown great promise for visual data analysis, allowing people to flexibly specify and interact with visualizations. However, developing visualization NLIs remains a challenging task, requiring low-level implementation of natural language processing (NLP) techniques as well as knowledge of visual analytics tasks and visualization design. We present NL4DV, a toolkit for natural language-driven data visualization. NL4DV is a Python package that takes as input a tabular dataset and a natural language query about that dataset. In response, the toolkit returns an analytic specification modeled as a JSON object containing data attributes, analytic tasks, and a list of Vega-Lite specifications relevant to the input query. In doing so, NL4DV aids visualization developers who may not have a background in NLP, enabling them to create new visualization NLIs or incorporate natural language input within their existing systems. We demonstrate NL4DV's usage and capabilities through four examples: 1) rendering visualizations using natural language in a Jupyter notebook, 2) developing an NLI to specify and edit Vega-Lite charts, 3) recreating data ambiguity widgets from the DataTone system, and 4) incorporating speech input to create a multimodal visualization system.
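The analytic-specification shape this abstract describes (data attributes, analytic tasks, Vega-Lite specs) can be illustrated with a toy sketch. Note this is a hypothetical, simplified stand-in: the function name, key names, and keyword rules below are assumptions for illustration, not NL4DV's actual API, and the real toolkit uses NLP rather than substring matching.

```python
# Toy sketch of an NL4DV-style analytic specification (hypothetical, not
# the toolkit's real API). Given a tabular dataset's column names and a
# natural language query, build a JSON-like dict with detected attributes,
# an inferred analytic task, and a Vega-Lite spec skeleton.

def analyze_query(columns, query):
    q = query.lower()
    # Attributes: columns literally mentioned in the query.
    attributes = [c for c in columns if c.lower() in q]
    # Crude keyword-based task inference (real systems use NLP here).
    if "correlat" in q or "relationship" in q:
        task, mark = "correlation", "point"
    elif "over time" in q or "trend" in q:
        task, mark = "trend", "line"
    else:
        task, mark = "distribution", "bar"
    # Minimal Vega-Lite-shaped spec: one mark plus x/y field encodings.
    vis = {"mark": mark,
           "encoding": {axis: {"field": f}
                        for axis, f in zip(("x", "y"), attributes)}}
    return {"attributes": attributes, "task": task, "visList": [vis]}

spec = analyze_query(["price", "mileage", "year"],
                     "show the relationship between price and mileage")
print(spec["task"])        # -> correlation
print(spec["attributes"])  # -> ['price', 'mileage']
```

The point of the sketch is the contract, not the heuristics: a query plus a tabular schema in, a declarative specification out, which a host application can hand to a Vega-Lite renderer.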
|
20
|
McDonald T, Usher W, Morrical N, Gyulassy A, Petruzza S, Federer F, Angelucci A, Pascucci V. Improving the Usability of Virtual Reality Neuron Tracing with Topological Elements. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:744-754. [PMID: 33055032 PMCID: PMC7891492 DOI: 10.1109/tvcg.2020.3030363] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Researchers in the field of connectomics are working to reconstruct a map of neural connections in the brain in order to understand at a fundamental level how the brain processes information. Constructing this wiring diagram is done by tracing neurons through high-resolution image stacks acquired with fluorescence microscopy imaging techniques. While a large number of automatic tracing algorithms have been proposed, they frequently rely on local features in the data and fail on noisy data or ambiguous cases, requiring time-consuming manual correction. As a result, manual and semi-automatic tracing methods remain the state-of-the-art for creating accurate neuron reconstructions. We propose a new semi-automatic method that uses topological features to guide users in tracing neurons and integrate this method within a virtual reality (VR) framework previously used for manual tracing. Our approach augments both visualization and interaction with topological elements, allowing rapid understanding and tracing of complex morphologies. In our pilot study, neuroscientists demonstrated a strong preference for using our tool over prior approaches, reported less fatigue during tracing, and commended the ability to better understand possible paths and alternatives. Quantitative evaluation of the traces reveals that users' tracing speed increased while retaining accuracy comparable to a fully manual approach.
|
21
|
Rubab S, Tang J, Wu Y. Examining interaction techniques in data visualization authoring tools from the perspective of goals and human cognition: a survey. J Vis (Tokyo) 2021. [DOI: 10.1007/s12650-020-00705-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
22
|
Jonsson D, Steneteg P, Sunden E, Englund R, Kottravel S, Falk M, Ynnerman A, Hotz I, Ropinski T. Inviwo - A Visualization System with Usage Abstraction Levels. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:3241-3254. [PMID: 31180858 DOI: 10.1109/tvcg.2019.2920639] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
The complexity of today's visualization applications demands specific visualization systems tailored for the development of these applications. Frequently, such systems utilize levels of abstraction to improve the application development process, for instance by providing a data flow network editor. Unfortunately, these abstractions result in several issues that need to be circumvented through an abstraction-centered system design. Often, a high level of abstraction hides low-level details, making it difficult to directly access the underlying computing platform, which is important for achieving optimal performance. Therefore, we propose a layer structure for modern and sustainable visualization systems that allows developers to interact with all contained abstraction levels. We refer to these interaction capabilities as usage abstraction levels, since we target application developers with varying levels of experience. We formulate the requirements for such a system, derive the desired architecture, and present how the concepts have been realized, by way of example, within the Inviwo visualization system. Furthermore, we address several specific challenges that arise in realizing such a layered architecture, such as communication between different computing platforms, performance-centered encapsulation, and layer-independent development supported by cross-layer documentation and debugging capabilities.
|
23
|
Ivson P, Moreira A, Queiroz F, Santos W, Celes W. A Systematic Review of Visualization in Building Information Modeling. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:3109-3127. [PMID: 30932840 DOI: 10.1109/tvcg.2019.2907583] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Building Information Modeling (BIM) employs data-rich 3D CAD models for large-scale facility design, construction, and operation. These complex datasets contain a large amount and variety of information, ranging from design specifications to real-time sensor data. They are used by architects and engineers for various analyses and simulations throughout a facility's life cycle. Many techniques from different visualization fields could be used to analyze these data. However, the BIM domain still remains largely unexplored by the visualization community. The goal of this article is to encourage visualization researchers to increase their involvement with BIM. To this end, we present the results of a systematic review of visualization in current BIM practice. We use a novel taxonomy to identify main application areas and analyze commonly employed techniques. From this domain characterization, we highlight future research opportunities brought forth by the unique features of BIM, such as exploring the synergies between scientific and information visualization to integrate spatial and non-spatial data. We hope this article raises awareness of the interesting new challenges the BIM domain brings to the visualization community.
|
24
|
|
25
|
Chen Z, Su Y, Wang Y, Wang Q, Qu H, Wu Y. MARVisT: Authoring Glyph-Based Visualization in Mobile Augmented Reality. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:2645-2658. [PMID: 30640614 DOI: 10.1109/tvcg.2019.2892415] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Recent advances in mobile augmented reality (AR) techniques have shed new light on personal visualization, given their advantages of fitting visualization into personal routines, situating visualization in a real-world context, and arousing users' interest. However, enabling non-experts to create data visualizations in mobile AR environments is challenging given the lack of tools that allow in-situ design while supporting the binding of data to AR content. Most existing AR authoring tools require working on personal computers or manually creating each virtual object and modifying its visual attributes. We systematically study this issue by identifying the specific requirements of AR glyph-based visualization authoring tools and distilling four design considerations. Following these design considerations, we design and implement MARVisT, a mobile authoring tool that leverages information from reality to assist non-experts in addressing relationships between data and virtual glyphs, real objects and virtual glyphs, and real objects and data. With MARVisT, users without visualization expertise can bind data to real-world objects to create expressive AR glyph-based visualizations rapidly and effortlessly, reshaping the representation of the real world with data. We use several examples to demonstrate the expressiveness of MARVisT. A user study with non-experts is also conducted to evaluate the authoring experience of MARVisT.
|
26
|
Rebelo J, Andrade C, Costa C, Santos MY. An Immersive Web Visualization Platform for a Big Data Context in Bosch’s Industry 4.0 Movement. INFORM SYST 2020. [DOI: 10.1007/978-3-030-44322-1_6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
|