1. Deng Z, Weng D, Liu S, Tian Y, Xu M, Wu Y. A survey of urban visual analytics: Advances and future directions. Computational Visual Media 2022; 9:3-39. [PMID: 36277276] [PMCID: PMC9579670] [DOI: 10.1007/s41095-022-0275-7]
Abstract
Developing effective visual analytics systems demands care in characterization of domain problems and integration of visualization techniques and computational models. Urban visual analytics has already achieved remarkable success in tackling urban problems and providing fundamental services for smart cities. To promote further academic research and assist the development of industrial urban analytics systems, we comprehensively review urban visual analytics studies from four perspectives. In particular, we identify 8 urban domains and 22 types of popular visualization, analyze 7 types of computational method, and categorize existing systems into 4 types based on their integration of visualization techniques and computational models. We conclude with potential research directions and opportunities.
Affiliation(s)
- Zikun Deng: State Key Lab of CAD & CG, Zhejiang University, Hangzhou 310058, China
- Di Weng: Microsoft Research Asia, Beijing 100080, China
- Shuhan Liu: State Key Lab of CAD & CG, Zhejiang University, Hangzhou 310058, China
- Yuan Tian: State Key Lab of CAD & CG, Zhejiang University, Hangzhou 310058, China
- Mingliang Xu: School of Information Engineering, Zhengzhou University, Zhengzhou, China; Henan Institute of Advanced Technology, Zhengzhou University, Zhengzhou 450001, China
- Yingcai Wu: State Key Lab of CAD & CG, Zhejiang University, Hangzhou 310058, China
2. An irrelevant attributes resistant approach to anomaly detection in high-dimensional space using a deep hypersphere structure. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2021.108301]
3. Sun L, Zhang X, Pan X, Liu Y, Yu W, Xu T, Liu F, Chen W, Wang Y, Su W, Zhou Z. Visual analytics of genealogy with attribute-enhanced topological clustering. J Vis (Tokyo) 2021. [DOI: 10.1007/s12650-021-00802-x]
4. Nonato LG, do Carmo FP, Silva CT. GLoG: Laplacian of Gaussian for Spatial Pattern Detection in Spatio-Temporal Data. IEEE Transactions on Visualization and Computer Graphics 2021; 27:3481-3492. [PMID: 32149640] [DOI: 10.1109/tvcg.2020.2978847]
Abstract
Boundary detection has long been a fundamental tool for image processing and computer vision, supporting the analysis of static and time-varying data. In this work, we build upon the theory of Graph Signal Processing to propose a novel boundary detection filter in the context of graphs, having as main application scenario the visual analysis of spatio-temporal data. More specifically, we propose the equivalent for graphs of the so-called Laplacian of Gaussian edge detection filter, which is widely used in image processing. The proposed filter is able to reveal interesting spatial patterns while still enabling the definition of entropy of time slices. The entropy reveals the degree of randomness of a time slice, helping users to identify expected and unexpected phenomena over time. The effectiveness of our approach appears in applications involving synthetic and real data sets, which show that the proposed methodology is able to uncover interesting spatial and temporal phenomena. The provided examples and case studies make clear the usefulness of our approach as a mechanism to support visual analytic tasks involving spatio-temporal data.
5. Zeng H, Shu X, Wang Y, Wang Y, Zhang L, Pong TC, Qu H. EmotionCues: Emotion-Oriented Visual Summarization of Classroom Videos. IEEE Transactions on Visualization and Computer Graphics 2021; 27:3168-3181. [PMID: 31902765] [DOI: 10.1109/tvcg.2019.2963659]
Abstract
Analyzing students' emotions from classroom videos can help both teachers and parents quickly know the engagement of students in class. The availability of high-definition cameras creates opportunities to record class scenes. However, watching videos is time-consuming, and it is challenging to gain a quick overview of the emotion distribution and find abnormal emotions. In this article, we propose EmotionCues, a visual analytics system to easily analyze classroom videos from the perspective of emotion summary and detailed analysis, which integrates emotion recognition algorithms with visualizations. It consists of three coordinated views: a summary view depicting the overall emotions and their dynamic evolution, a character view presenting the detailed emotion status of an individual, and a video view enhancing the video analysis with further details. Considering the possible inaccuracy of emotion recognition, we also explore several factors affecting the emotion analysis, such as face size and occlusion. They provide hints for inferring the possible inaccuracy and the corresponding reasons. Two use cases and interviews with end users and domain experts are conducted to show that the proposed system could be useful and effective for analyzing emotions in the classroom videos.
6. Chen X, Zeng W, Lin Y, Al-Maneea HM, Roberts J, Chang R. Composition and Configuration Patterns in Multiple-View Visualizations. IEEE Transactions on Visualization and Computer Graphics 2021; 27:1514-1524. [PMID: 33048683] [DOI: 10.1109/tvcg.2020.3030338]
Abstract
Multiple-view visualization (MV) is a layout design technique often employed to help users see a large number of data attributes and values in a single cohesive representation. Because of its generalizability, the MV design has been widely adopted by the visualization community to help users examine and interact with large, complex, and high-dimensional data. However, although ubiquitous, there has been little work to categorize and analyze MVs in order to better understand its design space. As a result, there has been little to no guideline in how to use the MV design effectively. In this paper, we present an in-depth study of how MVs are designed in practice. We focus on two fundamental measures of multiple-view patterns: composition, which quantifies what view types and how many are there; and configuration, which characterizes spatial arrangement of view layouts in the display space. We build a new dataset containing 360 images of MVs collected from IEEE VIS, EuroVis, and PacificVis publications 2011 to 2019, and make fine-grained annotations of view types and layouts for these visualization images. From this data we conduct composition and configuration analyses using quantitative metrics of term frequency and layout topology. We identify common practices around MVs, including relationship of view types, popular view layouts, and correlation between view types and layouts. We combine the findings into a MV recommendation system, providing interactive tools to explore the design space, and support example-based design.
7. Jin Z, Cao N, Shi Y, Wu W, Wu Y. EcoLens: visual analysis of ecological regions in urban contexts using traffic data. J Vis (Tokyo) 2020. [DOI: 10.1007/s12650-020-00707-1]
8. Dai H, Tao Y, Lin H. Visual analytics of urban transportation from a bike-sharing and taxi perspective. J Vis (Tokyo) 2020. [DOI: 10.1007/s12650-020-00673-8]
9. Deng Z, Weng D, Chen J, Liu R, Wang Z, Bao J, Zheng Y, Wu Y. AirVis: Visual Analytics of Air Pollution Propagation. IEEE Transactions on Visualization and Computer Graphics 2020; 26:800-810. [PMID: 31443012] [DOI: 10.1109/tvcg.2019.2934670]
Abstract
Air pollution has become a serious public health problem for many cities around the world. To find the causes of air pollution, the propagation processes of air pollutants must be studied at a large spatial scale. However, the complex and dynamic wind fields lead to highly uncertain pollutant transportation. The state-of-the-art data mining approaches cannot fully support the extensive analysis of such uncertain spatiotemporal propagation processes across multiple districts without the integration of domain knowledge. The limitation of these automated approaches motivates us to design and develop AirVis, a novel visual analytics system that assists domain experts in efficiently capturing and interpreting the uncertain propagation patterns of air pollution based on graph visualizations. Designing such a system poses three challenges: a) the extraction of propagation patterns; b) the scalability of pattern presentations; and c) the analysis of propagation processes. To address these challenges, we develop a novel pattern mining framework to model pollutant transportation and extract frequent propagation patterns efficiently from large-scale atmospheric data. Furthermore, we organize the extracted patterns hierarchically based on the minimum description length (MDL) principle and empower expert users to explore and analyze these patterns effectively on the basis of pattern topologies. We demonstrated the effectiveness of our approach through two case studies conducted with a real-world dataset and positive feedback from domain experts.
10. Li J, Chen S, Zhang K, Andrienko G, Andrienko N. COPE: Interactive Exploration of Co-Occurrence Patterns in Spatial Time Series. IEEE Transactions on Visualization and Computer Graphics 2019; 25:2554-2567. [PMID: 29994614] [DOI: 10.1109/tvcg.2018.2851227]
Abstract
Spatial time series is a common type of data dealt with in many domains, such as economic statistics and environmental science. There have been many studies focusing on finding and analyzing various kinds of events in time series; the term 'event' refers to significant changes or occurrences of particular patterns formed by consecutive attribute values. We focus on a further step in event analysis: discover temporal relationship patterns between event locations, i.e., repeated cases when there is a specific temporal relationship (same time, before, or after) between events occurring at two locations. This can provide important clues for understanding the formation and spreading mechanisms of events and interdependencies among spatial locations. We propose a visual exploration framework COPE (Co-Occurrence Pattern Exploration), which allows users to extract events of interest from data and detect various co-occurrence patterns among them. Case studies and expert reviews were conducted to verify the effectiveness and scalability of COPE using two real-world datasets.
11.
Abstract
The increased accessibility of urban sensor data and the popularity of social network applications is enabling the discovery of crowd mobility and personal communication patterns. However, studying the egocentric relationships of an individual can be very challenging because available data may refer to direct contacts, such as phone calls between individuals, or indirect contacts, such as paired location presence. In this article, we develop methods to integrate three facets extracted from heterogeneous urban data (timelines, calls, and locations) through a progressive visual reasoning and inspection scheme. Our approach uses a detect-and-filter scheme such that, prior to visual refinement and analysis, a coarse detection is performed to extract the target individual and construct the timeline of the target. It then detects spatio-temporal co-occurrences or call-based contacts to develop the egocentric network of the individual. The filtering stage is enhanced with a line-based visual reasoning interface that facilitates a flexible and comprehensive investigation of egocentric relationships and connections in terms of time, space, and social networks. The integrated system, RelationLines, is demonstrated using a dataset that contains taxi GPS data, cell-base mobility data, mobile calling data, microblog data, and point-of-interest (POI) data from a city with millions of citizens. We examine the effectiveness and efficiency of our system with three case studies and user review.
Affiliation(s)
- Wei Chen: State Key Lab of CAD & CG, Zhejiang University, China
- Jing Xia: State Key Lab of CAD & CG, Zhejiang University, and Alibaba Group, China
- Xumeng Wang: State Key Lab of CAD & CG, Zhejiang University, China
- Yi Wang: State Key Lab of CAD & CG, Zhejiang University, China
- Jun Chen: State Key Lab of CAD & CG, Zhejiang University, Guangzhou, China
- Liang Chang: Guilin University of Electronic Technology, China
12. Diverse Visualization Techniques and Methods of Moving-Object-Trajectory Data: A Review. ISPRS International Journal of Geo-Information 2019. [DOI: 10.3390/ijgi8020063]
Abstract
Trajectory big data have significant applications in many areas, such as traffic management, urban planning and military reconnaissance. Traditional visualization methods, which are represented by contour maps, shading maps and hypsometric maps, are mainly based on the spatiotemporal information of trajectories, which can macroscopically study the spatiotemporal conditions of the entire trajectory set and microscopically analyze the individual movement of each trajectory; such methods are widely used in screen display and flat mapping. With the improvement of trajectory data quality, these data can generally describe information in the spatial and temporal dimensions and involve many other attributes (e.g., speed, orientation, and elevation) with large data amounts and high dimensions. Additionally, these data have relatively complicated internal relationships and regularities, whose analysis could cause many troubles; the traditional approaches can no longer fully meet the requirements of visualizing trajectory data and mining hidden information. Therefore, diverse visualization methods that present the value of massive trajectory information are currently a hot research topic. This paper summarizes the research status of trajectory data-visualization techniques in recent years and extracts common contemporary trajectory data-visualization methods to comprehensively cognize and understand the fundamental characteristics and diverse achievements of trajectory-data visualization.
13. Sobral T, Galvão T, Borges J. Visualization of Urban Mobility Data from Intelligent Transportation Systems. Sensors 2019; 19:332. [PMID: 30650641] [PMCID: PMC6359619] [DOI: 10.3390/s19020332]
Abstract
Intelligent Transportation Systems are an important enabler for the smart cities paradigm. Currently, such systems generate massive amounts of granular data that can be analyzed to better understand people’s dynamics. To address the multivariate nature of spatiotemporal urban mobility data, researchers and practitioners have developed an extensive body of research and interactive visualization tools. Data visualization provides multiple perspectives on data and supports the analytical tasks of domain experts. This article surveys related studies to analyze which topics of urban mobility were addressed and their related phenomena, and to identify the adopted visualization techniques and sensors data types. We highlight research opportunities based on our findings.
Affiliation(s)
- Thiago Sobral: INESC TEC, Faculty of Engineering, University of Porto, Porto 4200-465, Portugal
- Teresa Galvão: INESC TEC, Faculty of Engineering, University of Porto, Porto 4200-465, Portugal
- José Borges: INESC TEC, Faculty of Engineering, University of Porto, Porto 4200-465, Portugal
14. Estimation of Hourly Link Population and Flow Directions from Mobile CDR. ISPRS International Journal of Geo-Information 2018. [DOI: 10.3390/ijgi7110449]
Abstract
The rise in big data applications in urban planning and transport management is now widening and becoming a part of local government decision-making processes. Understanding people flow inside the city helps urban and transport planners build a healthy and lively city. Many flow maps are based on origin-and-destination points with crossing lines, which reduce the map’s readability and overall appearance. Today, with the emergence of geolocation-enabled handheld devices with wireless communication and networking capabilities, human mobility and the resulting events can be captured and stored as text-based geospatial big data. In this paper, we used one-week mobile-call-detail records (CDR) and a GIS road network model to estimate hourly link population and flow directions, based on mobile-call activities of origin–destination pairs with a shortest-path analysis for the whole city. Moreover, to gain the actual population size from the number of mobile-call users, we introduced a home-based magnification factor (h-MF) by integrating with the national census. Therefore, the final output link data have both magnitude (actual population) and flow direction at one-hour intervals between 06:00 and 21:00. The hourly link population and flow direction dataset are intended to optimize bus routes, solve traffic congestion problems, and enhance disaster and emergency preparedness.
15. Focus+context grouping for animated transitions. Journal of Visual Languages and Computing 2018. [DOI: 10.1016/j.jvlc.2018.06.006]
16. Zhou Z, Yu J, Guo Z, Liu Y. Visual exploration of urban functions via spatio-temporal taxi OD data. Journal of Visual Languages and Computing 2018. [DOI: 10.1016/j.jvlc.2018.08.009]
17. M3: visual exploration of spatial relationships between flight trajectories. J Vis (Tokyo) 2018. [DOI: 10.1007/s12650-017-0471-1]
18. Zhou Z, Ye Z, Yu J, Chen W. Cluster-aware arrangement of the parallel coordinate plots. Journal of Visual Languages and Computing 2018. [DOI: 10.1016/j.jvlc.2017.10.003]
19. Steptoe M, Krüger R, Garcia R, Liang X, Maciejewski R. A Visual Analytics Framework for Exploring Theme Park Dynamics. ACM Transactions on Interactive Intelligent Systems 2018. [DOI: 10.1145/3162076]
Abstract
In 2015, the top 10 largest amusement park corporations saw a combined annual attendance of over 400 million visitors. Daily average attendance in some of the most popular theme parks in the world can average 44,000 visitors per day. These visitors ride attractions, shop for souvenirs, and dine at local establishments; however, a critical component of their visit is the overall park experience. This experience depends on the wait time for rides, the crowd flow in the park, and various other factors linked to the crowd dynamics and human behavior. As such, better insight into visitor behavior can help theme parks devise competitive strategies for improved customer experience. Research into the use of attractions, facilities, and exhibits can be studied, and as behavior profiles emerge, park operators can also identify anomalous behaviors of visitors which can improve safety and operations. In this article, we present a visual analytics framework for analyzing crowd dynamics in theme parks. Our proposed framework is designed to support behavioral analysis by summarizing patterns and detecting anomalies. We provide methodologies to link visitor movement data, communication data, and park infrastructure data. This combination of data sources enables a semantic analysis of who, what, when, and where, enabling analysts to explore visitor-visitor interactions and visitor-infrastructure interactions. Analysts can identify behaviors at the macro level through semantic trajectory clustering views for group behavior dynamics, as well as at the micro level using trajectory traces and a novel visitor network analysis view. We demonstrate the efficacy of our framework through two case studies of simulated theme park visitors.
20. Cao N, Lin C, Zhu Q, Lin YR, Teng X, Wen X. Voila: Visual Anomaly Detection and Monitoring with Streaming Spatiotemporal Data. IEEE Transactions on Visualization and Computer Graphics 2018; 24:23-33. [PMID: 28866547] [DOI: 10.1109/tvcg.2017.2744419]
Abstract
The increasing availability of spatiotemporal data continuously collected from various sources provides new opportunities for a timely understanding of the data in their spatial and temporal context. Finding abnormal patterns in such data poses significant challenges. Given that there is often no clear boundary between normal and abnormal patterns, existing solutions are limited in their capacity of identifying anomalies in large, dynamic and heterogeneous data, interpreting anomalies in their multifaceted, spatiotemporal context, and allowing users to provide feedback in the analysis loop. In this work, we introduce a unified visual interactive system and framework, Voila, for interactively detecting anomalies in spatiotemporal data collected from a streaming data source. The system is designed to meet two requirements in real-world applications, i.e., online monitoring and interactivity. We propose a novel tensor-based anomaly analysis algorithm with visualization and interaction design that dynamically produces contextualized, interpretable data summaries and allows for interactively ranking anomalous patterns based on user input. Using the "smart city" as an example scenario, we demonstrate the effectiveness of the proposed framework through quantitative evaluation and qualitative case studies.
21. Aureole: a multi-perspective visual analytics approach for green cellular networks. J Vis (Tokyo) 2017. [DOI: 10.1007/s12650-017-0467-x]
22. Miranda F, Doraiswamy H, Lage M, Zhao K, Goncalves B, Wilson L, Hsieh M, Silva CT. Urban Pulse: Capturing the Rhythm of Cities. IEEE Transactions on Visualization and Computer Graphics 2017; 23:791-800. [PMID: 27875193] [DOI: 10.1109/tvcg.2016.2598585]
Abstract
Cities are inherently dynamic. Interesting patterns of behavior typically manifest at several key areas of a city over multiple temporal resolutions. Studying these patterns can greatly help a variety of experts ranging from city planners and architects to human behavioral experts. Recent technological innovations have enabled the collection of enormous amounts of data that can help in these studies. However, techniques using these data sets typically focus on understanding the data in the context of the city, thus failing to capture the dynamic aspects of the city. The goal of this work is to instead understand the city in the context of multiple urban data sets. To do so, we define the concept of an "urban pulse" which captures the spatio-temporal activity in a city across multiple temporal resolutions. The prominent pulses in a city are obtained using the topology of the data sets, and are characterized as a set of beats. The beats are then used to analyze and compare different pulses. We also design a visual exploration framework that allows users to explore the pulses within and across multiple cities under different conditions. Finally, we present three case studies carried out by experts from two different domains that demonstrate the utility of our framework.