1. Ye C, Wang X, Morris A, Ying Z. Pedestrian crash causation analysis and active safety system calibration. ACCIDENT ANALYSIS AND PREVENTION 2024; 195:107404. [PMID: 38042009] [DOI: 10.1016/j.aap.2023.107404]
Abstract
Over 20% of global crash fatalities involve pedestrians, yet pedestrian crash causation and pedestrian protection systems have not been thoroughly developed or reliably tested. To understand the causation characteristics of pedestrian crashes, 398 pedestrian crashes were extracted from the China In-Depth Accident Study (CIDAS), and most of these crashes were aggregated into five scenarios. The two scenarios with the highest proportions of crashes were analyzed with the Driving Reliability and Error Analysis Method (DREAM) to identify high-risk causation patterns. From these patterns, three main contributing factors were identified: 1) extreme environmental light disturbance; 2) distracted driving caused by drivers' own thoughts; and 3) drivers violating the pedestrian yield law. Based on these patterns and factors, a pedestrian protection system was designed. It consists of a forward vision sensor and radar to sense the environment and a three-stage autonomous emergency braking (AEB) algorithm to automatically avoid pedestrian collisions. Crash scenarios from CIDAS data were recreated in MATLAB Simulink to test the proposed pedestrian protection system, which was found to reduce pedestrian crashes by more than 90%. The optimal parameters for the three AEB stages were obtained, with decelerations of 0.2 g, 0.3 g, and 0.6 g. This study designed an active safety system based on causation analysis of vehicle-pedestrian crashes and calibrated its AEB algorithm, thus providing reference and insight for the further development of pedestrian protection systems.
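The staged braking logic described in this abstract can be sketched compactly. The following Python fragment is a minimal illustration, not the authors' MATLAB Simulink implementation: only the three deceleration levels (0.2 g, 0.3 g, 0.6 g) come from the study, while the time-to-collision (TTC) thresholds and function names are hypothetical placeholders.

```python
# Minimal sketch of a three-stage AEB controller keyed to time-to-collision (TTC).
# The 0.2 g / 0.3 g / 0.6 g decelerations are the calibrated values reported in
# the abstract; the TTC thresholds below are hypothetical placeholders.

G = 9.81  # m/s^2

def ttc(gap_m: float, closing_speed_ms: float) -> float:
    """Time to collision; infinite when the gap is opening."""
    return gap_m / closing_speed_ms if closing_speed_ms > 0 else float("inf")

def aeb_deceleration(gap_m: float, closing_speed_ms: float) -> float:
    """Return commanded deceleration (m/s^2) for the current conflict state."""
    t = ttc(gap_m, closing_speed_ms)
    if t < 0.6:      # stage 3: imminent collision, full emergency braking
        return 0.6 * G
    if t < 1.2:      # stage 2: strong partial braking
        return 0.3 * G
    if t < 1.8:      # stage 1: gentle pre-braking to scrub speed early
        return 0.2 * G
    return 0.0       # no intervention

if __name__ == "__main__":
    # Vehicle at 40 km/h, pedestrian 10 m ahead in the vehicle's path.
    print(aeb_deceleration(gap_m=10.0, closing_speed_ms=40 / 3.6))
```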
Affiliation(s)
- Caiyang Ye
- School of Transportation Engineering, Tongji University, Shanghai, 201804, China; The Key Laboratory of Road and Traffic Engineering, Ministry of Education, Shanghai, 201804, China
- Xuesong Wang
- School of Transportation Engineering, Tongji University, Shanghai, 201804, China; The Key Laboratory of Road and Traffic Engineering, Ministry of Education, Shanghai, 201804, China.
- Andrew Morris
- School of Design and Creative Arts, Loughborough University, Loughborough, UK
- Zhaoyang Ying
- Traffic Management Research Institute, The Ministry of Public Security, Wuxi, Jiangsu, 214151, China
2. Wang X, Ye C, Quddus M, Morris A. Pedestrian safety in an automated driving environment: Calibrating and evaluating the responsibility-sensitive safety model. ACCIDENT ANALYSIS AND PREVENTION 2023; 192:107265. [PMID: 37619318] [DOI: 10.1016/j.aap.2023.107265]
Abstract
The severity of vehicle-pedestrian crashes has prompted authorities worldwide to concentrate on improving pedestrian safety. The situation has only become more urgent with the approach of automated driving. The Responsibility-Sensitive Safety (RSS) model, introduced by Mobileye®, is a rigorous mathematical model developed to facilitate the safe operation of automated vehicles. The RSS model has been calibrated for several vehicle conflict scenarios; however, it has not yet been tested for pedestrian safety. Therefore, this study calibrates and evaluates the RSS model for pedestrian safety using data from the Shanghai Naturalistic Driving Study. Nearly 400 vehicle-pedestrian conflicts were extracted from 8,000 trips using a threshold-based method followed by manual checks, and were then divided into 16 basic scenarios in three categories. Because crossing conflicts were the most serious and frequent, they were reproduced in MATLAB Simulink, with each vehicle replaced by a virtual automated vehicle running the RSS controller module. With the objectives of maximizing safety and minimizing conservativeness, the non-dominated sorting genetic algorithm II was applied to calibrate the RSS model for vehicle-pedestrian conflicts. The safety performance of the RSS model was then compared with that of the commonly used active safety function, autonomous emergency braking (AEB), and with human driving. Findings verified that the RSS model was safer in vehicle-pedestrian conflicts than both the AEB model and human driving. It also produced the smoothest and most stable driving among the tested controllers. This study provides a reliable reference for the safe control of automated vehicles with respect to pedestrians.
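For readers unfamiliar with RSS, the core of the model is a closed-form minimum safe longitudinal distance. The sketch below implements the standard RSS longitudinal rule from Shalev-Shwartz et al.'s formulation, treating the pedestrian as the front agent for simplicity; the parameter values are illustrative defaults, not the NSGA-II-calibrated values from this study.

```python
# Minimal sketch of the RSS longitudinal safe-distance rule. Treating the
# pedestrian as the front agent is a simplification for illustration; the
# parameter values are hypothetical, not the calibrated ones from the study.

def rss_safe_distance(v_rear: float,
                      v_front: float,
                      rho: float = 0.5,          # response time (s)
                      a_accel_max: float = 2.0,  # max accel during response (m/s^2)
                      b_min: float = 4.0,        # rear agent's min braking (m/s^2)
                      b_max: float = 8.0) -> float:
    """Minimum longitudinal gap (m) the rear vehicle must keep."""
    v_after_rho = v_rear + rho * a_accel_max
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_after_rho ** 2 / (2 * b_min)
         - v_front ** 2 / (2 * b_max))
    return max(d, 0.0)

def rss_brake_needed(gap: float, v_rear: float, v_front: float) -> bool:
    """True when the current gap violates the RSS minimum, so brake."""
    return gap < rss_safe_distance(v_rear, v_front)

if __name__ == "__main__":
    # Vehicle at 30 km/h approaching a crossing pedestrian 6 m ahead.
    print(rss_brake_needed(gap=6.0, v_rear=30 / 3.6, v_front=0.0))
```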
Affiliation(s)
- Xuesong Wang
- School of Transportation Engineering, Tongji University, Shanghai, 201804, China; The Key Laboratory of Road and Traffic Engineering, Ministry of Education, Shanghai 201804, China.
- Caiyang Ye
- School of Transportation Engineering, Tongji University, Shanghai, 201804, China; The Key Laboratory of Road and Traffic Engineering, Ministry of Education, Shanghai 201804, China
- Mohammed Quddus
- Department of Civil and Environmental Engineering, Imperial College London, London, UK
- Andrew Morris
- School of Design and Creative Arts, Loughborough University, Loughborough, UK
3. Zhao Y, Stefanucci J, Creem-Regehr S, Bodenheimer B. Evaluating Augmented Reality Landmark Cues and Frame of Reference Displays with Virtual Reality. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2023; PP:2710-2720. [PMID: 37027707] [DOI: 10.1109/tvcg.2023.3247078]
Abstract
Daily travel often demands navigation on foot, across application domains ranging from commuting to search and rescue. Head-mounted augmented reality (AR) displays provide a preview of future navigation systems on foot, but designing them is still an open problem. In this paper, we examine two choices that such AR systems can make for navigation: 1) whether to denote landmarks with AR cues, and 2) how to convey navigation instructions. Specifically, instructions can be given via a head-referenced display (screen-fixed frame of reference) or by giving directions fixed to global positions in the world (world-fixed frame of reference). Given the limited tracking stability, field of view, and brightness of most currently available head-mounted AR displays on lengthy outdoor routes, we simulated these conditions in virtual reality. In the current study, participants navigated an urban virtual environment and their spatial knowledge acquisition was assessed. We varied whether landmarks in the environment were cued, as well as how navigation instructions were displayed (i.e., via screen-fixed or world-fixed directions). We found that the world-fixed frame of reference resulted in better spatial learning when no landmarks were cued; adding AR landmark cues marginally improved spatial learning in the screen-fixed condition. These learning benefits were also correlated with participants' reported sense of direction. Our findings have implications for the design of future cognition-driven navigation systems.
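The two frames of reference compared in this study differ in how a cue's on-screen position is computed. The hypothetical Python sketch below illustrates the distinction: a world-fixed cue's apparent bearing follows from the user's pose, while a screen-fixed cue is pose-independent. The geometry is simplified to 2D and all names and numbers are illustrative.

```python
# Sketch contrasting world-fixed and screen-fixed navigation cues. In a
# world-fixed display the cue is anchored to a global position and its screen
# location follows from the user's pose; in a screen-fixed display the cue sits
# at a constant location in the head-referenced view.
import math

def world_fixed_cue_bearing(user_xy, user_heading_rad, target_xy):
    """Bearing of a world-anchored cue relative to the user's view (rad).
    0 means straight ahead; positive means the cue appears to the right.
    Heading is measured from the +y (north) axis."""
    dx = target_xy[0] - user_xy[0]
    dy = target_xy[1] - user_xy[1]
    bearing = math.atan2(dx, dy) - user_heading_rad
    return math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]

def screen_fixed_cue(turn_direction: str) -> str:
    """A head-referenced instruction is pose-independent: always drawn at the
    same screen position, only its content changes."""
    return {"left": "<- turn left", "right": "turn right ->"}[turn_direction]

if __name__ == "__main__":
    # User at the origin heading north; landmark 10 m to the north-east.
    print(math.degrees(world_fixed_cue_bearing((0, 0), 0.0, (10, 10))))  # ~45 deg
    print(screen_fixed_cue("left"))
```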
4. Kara PA, Wippelhauser A, Balogh T, Bokor L. How I Met Your V2X Sensor Data: Analysis of Projection-Based Light Field Visualization for Vehicle-to-Everything Communication Protocols and Use Cases. SENSORS (BASEL, SWITZERLAND) 2023; 23:1284. [PMID: 36772324] [PMCID: PMC9919924] [DOI: 10.3390/s23031284]
Abstract
Practical use of V2X communication protocols has begun to emerge in recent years. Data built on sensor information are displayed via onboard units and smart devices. However, perceiving such data may be counterproductive in terms of visual attention, particularly in safety-related applications. Using the windshield as a display may solve this issue, but switching between 2D information and the 3D reality of traffic may introduce issues of its own. To overcome such difficulties, automotive light field visualization has been introduced. In this paper, we investigate the visualization of V2X communication protocols and use cases via projection-based light field technology. Our work is motivated by the abundance of V2X sensor data, the low latency of V2X data transfer, the availability of automotive light field prototypes, the continued dominance of non-autonomous and non-remote driving, and the lack of V2X-based light field solutions. As our primary contributions, we provide a comprehensive technological review of light field and V2X communication, a set of recommendations for design and implementation, an extensive discussion and implication analysis, an exploration of utilization based on standardized protocols, and use-case-specific considerations.
Affiliation(s)
- Peter A. Kara
- Department of Networked Systems and Services, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Műegyetem rkp. 3., H-1111 Budapest, Hungary
- Wireless Multimedia and Networking Research Group, Department of Computer Science, Faculty of Science, School of Computer Science and Mathematics, Engineering and Computing, Kingston University, Penrhyn Road Campus, Kingston upon Thames, London KT1 2EE, UK
- Andras Wippelhauser
- Department of Networked Systems and Services, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Műegyetem rkp. 3., H-1111 Budapest, Hungary
- Laszlo Bokor
- Department of Networked Systems and Services, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Műegyetem rkp. 3., H-1111 Budapest, Hungary
5. Mok CS, Bazilinskyy P, de Winter J. Stopping by looking: A driver-pedestrian interaction study in a coupled simulator using head-mounted displays with eye-tracking. APPLIED ERGONOMICS 2022; 105:103825. [PMID: 35777182] [DOI: 10.1016/j.apergo.2022.103825]
Abstract
Automated vehicles (AVs) can perform low-level control tasks but are not always capable of proper decision-making. This paper presents a concept of eye-based maneuver control for AV-pedestrian interaction. Previously, it was unknown whether the AV should conduct a stopping maneuver when the driver looks at the pedestrian or looks away from the pedestrian. A two-agent experiment was conducted using two head-mounted displays with integrated eye-tracking. Seventeen pairs of participants (pedestrian and driver) each interacted in a road crossing scenario. The pedestrians' task was to hold a button when they felt safe to cross the road, and the drivers' task was to direct their gaze according to instructions. Participants completed three 16-trial blocks: (1) Baseline, in which the AV was pre-programmed to yield or not yield, (2) Look to Yield (LTY), in which the AV yielded when the driver looked at the pedestrian, and (3) Look Away to Yield (LATY), in which the AV yielded when the driver did not look at the pedestrian. The driver's eye movements in the LTY and LATY conditions were visualized using a virtual light beam. Crossing performance was assessed based on whether the pedestrian held the button when the AV yielded and released the button when the AV did not yield. Furthermore, the pedestrians' and drivers' acceptance of the mappings was measured through a questionnaire. The results showed that the LTY and LATY mappings yielded better crossing performance than Baseline. Furthermore, the LTY condition was best accepted by drivers and pedestrians. Eye-tracking analyses indicated that the LTY and LATY mappings attracted the pedestrian's attention, while pedestrians still distributed their attention between the AV and a second vehicle approaching from the other direction. In conclusion, LTY control may be a promising means of AV control at intersections before full automation is technologically feasible.
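The crossing-performance criterion described above reduces to a simple agreement check between the pedestrian's button state and the AV's behavior. A minimal sketch, with hypothetical data structures:

```python
# Minimal sketch of the crossing-performance criterion: a trial counts as
# correct when the pedestrian holds the button while the AV yields and
# releases it while the AV does not. Field names are hypothetical.

def crossing_correct(button_held: bool, av_yields: bool) -> bool:
    return button_held == av_yields

def performance_score(trials):
    """Fraction of trials in which the pedestrian's button state matched the
    AV's actual yielding behavior. trials: list of (button_held, av_yields)."""
    return sum(crossing_correct(h, y) for h, y in trials) / len(trials)

if __name__ == "__main__":
    print(performance_score([(True, True), (False, False), (True, False)]))  # ~0.67
```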
Affiliation(s)
- Chun Sang Mok
- Department of Cognitive Robotics, Delft University of Technology, Delft, the Netherlands
- Pavlo Bazilinskyy
- Department of Cognitive Robotics, Delft University of Technology, Delft, the Netherlands
- Joost de Winter
- Department of Cognitive Robotics, Delft University of Technology, Delft, the Netherlands.
6. Kim H, Gabbard JL. Assessing Distraction Potential of Augmented Reality Head-Up Displays for Vehicle Drivers. HUMAN FACTORS 2022; 64:852-865. [PMID: 31063399] [DOI: 10.1177/0018720819844845]
Abstract
OBJECTIVE To develop a framework for quantifying the visual and cognitive distraction potential of augmented reality (AR) head-up displays (HUDs). BACKGROUND AR HUDs promise to be less distractive than traditional in-vehicle displays because they project information onto the driver's forward-looking view of the road. However, AR graphics may direct the driver's attention away from critical road elements. Moreover, current in-vehicle device assessment methods, which are based on eyes-off-road time measures, cannot capture this unique challenge. METHOD This article proposes a new method for the assessment of AR HUDs by measuring driver gaze behavior, situation awareness, confidence, and workload. An experimental user study (n = 24) was conducted in a driving simulator to apply the proposed method for the assessment of two AR pedestrian collision warning (PCW) design alternatives. RESULTS Only one of the two tested AR interfaces improved driver awareness of pedestrians without visually and cognitively distracting drivers from other road elements that were not augmented by the display but still critical for safe driving. CONCLUSION Our initial human-subject experiment demonstrated the potential of the proposed method in quantifying both positive and negative consequences of AR HUDs on driver cognitive processes. More importantly, the study suggests that AR interfaces can be informative or distractive depending on the perceptual forms of graphical elements presented on the displays. APPLICATION The proposed methods can be applied by designers of in-vehicle AR HUD interfaces and be leveraged by designers of AR user interfaces in general.
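One ingredient of such an assessment method is summarizing how gaze is distributed across areas of interest (AOIs), including road elements not augmented by the display. The sketch below is a hypothetical illustration of that idea; the AOI names and the concentration threshold are assumptions, not values from the paper.

```python
# Sketch of a gaze-distribution summary across areas of interest (AOIs),
# including road elements that are not augmented by the HUD. The AOI names
# and the 0.5 concentration threshold are hypothetical illustrations.
from collections import Counter

def dwell_shares(fixations):
    """fixations: list of (aoi_name, duration_s) -> share of total dwell per AOI."""
    totals = Counter()
    for aoi, dur in fixations:
        totals[aoi] += dur
    grand = sum(totals.values())
    return {aoi: t / grand for aoi, t in totals.items()}

def attention_tunneling(fixations, cue_aoi="ar_warning", threshold=0.5):
    """Flag possible cognitive capture: the AR cue absorbs most of the gaze."""
    return dwell_shares(fixations).get(cue_aoi, 0.0) > threshold

if __name__ == "__main__":
    fx = [("ar_warning", 3.2), ("pedestrian", 1.1), ("oncoming_car", 0.7)]
    print(dwell_shares(fx))
    print(attention_tunneling(fx))  # True: the cue dominates gaze
```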
Affiliation(s)
- Hyungil Kim
- Virginia Tech Transportation Institute, Blacksburg, USA
7. Stapel J, El Hassnaoui M, Happee R. Measuring Driver Perception: Combining Eye-Tracking and Automated Road Scene Perception. HUMAN FACTORS 2022; 64:714-731. [PMID: 32993382] [PMCID: PMC9136390] [DOI: 10.1177/0018720820959958]
Abstract
OBJECTIVE To investigate how well gaze behavior can indicate driver awareness of individual road users when related to the vehicle's road scene perception. BACKGROUND An appropriate method is required to identify how driver gaze reveals awareness of other road users. METHOD We developed a recognition-based method for labeling driver situation awareness (SA) in a vehicle equipped with road-scene perception and eye tracking. Thirteen drivers performed 91 left turns at complex urban intersections and identified images of encountered road users among distractor images. RESULTS Drivers fixated within 2° of 72.8% of relevant and 27.8% of irrelevant road users, and one minute after leaving the intersection were able to recognize 36.1% of the relevant and 19.4% of the irrelevant road users. Gaze behavior could predict road user relevance but not the outcome of the recognition task. Unexpectedly, 18% of road users observed beyond 10° were recognized. CONCLUSIONS Despite suboptimal psychometric properties leading to low recognition rates, our recognition task could identify awareness of individual road users during left-turn maneuvers. Perception occurred at gaze angles well beyond 2°, which means that fixation locations alone are insufficient for awareness monitoring. APPLICATION Findings can be used in driver attention and awareness modelling and in the design of gaze-based driver support systems.
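The 2° fixation criterion used in this study can be expressed as an angular test between the gaze direction and the direction to a road user. A minimal sketch, with hypothetical data structures:

```python
# Sketch of a 2-degree fixation criterion: a road user counts as fixated when
# the angle between the gaze ray and the direction to the road user falls
# below 2 degrees. Vectors are unit-free 3D directions.
import math

def angle_deg(gaze, target):
    """Angle between gaze direction and direction to a road user (degrees)."""
    dot = sum(g * t for g, t in zip(gaze, target))
    ng = math.sqrt(sum(g * g for g in gaze))
    nt = math.sqrt(sum(t * t for t in target))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (ng * nt)))))

def fixated(gaze, target, threshold_deg=2.0) -> bool:
    return angle_deg(gaze, target) < threshold_deg

if __name__ == "__main__":
    print(fixated((0.0, 1.0, 0.0), (0.02, 1.0, 0.0)))  # ~1.1 deg -> True
```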
Affiliation(s)
- Jork Stapel
- Delft University of Technology, Netherlands
- Jork Stapel, Faculty of Mechanical Maritime and Materials Engineering, Delft University of Technology, Mekelweg 2, Delft, 2628 CD, Netherlands
8. Skirnewskaja J, Wilkinson TD. Automotive Holographic Head-Up Displays. ADVANCED MATERIALS (DEERFIELD BEACH, FLA.) 2022; 34:e2110463. [PMID: 35148445] [DOI: 10.1002/adma.202110463]
Abstract
Drivers' access to navigation information and vehicle data through in-car displays and personal devices distracts them from safe vehicle management. The discrepancy between road safety and infotainment must be addressed to develop safely operated modern vehicles. Head-up displays (HUDs) aim to provide a seamless uptake of visual information for the driver while the vehicle is operated safely. HUDs projected on the windshield provide the driver with visual navigation and vehicle data within the comfort of the driver's personal eye box through a customizable extended display space. Windshield HUDs do not require the driver to shift their gaze away from the road to obtain road information. This article reviews technological advances and future perspectives in holographic HUDs by analyzing optoelectronic devices and the driver's user experience. The review elucidates holographic displays and full augmented reality in 3D with depth perception, projecting visual information onto the road within the driver's gaze. Design factors, functionality, and the integration of personalized machine learning technologies into holographic HUDs are discussed. Application examples of these display technologies for road safety and security are presented. An outlook reflects on display trends and autonomous driving.
Affiliation(s)
- Jana Skirnewskaja
- Electrical Engineering Division, Department of Engineering, University of Cambridge, 9 JJ Thomson Avenue, Cambridge, CB3 0FA, UK
- Timothy D Wilkinson
- Electrical Engineering Division, Department of Engineering, University of Cambridge, 9 JJ Thomson Avenue, Cambridge, CB3 0FA, UK
9. Jing C, Shang C, Yu D, Chen Y, Zhi J. The impact of different AR-HUD virtual warning interfaces on the takeover performance and visual characteristics of autonomous vehicles. TRAFFIC INJURY PREVENTION 2022; 23:277-282. [PMID: 35442130] [DOI: 10.1080/15389588.2022.2055752]
Abstract
OBJECTIVE The objective of this study was to determine the effects of an arrow-pointing augmented reality head-up display (AR-HUD) interface, a virtual-shadow AR-HUD interface, and a non-AR-HUD interface on autonomous vehicle takeover efficiency and driver eye-movement characteristics in different driving scenarios. METHODS Thirty-six participants carried out a simulated driving experiment, and eye-movement indices and takeover times were analyzed. RESULTS Compared with the non-AR-HUD interface, both AR-HUD interfaces effectively reduced the driver's visual distraction, improved the efficiency of obtaining visual information, reduced the number of times the driver's eyes left the road, and improved takeover efficiency; however, eye-movement indices did not differ significantly between the arrow-pointing interface and the more eye-catching virtual-shadow interface. Considering specific scenarios, in the case of emergency braking by the vehicle in front, both AR-HUD interfaces offered greater takeover efficiency than the non-AR-HUD interface, whereas in the scenarios of a rear vehicle overtaking the vehicle ahead and of non-motorized vehicles running red lights, takeover efficiency did not differ significantly. For the scenarios of a non-motorized vehicle encroaching into the lane, an emergency U-turn by the vehicle in front, and a pedestrian crossing, the virtual-shadow AR-HUD interface had the highest takeover efficiency. CONCLUSIONS These results can help improve the active safety of autonomous vehicle AR-HUD interfaces.
Affiliation(s)
- Chunhui Jing
- Department of Industrial Design, Southwest Jiaotong University, Chengdu, China
- Chenguang Shang
- Intelligent Research Institute of Chongqing Changan Automobile Co., Ltd, Chongqing, China
- Dongyu Yu
- Department of Industrial Design, Southwest Jiaotong University, Chengdu, China
- Yaodong Chen
- Department of Industrial Design, Southwest Jiaotong University, Chengdu, China
- Jinyi Zhi
- Department of Industrial Design, Southwest Jiaotong University, Chengdu, China
10. Noh B, Park H, Yeo H. Analyzing vehicle-pedestrian interactions: Combining data cube structure and predictive collision risk estimation model. ACCIDENT ANALYSIS AND PREVENTION 2022; 165:106539. [PMID: 34929575] [DOI: 10.1016/j.aap.2021.106539]
Abstract
Road traffic accidents are a severe threat to human lives, causing premature deaths particularly among vulnerable road users (VRUs) such as pedestrians. It is therefore necessary to devise systems that prevent accidents in advance and respond proactively, using potentially risky situations as a surrogate safety measure. This study introduces a new concept for a pedestrian safety system that combines field and centralized processes. The system can warn of upcoming risks immediately in the field and improve the safety of risk-frequent areas by assessing the safety levels of roads without actual collisions. This study focuses on the latter, introducing a new analytical framework for crosswalk safety assessment based on various vehicle/pedestrian behaviors and environmental features. These behavioral features are obtained from actual traffic video footage in the city through fully automatic processing. The proposed framework analyzes these behaviors from multi-dimensional perspectives by constructing a data cube structure that combines a Long Short-Term Memory (LSTM)-based predictive collision risk (PCR) estimation model with on-line analytical processing (OLAP) operations. From the PCR estimation model, risk severity is categorized into four levels: "relatively safe," "caution," "warning," and "danger," and the framework is applied to assess crosswalk safety using the behavioral features. The framework yields a range of descriptive results, but the analytic experiments focus on two scenarios: the movement patterns of vehicles and pedestrians by road environment, and the relationships between risk levels and car speeds. The proposed framework can thus support decision-makers (e.g., urban planners, safety administrators) by providing valuable information for improving pedestrian safety, and it can further the understanding of vehicles' and pedestrians' proactive behavior near crosswalks. To confirm its feasibility and applicability, the framework was implemented and applied to CCTVs operating in Osan City, Republic of Korea.
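Two pieces of the framework lend themselves to a compact sketch: a sequence model that maps recent vehicle/pedestrian features to a collision-risk score, and the mapping of that score onto the four severity levels named in the abstract. The PyTorch fragment below is a hypothetical illustration; the feature layout, network size, and level cut-points are assumptions, not the authors' trained model.

```python
# Minimal sketch: an LSTM maps a short trajectory sequence to a risk score in
# [0, 1], which is then binned into the four severity levels named in the
# abstract. Feature layout, network size, and cut-points are hypothetical.
import torch
import torch.nn as nn

LEVELS = ["relatively safe", "caution", "warning", "danger"]

class PCRModel(nn.Module):
    def __init__(self, n_features=6, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, time, features)
        _, (h, _) = self.lstm(x)
        return torch.sigmoid(self.head(h[-1])).squeeze(-1)  # risk in [0, 1]

def risk_level(score: float) -> str:
    cuts = (0.25, 0.5, 0.75)           # hypothetical cut-points
    return LEVELS[sum(score >= c for c in cuts)]

if __name__ == "__main__":
    model = PCRModel()
    seq = torch.randn(1, 20, 6)        # 20 frames of vehicle/pedestrian features
    score = model(seq).item()
    print(score, risk_level(score))
```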
Affiliation(s)
- Byeongjoon Noh
- Applied Science Research Institute, Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon, Republic of Korea.
- Hansaem Park
- Department of Civil and Environmental Engineering, Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon, Republic of Korea.
- Hwasoo Yeo
- Department of Civil and Environmental Engineering, Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon, Republic of Korea.
11. The Application of Augmented Reality in the Automotive Industry: A Systematic Literature Review. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10124259]
Abstract
Augmented reality (AR) is a relatively new technology that enables human-machine interaction by superimposing virtual information on a real environment, with potential applications across many areas of recent research. This study presents a systematic review of existing AR systems in the automotive field, synthesizing 55 studies from 2002 to 2019. The main research questions are: where has AR technology been applied within the automotive industry, what is the purpose of its application, what are the general characteristics of these systems, and what are the emphasized benefits and challenges of using AR in this field? The aim of this paper is to provide insight into AR applications and technologies in the automotive field.
12. Hachaj T, Piekarczyk M. Evaluation of Pattern Recognition Methods for Head Gesture-Based Interface of a Virtual Reality Helmet Equipped with a Single IMU Sensor. SENSORS (BASEL, SWITZERLAND) 2019; 19:5408. [PMID: 31817991] [PMCID: PMC6960875] [DOI: 10.3390/s19245408]
Abstract
The motivation of this paper is to examine the effectiveness of state-of-the-art and newly proposed motion capture pattern recognition methods in the task of head gesture classification. The head gestures are designed for a user interface that utilizes a virtual reality helmet equipped with an inertial measurement unit (IMU) with a 6-axis accelerometer and gyroscope. We validate a classifier that uses Principal Component Analysis (PCA)-based features with various numbers of dimensions, a two-stage PCA-based method, a feedforward artificial neural network, and a random forest. We also propose a Dynamic Time Warping (DTW) classifier trained with an extension of the DTW Barycenter Averaging (DBA) algorithm that utilizes quaternion averaging, as well as a bagged variation (DTWb) in which many DTW classifiers vote. The evaluation was performed on 975 head-gesture recordings in seven classes acquired from 12 persons. The highest recognition rate in a leave-one-out test was obtained for DTWb: 0.975, which is 0.026 better than the best of the state-of-the-art methods compared. Among the most important applications of the proposed method is improving quality of life for people who are disabled below the neck, for example by supporting an assistive autonomous power chair with a head-gesture interface or remote-controlled interfaces in robotics.
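The dynamic programming core behind the DTW classifiers evaluated here is compact enough to sketch. The fragment below shows plain nearest-template classification by DTW distance; the quaternion-based DBA template averaging and the bagging/voting extension are omitted, and the gesture data are illustrative.

```python
# Sketch of nearest-template gesture classification by DTW distance. The
# quaternion barycenter averaging used to train templates is omitted;
# sequences here are plain feature vectors, and the data are illustrative.
import math

def dtw(a, b):
    """Classic DTW distance between two sequences of equal-length vectors."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def classify(sample, templates):
    """templates: {gesture_name: sequence}; return the nearest class by DTW."""
    return min(templates, key=lambda name: dtw(sample, templates[name]))

if __name__ == "__main__":
    nod = [(0.0, 0.1), (0.0, 0.5), (0.0, 0.1)]
    shake = [(0.5, 0.0), (-0.5, 0.0), (0.5, 0.0)]
    probe = [(0.0, 0.12), (0.0, 0.45), (0.05, 0.1)]
    print(classify(probe, {"nod": nod, "shake": shake}))  # -> "nod"
```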
13. Merenda C, Kim H, Tanous K, Gabbard JL, Feichtl B, Misu T, Suga C. Augmented Reality Interface Design Approaches for Goal-directed and Stimulus-driven Driving Tasks. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2018; 24:2875-2885. [PMID: 30235134] [DOI: 10.1109/tvcg.2018.2868531]
Abstract
The automotive industry is rapidly developing new in-vehicle technologies that can provide drivers with information to aid awareness and promote quicker response times. In particular, vehicles with augmented reality (AR) graphics delivered via head-up displays (HUDs) are nearing mainstream commercial feasibility and will be widely implemented over the next decade. Though AR graphics have been shown to provide tangible benefits to drivers in scenarios like forward collision warnings and navigation, they also create many new perceptual and sensory issues for drivers. For some time now, designers have focused on increasing the realism and quality of virtual graphics delivered via HUDs, and have recently begun testing more advanced 3D HUD systems that deliver volumetric spatial information to drivers. However, the realization of volumetric graphics adds further complexity to the design and delivery of AR cues; moreover, parameters in this new design space must be clearly and operationally defined and explored. In this work, we present two user studies that examine how driver performance and visual attention are affected by fixed versus animated AR HUD interface design approaches in driving scenarios that require top-down and bottom-up cognitive processing. Results demonstrate that animated design approaches can produce some driving gains (e.g., in goal-directed navigation tasks) but often come at the cost of longer response times and distances. Our discussion yields AR HUD design recommendations and challenges some existing assumptions of world-fixed conformal graphic approaches to design.