1
Zhong Q, Zhi J, Xu Y, Gao P, Feng S. Assessing driver distraction from in-vehicle information system: an on-road study exploring the effects of input modalities and secondary task types. Sci Rep 2024; 14:20289. PMID: 39217232; PMCID: PMC11366028; DOI: 10.1038/s41598-024-71226-4.
Abstract
In-vehicle information system (IVIS) use is prevalent among young adults, yet their interaction with IVIS is not well understood. This on-road study therefore explored the effects of input modalities and secondary task types on young drivers' secondary task performance, driving performance, and visual glance behavior. A 2 × 4 within-subject design was used. The independent variables were input modalities (auditory-speech and visual-manual) and secondary task types (calls, music, navigation, and radio). The dependent variables included secondary task performance (task completion time, number of errors, and SUS), driving performance (average speed, number of lane departure warnings, and NASA-TLX), and visual glance behavior (average glance duration, number of glances, total glance duration, and number of glances over 1.6 s). Statistical analysis showed a significant main effect of input modality, with more distraction during visual-manual than auditory-speech interaction. The main effect of secondary task type was also significant across most metrics, except average speed and average glance duration. Navigation and music were the most distracting, followed by calls, with radio last. The distracting effect of input modality was relatively stable and generally not moderated by secondary task type, except for radio tasks. The findings can inform driver-friendly human-machine interface design and help prevent IVIS-related distraction.
Affiliation(s)
- Qi Zhong, Jinyi Zhi, Yongsheng Xu, Pengfei Gao, Shu Feng: Department of Industrial Design, School of Design, Southwest Jiaotong University, Chengdu, 611756, China
2
Xie X, Li T, Xu S, Yu Y, Ma Y, Liu Z, Ji M. The Effects of Auditory Working Memory Task on Situation Awareness in Complex Dynamic Environments: An Eye-movement Study. Human Factors 2024; 66:1844-1859. PMID: 37529928; DOI: 10.1177/00187208231191389.
Abstract
OBJECTIVE This study investigated the effect of auditory working memory tasks on situation awareness (SA) and eye-movement patterns in complex dynamic environments. BACKGROUND Many human errors in aviation are caused by a lack of SA, and distraction from auditory secondary tasks is a serious threat to SA. However, it remains unclear how auditory working memory tasks affect SA and eye-movement patterns. METHOD Participants (n = 28) were randomly allocated to two groups and received different periods of visual search training (short versus long). They subsequently completed a situation awareness measurement task in three auditory secondary task conditions (no secondary task, auditory calculation task, and auditory 2-back task). Eye-movement data were collected during the situation awareness measurement task. RESULTS The auditory 2-back task significantly reduced overall SA, Level 1 SA, dwell times, and the total percentage of fixation time on task-related areas of interest in the SA measurement task. Overall SA and Level 3 SA were not reduced by the auditory 2-back task in individuals in the longer visual search training condition. CONCLUSION Auditory working memory load impairs SA in the perception and projection stages; however, greater experience can overcome the impairment of SA in the projection stage. APPLICATION This study suggests possible approaches to preventing loss of SA: (1) improving crew members' communication skills to ensure the accurate and clear transmission of information, reducing the difficulty of processing information, and (2) providing targeted cognitive training tailored to each pilot's level of experience.
Affiliation(s)
- Xudong Xie, Shuai Xu, Yingyue Yu, Yifeng Ma, Zhen Liu, Ming Ji: School of Psychology, Shaanxi Normal University, Xi'an, China; Key Laboratory for Behaviour and Cognitive Neuroscience of Shaanxi Province, Xi'an, China
- Tiantian Li: Northwest University of Political Science and Law, Xi'an, China
3
Houweling KP, Mallam SC, van de Merwe K, Nordby K. The effects of Augmented Reality on operator Situation Awareness and Head-Down Time. Applied Ergonomics 2024; 116:104213. PMID: 38154227; DOI: 10.1016/j.apergo.2023.104213.
Abstract
A lack of navigator Situation Awareness (SA) is one of the leading causes of maritime accidents. Visually observing the area surrounding a vessel remains a critical aspect and best practice of safe navigation for establishing and maintaining SA. Augmented Reality (AR) allows the placement of information in a user's field of view, which can encourage navigators to spend more time looking up at their external environment while still having access to operational data. However, empirical evidence on the impact of AR on maritime operations is limited. This paper investigates the effects of AR on navigator SA and Head-Down Time (HDT) using a within-group quasi-experimental design. Seventeen licensed navigators and nautical students analysed twelve navigation scenarios in a maritime training simulator: six non-AR (control) and six AR (experimental). SA was measured via SAGAT scores for each scenario and via the SA-SWORD to compare preferences. Each scenario was video recorded and analysed for each participant's total HDT and head-down occurrences. Results showed that the addition of AR significantly reduced participant HDT (by a factor of 2.67) and head-down occurrences (by 62%) compared with navigation scenarios without AR. However, AR did not significantly improve mean SA. This study contributes to the limited empirical data on the effects of AR on operator performance, demonstrating the potential value of AR for facilitating increased head-up time during maritime navigation, which in turn could improve safety at sea.
Affiliation(s)
- Koen Pieter Houweling, Koen van de Merwe: Department of Maritime Operations, University of South-Eastern Norway, Borre, Norway; Group Research and Development, DNV, Høvik, Norway
- Steven C Mallam: Department of Maritime Operations, University of South-Eastern Norway, Borre, Norway; Fisheries & Marine Institute, Memorial University of Newfoundland, St. John's, Canada
- Kjetil Nordby: Institute of Design, The Oslo School of Architecture and Design, Oslo, Norway
4
Postelnicu CC, Boboc RG. Extended reality in the automotive sector: A bibliometric analysis of publications from 2012 to 2022. Heliyon 2024; 10:e24960. PMID: 38312558; PMCID: PMC10835006; DOI: 10.1016/j.heliyon.2024.e24960.
Abstract
The present study presents a bibliometric analysis of publications related to "Extended Reality" (XR) in the automotive sector. XR is transforming industry across many fields, and automotive is one of the sectors with the most to gain from this technology and its components (Virtual Reality, Augmented Reality, Mixed Reality). Articles on XR in the automotive field published from 2012 to 2022 were retrieved from the Scopus database. Extracted items were analysed in terms of document type, document language, year of publication, country, authors, affiliations, sources, citations, keywords, and research domains. The open-source tool VOSviewer was used to visualize trends in research on XR applied to the automotive sector. The analysis of 1584 documents revealed that the total number of publications has continually increased over the last 11 years. The country producing most of the articles in this field was Germany, followed by the United States and China. The most productive journal is Transportation Research Part F: Traffic Psychology and Behaviour, and the institution that issued the most articles is the Technical University of Munich. From the analysis of author keywords, the prominent research areas currently involving XR technologies in automotive can be highlighted: virtual prototyping, design, manufacturing, sales, training, driver or pedestrian behaviour analysis, and ergonomics. More recently, terms like artificial intelligence and autonomous vehicles have been used more frequently in studies in the field. Using bibliometric methods, the current study reveals an expanding corpus of literature on XR-based applications for the automotive sector. Researchers and stakeholders can use this study as a useful reference to comprehend the big picture and the state of the art in this area.
Affiliation(s)
- Cristian-Cezar Postelnicu, Răzvan Gabriel Boboc: Department of Automotive and Transport Engineering, Transilvania University of Brașov, 29 Eroilor Blvd., 500036 Brasov, Romania
5
Shi Y, Wu W. Multimodal non-invasive non-pharmacological therapies for chronic pain: mechanisms and progress. BMC Med 2023; 21:372. PMID: 37775758; PMCID: PMC10542257; DOI: 10.1186/s12916-023-03076-2.
Abstract
BACKGROUND Chronic pain conditions impose significant burdens worldwide. Pharmacological treatments like opioids have limitations. Non-invasive non-pharmacological therapies (NINPT) encompass diverse interventions including physical, psychological, complementary and alternative approaches, and other innovative techniques that provide analgesic options for chronic pain without medications. MAIN BODY This review elucidates the mechanisms of major NINPT modalities and synthesizes evidence for their clinical potential across chronic pain populations. NINPT leverages peripheral, spinal, and supraspinal mechanisms to restore normal pain processing and limit central sensitization. However, heterogeneity in treatment protocols and individual responses warrants optimization through precision medicine approaches. CONCLUSION Future adoption of NINPT requires addressing limitations in standardization and accessibility as well as synergistic combination with emerging therapies. Overall, this review highlights the promise of NINPT as a valuable complementary option ready for integration into contemporary pain medicine paradigms to improve patient care and outcomes.
Affiliation(s)
- Yu Shi, Wen Wu: Department of Rehabilitation, Zhujiang Hospital, Southern Medical University, Guangzhou, 510282, China
6
Woodward J, Ruiz J. Analytic Review of Using Augmented Reality for Situational Awareness. IEEE Transactions on Visualization and Computer Graphics 2023; 29:2166-2183. PMID: 35007195; DOI: 10.1109/tvcg.2022.3141585.
Abstract
Situational awareness is the perception and understanding of the surrounding environment. Maintaining situational awareness is vital for performance and error prevention in safety critical domains. Prior work has examined applying augmented reality (AR) to the context of improving situational awareness, but has mainly focused on the applicability of using AR rather than on information design. Hence, there is a need to investigate how to design the presentation of information, especially in AR headsets, to increase users' situational awareness. We conducted a Systematic Literature Review to research how information is currently presented in AR, especially in systems that are being utilized for situational awareness. Comparing current presentations of information to existing design recommendations aided in identifying future areas of design. In addition, this survey further discusses opportunities and challenges in applying AR to increasing users' situational awareness.
7
Chen W, Song J, Wang Y, Wu C, Ma S, Wang D, Yang Z, Li H. Inattentional blindness to unexpected hazard in augmented reality head-up display assisted driving: The impact of the relative position between stimulus and augmented graph. Traffic Injury Prevention 2023; 24:344-351. PMID: 36939683; DOI: 10.1080/15389588.2023.2186735.
Abstract
OBJECTIVE An augmented reality head-up display (AR-HUD) is a promising technology for assisted driving that provides additional information in the driving environment. However, given registration problems related to the limitations of interactive technology, an AR-HUD may not recognize unpredictable stimuli in a timely manner, inducing inattentional blindness to these non-augmented stimuli. In practice, non-augmented stimuli may accidentally be briefly superimposed on AR graphics, which may also influence the rate of inattentional blindness. Thus, this study examined the problem of inattentional blindness in AR-HUD systems that may result from the immaturity of AR technology. METHOD We investigated the impact of AR graphic position (peripheral AOI vs. central AOI) and the relative position of the AR graphic to unpredictable stimuli (on-HUD hazard vs. off-HUD hazard) on the occurrence of inattentional blindness. Thirty participants watched an AR-augmented driving video that included four augmented conditions. Participants were instructed to respond to four critical events (speeding, running of red lights, unexpected pedestrians, or motorcycles). The rate of inattentional blindness and response time were recorded. We analyzed only the data on unexpected pedestrian and motorcycle incidents. RESULTS The relative position of the AR graphic to unpredictable stimuli and the AR graphic position significantly affected the rate of inattentional blindness and response time. Drivers had a higher rate of inattentional blindness to unpredictable stimuli briefly superimposed on the AR graphic (i.e., on-HUD hazards) in the peripheral visual field (i.e., peripheral AOI). Drivers also exhibited a higher rate of inattentional blindness to unpredictable stimuli outside the AR graphic (i.e., off-HUD hazards) in the central visual field (i.e., central AOI).
CONCLUSION The study is expected to inform the design of AR-HUD-assisted systems that reduce inattentional blindness in driving. Our results showed that, in the peripheral visual field, unpredictable stimuli accidentally superimposed on the AR graphic (i.e., on-HUD hazards) led to a higher probability of missing the events and appeared to require longer response times from drivers. This study illustrates that inattentional blindness to non-augmented stimuli is also influenced by the AR graphic position when AR technology fails to augment them in a timely manner. An important recommendation emerging from this work is to design AR graphics according to graphic position and stimulus type to reduce the occurrence of inattentional blindness.
Affiliation(s)
- Wanting Chen, Jiaqing Song, Yuwei Wang, Shu Ma, Duming Wang, Zhen Yang: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Changxu Wu: Department of Industrial Engineering, Tsinghua University, Beijing, China
- Hongting Li: Institute of Applied Psychology, College of Education, Zhejiang University of Technology, Hangzhou, China
8
de Bougrenet de la Tocnaye JL. Restored vision-augmented vision: arguments for a cybernetic vision. C R Biol 2022; 345:135-156. PMID: 36847468; DOI: 10.5802/crbiol.102.
Abstract
In this paper, we present some thoughts on recent developments, made possible by technological advances and the miniaturisation of connected visual prostheses, that interface with the visual system at different levels, on the retina as well as in the visual cortex. While these devices represent a great hope for people with impaired vision to recover partial vision, we show how this technology could also act on the functional vision of well-sighted persons to improve or augment their visual performance. Beyond the impact on our cognitive and attentional mechanisms, such an intervention, when it originates outside the natural visual field (i.e., cybernetics), raises a number of questions about the development and use of such implants or prostheses in the future.
9
Jing C, Shang C, Yu D, Chen Y, Zhi J. The impact of different AR-HUD virtual warning interfaces on the takeover performance and visual characteristics of autonomous vehicles. Traffic Injury Prevention 2022; 23:277-282. PMID: 35442130; DOI: 10.1080/15389588.2022.2055752.
Abstract
OBJECTIVE The objective of this study was to determine the effects of an arrow-pointing augmented reality head-up display (AR-HUD) interface, a virtual shadow AR-HUD interface, and a non-AR-HUD interface on autonomous vehicle takeover efficiency and driver eye movement characteristics in different driving scenarios. METHODS Thirty-six participants carried out a simulated driving experiment, and eye movement indexes and takeover time were analyzed. RESULTS Compared with the non-AR-HUD interface, both the arrow-pointing and virtual shadow AR-HUD interfaces effectively reduced the driver's visual distraction, improved the efficiency of obtaining visual information, reduced the number of times the driver's eyes left the road, and improved takeover efficiency. However, there was no significant difference in eye movement indexes between the arrow-pointing AR-HUD interface and the more eye-catching virtual shadow AR-HUD interface. When specific scenarios were considered, in the scenario of emergency braking of the vehicle in front, the arrow-pointing and virtual shadow AR-HUD interfaces had an advantage in takeover efficiency over the non-AR-HUD interface. However, in the scenarios of a rear vehicle overtaking the vehicle ahead and non-motor vehicles running red lights, there was no significant difference in takeover efficiency. For the non-motor vehicle invading the lane, emergency U-turn of the vehicle in front, and pedestrian crossing scenarios, the virtual shadow AR-HUD interface had the highest takeover efficiency. CONCLUSIONS These results can help improve the active safety of autonomous vehicle AR-HUD interfaces.
Affiliation(s)
- Chunhui Jing, Dongyu Yu, Yaodong Chen, Jinyi Zhi: Department of Industrial Design, Southwest Jiaotong University, Chengdu, China
- Chenguang Shang: Intelligent Research Institute of Chongqing Changan Automobile Co., Ltd, Chongqing, China
10
Matias J, Belletier C, Izaute M, Lutz M, Silvert L. The role of perceptual and cognitive load on inattentional blindness: A systematic review and three meta-analyses. Q J Exp Psychol (Hove) 2021; 75:1844-1875. PMID: 34802311; DOI: 10.1177/17470218211064903.
Abstract
The inattentional blindness phenomenon refers to situations in which a visible but unexpected stimulus remains consciously unnoticed by observers. This phenomenon is classically explained as the consequence of insufficient attention, because attentional resources are already engaged elsewhere or vary between individuals. However, this attentional-resources view is broad and often imprecise regarding the variety of attentional models, the different pools of resources that can be involved in attentional tasks, and the heterogeneity of the experimental paradigms. Our aim was to investigate whether a classic theoretical model of attention, namely the Load Theory, could account for a large range of empirical findings in this field by distinguishing the role of perceptual and cognitive resources in attentional selection and attentional capture by irrelevant stimuli. As this model has been mostly built on implicit measures of distractor interference, it is unclear whether its predictions also hold when explicit and subjective awareness of an unexpected stimulus is concerned. Therefore, we conducted a systematic review and meta-analyses of inattentional blindness studies investigating the role of perceptual and/or cognitive resources. The results reveal that, in line with the perceptual account of the Load Theory, inattentional blindness significantly increases with the perceptual load of the task. However, the cognitive account of this theory is not clearly supported by the empirical findings analysed here. Furthermore, the interaction between perceptual and cognitive load on inattentional blindness remains understudied. Theoretical implications for the Load Theory are discussed, notably regarding the difference between attentional capture and subjective awareness paradigms, and further research directions are provided.
Affiliation(s)
- Jérémy Matias, Clément Belletier, Marie Izaute, Laetitia Silvert: Laboratoire de Psychologie Sociale et Cognitive (LAPSCO), Université Clermont Auvergne-CNRS, Clermont-Ferrand, France
- Matthieu Lutz: Innovation Procédés Industriels, Michelin Recherche et Développement, Clermont-Ferrand, France
11
Calvi A, D'Amico F, Ferrante C, Bianchini Ciampoli L. Evaluation of augmented reality cues to improve the safety of left-turn maneuvers in a connected environment: A driving simulator study. Accident Analysis & Prevention 2020; 148:105793. PMID: 33017731; DOI: 10.1016/j.aap.2020.105793.
Abstract
Left-turns are some of the most dangerous maneuvers drivers face, as they involve a complex decision-making process: drivers must wait for an adequate gap in oncoming traffic to safely complete the maneuver. Incorrectly assessed gaps can lead to severe crashes and significant traffic delays at intersections. This study tests the potential of Augmented Reality (AR) technology, built into connected vehicle technology, to improve the safety of left-turn maneuvers of connected vehicles by adding virtual visual information for the driver. To achieve this goal, a driving simulator study was carried out. The effectiveness of the system was tested, and the ability of young drivers to detect adequate gaps between vehicles in the opposite lane (with right of way) to safely turn left was assessed with and without AR warnings at a two-way stop-controlled intersection in a connected vehicles environment. In the scenario projected on the simulation screen, three different virtual warnings were displayed and tested: a green/red traffic light, which informs the driver of the availability of an appropriate gap between opposing vehicles; a traffic light with a timer showing the number of seconds available to safely perform the left-turn maneuver; and a traffic light with an additionally activated audio warning system. Significant positive effects of AR warnings on driving performance and traffic safety were observed: the number of safe left-turns increased and the delays at the intersection decreased. In addition, AR signaling improved driving behavior both during the waiting time, with many more drivers waiting for the gap in front of the stop line to avoid disrupting oncoming traffic, and during the turning movement, reducing the average time it took to complete the left-turn maneuver. This study confirms the great potential of AR and connected vehicle technologies to improve general safety conditions on the road network, especially in risky situations and difficult maneuvers.
Affiliation(s)
- Alessandro Calvi, Fabrizio D'Amico, Chiara Ferrante: Department of Engineering, Roma Tre University, Via Vito Volterra 62, 00146, Rome, Italy
12
Aromaa S, Väätänen A, Aaltonen I, Goriachev V, Helin K, Karjalainen J. Awareness of the real-world environment when using augmented reality head-mounted display. Applied Ergonomics 2020; 88:103145. PMID: 32421637; DOI: 10.1016/j.apergo.2020.103145.
Abstract
Augmented reality (AR) systems are becoming common tools in industrial workplaces. However, factory workers remain concerned about whether head-mounted display (HMD)-based AR systems distract their awareness of the environment and therefore pose safety risks. The purpose of this study was to assess users' experience of real-world awareness when using an AR system. Nineteen study participants played a wooden block logic game in a laboratory with three different setups: real, AR, and virtual reality (VR). Based on this study, it can be concluded that HMD-based AR systems do not decrease users' awareness of their surroundings if the virtual content is minimal and the task is done while seated. However, more research in this area with more interactive virtual content is required. This study is an important step in understanding how AR may affect future work in industrial and safety-critical environments.
Affiliation(s)
- Susanna Aromaa, Antti Väätänen, Iina Aaltonen, Vladimir Goriachev, Kaj Helin, Jaakko Karjalainen: VTT Technical Research Centre of Finland Ltd, P.O. Box 1300, Visiokatu 4, 33101, Tampere, Finland