1
Forster Y, Schoemig N, Kremer C, Wiedemann K, Gary S, Naujoks F, Keinath A, Neukum A. Attentional warnings caused by driver monitoring systems: How often do they appear and how well are they understood? Accid Anal Prev 2024; 205:107684. [PMID: 38945045] [DOI: 10.1016/j.aap.2024.107684]
Abstract
The present study investigated the effects of a driver monitoring system that triggers attention warnings when distraction is detected. Following the EuroNCAP protocol, distraction was defined either as a long glance away from the forward roadway (≥3 s) or as visual attention time sharing (>10 cumulative seconds of off-road glances within a 30 s time interval). In a series of manual driving simulator drives, 30 participants completed both driving-related tasks (e.g., changing multiple lanes in dense traffic) and non-driving-related tasks (e.g., infotainment operations). Visual attention time sharing warnings occurred more frequently than long-distraction warnings, and a large number of attention warnings were triggered during driving-related tasks. Participants' mental models of the visual attention time sharing warnings also tended to be less accurate than their mental models of the long-distraction warnings. Based on these observations, the work discusses the applicability and design of driver monitoring warnings.
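The two EuroNCAP-style distraction criteria summarized above can be sketched as simple checks over a log of off-road glances. This is an illustrative sketch, not the system described in the paper; the function names, the example data, and the exact windowing are assumptions.

```python
# Sketch of the two EuroNCAP-style distraction criteria, applied to
# off-road glances given as (start, end) times in seconds.

LONG_GLANCE_S = 3.0     # single off-road glance lasting >= 3 s
VATS_CUM_S = 10.0       # > 10 cumulative off-road seconds ...
VATS_WINDOW_S = 30.0    # ... within a 30 s time interval

def long_distraction_warnings(glances):
    """Return the glances that individually meet the long-glance criterion."""
    return [g for g in glances if g[1] - g[0] >= LONG_GLANCE_S]

def vats_warning(glances, t):
    """True if cumulative off-road time in the 30 s window ending at t exceeds 10 s."""
    window_start = t - VATS_WINDOW_S
    total = 0.0
    for start, end in glances:
        overlap = min(end, t) - max(start, window_start)  # clip glance to window
        if overlap > 0:
            total += overlap
    return total > VATS_CUM_S

glances = [(0.0, 2.0), (5.0, 9.5), (12.0, 17.0)]  # hypothetical glance log
print(long_distraction_warnings(glances))  # [(5.0, 9.5), (12.0, 17.0)]
print(vats_warning(glances, 20.0))         # 2.0 + 4.5 + 5.0 = 11.5 s -> True
```

A cumulative criterion of this second kind makes the study's warning-frequency result plausible: many short glances can trigger it without any single glance being long.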
Affiliation(s)
- Nadja Schoemig
- WIVW GmbH (Wuerzburg Institute for Traffic Sciences), Germany
- Sebastian Gary
- WIVW GmbH (Wuerzburg Institute for Traffic Sciences), Germany
2
Deffler RA, Cooley SSL, Kohl HA, Raasch TW, Dougherty BE. Hazard Perception in Visually Impaired Drivers Who Use Bioptic Telescopes. Transl Vis Sci Technol 2024; 13:5. [PMID: 38869357] [PMCID: PMC11178119] [DOI: 10.1167/tvst.13.6.5]
Abstract
Purpose: Bioptic telescopic spectacles can allow individuals with central vision impairment to obtain or maintain driving privileges. The purpose of this study was to (1) compare hazard perception ability between bioptic drivers and traditionally licensed controls, (2) assess the impact of bioptic telescopic spectacles on hazard perception in drivers with vision impairment, and (3) analyze the relationships between vision and hazard detection in bioptic drivers. Methods: Visual acuity, contrast sensitivity, and visual field were measured for each participant. All drivers completed the Driving Habits Questionnaire. Hazard perception testing was conducted using commercially available first-person video driving clips, with subjects signaling when they could first identify a traffic hazard requiring a change of speed or direction. Bioptic drivers were tested with and without their bioptic telescopes in alternating blocks. Hazard detection times for each clip were converted to z-scores, converted back to seconds using the average response time across all videos, and then compared among conditions. Results: Twenty-one bioptic drivers and 21 normally sighted controls participated. The hazard response time of bioptic drivers improved when they were able to use the telescope (5.4 ± 1.4 seconds vs. 6.3 ± 1.8 seconds without it) but remained significantly longer than that of controls (4.0 ± 1.4 seconds). Poorer visual acuity, poorer contrast sensitivity, and superior visual field sensitivity loss were related to longer hazard response times. Conclusions: Drivers with central vision loss had improved hazard response times with the use of bioptic telescopic spectacles, although their responses were still slower than those of normally sighted control drivers. Translational Relevance: The use of a bioptic telescope by licensed, visually impaired drivers improves their hazard detection speed on a video-based task, lending support to their use on the road.
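The normalization described above, z-scoring each clip's detection times and mapping them back onto a seconds scale via the grand mean and SD across all videos, can be sketched as follows. The paper's exact procedure may differ; the function name and data layout are assumptions for illustration.

```python
# Hedged sketch: per-clip z-scoring removes clip-difficulty differences,
# then the grand mean/SD across all clips restores a seconds scale.
import statistics

def normalize_times(times_by_clip):
    """times_by_clip: {clip_id: [response times in s]} -> {clip_id: [normalized s]}."""
    all_times = [t for ts in times_by_clip.values() for t in ts]
    grand_mean = statistics.mean(all_times)
    grand_sd = statistics.stdev(all_times)
    out = {}
    for clip, ts in times_by_clip.items():
        m, s = statistics.mean(ts), statistics.stdev(ts)
        # z-score within the clip, then rescale to the grand distribution
        out[clip] = [grand_mean + grand_sd * (t - m) / s for t in ts]
    return out
```

After this transform, every clip's normalized times share the grand mean, so comparisons across viewing conditions are not confounded by which clips happened to be harder.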
Affiliation(s)
- San-San L. Cooley
- College of Optometry, The Ohio State University, Columbus, Ohio, USA
- Halea A. Kohl
- College of Optometry, The Ohio State University, Columbus, Ohio, USA
- Thomas W. Raasch
- College of Optometry, The Ohio State University, Columbus, Ohio, USA
3
Biebl B, Kuhn M, Stolle F, Xu J, Bengler K, Bowers AR. Knowing me, knowing you: A study on top-down requirements for compensatory scanning in drivers with homonymous visual field loss. PLoS One 2024; 19:e0299129. [PMID: 38427630] [PMCID: PMC10906860] [DOI: 10.1371/journal.pone.0299129]
Abstract
OBJECTIVE It is currently unknown why some drivers with visual field loss compensate well for their visual impairment while others adopt ineffective strategies. This paper contributes to the methodological investigation of the associated top-down mechanisms and aims to validate a theoretical model of the requirements for successful compensation among drivers with homonymous visual field loss. METHODS A driving simulator study was conducted with eight participants with homonymous visual field loss and eight participants with normal vision. Participants drove through an urban environment and experienced a baseline scenario as well as scenarios with visual precursors indicating an increased likelihood of crossing hazards. Novel measures were developed to assess the mental model of one's own visual abilities, the mental model of the driving scene, and the perceived attention demand, and were used to investigate the top-down mechanisms behind attention allocation and hazard avoidance. RESULTS Participants who overestimated their visual field size tended to prioritize their seeing side over their blind side in both subjective and objective measures. The mental model of the driving scene was closely related to subjective and actual attention allocation. Although participants with homonymous visual field loss used the visual precursors less anticipatorily and performed more poorly than participants with normal vision, the results indicate a stronger reliance on top-down mechanisms in drivers with visual impairments. A subjective focus on the seeing side or on the near periphery was more frequently associated with poor performance in terms of collisions with crossing cyclists. CONCLUSION The study yielded promising indicators of the potential of these novel measures to elucidate top-down mechanisms in drivers with homonymous visual field loss. Furthermore, the results largely support the model of requirements for successful compensatory scanning. The findings highlight the importance of individualized interventions and driver assistance systems tailored to address these mechanisms.
Affiliation(s)
- Bianca Biebl
- Chair of Ergonomics, TUM School of Engineering and Design, Technical University of Munich, Garching, Germany
- Max Kuhn
- Chair of Ergonomics, TUM School of Engineering and Design, Technical University of Munich, Garching, Germany
- Franziska Stolle
- Chair of Ergonomics, TUM School of Engineering and Design, Technical University of Munich, Garching, Germany
- Jing Xu
- Schepens Eye Research Institute of Mass Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, United States of America
- Klaus Bengler
- Chair of Ergonomics, TUM School of Engineering and Design, Technical University of Munich, Garching, Germany
- Alex R. Bowers
- Schepens Eye Research Institute of Mass Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, United States of America
4
Wyche NJ, Edwards M, Goodhew SC. An updating-based working memory load alters the dynamics of eye movements but not their spatial extent during free viewing of natural scenes. Atten Percept Psychophys 2024; 86:503-524. [PMID: 37468789] [PMCID: PMC10805812] [DOI: 10.3758/s13414-023-02741-1]
Abstract
The relationship between spatial deployments of attention and working memory load is an important topic of study, with clear implications for real-world tasks such as driving. Previous research has generally shown that attentional breadth broadens under higher load, while exploratory eye-movement behaviour also appears to change with increasing load. However, relatively little research has compared the effects of working memory load on different kinds of spatial deployment, especially in conditions that require updating of the contents of working memory rather than simple retrieval. The present study undertook such a comparison by measuring participants' attentional breadth (via an undirected Navon task) and their exploratory eye-movement behaviour (a free-viewing recall task) under low and high updating working memory loads. While spatial aspects of task performance (attentional breadth, and peripheral extent of image exploration in the free-viewing task) were unaffected by the load manipulation, the exploratory dynamics of the free-viewing task (including fixation durations and scan-path lengths) changed under increasing load. These findings suggest that temporal dynamics, rather than the spatial extent of exploration, are the primary mechanism affected by working memory load during the spatial deployment of attention. Further, individual differences in exploratory behaviour were observed on the free-viewing task: all metrics were highly correlated across working memory load blocks. These findings suggest a need for further investigation of individual differences in eye-movement behaviour; potential factors associated with these individual differences, including working memory capacity and persistence versus flexibility orientations, are discussed.
Affiliation(s)
- Nicholas J Wyche
- Research School of Psychology (Building 39), The Australian National University, Canberra, ACT, 2601, Australia
- Mark Edwards
- Research School of Psychology (Building 39), The Australian National University, Canberra, ACT, 2601, Australia
- Stephanie C Goodhew
- Research School of Psychology (Building 39), The Australian National University, Canberra, ACT, 2601, Australia
5
Pinto JDG, Papesh MH. High target prevalence may reduce the spread of attention during search tasks. Atten Percept Psychophys 2024; 86:62-83. [PMID: 38036870] [DOI: 10.3758/s13414-023-02821-2]
Abstract
Target prevalence influences many cognitive processes during visual search, including target detection, search efficiency, and item processing. The present research investigated whether target prevalence also affects the spread of attention during search. Relative to low-prevalence searches, high-prevalence searches typically yield higher fixation counts, particularly during target-absent trials. This may occur because attention spreads less widely around each fixation in high-prevalence than in low-prevalence searches. To test this, observers searched for targets within object arrays in Experiments 1 (free viewing) and 2 (gaze-contingent viewing). In Experiment 3, observers searched for targets in a rapid serial visual presentation (RSVP) stream at the center of the display while simultaneously processing occasional peripheral objects. Experiment 1 used fixation patterns to estimate attentional spread and revealed that attention was narrower during high-prevalence than during low-prevalence searches. This effect was weakened during gaze-contingent search (Experiment 2) but emerged again when eye movements were unnecessary in RSVP search (Experiment 3). These results suggest that, although task demands influence how attention is allocated across displays, attention may also narrow when searching for frequent targets.
6
Li Z, Yu B, Wang Y, Chen Y, Kong Y, Xu Y. A novel collision warning system based on the visual road environment schema: An examination from vehicle and driver characteristics. Accid Anal Prev 2023; 190:107154. [PMID: 37343457] [DOI: 10.1016/j.aap.2023.107154]
Abstract
Drivers pay unequal attention to different road environmental elements and regions of the visual field, which strongly influences their driving behavior. Existing collision warning systems, however, ignore these visual characteristics, which limits their performance. This study therefore proposes a novel collision warning system based on a visual road environment schema, designed to better support the avoidance of potential hazards in objects and areas that drivers' vision easily overlooks. To capture these visual characteristics, the visual road environment schema, consisting of a semantic layer, a scene depth layer, a sensitive layer, and a visual field layer, is built from several deep neural networks, enabling the recognition, quantification, and analysis of the road environment from the driver's visual perspective. The effectiveness of the system was verified in a driving simulator experiment using six indicators: warning distance, maximum lateral acceleration, maximum longitudinal deceleration, minimum time to collision, reaction time, and heart rate. Additionally, a grey target decision-making model was built to evaluate the system comprehensively. The results show that, compared with a traditional collision warning system, the proposed system performs significantly better: it discovers potential dangers earlier, gives timely warnings, enhances lateral stability and driving comfort, shortens reaction time, and relieves driver nervousness. By integrating drivers' visual characteristics into the collision warning system, this study helps optimize existing collision warning systems and promotes mutual understanding between intelligent vehicles and human drivers.
Affiliation(s)
- Zhiguo Li
- The Key Laboratory of Road and Traffic Engineering of the Ministry of Education, College of Transportation Engineering, Tongji University, 4800 Cao'an Highway, Shanghai 201804, China
- Bo Yu
- The Key Laboratory of Road and Traffic Engineering of the Ministry of Education, College of Transportation Engineering, Tongji University, 4800 Cao'an Highway, Shanghai 201804, China
- Yuan Wang
- College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310023, China
- Yuren Chen
- The Key Laboratory of Road and Traffic Engineering of the Ministry of Education, College of Transportation Engineering, Tongji University, 4800 Cao'an Highway, Shanghai 201804, China
- You Kong
- College of Transport and Communications, Shanghai Maritime University, No. 1550 Haigang Avenue, Lin'gang Xincheng, Pudong, Shanghai 201303, China
- Yueru Xu
- Intelligent Transportation System Research Center, Southeast University, Nanjing 211189, China
7
Grahn H, Kujala T, Taipalus T, Lee J, Lee JD. On the relationship between occlusion times and in-car glance durations in simulated driving. Accid Anal Prev 2023; 182:106955. [PMID: 36630858] [DOI: 10.1016/j.aap.2023.106955]
Abstract
Drivers have spare visual capacity while driving, and this capacity is often used to engage in secondary in-car tasks. Previous research has suggested that spare visual capacity could be estimated with the occlusion method. However, the relationship between drivers' occlusion times and their preferred in-car glance durations has not been investigated thoroughly enough to justify treating occlusion times as an estimate of spare visual capacity. We conducted a driving simulator experiment (N = 30) and investigated whether there is an association between drivers' occlusion times and in-car glance durations in a given driving scenario. Furthermore, we explored which factors and variables could explain the strength of the association. The findings suggest an association between occlusion time preferences and in-car glance durations in visually and cognitively low-demand, unstructured tasks, but this association is lost when the in-car task is more demanding. These findings might be explained by the inability to use peripheral vision for lane keeping while conducting in-car tasks and/or by in-car task structures that override drivers' preferred glance durations. The occlusion technique could thus be used as an estimate of drivers' spare visual capacity in research, but with caution; it is strongly recommended to combine occlusion times with driving performance metrics. There is less spare visual capacity if that capacity is consumed by secondary tasks that interfere with the driver's ability to use peripheral vision for driving or with their preferred in-car glance durations. Nevertheless, we suggest that the occlusion method can be a valid way to control for inter-individual differences in in-car glance duration preferences when investigating the visual distraction potential of, for instance, in-vehicle infotainment systems.
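The association tested above can be illustrated with a per-driver correlation between occlusion times and mean in-car glance durations. The data values and function below are hypothetical, not taken from the study.

```python
# Minimal sketch: Pearson correlation between each driver's preferred
# occlusion time and their mean in-car glance duration (hypothetical data).
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

occlusion_times = [1.2, 1.6, 2.0, 2.4, 1.0]    # s, hypothetical per-driver values
glance_durations = [0.9, 1.1, 1.5, 1.7, 0.8]   # s, hypothetical per-driver means
r = pearson_r(occlusion_times, glance_durations)
```

The study's point is that such a correlation holds for low-demand tasks but weakens as the in-car task imposes its own glance structure.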
Affiliation(s)
- Hilkka Grahn
- University of Jyväskylä, Faculty of Information Technology, P.O. Box 35, FI-40014 University of Jyväskylä, Finland
- Tuomo Kujala
- University of Jyväskylä, Faculty of Information Technology, P.O. Box 35, FI-40014 University of Jyväskylä, Finland
- Toni Taipalus
- University of Jyväskylä, Faculty of Information Technology, P.O. Box 35, FI-40014 University of Jyväskylä, Finland
- Joonbum Lee
- University of Wisconsin-Madison, Department of Industrial and Systems Engineering, 1513 University Avenue, Madison, WI 53706, USA
- John D Lee
- University of Wisconsin-Madison, Department of Industrial and Systems Engineering, 1513 University Avenue, Madison, WI 53706, USA
8
Hu X, Yang J, Song Z, Wang Q, Chu Z, Zhang L, Lin D, Xu Y, Liang L, Yang WC. The perceived effects of augmented trail sensing and mood recognition abilities in a human-fish biohybrid system. Bioinspir Biomim 2022; 18:015008. [PMID: 36379063] [DOI: 10.1088/1748-3190/aca308]
Abstract
The use of technologies to enhance human and animal perception has been explored in pioneering research on artificial life and biohybrid systems. These attempts have revealed that augmented sensing abilities can emerge from new interactions between individuals within or across species. Nevertheless, the diverse effects of different augmented capabilities have rarely been examined and compared. In this work, we built a human-fish biohybrid system that enhanced the vision of ornamental fish by projecting the human participants onto the arena background. The human participants, in turn, were equipped with a mixed-reality device that visualized individual fish trails (representing situation-oriented perception) and emotions (representing communication-oriented perception). We investigated the impacts of the two enhanced perceptions on the human side and documented the perceived effects from three aspects. First, both augmented perceptions considerably increased participants' attention toward the ornamental fish, and the impact of emotion recognition was more potent than that of trail sensing. Second, the frequency of human-fish interactions increased with the equipped perceptions, and the mood recognition ability on the human side indirectly promoted the recorded positive mood of the fish. Third, most participants reported feeling closer to the fish with mood recognition ability, even when we introduced deliberate errors into the mood recognition; the addition of trail sensing did not produce a similar mental bond. These findings reveal several differences between the perceived effects of enhancing communication-oriented and situation-oriented perceptions.
Affiliation(s)
- Xin Hu
- Intelligent Systems Lab, NeuHelium Co., Ltd., Shanghai 200090, People's Republic of China
- Jinxin Yang
- Intelligent Systems Lab, NeuHelium Co., Ltd., Shanghai 200090, People's Republic of China
- Zhihua Song
- Intelligent Systems Lab, NeuHelium Co., Ltd., Shanghai 200090, People's Republic of China
- Qian Wang
- Intelligent Systems Lab, NeuHelium Co., Ltd., Shanghai 200090, People's Republic of China
- Ziyue Chu
- Intelligent Systems Lab, NeuHelium Co., Ltd., Shanghai 200090, People's Republic of China
- Lei Zhang
- Intelligent Systems Lab, NeuHelium Co., Ltd., Shanghai 200090, People's Republic of China
- Daoyuan Lin
- Intelligent Systems Lab, NeuHelium Co., Ltd., Shanghai 200090, People's Republic of China
- Yangyang Xu
- Intelligent Systems Lab, NeuHelium Co., Ltd., Shanghai 200090, People's Republic of China
- Longfei Liang
- Intelligent Systems Lab, NeuHelium Co., Ltd., Shanghai 200090, People's Republic of China
- Wen-Chi Yang
- Intelligent Systems Lab, NeuHelium Co., Ltd., Shanghai 200090, People's Republic of China
9
Ortiz-Peregrina S, Casares-López M, Castro-Torres JJ, Anera RG, Artal P. Effect of peripheral refractive errors on driving performance. Biomed Opt Express 2022; 13:5533-5550. [PMID: 36425634] [PMCID: PMC9664894] [DOI: 10.1364/boe.468032]
Abstract
The effect of peripheral refractive errors on driving while performing secondary tasks at 40° of eccentricity was studied in thirty-one young drivers. Participants drove a driving simulator under seven induced peripheral refractive conditions: baseline (0 D), spherical lenses of ±2 D and ±4 D, and cylindrical lenses of +2 D and +4 D. Peripheral visual acuity and contrast sensitivity were also evaluated at 40°. Driving performance was significantly impaired by the addition of myopic defocus (4 D) and astigmatism (4 D). Worse driving significantly correlated with worse contrast sensitivity over the route in general, and also with worse visual acuity when participants interacted with the secondary task. Induced peripheral refractive errors may therefore negatively affect driving when performing secondary tasks.
Affiliation(s)
- Sonia Ortiz-Peregrina
- Department of Optics, Laboratory of Vision Sciences and Applications, University of Granada, Granada 18071, Spain
- Miriam Casares-López
- Department of Optics, Laboratory of Vision Sciences and Applications, University of Granada, Granada 18071, Spain
- José J. Castro-Torres
- Department of Optics, Laboratory of Vision Sciences and Applications, University of Granada, Granada 18071, Spain
- Rosario G. Anera
- Department of Optics, Laboratory of Vision Sciences and Applications, University of Granada, Granada 18071, Spain
- Pablo Artal
- Laboratorio de Óptica, Universidad de Murcia, Campus de Espinardo, Murcia 30100, Spain
10
Wolfe JM, Kosovicheva A, Wolfe B. Normal blindness: when we Look But Fail To See. Trends Cogn Sci 2022; 26:809-819. [PMID: 35872002] [PMCID: PMC9378609] [DOI: 10.1016/j.tics.2022.06.006]
Abstract
Humans routinely miss important information that is 'right in front of our eyes', from overlooking typos in a paper to failing to see a cyclist in an intersection. Recent studies on these 'Looked But Failed To See' (LBFTS) errors point to a common mechanism underlying these failures, whether the missed item was an unexpected gorilla, the clearly defined target of a visual search, or that simple typo. We argue that normal blindness is the by-product of the limited-capacity prediction engine that is our visual system. The processes that evolved to allow us to move through the world with ease are virtually guaranteed to cause us to miss some significant stimuli, especially in important tasks like driving and medical image perception.
Affiliation(s)
- Jeremy M Wolfe
- Brigham and Women's Hospital, 900 Commonwealth Avenue, Boston, MA 02215, USA; Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA.
- Anna Kosovicheva
- Department of Psychology, University of Toronto Mississauga, 3359 Mississauga Road, Mississauga, Ontario L5L 1C6, Canada
- Benjamin Wolfe
- Department of Psychology, University of Toronto Mississauga, 3359 Mississauga Road, Mississauga, Ontario L5L 1C6, Canada
11
Yang S, Wilson K, Roady T, Kuo J, Lenné MG. Beyond gaze fixation: Modeling peripheral vision in relation to speed, Tesla Autopilot, cognitive load, and age in highway driving. Accid Anal Prev 2022; 171:106670. [PMID: 35429654] [DOI: 10.1016/j.aap.2022.106670]
Abstract
OBJECTIVE The study aims to model driver perception across the visual field in dynamic, real-world highway driving. BACKGROUND Peripheral vision acquires information across the visual field and guides a driver's information search. However, studies in naturalistic settings are lacking, with most research having been conducted in controlled simulation environments with limited eccentricities and driving dynamics. METHODS We analyzed data from 24 participants who drove a Tesla Model S with Autopilot on the highway. While driving, participants completed a peripheral detection task (PDT) using LEDs and the N-back task to generate cognitive load. The I-DT (identification by dispersion threshold) algorithm sampled naturalistic gaze fixations during PDTs to cover a broader, continuous spectrum of eccentricity. A generalized Bayesian regression model predicted LED detection probability during the PDT, as a surrogate for peripheral vision, in relation to eccentricity, vehicle speed, driving mode, cognitive load, and age. RESULTS The model predicted that LED detection probability was high and stable through near-peripheral vision but declined rapidly beyond 20°-30° eccentricity, showing a narrower useful field within a broader visual field (maximum 70°) during highway driving. Reduced speed (while following another vehicle), cognitive load, and older age were the main factors that degraded mid-peripheral vision (20°-50°), while using Autopilot had little effect. CONCLUSIONS Drivers can reliably detect objects through near-peripheral vision, but peripheral detection degrades gradually with greater eccentricity, foveal demand during low-speed vehicle following, cognitive load, and age. APPLICATIONS The findings encourage the development of further multivariate computational models to estimate peripheral vision and assess driver situation awareness for crash prevention.
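The I-DT algorithm mentioned above can be sketched as follows: a window of gaze samples counts as a fixation when its spatial dispersion (x range plus y range) stays under a threshold for at least a minimum duration. The thresholds, the sample-count stand-in for duration, and the data here are illustrative assumptions, not the study's settings.

```python
# Sketch of I-DT (identification by dispersion threshold) fixation detection.

def dispersion(window):
    """Dispersion of a window of (x, y) gaze samples: x range + y range."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(samples, disp_threshold, min_samples):
    """samples: list of (x, y); returns (start_idx, end_idx) fixation spans."""
    fixations = []
    i = 0
    while i + min_samples <= len(samples):
        j = i + min_samples
        if dispersion(samples[i:j]) <= disp_threshold:
            # grow the window while dispersion stays under the threshold
            while j < len(samples) and dispersion(samples[i:j + 1]) <= disp_threshold:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1
    return fixations

gaze = [(0, 0), (1, 1), (0, 1), (1, 0), (30, 30), (60, 55)]  # hypothetical samples
print(idt_fixations(gaze, disp_threshold=2, min_samples=3))  # [(0, 3)]
```

In the study, fixations identified this way anchored the gaze position against which LED eccentricity was measured during each detection probe.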
Affiliation(s)
- Shiyan Yang
- Seeing Machines, 80 Mildura St, Fyshwick 2609 ACT, Australia.
- Kyle Wilson
- Seeing Machines, 80 Mildura St, Fyshwick 2609 ACT, Australia; Department of Psychology, University of Huddersfield, West Yorkshire, UK
- Trey Roady
- Seeing Machines, 80 Mildura St, Fyshwick 2609 ACT, Australia
- Jonny Kuo
- Seeing Machines, 80 Mildura St, Fyshwick 2609 ACT, Australia
- Michael G Lenné
- Seeing Machines, 80 Mildura St, Fyshwick 2609 ACT, Australia
12
Abstract
Peripheral vision is fundamental for many real-world tasks, including walking, driving, and aviation. Nonetheless, there has been no effort to connect these applied literatures to research in peripheral vision in basic vision science or sports science. To close this gap, we analyzed 60 relevant papers, chosen according to objective criteria. Applied research, with its real-world time constraints, complex stimuli, and performance measures, reveals new functions of peripheral vision. Peripheral vision is used to monitor the environment (e.g., road edges, traffic signs, or malfunctioning lights), in ways that differ from basic research. Applied research uncovers new actions that one can perform solely with peripheral vision (e.g., steering a car, climbing stairs). An important use of peripheral vision is that it helps compare the position of one’s body/vehicle to objects in the world. In addition, many real-world tasks require multitasking, and the fact that peripheral vision provides degraded but useful information means that tradeoffs are common in deciding whether to use peripheral vision or move one’s eyes. These tradeoffs are strongly influenced by factors like expertise, age, distraction, emotional state, task importance, and what the observer already knows. These tradeoffs make it hard to infer from eye movements alone what information is gathered from peripheral vision and what tasks we can do without it. Finally, we recommend three ways in which basic, sport, and applied science can benefit each other’s methodology, furthering our understanding of peripheral vision more generally.
13
Nuthmann A, Canas-Bajo T. Visual search in naturalistic scenes from foveal to peripheral vision: A comparison between dynamic and static displays. J Vis 2022; 22:10. [PMID: 35044436] [PMCID: PMC8802022] [DOI: 10.1167/jov.22.1.10]
Abstract
How important foveal, parafoveal, and peripheral vision are depends on the task. For object search and letter search in static images of real-world scenes, peripheral vision is crucial for efficient search guidance, whereas foveal vision is relatively unimportant. Extending this research, we used gaze-contingent Blindspots and Spotlights to investigate visual search in complex dynamic and static naturalistic scenes. In Experiment 1, we used dynamic scenes only, whereas in Experiments 2 and 3, we directly compared dynamic and static scenes. Each scene contained a static, contextually irrelevant target (i.e., a gray annulus). Scene motion was not predictive of target location. For dynamic scenes, the search-time results from all three experiments converge on the novel finding that neither foveal nor central vision was necessary to attain normal search proficiency. Since motion is known to attract attention and gaze, we explored whether guidance to the target was equally efficient in dynamic as compared to static scenes. We found that the very first saccade was guided by motion in the scene. This was not the case for subsequent saccades made during the scanning epoch, representing the actual search process. Thus, effects of task-irrelevant motion were fast-acting and short-lived. Furthermore, when motion was potentially present (Spotlights) or absent (Blindspots) in foveal or central vision only, we observed differences in verification times for dynamic and static scenes (Experiment 2). When using scenes with greater visual complexity and more motion (Experiment 3), however, the differences between dynamic and static scenes were much reduced.
Affiliation(s)
- Antje Nuthmann: Institute of Psychology, Kiel University, Kiel, Germany; Psychology Department, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, UK. http://orcid.org/0000-0003-3338-3434
- Teresa Canas-Bajo: Vision Science Graduate Group, University of California, Berkeley, Berkeley, CA, USA; Psychology Department, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, UK
14
Ahlström C, Kircher K, Nyström M, Wolfe B. Eye Tracking in Driver Attention Research: How Gaze Data Interpretations Influence What We Learn. Front Neuroergon 2021; 2:778043. [PMID: 38235213] [PMCID: PMC10790828] [DOI: 10.3389/fnrgo.2021.778043] [Received: 09/16/2021] [Accepted: 11/01/2021]
Abstract
Eye tracking (ET) has been used extensively in driver attention research. Amongst other findings, ET data have increased our knowledge about what drivers look at in different traffic environments and how they distribute their glances when interacting with non-driving related tasks. Eye tracking is also the go-to method when determining driver distraction via glance target classification. At the same time, eye trackers are limited in the sense that they can only objectively measure the gaze direction. To learn more about why drivers look where they do, what information they acquire foveally and peripherally, how the road environment and traffic situation affect their behavior, and how their own expertise influences their actions, it is necessary to go beyond counting the targets that the driver foveates. In this perspective paper, we suggest a glance analysis approach that classifies glances based on their purpose. The main idea is to consider not only the intention behind each glance, but to also account for what is relevant in the surrounding scene, regardless of whether the driver has looked there or not. In essence, the old approaches, unaware as they are of the larger context or motivation behind eye movements, have taken us as far as they can. We propose this more integrative approach to gain a better understanding of the complexity of drivers' informational needs and how they satisfy them in the moment.
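The shift the authors propose, from merely counting foveated targets to relating each glance to what is currently relevant in the scene, can be illustrated with a toy sketch. The AOI labels and the function name here are invented for illustration; the paper argues for purpose-based glance classification but does not prescribe this implementation:

```python
def relevance_report(glance_targets, relevant_aois):
    """Contrast what the driver foveated with what was relevant.

    glance_targets: ordered glance labels for an epoch,
        e.g. ["road", "left_mirror", "phone"].
    relevant_aois: the set of AOIs deemed relevant in the current
        traffic situation, whether or not the driver looked there.
    Returns (sampled_relevant, missed_relevant, irrelevant_glances).
    """
    glanced = set(glance_targets)
    sampled = glanced & set(relevant_aois)          # relevant and looked at
    missed = set(relevant_aois) - glanced           # relevant but never foveated
    irrelevant = [g for g in glance_targets if g not in relevant_aois]
    return sampled, missed, irrelevant

sampled, missed, irrelevant = relevance_report(
    ["road", "phone", "road"], relevant_aois={"road", "left_mirror"})
# sampled: {"road"}; missed: {"left_mirror"}; irrelevant glances: ["phone"]
```

The key design point, echoing the abstract, is that `missed` is computable only if the analysis encodes scene relevance independently of where the driver actually looked.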
Affiliation(s)
- Christer Ahlström: Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden; Department of Biomedical Engineering, Linköping University, Linköping, Sweden
- Katja Kircher: Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden
- Benjamin Wolfe: Department of Psychology, University of Toronto Mississauga, Mississauga, ON, Canada
15
The role of eye movements in perceiving vehicle speed and time-to-arrival at the roadside. Sci Rep 2021; 11:23312. [PMID: 34857779] [PMCID: PMC8640052] [DOI: 10.1038/s41598-021-02412-x] [Received: 02/08/2021] [Accepted: 11/09/2021]
Abstract
To avoid collisions, pedestrians depend on their ability to perceive and interpret the visual motion of other road users. Eye movements influence motion perception, yet pedestrians' gaze behavior has been little investigated. In the present study, we ask whether observers sample visual information differently when making two types of judgements based on the same virtual road-crossing scenario, and to what extent spontaneous gaze behavior affects those judgements. Participants performed in succession a speed and a time-to-arrival two-interval discrimination task on the same simple traffic scenario: a car approaching at a constant speed (varying from 10 to 90 km/h) on a single-lane road. On average, observers were able to discriminate vehicle speeds of around 18 km/h and times-to-arrival of 0.7 s. In both tasks, observers placed their gaze close to the center of the vehicle's front plane while pursuing the vehicle. Other areas of the visual scene were sampled infrequently. No differences were found in the average gaze behavior between the two tasks, and a pattern classifier (Support Vector Machine), trained on trial-level gaze patterns, failed to reliably classify the task from the spontaneous eye movements it elicited. Saccadic gaze behavior could, however, predict time-to-arrival discrimination performance, demonstrating the relevance of gaze behavior for perceptual sensitivity in road crossing.
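The pattern-classification check reported above, training a classifier on trial-level gaze features and asking whether it can recover the task, can be sketched as follows. This is only an illustration of the logic: a nearest-centroid classifier stands in for the paper's Support Vector Machine, the feature names are hypothetical, and the features are drawn independently of the label to mimic the null case in which gaze does not differ between tasks:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical trial-level gaze features (e.g., mean gaze offset from the
# vehicle front plane, pursuit gain, saccade count). Drawn independently of
# the task label, so decoding should hover at chance, mirroring the paper's
# negative classification result.
X = rng.normal(size=(200, 6))
y = rng.integers(0, 2, size=200)  # 0 = speed task, 1 = time-to-arrival task

def nearest_centroid_accuracy(X, y, n_train=150):
    """Fit class centroids on a training split, score on the held-out split."""
    Xtr, ytr = X[:n_train], y[:n_train]
    Xte, yte = X[n_train:], y[n_train:]
    centroids = np.stack([Xtr[ytr == k].mean(axis=0) for k in (0, 1)])
    dists = np.linalg.norm(Xte[:, None, :] - centroids[None, :, :], axis=2)
    return (dists.argmin(axis=1) == yte).mean()

acc = nearest_centroid_accuracy(X, y)  # near 0.5: task not decodable
```

Chance-level held-out accuracy is what "failed to reliably classify the task" means operationally; had gaze patterns differed between tasks, the same pipeline would score reliably above 0.5.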
16
Wolfe B, Kosovicheva A, Stent S, Rosenholtz R. Effects of temporal and spatiotemporal cues on detection of dynamic road hazards. Cogn Res Princ Implic 2021; 6:80. [PMID: 34928486] [PMCID: PMC8688617] [DOI: 10.1186/s41235-021-00348-4] [Received: 08/11/2021] [Accepted: 12/02/2021]
Abstract
While driving, dangerous situations can occur quickly, and giving drivers extra time to respond may make the road safer for everyone. Extensive research on attentional cueing in cognitive psychology has shown that targets are detected faster when preceded by a spatially valid cue, and slower when preceded by an invalid cue. However, it is unknown how these standard laboratory-based cueing effects may translate to dynamic, real-world situations like driving, where potential targets (i.e., hazardous events) are inherently more complex and variable. Observers in our study were required to correctly localize hazards in dynamic road scenes across three cue conditions (temporal, spatiotemporal valid and spatiotemporal invalid), and a no-cue baseline. All cues were presented at the first moment the hazardous situation began. Both types of valid cues reduced reaction time (by 58 and 60 ms, respectively, with no significant difference between them, a larger effect than in many classic studies). In addition, observers’ ability to accurately localize hazards dropped 11% in the spatiotemporal invalid condition, a result with dangerous implications on the road. This work demonstrates that, in spite of this added complexity, classic cueing effects persist—and may even be enhanced—for the detection of real-world hazards, and that valid cues have the potential to benefit drivers on the road.
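The cue-validity effects quoted above reduce to comparisons of condition means. A sketch with simulated reaction times makes the arithmetic explicit; every value here is illustrative (the means are chosen only to echo the reported ~60 ms valid-cue benefit) and none of it is the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500  # simulated trials per condition
# Illustrative hazard-localization RTs in ms, normally distributed.
rt_baseline = rng.normal(900, 120, n)   # no cue
rt_valid = rng.normal(840, 120, n)      # valid spatiotemporal cue
rt_invalid = rng.normal(975, 120, n)    # invalid spatiotemporal cue

cue_benefit = rt_baseline.mean() - rt_valid.mean()   # positive: cue speeds detection
cue_cost = rt_invalid.mean() - rt_baseline.mean()    # positive: miscue slows detection
```

In the study itself the same logic applies per observer, with the localization-accuracy drop under invalid cues computed analogously as a difference in proportions.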