1
Yang G, Ridgeway C, Miller A, Sarkar A. Comprehensive Assessment of Artificial Intelligence Tools for Driver Monitoring and Analyzing Safety Critical Events in Vehicles. Sensors (Basel) 2024; 24:2478. PMID: 38676095; PMCID: PMC11055067; DOI: 10.3390/s24082478.
Abstract
Human factors are a primary cause of vehicle accidents. Driver monitoring systems, utilizing a range of sensors and techniques, offer an effective method to monitor and alert drivers to minimize driver error and reduce risky driving behaviors, thus helping to avoid Safety Critical Events (SCEs) and enhance overall driving safety. Artificial Intelligence (AI) tools, in particular, have been widely investigated to improve the efficiency and accuracy of driver monitoring and the analysis of SCEs. To better understand state-of-the-art practices and potential directions for AI tools in this domain, this work is an inaugural attempt to consolidate AI-related tools from academic and industry perspectives. We include an extensive review of AI models and sensors used in driver gaze analysis, driver state monitoring, and the analysis of SCEs. Furthermore, we identify essential AI tools available on the market, from both academia and industry, that are used for camera-based driver monitoring and SCE analysis. Recommendations for future research directions are presented based on the identified tools and the discrepancies between academia and industry found in previous studies. This effort provides a valuable resource for researchers and practitioners seeking a deeper understanding of how to leverage AI tools to minimize driver errors, avoid SCEs, and increase driving safety.
Affiliation(s)
- Guangwei Yang
- Virginia Tech Transportation Institute, Blacksburg, VA 24061, USA
- Abhijit Sarkar
- Virginia Tech Transportation Institute, Blacksburg, VA 24061, USA
2
Jiang M, Chaichanasittikarn O, Seet M, Ng D, Vyas R, Saini G, Dragomir A. Modulating Driver Alertness via Ambient Olfactory Stimulation: A Wearable Electroencephalography Study. Sensors (Basel) 2024; 24:1203. PMID: 38400361; PMCID: PMC10892239; DOI: 10.3390/s24041203.
Abstract
Poor alertness levels and related changes in cognitive efficiency are common when performing monotonous tasks such as extended driving. Recent studies have investigated driver alertness decrement and possible strategies for modulating alertness with the goal of improving reaction times to safety critical events. However, most studies rely on subjective measures in assessing alertness changes, while the use of olfactory stimuli, which are known to be strong modulators of cognitive states, has not been commensurately explored in driving alertness settings. To address this gap, in the present study we investigated the effectiveness of olfactory stimuli in modulating the alertness state of drivers and explored the utility of electroencephalography (EEG) in developing objective brain-based tools for assessing the resulting changes in cortical activity. Olfactory stimulation induced a significant differential effect on braking reaction time. The corresponding effect on cortical activity was characterized using EEG-derived metrics, and the devised machine learning framework yielded high discrimination accuracy (92.1%). Furthermore, neural activity in the alpha frequency band was found to be significantly associated with the drivers' observed behavioral changes. Overall, our results demonstrate the potential of olfactory stimuli to modulate the alertness state and the efficiency of EEG in objectively assessing the resulting cognitive changes.
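The alpha-band finding above rests on computing spectral power in a frequency band. A minimal, generic sketch of band power via an FFT periodogram follows; this is not the authors' EEG pipeline, and the sampling rate and band edges are assumptions for illustration only.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean periodogram power in the [lo, hi] Hz band (generic sketch)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# A synthetic 10 Hz oscillation concentrates its power in the
# alpha band (8-12 Hz) rather than the theta band (4-8 Hz).
fs = 256                          # assumed sampling rate (Hz)
t = np.arange(0, 4, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t)
alpha = band_power(eeg, fs, 8, 12)
theta = band_power(eeg, fs, 4, 8)
```

In practice, EEG band-power features would be computed per channel and per epoch (often with Welch averaging) before being fed to a classifier.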
Affiliation(s)
- Mengting Jiang
- N.1 Institute for Health, National University of Singapore, 28 Medical Drive, #05-COR, Singapore 117456, Singapore
- Laboratoire des Systèmes Perceptifs, Département d’Études Cognitives, École Normale Supérieure, PSL University, CNRS, 75005 Paris, France
- Oranatt Chaichanasittikarn
- N.1 Institute for Health, National University of Singapore, 28 Medical Drive, #05-COR, Singapore 117456, Singapore
- Manuel Seet
- N.1 Institute for Health, National University of Singapore, 28 Medical Drive, #05-COR, Singapore 117456, Singapore
- Desmond Ng
- International Operations, Procter & Gamble, 70 Biopolis Street, Singapore 138547, Singapore
- Rahul Vyas
- International Operations, Procter & Gamble, 70 Biopolis Street, Singapore 138547, Singapore
- Gaurav Saini
- International Operations, Procter & Gamble, 70 Biopolis Street, Singapore 138547, Singapore
- Andrei Dragomir
- N.1 Institute for Health, National University of Singapore, 28 Medical Drive, #05-COR, Singapore 117456, Singapore
3
Manning B, Downey LA, Narayan A, Hayley AC. A systematic review of oculomotor deficits associated with acute and chronic cannabis use. Addict Biol 2024; 29:e13359. PMID: 38221807; PMCID: PMC10898834; DOI: 10.1111/adb.13359.
Abstract
Driving is a critical everyday task necessitating the rapid and seamless integration of dynamic visually derived information to guide neurobehaviour. Biological markers are frequently employed to detect Δ9-tetrahydrocannabinol (THC) consumption among drivers during roadside tests, despite not necessarily indicating impairment. Characterising THC-specific alterations to oculomotor behaviour may offer a more sensitive measure for indexing drug-related impairment, necessitating discrimination between acute THC effects, chronic use and potential tolerance effects. The present review aims to synthesise current evidence on the acute and chronic effects of THC on driving-relevant oculomotor behaviour. The review was prospectively registered (DOI: 10.17605/OSF.IO/A4H9W), and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines informed reporting standards. Overall, 20 included articles, comprising 12 experimental acute dosing trials, 5 cross-sectional chronic use studies and 3 roadside epidemiological studies, examined the effects of cannabis/THC on oculomotor parameters including saccadic activity, gaze behaviour, nystagmus, smooth pursuit and eyelid/blink characteristics. Acute THC consumption selectively impacts oculomotor control, notably increasing saccadic latency and inaccuracy and impairing inhibitory control. Chronic cannabis users, especially those with an early age of use onset, display enduring oculomotor deficits that affect visual scanning efficiency. The presence of eyelid tremors appears to be a reliable indicator of cannabis consumption while remaining distinct from direct impairment of visual attention and motor control. Cannabis selectively influences oculomotor activity relevant to driving, highlighting the role of cannabinoid systems in these processes. Defining cannabis/THC-specific changes in oculomotor control may enhance the precision of roadside impairment assessments and vehicle safety systems to detect drug-related impairment and assess driving fitness.
Affiliation(s)
- Brooke Manning
- Centre for Mental Health and Brain Science, School of Health Sciences, Swinburne University of Technology, Hawthorn, Victoria, Australia
- International Council for Alcohol, Drugs and Traffic Safety (ICADTS), Rotterdam, Netherlands
- Luke A. Downey
- Centre for Mental Health and Brain Science, School of Health Sciences, Swinburne University of Technology, Hawthorn, Victoria, Australia
- Institute for Breathing and Sleep, Austin Hospital, Melbourne, Victoria, Australia
- Andrea Narayan
- Centre for Mental Health and Brain Science, School of Health Sciences, Swinburne University of Technology, Hawthorn, Victoria, Australia
- Amie C. Hayley
- Centre for Mental Health and Brain Science, School of Health Sciences, Swinburne University of Technology, Hawthorn, Victoria, Australia
- International Council for Alcohol, Drugs and Traffic Safety (ICADTS), Rotterdam, Netherlands
- Institute for Breathing and Sleep, Austin Hospital, Melbourne, Victoria, Australia
4
Ezzat M, Maged M, Gamal Y, Adel M, Alrahmawy M, El-Metwally S. Blink-To-Live eye-based communication system for users with speech impairments. Sci Rep 2023; 13:7961. PMID: 37198193; DOI: 10.1038/s41598-023-34310-9.
Abstract
Eye-based communication languages such as Blink-To-Speak play a key role in expressing the needs and emotions of patients with motor neuron disorders. Most existing eye-tracking systems are complex and not affordable in low-income countries. Blink-To-Live is an eye-tracking system based on a modified Blink-To-Speak language and computer vision for patients with speech impairments. A mobile phone camera tracks the patient's eyes by sending real-time video frames to computer vision modules for facial landmark detection, eye identification and tracking. The Blink-To-Live eye-based communication language defines four key alphabet elements: Left, Right, Up, and Blink. These eye gestures encode more than 60 daily-life commands, each expressed as a sequence of three eye movement states. Once the eye-gesture-encoded sentences are generated, the translation module displays the phrases in the patient's native language on the phone screen, and a synthesized voice can be heard. A prototype of the Blink-To-Live system was evaluated with healthy participants of different demographic characteristics. Unlike other sensor-based eye-tracking systems, Blink-To-Live is simple, flexible, and cost-efficient, with no dependency on specific software or hardware requirements. The software and its source code are available from the GitHub repository ( https://github.com/ZW01f/Blink-To-Live ).
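The three-state command encoding described above can be sketched as a simple lookup from gesture triples to phrases. The command table below is hypothetical; the actual Blink-To-Live vocabulary of 60+ commands is defined in the paper and its repository.

```python
# Hypothetical command table: the real Blink-To-Live sequences differ.
COMMANDS = {
    ("Left", "Left", "Blink"): "I need water",
    ("Up", "Right", "Blink"): "I am hungry",
    ("Right", "Up", "Left"): "Call for help",
}

def decode_gestures(states):
    """Group detected eye states (Left/Right/Up/Blink) into triples and
    translate each triple into a daily-life command."""
    phrases = []
    for i in range(0, len(states) - len(states) % 3, 3):
        phrases.append(COMMANDS.get(tuple(states[i:i + 3]), "<unrecognised>"))
    return phrases

print(decode_gestures(["Left", "Left", "Blink", "Up", "Right", "Blink"]))
# → ['I need water', 'I am hungry']
```

In the real system the state stream comes from facial-landmark-based eye tracking; here the states are supplied directly to show only the sequence-to-command step.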
Affiliation(s)
- Mohamed Ezzat
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, P.O. Box: 35516, Mansoura, Egypt
- Mohamed Maged
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, P.O. Box: 35516, Mansoura, Egypt
- Youssef Gamal
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, P.O. Box: 35516, Mansoura, Egypt
- Mustafa Adel
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, P.O. Box: 35516, Mansoura, Egypt
- Mohammed Alrahmawy
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, P.O. Box: 35516, Mansoura, Egypt
- Sara El-Metwally
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, P.O. Box: 35516, Mansoura, Egypt
5
Manning B, Hayley AC, Catchlove S, Shiferaw B, Stough C, Downey LA. Effect of CannEpil® on simulated driving performance and co-monitoring of ocular activity: A randomised controlled trial. J Psychopharmacol 2023; 37:472-483. PMID: 37129083; PMCID: PMC10184186; DOI: 10.1177/02698811231170360.
Abstract
BACKGROUND Medicinal cannabis products containing Δ9-tetrahydrocannabinol (THC) are increasingly accessible. Yet, policy guidelines regarding fitness to drive are lacking, and cannabinoid-specific indexations of impairment are underdeveloped. AIMS To determine the impact of a standardised 1 mL sublingual dose of CannEpil®, a medicinal cannabis oil containing 100 mg cannabidiol (CBD) and 5 mg THC, on simulated driving performance relative to placebo, and whether variations in vehicle control can be indexed by ocular activity. METHODS A double-blind, within-subjects, randomised, placebo-controlled, crossover trial assessed 31 healthy fully licensed drivers (15 male, 16 female) aged between 21 and 58 years (M = 38.0, SD = 10.78). Standard deviation of lateral position (SDLP), standard deviation of speed (SDS) and steering variability were assessed over time and as a function of treatment during a 40 min simulated drive, with oculomotor parameters assessed simultaneously. Oral fluid and plasma were collected at 30 min and 2.5 h. RESULTS CannEpil did not significantly alter SDLP across the full drive, although increased SDLP was observed between 20 and 30 min (p < 0.05). CannEpil increased SDS across the full drive (p < 0.05), with variance greatest at 20-30 min (p < 0.001). CannEpil increased fixation duration (p < 0.05) and blink rate (trend, p = 0.051) and decreased blink duration (p < 0.001) during driving. No significant correlations were observed between biological matrices and performance outcomes. CONCLUSIONS CannEpil impairs select aspects of vehicle control (speed and weaving) over time. Alterations to ocular behaviour suggest that eye tracking may assist in determining cannabis-related driver impairment or intoxication. Australian and New Zealand Clinical Trials Registry, https://anzctr.org.au (ACTRN12619000932167).
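SDLP, the weaving metric reported above, is simply the standard deviation of the vehicle's lateral lane position sampled over the drive. A minimal sketch follows; note that whether a sample or population estimator is used (and the sampling window) varies between studies, so this is an illustration rather than the trial's analysis code.

```python
import statistics

def sdlp(lateral_positions_m):
    """Standard deviation of lateral position (SDLP), in metres.
    Uses the sample estimator (an assumption; conventions differ)."""
    return statistics.stdev(lateral_positions_m)

# Toy trace: lateral offsets from the lane centre (illustrative values).
trace = [0.0, 0.2, -0.1, 0.3, -0.2, 0.1]
print(round(sdlp(trace), 3))
```

Higher SDLP indicates more weaving; SDS (standard deviation of speed) is computed the same way over the speed trace.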
Affiliation(s)
- Brooke Manning
- Centre for Human Psychopharmacology, Swinburne University of Technology, Hawthorn, VIC, Australia
- Amie C Hayley
- Centre for Human Psychopharmacology, Swinburne University of Technology, Hawthorn, VIC, Australia
- International Council for Alcohol, Drugs, and Traffic Safety
- Institute for Breathing and Sleep, Austin Health, Melbourne, VIC, Australia
- Sarah Catchlove
- Centre for Human Psychopharmacology, Swinburne University of Technology, Hawthorn, VIC, Australia
- Brook Shiferaw
- Centre for Human Psychopharmacology, Swinburne University of Technology, Hawthorn, VIC, Australia
- Seeing Machines, Melbourne, VIC, Australia
- Con Stough
- Centre for Human Psychopharmacology, Swinburne University of Technology, Hawthorn, VIC, Australia
- Luke A Downey
- Centre for Human Psychopharmacology, Swinburne University of Technology, Hawthorn, VIC, Australia
- Institute for Breathing and Sleep, Austin Health, Melbourne, VIC, Australia
6
Higashino M, Ono S, Matsumoto S, Kubo M, Yasuura N, Hayasaka S, Tanaka I, Shimoda Y, Nishimura Y, Ono M, Yamamoto K, Ono Y, Sakamoto N. Improvement of detection sensitivity of upper gastrointestinal epithelial neoplasia in linked color imaging based on data of eye tracking. J Gastroenterol Hepatol 2023; 38:710-715. PMID: 36627106; DOI: 10.1111/jgh.16106.
Abstract
BACKGROUND AND AIM Linked color imaging (LCI) is useful for screening in the gastrointestinal tract; however, its true clinical benefit has not been determined. The aim of this study was to determine the objective advantage of LCI for the detection of upper gastrointestinal neoplasms. METHODS Nine endoscopists, including three novices, three trainees, and three experts, prospectively performed eye tracking. From 30 cases of esophageal or gastric neoplasm and 30 normal cases without neoplasms, a total of 120 images, including 60 paired images of white light imaging (WLI) and LCI taken at the same positions and angles, were randomly shown for 10 s each. The sensitivity of tumor detection, the primary endpoint, was evaluated, and sensitivities by organ, size, and visual gaze pattern were also assessed. Color differences (ΔE, using CIE1976 [L*a*b*]) between lesions and surrounding mucosa were measured and compared with detectability. RESULTS A total of 1080 experiments were completed. The sensitivities of tumor detection in WLI and LCI were 53.7% (50.1-56.8%) and 68.1% (64.8-70.8%), respectively (P = 0.002). LCI provided higher sensitivity than WLI for the novice and trainee groups (novice: 42.2% [WLI] vs 65.6% [LCI], P = 0.003; trainee: 54.4% vs 70.0%, P = 0.045). No significant correlations were found between sensitivity and visual gaze patterns. LCI significantly increased ΔE, and the diagnostic accuracy with WLI depended on ΔE. CONCLUSIONS LCI significantly improved sensitivity in the detection of epithelial neoplasia and enabled detection of epithelial neoplasia that is not possible with the small color differences of WLI. (UMIN000047944).
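The ΔE metric above is, under CIE1976, the Euclidean distance between two colours in L*a*b* space. A minimal sketch follows; the colour values are illustrative, not study data, and the study's actual colour-measurement pipeline is not shown.

```python
import math

def delta_e_cie1976(lab1, lab2):
    """CIE1976 colour difference: Euclidean distance in L*a*b* space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Example: a lesion vs. surrounding mucosa, each as an (L*, a*, b*) triple.
lesion = (52.0, 18.0, 9.0)   # illustrative values
mucosa = (55.0, 14.0, 9.0)
print(round(delta_e_cie1976(lesion, mucosa), 1))  # → 5.0
```

Larger ΔE means a more perceptible colour difference, which is the mechanism by which LCI is argued to make lesions easier to spot.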
Affiliation(s)
- Masayuki Higashino
- Department of Gastroenterology and Hepatology, Graduate School of Medicine and Faculty of Medicine Hokkaido University, Sapporo, Hokkaido, Japan
- Shoko Ono
- Division of Endoscopy, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Shogo Matsumoto
- Department of Gastroenterology and Hepatology, Graduate School of Medicine and Faculty of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
- Marina Kubo
- Department of Gastroenterology and Hepatology, Graduate School of Medicine and Faculty of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
- Naohiro Yasuura
- Department of Gastroenterology and Hepatology, Graduate School of Medicine and Faculty of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
- Shuhei Hayasaka
- Department of Gastroenterology and Hepatology, Graduate School of Medicine and Faculty of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
- Ikko Tanaka
- Department of Gastroenterology and Hepatology, Graduate School of Medicine and Faculty of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
- Yoshihiko Shimoda
- Department of Gastroenterology and Hepatology, Graduate School of Medicine and Faculty of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
- Yusuke Nishimura
- Department of Gastroenterology and Hepatology, Graduate School of Medicine and Faculty of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
- Masayoshi Ono
- Department of Gastroenterology and Hepatology, Graduate School of Medicine and Faculty of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
- Keiko Yamamoto
- Division of Endoscopy, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Yuji Ono
- Department of Gastroenterology, Sapporo City General Hospital, Sapporo, Hokkaido, Japan
- Naoya Sakamoto
- Department of Gastroenterology and Hepatology, Graduate School of Medicine and Faculty of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
7
Ding N, Zhong Y, Li J, Xiao Q, Zhang S, Xia H. Visual preference of plant features in different living environments using eye tracking and EEG. PLoS One 2022; 17:e0279596. PMID: 36584138; PMCID: PMC9803246; DOI: 10.1371/journal.pone.0279596.
Abstract
Plants play a very important role in landscape construction. To explore whether different living environments affect people's preferences for the structural features of plant organs, this study recruited 26 villagers and 33 college students as participants and used pictures of plant leaves, flowers and fruits as stimuli in eye-tracking and EEG experiments. We found that eye movement indicators can explain people's visual preferences but cannot reveal differences in preference between groups. EEG indicators compensate for this deficiency, further revealing differences in psychological and physiological responses between the two groups when viewing stimuli. The final results show that both the villagers and the students liked leaves best, preferring aciculiform and leathery leaves; solitary, purple and capitulum flowers; and medium-sized, spathulate, black and pear-shaped fruits. In addition, the overall attention of the villagers when viewing stimuli was far lower than that of the students, but their degree of meditation was higher. With regard to eye movement and EEG, the total duration of fixations is highly positively correlated with the number of fixations, and the average pupil size has a weak negative correlation with attention; conversely, the average duration of fixations has a weak positive correlation with meditation. Generally speaking, we believe that Photinia ×fraseri, Metasequoia glyptostroboides, Photinia serratifolia, Koelreuteria bipinnata and Cunninghamia lanceolata are superior landscape-building plants in rural areas and on campuses; Pinus thunbergii, Myrica rubra, Camellia japonica and other plants with obvious features and bright colours are also a first choice in rural landscapes; and Yulania biondii, Cercis chinensis, Hibiscus mutabilis and other plants with simple structures are a first choice in campus landscapes. This study is of great significance for selecting plants for landscape construction and management according to different environments and local conditions.
Affiliation(s)
- Ningning Ding
- Central South University of Forestry and Technology, Changsha, China
- Yongde Zhong
- Central South University of Forestry and Technology, Changsha, China
- National Forestry and Grassland Administration State Forestry Administration Engineering Research Center for Forest Tourism, Changsha, China
- Jiaxiang Li
- Central South University of Forestry and Technology, Changsha, China
- Qiong Xiao
- Central South University of Forestry and Technology, Changsha, China
- Shuangquan Zhang
- Central South University of Forestry and Technology, Changsha, China
- Hongling Xia
- Hunan Urban Construction College, Xiangtan, China
8
El Hamdani S, Bouchner P, Kunclova T, Lehet D. The Impact of Physical Motion Cues on Driver Braking Performance: A Clinical Study Using Driving Simulator and Eye Tracker. Sensors (Basel) 2022; 23:42. PMID: 36616641; PMCID: PMC9824264; DOI: 10.3390/s23010042.
Abstract
Driving simulators are increasingly being incorporated by driving schools into the training process for a variety of vehicles. The motion platform is a major component integrated into simulators to enhance the sense of presence and the fidelity of the driving simulator. However, less effort has been devoted to assessing the effect of motion cue feedback on trainee performance in simulators. To address this gap, we thoroughly study the impact of motion cues on braking at a target point, an elementary behavior that reflects the driver's overall performance. In this paper, we use an eye-tracking device to evaluate driver behavior in addition to evaluating data from a driving simulator and considering participants' feedback. Furthermore, we compare the effect of different motion levels ("No motion", "Mild motion", and "Full motion") in two road scenarios: with and without pre-braking warning signs, with speed feedback given by the speedometer. The results showed that a full level of motion cues had a positive effect on braking smoothness and gaze fixation on the track. In particular, the presence of full motion cues helped the participants to gradually decelerate from 5 to 0 m/s in the last 240 m before the stop line in both scenarios, without and with warning signs, compared to the hardest braking from 25 to 0 m/s produced under the no-motion-cues condition. Moreover, the results showed that a combination of the mild motion conditions and warning signs led to an underestimation of the actual speed and greater fixation of the gaze on the speedometer. Questionnaire data revealed that 95% of the participants did not suffer from motion sickness symptoms, yet participants' preferences did not indicate that they were aware of the impact of simulator conditions on their driving behavior.
9
Ban S, Lee YJ, Kim KR, Kim JH, Yeo WH. Advances in Materials, Sensors, and Integrated Systems for Monitoring Eye Movements. Biosensors 2022; 12:1039. PMID: 36421157; PMCID: PMC9688058; DOI: 10.3390/bios12111039.
Abstract
Eye movements are primary responses that reflect humans' voluntary intention and conscious selection. Because visual perception is one of the brain's fundamental sensory interactions, eye movements contain critical information regarding physical/psychological health, perception, intention, and preference. With the advancement of wearable device technologies, the performance of eye-movement monitoring has improved significantly, which has led to myriad applications for assisting and augmenting human activities. Among them, electrooculograms, measured by skin-mounted electrodes, have been widely used to track eye motions accurately. In addition, eye trackers that detect reflected optical signals offer alternative approaches that do not require wearable sensors. This paper presents a systematic summary of the latest research on materials, sensors, and integrated systems for monitoring eye movements and enabling human-machine interfaces. Specifically, we summarize recent developments in soft materials, biocompatible materials, manufacturing methods, sensor functions, system performance, and their applications in eye tracking. Finally, we discuss the remaining challenges and suggest research directions for future studies.
Affiliation(s)
- Seunghyeb Ban
- School of Engineering and Computer Science, Washington State University, Vancouver, WA 98686, USA
- IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Yoon Jae Lee
- IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Ka Ram Kim
- IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA
- George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Jong-Hoon Kim
- School of Engineering and Computer Science, Washington State University, Vancouver, WA 98686, USA
- Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA
- Woon-Hong Yeo
- IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA
- George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Tech and Emory University School of Medicine, Atlanta, GA 30332, USA
- Neural Engineering Center, Institute for Materials, Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA 30332, USA
10
A Driver Gaze Estimation Method Based on Deep Learning. Sensors 2022; 22:3959. PMID: 35632365; PMCID: PMC9142909; DOI: 10.3390/s22103959.
Abstract
Car crashes are among the top ten leading causes of death, and they can largely be attributed to distracted drivers. An advanced driver-assistance technique (ADAT) is a procedure that can notify the driver about a dangerous scenario, reduce traffic crashes, and improve road safety. The main contribution of this work is utilizing the driver's attention to build an efficient ADAT. To obtain this "attention value", a gaze tracking method is proposed. The driver's gaze direction is critical for discerning fatal distractions and determining when it is necessary to notify the driver about risks on the road. This paper proposes a real-time gaze tracking system for the development of an ADAT that obtains and communicates the driver's gaze information. The developed system detects various head poses of the driver and estimates eye gaze directions, which play important roles in assisting the driver and avoiding unwanted circumstances. The first (and most significant) task in this work was the development of a benchmark image dataset consisting of the driver's head poses and horizontal and vertical eye gaze directions. To detect the driver's face accurately and efficiently, the You Only Look Once (YOLO-V4) face detector was used, modified with the Inception-v3 CNN model for robust feature learning and improved face detection. Finally, transfer learning was performed on the InceptionResNet-v2 CNN model, which served as the classification model for head pose detection and eye gaze angle estimation; a regression layer was added to the InceptionResNet-v2 CNN in place of the SoftMax and classification output layers. The proposed model detects and estimates head pose directions and eye directions with high accuracy. The head pose detection system achieved an average accuracy of 91%, and the model achieved an RMSE of 2.68 for vertical and 3.61 for horizontal eye gaze estimation.
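The gaze errors above are reported as RMSE, the root of the mean squared difference between predicted and true gaze angles. A minimal sketch follows; the angle values are illustrative, not the paper's data.

```python
import math

def rmse(predicted, actual):
    """Root mean square error between two equal-length sequences."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

# Illustrative vertical gaze angles in degrees.
pred = [10.0, -4.5, 2.0, 7.5]
true = [12.0, -5.0, 3.0, 6.0]
print(round(rmse(pred, true), 2))
```

RMSE penalizes large errors more heavily than mean absolute error, which is why it is the usual choice for regression heads like the gaze-angle layer described above.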
11
Wang Y, Yuan G, Fu X. Driver's Head Pose and Gaze Zone Estimation Based on Multi-Zone Templates Registration and Multi-Frame Point Cloud Fusion. Sensors (Basel) 2022; 22:3154. PMID: 35590843; PMCID: PMC9105416; DOI: 10.3390/s22093154.
Abstract
Head pose and eye gaze are vital clues for analysing a driver's visual attention. Previous approaches achieve promising results from point clouds under constrained conditions. However, these approaches face challenges in complex naturalistic driving scenes. One challenge is that point cloud data collected under non-uniform illumination and large head rotation is prone to partial facial occlusion, which leads to poor transformations when template matching fails or features are extracted incorrectly. In this paper, a novel estimation method is proposed for predicting accurate driver head pose and gaze zone using an RGB-D camera, with an effective point cloud fusion and registration strategy. In the fusion step, to reduce poor transformations, continuous multi-frame point clouds are registered and fused to generate a stable point cloud. In the registration step, to reduce reliance on template registration, multiple point clouds in the nearest-neighbor gaze zone are utilized as the template point cloud. A coarse transformation computed by the normal distributions transform is used as the initial transformation and updated with a particle filter. A gaze zone estimator is trained by combining head pose and eye image features, in which the head pose is predicted by point cloud registration and the eye image features are extracted via multi-scale sparse coding. Extensive experiments demonstrate that the proposed strategy achieves better results on head pose tracking and also has a low error on gaze zone classification.
12
Wang Y, Ding X, Yuan G, Fu X. Dual-Cameras-Based Driver's Eye Gaze Tracking System with Non-Linear Gaze Point Refinement. SENSORS 2022; 22:s22062326. [PMID: 35336497 PMCID: PMC8949346 DOI: 10.3390/s22062326] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/31/2022] [Revised: 02/25/2022] [Accepted: 03/05/2022] [Indexed: 02/06/2023]
Abstract
The human eye gaze plays a vital role in monitoring people’s attention, and various efforts have been made to improve in-vehicle driver gaze tracking systems. Most of them build a specific gaze estimation model offline by training on pre-annotated data. These systems tend to have poor generalization performance during online gaze prediction because of the estimation bias between the training domain and the deployment domain, which causes the predicted gaze points to shift from their correct locations. To solve this problem, a novel driver’s eye gaze tracking method with non-linear gaze point refinement is proposed in a monitoring system using two cameras, which eliminates the estimation bias and implicitly fine-tunes the gaze points. Supported by a two-stage gaze point clustering algorithm, the non-linear gaze point refinement method gradually extracts the representative gaze points of the forward and mirror gaze zones and establishes the non-linear gaze point re-mapping relationship. In addition, an Unscented Kalman filter is utilized to track the driver’s continuous status features. Experimental results show that the non-linear gaze point refinement method outperforms several previous gaze calibration and gaze mapping methods and improves gaze estimation accuracy even in cross-subject evaluation. The system can be used for predicting the driver’s attention.
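The representative-gaze-point extraction underpinning the refinement step can be illustrated with plain clustering. This is a hedged sketch: the paper uses a two-stage clustering algorithm, whereas `kmeans` below is a generic Lloyd's-algorithm stand-in for grouping raw gaze points into zone representatives.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: a stand-in for the paper's two-stage gaze
    point clustering that extracts representative zone gaze points."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each gaze point to its nearest center.
        d = np.linalg.norm(points[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster empties.
        centers = np.array([points[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return centers, labels
```

The resulting cluster centers (e.g. one per forward/mirror zone) would then anchor the non-linear re-mapping between predicted and reference gaze points.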
13
Schweizer T, Wyss T, Gilgen-Ammann R. Detecting Soldiers' Fatigue Using Eye-Tracking Glasses: Practical Field Applications and Research Opportunities. Mil Med 2021; 187:e1330-e1337. [PMID: 34915554 PMCID: PMC10100772 DOI: 10.1093/milmed/usab509] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Revised: 11/04/2021] [Accepted: 11/29/2021] [Indexed: 11/14/2022] Open
Abstract
INTRODUCTION Objectively determining soldiers' fatigue levels could help prevent injuries or accidents resulting from inattention or decreased alertness. Eye-tracking technologies, such as optical eye tracking (OET) and electrooculography (EOG), are often used to monitor fatigue. Eyeblinks, especially blink frequency and blink duration, are easily observable and valid biomarkers of fatigue. Currently, various eye trackers (i.e., eye-tracking glasses) using either OET or EOG technology are available on the market. These wearable eye trackers offer several advantages, including unobtrusive functionality, practicality, and low cost. However, several challenges and limitations must be considered when implementing these technologies in the field to monitor fatigue levels. This review investigates the feasibility of eye tracking in the field, focusing on practical applications in military operational environments. MATERIALS AND METHODS This paper summarizes the existing literature on eyeblink dynamics and available wearable eye-tracking technologies, exposes challenges and limitations, and discusses practical recommendations on how to improve the feasibility of eye tracking in the field. RESULTS So far, no eye-tracking glasses can be recommended for use in a demanding work environment. First, eyeblink dynamics are influenced by multiple factors; therefore, environments, situations, and individual behavior must be taken into account. Second, the glasses' placement, sunlight, facial or body movements, vibrations, and sweat can drastically decrease measurement accuracy. The placement of the eye cameras for OET and of the electrodes for EOG must be chosen carefully, the sampling rate must be at least 200 Hz, and the software and hardware must be robust against any factors influencing eye tracking. CONCLUSION Monitoring the physiological and psychological readiness of soldiers, as well as of other civilian professionals who face higher risks when their attention is impaired or reduced, is necessary. However, improvements to eye-tracking devices' hardware, calibration methods, sampling rates, and algorithms are needed to accurately monitor fatigue levels in the field.
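The blink biomarkers discussed above (blink frequency and blink duration) are straightforward to compute from an eye-openness signal sampled at the recommended 200 Hz. A minimal sketch, assuming a normalized openness signal in [0, 1] and a fixed closure threshold:

```python
import numpy as np

FS = 200  # sampling rate in Hz, the review's recommended minimum

def blink_metrics(eye_open, fs=FS, threshold=0.5):
    """Return (blinks per minute, mean blink duration in seconds).

    A blink is modeled as a contiguous run of samples below `threshold`;
    real detectors are more elaborate, this is an illustrative sketch."""
    closed = eye_open < threshold
    # Run starts/ends found via sign changes of the boolean signal.
    edges = np.diff(closed.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if closed[0]:
        starts = np.r_[0, starts]
    if closed[-1]:
        ends = np.r_[ends, closed.size]
    durations = (ends - starts) / fs
    minutes = eye_open.size / fs / 60.0
    freq = len(starts) / minutes
    mean_dur = durations.mean() if len(starts) else 0.0
    return freq, mean_dur
```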
Affiliation(s)
- Theresa Schweizer
- Monitoring, Swiss Federal Institute of Sport Magglingen (SFISM), Macolin 2532, Switzerland
- Thomas Wyss
- Monitoring, Swiss Federal Institute of Sport Magglingen (SFISM), Macolin 2532, Switzerland
- Rahel Gilgen-Ammann
- Monitoring, Swiss Federal Institute of Sport Magglingen (SFISM), Macolin 2532, Switzerland

14
Reimer B, Mehler B, Muñoz M, Dobres J, Kidd D, Reagan IJ. Patterns in transitions of visual attention during baseline driving and during interaction with visual-manual and voice-based interfaces. ERGONOMICS 2021; 64:1429-1451. [PMID: 34018916 DOI: 10.1080/00140139.2021.1930197] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/22/2021] [Accepted: 05/09/2021] [Indexed: 06/12/2023]
Abstract
Voice interfaces reduce visual demand compared with visual-manual interfaces, but the extent depends on design. This study compared visual demand during baseline driving with driving while using voice or manual input to place calls with Chevrolet MyLink, Volvo Sensus, or a smartphone. Mean glance duration and total eyes-off-road time increased when using manual input compared with baseline driving; only total eyes-off-road time increased with voice input. Confusion matrices developed with hidden Markov modelling characterise the similarity of glance sequences during baseline driving and while making phone calls. Glance sequences with the MyLink voice interface were misclassified as baseline driving more frequently than those of the other voice interfaces. Conversely, glance sequences with the Sensus and smartphone voice interfaces were more often misclassified as manual phone calling. Thus, the MyLink voice interface not only reduced the overall visual demand of placing calls but also produced glance patterns more similar to driving without another task. Practitioner Summary: The attention map and confusion matrix methodologies provide ways of characterising similarities and differences in glance behaviour across secondary task conditions, complementing traditional temporally based metrics (e.g. mean glance duration, long-duration glances) while addressing some of the limitations of total eyes-off-road time (TEORT) for comparing secondary task behaviour to baseline driving.
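The confusion-matrix comparison described above can be illustrated once each glance sequence has been assigned a most-likely condition (in the study via hidden Markov modelling; the classification step itself is omitted here). A minimal sketch of tallying a row-normalized matrix, assuming every true condition appears at least once:

```python
import numpy as np

def confusion_matrix(true_labels, pred_labels, classes):
    """Row-normalized confusion matrix: rows are the true task
    conditions, columns the condition each glance sequence was
    classified as. Rows sum to 1 (assumes each true class occurs)."""
    idx = {c: i for i, c in enumerate(classes)}
    M = np.zeros((len(classes), len(classes)))
    for t, p in zip(true_labels, pred_labels):
        M[idx[t], idx[p]] += 1
    return M / M.sum(axis=1, keepdims=True)
```

An off-diagonal mass in, say, the "voice" row under the "baseline" column is exactly the "misclassified as baseline driving" similarity signal the study interprets.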
Affiliation(s)
- Bryan Reimer
- AgeLab, Center for Transportation & Logistics, Massachusetts Institute of Technology, Cambridge, MA, USA
- Bruce Mehler
- AgeLab, Center for Transportation & Logistics, Massachusetts Institute of Technology, Cambridge, MA, USA
- Mauricio Muñoz
- AgeLab, Center for Transportation & Logistics, Massachusetts Institute of Technology, Cambridge, MA, USA
- Jonathan Dobres
- AgeLab, Center for Transportation & Logistics, Massachusetts Institute of Technology, Cambridge, MA, USA
- David Kidd
- Insurance Institute for Highway Safety, Arlington, VA, USA
- Ian J Reagan
- Insurance Institute for Highway Safety, Arlington, VA, USA

15
|
Araluce J, Bergasa LM, Ocaña M, López-Guillén E, Revenga PA, Arango JF, Pérez O. Gaze Focalization System for Driving Applications Using OpenFace 2.0 Toolkit with NARMAX Algorithm in Accidental Scenarios. SENSORS 2021; 21:s21186262. [PMID: 34577469 PMCID: PMC8473381 DOI: 10.3390/s21186262] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/21/2021] [Revised: 08/21/2021] [Accepted: 09/15/2021] [Indexed: 11/18/2022]
Abstract
Monitoring driver attention using gaze estimation is a typical approach in road scenes. This indicator is of great importance for safe driving, especially in Level 3 and Level 4 automation systems, where the take-over request control strategy could be based on the driver’s gaze estimation. State-of-the-art gaze estimation techniques are intrusive and costly, and these two aspects limit their use in real vehicles. To test this kind of application, some databases focus on critical situations in simulation, but they do not show real accidents because of the complexity and danger of recording them. Within this context, this paper presents a low-cost, non-intrusive, camera-based gaze mapping system that integrates the open-source state-of-the-art OpenFace 2.0 Toolkit to visualize driver focalization on a database of recorded real traffic scenes through a heat map. NARMAX (Nonlinear AutoRegressive Moving Average model with eXogenous inputs) is used to establish the correspondence between the OpenFace 2.0 parameters and the screen region the user is looking at. This proposal improves on our previous work, which was based on a linear approximation using a projection matrix. The proposal has been validated using the recent and challenging public database DADA2000, which contains 2000 video sequences with annotated driving scenarios based on real accidents. We compare our proposal with our previous one and with an expensive desktop-mounted eye tracker, obtaining comparable results. We show that this method can be used to record driver attention databases.
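The mapping from OpenFace-style parameters to screen locations, and the heat-map accumulation, can be sketched as follows. This is a simplification: a static polynomial least-squares fit stands in for the paper's NARMAX model (no lagged/autoregressive terms), and the two-feature design is an illustrative assumption.

```python
import numpy as np

def poly_design(features):
    # Second-order polynomial terms of two head/gaze features
    # (e.g. yaw and pitch; illustrative, not the paper's full input set).
    f1, f2 = features[:, 0], features[:, 1]
    return np.column_stack([np.ones_like(f1), f1, f2,
                            f1 * f2, f1**2, f2**2])

def fit_gaze_mapping(features, screen_xy):
    """Least-squares fit from features to normalized screen coordinates;
    a static stand-in for the paper's NARMAX correspondence model."""
    W, *_ = np.linalg.lstsq(poly_design(features), screen_xy, rcond=None)
    return W

def gaze_heatmap(points, grid=(4, 4)):
    """Accumulate normalized screen points in [0, 1)^2 into a heat map."""
    H = np.zeros(grid)
    rows = np.clip((points[:, 1] * grid[0]).astype(int), 0, grid[0] - 1)
    cols = np.clip((points[:, 0] * grid[1]).astype(int), 0, grid[1] - 1)
    np.add.at(H, (rows, cols), 1)  # unbuffered accumulation
    return H
```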
16
Implementing a Gaze Tracking Algorithm for Improving Advanced Driver Assistance Systems. ELECTRONICS 2021. [DOI: 10.3390/electronics10121480] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Car accidents are one of the top ten causes of death and are produced mainly by driver distraction. ADAS (Advanced Driver Assistance Systems) can warn the driver of dangerous scenarios, improving road safety and reducing the number of traffic accidents. However, a system that continuously sounds alarms can be overwhelming, confusing, or both, and can be counterproductive. Using the driver’s attention to build an efficient ADAS is the main contribution of this work. To obtain this “attention value”, gaze tracking is proposed. The driver’s gaze direction is a crucial factor in understanding fatal distractions, as well as in discerning when it is necessary to warn the driver about risks on the road. In this paper, a real-time gaze tracking system is proposed as part of the development of an ADAS that obtains and communicates the driver’s gaze information. The developed ADAS uses gaze information to determine whether the drivers are looking at the road with their full attention. This work takes a step forward in driver-based ADAS, building an ADAS that warns the driver only in case of distraction. The gaze tracking system was implemented as a model-based system using a Kinect v2.0 sensor, adjusted in a set-up environment, and tested in a driving simulation environment. The average results are promising, with hit ratios between 81.84% and 96.37%.
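The hit-ratio metric reported above, and a distraction-gated warning of the kind the paper argues for, can be sketched directly. The `should_warn` frame threshold and zone labels are illustrative assumptions, not values from the paper:

```python
def hit_ratio(predicted_zones, true_zones):
    """Fraction of frames where the estimated gaze zone matches ground
    truth -- the style of metric reported as 81.84-96.37% above."""
    hits = sum(p == t for p, t in zip(predicted_zones, true_zones))
    return hits / len(true_zones)

def should_warn(zone_history, road_zone="road", max_off=30):
    """Warn only when the driver has looked away from the road for more
    than `max_off` consecutive recent frames (hypothetical threshold),
    instead of sounding an alarm for every hazard."""
    off = 0
    for z in reversed(zone_history):
        if z == road_zone:
            break
        off += 1
    return off > max_off
```

Gating alerts on sustained off-road gaze is one way to avoid the alarm fatigue the abstract warns about.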
17
Lee A, Chung H, Cho Y, Kim JL, Choi J, Lee E, Kim B, Cho SJ, Kim SG. Identification of gaze pattern and blind spots by upper gastrointestinal endoscopy using an eye-tracking technique. Surg Endosc 2021; 36:2574-2581. [PMID: 34013392 DOI: 10.1007/s00464-021-08546-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2021] [Accepted: 05/04/2021] [Indexed: 11/24/2022]
Abstract
BACKGROUND The lesion detection rate of esophagogastroduodenoscopy (EGD) varies depending on the endoscopist's degree of experience and anatomical blind spots. This study aimed to identify gaze patterns and blind spots by analyzing the endoscopist's gaze during real-time EGD. METHODS Five endoscopists were enrolled in this study. The endoscopist's eye gaze, tracked by an eye tracker, was selected from the esophagogastric junction to the second portion of the duodenum (excluding the esophagus) during insertion and withdrawal, and then matched with photos. Gaze patterns were visualized as gaze plots and blind spot detection as heatmaps, together with observation time (OT), fixation duration (FD), and the FD-to-OT ratio. RESULTS The mean OT and FD were 11.10 ± 11.14 min and 8.37 ± 9.95 min, respectively, and the FD-to-OT ratio was 72.5%. A total of 34.3% of the time was spent observing the antrum. When observing the body of the stomach, it took longer to observe the high body in the retroflexion view and the low-to-mid body in the forward view. CONCLUSIONS It is necessary to minimize gaze distraction and to observe the posterior wall in the retroflexion view. Our results suggest that eye-tracking techniques may be useful for future endoscopic training and education.
Affiliation(s)
- Ayoung Lee
- Division of Gastroenterology, Department of Internal Medicine and Liver Research Institute, Seoul National University College of Medicine, Seoul, Republic of Korea; Department of Internal Medicine, Ewha Womans University School of Medicine, Seoul, Republic of Korea
- Hyunsoo Chung
- Division of Gastroenterology, Department of Internal Medicine and Liver Research Institute, Seoul National University College of Medicine, Seoul, Republic of Korea
- Yejin Cho
- Seoul National University College of Medicine, Seoul, Republic of Korea
- Jue Lie Kim
- Division of Gastroenterology, Department of Internal Medicine and Liver Research Institute, Seoul National University College of Medicine, Seoul, Republic of Korea
- Jinju Choi
- Division of Gastroenterology, Department of Internal Medicine and Liver Research Institute, Seoul National University College of Medicine, Seoul, Republic of Korea
- Eunwoo Lee
- Division of Gastroenterology, Department of Internal Medicine and Liver Research Institute, Seoul National University College of Medicine, Seoul, Republic of Korea
- Bokyung Kim
- Division of Gastroenterology, Department of Internal Medicine and Liver Research Institute, Seoul National University College of Medicine, Seoul, Republic of Korea
- Soo-Jeong Cho
- Division of Gastroenterology, Department of Internal Medicine and Liver Research Institute, Seoul National University College of Medicine, Seoul, Republic of Korea
- Sang Gyun Kim
- Division of Gastroenterology, Department of Internal Medicine and Liver Research Institute, Seoul National University College of Medicine, Seoul, Republic of Korea

18
Abstract
Different systems based on Artificial Intelligence (AI) techniques are currently used in relevant areas such as healthcare, cybersecurity, natural language processing, and self-driving cars. However, many of these systems are developed with “black box” AI, which makes it difficult to explain how they work. For this reason, explainability and interpretability are key factors that need to be taken into consideration in the development of AI systems in critical areas. In addition, different contexts produce different explainability needs which must be met. Against this background, Explainable Artificial Intelligence (XAI) appears able to address this situation. In the field of automated driving, XAI is particularly needed because the level of automation is constantly increasing with the development of AI techniques. For this reason, the field of XAI in the context of automated driving is of particular interest. In this paper, we propose the use of explainable intelligence techniques for understanding some of the tasks involved in the development of advanced driver-assistance systems (ADAS). Since ADAS assist drivers in driving functions, it is essential to know the reasons for the decisions taken. In addition, trusted AI is the cornerstone of the confidence needed in this research area. Thus, due to the complexity and the different variables that are part of the decision-making process, this paper focuses on two specific tasks in this area: the detection of drivers' emotions and distractions. The results obtained are promising and show the capacity of explainable AI techniques in the different tasks of the proposed environments.
19
Shichrur R, Ratzon NZ, Shoham A, Borowsky A. The Effects of an In-vehicle Collision Warning System on Older Drivers' On-road Head Movements at Intersections. Front Psychol 2021; 12:596278. [PMID: 33679517 PMCID: PMC7932995 DOI: 10.3389/fpsyg.2021.596278] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2020] [Accepted: 01/25/2021] [Indexed: 11/13/2022] Open
Abstract
Crucial driving skills may decline with age. The effect of a collision warning system (CWS) on older drivers' head movement behavior at intersections was examined. Methods: Twenty-six old adults between 55 and 64 years of age and 16 older drivers between 65 and 83 years of age participated in the study. A CWS (Mobileye Inc.) and a front-back in-vehicle camera (IVC) were installed in each of the participants' own vehicles for 6 months. The CWS was utilized to identify unsafe events during naturalistic driving situations, and the IVC was used to capture head direction at intersections. The experimental design comprised three phases (baseline, intervention, and carryover) of 2 months each. Unsafe events were recorded by the CWS during all phases of the study. In the second phase, the CWS feedback was activated to examine its effect on drivers' head movement behavior at intersections. Results: Older drivers (65+) drove significantly more hours in total during the intervention phase (M = 79.1 h, SE = 10) than during the baseline phase (M = 39.1 h, SE = 5.3) and the carryover phase (M = 37.7 h, SE = 5.4). The study revealed no significant differences between the head movements of older and old-adult drivers at intersections. For intersections in the left direction, a significant improvement in drivers' head movement behavior was found at T-junctions, turns, and four-way intersections from phase 1 to phase 3 (p < 0.01); however, two intersection types showed a decrease across the study phases. Head movement behavior at roundabouts and merges was better in phase 1 than in phase 3 (p < 0.01). There was no significant reduction in the mean number of CWS unsafe events across the study phases. Conclusions: The immediate feedback provided by the CWS was effective in terms of participants' head movements at certain intersections but harmful at others. However, older drivers drove many more hours during the active feedback phase, implying that they trusted the system. Therefore, in light of this complex picture, using technological feedback with older drivers should be accompanied by additional mediation or follow-up to ensure safety.
Affiliation(s)
- Rachel Shichrur
- Occupational Therapy Department, Faculty of Health Sciences, Ariel University, Ariel, Israel
- Navah Z Ratzon
- Occupational Therapy Department, School of Health Professions, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv-Yafo, Israel
- Arava Shoham
- Occupational Therapy Department, School of Health Professions, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv-Yafo, Israel; Occupational Therapy Clinics, Clalit Health Services, Dimona, Israel
- Avinoam Borowsky
- The Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Be'er-Sheva, Israel

20
Real-Time Abnormal Event Detection for Enhanced Security in Autonomous Shuttles Mobility Infrastructures. SENSORS 2020; 20:s20174943. [PMID: 32882846 PMCID: PMC7506808 DOI: 10.3390/s20174943] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/28/2020] [Revised: 08/26/2020] [Accepted: 08/27/2020] [Indexed: 12/18/2022]
Abstract
Autonomous vehicles (AVs) are already operating on the streets of many countries around the globe. Contemporary concerns about AVs do not relate to the implementation of fundamental technologies, as they are already in use, but are increasingly centered on the way such technologies will affect emerging transportation systems, our social environment, and the people living in it. Many concerns also focus on whether such systems should be fully automated or still partially controlled by humans. This work aims to address the new reality that is formed in autonomous shuttle mobility infrastructures as a result of the absence of the bus driver and the increased threat from terrorism in European cities. Typically, drivers are trained to handle incidents of passengers’ abnormal behavior, petty crimes, and other abnormal events, according to standard procedures adopted by the transport operator. Surveillance using camera sensors as well as smart software in the bus will maximize both the perceived and the actual level of security. In this paper, an online, end-to-end solution based on deep learning techniques is introduced for the timely, accurate, robust, and automatic detection of various petty crime types. The proposed system can identify abnormal passenger behavior such as vandalism and accidents, and can also enhance passenger security via the detection of petty crimes such as aggression, bag-snatching, and vandalism. The solution achieves excellent results across different use cases and environmental conditions.
21
Carr DB, Grover P. The Role of Eye Tracking Technology in Assessing Older Driver Safety. Geriatrics (Basel) 2020; 5:E36. [PMID: 32517336 PMCID: PMC7345272 DOI: 10.3390/geriatrics5020036] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2020] [Revised: 05/21/2020] [Accepted: 05/22/2020] [Indexed: 12/11/2022] Open
Abstract
A growing body of literature focuses on the use of eye tracking (ET) technology to understand the association between objective visual parameters and higher-order brain processes such as cognition. One of the settings where this principle has found practical utility is driving safety. METHODS We reviewed the literature to identify changes in ET parameters in older adults and in neurodegenerative disease. RESULTS This narrative review provides a brief overview of oculomotor system anatomy and physiology, defines common eye movements and tracking variables that are typically studied, explains the most common methods of eye tracking measurement during driving in simulation and in naturalistic settings, and examines the association of impairment in ET parameters with advanced age and neurodegenerative disease. CONCLUSION ET technology is becoming less expensive, more portable, easier to use, and readily applicable in a variety of clinical settings. Older adults, and especially those with neurodegenerative disease, may have impairments in visual search parameters, placing them at risk for motor vehicle crashes. Advanced driver assistance systems are becoming more ubiquitous in newer cars and may significantly reduce crashes related to impaired visual search, distraction, and/or fatigue.
Affiliation(s)
- David B. Carr
- Department of Medicine and Neurology, Washington University School of Medicine, St Louis, MO 63110, USA
- Prateek Grover
- Department of Neurology, Washington University School of Medicine, St Louis, MO 63110, USA

22
Lim JZ, Mountstephens J, Teo J. Emotion Recognition Using Eye-Tracking: Taxonomy, Review and Current Challenges. SENSORS (BASEL, SWITZERLAND) 2020; 20:E2384. [PMID: 32331327 PMCID: PMC7219342 DOI: 10.3390/s20082384] [Citation(s) in RCA: 48] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/10/2020] [Revised: 03/31/2020] [Accepted: 03/31/2020] [Indexed: 12/12/2022]
Abstract
The ability to detect users' emotions for the purpose of emotion engineering is currently one of the main endeavors of machine learning in affective computing. Among the more common approaches to emotion detection are methods that rely on electroencephalography (EEG), facial image processing, and speech inflections. Although eye-tracking is fast becoming one of the most commonly used sensor modalities in affective computing, it is still a relatively new approach for emotion detection, especially when used exclusively. In this survey paper, we present a review of emotion recognition using eye-tracking technology, including a brief introductory background on emotion modeling, eye-tracking devices and approaches, emotion stimulation methods, the emotion-relevant features extractable from eye-tracking data, and, most importantly, a categorical summary and taxonomy of the current literature on emotion recognition using eye-tracking. This review concludes with a discussion of the current open research problems and prospective future research directions that will be beneficial for expanding the body of knowledge in emotion detection using eye-tracking as the primary sensor modality.
Affiliation(s)
- Jia Zheng Lim
- Evolutionary Computing Laboratory, Faculty of Computing and Informatics, Universiti Malaysia Sabah, Jalan UMS, Kota Kinabalu 88400, Sabah, Malaysia
- James Mountstephens
- Faculty of Computing and Informatics, Universiti Malaysia Sabah, Jalan UMS, Kota Kinabalu 88400, Sabah, Malaysia
- Jason Teo
- Faculty of Computing and Informatics, Universiti Malaysia Sabah, Jalan UMS, Kota Kinabalu 88400, Sabah, Malaysia

23
Intelligent Driving Assistant Based on Road Accident Risk Map Analysis and Vehicle Telemetry. SENSORS 2020; 20:s20061763. [PMID: 32235783 PMCID: PMC7147716 DOI: 10.3390/s20061763] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/19/2020] [Revised: 03/10/2020] [Accepted: 03/18/2020] [Indexed: 12/11/2022]
Abstract
Through the application of intelligent systems in driver assistance systems, traveling by road has become much more comfortable and safe. This paper reports the development of an intelligent driving assistant, based on vehicle telemetry and road accident risk map analysis, whose responsibility is to alert the driver in order to avoid risky situations that may cause traffic accidents. In performance evaluations using real cars in a real environment, the on-board intelligent assistant produced real-time audio-visual alerts according to information obtained from both telemetry and road accident risk map analysis. The result is an intelligent assistance agent based on fuzzy reasoning, which supported the driver correctly in real time according to the telemetry data, the vehicle environment, and the principles of secure driving practices and transportation regulation laws. Experimental results and conclusions emphasizing the advantages of the proposed intelligent driving assistant in improving the driving task are presented.
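The fuzzy-reasoning alerting described above can be illustrated with a tiny Mamdani-style rule base combining vehicle speed (telemetry) with the accident risk of the current map zone. The membership functions and rules below are invented for illustration and are not the paper's actual rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def alert_level(speed_kmh, zone_risk):
    """Alert strength in [0, 1] from speed and map-zone risk in [0, 1].

    Two hypothetical rules, min for AND and max for OR, defuzzified as a
    weighted average of singleton outputs (high=1.0, low=0.0)."""
    fast = tri(speed_kmh, 60, 120, 180)
    risky = tri(zone_risk, 0.3, 1.0, 1.7)   # saturates toward high risk
    # Rule 1: IF speed is fast AND zone is risky THEN alert is high.
    high = min(fast, risky)
    slow = tri(speed_kmh, -60, 0, 80)
    safe = tri(zone_risk, -1.0, 0.0, 0.5)
    # Rule 2: IF speed is slow OR zone is safe THEN alert is low.
    low = max(slow, safe)
    if high + low == 0:
        return 0.0
    return high / (high + low)
```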