1
Pandey NN, Muppalaneni NB. Strabismus free gaze detection system for driver's using deep learning technique. Progress in Artificial Intelligence 2023. [DOI: 10.1007/s13748-023-00296-8]
2
Park SH, Yoon HS, Park KR. Faster R-CNN and Geometric Transformation-Based Detection of Driver's Eyes Using Multiple Near-Infrared Camera Sensors. Sensors 2019; 19:197. [PMID: 30621110] [PMCID: PMC6338982] [DOI: 10.3390/s19010197]
Abstract
Studies are being actively conducted on camera-based driver gaze tracking in a vehicle environment, both for vehicle interfaces and for analyzing forward attention to judge driver inattention. In existing single-camera-based methods, the eye information necessary for gaze tracking frequently cannot be observed well in the camera input image because the driver turns his or her head while driving. To solve this problem, existing studies have used multiple cameras to obtain images for tracking the driver's gaze. However, this approach incurs excessive computation and processing time, as it involves detecting the eyes and extracting features from all of the images obtained from the multiple cameras, which makes it difficult to implement in an actual vehicle environment. To overcome these limitations, this study proposes a method that applies a shallow convolutional neural network (CNN) to the driver's face images acquired from two cameras in order to adaptively select the camera image more suitable for detecting the eye position; faster R-CNN is then applied to the selected driver image, and after the driver's eyes are detected, their positions are mapped to the other camera's image through a geometric transformation matrix. Experiments were conducted using the self-built Dongguk Dual Camera-based Driver Database (DDCD-DB1), which includes images of 26 participants acquired inside a vehicle, and the open Columbia Gaze Data Set (CAVE-DB). The results confirmed that the performance of the proposed method is superior to that of existing methods.
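The cross-camera mapping step described in this abstract — transferring detected eye positions from one camera's image into the other's — is a standard planar geometric-transformation (homography) application. The sketch below illustrates the idea only; the matrix and pixel values are made up, and the paper's actual matrix would come from its calibration procedure:

```python
def map_point(H, pt):
    """Map a 2D pixel coordinate through a 3x3 geometric transformation
    (homography) matrix H, given as nested lists, using homogeneous
    coordinates and a final perspective divide."""
    x, y = pt
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)

# A pure-translation homography: shift every point by (50, -20) pixels.
H = [[1.0, 0.0,  50.0],
     [0.0, 1.0, -20.0],
     [0.0, 0.0,   1.0]]
eye_cam_a = (320.0, 240.0)           # eye center detected in camera A
eye_cam_b = map_point(H, eye_cam_a)  # mapped position in camera B's image
print(eye_cam_b)                     # -> (370.0, 220.0)
```

In practice the matrix would encode rotation, scale, and perspective terms as well; this avoids re-running eye detection on the second camera's image.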
Affiliation(s)
- Sung Ho Park
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea.
3
Li B, Fu H, Wen D, Lo W. Etracker: A Mobile Gaze-Tracking System with Near-Eye Display Based on a Combined Gaze-Tracking Algorithm. Sensors 2018; 18:1626. [PMID: 29783738] [PMCID: PMC5981618] [DOI: 10.3390/s18051626]
Abstract
Eye tracking technology has become increasingly important for psychological analysis, medical diagnosis, driver assistance systems, and many other applications. Various gaze-tracking models have been established by previous researchers. However, there is currently no near-eye display system that combines accurate gaze-tracking performance with a convenient user experience. In this paper, we constructed a complete prototype of the mobile gaze-tracking system 'Etracker' with a near-eye viewing device for human gaze tracking, and we proposed a combined gaze-tracking algorithm: a convolutional neural network is used to remove blinking images and predict a coarse gaze position, and a geometric model is then defined for accurate gaze tracking. Moreover, we proposed using the mean value of gaze samples in the calibration algorithm to compensate for pupil-center changes caused by nystagmus, so that an individual user only needs to calibrate the system the first time, which makes it more convenient. Experiments on gaze data from 26 participants show that the eye-center detection accuracy is 98% and that Etracker provides an average gaze accuracy of 0.53° at a rate of 30–60 Hz.
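The mean-of-gazes calibration idea can be sketched in a few lines. This is a toy illustration, not the authors' implementation: pupil-center samples collected while the user fixates one calibration marker are averaged, so the small involuntary movements (nystagmus) that shift individual frames largely cancel out:

```python
def calibration_anchor(samples):
    """Average pupil-center samples recorded while the user fixates a single
    calibration marker; the mean damps frame-to-frame jitter caused by
    nystagmus, giving a stable anchor point for the calibration mapping."""
    n = len(samples)
    return (sum(x for x, _ in samples) / n,
            sum(y for _, y in samples) / n)

# Jittery pupil-center samples around a true fixation point of (100, 50).
samples = [(99.0, 50.5), (101.0, 49.5), (100.5, 50.0), (99.5, 50.0)]
print(calibration_anchor(samples))  # -> (100.0, 50.0)
```

One anchor like this per calibration marker is what lets the calibration be done once rather than repeated per session.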
Affiliation(s)
- Bin Li
- Xi'an Institute of Optics and Precision Mechanics of CAS, Xi'an 710119, China.
- University of Chinese Academy of Sciences, Beijing 100049, China.
- Department of Computer Science, Chu Hai College of Higher Education, Tuen Mun, Hong Kong, China.
- Hong Fu
- Department of Computer Science, Chu Hai College of Higher Education, Tuen Mun, Hong Kong, China.
- Desheng Wen
- Xi'an Institute of Optics and Precision Mechanics of CAS, Xi'an 710119, China.
- WaiLun Lo
- Department of Computer Science, Chu Hai College of Higher Education, Tuen Mun, Hong Kong, China.
4
Naqvi RA, Arsalan M, Batchuluun G, Yoon HS, Park KR. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor. Sensors 2018; 18:456. [PMID: 29401681] [PMCID: PMC5855991] [DOI: 10.3390/s18020456]
Abstract
A paradigm shift is required to prevent the increasing number of automobile accident deaths that are mostly due to inattentive driver behavior. Knowledge of the gaze region can provide valuable information regarding a driver's point of attention, and accurate, inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behavior and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, reflections on glasses, and occlusions. Past studies on gaze detection in cars have been based chiefly on head movements, but the margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, pupil center corneal reflection (PCCR)-based methods have been considered. However, the error in accurately detecting the pupil center and the corneal reflection center increases in a car environment owing to varying environmental light, reflections on the surface of glasses, and motion and optical blur in the captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address these issues, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor that considers driver head and eye movement and does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on the open Columbia gaze dataset (CAVE-DB), and demonstrates greater accuracy than previous gaze classification methods.
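The PCCR principle this abstract contrasts against can be illustrated briefly. This is a generic sketch of the textbook formulation, not this paper's deep-learning method, and the pixel values are invented: gaze is inferred from the offset between the pupil center and the corneal glint produced by the NIR illuminator.

```python
import math

def pccr_offset(pupil_center, glint_center):
    """Pupil center corneal reflection (PCCR): as the eye rotates, the pupil
    center moves relative to the (nearly fixed) corneal glint. Classical
    trackers map this offset vector to a gaze point via user calibration."""
    dx = pupil_center[0] - glint_center[0]
    dy = pupil_center[1] - glint_center[1]
    return (dx, dy), math.hypot(dx, dy)

# Pupil detected 3 px right of and 4 px below the glint (image coordinates).
offset, magnitude = pccr_offset((103.0, 54.0), (100.0, 50.0))
print(offset, magnitude)  # -> (3.0, 4.0) 5.0
```

The abstract's point is that each input to this offset (pupil center, glint center) is itself hard to localize reliably in a car, which is why the paper replaces the pipeline with a calibration-free deep model.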
Affiliation(s)
- Rizwan Ali Naqvi
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro, 1-gil, Jung-gu, Seoul 100-715, Korea.
- Muhammad Arsalan
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro, 1-gil, Jung-gu, Seoul 100-715, Korea.
- Ganbayar Batchuluun
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro, 1-gil, Jung-gu, Seoul 100-715, Korea.
- Hyo Sik Yoon
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro, 1-gil, Jung-gu, Seoul 100-715, Korea.
- Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro, 1-gil, Jung-gu, Seoul 100-715, Korea.
5
Naqvi RA, Arsalan M, Park KR. Fuzzy System-Based Target Selection for a NIR Camera-Based Gaze Tracker. Sensors 2017; 17:862. [PMID: 28420114] [PMCID: PMC5424739] [DOI: 10.3390/s17040862]
Abstract
Gaze-based interaction (GBI) techniques have been a popular subject of research in the last few decades. Among other applications, GBI can be used by persons with disabilities to perform everyday tasks, can serve as a game interface, and can play a pivotal role in the human-computer interaction (HCI) field. While gaze tracking systems have shown high accuracy in GBI, detecting a user's gaze for target selection is a challenging problem that must be considered when using a gaze detection system. Past research has used eye blinking as well as dwell-time-based methods for this purpose, but these techniques are either inconvenient for the user or require a long time for target selection. Therefore, in this paper, we propose a fuzzy system-based target selection method for near-infrared (NIR) camera-based gaze trackers. Experiments, together with usability tests and on-screen keyboard trials, show that the proposed method outperforms previous methods.
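A minimal flavor of fuzzy target selection can be sketched as follows. This is a toy single-rule system with made-up membership ranges, not the paper's actual rule base: the inputs are fuzzified with triangular membership functions and combined with a min-AND rule into a selection degree.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def selection_degree(fixation_ms, dist_px):
    """Degree (0..1) to which the current gaze should trigger a selection.
    Single rule: IF fixation is LONG AND gaze is NEAR the target THEN SELECT.
    The ranges below are illustrative, not taken from the paper."""
    long_fix = tri(fixation_ms, 200.0, 600.0, 1000.0)  # "long fixation"
    near = tri(dist_px, -60.0, 0.0, 60.0)              # "near target", peak at 0 px
    return min(long_fix, near)                         # min-AND conjunction

print(selection_degree(600, 0))   # -> 1.0  (long fixation, right on target)
print(selection_degree(300, 30))  # -> 0.25 (shortish fixation, slightly off)
```

Compared with a hard dwell threshold, the graded output lets a nearly-on-target, long fixation still win, which is the usability gain fuzzy selection aims at.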
Affiliation(s)
- Rizwan Ali Naqvi
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea.
- Muhammad Arsalan
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea.
- Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Korea.
6
Nonwearable gaze tracking system for controlling home appliance. The Scientific World Journal 2014; 2014:303670. [PMID: 25298966] [PMCID: PMC4178921] [DOI: 10.1155/2014/303670]
Abstract
A novel gaze tracking system for controlling home appliances in 3D space is proposed in this study. Our research is novel in the following four ways. First, we propose a nonwearable gaze tracking system containing frontal-viewing and eye-tracking cameras. Second, our system includes three modes: navigation (moving the wheelchair according to the direction of gaze movement), selection (selecting a specific appliance by gaze estimation), and manipulation (controlling the selected appliance by gazing at a control panel). Modes are changed by closing the eyes for a specific period or by gazing. Third, in navigation mode, the signal for moving the wheelchair is triggered according to the direction of gaze movement. Fourth, after a specific home appliance has been selected by gazing at it for more than a predetermined period, a control panel with a 3 × 2 menu is displayed on a laptop computer below the gaze tracking system for manipulation. The user gazes at one of the menu options for a specific period, which can be adjusted manually for each user, and the signal for controlling the home appliance is triggered. A series of experiments shows that the proposed method achieves high detection accuracy.
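The gaze-for-a-predetermined-period trigger described in this abstract is essentially a dwell timer. A minimal frame-based sketch (the class name and frame threshold are illustrative, not from the paper):

```python
class DwellSelector:
    """Fire a selection once the gaze stays on the same target for
    `dwell_frames` consecutive frames — a frame count standing in for
    the adjustable dwell period described in the abstract."""

    def __init__(self, dwell_frames=30):
        self.dwell_frames = dwell_frames
        self.target = None
        self.count = 0

    def update(self, target):
        """Feed one frame's gazed-at target (or None); return the selected
        target when the dwell threshold is reached, otherwise None."""
        if target is not None and target == self.target:
            self.count += 1
        else:  # gaze moved to a new target (or nowhere): restart the timer
            self.target = target
            self.count = 0 if target is None else 1
        if self.target is not None and self.count >= self.dwell_frames:
            self.count = 0  # re-arm so the same target can be selected again
            return self.target
        return None

sel = DwellSelector(dwell_frames=3)
print([sel.update("TV") for _ in range(3)])  # -> [None, None, 'TV']
```

Making the threshold per-user adjustable, as the abstract notes, is just a matter of setting `dwell_frames` (or an equivalent wall-clock duration) at configuration time.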
7
Abstract
SUMMARY: In this work, a vision-based control interface for commanding a robotic wheelchair is presented. The interface estimates the orientation angles of the user's head and translates these parameters into maneuver commands for different devices. The performance of the proposed interface is evaluated both in static experiments and when commanding the robotic wheelchair: the interface calculates the orientation angles and translates them into reference inputs for the wheelchair. A control architecture based on the dynamic model of the wheelchair is implemented to achieve safe navigation. Experimental results on the interface performance and the wheelchair navigation are presented.
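The angle-to-command translation this summary describes can be sketched as a simple threshold mapping. The command names, axis conventions, and dead-zone value below are assumptions for illustration only; the paper feeds the angles to a dynamic-model-based controller rather than a lookup like this:

```python
def head_to_command(yaw_deg, pitch_deg, dead_zone=10.0):
    """Translate estimated head-orientation angles into a discrete wheelchair
    maneuver command; small angles inside the dead zone mean 'stop'
    (hypothetical mapping: negative pitch = head tilted forward)."""
    if abs(yaw_deg) <= dead_zone and abs(pitch_deg) <= dead_zone:
        return "stop"
    if abs(yaw_deg) >= abs(pitch_deg):
        return "turn_right" if yaw_deg > 0 else "turn_left"
    return "forward" if pitch_deg < 0 else "backward"

print(head_to_command(0.0, 0.0))    # -> stop
print(head_to_command(25.0, 5.0))   # -> turn_right
print(head_to_command(2.0, -20.0))  # -> forward
```

A dead zone like this is the usual way such interfaces keep small involuntary head movements from issuing commands.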
8
Gonzalez M, Mulet D, Perez E, Soria C, Mut V. Vision based interface: an alternative tool for children with cerebral palsy. Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2010:5895-8. [PMID: 21096933] [DOI: 10.1109/iembs.2010.5627533]
Abstract
In this paper a new vision-based interface (VBI) for children with cerebral palsy is presented. The VBI is implemented for interaction between children and a computer: it detects and tracks the movement of the user's hand, foot, or head, and these movements are translated into movements of the cursor on the computer screen. The evaluation of the user-VBI system is based on the HAAT model. The experimental results present four case studies of children carrying out different tasks with the computer.
Affiliation(s)
- Magdalena Gonzalez
- Gabinete de Tecnología Médica, UNSJ, Av. San Martin Oeste 1109, 5400, Argentina