1
Jeon J, Noh YG, Kim J, Hong JH. Pre-AttentiveGaze: gaze-based authentication dataset with momentary visual interactions. Sci Data 2025;12:263. [PMID: 39948380] [PMCID: PMC11825865] [DOI: 10.1038/s41597-025-04538-3]
Abstract
This manuscript presents the Pre-AttentiveGaze dataset. One of the defining characteristics of gaze-based authentication is the need for a rapid response. In this study, we constructed a dataset for identifying individuals through eye movements by inducing "pre-attentive processing" in response to a given gaze stimulus within a very short time. A total of 76,840 eye movement samples were collected from 34 participants across five sessions. From the dataset, we extracted the gaze features proposed in previous studies, pre-processed them, and validated the dataset by applying machine learning models. This study demonstrates the efficacy of the dataset and illustrates its potential for gaze-based authentication using visual stimuli that elicit pre-attentive processing.
Affiliation(s)
- Junryeol Jeon
- Gwangju Institute of Science and Technology, School of Integrated Technology, Gwangju, 61005, Republic of Korea
- Yeo-Gyeong Noh
- Gwangju Institute of Science and Technology, School of Integrated Technology, Gwangju, 61005, Republic of Korea
- JooYeong Kim
- Gwangju Institute of Science and Technology, School of Integrated Technology, Gwangju, 61005, Republic of Korea
- Jin-Hyuk Hong
- Gwangju Institute of Science and Technology, School of Integrated Technology, Gwangju, 61005, Republic of Korea
- Gwangju Institute of Science and Technology, Artificial Intelligence Graduate School, Gwangju, 61005, Republic of Korea
2
Jin H, Chen S. Biometric Recognition Based on Recurrence Plot and InceptionV3 Model Using Eye Movements. IEEE J Biomed Health Inform 2023;27:5554-5563. [PMID: 37682647] [DOI: 10.1109/jbhi.2023.3313261]
Abstract
Eye movement signals are a novel feature for biometric recognition technology. However, existing approaches have not fully exploited the correlation structure of eye movement signals. To address this, we propose an eye movement biometric recognition model based on recurrence plot encoding and the InceptionV3 model. We first encode the original eye movement signal as a recurrence plot to obtain a 2-D image, which is then used as input to the InceptionV3 model for biometric recognition. Our experimental results on the GazeBaseV2.0 eye movement dataset demonstrate that the proposed model achieved a high biometric recognition accuracy of 96.58% ± 0.66% using the recurrence plot transformation of horizontal gaze position signals, surpassing the accuracy achieved by other models. With our proposed methods, horizontal gaze position signals outperform vertical gaze position signals for biometric recognition. Furthermore, recognition based on recurrence plot encoding is superior to that based on Markov transition fields and Gramian angular field transformations.
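The recurrence-plot encoding this abstract describes can be sketched in a few lines. The sketch below is a minimal 1-D form (no time-delay embedding); the threshold `eps` and the toy gaze trace are illustrative assumptions, not values from the paper:

```python
import numpy as np

def recurrence_plot(signal, eps):
    """Binary recurrence matrix: R[i, j] = 1 when |x_i - x_j| <= eps.

    Simplest 1-D form of the encoding; the paper's exact
    preprocessing and threshold choice are not specified here.
    """
    x = np.asarray(signal, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])  # pairwise distances
    return (dist <= eps).astype(np.uint8)   # 2-D image fed to a CNN

# Toy horizontal gaze-position trace (degrees), purely illustrative.
gaze_x = np.array([0.0, 0.1, 0.1, 2.0, 2.1, 0.05])
R = recurrence_plot(gaze_x, eps=0.2)
```

The resulting binary image is what would be resized and passed to an image classifier such as InceptionV3 in a pipeline of this kind.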
3
D’Amelio A, Patania S, Bursic S, Cuculo V, Boccignone G. Using Gaze for Behavioural Biometrics. Sensors (Basel) 2023;23:1262. [PMID: 36772302] [PMCID: PMC9920149] [DOI: 10.3390/s23031262]
Abstract
A principled approach to the analysis of eye movements for behavioural biometrics is laid down. The approach is grounded in foraging theory, which provides a sound basis for capturing the uniqueness of individual eye movement behaviour. We propose a composite Ornstein-Uhlenbeck process for quantifying the exploration/exploitation signature characterising foraging eye behaviour. The relevant parameters of the composite model, inferred from eye-tracking data via Bayesian analysis, are shown to yield a suitable feature set for biometric identification; the latter is accomplished via a classical classification technique. A proof of concept of the method is provided by measuring its identification performance on a publicly available dataset. Data and code for reproducing the analyses are made available. Overall, we argue that the approach offers a fresh view on both the analysis of eye-tracking data and prospective applications in this field.
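A plain (non-composite) Ornstein-Uhlenbeck process, the building block this abstract refers to, can be simulated with a standard Euler-Maruyama scheme. All parameter values below are illustrative assumptions; the paper's composite model and its Bayesian inference are beyond this sketch:

```python
import numpy as np

def simulate_ou(theta, mu, sigma, x0, dt, n_steps, rng):
    """Euler-Maruyama simulation of dX = theta*(mu - X) dt + sigma*dW.

    theta: mean-reversion rate, mu: long-run mean, sigma: noise scale.
    """
    x = np.empty(n_steps + 1)
    x[0] = x0
    for t in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))  # Wiener increment
        x[t + 1] = x[t] + theta * (mu - x[t]) * dt + sigma * dw
    return x

rng = np.random.default_rng(0)
# Gaze coordinate relaxing toward a fixation target at mu = 0.
path = simulate_ou(theta=4.0, mu=0.0, sigma=0.3, x0=2.0, dt=0.01,
                   n_steps=500, rng=rng)
```

In a biometric pipeline of this kind, the inverse problem is solved: theta, mu, and sigma are inferred per subject from recorded gaze and used as identification features.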
Affiliation(s)
- Alessandro D’Amelio
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
- Sabrina Patania
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
- Sathya Bursic
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
- Department of Psychology, University of Milano-Bicocca, Piazza dell’Ateneo Nuovo 1, 20126 Milan, Italy
- Vittorio Cuculo
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
- Giuseppe Boccignone
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
4
Alexiev K, Vakarelski T. Can Microsaccades Be Used for Biometrics? Sensors (Basel) 2022;23:89. [PMID: 36616687] [PMCID: PMC9824634] [DOI: 10.3390/s23010089]
Abstract
Human eyes are in constant motion. Even when we fix our gaze on a certain point, our eyes continue to move. During fixation on a point, scientists have distinguished three different fixational eye movements (FEM): microsaccades, drift, and tremor. The main goal of this paper is to investigate one of these FEMs, microsaccades, as a source of information for biometric analysis. The paper argues why microsaccades are preferred for biometric analysis over the other two fixational eye movements. The process of microsaccade extraction is described. Thirteen parameters are defined for microsaccade analysis, and their derivation is given. A gradient algorithm was used to solve the biometric problem, and an assessment was made of the weights of the different pairs of parameters in solving the biometric task.
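The extraction step this abstract mentions is typically velocity-based. The sketch below is a deliberately simplified fixed-threshold detector, a stand-in for the paper's pipeline (real detectors, e.g. Engbert-style, use adaptive noise-scaled thresholds); the threshold, minimum duration, and synthetic trace are all assumptions:

```python
import numpy as np

def detect_microsaccades(x, y, fs, vel_thresh=30.0, min_samples=3):
    """Flag runs where 2-D gaze speed exceeds a fixed threshold.

    x, y: gaze position in degrees; fs: sampling rate in Hz;
    vel_thresh: speed threshold in deg/s. Returns (start, end)
    sample-index pairs of candidate microsaccades.
    """
    vx = np.gradient(x) * fs            # deg/s, central differences
    vy = np.gradient(y) * fs
    speed = np.hypot(vx, vy)
    fast = speed > vel_thresh
    events, start = [], None
    for i, f in enumerate(fast):
        if f and start is None:
            start = i
        elif not f and start is not None:
            if i - start >= min_samples:  # enforce minimum duration
                events.append((start, i - 1))
            start = None
    if start is not None and len(fast) - start >= min_samples:
        events.append((start, len(fast) - 1))
    return events

# Synthetic 500 Hz fixation trace with one injected miniature movement.
fs = 500.0
x = np.zeros(50)
y = np.zeros(50)
x[20:24] = [0.1, 0.3, 0.5, 0.6]
events = detect_microsaccades(x, y, fs)
```

Parameters such as amplitude, peak velocity, and duration would then be computed per detected event for the biometric analysis.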
5
Liao H, Zhao W, Zhang C, Dong W. Exploring Eye Movement Biometrics in Real-World Activities: A Case Study of Wayfinding. Sensors (Basel) 2022;22:2949. [PMID: 35458933] [PMCID: PMC9030773] [DOI: 10.3390/s22082949]
Abstract
Eye movement biometrics can enable continuous verification for highly secure environments such as financial transactions and defense establishments, as well as a more personalized and tailored experience in gaze-based human–computer interactions. However, there are numerous challenges to recognizing people in real environments using eye movements, such as implicitness and stimulus independence. Taking wayfinding as a case study, this research investigates implicit and stimulus-independent eye movement biometrics in real-world situations. We collected 39 subjects’ eye movement data from real-world wayfinding experiments and derived five sets of eye movement features (the basic statistical, pupillary response, fixation density, fixation semantic and saccade encoding features). We adopted a random forest and performed biometric recognition for both identification and verification scenarios. The best accuracy we obtained in the identification scenario was 78% (equal error rate, EER = 6.3%) with the 10-fold classification and 64% (EER = 12.1%) with the leave-one-route-out classification. The best accuracy we achieved in the verification scenario was 89% (EER = 9.1%). Additionally, we tested performance across the 5 feature sets and 20 time window sizes. The results showed that the verification accuracy was insensitive to increases in the time window size. These findings are the first indication of the viability of performing implicit and stimulus-independent biometric recognition in real-world settings using wearable eye tracking.
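The feature-then-classify pattern this abstract describes can be sketched end to end. The feature list below is one plausible reading of the "basic statistical" set, and the nearest-centroid classifier is a tiny stand-in for the paper's random forest; everything here, including the synthetic data, is an illustrative assumption:

```python
import numpy as np

def basic_statistical_features(x, y, fs):
    """Window-level gaze features: speed mean/std/max plus dispersion.

    One plausible 'basic statistical' feature set; the paper's exact
    feature list is not reproduced here.
    """
    vx = np.gradient(x) * fs
    vy = np.gradient(y) * fs
    speed = np.hypot(vx, vy)
    dispersion = (x.max() - x.min()) + (y.max() - y.min())
    return np.array([speed.mean(), speed.std(), speed.max(), dispersion])

def nearest_centroid_identify(train_feats, train_labels, probe):
    """Assign the probe window to the subject with the closest mean
    feature vector (a simple stand-in for a random forest)."""
    labels = np.unique(train_labels)
    centroids = np.stack([train_feats[train_labels == c].mean(axis=0)
                          for c in labels])
    return labels[np.argmin(np.linalg.norm(centroids - probe, axis=1))]

rng = np.random.default_rng(1)

def synth_window(scale):
    """Random-walk gaze window; `scale` controls movement speed."""
    x = np.cumsum(rng.normal(0, scale, 200))
    y = np.cumsum(rng.normal(0, scale, 200))
    return basic_statistical_features(x, y, fs=100.0)

# Two synthetic 'subjects' with slow vs fast gaze dynamics.
train = np.stack([synth_window(0.01) for _ in range(5)]
                 + [synth_window(0.1) for _ in range(5)])
labels = np.array([0] * 5 + [1] * 5)
probe = synth_window(0.1)            # window from the 'fast' subject
pred = nearest_centroid_identify(train, labels, probe)
```

Windowing the continuous recording and classifying each window independently is also what makes the time-window-size analysis in the paper possible.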
Affiliation(s)
- Hua Liao
- School of Geographic Sciences, Hunan Normal University, Changsha 410081, China
- Hunan Key Laboratory of Geospatial Big Data Mining and Application, Hunan Normal University, Changsha 410081, China
- State Key Laboratory of Remote Sensing Science, Beijing Key Laboratory for Remote Sensing of Environment and Digital Cities, Faculty of Geographical Science, Beijing Normal University, Beijing 100875, China
- Wendi Zhao
- School of Geographic Sciences, Hunan Normal University, Changsha 410081, China
- Hunan Key Laboratory of Geospatial Big Data Mining and Application, Hunan Normal University, Changsha 410081, China
- Changbo Zhang
- School of Geographic Sciences, Hunan Normal University, Changsha 410081, China
- Hunan Key Laboratory of Geospatial Big Data Mining and Application, Hunan Normal University, Changsha 410081, China
- Weihua Dong
- State Key Laboratory of Remote Sensing Science, Beijing Key Laboratory for Remote Sensing of Environment and Digital Cities, Faculty of Geographical Science, Beijing Normal University, Beijing 100875, China
- Correspondence: Tel.: +86-010-5880-9246
6

7
Design and development of an EOG-based simplified Chinese eye-writing system. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.101767]
8
Sluganovic I, Roeschlin M, Rasmussen KB, Martinovic I. Analysis of Reflexive Eye Movements for Fast Replay-Resistant Biometric Authentication. ACM Trans Priv Secur 2019. [DOI: 10.1145/3281745]
Abstract
Eye tracking devices have recently become increasingly popular as an interface between people and consumer-grade electronic devices. Because human eyes are fast, responsive, and carry information unique to an individual, analyzing a person’s gaze is particularly attractive for rapid biometric authentication. Unfortunately, previous proposals for gaze-based authentication systems either suffer from high error rates or require long authentication times.
We build on the fact that some eye movements can be reflexively and predictably triggered and develop an interactive visual stimulus for elicitation of reflexive eye movements that support the extraction of reliable biometric features in a matter of seconds, without requiring any memorization or cognitive effort on the part of the user. As an important benefit, our stimulus can be made unique for every authentication attempt and thus incorporated in a challenge-response biometric authentication system. This allows us to prevent replay attacks, which are possibly the most applicable attack vectors against biometric authentication.
Using a gaze tracking device, we build a prototype of our system and perform a series of systematic user experiments with 30 participants from the general public. We thoroughly analyze various system parameters and evaluate the performance and security guarantees under several different attack scenarios. The results show that our system matches or surpasses existing gaze-based authentication methods in achieved equal error rates (6.3%) while achieving significantly lower authentication times (5s).
9
Biometric recognition via texture features of eye movement trajectories in a visual searching task. PLoS One 2018;13:e0194475. [PMID: 29617383] [PMCID: PMC5884501] [DOI: 10.1371/journal.pone.0194475]
Abstract
Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction methods and feature recognition methods have been proposed to improve the performance of eye movement biometric systems. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and eye trackers' temporal and spatial resolution, are still the foremost considerations in eye movement biometrics. With a focus on these issues, we propose a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. To demonstrate the improvement gained from using this visual searching task in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results, as expected. In addition, the biometric performance of these four feature extraction methods was compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer advantages in long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases.
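The score-level fusion mentioned at the end of this abstract is commonly done by normalizing each matcher's scores and taking a weighted sum. The sketch below shows that recipe; it is one common choice, not necessarily the paper's exact fusion rule, and the two score lists are made-up examples:

```python
import numpy as np

def fuse_scores(score_lists, weights=None):
    """Weighted-sum score-level fusion after per-method min-max
    normalization. Each inner list holds one matcher's scores for
    the same set of candidate identities (higher = better match)."""
    normed = []
    for s in score_lists:
        s = np.asarray(s, dtype=float)
        lo, hi = s.min(), s.max()
        normed.append((s - lo) / (hi - lo) if hi > lo
                      else np.zeros_like(s))
    normed = np.stack(normed)
    if weights is None:  # equal weights by default
        weights = np.full(len(score_lists), 1.0 / len(score_lists))
    return weights @ normed

# Hypothetical match scores for 4 candidates from two matchers
# (e.g. a texture-feature matcher and a velocity-feature matcher).
texture_scores = [0.2, 0.9, 0.4, 0.1]
velocity_scores = [10.0, 55.0, 60.0, 5.0]
fused = fuse_scores([texture_scores, velocity_scores])
best = int(np.argmax(fused))
```

Min-max normalization matters here because the two matchers score on entirely different scales; without it the larger-scaled matcher would dominate the sum.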
10
Abstract
How people look at visual information reveals fundamental information about them: their interests and their states of mind. Previous studies showed that a scanpath, i.e., the sequence of eye movements made by an observer exploring a visual stimulus, can be used to infer observer-related (e.g., the task at hand) and stimuli-related (e.g., image semantic category) information. However, eye movements are complex signals, and many of these studies rely on limited gaze descriptors and bespoke datasets. Here, we provide a turnkey method for scanpath modeling and classification. This method relies on variational hidden Markov models (HMMs) and discriminant analysis (DA). HMMs encapsulate the dynamic and individualistic dimensions of gaze behavior, allowing DA to capture systematic patterns diagnostic of a given class of observers and/or stimuli. We test our approach on two very different datasets. Firstly, we use fixations recorded while viewing 800 static natural scene images and infer an observer-related characteristic: the task at hand. We achieve an average of 55.9% correct classification rate (chance = 33%). We show that correct classification rates positively correlate with the number of salient regions present in the stimuli. Secondly, we use eye positions recorded while viewing 15 conversational videos and infer a stimulus-related characteristic: the presence or absence of the original soundtrack. We achieve an average 81.2% correct classification rate (chance = 50%). HMMs make it possible to integrate bottom-up, top-down, and oculomotor influences into a single model of gaze behavior. This synergistic approach between behavior and machine learning will open new avenues for simple quantification of gazing behavior. We release SMAC with HMM, a Matlab toolbox freely available to the community under an open-source license agreement.
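The core of HMM-based scanpath classification is scoring an observed fixation sequence under per-class models and picking the likelier one. The sketch below implements the standard forward algorithm for a discrete-emission HMM over regions of interest (ROIs); it is a minimal stand-in for the variational-HMM-plus-DA pipeline, and all states, ROIs, and probabilities are made-up examples:

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Forward algorithm in log space for a discrete-emission HMM.

    obs: sequence of ROI indices; pi: initial state probs (K,);
    A: transition matrix (K, K); B: emission matrix (K, M).
    Returns log P(obs | model).
    """
    log_alpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        # Sum over previous states (log-sum-exp), then emit symbol o.
        m = log_alpha.max()
        log_alpha = (np.log(np.exp(log_alpha - m) @ A) + m
                     + np.log(B[:, o]))
    m = log_alpha.max()
    return m + np.log(np.exp(log_alpha - m).sum())

# Two hypothetical observer classes over 2 hidden states and 3 ROIs.
pi = np.array([0.6, 0.4])
A_stay = np.array([[0.9, 0.1], [0.1, 0.9]])   # "dwelling" observer
A_jump = np.array([[0.2, 0.8], [0.8, 0.2]])   # "scanning" observer
B = np.array([[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]])
seq = [0, 0, 0, 0, 2, 2, 2, 2]                # long dwells per ROI
pred = ("dwell" if log_likelihood(seq, pi, A_stay, B)
                   > log_likelihood(seq, pi, A_jump, B) else "scan")
```

The paper's approach is richer (Gaussian emissions over fixation positions, variational fitting, and DA on model parameters), but the likelihood comparison above is the classification backbone.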
Affiliation(s)
- Janet H Hsiao
- Department of Psychology, The University of Hong Kong, Pok Fu Lam, Hong Kong
- Antoni B Chan
- Department of Computer Science, City University of Hong Kong, Kowloon Tong, Hong Kong
11
Galdi C, Nappi M, Riccio D, Wechsler H. Eye movement analysis for human authentication: a critical survey. Pattern Recognit Lett 2016. [DOI: 10.1016/j.patrec.2016.11.002]
12

13

14

15
Abstract
Eye movements are a relatively novel data source for biometric identification. As the video cameras used for eye tracking become smaller and more efficient, this data source could offer interesting opportunities for the development of eye movement biometrics. In this paper, we primarily study biometric identification, viewed as a multi-class classification task, and secondarily biometric verification, considered as binary classification. Our research is based on saccadic eye movement measurements from 109 young subjects. To test the measured data, we use a procedure of biometric identification following the one-versus-one (subject) principle. In a development from our previous research, which also involved biometric verification based on saccadic eye movements, we now apply an eye movement tracker with a higher sampling frequency of 250 Hz. The results obtained are good, with correct identification rates of 80-90% at their best.
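The one-versus-one principle this abstract mentions reduces multi-class identification to pairwise decisions plus a majority vote. The sketch below shows that voting scheme with toy pairwise classifiers (nearest of two class centers on a 1-D feature); the subjects, centers, and probe value are illustrative assumptions:

```python
from itertools import combinations

def one_vs_one_predict(pairwise_classifiers, classes, sample):
    """Majority vote over all C(n, 2) pairwise classifiers.

    pairwise_classifiers maps a class pair (a, b) to a function
    returning the winning class for `sample`; the class with the
    most pairwise wins is the identified subject.
    """
    votes = {c: 0 for c in classes}
    for clf in pairwise_classifiers.values():
        votes[clf(sample)] += 1
    return max(votes, key=votes.get)

# Toy 1-D 'saccade feature' centers for three hypothetical subjects.
centers = {"s1": 0.0, "s2": 5.0, "s3": 10.0}
pairwise = {}
for a, b in combinations(centers, 2):
    # Each pairwise classifier picks the nearer of its two centers.
    pairwise[(a, b)] = (lambda s, a=a, b=b:
                        a if abs(s - centers[a]) <= abs(s - centers[b])
                        else b)

pred = one_vs_one_predict(pairwise, list(centers), sample=4.2)
```

With 109 subjects this scheme trains C(109, 2) = 5,886 pairwise classifiers, which is the practical cost of the one-versus-one design.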
16
Abstract
In today's world, the identity of human beings has expanded beyond the real world to the cyber world. The virtual identities of millions of users are present on various web-based Social Networking Sites (SNSs) such as Myspace, Facebook, and Twitter. Interactions through SNSs have become part of our daily practices, which leaves a large trail of behavioral patterns in the virtual domain. In this paper, the authors examine the feasibility of person identification using such social network activities as behavioral biometrics. Experimentation includes the extraction of a number of idiosyncratic features from SNSs and analysis of their performance as novel social behavioral biometric features.
Affiliation(s)
- Madeena Sultana
- Department of Computer Science, University of Calgary, 2500 University DR NW, Calgary, Alberta, T2N1N4, Canada
- Padma Polash Paul
- Department of Computer Science, University of Calgary, 2500 University DR NW, Calgary, Alberta, T2N1N4, Canada
- Marina Gavrilova
- Department of Computer Science, University of Calgary, 2500 University DR NW, Calgary, Alberta, T2N1N4, Canada
17
Abstract
The human eye is rich in physical and behavioral attributes that can be used for automatic person recognition. Physical attributes such as the iris attracted early attention and yielded significant recognition results, but like most physical biometrics, they have several disadvantages, such as intrusive acquisition and vulnerability to spoofing attacks. Consequently, during the last decade the behavioral attributes extracted from human eyes have steadily gained interest from the automatic person recognition research community. In this first-of-its-kind survey, we present the studies utilizing the behavioral attributes of human eyes for automatic person recognition. We propose a unique classification based on the type of stimuli used to elicit behavioral attributes. In addition, for each approach we carefully examine the common steps involved in automatic person recognition, from database acquisition and feature extraction to classification. Lastly, we also present a comparison of the recognition results obtained by each approach.
Affiliation(s)
- Usman Saeed
- Faculty of Computing and Information Technology, North Jeddah, King Abdulaziz University, Jeddah, Saudi Arabia