1
Wisher I, Pettitt P, Kentridge R. The deep past in the virtual present: developing an interdisciplinary approach towards understanding the psychological foundations of palaeolithic cave art. Sci Rep 2023; 13:19009. PMID: 37923922; PMCID: PMC10624876; DOI: 10.1038/s41598-023-46320-8.
Abstract
Virtual Reality (VR) has vast potential for developing systematic, interdisciplinary studies to understand ephemeral behaviours in the archaeological record, such as the emergence and development of visual culture. Upper Palaeolithic cave art forms the most robust record for investigating this; the methods of its production, its themes, and its temporal and spatial changes have been researched extensively, but without consensus over its functions or meanings. More compelling arguments draw from visual psychology and posit that the immersive, dark conditions of caves elicited particular psychological responses, resulting in the perception, and depiction, of animals on suggestive features of cave walls. Our research developed and piloted a novel VR experiment that allowed participants to perceive 3D models of cave walls from El Castillo cave (Cantabria, Spain), with the Palaeolithic art digitally removed. Results indicate that modern participants' visual attention corresponded to the same topographic features of cave walls utilised by Palaeolithic artists, and that they perceived such features as resembling animals. Although preliminary, our results support the hypothesis that pareidolia, a product of our cognitive evolution, was a key mechanism in Palaeolithic art making, and they demonstrate both the potential of interdisciplinary VR research for understanding the evolution of art and the efficacy of the methodology.
Affiliation(s)
- Izzy Wisher
- Department of Linguistics, Cognitive Science and Semiotics, Aarhus University, Aarhus, Denmark.
- Department of Archaeology and Heritage Studies, Aarhus University, Aarhus, Denmark.
- Paul Pettitt
- Department of Archaeology, Durham University, Durham, UK
2
Reeves SM, Otero-Millan J. The influence of scene tilt on saccade directions is amplitude dependent. J Neurol Sci 2023; 448:120635. PMID: 37031623; DOI: 10.1016/j.jns.2023.120635.
Abstract
When exploring a visual scene, humans make more saccades in the horizontal direction than in any other direction. While many have shown that the horizontal saccade bias rotates in response to scene tilt, it is unclear whether this effect depends on saccade amplitude. We addressed this question by examining the effect of image tilt on the saccade direction distributions recorded during free viewing of natural scenes. Participants (n = 20) viewed scenes tilted at -30°, 0°, and 30°. Saccade distributions during free viewing rotated by an angle of 12.1° ± 6.7° (t(19) = 8.04, p < 0.001) in the direction of the image tilt. When we partitioned the saccades according to their amplitude, we found that small-amplitude saccades occurred mostly in the horizontal direction while large-amplitude saccades were oriented more toward the scene tilt (p < 0.001). To further study the characteristics of small saccades and how they are affected by scene tilt, we looked at the effect of image tilt on small fixational saccades made while fixating a central target amidst a larger scene and found that fixational saccade distributions did not rotate with scene tilt (-0.3° ± 1.7°; t(19) = -0.8, p = 0.39). These results suggest a combined effect of two reference frames in saccade generation: an egocentric reference frame that dominates for small saccades, biases them horizontally, and may be common to different tasks, and an allocentric reference frame that biases larger saccades along the orientation of an image during free viewing.
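The amplitude split described above can be quantified with circular statistics: saccade-direction histograms are roughly axial (peaks 180° apart), so the doubled-angle trick recovers the dominant axis per amplitude bin. A minimal numpy sketch on synthetic data; the 5° amplitude split and the distribution parameters are illustrative assumptions, not the paper's values:

```python
import numpy as np

def distribution_axis(directions_deg):
    """Dominant axis (0-180 deg) of a saccade-direction distribution.

    Because the distribution is roughly axial, we double each angle,
    take the circular mean, and halve the result.
    """
    doubled = np.deg2rad(np.asarray(directions_deg)) * 2.0
    mean = np.arctan2(np.sin(doubled).mean(), np.cos(doubled).mean())
    return (np.rad2deg(mean) / 2.0) % 180.0

# Synthetic demo of the amplitude effect: small saccades keep the
# horizontal bias, large saccades follow a 30 deg scene tilt.
rng = np.random.default_rng(0)
n = 2000
amplitudes = rng.uniform(0.5, 15.0, n)            # deg of visual angle
small = amplitudes < 5.0                          # illustrative split
directions = np.where(small,
                      rng.normal(0.0, 10.0, n),   # horizontally biased
                      rng.normal(30.0, 10.0, n))  # rotated with the scene
directions = directions + 180.0 * (rng.random(n) < 0.5)  # mirror half

small_axis = distribution_axis(directions[small])
large_axis = distribution_axis(directions[~small])
```

Comparing `small_axis` (near the horizontal) with `large_axis` (near the tilt) reproduces the qualitative pattern the abstract reports.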
3
Bischof WF, Anderson NC, Kingstone A. Eye and head movements while encoding and recognizing panoramic scenes in virtual reality. PLoS One 2023; 18:e0282030. PMID: 36800398; PMCID: PMC9937482; DOI: 10.1371/journal.pone.0282030.
Abstract
One approach to studying the recognition of scenes and objects relies on the comparison of eye movement patterns during encoding and recognition. Past studies typically analyzed the perception of flat stimuli of limited extent presented on a computer monitor that did not require head movements. In contrast, participants in the present study saw omnidirectional panoramic scenes through an immersive 3D virtual reality viewer, and they could move their heads freely to inspect different parts of the visual scenes. This allowed us to examine how unconstrained observers use their head and eyes to encode and recognize visual scenes. By studying head and eye movement within a fully immersive environment, and applying cross-recurrence analysis, we found that eye movements are strongly influenced by the content of the visual environment, as are head movements, though to a much lesser degree. Moreover, we found that the head and eyes are linked, with the head supporting, and by and large mirroring, the movements of the eyes, consistent with the notion that the head operates to support the acquisition of visual information by the eyes.
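Cross-recurrence analysis, mentioned above, quantifies how closely two gaze streams visit the same directions. A minimal numpy sketch, under the assumption that eye and head orientations are given as (azimuth, elevation) samples in degrees; the 15° radius and the synthetic data are illustrative choices, not the paper's parameters:

```python
import numpy as np

def cross_recurrence(a, b, radius_deg):
    """Cross-recurrence matrix for two direction time series.

    a, b: (n, 2) arrays of (azimuth, elevation) in degrees.
    R[i, j] is True when sample i of `a` lies within `radius_deg`
    (great-circle distance) of sample j of `b`.
    """
    def to_vec(ang):
        az, el = np.deg2rad(ang[:, 0]), np.deg2rad(ang[:, 1])
        return np.stack([np.cos(el) * np.cos(az),
                         np.cos(el) * np.sin(az),
                         np.sin(el)], axis=1)
    va, vb = to_vec(a), to_vec(b)
    cosd = np.clip(va @ vb.T, -1.0, 1.0)
    return np.rad2deg(np.arccos(cosd)) < radius_deg

# Synthetic demo: the head roughly mirrors the eyes, with some lag noise.
rng = np.random.default_rng(1)
eye = rng.uniform([-60, -30], [60, 30], size=(200, 2))
head = eye + rng.normal(0, 5, size=(200, 2))
R = cross_recurrence(eye, head, radius_deg=15)
```

The recurrence rate (the mean of `R`, or of its diagonal for time-locked coupling) then summarizes how strongly one stream tracks the other.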
Affiliation(s)
- Walter F. Bischof
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada
- Nicola C. Anderson
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada
- Alan Kingstone
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada
4
Abstract
This chapter explores the current state of the art in eye tracking within 3D virtual environments. It begins with the motivation for eye tracking in Virtual Reality (VR) in psychological research, followed by descriptions of the hardware and software used for presenting virtual environments as well as for tracking eye and head movements in VR. This is followed by a detailed description of an example project on eye and head tracking while observers look at 360° panoramic scenes. The example is illustrated with descriptions of the user interface and program excerpts to show the measurement of eye and head movements in VR. The chapter continues with fundamentals of data analysis, in particular methods for the determination of fixations and saccades when viewing spherical displays. We then extend these methodological considerations to determining the spatial and temporal coordination of the eyes and head in VR perception. The chapter concludes with a discussion of outstanding problems and future directions for conducting eye- and head-tracking research in VR. We hope that this chapter will serve as a primer for those intending to implement VR eye tracking in their own research.
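The determination of fixations and saccades on spherical displays, discussed above, is commonly handled with a velocity threshold computed on the sphere itself rather than on a planar projection. A hedged sketch, not the chapter's own code; the 100 deg/s threshold is a conventional I-VT choice:

```python
import numpy as np

def classify_saccade_samples(gaze_vecs, timestamps, threshold_deg_s=100.0):
    """Label each inter-sample interval as saccade (True) or fixation (False).

    gaze_vecs: (n, 3) gaze direction vectors in world coordinates
    (eye-in-head combined with head pose, as a VR eye tracker delivers).
    Velocity is the great-circle angle between successive samples divided
    by the sample interval, avoiding planar-projection distortions.
    """
    v = gaze_vecs / np.linalg.norm(gaze_vecs, axis=1, keepdims=True)
    cosang = np.clip(np.sum(v[:-1] * v[1:], axis=1), -1.0, 1.0)
    angle_deg = np.rad2deg(np.arccos(cosang))
    return angle_deg / np.diff(timestamps) > threshold_deg_s

# Synthetic 90 Hz trace: fixation, a rapid 20 deg horizontal sweep, fixation.
az = np.deg2rad(np.r_[np.zeros(10), np.linspace(4.0, 20.0, 5), np.full(15, 20.0)])
gaze = np.stack([np.cos(az), np.sin(az), np.zeros_like(az)], axis=1)
t = np.arange(len(az)) / 90.0
labels = classify_saccade_samples(gaze, t)
```

Consecutive True labels would then be merged into saccade events, and the remaining runs into fixations.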
5
Head Orientation Influences Saccade Directions during Free Viewing. eNeuro 2022; 9:ENEURO.0273-22.2022. PMID: 36351820; PMCID: PMC9787809; DOI: 10.1523/eneuro.0273-22.2022.
Abstract
When looking around a visual scene, humans make saccadic eye movements to fixate objects of interest. While the extraocular muscles can execute saccades in any direction, not all saccade directions are equally likely: saccades in horizontal and vertical directions are most prevalent. Here, we asked whether head orientation plays a role in determining saccade direction biases. Study participants (n = 14) viewed natural scenes and abstract fractals (radially symmetric patterns) through a virtual reality headset equipped with eye tracking. Participants' heads were stabilized and tilted at -30°, 0°, or 30° while viewing the images, which could also be tilted by -30°, 0°, or 30° relative to the head. To determine whether the biases in saccade direction changed with head tilt, we calculated polar histograms of saccade directions and cross-correlated pairs of histograms to find the angular displacement resulting in the maximum correlation. During free viewing of fractals, saccade biases largely followed the orientation of the head, with an average displacement of 24° when comparing head upright to head tilt in world-referenced coordinates (t(13) = 17.63, p < 0.001). There was a systematic offset of 2.6° in saccade directions, likely reflecting ocular counter-roll (OCR; t(13) = 3.13, p = 0.008). When participants viewed an Earth-upright natural scene during head tilt, we found that the orientation of the head still influenced saccade directions (t(13) = 3.7, p = 0.001). These results suggest that nonvisual information about head orientation, such as that acquired by vestibular sensors, likely plays a role in saccade generation.
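The histogram cross-correlation step described above can be sketched directly: build polar histograms of two saccade-direction sets, circularly shift one against the other, and report the shift with maximum correlation. A minimal numpy sketch; the 5° bin width and the synthetic horizontally-biased data are illustrative assumptions:

```python
import numpy as np

def histogram_rotation(dirs_a, dirs_b, bin_width=5):
    """Angular displacement (deg) of direction set b relative to set a.

    Circularly cross-correlates the two polar histograms and returns the
    shift that maximizes the correlation, mapped into (-180, 180].
    """
    edges = np.arange(0, 360 + bin_width, bin_width)
    ha, _ = np.histogram(np.asarray(dirs_a) % 360, bins=edges)
    hb, _ = np.histogram(np.asarray(dirs_b) % 360, bins=edges)
    corr = [np.dot(ha, np.roll(hb, -k)) for k in range(len(ha))]
    shift = int(np.argmax(corr)) * bin_width
    return shift - 360 if shift > 180 else shift

# Horizontally biased saccade directions and a copy rotated by 30 deg.
rng = np.random.default_rng(2)
base = np.concatenate([rng.normal(0, 15, 500), rng.normal(180, 15, 500)])
rotation = histogram_rotation(base, base + 30)
```

The resolution of the recovered displacement is limited by the bin width, so finer bins trade noise robustness for angular precision.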
6
Nuthmann A, Thibaut M, Tran THC, Boucart M. Impact of neovascular age-related macular degeneration on eye-movement control during scene viewing: Viewing biases and guidance by visual salience. Vision Res 2022; 201:108105. PMID: 36081228; DOI: 10.1016/j.visres.2022.108105.
Abstract
Human vision requires us to analyze the visual periphery to decide where to fixate next. In the present study, we investigated this process in people with age-related macular degeneration (AMD). In particular, we examined viewing biases and the extent to which visual salience guides fixation selection during free-viewing of naturalistic scenes. We used an approach combining generalized linear mixed modeling (GLMM) with a-priori scene parcellation. This method allows one to investigate group differences in terms of scene coverage and observers' well-known tendency to look at the center of scene images. Moreover, it allows for testing whether image salience influences fixation probability above and beyond what can be accounted for by the central bias. Compared with age-matched normally sighted control subjects (and young subjects), AMD patients' viewing behavior was less exploratory, with a stronger central fixation bias. All three subject groups showed a salience effect on fixation selection-higher-salience scene patches were more likely to be fixated. Importantly, the salience effect for the AMD group was of similar size as the salience effect for the control group, suggesting that guidance by visual salience was still intact. The variances for by-subject random effects in the GLMM indicated substantial individual differences. A separate model exclusively considered the AMD data and included fixation stability as a covariate, with the results suggesting that reduced fixation stability was associated with a reduced impact of visual salience on fixation selection.
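The a-priori scene parcellation underlying the GLMM can be sketched as building one record per scene patch: whether it was fixated (the binomial response), its mean salience, and its distance from the image centre (the central-bias covariate). Fitting the mixed model itself, with by-subject random effects, would be done in a mixed-models package and is omitted here; the grid size and covariates are illustrative assumptions, not the paper's exact parcellation:

```python
import numpy as np

def patch_table(fixations, salience_map, grid=(6, 8)):
    """Per-patch records for a fixation-selection analysis.

    fixations: iterable of (x, y) pixel positions.
    Returns (fixated, mean_salience, distance_from_centre) per patch.
    """
    h, w = salience_map.shape
    rows, cols = grid
    ys = np.linspace(0, h, rows + 1).astype(int)
    xs = np.linspace(0, w, cols + 1).astype(int)
    table = []
    for r in range(rows):
        for c in range(cols):
            patch = salience_map[ys[r]:ys[r + 1], xs[c]:xs[c + 1]]
            cy, cx = (ys[r] + ys[r + 1]) / 2.0, (xs[c] + xs[c + 1]) / 2.0
            hit = any(ys[r] <= y < ys[r + 1] and xs[c] <= x < xs[c + 1]
                      for x, y in fixations)
            table.append((hit, patch.mean(),
                          np.hypot(cx - w / 2, cy - h / 2)))
    return table

salience = np.zeros((60, 80))               # placeholder salience map
fixations = [(10.0, 5.0), (70.0, 55.0)]     # (x, y) in pixels
records = patch_table(fixations, salience)
```

Including the centre-distance covariate in the model is what allows a salience effect to be tested above and beyond the central bias.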
Affiliation(s)
- Antje Nuthmann
- Institute of Psychology, University of Kiel, Kiel, Germany.
- Miguel Thibaut
- University of Lille, Lille Neuroscience & Cognition, INSERM, Lille, France
- Thi Ha Chau Tran
- University of Lille, Lille Neuroscience & Cognition, INSERM, Lille, France; Ophthalmology Department, Lille Catholic Hospital, Catholic University of Lille, Lille, France
- Muriel Boucart
- University of Lille, Lille Neuroscience & Cognition, INSERM, Lille, France.
7
David EJ, Lebranchu P, Perreira Da Silva M, Le Callet P. What are the visuo-motor tendencies of omnidirectional scene free-viewing in virtual reality? J Vis 2022; 22:12. PMID: 35323868; PMCID: PMC8963670; DOI: 10.1167/jov.22.4.12.
Abstract
Central and peripheral vision during visual tasks have been extensively studied on two-dimensional screens, highlighting their perceptual and functional disparities. This study has two objectives: replicating on-screen gaze-contingent experiments that removed the central or peripheral field of view, this time in virtual reality, and identifying visuo-motor biases specific to the exploration of 360° scenes with a wide field of view. Our results are useful for vision modelling, with applications in gaze position prediction (e.g., content compression and streaming). We ask how previous on-screen findings translate to conditions where observers can use their head to explore stimuli. We implemented a gaze-contingent paradigm to simulate loss of vision in virtual reality; participants could freely view omnidirectional natural scenes. This protocol allows the simulation of vision loss with an extended field of view (>80°) and the study of the head's contributions to visual attention. The time course of visuo-motor variables in our pure free-viewing task reveals long fixations and short saccades during the first seconds of exploration, contrary to the literature on visual tasks guided by instructions. We show that the effect of vision loss is reflected primarily in eye movements, in a manner consistent with the two-dimensional screen literature. We hypothesize that head movements mainly serve to explore the scenes during free viewing; the presence of masks did not significantly impact head-scanning behaviours. We present new fixational and saccadic visuo-motor tendencies in a 360° context that we hope will help in the creation of gaze prediction models dedicated to virtual reality.
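A central or peripheral mask of the kind described can be sketched as a per-frame boolean mask centred on the current gaze point. This is a minimal 2D sketch; the pixel radius is a simplifying assumption, since a real VR implementation would work in visual angles on the sphere and convert via the headset's pixels-per-degree:

```python
import numpy as np

def gaze_contingent_mask(shape, gaze_px, radius_px, mode="central"):
    """Boolean mask simulating vision loss at the current gaze point.

    mode="central" blanks everything within radius_px of gaze (a simulated
    central scotoma); mode="peripheral" blanks everything outside it
    (tunnel vision). True marks pixels to occlude; the mask is rebuilt
    every frame as gaze moves.
    """
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    dist = np.hypot(xx - gaze_px[0], yy - gaze_px[1])
    return dist <= radius_px if mode == "central" else dist > radius_px

central = gaze_contingent_mask((600, 800), gaze_px=(400, 300), radius_px=100)
peripheral = gaze_contingent_mask((600, 800), (400, 300), 100, mode="peripheral")
```

The two modes are exact complements, so a frame would be blanked at `frame[mask] = background` with whichever mask the condition requires.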
Affiliation(s)
- Erwan Joël David
- Department of Psychology, Goethe-Universität, Frankfurt, Germany.
- Pierre Lebranchu
- LS2N UMR CNRS 6004, University of Nantes and Nantes University Hospital, Nantes, France.
- Patrick Le Callet
- LS2N UMR CNRS 6004, University of Nantes, Nantes, France. http://pagesperso.ls2n.fr/~lecallet-p/index.html
8
Mahanama B, Jayawardana Y, Rengarajan S, Jayawardena G, Chukoskie L, Snider J, Jayarathna S. Eye Movement and Pupil Measures: A Review. Front Comput Sci 2022. DOI: 10.3389/fcomp.2021.733531.
Abstract
Our subjective visual experiences involve a complex interaction between our eyes, our brain, and the surrounding world. This interaction gives us the sense of sight: color, stereopsis, distance, pattern recognition, motor coordination, and more. The increasing ubiquity of gaze-aware technology brings with it the ability to track gaze and pupil measures with varying degrees of fidelity. With this in mind, a review that considers the various gaze measures becomes increasingly relevant, especially given our ability to make sense of these signals under different spatio-temporal sampling capacities. In this paper, we selectively review prior work on eye movement and pupil measures. We first describe the main oculomotor events studied in the literature and the characteristics of those events exploited by different measures. Next, we review various eye movement and pupil measures from the prior literature. Finally, we discuss our observations based on applications of these measures, the benefits and practical challenges involved, and our recommendations on future eye-tracking research directions.
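Among the pupil measures such a review covers, subtractive baseline correction is one of the most common. A minimal sketch; the window bounds and the synthetic step-shaped trace are illustrative assumptions, not values from the review:

```python
import numpy as np

def baseline_correct(pupil, t, window=(-0.5, 0.0)):
    """Subtractive baseline correction of a pupil-size trace.

    Subtracts the mean pupil size in a pre-stimulus window (here the
    500 ms before onset at t = 0) so that dilation is expressed relative
    to baseline and traces are comparable across trials.
    """
    pupil, t = np.asarray(pupil, float), np.asarray(t, float)
    in_window = (t >= window[0]) & (t < window[1])
    return pupil - pupil[in_window].mean()

# Trial sampled at 100 Hz: pupil 3.0 mm pre-stimulus, 3.5 mm after onset.
t = np.arange(-0.5, 1.0, 0.01)
trace = np.where(t < 0, 3.0, 3.5)
corrected = baseline_correct(trace, t)
```

Divisive correction (dividing by the baseline mean instead of subtracting it) is the usual alternative when pupil sizes differ strongly across participants.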
9
David EJ, Beitner J, Võ MLH. The importance of peripheral vision when searching 3D real-world scenes: A gaze-contingent study in virtual reality. J Vis 2021; 21:3. PMID: 34251433; PMCID: PMC8287039; DOI: 10.1167/jov.21.7.3.
Abstract
Visual search in natural scenes is a complex task relying on peripheral vision to detect potential targets and central vision to verify them. This segregation of the visual fields has been established largely by on-screen experiments. We conducted a gaze-contingent experiment in virtual reality to test how the established roles of central and peripheral vision translate to more natural settings. The use of everyday scenes in virtual reality allowed us to study visual attention with a fairly ecological protocol that cannot be implemented in the real world. Central or peripheral vision was masked during visual search, with target objects selected according to scene-semantic rules. Analyzing the resulting search behavior, we found that target objects that were not spatially constrained to a probable location within the scene impacted search measures negatively. Our results diverge from on-screen studies in that search performance was only slightly affected by central vision loss. In particular, a central mask did not impact verification times when the target was grammatically constrained to an anchor object. Our findings demonstrate that the role of central vision (up to 6 degrees of eccentricity) in identifying objects in natural scenes seems to be minor, while the role of peripheral preprocessing of targets in immersive real-world searches may have been underestimated by on-screen experiments.
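The 6° figure above is an eccentricity relative to the current gaze direction; in VR it is naturally computed between 3D direction vectors rather than screen pixels. A minimal sketch of that computation (the shared coordinate frame and vector representation are assumptions of this sketch, not the study's implementation):

```python
import numpy as np

def eccentricity_deg(gaze_dir, object_dir):
    """Angle (deg) between the 3D gaze direction and an object's direction.

    This is the eccentricity that drives a gaze-contingent mask: with a
    6 deg central mask, objects at less than 6 deg of eccentricity fall in
    the occluded central field. Vector lengths are irrelevant; only the
    directions matter.
    """
    g, o = np.asarray(gaze_dir, float), np.asarray(object_dir, float)
    cos = np.dot(g, o) / (np.linalg.norm(g) * np.linalg.norm(o))
    return np.rad2deg(np.arccos(np.clip(cos, -1.0, 1.0)))

straight = eccentricity_deg([0.0, 0.0, 1.0], [0.0, 0.0, 2.0])
off_axis = eccentricity_deg(
    [0.0, 0.0, 1.0],
    [np.sin(np.deg2rad(10)), 0.0, np.cos(np.deg2rad(10))])
```

Comparing the result against the mask radius decides, frame by frame, whether an object is available to central or only to peripheral vision.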
Affiliation(s)
- Erwan Joël David
- Department of Psychology, Goethe-Universität, Frankfurt, Germany.
- Julia Beitner
- Department of Psychology, Goethe-Universität, Frankfurt, Germany.