1
Wyche NJ, Edwards M, Goodhew SC. Openness to experience predicts eye movement behavior during scene viewing. Atten Percept Psychophys 2024; 86:2386-2411. PMID: 39134921; PMCID: PMC11480192; DOI: 10.3758/s13414-024-02937-z.
Abstract
Individuals' abilities to perform goal-directed spatial deployments of attention are distinguishable from their broader preferences for how they use spatial attention when circumstances do not compel a specific deployment strategy. Although these preferences are likely to play a major role in how we interact with the visual world during daily life, they remain relatively understudied. This exploratory study investigated two key questions about these preferences: firstly, are individuals consistent in their preferences for how they deploy their spatial attention when making shifts of attention versus adopting an attentional breadth? Secondly, which other factors are associated with these preferences? Across two experiments, we measured how participants preferred to deploy both attentional breadth (using an adapted Navon task) and eye movements (using a free-viewing task). We also measured participants' working memory capacities (Experiment 1), and their personalities and world beliefs (Experiment 2). In both experiments, there were consistent individual differences in preference for attentional breadth and eye movement characteristics, but these two kinds of preference were unrelated to each other. Working memory capacity was not linked to these preferences. Conversely, the personality trait of Openness to Experience robustly predicted two aspects of eye movement behavior preference, such that higher levels of Openness predicted smaller saccades and shorter scan paths. This suggests that personality dimensions may predict preferences for more absorbed engagement with visual information. However, it appears that individuals' preferences for shifts of attention during scene viewing do not necessarily relate to the breadth of attention they choose to adopt.
Affiliation(s)
- Nicholas J Wyche
- School of Medicine and Psychology, Australian National University, Canberra, Australia
- Mark Edwards
- School of Medicine and Psychology, Australian National University, Canberra, Australia
- Stephanie C Goodhew
- School of Medicine and Psychology, Australian National University, Canberra, Australia
2
Jiang C, Chen Z, Wolfe JM. Toward viewing behavior for aerial scene categorization. Cogn Res Princ Implic 2024; 9:17. PMID: 38530617; PMCID: PMC10965882; DOI: 10.1186/s41235-024-00541-1.
Abstract
Previous work has demonstrated similarities and differences between aerial and terrestrial image viewing. Aerial scene categorization, a pivotal visual processing task for gathering geoinformation, heavily depends on rotation-invariant information. Aerial image-centered research has revealed effects of low-level features on performance of various aerial image interpretation tasks. However, there are fewer studies of viewing behavior for aerial scene categorization and of higher-level factors that might influence that categorization. In this paper, experienced subjects' eye movements were recorded while they were asked to categorize aerial scenes. A typical viewing center bias was observed. Eye movement patterns varied among categories. We explored the relationship of nine image statistics to observers' eye movements. Results showed that if the images were less homogeneous, and/or if they contained fewer or no salient diagnostic objects, viewing behavior became more exploratory. Higher- and object-level image statistics were predictive at both the image and scene category levels. Scanpaths were generally organized and small differences in scanpath randomness could be roughly captured by critical object saliency. Participants tended to fixate on critical objects. Image statistics included in this study showed rotational invariance. The results supported our hypothesis that the availability of diagnostic objects strongly influences eye movements in this task. In addition, this study provides supporting evidence for Loschky et al.'s (Journal of Vision, 15(6), 11, 2015) speculation that aerial scenes are categorized on the basis of image parts and individual objects. The findings were discussed in relation to theories of scene perception and their implications for automation development.
Affiliation(s)
- Chenxi Jiang
- School of Remote Sensing and Information Engineering, Wuhan University, Wuhan, Hubei, China
- Zhenzhong Chen
- School of Remote Sensing and Information Engineering, Wuhan University, Wuhan, Hubei, China
- Hubei Luojia Laboratory, Wuhan, Hubei, China
- Jeremy M Wolfe
- Harvard Medical School, Boston, MA, USA
- Brigham & Women's Hospital, Boston, MA, USA
3
Ghiani A, Mann D, Brenner E. Methods matter: Exploring how expectations influence common actions. iScience 2024; 27:109076. PMID: 38361615; PMCID: PMC10867666; DOI: 10.1016/j.isci.2024.109076.
Abstract
Behavior in controlled laboratory studies is not always representative of what people do in daily life. This has prompted a recent shift toward conducting studies in natural settings. We wondered whether expectations raised by how the task is presented should also be considered. To find out, we studied gaze when walking down and up a staircase. Gaze was often directed at steps before stepping on them, but most participants did not look at every step. Importantly, participants fixated more steps and looked around less when asked to navigate the staircase than when navigating the same staircase but asked to walk outside. Presumably, expecting the staircase to be important made participants direct their gaze at more steps, despite the identical requirements when on the staircase. This illustrates that behavior can be influenced by expectations, such as expectations resulting from task instructions, even when studies are conducted in natural settings.
Affiliation(s)
- Andrea Ghiani
- Department of Human Movement Sciences, Amsterdam Movement Sciences and Institute of Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- David Mann
- Department of Human Movement Sciences, Amsterdam Movement Sciences and Institute of Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- Eli Brenner
- Department of Human Movement Sciences, Amsterdam Movement Sciences and Institute of Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
4
Maniarasu P, Shasane PH, Pai VH, Kuzhuppilly NIR, Ve RS, Ballae Ganeshrao S. Does the sampling frequency of an eye tracker affect the detection of glaucomatous visual field loss? Ophthalmic Physiol Opt 2024; 44:378-387. PMID: 38149468; DOI: 10.1111/opo.13267.
Abstract
PURPOSE Evidence suggests that eye movements have potential as a tool for detecting glaucomatous visual field defects. This study evaluated the influence of sampling frequency on eye movement parameters in detecting glaucomatous visual field defects during a free-viewing task. METHODS We investigated eye movements in two sets of experiments: (a) young adults with and without simulated visual field defects and (b) glaucoma patients and age-matched controls. In Experiment 1, we recruited 30 healthy volunteers. Among these, 10 performed the task with a gaze-contingent superior arcuate (SARC) scotoma, 10 performed the task with a gaze-contingent biarcuate (BARC) scotoma and 10 performed the task without a simulated scotoma (NoSim). The experimental task involved participants freely exploring 100 images, each for 4 s. Eye movements were recorded using the LiveTrack Lightning eye-tracker (500 Hz). In Experiment 2, we recruited 20 glaucoma patients and 16 age-matched controls. All participants completed the same experimental task as in Experiment 1, except that only 37 images were shown for exploration. To analyse the effect of sampling frequency, data were downsampled to 250, 120 and 60 Hz. Eye movement parameters, such as the number of fixations, fixation duration, saccadic amplitude and bivariate contour ellipse area (BCEA), were computed at each sampling frequency. RESULTS Two-way ANOVA revealed no significant effects of sampling frequency on fixation duration (simulation, p = 0.37; glaucoma patients, p = 0.95) or BCEA (simulation, p = 0.84; glaucoma patients, p = 0.91). BCEA reliably distinguished the groups across the different sampling frequencies, whereas fixation duration failed to distinguish between glaucoma patients and controls. The number of fixations and saccade amplitude varied with sampling frequency in both simulations and glaucoma patients. CONCLUSION In both the simulation and glaucoma experiments, BCEA consistently differentiated the affected groups from controls across the various sampling frequencies.
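The downsampling and BCEA analysis described above can be approximated with a short script. The following is a minimal sketch, assuming gaze samples already expressed in degrees and the standard BCEA formula (2kπ·sd_x·sd_y·sqrt(1−ρ²) with k = −ln(1−P)); the decimation step and the simulated data are illustrative, not the authors' actual pipeline.

```python
import numpy as np

def downsample(samples, native_hz=500, target_hz=250):
    """Crude decimation that keeps every n-th gaze sample (assumes target divides native;
    a real pipeline would resample on timestamps and low-pass filter first)."""
    step = int(round(native_hz / target_hz))
    return samples[::step]

def bcea(x_deg, y_deg, p=0.68):
    """Bivariate contour ellipse area (deg^2) covering proportion p of gaze samples:
    BCEA = 2 * k * pi * sd_x * sd_y * sqrt(1 - rho^2), with k = -ln(1 - p)."""
    k = -np.log(1 - p)
    rho = np.corrcoef(x_deg, y_deg)[0, 1]
    return 2 * k * np.pi * np.std(x_deg, ddof=1) * np.std(y_deg, ddof=1) * np.sqrt(1 - rho ** 2)

# Example: compare BCEA at the native rate and after decimation to 250 Hz on simulated gaze.
rng = np.random.default_rng(0)
gaze = rng.normal(0, 0.3, size=(2000, 2))   # simulated 500 Hz horizontal/vertical gaze (deg)
print(bcea(gaze[:, 0], gaze[:, 1]))
low = downsample(gaze, 500, 250)
print(bcea(low[:, 0], low[:, 1]))
```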
Affiliation(s)
- Priyanka Maniarasu
- Department of Optometry, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal, India
- Prathamesh Harshad Shasane
- Department of Optometry, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal, India
- Vijaya H Pai
- Department of Ophthalmology, Kasturba Medical College Manipal, Manipal Academy of Higher Education, Manipal, India
- Neetha I R Kuzhuppilly
- Department of Ophthalmology, Kasturba Medical College Manipal, Manipal Academy of Higher Education, Manipal, India
- Ramesh S Ve
- Department of Optometry, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal, India
- Shonraj Ballae Ganeshrao
- Department of Optometry, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal, India
5
Teng S, Danforth C, Paternoster N, Ezeana M, Puri A. Object recognition via echoes: quantifying the crossmodal transfer of three-dimensional shape information between echolocation, vision, and haptics. Front Neurosci 2024; 18:1288635. PMID: 38440393; PMCID: PMC10909950; DOI: 10.3389/fnins.2024.1288635.
Abstract
Active echolocation allows blind individuals to explore their surroundings via self-generated sounds, similarly to dolphins and other echolocating animals. Echolocators emit sounds, such as finger snaps or mouth clicks, and parse the returning echoes for information about their surroundings, including the location, size, and material composition of objects. Because a crucial function of perceiving objects is to enable effective interaction with them, it is important to understand the degree to which three-dimensional shape information extracted from object echoes is useful in the context of other modalities such as haptics or vision. Here, we investigated the resolution of crossmodal transfer of object-level information between acoustic echoes and other senses. First, in a delayed match-to-sample task, blind expert echolocators and sighted control participants inspected common (everyday) and novel target objects using echolocation, then distinguished the target object from a distractor using only haptic information. For blind participants, discrimination accuracy was overall above chance and similar for both common and novel objects, whereas as a group, sighted participants performed above chance for the common, but not novel objects, suggesting that some coarse object information (a) is available to both expert blind and novice sighted echolocators, (b) transfers from auditory to haptic modalities, and (c) may be facilitated by prior object familiarity and/or material differences, particularly for novice echolocators. Next, to estimate an equivalent resolution in visual terms, we briefly presented blurred images of the novel stimuli to sighted participants (N = 22), who then performed the same haptic discrimination task. We found that visuo-haptic discrimination performance approximately matched echo-haptic discrimination for a Gaussian blur kernel σ of ~2.5°. In this way, by matching visual and echo-based contributions to object discrimination, we can estimate the quality of echoacoustic information that transfers to other sensory modalities, predict theoretical bounds on perception, and inform the design of assistive techniques and technology available for blind individuals.
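For readers wanting to reproduce the kind of degraded-vision comparison described above, the sketch below shows one way to blur an image with a Gaussian kernel specified in degrees of visual angle. The viewing-geometry numbers and function names are illustrative assumptions, not the study's stimulus-generation code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pixels_per_degree(screen_width_px, screen_width_cm, viewing_distance_cm):
    """Approximate pixels per degree of visual angle for a flat screen viewed head-on."""
    width_deg = 2 * np.degrees(np.arctan(screen_width_cm / (2 * viewing_distance_cm)))
    return screen_width_px / width_deg

def blur_in_degrees(image, sigma_deg, ppd):
    """Gaussian-blur an H x W x 3 image with sigma given in degrees (converted to pixels)."""
    sigma_px = sigma_deg * ppd
    # blur the two spatial axes only, leaving the color channels untouched
    return gaussian_filter(image.astype(float), sigma=(sigma_px, sigma_px, 0))

# Example with assumed geometry: a 1920 px wide, 52 cm wide screen viewed at 60 cm.
ppd = pixels_per_degree(1920, 52.0, 60.0)
image = np.random.rand(480, 640, 3)          # placeholder for a stimulus image
blurred = blur_in_degrees(image, sigma_deg=2.5, ppd=ppd)
```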
Affiliation(s)
- Santani Teng
- Smith-Kettlewell Eye Research Institute, San Francisco, CA, United States
- Caroline Danforth
- Department of Biology, University of Central Arkansas, Conway, AR, United States
- Department of Psychology, Vanderbilt University, Nashville, TN, United States
- Nickolas Paternoster
- Department of Biology, University of Central Arkansas, Conway, AR, United States
- Department of Psychology, Cornell University, Ithaca, NY, United States
- Michael Ezeana
- Department of Biology, University of Central Arkansas, Conway, AR, United States
- Georgetown University School of Medicine, Washington, DC, United States
- Amrita Puri
- Department of Biology, University of Central Arkansas, Conway, AR, United States
6
Wyche NJ, Edwards M, Goodhew SC. An updating-based working memory load alters the dynamics of eye movements but not their spatial extent during free viewing of natural scenes. Atten Percept Psychophys 2024; 86:503-524. PMID: 37468789; PMCID: PMC10805812; DOI: 10.3758/s13414-023-02741-1.
Abstract
The relationship between spatial deployments of attention and working memory load is an important topic of study, with clear implications for real-world tasks such as driving. Previous research has generally shown that attentional breadth broadens under higher load, while exploratory eye-movement behaviour also appears to change with increasing load. However, relatively little research has compared the effects of working memory load on different kinds of spatial deployment, especially in conditions that require updating of the contents of working memory rather than simple retrieval. The present study undertook such a comparison by measuring participants' attentional breadth (via an undirected Navon task) and their exploratory eye-movement behaviour (a free-viewing recall task) under low and high updating working memory loads. While spatial aspects of task performance (attentional breadth, and peripheral extent of image exploration in the free-viewing task) were unaffected by the load manipulation, the exploratory dynamics of the free-viewing task (including fixation durations and scan-path lengths) changed under increasing load. These findings suggest that temporal dynamics, rather than the spatial extent of exploration, are the primary mechanism affected by working memory load during the spatial deployment of attention. Further, individual differences in exploratory behaviour were observed on the free-viewing task: all metrics were highly correlated across working memory load blocks. These findings suggest a need for further investigation of individual differences in eye-movement behaviour; potential factors associated with these individual differences, including working memory capacity and persistence versus flexibility orientations, are discussed.
Affiliation(s)
- Nicholas J Wyche
- Research School of Psychology (Building 39), The Australian National University, Canberra, ACT, 2601, Australia
- Mark Edwards
- Research School of Psychology (Building 39), The Australian National University, Canberra, ACT, 2601, Australia
- Stephanie C Goodhew
- Research School of Psychology (Building 39), The Australian National University, Canberra, ACT, 2601, Australia
7
Peacock CE, Hall EH, Henderson JM. Objects are selected for attention based upon meaning during passive scene viewing. Psychon Bull Rev 2023; 30:1874-1886. PMID: 37095319; PMCID: PMC11164276; DOI: 10.3758/s13423-023-02286-2.
Abstract
While object meaning has been demonstrated to guide attention during active scene viewing and object salience guides attention during passive viewing, it is unknown whether object meaning predicts attention in passive viewing tasks and whether attention during passive viewing is more strongly related to meaning or salience. To answer this question, we used a mixed modeling approach where we computed the average meaning and physical salience of objects in scenes while statistically controlling for the roles of object size and eccentricity. Using eye-movement data from aesthetic judgment and memorization tasks, we then tested whether fixations are more likely to land on high-meaning objects than low-meaning objects while controlling for object salience, size, and eccentricity. The results demonstrated that fixations are more likely to be directed to high-meaning objects than low-meaning objects regardless of these other factors. Further analyses revealed that fixation durations were positively associated with object meaning irrespective of the other object properties. Overall, these findings provide the first evidence that objects are, in part, selected for attention on the basis of their meaning during passive scene viewing.
Affiliation(s)
- Candace E Peacock
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA, 95618, USA
- Department of Psychology, University of California, Davis, CA, USA
- Elizabeth H Hall
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA, 95618, USA
- Department of Psychology, University of California, Davis, CA, USA
- John M Henderson
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA, 95618, USA
- Department of Psychology, University of California, Davis, CA, USA
8
Loh Z, Hall EH, Cronin D, Henderson JM. Working memory control predicts fixation duration in scene-viewing. Psychol Res 2023; 87:1143-1154. PMID: 35879564; PMCID: PMC11129724; DOI: 10.1007/s00426-022-01694-8.
Abstract
When viewing scenes, observers differ in how long they linger at each fixation location and how far they move their eyes between fixations. What factors drive these differences in eye-movement behaviors? Previous work suggests individual differences in working memory capacity may influence fixation durations and saccade amplitudes. In the present study, participants (N = 98) performed two scene-viewing tasks, aesthetic judgment and memorization, while viewing 100 photographs of real-world scenes. Working memory capacity, working memory processing ability, and fluid intelligence were assessed with an operation span task, a memory updating task, and Raven's Advanced Progressive Matrices, respectively. Across participants, we found significant effects of task on both fixation durations and saccade amplitudes. At the level of each individual participant, we also found a significant relationship between memory updating task performance and participants' fixation duration distributions. However, we found no effect of fluid intelligence and no effect of working memory capacity on fixation duration or saccade amplitude distributions, inconsistent with previous findings. These results suggest that the ability to flexibly maintain and update working memory is strongly related to fixation duration behavior.
Affiliation(s)
- Zoe Loh
- Management of Complex Systems Department, University of California Merced, Merced, CA, 95343, USA
- Center for Mind and Brain, University of California Davis, Davis, CA, 95618, USA
- Elizabeth H Hall
- Center for Mind and Brain, University of California Davis, Davis, CA, 95618, USA
- Department of Psychology, University of California Davis, Davis, CA, 95616, USA
- Deborah Cronin
- Center for Mind and Brain, University of California Davis, Davis, CA, 95618, USA
- Department of Psychology, Drake University, Des Moines, IA, 50311, USA
- John M Henderson
- Center for Mind and Brain, University of California Davis, Davis, CA, 95618, USA
- Department of Psychology, University of California Davis, Davis, CA, 95616, USA
9
Peacock CE, Singh P, Hayes TR, Rehrig G, Henderson JM. Searching for meaning: Local scene semantics guide attention during natural visual search in scenes. Q J Exp Psychol (Hove) 2023; 76:632-648. PMID: 35510885; PMCID: PMC11132926; DOI: 10.1177/17470218221101334.
Abstract
Models of visual search in scenes include image salience as a source of attentional guidance. However, because scene meaning is correlated with image salience, it could be that the salience predictor in these models is driven by meaning. To test this proposal, we generated meaning maps that represented the spatial distribution of semantic informativeness in scenes and salience maps that represented the spatial distribution of conspicuous image features, and tested their influence on fixation densities from two object search tasks in real-world scenes. The results showed that meaning accounted for significantly greater variance in fixation densities than image salience, both overall and in early attention, across both studies. Meaning explained 58% and 63% of the theoretical ceiling of variance in attention in the two studies, respectively. Furthermore, both studies demonstrated that fast initial saccades were not more likely to be directed to higher-salience regions than slower initial saccades, and initial saccades of all latencies were directed to regions containing higher meaning than salience. Together, these results demonstrated that even though meaning was task-neutral, the visual system still selected meaningful over salient scene regions for attention during search.
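The map-based variance analysis described above can be illustrated with a small sketch. It is not the authors' code: it assumes meaning, salience, and fixation-density maps have already been computed on a common grid, and it uses a simple squared correlation plus a semipartial correlation to remove the salience variance shared with meaning.

```python
import numpy as np
from scipy.stats import pearsonr

def map_r2(prediction_map, fixation_density):
    """Squared linear correlation between a prediction map and an observed fixation-density map."""
    r, _ = pearsonr(prediction_map.ravel(), fixation_density.ravel())
    return r ** 2

def semipartial_r2(meaning, salience, density):
    """Variance in fixation density tied to meaning after regressing salience out of meaning."""
    m, s, d = meaning.ravel(), salience.ravel(), density.ravel()
    slope_intercept = np.polyfit(s, m, 1)
    m_resid = m - np.polyval(slope_intercept, s)   # meaning with salience removed
    r, _ = pearsonr(m_resid, d)
    return r ** 2

# Example with random placeholder maps on a 60 x 80 grid.
rng = np.random.default_rng(1)
meaning, salience, density = (rng.random((60, 80)) for _ in range(3))
print(map_r2(meaning, density), semipartial_r2(meaning, salience, density))
```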
Affiliation(s)
- Candace E Peacock
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Department of Psychology, University of California, Davis, Davis, CA, USA
- Praveena Singh
- Center for Neuroscience, University of California, Davis, Davis, CA, USA
- Taylor R Hayes
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Gwendolyn Rehrig
- Department of Psychology, University of California, Davis, Davis, CA, USA
- John M Henderson
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Department of Psychology, University of California, Davis, Davis, CA, USA
10
Srikantharajah J, Ellard C. How central and peripheral vision influence focal and ambient processing during scene viewing. J Vis 2022; 22:4. PMID: 36322076; PMCID: PMC9639699; DOI: 10.1167/jov.22.12.4.
Abstract
Central and peripheral vision carry out different functions during scene processing. The ambient mode of visual processing is more likely to involve peripheral visual processes, whereas the focal mode of visual processing is more likely to involve central visual processes. Whereas the ambient mode is responsible for navigating space and comprehending scene layout, the focal mode gathers detailed information as central vision is oriented to salient areas of the visual field. Previous work suggests that during the time course of scene viewing, there is a transition from ambient processing during the first few seconds to focal processing during later time intervals, characterized by longer fixations and shorter saccades. In this study, we identify the influence of central and peripheral vision on changes in eye movements and the transition from ambient to focal processing during the time course of scene processing. Using a gaze-contingent protocol, we restricted the visual field to central or peripheral vision while participants freely viewed scenes for 20 seconds. Results indicated that fixation durations were shorter when vision was restricted to central vision than under normal vision. During late visual processing, fixations in peripheral vision were longer than those in central vision. We show that a transition from more ambient to more focal processing during scene viewing occurs even when vision is restricted to only central vision or peripheral vision.
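A gaze-contingent restriction of this kind amounts to masking the display around the current gaze sample on every frame. The sketch below builds such a mask offline; it is a simplified illustration with assumed parameter names, not the real-time protocol used in the study.

```python
import numpy as np

def gaze_contingent_mask(frame_shape, gaze_xy, window_radius_px, mode="central"):
    """Boolean mask of visible pixels: 'central' keeps only a circular window around
    the gaze position, 'peripheral' hides that window and keeps everything else."""
    height, width = frame_shape
    yy, xx = np.mgrid[:height, :width]
    inside = (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2 <= window_radius_px ** 2
    return inside if mode == "central" else ~inside

# Example: mask a 768 x 1024 frame around a gaze sample at pixel (512, 384).
mask = gaze_contingent_mask((768, 1024), gaze_xy=(512, 384),
                            window_radius_px=150, mode="peripheral")
visible_fraction = mask.mean()
```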
Affiliation(s)
- Colin Ellard
- Department of Psychology, University of Waterloo, Waterloo, Canada
11
Attenuating the 'attentional white bear' effect enhances suppressive attention. Atten Percept Psychophys 2022; 84:2444-2460. PMID: 36138299; PMCID: PMC9630199; DOI: 10.3758/s13414-022-02560-w.
Abstract
Trying to ignore an object can bias attention towards it – a phenomenon referred to as the 'attentional white bear' (AWB) effect. The mechanisms behind this effect remain unclear. On one hand, the AWB may reflect reactive, 'search and destroy' distractor suppression, which directs attention toward irrelevant objects in order to suppress further attention to them. However, another possibility is that the AWB results from failed proactive distractor suppression – attempting to suppress attention to an irrelevant object from the outset may inadvertently result in an attentional shift towards it. To distinguish these two possibilities, we developed a categorical visual search task that addresses limitations present in prior studies. In five experiments (total N = 96), participants searched displays of naturalistic stimuli cued only with distractor categories (targets were unknown and unpredictable). We observed an AWB and later attenuated it by presenting a pre-search stimulus, likely disrupting guidance from distractor templates in working memory. We conclude that the AWB resulted from a failure of proactive suppression rather than a search and destroy process.
12
Marin MM, Leder H. Gaze patterns reveal aesthetic distance while viewing art. Ann N Y Acad Sci 2022; 1514:155-165. PMID: 35610177; DOI: 10.1111/nyas.14792.
Abstract
For centuries, Western philosophers have argued that aesthetic experiences differ from common, everyday pleasing sensations, and further, that mental states, such as disinterested contemplation and aesthetic distance, underlie these complex experiences. We empirically tested whether basic perceptual processes of information intake reveal evidence for aesthetic distance, specifically toward visual art. We conducted two eye-tracking experiments with 59 participants, using appropriately matched visual stimuli (environmental scenes and representational paintings) and two different presentation durations (25 and 6 s). Linear mixed-effects models considering individual differences showed that affective content (pleasantness and arousal), but not stimulus composition (complexity), leads to differential effects when viewing representational paintings in comparison to environmental scenes. We demonstrate that an increase in aesthetic pleasantness induced by representational paintings during a free-viewing task leads to a slower and deeper processing mode than when viewing environmental scenes of motivational relevance, for which we observed the opposite effect. In addition, long presentation durations led to an increase in scanning behavior during visual art perception. These empirical findings inform the debate about how aesthetic experiences differ from everyday perceptual processes by showing that the notion of aesthetic distance may be better understood by examining different modes of viewing.
Affiliation(s)
- Manuela M Marin
- Department of Cognition, Emotion and Methods in Psychology, University of Vienna, Vienna, Austria
- Department of Psychology, University of Innsbruck, Innsbruck, Austria
- Helmut Leder
- Department of Cognition, Emotion and Methods in Psychology, University of Vienna, Vienna, Austria
13
Bayani KYT, Natraj N, Khresdish N, Pargeter J, Stout D, Wheaton LA. Emergence of perceptuomotor relationships during paleolithic stone toolmaking learning: intersections of observation and practice. Commun Biol 2021; 4:1278. PMID: 34764417; PMCID: PMC8585878; DOI: 10.1038/s42003-021-02768-w.
Abstract
Stone toolmaking is a human motor skill that provides the earliest archeological evidence of motor skill and social learning. Intentionally shaping a stone into a functional tool relies on the interaction of action observation and practice to support motor skill acquisition. How adaptive and efficient visuomotor processes emerge during the learning of such a novel motor skill, one that requires complex semantic understanding, is not understood. By examining eye movements and motor skill, the current study evaluated the changes in, and relationships between, perceptuomotor processes during motor learning and performance over 90 h of training. Participants' gaze and motor performance were assessed before, during, and after training. Gaze patterns revealed a transition from high gaze variability during initial observation to lower gaze variability after training. Perceptual changes were strongly associated with motor performance improvements, suggesting a coupling of perceptual and motor processes during motor learning.
Affiliation(s)
- Nikhilesh Natraj
- School of Biological Sciences, Georgia Institute of Technology, Atlanta, GA, USA
- Division of Neurology, UCSF Weill Institute for Neurosciences, San Francisco, CA, USA
- Nada Khresdish
- Anthropology Department, Emory University, Atlanta, GA, USA
- Justin Pargeter
- Anthropology Department, Emory University, Atlanta, GA, USA
- Department of Anthropology, New York University, New York, NY, USA
- Dietrich Stout
- Anthropology Department, Emory University, Atlanta, GA, USA
- Lewis A Wheaton
- School of Biological Sciences, Georgia Institute of Technology, Atlanta, GA, USA
14
Taub M, Yovel Y. Adaptive learning and recall of motor-sensory sequences in adult echolocating bats. BMC Biol 2021; 19:164. PMID: 34412628; PMCID: PMC8377959; DOI: 10.1186/s12915-021-01099-w.
Abstract
BACKGROUND Learning to adapt to changes in the environment is highly beneficial. This is especially true for echolocating bats that forage in diverse environments, moving between open spaces and highly complex ones. Bats are known for their ability to rapidly adjust their sensing according to auditory information gathered from the environment within milliseconds, but can they also benefit from longer adaptive processes? In this study, we examined adult bats' ability to slowly adapt their sensing strategy to a new type of environment they had never experienced for such long durations, and to then maintain this learned echolocation strategy over time. RESULTS We show that over a period of weeks, Pipistrellus kuhlii bats gradually adapt their pre-takeoff echolocation sequence when moved to a constantly cluttered environment. After adopting this improved strategy, the bats retained an ability to instantaneously use it when placed back in a similarly cluttered environment, even after spending many months in a significantly less cluttered environment. CONCLUSIONS We demonstrate long-term adaptive flexibility in sensory acquisition in adult animals. Our study also gives further insight into the importance of sensory planning in the initiation of a precise sensorimotor behavior such as approaching for landing.
Affiliation(s)
- Mor Taub
- Department of Zoology, Faculty of Life Sciences, Tel Aviv University, 6997801, Tel Aviv, Israel
- Yossi Yovel
- Department of Zoology, Faculty of Life Sciences, Tel Aviv University, 6997801, Tel Aviv, Israel
- Sagol School of Neuroscience, Tel Aviv University, 6997801, Tel Aviv, Israel
15
Hayes TR, Henderson JM. Looking for Semantic Similarity: What a Vector-Space Model of Semantics Can Tell Us About Attention in Real-World Scenes. Psychol Sci 2021; 32:1262-1270. PMID: 34252325; PMCID: PMC8726595; DOI: 10.1177/0956797621994768.
Abstract
The visual world contains more information than we can perceive and understand in any given moment. Therefore, we must prioritize important scene regions for detailed analysis. Semantic knowledge gained through experience is theorized to play a central role in determining attentional priority in real-world scenes but is poorly understood. Here, we examined the relationship between object semantics and attention by combining a vector-space model of semantics with eye movements in scenes. In this approach, the vector-space semantic model served as the basis for a concept map, an index of the spatial distribution of the semantic similarity of objects across a given scene. The results showed a strong positive relationship between the semantic similarity of a scene region and viewers' focus of attention; specifically, greater attention was given to more semantically related scene regions. We conclude that object semantics play a critical role in guiding attention through real-world scenes.
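The concept-map idea described above can be sketched in a few lines: each labeled object receives the mean cosine similarity between its word embedding and the embeddings of the other objects in the scene, splatted onto that object's pixels. The data structures and the choice of embedding model are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def concept_map(object_vectors, object_masks, scene_shape):
    """Spatial map in which each object's pixels carry its mean semantic similarity
    to every other labeled object in the scene.
    object_vectors: dict label -> word-embedding vector (from any pretrained model)
    object_masks:   dict label -> boolean array of scene_shape marking the object's pixels"""
    labels = list(object_vectors)
    cmap = np.zeros(scene_shape)
    for label in labels:
        sims = [cosine(object_vectors[label], object_vectors[other])
                for other in labels if other != label]
        cmap[object_masks[label]] = np.mean(sims) if sims else 0.0
    return cmap
```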
Affiliation(s)
- John M. Henderson
- Center for Mind and Brain, University of California, Davis
- Department of Psychology, University of California, Davis
16
Avoiding potential pitfalls in visual search and eye-movement experiments: A tutorial review. Atten Percept Psychophys 2021; 83:2753-2783. PMID: 34089167; PMCID: PMC8460493; DOI: 10.3758/s13414-021-02326-w.
Abstract
Examining eye-movement behavior during visual search is an increasingly popular approach for gaining insights into the moment-to-moment processing that takes place when we look for targets in our environment. In this tutorial review, we describe a set of pitfalls and considerations that are important for researchers – both experienced and new to the field – when engaging in eye-movement and visual search experiments. We walk the reader through the research cycle of a visual search and eye-movement experiment, from choosing the right predictions, through to data collection, reporting of methodology, analytic approaches, the different dependent variables to analyze, and drawing conclusions from patterns of results. Overall, our hope is that this review can serve as a guide, a talking point, a reflection on the practices and potential problems with the current literature on this topic, and ultimately a first step towards standardizing research practices in the field.
17
Milisavljevic A, Abate F, Le Bras T, Gosselin B, Mancas M, Doré-Mazars K. Similarities and Differences Between Eye and Mouse Dynamics During Web Pages Exploration. Front Psychol 2021; 12:554595. PMID: 33841223; PMCID: PMC8024563; DOI: 10.3389/fpsyg.2021.554595.
Abstract
The study of eye movements is a common way to non-invasively understand and analyze human behavior. However, eye-tracking techniques are very hard to scale, and require expensive equipment and extensive expertise. In the context of web browsing, these issues could be overcome by studying the link between the eye and the computer mouse. Here, we propose new analysis methods and a more advanced characterization of this link. To this end, we recorded the eye, mouse, and scroll movements of 151 participants exploring 18 dynamic web pages while performing free-viewing and visual search tasks for 20 s. The data revealed significant differences in eye, mouse, and scroll parameters over time, which stabilized by the end of exploration. This suggests the existence of a task-independent relationship between eye, mouse, and scroll parameters, characterized by two distinct patterns: one common pattern for movement parameters and a second for dwelling/fixation parameters. Within these patterns, mouse and eye movements remained consistent with each other, while scrolling behaved in the opposite way.
Affiliation(s)
- Alexandre Milisavljevic
- Vision Action Cognition Laboratory, Psychology Institute, Université de Paris, Boulogne-Billancourt, France
- Information, Signal and Artificial Intelligence Laboratory, Numediart Institute, University of Mons, Mons, Belgium
- Research and Development Department, Sublime Skinz, Paris, France
- Fabrice Abate
- Vision Action Cognition Laboratory, Psychology Institute, Université de Paris, Boulogne-Billancourt, France
- Thomas Le Bras
- Vision Action Cognition Laboratory, Psychology Institute, Université de Paris, Boulogne-Billancourt, France
- Bernard Gosselin
- Information, Signal and Artificial Intelligence Laboratory, Numediart Institute, University of Mons, Mons, Belgium
- Matei Mancas
- Information, Signal and Artificial Intelligence Laboratory, Numediart Institute, University of Mons, Mons, Belgium
- Karine Doré-Mazars
- Vision Action Cognition Laboratory, Psychology Institute, Université de Paris, Boulogne-Billancourt, France
18
Salience-based object prioritization during active viewing of naturalistic scenes in young and older adults. Sci Rep 2020; 10:22057. PMID: 33328485; PMCID: PMC7745017; DOI: 10.1038/s41598-020-78203-7.
Abstract
Whether fixation selection in real-world scenes is guided by image salience or by objects has been a matter of scientific debate. To contrast the two views, we compared effects of location-based and object-based visual salience in young and older (65+ years) adults. Generalized linear mixed models were used to assess the unique contribution of salience to fixation selection in scenes. When analysing fixation guidance without recourse to objects, visual salience predicted whether image patches were fixated or not. This effect was reduced for the elderly, replicating an earlier finding. When using objects as the unit of analysis, we found that highly salient objects were more frequently selected for fixation than objects with low visual salience. Interestingly, this effect was larger for older adults. We also analysed where viewers fixated within objects once they were selected. A preferred viewing location close to the centre of the object was found for both age groups. The results support the view that objects are important units of saccadic selection. Reconciling the salience view with the object view, we suggest that visual salience contributes to prioritization among objects. Moreover, the data point towards an increasing relevance of object-bound information with increasing age.
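As a rough illustration of the object-level analysis described above, the sketch below fits a fixed-effects-only logistic regression predicting whether an object was fixated from its salience and the viewer's age group. The file and column names are hypothetical, and the published analysis used generalized linear mixed models with random effects, which this simplification omits.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per object per viewer, with columns
# fixated (0/1), salience (object-level salience), age_group ('young'/'older').
df = pd.read_csv("object_level_fixations.csv")

# Fixed-effects-only logistic regression; the study itself used GLMMs with random
# effects for participants and scenes, omitted here for brevity.
model = smf.logit("fixated ~ salience * age_group", data=df).fit()
print(model.summary())
```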
19
Abstract
In art schools and art history classes, students are trained to pay attention to different aspects of an artwork, such as art movement characteristics and painting techniques. Experts are better at processing the style and visual features of an artwork than nonprofessionals. Here we tested the hypothesis that art experts use different, task-dependent viewing strategies than nonprofessionals when analyzing a piece of art. We compared a group of art history students with a group of students with no art education background while they viewed 36 paintings under three discrimination tasks. Participants were asked to determine the art movement, the date, and the medium of the paintings. We analyzed behavioral and eye-movement data from 27 participants. Observers adjusted their viewing strategies according to the task, producing longer fixation durations and shorter saccade amplitudes in the medium-detection task. Experts showed higher task accuracy and subjective confidence, together with less congruence and higher dispersion in fixation locations. Expertise also influenced saccade metrics, biasing them towards larger saccade amplitudes, which suggests a more holistic scanning strategy in experts across all three tasks.
20
Henderson JM, Goold JE, Choi W, Hayes TR. Neural Correlates of Fixated Low- and High-level Scene Properties during Active Scene Viewing. J Cogn Neurosci 2020; 32:2013-2023. PMID: 32573384; PMCID: PMC11164273; DOI: 10.1162/jocn_a_01599.
Abstract
During real-world scene perception, viewers actively direct their attention through a scene in a controlled sequence of eye fixations. During each fixation, local scene properties are attended, analyzed, and interpreted. What is the relationship between fixated scene properties and neural activity in the visual cortex? Participants inspected photographs of real-world scenes in an MRI scanner while their eye movements were recorded. Fixation-related fMRI was used to measure activation as a function of lower- and higher-level scene properties at fixation, operationalized as edge density and meaning maps, respectively. We found that edge density at fixation was most associated with activation in early visual areas, whereas semantic content at fixation was most associated with activation along the ventral visual stream including core object and scene-selective areas (lateral occipital complex, parahippocampal place area, occipital place area, and retrosplenial cortex). The observed activation from semantic content was not accounted for by differences in edge density. The results are consistent with active vision models in which fixation gates detailed visual analysis for fixated scene regions, and this gating influences both lower and higher levels of scene analysis.
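Edge density at fixation, the lower-level predictor described above, can be approximated by counting edge pixels in a window around each fixation. The sketch below uses a Canny detector and a circular window; the window radius and function names are placeholders, not the study's actual operationalization.

```python
import numpy as np
from skimage import color, feature

def edge_density_at_fixation(image_rgb, fix_x, fix_y, radius_px=90):
    """Proportion of Canny edge pixels inside a circular window centered on a fixation.
    The radius is an arbitrary placeholder (e.g., roughly 3 deg at a typical viewing distance)."""
    edges = feature.canny(color.rgb2gray(image_rgb))
    yy, xx = np.mgrid[:edges.shape[0], :edges.shape[1]]
    window = (xx - fix_x) ** 2 + (yy - fix_y) ** 2 <= radius_px ** 2
    return float(edges[window].mean())

# Example on a random placeholder image with a fixation at pixel (320, 240).
image = np.random.rand(480, 640, 3)
print(edge_density_at_fixation(image, fix_x=320, fix_y=240))
```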
Affiliation(s)
- Wonil Choi
- Gwangju Institute of Science and Technology