1. Marinova M, Reynvoet B. Are three zebras more than three frogs: examining conceptual and physical congruency in numerosity judgements of familiar objects. Psychological Research 2024; 89:39. PMID: 39731611; DOI: 10.1007/s00426-024-02044-6.
Abstract
Researchers in numerical cognition have extensively studied the number sense: the innate human ability to extract numerical information from the environment quickly and effortlessly. Much of this research, however, uses abstract stimuli (e.g., dot configurations) that are strictly controlled for low-level visual confounds such as size. Yet individuals rarely extract numerical information from abstract stimuli in everyday life, and numerosity judgements of familiar objects remain poorly understood and understudied. In the current study, we examined the cognitive mechanisms underlying numerical decisions about familiar objects. In two experiments, we asked adult participants (Experiment 1) and two groups of children (aged 7-9 years and 11-12 years; Experiment 2) to perform an animal numerosity task (i.e., "Which animal is more numerous?") while we manipulated conceptual congruency (the congruency between an object's real-life size and its numerosity) and physical congruency (the congruency between the number of items and the total space they occupy on the screen). Results showed that a conceptual congruency effect (better performance when the animal that is larger in real life is more numerous) and a physical congruency effect (better performance when the physically larger animal is more numerous) were present in both adults and children. However, the effects differed across age groups and were also subject to developmental change. To our knowledge, this study is the first to demonstrate that conceptual knowledge can interfere with numerosity judgements in a top-down manner, an interference effect distinct from the bottom-up interference that comes from the physical properties of the set. Our results imply that the number sense is not a standalone core system for numbers but is embedded in a more extensive network in which both low-level and higher-order influences are possible.
We encourage numerical cognition researchers to consider employing not only abstract but also familiar objects when examining numerosity judgements across the lifespan.
Affiliation(s)
- Mila Marinova
- Department of Behavioural and Cognitive Sciences, Faculty of Humanities, Education and Social Sciences, Institute of Cognitive Science and Assessment, University of Luxembourg, Esch-Belval, Luxembourg
- Brain and Cognition, KU Leuven, Leuven, Belgium
- Faculty of Psychology and Educational Sciences, KU Leuven @Kulak, Etienne Sabbelaan 51, 8500, Kortrijk, Belgium
- Bert Reynvoet
- Brain and Cognition, KU Leuven, Leuven, Belgium.
- Faculty of Psychology and Educational Sciences, KU Leuven @Kulak, Etienne Sabbelaan 51, 8500, Kortrijk, Belgium.
2. Sefranek M, Zokaei N, Draschkow D, Nobre AC. Comparing the impact of contextual associations and statistical regularities in visual search and attention orienting. PLoS One 2024; 19:e0302751. PMID: 39570820; PMCID: PMC11581329; DOI: 10.1371/journal.pone.0302751.
Abstract
During visual search, we quickly learn to attend to an object's likely location. Research has shown that this process can be guided by learning target locations based on consistent spatial contextual associations or other statistical regularities. Here, we tested how different types of associations guide learning and the utilisation of established memories for different purposes. Participants learned contextual associations or rule-like statistical regularities that predicted target locations within different scenes. The consequences of this learning for subsequent performance were then evaluated on attention-orienting and memory-recall tasks. Participants demonstrated facilitated attention-orienting and recall performance based on both contextual associations and statistical regularities. Contextual associations facilitated attention orienting with a different time course compared to statistical regularities. Benefits to memory-recall performance depended on the alignment between the learned association or regularity and the recall demands. The distinct patterns of behavioural facilitation by contextual associations and statistical regularities show how different forms of long-term memory may influence neural information processing through different modulatory mechanisms.
Affiliation(s)
- Marcus Sefranek
- Brain and Cognition Lab, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom
- Nahid Zokaei
- Brain and Cognition Lab, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom
- Dejan Draschkow
- Brain and Cognition Lab, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom
- Anna C. Nobre
- Brain and Cognition Lab, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom
- Wu Tsai Institute, Yale University, New Haven, CT, United States of America
- Department of Psychology, Yale University, New Haven, CT, United States of America
3. Aivar MP, Li CL, Tong MH, Kit DM, Hayhoe MM. Knowing where to go: Spatial memory guides eye and body movements in a naturalistic visual search task. J Vis 2024; 24:1. PMID: 39226069; PMCID: PMC11373708; DOI: 10.1167/jov.24.9.1.
Abstract
Most research on visual search has used simple tasks presented on a computer screen. However, in natural situations visual search almost always involves eye, head, and body movements in a three-dimensional (3D) environment. The different constraints imposed by these two types of search tasks might explain some of the discrepancies in our understanding of how memory resources and contextual objects are used during search. To explore this issue, we analyzed a visual search task performed in an immersive virtual reality apartment. Participants searched for a series of geometric 3D objects while eye movements and head coordinates were recorded. Participants explored the apartment to locate target objects whose location and visibility were manipulated. For objects with reliable locations, we found that repeated searches led to a decrease in search time and number of fixations and to a reduction of errors. Searching for those objects that had been visible in previous trials but were only tested at the end of the experiment was also easier than finding objects for the first time, indicating incidental learning of context. More importantly, we found that body movements showed changes that reflected memory for target location: trajectories were shorter and movement velocities were higher, but only for those objects that had been searched for multiple times. We conclude that memory of 3D space and target location is a critical component of visual search and also modifies movement kinematics. In natural search, memory is used to optimize movement control and reduce energetic costs.
Affiliation(s)
- M Pilar Aivar
- Facultad de Psicología, Universidad Autónoma de Madrid, Madrid, Spain
- https://www.psicologiauam.es/aivar/
- Chia-Ling Li
- Institute of Neuroscience, The University of Texas at Austin, Austin, TX, USA
- Present address: Apple Inc., Cupertino, California, USA
- Matthew H Tong
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA
- Present address: IBM Research, Cambridge, Massachusetts, USA
- Dmitry M Kit
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA
- Present address: F5, Boston, Massachusetts, USA
- Mary M Hayhoe
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA
4. Moher J, Delos Reyes A, Drew T. Cue relevance drives early quitting in visual search. Cogn Res Princ Implic 2024; 9:54. PMID: 39183257; PMCID: PMC11345343; DOI: 10.1186/s41235-024-00587-1.
Abstract
Irrelevant salient distractors can trigger early quitting in visual search, causing observers to miss targets they might otherwise find. Here, we asked whether task-relevant salient cues can produce a similar early quitting effect on the subset of trials where those cues fail to highlight the target. We presented participants with a difficult visual search task and used two cueing conditions. In the high-predictive condition, a salient cue in the form of a red circle highlighted the target most of the time a target was present. In the low-predictive condition, the cue was far less accurate and did not reliably predict the target (i.e., the cue was often a false positive). These were contrasted against a control condition in which no cues were presented. In the high-predictive condition, we found clear evidence of early quitting on trials where the cue was a false positive, as evidenced by both increased miss errors and shorter response times on target absent trials. No such effects were observed with low-predictive cues. Together, these results suggest that salient cues which are false positives can trigger early quitting, though perhaps only when the cues have high predictive value. These results have implications for real-world searches, such as medical image screening, where salient cues (referred to as computer-aided detection or CAD) may be used to highlight potentially relevant areas of images but are sometimes inaccurate.
Affiliation(s)
- Jeff Moher
- Psychology Department, Connecticut College, 270 Mohegan Avenue, New London, CT, 06320, USA.
5. Walper D, Bendixen A, Grimm S, Schubö A, Einhäuser W. Attention deployment in natural scenes: Higher-order scene statistics rather than semantics modulate the N2pc component. J Vis 2024; 24:7. PMID: 38848099; PMCID: PMC11166226; DOI: 10.1167/jov.24.6.7.
Abstract
Which properties of a natural scene affect visual search? We consider the alternative hypotheses that low-level statistics, higher-level statistics, semantics, or layout affect search difficulty in natural scenes. Across three experiments (n = 20 each), we used four different backgrounds that preserve distinct scene properties: (a) natural scenes (all experiments); (b) 1/f noise (pink noise, which preserves only low-level statistics and was used in Experiments 1 and 2); (c) textures that preserve low-level and higher-level statistics but not semantics or layout (Experiments 2 and 3); and (d) inverted (upside-down) scenes that preserve statistics and semantics but not layout (Experiment 2). We included "split scenes" that contained different backgrounds left and right of the midline (Experiment 1, natural/noise; Experiment 3, natural/texture). Participants searched for a Gabor patch that occurred at one of six locations (all experiments). Reaction times were faster for targets on noise and slower on inverted images, compared to natural scenes and textures. The N2pc component of the event-related potential, a marker of attentional selection, had a shorter latency and a higher amplitude for targets in noise than for all other backgrounds. The background contralateral to the target had an effect similar to that on the target side: noise led to faster reactions and shorter N2pc latencies than natural scenes, although we observed no difference in N2pc amplitude. There were no interactions between the target side and the non-target side. Together, this shows that, at least when searching for simple targets without semantic content of their own, natural scenes are more effective distractors than noise, and that this results from higher-order statistics rather than from semantics or layout.
Affiliation(s)
- Daniel Walper
- Physics of Cognition Group, Chemnitz University of Technology, Chemnitz, Germany
- Alexandra Bendixen
- Cognitive Systems Lab, Chemnitz University of Technology, Chemnitz, Germany
- https://www.tu-chemnitz.de/physik/SFKS/index.html.en
- Sabine Grimm
- Physics of Cognition Group, Chemnitz University of Technology, Chemnitz, Germany
- Cognitive Systems Lab, Chemnitz University of Technology, Chemnitz, Germany
- Anna Schubö
- Cognitive Neuroscience of Perception & Action, Philipps University Marburg, Marburg, Germany
- https://www.uni-marburg.de/en/fb04/team-schuboe
- Wolfgang Einhäuser
- Physics of Cognition Group, Chemnitz University of Technology, Chemnitz, Germany
- https://www.tu-chemnitz.de/physik/PHKP/index.html.en
6. A-Izzeddin EJ, Mattingley JB, Harrison WJ. The influence of natural image statistics on upright orientation judgements. Cognition 2024; 242:105631. PMID: 37820487; DOI: 10.1016/j.cognition.2023.105631.
Abstract
Humans have well-documented priors for many features present in nature that guide visual perception. Despite being putatively grounded in the statistical regularities of the environment, scene priors are frequently violated due to the inherent variability of visual features from one scene to the next. However, these repeated violations do not appreciably challenge visuo-cognitive function, necessitating the broad use of priors in conjunction with context-specific information. We investigated the trade-off between participants' internal expectations formed from both longer-term priors and those formed from immediate contextual information using a perceptual inference task and naturalistic stimuli. Notably, our task required participants to make perceptual inferences about naturalistic images using their own internal criteria, rather than making comparative judgements. Nonetheless, we show that observers' performance is well approximated by a model that makes inferences using a prior for low-level image statistics, aggregated over many images. We further show that the dependence on this prior is rapidly re-weighted against contextual information, even when misleading. Our results therefore provide insight into how apparent high-level interpretations of scene appearances follow from the most basic of perceptual processes, which are grounded in the statistics of natural images.
Affiliation(s)
- Emily J A-Izzeddin
- Queensland Brain Institute, Building 79, University of Queensland, St Lucia, QLD 4072, Australia.
- Jason B Mattingley
- Queensland Brain Institute, Building 79, University of Queensland, St Lucia, QLD 4072, Australia; School of Psychology, Building 24A, University of Queensland, St Lucia, QLD 4072, Australia
- William J Harrison
- Queensland Brain Institute, Building 79, University of Queensland, St Lucia, QLD 4072, Australia; School of Psychology, Building 24A, University of Queensland, St Lucia, QLD 4072, Australia
7. Schmid D, Jarvers C, Neumann H. Canonical circuit computations for computer vision. Biological Cybernetics 2023; 117:299-329. PMID: 37306782; PMCID: PMC10600314; DOI: 10.1007/s00422-023-00966-9.
Abstract
Advanced computer vision mechanisms have been inspired by neuroscientific findings. However, with the focus on improving benchmark achievements, technical solutions have been shaped by application and engineering constraints. This includes the training of neural networks, which has led to the development of feature detectors optimally suited to the application domain. However, the limitations of such approaches motivate the need to identify computational principles, or motifs, in biological vision that can enable further foundational advances in machine vision. We propose to utilize structural and functional principles of neural systems that have been largely overlooked. They potentially provide new inspiration for computer vision mechanisms and models. Recurrent feedforward, lateral, and feedback interactions characterize general principles underlying processing in mammals. We derive a formal specification of core computational motifs that utilize these principles. These are combined to define model mechanisms for visual shape and motion processing. We demonstrate how such a framework can be adopted to run on neuromorphic brain-inspired hardware platforms and can be extended to automatically adapt to environment statistics. We argue that the identified principles and their formalization inspire sophisticated computational mechanisms with improved explanatory scope. These and other elaborated, biologically inspired models can be employed to design computer vision solutions for different tasks, and they can be used to advance neural network architectures for learning.
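As a rough illustration of the kind of computational motif the abstract refers to, the sketch below combines a multiplicative feedback modulation of feedforward drive with divisive normalization across a small population, two operations commonly attributed to cortical circuits. The function, parameter names, and values are illustrative assumptions, not the paper's formal specification.

```python
import numpy as np

def canonical_motif(feedforward, feedback, k=1.0, sigma=0.1):
    """Toy feedback motif: feedback multiplicatively enhances the
    feedforward drive, then the result is divisively normalized by
    total population activity. All parameters are illustrative."""
    driven = feedforward * (1.0 + k * feedback)   # modulatory enhancement
    return driven / (sigma + driven.sum())        # divisive normalization

ff = np.array([0.2, 0.8, 0.4])   # feedforward activity of three units
fb = np.array([0.0, 1.0, 0.0])   # feedback favouring the middle unit
out = canonical_motif(ff, fb)
```

The feedback cannot create activity on its own (a unit with zero feedforward input stays silent), which is the usual rationale for modelling feedback as modulatory rather than driving.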
Affiliation(s)
- Daniel Schmid
- Institute for Neural Information Processing, Ulm University, James-Franck-Ring, Ulm, 89081 Germany
- Christian Jarvers
- Institute for Neural Information Processing, Ulm University, James-Franck-Ring, Ulm, 89081 Germany
- Heiko Neumann
- Institute for Neural Information Processing, Ulm University, James-Franck-Ring, Ulm, 89081 Germany
8. Noonan MP, Störmer VS. Contextual and Temporal Constraints for Attentional Capture: Commentary on Theeuwes' 2023 Review "The Attentional Capture Debate: When Can We Avoid Salient Distractors and when Not?". J Cogn 2023; 6:37. PMID: 37426062; PMCID: PMC10327855; DOI: 10.5334/joc.274.
Abstract
Salient distractors demand our attention. Their salience, derived from intensity, relative contrast, or learned relevance, captures our limited information capacity. This is typically an adaptive response, as salient stimuli may require an immediate change in behaviour. However, apparently salient distractors sometimes fail to capture attention. Theeuwes, in his recent commentary, has proposed certain boundary conditions of the visual scene that result in one of two search modes, serial or parallel, which determine whether we can avoid salient distractors or not. Here, we argue that a more complete theory should consider the temporal and contextual factors that influence the very salience of the distractor itself.
Affiliation(s)
- MaryAnn P. Noonan
- Department of Psychology, University of York, Heslington, York, UK
- Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford, UK
- Viola S. Störmer
- Department of Psychological and Brain Sciences, Dartmouth College, USA
9. Bischof WF, Anderson NC, Kingstone A. Eye and head movements while encoding and recognizing panoramic scenes in virtual reality. PLoS One 2023; 18:e0282030. PMID: 36800398; PMCID: PMC9937482; DOI: 10.1371/journal.pone.0282030.
Abstract
One approach to studying the recognition of scenes and objects relies on the comparison of eye movement patterns during encoding and recognition. Past studies typically analyzed the perception of flat stimuli of limited extent presented on a computer monitor that did not require head movements. In contrast, participants in the present study saw omnidirectional panoramic scenes through an immersive 3D virtual reality viewer, and they could move their head freely to inspect different parts of the visual scenes. This allowed us to examine how unconstrained observers use their head and eyes to encode and recognize visual scenes. By studying head and eye movement within a fully immersive environment, and applying cross-recurrence analysis, we found that eye movements are strongly influenced by the content of the visual environment, as are head movements, though to a much lesser degree. Moreover, we found that the head and eyes are linked, with the head supporting, and by and large mirroring, the movements of the eyes, consistent with the notion that the head operates to support the acquisition of visual information by the eyes.
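The cross-recurrence analysis mentioned above can be illustrated with a minimal sketch: build a cross-recurrence matrix from two time series (here, synthetic stand-ins for an eye-direction and a head-direction signal) and summarize it with the recurrence rate. The signals, the threshold radius, and the summary statistic are assumptions for illustration; the study's actual pipeline is more involved.

```python
import numpy as np

def cross_recurrence(x, y, radius):
    """Cross-recurrence matrix for two 1-D time series: entry (i, j)
    is 1 when x[i] and y[j] lie within `radius` of each other."""
    return (np.abs(x[:, None] - y[None, :]) <= radius).astype(int)

def recurrence_rate(cr):
    """Fraction of recurrent points: a crude measure of coupling."""
    return cr.mean()

t = np.linspace(0, 2 * np.pi, 200)
eye = np.sin(t)          # synthetic eye signal
head = 0.5 * np.sin(t)   # synthetic head signal: smaller but mirroring
cr = cross_recurrence(eye, head, radius=0.2)
rr = recurrence_rate(cr)
```

Diagonal structure in `cr` indicates that the two signals visit similar states at similar lags, which is the kind of eye-head coupling the study reports.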
Affiliation(s)
- Walter F. Bischof
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada
- Nicola C. Anderson
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada
- Alan Kingstone
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada
10. Odic D, Oppenheimer DM. Visual numerosity perception shows no advantage in real-world scenes compared to artificial displays. Cognition 2023; 230:105291. PMID: 36183630; DOI: 10.1016/j.cognition.2022.105291.
Abstract
While the human visual system is sensitive to numerosity, the mechanisms that allow perception to extract and represent the number of objects in a scene remain unknown. Prominent theoretical approaches posit that numerosity perception emerges from passive experience with visual scenes throughout development, and that unsupervised deep neural network models mirror all characteristic behavioral features observed in participants. Here, we derive and test a novel prediction: if the visual number sense emerges from exposure to real-world scenes, then the closer a stimulus aligns with the natural statistics of the real world, the better number perception should be. But, in contrast to this prediction, we observe no such advantage (and sometimes even a notable impairment) in number perception for natural scenes compared to artificial dot displays in college-aged adults. These findings are not accounted for by the difficulty of object identification, visual clutter, the parsability of objects from the rest of the scene, or increased occlusion. This pattern of results represents a fundamental challenge to recent models of numerosity perception based in experiential learning of statistical regularities, and instead suggests that the visual number sense is attuned to the abstract number of objects, independent of their underlying correlation with non-numeric features. We discuss our results in the context of recent proposals that object complexity and entropy may play a role in number perception.
11. Helbing J, Draschkow D, Võ ML-H. Auxiliary Scene-Context Information Provided by Anchor Objects Guides Attention and Locomotion in Natural Search Behavior. Psychol Sci 2022; 33:1463-1476. PMID: 35942922; DOI: 10.1177/09567976221091838.
Abstract
Successful adaptive behavior requires efficient attentional and locomotive systems. Previous research has thoroughly investigated how we achieve this efficiency during natural behavior by exploiting prior knowledge related to targets of our actions (e.g., attending to metallic targets when looking for a pot) and to the environmental context (e.g., looking for the pot in the kitchen). Less is known about whether and how individual nontarget components of the environment support natural behavior. In our immersive virtual reality task, 24 adult participants searched for objects in naturalistic scenes in which we manipulated the presence and arrangement of large, static objects that anchor predictions about targets (e.g., the sink provides a prediction for the location of the soap). Our results show that gaze and body movements in this naturalistic setting are strongly guided by these anchors. These findings demonstrate that objects auxiliary to the target are incorporated into the representations guiding attention and locomotion.
Affiliation(s)
- Jason Helbing
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt
- Dejan Draschkow
- Brain and Cognition Laboratory, Department of Experimental Psychology, University of Oxford
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford
- Melissa L-H Võ
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt
12. Hoogerbrugge AJ, Strauch C, Oláh ZA, Dalmaijer ES, Nijboer TCW, Van der Stigchel S. Seeing the Forrest through the trees: Oculomotor metrics are linked to heart rate. PLoS One 2022; 17:e0272349. PMID: 35917377; PMCID: PMC9345484; DOI: 10.1371/journal.pone.0272349.
Abstract
Fluctuations in a person's arousal accompany mental states such as drowsiness, mental effort, or motivation, and have a profound effect on task performance. Here, we investigated the link between two central measures affected by arousal levels: heart rate and eye movements. In contrast to heart rate, eye movements can be inferred remotely and unobtrusively, and there is evidence that oculomotor metrics (i.e., fixations and saccades) are indicators of aspects of arousal that go hand in hand with changes in mental effort, motivation, or task type. Gaze data and heart rate of 14 participants during film viewing were used in Random Forest models, the results of which show that blink rate and duration, and the movement aspect of oculomotor metrics (i.e., velocities and amplitudes), link to heart rate, more so than the amount or duration of fixations and saccades. We discuss that eye movements are not only linked to heart rate, but that both may be similarly influenced by a common underlying arousal system. These findings provide new pathways for the remote measurement of arousal, and its link to psychophysiological features.
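A minimal sketch of the modelling approach described above, assuming hypothetical feature names and fully synthetic data (the study used real gaze and heart-rate recordings from 14 participants): fit a Random Forest regressor to predict heart rate from oculomotor metrics.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 300

# Hypothetical oculomotor features; names and units are illustrative,
# not the study's exact feature set.
blink_rate = rng.normal(15, 4, n)          # blinks per minute
blink_duration = rng.normal(120, 20, n)    # ms
saccade_velocity = rng.normal(300, 50, n)  # deg/s
saccade_amplitude = rng.normal(5, 1.5, n)  # deg

# Synthetic heart rate loosely driven by blink and movement metrics,
# mimicking the reported pattern that these link to heart rate.
heart_rate = (70 - 0.5 * blink_rate + 0.02 * saccade_velocity
              + rng.normal(0, 2, n))

X = np.column_stack([blink_rate, blink_duration,
                     saccade_velocity, saccade_amplitude])
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, heart_rate)
r2 = model.score(X, heart_rate)  # training fit; a real analysis would cross-validate
```

In practice one would inspect `model.feature_importances_` (and use held-out data) to ask which oculomotor metrics carry the heart-rate signal, which is the comparison the abstract reports.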
Affiliation(s)
- Alex J. Hoogerbrugge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
- Christoph Strauch
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
- Zoril A. Oláh
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
- Edwin S. Dalmaijer
- School of Psychological Science, University of Bristol, Bristol, United Kingdom
- Tanja C. W. Nijboer
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
- Center of Excellence for Rehabilitation Medicine, UMC Utrecht Brain Center, University Medical Center Utrecht, De Hoogstraat Rehabilitation, Utrecht, Netherlands
- Department of Rehabilitation, Physical Therapy Science & Sports, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
13. Theeuwes J, Bogaerts L, van Moorselaar D. What to expect where and when: how statistical learning drives visual selection. Trends Cogn Sci 2022; 26:860-872. PMID: 35840476; DOI: 10.1016/j.tics.2022.06.001.
Abstract
While the visual environment contains massive amounts of information, we should not and cannot pay attention to all events. Instead, we need to direct attention to those events that have proven to be important in the past and suppress those that were distracting and irrelevant. Experiences molded through a learning process enable us to extract and adapt to the statistical regularities in the world. While previous studies have shown that visual statistical learning (VSL) is critical for representing higher order units of perception, here we review the role of VSL in attentional selection. Evidence suggests that through VSL, attentional priority settings are optimally adjusted to regularities in the environment, without intention and without conscious awareness.
Affiliation(s)
- Jan Theeuwes
- Vrije Universiteit Amsterdam, Amsterdam, the Netherlands; Institute Brain and Behavior (iBBA), Amsterdam, the Netherlands; William James Center for Research, ISPA-Instituto Universitario, Lisbon, Portugal.
- Louisa Bogaerts
- Vrije Universiteit Amsterdam, Amsterdam, the Netherlands; Institute Brain and Behavior (iBBA), Amsterdam, the Netherlands; Ghent University, Ghent, Belgium
- Dirk van Moorselaar
- Vrije Universiteit Amsterdam, Amsterdam, the Netherlands; Institute Brain and Behavior (iBBA), Amsterdam, the Netherlands
14. Scutt G, Williams S, Auyeung V, Overall A. Clinical decision-making and dispensing performance in pharmacy students and its relationship to executive function and implicit memory. Exploratory Research in Clinical and Social Pharmacy 2022; 5:100096. PMID: 35478524; PMCID: PMC9030318; DOI: 10.1016/j.rcsop.2021.100096.
15
Lauer T, Schmidt F, Võ MLH. The role of contextual materials in object recognition. Sci Rep 2021; 11:21988. [PMID: 34753999 PMCID: PMC8578445 DOI: 10.1038/s41598-021-01406-z] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2021] [Accepted: 10/22/2021] [Indexed: 01/01/2023] Open
Abstract
While scene context is known to facilitate object recognition, little is known about which contextual "ingredients" are at the heart of this phenomenon. Here, we address the question of whether the materials that frequently occur in scenes (e.g., tiles in a bathroom) associated with specific objects (e.g., a perfume) are relevant for the processing of that object. To this end, we presented photographs of consistent and inconsistent objects (e.g., perfume vs. pinecone) superimposed on scenes (e.g., a bathroom) and close-ups of materials (e.g., tiles). In Experiment 1, consistent objects on scenes were named more accurately than inconsistent ones, while there was only a marginal consistency effect for objects on materials. Also, we did not find any consistency effect for scrambled materials that served as a color control condition. In Experiment 2, we recorded event-related potentials and found N300/N400 responses-markers of semantic violations-for objects on inconsistent relative to consistent scenes. Critically, objects on materials triggered N300/N400 responses of similar magnitudes. Our findings show that contextual materials indeed affect object processing-even in the absence of spatial scene structure and object content-suggesting that material is one of the contextual "ingredients" driving scene context effects.
Affiliation(s)
- Tim Lauer
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Theodor-W.-Adorno-Platz 6, PEG 5.G144, 60323, Frankfurt am Main, Germany
- Filipp Schmidt
- Department of Experimental Psychology, Justus Liebig University Giessen, 35394, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
- Melissa L-H Võ
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Theodor-W.-Adorno-Platz 6, PEG 5.G144, 60323, Frankfurt am Main, Germany
16
Ki JJ, Dmochowski JP, Touryan J, Parra LC. Neural responses to natural visual motion are spatially selective across the visual field, with selectivity differing across brain areas and task. Eur J Neurosci 2021; 54:7609-7625. [PMID: 34679237 PMCID: PMC9298375 DOI: 10.1111/ejn.15503] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2021] [Revised: 09/16/2021] [Accepted: 10/07/2021] [Indexed: 11/28/2022]
Abstract
It is well established that neural responses to visual stimuli are enhanced at select locations in the visual field. Although spatial selectivity and the effects of spatial attention are well understood for discrete tasks (e.g. visual cueing), little is known for naturalistic experience that involves continuous dynamic visual stimuli (e.g. driving). Here, we assess the strength of neural responses across the visual space during a kart-race game. Given the varying relevance of visual location in this task, we hypothesized that the strength of neural responses to movement would vary across the visual field and would differ between active play and passive viewing. To test this, we measured the correlation strength of scalp-evoked potentials with optical flow magnitude at individual locations on the screen. We find that neural responses are strongly correlated at task-relevant locations in visual space, extending beyond the focus of overt attention. Although the driver's gaze is directed upon the heading direction at the centre of the screen, neural responses were robust at the peripheral areas (e.g. roads and surrounding buildings). Importantly, neural responses to visual movement are broadly distributed across the scalp, with visual spatial selectivity differing across electrode locations. Moreover, during active gameplay, neural responses are enhanced at select locations in the visual space. Conventionally, spatial selectivity of neural response has been interpreted as an attentional gain mechanism. In the present study, the data suggest that different brain areas focus attention on different task-relevant portions of the visual field, beyond the focus of overt attention.
Affiliation(s)
- Jason J Ki
- Department of Biomedical Engineering, City College of the City University of New York, New York, New York, USA
- Jacek P Dmochowski
- Department of Biomedical Engineering, City College of the City University of New York, New York, New York, USA
- Lucas C Parra
- Department of Biomedical Engineering, City College of the City University of New York, New York, New York, USA
17
Li W, Guan J, Shi W. Increasing the load on executive working memory reduces the search performance in the natural scenes: Evidence from eye movements. CURRENT PSYCHOLOGY 2021. [DOI: 10.1007/s12144-021-02270-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
18
Almadori E, Mastroberardino S, Botta F, Brunetti R, Lupiáñez J, Spence C, Santangelo V. Crossmodal Semantic Congruence Interacts with Object Contextual Consistency in Complex Visual Scenes to Enhance Short-Term Memory Performance. Brain Sci 2021; 11:brainsci11091206. [PMID: 34573227 PMCID: PMC8467083 DOI: 10.3390/brainsci11091206] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2021] [Revised: 08/30/2021] [Accepted: 09/09/2021] [Indexed: 11/17/2022] Open
Abstract
Object sounds can enhance the attentional selection and perceptual processing of semantically-related visual stimuli. However, it is currently unknown whether crossmodal semantic congruence also affects the post-perceptual stages of information processing, such as short-term memory (STM), and whether this effect is modulated by the object consistency with the background visual scene. In two experiments, participants viewed everyday visual scenes for 500 ms while listening to an object sound, which could either be semantically related to the object that served as the STM target at retrieval or not. This defined crossmodal semantically cued vs. uncued targets. The target was either in- or out-of-context with respect to the background visual scene. After a maintenance period of 2000 ms, the target was presented in isolation against a neutral background, in either the same or different spatial position as in the original scene. The participants judged the same vs. different position of the object and then provided a confidence judgment concerning the certainty of their response. The results revealed greater accuracy when judging the spatial position of targets paired with a semantically congruent object sound at encoding. This crossmodal facilitatory effect was modulated by whether the target object was in- or out-of-context with respect to the background scene, with out-of-context targets reducing the facilitatory effect of object sounds. Overall, these findings suggest that the presence of the object sound at encoding facilitated the selection and processing of the semantically related visual stimuli, but this effect depends on the semantic configuration of the visual scene.
Affiliation(s)
- Erika Almadori
- Neuroimaging Laboratory, IRCCS Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome, Italy
- Serena Mastroberardino
- Department of Psychology, School of Medicine & Psychology, Sapienza University of Rome, Via dei Marsi 78, 00185 Rome, Italy
- Fabiano Botta
- Department of Experimental Psychology and Mind, Brain, and Behavior Research Center (CIMCYC), University of Granada, 18071 Granada, Spain
- Riccardo Brunetti
- Cognitive and Clinical Psychology Laboratory, Department of Human Sciences, Università Europea di Roma, 00163 Roma, Italy
- Juan Lupiáñez
- Department of Experimental Psychology and Mind, Brain, and Behavior Research Center (CIMCYC), University of Granada, 18071 Granada, Spain
- Charles Spence
- Department of Experimental Psychology, Oxford University, Oxford OX2 6GG, UK
- Valerio Santangelo
- Neuroimaging Laboratory, IRCCS Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome, Italy
- Department of Philosophy, Social Sciences & Education, University of Perugia, Piazza G. Ermini, 1, 06123 Perugia, Italy
19
Sullivan B, Ludwig CJH, Damen D, Mayol-Cuevas W, Gilchrist ID. Look-ahead fixations during visuomotor behavior: Evidence from assembling a camping tent. J Vis 2021; 21:13. [PMID: 33688920 PMCID: PMC7961111 DOI: 10.1167/jov.21.3.13] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/04/2022] Open
Abstract
Eye movements can support ongoing manipulative actions, but a class of so-called look-ahead fixations (LAFs) are related to future tasks. We examined LAFs in a complex natural task—assembling a camping tent. Tent assembly is a relatively uncommon task and requires the completion of multiple subtasks in sequence over a 5- to 20-minute duration. Participants wore a head-mounted camera and eye tracker. Subtasks and LAFs were annotated. We document four novel aspects of LAFs. First, LAFs were not random and their frequency was biased to certain objects and subtasks. Second, latencies are larger than previously noted, with 35% of LAFs occurring within 10 seconds before motor manipulation and 75% within 100 seconds. Third, LAF behavior extends far into future subtasks, because only 47% of LAFs are made to objects relevant to the current subtask. Seventy-five percent of LAFs are to objects used within five upcoming steps. Last, LAFs are often directed repeatedly to the target before manipulation, suggesting memory volatility. LAFs with short fixation–action latencies have been hypothesized to benefit future visual search and/or motor manipulation. However, the diversity of LAFs suggests they may also reflect scene exploration and task relevance, as well as longer term problem solving and task planning.
Affiliation(s)
- Brian Sullivan
- School of Psychological Sciences, University of Bristol, Bristol, UK
- Dima Damen
- Department of Computer Science, University of Bristol, Bristol, UK
- Iain D Gilchrist
- School of Psychological Sciences, University of Bristol, Bristol, UK
20
Kootstra T, Teuwen J, Goudsmit J, Nijboer T, Dodd M, Van der Stigchel S. Machine learning-based classification of viewing behavior using a wide range of statistical oculomotor features. J Vis 2021; 20:1. [PMID: 32876676 PMCID: PMC7476673 DOI: 10.1167/jov.20.9.1] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Since the seminal work of Yarbus, multiple studies have demonstrated the influence of task-set on oculomotor behavior and the current cognitive state. In more recent years, this field of research has expanded by evaluating the costs of abruptly switching between such different tasks. At the same time, the field of classifying oculomotor behavior has been moving toward more advanced, data-driven methods of decoding data. For the current study, we used a large dataset compiled over multiple experiments and implemented separate state-of-the-art machine learning methods for decoding both cognitive state and task-switching. We found that, by extracting a wide range of oculomotor features, we were able to implement robust classifier models for decoding both cognitive state and task-switching. Our decoding performance highlights the feasibility of this approach, even invariant to image statistics. Additionally, we present a feature ranking for both models, indicating the relative magnitude of different oculomotor features for both classifiers. These rankings indicate a separate set of important predictors for decoding each task. Finally, we discuss the implications of the current approach related to interpreting the decoding results.
Affiliation(s)
- Timo Kootstra
- Experimental Psychology, Helmholtz Institute, Utrecht University, The Netherlands
- Jonas Teuwen
- Radboud University Medical Center/Netherlands Cancer Institute, The Netherlands
- Tanja Nijboer
- Experimental Psychology, Helmholtz Institute, Utrecht University, The Netherlands
- Center of Excellence for Rehabilitation Medicine, Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht University and De Hoogstraat Rehabilitation, 3583 TM Utrecht, The Netherlands
21
Kristjánsson Á, Draschkow D. Keeping it real: Looking beyond capacity limits in visual cognition. Atten Percept Psychophys 2021; 83:1375-1390. [PMID: 33791942 PMCID: PMC8084831 DOI: 10.3758/s13414-021-02256-7] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/23/2020] [Indexed: 11/23/2022]
Abstract
Research within visual cognition has made tremendous strides in uncovering the basic operating characteristics of the visual system by reducing the complexity of natural vision to artificial but well-controlled experimental tasks and stimuli. This reductionist approach has for example been used to assess the basic limitations of visual attention, visual working memory (VWM) capacity, and the fidelity of visual long-term memory (VLTM). The assessment of these limits is usually made in a pure sense, irrespective of goals, actions, and priors. While it is important to map out the bottlenecks our visual system faces, we focus here on selected examples of how such limitations can be overcome. Recent findings suggest that during more natural tasks, capacity may be higher than reductionist research suggests and that separable systems subserve different actions, such as reaching and looking, which might provide important insights about how pure attentional or memory limitations could be circumvented. We also review evidence suggesting that the closer we get to naturalistic behavior, the more we encounter implicit learning mechanisms that operate "for free" and "on the fly." These mechanisms provide a surprisingly rich visual experience, which can support capacity-limited systems. We speculate whether natural tasks may yield different estimates of the limitations of VWM, VLTM, and attention, and propose that capacity measurements should also pass the real-world test within naturalistic frameworks. Our review highlights various approaches for this and suggests that our understanding of visual cognition will benefit from incorporating the complexities of real-world cognition in experimental approaches.
Affiliation(s)
- Árni Kristjánsson
- School of Health Sciences, University of Iceland, Reykjavík, Iceland
- School of Psychology, National Research University Higher School of Economics, Moscow, Russia
- Dejan Draschkow
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
22
Mena-Garcia L, Pastor-Jimeno JC, Maldonado MJ, Coco-Martin MB, Fernandez I, Arenillas JF. Multitasking Compensatory Saccadic Training Program for Hemianopia Patients: A New Approach With 3-Dimensional Real-World Objects. Transl Vis Sci Technol 2021; 10:3. [PMID: 34003888 PMCID: PMC7873505 DOI: 10.1167/tvst.10.2.3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2020] [Accepted: 12/25/2020] [Indexed: 11/24/2022] Open
Abstract
Purpose: To examine whether a noncomputerized multitasking compensatory saccadic training program (MCSTP) for patients with hemianopia, based on a reading regimen and eight exercises that recreate everyday visuomotor activities using three-dimensional (3D) real-world objects, improves the visual ability/function, quality of life (QL), and functional independence (FI).
Methods: The 3D-MCSTP included four in-office visits and two customized home-based daily training sessions over 12 weeks. A quasiexperimental, pretest/posttest study design was carried out with an intervention group (IG) (n = 20) and a no-training group (NTG) (n = 20) matched for age, hemianopia type, and brain injury duration.
Results: The groups were comparable for the main baseline variables and all participants (n = 40) completed the study. The IG mainly showed significant improvements in visual-processing speed (57.34% ± 19.28%; P < 0.0001) and visual attention/retention ability (26.67% ± 19.21%; P < 0.0001), which also were significantly greater (P < 0.05) than in the NTG. Moreover, the IG showed large effect sizes (Cohen's d) in 75% of the total QL and FI dimensions analyzed; in contrast to the NTG that showed negligible mean effect sizes in 96% of these dimensions.
Conclusions: The customized 3D-MCSTP was associated with a satisfactory response in the IG for improving complex visual processing, QL, and FI.
Translational Relevance: Neurovisual rehabilitation of patients with hemianopia seems more efficient when programs combine in-office visits and customized home-based training sessions based on real objects and simulating real-life conditions, than no treatment or previously reported computer-screen approaches, probably because of better stimulation of patients' motivation and visual-processing speed brain mechanisms.
Affiliation(s)
- Laura Mena-Garcia
- Instituto Universitario de Oftalmobiología Aplicada (IOBA), Eye Institute, Universidad de Valladolid, Valladolid, Spain
- Universidad de Valladolid, Valladolid, Spain
- Jose C. Pastor-Jimeno
- Instituto Universitario de Oftalmobiología Aplicada (IOBA), Eye Institute, Universidad de Valladolid, Valladolid, Spain
- Universidad de Valladolid, Valladolid, Spain
- Department of Ophthalmology, Hospital Clínico Universitario de Valladolid, Valladolid, Spain
- Red Temática de Investigación Colaborativa en Oftalmología (OftaRed), Instituto de Salud Carlos III, Madrid, Spain
- Miguel J. Maldonado
- Instituto Universitario de Oftalmobiología Aplicada (IOBA), Eye Institute, Universidad de Valladolid, Valladolid, Spain
- Universidad de Valladolid, Valladolid, Spain
- Red Temática de Investigación Colaborativa en Oftalmología (OftaRed), Instituto de Salud Carlos III, Madrid, Spain
- Maria B. Coco-Martin
- Universidad de Valladolid, Valladolid, Spain
- Department of Neurology, Hospital Clínico Universitario de Valladolid, Valladolid, Spain
- Itziar Fernandez
- Universidad de Valladolid, Valladolid, Spain
- Biomedical Research Networking Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Valladolid, Spain
- Juan F. Arenillas
- Universidad de Valladolid, Valladolid, Spain
- Department of Neurology, Hospital Clínico Universitario de Valladolid, Valladolid, Spain
23
Durugbo CM. Eye tracking for work-related visual search: a cognitive task analysis. ERGONOMICS 2021; 64:225-240. [PMID: 32914697 DOI: 10.1080/00140139.2020.1822547] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/02/2019] [Accepted: 09/04/2020] [Indexed: 06/11/2023]
Abstract
Cognitive Task Analysis (CTA) is an important methodology in ergonomics for studying workplaces and work patterns. Using eye tracking as a CTA methodology, this article explores visual search patterns in complex work environments and situations. It presents a simulated crime scene case study that applies eye tracking-based experiments in foraging and sense-making loops to elicit and represent knowledge on expert versus novice search patterns for complex work. The case probes the visual search task of preliminarily evaluating and documenting potential crime scene evidence. The experimental protocol relies on the ASL Mobile Eye and the analyses of experimental data include preliminary inspections of live-viewing data on eye-movements, precedence matrices detailing scan paths, and gaze charts that illustrate participants' attention based on fixation counts and durations. In line with the CTA methodology, the article uses concept maps to represent knowledge derived from different phases of the study. The article also discusses the research implications and methodologically reflects on the case study. Practitioner summary: This study offers valuable insights for work design. The use of eye tracking as a CTA methodology offers potentials for translating visual search tasks into defined visual search concepts for complex work environments and situations. The ability to model visual attention is valuable for work designs that improve complex work performance, reduce work stress, and promote work satisfaction.
Affiliation(s)
- Christopher M Durugbo
- Department of Innovation and Technology Management, Arabian Gulf University, Manama, Bahrain
24
Hebert KP, Goldinger SD, Walenchok SC. Eye movements and the label feedback effect: Speaking modulates visual search via template integrity. Cognition 2021; 210:104587. [PMID: 33508577 DOI: 10.1016/j.cognition.2021.104587] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2018] [Revised: 01/05/2021] [Accepted: 01/06/2021] [Indexed: 11/24/2022]
Abstract
The label-feedback hypothesis (Lupyan, 2012) proposes that language modulates low- and high-level visual processing, such as priming visual object perception. Lupyan and Swingley (2012) found that repeating target names facilitates visual search, resulting in shorter response times (RTs) and higher accuracy. In the present investigation, we conceptually replicated and extended their study, using additional control conditions and recording eye movements during search. Our goal was to evaluate whether self-directed speech influences target locating (i.e. attentional guidance) or object perception (i.e., distractor rejection and target appreciation). In three experiments, during object search, people spoke target names, nonwords, irrelevant (absent) object names, or irrelevant (present) object names (all within-participants). Experiments 1 and 2 examined search RTs and accuracy: Speaking target names improved performance, without differences among the remaining conditions. Experiment 3 incorporated eye-tracking: Gaze fixation patterns suggested that language does not affect attentional guidance, but instead affects both distractor rejection and target appreciation. When search trials were conditionalized according to distractor fixations, language effects became more orderly: Search was fastest while people spoke target names, followed in linear order by the nonword, distractor-absent, and distractor-present conditions. We suggest that language affects template maintenance during search, allowing fluent differentiation of targets and distractors. Materials, data, and analyses can be retrieved here: https://osf.io/z9ex2/.
25
Winsor AM, Pagoti GF, Daye DJ, Cheries EW, Cave KR, Jakob EM. What gaze direction can tell us about cognitive processes in invertebrates. Biochem Biophys Res Commun 2021; 564:43-54. [PMID: 33413978 DOI: 10.1016/j.bbrc.2020.12.001] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2020] [Revised: 11/30/2020] [Accepted: 12/01/2020] [Indexed: 01/29/2023]
Abstract
Most visually guided animals shift their gaze using body movements, eye movements, or both to gather information selectively from their environments. Psychological studies of eye movements have advanced our understanding of perceptual and cognitive processes that mediate visual attention in humans and other vertebrates. However, much less is known about how these processes operate in other organisms, particularly invertebrates. We here make the case that studies of invertebrate cognition can benefit by adding precise measures of gaze direction. To accomplish this, we briefly review the human visual attention literature and outline four research themes and several experimental paradigms that could be extended to invertebrates. We briefly review selected studies where the measurement of gaze direction in invertebrates has provided new insights, and we suggest future areas of exploration.
Affiliation(s)
- Alex M Winsor
- Graduate Program in Organismic and Evolutionary Biology, University of Massachusetts Amherst, Amherst, MA, 01003, USA
- Guilherme F Pagoti
- Programa de Pós-Graduação em Zoologia, Instituto de Biociências, Universidade de São Paulo, Rua do Matão, 321, Travessa 14, Cidade Universitária, São Paulo, SP, 05508-090, Brazil
- Daniel J Daye
- Department of Biology, University of Massachusetts Amherst, Amherst, MA, 01003, USA; Graduate Program in Biological and Environmental Sciences, University of Rhode Island, Kingston, RI, 02881, USA
- Erik W Cheries
- Department of Psychological and Brain Sciences, University of Massachusetts Amherst, Amherst, MA, 01003, USA
- Kyle R Cave
- Department of Psychological and Brain Sciences, University of Massachusetts Amherst, Amherst, MA, 01003, USA
- Elizabeth M Jakob
- Department of Biology, University of Massachusetts Amherst, Amherst, MA, 01003, USA
26
Meghdadi AH, Giesbrecht B, Eckstein MP. EEG signatures of contextual influences on visual search with real scenes. Exp Brain Res 2021; 239:797-809. [PMID: 33398454 DOI: 10.1007/s00221-020-05984-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Accepted: 11/07/2020] [Indexed: 01/23/2023]
Abstract
The use of scene context is a powerful way by which biological organisms guide and facilitate visual search. Although many studies have shown enhancements of target-related electroencephalographic activity (EEG) with synthetic cues, there have been fewer studies demonstrating such enhancements during search with scene context and objects in real world scenes. Here, observers covertly searched for a target in images of real scenes while we used EEG to measure the steady state visual evoked response to objects flickering at different frequencies. The target appeared in its typical contextual location or out of context while we controlled for low-level properties of the image including target saliency against the background and retinal eccentricity. A pattern classifier using EEG activity at the relevant modulated frequencies showed target detection accuracy increased when the target was in a contextually appropriate location. A control condition for which observers searched the same images for a different target orthogonal to the contextual manipulation, resulted in no effects of scene context on classifier performance, confirming that image properties cannot explain the contextual modulations of neural activity. Pattern classifier decisions for individual images were also related to the aggregated observer behavioral decisions for individual images. Together, these findings demonstrate target-related neural responses are modulated by scene context during visual search with real world scenes and can be related to behavioral search decisions.
Affiliation(s)
- Amir H Meghdadi
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, 93106-9660, USA
- Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, CA, 93106-5100, USA
- Barry Giesbrecht
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, 93106-9660, USA
- Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, CA, 93106-5100, USA
- Interdepartmental Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara, Santa Barbara, CA, 93106-5100, USA
- Miguel P Eckstein
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, 93106-9660, USA
- Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, CA, 93106-5100, USA
- Interdepartmental Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara, Santa Barbara, CA, 93106-5100, USA
27
Beitner J, Helbing J, Draschkow D, Võ MLH. Get Your Guidance Going: Investigating the Activation of Spatial Priors for Efficient Search in Virtual Reality. Brain Sci 2021; 11:44. [PMID: 33406655 PMCID: PMC7823740 DOI: 10.3390/brainsci11010044] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2020] [Revised: 12/21/2020] [Accepted: 12/22/2020] [Indexed: 11/21/2022] Open
Abstract
Repeated search studies are a hallmark in the investigation of the interplay between memory and attention. Because results are usually averaged, the substantial decrease in response times between the first and second search through the same search environment is rarely discussed. This search initiation effect is often the most dramatic decrease in search times in a series of sequential searches. The nature of this initial lack of search efficiency has thus far remained unexplored. We tested the hypothesis that the activation of spatial priors leads to this search efficiency profile. Before searching repeatedly through scenes in VR, participants either (1) previewed the scene, (2) saw an interrupted preview, or (3) started searching immediately. The search initiation effect was present in the latter condition but in neither of the preview conditions. Eye movement metrics revealed that the locus of this effect lies in search guidance instead of search initiation or decision time, and was beyond effects of object learning or incidental memory. Our study suggests that upon visual processing of an environment, a process of activating spatial priors to enable orientation is initiated, which takes a toll on search time at first, but once activated it can be used to guide subsequent searches.
Collapse
Affiliation(s)
- Julia Beitner
- Scene Grammar Lab, Institute of Psychology, Goethe University, 60323 Frankfurt am Main, Germany
- Jason Helbing
- Scene Grammar Lab, Institute of Psychology, Goethe University, 60323 Frankfurt am Main, Germany
- Dejan Draschkow
- Brain and Cognition Laboratory, Department of Psychiatry, University of Oxford, Oxford OX3 7JX, UK
- Melissa L.-H. Võ
- Scene Grammar Lab, Institute of Psychology, Goethe University, 60323 Frankfurt am Main, Germany
28
Abstract
Safe driving demands the coordination of multiple sensory and cognitive functions, such as vision and attention. Patients with neurologic or ophthalmic disease are exposed to selective pathophysiologic insults to driving-critical systems, placing them at higher risk for unsafe driving and restricted driving privileges. Here, we evaluate how vision and attention contribute to unsafe driving across different patient populations. In ophthalmic disease, we focus on macular degeneration, glaucoma, diabetic retinopathy, and cataract; in neurologic disease, we focus on Alzheimer's disease, Parkinson's disease, and multiple sclerosis. Unsafe driving is generally associated with impaired vision in ophthalmic patients and with impaired attention in neurologic patients. Furthermore, patients with ophthalmic disease experience some degree of impairment in attention, and, similarly, patients with neurologic disease experience some degree of impairment in vision. While numerous studies have demonstrated a relationship between impaired vision and unsafe driving in neurologic disease, there remains a dearth of knowledge regarding the relationship between impaired attention and unsafe driving in ophthalmic disease. In summary, this chapter confirms the contribution of vision and attention to safe driving and identifies opportunities for future research.
Affiliation(s)
- David E Anderson
- Department of Ophthalmology & Visual Sciences, University of Nebraska Medical Center, Omaha, NE, United States
- Deepta A Ghate
- Department of Ophthalmology & Visual Sciences, University of Nebraska Medical Center, Omaha, NE, United States
- Matthew Rizzo
- Department of Neurological Sciences, University of Nebraska Medical Center, Omaha, NE, United States
29
Aziz JR, Good SR, Klein RM, Eskes GA. Role of aging and working memory in performance on a naturalistic visual search task. Cortex 2020; 136:28-40. [PMID: 33453649] [DOI: 10.1016/j.cortex.2020.12.003] [Received: 06/14/2020] [Revised: 09/16/2020] [Accepted: 12/09/2020]
Abstract
Studying age-related changes in working memory (WM) and visual search can provide insights into mechanisms of visuospatial attention. In visual search, WM is used to remember previously inspected objects/locations and to maintain a mental representation of the target to guide the search. We sought to extend this work, using aging as a case of reduced WM capacity. The present study tested whether various domains of WM would predict visual search performance in both young (n = 47; aged 18-35 yrs) and older (n = 48; aged 55-78) adults. Participants completed executive and domain-specific WM measures, and a naturalistic visual search task with (single) feature and triple-conjunction (three-feature) search conditions. We also varied the WM load requirements of the search task by manipulating whether a reference picture of the target (i.e., target template) was displayed during the search, or whether participants needed to search from memory. In both age groups, participants with better visuospatial executive WM were faster to locate complex search targets. Working memory storage capacity predicted search performance regardless of target complexity; however, visuospatial storage capacity was more predictive for young adults, whereas verbal storage capacity was more predictive for older adults. Displaying a target template during search diminished the involvement of WM in search performance, but this effect was primarily observed in young adults. Age-specific interactions between WM and visual search abilities are discussed in the context of mechanisms of visuospatial attention and how they may vary across the lifespan.
Affiliation(s)
- Jasmine R Aziz
- Department of Psychology & Neuroscience, Dalhousie University, Halifax, Nova Scotia, Canada
- Samantha R Good
- Department of Psychology & Neuroscience, Dalhousie University, Halifax, Nova Scotia, Canada; Department of Psychiatry, Dalhousie University, Halifax, Nova Scotia, Canada
- Raymond M Klein
- Department of Psychology & Neuroscience, Dalhousie University, Halifax, Nova Scotia, Canada
- Gail A Eskes
- Department of Psychology & Neuroscience, Dalhousie University, Halifax, Nova Scotia, Canada; Department of Psychiatry, Dalhousie University, Halifax, Nova Scotia, Canada
30
When Natural Behavior Engages Working Memory. Curr Biol 2020; 31:869-874.e5. [PMID: 33278355] [PMCID: PMC7902904] [DOI: 10.1016/j.cub.2020.11.013] [Received: 08/09/2020] [Revised: 10/12/2020] [Accepted: 11/04/2020]
Abstract
Working memory (WM) enables temporary storage and manipulation of information [1], supporting tasks that require bridging between perception and subsequent behavior. Its properties, such as its capacity, have been thoroughly investigated in highly controlled laboratory tasks [1-8]. Much less is known about the utilization and properties of WM in natural behavior [9-11], when reliance on WM emerges as a natural consequence of interactions with the environment. We measured the trade-off between reliance on WM and gathering information externally during immersive behavior in an adapted object-copying task [12]. By manipulating the locomotive demands required for task completion, we could investigate whether and how WM utilization changed as gathering information from the environment became more effortful. Reliance on WM was lower than WM capacity measures in typical laboratory tasks. A clear trade-off also occurred: as sampling information from the environment required increasing locomotion and time investment, participants relied more on their WM representations. This reliance on WM increased in a shallow and linear fashion and was associated with longer encoding durations. Participants' avoidance of WM usage showcases a fundamental dependence on external information during ecological behavior, even when the potentially storable information is well within the capacity of the cognitive system. These foundational findings highlight the importance of using immersive tasks to understand how cognitive processes unfold within natural behavior. Our novel VR approach effectively combines the ecological validity, experimental rigor, and sensitive measures required to investigate the interplay between memory and perception in immersive behavior.
- Gaze provides a measure of working-memory (WM) usage during natural behavior
- Natural reliance on WM is low even when searching for objects externally is effortful
- WM utilization increases linearly as searching for objects requires more locomotion
- The trade-off between using WM versus external sampling affects performance
31
Ramey MM, Henderson JM, Yonelinas AP. The spatial distribution of attention predicts familiarity strength during encoding and retrieval. J Exp Psychol Gen 2020; 149:2046-2062. [PMID: 32250136] [PMCID: PMC7541439] [DOI: 10.1037/xge0000758]
Abstract
The memories we form are determined by what we attend to, and conversely, what we attend to is influenced by our memory for past experiences. Although we know that shifts of attention via eye movements are related to memory during encoding and retrieval, the role of specific memory processes in this relationship is unclear. There is evidence that attention may be especially important for some forms of memory (i.e., conscious recollection), and less so for others (i.e., familiarity-based recognition and unconscious influences of memory), but results are conflicting with respect to both the memory processes and eye movement patterns involved. To address this, we used a confidence-based method of isolating eye movement indices of spatial attention that are related to different memory processes (i.e., recollection, familiarity strength, and unconscious memory) during encoding and retrieval of real-world scenes. We also developed a new method of measuring the dispersion of eye movements, which proved to be more sensitive to memory processing than previously used measures. Specifically, in 2 studies, we found that familiarity strength-that is, changes in subjective reports of memory confidence-increased with (a) more dispersed patterns of viewing during encoding, (b) less dispersed viewing during retrieval, and (c) greater overlap in regions viewed between encoding and retrieval (i.e., resampling). Recollection was also related to these eye movements in a similar manner, though the associations with recollection were less consistent across experiments. Furthermore, we found no evidence for effects related to unconscious influences of memory. These findings indicate that attentional processes during viewing may not preferentially relate to recollection, and that the spatial distribution of eye movements is directly related to familiarity-based memory during encoding and retrieval.
Affiliation(s)
- Michelle M. Ramey
- Department of Psychology, University of California, Davis, CA, USA
- Center for Neuroscience, University of California, Davis, CA, USA
- Center for Mind and Brain, University of California, Davis, CA, USA
- John M. Henderson
- Department of Psychology, University of California, Davis, CA, USA
- Center for Mind and Brain, University of California, Davis, CA, USA
- Andrew P. Yonelinas
- Department of Psychology, University of California, Davis, CA, USA
- Center for Neuroscience, University of California, Davis, CA, USA
32
Lauer T, Willenbockel V, Maffongelli L, Võ MLH. The influence of scene and object orientation on the scene consistency effect. Behav Brain Res 2020; 394:112812. [DOI: 10.1016/j.bbr.2020.112812] [Received: 09/24/2019] [Revised: 07/06/2020] [Accepted: 07/14/2020]
33
Fischer M, Moscovitch M, Alain C. Incidental auditory learning and memory-guided attention: Examining the role of attention at the behavioural and neural level using EEG. Neuropsychologia 2020; 147:107586. [PMID: 32818487] [DOI: 10.1016/j.neuropsychologia.2020.107586] [Received: 01/13/2020] [Revised: 07/16/2020] [Accepted: 08/16/2020]
Abstract
The current study addressed the relation between awareness, attention, and memory by examining whether merely presenting a tone and an audio-clip, without deliberately associating one with the other, was sufficient to bias attention to a given side. Participants were exposed to 80 different audio-clips (half included a lateralized pure tone) and told to classify the audio-clips as natural (e.g., waterfall) or manmade (e.g., airplane engine). A surprise memory test followed, in which participants pressed a button to a lateralized faint tone (target) embedded in each audio-clip. They also indicated (i) whether the clip was old/new; (ii) whether it was recollected/familiar; and (iii) whether the tone was on the left, on the right, or not present when they heard the clip at exposure. The results demonstrate good explicit memory for the clips, but not for tone location. Response times were faster for old than for new clips but did not vary according to the target-context associations. Neuro-electric activity revealed an old-new effect at midline-frontal sites and a difference between old clips that were previously associated with the target tone and those that were not. These results are consistent with the attention-dependent learning hypothesis and suggest that associations were formed incidentally at a neural level (a silent memory trace, or engram), but that these associations did not guide attention at a level that influenced behaviour either explicitly or implicitly.
Affiliation(s)
- Manda Fischer
- Rotman Research Institute, Baycrest Hospital, Toronto, Canada; University of Toronto, Department of Psychology, Toronto, Canada.
- Morris Moscovitch
- Rotman Research Institute, Baycrest Hospital, Toronto, Canada; University of Toronto, Department of Psychology, Toronto, Canada
- Claude Alain
- Rotman Research Institute, Baycrest Hospital, Toronto, Canada; University of Toronto, Department of Psychology, Toronto, Canada
34
Ryan JD, Shen K, Liu Z. The intersection between the oculomotor and hippocampal memory systems: empirical developments and clinical implications. Ann N Y Acad Sci 2020; 1464:115-141. [PMID: 31617589] [PMCID: PMC7154681] [DOI: 10.1111/nyas.14256] [Received: 05/09/2019] [Revised: 08/29/2019] [Accepted: 09/19/2019]
Abstract
Decades of cognitive neuroscience research have shown that where we look is intimately connected to what we remember. In this article, we review findings from humans and nonhuman animals, drawing on behavioral, neuropsychological, neuroimaging, and computational modeling methods, to show that the oculomotor and hippocampal memory systems interact in a reciprocal manner, on a moment-to-moment basis, mediated by a vast structural and functional network. Visual exploration serves to efficiently gather information from the environment for the purpose of creating new memories, updating existing memories, and reconstructing the rich, vivid details from memory. Conversely, memory increases the efficiency of visual exploration. We call for models of oculomotor control to consider the influence of the hippocampal memory system on the cognitive control of eye movements, and for models of hippocampal and broader medial temporal lobe function to consider the influence of the oculomotor system on the development and expression of memory. We describe eye movement-based applications for the detection of neurodegeneration and the delivery of therapeutic interventions for mental health disorders in which the hippocampus is implicated and memory dysfunctions are at the forefront.
Affiliation(s)
- Jennifer D. Ryan
- Rotman Research Institute, Baycrest, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
- Kelly Shen
- Rotman Research Institute, Baycrest, Toronto, Ontario, Canada
- Zhong-Xu Liu
- Department of Behavioral Sciences, University of Michigan-Dearborn, Dearborn, Michigan
35
Grieben R, Tekülve J, Zibner SKU, Lins J, Schneegans S, Schöner G. Scene memory and spatial inhibition in visual search: A neural dynamic process model and new experimental evidence. Atten Percept Psychophys 2020; 82:775-798. [PMID: 32048181] [PMCID: PMC7246253] [DOI: 10.3758/s13414-019-01898-y]
Abstract
Any object-oriented action requires that the object first be brought into the attentional foreground, often through visual search. Outside the laboratory, this always takes place in the presence of a scene representation acquired from ongoing visual exploration. The interaction of scene memory with visual search is still not completely understood. Feature integration theory (FIT) has shaped both research on visual search, emphasizing the scaling of search times with set size when searches entail feature conjunctions, and research on visual working memory through the change detection paradigm. Despite its neural motivation, there is no consistent neural process account of FIT covering both of these dimensions. We propose such an account that integrates (1) visual exploration and the building of scene memory, (2) the attentional detection of visual transients and the extraction of search cues, and (3) visual search itself. The model uses dynamic field theory, in which networks of neural dynamic populations supporting stable activation states are coupled to generate sequences of processing steps. The neural architecture accounts for basic findings in visual search and proposes a concrete mechanism for the integration of working memory into the search process. In a behavioral experiment, we address the long-standing question of whether both the overall speed and the efficiency of visual search can be improved by scene memory. We find both effects and provide model fits of the behavioral results. In a second experiment, we show that the increase in efficiency is fragile, and trace that fragility to the resetting of spatial working memory.
Affiliation(s)
- Raul Grieben
- Institut für Neuroinformatik, Ruhr-Universität Bochum, Universitätsstraße 150, 44780 Bochum, Germany
- Jan Tekülve
- Institut für Neuroinformatik, Ruhr-Universität Bochum, Universitätsstraße 150, 44780 Bochum, Germany
- Stephan K. U. Zibner
- Institut für Neuroinformatik, Ruhr-Universität Bochum, Universitätsstraße 150, 44780 Bochum, Germany
- Jonas Lins
- Institut für Neuroinformatik, Ruhr-Universität Bochum, Universitätsstraße 150, 44780 Bochum, Germany
- Gregor Schöner
- Institut für Neuroinformatik, Ruhr-Universität Bochum, Universitätsstraße 150, 44780 Bochum, Germany
36
Mena-Garcia L, Maldonado-Lopez MJ, Fernandez I, Coco-Martin MB, Finat-Saez J, Martinez-Jimenez JL, Pastor-Jimeno JC, Arenillas JF. Visual processing speed in hemianopia patients secondary to acquired brain injury: a new assessment methodology. J Neuroeng Rehabil 2020; 17:12. [PMID: 32005265] [PMCID: PMC6995150] [DOI: 10.1186/s12984-020-0650-5] [Received: 02/21/2019] [Accepted: 01/23/2020]
Abstract
Background There is a clinical need to identify diagnostic parameters that objectively quantify and monitor the effective visual ability of patients with homonymous visual field defects (HVFDs). Visual processing speed (VPS) is an objective measure of visual ability: the reaction time (RT) needed to correctly search for and/or reach a visual stimulus. VPS depends on six main brain processing systems: auditory-cognitive, attentional, working memory, visuocognitive, visuomotor, and executive. We designed a new assessment methodology capable of activating these six systems and measuring RTs to determine the VPS of patients with HVFDs. Methods New software was designed for assessing subjects' visual stimulus search and reach times (S-RT and R-RT, respectively), measured in seconds. Thirty-two different everyday visual stimuli were divided into four complexity groups and presented at 8 radial visual field positions at three different eccentricities (10°, 20°, and 30°). Thus, for each HVFD patient and control subject, 96 S- and R-RT measures related to VPS were registered. Three additional variables were measured to gather objective data on the validity of the test: eye-hand coordination mistakes (ehcM), eye-hand coordination accuracy (ehcA), and degrees of head movement (dHM, measured by a head-tracker system). HVFD patients and healthy controls (30 each), matched by age and gender, were included. Each subject was assessed in a single visit. VPS measurements for HVFD patients and control subjects were compared for the complete test, for each stimulus complexity group, and for each eccentricity. Results VPS was significantly slower (p < 0.0001) in the HVFD group for the complete test, each stimulus complexity group, and each eccentricity. For the complete test, the VPS of the HVFD patients was 73.0% slower than that of the controls. They also had 335.6% more ehcMs, 41.3% worse ehcA, and 189.0% more dHMs than the controls.
Conclusions Measurement of VPS by this new assessment methodology could be an effective tool for objectively quantifying the visual ability of HVFD patients. Future research should evaluate the effectiveness of this novel method for measuring the impact that any specific neurovisual rehabilitation program has for these patients.
Affiliation(s)
- Laura Mena-Garcia
- Universidad de Valladolid, Valladolid, Spain; Instituto Universitario de Oftalmobiología Aplicada (IOBA), Eye Institute, Universidad de Valladolid, Valladolid, Spain
- Miguel J Maldonado-Lopez
- Universidad de Valladolid, Valladolid, Spain; Instituto Universitario de Oftalmobiología Aplicada (IOBA), Eye Institute, Universidad de Valladolid, Valladolid, Spain
- Itziar Fernandez
- Instituto Universitario de Oftalmobiología Aplicada (IOBA), Eye Institute, Universidad de Valladolid, Valladolid, Spain; CIBER BBN, National Institute of Health Carlos III, Madrid, Spain
- Maria B Coco-Martin
- Universidad de Valladolid, Valladolid, Spain; Department of Neurology, Hospital Clínico Universitario de Valladolid, Valladolid, Spain
- Jaime Finat-Saez
- ASPAYM-Castilla y Leon Foundation, Research Centre for Physical Disabilities, Valladolid, Spain
- Jose L Martinez-Jimenez
- ASPAYM-Castilla y Leon Foundation, Research Centre for Physical Disabilities, Valladolid, Spain
- Jose C Pastor-Jimeno
- Universidad de Valladolid, Valladolid, Spain; Instituto Universitario de Oftalmobiología Aplicada (IOBA), Eye Institute, Universidad de Valladolid, Valladolid, Spain; Department of Ophthalmology, Hospital Clínico Universitario de Valladolid, Valladolid, Spain
- Juan F Arenillas
- Universidad de Valladolid, Valladolid, Spain; Department of Neurology, Hospital Clínico Universitario de Valladolid, Valladolid, Spain
37
Helbing J, Draschkow D, Võ MLH. Search superiority: Goal-directed attentional allocation creates more reliable incidental identity and location memory than explicit encoding in naturalistic virtual environments. Cognition 2020; 196:104147. [PMID: 32004760] [DOI: 10.1016/j.cognition.2019.104147] [Received: 08/09/2019] [Revised: 11/19/2019] [Accepted: 11/20/2019]
Abstract
We use representations and expectations formed during life-long learning to support attentional allocation and perception. In comparison to traditional laboratory investigations, real-world memory formation is usually achieved without explicit instruction and on-the-fly as a by-product of natural interactions with our environment. Understanding this process and the quality of naturally formed representations is critical to understanding how memory is used to guide attention and perception. Utilizing immersive, navigable, and realistic virtual environments, we investigated incidentally generated memory representations by comparing them to memories for items which were explicitly memorized. Participants either searched for objects embedded in realistic indoor environments or explicitly memorized them for follow-up identity and location memory tests. We show for the first time that memory for the identity of naturalistic objects and their location in 3D space is higher after incidental encoding compared to explicit memorization, even though the subsequent memory tests came as a surprise to participants. Relating gaze behavior to memory performance revealed that encoding time was more predictive of subsequent memory when participants explicitly memorized an item, compared to incidentally encoding it. Our results suggest that the active nature of guiding attentional allocation during proactive behavior allows for behaviorally optimal formation and utilization of representations. This highlights the importance of investigating cognition under ecologically valid conditions and shows that understanding the most natural processes for encoding and maintaining information is critical for understanding adaptive behavior.
Affiliation(s)
- Jason Helbing
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Dejan Draschkow
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany; Department of Psychiatry, University of Oxford, Oxford, England, United Kingdom of Great Britain and Northern Ireland.
- Melissa L-H Võ
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
38
Functional Imaging of Visuospatial Attention in Complex and Naturalistic Conditions. Curr Top Behav Neurosci 2020. [PMID: 30547430] [DOI: 10.1007/7854_2018_73]
Abstract
One of the ultimate goals of cognitive neuroscience is to understand how the brain works in the real world. Functional imaging with naturalistic stimuli provides the opportunity to study the brain in situations similar to everyday life. This includes the processing of complex stimuli that can trigger many types of signals, related both to the physical characteristics of the external input and to the internal knowledge that we have about natural objects and environments. In this chapter, I first outline the different types of stimuli that have been used in naturalistic imaging studies. These include static pictures, short video clips, full-length movies, and virtual reality, each with specific advantages and disadvantages. Next, I turn to the main issue of visuospatial orienting in naturalistic conditions and its neural substrates. I discuss different classes of internal signals, related to objects, scene structure, and long-term memory. All of these, together with external signals about stimulus salience, have been found to modulate the activity and the connectivity of the frontoparietal attention networks. I conclude by pointing out some promising future directions for functional imaging with naturalistic stimuli. Although this field of research is still in its early days, I expect it to play a major role in bridging the gap between standard laboratory paradigms and mechanisms of brain functioning in the real world.
39
Ramzaoui H, Faure S, Spotorno S. Alzheimer's Disease, Visual Search, and Instrumental Activities of Daily Living: A Review and a New Perspective on Attention and Eye Movements. J Alzheimers Dis 2019; 66:901-925. [PMID: 30400086] [DOI: 10.3233/jad-180043]
Abstract
Many instrumental activities of daily living (IADLs), like cooking and managing finances and medications, involve efficiently finding one or several objects within complex environments in a timely manner. They may thus be disrupted by visual search deficits. These deficits, present in Alzheimer's disease (AD) from its early stages, arise from impairments in multiple attentional and memory mechanisms. A growing body of research on visual search in AD has examined several factors underlying search impairments in simple arrays. Little is known about how AD patients search in real-world scenes and in real settings, and about how such impairments affect patients' functional autonomy. Here, we review studies on visuospatial attention and visual search in AD. We then consider why analysis of patients' oculomotor behavior is a promising way to improve understanding of the specific search deficits in AD and of their role in impairing IADL performance. We also highlight why paradigms developed in research on real-world scenes and real settings in healthy individuals are valuable for investigating visual search in AD. Finally, we indicate future research directions that may offer new insights to improve visual search abilities and autonomy in AD patients.
Affiliation(s)
- Hanane Ramzaoui
- Laboratoire d'Anthropologie et de Psychologie Cliniques, Cognitives et Sociales, Université Côte d'Azur, France
- Sylvane Faure
- Laboratoire d'Anthropologie et de Psychologie Cliniques, Cognitives et Sociales, Université Côte d'Azur, France
- Sara Spotorno
- School of Psychology, University of Aberdeen, UK; Institute of Neuroscience and Psychology, University of Glasgow, UK
40
Nobre AC, Stokes MG. Premembering Experience: A Hierarchy of Time-Scales for Proactive Attention. Neuron 2019; 104:132-146. [PMID: 31600510] [PMCID: PMC6873797] [DOI: 10.1016/j.neuron.2019.08.030] [Received: 06/19/2019] [Revised: 08/07/2019] [Accepted: 08/20/2019]
Abstract
Memories are about the past, but they serve the future. Memory research often emphasizes the former aspect: focusing on the functions that re-constitute (re-member) experience and elucidating the various types of memories and their interrelations, timescales, and neural bases. Here we highlight the prospective nature of memory in guiding selective attention, focusing on functions that use previous experience to anticipate the relevant events about to unfold-to "premember" experience. Memories of various types and timescales play a fundamental role in guiding perception and performance adaptively, proactively, and dynamically. Consonant with this perspective, memories are often recorded according to expected future demands. Using working memory as an example, we consider how mnemonic content is selected and represented for future use. This perspective moves away from the traditional representational account of memory toward a functional account in which forward-looking memory traces are informationally and computationally tuned for interacting with incoming sensory signals to guide adaptive behavior.
Affiliation(s)
- Anna C Nobre
- Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK.
- Mark G Stokes
- Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
41
Võ MLH, Boettcher SEP, Draschkow D. Reading scenes: how scene grammar guides attention and aids perception in real-world environments. Curr Opin Psychol 2019; 29:205-210. [DOI: 10.1016/j.copsyc.2019.03.009] [Received: 11/23/2018] [Revised: 03/07/2019] [Accepted: 03/13/2019]
42
Geng JJ, Witkowski P. Template-to-distractor distinctiveness regulates visual search efficiency. Curr Opin Psychol 2019; 29:119-125. [PMID: 30743200 PMCID: PMC6625942 DOI: 10.1016/j.copsyc.2019.01.003] [Citation(s) in RCA: 44] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2018] [Revised: 12/13/2018] [Accepted: 01/04/2019] [Indexed: 11/18/2022]
Abstract
All models of attention include the concept of an attentional template (or a target or search template). The template is conceptualized as target information held in memory that is used for prioritizing sensory processing and determining whether an object matches the target. It is frequently assumed that the template contains a veridical copy of the target. However, we review recent evidence showing that the template encodes a version of the target that is adapted to the current context (e.g., distractors, task); information held within the template may include only a subset of target features, real-world knowledge, or pre-existing perceptual biases, or may even be a distorted version of the veridical target. We argue that the template contents are customized in order to maximize the ability to prioritize information that distinguishes targets from distractors. We refer to this as template-to-distractor distinctiveness and hypothesize that it contributes to visual search efficiency by exaggerating target-to-distractor dissimilarity.
Affiliation(s)
- Joy J Geng
- Center for Mind and Brain, University of California Davis, Davis, CA, 95616, United States; Department of Psychology, University of California Davis, Davis, CA, 95616, United States.
- Phillip Witkowski
- Center for Mind and Brain, University of California Davis, Davis, CA, 95616, United States; Department of Psychology, University of California Davis, Davis, CA, 95616, United States.
43
Yu L, Jin M, Zhou K. Multi-channel biomimetic visual transformation for object feature extraction and recognition of complex scenes. APPL INTELL 2019. [DOI: 10.1007/s10489-019-01550-0] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
44
Williams LH, Drew T. What do we know about volumetric medical image interpretation?: a review of the basic science and medical image perception literatures. Cogn Res Princ Implic 2019; 4:21. [PMID: 31286283 PMCID: PMC6614227 DOI: 10.1186/s41235-019-0171-6] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2019] [Accepted: 05/19/2019] [Indexed: 11/26/2022] Open
Abstract
Interpretation of volumetric medical images represents a rapidly growing proportion of the workload in radiology. However, relatively little is known about the strategies that best guide search behavior when looking for abnormalities in volumetric images. Although there is extensive literature on two-dimensional medical image perception, it is an open question whether the conclusions drawn from these images can be generalized to volumetric images. Importantly, volumetric images have distinct characteristics (e.g., scrolling through depth, smooth-pursuit eye-movements, motion onset cues, etc.) that should be considered in future research. In this manuscript, we will review the literature on medical image perception and discuss relevant findings from basic science that can be used to generate predictions about expertise in volumetric image interpretation. By better understanding search through volumetric images, we may be able to identify common sources of error, characterize the optimal strategies for searching through depth, or develop new training and assessment techniques for radiology residents.
45
Williams CC, Castelhano MS. The Changing Landscape: High-Level Influences on Eye Movement Guidance in Scenes. Vision (Basel) 2019; 3:E33. [PMID: 31735834 PMCID: PMC6802790 DOI: 10.3390/vision3030033] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2019] [Revised: 06/20/2019] [Accepted: 06/24/2019] [Indexed: 11/16/2022] Open
Abstract
The use of eye movements to explore scene processing has exploded over the last decade. Eye movements provide distinct advantages when examining scene processing because they are both fast and spatially measurable. By using eye movements, researchers have investigated many questions about scene processing. Our review will focus on research performed in the last decade examining: (1) attention and eye movements; (2) where you look; (3) influence of task; (4) memory and scene representations; and (5) dynamic scenes and eye movements. Although typically addressed as separate issues, we argue that these distinctions are now holding back research progress. Instead, it is time to examine how these seemingly separate influences intersect and interact, to more completely understand what eye movements can tell us about scene processing.
Affiliation(s)
- Carrick C. Williams
- Department of Psychology, California State University San Marcos, San Marcos, CA 92069, USA
46
Bergmann N, Koch D, Schubö A. Reward expectation facilitates context learning and attentional guidance in visual search. J Vis 2019; 19:10. [PMID: 30916725 DOI: 10.1167/19.3.10] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Modulations of visual attention due to expectation of reward have frequently been reported in recent years. Recent studies revealed that reward can modulate the implicit learning of repeated context configurations (e.g., Tseng & Lleras, 2013). We investigated the influence of reward expectations on context learning by associating colors with different reward magnitudes. Participants searched through contexts consisting of spatially distributed L-shaped distractors and a T-shaped target, with half of these objects appearing in a color associated with low, medium, or high reward. Half of the context configurations were repeatedly presented in every experimental block, whereas the other half was generated anew for every trial. Results showed an earlier and more pronounced contextual cueing effect in contexts associated with high reward compared with low reward. This was visible as a faster decline of response times to targets in repeated contexts associated with high reward compared with repeated low-reward and novel contexts, and was reflected in the eye movement pattern as a shorter distance of the first fixation to the target location. These results suggest that expecting a high reward magnitude facilitates subsequent learning of repeated context configurations. High reward also increases the efficiency of attention guidance toward the target location.
Affiliation(s)
- Nils Bergmann
- Cognitive Neuroscience of Perception and Action, Department of Psychology, Philipps-University Marburg, Marburg, Germany
- Dennis Koch
- Cognitive Neuroscience of Perception and Action, Department of Psychology, Philipps-University Marburg, Marburg, Germany
- Anna Schubö
- Cognitive Neuroscience of Perception and Action, Department of Psychology, Philipps-University Marburg, Marburg, Germany
47
Boettcher SEP, Draschkow D, Dienhart E, Võ MLH. Anchoring visual search in scenes: Assessing the role of anchor objects on eye movements during visual search. J Vis 2018; 18:11. [DOI: 10.1167/18.13.11] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Affiliation(s)
- Dejan Draschkow
- Department of Psychology, Johann Wolfgang Goethe-Universität, Frankfurt, Germany
- Eric Dienhart
- Department of Psychology, Johann Wolfgang Goethe-Universität, Frankfurt, Germany
- Melissa L.-H. Võ
- Department of Psychology, Johann Wolfgang Goethe-Universität, Frankfurt, Germany
48
Draschkow D, Heikel E, Võ MLH, Fiebach CJ, Sassenhagen J. No evidence from MVPA for different processes underlying the N300 and N400 incongruity effects in object-scene processing. Neuropsychologia 2018; 120:9-17. [DOI: 10.1016/j.neuropsychologia.2018.09.016] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2018] [Revised: 09/18/2018] [Accepted: 09/23/2018] [Indexed: 11/24/2022]
49
Macaluso E, Ogawa A. Visuo-spatial orienting during active exploratory behavior: Processing of task-related and stimulus-related signals. Cortex 2018; 102:26-44. [DOI: 10.1016/j.cortex.2017.08.032] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2017] [Revised: 08/08/2017] [Accepted: 08/25/2017] [Indexed: 10/18/2022]
50
Abstract
Search is a central visual function. Most of what is known about search derives from experiments where subjects view 2D displays on computer monitors. In the natural world, however, search involves movement of the body in large-scale spatial contexts, and it is unclear how this might affect search strategies. In this experiment, we explore the nature of memory representations developed when searching in an immersive virtual environment. By manipulating target location, we demonstrate that search depends on episodic spatial memory as well as learnt spatial priors. Subjects rapidly learned the large-scale structure of the space, with shorter paths and less head rotation to find targets. These results suggest that spatial memory of the global structure allows a search strategy that involves efficient attention allocation based on the relevance of scene regions. Thus spatial memory may allow less energetically costly search strategies.
Affiliation(s)
- Chia-Ling Li
- Center for Perceptual Systems, The University of Texas at Austin, Austin, Texas, USA.
- M Pilar Aivar
- Facultad de Psicología, Universidad Autónoma de Madrid, Madrid, Spain
- Mary M Hayhoe
- Center for Perceptual Systems, The University of Texas at Austin, Austin, Texas, USA