1
van Moorselaar D, Theeuwes J. Spatial transfer of object-based statistical learning. Atten Percept Psychophys 2024; 86:768-775. PMID: 38316722; PMCID: PMC11063099; DOI: 10.3758/s13414-024-02852-3.
Abstract
Numerous recent studies have demonstrated that efficient attentional selection depends to a large extent on the ability to extract regularities present in the environment. Through statistical learning, attentional selection is facilitated by directing attention to locations in space that were relevant in the past while suppressing locations that were previously distracting. The current study shows that we can learn to prioritize not only locations in space but also locations within objects, independent of space. Participants learned that particular locations within a specific object were more likely than others to contain relevant information. The results show that this learned prioritization was bound to the object: the bias to prioritize a specific location within the object stayed in place even when the object moved to a completely different location in space. We conclude that, in addition to prioritizing locations in space, it is also possible to learn to prioritize relevant locations within specific objects. These findings have implications for the inferred spatial priority map of attentional weights, as this map cannot be strictly retinotopically organized.
Affiliation(s)
- Dirk van Moorselaar
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands.
- Institute of Brain and Behaviour Amsterdam (iBBA), Amsterdam, the Netherlands.
- Jan Theeuwes
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- Institute of Brain and Behaviour Amsterdam (iBBA), Amsterdam, the Netherlands
- William James Center for Research, ISPA-Instituto Universitario, Lisbon, Portugal
2
Abstract
Research has recently shown that efficient selection relies on the implicit extraction of environmental regularities, known as statistical learning. Although this has been demonstrated for scenes, similar learning arguably also occurs for objects. To test this, we developed a paradigm that allowed us to track attentional priority at specific object locations irrespective of the object's orientation, in three experiments with young adults (all Ns = 80). Experiments 1a and 1b established within-object statistical learning by demonstrating increased attentional priority at relevant object parts (e.g., hammerhead). Experiment 2 extended this finding by demonstrating that learned priority generalized to viewpoints in which learning never took place. Together, these findings demonstrate that, as a function of statistical learning, the visual system can not only tune attention to specific locations in space but also develop preferential biases for specific parts of an object, independently of that object's viewpoint.
Affiliation(s)
- Dirk van Moorselaar
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam; Institute of Brain and Behaviour Amsterdam (iBBA), The Netherlands
- Jan Theeuwes
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam; Institute of Brain and Behaviour Amsterdam (iBBA), The Netherlands; William James Center for Research, ISPA-Instituto Universitario
3
Task-specific engagement of object-based and space-based attention with spatiotemporally defined objects. Atten Percept Psychophys 2021; 83:1479-1490. PMID: 33398657; DOI: 10.3758/s13414-020-02201-0.
Abstract
We used a form of ambiguous apparent motion known as Ternus motion to isolate the effects of object-based and space-based attention, and to explore functional differences between them. Two frames of horizontally aligned disks that were shifted by one position between frames were temporally separated by either a short or a long inter-stimulus interval (ISI). Short ISI displays were perceived as element motion where one disk appeared to jump across the other two. Long ISI displays were perceived as group motion where all three disks appeared to move together. Because element and group motion imply mutually exclusive object structures, adding stimuli (e.g., a small gap) to one disk in each frame created conditions of orthogonal object and location status (same or different), depending on ISI. We used two tasks with different functional demands, an identification task (Experiments 1 and 3a) in which observers responded to a single attribute of the final stimulus, and a comparison task (Experiments 2 and 3b) in which observers compared two attributes across two stimuli. Reliable object-specific effects occurred only with the comparison task, whereas location-specific effects occurred with both tasks. These results confirm that attention can be directed to objects separately from spatial locations and vice versa, and, moreover, that object-based and space-based attention are engaged differently depending on the processing demands of the task.
4
Abstract
Our perception of the world remains stable despite the retinal shifts that occur with each saccade. The role of spatial attention in matching pre- to postsaccadic visual information has been well established, but the role of feature-based attention remains unclear. In this study, we examined the transsaccadic processing of a color pop-out target. Participants made a saccade towards a neutral target and performed a search task on a peripheral array presented once the saccade landed. A similar array was presented just before the saccade and we analyzed what aspect of this preview benefitted the postsaccadic search task. We assessed the preview effect in the spatiotopic and retinotopic reference frames, and the potential transfer of feature selectivity across the saccade. In the first experiment, the target and distractor colors remained identical for the preview and the postsaccadic array and performance improved. The largest benefit was observed at the spatiotopic location. In the second experiment, the target and distractor colors were swapped across the saccade. All responses were slowed but the cost was least at the spatiotopic location. Our results show that the preview attracted spatial attention to the target location, which was then remapped, and suggest that previewed features, specifically colors, were transferred across the saccade. Furthermore, the preview induced a spatiotopic advantage regardless of whether the target switched color or not, suggesting that spatiotopy was established independently of feature processing. Our results support independent priming effects of features versus location and underline the role of feature-based selection in visual stability.
5
Abstract
We proposed to abandon the item as conceptual unit in visual search and adopt a fixation-based framework instead. We treat various themes raised by our commentators, including the nature of the Functional Visual Field and existing similar ideas, alongside the importance of items, covert attention, and top-down/contextual influences. We reflect on the current state of, and future directions for, visual search.
6
Out with the new, in with the old: Exogenous orienting to locations with physically constant stimulation. Psychon Bull Rev 2018; 25:1331-1336. PMID: 29368269; DOI: 10.3758/s13423-017-1426-1.
Abstract
Dominant methods of investigating exogenous orienting presume that attention is captured most effectively at locations containing new events. This is evidenced by the ubiquitous use of transient stimuli as cues in the literature on exogenous orienting. In the present study, we showed that attention can be oriented exogenously toward a location containing a completely unchanging stimulus by modifying Posner's landmark exogenous spatial-cueing paradigm. Observers searched a six-element array of placeholder stimuli for an onset target. The target was preceded by a decrement in luminance to five of the six placeholders, such that one location remained physically constant. This "nonset" stimulus (so named to distinguish it from a traditional onsetting transient) acted as an exogenous cue, eliciting patterns of facilitation and inhibition at the nonset location and demonstrating that exogenous orienting is not always evident at the location of a visual transient. This method eliminates the decades-long confounding of orienting to a location with the processing of new events at that location, permitting alternative considerations of the nature of attentional selection.
7
Don't admit defeat: A new dawn for the item in visual search. Behav Brain Sci 2018; 40:e159. PMID: 29342621; DOI: 10.1017/s0140525x16000285.
Abstract
Even though we lack a precise definition of "item," it is clear that people do parse their visual environment into objects (the real-world equivalent of items). We will review evidence that items are essential in visual search, and argue that computer vision - especially deep learning - may offer a solution for the lack of a solid definition of "item."
8
Exogenous attention during perceptual group formation and dissolution. Atten Percept Psychophys 2017; 79:593-602. DOI: 10.3758/s13414-016-1235-z.
9
Öğmen H, Herzog MH. A New Conceptualization of Human Visual Sensory-Memory. Front Psychol 2016; 7:830. PMID: 27375519; PMCID: PMC4899472; DOI: 10.3389/fpsyg.2016.00830.
Abstract
Memory is an essential component of cognition, and disorders of memory have significant individual and societal costs. The Atkinson–Shiffrin "modal model" forms the foundation of our understanding of human memory. It consists of three stores: Sensory Memory (SM), whose visual component is called iconic memory; Short-Term Memory (STM; also called working memory, WM); and Long-Term Memory (LTM). Since its inception, shortcomings of all three components of the modal model have been identified. While theories of STM and LTM underwent significant modifications to address these shortcomings, models of iconic memory remained largely unchanged: a high-capacity but rapidly decaying store whose contents are encoded in retinotopic coordinates, i.e., according to how the stimulus is projected on the retina. The fundamental shortcoming of iconic-memory models is that, because contents are encoded in retinotopic coordinates, iconic memory cannot hold any useful information under normal viewing conditions, when objects or the subject are in motion. Hence, half a century after its formulation, it remains unresolved whether and how the first stage of the modal model serves any useful function, and how subsequent stages of the modal model receive inputs from the environment. Here, we propose a new conceptualization of human visual sensory memory by introducing an additional component whose reference frame consists of motion-grouping-based coordinates rather than retinotopic coordinates. We review data supporting this new model and discuss how it offers solutions to the paradoxes of the traditional model of sensory memory.
Affiliation(s)
- Haluk Öğmen
- Department of Electrical and Computer Engineering, University of Houston, Houston, TX, USA; Center for Neuro-Engineering and Cognitive Science, University of Houston, Houston, TX, USA
- Michael H Herzog
- Laboratory of Psychophysics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
10
Nakayama R, Motoyoshi I, Sato T. The Roles of Non-retinotopic Motions in Visual Search. Front Psychol 2016; 7:840. PMID: 27313560; PMCID: PMC4887493; DOI: 10.3389/fpsyg.2016.00840.
Abstract
In visual search, a moving target among stationary distracters is detected more rapidly and more efficiently than a static target among moving distracters. Here we examined how this search asymmetry depends on motion signals from three distinct coordinate systems—retinal, relative, and spatiotopic (head/body-centered). Our search display consisted of a target element, distracter elements, and a fixation point tracked by observers. Each element was composed of a spatial carrier grating windowed by a Gaussian envelope, and the motions of carriers, windows, and fixation were manipulated independently and used in various combinations to decouple the respective effects of motion coordinate systems on visual search asymmetry. We found that retinal motion hardly contributes to reaction times and search slopes but that relative and spatiotopic motions contribute to them substantially. Results highlight the important roles of non-retinotopic motions for guiding observer attention in visual search.
Affiliation(s)
- Ryohei Nakayama
- Department of Psychology, The University of Tokyo, Tokyo, Japan
- Isamu Motoyoshi
- Department of Life Sciences, The University of Tokyo, Tokyo, Japan
- Takao Sato
- Department of Psychology, The University of Tokyo, Tokyo, Japan
11
Herzog MH, Thunell E, Öğmen H. Putting low-level vision into global context: Why vision cannot be reduced to basic circuits. Vision Res 2015; 126:9-18. PMID: 26456069; DOI: 10.1016/j.visres.2015.09.009.
Abstract
To cope with the complexity of vision, most models in neuroscience and computer vision are hierarchical and feedforward in nature. Low-level vision, such as edge and motion detection, is explained by basic low-level neural circuits whose outputs serve as building blocks for more complex circuits computing higher-level features such as shape and entire objects. This approach assumes an isomorphism between states of the outer world, neural circuits, and perception, inspired by the positivist philosophy of mind. Here, we show that although such an approach is conceptually and mathematically appealing, it fails to explain many phenomena, including crowding, visual masking, and non-retinotopic processing.
Affiliation(s)
- Michael H Herzog
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland.
- Evelina Thunell
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Haluk Öğmen
- Department of Electrical and Computer Engineering, Center for Neuro-Engineering and Cognitive Science, University of Houston, TX, USA
12
Facilitation by exogenous attention for static and dynamic gestalt groups. Atten Percept Psychophys 2014; 76:1709-1720. PMID: 24811040; DOI: 10.3758/s13414-014-0679-2.
Abstract
Attentional mechanisms allow the brain to selectively allocate its resources to stimuli of interest within the huge amount of information reaching its sensory systems. The voluntary component of attention, endogenous attention, can be allocated in a flexible manner depending on the goals and strategies of the observer. On the other hand, the reflexive component, exogenous attention, is driven by the stimulus. Here, we investigated how exogenous attention is deployed to moving stimuli that form distinct perceptual groups. We showed that exogenous attention is deployed according to a reference frame that moves along with the stimulus. Moreover, in addition to the cued stimulus, exogenous attention is deployed to all elements forming a perceptual group. These properties provide a basis for the efficient deployment of exogenous attention under ecological viewing conditions.
13
Rutherford BJ. List constituency and orthographic and phonological processing: a shift to high familiarity words from low familiarity words. Neuropsychologia 2014; 65:74-81. PMID: 25455570; DOI: 10.1016/j.neuropsychologia.2014.10.013.
Abstract
Two lexical decision experiments build on established patterns of laterality and hemispheric interaction to test whether the presence of low familiarity words dynamically affects the use of an orthographic or phonological strategy for high familiarity words and, if so, whether the hemispheres are similarly flexible in adapting to the constituency change. Experiment 1 restricted word strings to the highly familiar. Experiment 2 presented the same high familiarity words, along with an equal number of low familiarity words. Targets for lexical decision were presented at fixation to approximate normal viewing behaviour, either with or without a non-lexical distractor lateralized to the left visual field (LVF) or right visual field (RVF). Response time and accuracy were measured. Responses were faster in Experiment 1 than Experiment 2 to high familiarity words, pseudowords (orthographically correct), and non-words (orthographically incorrect), suggesting that a different strategy was used. A main effect of distractor location in Experiment 1 was due to more accurate responses to letter strings accompanied by an RVF distractor than by no distractor, revealing a cost of hemispheric interaction, relative to the right hemisphere alone, when a task is simple. Experiment 2 found an interaction between distractor location and string type in both the response time and accuracy data. Separate analyses of word strings revealed a shift to a left hemisphere advantage: accuracy for low familiarity words and speed for high familiarity words were better when accompanied by an LVF than an RVF distractor. Critical to a dynamic effect of list constituency is that the right hemisphere slowed to the same high familiarity words that had provoked speedier responses in Experiment 1.
The findings are consistent with the use of an orthographic strategy in Experiment 1 and a phonological strategy in Experiment 2, and support the idea that right hemisphere access to familiar phonology is slower than in the left hemisphere. Taken together, the findings suggest that the strategy used by both hemispheres is flexible, that both adapt to list constituency by adopting a strategy that is optimal for the task as a whole, and that there are different timelines of phonological activation in the two cerebral hemispheres.
14
15
McCourt ME, Blakeslee B, Padmanabhan G. Lighting direction and visual field modulate perceived intensity of illumination. Front Psychol 2013; 4:983. PMID: 24399990; PMCID: PMC3870952; DOI: 10.3389/fpsyg.2013.00983.
Abstract
When interpreting object shape from shading, the visual system exhibits a strong bias that illumination comes from above and slightly from the left. We asked whether such biases in the perceived direction of illumination might also influence its perceived intensity. Arrays of nine cubes were stereoscopically rendered, where individual cubes varied in their 3D pose but possessed identical triplets of visible faces. Arrays were virtually illuminated from one of four directions: Above-Left, Above-Right, Below-Left, and Below-Right (±24.4° azimuth; ±90° elevation). Illumination intensity possessed 15 levels, resulting in mean cube-array luminances ranging from 1.31-3.45 cd/m². A "reference" array was consistently illuminated from Above-Left at mid-intensity (mean array luminance = 2.38 cd/m²). The reference array's illumination was compared to that of matching arrays, which were illuminated from all four directions at all intensities. Reference and matching arrays appeared in the left and right visual field, respectively, or vice versa. Subjects judged which cube array appeared to be under more intense illumination. Using the method of constant stimuli, we determined the illumination level of matching arrays required to establish subjective equality with the reference array as a function of matching-cube visual field, illumination elevation, and illumination azimuth. Cube arrays appeared significantly more intensely illuminated when they were situated in the left visual field (p = 0.017), when they were illuminated from below (p = 0.001), and when illuminated from the left (p = 0.001). A modest interaction indicated that the effect of illumination azimuth was greater for matching arrays situated in the left visual field (p = 0.042). We propose that objects lit from below appear more intensely illuminated than identical objects lit from above due to long-term adaptation to downward lighting. The amplification of perceived intensity of illumination for stimuli situated in the left visual field and lit from the left is best explained by tonic egocentric and allocentric leftward attentional biases, respectively.
Affiliation(s)
- Mark E. McCourt
- Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, USA
16
Boon PJ, Theeuwes J, Belopolsky AV. Updating visual-spatial working memory during object movement. Vision Res 2013; 94:51-57. PMID: 24262811; DOI: 10.1016/j.visres.2013.11.002.
Abstract
Working memory enables temporary maintenance and manipulation of information for immediate access by cognitive processes. The present study investigates how spatial information stored in working memory is updated during object movement. Participants had to remember a particular location on an object which, after a retention interval, started to move. The question was whether the memorized location was updated with the movement of the object or whether, after object movement, it remained represented in retinotopic coordinates. We used saccade trajectories to examine how memorized locations were represented. The results showed that immediately after the object stopped moving, there was both a retinotopic and an object-centered representation. However, 200 ms later, the activity at the retinotopic location decayed, making the memory representation fully object-centered. Our results suggest that memorized locations are updated from retinotopic to object-centered coordinates during, or shortly after, object movement.
Affiliation(s)
- Paul J Boon
- Department of Cognitive Psychology, Vrije Universiteit, Amsterdam, The Netherlands.
- Jan Theeuwes
- Department of Cognitive Psychology, Vrije Universiteit, Amsterdam, The Netherlands
- Artem V Belopolsky
- Department of Cognitive Psychology, Vrije Universiteit, Amsterdam, The Netherlands