1
Cabbai G, Brown CRH, Dance C, Simner J, Forster S. Mental imagery and visual attentional templates: A dissociation. Cortex 2023; 169:259-278. [PMID: 37967476] [DOI: 10.1016/j.cortex.2023.09.014]
Abstract
There is a growing interest in the relationship between mental images and attentional templates as both are considered pictorial representations that involve similar neural mechanisms. Here, we investigated the role of mental imagery in the automatic implementation of attentional templates and their effect on involuntary attention. We developed a novel version of the contingent capture paradigm designed to encourage the generation of a new template on each trial and measure contingent spatial capture by a template-matching visual feature (color). Participants were required to search at four different locations for a specific object indicated at the start of each trial. Immediately prior to the search display, color cues were presented surrounding the potential target locations, one of which matched the target color (e.g., red for strawberry). Across three experiments, our task induced a robust contingent capture effect, reflected by faster responses when the target appeared in the location previously occupied by the target-matching cue. Contrary to our predictions, this effect remained consistent regardless of self-reported individual differences in visual mental imagery (Experiment 1, N = 216) or trial-by-trial variation of voluntary imagery vividness (Experiment 2, N = 121). Moreover, contingent capture was observed even among aphantasic participants, who report no imagery (Experiment 3, N = 91). The magnitude of the effect was not reduced in aphantasics compared to a control sample of non-aphantasics, although the two groups reported substantial differences in their search strategy and exhibited differences in overall speed and accuracy. Our results hence establish a dissociation between the generation and implementation of attentional templates for a visual feature (color) and subjectively experienced imagery.
Affiliation(s)
- Giulia Cabbai
- School of Psychology, University of Sussex, Brighton, United Kingdom; Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton, United Kingdom
- Carla Dance
- School of Psychology, University of Sussex, Brighton, United Kingdom
- Julia Simner
- School of Psychology, University of Sussex, Brighton, United Kingdom; Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton, United Kingdom
- Sophie Forster
- School of Psychology, University of Sussex, Brighton, United Kingdom; Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton, United Kingdom
2
Yeh LC, Yeh YY, Kuo BC. Spatially Specific Attention Mechanisms Are Sensitive to Competition during Visual Search. J Cogn Neurosci 2019; 31:1248-1259. [DOI: 10.1162/jocn_a_01418]
Abstract
Extensive studies have focused on selection mechanisms during visual search. One important influence on these mechanisms is the perceptual characteristics of the stimuli. We investigated the impact of perceptual similarity between targets and nontargets (T-N similarity) in a visual search task using EEG. Participants searched for a predefined target letter among five nontargets. The T-N similarity was manipulated with three levels: high, middle, and low. We tested for the influences of T-N similarity on an ERP component (the N2pc) and on alpha oscillations. We observed a significant N2pc effect across all levels of similarity. The N2pc amplitude was reduced and occurred later for high similarity relative to low and middle similarities. We also showed that the N2pc amplitude was inversely correlated with the RTs across all similarities. Importantly, we found a significant alpha phase adjustment at about the same time as the N2pc for high similarity; by contrast, no such effect was observed for middle and low similarities. Finally, we showed a positive correlation between the phase-locking value and the N2pc: the stronger the alpha phase-locking value, the larger the N2pc, when the T-N similarity was high. In conclusion, our results provide novel evidence for multiple competitive mechanisms during visual search.
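The inter-trial alpha phase-locking measure this abstract refers to can be sketched in a few lines. This is a generic illustration, assuming a single-channel array of epoched EEG trials; the function name `alpha_plv` and its parameters are assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_plv(trials, fs, band=(8.0, 13.0)):
    """Inter-trial phase-locking value (PLV) at each time point.

    trials : array of shape (n_trials, n_samples), single-channel EEG epochs
    fs     : sampling rate in Hz
    band   : frequency band of interest (alpha by default)
    """
    # Band-pass filter each trial, then extract instantaneous phase
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=1)
    phase = np.angle(hilbert(filtered, axis=1))
    # PLV = length of the mean unit phase vector across trials
    # (near 0 for random phases, near 1 for phase-locked trials)
    return np.abs(np.mean(np.exp(1j * phase), axis=0))
```

Phase-locked trials yield PLV values near 1, whereas trials with random phase offsets yield values near 1/sqrt(n_trials).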
3
Marini F, Breeding KA, Snow JC. Distinct visuo-motor brain dynamics for real-world objects versus planar images. Neuroimage 2019; 195:232-242. [PMID: 30776529] [DOI: 10.1016/j.neuroimage.2019.02.026]
Abstract
Ultimately, we aim to generalize and translate scientific knowledge to the real world, yet current understanding of human visual perception is based predominantly on studies of two-dimensional (2-D) images. Recent cognitive-behavioral evidence shows that real objects are processed differently to images, although the neural processes that underlie these differences are unknown. Because real objects (unlike images) afford actions, they may trigger stronger or more prolonged activation in neural populations for visuo-motor action planning. Here, we recorded electroencephalography (EEG) when human observers viewed real-world three-dimensional (3-D) objects or closely matched 2-D images of the same items. Although responses to real objects and images were similar overall, there were critical differences. Compared to images, viewing real objects triggered stronger and more sustained event-related desynchronization (ERD) in the μ frequency band (8-13 Hz) - a neural signature of automatic motor preparation. Event-related potentials (ERPs) revealed a transient, early occipital negativity for real objects (versus images), likely reflecting 3-D stereoscopic differences, and a late sustained parietal amplitude modulation consistent with an 'old-new' memory advantage for real objects over images. Together, these findings demonstrate that real-world objects trigger stronger and more sustained action-related brain responses than images do. The results highlight important similarities and differences between brain responses to images and richer, more ecologically relevant, real-world objects.
Affiliation(s)
- Francesco Marini
- Department of Psychology, University of Nevada, 1664 N Virginia St, Reno, NV, 89557-0296, USA; Swartz Center for Computational Neuroscience, University of California San Diego, 9500 Gilman Drive, La Jolla, CA, 92093-0559, USA
- Katherine A Breeding
- Department of Psychology, University of Nevada, 1664 N Virginia St, Reno, NV, 89557-0296, USA
- Jacqueline C Snow
- Department of Psychology, University of Nevada, 1664 N Virginia St, Reno, NV, 89557-0296, USA
4
The "item" as a window into how prior knowledge guides visual search. Behav Brain Sci 2018; 40:e162. [PMID: 29342603] [DOI: 10.1017/s0140525x16000315]
Abstract
We challenge the central idea proposed in Hulleman & Olivers (H&O) by arguing that the "item" is still useful for understanding visual search and for developing new theoretical frameworks. The "item" is a flexible unit that represents not only an individual object, but also a bundle of objects that are grouped based on prior knowledge. Uncovering how the "item" is represented based on prior knowledge is essential for advancing theories of visual search.
5
Wu R, Zhao J. Prior Knowledge of Object Associations Shapes Attentional Templates and Information Acquisition. Front Psychol 2017; 8:843. [PMID: 28588542] [PMCID: PMC5440728] [DOI: 10.3389/fpsyg.2017.00843]
Abstract
Studies on attentional selection typically use unpredictable and meaningless stimuli, such as simple shapes and oriented lines. The assumption is that using these stimuli minimizes effects due to learning or prior knowledge, such that the task performance indexes a "pure" measure of the underlying cognitive ability. However, prior knowledge of the test stimuli and related stimuli acquired before or during the task impacts performance in meaningful ways. This mini review focuses on prior knowledge of object associations, because it is an important, yet often ignored, aspect of attentional selection. We first briefly review recent studies demonstrating that how objects are selected during visual search depends on the participant's prior experience with other objects associated with the target. These effects appear with both task-relevant and task-irrelevant knowledge. We then review how existing object associations may influence subsequent learning of new information, which is both a driver and a consequence of selection processes. These insights highlight the importance of one aspect of prior knowledge for attentional selection and information acquisition. We briefly discuss how this work with young adults may inform other age groups throughout the lifespan, as learners gradually increase their prior knowledge. Importantly, these insights have implications for developing more accurate measurements of cognitive abilities.
Affiliation(s)
- Rachel Wu
- Department of Psychology, University of California, Riverside, Riverside, CA, United States
- Jiaying Zhao
- Department of Psychology and Institute for Resources, Environment and Sustainability, University of British Columbia, Vancouver, BC, Canada
6
Matran-Fernandez A, Poli R. Towards the automated localisation of targets in rapid image-sifting by collaborative brain-computer interfaces. PLoS One 2017; 12:e0178498. [PMID: 28562664] [PMCID: PMC5451058] [DOI: 10.1371/journal.pone.0178498]
Abstract
The N2pc is a lateralised Event-Related Potential (ERP) that signals a shift of attention towards the location of a potential object of interest. We propose a single-trial target-localisation collaborative Brain-Computer Interface (cBCI) that exploits this ERP to automatically approximate the horizontal position of targets in aerial images. Images were presented by means of the rapid serial visual presentation technique at rates of 5, 6 and 10 Hz. We created three different cBCIs and tested a participant selection method in which groups are formed according to the similarity of participants' performance. The N2pc that is elicited in our experiments contains information about the position of the target along the horizontal axis. Moreover, combining information from multiple participants provides absolute median improvements in the area under the receiver operating characteristic curve of up to 21% (for groups of size 3) with respect to single-user BCIs. These improvements are bigger when groups are formed by participants with similar individual performance, and much of this effect can be explained using simple theoretical models. Our results suggest that BCIs for automated triaging can be improved by integrating two classification systems: one devoted to target detection and another to detect the attentional shifts associated with lateral targets.
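The collaborative decision rule described in this abstract (scoring a group by combining its members' single-trial classifier outputs, then evaluating with the area under the ROC curve) can be sketched as follows. This is a minimal illustration using simple score averaging, synthetic data, and assumed names (`rank_auc`, `mean_group_auc`); it is not the authors' implementation.

```python
import numpy as np
from itertools import combinations

def rank_auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formula."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(labels.sum())
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def mean_group_auc(user_scores, labels, group_size):
    """Average AUC over all groups of `group_size` users, where a group's
    decision score on each trial is the mean of its members' scores."""
    n_users = user_scores.shape[0]
    aucs = [rank_auc(user_scores[list(g)].mean(axis=0), labels)
            for g in combinations(range(n_users), group_size)]
    return float(np.mean(aucs))
```

Averaging independent users' scores shrinks the noise by roughly the square root of the group size, which is why group AUC typically exceeds single-user AUC.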
Affiliation(s)
- Ana Matran-Fernandez
- School of Computer Science and Electronic Engineering, University of Essex, Colchester, Essex, United Kingdom
- Riccardo Poli
- School of Computer Science and Electronic Engineering, University of Essex, Colchester, Essex, United Kingdom
7
Jenkins M, Grubert A, Eimer M. Rapid Parallel Attentional Selection Can Be Controlled by Shape and Alphanumerical Category. J Cogn Neurosci 2016; 28:1672-1687. [DOI: 10.1162/jocn_a_00995]
Abstract
Previous research has shown that when two color-defined target objects appear in rapid succession at different locations, attention is deployed independently and in parallel to both targets. This study investigated whether this rapid simultaneous attentional target selection mechanism can also be employed in tasks where targets are defined by a different visual feature (shape) or when alphanumerical category is the target selection attribute. Two displays that both contained a target and a nontarget object on opposite sides were presented successively, and the SOA between the two displays was 100, 50, 20, or 10 msec in different blocks. N2pc components were recorded to both targets as a temporal marker of their attentional selection. When observers searched for shape-defined targets (Experiment 1), N2pc components to the two targets were equal in size and overlapped in time when the SOA between the two displays was short, reflecting two parallel shape-guided target selection processes with their own independent time course. Essentially the same temporal pattern of N2pc components was observed when alphanumerical category was the target-defining attribute (Experiment 2), demonstrating that the rapid parallel attentional selection of multiple target objects is not restricted to situations where the deployment of attention can be guided by elementary visual features but that these processes can even be employed in category-based attentional selection tasks. These findings have important implications for our understanding of the cognitive and neural basis of top–down attentional control.
8
Reeder RR. Individual differences shape the content of visual representations. Vision Res 2016; 141:266-281. [PMID: 27720956] [DOI: 10.1016/j.visres.2016.08.008]
Abstract
Visually perceiving a stimulus activates a pictorial representation of that item in the brain, but how pictorial is the representation of a stimulus in the absence of visual stimulation? Here I address this question with a review of the literatures on visual imagery (VI), visual working memory (VWM), and visual preparatory templates, all of which require activating visual information in the absence of sensory stimulation. These processes have historically been studied separately, but I propose that they can provide complementary evidence for the pictorial nature of their contents. One major challenge in studying the contents of visual representations is the discrepant findings concerning the extent of overlap (both cortical and behavioral) between externally and internally sourced visual representations. I argue that these discrepancies may in large part be due to individual differences in VI vividness and precision, the specific representative abilities required to perform a task, appropriateness of visual preparatory strategies, visual cortex anatomy, and level of expertise with a particular object category. Individual differences in visual representative abilities greatly impact task performance and may influence the likelihood of experiences such as intrusive VI and hallucinations, but research still predominantly focuses on uniformities in visual experience across individuals. In this paper I review the evidence for the pictorial content of visual representations activated for VI, VWM, and preparatory templates, and highlight the importance of accounting for various individual differences in conducting research on this topic.
Affiliation(s)
- Reshanne R Reeder
- Department of Experimental Psychology, Institute of Psychology II, Otto-von-Guericke University, Magdeburg, Germany
9
Nako R, Smith TJ, Eimer M. The Role of Color in Search Templates for Real-world Target Objects. J Cogn Neurosci 2016; 28:1714-1727. [PMID: 27315273] [DOI: 10.1162/jocn_a_00996]
Abstract
During visual search, target representations (attentional templates) control the allocation of attention to template-matching objects. The activation of new attentional templates can be prompted by verbal or pictorial target specifications. We measured the N2pc component of the ERP as a temporal marker of attentional target selection to determine the role of color signals in search templates for real-world search target objects that are set up in response to word or picture cues. On each trial run, a word cue (e.g., "apple") was followed by three search displays that contained the cued target object among three distractors. The selection of the first target was based on the word cue only, whereas selection of the two subsequent targets could be controlled by templates set up after the first visual presentation of the target (picture cue). In different trial runs, search displays either contained objects in their natural colors or monochromatic objects. These two display types were presented in different blocks (Experiment 1) or in random order within each block (Experiment 2). RTs were faster, and target N2pc components emerged earlier for the second and third display of each trial run relative to the first display, demonstrating that pictures are more effective than word cues in guiding search. N2pc components were triggered more rapidly for targets in the second and third display in trial runs with colored displays. This demonstrates that when visual target attributes are fully specified by picture cues, the additional presence of color signals in target templates facilitates the speed with which attention is allocated to template-matching objects. No such selection benefits for colored targets were found when search templates were set up in response to word cues. Experiment 2 showed that color templates activated by word cues can even impair the attentional selection of noncolored targets. Results provide new insights into the status of color during the guidance of visual search for real-world target objects. Color is a powerful guiding feature when the precise visual properties of these objects are known but seems to be less important when search targets are specified by word cues.
10
Automatic capture of attention by conceptually generated working memory templates. Atten Percept Psychophys 2015; 77:1841-7. [DOI: 10.3758/s13414-015-0918-1]