1
Sun M, Huang Y, Ying H. Repulsion bias is insensitive to spatial attention, yet expands during active working memory maintenance. Atten Percept Psychophys 2024. PMID: 38862765; DOI: 10.3758/s13414-024-02910-w.
Abstract
Our brain sometimes represents visual information in a biased manner. Multiple visual features presented simultaneously or sequentially may interact with each other when we perceive them or maintain them in visual working memory (WM), giving rise to report bias. How goal-directed attention influences target representation is not fully understood, especially whether attention toward distractors modulates report bias for the target. Our study investigated WM biases for a target presented during perception together with (1) only an attended distractor, (2) only an unattended distractor, or (3) both kinds of distractors. The remembered target was reported as repelled away from concurrent distractors, whether attended or unattended, suggesting that attention is not necessary for repulsion bias to arise during perception. Furthermore, goal-directed attention toward the distractors modulated the strength of interitem interaction: repulsion bias was stronger when attention was directed toward the distractor than when it was not. However, the exaggerated repulsion associated with the attended distractor is likely due to its increased relevance to the memory task and/or to WM load, rather than to spatial attention per se. In contrast, spatial attention toward the distractor increased the chance of misreporting the distractor as the target.
Affiliation(s)
- Mengdan Sun
- Department of Psychology, Soochow University, Suzhou, China
- Yaxin Huang
- Department of Psychology, Soochow University, Suzhou, China
- Haojiang Ying
- Department of Psychology, Soochow University, Suzhou, China
2
Martinovic J, Boyanova A, Andersen SK. Division and spreading of attention across color. Cereb Cortex 2024; 34:bhae240. PMID: 38858841; PMCID: PMC11164655; DOI: 10.1093/cercor/bhae240.
Abstract
Biological systems must allocate limited perceptual resources to relevant elements in their environment. This often requires simultaneous selection of multiple elements from the same feature dimension (e.g. color). To establish the determinants of divided attentional selection of color, we conducted an experiment that used multicolored displays with four overlapping random dot kinematograms that differed only in hue. We manipulated (i) the requirement to focus attention on a single color or divide it between two colors; and (ii) the distances of distractor hues from target hues in a perceptual color space. We conducted a behavioral and an electroencephalographic experiment; in the latter, each color was tagged with a specific flicker frequency, driving its own steady-state visual evoked potential. Behavioral and neural indices of attention showed several major consistencies. Concurrent selection halved the neural signature of target enhancement observed for single targets, consistent with an approximately equal division of limited resources between two hue-selective foci. Distractors interfered with behavioral performance in a context-dependent fashion, but their effects were asymmetric, indicating that perceptual distance did not adequately capture attentional distance. These asymmetries point towards an important role of higher-level mechanisms such as categorization and grouping-by-color in determining the efficiency of attentional allocation in complex, multicolored scenes.
Affiliation(s)
- Jasna Martinovic
- School of Philosophy, Psychology and Language Sciences, University of Edinburgh, 7 George Square, EH8 9JZ, Edinburgh, United Kingdom
- Antoniya Boyanova
- School of Psychology, University of Aberdeen, William Guild Building, AB24 3UB, Aberdeen, United Kingdom
- Søren K Andersen
- School of Psychology, University of Aberdeen, William Guild Building, AB24 3UB, Aberdeen, United Kingdom
- Department of Psychology, University of Southern Denmark, Campusvej 55, 5230 Odense, Denmark
3
Yu X, Rahim RA, Geng JJ. Task-adaptive changes to the target template in response to distractor context: Separability versus similarity. J Exp Psychol Gen 2024; 153:564-572. PMID: 37917441; PMCID: PMC10843062; DOI: 10.1037/xge0001507.
Abstract
Theories of attention hypothesize the existence of an attentional template that contains target features in working or long-term memory. It is frequently assumed that the template contains a veridical copy of the target, but recent studies suggest that this is not true when the distractors are linearly separable from the target. In such cases, target representations shift "off-veridical" in response to the distractor context, presumably because doing so is adaptive and increases the representational distinctiveness of targets from distractors. However, some have argued that the shifts may be entirely explained by perceptual biases created by simultaneous color contrast. Here we address this debate and test the more general hypothesis that the target template is adaptively shaped by elements of the distractor context needed to distinguish targets from distractors. We used a two-dimensional target and separately manipulated the linear separability of one dimension (color) and the visual similarity of the other (orientation). We found that target shifting along the linearly separable color dimension was dependent on the similarity of targets to distractors along the other dimension. The target representations were consistent with a postexperiment strategy questionnaire in which participants reported using color more when orientation was hard to use, and orientation more when it was easier to use. We conclude that the target template is task-adaptive and exploits features in the distractor context that most predictably distinguish targets from distractors to increase visual search efficiency.
Affiliation(s)
- Xinger Yu
- Center for Mind and Brain, University of California, Davis
- Raisa A. Rahim
- Center for Mind and Brain, University of California, Davis
- Joy J. Geng
- Center for Mind and Brain, University of California, Davis
- Department of Psychology, University of California, Davis
4
Henderson MM, Serences JT, Rungratsameetaweemana N. Dynamic categorization rules alter representations in human visual cortex. bioRxiv 2024. Preprint. PMID: 37745512; PMCID: PMC10515851; DOI: 10.1101/2023.09.11.557257.
Abstract
Everyday perceptual tasks require sensory stimuli to be dynamically encoded and analyzed according to changing behavioral goals. For example, when searching for an apple at the supermarket, one might first find the Granny Smith apples by separating all visible apples into the categories "green" and "non-green". However, suddenly remembering that your family actually likes Fuji apples would necessitate reconfiguring the boundary to separate "red" from "red-yellow" objects. This flexible processing enables identical sensory stimuli to elicit varied behaviors based on the current task context. While this phenomenon is ubiquitous in nature, little is known about the neural mechanisms that underlie such flexible computation. Traditionally, sensory regions have been viewed as mainly devoted to processing inputs, with limited involvement in adapting to varying task contexts. However, from the standpoint of efficient computation, it is plausible that sensory regions integrate inputs with current task goals, facilitating more effective information relay to higher-level cortical areas. Here we test this possibility by asking human participants to visually categorize novel shape stimuli based on different linear and non-linear boundaries. Using fMRI and multivariate analyses of retinotopically-defined visual areas, we found that shape representations in visual cortex became more distinct across relevant decision boundaries in a context-dependent manner, with the largest changes in discriminability observed for stimuli near the decision boundary. Importantly, these context-driven modulations were associated with improved categorization performance. Together, these findings demonstrate that codes in visual cortex are adaptively modulated to optimize object separability based on currently relevant decision boundaries.
Affiliation(s)
- Margaret M Henderson
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, USA
- Department of Machine Learning, Carnegie Mellon University, Pittsburgh, USA
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, USA
- John T Serences
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, USA
- Department of Psychology, University of California, San Diego, La Jolla, USA
- Kavli Foundation for the Brain and Mind, University of California, San Diego, La Jolla, USA
- Nuttida Rungratsameetaweemana
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, USA
- The Salk Institute for Biological Studies, La Jolla, USA
- Department of Biomedical Engineering, Columbia University, New York, USA
5
O'Bryan SR, Jung S, Mohan AJ, Scolari M. Category learning selectively enhances representations of boundary-adjacent exemplars in early visual cortex. J Neurosci 2024; 44:e1039232023. PMID: 37968121; PMCID: PMC10860654; DOI: 10.1523/jneurosci.1039-23.2023.
Abstract
Category learning and visual perception are fundamentally interactive processes, such that successful categorization often depends on the ability to make fine visual discriminations between stimuli that vary on continuously valued dimensions. Research suggests that category learning can improve perceptual discrimination along the stimulus dimensions that predict category membership and that these perceptual enhancements are a byproduct of functional plasticity in the visual system. However, the precise mechanisms underlying learning-dependent sensory modulation in categorization are not well understood. We hypothesized that category learning leads to a representational sharpening of underlying sensory populations tuned to values at or near the category boundary. Furthermore, such sharpening should occur largely during active learning of new categories. These hypotheses were tested using fMRI and a theoretically constrained model of vision to quantify changes in the shape of orientation representations while human adult subjects learned to categorize physically identical stimuli based on either an orientation rule (N = 12) or an orthogonal spatial frequency rule (N = 13). Consistent with our predictions, modeling results revealed relatively enhanced reconstructed representations of stimulus orientation in visual cortex (V1-V3) only for orientation rule learners. Moreover, these reconstructed representations varied as a function of distance from the category boundary, such that representations for challenging stimuli near the boundary were significantly sharper than those for stimuli at the category centers. These results support an efficient model of plasticity wherein only the sensory populations tuned to the most behaviorally relevant regions of feature space are enhanced during category learning.
Affiliation(s)
- Sean R O'Bryan
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas 79409
- Shinyoung Jung
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas 79409
- Anto J Mohan
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas 79409
- Miranda Scolari
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas 79409
6
Chen J, Golomb JD. Dynamic neural reconstructions of attended object location and features using EEG. J Neurophysiol 2023; 130:139-154. PMID: 37283457; PMCID: PMC10393364; DOI: 10.1152/jn.00180.2022.
Abstract
Attention allows us to select relevant and ignore irrelevant information from our complex environments. What happens when attention shifts from one item to another? To answer this question, it is critical to have tools that accurately recover neural representations of both feature and location information with high temporal resolution. In the present study, we used human electroencephalography (EEG) and machine learning to explore how neural representations of object features and locations update across dynamic shifts of attention. We demonstrate that EEG can be used to create simultaneous time courses of neural representations of attended features (time point-by-time point inverted encoding model reconstructions) and attended location (time point-by-time point decoding) during both stable periods and across dynamic shifts of attention. Each trial presented two oriented gratings that flickered at the same frequency but had different orientations; participants were cued to attend one of them and on half of trials received a shift cue midtrial. We trained models on a stable period from Hold attention trials and then reconstructed/decoded the attended orientation/location at each time point on Shift attention trials. Our results showed that both feature reconstruction and location decoding dynamically track the shift of attention and that there may be time points during the shifting of attention when 1) feature and location representations become uncoupled and 2) both the previously attended and currently attended orientations are represented with roughly equal strength. The results offer insight into our understanding of attentional shifts, and the noninvasive techniques developed in the present study lend themselves well to a wide variety of future applications.
NEW & NOTEWORTHY: We used human EEG and machine learning to reconstruct neural response profiles during dynamic shifts of attention. Specifically, we demonstrated that we could simultaneously read out both location and feature information from an attended item in a multistimulus display. Moreover, we examined how that readout evolves over time during the dynamic process of attentional shifts. These results provide insight into our understanding of attention, and this technique carries substantial potential for versatile extensions and applications.
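The inverted encoding model (IEM) reconstruction step described in this abstract can be illustrated with a small simulation. This is a toy sketch, not the authors' code: the raised-cosine basis, channel count, noise level, and all variable names are illustrative assumptions. Simulated sensor data are generated from hypothetical orientation-tuned channels, channel-to-sensor weights are estimated by least squares on training trials, and the weight matrix is then inverted to reconstruct channel responses from sensor data.

```python
import numpy as np

def channel_basis(oris_deg, n_chans=8):
    """Half-wave-rectified raised-cosine orientation channels over 0-180 deg
    (a common IEM basis choice; parameters here are illustrative)."""
    centers = np.arange(n_chans) * (180.0 / n_chans)
    ang = np.deg2rad(oris_deg[:, None] - centers[None, :]) * 2  # 180-deg period
    return np.clip(np.cos(ang), 0.0, None) ** 5  # trials x channels

rng = np.random.default_rng(0)
n_trials, n_sensors, n_chans = 200, 32, 8
oris = rng.uniform(0, 180, n_trials)
C = channel_basis(oris, n_chans)                 # trials x channels
W_true = rng.normal(size=(n_chans, n_sensors))   # hidden channel-to-sensor map
B = C @ W_true + 0.1 * rng.normal(size=(n_trials, n_sensors))  # simulated data

# Training: solve B = C @ W for the weights by least squares.
W_hat, *_ = np.linalg.lstsq(C, B, rcond=None)
# Inversion: map sensor data back into channel space to reconstruct tuning.
C_hat = B @ np.linalg.pinv(W_hat)
```

In a design like the one described above, the weights would be fit on the stable Hold-trial period and then inverted at each time point of Shift trials to track the reconstructed feature over time.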
Affiliation(s)
- Jiageng Chen
- Department of Psychology, The Ohio State University, Columbus, Ohio, United States
- Julie D Golomb
- Department of Psychology, The Ohio State University, Columbus, Ohio, United States
7
Chapman AF, Chunharas C, Störmer VS. Feature-based attention warps the perception of visual features. Sci Rep 2023; 13:6487. PMID: 37081047; PMCID: PMC10119379; DOI: 10.1038/s41598-023-33488-2.
Abstract
Selective attention improves sensory processing of relevant information but can also impact the quality of perception. For example, attention increases visual discrimination performance and at the same time boosts apparent stimulus contrast of attended relative to unattended stimuli. Can attention also lead to perceptual distortions of visual representations? Optimal tuning accounts of attention suggest that processing is biased towards "off-tuned" features to maximize the signal-to-noise ratio in favor of the target, especially when targets and distractors are confusable. Here, we tested whether such tuning gives rise to phenomenological changes of visual features. We instructed participants to select a color among other colors in a visual search display and subsequently asked them to judge the appearance of the target color in a 2-alternative forced choice task. Participants consistently judged the target color to appear more dissimilar from the distractor color in feature space. Critically, the magnitude of these perceptual biases varied systematically with the similarity between target and distractor colors during search, indicating that attentional tuning quickly adapts to current task demands. In control experiments we rule out possible non-attentional explanations such as color contrast or memory effects. Overall, our results demonstrate that selective attention warps the representational geometry of color space, resulting in profound perceptual changes across large swaths of feature space. Broadly, these results indicate that efficient attentional selection can come at a perceptual cost by distorting our sensory experience.
Affiliation(s)
- Angus F Chapman
- Department of Psychology, UC San Diego, La Jolla, CA, 92092, USA
- Department of Psychological and Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA, 02215, USA
- Chaipat Chunharas
- Cognitive Clinical and Computational Neuroscience Lab, KCMH Chula Neuroscience Center, Thai Red Cross Society, Department of Internal Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
- Viola S Störmer
- Department of Brain and Psychological Sciences, Dartmouth College, Hanover, NH, USA
8
Yu X, Zhou Z, Becker SI, Boettcher SEP, Geng JJ. Good-enough attentional guidance. Trends Cogn Sci 2023; 27:391-403. PMID: 36841692; DOI: 10.1016/j.tics.2023.01.007.
Abstract
Theories of attention posit that attentional guidance operates on information held in a target template within memory. The template is often thought to contain veridical target features, akin to a photograph, and to guide attention to objects that match the exact target features. However, recent evidence suggests that attentional guidance is highly flexible and often guided by non-veridical features, a subset of features, or only associated features. We integrate these findings and propose that attentional guidance maximizes search efficiency based on a 'good-enough' principle to rapidly localize candidate target objects. Candidates are then serially interrogated to make target-match decisions using more precise information. We suggest that good-enough guidance optimizes the speed-accuracy-effort trade-offs inherent in each stage of visual search.
Affiliation(s)
- Xinger Yu
- Center for Mind and Brain, University of California Davis, Davis, CA, USA
- Department of Psychology, University of California Davis, Davis, CA, USA
- Zhiheng Zhou
- Center for Mind and Brain, University of California Davis, Davis, CA, USA
- Stefanie I Becker
- School of Psychology, University of Queensland, Brisbane, QLD, Australia
- Joy J Geng
- Center for Mind and Brain, University of California Davis, Davis, CA, USA
- Department of Psychology, University of California Davis, Davis, CA, USA
9
Bowen JD, Alforque CV, Silver MA. Effects of involuntary and voluntary attention on critical spacing of visual crowding. J Vis 2023; 23:2. PMID: 36862108; PMCID: PMC9987171; DOI: 10.1167/jov.23.3.2.
Abstract
Visual spatial attention can be allocated in two distinct ways: one that is voluntarily directed to behaviorally relevant locations in the world, and one that is involuntarily captured by salient external stimuli. Precueing spatial attention has been shown to improve perceptual performance on a number of visual tasks. However, the effects of spatial attention on visual crowding, defined as the reduction in the ability to identify target objects in clutter, are far less clear. In this study, we used an anticueing paradigm to separately measure the effects of involuntary and voluntary spatial attention on a crowding task. Each trial began with a brief peripheral cue that predicted that the crowded target would appear on the opposite side of the screen 80% of the time and on the same side of the screen 20% of the time. Subjects performed an orientation discrimination task on a target Gabor patch that was flanked by other similar Gabor patches with independent random orientations. For trials with a short stimulus onset asynchrony between cue and target, involuntary capture of attention led to faster response times and smaller critical spacing when the target appeared on the cue side. For trials with a long stimulus onset asynchrony, voluntary allocation of attention led to faster reaction times but no significant effect on critical spacing when the target appeared on the opposite side to the cue. We additionally found that the magnitudes of these cueing effects of involuntary and voluntary attention were not strongly correlated across subjects for either reaction time or critical spacing.
Affiliation(s)
- Joel D Bowen
- Vision Science Graduate Group, University of California Berkeley, Berkeley, CA, USA
- Carissa V Alforque
- Herbert Wertheim School of Optometry & Vision Science, University of California Berkeley, Berkeley, CA, USA
- Michael A Silver
- Vision Science Graduate Group, University of California Berkeley, Berkeley, CA, USA
- Herbert Wertheim School of Optometry & Vision Science, University of California Berkeley, Berkeley, CA, USA
- Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, CA, USA
10
Internal attention is the only retroactive mechanism for controlling precision in working memory. Atten Percept Psychophys 2022. PMID: 36536206; PMCID: PMC10371937; DOI: 10.3758/s13414-022-02628-7.
Abstract
Recent research has suggested that humans can assert control over the precision of working memory (WM) items. However, the mechanisms that enable this control are unclear. While some studies suggest that internal attention improves precision, it may not be the only factor, as previous work also demonstrated that WM storage is disentangled from attention. To test whether there is a precision control mechanism beyond internal attention, we contrasted internal attention and precision requirements within the same trial in three experiments. In every trial, participants memorized two items briefly. Before the test, a retro-cue indicated which item would be tested first, thus should be attended. Importantly, we encouraged participants to store the unattended item with higher precision by testing it using more similar lure colors at the probe display. Accuracy was analyzed on a small proportion of trials where the target-lure similarity, hence the task difficulty, was equal for attended and unattended items. Experiments 2 and 3 controlled for output interference by the first test and involuntary precision boost by the retro-cue, respectively. In all experiments, the unattended item had lower accuracy than the attended item, suggesting that individuals were not able to remember it more precisely than the attended item. Thus, we conclude that there is no precision control mechanism beyond internal attention, highlighting the close relationship between attentional and qualitative prioritization within WM. We discuss the important implications of these findings for our understanding of the fundamentals of WM and WM-driven behaviors.
11
Chapman AF, Störmer VS. Feature similarity is non-linearly related to attentional selection: Evidence from visual search and sustained attention tasks. J Vis 2022; 22:4. PMID: 35834377; PMCID: PMC9290316; DOI: 10.1167/jov.22.8.4.
Abstract
Although many theories of attention highlight the importance of similarity between target and distractor items for selection, few studies have directly quantified the function underlying this relationship. Across two commonly used tasks (visual search and sustained attention), we investigated how target-distractor similarity impacts feature-based attentional selection. Importantly, we found comparable patterns of performance in both visual search and sustained feature-based attention tasks, with performance (response times and d', respectively) plateauing at medium target-distractor distances (40°-50° around a luminance-matched color wheel). In contrast, visual search efficiency, as measured by search slopes, was affected by a much narrower range of similarity levels (10°-20°). We assessed the relationship between target-distractor similarity and attentional performance using both a stimulus-based and a psychologically-based measure of similarity and found this nonlinear relationship in both cases. However, psychological similarity accounted for some of the nonlinearities observed in the data, suggesting that measures of psychological similarity are more appropriate when studying effects of target-distractor similarities. These findings place novel constraints on models of selective attention and emphasize the importance of considering the similarity structure of the feature space over which attention operates. Broadly, the nonlinear effects of similarity on attention are consistent with accounts that propose attention exaggerates the distance between competing representations, possibly through enhancement of off-tuned neurons.
Affiliation(s)
- Angus F Chapman
- Department of Psychology, University of California, San Diego, La Jolla, CA, USA
- Viola S Störmer
- Department of Brain and Psychological Sciences, Dartmouth College, Hanover, NH, USA
12
Kumar M, Anderson MJ, Antony JW, Baldassano C, Brooks PP, Cai MB, Chen PHC, Ellis CT, Henselman-Petrusek G, Huberdeau D, Hutchinson JB, Li YP, Lu Q, Manning JR, Mennen AC, Nastase SA, Richard H, Schapiro AC, Schuck NW, Shvartsman M, Sundaram N, Suo D, Turek JS, Turner D, Vo VA, Wallace G, Wang Y, Williams JA, Zhang H, Zhu X, Capotă M, Cohen JD, Hasson U, Li K, Ramadge PJ, Turk-Browne NB, Willke TL, Norman KA. BrainIAK: The Brain Imaging Analysis Kit. Aperture Neuro 2022; 1. PMID: 35939268; PMCID: PMC9351935; DOI: 10.52294/31bb5b68-2184-411b-8c00-a1dacb61e1da.
Abstract
Functional magnetic resonance imaging (fMRI) offers a rich source of data for studying the neural basis of cognition. Here, we describe the Brain Imaging Analysis Kit (BrainIAK), an open-source, free Python package that provides computationally optimized solutions to key problems in advanced fMRI analysis. A variety of techniques are presently included in BrainIAK: intersubject correlation (ISC) and intersubject functional connectivity (ISFC), functional alignment via the shared response model (SRM), full correlation matrix analysis (FCMA), a Bayesian version of representational similarity analysis (BRSA), event segmentation using hidden Markov models, topographic factor analysis (TFA), inverted encoding models (IEMs), an fMRI data simulator that uses noise characteristics from real data (fmrisim), and some emerging methods. These techniques have been optimized to leverage the efficiencies of high-performance compute (HPC) clusters, and the same code can be seamlessly transferred from a laptop to a cluster. For each of the aforementioned techniques, we describe the data analysis problem that the technique is meant to solve and how it solves that problem; we also include an example Jupyter notebook for each technique and an annotated bibliography of papers that have used and/or described that technique. In addition to the sections describing various analysis techniques in BrainIAK, we have included sections describing the future applications of BrainIAK to real-time fMRI, tutorials that we have developed and shared online to facilitate learning the techniques in BrainIAK, computational innovations in BrainIAK, and how to contribute to BrainIAK. We hope that this manuscript helps readers to understand how BrainIAK might be useful in their research.
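Some of the techniques listed above are simple enough to convey in a few lines. As an illustration, here is a dependency-free toy sketch of leave-one-out intersubject correlation (ISC): each subject's time course is correlated with the mean time course of all remaining subjects. This is not BrainIAK's implementation (its `brainiak.isc` module handles multi-voxel data and statistical testing); all names and the example data below are illustrative.

```python
def pearson_r(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def leave_one_out_isc(subjects):
    """ISC for one voxel/region: correlate each subject's time course
    with the mean time course of all the other subjects."""
    result = []
    for i, ts in enumerate(subjects):
        others = [s for j, s in enumerate(subjects) if j != i]
        mean_others = [sum(vals) / len(vals) for vals in zip(*others)]
        result.append(pearson_r(ts, mean_others))
    return result

# Three subjects with similar stimulus-driven responses yield high ISC.
data = [[0, 1, 3, 2, 5, 4], [0, 2, 3, 2, 6, 4], [1, 1, 4, 2, 5, 5]]
isc_values = leave_one_out_isc(data)
```

On real fMRI data this would run per voxel or per region, and the resulting r values would be tested against a null distribution (e.g., via time-series shuffling), which is where an optimized package like BrainIAK earns its keep.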
Affiliation(s)
- Manoj Kumar
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Michael J. Anderson
- Work done while at Parallel Computing Lab, Intel Corporation, Santa Clara, CA
- James W. Antony
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Paula P. Brooks
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Ming Bo Cai
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Japan
- Po-Hsuan Cameron Chen
- Work done while at Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Y. Peeta Li
- Department of Psychology, University of Oregon, Eugene, OR
- Qihong Lu
- Department of Psychology, Princeton University, Princeton, NJ
- Jeremy R. Manning
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH
- Anne C. Mennen
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Samuel A. Nastase
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Hugo Richard
- Parietal Team, Inria, Neurospin, CEA, Université Paris-Saclay, France
- Anna C. Schapiro
- Department of Psychology, University of Pennsylvania, Philadelphia, PA
- Nicolas W. Schuck
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Berlin, Germany
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin, Germany
- Michael Shvartsman
- Work done while at Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Narayanan Sundaram
- Work done while at Parallel Computing Lab, Intel Corporation, Santa Clara, CA
- Daniel Suo
- Department of Computer Science, Princeton University, Princeton, NJ
- Javier S. Turek
- Brain-Inspired Computing Lab, Intel Corporation, Hillsboro, OR
- David Turner
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Vy A. Vo
- Brain-Inspired Computing Lab, Intel Corporation, Hillsboro, OR
- Grant Wallace
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Yida Wang
- Work done while at Parallel Computing Lab, Intel Corporation, Santa Clara, CA
- Jamal A. Williams
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Department of Psychology, Princeton University, Princeton, NJ
- Hejia Zhang
- Work done while at Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Xia Zhu
- Brain-Inspired Computing Lab, Intel Corporation, Hillsboro, OR
- Mihai Capotă
- Brain-Inspired Computing Lab, Intel Corporation, Hillsboro, OR
- Jonathan D. Cohen
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Department of Psychology, Princeton University, Princeton, NJ
- Uri Hasson
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Department of Psychology, Princeton University, Princeton, NJ
- Kai Li
- Department of Computer Science, Princeton University, Princeton, NJ
- Peter J. Ramadge
- Department of Electrical Engineering, and the Center for Statistics and Machine Learning, Princeton University, Princeton, NJ
- Kenneth A. Norman
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Department of Psychology, Princeton University, Princeton, NJ
13
Keller AS, Jagadeesh AV, Bugatus L, Williams LM, Grill-Spector K. Attention enhances category representations across the brain with strengthened residual correlations to ventral temporal cortex. Neuroimage 2022; 249:118900. [PMID: 35021039] [PMCID: PMC8947761] [DOI: 10.1016/j.neuroimage.2022.118900]
Abstract
How does attention enhance neural representations of goal-relevant stimuli while suppressing representations of ignored stimuli across regions of the brain? While prior studies have shown that attention enhances visual responses, we lack a cohesive understanding of how selective attention modulates visual representations across the brain. Here, we used functional magnetic resonance imaging (fMRI) while participants performed a selective attention task on superimposed stimuli from multiple categories and used a data-driven approach to test how attention affects both decodability of category information and residual correlations (after regressing out stimulus-driven variance) with category-selective regions of ventral temporal cortex (VTC). Our data reveal three main findings. First, when two objects are simultaneously viewed, the category of the attended object can be decoded more readily than the category of the ignored object, with the greatest attentional enhancements observed in occipital and temporal lobes. Second, after accounting for the response to the stimulus, the correlation in the residual brain activity between a cortical region and a category-selective region of VTC was elevated when that region’s preferred category was attended vs. ignored, and more so in the right occipital, parietal, and frontal cortices. Third, we found that the stronger the residual correlations between a given region of cortex and VTC, the better visual category information could be decoded from that region. These findings suggest that heightened residual correlations by selective attention may reflect the sharing of information between sensory regions and higher-order cortical regions to provide attentional enhancement of goal-relevant information.
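The residual-correlation logic described above can be illustrated in a few lines: regress the stimulus-driven regressor out of each region's time course, then correlate what is left. This is a toy NumPy sketch on synthetic data, not the authors' analysis pipeline; the region names and signal strengths are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trs = 300

# Stimulus-driven regressor (e.g., a convolved task design) shared by both regions.
stim = rng.standard_normal(n_trs)

# Trial-by-trial fluctuation shared between the two regions over and above
# the stimulus response -- this is what a residual correlation should detect.
shared_fluct = rng.standard_normal(n_trs)

region_a = 1.0 * stim + 0.8 * shared_fluct + 0.5 * rng.standard_normal(n_trs)
region_vtc = 0.7 * stim + 0.8 * shared_fluct + 0.5 * rng.standard_normal(n_trs)

def residualize(y, x):
    """Regress x (plus an intercept) out of y via least squares."""
    design = np.column_stack([x, np.ones_like(x)])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return y - design @ beta

raw_r = np.corrcoef(region_a, region_vtc)[0, 1]
resid_r = np.corrcoef(residualize(region_a, stim),
                      residualize(region_vtc, stim))[0, 1]
print(f"raw r = {raw_r:.2f}, residual r = {resid_r:.2f}")
```

The residual correlation isolates coupling between regions that cannot be attributed to both regions responding to the same stimulus.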
Affiliation(s)
- Arielle S Keller: Department of Psychiatry and Behavioral Sciences, Stanford University, CA 94305, USA; Neurosciences Graduate Program, Stanford University, CA 94305, USA
- Lior Bugatus: Department of Psychology, Stanford University, CA 94305, USA
- Leanne M Williams: Department of Psychiatry and Behavioral Sciences, Stanford University, CA 94305, USA
- Kalanit Grill-Spector: Department of Psychology, Stanford University, CA 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, CA 94305, USA
14
Yu X, Hanks TD, Geng JJ. Attentional Guidance and Match Decisions Rely on Different Template Information During Visual Search. Psychol Sci 2021; 33:105-120. [PMID: 34878949] [DOI: 10.1177/09567976211032225]
Abstract
When searching for a target object, we engage in a continuous "look-identify" cycle in which we use known features of the target to guide attention toward potential targets and then to decide whether the selected object is indeed the target. Target information in memory (the target template or attentional template) is typically characterized as having a single, fixed source. However, debate has recently emerged over whether flexibility in the target template is relational or optimal. On the basis of evidence from two experiments using college students (Ns = 30 and 70, respectively), we propose that initial guidance of attention uses a coarse relational code, but subsequent decisions use an optimal code. Our results offer a novel perspective that the precision of template information differs when guiding sensory selection and when making identity decisions during visual search.
Affiliation(s)
- Xinger Yu: Center for Mind and Brain, University of California, Davis; Department of Psychology, University of California, Davis
- Timothy D Hanks: Center for Neuroscience, University of California, Davis; Department of Neurology, University of California, Davis
- Joy J Geng: Center for Mind and Brain, University of California, Davis; Department of Psychology, University of California, Davis
15
Goddard E, Carlson TA, Woolgar A. Spatial and Feature-selective Attention Have Distinct, Interacting Effects on Population-level Tuning. J Cogn Neurosci 2021; 34:290-312. [PMID: 34813647] [PMCID: PMC7613071] [DOI: 10.1162/jocn_a_01796]
Abstract
Attention can be deployed in different ways: When searching for a taxi in New York City, we can decide where to attend (e.g., to the street) and what to attend to (e.g., yellow cars). Although we use the same word to describe both processes, nonhuman primate data suggest that these produce distinct effects on neural tuning. This has been challenging to assess in humans, but here we used an opportunity afforded by multivariate decoding of MEG data. We found that attending to an object at a particular location and attending to a particular object feature produced effects that interacted multiplicatively. The two types of attention induced distinct patterns of enhancement in occipital cortex, with feature-selective attention producing relatively more enhancement of small feature differences and spatial attention producing relatively larger effects for larger feature differences. An information flow analysis further showed that stimulus representations in occipital cortex were Granger-caused by coding in frontal cortices earlier in time and that the timing of this feedback matched the onset of attention effects. The data suggest that spatial and feature-selective attention rely on distinct neural mechanisms that arise from frontal-occipital information exchange, interacting multiplicatively to selectively enhance task-relevant information.
Affiliation(s)
- Erin Goddard: University of New South Wales; Macquarie University, Sydney, New South Wales, Australia
- Thomas A Carlson: Macquarie University, Sydney, New South Wales, Australia; University of Sydney
- Alexandra Woolgar: Macquarie University, Sydney, New South Wales, Australia; University of Cambridge
16
Neural representations of ensemble coding in the occipital and parietal cortices. Neuroimage 2021; 245:118680. [PMID: 34718139] [DOI: 10.1016/j.neuroimage.2021.118680]
Abstract
The human visual system is able to extract summary statistics from sets of similar items, but the underlying neural mechanism remains poorly understood. Using functional magnetic resonance imaging (fMRI) and an encoding model, we examined how the neural representation of ensemble coding is constructed by manipulating the task-relevance of ensemble features. We found a gradual increase in orientation-selective responses to the mean orientation of multiple stimuli along the visual hierarchy only when these orientations were task-relevant. Such responses to the ensemble orientation were present in the extrastriate area, V3, even when the mean orientation was not task-relevant, indicating that the ensemble representation can co-exist with the task-relevant individual feature representation. Ensemble orientations were also represented in frontal regions, but those representations were robust only when each mean orientation was linked to a motor response dimension. Together, our findings suggest that the neural representation of the ensemble percept is formed by pooling signals at multiple levels of the visual processing stream.
17
Abstract
Selectivity for many basic properties of visual stimuli, such as orientation, is thought to be organized at the scale of cortical columns, making it difficult or impossible to measure directly with noninvasive human neuroscience techniques. However, computational analyses of neuroimaging data have shown that selectivity for orientation can be recovered by considering the pattern of response across a region of cortex. This suggests that computational analyses can reveal representations encoded at a finer spatial scale than is implied by the spatial resolution limits of measurement techniques, potentially opening up the study of a much wider range of neural phenomena that are otherwise inaccessible to noninvasive measurement. However, as we review in this article, a large body of evidence suggests an alternative hypothesis to this superresolution account: that orientation information is available at the spatial scale of cortical maps and thus easily measurable at the spatial resolution of standard techniques. In fact, a population model shows that this orientation information need not even come from single-unit selectivity for orientation tuning, but instead can result from population selectivity for spatial frequency. Thus, a categorical error of interpretation can result whereby orientation selectivity is confused with spatial frequency selectivity. This is similarly problematic for the interpretation of results from numerous studies of more complex representations and cognitive functions that have built upon the computational techniques used to reveal stimulus orientation. We suggest in this review that these interpretational ambiguities can be avoided by treating computational analyses as models of the neural processes that give rise to measurement. Building on the modeling tradition in vision science, asking whether population models meet a set of core criteria is important for creating the foundation for a cumulative and replicable approach to making valid inferences from human neuroscience measurements. Expected final online publication date for the Annual Review of Vision Science, Volume 7 is September 2021. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
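The inverted encoding model (IEM) at issue in this review has two steps: fit a linear mapping from hypothesized channel responses to voxel responses on training data, then invert that mapping on held-out data to reconstruct a channel response profile. A minimal sketch on synthetic data follows; the basis set, channel count, and noise level are illustrative assumptions, not taken from any particular study.

```python
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_voxels = 8, 50

def channel_responses(oris_deg):
    """Idealized tuning: rectified raised-cosine channels over 0-180 deg orientation."""
    centers = np.arange(n_channels) * 180 / n_channels
    # Angular distance on the 180-deg orientation circle (doubled to 360 deg).
    d = np.deg2rad(2 * (oris_deg[:, None] - centers[None, :]))
    return np.maximum(np.cos(d), 0) ** 5              # (n_trials, n_channels)

# Forward model: voxel responses are a noisy linear mix of channel responses.
true_w = rng.standard_normal((n_channels, n_voxels))
train_oris = rng.uniform(0, 180, 200)
C_train = channel_responses(train_oris)
B_train = C_train @ true_w + 0.1 * rng.standard_normal((200, n_voxels))

# Step 1: estimate channel-to-voxel weights by least squares (B = C W).
w_hat, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)

# Step 2: invert the model on held-out data to reconstruct channel responses.
test_ori = np.array([90.0])
b_test = channel_responses(test_ori) @ true_w
c_hat, *_ = np.linalg.lstsq(w_hat.T, b_test.T, rcond=None)

centers = np.arange(n_channels) * 180 / n_channels
print("reconstructed peak near", centers[np.argmax(c_hat.ravel())], "deg")
```

Because the synthetic data were generated by exactly this linear model, the reconstruction peaks at the presented orientation; the review's point is precisely that such reconstructions inherit the analyst's choice of basis and must be interpreted as model outputs, not direct measurements.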
Affiliation(s)
- Justin L Gardner: Department of Psychology, Stanford University, Stanford, California 94305, USA
- Elisha P Merriam: Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland 20892, USA
18
Sharper attentional tuning with target templates in long-term compared to working memory. Psychon Bull Rev 2021; 28:1261-1269. [PMID: 33754320] [DOI: 10.3758/s13423-021-01898-w]
Abstract
Theories of attention postulate the existence of an attentional template containing target features in working or long-term memory. Previous research has shown that these internal representations of target features are shifted away from nontarget features and that attention is tuned to the shifted feature, especially when the target appears with similar nontarget items. While previous studies have shown that the target-nontarget relationship influences attentional selection and the representational shift when the attentional template is maintained in long-term memory, there is little evidence for such effects when the template is stored in working memory. To address this issue, we asked participants to search for a target that either varied from trial to trial (working-memory attentional template) or remained stable across trials (long-term-memory attentional template). We found that the shifted target features captured attention and that the representations of target features deviated away from nontarget features whether the target template was stored in working memory or in long-term memory. However, these effects were greater for the attentional template in long-term memory. The present results provide evidence that observers can encode the target-nontarget relationship even when the target varies from trial to trial, and that such contextual information influences attentional selection and the target representation shift even in this dynamically changing environment.
19
Itthipuripat S, Deering S, Serences JT. When Conflict Cannot be Avoided: Relative Contributions of Early Selection and Frontal Executive Control in Mitigating Stroop Conflict. Cereb Cortex 2020; 29:5037-5048. [PMID: 30877786] [DOI: 10.1093/cercor/bhz042]
Abstract
When viewing familiar stimuli (e.g., common words), processing is highly automatized such that it can interfere with the processing of incompatible sensory information. At least two mechanisms may help mitigate this interference. Early selection accounts posit that attentional processes filter out distracting sensory information to avoid conflict. Alternatively, late selection accounts hold that all sensory inputs receive full semantic analysis and that frontal executive mechanisms are recruited to resolve conflict. To test how these mechanisms operate to overcome conflict induced by highly automatized processing, we developed a novel version of the color-word Stroop task, where targets and distractors were simultaneously flickered at different frequencies. We measured the quality of early sensory processing by assessing the amplitude of steady-state visually evoked potentials (SSVEPs) elicited by targets and distractors. We also indexed frontal executive processes by assessing changes in frontal theta oscillations induced by color-word incongruency. We found that target- and distractor-related SSVEPs were not modulated by changes in the level of conflict whereas frontal theta activity increased on high compared to low conflict trials. These results suggest that frontal executive processes play a more dominant role in mitigating cognitive interference driven by the automatic tendency to process highly familiar stimuli.
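Frequency tagging of the kind used above isolates each stimulus's response at its flicker frequency: the SSVEP amplitude is read off the Fourier spectrum at the tagged frequency. A toy sketch on simulated data (frequencies, amplitudes, and noise level are illustrative assumptions, not values from the study):

```python
import numpy as np

fs, dur = 500.0, 2.0                      # sampling rate (Hz), trial length (s)
t = np.arange(0, dur, 1 / fs)
f_target, f_distractor = 12.0, 15.0       # flicker "tag" frequencies

rng = np.random.default_rng(3)
# Simulated EEG: two flicker-driven oscillations buried in broadband noise.
eeg = (1.0 * np.sin(2 * np.pi * f_target * t)
       + 0.6 * np.sin(2 * np.pi * f_distractor * t)
       + 1.0 * rng.standard_normal(t.size))

# SSVEP amplitude = magnitude of the Fourier component at each tag frequency.
spectrum = np.abs(np.fft.rfft(eeg)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

amp_target = spectrum[np.argmin(np.abs(freqs - f_target))]
amp_distractor = spectrum[np.argmin(np.abs(freqs - f_distractor))]
print(f"12 Hz amplitude ~ {amp_target:.2f}, 15 Hz amplitude ~ {amp_distractor:.2f}")
```

Because the two streams are tagged at distinct frequencies, target- and distractor-related processing can be tracked simultaneously from a single recording, which is what allowed the authors to test whether conflict modulated early sensory responses.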
Affiliation(s)
- Sirawaj Itthipuripat: Department of Psychology and Center for Integrative and Cognitive Neuroscience, Vanderbilt University, Nashville, TN, USA; Learning Institute and Futuristic Research in Enigmatic Aesthetics Knowledge Laboratory, King Mongkut's University of Technology Thonburi, Bangkok, Thailand; Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA, USA; Brain Development Imaging Laboratories, Department of Psychology, San Diego State University, San Diego, CA, USA
- Sean Deering: Department of Psychology, University of California, San Diego, La Jolla, CA, USA; Health Services Research and Development, Veterans Affairs San Diego Healthcare System, La Jolla, CA, USA
- John T Serences: Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA, USA; Department of Psychology, University of California, San Diego, La Jolla, CA, USA; Kavli Foundation for the Brain and Mind, University of California, San Diego, La Jolla, CA, USA
20
Abstract
When searching for a specific object, we often form an image of the target, which we use as a search template. This template is thought to be maintained in working memory, primarily because of evidence that the contents of working memory influence search behavior. However, it is unknown whether this interaction applies in both directions. Here, we show that changes in search templates influence working memory. Participants were asked to remember the orientation of a line that changed every trial and, on most trials (75%), to search for that orientation, but on the remaining trials to recall the orientation. Critically, we manipulated the target template by introducing a predictable context: distractors in the visual search task were always counterclockwise (or clockwise) from the search target. The predictable context produced a large bias in search. Importantly, we also found a similar bias in orientation memory reports, demonstrating that working memory and target templates were not held as completely separate, isolated representations. However, the memory bias was considerably smaller than the search bias, suggesting that, although there is a common source, the two may not be driven by a single, shared process.
21
Won BY, Haberman J, Bliss-Moreau E, Geng JJ. Flexible target templates improve visual search accuracy for faces depicting emotion. Atten Percept Psychophys 2020; 82:2909-2923. [PMID: 31974937] [PMCID: PMC8806142] [DOI: 10.3758/s13414-019-01965-4]
Abstract
Theories of visual attention hypothesize that target selection depends upon matching visual inputs to a memory representation of the target - i.e., the target or attentional template. Most theories assume that the template contains a veridical copy of target features, but recent studies suggest that target representations may shift "off veridical" from actual target features to increase target-to-distractor distinctiveness. However, these studies have been limited to simple visual features (e.g., orientation, color), which leaves open the question of whether similar principles apply to complex stimuli, such as a face depicting an emotion, the perception of which is known to be shaped by conceptual knowledge. In three studies, we find confirmatory evidence for the hypothesis that attention modulates the representation of an emotional face to increase target-to-distractor distinctiveness. This occurs over-and-above strong pre-existing conceptual and perceptual biases in the representation of individual faces. The results are consistent with the view that visual search accuracy is determined by the representational distance between the target template in memory and distractor information in the environment, not the veridical target and distractor features.
Affiliation(s)
- Bo-Yeong Won: Center for Mind and Brain, University of California Davis, Davis, CA, USA
- Eliza Bliss-Moreau: Department of Psychology, University of California Davis, Davis, CA, USA; California National Primate Research Center, University of California Davis, Davis, CA, USA
- Joy J Geng: Center for Mind and Brain, University of California Davis, Davis, CA, USA; Department of Psychology, University of California Davis, Davis, CA, USA
22
Complementary Brain Signals for Categorical Decisions. J Neurosci 2020; 40:5706-5708. [PMID: 32699153] [DOI: 10.1523/jneurosci.0785-20.2020]
23
Conjunction search: Can we simultaneously bias attention to features and relations? Atten Percept Psychophys 2020; 82:246-268. [PMID: 31317396] [DOI: 10.3758/s13414-019-01807-3]
Abstract
Attention allows selection of sought-after objects by tuning attention in a top-down manner to task-relevant features. Among other possible search modes, attention can be tuned to the exact feature values of a target (e.g., red, large), or to the relative target feature (e.g., reddest, largest item), in which case selection is context dependent. The present study tested whether we can tune attention simultaneously to a specific feature value (e.g., specific size) and a relative target feature (e.g., relative color) of a conjunction target, using a variant of the spatial cueing paradigm. Tuning to the specific feature of the target was encouraged by randomly presenting the conjunction target in a varying context of nontarget items, and feature-specific versus relational tuning was assessed by briefly presenting conjunction cues that either matched or mismatched the relative versus physical features of the target. The results showed that attention could be biased to the specific size and the relative color of the conjunction target or vice versa. These results suggest the existence of local and relatively low-level attentional control mechanisms that operate independently of each other in separate feature dimensions (color, size) to choose the best search strategy in line with current top-down goals.
24
Zhang RY, Kay K. Flexible top-down modulation in human ventral temporal cortex. Neuroimage 2020; 218:116964. [PMID: 32439537] [DOI: 10.1016/j.neuroimage.2020.116964]
Abstract
Visual neuroscientists have long characterized attention as inducing a scaling or additive effect on fixed parametric functions describing neural responses (e.g., contrast response functions). Here, we instead propose that top-down effects are more complex and manifest in ways that depend not only on attention but also other cognitive processes involved in executing a task. To substantiate this theory, we analyze fMRI responses in human ventral temporal cortex (VTC) in a study where stimulus eccentricity and cognitive task are varied. We find that as stimuli are presented farther into the periphery, bottom-up stimulus-driven responses decline but top-down attentional enhancement increases substantially. This disproportionate enhancement of weak responses cannot be easily explained by conventional models of attention. Furthermore, we find that attentional effects depend on the specific cognitive task performed by the subject, indicating the influence of additional cognitive processes other than attention (e.g., decision-making). The effects we observe replicate in an independent experiment from the same study, and also generalize to a separate study involving different stimulus manipulations (contrast and phase coherence). Our results suggest that a quantitative understanding of top-down modulation requires more nuanced characterization of the multiple cognitive factors involved in completing a perceptual task.
Affiliation(s)
- Ru-Yuan Zhang: Shanghai Key Laboratory of Psychotic Disorders, Shanghai Mental Health Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200030, China; Institute of Psychology and Behavioral Science, Shanghai Jiao Tong University, Shanghai, 200030, China; Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN, 55455, USA
- Kendrick Kay: Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN, 55455, USA
25
Categorical Biases in Human Occipitoparietal Cortex. J Neurosci 2019; 40:917-931. [PMID: 31862856] [DOI: 10.1523/jneurosci.2700-19.2019]
Abstract
Categorization allows organisms to generalize existing knowledge to novel stimuli and to discriminate between physically similar yet conceptually different stimuli. Humans, nonhuman primates, and rodents can readily learn arbitrary categories defined by low-level visual features, and learning distorts perceptual sensitivity for category-defining features such that differences between physically similar yet categorically distinct exemplars are enhanced, whereas differences between equally similar but categorically identical stimuli are reduced. We report a possible basis for these distortions in human occipitoparietal cortex. In three experiments, we used an inverted encoding model to recover population-level representations of stimuli from multivoxel and multielectrode patterns of human brain activity while human participants (both sexes) classified continuous stimulus sets into discrete groups. In each experiment, reconstructed representations of to-be-categorized stimuli were systematically biased toward the center of the appropriate category. These biases were largest for exemplars near a category boundary, predicted participants' overt category judgments, emerged shortly after stimulus onset, and could not be explained by mechanisms of response selection or motor preparation. Collectively, our findings suggest that category learning can influence processing at the earliest stages of cortical visual processing.
SIGNIFICANCE STATEMENT: Category learning enhances perceptual sensitivity for physically similar yet categorically different stimuli. We report a possible mechanism for these changes in human occipitoparietal cortex. In three experiments, we used an inverted encoding model to recover population-level representations of stimuli from multivariate patterns in occipitoparietal cortex while participants categorized sets of continuous stimuli into discrete groups. The recovered representations were systematically biased by category membership, with larger biases for exemplars adjacent to a category boundary. These results suggest that mechanisms of categorization shape information processing at the earliest stages of the visual system.
26
Sprague TC, Boynton GM, Serences JT. The Importance of Considering Model Choices When Interpreting Results in Computational Neuroimaging. eNeuro 2019; 6:ENEURO.0196-19.2019. [PMID: 31772033] [PMCID: PMC6924997] [DOI: 10.1523/eneuro.0196-19.2019]
Abstract
Model-based analyses open exciting opportunities for understanding neural information processing. In a commentary published in eNeuro, Gardner and Liu (2019) discuss the role of model specification in interpreting results derived from complex models of neural data. As a case study, they suggest that one such analysis, the inverted encoding model (IEM), should not be used to assay properties of "stimulus representations" because the ability to apply linear transformations at various stages of the analysis procedure renders results "arbitrary." Here, we argue that the specification of all models is arbitrary to the extent that an experimenter makes choices based on current knowledge of the model system. However, the results derived from any given model, such as the reconstructed channel response profiles obtained from an IEM analysis, are uniquely defined and are arbitrary only in the sense that changes in the model can predictably change results. IEM-based channel response profiles should therefore not be considered arbitrary when the model is clearly specified and guided by our best understanding of neural population representations in the brain regions being analyzed. Intuitions derived from this case study are important to consider when interpreting results from all model-based analyses, which are similarly contingent upon the specification of the models used.
Affiliation(s)
- Thomas C Sprague: Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA 93106-9660
- Geoffrey M Boynton: Department of Psychology, University of Washington, Seattle, WA 98195-1525
- John T Serences: Department of Psychology, University of California San Diego, La Jolla, CA 92093-0109; Neurosciences Graduate Program, University of California San Diego, La Jolla, CA 92093-0109; Kavli Foundation for the Brain and Mind, University of California San Diego, La Jolla, CA 92093-0126
27
Nobre AC, Stokes MG. Premembering Experience: A Hierarchy of Time-Scales for Proactive Attention. Neuron 2019; 104:132-146. [PMID: 31600510] [PMCID: PMC6873797] [DOI: 10.1016/j.neuron.2019.08.030]
Abstract
Memories are about the past, but they serve the future. Memory research often emphasizes the former aspect: focusing on the functions that re-constitute (re-member) experience and elucidating the various types of memories and their interrelations, timescales, and neural bases. Here we highlight the prospective nature of memory in guiding selective attention, focusing on functions that use previous experience to anticipate the relevant events about to unfold-to "premember" experience. Memories of various types and timescales play a fundamental role in guiding perception and performance adaptively, proactively, and dynamically. Consonant with this perspective, memories are often recorded according to expected future demands. Using working memory as an example, we consider how mnemonic content is selected and represented for future use. This perspective moves away from the traditional representational account of memory toward a functional account in which forward-looking memory traces are informationally and computationally tuned for interacting with incoming sensory signals to guide adaptive behavior.
Affiliation(s)
- Anna C Nobre
- Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK.
- Mark G Stokes
- Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
28
Geng JJ, Witkowski P. Template-to-distractor distinctiveness regulates visual search efficiency. Curr Opin Psychol 2019; 29:119-125. [PMID: 30743200 PMCID: PMC6625942 DOI: 10.1016/j.copsyc.2019.01.003]
Abstract
All models of attention include the concept of an attentional template (or a target or search template). The template is conceptualized as target information held in memory that is used for prioritizing sensory processing and determining if an object matches the target. It is frequently assumed that the template contains a veridical copy of the target. However, we review recent evidence showing that the template encodes a version of the target that is adapted to the current context (e.g. distractors, task, etc.); information held within the template may include only a subset of target features, real world knowledge, pre-existing perceptual biases, or even be a distorted version of the veridical target. We argue that the template contents are customized in order to maximize the ability to prioritize information that distinguishes targets from distractors. We refer to this as template-to-distractor distinctiveness and hypothesize that it contributes to visual search efficiency by exaggerating target-to-distractor dissimilarity.
Affiliation(s)
- Joy J Geng
- Center for Mind and Brain, University of California Davis, Davis, CA, 95616, United States; Department of Psychology, University of California Davis, Davis, CA, 95616, United States.
- Phillip Witkowski
- Center for Mind and Brain, University of California Davis, Davis, CA, 95616, United States; Department of Psychology, University of California Davis, Davis, CA, 95616, United States
29
Kozyrev V, Daliri MR, Schwedhelm P, Treue S. Strategic deployment of feature-based attentional gain in primate visual cortex. PLoS Biol 2019; 17:e3000387. [PMID: 31386656 PMCID: PMC6684042 DOI: 10.1371/journal.pbio.3000387]
Abstract
Attending to visual stimuli enhances the gain of those neurons in primate visual cortex that preferentially respond to the matching locations and features (on-target gain). Although this is well suited to enhance the neuronal representation of attended stimuli, it is nonoptimal under difficult discrimination conditions, as in the presence of similar distractors. In such cases, directing attention to neighboring neuronal populations (off-target gain) has been shown to be the most efficient strategy. Although such strategic deployment of attention has been demonstrated behaviorally, its underlying neural mechanisms are unknown. Here, we investigated how attention affects the population responses of neurons in the middle temporal (MT) visual area of rhesus monkeys to bidirectional movement inside the neurons' receptive field (RF). The monkeys were trained to focus their attention onto the fixation spot or to detect a direction or speed change in one of the motion directions (the "target"), ignoring the distractor motion. Population activity profiles were determined by systematically varying the patterns' directions while maintaining a constant angle between them. As expected, the response profiles show a peak for each of the 2 motion directions. Switching spatial attention from the fixation spot into the RF enhanced the peak representing the attended stimulus and suppressed the distractor representation. Importantly, the population data show a direction-dependent attentional modulation that does not peak at the target feature but rather along the slopes of the activity profile representing the target direction. Our results show that attentional gains are strategically deployed to optimize the discriminability of target stimuli, in line with an optimal gain mechanism proposed by Navalpakkam and Itti.
Affiliation(s)
- Vladislav Kozyrev
- Cognitive Neuroscience Laboratory, German Primate Center-Leibniz Institute for Primate Research, Goettingen, Germany; Bernstein Center for Computational Neuroscience, Goettingen, Germany; Laboratory of Systems Neuroscience and Imaging in Psychiatry (SNIP), University Medical Center Goettingen, Germany; Department of Cognitive Neurology, University Medical Center Goettingen, Germany
- Mohammad Reza Daliri
- Cognitive Neuroscience Laboratory, German Primate Center-Leibniz Institute for Primate Research, Goettingen, Germany; Bernstein Center for Computational Neuroscience, Goettingen, Germany; Neuroscience and Neuroengineering Research Lab., Biomedical Engineering Department, School of Electrical Engineering, Iran University of Science and Technology (IUST), Narmak, Tehran, Iran; Cognitive Neurobiology Lab., School of Cognitive Sciences (SCS), Institute for Research in Fundamental Sciences (IPM), Niavaran, Tehran, Iran
- Philipp Schwedhelm
- Cognitive Neuroscience Laboratory, German Primate Center-Leibniz Institute for Primate Research, Goettingen, Germany; Center for Mind and Brain Sciences, University of Trento, Italy; Institute of Molecular and Clinical Ophthalmology Basel (IOB), Switzerland; Functional Imaging Laboratory, German Primate Center-Leibniz Institute for Primate Research, Goettingen, Germany
- Stefan Treue
- Cognitive Neuroscience Laboratory, German Primate Center-Leibniz Institute for Primate Research, Goettingen, Germany; Bernstein Center for Computational Neuroscience, Goettingen, Germany; Leibniz ScienceCampus Primate Cognition, Goettingen, Germany; Faculty of Biology and Psychology, University of Goettingen, Germany
30
Rungratsameetaweemana N, Serences JT. Dissociating the impact of attention and expectation on early sensory processing. Curr Opin Psychol 2019; 29:181-186. [PMID: 31022561 DOI: 10.1016/j.copsyc.2019.03.014]
Abstract
Most studies that focus on understanding how top-down knowledge influences behavior attempt to manipulate either 'attention' or 'expectation' and often use the terms interchangeably. However, having expectations about statistical regularities in the environment and the act of willfully allocating attention to a subset of relevant sensory inputs are logically distinct processes that could, in principle, rely on similar neural mechanisms and influence information processing at the same stages. In support of this framework, several recent studies attempted to isolate expectation from attention, and advanced the idea that expectation and attention both modulate early sensory processing. Here, we argue that there is currently insufficient empirical evidence to support this conclusion, because previous studies have not fully isolated the effects of expectation and attention. Instead, most prior studies manipulated the relevance of different sensory features, and as a result, few existing findings speak directly to the potentially separable influences of expectation and attention on early sensory processing. Indeed, recent studies that attempt to more strictly isolate expectation and attention suggest that expectation has little influence on early sensory responses and primarily influences later 'decisional' stages of information processing.
Affiliation(s)
- John T Serences
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA 92093-0109, USA; Department of Psychology, University of California, San Diego, La Jolla, CA 92093-1090, USA; Kavli Foundation for the Brain and Mind, University of California, San Diego, La Jolla, CA 92093-0109, USA.
31
Gardner JL, Liu T. Inverted Encoding Models Reconstruct an Arbitrary Model Response, Not the Stimulus. eNeuro 2019; 6:ENEURO.0363-18.2019. [PMID: 30923743 PMCID: PMC6437661 DOI: 10.1523/eneuro.0363-18.2019]
Abstract
Probing how large populations of neurons represent stimuli is key to understanding sensory representations as many stimulus characteristics can only be discerned from population activity and not from individual single-units. Recently, inverted encoding models have been used to produce channel response functions from large spatial-scale measurements of human brain activity that are reminiscent of single-unit tuning functions and have been proposed to assay "population-level stimulus representations" (Sprague et al., 2018a). However, these channel response functions do not assay population tuning. We show by derivation that the channel response function is only determined up to an invertible linear transform. Thus, these channel response functions are arbitrary, one of an infinite family and therefore not a unique description of population representation. Indeed, simulations demonstrate that bimodal, even random, channel basis functions can account perfectly well for population responses without any underlying neural response units that are so tuned. However, the approach can be salvaged by extending it to reconstruct the stimulus, not the assumed model. We show that when this is done, even using bimodal and random channel basis functions, a unimodal function peaking at the appropriate value of the stimulus is recovered which can be interpreted as a measure of population selectivity. More precisely, the recovered function signifies how likely any value of the stimulus is, given the observed population response. Whether an analysis is recovering the hypothetical responses of an arbitrary model rather than assessing the selectivity of population representations is not an issue unique to the inverted encoding model and human neuroscience, but a general problem that must be confronted as more complex analyses intervene between measurement of population activity and presentation of data.
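The identifiability point at the heart of this abstract can be checked with a few lines of linear algebra. The sketch below is an illustration only, not the authors' code; the basis shape, dimensions, and random weights are all assumptions. It shows that re-fitting an inverted encoding model under an invertibly transformed channel basis fits the same noiseless data exactly as well, yet returns linearly transformed "channel response functions":

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_chan, n_stim = 50, 8, 40

stims = np.linspace(0, np.pi, n_stim, endpoint=False)
centers = np.linspace(0, np.pi, n_chan, endpoint=False)

# assumed channel basis: half-rectified raised cosines (a common, but arbitrary, choice)
C = np.maximum(np.cos(2 * (stims[:, None] - centers[None, :])), 0) ** 5  # (n_stim, n_chan)

W = rng.normal(size=(n_vox, n_chan))   # voxel-to-channel weights
R = W @ C.T                            # noiseless population responses (n_vox, n_stim)

# standard inversion: least-squares channel response estimates
C_hat = np.linalg.pinv(W) @ R          # recovers C.T exactly in this noiseless case

# any invertible transform A defines an alternative basis that fits R equally well...
A = rng.normal(size=(n_chan, n_chan))
W_alt = W @ np.linalg.inv(A)           # weights re-fit for the transformed basis A @ C.T
assert np.allclose(W_alt @ (C @ A.T).T, R)   # same fit to the measurements
# ...but its inverted channel response functions are a linear transform of the originals
C_hat_alt = np.linalg.pinv(W_alt) @ R
assert np.allclose(C_hat_alt, A @ C_hat)
```

Because both bases account for the data identically, the recovered channel profile is only one member of an infinite family, which is the paper's argument for reconstructing the stimulus itself rather than the assumed model response.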
Affiliation(s)
- Taosheng Liu
- Department of Psychology, Michigan State University, East Lansing, MI 48824
32
Gardner JL. Optimality and heuristics in perceptual neuroscience. Nat Neurosci 2019; 22:514-523. [PMID: 30804531 DOI: 10.1038/s41593-019-0340-4]
Abstract
The foundation for modern understanding of how we make perceptual decisions about what we see or where to look comes from considering the optimal way to perform these behaviors. While statistical computation is useful for deriving the optimal solution to a perceptual problem, optimality requires perfect knowledge of priors and often complex computation. Accumulating evidence, however, suggests that optimal perceptual goals can be achieved or approximated more simply by human observers using heuristic approaches. Perceptual neuroscientists captivated by optimal explanations of sensory behaviors will fail in their search for the neural circuits and cortical processes that implement an optimal computation whenever that behavior is actually achieved through heuristics. This article provides a cross-disciplinary review of decision-making with the aim of building perceptual theory that uses optimality to set the computational goals for perceptual behavior but, through consideration of ecological, computational, and energetic constraints, incorporates how these optimal goals can be achieved through heuristic approximation.
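The gap between an optimal computation and a serviceable heuristic can be made concrete with a standard toy case, reliability-weighted cue combination. This is a textbook example chosen for illustration only; the numbers are assumptions, not taken from the article:

```python
# two noisy, unbiased cues to the same stimulus value
mu1, var1 = 10.0, 1.0   # a precise cue
mu2, var2 = 14.0, 4.0   # a less reliable cue

# statistically optimal (maximum-likelihood) combination: weight each cue by its
# reliability 1/var; the combined variance is lower than either cue alone
w1 = (1 / var1) / (1 / var1 + 1 / var2)
est_opt = w1 * mu1 + (1 - w1) * mu2
var_opt = 1 / (1 / var1 + 1 / var2)    # = 0.8

# a simple heuristic: ignore reliabilities and average the cues
est_heur = (mu1 + mu2) / 2
var_heur = (var1 + var2) / 4           # = 1.25

assert var_opt < var_heur              # the optimum is strictly more precise,
assert abs(est_opt - est_heur) < 1.5   # yet the heuristic lands nearby
```

The optimal rule requires exact knowledge of both cue variances; the averaging heuristic requires none, which is the kind of trade-off between optimal computational goals and cheap approximations that the review develops.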
Affiliation(s)
- Justin L Gardner
- Department of Psychology, Stanford University, Stanford, California, USA.
33
Yu X, Geng JJ. The attentional template is shifted and asymmetrically sharpened by distractor context. J Exp Psychol Hum Percept Perform 2019; 45:336-353. [PMID: 30742475 DOI: 10.1037/xhp0000609]
Abstract
Theories of attention hypothesize the existence of an "attentional template" that contains target features in working or long-term memory. It is often assumed that the template contents are veridical, but recent studies have found that this is not true when the distractor set is linearly separable from the target (e.g., all distractors are "yellower" than an orange-colored target). In such cases, the target representation in memory shifts away from distractor features (Navalpakkam & Itti, 2007) and develops a sharper boundary with distractors (Geng, DiQuattro, & Helm, 2017). These changes in the target template are presumed to increase the target-to-distractor psychological distinctiveness and lead to better attentional selection, but it remains unclear what characteristics of the distractor context produce shifting versus sharpening. Here, we tested the hypothesis that the template representation shifts whenever the distractor set (i.e., all of the distractors) is linearly separable from the target but asymmetrical sharpening occurs only when linearly separable distractors are highly target-similar. Our results were consistent with this hypothesis, suggesting that template shifting and asymmetrical sharpening are two mechanisms that increase the representational distinctiveness of targets from expected distractors and improve visual search performance.
34
Lee J, Leonard CJ, Luck SJ, Geng JJ. Dynamics of Feature-based Attentional Selection during Color–Shape Conjunction Search. J Cogn Neurosci 2018; 30:1773-1787. [DOI: 10.1162/jocn_a_01318]
Abstract
Feature-based attentional selection is accomplished by increasing the gain of sensory neurons encoding target-relevant features while decreasing that of other features. But how do these mechanisms work when targets and distractors share features? We investigated this in a simplified color–shape conjunction search task using ERP components (N2pc, PD, and SPCN) that index lateralized attentional processing. In Experiment 1, we manipulated the presence and frequency of color distractors while holding shape distractors constant. We tested the hypothesis that the color distractor would capture attention, requiring active suppression such that processing of the target can continue. Consistent with this hypothesis, we found that color distractors consistently captured attention, as indexed by a significant N2pc, but were reactively suppressed (indexed by PD). Interestingly, when the color distractor was present, target processing was sustained (indexed by SPCN), suggesting that the dynamics of attentional competition involved distractor suppression interlinked with sustained target processing. In Experiment 2, we examined the contribution of shape to the dynamics of attentional competition under similar conditions. In contrast to color distractors, shape distractors did not reliably capture attention, even when the color distractor was very frequent and attending to target shape would be beneficial. Together, these results suggest that target-colored objects are prioritized during color–shape conjunction search, and the ability to select the target is delayed while target-colored distractors are actively suppressed.
Affiliation(s)
- Jeongmi Lee
- Korea Advanced Institute of Science and Technology, Daejeon, South Korea
35
Snyder AC, Yu BM, Smith MA. Distinct population codes for attention in the absence and presence of visual stimulation. Nat Commun 2018; 9:4382. [PMID: 30348942 PMCID: PMC6197235 DOI: 10.1038/s41467-018-06754-5]
Abstract
Visual neurons respond more vigorously to an attended stimulus than an unattended one. How the brain prepares for response gain in anticipation of that stimulus is not well understood. One prominent proposal is that anticipation is characterized by gain-like modulations of spontaneous activity similar to gains in stimulus responses. Here we test an alternative idea: anticipation is characterized by a mixture of both increases and decreases of spontaneous firing rates. Such a strategy would be adaptive as it supports a simple linear scheme for disentangling internal, modulatory signals from external, sensory inputs. We recorded populations of V4 neurons in monkeys performing an attention task, and found that attention states are signaled by different mixtures of neurons across the population in the presence or absence of a stimulus. Our findings support a move from a stimulation-invariant account of anticipation towards a richer view of attentional modulation in a diverse neuronal population. Attention affects stimulus response gain, but its impact without sensory drive is less known. Here, the authors show that attention is coded diversely in a population and is distinct between unstimulated and stimulated contexts, providing a contrast to normalized gain models of attention.
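Why a mixture of increases and decreases supports a "simple linear scheme" for disentangling modulatory and sensory signals can be sketched with a toy population. This is purely illustrative; the patterns and readout below are assumptions, not the study's analysis. If anticipation adds a fixed pattern that is not parallel to the stimulus-evoked pattern, one linear readout orthogonal to the stimulus direction recovers the attention state with or without stimulation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
attn_pat = rng.normal(size=n)   # anticipatory pattern: mixed rate increases AND decreases
stim_pat = rng.normal(size=n)   # stimulus-evoked pattern

def population(attended, stim_on, baseline=5.0):
    # firing rates = baseline + additive attention pattern + additive stimulus pattern
    return baseline + attended * attn_pat + stim_on * stim_pat

# linear readout: attention pattern with its stimulus-aligned component projected out
w = attn_pat - (attn_pat @ stim_pat) / (stim_pat @ stim_pat) * stim_pat

for stim_on in (0.0, 1.0):      # absence / presence of visual stimulation
    delta = w @ population(1.0, stim_on) - w @ population(0.0, stim_on)
    assert delta > 0            # attention state is read out identically in both cases
```

By contrast, a purely gain-like (multiplicative) modulation of stimulus responses contributes nothing when the stimulus is absent, so no single linear readout could report the attention state in both conditions; the mixed additive pattern avoids that limitation.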
Affiliation(s)
- Adam C Snyder
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, 15289, PA, USA; Department of Ophthalmology, University of Pittsburgh, Pittsburgh, 15213, PA, USA; Center for the Neural Basis of Cognition, Carnegie Mellon University and University of Pittsburgh, Pittsburgh, 15260, PA, USA
- Byron M Yu
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, 15289, PA, USA; Center for the Neural Basis of Cognition, Carnegie Mellon University and University of Pittsburgh, Pittsburgh, 15260, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, 15289, PA, USA
- Matthew A Smith
- Department of Ophthalmology, University of Pittsburgh, Pittsburgh, 15213, PA, USA; Center for the Neural Basis of Cognition, Carnegie Mellon University and University of Pittsburgh, Pittsburgh, 15260, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, 15213, PA, USA
36
Contextual-Dependent Attention Effect on Crowded Orientation Signals in Human Visual Cortex. J Neurosci 2018; 38:8433-8440. [PMID: 30120209 DOI: 10.1523/jneurosci.0805-18.2018]
Abstract
A target becomes hard to identify with nearby visual stimuli. This phenomenon, known as crowding, places a fundamental limit on conscious perception and object recognition. To understand the neural representation of crowded stimuli, we used fMRI and a forward encoding model to reconstruct the target-specific feature from multivoxel activation patterns evoked by orientation patches. Orientation-selective response profiles were constructed in V1-V4 for a target embedded in different contexts. Subjects of both sexes either directed their attention over all the orientation patches or selectively to the target. In the context with a weak crowding effect, attending to the target enhanced the orientation selectivity of the response profile; such effect increased along the visual pathway. In the context with a strong crowding effect, attending to the target enhanced the orientation selectivity of the response profile in the earlier visual area, but not in V4. The increase and decrease of orientation selectivity along the visual hierarchy demonstrate a contextual-dependent attention effect on crowded orientation signals: in the context with a weak crowding effect, selective attention gradually resolves the target from nearby distractors along the hierarchy; in the context with a strong crowding effect, while selective attention maintains the target feature in the earlier visual area, its effect decreases in the downstream area. Our findings reveal how the human visual system represents the target-specific feature at multiple stages under the limit of attention selection in a cluttered scene. SIGNIFICANCE STATEMENT Using fMRI and a forward encoding model, we reconstructed orientation-selective response profiles for a target embedded in crowded contexts. In the context with a weak crowding effect, attention gradually resolves the target from nearby distractors along the visual hierarchy. In the context with a strong crowding effect, while the feature of the target is preserved in the early visual cortex, it degrades in the later visual processing stage. The increase and decrease of orientation selectivity along the visual hierarchy reveal how the human visual system strives to represent the target-specific feature under the limit of attention selection in a cluttered scene.
37
Rungratsameetaweemana N, Itthipuripat S, Salazar A, Serences JT. Expectations Do Not Alter Early Sensory Processing during Perceptual Decision-Making. J Neurosci 2018; 38:5632-5648. [PMID: 29773755 PMCID: PMC8174137 DOI: 10.1523/jneurosci.3638-17.2018]
Abstract
Two factors play important roles in shaping perception: the allocation of selective attention to behaviorally relevant sensory features, and prior expectations about regularities in the environment. Signal detection theory proposes distinct roles of attention and expectation on decision-making such that attention modulates early sensory processing, whereas expectation influences the selection and execution of motor responses. Challenging this classic framework, recent studies suggest that expectations about sensory regularities enhance the encoding and accumulation of sensory evidence during decision-making. However, it is possible that these findings reflect well documented attentional modulations in visual cortex. Here, we tested this framework in a group of male and female human participants by examining how expectations about stimulus features (orientation and color) and expectations about motor responses impacted electroencephalography (EEG) markers of early sensory processing and the accumulation of sensory evidence during decision-making (the early visual negative potential and the centro-parietal positive potential, respectively). We first demonstrate that these markers are sensitive to changes in the amount of sensory evidence in the display. Then we show, counter to recent findings, that neither marker is modulated by either feature or motor expectations, despite a robust effect of expectations on behavior. Instead, violating expectations about likely sensory features and motor responses impacts posterior alpha and frontal theta oscillations, signals thought to index overall processing time and cognitive conflict. These findings are inconsistent with recent theoretical accounts and suggest instead that expectations primarily influence decisions by modulating post-perceptual stages of information processing. SIGNIFICANCE STATEMENT Expectations about likely features or motor responses play an important role in shaping behavior. Classic theoretical frameworks posit that expectations modulate decision-making by biasing late stages of decision-making including the selection and execution of motor responses. In contrast, recent accounts suggest that expectations also modulate decisions by improving the quality of early sensory processing. However, these effects could instead reflect the influence of selective attention. Here we examine the effect of expectations about sensory features and motor responses on a set of electroencephalography (EEG) markers that index early sensory processing and later post-perceptual processing. Counter to recent empirical results, expectations have little effect on early sensory processing but instead modulate EEG markers of time-on-task and cognitive conflict.
Affiliation(s)
- Sirawaj Itthipuripat
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, California 92093-0109
- Learning Institute, King Mongkut's University of Technology Thonburi, Bangkok, Thailand 10140
- Department of Psychology, Vanderbilt University, Nashville, Tennessee 37235
- Annalisa Salazar
- Department of Psychology, University of California, San Diego, La Jolla, California 92093-0109
- John T Serences
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, California 92093-0109
- Department of Psychology, University of California, San Diego, La Jolla, California 92093-0109
- Kavli Institute for Brain and Mind, University of California, San Diego, La Jolla, California 92093-0109
38
Inverted Encoding Models Assay Population-Level Stimulus Representations, Not Single-Unit Neural Tuning. eNeuro 2018; 5:eN-COM-0098-18. [PMID: 29876523 PMCID: PMC5987635 DOI: 10.1523/eneuro.0098-18.2018]
39
Building on a Solid Baseline: Anticipatory Biases in Attention. Trends Neurosci 2018; 41:120-122. [PMID: 29499772 PMCID: PMC6041469 DOI: 10.1016/j.tins.2018.01.005]
Abstract
A brain-imaging paper by Kastner and colleagues in 1999 was the first to demonstrate that merely focusing attention at a spatial location changed the baseline activity level in various regions of human visual cortex even before any stimuli appeared. The study provided a touchstone for investigating cognitive–sensory interactions and understanding the proactive endogenous signals that shape perception.
40
Inverted Encoding Models of Human Population Response Conflate Noise and Neural Tuning Width. J Neurosci 2017; 38:398-408. [PMID: 29167406 DOI: 10.1523/jneurosci.2453-17.2017]
Abstract
Channel-encoding models offer the ability to bridge different scales of neuronal measurement by interpreting population responses, typically measured with BOLD imaging in humans, as linear sums of groups of neurons (channels) tuned for visual stimulus properties. Inverting these models to form predicted channel responses from population measurements in humans seemingly offers the potential to infer neuronal tuning properties. Here, we test the ability to make inferences about neural tuning width from inverted encoding models. We examined contrast invariance of orientation selectivity in human V1 (both sexes) and found that inverting the encoding model resulted in channel response functions that became broader with lower contrast, thus apparently violating contrast invariance. Simulations showed that this broadening could be explained by contrast-invariant single-unit tuning with the measured decrease in response amplitude at lower contrast. The decrease in response lowers the signal-to-noise ratio of population responses that results in poorer population representation of orientation. Simulations further showed that increasing signal to noise makes channel response functions less sensitive to underlying neural tuning width, and in the limit of zero noise will reconstruct the channel function assumed by the model regardless of the bandwidth of single units. We conclude that our data are consistent with contrast-invariant orientation tuning in human V1. More generally, our results demonstrate that population selectivity measures obtained by encoding models can deviate substantially from the behavior of single units because they conflate neural tuning width and noise and are therefore better used to estimate the uncertainty of decoded stimulus properties. SIGNIFICANCE STATEMENT It is widely recognized that perceptual experience arises from large populations of neurons, rather than a few single units. Yet, much theory and experiment have examined links between single units and perception. Encoding models offer a way to bridge this gap by explicitly interpreting population activity as the aggregate response of many single neurons with known tuning properties. Here we use this approach to examine contrast-invariant orientation tuning of human V1. We show with experiment and modeling that due to lower signal to noise, contrast-invariant orientation tuning of single units manifests in population response functions that broaden at lower contrast, rather than remain contrast-invariant. These results highlight the need for explicit quantitative modeling when making a reverse inference from population response profiles to single-unit responses.
41
Myers NE, Chekroud SR, Stokes MG, Nobre AC. Benefits of flexible prioritization in working memory can arise without costs. J Exp Psychol Hum Percept Perform 2017; 44:398-411. [PMID: 28816476 PMCID: PMC5868459 DOI: 10.1037/xhp0000449]
Abstract
Most recent models conceptualize working memory (WM) as a continuous resource, divided up according to task demands. When an increasing number of items need to be remembered, each item receives a smaller chunk of the memory resource. These models predict that the allocation of attention to high-priority WM items during the retention interval should be a zero-sum game: improvements in remembering cued items come at the expense of uncued items because resources are dynamically transferred from uncued to cued representations. The current study provides empirical data challenging this model. Four precision retrocueing WM experiments assessed cued and uncued items on every trial, permitting a test for trade-offs of the memory resource. We found no evidence for trade-offs in memory across trials. Moreover, robust improvements in WM performance for cued items came at little or no cost to uncued items that were probed afterward, thereby increasing the net capacity of WM relative to neutral cueing conditions. An alternative mechanism of prioritization proposes that cued items are transferred into a privileged state within a response-gating bottleneck, in which an item uniquely controls upcoming behavior. We found evidence consistent with this alternative. When an uncued item was probed first, report of its orientation was biased away from the cued orientation to be subsequently reported. We interpret this bias as competition for behavioral control in the output-driving bottleneck. Other items in WM did not bias each other, making this result difficult to explain with a shared-resource model.
This study challenges the dominant model for how we remember and prioritize pieces of information over short intervals (working memory). The dominant view is that all items in working memory share a single resource, and that we can prioritize one item by redistributing resources in its favor. This view predicts that nonprioritized memories become lost or impoverished. By testing how well participants remember both prioritized and nonprioritized items, we show that this is not the case. Our findings suggest that memories can be prioritized flexibly without necessarily jeopardizing others that may still become relevant.
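The bias measure reported here (uncued reports repelled from the cued orientation) is typically computed as a signed circular error relative to the other item. A minimal sketch in Python, assuming a 180° orientation space; the function names and sign convention are illustrative, not the authors' code:

```python
import numpy as np

def signed_error_deg(report, target, period=180.0):
    # Smallest signed angular difference (report - target),
    # wrapped into (-period/2, period/2].
    d = (np.asarray(report, float) - np.asarray(target, float)) % period
    return np.where(d > period / 2, d - period, d)

def repulsion_from(report, target, reference, period=180.0):
    """Signed report bias relative to a reference item: positive means
    reports were pushed AWAY from the reference (e.g., the cued
    orientation), negative means they were attracted toward it."""
    err = signed_error_deg(report, target, period)
    side = np.sign(signed_error_deg(reference, target, period))
    return -err * side
```

Averaging this quantity across trials and testing it against zero is the usual way a repulsion (or attraction) bias like the one in this abstract is quantified.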
Affiliation(s)
- Mark G Stokes, Department of Experimental Psychology, University of Oxford
- Anna C Nobre, Department of Experimental Psychology, University of Oxford
42
Feature-Selective Attentional Modulations in Human Frontoparietal Cortex. J Neurosci 2016; 36:8188-99. [PMID: 27488638] [DOI: 10.1523/jneurosci.3935-15.2016]
Abstract
Control over visual selection has long been framed in terms of a dichotomy between "source" and "site," where top-down feedback signals originating in frontoparietal cortical areas modulate or bias sensory processing in posterior visual areas. This distinction is motivated in part by observations that frontoparietal cortical areas encode task-level variables (e.g., what stimulus is currently relevant or what motor outputs are appropriate), while posterior sensory areas encode continuous or analog feature representations. Here, we present evidence that challenges this distinction. We used fMRI, a roving searchlight analysis, and an inverted encoding model to examine representations of an elementary feature property (orientation) across the entire human cortical sheet while participants attended either the orientation or luminance of a peripheral grating. Orientation-selective representations were present in a multitude of visual, parietal, and prefrontal cortical areas, including portions of the medial occipital cortex, the lateral parietal cortex, and the superior precentral sulcus (thought to contain the human homolog of the macaque frontal eye fields). Additionally, representations in many, but not all, of these regions were stronger when participants were instructed to attend orientation relative to luminance. Collectively, these findings challenge models that posit a strict segregation between sources and sites of attentional control on the basis of representational properties by demonstrating that simple feature values are encoded by cortical regions throughout the visual processing hierarchy, and that representations in many of these areas are modulated by attention.
SIGNIFICANCE STATEMENT Influential models of visual attention posit a distinction between top-down control and bottom-up sensory processing networks. These models are motivated in part by demonstrations showing that frontoparietal cortical areas associated with top-down control represent abstract or categorical stimulus information, while visual areas encode parametric feature information. Here, we show that multivariate activity in human visual, parietal, and frontal cortical areas encodes representations of a simple feature property (orientation). Moreover, representations in several (though not all) of these areas were modulated by feature-based attention in a similar fashion. These results provide an important challenge to models that posit dissociable top-down control and sensory processing networks on the basis of representational properties.
43
Serences JT. Neural mechanisms of information storage in visual short-term memory. Vision Res 2016; 128:53-67. [PMID: 27668990] [DOI: 10.1016/j.visres.2016.09.010]
Abstract
The capacity to briefly memorize fleeting sensory information supports visual search and behavioral interactions with relevant stimuli in the environment. Traditionally, studies investigating the neural basis of visual short term memory (STM) have focused on the role of prefrontal cortex (PFC) in exerting executive control over what information is stored and how it is adaptively used to guide behavior. However, the neural substrates that support the actual storage of content-specific information in STM are more controversial, with some attributing this function to PFC and others to the specialized areas of early visual cortex that initially encode incoming sensory stimuli. In contrast to these traditional views, I will review evidence suggesting that content-specific information can be flexibly maintained in areas across the cortical hierarchy ranging from early visual cortex to PFC. While the factors that determine exactly where content-specific information is represented are not yet entirely clear, recognizing the importance of task-demands and better understanding the operation of non-spiking neural codes may help to constrain new theories about how memories are maintained at different resolutions, across different timescales, and in the presence of distracting information.
Affiliation(s)
- John T Serences, Department of Psychology, Neurosciences Graduate Program, and the Kavli Institute for Mind and Brain, University of California, San Diego, United States.
44
Attentional Effects on Phenomenological Appearance: How They Change with Task Instructions and Measurement Methods. PLoS One 2016; 11:e0152353. [PMID: 27022928] [PMCID: PMC4811431] [DOI: 10.1371/journal.pone.0152353]
Abstract
It has been reported that exogenous cues accentuate contrast appearance. This empirical finding is controversial because non-veridical perception challenges the idea that attention prioritizes processing resources to make perception better, and because philosophers have used the finding to challenge representational accounts of mental experience. The present experiments confirm that, when evaluated with comparison paradigms, exogenous cues increase apparent contrast. In addition, contrast appearance was also changed by simply changing the purpose of a secondary task. When comparison and discrimination reports were combined in a single experiment there was a behavioral dissociation: contrast appeared enhanced for comparison responses, but did not change for discrimination judgments, even when participants made both types of judgment for a single stimulus. That a single object can have multiple simultaneous appearances leads inescapably to the conclusion that our unitary mental experience is illusory.
45
Samaha J, Sprague TC, Postle BR. Decoding and Reconstructing the Focus of Spatial Attention from the Topography of Alpha-band Oscillations. J Cogn Neurosci 2016; 28:1090-7. [PMID: 27003790] [DOI: 10.1162/jocn_a_00955]
Abstract
Many aspects of perception and cognition are supported by activity in neural populations that are tuned to different stimulus features (e.g., orientation, spatial location, color). Goal-directed behavior, such as sustained attention, requires a mechanism for the selective prioritization of contextually appropriate representations. A candidate mechanism of sustained spatial attention is neural activity in the alpha band (8-13 Hz), whose power in the human EEG covaries with the focus of covert attention. Here, we applied an inverted encoding model to assess whether spatially selective neural responses could be recovered from the topography of alpha-band oscillations during spatial attention. Participants were cued to covertly attend to one of six spatial locations arranged concentrically around fixation while EEG was recorded. A linear classifier applied to EEG data during sustained attention demonstrated successful classification of the attended location from the topography of alpha power, although not from other frequency bands. We next sought to reconstruct the focus of spatial attention over time by applying inverted encoding models to the topography of alpha power and phase. Alpha power, but not phase, allowed for robust reconstructions of the specific attended location beginning around 450 msec postcue, an onset earlier than previous reports. These results demonstrate that posterior alpha-band oscillations can be used to track activity in feature-selective neural populations with high temporal precision during the deployment of covert spatial attention.
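The inverted encoding analysis described above — estimating a mapping from hypothetical location-tuned channels to electrode-level alpha power on training trials, then inverting it to recover channel responses on held-out trials — can be sketched briefly. A simplified illustration on synthetic data; the basis shape, channel count, and all names here are assumptions, not the authors' pipeline:

```python
import numpy as np

def make_basis(n_chans=6, n_locs=6):
    # Half-rectified cosine tuning curves, one channel centered on
    # each of six possible cued locations arranged around fixation.
    centers = np.arange(n_chans) * (360.0 / n_chans)
    angles = np.arange(n_locs) * (360.0 / n_locs)
    d = np.deg2rad(angles[:, None] - centers[None, :])
    return np.clip(np.cos(d), 0, None) ** 5        # (n_locs, n_chans)

def iem(train_data, train_locs, test_data):
    """Train: solve C @ W = data for channel-to-electrode weights W.
    Test: invert W to recover channel responses on held-out trials.
    train_data/test_data: (n_trials, n_electrodes) alpha power;
    train_locs: (n_trials,) attended-location indices 0..5."""
    C_train = make_basis()[train_locs]              # (n_trials, n_chans)
    W, *_ = np.linalg.lstsq(C_train, train_data, rcond=None)
    return test_data @ np.linalg.pinv(W)            # (n_trials, n_chans)
```

With this kind of model, the peak of the recovered channel response on held-out trials tracks the attended location, which is the logic behind the reconstructions reported here.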
46
Cheadle S, Egner T, Wyart V, Wu C, Summerfield C. Feature expectation heightens visual sensitivity during fine orientation discrimination. J Vis 2015; 15(14):14. [PMID: 26505967] [DOI: 10.1167/15.14.14]
Abstract
Attending to a stimulus enhances the sensitivity of perceptual decisions. However, it remains unclear how perceptual sensitivity varies according to whether a feature is expected or unexpected. Here, observers made fine discrimination judgments about the orientation of visual gratings embedded in low spatial-frequency noise, and psychophysical reverse correlation was used to estimate decision 'kernels' that revealed how visual features influenced choices. Orthogonal cues alerted subjects to which of two spatial locations was likely to be probed (spatial attention cue) and which of two oriented gratings was likely to occur (feature expectation cue). When an expected (relative to unexpected) feature occurred, decision kernels shifted away from the category boundary, allowing observers to capitalize on more informative, "off-channel" stimulus features. By contrast, the spatial attention cue had a multiplicative influence on decision kernels, consistent with an increase in response gain. Feature expectation thus heightens sensitivity to the most informative visual features, independent of selective attention.
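Psychophysical reverse correlation of the kind used here estimates the decision kernel by averaging the stimulus noise conditioned on the observer's choice. A minimal sketch with a simulated linear observer (the names and the simulation are illustrative, not the authors' analysis):

```python
import numpy as np

def decision_kernel(noise, choices):
    """Classification-image estimate: mean noise field on trials where
    the observer chose category A minus the mean on category-B trials.
    noise: (n_trials, n_features) noise added to each stimulus;
    choices: (n_trials,) boolean, True = chose A."""
    noise = np.asarray(noise, float)
    choices = np.asarray(choices, bool)
    return noise[choices].mean(0) - noise[~choices].mean(0)
```

For a linear observer and Gaussian noise this difference image is proportional to the internal template, so a shift of the kernel's peak away from the category boundary is the "off-channel" effect the abstract describes.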
47
Abstract
Although the authors do a valuable service by elucidating the pitfalls of inferring top-down effects, they overreach by claiming that vision is cognitively impenetrable. Their argument, and the entire question of cognitive penetrability, seems rooted in a discrete, stage-like model of the mind that is unsupported by neural data.
48
Reconstructing representations of dynamic visual objects in early visual cortex. Proc Natl Acad Sci U S A 2016; 113:1453-8. [PMID: 26712004] [DOI: 10.1073/pnas.1512144113]
Abstract
As raw sensory data are partial, our visual system extensively fills in missing details, creating enriched percepts based on incomplete bottom-up information. Despite evidence for internally generated representations at early stages of cortical processing, it is not known whether these representations include missing information of dynamically transforming objects. Long-range apparent motion (AM) provides a unique test case because objects in AM can undergo changes both in position and in features. Using fMRI and encoding methods, we found that the "intermediate" orientation of an apparently rotating grating, never presented in the retinal input but interpolated during AM, is reconstructed in population-level, feature-selective tuning responses in the region of early visual cortex (V1) that corresponds to the retinotopic location of the AM path. This neural representation is absent when AM inducers are presented simultaneously and when AM is visually imagined. Our results demonstrate dynamic filling-in in V1 for object features that are interpolated during kinetic transformations.
49
Pratte MS, Sy JL, Swisher JD, Tong F. Radial bias is not necessary for orientation decoding. Neuroimage 2016; 127:23-33. [PMID: 26666900] [DOI: 10.1016/j.neuroimage.2015.11.066]
Abstract
Multivariate pattern analysis can be used to decode the orientation of a viewed grating from fMRI signals in early visual areas. Although some studies have reported identifying multiple sources of the orientation information that make decoding possible, a recent study argued that orientation decoding is only possible because of a single source: a coarse-scale retinotopically organized preference for radial orientations. Here we aim to resolve these discrepant findings. We show that there were subtle, but critical, experimental design choices that led to the erroneous conclusion that a radial bias is the only source of orientation information in fMRI signals. In particular, we show that the reliance on a fast temporal-encoding paradigm for spatial mapping can be problematic, as effects of space and time become conflated and lead to distorted estimates of a voxel's orientation or retinotopic preference. When we implement minor changes to the temporal paradigm or to the visual stimulus itself, by slowing the periodic rotation of the stimulus or by smoothing its contrast-energy profile, we find significant evidence of orientation information that does not originate from radial bias. In an additional block-paradigm experiment where space and time were not conflated, we apply a formal model comparison approach and find that many voxels exhibit more complex tuning properties than predicted by radial bias alone or in combination with other known coarse-scale biases. Our findings support the conclusion that radial bias is not necessary for orientation decoding. In addition, our study highlights potential limitations of using temporal phase-encoded fMRI designs for characterizing voxel tuning properties.
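The orientation decoding at issue in this paper is usually carried out with a linear classifier over voxel activity patterns. A minimal nearest-centroid sketch on synthetic data; this illustrates the machinery only (all names are illustrative), since the paper's argument concerns where the decodable signal originates, not the classifier:

```python
import numpy as np

def decode_orientation(patterns, labels, train_idx, test_idx):
    """Nearest-centroid decoder: assign each test pattern to the class
    whose training-set mean pattern it correlates with most strongly.
    patterns: (n_trials, n_voxels); labels: (n_trials,) orientation ids."""
    classes = np.unique(labels[train_idx])
    centroids = np.stack([patterns[train_idx][labels[train_idx] == c].mean(0)
                          for c in classes])
    # Pearson correlation between each test pattern and each centroid
    z = lambda x: (x - x.mean(-1, keepdims=True)) / x.std(-1, keepdims=True)
    corr = z(patterns[test_idx]) @ z(centroids).T / patterns.shape[1]
    return classes[np.argmax(corr, axis=1)]
```

Cross-validating a decoder like this while manipulating the stimulus or removing coarse-scale components is, in outline, how one asks whether radial bias is the only usable source of orientation information.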
Affiliation(s)
- Michael S Pratte, Psychology Department, Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN, USA; Department of Psychology, Mississippi State University, Starkville, MS, USA.
- Jocelyn L Sy, Psychology Department, Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN, USA
- Jascha D Swisher, Psychology Department, Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN, USA
- Frank Tong, Psychology Department, Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN, USA
50
|
Garner KG, Matthews N, Remington RW, Dux PE. Transferability of Training Benefits Differs across Neural Events: Evidence from ERPs. J Cogn Neurosci 2015; 27:2079-94. [DOI: 10.1162/jocn_a_00833]
Abstract
Humans can show striking capacity limitations in sensorimotor processing. Fortunately, these limitations can be attenuated with training. However, less fortunately, training benefits often remain limited to trained tasks. Recent behavioral observations suggest that the extent to which training transfers may depend on the specific stage of information processing that is being executed. Training benefits for a task that taps the consolidation of sensory information (sensory encoding) transfer to new stimulus–response mappings, whereas benefits for selecting an appropriate action (decision-making/response selection) remain specific to the trained mappings. Therefore, training may have dissociable influences on the neural events underlying subsequent sensorimotor processing stages. Here, we used EEG to investigate this possibility. In a pretraining baseline session, participants completed two four-alternative-choice response time tasks, presented both as a single task and as part of a dual task (with another task). The training group completed a further 3,000 training trials on one of the four-alternative-choice tasks. Hence, one task became trained, whereas the other remained untrained. At test, a negative-going component that is sensitive to sensory-encoding demands (N2) showed increased amplitudes and reduced latencies for trained and untrained mappings relative to a no-train control group. In contrast, the onset of the stimulus-locked lateralized readiness potential, a component that reflects the activation of motor plans, was reduced only for tasks that employed trained stimulus–response mappings, relative to untrained stimulus–response mappings and controls. Collectively, these results show that training benefits are dissociable for the brain events that reflect distinct sensorimotor processing stages.