1
Yashiro R, Sawayama M, Amano K. Decoding time-resolved neural representations of orientation ensemble perception. Front Neurosci 2024; 18:1387393. PMID: 39148524; PMCID: PMC11325722; DOI: 10.3389/fnins.2024.1387393.
Abstract
The visual system can compute summary statistics of several visual elements at a glance. Numerous studies have shown that an ensemble of different visual features can be perceived over 50-200 ms; however, the time point at which the visual system forms an accurate ensemble representation associated with an individual's perception remains unclear. This is mainly because most previous studies have not fully addressed the time-resolved neural representations that occur during ensemble perception, in particular lacking quantification of the representational strength of ensembles and their correlation with behavior. Here, we conducted orientation ensemble discrimination tasks with electroencephalogram (EEG) recordings to decode orientation representations over time while human observers discriminated the average of multiple orientations. We modeled EEG signals as a linear sum of hypothetical orientation channel responses and inverted this model to quantify the representational strength of the orientation ensemble. Our analysis using this inverted encoding model revealed stronger representations of the average orientation over 400-700 ms. We also correlated the orientation representation estimated from EEG signals with the perceived average orientation reported in an ensemble discrimination task using the method of adjustment. We found that the estimated orientation at approximately 600-700 ms correlated significantly with individual differences in perceived average orientation. These results suggest that although ensembles can be computed quickly and roughly, the visual system may gradually compute an orientation ensemble over several hundred milliseconds to achieve a more accurate ensemble representation.
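The inverted encoding model described in this abstract follows a standard two-step recipe: first estimate a weight matrix mapping hypothetical orientation channels to sensors on training data, then invert that mapping on test data to recover channel responses. The sketch below illustrates the recipe on simulated data only; it is not the authors' pipeline, and the rectified-cosine channel basis, channel count, and noise level are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_chan, n_sensor, n_trials = 8, 32, 200
centers = np.arange(0, 180, 180 / n_chan)  # hypothetical channel centers (deg)

def channel_responses(theta_deg):
    """Assumed basis: rectified cosine in doubled angle (orientation is 180-deg periodic)."""
    d = np.deg2rad(2 * (np.asarray(theta_deg)[:, None] - centers[None, :]))
    return np.maximum(np.cos(d), 0.0) ** 6

# Simulate "EEG": each sensor is a linear sum of channel responses plus noise.
theta_train = rng.uniform(0, 180, n_trials)
W_true = rng.normal(size=(n_chan, n_sensor))
B_train = channel_responses(theta_train) @ W_true + 0.1 * rng.normal(size=(n_trials, n_sensor))

# Step 1 (encoding): least-squares estimate of channel-to-sensor weights.
W_hat = np.linalg.lstsq(channel_responses(theta_train), B_train, rcond=None)[0]

# Step 2 (inversion): recover channel responses on held-out trials.
theta_test = np.array([45.0, 120.0])
B_test = channel_responses(theta_test) @ W_true + 0.1 * rng.normal(size=(2, n_sensor))
C_hat = np.linalg.lstsq(W_hat.T, B_test.T, rcond=None)[0].T

# Read out the represented orientation via a circular mean over channel centers.
z = (C_hat * np.exp(1j * np.deg2rad(2 * centers))).sum(axis=1)
theta_hat = (np.rad2deg(np.angle(z)) / 2) % 180
```

At this low noise level the circular readout should land within a few degrees of the simulated 45° and 120° test orientations; the paper applies the same inversion at each EEG time point to trace representational strength over time.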
Affiliation(s)
- Ryuto Yashiro
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Masataka Sawayama
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Kaoru Amano
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
2
Barbieri R, Töpfer FM, Soch J, Bogler C, Sprekeler H, Haynes JD. Encoding of continuous perceptual choices in human early visual cortex. Front Hum Neurosci 2023; 17:1277539. PMID: 38021249; PMCID: PMC10679739; DOI: 10.3389/fnhum.2023.1277539.
Abstract
Introduction: Research on the neural mechanisms of perceptual decision-making has typically focused on simple categorical choices, say between two alternative motion directions. Studies on such discrete alternatives have often suggested that choices are encoded either in a motor-based or in an abstract, categorical format in regions beyond sensory cortex. Methods: In this study, we used motion stimuli that could vary anywhere between 0° and 360° to assess how the brain encodes choices for features that span the full sensory continuum. We employed a combination of neuroimaging and encoding models based on Gaussian process regression to assess how either stimuli or choices were encoded in brain responses. Results: We found that single-voxel tuning patterns could be used to reconstruct the trial-by-trial physical direction of motion as well as the participants' continuous choices. Importantly, these continuous choice signals were primarily observed in early visual areas. The tuning properties in this region generalized between choice encoding and stimulus encoding, even for reports that reflected pure guessing. Discussion: We found little information related to the decision outcome in regions beyond visual cortex, such as parietal cortex, possibly because our task did not involve differential motor preparation. This could suggest that decisions for continuous stimuli can take place already in sensory brain regions, potentially using mechanisms similar to the sensory recruitment in visual working memory.
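This study reconstructs continuous 0°-360° choices with Gaussian process regression over voxel tuning patterns. As a much simpler stand-in for the same core idea (reading a continuous direction out of a population of tuned responses), here is a population-vector sketch on simulated voxels; the von-Mises-style tuning curves, voxel count, and noise level are invented for illustration and are not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical voxels with von-Mises-like tuning over motion direction (0-360 deg).
n_vox = 200
pref = rng.uniform(0, 360, n_vox)  # assumed preferred directions

def responses(direction_deg):
    """Tuned response of every voxel to one motion direction, plus noise."""
    d = np.deg2rad(direction_deg - pref)
    return np.exp(2.0 * (np.cos(d) - 1.0)) + 0.05 * rng.normal(size=n_vox)

# Population-vector readout: each voxel votes for its preferred direction,
# weighted by its response; the resultant angle is the decoded direction.
true_direction = 210.0
r = responses(true_direction)
z = np.sum(r * np.exp(1j * np.deg2rad(pref)))
decoded = np.rad2deg(np.angle(z)) % 360
```

Because the readout is an angle rather than a category label, the same machinery can decode either the physical stimulus or the reported choice, which is what lets the study compare the two encodings directly.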
Affiliation(s)
- Riccardo Barbieri
- Bernstein Center for Computational Neuroscience and Berlin Center for Advanced Neuroimaging, Department of Neurology, Charité – Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health (BIH), Berlin, Germany
- Felix M. Töpfer
- Bernstein Center for Computational Neuroscience and Berlin Center for Advanced Neuroimaging, Department of Neurology, Charité – Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health (BIH), Berlin, Germany
- Joram Soch
- Bernstein Center for Computational Neuroscience and Berlin Center for Advanced Neuroimaging, Department of Neurology, Charité – Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health (BIH), Berlin, Germany
- German Center for Neurodegenerative Diseases, Göttingen, Germany
- Carsten Bogler
- Bernstein Center for Computational Neuroscience and Berlin Center for Advanced Neuroimaging, Department of Neurology, Charité – Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health (BIH), Berlin, Germany
- Henning Sprekeler
- Department for Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- John-Dylan Haynes
- Bernstein Center for Computational Neuroscience and Berlin Center for Advanced Neuroimaging, Department of Neurology, Charité – Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health (BIH), Berlin, Germany
- Berlin School of Mind and Brain and Institute of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
3
Huang L, Wang J, He Q, Li C, Sun Y, Seger CA, Zhang X. A source for category-induced global effects of feature-based attention in human prefrontal cortex. Cell Rep 2023; 42:113080. PMID: 37659080; DOI: 10.1016/j.celrep.2023.113080.
Abstract
Global effects of feature-based attention (FBA) are generally limited to stimuli sharing the same or similar features, as hypothesized in the "feature-similarity gain model." Visual perception, however, often reflects categories acquired via experience/learning; whether the global-FBA effect can be induced by categorized features remains unclear. Here, human subjects were trained to classify motion directions into two discrete categories and perform a classical motion-based attention task. We found a category-induced global-FBA effect in both the middle temporal area (MT+) and frontoparietal areas, where attention to a motion direction globally spread to unattended motion directions within the same category, but not to those in a different category. Effective connectivity analysis showed that the category-induced global-FBA effect in MT+ was driven by feedback from the inferior frontal junction (IFJ). Altogether, our study reveals a category-induced global-FBA effect and identifies a source for this effect in human prefrontal cortex, implying that FBA is of greater ecological significance than previously thought.
Affiliation(s)
- Ling Huang
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China; School of Psychology, Center for Studies of Psychological Application, Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China
- Jingyi Wang
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China; School of Psychology, Center for Studies of Psychological Application, Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China
- Qionghua He
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China; School of Psychology, Center for Studies of Psychological Application, Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China
- Chu Li
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China; School of Psychology, Center for Studies of Psychological Application, Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China
- Yueling Sun
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China; School of Psychology, Center for Studies of Psychological Application, Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China
- Carol A Seger
- School of Psychology, Center for Studies of Psychological Application, Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China; Department of Psychology, Colorado State University, Fort Collins, CO 80523, USA
- Xilin Zhang
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China; School of Psychology, Center for Studies of Psychological Application, Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China
4
Chen J, Golomb JD. Dynamic neural reconstructions of attended object location and features using EEG. J Neurophysiol 2023; 130:139-154. PMID: 37283457; PMCID: PMC10393364; DOI: 10.1152/jn.00180.2022.
Abstract
Attention allows us to select relevant and ignore irrelevant information from our complex environments. What happens when attention shifts from one item to another? To answer this question, it is critical to have tools that accurately recover neural representations of both feature and location information with high temporal resolution. In the present study, we used human electroencephalography (EEG) and machine learning to explore how neural representations of object features and locations update across dynamic shifts of attention. We demonstrate that EEG can be used to create simultaneous time courses of neural representations of attended features (time point-by-time point inverted encoding model reconstructions) and attended location (time point-by-time point decoding) during both stable periods and across dynamic shifts of attention. Each trial presented two oriented gratings that flickered at the same frequency but had different orientations; participants were cued to attend one of them and on half of trials received a shift cue midtrial. We trained models on a stable period from Hold attention trials and then reconstructed/decoded the attended orientation/location at each time point on Shift attention trials. Our results showed that both feature reconstruction and location decoding dynamically track the shift of attention, and that there may be time points during the shifting of attention when 1) feature and location representations become uncoupled and 2) both the previously attended and currently attended orientations are represented with roughly equal strength. These results offer insight into our understanding of attentional shifts, and the noninvasive techniques developed in the present study lend themselves well to a wide variety of future applications.

NEW & NOTEWORTHY: We used human EEG and machine learning to reconstruct neural response profiles during dynamic shifts of attention. Specifically, we demonstrated that we could simultaneously read out both location and feature information from an attended item in a multistimulus display. Moreover, we examined how that readout evolves over time during the dynamic process of attentional shifts. These results provide insight into our understanding of attention, and this technique carries substantial potential for versatile extensions and applications.
Affiliation(s)
- Jiageng Chen
- Department of Psychology, The Ohio State University, Columbus, Ohio, United States
- Julie D Golomb
- Department of Psychology, The Ohio State University, Columbus, Ohio, United States
5
Fulvio JM, Yu Q, Postle BR. Strategic control of location and ordinal context in visual working memory. Cereb Cortex 2023; 33:8821-8834. PMID: 37164767; PMCID: PMC10321086; DOI: 10.1093/cercor/bhad164.
Abstract
Working memory (WM) requires encoding stimulus identity and context (e.g., where or when stimuli were encountered). To explore the neural bases of the strategic control of context binding in WM, we acquired fMRI while subjects performed delayed recognition of three orientation patches presented serially and at different locations. The recognition probe was an orientation patch with a superimposed digit, and pretrial instructions directed subjects to respond according to its location ("location-relevant"), to the ordinal position corresponding to its digit ("order-relevant"), or to just its orientation relative to all three samples ("context-irrelevant"). Delay-period signal in PPC was greater for context-relevant than for context-irrelevant trials, and multivariate decoding revealed strong sensitivity to context binding requirements (relevant vs. irrelevant) and to context domain (location- vs. order-relevant) in both occipital cortex and PPC. At recognition, multivariate inverted encoding modeling revealed markedly different patterns in these two regions, suggesting different context-processing functions. In occipital cortex, an active representation of the location of each of the three samples was reinstated regardless of trial type. The pattern in PPC, by contrast, suggested a trial type-dependent filtering of sample information. These results indicate that PPC exerts strategic control over the representation of stimulus context in visual WM.
Affiliation(s)
- Jacqueline M Fulvio
- Department of Psychology, University of Wisconsin–Madison, 1202 West Johnson St., Madison, WI 53706, USA
- Qing Yu
- Department of Psychiatry, University of Wisconsin–Madison, 6001 Research Park Blvd, Madison, WI 53719, USA
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, 320 Yue Yang Road, Shanghai 200031, P.R. China
- Bradley R Postle
- Department of Psychology, University of Wisconsin–Madison, 1202 West Johnson St., Madison, WI 53706, USA
- Department of Psychiatry, University of Wisconsin–Madison, 6001 Research Park Blvd, Madison, WI 53719, USA
6
Bullock T, Pickett K, Salimian A, Gregory C, MacLean MH, Giesbrecht B. Eye movements disrupt EEG alpha-band coding of behaviorally relevant and irrelevant spatial locations held in working memory. J Neurophysiol 2023; 129:1191-1211. PMID: 36988227; PMCID: PMC10190932; DOI: 10.1152/jn.00302.2021.
Abstract
Oscillations in the alpha frequency band (∼8-12 Hz) of the human electroencephalogram play an important role in supporting selective attention to visual items and maintaining their spatial locations in working memory (WM). Recent findings suggest that spatial information maintained in alpha is modulated by interruptions to continuous visual input, such that attention shifts, eye closure, and backward masking of the encoded item cause reconstructed representations of remembered locations to become degraded. Here, we investigated how another common visual disruption, eye movements, modulates reconstructions of behaviorally relevant and irrelevant item locations held in WM. Participants completed a delayed estimation task, where they encoded and recalled either the location or color of an object after a brief retention period. During retention, participants either fixated at the center or executed a sequence of eye movements. Electroencephalography (EEG) was recorded at the scalp and eye position was monitored with an eye tracker. Inverted encoding modeling (IEM) was applied to reconstruct location-selective responses across multiple frequency bands during encoding and retention. Location-selective responses were successfully reconstructed from alpha activity during retention when participants fixated at the center, but these reconstructions were disrupted during eye movements. Recall performance decreased in the eye-movement conditions but remained largely intact, and further analyses revealed that under specific task conditions it was possible to reconstruct retained location information from lower frequency bands (1-4 Hz) during eye movements. These results suggest that eye movements disrupt maintained spatial information in alpha in a manner consistent with other acute interruptions to continuous visual input, but this information may be represented in other frequency bands.

NEW & NOTEWORTHY: Neural oscillations in the alpha frequency band support selective attention to visual items and maintenance of their spatial locations in human working memory. Here, we investigate how eye movements disrupt representations of item locations held in working memory. Although it was not possible to recover item locations from alpha during eye movements, retained location information could be recovered from select lower frequency bands. This suggests that during eye movements, stored spatial information may be represented in other frequencies.
Affiliation(s)
- Tom Bullock
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, California, United States
- Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, California, United States
- Kamryn Pickett
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, California, United States
- Anabel Salimian
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, California, United States
- Caitlin Gregory
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, California, United States
- Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, California, United States
- Mary H MacLean
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, California, United States
- Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, California, United States
- Barry Giesbrecht
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, California, United States
- Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, California, United States
- Interdepartmental Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara, Santa Barbara, California, United States
7
Sadil P, Cowell RA, Huber DE. A modeling framework for determining modulation of neural-level tuning from non-invasive human fMRI data. Commun Biol 2022; 5:1244. PMID: 36376370; PMCID: PMC9663541; DOI: 10.1038/s42003-022-04000-9.
Abstract
Many neuroscience theories assume that tuning modulation of individual neurons underlies changes in human cognition. However, non-invasive fMRI lacks sufficient resolution to visualize this modulation. To address this limitation, we developed an analysis framework called Inferring Neural Tuning Modulation (INTM) for "peering inside" voxels. Precise specification of neural tuning from the BOLD signal is not possible. Instead, INTM compares theoretical alternatives for the form of neural tuning modulation that might underlie changes in BOLD across experimental conditions. The most likely form is identified via formal model comparison, with assumed parametric Normal tuning functions, followed by a non-parametric check of conclusions. We validated the framework by successfully identifying a well-established form of modulation: visual contrast-induced multiplicative gain for orientation tuned neurons. INTM can be applied to any experimental paradigm testing several points along a continuous feature dimension (e.g., direction of motion, isoluminant hue) across two conditions (e.g., with/without attention, before/after learning).
8
Henderson MM, Rademaker RL, Serences JT. Flexible utilization of spatial- and motor-based codes for the storage of visuo-spatial information. eLife 2022; 11:e75688. PMID: 35522567; PMCID: PMC9075954; DOI: 10.7554/elife.75688.
Abstract
Working memory provides flexible storage of information in service of upcoming behavioral goals. Some models propose specific fixed loci and mechanisms for the storage of visual information in working memory, such as sustained spiking in parietal and prefrontal cortex during working memory maintenance. An alternative view is that information can be remembered in a flexible format that best suits current behavioral goals. For example, remembered visual information might be stored in sensory areas for easier comparison to future sensory inputs, or might be re-coded into a more abstract action-oriented format and stored in motor areas. Here, we tested this hypothesis using a visuo-spatial working memory task where the required behavioral response was either known or unknown during the memory delay period. Using functional magnetic resonance imaging (fMRI) and multivariate decoding, we found that there was less information about remembered spatial position in early visual and parietal regions when the required response was known versus unknown. Furthermore, a representation of the planned motor action emerged in primary somatosensory, primary motor, and premotor cortex during the same task condition where spatial information was reduced in early visual cortex. These results suggest that the neural networks supporting working memory can be strategically reconfigured depending on specific behavioral requirements during a canonical visual working memory paradigm.
Affiliation(s)
- Margaret M Henderson
- Neurosciences Graduate Program, University of California, San Diego, San Diego, United States
- Department of Machine Learning, Carnegie Mellon University, Pittsburgh, United States
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, United States
- Rosanne L Rademaker
- Department of Psychology, University of California, San Diego, San Diego, United States
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany
- John T Serences
- Neurosciences Graduate Program, University of California, San Diego, San Diego, United States
- Department of Psychology, University of California, San Diego, San Diego, United States
- Kavli Foundation for the Brain and Mind, University of California, San Diego, San Diego, United States
9
Kumar M, Anderson MJ, Antony JW, Baldassano C, Brooks PP, Cai MB, Chen PHC, Ellis CT, Henselman-Petrusek G, Huberdeau D, Hutchinson JB, Li YP, Lu Q, Manning JR, Mennen AC, Nastase SA, Richard H, Schapiro AC, Schuck NW, Shvartsman M, Sundaram N, Suo D, Turek JS, Turner D, Vo VA, Wallace G, Wang Y, Williams JA, Zhang H, Zhu X, Capotă M, Cohen JD, Hasson U, Li K, Ramadge PJ, Turk-Browne NB, Willke TL, Norman KA. BrainIAK: The Brain Imaging Analysis Kit. Aperture Neuro 2022; 1. PMID: 35939268; PMCID: PMC9351935; DOI: 10.52294/31bb5b68-2184-411b-8c00-a1dacb61e1da.
Abstract
Functional magnetic resonance imaging (fMRI) offers a rich source of data for studying the neural basis of cognition. Here, we describe the Brain Imaging Analysis Kit (BrainIAK), an open-source, free Python package that provides computationally optimized solutions to key problems in advanced fMRI analysis. A variety of techniques are presently included in BrainIAK: intersubject correlation (ISC) and intersubject functional connectivity (ISFC), functional alignment via the shared response model (SRM), full correlation matrix analysis (FCMA), a Bayesian version of representational similarity analysis (BRSA), event segmentation using hidden Markov models, topographic factor analysis (TFA), inverted encoding models (IEMs), an fMRI data simulator that uses noise characteristics from real data (fmrisim), and some emerging methods. These techniques have been optimized to leverage the efficiencies of high-performance computing (HPC) clusters, and the same code can be seamlessly transferred from a laptop to a cluster. For each of the aforementioned techniques, we describe the data analysis problem that the technique is meant to solve and how it solves that problem; we also include an example Jupyter notebook for each technique and an annotated bibliography of papers that have used and/or described that technique. In addition to the sections describing various analysis techniques in BrainIAK, we have included sections describing the future applications of BrainIAK to real-time fMRI, tutorials that we have developed and shared online to facilitate learning the techniques in BrainIAK, computational innovations in BrainIAK, and how to contribute to BrainIAK. We hope that this manuscript helps readers to understand how BrainIAK might be useful in their research.
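Of the techniques listed in this abstract, intersubject correlation is the simplest to illustrate. The following plain-NumPy sketch shows the leave-one-out form of ISC (correlate each subject's time course with the average of the remaining subjects) on simulated data; it conveys the idea only and is not BrainIAK's optimized implementation or its API.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated single-voxel time courses: a shared stimulus-driven signal
# plus independent noise per subject (subjects x timepoints).
n_subj, n_t = 5, 300
shared = rng.normal(size=n_t)
data = shared[None, :] + 0.5 * rng.normal(size=(n_subj, n_t))

def leave_one_out_isc(x):
    """Correlate each subject with the mean time course of all other subjects."""
    n = x.shape[0]
    out = np.empty(n)
    for s in range(n):
        others = x[np.arange(n) != s].mean(axis=0)
        out[s] = np.corrcoef(x[s], others)[0, 1]
    return out

isc = leave_one_out_isc(data)  # one correlation value per subject
```

With a strong shared signal, every subject's ISC value is high; BrainIAK applies this computation efficiently across all voxels and adds permutation-based significance testing.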
Affiliation(s)
- Manoj Kumar
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Michael J. Anderson
- Work done while at Parallel Computing Lab, Intel Corporation, Santa Clara, CA
- James W. Antony
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Paula P. Brooks
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Ming Bo Cai
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Japan
- Po-Hsuan Cameron Chen
- Work done while at Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Y. Peeta Li
- Department of Psychology, University of Oregon, Eugene, OR
- Qihong Lu
- Department of Psychology, Princeton University, Princeton, NJ
- Jeremy R. Manning
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH
- Anne C. Mennen
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Samuel A. Nastase
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Hugo Richard
- Parietal Team, Inria, Neurospin, CEA, Université Paris-Saclay, France
- Anna C. Schapiro
- Department of Psychology, University of Pennsylvania, Philadelphia, PA
- Nicolas W. Schuck
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Berlin, Germany; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin, Germany
- Michael Shvartsman
- Work done while at Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Narayanan Sundaram
- Work done while at Parallel Computing Lab, Intel Corporation, Santa Clara, CA
- Daniel Suo
- Department of Computer Science, Princeton University, Princeton, NJ
- Javier S. Turek
- Brain-Inspired Computing Lab, Intel Corporation, Hillsboro, OR
- David Turner
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Vy A. Vo
- Brain-Inspired Computing Lab, Intel Corporation, Hillsboro, OR
- Grant Wallace
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Yida Wang
- Work done while at Parallel Computing Lab, Intel Corporation, Santa Clara, CA
- Jamal A. Williams
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ; Department of Psychology, Princeton University, Princeton, NJ
- Hejia Zhang
- Work done while at Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Xia Zhu
- Brain-Inspired Computing Lab, Intel Corporation, Hillsboro, OR
- Mihai Capotă
- Brain-Inspired Computing Lab, Intel Corporation, Hillsboro, OR
- Jonathan D. Cohen
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ; Department of Psychology, Princeton University, Princeton, NJ
- Uri Hasson
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ; Department of Psychology, Princeton University, Princeton, NJ
- Kai Li
- Department of Computer Science, Princeton University, Princeton, NJ
- Peter J. Ramadge
- Department of Electrical Engineering, and the Center for Statistics and Machine Learning, Princeton University, Princeton, NJ
- Kenneth A. Norman
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ; Department of Psychology, Princeton University, Princeton, NJ
10
Vo VA, Sutterer DW, Foster JJ, Sprague TC, Awh E, Serences JT. Shared Representational Formats for Information Maintained in Working Memory and Information Retrieved from Long-Term Memory. Cereb Cortex 2021; 32:1077-1092. PMID: 34428283; DOI: 10.1093/cercor/bhab267.
Abstract
Current theories propose that the short-term retention of information in working memory (WM) and the recall of information from long-term memory (LTM) are supported by overlapping neural mechanisms in occipital and parietal cortex. However, the extent of the shared representations between WM and LTM is unclear. We designed a spatial memory task that allowed us to directly compare the representations of remembered spatial information in WM and LTM with carefully matched behavioral response precision between tasks. Using multivariate pattern analyses on functional magnetic resonance imaging data, we show that visual memories were represented in a sensory-like code in both memory tasks across retinotopic regions in occipital and parietal cortex. Regions in lateral parietal cortex also encoded remembered locations in both tasks, but in a format that differed from sensory-evoked activity. These results suggest a striking correspondence in the format of representations maintained in WM and retrieved from LTM across occipital and parietal cortex. On the other hand, we also show that activity patterns in nearly all parietal regions, but not occipital regions, contained information that could discriminate between WM and LTM trials. Our data provide new evidence for theories of memory systems and the representation of mnemonic content.
Affiliation(s)
- Vy A Vo
- Brain-Inspired Computing, Emerging Technologies Research, Intel Labs, Hillsboro, OR 97124, USA; Neurosciences Graduate Program, University of California San Diego, La Jolla, CA 92093, USA; Department of Psychology, University of California San Diego, La Jolla, CA 92093, USA
- David W Sutterer
- Department of Psychological Sciences, Vanderbilt University, Nashville, TN 37235, USA
- Joshua J Foster
- Department of Psychological and Brain Sciences, Boston University, Boston, MA 02215, USA; Center for Systems Neuroscience, Boston University, Boston, MA, USA
- Thomas C Sprague
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA 93106, USA
- Edward Awh
- Department of Psychology, The University of Chicago, Chicago, IL 60637, USA; Institute for Mind and Biology, The University of Chicago, Chicago, IL 60637, USA
- John T Serences
- Neurosciences Graduate Program, University of California San Diego, La Jolla, CA 92093, USA; Department of Psychology, University of California San Diego, La Jolla, CA 92093, USA; Kavli Foundation for the Brain and Mind, University of California San Diego, La Jolla, CA 92093, USA
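The multivariate pattern analysis in the entry above (discriminating WM from LTM trials based on parietal activity patterns) can be illustrated with a minimal sketch. The synthetic data, the strength of the task-specific signal, and the correlation-based nearest-centroid classifier are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 50

# Synthetic voxel patterns: WM and LTM trials differ by a small
# task-specific component along a fixed axis (illustrative assumption)
labels = np.repeat([0, 1], n_trials // 2)          # 0 = WM, 1 = LTM
task_axis = rng.normal(size=n_voxels)
X = rng.normal(size=(n_trials, n_voxels)) + 0.5 * np.outer(2 * labels - 1, task_axis)

def nearest_centroid_cv(X, y, n_folds=5):
    """Cross-validated accuracy of a correlation-based nearest-centroid classifier."""
    order = rng.permutation(len(y))
    folds = np.array_split(order, n_folds)
    correct = 0
    for test in folds:
        train = np.setdiff1d(order, test)
        # One mean pattern (centroid) per condition, from training trials only
        centroids = np.stack([X[train][y[train] == c].mean(0) for c in (0, 1)])
        for i in test:
            r = [np.corrcoef(X[i], m)[0, 1] for m in centroids]
            correct += (np.argmax(r) == y[i])
    return correct / len(y)

acc = nearest_centroid_cv(X, labels)   # well above chance (0.5) for this signal level
```

Nearest-centroid correlation classifiers are a common, parameter-free baseline in fMRI decoding; the papers' actual analyses may use different classifiers and feature selection.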
11
|
Hallenbeck GE, Sprague TC, Rahmati M, Sreenivasan KK, Curtis CE. Working memory representations in visual cortex mediate distraction effects. Nat Commun 2021; 12:4714. [PMID: 34354071 PMCID: PMC8342709 DOI: 10.1038/s41467-021-24973-1] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2021] [Accepted: 07/13/2021] [Indexed: 11/17/2022] Open
Abstract
Although the contents of working memory can be decoded from visual cortex activity, these representations may play a limited role if they are not robust to distraction. We used model-based fMRI to estimate the impact of distracting visual tasks on working memory representations in several visual field maps in visual and frontoparietal association cortex. Here, we show distraction causes the fidelity of working memory representations to briefly dip when both the memorandum and distractor are jointly encoded by the population activities. Distraction induces small biases in memory errors which can be predicted by biases in neural decoding in early visual cortex, but not other regions. Although distraction briefly disrupts working memory representations, the widespread redundancy with which working memory information is encoded may protect against catastrophic loss. In early visual cortex, the neural representation of information in working memory and behavioral performance are intertwined, solidifying its importance in visual memory.

The relative roles of visual, parietal, and frontal cortex in working memory have been actively debated. Here, the authors show that distraction impacts visual working memory representations in primary visual areas, indicating that these regions play a key role in the maintenance of working memory.
Affiliation(s)
- Thomas C Sprague
- Department of Psychology, New York University, New York, NY, USA; Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA, USA
- Masih Rahmati
- Department of Psychology, New York University, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA
- Kartik K Sreenivasan
- Division of Science and Mathematics, New York University Abu Dhabi, Abu Dhabi, UAE
- Clayton E Curtis
- Department of Psychology, New York University, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA
12
|
Covert Attention Increases the Gain of Stimulus-Evoked Population Codes. J Neurosci 2021; 41:1802-1815. [PMID: 33441434 DOI: 10.1523/jneurosci.2186-20.2020] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2020] [Revised: 11/17/2020] [Accepted: 12/17/2020] [Indexed: 11/21/2022] Open
Abstract
Covert spatial attention has a variety of effects on the responses of individual neurons. However, relatively little is known about the net effect of these changes on sensory population codes, even though perception ultimately depends on population activity. Here, we measured the EEG in human observers (male and female), and isolated stimulus-evoked activity that was phase-locked to the onset of attended and ignored visual stimuli. Using an encoding model, we reconstructed spatially selective population tuning functions from the pattern of stimulus-evoked activity across the scalp. Our EEG-based approach allowed us to measure very early visually evoked responses occurring ∼100 ms after stimulus onset. In Experiment 1, we found that covert attention increased the amplitude of spatially tuned population responses at this early stage of sensory processing. In Experiment 2, we parametrically varied stimulus contrast to test how this effect scaled with stimulus contrast. We found that the effect of attention on the amplitude of spatially tuned responses increased with stimulus contrast, and was well described by an increase in response gain (i.e., a multiplicative scaling of the population response). Together, our results show that attention increases the gain of spatial population codes during the first wave of visual processing.

SIGNIFICANCE STATEMENT: We know relatively little about how attention improves population codes, even though perception is thought to critically depend on population activity. In this study, we used an encoding-model approach to test how attention modulates the spatial tuning of stimulus-evoked population responses measured with EEG. We found that attention multiplicatively scales the amplitude of spatially tuned population responses. Furthermore, this effect was present within 100 ms of stimulus onset. Thus, our results show that attention improves spatial population codes by increasing their gain at this early stage of processing.
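Entries 10–12 all rely on the inverted encoding model (IEM) approach: estimate channel-to-sensor weights on training data with known stimuli, then invert those weights to reconstruct channel tuning from held-out activity. A minimal numerical sketch follows; the channel count, basis exponent, sensor count, and noise level are chosen for illustration and are not taken from any of the papers:

```python
import numpy as np

# Hypothetical basis of 8 feature channels: half-wave-rectified cosines
# raised to a power (a common choice; the exact basis is an assumption here)
n_channels = 8
centers = np.linspace(0, 360, n_channels, endpoint=False)

def channel_responses(angles_deg):
    # rows: stimuli, cols: channels
    d = np.deg2rad(angles_deg[:, None] - centers[None, :])
    return np.clip(np.cos(d), 0, None) ** (n_channels - 1)

rng = np.random.default_rng(0)
n_sensors = 20
W_true = rng.normal(size=(n_sensors, n_channels))   # ground-truth mixing weights

# Training set: known stimulus angles -> channel responses C1 and noisy signals B1
train_angles = np.repeat(centers, 4)
C1 = channel_responses(train_angles)                # (trials, channels)
B1 = C1 @ W_true.T + 0.1 * rng.normal(size=(C1.shape[0], n_sensors))

# Step 1 (encoding): least-squares weight estimate, B1 ≈ C1 @ W_hat.T
W_hat = np.linalg.lstsq(C1, B1, rcond=None)[0].T    # (sensors, channels)

# Step 2 (inversion): reconstruct channel responses from held-out signals
B2 = channel_responses(np.array([90.0])) @ W_true.T
C2_hat = B2 @ W_hat @ np.linalg.inv(W_hat.T @ W_hat)

peak = centers[np.argmax(C2_hat[0])]                # reconstruction peaks near 90 degrees
```

The reconstructed channel profile peaking at the held-out stimulus value is what these studies quantify as representational fidelity or amplitude; in practice the inversion is run per time point (EEG) or per ROI (fMRI) and cross-validated.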