451
Dissociable memory traces within the macaque medial temporal lobe predict subsequent recognition performance. J Neurosci 2014; 34:1988-97. [PMID: 24478378] [DOI: 10.1523/jneurosci.4048-13.2014]
Abstract
Functional magnetic resonance imaging (fMRI) studies have revealed that activity in the medial temporal lobe (MTL) predicts subsequent memory performance in humans. Because knowledge of the cytoarchitecture and axonal projections of the human MTL is limited, precise localization and characterization of the areas that predict subsequent memory performance benefit from the use of nonhuman primates, in which an integrated approach combining MRI-based and cytoarchitecture-based boundary delineation is available. However, neural correlates of this subsequent memory effect have not yet been identified in monkeys. Here, we used fMRI to examine activity in the MTL during memory encoding of events that monkeys later remembered or forgot. Application of both multivoxel pattern analysis and conventional univariate analysis to high-resolution fMRI data allowed us to identify memory traces within the caudal entorhinal cortex (cERC) and perirhinal cortex (PRC), as well as within the hippocampus proper. Furthermore, activity in the cERC and the hippocampus, which are directly connected, was responsible for encoding the initial items of sequentially presented pictures, which may reflect recollection-like recognition, whereas activity in the PRC was not. These results suggest that two qualitatively distinct encoding processes operate in the monkey MTL and that recollection-based memory is formed by the interplay of the hippocampus with the cERC, a focal cortical area anatomically closer to the hippocampus and hierarchically higher than previously believed. These findings will advance our understanding of the memory system common to humans and monkeys and accelerate fine-grained electrophysiological characterization of these dissociable memory traces in the monkey MTL.
452
Cox DD. Do we understand high-level vision? Curr Opin Neurobiol 2014; 25:187-93. [PMID: 24552691] [DOI: 10.1016/j.conb.2014.01.016]
Abstract
'High-level' vision lacks a single, agreed-upon definition, but it might usefully be defined as those stages of visual processing that transition from analyzing local image structure to analyzing the structure of the external world that produced those images. Much work in the last several decades has focused on object recognition as a framing problem for the study of high-level visual cortex, and much progress has been made in this direction. This approach presumes that the operational goal of the visual system is to read out the identity of an object (or objects) in a scene, in spite of variation in position, size, lighting, and the presence of other nearby objects. However, while object recognition is an intuitively appealing operational framing of high-level vision, it is by no means the only task that visual cortex might perform, and the study of object recognition is beset by challenges in building stimulus sets that adequately sample the infinite space of possible stimuli. Here I review the successes and limitations of this work and ask whether we should reframe our approaches to understanding high-level vision.
Affiliation(s)
- David Daniel Cox
- Department of Molecular and Cellular Biology, Center for Brain Science, School of Engineering and Applied Sciences, Harvard University, 52 Oxford St., Room 219.40, Cambridge, MA 02138, United States.
453
Johnson MR, Johnson MK. Decoding individual natural scene representations during perception and imagery. Front Hum Neurosci 2014; 8:59. [PMID: 24574998] [PMCID: PMC3921604] [DOI: 10.3389/fnhum.2014.00059]
Abstract
We used a multi-voxel classification analysis of functional magnetic resonance imaging (fMRI) data to determine to what extent item-specific information about complex natural scenes is represented in several category-selective areas of human extrastriate visual cortex during visual perception and visual mental imagery. Participants in the scanner either viewed or were instructed to visualize previously memorized natural scene exemplars, and the neuroimaging data were subsequently subjected to a multi-voxel pattern analysis (MVPA) using a support vector machine (SVM) classifier. We found that item-specific information was represented in multiple scene-selective areas: the occipital place area (OPA), parahippocampal place area (PPA), retrosplenial cortex (RSC), and a scene-selective portion of the precuneus/intraparietal sulcus region (PCu/IPS). Furthermore, item-specific information from perceived scenes was re-instantiated during mental imagery of the same scenes. These results support findings from previous decoding analyses for other types of visual information and/or brain areas during imagery or working memory, and extend them to the case of visual scenes (and scene-selective cortex). Taken together, such findings support models suggesting that reflective mental processes are subserved by the re-instantiation of perceptual information in high-level visual cortex. We also examined activity in the fusiform face area (FFA) and found that it, too, contained significant item-specific scene information during perception, but not during mental imagery. This suggests that although decodable scene-relevant activity occurs in FFA during perception, FFA activity may not be a necessary (or even relevant) component of one's mental representation of visual scenes.
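Decoding pipelines like the one described here share a common core: estimate a response pattern per item per run, then classify held-out patterns. The authors used an SVM classifier; as a stand-in, the sketch below uses a simpler correlation-based MVPA classifier on simulated data, so every array shape and number is illustrative rather than taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 scene exemplars x 10 repetitions x 50 voxels.
# Each exemplar has a distinct mean activity pattern plus noise
# (all names and dimensions are illustrative, not from the study).
n_items, n_reps, n_vox = 4, 10, 50
templates = rng.normal(size=(n_items, n_vox))
data = templates[:, None, :] + 0.5 * rng.normal(size=(n_items, n_reps, n_vox))

# Leave-one-repetition-out decoding: correlate each left-out pattern
# with the mean pattern of every item from the remaining repetitions.
correct = 0
for rep in range(n_reps):
    train = np.delete(data, rep, axis=1).mean(axis=1)   # item x voxel means
    for item in range(n_items):
        test = data[item, rep]
        r = [np.corrcoef(test, train[j])[0, 1] for j in range(n_items)]
        correct += int(np.argmax(r) == item)

accuracy = correct / (n_items * n_reps)
print(accuracy)
```

In a real analysis, `data` would hold run-wise response estimates from a scene-selective ROI, and leave-one-run-out cross-validation with an SVM would replace the leave-one-repetition correlation rule.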
Affiliation(s)
- Marcia K Johnson
- Department of Psychology, Yale University, New Haven, CT, USA; Interdepartmental Neuroscience Program, Yale University, New Haven, CT, USA
454
Abstract
Memory consolidation transforms initially labile memory traces into more stable representations. One putative mechanism for consolidation is the reactivation of memory traces after their initial encoding, during subsequent sleep or waking states. However, it is still unknown whether consolidation of individual memory contents relies on reactivation of stimulus-specific neural representations in humans. Investigating stimulus-specific representations in humans is particularly difficult, but potentially feasible using multivariate pattern classification analysis (MVPA). Here, we show in healthy human participants that stimulus-specific activation patterns can indeed be identified with MVPA, that these patterns reoccur spontaneously during postlearning resting periods and sleep, and that the frequency of reactivation predicts subsequent memory for individual items. We conducted a paired-associate learning task with items and spatial positions and extracted stimulus-specific activity patterns by MVPA in a simultaneous electroencephalography and functional magnetic resonance imaging (fMRI) study. As a first step, we quantified the number of fMRI volumes during rest that resembled either one of the items shown before the resting period or one of the control items shown after it. Reactivations during both the awake resting state and sleep predicted subsequent memory. These data provide the first evidence that spontaneous reactivation of stimulus-specific activity patterns during the resting state can be investigated using MVPA. They show that reactivation occurs in humans and is behaviorally relevant for stabilizing memory traces against interference. They move beyond previous studies because replay was investigated at the level of individual stimuli and because reactivations were not evoked by sensory cues but occurred spontaneously.
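The reactivation measure described here, counting rest volumes whose activity pattern resembles a learned item's pattern, can be sketched with simple template correlation. The simulation below is purely illustrative (the study extracted stimulus-specific patterns with MVPA from simultaneous EEG-fMRI data); the threshold and all sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox = 80

# Illustrative templates for two learned items and one control item
# shown only after the rest period (toy stand-ins for MVPA patterns).
learned = rng.normal(size=(2, n_vox))
control = rng.normal(size=n_vox)

# Simulated resting-state volumes: mostly noise, but item 0 "reactivates"
# in five volumes (its pattern reappears embedded in noise).
rest = rng.normal(size=(60, n_vox))
reactivated_idx = [3, 10, 22, 41, 55]
rest[reactivated_idx] += 1.5 * learned[0]

def count_matches(volumes, template, threshold=0.4):
    """Count volumes whose pattern correlates with the template above threshold."""
    return sum(np.corrcoef(v, template)[0, 1] > threshold for v in volumes)

hits_item0 = count_matches(rest, learned[0])
hits_item1 = count_matches(rest, learned[1])
hits_control = count_matches(rest, control)
print(hits_item0, hits_item1, hits_control)
```

Counting matches for control items shown only after rest, as the study did, guards against generic pattern similarity inflating the reactivation estimate.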
455
Sreenivasan KK, Curtis CE, D'Esposito M. Revisiting the role of persistent neural activity during working memory. Trends Cogn Sci 2014; 18:82-9. [PMID: 24439529] [DOI: 10.1016/j.tics.2013.12.001]
Abstract
What are the neural mechanisms underlying working memory (WM)? One influential theory posits that neurons in the lateral prefrontal cortex (lPFC) store WM information via persistent activity. In this review, we critically evaluate recent findings that together indicate that this model of WM needs revision. We argue that sensory cortex, not the lPFC, maintains high-fidelity representations of WM content. By contrast, the lPFC simultaneously maintains representations of multiple goal-related variables that serve to bias stimulus-specific activity in sensory regions. This work highlights multiple neural mechanisms supporting WM, including temporally dynamic population coding in addition to persistent activity. These new insights focus the question on understanding how the mechanisms that underlie WM are related, interact, and are coordinated in the lPFC and sensory cortex.
Affiliation(s)
- Kartik K Sreenivasan
- Division of Science and Mathematics, New York University Abu Dhabi, 19 Washington Square North, New York, NY 10011, USA.
- Clayton E Curtis
- Department of Psychology, and Center for Neural Science, New York University, 6 Washington Place, New York, NY 10003, USA
- Mark D'Esposito
- Helen Wills Neuroscience Institute, and Department of Psychology, University of California, Berkeley, 132 Barker Hall, Berkeley, CA 94720, USA
456
Abstract
Characterizing how activity in the central and autonomic nervous systems corresponds to distinct emotional states is one of the central goals of affective neuroscience. Despite the ease with which individuals label their own experiences, identifying specific autonomic and neural markers of emotions remains a challenge. Here we explore how multivariate pattern classification approaches offer an advantageous framework for identifying emotion-specific biomarkers and for testing predictions of theoretical models of emotion. Based on initial studies using multivariate pattern classification, we suggest that central and autonomic nervous system activity can be reliably decoded into distinct emotional states. Finally, we consider future directions in applying pattern classification to understand the nature of emotion in the nervous system.
457
Santoro R, Moerel M, De Martino F, Goebel R, Ugurbil K, Yacoub E, Formisano E. Encoding of natural sounds at multiple spectral and temporal resolutions in the human auditory cortex. PLoS Comput Biol 2014; 10:e1003412. [PMID: 24391486] [PMCID: PMC3879146] [DOI: 10.1371/journal.pcbi.1003412]
Abstract
Functional neuroimaging research provides detailed observations of the response patterns that natural sounds (e.g. human voices and speech, animal cries, environmental sounds) evoke in the human brain. The computational and representational mechanisms underlying these observations, however, remain largely unknown. Here we combine high spatial resolution (3 and 7 Tesla) functional magnetic resonance imaging (fMRI) with computational modeling to reveal how natural sounds are represented in the human brain. We compare competing models of sound representations and select the model that most accurately predicts fMRI response patterns to natural sounds. Our results show that the cortical encoding of natural sounds entails the formation of multiple representations of sound spectrograms with different degrees of spectral and temporal resolution. The cortex derives these multi-resolution representations through frequency-specific neural processing channels and through the combined analysis of the spectral and temporal modulations in the spectrogram. Furthermore, our findings suggest that a spectral-temporal resolution trade-off may govern the modulation tuning of neuronal populations throughout the auditory cortex. Specifically, our fMRI results suggest that neuronal populations in posterior/dorsal auditory regions preferably encode coarse spectral information with high temporal precision. Vice versa, neuronal populations in anterior/ventral auditory regions preferably encode fine-grained spectral information with low temporal precision. We propose that such a multi-resolution analysis may be crucially relevant for flexible and behaviorally relevant sound processing and may constitute one of the computational underpinnings of functional specialization in auditory cortex.
How does the human brain analyze natural sounds? Previous functional neuroimaging research could only describe the response patterns that sounds evoke in the human brain at the level of preferential regional activations. A comprehensive account of the neural basis of human hearing, however, requires deriving computational models that are able to provide quantitative predictions of brain responses to natural sounds. Here, we make a significant step in this direction by combining functional magnetic resonance imaging (fMRI) with computational modeling. We compare competing computational models of sound representations and select the model that most accurately predicts the measured fMRI response patterns. The computational models describe the processing of three relevant properties of natural sounds: frequency, temporal modulations and spectral modulations. We find that a model that represents spectral and temporal modulations jointly and in a frequency-dependent fashion provides the best account of fMRI responses and that the functional specialization of auditory cortical fields can be partially accounted for by their modulation tuning. Our results provide insights into how natural sounds are encoded in human auditory cortex, and our methodological approach constitutes an advance in the way this question can be addressed in future studies.
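The model-selection logic, fitting competing feature-space models and keeping the one that best predicts held-out fMRI responses, can be illustrated with a cross-validated ridge-regression encoding model. The feature spaces below are toy stand-ins, not the spectrotemporal modulation features used in the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: 100 "sounds" described by two competing feature spaces.
# The true voxel response depends on a richer (joint) feature space;
# a competing model sees only a subset of those features (illustrative).
n_sounds, n_joint, n_marginal = 100, 12, 4
X_joint = rng.normal(size=(n_sounds, n_joint))
X_marginal = X_joint[:, :n_marginal]          # impoverished competing model
w_true = rng.normal(size=n_joint)
y = X_joint @ w_true + 0.5 * rng.normal(size=n_sounds)

def ridge_cv_corr(X, y, alpha=1.0, n_folds=5):
    """Cross-validated prediction accuracy (correlation) of a ridge encoding model."""
    folds = np.array_split(np.arange(len(y)), n_folds)
    preds = np.empty_like(y)
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(len(y)), test_idx)
        Xtr, ytr = X[train_idx], y[train_idx]
        w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]), Xtr.T @ ytr)
        preds[test_idx] = X[test_idx] @ w
    return np.corrcoef(preds, y)[0, 1]

r_joint = ridge_cv_corr(X_joint, y)
r_marginal = ridge_cv_corr(X_marginal, y)
print(r_joint, r_marginal)
```

Here the richer model nests the impoverished one, so its higher held-out correlation mirrors the paper's finding that joint, frequency-dependent spectrotemporal representations predict responses best.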
Affiliation(s)
- Roberta Santoro
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), Maastricht, The Netherlands
- Michelle Moerel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), Maastricht, The Netherlands
- Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), Maastricht, The Netherlands
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota, United States of America
- Rainer Goebel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), Maastricht, The Netherlands
- Department of Neuroimaging and Neuromodeling, Netherlands Institute for Neuroscience, Royal Netherlands Academy of Arts and Sciences (KNAW), Amsterdam, The Netherlands
- Kamil Ugurbil
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota, United States of America
- Essa Yacoub
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota, United States of America
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), Maastricht, The Netherlands
458
Kay KN, Rokem A, Winawer J, Dougherty RF, Wandell BA. GLMdenoise: a fast, automated technique for denoising task-based fMRI data. Front Neurosci 2013; 7:247. [PMID: 24381539] [PMCID: PMC3865440] [DOI: 10.3389/fnins.2013.00247]
Abstract
In task-based functional magnetic resonance imaging (fMRI), researchers seek to measure fMRI signals related to a given task or condition. In many circumstances, measuring this signal of interest is limited by noise. In this study, we present GLMdenoise, a technique that improves signal-to-noise ratio (SNR) by entering noise regressors into a general linear model (GLM) analysis of fMRI data. The noise regressors are derived by conducting an initial model fit to determine voxels unrelated to the experimental paradigm, performing principal components analysis (PCA) on the time-series of these voxels, and using cross-validation to select the optimal number of principal components to use as noise regressors. Due to the use of data resampling, GLMdenoise requires and is best suited for datasets involving multiple runs (where conditions repeat across runs). We show that GLMdenoise consistently improves cross-validation accuracy of GLM estimates on a variety of event-related experimental datasets and is accompanied by substantial gains in SNR. To promote practical application of methods, we provide MATLAB code implementing GLMdenoise. Furthermore, to help compare GLMdenoise to other denoising methods, we present the Denoise Benchmark (DNB), a public database and architecture for evaluating denoising methods. The DNB consists of the datasets described in this paper, a code framework that enables automatic evaluation of a denoising method, and implementations of several denoising methods, including GLMdenoise, the use of motion parameters as noise regressors, ICA-based denoising, and RETROICOR/RVHRCOR. Using the DNB, we find that GLMdenoise performs best out of all of the denoising methods we tested.
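The core of the technique, deriving noise regressors by PCA on voxels unrelated to the task and adding them to the GLM, can be sketched as follows. This is a toy simulation: GLMdenoise itself selects the noise pool by low cross-validated R² and picks the number of principal components by cross-validation, neither of which is modeled here.

```python
import numpy as np

rng = np.random.default_rng(3)
n_t = 200

# Toy design (one boxcar condition) and a shared structured noise source,
# standing in for physiological noise; all sizes are illustrative.
design = (np.arange(n_t) % 20 < 5).astype(float)
noise_source = np.cumsum(rng.normal(size=n_t))
noise_source /= noise_source.std()

# Task voxels: design signal + shared noise; noise-pool voxels: noise only.
task_vox = design[:, None] + 0.8 * noise_source[:, None] \
    + 0.5 * rng.normal(size=(n_t, 20))
noise_pool = 0.8 * noise_source[:, None] + 0.5 * rng.normal(size=(n_t, 30))

# PCA (via SVD) on the noise-pool time series; keep the top component.
centered = noise_pool - noise_pool.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
noise_pcs = U[:, :1] * s[:1]          # first principal component time course

def residual_var(Y, X):
    """Mean residual variance of an ordinary least-squares GLM fit."""
    beta = np.linalg.lstsq(X, Y, rcond=None)[0]
    return ((Y - X @ beta) ** 2).mean()

X_plain = np.column_stack([design, np.ones(n_t)])
X_denoised = np.column_stack([design, np.ones(n_t), noise_pcs])

v_plain = residual_var(task_vox, X_plain)
v_denoised = residual_var(task_vox, X_denoised)
print(v_plain, v_denoised)
```

Because the first principal component of the noise pool tracks the shared noise source, adding it to the design matrix substantially shrinks the residual variance in the task voxels, which is the mechanism behind the SNR gains the paper reports.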
Affiliation(s)
- Kendrick N Kay
- Department of Psychology, Washington University in St. Louis, St. Louis, MO, USA
- Ariel Rokem
- Department of Psychology, Stanford University, Stanford, CA, USA
- Robert F Dougherty
- Center for Cognitive and Neurobiological Imaging, Stanford University, Stanford, CA, USA
- Brian A Wandell
- Department of Psychology, Stanford University, Stanford, CA, USA
459
Zelinsky GJ, Peng Y, Samaras D. Eye can read your mind: decoding gaze fixations to reveal categorical search targets. J Vis 2013; 13(14):10. [PMID: 24338446] [DOI: 10.1167/13.14.10]
Abstract
Is it possible to infer a person's goal by decoding their fixations on objects? Two groups of participants categorically searched for either a teddy bear or a butterfly among random-category distractors, each rated as high, medium, or low in similarity to the target classes. Target-similar objects were preferentially fixated in both search tasks, demonstrating that looking behavior carries information about the target category. Different participants then viewed the searchers' scanpaths, superimposed over the target-absent displays, and attempted to decode the target category (bear/butterfly). Bear searchers were classified perfectly; butterfly searchers were classified at 77% accuracy. Bear and butterfly support vector machine (SVM) classifiers were also used to decode the same preferentially fixated objects and yielded highly comparable classification rates. We conclude that information about a person's search goal exists in fixation behavior, and that this information can be behaviorally decoded to reveal a search target: essentially reading a person's mind by analyzing their fixations.
460
Cong F, Puoliväli T, Alluri V, Sipola T, Burunat I, Toiviainen P, Nandi AK, Brattico E, Ristaniemi T. Key issues in decomposing fMRI during naturalistic and continuous music experience with independent component analysis. J Neurosci Methods 2013; 223:74-84. [PMID: 24333752] [DOI: 10.1016/j.jneumeth.2013.11.025]
Abstract
BACKGROUND Independent component analysis (ICA) has often been used to decompose fMRI data, mostly for resting-state, block, and event-related designs, owing to its advantages as a data-driven method. For fMRI data recorded during free-listening experiences, only a few exploratory studies have applied ICA. NEW METHOD To process fMRI data elicited by a 512-s piece of modern tango, an FFT-based band-pass filter was first used to further pre-process the data and remove sources of no interest and noise. Then, a fast model-order selection method was applied to estimate the number of sources. Next, both individual ICA and group ICA were performed. Subsequently, ICA components whose time courses were significantly correlated with musical features were selected. Finally, for individual ICA, components common across the majority of participants were found by diffusion map and spectral clustering. RESULTS The spatial maps extracted by the new ICA approach that were common across most participants evidenced slightly right-lateralized activity within and surrounding the auditory cortices, and these maps were associated with the musical features. COMPARISON WITH EXISTING METHOD(S) Compared with the conventional ICA approach, more participants were found to share the common spatial maps extracted by the new approach. Conventional model-order selection methods underestimated the true number of sources in the conventionally pre-processed fMRI data for individual ICA. CONCLUSIONS Pre-processing fMRI data with a reasonable band-pass digital filter can greatly benefit subsequent model-order selection and ICA for fMRI data acquired under naturalistic paradigms. Diffusion map and spectral clustering are straightforward tools for finding common ICA spatial maps.
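The FFT-based band-pass pre-filtering step can be sketched directly. The sampling rate, cutoff frequencies, and simulated drift below are invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 0.5                       # sampling rate in Hz (TR = 2 s), illustrative
n_t = 256
t = np.arange(n_t) / fs

# Toy voxel time course: slow scanner drift + stimulus-band signal + noise.
f_drift = 2 * fs / n_t         # exactly FFT bin 2, well below the passband
drift = 3.0 * np.sin(2 * np.pi * f_drift * t)
signal = np.sin(2 * np.pi * 0.03 * t)
x = drift + signal + 0.5 * rng.normal(size=n_t)

def fft_bandpass(x, fs, lo, hi):
    """Zero out FFT coefficients outside [lo, hi] Hz and invert."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    X[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(X, n=len(x))

filtered = fft_bandpass(x, fs, lo=0.01, hi=0.1)

# The band-passed series should track the stimulus-band component
# much better than the raw series does.
r_raw = np.corrcoef(x, signal)[0, 1]
r_filtered = np.corrcoef(filtered, signal)[0, 1]
print(r_raw, r_filtered)
```

In the study this kind of filtering precedes model-order selection and ICA; the point of the sketch is simply that removing out-of-band drift and noise makes the component of interest far easier to recover.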
Affiliation(s)
- Fengyu Cong
- Department of Mathematical Information Technology, University of Jyväskylä, Finland.
- Tuomas Puoliväli
- Department of Mathematical Information Technology, University of Jyväskylä, Finland
- Vinoo Alluri
- Department of Mathematical Information Technology, University of Jyväskylä, Finland; Finnish Centre of Excellence in Interdisciplinary Music Research, University of Jyväskylä, Finland
- Tuomo Sipola
- Department of Mathematical Information Technology, University of Jyväskylä, Finland
- Iballa Burunat
- Department of Mathematical Information Technology, University of Jyväskylä, Finland; Finnish Centre of Excellence in Interdisciplinary Music Research, University of Jyväskylä, Finland
- Petri Toiviainen
- Finnish Centre of Excellence in Interdisciplinary Music Research, University of Jyväskylä, Finland
- Asoke K Nandi
- Department of Electronic and Computer Engineering, Brunel University, UK; Department of Mathematical Information Technology, University of Jyväskylä, Finland
- Elvira Brattico
- Finnish Centre of Excellence in Interdisciplinary Music Research, University of Jyväskylä, Finland; Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki, Finland
- Tapani Ristaniemi
- Department of Mathematical Information Technology, University of Jyväskylä, Finland
461
Abstract
The fusiform face area (FFA) is a well-studied human brain region that shows strong activation for faces. In functional MRI studies, FFA is often assumed to be a homogeneous collection of voxels with similar visual tuning. To test this assumption, we used natural movies and a quantitative voxelwise modeling and decoding framework to estimate category tuning profiles for individual voxels within FFA. We find that the responses in most FFA voxels are strongly enhanced by faces, as reported in previous studies. However, we also find that responses of individual voxels are selectively enhanced or suppressed by a wide variety of other categories and that these broader tuning profiles differ across FFA voxels. Cluster analysis of category tuning profiles across voxels reveals three spatially segregated functional subdomains within FFA. These subdomains differ primarily in their responses for nonface categories, such as animals, vehicles, and communication verbs. Furthermore, this segregation does not depend on the statistical threshold used to define FFA from responses to functional localizers. These results suggest that voxels within FFA represent more diverse information about object and action categories than generally assumed.
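The cluster-analysis step, grouping voxels by the similarity of their category tuning profiles, can be sketched with a minimal k-means over simulated profiles. The categories, subgroup structure, and all numbers are invented; the study clustered real voxelwise tuning estimated from natural-movie responses:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "category tuning profiles": 60 voxels x 8 categories. All voxels
# respond to faces (column 0); two planted subgroups differ only in their
# responses to nonface categories (purely illustrative numbers).
profile_a = np.array([3.0, 1.5, 1.5, 0.0, 0.0, 0.0, 0.0, 0.0])
profile_b = np.array([3.0, 0.0, 0.0, 0.0, 0.0, 1.5, 1.5, 0.0])
labels_true = np.repeat([0, 1], 30)
profiles = np.where(labels_true[:, None] == 0, profile_a, profile_b)
profiles = profiles + 0.3 * rng.normal(size=profiles.shape)

def kmeans(X, k, n_iter=50):
    """Minimal k-means with farthest-point initialization."""
    centroids = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centroids], axis=0)
        centroids.append(X[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(n_iter):
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(profiles, k=2)

# Agreement with the planted subgroups, up to label permutation.
agree = max((labels == labels_true).mean(), (labels != labels_true).mean())
print(agree)
```

Note that the two recovered clusters share their face response and differ only on nonface columns, echoing the paper's observation that FFA subdomains differ primarily in their responses to nonface categories.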
462
Sato JR, Basilio R, Paiva FF, Garrido GJ, Bramati IE, Bado P, Tovar-Moll F, Zahn R, Moll J. Real-time fMRI pattern decoding and neurofeedback using FRIEND: an FSL-integrated BCI toolbox. PLoS One 2013; 8:e81658. [PMID: 24312569] [PMCID: PMC3847114] [DOI: 10.1371/journal.pone.0081658]
Abstract
The demonstration that humans can learn to modulate their own brain activity based on feedback of neurophysiological signals opened up exciting opportunities for fundamental and applied neuroscience. Although EEG-based neurofeedback has been long employed both in experimental and clinical investigation, functional MRI (fMRI)-based neurofeedback emerged as a promising method, given its superior spatial resolution and ability to gauge deep cortical and subcortical brain regions. In combination with improved computational approaches, such as pattern recognition analysis (e.g., Support Vector Machines, SVM), fMRI neurofeedback and brain decoding represent key innovations in the field of neuromodulation and functional plasticity. Expansion in this field and its applications critically depend on the existence of freely available, integrated and user-friendly tools for the neuroimaging research community. Here, we introduce FRIEND, a graphic-oriented user-friendly interface package for fMRI neurofeedback and real-time multivoxel pattern decoding. The package integrates routines for image preprocessing in real-time, ROI-based feedback (single-ROI BOLD level and functional connectivity) and brain decoding-based feedback using SVM. FRIEND delivers an intuitive graphic interface with flexible processing pipelines involving optimized procedures embedding widely validated packages, such as FSL and libSVM. In addition, a user-defined visual neurofeedback module allows users to easily design and run fMRI neurofeedback experiments using ROI-based or multivariate classification approaches. FRIEND is open-source and free for non-commercial use. Processing tutorials and extensive documentation are available.
Affiliation(s)
- João R. Sato
- Cognitive and Behavioral Neuroscience Unit and Neuroinformatics Workgroup, D’Or Institute for Research and Education (IDOR), Rio de Janeiro, Brazil
- Center of Mathematics, Computation and Cognition, Universidade Federal do ABC, Santo André, Brazil
- Rodrigo Basilio
- Cognitive and Behavioral Neuroscience Unit and Neuroinformatics Workgroup, D’Or Institute for Research and Education (IDOR), Rio de Janeiro, Brazil
- Fernando F. Paiva
- Cognitive and Behavioral Neuroscience Unit and Neuroinformatics Workgroup, D’Or Institute for Research and Education (IDOR), Rio de Janeiro, Brazil
- Institute of Physics of São Carlos, University of São Paulo, São Carlos, Brazil
- Griselda J. Garrido
- Cognitive and Behavioral Neuroscience Unit and Neuroinformatics Workgroup, D’Or Institute for Research and Education (IDOR), Rio de Janeiro, Brazil
- Ivanei E. Bramati
- Cognitive and Behavioral Neuroscience Unit and Neuroinformatics Workgroup, D’Or Institute for Research and Education (IDOR), Rio de Janeiro, Brazil
- Institute for Biomedical Sciences, Federal University of Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil
- Patricia Bado
- Cognitive and Behavioral Neuroscience Unit and Neuroinformatics Workgroup, D’Or Institute for Research and Education (IDOR), Rio de Janeiro, Brazil
- Institute for Biomedical Sciences, Federal University of Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil
- Fernanda Tovar-Moll
- Cognitive and Behavioral Neuroscience Unit and Neuroinformatics Workgroup, D’Or Institute for Research and Education (IDOR), Rio de Janeiro, Brazil
- Institute for Biomedical Sciences, Federal University of Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil
- Roland Zahn
- Cognitive and Behavioral Neuroscience Unit and Neuroinformatics Workgroup, D’Or Institute for Research and Education (IDOR), Rio de Janeiro, Brazil
- Department of Psychological Medicine, Institute of Psychiatry, King's College, London, United Kingdom
- Jorge Moll
- Cognitive and Behavioral Neuroscience Unit and Neuroinformatics Workgroup, D’Or Institute for Research and Education (IDOR), Rio de Janeiro, Brazil
- Institute for Biomedical Sciences, Federal University of Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil
463
Paik SB. Developmental models of functional maps in cortex. Biomed Eng Lett 2013. [DOI: 10.1007/s13534-013-0115-x]
464
MEG-based decoding of the spatiotemporal dynamics of visual category perception. Neuroimage 2013; 83:1063-73. [DOI: 10.1016/j.neuroimage.2013.07.075]
465
Benjamini Y, Yu B. The shuffle estimator for explainable variance in fMRI experiments. Ann Appl Stat 2013; 7. [DOI: 10.1214/13-aoas681]
466
Leeds DD, Seibert DA, Pyles JA, Tarr MJ. Comparing visual representations across human fMRI and computational vision. J Vis 2013; 13(13):25. [PMID: 24273227] [PMCID: PMC3839261] [DOI: 10.1167/13.13.25]
Abstract
Feedforward visual object perception recruits a cortical network that is assumed to be hierarchical, progressing from basic visual features to complete object representations. However, the nature of the intermediate features related to this transformation remains poorly understood. Here, we explore how well different computer vision recognition models account for neural object encoding across the human cortical visual pathway as measured using fMRI. These neural data, collected during the viewing of 60 images of real-world objects, were analyzed with a searchlight procedure as in Kriegeskorte, Goebel, and Bandettini (2006): Within each searchlight sphere, the obtained patterns of neural activity for all 60 objects were compared to model responses for each computer recognition algorithm using representational dissimilarity analysis (Kriegeskorte et al., 2008). Although each of the computer vision methods significantly accounted for some of the neural data, among the different models, the scale invariant feature transform (Lowe, 2004), encoding local visual properties gathered from "interest points," was best able to accurately and consistently account for stimulus representations within the ventral pathway. More generally, when present, significance was observed in regions of the ventral-temporal cortex associated with intermediate-level object perception. Differences in model effectiveness and the neural location of significant matches may be attributable to the fact that each model implements a different featural basis for representing objects (e.g., more holistic or more parts-based). Overall, we conclude that well-known computer vision recognition systems may serve as viable proxies for theories of intermediate visual object representation.
Affiliation(s)
- Daniel D. Leeds: Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Computer and Information Science, Fordham University, Bronx, NY, USA
- Darren A. Seibert: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- John A. Pyles: Center for the Neural Basis of Cognition and Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA
- Michael J. Tarr: Center for the Neural Basis of Cognition and Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA
467
Bruffaerts R, Dupont P, Peeters R, De Deyne S, Storms G, Vandenberghe R. Similarity of fMRI activity patterns in left perirhinal cortex reflects semantic similarity between words. J Neurosci 2013; 33:18597-607. [PMID: 24259581 PMCID: PMC6618797 DOI: 10.1523/jneurosci.1548-13.2013]
Abstract
How verbal and nonverbal visuoperceptual input connects to semantic knowledge is a core question in visual and cognitive neuroscience, with significant clinical ramifications. In an event-related functional magnetic resonance imaging (fMRI) experiment we determined how cosine similarity between fMRI response patterns to concrete words and pictures reflects semantic clustering and semantic distances between the represented entities within a single category. Semantic clustering and semantic distances between 24 animate entities were derived from a concept-feature matrix based on feature generation by >1000 subjects. In the main fMRI study, 19 human subjects performed a property verification task with written words and pictures and a low-level control task. The univariate contrast between the semantic and the control task yielded extensive bilateral occipitotemporal activation from posterior cingulate to anteromedial temporal cortex. Entities belonging to the same semantic cluster elicited more similar fMRI activity patterns in left occipitotemporal cortex. When words and pictures were analyzed separately, the effect reached significance only for words. The semantic similarity effect for words was localized to left perirhinal cortex. According to a representational similarity analysis of left perirhinal responses, semantic distances between entities correlated inversely with cosine similarities between fMRI response patterns to written words. An independent replication study in 16 novel subjects confirmed these novel findings. Semantic similarity is reflected by similarity of functional topography at a fine-grained level in left perirhinal cortex. The word specificity excludes perceptually driven confounds as an explanation and is likely to be task dependent.
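The core pattern-similarity measure is simple to state. Below is a minimal sketch with hypothetical voxel patterns (entity count and voxel count are illustrative, not the study's data): cosine similarity between activation patterns, of the kind correlated here with concept-feature semantic distances.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two voxel-pattern vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pattern_similarity_matrix(patterns):
    """Pairwise cosine similarities between entities (rows = entities,
    columns = voxels); the measure related here to semantic distance."""
    normed = patterns / np.linalg.norm(patterns, axis=1, keepdims=True)
    return normed @ normed.T

# Hypothetical data: 24 animate entities x 200 perirhinal voxels.
rng = np.random.default_rng(1)
patterns = rng.standard_normal((24, 200))
S = pattern_similarity_matrix(patterns)
```

In the study, the upper triangle of such a similarity matrix is then correlated (inversely) with the semantic distances derived from the concept-feature matrix.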
Affiliation(s)
- Rose Bruffaerts: Laboratory for Cognitive Neurology, Department of Neurosciences; Neurology Department
- Patrick Dupont: Laboratory for Cognitive Neurology, Department of Neurosciences
- Ronald Peeters: Radiology Department, University Hospitals Leuven, 3000 Leuven, Belgium
- Simon De Deyne: Laboratory of Experimental Psychology, Humanities and Social Sciences Group, University of Leuven, 3000 Leuven, Belgium
- Gerrit Storms: Laboratory of Experimental Psychology, Humanities and Social Sciences Group, University of Leuven, 3000 Leuven, Belgium
- Rik Vandenberghe: Laboratory for Cognitive Neurology, Department of Neurosciences; Neurology Department
468
Sprague TC, Serences JT. Attention modulates spatial priority maps in the human occipital, parietal and frontal cortices. Nat Neurosci 2013; 16:1879-87. [PMID: 24212672 PMCID: PMC3977704 DOI: 10.1038/nn.3574]
Abstract
Computational theories propose that attention modulates the topographical landscape of spatial ‘priority’ maps in regions of visual cortex so that the location of an important object is associated with higher activation levels. While single-unit recording studies have demonstrated attention-related increases in the gain of neural responses and changes in the size of spatial receptive fields, the net effect of these modulations on the topography of region-level priority maps has not been investigated. Here, we used fMRI and a multivariate encoding model to reconstruct spatial representations of attended and ignored stimuli using activation patterns across entire visual areas. These reconstructed spatial representations reveal the influence of attention on the amplitude and size of stimulus representations within putative priority maps across the visual hierarchy. Our results suggest that attention increases the amplitude of stimulus representations in these spatial maps, particularly in higher visual areas, but does not substantively change their size.
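The encoding-model inversion that underlies such reconstructions can be sketched roughly as follows, assuming idealized Gaussian spatial channels and simulated voxel data (not the published implementation): fit channel-to-voxel weights by least squares on training trials, then invert the fitted weights on a test pattern to recover channel responses and a spatial reconstruction.

```python
import numpy as np

rng = np.random.default_rng(2)
n_chan, n_vox, n_train = 8, 120, 200

# Hypothetical spatial channels: Gaussian tuning over a 1-D position axis.
positions = np.linspace(0, 1, 64)
centers = np.linspace(0, 1, n_chan)
basis = np.exp(-(positions[None, :] - centers[:, None])**2 / (2 * 0.08**2))

def channel_responses(stim_pos):
    """Idealized channel activations for stimuli at the given positions."""
    return np.exp(-(stim_pos[:, None] - centers[None, :])**2 / (2 * 0.08**2))

# Simulate training data: each voxel mixes the channels linearly, plus noise.
W_true = rng.standard_normal((n_chan, n_vox))
train_pos = rng.uniform(0, 1, n_train)
C_train = channel_responses(train_pos)
B_train = C_train @ W_true + 0.1 * rng.standard_normal((n_train, n_vox))

# Step 1: estimate channel weights per voxel by least squares.
W_hat, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)

# Step 2: invert the model on a (noiseless, for clarity) test pattern.
test_pos = np.array([0.5])
B_test = channel_responses(test_pos) @ W_true
C_hat = B_test @ W_hat.T @ np.linalg.inv(W_hat @ W_hat.T)

# Step 3: reconstruct the spatial representation from the channel profile.
recon = C_hat @ basis
peak = positions[np.argmax(recon[0])]
```

Attention effects would then appear as changes in the amplitude or width of `recon` across conditions; the sketch only shows the reconstruction machinery.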
Affiliation(s)
- Thomas C Sprague: Neuroscience Graduate Program, University of California San Diego, La Jolla, California, USA
469
Song S, Ma X, Zhan Y, Zhan Z, Yao L, Zhang J. Bayesian reconstruction of multiscale local contrast images from brain activity. J Neurosci Methods 2013; 220:39-45. [DOI: 10.1016/j.jneumeth.2013.08.020]
470
Processing of natural sounds: characterization of multipeak spectral tuning in human auditory cortex. J Neurosci 2013; 33:11888-98. [PMID: 23864678 DOI: 10.1523/jneurosci.5306-12.2013]
Abstract
We examine the mechanisms by which the human auditory cortex processes the frequency content of natural sounds. Through mathematical modeling of ultra-high field (7 T) functional magnetic resonance imaging responses to natural sounds, we derive frequency-tuning curves of cortical neuronal populations. With a data-driven analysis, we divide the auditory cortex into five spatially distributed clusters, each characterized by a spectral tuning profile. Beyond neuronal populations with simple single-peaked spectral tuning (grouped into two clusters), we observe that ∼60% of auditory populations are sensitive to multiple frequency bands. Specifically, we observe sensitivity to multiple frequency bands (1) at exactly one octave distance from each other, (2) at multiple harmonically related frequency intervals, and (3) with no apparent relationship to each other. We propose that beyond the well known cortical tonotopic organization, multipeaked spectral tuning amplifies selected combinations of frequency bands. Such selective amplification might serve to detect behaviorally relevant and complex sound features, aid in segregating auditory scenes, and explain prominent perceptual phenomena such as octave invariance.
471
Abstract
In this paper, I will review why psychotherapy is relevant to the question of how consciousness relates to brain plasticity. A great deal of the research and theorizing on consciousness and the brain, including my own on hallucinations for example (Collerton and Perry, 2011) has focused upon specific changes in conscious content which can be related to temporal changes in restricted brain systems. I will argue that psychotherapy, in contrast, allows only a focus on holistic aspects of consciousness; an emphasis which may usefully complement what can be learnt from more specific methodologies.
Affiliation(s)
- Daniel Collerton: Clinical Psychology, Northumberland, Tyne and Wear NHS Foundation Trust, Gateshead, UK; Newcastle University, Newcastle upon Tyne, UK
472
Discussion about “Grouping strategies and thresholding for high dimensional linear models”. J Stat Plan Inference 2013. [DOI: 10.1016/j.jspi.2013.03.002]
473

474
Abstract
Most studies of the early stages of visual analysis (V1-V3) have focused on the properties of neurons that support processing of elemental features of a visual stimulus or scene, such as local contrast, orientation, or direction of motion. Recent evidence from electrophysiology and neuroimaging studies, however, suggests that early visual cortex may also play a role in retaining stimulus representations in memory for short periods. For example, fMRI responses obtained during the delay period between two presentations of an oriented visual stimulus can be used to decode the remembered stimulus orientation with multivariate pattern analysis. Here, we investigated whether orientation is a special case or whether this phenomenon generalizes to working memory traces of other visual features. We found that multivariate classification of fMRI signals from human visual cortex could be used to decode the contrast of a perceived stimulus even when the mean response changes were accounted for, suggesting some consistent spatial signal for contrast in these areas. Strikingly, we found that fMRI responses also supported decoding of contrast when the stimulus had to be remembered. Furthermore, classification generalized from perceived to remembered stimuli and vice versa, implying that the corresponding patterns of responses in early visual cortex were highly consistent. In additional analyses, we show that stimulus decoding here is driven by biases depending on stimulus eccentricity. This places important constraints on the interpretation of decoding of stimulus properties for which cortical processing is known to vary with eccentricity, such as contrast, color, spatial frequency, and temporal frequency.
475
Abstract
To locate visual objects, the brain combines information about retinal location and direction of gaze. Studies in monkeys have demonstrated that eye position modulates the gain of visual signals with "gain fields," so that single neurons represent both retinotopic location and eye position. We wished to know whether eye position and retinotopic stimulus location are both represented in human visual cortex. Using functional magnetic resonance imaging, we measured separately for each of several different gaze positions cortical responses to stimuli that varied periodically in retinal locus. Visually evoked responses were periodic following the periodic retinotopic stimulation. Only the response amplitudes depended on eye position; response phases were indistinguishable across eye positions. We used multivoxel pattern analysis to decode eye position from the spatial pattern of response amplitudes. The decoder reliably discriminated eye position in five of the early visual cortical areas by taking advantage of a spatially heterogeneous eye position-dependent modulation of cortical activity. We conclude that responses in retinotopically organized visual cortical areas are modulated by gain fields qualitatively similar to those previously observed neurophysiologically.
476
Li Y, Long J, Huang B, Yu T, Wu W, Liu Y, Liang C, Sun P. Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception. Cereb Cortex 2013; 25:384-95. [PMID: 23978654 DOI: 10.1093/cercor/bht228]
Abstract
Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a role of modulation in the audiovisual integration.
Affiliation(s)
- Yuanqing Li: Center for Brain Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou 510640, China
- Jinyi Long: Center for Brain Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou 510640, China
- Biao Huang: Department of Radiology, Guangdong General Hospital, Guangzhou 510080, China
- Tianyou Yu: Center for Brain Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou 510640, China
- Wei Wu: Center for Brain Computer Interfaces and Brain Information Processing, South China University of Technology, Guangzhou 510640, China
- Yongjian Liu: Department of MR, Foshan Hospital of Traditional Chinese Medicine, Foshan 528000, China
- Changhong Liang: Department of Radiology, Guangdong General Hospital, Guangzhou 510080, China
- Pei Sun: Department of Psychology, Tsinghua University, Beijing 100084, China
477
Stansbury DE, Naselaris T, Gallant JL. Natural scene statistics account for the representation of scene categories in human visual cortex. Neuron 2013; 79:1025-34. [PMID: 23932491 DOI: 10.1016/j.neuron.2013.06.034]
Abstract
During natural vision, humans categorize the scenes they encounter: an office, the beach, and so on. These categories are informed by knowledge of the way that objects co-occur in natural scenes. How does the human brain aggregate information about objects to represent scene categories? To explore this issue, we used statistical learning methods to learn categories that objectively capture the co-occurrence statistics of objects in a large collection of natural scenes. Using the learned categories, we modeled fMRI brain signals evoked in human subjects when viewing images of scenes. We find that evoked activity across much of anterior visual cortex is explained by the learned categories. Furthermore, a decoder based on these scene categories accurately predicts the categories and objects comprising novel scenes from brain activity evoked by those scenes. These results suggest that the human brain represents scene categories that capture the co-occurrence statistics of objects in the world.
478
Sharifian F, Nurminen L, Vanni S. Visual interactions conform to pattern decorrelation in multiple cortical areas. PLoS One 2013; 8:e68046. [PMID: 23874491 PMCID: PMC3707897 DOI: 10.1371/journal.pone.0068046]
Abstract
Neural responses to visual stimuli are strongest in the classical receptive field, but they are also modulated by stimuli in a much wider region. In the primary visual cortex, physiological data and models suggest that such contextual modulation is mediated by recurrent interactions between cortical areas. Outside the primary visual cortex, imaging data have shown qualitatively similar interactions. However, whether the mechanisms underlying these effects are similar in different areas has remained unclear. Here, we found that the blood oxygenation level dependent (BOLD) signal spreads over considerable cortical distances in the primary visual cortex, well beyond the classical receptive field. This indicates that the synaptic activity induced by a given stimulus occurs in a surprisingly extensive network. Correspondingly, we found suppressive and facilitative interactions far from the maximum retinotopic response. Next, we characterized the relationship between contextual modulation and the correlation between two spatial activation patterns. Regardless of the functional area or retinotopic eccentricity, higher correlation between the center and surround response patterns was associated with stronger suppressive interaction. In individual voxels, suppressive interaction was predominant when the center and surround stimuli produced BOLD signals with the same sign. Facilitative interaction dominated in the voxels with opposite BOLD signal signs. Our data were consistent with a recently published cortical decorrelation model, and were validated against alternative models, separately in different eccentricities and functional areas. Our study provides evidence that spatial interactions among neural populations involve decorrelation of macroscopic neural activation patterns, and suggests that the basic design of the cerebral cortex houses a robust decorrelation mechanism for afferent synaptic input.
Affiliation(s)
- Fariba Sharifian: Brain Research Unit, O.V. Lounasmaa Laboratory, School of Science, Aalto University, Espoo, Finland
479
Schoenmakers S, Barth M, Heskes T, van Gerven M. Linear reconstruction of perceived images from human brain activity. Neuroimage 2013; 83:951-61. [PMID: 23886984 DOI: 10.1016/j.neuroimage.2013.07.043]
Abstract
With the advent of sophisticated acquisition and analysis techniques, decoding the contents of someone's experience has become a reality. We propose a straightforward linear Gaussian approach, where decoding relies on the inversion of properly regularized encoding models, which can still be solved analytically. In order to test our approach we acquired functional magnetic resonance imaging data under a rapid event-related design in which subjects were presented with handwritten characters. Our approach is shown to yield state-of-the-art reconstructions of perceived characters as estimated from BOLD responses. This even holds for previously unseen characters. We propose that this framework serves as a baseline with which to compare more sophisticated models for which analytical inversion is infeasible.
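The "properly regularized encoding model, inverted analytically" idea can be sketched with a toy linear Gaussian model. The dimensions, noise level, and prior below are arbitrary assumptions for illustration, not the paper's settings: fit a ridge-regularized encoding model from stimulus to voxels, then take the analytic posterior mean of the stimulus given a new voxel pattern.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_vox, n_train = 16, 50, 500
sigma = 0.1

# Hypothetical linear encoding model: voxel pattern = W_true @ pixels + noise.
W_true = rng.standard_normal((n_vox, n_pix))
X = rng.standard_normal((n_train, n_pix))            # training "images"
Y = X @ W_true.T + sigma * rng.standard_normal((n_train, n_vox))

# Fit the encoding model with ridge regression (closed-form solution).
lam = 1.0
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_pix), X.T @ Y).T

def decode(y):
    """Analytic inversion of the Gaussian model: posterior mean of x
    given y, assuming x ~ N(0, I) and noise ~ N(0, sigma^2 I)."""
    A = W_hat.T @ W_hat + sigma**2 * np.eye(n_pix)
    return np.linalg.solve(A, W_hat.T @ y)

# Reconstruct a previously unseen "image" from its evoked response.
x_new = rng.standard_normal(n_pix)
y_new = W_true @ x_new + sigma * rng.standard_normal(n_vox)
x_rec = decode(y_new)
corr = np.corrcoef(x_new, x_rec)[0, 1]
```

Because both the encoding fit and the inversion are closed-form, the whole decoder stays analytic, which is the baseline property the authors emphasize.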
Affiliation(s)
- Sanne Schoenmakers: Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
480
Kriegeskorte N, Kievit RA. Representational geometry: integrating cognition, computation, and the brain. Trends Cogn Sci 2013; 17:401-12. [PMID: 23876494 PMCID: PMC3730178 DOI: 10.1016/j.tics.2013.06.007]
Abstract
Representational geometry is a framework that enables us to relate brain, computation, and cognition. Representations in brains and models can be characterized by representational distance matrices. Distance matrices can be readily compared to test computational models. We review recent insights into perception, cognition, memory, and action and discuss current challenges.
The cognitive concept of representation plays a key role in theories of brain information processing. However, linking neuronal activity to representational content and cognitive theory remains challenging. Recent studies have characterized the representational geometry of neural population codes by means of representational distance matrices, enabling researchers to compare representations across stages of processing and to test cognitive and computational theories. Representational geometry provides a useful intermediate level of description, capturing both the information represented in a neuronal population code and the format in which it is represented. We review recent insights gained with this approach in perception, memory, cognition, and action. Analyses of representational geometry can compare representations between models and the brain, and promise to explain brain computation as transformation of representational similarity structure.
481
Brandmeyer A, Farquhar JDR, McQueen JM, Desain PWM. Decoding speech perception by native and non-native speakers using single-trial electrophysiological data. PLoS One 2013; 8:e68261. [PMID: 23874567 PMCID: PMC3708957 DOI: 10.1371/journal.pone.0068261]
Abstract
Brain-computer interfaces (BCIs) are systems that use real-time analysis of neuroimaging data to determine the mental state of their user for purposes such as providing neurofeedback. Here, we investigate the feasibility of a BCI based on speech perception. Multivariate pattern classification methods were applied to single-trial EEG data collected during speech perception by native and non-native speakers. Two principal questions were asked: 1) Can differences in the perceived categories of pairs of phonemes be decoded at the single-trial level? 2) Can these same categorical differences be decoded across participants, within or between native-language groups? Results indicated that classification performance progressively increased with respect to the categorical status (within, boundary or across) of the stimulus contrast, and was also influenced by the native language of individual participants. Classifier performance showed strong relationships with traditional event-related potential measures and behavioral responses. The results of the cross-participant analysis indicated an overall increase in average classifier performance when trained on data from all participants (native and non-native). A second cross-participant classifier trained only on data from native speakers led to an overall improvement in performance for native speakers, but a reduction in performance for non-native speakers. We also found that the native language of a given participant could be decoded on the basis of EEG data with accuracy above 80%. These results indicate that electrophysiological responses underlying speech perception can be decoded at the single-trial level, and that decoding performance systematically reflects graded changes in the responses related to the phonological status of the stimuli. This approach could be used in extensions of the BCI paradigm to support perceptual learning during second language acquisition.
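Cross-validated single-trial decoding of the kind described can be sketched as follows. A nearest-class-centroid classifier on simulated two-class data stands in for the regularized classifiers typically used on EEG; the feature counts and class separation are invented for illustration.

```python
import numpy as np

def nearest_centroid_cv(X, y, n_folds=5, seed=0):
    """Cross-validated single-trial decoding accuracy with a
    nearest-class-centroid classifier (a simple stand-in for the
    classifiers usually applied to EEG features)."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(y))
    folds = np.array_split(order, n_folds)
    classes = np.unique(y)
    correct = 0
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        # Class centroids from training trials only.
        cents = np.stack([X[train][y[train] == c].mean(axis=0)
                          for c in classes])
        # Assign each test trial to the nearest centroid.
        d = ((X[test][:, None, :] - cents[None, :, :])**2).sum(axis=2)
        pred = classes[np.argmin(d, axis=1)]
        correct += int((pred == y[test]).sum())
    return correct / len(y)

# Simulated trials: two phoneme categories with a small mean difference.
rng = np.random.default_rng(4)
n_per, n_feat = 100, 30
shift = np.zeros(n_feat); shift[:5] = 1.2
X = np.vstack([rng.standard_normal((n_per, n_feat)),
               rng.standard_normal((n_per, n_feat)) + shift])
y = np.repeat([0, 1], n_per)
acc = nearest_centroid_cv(X, y)
```

Graded category effects like those reported would show up as accuracy varying with the size of the class separation (`shift` here), from chance for identical classes upward.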
Affiliation(s)
- Alex Brandmeyer: Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
482
Han J, Ji X, Hu X, Zhu D, Li K, Jiang X, Cui G, Guo L, Liu T. Representing and retrieving video shots in human-centric brain imaging space. IEEE Trans Image Process 2013; 22:2723-36. [PMID: 23568507 PMCID: PMC3984391 DOI: 10.1109/tip.2013.2256919]
Abstract
Meaningful representation and effective retrieval of video shots in a large-scale database has been a profound challenge for the image/video processing and computer vision communities. A great deal of effort has been devoted to the extraction of low-level visual features, such as color, shape, texture, and motion, for characterizing and retrieving video shots. However, the accuracy of these feature descriptors is still far from satisfactory due to the well-known semantic gap. To alleviate this problem, this paper investigates a novel methodology for representing and retrieving video shots using human-centric high-level features derived in brain imaging space (BIS), where brain responses to the natural stimulus of video watching can be explored and interpreted. First, our recently developed dense individualized and common connectivity-based cortical landmarks (DICCCOL) system is employed to locate large-scale functional brain networks and their regions of interest (ROIs) that are involved in the comprehension of the video stimulus. Then, functional connectivities between various functional ROI pairs are utilized as BIS features to characterize the brain's comprehension of video semantics. An effective feature selection procedure is then applied to learn the most relevant features while removing redundancy, which results in the formation of the final BIS features. Afterwards, a mapping from low-level visual features to high-level semantic features in the BIS is built via the Gaussian process regression (GPR) algorithm, and a manifold structure is then inferred, in which video key frames are represented by the mapped feature vectors in the BIS. Finally, the manifold-ranking algorithm, which accounts for the relationships among all data, is applied to measure the similarity between key frames of video shots. Experimental results on the TRECVID 2005 dataset demonstrate the superiority of the proposed work in comparison with traditional methods.
Affiliation(s)
- Junwei Han: School of Automation, Northwestern Polytechnical University, Xi’an 710072, China
483
From Vivaldi to Beatles and back: predicting lateralized brain responses to music. Neuroimage 2013; 83:627-36. [PMID: 23810975 DOI: 10.1016/j.neuroimage.2013.06.064]
Abstract
We aimed at predicting the temporal evolution of brain activity in naturalistic music listening conditions using a combination of neuroimaging and acoustic feature extraction. Participants were scanned using functional Magnetic Resonance Imaging (fMRI) while listening to two musical medleys, including pieces from various genres with and without lyrics. Regression models were built to predict voxel-wise brain activations which were then tested in a cross-validation setting in order to evaluate the robustness of the hence created models across stimuli. To further assess the generalizability of the models we extended the cross-validation procedure by including another dataset, which comprised continuous fMRI responses of musically trained participants to an Argentinean tango. Individual models for the two musical medleys revealed that activations in several areas in the brain belonging to the auditory, limbic, and motor regions could be predicted. Notably, activations in the medial orbitofrontal region and the anterior cingulate cortex, relevant for self-referential appraisal and aesthetic judgments, could be predicted successfully. Cross-validation across musical stimuli and participant pools helped identify a region of the right superior temporal gyrus, encompassing the planum polare and the Heschl's gyrus, as the core structure that processed complex acoustic features of musical pieces from various genres, with or without lyrics. Models based on purely instrumental music were able to predict activation in the bilateral auditory cortices, parietal, somatosensory, and left hemispheric primary and supplementary motor areas. The presence of lyrics on the other hand weakened the prediction of activations in the left superior temporal gyrus. Our results suggest spontaneous emotion-related processing during naturalistic listening to music and provide supportive evidence for the hemispheric specialization for categorical sounds with realistic stimuli. We herewith introduce a powerful means to predict brain responses to music, speech, or soundscapes across a large variety of contexts.
484
Evers K, Sigman M. Possibilities and limits of mind-reading: a neurophilosophical perspective. Conscious Cogn 2013; 22:887-97. [PMID: 23807515 DOI: 10.1016/j.concog.2013.05.011]
Abstract
Access to other minds once presupposed other individuals' expressions and narrations. Today, several methods have been developed that can measure brain states relevant for assessing mental states without first-person overt behavior or speech. Functional magnetic resonance imaging and trace conditioning are used clinically to identify patterns of activity in the brain that suggest the presence of consciousness in people suffering from severe consciousness disorders, and to establish cerebral communication with patients who are motorically unable to communicate. The techniques are also used non-clinically to access subjective awareness in adults and infants. In this article we inspect technical and theoretical limits on brain-machine interface access to other minds. We argue that these techniques hold the promise of important medical breakthroughs, opening up new vistas for communication and for understanding the infant mind. Yet they also give rise to ethical concerns, notably misuse as a consequence of hype and misinterpretation.
Affiliation(s)
- Kathinka Evers: Centre for Research Ethics and Bioethics (CRB), Uppsala University, Sweden
485
De Martino F, Moerel M, van de Moortele PF, Ugurbil K, Goebel R, Yacoub E, Formisano E. Spatial organization of frequency preference and selectivity in the human inferior colliculus. Nat Commun 2013; 4:1386. [PMID: 23340426 PMCID: PMC3556928 DOI: 10.1038/ncomms2379]
Abstract
To date, the functional organization of human auditory sub-cortical structures can only be inferred from animal models. Here we use high-resolution functional MRI at ultra-high magnetic fields (7 Tesla) to map the organization of spectral responses in the human inferior colliculus (hIC), a sub-cortical structure fundamental for sound processing. We reveal a tonotopic map with a spatial gradient of preferred frequencies approximately oriented from dorso-lateral (low frequencies) to ventro-medial (high frequencies) locations. Furthermore, we observe a spatial organization of spectral selectivity (tuning) of fMRI responses in the hIC. Along isofrequency contours, fMRI-tuning is narrowest in central locations and broadest in the surrounding regions. Finally, by comparing sub-cortical and cortical auditory areas we show that fMRI-tuning is narrower in hIC than on the cortical surface. Our findings pave the way to non-invasive investigations of sound processing in human sub-cortical nuclei and to studying the interplay between sub-cortical and cortical neuronal populations.
Affiliation(s)
- Federico De Martino
- Faculty of Psychology and Neuroscience, Department of Cognitive Neurosciences, Maastricht University, Universiteitssingel 40, Maastricht 6229ER, The Netherlands.
486
Attention selectively modifies the representation of individual faces in the human brain. J Neurosci 2013; 33:6979-89. [PMID: 23595755 DOI: 10.1523/jneurosci.4142-12.2013] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Attention modifies neural tuning for low-level features, but it is unclear how attention influences tuning for complex stimuli. We investigated this question in humans using fMRI and face stimuli. Participants were shown six faces (F1-F6) along a morph continuum, and selectivity was quantified by constructing tuning curves for individual voxels. Face-selective voxels exhibited greater responses to their preferred face than to nonpreferred faces, particularly in posterior face areas. Anterior face areas instead displayed tuning for face categories: voxels in these areas preferred either the first (F1-F3) or second (F4-F6) half of the morph continuum. Next, we examined the effects of attention on voxel tuning by having subjects direct attention to one of the superimposed images of F1 and F6. We found that attention selectively enhanced responses in voxels preferring the attended face. Together, our results demonstrate that single voxels carry information about individual faces and that the nature of this information varies across cortical face areas. Additionally, we found that attention selectively enhances these representations. Our findings suggest that attention may act via a unitary principle of selective enhancement of responses to both simple and complex stimuli across multiple stages of the visual hierarchy.
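The voxelwise tuning-curve construction this abstract describes can be sketched with simulated data (a toy Python illustration; the Gaussian tuning shape, array sizes, and variable names are assumptions, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated responses: 50 voxels x 6 faces (F1-F6 along a morph continuum).
# The Gaussian tuning shape and noise level are assumptions for illustration.
n_voxels, n_faces = 50, 6
preferred = rng.integers(0, n_faces, size=n_voxels)      # preferred face per voxel
dist = np.abs(np.arange(n_faces)[None, :] - preferred[:, None])
responses = np.exp(-dist**2 / 2.0) + 0.1 * rng.standard_normal((n_voxels, n_faces))

# Tuning curve: re-centre each voxel's responses on its preferred face,
# then average across voxels at each distance from preference.
tuning = np.full((n_voxels, 2 * n_faces - 1), np.nan)
for v in range(n_voxels):
    for f in range(n_faces):
        tuning[v, f - preferred[v] + n_faces - 1] = responses[v, f]
mean_tuning = np.nanmean(tuning, axis=0)
```

Re-centring and averaging yields the population tuning curve; attentional enhancement would then be assessed by comparing such curves across attention conditions.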
487
Cognitive-motor brain-machine interfaces. J Physiol Paris 2013; 108:38-44. [PMID: 23774120 DOI: 10.1016/j.jphysparis.2013.05.005] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2012] [Revised: 03/27/2013] [Accepted: 05/23/2013] [Indexed: 11/21/2022]
Abstract
Brain-machine interfaces (BMIs) open new horizons for the treatment of paralyzed persons, giving hope for the artificial restoration of lost physiological functions. Whereas BMI development has mainly focused on motor rehabilitation, recent studies have suggested that higher cognitive functions can also be deciphered from brain activity, bypassing low-level planning and execution functions and replacing them with computer-controlled effectors. This review describes the new generation of cognitive-motor BMIs, focusing on three BMI types. By outlining recent progress in developing these BMI types, we aim to provide a unified view of contemporary research towards the replacement of behavioral outputs of cognitive processes by direct interaction with the brain.
488
Davis T, Poldrack RA. Measuring neural representations with fMRI: practices and pitfalls. Ann N Y Acad Sci 2013; 1296:108-34. [PMID: 23738883 DOI: 10.1111/nyas.12156] [Citation(s) in RCA: 106] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Recently, there has been a dramatic increase in the number of functional magnetic resonance imaging studies seeking to answer questions about how the brain represents information. Representational questions are of particular importance in connecting neuroscientific and cognitive levels of analysis because it is at the representational level that many formal models of cognition make distinct predictions. This review discusses techniques for univariate, adaptation, and multivoxel analysis, and how they have been used to answer questions about content specificity in different regions of the brain, how this content is organized, and how representations are shaped by and contribute to cognitive processes. Each of the analysis techniques makes different assumptions about the underlying neural code and thus differs in how it can be applied to specific questions. We also discuss the many pitfalls of representational analysis, from the flexibility in data analysis pipelines to emergent nonrepresentational relationships that can arise between stimuli in a task.
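A minimal instance of the multivoxel analysis family the review discusses is cross-validated decoding; here sketched with a nearest-centroid classifier on simulated data (illustrative only; the review covers many variants and the pitfalls that attend them):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: 4 runs x 20 trials x 30 voxels, two conditions (0/1).
# Sizes and signal strength are arbitrary assumptions for illustration.
n_runs, n_trials, n_voxels = 4, 20, 30
labels = np.tile(np.arange(2).repeat(n_trials // 2), (n_runs, 1))
signal = rng.standard_normal(n_voxels)            # condition-1 multivoxel pattern
data = rng.standard_normal((n_runs, n_trials, n_voxels))
data += labels[..., None] * 0.8 * signal          # add pattern to condition-1 trials

# Leave-one-run-out cross-validation: train and test sets stay independent,
# avoiding one of the pitfalls the review highlights.
accuracies = []
for test_run in range(n_runs):
    train = np.delete(np.arange(n_runs), test_run)
    X_tr = data[train].reshape(-1, n_voxels)
    y_tr = labels[train].ravel()
    centroids = np.stack([X_tr[y_tr == c].mean(axis=0) for c in (0, 1)])
    d = np.linalg.norm(data[test_run][:, None, :] - centroids[None], axis=2)
    pred = d.argmin(axis=1)
    accuracies.append((pred == labels[test_run]).mean())

mean_acc = float(np.mean(accuracies))
```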
Affiliation(s)
- Tyler Davis
- Imaging Research Center, University of Texas at Austin, 1 University Station, Austin, TX 78712, USA.
489
Moran JM, Zaki J. Functional Neuroimaging and Psychology: What Have You Done for Me Lately? J Cogn Neurosci 2013; 25:834-42. [DOI: 10.1162/jocn_a_00380] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
Abstract
Functional imaging has become a primary tool in the study of human psychology but is not without its detractors. Although cognitive neuroscientists have made great strides in understanding the neural instantiation of countless cognitive processes, commentators have sometimes argued that functional imaging provides little or no utility for psychologists. And indeed, myriad studies over the last quarter century have employed the technique of brain mapping—identifying the neural correlates of various psychological phenomena—in ways that bear minimally on psychological theory. How can brain mapping be made more relevant to behavioral scientists broadly? Here, we describe three trends that increase precisely this relevance: (i) the use of neuroimaging data to adjudicate between competing psychological theories through forward inference, (ii) isolating neural markers of information processing steps to better understand complex tasks and psychological phenomena through probabilistic reverse inference, and (iii) using brain activity to predict subsequent behavior. Critically, these new approaches build on the extensive tradition of brain mapping, suggesting that efforts in this area—although not initially maximally relevant to psychology—can indeed be used in ways that constrain and advance psychological theory.
490
Kay KN, Winawer J, Rokem A, Mezer A, Wandell BA. A two-stage cascade model of BOLD responses in human visual cortex. PLoS Comput Biol 2013; 9:e1003079. [PMID: 23737741 PMCID: PMC3667759 DOI: 10.1371/journal.pcbi.1003079] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2012] [Accepted: 04/18/2013] [Indexed: 12/03/2022] Open
Abstract
Visual neuroscientists have discovered fundamental properties of neural representation through careful analysis of responses to controlled stimuli. Typically, different properties are studied and modeled separately. To integrate our knowledge, it is necessary to build general models that begin with an input image and predict responses to a wide range of stimuli. In this study, we develop a model that accepts an arbitrary band-pass grayscale image as input and predicts blood oxygenation level dependent (BOLD) responses in early visual cortex as output. The model has a cascade architecture, consisting of two stages of linear and nonlinear operations. The first stage involves well-established computations—local oriented filters and divisive normalization—whereas the second stage involves novel computations—compressive spatial summation (a form of normalization) and a variance-like nonlinearity that generates selectivity for second-order contrast. The parameters of the model, which are estimated from BOLD data, vary systematically across visual field maps: compared to primary visual cortex, extrastriate maps generally have larger receptive field size, stronger levels of normalization, and increased selectivity for second-order contrast. Our results provide insight into how stimuli are encoded and transformed in successive stages of visual processing. Much has been learned about how stimuli are represented in the visual system from measuring responses to carefully designed stimuli. Typically, different studies focus on different types of stimuli. Making sense of the large array of findings requires integrated models that explain responses to a wide range of stimuli. In this study, we measure functional magnetic resonance imaging (fMRI) responses in early visual cortex to a wide range of band-pass filtered images, and construct a computational model that takes the stimuli as input and predicts the fMRI responses as output. 
The model has a cascade architecture, consisting of two stages of linear and nonlinear operations. A novel component of the model is a nonlinear operation that generates selectivity for second-order contrast, that is, variations in contrast-energy across the visual field. We find that this nonlinearity is stronger in extrastriate areas V2 and V3 than in primary visual cortex V1. Our results provide insight into how stimuli are encoded and transformed in the visual system.
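The "variance-like nonlinearity that generates selectivity for second-order contrast" can be illustrated with a toy computation (patch size, images, and function names are assumptions, not the fitted model):

```python
import numpy as np

def local_energy(img, size=8):
    # Stage-1 proxy: local contrast energy (std) in non-overlapping patches.
    h, w = img.shape
    blocks = img[:h - h % size, :w - w % size].reshape(h // size, size, -1, size)
    return blocks.std(axis=(1, 3))

def second_order_response(img):
    # Stage-2 "variance-like" nonlinearity: responds to variation in contrast
    # energy across space, not to the mean contrast level itself.
    return local_energy(img).var()

rng = np.random.default_rng(4)
noise = rng.standard_normal((64, 64))

uniform = 0.5 * noise                       # same contrast everywhere
mask = np.zeros((64, 64))
mask[:, :32] = 1.0
modulated = (0.2 + 0.6 * mask) * noise      # contrast varies across the image

r_uniform = second_order_response(uniform)
r_modulated = second_order_response(modulated)
```

A unit with this nonlinearity responds more to the contrast-modulated image than to the uniform-contrast one, even though both contain the same carrier noise.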
Affiliation(s)
- Kendrick N Kay
- Department of Psychology, Stanford University, Stanford, California, USA.
491
Op de Beeck HP, Vermaercke B, Woolley DG, Wenderoth N. Combinatorial brain decoding of people's whereabouts during visuospatial navigation. Front Neurosci 2013; 7:78. [PMID: 23730269 PMCID: PMC3657635 DOI: 10.3389/fnins.2013.00078] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2013] [Accepted: 04/30/2013] [Indexed: 12/03/2022] Open
Abstract
Complex behavior typically relies upon many different processes which are related to activity in multiple brain regions. In contrast, neuroimaging analyses typically focus upon isolated processes. Here we present a new approach, combinatorial brain decoding, in which we decode complex behavior by combining the information which we can retrieve from the neural signals about the many different sub-processes. The case in point is visuospatial navigation. We explore the extent to which the route travelled by human subjects (N = 3) in a complex virtual maze can be decoded from activity patterns as measured with functional magnetic resonance imaging. Preliminary analyses suggest that it is difficult to directly decode spatial position from regions known to contain an explicit cognitive map of the environment, such as the hippocampus. Instead, we were able to indirectly derive spatial position from the pattern of activity in visual and motor cortex. The non-spatial representations in these regions reflect processes which are inherent to navigation, such as which stimuli are perceived at which point in time and which motor movement is executed when (e.g., turning left at a crossroad). Highly successful decoding of routes followed through the maze was possible by combining information about multiple aspects of navigation events across time and across multiple cortical regions. This “proof of principle” study highlights how visuospatial navigation is related to the combined activity of multiple brain regions, and establishes combinatorial brain decoding as a means to study complex mental events that involve a dynamic interplay of many cognitive processes.
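The combinatorial idea of summing evidence from decoders trained on different regions can be sketched as follows (simulated log-evidence; the numbers of routes and trials are arbitrary assumptions, not the study's design):

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated task: decode which of 4 routes a subject travels, from two noisy
# "region" decoders (e.g., visual and motor cortex).
n_routes, n_trials = 4, 500
true_route = rng.integers(0, n_routes, n_trials)

def region_log_evidence(strength):
    # Log-probabilities over routes that peak (weakly) at the true route.
    logits = rng.standard_normal((n_trials, n_routes))
    logits[np.arange(n_trials), true_route] += strength
    return logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

visual = region_log_evidence(1.0)
motor = region_log_evidence(1.0)

# Combining independent sources of evidence: add log-probabilities.
combined = visual + motor

acc_visual = (visual.argmax(axis=1) == true_route).mean()
acc_combined = (combined.argmax(axis=1) == true_route).mean()
```

Each decoder alone is weak; combining their (assumed independent) evidence raises accuracy, which is the "combinatorial" principle of the study.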
Affiliation(s)
- Hans P Op de Beeck
- Laboratory of Biological Psychology, University of Leuven (KU Leuven) Leuven, Belgium
492
Omer DB, Hildesheim R, Grinvald A. Temporally-structured acquisition of multidimensional optical imaging data facilitates visualization of elusive cortical representations in the behaving monkey. Neuroimage 2013; 82:237-51. [PMID: 23689017 DOI: 10.1016/j.neuroimage.2013.05.045] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2012] [Revised: 04/13/2013] [Accepted: 05/05/2013] [Indexed: 11/24/2022] Open
Abstract
Fundamental understanding of higher cognitive functions can greatly benefit from imaging of cortical activity with high spatiotemporal resolution in the behaving non-human primate. To achieve rapid imaging of high-resolution dynamics of cortical representations of spontaneous and evoked activity, we designed a novel data acquisition protocol for sensory stimulation by rapidly interleaving multiple stimuli in continuous sessions of optical imaging with voltage-sensitive dyes. We also tested a new algorithm for the "temporally structured component analysis" (TSCA) of a multidimensional time series that was developed for our new data acquisition protocol but had previously been tested only on simulated data (Blumenfeld, 2010). In addition to the raw data, the algorithm incorporates prior knowledge about the temporal structure of the data as well as other available information. Here we show that TSCA can successfully separate functional signal components from other signals referred to as noise. Imaging of responses to multiple visual stimuli, utilizing voltage-sensitive dyes, was performed on the visual cortex of awake monkeys. Multiple cortical representations, including orientation and ocular dominance maps as well as the hitherto elusive retinotopic representation of orientation stimuli, were extracted in only 10 s of imaging, approximately two orders of magnitude faster than accomplished by conventional methods. Since the approach is rather general, other imaging techniques may also benefit from the same stimulation protocol. This methodology can thus facilitate rapid optical imaging explorations in monkeys, rodents, and other species with a versatility and speed that were not feasible before.
Affiliation(s)
- David B Omer
- Department of Neurobiology, The Weizmann Institute of Science, 76100 Rehovot, Israel.
493
Conroy BR, Singer BD, Guntupalli JS, Ramadge PJ, Haxby JV. Inter-subject alignment of human cortical anatomy using functional connectivity. Neuroimage 2013; 81:400-411. [PMID: 23685161 DOI: 10.1016/j.neuroimage.2013.05.009] [Citation(s) in RCA: 65] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2012] [Revised: 04/24/2013] [Accepted: 05/01/2013] [Indexed: 10/26/2022] Open
Abstract
Inter-subject alignment of functional MRI (fMRI) data is necessary for group analyses. The standard approach to this problem matches anatomical features of the brain, such as major anatomical landmarks or cortical curvature. Precise alignment of functional cortical topographies, however, cannot be derived using only anatomical features. We propose a new inter-subject registration algorithm that aligns intra-subject patterns of functional connectivity across subjects. We derive functional connectivity patterns by correlating fMRI BOLD time-series, measured during movie viewing, between spatially remote cortical regions. We validate our technique extensively on real fMRI experimental data and compare our method to two state-of-the-art inter-subject registration algorithms. By cross-validating our method on independent datasets, we show that the derived alignment generalizes well to other experimental paradigms.
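The core step, matching voxels across subjects by the similarity of their functional connectivity profiles rather than by anatomical position, can be sketched like this (a toy simulation; the actual algorithm performs a constrained cortical registration, not the simple argmax match used here):

```python
import numpy as np

rng = np.random.default_rng(2)

def connectivity_profile(ts, targets):
    # Correlate each voxel's time series with each remote "target" time series.
    ts_z = (ts - ts.mean(axis=0)) / ts.std(axis=0)
    tg_z = (targets - targets.mean(axis=0)) / targets.std(axis=0)
    return ts_z.T @ tg_z / len(ts)

# Two "subjects" watch the same movie: shared sources drive both brains.
# All sizes and noise levels are arbitrary assumptions.
n_time, n_voxels, n_targets = 200, 10, 6
targets = rng.standard_normal((n_time, n_targets))
mixing = rng.standard_normal((n_targets, n_voxels))
source = targets @ mixing
sub1 = source + 0.5 * rng.standard_normal((n_time, n_voxels))
perm = rng.permutation(n_voxels)        # subject 2's voxels are spatially shuffled
sub2 = source[:, perm] + 0.5 * rng.standard_normal((n_time, n_voxels))

# Align by matching connectivity profiles rather than anatomical position.
c1 = connectivity_profile(sub1, targets)
c2 = connectivity_profile(sub2, targets)
sim = np.corrcoef(c1, c2)[:n_voxels, n_voxels:]   # profile-to-profile similarity
recovered = sim.argmax(axis=1)                    # best sub2 match per sub1 voxel
```

Even though subject 2's voxels are shuffled (misaligned anatomy), their connectivity fingerprints recover the correspondence.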
Affiliation(s)
- Bryan R Conroy
- Department of Electrical Engineering, Princeton University, Princeton, NJ, USA; Department of Biomedical Engineering, Columbia University, New York, NY, USA.
- Benjamin D Singer
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- J Swaroop Guntupalli
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH, USA
- Peter J Ramadge
- Department of Electrical Engineering, Princeton University, Princeton, NJ, USA
- James V Haxby
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH, USA; Center for Mind/Brain Sciences (CIMeC), Universitá degli studi di Trento, Rovereto, Italy
494
Lei Y, Tong L, Yan B. A mixed L2 norm regularized HRF estimation method for rapid event-related fMRI experiments. Comput Math Methods Med 2013; 2013:643129. [PMID: 23762193 PMCID: PMC3665251 DOI: 10.1155/2013/643129] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/04/2013] [Revised: 03/26/2013] [Accepted: 04/08/2013] [Indexed: 11/18/2022]
Abstract
Brain state decoding or "mind reading" via multivoxel pattern analysis (MVPA) has become a popular focus of functional magnetic resonance imaging (fMRI) studies. In brain decoding, the stimulus presentation rate is increased as much as possible to collect many training samples and obtain an effective and reliable classifier or computational model. However, for extremely rapid event-related experiments, the blood-oxygen-level-dependent (BOLD) signals evoked by adjacent trials overlap heavily in the time domain, making trial-specific BOLD responses difficult to identify. In addition, a voxel-specific hemodynamic response function (HRF), which is useful in MVPA, should be used in estimation to reduce the loss of weak information across voxels and obtain fine-grained spatial information. Regularization methods have been widely used to increase the efficiency of HRF estimates. In this study, we propose a regularization framework called mixed L2 norm regularization. This framework involves Tikhonov regularization and an additional L2 norm regularization term to calculate reliable HRF estimates. This technique improves the accuracy of HRF estimates and significantly increases the classification accuracy of the brain decoding task when applied to a rapid event-related four-category object classification experiment. Finally, issues essential for rapid event-related experiments, such as the impact of low-frequency fluctuations (LFF) and the influence of smoothing, are discussed.
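The Tikhonov component of the proposed framework reduces to a ridge-type least-squares solve; here is a sketch on simulated rapid event-related data (illustrative only; the paper's mixed L2 norm adds a second penalty term that is not reproduced, and the HRF shape and design are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def estimate_hrf(X, y, lam=1.0):
    # Tikhonov (ridge) solution: h = (X'X + lam*I)^{-1} X'y.
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

# Toy rapid event-related design: 60 events whose responses overlap in time.
n_time, hrf_len = 300, 15
t = np.arange(hrf_len)
true_hrf = t**2 * np.exp(-t)            # crude gamma-like shape (illustrative)
true_hrf /= true_hrf.max()

onsets = np.zeros(n_time)
onsets[rng.choice(n_time - hrf_len, size=60, replace=False)] = 1.0
# Convolution as a design matrix: column j is the onset vector shifted by j.
X = np.column_stack([np.roll(onsets, j) for j in range(hrf_len)])
y = X @ true_hrf + 0.5 * rng.standard_normal(n_time)

h_hat = estimate_hrf(X, y, lam=2.0)
```

Despite heavy temporal overlap between adjacent trials, the regularized solve recovers the HRF shape from the deconvolution design.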
Affiliation(s)
- Yu Lei
- China National Digital Switching System Engineering and Technological Research Center, Zhengzhou 450002, China
- Li Tong
- China National Digital Switching System Engineering and Technological Research Center, Zhengzhou 450002, China
- Bin Yan
- China National Digital Switching System Engineering and Technological Research Center, Zhengzhou 450002, China
495
Tyler LK, Chiu S, Zhuang J, Randall B, Devereux BJ, Wright P, Clarke A, Taylor KI. Objects and categories: feature statistics and object processing in the ventral stream. J Cogn Neurosci 2013; 25:1723-35. [PMID: 23662861 DOI: 10.1162/jocn_a_00419] [Citation(s) in RCA: 81] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Recognizing an object involves more than just visual analyses; its meaning must also be decoded. Extensive research has shown that processing the visual properties of objects relies on a hierarchically organized stream in ventral occipitotemporal cortex, with increasingly more complex visual features being coded from posterior to anterior sites culminating in the perirhinal cortex (PRC) in the anteromedial temporal lobe (aMTL). The neurobiological principles of the conceptual analysis of objects remain more controversial. Much research has focused on two neural regions: the fusiform gyrus and the aMTL, both of which show semantic category differences, but of different types. fMRI studies show category differentiation in the fusiform gyrus, based on clusters of semantically similar objects, whereas category-specific deficits, specifically for living things, are associated with damage to the aMTL. These category-specific deficits for living things have been attributed to problems in differentiating between highly similar objects, a process that involves the PRC. To determine whether the PRC and the fusiform gyri contribute to different aspects of an object's meaning, with differentiation between confusable objects in the PRC and categorization based on object similarity in the fusiform, we carried out an fMRI study of object processing based on a feature-based model that characterizes the degree of semantic similarity and difference between objects and object categories. Participants saw 388 objects for which feature statistic information was available and named the objects at the basic level while undergoing fMRI scanning.
After controlling for the effects of visual information, we found that feature statistics that capture similarity between objects formed category clusters in fusiform gyri, such that objects with many shared features (typical of living things) were associated with activity in the lateral fusiform gyri whereas objects with fewer shared features (typical of nonliving things) were associated with activity in the medial fusiform gyri. Significantly, a feature statistic reflecting differentiation between highly similar objects, enabling object-specific representations, was associated with bilateral PRC activity. These results confirm that the statistical characteristics of conceptual object features are coded in the ventral stream, supporting a conceptual feature-based hierarchy, and integrating disparate findings of category responses in fusiform gyri and category deficits in aMTL into a unifying neurocognitive framework.
Affiliation(s)
- Lorraine K Tyler
- Department of Psychology, University of Cambridge, Cambridge, UK.
496
Yeatman JD, Rauschecker AM, Wandell BA. Anatomy of the visual word form area: adjacent cortical circuits and long-range white matter connections. Brain Lang 2013; 125:146-55. [PMID: 22632810 PMCID: PMC3432298 DOI: 10.1016/j.bandl.2012.04.010] [Citation(s) in RCA: 164] [Impact Index Per Article: 14.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/09/2011] [Revised: 02/03/2012] [Accepted: 04/18/2012] [Indexed: 05/15/2023]
Abstract
Circuitry in ventral occipital-temporal cortex is essential for seeing words. We analyze the circuitry within a specific ventral-occipital region, the visual word form area (VWFA). The VWFA is immediately adjacent to the retinotopically organized VO-1 and VO-2 visual field maps and lies medial and inferior to visual field maps within motion selective human cortex. Three distinct white matter fascicles pass within close proximity to the VWFA: (1) the inferior longitudinal fasciculus, (2) the inferior frontal occipital fasciculus, and (3) the vertical occipital fasciculus. The vertical occipital fasciculus terminates in or adjacent to the functionally defined VWFA voxels in every individual. The vertical occipital fasciculus projects dorsally to language and reading related cortex. The combination of functional responses from cortex and anatomical measures in the white matter provides an overview of how the written word is encoded and communicated along the ventral occipital-temporal circuitry for seeing words.
Affiliation(s)
- Andreas M. Rauschecker
- Psychology Department, Stanford University, Stanford, CA 94305
- Medical Scientist Training Program and Neurosciences
- Brian A. Wandell
- Psychology Department, Stanford University, Stanford, CA 94305
- Stanford Center for Cognitive and Neurobiological Imaging
497
Multivoxel patterns reveal functionally differentiated networks underlying auditory feedback processing of speech. J Neurosci 2013; 33:4339-48. [PMID: 23467350 DOI: 10.1523/jneurosci.6319-11.2013] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection, and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations within a multivoxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was used to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during vocalization, compared with during passive listening. One network of regions appears to encode an "error signal" regardless of acoustic features of the error: this network, including right angular gyrus, right supplementary motor area, and bilateral cerebellum, yielded consistent neural patterns across acoustically different, distorted feedback types, only during articulation (not during passive listening). In contrast, a frontotemporal network appears sensitive to the speech features of auditory stimuli during passive listening; this preference for speech features was diminished when the same stimuli were presented as auditory concomitants of vocalization. A third network, showing a distinct functional pattern from the other two, appears to capture aspects of both neural response profiles. Together, our findings suggest that auditory feedback processing during speech motor control may rely on multiple, interactive, functionally differentiated neural systems.
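The neural-pattern-similarity logic, whereby an error-coding region yields similar patterns across acoustically different distortions while a feature-coding region does not, can be sketched with simulated patterns (all components and mixing weights are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
n_voxels = 100

error_signal = rng.standard_normal(n_voxels)   # shared "error" component
acoustic_a = rng.standard_normal(n_voxels)     # distortion-specific components
acoustic_b = rng.standard_normal(n_voxels)

def pattern_similarity(a, b):
    # Pearson correlation between two multivoxel activity patterns.
    return np.corrcoef(a, b)[0, 1]

# An "error-coding" region: patterns evoked by two acoustically different
# distortions share the error component and are therefore similar.
error_region_1 = error_signal + 0.3 * acoustic_a
error_region_2 = error_signal + 0.3 * acoustic_b

# A "feature-coding" region: patterns track the acoustics instead.
feature_region_1 = acoustic_a + 0.3 * error_signal
feature_region_2 = acoustic_b + 0.3 * error_signal

sim_error = pattern_similarity(error_region_1, error_region_2)
sim_feature = pattern_similarity(feature_region_1, feature_region_2)
```

High cross-distortion similarity in one region and low similarity in the other is the signature the whole-brain pattern-similarity analysis searches for.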
498
Kay KN, Winawer J, Mezer A, Wandell BA. Compressive spatial summation in human visual cortex. J Neurophysiol 2013; 110:481-94. [PMID: 23615546 DOI: 10.1152/jn.00105.2013] [Citation(s) in RCA: 184] [Impact Index Per Article: 16.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Neurons within a small (a few cubic millimeters) region of visual cortex respond to stimuli within a restricted region of the visual field. Previous studies have characterized the population response of such neurons using a model that sums contrast linearly across the visual field. In this study, we tested linear spatial summation of population responses using blood oxygenation level-dependent (BOLD) functional MRI. We measured BOLD responses to a systematic set of contrast patterns and discovered systematic deviation from linearity: the data are more accurately explained by a model in which a compressive static nonlinearity is applied after linear spatial summation. We found that the nonlinearity is present in early visual areas (e.g., V1, V2) and grows more pronounced in relatively anterior extrastriate areas (e.g., LO-2, VO-2). We then analyzed the effect of compressive spatial summation in terms of changes in the position and size of a viewed object. Compressive spatial summation is consistent with tolerance to changes in position and size, an important characteristic of object representation.
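The compressive spatial summation model can be sketched directly from its description: weight the stimulus by a Gaussian population receptive field, sum linearly, then apply a static compressive nonlinearity (parameter values here are arbitrary assumptions, not fitted estimates):

```python
import numpy as np

def css_response(contrast_image, prf, n=0.5, g=1.0):
    # CSS model: linear summation over the pRF followed by a static
    # compressive nonlinearity (exponent n < 1).
    return g * (contrast_image * prf).sum() ** n

# Toy stimulus grid and Gaussian pRF (parameter values are arbitrary).
x = np.linspace(-1, 1, 64)
X, Y = np.meshgrid(x, x)
prf = np.exp(-(X**2 + Y**2) / (2 * 0.3**2))
prf /= prf.sum()

small = (np.hypot(X, Y) < 0.2).astype(float)   # small centred disc
large = (np.hypot(X, Y) < 0.4).astype(float)   # same shape, double the radius

r_small = css_response(small, prf)
r_large = css_response(large, prf)
linear_ratio = (large * prf).sum() / (small * prf).sum()
```

With n < 1, enlarging the stimulus increases the response sublinearly relative to a purely linear summation model, which is the size tolerance the abstract describes.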
Affiliation(s)
- Kendrick N Kay
- Department of Psychology, Stanford University, Stanford, CA, USA.
499
Attention during natural vision warps semantic representation across the human brain. Nat Neurosci 2013; 16:763-70. [PMID: 23603707 PMCID: PMC3929490 DOI: 10.1038/nn.3381] [Citation(s) in RCA: 200] [Impact Index Per Article: 18.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2013] [Accepted: 03/18/2013] [Indexed: 11/09/2022]
Abstract
Little is known about how attention changes the cortical representation of sensory information in humans. On the basis of neurophysiological evidence, we hypothesized that attention causes tuning changes to expand the representation of attended stimuli at the cost of unattended stimuli. To investigate this issue, we used functional magnetic resonance imaging to measure how semantic representation changed during visual search for different object categories in natural movies. We found that many voxels across occipito-temporal and fronto-parietal cortex shifted their tuning toward the attended category. These tuning shifts expanded the representation of the attended category and of semantically related, but unattended, categories, and compressed the representation of categories that were semantically dissimilar to the target. Attentional warping of semantic representation occurred even when the attended category was not present in the movie; thus, the effect was not a target-detection artifact. These results suggest that attention dynamically alters visual representation to optimize processing of behaviorally relevant objects during natural vision.
500
Horikawa T, Tamaki M, Miyawaki Y, Kamitani Y. Neural decoding of visual imagery during sleep. Science 2013; 340:639-42. [PMID: 23558170 DOI: 10.1126/science.1234330] [Citation(s) in RCA: 226] [Impact Index Per Article: 20.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
Abstract
Visual imagery during sleep has long been a topic of persistent speculation, but its private nature has hampered objective analysis. Here we present a neural decoding approach in which machine-learning models predict the contents of visual imagery during the sleep-onset period, given measured brain activity, by discovering links between human functional magnetic resonance imaging patterns and verbal reports with the assistance of lexical and image databases. Decoding models trained on stimulus-induced brain activity in visual cortical areas showed accurate classification, detection, and identification of contents. Our findings demonstrate that specific visual experience during sleep is represented by brain activity patterns shared by stimulus perception, providing a means to uncover subjective contents of dreaming using objective neural measurement.
Affiliation(s)
- T Horikawa
- ATR Computational Neuroscience Laboratories, Kyoto 619-0288, Japan