1
Simony E, Grossman S, Malach R. Brain-machine convergent evolution: Why finding parallels between brain and artificial systems is informative. Proc Natl Acad Sci U S A 2024; 121:e2319709121. PMID: 39356668; PMCID: PMC11474058; DOI: 10.1073/pnas.2319709121.
Abstract
Central nervous system neurons manifest a rich diversity of selectivity profiles, whose precise role is still poorly understood. Following the striking success of artificial networks, a major debate has emerged concerning their usefulness in explaining neuronal properties. Here we propose that finding parallels between artificial and neuronal networks is informative precisely because these systems are so different from each other. Our argument is based on an extension of the concept of convergent evolution, well established in biology, to the domain of artificial systems. Applying this concept to different areas and levels of the cortical hierarchy can be a powerful tool for elucidating the functional role of well-known cortical selectivities. Importantly, we further demonstrate that such parallels can uncover novel functionalities by showing that grid cells in the entorhinal cortex can be modeled to function as a set of basis functions in a lossy representation, such as the well-known JPEG compression. Thus, contrary to common intuition, we illustrate that finding parallels with artificial systems provides novel and informative insights, particularly in those cases that are far removed from realistic brain biology.
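The grid-cell proposal above leans on the idea of lossy compression over a fixed set of basis functions, as in JPEG's discrete cosine transform (DCT). The following pure-Python sketch illustrates only that general principle; the 8x8 block, the synthetic patch, and the kept-coefficient cutoff are illustrative assumptions, not the authors' model:

```python
import math

N = 8  # JPEG operates on 8x8 pixel blocks

def dct_basis(u, x, n=N):
    # Value of the u-th 1-D DCT-II basis function at position x;
    # alpha makes the basis orthonormal, so the transpose is the inverse.
    alpha = math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)
    return alpha * math.cos((2 * x + 1) * u * math.pi / (2 * n))

def dct2(block):
    # Forward 2-D DCT: project the block onto each separable basis function.
    return [[sum(block[x][y] * dct_basis(u, x) * dct_basis(v, y)
                 for x in range(N) for y in range(N))
             for v in range(N)] for u in range(N)]

def idct2(coeffs):
    # Inverse 2-D DCT: rebuild the block as a weighted sum of basis functions.
    return [[sum(coeffs[u][v] * dct_basis(u, x) * dct_basis(v, y)
                 for u in range(N) for v in range(N))
             for y in range(N)] for x in range(N)]

# A smooth synthetic 8x8 image patch.
block = [[math.sin(x / 3.0) + math.cos(y / 4.0) for y in range(N)]
         for x in range(N)]

coeffs = dct2(block)
# Lossy step: keep only low-frequency coefficients (u + v < 4), roughly as
# JPEG quantization does, and discard the rest.
kept = [[coeffs[u][v] if u + v < 4 else 0.0 for v in range(N)]
        for u in range(N)]
approx = idct2(kept)

err = max(abs(approx[x][y] - block[x][y])
          for x in range(N) for y in range(N))
```

Because the patch is smooth, almost all of its energy falls on the low-frequency basis functions, so discarding the rest changes the reconstruction only slightly; this is the sense in which a small basis set can carry a lossy but faithful representation.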
Affiliation(s)
- Erez Simony: Department of Brain Sciences, Weizmann Institute of Science, Rehovot 76100, Israel; Faculty of Electrical Engineering, Holon Institute of Technology, Holon 5810201, Israel
- Shany Grossman: Max Planck Institute for Human Development, Berlin 14195, Germany; Max Planck University College London Centre for Computational Psychiatry and Ageing Research, Berlin 14195, Germany; Institute of Psychology, Universität Hamburg, Hamburg 20146, Germany
- Rafael Malach: Department of Brain Sciences, Weizmann Institute of Science, Rehovot 76100, Israel
2
Fleming SM, Shea N. Quality space computations for consciousness. Trends Cogn Sci 2024; 28:896-906. PMID: 39025769; DOI: 10.1016/j.tics.2024.06.007.
Abstract
The quality space hypothesis about conscious experience proposes that conscious sensory states are experienced in relation to other possible sensory states. For instance, the colour red is experienced as being more like orange, and less like green or blue. Recent empirical findings suggest that subjective similarity space can be explained in terms of similarities in neural activation patterns. Here, we consider how localist, workspace, and higher-order theories of consciousness can accommodate claims about the qualitative character of experience and functionally support a quality space. We review existing empirical evidence for each of these positions, and highlight novel experimental tools, such as altering local activation spaces via brain stimulation or behavioural training, that can distinguish these accounts.
Affiliation(s)
- Stephen M Fleming: Wellcome Centre for Human Neuroimaging, University College London, London, UK; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, UK; Department of Experimental Psychology, University College London, London, UK; Canadian Institute for Advanced Research (CIFAR), Brain, Mind, and Consciousness Program, Toronto, ON, Canada
- Nicholas Shea: Institute of Philosophy, School of Advanced Study, University of London, London, UK; Faculty of Philosophy, University of Oxford, Oxford, UK
3
Bezsudnova Y, Quinn AJ, Jensen O. Optimizing magnetometers arrays and analysis pipelines for multivariate pattern analysis. J Neurosci Methods 2024; 412:110279. PMID: 39265820; DOI: 10.1016/j.jneumeth.2024.110279.
Abstract
BACKGROUND: Multivariate pattern analysis (MVPA) has proven an excellent tool in cognitive neuroscience. It also holds strong promise when applied to optically pumped magnetometer-based magnetoencephalography (OPM-MEG).
NEW METHOD: To optimize OPM-MEG systems for MVPA experiments, this study examines data from a conventional MEG magnetometer array, focusing on appropriate noise-reduction techniques for magnetometers. We determined the minimum number of sensors needed for robust MVPA in image categorization experiments.
RESULTS: We found that signal space separation (SSS) without proper regularization significantly lowered classification accuracy for a sub-array of 102 magnetometers or a sub-array of 204 gradiometers. We also found that classification accuracy did not improve beyond 30 sensors, irrespective of whether SSS had been applied.
COMPARISON WITH EXISTING METHODS: The power spectra of data filtered with SSS have a substantially higher noise floor than data cleaned with signal-space projection (SSP) or homogeneous field correction (HFC). Consequently, MVPA decoding results obtained from SSS-filtered data are significantly lower than those from all other methods employed.
CONCLUSIONS: When designing a MEG system based on SQUID magnetometers optimized for multivariate analysis in image categorization experiments, about 30 magnetometers are sufficient. We advise against applying SSS filters without proper regularization to data from MEG and OPM systems prior to performing MVPA: this method, albeit reducing low-frequency external noise contributions, also increases broadband noise. We recommend noise-reduction techniques that decrease or maintain the noise floor of the data, such as signal-space projection, homogeneous field correction, and gradient noise reduction.
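The plateau around 30 sensors has a simple statistical reading: once the informative signal subspace is covered, extra channels add little decodable information. A toy illustration with a nearest-centroid decoder on synthetic two-class "sensor" data (pure Python; the channel counts, noise level, and classifier are assumptions for illustration, not the paper's pipeline):

```python
import random

random.seed(0)
N_SENSORS = 60

# Class-specific mean pattern per sensor: the decodable signal.
MEANS = {c: [random.gauss(0, 1) for _ in range(N_SENSORS)] for c in (0, 1)}

def trials(n_per_class, noise=2.0):
    # Simulate single-trial sensor patterns: class mean plus Gaussian noise.
    X, y = [], []
    for c in (0, 1):
        for _ in range(n_per_class):
            X.append([MEANS[c][s] + random.gauss(0, noise)
                      for s in range(N_SENSORS)])
            y.append(c)
    return X, y

def nearest_centroid_acc(Xtr, ytr, Xte, yte, k):
    # Train/test a nearest-centroid decoder using only the first k sensors.
    cent = {}
    for c in (0, 1):
        rows = [x[:k] for x, lab in zip(Xtr, ytr) if lab == c]
        cent[c] = [sum(col) / len(rows) for col in zip(*rows)]
    correct = 0
    for x, lab in zip(Xte, yte):
        d = {c: sum((a - b) ** 2 for a, b in zip(x[:k], cent[c]))
             for c in (0, 1)}
        correct += (min(d, key=d.get) == lab)
    return correct / len(yte)

Xtr, ytr = trials(100)
Xte, yte = trials(100)
accs = {k: nearest_centroid_acc(Xtr, ytr, Xte, yte, k)
        for k in (2, 10, 30, 60)}
```

On this synthetic data, accuracy climbs steeply over the first handful of channels and then flattens, mirroring the saturation the authors report.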
Affiliation(s)
- Yulia Bezsudnova: Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham, UK
- Andrew J Quinn: Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham, UK
- Ole Jensen: Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham, UK
4
Contier O, Baker CI, Hebart MN. Distributed representations of behaviour-derived object dimensions in the human visual system. Nat Hum Behav 2024. PMID: 39251723; DOI: 10.1038/s41562-024-01980-y.
Abstract
Object vision is commonly thought to involve a hierarchy of brain regions processing increasingly complex image features, with high-level visual cortex supporting object recognition and categorization. However, object vision supports diverse behavioural goals, suggesting basic limitations of this category-centric framework. To address these limitations, we mapped a series of dimensions derived from a large-scale analysis of human similarity judgements directly onto the brain. Our results reveal broadly distributed representations of behaviourally relevant information, demonstrating selectivity to a wide variety of novel dimensions while capturing known selectivities for visual features and categories. Behaviour-derived dimensions were superior to categories at predicting brain responses, yielding mixed selectivity in much of visual cortex and sparse selectivity in category-selective clusters. This framework reconciles seemingly disparate findings regarding regional specialization, explaining category selectivity as a special case of sparse response profiles among representational dimensions, suggesting a more expansive view on visual processing in the human brain.
Affiliation(s)
- Oliver Contier: Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Max Planck School of Cognition, Leipzig, Germany
- Chris I Baker: Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Martin N Hebart: Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Medicine, Justus Liebig University Giessen, Giessen, Germany
5
Lifanov-Carr J, Griffiths BJ, Linde-Domingo J, Ferreira CS, Wilson M, Mayhew SD, Charest I, Wimber M. Reconstructing Spatiotemporal Trajectories of Visual Object Memories in the Human Brain. eNeuro 2024; 11:ENEURO.0091-24.2024. PMID: 39242212; PMCID: PMC11439564; DOI: 10.1523/eneuro.0091-24.2024.
Abstract
How the human brain reconstructs, step by step, the core elements of past experiences is still unclear. Here, we map the spatiotemporal trajectories along which visual object memories are reconstructed during associative recall. Specifically, we ask whether retrieval reinstates feature representations in a copy-like but reversed direction with respect to the initial perceptual experience, or whether this reconstruction involves format transformations and regions beyond those engaged during initial perception. Participants from two cohorts studied new associations between verbs and randomly paired object images, and subsequently recalled the objects when presented with the corresponding verb cue. We first analyze multivariate fMRI patterns to map where in the brain high- and low-level object features can be decoded during perception and retrieval, showing that retrieval is dominated by conceptual features, represented in comparatively late visual and parietal areas. A separately acquired EEG dataset is then used to track the temporal evolution of the reactivated patterns using similarity-based EEG-fMRI fusion. This fusion suggests that memory reconstruction proceeds from anterior frontotemporal to posterior occipital and parietal regions, in line with a conceptual-to-perceptual gradient, but only partly following the same trajectories as during perception. Specifically, a linear regression statistically confirms that the sequential activation of ventral visual stream regions is reversed between image perception and retrieval. The fusion analysis also suggests an information relay to frontoparietal areas late during retrieval. Together, the results shed light on the temporal dynamics of memory recall and the transformations that the information undergoes between the initial experience and its later reconstruction from memory.
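Similarity-based EEG-fMRI fusion, as used above, correlates the EEG representational dissimilarity matrix (RDM) at each time point with each region's fMRI RDM, yielding a time course of correspondence per region. A self-contained sketch on synthetic RDMs (pure Python; the item count, peak latencies, and noise level are invented for illustration and are not the study's data):

```python
import math
import random

random.seed(1)
N_ITEMS = 8  # stimuli; an RDM has one entry per stimulus pair
PAIRS = [(i, j) for i in range(N_ITEMS) for j in range(i + 1, N_ITEMS)]

def corr(a, b):
    # Pearson correlation between two vectorized RDMs.
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)
                    * sum((y - mb) ** 2 for y in b))
    return num / den

# Hypothetical fMRI RDMs for an early and a late region of interest.
rdm_early = [random.random() for _ in PAIRS]
rdm_late = [random.random() for _ in PAIRS]

def eeg_rdm(t, t_early=5, t_late=15, width=3.0):
    # Synthetic EEG RDM at time t: the early region's similarity structure
    # peaks first, the late region's later, plus measurement noise.
    w1 = math.exp(-((t - t_early) / width) ** 2)
    w2 = math.exp(-((t - t_late) / width) ** 2)
    return [w1 * e + w2 * l + random.gauss(0, 0.05)
            for e, l in zip(rdm_early, rdm_late)]

# Fusion: correlate the EEG RDM at every time point with each fMRI RDM.
eeg = [eeg_rdm(t) for t in range(25)]
fusion = {"early": [corr(p, rdm_early) for p in eeg],
          "late": [corr(p, rdm_late) for p in eeg]}

peak_early = max(range(25), key=lambda t: fusion["early"][t])
peak_late = max(range(25), key=lambda t: fusion["late"][t])
```

The ordering of the two correlation peaks is what licenses statements like "reconstruction proceeds from conceptual to perceptual regions": each region's fMRI RDM matches the EEG RDM best at a different latency.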
Affiliation(s)
- Julia Lifanov-Carr: School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom
- Benjamin J Griffiths: School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom
- Juan Linde-Domingo: School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom; Department of Experimental Psychology, Mind, Brain and Behavior Research Center (CIMCYC), University of Granada, 18011 Granada, Spain; Center for Adaptive Rationality, Max Planck Institute for Human Development, 14195 Berlin, Germany
- Catarina S Ferreira: School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom
- Martin Wilson: School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom
- Stephen D Mayhew: Institute of Health and Neurodevelopment (IHN), School of Psychology, Aston University, Birmingham B4 7ET, United Kingdom
- Ian Charest: Département de Psychologie, Université de Montréal, Montréal, Quebec H2V 2S9, Canada
- Maria Wimber: School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom; School of Psychology & Neuroscience and Centre for Cognitive Neuroimaging (CCNi), University of Glasgow, Glasgow G12 8QB, United Kingdom
6
Contier O, Baker CI, Hebart MN. Distributed representations of behavior-derived object dimensions in the human visual system. bioRxiv [Preprint] 2024:2023.08.23.553812. PMID: 37662312; PMCID: PMC10473665; DOI: 10.1101/2023.08.23.553812.
Abstract
Object vision is commonly thought to involve a hierarchy of brain regions processing increasingly complex image features, with high-level visual cortex supporting object recognition and categorization. However, object vision supports diverse behavioral goals, suggesting basic limitations of this category-centric framework. To address these limitations, we mapped a series of dimensions derived from a large-scale analysis of human similarity judgments directly onto the brain. Our results reveal broadly distributed representations of behaviorally-relevant information, demonstrating selectivity to a wide variety of novel dimensions while capturing known selectivities for visual features and categories. Behavior-derived dimensions were superior to categories at predicting brain responses, yielding mixed selectivity in much of visual cortex and sparse selectivity in category-selective clusters. This framework reconciles seemingly disparate findings regarding regional specialization, explaining category selectivity as a special case of sparse response profiles among representational dimensions, suggesting a more expansive view on visual processing in the human brain.
Affiliation(s)
- O Contier: Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Max Planck School of Cognition, Leipzig, Germany
- C I Baker: Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- M N Hebart: Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Medicine, Justus Liebig University Giessen, Giessen, Germany
7
Celeghin A, Borriero A, Orsenigo D, Diano M, Méndez Guerrero CA, Perotti A, Petri G, Tamietto M. Convolutional neural networks for vision neuroscience: significance, developments, and outstanding issues. Front Comput Neurosci 2023; 17:1153572. PMID: 37485400; PMCID: PMC10359983; DOI: 10.3389/fncom.2023.1153572.
Abstract
Convolutional Neural Networks (CNNs) are a class of machine learning models predominantly used in computer vision tasks that can achieve human-like performance through learning from experience. Their striking similarities to the structural and functional principles of the primate visual system allow for comparisons between these artificial networks and their biological counterparts, enabling exploration of how visual functions and neural representations may emerge in the real brain from a limited set of computational principles. After considering the basic features of CNNs, we discuss the opportunities and challenges of endorsing CNNs as in silico models of the primate visual system. Specifically, we highlight several emerging notions about the anatomical and physiological properties of the visual system that still need to be systematically integrated into current CNN models. These tenets include the implementation of parallel processing pathways from the early stages of retinal input and the reconsideration of several assumptions concerning the serial progression of information flow. We suggest design choices and architectural constraints that could facilitate a closer alignment with biology and provide causal evidence of the predictive link between the artificial and biological visual systems. Adopting this principled perspective could potentially lead to new research questions and applications of CNNs beyond modeling object recognition.
Affiliation(s)
- Davide Orsenigo: Department of Psychology, University of Torino, Turin, Italy
- Matteo Diano: Department of Psychology, University of Torino, Turin, Italy
- Marco Tamietto: Department of Psychology, University of Torino, Turin, Italy; Department of Medical and Clinical Psychology, and CoRPS (Center of Research on Psychology in Somatic Diseases), Tilburg University, Tilburg, Netherlands
8
Palenciano AF, Senoussi M, Formica S, González-García C. Canonical template tracking: Measuring the activation state of specific neural representations. Front Neuroimaging 2023; 1:974927. PMID: 37555182; PMCID: PMC10406196; DOI: 10.3389/fnimg.2022.974927.
Abstract
Multivariate analyses of neural data have become increasingly influential in cognitive neuroscience because they make it possible to address questions about the representational signatures of neurocognitive phenomena. Here, we describe Canonical Template Tracking: a multivariate approach that employs independent localizer tasks to assess the activation state of specific representations during the execution of cognitive paradigms. We illustrate the benefits of this methodology in characterizing the particular content and format of task-induced representations, comparing it with standard (cross-)decoding and representational similarity analyses. We then discuss relevant design decisions for experiments using this analysis approach, focusing on the nature of the localizer tasks from which the canonical templates are derived. We further provide a step-by-step tutorial of this method, stressing the relevant analysis choices for functional magnetic resonance imaging and magneto-/electroencephalography data. Importantly, we point out the potential pitfalls linked to canonical template tracking implementation and interpretation of the results, together with recommendations to mitigate them. To conclude, we provide some examples from previous literature that highlight the potential of this analysis to address relevant theoretical questions in cognitive neuroscience.
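The core of the approach described above is simple to state: derive a canonical template per condition from a localizer, then correlate task-phase activity patterns with each template to index which representation is active. A minimal pure-Python sketch of that logic (the voxel count, reinstatement strength, and noise level are illustrative assumptions, not values from the tutorial):

```python
import math
import random

random.seed(2)
N_VOXELS = 50

def corr(a, b):
    # Pearson correlation between two equal-length activity patterns.
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)
                    * sum((y - mb) ** 2 for y in b))
    return num / den

# Canonical templates: the mean localizer pattern for each condition.
templates = {cond: [random.gauss(0, 1) for _ in range(N_VOXELS)]
             for cond in ("faces", "houses")}

def task_trial(active, strength=0.8, noise=1.0):
    # Task-phase pattern in which the 'active' representation is
    # reinstated at a given strength, buried in trial noise.
    return [strength * templates[active][v] + random.gauss(0, noise)
            for v in range(N_VOXELS)]

# Tracking: correlate task trials against every template; the per-template
# score indexes how strongly that representation is currently activated.
n = 20
mean_scores = {cond: sum(corr(task_trial("faces"), templates[cond])
                         for _ in range(n)) / n
               for cond in ("faces", "houses")}
```

Here the "faces" template wins on face-reinstating trials even though no classifier is trained on the task data itself, which is precisely the selling point the authors contrast with standard decoding.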
Affiliation(s)
- Ana F. Palenciano: Mind, Brain, and Behavior Research Center, University of Granada, Granada, Spain
- Mehdi Senoussi: CLLE Lab, CNRS UMR 5263, University of Toulouse, Toulouse, France; Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Silvia Formica: Department of Psychology, Berlin School of Mind and Brain, Humboldt Universität zu Berlin, Berlin, Germany
9
Abstract
Humans are exquisitely sensitive to the spatial arrangement of visual features in objects and scenes, but not in visual textures. Category-selective regions in the visual cortex are widely believed to underlie object perception, suggesting such regions should distinguish natural images of objects from synthesized images containing similar visual features in scrambled arrangements. Contrary to this expectation, we demonstrate that representations in category-selective cortex do not discriminate natural images from feature-matched scrambles but can discriminate images of different categories, suggesting a texture-like encoding. We find similar insensitivity to feature arrangement in ImageNet-trained deep convolutional neural networks. This suggests the need to reconceptualize the role of category-selective cortex as representing a basis set of complex texture-like features, useful for a myriad of behaviors.
The human visual ability to recognize objects and scenes is widely thought to rely on representations in category-selective regions of the visual cortex. These representations could support object vision by specifically representing objects or, more simply, by representing complex visual features regardless of the particular spatial arrangement needed to constitute real-world objects, that is, by representing visual textures. To discriminate between these hypotheses, we leveraged an image synthesis approach that, unlike previous methods, provides independent control over the complexity and spatial arrangement of visual features. We found that human observers could easily detect a natural object among synthetic images with similar complex features that were spatially scrambled. However, observer models built from BOLD responses from category-selective regions, as well as a model of macaque inferotemporal cortex and ImageNet-trained deep convolutional neural networks, were all unable to identify the real object. This inability was not due to a lack of signal-to-noise, as all observer models could predict human performance in image categorization tasks. How then might these texture-like representations in category-selective regions support object perception? An image-specific readout from category-selective cortex yielded a representation that was more selective for natural feature arrangement, showing that the information necessary for natural object discrimination is available. Thus, our results suggest that the role of the human category-selective visual cortex is not to explicitly encode objects but rather to provide a basis set of texture-like features that can be infinitely reconfigured to flexibly learn and identify new object categories.
10
Wammes J, Norman KA, Turk-Browne N. Increasing stimulus similarity drives nonmonotonic representational change in hippocampus. eLife 2022; 11:e68344. PMID: 34989336; PMCID: PMC8735866; DOI: 10.7554/elife.68344.
Abstract
Studies of hippocampal learning have obtained seemingly contradictory results, with manipulations that increase coactivation of memories sometimes leading to differentiation of these memories, but sometimes not. These results could potentially be reconciled using the nonmonotonic plasticity hypothesis, which posits that representational change (memories moving apart or together) is a U-shaped function of the coactivation of these memories during learning. Testing this hypothesis requires manipulating coactivation over a wide enough range to reveal the full U-shape. To accomplish this, we used a novel neural network image synthesis procedure to create pairs of stimuli that varied parametrically in their similarity in high-level visual regions that provide input to the hippocampus. Sequences of these pairs were shown to human participants during high-resolution fMRI. As predicted, learning changed the representations of paired images in the dentate gyrus as a U-shaped function of image similarity, with neural differentiation occurring only for moderately similar images.
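The nonmonotonic plasticity hypothesis tested above posits a U-shaped mapping from memory coactivation to representational change. A toy, piecewise version of such a curve (a sketch of the hypothesis's shape only; the breakpoints and functional form are invented, not the fitted function from the paper):

```python
import math

def nmph(coactivation):
    # Toy nonmonotonic ('U-shaped') plasticity curve. Input: coactivation
    # of two memories in [0, 1]. Output: representational change, where
    # negative values mean differentiation (memories move apart) and
    # positive values mean integration (memories move together).
    if coactivation < 0.25:
        return 0.0  # too little coactivation: no plasticity
    if coactivation < 0.6:
        # moderate coactivation: the dip of the U (differentiation)
        return -math.sin(math.pi * (coactivation - 0.25) / 0.35)
    # high coactivation: strengthening, i.e. integration
    return (coactivation - 0.6) / 0.4
```

Under this shape, image pairs synthesized to be moderately similar land in the dip and differentiate, while highly similar pairs land on the rising arm, which is why sampling a wide similarity range is needed to reveal the full U.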
Affiliation(s)
- Jeffrey Wammes: Department of Psychology, Yale University, New Haven, United States; Department of Psychology, Queen's University, Kingston, Canada
- Kenneth A Norman: Department of Psychology, Princeton University, Princeton, United States; Princeton Neuroscience Institute, Princeton University, Princeton, United States
11
Zheng L, Gao Z, McAvan AS, Isham EA, Ekstrom AD. Partially overlapping spatial environments trigger reinstatement in hippocampus and schema representations in prefrontal cortex. Nat Commun 2021; 12:6231. PMID: 34711830; PMCID: PMC8553856; DOI: 10.1038/s41467-021-26560-w.
Abstract
When we remember a city that we have visited, we retrieve places related to finding our goal but also non-target locations within this environment. Yet, understanding how the human brain implements the neural computations underlying holistic retrieval remains unsolved, particularly for shared aspects of environments. Here, human participants learned and retrieved details from three partially overlapping environments while undergoing high-resolution functional magnetic resonance imaging (fMRI). Our findings show reinstatement of stores even when they are not related to a specific trial probe, providing evidence for holistic environmental retrieval. For stores shared between cities, we find evidence for pattern separation (representational orthogonalization) in hippocampal subfield CA2/3/DG and repulsion in CA1 (differentiation beyond orthogonalization). Additionally, our findings demonstrate that medial prefrontal cortex (mPFC) stores representations of the common spatial structure, termed schema, across environments. Together, our findings suggest how unique and common elements of multiple spatial environments are accessed computationally and neurally.
Affiliation(s)
- Li Zheng: Department of Psychology, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85721, USA; Evelyn McKnight Brain Institute, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85721, USA
- Zhiyao Gao: Department of Psychology, University of York, Heslington, York YO10 5DD, UK
- Andrew S. McAvan: Department of Psychology, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85721, USA; Evelyn McKnight Brain Institute, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85721, USA
- Eve A. Isham: Department of Psychology, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85721, USA; Evelyn McKnight Brain Institute, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85721, USA
- Arne D. Ekstrom: Department of Psychology, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85721, USA; Evelyn McKnight Brain Institute, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85721, USA
12
Bara I, Darda KM, Kurz AS, Ramsey R. Functional specificity and neural integration in the aesthetic appreciation of artworks with implied motion. Eur J Neurosci 2021; 54:7231-7259. PMID: 34585450; DOI: 10.1111/ejn.15479.
Abstract
Although there is growing interest in the neural foundations of aesthetic experience, it remains unclear how particular mental subsystems (e.g. perceptual, affective and cognitive) are involved in different types of aesthetic judgements. Here, we use fMRI to investigate the involvement of different neural networks during aesthetic judgements of visual artworks with implied motion cues. First, a behavioural experiment (N = 45) confirmed a preference for paintings with implied motion over static cues. Subsequently, in a preregistered fMRI experiment (N = 27), participants made aesthetic and motion judgements towards paintings representing human bodies in dynamic and static postures. Using functional region-of-interest and Bayesian multilevel modelling approaches, we provide no compelling evidence for unique sensitivity within or between neural systems associated with body perception, motion and affective processing during the aesthetic evaluation of paintings with implied motion. However, we show suggestive evidence that motion- and body-selective systems may integrate signals via functional connections with a separate neural network in dorsal parietal cortex, which may act as a relay or integration site. Our findings clarify the roles of basic visual and affective brain circuitry in evaluating a central aesthetic feature, implied motion, while also pointing towards promising future research directions, which involve modelling aesthetic preferences as a hierarchical interplay between visual and affective circuits and integration processes in frontoparietal cortex.
Affiliation(s)
- Ionela Bara: Wales Institute for Cognitive Neuroscience, School of Psychology, Bangor University, Bangor, UK
- Kohinoor Monish Darda: University of Glasgow, Glasgow, UK; Department of Psychology, Macquarie University, Sydney, Australia
- Andrew Solomon Kurz: VISN 17 Center of Excellence for Research on Returning War Veterans, Central Texas Veterans Health Care System, Temple, Texas, USA
- Richard Ramsey: Department of Psychology, Macquarie University, Sydney, Australia
13
Silvernagel MP, Ling AS, Nuyujukian P. A markerless platform for ambulatory systems neuroscience. Sci Robot 2021; 6:eabj7045. PMID: 34516749; DOI: 10.1126/scirobotics.abj7045.
Abstract
Motor systems neuroscience seeks to understand how the brain controls movement. To minimize confounding variables, large-animal studies typically constrain body movement from areas not under observation, ensuring consistent, repeatable behaviors. Such studies have fueled decades of research, but they may be artificially limiting the richness of neural data observed, preventing generalization to more natural movements and settings. Neuroscience studies of unconstrained movement would capture a greater range of behavior and a more complete view of neuronal activity, but instrumenting an experimental rig suitable for large animals presents substantial engineering challenges. Here, we present a markerless, full-body motion tracking and synchronized wireless neural electrophysiology platform for large, ambulatory animals. Composed of four depth (RGB-D) cameras that provide a 360° view of a 4.5-square-meter enclosed area, this system is designed to record a diverse range of neuroethologically relevant behaviors. This platform also allows for the simultaneous acquisition of hundreds of wireless neural recording channels in multiple brain regions. As behavioral and neuronal data are generated at rates below 200 megabytes per second, a single desktop can facilitate hours of continuous recording. This setup is designed for systems neuroscience and neuroengineering research, where synchronized kinematic behavior and neural data are the foundation for investigation. By enabling the study of previously unexplored movement tasks, this system can generate insights into the functioning of the mammalian motor system and provide a platform to develop brain-machine interfaces for unconstrained applications.
Affiliation(s)
- Alissa S Ling: Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Paul Nuyujukian: Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Department of Bioengineering, Stanford University, Stanford, CA, USA; Department of Neurosurgery, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA; Stanford Bio-X, Stanford University, Stanford, CA, USA

14
Yang J, Huber L, Yu Y, Bandettini PA. Linking cortical circuit models to human cognition with laminar fMRI. Neurosci Biobehav Rev 2021; 128:467-478. PMID: 34245758; DOI: 10.1016/j.neubiorev.2021.07.005.
Abstract
Laboratory animal research has provided significant insight into the function of cortical circuits at the laminar level, insight that has yet to be fully leveraged toward understanding human brain function on a similar spatiotemporal scale. The use of functional magnetic resonance imaging (fMRI) in conjunction with neural models provides new opportunities to build on this knowledge. During the last five years, human studies have demonstrated the value of high-resolution fMRI for studying laminar-specific activity in the human brain. This is mostly performed at ultra-high field strengths (≥ 7 T) and is known as laminar fMRI. Advancements in laminar fMRI are beginning to open new possibilities for studying questions in basic cognitive neuroscience. In this paper, we first review recent methodological advances in laminar fMRI and describe recent human laminar fMRI studies. We then discuss how laminar fMRI can help bridge the gap between cortical circuit models and human cognition.
Affiliation(s)
- Jiajia Yang: Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan; Section on Functional Imaging Methods, National Institute of Mental Health, Bethesda, MD, USA
- Laurentius Huber: MR-Methods Group, Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, the Netherlands
- Yinghua Yu: Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan; Section on Functional Imaging Methods, National Institute of Mental Health, Bethesda, MD, USA
- Peter A Bandettini: Section on Functional Imaging Methods, National Institute of Mental Health, Bethesda, MD, USA; Functional MRI Core Facility, National Institute of Mental Health, Bethesda, MD, USA

15
Ekstrom AD. Regional variation in neurovascular coupling and why we still lack a Rosetta Stone. Philos Trans R Soc Lond B Biol Sci 2020; 376:20190634. PMID: 33190605; DOI: 10.1098/rstb.2019.0634.
Abstract
Functional magnetic resonance imaging (fMRI) is the dominant tool in cognitive neuroscience although its relation to underlying neural activity, particularly in the human brain, remains largely unknown. A major research goal, therefore, has been to uncover a 'Rosetta Stone' providing direct translation between the blood oxygen level-dependent (BOLD) signal, the local field potential and single-neuron activity. Here, I evaluate the proposal that BOLD signal changes equate to changes in gamma-band activity, which in turn may partially relate to the spiking activity of neurons. While there is some support for this idea in sensory cortices, findings in deeper brain structures like the hippocampus instead suggest both regional and frequency-wise differences. Relatedly, I consider four important factors in linking fMRI to neural activity: interpretation of correlations between these signals, regional variability in local vasculature, distributed neural coding schemes and varying fMRI signal quality. Novel analytic fMRI techniques, such as multivariate pattern analysis (MVPA), employ the distributed patterns of voxels across a brain region to make inferences about information content rather than whether a small number of voxels go up or down relative to baseline in response to a stimulus. Although unlikely to provide a Rosetta Stone, MVPA, therefore, may represent one possible means forward for better linking BOLD signal changes to the information coded by underlying neural activity. This article is part of the theme issue 'Key relationships between non-invasive functional neuroimaging and the underlying neuronal activity'.
Affiliation(s)
- Arne D Ekstrom: Department of Psychology, University of Arizona, 1503 E. University Boulevard, Tucson, AZ 85721, USA; Evelyn McKnight Brain Institute, University of Arizona, 1503 E. University Boulevard, Tucson, AZ 85721, USA

16
Deep learning and cognitive science. Cognition 2020; 203:104365. PMID: 32563082; DOI: 10.1016/j.cognition.2020.104365.
Abstract
In recent years, the family of algorithms collected under the term "deep learning" has revolutionized artificial intelligence, enabling machines to reach human-like performance in many complex cognitive tasks. Although deep learning models are grounded in the connectionist paradigm, their recent advances were driven largely by engineering goals. Despite this applied focus, deep learning models have proven fruitful for cognitive purposes. This can be thought of as a kind of biological exaptation, where a physiological structure becomes applicable to a function different from that for which it was selected. In this paper, it will be argued that it is time for cognitive science to seriously come to terms with deep learning, and we spell out the reasons why this is the case. First, the evolution of deep learning from the connectionist project is traced, demonstrating the remarkable continuity as well as the differences. Then, it is considered how deep learning models can be useful for many cognitive topics, especially those where they have achieved performance comparable to humans, from perception to language. It will be maintained that deep learning poses questions that the cognitive sciences should try to answer. One such question is why deep convolutional models, which are disembodied, inactive, unaware of context, and static, are by far the closest to the patterns of activation in the brain's visual system.
17
Kong NCL, Kaneshiro B, Yamins DLK, Norcia AM. Time-resolved correspondences between deep neural network layers and EEG measurements in object processing. Vision Res 2020; 172:27-45. PMID: 32388211; DOI: 10.1016/j.visres.2020.04.005.
Abstract
The ventral visual stream is known to be organized hierarchically, with early visual areas that process simple features feeding into higher visual areas that process more complex features. Hierarchical convolutional neural networks (CNNs) were largely inspired by this type of brain organization and have been used successfully to model neural responses in different areas of the visual system. In this work, we aim to understand how an instance of these models corresponds to the temporal dynamics of human object processing. Using representational similarity analysis (RSA) and various similarity metrics, we compare the model representations with two electroencephalography (EEG) data sets containing responses to a shared set of 72 images. We find that there is a hierarchical relationship between the depth of a layer and the time at which peak correlation with the brain response occurs for certain similarity metrics in both data sets. However, when comparing across layers in the neural network, the correlation onset time did not appear in a strictly hierarchical fashion. We present two additional methods that improve upon the achieved correlations by optimally weighting features from the CNN, and show that, depending on the similarity metric, deeper layers of the CNN provide a better correspondence than shallow layers to later time points in the EEG responses. However, we do not find that shallow layers provide better correspondences than deeper layers to early time points, an observation that violates the hierarchy and agrees with the finding from the onset-time analysis. This work provides a first comparison of various response features, including multiple similarity metrics and data sets, with respect to a neural network.
Affiliation(s)
- Nathan C L Kong: Department of Psychology, Stanford University, United States; Department of Electrical Engineering, Stanford University, United States
- Blair Kaneshiro: Center for Computer Research in Music and Acoustics, Stanford University, United States
- Daniel L K Yamins: Department of Psychology, Stanford University, United States; Department of Computer Science, Stanford University, United States; Wu Tsai Neurosciences Institute, Stanford University, United States
- Anthony M Norcia: Department of Psychology, Stanford University, United States; Wu Tsai Neurosciences Institute, Stanford University, United States
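The layer-to-EEG comparison in this entry rests on representational similarity analysis. A minimal sketch of the core computation, assuming NumPy, with synthetic data and illustrative variable names (the study's actual features and metrics differ): build a representational dissimilarity matrix (RDM) per system, then correlate the RDM upper triangles.

```python
import numpy as np

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns evoked by each pair of stimuli.
    responses: (n_stimuli, n_features) array."""
    return 1.0 - np.corrcoef(responses)

def rdm_similarity(rdm_a, rdm_b):
    """Second-order comparison of two RDMs: Spearman correlation of
    their upper triangles (diagonal excluded)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    # rank-transform each vector, then compute Pearson r of the ranks
    ra = np.argsort(np.argsort(rdm_a[iu])).astype(float)
    rb = np.argsort(np.argsort(rdm_b[iu])).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

# Toy demonstration: two noisy "systems" measuring the same underlying
# representation of 72 stimuli should yield similar RDMs.
rng = np.random.default_rng(0)
signal = rng.normal(size=(72, 100))
cnn_layer = signal + rng.normal(scale=0.5, size=signal.shape)
eeg_window = signal + rng.normal(scale=0.5, size=signal.shape)
print(rdm_similarity(rdm(cnn_layer), rdm(eeg_window)))
```

In the study's setting, one RDM would come from a CNN layer's features and the other from EEG responses at a given time point; the latency at which this similarity peaks gives the layer's temporal correspondence.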
18
Schreiner T, Staudigl T. Electrophysiological signatures of memory reactivation in humans. Philos Trans R Soc Lond B Biol Sci 2020; 375:20190293. PMID: 32248789; PMCID: PMC7209925; DOI: 10.1098/rstb.2019.0293.
Abstract
The reactivation of neural activity that was present during the encoding of an event is assumed to be essential for human episodic memory retrieval and the consolidation of memories during sleep. Pioneering animal work has already established a crucial role of memory reactivation to prepare and guide behaviour. Research in humans is now delineating the neural processes involved in memory reactivation during both wakefulness and sleep as well as their functional significance. Focusing on the electrophysiological signatures of memory reactivation in humans during both memory retrieval and sleep-related consolidation, this review provides an overview of the state of the art in the field. We outline recent advances, methodological developments and open questions and specifically highlight commonalities and differences in the neuronal signatures of memory reactivation during the states of wakefulness and sleep. This article is part of the Theo Murphy meeting issue ‘Memory reactivation: replaying events past, present and future’.
Affiliation(s)
- Thomas Schreiner: School of Psychology and Centre for Human Brain Health, University of Birmingham, Birmingham, UK; Department of Psychology, Ludwig-Maximilians-University Munich, Munich, Germany
- Tobias Staudigl: Department of Psychology, Ludwig-Maximilians-University Munich, Munich, Germany

19
Jacobs C, Petras K, Moors P, Goffaux V. Contrast versus identity encoding in the face image follow distinct orientation selectivity profiles. PLoS One 2020; 15:e0229185. PMID: 32187178; PMCID: PMC7080280; DOI: 10.1371/journal.pone.0229185.
Abstract
Orientation selectivity is a fundamental property of primary visual encoding. High-level processing stages also show some form of orientation dependence, with face identification preferentially relying on horizontally-oriented information. How high-level orientation tuning emerges from primary orientation biases is unclear. In the same group of participants, we derived the orientation selectivity profile at primary and high-level visual processing stages using a contrast detection and an identity matching task. To capture the orientation selectivity profile, we calculated the difference in performance between all tested orientations (0, 45, 90, and 135°) for each task and for upright and inverted faces, separately. Primary orientation selectivity was characterized by higher sensitivity to oblique as compared to cardinal orientations. The orientation profile of face identification showed superior horizontal sensitivity to face identity. In each task, performance with upright and inverted faces projected onto qualitatively similar a priori models of orientation selectivity. Yet the fact that the orientation selectivity profiles of contrast detection in upright and inverted faces correlated significantly while such correlation was absent for identification indicates a progressive dissociation of orientation selectivity profiles from primary to high-level stages of orientation encoding. Bayesian analyses further indicate a lack of correlation between the orientation selectivity profiles in the contrast detection and face identification tasks, for upright and inverted faces. From these findings, we conclude that orientation selectivity shows distinct profiles at primary and high-level stages of face processing and that a transformation must occur from general cardinal attenuation when processing basic properties of the face image to horizontal tuning when encoding more complex properties such as identity.
Affiliation(s)
- Christianne Jacobs: Faculty of Psychology and Educational Sciences, Research Institute for Psychological Science (IPSY), UC Louvain, Louvain-la-Neuve, Belgium
- Kirsten Petras: Faculty of Psychology and Educational Sciences, Research Institute for Psychological Science (IPSY), UC Louvain, Louvain-la-Neuve, Belgium; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Pieter Moors: Faculty of Psychology and Educational Sciences, Research Institute for Psychological Science (IPSY), UC Louvain, Louvain-la-Neuve, Belgium; Department of Brain and Cognition, Laboratory of Experimental Psychology, KU Leuven, Leuven, Belgium
- Valerie Goffaux: Faculty of Psychology and Educational Sciences, Research Institute for Psychological Science (IPSY), UC Louvain, Louvain-la-Neuve, Belgium; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Department of Brain and Cognition, Laboratory of Experimental Psychology, KU Leuven, Leuven, Belgium; Institute of Neuroscience (IoNS), UC Louvain, Louvain-la-Neuve, Belgium

20

21
Rezai O, Stoffl L, Tripp B. How are response properties in the middle temporal area related to inference on visual motion patterns? Neural Netw 2019; 121:122-131. PMID: 31541880; DOI: 10.1016/j.neunet.2019.08.027.
Abstract
Neurons in the primate middle temporal area (MT) respond to moving stimuli, with strong tuning for motion speed and direction. These responses have been characterized in detail, but the functional significance of these details (e.g. shapes and widths of speed tuning curves) is unclear, because they cannot be selectively manipulated. To estimate their functional significance, we used a detailed model of MT population responses as input to convolutional networks that performed sophisticated motion processing tasks (visual odometry and gesture recognition). We manipulated the distributions of speed and direction tuning widths, and studied the effects on task performance. We also studied performance with random linear mixtures of the responses, and with responses that had the same representational dissimilarity as the model populations, but were otherwise randomized. The width of speed and direction tuning both affected task performance, despite the networks having been optimized individually for each tuning variation, but the specific effects were different in each task. Random linear mixing improved performance of the odometry task, but not the gesture recognition task. Randomizing the responses while maintaining representational dissimilarity resulted in poor odometry performance. In summary, despite full optimization of the deep networks in each case, each manipulation of the representation affected performance of sophisticated visual tasks. Representation properties such as tuning width and representational similarity have been studied extensively from other perspectives, but this work provides new insight into their possible roles in sophisticated visual inference.
22
Rajaei K, Mohsenzadeh Y, Ebrahimpour R, Khaligh-Razavi SM. Beyond core object recognition: Recurrent processes account for object recognition under occlusion. PLoS Comput Biol 2019; 15:e1007001. PMID: 31091234; PMCID: PMC6538196; DOI: 10.1371/journal.pcbi.1007001.
Abstract
Core object recognition, the ability to rapidly recognize objects despite variations in their appearance, is largely solved through the feedforward processing of visual information. Deep neural networks have been shown to achieve human-level performance in these tasks and to explain primate brain representations. On the other hand, object recognition under more challenging conditions (i.e. beyond the core recognition problem) is less well characterized. One such example is object recognition under occlusion. It is unclear to what extent feedforward and recurrent processes contribute to object recognition under occlusion. Furthermore, we do not know whether conventional deep neural networks, such as AlexNet, which were shown to be successful in solving core object recognition, can perform similarly well in problems that go beyond core recognition. Here, we characterize the neural dynamics of object recognition under occlusion, using magnetoencephalography (MEG), while participants were presented with images of objects at various levels of occlusion. We provide evidence from multivariate analysis of MEG data, behavioral data, and computational modelling demonstrating an essential role for recurrent processes in object recognition under occlusion. Furthermore, the computational model with local recurrent connections used here suggests a mechanistic explanation of how the human brain might be solving this problem. In recent years, deep-learning-based computer vision algorithms have achieved human-level performance in several object recognition tasks, which has also contributed to our understanding of how the brain may solve these recognition tasks. However, the temporal dynamics of object recognition under occlusion are largely unknown in the human brain, and we do not know whether the previously successful deep-learning algorithms can similarly achieve human-level performance in these more challenging object recognition tasks. By linking brain data with behavior and computational modeling, we characterized the temporal dynamics of object recognition under occlusion and propose a computational mechanism that explains both the behavioral and the neural data in humans. This provides a plausible mechanistic explanation for how the brain might solve object recognition under more challenging conditions.
Affiliation(s)
- Karim Rajaei: School of Cognitive Sciences (SCS), Institute for Research in Fundamental Sciences (IPM), Niavaran, Tehran, Iran
- Yalda Mohsenzadeh: Computer Science and AI Lab (CSAIL), MIT, Cambridge, Massachusetts, United States of America
- Reza Ebrahimpour: School of Cognitive Sciences (SCS), Institute for Research in Fundamental Sciences (IPM), Niavaran, Tehran, Iran; Department of Computer Engineering, Shahid Rajaee Teacher Training University, Tehran, Iran
- Seyed-Mahdi Khaligh-Razavi: Computer Science and AI Lab (CSAIL), MIT, Cambridge, Massachusetts, United States of America; Department of Brain and Cognitive Sciences, Cell Science Research Center, Royan Institute for Stem Cell Biology and Technology, ACECR, Tehran, Iran

23
Hong YW, Yoo Y, Han J, Wager TD, Woo CW. False-positive neuroimaging: Undisclosed flexibility in testing spatial hypotheses allows presenting anything as a replicated finding. Neuroimage 2019; 195:384-395. PMID: 30946952; DOI: 10.1016/j.neuroimage.2019.03.070.
Abstract
Hypothesis testing in neuroimaging studies relies heavily on treating named anatomical regions (e.g., "the amygdala") as unitary entities. Though data collection and analyses are conducted at the voxel level, inferences are often based on anatomical regions. The discrepancy between the unit of analysis and the unit of inference leads to ambiguity and flexibility in analyses that can create a false sense of reproducibility. For example, hypothesizing effects on "amygdala activity" does not provide a falsifiable and reproducible definition of precisely which voxels or which patterns of activation should be observed. Rather, it comprises a large number of unspecified sub-hypotheses, leaving room for flexible interpretation of findings, which we refer to as "model degrees of freedom." From a survey of 135 functional Magnetic Resonance Imaging studies in which researchers claimed replications of previous findings, we found that 42.2% of the studies did not report any quantitative evidence for replication such as activation peaks. Only 14.1% of the papers used exact coordinate-based or a priori pattern-based models. Of the studies that reported peak information, 42.9% of the 'replicated' findings had peak coordinates more than 15 mm away from the 'original' findings, suggesting that different brain locations were activated, even when studies claimed to replicate prior results. To reduce the flexible and qualitative region-level tests in neuroimaging studies, we recommend adopting quantitative spatial models and tests to assess the spatial reproducibility of findings. Techniques reviewed here include permutation tests on peak distance, Bayesian MANOVA, and a priori multivariate pattern-based models. These practices will help researchers to establish precise and falsifiable spatial hypotheses, promoting a cumulative science of neuroimaging.
Affiliation(s)
- Yong-Wook Hong: Center for Neuroscience Imaging Research, Institute for Basic Science, South Korea; Department of Biomedical Engineering, Sungkyunkwan University, South Korea
- Yejong Yoo: Center for Neuroscience Imaging Research, Institute for Basic Science, South Korea; Department of Biology, Taylor University, United States
- Jihoon Han: Center for Neuroscience Imaging Research, Institute for Basic Science, South Korea; Department of Biomedical Engineering, Sungkyunkwan University, South Korea
- Tor D Wager: Department of Psychology and Neuroscience, University of Colorado Boulder, United States; Institute for Cognitive Sciences, University of Colorado Boulder, United States
- Choong-Wan Woo: Center for Neuroscience Imaging Research, Institute for Basic Science, South Korea; Department of Biomedical Engineering, Sungkyunkwan University, South Korea
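This entry recommends permutation tests on peak distance without prescribing an implementation. The following is one possible sketch, assuming NumPy; the construction of the null pool of candidate peaks (e.g. sampled from a gray-matter mask or from literature coordinates) is an assumption left to the analyst, and the coordinates below are hypothetical.

```python
import numpy as np

def peak_distance_test(orig_peak, rep_peak, candidate_peaks, n_perm=10_000, seed=0):
    """Permutation test on peak distance: is the 'replication' peak
    closer to the original peak than randomly drawn candidate peaks?
    orig_peak, rep_peak: (3,) coordinates in mm.
    candidate_peaks: (n, 3) null pool of plausible peak locations."""
    rng = np.random.default_rng(seed)
    orig_peak = np.asarray(orig_peak, dtype=float)
    observed = float(np.linalg.norm(np.asarray(rep_peak, dtype=float) - orig_peak))
    # resample candidate peaks to build the null distribution of distances
    draws = candidate_peaks[rng.integers(0, len(candidate_peaks), size=n_perm)]
    null = np.linalg.norm(draws - orig_peak, axis=1)
    # one-sided p-value: fraction of random peaks at least as close
    p = (np.count_nonzero(null <= observed) + 1) / (n_perm + 1)
    return observed, p

# Hypothetical example: claimed replication peak 6 mm from the original,
# null pool spread uniformly over a 60 mm-wide region around it.
rng = np.random.default_rng(1)
pool = rng.uniform(-30, 30, size=(2000, 3))
dist, p = peak_distance_test(np.zeros(3), np.array([2.0, 4.0, 4.0]), pool)
print(f"peak distance = {dist:.1f} mm, p = {p:.4f}")
```

A quantitative test like this makes a replication claim falsifiable: a "replicated" peak far from the original yields a large p-value instead of a qualitative region-level match.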
24
Abstract
Understanding how cognitive processes affect the responses of sensory neurons may clarify the relationship between neuronal population activity and behavior. However, tools for analyzing neuronal activity have not kept up with technological advances in recording from large neuronal populations. Here, we describe prevalent hypotheses of how cognitive processes affect sensory neurons, driven largely by a model based on the activity of single neurons or pools of neurons as the units of computation. We then use simple simulations to expand this model to a new conceptual framework that focuses on subspaces of population activity as the relevant units of computation, uses comparisons between brain areas or to behavior to guide analyses of these subspaces, and suggests that population activity is optimized to decode the large variety of stimuli and tasks that animals encounter in natural behavior. This framework provides new ways of understanding the ever-growing quantity of recorded population activity data.
Affiliation(s)
- Douglas A Ruff: Department of Neuroscience and Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15213, USA
- Amy M Ni: Department of Neuroscience and Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15213, USA
- Marlene R Cohen: Department of Neuroscience and Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15213, USA

25
Zabicki A, de Haas B, Zentgraf K, Stark R, Munzert J, Krüger B. Imagined and Executed Actions in the Human Motor System: Testing Neural Similarity Between Execution and Imagery of Actions with a Multivariate Approach. Cereb Cortex 2018; 27:4523-4536. PMID: 27600847; DOI: 10.1093/cercor/bhw257.
Abstract
Simulation theory proposes motor imagery (MI) to be a simulation based on representations also used for motor execution (ME). Nonetheless, it is unclear how far they use the same neural code. We use multivariate pattern analysis (MVPA) and representational similarity analysis (RSA) to describe the neural representations associated with MI and ME within the frontoparietal motor network. During functional magnetic resonance imaging scanning, 20 volunteers imagined or executed 3 different types of right-hand actions. Results of MVPA showed that these actions as well as their modality (MI or ME) could be decoded significantly above chance from the spatial patterns of BOLD signals in premotor and posterior parietal cortices. This was also true for cross-modal decoding. Furthermore, representational dissimilarity matrices of frontal and parietal areas showed that MI and ME representations formed separate clusters, but that the representational organization of action types within these clusters was identical. For most ROIs, this pattern of results best fits with a model that assumes a low-to-moderate degree of similarity between the neural patterns associated with MI and ME. Thus, neural representations of MI and ME are neither the same nor totally distinct but exhibit a similar structural geometry with respect to different types of action.
Affiliation(s)
- Adam Zabicki: Institute for Sports Science, Justus Liebig University Giessen, Giessen, 35394, Germany
- Benjamin de Haas: Institute of Cognitive Neuroscience, University College London, London, WC1H 0AP, UK; Experimental Psychology, University College London, London, WC1H 0AP, UK
- Karen Zentgraf: Institute of Sport and Exercise Sciences, University of Münster, Münster, 48149, Germany; Bender Institute of Neuroimaging, Justus Liebig University Giessen, Giessen, 35394, Germany
- Rudolf Stark: Bender Institute of Neuroimaging, Justus Liebig University Giessen, Giessen, 35394, Germany
- Jörn Munzert: Institute for Sports Science, Justus Liebig University Giessen, Giessen, 35394, Germany
- Britta Krüger: Institute for Sports Science, Justus Liebig University Giessen, Giessen, 35394, Germany; Bender Institute of Neuroimaging, Justus Liebig University Giessen, Giessen, 35394, Germany
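The cross-modal decoding reported in this entry (train a classifier on execution patterns, test it on imagery patterns) can be illustrated with a toy sketch. This is an assumption-laden illustration, not the study's pipeline: the nearest-centroid classifier, the synthetic "voxel" patterns, and all variable names are stand-ins chosen for brevity.

```python
import numpy as np

def fit_nearest_centroid(X, y):
    """Train a nearest-centroid classifier: one mean pattern per class."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict(classes, centroids, X):
    """Assign each pattern to the class with the nearest centroid."""
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d2, axis=1)]

# Toy cross-modal decoding: "execution" and "imagery" patterns share
# class structure (3 action types) but imagery carries more noise.
rng = np.random.default_rng(1)
prototypes = rng.normal(size=(3, 50))          # 3 action types, 50 voxels
y = np.repeat(np.arange(3), 20)                # 20 trials per action type
X_exec = prototypes[y] + rng.normal(scale=1.0, size=(60, 50))
X_imag = prototypes[y] + rng.normal(scale=1.5, size=(60, 50))

classes, centroids = fit_nearest_centroid(X_exec, y)
acc = float((predict(classes, centroids, X_imag) == y).mean())
print(f"cross-modal decoding accuracy: {acc:.2f} (chance = 0.33)")
```

Above-chance transfer from execution to imagery is the cross-modal result; the separate-but-similarly-organized cluster geometry the authors report is then probed with RSA on the same patterns.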
26
Guggenmos M, Sterzer P, Cichy RM. Multivariate pattern analysis for MEG: A comparison of dissimilarity measures. Neuroimage 2018; 173:434-447. DOI: 10.1016/j.neuroimage.2018.02.044.
27
Thavabalasingam S, O'Neil EB, Lee ACH. Multivoxel pattern similarity suggests the integration of temporal duration in hippocampal event sequence representations. Neuroimage 2018; 178:136-146. PMID: 29775662; DOI: 10.1016/j.neuroimage.2018.05.036.
Abstract
Recent rodent work suggests the hippocampus may provide a temporal representation of event sequences, in which the order of events and the interval durations between them are encoded. There is, however, limited human evidence for the latter, in particular whether the hippocampus processes duration information pertaining to the passage of time rather than qualitative or quantitative changes in event content. We scanned participants while they made match-mismatch judgements on each trial between a study sequence of events and a subsequent test sequence. Participants explicitly remembered event order or interval duration information (Experiment 1), or monitored order only, with duration being manipulated implicitly (Experiment 2). Hippocampal study-test pattern similarity was significantly reduced by changes to order or duration in mismatch trials, even when duration was processed implicitly. Our findings suggest the human hippocampus processes short intervals within sequences and support the idea that duration information is integrated into hippocampal mnemonic representations.
Affiliation(s)
- Edward B O'Neil: Department of Psychology (Scarborough), University of Toronto, Toronto, Canada
- Andy C H Lee: Department of Psychology (Scarborough), University of Toronto, Toronto, Canada; Rotman Research Institute, Baycrest Centre, Toronto, Canada

28
Abstract
Psychology moved beyond the stimulus response mapping of behaviorism by adopting an information processing framework. This shift from behavioral to cognitive science was partly inspired by work demonstrating that the concept of information could be defined and quantified (Shannon, 1948). This transition developed further from cognitive science into cognitive neuroscience, in an attempt to measure information in the brain. In the cognitive neurosciences, however, the term information is often used without a clear definition. This paper will argue that, if the formulation proposed by Shannon is applied to modern neuroimaging, then numerous results would be interpreted differently. More specifically, we argue that much modern cognitive neuroscience implicitly focuses on the question of how we can interpret the activations we record in the brain (experimenter-as-receiver), rather than on the core question of how the rest of the brain can interpret those activations (cortex-as-receiver). A clearer focus on whether activations recorded via neuroimaging can actually act as information in the brain would not only change how findings are interpreted but should also change the direction of empirical research in cognitive neuroscience.
29
Spigler G, Wilson SP. Familiarization: A theory of repetition suppression predicts interference between overlapping cortical representations. PLoS One 2017; 12:e0179306. [PMID: 28604787 PMCID: PMC5467900 DOI: 10.1371/journal.pone.0179306] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2017] [Accepted: 05/26/2017] [Indexed: 01/16/2023] Open
Abstract
Repetition suppression refers to a reduction in the cortical response to a novel stimulus that results from repeated presentation of the stimulus. We demonstrate repetition suppression in a well established computational model of cortical plasticity, according to which the relative strengths of lateral inhibitory interactions are modified by Hebbian learning. We present the model as an extension to the traditional account of repetition suppression offered by sharpening theory, which emphasises the contribution of afferent plasticity, by instead attributing the effect primarily to plasticity of intra-cortical circuitry. In support, repetition suppression is shown to emerge in simulations with plasticity enabled only in intra-cortical connections. We show in simulation how an extended 'inhibitory sharpening theory' can explain the disruption of repetition suppression reported in studies that include an intermediate phase of exposure to additional novel stimuli composed of features similar to those of the original stimulus. The model suggests a re-interpretation of repetition suppression as a manifestation of the process by which an initially distributed representation of a novel object becomes a more localist representation. Thus, inhibitory sharpening may constitute a more general process by which representation emerges from cortical re-organisation.
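The claim that Hebbian strengthening of lateral inhibition alone can yield repetition suppression can be illustrated with a deliberately minimal rate model. This caricature is ours and is far simpler than the cortical map model simulated in the paper; it only shows the direction of the effect:

```python
import numpy as np

n = 10
stim = np.zeros(n)
stim[:4] = 1.0                    # the stimulus drives a subset of units

W = np.zeros((n, n))              # lateral inhibitory weights, initially zero
eta = 0.05                        # Hebbian learning rate (arbitrary)
totals = []

for _ in range(5):                # repeated presentations of the same stimulus
    # Response = feedforward drive minus lateral inhibition, rectified.
    resp = np.maximum(stim - W @ stim, 0.0)
    totals.append(float(resp.sum()))
    # Hebbian strengthening of inhibition between co-active units.
    dW = eta * np.outer(resp, resp)
    np.fill_diagonal(dW, 0.0)
    W += dW
# With only intra-cortical (inhibitory) plasticity, the total response
# to the repeated stimulus shrinks presentation by presentation.
```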
Affiliation(s)
- Giacomo Spigler
- Sheffield Robotics, The University of Sheffield, Sheffield, United Kingdom
- Department of Psychology, The University of Sheffield, Sheffield, United Kingdom
- Stuart P. Wilson
- Sheffield Robotics, The University of Sheffield, Sheffield, United Kingdom
- Department of Psychology, The University of Sheffield, Sheffield, United Kingdom
30
Horikawa T, Kamitani Y. Generic decoding of seen and imagined objects using hierarchical visual features. Nat Commun 2017; 8:15037. [PMID: 28530228 PMCID: PMC5458127 DOI: 10.1038/ncomms15037] [Citation(s) in RCA: 147] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2015] [Accepted: 02/21/2017] [Indexed: 11/10/2022] Open
Abstract
Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.
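The decoding scheme (predict model features from fMRI, then identify arbitrary categories by matching predicted features to computed features) can be sketched with plain least squares on synthetic data. Dimensions, noise levels, and category names below are our inventions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_vox, n_feat = 200, 100, 20

# Unknown linear mapping from (e.g. CNN) feature space to voxel space.
A = rng.normal(size=(n_feat, n_vox))
feat_train = rng.normal(size=(n_train, n_feat))
fmri_train = feat_train @ A + 0.05 * rng.normal(size=(n_train, n_vox))

# Train linear decoders (one per feature unit) mapping voxels to features.
B, *_ = np.linalg.lstsq(fmri_train, feat_train, rcond=None)

# Feature vectors computed for categories never used in decoder training.
cat_feats = {"jaguar": rng.normal(size=n_feat), "minivan": rng.normal(size=n_feat)}

# A new trial on which the subject saw (or imagined) a jaguar.
fmri_test = cat_feats["jaguar"] @ A + 0.05 * rng.normal(size=n_vox)
pred_feat = fmri_test @ B

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# Identify the category whose computed features best match the prediction.
identified = max(cat_feats, key=lambda k: corr(pred_feat, cat_feats[k]))
```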
Affiliation(s)
- Tomoyasu Horikawa
- ATR Computational Neuroscience Laboratories, 2-2-2 Hikaridai, Seika, Soraku, Kyoto 619-0288, Japan
- Yukiyasu Kamitani
- ATR Computational Neuroscience Laboratories, 2-2-2 Hikaridai, Seika, Soraku, Kyoto 619-0288, Japan; Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501, Japan
31
Quantifying cerebral contributions to pain beyond nociception. Nat Commun 2017; 8:14211. [PMID: 28195170 PMCID: PMC5316889 DOI: 10.1038/ncomms14211] [Citation(s) in RCA: 114] [Impact Index Per Article: 16.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2016] [Accepted: 12/05/2016] [Indexed: 12/21/2022] Open
Abstract
Cerebral processes contribute to pain beyond the level of nociceptive input and mediate psychological and behavioural influences. However, cerebral contributions beyond nociception are not yet well characterized, leading to a predominant focus on nociception when studying pain and developing interventions. Here we use functional magnetic resonance imaging combined with machine learning to develop a multivariate pattern signature-termed the stimulus intensity independent pain signature-1 (SIIPS1)-that predicts pain above and beyond nociceptive input in four training data sets (Studies 1-4, N=137). The SIIPS1 includes patterns of activity in nucleus accumbens, lateral prefrontal and parahippocampal cortices, and other regions. In cross-validated analyses of Studies 1-4 and in two independent test data sets (Studies 5-6, N=46), SIIPS1 responses explain variation in trial-by-trial pain ratings not captured by a previous fMRI-based marker for nociceptive pain. In addition, SIIPS1 responses mediate the pain-modulating effects of three psychological manipulations of expectations and perceived control. The SIIPS1 provides an extensible characterization of cerebral contributions to pain and specific brain targets for interventions.
32
Khaligh-Razavi SM, Henriksson L, Kay K, Kriegeskorte N. Fixed versus mixed RSA: Explaining visual representations by fixed and mixed feature sets from shallow and deep computational models. JOURNAL OF MATHEMATICAL PSYCHOLOGY 2017; 76:184-197. [PMID: 28298702 PMCID: PMC5341758 DOI: 10.1016/j.jmp.2016.10.007] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set. This method, called representational similarity analysis (RSA), enables us to test the representational feature space as is (fixed RSA) or to fit a linear transformation that mixes the nonlinear model features so as to best explain a cortical area's representational space (mixed RSA). Like voxel/population-receptive-field modelling, mixed RSA uses a training set (different stimuli) to fit one weight per model feature and response channel (voxels here), so as to best predict the response profile across images for each response channel. We analysed response patterns elicited by natural images, which were measured with functional magnetic resonance imaging (fMRI). We found that early visual areas were best accounted for by shallow models, such as a Gabor wavelet pyramid (GWP). The GWP model performed similarly with and without mixing, suggesting that the original features already approximated the representational space, obviating the need for mixing. However, a higher ventral-stream visual representation (lateral occipital region) was best explained by the higher layers of a deep convolutional network and mixing of its feature set was essential for this model to explain the representation. 
We suspect that mixing was essential because the convolutional network had been trained to discriminate a set of 1000 categories, whose frequencies in the training set did not match their frequencies in natural experience or their behavioural importance. The latter factors might determine the representational prominence of semantic dimensions in higher-level ventral-stream areas. Our results demonstrate the benefits of testing both the specific representational hypothesis expressed by a model's original feature space and the hypothesis space generated by linear transformations of that feature space.
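The fixed-versus-mixed distinction reduces to whether a linear reweighting of the model features is fitted before comparing representational geometries. A compact illustration on synthetic data follows; note that the real procedure fits the mixing weights on a separate training set, which this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(2)
n_stim, n_feat, n_vox = 20, 15, 40

def rdm(patterns):
    """Correlation-distance RDM across stimuli."""
    return 1.0 - np.corrcoef(patterns)

def rdm_similarity(r1, r2):
    """Pearson correlation of RDM upper triangles (the paper uses rank-based variants)."""
    iu = np.triu_indices(len(r1), k=1)
    return float(np.corrcoef(r1[iu], r2[iu])[0, 1])

model_feat = rng.normal(size=(n_stim, n_feat))

# Simulated cortical responses: the brain weights the model's features
# unequally, mixes them into voxels, and adds noise.
feature_gain = np.linspace(0.1, 2.0, n_feat)
mixing = rng.normal(size=(n_feat, n_vox))
brain = (model_feat * feature_gain) @ mixing + 0.3 * rng.normal(size=(n_stim, n_vox))

# Fixed RSA: test the model feature space as-is.
fixed_score = rdm_similarity(rdm(model_feat), rdm(brain))

# Mixed RSA: fit one weight per feature and voxel, then compare the
# predicted-response geometry to the measured geometry.
W, *_ = np.linalg.lstsq(model_feat, brain, rcond=None)
mixed_score = rdm_similarity(rdm(model_feat @ W), rdm(brain))
```

Because the simulated area reweights the features, the unmixed feature space explains its geometry less well than the fitted mixture, which is the pattern the paper reports for higher ventral-stream areas.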
Affiliation(s)
- Seyed-Mahdi Khaligh-Razavi
- MRC Cognition and Brain Sciences Unit, Cambridge, UK
- Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Linda Henriksson
- MRC Cognition and Brain Sciences Unit, Cambridge, UK
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Kendrick Kay
- Department of Psychology, Washington University in St. Louis, St. Louis, MO, USA
33
Palmeri TJ, Love BC, Turner BM. Model-based cognitive neuroscience. JOURNAL OF MATHEMATICAL PSYCHOLOGY 2017; 76:59-64. [PMID: 30147145 PMCID: PMC6103531 DOI: 10.1016/j.jmp.2016.10.010] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
This special issue explores the growing intersection between mathematical psychology and cognitive neuroscience. Mathematical psychology, and cognitive modeling more generally, has a rich history of formalizing and testing hypotheses about cognitive mechanisms within a mathematical and computational language, making exquisite predictions of how people perceive, learn, remember, and decide. Cognitive neuroscience aims to identify neural mechanisms associated with key aspects of cognition using techniques like neurophysiology, electrophysiology, and structural and functional brain imaging. These two come together in a powerful new approach called model-based cognitive neuroscience, which can both inform cognitive modeling and help to interpret neural measures. Cognitive models decompose complex behavior into representations and processes and these latent model states can be used to explain the modulation of brain states under different experimental conditions. Reciprocally, neural measures provide data that help constrain cognitive models and adjudicate between competing cognitive models that make similar predictions about behavior. As examples, brain measures are related to cognitive model parameters fitted to individual participant data, measures of brain dynamics are related to measures of model dynamics, model parameters are constrained by neural measures, model parameters or model states are used in statistical analyses of neural data, or neural and behavioral data are analyzed jointly within a hierarchical modeling framework. We provide an introduction to the field of model-based cognitive neuroscience and to the articles contained within this special issue.
34
Guest O, Love BC. What the success of brain imaging implies about the neural code. eLife 2017; 6. [PMID: 28103186 PMCID: PMC5245971 DOI: 10.7554/elife.21397] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2016] [Accepted: 12/23/2016] [Indexed: 12/05/2022] Open
Abstract
The success of fMRI places constraints on the nature of the neural code. The fact that researchers can infer similarities between neural representations, despite fMRI’s limitations, implies that certain neural coding schemes are more likely than others. For fMRI to succeed given its low temporal and spatial resolution, the neural code must be smooth at the voxel and functional level such that similar stimuli engender similar internal representations. Through proof and simulation, we determine which coding schemes are plausible given both fMRI’s successes and its limitations in measuring neural activity. Deep neural network approaches, which have been forwarded as computational accounts of the ventral stream, are consistent with the success of fMRI, though functional smoothness breaks down in the later network layers. These results have implications for the nature of the neural code and ventral stream, as well as what can be successfully investigated with fMRI.

We can appreciate that a cat is more similar to a dog than to a truck. The combined activity of millions of neurons in the brain somehow captures these everyday similarities, and this activity can be measured using imaging techniques such as functional magnetic resonance imaging (fMRI). However, fMRI scanners are not particularly precise – they average together the responses of many thousands of neurons over several seconds, which provides a blurry snapshot of brain activity. Nevertheless, the pattern of activity measured when viewing a photograph of a cat is more similar to that seen when viewing a picture of a dog than a picture of a truck. This tells us a lot about how the brain codes information, as only certain coding methods would allow fMRI to capture these similarities given the technique’s limitations. There are many different models that attempt to describe how the brain codes similarity relations. Some models use the principle of neural networks, in which neurons can be considered as arranged into interconnected layers. In such models, neurons transmit information from one layer to the next. By investigating which models are consistent with fMRI’s ability to capture similarity relations, Guest and Love have found that certain neural network models are plausible accounts of how the brain represents and processes information. These models include the deep learning networks that contain many layers of neurons and are popularly used in artificial intelligence. Other modeling approaches do not account for the ability of fMRI to capture similarity relations. As neural networks become deeper with more layers, they should be less readily understood using fMRI: as the number of layers increases, the representations of objects with similarities (for example, cats and dogs) become more unrelated. One question that requires further investigation is whether this finding explains why certain parts of the brain are more difficult to image.
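Functional smoothness, similar stimuli engendering similar internal representations, and its breakdown with depth can be illustrated with a random deep network. This toy is ours, not the paper's simulations: tanh layers with a large weight gain, a regime in which nearby inputs are known to decorrelate progressively with depth, stand in for later layers losing smoothness.

```python
import numpy as np

rng = np.random.default_rng(3)
dim, depth, gain = 200, 10, 2.5

# A random deep tanh network with large weight gain.
weights = [rng.normal(scale=gain / np.sqrt(dim), size=(dim, dim))
           for _ in range(depth)]

def layer_similarities(x, y):
    """Correlation between the representations of x and y at each layer."""
    sims, hx, hy = [], x, y
    for W in weights:
        hx, hy = np.tanh(hx @ W), np.tanh(hy @ W)
        sims.append(float(np.corrcoef(hx, hy)[0, 1]))
    return sims

cat = rng.normal(size=dim)
dog = cat + 0.3 * rng.normal(size=dim)   # a stimulus similar to "cat"
sims = layer_similarities(cat, dog)
# Early layers preserve the input similarity (functional smoothness);
# deep layers wash it out, so similar stimuli no longer map to similar codes,
# which would make such layers harder to study with a coarse measure like fMRI.
```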
Affiliation(s)
- Olivia Guest
- Experimental Psychology, University College London, London, United Kingdom
- Bradley C Love
- Experimental Psychology, University College London, London, United Kingdom; The Alan Turing Institute, London, United Kingdom
35
Behavior, sensitivity, and power of activation likelihood estimation characterized by massive empirical simulation. Neuroimage 2016; 137:70-85. [PMID: 27179606 DOI: 10.1016/j.neuroimage.2016.04.072] [Citation(s) in RCA: 471] [Impact Index Per Article: 58.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2016] [Revised: 03/14/2016] [Accepted: 04/01/2016] [Indexed: 12/19/2022] Open
Abstract
Given the increasing number of neuroimaging publications, the automated knowledge extraction on brain-behavior associations by quantitative meta-analyses has become a highly important and rapidly growing field of research. Among several methods to perform coordinate-based neuroimaging meta-analyses, Activation Likelihood Estimation (ALE) has been widely adopted. In this paper, we addressed two pressing questions related to ALE meta-analysis: i) Which thresholding method is most appropriate to perform statistical inference? ii) Which sample size, i.e., number of experiments, is needed to perform robust meta-analyses? We provided quantitative answers to these questions by simulating more than 120,000 meta-analysis datasets using empirical parameters (i.e., number of subjects, number of reported foci, distribution of activation foci) derived from the BrainMap database. This allowed us to characterize the behavior of ALE analyses, to derive first power estimates for neuroimaging meta-analyses, and thus to formulate recommendations for future ALE studies. As a first consequence, we showed that cluster-level family-wise error (FWE) correction represents the most appropriate method for statistical inference, while voxel-level FWE correction is valid but more conservative. In contrast, uncorrected inference and false-discovery rate correction should be avoided. As a second consequence, researchers should aim to include at least 20 experiments in an ALE meta-analysis to achieve sufficient power for moderate effects. We would like to note, though, that these calculations and recommendations are specific to ALE and may not be extrapolated to other approaches for (neuroimaging) meta-analysis.
36
Sormaz M, Watson DM, Smith WA, Young AW, Andrews TJ. Modelling the perceptual similarity of facial expressions from image statistics and neural responses. Neuroimage 2016; 129:64-71. [DOI: 10.1016/j.neuroimage.2016.01.041] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2015] [Revised: 12/17/2015] [Accepted: 01/18/2016] [Indexed: 10/22/2022] Open
37
Yamins DLK, DiCarlo JJ. Using goal-driven deep learning models to understand sensory cortex. Nat Neurosci 2016; 19:356-65. [PMID: 26906502 DOI: 10.1038/nn.4244] [Citation(s) in RCA: 628] [Impact Index Per Article: 78.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2015] [Accepted: 01/13/2016] [Indexed: 11/08/2022]
Abstract
Fueled by innovation in the computer vision and artificial intelligence communities, recent developments in computational neuroscience have used goal-driven hierarchical convolutional neural networks (HCNNs) to make strides in modeling neural single-unit and population responses in higher visual cortical areas. In this Perspective, we review the recent progress in a broader modeling context and describe some of the key technical innovations that have supported it. We then outline how the goal-driven HCNN approach can be used to delve even more deeply into understanding the development and organization of sensory cortical processing.
Affiliation(s)
- Daniel L K Yamins
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- James J DiCarlo
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
38
Abstract
Human fMRI signals exhibit a spatial patterning that contains detailed information about a person's mental states. Using classifiers it is possible to access this information and study brain processes at the level of individual mental representations. The precise link between fMRI signals and neural population signals still needs to be unraveled. Also, the interpretation of classification studies needs to be handled with care. Nonetheless, pattern-based analyses make it possible to investigate human representational spaces in unprecedented ways, especially when combined with computational modeling.
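The classifier logic described here can be sketched in a few lines on simulated voxel patterns. We use a nearest-centroid classifier for brevity; published studies more often use linear SVMs or correlation classifiers, and nothing below comes from the paper itself:

```python
import numpy as np

rng = np.random.default_rng(4)
n_vox = 50

# Two mental states, each evoking trials that are noisy copies of a prototype pattern.
proto = {0: rng.normal(size=n_vox), 1: rng.normal(size=n_vox)}

def trials(label, n):
    return proto[label] + 0.8 * rng.normal(size=(n, n_vox))

X_train = np.vstack([trials(0, 40), trials(1, 40)])
y_train = np.array([0] * 40 + [1] * 40)
X_test = np.vstack([trials(0, 20), trials(1, 20)])
y_test = np.array([0] * 20 + [1] * 20)

# Minimal pattern classifier: assign each test pattern to the nearest class centroid.
centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
dists = ((X_test[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
pred = dists.argmin(axis=1)
accuracy = float((pred == y_test).mean())
# Above-chance accuracy on held-out trials is the evidence that the spatial
# patterning of the signal carries information about the mental state.
```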
Affiliation(s)
- John-Dylan Haynes
- Bernstein Center for Computational Neuroscience, Charité - Universitätsmedizin, 10117 Berlin, Germany; Berlin Center for Advanced Neuroimaging, Charité - Universitätsmedizin, 10117 Berlin, Germany; Berlin School of Mind and Brain, Humboldt Universität zu Berlin, 10117 Berlin, Germany; Department of Neurology, Charité - Universitätsmedizin, 10117 Berlin, Germany; Department of Psychology, Humboldt Universität zu Berlin, 10117 Berlin, Germany; Cluster of Excellence NeuroCure, Charité - Universitätsmedizin, 10117 Berlin, Germany; SFB 940, Volition and Cognitive Control, Technische Universität Dresden, 01069 Dresden, Germany.
39
Henriksson L, Khaligh-Razavi SM, Kay K, Kriegeskorte N. Visual representations are dominated by intrinsic fluctuations correlated between areas. Neuroimage 2015; 114:275-86. [PMID: 25896934 PMCID: PMC4503804 DOI: 10.1016/j.neuroimage.2015.04.026] [Citation(s) in RCA: 44] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2014] [Revised: 02/27/2015] [Accepted: 04/08/2015] [Indexed: 12/05/2022] Open
Abstract
Intrinsic cortical dynamics are thought to underlie trial-to-trial variability of visually evoked responses in animal models. Understanding their function in the context of sensory processing and representation is a major current challenge. Here we report that intrinsic cortical dynamics strongly affect the representational geometry of a brain region, as reflected in response-pattern dissimilarities, and exaggerate the similarity of representations between brain regions. We characterized the representations in several human visual areas by representational dissimilarity matrices (RDMs) constructed from fMRI response-patterns for natural image stimuli. The RDMs of different visual areas were highly similar when the response-patterns were estimated on the basis of the same trials (sharing intrinsic cortical dynamics), and quite distinct when patterns were estimated on the basis of separate trials (sharing only the stimulus-driven component). We show that the greater similarity of the representational geometries can be explained by coherent fluctuations of regional-mean activation within visual cortex, reflecting intrinsic dynamics. Using separate trials to study stimulus-driven representations revealed clearer distinctions between the representational geometries: a Gabor wavelet pyramid model explained representational geometry in visual areas V1–3 and a categorical animate–inanimate model in the object-responsive lateral occipital cortex.
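The paper's central confound, shared trial-by-trial fluctuations inflating apparent similarity between areas, is easy to reproduce in simulation. This is our illustration, using Euclidean-distance RDMs so that regional-mean offsets are visible to the analysis:

```python
import numpy as np

rng = np.random.default_rng(5)
n_stim, n_vox = 16, 60

# Distinct stimulus-driven representations for two areas (say, V1 and LO).
signal_a = rng.normal(size=(n_stim, n_vox))
signal_b = rng.normal(size=(n_stim, n_vox))

def measure(signal, fluct):
    """One measurement per stimulus: pattern plus a trial-wise regional-mean offset."""
    return signal + fluct[:, None]

def rdm(p):
    """Euclidean-distance RDM (sensitive to regional-mean offsets)."""
    diff = p[:, None, :] - p[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2))

def rdm_sim(r1, r2):
    iu = np.triu_indices(n_stim, k=1)
    return float(np.corrcoef(r1[iu], r2[iu])[0, 1])

# Same trials: both areas inherit one set of intrinsic fluctuations.
shared = 2.0 * rng.normal(size=n_stim)
same_trial_sim = rdm_sim(rdm(measure(signal_a, shared)),
                         rdm(measure(signal_b, shared)))

# Separate trials: each area gets independent fluctuations, leaving only
# the (here unrelated) stimulus-driven geometries.
f_a, f_b = 2.0 * rng.normal(size=n_stim), 2.0 * rng.normal(size=n_stim)
sep_trial_sim = rdm_sim(rdm(measure(signal_a, f_a)),
                        rdm(measure(signal_b, f_b)))
# Shared fluctuations make two genuinely different areas look representationally alike.
```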
Affiliation(s)
- Linda Henriksson
- MRC Cognition and Brain Sciences Unit, Cambridge CB2 7EF, UK; Brain Research Unit, Department of Neuroscience and Biomedical Engineering, Aalto University, 02150 Espoo, Finland.
- Seyed-Mahdi Khaligh-Razavi
- MRC Cognition and Brain Sciences Unit, Cambridge CB2 7EF, UK; Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Kendrick Kay
- Department of Psychology, Washington University in St. Louis, St. Louis, MO 63130, USA
40
Panzeri S, Macke JH, Gross J, Kayser C. Neural population coding: combining insights from microscopic and mass signals. Trends Cogn Sci 2015; 19:162-72. [PMID: 25670005 PMCID: PMC4379382 DOI: 10.1016/j.tics.2015.01.002] [Citation(s) in RCA: 120] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2014] [Revised: 12/30/2014] [Accepted: 01/09/2015] [Indexed: 12/31/2022]
Abstract
Behavior relies on the distributed and coordinated activity of neural populations. Population activity can be measured using multi-neuron recordings and neuroimaging. Neural recordings reveal how the heterogeneity, sparseness, timing, and correlation of population activity shape information processing in local networks, whereas neuroimaging shows how long-range coupling and brain states impact on local activity and perception. To obtain an integrated perspective on neural information processing we need to combine knowledge from both levels of investigation. We review recent progress of how neural recordings, neuroimaging, and computational approaches begin to elucidate how interactions between local neural population activity and large-scale dynamics shape the structure and coding capacity of local information representations, make them state-dependent, and control distributed populations that collectively shape behavior.
Affiliation(s)
- Stefano Panzeri
- Center for Neuroscience and Cognitive Systems, Istituto Italiano di Tecnologia, Corso Bettini 31, 38068 Rovereto, Italy; Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany.
- Jakob H Macke
- Neural Computation and Behaviour Group, Max Planck Institute for Biological Cybernetics, Spemannstrasse 41, 72076 Tübingen, Germany; Bernstein Center for Computational Neuroscience Tübingen, Germany; Werner Reichardt Centre for Integrative Neuroscience Tübingen, Germany
- Joachim Gross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, UK
- Christoph Kayser
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, UK
41
Mason RA, Just MA. Physics instruction induces changes in neural knowledge representation during successive stages of learning. Neuroimage 2015; 111:36-48. [PMID: 25665967 DOI: 10.1016/j.neuroimage.2014.12.086] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2014] [Revised: 11/19/2014] [Accepted: 12/29/2014] [Indexed: 11/18/2022] Open
Abstract
Incremental instruction on the workings of a set of mechanical systems induced a progression of changes in the neural representations of the systems. The neural representations of four mechanical systems were assessed before, during, and after three phases of incremental instruction (which first provided information about the system components, then provided partial causal information, and finally provided full functional information). In 14 participants, the neural representations of four systems (a bathroom scale, a fire extinguisher, an automobile braking system, and a trumpet) were assessed using three recently developed techniques: (1) machine learning and classification of multi-voxel patterns; (2) localization of consistently responding voxels; and (3) representational similarity analysis (RSA). The neural representations of the systems progressed through four stages, or states, involving spatially and temporally distinct multi-voxel patterns: (1) initially, the representation was primarily visual (occipital cortex); (2) it subsequently included a large parietal component; (3) it eventually became cortically diverse (frontal, parietal, temporal, and medial frontal regions); and (4) at the end, it demonstrated a strong frontal cortex weighting (frontal and motor regions). At each stage of knowledge, it was possible for a classifier to identify which one of four mechanical systems a participant was thinking about, based on their brain activation patterns. The progression of representational states was suggestive of progressive stages of learning: (1) encoding information from the display; (2) mental animation, possibly involving imagining the components moving; (3) generating causal hypotheses associated with mental animation; and finally (4) determining how a person (probably oneself) would interact with the system. 
This interpretation yields an initial, cortically-grounded, theory of learning of physical systems that potentially can be related to cognitive learning theories by suggesting links between cortical representations, stages of learning, and the understanding of simple systems.
Affiliation(s)
- Robert A Mason
- Center for Cognitive Brain Imaging, Psychology Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA.
- Marcel Adam Just
- Center for Cognitive Brain Imaging, Psychology Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
42
Testing key predictions of the associative account of mirror neurons in humans using multivariate pattern analysis. Behav Brain Sci 2014; 37:213-5. [PMID: 24775171 DOI: 10.1017/s0140525x13002434] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Cook et al. overstate the evidence supporting their associative account of mirror neurons in humans: most studies do not address a key property, namely action-specificity that generalizes across the visual and motor domains. Multivariate pattern analysis (MVPA) of neuroimaging data can address this concern, and we illustrate how MVPA can be used to test key predictions of their account.
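The cross-modal generalization test the commentary proposes can be made concrete: train a pattern classifier on action identity in one domain (visual) and test it in the other (motor). Below is a synthetic sketch under the assumption being tested, namely that voxel patterns contain a domain-general action code; all names and parameters are ours:

```python
import numpy as np

rng = np.random.default_rng(6)
n_vox = 80

# Hypothetical codes: an action-specific component shared across domains
# (the property at issue) plus a domain-specific component.
action_code = {a: rng.normal(size=n_vox) for a in ("grasp", "point")}
domain_code = {d: rng.normal(size=n_vox) for d in ("visual", "motor")}

def trial(action, domain, n):
    return (action_code[action] + domain_code[domain]
            + 0.8 * rng.normal(size=(n, n_vox)))

def centroid_classify(X_train, y_train, X_test):
    labels = sorted(set(y_train))
    cents = np.stack([X_train[np.array(y_train) == l].mean(axis=0) for l in labels])
    dists = ((X_test[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return [labels[i] for i in dists.argmin(axis=1)]

# Train on action identity during observation (visual domain)...
X_train = np.vstack([trial("grasp", "visual", 30), trial("point", "visual", 30)])
y_train = ["grasp"] * 30 + ["point"] * 30

# ...and test generalization to execution (motor domain).
X_test = np.vstack([trial("grasp", "motor", 15), trial("point", "motor", 15)])
y_test = ["grasp"] * 15 + ["point"] * 15

pred = centroid_classify(X_train, y_train, X_test)
cross_accuracy = float(np.mean([p == t for p, t in zip(pred, y_test)]))
# Above-chance cross-domain accuracy is the signature of action-specificity
# that generalizes across the visual and motor domains.
```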
43
Abstract
Humans recognize faces and objects with high speed and accuracy regardless of their orientation. Recent studies have proposed that orientation invariance in face recognition involves an intermediate representation where neural responses are similar for mirror-symmetric views. Here, we used fMRI, multivariate pattern analysis, and computational modeling to investigate the neural encoding of faces and vehicles at different rotational angles. Corroborating previous studies, we demonstrate a representation of face orientation in the fusiform face-selective area (FFA). We go beyond these studies by showing that this representation is category-selective and tolerant to retinal translation. Critically, by controlling for low-level confounds, we found the representation of orientation in FFA to be compatible with a linear angle code. Aspects of mirror-symmetric coding cannot be ruled out when FFA mean activity levels are considered as a dimension of coding. Finally, we used a parametric family of computational models, involving a biased sampling of view-tuned neuronal clusters, to compare different face angle encoding models. The best fitting model exhibited a predominance of neuronal clusters tuned to frontal views of faces. In sum, our findings suggest a category-selective and monotonic code of face orientation in the human FFA, in line with primate electrophysiology studies that observed mirror-symmetric tuning of neural responses at higher stages of the visual system, beyond the putative homolog of human FFA.
44
Khaligh-Razavi SM, Kriegeskorte N. Deep supervised, but not unsupervised, models may explain IT cortical representation. PLoS Comput Biol 2014; 10:e1003915. [PMID: 25375136 PMCID: PMC4222664 DOI: 10.1371/journal.pcbi.1003915] [Citation(s) in RCA: 540] [Impact Index Per Article: 54.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2014] [Accepted: 09/11/2014] [Indexed: 11/20/2022] Open
Abstract
Inferior temporal (IT) cortex in human and nonhuman primates serves visual object recognition. Computational object-vision models, although continually improving, do not yet reach human performance. It is unclear to what extent the internal representations of computational models can explain the IT representation. Here we investigate a wide range of computational model representations (37 in total), testing their categorization performance and their ability to account for the IT representational geometry. The models include well-known neuroscientific object-recognition models (e.g. HMAX, VisNet) along with several models from computer vision (e.g. SIFT, GIST, self-similarity features, and a deep convolutional neural network). We compared the representational dissimilarity matrices (RDMs) of the model representations with the RDMs obtained from human IT (measured with fMRI) and monkey IT (measured with cell recording) for the same set of stimuli (not used in training the models). Better performing models were more similar to IT in that they showed greater clustering of representational patterns by category. In addition, better performing models also more strongly resembled IT in terms of their within-category representational dissimilarities. Representational geometries were significantly correlated between IT and many of the models. However, the categorical clustering observed in IT was largely unexplained by the unsupervised models. The deep convolutional network, which was trained by supervision with over a million category-labeled images, reached the highest categorization performance and also best explained IT, although it did not fully explain the IT data. Combining the features of this model with appropriate weights and adding linear combinations that maximize the margin between animate and inanimate objects and between faces and other objects yielded a representation that fully explained our IT data. 
Overall, our results suggest that explaining IT requires computational features trained through supervised learning to emphasize the behaviorally important categorical divisions prominently reflected in IT.
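The RDM comparison at the heart of this study can be sketched as follows (an illustrative numpy reconstruction of representational similarity analysis, not the authors' code; the function names are ours):

```python
import numpy as np

def rdm(patterns):
    # patterns: (n_conditions, n_features) matrix of response patterns,
    # one row per stimulus. Dissimilarity = 1 - Pearson correlation,
    # so identical patterns get 0 and anticorrelated patterns get 2.
    return 1.0 - np.corrcoef(patterns)

def rdm_similarity(rdm_a, rdm_b):
    # Relate two RDMs by rank (Spearman) correlation of their upper
    # triangles; RDMs are symmetric with a zero diagonal, so the upper
    # triangle holds all the information.
    iu = np.triu_indices_from(rdm_a, k=1)
    ranks = lambda v: np.argsort(np.argsort(v)).astype(float)
    return np.corrcoef(ranks(rdm_a[iu]), ranks(rdm_b[iu]))[0, 1]
```

A model whose RDM rank-correlates highly with the IT RDM captures the IT representational geometry in this sense, which is how the 37 model representations were scored against human and monkey IT.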
Affiliation(s)
- Nikolaus Kriegeskorte
- Medical Research Council, Cognition and Brain Sciences Unit, Cambridge, United Kingdom
45
Clark IA, Niehaus KE, Duff EP, Di Simplicio MC, Clifford GD, Smith SM, Mackay CE, Woolrich MW, Holmes EA. First steps in using machine learning on fMRI data to predict intrusive memories of traumatic film footage. Behav Res Ther 2014; 62:37-46. [PMID: 25151915 PMCID: PMC4222599 DOI: 10.1016/j.brat.2014.07.010] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6]
Abstract
After psychological trauma, why do only some parts of the traumatic event return as intrusive memories while others do not? Intrusive memories are key to cognitive behavioural treatment for post-traumatic stress disorder, and an aetiological understanding is warranted. We present here analyses using multivariate pattern analysis (MVPA) and a machine learning classifier to investigate whether peri-traumatic brain activation was able to predict later intrusive memories (i.e. before they had happened). To provide a methodological basis for understanding the context of the current results, we first show how functional magnetic resonance imaging (fMRI) during an experimental analogue of trauma (a trauma film), via a prospective event-related design, was able to capture an individual's later intrusive memories. Results showed widespread increases in brain activation at encoding when viewing a scene in the scanner that would later return as an intrusive memory in the real world. These fMRI results were replicated in a second study. While traditional mass univariate regression analysis highlighted an association between brain processing and symptomatology, this is not the same as prediction. Using MVPA and a machine learning classifier, it was possible to predict later intrusive memories across participants with 68% accuracy, and within a participant with 97% accuracy; i.e. the classifier could identify, out of multiple scenes, those that would later return as an intrusive memory. We also report here the brain networks key to intrusive memory prediction. MVPA opens the possibility of decoding brain activity to reconstruct idiosyncratic cognitive events, with relevance to understanding and predicting mental health symptoms.
Highlights: Why only some moments within a trauma intrude while others do not is unclear. Neuroimaging may provide further clues as to why this is the case. Multivariate pattern analysis, a recent neuroimaging analysis tool, was able to predict intrusive memories. The brain networks involved in intrusive memory prediction are presented. Multivariate pattern analysis may inform future innovation in mental health.
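The cross-validated MVPA prediction described in this abstract can be sketched in minimal form (illustrative only; a simple nearest-centroid classifier stands in for the authors' classifier, and the synthetic setup is ours):

```python
import numpy as np

def nearest_centroid_predict(train_x, train_y, test_x):
    # Minimal MVPA-style classifier: assign each held-out activation
    # pattern to the class (e.g. intrusive vs non-intrusive scene) whose
    # mean training pattern (centroid) is nearest in voxel space.
    classes = np.unique(train_y)
    centroids = np.stack([train_x[train_y == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(test_x[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

def leave_one_out_accuracy(x, y):
    # Cross-validated accuracy: hold out each trial in turn, train on the
    # rest, and score the prediction for the held-out trial.
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        pred = nearest_centroid_predict(x[mask], y[mask], x[i:i + 1])
        hits += int(pred[0] == y[i])
    return hits / len(y)
```

The point of the cross-validation loop is the one made in the abstract: unlike a univariate association, a held-out prediction accuracy above chance is genuine prediction.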
Affiliation(s)
- Ian A Clark
- University Department of Psychiatry, Warneford Hospital, University of Oxford, United Kingdom
- Katherine E Niehaus
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, United Kingdom
- Eugene P Duff
- FMRIB Centre, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, United Kingdom
- Martina C Di Simplicio
- Medical Research Council Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge CB2 7EF, United Kingdom
- Gari D Clifford
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, United Kingdom
- Stephen M Smith
- FMRIB Centre, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, United Kingdom
- Clare E Mackay
- University Department of Psychiatry, Warneford Hospital, University of Oxford, United Kingdom
- Mark W Woolrich
- Oxford Centre for Human Brain Activity (OHBA), Department of Psychiatry, Warneford Hospital, University of Oxford, United Kingdom
- Emily A Holmes
- Medical Research Council Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge CB2 7EF, United Kingdom; Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden.
46
Ghodrati M, Farzmahdi A, Rajaei K, Ebrahimpour R, Khaligh-Razavi SM. Feedforward object-vision models only tolerate small image variations compared to human. Front Comput Neurosci 2014; 8:74. [PMID: 25100986 PMCID: PMC4103258 DOI: 10.3389/fncom.2014.00074] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9]
Abstract
Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanism has been under constant, intense investigation. Computational modeling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well in image categorization under more complex image variations. Studies have shown that making sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that the models perform similarly to humans in categorization tasks only under low-level image variations. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e., briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modeling. We show that this approach is of little help in solving the computational crux of object recognition (i.e., invariant object recognition) when the identity-preserving image variations become more complex.
Affiliation(s)
- Masoud Ghodrati
- Brain and Intelligent Systems Research Laboratory, Department of Electrical and Computer Engineering, Shahid Rajaee Teacher Training University, Tehran, Iran; School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran; Department of Physiology, Monash University, Melbourne, VIC, Australia
- Amirhossein Farzmahdi
- Brain and Intelligent Systems Research Laboratory, Department of Electrical and Computer Engineering, Shahid Rajaee Teacher Training University, Tehran, Iran; School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran; Department of Electrical Engineering, Amirkabir University of Technology, Tehran, Iran
- Karim Rajaei
- Brain and Intelligent Systems Research Laboratory, Department of Electrical and Computer Engineering, Shahid Rajaee Teacher Training University, Tehran, Iran; School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
- Reza Ebrahimpour
- Brain and Intelligent Systems Research Laboratory, Department of Electrical and Computer Engineering, Shahid Rajaee Teacher Training University, Tehran, Iran; School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
47
Hippocampal activity patterns carry information about objects in temporal context. Neuron 2014; 81:1165-1178. [PMID: 24607234 DOI: 10.1016/j.neuron.2014.01.015] [Citation(s) in RCA: 220] [Impact Index Per Article: 22.0]
Abstract
The hippocampus is critical for human episodic memory, but its role remains controversial. One fundamental question concerns whether the hippocampus represents specific objects or assigns context-dependent representations to objects. Here, we used multivoxel pattern similarity analysis of fMRI data during retrieval of learned object sequences to systematically investigate hippocampal coding of object and temporal context information. Hippocampal activity patterns carried information about the temporal positions of objects in learned sequences, but not about objects or temporal positions in random sequences. Hippocampal activity patterns differentiated between overlapping object sequences and between temporally adjacent objects that belonged to distinct sequence contexts. Parahippocampal and perirhinal cortex showed different pattern information profiles consistent with coding of temporal position and object information, respectively. These findings are consistent with models proposing that the hippocampus represents objects within specific temporal contexts, a capability that might explain its critical role in episodic memory.
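The multivoxel pattern-similarity logic summarized above can be sketched as follows (an illustrative reconstruction, not the authors' analysis code; the contrast between matched- and mismatched-position similarity is the key quantity):

```python
import numpy as np

def pattern_similarity(p1, p2):
    # Pearson correlation between two multivoxel activity patterns.
    return np.corrcoef(p1, p2)[0, 1]

def position_coding_index(rep1, rep2):
    # rep1, rep2: (n_items, n_voxels) patterns for two repetitions of a
    # learned sequence. Coding of temporal position predicts higher
    # similarity for matched sequence positions than for mismatched ones;
    # a positive index is evidence for such position information.
    n = rep1.shape[0]
    same = np.mean([pattern_similarity(rep1[i], rep2[i]) for i in range(n)])
    diff = np.mean([pattern_similarity(rep1[i], rep2[j])
                    for i in range(n) for j in range(n) if i != j])
    return same - diff
```

In the study's terms, a reliably positive index for learned sequences but not for random ones is what distinguishes temporal-context coding from pure object coding.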
48
Abstract
Most studies of the early stages of visual analysis (V1-V3) have focused on the properties of neurons that support processing of elemental features of a visual stimulus or scene, such as local contrast, orientation, or direction of motion. Recent evidence from electrophysiology and neuroimaging studies, however, suggests that early visual cortex may also play a role in retaining stimulus representations in memory for short periods. For example, fMRI responses obtained during the delay period between two presentations of an oriented visual stimulus can be used to decode the remembered stimulus orientation with multivariate pattern analysis. Here, we investigated whether orientation is a special case or if this phenomenon generalizes to working memory traces of other visual features. We found that multivariate classification of fMRI signals from human visual cortex could be used to decode the contrast of a perceived stimulus even when the mean response changes were accounted for, suggesting some consistent spatial signal for contrast in these areas. Strikingly, we found that fMRI responses also supported decoding of contrast when the stimulus had to be remembered. Furthermore, classification generalized from perceived to remembered stimuli and vice versa, implying that the corresponding patterns of responses in early visual cortex were highly consistent. In additional analyses, we show that stimulus decoding here is driven by biases depending on stimulus eccentricity. This places important constraints on the interpretation of decoding of stimulus properties for which cortical processing is known to vary with eccentricity, such as contrast, color, spatial frequency, and temporal frequency.
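The perception-to-memory generalization analysis can be sketched in minimal form (illustrative only; nearest-centroid decoding stands in for the authors' multivariate classifier, and the variable names are ours):

```python
import numpy as np

def cross_condition_accuracy(train_x, train_y, test_x, test_y):
    # Cross-condition decoding: fit on one condition (e.g. perceived
    # contrast) and test on the other (e.g. remembered contrast).
    # Above-chance transfer implies the two conditions share a common
    # spatial pattern code, as reported in the abstract.
    classes = np.unique(train_y)
    centroids = np.stack([train_x[train_y == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(test_x[:, None, :] - centroids[None, :, :], axis=2)
    pred = classes[np.argmin(d, axis=1)]
    return float(np.mean(pred == test_y))
```

Running the decoder in both directions (train on perception, test on memory, and vice versa) is what licenses the claim of a shared code rather than two condition-specific ones.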
49
Notice of retraction: 'The emergence of orthographic word representations in the brain: evaluating a neural shape-based framework using fMRI and the HMAX model' by Wouter Braet, Jonas Kubilius, Johan Wagemans and Hans P. Op de Beeck. doi:10.1093/cercor/bhs355, published online November 16, 2012. Cereb Cortex 2013; 23:2015. [PMID: 23162048 DOI: 10.1093/cercor/bhs355] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
50
Schmithorst VJ, Farah R, Keith RW. Left ear advantage in speech-related dichotic listening is not specific to auditory processing disorder in children: A machine-learning fMRI and DTI study. Neuroimage Clin 2013; 3:8-17. [PMID: 24179844 PMCID: PMC3791276 DOI: 10.1016/j.nicl.2013.06.016] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.4]
Abstract
Dichotic listening (DL) tests are among the most frequently included in batteries for the diagnosis of auditory processing disorders (APD) in children. A finding of atypical left ear advantage (LEA) for speech-related stimuli is often taken by clinical audiologists as an indicator of APD. However, the precise etiology of ear advantage in DL tests has been a source of debate for decades. It is uncertain whether a finding of LEA is truly indicative of a sensory processing deficit such as APD, or whether attentional or other supramodal factors may also influence ear advantage. Multivariate machine learning was used on diffusion tensor imaging (DTI) and functional MRI (fMRI) data from a cohort of children ages 7–14 referred for APD testing who showed LEA, and from typical controls with right-ear advantage (REA). Compared to children with REA, LEA was predicted by increased axial diffusivity in the left internal capsule (sublenticular region) and by decreased functional activation in the left frontal eye fields (BA 8) for words presented diotically as compared to words presented dichotically. These results indicate that both sensory and attentional deficits may be predictive of LEA; thus a finding of LEA, while possibly due to sensory factors, is not a specific indicator of APD, as it may stem from a supramodal etiology.
Highlights: Left-ear advantage (LEA) in speech-related dichotic listening tests is atypical. LEA is predicted by differences in functional activation in frontal eye fields. LEA is also predicted by differences in WM microstructure in the left auditory radiation. LEA is therefore not specific for auditory processing disorder (APD) in children.
Affiliation(s)
- Vincent J Schmithorst
- Pediatric Neuroimaging Research Consortium, Dept. of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States