1
Wang T, Morehead RJ, Tsay JS, Ivry RB. The Origin of Movement Biases During Reaching. bioRxiv 2024:2024.03.15.585272. [PMID: 38562840 PMCID: PMC10983854 DOI: 10.1101/2024.03.15.585272]
Abstract
Goal-directed movements can fail due to errors in our perceptual and motor systems. While these errors may arise from random noise within these sources, they also reflect systematic motor biases that vary with the location of the target. The origin of these systematic biases remains controversial. Drawing on data from an extensive array of reaching tasks conducted over the past 30 years, we evaluated the merits of various computational models regarding the origin of motor biases. Contrary to previous theories, we show that motor biases do not arise from systematic errors associated with the sensed hand position during motor planning or from the biomechanical constraints imposed during motor execution. Rather, motor biases are primarily caused by a misalignment between eye-centric and body-centric representations of position. This model can account for motor biases across a wide range of contexts, encompassing movements with the right versus left hand, proximal and distal effectors, visible and occluded starting positions, as well as before and after sensorimotor adaptation.
Affiliation(s)
- Tianhe Wang
- Department of Psychology, University of California, Berkeley
- Helen Wills Neuroscience Institute, University of California, Berkeley
- Richard B Ivry
- Department of Psychology, University of California, Berkeley
- Helen Wills Neuroscience Institute, University of California, Berkeley
2
Gonzalez JE, Nieto N, Brusco P, Gravano A, Kamienkowski JE. Speech-induced suppression during natural dialogues. Commun Biol 2024; 7:291. [PMID: 38459110 PMCID: PMC10923813 DOI: 10.1038/s42003-024-05945-9]
Abstract
When engaged in a conversation, a person receives auditory information not only from the other speaker but also from their own speech. These two sources, however, are processed differently, an effect known as Speech-Induced Suppression (SIS). Here, we studied the brain's representation of the acoustic properties of speech in natural, unscripted dialogues, using electroencephalography (EEG) and high-quality speech recordings from both participants. Using encoding techniques, we reproduced a broad range of previous findings on listening to another's speech, and achieved even better performance when predicting the EEG signal in this complex scenario. Furthermore, we found no measurable response to one's own speech across different acoustic features (spectrogram, envelope, etc.) and frequency bands, evidencing a strong SIS effect. The present work shows that this mechanism is present, and even stronger, during natural dialogues. Moreover, the methodology presented here opens the possibility of a deeper understanding of the related mechanisms in a wider range of contexts.
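The encoding technique described in this abstract, predicting the EEG signal from acoustic features such as the envelope, is commonly implemented as a time-lagged (temporal response function) regression. A minimal sketch on synthetic data; the signals, sampling rate, lag range, and ridge penalty below are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: a speech envelope and one EEG channel that tracks the
# envelope ~80 ms after the sound (100 Hz sampling assumed).
fs = 100
n = 2000
envelope = rng.standard_normal(n)
true_delay = 8                      # 80 ms in samples
eeg = np.roll(envelope, true_delay) + 0.5 * rng.standard_normal(n)

# Design matrix of time-lagged copies of the envelope (lags 0..200 ms).
lags = np.arange(21)
X = np.stack([np.roll(envelope, l) for l in lags], axis=1)

# Ridge regression: w = (X'X + aI)^-1 X'y
alpha = 1.0
w = np.linalg.solve(X.T @ X + alpha * np.eye(len(lags)), X.T @ eeg)

# The weight profile over lags is the estimated response function;
# its peak recovers the simulated 80 ms latency.
best_lag = int(lags[np.argmax(np.abs(w))])
pred = X @ w
r = np.corrcoef(pred, eeg)[0, 1]
```

In the suppression comparison the paper reports, the same model fitted on the speaker's own voice would yield a prediction correlation near zero.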
Affiliation(s)
- Joaquin E Gonzalez
- Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación (Universidad de Buenos Aires - Consejo Nacional de Investigaciones Científicas y Técnicas), Buenos Aires, Argentina
- Nicolás Nieto
- Instituto de Investigación en Señales, Sistemas e Inteligencia Computacional, sinc(i) (Universidad Nacional del Litoral - Consejo Nacional de Investigaciones Científicas y Técnicas), Santa Fe, Argentina
- Instituto de Matemática Aplicada del Litoral, IMAL-UNL/CONICET, Santa Fe, Argentina
- Pablo Brusco
- Departamento de Computación, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Buenos Aires, Argentina
- Agustín Gravano
- Laboratorio de Inteligencia Artificial, Universidad Torcuato Di Tella, Buenos Aires, Argentina
- Escuela de Negocios, Universidad Torcuato Di Tella, Buenos Aires, Argentina
- Consejo Nacional de Investigaciones Científicas y Técnicas, Buenos Aires, Argentina
- Juan E Kamienkowski
- Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación (Universidad de Buenos Aires - Consejo Nacional de Investigaciones Científicas y Técnicas), Buenos Aires, Argentina
- Departamento de Computación, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Buenos Aires, Argentina
- Maestría de Explotación de Datos y Descubrimiento del Conocimiento, Facultad de Ciencias Exactas y Naturales - Facultad de Ingeniería, Universidad de Buenos Aires, Buenos Aires, Argentina
3
Tuckute G, Feather J, Boebinger D, McDermott JH. Many but not all deep neural network audio models capture brain responses and exhibit correspondence between model stages and brain regions. PLoS Biol 2023; 21:e3002366. [PMID: 38091351 PMCID: PMC10718467 DOI: 10.1371/journal.pbio.3002366]
Abstract
Models that predict brain responses to stimuli provide one measure of understanding of a sensory system and have many potential applications in science and engineering. Deep artificial neural networks have emerged as the leading such predictive models of the visual system but are less explored in audition. Prior work provided examples of audio-trained neural networks that produced good predictions of auditory cortical fMRI responses and exhibited correspondence between model stages and brain regions, but left it unclear whether these results generalize to other neural network models and, thus, how to further improve models in this domain. We evaluated model-brain correspondence for publicly available audio neural network models along with in-house models trained on 4 different tasks. Most tested models outpredicted standard spectrotemporal filter-bank models of auditory cortex and exhibited systematic model-brain correspondence: Middle stages best predicted primary auditory cortex, while deep stages best predicted non-primary cortex. However, some state-of-the-art models produced substantially worse brain predictions. Models trained to recognize speech in background noise produced better brain predictions than models trained to recognize speech in quiet, potentially because hearing in noise imposes constraints on biological auditory representations. The training task influenced the prediction quality for specific cortical tuning properties, with best overall predictions resulting from models trained on multiple tasks. The results generally support the promise of deep neural networks as models of audition, though they also indicate that current models do not explain auditory cortical responses in their entirety.
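The stage-to-region analysis this abstract describes reduces to fitting a regression from each model stage's activations to each brain response and asking which stage predicts best on held-out stimuli. A toy sketch with entirely synthetic "stages" and "regions" (all sizes, noise levels, and the ridge penalty are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Activations from 3 fake "model stages" and responses of two simulated
# regions: a primary-like region driven by stage 0 and a non-primary-like
# region driven by stage 2.
n_stim, n_units = 500, 40
stages = [rng.standard_normal((n_stim, n_units)) for _ in range(3)]
primary = stages[0] @ rng.standard_normal(n_units) + 0.5 * rng.standard_normal(n_stim)
nonprimary = stages[2] @ rng.standard_normal(n_units) + 0.5 * rng.standard_normal(n_stim)

def stage_prediction_r(X, y, alpha=10.0, split=400):
    # Ridge fit on a training split; correlation on held-out stimuli.
    Xtr, Xte, ytr, yte = X[:split], X[split:], y[:split], y[split:]
    w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]), Xtr.T @ ytr)
    return np.corrcoef(Xte @ w, yte)[0, 1]

# The "correspondence" statistic: which stage best predicts each region.
best_for_primary = int(np.argmax([stage_prediction_r(S, primary) for S in stages]))
best_for_nonprimary = int(np.argmax([stage_prediction_r(S, nonprimary) for S in stages]))
```

The paper's middle-stages-to-primary-cortex result is this statistic computed over real fMRI voxels and real network activations.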
Affiliation(s)
- Greta Tuckute
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, MIT, Cambridge, Massachusetts, United States of America
- Center for Brains, Minds, and Machines, MIT, Cambridge, Massachusetts, United States of America
- Jenelle Feather
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, MIT, Cambridge, Massachusetts, United States of America
- Center for Brains, Minds, and Machines, MIT, Cambridge, Massachusetts, United States of America
- Dana Boebinger
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, MIT, Cambridge, Massachusetts, United States of America
- Center for Brains, Minds, and Machines, MIT, Cambridge, Massachusetts, United States of America
- Program in Speech and Hearing Biosciences and Technology, Harvard, Cambridge, Massachusetts, United States of America
- University of Rochester Medical Center, Rochester, New York, United States of America
- Josh H. McDermott
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, MIT, Cambridge, Massachusetts, United States of America
- Center for Brains, Minds, and Machines, MIT, Cambridge, Massachusetts, United States of America
- Program in Speech and Hearing Biosciences and Technology, Harvard, Cambridge, Massachusetts, United States of America
4
Singer Y, Taylor L, Willmore BDB, King AJ, Harper NS. Hierarchical temporal prediction captures motion processing along the visual pathway. eLife 2023; 12:e52599. [PMID: 37844199 PMCID: PMC10629830 DOI: 10.7554/elife.52599]
Abstract
Visual neurons respond selectively to features that become increasingly complex from the eyes to the cortex. Retinal neurons prefer flashing spots of light, primary visual cortical (V1) neurons prefer moving bars, and those in higher cortical areas favor complex features like moving textures. Previously, we showed that V1 simple cell tuning can be accounted for by a basic model implementing temporal prediction - representing features that predict future sensory input from past input (Singer et al., 2018). Here, we show that hierarchical application of temporal prediction can capture how tuning properties change across at least two levels of the visual system. This suggests that the brain does not efficiently represent all incoming information; instead, it selectively represents sensory inputs that help in predicting the future. When applied hierarchically, temporal prediction extracts time-varying features that depend on increasingly high-level statistics of the sensory input.
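The temporal-prediction principle, learning features of the recent past that best predict the next sensory input, can be illustrated with a linear toy model. The AR(2) "sensory stream" and the lag count below are illustrative assumptions, not the network in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# A toy sensory stream with temporal structure: a stable AR(2) process.
n = 5000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 1.3 * x[t - 1] - 0.4 * x[t - 2] + rng.standard_normal()

# Temporal prediction: find weights over the last k samples that best
# predict the next input (ordinary least squares).
k = 5
past = np.stack([x[i:n - k + i] for i in range(k)], axis=1)  # cols: x[t-k]..x[t-1]
future = x[k:]
w, *_ = np.linalg.lstsq(past, future, rcond=None)

pred = past @ w
r = np.corrcoef(pred, future)[0, 1]
# The learned weights recover the generative dynamics: the x[t-1]
# coefficient converges to ~1.3, the x[t-2] coefficient to ~-0.4.
```

The paper's hierarchical version stacks nonlinear units trained with this objective, so that deeper stages predict from increasingly abstract summaries of the past.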
Affiliation(s)
- Yosef Singer
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Luke Taylor
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Ben DB Willmore
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Nicol S Harper
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
5
LeBel A, Wagner L, Jain S, Adhikari-Desai A, Gupta B, Morgenthal A, Tang J, Xu L, Huth AG. A natural language fMRI dataset for voxelwise encoding models. Sci Data 2023; 10:555. [PMID: 37612332 PMCID: PMC10447563 DOI: 10.1038/s41597-023-02437-z]
Abstract
Speech comprehension is a complex process that draws on humans' abilities to extract lexical information, parse syntax, and form semantic understanding. These sub-processes have traditionally been studied using separate neuroimaging experiments that attempt to isolate specific effects of interest. More recently it has become possible to study all stages of language comprehension in a single neuroimaging experiment using narrative natural language stimuli. The resulting data are richly varied at every level, enabling analyses that can probe everything from spectral representations to high-level representations of semantic meaning. We provide a dataset containing BOLD fMRI responses recorded while 8 participants each listened to 27 complete, natural, narrative stories (~6 hours). This dataset includes pre-processed and raw MRIs, as well as hand-constructed 3D cortical surfaces for each participant. To address the challenges of analyzing naturalistic data, this dataset is accompanied by a Python library containing basic code for creating voxelwise encoding models. Altogether, this dataset provides a large and novel resource for understanding speech and language processing in the human brain.
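A voxelwise encoding model of the kind this dataset supports is, at its core, a regularized regression from time-lagged stimulus features to each voxel's BOLD signal, evaluated on held-out time points. A minimal synthetic sketch; this is not the accompanying library's actual API, and all names, sizes, and delays are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stimulus features (e.g. semantic dimensions) at TR resolution,
# and fake BOLD responses for 50 voxels driven at a 2-TR hemodynamic delay.
n_tr, n_feat, n_vox = 600, 10, 50
F = rng.standard_normal((n_tr, n_feat))
W_true = rng.standard_normal((n_feat, n_vox))
bold = np.roll(F, 2, axis=0) @ W_true + rng.standard_normal((n_tr, n_vox))

# Standard trick: concatenate several delayed copies of the features so
# a linear model can absorb the unknown hemodynamic lag.
delays = [1, 2, 3, 4]
X = np.concatenate([np.roll(F, d, axis=0) for d in delays], axis=1)

# Ridge fit per voxel on a training split; evaluate on held-out TRs.
split, alpha = 500, 10.0
Xtr, Xte = X[:split], X[split:]
W = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]), Xtr.T @ bold[:split])
pred = Xte @ W
r = np.array([np.corrcoef(pred[:, v], bold[split:, v])[0, 1] for v in range(n_vox)])
n_good = int((r > 0.3).sum())   # voxels predicted well above chance
```

With real data the feature matrix comes from the stimulus transcripts and the held-out correlation map is projected onto the cortical surfaces included in the dataset.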
Affiliation(s)
- Amanda LeBel
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, 94704, USA
- Lauren Wagner
- Department of Psychiatry and Biobehavioral Sciences, University of California, Los Angeles, CA, 90095, USA
- Shailee Jain
- Department of Computer Science, The University of Texas at Austin, Austin, TX, 78712, USA
- Aneesh Adhikari-Desai
- Department of Computer Science, The University of Texas at Austin, Austin, TX, 78712, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, TX, 78712, USA
- Bhavin Gupta
- Department of Computer Science, The University of Texas at Austin, Austin, TX, 78712, USA
- Allyson Morgenthal
- Department of Neuroscience, The University of Texas at Austin, Austin, TX, 78712, USA
- Jerry Tang
- Department of Computer Science, The University of Texas at Austin, Austin, TX, 78712, USA
- Lixiang Xu
- Department of Physics, The University of Texas at Austin, Austin, TX, 78712, USA
- Alexander G Huth
- Department of Computer Science, The University of Texas at Austin, Austin, TX, 78712, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, TX, 78712, USA
6
Wang EY, Fahey PG, Ponder K, Ding Z, Chang A, Muhammad T, Patel S, Ding Z, Tran D, Fu J, Papadopoulos S, Franke K, Ecker AS, Reimer J, Pitkow X, Sinz FH, Tolias AS. Towards a Foundation Model of the Mouse Visual Cortex. bioRxiv 2023:2023.03.21.533548. [PMID: 36993435 PMCID: PMC10055288 DOI: 10.1101/2023.03.21.533548]
Abstract
Understanding the brain's perception algorithm is a highly intricate problem, as the inherent complexity of sensory inputs and the brain's nonlinear processing make characterizing sensory representations difficult. Recent studies have shown that functional models-capable of predicting large-scale neuronal activity in response to arbitrary sensory input-can be powerful tools for characterizing neuronal representations by enabling high-throughput in silico experiments. However, accurately modeling responses to dynamic and ecologically relevant inputs like videos remains challenging, particularly when generalizing to new stimulus domains outside the training distribution. Inspired by recent breakthroughs in artificial intelligence, where foundation models-trained on vast quantities of data-have demonstrated remarkable capabilities and generalization, we developed a "foundation model" of the mouse visual cortex: a deep neural network trained on large amounts of neuronal responses to ecological videos from multiple visual cortical areas and mice. The model accurately predicted neuronal responses not only to natural videos but also to various new stimulus domains, such as coherent moving dots and noise patterns, underscoring its generalization abilities. The foundation model could also be adapted to new mice with minimal natural movie training data. We applied the foundation model to the MICrONS dataset: a study of the brain that integrates structure with function at unprecedented scale, containing nanometer-scale morphology, connectivity with >500,000,000 synapses, and function of >70,000 neurons within a ~1 mm³ volume spanning multiple areas of the mouse visual cortex. This accurate functional model of the MICrONS data opens the possibility for a systematic characterization of the relationship between circuit structure and function. By precisely capturing the response properties of the visual cortex and generalizing to new stimulus domains and mice, foundation models can pave the way for a deeper understanding of visual computation.
Affiliation(s)
- Eric Y Wang
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Paul G Fahey
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Kayla Ponder
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Zhuokun Ding
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Andersen Chang
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Taliah Muhammad
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Saumil Patel
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Zhiwei Ding
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Dat Tran
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Jiakun Fu
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Stelios Papadopoulos
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Katrin Franke
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Alexander S Ecker
- Institute for Computer Science, University of Göttingen, Göttingen, Germany
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Jacob Reimer
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Xaq Pitkow
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
- Fabian H Sinz
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Institute for Computer Science, University of Göttingen, Göttingen, Germany
- Institute for Bioinformatics and Medical Informatics, University of Tübingen, Germany
- Andreas S Tolias
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
7
Nakai T, Nishimoto S. Artificial neural network modelling of the neural population code underlying mathematical operations. Neuroimage 2023; 270:119980. [PMID: 36848969 DOI: 10.1016/j.neuroimage.2023.119980]
Abstract
Mathematical operations have long been regarded as a sparse, symbolic process in neuroimaging studies. In contrast, advances in artificial neural networks (ANN) have enabled extracting distributed representations of mathematical operations. Recent neuroimaging studies have compared distributed representations of the visual, auditory and language domains in ANNs and biological neural networks (BNNs). However, such a relationship has not yet been examined in mathematics. Here we hypothesise that ANN-based distributed representations can explain brain activity patterns of symbolic mathematical operations. We used the fMRI data of a series of mathematical problems with nine different combinations of operators to construct voxel-wise encoding/decoding models using both sparse operator and latent ANN features. Representational similarity analysis demonstrated shared representations between ANN and BNN, an effect particularly evident in the intraparietal sulcus. Feature-brain similarity (FBS) analysis served to reconstruct a sparse representation of mathematical operations based on distributed ANN features in each cortical voxel. Such reconstruction was more efficient when using features from deeper ANN layers. Moreover, latent ANN features allowed the decoding of novel operators not used during model training from brain activity. The current study provides novel insights into the neural code underlying mathematical thought.
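The representational similarity analysis reported in this abstract compares the pattern of pairwise condition dissimilarities in ANN features with that in brain activity patterns. A compact sketch on synthetic patterns; the nine "operations", feature sizes, and noise level are invented, and the rank correlation is implemented directly rather than via any particular RSA toolbox:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy patterns: 9 "operations", each with an ANN feature vector and a
# noisy simulated brain (voxel) pattern linearly derived from it.
n_cond, n_ann, n_vox = 9, 30, 100
ann = rng.standard_normal((n_cond, n_ann))
mix = rng.standard_normal((n_ann, n_vox))
bnn = ann @ mix + rng.standard_normal((n_cond, n_vox))

def rdm(patterns):
    # Representational dissimilarity matrix: 1 - Pearson r between conditions.
    return 1.0 - np.corrcoef(patterns)

def upper(m):
    # Off-diagonal upper triangle, the vector actually compared in RSA.
    i, j = np.triu_indices_from(m, k=1)
    return m[i, j]

def spearman(a, b):
    # Spearman rank correlation via double argsort (no ties expected here).
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

rsa_r = spearman(upper(rdm(ann)), upper(rdm(bnn)))
```

A shared representation, as in the intraparietal sulcus result, shows up as a reliably positive `rsa_r`.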
Affiliation(s)
- Tomoya Nakai
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita, Japan; Lyon Neuroscience Research Center (CRNL), INSERM U1028 - CNRS UMR5292, University of Lyon, Bron, France
- Shinji Nishimoto
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita, Japan; Graduate School of Frontier Biosciences, Osaka University, Suita, Japan; Graduate School of Medicine, Osaka University, Suita, Japan
8
Ding Z, Fahey PG, Papadopoulos S, Wang EY, Celii B, Papadopoulos C, Kunin AB, Chang A, Fu J, Ding Z, Patel S, Ponder K, Muhammad T, Bae JA, Bodor AL, Brittain D, Buchanan J, Bumbarger DJ, Castro MA, Cobos E, Dorkenwald S, Elabbady L, Halageri A, Jia Z, Jordan C, Kapner D, Kemnitz N, Kinn S, Lee K, Li K, Lu R, Macrina T, Mahalingam G, Mitchell E, Mondal SS, Mu S, Nehoran B, Popovych S, Schneider-Mizell CM, Silversmith W, Takeno M, Torres R, Turner NL, Wong W, Wu J, Yin W, Yu SC, Froudarakis E, Sinz F, Seung HS, Collman F, da Costa NM, Reid RC, Walker EY, Pitkow X, Reimer J, Tolias AS. Functional connectomics reveals general wiring rule in mouse visual cortex. bioRxiv 2023:2023.03.13.531369. [PMID: 36993398 PMCID: PMC10054929 DOI: 10.1101/2023.03.13.531369]
Abstract
To understand how the brain computes, it is important to unravel the relationship between circuit connectivity and function. Previous research has shown that excitatory neurons in layer 2/3 of the primary visual cortex of mice with similar response properties are more likely to form connections. However, the technical challenges of combining synaptic connectivity and functional measurements have limited these studies to a few, highly local connections. Utilizing the millimeter scale and nanometer resolution of the MICrONS dataset, we studied the connectivity-function relationship in excitatory neurons of the mouse visual cortex across interlaminar and interarea projections, assessing connection selectivity at the coarse axon trajectory and fine synaptic formation levels. A digital twin model of this mouse, which accurately predicted responses to arbitrary video stimuli, enabled a comprehensive characterization of the function of neurons. We found that neurons with highly correlated responses to natural videos tended to be connected with each other, not only within the same cortical area but also across multiple layers and visual areas, including feedforward and feedback connections, whereas orientation preference did not predict connectivity. The digital twin model separated each neuron's tuning into a feature component (what the neuron responds to) and a spatial component (where the neuron's receptive field is located). We show that the feature component, but not the spatial component, predicted which neurons were connected at the fine synaptic scale. Together, our results demonstrate that the "like-to-like" connectivity rule generalizes to multiple connection types, and that the rich MICrONS dataset is suitable for further refining a mechanistic understanding of circuit structure and function.
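The core "like-to-like" statistic behind this result is simple: compare the response similarity of connected neuron pairs with that of unconnected pairs. A toy simulation in which connection probability rises with similarity by construction; the probabilities and pair counts below are invented, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(5)

# Each pair of neurons gets a response-similarity value (e.g. the signal
# correlation of their responses to natural videos), and a connection is
# drawn with a probability that increases with that similarity.
n_pairs = 4000
similarity = rng.uniform(-1, 1, n_pairs)
p_connect = 0.05 + 0.10 * (similarity + 1) / 2   # 5%..15%, rising with similarity
connected = rng.random(n_pairs) < p_connect

# The basic like-to-like comparison: mean similarity of connected vs
# unconnected pairs. A positive difference is the wiring-rule signature.
mean_conn = similarity[connected].mean()
mean_unconn = similarity[~connected].mean()
diff = mean_conn - mean_unconn
```

In the paper the similarity axis comes from the digital twin's feature component, and the comparison is repeated separately for each projection type (interlaminar, interarea, feedforward, feedback).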
Affiliation(s)
- Zhuokun Ding
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Paul G Fahey
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Stelios Papadopoulos
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Eric Y Wang
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Brendan Celii
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Christos Papadopoulos
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Alexander B Kunin
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Department of Mathematics, Creighton University, Omaha, USA
- Andersen Chang
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Jiakun Fu
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Zhiwei Ding
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Saumil Patel
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Kayla Ponder
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Taliah Muhammad
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- J Alexander Bae
- Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Electrical and Computer Engineering Department, Princeton University, Princeton, USA
- Manuel A Castro
- Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Erick Cobos
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Sven Dorkenwald
- Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Computer Science Department, Princeton University, Princeton, USA
- Akhilesh Halageri
- Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Zhen Jia
- Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Computer Science Department, Princeton University, Princeton, USA
- Chris Jordan
- Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Dan Kapner
- Allen Institute for Brain Science, Seattle, USA
- Nico Kemnitz
- Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Sam Kinn
- Allen Institute for Brain Science, Seattle, USA
- Kisuk Lee
- Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Brain & Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, USA
- Kai Li
- Computer Science Department, Princeton University, Princeton, USA
- Ran Lu
- Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Thomas Macrina
- Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Computer Science Department, Princeton University, Princeton, USA
- Eric Mitchell
- Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Shanka Subhra Mondal
- Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Electrical and Computer Engineering Department, Princeton University, Princeton, USA
- Shang Mu
- Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Barak Nehoran
- Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Computer Science Department, Princeton University, Princeton, USA
- Sergiy Popovych
- Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Computer Science Department, Princeton University, Princeton, USA
- Marc Takeno
- Allen Institute for Brain Science, Seattle, USA
- Nicholas L Turner
- Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Computer Science Department, Princeton University, Princeton, USA
- William Wong
- Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Jingpeng Wu
- Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Wenjing Yin
- Allen Institute for Brain Science, Seattle, USA
- Szi-Chieh Yu
- Princeton Neuroscience Institute, Princeton University, Princeton, USA
- Emmanouil Froudarakis
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Department of Basic Sciences, Faculty of Medicine, University of Crete, Heraklion, Greece
- Fabian Sinz
- Institute for Bioinformatics and Medical Informatics, University of Tübingen, Tübingen, Germany
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology Hellas, Heraklion, Greece
- H Sebastian Seung
- Princeton Neuroscience Institute, Princeton University, Princeton, USA
- R Clay Reid
- Allen Institute for Brain Science, Seattle, USA
- Edgar Y Walker
- Department of Physiology and Biophysics, University of Washington, Seattle, USA
- Computational Neuroscience Center, University of Washington, Seattle, USA
- Xaq Pitkow
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Department of Electrical and Computer Engineering, Rice University, Houston, USA
- Jacob Reimer
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Andreas S Tolias
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, USA
- Department of Electrical and Computer Engineering, Rice University, Houston, USA
Collapse
|
9
|
Ding Z, Tran DT, Ponder K, Cobos E, Ding Z, Fahey PG, Wang E, Muhammad T, Fu J, Cadena SA, Papadopoulos S, Patel S, Franke K, Reimer J, Sinz FH, Ecker AS, Pitkow X, Tolias AS. Bipartite invariance in mouse primary visual cortex. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.03.15.532836. [PMID: 36993218 PMCID: PMC10055119 DOI: 10.1101/2023.03.15.532836] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
A defining characteristic of intelligent systems, whether natural or artificial, is the ability to generalize and infer behaviorally relevant latent causes from high-dimensional sensory input, despite significant variations in the environment. To understand how brains achieve generalization, it is crucial to identify the features to which neurons respond selectively and invariantly. However, the high-dimensional nature of visual inputs, the non-linearity of information processing in the brain, and limited experimental time make it challenging to systematically characterize neuronal tuning and invariances, especially for natural stimuli. Here, we extended "inception loops" - a paradigm that iterates between large-scale recordings, neural predictive models, and in silico experiments followed by in vivo verification - to systematically characterize single neuron invariances in the mouse primary visual cortex. Using the predictive model we synthesized Diverse Exciting Inputs (DEIs), a set of inputs that differ substantially from each other while each driving a target neuron strongly, and verified these DEIs' efficacy in vivo. We discovered a novel bipartite invariance: one portion of the receptive field encoded phase-invariant texture-like patterns, while the other portion encoded a fixed spatial pattern. Our analysis revealed that the division between the fixed and invariant portions of the receptive fields aligns with object boundaries defined by spatial frequency differences present in highly activating natural images. These findings suggest that bipartite invariance might play a role in segmentation by detecting texture-defined object boundaries, independent of the phase of the texture. We also replicated these bipartite DEIs in the functional connectomics MICrONs data set, which opens the way towards a circuit-level mechanistic understanding of this novel type of invariance. 
Our study demonstrates the power of using a data-driven deep learning approach to systematically characterize neuronal invariances. By applying this method across the visual hierarchy, cell types, and sensory modalities, we can decipher how latent variables are robustly extracted from natural scenes, leading to a deeper understanding of generalization.
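The in-silico synthesis step described above (driving a model neuron toward its maximal response from multiple starting points to obtain diverse exciting inputs) can be sketched with a toy differentiable unit. Everything below is illustrative: a phase-invariant "energy model" unit stands in for the paper's deep predictive model, and none of the parameters come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64  # dimensionality of the (flattened) input

# Toy phase-invariant "energy model" unit: response is the squared
# projection of the input onto a quadrature pair of filters.
w1 = rng.standard_normal(D)
w1 /= np.linalg.norm(w1)
w2 = rng.standard_normal(D)
w2 -= (w2 @ w1) * w1  # orthogonalize against w1
w2 /= np.linalg.norm(w2)

def response(x):
    return (w1 @ x) ** 2 + (w2 @ x) ** 2

def grad(x):
    return 2 * (w1 @ x) * w1 + 2 * (w2 @ x) * w2

def synthesize(x0, steps=200, lr=0.1):
    """Gradient ascent on the unit's response, constrained to a
    fixed-norm sphere (a stand-in for fixed stimulus contrast)."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(steps):
        x = x + lr * grad(x)
        x /= np.linalg.norm(x)
    return x

# Different random starting points yield inputs that all drive the unit
# near its maximum yet can differ from one another (the DEI idea).
x_a = synthesize(rng.standard_normal(D))
x_b = synthesize(rng.standard_normal(D))
print(response(x_a), response(x_b), x_a @ x_b)
```

Because the unit is invariant to the "phase" within the span of its two filters, many distinct inputs achieve near-maximal drive, which is the property the DEI procedure is designed to expose.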
Collapse
Affiliation(s)
- Zhiwei Ding
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
| | - Dat T Tran
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
| | - Kayla Ponder
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
| | - Erick Cobos
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
| | - Zhuokun Ding
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
| | - Paul G Fahey
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
| | - Eric Wang
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
| | - Taliah Muhammad
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
| | - Jiakun Fu
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
| | - Santiago A Cadena
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
| | - Stelios Papadopoulos
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
| | - Saumil Patel
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
| | - Katrin Franke
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
| | - Jacob Reimer
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
| | - Fabian H Sinz
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany
- Institute for Bioinformatics and Medical Informatics, University of Tübingen, Germany
| | - Alexander S Ecker
- Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
| | - Xaq Pitkow
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
| | - Andreas S Tolias
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
| |
Collapse
|
10
|
Raghavan RT, Kelly JG, Hasse JM, Levy PG, Hawken MJ, Movshon JA. Contrast and Luminance Gain Control in the Macaque's Lateral Geniculate Nucleus. eNeuro 2023; 10:ENEURO.0515-22.2023. [PMID: 36858825 PMCID: PMC10035770 DOI: 10.1523/eneuro.0515-22.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2022] [Accepted: 02/16/2023] [Indexed: 03/03/2023] Open
Abstract
There is substantial variation in the mean and variance of light levels (luminance and contrast) in natural visual scenes. Retinal ganglion cells maintain their sensitivity despite this variation using two adaptive mechanisms, which control how responses depend on luminance and on contrast. However, the nature of each mechanism and their interactions downstream of the retina are unknown. We recorded neurons in the magnocellular and parvocellular layers of the lateral geniculate nucleus (LGN) in anesthetized adult male macaques and characterized how their responses adapt to changes in contrast and luminance. As contrast increases, neurons in the magnocellular layers maintain sensitivity to high temporal frequency stimuli but attenuate sensitivity to low temporal frequency stimuli. Neurons in the parvocellular layers do not adapt to changes in contrast. As luminance increases, both magnocellular and parvocellular cells increase their sensitivity to high temporal frequency stimuli. Adaptation to luminance is independent of adaptation to contrast, as previously reported for LGN neurons in the cat. Our results are similar to those previously reported for macaque retinal ganglion cells, suggesting that adaptation to luminance and contrast result from two independent mechanisms that are retinal in origin.
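The abstract gives no equations; as a point of reference, the contrast saturation that motivates contrast gain control is conventionally described by a Naka-Rushton function. The sketch below is a textbook illustration of that form, with arbitrary parameters, not the authors' fitted model.

```python
import numpy as np

def naka_rushton(c, r_max=100.0, c50=0.2, n=2.0):
    """Divisive contrast-response function (response in spikes/s):
    R(c) = r_max * c^n / (c^n + c50^n)."""
    c = np.asarray(c, dtype=float)
    return r_max * c ** n / (c ** n + c50 ** n)

# Response gain (the local slope of the contrast-response function) is
# high at low contrast and falls at high contrast: responses saturate
# rather than growing without bound.
gain_low = (naka_rushton(0.11) - naka_rushton(0.10)) / 0.01
gain_high = (naka_rushton(0.81) - naka_rushton(0.80)) / 0.01
print(gain_low, gain_high)
```

At c = c50 the function reaches half its maximum, which is why c50 is often reported as the semi-saturation contrast.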
Collapse
Affiliation(s)
- R T Raghavan
- Center for Neural Science, New York University, New York, New York 10003
| | - Jenna G Kelly
- Center for Neural Science, New York University, New York, New York 10003
| | - J Michael Hasse
- Center for Neural Science, New York University, New York, New York 10003
| | - Paul G Levy
- Center for Neural Science, New York University, New York, New York 10003
| | - Michael J Hawken
- Center for Neural Science, New York University, New York, New York 10003
| | - J Anthony Movshon
- Center for Neural Science, New York University, New York, New York 10003
| |
Collapse
|
11
|
Feature-space selection with banded ridge regression. Neuroimage 2022; 264:119728. [PMID: 36334814 PMCID: PMC9807218 DOI: 10.1016/j.neuroimage.2022.119728] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Revised: 10/05/2022] [Accepted: 10/31/2022] [Indexed: 11/09/2022] Open
Abstract
Encoding models provide a powerful framework to identify the information represented in brain recordings. In this framework, a stimulus representation is expressed within a feature space and is used in a regularized linear regression to predict brain activity. To account for a potential complementarity of different feature spaces, a joint model is fit on multiple feature spaces simultaneously. To adapt regularization strength to each feature space, ridge regression is extended to banded ridge regression, which optimizes a different regularization hyperparameter per feature space. The present paper proposes a method to decompose over feature spaces the variance explained by a banded ridge regression model. It also describes how banded ridge regression performs a feature-space selection, effectively ignoring non-predictive and redundant feature spaces. This feature-space selection leads to better prediction accuracy and to better interpretability. Banded ridge regression is then mathematically linked to a number of other regression methods with similar feature-space selection mechanisms. Finally, several methods are proposed to address the computational challenge of fitting banded ridge regressions on large numbers of voxels and feature spaces. All implementations are released in an open-source Python package called Himalaya.
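Banded ridge regression as described here has a simple closed form: ordinary ridge with a separate penalty per feature space along the diagonal. The toy below illustrates the feature-space selection behaviour on synthetic data; the paper's open-source Himalaya package provides the actual optimized solvers and hyperparameter search, which this sketch omits.

```python
import numpy as np

def banded_ridge(Xs, y, alphas):
    """Joint linear model over several feature spaces with one ridge
    penalty per space: w = (X'X + D)^(-1) X'y, with D block-diagonal
    in the per-space penalties."""
    X = np.hstack(Xs)
    penalty = np.concatenate(
        [np.full(Xi.shape[1], a) for Xi, a in zip(Xs, alphas)]
    )
    w = np.linalg.solve(X.T @ X + np.diag(penalty), X.T @ y)
    splits = np.cumsum([Xi.shape[1] for Xi in Xs])[:-1]
    return np.split(w, splits)  # one weight block per feature space

rng = np.random.default_rng(1)
n = 500
X1 = rng.standard_normal((n, 10))  # predictive feature space
X2 = rng.standard_normal((n, 10))  # non-predictive feature space
w_true = rng.standard_normal(10)
y = X1 @ w_true + 0.1 * rng.standard_normal(n)

# A large penalty on the non-predictive space shrinks its weights toward
# zero while leaving the predictive space nearly unregularized: the
# feature-space selection effect described in the abstract.
w1, w2 = banded_ridge([X1, X2], y, alphas=[1.0, 1e6])
print(np.linalg.norm(w1 - w_true), np.linalg.norm(w2))
```

In practice the per-space penalties are not hand-picked as here but optimized (e.g., by cross-validation), which is what lets the model discover which feature spaces to ignore.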
Collapse
|
12
|
Ivanov AZ, King AJ, Willmore BDB, Walker KMM, Harper NS. Cortical adaptation to sound reverberation. eLife 2022; 11:75090. [PMID: 35617119 PMCID: PMC9213001 DOI: 10.7554/elife.75090] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2021] [Accepted: 05/25/2022] [Indexed: 11/13/2022] Open
Abstract
In almost every natural environment, sounds are reflected by nearby objects, producing many delayed and distorted copies of the original sound, known as reverberation. Our brains usually cope well with reverberation, allowing us to recognize sound sources regardless of their environments. In contrast, reverberation can cause severe difficulties for speech recognition algorithms and hearing-impaired people. The present study examines how the auditory system copes with reverberation. We trained a linear model to recover a rich set of natural, anechoic sounds from their simulated reverberant counterparts. The model neurons achieved this by extending the inhibitory component of their receptive filters for more reverberant spaces, and did so in a frequency-dependent manner. These predicted effects were observed in the responses of auditory cortical neurons of ferrets in the same simulated reverberant environments. Together, these results suggest that auditory cortical neurons adapt to reverberation by adjusting their filtering properties in a manner consistent with dereverberation.
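The paper's model operates on simulated cochleagrams of natural sounds; the one-dimensional toy below only illustrates the core idea of learning a linear dereverberation filter by regressing the clean signal on lagged copies of its reverberant version. Signal, room filter, and filter length are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
T, L = 5000, 20  # signal length, dereverberation filter length (lags)

clean = rng.standard_normal(T)          # toy anechoic signal
room = 0.6 ** np.arange(8)              # exponentially decaying reverb tail
reverb = np.convolve(clean, room)[:T]   # reverberant version

# Design matrix of lagged reverberant samples; least squares then learns
# an FIR filter that maps the reverberant input back to the clean target.
X = np.column_stack([np.roll(reverb, k) for k in range(L)])
h = np.linalg.lstsq(X[L:], clean[L:], rcond=None)[0]  # skip wrapped rows

err_before = np.mean((reverb[L:] - clean[L:]) ** 2)
err_after = np.mean((X[L:] @ h - clean[L:]) ** 2)
print(err_before, err_after)
```

The learned filter approximately inverts the room's impulse response, which for a decaying tail requires the kind of delayed inhibitory (negative) filter components the abstract describes.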
Collapse
Affiliation(s)
- Aleksandar Z Ivanov
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
| | - Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
| | - Ben D B Willmore
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
| | - Kerry M M Walker
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
| | - Nicol S Harper
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
| |
Collapse
|
13
|
Merrick CM, Dixon TC, Breska A, Lin J, Chang EF, King-Stephens D, Laxer KD, Weber PB, Carmena J, Thomas Knight R, Ivry RB. Left hemisphere dominance for bilateral kinematic encoding in the human brain. eLife 2022; 11:e69977. [PMID: 35227374 PMCID: PMC8887902 DOI: 10.7554/elife.69977] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2021] [Accepted: 01/19/2022] [Indexed: 11/29/2022] Open
Abstract
Neurophysiological studies in humans and nonhuman primates have revealed movement representations in both the contralateral and ipsilateral hemispheres. Inspired by clinical observations, we ask if this bilateral representation differs for the left and right hemispheres. Electrocorticography was recorded in human participants during an instructed-delay reaching task, with movements produced with either the contralateral or ipsilateral arm. Using a cross-validated kinematic encoding model, we found stronger bilateral encoding in the left hemisphere, an effect that was present during preparation and was amplified during execution. Consistent with this asymmetry, we also observed better across-arm generalization in the left hemisphere, indicating similar neural representations for right and left arm movements. Notably, these left hemisphere electrodes were centered over premotor and parietal regions. The more extensive bilateral encoding in the left hemisphere adds a new perspective to the pervasive neuropsychological finding that the left hemisphere plays a dominant role in praxis.
Collapse
Affiliation(s)
- Christina M Merrick
- Department of Psychology, University of California, Berkeley, Berkeley, United States
| | - Tanner C Dixon
- UC Berkeley – UCSF Graduate Program in Bioengineering, University of California, Berkeley, Berkeley, United States
| | - Assaf Breska
- Department of Psychology, University of California, Berkeley, Berkeley, United States
| | - Jack Lin
- Department of Neurology, University of California at Irvine, Irvine, United States
| | - Edward F Chang
- Department of Neurological Surgery, University of California San Francisco, San Francisco, United States
| | - David King-Stephens
- Department of Neurology and Neurosurgery, California Pacific Medical Center, San Francisco, United States
| | - Kenneth D Laxer
- Department of Neurology and Neurosurgery, California Pacific Medical Center, San Francisco, United States
| | - Peter B Weber
- Department of Neurology and Neurosurgery, California Pacific Medical Center, San Francisco, United States
| | - Jose Carmena
- UC Berkeley – UCSF Graduate Program in Bioengineering, University of California, Berkeley, Berkeley, United States
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, United States
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, United States
| | - Robert Thomas Knight
- Department of Psychology, University of California, Berkeley, Berkeley, United States
- UC Berkeley – UCSF Graduate Program in Bioengineering, University of California, Berkeley, Berkeley, United States
- Department of Neurological Surgery, University of California San Francisco, San Francisco, United States
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, United States
| | - Richard B Ivry
- Department of Psychology, University of California, Berkeley, Berkeley, United States
- UC Berkeley – UCSF Graduate Program in Bioengineering, University of California, Berkeley, Berkeley, United States
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, United States
| |
Collapse
|
14
|
Norman-Haignere SV, Long LK, Devinsky O, Doyle W, Irobunda I, Merricks EM, Feldstein NA, McKhann GM, Schevon CA, Flinker A, Mesgarani N. Multiscale temporal integration organizes hierarchical computation in human auditory cortex. Nat Hum Behav 2022; 6:455-469. [PMID: 35145280 PMCID: PMC8957490 DOI: 10.1038/s41562-021-01261-y] [Citation(s) in RCA: 24] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2020] [Accepted: 11/18/2021] [Indexed: 01/11/2023]
Abstract
To derive meaning from sound, the brain must integrate information across many timescales. What computations underlie multiscale integration in human auditory cortex? Evidence suggests that auditory cortex analyses sound using both generic acoustic representations (for example, spectrotemporal modulation tuning) and category-specific computations, but the timescales over which these putatively distinct computations integrate remain unclear. To answer this question, we developed a general method to estimate sensory integration windows-the time window when stimuli alter the neural response-and applied our method to intracranial recordings from neurosurgical patients. We show that human auditory cortex integrates hierarchically across diverse timescales spanning from ~50 to 400 ms. Moreover, we find that neural populations with short and long integration windows exhibit distinct functional properties: short-integration electrodes (less than ~200 ms) show prominent spectrotemporal modulation selectivity, while long-integration electrodes (greater than ~200 ms) show prominent category selectivity. These findings reveal how multiscale integration organizes auditory computation in the human brain.
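The defining idea above (an integration window is the time window within which stimuli alter the neural response) can be illustrated with a toy neuron whose window is known: the response to a shared segment becomes context-invariant once the segment outlasts the window. This is a cartoon of that logic, not the authors' estimation method; the neuron, window length, and contexts are all invented.

```python
import numpy as np

rng = np.random.default_rng(5)
W = 20  # the toy neuron's true integration window, in samples

def neuron(stim):
    # boxcar temporal integration over the most recent W samples
    return np.convolve(stim, np.ones(W) / W)[: len(stim)]

def context_effect(seg_len, n_reps=50):
    """Mean response difference, at the end of a shared segment,
    between two random preceding contexts."""
    segment = rng.standard_normal(seg_len)
    diffs = []
    for _ in range(n_reps):
        ctx_a, ctx_b = rng.standard_normal((2, 100))
        r_a = neuron(np.concatenate([ctx_a, segment]))[-1]
        r_b = neuron(np.concatenate([ctx_b, segment]))[-1]
        diffs.append(abs(r_a - r_b))
    return float(np.mean(diffs))

d_short = context_effect(5)   # segment shorter than W: context leaks in
d_long = context_effect(40)   # segment longer than W: context-invariant
print(d_short, d_long)
```

Sweeping the segment length and finding where the context effect vanishes recovers the window size, which is the intuition behind estimating integration windows from responses to stimuli embedded in varying contexts.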
Collapse
Affiliation(s)
- Sam V Norman-Haignere
- Zuckerman Mind, Brain, Behavior Institute, Columbia University
- HHMI Postdoctoral Fellow of the Life Sciences Research Foundation
| | - Laura K. Long
- Zuckerman Mind, Brain, Behavior Institute, Columbia University
- Doctoral Program in Neurobiology and Behavior, Columbia University
| | - Orrin Devinsky
- Department of Neurology, NYU Langone Medical Center
- Comprehensive Epilepsy Center, NYU Langone Medical Center
| | - Werner Doyle
- Comprehensive Epilepsy Center, NYU Langone Medical Center
- Department of Neurosurgery, NYU Langone Medical Center
| | - Ifeoma Irobunda
- Department of Neurology, Columbia University Irving Medical Center
| | | | - Neil A. Feldstein
- Department of Neurological Surgery, Columbia University Irving Medical Center
| | - Guy M. McKhann
- Department of Neurological Surgery, Columbia University Irving Medical Center
| | | | - Adeen Flinker
- Department of Neurology, NYU Langone Medical Center
- Comprehensive Epilepsy Center, NYU Langone Medical Center
- Department of Biomedical Engineering, NYU Tandon School of Engineering
| | - Nima Mesgarani
- Zuckerman Mind, Brain, Behavior Institute, Columbia University
- Doctoral Program in Neurobiology and Behavior, Columbia University
- Department of Electrical Engineering, Columbia University
| |
Collapse
|
15
|
LeBel A, Jain S, Huth AG. Voxelwise Encoding Models Show That Cerebellar Language Representations Are Highly Conceptual. J Neurosci 2021; 41:10341-10355. [PMID: 34732520 PMCID: PMC8672691 DOI: 10.1523/jneurosci.0118-21.2021] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2021] [Revised: 08/09/2021] [Accepted: 09/14/2021] [Indexed: 11/21/2022] Open
Abstract
There is a growing body of research demonstrating that the cerebellum is involved in language understanding. Early theories assumed that the cerebellum is involved in low-level language processing. However, those theories are at odds with recent work demonstrating cerebellar activation during cognitive tasks. Using natural language stimuli and an encoding model framework, we performed an fMRI experiment on 3 men and 2 women, where subjects passively listened to 5 h of natural language stimuli, which allowed us to analyze language processing in the cerebellum with higher precision than previous work. We used these data to fit voxelwise encoding models with five different feature spaces that span the hierarchy of language processing from acoustic input to high-level conceptual processing. Examining the prediction performance of these models on separate BOLD data shows that cerebellar responses to language are almost entirely explained by high-level conceptual language features rather than low-level acoustic or phonemic features. Additionally, we found that the cerebellum has a higher proportion of voxels that represent social semantic categories, which include "social" and "people" words, and lower representations of all other semantic categories, including "mental," "concrete," and "place" words, than cortex. This suggests that the cerebellum is representing language at a conceptual level with a preference for social information.
SIGNIFICANCE STATEMENT
Recent work has demonstrated that, beyond its typical role in motor planning, the cerebellum is implicated in a wide variety of tasks, including language. However, little is known about the language representations in the cerebellum, or how those representations compare to cortex. Using voxelwise encoding models and natural language fMRI data, we demonstrate here that language representations are significantly different in the cerebellum compared with cortex.
Cerebellum language representations are almost entirely semantic, and the cerebellum contains overrepresentation of social semantic information compared with cortex. These results suggest that the cerebellum is not involved in language processing per se, but cognitive processing more generally.
Collapse
Affiliation(s)
- Amanda LeBel
- Helen Wills Neuroscience Institute, University of California-Berkeley, Berkeley, California 94720
| | - Shailee Jain
- Department of Computer Science, University of Texas-Austin, Austin, Texas 78712
| | - Alexander G Huth
- Department of Neuroscience, University of Texas-Austin, Austin, Texas 78712
- Department of Computer Science, University of Texas-Austin, Austin, Texas 78712
| |
Collapse
|
16
|
Generalizable EEG Encoding Models with Naturalistic Audiovisual Stimuli. J Neurosci 2021; 41:8946-8962. [PMID: 34503996 DOI: 10.1523/jneurosci.2891-20.2021] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2020] [Revised: 08/24/2021] [Accepted: 08/29/2021] [Indexed: 11/21/2022] Open
Abstract
In natural conversations, listeners must attend to what others are saying while ignoring extraneous background sounds. Recent studies have used encoding models to predict electroencephalography (EEG) responses to speech in noise-free listening situations, sometimes referred to as "speech tracking." Researchers have analyzed how speech tracking changes with different types of background noise. It is unclear, however, whether neural responses from acoustically rich, naturalistic environments with and without background noise can be generalized to more controlled stimuli. If encoding models for acoustically rich, naturalistic stimuli are generalizable to other tasks, this could aid in data collection from populations of individuals who may not tolerate listening to more controlled and less engaging stimuli for long periods of time. We recorded noninvasive scalp EEG while 17 human participants (8 male/9 female) listened to speech without noise and audiovisual speech stimuli containing overlapping speakers and background sounds. We fit multivariate temporal receptive field encoding models to predict EEG responses to pitch, the acoustic envelope, phonological features, and visual cues in both stimulus conditions. Our results suggested that neural responses to naturalistic stimuli were generalizable to more controlled datasets. EEG responses to speech in isolation were predicted accurately using phonological features alone, while responses to speech in a rich acoustic background were more accurate when including both phonological and acoustic features. Our findings suggest that naturalistic audiovisual stimuli can be used to measure receptive fields that are comparable and generalizable to more controlled audio-only stimuli.
SIGNIFICANCE STATEMENT
Understanding spoken language in natural environments requires listeners to parse acoustic and linguistic information in the presence of other distracting stimuli.
However, most studies of auditory processing rely on highly controlled stimuli with no background noise, or with background noise inserted at specific times. Here, we compare models where EEG data are predicted based on a combination of acoustic, phonetic, and visual features in highly disparate stimuli-sentences from a speech corpus and speech embedded within movie trailers. We show that modeling neural responses to highly noisy, audiovisual movies can uncover tuning for acoustic and phonetic information that generalizes to simpler stimuli typically used in sensory neuroscience experiments.
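The multivariate temporal receptive field encoding models used here are time-lagged regularized regressions from stimulus features to EEG. Below is a minimal single-feature sketch on synthetic data; the envelope, TRF shape, noise level, and ridge penalty are invented for illustration and are not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(3)
T, L = 4000, 30  # time samples, number of stimulus lags

envelope = np.abs(rng.standard_normal(T))   # toy acoustic envelope
trf_true = np.exp(-np.arange(L) / 8.0) * np.sin(np.arange(L) / 3.0)
eeg = np.convolve(envelope, trf_true)[:T] + 0.5 * rng.standard_normal(T)

# Time-lagged design matrix: column k is the envelope delayed by k samples.
X = np.column_stack([np.roll(envelope, k) for k in range(L)])

alpha = 1.0  # ridge penalty
trf_est = np.linalg.solve(
    X[L:].T @ X[L:] + alpha * np.eye(L), X[L:].T @ eeg[L:]
)  # skip the first L rows, where np.roll wraps around

r = np.corrcoef(trf_est, trf_true)[0, 1]
print(r)
```

In the multi-feature case, additional lagged blocks (pitch, phonological features, visual cues) are simply concatenated into the design matrix, and prediction accuracy on held-out data is the usual evaluation metric.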
Collapse
|
17
|
Alvarez I, Hurley SA, Parker AJ, Bridge H. Human primary visual cortex shows larger population receptive fields for binocular disparity-defined stimuli. Brain Struct Funct 2021; 226:2819-2838. [PMID: 34347164 PMCID: PMC8541985 DOI: 10.1007/s00429-021-02351-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2021] [Accepted: 07/22/2021] [Indexed: 11/26/2022]
Abstract
The visual perception of 3D depth is underpinned by the brain's ability to combine signals from the left and right eyes to produce a neural representation of binocular disparity for perception and behaviour. Electrophysiological studies of binocular disparity over the past 2 decades have investigated the computational role of neurons in area V1 for binocular combination, while more recent neuroimaging investigations have focused on identifying specific roles for different extrastriate visual areas in depth perception. Here we investigate the population receptive field properties of neural responses to binocular information in striate and extrastriate cortical visual areas using ultra-high field fMRI. We measured BOLD fMRI responses while participants viewed retinotopic mapping stimuli defined by different visual properties: contrast, luminance, motion, correlated and anti-correlated stereoscopic disparity. By fitting each condition with a population receptive field model, we compared quantitatively the size of the population receptive field for disparity-specific stimulation. We found larger population receptive fields for disparity compared with contrast and luminance in area V1, the first stage of binocular combination, which likely reflects the binocular integration zone, an interpretation supported by modelling of the binocular energy model. A similar pattern was found in region LOC, where it may reflect the role of disparity as a cue for 3D shape. These findings provide insight into the binocular receptive field properties underlying processing for human stereoscopic vision.
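The population receptive field model referred to here (in the spirit of the standard pRF framework) predicts a response as the overlap between a 2D Gaussian and the stimulus aperture. The sketch below shows why a larger pRF size produces broader position tuning to a sweeping bar; the grid, bar width, and sizes are arbitrary, and hemodynamics are omitted.

```python
import numpy as np

lin = np.linspace(-10, 10, 101)   # visual field coordinates, degrees
X, Y = np.meshgrid(lin, lin)

def prf_response(x0, y0, sigma, aperture):
    """Predicted response: overlap of a 2D Gaussian pRF (center x0, y0;
    size sigma) with a binary stimulus aperture."""
    g = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))
    return float(np.sum(g * aperture))

def bar_aperture(x_pos, width=1.0):
    # vertical bar of the given width centered at x_pos
    return (np.abs(X - x_pos) < width / 2).astype(float)

positions = np.linspace(-8, 8, 33)
narrow = [prf_response(0, 0, 1.0, bar_aperture(p)) for p in positions]
wide = [prf_response(0, 0, 3.0, bar_aperture(p)) for p in positions]

def half_max_width(resp):
    resp = np.asarray(resp)
    return int(np.sum(resp > 0.5 * resp.max()))

# The larger pRF responds over a wider range of bar positions.
print(half_max_width(narrow), half_max_width(wide))
```

Fitting inverts this forward model: center and size are adjusted until the predicted time course best matches the measured BOLD response, so a broader disparity-driven tuning profile is read out as a larger pRF.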
Collapse
Affiliation(s)
- Ivan Alvarez
- Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, OX3 9DU, UK
| | - Samuel A Hurley
- Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, OX3 9DU, UK
- Department of Radiology, University of Wisconsin, Madison, WI, 53705, USA
| | - Andrew J Parker
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Institut für Biologie, Otto-von-Guericke Universität, 39120, Magdeburg, Germany
| | - Holly Bridge
- Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, OX3 9DU, UK.
| |
Collapse
|
18
|
Pospisil DA, Bair W. The unbiased estimation of the fraction of variance explained by a model. PLoS Comput Biol 2021; 17:e1009212. [PMID: 34347786 PMCID: PMC8367013 DOI: 10.1371/journal.pcbi.1009212] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2020] [Revised: 08/16/2021] [Accepted: 06/24/2021] [Indexed: 11/28/2022] Open
Abstract
The correlation coefficient squared, r², is commonly used to validate quantitative models on neural data, yet it is biased by trial-to-trial variability: as trial-to-trial variability increases, measured correlation to a model’s predictions decreases. As a result, models that perfectly explain neural tuning can appear to perform poorly. Many solutions to this problem have been proposed, but no consensus has been reached on which is the least biased estimator. Some currently used methods substantially overestimate model fit, and the utility of even the best performing methods is limited by the lack of confidence intervals and asymptotic analysis. We provide a new estimator, r̂²_ER, that outperforms all prior estimators in our testing, and we provide confidence intervals and asymptotic guarantees. We apply our estimator to a variety of neural data to validate its utility. We find that neural noise is often so great that confidence intervals of the estimator cover the entire possible range of values ([0, 1]), preventing meaningful evaluation of the quality of a model’s predictions. This leads us to propose the use of the signal-to-noise ratio (SNR) as a quality metric for making quantitative comparisons across neural recordings. Analyzing a variety of neural data sets, we find that up to ∼ 40% of some state-of-the-art neural recordings do not pass even a liberal SNR criterion. Moving toward more reliable estimates of correlation, and quantitatively comparing quality across recording modalities and data sets, will be critical to accelerating progress in modeling biological phenomena.
Quantifying the similarity between a model and noisy data is fundamental to advancing the scientific understanding of biological phenomena, and it is particularly relevant to modeling neuronal responses. A ubiquitous metric of similarity is the correlation coefficient, but this metric depends on a variety of factors that are irrelevant to the similarity between a model and data.
While neuroscientists have recognized this problem and proposed corrected methods, no consensus has been reached as to which are effective. Prior methods have wide variation in their accuracy, and even the most successful methods lack confidence intervals, leaving uncertainty about the reliability of any particular estimate. We address these issues by developing a new estimator along with an associated confidence interval that outperforms all prior methods. We also demonstrate how a signal-to-noise ratio can be used to usefully threshold and compare noisy experimental data across studies and recording paradigms.
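The variance decomposition behind such corrections can be sketched for repeated presentations of the same stimuli. This is the textbook signal/noise split and the SNR it yields, not the authors' r̂²_ER estimator:

```python
import numpy as np

def snr_from_repeats(responses):
    """responses: (n_trials, n_stimuli) array of repeated measurements.

    Splits response variance into stimulus-driven signal variance and
    trial-to-trial noise variance, and returns their ratio (SNR).
    """
    n_trials, _ = responses.shape
    # Noise variance: across-trial variance, averaged over stimuli.
    noise_var = responses.var(axis=0, ddof=1).mean()
    # The variance of the trial-averaged response overestimates signal
    # variance by noise_var / n_trials; subtract that bias.
    signal_var = responses.mean(axis=0).var(ddof=1) - noise_var / n_trials
    return max(signal_var, 0.0) / noise_var

rng = np.random.default_rng(0)
tuning = rng.normal(size=50)                           # true signal, unit variance
noisy = tuning + rng.normal(scale=2.0, size=(20, 50))  # noise variance 4
print(round(snr_from_repeats(noisy), 2))  # expected near 0.25 (signal var 1 / noise var 4)
```

With few trials and high noise, this ratio (and any correlation built on it) becomes very uncertain, which is the motivation for the SNR screening criterion discussed in the abstract.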
Collapse
Affiliation(s)
- Dean A. Pospisil
- Department of Biological Structure, University of Washington, Seattle, Washington, United States of America
| | - Wyeth Bair
- Department of Biological Structure, University of Washington, Seattle, Washington, United States of America
- Washington National Primate Research Center, University of Washington, Seattle, Washington, United States of America
- University of Washington Institute for Neuroengineering, Seattle, Washington, United States of America
- Computational Neuroscience Center, University of Washington, Seattle, Washington, United States of America
| |
Collapse
|
19
|
Nonlinear Spatial Integration Underlies the Diversity of Retinal Ganglion Cell Responses to Natural Images. J Neurosci 2021; 41:3479-3498. [PMID: 33664129 PMCID: PMC8051676 DOI: 10.1523/jneurosci.3075-20.2021] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2020] [Revised: 02/05/2021] [Accepted: 02/09/2021] [Indexed: 02/06/2023] Open
Abstract
How neurons encode natural stimuli is a fundamental question for sensory neuroscience. In the early visual system, standard encoding models assume that neurons linearly filter incoming stimuli through their receptive fields, but artificial stimuli, such as contrast-reversing gratings, often reveal nonlinear spatial processing. We investigated to what extent such nonlinear processing is relevant for the encoding of natural images in retinal ganglion cells in mice of either sex. We found that standard linear receptive field models yielded good predictions of responses to flashed natural images for a subset of cells but failed to capture the spiking activity for many others. Cells with poor model performance displayed pronounced sensitivity to fine spatial contrast and local signal rectification as the dominant nonlinearity. By contrast, sensitivity to high-frequency contrast-reversing gratings, a classical test for nonlinear spatial integration, was not a good predictor of model performance and thus did not capture the variability of nonlinear spatial integration under natural images. In addition, we also observed a class of nonlinear ganglion cells with inverse tuning for spatial contrast, responding more strongly to spatially homogeneous than to spatially structured stimuli. These findings highlight the diversity of receptive field nonlinearities as a crucial component for understanding early sensory encoding in the context of natural stimuli.
SIGNIFICANCE STATEMENT Experiments with artificial visual stimuli have revealed that many types of retinal ganglion cells pool spatial input signals nonlinearly. However, it is still unclear how relevant this nonlinear spatial integration is when the input signals are natural images. Here we analyze retinal responses to natural scenes in large populations of mouse ganglion cells. We show that nonlinear spatial integration strongly influences responses to natural images for some ganglion cells, but not for others. Cells with nonlinear spatial integration were sensitive to spatial structure inside their receptive fields, and a small group of cells displayed a surprising sensitivity to spatially homogeneous stimuli. Traditional analyses with contrast-reversing gratings did not predict this variability of nonlinear spatial integration under natural images.
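The contrast-reversing-grating logic referenced here can be made concrete with a toy model: a purely linear receptive field gives no response to a fine reversing grating, while rectified subunits respond at both grating phases. This is an illustrative toy with made-up dimensions, not the authors' analysis:

```python
import numpy as np

# Toy 1-D receptive field of 16 pixels, pooled either linearly as a whole
# or as four 4-pixel subunits that are half-wave rectified before summing.
rf = np.ones(16)
grating = np.tile(np.repeat([1.0, -1.0], 4), 2)  # reversing grating, half-period = subunit size

for phase in (grating, -grating):
    linear = rf @ phase                              # bright and dark bars cancel
    subunit_drive = phase.reshape(4, 4).sum(axis=1)  # one summed value per subunit
    nonlinear = np.maximum(subunit_drive, 0).sum()   # rectify, then pool
    print(linear, nonlinear)                         # 0.0 8.0 at both phases
```

The linear model is silent at both phases while the subunit model responds to both, the classic frequency-doubled signature of nonlinear spatial integration.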
Collapse
|
20
|
Rahman M, Willmore BDB, King AJ, Harper NS. Simple transformations capture auditory input to cortex. Proc Natl Acad Sci U S A 2020; 117:28442-28451. [PMID: 33097665 PMCID: PMC7668077 DOI: 10.1073/pnas.1922033117] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022] Open
Abstract
Sounds are processed by the ear and central auditory pathway. These processing steps are biologically complex, and many aspects of the transformation from sound waveforms to cortical response remain unclear. To understand this transformation, we combined models of the auditory periphery with various encoding models to predict auditory cortical responses to natural sounds. The cochlear models ranged from detailed biophysical simulations of the cochlea and auditory nerve to simple spectrogram-like approximations of the information processing in these structures. For three different stimulus sets, we tested the capacity of these models to predict the time course of single-unit neural responses recorded in ferret primary auditory cortex. We found that simple models based on a log-spaced spectrogram with approximately logarithmic compression perform similarly to the best-performing biophysically detailed models of the auditory periphery, and more consistently well over diverse natural and synthetic sounds. Furthermore, we demonstrated that including approximations of the three categories of auditory nerve fiber in these simple models can substantially improve prediction, particularly when combined with a network encoding model. Our findings imply that the properties of the auditory periphery and central pathway may together result in a simpler than expected functional transformation from ear to cortex. Thus, much of the detailed biological complexity seen in the auditory periphery does not appear to be important for understanding the cortical representation of sound.
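The "simple model" found competitive here, a log-spaced spectrogram with roughly logarithmic compression, can be sketched in a few lines. All parameters below are illustrative, not the paper's exact front end:

```python
import numpy as np

def log_spectrogram(wave, sr, n_fft=512, hop=128, n_bands=32,
                    fmin=500.0, floor=1e-2):
    """Crude cochlear front end: STFT power, pooled into log-spaced
    frequency bands, then logarithmically compressed."""
    n_frames = 1 + (len(wave) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([wave[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    edges = np.geomspace(fmin, sr / 2, n_bands + 1)   # log-spaced band edges
    bands = np.stack([power[:, (freqs >= lo) & (freqs < hi)].sum(axis=1)
                      for lo, hi in zip(edges[:-1], edges[1:])], axis=1)
    return np.log(bands + floor)                      # ~logarithmic compression

sr = 16000
t = np.arange(sr) / sr
spec = log_spectrogram(np.sin(2 * np.pi * 2000 * t), sr)  # 1 s, 2 kHz tone
print(spec.shape)  # -> (122, 32)
```

A 2 kHz tone lights up the band containing 2 kHz; the paper's point is that front ends of roughly this complexity predict cortical responses about as well as detailed biophysical cochlear models.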
Collapse
Affiliation(s)
- Monzilur Rahman
- Department of Physiology, Anatomy and Genetics, University of Oxford, OX1 3PT Oxford, United Kingdom
| | - Ben D B Willmore
- Department of Physiology, Anatomy and Genetics, University of Oxford, OX1 3PT Oxford, United Kingdom
| | - Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, OX1 3PT Oxford, United Kingdom
| | - Nicol S Harper
- Department of Physiology, Anatomy and Genetics, University of Oxford, OX1 3PT Oxford, United Kingdom
| |
Collapse
|
21
|
Chen Y, Ierapetritou M. A framework of hybrid model development with identification of plant‐model mismatch. AIChE J 2020. [DOI: 10.1002/aic.16996] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Affiliation(s)
- Yingjie Chen
- Department of Chemical and Biomolecular Engineering University of Delaware Newark Delaware USA
| | - Marianthi Ierapetritou
- Department of Chemical and Biomolecular Engineering University of Delaware Newark Delaware USA
| |
Collapse
|
22
|
Keshishian M, Akbari H, Khalighinejad B, Herrero JL, Mehta AD, Mesgarani N. Estimating and interpreting nonlinear receptive field of sensory neural responses with deep neural network models. eLife 2020; 9:53445. [PMID: 32589140 PMCID: PMC7347387 DOI: 10.7554/elife.53445] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2019] [Accepted: 06/21/2020] [Indexed: 12/21/2022] Open
Abstract
Our understanding of nonlinear stimulus transformations by neural circuits is hindered by the lack of comprehensive yet interpretable computational modeling frameworks. Here, we propose a data-driven approach based on deep neural networks to directly model arbitrarily nonlinear stimulus-response mappings. Reformulating the exact function of a trained neural network as a collection of stimulus-dependent linear functions enables a locally linear receptive field interpretation of the neural network. Predicting the neural responses recorded invasively from the auditory cortex of neurosurgical patients as they listened to speech, this approach significantly improves the prediction accuracy of auditory cortical responses, particularly in nonprimary areas. Moreover, interpreting the functions learned by neural networks uncovered three distinct types of nonlinear transformations of speech that varied considerably from primary to nonprimary auditory regions. The ability of this framework to capture arbitrary stimulus-response mappings while maintaining model interpretability leads to a better understanding of cortical processing of sensory signals.
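The reformulation described here rests on the fact that a ReLU network is exactly affine within each activation region, so its prediction for a given stimulus can be read out as a stimulus-dependent linear receptive field. A minimal sketch with a random toy network, not the authors' trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 20)), rng.normal(size=8)  # hidden ReLU layer
w2, b2 = rng.normal(size=8), rng.normal()              # linear readout

def net(x):
    return w2 @ np.maximum(W1 @ x + b1, 0) + b2

x = rng.normal(size=20)                  # one "stimulus"
mask = (W1 @ x + b1 > 0).astype(float)   # which ReLUs are active for this input
# Within this activation region the network is exactly affine, so the
# active weights collapse into a stimulus-dependent linear filter:
w_local = (w2 * mask) @ W1
b_local = (w2 * mask) @ b1 + b2
print(np.allclose(net(x), w_local @ x + b_local))   # True
```

`w_local` is the "locally linear receptive field" for stimulus `x`; inspecting how it changes across stimuli is what makes the deep model interpretable.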
Collapse
Affiliation(s)
- Menoua Keshishian
- Department of Electrical Engineering, Columbia University, New York, United States.,Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
| | - Hassan Akbari
- Department of Electrical Engineering, Columbia University, New York, United States.,Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
| | - Bahar Khalighinejad
- Department of Electrical Engineering, Columbia University, New York, United States.,Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
| | - Jose L Herrero
- Feinstein Institute for Medical Research, Manhasset, United States.,Department of Neurosurgery, Hofstra-Northwell School of Medicine and Feinstein Institute for Medical Research, Manhasset, United States
| | - Ashesh D Mehta
- Feinstein Institute for Medical Research, Manhasset, United States.,Department of Neurosurgery, Hofstra-Northwell School of Medicine and Feinstein Institute for Medical Research, Manhasset, United States
| | - Nima Mesgarani
- Department of Electrical Engineering, Columbia University, New York, United States.,Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
| |
Collapse
|
23
|
de Vries SEJ, Lecoq JA, Buice MA, Groblewski PA, Ocker GK, Oliver M, Feng D, Cain N, Ledochowitsch P, Millman D, Roll K, Garrett M, Keenan T, Kuan L, Mihalas S, Olsen S, Thompson C, Wakeman W, Waters J, Williams D, Barber C, Berbesque N, Blanchard B, Bowles N, Caldejon SD, Casal L, Cho A, Cross S, Dang C, Dolbeare T, Edwards M, Galbraith J, Gaudreault N, Gilbert TL, Griffin F, Hargrave P, Howard R, Huang L, Jewell S, Keller N, Knoblich U, Larkin JD, Larsen R, Lau C, Lee E, Lee F, Leon A, Li L, Long F, Luviano J, Mace K, Nguyen T, Perkins J, Robertson M, Seid S, Shea-Brown E, Shi J, Sjoquist N, Slaughterbeck C, Sullivan D, Valenza R, White C, Williford A, Witten DM, Zhuang J, Zeng H, Farrell C, Ng L, Bernard A, Phillips JW, Reid RC, Koch C. A large-scale standardized physiological survey reveals functional organization of the mouse visual cortex. Nat Neurosci 2020; 23:138-151. [PMID: 31844315 PMCID: PMC6948932 DOI: 10.1038/s41593-019-0550-9] [Citation(s) in RCA: 141] [Impact Index Per Article: 35.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2019] [Accepted: 10/28/2019] [Indexed: 11/16/2022]
Abstract
To understand how the brain processes sensory information to guide behavior, we must know how stimulus representations are transformed throughout the visual cortex. Here we report an open, large-scale physiological survey of activity in the awake mouse visual cortex: the Allen Brain Observatory Visual Coding dataset. This publicly available dataset includes the cortical activity of nearly 60,000 neurons from six visual areas, four layers, and 12 transgenic mouse lines in a total of 243 adult mice, in response to a systematic set of visual stimuli. We classify neurons on the basis of joint reliabilities to multiple stimuli and validate this functional classification with models of visual responses. While most classes are characterized by responses to specific subsets of the stimuli, the largest class is not reliably responsive to any of the stimuli and becomes progressively larger in higher visual areas. These classes reveal a functional organization wherein putative dorsal areas show specialization for visual motion signals.
Collapse
Affiliation(s)
| | | | | | | | | | | | - David Feng
- Allen Institute for Brain Science, Seattle, WA, USA
| | | | | | | | - Kate Roll
- Allen Institute for Brain Science, Seattle, WA, USA
| | | | - Tom Keenan
- Allen Institute for Brain Science, Seattle, WA, USA
| | - Leonard Kuan
- Allen Institute for Brain Science, Seattle, WA, USA
| | | | - Shawn Olsen
- Allen Institute for Brain Science, Seattle, WA, USA
| | | | | | - Jack Waters
- Allen Institute for Brain Science, Seattle, WA, USA
| | | | - Chris Barber
- Allen Institute for Brain Science, Seattle, WA, USA
| | | | | | | | | | - Linzy Casal
- Allen Institute for Brain Science, Seattle, WA, USA
| | - Andrew Cho
- Allen Institute for Brain Science, Seattle, WA, USA
| | - Sissy Cross
- Allen Institute for Brain Science, Seattle, WA, USA
| | - Chinh Dang
- Allen Institute for Brain Science, Seattle, WA, USA
| | - Tim Dolbeare
- Allen Institute for Brain Science, Seattle, WA, USA
| | | | | | | | | | | | | | | | | | - Sean Jewell
- Department of Statistics, University of Washington, Seattle, WA, USA
| | - Nika Keller
- Allen Institute for Brain Science, Seattle, WA, USA
| | - Ulf Knoblich
- Allen Institute for Brain Science, Seattle, WA, USA
| | | | | | - Chris Lau
- Allen Institute for Brain Science, Seattle, WA, USA
| | - Eric Lee
- Allen Institute for Brain Science, Seattle, WA, USA
| | - Felix Lee
- Allen Institute for Brain Science, Seattle, WA, USA
| | - Arielle Leon
- Allen Institute for Brain Science, Seattle, WA, USA
| | - Lu Li
- Allen Institute for Brain Science, Seattle, WA, USA
| | - Fuhui Long
- Allen Institute for Brain Science, Seattle, WA, USA
| | | | - Kyla Mace
- Allen Institute for Brain Science, Seattle, WA, USA
| | | | - Jed Perkins
- Allen Institute for Brain Science, Seattle, WA, USA
| | | | - Sam Seid
- Allen Institute for Brain Science, Seattle, WA, USA
| | - Eric Shea-Brown
- Allen Institute for Brain Science, Seattle, WA, USA
- Department of Applied Mathematics, University of Washington, Seattle, WA, USA
| | - Jianghong Shi
- Department of Applied Mathematics, University of Washington, Seattle, WA, USA
| | | | | | | | - Ryan Valenza
- Allen Institute for Brain Science, Seattle, WA, USA
| | - Casey White
- Allen Institute for Brain Science, Seattle, WA, USA
| | | | - Daniela M Witten
- Department of Statistics, University of Washington, Seattle, WA, USA
- Department of Biostatistics, University of Washington, Seattle, WA, USA
| | - Jun Zhuang
- Allen Institute for Brain Science, Seattle, WA, USA
| | - Hongkui Zeng
- Allen Institute for Brain Science, Seattle, WA, USA
| | | | - Lydia Ng
- Allen Institute for Brain Science, Seattle, WA, USA
| | - Amy Bernard
- Allen Institute for Brain Science, Seattle, WA, USA
| | | | - R Clay Reid
- Allen Institute for Brain Science, Seattle, WA, USA
| | | |
Collapse
|
24
|
Paradoxical Rules of Spike Train Decoding Revealed at the Sensitivity Limit of Vision. Neuron 2019; 104:576-587.e11. [PMID: 31519460 DOI: 10.1016/j.neuron.2019.08.005] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2018] [Revised: 05/28/2019] [Accepted: 08/03/2019] [Indexed: 12/11/2022]
Abstract
All sensory information is encoded in neural spike trains. It is unknown how the brain utilizes this neural code to drive behavior. Here, we unravel the decoding rules of the brain at the most elementary level by linking behavioral decisions to retinal output signals in a single-photon detection task. A transgenic mouse line allowed us to separate the two primary retinal outputs, ON and OFF pathways, carrying information about photon absorptions as increases and decreases in spiking, respectively. We measured the sensitivity limit of rods and the most sensitive ON and OFF ganglion cells and correlated these results with visually guided behavior using markerless head and eye tracking. We show that behavior relies only on the ON pathway even when the OFF pathway would allow higher sensitivity. Paradoxically, behavior does not rely on the spike code with maximal information but instead relies on a decoding strategy based on increases in spiking.
Collapse
|
25
|
Kell AJE, McDermott JH. Invariance to background noise as a signature of non-primary auditory cortex. Nat Commun 2019; 10:3958. [PMID: 31477711 PMCID: PMC6718388 DOI: 10.1038/s41467-019-11710-y] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2018] [Accepted: 07/30/2019] [Indexed: 12/22/2022] Open
Abstract
Despite well-established anatomical differences between primary and non-primary auditory cortex, the associated representational transformations have remained elusive. Here we show that primary and non-primary auditory cortex are differentiated by their invariance to real-world background noise. We measured fMRI responses to natural sounds presented in isolation and in real-world noise, quantifying invariance as the correlation between the two responses for individual voxels. Non-primary areas were substantially more noise-invariant than primary areas. This primary-nonprimary difference occurred both for speech and non-speech sounds and was unaffected by a concurrent demanding visual task, suggesting that the observed invariance is not specific to speech processing and is robust to inattention. The difference was most pronounced for real-world background noise-both primary and non-primary areas were relatively robust to simple types of synthetic noise. Our results suggest a general representational transformation between auditory cortical stages, illustrating a representational consequence of hierarchical organization in the auditory system.
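The invariance measure itself is simple: for each voxel, correlate its response profile across sounds heard clean against its profile for the same sounds in background noise. A sketch of that metric on simulated responses (hypothetical array shapes, not the authors' pipeline):

```python
import numpy as np

def noise_invariance(clean, noisy):
    """Per-voxel correlation between response profiles to the same sounds
    presented clean vs. in background noise. Shapes: (n_voxels, n_sounds)."""
    c = clean - clean.mean(axis=1, keepdims=True)
    n = noisy - noisy.mean(axis=1, keepdims=True)
    return (c * n).sum(axis=1) / np.sqrt((c ** 2).sum(axis=1) * (n ** 2).sum(axis=1))

rng = np.random.default_rng(0)
profile = rng.normal(size=(10, 60))        # simulated voxel-by-sound responses
invariant = noise_invariance(profile, profile + 0.2 * rng.normal(size=(10, 60)))
fragile = noise_invariance(profile, rng.normal(size=(10, 60)))
print(invariant.mean() > fragile.mean())   # True: preserved profiles correlate
```

In the study, non-primary voxels behave like `invariant` (profile largely preserved under noise) and primary voxels sit closer to `fragile`.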
Collapse
Affiliation(s)
- Alexander J E Kell
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, 02139, USA.
- McGovern Institute for Brain Research, MIT, Cambridge, MA, 02139, USA.
- Center for Brains, Minds, and Machines, MIT, Cambridge, MA, 02139, USA.
- Zuckerman Institute of Mind, Brain, and Behavior, Columbia University, New York, NY, 10027, USA.
| | - Josh H McDermott
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, 02139, USA.
- McGovern Institute for Brain Research, MIT, Cambridge, MA, 02139, USA.
- Center for Brains, Minds, and Machines, MIT, Cambridge, MA, 02139, USA.
- Program in Speech and Hearing Biosciences and Technology, Harvard University, Boston, MA, USA.
| |
Collapse
|
26
|
Hansen BC, Field DJ, Greene MR, Olson C, Miskovic V. Towards a state-space geometry of neural responses to natural scenes: A steady-state approach. Neuroimage 2019; 201:116027. [PMID: 31325643 DOI: 10.1016/j.neuroimage.2019.116027] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2019] [Revised: 06/13/2019] [Accepted: 07/16/2019] [Indexed: 10/26/2022] Open
Abstract
Our understanding of information processing by the mammalian visual system has come through a variety of techniques ranging from psychophysics and fMRI to single unit recording and EEG. Each technique provides unique insights into the processing framework of the early visual system. Here, we focus on the nature of the information that is carried by steady state visual evoked potentials (SSVEPs). To study the information provided by SSVEPs, we presented human participants with a population of natural scenes and measured the relative SSVEP response. Rather than focus on particular features of this signal, we focused on the full state-space of possible responses and investigated how the evoked responses are mapped onto this space. Our results show that it is possible to map the relatively high-dimensional signal carried by SSVEPs onto a 2-dimensional space with little loss. We also show that a simple biologically plausible model can account for a high proportion of the explainable variance (~73%) in that space. Finally, we describe a technique for measuring the mutual information that is available about images from SSVEPs. The techniques introduced here represent a new approach to understanding the nature of the information carried by SSVEPs. Crucially, this approach is general and can provide a means of comparing results across different neural recording methods. Altogether, our study sheds light on the encoding principles of early vision and provides a much needed reference point for understanding subsequent transformations of the early visual response space to deeper knowledge structures that link different visual environments.
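The claim that a high-dimensional SSVEP response space maps onto two dimensions "with little loss" can be illustrated with PCA on simulated responses; PCA is used here as a stand-in, since the paper's embedding method may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated response space: 120 images x 32 channels, where responses in
# fact vary along only 2 latent dimensions plus small measurement noise.
latent = rng.normal(size=(120, 2))
mixing = rng.normal(size=(2, 32))
responses = latent @ mixing + 0.1 * rng.normal(size=(120, 32))

centered = responses - responses.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = (s ** 2).cumsum() / (s ** 2).sum()
print(round(explained[1], 3))   # first two PCs capture nearly all variance
```

When the cumulative explained variance of the first two components is close to 1, the 2-D map is nearly lossless, which is the structure the authors report for SSVEP responses to natural scenes.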
Collapse
Affiliation(s)
- Bruce C Hansen
- Colgate University, Department of Psychological & Brain Sciences, Neuroscience Program, Hamilton, NY, USA.
| | - David J Field
- Cornell University, Department of Psychology, Ithaca, NY, USA
| | | | - Cassady Olson
- Colgate University, Department of Psychological & Brain Sciences, Neuroscience Program, Hamilton, NY, USA; Current Address: University of Chicago, Committee on Computational Neuroscience, Chicago, IL, USA
| | - Vladimir Miskovic
- State University of New York at Binghamton, Department of Psychology, Binghamton, NY, USA
| |
Collapse
|
27
|
Kindel WF, Christensen ED, Zylberberg J. Using deep learning to probe the neural code for images in primary visual cortex. J Vis 2019; 19:29. [PMID: 31026016 PMCID: PMC6485988 DOI: 10.1167/19.4.29] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Primary visual cortex (V1) is the first stage of cortical image processing, and major effort in systems neuroscience is devoted to understanding how it encodes information about visual stimuli. Within V1, many neurons respond selectively to edges of a given preferred orientation: These are known as either simple or complex cells. Other neurons respond to localized center–surround image features. Still others respond selectively to certain image stimuli, but the specific features that excite them are unknown. Moreover, even for the simple and complex cells—the best-understood V1 neurons—it is challenging to predict how they will respond to natural image stimuli. Thus, there are important gaps in our understanding of how V1 encodes images. To fill this gap, we trained deep convolutional neural networks to predict the firing rates of V1 neurons in response to natural image stimuli, and we find that the predicted firing rates are highly correlated (normalized correlation coefficient CC_norm = 0.556 ± 0.01) with the neurons' actual firing rates over a population of 355 neurons. This performance value is quoted for all neurons, with no selection filter. Performance is better for more active neurons: When evaluated only on neurons with mean firing rates above 5 Hz, our predictors achieve correlations of CC_norm = 0.69 ± 0.01 with the neurons' true firing rates. We find that the firing rates of both orientation-selective and non-orientation-selective neurons can be predicted with high accuracy. Additionally, we use a variety of models to benchmark performance and find that our convolutional neural-network model makes more accurate predictions.
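The normalized correlation used here divides the raw model-to-data correlation by the highest correlation any model could reach given trial-to-trial noise. A common way to compute it from repeated trials, sketched in the spirit of CC_norm (e.g. Schoppe et al. 2016) rather than taken from the authors' code:

```python
import numpy as np

def cc_norm(pred, trials):
    """Correlation between a model prediction and the trial-averaged
    response, normalized by the best correlation achievable given
    trial-to-trial noise. trials: (n_trials, n_stimuli)."""
    n = trials.shape[0]
    mean_resp = trials.mean(axis=0)
    cc_abs = np.corrcoef(pred, mean_resp)[0, 1]
    # Unbiased signal-power estimate from repeated trials.
    sp = (trials.sum(axis=0).var(ddof=1)
          - trials.var(axis=1, ddof=1).sum()) / (n * (n - 1))
    cc_max = np.sqrt(sp / mean_resp.var(ddof=1))   # ceiling on raw correlation
    return cc_abs / cc_max

rng = np.random.default_rng(0)
signal = rng.normal(size=200)                          # true tuning curve
trials = signal + rng.normal(scale=1.0, size=(10, 200))
raw = np.corrcoef(signal, trials.mean(axis=0))[0, 1]
print(round(raw, 2), round(cc_norm(signal, trials), 2))  # raw < normalized, normalized near 1
```

A perfect model scores below 1 in raw correlation because the averaged data are still noisy; the normalization removes that ceiling so models can be compared across neurons with different noise levels.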
Collapse
Affiliation(s)
- William F Kindel
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO, USA
| | - Elijah D Christensen
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO, USA
| | - Joel Zylberberg
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO, USA.,Learning in Machines and Brains Program, Canadian Institute for Advanced Research, Toronto, Canada
| |
Collapse
|
28
|
Nunez-Elizalde AO, Huth AG, Gallant JL. Voxelwise encoding models with non-spherical multivariate normal priors. Neuroimage 2019; 197:482-492. [PMID: 31075394 DOI: 10.1016/j.neuroimage.2019.04.012] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2018] [Revised: 04/01/2019] [Accepted: 04/03/2019] [Indexed: 10/26/2022] Open
Abstract
Predictive models for neural or fMRI data are often fit using regression methods that employ priors on the model parameters. One widely used method is ridge regression, which employs a spherical multivariate normal prior that assumes equal and independent variance for all parameters. However, a spherical prior is not always optimal or appropriate. There are many cases where expert knowledge or hypotheses about the structure of the model parameters could be used to construct a better prior. In these cases, non-spherical multivariate normal priors can be employed using a generalized form of ridge known as Tikhonov regression. Yet Tikhonov regression is only rarely used in neuroscience. In this paper we discuss the theoretical basis for Tikhonov regression, demonstrate a computationally efficient method for its application, and show several examples of how Tikhonov regression can improve predictive models for fMRI data. We also show that many earlier studies have implicitly used Tikhonov regression by linearly transforming the regressors before performing ridge regression.
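The equivalence noted in the last sentence, that ridge regression on linearly transformed regressors implements Tikhonov regression with a non-spherical prior, can be verified numerically. A sketch with a hypothetical smoothness-like prior covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))
y = rng.normal(size=100)
lam = 5.0

# Non-spherical prior covariance Sigma = A @ A.T (here a smoothness-like
# kernel over neighboring features; any positive-definite choice works).
Sigma = np.exp(-0.3 * np.abs(np.subtract.outer(np.arange(12), np.arange(12))))
A = np.linalg.cholesky(Sigma)

# Direct Tikhonov solution: w = (X'X + lam * Sigma^{-1})^{-1} X'y
w_tik = np.linalg.solve(X.T @ X + lam * np.linalg.inv(Sigma), X.T @ y)

# Same solution via plain ridge on transformed features Xt = X @ A,
# mapping the ridge weights back through A afterwards.
Xt = X @ A
w_ridge = np.linalg.solve(Xt.T @ Xt + lam * np.eye(12), Xt.T @ y)
print(np.allclose(w_tik, A @ w_ridge))   # True
```

This is why studies that linearly transform their regressors before ridge regression are, as the authors point out, implicitly doing Tikhonov regression.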
Collapse
Affiliation(s)
| | - Alexander G Huth
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, 94720, USA
| | - Jack L Gallant
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, 94720, USA; Department of Psychology, University of California, Berkeley, CA, 94720, USA.
| |
Collapse
|
29
|
Rahman M, Willmore BDB, King AJ, Harper NS. A dynamic network model of temporal receptive fields in primary auditory cortex. PLoS Comput Biol 2019; 15:e1006618. [PMID: 31059503 PMCID: PMC6534339 DOI: 10.1371/journal.pcbi.1006618] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2018] [Revised: 05/24/2019] [Accepted: 04/13/2019] [Indexed: 11/19/2022] Open
Abstract
Auditory neurons encode stimulus history, which is often modelled using a span of time-delays in a spectro-temporal receptive field (STRF). We propose an alternative model for the encoding of stimulus history, which we apply to extracellular recordings of neurons in the primary auditory cortex of anaesthetized ferrets. For a linear-non-linear STRF model (LN model) to achieve a high level of performance in predicting single unit neural responses to natural sounds in the primary auditory cortex, we found that it is necessary to include time delays going back at least 200 ms in the past. This is an unrealistic time span for biological delay lines. We therefore asked how much of this dependence on stimulus history can instead be explained by dynamical aspects of neurons. We constructed a neural-network model whose output is the weighted sum of units whose responses are determined by a dynamic firing-rate equation. The dynamic aspect performs low-pass filtering on each unit's response, providing an exponentially decaying memory whose time constant is individual to each unit. We find that this dynamic network (DNet) model, when fitted to the neural data using STRFs of only 25 ms duration, can achieve prediction performance on a held-out dataset comparable to the best performing LN model with STRFs of 200 ms duration. These findings suggest that integration due to the membrane time constants or other exponentially-decaying memory processes may underlie linear temporal receptive fields of neurons beyond 25 ms.
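The proposed memory mechanism, a first-order low-pass filter on each unit's drive, can be sketched for a single unit with a hypothetical time constant:

```python
import numpy as np

def dynamic_unit(drive, tau, dt=1.0):
    """First-order firing-rate dynamics tau * dr/dt = -r + drive:
    each step mixes in the new drive and exponentially forgets the past."""
    alpha = dt / tau
    r = np.zeros(len(drive))
    for t in range(1, len(drive)):
        r[t] = (1 - alpha) * r[t - 1] + alpha * drive[t]
    return r

impulse = np.zeros(400)
impulse[10] = 1.0                       # brief input at t = 10 ms
r = dynamic_unit(impulse, tau=80.0)     # hypothetical 80 ms time constant, 1 ms steps
# 200 ms later the unit still carries a trace of the input, far beyond
# the reach of a 25 ms feedforward filter.
print(round(r[210] / r[10], 2))         # close to exp(-200/80) ≈ 0.08
```

A weighted sum of such units, each with its own time constant, gives the network long, exponentially decaying stimulus memory even when the explicit input filters span only 25 ms.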
Collapse
Affiliation(s)
- Monzilur Rahman
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
| | - Ben D. B. Willmore
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
| | - Andrew J. King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
| | - Nicol S. Harper
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
| |
Collapse
|
30
|
Lage-Castellanos A, Valente G, Formisano E, De Martino F. Methods for computing the maximum performance of computational models of fMRI responses. PLoS Comput Biol 2019; 15:e1006397. [PMID: 30849071 PMCID: PMC6426260 DOI: 10.1371/journal.pcbi.1006397] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2018] [Revised: 03/20/2019] [Accepted: 01/17/2019] [Indexed: 11/19/2022] Open
Abstract
Computational neuroimaging methods aim to predict brain responses (measured e.g. with functional magnetic resonance imaging [fMRI]) on the basis of stimulus features obtained through computational models. The accuracy of such predictions is used as an indicator of how well the model describes the computations underlying the brain function being considered. However, prediction accuracy is bounded by the proportion of the variance of the brain response that is related to measurement noise rather than to the stimuli (or cognitive functions). This bound on the performance of a computational model has been referred to as the noise ceiling. In previous fMRI applications, two methods have been proposed to estimate the noise ceiling, based on either a split-half procedure or Monte Carlo simulations. These methods make different assumptions about the nature of the effects underlying the data, and, importantly, their relationship has not yet been clarified. Here, we derive an analytical form for the noise ceiling that requires neither computationally expensive simulations nor a splitting procedure that reduces the amount of data. The validity of this analytical definition is demonstrated in simulations: we show that the analytical solution results in the same estimate of the noise ceiling as the Monte Carlo method. Considering different simulated noise structures, we evaluate different estimators of the variance of the responses and their impact on the estimation of the noise ceiling. We furthermore evaluate the interplay between regularization (often used to estimate model fits to the data when the number of computational features in the model is large) and model complexity on performance with respect to the noise ceiling.
Our results indicate that, when considering the variance of the responses across runs, computing the noise ceiling analytically yields estimates similar to those of the split-half estimator and approaches the true noise ceiling under a variety of simulated noise scenarios. Finally, the methods are tested on real fMRI data acquired at 7 Tesla. Encoding computational models in brain responses measured with fMRI allows testing of the algorithmic representations carried out by the neural population within voxels. The accuracy of a model in predicting new responses is used as a measure of the brain validity of the computational model being tested, but the result of this analysis is determined not only by how precisely the model describes the responses but also by the quality of the data. In this article, we evaluate existing approaches for estimating the best possible accuracy that any computational model can achieve given the amount of measurement noise present in the experimental data (i.e., the noise ceiling). Additionally, we introduce a closed-form estimate of the noise ceiling that does not require computationally expensive or data-hungry procedures. All the methods are compared using simulated and real fMRI data. We draw conclusions about the impact of regularization procedures and make practical recommendations on how to report the results of computational models in neuroimaging.
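As a rough NumPy illustration of the two estimator families discussed (simulated data and simplified formulas, not the paper's exact derivations), one can compare a split-half ceiling with a variance-based analytical one:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy voxel: a fixed stimulus-driven signal plus independent noise on each run.
n_stim, n_runs, noise_sd = 100, 10, 1.0
signal = rng.standard_normal(n_stim)
runs = signal[:, None] + noise_sd * rng.standard_normal((n_stim, n_runs))

# Split-half estimate: correlate the mean responses of two halves of the runs;
# the square root bounds the correlation a perfect model can reach with a half.
half1 = runs[:, : n_runs // 2].mean(axis=1)
half2 = runs[:, n_runs // 2 :].mean(axis=1)
r_half = np.corrcoef(half1, half2)[0, 1]
ceiling_split = np.sqrt(r_half)

# Analytical counterpart from variance estimates across runs (no splitting):
# signal variance ~ variance of the run-mean minus noise variance / n_runs.
noise_var = runs.var(axis=1, ddof=1).mean()
mean_var = runs.mean(axis=1).var(ddof=1)
signal_var = mean_var - noise_var / n_runs
ceiling_analytic = np.sqrt(signal_var / mean_var)
```

Both quantities approximate the square root of the signal fraction of the variance of the averaged responses; the split-half version costs half the data, the analytical version does not.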
Collapse
Affiliation(s)
- Agustin Lage-Castellanos
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Department of NeuroInformatics, Cuban Center for Neuroscience, Cuba
- * E-mail:
| | - Giancarlo Valente
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
| | - Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Maastricht Centre for Systems Biology (MaCSBio), Maastricht University, Maastricht, The Netherlands
| | - Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, United States of America
| |
Collapse
|
31
|
Norman-Haignere SV, McDermott JH. Neural responses to natural and model-matched stimuli reveal distinct computations in primary and nonprimary auditory cortex. PLoS Biol 2018; 16:e2005127. [PMID: 30507943 PMCID: PMC6292651 DOI: 10.1371/journal.pbio.2005127] [Citation(s) in RCA: 45] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2017] [Revised: 12/13/2018] [Accepted: 11/08/2018] [Indexed: 11/19/2022] Open
Abstract
A central goal of sensory neuroscience is to construct models that can explain neural responses to natural stimuli. As a consequence, sensory models are often tested by comparing neural responses to natural stimuli with model responses to those stimuli. One challenge is that distinct model features are often correlated across natural stimuli, and thus model features can predict neural responses even if they do not in fact drive them. Here, we propose a simple alternative for testing a sensory model: we synthesize a stimulus that yields the same model response as each of a set of natural stimuli, and test whether the natural and "model-matched" stimuli elicit the same neural responses. We used this approach to test whether a common model of auditory cortex, in which spectrogram-like peripheral input is processed by linear spectrotemporal filters, can explain fMRI responses in humans to natural sounds. Prior studies have shown that this model has good predictive power throughout auditory cortex, but this finding could reflect feature correlations in natural stimuli. We observed that fMRI responses to natural and model-matched stimuli were nearly equivalent in primary auditory cortex (PAC) but that nonprimary regions, including those selective for music or speech, showed highly divergent responses to the two sound sets. This dissociation between primary and nonprimary regions was less clear from model predictions due to the influence of feature correlations across natural stimuli. Our results provide a signature of hierarchical organization in human auditory cortex, and suggest that nonprimary regions compute higher-order stimulus properties that are not well captured by traditional models. Our methodology enables stronger tests of sensory models and could be broadly applied in other domains.
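For a purely linear model, the model-matching construction has a simple closed form: project a noise seed so that it shares the natural stimulus's coordinates in the model's feature subspace. A NumPy sketch with made-up dimensions (the paper's actual synthesis handles nonlinear auditory models and is iterative):

```python
import numpy as np

rng = np.random.default_rng(2)

n_dim, n_feat = 200, 10
F = rng.standard_normal((n_feat, n_dim))   # the model's linear features
natural = rng.standard_normal(n_dim)

# Model-matched stimulus: start from noise, then replace its component in the
# feature subspace with that of the natural stimulus (pseudoinverse projection).
P = np.linalg.pinv(F) @ F                  # projector onto the feature subspace
noise = rng.standard_normal(n_dim)
matched = noise - P @ noise + P @ natural
```

By construction `F @ matched` equals `F @ natural`, yet the two stimuli differ everywhere outside the feature subspace, which is what makes divergent neural responses diagnostic.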
Collapse
Affiliation(s)
- Sam V. Norman-Haignere
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Zuckerman Institute of Mind, Brain and Behavior, Columbia University, New York, New York, United States of America
- Laboratoire des Systèmes Perceptifs, Département d’Études Cognitives, ENS, PSL University, CNRS, Paris, France
| | - Josh H. McDermott
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Program in Speech and Hearing Biosciences and Technology, Harvard University, Cambridge, Massachusetts, United States of America
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
| |
Collapse
|
32
|
Fischer BJ, Wydick JL, Köppl C, Peña JL. Multidimensional stimulus encoding in the auditory nerve of the barn owl. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2018; 144:2116. [PMID: 30404459 PMCID: PMC6185867 DOI: 10.1121/1.5056171] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/20/2018] [Revised: 09/07/2018] [Accepted: 09/10/2018] [Indexed: 06/08/2023]
Abstract
Auditory perception depends on multi-dimensional information in acoustic signals that must be encoded by auditory nerve fibers (ANF). These dimensions are represented by filters with different frequency selectivities. Multiple models have been suggested; however, identifying the relevant filters and the types of interaction between them has been elusive, limiting progress in modeling the cochlear output. Spike-triggered covariance analysis of barn owl ANF responses was used to determine the number of relevant stimulus filters and estimate the nonlinearity that produces responses from filter outputs. This confirmed that ANF responses depend on multiple filters. The first, most dominant filter was the spike-triggered average, which was excitatory for all neurons. The second and third filters could be either suppressive or excitatory, with center frequencies above or below that of the first filter. The nonlinear function mapping the first two filter outputs to the spiking probability ranged from restricted to nearly circular-symmetric, reflecting different modes of interaction between stimulus dimensions across the sample. This shows that stimulus encoding in ANFs of the barn owl is multidimensional and exhibits diversity over the population, suggesting that models must allow for variable numbers of filters and types of interactions between filters to describe how sound is encoded in ANFs.
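A NumPy sketch of the analysis on a simulated two-filter neuron (illustrative filters and parameters, not the owl data): the spike-triggered average recovers the dominant linear filter, and the spike-triggered covariance, relative to the raw stimulus covariance, exposes the second, energy-like dimension.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated ANF-like neuron driven by two stimulus dimensions: one linear
# (excitatory) filter and one squared (energy) filter, with a logistic output.
n_samp, n_dim = 20000, 20
stim = rng.standard_normal((n_samp, n_dim))
f1 = np.zeros(n_dim)
f1[5] = 1.0                               # dominant excitatory filter
f2 = np.zeros(n_dim)
f2[10] = 1.0                              # second, energy-like filter
drive = stim @ f1 + 0.5 * (stim @ f2) ** 2
p_spike = 1.0 / (1.0 + np.exp(-(drive - 2.0)))
spikes = rng.random(n_samp) < p_spike

sta = stim[spikes].mean(axis=0)           # estimate of the first filter
stc = np.cov(stim[spikes].T) - np.cov(stim.T)   # excess covariance given a spike
eigvals, eigvecs = np.linalg.eigh(stc)
second_filter = eigvecs[:, np.argmax(np.abs(eigvals))]
```

Dimensions with significantly positive (excitatory) or negative (suppressive) STC eigenvalues are the additional relevant filters; the sign pattern is what distinguishes the modes of interaction described in the abstract.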
Collapse
Affiliation(s)
- Brian J Fischer
- Department of Mathematics, Seattle University, Seattle, Washington 98122, USA
| | - Jacob L Wydick
- Department of Mathematics, Seattle University, Seattle, Washington 98122, USA
| | - Christine Köppl
- Cluster of Excellence "Hearing4all" and Research Centre Neurosensory Science, Department of Neuroscience, School of Medicine and Health Science, Carl von Ossietzky University, Oldenburg, Germany
| | - José L Peña
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, New York, New York 10461, USA
| |
Collapse
|
33
|
Wong DDE, Fuglsang SA, Hjortkjær J, Ceolini E, Slaney M, de Cheveigné A. A Comparison of Regularization Methods in Forward and Backward Models for Auditory Attention Decoding. Front Neurosci 2018; 12:531. [PMID: 30131670 PMCID: PMC6090837 DOI: 10.3389/fnins.2018.00531] [Citation(s) in RCA: 51] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2018] [Accepted: 07/16/2018] [Indexed: 11/17/2022] Open
Abstract
The decoding of selective auditory attention from noninvasive electroencephalogram (EEG) data is of interest in brain-computer interface and auditory perception research. The current state-of-the-art approaches for decoding the attentional selection of listeners are based on linear mappings between features of sound streams and EEG responses (forward model), or vice versa (backward model). It has been shown that when the envelope of attended speech and EEG responses are used to derive such mapping functions, the model estimates can be used to discriminate between attended and unattended talkers. However, the predictive/reconstructive performance of the models depends on how the model parameters are estimated. A number of model estimation methods have been published, along with a variety of datasets. It is currently unclear whether any of these methods performs better than the others, as they have not yet been compared side by side on a single standardized dataset in a controlled fashion. Here, we present a comparative study of the ability of different estimation methods to classify attended speakers from multi-channel EEG data. The performance of the model estimation methods is evaluated using different performance metrics on a set of labeled EEG data from 18 subjects listening to mixtures of two speech streams. We find that when forward models predict the EEG from the attended audio, regularized models do not improve regression or classification accuracies. When backward models decode the attended speech from the EEG, regularization provides higher regression and classification accuracies.
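A minimal NumPy sketch of a regularized backward model on toy data (hypothetical mixing weights and noise level, no time lags, not the paper's pipeline): ridge regression reconstructs the attended envelope from multi-channel "EEG".

```python
import numpy as np

rng = np.random.default_rng(4)

n_t, n_ch = 2000, 16
envelope = rng.standard_normal(n_t)          # attended speech envelope

# Toy EEG: each channel mixes the attended envelope with heavy noise.
mixing = rng.standard_normal(n_ch)
eeg = envelope[:, None] * mixing + 2.0 * rng.standard_normal((n_t, n_ch))

def ridge_decoder(X, y, lam):
    """Backward model: ridge-regularized reconstruction of y from channels X."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

w = ridge_decoder(eeg[:1000], envelope[:1000], lam=1e2)
recon = eeg[1000:] @ w
r = np.corrcoef(recon, envelope[1000:])[0, 1]   # reconstruction accuracy
```

In an attention-decoding setting, decoders trained on attended and unattended streams would be compared by which reconstruction correlates better with each candidate envelope; the regularization parameter `lam` is the quantity whose estimation methods the study compares.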
Collapse
Affiliation(s)
- Daniel D. E. Wong
- Laboratoire des Systèmes Perceptifs, CNRS, UMR 8248, Paris, France
- Département d'Études Cognitives, École Normale Supérieure, PSL Research University, Paris, France
| | - Søren A. Fuglsang
- Department of Electrical Engineering, Danmarks Tekniske Universitet, Kongens Lyngby, Denmark
| | - Jens Hjortkjær
- Department of Electrical Engineering, Danmarks Tekniske Universitet, Kongens Lyngby, Denmark
- Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital Hvidovre, Hvidovre, Denmark
| | - Enea Ceolini
- Institute of Neuroinformatics, University of Zürich, Zurich, Switzerland
| | - Malcolm Slaney
- AI Machine Perception, Google, Mountain View, CA, United States
| | - Alain de Cheveigné
- Laboratoire des Systèmes Perceptifs, CNRS, UMR 8248, Paris, France
- Département d'Études Cognitives, École Normale Supérieure, PSL Research University, Paris, France
- Ear Institute, University College London, London, United Kingdom
| |
Collapse
|
34
|
Westö J, May PJC. Describing complex cells in primary visual cortex: a comparison of context and multifilter LN models. J Neurophysiol 2018; 120:703-719. [PMID: 29718805 PMCID: PMC6139451 DOI: 10.1152/jn.00916.2017] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2018] [Revised: 04/30/2018] [Accepted: 04/30/2018] [Indexed: 11/24/2022] Open
Abstract
Receptive field (RF) models are an important tool for deciphering neural responses to sensory stimuli. The two currently popular RF models are multifilter linear-nonlinear (LN) models and context models. Models are, however, never correct, and they rely on assumptions to keep them simple enough to be interpretable. As a consequence, different models describe different stimulus-response mappings, which may or may not be good approximations of real neural behavior. In the current study, we take up two tasks: 1) we introduce new ways to estimate context models with realistic nonlinearities, that is, with logistic and exponential functions, and 2) we evaluate context models and multifilter LN models in terms of how well they describe recorded data from complex cells in cat primary visual cortex. Our results, based on single-spike information and correlation coefficients, indicate that context models outperform corresponding multifilter LN models of equal complexity (measured in terms of number of parameters), with the largest increase in performance being achieved by the novel context models. Consequently, our results suggest that the multifilter LN-model framework is suboptimal for describing the behavior of complex cells: the context-model framework is clearly superior while still providing interpretable quantifications of neural behavior. NEW & NOTEWORTHY We used data from complex cells in primary visual cortex to estimate a wide variety of receptive field models from two frameworks that have previously not been compared with each other. The models included traditionally used multifilter linear-nonlinear models and novel variants of context models. Using mutual information and correlation coefficients as performance measures, we showed that context models are superior for describing complex cells and that the novel context models performed the best.
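A simplified 1-D NumPy sketch of the context-model idea (the parameterization here is an illustrative guess, not the authors' exact formulation): each stimulus element's effective gain is modulated by a weighted sum of its neighbours, and the output passes through a logistic nonlinearity. Setting the context weights to zero recovers a plain LN model.

```python
import numpy as np

rng = np.random.default_rng(11)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def context_model_response(snippet, prf, cgf):
    """Context-model sketch: a principal receptive field (prf) whose per-element
    gain is modulated by the local stimulus context (cgf), then a logistic
    output nonlinearity."""
    gain = 1.0 + np.convolve(snippet, cgf, mode="same")
    return sigmoid(np.sum(prf * snippet * gain))

n = 16
prf = rng.standard_normal(n) * 0.3
cgf = np.array([0.2, 0.0, 0.2])           # hypothetical symmetric context weights
snippet = rng.standard_normal(n)

r_context = context_model_response(snippet, prf, cgf)
r_ln = context_model_response(snippet, prf, np.zeros(3))  # reduces to LN
```

The multiplicative `gain` term is what lets a single-filter context model mimic interactions that would otherwise require multiple LN filters.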
Collapse
Affiliation(s)
- Johan Westö
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
| | - Patrick J C May
- Department of Psychology, Lancaster University, Lancaster, United Kingdom
| |
Collapse
|
35
|
Benjamin AS, Fernandes HL, Tomlinson T, Ramkumar P, VerSteeg C, Chowdhury RH, Miller LE, Kording KP. Modern Machine Learning as a Benchmark for Fitting Neural Responses. Front Comput Neurosci 2018; 12:56. [PMID: 30072887 PMCID: PMC6060269 DOI: 10.3389/fncom.2018.00056] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2017] [Accepted: 06/29/2018] [Indexed: 11/13/2022] Open
Abstract
Neuroscience has long focused on finding encoding models that effectively ask "what predicts neural spiking?" and generalized linear models (GLMs) are a typical approach. It is often unknown how much of explainable neural activity is captured, or missed, when fitting a model. Here we compared the predictive performance of simple models to three leading machine learning methods: feedforward neural networks, gradient boosted trees (using XGBoost), and stacked ensembles that combine the predictions of several methods. We predicted spike counts in macaque motor (M1) and somatosensory (S1) cortices from standard representations of reaching kinematics, and in rat hippocampal cells from open field location and orientation. Of these methods, XGBoost and the ensemble consistently produced more accurate spike rate predictions and were less sensitive to the preprocessing of features. These methods can thus be applied quickly to detect if feature sets relate to neural activity in a manner not captured by simpler methods. Encoding models built with a machine learning approach accurately predict spike rates and can offer meaningful benchmarks for simpler models.
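A NumPy-only sketch of the benchmarking idea on toy data (k-nearest-neighbour regression stands in for gradient boosted trees, which would normally require XGBoost; all names and parameters are illustrative): fit a simple linear baseline and a flexible learner, then stack their predictions with least-squares weights.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy encoding problem: spike counts are a nonlinear function of 2-D kinematics.
X = rng.standard_normal((3000, 2))
rate = np.exp(0.5 * X[:, 0]) + (X[:, 1] > 0)
y = rng.poisson(rate).astype(float)

tr, te = slice(0, 2000), slice(2000, 3000)

# Baseline: ordinary linear regression (a stand-in for a simple encoding model).
w = np.linalg.lstsq(np.c_[np.ones(2000), X[tr]], y[tr], rcond=None)[0]
pred_lin = np.c_[np.ones(1000), X[te]] @ w

# Flexible learner: k-nearest-neighbour regression.
def knn_predict(Xtr, ytr, Xte, k=25):
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d, axis=1)[:, :k]
    return ytr[idx].mean(axis=1)

pred_knn = knn_predict(X[tr], y[tr], X[te])

# Stacked ensemble: least-squares weights over the base predictions
# (fit here on the evaluation set for brevity; use held-out folds in practice).
A = np.c_[pred_lin, pred_knn]
wk = np.linalg.lstsq(A, y[te], rcond=None)[0]
pred_stack = A @ wk

mse = lambda p: float(np.mean((p - y[te]) ** 2))
mse_lin, mse_knn, mse_stack = mse(pred_lin), mse(pred_knn), mse(pred_stack)
```

The stacked predictor can never do worse than its best member on the data its weights are fit to, which is why the paper uses ensembles as an upper benchmark for how much structure a feature set contains.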
Collapse
Affiliation(s)
- Ari S. Benjamin
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, United States
| | - Hugo L. Fernandes
- Department of Physical Medicine and Rehabilitation, Rehabilitation Institute of Chicago, Northwestern University, Chicago, IL, United States
| | - Tucker Tomlinson
- Department of Physiology, Northwestern University, Chicago, IL, United States
| | - Pavan Ramkumar
- Department of Physical Medicine and Rehabilitation, Rehabilitation Institute of Chicago, Northwestern University, Chicago, IL, United States
- Department of Neurobiology, Northwestern University, Evanston, IL, United States
| | - Chris VerSteeg
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, United States
| | - Raeed H. Chowdhury
- Department of Physiology, Northwestern University, Chicago, IL, United States
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, United States
| | - Lee E. Miller
- Department of Physical Medicine and Rehabilitation, Rehabilitation Institute of Chicago, Northwestern University, Chicago, IL, United States
- Department of Physiology, Northwestern University, Chicago, IL, United States
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, United States
| | - Konrad P. Kording
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, United States
- Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, United States
| |
Collapse
|
36
|
Zhang Y, Lee TS, Li M, Liu F, Tang S. Convolutional neural network models of V1 responses to complex patterns. J Comput Neurosci 2018; 46:33-54. [PMID: 29869761 DOI: 10.1007/s10827-018-0687-7] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2017] [Revised: 04/26/2018] [Accepted: 04/30/2018] [Indexed: 11/30/2022]
Abstract
In this study, we evaluated the convolutional neural network (CNN) method for modeling V1 neurons of awake macaque monkeys in response to a large set of complex pattern stimuli. CNN models outperformed all the other baseline models, such as Gabor-based standard models for V1 cells and various variants of generalized linear models. We then systematically dissected different components of the CNN and found two key factors that made CNNs outperform other models: thresholding nonlinearity and convolution. In addition, we fitted our data using a pre-trained deep CNN via transfer learning. The deep CNN's higher layers, which encode more complex patterns, outperformed lower ones, and this result was consistent with our earlier work on the complexity of V1 neural code. Our study systematically evaluates the relative merits of different CNN components in the context of V1 neuron modeling.
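A minimal NumPy sketch of the two components the study singles out, convolution and a thresholding nonlinearity (here ReLU), with made-up sizes and random weights rather than fitted ones:

```python
import numpy as np

rng = np.random.default_rng(6)

def conv2d_valid(img, kernel):
    """Plain 'valid' 2-D convolution (really cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def v1_unit_response(img, kernel, readout, bias=0.0):
    """One CNN-style model neuron: convolution -> threshold (ReLU) -> readout."""
    fmap = np.maximum(conv2d_valid(img, kernel) + bias, 0.0)
    return float(np.sum(fmap * readout))

img = rng.standard_normal((20, 20))
kernel = rng.standard_normal((5, 5)) * 0.2
readout = rng.standard_normal((16, 16)) * 0.1
resp = v1_unit_response(img, kernel, readout)
```

Removing either ingredient, sharing the kernel across positions or the thresholding before the readout, is exactly the kind of ablation the study used to identify why CNNs outperform Gabor-based and generalized linear baselines.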
Collapse
Affiliation(s)
- Yimeng Zhang
- Center for the Neural Basis of Cognition and Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, 15213, USA.
| | - Tai Sing Lee
- Center for the Neural Basis of Cognition and Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, 15213, USA
| | - Ming Li
- Peking University School of Life Sciences and Peking-Tsinghua Center for Life Sciences, Beijing, 100871, China
- IDG/McGovern Institute for Brain Research at Peking University, Beijing, 100871, China
| | - Fang Liu
- Peking University School of Life Sciences and Peking-Tsinghua Center for Life Sciences, Beijing, 100871, China
- IDG/McGovern Institute for Brain Research at Peking University, Beijing, 100871, China
| | - Shiming Tang
- Peking University School of Life Sciences and Peking-Tsinghua Center for Life Sciences, Beijing, 100871, China
- IDG/McGovern Institute for Brain Research at Peking University, Beijing, 100871, China
| |
Collapse
|
37
|
Kell AJ, Yamins DL, Shook EN, Norman-Haignere SV, McDermott JH. A Task-Optimized Neural Network Replicates Human Auditory Behavior, Predicts Brain Responses, and Reveals a Cortical Processing Hierarchy. Neuron 2018; 98:630-644.e16. [DOI: 10.1016/j.neuron.2018.03.044] [Citation(s) in RCA: 232] [Impact Index Per Article: 38.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2017] [Revised: 12/22/2017] [Accepted: 03/23/2018] [Indexed: 11/28/2022]
|
38
|
Wei P, Bao R, Lv Z, Jing B. Weak but Critical Links between Primary Somatosensory Centers and Motor Cortex during Movement. Front Hum Neurosci 2018; 12:1. [PMID: 29387003 PMCID: PMC5776089 DOI: 10.3389/fnhum.2018.00001] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2017] [Accepted: 01/01/2018] [Indexed: 12/12/2022] Open
Abstract
Motor performance is improved by stimulation of the agonist muscle during movement. However, the underlying brain mechanisms remain unknown. In this work, we performed a functional magnetic resonance imaging (fMRI) study in 21 healthy subjects under three different conditions: (1) movement of the right ankle alone; (2) movement and simultaneous stimulation of the agonist muscle; or (3) movement and simultaneous stimulation of a control area. We constructed weighted brain networks for each condition by using functional connectivity. Network features were analyzed using graph theoretical approaches. We found that: (1) the second condition evokes the strongest and most widespread brain activations (5147 vs. 4419 and 2320 activated voxels); and (2) this condition also induces a unique network layout and changes the hubs and modular structure of the brain motor network by activating the most “silent” links between primary somatosensory centers and the motor cortex, particularly weak links from the thalamus to the left primary motor cortex (M1). Significant statistical differences were found when the strength values of the right cerebellum (P < 0.001) or the left thalamus (P = 0.006) were compared among the three conditions. Over the years, studies have reported a small number of projections from the thalamus to the motor cortex. This is the first work to demonstrate a function for these pathways. These findings reveal mechanisms for enhancing motor function with somatosensory stimulation, and suggest that network function cannot be thoroughly understood when weak ties are disregarded.
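A NumPy sketch of the graph-theoretical quantities involved, on a toy functional-connectivity matrix (illustrative sizes; real analyses use atlas-defined regions and toolboxes such as the Brain Connectivity Toolbox): node strength identifies candidate hubs, and a simple threshold separates the contribution of weak links.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy weighted functional-connectivity matrix for 10 regions: absolute pairwise
# correlations of fMRI time series, symmetric with zero diagonal.
ts = rng.standard_normal((10, 200))
W = np.abs(np.corrcoef(ts))
np.fill_diagonal(W, 0.0)

strength = W.sum(axis=1)                  # weighted degree of each node
hub = int(np.argmax(strength))            # candidate hub region

# 'Weak link' analysis: the share of each node's strength carried by
# connections below the median weight.
thresh = np.median(W[W > 0])
weak_share = (W * (W < thresh)).sum(axis=1) / strength
```

Comparing `strength` (or `weak_share`) for specific nodes such as the thalamus across conditions is the kind of contrast reported in the abstract.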
Collapse
Affiliation(s)
- Pengxu Wei
- Beijing Key Laboratory of Rehabilitation Technical Aids for Old-Age Disability, Key Laboratory of Neuro-functional Information and Rehabilitation Engineering of the Ministry of Civil Affairs, National Research Center for Rehabilitation Technical Aids, Beijing, China
| | - Ruixue Bao
- Beijing Boai Hospital, School of Rehabilitation Medicine, China Rehabilitation Research Center, Capital Medical University, Beijing, China
| | - Zeping Lv
- Beijing Key Laboratory of Rehabilitation Technical Aids for Old-Age Disability, Key Laboratory of Neuro-functional Information and Rehabilitation Engineering of the Ministry of Civil Affairs, National Research Center for Rehabilitation Technical Aids, Beijing, China
| | - Bin Jing
- School of Biomedical Engineering, Capital Medical University, Beijing, China
| |
Collapse
|
39
|
Encoding of natural timbre dimensions in human auditory cortex. Neuroimage 2017; 166:60-70. [PMID: 29080711 DOI: 10.1016/j.neuroimage.2017.10.050] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2017] [Revised: 10/19/2017] [Accepted: 10/24/2017] [Indexed: 11/22/2022] Open
Abstract
Timbre, or sound quality, is a crucial but poorly understood dimension of auditory perception that is important in describing speech, music, and environmental sounds. The present study investigates the cortical representation of different timbral dimensions. Encoding models have typically incorporated the physical characteristics of sounds as features when attempting to understand their neural representation with functional MRI. Here we test an encoding model that is based on five subjectively derived dimensions of timbre to predict cortical responses to natural orchestral sounds. Results show that this timbre model can outperform other models based on spectral characteristics, and can perform as well as a complex joint spectrotemporal modulation model. In cortical regions at the medial border of Heschl's gyrus, bilaterally, and regions at its posterior adjacency in the right hemisphere, the timbre model outperforms even the complex joint spectrotemporal modulation model. These findings suggest that the responses of cortical neuronal populations in auditory cortex may reflect the encoding of perceptual timbre dimensions.
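A minimal NumPy sketch of a voxelwise encoding model built from a small set of perceptual dimensions (simulated data with five hypothetical timbre dimensions; real analyses would use ratings of orchestral sounds and measured BOLD responses):

```python
import numpy as np

rng = np.random.default_rng(8)

n_sounds, n_dims, n_vox = 120, 5, 50
timbre = rng.standard_normal((n_sounds, n_dims))   # 5 subjective timbre dims
B = rng.standard_normal((n_dims, n_vox))           # hidden voxel tuning
bold = timbre @ B + rng.standard_normal((n_sounds, n_vox))  # noisy responses

tr, te = slice(0, 80), slice(80, 120)
lam = 1.0                                          # ridge penalty
W = np.linalg.solve(timbre[tr].T @ timbre[tr] + lam * np.eye(n_dims),
                    timbre[tr].T @ bold[tr])
pred = timbre[te] @ W

# Per-voxel prediction accuracy on held-out sounds.
r = np.array([np.corrcoef(pred[:, v], bold[te][:, v])[0, 1]
              for v in range(n_vox)])
```

Competing models (spectral, joint spectrotemporal modulation) are compared by fitting each feature space the same way and contrasting the held-out `r` maps, which is how the abstract's model comparison works in outline.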
Collapse
|
40
|
Wei P, Zhang Z, Lv Z, Jing B. Strong Functional Connectivity among Homotopic Brain Areas Is Vital for Motor Control in Unilateral Limb Movement. Front Hum Neurosci 2017; 11:366. [PMID: 28747880 PMCID: PMC5506200 DOI: 10.3389/fnhum.2017.00366] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2017] [Accepted: 06/27/2017] [Indexed: 11/13/2022] Open
Abstract
The mechanism underlying brain region organization for motor control in humans remains poorly understood. In this functional magnetic resonance imaging (fMRI) study, right-handed volunteers were asked to perform unilateral foot movements on the right and left sides as consistently as possible. We aimed to identify the similarities and differences between the brain motor networks of the two conditions. We recruited 18 right-handed healthy volunteers aged 25 ± 2.3 years and used a whole-body 3T system for magnetic resonance (MR) scanning. Image analysis was performed using SPM8, the Conn toolbox and the Brain Connectivity Toolbox. We identified a craniocaudally distributed, mirror-symmetrical modular structure. The functional connectivity between homotopic brain areas was generally stronger than the intrahemispheric connections, and this strong connectivity gave rise to the abovementioned modular structure. Our findings indicate that the interhemispheric functional interaction between homotopic brain areas is more intensive than the interaction along the conventional top-down and bottom-up pathways within the brain during unilateral limb movement. The strong interhemispheric horizontal functional interaction we detected is an important aspect of motor control but is often neglected or underestimated. The strong interhemispheric connectivity may explain related physiological phenomena and the effects of promising therapeutic approaches. More accurate and effective therapeutic methods may be developed on the basis of these findings.
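The homotopic-versus-intrahemispheric comparison can be sketched in NumPy on toy time series (illustrative region count and noise level): left/right homologues share a driving signal, so their correlations come out stronger than within-hemisphere pairings.

```python
import numpy as np

rng = np.random.default_rng(12)

# Toy time series for 4 region pairs: each left/right homologue pair shares a
# driving signal plus independent noise.
n_pairs, n_t = 4, 300
shared = rng.standard_normal((n_pairs, n_t))
left = shared + 0.7 * rng.standard_normal((n_pairs, n_t))
right = shared + 0.7 * rng.standard_normal((n_pairs, n_t))

homotopic = np.array([np.corrcoef(left[i], right[i])[0, 1]
                      for i in range(n_pairs)])
intra = np.array([np.corrcoef(left[i], left[j])[0, 1]
                  for i in range(n_pairs) for j in range(i + 1, n_pairs)])
```

In this toy setup `homotopic.mean()` exceeds `intra.mean()`, the same ordering the study reports for real motor-network connectivity.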
Collapse
Affiliation(s)
- Pengxu Wei
- Beijing Key Laboratory of Rehabilitation Technical Aids for Old-age Disability, Key Laboratory of Neuro-functional Information and Rehabilitation Engineering of the Ministry of Civil Affairs, National Research Center for Rehabilitation Technical Aids, Beijing, China
| | - Zuting Zhang
- Beijing Key Laboratory of Rehabilitation Technical Aids for Old-age Disability, Key Laboratory of Neuro-functional Information and Rehabilitation Engineering of the Ministry of Civil Affairs, National Research Center for Rehabilitation Technical Aids, Beijing, China
| | - Zeping Lv
- Beijing Key Laboratory of Rehabilitation Technical Aids for Old-age Disability, Key Laboratory of Neuro-functional Information and Rehabilitation Engineering of the Ministry of Civil Affairs, National Research Center for Rehabilitation Technical Aids, Beijing, China
| | - Bin Jing
- School of Biomedical Engineering, Capital Medical University, Beijing, China
| |
Collapse
|
41
|
Harper NS, Schoppe O, Willmore BDB, Cui Z, Schnupp JWH, King AJ. Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons. PLoS Comput Biol 2016; 12:e1005113. [PMID: 27835647 PMCID: PMC5105998 DOI: 10.1371/journal.pcbi.1005113] [Citation(s) in RCA: 43] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2015] [Accepted: 08/22/2016] [Indexed: 11/28/2022] Open
Abstract
Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex, neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1-7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.
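A NumPy sketch of a network-receptive-field-style model (random illustrative weights and sizes, a simplified guess at the architecture rather than the fitted model): a few sub-receptive-fields whose sigmoid outputs are combined nonlinearly by a small hidden layer before the output.

```python
import numpy as np

rng = np.random.default_rng(9)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nrf_response(stim_patch, sub_filters, hidden_w, out_w):
    """NRF sketch: sub-receptive-field outputs pass through sigmoids and are
    combined by a small nonlinear hidden layer, allowing interactions such as
    gain control and conjunctive feature detection."""
    sub_out = sigmoid(np.array([np.sum(f * stim_patch) for f in sub_filters]))
    return sigmoid(hidden_w @ sub_out) @ out_w

n_f, n_h = 30, 8                          # 30 freq channels, 8 history bins
sub_filters = [rng.standard_normal((n_f, n_h)) * 0.1 for _ in range(3)]
hidden_w = rng.standard_normal((4, 3))
out_w = rng.standard_normal(4)
patch = rng.standard_normal((n_f, n_h))
r = nrf_response(patch, sub_filters, hidden_w, out_w)
```

Because each sub-filter saturates independently before the combination stage, the model can respond only when several features co-occur, which a single linear STRF cannot express.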
Collapse
Affiliation(s)
- Nicol S. Harper
- Dept. of Physiology, Anatomy and Genetics (DPAG), Sherrington Building, University of Oxford, United Kingdom
- Institute of Biomedical Engineering, Department of Engineering Science, Old Road Campus Research Building, University of Oxford, Headington, United Kingdom
| | - Oliver Schoppe
- Dept. of Physiology, Anatomy and Genetics (DPAG), Sherrington Building, University of Oxford, United Kingdom
- Bio-Inspired Information Processing, Technische Universität München, Germany
| | - Ben D. B. Willmore
- Dept. of Physiology, Anatomy and Genetics (DPAG), Sherrington Building, University of Oxford, United Kingdom
| | - Zhanfeng Cui
- Institute of Biomedical Engineering, Department of Engineering Science, Old Road Campus Research Building, University of Oxford, Headington, United Kingdom
| | - Jan W. H. Schnupp
- Dept. of Physiology, Anatomy and Genetics (DPAG), Sherrington Building, University of Oxford, United Kingdom
- Department of Biomedical Science, City University of Hong Kong, Kowloon Tong, Hong Kong
| | - Andrew J. King
- Dept. of Physiology, Anatomy and Genetics (DPAG), Sherrington Building, University of Oxford, United Kingdom
| |
Collapse
|
42
|
Rubin J, Ulanovsky N, Nelken I, Tishby N. The Representation of Prediction Error in Auditory Cortex. PLoS Comput Biol 2016; 12:e1005058. [PMID: 27490251 PMCID: PMC4973877 DOI: 10.1371/journal.pcbi.1005058] [Citation(s) in RCA: 55] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2015] [Accepted: 07/07/2016] [Indexed: 11/19/2022] Open
Abstract
To survive, organisms must extract information from the past that is relevant for their future. How this process is expressed at the neural level remains unclear. We address this problem by developing a novel approach from first principles. We show here how to generate low-complexity representations of the past that produce optimal predictions of future events. We then illustrate this framework by studying the coding of ‘oddball’ sequences in auditory cortex. We find that for many neurons in primary auditory cortex, trial-by-trial fluctuations of neuronal responses correlate with the theoretical prediction error calculated from the short-term past of the stimulation sequence, under constraints on the complexity of the representation of this past sequence. In some neurons, the effect of prediction error accounted for more than 50% of response variability. Reliable predictions often depended on a representation of the sequence of the last ten or more stimuli, although the representation kept only a few details of that sequence.
A crucial aspect of all life is the ability to use past events to guide future behavior. To do that, creatures need the ability to predict future events. Indeed, predictability has been shown to affect neuronal responses in many animals and under many conditions. Clearly, the quality of predictions should depend on the amount and detail of the past information used to generate them. Here, by using a basic principle from information theory, we show how to derive explicitly the tradeoff between quality of prediction and complexity of the representation of past information. We then apply these ideas to a concrete case: neuronal responses recorded in auditory cortex during the presentation of oddball sequences consisting of two tones with varying probabilities. We show that the neuronal responses fit quantitatively the prediction errors of optimal predictors derived from our theory, and use that result to deduce the properties of the representations of the past in the auditory system. We conclude that these memory representations have surprisingly long duration (10 stimuli back or more), but keep relatively little detail about this past. Our theory can be applied widely to other sensory systems.
Collapse
Affiliation(s)
- Jonathan Rubin
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, Israel
| | - Nachum Ulanovsky
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
| | - Israel Nelken
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, Israel
- Department of Neurobiology, Institute of Life Sciences, Hebrew University, Jerusalem, Israel
| | - Naftali Tishby
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, Israel
- The Benin School of Computer Science and Engineering, Hebrew University, Jerusalem, Israel
| |
Collapse
|