1
Wang P, Guo SJ, Li HJ. Brain imaging of a gamified cognitive flexibility task in young and older adults. Brain Imaging Behav 2024; 18:902-912. PMID: 38627304. DOI: 10.1007/s11682-024-00883-w.
Abstract
The study aimed to develop and validate a gamified cognitive flexibility task through brain imaging, and to investigate behavioral and brain activation differences between young and older adults during task performance. Thirty-one young adults (aged 18-35) and 31 older adults (aged 60-80) were included in the present study. All participants underwent fMRI scans while completing the gamified cognitive flexibility task. Results showed that young adults outperformed older adults on the task. The left inferior frontal junction (IFJ), a key region for cognitive flexibility, was significantly activated during the task in both older and young adults. Comparatively, the percent signal change in the left IFJ was stronger in older adults than in young adults. Moreover, older adults demonstrated more precise representations in the left IFJ during the task. Additionally, the left inferior parietal lobule (IPL) and superior parietal lobule in older adults, and the left middle frontal gyrus (MFG) and inferior frontal gyrus in young adults, were also activated during the task. Psychophysiological interaction analyses showed significant functional connectivity between the left IFJ and the left IPL, as well as the right precuneus, in older adults. In young adults, significant functional connectivity was found between the left IFJ and the left MFG, as well as the right angular gyrus. The current study provides preliminary evidence for the validity of the gamified cognitive flexibility task through brain imaging. The findings suggest that this task could serve as a reliable tool for assessing cognitive flexibility and for exploring age-related differences in cognitive flexibility in both brain and behavior.
Affiliation(s)
- Ping Wang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Beijing, 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100101, China
- McGovern Institute for Brain Research, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China
- Sheng-Ju Guo
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Beijing, 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100101, China
- Hui-Jie Li
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Beijing, 100101, China.
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100101, China.
2
Scott DN, Mukherjee A, Nassar MR, Halassa MM. Thalamocortical architectures for flexible cognition and efficient learning. Trends Cogn Sci 2024; 28:739-756. PMID: 38886139. PMCID: PMC11305962. DOI: 10.1016/j.tics.2024.05.006.
Abstract
The brain exhibits a remarkable ability to learn and execute context-appropriate behaviors. How it achieves such flexibility, without sacrificing learning efficiency, is an important open question. Neuroscience, psychology, and engineering suggest that reusing and repurposing computations are part of the answer. Here, we review evidence that thalamocortical architectures may have evolved to facilitate these objectives of flexibility and efficiency by coordinating distributed computations. Recent work suggests that distributed prefrontal cortical networks compute with flexible codes, and that the mediodorsal thalamus provides regularization to promote efficient reuse. Thalamocortical interactions resemble hierarchical Bayesian computations, and their network implementation can be related to existing gating, synchronization, and hub theories of thalamic function. By reviewing recent findings and providing a novel synthesis, we highlight key research horizons integrating computation, cognition, and systems neuroscience.
Affiliation(s)
- Daniel N Scott
- Department of Neuroscience, Brown University, Providence, RI, USA; Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI, USA.
- Arghya Mukherjee
- Department of Neuroscience, Tufts University School of Medicine, Boston, MA, USA
- Matthew R Nassar
- Department of Neuroscience, Brown University, Providence, RI, USA; Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI, USA
- Michael M Halassa
- Department of Neuroscience, Tufts University School of Medicine, Boston, MA, USA; Department of Psychiatry, Tufts University School of Medicine, Boston, MA, USA.
3
Insanally MN, Albanna BF, Toth J, DePasquale B, Fadaei SS, Gupta T, Lombardi O, Kuchibhotla K, Rajan K, Froemke RC. Contributions of cortical neuron firing patterns, synaptic connectivity, and plasticity to task performance. Nat Commun 2024; 15:6023. PMID: 39019848. PMCID: PMC11255273. DOI: 10.1038/s41467-024-49895-6.
Abstract
Neuronal responses during behavior are diverse, ranging from highly reliable 'classical' responses to irregular 'non-classically responsive' firing. While a continuum of response properties is observed across neural systems, little is known about the synaptic origins and contributions of diverse responses to network function, perception, and behavior. To capture the heterogeneous responses measured from auditory cortex of rodents performing a frequency recognition task, we use a novel task-performing spiking recurrent neural network incorporating spike-timing-dependent plasticity. Reliable and irregular units contribute differentially to task performance via output and recurrent connections, respectively. Excitatory plasticity shifts the response distribution while inhibition constrains its diversity. Together both improve task performance with full network engagement. The same local patterns of synaptic inputs predict spiking response properties of network units and auditory cortical neurons from in vivo whole-cell recordings during behavior. Thus, diverse neural responses contribute to network function and emerge from synaptic plasticity rules.
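The plasticity rule named here, spike-timing-dependent plasticity, has a standard pairwise form that can be sketched in a few lines. The amplitudes and time constant below are generic illustrative defaults, not values from the paper:

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pairwise STDP: weight change for a spike-time difference
    dt = t_post - t_pre (ms). Pre-before-post (dt >= 0) potentiates;
    post-before-pre (dt < 0) depresses, each decaying with tau."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

print(stdp_dw(5.0) > 0)    # True: causal pairing strengthens the synapse
print(stdp_dw(-5.0) < 0)   # True: anti-causal pairing weakens it
```

Under a rule of this shape, causally ordered pre-then-post spikes strengthen a synapse while the reverse ordering weakens it, which is the kind of mechanism that lets a trained network's response distribution shift with experience.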
Affiliation(s)
- Michele N Insanally
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA, 15213, USA.
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA, 15213, USA.
- Department of Neurobiology, University of Pittsburgh School of Medicine, Pittsburgh, PA, 15213, USA.
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, 15213, USA.
- Badr F Albanna
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA, 15213, USA
- Jade Toth
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA, 15213, USA
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA, 15213, USA
- Brian DePasquale
- Department of Biomedical Engineering, Boston University, Boston, MA, 02215, USA
- Center for Systems Neuroscience, Boston University, Boston, MA, 02215, USA
- Saba Shokat Fadaei
- Skirball Institute for Biomolecular Medicine, New York University Grossman School of Medicine, New York, NY, 10016, USA
- Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, 10016, USA
- Department of Otolaryngology, New York University Grossman School of Medicine, New York, NY, 10016, USA
- Department of Neuroscience, New York University Grossman School of Medicine, New York, NY, 10016, USA
- Department of Physiology, New York University Grossman School of Medicine, New York, NY, 10016, USA
- Trisha Gupta
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA, 15213, USA
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA, 15213, USA
- Olivia Lombardi
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA, 15213, USA
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA, 15213, USA
- Kishore Kuchibhotla
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, 21218, USA
- Department of Neuroscience, Johns Hopkins University, Baltimore, MD, 21218, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21218, USA
- Kanaka Rajan
- Department of Neurobiology, Harvard Medical School, Boston, MA, 02115, USA
- Kempner Institute, Harvard University, Cambridge, MA, 02138, USA
- Robert C Froemke
- Skirball Institute for Biomolecular Medicine, New York University Grossman School of Medicine, New York, NY, 10016, USA.
- Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, 10016, USA.
- Department of Otolaryngology, New York University Grossman School of Medicine, New York, NY, 10016, USA.
- Department of Neuroscience, New York University Grossman School of Medicine, New York, NY, 10016, USA.
- Department of Physiology, New York University Grossman School of Medicine, New York, NY, 10016, USA.
- Center for Neural Science, New York University, New York, NY, 10003, USA.
4
Driscoll LN, Shenoy K, Sussillo D. Flexible multitask computation in recurrent networks utilizes shared dynamical motifs. Nat Neurosci 2024; 27:1349-1363. PMID: 38982201. PMCID: PMC11239504. DOI: 10.1038/s41593-024-01668-6.
Abstract
Flexible computation is a hallmark of intelligent behavior. However, little is known about how neural networks contextually reconfigure for different computations. In the present work, we identified an algorithmic neural substrate for modular computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses revealed learned computational strategies mirroring the modular subtask structure of the training task set. Dynamical motifs, which are recurring patterns of neural activity that implement specific computations through dynamics, such as attractors, decision boundaries and rotations, were reused across tasks. For example, tasks requiring memory of a continuous circular variable repurposed the same ring attractor. We showed that dynamical motifs were implemented by clusters of units when the unit activation function was restricted to be positive. Cluster lesions caused modular performance deficits. Motifs were reconfigured for fast transfer learning after an initial phase of learning. This work establishes dynamical motifs as a fundamental unit of compositional computation, intermediate between neuron and network. As whole-brain studies simultaneously record activity from multiple specialized systems, the dynamical motif framework will guide questions about specialization and generalization.
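The dynamical-systems analyses referenced here typically begin by locating fixed points, states x where the update F(x) returns x, by driving the speed q(x) = 0.5 * ||F(x) - x||^2 toward zero. A minimal sketch with a toy contracting network (the weights and sizes are invented for illustration; real analyses use gradient-based optimization over many starting states):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
W = rng.normal(scale=0.5 / np.sqrt(N), size=(N, N))  # weak recurrence: contracting map

def step(x):
    return np.tanh(W @ x)                            # discrete-time RNN update F(x)

def speed(x):
    return 0.5 * float(np.sum((step(x) - x) ** 2))   # q(x) = 0.5 * ||F(x) - x||^2

x = rng.normal(size=N)
for _ in range(200):       # iterating the map descends toward a stable
    x = step(x)            # fixed point, where q is numerically zero

print(speed(x) < 1e-6)     # True: x is (numerically) a fixed point
```

Attractors, decision boundaries and rotations, the motifs discussed above, are then characterized by linearizing the dynamics around such points.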
Affiliation(s)
- Laura N Driscoll
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA.
- Krishna Shenoy
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Neurobiology, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Bio-X Institute, Stanford University, Stanford, CA, USA
- Howard Hughes Medical Institute at Stanford University, Stanford, CA, USA
- David Sussillo
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
5
Proca AM, Rosas FE, Luppi AI, Bor D, Crosby M, Mediano PAM. Synergistic information supports modality integration and flexible learning in neural networks solving multiple tasks. PLoS Comput Biol 2024; 20:e1012178. PMID: 38829900. PMCID: PMC11175422. DOI: 10.1371/journal.pcbi.1012178.
Abstract
Striking progress has been made in understanding cognition by analyzing how the brain is engaged in different modes of information processing. For instance, so-called synergistic information (information encoded by a set of neurons but not by any subset) plays a key role in areas of the human brain linked with complex cognition. However, two questions remain unanswered: (a) how and why a cognitive system can become highly synergistic; and (b) how informational states map onto artificial neural networks in various learning modes. Here we employ an information-decomposition framework to investigate neural networks performing cognitive tasks. Our results show that synergy increases as networks learn multiple diverse tasks, and that in tasks requiring integration of multiple sources, performance critically relies on synergistic neurons. Overall, our results suggest that synergy is used to combine information from multiple modalities, and more generally for flexible and efficient learning. These findings reveal new ways of investigating how and why learning systems employ specific information-processing strategies, and support the principle that the capacity for general-purpose learning critically relies on the system's information dynamics.
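The notion of synergy used here, information carried by a set of units jointly but by no unit alone, can be illustrated with the canonical XOR system. The sketch below uses interaction information, a simple proxy that understates true PID synergy by omitting the redundancy term; it is not the paper's estimator:

```python
import itertools
import math

def mutual_info(pairs):
    """Mutual information (bits) of a joint distribution given as {(a, b): p}."""
    pa, pb = {}, {}
    for (a, b), p in pairs.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in pairs.items() if p > 0)

# Y = XOR(X1, X2) with uniform inputs: the canonical purely synergistic system.
full, m1, m2 = {}, {}, {}
for x1, x2 in itertools.product([0, 1], repeat=2):
    y, p = x1 ^ x2, 0.25
    full[((x1, x2), y)] = full.get(((x1, x2), y), 0.0) + p
    m1[(x1, y)] = m1.get((x1, y), 0.0) + p
    m2[(x2, y)] = m2.get((x2, y), 0.0) + p

# I(X1,X2;Y) - I(X1;Y) - I(X2;Y): each source alone carries 0 bits about Y,
# but the pair determines it completely.
synergy = mutual_info(full) - mutual_info(m1) - mutual_info(m2)
print(round(synergy, 6))  # 1.0 bit
```

This is the sense in which "performance critically relies on synergistic neurons" in integration tasks: the task-relevant variable is only decodable from units read out together.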
Affiliation(s)
- Alexandra M. Proca
- Department of Computing, Imperial College London, London, United Kingdom
- Fernando E. Rosas
- Department of Informatics, University of Sussex, Brighton, United Kingdom
- Sussex Centre for Consciousness Science and Sussex AI, University of Sussex, Brighton, United Kingdom
- Centre for Psychedelic Research and Centre for Complexity Science, Department of Brain Sciences, Imperial College London, London, United Kingdom
- Centre for Eudaimonia and Human Flourishing, University of Oxford, Oxford, United Kingdom
- Andrea I. Luppi
- Department of Clinical Neurosciences and Division of Anaesthesia, University of Cambridge, Cambridge, United Kingdom
- Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, United Kingdom
- Montreal Neurological Institute, McGill University, Montreal, Canada
- Daniel Bor
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Department of Psychology, Queen Mary University of London, London, United Kingdom
- Matthew Crosby
- Department of Computing, Imperial College London, London, United Kingdom
- Pedro A. M. Mediano
- Department of Computing, Imperial College London, London, United Kingdom
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
6
Kravchenko A, Cusack R. The limitations of automatically generated curricula for continual learning. PLoS One 2024; 19:e0290706. PMID: 38625859. PMCID: PMC11020929. DOI: 10.1371/journal.pone.0290706.
Abstract
In many applications, artificial neural networks are best trained for a task by following a curriculum, in which simpler concepts are learned before more complex ones. This curriculum can be hand-crafted by the engineer or optimised like other hyperparameters, by evaluating many curricula. However, this is computationally intensive, and the hyperparameters are unlikely to generalise to new datasets. An attractive alternative, demonstrated in influential prior works, is that the network could choose its own curriculum by monitoring its learning. This would be particularly beneficial for continual learning, in which the network must learn from an environment that is changing over time, relevant both to practical applications and to the modelling of human development. In this paper we test the generality of this approach using a proof-of-principle model, training a network on two sequential tasks under static and continual conditions, and investigating both the benefits of a curriculum and the handicap induced by continual learning. Additionally, we test a variety of prior task-switching metrics, and find that even in this simple scenario a network is often unable to choose the optimal curriculum, as the benefits are sometimes only apparent with hindsight, at the end of training. We discuss the implications of the results for network engineering and for models of human development.
Affiliation(s)
- Anna Kravchenko
- Faculty of Science, Radboud University, Nijmegen, The Netherlands
- Rhodri Cusack
- Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
7
Capouskova K, Zamora-López G, Kringelbach ML, Deco G. Integration and segregation manifolds in the brain ensure cognitive flexibility during tasks and rest. Hum Brain Mapp 2023; 44:6349-6363. PMID: 37846551. PMCID: PMC10681658. DOI: 10.1002/hbm.26511.
Abstract
Adapting to a constantly changing environment requires the human brain to flexibly switch among many demanding cognitive tasks, processing both specialized and integrated information associated with activity in functional networks over time. In this study, we investigated the temporal alternation between segregated and integrated states in the brain during rest and six cognitive tasks using functional MRI. We employed a deep autoencoder to explore the 2D latent space associated with the segregated and integrated states. Our results show that the integrated state occupies less of the latent-space manifold than the segregated states. Moreover, the integrated state is characterized by lower entropy of occupancy than the segregated state, suggesting that integration plays a consolidating role, while segregation may support cognitive specialization. Comparing rest with the tasks, we found that rest exhibits higher entropy of occupancy, indicating a more random wandering of the mind compared with the expected focus during task performance. Our study demonstrates that transient, short-lived integrated and segregated states are both present during rest and task performance, with the brain flexibly switching between them: integration serves information compression, while segregation relates to information specialization.
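The entropy-of-occupancy statistic compares how evenly time points are spread across latent states. A toy sketch, with state labels and proportions invented purely for illustration:

```python
import math
from collections import Counter

def occupancy_entropy(states):
    """Shannon entropy (bits) of the empirical state-occupancy distribution."""
    counts = Counter(states)
    n = len(states)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Rest-like wandering visits states more uniformly than a task-focused run.
rest = ["seg1", "seg2", "int", "seg3"] * 25   # evenly spread over 4 states
task = ["int"] * 85 + ["seg1"] * 15           # occupancy concentrated on one state
print(occupancy_entropy(rest) > occupancy_entropy(task))  # True
```

Higher entropy corresponds to the more uniform "wandering" occupancy the study reports at rest; lower entropy to the concentrated occupancy expected under task focus.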
Affiliation(s)
- Katerina Capouskova
- Center for Brain and Cognition, Computational Neuroscience Group, DTIC, Universitat Pompeu Fabra, Barcelona, Spain
- Gorka Zamora-López
- Center for Brain and Cognition, Computational Neuroscience Group, DTIC, Universitat Pompeu Fabra, Barcelona, Spain
- Morten L. Kringelbach
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom
- Gustavo Deco
- Center for Brain and Cognition, Computational Neuroscience Group, DTIC, Universitat Pompeu Fabra, Barcelona, Spain
- Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
8
Gurnani H, Cayco Gajic NA. Signatures of task learning in neural representations. Curr Opin Neurobiol 2023; 83:102759. PMID: 37708653. DOI: 10.1016/j.conb.2023.102759.
Abstract
While neural plasticity has long been studied as the basis of learning, the growth of large-scale neural recording techniques provides a unique opportunity to study how learning-induced activity changes are coordinated across neurons within the same circuit. These distributed changes can be understood through an evolution of the geometry of neural manifolds and latent dynamics underlying new computations. In parallel, studies of multi-task and continual learning in artificial neural networks hint at a tradeoff between non-interference and compositionality as guiding principles to understand how neural circuits flexibly support multiple behaviors. In this review, we highlight recent findings from both biological and artificial circuits that together form a new framework for understanding task learning at the population level.
Affiliation(s)
- Harsha Gurnani
- Department of Biology, University of Washington, Seattle, WA, USA.
- N Alex Cayco Gajic
- Laboratoire de Neurosciences Cognitives, Ecole Normale Supérieure, Université PSL, Paris, France.
9
Mizes KGC, Lindsey J, Escola GS, Ölveczky BP. Motor cortex is required for flexible but not automatic motor sequences. bioRxiv [Preprint] 2023:2023.09.05.556348. PMID: 37732225. PMCID: PMC10508748. DOI: 10.1101/2023.09.05.556348.
Abstract
How motor cortex contributes to motor sequence execution is much debated, with studies supporting disparate views. Here we probe the degree to which motor cortex's engagement depends on task demands, specifically whether its role differs for highly practiced, or 'automatic', sequences versus flexible sequences informed by external events. To test this, we trained rats to generate three-element motor sequences either by overtraining them on a single sequence or by having them follow instructive visual cues. Lesioning motor cortex revealed that it is necessary for flexible, cue-driven motor sequences but dispensable for a single automatic sequence trained in isolation. However, when an automatic motor sequence was practiced alongside the flexible task, it became motor cortex-dependent, suggesting that subcortical consolidation of an automatic motor sequence is delayed or prevented when the same sequence is also produced in a flexible context. A simple neural network model recapitulated these results and explained the underlying circuit mechanisms. Our results critically delineate the role of motor cortex in motor sequence execution, describing the conditions under which it is engaged and the functions it fulfills, thus reconciling seemingly conflicting views about motor cortex's role in motor sequence generation.
Affiliation(s)
- Kevin G. C. Mizes
- Program in Biophysics, Harvard University, Cambridge, MA, 02138, USA
- Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Jack Lindsey
- Zuckerman Mind Brain and Behavior Institute, Columbia University, New York, NY, 10027, USA
- G. Sean Escola
- Zuckerman Mind Brain and Behavior Institute, Columbia University, New York, NY, 10027, USA
- Department of Psychiatry, Columbia University, New York, NY, 10032, USA
- Bence P. Ölveczky
- Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
10
Han CZ, Donoghue T, Cao R, Kunz L, Wang S, Jacobs J. Using multi-task experiments to test principles of hippocampal function. Hippocampus 2023; 33:646-657. PMID: 37042212. PMCID: PMC10249632. DOI: 10.1002/hipo.23540.
Abstract
Investigations of hippocampal functions have revealed a dizzying array of findings, from lesion-based behavioral deficits, to a diverse range of characterized neural activations, to computational models of putative functionality. Across these findings, there remains an ongoing debate about the core function of the hippocampus and the generality of its representation. Researchers have debated whether the hippocampus's primary role relates to the representation of space, the neural basis of (episodic) memory, or some more general computation that generalizes across various cognitive domains. Within these different perspectives, there is much debate about the nature of feature encodings. Here, we suggest that, in order to evaluate hippocampal responses (investigating, for example, whether neuronal representations are narrowly targeted to particular tasks or subserve domain-general purposes), a promising research strategy may be the use of multi-task experiments, or more generally switching between multiple task contexts while recording from the same neurons in a given session. We argue that this strategy, when combined with explicitly defined theoretical motivations that guide experiment design, could be a fruitful approach to better understand how hippocampal representations support different behaviors. In doing so, we briefly review key open questions in the field, as exemplified by articles in this special issue, as well as previous work using multi-task experiments, and extrapolate to consider how this strategy could be further applied to probe fundamental questions about hippocampal function.
Affiliation(s)
- Claire Z. Han
- Department of Biomedical Engineering, Columbia University
- Thomas Donoghue
- Runnan Cao
- Department of Radiology, Washington University in St. Louis
- Lukas Kunz
- Department of Epileptology, University of Bonn Medical Center, Bonn, Germany
- Shuo Wang
- Department of Radiology, Washington University in St. Louis
- Joshua Jacobs
- Department of Biomedical Engineering, Columbia University
- Department of Neurological Surgery, Columbia University
11
Pugavko MM, Maslennikov OV, Nekorkin VI. Multitask computation through dynamics in recurrent spiking neural networks. Sci Rep 2023; 13:3997. PMID: 36899052. PMCID: PMC10006454. DOI: 10.1038/s41598-023-31110-z.
Abstract
In this work, inspired by cognitive neuroscience experiments, we propose recurrent spiking neural networks trained to perform multiple target tasks. These models are designed by treating neurocognitive activity as computation through dynamics. Trained on input-output examples, the spiking neural networks are then reverse-engineered to find the dynamical mechanisms fundamental to their performance. We show that considering multitasking and spiking within one system provides insight into the principles of neural computation.
Affiliation(s)
- Mechislav M Pugavko
- Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, 603950, Russia
- Oleg V Maslennikov
- Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, 603950, Russia.
- Vladimir I Nekorkin
- Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, 603950, Russia
12
Johnston WJ, Fusi S. Abstract representations emerge naturally in neural networks trained to perform multiple tasks. Nat Commun 2023; 14:1040. PMID: 36823136. PMCID: PMC9950464. DOI: 10.1038/s41467-023-36583-0.
Abstract
Humans and other animals demonstrate a remarkable ability to generalize knowledge across distinct contexts and objects during natural behavior. We posit that this ability to generalize arises from a specific representational geometry that we call abstract, and that is referred to as disentangled in machine learning. These abstract representations have been observed in recent neurophysiological studies. However, it is unknown how they emerge. Here, using feedforward neural networks, we demonstrate that the learning of multiple tasks causes abstract representations to emerge, under both supervised and reinforcement learning. We show that these abstract representations enable few-sample learning and reliable generalization on novel tasks. We conclude that abstract representations of sensory and cognitive variables may emerge from the multiple behaviors that animals exhibit in the natural world and, as a consequence, could be pervasive in high-level brain regions. We also make several specific predictions about which variables will be represented abstractly.
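A common operationalization of an "abstract" geometry is cross-condition generalization: a linear decoder trained in one context should transfer to an unseen context. A small sketch under an assumed disentangled two-variable code (the data generator and decoder are hypothetical illustrations, not the paper's models):

```python
import numpy as np

rng = np.random.default_rng(1)

def code(a, b, n=200, noise=0.1):
    """A 'disentangled' 2D code: each latent variable gets its own axis."""
    return np.array([a, b], float) + noise * rng.normal(size=(n, 2))

# Fit a linear decoder for variable a using only trials from context b = 0.
x0, x1 = code(0, 0), code(1, 0)
w = x1.mean(axis=0) - x0.mean(axis=0)               # class-difference readout
b0 = w @ (x0.mean(axis=0) + x1.mean(axis=0)) / 2    # midpoint threshold

# Cross-condition generalization: test the same decoder in context b = 1.
t0, t1 = code(0, 1), code(1, 1)
acc = np.mean(np.concatenate([(t0 @ w < b0), (t1 @ w > b0)]))
print(acc)  # high accuracy: the decoder transfers across contexts
```

For an entangled code (for example, one axis per condition), the same decoder would fail in the held-out context, which is why this transfer accuracy is used as a signature of abstraction.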
Affiliation(s)
- W Jeffrey Johnston
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA.
- Mortimer B. Zuckerman Mind, Brain and Behavior Institute, Columbia University, New York, NY, USA.
- Stefano Fusi
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA.
- Mortimer B. Zuckerman Mind, Brain and Behavior Institute, Columbia University, New York, NY, USA.
13
Ito T, Murray JD. Multitask representations in the human cortex transform along a sensory-to-motor hierarchy. Nat Neurosci 2023; 26:306-315. PMID: 36536240. DOI: 10.1038/s41593-022-01224-0.
Abstract
Human cognition recruits distributed neural processes, yet the organizing computational and functional architectures remain unclear. Here, we characterized the geometry and topography of multitask representations across the human cortex using functional magnetic resonance imaging during 26 cognitive tasks in the same individuals. We measured the representational similarity across tasks within a region and the alignment of representations between regions. Representational alignment varied in a graded manner along the sensory-association-motor axis. Multitask dimensionality exhibited compression then expansion along this gradient. To investigate computational principles of multitask representations, we trained multilayer neural network models to transform empirical visual-to-motor representations. Compression-then-expansion organization in models emerged exclusively in a rich training regime, which is associated with learning optimized representations that are robust to noise. This regime produces hierarchically structured representations similar to empirical cortical patterns. Together, these results reveal computational principles that organize multitask representations across the human cortex to support multitask cognition.
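The dimensionality said to compress and then expand along the gradient is often quantified by the participation ratio of the covariance eigenspectrum. A toy sketch with synthetic data (not the study's measurements):

```python
import numpy as np

def participation_ratio(X):
    """Effective dimensionality of data X (samples x features):
    PR = (sum_i lam_i)^2 / sum_i lam_i^2 over covariance eigenvalues lam_i."""
    lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    lam = np.clip(lam, 0, None)          # guard against tiny negative values
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(0)
iso = rng.normal(size=(2000, 10))                    # variance spread over 10 axes
compressed = iso @ np.diag([1] * 2 + [0.05] * 8)     # variance squeezed onto 2 axes

print(participation_ratio(iso))          # close to 10
print(participation_ratio(compressed))   # close to 2
```

A compression-then-expansion profile corresponds to this quantity dipping in intermediate regions (or layers) and recovering toward the output.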
Affiliation(s)
- Takuya Ito
- Department of Psychiatry, Yale School of Medicine, New Haven, CT, USA
- John D Murray
- Department of Psychiatry, Yale School of Medicine, New Haven, CT, USA
- Department of Neuroscience, Yale School of Medicine, New Haven, CT, USA
- Department of Physics, Yale University, New Haven, CT, USA

14
Momennejad I. A rubric for human-like agents and NeuroAI. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210446. [PMID: 36511409] [PMCID: PMC9745874] [DOI: 10.1098/rstb.2021.0446]
Abstract
Researchers across the cognitive, neuro-, and computer sciences increasingly reference 'human-like' artificial intelligence and 'neuroAI'. However, the scope and use of these terms are often inconsistent. Contributed research ranges widely, from mimicking behaviour, to testing machine learning methods as neurally plausible hypotheses at the cellular or functional level, to solving engineering problems. However, progress on one of these three goals cannot be assumed or expected to translate automatically into progress on the others. Here, a simple rubric is proposed to clarify the scope of individual contributions, grounded in their commitments to human-like behaviour, neural plausibility, or benchmark/engineering/computer-science goals. This is clarified using examples of weak and strong neuroAI and human-like agents, and by discussing the generative, corroborative, and corrective ways in which the three dimensions interact with one another. The author maintains that future progress in artificial intelligence will require strong interactions across the disciplines, with iterative feedback loops and meticulous validity tests, leading to both known and yet-unknown advances that may span decades to come. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Ida Momennejad
- Microsoft Research NYC, Reinforcement Learning Station, 300 Lafayette, New York, NY 10012, USA

15
Rajalingham R, Piccato A, Jazayeri M. Recurrent neural networks with explicit representation of dynamic latent variables can mimic behavioral patterns in a physical inference task. Nat Commun 2022; 13:5865. [PMID: 36195614] [PMCID: PMC9532407] [DOI: 10.1038/s41467-022-33581-6]
Abstract
Primates can richly parse sensory inputs to infer latent information. This ability is hypothesized to rely on establishing mental models of the external world and running mental simulations of those models. However, evidence supporting this hypothesis is limited to behavioral models that do not emulate neural computations. Here, we test this hypothesis by directly comparing the behavior of primates (humans and monkeys) in a ball interception task to that of a large set of recurrent neural network (RNN) models with or without the capacity to dynamically track the underlying latent variables. Humans and monkeys exhibit similar behavioral patterns. This primate behavioral pattern is best captured by RNNs endowed with dynamic inference, consistent with the hypothesis that the primate brain uses dynamic inferences to support flexible physical predictions. Moreover, our work highlights a general strategy for using model neural systems to test computational hypotheses of higher brain function.
Affiliation(s)
- Rishi Rajalingham
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Building 46, 43 Vassar St., Cambridge, MA 02139, USA
- Aída Piccato
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Building 46, 43 Vassar St., Cambridge, MA 02139, USA
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Building 46, 43 Vassar St., Cambridge, MA 02139-4307, USA
- Mehrdad Jazayeri
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Building 46, 43 Vassar St., Cambridge, MA 02139, USA
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Building 46, 43 Vassar St., Cambridge, MA 02139-4307, USA

16
Kadmon Harpaz N, Hardcastle K, Ölveczky BP. Learning-induced changes in the neural circuits underlying motor sequence execution. Curr Opin Neurobiol 2022; 76:102624. [PMID: 36030613] [PMCID: PMC11125547] [DOI: 10.1016/j.conb.2022.102624]
Abstract
As the old adage goes: practice makes perfect. Yet the neural mechanisms by which rote repetition transforms a halting behavior into a fluid, effortless, and "automatic" action are not well understood. Here we consider the possibility that well-practiced motor sequences, which initially rely on higher-level decision-making circuits, become wholly specified in lower-level control circuits. We review studies informing this idea, discuss the constraints on such a shift in control, and suggest approaches to pinpoint circuit-level changes associated with motor sequence learning.
Affiliation(s)
- Naama Kadmon Harpaz
- Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University. https://twitter.com/@NKadmonHarpaz
- Kiah Hardcastle
- Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University. https://twitter.com/@kiahhardcastle
- Bence P Ölveczky
- Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University.

17
Capouskova K, Kringelbach ML, Deco G. Modes of cognition: Evidence from metastable brain dynamics. Neuroimage 2022; 260:119489. [PMID: 35882268] [DOI: 10.1016/j.neuroimage.2022.119489]
Abstract
Managing cognitive load depends on adequate resource allocation by the human brain through the engagement of metastable substates, which are large-scale functional networks that change over time. We employed a novel analysis method, deep autoencoder dynamical analysis (DADA), with 100 healthy adults selected from the Human Connectome Project (HCP) data set at rest and during six cognitive tasks. The deep autoencoder of DADA described seven recurrent stochastic metastable substates from the functional connectome of BOLD phase-coherence matrices. These substates differed significantly in their probability of appearance, duration, and spatial attributes. We found that during the cognitive tasks, there was a higher probability of more strongly connected substates, dominated by a high degree of connectivity in the thalamus. In addition, compared with task states, resting brain dynamics were less predictable, indicating a more uniform distribution of metastability between substates, quantified by higher entropy. These novel findings provide empirical evidence for a philosophically motivated cognitive theory that posits on-line and off-line processing as two fundamentally distinct modes of cognition: on-line cognition refers to task-dependent engagement with sensory input, while off-line cognition is a slower, environmentally detached mode engaged in decision-making and planning. Overall, the DADA framework provides a bridge between neuroscience and cognitive theory that can be explored further in future work.
Affiliation(s)
- Katerina Capouskova
- Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Ramon Trias Fargas 25-27, Barcelona 08005, Spain
- Morten L Kringelbach
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Gustavo Deco
- Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Ramon Trias Fargas 25-27, Barcelona 08005, Spain; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain; Turner Institute for Brain and Mental Health, Monash University, Melbourne, VIC, Australia

18
Rabadan MA, De La Cruz ED, Rao SB, Chen Y, Gong C, Crabtree G, Xu B, Markx S, Gogos JA, Yuste R, Tomer R. An in vitro model of neuronal ensembles. Nat Commun 2022; 13:3340. [PMID: 35680927] [PMCID: PMC9184643] [DOI: 10.1038/s41467-022-31073-1]
Abstract
Advances in 3D neuronal cultures, such as brain spheroids and organoids, are allowing unprecedented in vitro access to some of the molecular, cellular and developmental mechanisms underlying brain diseases. However, their efficacy in recapitulating brain network properties that encode brain function remains limited, thereby precluding development of effective in vitro models of complex brain disorders like schizophrenia. Here, we develop and characterize a Modular Neuronal Network (MoNNet) approach that recapitulates specific features of neuronal ensemble dynamics, segregated local-global network activities and a hierarchical modular organization. We utilized MoNNets for quantitative in vitro modelling of schizophrenia-related network dysfunctions caused by highly penetrant mutations in SETD1A and 22q11.2 risk loci. Furthermore, we demonstrate its utility for drug discovery by performing pharmacological rescue of alterations in neuronal ensemble stability and global network synchrony. MoNNets allow in vitro modelling of brain diseases for investigating the underlying neuronal network mechanisms and systematic drug discovery.
Affiliation(s)
- M Angeles Rabadan
- Department of Biological Sciences, Columbia University, New York, NY, USA
- Sneha B Rao
- Mortimer B. Zuckerman Mind Brain and Behavior Institute, Columbia University, New York, NY, USA
- Yannan Chen
- Department of Biological Sciences, Columbia University, New York, NY, USA
- Department of Biomedical Engineering, Columbia University, New York, NY, USA
- Cheng Gong
- Department of Biological Sciences, Columbia University, New York, NY, USA
- Department of Biomedical Engineering, Columbia University, New York, NY, USA
- Gregg Crabtree
- Mortimer B. Zuckerman Mind Brain and Behavior Institute, Columbia University, New York, NY, USA
- Bin Xu
- Department of Psychiatry, Vagelos College of Physicians & Surgeons, Columbia University, New York, NY, USA
- Sander Markx
- Department of Psychiatry, Vagelos College of Physicians & Surgeons, Columbia University, New York, NY, USA
- Joseph A Gogos
- Mortimer B. Zuckerman Mind Brain and Behavior Institute, Columbia University, New York, NY, USA
- Department of Physiology, Columbia University, New York, NY, USA
- Department of Neuroscience, Columbia University, New York, NY, USA
- Department of Psychiatry, Columbia University, New York, NY, USA
- Rafael Yuste
- Department of Biological Sciences, Columbia University, New York, NY, USA
- NeuroTechnology Center, Columbia University, New York, NY, USA
- Raju Tomer
- Department of Biological Sciences, Columbia University, New York, NY, USA
- Mortimer B. Zuckerman Mind Brain and Behavior Institute, Columbia University, New York, NY, USA
- Department of Biomedical Engineering, Columbia University, New York, NY, USA
- NeuroTechnology Center, Columbia University, New York, NY, USA
19

Affiliation(s)
- Siyan Zhou
- Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Kanaka Rajan
- Icahn School of Medicine at Mount Sinai, New York, NY, USA

20
Voina D, Recanatesi S, Hu B, Shea-Brown E, Mihalas S. Single Circuit in V1 Capable of Switching Contexts during Movement Using an Inhibitory Population as a Switch. Neural Comput 2022; 34:541-594. [PMID: 35016220] [DOI: 10.1162/neco_a_01472]
Abstract
As animals adapt to their environments, their brains are tasked with processing stimuli in different sensory contexts. Whether these computations are context dependent or independent, they are all implemented in the same neural tissue. A crucial question is what neural architectures can respond flexibly to a range of stimulus conditions and switch between them. This is a particular case of flexible architecture that permits multiple related computations within a single circuit. Here, we address this question in the specific case of the visual system circuitry, focusing on context integration, defined as the integration of feedforward and surround information across visual space. We show that a biologically inspired microcircuit with multiple inhibitory cell types can switch between visual processing of the static context and the moving context. In our model, the VIP population acts as the switch and modulates the visual circuit through a disinhibitory motif. Moreover, the VIP population is efficient, requiring only a relatively small number of neurons to switch contexts. This circuit eliminates noise in videos by using appropriate lateral connections for contextual spatiotemporal surround modulation, having superior denoising performance compared to circuits where only one context is learned. Our findings shed light on a minimally complex architecture that is capable of switching between two naturalistic contexts using few switching units.
Affiliation(s)
- Doris Voina
- Applied Mathematics, University of Washington, Seattle, WA 98195, USA
- Stefano Recanatesi
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195, USA
- Brian Hu
- Allen Institute for Brain Science, Seattle, WA 98109, USA
- Eric Shea-Brown
- Applied Mathematics, University of Washington, Seattle, WA 98195, USA, and Allen Institute for Brain Science, Seattle, WA 98109, USA
- Stefan Mihalas
- Applied Mathematics, University of Washington, Seattle, WA 98195, USA, and Allen Institute for Brain Science, Seattle, WA 98109, USA

21
Towards the next generation of recurrent network models for cognitive neuroscience. Curr Opin Neurobiol 2021; 70:182-192. [DOI: 10.1016/j.conb.2021.10.015]
Abstract
Recurrent neural networks (RNNs) trained with machine learning techniques on cognitive tasks have become a widely accepted tool for neuroscientists. In this short opinion piece, we discuss fundamental challenges faced by the early work of this approach and recent steps to overcome such challenges and build next-generation RNN models for cognition. We propose several essential questions that practitioners of this approach should address to continue to build future generations of RNN models.
22
Freund MC, Etzel JA, Braver TS. Neural Coding of Cognitive Control: The Representational Similarity Analysis Approach. Trends Cogn Sci 2021; 25:622-638. [PMID: 33895065] [PMCID: PMC8279005] [DOI: 10.1016/j.tics.2021.03.011]
Abstract
Cognitive control relies on distributed and potentially high-dimensional frontoparietal task representations. Yet, the classical cognitive neuroscience approach in this domain has focused on aggregating and contrasting neural measures - either via univariate or multivariate methods - along highly abstracted, 1D factors (e.g., Stroop congruency). Here, we present representational similarity analysis (RSA) as a complementary approach that can powerfully inform representational components of cognitive control theories. We review several exemplary uses of RSA in this regard. We further show that most classical paradigms, given their factorial structure, can be optimized for RSA with minimal modification. Our aim is to illustrate how RSA can be incorporated into cognitive control investigations to shed new light on old questions.
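The core RSA workflow this abstract describes is two-step: build a representational dissimilarity matrix (RDM) over task conditions within a region, then compare RDMs (e.g., against a model's predicted geometry) at the second order. A minimal sketch on synthetic data (the pattern sizes and the Pearson/Spearman choices here are illustrative assumptions, not the paper's exact pipeline):

```python
import numpy as np

def rdm(patterns):
    # Representational dissimilarity matrix over conditions:
    # 1 - Pearson correlation between activity patterns (conditions x units).
    return 1.0 - np.corrcoef(patterns)

def second_order_similarity(rdm_a, rdm_b):
    # Compare two RDMs by Spearman-correlating their upper triangles.
    # Ranks via double argsort; assumes no exact ties (fine for continuous data).
    iu = np.triu_indices_from(rdm_a, k=1)
    ranks = lambda v: v.argsort().argsort().astype(float)
    return np.corrcoef(ranks(rdm_a[iu]), ranks(rdm_b[iu]))[0, 1]

rng = np.random.default_rng(1)
region = rng.normal(size=(8, 100))                # 8 task conditions x 100 voxels
model = region + 0.1 * rng.normal(size=(8, 100))  # model predicting a similar geometry

print(second_order_similarity(rdm(region), rdm(model)))  # high (near 1)
```

Because only the upper triangle of each RDM enters the comparison, this second-order step is what lets RSA relate neural data to models that live in entirely different measurement spaces.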
Affiliation(s)
- Michael C Freund
- Department of Psychological and Brain Sciences, Washington University in St Louis, St Louis, MO 63130, USA
- Joset A Etzel
- Department of Psychological and Brain Sciences, Washington University in St Louis, St Louis, MO 63130, USA
- Todd S Braver
- Department of Psychological and Brain Sciences, Washington University in St Louis, St Louis, MO 63130, USA; Department of Radiology, Washington University in St Louis School of Medicine, St Louis, MO 63110, USA; Department of Neuroscience, Washington University in St Louis School of Medicine, St Louis, MO 63110, USA

23
Deli E, Peters J, Kisvárday Z. The thermodynamics of cognition: A mathematical treatment. Comput Struct Biotechnol J 2021; 19:784-793. [PMID: 33552449] [PMCID: PMC7843413] [DOI: 10.1016/j.csbj.2021.01.008]
Abstract
There is a general expectation that the laws of classical physics must apply to biology, particularly the neural system. The evoked cycle represents the brain's energy/information exchange with the physical environment through stimuli. Therefore, the thermodynamics of emotions might elucidate the neurological origin of intellectual evolution, and explain the psychological and health consequences of positive and negative emotional states based on their energy profiles. We utilized the Carnot cycle and Landauer's principle to analyze the energetic consequences of the brain's resting and evoked states during and after various cognitive states. Namely, positive emotional states can be represented by the reversed Carnot cycle, whereas negative emotional reactions trigger the Carnot cycle. The two conditions have contrasting energetic and entropic aftereffects, with consequences for mental energy. The mathematics of the Carnot and reversed Carnot cycles, which can explain recent findings in human psychology, might be constructive in the scientific endeavor of turning psychology into a hard science.
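For orientation, the two standard physical ingredients this abstract invokes are the Carnot efficiency of a heat engine operating between hot and cold reservoirs, and Landauer's bound on the energetic cost of erasing information (textbook forms; the paper's mapping of these onto emotional states is its own specific contribution):

```latex
\eta_{\mathrm{Carnot}} = 1 - \frac{T_c}{T_h},
\qquad
E_{\mathrm{erase}} \ \ge\ k_B T \ln 2 \quad \text{per bit erased}
```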
Affiliation(s)
- Eva Deli
- Institute for Consciousness Studies (ICS), Benczur ter 9, Nyiregyhaza 4400, Hungary
- James Peters
- Department of Electrical and Computer Engineering, University of Manitoba, 75A Chancellor's Circle, Winnipeg, MB R3T 5V6, Canada
- Department of Mathematics, Faculty of Arts and Sciences, Adiyaman University, Adiyaman, Turkey
- Zoltán Kisvárday
- MTA-Debreceni Egyetem, Neuroscience Research Group, 4032 Debrecen, Nagyerdei krt. 98., Hungary

24
Ito T, Hearne L, Mill R, Cocuzza C, Cole MW. Discovering the Computational Relevance of Brain Network Organization. Trends Cogn Sci 2020; 24:25-38. [PMID: 31727507] [PMCID: PMC6943194] [DOI: 10.1016/j.tics.2019.10.005]
Abstract
Understanding neurocognitive computations will require not just localizing cognitive information distributed throughout the brain but also determining how that information got there. We review recent advances in linking empirical and simulated brain network organization with cognitive information processing. Building on these advances, we offer a new framework for understanding the role of connectivity in cognition: network coding (encoding/decoding) models. These models utilize connectivity to specify the transfer of information via neural activity flow processes, successfully predicting the formation of cognitive representations in empirical neural data. The success of these models supports the possibility that localized neural functions mechanistically emerge (are computed) from distributed activity flow processes that are specified primarily by connectivity patterns.
Affiliation(s)
- Takuya Ito
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ 07102, USA; Behavioral and Neural Sciences PhD Program, Rutgers University, Newark, NJ 07102, USA
- Luke Hearne
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ 07102, USA
- Ravi Mill
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ 07102, USA
- Carrisa Cocuzza
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ 07102, USA; Behavioral and Neural Sciences PhD Program, Rutgers University, Newark, NJ 07102, USA
- Michael W Cole
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ 07102, USA