1. Storm JF, Klink PC, Aru J, Senn W, Goebel R, Pigorini A, Avanzini P, Vanduffel W, Roelfsema PR, Massimini M, Larkum ME, Pennartz CMA. An integrative, multiscale view on neural theories of consciousness. Neuron 2024; 112:1531-1552. PMID: 38447578. DOI: 10.1016/j.neuron.2024.02.004.
Abstract
How is conscious experience related to material brain processes? A variety of theories aiming to answer this age-old question have emerged from the recent surge in consciousness research, and some are now hotly debated. Although most researchers have so far focused on the development and validation of their preferred theory in relative isolation, this article, written by a group of scientists representing different theories, takes an alternative approach. Noting that various theories often try to explain different aspects or mechanistic levels of consciousness, we argue that the theories do not necessarily contradict each other. Instead, several of them may converge on fundamental neuronal mechanisms and be partly compatible and complementary, so that multiple theories can simultaneously contribute to our understanding. Here, we consider unifying, integration-oriented approaches that have so far been largely neglected, seeking to combine valuable elements from various theories.
Affiliation(s)
- Johan F Storm
- The Brain Signaling Group, Division of Physiology, IMB, Faculty of Medicine, University of Oslo, Domus Medica, Sognsvannsveien 9, Blindern, 0317 Oslo, Norway.
- P Christiaan Klink
- Department of Vision and Cognition, Netherlands Institute for Neuroscience, Royal Netherlands Academy of Arts and Sciences, 1105 BA Amsterdam, the Netherlands; Experimental Psychology, Helmholtz Institute, Utrecht University, 3584 CS Utrecht, the Netherlands; Laboratory of Visual Brain Therapy, Sorbonne Université, Institut National de la Santé et de la Recherche Médicale, Centre National de la Recherche Scientifique, Institut de la Vision, Paris 75012, France
- Jaan Aru
- Institute of Computer Science, University of Tartu, Tartu, Estonia
- Walter Senn
- Department of Physiology, University of Bern, Bern, Switzerland
- Rainer Goebel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, 6229 EV Maastricht, The Netherlands
- Andrea Pigorini
- Department of Biomedical, Surgical and Dental Sciences, Università degli Studi di Milano, Milan 20122, Italy
- Pietro Avanzini
- Istituto di Neuroscienze, Consiglio Nazionale delle Ricerche, 43125 Parma, Italy
- Wim Vanduffel
- Department of Neurosciences, Laboratory of Neuro and Psychophysiology, KU Leuven Medical School, 3000 Leuven, Belgium; Leuven Brain Institute, KU Leuven, 3000 Leuven, Belgium; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129, USA; Department of Radiology, Harvard Medical School, Boston, MA 02144, USA
- Pieter R Roelfsema
- Department of Vision and Cognition, Netherlands Institute for Neuroscience, Royal Netherlands Academy of Arts and Sciences, 1105 BA Amsterdam, the Netherlands; Laboratory of Visual Brain Therapy, Sorbonne Université, Institut National de la Santé et de la Recherche Médicale, Centre National de la Recherche Scientifique, Institut de la Vision, Paris 75012, France; Department of Integrative Neurophysiology, VU University, De Boelelaan 1085, 1081 HV Amsterdam, the Netherlands; Department of Neurosurgery, Academisch Medisch Centrum, Postbus 22660, 1100 DD Amsterdam, the Netherlands
- Marcello Massimini
- Department of Biomedical and Clinical Sciences "L. Sacco", Università degli Studi di Milano, Milan 20157, Italy; Istituto di Ricovero e Cura a Carattere Scientifico, Fondazione Don Carlo Gnocchi, Milan 20122, Italy; Azrieli Program in Brain, Mind and Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, ON M5G 1M1, Canada
- Matthew E Larkum
- Institute of Biology, Humboldt University Berlin, Berlin, Germany; Neurocure Center for Excellence, Charité Universitätsmedizin Berlin, Berlin, Germany
- Cyriel M A Pennartz
- Swammerdam Institute for Life Sciences, Center for Neuroscience, Faculty of Science, University of Amsterdam, Sciencepark 904, Amsterdam 1098 XH, the Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, Amsterdam, the Netherlands
2. Deperrois N, Petrovici MA, Senn W, Jordan J. Learning beyond sensations: How dreams organize neuronal representations. Neurosci Biobehav Rev 2024; 157:105508. PMID: 38097096. DOI: 10.1016/j.neubiorev.2023.105508.
Abstract
Semantic representations in higher sensory cortices form the basis for robust, yet flexible behavior. These representations are acquired over the course of development in an unsupervised fashion and continuously maintained over an organism's lifespan. Predictive processing theories propose that these representations emerge from predicting or reconstructing sensory inputs. However, brains are known to generate virtual experiences, such as during imagination and dreaming, that go beyond previously experienced inputs. Here, we suggest that virtual experiences may be just as relevant as actual sensory inputs in shaping cortical representations. In particular, we discuss two complementary learning principles that organize representations through the generation of virtual experiences. First, "adversarial dreaming" proposes that creative dreams support a cortical implementation of adversarial learning in which feedback and feedforward pathways engage in a productive game of trying to fool each other. Second, "contrastive dreaming" proposes that the invariance of neuronal representations to irrelevant factors of variation is acquired by trying to map similar virtual experiences together via a contrastive learning process. These principles are compatible with known cortical structure and dynamics and with the phenomenology of sleep, thus providing promising directions to explain cortical learning beyond the classical predictive processing paradigm.
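The "adversarial dreaming" principle can be illustrated with a toy generator-discriminator pair: a feedback (generative) pathway dreams samples from noise while a feedforward pathway tries to tell them apart from evoked activity. Everything below (1-D Gaussian data, a linear generator, a logistic discriminator, finite-difference updates) is an illustrative sketch, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def d_out(x, w, b):
    # "Feedforward" discriminator: should output 1 for evoked, 0 for dreamed activity
    return sigmoid(w * x + b)

def dream(z, a, c):
    # "Feedback" generator: maps noise z to a dreamed sample
    return a * z + c

# Evoked ("wake") samples cluster at +1; dreams start near -1
real = rng.normal(1.0, 0.1, 200)

def d_loss(w, b, real_x, fake_x):
    # Cross-entropy: discriminator wants D(real)=1, D(dream)=0
    eps = 1e-9
    return -(np.log(d_out(real_x, w, b) + eps).mean()
             + np.log(1.0 - d_out(fake_x, w, b) + eps).mean())

def g_loss(w, b, fake_x):
    # Generator wants its dreams to be judged "real"
    return -np.log(d_out(fake_x, w, b) + 1e-9).mean()

def grad(f, p, i, h=1e-5):
    # Finite-difference gradient keeps the sketch short; a real model
    # would use backpropagation or local plasticity rules
    q = p.copy(); q[i] += h
    return (f(q) - f(p)) / h

w, b, a, c = 0.0, 0.0, 1.0, -1.0
lr = 0.5
for step in range(200):
    z = rng.normal(0.0, 1.0, 200)
    fake = dream(z, a, c)
    # Wake-like step: discriminator learns to separate evoked from dreamed
    f_d = lambda p: d_loss(p[0], p[1], real, fake)
    p = np.array([w, b])
    w, b = p - lr * np.array([grad(f_d, p, 0), grad(f_d, p, 1)])
    # Dream-like step: generator moves its dreams toward fooling the discriminator
    f_g = lambda p: g_loss(w, b, dream(z, p[0], p[1]))
    p = np.array([a, c])
    a, c = p - lr * np.array([grad(f_g, p, 0), grad(f_g, p, 1)])

print(round(float(dream(np.zeros(1), a, c)[0]), 2))  # dreams have drifted toward the real cluster
```

After a few hundred alternating updates the dreamed samples drift from -1 toward the statistics of the evoked activity, which is the sense in which the two pathways "fool each other" productively.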
Affiliation(s)
- Walter Senn
- Department of Physiology, University of Bern, Bern, Switzerland
- Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland; Electrical Engineering, Yale University, New Haven, CT, United States
3. Yoshida K, Toyoizumi T. Computational role of sleep in memory reorganization. Curr Opin Neurobiol 2023; 83:102799. PMID: 37844426. DOI: 10.1016/j.conb.2023.102799.
Abstract
Sleep is considered to play an essential role in memory reorganization. Despite its importance, several characteristic features of sleep received little attention in classical theoretical models. Here, we review recent theoretical approaches investigating the roles of these features in learning, and discuss the possibility that non-rapid eye movement (NREM) sleep selectively consolidates memory while rapid eye movement (REM) sleep reorganizes the representations of memories. We first review the possibility that slow waves during NREM sleep contribute to memory selection through sequential firing patterns and the alternation of up and down states. Second, we discuss the role of dreaming during REM sleep in developing neuronal representations. We finally discuss how to develop these points further, emphasizing the connections to experimental neuroscience and machine learning.
Affiliation(s)
- Kensuke Yoshida
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan; Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
- Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan; Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
4. Benjamin AS, Kording KP. A role for cortical interneurons as adversarial discriminators. PLoS Comput Biol 2023; 19:e1011484. PMID: 37768890. PMCID: PMC10538760. DOI: 10.1371/journal.pcbi.1011484.
Abstract
The brain learns representations of sensory information from experience, but the algorithms by which it does so remain unknown. One popular theory formalizes representations as inferred factors in a generative model of sensory stimuli, meaning that learning must improve this generative model and inference procedure. This framework underlies many classic computational theories of sensory learning, such as Boltzmann machines, the Wake/Sleep algorithm, and a more recent proposal that the brain learns with an adversarial algorithm that compares waking and dreaming activity. However, in order for such theories to provide insights into the cellular mechanisms of sensory learning, they must first be linked to the cell types in the brain that mediate them. In this study, we examine whether a subtype of cortical interneurons might mediate sensory learning by serving as discriminators, a crucial component in an adversarial algorithm for representation learning. We describe how such interneurons would be characterized by a plasticity rule that switches from Hebbian plasticity during waking states to anti-Hebbian plasticity in dreaming states. Evaluating the computational advantages and disadvantages of this algorithm, we find that it excels at learning representations in networks with recurrent connections but scales poorly with network size. This limitation can be partially addressed if the network also oscillates between evoked activity and generative samples on faster timescales. Consequently, we propose that an adversarial algorithm with interneurons as discriminators is a plausible and testable strategy for sensory learning in biological systems.
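A minimal caricature of the proposed plasticity rule (Hebbian on evoked activity during wake, anti-Hebbian on generated activity during sleep) shows how an interneuron's synapses come to discriminate wake from dream statistics. The rate patterns, the weight-decay term, and the folding of postsynaptic drive into the learning rate are all simplifying assumptions, not the paper's simulations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy presynaptic rate patterns (illustrative): evoked ("wake") and
# generated ("dream") activity differ only in unit 1
wake_rates  = rng.normal([1.0, 2.0, 1.0], 0.1, size=(500, 3))
dream_rates = rng.normal([1.0, 0.5, 1.0], 0.1, size=(500, 3))

w = np.zeros(3)   # synapses onto the candidate discriminator interneuron
eta = 0.05
for x_wake, x_dream in zip(wake_rates, dream_rates):
    # Waking state: Hebbian potentiation driven by evoked input
    # Dreaming state: anti-Hebbian depression driven by generated input
    # (postsynaptic drive folded into eta; the -w decay keeps w bounded)
    w += eta * (x_wake - x_dream - w)

# The interneuron now responds more strongly to evoked than to generated
# patterns, i.e., it acts as a discriminator between the two regimes
resp_wake  = wake_rates @ w
resp_dream = dream_rates @ w
print(resp_wake.mean() > resp_dream.mean())  # True
```

The weights converge toward the difference of the mean wake and dream rates, so the unit's output carries exactly the "real vs. generated" signal a discriminator needs.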
Affiliation(s)
- Ari S. Benjamin
- Department of Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Konrad P. Kording
- Department of Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
5. Bredenberg C, Savin C. Desiderata for normative models of synaptic plasticity. arXiv 2023; arXiv:2308.04988v1 [preprint]. PMID: 37608931. PMCID: PMC10441445.
Abstract
Normative models of synaptic plasticity use a combination of mathematics and computational simulations to arrive at predictions of behavioral and network-level adaptive phenomena. In recent years, there has been an explosion of theoretical work on these models, but experimental confirmation is relatively limited. In this review, we organize work on normative plasticity models in terms of a set of desiderata which, when satisfied, are designed to guarantee that a model has a clear link between plasticity and adaptive behavior, consistency with known biological evidence about neural plasticity, and specific testable predictions. We then discuss how new models have begun to improve on these criteria and suggest avenues for further development. As prototypes, we provide detailed analyses of two specific models: REINFORCE and the Wake-Sleep algorithm. We provide a conceptual guide to help develop neural learning theories that are precise, powerful, and experimentally testable.
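Of the two prototypes analyzed, Wake-Sleep can be sketched on a minimal one-latent-unit Helmholtz machine: the wake phase trains the generative weights on recognized latents, and the sleep phase trains the recognition weights on dreamed samples. The network size, rates, and Bernoulli toy data below are illustrative assumptions, not the review's formulation:

```python
import numpy as np

rng = np.random.default_rng(2)
sig = lambda u: 1.0 / (1.0 + np.exp(-u))

# Minimal Helmholtz machine: one binary latent h, one binary visible v
# Generative model:  p(h) = sig(bg),  p(v|h) = sig(wg*h + cg)
# Recognition model: q(h|v) = sig(wr*v + cr)
bg, wg, cg = 0.0, 0.0, 0.0
wr, cr = 0.0, 0.0
lr = 0.05

data = (rng.random(3000) < 0.8).astype(float)  # toy data: v=1 with prob 0.8

for v in data:
    # Wake phase: recognize h from a real v, then train the GENERATIVE weights
    h = float(rng.random() < sig(wr * v + cr))
    bg += lr * (h - sig(bg))              # fit p(h) to recognized latents
    d = v - sig(wg * h + cg)              # delta rule fits p(v|h)
    wg += lr * d * h
    cg += lr * d

    # Sleep phase: dream (h, v) from the model, train the RECOGNITION weights
    hd = float(rng.random() < sig(bg))
    vd = float(rng.random() < sig(wg * hd + cg))
    e = hd - sig(wr * vd + cr)
    wr += lr * e * vd
    cr += lr * e

# The trained generative model should reproduce the data marginal p(v=1)
p_v1 = sig(bg) * sig(wg + cg) + (1 - sig(bg)) * sig(cg)
print(round(p_v1, 2))
```

Both phases use only locally available pre/post quantities, which is one reason Wake-Sleep recurs as a biologically plausible prototype in this literature.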
Affiliation(s)
- Colin Bredenberg
- Center for Neural Science, New York University, New York, NY 10003, USA
- Mila-Quebec AI Institute, 6666 Rue Saint-Urbain, Montréal, QC H2S 3H1
- Cristina Savin
- Center for Neural Science, New York University, New York, NY 10003, USA
- Center for Data Science, New York University, New York, NY 10011, USA
6. Huber LS, Geirhos R, Wichmann FA. The developmental trajectory of object recognition robustness: Children are like small adults but unlike big deep neural networks. J Vis 2023; 23:4. PMID: 37410494. PMCID: PMC10337805. DOI: 10.1167/jov.23.7.4.
Abstract
In laboratory object recognition tasks based on undistorted photographs, both adult humans and deep neural networks (DNNs) perform close to ceiling. Unlike adults, whose object recognition performance is robust against a wide range of image distortions, DNNs trained on standard ImageNet (1.3M images) perform poorly on distorted images. However, the last 2 years have seen impressive gains in DNN distortion robustness, predominantly achieved through ever-increasing large-scale datasets, orders of magnitude larger than ImageNet. Although this simple brute-force approach is very effective in achieving human-level robustness in DNNs, it raises the question of whether human robustness, too, is simply due to extensive experience with (distorted) visual input during childhood and beyond. Here we investigate this question by comparing the core object recognition performance of 146 children (aged 4-15 years) against adults and against DNNs. We find, first, that even 4- to 6-year-olds show remarkable robustness to image distortions and outperform DNNs trained on ImageNet. Second, we estimated the number of images children had been exposed to during their lifetime and found that, compared with various DNNs, children's high robustness requires relatively little data. Third, when recognizing objects, children, like adults but unlike DNNs, rely heavily on shape rather than texture cues. Together our results suggest that remarkable robustness to distortions emerges early in the developmental trajectory of human object recognition and is unlikely to be the result of a mere accumulation of experience with distorted visual input. Even though current DNNs match human performance regarding robustness, they seem to rely on different and more data-hungry strategies to do so.
Affiliation(s)
- Lukas S Huber
- Department of Psychology, University of Bern, Bern, Switzerland
- Neural Information Processing Group, University of Tübingen, Tübingen, Germany
- https://orcid.org/0000-0002-7755-6926
- Robert Geirhos
- Neural Information Processing Group, University of Tübingen, Tübingen, Germany
- https://orcid.org/0000-0001-7698-3187
- Felix A Wichmann
- Neural Information Processing Group, University of Tübingen, Tübingen, Germany
- https://orcid.org/0000-0002-2592-634X
7. Northoff G, Scalabrini A, Fogel S. Topographic-dynamic reorganisation model of dreams (TRoD) - A spatiotemporal approach. Neurosci Biobehav Rev 2023; 148:105117. PMID: 36870584. DOI: 10.1016/j.neubiorev.2023.105117.
Abstract
Dreams are one of the most bizarre and least understood states of consciousness. To bridge the gap between the brain and the phenomenology of (un)conscious experience, we propose the Topographic-dynamic Reorganization model of Dreams (TRoD). Topographically, dreams are characterized by a shift towards increased activity and connectivity in the default-mode network (DMN), while activity and connectivity in the central executive network, including the dorsolateral prefrontal cortex, are reduced (except in lucid dreaming). This topographic reorganization is accompanied by dynamic changes: a shift towards slower frequencies and longer timescales. This places dreams dynamically in an intermediate position between the awake state and NREM 2/slow-wave sleep. TRoD proposes that the shift towards the DMN and slower frequencies leads to an abnormal spatiotemporal framing of input processing, including both internally and externally generated inputs (from body and environment). In dreams, a shift away from temporal segregation towards temporal integration of inputs results in the often bizarre and highly self-centric mental contents as well as hallucinatory-like states. We conclude that topography and temporal dynamics are core features of TRoD, which may provide the connection between neural and mental activity, i.e., brain and experience during dreams, as their "common currency".
Affiliation(s)
- Georg Northoff
- Faculty of Medicine, Centre for Neural Dynamics, The Royal's Institute of Mental Health Research, Brain and Mind Research Institute, University of Ottawa, Ottawa, ON, Canada; Mental Health Centre, Zhejiang University School of Medicine, Hangzhou, China; Centre for Cognition and Brain Disorders, Hangzhou Normal University, Hangzhou, China.
- Andrea Scalabrini
- Department of Human and Social Sciences, University of Bergamo, Bergamo, Italy
- Stuart Fogel
- Sleep and Neuroscience, The Royal's Institute of Mental Health Research, Brain and Mind Research Institute and Faculty of Social Sciences, University of Ottawa, Ottawa, ON, Canada
8. Kurth-Nelson Z, Behrens T, Wayne G, Miller K, Luettgau L, Dolan R, Liu Y, Schwartenbeck P. Replay and compositional computation. Neuron 2023; 111:454-469. PMID: 36640765. DOI: 10.1016/j.neuron.2022.12.028.
Abstract
Replay in the brain has been viewed as rehearsal or, more recently, as sampling from a transition model. Here, we propose a new hypothesis: that replay is able to implement a form of compositional computation where entities are assembled into relationally bound structures to derive qualitatively new knowledge. This idea builds on recent advances in neuroscience, which indicate that the hippocampus flexibly binds objects to generalizable roles and that replay strings these role-bound objects into compound statements. We suggest experiments to test our hypothesis, and we end by noting the implications for AI systems which lack the human ability to radically generalize past experience to solve new problems.
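One classical formalism for the role-binding step this hypothesis relies on is the tensor-product representation, in which a filler bound to a role can later be retrieved by querying with the role vector; replay could then string such role-bound objects into a compound statement. The sketch below uses random role/filler vectors purely for illustration and is not the mechanism proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Random role and filler vectors (dimension and coding are illustrative)
dim = 64
roles   = {r: rng.standard_normal(dim) for r in ["agent", "action", "patient"]}
fillers = {f: rng.standard_normal(dim) for f in ["fox", "chases", "rabbit"]}

# Bind each filler to its role via an outer product and superpose the
# bindings into one "compound statement"
statement = sum(np.outer(roles[r], fillers[f])
                for r, f in [("agent", "fox"),
                             ("action", "chases"),
                             ("patient", "rabbit")])

def unbind(statement, role):
    # Querying with a role vector retrieves (approximately) its filler
    return roles[role] @ statement

def decode(vec):
    # Clean up the noisy retrieval by nearest filler (cosine similarity)
    return max(fillers, key=lambda f: np.dot(vec, fillers[f]) /
               (np.linalg.norm(vec) * np.linalg.norm(fillers[f])))

print(decode(unbind(statement, "agent")))  # fox
```

Because the roles are generalizable slots, the same machinery re-binds new fillers ("hound chases fox") without relearning, which is the kind of radical generalization the abstract contrasts with current AI systems.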
Affiliation(s)
- Zeb Kurth-Nelson
- DeepMind, London, UK; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, London, UK.
- Timothy Behrens
- Wellcome Centre for Human Neuroimaging, University College London, London, UK; Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK
- Kevin Miller
- DeepMind, London, UK; Institute of Ophthalmology, University College London, London, UK
- Lennart Luettgau
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, London, UK
- Ray Dolan
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, London, UK; Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Yunzhe Liu
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China; Chinese Institute for Brain Research, Beijing, China
- Philipp Schwartenbeck
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany; University of Tübingen, Tübingen, Germany
9. Yoshida K, Toyoizumi T. Information maximization explains state-dependent synaptic plasticity and memory reorganization during non-rapid eye movement sleep. PNAS Nexus 2022; 2:pgac286. PMID: 36712943. PMCID: PMC9833047. DOI: 10.1093/pnasnexus/pgac286.
Abstract
Slow waves during non-rapid eye movement (NREM) sleep reflect the alternating up and down states of cortical neurons; global and local slow waves promote memory consolidation and forgetting, respectively. Furthermore, distinct spike-timing-dependent plasticity (STDP) operates in these up and down states. The contribution of these different plasticity rules to neural information coding and memory reorganization remains unknown. Here, we show that optimal synaptic plasticity for information maximization in a cortical neuron model provides a unified explanation for these phenomena. The model indicates that the optimal synaptic plasticity is biased toward depression as the baseline firing rate increases. This property explains the distinct STDP observed in the up and down states. Furthermore, it explains how global and local slow waves predominantly potentiate and depress synapses, respectively, if the background firing rate of excitatory neurons declines with the spatial scale of the waves, as the model predicts. The model provides a unifying account of the role of NREM sleep, bridging neural information coding, synaptic plasticity, and memory reorganization.
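The qualitative prediction, a plasticity bias toward depression at high baseline rates, can be caricatured with a rate-dependent STDP window. The functional form and constants below are invented for illustration and are not the derived information-maximizing rule:

```python
def net_drift(baseline_rate, a_ltp=1.0, k=0.4, tau=20.0):
    """Expected weight drift for uncorrelated Poisson pre/post spike trains
    under a toy STDP window whose depression amplitude grows with baseline
    rate (a caricature of the predicted bias). Rates in Hz, tau in ms."""
    a_ltd = k * baseline_rate          # depression bias scales with rate
    # For uncorrelated Poisson trains, pre/post pairings are symmetric in
    # time, so the drift is proportional to (LTP area - LTD area) of the
    # window times the pairing rate (~ rate^2 * tau)
    return baseline_rate**2 * tau * (a_ltp - a_ltd)

print(net_drift(1.0) > 0)    # low baseline ("down state"-like): net potentiation
print(net_drift(10.0) < 0)   # high baseline ("up state"-like): net depression
```

The sign flip with baseline rate is the point: the same synapse drifts upward in low-rate states and downward in high-rate states, matching the up/down-state asymmetry described in the abstract.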
10. Chrysanthidis N, Fiebig F, Lansner A, Herman P. Traces of semantization - from episodic to semantic memory in a spiking cortical network model. eNeuro 2022; 9:ENEURO.0062-22.2022. PMID: 35803714. PMCID: PMC9347313. DOI: 10.1523/eneuro.0062-22.2022.
Abstract
Episodic memory is a recollection of past personal experiences associated with particular times and places. This kind of memory is commonly subject to loss of contextual information or "semantization", which gradually decouples the encoded memory items from their associated contexts while transforming them into semantic or gist-like representations. Novel extensions to the classical Remember/Know behavioral paradigm attribute the loss of episodicity to multiple exposures of an item in different contexts. Despite recent advancements explaining semantization at a behavioral level, the underlying neural mechanisms remain poorly understood. In this study, we suggest and evaluate a novel hypothesis proposing that Bayesian-Hebbian synaptic plasticity mechanisms might cause semantization of episodic memory. We implement a cortical spiking neural network model with a Bayesian-Hebbian learning rule called the Bayesian Confidence Propagation Neural Network (BCPNN), which captures the semantization phenomenon and offers a mechanistic explanation for it. Encoding items across multiple contexts leads to item-context decoupling akin to semantization. We compare BCPNN plasticity with the more commonly used spike-timing-dependent plasticity (STDP) learning rule in the same episodic memory task. Unlike BCPNN, STDP does not explain the decontextualization process. We further examine how selective plasticity modulation of isolated salient events may enhance preferential retention and resistance to semantization. Our model reproduces important features of episodicity on behavioral timescales under various biological constraints while also offering a novel neural and synaptic explanation for semantization, thereby casting new light on the interplay between episodic and semantic memory processes.
Significance Statement: Remembering single episodes is a fundamental attribute of cognition. Difficulty recollecting contextual information is a key sign of episodic memory loss or semantization. Behavioral studies demonstrate that semantization of episodic memory can occur rapidly, yet the neural mechanisms underlying this effect are insufficiently investigated. In line with recent behavioral findings, we show that multiple stimulus exposures in different contexts may advance item-context decoupling. We suggest a Bayesian-Hebbian synaptic plasticity hypothesis of memory semantization and further show that a transient modulation of plasticity during salient events may disrupt the decontextualization process by strengthening memory traces and thus enhancing preferential retention. The proposed cortical network-of-networks model thus bridges micro- and mesoscale synaptic effects with network dynamics and behavior.
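A BCPNN-style weight between two units is the log-odds of their co-activation relative to independence. The simplified batch estimate below (the paper uses incremental, exponentially decaying probability traces; the counts here are invented for illustration) shows why spreading an item over many contexts drives its item-context weights toward zero, i.e., semantization:

```python
import numpy as np

def bcpnn_weight(coactivations, item_acts, ctx_acts, n, eps=1e-3):
    """Bayesian (BCPNN-style) weight between an item unit and a context
    unit from activation counts over n encoding events; a simplified
    batch version of the incremental trace-based rule."""
    p_ij = (coactivations + eps) / (n + eps)
    p_i = (item_acts + eps) / (n + eps)
    p_j = (ctx_acts + eps) / (n + eps)
    return np.log(p_ij / (p_i * p_j))   # log-odds vs. independence

n = 100
# Episodic case: the item always occurs with one and the same context
w_episodic = bcpnn_weight(coactivations=20, item_acts=20, ctx_acts=20, n=n)
# Semantized case: the same 20 item occurrences are spread over 5 contexts,
# each of which also recurs with other items (20 activations each), so any
# single context co-occurs with this item only 4 times
w_semantic = bcpnn_weight(coactivations=4, item_acts=20, ctx_acts=20, n=n)

print(w_episodic > w_semantic)  # True: item-context coupling weakens
```

When co-activation matches what independence predicts (p_ij ≈ p_i·p_j), the weight falls to roughly zero, which is the item-context decoupling the abstract describes; STDP, which tracks spike-pairing counts rather than these probability ratios, has no comparable normalization.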
Affiliation(s)
- Nikolaos Chrysanthidis
- Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 10044 Stockholm, Sweden
- Florian Fiebig
- Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 10044 Stockholm, Sweden
- Anders Lansner
- Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 10044 Stockholm, Sweden
- Department of Mathematics, Stockholm University, 10691 Stockholm, Sweden
- Pawel Herman
- Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 10044 Stockholm, Sweden
- Digital Futures, Stockholm, Sweden
- Swedish e-Science Research Centre, Stockholm, Sweden