1
Kim JZ, Larsen B, Parkes L. Shaping dynamical neural computations using spatiotemporal constraints. Biochem Biophys Res Commun 2024; 728:150302. PMID: 38968771. DOI: 10.1016/j.bbrc.2024.150302.
Abstract
Dynamics play a critical role in computation. The principled evolution of states over time enables both biological and artificial networks to represent and integrate information to make decisions. In the past few decades, significant multidisciplinary progress has been made in bridging the gap between how we understand biological versus artificial computation, including how insights gained from one can translate to the other. Research has revealed that neurobiology is a key determinant of brain network architecture, which gives rise to spatiotemporally constrained patterns of activity that underlie computation. Here, we discuss how neural systems use dynamics for computation, and claim that the biological constraints that shape brain networks may be leveraged to improve the implementation of artificial neural networks. To formalize this discussion, we consider a natural artificial analog of the brain that has been used extensively to model neural computation: the recurrent neural network (RNN). In both the brain and the RNN, we emphasize the common computational substrate atop which dynamics occur, the connectivity between neurons, and we explore the unique computational advantages offered by biophysical constraints such as resource efficiency, spatial embedding, and neurodevelopment.
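The RNN framing in this abstract can be made concrete with a generic textbook sketch (an illustration only, not a model from the article; the network size, weight scale, and tanh nonlinearity are arbitrary choices):

```python
import numpy as np

# Minimal discrete-time RNN: all computation unfolds as dynamics atop a
# fixed connectivity matrix W, the shared substrate the abstract emphasizes.
rng = np.random.default_rng(0)
n = 50
W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))  # recurrent weights

def step(x, u):
    """One state update: recurrent drive W @ x plus external input u."""
    return np.tanh(W @ x + u)

x = 0.1 * rng.normal(size=n)
for _ in range(100):              # run the autonomous dynamics
    x = step(x, np.zeros(n))
```

Training such a network amounts to shaping W so that the resulting state trajectories implement the desired computation, which is where biophysical constraints on connectivity would enter.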
Affiliation(s)
- Jason Z Kim
- Department of Physics, Cornell University, Ithaca, NY, 14853, USA.
- Bart Larsen
- Department of Pediatrics, Masonic Institute for the Developing Brain, University of Minnesota, USA
- Linden Parkes
- Department of Psychiatry, Rutgers University, Piscataway, NJ, 08854, USA.
2
Beer RD, Barwich AS, Severino GJ. Milking a spherical cow: Toy models in neuroscience. Eur J Neurosci 2024. PMID: 39257366. DOI: 10.1111/ejn.16529.
Abstract
There are many different kinds of models, and they play many different roles in the scientific endeavour. Neuroscience, and biology more generally, has understandably tended to emphasise empirical models that are grounded in data and make specific, experimentally testable predictions. Meanwhile, strongly idealised or 'toy' models have played a central role in the theoretical development of other sciences such as physics. In this paper, we examine the nature of toy models and their prospects in neuroscience.
Affiliation(s)
- Randall D Beer
- Cognitive Science Program, Indiana University, Bloomington, Indiana, USA
- Neuroscience Program, Indiana University, Bloomington, Indiana, USA
- Department of Informatics, Indiana University, Bloomington, Indiana, USA
- Ann-Sophie Barwich
- Cognitive Science Program, Indiana University, Bloomington, Indiana, USA
- Neuroscience Program, Indiana University, Bloomington, Indiana, USA
- Department of History and Philosophy of Science and Medicine, Indiana University, Bloomington, Indiana, USA
- Gabriel J Severino
- Cognitive Science Program, Indiana University, Bloomington, Indiana, USA
3
Hammer J, Kajsova M, Kalina A, Krysl D, Fabera P, Kudr M, Jezdik P, Janca R, Krsek P, Marusic P. Antagonistic behavior of brain networks mediated by low-frequency oscillations: electrophysiological dynamics during internal-external attention switching. Commun Biol 2024; 7:1105. PMID: 39251869. PMCID: PMC11385230. DOI: 10.1038/s42003-024-06732-2.
Abstract
Antagonistic activity of brain networks likely plays a fundamental role in how the brain optimizes its performance by efficient allocation of computational resources. A prominent example involves externally/internally oriented attention tasks, implicating two anticorrelated, intrinsic brain networks: the default mode network (DMN) and the dorsal attention network (DAN). To elucidate electrophysiological underpinnings and causal interplay during attention switching, we recorded intracranial EEG (iEEG) from 25 epilepsy patients with electrode contacts localized in the DMN and DAN. We show antagonistic network dynamics of activation-related changes in high-frequency (> 50 Hz) and low-frequency (< 30 Hz) power. The temporal profile of information flow between the networks estimated by functional connectivity suggests that the activated network inhibits the other one, gating its activity by increasing the amplitude of the low-frequency oscillations. Insights about inter-network communication may have profound implications for various brain disorders in which these dynamics are compromised.
Affiliation(s)
- Jiri Hammer
- Department of Neurology, Second Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czech Republic
- Michaela Kajsova
- Department of Neurology, Second Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czech Republic
- Adam Kalina
- Department of Neurology, Second Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czech Republic
- David Krysl
- Department of Neurology, Second Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czech Republic
- Petr Fabera
- Department of Neurology, Second Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czech Republic
- Martin Kudr
- Department of Pediatric Neurology, Second Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czech Republic
- Petr Jezdik
- Department of Neurology, Second Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czech Republic
- Department of Circuit Theory, Faculty of Electrical Engineering, Czech Technical University in Prague, Prague, Czech Republic
- Radek Janca
- Department of Neurology, Second Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czech Republic
- Department of Circuit Theory, Faculty of Electrical Engineering, Czech Technical University in Prague, Prague, Czech Republic
- Pavel Krsek
- Department of Pediatric Neurology, Second Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czech Republic
- Petr Marusic
- Department of Neurology, Second Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czech Republic
4
Branchi I. Uncovering the determinants of brain functioning, behavior and their interplay in the light of context. Eur J Neurosci 2024; 60:4687-4706. PMID: 38558227. DOI: 10.1111/ejn.16331.
Abstract
Notwithstanding the huge progress in molecular and cellular neuroscience, our ability to understand the brain and develop effective treatments promoting mental health is still limited. This can be partially ascribed to the reductionist, deterministic and mechanistic approaches in neuroscience that struggle with the complexity of the central nervous system. Here, I introduce the Context theory of constrained systems proposing a novel role of contextual factors and genetic, molecular and neural substrates in determining brain functioning and behavior. This theory entails key conceptual implications. First, context is the main driver of behavior and mental states. Second, substrates, from genes to brain areas, have no direct causal link to complex behavioral responses as they can be combined in multiple ways to produce the same response and different responses can impinge on the same substrates. Third, context and biological substrates play distinct roles in determining behavior: context drives behavior, substrates constrain the behavioral repertoire that can be implemented. Fourth, since behavior is the interface between the central nervous system and the environment, it is a privileged level of control and orchestration of brain functioning. Such implications are illustrated through the Kitchen metaphor of the brain. This theoretical framework calls for the revision of key concepts in neuroscience and psychiatry, including causality, specificity and individuality. Moreover, at the clinical level, it proposes treatments inducing behavioral changes through contextual interventions as having the highest impact to reorganize the complexity of the human mind and to achieve a long-lasting improvement in mental health.
Affiliation(s)
- Igor Branchi
- Center for Behavioral Sciences and Mental Health, Istituto Superiore di Sanità, Rome, Italy
5
Piantadosi ST, Gallistel CR. Formalising the role of behaviour in neuroscience. Eur J Neurosci 2024; 60:4756-4770. PMID: 38858853. DOI: 10.1111/ejn.16372.
Abstract
We develop a mathematical approach to formally proving that certain neural computations and representations exist based on patterns observed in an organism's behaviour. To illustrate, we provide a simple set of conditions under which an ant's ability to determine how far it is from its nest would logically imply neural structures isomorphic to the natural numbers ℕ. We generalise these results to arbitrary behaviours and representations and show what mathematical characterisation of neural computation and representation is simplest while being maximally predictive of behaviour. We develop this framework in detail using a path integration example, where an organism's ability to search for its nest in the correct location implies representational structures isomorphic to two-dimensional coordinates under addition. We also study a system for processing aⁿbⁿ strings common in comparative work. Our approach provides an objective way to determine what theory of a physical system is best, addressing a fundamental challenge in neuroscientific inference. These results motivate considering which neurobiological structures have the requisite formal structure and are otherwise physically plausible given relevant physical considerations such as generalisability, information density, thermodynamic stability and energetic cost.
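The path-integration claim can be illustrated with a toy sketch (my own illustration, not the authors' formalism): an agent that stores only the running sum of its displacement vectors holds a state isomorphic to two-dimensional coordinates under addition, yet that state suffices to point back to the nest.

```python
import math

def integrate_path(steps):
    """Running vector sum of displacements: the only state the agent keeps,
    and it behaves exactly like 2D coordinates under addition."""
    x = y = 0.0
    for dx, dy in steps:
        x += dx
        y += dy
    return x, y

outbound = [(3.0, 0.0), (0.0, 4.0)]   # walk east, then north
hx, hy = integrate_path(outbound)
home_vector = (-hx, -hy)              # negate the sum to head home
nest_distance = math.hypot(hx, hy)    # 5.0 for this 3-4-5 outbound path
```

Searching at `home_vector` from the current position is the behavioural signature that, in the authors' framework, implies the representational structure.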
Affiliation(s)
- Steven T Piantadosi
- Department of Psychology, Department of Neuroscience, UC Berkeley, Berkeley, California, USA
6
Stoll A, Maier A, Krauss P, Gerum R, Schilling A. Coincidence detection and integration behavior in spiking neural networks. Cogn Neurodyn 2024; 18:1753-1765. PMID: 39104689. PMCID: PMC11297875. DOI: 10.1007/s11571-023-10038-0.
Abstract
Recently, interest in spiking neural networks (SNNs) has increased remarkably, as some key capabilities of biological neural networks remain out of reach for artificial systems. In particular, the energy efficiency and the ability to dynamically react and adapt to input stimuli observed in biological neurons are still difficult to achieve. One neuron model commonly used in SNNs is the leaky integrate-and-fire (LIF) neuron. LIF neurons already show interesting dynamics and can be run in two operation modes: as coincidence detectors for low membrane decay times and as integrators for high ones. However, the emergence of these modes in SNNs, and their consequences for network performance and information-processing ability, is still elusive. In this study, we examine the effect of different decay times in SNNs trained with a surrogate-gradient-based approach. We propose two measures that determine the operation mode of LIF neurons: the number of contributing input spikes and the effective integration interval. We show that coincidence detection is characterized by a low number of input spikes and short integration intervals, whereas integration behavior involves many input spikes over long integration intervals. We find that the two measures correlate linearly via a correlation factor that depends on the decay time; as a function of the decay time, this factor shows power-law behavior, which could be an intrinsic property of LIF networks. We argue that our work could be a starting point for further exploring the operation modes in SNNs to boost efficiency and biological plausibility.
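The two operation modes can be reproduced with a bare-bones LIF simulation (a hand-rolled sketch; the threshold, synaptic weight, and decay values are arbitrary illustrative choices, not the paper's trained networks):

```python
import numpy as np

def lif_response(spike_times, tau, dt=0.1, v_th=1.0, w=0.6, t_max=50.0):
    """Count output spikes of a single LIF neuron driven by input spikes."""
    v, n_out = 0.0, 0
    for t in np.arange(0.0, t_max, dt):
        v *= np.exp(-dt / tau)                            # membrane leak
        if any(abs(t - ts) < dt / 2 for ts in spike_times):
            v += w                                        # integrate input
        if v >= v_th:
            n_out += 1
            v = 0.0                                       # reset after firing
    return n_out

coincident = [10.0, 10.1]   # two near-simultaneous input spikes
spread = [10.0, 30.0]       # same number of inputs, far apart in time
```

With a short decay time the neuron acts as a coincidence detector (only the near-simultaneous pair drives it past threshold), while with a long decay time it acts as an integrator (even the spread-out inputs accumulate to threshold), matching the two modes described above.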
Affiliation(s)
- Andreas Stoll
- Pattern Recognition Lab, University Erlangen-Nürnberg, Erlangen, Germany
- Andreas Maier
- Pattern Recognition Lab, University Erlangen-Nürnberg, Erlangen, Germany
- Patrick Krauss
- Pattern Recognition Lab, University Erlangen-Nürnberg, Erlangen, Germany
- Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany
- Richard Gerum
- Department of Physics and Astronomy, York University, Toronto, Canada
- Achim Schilling
- Pattern Recognition Lab, University Erlangen-Nürnberg, Erlangen, Germany
- Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany
7
Zhang A, Wengler K, Zhu X, Horga G, Goldberg TE, Lee S. Altered Hierarchical Gradients of Intrinsic Neural Timescales in Mild Cognitive Impairment and Alzheimer's Disease. J Neurosci 2024; 44:e2024232024. PMID: 38658167. PMCID: PMC11209657. DOI: 10.1523/jneurosci.2024-23.2024.
Abstract
Alzheimer's disease (AD) is a devastating neurodegenerative disease that affects millions of seniors in the United States. Resting-state functional magnetic resonance imaging (rs-fMRI) is widely used to study neurophysiology in AD and its prodromal condition, mild cognitive impairment (MCI). The intrinsic neural timescale (INT), which can be estimated through the magnitude of the autocorrelation of neural signals from rs-fMRI, is thought to quantify the duration for which neural information is stored in a local circuit. The heterogeneity of these timescales forms a basis of the brain's functional hierarchy and captures an aspect of circuit dynamics relevant to excitation/inhibition balance, which is broadly relevant for cognitive functions. Given this, we applied rs-fMRI to test whether distinct changes of INT at different levels of the hierarchy are present in people with MCI, those progressing to AD (termed Converters), and AD patients of both sexes. A linear mixed-effects model was implemented to detect altered hierarchical gradients across populations, followed by pairwise comparisons to identify regional differences. High similarity between AD and Converters was observed. Specifically, the inferior temporal, caudate, and pallidum areas exhibit significant alterations in both AD and Converters. Distinct INT-related pathological changes in MCI and AD were found. In AD/Converters, neural information is stored for a longer time in lower hierarchical areas, while higher levels of the hierarchy seem to be preferentially impaired in MCI, leading to a less pronounced hierarchical gradient. These results suggest that the INT holds great potential as an additional measure for AD prediction, and even as a stable biomarker for clinical diagnosis.
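Estimating the INT from autocorrelation can be sketched as follows. This is one common recipe (summing positive autocorrelation lags up to the first zero-crossing, scaled by the sampling interval); the article's exact estimator may differ:

```python
import numpy as np

def intrinsic_timescale(ts, tr=1.0):
    """INT estimate: sum of autocorrelation values over lags up to the
    first negative lag, scaled by the sampling interval (TR)."""
    ts = ts - ts.mean()
    acf = np.correlate(ts, ts, mode="full")[len(ts) - 1:]
    acf = acf / acf[0]                                  # lag 0 == 1
    stop = np.argmax(acf < 0) if np.any(acf < 0) else len(acf)
    return acf[1:stop].sum() * tr                       # exclude lag 0

rng = np.random.default_rng(1)
white = rng.normal(size=2000)                           # no memory
slow = np.convolve(white, np.ones(10) / 10, mode="valid")  # smoothed signal
```

A temporally smoothed signal yields a longer INT than white noise, which is the sense in which INT captures how long a region "stores" information.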
Affiliation(s)
- Aiying Zhang
- New York State Psychiatric Institute, New York, New York 10032
- Department of Psychiatry, Columbia University, New York, New York 10032
- Kenneth Wengler
- New York State Psychiatric Institute, New York, New York 10032
- Department of Psychiatry, Columbia University, New York, New York 10032
- Xi Zhu
- New York State Psychiatric Institute, New York, New York 10032
- Department of Psychiatry, Columbia University, New York, New York 10032
- Guillermo Horga
- New York State Psychiatric Institute, New York, New York 10032
- Department of Psychiatry, Columbia University, New York, New York 10032
- Terry E Goldberg
- New York State Psychiatric Institute, New York, New York 10032
- Department of Psychiatry, Columbia University, New York, New York 10032
- Department of Anesthesiology, Columbia University Irving Medical Center, New York, New York 10032
- Taub Institute for Research on Alzheimer's Disease and the Aging Brain, Columbia University, New York, New York 10032
- Seonjoo Lee
- New York State Psychiatric Institute, New York, New York 10032
- Department of Psychiatry, Columbia University, New York, New York 10032
- Department of Biostatistics, Mailman School of Public Health, Columbia University, New York, New York 10032
8
Cisek P, Green AM. Toward a neuroscience of natural behavior. Curr Opin Neurobiol 2024; 86:102859. PMID: 38583263. DOI: 10.1016/j.conb.2024.102859.
Abstract
One of the most exciting new developments in systems neuroscience is the progress being made toward neurophysiological experiments that move beyond simplified laboratory settings and address the richness of natural behavior. This is enabled by technological advances such as wireless recording in freely moving animals, automated quantification of behavior, and new methods for analyzing large data sets. Beyond new empirical methods and data, however, there is also a need for new theories and concepts to interpret that data. Such theories need to address the particular challenges of natural behavior, which often differ significantly from the scenarios studied in traditional laboratory settings. Here, we discuss some strategies for developing such novel theories and concepts and some example hypotheses being proposed.
Affiliation(s)
- Paul Cisek
- Department of Neuroscience, University of Montréal, Montréal, Québec, Canada.
- Andrea M Green
- Department of Neuroscience, University of Montréal, Montréal, Québec, Canada
9
Coward LA. Hierarchies of description enable understanding of cognitive phenomena in terms of neuron activity. Cogn Process 2024; 25:333-347. PMID: 38483738. PMCID: PMC11106207. DOI: 10.1007/s10339-024-01181-5.
Abstract
One objective of neuroscience is to understand a wide range of specific cognitive processes in terms of neuron activity. The huge amount of observational data about the brain makes achieving this objective challenging. Different models on different levels of detail provide some insight, but the relationship between models on different levels is not clear. Complex computing systems with trillions of components like transistors are fully understood in the sense that system features can be precisely related to transistor activity. Such understanding could not involve a designer simultaneously thinking about the ongoing activity of all the components active in the course of carrying out some system feature. Brain modeling approaches like dynamical systems are inadequate to support understanding of computing systems, because their use relies on approximations like treating all components as more or less identical. Understanding computing systems needs a much more sophisticated use of approximation, involving creation of hierarchies of description in which the higher levels are more approximate, with effective translation between different levels in the hierarchy made possible by using the same general types of information processes on every level. These types are instruction and data read/write. There are no direct resemblances between computers and brains, but natural selection pressures have resulted in brain resources being organized into modular hierarchies and in the existence of two general types of information processes called condition definition/detection and behavioral recommendation. As a result, it is possible to create hierarchies of description linking cognitive phenomena to neuron activity, analogous with but qualitatively different from the hierarchies of description used to understand computing systems. An intuitively satisfying understanding of cognitive processes in terms of more detailed brain activity is then possible.
Affiliation(s)
- L Andrew Coward
- College of Engineering, Computing and Cybernetics, Australian National University, Canberra, Australia.
10
Maizels RJ. A dynamical perspective: moving towards mechanism in single-cell transcriptomics. Philos Trans R Soc Lond B Biol Sci 2024; 379:20230049. PMID: 38432314. PMCID: PMC10909508. DOI: 10.1098/rstb.2023.0049.
Abstract
As the field of single-cell transcriptomics matures, research is shifting focus from phenomenological descriptions of cellular phenotypes to a mechanistic understanding of the gene regulation underneath. This perspective considers the value of capturing dynamical information at single-cell resolution for gaining mechanistic insight; reviews the available technologies for recording and inferring temporal information in single cells; and explores whether better dynamical resolution is sufficient to adequately capture the causal relationships driving complex biological systems. This article is part of a discussion meeting issue 'Causes and consequences of stochastic processes in development and disease'.
Affiliation(s)
- Rory J. Maizels
- The Francis Crick Institute, 1 Midland Road, London NW1 1AT, UK
- University College London, London WC1E 6BT, UK
11
Rush ER, Heckman C, Jayaram K, Humbert JS. Neural dynamics of robust legged robots. Front Robot AI 2024; 11:1324404. PMID: 38699630. PMCID: PMC11063321. DOI: 10.3389/frobt.2024.1324404.
Abstract
Legged robot control has improved in recent years with the rise of deep reinforcement learning; however, many of the underlying neural mechanisms remain difficult to interpret. Our aim is to leverage bio-inspired methods from computational neuroscience to better understand the neural activity of robust robot locomotion controllers. Similar to past work, we observe that terrain-based curriculum learning improves agent stability. We study the biomechanical responses and neural activity within our neural network controller by simultaneously pairing physical disturbances with targeted neural ablations. We identify an agile hip reflex that enables the robot to regain its balance and recover from lateral perturbations. Model gradients are employed to quantify the relative degree to which various sensory feedback channels drive this reflexive behavior. We also find that recurrent dynamics are implicated in robust behavior, and utilize sampling-based ablation methods to identify these key neurons. Our framework combines model-based and sampling-based methods for drawing causal relationships between neural network activity and robust embodied robot behavior.
Affiliation(s)
- Eugene R. Rush
- Department of Mechanical Engineering, University of Colorado Boulder, Boulder, CO, United States
- Christoffer Heckman
- Department of Computer Science, University of Colorado Boulder, Boulder, CO, United States
- Kaushik Jayaram
- Department of Mechanical Engineering, University of Colorado Boulder, Boulder, CO, United States
- J. Sean Humbert
- Department of Mechanical Engineering, University of Colorado Boulder, Boulder, CO, United States
12
Bredenberg C, Savin C, Kiani R. Recurrent Neural Circuits Overcome Partial Inactivation by Compensation and Re-learning. J Neurosci 2024; 44:e1635232024. PMID: 38413233. PMCID: PMC11026338. DOI: 10.1523/jneurosci.1635-23.2024.
Abstract
Technical advances in artificial manipulation of neural activity have precipitated a surge in studying the causal contribution of brain circuits to cognition and behavior. However, complexities of neural circuits challenge interpretation of experimental results, necessitating new theoretical frameworks for reasoning about causal effects. Here, we take a step in this direction, through the lens of recurrent neural networks trained to perform perceptual decisions. We show that understanding the dynamical system structure that underlies network solutions provides a precise account for the magnitude of behavioral effects due to perturbations. Our framework explains past empirical observations by clarifying the most sensitive features of behavior, and how complex circuits compensate and adapt to perturbations. In the process, we also identify strategies that can improve the interpretability of inactivation experiments.
Affiliation(s)
- Colin Bredenberg
- Center for Neural Science, New York University, New York, NY 10003
- Cristina Savin
- Center for Neural Science, New York University, New York, NY 10003
- Center for Data Science, New York University, New York, NY 10011
- Roozbeh Kiani
- Center for Neural Science, New York University, New York, NY 10003
- Department of Psychology, New York University, New York, NY 10003
13
Fitz H, Hagoort P, Petersson KM. Neurobiological Causal Models of Language Processing. Neurobiol Lang (Camb) 2024; 5:225-247. PMID: 38645618. PMCID: PMC11025648. DOI: 10.1162/nol_a_00133.
Abstract
The language faculty is physically realized in the neurobiological infrastructure of the human brain. Despite significant efforts, an integrated understanding of this system remains a formidable challenge. What is missing from most theoretical accounts is a specification of the neural mechanisms that implement language function. Computational models that have been put forward generally lack an explicit neurobiological foundation. We propose a neurobiologically informed causal modeling approach which offers a framework for how to bridge this gap. A neurobiological causal model is a mechanistic description of language processing that is grounded in, and constrained by, the characteristics of the neurobiological substrate. It intends to model the generators of language behavior at the level of implementational causality. We describe key features and neurobiological component parts from which causal models can be built and provide guidelines on how to implement them in model simulations. Then we outline how this approach can shed new light on the core computational machinery for language, the long-term storage of words in the mental lexicon and combinatorial processing in sentence comprehension. In contrast to cognitive theories of behavior, causal models are formulated in the "machine language" of neurobiology which is universal to human cognition. We argue that neurobiological causal modeling should be pursued in addition to existing approaches. Eventually, this approach will allow us to develop an explicit computational neurobiology of language.
Affiliation(s)
- Hartmut Fitz
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Peter Hagoort
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Karl Magnus Petersson
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Faculty of Medicine and Biomedical Sciences, University of Algarve, Faro, Portugal
14
Sievers B, Thornton MA. Deep social neuroscience: the promise and peril of using artificial neural networks to study the social brain. Soc Cogn Affect Neurosci 2024; 19:nsae014. PMID: 38334747. PMCID: PMC10880882. DOI: 10.1093/scan/nsae014.
Abstract
This review offers an accessible primer to social neuroscientists interested in neural networks. It begins by providing an overview of key concepts in deep learning. It then discusses three ways neural networks can be useful to social neuroscientists: (i) building statistical models to predict behavior from brain activity; (ii) quantifying naturalistic stimuli and social interactions; and (iii) generating cognitive models of social brain function. These applications have the potential to enhance the clinical value of neuroimaging and improve the generalizability of social neuroscience research. We also discuss the significant practical challenges, theoretical limitations and ethical issues faced by deep learning. If the field can successfully navigate these hazards, we believe that artificial neural networks may prove indispensable for the next stage of the field's development: deep social neuroscience.
Affiliation(s)
- Beau Sievers
- Department of Psychology, Stanford University, 420 Jane Stanford Way, Stanford, CA 94305, USA
- Department of Psychology, Harvard University, 33 Kirkland St., Cambridge, MA 02138, USA
- Mark A Thornton
- Department of Psychological and Brain Sciences, Dartmouth College, 6207 Moore Hall, Hanover, NH 03755, USA
15
Schulz MA, Bzdok D, Haufe S, Haynes JD, Ritter K. Performance reserves in brain-imaging-based phenotype prediction. Cell Rep 2024; 43:113597. PMID: 38159275; PMCID: PMC11215805; DOI: 10.1016/j.celrep.2023.113597.
Abstract
This study examines the impact of sample size on predicting cognitive and mental health phenotypes from brain imaging via machine learning. Our analysis shows a 3- to 9-fold improvement in prediction performance as sample size increases from 1,000 to 1 million participants. However, even at this scale, prediction accuracy remains worryingly low and far from fully exploiting the predictive potential of brain imaging data. Additionally, we find that integrating multiple imaging modalities boosts prediction accuracy, often matching the benefit of doubling the sample size. Interestingly, the most informative imaging modality often varied with increasing sample size, emphasizing the need to consider multiple modalities. Despite significant performance reserves for phenotype prediction, achieving substantial improvements may necessitate prohibitively large sample sizes, casting doubt on the practical or clinical utility of machine learning in some areas of neuroimaging.
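The sample-size scaling described in this abstract is often summarized with a power-law learning curve that can be fit and extrapolated. The sketch below uses made-up accuracy numbers that merely echo the "roughly 3-fold per decade of data" pattern; they are not the study's data, and the target accuracy is purely illustrative.

```python
import math

def fit_power_law(ns, scores):
    """Least-squares fit of score = a * n**b in log-log space."""
    xs = [math.log(n) for n in ns]
    ys = [math.log(s) for s in scores]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical accuracies at growing sample sizes (~3x gain per decade of
# data; NOT the study's actual numbers).
ns = [1_000, 10_000, 100_000, 1_000_000]
scores = [0.05, 0.11, 0.21, 0.45]
a, b = fit_power_law(ns, scores)

# Extrapolate the sample size needed to reach an (illustrative) accuracy
# target of 0.8 -- it lands well beyond a million participants.
needed = (0.8 / a) ** (1 / b)
```

Under these toy numbers the fitted exponent is about 0.3, so each order of magnitude of extra data buys only a modest multiplicative gain, which is the sense in which the required sample sizes become prohibitive.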
Affiliation(s)
- Marc-Andre Schulz
- Charité - Universitätsmedizin Berlin (corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Department of Psychiatry and Psychotherapy, Berlin, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany.
- Danilo Bzdok
- McConnell Brain Imaging Centre (BIC), Montreal Neurological Institute (MNI), Faculty of Medicine, McGill University, Montreal, QC, Canada; Department of Biomedical Engineering, Faculty of Medicine, McGill University, Montreal, QC, Canada; Mila - Quebec Artificial Intelligence Institute, Montreal, QC, Canada
- Stefan Haufe
- Bernstein Center for Computational Neuroscience, Berlin, Germany; Technische Universität Berlin, Berlin, Germany; Physikalisch-Technische Bundesanstalt, Berlin, Germany; Charité - Universitätsmedizin Berlin (corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Department of Neurology, Berlin Center for Advanced Neuroimaging, Berlin, Germany
- John-Dylan Haynes
- Bernstein Center for Computational Neuroscience, Berlin, Germany; Charité - Universitätsmedizin Berlin (corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Department of Neurology, Berlin Center for Advanced Neuroimaging, Berlin, Germany
- Kerstin Ritter
- Charité - Universitätsmedizin Berlin (corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Department of Psychiatry and Psychotherapy, Berlin, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany
16
O'Sullivan FM, Ryan TJ. If Engrams Are the Answer, What Is the Question? Adv Neurobiol 2024; 38:273-302. PMID: 39008021; DOI: 10.1007/978-3-031-62983-9_15.
Abstract
Engram labelling and manipulation methodologies are now a staple of contemporary neuroscientific practice, giving the impression that the physical basis of engrams has been discovered. Despite enormous progress, engrams have not been clearly identified, and it is unclear what they should look like. There is an epistemic bias in engram neuroscience toward characterizing biological changes while neglecting the development of theory. However, the tools of engram biology are exciting precisely because they are not just an incremental step forward in understanding the mechanisms of plasticity and learning, but because they can be leveraged to inform theory on one of the fundamental mysteries in neuroscience: how, and in what format, the brain stores information. We do not propose such a theory here, as we first require an appreciation for what is lacking. In four sections, we outline a selection of issues from theoretical biology and philosophy that engram biology, and systems neuroscience generally, should engage with in order to construct useful future theoretical frameworks. Specifically, what is it that engrams are supposed to explain? How do the different building blocks of the brain-wide engram come together? What exactly are these component parts? And what information do they carry, if they carry anything at all? Asking these questions is not purely the privilege of philosophy but a key to informing scientific hypotheses that make the most of the experimental tools at our disposal. The risk of not engaging with these issues is high. Without a theory of what engrams are, what they do, and the wider computational processes they fit into, we may never know when they have been found.
Affiliation(s)
- Fionn M O'Sullivan
- School of Biochemistry and Immunology, Trinity College Dublin, Dublin, Ireland
- Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Tomás J Ryan
- School of Biochemistry and Immunology, Trinity College Dublin, Dublin, Ireland.
- Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland.
- Florey Institute of Neuroscience and Mental Health, Melbourne Brain Centre, University of Melbourne, Melbourne, VIC, Australia.
- Child & Brain Development Program, Canadian Institute for Advanced Research (CIFAR), Toronto, ON, Canada.
17
Agarwal G, Lustig B, Akera S, Pastalkova E, Lee AK, Sommer FT. News without the buzz: reading out weak theta rhythms in the hippocampus. bioRxiv [Preprint] 2023:2023.12.22.573160. PMID: 38187593; PMCID: PMC10769352; DOI: 10.1101/2023.12.22.573160.
Abstract
Local field potentials (LFPs) reflect the collective dynamics of neural populations, yet their exact relationship to neural codes remains unknown [1]. One notable exception is the theta rhythm of the rodent hippocampus, which seems to provide a reference clock to decode the animal's position from spatiotemporal patterns of neuronal spiking [2] or LFPs [3]. But when the animal stops, theta becomes irregular [4], potentially indicating the breakdown of temporal coding by neural populations. Here we show that no such breakdown occurs, introducing an artificial neural network that can recover position-tuned rhythmic patterns (pThetas) without relying on the more prominent theta rhythm as a reference clock. pTheta and theta preferentially correlate with place cell and interneuron spiking, respectively. When rats forage in an open field, pTheta is jointly tuned to position and head orientation, a property not seen in individual place cells but expected to emerge from place cell sequences [5]. Our work demonstrates that weak and intermittent oscillations, as seen in many brain regions and species, can carry behavioral information commensurate with population spike codes.
Affiliation(s)
- Gautam Agarwal
- Department of Natural Sciences, Pitzer and Scripps Colleges, Claremont, CA
- Brian Lustig
- Howard Hughes Medical Institute, Janelia Research Campus, Ashburn, VA
- University of Chicago, Chicago, IL
- Albert K. Lee
- Howard Hughes Medical Institute, Beth Israel Deaconess Medical Center, Boston, MA
- Friedrich T. Sommer
- Helen Wills Neuroscience Institute, UC Berkeley, Berkeley, CA
- Neuromorphic Computing Lab, Intel Corporation, Santa Clara, CA
18
Hoffmann M, Henninger J, Veith J, Richter L, Judkewitz B. Blazed oblique plane microscopy reveals scale-invariant inference of brain-wide population activity. Nat Commun 2023; 14:8019. PMID: 38049412; PMCID: PMC10695970; DOI: 10.1038/s41467-023-43741-x.
Abstract
Due to the size and opacity of vertebrate brains, it has until now been impossible to simultaneously record neuronal activity at cellular resolution across the entire adult brain. As a result, scientists are forced to choose between cellular-resolution microscopy over limited fields of view or whole-brain imaging at coarse-grained resolution. Bridging the gap between these spatial scales of understanding remains a major challenge in neuroscience. Here, we introduce blazed oblique plane microscopy to perform brain-wide recording of neuronal activity at cellular resolution in an adult vertebrate. Contrary to common belief, we find that inferences of neuronal population activity are nearly independent of spatial scale: a set of randomly sampled neurons has predictive power comparable to that of the same number of coarse-grained macrovoxels. Our work thus links cellular resolution with brain-wide scope, challenges the prevailing view that macroscale methods are generally inferior to microscale techniques, and underscores the value of multiscale approaches to studying brain-wide activity.
Affiliation(s)
- Maximilian Hoffmann
- Einstein Center for Neurosciences, NeuroCure Cluster of Excellence, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Rockefeller University, New York, USA
- Jörg Henninger
- Einstein Center for Neurosciences, NeuroCure Cluster of Excellence, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Johannes Veith
- Einstein Center for Neurosciences, NeuroCure Cluster of Excellence, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Department of Biology, Humboldt University Berlin, Berlin, Germany
- Lars Richter
- Department of Chemistry and Center for NanoScience, Ludwig Maximilians University, Munich, Germany
- Benjamin Judkewitz
- Einstein Center for Neurosciences, NeuroCure Cluster of Excellence, Charité - Universitätsmedizin Berlin, Berlin, Germany.
19
Schilling A, Sedley W, Gerum R, Metzner C, Tziridis K, Maier A, Schulze H, Zeng FG, Friston KJ, Krauss P. Predictive coding and stochastic resonance as fundamental principles of auditory phantom perception. Brain 2023; 146:4809-4825. PMID: 37503725; PMCID: PMC10690027; DOI: 10.1093/brain/awad255.
Abstract
Mechanistic insight is achieved only when experiments are employed to test formal or computational models. Furthermore, in analogy to lesion studies, phantom perception may serve as a vehicle to understand the fundamental processing principles underlying healthy auditory perception. With a special focus on tinnitus-as the prime example of auditory phantom perception-we review recent work at the intersection of artificial intelligence, psychology and neuroscience. In particular, we discuss why everyone with tinnitus suffers from (at least hidden) hearing loss, but not everyone with hearing loss suffers from tinnitus. We argue that intrinsic neural noise is generated and amplified along the auditory pathway as a compensatory mechanism to restore normal hearing based on adaptive stochastic resonance. The neural noise increase can then be misinterpreted as auditory input and perceived as tinnitus. This mechanism can be formalized in the Bayesian brain framework, where the percept (posterior) assimilates a prior prediction (brain's expectations) and likelihood (bottom-up neural signal). A higher mean and lower variance (i.e. enhanced precision) of the likelihood shifts the posterior, evincing a misinterpretation of sensory evidence, which may be further confounded by plastic changes in the brain that underwrite prior predictions. Hence, two fundamental processing principles provide the most explanatory power for the emergence of auditory phantom perceptions: predictive coding as a top-down and adaptive stochastic resonance as a complementary bottom-up mechanism. We conclude that both principles also play a crucial role in healthy auditory perception. Finally, in the context of neuroscience-inspired artificial intelligence, both processing principles may serve to improve contemporary machine learning techniques.
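The Bayesian-brain formalization in this abstract has a simple worked form in the Gaussian (conjugate) case: the posterior precision-weights the prior and the likelihood, so enhancing the precision of the likelihood pulls the posterior toward the misinterpreted sensory evidence. The numbers and variable names below are illustrative, not taken from the paper.

```python
def gaussian_posterior(mu_prior, var_prior, mu_lik, var_lik):
    """Precision-weighted combination of a Gaussian prior and likelihood."""
    prec_p, prec_l = 1.0 / var_prior, 1.0 / var_lik
    var_post = 1.0 / (prec_p + prec_l)
    mu_post = var_post * (prec_p * mu_prior + prec_l * mu_lik)
    return mu_post, var_post

# Prior expects silence (mu = 0); amplified intrinsic noise supplies a
# spurious "input" (mu = 1) via the likelihood.
mu_broad, _ = gaussian_posterior(0.0, 1.0, 1.0, 1.0)  # low-precision likelihood
mu_sharp, _ = gaussian_posterior(0.0, 1.0, 1.0, 0.1)  # high-precision likelihood
# A more precise likelihood shifts the posterior further toward the
# phantom percept (mu_sharp > mu_broad).
```

This captures the abstract's claim in miniature: with a fixed prior for silence, raising the likelihood's precision (lower variance) moves the posterior percept toward the noise-driven evidence.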
Affiliation(s)
- Achim Schilling
- Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, 91058 Erlangen, Germany
- William Sedley
- Translational and Clinical Research Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK
- Richard Gerum
- Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, 91058 Erlangen, Germany
- Department of Physics and Astronomy and Center for Vision Research, York University, Toronto, ON M3J 1P3, Canada
- Claus Metzner
- Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Andreas Maier
- Pattern Recognition Lab, University Erlangen-Nürnberg, 91058 Erlangen, Germany
- Holger Schulze
- Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Fan-Gang Zeng
- Center for Hearing Research, Departments of Anatomy and Neurobiology, Biomedical Engineering, Cognitive Sciences, Otolaryngology–Head and Neck Surgery, University of California Irvine, Irvine, CA 92697, USA
- Karl J Friston
- Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, London WC1N 3AR, UK
- Patrick Krauss
- Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, 91058 Erlangen, Germany
- Pattern Recognition Lab, University Erlangen-Nürnberg, 91058 Erlangen, Germany
20
Rubinov M. Circular and unified analysis in network neuroscience. eLife 2023; 12:e79559. PMID: 38014843; PMCID: PMC10684154; DOI: 10.7554/elife.79559.
Abstract
Genuinely new discovery transcends existing knowledge. Despite this, many analyses in systems neuroscience neglect to test new speculative hypotheses against benchmark empirical facts. Some of these analyses inadvertently use circular reasoning to present existing knowledge as new discovery. Here, I discuss how this problem can confound key results and estimate that it has affected more than three thousand studies in network neuroscience over the last decade. I suggest that future studies can reduce this problem by limiting the use of speculative evidence, integrating existing knowledge into benchmark models, and rigorously testing proposed discoveries against these models. I conclude with a summary of practical challenges and recommendations.
Affiliation(s)
- Mika Rubinov
- Departments of Biomedical Engineering, Computer Science, and Psychology, Vanderbilt University, Nashville, United States
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, United States
21
Kim JZ, Larsen B, Parkes L. Shaping dynamical neural computations using spatiotemporal constraints. arXiv [Preprint] 2023:arXiv:2311.15572v1. PMID: 38076517; PMCID: PMC10705584.
Abstract
Dynamics play a critical role in computation. The principled evolution of states over time enables both biological and artificial networks to represent and integrate information to make decisions. In the past few decades, significant multidisciplinary progress has been made in bridging the gap between how we understand biological versus artificial computation, including how insights gained from one can translate to the other. Research has revealed that neurobiology is a key determinant of brain network architecture, which gives rise to spatiotemporally constrained patterns of activity that underlie computation. Here, we discuss how neural systems use dynamics for computation, and claim that the biological constraints that shape brain networks may be leveraged to improve the implementation of artificial neural networks. To formalize this discussion, we consider a natural artificial analog of the brain that has been used extensively to model neural computation: the recurrent neural network (RNN). In both the brain and the RNN, we emphasize the common computational substrate atop which dynamics occur-the connectivity between neurons-and we explore the unique computational advantages offered by biophysical constraints such as resource efficiency, spatial embedding, and neurodevelopment.
Affiliation(s)
- Jason Z. Kim
- Department of Physics, Cornell University, Ithaca, NY 14853, USA
- Bart Larsen
- Department of Pediatrics, Masonic Institute for the Developing Brain, University of Minnesota
- Linden Parkes
- Department of Psychiatry, Rutgers University, Piscataway, NJ 08854, USA
22
Lepperød ME, Stöber T, Hafting T, Fyhn M, Kording KP. Inferring causal connectivity from pairwise recordings and optogenetics. PLoS Comput Biol 2023; 19:e1011574. PMID: 37934793; PMCID: PMC10656035; DOI: 10.1371/journal.pcbi.1011574.
Abstract
To understand the neural mechanisms underlying brain function, neuroscientists aim to quantify causal interactions between neurons, for instance by perturbing the activity of neuron A and measuring the effect on neuron B. Recently, manipulating neuronal activity using light-sensitive opsins (optogenetics) has increased the specificity of neural perturbation. However, widefield optogenetic interventions usually perturb multiple neurons at once, producing a confound: any of the stimulated neurons could have affected the postsynaptic neuron, making it challenging to discern which neurons produced the causal effect. Here, we show how such confounds produce large biases in interpretations. We explain how confounding can be reduced by combining instrumental variables (IV) and difference-in-differences (DiD) techniques from econometrics. Combined, these methods can estimate (causal) effective connectivity by exploiting the weak, approximately random signal resulting from the interaction between stimulation and the absolute refractory period of the neuron. In simulated neural networks, we find that estimates using ideas from IV and DiD outperform naïve techniques, suggesting that methods from causal inference can be useful for disentangling neural interactions in the brain.
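The difference-in-differences idea borrowed here from econometrics can be sketched in a few lines: subtracting the control group's change removes trends shared by both groups, isolating the treatment effect. The firing-rate numbers below are hypothetical stand-ins, not the authors' simulation.

```python
def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences: the change in the treated group minus the
    change in the control group, cancelling shared baseline trends."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical mean firing rates (Hz). A shared drift of +2 Hz affects both
# groups; only the stimulated (treated) neuron gains an extra +3 Hz.
effect = did_estimate(treated_pre=10.0, treated_post=15.0,
                      control_pre=10.0, control_post=12.0)
# effect == 3.0: the shared +2 Hz drift cancels, isolating the causal effect
```

The paper's actual estimator combines this with instrumental variables exploiting the refractory period; this sketch only shows the trend-cancellation step.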
Affiliation(s)
- Mikkel Elle Lepperød
- Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- Simula Research Laboratory, Oslo, Norway
- Tristan Stöber
- Simula Research Laboratory, Oslo, Norway
- Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, Bochum, Germany
- Epilepsy Center Frankfurt Rhine-Main, Department of Neurology, Goethe University, Frankfurt, Germany
- Torkel Hafting
- Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- Marianne Fyhn
- Simula Research Laboratory, Oslo, Norway
- Department of Biosciences, University of Oslo, Oslo, Norway
- Konrad Paul Kording
- Department of Neuroscience, University of Pennsylvania, Pennsylvania, United States of America
23
Frégnac Y. Flagship Afterthoughts: Could the Human Brain Project (HBP) Have Done Better? eNeuro 2023; 10:ENEURO.0428-23.2023. PMID: 37963651; PMCID: PMC10646882; DOI: 10.1523/eneuro.0428-23.2023.
Affiliation(s)
- Yves Frégnac
- UNIC-NeuroPSI, University Paris-Saclay, 91190 Gif-sur-Yvette, France
- Cognitive Sciences at Ecole Polytechnique, 91120 Palaiseau, France
24
Duru J, Maurer B, Giles Doran C, Jelitto R, Küchler J, Ihle SJ, Ruff T, John R, Genocchi B, Vörös J. Investigation of the input-output relationship of engineered neural networks using high-density microelectrode arrays. Biosens Bioelectron 2023; 239:115591. PMID: 37634421; DOI: 10.1016/j.bios.2023.115591.
Abstract
Bottom-up neuroscience utilizes small, engineered biological neural networks to study neuronal activity in systems of reduced complexity. We present a platform that establishes up to six independent networks formed by primary rat neurons on planar complementary metal-oxide-semiconductor (CMOS) microelectrode arrays (MEAs). We introduce an approach that allows repetitive stimulation and recording of network activity at any of the over 700 electrodes underlying a network. We demonstrate that the continuous application of a repetitive super-threshold stimulus yields a reproducible network response within a 15 ms post-stimulus window. This response can be tracked with high spatiotemporal resolution across the whole extent of the network. Moreover, we show that the location of the stimulation plays a significant role in the network's early response to the stimulus. By applying a stimulation pattern to all network-underlying electrodes in sequence, the sensitivity of the whole network to the stimulus can be visualized. We demonstrate that microchannels reduce the voltage stimulation threshold and induce the strongest network response. By varying the stimulation amplitude and frequency, we reveal discrete network transition points. Finally, we introduce vector fields to follow stimulation-induced spike propagation pathways within the network. Overall, we show that our defined neural networks on CMOS MEAs enable us to elicit highly reproducible activity patterns that can be precisely modulated by stimulation amplitude, stimulation frequency, and the site of stimulation.
Affiliation(s)
- Jens Duru
- Laboratory of Biosensors and Bioelectronics, Institute for Biomedical Engineering, University and ETH Zurich, Gloriastrasse 35, Zurich, 8092, Switzerland.
- Benedikt Maurer
- Laboratory of Biosensors and Bioelectronics, Institute for Biomedical Engineering, University and ETH Zurich, Gloriastrasse 35, Zurich, 8092, Switzerland.
- Ciara Giles Doran
- Laboratory of Biosensors and Bioelectronics, Institute for Biomedical Engineering, University and ETH Zurich, Gloriastrasse 35, Zurich, 8092, Switzerland.
- Robert Jelitto
- Laboratory of Biosensors and Bioelectronics, Institute for Biomedical Engineering, University and ETH Zurich, Gloriastrasse 35, Zurich, 8092, Switzerland.
- Joël Küchler
- Laboratory of Biosensors and Bioelectronics, Institute for Biomedical Engineering, University and ETH Zurich, Gloriastrasse 35, Zurich, 8092, Switzerland.
- Stephan J Ihle
- Laboratory of Biosensors and Bioelectronics, Institute for Biomedical Engineering, University and ETH Zurich, Gloriastrasse 35, Zurich, 8092, Switzerland.
- Tobias Ruff
- Laboratory of Biosensors and Bioelectronics, Institute for Biomedical Engineering, University and ETH Zurich, Gloriastrasse 35, Zurich, 8092, Switzerland.
- Robert John
- Laboratory of Biosensors and Bioelectronics, Institute for Biomedical Engineering, University and ETH Zurich, Gloriastrasse 35, Zurich, 8092, Switzerland.
- Barbara Genocchi
- Computational Biophysics and Imaging Group, Tampere University, Arvo Ylpön katu 34, Tampere, 33520, Finland.
- János Vörös
- Laboratory of Biosensors and Bioelectronics, Institute for Biomedical Engineering, University and ETH Zurich, Gloriastrasse 35, Zurich, 8092, Switzerland.
25
Masi M. An evidence-based critical review of the mind-brain identity theory. Front Psychol 2023; 14:1150605. PMID: 37965649; PMCID: PMC10641890; DOI: 10.3389/fpsyg.2023.1150605.
Abstract
In the philosophy of mind, neuroscience, and psychology, the causal relationship between phenomenal consciousness, mentation, and brain states has always been a matter of debate. On the one hand, material monism posits consciousness and mind as pure brain epiphenomena. One of its most stringent lines of reasoning relies on a 'loss-of-function lesion premise,' according to which, since brain lesions and neurochemical modifications lead to cognitive impairment and/or altered states of consciousness, there is no reason to doubt the mind-brain identity. On the other hand, dualism or idealism (in one form or another) regard consciousness and mind as something other than the sole product of cerebral activity, pointing to the ineffable, undefinable, and seemingly unphysical nature of our subjective qualitative experiences and their related mental dimension. Here, several neuroscientific findings are reviewed that question the idea of phenomenal experience as an emergent property of brain activity, and it is argued that the premise of material monism rests on a logical correlation-causation fallacy. While these (mostly ignored) findings, if considered separately from each other, could, in principle, be recast into a physicalist paradigm, once viewed from an integral perspective, they substantiate equally well an ontology that posits mind and consciousness as a primal phenomenon.
Affiliation(s)
- Marco Masi
- Independent Researcher, Knetzgau, Germany
26
Moro A, Greco M, Cappa SF. Large languages, impossible languages and human brains. Cortex 2023; 167:82-85. PMID: 37540953; DOI: 10.1016/j.cortex.2023.07.003.
Abstract
We aim to offer a contribution highlighting the essential differences between Large Language Models (LLMs) and the human language faculty. More explicitly, we claim that the existence of impossible languages for humans has no equivalent for LLMs, making them unsuitable models of the human language faculty, especially from a neurobiological point of view. The core part is preceded by two premises bearing on the distinction between machines and humans and the distinction between competence and performance.
Affiliation(s)
- Andrea Moro
- Scuola Universitaria Superiore IUSS, Pavia, Italy
- Matteo Greco
- Scuola Universitaria Superiore IUSS, Pavia, Italy
- Stefano F Cappa
- Scuola Universitaria Superiore IUSS, Pavia, Italy; IRCCS Mondino Foundation, Pavia, Italy.
27
Fenton AA, Hurtado JR, Broek JAC, Park E, Mishra B. Do Place Cells Dream of Deceptive Moves in a Signaling Game? Neuroscience 2023; 529:129-147. PMID: 37591330; PMCID: PMC10592151; DOI: 10.1016/j.neuroscience.2023.08.012.
Abstract
We consider the possibility of applying game theory to analysis and modeling of neurobiological systems. Specifically, the basic properties and features of information asymmetric signaling games are considered and discussed as having potential to explain diverse neurobiological phenomena; we focus on neuronal action potential discharge that can represent cognitive variables in memory and purposeful behavior. We begin by arguing that there is a pressing need for conceptual frameworks that can permit analysis and integration of information and explanations across many scales of biological function including gene regulation, molecular and biochemical signaling, cellular and metabolic function, neuronal population, and systems level organization to generate plausible hypotheses across these scales. Developing such integrative frameworks is crucial if we are to understand cognitive functions like learning, memory, and perception. The present work focuses on systems neuroscience organized around the connected brain regions of the entorhinal cortex and hippocampus. These areas are intensely studied in rodent subjects as model neuronal systems that undergo activity-dependent synaptic plasticity to form neuronal circuits and represent memories and spatial knowledge used for purposeful navigation. Examples of cognition-related spatial information in the observed neuronal discharge of hippocampal place cell populations and medial entorhinal head-direction cell populations are used to illustrate possible challenges to information maximization concepts. It may be natural to explain these observations using the ideas and features of information asymmetric signaling games.
Affiliation(s)
- André A Fenton
- Neurobiology of Cognition Laboratory, Center for Neural Science, New York University, New York, NY, USA; Neuroscience Institute at the NYU Langone Medical Center, New York, NY, USA.
- José R Hurtado
- Neurobiology of Cognition Laboratory, Center for Neural Science, New York University, New York, NY, USA
- Jantine A C Broek
- Departments of Computer Science and Mathematics, Courant Institute of Mathematical Sciences, New York University, New York, NY, USA
- EunHye Park
- Neurobiology of Cognition Laboratory, Center for Neural Science, New York University, New York, NY, USA
- Bud Mishra
- Departments of Computer Science and Mathematics, Courant Institute of Mathematical Sciences, New York University, New York, NY, USA; Department of Cell Biology, NYU Langone Medical Center, New York, NY, USA; Simon Center for Quantitative Biology, Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
28
Zhang A, Wengler K, Zhu X, Horga G, Goldberg TE, Lee S. Altered hierarchical gradients of intrinsic neural timescales in mild cognitive impairment and Alzheimer's disease. bioRxiv [Preprint] 2023:2023.09.26.559549. PMID: 37808862; PMCID: PMC10557723; DOI: 10.1101/2023.09.26.559549.
Abstract
Alzheimer's disease (AD) is a devastating neurodegenerative disease that affects millions of older adults in the US and worldwide. Resting-state functional magnetic resonance imaging (rs-fMRI) has become a widely used neuroimaging tool for studying neurophysiology in AD and its prodromal condition, mild cognitive impairment (MCI). The intrinsic neural timescale (INT), which can be estimated from the magnitude of the autocorrelation of intrinsic neural signals measured with rs-fMRI, is thought to quantify the duration for which neural information is stored in a local cortical circuit. The heterogeneity of these timescales is considered a basis of the functional hierarchy in the brain. In addition, INT captures an aspect of circuit dynamics relevant to excitation/inhibition (E/I) balance, which is thought to be broadly relevant for cognitive functions. Here we examined its relevance to AD. We used rs-fMRI data from 904 individuals in the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Subjects were divided into four groups based on their baseline and end-visit clinical status: cognitively normal (CN), stable MCI, Converter, and AD. Linear mixed-effects models and pairwise comparisons were used to investigate the large-scale hierarchical organization and local differences. We observed high similarity between the AD and Converter groups. Specifically, among the eight identified ROIs with distinct INT alterations in AD, three (inferior temporal, caudate, and pallidum areas) exhibited stable and significant alterations in the Converter group. In addition, distinct INT-related pathological changes were found in stable MCI versus AD/Converter. In the AD and Converter groups, neural information is stored for a longer time in areas lower in the hierarchy, whereas higher levels of the hierarchy seem to be preferentially impaired in stable MCI, leading to a less pronounced hierarchical gradient. These results suggest that INT holds great potential as an additional measure for AD prediction, a stable biomarker for clinical diagnosis, and a therapeutic target in AD.
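The INT estimator described above, based on the magnitude of the signal's autocorrelation, is often operationalized as the sum of positive autocorrelation-function values up to the first zero crossing. The function below is an illustrative sketch of that common rs-fMRI heuristic; the paper's exact estimator (lag range, preprocessing, repetition-time scaling) may differ:

```python
import numpy as np

def intrinsic_neural_timescale(ts, max_lag=50):
    """Estimate an intrinsic neural timescale from a single time series as
    the sum of its autocorrelation function over positive lags, truncated
    at the first non-positive value."""
    x = np.asarray(ts, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    int_value = 0.0
    for lag in range(1, max_lag + 1):
        # Biased sample autocorrelation at this lag.
        ac = np.dot(x[:-lag], x[lag:]) / len(x) / var
        if ac <= 0:          # truncate at the first zero crossing
            break
        int_value += ac
    return int_value
```

For an AR(1) process, a stronger lag-1 coefficient yields a slower-decaying autocorrelation and hence a larger INT, matching the intuition that "higher" areas with longer timescales integrate information over longer windows.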
29
Jeon I, Kim T. Distinctive properties of biological neural networks and recent advances in bottom-up approaches toward a better biologically plausible neural network. Front Comput Neurosci 2023; 17:1092185. [PMID: 37449083 PMCID: PMC10336230 DOI: 10.3389/fncom.2023.1092185] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Accepted: 06/12/2023] [Indexed: 07/18/2023] Open
Abstract
Although it may appear infeasible and impractical, building artificial intelligence (AI) using a bottom-up approach based on the understanding of neuroscience is straightforward. The lack of a generalized governing principle for biological neural networks (BNNs) forces us to address this problem by converting piecemeal information on the diverse features of neurons, synapses, and neural circuits into AI. In this review, we describe recent attempts to build a biologically plausible neural network, either by following neural network optimization strategies similar to those found in neuroscience or by implanting the outcomes of such optimization, such as the properties of single computational units and the characteristics of the network architecture. In addition, we propose a formalism of the relationship between the set of objectives that neural networks attempt to achieve and neural network classes categorized by how closely their architectural features resemble those of BNNs. This formalism is expected to define the potential roles of top-down and bottom-up approaches for building a biologically plausible neural network and to offer a map for navigating the gap between neuroscience and AI engineering.
Affiliation(s)
- Taegon Kim
- Brain Science Institute, Korea Institute of Science and Technology, Seoul, Republic of Korea

30
Wittek N, Wittek K, Keibel C, Güntürkün O. Supervised machine learning aided behavior classification in pigeons. Behav Res Methods 2023; 55:1624-1640. [PMID: 35701721 PMCID: PMC10250476 DOI: 10.3758/s13428-022-01881-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/16/2022] [Indexed: 11/08/2022]
Abstract
Manual behavioral observation has been used in both field and laboratory experiments to analyze and quantify animal movement and behavior. Although such observations have contributed tremendously to ecological and neuroscientific disciplines, they come with challenges and disadvantages. They are not only time-consuming, labor-intensive, and error-prone, but can also be subjective, which makes results difficult to reproduce. There is therefore an ongoing endeavor toward automated behavioral analysis, which has also paved the way for open-source software approaches. Even though these approaches can in principle be applied to different animal groups, current applications mostly focus on mammals, especially rodents. Extending them to other vertebrates, such as birds, is advisable not only for broadening species-specific knowledge but also for contributing to the larger evolutionary picture and the role of behavior within it. Here we present an open-source software package as a possible starting point for bird behavior classification. It can analyze pose-estimation data generated by established deep-learning-based pose-estimation tools such as DeepLabCut to build supervised machine learning classifiers for pigeon behaviors, and it can be broadened to support other bird species as well. We show that by training different machine learning and deep learning architectures on multivariate time series input, an F1 score of 0.874 can be achieved for a set of seven distinct behaviors. In addition, we introduce an algorithm for tuning the bias of the predictions toward either precision or recall, which allows tailoring the classifier to specific needs.
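The precision/recall bias-tuning step mentioned above can be sketched generically as choosing the decision threshold on classifier scores that maximizes an F-beta score, where beta < 1 biases the classifier toward precision and beta > 1 toward recall. This is an illustration of the general idea, not the package's actual algorithm; all names are hypothetical:

```python
import numpy as np

def tune_threshold(scores, labels, beta=1.0):
    """Pick the decision threshold on classifier scores that maximizes the
    F-beta score: beta < 1 favors precision, beta > 1 favors recall."""
    best_t, best_f = 0.5, -1.0
    for t in np.unique(scores):          # candidate thresholds
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        if tp == 0:
            continue
        prec = tp / (tp + fp)
        rec = tp / (tp + fn)
        f = (1 + beta**2) * prec * rec / (beta**2 * prec + rec)
        if f > best_f:
            best_t, best_f = t, f
    return best_t, best_f
```

With a small beta the selected threshold moves upward, trading recall for precision; with a large beta it moves downward.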
Affiliation(s)
- Neslihan Wittek
- Faculty of Psychology, Department of Biopsychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Universitätsstraße 150, 44801, Bochum, Germany.
- Kevin Wittek
- Faculty of Mathematics, Computer Science and Natural Sciences, Department of Computer Science, RWTH Aachen University, Aachen, Germany
- Christopher Keibel
- Institute for Internet Security, Westphalian University of Applied Sciences, Gelsenkirchen, Germany
- Onur Güntürkün
- Faculty of Psychology, Department of Biopsychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Universitätsstraße 150, 44801, Bochum, Germany

31
Kosal M, Putney J. Neurotechnology and international security: Predicting commercial and military adoption of brain-computer interfaces (BCIs) in the United States and China. Politics Life Sci 2023; 42:81-103. [PMID: 37140225 DOI: 10.1017/pls.2022.2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
In the past decade, international actors have launched "brain projects" or "brain initiatives." One of the emerging technologies enabled by these publicly funded programs is brain-computer interfaces (BCIs), which are devices that allow communication between the brain and external devices like a prosthetic arm or a keyboard. BCIs are poised to have significant impacts on public health, society, and national security. This research presents the first analytical framework that attempts to predict the dissemination of neurotechnologies to both the commercial and military sectors in the United States and China. While China started its project later with less funding, we find that it has other advantages that make earlier adoption more likely. We also articulate national security risks implicit in later adoption, including the inability to set international ethical and legal norms for BCI use, especially in wartime operating environments, and data privacy risks for citizens who use technology developed by foreign actors.
32
Aamodt A, Sevenius Nilsen A, Markhus R, Kusztor A, HasanzadehMoghadam F, Kauppi N, Thürer B, Storm JF, Juel BE. EEG Lempel-Ziv complexity varies with sleep stage, but does not seem to track dream experience. Front Hum Neurosci 2023; 16:987714. [PMID: 36704096 PMCID: PMC9871639 DOI: 10.3389/fnhum.2022.987714] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Accepted: 12/14/2022] [Indexed: 01/12/2023] Open
Abstract
In a recent electroencephalography (EEG) sleep study inspired by complexity theories of consciousness, we found that multi-channel signal diversity progressively decreased from wakefulness to slow wave sleep, but failed to find any significant difference between dreaming and non-dreaming awakenings within the same sleep stage (NREM2). However, we did find that multi-channel Lempel-Ziv complexity (LZC) measured over the posterior cortex increased with more perceptual ratings of NREM2 dream experience along a thought-perceptual axis. In this follow-up study, we re-tested our previous findings, using a slightly different approach. Partial sleep-deprivation was followed by evening sleep experiments, with repeated awakenings and immediate dream reports. Participants reported whether they had been dreaming, and were asked to rate how diverse, vivid, perceptual, and thought-like the contents of their dreams were. High density (64 channel) EEG was recorded throughout the experiment, and mean single-channel LZC was calculated for each 30 s sleep epoch. LZC progressively decreased with depth of non-REM sleep. Surprisingly, estimated marginal mean LZC was slightly higher for NREM1 than for wakefulness, but the difference did not remain significant after adjusting for multiple comparisons. We found no significant difference in LZC between dream and non-dream awakenings, nor any significant relationship between LZC and subjective ratings of dream experience, within the same sleep stage (NREM2). The failure to reproduce our own previous finding of a positive correlation between posterior LZC and more perceptual dream experiences, or to find any other correlation between brain signal complexity and subjective experience within NREM2 sleep, raises the question of whether EEG LZC is really a reliable correlate of richness of experience as such, within the same sleep stage.
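The Lempel-Ziv complexity measure used above can be illustrated with the classic phrase-counting procedure applied to a signal binarized around its median. This is a minimal sketch of a common LZ76-style variant; the study's exact algorithm and normalization may differ:

```python
import numpy as np

def lempel_ziv_complexity(binary_seq):
    """Count Lempel-Ziv phrases in a binary sequence: scan left to right,
    starting a new phrase whenever the current substring has not appeared
    in the sequence before it."""
    s = "".join("1" if b else "0" for b in binary_seq)
    i, c, n = 0, 0, len(s)
    while i < n:
        k = 1
        # Grow the phrase until it is new relative to what precedes it.
        while i + k <= n and s[i:i + k] in s[:i + k - 1]:
            k += 1
        c += 1
        i += k
    return c

def normalized_lzc(signal):
    """Binarize a signal around its median, then normalize the phrase
    count by n / log2(n), so random sequences approach 1."""
    x = np.asarray(signal, dtype=float)
    b = x > np.median(x)
    n = len(b)
    return lempel_ziv_complexity(b) * np.log2(n) / n
```

The canonical example sequence 0001101001000101 parses into six phrases (0 | 001 | 10 | 100 | 1000 | 101), while a constant sequence yields only two, illustrating why more diverse signals score higher.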
Affiliation(s)
- Arnfinn Aamodt
- Brain Signalling Lab, Division of Physiology, Faculty of Medicine, Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- André Sevenius Nilsen
- Brain Signalling Lab, Division of Physiology, Faculty of Medicine, Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- Rune Markhus
- National Centre for Epilepsy, Oslo University Hospital, Oslo, Norway
- Anikó Kusztor
- Brain Signalling Lab, Division of Physiology, Faculty of Medicine, Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- School of Psychological Sciences, Monash University, Clayton, VIC, Australia
- Fatemeh HasanzadehMoghadam
- Brain Signalling Lab, Division of Physiology, Faculty of Medicine, Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- Nils Kauppi
- Brain Signalling Lab, Division of Physiology, Faculty of Medicine, Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- Benjamin Thürer
- Brain Signalling Lab, Division of Physiology, Faculty of Medicine, Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- Johan Frederik Storm
- Brain Signalling Lab, Division of Physiology, Faculty of Medicine, Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- Bjørn Erik Juel
- Brain Signalling Lab, Division of Physiology, Faculty of Medicine, Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway

33
Garibyan A, Schilling A, Boehm C, Zankl A, Krauss P. Neural correlates of linguistic collocations during continuous speech perception. Front Psychol 2022; 13:1076339. [PMID: 36619132 PMCID: PMC9822706 DOI: 10.3389/fpsyg.2022.1076339] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2022] [Accepted: 12/02/2022] [Indexed: 12/25/2022] Open
Abstract
Language is fundamentally predictable, both at a higher schematic level and at the level of low-level lexical items. Regarding predictability at the lexical level, collocations are frequent co-occurrences of words that are often characterized by a high strength of association. So far, psycho- and neurolinguistic studies have mostly employed highly artificial experimental paradigms in the investigation of collocations, focusing on the processing of single words or isolated sentences. In contrast, here we analyze EEG brain responses recorded during stimulation with continuous speech, i.e., audio books. We find that the N400 response to collocations differs significantly from that to non-collocations, with the effect varying by cortical region (anterior/posterior) and laterality (left/right). Our results are in line with studies using continuous speech, and they mostly contradict those using artificial paradigms and stimuli. To the best of our knowledge, this is the first neurolinguistic study of collocations using continuous speech stimulation.
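The "strength of association" that characterizes collocations is often quantified with pointwise mutual information (PMI) over word pairs: PMI = log2(p(w1, w2) / (p(w1) p(w2))), so high-PMI bigrams co-occur far more often than chance predicts. A minimal sketch assuming simple adjacent-bigram counting; the study's actual association measure may differ:

```python
import math
from collections import Counter

def pmi_scores(tokens, min_count=1):
    """Pointwise mutual information for adjacent word pairs (bigrams),
    a standard measure of collocation strength."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n_uni = len(tokens)
    n_bi = len(tokens) - 1
    scores = {}
    for (w1, w2), c in bigrams.items():
        if c < min_count:
            continue
        p_joint = c / n_bi
        p1 = unigrams[w1] / n_uni
        p2 = unigrams[w2] / n_uni
        scores[(w1, w2)] = math.log2(p_joint / (p1 * p2))
    return scores
```

On a real corpus one would raise `min_count` to filter unreliable rare pairs, since PMI is notoriously inflated for low-frequency words.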
Affiliation(s)
- Armine Garibyan
- Chair of English Philology and Linguistics, University Erlangen-Nuremberg, Erlangen, Germany; Linguistics Lab, University Erlangen-Nuremberg, Erlangen, Germany
- Achim Schilling
- Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany; Cognitive Computational Neuroscience Group, University Erlangen-Nuremberg, Erlangen, Germany
- Claudia Boehm
- Linguistics Lab, University Erlangen-Nuremberg, Erlangen, Germany; Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany; Cognitive Computational Neuroscience Group, University Erlangen-Nuremberg, Erlangen, Germany
- Alexandra Zankl
- Linguistics Lab, University Erlangen-Nuremberg, Erlangen, Germany; Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany; Cognitive Computational Neuroscience Group, University Erlangen-Nuremberg, Erlangen, Germany
- Patrick Krauss
- Linguistics Lab, University Erlangen-Nuremberg, Erlangen, Germany; Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany; Cognitive Computational Neuroscience Group, University Erlangen-Nuremberg, Erlangen, Germany; Pattern Recognition Lab, University Erlangen-Nuremberg, Erlangen, Germany

34
Parker D. Neurobiological reduction: From cellular explanations of behavior to interventions. Front Psychol 2022; 13:987101. [PMID: 36619115 PMCID: PMC9815460 DOI: 10.3389/fpsyg.2022.987101] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2022] [Accepted: 11/28/2022] [Indexed: 12/24/2022] Open
Abstract
Scientific reductionism, the view that higher level functions can be explained by properties at some lower-level or levels, has been an assumption of nervous system analyses since the acceptance of the neuron doctrine in the late 19th century, and became a dominant experimental approach with the development of intracellular recording techniques in the mid-20th century. Subsequent refinements of electrophysiological approaches and the continual development of molecular and genetic techniques have promoted a focus on molecular and cellular mechanisms in experimental analyses and explanations of sensory, motor, and cognitive functions. Reductionist assumptions have also influenced our views of the etiology and treatment of psychopathologies, and have more recently led to claims that we can, or even should, pharmacologically enhance the normal brain. Reductionism remains an area of active debate in the philosophy of science. In neuroscience and psychology, the debate typically focuses on the mind-brain question and the mechanisms of cognition, and how or if they can be explained in neurobiological terms. However, these debates are affected by the complexity of the phenomena being considered and the difficulty of obtaining the necessary neurobiological detail. We can instead ask whether features identified in neurobiological analyses of simpler aspects in simpler nervous systems support current molecular and cellular approaches to explaining systems or behaviors. While my view is that they do not, this does not invite the opposing view prevalent in dichotomous thinking that molecular and cellular detail is irrelevant and we should focus on computations or representations. We instead need to consider how to address the long-standing dilemma of how a nervous system that ostensibly functions through discrete cell to cell communication can generate population effects across multiple spatial and temporal scales to generate behavior.
Affiliation(s)
- David Parker
- Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, United Kingdom

35

36
Cai Y, Wu J, Dai Q. Review on data analysis methods for mesoscale neural imaging in vivo. NEUROPHOTONICS 2022; 9:041407. [PMID: 35450225 PMCID: PMC9010663 DOI: 10.1117/1.nph.9.4.041407] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/30/2021] [Accepted: 03/23/2022] [Indexed: 06/14/2023]
Abstract
Significance: Mesoscale neural imaging in vivo has gained enormous popularity in neuroscience for its capacity to record large-scale neuronal populations in action. Optical imaging with single-cell resolution and a millimeter-level field of view in vivo has been providing an accumulating database of neuron-behavior correspondence. Meanwhile, optical detection of neural signals is easily contaminated by noise, background, crosstalk, and motion artifacts, while neural-level signal processing and network-level coordination are extremely complicated, leading to laborious and challenging signal-processing demands. The existing data analysis procedure remains unstandardized, which can be daunting to neophytes or to neuroscientists without a computational background. Aim: We aim to provide a general data analysis pipeline for mesoscale neural imaging shared between imaging modalities and systems. Approach: We divide the pipeline into two main stages. The first stage focuses on extracting high-fidelity neural responses at the single-cell level from raw images, including motion registration, image denoising, neuron segmentation, and signal extraction. The second stage focuses on data mining, including neural functional mapping, clustering, and brain-wide network deduction. Results: We introduce the general pipeline for processing mesoscale neural images. We explain the principles of these procedures and compare different approaches and their application scopes, with detailed discussion of the shortcomings and remaining challenges. Conclusions: Large-scale mesoscale data bring great challenges and opportunities, such as the balance between fidelity and efficiency, the increasing computational load, and neural network interpretability. We believe that global circuits at the single-neuron level will be more extensively explored in the future.
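The signal-extraction step of the first stage can be illustrated with a ΔF/F computation using a running low-percentile baseline, a common final step applied to each ROI trace after registration, denoising, and segmentation. This is a sketch of the generic procedure, not any specific package's implementation; the parameter names are illustrative:

```python
import numpy as np

def delta_f_over_f(trace, baseline_percentile=20, window=200):
    """Compute dF/F for a fluorescence trace using a sliding-window
    low-percentile baseline F0, so slow drifts are absorbed into the
    baseline while transients stand out."""
    x = np.asarray(trace, dtype=float)
    n = len(x)
    f0 = np.empty(n)
    for i in range(n):
        lo = max(0, i - window // 2)
        hi = min(n, i + window // 2 + 1)
        f0[i] = np.percentile(x[lo:hi], baseline_percentile)
    return (x - f0) / f0
```

A brief transient doubling the fluorescence over a stable baseline yields ΔF/F ≈ 1 at the transient and ≈ 0 elsewhere, which is the normalization that makes traces comparable across cells.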
Affiliation(s)
- Yeyi Cai
- Tsinghua University, Department of Automation, Beijing, China
- Jiamin Wu
- Tsinghua University, Department of Automation, Beijing, China
- Qionghai Dai
- Tsinghua University, Department of Automation, Beijing, China

37
Haslbeck JMB, Ryan O. Recovering Within-Person Dynamics from Psychological Time Series. MULTIVARIATE BEHAVIORAL RESEARCH 2022; 57:735-766. [PMID: 34154483 DOI: 10.1080/00273171.2021.1896353] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Idiographic modeling is rapidly gaining popularity, promising to tap into the within-person dynamics underlying psychological phenomena. To gain theoretical understanding of these dynamics, we need to make inferences from time series models about the underlying system. Such inferences are subject to two challenges: first, time series models will arguably always be misspecified, meaning it is unclear how to make inferences to the underlying system; and second, the sampling frequency must be sufficient to capture the dynamics of interest. We discuss both problems with the following approach: we specify a toy model for emotion dynamics as the true system, generate time series data from it, and then try to recover that system with the most popular time series analysis tools. We show that making straightforward inferences from time series models about an underlying system is difficult. We also show that if the sampling frequency is insufficient, the dynamics of interest cannot be recovered. However, we also show that global characteristics of the system can be recovered reliably. We conclude by discussing the consequences of our findings for idiographic modeling and suggest a modeling methodology that goes beyond fitting time series models alone and puts formal theories at the center of theory development.
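Both challenges above, inference from a fitted model and sufficient sampling frequency, can be illustrated with a first-order vector autoregression: least squares recovers the true coefficient matrix from densely sampled data, while fitting the same model class to subsampled data recovers approximately a matrix power of it, i.e., a different dynamical description of the same system. An illustrative sketch, not the authors' simulation setup:

```python
import numpy as np

def simulate_var1(A, n=4000, noise=0.1, seed=0):
    """Simulate a first-order vector autoregression x_t = A x_{t-1} + e_t."""
    rng = np.random.default_rng(seed)
    d = A.shape[0]
    x = np.zeros((n, d))
    for t in range(1, n):
        x[t] = A @ x[t - 1] + noise * rng.standard_normal(d)
    return x

def fit_var1(x):
    """Least-squares estimate of the VAR(1) coefficient matrix."""
    X, Y = x[:-1], x[1:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)  # solves X B = Y, B = A^T
    return B.T

A_true = np.array([[0.7, 0.2], [0.0, 0.5]])
x = simulate_var1(A_true)
A_full = fit_var1(x)       # close to A_true at the generating timescale
A_sub = fit_var1(x[::4])   # subsampled: estimates roughly A_true**4 instead
```

The subsampled fit is not "wrong" in a statistical sense, but inferring the underlying system from it directly would be, which is the paper's point about sampling frequency.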
Affiliation(s)
- Oisín Ryan
- Department of Methodology and Statistics, Utrecht University

38
Barack DL, Miller EK, Moore CI, Packer AM, Pessoa L, Ross LN, Rust NC. A call for more clarity around causality in neuroscience. Trends Neurosci 2022; 45:654-655. [PMID: 35810023 PMCID: PMC9996677 DOI: 10.1016/j.tins.2022.06.003] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2022] [Accepted: 06/10/2022] [Indexed: 11/29/2022]
Abstract
In neuroscience, the term 'causality' is used to refer to different concepts, leading to confusion. Here we illustrate some of those variations, and we suggest names for them. We then introduce four ways to enhance clarity around causality in neuroscience.
Affiliation(s)
- David L Barack
- Departments of Neuroscience and Philosophy, University of Pennsylvania, Philadelphia, PA, USA.
- Earl K Miller
- The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA.
- Christopher I Moore
- Carney Institute for Brain Science, Department of Neuroscience, Brown University, Providence, RI, USA.
- Adam M Packer
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK.
- Luiz Pessoa
- Department of Psychology and Maryland Neuroimaging Center, University of Maryland, College Park, MD, USA.
- Lauren N Ross
- Department of Logic and Philosophy of Science, University of California, Irvine, CA, USA.
- Nicole C Rust
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA.

39
Nour MM, Liu Y, Dolan RJ. Functional neuroimaging in psychiatry and the case for failing better. Neuron 2022; 110:2524-2544. [PMID: 35981525 DOI: 10.1016/j.neuron.2022.07.005] [Citation(s) in RCA: 27] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2022] [Revised: 06/06/2022] [Accepted: 07/08/2022] [Indexed: 12/27/2022]
Abstract
Psychiatric disorders encompass complex aberrations of cognition and affect and are among the most debilitating and poorly understood of any medical condition. Current treatments rely primarily on interventions that target brain function (drugs) or learning processes (psychotherapy). A mechanistic understanding of how these interventions mediate their therapeutic effects remains elusive. From the early 1990s, non-invasive functional neuroimaging, coupled with parallel developments in the cognitive neurosciences, seemed to signal a new era of neurobiologically grounded diagnosis and treatment in psychiatry. Yet, despite three decades of intense neuroimaging research, we still lack a neurobiological account for any psychiatric condition. Likewise, functional neuroimaging plays no role in clinical decision making. Here, we offer a critical commentary on this impasse and suggest how the field might fare better and deliver impactful neurobiological insights.
Affiliation(s)
- Matthew M Nour
- Max Planck University College London Centre for Computational Psychiatry and Ageing Research, London WC1B 5EH, UK; Wellcome Trust Centre for Human Neuroimaging, University College London, London WC1N 3AR, UK; Department of Psychiatry, University of Oxford, Oxford OX3 7JX, UK.
- Yunzhe Liu
- Max Planck University College London Centre for Computational Psychiatry and Ageing Research, London WC1B 5EH, UK; State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; Chinese Institute for Brain Research, Beijing 102206, China
- Raymond J Dolan
- Max Planck University College London Centre for Computational Psychiatry and Ageing Research, London WC1B 5EH, UK; Wellcome Trust Centre for Human Neuroimaging, University College London, London WC1N 3AR, UK; State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China.

40
Clawson WP, Levin M. Endless forms most beautiful 2.0: teleonomy and the bioengineering of chimaeric and synthetic organisms. Biol J Linn Soc Lond 2022. [DOI: 10.1093/biolinnean/blac073] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
The rich variety of biological forms and behaviours results from one evolutionary history on Earth, via frozen accidents and selection in specific environments. This ubiquitous baggage in natural, familiar model species obscures the plasticity and swarm intelligence of cellular collectives. Significant gaps exist in our understanding of the origin of anatomical novelty, of the relationship between genome and form, and of strategies for control of large-scale structure and function in regenerative medicine and bioengineering. Analysis of living forms that have never existed before is necessary to reveal deep design principles of life as it can be. We briefly review existing examples of chimaeras, cyborgs, hybrots and other beings along the spectrum containing evolved and designed systems. To drive experimental progress in multicellular synthetic morphology, we propose teleonomic (goal-seeking, problem-solving) behaviour in diverse problem spaces as a powerful invariant across possible beings regardless of composition or origin. Cybernetic perspectives on chimaeric morphogenesis erase artificial distinctions established by past limitations of technology and imagination. We suggest that a multi-scale competency architecture facilitates evolution of robust problem-solving, living machines. Creation and analysis of novel living forms will be an essential testbed for the emerging field of diverse intelligence, with numerous implications across regenerative medicine, robotics and ethics.
Affiliation(s)
- Michael Levin
- Allen Discovery Center at Tufts University, Medford, MA, USA
- Wyss Institute for Biologically Inspired Engineering at Harvard University, Boston, MA, USA

41
Branchi I. Recentering neuroscience on behavior: The interface between brain and environment is a privileged level of control of neural activity. Neurosci Biobehav Rev 2022; 138:104678. [PMID: 35487322 DOI: 10.1016/j.neubiorev.2022.104678] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2022] [Revised: 04/21/2022] [Accepted: 04/22/2022] [Indexed: 02/08/2023]
Abstract
Despite the huge and constant progress in the molecular and cellular neuroscience fields, our capability to understand brain alterations and treat mental illness is still limited. Therefore, a paradigm shift able to overcome such limitation is warranted. Behavior and the associated mental states are the interface between the central nervous system and the living environment. Since, in any system, the interface is a key regulator of system organization, behavior is proposed here as a unique and privileged level of control and orchestration of brain structure and activity. This view has relevant scientific and clinical implications. First, the study of behavior represents a singular starting point for the investigation of neural activity in an integrated and comprehensive fashion. Second, behavioral changes, accomplished through psychotherapy or environmental interventions, are expected to have the highest impact to specifically reorganize the complexity of the human mind and thus achieve a solid and long-lasting improvement in mental health.
Affiliation(s)
- Igor Branchi
- Center for Behavioral Sciences and Mental Health, Istituto Superiore di Sanità, Viale Regina Elena, 299, 00161 Rome, Italy.

42
Berger SE, Baria AT. Assessing Pain Research: A Narrative Review of Emerging Pain Methods, Their Technosocial Implications, and Opportunities for Multidisciplinary Approaches. FRONTIERS IN PAIN RESEARCH 2022; 3:896276. [PMID: 35721658 PMCID: PMC9201034 DOI: 10.3389/fpain.2022.896276] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Accepted: 05/12/2022] [Indexed: 11/13/2022] Open
Abstract
Pain research traverses many disciplines and methodologies. Yet, despite our understanding and field-wide acceptance of the multifactorial essence of pain as a sensory perception, emotional experience, and biopsychosocial condition, pain scientists and practitioners often remain siloed within their domain expertise and associated techniques. The context in which the field finds itself today-with increasing reliance on digital technologies, an on-going pandemic, and continued disparities in pain care-requires new collaborations and different approaches to measuring pain. Here, we review the state-of-the-art in human pain research, summarizing emerging practices and cutting-edge techniques across multiple methods and technologies. For each, we outline foreseeable technosocial considerations, reflecting on implications for standards of care, pain management, research, and societal impact. Through overviewing alternative data sources and varied ways of measuring pain and by reflecting on the concerns, limitations, and challenges facing the field, we hope to create critical dialogues, inspire more collaborations, and foster new ideas for future pain research methods.
Affiliation(s)
- Sara E. Berger
- Responsible and Inclusive Technologies Research, Exploratory Sciences Division, IBM Thomas J. Watson Research Center, Yorktown Heights, NY, United States
43
Fakhar K, Hilgetag CC. Systematic perturbation of an artificial neural network: A step towards quantifying causal contributions in the brain. PLoS Comput Biol 2022; 18:e1010250. [PMID: 35714139 PMCID: PMC9246164 DOI: 10.1371/journal.pcbi.1010250] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2021] [Revised: 06/30/2022] [Accepted: 05/25/2022] [Indexed: 11/24/2022] Open
Abstract
Lesion inference analysis is a fundamental approach for characterizing the causal contributions of neural elements to brain function. This approach has gained new prominence through the arrival of modern perturbation techniques with unprecedented levels of spatiotemporal precision. While inferences drawn from brain perturbations are conceptually powerful, they face methodological difficulties. In particular, they are challenged to disentangle the true causal contributions of the involved elements, since functions often arise from coalitions of distributed, interacting elements, and localized perturbations have unknown global consequences. To elucidate these limitations, we systematically and exhaustively lesioned a small artificial neural network (ANN) playing a classic arcade game. We determined the functional contributions of all nodes and links, contrasting results from sequential single-element perturbations with simultaneous perturbations of multiple elements. We found that lesioning individual elements, one at a time, produced biased results. By contrast, multi-site lesion analysis captured crucial details that were missed by single-site lesions. We conclude that even small and seemingly simple ANNs show surprising complexity that needs to be addressed by multi-lesioning for a coherent causal characterization.
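The bias described here, where one-at-a-time lesions miss contributions arising from redundant coalitions, can be reproduced in a minimal toy model (a hypothetical sketch, not the authors' game-playing ANN):

```python
import itertools

# Toy system: nodes 0 and 1 form a redundant coalition (either one alone
# sustains the function), while node 2 contributes independently.
def performance(intact):
    redundant = 1.0 if (0 in intact or 1 in intact) else 0.0
    return redundant + (0.5 if 2 in intact else 0.0)

nodes = {0, 1, 2}
full = performance(nodes)

# Single-site lesions: each of the redundant nodes appears to contribute
# nothing, because its partner compensates -- the biased result.
single = {n: full - performance(nodes - {n}) for n in nodes}

# Multi-site (pairwise) lesions: removing both redundant nodes at once
# reveals the coalition's joint contribution.
pair_deficit = {pair: full - performance(nodes - set(pair))
                for pair in itertools.combinations(sorted(nodes), 2)}
```

In this sketch, `single[0]` and `single[1]` are both zero while `pair_deficit[(0, 1)]` equals 1.0, mirroring the conclusion that multi-site lesioning is needed for a coherent causal characterization.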
Affiliation(s)
- Kayson Fakhar
- Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg, Germany
- Claus C. Hilgetag
- Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg, Germany
- Department of Health Sciences, Boston University, Boston, Massachusetts, United States of America
44
Siddiqi SH, Kording KP, Parvizi J, Fox MD. Causal mapping of human brain function. Nat Rev Neurosci 2022; 23:361-375. [PMID: 35444305 PMCID: PMC9387758 DOI: 10.1038/s41583-022-00583-8] [Citation(s) in RCA: 111] [Impact Index Per Article: 55.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/14/2022] [Indexed: 12/11/2022]
Abstract
Mapping human brain function is a long-standing goal of neuroscience that promises to inform the development of new treatments for brain disorders. Early maps of human brain function were based on locations of brain damage or brain stimulation that caused a functional change. Over time, this approach was largely replaced by technologies such as functional neuroimaging, which identify brain regions in which activity is correlated with behaviours or symptoms. Despite their advantages, these technologies reveal correlations, not causation. This creates challenges for interpreting the data generated from these tools and using them to develop treatments for brain disorders. A return to causal mapping of human brain function based on brain lesions and brain stimulation is underway. New approaches can combine these causal sources of information with modern neuroimaging and electrophysiology techniques to gain new insights into the functions of specific brain areas. In this Review, we provide a definition of causality for translational research, propose a continuum along which to assess the relative strength of causal information from human brain mapping studies and discuss recent advances in causal brain mapping and their relevance for developing treatments.
Affiliation(s)
- Shan H Siddiqi
- Center for Brain Circuit Therapeutics, Brigham & Women's Hospital, Boston, MA, USA.
- Department of Psychiatry, Harvard Medical School, Boston, MA, USA.
- Konrad P Kording
- Department of Neuroscience, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Josef Parvizi
- Department of Neurology and Neurological Sciences, Stanford University School of Medicine, Palo Alto, CA, USA
- Michael D Fox
- Center for Brain Circuit Therapeutics, Brigham & Women's Hospital, Boston, MA, USA
- Department of Neurology, Harvard Medical School, Boston, MA, USA
45
Discovering sparse control strategies in neural activity. PLoS Comput Biol 2022; 18:e1010072. [PMID: 35622828 PMCID: PMC9140285 DOI: 10.1371/journal.pcbi.1010072] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2021] [Accepted: 04/01/2022] [Indexed: 11/19/2022] Open
Abstract
Biological circuits such as neural or gene regulation networks use internal states to map sensory input to an adaptive repertoire of behavior. Characterizing this mapping is a major challenge for systems biology. Though experiments that probe internal states are developing rapidly, organismal complexity presents a fundamental obstacle given the many possible ways internal states could map to behavior. Using C. elegans as an example, we propose a protocol for systematic perturbation of neural states that limits experimental complexity and could eventually help characterize collective aspects of the neural-behavioral map. We consider experimentally motivated small perturbations—ones that are most likely to preserve natural dynamics and are closer to internal control mechanisms—to neural states and their impact on collective neural activity. Then, we connect such perturbations to the local information geometry of collective statistics, which can be fully characterized using pairwise perturbations. Applying the protocol to a minimal model of C. elegans neural activity, we find that collective neural statistics are most sensitive to a few principal perturbative modes. Dominant eigenvalues decay initially as a power law, unveiling a hierarchy that arises from variation in individual neural activity and pairwise interactions. Highest-ranking modes tend to be dominated by a few, “pivotal” neurons that account for most of the system’s sensitivity, suggesting a sparse mechanism of collective control.
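The connection between small perturbations and local information geometry can be sketched with a two-neuron pairwise maximum-entropy model: the Fisher information matrix is the covariance of the model's sufficient statistics, and its eigendecomposition ranks the principal perturbative modes. This is a hypothetical minimal example (hand-picked parameters `h1`, `h2`, `J`), not the authors' C. elegans model:

```python
import numpy as np
from itertools import product

# Two binary neurons with a pairwise maximum-entropy model:
# p(s1, s2) proportional to exp(h1*s1 + h2*s2 + J*s1*s2)
h1, h2, J = 0.5, 0.5, 1.0

states = np.array(list(product([0, 1], repeat=2)), dtype=float)
feats = np.column_stack([states[:, 0], states[:, 1],
                         states[:, 0] * states[:, 1]])  # sufficient statistics
logw = feats @ np.array([h1, h2, J])
p = np.exp(logw) / np.exp(logw).sum()

# Fisher information = covariance of the sufficient statistics under p;
# it is the local metric governing sensitivity to parameter perturbations.
mean = p @ feats
fim = (feats - mean).T @ np.diag(p) @ (feats - mean)

# Principal perturbative modes: eigenvectors of the FIM, ranked by eigenvalue
# (eigh returns ascending order, so reverse for a descending hierarchy).
evals, evecs = np.linalg.eigh(fim)
evals, evecs = evals[::-1], evecs[:, ::-1]
```

The sorted spectrum `evals` plays the role of the perturbative-mode hierarchy in the abstract: collective statistics are most sensitive to the leading eigenvectors.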
46
Connecting the dots in ethology: applying network theory to understand neural and animal collectives. Curr Opin Neurobiol 2022; 73:102532. [DOI: 10.1016/j.conb.2022.102532] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2021] [Revised: 02/04/2022] [Accepted: 03/02/2022] [Indexed: 11/24/2022]
47
Gentile C, Cordella F, Zollo L. Hierarchical Human-Inspired Control Strategies for Prosthetic Hands. Sensors (Basel) 2022; 22:2521. [PMID: 35408135 PMCID: PMC9003226 DOI: 10.3390/s22072521] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Revised: 03/02/2022] [Accepted: 03/23/2022] [Indexed: 05/14/2023]
Abstract
The abilities of the human hand have always fascinated people, and many studies have been devoted to describing and understanding a mechanism so perfect and important for human activities. Hand loss can significantly affect the level of autonomy and the capability of performing the activities of daily life. Although technological improvements have led to the development of mechanically advanced commercial prostheses, the control strategies are rather simple (proportional or on/off control). The use of these commercial systems is unnatural and unintuitive, and they are therefore frequently abandoned by amputees. The components of an active prosthetic hand are the mechatronic device, the system that decodes human biological signals into gestures, and the control law that translates all the inputs into desired movements. The real challenge is the development of a control law that replicates human hand functions. This paper presents a literature review of control strategies for prosthetic hands with a multiple-layer or hierarchical structure, and points out the main critical aspects of current solutions in terms of the human functions replicated by the prosthetic device. The paper finally provides several suggestions for designing a control strategy able to mimic the functions of the human hand.
Affiliation(s)
- Cosimo Gentile
- Unit of Advanced Robotics and Human-Centred Technologies, Università Campus Bio-Medico di Roma, 00128 Rome, Italy; (F.C.); (L.Z.)
- INAIL Prosthetic Center, Vigorso di Budrio, 40054 Bologna, Italy
- Francesca Cordella
- Unit of Advanced Robotics and Human-Centred Technologies, Università Campus Bio-Medico di Roma, 00128 Rome, Italy; (F.C.); (L.Z.)
- Loredana Zollo
- Unit of Advanced Robotics and Human-Centred Technologies, Università Campus Bio-Medico di Roma, 00128 Rome, Italy; (F.C.); (L.Z.)
48
Murphy-Baum BL, Awatramani GB. Parallel processing in active dendrites during periods of intense spiking activity. Cell Rep 2022; 38:110412. [PMID: 35196499 DOI: 10.1016/j.celrep.2022.110412] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2021] [Revised: 12/15/2021] [Accepted: 01/28/2022] [Indexed: 12/17/2022] Open
Abstract
A neuron's ability to perform parallel computations throughout its dendritic arbor substantially improves its computational capacity. However, during natural patterns of activity, the degree to which computations remain compartmentalized, especially in neurons with active dendritic trees, is not clear. Here, we examine how the direction of moving objects is computed across the bistratified dendritic arbors of ON-OFF direction-selective ganglion cells (DSGCs) in the mouse retina. We find that although local synaptic signals propagate efficiently throughout their dendritic trees, direction-selective computations in one part of the dendritic arbor have little effect on those being made elsewhere. Independent dendritic processing allows DSGCs to compute the direction of moving objects multiple times as they traverse their receptive fields, enabling them to rapidly detect changes in motion direction on a sub-receptive-field basis. These results demonstrate that the parallel processing capacity of neurons can be maintained even during periods of intense synaptic activity.
Affiliation(s)
- Gautam B Awatramani
- Department of Biology, University of Victoria, Victoria, BC V8P 5C2, Canada.
49
Jaworska K, Yan Y, van Rijsbergen NJ, Ince RAA, Schyns PG. Different computations over the same inputs produce selective behavior in algorithmic brain networks. eLife 2022; 11:73651. [PMID: 35174783 PMCID: PMC8853655 DOI: 10.7554/elife.73651] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2021] [Accepted: 01/06/2022] [Indexed: 11/25/2022] Open
Abstract
A key challenge in neuroimaging remains to understand where, when, and now particularly how human brain networks compute over sensory inputs to achieve behavior. To study such dynamic algorithms from mass neural signals, we recorded the magnetoencephalographic (MEG) activity of participants who resolved the classic XOR, OR, and AND functions as overt behavioral tasks (N = 10 participants/task, N-of-1 replications). Each function requires a different computation over the same inputs to produce the task-specific behavioral outputs. In each task, we found that source-localized MEG activity progresses through four computational stages identified within individual participants: (1) initial contralateral representation of each visual input in occipital cortex, (2) a joint linearly combined representation of both inputs in midline occipital cortex and right fusiform gyrus, followed by (3) nonlinear task-dependent input integration in temporal-parietal cortex, and finally (4) behavioral response representation in postcentral gyrus. We demonstrate the specific dynamics of each computation at the level of individual sources. The spatiotemporal patterns of the first two computations are similar across the three tasks; the last two computations are task specific. Our results therefore reveal where, when, and how dynamic network algorithms perform different computations over the same inputs to produce different behaviors.
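The premise that the same two inputs yield different behaviors depending on the computation applied can be illustrated with the classic perceptron observation: OR and AND are linear readouts of the inputs, while XOR requires a nonlinear integration stage, loosely analogous to the task-dependent stage (3) above. The weights below are hand-chosen for illustration and are not the authors' model:

```python
import numpy as np

def step(x):
    # Hard threshold nonlinearity: 1.0 where x > 0, else 0.0.
    return (x > 0).astype(float)

# The same four input pairs for all three tasks.
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

# OR and AND: single linear readouts over the inputs (linearly separable).
def OR(x):
    return step(x @ np.array([1.0, 1.0]) - 0.5)

def AND(x):
    return step(x @ np.array([1.0, 1.0]) - 1.5)

# XOR: not linearly separable; needs a hidden stage that integrates the
# inputs nonlinearly before the final readout.
def XOR(x):
    h = step(x @ np.array([[1.0, 1.0], [1.0, 1.0]]) - np.array([0.5, 1.5]))
    return step(h @ np.array([1.0, -2.0]) - 0.5)
```

Running all three functions on `inputs` produces the three task-specific truth tables from identical input representations, which is the computational distinction the MEG stages track.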
Affiliation(s)
- Yuening Yan
- School of Psychology and Neuroscience, University of Glasgow
- Robin AA Ince
- School of Psychology and Neuroscience, University of Glasgow
50
Fraser P, Solé R, De las Cuevas G. Why Can the Brain (and Not a Computer) Make Sense of the Liar Paradox? Front Ecol Evol 2021. [DOI: 10.3389/fevo.2021.802300] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Ordinary computing machines prohibit self-reference because it leads to logical inconsistencies and undecidability. In contrast, the human mind can understand self-referential statements without necessitating physically impossible brain states. Why can the brain make sense of self-reference? Here, we address this question by defining the Strange Loop Model, which features causal feedback between two brain modules, and circumvents the paradoxes of self-reference and negation by unfolding the inconsistency in time. We also argue that the metastable dynamics of the brain inhibit and terminate unhalting inferences. Finally, we show that the representation of logical inconsistencies in the Strange Loop Model leads to causal incongruence between brain subsystems in Integrated Information Theory.