51
Pietzonka P, Coghi F. Thermodynamic cost for precision of general counting observables. Phys Rev E 2024; 109:064128. PMID: 39020906. DOI: 10.1103/physreve.109.064128.
Abstract
We analytically derive universal bounds that describe the tradeoff between thermodynamic cost and precision in a sequence of events related to some internal changes of an otherwise hidden physical system. The precision is quantified by the fluctuations in either the number of events counted over time or the waiting times between successive events. Our results are valid for the same broad class of nonequilibrium driven systems considered by the thermodynamic uncertainty relation, but they extend to both time-symmetric and asymmetric observables. We show how optimal precision saturating the bounds can be achieved. For waiting-time fluctuations of asymmetric observables, a phase transition in the optimal configuration arises, where higher precision can be achieved by combining several signals.
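As background (a textbook statement, assumed here rather than quoted from the paper), the standard thermodynamic uncertainty relation bounds the relative fluctuations of an integrated current $J_t$ by the total entropy production $\Sigma_t$:

```latex
\frac{\operatorname{Var}(J_t)}{\langle J_t \rangle^2} \;\geq\; \frac{2 k_{\mathrm{B}}}{\Sigma_t}
```

Bounds of this form apply to time-antisymmetric currents; extending cost-precision tradeoffs to general (including time-symmetric) counting observables is the gap this work addresses.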
52
Héroux ME, Fisher G, Axelson LH, Butler AA, Gandevia SC. How we perceive the width of grasped objects: Insights into the central processes that govern proprioceptive judgements. J Physiol 2024; 602:2899-2916. PMID: 38734987. DOI: 10.1113/jp286322.
Abstract
Low-level proprioceptive judgements involve a single frame of reference, whereas high-level proprioceptive judgements are made across different frames of reference. The present study systematically compared low-level (grasp → grasp) and high-level (vision → grasp, grasp → vision) proprioceptive tasks, and quantified the consistency of the grasp → vision task and the possibly reciprocal nature of related high-level proprioceptive tasks. Experiment 1 (n = 30) compared performance across the vision → grasp, grasp → vision and grasp → grasp tasks. Experiment 2 (n = 30) compared performance on the grasp → vision task between hands and over time. Participants were accurate (mean absolute error 0.27 cm [95% CI: 0.20 to 0.34]) and precise (R² = 0.95 [0.93 to 0.96]) for grasp → grasp judgements, with a strong correlation between outcomes (r = -0.85 [-0.93 to -0.70]). Accuracy and precision decreased in the two high-level tasks (R² = 0.86 and 0.89; mean absolute error = 1.34 and 1.41 cm), with most participants overestimating perceived width for the vision → grasp task and underestimating it for the grasp → vision task. There was minimal correlation between accuracy and precision for these two tasks. Converging evidence indicated that performance was largely reciprocal (inverse) between the vision → grasp and grasp → vision tasks. Performance on the grasp → vision task was consistent between dominant and non-dominant hands, and across repeated sessions a day or week apart. Overall, there are fundamental differences between low- and high-level proprioceptive judgements that reflect differences in the cortical processes underpinning these perceptions. Moreover, the central transformations that govern high-level proprioceptive judgements of grasp are personalised, stable and reciprocal for reciprocal tasks. KEY POINTS: Low-level proprioceptive judgements involve a single frame of reference (e.g. indicating the width of a grasped object by selecting from a series of objects of different width), whereas high-level proprioceptive judgements are made across different frames of reference (e.g. indicating the width of a grasped object by selecting from a series of visible lines of different length). We highlight fundamental differences in the precision and accuracy of low- and high-level proprioceptive judgements. We provide converging evidence that the neural transformations between frames of reference that govern high-level proprioceptive judgements of grasp are personalised, stable and reciprocal for reciprocal tasks. This stability is likely key to precise judgements and accurate predictions in high-level proprioception.
Affiliation(s)
- Martin E Héroux
- Neuroscience Research Australia, Randwick, Australia
- University of New South Wales, Sydney, Australia
- Georgia Fisher
- Neuroscience Research Australia, Randwick, Australia
- Australian Institute of Health Innovation, Macquarie University, Macquarie Park, Australia
- Annie A Butler
- Neuroscience Research Australia, Randwick, Australia
- University of New South Wales, Sydney, Australia
- Simon C Gandevia
- Neuroscience Research Australia, Randwick, Australia
- University of New South Wales, Sydney, Australia
53
Hatton AL, Chatfield MD, Gane EM, Maharaj JN, Cattagni T, Burns J, Paton J, Rome K, Kerr G. The effects of wearing textured versus smooth shoe insoles for 4-weeks in people with diabetic peripheral neuropathy: a randomised controlled trial. Disabil Rehabil 2024:1-11. PMID: 38819206. DOI: 10.1080/09638288.2024.2360658.
Abstract
PURPOSE To determine whether short-term wear of textured insoles alters balance, gait, foot sensation, physical activity, or patient-reported outcomes in people with diabetic neuropathy. MATERIALS AND METHODS 53 adults with diabetic neuropathy were randomised to wear textured or smooth insoles for 4 weeks. At baseline and post-intervention, balance (foam/firm surface; eyes open/closed) and walking were assessed whilst barefoot, wearing shoes only, and wearing each of two insoles (textured/smooth). The primary outcome was centre of pressure (CoP) total sway velocity. Secondary outcomes included other CoP measures, spatiotemporal gait measures, foot sensation, physical activity, and patient-reported outcomes (foot health, falls efficacy). RESULTS Wearing textured insoles led to improvements in CoP measures when standing on foam with eyes open, relative to smooth insoles (p ≤ 0.04). The intervention group demonstrated a 5% reduction in total sway velocity, indicative of better balance, and a 9-point improvement in self-perceived vigour (p = 0.03). Adjustments for multiple comparisons were not applied. CONCLUSIONS This study provides weak statistical evidence in favour of textured insoles. Wearing textured insoles may alter measures of balance, suggestive of greater stability, in people with diabetic neuropathy. Plantar stimulation through textured insoles may have the capacity to modulate the perception of foot pain, leading to improved well-being. IMPLICATIONS FOR REHABILITATION Short-term wear of textured insoles can lead to improvements in centre of pressure sway measures when standing on a compliant supporting surface. Wearing textured insoles may have the capacity to help relieve foot pain, leading to enhanced self-perceived vitality in people with diabetic peripheral neuropathy.
Affiliation(s)
- Anna L Hatton
- School of Health and Rehabilitation Sciences, The University of Queensland, Brisbane, Australia
- Mark D Chatfield
- Centre for Health Sciences Research, The University of Queensland, Brisbane, Australia
- Elise M Gane
- School of Health and Rehabilitation Sciences, The University of Queensland, Brisbane, Australia
- Jayishni N Maharaj
- School of Allied Health Sciences, Griffith University, Gold Coast, Australia
- Thomas Cattagni
- Laboratory Movement, Interactions, Performance EA 4334, University of Nantes, Nantes, France
- Joshua Burns
- Faculty of Medicine and Health & Children's Hospital at Westmead, University of Sydney School of Health Sciences, Sydney, Australia
- Joanne Paton
- School of Health Professions, Faculty of Health, University of Plymouth, Plymouth, UK
- Keith Rome
- School of Clinical Sciences, Auckland University of Technology, Auckland, New Zealand
- Graham Kerr
- Movement Neuroscience Group, School of Exercise and Nutrition Sciences, Queensland University of Technology, Brisbane, Australia
54
Li C, Brenner J, Boesky A, Ramanathan S, Kreiman G. Neuron-level Prediction and Noise can Implement Flexible Reward-Seeking Behavior. bioRxiv [Preprint] 2024:2024.05.22.595306. PMID: 38826332. PMCID: PMC11142161. DOI: 10.1101/2024.05.22.595306.
Abstract
We show that neural networks can implement reward-seeking behavior using only local predictive updates and internal noise. These networks are capable of autonomous interaction with an environment and can switch between explore and exploit behavior, which we show is governed by attractor dynamics. Networks can adapt to changes in their architectures, environments, or motor interfaces without any external control signals. When networks have a choice between different tasks, they can form preferences that depend on patterns of noise and initialization, and we show that these preferences can be biased by network architectures or by changing learning rates. Our algorithm presents a flexible, biologically plausible way of interacting with environments without requiring an explicit environmental reward function, allowing for behavior that is both highly adaptable and autonomous. Code is available at https://github.com/ccli3896/PaN.
Affiliation(s)
- Chenguang Li
- Biophysics Program, Harvard College, Cambridge, MA 02138
- Sharad Ramanathan
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
- Gabriel Kreiman
- Boston Children's Hospital, Harvard Medical School, Boston, MA 02115
55
Judd N, Aristodemou M, Klingberg T, Kievit R. Interindividual Differences in Cognitive Variability Are Ubiquitous and Distinct From Mean Performance in a Battery of Eleven Tasks. J Cogn 2024; 7:45. PMID: 38799081. PMCID: PMC11122693. DOI: 10.5334/joc.371.
Abstract
Our performance on cognitive tasks fluctuates: the same individual completing the same task will differ in their responses from moment to moment. For decades, cognitive fluctuations have been implicitly ignored - treated as measurement error - with a focus instead on aggregates such as mean performance. Leveraging dense trial-by-trial data and novel time-series methods, we explored variability as an intrinsically important phenotype. Across eleven cognitive tasks with over 7 million trials, we found highly reliable interindividual differences in cognitive variability in every task we examined. These differences are both qualitatively and quantitatively distinct from mean performance. Moreover, we found that a single dimension for variability across tasks was inadequate, demonstrating that previously posited global mechanisms for cognitive variability are at least partially incomplete. Our findings indicate that variability is a fundamental part of cognition - with the potential to offer novel insights into developmental processes.
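As a toy illustration of the distinction the authors draw (a sketch with made-up numbers, not their analysis pipeline): the mean and the trial-to-trial variability of a response series are separate statistics, so two individuals can match on one while differing sharply on the other.

```python
import statistics

# Hypothetical trial-by-trial response times (seconds) for two individuals.
# Both have the same mean performance, but person B fluctuates far more.
person_a = [0.50, 0.52, 0.48, 0.51, 0.49, 0.50]
person_b = [0.30, 0.72, 0.41, 0.65, 0.28, 0.64]

mean_a, mean_b = statistics.mean(person_a), statistics.mean(person_b)
sd_a, sd_b = statistics.stdev(person_a), statistics.stdev(person_b)

# Aggregating to the mean alone would hide this difference entirely:
# variability is its own, dissociable phenotype.
print(f"means: {mean_a:.2f} vs {mean_b:.2f}")
print(f"trial-to-trial SD: {sd_a:.3f} vs {sd_b:.3f}")
```

Here both means are 0.50 s while the standard deviations differ by more than an order of magnitude, which is the kind of dissociation the paper reports across eleven tasks.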
Affiliation(s)
- Nicholas Judd
- Cognitive Neuroscience Department, Donders Institute for Brain, Cognition, and Behavior, Radboud University Medical Center, Nijmegen, The Netherlands
- Michael Aristodemou
- Cognitive Neuroscience Department, Donders Institute for Brain, Cognition, and Behavior, Radboud University Medical Center, Nijmegen, The Netherlands
- Torkel Klingberg
- Department of Neuroscience, Karolinska Institute, Stockholm, Sweden
- Rogier Kievit
- Cognitive Neuroscience Department, Donders Institute for Brain, Cognition, and Behavior, Radboud University Medical Center, Nijmegen, The Netherlands
56
Peviani VC, Miller LE, Medendorp WP. Biases in hand perception are driven by somatosensory computations, not a distorted hand model. Curr Biol 2024; 34:2238-2246.e5. PMID: 38718799. DOI: 10.1016/j.cub.2024.04.010.
Abstract
To sense and interact with objects in the environment, we effortlessly configure our fingertips at desired locations. It is therefore reasonable to assume that the underlying control mechanisms rely on accurate knowledge about the structure and spatial dimensions of our hand and fingers. This intuition, however, is challenged by years of research showing drastic biases in the perception of finger geometry.1,2,3,4,5 This perceptual bias has been taken as evidence that the brain's internal representation of the body's geometry is distorted,6 leading to an apparent paradox regarding the skillfulness of our actions.7 Here, we propose an alternative explanation of the biases in hand perception: they are the result of the Bayesian integration of noisy, but unbiased, somatosensory signals about finger geometry and posture. To address this hypothesis, we combined Bayesian reverse engineering with behavioral experimentation on joint and fingertip localization of the index finger. We modeled the Bayesian integration either in sensory or in space-based coordinates, showing that the latter model variant led to biases in finger perception despite accurate representation of finger length. Behavioral measures of joint and fingertip localization responses showed similar biases, which were well fitted by the space-based, but not the sensory-based, model variant. The space-based model variant also outperformed a distorted hand model with built-in geometric biases. Altogether, our results suggest that perceptual distortions of finger geometry do not reflect a distorted hand model but originate from near-optimal Bayesian inference on somatosensory signals.
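A minimal sketch of the general principle invoked here (Gaussian cue combination with invented parameter values, not the paper's actual model): optimally combining an unbiased but noisy measurement with a prior yields posterior estimates that are systematically pulled toward the prior mean, so perceptual bias can coexist with unbiased sensory signals.

```python
import random

random.seed(0)

true_length = 8.0                 # hypothetical finger length (cm)
prior_mean, prior_var = 6.0, 1.0  # assumed Gaussian prior over lengths
noise_var = 2.0                   # sensory noise variance (signal is unbiased)

# Posterior mean of Gaussian prior x Gaussian likelihood is a
# precision-weighted average of the measurement and the prior mean.
w_sense = (1 / noise_var) / (1 / noise_var + 1 / prior_var)

estimates = []
for _ in range(10_000):
    measurement = random.gauss(true_length, noise_var ** 0.5)  # unbiased draw
    posterior_mean = w_sense * measurement + (1 - w_sense) * prior_mean
    estimates.append(posterior_mean)

avg = sum(estimates) / len(estimates)
# The average estimate sits between the truth (8.0) and the prior mean (6.0):
# a systematic bias despite perfectly unbiased sensory input.
print(f"average estimate: {avg:.2f}")
```

With these assumed numbers the expected estimate is w·8.0 + (1−w)·6.0 ≈ 6.67 cm, i.e. biased toward the prior even though every measurement is unbiased.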
Affiliation(s)
- Valeria C Peviani
- Donders Institute for Cognition and Behavior, Radboud University, Nijmegen 6525 GD, the Netherlands
- Luke E Miller
- Donders Institute for Cognition and Behavior, Radboud University, Nijmegen 6525 GD, the Netherlands
- W Pieter Medendorp
- Donders Institute for Cognition and Behavior, Radboud University, Nijmegen 6525 GD, the Netherlands
57
Zhou R, Yu Y, Li C. Revealing neural dynamical structure of C. elegans with deep learning. iScience 2024; 27:109759. PMID: 38711456. PMCID: PMC11070340. DOI: 10.1016/j.isci.2024.109759.
Abstract
Caenorhabditis elegans serves as a common model for investigating neural dynamics and the functions of biological neural networks. Data-driven approaches have been employed to reconstruct neural dynamics, but challenges remain from the curse of high dimensionality and the stochasticity of realistic systems. In this study, we develop a deep neural network (DNN) approach to reconstruct the neural dynamics of C. elegans and study the neural mechanisms of locomotion. Our model identifies two limit cycles in the neural activity space: one underpins basic pirouette behavior, essential for navigation, and the other introduces extra Ω turns. The combination of the two limit cycles elucidates the predominant locomotion patterns in neural imaging data. The corresponding energy landscape quantitatively explains the switching strategies between the two limit cycles and provides testable predictions on neural functions and circuit roles. Our work provides a general approach to studying neural dynamics by combining imaging data and stochastic modeling.
Affiliation(s)
- Ruisong Zhou
- School of Mathematical Sciences and Shanghai Center for Mathematical Sciences, Fudan University, Shanghai 200433, China
- Yuguo Yu
- Research Institute of Intelligent and Complex Systems, State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science, and Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Shanghai Artificial Intelligence Laboratory, Shanghai 200232, China
- Chunhe Li
- School of Mathematical Sciences and Shanghai Center for Mathematical Sciences, Fudan University, Shanghai 200433, China
- Institute of Science and Technology for Brain-Inspired Intelligence and MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200433, China
58
Chang JC, Perich MG, Miller LE, Gallego JA, Clopath C. De novo motor learning creates structure in neural activity that shapes adaptation. Nat Commun 2024; 15:4084. PMID: 38744847. PMCID: PMC11094149. DOI: 10.1038/s41467-024-48008-7.
Abstract
Animals can quickly adapt learned movements to external perturbations, and their existing motor repertoire likely influences their ease of adaptation. Long-term learning causes lasting changes in neural connectivity, which shapes the activity patterns that can be produced during adaptation. Here, we examined how a neural population's existing activity patterns, acquired through de novo learning, affect subsequent adaptation by modeling motor cortical neural population dynamics with recurrent neural networks. We trained networks on different motor repertoires comprising varying numbers of movements, which they acquired following various learning experiences. Networks with multiple movements had more constrained and robust dynamics, which were associated with more defined neural 'structure'-organization in the available population activity patterns. This structure facilitated adaptation, but only when the changes imposed by the perturbation were congruent with the organization of the inputs and the structure in neural activity acquired during de novo learning. These results highlight trade-offs in skill acquisition and demonstrate how different learning experiences can shape the geometrical properties of neural population activity and subsequent adaptation.
Affiliation(s)
- Joanna C Chang
- Department of Bioengineering, Imperial College London, London, UK
- Matthew G Perich
- Département de Neurosciences, Faculté de Médecine, Université de Montréal, Montréal, QC, Canada
- Mila, Québec Artificial Intelligence Institute, Montréal, QC, Canada
- Lee E Miller
- Departments of Physiology, Biomedical Engineering and Physical Medicine and Rehabilitation, Northwestern University and Shirley Ryan Ability Lab, Chicago, IL, USA
- Juan A Gallego
- Department of Bioengineering, Imperial College London, London, UK
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, UK
59
Painchaud V, Desrosiers P, Doyon N. The Determining Role of Covariances in Large Networks of Stochastic Neurons. Neural Comput 2024; 36:1121-1162. PMID: 38657971. DOI: 10.1162/neco_a_01656.
Abstract
Biological neural networks are notoriously hard to model due to their stochastic behavior and high dimensionality. We tackle this problem by constructing a dynamical model of both the expectations and covariances of the fractions of active and refractory neurons in the network's populations. We do so by describing the evolution of the states of individual neurons with a continuous-time Markov chain, from which we formally derive a low-dimensional dynamical system. This is done by solving a moment closure problem in a way that is compatible with the nonlinearity and boundedness of the activation function. Our dynamical system captures the behavior of the high-dimensional stochastic model even in cases where the mean-field approximation fails to do so. Taking into account the second-order moments modifies the solutions that would be obtained with the mean-field approximation and can lead to the appearance or disappearance of fixed points and limit cycles. We moreover perform numerical experiments where the mean-field approximation leads to periodically oscillating solutions, while the solutions of the second-order model can be interpreted as an average taken over many realizations of the stochastic model. Altogether, our results highlight the importance of including higher moments when studying stochastic networks and deepen our understanding of correlated neuronal activity.
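To illustrate why second moments can change the qualitative dynamics (a generic second-order Taylor moment expansion, assumed here for exposition and not taken from the paper): for a stochastic state $x$ with nonlinear drift $f$, the mean does not evolve under $f(\langle x\rangle)$ alone,

```latex
\frac{d\langle x \rangle}{dt} \;\approx\; f(\langle x \rangle)
  \;+\; \tfrac{1}{2}\, f''(\langle x \rangle)\,\operatorname{Var}(x),
```

so a nonzero variance shifts the effective drift relative to the mean-field equation $d\langle x\rangle/dt = f(\langle x\rangle)$, which is how tracking covariances can create or destroy fixed points and limit cycles that the mean-field approximation predicts.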
Affiliation(s)
- Vincent Painchaud
- Department of Mathematics and Statistics, McGill University, Montreal, Québec H3A 0B6, Canada
- Patrick Desrosiers
- Department of Physics, Engineering Physics, and Optics, Université Laval, Quebec City, Québec G1V 0A6, Canada
- CERVO Brain Research Center, Quebec City, Québec G1E 1T2, Canada
- Centre interdisciplinaire en modélisation mathématique de l'Université Laval, Quebec City, Québec G1V 0A6, Canada
- Nicolas Doyon
- Department of Mathematics and Statistics, Université Laval, Quebec City, Québec G1V 0A6, Canada
- CERVO Brain Research Center, Quebec City, Québec G1E 1T2, Canada
- Centre interdisciplinaire en modélisation mathématique de l'Université Laval, Quebec City, Québec G1V 0A6, Canada
60
Lin A, Akafia C, Dal Monte O, Fan S, Fagan N, Putnam P, Tye KM, Chang S, Ba D, Allsop AZAS. An unbiased method to partition diverse neuronal responses into functional ensembles reveals interpretable population dynamics during innate social behavior. bioRxiv [Preprint] 2024:2024.05.08.593229. PMID: 38766234. PMCID: PMC11100741. DOI: 10.1101/2024.05.08.593229.
Abstract
In neuroscience, understanding how single-neuron firing contributes to distributed neural ensembles is crucial. Traditional analyses have been limited to descriptions of whole-population activity or, when analyzing individual neurons, have used response-categorization criteria that varied significantly across experiments. Current methods lack scalability for large datasets, fail to capture temporal changes, and rely on parametric assumptions. There is a need for a robust, scalable, and non-parametric functional clustering approach that captures interpretable dynamics. To address this challenge, we developed a model-based statistical framework for unsupervised clustering of multiple time-series datasets that exhibit nonlinear dynamics into an a priori unknown number of parameterized ensembles called Functional Encoding Units (FEUs). The FEU approach outperforms existing techniques in accuracy and benchmark scores. Here, we apply this FEU formalism to single-unit recordings collected during social behaviors in rodents and primates and demonstrate its hypothesis-generating and testing capacities. This novel pipeline serves as an analytic bridge, translating neural ensemble codes across model systems.
Affiliation(s)
- Alexander Lin
- School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts, USA
- Cyril Akafia
- Department of Psychiatry, Yale University, New Haven, Connecticut, USA
- Olga Dal Monte
- Department of Psychology, Yale University, New Haven, Connecticut, USA
- Siqi Fan
- Department of Psychology, Yale University, New Haven, Connecticut, USA
- Nicholas Fagan
- Department of Psychology, Yale University, New Haven, Connecticut, USA
- Philip Putnam
- Department of Psychology, Yale University, New Haven, Connecticut, USA
- Kay M. Tye
- Salk Institute for Biological Studies, La Jolla, California, USA
- Howard Hughes Medical Institute, La Jolla, California, USA
- Kavli Institute for the Brain and Mind, La Jolla, California, USA
- Steve Chang
- Department of Psychology, Yale University, New Haven, Connecticut, USA
- Demba Ba
- School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts, USA
- Center for Brain Sciences, Harvard University, Cambridge, Massachusetts, USA
- Kempner Institute for the Study of Artificial and Natural Intelligence, Harvard University, Cambridge, Massachusetts, USA
- AZA Stephen Allsop
- Center for Collective Healing, Department of Psychiatry and Behavioral Sciences, Howard University, Washington DC, USA
- Department of Psychiatry, Yale University, New Haven, Connecticut, USA
61
Lim M, Kim DJ, Nascimento TD, DaSilva AF. High-definition tDCS over primary motor cortex modulates brain signal variability and functional connectivity in episodic migraine. Clin Neurophysiol 2024; 161:101-111. PMID: 38460220. DOI: 10.1016/j.clinph.2024.02.012.
Abstract
OBJECTIVE This study investigated how high-definition transcranial direct current stimulation (HD-tDCS) over the primary motor cortex (M1) affects brain signal variability and functional connectivity in the trigeminal pain pathway, and their association with changes in migraine attacks. METHODS Twenty-five episodic migraine patients were randomized for ten daily sessions of active or sham M1 HD-tDCS. Resting-state blood-oxygenation-level-dependent (BOLD) signal variability and seed-based functional connectivity were assessed pre- and post-treatment. A mediation analysis was performed to test whether BOLD signal variability mediates the relationship between treatment group and moderate-to-severe headache days. RESULTS The active M1 HD-tDCS group showed reduced BOLD variability in the spinal trigeminal nucleus (SpV) and thalamus, but increased variability in the rostral anterior cingulate cortex (rACC) compared to the sham group. Connectivity decreased between medial pulvinar-temporal pole, medial dorsal-precuneus, and the ventral posterior medial nucleus-SpV, but increased between the rACC-amygdala, and the periaqueductal gray-parahippocampal gyrus. Changes in medial pulvinar variability mediated the reduction in moderate-to-severe headache days at one-month post-treatment. CONCLUSIONS M1 HD-tDCS alters BOLD signal variability and connectivity in the trigeminal somatosensory and modulatory pain system, potentially alleviating migraine headache attacks. SIGNIFICANCE M1 HD-tDCS realigns brain signal variability and connectivity in migraineurs closer to healthy control levels.
Affiliation(s)
- Manyoel Lim
- Food Processing Research Group, Korea Food Research Institute, Wanju-gun, Jeollabuk-do 55365, Republic of Korea; Department of Biologic and Materials Sciences & Prosthodontics, University of Michigan School of Dentistry, Ann Arbor, MI 48109, USA
- Dajung J Kim
- Department of Biologic and Materials Sciences & Prosthodontics, University of Michigan School of Dentistry, Ann Arbor, MI 48109, USA
- Thiago D Nascimento
- Department of Biologic and Materials Sciences & Prosthodontics, University of Michigan School of Dentistry, Ann Arbor, MI 48109, USA
- Alexandre F DaSilva
- Department of Biologic and Materials Sciences & Prosthodontics, University of Michigan School of Dentistry, Ann Arbor, MI 48109, USA; Michigan Neuroscience Institute, University of Michigan, Ann Arbor, MI 48109, USA
62
Terada Y, Toyoizumi T. Chaotic neural dynamics facilitate probabilistic computations through sampling. Proc Natl Acad Sci U S A 2024; 121:e2312992121. PMID: 38648479. PMCID: PMC11067032. DOI: 10.1073/pnas.2312992121.
Abstract
Cortical neurons exhibit highly variable responses over trials and time. Theoretical work posits that this variability potentially arises from the chaotic network dynamics of recurrently connected neurons. Here, we demonstrate that chaotic neural dynamics, formed through synaptic learning, allow networks to perform sensory cue integration in a sampling-based implementation. We show that the emergent chaotic dynamics provide neural substrates for generating samples not only of a static variable but also of a dynamical trajectory, and that generic recurrent networks acquire these abilities with a biologically plausible learning rule through trial and error. Furthermore, the networks generalize their experience from stimulus-evoked samples to inference when some or all sensory information is absent, which suggests a computational role for spontaneous activity as a representation of priors, as well as a tractable biological computation of marginal distributions. These findings suggest that chaotic neural dynamics may serve brain function as a Bayesian generative model.
Affiliation(s)
- Yu Terada
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Saitama 351-0198, Japan
- Department of Neurobiology, University of California, San Diego, La Jolla, CA 92093
- The Institute for Physics of Intelligence, The University of Tokyo, Tokyo 113-0033, Japan
- Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Saitama 351-0198, Japan
- Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo 113-8656, Japan
63
Koren V, Malerba SB, Schwalger T, Panzeri S. Structure, dynamics, coding and optimal biophysical parameters of efficient excitatory-inhibitory spiking networks. bioRxiv [Preprint] 2024:2024.04.24.590955. PMID: 38712237. PMCID: PMC11071478. DOI: 10.1101/2024.04.24.590955.
Abstract
The principle of efficient coding posits that sensory cortical networks are designed to encode maximal sensory information at minimal metabolic cost. Despite the major influence of efficient coding in neuroscience, it has remained unclear whether fundamental empirical properties of neural network activity can be explained solely by this normative principle. Here, we rigorously derive the structural, coding, biophysical and dynamical properties of excitatory-inhibitory recurrent networks of spiking neurons that emerge directly from imposing that the network minimize an instantaneous loss function and a time-averaged performance measure enacting efficient coding. The optimal network has biologically plausible biophysical features, including realistic integrate-and-fire spiking dynamics, spike-triggered adaptation, and a non-stimulus-specific excitatory external input regulating metabolic cost. The efficient network has excitatory-inhibitory recurrent connectivity between neurons with similar stimulus tuning, implementing feature-specific competition similar to that recently found in visual cortex. Networks with unstructured connectivity cannot reach comparable levels of coding efficiency. The optimal biophysical parameters include a 4:1 ratio of excitatory to inhibitory neurons and a 3:1 ratio of mean inhibitory-to-inhibitory vs. excitatory-to-inhibitory connectivity, closely matching those of cortical sensory networks. The efficient network exhibits biologically plausible spiking dynamics with a tight instantaneous E-I balance, making it capable of efficiently coding external stimuli that vary over multiple timescales. Together, these results explain how efficient coding may be implemented in cortical networks and suggest that key properties of biological neural networks may be accounted for by efficient coding.
Affiliation(s)
- Veronika Koren
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), 20251 Hamburg, Germany
- Institute of Mathematics, Technische Universität Berlin, 10623 Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
- Simone Blanco Malerba
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), 20251 Hamburg, Germany
- Tilo Schwalger
- Institute of Mathematics, Technische Universität Berlin, 10623 Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
- Stefano Panzeri
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), 20251 Hamburg, Germany
|
64
|
Zobaer MS, Lotfi N, Domenico CM, Hoffman C, Perotti L, Ji D, Dabaghian Y. Theta oscillons in behaving rats. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.04.21.590487. [PMID: 38712230 PMCID: PMC11071438 DOI: 10.1101/2024.04.21.590487] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/08/2024]
Abstract
Recently discovered constituents of the brain waves, the oscillons, provide a high-resolution representation of the extracellular field dynamics. Here we study the most robust, highest-amplitude oscillons that manifest in actively behaving rats and generally correspond to the traditional θ-waves. We show that the resemblances between θ-oscillons and the conventional θ-waves apply to the ballpark characteristics: mean frequencies, amplitudes, and bandwidths. In addition, both hippocampal and cortical oscillons exhibit a number of intricate, behavior-attuned, transient properties that suggest a new vantage point for understanding the structure, origins and functions of the θ-rhythms. We demonstrate that oscillons are frequency-modulated waves, with speed-controlled parameters, embedded into a noise background. We also use a basic model of neuronal synchronization to contextualize and interpret the observed phenomena. In particular, we argue that the synchronicity level in physiological networks is fairly weak and modulated by the animal's locomotion.
|
65
|
Mosberger AC, Sibener LJ, Chen TX, Rodrigues HFM, Hormigo R, Ingram JN, Athalye VR, Tabachnik T, Wolpert DM, Murray JM, Costa RM. Exploration biases forelimb reaching strategies. Cell Rep 2024; 43:113958. [PMID: 38520691 PMCID: PMC11097405 DOI: 10.1016/j.celrep.2024.113958] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Revised: 12/05/2023] [Accepted: 02/28/2024] [Indexed: 03/25/2024] Open
Abstract
The brain can generate actions, such as reaching to a target, using different movement strategies. We investigate how such strategies are learned in a task where perched, head-fixed mice learn to reach to an invisible target area from a set start position using a joystick. This can be achieved by learning to move in a specific direction or to a specific endpoint location. As mice learn to reach the target, they refine their variable joystick trajectories into controlled reaches, which depend on the sensorimotor cortex. We show that individual mice learned strategies biased toward either direction- or endpoint-based movements. This endpoint/direction bias correlates with the spatial directional variability with which the workspace was explored during training. Model-free reinforcement learning agents can generate both strategies, with a similar correlation between variability during training and learning bias. These results provide evidence that reinforcement of individual exploratory behavior during training biases the reaching strategies that mice learn.
Affiliation(s)
- Alice C Mosberger
- Departments of Neuroscience and Neurology, Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Leslie J Sibener
- Departments of Neuroscience and Neurology, Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Tiffany X Chen
- Departments of Neuroscience and Neurology, Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Helio F M Rodrigues
- Departments of Neuroscience and Neurology, Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Allen Institute, Seattle, WA 98109, USA
- Richard Hormigo
- Department of Neuroscience, Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- James N Ingram
- Department of Neuroscience, Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Vivek R Athalye
- Departments of Neuroscience and Neurology, Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Tanya Tabachnik
- Department of Neuroscience, Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Daniel M Wolpert
- Department of Neuroscience, Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- James M Murray
- Institute of Neuroscience, University of Oregon, Eugene, OR 97403, USA
- Rui M Costa
- Departments of Neuroscience and Neurology, Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Allen Institute, Seattle, WA 98109, USA
|
66
|
Zobaer MS, Lotfi N, Domenico CM, Hoffman C, Perotti L, Ji D, Dabaghian Y. Theta oscillons in behaving rats. ARXIV 2024:arXiv:2404.13851v1. [PMID: 38711435 PMCID: PMC11071536] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 05/08/2024]
Abstract
Recently discovered constituents of the brain waves, the oscillons, provide a high-resolution representation of the extracellular field dynamics. Here we study the most robust, highest-amplitude oscillons that manifest in actively behaving rats and generally correspond to the traditional θ-waves. We show that the resemblances between θ-oscillons and the conventional θ-waves apply to the ballpark characteristics: mean frequencies, amplitudes, and bandwidths. In addition, both hippocampal and cortical oscillons exhibit a number of intricate, behavior-attuned, transient properties that suggest a new vantage point for understanding the structure, origins and functions of the θ-rhythms. We demonstrate that oscillons are frequency-modulated waves, with speed-controlled parameters, embedded into a noise background. We also use a basic model of neuronal synchronization to contextualize and interpret the observed phenomena. In particular, we argue that the synchronicity level in physiological networks is fairly weak and modulated by the animal's locomotion.
Affiliation(s)
- M. S. Zobaer
- Department of Neurology, The University of Texas Health Science Center at Houston, Houston, TX 77030
- N. Lotfi
- Department of Neurology, The University of Texas Health Science Center at Houston, Houston, TX 77030
- C. M. Domenico
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030
- C. Hoffman
- Department of Neurology, The University of Texas Health Science Center at Houston, Houston, TX 77030
- L. Perotti
- Department of Physics, Texas Southern University, 3100 Cleburne Ave., Houston, Texas 77004
- D. Ji
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030
- Y. Dabaghian
- Department of Neurology, The University of Texas Health Science Center at Houston, Houston, TX 77030
|
67
|
Menéndez JA, Hennig JA, Golub MD, Oby ER, Sadtler PT, Batista AP, Chase SM, Yu BM, Latham PE. A theory of brain-computer interface learning via low-dimensional control. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.04.18.589952. [PMID: 38712193 PMCID: PMC11071278 DOI: 10.1101/2024.04.18.589952] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/08/2024]
Abstract
A remarkable demonstration of the flexibility of mammalian motor systems is primates' ability to learn to control brain-computer interfaces (BCIs). This constitutes a completely novel motor behavior, yet primates are capable of learning to control BCIs under a wide range of conditions. BCIs with carefully calibrated decoders, for example, can be learned with only minutes to hours of practice. With a few weeks of practice, even BCIs with randomly constructed decoders can be learned. What are the biological substrates of this learning process? Here, we develop a theory based on a re-aiming strategy, whereby learning operates within a low-dimensional subspace of task-relevant inputs driving the local population of recorded neurons. Through comprehensive numerical and formal analysis, we demonstrate that this theory can provide a unifying explanation for disparate phenomena previously reported in three different BCI learning tasks, and we derive a novel experimental prediction that we verify with previously published data. By explicitly modeling the underlying neural circuitry, the theory reveals an interpretation of these phenomena in terms of biological constraints on neural activity.
|
68
|
Tessari F, Hermus J, Sugimoto-Dimitrova R, Hogan N. Brownian processes in human motor control support descending neural velocity commands. Sci Rep 2024; 14:8341. [PMID: 38594312 PMCID: PMC11004188 DOI: 10.1038/s41598-024-58380-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2023] [Accepted: 03/28/2024] [Indexed: 04/11/2024] Open
Abstract
The motor neuroscience literature suggests that the central nervous system may encode some motor commands in terms of velocity. In this work, we tackle the question: what consequences would velocity commands produce at the behavioral level? Considering the ubiquitous presence of noise in the neuromusculoskeletal system, we predict that velocity commands affected by stationary noise would produce "random walks", also known as Brownian processes, in position. Brownian motions are distinctively characterized by a linearly growing variance and a power spectral density that declines in inverse proportion to the square of frequency. This work first shows that these Brownian processes are indeed observed in unbounded motion tasks, e.g., rotating a crank. We further predict that such growing variance would still be present, but bounded, in tasks requiring a constant posture, e.g., maintaining a static hand position or quietly standing. This hypothesis was also confirmed by experimental observations. A series of descriptive models is investigated to justify the observed behavior. Interestingly, one of the models capable of accounting for all the experimental results must feature forward-path velocity commands corrupted by stationary noise. The results of this work provide behavioral support for the hypothesis that humans plan the motion components of their actions in terms of velocity.
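The predicted signature is easy to reproduce in simulation: integrating a velocity command corrupted by stationary white noise yields a position trace whose variance grows linearly with time. A minimal sketch (parameters are illustrative, not the authors' experimental values):

```python
import numpy as np

rng = np.random.default_rng(1)

# A constant velocity command corrupted by stationary white noise,
# integrated to position, produces a Brownian process: the positional
# variance across trials grows linearly with elapsed time.
dt, n_steps, n_trials = 0.01, 1000, 2000
v_cmd = 0.0          # zero-mean command isolates the effect of the noise
sigma = 1.0          # intensity of the stationary velocity noise

# increments scaled so that Var[x(t)] = sigma^2 * t
dv = v_cmd * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_trials, n_steps))
x = np.cumsum(dv, axis=1)          # position trajectories, one per trial

t_half, t_full = n_steps // 2 - 1, n_steps - 1
var_half = x[:, t_half].var()      # positional variance at t = 5 s
var_full = x[:, t_full].var()      # positional variance at t = 10 s
print(var_full / var_half)         # ~2: variance doubles when time doubles
```

Extending this sketch with a spring-like restoring term would bound the variance, as the abstract predicts for postural tasks.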
Affiliation(s)
- Federico Tessari
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA.
- James Hermus
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
- Rika Sugimoto-Dimitrova
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
- Neville Hogan
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
|
69
|
Li C, Qiu J, Huang H. Meta predictive learning model of languages in neural circuits. Phys Rev E 2024; 109:044309. [PMID: 38755909 DOI: 10.1103/physreve.109.044309] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2023] [Accepted: 03/18/2024] [Indexed: 05/18/2024]
Abstract
Large language models based on self-attention mechanisms have achieved astonishing performance, not only on natural language itself but also on a variety of tasks of a different nature. However, when processing language, the human brain may not operate on the same principles. This has opened a debate about the connection between brain computation and the artificial self-supervision adopted in large language models. One of the most influential hypotheses in brain computation is the predictive coding framework, which proposes to minimize prediction error via local learning. However, the role of predictive coding and the associated credit assignment in language processing remains unknown. Here, we propose a mean-field learning model within the predictive coding framework, assuming that the synaptic weight of each connection follows a spike-and-slab distribution, and that only the distribution, rather than specific weights, is trained. This meta predictive learning is successfully validated on classifying handwritten digits, where pixels are input to the network in sequence, and moreover on toy and real language corpora. Our model reveals that most of the connections become deterministic after learning, while the output connections retain a higher level of variability. The performance of the resulting network ensemble changes continuously with data load, further improving with more training data, in analogy with the emergent behavior of large language models. Therefore, our model provides a starting point for investigating the connection among brain computation, next-token prediction, and general intelligence.
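The spike-and-slab weight distribution assumed for each connection can be sketched as follows (parameter values are illustrative, not taken from the paper): each weight is exactly zero with probability 1 − π (the "spike") and drawn from a Gaussian otherwise (the "slab"), so a trained distribution can represent both silent and reliable connections.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sample weights from a spike-and-slab distribution: with probability pi
# the weight comes from a Gaussian "slab" N(m, s^2), otherwise it is
# exactly zero (the "spike"). Only (pi, m, s) would be trained.
def sample_spike_slab(pi, m, s, size, rng):
    gate = rng.random(size) < pi          # slab (True) vs spike (False)
    slab = rng.normal(m, s, size)         # Gaussian slab component
    return gate * slab                    # zeroes out the spiked weights

pi, m, s = 0.3, 0.5, 0.1
w = sample_spike_slab(pi, m, s, 100_000, rng)

print(round(float(np.mean(w)), 3))        # ~ pi * m = 0.15
print(round(float(np.mean(w == 0.0)), 3)) # ~ 1 - pi: silent connections
```

In this picture, "connections becoming deterministic after learning" corresponds to π approaching 0 or 1 and the slab variance s² shrinking.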
Affiliation(s)
- Chan Li
- PMI Laboratory, School of Physics, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
- Department of Physics, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA
- Junbin Qiu
- PMI Laboratory, School of Physics, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
- Haiping Huang
- PMI Laboratory, School of Physics, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
- Guangdong Provincial Key Laboratory of Magnetoelectric Physics and Devices, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
|
70
|
Gershman SJ. What have we learned about artificial intelligence from studying the brain? BIOLOGICAL CYBERNETICS 2024; 118:1-5. [PMID: 38337064 DOI: 10.1007/s00422-024-00983-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/21/2023] [Accepted: 01/11/2024] [Indexed: 02/12/2024]
Abstract
Neuroscience and artificial intelligence (AI) share a long, intertwined history. It has been argued that discoveries in neuroscience were (and continue to be) instrumental in driving the development of new AI technology. Scrutinizing these historical claims yields a more nuanced story, where AI researchers were loosely inspired by the brain, but ideas flowed mostly in the other direction.
Affiliation(s)
- Samuel J Gershman
- Department of Psychology and Center for Brain Science, Harvard University, Cambridge, USA
- Center for Brains, Minds, and Machines, MIT, Cambridge, USA
|
71
|
Kayser C, Heuer H. Multisensory perception depends on the reliability of the type of judgment. J Neurophysiol 2024; 131:723-737. [PMID: 38416720 DOI: 10.1152/jn.00451.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2023] [Revised: 02/05/2024] [Accepted: 02/24/2024] [Indexed: 03/01/2024] Open
Abstract
The brain engages the processes of multisensory integration and recalibration to deal with discrepant multisensory signals. These processes consider the reliability of each sensory input, with the more reliable modality receiving the stronger weight. Sensory reliability is typically assessed via the variability of participants' judgments, yet these can be shaped by factors both external and internal to the nervous system. For example, motor noise and participants' dexterity with the specific response method contribute to judgment variability, and different response methods applied to the same stimuli can result in different estimates of sensory reliabilities. Here we ask how such variations in reliability induced by variations in the response method affect multisensory integration and sensory recalibration, as well as motor adaptation, in a visuomotor paradigm. Participants performed center-out hand movements and were asked to judge the position of the hand or of rotated visual feedback at the movement end points. We manipulated the variability, and thus the reliability, of repeated judgments by asking participants to respond using either a visual or a proprioceptive matching procedure. We find that the relative weights of visual and proprioceptive signals, and thus the asymmetry of multisensory integration and recalibration, depend on the reliability modulated by the judgment method. Motor adaptation, in contrast, was insensitive to this manipulation. Hence, the outcome of multisensory binding is shaped by the noise introduced by sensorimotor processing, in line with perception and action being intertwined. NEW & NOTEWORTHY Our brain tends to combine multisensory signals based on their respective reliability. This reliability depends on sensory noise in the environment, noise in the nervous system, and, as we show here, variability induced by the specific judgment procedure.
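The reliability weighting referred to here is the standard inverse-variance (maximum-likelihood) scheme. A minimal sketch with made-up numbers shows how inflating one modality's judgment variability, for instance through the response method, shifts the weights toward the other modality:

```python
# Inverse-variance (maximum-likelihood) cue integration: each cue is
# weighted by its reliability 1/sigma^2, and the fused estimate is more
# precise than either cue alone. All numbers are illustrative.
def integrate(mu_v, sigma_v, mu_p, sigma_p):
    w_v = sigma_v**-2 / (sigma_v**-2 + sigma_p**-2)   # visual weight
    mu = w_v * mu_v + (1 - w_v) * mu_p                # fused estimate
    sigma = (sigma_v**-2 + sigma_p**-2) ** -0.5       # fused (smaller) std
    return mu, sigma, w_v

# Same stimuli, but the response method doubles proprioceptive variability:
mu1, s1, w1 = integrate(0.0, 1.0, 10.0, 1.0)   # equally reliable cues
mu2, s2, w2 = integrate(0.0, 1.0, 10.0, 2.0)   # noisier proprioceptive cue

print(w1, w2)    # 0.5 -> 0.8: visual weight grows as the other cue degrades
print(mu1, mu2)  # fused estimate shifts toward the visual cue
```

The asymmetry the study reports corresponds to w_v moving away from 0.5 when the judgment procedure changes one cue's effective reliability.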
Affiliation(s)
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
|
72
|
Rodrigues EA, Christie GJ, Cosco T, Farzan F, Sixsmith A, Moreno S. A Subtype Perspective on Cognitive Trajectories in Healthy Aging. Brain Sci 2024; 14:351. [PMID: 38672003 PMCID: PMC11048421 DOI: 10.3390/brainsci14040351] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2024] [Revised: 03/25/2024] [Accepted: 03/30/2024] [Indexed: 04/28/2024] Open
Abstract
Cognitive aging is a complex and dynamic process shaped by genetic and environmental factors, including lifestyle choices and environmental exposures, which contribute to the heterogeneity observed in cognitive outcomes. This heterogeneity is particularly pronounced among older adults: some individuals maintain stable cognitive function while others experience complex, non-linear changes, making it difficult to identify meaningful decline accurately. Current research methods range from population-level modeling to individual-specific assessments. In this work, we review these methodologies and propose that population subtyping should be considered as a viable alternative. This approach relies on early, individual-specific detection methods that can improve our understanding of individual cognitive trajectories, and thereby support the identification of meaningful changes and the determination of timely, effective interventions. It can also inform policy decisions and the development of targeted interventions that promote cognitive health, ultimately contributing to a more personalized understanding of the aging process within society and reducing the burden on healthcare systems.
Affiliation(s)
- Emma A. Rodrigues
- School of Interactive Arts and Technology, Simon Fraser University, Surrey, BC V3T 0A3, Canada
- Theodore Cosco
- Department of Gerontology, Simon Fraser University, Vancouver, BC V6B 5K3, Canada
- Faranak Farzan
- School of Mechatronics and Systems Engineering, Simon Fraser University, Surrey, BC V3T 0A3, Canada
- Andrew Sixsmith
- Department of Gerontology, Simon Fraser University, Vancouver, BC V6B 5K3, Canada
- Sylvain Moreno
- School of Interactive Arts and Technology, Simon Fraser University, Surrey, BC V3T 0A3, Canada
- Circle Innovation, Simon Fraser University, Surrey, BC V3T 0A3, Canada
|
73
|
Tomić A, Sarasso E, Basaia S, Dragašević-Misković N, Svetel M, Kostić VS, Filippi M, Agosta F. Structural brain heterogeneity underlying symptomatic and asymptomatic genetic dystonia: a multimodal MRI study. J Neurol 2024; 271:1767-1775. [PMID: 38019294 DOI: 10.1007/s00415-023-12098-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2023] [Revised: 11/02/2023] [Accepted: 11/03/2023] [Indexed: 11/30/2023]
Abstract
BACKGROUND Most DYT genotypes follow an autosomal dominant inheritance pattern with reduced penetrance; the mechanisms underlying disease development remain unclear. The objective of the study was to investigate cortical thickness, grey matter (GM) volumes and white matter (WM) alterations in asymptomatic (DYT-A) and symptomatic (DYT-S) dystonia mutation carriers. METHODS Eight DYT-A (four DYT-TOR1A and four DYT-THAP1), 14 DYT-S (seven DYT-TOR1A and seven DYT-THAP1), and 37 matched healthy controls underwent 3D T1-weighted and diffusion tensor (DT) MRI to study cortical thickness, cerebellar and basal ganglia GM volumes, and WM microstructural changes. RESULTS DYT-S showed thinning of the frontal and motor cortical regions related to sensorimotor and cognitive processing, together with putaminal atrophy and subcortical microstructural WM damage of both motor and extra-motor tracts such as the cerebral peduncle, corona radiata, internal and external capsule, temporal and orbitofrontal WM, and corpus callosum. DYT-A had cortical thickening of middle frontal areas and WM damage of the corona radiata. CONCLUSIONS The phenotypic expression of DYT genes is associated with alterations of both motor and extra-motor WM and GM regions. Asymptomatic genetic status is characterized by very subtle involvement of the WM motor pathway, together with an increased cortical thickness of higher-order frontal regions that might interfere with phenotypic presentation and disease manifestation.
Affiliation(s)
- Aleksandra Tomić
- Clinic of Neurology, Faculty of Medicine, University of Belgrade, Belgrade, Serbia
- Elisabetta Sarasso
- Neuroimaging Research Unit, Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Vita-Salute San Raffaele University, Milan, Italy
- Department of Neuroscience, Rehabilitation, Ophthalmology, Genetics and Maternal Child Health, University of Genoa, Genoa, Italy
- Silvia Basaia
- Neuroimaging Research Unit, Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Marina Svetel
- Clinic of Neurology, Faculty of Medicine, University of Belgrade, Belgrade, Serbia
- Vladimir S Kostić
- Clinic of Neurology, Faculty of Medicine, University of Belgrade, Belgrade, Serbia
- Massimo Filippi
- Neuroimaging Research Unit, Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Vita-Salute San Raffaele University, Milan, Italy
- Neurology Unit, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Neurorehabilitation Unit, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Neurophysiology Service, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Federica Agosta
- Neuroimaging Research Unit, Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Vita-Salute San Raffaele University, Milan, Italy
- Neurology Unit, IRCCS San Raffaele Scientific Institute, Milan, Italy
|
74
|
Fitz H, Hagoort P, Petersson KM. Neurobiological Causal Models of Language Processing. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2024; 5:225-247. [PMID: 38645618 PMCID: PMC11025648 DOI: 10.1162/nol_a_00133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/29/2022] [Accepted: 12/18/2023] [Indexed: 04/23/2024]
Abstract
The language faculty is physically realized in the neurobiological infrastructure of the human brain. Despite significant efforts, an integrated understanding of this system remains a formidable challenge. What is missing from most theoretical accounts is a specification of the neural mechanisms that implement language function. Computational models that have been put forward generally lack an explicit neurobiological foundation. We propose a neurobiologically informed causal modeling approach that offers a framework for bridging this gap. A neurobiological causal model is a mechanistic description of language processing that is grounded in, and constrained by, the characteristics of the neurobiological substrate. It aims to model the generators of language behavior at the level of implementational causality. We describe key features and neurobiological component parts from which causal models can be built and provide guidelines on how to implement them in model simulations. We then outline how this approach can shed new light on the core computational machinery for language, the long-term storage of words in the mental lexicon, and combinatorial processing in sentence comprehension. In contrast to cognitive theories of behavior, causal models are formulated in the "machine language" of neurobiology, which is universal to human cognition. We argue that neurobiological causal modeling should be pursued in addition to existing approaches. Eventually, this approach will allow us to develop an explicit computational neurobiology of language.
Affiliation(s)
- Hartmut Fitz
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Peter Hagoort
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Karl Magnus Petersson
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Faculty of Medicine and Biomedical Sciences, University of Algarve, Faro, Portugal
|
75
|
Scheller M, Nardini M. Correctly establishing evidence for cue combination via gains in sensory precision: Why the choice of comparator matters. Behav Res Methods 2024; 56:2842-2858. [PMID: 37730934 PMCID: PMC11133123 DOI: 10.3758/s13428-023-02227-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/27/2023] [Indexed: 09/22/2023]
Abstract
Studying how sensory signals from different sources (sensory cues) are integrated within or across multiple senses allows us to better understand the perceptual computations that lie at the foundation of adaptive behaviour. As such, determining the presence of precision gains, the classic hallmark of cue combination, is important for characterising perceptual systems, their development, and their functioning in clinical conditions. However, empirically measuring precision gains to distinguish cue combination from alternative perceptual strategies requires careful methodological considerations. Here, we note that the majority of existing studies that tested for cue combination either omitted this important contrast or used an analysis approach that, unknowingly, strongly inflated false positives. Using simulations, we demonstrate that this approach enhances the chances of finding significant cue combination effects in up to 100% of cases, even when cues are not combined. We establish how this error arises when the wrong cue comparator is chosen and recommend an alternative analysis that is easy to implement but has only been adopted by relatively few studies. By comparing combined-cue perceptual precision with the best single-cue precision, determined for each observer individually rather than at the group level, researchers can enhance the credibility of their reported effects. We also note that testing for deviations from optimal predictions alone is not sufficient to ascertain whether cues are combined. Taken together, to correctly test for perceptual precision gains, we advocate careful comparator selection and task design to ensure that cue combination is tested with maximum power while reducing the inflation of false positives.
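The comparator problem can be illustrated with a toy simulation (numbers and setup are illustrative, not the paper's actual simulations): observers who merely switch to whichever of their two cues is individually more precise, without any integration, still beat a group-level single-cue comparator, while the per-observer comparator correctly shows no gain.

```python
import numpy as np

rng = np.random.default_rng(3)

# Observers do NOT combine cues: each simply uses their own more precise
# cue. A group-level comparator nevertheless reports a spurious
# "precision gain"; the per-observer comparator does not.
n_obs = 10_000
sigma_a = rng.uniform(0.5, 1.5, n_obs)   # cue A precision varies across observers
sigma_b = rng.uniform(0.5, 1.5, n_obs)   # cue B likewise; neither dominates

best_own = np.minimum(sigma_a, sigma_b)  # cue-switching, no integration
combined = best_own                      # "combined" condition = own best cue

# Wrong: compare against the cue that is best on average across the group.
group_best = min(sigma_a.mean(), sigma_b.mean())
print(combined.mean() < group_best)      # True: spurious group-level gain

# Right: compare each observer's combined precision to their own best cue.
print(np.mean(combined < best_own))      # 0.0: no real gain for any observer
```

Genuine combination would require combined-cue variability below each observer's own best single cue, which never happens here; that is the contrast the recommended analysis tests.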
Affiliation(s)
- Meike Scheller
- Department of Psychology, Durham University, Durham, UK.
- Marko Nardini
- Department of Psychology, Durham University, Durham, UK
|
76
|
Alizadeh Darbandi SS, Fornito A, Ghasemi A. The impact of input node placement in the controllability of structural brain networks. Sci Rep 2024; 14:6902. [PMID: 38519624 PMCID: PMC10960045 DOI: 10.1038/s41598-024-57181-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2023] [Accepted: 03/14/2024] [Indexed: 03/25/2024] Open
Abstract
Network controllability refers to the ability to steer the state of a network towards a target state by driving certain nodes, known as input nodes. This concept can be applied to brain networks to study brain function and its relation to structure, which has numerous practical applications. Brain network controllability involves using external signals, such as electrical stimulation, to drive specific brain regions and navigate the neurophysiological activity level of the brain around the state space. Although controllability is mainly theoretical, the energy required for control is critical in real-world implementations. Focusing on structural brain networks, this study explores the impact of white matter fiber architecture on control energy, using the theory of how input node placement affects the LCC (the longest distance between inputs and other network nodes). Initially, we use a single input node, as it is theoretically possible to control brain networks with just one input. We show that highly connected brain regions that lead to lower LCCs are more energy-efficient as a single input node. However, a significant amount of control energy may still be needed with one input, and achieving controllability with less energy could be of interest. We identify the minimum number of input nodes required to control brain networks with smaller LCCs, demonstrating that reducing the LCC can significantly decrease the control energy in brain networks. Our results show that, when using multiple inputs, relying solely on highly connected nodes is not effective for controlling brain networks with lower energy, because brain network hubs are densely interconnected; instead, a combination of low- and high-degree nodes is necessary.
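The control-energy quantity at issue can be sketched with the standard finite-horizon controllability Gramian on a toy directed chain (illustrative linear dynamics, not the paper's brain networks): the minimum input energy to reach a state x_f from rest is x_fᵀ W_T⁻¹ x_f, and it shrinks as the control horizon grows.

```python
import numpy as np

# Toy directed chain with a single input at the head node. The
# finite-horizon controllability Gramian W_T = sum_k A^k B B' (A')^k
# gives the minimum input energy to reach x_f as x_f' W_T^{-1} x_f.
n = 5
A = np.diag(np.full(n - 1, 0.9), k=-1)     # chain: node 0 -> 1 -> ... -> 4
A += 0.5 * np.eye(n)                       # leaky self-dynamics (stable)
B = np.zeros((n, 1)); B[0, 0] = 1.0        # single input at the chain's head

def control_energy(A, B, x_f, T):
    W = np.zeros_like(A)                   # accumulate the T-step Gramian
    Ak = np.eye(len(A))
    for _ in range(T):
        W += Ak @ B @ B.T @ Ak.T
        Ak = A @ Ak
    return float(x_f @ np.linalg.solve(W, x_f))

x_f = np.ones(n)                           # drive every node to activity 1
e_short = control_energy(A, B, x_f, T=6)
e_long = control_energy(A, B, x_f, T=30)
print(e_short, e_long)                     # longer horizons need less energy
```

In this sketch the single input must relay influence down the whole chain, node 0's distance to node 4 playing the role of the LCC; placements that shorten that longest distance enlarge the Gramian's smallest eigenvalues and so reduce the required energy.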
Affiliation(s)
- Alex Fornito
- The Turner Institute for Brain and Mental Health, School of Psychological Sciences, and Monash Biomedical Imaging, Monash University, Clayton, Victoria, Australia
- Abdorasoul Ghasemi
- Department of Computer Engineering, K. N. Toosi University of Technology, Tehran, Iran.
77
Manley J, Vaziri A. Whole-brain neural substrates of behavioral variability in the larval zebrafish. bioRxiv 2024:2024.03.03.583208. [PMID: 38496592 PMCID: PMC10942351 DOI: 10.1101/2024.03.03.583208]
Abstract
Animals engaged in naturalistic behavior can exhibit a large degree of behavioral variability even under sensory invariant conditions. Such behavioral variability can include not only variations of the same behavior, but also variability across qualitatively different behaviors driven by divergent cognitive states, such as fight-or-flight decisions. However, the neural circuit mechanisms that generate such divergent behaviors across trials are not well understood. To investigate this question, here we studied the visual-evoked responses of larval zebrafish to moving objects of various sizes, which we found exhibited highly variable and divergent responses across repetitions of the same stimulus. Given that the neuronal circuits underlying such behaviors span sensory, motor, and other brain areas, we built a novel Fourier light field microscope which enables high-resolution, whole-brain imaging of larval zebrafish during behavior. This enabled us to screen for neural loci which exhibited activity patterns correlated with behavioral variability. We found that despite the highly variable activity of single neurons, visual stimuli were robustly encoded at the population level, and the visual-encoding dimensions of neural activity did not explain behavioral variability. This robustness despite apparent single neuron variability was due to the multi-dimensional geometry of the neuronal population dynamics: almost all neural dimensions that were variable across individual trials, i.e. the "noise" modes, were orthogonal to those encoding for sensory information. Investigating this neuronal variability further, we identified two sparsely-distributed, brain-wide neuronal populations whose pre-motor activity predicted whether the larva would respond to a stimulus and, if so, which direction it would turn on a single-trial level. 
These populations predicted single-trial behavior seconds before stimulus onset, indicating that they encoded time-varying internal states modulating behavior, perhaps organizing behavior over longer timescales or enabling flexible behavioral routines dependent on the animal's internal state. Our results provide the first whole-brain confirmation that sensory, motor, and internal variables are encoded in a highly mixed fashion throughout the brain and demonstrate that de-mixing each of these components at the neuronal population level is critical to understanding the mechanisms underlying the brain's remarkable flexibility and robustness.
Affiliation(s)
- Jason Manley
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
- Alipasha Vaziri
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
78
Roth AM, Lokesh R, Tang J, Buggeln JH, Smith C, Calalo JA, Sullivan SR, Ngo T, Germain LS, Carter MJ, Cashaback JGA. Punishment Leads to Greater Sensorimotor Learning But Less Movement Variability Compared to Reward. Neuroscience 2024; 540:12-26. [PMID: 38220127 PMCID: PMC10922623 DOI: 10.1016/j.neuroscience.2024.01.004]
Abstract
When a musician practices a new song, hitting a correct note sounds pleasant while striking an incorrect note sounds unpleasant. Such reward and punishment feedback has been shown to differentially influence the ability to learn a new motor skill. Recent work has suggested that punishment leads to greater movement variability, which causes greater exploration and faster learning. To further test this idea, we collected data from 102 participants across two experiments. Unlike previous work, in Experiment 1 we found that punishment did not lead to faster learning compared to reward (n = 68), but did lead to a greater extent of learning. Surprisingly, we also found evidence to suggest that punishment led to less movement variability, which was related to the extent of learning. We then designed a second experiment that did not involve adaptation, allowing us to further isolate the influence of punishment feedback on movement variability. In Experiment 2, we again found that punishment led to significantly less movement variability compared to reward (n = 34). Collectively, our results suggest that punishment feedback leads to less movement variability. Future work should investigate whether punishment feedback leads to a greater knowledge of movement variability and/or increases the sensitivity of updating motor actions.
Affiliation(s)
- Adam M Roth
- Department of Mechanical Engineering, University of Delaware, United States
- Rakshith Lokesh
- Department of Biomedical Engineering, University of Delaware, United States
- Jiaqiao Tang
- Department of Kinesiology, McMaster University, Canada
- John H Buggeln
- Department of Biomedical Engineering, University of Delaware, United States
- Carly Smith
- Department of Biomedical Engineering, University of Delaware, United States
- Jan A Calalo
- Department of Mechanical Engineering, University of Delaware, United States
- Seth R Sullivan
- Department of Biomedical Engineering, University of Delaware, United States
- Truc Ngo
- Department of Biomedical Engineering, University of Delaware, United States
- Joshua G A Cashaback
- Department of Mechanical Engineering, University of Delaware, United States; Department of Biomedical Engineering, University of Delaware, United States; Kinesiology and Applied Physiology, University of Delaware, United States; Interdisciplinary Neuroscience Graduate Program, University of Delaware, United States; Biomechanics and Movement Science Program, University of Delaware, United States; Department of Kinesiology, McMaster University, Canada.
79
Valli G, Sarto F, Casolo A, Del Vecchio A, Franchi MV, Narici MV, De Vito G. Lower limb suspension induces threshold-specific alterations of motor units properties that are reversed by active recovery. J Sport Health Sci 2024; 13:264-276. [PMID: 37331508 PMCID: PMC10980901 DOI: 10.1016/j.jshs.2023.06.004]
Abstract
PURPOSE This study aimed to non-invasively test the hypothesis that (a) short-term lower limb unloading would induce changes in the neural control of force production (based on motor units (MUs) properties) in the vastus lateralis muscle and (b) possible changes are reversed by active recovery (AR). METHODS Ten young males underwent 10 days of unilateral lower limb suspension (ULLS) followed by 21 days of AR. During ULLS, participants walked exclusively on crutches with the dominant leg suspended in a slightly flexed position (15°-20°) and with the contralateral foot raised by an elevated shoe. The AR was based on resistance exercise (leg press and leg extension) and executed at 70% of each participant's 1 repetition maximum, 3 times/week. Maximal voluntary isometric contraction (MVC) of knee extensors and MUs properties of the vastus lateralis muscle were measured at baseline, after ULLS, and after AR. MUs were identified using high-density electromyography during trapezoidal isometric contractions at 10%, 25%, and 50% of the current MVC, and individual MUs were tracked across the 3 data collection points. RESULTS We identified 1428 unique MUs, and 270 of them (18.9%) were accurately tracked. After ULLS, MVC decreased by 29.77%, MUs absolute recruitment/derecruitment thresholds were reduced at all contraction intensities (with changes between the 2 variables strongly correlated), while discharge rate was reduced at 10% and 25% but not at 50% MVC. Impaired MVC and MUs properties fully recovered to baseline levels after AR. Similar changes were observed in the pool of total as well as tracked MUs. CONCLUSION Our novel results demonstrate, non-invasively, that 10 days of ULLS affected neural control predominantly by altering the discharge rate of lower-threshold but not of higher-threshold MUs, suggesting a preferential impact of disuse on motoneurons with a lower depolarization threshold. 
However, after 21 days of AR, the impaired MUs properties were fully restored to baseline levels, highlighting the plasticity of the components involved in neural control.
Affiliation(s)
- Giacomo Valli
- Department of Biomedical Sciences, University of Padova, Padova 35131, Italy.
- Fabio Sarto
- Department of Biomedical Sciences, University of Padova, Padova 35131, Italy
- Andrea Casolo
- Department of Biomedical Sciences, University of Padova, Padova 35131, Italy
- Alessandro Del Vecchio
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander University, Erlangen-Nürnberg 91052, Germany
- Martino V Franchi
- Department of Biomedical Sciences, University of Padova, Padova 35131, Italy
- Marco V Narici
- Department of Biomedical Sciences, University of Padova, Padova 35131, Italy
- Giuseppe De Vito
- Department of Biomedical Sciences, University of Padova, Padova 35131, Italy
80
Haar S. Motor variability in task-space and body-space. Phys Life Rev 2024; 48:162-163. [PMID: 38237427 DOI: 10.1016/j.plrev.2024.01.002]
Affiliation(s)
- Shlomi Haar
- Department of Brain Sciences, Imperial College London, London, United Kingdom; UK Dementia Research Institute - Care Research and Technology Centre, Imperial College London, London, UK.
81
Jäger AP, Bailey A, Huntenburg JM, Tardif CL, Villringer A, Gauthier CJ, Nikulin V, Bazin P, Steele CJ. Decreased long-range temporal correlations in the resting-state functional magnetic resonance imaging blood-oxygen-level-dependent signal reflect motor sequence learning up to 2 weeks following training. Hum Brain Mapp 2024; 45:e26539. [PMID: 38124341 PMCID: PMC10915743 DOI: 10.1002/hbm.26539]
Abstract
Decreased long-range temporal correlations (LRTC) in brain signals can be used to measure cognitive effort during task execution. Here, we examined how learning a motor sequence affects long-range temporal memory within the resting-state functional magnetic resonance imaging signal. Using the Hurst exponent (HE), we estimated voxel-wise LRTC and assessed changes over 5 consecutive days of training, followed by a retention scan 12 days later. The experimental group learned a complex visuomotor sequence while a complementary control group performed tightly matched movements. An interaction analysis revealed that HE decreases were specific to the complex sequence and occurred in well-known motor sequence learning associated regions including left supplementary motor area, left premotor cortex, left M1, left pars opercularis, bilateral thalamus, and right striatum. Five regions exhibited moderate to strong negative correlations with overall behavioral performance improvements. Following learning, HE values returned to pretraining levels in some regions, whereas in others, they remained decreased even 2 weeks after training. Our study presents new evidence of HE's possible relevance for functional plasticity during the resting state and suggests that a cortical subset of sequence-specific regions may continue to represent a functional signature of learning, reflected in decreased long-range temporal dependence after a period of inactivity.
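The Hurst exponent at the heart of this study can be estimated in several ways; one common choice is first-order detrended fluctuation analysis (DFA). The sketch below is a generic illustration of that idea, not the authors' pipeline; the window scales and test signals are arbitrary:

```python
import numpy as np

def hurst_dfa(x, scales=(8, 16, 32, 64, 128)):
    """Estimate a Hurst-like scaling exponent via first-order detrended
    fluctuation analysis: slope of log F(s) against log s."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        t = np.arange(s)
        sq = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend
            sq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq)))       # fluctuation at this scale
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(0)
white = rng.standard_normal(8192)
h_white = hurst_dfa(white)             # uncorrelated noise: exponent near 0.5
h_brown = hurst_dfa(np.cumsum(white))  # integrated noise: much larger exponent
```

Uncorrelated noise yields an exponent near 0.5, while signals with long-range dependence yield larger values; that is the sense in which decreases in HE reflect reduced long-range temporal memory.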
Affiliation(s)
- Anna-Thekla P. Jäger
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Center for Stroke Research Berlin (CSB), Charité-Universitätsmedizin Berlin, Berlin, Germany
- Brain Language Lab, Freie Universität Berlin, Berlin, Germany
- Alexander Bailey
- Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Julia M. Huntenburg
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Max Planck Institute for Biological Cybernetics, Tuebingen, Germany
- Christine L. Tardif
- Department of Biomedical Engineering, McGill University, Montreal, Québec, Canada
- Montreal Neurological Institute, Montreal, Québec, Canada
- Arno Villringer
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Center for Stroke Research Berlin (CSB), Charité-Universitätsmedizin Berlin, Berlin, Germany
- Clinic for Cognitive Neurology, Leipzig, Germany
- Leipzig University Medical Centre, IFB Adiposity Diseases, Leipzig, Germany
- Collaborative Research Centre 1052-A5, University of Leipzig, Leipzig, Germany
- Claudine J. Gauthier
- Department of Physics/School of Health, Concordia University, Montreal, Québec, Canada
- Montreal Heart Institute, Montreal, Québec, Canada
- Vadim Nikulin
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Pierre-Louis Bazin
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Faculty of Social and Behavioral Sciences, University of Amsterdam, Amsterdam, Netherlands
- Christopher J. Steele
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Psychology/School of Health, Concordia University, Montreal, Québec, Canada
82
Papo D, Buldú JM. Does the brain behave like a (complex) network? I. Dynamics. Phys Life Rev 2024; 48:47-98. [PMID: 38145591 DOI: 10.1016/j.plrev.2023.12.006]
Abstract
Graph theory is now becoming a standard tool in system-level neuroscience. However, endowing observed brain anatomy and dynamics with a complex network structure does not entail that the brain actually works as a network. Asking whether the brain behaves as a network means asking whether network properties count. From the viewpoint of neurophysiology and, possibly, of brain physics, the most substantial issues a network structure may be instrumental in addressing relate to the influence of network properties on brain dynamics and to whether these properties ultimately explain some aspects of brain function. Here, we address the dynamical implications of complex networks, examining which aspects and scales of brain activity may be understood to genuinely behave as a network. To do so, we first define the meaning of networkness, and analyse some of its implications. We then examine ways in which brain anatomy and dynamics can be endowed with a network structure and discuss possible ways in which network structure may be shown to represent a genuine organisational principle of brain activity, rather than just a convenient description of its anatomy and dynamics.
Affiliation(s)
- D Papo
- Department of Neuroscience and Rehabilitation, Section of Physiology, University of Ferrara, Ferrara, Italy; Center for Translational Neurophysiology, Fondazione Istituto Italiano di Tecnologia, Ferrara, Italy.
- J M Buldú
- Complex Systems Group & G.I.S.C., Universidad Rey Juan Carlos, Madrid, Spain
83
Bull JW. Life Is Uncertain: Inherent Variability Exhibited by Organisms, and at Higher Levels of Biological Organization. Astrobiology 2024; 24:318-327. [PMID: 38350125 DOI: 10.1089/ast.2023.0094]
Abstract
Organisms act stochastically. A not uncommon view in the ecological literature is that this is mainly due to the observer having insufficient information or to a stochastic environment, and not partly because organisms themselves respond with inherent unpredictability. In this study, I compile the evidence that contradicts that view. Organisms generate uncertainty internally, which results in irreducible stochastic responses. I consider why: for instance, stochastic responses are associated with greater adaptability to changing environments and resource availability. Over longer timescales, biologically generated uncertainty influences behavior, evolution, and macroecological processes. Indeed, it could be stated that organisms are systems defined by the internal generation, magnification, and record-keeping of uncertainty as inputs to responses. Important practical implications arise if organisms can indeed be defined by an association with specific classes of inherent uncertainty: not least that isolating those signatures then provides a potential means for detecting life, for considering the forms that life could theoretically take, and for exploring the wider limits to how life might become distributed. These are all fundamental goals in astrobiology.
Affiliation(s)
- Joseph W Bull
- Department of Biology, University of Oxford, Oxford, United Kingdom
84
Hong J, Sun X, Peng J, Fu Q. A Bio-Inspired Probabilistic Neural Network Model for Noise-Resistant Collision Perception. Biomimetics (Basel) 2024; 9:136. [PMID: 38534821 DOI: 10.3390/biomimetics9030136]
Abstract
Bio-inspired models based on the lobula giant movement detector (LGMD) in the locust's visual brain have received extensive attention and application for collision perception in various scenarios. These models offer advantages such as low power consumption and high computational efficiency in visual processing. However, current LGMD-based computational models, typically organized as four-layered neural networks, often encounter challenges related to noisy signals, particularly in complex dynamic environments. Biological studies have unveiled the intrinsic stochastic nature of synaptic transmission, which can aid neural computation in mitigating noise. In alignment with these biological findings, this paper introduces a probabilistic LGMD (Prob-LGMD) model that incorporates a probability into the synaptic connections between multiple layers, thereby capturing the uncertainty in signal transmission, interaction, and integration among neurons. Comparative testing of the proposed Prob-LGMD model and two conventional LGMD models was conducted using a range of visual stimuli, including indoor structured scenes and complex outdoor scenes, all subject to artificial noise. Additionally, the model's performance was compared to standard engineering noise-filtering methods. The results clearly demonstrate that the proposed model outperforms all comparative methods, exhibiting a significant improvement in noise tolerance. This study showcases a straightforward yet effective approach to enhance collision perception in noisy environments.
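The stochastic synaptic transmission that Prob-LGMD incorporates can be caricatured as Bernoulli gating of each connection: a weight transmits on a given presentation only with probability p. The sketch below is a generic illustration of that mechanism, not the authors' four-layer network; the weights and p are arbitrary:

```python
import numpy as np

def prob_synapse_layer(x, W, p, rng):
    """Feed-forward layer with stochastic synapses: each weight transmits
    with probability p on a given presentation (Bernoulli gating), rescaled
    by 1/p so the expected output equals the deterministic response W @ x."""
    mask = rng.random(W.shape) < p   # which synapses transmit this time
    return (W * mask) @ x / p

rng = np.random.default_rng(1)
W = rng.random((3, 4))               # toy weights (not from the paper)
x = np.ones(4)
draws = np.stack([prob_synapse_layer(x, W, 0.7, rng) for _ in range(20000)])
mean_resp = draws.mean(axis=0)       # converges to the deterministic W @ x
```

Averaged over presentations, the stochastic layer matches the deterministic response, while individual presentations vary; this captures the basic intuition for how probabilistic transmission can be introduced without biasing the mean signal.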
Affiliation(s)
- Jialan Hong
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou 510006, China
- Xuelong Sun
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou 510006, China
- Jigen Peng
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou 510006, China
- Qinbing Fu
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou 510006, China
85
Barra B, Kumar R, Gopinath C, Mirzakhalili E, Lempka SF, Gaunt RA, Fisher LE. High-frequency amplitude-modulated sinusoidal stimulation induces desynchronized yet controllable neural firing. bioRxiv 2024:2024.02.14.580219. [PMID: 38405798 PMCID: PMC10888888 DOI: 10.1101/2024.02.14.580219]
Abstract
Regaining sensory feedback is pivotal for people living with limb amputation. Electrical stimulation of sensory fibers in peripheral nerves has been shown to restore focal percepts in the missing limb. However, conventional rectangular current pulses induce sensations often described as unnatural. This is likely due to the synchronous and periodic nature of activity evoked by these pulses. Here we introduce a fast-oscillating amplitude-modulated sinusoidal (FAMS) stimulation waveform that desynchronizes evoked neural activity. We used a computational model to show that sinusoidal waveforms evoke asynchronous and irregular firing and that firing patterns are frequency dependent. We designed the FAMS waveform to leverage both low- and high-frequency effects and found that membrane non-linearities enhance neuron-specific differences when exposed to FAMS. We implemented this waveform in a feline model of peripheral nerve stimulation and demonstrated that FAMS-evoked activity is more asynchronous than activity evoked by rectangular pulses, while being easily controllable with simple stimulation parameters. These results represent an important step towards biomimetic stimulation strategies useful for clinical applications to restore sensory feedback.
Affiliation(s)
- Beatrice Barra
- Rehab Neural Engineering Labs, University of Pittsburgh, Pittsburgh, PA, USA
- Neuroscience Institute, New York University Langone Health, New York, USA
- Ritesh Kumar
- Rehab Neural Engineering Labs, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, USA
- Chaitanya Gopinath
- Rehab Neural Engineering Labs, University of Pittsburgh, Pittsburgh, PA, USA
- Ehsan Mirzakhalili
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
- Biointerfaces Institute, University of Michigan, Ann Arbor, MI, USA
- Department of Neurosurgery, University of Pennsylvania, Philadelphia, USA
- Scott F. Lempka
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
- Biointerfaces Institute, University of Michigan, Ann Arbor, MI, USA
- Department of Anesthesiology, University of Michigan, Ann Arbor, MI, USA
- Robert A. Gaunt
- Rehab Neural Engineering Labs, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, USA
- Department of Physical Medicine & Rehabilitation, University of Pittsburgh, Pittsburgh, USA
- Lee E Fisher
- Rehab Neural Engineering Labs, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, USA
- Department of Physical Medicine & Rehabilitation, University of Pittsburgh, Pittsburgh, USA
86
Nakuci J, Yeon J, Haddara N, Kim JH, Kim SP, Rahnev D. Multiple brain activation patterns for the same task. bioRxiv 2024:2023.04.08.536107. [PMID: 37066155 PMCID: PMC10104176 DOI: 10.1101/2023.04.08.536107]
Abstract
Meaningful variation in internal states that impacts cognition and behavior remains challenging to discover and characterize. Here we leveraged trial-to-trial fluctuations in the brain-wide signal recorded using functional MRI to test if distinct sets of brain regions are activated on different trials when accomplishing the same task. Across three different perceptual decision-making experiments, we estimated the brain activations for each trial. We then clustered the trials based on their similarity using modularity-maximization, a data-driven classification method. In each experiment, we found multiple distinct but stable subtypes of trials, suggesting that the same task can be accomplished in the presence of widely varying brain activation patterns. Surprisingly, in all experiments, one of the subtypes exhibited strong activation in the default mode network, which is typically thought to decrease in activity during tasks that require externally focused attention. The remaining subtypes were characterized by activations in different task-positive areas. The default mode network subtype was characterized by behavioral signatures that were similar to the other subtypes exhibiting activation with task-positive regions. Finally, in a fourth experiment, we tested whether multiple activation patterns would also appear for a qualitatively different, working memory task. We again found multiple subtypes of trials with differential activation in frontoparietal control, dorsal attention, and ventral attention networks. Overall, these findings demonstrate that the same cognitive tasks are accomplished through multiple brain activation patterns.
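Modularity-maximization clustering, as used above to group trials, has many variants; the simplest is Newman's leading-eigenvector bipartition, sketched here on a hypothetical toy graph (two triangles joined by one edge) rather than on trial-similarity data:

```python
import numpy as np

def modularity_bipartition(A):
    """Two-module split by the sign of the leading eigenvector of Newman's
    modularity matrix B = A - k k^T / 2m; a simple stand-in for the
    modularity-maximization classification described in the abstract."""
    k = A.sum(axis=1)
    two_m = k.sum()
    B = A - np.outer(k, k) / two_m
    vals, vecs = np.linalg.eigh(B)
    labels = np.where(vecs[:, -1] >= 0, 1, -1)   # module assignment by sign
    q = labels @ B @ labels / (2 * two_m)        # modularity of this split
    return labels, q

# Two triangles joined by a single bridge edge (nodes 0-2 vs nodes 3-5).
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
labels, q = modularity_bipartition(A)
```

On real data, `A` would be a trial-by-trial similarity matrix, and the bipartition would typically be refined recursively or with finer-grained community-detection methods.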
Affiliation(s)
- Johan Nakuci
- School of Psychology, Georgia Institute of Technology, Atlanta, Georgia, 30332, USA
- Jiwon Yeon
- Department of Psychology, Stanford University, Stanford, California, 94305, USA
- Nadia Haddara
- School of Psychology, Georgia Institute of Technology, Atlanta, Georgia, 30332, USA
- Ji-Hyun Kim
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan, South Korea
- Sung-Phil Kim
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan, South Korea
- Dobromir Rahnev
- School of Psychology, Georgia Institute of Technology, Atlanta, Georgia, 30332, USA
87
Ryu J, Lee SH. Bounded contribution of human early visual cortex to the topographic anisotropy in spatial extent perception. Commun Biol 2024; 7:178. [PMID: 38351283 PMCID: PMC10864322 DOI: 10.1038/s42003-024-05846-x]
Abstract
To interact successfully with objects, it is crucial to accurately perceive their spatial extent, an enclosed region they occupy in space. Although the topographic representation of space in the early visual cortex (EVC) has been favored as a neural correlate of spatial extent perception, its exact nature and contribution to perception remain unclear. Here, we inspect the topographic representations of human individuals' EVC and perception in terms of how much their anisotropy is influenced by the orientation (co-axiality) and radial position (radiality) of stimuli. We report that while the anisotropy is influenced by both factors, its direction is primarily determined by radiality in EVC but by co-axiality in perception. Despite this mismatch, the individual differences in both radial and co-axial anisotropy are substantially shared between EVC and perception. Our findings suggest that spatial extent perception builds on EVC's spatial representation but requires an additional mechanism to transform its topographic bias.
Affiliation(s)
- Juhyoung Ryu
- Department of Brain and Cognitive Sciences, Seoul National University, Seoul, 08826, Republic of Korea
- Sang-Hun Lee
- Department of Brain and Cognitive Sciences, Seoul National University, Seoul, 08826, Republic of Korea.
88
Roberts JW, Burkitt JJ, Elliott D. The type 1 submovement conundrum: an investigation into the function of velocity zero-crossings within two-component aiming movements. Exp Brain Res 2024:10.1007/s00221-024-06784-0. [PMID: 38329516 DOI: 10.1007/s00221-024-06784-0]
Abstract
In rapid manual aiming, traditional wisdom would have it that two components manifest from feedback-based processes, where error accumulated within the primary submovement can be corrected within the secondary submovement courtesy of online sensory feedback. In some aiming contexts, there are more type 1 submovements (overshooting) than types 2 and 3 submovements (undershooting), particularly for more rapid movements. These particular submovements have also been attributed to a mechanical artefact involving movement termination and stabilisation. Hence, the goal of our study was to more closely examine the function of type 1 submovements by revisiting some of our previous datasets. We categorised these submovements according to whether the secondary submovement moved the limb closer (functional), or not (non-functional), to the target. Overall, there were both functional and non-functional submovements, with a significantly higher proportion of the former. The displacement at the primary and secondary submovements, and the negative velocity peak, were significantly greater in the functional compared to non-functional submovements. The influence of submovement type on other movement characteristics, including movement time, was somewhat less clear. These findings indicate that the majority of type 1 submovements are related to intended feedforward- and/or feedback-based processes, although a portion can be attributed to an indirect manifestation of a mechanical artefact. As a result, we suggest that submovements should be further categorised by their error-reducing function.
Affiliation(s)
- James W Roberts
- Brain and Behaviour Research Group, Research Institute of Sport and Exercise Sciences (RISES), Liverpool John Moores University, Tom Reilly Building, Byrom Street, Liverpool, L3 5AF, UK.
- School of Health Sciences, Psychology, Action and Learning of Movement (PALM) Laboratory, Liverpool Hope University, Hope Park, Liverpool, L16 9JD, UK.
- Department of Kinesiology, McMaster University, 1280 Main Street West, Hamilton, ON, L8S 4K1, Canada.
- James J Burkitt
- Department of Kinesiology, McMaster University, 1280 Main Street West, Hamilton, ON, L8S 4K1, Canada
- Digby Elliott
- Brain and Behaviour Research Group, Research Institute of Sport and Exercise Sciences (RISES), Liverpool John Moores University, Tom Reilly Building, Byrom Street, Liverpool, L3 5AF, UK
- Department of Kinesiology, McMaster University, 1280 Main Street West, Hamilton, ON, L8S 4K1, Canada
Collapse
|
89
|
Alemi A, Aksay ERF, Goldman MS. A Lyapunov theory demonstrating a fundamental limit on the speed of systems consolidation. ARXIV 2024:arXiv:2402.01605v1. [PMID: 38351934 PMCID: PMC10862927] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 02/19/2024]
Abstract
The nervous system reorganizes memories from an early site to a late site, a commonly observed feature of learning and memory systems known as systems consolidation. Previous work has suggested learning rules by which consolidation may occur. Here, we provide conditions under which such rules are guaranteed to lead to stable convergence of learning and consolidation. We use the theory of Lyapunov functions, which enforces stability by requiring learning rules to decrease an energy-like (Lyapunov) function. We present the theory in the context of a simple circuit architecture motivated by classic models of learning in systems consolidation mediated by the cerebellum. Stability is guaranteed only if the learning rate in the late stage is not faster than the learning rate in the early stage. Further, the slower the learning rate at the late stage, the larger the perturbation the system can tolerate with a guarantee of stability. We provide intuition for this result by mapping the consolidation model to a damped driven oscillator system, and showing that the ratio of early- to late-stage learning rates in the consolidation model can be directly identified with the (square of the) oscillator's damping ratio. This work suggests the power of the Lyapunov approach to provide constraints on nervous system function.
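The oscillator mapping above can be illustrated with a toy sketch: if the early-to-late learning-rate ratio equals the squared damping ratio, then a late stage no faster than the early stage gives ratio ≥ 1, i.e. a critically or overdamped (stable, non-ringing) oscillator. This is our reading of the abstract; function names and parameters are assumptions, not the paper's model.

```python
import numpy as np

def damping_ratio(rate_early, rate_late):
    # Per the abstract's mapping, (early rate / late rate) = zeta**2,
    # so zeta = sqrt(rate_early / rate_late).
    return np.sqrt(rate_early / rate_late)

def final_amplitude(zeta, omega=1.0, dt=1e-3, T=50.0):
    """Euler-integrate the free response of
    x'' + 2*zeta*omega*x' + omega**2 * x = 0 from x=1, v=0,
    returning the final displacement (near zero if the system is stable)."""
    x, v = 1.0, 0.0
    for _ in range(int(T / dt)):
        a = -2.0 * zeta * omega * v - omega**2 * x
        x, v = x + dt * v, v + dt * a
    return x

# Late stage 4x slower than early stage -> ratio 4 -> zeta = 2 (overdamped).
zeta = damping_ratio(4.0, 1.0)
```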
Collapse
Affiliation(s)
- Alireza Alemi
- Center for Neuroscience, and Department of Neurobiology, Physiology, and Behavior, University of California, Davis, Davis, CA 95616, USA
- Emre R. F. Aksay
- Institute for Computational Biomedicine and Department of Physiology and Biophysics, Weill Cornell Medical College, New York, NY 10021, USA
- Mark S. Goldman
- Center for Neuroscience, and Department of Neurobiology, Physiology, and Behavior, University of California, Davis, Davis, CA 95616, USA
- Department of Ophthalmology and Vision Science, University of California, Davis, Davis, CA 95616, USA
Collapse
|
90
|
Kreter N, Fino PC. Consequences of changing planned foot placement on balance control and forward progress. J R Soc Interface 2024; 21:20230577. [PMID: 38350615 PMCID: PMC10864096 DOI: 10.1098/rsif.2023.0577] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2023] [Accepted: 01/19/2024] [Indexed: 02/15/2024] Open
Abstract
While walking, humans generally plan foot placement two steps in advance. However, it is often necessary to rapidly alter foot placement just before stepping, for instance when a new obstacle appears. While humans are quite capable of rapidly altering foot placement position, such changes can have major effects on centre of mass dynamics. We investigated how rapid changes to planned foot placement impact centre of mass dynamics, and how such changes influence the control of balance and forward progress, during both straight and turning gait. Thirteen young adults walked along a virtually projected walkway with precision footholds oriented either in a straight line or with a single 60°, 90° or 120° turn. On a subset of trials, participants were required to rapidly avoid stepping on select footholds. We found that if the centre of mass was disrupted such that it interfered with task success (i.e. staying upright and continuing along the planned path), walkers were more likely to sacrifice forward progress than upright stability. Further, walkers appear to control centre of mass dynamics differently following inhibited steps during step turns than during spin turns, which may reflect a larger threat to task success when spin turns are interrupted.
Collapse
Affiliation(s)
- Nicholas Kreter
- Department of Health and Kinesiology, University of Utah, Salt Lake City, UT 84112, USA
- Peter C. Fino
- Department of Health and Kinesiology, University of Utah, Salt Lake City, UT 84112, USA
Collapse
|
91
|
Liang J, Yang Z, Zhou C. Excitation-Inhibition Balance, Neural Criticality, and Activities in Neuronal Circuits. Neuroscientist 2024:10738584231221766. [PMID: 38291889 DOI: 10.1177/10738584231221766] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2024]
Abstract
Neural activities in local circuits exhibit complex and multilevel dynamic features. Individual neurons spike irregularly, which is believed to originate from receiving balanced amounts of excitatory and inhibitory inputs, known as the excitation-inhibition balance. The spatial-temporal cascades of clustered neuronal spikes occur in variable sizes and durations, manifested as neural avalanches with scale-free features. These may be explained by the neural criticality hypothesis, which posits that neural systems operate around the transition between distinct dynamic states. Here, we summarize the experimental evidence for and the underlying theory of excitation-inhibition balance and neural criticality. Furthermore, we review recent studies of excitatory-inhibitory networks with synaptic kinetics as a simple solution to reconcile these two apparently distinct theories in a single circuit model. This provides a more unified understanding of multilevel neural activities in local circuits, from spontaneous to stimulus-response dynamics.
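The criticality side of this picture is often illustrated with a Galton-Watson branching process: activity cascades stay small when the branching ratio is below one and become dramatically larger as it approaches one. This is a standard toy model for neural avalanches, not the excitatory-inhibitory circuit model reviewed here.

```python
import random

def avalanche_size(branching_ratio, rng, max_size=10_000):
    """Total activations in one avalanche of a Galton-Watson branching
    process: each active unit has Binomial(10, m/10) offspring (mean m)."""
    active, size = 1, 0
    while active and size < max_size:
        size += active
        active = sum(rng.random() < branching_ratio / 10.0
                     for _ in range(10 * active))
    return size

rng = random.Random(0)
subcritical = [avalanche_size(0.5, rng) for _ in range(1000)]
critical = [avalanche_size(1.0, rng) for _ in range(1000)]
```

At the critical point the avalanche-size distribution develops a heavy (approximately power-law) tail, which is the "scale-free" signature mentioned in the abstract.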
Collapse
Affiliation(s)
- Junhao Liang
- Eberhard Karls University of Tübingen and Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Zhuda Yang
- Department of Physics, Centre for Nonlinear Studies and Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong
- Changsong Zhou
- Department of Physics, Centre for Nonlinear Studies and Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong
- Life Science Imaging Centre, Hong Kong Baptist University, Kowloon Tong, Hong Kong
- Research Centre, Hong Kong Baptist University Institute of Research and Continuing Education, Shenzhen, China
Collapse
|
92
|
Kraikivski P. A Mechanistic Model of Perceptual Binding Predicts That Binding Mechanism Is Robust against Noise. ENTROPY (BASEL, SWITZERLAND) 2024; 26:133. [PMID: 38392388 PMCID: PMC10888151 DOI: 10.3390/e26020133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/11/2023] [Revised: 01/28/2024] [Accepted: 01/30/2024] [Indexed: 02/24/2024]
Abstract
The concept of the brain's own time and space is central to many models and theories that aim to explain how the brain generates consciousness. For example, the temporo-spatial theory of consciousness postulates that the brain implements its own inner time and space for conscious processing of the outside world. Furthermore, our perception and cognition of time and space can be different from actual time and space. This study presents a mechanistic model of mutually connected processes that encode phenomenal representations of space and time. The model is used to elaborate the binding mechanism between two sets of processes representing internal space and time, respectively. Further, a stochastic version of the model is developed to investigate the interplay between binding strength and noise. Spectral entropy is used to characterize noise effects on the systems of interacting processes when the binding strength between them is varied. The stochastic modeling results reveal that the spectral entropy values for strongly bound systems are similar to those for weakly bound or even decoupled systems. Thus, the analysis performed in this study allows us to conclude that the binding mechanism is noise-resilient.
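Spectral entropy, the measure used above to characterize noise effects, can be computed in a few lines: it is the Shannon entropy of the normalized power spectrum, low for a narrowband (strongly ordered) signal and high for a flat, noise-like spectrum. This is a generic implementation; the study's exact preprocessing and settings are not specified here.

```python
import numpy as np

def spectral_entropy(x):
    """Shannon entropy of the normalized power spectrum, scaled to [0, 1]
    (0 ~ a pure tone, 1 ~ a flat white spectrum)."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    psd = psd[1:]                      # discard the DC bin
    n = len(psd)
    p = psd / psd.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(n))

t = np.linspace(0.0, 1.0, 1024, endpoint=False)
tone = np.sin(2 * np.pi * 50 * t)                      # strongly ordered
noise = np.random.default_rng(0).standard_normal(1024) # disordered
```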
Collapse
Affiliation(s)
- Pavel Kraikivski
- Division of Systems Biology, Academy of Integrated Science, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061, USA
Collapse
|
93
|
Wang X, Cichos F. Harnessing synthetic active particles for physical reservoir computing. Nat Commun 2024; 15:774. [PMID: 38287028 PMCID: PMC10825170 DOI: 10.1038/s41467-024-44856-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2023] [Accepted: 01/08/2024] [Indexed: 01/31/2024] Open
Abstract
The processing of information is an indispensable property of living systems, realized by networks of active processes with enormous complexity. They have inspired many variants of modern machine learning, one of them being reservoir computing, in which stimulating a network of nodes with fading memory enables computations and complex predictions. Reservoirs are implemented on computer hardware, but also on unconventional physical substrates such as mechanical oscillators, spins, or bacteria, often summarized as physical reservoir computing. Here we demonstrate physical reservoir computing with a synthetic active microparticle system that self-organizes from an active and a passive component into inherently noisy nonlinear dynamical units. The self-organization and dynamical response of the unit are the result of a delayed propulsion of the microswimmer to a passive target. A reservoir of such units with self-coupling via the delayed response can perform predictive tasks despite the strong noise resulting from the Brownian motion of the microswimmers. To achieve efficient noise suppression, we introduce a special architecture that uses historical reservoir states for output. Our results pave the way for the study of information processing in synthetic self-organized active particle systems.
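The reservoir-computing idea itself — a fixed network of fading-memory nodes plus a trained linear readout — can be sketched with a conventional echo-state network (a digital stand-in for illustration, not the authors' particle system; all parameters below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

# 200 leaky tanh units with fading memory; only the linear readout is trained.
N, leak, rho = 200, 0.3, 0.9
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.standard_normal((N, N))
W *= rho / np.abs(np.linalg.eigvals(W)).max()   # set spectral radius < 1

def run_reservoir(u):
    x = np.zeros(N)
    states = []
    for ut in u:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in * ut)
        states.append(x.copy())
    return np.array(states)

t = np.arange(3000)
u = np.sin(0.1 * t)                    # toy one-step-ahead prediction task
X = run_reservoir(u)
washout = 200
X_tr, y_tr = X[washout:2000], u[washout + 1:2001]
ridge = 1e-6                           # ridge-regression readout
W_out = np.linalg.solve(X_tr.T @ X_tr + ridge * np.eye(N), X_tr.T @ y_tr)
pred = X[2000:2999] @ W_out
mse = np.mean((pred - u[2001:3000]) ** 2)
```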
Collapse
Affiliation(s)
- Xiangzun Wang
- Peter Debye Institute for Soft Matter Physics, Leipzig University, 04103, Leipzig, Germany
- Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI) Dresden/Leipzig, 04105, Leipzig, Germany
- Frank Cichos
- Peter Debye Institute for Soft Matter Physics, Leipzig University, 04103, Leipzig, Germany
Collapse
|
94
|
Noda T, Takahashi H. Stochastic resonance in sparse neuronal network: functional role of ongoing activity to detect weak sensory input in awake auditory cortex of rat. Cereb Cortex 2024; 34:bhad428. [PMID: 37955660 PMCID: PMC10793590 DOI: 10.1093/cercor/bhad428] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2022] [Revised: 10/10/2023] [Accepted: 10/25/2023] [Indexed: 11/14/2023] Open
Abstract
The awake cortex is characterized by a higher level of ongoing spontaneous activity than the anesthetized cortex, yet it detects weak sensory inputs better. However, the computational mechanism underlying this paradoxical nature of awake neuronal activity remains to be elucidated. Here, we propose a hypothetical stochastic resonance, which improves the signal-to-noise ratio (SNR) of weak sensory inputs through nonlinear relations between ongoing spontaneous activities and sensory-evoked activities. Prestimulus and tone-evoked activities were investigated via in vivo extracellular recording with a dense microelectrode array covering the entire auditory cortex in rats in both awake and anesthetized states. We found that tone-evoked activities increased supralinearly with the prestimulus activity level in the awake state and that the SNR of weak stimulus representation was optimized at an intermediate level of prestimulus ongoing activity. Furthermore, the temporally intermittent firing pattern, but not the trial-by-trial reliability or the fluctuation of the local field potential, was identified as a relevant factor for SNR improvement. Since ongoing activity differs among neurons, this hypothetical stochastic resonance, or "sparse network stochastic resonance", might offer beneficial SNR improvement at the single-neuron level, which is compatible with the sparse representation in the sensory cortex.
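The core stochastic-resonance effect — detection of a subthreshold input peaking at an intermediate noise level — can be reproduced with a toy threshold detector (illustrative parameters chosen by us, not the cortical data):

```python
import numpy as np

rng = np.random.default_rng(1)

def detection_score(sigma, n_trials=2000, threshold=1.0, signal=0.6):
    """Hit rate minus false-alarm rate for a hard-threshold detector fed a
    subthreshold input (signal < threshold) plus Gaussian noise of scale sigma."""
    hits = (signal + sigma * rng.standard_normal(n_trials)) > threshold
    false_alarms = (sigma * rng.standard_normal(n_trials)) > threshold
    return hits.mean() - false_alarms.mean()

# Too little noise: the signal never crosses threshold. Too much: false
# alarms swamp hits. An intermediate level maximizes the detection score.
scores = {sigma: detection_score(sigma) for sigma in (0.01, 0.5, 5.0)}
```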
Collapse
Affiliation(s)
- Takahiro Noda
- Department of Mechano-informatics, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
- Hirokazu Takahashi
- Department of Mechano-informatics, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
Collapse
|
95
|
Asadpour A, Tan H, Lenfesty B, Wong-Lin K. Of Rodents and Primates: Time-Variant Gain in Drift-Diffusion Decision Models. COMPUTATIONAL BRAIN & BEHAVIOR 2024; 7:195-206. [PMID: 38798787 PMCID: PMC11111503 DOI: 10.1007/s42113-023-00194-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Accepted: 12/10/2023] [Indexed: 05/29/2024]
Abstract
Sequential sampling models of decision-making involve evidence accumulation over time and have been successful in capturing choice behaviour. A popular model is the drift-diffusion model (DDM). To capture the finer aspects of choice reaction times (RTs), time-variant gain features representing urgency signals have been implemented in DDM that can exhibit slower error RTs than correct RTs. However, time-variant gain is often implemented on both DDM's signal and noise features, with the assumption that increasing gain on the drift rate (due to urgency) is similar to DDM with collapsing decision bounds. Hence, it is unclear whether gain effects on just the signal or noise feature can lead to a different choice behaviour. This work presents an alternative DDM variant, focusing on the implications of time-variant gain mechanisms, constrained by model parsimony. Specifically, using computational modelling of choice behaviour of rats, monkeys, and humans, we systematically showed that time-variant gain only on the DDM's noise was sufficient to produce slower error RTs, as in monkeys, while time-variant gain only on drift rate leads to faster error RTs, as in rodents. We also found minimal effects of time-variant gain in humans. By highlighting these patterns, this study underscores the utility of group-level modelling in capturing general trends and effects consistent across species. Thus, time-variant gain on DDM's different components can lead to different choice behaviours, shed light on the underlying time-variant gain mechanisms for different species, and can be used for systematic data fitting. Supplementary Information: The online version contains supplementary material available at 10.1007/s42113-023-00194-1.
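A minimal DDM trial with separate time-variant gains on the drift and noise terms might look as follows (a simplified sketch with a linear gain ramp; the gain form and parameter names are our assumptions, not the paper's exact model):

```python
import numpy as np

rng = np.random.default_rng(7)

def ddm_trial(drift=0.8, bound=1.0, dt=1e-3, noise=1.0,
              gain_drift=0.0, gain_noise=0.0, t_max=5.0):
    """One drift-diffusion trial. Gains g(t) = 1 + g*t multiply the drift
    or the noise term separately; with both gains at 0 this is a plain DDM.
    Returns (correct_choice, reaction_time)."""
    x, t = 0.0, 0.0
    while abs(x) < bound and t < t_max:
        gd = 1.0 + gain_drift * t
        gn = 1.0 + gain_noise * t
        x += gd * drift * dt + gn * noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (x >= bound), t

# Plain DDM with positive drift: accuracy should sit well above chance.
trials = [ddm_trial() for _ in range(300)]
accuracy = sum(c for c, _ in trials) / len(trials)
```

Setting `gain_noise > 0` or `gain_drift > 0` then lets one probe how each gain separately reshapes the correct- versus error-RT distributions.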
Collapse
Affiliation(s)
- Abdoreza Asadpour
- Intelligent Systems Research Centre, School of Computing, Engineering and Intelligent Systems, Ulster University, Magee Campus, Derry/Londonderry, Northern Ireland, UK
- Hui Tan
- Intelligent Systems Research Centre, School of Computing, Engineering and Intelligent Systems, Ulster University, Magee Campus, Derry/Londonderry, Northern Ireland, UK
- Département Electronique et Technologies Numériques, Polytech Nantes, Nantes Université, Nantes, France
- Brendan Lenfesty
- Intelligent Systems Research Centre, School of Computing, Engineering and Intelligent Systems, Ulster University, Magee Campus, Derry/Londonderry, Northern Ireland, UK
- KongFatt Wong-Lin
- Intelligent Systems Research Centre, School of Computing, Engineering and Intelligent Systems, Ulster University, Magee Campus, Derry/Londonderry, Northern Ireland, UK
Collapse
|
96
|
He X, Bao M. Neuroimaging evidence of visual-vestibular interaction accounting for perceptual mislocalization induced by head rotation. NEUROPHOTONICS 2024; 11:015005. [PMID: 38298609 PMCID: PMC10828893 DOI: 10.1117/1.nph.11.1.015005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/06/2023] [Revised: 01/04/2024] [Accepted: 01/08/2024] [Indexed: 02/02/2024]
Abstract
Significance: A fleeting flash aligned vertically with an object that remains stationary in head-centered space is perceived as lagging behind the object during the observer's horizontal head rotation. This perceptual mislocalization is an illusion named the head-rotation-induced flash-lag effect (hFLE). While many studies have investigated the neural mechanism of the classical visual FLE, the hFLE has hardly been investigated. Aim: We measured the cortical activity corresponding to the hFLE in participants experiencing passive head rotations using functional near-infrared spectroscopy (fNIRS). Approach: Participants were asked to judge the relative position of a flash to a fixed reference while being horizontally rotated or staying static in a swivel chair. Meanwhile, fNIRS signals were recorded in temporal-parietal areas. The flash duration was manipulated to provide control conditions. Results: Brain activity specific to the hFLE was found around the right middle/inferior temporal gyri and the bilateral supramarginal and superior temporal gyri. The activation was positively correlated with the rotation velocity of the participant around the supramarginal gyrus and negatively related to the hFLE intensity around the middle temporal gyrus. Conclusions: These results suggest that the mechanism underlying the hFLE involves multiple aspects of visual-vestibular interactions, including the processing of multisensory conflicts mediated by the temporoparietal junction and the modulation of vestibular signals on object position perception in the human middle temporal complex.
Collapse
Affiliation(s)
- Xin He
- Chinese Academy of Sciences, Institute of Psychology, CAS Key Laboratory of Behavioral Science, Beijing, China
- Min Bao
- Chinese Academy of Sciences, Institute of Psychology, CAS Key Laboratory of Behavioral Science, Beijing, China
- University of Chinese Academy of Sciences, Department of Psychology, Beijing, China
- State Key Laboratory of Brain and Cognitive Science, Beijing, China
Collapse
|
97
|
Wolfson SS, Kirk I, Waldie K, King C. EEG Complexity Analysis of Brain States, Tasks and ASD Risk. ADVANCES IN NEUROBIOLOGY 2024; 36:733-759. [PMID: 38468061 DOI: 10.1007/978-3-031-47606-8_37] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/13/2024]
Abstract
Autism spectrum disorder (ASD) is an increasingly prevalent and debilitating neurodevelopmental condition and an electroencephalogram (EEG) diagnostic challenge. Despite large amounts of electrophysiological research over many decades, an EEG biomarker for ASD has not been found. We hypothesized that, during cognitive processes, reductions in the complex dynamical behaviour of the human central nervous system at the macroscale might be detectable in whole EEG for higher-risk ASD adults. In three studies, we compared the medians of correlation dimension, largest Lyapunov exponent, Higuchi's fractal dimension, multiscale entropy, multifractal detrended fluctuation analysis and Kolmogorov complexity during resting, cognitive and social skill tasks in 20 EEG channels of 39 adults over a range of ASD risk. We found a heterogeneous complexity distribution with clusters of hierarchical sequences pointing to potential cognitive processing differences, but no clear distinction based on ASD risk. We suggest that there is an indication of statistically significant differences between complexity measures of brain states and tasks. Though replication of our studies with a larger sample is needed, we believe that our electrophysiological and analytic approach has potential as a biomarker for earlier ASD diagnosis.
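Of the complexity measures listed, Higuchi's fractal dimension is compact enough to state in full: it fits the scaling of reconstructed curve length against coarse-graining interval, giving values near 1 for smooth signals and near 2 for white noise. This is a generic implementation; `kmax` and preprocessing choices here are assumptions, not the study's settings.

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi's fractal dimension of a 1-D series: slope of
    log(curve length L(k)) versus log(1/k)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    Lk = []
    for k in range(1, kmax + 1):
        lm = []
        for m in range(k):
            idx = np.arange(m, N, k)          # coarse-grained subseries
            if len(idx) < 2:
                continue
            length = np.abs(np.diff(x[idx])).sum()
            norm = (N - 1) / ((len(idx) - 1) * k)
            lm.append(length * norm / k)
        Lk.append(np.mean(lm))
    logk = np.log(1.0 / np.arange(1, kmax + 1))
    slope, _ = np.polyfit(logk, np.log(Lk), 1)
    return float(slope)

rng = np.random.default_rng(3)
fd_noise = higuchi_fd(rng.standard_normal(2000))             # ~2
fd_sine = higuchi_fd(np.sin(np.linspace(0, 8 * np.pi, 2000)))  # ~1
```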
Collapse
Affiliation(s)
- Stephen S Wolfson
- The University of Auckland School of Psychology, Auckland, Auckland, New Zealand.
- Ian Kirk
- The University of Auckland School of Psychology, Auckland, Auckland, New Zealand
- Karen Waldie
- The University of Auckland School of Psychology, Auckland, Auckland, New Zealand
- Chris King
- The University of Auckland School of Psychology, Auckland, Auckland, New Zealand
Collapse
|
98
|
Agarwal H, Rathore H. BGRL: Basal Ganglia inspired Reinforcement Learning based framework for deep brain stimulators. Artif Intell Med 2024; 147:102736. [PMID: 38184360 DOI: 10.1016/j.artmed.2023.102736] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2022] [Revised: 10/13/2023] [Accepted: 11/28/2023] [Indexed: 01/08/2024]
Abstract
Deep Brain Stimulation (DBS) is an implantable medical device used for electrical stimulation to treat neurological disorders. Traditional DBS devices provide fixed-frequency pulses, but personalized adjustment of stimulation parameters is crucial for optimal treatment. This paper introduces a Basal Ganglia inspired Reinforcement Learning (BGRL) approach, incorporating a closed-loop feedback mechanism to suppress neural synchrony during neurological fluctuations. The BGRL approach leverages the resemblance between the basal ganglia region of the brain and the actor-critic architecture of reinforcement learning (RL). Simulation results demonstrate that BGRL significantly reduces synchronous electrical pulses compared to other standard RL algorithms. The BGRL algorithm outperforms existing RL methods in terms of suppression capability and energy consumption, validated through comparisons using ensemble oscillators. Results in the paper demonstrate that BGRL suppressed synchronous electrical pulses across three signaling regimes, namely regular, chaotic and bursting, by 40%, 146% and 40%, respectively, compared with the soft actor-critic model. BGRL shows promise in effectively suppressing neural synchrony in DBS therapy, providing an efficient alternative to open-loop methodologies.
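The actor-critic structure the abstract alludes to — an actor adjusting action preferences and a critic tracking value via a TD error — can be illustrated on a two-armed bandit (a minimal generic sketch, not the BGRL controller or its DBS environment):

```python
import math
import random

random.seed(0)

# Actor: action preferences; critic: a single state-value estimate.
prefs = [0.0, 0.0]
value = 0.0
alpha_actor, alpha_critic = 0.1, 0.2
p_reward = [0.2, 0.8]          # arm 1 pays off more often

def softmax_choice(prefs):
    exps = [math.exp(p) for p in prefs]
    r = random.random() * sum(exps)
    return 0 if r < exps[0] else 1

for _ in range(2000):
    a = softmax_choice(prefs)
    reward = 1.0 if random.random() < p_reward[a] else 0.0
    delta = reward - value          # TD error (the "dopamine-like" signal)
    value += alpha_critic * delta   # critic update
    prefs[a] += alpha_actor * delta # actor update
```

After training, the actor's preference for the better arm dominates while the critic's value tracks the reward rate under the learned policy.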
Collapse
Affiliation(s)
- Harsh Agarwal
- Department of Electrical and Computer Engineering, Indian Institute of Technology, India.
- Heena Rathore
- Department of Computer Science, Texas State University, USA
Collapse
|
99
|
Hutt A, Trotter D, Pariz A, Valiante TA, Lefebvre J. Diversity-induced trivialization and resilience of neural dynamics. CHAOS (WOODBURY, N.Y.) 2024; 34:013147. [PMID: 38285722 DOI: 10.1063/5.0165773] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/30/2023] [Accepted: 01/01/2024] [Indexed: 01/31/2024]
Abstract
Heterogeneity is omnipresent across all living systems. Diversity enriches the dynamical repertoire of these systems but remains challenging to reconcile with their manifest robustness and dynamical persistence over time, a fundamental feature called resilience. To better understand the mechanism underlying resilience in neural circuits, we considered a nonlinear network model, extracting the relationship between excitability heterogeneity and resilience. To measure resilience, we quantified the number of stationary states of this network and how they are affected by various control parameters. We analyzed, both analytically and numerically, gradient and non-gradient systems modeled as nonlinear sparse neural networks evolving over long time scales. Our analysis shows that neuronal heterogeneity quenches the number of stationary states while decreasing the susceptibility to bifurcations: a phenomenon known as trivialization. Heterogeneity was found to implement a homeostatic control mechanism enhancing network resilience to changes in network size and connection probability by quenching the system's dynamic volatility.
Collapse
Affiliation(s)
- Axel Hutt
- MLMS, MIMESIS, Université de Strasbourg, CNRS, Inria, ICube, 67000 Strasbourg, France
- Daniel Trotter
- Department of Physics, University of Ottawa, Ottawa, Ontario K1N 6N5, Canada
- Krembil Brain Institute, University Health Network, Toronto, Ontario M5T 0S8, Canada
- Aref Pariz
- Krembil Brain Institute, University Health Network, Toronto, Ontario M5T 0S8, Canada
- Department of Biology, University of Ottawa, Ottawa, Ontario K1N 6N5, Canada
- Taufik A Valiante
- Krembil Brain Institute, University Health Network, Toronto, Ontario M5T 0S8, Canada
- Department of Electrical and Computer Engineering, Institute of Medical Science, Institute of Biomedical Engineering, Division of Neurosurgery, Department of Surgery, CRANIA (Center for Advancing Neurotechnological Innovation to Application), Max Planck-University of Toronto Center for Neural Science and Technology, University of Toronto, Toronto, Ontario M5S 3G8, Canada
- Jérémie Lefebvre
- Department of Physics, University of Ottawa, Ottawa, Ontario K1N 6N5, Canada
- Krembil Brain Institute, University Health Network, Toronto, Ontario M5T 0S8, Canada
- Department of Biology, University of Ottawa, Ottawa, Ontario K1N 6N5, Canada
- Department of Mathematics, University of Toronto, Toronto, Ontario M5S 2E4, Canada
Collapse
|
100
|
Hermus J, Doeringer J, Sternad D, Hogan N. Dynamic primitives in constrained action: systematic changes in the zero-force trajectory. J Neurophysiol 2024; 131:1-15. [PMID: 37820017 PMCID: PMC11286308 DOI: 10.1152/jn.00082.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2023] [Revised: 10/02/2023] [Accepted: 10/05/2023] [Indexed: 10/13/2023] Open
Abstract
Humans substantially outperform robotic systems in tasks that require physical interaction, despite seemingly inferior muscle bandwidth and slow neural information transmission. The control strategies that enable this performance remain poorly understood. To bridge that gap, this study examined kinematically constrained motion as an intermediate step between the widely studied unconstrained motions and sparsely studied physical interactions. Subjects turned a horizontal planar crank in two directions (clockwise and counterclockwise) at three constant target speeds (fast, medium, and very slow) as instructed via visual display. With the hand constrained to move in a circle, nonzero forces against the constraint were measured. This experiment exposed two observations that could not result from mechanics alone but may be attributed to neural control composed of dynamic primitives. A plausible mathematical model of interactive dynamics (mechanical impedance) was assumed and used to "subtract" peripheral neuromechanics. This method revealed a summary of the underlying neural control in terms of motion, a zero-force trajectory. The estimated zero-force trajectories were approximately elliptical and their orientation differed significantly with turning direction; that is consistent with control using oscillations to generate an elliptical zero-force trajectory. However, for periods longer than 2-5 s, motion can no longer be perceived or executed as periodic. Instead, it decomposes into a sequence of submovements, manifesting as increased variability. These quantifiable performance limitations support the hypothesis that humans simplify this constrained-motion task by exploiting at least three primitive dynamic actions: oscillations, submovements, and mechanical impedance.NEW & NOTEWORTHY Control using primitive dynamic actions may explain why human performance is superior to robots despite seemingly inferior "wetware"; however, this also implies limitations. 
For a crank-turning task, this work quantified two such informative limitations. Force was exerted even though it produced no mechanical work; the underlying zero-force trajectory was roughly elliptical, and its orientation differed with turning direction, evidence of oscillatory control. At slow speeds, speed variability increased substantially, indicating intermittent control via submovements.
Collapse
Affiliation(s)
- James Hermus
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
- Dagmar Sternad
- Departments of Biology, Electrical and Computer Engineering, and Physics, Northeastern University, Boston, Massachusetts, United States
- Neville Hogan
- Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Collapse
|