1. Scott DN, Mukherjee A, Nassar MR, Halassa MM. Thalamocortical architectures for flexible cognition and efficient learning. Trends Cogn Sci 2024; 28:739-756. [PMID: 38886139] [PMCID: PMC11305962] [DOI: 10.1016/j.tics.2024.05.006]
Abstract
The brain exhibits a remarkable ability to learn and execute context-appropriate behaviors. How it achieves such flexibility, without sacrificing learning efficiency, is an important open question. Neuroscience, psychology, and engineering suggest that reusing and repurposing computations are part of the answer. Here, we review evidence that thalamocortical architectures may have evolved to facilitate these objectives of flexibility and efficiency by coordinating distributed computations. Recent work suggests that distributed prefrontal cortical networks compute with flexible codes, and that the mediodorsal thalamus provides regularization to promote efficient reuse. Thalamocortical interactions resemble hierarchical Bayesian computations, and their network implementation can be related to existing gating, synchronization, and hub theories of thalamic function. By reviewing recent findings and providing a novel synthesis, we highlight key research horizons integrating computation, cognition, and systems neuroscience.
Affiliation(s)
- Daniel N Scott
- Department of Neuroscience, Brown University, Providence, RI, USA; Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI, USA
- Arghya Mukherjee
- Department of Neuroscience, Tufts University School of Medicine, Boston, MA, USA
- Matthew R Nassar
- Department of Neuroscience, Brown University, Providence, RI, USA; Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI, USA
- Michael M Halassa
- Department of Neuroscience, Tufts University School of Medicine, Boston, MA, USA; Department of Psychiatry, Tufts University School of Medicine, Boston, MA, USA
2. Gu S, Mattar MG, Tang H, Pan G. Emergence and reconfiguration of modular structure for artificial neural networks during continual familiarity detection. Sci Adv 2024; 10:eadm8430. [PMID: 39058783] [PMCID: PMC11277393] [DOI: 10.1126/sciadv.adm8430]
Abstract
Advances in artificial intelligence enable neural networks to learn a wide variety of tasks, yet our understanding of the learning dynamics of these networks remains limited. Here, we study the temporal dynamics during learning of Hebbian feedforward neural networks in tasks of continual familiarity detection. Drawing inspiration from network neuroscience, we examine the network's dynamic reconfiguration, focusing on how network modules evolve throughout learning. Through a comprehensive assessment involving metrics like network accuracy, modular flexibility, and distribution entropy across diverse learning modes, our approach reveals various previously unknown patterns of network reconfiguration. We find that the emergence of network modularity is a salient predictor of performance and that modularization strengthens with increasing flexibility throughout learning. These insights not only elucidate the nuanced interplay of network modularity, accuracy, and learning dynamics but also bridge our understanding of learning in artificial and biological agents.
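The Hebbian familiarity computation studied in this paper can be sketched in a few lines: outer-product Hebbian storage plus a quadratic readout yields high "familiarity" for previously presented patterns. This is a generic illustration of the principle, with dimensions and learning rate chosen arbitrarily, not the authors' network architecture or training regime.

```python
import numpy as np

# Hebbian "familiarity matrix": storing a pattern x strengthens W by its
# outer product, so the quadratic readout x·W·x is large for stored patterns.
rng = np.random.default_rng(0)
n, n_stored, eta = 200, 10, 0.01

stored = rng.choice([-1.0, 1.0], size=(n_stored, n))   # familiar patterns
novel = rng.choice([-1.0, 1.0], size=(n_stored, n))    # never presented

W = np.zeros((n, n))
for x in stored:                       # one Hebbian update per presentation
    W += eta * np.outer(x, x)

def familiarity(x, W=W):
    """Quadratic readout; high values signal 'seen before'."""
    return float(x @ W @ x)

f_stored = [familiarity(x) for x in stored]
f_novel = [familiarity(x) for x in novel]
# Stored patterns score near eta*n^2; novel ones near eta*n_stored*n.
```

Because a stored pattern correlates perfectly with its own outer-product trace, the two score distributions separate cleanly even after many storage events.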
Affiliation(s)
- Shi Gu
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China, Shenzhen, China
- Marcelo G. Mattar
- Department of Psychology, New York University, New York, NY 10003, USA
- Huajin Tang
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- State Key Laboratory of Brain Machine Intelligence, Zhejiang University, Hangzhou, China
- Gang Pan
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- State Key Laboratory of Brain Machine Intelligence, Zhejiang University, Hangzhou, China
3. Serrano-Fernández L, Beirán M, Parga N. Emergent perceptual biases from state-space geometry in trained spiking recurrent neural networks. Cell Rep 2024; 43:114412. [PMID: 38968075] [DOI: 10.1016/j.celrep.2024.114412]
Abstract
A stimulus held in working memory is perceived as contracted toward the average stimulus. This contraction bias has been extensively studied in psychophysics, but little is known about its origin from neural activity. By training recurrent networks of spiking neurons to discriminate temporal intervals, we explored the causes of this bias and how behavior relates to population firing activity. We found that the trained networks exhibited animal-like behavior. Various geometric features of neural trajectories in state space encoded warped representations of the durations of the first interval modulated by sensory history. Formulating a normative model, we showed that these representations conveyed a Bayesian estimate of the interval durations, thus relating activity and behavior. Importantly, our findings demonstrate that Bayesian computations already occur during the sensory phase of the first stimulus and persist throughout its maintenance in working memory, until the time of stimulus comparison.
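The normative account described, in which perception combines noisy sensory evidence with a prior over stimulus statistics, reduces in the simplest Gaussian case to a weighted average that produces exactly the contraction bias discussed: short intervals are overestimated and long ones underestimated. A minimal sketch (all parameter values are illustrative, not taken from the paper):

```python
import numpy as np

# Bayesian account of contraction bias: with a Gaussian prior over interval
# durations and Gaussian sensory noise, the posterior mean shrinks each
# observation toward the prior mean.
mu_prior, var_prior = 500.0, 120.0**2   # prior over durations (ms)
var_sensory = 80.0**2                   # sensory noise variance

def posterior_mean(observed):
    w = var_prior / (var_prior + var_sensory)   # weight on the observation
    return w * observed + (1.0 - w) * mu_prior

short_est = posterior_mean(300.0)   # pulled upward, toward the mean
long_est = posterior_mean(700.0)    # pulled downward, toward the mean
```

The weight on the observation grows as sensory noise shrinks, so the bias is strongest when evidence is least reliable, a signature of the Bayesian interpretation.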
Affiliation(s)
- Luis Serrano-Fernández
- Departamento de Física Teórica, Universidad Autónoma de Madrid, 28049 Madrid, Spain; Centro de Investigación Avanzada en Física Fundamental, Universidad Autónoma de Madrid, 28049 Madrid, Spain
- Manuel Beirán
- Center for Theoretical Neuroscience, Zuckerman Institute, Columbia University, New York, NY, USA
- Néstor Parga
- Departamento de Física Teórica, Universidad Autónoma de Madrid, 28049 Madrid, Spain; Centro de Investigación Avanzada en Física Fundamental, Universidad Autónoma de Madrid, 28049 Madrid, Spain
4. Gupta D, Kopec CD, Bondy AG, Luo TZ, Elliott VA, Brody CD. A multi-region recurrent circuit for evidence accumulation in rats. bioRxiv 2024:2024.07.08.602544. [PMID: 39026895] [PMCID: PMC11257434] [DOI: 10.1101/2024.07.08.602544]
Abstract
Decision-making based on noisy evidence requires accumulating evidence and categorizing it to form a choice. Here we evaluate a proposed feedforward and modular mapping of this process in rats: evidence accumulated in anterodorsal striatum (ADS) is categorized in prefrontal cortex (frontal orienting fields, FOF). Contrary to this, we show that both regions appear to be indistinguishable in their encoding/decoding of accumulator value and communicate this information bidirectionally. Consistent with a role for FOF in accumulation, silencing FOF to ADS projections impacted behavior throughout the accumulation period, even while nonselective FOF silencing did not. We synthesize these findings into a multi-region recurrent neural network trained with a novel approach. In silico experiments reveal that multiple scales of recurrence in the cortico-striatal circuit rescue computation upon nonselective FOF perturbations. These results suggest that ADS and FOF accumulate evidence in a recurrent and distributed manner, yielding redundant representations and robustness to certain perturbations.
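The core computation under study, accumulating noisy evidence and categorizing the result to form a choice, can be caricatured with a simple accumulator model. This sketch is a generic illustration of evidence accumulation, not the rats' task or the authors' multi-region RNN; all parameters are invented.

```python
import numpy as np

# Minimal evidence accumulator: noisy momentary evidence is summed over the
# trial, and the sign of the accumulated value determines the choice.
rng = np.random.default_rng(1)

def run_trials(drift, n_trials=2000, n_steps=100, noise=1.0):
    """Fraction of trials on which the accumulator's sign matches the drift."""
    evidence = drift + noise * rng.standard_normal((n_trials, n_steps))
    decision_var = evidence.sum(axis=1)          # accumulated evidence
    return float(np.mean(np.sign(decision_var) == np.sign(drift)))

acc_weak = run_trials(drift=0.02)    # weak evidence: near-chance choices
acc_strong = run_trials(drift=0.2)   # strong evidence: reliable choices
```

Averaging over many noisy samples is what makes the accumulated decision variable far more reliable than any single momentary observation, which is why accuracy climbs with evidence strength.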
Affiliation(s)
- Diksha Gupta
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Present address: Sainsbury Wellcome Centre, University College London, London, UK
- Charles D. Kopec
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Adrian G. Bondy
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Thomas Z. Luo
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Verity A. Elliott
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Carlos D. Brody
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA; Howard Hughes Medical Institute, Princeton University, Princeton, NJ, USA
5. Ostojic S, Fusi S. Computational role of structure in neural activity and connectivity. Trends Cogn Sci 2024; 28:677-690. [PMID: 38553340] [DOI: 10.1016/j.tics.2024.03.003]
Abstract
One major challenge of neuroscience is identifying structure in seemingly disorganized neural activity. Different types of structure have different computational implications that can help neuroscientists understand the functional role of a particular brain area. Here, we outline a unified approach to characterize structure by inspecting the representational geometry and the modularity properties of the recorded activity and show that a similar approach can also reveal structure in connectivity. We start by setting up a general framework for determining geometry and modularity in activity and connectivity and relating these properties with computations performed by the network. We then use this framework to review the types of structure found in recent studies of model networks performing three classes of computations.
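One concrete summary of representational geometry used in this literature is the dimensionality of population activity, commonly estimated by the participation ratio of the covariance spectrum. The sketch below illustrates that standard estimator; it is a generic example, not necessarily the specific analysis used by the authors.

```python
import numpy as np

# Participation ratio: (sum of covariance eigenvalues)^2 / sum of squared
# eigenvalues. It is ~1 for activity confined to one dimension and approaches
# the number of neurons for isotropic activity.
rng = np.random.default_rng(2)

def participation_ratio(activity):
    centered = activity - activity.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centered, rowvar=False))
    eigvals = np.clip(eigvals, 0.0, None)       # guard against numerical negatives
    return float(eigvals.sum() ** 2 / (eigvals ** 2).sum())

n_samples, n_neurons = 1000, 50
isotropic = rng.standard_normal((n_samples, n_neurons))           # high-dimensional
one_dim = rng.standard_normal((n_samples, 1)) @ rng.standard_normal((1, n_neurons))

pr_high = participation_ratio(isotropic)   # close to n_neurons
pr_low = participation_ratio(one_dim)      # close to 1
```

Low-dimensional (structured) and high-dimensional (unstructured) activity regimes, which the review relates to different computations, are cleanly separated by this single number.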
Affiliation(s)
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, 75005 Paris, France
- Stefano Fusi
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA
6. Yoon JH, Lee D, Lee C, Cho E, Lee S, Cazenave-Gassiot A, Kim K, Chae S, Dennis EA, Suh PG. Paradigm shift required for translational research on the brain. Exp Mol Med 2024; 56:1043-1054. [PMID: 38689090] [PMCID: PMC11148129] [DOI: 10.1038/s12276-024-01218-x]
Abstract
Biomedical research on the brain has led to many discoveries and developments, such as advances in understanding human consciousness and the mind and in overcoming brain diseases. However, brain research has historically had characteristics that distinguish it from conventional biomedical research: the brain's high complexity invites divergent scientific interpretations, and the limited conceptual and technical overlap between disciplines hampers intercommunication between researchers of different backgrounds. As a result, biomedical research on the brain has developed more slowly than other areas. The field has recently undergone a paradigm shift, and conducting patient-centered, large-scale brain biomedical research has become possible using emerging high-throughput analysis tools. Neuroimaging, multiomics, and artificial intelligence technology are the main drivers of this new approach, foreshadowing dramatic advances in translational research. In addition, emerging interdisciplinary cooperative studies provide insights into how unresolved questions in biomedicine can be addressed. This review examines these aspects of conventional biomedical research in depth and discusses the future of biomedical research on the brain.
Affiliation(s)
- Jong Hyuk Yoon
- Neurodegenerative Diseases Research Group, Korea Brain Research Institute, Daegu, 41062, Republic of Korea
- Dongha Lee
- Cognitive Science Research Group, Korea Brain Research Institute, Daegu, 41062, Republic of Korea
- Chany Lee
- Cognitive Science Research Group, Korea Brain Research Institute, Daegu, 41062, Republic of Korea
- Eunji Cho
- Neurodegenerative Diseases Research Group, Korea Brain Research Institute, Daegu, 41062, Republic of Korea
- Seulah Lee
- Neurodegenerative Diseases Research Group, Korea Brain Research Institute, Daegu, 41062, Republic of Korea
- Amaury Cazenave-Gassiot
- Department of Biochemistry and Precision Medicine Translational Research Program, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, 119077, Singapore
- Singapore Lipidomics Incubator (SLING), Life Sciences Institute, National University of Singapore, Singapore, 117456, Singapore
- Kipom Kim
- Research Strategy Office, Korea Brain Research Institute, Daegu, 41062, Republic of Korea
- Sehyun Chae
- Neurovascular Unit Research Group, Korea Brain Research Institute, Daegu, 41062, Republic of Korea
- Edward A Dennis
- Department of Pharmacology and Department of Chemistry and Biochemistry, University of California, San Diego, La Jolla, CA, 92093-0601, USA
- Pann-Ghill Suh
- Korea Brain Research Institute, Daegu, 41062, Republic of Korea
7. Matteucci G, Piasini E, Zoccolan D. Unsupervised learning of mid-level visual representations. Curr Opin Neurobiol 2024; 84:102834. [PMID: 38154417] [DOI: 10.1016/j.conb.2023.102834]
Abstract
Recently, a confluence between trends in neuroscience and machine learning has brought a renewed focus on unsupervised learning, where sensory processing systems learn to exploit the statistical structure of their inputs in the absence of explicit training targets or rewards. Sophisticated experimental approaches have enabled the investigation of the influence of sensory experience on neural self-organization and its synaptic bases. Meanwhile, novel algorithms for unsupervised and self-supervised learning have become increasingly popular both as inspiration for theories of the brain, particularly for the function of intermediate visual cortical areas, and as building blocks of real-world learning machines. Here we review some of these recent developments, placing them in historical context and highlighting some research lines that promise exciting breakthroughs in the near future.
Affiliation(s)
- Giulio Matteucci
- Department of Basic Neurosciences, University of Geneva, Geneva, 1206, Switzerland. https://twitter.com/giulio_matt
- Eugenio Piasini
- International School for Advanced Studies (SISSA), Trieste, 34136, Italy
- Davide Zoccolan
- International School for Advanced Studies (SISSA), Trieste, 34136, Italy
8. Lin C, Bulls LS, Tepfer LJ, Vyas AD, Thornton MA. Advancing Naturalistic Affective Science with Deep Learning. Affect Sci 2023; 4:550-562. [PMID: 37744976] [PMCID: PMC10514024] [DOI: 10.1007/s42761-023-00215-z]
Abstract
People express their own emotions and perceive others' emotions via a variety of channels, including facial movements, body gestures, vocal prosody, and language. Studying these channels of affective behavior offers insight into both the experience and perception of emotion. Prior research has predominantly focused on studying individual channels of affective behavior in isolation using tightly controlled, non-naturalistic experiments. This approach limits our understanding of emotion in more naturalistic contexts where different channels of information tend to interact. Traditional methods struggle to address this limitation: manually annotating behavior is time-consuming, making it infeasible to do at large scale; manually selecting and manipulating stimuli based on hypotheses may neglect unanticipated features, potentially generating biased conclusions; and common linear modeling approaches cannot fully capture the complex, nonlinear, and interactive nature of real-life affective processes. In this methodology review, we describe how deep learning can be applied to address these challenges to advance a more naturalistic affective science. First, we describe current practices in affective research and explain why existing methods face challenges in revealing a more naturalistic understanding of emotion. Second, we introduce deep learning approaches and explain how they can be applied to tackle three main challenges: quantifying naturalistic behaviors, selecting and manipulating naturalistic stimuli, and modeling naturalistic affective processes. Finally, we describe the limitations of these deep learning methods, and how these limitations might be avoided or mitigated. By detailing the promise and the peril of deep learning, this review aims to pave the way for a more naturalistic affective science.
Affiliation(s)
- Chujun Lin
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Landry S. Bulls
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Lindsey J. Tepfer
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Amisha D. Vyas
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Mark A. Thornton
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
9. O'Reilly JA, Zhu JD, Sowman PF. Localized estimation of electromagnetic sources underlying event-related fields using recurrent neural networks. J Neural Eng 2023; 20:046035. [PMID: 37567215] [DOI: 10.1088/1741-2552/acef94]
Abstract
Objective. To use a recurrent neural network (RNN) to reconstruct neural activity responsible for generating noninvasively measured electromagnetic signals. Approach. Output weights of an RNN were fixed as the lead field matrix from volumetric source space computed using the boundary element method with co-registered structural magnetic resonance images and magnetoencephalography (MEG). Initially, the network was trained to minimise mean-squared-error loss between its outputs and MEG signals, causing activations in the penultimate layer to converge towards putative neural source activations. Subsequently, L1 regularisation was applied to the final hidden layer, and the model was fine-tuned, causing it to favour more focused activations. Estimated source signals were then obtained from the outputs of the last hidden layer. We developed and validated this approach with simulations before applying it to real MEG data, comparing performance with beamformers, minimum-norm estimate, and mixed-norm estimate source reconstruction methods. Main results. The proposed RNN method had higher output signal-to-noise ratios and comparable correlation and error between estimated and simulated sources. Reconstructed MEG signals were also equal or superior to the other methods regarding their similarity to ground truth. When applied to MEG data recorded during an auditory roving oddball experiment, source signals estimated with the RNN were generally biophysically plausible and consistent with expectations from the literature. Significance. This work builds on recent developments of RNNs for modelling event-related neural responses by incorporating biophysical constraints from the forward model, thus taking a significant step towards greater biological realism and introducing the possibility of exploring how input manipulations may influence localised neural activity.
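The two key ingredients of this approach, a fixed lead-field forward model and an L1 penalty that focuses source estimates, can be illustrated in a simplified linear setting using iterative soft-thresholding (ISTA). This is a sketch of the underlying principle, not the authors' RNN implementation; the problem dimensions and regularisation strength are invented.

```python
import numpy as np

# Sparse source estimation with a fixed forward (lead-field) model:
# minimise 0.5*||y - L s||^2 + lam*||s||_1 by iterative soft-thresholding.
rng = np.random.default_rng(3)

n_sensors, n_sources = 30, 100
L = rng.standard_normal((n_sensors, n_sources))  # stand-in lead field matrix

s_true = np.zeros(n_sources)
s_true[17] = 2.5                                 # a single focal active source
y = L @ s_true + 0.05 * rng.standard_normal(n_sensors)

def ista(y, L, lam=0.5, n_iter=500):
    step = 1.0 / np.linalg.norm(L, 2) ** 2       # 1 / Lipschitz constant
    s = np.zeros(L.shape[1])
    for _ in range(n_iter):
        grad = L.T @ (L @ s - y)                 # gradient of the data term
        z = s - step * grad
        s = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return s

s_hat = ista(y, L)   # recovers a focal estimate at the true source index
```

As in the paper's fine-tuning stage, the L1 term suppresses diffuse low-amplitude activity, so the estimate concentrates on the focal source that actually generated the sensor data.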
Affiliation(s)
- Jamie A O'Reilly
- School of Engineering, King Mongkut's Institute of Technology Ladkrabang, Bangkok 10520, Thailand
- Judy D Zhu
- School of Psychological Sciences, Macquarie University, Sydney, New South Wales 2109, Australia
- Paul F Sowman
- School of Psychological Sciences, Macquarie University, Sydney, New South Wales 2109, Australia
10. Pugavko MM, Maslennikov OV, Nekorkin VI. Multitask computation through dynamics in recurrent spiking neural networks. Sci Rep 2023; 13:3997. [PMID: 36899052] [PMCID: PMC10006454] [DOI: 10.1038/s41598-023-31110-z]
Abstract
In this work, inspired by cognitive neuroscience experiments, we propose recurrent spiking neural networks trained to perform multiple target tasks. These models are designed by treating neurocognitive activity as computation through dynamics. Trained on input-output examples, these spiking neural networks are reverse engineered to find the dynamic mechanisms that are fundamental to their performance. We show that considering multitasking and spiking within one system provides insight into the principles of neural computation.
Affiliation(s)
- Mechislav M Pugavko
- Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, 603950, Russia
- Oleg V Maslennikov
- Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, 603950, Russia
- Vladimir I Nekorkin
- Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, 603950, Russia
11. Recurrent networks endowed with structural priors explain suboptimal animal behavior. Curr Biol 2023; 33:622-638.e7. [PMID: 36657448] [DOI: 10.1016/j.cub.2022.12.044]
Abstract
The strategies found by animals facing a new task are determined both by individual experience and by structural priors evolved to leverage the statistics of natural environments. Rats quickly learn to capitalize on the trial sequence correlations of two-alternative forced choice (2AFC) tasks after correct trials but consistently deviate from optimal behavior after error trials. To understand this outcome-dependent gating, we first show that recurrent neural networks (RNNs) trained in the same 2AFC task outperform rats as they can readily learn to use across-trial information both after correct and error trials. We hypothesize that, although RNNs can optimize their behavior in the 2AFC task without any a priori restrictions, rats' strategy is constrained by a structural prior adapted to a natural environment in which rewarded and non-rewarded actions provide largely asymmetric information. When pre-training RNNs in a more ecological task with more than two possible choices, networks develop a strategy by which they gate off the across-trial evidence after errors, mimicking rats' behavior. Population analyses show that the pre-trained networks form an accurate representation of the sequence statistics independently of the outcome in the previous trial. After error trials, gating is implemented by a change in the network dynamics that temporarily decouple the categorization of the stimulus from the across-trial accumulated evidence. Our results suggest that the rats' suboptimal behavior reflects the influence of a structural prior that reacts to errors by isolating the network decision dynamics from the context, ultimately constraining the performance in a 2AFC laboratory task.
12
|
O’Reilly JA, Wehrman J, Sowman PF. A Guided Tutorial on Modelling Human Event-Related Potentials with Recurrent Neural Networks. SENSORS (BASEL, SWITZERLAND) 2022; 22:9243. [PMID: 36501944 PMCID: PMC9738446 DOI: 10.3390/s22239243] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/14/2022] [Revised: 11/13/2022] [Accepted: 11/23/2022] [Indexed: 06/17/2023]
Abstract
In cognitive neuroscience research, computational models of event-related potentials (ERP) can provide a means of developing explanatory hypotheses for the observed waveforms. However, researchers trained in cognitive neurosciences may face technical challenges in implementing these models. This paper provides a tutorial on developing recurrent neural network (RNN) models of ERP waveforms in order to facilitate broader use of computational models in ERP research. To exemplify the RNN model usage, the P3 component evoked by target and non-target visual events, measured at channel Pz, is examined. Input representations of experimental events and corresponding ERP labels are used to optimize the RNN in a supervised learning paradigm. Linking one input representation with multiple ERP waveform labels, then optimizing the RNN to minimize mean-squared-error loss, causes the RNN output to approximate the grand-average ERP waveform. Behavior of the RNN can then be evaluated as a model of the computational principles underlying ERP generation. Aside from fitting such a model, the current tutorial will also demonstrate how to classify hidden units of the RNN by their temporal responses and characterize them using principal component analysis. Statistical hypothesis testing can also be applied to these data. This paper focuses on presenting the modelling approach and subsequent analysis of model outputs in a how-to format, using publicly available data and shared code. While relatively less emphasis is placed on specific interpretations of P3 response generation, the results initiate some interesting discussion points.
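The tutorial's central observation, that linking one input representation with multiple ERP waveform labels and minimising mean-squared-error loss makes the output approximate the grand-average waveform, follows because the MSE minimiser over a set of targets is their mean. A toy gradient-descent demonstration of that point (not the tutorial's actual RNN; the waveform and noise level are invented):

```python
import numpy as np

# One fixed "input" mapped to many noisy waveform labels: gradient descent on
# mean-squared error converges to the grand-average waveform.
rng = np.random.default_rng(5)

t = np.linspace(0, 1, 200)
true_erp = np.exp(-((t - 0.3) ** 2) / 0.005)           # idealised P3-like peak
trials = true_erp + 0.3 * rng.standard_normal((100, t.size))

output = np.zeros(t.size)                              # model output waveform
lr = 0.1
for _ in range(500):
    grad = 2.0 * (output - trials).mean(axis=0)        # d(MSE)/d(output)
    output -= lr * grad

grand_average = trials.mean(axis=0)
max_dev = float(np.abs(output - grand_average).max())  # ~0 after training
```

The same logic carries over to the RNN setting: whatever the model's parametrisation, the MSE objective pulls its output toward the average of the waveform labels paired with each input.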
Affiliation(s)
- Jamie A. O'Reilly
- College of Biomedical Engineering, Rangsit University, Pathum Thani 12000, Thailand
- School of Engineering, King Mongkut's Institute of Technology Ladkrabang, Bangkok 10520, Thailand
- Jordan Wehrman
- Brain and Mind Centre, University of Sydney, Sydney, NSW 2006, Australia
- Paul F. Sowman
- School of Psychological Sciences, Macquarie University, Sydney, NSW 2109, Australia
13. O'Reilly JA. Recurrent Neural Network Model of Human Event-related Potentials in Response to Intensity Oddball Stimulation. Neuroscience 2022; 504:63-74. [DOI: 10.1016/j.neuroscience.2022.10.004]
14. O'Reilly JA. Modelling mouse auditory response dynamics along a continuum of consciousness using a deep recurrent neural network. J Neural Eng 2022; 19. [DOI: 10.1088/1741-2552/ac9257]
Abstract
Objective. Understanding neurophysiological changes that accompany transitions between anaesthetized and conscious states is a key objective of anesthesiology and consciousness science. This study aimed to characterize the dynamics of auditory-evoked potential morphology in mice along a continuum of consciousness. Approach. Epidural field potentials were recorded from above the primary auditory cortices of two groups of laboratory mice: urethane-anaesthetized (A, n = 14) and conscious (C, n = 17). Both groups received auditory stimulation in the form of a repeated pure-tone stimulus, before and after receiving 10 mg/kg i.p. ketamine (AK and CK). Evoked responses were then ordered by ascending sample entropy into AK, A, CK, and C, considered to reflect physiological correlates of awareness. These data were used to train a recurrent neural network (RNN) with an input parameter encoding state. Model outputs were compared with grand-average event-related potential (ERP) waveforms. Subsequently, the state parameter was varied to simulate changes in the ERP that occur during transitions between states, and relationships with dominant peak amplitudes were quantified. Main results. The RNN synthesized output waveforms that were in close agreement with grand-average ERPs for each group (r2 > 0.9, p < 0.0001). Varying the input state parameter generated model outputs reflecting changes in ERP morphology predicted to occur between states. Positive peak amplitudes within 25 to 50 ms, and negative peak amplitudes within 50 to 75 ms post-stimulus-onset, were found to display a sigmoidal characteristic during the transition from anaesthetized to conscious states. In contrast, negative peak amplitudes within 0 to 25 ms displayed greater linearity. Significance. This study demonstrates a method for modelling changes in ERP morphology that accompany transitions between states of consciousness using a RNN. In future studies, this approach may be applied to human data to support the clinical use of ERPs to predict transition to consciousness.
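The reported sigmoidal relationship between peak amplitude and the state parameter can be illustrated by fitting a logistic curve to amplitude-versus-state samples with a least-squares grid search. All values below are synthetic and for illustration only; this is not the study's fitting procedure.

```python
import numpy as np

# Peak amplitude as a logistic function of a continuous "state" parameter
# (0 = anaesthetized, 1 = conscious), with the transition midpoint recovered
# from noisy samples by a least-squares grid search.
rng = np.random.default_rng(6)

def logistic(state, midpoint, slope=12.0):
    return 1.0 / (1.0 + np.exp(-slope * (state - midpoint)))

true_midpoint = 0.6
states = np.linspace(0.0, 1.0, 60)
amplitudes = logistic(states, true_midpoint) + 0.05 * rng.standard_normal(60)

candidates = np.linspace(0.1, 0.9, 161)
errors = [np.sum((amplitudes - logistic(states, m)) ** 2) for m in candidates]
fit_midpoint = float(candidates[int(np.argmin(errors))])  # near 0.6
```

A linear fit to the same samples would miss the plateau-step-plateau shape; comparing residuals of the two fits is one simple way to quantify how "sigmoidal" a given peak's transition is.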
15. O'Reilly JA, Angsuwatanakul T, Wehrman J. Decoding violated sensory expectations from the auditory cortex of anaesthetised mice: Hierarchical recurrent neural network depicts separate 'danger' and 'safety' units. Eur J Neurosci 2022; 56:4154-4175. [PMID: 35695993] [PMCID: PMC9545291] [DOI: 10.1111/ejn.15736]
Abstract
The ability to respond appropriately to sensory information received from the external environment is among the most fundamental capabilities of central nervous systems. In the auditory domain, processes underlying this behaviour are studied by measuring auditory-evoked electrophysiology during sequences of sounds with predetermined regularities. Identifying neural correlates of ensuing auditory novelty responses is supported by research in experimental animals. In the present study, we reanalysed epidural field potential recordings from the auditory cortex of anaesthetised mice during frequency and intensity oddball stimulation. Multivariate pattern analysis (MVPA) and hierarchical recurrent neural network (RNN) modelling were adopted to explore these data with greater resolution than previously considered using conventional methods. Time-wise and generalised temporal decoding MVPA approaches revealed previously underestimated asymmetry between responses to sound-level transitions in the intensity oddball paradigm, in contrast with tone frequency changes. After training, the cross-validated RNN model architecture with four hidden layers produced output waveforms in response to simulated auditory inputs that were strongly correlated with grand-average auditory-evoked potential waveforms (r2 > 0.9). Units in hidden layers were classified based on their temporal response properties and characterised using principal component analysis and sample entropy. These demonstrated spontaneous alpha rhythms, sound onset and offset responses, and putative 'safety' and 'danger' units activated by relatively inconspicuous and salient changes in auditory inputs, respectively. The hypothesised existence of corresponding biological neural sources is naturally derived from this model. If proven, this could have significant implications for prevailing theories of auditory processing.
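Time-wise decoding of the kind applied here can be sketched with a nearest-class-mean classifier trained independently at each timepoint. The synthetic data (two conditions differing only in a late window) and the classifier choice are illustrative assumptions, not the study's MVPA pipeline.

```python
import numpy as np

# Time-wise decoding: at each timepoint, classify condition from the
# multichannel response using class means estimated on training trials.
rng = np.random.default_rng(7)

n_trials, n_channels, n_times = 80, 16, 50
effect = np.zeros((n_channels, n_times))
effect[:, 30:] = 1.0                        # conditions differ only after t=30

def make_trials(label):
    base = rng.standard_normal((n_trials, n_channels, n_times))
    return base + (label * effect)

a_train, a_test = make_trials(0), make_trials(0)
b_train, b_test = make_trials(1), make_trials(1)

def decode(t):
    """Accuracy of a nearest-class-mean decoder at timepoint t."""
    mean_a, mean_b = a_train[:, :, t].mean(0), b_train[:, :, t].mean(0)
    correct = 0
    for trial in a_test[:, :, t]:
        correct += np.linalg.norm(trial - mean_a) < np.linalg.norm(trial - mean_b)
    for trial in b_test[:, :, t]:
        correct += np.linalg.norm(trial - mean_b) < np.linalg.norm(trial - mean_a)
    return correct / (2 * n_trials)

acc_early = float(np.mean([decode(t) for t in range(0, 30)]))   # chance level
acc_late = float(np.mean([decode(t) for t in range(30, 50)]))   # above chance
```

Plotting accuracy against time localises when condition information becomes decodable; training at one timepoint and testing at another extends this to the generalised temporal decoding the abstract mentions.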
Affiliation(s)
- Jamie A O'Reilly
- College of Biomedical Engineering, Rangsit University, Lak Hok, Thailand
- Jordan Wehrman
- Brain and Mind Centre, University of Sydney, Camperdown, New South Wales, Australia
16. Mei J, Muller E, Ramaswamy S. Informing deep neural networks by multiscale principles of neuromodulatory systems. Trends Neurosci 2022; 45:237-250. [DOI: 10.1016/j.tins.2021.12.008]