1
Zhang WH. Decentralized Neural Circuits of Multisensory Information Integration in the Brain. Adv Exp Med Biol 2024; 1437:1-21. [PMID: 38270850] [DOI: 10.1007/978-981-99-7611-9_1]
Abstract
The brain combines multisensory inputs to obtain a complete and reliable description of the world. Recent experiments suggest that several interconnected multisensory brain areas are simultaneously involved in integrating multisensory information. How these mutually connected areas achieve integration, however, was unknown. To answer this question, we used biologically plausible neural circuit models to develop a decentralized system for information integration that comprises multiple interconnected multisensory brain areas. Through studying the example of integrating visual and vestibular cues to infer heading direction, we show that such a decentralized system is consistent with experimental observations. In particular, we demonstrate that this decentralized system can optimally integrate information by implementing sampling-based Bayesian inference. The Poisson variability of spike generation provides appropriate variability to drive sampling, and the interconnections between multisensory areas store the correlation prior between multisensory stimuli. The decentralized system predicts that optimally integrated information emerges locally from the dynamics of the communication between brain areas, and it sheds new light on the interpretation of the connectivity between multisensory brain areas.
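For two independent Gaussian cues, the optimal integration this abstract refers to reduces to reliability-weighted averaging. A minimal sketch for the visual/vestibular heading example (this is a textbook illustration, not the paper's circuit model; the function name and numbers are invented):

```python
def combine_cues(mu_a, var_a, mu_b, var_b):
    """Minimum-variance (Bayes-optimal) fusion of two Gaussian cue estimates."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)  # reliability weight
    mu = w_a * mu_a + (1.0 - w_a) * mu_b               # weighted average
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)            # fused variance
    return mu, var

# Heading estimates: visual cue at 10 deg, vestibular cue at 20 deg,
# equal reliabilities (variance 4) -> fused estimate midway, variance halved.
mu, var = combine_cues(10.0, 4.0, 20.0, 4.0)
```

Note that the fused variance is always smaller than either input variance, which is the signature of optimal integration reported in such cue-combination experiments.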
Affiliation(s)
- Wen-Hao Zhang
- Lyda Hill Department of Bioinformatics and O'Donnell Brain Institute, UT Southwestern Medical Center, Dallas, TX, USA.
2
A normative model of peripersonal space encoding as performing impact prediction. PLoS Comput Biol 2022; 18:e1010464. [PMID: 36103520] [PMCID: PMC9512250] [DOI: 10.1371/journal.pcbi.1010464]
Abstract
Accurately predicting contact between our bodies and environmental objects is paramount to our evolutionary survival. It has been hypothesized that multisensory neurons responding both to touch on the body, and to auditory or visual stimuli occurring near them—thus delineating our peripersonal space (PPS)—may be a critical player in this computation. However, we lack a normative account (i.e., a model specifying how we ought to compute) linking impact prediction and PPS encoding. Here, we leverage Bayesian Decision Theory to develop such a model and show that it recapitulates many of the characteristics of PPS. Namely, a normative model of impact prediction (i) delineates a graded boundary between near and far space, (ii) demonstrates an enlargement of PPS as the speed of incoming stimuli increases, (iii) shows stronger contact prediction for looming than receding stimuli—but critically is still present for receding stimuli when observation uncertainty is non-zero—, (iv) scales with the value we attribute to environmental objects, and finally (v) can account for the differing sizes of PPS for different body parts. Together, these modeling results support the conjecture that PPS reflects the computation of impact prediction, and make a number of testable predictions for future empirical studies. The brain has neurons that respond to touch on the body, as well as to auditory or visual stimuli occurring near the body. These neurons delineate a graded boundary between the near and far space. Here, we aim at understanding whether the function of these neurons is to predict future impact between the environment and body. To do so, we build a mathematical model that is statistically optimal at predicting future impact, taking into account the costs incurred by an impending collision. Then we examine if its properties are similar to those of the above-mentioned neurons. 
We find that the model (i) differentiates between the near and far space in a graded fashion, predicts different near/far boundary depths for different (ii) body parts, (iii) object speeds and (iv) directions, and (v) that this boundary scales with the value we attribute to environmental objects. These properties have all been described in behavioral studies and ascribed to neurons responding to objects near the body. Together, these findings suggest why the brain has neurons that respond only to objects near the body: to compute predictions of impact.
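A probabilistic reading of impact prediction already yields the graded, speed-dependent near/far boundary described above. The toy sketch below is an assumption-laden illustration, not the authors' model; `obs_sigma` and `horizon` are invented parameters:

```python
import math

def impact_probability(distance, speed, obs_sigma=0.2, horizon=1.0):
    """P(an approaching object reaches the body within `horizon` seconds),
    given a Gaussian-noisy observation of its current distance."""
    # Impact within the horizon occurs if the true distance is below
    # speed * horizon; integrate the Gaussian observation noise.
    z = (speed * horizon - distance) / obs_sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p_near = impact_probability(0.5, 1.0)   # well inside the boundary
p_far = impact_probability(1.5, 1.0)    # well outside it
p_fast = impact_probability(1.2, 2.0)   # faster stimulus: boundary moves outward
```

The probability falls off gradually with distance (a graded boundary, property i) and the fall-off point scales with stimulus speed (property iii of the first list above).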
3
Forch V, Hamker FH. Building and Understanding the Minimal Self. Front Psychol 2021; 12:716982. [PMID: 34899463] [PMCID: PMC8660690] [DOI: 10.3389/fpsyg.2021.716982]
Abstract
Within the methodologically diverse interdisciplinary research on the minimal self, we identify two movements with seemingly disparate research agendas - cognitive science and cognitive (developmental) robotics. Cognitive science, on the one hand, devises rather abstract models which can predict and explain human experimental data related to the minimal self. Incorporating the established models of cognitive science and ideas from artificial intelligence, cognitive robotics, on the other hand, aims to build embodied learning machines capable of developing a self "from scratch" similar to human infants. The epistemic promise of the latter approach is that, at some point, robotic models can serve as a testbed for directly investigating the mechanisms that lead to the emergence of the minimal self. While both approaches can be productive for creating causal mechanistic models of the minimal self, we argue that building a minimal self is different from understanding the human minimal self. Thus, one should be cautious when drawing conclusions about the human minimal self based on robotic model implementations and vice versa. We further point out that incorporating constraints arising from different levels of analysis will be crucial for creating models that can predict, generate, and causally explain behavior in the real world.
Affiliation(s)
- Fred H. Hamker
- Department of Computer Science, Chemnitz University of Technology, Chemnitz, Germany
4
Phillips JM, Kambi NA, Redinbaugh MJ, Mohanta S, Saalmann YB. Disentangling the influences of multiple thalamic nuclei on prefrontal cortex and cognitive control. Neurosci Biobehav Rev 2021; 128:487-510. [PMID: 34216654] [DOI: 10.1016/j.neubiorev.2021.06.042]
Abstract
The prefrontal cortex (PFC) has a complex relationship with the thalamus, involving many nuclei which occupy predominantly medial zones along its anterior-to-posterior extent. Thalamocortical neurons in most of these nuclei are modulated by the affective and cognitive signals which funnel through the basal ganglia. We review how PFC-connected thalamic nuclei likely contribute to all aspects of cognitive control: from the processing of information on internal states and goals, facilitating its interactions with mnemonic information and learned values of stimuli and actions, to their influence on high-level cognitive processes, attentional allocation and goal-directed behavior. This includes contributions to transformations such as rule-to-choice (parvocellular mediodorsal nucleus), value-to-choice (magnocellular mediodorsal nucleus), mnemonic-to-choice (anteromedial nucleus) and sensory-to-choice (medial pulvinar). Common mechanisms appear to be thalamic modulation of cortical gain and cortico-cortical functional connectivity. The anatomy also implies a unique role for medial PFC in modulating processing in thalamocortical circuits involving other orbital and lateral PFC regions. We further discuss how cortico-basal ganglia circuits may provide a mechanism through which PFC controls cortico-cortical functional connectivity.
Affiliation(s)
- Jessica M Phillips
- Department of Psychology, University of Wisconsin-Madison, 1202 W Johnson St., Madison, WI 53706, United States.
- Niranjan A Kambi
- Department of Psychology, University of Wisconsin-Madison, 1202 W Johnson St., Madison, WI 53706, United States
- Michelle J Redinbaugh
- Department of Psychology, University of Wisconsin-Madison, 1202 W Johnson St., Madison, WI 53706, United States
- Sounak Mohanta
- Department of Psychology, University of Wisconsin-Madison, 1202 W Johnson St., Madison, WI 53706, United States
- Yuri B Saalmann
- Department of Psychology, University of Wisconsin-Madison, 1202 W Johnson St., Madison, WI 53706, United States; Wisconsin National Primate Research Center, University of Wisconsin-Madison, 1202 Capitol Ct., Madison, WI 53715, United States.
5
Bertoni T, Magosso E, Serino A. From statistical regularities in multisensory inputs to peripersonal space representation and body ownership: Insights from a neural network model. Eur J Neurosci 2021; 53:611-636. [PMID: 32965729] [PMCID: PMC7894138] [DOI: 10.1111/ejn.14981]
Abstract
Peripersonal space (PPS), the interface between the self and the environment, is represented by a network of multisensory neurons with visual (or auditory) receptive fields anchored to specific body parts, and tactile receptive fields covering the same body parts. Neurophysiological and behavioural features of hand PPS representation have been previously modelled through a neural network constituted by one multisensory population integrating tactile inputs with visual/auditory external stimuli. Reference frame transformations were not explicitly modelled, as stimuli were encoded in pre-computed hand-centred coordinates. Here we present a novel model, aiming to overcome this limitation by including a proprioceptive population encoding hand position. We confirmed behaviourally the plausibility of the proposed architecture, showing that visuo-proprioceptive information is integrated to enhance tactile processing on the hand. Moreover, the network's connectivity was spontaneously tuned through a Hebbian-like mechanism, under two minimal assumptions. First, the plasticity rule was designed to learn the statistical regularities of visual, proprioceptive and tactile inputs. Second, such statistical regularities were simply those imposed by the body structure. The network learned to integrate proprioceptive and visual stimuli, and to compute their hand-centred coordinates to predict tactile stimulation. Through the same mechanism, the network reproduced behavioural correlates of manipulations implicated in subjective body ownership: the invisible and the rubber hand illusion. We thus propose that PPS representation and body ownership may emerge through a unified neurocomputational process: the integration of multisensory information, consistent with a model of the body in the environment, learned from the natural statistics of sensory inputs.
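The core idea, that co-occurrence statistics imposed by the body structure can wire visuo-tactile connectivity through a Hebbian-like rule, can be sketched in a few lines. Population sizes, the learning rate and the decay term below are illustrative assumptions, not the paper's network:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis = n_tac = 20
W = np.zeros((n_tac, n_vis))   # visual -> tactile weights
eta, decay = 0.1, 0.01

# Statistical regularity imposed by the body: a visual stimulus near
# position i co-occurs with touch at the same position i.
for _ in range(500):
    i = rng.integers(n_vis)
    vis = np.zeros(n_vis); vis[i] = 1.0
    tac = np.zeros(n_tac); tac[i] = 1.0
    W += eta * (np.outer(tac, vis) - decay * W)   # Hebbian update with decay

# Spatially aligned (co-active) pairs end up with the strongest weights.
aligned = W.diagonal().mean()
misaligned = W[~np.eye(n_tac, dtype=bool)].mean()
```

After training, the weight matrix is concentrated on spatially matched pairs, so visual input near a body part selectively predicts (and can enhance) tactile processing there.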
Affiliation(s)
- Tommaso Bertoni
- MySpace Lab, Department of Clinical Neuroscience, Lausanne University Hospital (CHUV), University of Lausanne, Lausanne, Switzerland
- Elisa Magosso
- Department of Electrical, Electronic, and Information Engineering “Guglielmo Marconi”, University of Bologna, Cesena, Italy
- Andrea Serino
- MySpace Lab, Department of Clinical Neuroscience, Lausanne University Hospital (CHUV), University of Lausanne, Lausanne, Switzerland
6
Kramer A, Röder B, Bruns P. Feedback Modulates Audio-Visual Spatial Recalibration. Front Integr Neurosci 2020; 13:74. [PMID: 32009913] [PMCID: PMC6979315] [DOI: 10.3389/fnint.2019.00074]
Abstract
In an ever-changing environment, crossmodal recalibration is crucial to maintain precise and coherent spatial estimates across different sensory modalities. Accordingly, it has been found that perceived auditory space is recalibrated toward vision after consistent exposure to spatially misaligned audio-visual stimuli. While this so-called ventriloquism aftereffect (VAE) yields internal consistency between vision and audition, it does not necessarily lead to consistency between the perceptual representation of space and the actual environment. For this purpose, feedback about the true state of the external world might be necessary. Here, we tested whether the size of the VAE is modulated by external feedback and reward. During adaptation, audio-visual stimuli with a fixed spatial discrepancy were presented. Participants had to localize the sound and received feedback about the magnitude of their localization error. In half of the sessions the feedback was based on the position of the visual stimulus (VS) and in the other half it was based on the position of the auditory stimulus. An additional monetary reward was given if the localization error fell below a certain threshold that was based on participants’ performance in the pretest. As expected, when error feedback was based on the position of the VS, auditory localization during adaptation trials shifted toward the position of the VS. Conversely, feedback based on the position of the auditory stimuli reduced the visual influence on auditory localization (i.e., the ventriloquism effect) and improved sound localization accuracy. After adaptation with error feedback based on the VS position, a typical auditory VAE (but no visual aftereffect) was observed in subsequent unimodal localization tests. By contrast, when feedback was based on the position of the auditory stimuli during adaptation, no auditory VAE was observed in subsequent unimodal auditory trials. Importantly, in this situation no visual aftereffect was found either.
As feedback did not change the physical attributes of the audio-visual stimulation during adaptation, the present findings suggest that crossmodal recalibration is subject to top–down influences. Such top–down influences might help prevent miscalibration of audition toward conflicting visual stimulation in situations in which external feedback indicates that visual information is inaccurate.
Affiliation(s)
- Alexander Kramer
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
7
Medendorp WP, Heed T. State estimation in posterior parietal cortex: Distinct poles of environmental and bodily states. Prog Neurobiol 2019; 183:101691. [DOI: 10.1016/j.pneurobio.2019.101691]
8
Kalaska JF. Emerging ideas and tools to study the emergent properties of the cortical neural circuits for voluntary motor control in non-human primates. F1000Res 2019; 8. [PMID: 31275561] [PMCID: PMC6544130] [DOI: 10.12688/f1000research.17161.1]
Abstract
For years, neurophysiological studies of the cerebral cortical mechanisms of voluntary motor control were limited to single-electrode recordings of the activity of one or a few neurons at a time. This approach was supported by the widely accepted belief that single neurons were the fundamental computational units of the brain (the “neuron doctrine”). Experiments were guided by motor-control models that proposed that the motor system attempted to plan and control specific parameters of a desired action, such as the direction, speed or causal forces of a reaching movement in specific coordinate frameworks, and that assumed that the controlled parameters would be expressed in the task-related activity of single neurons. The advent of chronically implanted multi-electrode arrays about 20 years ago permitted the simultaneous recording of the activity of many neurons. This greatly enhanced the ability to study neural control mechanisms at the population level. It has also shifted the focus of the analysis of neural activity from quantifying single-neuron correlates with different movement parameters to probing the structure of multi-neuron activity patterns to identify the emergent computational properties of cortical neural circuits. In particular, recent advances in “dimension reduction” algorithms have attempted to identify specific covariance patterns in multi-neuron activity which are presumed to reflect the underlying computational processes by which neural circuits convert the intention to perform a particular movement into the required causal descending motor commands. These analyses have led to many new perspectives and insights on how cortical motor circuits covertly plan and prepare to initiate a movement without causing muscle contractions, transition from preparation to overt execution of the desired movement, generate muscle-centered motor output commands, and learn new motor skills. 
Progress is also being made to import optical-imaging and optogenetic toolboxes from rodents to non-human primates to overcome some technical limitations of multi-electrode recording technology.
Affiliation(s)
- John F Kalaska
- Groupe de recherche sur le système nerveux central (GRSNC), Département de Neurosciences, Faculté de Médecine, Université de Montréal, C.P. 6128, Succ. Centre-ville, Montréal (Québec), H3C 3J7, Canada
9
Serino A. Peripersonal space (PPS) as a multisensory interface between the individual and the environment, defining the space of the self. Neurosci Biobehav Rev 2019; 99:138-159. [DOI: 10.1016/j.neubiorev.2019.01.016]
10
Makin JG, O’Doherty JE, Cardoso MMB, Sabes PN. Superior arm-movement decoding from cortex with a new, unsupervised-learning algorithm. J Neural Eng 2018; 15:026010. [DOI: 10.1088/1741-2552/aa9e95]
11
Efficient probabilistic inference in generic neural networks trained with non-probabilistic feedback. Nat Commun 2017; 8:138. [PMID: 28743932] [PMCID: PMC5527101] [DOI: 10.1038/s41467-017-00181-8]
Abstract
Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey’s learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules. Behavioural tasks often require probability distributions over task-specific variables to be inferred. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task-specific operations.
12
Sokoloski S. Implementing a Bayes Filter in a Neural Circuit: The Case of Unknown Stimulus Dynamics. Neural Comput 2017; 29:2450-2490. [PMID: 28599113] [DOI: 10.1162/neco_a_00991]
Abstract
In order to interact intelligently with objects in the world, animals must first transform neural population responses into estimates of the dynamic, unknown stimuli that caused them. The Bayesian solution to this problem is known as a Bayes filter, which applies Bayes' rule to combine population responses with the predictions of an internal model. The internal model of the Bayes filter is based on the true stimulus dynamics, and in this note, we present a method for training a theoretical neural circuit to approximately implement a Bayes filter when the stimulus dynamics are unknown. To do this we use the inferential properties of linear probabilistic population codes to compute Bayes' rule and train a neural network to compute approximate predictions by the method of maximum likelihood. In particular, we perform stochastic gradient descent on the negative log-likelihood of the neural network parameters with a novel approximation of the gradient. We demonstrate our methods on a finite-state, a linear, and a nonlinear filtering problem and show how the hidden layer of the neural network develops tuning curves consistent with findings in experimental neuroscience.
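The predict-then-update cycle of a Bayes filter over a discrete state space can be written in a few lines. This is the generic textbook recursion the abstract builds on, not the paper's neural-circuit implementation:

```python
import numpy as np

def bayes_filter_step(prior, transition, likelihood):
    """One predict-update cycle of a discrete-state Bayes filter."""
    predicted = transition @ prior        # prediction from the internal model
    posterior = likelihood * predicted    # Bayes' rule: combine with observation
    return posterior / posterior.sum()    # normalize to a probability vector

# Two-state toy example: sticky dynamics, observation favouring state 0.
prior = np.array([0.5, 0.5])
transition = np.array([[0.9, 0.1],
                       [0.1, 0.9]])       # p(next state | current state)
likelihood = np.array([0.8, 0.2])         # p(observation | state)
posterior = bayes_filter_step(prior, transition, likelihood)
```

When the dynamics are unknown, as in the paper, the `transition` step is what must be learned (there, by maximum likelihood in a neural network) rather than given.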
Affiliation(s)
- Sacha Sokoloski
- Max Planck Institute for Mathematics in the Sciences, Leipzig, 04103, Germany, and Albert Einstein College of Medicine, New York, NY 10461, U.S.A.
13
Schumann F, O'Regan JK. Sensory augmentation: integration of an auditory compass signal into human perception of space. Sci Rep 2017; 7:42197. [PMID: 28195187] [PMCID: PMC5307328] [DOI: 10.1038/srep42197]
Abstract
Bio-mimetic approaches to restoring sensory function show great promise in that they rapidly produce perceptual experience, but have the disadvantage of being invasive. In contrast, sensory substitution approaches are non-invasive, but may lead to cognitive rather than perceptual experience. Here we introduce a new non-invasive approach that leads to fast and truly perceptual experience like bio-mimetic techniques. Instead of building on existing circuits at the neural level as done in bio-mimetics, we piggy-back on sensorimotor contingencies at the stimulus level. We convey head orientation to geomagnetic North, a reliable spatial relation not normally sensed by humans, by mimicking sensorimotor contingencies of distal sounds via head-related transfer functions. We demonstrate rapid and long-lasting integration into the perception of self-rotation. Short training with amplified or reduced rotation gain in the magnetic signal can expand or compress the perceived extent of vestibular self-rotation, even with the magnetic signal absent in the test. We argue that it is the reliability of the magnetic signal that allows vestibular spatial recalibration, and the coding scheme mimicking sensorimotor contingencies of distal sounds that permits fast integration. Hence we propose that contingency-mimetic feedback has great potential for creating sensory augmentation devices that achieve fast and genuinely perceptual experiences.
Affiliation(s)
- Frank Schumann
- Laboratoire Psychologie de la Perception - CNRS UMR 8242, Université Paris Descartes, Paris, France
- J Kevin O'Regan
- Laboratoire Psychologie de la Perception - CNRS UMR 8242, Université Paris Descartes, Paris, France
14
Common and distinct brain regions processing multisensory bodily signals for peripersonal space and body ownership. Neuroimage 2017; 147:602-618. [DOI: 10.1016/j.neuroimage.2016.12.052]
15
Chandrasekaran C. Computational principles and models of multisensory integration. Curr Opin Neurobiol 2016; 43:25-34. [PMID: 27918886] [DOI: 10.1016/j.conb.2016.11.002]
Abstract
Combining information from multiple senses creates robust percepts, speeds up responses, enhances learning, and improves detection, discrimination, and recognition. In this review, I discuss computational models and principles that provide insight into how this process of multisensory integration occurs at the behavioral and neural level. My initial focus is on drift-diffusion and Bayesian models that can predict behavior in multisensory contexts. I then highlight how recent neurophysiological and perturbation experiments provide evidence for a distributed redundant network for multisensory integration. I also emphasize studies which show that task-relevant variables in multisensory contexts are distributed in heterogeneous neural populations. Finally, I describe dimensionality reduction methods and recurrent neural network models that may help decipher heterogeneous neural populations involved in multisensory integration.
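One way drift-diffusion models explain the multisensory speed-up is that drift rates from two modalities combine, so the decision boundary is reached sooner. A simulation sketch under assumed parameters (the specific drift values and threshold are invented for illustration, not taken from the review):

```python
import numpy as np

def ddm_mean_rt(drift, threshold=1.0, sigma=1.0, dt=1e-3, n_trials=1000, seed=0):
    """Mean time for a drift-diffusion process to hit either +/- threshold."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_trials)                    # accumulated evidence per trial
    rt = np.zeros(n_trials)                   # elapsed time per trial
    active = np.ones(n_trials, dtype=bool)    # trials not yet absorbed
    while active.any():
        n = active.sum()
        x[active] += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
        rt[active] += dt
        active &= np.abs(x) < threshold
    return rt.mean()

rt_uni = ddm_mean_rt(drift=1.0)     # single-modality evidence
rt_multi = ddm_mean_rt(drift=2.0)   # combined (e.g. audio + visual) drift
```

With the larger combined drift, mean decision time drops, reproducing the faster multisensory responses the review discusses.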
16
Marblestone AH, Wayne G, Kording KP. Toward an Integration of Deep Learning and Neuroscience. Front Comput Neurosci 2016; 10:94. [PMID: 27683554] [PMCID: PMC5021692] [DOI: 10.3389/fncom.2016.00094]
Abstract
Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.
Affiliation(s)
- Adam H. Marblestone
- Synthetic Neurobiology Group, Massachusetts Institute of Technology, Media Lab, Cambridge, MA, USA
- Konrad P. Kording
- Rehabilitation Institute of Chicago, Northwestern University, Chicago, IL, USA
17
Badde S, Heed T. Towards explaining spatial touch perception: Weighted integration of multiple location codes. Cogn Neuropsychol 2016; 33:26-47. [PMID: 27327353] [PMCID: PMC4975087] [DOI: 10.1080/02643294.2016.1168791]
Abstract
Touch is bound to the skin – that is, to the boundaries of the body. Yet, the activity of neurons in primary somatosensory cortex just mirrors the spatial distribution of the sensors across the skin. To determine the location of a tactile stimulus on the body, the body's spatial layout must be considered. Moreover, to relate touch to the external world, body posture has to be evaluated. In this review, we argue that posture is incorporated, by default, for any tactile stimulus. However, the relevance of the external location and, thus, its expression in behaviour, depends on various sensory and cognitive factors. Together, these factors imply that an external representation of touch dominates over the skin-based, anatomical one when our focus is on the world rather than on our own body. We conclude that touch localization is a reconstructive process that is adjusted to the context while maintaining all available spatial information.
Affiliation(s)
- Stephanie Badde
- Department of Psychology, New York University, New York, NY, USA
- Tobias Heed
- Faculty of Psychology and Human Movement Science, University of Hamburg, Hamburg, Germany
18
Tan DW, Schiefer MA, Keith MW, Anderson JR, Tyler J, Tyler DJ. A neural interface provides long-term stable natural touch perception. Sci Transl Med 2014; 6:257ra138. [PMID: 25298320] [DOI: 10.1126/scitranslmed.3008669]
Abstract
Touch perception on the fingers and hand is essential for fine motor control, contributes to our sense of self, allows for effective communication, and aids in our fundamental perception of the world. Despite increasingly sophisticated mechatronics, prosthetic devices still do not directly convey sensation back to their wearers. We show that implanted peripheral nerve interfaces in two human subjects with upper limb amputation provided stable, natural touch sensation in their hands for more than 1 year. Electrical stimulation using implanted peripheral nerve cuff electrodes that did not penetrate the nerve produced touch perceptions at many locations on the phantom hand with repeatable, stable responses in the two subjects for 16 and 24 months. Patterned stimulation intensity produced a sensation that the subjects described as natural and without "tingling," or paresthesia. Different patterns produced different types of sensory perception at the same location on the phantom hand. The two subjects reported tactile perceptions they described as natural tapping, constant pressure, light moving touch, and vibration. Changing average stimulation intensity controlled the size of the percept area; changing stimulation frequency controlled sensation strength. Artificial touch sensation improved the subjects' ability to control grasping strength of the prosthesis and enabled them to better manipulate delicate objects. Thus, electrical stimulation through peripheral nerve electrodes produced long-term sensory restoration after limb loss.
Affiliation(s)
- Daniel W Tan, Louis Stokes Veterans Affairs Medical Center, Cleveland, OH 44106, USA; Case Western Reserve University, Cleveland, OH 44106, USA
- Matthew A Schiefer, Louis Stokes Veterans Affairs Medical Center, Cleveland, OH 44106, USA; Case Western Reserve University, Cleveland, OH 44106, USA
- Michael W Keith, Louis Stokes Veterans Affairs Medical Center, Cleveland, OH 44106, USA; Case Western Reserve University, Cleveland, OH 44106, USA; MetroHealth Medical Center, Cleveland, OH 44109, USA
- James Robert Anderson, Louis Stokes Veterans Affairs Medical Center, Cleveland, OH 44106, USA; Case Western Reserve University, Cleveland, OH 44106, USA; University Hospitals Rainbow Babies & Children's Hospital, Cleveland, OH 44106, USA
- Joyce Tyler, MetroHealth Medical Center, Cleveland, OH 44109, USA
- Dustin J Tyler, Louis Stokes Veterans Affairs Medical Center, Cleveland, OH 44106, USA; Case Western Reserve University, Cleveland, OH 44106, USA; MetroHealth Medical Center, Cleveland, OH 44109, USA
|
19
|
Franklin DW, Reichenbach A, Franklin S, Diedrichsen J. Temporal Evolution of Spatial Computations for Visuomotor Control. J Neurosci 2016; 36:2329-41. [PMID: 26911681 PMCID: PMC4764656 DOI: 10.1523/jneurosci.0052-15.2016] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2015] [Revised: 09/11/2015] [Accepted: 10/29/2015] [Indexed: 11/21/2022] Open
Abstract
Goal-directed reaching movements are guided by visual feedback from both target and hand. The classical view is that the brain extracts information about target and hand positions from a visual scene, calculates a difference vector between them, and uses this estimate to control the movement. Here we show that during fast feedback control, this computation is not immediate, but evolves dynamically over time. Immediately after a change in the visual scene, the motor system generates independent responses to the errors in hand and target location. Only about 200 ms later, the changes in target and hand positions are combined appropriately in the response, slowly converging to the true difference vector. Therefore, our results provide evidence for the temporal evolution of spatial computations in the human visuomotor system, in which the accurate difference vector is first estimated by a fast approximation.
Affiliation(s)
- David W Franklin, Computational and Biological Learning Laboratory, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
- Alexandra Reichenbach, Motor Control Group, Institute of Cognitive Neuroscience, University College London, London WC1N 3AR, United Kingdom
- Sae Franklin, Computational and Biological Learning Laboratory, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
- Jörn Diedrichsen, Motor Control Group, Institute of Cognitive Neuroscience, University College London, London WC1N 3AR, United Kingdom; Brain and Mind Institute, Western University, London, Ontario N6A 5B7, Canada
|
20
|
Makin JG, Dichter BK, Sabes PN. Learning to Estimate Dynamical State with Probabilistic Population Codes. PLoS Comput Biol 2015; 11:e1004554. [PMID: 26540152 PMCID: PMC4634970 DOI: 10.1371/journal.pcbi.1004554] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2015] [Accepted: 08/26/2015] [Indexed: 12/03/2022] Open
Abstract
Tracking moving objects, including one’s own body, is a fundamental ability of higher organisms, playing a central role in many perceptual and motor tasks. While it is unknown how the brain learns to follow and predict the dynamics of objects, it is known that this process of state estimation can be learned purely from the statistics of noisy observations. When the dynamics are simply linear with additive Gaussian noise, the optimal solution is the well known Kalman filter (KF), the parameters of which can be learned via latent-variable density estimation (the EM algorithm). The brain does not, however, directly manipulate matrices and vectors, but instead appears to represent probability distributions with the firing rates of populations of neurons, “probabilistic population codes.” We show that a recurrent neural network—a modified form of an exponential family harmonium (EFH)—that takes a linear probabilistic population code as input can learn, without supervision, to estimate the state of a linear dynamical system. After observing a series of population responses (spike counts) to the position of a moving object, the network learns to represent the velocity of the object and forms nearly optimal predictions about the position at the next time-step. This result builds on our previous work showing that a similar network can learn to perform multisensory integration and coordinate transformations for static stimuli. The receptive fields of the trained network also make qualitative predictions about the developing and learning brain: tuning gradually emerges for higher-order dynamical states not explicitly present in the inputs, appearing as delayed tuning for the lower-order states. A basic task for animals is to track objects—predators, prey, even their own limbs—as they move through the world.
Because the position estimates provided by the senses are not error-free, higher levels of performance can be, and are, achieved when the velocity and acceleration, as well as the position, of the object are taken into account. Likewise, tracking of limbs under voluntary control can be improved by considering the motor command that is (partially) responsible for its trajectory. Engineers have built tools to solve precisely these problems, and even to learn dynamical features of the object to be tracked. How does the brain do it? We show how artificial networks of neurons can learn to solve this task, simply by trying to become good predictive models of their incoming data—as long as some of those data are the activities of the neurons themselves at a fixed time delay, while the remainder (imperfectly) report the current position. The tracking scheme the network learns to use—keeping track of past positions; the corresponding receptive fields; and the manner in which they are learned, provide predictions for brain areas involved in tracking, like the posterior parietal cortex.
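The linear-Gaussian case this abstract takes as its benchmark has a closed-form solution, the Kalman filter. A minimal sketch follows (the dynamics matrix, noise covariances, and random seed are invented for illustration; this shows the classical filter, not the authors' EFH network):

```python
import numpy as np

# One-dimensional tracking: state x = [position, velocity].
A = np.array([[1.0, 1.0],   # position += velocity each step
              [0.0, 1.0]])
C = np.array([[1.0, 0.0]])  # only position is observed
Q = np.eye(2) * 0.01        # process-noise covariance (assumed)
R = np.array([[0.25]])      # observation-noise covariance (assumed)

def kalman_step(x, P, y):
    """One predict/update cycle of the Kalman filter."""
    # Predict forward through the dynamics.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with the noisy observation y.
    S = C @ P_pred @ C.T + R             # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(2) - K @ C) @ P_pred
    return x_new, P_new

# Track a constant-velocity object from noisy position readings.
rng = np.random.default_rng(0)
true_x = np.array([0.0, 1.0])
x, P = np.zeros(2), np.eye(2)
for _ in range(50):
    true_x = A @ true_x
    y = C @ true_x + rng.normal(0.0, 0.5, size=1)
    x, P = kalman_step(x, P, y)
print(x)  # estimate should end up near the true [position, velocity]
```

Note that the network in the paper never sees A, C, Q, or R; it must recover equivalent behavior from spike counts alone.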
Affiliation(s)
- Joseph G. Makin, Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, California, United States of America; Department of Physiology, University of California, San Francisco, San Francisco, California, United States of America
- Benjamin K. Dichter, Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, California, United States of America; UC Berkeley-UCSF Graduate Program in Bioengineering, University of California, San Francisco, San Francisco, California, United States of America
- Philip N. Sabes, Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, California, United States of America; Department of Physiology, University of California, San Francisco, San Francisco, California, United States of America; UC Berkeley-UCSF Graduate Program in Bioengineering, University of California, San Francisco, San Francisco, California, United States of America
|
21
|
Tan D, Tyler D, Sweet J, Miller J. Intensity Modulation: A Novel Approach to Percept Control in Spinal Cord Stimulation. Neuromodulation 2015; 19:254-9. [PMID: 26479774 DOI: 10.1111/ner.12358] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2015] [Revised: 08/11/2015] [Accepted: 09/01/2015] [Indexed: 12/27/2022]
Abstract
OBJECTIVE Spinal cord stimulation (SCS) can be effective for neuropathic pain, but clinical benefit is sometimes inadequate or is offset by stimulation-induced side-effects, and response can be inconsistent among patients. Intensity-modulated stimulation (IMS) is an alternative to tonic stimulation (TS) that involves continuous variation of stimulation intensity in a sinusoidal pattern between two different values, sequentially activating distinct axonal populations to produce an effect that resembles natural physiological signals. The purpose of this study is to evaluate the effect of IMS on the clinical effect of SCS. METHODS Seven patients undergoing a percutaneous SCS trial for postlaminectomy syndrome were enrolled. Thresholds for perception, pain relief, and discomfort were measured and used to create patient-specific models of axonal activation and charge delivery for both TS and IMS. All participants underwent three two-min periods of blinded stimulation using TS, IMS, and placebo, and were asked to describe the effect on quality of the sensory percept and pain relief. RESULTS All participants perceived IMS differently from placebo, and five noted significant differences from TS that resulted in a more comfortable sensation. TS was described as electric and tingling, whereas IMS was described as producing a focal area of deep pressure with a sense of motion away from that focus. The anatomic location of coverage was similar between the two forms of stimulation, although one participant reported better lower back coverage with IMS. Computer modeling revealed that, compared with TS, IMS involved 36.4% less charge delivery and produced 78.7% less suprathreshold axonal activation. CONCLUSIONS IMS for SCS is feasible, produces a more comfortable percept than conventional TS, and appears to provide a similar degree of pain relief with significantly lower energy requirements. 
Further studies are necessary to determine whether this represents an effective alternative to tonic SCS for treatment of neuropathic pain.
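The stimulation pattern described in this abstract, a continuous sinusoidal sweep of intensity between two values, can be sketched as a simple waveform function (illustrative only; the function name and parameter values are invented, not the study's stimulator code):

```python
import math

def ims_intensity(t, low, high, freq_hz):
    """Intensity-modulated stimulation: sinusoidal sweep between two amplitudes.

    t is time in seconds; low/high are the two intensity endpoints;
    freq_hz is the modulation (not pulse) frequency.
    """
    phase = math.sin(2.0 * math.pi * freq_hz * t)  # oscillates in [-1, 1]
    return low + (high - low) * (phase + 1.0) / 2.0

# At t=0 the intensity sits at the midpoint; a quarter-cycle later it
# peaks at the upper endpoint.
mid = ims_intensity(0.0, 2.0, 6.0, 1.0)    # midpoint of 2..6 is 4.0
peak = ims_intensity(0.25, 2.0, 6.0, 1.0)  # sin(pi/2) = 1, so 6.0
```

Tonic stimulation would instead hold a single constant amplitude; the sweep is what sequentially recruits distinct axonal populations.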
Affiliation(s)
- Daniel Tan, The Neurological Institute, University Hospitals Case Medical Center, Case Western Reserve University School of Medicine, Cleveland, OH, 44106, USA
- Dustin Tyler, The Neurological Institute, University Hospitals Case Medical Center, Case Western Reserve University School of Medicine, Cleveland, OH, 44106, USA
- Jennifer Sweet, The Neurological Institute, University Hospitals Case Medical Center, Case Western Reserve University School of Medicine, Cleveland, OH, 44106, USA
- Jonathan Miller, The Neurological Institute, University Hospitals Case Medical Center, Case Western Reserve University School of Medicine, Cleveland, OH, 44106, USA
|
22
|
Brandes J, Heed T. Reach Trajectories Characterize Tactile Localization for Sensorimotor Decision Making. J Neurosci 2015; 35:13648-58. [PMID: 26446218 PMCID: PMC6605379 DOI: 10.1523/jneurosci.1873-14.2015] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2014] [Revised: 08/24/2015] [Accepted: 08/27/2015] [Indexed: 11/21/2022] Open
Abstract
Spatial target information for movement planning appears to be coded in a gaze-centered reference frame. In touch, however, location is initially coded with reference to the skin. Therefore, the tactile spatial location must be derived by integrating skin location and posture. It has been suggested that this recoding is impaired when the limb is placed in the opposite hemispace, for example, by limb crossing. Here, human participants reached toward visual and tactile targets located at uncrossed and crossed feet in a sensorimotor decision task. We characterized stimulus recoding by analyzing the timing and spatial profile of hand reaches. For tactile targets at crossed feet, skin-based information implicates the incorrect side, and only recoded information points to the correct location. Participants initiated straight reaches and redirected the hand toward a target presented in midflight. Trajectories to visual targets were unaffected by foot crossing. In contrast, trajectories to tactile targets were redirected later with crossed than uncrossed feet. Reaches to crossed feet usually continued straight until they were directed toward the correct tactile target and were not biased toward the skin-based target location. Occasional, far deflections toward the incorrect target were most likely when this target was implicated by trial history. These results are inconsistent with the suggestion that spatial transformations in touch are impaired by limb crossing, but are consistent with tactile location being recoded rapidly and efficiently, followed by integration of skin-based and external information to specify the reach target. This process may be implemented in a bounded integrator framework. SIGNIFICANCE STATEMENT How do you touch yourself, for instance, to scratch an itch? The place you need to reach is defined by a sensation on the skin, but our bodies are flexible, so this skin location could be anywhere in 3D space. 
The movement toward the tactile sensation must therefore be specified by merging skin location and body posture. By investigating human hand reach trajectories toward tactile stimuli on the feet, we provide experimental evidence that this transformation process is quick and efficient, and that its output is integrated with the original skin location in a fashion consistent with bounded integrator decision-making frameworks.
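The bounded integrator framework invoked in the significance statement can be sketched as noisy evidence accumulation to an absorbing bound (a generic illustration; the drift, noise, and bound values are invented, not the authors' fitted parameters):

```python
import random

def bounded_integrator(drift, bound=1.0, noise=0.3, dt=0.01,
                       max_steps=10000, rng=None):
    """Accumulate noisy evidence until one of two decision bounds is hit.

    Returns (choice, decision_time): choice is +1 or -1 depending on
    which bound is reached first, or 0 if neither is reached.
    """
    rng = rng or random.Random(1)
    x = 0.0
    for step in range(1, max_steps + 1):
        # Drift toward the supported alternative plus diffusion noise.
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        if abs(x) >= bound:
            return (1 if x > 0 else -1), step * dt
    return 0, max_steps * dt

# Strong evidence for the correct (externally recoded) target location
# drives the accumulator to the positive bound.
choice, t = bounded_integrator(drift=2.0)
```

In the paper's interpretation, skin-based and recoded external information feed the same accumulator, so an early skin-based pull can transiently deflect the reach before the external code dominates.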
Affiliation(s)
- Janina Brandes, Biological Psychology and Neuropsychology, Faculty of Psychology and Human Movement Science, University of Hamburg, 20146 Hamburg, Germany
- Tobias Heed, Biological Psychology and Neuropsychology, Faculty of Psychology and Human Movement Science, University of Hamburg, 20146 Hamburg, Germany
|
23
|
Blanke O, Slater M, Serino A. Behavioral, Neural, and Computational Principles of Bodily Self-Consciousness. Neuron 2015; 88:145-66. [PMID: 26447578 DOI: 10.1016/j.neuron.2015.09.029] [Citation(s) in RCA: 394] [Impact Index Per Article: 43.8] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
Affiliation(s)
- Olaf Blanke, Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics and Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), 9 Chemin des Mines, 1202 Geneva, Switzerland; Department of Neurology, University of Geneva, 24 rue Micheli-du-Crest, 1211 Geneva, Switzerland
- Mel Slater, ICREA-University of Barcelona, Campus de Mundet, 08035 Barcelona, Spain; Department of Computer Science, University College London, Malet Place Engineering Building, Gower Street, London, WC1E 6BT, UK
- Andrea Serino, Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics and Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), 9 Chemin des Mines, 1202 Geneva, Switzerland
|
24
|
Using the precision of the primate to study the origins of movement variability. Neuroscience 2015; 296:92-100. [DOI: 10.1016/j.neuroscience.2015.01.005] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2014] [Revised: 01/05/2015] [Accepted: 01/06/2015] [Indexed: 12/28/2022]
|
25
|
Heed T, Buchholz VN, Engel AK, Röder B. Tactile remapping: from coordinate transformation to integration in sensorimotor processing. Trends Cogn Sci 2015; 19:251-8. [DOI: 10.1016/j.tics.2015.03.001] [Citation(s) in RCA: 65] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2014] [Revised: 03/04/2015] [Accepted: 03/05/2015] [Indexed: 10/23/2022]
|
26
|
Miranda RA, Casebeer WD, Hein AM, Judy JW, Krotkov EP, Laabs TL, Manzo JE, Pankratz KG, Pratt GA, Sanchez JC, Weber DJ, Wheeler TL, Ling GS. DARPA-funded efforts in the development of novel brain–computer interface technologies. J Neurosci Methods 2015; 244:52-67. [PMID: 25107852 DOI: 10.1016/j.jneumeth.2014.07.019] [Citation(s) in RCA: 56] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2014] [Revised: 07/08/2014] [Accepted: 07/24/2014] [Indexed: 02/01/2023]
|
27
|
Fierst JL, Phillips PC. Modeling the evolution of complex genetic systems: the gene network family tree. J Exp Zool B Mol Dev Evol 2015; 324:1-12. [PMID: 25504926 PMCID: PMC5528154 DOI: 10.1002/jez.b.22597] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/18/2014] [Revised: 07/24/2014] [Accepted: 08/26/2014] [Indexed: 11/08/2022]
Abstract
In 1994 and 1996, Andreas Wagner introduced a novel model in two papers addressing the evolution of genetic regulatory networks. This work, and a suite of papers that followed using similar models, helped integrate network thinking into biology and motivate research focused on the evolution of genetic networks. The Wagner network has its mathematical roots in the Ising model, a statistical physics model describing the activity of atoms on a lattice, and in neural networks. These models have given rise to two branches of applications, one in physics and biology and one in artificial intelligence and machine learning. Here, we review development along these branches, outline similarities and differences between biological models of genetic regulatory circuits and neural circuits models used in machine learning, and identify ways in which these models can provide novel insights into biological systems.
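The Wagner model reviewed here iterates a thresholded linear map on gene-expression states until, ideally, development settles into a fixed point. A minimal sketch (the network size, random interaction matrix, and seed are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 8
W = rng.normal(size=(N, N))          # regulatory interaction matrix (arbitrary)
state = np.sign(rng.normal(size=N))  # expression states in {-1, +1}

def develop(W, state, max_iters=100):
    """Iterate s <- sign(W s) until a fixed point, or give up.

    In Wagner's framework a genotype whose dynamics reach a stable
    expression pattern is developmentally viable; one that cycles is not.
    """
    for _ in range(max_iters):
        new_state = np.sign(W @ state)
        new_state[new_state == 0] = 1  # break exact ties deterministically
        if np.array_equal(new_state, state):
            return state, True   # reached a stable expression pattern
        state = new_state
    return state, False          # cycling: no fixed point found

final, stable = develop(W, state)
```

The formal kinship with Hopfield-style neural networks mentioned in the abstract is visible in the update rule itself: a sign nonlinearity applied to a weighted sum of unit states.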
Affiliation(s)
- Janna L. Fierst, Institute of Ecology and Evolution, University of Oregon, Eugene, Oregon
|
28
|
A learning-based approach to artificial sensory feedback leads to optimal integration. Nat Neurosci 2014; 18:138-44. [PMID: 25420067 PMCID: PMC4282864 DOI: 10.1038/nn.3883] [Citation(s) in RCA: 124] [Impact Index Per Article: 12.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2014] [Accepted: 10/27/2014] [Indexed: 11/08/2022]
Abstract
Proprioception—the sense of the body’s position in space—plays an important role in natural movement planning and execution and will likewise be necessary for successful motor prostheses and Brain–Machine Interfaces (BMIs). Here, we demonstrated that monkeys could learn to use an initially unfamiliar multi-channel intracortical microstimulation (ICMS) signal, which provided continuous information about hand position relative to an unseen target, to complete accurate reaches. Furthermore, monkeys combined this artificial signal with vision to form an optimal, minimum-variance estimate of relative hand position. These results demonstrate that a learning-based approach can be used to provide a rich artificial sensory feedback signal, suggesting a new strategy for restoring proprioception to patients using BMIs as well as a powerful new tool for studying the adaptive mechanisms of sensory integration.
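The minimum-variance estimate referred to here is the standard inverse-variance-weighted combination of Gaussian cues. A small sketch (the cue means and variances below are invented numbers, not the study's data):

```python
def combine(mu_a, var_a, mu_b, var_b):
    """Minimum-variance (maximum-likelihood) fusion of two Gaussian cues."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)  # reliability weight on A
    mu = w_a * mu_a + (1.0 - w_a) * mu_b
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)            # fused variance
    return mu, var

# Suppose vision puts the hand at 10 mm (variance 4) and the artificial
# ICMS signal at 14 mm (variance 12).
mu, var = combine(10.0, 4.0, 14.0, 12.0)  # -> (11.0, 3.0)
```

The fused estimate sits closer to the more reliable cue (11 mm, three-quarters of the way toward vision), and its variance (3) is smaller than either cue alone, which is the behavioral signature the study tests for.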
|
29
|
Affiliation(s)
- Dario Farina, Department of Neurorehabilitation Engineering, Bernstein Focus Neurotechnology Göttingen, Bernstein Center for Computational Neuroscience, University Medical Center Göttingen, Georg-August University, D-37075 Göttingen, Germany
- Oskar Aszmann, CD Laboratory for Restoration of Extremity Function, Division of Plastic and Reconstructive Surgery, Medical University of Vienna, A-1090 Vienna, Austria
|
30
|
Pistohl T, Joshi D, Ganesh G, Jackson A, Nazarpour K. Artificial proprioceptive feedback for myoelectric control. IEEE Trans Neural Syst Rehabil Eng 2014; 23:498-507. [PMID: 25216484 PMCID: PMC7610977 DOI: 10.1109/tnsre.2014.2355856] [Citation(s) in RCA: 43] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
The typical control of myoelectric interfaces, whether in laboratory settings or real-life prosthetic applications, largely relies on visual feedback because proprioceptive signals from the controlling muscles are either not available or very noisy. We conducted a set of experiments to test whether artificial proprioceptive feedback, delivered non-invasively to another limb, can improve control of a two-dimensional myoelectrically-controlled computer interface. In these experiments, participants were required to reach a target with a visual cursor that was controlled by electromyogram signals recorded from muscles of the left hand, while they were provided with additional proprioceptive feedback on their right arm by moving it with a robotic manipulandum. Provision of additional artificial proprioceptive feedback improved the angular accuracy of their movements when compared to using visual feedback alone but did not increase the overall accuracy quantified with the average distance between the cursor and the target. The advantages conferred by proprioception were present only when the proprioceptive feedback had similar orientation to the visual feedback in the task space and not when it was mirrored, demonstrating the importance of congruency in feedback modalities for multi-sensory integration. Our results reveal the ability of the human motor system to learn new inter-limb sensory-motor associations; the motor system can utilize task-related sensory feedback, even when it is available on a limb distinct from the one being actuated. In addition, the proposed task structure provides a flexible test paradigm by which the effectiveness of various sensory feedback and multi-sensory integration for myoelectric prosthesis control can be evaluated.
Affiliation(s)
- Tobias Pistohl, Institute of Neuroscience, Newcastle University, UK; now with Bernstein Center Freiburg, University of Freiburg
- Deepak Joshi, Department of Electrical and Electronics Engineering, Graphic Era University, Dehradun - 248002, India
- Gowrishankar Ganesh, CNRS-AIST JRL (Joint Robotics Laboratory), UMI3218/CRT, Intelligent Systems Research Institute, Tsukuba, Japan 305-8568; Centre for Information and Neural Networks (CINET-NICT), Osaka, Japan 565-0871
- Andrew Jackson, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
|
31
|
Patterns across multiple memories are identified over time. Nat Neurosci 2014; 17:981-6. [PMID: 24880213 DOI: 10.1038/nn.3736] [Citation(s) in RCA: 96] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2014] [Accepted: 05/08/2014] [Indexed: 01/10/2023]
Abstract
Memories are not static but continue to be processed after encoding. This is thought to allow the integration of related episodes via the identification of patterns. Although this idea lies at the heart of contemporary theories of systems consolidation, it has yet to be demonstrated experimentally. Using a modified water-maze paradigm in which platforms are drawn stochastically from a spatial distribution, we found that mice were better at matching platform distributions 30 d compared to 1 d after training. Post-training time-dependent improvements in pattern matching were associated with increased sensitivity to new platforms that conflicted with the pattern. Increased sensitivity to pattern conflict was reduced by pharmacogenetic inhibition of the medial prefrontal cortex (mPFC). These results indicate that pattern identification occurs over time, which can lead to conflicts between new information and existing knowledge that must be resolved, in part, by computations carried out in the mPFC.
|
32
|
Tagliabue M, McIntyre J. A modular theory of multisensory integration for motor control. Front Comput Neurosci 2014; 8:1. [PMID: 24550816 PMCID: PMC3908447 DOI: 10.3389/fncom.2014.00001] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2013] [Accepted: 01/06/2014] [Indexed: 11/13/2022] Open
Abstract
To control targeted movements, such as reaching to grasp an object or hammering a nail, the brain can use diverse sources of sensory information, such as vision and proprioception. Although a variety of studies have shown that sensory signals are optimally combined according to principles of maximum likelihood, increasing evidence indicates that the CNS does not compute a single, optimal estimation of the target's position to be compared with a single optimal estimation of the hand. Rather, it employs a more modular approach in which the overall behavior is built by computing multiple concurrent comparisons carried out simultaneously in a number of different reference frames. The results of these individual comparisons are then optimally combined in order to drive the hand. In this article we examine at a computational level two formulations of concurrent models for sensory integration and compare these to the more conventional model of converging multi-sensory signals. Through a review of published studies, both our own and those performed by others, we produce evidence favoring the concurrent formulations. We then examine in detail the effects of additive signal noise as information flows through the sensorimotor system. By taking into account the noise added by sensorimotor transformations, one can explain why the CNS may shift its reliance on one sensory modality toward a greater reliance on another and investigate under what conditions those sensory transformations occur. Careful consideration of how transformed signals will co-vary with the original source also provides insight into how the CNS chooses one sensory modality over another. These concepts can be used to explain why the CNS might, for instance, create a visual representation of a task that is otherwise limited to the kinesthetic domain (e.g., pointing with one hand to a finger on the other) and why the CNS might choose to recode sensory information in an external reference frame.
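The argument about transformation noise can be illustrated numerically: recoding a signal into another reference frame adds variance, which shifts the minimum-variance weighting toward the untransformed cue (the variances below are invented for illustration, not the paper's model):

```python
def inv_var_weight(var_a, var_b):
    """Weight placed on cue A under minimum-variance combination."""
    return (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)

var_vision = 4.0     # visual estimate, already in the comparison frame
var_prop = 4.0       # proprioceptive estimate, in its native frame
var_transform = 6.0  # extra noise added by recoding proprioception

# In their native frames the two cues are equally reliable ...
w_equal = inv_var_weight(var_vision, var_prop)                   # 0.5
# ... but once proprioception must be transformed into the visual
# frame, its effective variance grows and vision dominates.
w_after = inv_var_weight(var_vision, var_prop + var_transform)   # ~0.71
```

This is the mechanism by which the choice of comparison frame, and hence which signals must be transformed, determines the apparent reliance on each modality.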
Affiliation(s)
- Michele Tagliabue, Centre d'Étude de la Sensorimotricité (CNRS UMR 8194), Institut des Neurosciences et de la Cognition, Université Paris Descartes, Sorbonne Paris Cité, Paris, France
- Joseph McIntyre, Centre d'Étude de la Sensorimotricité (CNRS UMR 8194), Institut des Neurosciences et de la Cognition, Université Paris Descartes, Sorbonne Paris Cité, Paris, France
|
33
|
Loeb GE, Fishel JA. Bayesian action & perception: representing the world in the brain. Front Neurosci 2014; 8:341. [PMID: 25400542 PMCID: PMC4214374 DOI: 10.3389/fnins.2014.00341] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2014] [Accepted: 10/08/2014] [Indexed: 11/23/2022] Open
Abstract
Theories of perception seek to explain how sensory data are processed to identify previously experienced objects, but they usually do not consider the decisions and effort that goes into acquiring the sensory data. Identification of objects according to their tactile properties requires active exploratory movements. The sensory data thereby obtained depend on the details of those movements, which human subjects change rapidly and seemingly capriciously. Bayesian Exploration is an algorithm that uses prior experience to decide which next exploratory movement should provide the most useful data to disambiguate the most likely possibilities. In previous studies, a simple robot equipped with a biomimetic tactile sensor and operated according to Bayesian Exploration performed in a manner similar to and actually better than humans on a texture identification task. Expanding on this, "Bayesian Action&Perception" refers to the construction and querying of an associative memory of previously experienced entities containing both sensory data and the motor programs that elicited them. We hypothesize that this memory can be queried (i) to identify useful next exploratory movements during identification of an unknown entity ("action for perception") or (ii) to characterize whether an unknown entity is fit for purpose ("perception for action") or (iii) to recall what actions might be feasible for a known entity (Gibsonian affordance). The biomimetic design of this mechatronic system may provide insights into the neuronal basis of biological action and perception.
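The selection rule in Bayesian Exploration, picking the next exploratory movement expected to best disambiguate the remaining candidates, can be sketched as choosing the action that minimizes expected posterior entropy. This is a toy discrete version; the movement names, textures, and confusion probabilities are all invented for illustration:

```python
import math

# Posterior over candidate textures after the measurements so far.
posterior = {"silk": 0.4, "denim": 0.35, "felt": 0.25}

# p(observation | texture, movement): how discriminative each movement is.
likelihood = {
    ("slide", "silk"):  {"smooth": 0.9, "rough": 0.1},
    ("slide", "denim"): {"smooth": 0.2, "rough": 0.8},
    ("slide", "felt"):  {"smooth": 0.5, "rough": 0.5},
    ("press", "silk"):  {"soft": 0.6, "hard": 0.4},
    ("press", "denim"): {"soft": 0.5, "hard": 0.5},
    ("press", "felt"):  {"soft": 0.7, "hard": 0.3},
}

def expected_entropy(movement):
    """Expected entropy (bits) of the posterior after `movement`."""
    obs_space = likelihood[(movement, "silk")].keys()
    total = 0.0
    for obs in obs_space:
        p_obs = sum(posterior[t] * likelihood[(movement, t)][obs]
                    for t in posterior)
        if p_obs == 0.0:
            continue
        # Entropy of the Bayes-updated posterior given this observation.
        ent = -sum(
            (posterior[t] * likelihood[(movement, t)][obs] / p_obs)
            * math.log2(posterior[t] * likelihood[(movement, t)][obs] / p_obs)
            for t in posterior
            if posterior[t] * likelihood[(movement, t)][obs] > 0.0
        )
        total += p_obs * ent
    return total

# The algorithm selects the movement expected to leave least uncertainty.
best = min(["slide", "press"], key=expected_entropy)
```

Here pressing is nearly uninformative (its likelihoods barely differ across textures), so the sliding movement is selected, mirroring how the robot in the cited studies chose discriminative exploratory movements.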
Affiliation(s)
- Gerald E. Loeb, SynTouch LLC, Los Angeles, CA, USA; Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA
|
34
|
Seilheimer RL, Rosenberg A, Angelaki DE. Models and processes of multisensory cue combination. Curr Opin Neurobiol 2013; 25:38-46. [PMID: 24709599 DOI: 10.1016/j.conb.2013.11.008] [Citation(s) in RCA: 67] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2013] [Revised: 09/26/2013] [Accepted: 11/18/2013] [Indexed: 01/13/2023]
Abstract
Fundamental to our perception of a unified and stable environment is the capacity to combine information across the senses. Although this process appears seamless as an adult, the brain's ability to successfully perform multisensory cue combination takes years to develop and relies on a number of complex processes including cue integration, cue calibration, causal inference, and reference frame transformations. Further complexities exist because multisensory cue combination is implemented across time by populations of noisy neurons. In this review, we discuss recent behavioral studies exploring how the brain combines information from different sensory systems, neurophysiological studies relating behavior to neuronal activity, and a theory of neural sensory encoding that can account for many of these experimental findings.
Affiliation(s)
- Ari Rosenberg, Baylor College of Medicine, Houston, TX, United States
|