101
Noel JP, Blanke O, Serino A. From multisensory integration in peripersonal space to bodily self-consciousness: from statistical regularities to statistical inference. Ann N Y Acad Sci 2018; 1426:146-165. [PMID: 29876922] [DOI: 10.1111/nyas.13867]
Abstract
Integrating information across sensory systems is a critical step toward building a cohesive representation of the environment and one's body and, as illustrated by numerous illusions, scaffolds the subjective experience of the world and self. In recent years, classic principles of multisensory integration elucidated in the subcortex have been translated into the language of statistical inference understood by the neocortical mantle. Most importantly, a mechanistic systems-level description of multisensory computations via probabilistic population coding and divisive normalization is actively being put forward. In parallel, by describing and understanding bodily illusions, researchers have proposed multisensory integration of bodily inputs within the peripersonal space as a key mechanism in bodily self-consciousness. Importantly, certain aspects of bodily self-consciousness, although still very much a minority, have recently been cast in the light of modern computational understandings of multisensory integration. In doing so, we argue, the field of bodily self-consciousness may borrow mechanistic descriptions of the neural implementation of inference computations outlined by the multisensory field. This computational approach, which builds on the general understanding of multisensory processes, promises to advance scientific comprehension of one of the most mysterious questions puzzling humankind: how our brain creates the experience of a self in interaction with the environment.
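The divisive normalization invoked here has a compact standard form. As a hedged illustration (our notation, not the authors' model), the response of multisensory neuron i receiving unisensory drives $I_1$ and $I_2$ can be written as

$$R_i = \frac{w_{i1} I_1 + w_{i2} I_2}{\alpha + \frac{1}{N}\sum_{j=1}^{N}\left(w_{j1} I_1 + w_{j2} I_2\right)},$$

where the $w$ terms are synaptic weights, $\alpha$ is a semisaturation constant, and the denominator pools activity across the population of N neurons; this single divisive operation reproduces classic multisensory principles such as inverse effectiveness and cross-modal suppression.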
Affiliation(s)
- Jean-Paul Noel
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee
- Olaf Blanke
- Laboratory of Cognitive Neuroscience (LNCO), Center for Neuroprosthetics (CNP), Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland
- Department of Neurology, University of Geneva, Geneva, Switzerland
- Andrea Serino
- MySpace Lab, Department of Clinical Neuroscience, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Lausanne, Switzerland
102
Dockheer KM, Bockisch CJ, Tarnutzer AA. Effects of Optokinetic Stimulation on Verticality Perception Are Much Larger for Vision-Based Paradigms Than for Vision-Independent Paradigms. Front Neurol 2018; 9:323. [PMID: 29867732] [PMCID: PMC5954029] [DOI: 10.3389/fneur.2018.00323]
Abstract
Introduction: Verticality perception, as assessed by the subjective visual vertical (SVV), is significantly biased by a rotating optokinetic stimulus. The mechanisms underlying this effect remain unclear. Potentially, the optokinetic stimulus induces a shift of the internal estimate of the direction of gravity; this hypothesis predicts a shift of perceived vertical in other, non-vision-dependent paradigms as well. Alternatively, an optokinetic stimulus may induce only a shift of visual orientation, in which case the effect would be task specific.
Methods: To test this prediction, both a vision-dependent paradigm (SVV) and a vision-independent paradigm [subjective haptic vertical (SHV)] were applied. In 12 healthy human subjects, perceived vertical was measured in different whole-body roll positions (up to ±120°, steps = 30°) while the subjects watched a clockwise or counterclockwise rotating optokinetic stimulus. For comparison, baseline trials were collected in darkness. A generalized linear model was applied for statistical analysis.
Results: A significant main effect of optokinetic stimulation was noted both for the SVV paradigm (p < 0.001) and the SHV paradigm (p = 0.013). However, while pairwise comparisons for the SVV demonstrated significant optokinetic-induced shifts (p ≤ 0.035) relative to baseline in all roll-tilted orientations except the 30° and 60° left-ear-down positions with counterclockwise optokinetic stimulation, for the SHV significant shifts were found in only 1 of the 18 test conditions (120° left-ear-down roll orientation, counterclockwise optokinetic stimulation). Compared with the SHV, the SVV showed significantly (p < 0.001) larger shifts of perceived vertical for a clockwise (15.3 ± 16.0° vs. 1.1 ± 5.2°, mean ± 1 SD) or counterclockwise (−12.6 ± 7.7° vs. −2.6 ± 5.4°) rotating optokinetic stimulus.
Conclusion: Comparing the effect of optokinetic stimulation on verticality perception in vision-dependent and vision-independent paradigms, we found distinct patterns. While large, roll-angle-dependent shifts were noted for the SVV, offsets for the SHV were minor and reached significance in only one test condition. These results suggest that optokinetic stimulation predominantly affects vision-related mechanisms, possibly via induced torsional eye displacements, and that any shift of the internal estimate of the direction of gravity is relatively minor.
Affiliation(s)
- Katja M Dockheer
- Department of Neurology, University Hospital Zurich, Zurich, Switzerland
- Christopher J Bockisch
- Department of Neurology, University Hospital Zurich, Zurich, Switzerland
- Department of Otorhinolaryngology, University Hospital Zurich, Zurich, Switzerland
- Department of Ophthalmology, University Hospital Zurich, Zurich, Switzerland
- University of Zurich, Zurich, Switzerland
- Alexander A Tarnutzer
- Department of Neurology, University Hospital Zurich, Zurich, Switzerland
- University of Zurich, Zurich, Switzerland
103
Vélez-Fort M, Bracey EF, Keshavarzi S, Rousseau CV, Cossell L, Lenzi SC, Strom M, Margrie TW. A Circuit for Integration of Head- and Visual-Motion Signals in Layer 6 of Mouse Primary Visual Cortex. Neuron 2018; 98:179-191.e6. [PMID: 29551490] [PMCID: PMC5896233] [DOI: 10.1016/j.neuron.2018.02.023]
Abstract
To interpret visual-motion events, the underlying computation must involve internal reference to the motion status of the observer's head. We show here that layer 6 (L6) principal neurons in mouse primary visual cortex (V1) receive a diffuse, vestibular-mediated synaptic input that signals the angular velocity of horizontal rotation. Behavioral and theoretical experiments indicate that these inputs, distributed over a network of 100 L6 neurons, provide both a reliable estimate and, therefore, physiological separation of head-velocity signals. During head rotation in the presence of visual stimuli, L6 neurons exhibit postsynaptic responses that approximate the arithmetic sum of the vestibular and visual-motion response. Functional input mapping reveals that these internal motion signals arrive into L6 via a direct projection from the retrosplenial cortex. We therefore propose that visual-motion processing in V1 L6 is multisensory and contextually dependent on the motion status of the animal's head.
Affiliation(s)
- Mateo Vélez-Fort
- The Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, 25 Howland Street, London W1T 4JG, UK
- Edward F Bracey
- The Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, 25 Howland Street, London W1T 4JG, UK
- Sepiedeh Keshavarzi
- The Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, 25 Howland Street, London W1T 4JG, UK
- Charly V Rousseau
- The Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, 25 Howland Street, London W1T 4JG, UK
- Lee Cossell
- The Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, 25 Howland Street, London W1T 4JG, UK
- Stephen C Lenzi
- The Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, 25 Howland Street, London W1T 4JG, UK
- Molly Strom
- The Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, 25 Howland Street, London W1T 4JG, UK
- Troy W Margrie
- The Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, 25 Howland Street, London W1T 4JG, UK
104
Simon DM, Wallace MT. Integration and Temporal Processing of Asynchronous Audiovisual Speech. J Cogn Neurosci 2018; 30:319-337. [DOI: 10.1162/jocn_a_01205]
Abstract
Multisensory integration of visual mouth movements with auditory speech is known to offer substantial perceptual benefits, particularly under challenging (i.e., noisy) acoustic conditions. Previous work characterizing this process has found that ERPs to auditory speech are of shorter latency and smaller magnitude in the presence of visual speech. We sought to determine the dependency of these effects on the temporal relationship between the auditory and visual speech streams using EEG. We found that reductions in ERP latency and suppression of ERP amplitude are maximal when the visual signal precedes the auditory signal by a small interval and that increasing amounts of asynchrony reduce these effects in a continuous manner. Time–frequency analysis revealed that these effects are found primarily in the theta (4–8 Hz) and alpha (8–12 Hz) bands, with a central topography consistent with auditory generators. Theta effects also persisted in the lower portion of the band (3.5–5 Hz), and this late activity was more frontally distributed. Importantly, the magnitude of these late theta oscillations not only differed with the temporal characteristics of the stimuli but also served to predict participants' task performance. Our analysis thus reveals that suppression of single-trial brain responses by visual speech depends strongly on the temporal concordance of the auditory and visual inputs. It further illustrates that processes in the lower theta band, which we suggest as an index of incongruity processing, might serve to reflect the neural correlates of individual differences in multisensory temporal perception.
105
Gallagher M, Ferrè ER. The aesthetics of verticality: A gravitational contribution to aesthetic preference. Q J Exp Psychol (Hove) 2018; 71:2655-2664. [DOI: 10.1177/1747021817751353]
Abstract
Verticality plays a fundamental role in the arts, portraying concepts such as power, grandeur, or even morality; however, it is unclear whether people have an aesthetic preference for vertical stimuli. The perception of verticality occurs by integrating vestibular-gravitational input with proprioceptive signals about body posture. Thus, these signals may influence the preference for verticality. Here, we show that people have a genuine aesthetic preference for stimuli aligned with the vertical, and this preference depends on the position of the body relative to the gravitational direction. Observers rated the attractiveness of lines that varied in inclination. Perfectly vertical lines were judged to be more attractive than those inclined clockwise or anticlockwise only when participants held an upright posture. Critically, this preference was not present when their body was tilted away from the gravitational vertical. Our results showed that gravitational signals make a contribution to the perception of attractiveness of environmental objects.
Affiliation(s)
- Maria Gallagher
- Department of Psychology, Royal Holloway, University of London, Egham, UK
106
Peripheral and central determinants of skin wetness sensing in humans. Handb Clin Neurol 2018; 156:83-102. [PMID: 30454611] [DOI: 10.1016/b978-0-444-63912-7.00005-9]
Abstract
Evolutionarily, our ability to sense skin wetness and humidity (i.e., hygroreception) could have developed as a way to help maintain thermal homeostasis, much as is the case for temperature sensation and thermoreception. Humans are not equipped with a specific skin hygroreceptor, and recent studies have indicated that skin wetness is likely to be centrally processed as a result of the multisensory integration of peripheral inputs from skin thermoreceptors and mechanoreceptors coding the biophysical interactions between skin and moisture. The existence of a specific hygrosensation strategy for human wetness perception has been proposed, and the first neurophysiologic model of skin wetness sensing has recently been developed. However, while these recent findings have shed light on some of the peripheral and central neural mechanisms underlying wetness sensing, our understanding of how the brain processes the thermal and mechanical inputs that give rise to one of our "most worn" skin sensory experiences is still far from conclusive. Understanding these neural mechanisms is clinically relevant in the context of neurologic conditions that are accompanied by somatosensory abnormalities. The present chapter presents the current knowledge on the peripheral and central determinants of skin wetness sensing in humans.
107
Magnotti JF, Basu Mallick D, Beauchamp MS. Reducing Playback Rate of Audiovisual Speech Leads to a Surprising Decrease in the McGurk Effect. Multisens Res 2018; 31:19-38. [DOI: 10.1163/22134808-00002586]
Abstract
We report the unexpected finding that slowing video playback decreases perception of the McGurk effect. This reduction is counter-intuitive because the illusion depends on visual speech influencing the perception of auditory speech, and slowing speech should increase the amount of visual information available to observers. We recorded perceptual data from 110 subjects viewing audiovisual syllables (either McGurk or congruent control stimuli) played back at one of three rates: the rate used by the talker during recording (the natural rate), a slow rate (50% of natural), or a fast rate (200% of natural). We replicated previous studies showing dramatic variability in McGurk susceptibility at the natural rate, ranging from 0% to 100% across subjects and from 26% to 76% across the eight McGurk stimuli tested. Relative to the natural rate, slowed playback reduced the frequency of McGurk responses by 11% (79% of subjects showed a reduction) and reduced congruent accuracy by 3% (25% of subjects showed a reduction). Fast playback rate had little effect on McGurk responses or congruent accuracy. To determine whether our results are consistent with Bayesian integration, we constructed a Bayes-optimal model that incorporated two assumptions: individuals combine auditory and visual information according to their reliability, and changing playback rate affects sensory reliability. The model reproduced both our findings of large individual differences and the playback rate effect. This work illustrates that surprises remain in the McGurk effect and that Bayesian integration provides a useful framework for understanding audiovisual speech perception.
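The Bayes-optimal model described above weights each cue by its reliability. The sketch below is a minimal illustration under our own assumptions (a one-dimensional perceptual axis, Gaussian cues, toy parameters, and a hypothesized mapping from slowed playback to reduced visual reliability); it is not the authors' fitted model.

    import numpy as np
    from scipy.stats import norm

    def p_mcgurk(mu_a, mu_v, var_a, var_v, boundary=0.5):
        """P(illusory percept) from reliability-weighted audiovisual fusion.

        Cues lie on a 1-D perceptual axis (0 = auditory category, 1 = visual
        category). The fused estimate is the precision-weighted mean; a McGurk
        response occurs when it crosses the category boundary.
        """
        w_a = (1 / var_a) / (1 / var_a + 1 / var_v)  # auditory weight ~ reliability
        fused_mean = w_a * mu_a + (1 - w_a) * mu_v
        fused_sd = np.sqrt(var_a * var_v / (var_a + var_v))
        return norm.sf(boundary, loc=fused_mean, scale=fused_sd)

    # If slowed playback mainly degrades visual reliability (an assumption),
    # the predicted frequency of McGurk responses drops:
    print(p_mcgurk(0.2, 0.9, var_a=0.15, var_v=0.05))  # natural rate: ~0.88
    print(p_mcgurk(0.2, 0.9, var_a=0.15, var_v=0.30))  # slowed video: ~0.42

Varying mu_a, mu_v, var_a, and var_v per subject and per stimulus reproduces, qualitatively, the large individual differences the study reports.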
Affiliation(s)
- John F. Magnotti
- Department of Neurosurgery and Core for Advanced MRI, Baylor College of Medicine, Houston, TX, USA
- Michael S. Beauchamp
- Department of Neurosurgery and Core for Advanced MRI, Baylor College of Medicine, Houston, TX, USA
108
Billino J, Drewing K. Age Effects on Visuo-Haptic Length Discrimination: Evidence for Optimal Integration of Senses in Senior Adults. Multisens Res 2018; 31:273-300. [PMID: 31264626] [DOI: 10.1163/22134808-00002601]
Abstract
Demographic changes in most developed societies have fostered research on functional aging. While cognitive changes have been characterized elaborately, our understanding of perceptual aging lags behind. We investigated age effects on the mechanisms by which multiple sources of sensory information are merged into a common percept. We studied visuo-haptic integration in a length discrimination task. A total of 24 young (20-25 years) and 27 senior (69-77 years) adults compared standard stimuli to appropriate sets of comparison stimuli. Standard stimuli were explored under visual, haptic, or visuo-haptic conditions. The task procedure allowed an intersensory conflict to be introduced by anamorphic lenses. Comparison stimuli were explored exclusively haptically. We derived psychometric functions for each condition, determining points of subjective equality and discrimination thresholds. We notably evaluated visuo-haptic perception against different models of multisensory processing, i.e., the Maximum-Likelihood-Estimate (MLE) model of optimal cue integration, a suboptimal integration model, and a cue-switching model. Our results support robust visuo-haptic integration across the adult lifespan. We found suboptimal weighted averaging of sensory sources in young adults; senior adults, however, exploited differential sensory reliabilities more efficiently to optimize thresholds. Indeed, evaluation of the MLE model indicates that young adults underweighted visual cues by more than 30%; in contrast, visual weights of senior adults deviated by only about 3% from predictions. We suggest that close-to-optimal multisensory integration might contribute to successful compensation for age-related sensory losses and provides a critical resource. Differentiating multisensory integration during healthy aging from that under age-related pathological challenges to the sensory systems awaits further exploration.
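For reference, the Maximum-Likelihood-Estimate model evaluated here has a standard form (our notation): the bimodal estimate is a reliability-weighted average of the unimodal estimates,

$$\hat{s}_{VH} = w_V\,\hat{s}_V + w_H\,\hat{s}_H, \qquad w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_H^2}, \qquad w_H = 1 - w_V,$$

with predicted bimodal variance

$$\sigma_{VH}^2 = \frac{\sigma_V^2\,\sigma_H^2}{\sigma_V^2 + \sigma_H^2} \le \min\left(\sigma_V^2, \sigma_H^2\right),$$

so optimal integration can only lower discrimination thresholds; empirical weights that deviate from $w_V$ (e.g., the >30% visual underweighting reported for the young adults) indicate suboptimal integration.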
Affiliation(s)
- Jutta Billino
- Department of Psychology, Justus-Liebig-Universität, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany
- Knut Drewing
- Department of Psychology, Justus-Liebig-Universität, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany
109
Schubert JTW, Badde S, Röder B, Heed T. Task demands affect spatial reference frame weighting during tactile localization in sighted and congenitally blind adults. PLoS One 2017; 12:e0189067. [PMID: 29228023] [PMCID: PMC5724835] [DOI: 10.1371/journal.pone.0189067]
Abstract
Task demands modulate tactile localization in sighted humans, presumably through weight adjustments in the spatial integration of anatomical, skin-based, and external, posture-based information. In contrast, previous studies have suggested that congenitally blind humans, by default, refrain from automatic spatial integration and localize touch using only skin-based information. Here, sighted and congenitally blind participants localized tactile targets on the palm or back of one hand, while ignoring simultaneous tactile distractors at congruent or incongruent locations on the other hand. We probed the interplay of anatomical and external location codes for spatial congruency effects by varying hand posture: the palms either both faced down, or one faced down and one up. In the latter posture, externally congruent target and distractor locations were anatomically incongruent and vice versa. Target locations had to be reported either anatomically (“palm” or “back” of the hand), or externally (“up” or “down” in space). Under anatomical instructions, performance was more accurate for anatomically congruent than incongruent target-distractor pairs. In contrast, under external instructions, performance was more accurate for externally congruent than incongruent pairs. These modulations were evident in sighted and blind individuals. Notably, distractor effects were overall far smaller in blind than in sighted participants, despite comparable target-distractor identification performance. Thus, the absence of developmental vision seems to be associated with an increased ability to focus tactile attention towards a non-spatially defined target. Nevertheless, that blind individuals exhibited effects of hand posture and task instructions in their congruency effects suggests that, like the sighted, they automatically integrate anatomical and external information during tactile localization. Moreover, spatial integration in tactile processing is, thus, flexibly adapted by top-down information—here, task instruction—even in the absence of developmental vision.
Affiliation(s)
- Jonathan T. W. Schubert
- Biological Psychology and Neuropsychology, Faculty of Psychology and Human Movement Science, University of Hamburg, Hamburg, Germany
- Stephanie Badde
- Biological Psychology and Neuropsychology, Faculty of Psychology and Human Movement Science, University of Hamburg, Hamburg, Germany
- Department of Psychology, New York University, New York, United States of America
- Brigitte Röder
- Biological Psychology and Neuropsychology, Faculty of Psychology and Human Movement Science, University of Hamburg, Hamburg, Germany
- Tobias Heed
- Biological Psychology and Neuropsychology, Faculty of Psychology and Human Movement Science, University of Hamburg, Hamburg, Germany
- Biopsychology & Cognitive Neuroscience, Faculty of Psychology & Sports Science, Bielefeld University, Bielefeld, Germany
- Center of Excellence in Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
110
Boyle SC, Kayser SJ, Kayser C. Neural correlates of multisensory reliability and perceptual weights emerge at early latencies during audio-visual integration. Eur J Neurosci 2017; 46:2565-2577. [PMID: 28940728] [PMCID: PMC5725738] [DOI: 10.1111/ejn.13724]
Abstract
To make accurate perceptual estimates, observers must take the reliability of sensory information into account. Despite many behavioural studies showing that subjects weight individual sensory cues in proportion to their reliabilities, it is still unclear when during a trial neuronal responses are modulated by the reliability of sensory information or when they reflect the perceptual weights attributed to each sensory input. We investigated these questions using a combination of psychophysics, EEG‐based neuroimaging and single‐trial decoding. Our results show that the weighted integration of sensory information in the brain is a dynamic process; effects of sensory reliability on task‐relevant EEG components were evident 84 ms after stimulus onset, while neural correlates of perceptual weights emerged 120 ms after stimulus onset. These neural processes had different underlying sources, arising from sensory and parietal regions, respectively. Together these results reveal the temporal dynamics of perceptual and neural audio‐visual integration and support the notion of temporally early and functionally specific multisensory processes in the brain.
Affiliation(s)
- Stephanie C Boyle
- Institute of Neuroscience and Psychology, University of Glasgow, Hillhead Street 58, Glasgow, G12 8QB, UK
- Stephanie J Kayser
- Institute of Neuroscience and Psychology, University of Glasgow, Hillhead Street 58, Glasgow, G12 8QB, UK
- Christoph Kayser
- Institute of Neuroscience and Psychology, University of Glasgow, Hillhead Street 58, Glasgow, G12 8QB, UK
111
Multisensory integration in orienting behavior: Pupil size, microsaccades, and saccades. Biol Psychol 2017; 129:36-44. [DOI: 10.1016/j.biopsycho.2017.07.024]
112
Lavan N, McGettigan C. Increased Discriminability of Authenticity from Multimodal Laughter is Driven by Auditory Information. Q J Exp Psychol (Hove) 2017; 70:2159-2168. [DOI: 10.1080/17470218.2016.1226370]
Abstract
We present an investigation of the perception of authenticity in audiovisual laughter, in which we contrast spontaneous and volitional samples and examine the contributions of unimodal affective information to multimodal percepts. In a pilot study, we demonstrate that listeners perceive spontaneous laughs as more authentic than volitional ones, both in unimodal (audio-only, visual-only) and multimodal contexts (audiovisual). In the main experiment, we show that the discriminability of volitional and spontaneous laughter is enhanced for multimodal laughter. Analyses of relationships between affective ratings and the perception of authenticity show that, while both unimodal percepts significantly predict evaluations of audiovisual laughter, it is auditory affective cues that have the greater influence on multimodal percepts. We discuss differences and potential mismatches in emotion signalling through voices and faces, in the context of spontaneous and volitional behaviour, and highlight issues that should be addressed in future studies of dynamic multimodal emotion processing.
Affiliation(s)
- Nadine Lavan
- Department of Psychology, Royal Holloway, University of London, Egham, UK
- Carolyn McGettigan
- Department of Psychology, Royal Holloway, University of London, Egham, UK
- Institute of Cognitive Neuroscience, University College London, London, UK
113
Petzschner FH, Weber LAE, Gard T, Stephan KE. Computational Psychosomatics and Computational Psychiatry: Toward a Joint Framework for Differential Diagnosis. Biol Psychiatry 2017; 82:421-430. [PMID: 28619481] [DOI: 10.1016/j.biopsych.2017.05.012]
Abstract
This article outlines how a core concept from theories of homeostasis and cybernetics, the inference-control loop, may be used to guide differential diagnosis in computational psychiatry and computational psychosomatics. In particular, we discuss 1) how conceptualizing perception and action as inference-control loops yields a joint computational perspective on brain-world and brain-body interactions and 2) how the concrete formulation of this loop as a hierarchical Bayesian model points to key computational quantities that inform a taxonomy of potential disease mechanisms. We consider the utility of this perspective for differential diagnosis in concrete clinical applications.
Affiliation(s)
- Frederike H Petzschner
- Translational Neuromodeling Unit, Institute for Biomedical Engineering, University of Zurich and Swiss Federal Institute of Technology Zurich, Zurich, Switzerland
- Lilian A E Weber
- Translational Neuromodeling Unit, Institute for Biomedical Engineering, University of Zurich and Swiss Federal Institute of Technology Zurich, Zurich, Switzerland
- Tim Gard
- Translational Neuromodeling Unit, Institute for Biomedical Engineering, University of Zurich and Swiss Federal Institute of Technology Zurich, Zurich, Switzerland; Center for Complementary and Integrative Medicine, University Hospital Zurich, Zurich, Switzerland
- Klaas E Stephan
- Translational Neuromodeling Unit, Institute for Biomedical Engineering, University of Zurich and Swiss Federal Institute of Technology Zurich, Zurich, Switzerland; Max Planck Institute for Metabolism Research, Cologne, Germany; Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom.
114
Gimmon Y, Millar J, Pak R, Liu E, Schubert MC. Central not peripheral vestibular processing impairs gait coordination. Exp Brain Res 2017; 235:3345-3355. [PMID: 28819687] [DOI: 10.1007/s00221-017-5061-x]
Abstract
Gait coordination is generated by neuronal inter-connections between central pattern generators in the spinal cord governed by cortical areas. Malfunction of central vestibular processing areas generates vestibular symptoms in the absence of an identifiable peripheral vestibular system lesion. Walking in the dark enforces coordinated afference primarily from the vestibular and somatosensory systems. We hypothesized that patients with aberrant central vestibular processing would demonstrate unique gait characteristics and have impaired gait coordination compared with patients with abnormal peripheral vestibular function and with healthy controls. One hundred and eighteen subjects were recruited. Peripheral vestibular function was determined based on laboratory and clinical examinations. Patients with abnormal central vestibular processing had normal peripheral vestibular function. Subjects were instructed to walk at a comfortable pace during three visual conditions: eyes open, eyes open and closed intermittently, and eyes closed. Both patient groups showed a similar spatiotemporal gait pattern, significantly different from that of the healthy controls. However, only the central vestibular patient group had abnormal coordination of gait as measured by the phase coordination index (PCI). There were no significant interactions between the groups and walking conditions. Peripheral vestibular deficits impair gait, though our data suggest that it is the central processing of such peripheral vestibular information that has the greater influence. This impairment may be related to a neural uncoupling between the brain and the central pattern generators of the spinal cord based on the abnormal PCI, which appears to be a good indicator of the integrity of this linkage.
Affiliation(s)
- Yoav Gimmon
- Laboratory of Vestibular NeuroAdaptation, Department of Otolaryngology - Head and Neck Surgery, Johns Hopkins University School of Medicine, 601 N. Caroline Street, 6th Floor, Baltimore, MD, 21287-0910, USA
- Jennifer Millar
- Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Rebecca Pak
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Elizabeth Liu
- Laboratory of Vestibular NeuroAdaptation, Department of Otolaryngology - Head and Neck Surgery, Johns Hopkins University School of Medicine, 601 N. Caroline Street, 6th Floor, Baltimore, MD, 21287-0910, USA
- Michael C Schubert
- Laboratory of Vestibular NeuroAdaptation, Department of Otolaryngology - Head and Neck Surgery, Johns Hopkins University School of Medicine, 601 N. Caroline Street, 6th Floor, Baltimore, MD, 21287-0910, USA
- Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD, USA
115
Dynamic Multisensory Integration: Somatosensory Speed Trumps Visual Accuracy during Feedback Control. J Neurosci 2017; 36:8598-8611. [PMID: 27535908] [DOI: 10.1523/jneurosci.0184-16.2016]
Abstract
Recent advances in movement neuroscience have consistently highlighted that the nervous system performs sophisticated feedback control over very short time scales (<100 ms for the upper limb). These observations raise the important question of how the nervous system processes multiple sources of sensory feedback in such short time intervals, given that temporal delays across sensory systems such as vision and proprioception differ by tens of milliseconds. Here we show that during feedback control, healthy humans use dynamic estimates of hand motion that rely almost exclusively on limb afferent feedback even when visual information about limb motion is available. We demonstrate that such reliance on the fastest sensory signal during movement is compatible with dynamic Bayesian estimation. These results suggest that the nervous system considers not only sensory variances but also temporal delays to perform optimal multisensory integration and feedback control in real time.
SIGNIFICANCE STATEMENT: Numerous studies have demonstrated that the nervous system combines redundant sensory signals according to their reliability. Although very powerful, this model does not consider how temporal delays may impact sensory reliability, which is an important issue for feedback control because different sensory systems are affected by different temporal delays. Here we show that the brain considers not only sensory variability but also temporal delays when integrating vision and proprioception following mechanical perturbations applied to the upper limb. Compatible with dynamic Bayesian estimation, our results unravel the importance of proprioception for feedback control as a consequence of the shorter temporal delays associated with this sensory modality.
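As a hedged illustration of why the fastest signal can dominate under dynamic Bayesian estimation (toy parameters of our own, not the study's model): while a delayed measurement is in transit, process noise accumulates, inflating that sensor's effective variance and deflating its weight.

    q = 1.0           # process noise accrued per 10-ms step (assumed)
    var_prop = 2.0    # proprioceptive noise; feedback delay ~30 ms (3 steps)
    var_vis = 1.0     # visual noise (more accurate); delay ~90 ms (9 steps)
    d_prop, d_vis = 3, 9

    # Effective variance = sensor noise + uncertainty accumulated over the delay
    eff_prop = var_prop + d_prop * q
    eff_vis = var_vis + d_vis * q

    w_prop = (1 / eff_prop) / (1 / eff_prop + 1 / eff_vis)
    print(f"proprioceptive weight: {w_prop:.2f}")  # ~0.67: the faster cue
    # dominates even though vision is the more accurate sensor in a static setting

A full treatment would use a Kalman filter with state augmentation for the delays; the point of the sketch is only that delay-dependent extrapolation uncertainty shifts optimal weights toward the faster modality.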
116
Churan J, Paul J, Klingenhoefer S, Bremmer F. Integration of visual and tactile information in reproduction of traveled distance. J Neurophysiol 2017; 118:1650-1663. [PMID: 28659463] [DOI: 10.1152/jn.00342.2017]
Abstract
In the natural world, self-motion always stimulates several different sensory modalities. Here we investigated the interplay between a visual optic flow stimulus simulating self-motion and a tactile stimulus (air flow resulting from self-motion) while human observers were engaged in a distance reproduction task. We found that adding congruent tactile information (i.e., speed of the air flow and speed of visual motion are directly proportional) to the visual information significantly improves the precision of the actively reproduced distances. This improvement, however, was smaller than predicted for an optimal integration of visual and tactile information. In contrast, incongruent tactile information (i.e., speed of the air flow and speed of visual motion are inversely proportional) did not improve subjects' precision, indicating that incongruent tactile information and visual information were not integrated. One possible interpretation of the results is a link to properties of neurons in the ventral intraparietal area that have been shown to have spatially and action-congruent receptive fields for visual and tactile stimuli.
NEW & NOTEWORTHY: This study shows that tactile and visual information can be integrated to improve the estimates of the parameters of self-motion. This, however, happens only if the two sources of information are congruent, as they are in a natural environment. In contrast, an incongruent tactile stimulus is still used as a source of information about self-motion, but it is not integrated with visual information.
Affiliation(s)
- Jan Churan
- Department of Neurophysics, Marburg University, Marburg, Germany
- Johannes Paul
- Department of Neurophysics, Marburg University, Marburg, Germany
- Steffen Klingenhoefer
- Department of Neurophysics, Marburg University, Marburg, Germany
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, New Jersey
- Frank Bremmer
- Department of Neurophysics, Marburg University, Marburg, Germany
117
Social Information Is Integrated into Value and Confidence Judgments According to Its Reliability. J Neurosci 2017; 37:6066-6074. [PMID: 28566360] [PMCID: PMC5481942] [DOI: 10.1523/jneurosci.3880-16.2017]
Abstract
How much we like something, whether it be a bottle of wine or a new film, is affected by the opinions of others. However, the social information that we receive can be contradictory and vary in its reliability. Here, we tested whether the brain incorporates these statistics when judging value and confidence. Participants provided value judgments about consumer goods in the presence of online reviews. We found that participants updated their initial value and confidence judgments in a Bayesian fashion, taking into account both the uncertainty of their initial beliefs and the reliability of the social information. Activity in dorsomedial prefrontal cortex tracked the degree of belief update. Analogous to how lower-level perceptual information is integrated, we found that the human brain integrates social information according to its reliability when judging value and confidence.
SIGNIFICANCE STATEMENT: The field of perceptual decision making has shown that the sensory system integrates different sources of information according to their respective reliability, as predicted by a Bayesian inference scheme. In this work, we hypothesized that a similar coding scheme is implemented by the human brain to process social signals and guide complex, value-based decisions. We provide experimental evidence that the human prefrontal cortex's activity is consistent with a Bayesian computation that integrates social information that differs in reliability and that this integration affects the neural representation of value and confidence.
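The update described here has a standard Gaussian form, given as a hedged sketch in our notation: with an initial belief $\mathcal{N}(\mu_0, 1/\pi_0)$ and social evidence $\mathcal{N}(\mu_s, 1/\pi_s)$, where $\pi = 1/\sigma^2$ denotes precision,

$$\mu_{\text{post}} = \frac{\pi_0\,\mu_0 + \pi_s\,\mu_s}{\pi_0 + \pi_s}, \qquad \pi_{\text{post}} = \pi_0 + \pi_s,$$

so the belief shift grows with the reliability $\pi_s$ of the reviews and shrinks with the certainty $\pi_0$ of the initial judgment, while confidence, read as the posterior precision $\pi_{\text{post}}$, increases more after reliable social input.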
118
Crevecoeur F, Barrea A, Libouton X, Thonnard JL, Lefèvre P. Multisensory components of rapid motor responses to fingertip loading. J Neurophysiol 2017; 118:331-343. [PMID: 28468992] [DOI: 10.1152/jn.00091.2017]
Abstract
Tactile and muscle afferents provide critical sensory information for grasp control, yet the contribution of each sensory system during online control has not been clearly identified. More precisely, it is unknown how these two sensory systems participate in online control of digit forces following perturbations to held objects. To address this issue, we investigated motor responses in the context of fingertip loading, which parallels the impact of perturbations to held objects on finger motion and fingerpad deformation, and characterized surface recordings of intrinsic (first dorsal interosseous, FDI) and extrinsic (flexor digitorum superficialis, FDS) hand muscles based on statistical modeling. We designed a series of experiments probing the effects of peripheral stimulation with or without anesthesia of the finger, and of task instructions. Loading of the fingertip generated a motor response in FDI at ~60 ms following the perturbation onset, which was only driven by muscle stretch, as the ring-block anesthesia reduced the gain of the response occurring later than 90 ms, leaving responses occurring before this time unaffected. In contrast, the motor response in FDS was independent of the lateral motion of the finger. This response started at ~90 ms on average and was immediately adjusted to task demands. Altogether these results highlight how a rapid integration of partially distinct sensorimotor circuits supports rapid motor responses to fingertip loading.
NEW & NOTEWORTHY: To grasp and manipulate objects, the brain uses touch signals related to skin deformation as well as sensory information about motion of the fingers encoded in muscle spindles. Here we investigated how these two sensory systems contribute to feedback responses to perturbation applied to the fingertip. We found distinct response components, suggesting that each sensory system engages separate sensorimotor circuits with distinct functions and latencies.
Affiliation(s)
- F Crevecoeur
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- A Barrea
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- X Libouton
- Cliniques Universitaires Saint-Luc, Université catholique de Louvain, Louvain-la-Neuve, Belgium
- J-L Thonnard
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Physical and Rehabilitation Medicine Department, Cliniques Universitaires Saint-Luc, Université catholique de Louvain, Louvain-la-Neuve, Belgium
- P Lefèvre
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-la-Neuve, Belgium
119
Crevecoeur F, Kording KP. Saccadic suppression as a perceptual consequence of efficient sensorimotor estimation. eLife 2017; 6:e25073. [PMID: 28463113] [PMCID: PMC5449188] [DOI: 10.7554/elife.25073]
Abstract
Humans perform saccadic eye movements two to three times per second. When doing so, the nervous system strongly suppresses sensory feedback for periods that are long relative to the movement time. Why does the brain discard so much visual information? Here we suggest that perceptual suppression may arise from efficient sensorimotor computations, assuming that perception and control are fundamentally linked. More precisely, we show theoretically that a Bayesian estimator should reduce the weight of sensory information around the time of saccades, as a result of signal-dependent noise and of sensorimotor delays. Such reduction parallels the behavioral suppression occurring prior to and during saccades, and the reduction in neural responses to visual stimuli observed across the visual hierarchy. We suggest that saccadic suppression originates from efficient sensorimotor processing, indicating that the brain shares neural resources for perception and control.
Affiliation(s)
- Frédéric Crevecoeur
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics, Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Institute of Neuroscience, Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Konrad P Kording
- Rehabilitation Institute of Chicago, Northwestern University, Chicago, United States
120
Jörges B, López-Moliner J. Gravity as a Strong Prior: Implications for Perception and Action. Front Hum Neurosci 2017; 11:203. [PMID: 28503140] [PMCID: PMC5408029] [DOI: 10.3389/fnhum.2017.00203]
Abstract
In the future, humans are likely to be exposed to environments with altered gravity conditions, be it only visually (virtual and augmented reality) or visually and bodily (space travel). Because visually and bodily perceived gravity, as well as an interiorized representation of earth gravity, are involved in a series of tasks, such as catching, grasping, body orientation estimation, and spatial inferences, humans will need to adapt to these new gravity conditions. Performance under conditions discrepant with earth gravity has been shown to be relatively poor, and the few studies conducted on gravity adaptation are rather discouraging. Especially in VR on earth, conflicts between bodily and visual gravity cues seem to make full adaptation to visually perceived earth-discrepant gravities nearly impossible, and even in space, when visual and bodily cues are congruent, adaptation is extremely slow. We invoke a Bayesian framework for gravity-related perceptual processes, in which earth gravity holds the status of a so-called "strong prior". Like other strong priors, the gravity prior has developed through years and years of experience in an earth-gravity environment. For this reason, the reliability of this representation is extremely high, and it overrules any sensory information to the contrary. While other factors, such as the multisensory nature of gravity perception, also need to be taken into account, we present the strong-prior account as a unifying explanation for empirical results in gravity perception and adaptation to earth-discrepant gravities.
Affiliation(s)
- Björn Jörges
- Department of Cognition, Development and Psychology of Education, Faculty of Psychology, Universitat de Barcelona, Catalonia, Spain
- Institut de Neurociències, Universitat de Barcelona, Catalonia, Spain
- Joan López-Moliner
- Department of Cognition, Development and Psychology of Education, Faculty of Psychology, Universitat de Barcelona, Catalonia, Spain
- Institut de Neurociències, Universitat de Barcelona, Catalonia, Spain
121
Okita M, Yukihiro T, Miyamoto K, Morioka S, Kaba H. Defective imitation of finger configurations in patients with damage in the right or left hemispheres: An integration disorder of visual and somatosensory information? Brain Cogn 2017; 113:109-116. [DOI: 10.1016/j.bandc.2017.01.009]
123
Ozker M, Schepers IM, Magnotti JF, Yoshor D, Beauchamp MS. A Double Dissociation between Anterior and Posterior Superior Temporal Gyrus for Processing Audiovisual Speech Demonstrated by Electrocorticography. J Cogn Neurosci 2017; 29:1044-1060. [PMID: 28253074] [DOI: 10.1162/jocn_a_01110]
Abstract
Human speech can be comprehended using only auditory information from the talker's voice. However, comprehension is improved if the talker's face is visible, especially if the auditory information is degraded as occurs in noisy environments or with hearing loss. We explored the neural substrates of audiovisual speech perception using electrocorticography, direct recording of neural activity using electrodes implanted on the cortical surface. We observed a double dissociation in the responses to audiovisual speech with clear and noisy auditory component within the superior temporal gyrus (STG), a region long known to be important for speech perception. Anterior STG showed greater neural activity to audiovisual speech with clear auditory component, whereas posterior STG showed similar or greater neural activity to audiovisual speech in which the speech was replaced with speech-like noise. A distinct border between the two response patterns was observed, demarcated by a landmark corresponding to the posterior margin of Heschl's gyrus. To further investigate the computational roles of both regions, we considered Bayesian models of multisensory integration, which predict that combining the independent sources of information available from different modalities should reduce variability in the neural responses. We tested this prediction by measuring the variability of the neural responses to single audiovisual words. Posterior STG showed smaller variability than anterior STG during presentation of audiovisual speech with noisy auditory component. Taken together, these results suggest that posterior STG but not anterior STG is important for multisensory integration of noisy auditory and visual speech.
Affiliation(s)
- Muge Ozker
- University of Texas Graduate School of Biomedical Sciences at Houston
- Baylor College of Medicine
124
Kabbaligere R, Lee BC, Layne CS. Balancing sensory inputs: Sensory reweighting of ankle proprioception and vision during a bipedal posture task. Gait Posture 2017; 52:244-250. [PMID: 27978501] [DOI: 10.1016/j.gaitpost.2016.12.009]
Abstract
During multisensory integration, it has been proposed that the central nervous system (CNS) assigns a weight to each sensory input through a process called sensory reweighting. The outcome of this integration process is a single percept that is used to control posture. The main objective of this study was to determine the interaction between ankle proprioception and vision during sensory integration when the two inputs provide conflicting sensory information about the direction of body sway. Sensory conflict was created using bilateral Achilles tendon vibration and contracting visual flow, two stimuli that produce body sway in opposing directions when applied independently. Vibration was applied at 80 Hz with 1 mm amplitude, and the visual flow consisted of a virtual reality scene with concentric rings retreating at 3 m/s. Body sway elicited by the stimuli individually and in combination was evaluated in 10 healthy young adults by analyzing center of pressure (COP) displacement and lower limb kinematics. The magnitude of COP displacement produced when vibration and visual flow were combined was less than the algebraic sum of the COP displacements produced by the stimuli applied individually. This suggests that multisensory integration is not merely an algebraic summation of individual cues. Instead, the observed response might result from a weighted combination process, with the weight attached to each cue being directly proportional to the cue's relative reliability. The moderating effect of visual flow on the postural instability produced by vibration points to the potential use of controlled visual flow for balance training.
Affiliation(s)
- Rakshatha Kabbaligere
- Department of Health and Human Performance, University of Houston, Houston, TX, United States; Center for Neuromotor and Biomechanics Research, University of Houston, Houston, TX, United States.
- Beom-Chan Lee
- Department of Health and Human Performance, University of Houston, Houston, TX, United States; Center for Neuromotor and Biomechanics Research, University of Houston, Houston, TX, United States
- Charles S Layne
- Department of Health and Human Performance, University of Houston, Houston, TX, United States; Center for Neuromotor and Biomechanics Research, University of Houston, Houston, TX, United States; Center for Neuro-Engineering and Cognitive Science, University of Houston, Houston, TX, United States
125
Ward BK, Bockisch CJ, Caramia N, Bertolini G, Tarnutzer AA. Gravity dependence of the effect of optokinetic stimulation on the subjective visual vertical. J Neurophysiol 2017; 117:1948-1958. [PMID: 28148642] [DOI: 10.1152/jn.00303.2016]
Abstract
Accurate and precise estimates of the direction of gravity are essential for spatial orientation. According to Bayesian theory, multisensory vestibular, visual, and proprioceptive input is centrally integrated in a weighted fashion based on the reliability of the component sensory signals. For otolithic input, a decreasing signal-to-noise ratio has been demonstrated with increasing roll angle. We hypothesized that the weights of vestibular (otolithic) and extravestibular (visual/proprioceptive) sensors are roll-angle dependent and predicted an increased weight of extravestibular cues with increasing roll angle, potentially following the Bayesian hypothesis. To probe this concept, the subjective visual vertical (SVV) was assessed in different roll positions (≤ ± 120°, steps = 30°, n = 10) with and without presentation of an optokinetic stimulus (velocity = ± 60°/s). The optokinetic stimulus biased the SVV toward the direction of stimulus rotation for roll angles ≥ ± 30° (P < 0.005). Offsets grew from 3.9 ± 1.8° (upright) to 22.1 ± 11.8° (±120° roll tilt, P < 0.001). Trial-to-trial variability increased with roll angle, demonstrating a nonsignificant increase when optokinetic stimulation was provided. Variability and optokinetic bias were correlated (R2 = 0.71, slope = 0.71, 95% confidence interval = 0.57-0.86). An optimal-observer model combining an optokinetic bias with vestibular input reproduced the measured errors closely. These findings support the hypothesis of weighted multisensory integration when estimating the direction of gravity under optokinetic stimulation. Visual input was weighted more when vestibular input became less reliable, i.e., at larger roll-tilt angles. However, according to Bayesian theory, the variability of combined cues is always lower than the variability of each source cue. If the observed increase in variability, although nonsignificant, is true, it must either depend on an additional source of variability added after SVV computation or conflict with the Bayesian hypothesis.
NEW & NOTEWORTHY: Applying a rotating optokinetic stimulus while recording the subjective visual vertical at different whole-body roll angles, we noted that the optokinetic-induced bias correlated with the roll angle. These findings support the hypothesis that the established reliability-dependent optimal weighting of single-sensory cues for estimating the direction of gravity can be extended to biases caused by visual self-motion stimuli.
Affiliation(s)
- Bryan K Ward: Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins School of Medicine, Baltimore, Maryland; Department of Neurology, University Hospital Zurich and University of Zurich, Switzerland
- Christopher J Bockisch: Department of Neurology, University Hospital Zurich and University of Zurich, Switzerland; Department of Otorhinolaryngology, University Hospital Zurich and University of Zurich, Switzerland; Department of Ophthalmology, University Hospital Zurich and University of Zurich, Switzerland
- Nicoletta Caramia: Department of Neurology, University Hospital Zurich and University of Zurich, Switzerland
- Giovanni Bertolini: Department of Neurology, University Hospital Zurich and University of Zurich, Switzerland

126
Fischer BJ, Peña JL. Optimal nonlinear cue integration for sound localization. J Comput Neurosci 2017; 42:37-52. [PMID: 27714569] [PMCID: PMC5253079] [DOI: 10.1007/s10827-016-0626-4] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7]
Abstract
Integration of multiple sensory cues can improve performance in detection and estimation tasks. There is an open theoretical question of the conditions under which linear or nonlinear cue combination is Bayes-optimal. We demonstrate that a neural population decoded by a population vector requires nonlinear cue combination to approximate Bayesian inference. Specifically, if cues are conditionally independent, multiplicative cue combination is optimal for the population vector. The model was tested on neural and behavioral responses in the barn owl's sound localization system, where space-specific neurons owe their selectivity to multiplicative tuning to the sound localization cues of interaural phase difference (IPD) and interaural level difference (ILD). We found that IPD and ILD cues are approximately conditionally independent. As a result, the multiplicative combination selectivity to IPD and ILD of midbrain space-specific neurons permits a population vector to perform Bayesian cue combination. We further show that this model describes the owl's localization behavior in azimuth and elevation. This work provides theoretical justification and experimental evidence supporting the optimality of nonlinear cue combination.
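As a toy illustration of the central claim (multiplicative cue combination read out by a population vector), consider the sketch below. Tuning widths and cue values are invented, and both cues are mapped directly onto azimuth for simplicity; this is not the authors' model code:

    import numpy as np

    prefs = np.linspace(-90, 90, 37)  # preferred azimuths of model neurons

    def gauss(x, mu, sd):
        return np.exp(-0.5 * ((x - mu) / sd) ** 2)

    def population_vector(ipd_azimuth, ild_azimuth, sd_ipd=20.0, sd_ild=30.0):
        # Each neuron multiplies its tuning to the two cues; the population
        # vector averages preferred azimuths weighted by firing rate. For
        # conditionally independent cues the product tracks the likelihood,
        # so the readout approximates the Bayesian estimate under a flat prior.
        rates = gauss(ipd_azimuth, prefs, sd_ipd) * gauss(ild_azimuth, prefs, sd_ild)
        return np.sum(rates * prefs) / np.sum(rates)

    # Cue conflict: the estimate falls between the cues, nearer the sharper IPD cue.
    print(population_vector(ipd_azimuth=-10.0, ild_azimuth=20.0))
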
Affiliation(s)
- Brian J Fischer: Department of Mathematics, Seattle University, 901 12th Ave, Seattle, WA, 98122, USA
- Jose Luis Peña: Department of Neuroscience, Albert Einstein College of Medicine, 1410 Pelham Parkway South, Bronx, NY, 10461, USA

127
Filingeri D, Ackerley R. The biology of skin wetness perception and its implications in manual function and for reproducing complex somatosensory signals in neuroprosthetics. J Neurophysiol 2017; 117:1761-1775. [PMID: 28123008] [DOI: 10.1152/jn.00883.2016] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3]
Abstract
Our perception of skin wetness is generated readily, yet humans have no known receptor (hygroreceptor) to signal this directly. It is easy to imagine the sensation of water running over our hands or the feel of rain on our skin. The synthetic sensation of wetness is thought to be produced from a combination of specific skin thermal and tactile inputs, registered through thermoreceptors and mechanoreceptors, respectively. The present review explores how thermal and tactile afference from the periphery can generate the percept of wetness centrally. We propose that the main signals include information about skin cooling, signaled primarily by thinly myelinated thermoreceptors, and rapid changes in touch, through fast-conducting, myelinated mechanoreceptors. Potential central sites for integration of these signals, and thus the perception of skin wetness, include the primary and secondary somatosensory cortices and the insula cortex. The interactions underlying these processes can also be modeled to aid in understanding and engineering the mechanisms. Furthermore, we discuss the role that sensing wetness could play in precision grip and the dexterous manipulation of objects. We expand on these lines of inquiry to the application of the knowledge in designing and creating skin sensory feedback in prosthetics. The addition of real-time, complex sensory signals would mark a significant advance in the use and incorporation of prosthetic body parts for amputees in everyday life.
NEW & NOTEWORTHY: Little is known about the underlying mechanisms that generate the perception of skin wetness. Humans have no specific hygroreceptor, and thus temperature and touch information combine to produce wetness sensations. The present review covers the potential mechanisms leading to the perception of wetness, both peripherally and centrally, along with their implications for manual function. These insights are relevant to inform the design of neuroengineering interfaces, such as sensory prostheses for amputees.
Affiliation(s)
- Davide Filingeri: Environmental Ergonomics Research Centre, Loughborough Design School, Loughborough University, Loughborough, United Kingdom
- Rochelle Ackerley: Department of Physiology, University of Gothenburg, Göteborg, Sweden; Laboratoire Neurosciences Intégratives et Adaptatives (UMR 7260), Aix Marseille Université-Centre National de la Recherche Scientifique, Marseille, France

128
Kayser SJ, Philiastides MG, Kayser C. Sounds facilitate visual motion discrimination via the enhancement of late occipital visual representations. Neuroimage 2017; 148:31-41. [PMID: 28082107] [PMCID: PMC5349847] [DOI: 10.1016/j.neuroimage.2017.01.010] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9]
Abstract
Sensory discriminations, such as judgements about visual motion, often benefit from multisensory evidence. Despite many reports of enhanced brain activity during multisensory conditions, it remains unclear which dynamic processes implement the multisensory benefit for an upcoming decision in the human brain. Specifically, it remains difficult to attribute perceptual benefits to specific processes, such as early sensory encoding, the transformation of sensory representations into a motor response, or to more unspecific processes such as attention. We combined an audio-visual motion discrimination task with the single-trial mapping of dynamic sensory representations in EEG activity to localize when and where multisensory congruency facilitates perceptual accuracy. Our results show that a congruent sound facilitates the encoding of motion direction in occipital sensory (as opposed to parieto-frontal) cortices, and facilitates later (as opposed to early, i.e., below 100 ms) sensory activations. This multisensory enhancement was visible as an earlier rise of motion-sensitive activity in middle-occipital regions about 350 ms from stimulus onset, which reflected the better discriminability of motion direction from brain activity and correlated with the perceptual benefit provided by congruent multisensory information. This supports a hierarchical model of multisensory integration in which the enhancement of relevant sensory cortical representations is transformed into a more accurate choice.
Highlights:
- Feature-specific multisensory integration occurs in sensory, not amodal, cortex.
- Feature-specific integration occurs late, i.e., around 350 ms post stimulus onset.
- Acoustic and visual representations interact in occipital motion regions.
Affiliation(s)
- Stephanie J Kayser: Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
- Christoph Kayser: Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK

129
Multivoxel neurofeedback selectively modulates confidence without changing perceptual performance. Nat Commun 2016; 7:13669. [PMID: 27976739] [PMCID: PMC5171844] [DOI: 10.1038/ncomms13669] [Citation(s) in RCA: 88] [Impact Index Per Article: 11.0]
Abstract
A central controversy in metacognition studies concerns whether subjective confidence directly reflects the reliability of perceptual or cognitive processes, as suggested by normative models based on the assumption that neural computations are generally optimal. This view enjoys popularity in the computational and animal literatures, but it has also been suggested that confidence may depend on a late-stage estimation dissociable from perceptual processes. Yet, at least in humans, experimental tools have lacked the power to resolve these issues convincingly. Here, we overcome this difficulty by using the recently developed method of decoded neurofeedback (DecNef) to systematically manipulate multivoxel correlates of confidence in a frontoparietal network. Here we report that bi-directional changes in confidence do not affect perceptual accuracy. Further psychophysical analyses rule out accounts based on simple shifts in reporting strategy. Our results provide clear neuroscientific evidence for the systematic dissociation between confidence and perceptual performance, and thereby challenge current theoretical thinking.
Confidence associated with perceptual judgements is generally seen as directly reflecting the reliability of perceptual processes. Here the authors use fMRI-based decoded neurofeedback to manipulate confidence and show that it does not affect perceptual performance.

130
Chandrasekaran C. Computational principles and models of multisensory integration. Curr Opin Neurobiol 2016; 43:25-34. [PMID: 27918886] [DOI: 10.1016/j.conb.2016.11.002] [Citation(s) in RCA: 51] [Impact Index Per Article: 6.4]
Abstract
Combining information from multiple senses creates robust percepts, speeds up responses, enhances learning, and improves detection, discrimination, and recognition. In this review, I discuss computational models and principles that provide insight into how this process of multisensory integration occurs at the behavioral and neural level. My initial focus is on drift-diffusion and Bayesian models that can predict behavior in multisensory contexts. I then highlight how recent neurophysiological and perturbation experiments provide evidence for a distributed redundant network for multisensory integration. I also emphasize studies which show that task-relevant variables in multisensory contexts are distributed in heterogeneous neural populations. Finally, I describe dimensionality reduction methods and recurrent neural network models that may help decipher heterogeneous neural populations involved in multisensory integration.
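Of the model families surveyed here, the drift-diffusion account is the easiest to sketch. The snippet below is a generic toy with invented parameters (not taken from any study in the review): superposing two evidence streams raises the effective drift rate, which shortens decision times:

    import numpy as np

    rng = np.random.default_rng(0)

    def decision_time(drift, threshold=1.0, dt=1e-3, noise=1.0):
        # First-passage time of a 1-D drift-diffusion process to +/- threshold.
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return t

    for label, drift in [("visual only", 1.0), ("auditory only", 0.8), ("audiovisual", 1.8)]:
        rts = [decision_time(drift) for _ in range(200)]
        print(f"{label:13s} mean decision time ~ {np.mean(rts):.2f} s")
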

131
Accumulation and decay of visual capture and the ventriloquism aftereffect caused by brief audio-visual disparities. Exp Brain Res 2016; 235:585-595. [PMID: 27837258] [DOI: 10.1007/s00221-016-4820-4] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6]
Abstract
Visual capture and the ventriloquism aftereffect resolve spatial disparities of incongruent audio-visual (AV) objects by shifting auditory spatial perception to align with vision. Here, we demonstrated the distinct temporal characteristics of visual capture and the ventriloquism aftereffect in response to brief AV disparities. In a set of experiments, subjects localized either the auditory component of AV targets (A within AV) or a second sound presented at varying delays (1-20 s) after AV exposure (A2 after AV). AV targets were trains of brief presentations (1 or 20), covering a ±30° azimuthal range, and with ±8° (R or L) disparity. We found that the magnitude of visual capture generally reached its peak within a single AV pair and did not dissipate with time, while the ventriloquism aftereffect accumulated with repetitions of AV pairs and dissipated with time. Additionally, the magnitude of the auditory shift induced by each phenomenon was uncorrelated across listeners and visual capture was unaffected by subsequent auditory targets, indicating that visual capture and the ventriloquism aftereffect are separate mechanisms with distinct effects on auditory spatial perception. Our results indicate that visual capture is a 'sample-and-hold' process that binds related objects and stores the combined percept in memory, whereas the ventriloquism aftereffect is a 'leaky integrator' process that accumulates with experience and decays with time to compensate for cross-modal disparities.
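The two mechanisms proposed in the final sentence can be contrasted in a few lines of Python. This is our reconstruction of the qualitative idea, not the authors' fitted model; all constants are arbitrary:

    import numpy as np

    def expose(disparities_deg, dt_s=1.0, tau_s=60.0, gain=0.05):
        # Visual capture as 'sample-and-hold' (full shift on each AV pair,
        # then held) versus the ventriloquism aftereffect as a 'leaky
        # integrator' (accumulates a fraction of each disparity and decays
        # between pairs with time constant tau_s).
        capture = aftereffect = 0.0
        for d in disparities_deg:
            capture = d
            aftereffect = aftereffect * np.exp(-dt_s / tau_s) + gain * d
        return capture, aftereffect

    print(expose([8.0] * 1))   # single AV pair: capture is full, aftereffect tiny
    print(expose([8.0] * 20))  # repeated pairs: the aftereffect builds up
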

132
Dangelmayer S, Benda J, Grewe J. Weakly electric fish learn both visual and electrosensory cues in a multisensory object discrimination task. J Physiol Paris 2016; 110:182-189. [PMID: 27825970] [DOI: 10.1016/j.jphysparis.2016.10.007] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6]
Abstract
Weakly electric fish use electrosensory, visual, olfactory and lateral line information to guide foraging and navigation behaviors. In many cases they preferentially rely on electrosensory cues. Do fish also memorize non-electrosensory cues? Here, we trained individuals of the gymnotiform weakly electric fish Apteronotus albifrons in an object discrimination task. Objects were combinations of differently conductive materials covered with differently colored cotton hoods. By setting visual and electrosensory cues in conflict, we analyzed the sensory hierarchy between the electrosensory and visual senses in object discrimination. Our experiments show that: (i) black ghost knifefish can be trained to solve discrimination tasks similarly to mormyrid fish; (ii) fish preferentially rely on electrosensory cues for object discrimination; (iii) despite the dominance of the electrosense they still learn the visual cue and use it when electrosensory information is not available; (iv) fish prefer the trained combination of rewarded cues over combinations that match only in a single feature and also memorize the non-rewarded combination.
Affiliation(s)
- Sandra Dangelmayer: Institute for Neurobiology, Eberhard Karls Universität Tübingen, Germany
- Jan Benda: Institute for Neurobiology, Eberhard Karls Universität Tübingen, Germany
- Jan Grewe: Institute for Neurobiology, Eberhard Karls Universität Tübingen, Germany

133
Brunamonti E, Genovesio A, Pani P, Caminiti R, Ferraina S. Reaching-related Neurons in Superior Parietal Area 5: Influence of the Target Visibility. J Cogn Neurosci 2016; 28:1828-1837. [DOI: 10.1162/jocn_a_01004] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9]
Abstract
Reaching movements require the integration of both somatic and visual information. These signals can have different relevance, depending on whether reaches are performed toward visual or memorized targets. We tested the hypothesis that under such conditions, that is, depending on target visibility, posterior parietal neurons integrate somatic and visual signals differently. Monkeys were trained to execute both types of reaches from different hand resting positions and in total darkness. Neural activity was recorded in Area 5 (PE) and analyzed by focusing on the preparatory epoch, that is, before movement initiation. Many neurons were influenced by the initial hand position, and most of them were further modulated by target visibility. For the same starting position, we found a prevalence of neurons whose activity differed depending on whether the hand movement was performed toward memorized or visual targets. This result suggests that the posterior parietal cortex integrates available signals in a flexible way based on contextual demands.

134
Cashaback JGA, McGregor HR, Pun HCH, Buckingham G, Gribble PL. Does the sensorimotor system minimize prediction error or select the most likely prediction during object lifting? J Neurophysiol 2016; 117:260-274. [PMID: 27760821] [DOI: 10.1152/jn.00609.2016] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9]
Abstract
The human sensorimotor system is routinely capable of making accurate predictions about an object's weight, which allows for energetically efficient lifts and prevents objects from being dropped. Often, however, poor predictions arise when the weight of an object can vary and sensory cues about object weight are sparse (e.g., picking up an opaque water bottle). The question arises, what strategies does the sensorimotor system use to make weight predictions when one is dealing with an object whose weight may vary? For example, does the sensorimotor system use a strategy that minimizes prediction error (minimal squared error) or one that selects the weight that is most likely to be correct (maximum a posteriori)? In this study we dissociated the predictions of these two strategies by having participants lift an object whose weight varied according to a skewed probability distribution. We found, using a small range of weight uncertainty, that four indexes of sensorimotor prediction (grip force rate, grip force, load force rate, and load force) were consistent with a feedforward strategy that minimizes the square of prediction errors. These findings match research in the visuomotor system, suggesting parallels in underlying processes. We interpret our findings within a Bayesian framework and discuss the potential benefits of using a minimal squared error strategy. NEW & NOTEWORTHY Using a novel experimental model of object lifting, we tested whether the sensorimotor system models the weight of objects by minimizing lifting errors or by selecting the statistically most likely weight. We found that the sensorimotor system minimizes the square of prediction errors for object lifting. This parallels the results of studies that investigated visually guided reaching, suggesting an overlap in the underlying mechanisms between tasks that involve different sensory systems.
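The dissociation the authors exploit is easy to reproduce numerically: with a skewed weight distribution, the weight that minimizes squared error (the mean) differs from the most likely weight (the mode). The toy distribution below is ours, not the one used in the study:

    import numpy as np

    rng = np.random.default_rng(1)
    weights = rng.choice([2.0, 8.0], size=10_000, p=[0.8, 0.2])  # skewed: usually light

    map_pred = 2.0              # maximum a posteriori: the most likely weight
    mse_pred = weights.mean()   # minimal squared error: the expected weight (~3.2)

    for name, pred in [("MAP", map_pred), ("MSE", mse_pred)]:
        print(f"{name} prediction {pred:.1f}: mean squared error "
              f"{np.mean((weights - pred) ** 2):.2f}")
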
Affiliation(s)
- Joshua G A Cashaback: Brain and Mind Institute, Department of Psychology, Western University, London, Ontario, Canada
- Heather R McGregor: Brain and Mind Institute, Department of Psychology, Western University, London, Ontario, Canada; Graduate Program in Neuroscience, Western University, London, Ontario, Canada
- Henry C H Pun: Department of Physiology and Pharmacology, Western University, London, Ontario, Canada
- Gavin Buckingham: Department of Sport and Health Sciences, University of Exeter, Devon, United Kingdom
- Paul L Gribble: Brain and Mind Institute, Department of Psychology, Western University, London, Ontario, Canada; Department of Physiology and Pharmacology, Western University, London, Ontario, Canada

135
Gu Y, Cheng Z, Yang L, DeAngelis GC, Angelaki DE. Multisensory Convergence of Visual and Vestibular Heading Cues in the Pursuit Area of the Frontal Eye Field. Cereb Cortex 2016; 26:3785-3801. [PMID: 26286917] [PMCID: PMC5004753] [DOI: 10.1093/cercor/bhv183] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.0]
Abstract
Both visual and vestibular sensory cues are important for perceiving one's direction of heading during self-motion. Previous studies have identified multisensory, heading-selective neurons in the dorsal medial superior temporal area (MSTd) and the ventral intraparietal area (VIP). Both MSTd and VIP have strong recurrent connections with the pursuit area of the frontal eye field (FEFsem), but whether FEFsem neurons may contribute to multisensory heading perception remains unknown. We characterized the tuning of macaque FEFsem neurons to visual, vestibular, and multisensory heading stimuli. About two-thirds of FEFsem neurons exhibited significant heading selectivity based on either vestibular or visual stimulation. These multisensory neurons shared many properties, including distributions of tuning strength and heading preferences, with MSTd and VIP neurons. Fisher information analysis also revealed that the average FEFsem neuron was almost as sensitive as MSTd or VIP cells. Visual and vestibular heading preferences in FEFsem tended to be either matched (congruent cells) or discrepant (opposite cells), such that combined stimulation strengthened heading selectivity for congruent cells but weakened heading selectivity for opposite cells. These findings demonstrate that, in addition to oculomotor functions, FEFsem neurons also exhibit properties that may allow them to contribute to a cortical network that processes multisensory heading cues.
Affiliation(s)
- Yong Gu: Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Institute of Neuroscience, Shanghai, China
- Zhixian Cheng: Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Institute of Neuroscience, Shanghai, China
- Lihua Yang: Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Institute of Neuroscience, Shanghai, China
- Gregory C. DeAngelis: Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Dora E. Angelaki: Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA

136
Drakul A, Bockisch CJ, Tarnutzer AA. Does gravity influence the visual line bisection task? J Neurophysiol 2016; 116:629-636. [PMID: 27226452] [DOI: 10.1152/jn.00312.2016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
The visual line bisection task (LBT) is sensitive to perceptual biases of visuospatial attention, showing slight leftward (for horizontal lines) and upward (for vertical lines) errors in healthy subjects. It may be solved in an egocentric or allocentric reference frame, and there is no obvious need for graviceptive input. However, for other visual line adjustments, such as the subjective visual vertical, otolith input is integrated. We hypothesized that graviceptive input is incorporated when performing the LBT and predicted reduced accuracy and precision when roll-tilted. Twenty healthy right-handed subjects repetitively bisected Earth-horizontal and body-horizontal lines in darkness. Recordings were obtained before, during, and after roll-tilt (±45°, ±90°) for 5 min each. Additionally, bisections of Earth-vertical and oblique lines were obtained in 17 subjects. When roll-tilted ±90° ear-down, bisections of Earth-horizontal (i.e., body-vertical) lines were shifted toward the direction of the head (P < 0.001). However, after correction for vertical line-bisection errors when upright, shifts disappeared. Bisecting body-horizontal lines while roll-tilted did not cause any shifts. The precision of Earth-horizontal line bisections decreased (P ≤ 0.006) when roll-tilted, while no such changes were observed for body-horizontal lines. Regardless of the trial condition and paradigm, the scanning direction of the bisecting cursor (leftward vs. rightward) significantly (P ≤ 0.021) affected line bisections. Our findings reject our hypothesis and suggest that gravity does not modulate the LBT. Roll-tilt-dependent shifts are instead explained by the headward bias when bisecting lines oriented along a body-vertical axis. Increased variability when roll-tilted likely reflects larger variability when bisecting body-vertical than body-horizontal lines.
Affiliation(s)
- A Drakul: Department of Neurology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- C J Bockisch: Department of Neurology, University Hospital Zurich and University of Zurich, Zurich, Switzerland; Department of Otorhinolaryngology, University Hospital Zurich and University of Zurich, Zurich, Switzerland; Department of Ophthalmology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- A A Tarnutzer: Department of Neurology, University Hospital Zurich and University of Zurich, Zurich, Switzerland

137
Petrini K, Caradonna A, Foster C, Burgess N, Nardini M. How vision and self-motion combine or compete during path reproduction changes with age. Sci Rep 2016; 6:29163. [PMID: 27381183] [PMCID: PMC4933893] [DOI: 10.1038/srep29163] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.8]
Abstract
Human adults can optimally integrate visual and non-visual self-motion cues when navigating, while children up to 8 years old cannot. Whether older children can is unknown, limiting our understanding of how our internal multisensory representation of space develops. Eighteen adults and fifteen 10- to 11-year-old children were guided along a two-legged path in darkness (self-motion only), in a virtual room (visual + self-motion), or were shown a pre-recorded walk in the virtual room while standing still (visual only). Participants then reproduced the path in darkness. We obtained a measure of the dispersion of the end-points (variable error) and of their distances from the correct end point (constant error). Only children reduced their variable error when recalling the path in the visual + self-motion condition, indicating combination of these cues. Adults showed a constant error for the combined condition intermediate to those for single cues, indicative of cue competition, which may explain the lack of near-optimal integration in this group. This suggests that later in childhood humans can gain from optimally integrating spatial cues even when in the same situation these are kept separate in adulthood.
Affiliation(s)
- Karin Petrini: Department of Psychology, University of Bath, UK; UCL Institute of Ophthalmology, London, UK
- Andrea Caradonna: UCL Research Department of Neuroscience, Physiology and Pharmacology, UK
- Celia Foster: UCL Research Department of Neuroscience, Physiology and Pharmacology, UK; Centre for Integrative Neuroscience, University of Tübingen, Germany
- Neil Burgess: UCL Institute of Cognitive Neuroscience, London, UK; University College London Institute of Neurology, UK

138
Yau JM, DeAngelis GC, Angelaki DE. Dissecting neural circuits for multisensory integration and crossmodal processing. Philos Trans R Soc Lond B Biol Sci 2016; 370:20140203. [PMID: 26240418] [DOI: 10.1098/rstb.2014.0203] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0]
Abstract
We rely on rich and complex sensory information to perceive and understand our environment. Our multisensory experience of the world depends on the brain's remarkable ability to combine signals across sensory systems. Behavioural, neurophysiological and neuroimaging experiments have established principles of multisensory integration and candidate neural mechanisms. Here we review how targeted manipulation of neural activity using invasive and non-invasive neuromodulation techniques has advanced our understanding of multisensory processing. Neuromodulation studies have provided detailed characterizations of brain networks causally involved in multisensory integration. Despite substantial progress, important questions regarding multisensory networks remain unanswered. Critically, experimental approaches will need to be combined with theory in order to understand how distributed activity across multisensory networks collectively supports perception.
Affiliation(s)
- Jeffrey M Yau: Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
- Gregory C DeAngelis: Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Dora E Angelaki: Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA

139
Abstract
In the McGurk effect, incongruent auditory and visual syllables are perceived as a third, completely different syllable. This striking illusion has become a popular assay of multisensory integration for individuals and clinical populations. However, there is enormous variability in how often the illusion is evoked by different stimuli and how often the illusion is perceived by different individuals. Most studies of the McGurk effect have used only one stimulus, making it impossible to separate stimulus and individual differences. We created a probabilistic model to separately estimate stimulus and individual differences in behavioral data from 165 individuals viewing up to 14 different McGurk stimuli. The noisy encoding of disparity (NED) model characterizes stimuli by their audiovisual disparity and characterizes individuals by how noisily they encode the stimulus disparity and by their disparity threshold for perceiving the illusion. The model accurately described perception of the McGurk effect in our sample, suggesting that differences between individuals are stable across stimulus differences. The most important benefit of the NED model is that it provides a method to compare multisensory integration across individuals and groups without the confound of stimulus differences. An added benefit is the ability to predict frequency of the McGurk effect for stimuli never before seen by an individual.
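From the description above, the core computation of the NED model can be sketched as follows. This is a reconstruction for illustration only; consult the published model for its exact parameterization:

    from math import erf, sqrt

    def p_mcgurk(stimulus_disparity, encoding_noise_sd, disparity_threshold):
        # Probability of the illusion if the stimulus disparity is encoded
        # with Gaussian noise and the illusion is reported whenever the
        # encoded disparity exceeds the individual's threshold.
        z = (stimulus_disparity - disparity_threshold) / encoding_noise_sd
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF

    # A strong and a weak stimulus, for a low-noise and a high-noise perceiver.
    for d in (2.5, 0.5):
        for sd in (0.5, 2.0):
            print(f"disparity {d}, noise {sd}: P(illusion) = {p_mcgurk(d, sd, 1.0):.2f}")
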

140
Thoret E, Aramaki M, Bringoux L, Ystad S, Kronland-Martinet R. Seeing Circles and Drawing Ellipses: When Sound Biases Reproduction of Visual Motion. PLoS One 2016; 11:e0154475. [PMID: 27119411] [PMCID: PMC4847762] [DOI: 10.1371/journal.pone.0154475] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6]
Abstract
The perception and production of biological movements are characterized by the 1/3 power law, a relation linking the curvature and the velocity of an intended action. In particular, motions are perceived and reproduced distorted when their kinematics deviate from this biological law. Whereas most studies dealing with this perceptual-motor relation have focused on the visual or kinaesthetic modality in a unimodal context, in this paper we show that auditory dynamics strikingly bias visuomotor processes. Biologically consistent or inconsistent circular visual motions were used in combination with circular or elliptical auditory motions. Auditory motions were synthesized friction sounds mimicking those produced by the friction of a pen on paper when someone is drawing. Sounds were presented diotically and the auditory motion velocity was evoked through friction sound timbre variations without any spatial cues. Remarkably, when subjects were asked to reproduce circular visual motion while listening to sounds that evoked elliptical kinematics without seeing their hand, they drew elliptical shapes. Moreover, distortions induced by inconsistent elliptical kinematics in the visual and auditory modalities added up linearly. These results bring to light the substantial role of auditory dynamics in visuomotor coupling in a multisensory context.
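For reference, the 1/3 power law mentioned here prescribes tangential speed from path curvature, v = K * kappa**(-1/3). A small sketch (ours, not the authors' sound-synthesis code) shows why an ellipse, unlike a circle, implies a modulated speed profile:

    import numpy as np

    def power_law_speed(x, y, K=1.0):
        # Tangential speed v = K * curvature**(-1/3) along a sampled 2-D path.
        dx, dy = np.gradient(x), np.gradient(y)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        kappa = np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
        return K * kappa ** (-1.0 / 3.0)

    t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
    for a, b, name in [(1.0, 1.0, "circle"), (1.5, 0.7, "ellipse")]:
        v = power_law_speed(a * np.cos(t), b * np.sin(t))
        # On a circle curvature is constant, so biological speed is constant;
        # on an ellipse speed peaks along the flatter segments.
        print(f"{name}: speed range {v.min():.2f} to {v.max():.2f}")
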
Affiliation(s)
- Etienne Thoret: Laboratoire de Mécanique et d’Acoustique, CNRS, UPR 7051, Aix Marseille Université, Centrale Marseille, Marseille, France
- Mitsuko Aramaki: Laboratoire de Mécanique et d’Acoustique, CNRS, UPR 7051, Aix Marseille Université, Centrale Marseille, Marseille, France
- Lionel Bringoux: Aix-Marseille Université, CNRS, ISM, UMR 7287, Marseille, France
- Sølvi Ystad: Laboratoire de Mécanique et d’Acoustique, CNRS, UPR 7051, Aix Marseille Université, Centrale Marseille, Marseille, France
- Richard Kronland-Martinet: Laboratoire de Mécanique et d’Acoustique, CNRS, UPR 7051, Aix Marseille Université, Centrale Marseille, Marseille, France

141
Grabherr L, Macauda G, Lenggenhager B. The Moving History of Vestibular Stimulation as a Therapeutic Intervention. Multisens Res 2016; 28:653-687. [PMID: 26595961] [DOI: 10.1163/22134808-00002495] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4]
Abstract
Although the discovery and understanding of the function of the vestibular system date back only to the 19th century, strategies that involve vestibular stimulation were used long before to calm, soothe and even cure people. While such stimulation was classically achieved with various motion devices, like Cox's chair or Hallaran's swing, the development of caloric and galvanic vestibular stimulation has opened up new possibilities in the 20th century. With the increasing knowledge and recognition of vestibular contributions to various perceptual, motor, cognitive, and emotional processes, vestibular stimulation has been suggested as a powerful and non-invasive treatment for a range of psychiatric, neurological and neurodevelopmental conditions. Yet, the therapeutic interventions were, and still are, often not hypothesis-driven as broader theories remain scarce and underlying neurophysiological mechanisms are often vague. We aim to critically review the literature on vestibular stimulation as a form of therapy in various selected disorders and present its successes, expectations, and drawbacks from a historical perspective.

142
Powers AR III, Hillock-Dunn A, Wallace MT. Generalization of multisensory perceptual learning. Sci Rep 2016; 6:23374. [PMID: 27000988] [PMCID: PMC4802214] [DOI: 10.1038/srep23374] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9]
Abstract
Life in a multisensory world requires the rapid and accurate integration of stimuli across the different senses. In this process, the temporal relationship between stimuli is critical in determining which stimuli share a common origin. Numerous studies have described a multisensory temporal binding window—the time window within which audiovisual stimuli are likely to be perceptually bound. In addition to characterizing this window’s size, recent work has shown it to be malleable, with the capacity for substantial narrowing following perceptual training. However, the generalization of these effects to other measures of perception is not known. This question was examined by characterizing the ability of training on a simultaneity judgment task to influence perception of the temporally-dependent sound-induced flash illusion (SIFI). Results do not demonstrate a change in performance on the SIFI itself following training. However, data do show an improved ability to discriminate rapidly-presented two-flash control conditions following training. Effects were specific to training and scaled with the degree of temporal window narrowing exhibited. Results do not support generalization of multisensory perceptual learning to other multisensory tasks. However, results do show that training results in improvements in visual temporal acuity, suggesting a generalization effect of multisensory training on unisensory abilities.
Affiliation(s)
- Albert R Powers III: Kennedy Center, Vanderbilt University, Nashville, Tennessee, USA; Neuroscience Graduate Program, Vanderbilt University, Nashville, Tennessee, USA; Medical Scientist Training Program, Vanderbilt University School of Medicine, Nashville, Tennessee, USA; Department of Psychiatry, Yale University, New Haven, Connecticut, USA
- Andrea Hillock-Dunn: Kennedy Center, Vanderbilt University, Nashville, Tennessee, USA; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, Tennessee, USA
- Mark T Wallace: Kennedy Center, Vanderbilt University, Nashville, Tennessee, USA; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, Tennessee, USA; Neuroscience Graduate Program, Vanderbilt University, Nashville, Tennessee, USA

143
Thakur CS, Afshar S, Wang RM, Hamilton TJ, Tapson J, van Schaik A. Bayesian Estimation and Inference Using Stochastic Electronics. Front Neurosci 2016; 10:104. [PMID: 27047326] [PMCID: PMC4796016] [DOI: 10.3389/fnins.2016.00104] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6]
Abstract
In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks that can perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement, and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the probability of an external distractor (noise) interfering with the observations. In the second implementation, referred to as Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers, owing to the low noise margins, the effects of high-energy cosmic rays, and low supply voltages. In our framework, the flipping of random individual bits would not affect system performance because information is encoded in a bit stream.
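The "Bayesian recursive equation" that the BEAST block solves online is the standard hidden-Markov filter. In plain NumPy (rather than stochastic hardware) one update step looks like the sketch below; the track size, transition matrix, and sensor model are invented for illustration:

    import numpy as np

    def filter_step(belief, transition, likelihood):
        # Predict through the transition model, weight by the observation
        # likelihood, renormalize: one step of recursive Bayesian estimation.
        predicted = transition.T @ belief
        posterior = predicted * likelihood
        return posterior / posterior.sum()

    n = 5  # discrete positions on a 1-D track
    transition = 0.8 * np.eye(n) \
        + 0.1 * np.roll(np.eye(n), 1, axis=1) \
        + 0.1 * np.roll(np.eye(n), -1, axis=1)  # stay, or hop to a neighbor (circular)

    belief = np.full(n, 1.0 / n)     # start with a flat prior
    for obs in [2, 2, 3, 3]:         # noisy sensor reports
        likelihood = np.where(np.arange(n) == obs, 0.7, 0.3 / (n - 1))
        belief = filter_step(belief, transition, likelihood)
    print(belief.round(3))           # mass concentrates near position 3
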
Affiliation(s)
- Chetan Singh Thakur: Biomedical Engineering and Neuroscience, The MARCS Institute, Western Sydney University, Sydney, NSW, Australia
- Saeed Afshar: Biomedical Engineering and Neuroscience, The MARCS Institute, Western Sydney University, Sydney, NSW, Australia
- Runchun M Wang: Biomedical Engineering and Neuroscience, The MARCS Institute, Western Sydney University, Sydney, NSW, Australia
- Tara J Hamilton: Biomedical Engineering and Neuroscience, The MARCS Institute, Western Sydney University, Sydney, NSW, Australia
- Jonathan Tapson: Biomedical Engineering and Neuroscience, The MARCS Institute, Western Sydney University, Sydney, NSW, Australia
- André van Schaik: Biomedical Engineering and Neuroscience, The MARCS Institute, Western Sydney University, Sydney, NSW, Australia

144
Gepperth ART, Hecht T, Gogate M. A Generative Learning Approach to Sensor Fusion and Change Detection. Cognit Comput 2016. [DOI: 10.1007/s12559-016-9390-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5]

145
Pan Y, Wang L, Wang Z, Xu C, Yu W, Spillmann L, Gu Y, Wang Z, Wang W. Representation of illusory and physical rotations in human MST: A cortical site for the Pinna illusion. Hum Brain Mapp 2016; 37:2097-2113. [PMID: 26945511] [DOI: 10.1002/hbm.23156] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6]
Abstract
Visual illusions have fascinated mankind since antiquity, as they provide a unique window to explore the constructive nature of human perception. The Pinna illusion is a striking example of rotation perception in the absence of real physical motion. Upon approaching or receding from the Pinna-Brelstaff figure, the observer experiences vivid illusory counter-rotation of the two rings in the figure. Although this phenomenon is well known as an example of integration from local cues to a global percept, the visual areas mediating the illusory rotary perception in the human brain have not yet been identified. In the current study we investigated which cortical area in the human brain initially mediates the Pinna illusion, using psychophysical tests and functional magnetic resonance imaging (fMRI) of visual cortices V1, V2, V3, V3A, V4, and hMT+ of the dorsal and ventral visual pathways. We found that both the Pinna-Brelstaff figure (illusory rotation) and a matched physical rotation control stimulus predominantly activated subarea MST in hMT+ with a similar response intensity. Our results thus provide neural evidence showing that illusory rotation is initiated in human MST rather than MT as if it were physical rotary motion. The findings imply that illusory rotation in the Pinna illusion is mediated by rotation-sensitive neurons that normally encode physical rotation in human MST, both of which may rely on a cascade of similar integrative processes from earlier visual areas.
Affiliation(s)
- Yanxia Pan, Lijia Wang, Zhiwei Wang, Chan Xu, Wenwen Yu, Lothar Spillmann, Yong Gu, Zheng Wang, Wei Wang: Institute of Neuroscience, State Key Laboratory of Neuroscience, Key Laboratory of Primate Neurobiology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai, 200031, People's Republic of China

146
Reliability-Based Weighting of Visual and Vestibular Cues in Displacement Estimation. PLoS One 2015; 10:e0145015. [PMID: 26658990] [PMCID: PMC4687653] [DOI: 10.1371/journal.pone.0145015] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3]
Abstract
When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement.

147
Schwarz AJ, Straumann D, Tarnutzer AA. Diurnal Fluctuations of Verticality Perception - Lesser Precision Immediately after Waking up in the Morning. Front Neurol 2015; 6:195. [PMID: 26388837] [PMCID: PMC4557077] [DOI: 10.3389/fneur.2015.00195] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7]
Abstract
Internal estimates of direction of gravity are continuously updated by integrating vestibular, visual and proprioceptive input, and prior experience about upright position. Prolonged static roll-tilt biases perceived direction of gravity by adaptation of peripheral sensors and central structures. We hypothesized that in the morning after sleep, estimates of direction of gravity [assessed by the subjective visual vertical (SVV)] are less precise than in the evening because of adaptation to horizontal body position and lack of prior knowledge about upright position. Using a mobile SVV-measuring device, verticality perception was assessed in seven healthy human subjects on 7 days in the morning immediately after waking up and in the evening while sitting upright. Paired t-tests were applied to analyze diurnal changes in SVV trial-to-trial variability. Average SVV variability in the morning was significantly larger than in the evening (1.9 ± 0.6° vs. 0.9 ± 0.3°, p = 0.002). SVV accuracy was not significantly different (−1.2 ± 0.9° vs. −0.4 ± 0.4°, morning vs. evening, p = 0.058) and was within normal range (±2.3°) in all but one subject. A good night’s sleep has a profound effect on the brain’s ability to estimate direction of gravity. The resulting variability was significantly worse after waking up, reaching values more than twice as large as in the evening, while there was no significant impact on SVV accuracy. We hypothesize that lacking prior knowledge, adaptation of peripheral sensors, and lower levels of arousal and cerebral metabolism contribute to such impoverished estimates. Our observations have considerable clinical impact as they indicate an increased risk for falls and fall-related injuries in the morning.
Affiliation(s)
- Dominik Straumann: Department of Neurology, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Alexander A Tarnutzer: Department of Neurology, University Hospital Zurich, University of Zurich, Zurich, Switzerland

148
Bill J, Buesing L, Habenschuss S, Nessler B, Maass W, Legenstein R. Distributed Bayesian Computation and Self-Organized Learning in Sheets of Spiking Neurons with Local Lateral Inhibition. PLoS One 2015; 10:e0134356. [PMID: 26284370] [PMCID: PMC4540468] [DOI: 10.1371/journal.pone.0134356] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7]
Abstract
During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning and learning of mammals. However, our understanding of how probabilistic computations could be organized in the brain, and how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model that accommodates both local competitive and large-scale associative aspects of neural information processing, under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition in response to distributed spiking input can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity on the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which equate with Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input.
Affiliation(s)
- Johannes Bill: Institute for Theoretical Computer Science, TU Graz, Graz, Austria
- Lars Buesing: Department of Statistics, Columbia University, New York, New York, United States of America
- Bernhard Nessler: Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Wolfgang Maass: Institute for Theoretical Computer Science, TU Graz, Graz, Austria

149
Abstract
The "problem of serial order in behavior," as formulated and discussed by Lashley (1951), is arguably more pervasive and more profound both than originally stated and than currently appreciated. We spell out two complementary aspects of what we term the generalized problem of behavior: (i) multimodality, stemming from the disparate nature of the sensorimotor variables and processes that underlie behavior, and (ii) concurrency, which reflects the parallel unfolding in time of these processes and of their asynchronous interactions. We illustrate these on a number of examples, with a special focus on language, briefly survey the computational approaches to multimodal concurrency, offer some hypotheses regarding the manner in which brains address it, and discuss some of the broader implications of these as yet unresolved issues for cognitive science.
Affiliation(s)
- Oren Kolodny: Department of Zoology, Tel Aviv University, Tel Aviv, Israel
- Shimon Edelman: Department of Psychology, Cornell University, Ithaca, NY, USA

150
Hollensteiner KJ, Pieper F, Engler G, König P, Engel AK. Crossmodal integration improves sensory detection thresholds in the ferret. PLoS One 2015; 10:e0124952. [PMID: 25970327] [PMCID: PMC4430165] [DOI: 10.1371/journal.pone.0124952] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6]
Abstract
During the last two decades ferrets (Mustela putorius) have been established as a highly efficient animal model in different fields of neuroscience. Here we asked whether ferrets integrate sensory information according to the same principles established for other species. Since only a few methods and protocols are available for behaving ferrets, we developed a head-free, body-restrained approach allowing a standardized stimulation position and the utilization of the ferret’s natural response behavior. We established a behavioral paradigm to test audiovisual integration in the ferret. Animals had to detect a brief auditory and/or visual stimulus presented either left or right from their midline. We first determined detection thresholds for auditory amplitude and visual contrast. In a second step, we combined both modalities and compared psychometric fits and reaction times between all conditions. We employed Maximum Likelihood Estimation (MLE) to model bimodal psychometric curves and to investigate whether ferrets integrate modalities in an optimal manner. Furthermore, to test for a redundant signal effect, we pooled the reaction times of all animals to calculate a race model. We observed that bimodal detection thresholds were reduced and reaction times were faster in the bimodal compared to unimodal conditions. The race model and MLE modeling showed that ferrets integrate modalities in a statistically optimal fashion. Taken together, the data indicate that principles of multisensory integration previously demonstrated in other species also apply to crossmodal processing in the ferret.
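The race-model test mentioned here compares the bimodal reaction-time distribution against the bound implied by two independent unimodal races (Miller's inequality). A self-contained sketch with synthetic reaction times follows; the study's actual pooling procedure may differ:

    import numpy as np

    def ecdf(samples, t_grid):
        # Empirical cumulative distribution function evaluated on a time grid.
        return np.searchsorted(np.sort(samples), t_grid) / len(samples)

    rng = np.random.default_rng(2)
    rt_a = rng.normal(0.45, 0.08, 500)   # auditory-only RTs (s), synthetic
    rt_v = rng.normal(0.50, 0.08, 500)   # visual-only RTs (s), synthetic
    rt_av = rng.normal(0.36, 0.06, 500)  # bimodal RTs, made fast on purpose

    t = np.linspace(0.2, 0.7, 11)
    bound = np.minimum(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))  # race-model bound
    violated = ecdf(rt_av, t) > bound    # violations imply coactivation,
    print(t[violated].round(2))          # i.e., genuine integration
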
Affiliation(s)
- Karl J. Hollensteiner: Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Florian Pieper: Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Gerhard Engler: Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Peter König: Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany; Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück, Germany
- Andreas K. Engel: Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany