201
Gu Y, Liu S, Fetsch CR, Yang Y, Fok S, Sunkara A, DeAngelis GC, Angelaki DE. Perceptual learning reduces interneuronal correlations in macaque visual cortex. Neuron 2011; 71:750-61. [PMID: 21867889] [DOI: 10.1016/j.neuron.2011.06.015]
Abstract
Responses of neurons in early visual cortex change little with training and appear insufficient to account for perceptual learning. Behavioral performance, however, relies on population activity, and the accuracy of a population code is constrained by correlated noise among neurons. We tested whether training changes interneuronal correlations in the dorsal medial superior temporal area, which is involved in multisensory heading perception. Pairs of single units were recorded simultaneously in two groups of subjects: animals trained extensively in a heading discrimination task, and "naive" animals that performed a passive fixation task. Correlated noise was significantly weaker in trained versus naive animals, which might be expected to improve coding efficiency. However, we show that the observed uniform reduction in noise correlations leads to little change in population coding efficiency when all neurons are decoded. Thus, global changes in correlated noise among sensory neurons may be insufficient to account for perceptual learning.
Affiliation(s)
- Yong Gu
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA
202
Dearing RR, Harris LR. The contribution of different parts of the visual field to the perception of upright. Vision Res 2011; 51:2207-15. [PMID: 21906616] [DOI: 10.1016/j.visres.2011.08.018]
Abstract
We determined the relative effectiveness of different areas of the visual field in determining the perceptual upright. The perceptual upright was measured using the character 'p', the identity of which depended on its perceived orientation (the Oriented Character Recognition Test). The visual field was divided into left and right, upper and lower, and central and peripheral halves, with different backgrounds presented in each area. The left and right visual fields contributed equally to the perceptual upright while the lower visual field demonstrated a larger effect on the perceptual upright as compared to the upper visual field. The central and peripheral visual fields interacted with one another in a complex manner, although a separate experiment suggested that the peripheral visual field did not alter the perceived orientation of the central field.
Affiliation(s)
- Ryan R Dearing
- Centre for Vision Research, York University, Toronto, ON, Canada
203
Scotto Di Cesare C, Bringoux L, Bourdin C, Sarlegna FR, Mestre DR. Spatial localization investigated by continuous pointing during visual and gravitoinertial changes. Exp Brain Res 2011; 215:173-82. [DOI: 10.1007/s00221-011-2884-8]
204
Contributions of vision and proprioception to arm movement planning in the vertical plane. Neurosci Lett 2011; 503:186-90. [PMID: 21889576] [DOI: 10.1016/j.neulet.2011.08.032]
205
Shams L. Early Integration and Bayesian Causal Inference in Multisensory Perception. Front Neurosci 2011. [DOI: 10.1201/9781439812174-16]
206
Shams L. Early Integration and Bayesian Causal Inference in Multisensory Perception. Front Neurosci 2011. [DOI: 10.1201/b11092-16]
207
Vilares I, Kording K. Bayesian models: the structure of the world, uncertainty, behavior, and the brain. Ann N Y Acad Sci 2011; 1224:22-39. [PMID: 21486294] [DOI: 10.1111/j.1749-6632.2011.05965.x]
Abstract
Experiments on humans and other animals have shown that uncertainty due to unreliable or incomplete information affects behavior. Recent studies have formalized uncertainty and asked which behaviors would minimize its effect. This formalization results in a wide range of Bayesian models that derive from assumptions about the world, and it often seems unclear how these models relate to one another. In this review, we use the concept of graphical models to analyze differences and commonalities across Bayesian approaches to the modeling of behavioral and neural data. We review behavioral and neural data associated with each type of Bayesian model and explain how these models can be related. We finish with an overview of different theories that propose possible ways in which the brain can represent uncertainty.
Affiliation(s)
- Iris Vilares
- Departments of Physical Medicine and Rehabilitation, Physiology, and Applied Mathematics, Northwestern University, Chicago, Illinois; Rehabilitation Institute of Chicago, Northwestern University, Chicago, Illinois; International Neuroscience Doctoral Programme, Champalimaud Neuroscience Programme, Instituto Gulbenkian de Ciência, Oeiras, Portugal
- Konrad Kording
- Departments of Physical Medicine and Rehabilitation, Physiology, and Applied Mathematics, Northwestern University, Chicago, Illinois; Rehabilitation Institute of Chicago, Northwestern University, Chicago, Illinois; International Neuroscience Doctoral Programme, Champalimaud Neuroscience Programme, Instituto Gulbenkian de Ciência, Oeiras, Portugal
208
Integration of vestibular and proprioceptive signals for spatial updating. Exp Brain Res 2011; 212:163-76. [PMID: 21590262] [DOI: 10.1007/s00221-011-2717-9]
Abstract
Spatial updating during self-motion typically involves the appropriate integration of both visual and non-visual cues, including vestibular and proprioceptive information. Here, we investigated how human observers combine these two non-visual cues during full-stride curvilinear walking. To obtain a continuous, real-time estimate of perceived position, observers were asked to continuously point toward a previously viewed target in the absence of vision. They did so while moving on a large circular treadmill under various movement conditions. Two conditions were designed to evaluate spatial updating when information was largely limited to either proprioceptive information (walking in place) or vestibular information (passive movement). A third condition evaluated updating when both sources of information were available (walking through space) and were either congruent or in conflict. During both the passive movement condition and while walking through space, the pattern of pointing behavior demonstrated evidence of accurate egocentric updating. In contrast, when walking in place, perceived self-motion was underestimated and participants always adjusted the pointer at a constant rate, irrespective of changes in the rate at which the participant moved relative to the target. The results are discussed in relation to the maximum likelihood estimation model of sensory integration. They show that when the two cues were congruent, estimates were combined, such that the variance of the adjustments was generally reduced. Results also suggest that when conflicts were introduced between the vestibular and proprioceptive cues, spatial updating was based on a weighted average of the two inputs.
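The maximum likelihood estimation model discussed above predicts that two independent cues are combined by inverse-variance (reliability) weighting, so the fused estimate always has lower variance than either cue alone. A minimal sketch of that prediction; the cue values and variances below are illustrative placeholders, not data from the study:

```python
# Maximum-likelihood (inverse-variance weighted) fusion of two independent
# cues, as in the standard MLE model of sensory integration.
# All numbers below are illustrative, not taken from the study.

def mle_combine(est_a, var_a, est_b, var_b):
    """Combine two cue estimates; return (combined estimate, combined variance)."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)  # weight of cue A
    w_b = 1.0 - w_a
    combined_est = w_a * est_a + w_b * est_b
    # Combined variance is always <= min(var_a, var_b): the fusion benefit.
    combined_var = (var_a * var_b) / (var_a + var_b)
    return combined_est, combined_var

# Hypothetical example: vestibular heading estimate 30 deg (variance 16),
# proprioceptive estimate 40 deg (variance 4). The fused estimate is pulled
# toward the more reliable proprioceptive cue.
est, var = mle_combine(30.0, 16.0, 40.0, 4.0)
print(est, var)  # 38.0 3.2
```

Note that under this model a small conflict between the cues shifts the combined estimate toward the more reliable input, which matches the weighted-average behavior reported for the conflict conditions.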
209
Abstract
The neural control of hand movement involves coordination of the sensory, motor, and memory systems. Recent studies have documented the motor coordinates for hand shape, but less is known about the corresponding patterns of somatosensory activity. To initiate this line of investigation, the present study characterized the sense of hand shape by evaluating the influence of differences in the amount of grasping or twisting force, and differences in forearm orientation. Human subjects were asked to use the left hand to report the perceived shape of the right hand. In the first experiment, six commonly grasped items were arranged on the table in front of the subject: bottle, doorknob, egg, notebook, carton, and pan. With eyes closed, subjects used the right hand to lightly touch, forcefully support, or imagine holding each object, while 15 joint angles were measured in each hand with a pair of wired gloves. The forces introduced by supporting or twisting did not influence the perceptual report of hand shape, but for most objects, the report was distorted in a consistent manner by differences in forearm orientation. Subjects appeared to adjust the intrinsic joint angles of the left hand, as well as the left wrist posture, so as to maintain the imagined object in its proper spatial orientation. In a second experiment, this result was largely replicated with unfamiliar objects. Thus, somatosensory and motor information appear to be coordinated in an object-based, spatial-coordinate system, sensitive to orientation relative to gravitational forces, but invariant to grasp forcefulness.
210
Current perspectives and methods in studying neural mechanisms of multisensory interactions. Neurosci Biobehav Rev 2011; 36:111-33. [PMID: 21569794] [DOI: 10.1016/j.neubiorev.2011.04.015]
Abstract
In the past decade neuroscience has witnessed major advances in the field of multisensory interactions. A large body of research has revealed several new types of cross-sensory interactions. In addition, multisensory interactions have been reported at temporal and spatial system levels previously thought of as strictly unimodal. We review the findings that have led to the current broad consensus that most, if not all, higher- as well as lower-level neural processes are in some form multisensory. We continue by outlining the progress that has been made in identifying the functional significance of different types of interactions, for example, in subserving stimulus binding and enhancement of perceptual certainty. Finally, we provide a critical introduction to cutting-edge methods, from Bayes-optimal integration to multivoxel pattern analysis, as applied to multisensory research at different system levels.
211
Naumer MJ, van den Bosch JJF, Wibral M, Kohler A, Singer W, Kaiser J, van de Ven V, Muckli L. Investigating human audio-visual object perception with a combination of hypothesis-generating and hypothesis-testing fMRI analysis tools. Exp Brain Res 2011; 213:309-20. [PMID: 21503649] [PMCID: PMC3155044] [DOI: 10.1007/s00221-011-2669-0]
Abstract
Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions of the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as definition of a distributed network of multisensory candidate regions including superior temporal, ventral occipito-temporal, posterior parietal and prefrontal regions. During an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine out of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and using independent datasets to test hypotheses generated from a data-driven analysis.
Affiliation(s)
- Marcus J Naumer
- Crossmodal Neuroimaging Lab, Institute of Medical Psychology, Goethe-University of Frankfurt, Heinrich-Hoffmann-Strasse 10, 60528 Frankfurt am Main, Germany.
212
Tarnutzer AA, Shaikh AG, Palla A, Straumann D, Marti S. Vestibulo-cerebellar disease impairs the central representation of self-orientation. Front Neurol 2011; 2:11. [PMID: 21431098] [PMCID: PMC3049414] [DOI: 10.3389/fneur.2011.00011]
Abstract
Transformation of head-fixed otolith signals into a space-fixed frame of reference is essential for perception of self-orientation and ocular motor control. In monkeys the nodulus and ventral uvula of the vestibulo-cerebellum facilitate this transformation by computing an internal estimate of direction of gravity. These experimental findings motivated the hypothesis that degeneration of the vestibulo-cerebellum in humans alters perceptual and ocular motor functions that rely on accurate estimates of gravity, such as the subjective visual vertical (SVV), static ocular counterroll (OCR), and gravity-dependent modulation of vertical ocular drifts. We assessed the SVV, OCR, and spontaneous vertical ocular drifts in 12 patients with chronic vestibulo-cerebellar disease and in 10 controls. Substantially increased variability in estimated SVV was noted in the patients. Furthermore, gravity-dependent modulation of spontaneous vertical ocular drifts along the pitch plane was significantly (p < 0.05) larger in the patients. However, the gain and variability of static OCR and errors in SVV were not significantly different. In conclusion, in chronic vestibulo-cerebellar disease SVV and OCR remain intact except for an abnormal variability in the perception of verticality and impaired stabilization of gaze mediated by the otoliths. These findings suggest that OCR and perceived vertical are relatively independent of the cerebellum unless there is a cerebellar imbalance, as in an acute unilateral cerebellar stroke. The increased trial-to-trial SVV variability may be a general feature of cerebellar disease, since one function of the cerebellum may be to compensate for such variability. SVV variability might therefore be useful to monitor disease progression and treatment response in patients.
213
Frisoli A, Solazzi M, Reiner M, Bergamasco M. The contribution of cutaneous and kinesthetic sensory modalities in haptic perception of orientation. Brain Res Bull 2010; 85:260-6. [PMID: 21134423] [DOI: 10.1016/j.brainresbull.2010.11.011]
Abstract
The aim of this study was to understand the integration of cutaneous and kinesthetic sensory modalities in haptic perception of shape orientation. A specific robotic apparatus was employed to simulate the exploration of virtual surfaces by active touch with two fingers, with kinesthetic-only, cutaneous-only, and combined sensory feedback. The cutaneous feedback was capable of displaying the local surface orientation at the contact point, through a small plate indenting the fingerpad at contact. A psychophysical test based on signal detection theory (SDT) was conducted on six subjects to assess the discrimination threshold of angle perception between two parallel surfaces, with three sensory modalities and two shape sizes. Results show that the cutaneous sensory modality is not affected by shape size, whereas kinesthetic performance decreases with smaller sizes. Cutaneous and kinesthetic sensory cues are integrated according to a Bayesian model, so that the combined sensory stimulation always performs better than either modality alone.
Affiliation(s)
- Antonio Frisoli
- PERCRO-CEIICP, Scuola Superiore Sant'Anna, Viale Rinaldo Piaggio, 34, 56025 Pontedera, Pisa, Italy.
214
Sound-induced enhancement of low-intensity vision: multisensory influences on human sensory-specific cortices and thalamic bodies relate to perceptual enhancement of visual detection sensitivity. J Neurosci 2010; 30:13609-23. [PMID: 20943902] [DOI: 10.1523/jneurosci.4524-09.2010]
Abstract
Combining information across modalities can affect sensory performance. We studied how co-occurring sounds modulate behavioral visual detection sensitivity (d'), and neural responses, for visual stimuli of higher or lower intensity. Co-occurrence of a sound enhanced human detection sensitivity for lower- but not higher-intensity visual targets. Functional magnetic resonance imaging (fMRI) linked this to boosts in activity-levels for sensory-specific visual and auditory cortex, plus multisensory superior temporal sulcus (STS), specifically for a lower-intensity visual event when paired with a sound. Thalamic structures in visual and auditory pathways, the lateral and medial geniculate bodies, respectively (LGB, MGB), showed a similar pattern. Subject-by-subject psychophysical benefits correlated with corresponding fMRI signals in visual, auditory, and multisensory regions. We also analyzed differential "coupling" patterns of LGB and MGB with other regions in the different experimental conditions. Effective-connectivity analyses showed enhanced coupling of sensory-specific thalamic bodies with the affected cortical sites during enhanced detection of lower-intensity visual events paired with sounds. Coupling strength between visual and auditory thalamus with cortical regions, including STS, covaried parametrically with the psychophysical benefit for this specific multisensory context. Our results indicate that multisensory enhancement of detection sensitivity for low-contrast visual stimuli by co-occurring sounds reflects a brain network involving not only established multisensory STS and sensory-specific cortex but also visual and auditory thalamus.
215
Auditory-visual multisensory interactions in humans: timing, topography, directionality, and sources. J Neurosci 2010; 30:12572-80. [PMID: 20861363] [DOI: 10.1523/jneurosci.1099-10.2010]
Abstract
Current models of brain organization include multisensory interactions at early processing stages and within low-level, including primary, cortices. Embracing this model with regard to auditory-visual (AV) interactions in humans remains problematic. Controversy surrounds the application of an additive model to the analysis of event-related potentials (ERPs), and conventional ERP analysis methods have yielded discordant latencies of effects and permitted limited neurophysiologic interpretability. While hemodynamic imaging and transcranial magnetic stimulation studies provide general support for the above model, the precise timing, superadditive/subadditive directionality, topographic stability, and sources remain unresolved. We recorded ERPs in humans to attended, but task-irrelevant stimuli that did not require an overt motor response, thereby circumventing paradigmatic caveats. We applied novel ERP signal analysis methods to provide details concerning the likely bases of AV interactions. First, nonlinear interactions occur at 60-95 ms after stimulus and are the consequence of topographic, rather than pure strength, modulations in the ERP. AV stimuli engage distinct configurations of intracranial generators, rather than simply modulating the amplitude of unisensory responses. Second, source estimations (and statistical analyses thereof) identified primary visual, primary auditory, and posterior superior temporal regions as mediating these effects. Finally, scalar values of current densities in all of these regions exhibited functionally coupled, subadditive nonlinear effects, a pattern increasingly consistent with the mounting evidence in nonhuman primates. In these ways, we demonstrate how neurophysiologic bases of multisensory interactions can be noninvasively identified in humans, allowing for a synthesis across imaging methods on the one hand and species on the other.
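The additive model at issue in this abstract tests for nonlinear multisensory interaction by comparing the audio-visual response against the sum of the unisensory responses: any residual AV - (A + V) indicates superadditive (positive) or subadditive (negative) integration. A toy illustration of that computation on synthetic amplitude traces; all waveform values are invented for illustration:

```python
# Additive-model test for audio-visual (AV) interaction on synthetic ERP
# amplitude traces. Under the null (purely additive) model the residual
# AV - (A + V) is zero at every time sample; positive residuals indicate
# superadditive and negative residuals subadditive interactions.
# All amplitudes here are invented for illustration.

def interaction_residual(erp_av, erp_a, erp_v):
    """Pointwise additive-model residual, one value per time sample."""
    return [round(av - (a + v), 10) for av, a, v in zip(erp_av, erp_a, erp_v)]

erp_a  = [0.0, 1.0, 2.0, 1.0]   # auditory-alone response (uV)
erp_v  = [0.0, 0.5, 1.5, 0.5]   # visual-alone response (uV)
erp_av = [0.0, 1.2, 3.0, 1.5]   # audio-visual response (uV)

residual = interaction_residual(erp_av, erp_a, erp_v)
print(residual)  # [0.0, -0.3, -0.5, 0.0] -> subadditive at the middle samples
```

The study's point is that such residuals can reflect topographic reconfiguration rather than pure amplitude scaling, which is why the authors supplement this amplitude test with topographic and source analyses.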
216
Talsma D, Senkowski D, Soto-Faraco S, Woldorff MG. The multifaceted interplay between attention and multisensory integration. Trends Cogn Sci 2010; 14:400-10. [PMID: 20675182] [DOI: 10.1016/j.tics.2010.06.008]
Abstract
Multisensory integration has often been characterized as an automatic process. Recent findings indicate that multisensory integration can occur across various stages of stimulus processing that are linked to, and can be modulated by, attention. Stimulus-driven, bottom-up mechanisms induced by crossmodal interactions can automatically capture attention towards multisensory events, particularly when competition to focus elsewhere is relatively low. Conversely, top-down attention can facilitate the integration of multisensory inputs and lead to a spread of attention across sensory modalities. These findings point to a more intimate and multifaceted interplay between attention and multisensory integration than was previously thought. We review developments in the current understanding of the interactions between attention and multisensory processing, and propose a framework that unifies previous, apparently discordant, findings.
Affiliation(s)
- Durk Talsma
- Department of Cognitive Psychology and Ergonomics, University of Twente, P.O. Box 215, 7500 AE Enschede, The Netherlands.
217
Schuler JR, Bockisch CJ, Straumann D, Tarnutzer AA. Precision and accuracy of the subjective haptic vertical in the roll plane. BMC Neurosci 2010; 11:83. [PMID: 20630097] [PMCID: PMC2912915] [DOI: 10.1186/1471-2202-11-83]
Abstract
BACKGROUND: When roll-tilted, the subjective visual vertical (SVV) deviates up to 40 degrees from earth-vertical, and trial-to-trial variability increases with head roll. Imperfections in the central processing of visual information were postulated to explain these roll-angle dependent errors. For experimental conditions devoid of visual input, e.g. adjustments of body posture or of an object along vertical in darkness, significantly smaller errors were noted. Whereas the accuracy of verticality adjustments seems to depend strongly on the paradigm, we hypothesize that the precision, i.e. the inverse of trial-to-trial variability, is less influenced by the experimental setup and mainly reflects properties of the otoliths. Here we measured the subjective haptic vertical (SHV) and compared findings with previously reported SVV data. Twelve healthy right-handed human subjects (handedness assessed based on subjects' verbal report) adjusted a rod with the right hand along perceived earth-vertical during static head roll-tilts (0-360 degrees, in steps of 20 degrees).
RESULTS: SHV adjustments showed a tendency for clockwise rod rotations to deviate counter-clockwise and for counter-clockwise rod rotations to deviate clockwise, indicating hysteresis. Clockwise rod rotations resulted in counter-clockwise shifts of perceived earth-vertical up to -11.7 degrees and an average counter-clockwise SHV shift over all roll angles of -3.3 degrees (+/- 11.0 degrees; +/- 1 SD). Counter-clockwise rod rotations yielded peak SHV deviations in clockwise direction of 8.9 degrees and an average clockwise SHV shift over all roll angles of 1.8 degrees (+/- 11.1 degrees). Trial-to-trial variability was minimal in upright position, increased with increasing roll (peaking around 120-140 degrees), and decreased to intermediate values in upside-down orientation. Compared to SVV, SHV variability near upright and upside-down was non-significantly (p > 0.05) larger; both showed an m-shaped pattern of variability as a function of roll position.
CONCLUSIONS: The reduction of adjustment errors by eliminating visual input supports the notion that deviations between perceived and actual earth-vertical in roll-tilted positions arise from central processing of visual information. The shared roll-tilt dependent modulation of trial-to-trial variability for both SVV and SHV, on the other hand, indicates that the perception of earth-verticality is dominated by the same sensory signal, i.e. the otolith signal, independent of whether the line/rod setting is under visual or tactile control.
Affiliation(s)
- Jeanine R Schuler
- Department of Neurology, Zurich University Hospital, Zurich, Switzerland
218
Multisensory integration: resolving sensory ambiguities to build novel representations. Curr Opin Neurobiol 2010; 20:353-60. [PMID: 20471245] [DOI: 10.1016/j.conb.2010.04.009]
Abstract
Multisensory integration plays several important roles in the nervous system. One is to combine information from multiple complementary cues to improve stimulus detection and discrimination. Another is to resolve peripheral sensory ambiguities and create novel internal representations that do not exist at the level of individual sensors. Here we focus on how ambiguities inherent in vestibular, proprioceptive and visual signals are resolved to create behaviorally useful internal estimates of our self-motion. We review recent studies that have shed new light on the nature of these estimates and how multiple, but individually ambiguous, sensory signals are processed and combined to compute them. We emphasize the need to combine experiments with theoretical insights to understand the transformations that are being performed.
219
Attention as a decision in information space. Trends Cogn Sci 2010; 14:240-8. [PMID: 20399701] [DOI: 10.1016/j.tics.2010.03.001]
Abstract
Decision formation and attention are two fundamental processes through which we select, respectively, appropriate actions or sources of information. Although both functions have been studied in the oculomotor system, we lack a unified view explaining both forms of selection. We review evidence showing that parietal neurons encoding saccade motor decisions also carry signals of attention (perceptual selection) that are independent of the metrics, modality and reward of an action. We propose that attention implements a specialized form of decision based on the utility of information. Thus, oculomotor control depends on two interacting but distinct processes: attentional decisions that assign value to sources of information and motor decisions that flexibly link the selected information with action.
220
Green AM, Angelaki DE. Internal models and neural computation in the vestibular system. Exp Brain Res 2010; 200:197-222. [PMID: 19937232] [PMCID: PMC2853943] [DOI: 10.1007/s00221-009-2054-4]
Abstract
The vestibular system is vital for motor control and spatial self-motion perception. Afferents from the otolith organs and the semicircular canals converge with optokinetic, somatosensory and motor-related signals in the vestibular nuclei, which are reciprocally interconnected with the vestibulocerebellar cortex and deep cerebellar nuclei. Here, we review the properties of the many cell types in the vestibular nuclei, as well as some fundamental computations implemented within this brainstem-cerebellar circuitry. These include the sensorimotor transformations for reflex generation, the neural computations for inertial motion estimation, the distinction between active and passive head movements, as well as the integration of vestibular and proprioceptive information for body motion estimation. A common theme in the solution to such computational problems is the concept of internal models and their neural implementation. Recent studies have shed new insights into important organizational principles that closely resemble those proposed for other sensorimotor systems, where their neural basis has often been more difficult to identify. As such, the vestibular system provides an excellent model to explore common neural processing strategies relevant both for reflexive and for goal-directed, voluntary movement as well as perception.
Affiliation(s)
- Andrea M Green
- Dépt. de Physiologie, Université de Montréal, 2960 Chemin de la Tour, Rm. 4141, Montreal, QC H3T 1J4, Canada.
221
Barr RC, Nolte LW, Pollard AE. Bayesian quantitative electrophysiology and its multiple applications in bioengineering. IEEE Rev Biomed Eng 2010; 3:155-68. [PMID: 22275206] [PMCID: PMC3935245] [DOI: 10.1109/rbme.2010.2089375] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3]
Abstract
Bayesian interpretation of observations began in the early 1700s, and scientific electrophysiology began in the late 1700s. For two centuries the two fields developed mostly separately, in part because quantitative Bayesian interpretation, in principle a powerful method of relating measurements to their underlying sources, often required too many steps to be feasible with hand calculation in real applications. As computer power became widespread in the later 1900s, Bayesian models and interpretation moved rapidly but unevenly from the domain of mathematical statistics into applications. Use of Bayesian models is now growing rapidly in electrophysiology. Bayesian models are well suited to the electrophysiological environment, offering a direct and natural way to express what is known (and unknown) and to evaluate which of many alternatives is most likely the source of the observations; the closely related receiver operating characteristic (ROC) curve is a powerful tool for making decisions. Yet many people still ask what such models are for in electrophysiology and what particular advantages they provide. To examine this question, this review identifies a number of electrophysiological papers in bioengineering, arising from questions in several organ systems, and shows where Bayesian electrophysiological models or ROC curves were important to the results achieved.
Affiliation(s)
- Roger C. Barr
- Departments of Biomedical Engineering and Pediatrics, Duke University, Durham, NC 27708 USA
- Loren W. Nolte
- Department of Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, NC 27708 USA
- Andrew E. Pollard
- Departments of Biomedical Engineering and Pediatrics, Duke University, Durham, NC 27708 USA
222
Tarnutzer AA, Bockisch CJ, Straumann D. Roll-dependent modulation of the subjective visual vertical: contributions of head- and trunk-based signals. J Neurophysiol 2009; 103:934-41. [PMID: 20018837] [DOI: 10.1152/jn.00407.2009] [Citation(s) in RCA: 37] [Impact Index Per Article: 2.5]
Abstract
Precision and accuracy of the subjective visual vertical (SVV) modulate in the roll plane. At large roll angles, systematic SVV errors are biased toward the subject's body-longitudinal axis and SVV precision is decreased. To explain this, SVV models typically implement a bias signal, or a prior, in a head-fixed reference frame and assume the sensory input to be optimally tuned along the head-longitudinal axis. We tested the pattern of SVV adjustments both in terms of accuracy and precision in experiments in which the head and the trunk reference frames were not aligned. Twelve subjects were placed on a turntable with the head rolled about 28° counterclockwise relative to the trunk by lateral tilt of the neck to dissociate the orientation of head- and trunk-fixed sensors relative to gravity. Subjects were brought to various positions (roll of the head- or trunk-longitudinal axis relative to gravity: 0°, ±75°) and aligned an arrow with perceived vertical. Both accuracy and precision of the SVV were significantly (P < 0.05) better when the head-longitudinal axis was aligned with gravity. Comparing absolute SVV errors for clockwise and counterclockwise roll tilts, statistical analysis yielded no significant differences (P > 0.05) when referenced relative to head upright, but differed significantly (P < 0.001) when referenced relative to trunk upright. These findings indicate that the bias signal, which drives the SVV toward the subject's body-longitudinal axis, operates in a head-fixed reference frame. Further analysis of SVV precision supports the hypothesis that head-based graviceptive signals provide the predominant input for internal estimates of visual vertical.
Affiliation(s)
- A A Tarnutzer
- Neurology Department, Zurich University Hospital, Frauenklinikstrasse 26, CH-8091 Zurich, Switzerland.
223
Abstract
Recent studies have described vestibular responses in the dorsal medial superior temporal area (MSTd), a region of extrastriate visual cortex thought to be involved in self-motion perception. The pathways by which vestibular signals are conveyed to area MSTd are currently unclear, and one possibility is that vestibular signals are already present in areas that are known to provide visual inputs to MSTd. Thus, we examined whether selective vestibular responses are exhibited by single neurons in the middle temporal area (MT), a visual motion-sensitive region that projects heavily to area MSTd. We compared responses in MT and MSTd to three-dimensional rotational and translational stimuli that were either presented using a motion platform (vestibular condition) or simulated using optic flow (visual condition). When monkeys fixated a visual target generated by a projector, half of MT cells (and most MSTd neurons) showed significant tuning during the vestibular rotation condition. However, when the fixation target was generated by a laser in a dark room, most MT neurons lost their vestibular tuning whereas most MSTd neurons retained their selectivity. Similar results were obtained for free viewing in darkness. Our findings indicate that MT neurons do not show genuine vestibular responses to self-motion; rather, their tuning in the vestibular rotation condition can be explained by retinal slip due to a residual vestibulo-ocular reflex. Thus, the robust vestibular signals observed in area MSTd do not arise through inputs from area MT.