1
Keshavarzi S, Velez-Fort M, Margrie TW. Cortical Integration of Vestibular and Visual Cues for Navigation, Visual Processing, and Perception. Annu Rev Neurosci 2023;46:301-320. PMID: 37428601. DOI: 10.1146/annurev-neuro-120722-100503.
Abstract
Despite increasing evidence of its involvement in several key functions of the cerebral cortex, the vestibular sense rarely enters our consciousness. Indeed, the extent to which these internal signals are incorporated within cortical sensory representation, and how they might be relied upon for sensory-driven decision-making (for example, during spatial navigation), remain to be understood. Recent novel experimental approaches in rodents have probed both the physiological and behavioral significance of vestibular signals and indicate that their widespread integration with vision improves both the cortical representation and perceptual accuracy of self-motion and orientation. Here, we summarize these recent findings with a focus on cortical circuits involved in visual perception and spatial navigation and highlight the major remaining knowledge gaps. We suggest that vestibulo-visual integration reflects a process of constant updating regarding the status of self-motion, and that access to such information by the cortex is used for sensory perception and predictions that may be implemented for rapid, navigation-related decision-making.
Affiliation(s)
- Sepiedeh Keshavarzi
- The Sainsbury Wellcome Centre for Neural Circuits and Behavior, University College London, London, United Kingdom
- Mateo Velez-Fort
- The Sainsbury Wellcome Centre for Neural Circuits and Behavior, University College London, London, United Kingdom
- Troy W Margrie
- The Sainsbury Wellcome Centre for Neural Circuits and Behavior, University College London, London, United Kingdom
2
Cortical Mechanisms of Multisensory Linear Self-motion Perception. Neurosci Bull 2022;39:125-137. PMID: 35821337. PMCID: PMC9849545. DOI: 10.1007/s12264-022-00916-8.
Abstract
Accurate self-motion perception, which is critical for organisms to survive, is a process involving multiple sensory cues. The two most powerful cues are visual (optic flow) and vestibular (inertial motion). Psychophysical studies have indicated that humans and nonhuman primates integrate the two cues to improve the estimation of self-motion direction, often in a statistically optimal (Bayesian) way. In the last decade, single-unit recordings in awake, behaving animals have provided valuable neurophysiological data with high spatial and temporal resolution, giving insight into possible neural mechanisms underlying multisensory self-motion perception. Here, we review these findings, along with new evidence from the most recent studies focusing on the temporal dynamics of signals in different modalities. We show that, in light of new data, conventional views about the cortical mechanisms underlying visuo-vestibular integration for linear self-motion are challenged. We propose that different temporal component signals may mediate different functions, a possibility that requires future study.
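The statistically optimal (Bayesian) integration described in this abstract has a simple closed form when each cue carries independent Gaussian noise: cues are weighted by their reliability (inverse variance), and the fused estimate is more reliable than either cue alone. A minimal sketch; the heading values and variances below are hypothetical, not data from the study:

```python
def integrate_cues(est_vis, var_vis, est_vest, var_vest):
    """Reliability-weighted (Bayes-optimal) fusion of two Gaussian cues.

    Each cue is weighted by its inverse variance; the combined variance
    is lower than that of either cue alone.
    """
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_vest)
    est_comb = w_vis * est_vis + (1.0 - w_vis) * est_vest
    var_comb = 1.0 / (1.0 / var_vis + 1.0 / var_vest)
    return est_comb, var_comb

# Hypothetical heading estimates (degrees): a noisy visual cue and a
# more reliable vestibular cue.
est, var = integrate_cues(est_vis=10.0, var_vis=4.0, est_vest=4.0, var_vest=2.0)
# The fused estimate (6.0 deg) sits closer to the more reliable vestibular
# cue, and its variance (4/3) is below both single-cue variances.
```

This weighting is exactly the psychophysical pattern the abstract refers to: degrading one cue shifts behavior toward the other cue in proportion to the change in reliability.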
3
Kim HR, Angelaki DE, DeAngelis GC. A neural mechanism for detecting object motion during self-motion. eLife 2022;11:e74971. PMID: 35642599. PMCID: PMC9159750. DOI: 10.7554/eLife.74971.
Abstract
Detection of objects that move in a scene is a fundamental computation performed by the visual system. This computation is greatly complicated by observer motion, which causes most objects to move across the retinal image. How the visual system detects scene-relative object motion during self-motion is poorly understood. Human behavioral studies suggest that the visual system may identify local conflicts between motion parallax and binocular disparity cues to depth and may use these signals to detect moving objects. We describe a novel mechanism for performing this computation based on neurons in macaque middle temporal (MT) area with incongruent depth tuning for binocular disparity and motion parallax cues. Neurons with incongruent tuning respond selectively to scene-relative object motion, and their responses are predictive of perceptual decisions when animals are trained to detect a moving object during self-motion. This finding establishes a novel functional role for neurons with incongruent tuning for multiple depth cues.
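The proposed mechanism can be caricatured in a few lines: for a world-stationary object, motion parallax and binocular disparity imply the same depth, so a detector that compares the two depth estimates flags scene-relative motion whenever they disagree (the role attributed here to incongruently tuned MT neurons). A toy sketch with hypothetical depth values and threshold:

```python
def flags_moving_object(depth_from_parallax, depth_from_disparity, tol=0.2):
    """Detect scene-relative object motion as a conflict between depth cues.

    For a stationary object, the depth implied by motion parallax matches
    the depth implied by binocular disparity. Independent object motion
    adds retinal motion that parallax misattributes to depth, breaking
    the agreement between the two cues.
    """
    return abs(depth_from_parallax - depth_from_disparity) > tol

# Stationary object: the two depth cues agree, so no motion is flagged.
assert not flags_moving_object(2.0, 2.05)
# Moving object: extra retinal motion inflates the parallax depth estimate.
assert flags_moving_object(3.1, 2.0)
```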
Affiliation(s)
- HyungGoo R Kim
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon, Republic of Korea; Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States; Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- Dora E Angelaki
- Center for Neural Science, New York University, New York, United States
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States
4
McFadyen JR, Heider B, Karkhanis AN, Cloherty SL, Muñoz F, Siegel RM, Morris AP. Robust Coding of Eye Position in Posterior Parietal Cortex despite Context-Dependent Tuning. J Neurosci 2022;42:4116-4130. PMID: 35410881. PMCID: PMC9121829. DOI: 10.1523/JNEUROSCI.0674-21.2022.
Abstract
Neurons in posterior parietal cortex (PPC) encode many aspects of the sensory world (e.g., scene structure), the posture of the body, and plans for action. For a downstream computation, however, only some of these dimensions are relevant; the rest are "nuisance variables" because their influence on neural activity changes with sensory and behavioral context, potentially corrupting the read-out of relevant information. Here we show that a key postural variable for vision (eye position) is represented robustly in male macaque PPC across a range of contexts, although the tuning of single neurons depended strongly on context. Contexts were defined by different stages of a visually guided reaching task, including (1) a visually sparse epoch, (2) a visually rich epoch, (3) a "go" epoch in which the reach was cued, and (4) the reach itself. Eye position was constant within trials but varied across trials in a 3 × 3 grid spanning 24° × 24°. Using demixed principal component analysis of neural spike counts, we found that the subspace of the population response encoding eye position is orthogonal to that encoding task context. Accordingly, a context-naive (fixed-parameter) decoder was nevertheless able to estimate eye position reliably across contexts. Errors were small given the sample size (∼1.78°) and would likely be even smaller with larger populations. Moreover, they were comparable to those of decoders that were optimized for each context. Our results suggest that population codes in PPC shield encoded signals from crosstalk to support robust sensorimotor transformations across contexts.
Significance Statement: Neurons in posterior parietal cortex (PPC) that are sensitive to gaze direction are thought to play a key role in spatial perception and behavior (e.g., reaching, navigation) and provide a potential substrate for brain-controlled prosthetics. Many, however, change their tuning under different sensory and behavioral contexts, raising the prospect that they provide unreliable representations of egocentric space. Here, we analyze the structure of encoding dimensions for gaze direction and context in PPC during different stages of a visually guided reaching task. We use demixed dimensionality reduction and decoding techniques to show that the coding of gaze direction in PPC is mostly invariant to context. This suggests that PPC can provide reliable spatial information across sensory and behavioral contexts.
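The decoding logic can be illustrated with a toy simulation: if the population dimensions carrying eye position are orthogonal to those carrying context, a linear decoder fit in one context transfers to the other. Everything below (neuron counts, noise levels, the ridge penalty) is hypothetical and is not the study's actual analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 50, 400

# Ground truth: eye position varies across trials; context is binary
# (e.g., visually sparse vs. visually rich epoch).
eye = rng.uniform(-12.0, 12.0, n_trials)      # degrees
context = rng.integers(0, 2, n_trials)

# Population code: the eye-position axis and the context axis are
# constructed to be orthogonal, as the dPCA result suggests.
w_eye = rng.normal(size=n_neurons)
w_ctx = rng.normal(size=n_neurons)
w_ctx -= (w_ctx @ w_eye) / (w_eye @ w_eye) * w_eye   # orthogonalize
rates = np.outer(eye, w_eye) + np.outer(context, 5.0 * w_ctx)
rates += rng.normal(scale=0.5, size=rates.shape)      # trial-to-trial noise

# Fit a fixed-parameter (ridge) decoder using context-0 trials only...
train, test = context == 0, context == 1
R = rates[train]
coef = np.linalg.solve(R.T @ R + 1e3 * np.eye(n_neurons), R.T @ eye[train])

# ...then decode the held-out context. Because the context signal lies
# outside the eye-position subspace, the decoder transfers.
err = np.abs(rates[test] @ coef - eye[test]).mean()
print(f"mean cross-context error: {err:.2f} deg")  # small vs. the 24 deg range
```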
Affiliation(s)
- Jamie R McFadyen
- Neuroscience Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Clayton, VIC 3800, Australia
- Barbara Heider
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ 07102
- Anushree N Karkhanis
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ 07102
- Shaun L Cloherty
- School of Engineering, RMIT University, Melbourne, VIC 3001, Australia
- Fabian Muñoz
- Department of Neuroscience, Columbia University, New York, NY 10027; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027
- Ralph M Siegel
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ 07102
- Adam P Morris
- Neuroscience Program, Biomedicine Discovery Institute, Department of Physiology, Monash University, Clayton, VIC 3800, Australia; Monash Data Futures Institute, Monash University, Clayton, VIC 3800, Australia
5
Di Marco S, Sulpizio V, Bellagamba M, Fattori P, Galati G, Galletti C, Lappe M, Maltempo T, Pitzalis S. Multisensory integration in cortical regions responding to locomotion-related visual and somatomotor signals. Neuroimage 2021;244:118581. PMID: 34543763. DOI: 10.1016/j.neuroimage.2021.118581.
Abstract
During real-world locomotion, in order to move along a path or avoid an obstacle, continuous changes in self-motion direction (i.e., heading) are needed. Control of heading changes during locomotion requires the integration of multiple signals (i.e., visual, somatomotor, vestibular). Recent fMRI studies have shown that both somatomotor areas (human PEc [hPEc], human PE [hPE], primary somatosensory cortex [S-I]) and egomotion visual regions (cingulate sulcus visual area [CSv], posterior cingulate area [pCi], posterior insular cortex [PIC]) respond to both leg movements and egomotion-compatible visual stimulation, suggesting a role in analyzing both the visual attributes of egomotion and somatomotor signals, with the aim of guiding locomotion. However, whether these regions are able to integrate egomotion-related visual signals with somatomotor inputs coming from leg movements during heading changes remains an open question. Here we used a combined approach of individual functional localizers and task-evoked activity by fMRI. In thirty subjects we first localized three egomotion areas (CSv, pCi, PIC) and three somatomotor regions (S-I, hPE, hPEc). Then, we tested their responses in a multisensory integration experiment combining visual and somatomotor signals relevant to locomotion in congruent or incongruent trials. We used an fMR-adaptation paradigm to explore the sensitivity to the repeated presentation of these bimodal stimuli in the six regions of interest. Results revealed that hPE, S-I and CSv showed an adaptation effect regardless of congruency, while PIC, pCi and hPEc showed sensitivity to congruency. PIC exhibited a preference for congruent trials compared to incongruent trials. Areas pCi and hPEc exhibited an adaptation effect only for congruent and incongruent trials, respectively.
PIC, pCi and hPEc sensitivity to the congruency relationship between visual (locomotion-compatible) cues and (leg-related) somatomotor inputs suggests that these regions are involved in multisensory integration processes, likely in order to guide/adjust leg movements during heading changes.
Affiliation(s)
- Sara Di Marco
- Department of Psychology, "Sapienza" University of Rome, Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Valentina Sulpizio
- Department of Psychology, "Sapienza" University of Rome, Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Martina Bellagamba
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy
- Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Gaspare Galati
- Department of Psychology, "Sapienza" University of Rome, Rome, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Claudio Galletti
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Markus Lappe
- Institute for Psychology, University of Muenster, Muenster, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
- Teresa Maltempo
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy
- Sabrina Pitzalis
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy
6
7
Abstract
We perceive our environment through multiple independent sources of sensory input. The brain is tasked with deciding whether multiple signals are produced by the same or different events (i.e., solve the problem of causal inference). Here, we train a neural network to solve causal inference by either combining or separating visual and vestibular inputs in order to estimate self- and scene motion. We find that the network recapitulates key neurophysiological (i.e., congruent and opposite neurons) and behavioral (e.g., reliability-based cue weighting) properties of biological systems. We show how congruent and opposite neurons support motion estimation and how the balance in activity between these subpopulations determines whether to combine or separate multisensory signals. Sitting in a static railway carriage can produce illusory self-motion if the train on an adjoining track moves off. While our visual system registers motion, vestibular signals indicate that we are stationary. The brain is faced with a difficult challenge: is there a single cause of sensations (I am moving) or two causes (I am static, another train is moving)? If a single cause, integrating signals produces a more precise estimate of self-motion, but if not, one cue should be ignored. In many cases, this process of causal inference works without error, but how does the brain achieve it? Electrophysiological recordings show that the macaque medial superior temporal area contains many neurons that encode combinations of vestibular and visual motion cues. Some respond best to vestibular and visual motion in the same direction (“congruent” neurons), while others prefer opposing directions (“opposite” neurons). Congruent neurons could underlie cue integration, but the function of opposite neurons remains a puzzle. Here, we seek to explain this computational arrangement by training a neural network model to solve causal inference for motion estimation. 
Like biological systems, the model develops congruent and opposite units and recapitulates known behavioral and neurophysiological observations. We show that all units (both congruent and opposite) contribute to motion estimation. Importantly, however, it is the balance between their activity that distinguishes whether visual and vestibular cues should be integrated or separated. This explains the computational purpose of puzzling neural representations and shows how a relatively simple feedforward network can solve causal inference.
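The computation this network learns has an explicit normative counterpart: Bayesian causal inference over a common-cause variable, in the style of Körding and colleagues. With Gaussian likelihoods, both the posterior probability of a common cause and the model-averaged self-motion estimate can be written in closed form. A sketch with hypothetical noise parameters (not the paper's trained network):

```python
import math

def gauss(x, mu, var):
    """Gaussian density with mean mu and variance var."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def causal_inference(x_vis, x_vest, var_vis, var_vest, var_prior, p_common):
    """Bayesian causal inference for self-motion (Gaussian sketch).

    Returns p(common cause | cues) and the model-averaged self-motion
    estimate. Under one cause the cues are integrated; under two causes
    self-motion is estimated from the vestibular cue alone.
    """
    # Marginal likelihood of the measurement pair under a common cause
    # (zero-mean Gaussian prior over self-motion with variance var_prior).
    denom = var_vis * var_vest + var_vis * var_prior + var_vest * var_prior
    like1 = math.exp(-0.5 * ((x_vis - x_vest) ** 2 * var_prior
                             + x_vis ** 2 * var_vest
                             + x_vest ** 2 * var_vis) / denom) / (2 * math.pi * math.sqrt(denom))
    # Under two causes the measurements are independent.
    like2 = gauss(x_vis, 0.0, var_vis + var_prior) * gauss(x_vest, 0.0, var_vest + var_prior)

    post_c1 = like1 * p_common / (like1 * p_common + like2 * (1 - p_common))

    # Conditional estimates: integrate if one cause, else trust the
    # vestibular cue for self-motion; then average over the two models.
    s1 = ((x_vis / var_vis + x_vest / var_vest)
          / (1 / var_vis + 1 / var_vest + 1 / var_prior))
    s2 = (x_vest / var_vest) / (1 / var_vest + 1 / var_prior)
    return post_c1, post_c1 * s1 + (1 - post_c1) * s2

# Small cue conflict: likely a single cause, so the cues are fused.
p_small, _ = causal_inference(1.0, 0.0, 1.0, 1.0, 100.0, 0.5)
# Large conflict (the train illusion): likely two separate causes.
p_large, _ = causal_inference(8.0, 0.0, 1.0, 1.0, 100.0, 0.5)
assert p_small > p_large
```

With a small conflict the posterior favors one cause and the estimate integrates both cues; with a large conflict it favors two causes and self-motion is read out essentially from the vestibular cue alone, mirroring the balance between congruent and opposite subpopulations described above.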
8
Wild B, Treue S. Primate extrastriate cortical area MST: a gateway between sensation and cognition. J Neurophysiol 2021;125:1851-1882. PMID: 33656951. DOI: 10.1152/jn.00384.2020.
Abstract
Primate visual cortex consists of dozens of distinct brain areas, each providing a highly specialized component to the sophisticated task of encoding the incoming sensory information and creating a representation of our visual environment that underlies our perception and action. One such area is the medial superior temporal cortex (MST), a motion-sensitive, direction-selective part of the primate visual cortex. It receives most of its input from the middle temporal (MT) area, but MST cells have larger receptive fields and respond to more complex motion patterns. The finding that MST cells are tuned for optic flow patterns has led to the suggestion that the area plays an important role in the perception of self-motion. This hypothesis has received further support from studies showing that some MST cells also respond selectively to vestibular cues. Furthermore, the area is part of a network that controls the planning and execution of smooth pursuit eye movements and its activity is modulated by cognitive factors, such as attention and working memory. This review of more than 90 studies focuses on providing clarity of the heterogeneous findings on MST in the macaque cortex and its putative homolog in the human cortex. From this analysis of the unique anatomical and functional position in the hierarchy of areas and processing steps in primate visual cortex, MST emerges as a gateway between perception, cognition, and action planning. Given this pivotal role, this area represents an ideal model system for the transition from sensation to cognition.
Affiliation(s)
- Benedict Wild
- Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany; Goettingen Graduate Center for Neurosciences, Biophysics, and Molecular Biosciences (GGNB), University of Goettingen, Goettingen, Germany
- Stefan Treue
- Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany; Faculty of Biology and Psychology, University of Goettingen, Goettingen, Germany; Leibniz-ScienceCampus Primate Cognition, Goettingen, Germany; Bernstein Center for Computational Neuroscience, Goettingen, Germany
9
The Effects of Depth Cues and Vestibular Translation Signals on the Rotation Tolerance of Heading Tuning in Macaque Area MSTd. eNeuro 2020;7:ENEURO.0259-20.2020. PMID: 33127626. PMCID: PMC7688306. DOI: 10.1523/ENEURO.0259-20.2020.
Abstract
When the eyes rotate during translational self-motion, the focus of expansion (FOE) in optic flow no longer indicates heading, yet heading judgements are largely unbiased. Much emphasis has been placed on the role of extraretinal signals in compensating for the visual consequences of eye rotation. However, recent studies also support a purely visual mechanism of rotation compensation in heading-selective neurons. Computational theories support a visual compensatory strategy but require different visual depth cues. We examined the rotation tolerance of heading tuning in macaque area MSTd using two different virtual environments, a frontoparallel (2D) wall and a 3D cloud of random dots. Both environments contained rotational optic flow cues (i.e., dynamic perspective), but only the 3D cloud stimulus contained local motion parallax cues, which are required by some models. The 3D cloud environment did not enhance the rotation tolerance of heading tuning for individual MSTd neurons, nor the accuracy of heading estimates decoded from population activity, suggesting a key role for dynamic perspective cues. We also added vestibular translation signals to optic flow, to test whether rotation tolerance is enhanced by non-visual cues to heading. We found no benefit of vestibular signals overall, but a modest effect for some neurons with significant vestibular heading tuning. We also find that neurons with more rotation tolerant heading tuning typically are less selective to pure visual rotation cues. Together, our findings help to clarify the types of information that are used to construct heading representations that are tolerant to eye rotations.
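The compensation problem can be made concrete with the standard pinhole flow equations: instantaneous optic flow is a translational field that scales with inverse depth plus a rotational field that is depth-independent. An estimate of the rotation, whether extraretinal or recovered from dynamic perspective cues, can therefore be subtracted to restore a heading-consistent focus of expansion. A small numerical sketch (focal length 1; all parameter values hypothetical):

```python
import numpy as np

def optic_flow(pts, depth, T, omega):
    """Image velocity of points under observer translation T = (Tx, Ty, Tz)
    and rotation omega = (wx, wy, wz), pinhole camera with focal length 1.

    The translational term scales with inverse depth; the rotational term
    ("dynamic perspective") does not depend on depth at all.
    """
    x, y = pts[:, 0], pts[:, 1]
    Tx, Ty, Tz = T
    wx, wy, wz = omega
    u = (-Tx + x * Tz) / depth + (x * y * wx - (1 + x ** 2) * wy + y * wz)
    v = (-Ty + y * Tz) / depth + ((1 + y ** 2) * wx - x * y * wy - x * wz)
    return np.stack([u, v], axis=1)

rng = np.random.default_rng(1)
pts = rng.uniform(-0.5, 0.5, size=(100, 2))      # image coordinates
depth = np.full(100, 5.0)                        # frontoparallel (2D) wall
T = (0.0, 0.0, 1.0)                              # forward self-motion
omega = (0.0, 0.05, 0.0)                         # eye rotation (pursuit)

observed = optic_flow(pts, depth, T, omega)
# The rotational field is the same at any depth, so it can be computed
# without knowing the scene structure (depth of 1 is arbitrary here).
rotation_field = optic_flow(pts, np.ones(100), (0.0, 0.0, 0.0), omega)
# Subtracting it recovers the pure translational flow, whose focus of
# expansion again indicates heading.
derotated = observed - rotation_field
assert np.allclose(derotated, optic_flow(pts, depth, T, (0.0, 0.0, 0.0)))
```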
10
Flexible coding of object motion in multiple reference frames by parietal cortex neurons. Nat Neurosci 2020;23:1004-1015. PMID: 32541964. PMCID: PMC7474851. DOI: 10.1038/s41593-020-0656-0.
Abstract
Neurons represent spatial information in diverse reference frames, but it remains unclear whether neural reference frames change with task demands and whether these changes can account for behavior. We examined how neurons represent the direction of a moving object during self-motion, while monkeys switched, from trial to trial, between reporting object direction in head- and world-centered reference frames. Self-motion information is needed to compute object motion in world coordinates, but should be ignored when judging object motion in head coordinates. Neural responses in the ventral intraparietal area are modulated by the task reference frame, such that population activity represents object direction in either reference frame. In contrast, responses in the lateral portion of the medial superior temporal area primarily represent object motion in head coordinates. Our findings demonstrate a neural representation of object motion that changes with task requirements.
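The reference-frame computation in this task has a simple vector form: object velocity in world coordinates is its head-centered velocity plus the observer's own velocity. A toy sketch (pure translation only; a full account would also handle eye and head rotation):

```python
def object_motion_in_world(v_obj_head, v_self):
    """Object velocity in world coordinates = its head-centered velocity
    plus the observer's self-motion velocity (pure translation assumed).

    Judging object motion in head coordinates means ignoring v_self;
    judging it in world coordinates means adding v_self back in.
    """
    return tuple(h + s for h, s in zip(v_obj_head, v_self))

# Observer translates rightward at 2 units/s while the object sweeps
# leftward at 2 units/s relative to the head: the object is actually
# stationary in the world.
assert object_motion_in_world((-2.0, 0.0), (2.0, 0.0)) == (0.0, 0.0)
# The same head-centered motion without self-motion is real object motion.
assert object_motion_in_world((-2.0, 0.0), (0.0, 0.0)) == (-2.0, 0.0)
```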
11
French RL, DeAngelis GC. Multisensory neural processing: from cue integration to causal inference. Curr Opin Physiol 2020;16:8-13. PMID: 32968701. DOI: 10.1016/j.cophys.2020.04.004.
Abstract
Neurophysiological studies of multisensory processing have largely focused on how the brain integrates information from different sensory modalities to form a coherent percept. However, in the natural environment, an important extra step is needed: the brain faces the problem of causal inference, which involves determining whether different sources of sensory information arise from the same environmental cause, such that integrating them is advantageous. Behavioral and computational studies have provided a strong foundation for studying causal inference, but studies of its neural basis have only recently been undertaken. This review focuses on recent advances regarding how the brain infers the causes of sensory inputs and uses this information to make robust perceptual estimates.
Affiliation(s)
- Ranran L French
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY
12
Rideaux R, Michael E, Welchman AE. Adaptation to Binocular Anticorrelation Results in Increased Neural Excitability. J Cogn Neurosci 2019;32:100-110. PMID: 31560264. DOI: 10.1162/jocn_a_01471.
Abstract
Throughout the brain, information from individual sources converges onto higher order neurons. For example, information from the two eyes first converges in binocular neurons in area V1. Some neurons are tuned to similarities between sources of information, which makes intuitive sense in a system striving to match multiple sensory signals to a single external cause-that is, establish causal inference. However, there are also neurons that are tuned to dissimilar information. In particular, some binocular neurons respond maximally to a dark feature in one eye and a light feature in the other. Despite compelling neurophysiological and behavioral evidence supporting the existence of these neurons [Katyal, S., Vergeer, M., He, S., He, B., & Engel, S. A. Conflict-sensitive neurons gate interocular suppression in human visual cortex. Scientific Reports, 8, 1239, 2018; Kingdom, F. A. A., Jennings, B. J., & Georgeson, M. A. Adaptation to interocular difference. Journal of Vision, 18, 9, 2018; Janssen, P., Vogels, R., Liu, Y., & Orban, G. A. At least at the level of inferior temporal cortex, the stereo correspondence problem is solved. Neuron, 37, 693-701, 2003; Tsao, D. Y., Conway, B. R., & Livingstone, M. S. Receptive fields of disparity-tuned simple cells in macaque V1. Neuron, 38, 103-114, 2003; Cumming, B. G., & Parker, A. J. Responses of primary visual cortical neurons to binocular disparity without depth perception. Nature, 389, 280-283, 1997], their function has remained opaque. To determine how neural mechanisms tuned to dissimilarities support perception, here we use electroencephalography to measure human observers' steady-state visually evoked potentials in response to change in depth after prolonged viewing of anticorrelated and correlated random-dot stereograms (RDS). We find that adaptation to anticorrelated RDS results in larger steady-state visually evoked potentials, whereas adaptation to correlated RDS has no effect. 
These results are consistent with recent theoretical work suggesting "what not" neurons play a suppressive role in supporting stereopsis [Goncalves, N. R., & Welchman, A. E. "What not" detectors help the brain see in depth. Current Biology, 27, 1403-1412, 2017]; that is, selective adaptation of neurons tuned to binocular mismatches reduces suppression resulting in increased neural excitability.
13
Zhang WH, Wang H, Chen A, Gu Y, Lee TS, Wong KM, Wu S. Complementary congruent and opposite neurons achieve concurrent multisensory integration and segregation. eLife 2019;8:e43753. PMID: 31120416. PMCID: PMC6565362. DOI: 10.7554/eLife.43753.
Abstract
Our brain perceives the world by exploiting multisensory cues to extract information about various aspects of external stimuli. The sensory cues from the same stimulus should be integrated to improve perception, and otherwise segregated to distinguish different stimuli. In reality, however, the brain faces the challenge of recognizing stimuli without knowing in advance the sources of sensory cues. To address this challenge, we propose that the brain conducts integration and segregation concurrently with complementary neurons. Studying the inference of heading-direction via visual and vestibular cues, we develop a network model with two reciprocally connected modules modeling interacting visual-vestibular areas. In each module, there are two groups of neurons whose tunings under each sensory cue are either congruent or opposite. We show that congruent neurons implement integration, while opposite neurons compute cue disparity information for segregation, and the interplay between two groups of neurons achieves efficient multisensory information processing.
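The division of labor between the two groups can be illustrated with toy cosine-tuned units (a caricature, not the paper's network model): a congruent unit prefers the same heading under both cues, while an opposite unit's visual preference is shifted by 180°. The length of each group's population vector then tracks cue agreement for congruent units and cue disparity for opposite units:

```python
import cmath
import math

def popvec_len(theta_vis, theta_vest, opposite=False, n=36):
    """Population-vector length of n cosine-tuned bimodal units.

    A congruent unit prefers the same direction under both cues; an
    opposite unit's visual preference is rotated by 180 degrees. Each
    unit's rate is summed into a complex vector along its preference.
    """
    total = 0j
    for k in range(n):
        pref = 2 * math.pi * k / n
        vis_pref = pref + (math.pi if opposite else 0.0)
        rate = (1 + math.cos(theta_vis - vis_pref)) + (1 + math.cos(theta_vest - pref))
        total += rate * cmath.exp(1j * pref)
    return abs(total)

# Cues agree: the congruent population responds strongly, opposite weakly.
agree_c = popvec_len(0.0, 0.0, opposite=False)
agree_o = popvec_len(0.0, 0.0, opposite=True)
# Cues conflict by 180 degrees: the pattern reverses.
conflict_c = popvec_len(math.pi, 0.0, opposite=False)
conflict_o = popvec_len(math.pi, 0.0, opposite=True)
assert agree_c > agree_o and conflict_o > conflict_c
```

Analytically, the congruent population vector has length proportional to cos(Δ/2) and the opposite one to |sin(Δ/2)|, where Δ is the angle between the two cues, which is why comparing the two groups yields the cue-disparity signal used for segregation.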
Affiliation(s)
- Wen-Hao Zhang
- Department of Physics, Hong Kong University of Science and Technology, Hong Kong; Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States
- He Wang
- Department of Physics, Hong Kong University of Science and Technology, Hong Kong
- Aihua Chen
- Key Laboratory of Brain Functional Genomics, Primate Research Center, East China Normal University, Shanghai, China
- Yong Gu
- Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- Tai Sing Lee
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States
- Ky Michael Wong
- Department of Physics, Hong Kong University of Science and Technology, Hong Kong
- Si Wu
- School of Electronics Engineering and Computer Science, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, China
14
Causal inference accounts for heading perception in the presence of object motion. Proc Natl Acad Sci U S A 2019;116:9060-9065. PMID: 30996126. DOI: 10.1073/pnas.1820373116.
Abstract
The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework. In a dual-report task, subjects indicated whether an object appeared stationary or moving in the virtual world, while simultaneously judging their heading. Consistent with causal inference predictions, the proportion of object stationarity reports, as well as the accuracy and precision of heading judgments, depended on the speed of object motion. Critically, biases in perceived heading declined when the object was perceived to be moving in the world. Our findings suggest that the brain interprets object motion and self-motion using a causal inference framework.
15
Sasaki R, Angelaki DE, DeAngelis GC. Processing of object motion and self-motion in the lateral subdivision of the medial superior temporal area in macaques. J Neurophysiol 2019;121:1207-1221. PMID: 30699042. DOI: 10.1152/jn.00497.2018.
Abstract
Multiple areas of macaque cortex are involved in visual motion processing, but their relative functional roles remain unclear. The medial superior temporal (MST) area is typically divided into lateral (MSTl) and dorsal (MSTd) subdivisions that are thought to be involved in processing object motion and self-motion, respectively. Whereas MSTd has been studied extensively with regard to processing visual and nonvisual self-motion cues, little is known about self-motion signals in MSTl, especially nonvisual signals. Moreover, little is known about how self-motion and object motion signals interact in MSTl and how this differs from interactions in MSTd. We compared the visual and vestibular heading tuning of neurons in MSTl and MSTd using identical stimuli. Our findings reveal that both visual and vestibular heading signals are weaker in MSTl than in MSTd, suggesting that MSTl is less well suited to participate in self-motion perception than MSTd. We also tested neurons in both areas with a variety of combinations of object motion and self-motion. Our findings reveal that vestibular signals improve the separability of coding of heading and object direction in both areas, albeit more strongly in MSTd due to the greater strength of vestibular signals. Based on a marginalization technique, population decoding reveals that heading and object direction can be more effectively dissociated from MSTd responses than MSTl responses. Our findings help to clarify the respective contributions that MSTl and MSTd make to processing of object motion and self-motion, although our conclusions may be somewhat specific to the multipart moving objects that we employed. NEW & NOTEWORTHY Retinal image motion reflects contributions from both the observer's self-motion and the movement of objects in the environment. The neural mechanisms by which the brain dissociates self-motion and object motion remain unclear. 
This study provides the first systematic examination of how the lateral subdivision of area MST (MSTl) contributes to dissociating object motion and self-motion. We also examine, for the first time, how MSTl neurons represent translational self-motion based on both vestibular and visual cues.
Affiliation(s)
- Ryo Sasaki, Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York
- Dora E Angelaki, Department of Neuroscience, Baylor College of Medicine, Houston, Texas; Department of Electrical and Computer Engineering, Rice University, Houston, Texas
- Gregory C DeAngelis, Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, New York
16
Roth N, Rust NC. Rethinking assumptions about how trial and nuisance variability impact neural task performance in a fast-processing regime. J Neurophysiol 2019; 121:115-130. [PMID: 30403544] [DOI: 10.1152/jn.00503.2018]
Abstract
Task performance is determined not only by the amount of task-relevant signal present in our brains but also by the presence of noise, which can arise from multiple sources. Internal noise, or "trial variability," manifests as trial-by-trial variations in neural responses under seemingly identical conditions. External factors can also translate into noise, particularly when a task requires extraction of a particular type of information from our environment amid changes in other task-irrelevant "nuisance" parameters. To better understand how signal, trial variability, and nuisance variability combine to determine neural task performance, we explored their interactions, both in simulation and when applied to recorded neural data. This exploration revealed that trial variability is typically larger than a neuron's task-relevant signal for tasks with fast reaction times, where spike count integration windows are short. In this low signal-to-trial variability regime, nuisance variability has the counterintuitive property of having a negligible impact on single-neuron task performance, even when it dominates the task-relevant signal. The inconsequential impact of nuisance variability on individual neurons also extends to descriptions of population performance, under the assumption that both trial and nuisance variability are uncorrelated between neurons. These results demonstrate that some basic intuitions about neural coding are misguided in the context of a fast-processing, low-spike-count regime. NEW & NOTEWORTHY Many everyday tasks require us to extract specific information from our environment while ignoring other things. When the neurons in our brains that carry task-relevant signals are also modulated by task-irrelevant "nuisance" information, nuisance modulation is expected to act as performance-limiting noise. 
Using both simulated and recorded neural data, we demonstrate that these intuitions are misguided when the brain operates in a fast-processing, low-spike-count regime, where nuisance variability is largely inconsequential for performance.
Affiliation(s)
- Noam Roth, Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania
- Nicole C Rust, Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania
17
Gu Y. Vestibular signals in primate cortex for self-motion perception. Curr Opin Neurobiol 2018; 52:10-17. [DOI: 10.1016/j.conb.2018.04.004]
18
How Does the Brain Tell Self-Motion from Object Motion? J Neurosci 2018; 38:3875-3877. [PMID: 29669798] [DOI: 10.1523/jneurosci.0039-18.2018]
19
Rideaux R, Welchman AE. Proscription supports robust perceptual integration by suppression in human visual cortex. Nat Commun 2018; 9:1502. [PMID: 29666361] [PMCID: PMC5904115] [DOI: 10.1038/s41467-018-03400-y]
Abstract
Perception relies on integrating information within and between the senses, but how does the brain decide which pieces of information should be integrated and which kept separate? Here we demonstrate how proscription can be used to solve this problem: certain neurons respond best to unrealistic combinations of features to provide ‘what not’ information that drives suppression of unlikely perceptual interpretations. First, we present a model that captures both improved perception when signals are consistent (and thus should be integrated) and robust estimation when signals are conflicting. Second, we test for signatures of proscription in the human brain. We show that concentrations of inhibitory neurotransmitter GABA in a brain region intricately involved in integrating cues (V3B/KO) correlate with robust integration. Finally, we show that perturbing excitation/inhibition impairs integration. These results highlight the role of proscription in robust perception and demonstrate the functional purpose of ‘what not’ sensors in supporting sensory estimation. Perception relies on information integration but it is unclear how the brain decides which information to integrate and which to keep separate. Here, the authors develop and test a biologically inspired model of cue-integration, implicating a key role for GABAergic proscription in robust perception.
Affiliation(s)
- Reuben Rideaux, Department of Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, UK
- Andrew E Welchman, Department of Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, UK
20
Yu X, Hou H, Spillmann L, Gu Y. Causal Evidence of Motion Signals in Macaque Middle Temporal Area Weighted-Pooled for Global Heading Perception. Cereb Cortex 2018; 28:612-624. [PMID: 28057722] [DOI: 10.1093/cercor/bhw402]
Abstract
Accurate heading perception relies on visual information integrated across a wide field, that is, optic flow. Numerous computational studies have speculated how local visual information might be pooled by the brain to compute heading, but these hypotheses lack direct neurophysiological support. In the current study, we instructed human and monkey subjects to judge heading directions based on global optic flow. We showed that a local perturbation cue applied within only a small part of the visual field could bias the subjects' heading judgments, and shift the neuronal tuning in the macaque middle temporal (MT) area at the same time. Electrical microstimulation in MT significantly biased the animals' heading judgments predictable from the tuning of the stimulated neurons. Masking the visual stimuli within these neurons' receptive fields could not remove the stimulation effect, indicating a sufficient role of the MT signals pooled by downstream neurons for global heading estimation. Interestingly, this pooling is not homogeneous because stimulating neurons with excitatory surrounds produced relatively larger effects than stimulating neurons with inhibitory surrounds. Thus our data not only provide direct causal evidence, but also new insights into the neural mechanisms of pooling local motion information for global heading estimation.
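The weighted-pooling idea in this abstract can be sketched as a vector average of local motion signals whose weights depend on surround type. This is a qualitative sketch of the finding that excitatory-surround MT neurons contribute more to the pooled heading estimate; the specific weight values are illustrative assumptions, not the study's fitted pooling weights.

```python
import math

def pooled_heading(units):
    """Estimate global heading as a weighted vector average of local motion
    signals. `units` is a list of (preferred_direction_deg, response, weight)
    tuples; the weight stands in for the surround-dependent pooling strength.
    """
    x = sum(w * r * math.cos(math.radians(d)) for d, r, w in units)
    y = sum(w * r * math.sin(math.radians(d)) for d, r, w in units)
    return math.degrees(math.atan2(y, x))

# Two local patches voting for different directions: the excitatory-surround
# unit (weight 1.0, hypothetical) pulls the estimate more strongly than the
# inhibitory-surround unit (weight 0.4, hypothetical).
units = [(0.0, 1.0, 1.0),    # excitatory surround
         (90.0, 1.0, 0.4)]   # inhibitory surround
print(pooled_heading(units))  # closer to 0 deg than to 90 deg
```

In this picture, locally perturbing or microstimulating one patch shifts the pooled estimate in proportion to that patch's weight, which is the logic behind the perturbation and microstimulation effects reported above.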
Affiliation(s)
- Xuefei Yu, Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Han Hou, Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Lothar Spillmann, On leave of absence from Department of Neurology, University of Freiburg, Freiburg 79110, Germany
- Yong Gu, Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
21
Dissociation of Self-Motion and Object Motion by Linear Population Decoding That Approximates Marginalization. J Neurosci 2017; 37:11204-11219. [PMID: 29030435] [DOI: 10.1523/jneurosci.1177-17.2017]
Abstract
We use visual image motion to judge the movement of objects, as well as our own movements through the environment. Generally, image motion components caused by object motion and self-motion are confounded in the retinal image. Thus, to estimate heading, the brain would ideally marginalize out the effects of object motion (or vice versa), but little is known about how this is accomplished neurally. Behavioral studies suggest that vestibular signals play a role in dissociating object motion and self-motion, and recent computational work suggests that a linear decoder can approximate marginalization by taking advantage of diverse multisensory representations. By measuring responses of MSTd neurons in two male rhesus monkeys and by applying a recently-developed method to approximate marginalization by linear population decoding, we tested the hypothesis that vestibular signals help to dissociate self-motion and object motion. We show that vestibular signals stabilize tuning for heading in neurons with congruent visual and vestibular heading preferences, whereas they stabilize tuning for object motion in neurons with discrepant preferences. Thus, vestibular signals enhance the separability of joint tuning for object motion and self-motion. We further show that a linear decoder, designed to approximate marginalization, allows the population to represent either self-motion or object motion with good accuracy. Decoder weights are broadly consistent with a readout strategy, suggested by recent computational work, in which responses are decoded according to the vestibular preferences of multisensory neurons. These results demonstrate, at both single neuron and population levels, that vestibular signals help to dissociate self-motion and object motion.SIGNIFICANCE STATEMENT The brain often needs to estimate one property of a changing environment while ignoring others. This can be difficult because multiple properties of the environment may be confounded in sensory signals. 
The brain can solve this problem by marginalizing over irrelevant properties to estimate the property-of-interest. We explore this problem in the context of self-motion and object motion, which are inherently confounded in the retinal image. We examine how diversity in a population of multisensory neurons may be exploited to decode self-motion and object motion from the population activity of neurons in macaque area MSTd.
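The core intuition behind linear decoding that approximates marginalization can be shown with a deliberately minimal two-unit construction. This is an assumption-laden toy, not the paper's decoder: real MSTd tuning is nonlinear and heterogeneous, and the paper's weights are learned from population responses. Here, a "congruent-like" and an "opposite-like" unit carry mixed heading/object signals, and fixed linear weights cancel the irrelevant variable.

```python
def make_population(h, o):
    """Two idealized units with mixed selectivity for heading h and object
    direction o. Responses are linear in both variables (a toy abstraction);
    the coefficients a and b are arbitrary illustrative values."""
    a, b = 1.5, 0.8
    r1 = a * h + b * o   # 'congruent-like' unit
    r2 = a * h - b * o   # 'opposite-like' unit
    return r1, r2

def decode_heading(r1, r2, a=1.5):
    """Linear readout whose weights cancel the object-motion component,
    approximating marginalization over object direction."""
    return (r1 + r2) / (2 * a)

def decode_object(r1, r2, b=0.8):
    """Complementary readout that cancels the self-motion component."""
    return (r1 - r2) / (2 * b)

r1, r2 = make_population(h=30.0, o=-10.0)
print(decode_heading(r1, r2))  # recovers h, independent of o
print(decode_object(r1, r2))   # recovers o, independent of h
```

The design choice mirrors the paper's readout strategy: because congruent and opposite cells weight the nuisance variable with opposite signs, a linear combination keyed to those signs can isolate either variable from the joint code.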
22
Decoupled choice-driven and stimulus-related activity in parietal neurons may be misrepresented by choice probabilities. Nat Commun 2017; 8:715. [PMID: 28959018] [PMCID: PMC5620044] [DOI: 10.1038/s41467-017-00766-3]
Abstract
Trial-by-trial correlations between neural responses and choices (choice probabilities) are often interpreted to reflect a causal contribution of neurons to task performance. However, choice probabilities may arise from top-down, rather than bottom-up, signals. We isolated distinct sensory and decision contributions to single-unit activity recorded from the dorsal medial superior temporal (MSTd) and ventral intraparietal (VIP) areas of monkeys during perception of self-motion. Superficially, neurons in both areas show similar tuning curves during task performance. However, tuning in MSTd neurons primarily reflects sensory inputs, whereas choice-related signals dominate tuning in VIP neurons. Importantly, the choice-related activity of VIP neurons is not predictable from their stimulus tuning, and these factors are often confounded in choice probability measurements. This finding was confirmed in a subset of neurons for which stimulus tuning was measured during passive fixation. Our findings reveal decoupled stimulus and choice signals in the VIP area, and challenge our understanding of choice signals in the brain.

Choice-related signals in neuronal activity may reflect bottom-up sensory processes, top-down decision-related influences, or a combination of the two. Here the authors report that choice-related activity in VIP neurons is not predictable from their stimulus tuning, and that dominant choice signals can bias the standard metric of choice preference (choice probability).
23
Gain Modulation as a Mechanism for Coding Depth from Motion Parallax in Macaque Area MT. J Neurosci 2017; 37:8180-8197. [PMID: 28739582] [DOI: 10.1523/jneurosci.0393-17.2017]
Abstract
Observer translation produces differential image motion between objects that are located at different distances from the observer's point of fixation [motion parallax (MP)]. However, MP can be ambiguous with respect to depth sign (near vs far), and this ambiguity can be resolved by combining retinal image motion with signals regarding eye movement relative to the scene. We have previously demonstrated that both extra-retinal and visual signals related to smooth eye movements can modulate the responses of neurons in area MT of macaque monkeys, and that these modulations generate neural selectivity for depth sign. However, the neural mechanisms that govern this selectivity have remained unclear. In this study, we analyze responses of MT neurons as a function of both retinal velocity and direction of eye movement, and we show that smooth eye movements modulate MT responses in a systematic, temporally precise, and directionally specific manner to generate depth-sign selectivity. We demonstrate that depth-sign selectivity is primarily generated by multiplicative modulations of the response gain of MT neurons. Through simulations, we further demonstrate that depth can be estimated reasonably well by a linear decoding of a population of MT neurons with response gains that depend on eye velocity. Together, our findings provide the first mechanistic description of how visual cortical neurons signal depth from MP.SIGNIFICANCE STATEMENT Motion parallax is a monocular cue to depth that commonly arises during observer translation. To compute from motion parallax whether an object appears nearer or farther than the point of fixation requires combining retinal image motion with signals related to eye rotation, but the neurobiological mechanisms have remained unclear. This study provides the first mechanistic account of how this interaction takes place in the responses of cortical neurons. 
Specifically, we show that smooth eye movements modulate the gain of responses of neurons in area MT in a directionally specific manner to generate selectivity for depth sign from motion parallax. We also show, through simulations, that depth could be estimated from a population of such gain-modulated neurons.
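The multiplicative gain mechanism described above can be sketched as a fixed retinal-velocity tuning curve scaled by an eye-velocity-dependent gain. All parameters below (preferred velocity, tuning width, gain slope) are illustrative assumptions, not fits to the recorded MT data; the point is only that the same retinal motion paired with opposite eye movements yields different responses, which is the signature of depth-sign selectivity.

```python
import math

def mt_response(retinal_vel, eye_vel, pref_vel=5.0, sigma=3.0, gain_slope=0.2):
    """Toy MT unit: a fixed Gaussian retinal-velocity tuning curve whose
    response gain is scaled multiplicatively, and directionally specifically,
    by eye velocity. Parameters are hypothetical."""
    tuning = math.exp(-0.5 * ((retinal_vel - pref_vel) / sigma) ** 2)
    gain = 1.0 + gain_slope * eye_vel   # eye movement one way raises gain,
    return max(gain, 0.0) * tuning      # the other way lowers it

# Identical retinal motion, opposite eye movements -> different responses.
# A downstream linear decoder comparing such gain-modulated units can
# recover depth sign (near vs. far) from motion parallax.
near_like = mt_response(retinal_vel=5.0, eye_vel=+2.0)
far_like = mt_response(retinal_vel=5.0, eye_vel=-2.0)
print(near_like, far_like)
```

This is the sense in which gain modulation generates depth-sign selectivity: the eye-movement signal never changes the unit's retinal-velocity preference, only the amplitude of its response.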
24
Representation of Multidimensional Stimuli: Quantifying the Most Informative Stimulus Dimension from Neural Responses. J Neurosci 2017; 37:7332-7346. [PMID: 28663198] [DOI: 10.1523/jneurosci.0318-17.2017]
Abstract
A common way to assess the function of sensory neurons is to measure the number of spikes produced by individual neurons while systematically varying a given dimension of the stimulus. Such measured tuning curves can then be used to quantify the accuracy of the neural representation of the stimulus dimension under study, which can in turn be related to behavioral performance. However, tuning curves often change shape when other dimensions of the stimulus are varied, reflecting the simultaneous sensitivity of neurons to multiple stimulus features. Here we illustrate how one-dimensional information analyses are misleading in this context, and propose a framework derived from Fisher information that allows the quantification of information carried by neurons in multidimensional stimulus spaces. We use this method to probe the representation of sound localization in auditory neurons of chinchillas and guinea pigs of both sexes, and show how heterogeneous tuning properties contribute to a representation of sound source position that is robust to changes in sound level.SIGNIFICANCE STATEMENT Sensory neurons' responses are typically modulated simultaneously by numerous stimulus properties, which can result in an overestimation of neural acuity with existing one-dimensional neural information transmission measures. To overcome this limitation, we develop new, compact expressions of Fisher information-derived measures that bound the robust encoding of separate stimulus dimensions in the context of multidimensional stimuli. We apply this method to the problem of the representation of sound source location in the face of changes in sound source level by neurons of the auditory midbrain.
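The one-dimensional building block that this framework generalizes is the standard Fisher information of a tuning curve. The sketch below shows only that 1-D case for independent Poisson units, where I(s) = f'(s)^2 / f(s) and population information sums over neurons; the paper's contribution, extending this to multidimensional stimulus spaces with nuisance dimensions, is not reproduced here, and all tuning parameters are illustrative.

```python
import math

def fisher_info_poisson(fprime, f):
    """Fisher information of a Poisson neuron about stimulus s:
    I(s) = f'(s)^2 / f(s)."""
    return fprime ** 2 / f

def gaussian_tuning(s, pref, sigma=10.0, rmax=20.0, base=1.0):
    """Gaussian tuning curve with baseline rate (baseline keeps f(s) > 0)."""
    return base + rmax * math.exp(-0.5 * ((s - pref) / sigma) ** 2)

def d_gaussian_tuning(s, pref, sigma=10.0, rmax=20.0):
    """Derivative of the Gaussian tuning curve with respect to s."""
    return -(s - pref) / sigma ** 2 * rmax * math.exp(-0.5 * ((s - pref) / sigma) ** 2)

def population_fi(s, prefs):
    """Assuming independent Poisson units, population Fisher information
    is the sum of single-unit contributions."""
    return sum(fisher_info_poisson(d_gaussian_tuning(s, p), gaussian_tuning(s, p))
               for p in prefs)

# A single neuron carries no information at its tuning peak (f' = 0) and the
# most on its flanks; a heterogeneous population covers the stimulus axis.
prefs = range(-90, 91, 10)
print(population_fi(0.0, prefs))
```

The abstract's warning applies exactly here: evaluating such a 1-D quantity while other stimulus dimensions silently reshape f(s) overstates neural acuity, which motivates the multidimensional bounds the authors derive.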
25
Goncalves NR, Welchman AE. "What Not" Detectors Help the Brain See in Depth. Curr Biol 2017; 27:1403-1412.e8. [PMID: 28502662] [PMCID: PMC5457481] [DOI: 10.1016/j.cub.2017.03.074]
Abstract
Binocular stereopsis is one of the primary cues for three-dimensional (3D) vision in species ranging from insects to primates. Understanding how the brain extracts depth from two different retinal images represents a tractable challenge in sensory neuroscience that has so far evaded full explanation. Central to current thinking is the idea that the brain needs to identify matching features in the two retinal images (i.e., solving the “stereoscopic correspondence problem”) so that the depth of objects in the world can be triangulated. Although intuitive, this approach fails to account for key physiological and perceptual observations. We show that formulating the problem to identify “correct matches” is suboptimal and propose an alternative, based on optimal information encoding, that mixes disparity detection with “proscription”: exploiting dissimilar features to provide evidence against unlikely interpretations. We demonstrate the role of these “what not” responses in a neural network optimized to extract depth in natural images. The network combines information for and against the likely depth structure of the viewed scene, naturally reproducing key characteristics of both neural responses and perceptual interpretations. We capture the encoding and readout computations of the network in simple analytical form and derive a binocular likelihood model that provides a unified account of long-standing puzzles in 3D vision at the physiological and perceptual levels. We suggest that marrying detection with proscription provides an effective coding strategy for sensory estimation that may be useful for diverse feature domains (e.g., motion) and multisensory integration.

Highlights: The brain uses “what not” detectors to facilitate 3D vision. Binocular mismatches are used to drive suppression of incompatible depths. Proscription accounts for depth perception without binocular correspondence. A simple analytical model captures perceptual and neural responses.
Affiliation(s)
- Nuno R Goncalves, Department of Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, UK
- Andrew E Welchman, Department of Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, UK
26
Smith AT, Greenlee MW, DeAngelis GC, Angelaki DE. Distributed Visual–Vestibular Processing in the Cerebral Cortex of Man and Macaque. Multisens Res 2017. [DOI: 10.1163/22134808-00002568]
Abstract
Recent advances in understanding the neurobiological underpinnings of visual–vestibular interactions underlying self-motion perception are reviewed with an emphasis on comparisons between the macaque and human brains. In both species, several distinct cortical regions have been identified that are active during both visual and vestibular stimulation and in some of these there is clear evidence for sensory integration. Several possible cross-species homologies between cortical regions are identified. A key feature of cortical organization is that the same information is apparently represented in multiple, anatomically diverse cortical regions, suggesting that information about self-motion is used for different purposes in different brain regions.
Affiliation(s)
- Andrew T. Smith, Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
- Mark W. Greenlee, Institute of Experimental Psychology, University of Regensburg, 93053 Regensburg, Germany
- Gregory C. DeAngelis, Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York 14627, USA
- Dora E. Angelaki, Department of Neuroscience, Baylor College of Medicine, Houston, Texas 77030, USA