51
Angelaki DE, Gu Y, DeAngelis GC. Visual and vestibular cue integration for heading perception in extrastriate visual cortex. J Physiol 2011; 589:825-33. [PMID: 20679353] [DOI: 10.1113/jphysiol.2010.194720]
Abstract
Natural behaviours, and hence neuronal populations, often combine multiple sensory cues to improve stimulus detectability or discriminability as we explore the environment. Here we review one such example of multisensory cue integration in the dorsal medial superior temporal area (MSTd) of the macaque visual cortex. Visual and vestibular cues about the direction of self-motion in the world (heading) are encoded by single multisensory neurons in MSTd. Most neurons tend to prefer lateral stimulus directions and, as they are broadly tuned, are most sensitive in discriminating heading directions around straight forward. Decoding of MSTd population activity shows that these neuronal properties can account for the fact that heading perception in humans and macaques is most precise for directions around straight forward, whereas heading sensitivity declines with increasing eccentricity of the reference direction. Remarkably, when heading is specified by both cues simultaneously, behavioural precision is improved in a manner that is predicted by statistically optimal (Bayesian) cue integration models. A subpopulation of multisensory MSTd cells with congruent visual and vestibular heading preferences also combines the cues near-optimally, establishing a potential neural substrate for behavioral cue integration.
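The population-decoding point in this abstract (broadly tuned, lateral-preferring neurons are most discriminative around straight ahead) can be illustrated with a toy Fisher-information calculation. All tuning widths, rates, and noise values below are invented for the sketch, not fits to MSTd data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: Gaussian heading tuning with preferences
# clustered around lateral directions (+-90 deg), as described for MSTd.
prefs = np.deg2rad(np.concatenate([rng.normal(-90, 20, 200),
                                   rng.normal(+90, 20, 200)]))
WIDTH = np.deg2rad(90.0)    # broad tuning
R_MAX, SIGMA = 30.0, 1.0    # peak rate and noise SD (made-up units)

def fisher_info(theta):
    """Population Fisher information at heading `theta` (radians);
    the discrimination threshold scales as 1/sqrt(FI)."""
    d = theta - prefs
    f = R_MAX * np.exp(-0.5 * (d / WIDTH) ** 2)  # mean responses
    slope = -f * d / WIDTH ** 2                  # df/dtheta
    return float(np.sum(slope ** 2) / SIGMA ** 2)

fi_forward = fisher_info(np.deg2rad(0))    # straight ahead
fi_lateral = fisher_info(np.deg2rad(90))   # eccentric reference direction
```

Because the steepest tuning slopes of lateral-preferring cells fall near straight ahead, `fi_forward` exceeds `fi_lateral`, matching the behavioral pattern the abstract describes.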
Affiliation(s)
- Dora E Angelaki
- Department of Anatomy and Neurobiology - Box 8108, Washington University School of Medicine, 660 South Euclid Avenue, St Louis, MO 63110, USA.
52
Chiappe ME, Seelig JD, Reiser MB, Jayaraman V. Walking modulates speed sensitivity in Drosophila motion vision. Curr Biol 2010; 20:1470-5. [PMID: 20655222] [PMCID: PMC4435946] [DOI: 10.1016/j.cub.2010.06.072]
Abstract
Changes in behavioral state modify neural activity in many systems. In some vertebrates such modulation has been observed and interpreted in the context of attention and sensorimotor coordinate transformations. Here we report state-dependent activity modulations during walking in a visual-motor pathway of Drosophila. We used two-photon imaging to monitor intracellular calcium activity in motion-sensitive lobula plate tangential cells (LPTCs) in head-fixed Drosophila walking on an air-supported ball. Cells of the horizontal system (HS)--a subgroup of LPTCs--showed stronger calcium transients in response to visual motion when flies were walking rather than resting. The amplified responses were also correlated with walking speed. Moreover, HS neurons showed a relatively higher gain in response strength at higher temporal frequencies, and their optimum temporal frequency was shifted toward higher motion speeds. Walking-dependent modulation of HS neurons in the Drosophila visual system may constitute a mechanism to facilitate processing of higher image speeds in behavioral contexts where these speeds of visual motion are relevant for course stabilization.
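A minimal way to picture the reported state dependence (higher gain and a temporal-frequency optimum shifted toward faster motion during walking) is a log-Gaussian tuning curve whose amplitude and peak depend on behavioral state. The parameter values are illustrative assumptions, not measurements from the paper:

```python
import numpy as np

def hs_response(tf, walking, peak_rest=1.0, shift=2.0, gain=1.5, width=0.8):
    """Toy log-Gaussian temporal-frequency tuning for an HS-like cell.
    Walking multiplies the response amplitude and shifts the optimum
    toward higher temporal frequencies (all numbers hypothetical)."""
    peak = peak_rest * (shift if walking else 1.0)   # optimum TF (Hz)
    amp = gain if walking else 1.0                   # state-dependent gain
    return amp * np.exp(-0.5 * (np.log(tf / peak) / width) ** 2)

tfs = np.logspace(-1, 1.5, 200)            # 0.1 to ~31.6 Hz
rest = hs_response(tfs, walking=False)
walk = hs_response(tfs, walking=True)
opt_rest = tfs[np.argmax(rest)]            # optimum while resting
opt_walk = tfs[np.argmax(walk)]            # optimum while walking
```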
Affiliation(s)
- M Eugenia Chiappe
- Janelia Farm Research Campus, Howard Hughes Medical Institute, 19700 Helix Drive, Ashburn, VA 20147, USA
53
Multisensory integration: resolving sensory ambiguities to build novel representations. Curr Opin Neurobiol 2010; 20:353-60. [PMID: 20471245] [DOI: 10.1016/j.conb.2010.04.009]
Abstract
Multisensory integration plays several important roles in the nervous system. One is to combine information from multiple complementary cues to improve stimulus detection and discrimination. Another is to resolve peripheral sensory ambiguities and create novel internal representations that do not exist at the level of individual sensors. Here we focus on how ambiguities inherent in vestibular, proprioceptive and visual signals are resolved to create behaviorally useful internal estimates of our self-motion. We review recent studies that have shed new light on the nature of these estimates and how multiple, but individually ambiguous, sensory signals are processed and combined to compute them. We emphasize the need to combine experiments with theoretical insights to understand the transformations that are being performed.
54
Fetsch CR, DeAngelis GC, Angelaki DE. Visual-vestibular cue integration for heading perception: applications of optimal cue integration theory. Eur J Neurosci 2010; 31:1721-9. [PMID: 20584175] [PMCID: PMC3108057] [DOI: 10.1111/j.1460-9568.2010.07207.x]
Abstract
The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
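The optimal (maximum-likelihood) integration rule reviewed here has a simple closed form: each cue is weighted by its inverse variance, and the combined estimate is never less reliable than the better single cue. A sketch with arbitrary example numbers, not thresholds from the experiments:

```python
def integrate_cues(mu_ves, sigma_ves, mu_vis, sigma_vis):
    """Maximum-likelihood (Bayesian, flat-prior) cue combination:
    weights are inverse variances; combined variance is reduced."""
    w_ves = sigma_vis**2 / (sigma_ves**2 + sigma_vis**2)
    w_vis = 1.0 - w_ves
    mu = w_ves * mu_ves + w_vis * mu_vis
    var = (sigma_ves**2 * sigma_vis**2) / (sigma_ves**2 + sigma_vis**2)
    return mu, var ** 0.5

# Illustrative single-cue heading estimates/thresholds in degrees:
# vestibular estimate 0 deg (SD 2), visual estimate 1 deg (SD 1).
mu, sigma = integrate_cues(mu_ves=0.0, sigma_ves=2.0,
                           mu_vis=1.0, sigma_vis=1.0)
```

With these numbers the combined estimate is pulled toward the more reliable visual cue (mu = 0.8) and the combined SD falls below the best single-cue SD of 1.0, which is exactly the behavioral signature of optimal integration the review discusses.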
Affiliation(s)
- Christopher R Fetsch
- Department of Anatomy and Neurobiology, Washington University School of Medicine, 660 S. Euclid Ave., Box 8108, St. Louis, MO 63110, USA
55
Maciokas JB, Britten KH. Extrastriate area MST and parietal area VIP similarly represent forward headings. J Neurophysiol 2010; 104:239-47. [PMID: 20427618] [DOI: 10.1152/jn.01083.2009]
Abstract
Many studies have documented the involvement of medial superior temporal extrastriate area (MST) in the perception of heading based on optic flow information. Furthermore, both heading perception and the responses of MST neurons are relatively stable in the presence of eye movements that distort the retinal flow information on which perception is based. Area VIP in the posterior parietal cortex also contains a robust representation of optic flow cues for heading. However, the studies in the two areas were frequently conducted using different stimuli, making quantitative comparison difficult. To remedy this, we studied MST using a family of random dot heading stimuli that we have previously used in the study of VIP. These stimuli simulate observer translation through a three-dimensional cloud of points, and a range of forward headings was presented both with and without horizontal smooth pursuit eye movements. We found that MST neurons, like VIP neurons, respond robustly to these stimuli and partially compensate for the presence of pursuit. Quantitative comparison of the responses revealed no substantial difference between the heading responses of MST and VIP neurons or in their degree of pursuit tolerance.
Affiliation(s)
- James B Maciokas
- Center for Neuroscience, University of California, Davis, California 95694, USA
56
Yu CP, Page WK, Gaborski R, Duffy CJ. Receptive field dynamics underlying MST neuronal optic flow selectivity. J Neurophysiol 2010; 103:2794-807. [PMID: 20457855] [DOI: 10.1152/jn.01085.2009]
Abstract
Optic flow informs moving observers about their heading direction. Neurons in monkey medial superior temporal (MST) cortex show heading selective responses to optic flow and planar direction selective responses to patches of local motion. We recorded MST neuronal responses to a 90 x 90 degrees optic flow display and to a 3 x 3 array of local motion patches covering the same area. Our goal was to test the hypothesis that the optic flow responses reflect the sum of the local motion responses. The local motion responses of each neuron were modeled as mixtures of Gaussians, combining the effects of two Gaussian response functions derived using a genetic algorithm, and then used to predict that neuron's optic flow responses. Some neurons showed good correspondence between local motion models and optic flow responses, others showed substantial differences. We used the genetic algorithm to modulate the relative strength of each local motion segment's responses to accommodate interactions between segments that might modulate their relative efficacy during co-activation by global patterns of optic flow. These gain modulated models showed uniformly better fits to the optic flow responses, suggesting that coactivation of receptive field segments alters neuronal response properties. We tested this hypothesis by simultaneously presenting local motion stimuli at two different sites. These two-segment stimuli revealed that interactions between response segments have direction and location specific effects that can account for aspects of optic flow selectivity. We conclude that MST's optic flow selectivity reflects dynamic interactions between spatially distributed local planar motion response mechanisms.
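The paper's baseline hypothesis (an optic-flow response that is the sum of local-motion responses, optionally gain-modulated to capture segment interactions) can be sketched as follows. The cosine tuning and the per-segment preferences are hypothetical stand-ins for the fitted Gaussian-mixture models:

```python
import numpy as np

rng = np.random.default_rng(1)
prefs = rng.uniform(0, 2 * np.pi, 9)   # hypothetical preferred direction per segment

def local_response(directions):
    """Response of each of the 9 patch segments to the local motion
    direction shown in that segment (toy cosine tuning)."""
    return 1.0 + np.cos(np.asarray(directions) - prefs)

def flow_response(directions, gains=None):
    """Predicted optic-flow response: a gain-weighted sum of the
    per-segment local-motion responses. Gains default to 1 (the plain
    summation model); fitted gains would model segment interactions."""
    g = np.ones(9) if gains is None else np.asarray(gains)
    return float(np.sum(g * local_response(directions)))

# Expansion-like stimulus: each segment's motion points away from the
# center (the center segment's direction is arbitrary in this sketch).
xy = np.array([(x, y) for y in (1, 0, -1) for x in (-1, 0, 1)], dtype=float)
expansion = np.arctan2(xy[:, 1], xy[:, 0])
r_sum = flow_response(expansion)                    # plain summation model
r_mod = flow_response(expansion, gains=[0.5] * 9)   # uniformly down-weighted
```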
Affiliation(s)
- Chen Ping Yu
- Department of Computer Science, Rochester Institute of Technology, Rochester, New York, USA
57
Bremmer F, Kubischik M, Pekel M, Hoffmann KP, Lappe M. Visual selectivity for heading in monkey area MST. Exp Brain Res 2010; 200:51-60. [PMID: 19727690] [DOI: 10.1007/s00221-009-1990-3]
Abstract
The control of self-motion is supported by visual, vestibular, and proprioceptive signals. Recent research has shown how these signals interact in the monkey medio-superior temporal area (area MST) to enhance and disambiguate the perception of heading during self-motion. Area MST is a central stage for self-motion processing from optic flow, and integrates flow field information with vestibular self-motion and extraretinal eye movement information. Such multimodal cue integration is clearly important to solidify perception. However, to understand the information processing capabilities of the brain, one must also ask how much information can be deduced from a single cue alone. This is particularly pertinent for optic flow, where controversies over its usefulness for self-motion control have existed ever since Gibson proposed his direct approach to ecological perception. In our study, we therefore tested macaque MST neurons for their heading selectivity in highly complex flow fields based on purely visual mechanisms. We recorded responses of MST neurons to simple radial flow fields and to distorted flow fields that simulated a self-motion plus an eye movement. About half of the cells compensated for such distortion and kept the same heading selectivity in both cases. Our results strongly support the notion of an involvement of area MST in the computation of heading.
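The "distorted flow field" manipulation (self-motion plus an eye movement) follows directly from the standard image-motion equations: retinal flow is a depth-dependent translational term plus a depth-independent rotational term. A sketch with focal length 1 and a horizontal eye rotation only; the specific numbers are arbitrary:

```python
def retinal_flow(x, y, Z, T=(0.0, 0.0, 1.0), omega_y=0.0):
    """Image-plane velocity (Longuet-Higgins/Prazdny-style equations,
    focal length 1) for observer translation T toward a point at depth Z,
    plus a horizontal eye rotation omega_y simulating pursuit."""
    Tx, Ty, Tz = T
    u = (-Tx + x * Tz) / Z - (1 + x**2) * omega_y   # horizontal component
    v = (-Ty + y * Tz) / Z - x * y * omega_y        # vertical component
    return u, v

# Pure forward translation: flow at the image center (the FOE) is zero.
u0, v0 = retinal_flow(0.0, 0.0, Z=2.0)
# Adding simulated pursuit distorts the field: the former FOE now moves,
# which is the distortion that heading-selective MST cells must discount.
u1, v1 = retinal_flow(0.0, 0.0, Z=2.0, omega_y=0.1)
```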
Affiliation(s)
- Frank Bremmer
- Allg. Zoologie und Neurobiologie, Ruhr Universität Bochum, 44780 Bochum, Germany.
58
Abstract
The cognitive neural prosthetic (CNP) is a very versatile method for assisting paralyzed patients and patients with amputations. The CNP records the cognitive state of the subject, rather than signals strictly related to motor execution or sensation. We review a number of high-level cortical signals and their application for CNPs, including intention, motor imagery, decision making, forward estimation, executive function, attention, learning, and multi-effector movement planning. CNPs are defined by the cognitive function they extract, not the cortical region from which the signals are recorded. However, some cortical areas may be better than others for particular applications. Signals can also be extracted in parallel from multiple cortical areas using multiple implants, which in many circumstances can increase the range of applications of CNPs. The CNP approach relies on scientific understanding of the neural processes involved in cognition, and many of the decoding algorithms it uses also have parallels to underlying neural circuit functions.
Affiliation(s)
- Richard A Andersen
- Division of Biology, California Institute of Technology, Pasadena, California 91125, USA.
59
Chang SWC, Papadimitriou C, Snyder LH. Using a compound gain field to compute a reach plan. Neuron 2009; 64:744-55. [PMID: 20005829] [DOI: 10.1016/j.neuron.2009.11.005]
Abstract
A gain field, the scaling of a tuned neuronal response by a postural signal, may help support neuronal computation. Here, we characterize eye and hand position gain fields in the parietal reach region (PRR). Eye and hand gain fields in individual PRR neurons are similar in magnitude but opposite in sign to one another. This systematic arrangement produces a compound gain field that is proportional to the distance between gaze location and initial hand position. As a result, the visual response to a target for an upcoming reach is scaled by the initial gaze-to-hand distance. Such a scaling is similar to what would be predicted in a neural network that mediates between eye- and hand-centered representations of target location. This systematic arrangement supports a role of PRR in visually guided reaching and provides strong evidence that gain fields are used for neural computations.
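The compound gain field described here is easy to state algebraically: if the eye- and hand-position gains have equal magnitude but opposite sign, they collapse into a single gain on the gaze-to-hand distance. A toy cell with cosine target tuning and linear gains; the slope k is an arbitrary choice, not a fitted value:

```python
import math

def prr_response(target_deg, pref_deg, eye_deg, hand_deg, k=0.02):
    """Toy PRR neuron: cosine tuning for target direction, scaled by
    eye- and hand-position gain fields of equal magnitude and opposite
    sign, i.e. gain = 1 + k*(eye - hand). Parameters are illustrative."""
    tuning = 1.0 + math.cos(math.radians(target_deg - pref_deg))
    gain = 1.0 + k * (eye_deg - hand_deg)   # compound gaze-to-hand gain
    return tuning * gain

# The response depends only on the gaze-to-hand distance (10 deg in both
# cases), not on the absolute eye or hand position:
r_a = prr_response(45, 0, eye_deg=10, hand_deg=0)
r_b = prr_response(45, 0, eye_deg=25, hand_deg=15)
```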
Affiliation(s)
- Steve W C Chang
- Department of Anatomy and Neurobiology, Washington University in St. Louis School of Medicine, St. Louis, MO 63110, USA.
60
Andersen RA, Cui H. Intention, action planning, and decision making in parietal-frontal circuits. Neuron 2009; 63:568-83. [PMID: 19755101] [DOI: 10.1016/j.neuron.2009.08.028]
Abstract
The posterior parietal cortex and frontal cortical areas to which it connects are responsible for sensorimotor transformations. This review covers new research on four components of this transformation process: planning, decision making, forward state estimation, and relative-coordinate representations. These sensorimotor functions can be harnessed for neural prosthetic operations by decoding intended goals (planning) and trajectories (forward state estimation) of movements as well as higher cortical functions related to decision making and potentially the coordination of multiple body parts (relative-coordinate representations).
Affiliation(s)
- Richard A Andersen
- Division of Biology, California Institute of Technology, Pasadena, CA 91125, USA.
61
Liu S, Angelaki DE. Vestibular signals in macaque extrastriate visual cortex are functionally appropriate for heading perception. J Neurosci 2009; 29:8936-45. [PMID: 19605631] [PMCID: PMC2728346] [DOI: 10.1523/jneurosci.1607-09.2009]
Abstract
Visual and vestibular signals converge onto the dorsal medial superior temporal area (MSTd) of the macaque extrastriate visual cortex, which is thought to be involved in multisensory heading perception for spatial navigation. Peripheral otolith information, however, is ambiguous and cannot distinguish linear accelerations experienced during self-motion from those resulting from changes in spatial orientation relative to gravity. Here we show that, unlike peripheral vestibular sensors but similar to lobules 9 and 10 of the cerebellar vermis (nodulus and uvula), MSTd neurons respond selectively to heading and not to changes in orientation relative to gravity. In support of a role in heading perception, MSTd vestibular responses are also dominated by velocity-like temporal dynamics, which might optimize sensory integration with visual motion information. Unlike the cerebellar vermis, however, MSTd neurons also carry a spatial orientation-independent rotation signal from the semicircular canals, which could be useful in compensating for the effects of head rotation on the processing of optic flow. These findings show that vestibular signals in MSTd are appropriately processed to support a functional role in multisensory heading perception.
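The otolith ambiguity at the heart of this study is an equivalence-principle problem: a static head tilt relative to gravity and a linear acceleration can produce identical otolith shear, so the sensor alone cannot separate heading from orientation changes. A numerical sketch (the tilt angle and acceleration are chosen arbitrarily to match each other):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def otolith_shear(tilt_deg=0.0, lin_accel=0.0):
    """Shear component of the gravito-inertial force sensed by the
    otoliths: the gravity component due to static tilt minus any linear
    acceleration of the head. The otoliths cannot tell the two apart;
    rotation (canal) signals are needed to disambiguate."""
    return G * math.sin(math.radians(tilt_deg)) - lin_accel

tilt_only = otolith_shear(tilt_deg=11.54)             # static tilt
translation_only = otolith_shear(lin_accel=-1.962)    # backward acceleration
```

The two conditions yield nearly identical shear, yet only the tilt is accompanied by a semicircular-canal rotation signal, which is the extra cue MSTd (like the nodulus/uvula) can exploit to respond selectively to heading.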
Affiliation(s)
- Sheng Liu
- Department of Neurobiology, Washington University School of Medicine, St. Louis, Missouri 63110
- Dora E. Angelaki
- Department of Neurobiology, Washington University School of Medicine, St. Louis, Missouri 63110
62
Merchant H, Zarco W, Prado L, Pérez O. Behavioral and neurophysiological aspects of target interception. Adv Exp Med Biol 2009; 629:201-20. [PMID: 19227501] [DOI: 10.1007/978-0-387-77064-2_10]
Abstract
This chapter focuses on the behavioral and neurophysiological aspects of manual interception. We review the most important elements of an interceptive action from the sensory and cognitive stage to the motor side of this behavior. We describe different spatial and temporal target parameters that can be used to control the interception movement, as well as the different strategies used by the subject to intercept a moving target. We review the neurophysiological properties of the parietofrontal system during target motion processing and during a particular experiment of target interception. Finally, we describe the neural responses associated with the temporal and spatial parameters of a moving target and the possible neurophysiological mechanisms used to integrate this information in order to trigger an interception movement.
Affiliation(s)
- Hugo Merchant
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Querétaro Qro. 76230, México.
63
Ilg UJ, Thier P. The neural basis of smooth pursuit eye movements in the rhesus monkey brain. Brain Cogn 2008; 68:229-40. [DOI: 10.1016/j.bandc.2008.08.014]
64
Affiliation(s)
- Kenneth H. Britten
- Center for Neuroscience and Department of Neurobiology, Physiology, and Behavior, University of California, Davis, California 95616;
65
Profile of Richard A. Andersen. Proc Natl Acad Sci U S A 2008; 105:8167-9. [DOI: 10.1073/pnas.0804405105]
66
Ilg UJ. The role of areas MT and MST in coding of visual motion underlying the execution of smooth pursuit. Vision Res 2008; 48:2062-9. [PMID: 18508104] [DOI: 10.1016/j.visres.2008.04.015]
Abstract
What is the main purpose of visual motion processing? One very important aspect of motion processing is definitively the generation of smooth pursuit eye movements. These eye movements avoid motion blur of moving objects which would obstruct the analysis of the objects' visual details. However, these eye movements can only be executed if there is a moving target. So there is a very close and inseparable relationship between smooth pursuit and motion processing. The hub for visual motion processing is situated in the middle temporal (MT) and medial superior temporal (MST) area. Despite the undoubted importance of these areas for the generation of smooth pursuit or goal-directed behavior in general, it is important to keep in mind that motion processing in addition serves perceptual purposes such as object recognition, structure-from-motion detection, scene segmentation, self-motion estimation and depth perception. This review focuses at the beginning on pursuit-related activity recorded from MT and MST, subsequently extends the view to goal-directed hand movements, and finally addresses the possible contributions of these areas to motion perception.
Affiliation(s)
- Uwe J Ilg
- Department of Cognitive Neurology, Hertie-Institute of Clinical Brain Research, University of Tuebingen, Otfried-Mueller-Street 27, D-72076 Tuebingen, Germany.
67
Abstract
During goal-directed movements, primates are able to rapidly and accurately control an online trajectory despite substantial delay times incurred in the sensorimotor control loop. To address the problem of large delays, it has been proposed that the brain uses an internal forward model of the arm to estimate current and upcoming states of a movement, which are more useful for rapid online control. To study online control mechanisms in the posterior parietal cortex (PPC), we recorded from single neurons while monkeys performed a joystick task. Neurons encoded the static target direction and the dynamic movement angle of the cursor. The dynamic encoding properties of many movement angle neurons reflected a forward estimate of the state of the cursor that is neither directly available from passive sensory feedback nor compatible with outgoing motor commands and is consistent with PPC serving as a forward model for online sensorimotor control. In addition, we found that the space-time tuning functions of these neurons were largely separable in the angle-time plane, suggesting that they mostly encode straight and approximately instantaneous trajectories.
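The forward-model idea in this abstract (estimate the current state by integrating the motor commands not yet reflected in delayed sensory feedback) can be sketched in one dimension. The delay, time step, and command values are invented for illustration:

```python
def forward_estimate(x_delayed, motor_history, dt=0.01):
    """Crude forward model: start from delayed sensory feedback of a
    1-D cursor position and integrate the velocity commands issued
    during the feedback delay to estimate the *current* position."""
    x = x_delayed
    for u in motor_history:   # commands not yet visible in feedback
        x += u * dt
    return x

# Feedback is 100 ms old; ten 10-ms velocity commands of 0.5 units/s
# were issued in the meantime, so the cursor should now be slightly
# ahead of where the senses report it.
x_now = forward_estimate(x_delayed=1.0, motor_history=[0.5] * 10)
```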
68
Ruiz-Ruiz M, Martinez-Trujillo JC. Human updating of visual motion direction during head rotations. J Neurophysiol 2008; 99:2558-76. [PMID: 18337365] [DOI: 10.1152/jn.00931.2007]
Abstract
Previous studies have demonstrated that human subjects update the location of visual targets for saccades after head and body movements and in the absence of visual feedback. This phenomenon is known as spatial updating. Here we investigated whether a similar mechanism exists for the perception of motion direction. We recorded eye positions in three dimensions and behavioral responses in seven subjects during a motion task in two different conditions: when the subject's head remained stationary and when subjects rotated their heads around an anteroposterior axis (head tilt). We demonstrated that 1) after head tilt, subjects updated the direction of saccades made in the perceived stimulus direction (direction of motion updating); 2) the amount of updating varied across subjects and stimulus directions; 3) the amount of motion direction updating was highly correlated with the amount of spatial updating during a memory-guided saccade task; 4) subjects updated the stimulus direction during a two-alternative forced-choice direction discrimination task in the absence of saccadic eye movements (perceptual updating); 5) perceptual updating was more accurate than motion direction updating involving saccades; and 6) subjects updated motion direction similarly during active and passive head rotation. These results demonstrate the existence of an updating mechanism for the perception of motion direction in the human brain that operates during active and passive head rotations and that resembles the one of spatial updating. Such a mechanism operates during different tasks involving different motor and perceptual skills (saccade and motion direction discrimination) with different degrees of accuracy.
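Updating a world-fixed motion direction after a head roll amounts to counter-rotating the remembered direction by the roll angle; a gain below 1 models the partial and variable updating reported across subjects. A sketch (the angle convention and gain values are assumptions):

```python
def updated_direction(stimulus_dir_deg, head_roll_deg, gain=1.0):
    """Direction (in head/retinal coordinates) at which a world-fixed
    motion direction should be aimed after a head roll of
    `head_roll_deg`; `gain` < 1 models incomplete updating."""
    return (stimulus_dir_deg - gain * head_roll_deg) % 360.0

# World-fixed motion at 90 deg, head rolled 30 deg:
full = updated_direction(90, 30)              # complete updating
partial = updated_direction(90, 30, gain=0.5) # incomplete updating
```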
Affiliation(s)
- Mario Ruiz-Ruiz
- Cognitive Neurophysiology Laboratory, Department of Physiology, McGill University, Montreal, Quebec, Canada
69
Abstract
The extrastriate cortex of primates encompasses a substantial portion of the cerebral cortex and is devoted to the higher order processing of visual signals and their dispatch to other parts of the brain. A first step towards the understanding of the function of this cortical tissue is a description of the selectivities of the various neuronal populations for higher order aspects of the image. These selectivities present in the various extrastriate areas support many diverse representations of the scene before the subject. The list of the known selectivities includes that for pattern direction and speed gradients in middle temporal/V5 area; for heading in medial superior temporal visual area, dorsal part; for orientation of nonluminance contours in V2 and V4; for curved boundary fragments in V4 and shape parts in infero-temporal area (IT); and for curvature and orientation in depth from disparity in IT and CIP. The most common putative mechanism for generating such emergent selectivity is the pattern of excitatory and inhibitory linear inputs from the afferent area combined with nonlinear mechanisms in the afferent and receiving area.
Affiliation(s)
- Guy A Orban
- Laboratorium voor Neuro- en Psychofysiologie, K. U. Leuven Medical School, Leuven, Belgium.
70
Wall MB, Smith AT. The representation of egomotion in the human brain. Curr Biol 2008; 18:191-4. [PMID: 18221876] [DOI: 10.1016/j.cub.2007.12.053]
Abstract
An essential function of visual processing is to establish the position of the body in space and, in concert with the other sense systems, to monitor movement of the whole body, or "egomotion." A key cue to egomotion is optic flow. For example, forward motion through the environment generates an expanding pattern of flow on the retina, and (with eyes fixed centrally) the direction of heading corresponds to the center of expansion [1]. In macaques, visual cortical area MST is sensitive to optic-flow structure [2, 3], and it has been suggested that MST has a central role in the computation of heading [4]. However, here we identify two areas of the human brain that represent visual cues to egomotion more directly than does MST. These areas respond strongly to a single optic-flow stimulus but become relatively unresponsive when the stimulus is surrounded with further flow patches and thereby made inconsistent with egomotion. One is putative area VIP in the anterior portion of the intraparietal sulcus. The other is a new visual area, which we refer to as cingulate sulcus visual area (CSv). Areas V1-V4 and MT respond about equally to both types of flow stimulus. MST has intermediate properties, responding well to multiple patches but with a modest preference for a single, egomotion-compatible patch. We suggest that MST is merely an intermediate processing stage for visual cues to egomotion and that such cues are more comprehensively encoded by VIP and CSv.
Affiliation(s)
- Matthew B Wall
- Department of Psychology, Royal Holloway, University of London, Egham, TW20 0EX, United Kingdom
71
Development of cortical responses to optic flow. Vis Neurosci 2007; 24:845-56. [DOI: 10.1017/s0952523807070769]
Abstract
Humans discriminate approaching objects from receding ones shortly after birth, and optic flow associated with self-motion may activate distinctive brain networks, including the human MT+ complex. We sought evidence for evoked brain activity that distinguished radial motion from other optic flow patterns, such as translation or rotation by recording steady-state visual evoked potentials (ssVEPs), in both adults and 4–6 month-old infants to direction-reversing optic flow patterns. In adults, radial flow evoked distinctive brain responses in both the time and frequency domains. Differences between expansion/contraction and both translation and rotation were especially strong in lateral channels (PO7 and PO8), and there was an asymmetry between responses to expansion and contraction. In contrast, infants' evoked response waveforms to all flow types were equivalent, and showed no evidence of the expansion/contraction asymmetry. Infants' responses were largest and most reliable for the translation patterns in which all dots moved in the same direction. This pattern of response is consistent with an account in which motion processing systems detecting locally uniform motion develop earlier than do systems specializing in complex, globally non-uniform patterns of motion, and with evidence suggesting that motion processing undergoes prolonged postnatal development.
72
Simultaneous adaptation of retinal and extra-retinal motion signals. Vision Res 2007; 47:3373-84. [PMID: 18006036 DOI: 10.1016/j.visres.2007.10.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2006] [Revised: 09/20/2007] [Accepted: 10/03/2007] [Indexed: 11/23/2022]
Abstract
A number of models of motion perception include estimates of eye velocity to help compensate for the incidental retinal motion produced by smooth pursuit. The 'classical' model uses extra-retinal motor command signals to obtain the estimate. More recent 'reference-signal' models use retinal motion information to enhance the extra-retinal signal. The consequences of simultaneously adapting to pursuit and retinal motion are thought to favour the reference-signal model, largely because the perception of motion during pursuit ('perceived stability') changes despite the absence of a standard motion aftereffect. The current experiments investigated whether the classical model could also account for these findings. Experiment 1 replicated the changes to perceived stability and then showed how simultaneous motion adaptation changes perceived retinal speed (a velocity aftereffect). Contrary to claims made by proponents of the reference-signal model, adapting simultaneously to pursuit and retinal motion therefore alters the retinal motion inputs to the stability computation. Experiment 2 tested the idea that simultaneous motion adaptation sets up a competitive interaction between two types of velocity aftereffect, one retinal and one extra-retinal. The results showed that pursuit adaptation by itself drove perceived stability in one direction and that adding adapting retinal motion drove perceived stability in the other. Moreover, perceived stability changed in conditions that contained no mismatch between adapting pursuit and adapting retinal motion, contrary to the reference-signal account. Experiment 3 investigated whether the effects of simultaneous motion adaptation were directionally tuned. Surprisingly, no tuning was found, and this was true for both perceived stability and the retinal velocity aftereffect. The three experiments suggest that simultaneous motion adaptation alters perceived stability through separable changes to retinal and extra-retinal inputs. Possible mechanisms underlying the extra-retinal velocity aftereffect are discussed.
73
Takahashi K, Gu Y, May PJ, Newlands SD, DeAngelis GC, Angelaki DE. Multimodal coding of three-dimensional rotation and translation in area MSTd: comparison of visual and vestibular selectivity. J Neurosci 2007; 27:9742-56. [PMID: 17804635 PMCID: PMC2587312 DOI: 10.1523/jneurosci.0817-07.2007] [Citation(s) in RCA: 127] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Recent studies have shown that most neurons in the dorsal medial superior temporal area (MSTd) signal the direction of self-translation (i.e., heading) in response to both optic flow and inertial motion. Much less is currently known about the response properties of MSTd neurons during self-rotation. We have characterized the three-dimensional tuning of MSTd neurons while monkeys passively fixated a central, head-fixed target. Rotational stimuli were either presented using a motion platform or simulated visually using optic flow. Nearly all MSTd cells were significantly tuned for the direction of rotation in the absence of optic flow, with more neurons preferring roll than pitch or yaw rotations. The preferred rotation axis in response to optic flow was generally the opposite of that during physical rotation. This result differs sharply from our findings for translational motion, where approximately half of MSTd neurons have congruent visual and vestibular preferences. By testing a subset of neurons with combined visual and vestibular stimulation, we also show that the contributions of visual and vestibular cues to MSTd responses depend on the relative reliabilities of the two stimulus modalities. Previous studies of MSTd responses to motion in darkness have assumed a vestibular origin for the activity observed. We have directly verified this assumption by recording from MSTd neurons after bilateral labyrinthectomy. Selectivity for physical rotation and translation stimuli was eliminated after labyrinthectomy, whereas selectivity to optic flow was unaffected. Overall, the lack of MSTd neurons with congruent rotation tuning for visual and vestibular stimuli suggests that MSTd does not integrate these signals to produce a robust perception of self-rotation. Vestibular rotation signals in MSTd may instead be used to compensate for the confounding effects of rotatory head movements on optic flow.
Affiliation(s)
- Katsumasa Takahashi
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri 63110
- Yong Gu
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri 63110
- Paul J. May
- Departments of Anatomy, Ophthalmology, and Neurology, University of Mississippi Medical Center, Jackson, Mississippi 39216
- Shawn D. Newlands
- Department of Otolaryngology, University of Texas Medical Branch, Galveston, Texas 77550
- Gregory C. DeAngelis
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri 63110
- Dora E. Angelaki
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri 63110
74
Page WK, Duffy CJ. Cortical neuronal responses to optic flow are shaped by visual strategies for steering. Cereb Cortex 2007; 18:727-39. [PMID: 17621608 DOI: 10.1093/cercor/bhm109] [Citation(s) in RCA: 33] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
We hypothesized that neuronal responses to virtual self-movement would be enhanced during steering tasks. We recorded the activity of medial superior temporal (MSTd) neurons in monkeys trained to steer a straight-ahead course, using optic flow. We found smaller optic flow responses during active steering than during the passive viewing of the same stimuli. Behavioral analysis showed that the monkeys had learned to steer using local motion cues. Retraining the monkeys to use the global pattern of optic flow reversed the effects of the active-steering task: active steering then evoked larger responses than passive viewing. We then compared the responses of neurons during active steering by local motion and by global patterns: Local motion trials promoted the use of local dot movement near the center of the stimulus by occluding the peripheral visual field midway through the trial. Global pattern trials promoted the use of radial pattern movement by occluding the central visual field midway through the trial. In this study, identical full-field optic-flow stimuli evoked larger responses in global-pattern trials than in local motion trials. We conclude that the selection of specific visual cues reflects strategies for active steering and alters MSTd neuronal responses to optic flow.
Affiliation(s)
- William K Page
- Department of Neurology, and Center for Visual Science, The University of Rochester Medical Center, Rochester, NY 14642-0673, USA
75
Bartels A, Zeki S, Logothetis NK. Natural vision reveals regional specialization to local motion and to contrast-invariant, global flow in the human brain. Cereb Cortex 2007; 18:705-17. [PMID: 17615246 DOI: 10.1093/cercor/bhm107] [Citation(s) in RCA: 109] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Visual changes in feature movies, as in real life, can be partitioned into global flow due to self/camera motion, local/differential flow due to object motion, and residuals, for example, due to illumination changes. We correlated these measures with brain responses of human volunteers viewing movies in an fMRI scanner. Early visual areas responded only to residual changes, thus lacking responses to equally large motion-induced changes, consistent with predictive coding. Motion activated V5+ (MT+), V3A, medial posterior parietal cortex (mPPC) and, weakly, lateral occipital cortex (LOC). V5+ responded to local/differential motion and depended on visual contrast, whereas mPPC responded to global flow spanning the whole visual field and was contrast independent. mPPC thus codes for flow compatible with unbiased heading estimation in natural scenes and for the comparison of visual flow with nonretinal, multimodal motion cues there or downstream. mPPC was functionally connected to anterior portions of V5+, whereas the laterally neighboring putative homologue of the lateral intraparietal area (LIP) connected with the frontal eye fields. Our results demonstrate a progression of selectivity from local and contrast-dependent motion processing in V5+ toward global and contrast-independent motion processing in mPPC. The function, connectivity, and anatomical neighborhood of mPPC imply several parallels to monkey ventral intraparietal area (VIP).
Affiliation(s)
- A Bartels
- Max Planck Institute for Biological Cybernetics, Department of Physiology of Cognitive Processes, 72076 Tübingen, Germany.
76
Batista AP, Santhanam G, Yu BM, Ryu SI, Afshar A, Shenoy KV. Reference frames for reach planning in macaque dorsal premotor cortex. J Neurophysiol 2007; 98:966-83. [PMID: 17581846 DOI: 10.1152/jn.00421.2006] [Citation(s) in RCA: 100] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
When a human or animal reaches out to grasp an object, the brain rapidly computes a pattern of muscular contractions that can acquire the target. This computation involves a reference frame transformation because the target's position is initially available only in a visual reference frame, yet the required control signal is a set of commands to the musculature. One of the core brain areas involved in visually guided reaching is the dorsal aspect of the premotor cortex (PMd). Using chronically implanted electrode arrays in two rhesus monkeys, we studied the contributions of PMd to the reference frame transformation for reaching. PMd neurons are influenced by the locations of reach targets relative to both the arm and the eyes. Some neurons encode reach goals using limb-centered reference frames, whereas others employ eye-centered reference frames. Some cells encode reach goals in a reference frame best described by the combined position of the eyes and hand. In addition to neurons like these where a reference frame could be identified, PMd also contains cells that are influenced by both the eye- and limb-centered locations of reach goals but for which a distinct reference frame could not be determined. We propose two interpretations for these neurons. First, they may encode reach goals using a reference frame we did not investigate, such as intrinsic reference frames. Second, they may not be adequately characterized by any reference frame.
Affiliation(s)
- Aaron P Batista
- Department of Electrical Engineering and Neurosciences Program, Stanford University, Stanford, California 94305-4075, USA
77
Inaba N, Shinomoto S, Yamane S, Takemura A, Kawano K. MST Neurons Code for Visual Motion in Space Independent of Pursuit Eye Movements. J Neurophysiol 2007; 97:3473-83. [PMID: 17329625 DOI: 10.1152/jn.01054.2006] [Citation(s) in RCA: 63] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
When a person tracks a small moving object, the visual images in the background of the visual scene move across the retina. It is, however, possible to estimate the actual motion of the images despite the eye-movement-induced motion. To understand the neural mechanism that reconstructs a stable visual world independent of eye movements, we explored areas MT (middle temporal) and MST (medial superior temporal) in the monkey cortex, both of which are known to be essential for visual motion analysis. We recorded the responses of neurons to a moving textured image that appeared briefly on the screen while the monkeys were performing smooth pursuit or stationary fixation tasks. Although neurons in both areas exhibited significant responses to the motion of the textured image with directional selectivity, the responses of MST neurons were mostly correlated with the motion of the image on the screen independent of pursuit eye movement, whereas the responses of MT neurons were mostly correlated with the motion of the image on the retina. Thus these MST neurons were more likely than MT neurons to distinguish between external and self-induced motion. The results are consistent with the idea that MST neurons code for visual motion in the external world while compensating for the counter-rotation of retinal images due to pursuit eye movements.
Affiliation(s)
- Naoko Inaba
- Dept of Integrative Brain Science, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto 606-8501, Japan.
78
Lee B, Pesaran B, Andersen RA. Translation speed compensation in the dorsal aspect of the medial superior temporal area. J Neurosci 2007; 27:2582-91. [PMID: 17344395 PMCID: PMC6672509 DOI: 10.1523/jneurosci.3416-06.2007] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
The dorsal aspect of the medial superior temporal area (MSTd) is involved in the computation of heading direction from the focus of expansion (FOE) of the visual image. Our laboratory previously found that MSTd neurons adjust their focus tuning curves to compensate for shifts in the FOE produced by eye rotation (Bradley et al., 1996) as well as for changes in pursuit speed (Shenoy et al., 2002). The translation speed of an observer also affects the shift of the FOE. To investigate whether MSTd neurons can adjust their focus tuning curves to compensate for varying translation speeds, we recorded extracellular responses from 93 focus-tuned MSTd neurons in two rhesus monkeys (Macaca mulatta) performing pursuit eye movements across displays of varying translation speeds. We found that MSTd neurons had larger shifts in their tuning curves for slow translation speeds and smaller shifts for fast translation speeds. These shifts aligned the focus tuning curves with the true heading direction and not with the retinal position of the FOE. Because the eye was pursuing at the same rate for varying translation speeds, these results indicate that retinal cues related both to translation speed and extraretinal signals from pursuit eye movements are used by MSTd neurons to compute heading direction.
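The dependence of the retinal FOE shift on translation speed has a simple back-of-envelope form. A sketch under strong simplifying assumptions (frontal plane at depth Z, small angles; my own illustration, not the paper's model): near the heading, translational flow grows as (T/Z)·θ while pursuit adds a roughly uniform rotational flow ω, so the retinal FOE sits where the two cancel, θ* ≈ ωZ/T.

```python
import numpy as np

# Retinal FOE shift during pursuit at rate w (deg/s) while translating at
# speed T (m/s) toward a frontal plane at depth Z (m):
# flows cancel where (T/Z)*theta = w, i.e. theta* ~ w*Z/T (radians).
def foe_shift_deg(pursuit_deg_s, speed_m_s, depth_m):
    w = np.radians(pursuit_deg_s)
    return np.degrees(w * depth_m / speed_m_s)

slow = foe_shift_deg(10.0, 0.5, 1.0)   # slow translation -> large FOE shift
fast = foe_shift_deg(10.0, 2.0, 1.0)   # fast translation -> small FOE shift
```

This reproduces the qualitative result in the abstract: for the same pursuit rate, slower translation produces a larger retinal FOE displacement, so the compensation required of MSTd tuning curves must scale accordingly.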
Affiliation(s)
- Brian Lee
- Division of Biology, California Institute of Technology, Pasadena, California 91125
- Bijan Pesaran
- Division of Biology, California Institute of Technology, Pasadena, California 91125
- Richard A. Andersen
- Division of Biology, California Institute of Technology, Pasadena, California 91125
79
Fetsch CR, Wang S, Gu Y, DeAngelis GC, Angelaki DE. Spatial reference frames of visual, vestibular, and multimodal heading signals in the dorsal subdivision of the medial superior temporal area. J Neurosci 2007; 27:700-12. [PMID: 17234602 PMCID: PMC1995026 DOI: 10.1523/jneurosci.3553-06.2007] [Citation(s) in RCA: 101] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Heading perception is a complex task that generally requires the integration of visual and vestibular cues. This sensory integration is complicated by the fact that these two modalities encode motion in distinct spatial reference frames (visual, eye-centered; vestibular, head-centered). Visual and vestibular heading signals converge in the primate dorsal subdivision of the medial superior temporal area (MSTd), a region thought to contribute to heading perception, but the reference frames of these signals remain unknown. We measured the heading tuning of MSTd neurons by presenting optic flow (visual condition), inertial motion (vestibular condition), or a congruent combination of both cues (combined condition). Static eye position was varied from trial to trial to determine the reference frame of tuning (eye-centered, head-centered, or intermediate). We found that tuning for optic flow was predominantly eye-centered, whereas tuning for inertial motion was intermediate but closer to head-centered. Reference frames in the two unimodal conditions were rarely matched in single neurons and uncorrelated across the population. Notably, reference frames in the combined condition varied as a function of the relative strength and spatial congruency of visual and vestibular tuning. This represents the first investigation of spatial reference frames in a naturalistic, multimodal condition in which cues may be integrated to improve perceptual performance. Our results compare favorably with the predictions of a recent neural network model that uses a recurrent architecture to perform optimal cue integration, suggesting that the brain could use a similar computational strategy to integrate sensory signals expressed in distinct frames of reference.
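The eye-centered versus head-centered distinction can be made concrete with a displacement-index style analysis (a simplified sketch of the general approach; the von Mises tuning curve and the 20° fixation shift are hypothetical): measure how far the tuning curve slides when eye position changes, and divide by the eye displacement.

```python
import numpy as np

headings = np.arange(0, 360, 5)            # heading directions, deg
delta_eye = 20.0                           # change in fixation, deg

def tuning(center):
    # von Mises-like heading tuning curve peaked at `center`
    return np.exp(np.cos(np.radians(headings - center)))

eye_centered = tuning(90.0 + delta_eye)    # curve follows the eyes
head_centered = tuning(90.0)               # curve anchored to the head

def displacement(curve_a, curve_b):
    # best circular shift (deg) aligning curve_b to curve_a
    shifts = np.arange(len(headings))
    corr = [np.dot(curve_a, np.roll(curve_b, s)) for s in shifts]
    s = int(np.argmax(corr)) * 5
    return s if s <= 180 else s - 360

# displacement index: 1 -> eye-centered, 0 -> head-centered
eye_di = displacement(eye_centered, head_centered) / delta_eye    # -> 1.0
head_di = displacement(head_centered, head_centered) / delta_eye  # -> 0.0
```

Intermediate reference frames, as reported for the vestibular condition, would yield indices between 0 and 1.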
Affiliation(s)
- Christopher R. Fetsch
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri 63110
- Sentao Wang
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri 63110
- Yong Gu
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri 63110
- Gregory C. DeAngelis
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri 63110
- Dora E. Angelaki
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri 63110
80
Abstract
Recent evidence suggests that a key visual motion centre in the brain ignores extra-retinal motor information concerning reflexive eye movements. Instead, it seems that neurons sensitive to oculomotor actions in this area fire at will.
Affiliation(s)
- Tom C A Freeman
- School of Psychology, Cardiff University, Tower Building, Park Place, Cardiff CF10 3AT, UK.
81
d'Avossa G, Tosetti M, Crespi S, Biagi L, Burr DC, Morrone MC. Spatiotopic selectivity of BOLD responses to visual motion in human area MT. Nat Neurosci 2006; 10:249-55. [PMID: 17195842 DOI: 10.1038/nn1824] [Citation(s) in RCA: 112] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2006] [Accepted: 11/28/2006] [Indexed: 11/08/2022]
Abstract
Many neurons in the monkey visual extrastriate cortex have receptive fields that are affected by gaze direction. In humans, psychophysical studies suggest that motion signals may be encoded in a spatiotopic fashion. Here we use functional magnetic resonance imaging to study spatial selectivity in the human middle temporal cortex (area MT or V5), an area that is clearly implicated in motion perception. The results show that the response of MT is modulated by gaze direction, generating a spatial selectivity based on screen rather than retinal coordinates. This area could be the neurophysiological substrate of the spatiotopic representation of motion signals.
Affiliation(s)
- Giovanni d'Avossa
- Facoltà di Psicologia, Università Vita-Salute San Raffaele, Via Olgettina 58, 20132 Milan, Italy
82
Bex PJ, Falkenberg HK. Resolution of complex motion detectors in the central and peripheral visual field. J Opt Soc Am A Opt Image Sci Vis 2006; 23:1598-607. [PMID: 16783422 DOI: 10.1364/josaa.23.001598] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
We examine how local direction signals are combined to compute the focus of radial motion (FRM) in random dot patterns and examine how this process changes across the visual field. Equivalent noise analysis showed that a loss in FRM accuracy was largely attributable to an increase in local motion detector noise with little or no change in efficiency across the visual field. The minimum separation for discriminating the foci of two overlapping optic flow patterns increased in the periphery faster than predicted from the resolution for a single FRM. This behavior requires that observers average numerous local velocities to estimate the FRM, which enables resistance to internal and external noise and endows the system with the property of position invariance. However, such pooling limits the precision with which multiple looming objects can be discriminated, especially in the peripheral visual field.
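The equivalent-noise decomposition used in this abstract separates internal (local-detector) noise from pooling efficiency. A minimal illustration with made-up numbers (the straight-line recovery below is my sketch, not the authors' fitting procedure): observed threshold variance is linear in external noise variance, with slope 1/N_eff and intercept σ_int²/N_eff.

```python
import numpy as np

# Equivalent-noise model:
#   sigma_obs^2 = (sigma_int^2 + sigma_ext^2) / N_eff
# sigma_int = internal detector noise, N_eff = effective samples pooled.
sigma_int, n_eff = 2.0, 25.0                        # hypothetical values
sigma_ext = np.array([0.0, 1.0, 2.0, 4.0, 8.0])     # external noise levels
obs_var = (sigma_int**2 + sigma_ext**2) / n_eff     # simulated "data"

# sigma_obs^2 is linear in sigma_ext^2: slope = 1/N_eff,
# intercept = sigma_int^2/N_eff -> recover both from a straight-line fit.
slope, intercept = np.polyfit(sigma_ext**2, obs_var, 1)
n_hat = 1.0 / slope
sigma_int_hat = np.sqrt(intercept * n_hat)
```

In this framework, the paper's central result is that peripheral losses show up as a rise in σ_int with little change in N_eff.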
Affiliation(s)
- Peter J Bex
- Institute of Ophthalmology, University College London, London EC1V 9EL, UK.
83
Goossens J, Dukelow SP, Menon RS, Vilis T, van den Berg AV. Representation of head-centric flow in the human motion complex. J Neurosci 2006; 26:5616-27. [PMID: 16723518 PMCID: PMC6675273 DOI: 10.1523/jneurosci.0730-06.2006] [Citation(s) in RCA: 39] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Recent neuroimaging studies have identified putative homologs of macaque middle temporal area (area MT) and medial superior temporal area (area MST) in humans. Compared with the monkey, little is known about the integration of visual and nonvisual signals in human motion areas. Through extra-retinal signals, the brain can factor out the components of visual flow on the retina that are induced by eye-in-head and head-in-space rotations and achieve a representation of flow relative to the head (head-centric flow) or body (body-centric flow). Here, we used functional magnetic resonance imaging to test whether extra-retinal eye-movement signals modulate responses to visual flow in the human MT+ complex. We distinguished between MT and MST and tested whether subdivisions of these areas may transform the retinal flow into head-centric flow. We report that interactions between eye-movement signals and visual flow are not evenly distributed across MT+. Pursuit hardly influenced the response of MT to flow, whereas the responses in MST to the same retinal stimuli were stronger during pursuit than during fixation. We also identified two subregions in which the flow-related responses were boosted significantly by pursuit, one overlapping part of MST. In addition, we found evidence of a metric relation between rotational flow relative to the head and fMRI signals in a subregion of MST. The latter findings provide an important advance over published single-cell recordings in monkey MST. A visual representation of the rotation of the head in the world derived from head-centric flow may supplement semicircular canal signals and is appropriate for cross-calibrating vestibular and visual signals.
Affiliation(s)
- Jeroen Goossens
- Department of Biophysics, Radboud University Nijmegen Medical Centre, 6500 HB Nijmegen, The Netherlands.
84
Blohm G, Optican LM, Lefèvre P. A model that integrates eye velocity commands to keep track of smooth eye displacements. J Comput Neurosci 2006; 21:51-70. [PMID: 16633937 DOI: 10.1007/s10827-006-7199-6] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2005] [Revised: 01/12/2006] [Accepted: 01/13/2006] [Indexed: 12/20/2022]
Abstract
Past results have reported conflicting findings on the oculomotor system's ability to keep track of smooth eye movements in darkness. Whereas some results indicate that saccades cannot compensate for smooth eye displacements, others report that memory-guided saccades during smooth pursuit are spatially correct. Recently, it was shown that the amount of time before the saccade made a difference: short-latency saccades were retinotopically coded, whereas long-latency saccades were spatially coded. Here, we propose a model of the saccadic system that can explain the available experimental data. The novel part of this model consists of a delayed integration of efferent smooth eye velocity commands. Two alternative physiologically realistic neural mechanisms for this integration stage are proposed. Model simulations accurately reproduced prior findings. Thus, this model reconciles the earlier contradictory reports from the literature about compensation for smooth eye movements before saccades because it involves a slow integration process.
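The paper's key ingredient — a slow integration of efferent eye-velocity commands — can be caricatured with a first-order lag (the time constant and pursuit speed below are my hypothetical choices, not the model's fitted parameters). The lag makes short-latency saccades under-compensate for accumulated smooth eye displacement while long-latency saccades are nearly spatially correct.

```python
import numpy as np

dt = 0.001                        # s, simulation step
tau = 0.15                        # s, time constant of the slow integrator
t = np.arange(0.0, 1.0, dt)
eye_vel = np.full_like(t, 10.0)   # deg/s, constant smooth pursuit

true_disp = np.cumsum(eye_vel) * dt          # true smooth eye displacement
est = np.zeros_like(t)                       # slowly integrated estimate
for i in range(1, len(t)):
    # leaky (low-pass) coupling of the running integral into the estimate
    est[i] = est[i - 1] + dt / tau * (true_disp[i - 1] - est[i - 1])

short, late = int(0.05 / dt), int(0.8 / dt)
ratio_short = est[short] / true_disp[short]  # small: little compensation
ratio_late = est[late] / true_disp[late]     # near 1: mostly compensated
```

The two ratios illustrate the reconciliation the model offers: retinotopic (uncompensated) coding for short-latency saccades, spatial coding for long-latency ones.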
Affiliation(s)
- Gunnar Blohm
- CESAME, Université catholique de Louvain, 4, avenue G. Lemaître, 1348, Louvain-la-Neuve, Belgium.
85
Gu Y, Watkins PV, Angelaki DE, DeAngelis GC. Visual and nonvisual contributions to three-dimensional heading selectivity in the medial superior temporal area. J Neurosci 2006; 26:73-85. [PMID: 16399674 PMCID: PMC1538979 DOI: 10.1523/jneurosci.2356-05.2006] [Citation(s) in RCA: 226] [Impact Index Per Article: 12.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Robust perception of self-motion requires integration of visual motion signals with nonvisual cues. Neurons in the dorsal subdivision of the medial superior temporal area (MSTd) may be involved in this sensory integration, because they respond selectively to global patterns of optic flow, as well as translational motion in darkness. Using a virtual-reality system, we have characterized the three-dimensional (3D) tuning of MSTd neurons to heading directions defined by optic flow alone, inertial motion alone, and congruent combinations of the two cues. Among 255 MSTd neurons, 98% exhibited significant 3D heading tuning in response to optic flow, whereas 64% were selective for heading defined by inertial motion. Heading preferences for visual and inertial motion could be aligned but were just as frequently opposite. Moreover, heading selectivity in response to congruent visual/vestibular stimulation was typically weaker than that obtained using optic flow alone, and heading preferences under congruent stimulation were dominated by the visual input. Thus, MSTd neurons generally did not integrate visual and nonvisual cues to achieve better heading selectivity. A simple two-layer neural network, which received eye-centered visual inputs and head-centered vestibular inputs, reproduced the major features of the MSTd data. The network was trained to compute heading in a head-centered reference frame under all stimulus conditions, such that it performed a selective reference-frame transformation of visual, but not vestibular, signals. The similarity between network hidden units and MSTd neurons suggests that MSTd may be an early stage of sensory convergence involved in transforming optic flow information into a (head-centered) reference frame that facilitates integration with vestibular signals.
Affiliation(s)
- Yong Gu
- Department of Neurobiology, Washington University School of Medicine, St. Louis, Missouri 63110, USA
86
Souman JL, Hooge ITC, Wertheim AH. Frame of reference transformations in motion perception during smooth pursuit eye movements. J Comput Neurosci 2006; 20:61-76. [PMID: 16511654 DOI: 10.1007/s10827-006-5216-4] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2005] [Revised: 07/04/2005] [Accepted: 09/27/2005] [Indexed: 10/25/2022]
Abstract
Smooth pursuit eye movements change the retinal image velocity of objects in the visual field. In order to change from a retinocentric frame of reference into a head-centric one, the visual system has to take the eye movements into account. Studies on motion perception during smooth pursuit eye movements have measured either perceived speed or perceived direction during smooth pursuit to investigate this frame of reference transformation, but never both at the same time. We devised a new velocity matching task, in which participants matched both perceived speed and direction during fixation to those during pursuit. In Experiment 1, the velocity matches were determined for a range of stimulus directions, with the head-centric stimulus speed kept constant. In Experiment 2, the retinal stimulus speed was kept approximately constant, with the same range of stimulus directions. In both experiments, the velocity matches for all directions were shifted against the pursuit direction, suggesting an incomplete transformation of the frame of reference. The degree of compensation was approximately constant across stimulus direction. We fitted the classical linear model, the model of Turano and Massof (2001), and that of Freeman (2001) to the velocity matches. The model of Turano and Massof fitted the velocity matches best, but the differences between the model fits were quite small. Evaluation of the models and comparison to a few alternatives suggests that further specification of the potential effect of retinal image characteristics on the eye movement signal is needed.
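The classical linear model compared here can be written in two lines. A sketch with illustrative numbers (the 0.8 eye-signal gain and the velocities are hypothetical): perceived head-centric velocity is retinal velocity plus a scaled extra-retinal estimate of eye velocity, so a gain below 1 leaves perceived motion shifted against the pursuit direction, as in the matching data.

```python
import numpy as np

pursuit = np.array([10.0, 0.0])    # deg/s, rightward smooth pursuit
e_gain = 0.8                       # extra-retinal eye-signal gain (< 1)

def perceived(head_centric_stim):
    retinal = head_centric_stim - pursuit    # image motion on the retina
    return retinal + e_gain * pursuit        # add back the undersized eye signal

stim = np.array([5.0, 5.0])        # head-centric stimulus velocity
shifted = perceived(stim)          # shifted by (e_gain - 1) * pursuit = [-2, 0]
```

Here every match, regardless of stimulus direction, is displaced by the same vector against pursuit, matching the roughly constant compensation the experiments report.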
Affiliation(s)
- Jan L Souman
- Department of Psychonomics, Helmholtz Institute, Utrecht University, The Netherlands.
87
Merchant H, Georgopoulos AP. Neurophysiology of perceptual and motor aspects of interception. J Neurophysiol 2006; 95:1-13. [PMID: 16339504 DOI: 10.1152/jn.00422.2005] [Citation(s) in RCA: 69] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The interception of moving targets is a complex activity that involves a dynamic interplay of several perceptual and motor processes and therefore involves a rich interaction among several brain areas. Although the behavioral aspects of interception have been studied for the past three decades, it is only during the past decade that neural studies have been focused on this problem. In addition to the interception itself, several neural studies have explored, within that context, the underlying mechanisms concerning perceptual aspects of moving stimuli, such as optic flow and apparent motion. In this review, we discuss the wealth of knowledge that has accumulated on this topic with an emphasis on the results of neural studies in behaving monkeys.
Collapse
Affiliation(s)
- Hugo Merchant
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, Qro., Mexico
| | | |
Collapse
|
88
|
Logan DJ, Duffy CJ. Cortical area MSTd combines visual cues to represent 3-D self-movement. Cereb Cortex 2005; 16:1494-507. [PMID: 16339087 DOI: 10.1093/cercor/bhj082] [Citation(s) in RCA: 35] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
As arboreal primates move through the jungle, they are immersed in visual motion that they must distinguish from the movement of predators and prey. We recorded dorsal medial superior temporal (MSTd) cortical neuronal responses to visual motion stimuli simulating self-movement and object motion. MSTd neurons encode the heading of simulated self-movement in three-dimensional (3-D) space. 3-D heading responses can be evoked either by the large patterns of visual motion in optic flow or by the visual object motion seen when an observer passes an earth-fixed landmark. Responses to naturalistically combined optic flow and object motion depend on their relative directions: an object moving as part of the optic flow field has little effect on neuronal responses. In contrast, an object moving separately from the optic flow field has large effects, decreasing the amplitude of the population response and shifting the population's heading estimate to match the direction of object motion as the object moves toward central vision. These effects parallel those seen in human heading perception with minimal effects of objects moving with the optic flow and substantial effects of objects violating the optic flow. We conclude that MSTd can contribute to navigation by supporting 3-D heading estimation, potentially switching from optic flow to object cues when a moving object passes in front of the observer.
Collapse
Affiliation(s)
- David J Logan
- Department of Neurology, and the Center for Visual Science, The University of Rochester Medical Center, Rochester, NY 14642, USA
| | | |
Collapse
|
89
|
Souman JL, Hooge ITC, Wertheim AH. Localization and motion perception during smooth pursuit eye movements. Exp Brain Res 2005; 171:448-58. [PMID: 16331504 DOI: 10.1007/s00221-005-0287-4] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2005] [Accepted: 10/26/2005] [Indexed: 11/25/2022]
Abstract
We investigated the relationship between compensation for the effects of smooth pursuit eye movements in localization and motion perception. Participants had to indicate the perceived motion direction, the starting point and the end point of a vertically moving stimulus dot presented during horizontal smooth pursuit. The presentation duration of the stimulus was varied. From the indicated starting and end points, the motion direction was predicted and compared with the actual indicated directions. Both the directions predicted from localization and the indicated directions deviated from the physical directions, but the errors in the predicted directions were larger than those in the indicated directions. The results of a control experiment, in which the same tasks were performed during fixation, suggest that this difference reflects different transformations from a retinocentric to a head-centric frame of reference. This difference appears to be mainly due to an asymmetry in the effect of retinal image motion direction on localization during smooth pursuit.
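The prediction step described in this abstract, deriving a motion direction from the indicated start and end points, is a simple two-point angle computation. A minimal sketch with illustrative coordinates (not the study's data):

```python
import numpy as np

# Predicted motion direction from the indicated start and end points,
# as in the localization task above. Coordinates are in degrees of
# visual angle and purely illustrative.
start = np.array([0.0, 0.0])  # indicated starting point
end = np.array([1.0, 4.0])    # indicated end point
delta = end - start
predicted_dir = np.degrees(np.arctan2(delta[1], delta[0]))
# The study compared this predicted direction with the direction the
# participant indicated directly; systematic differences between the
# two imply different retinocentric-to-head-centric transformations.
```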
Collapse
Affiliation(s)
- Jan L Souman
- Helmholtz Institute, Department of Psychonomics, Utrecht University, Utrecht, The Netherlands.
| | | | | |
Collapse
|
90
|
Thier P, Ilg UJ. The neural basis of smooth-pursuit eye movements. Curr Opin Neurobiol 2005; 15:645-52. [PMID: 16271460 DOI: 10.1016/j.conb.2005.10.013] [Citation(s) in RCA: 115] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2005] [Accepted: 10/21/2005] [Indexed: 11/26/2022]
Abstract
Smooth-pursuit eye movements are used to stabilize the image of a moving object of interest on the fovea, thus guaranteeing its high-acuity scrutiny. Such movements are based on a phylogenetically recent cerebro-ponto-cerebellar pathway that has evolved in parallel with foveal vision. Recent work has shown that a network of several cerebrocortical areas directs attention to objects of interest moving in three dimensions and reconstructs the trajectory of the target in extrapersonal space, thereby integrating various sources of multimodal sensory and efference copy information, as well as cognitive influences such as prediction. This cortical network is the starting point of a set of parallel cerebrofugal projections that use different parts of the dorsal pontine nuclei and the neighboring rostral nucleus reticularis tegmenti pontis as intermediate stations to feed two areas of the cerebellum, the flocculus-paraflocculus and the posterior vermis, which make mainly complementary contributions to the control of smooth pursuit.
Collapse
Affiliation(s)
- Peter Thier
- Department of Cognitive Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Hoppe-Seyler Strasse 3, 72076 Tuebingen, Germany.
| | | |
Collapse
|
91
|
Abstract
Moving objects are detected by virtue of their shifting image on the retina. But to know how objects are moving in the world, we must take into account the rotation of our eyes, as well as the rotation of our head. A recent paper describes neurons that carry out this computation.
Collapse
Affiliation(s)
- David Bradley
- Psychology Department, The University of Chicago, 5848 South University Avenue, Green 314, Chicago, Illinois 60637, USA.
| |
Collapse
|
92
|
Ilg UJ, Schumann S, Thier P. Posterior parietal cortex neurons encode target motion in world-centered coordinates. Neuron 2004; 43:145-51. [PMID: 15233924 DOI: 10.1016/j.neuron.2004.06.006] [Citation(s) in RCA: 69] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2004] [Revised: 04/16/2004] [Accepted: 05/31/2004] [Indexed: 10/26/2022]
Abstract
The motion areas of posterior parietal cortex extract information on visual motion for perception as well as for the guidance of movement. It is usually assumed that neurons in posterior parietal cortex represent visual motion relative to the retina. Current models describing action guided by moving objects work successfully based on this assumption. However, here we show that the pursuit-related responses of a distinct group of neurons in area MST of monkeys are at odds with this view. Rather than signaling object image motion on the retina, they represent object motion in world-centered coordinates. This representation may simplify the coordination of object-directed action and ego motion-invariant visual perception.
Collapse
Affiliation(s)
- Uwe J Ilg
- Department of Cognitive Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Hoppe-Seyler-Str 3, D-72076 Tübingen, Germany.
| | | | | |
Collapse
|
93
|
Zhang T, Heuer HW, Britten KH. Parietal area VIP neuronal responses to heading stimuli are encoded in head-centered coordinates. Neuron 2004; 42:993-1001. [PMID: 15207243 DOI: 10.1016/j.neuron.2004.06.008] [Citation(s) in RCA: 87] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2004] [Revised: 04/19/2004] [Accepted: 06/04/2004] [Indexed: 11/27/2022]
Abstract
The ventral intraparietal area (VIP) is a multimodal parietal area, where visual responses are brisk, directional, and typically selective for complex optic flow patterns. VIP thus could provide signals useful for visual estimation of heading (self-motion direction). A central problem in heading estimation is how observers compensate for eye velocity, which distorts the retinal motion cues upon which perception depends. To find out if VIP could be useful for heading, we measured its responses to simulated trajectories, both with and without eye movements. Our results showed that most VIP neurons very strongly signal heading direction. Furthermore, the tuning of most VIP neurons was remarkably stable in the presence of eye movements. This stability was such that the population of VIP neurons represented heading very nearly in head-centered coordinates. This makes VIP the most robust source of such signals yet described, with properties ideal for supporting perception.
Collapse
Affiliation(s)
- Tao Zhang
- Center for Neuroscience, University of California, Davis, Davis, CA 95616, USA
| | | | | |
Collapse
|
94
|
Chapter 3 Building blocks for time-to-contact estimation by the brain. Adv Psychol 2004. [DOI: 10.1016/s0166-4115(04)80005-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register]
|
95
|
Heuer HW, Britten KH. Optic flow signals in extrastriate area MST: comparison of perceptual and neuronal sensitivity. J Neurophysiol 2003; 91:1314-26. [PMID: 14534287 DOI: 10.1152/jn.00637.2003] [Citation(s) in RCA: 59] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The medial superior temporal area of extrastriate cortex (MST) contains signals selective for nonuniform patterns of motion often termed "optic flow." The presence of such tuning, however, does not necessarily imply involvement in perception. To quantify the relationship between these selective neuronal signals and the perception of optic flow, we designed a discrimination task that allowed us to simultaneously record neuronal and behavioral sensitivities to near-threshold optic flow stimuli tailored to MST cells' preferences. In this two-alternative forced-choice task, we controlled the salience of globally opposite patterns (e.g., expansion and contraction) by varying the coherence of the motion. Using these stimuli, we could both relate the sensitivity of neuronal signals in MST to the animal's behavioral sensitivity and also measure trial-by-trial correlation between neuronal signals and behavioral choices. Neurons in MST showed a wide range of sensitivities to these complex motion stimuli. Many neurons had sensitivities equal or superior to the monkey's threshold. On the other hand, trial-by-trial correlation between neuronal discharge and choice ("choice probability") was weak or nonexistent in our data. Together, these results lead us to conclude that MST contains sufficient information for threshold judgments of optic flow; however, the role of MST activity in optic flow discriminations may be less direct than in other visual motion tasks previously described by other laboratories.
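The choice probability reported in this abstract is conventionally computed as the area under the ROC curve comparing spike-count distributions conditioned on the animal's two choices; 0.5 means no trial-by-trial relation between firing and choice. A minimal sketch on simulated spike counts (not the study's data):

```python
import numpy as np

def choice_probability(counts_choice_a, counts_choice_b):
    """Area under the ROC curve for two choice-conditioned spike-count
    distributions, via the pairwise-comparison form of the ROC area
    (ties count as 0.5). Values above 0.5 mean higher counts predict
    choice A on a trial-by-trial basis."""
    a = np.asarray(counts_choice_a, dtype=float)
    b = np.asarray(counts_choice_b, dtype=float)
    greater = (a[:, None] > b[None, :]).sum()
    ties = (a[:, None] == b[None, :]).sum()
    return (greater + 0.5 * ties) / (a.size * b.size)

rng = np.random.default_rng(0)
# Simulated spike counts with a weak dependence on the choice made.
pref = rng.poisson(22, 200)  # trials ending in the "preferred" choice
null = rng.poisson(20, 200)  # trials ending in the other choice
cp = choice_probability(pref, null)  # modestly above 0.5
```

The "weak or nonexistent" choice probabilities the authors report correspond to values near 0.5 in this computation.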
Collapse
Affiliation(s)
- Hilary W Heuer
- Center for Neuroscience and Section of Neurobiology, Physiology and Behavior, University of California, Davis, California 95616, USA
| | | |
Collapse
|
96
|
Ben Hamed S, Page W, Duffy C, Pouget A. MSTd neuronal basis functions for the population encoding of heading direction. J Neurophysiol 2003; 90:549-58. [PMID: 12750416 DOI: 10.1152/jn.00639.2002] [Citation(s) in RCA: 50] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Basis functions have been extensively used in models of neural computation because they can be combined linearly to approximate any nonlinear functions of the encoded variables. We investigated whether dorsal medial superior temporal (MSTd) area neurons use basis functions to simultaneously encode heading direction, eye position, and the velocity of ocular pursuit. Using optimal linear estimators, we first show that the head-centered and eye-centered position of a focus of expansion (FOE) in optic flow, pursuit direction, and eye position can all be estimated from the single-trial responses of 144 MSTd neurons with an average accuracy of 2-3 degrees, a value consistent with the discrimination thresholds measured in humans and monkeys. We then examined the format of the neural code for the head-centered position of the FOE, eye position, and pursuit direction. The basis function hypothesis predicts that a large majority of cells in MSTd should encode two or more signals simultaneously and combine these signals nonlinearly. Our analysis shows that 95% of the neurons encode two or more signals, whereas 76% code all three signals. Of the 95% of cells encoding two or more signals, 90% show nonlinear interactions between the encoded variables. These findings support the notion that MSTd may use basis functions to represent the FOE in optic flow, eye position, and pursuit.
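The optimal linear estimator used in this study amounts to finding least-squares weights that map the population's single-trial responses onto the encoded variable. A toy sketch with simulated Gaussian tuning; the neuron count matches the abstract, but all other numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials = 144, 500

# Simulated population: each neuron responds to a heading-like variable
# (degrees) through a noisy Gaussian tuning curve with a random preference.
heading = rng.uniform(-40, 40, n_trials)
prefs = rng.uniform(-40, 40, n_neurons)
tuning = np.exp(-0.5 * ((heading[:, None] - prefs[None, :]) / 25.0) ** 2)
responses = tuning + 0.1 * rng.standard_normal((n_trials, n_neurons))

# Optimal linear estimator: least-squares weights (plus a constant term)
# mapping single-trial responses onto the encoded variable.
X = np.column_stack([responses, np.ones(n_trials)])
w, *_ = np.linalg.lstsq(X, heading, rcond=None)
estimate = X @ w
rmse = np.sqrt(np.mean((estimate - heading) ** 2))  # decoding error (deg)
```

With broad tuning and modest noise the decoding error lands in the low single digits of degrees, the same order as the 2-3 degree accuracy the authors report for their 144 MSTd neurons.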
Collapse
Affiliation(s)
- S Ben Hamed
- Department of Brain and Cognitive Science and the Center for Visual Science, University of Rochester, NY 14627, USA
| | | | | | | |
Collapse
|
97
|
Page WK, Duffy CJ. Heading representation in MST: sensory interactions and population encoding. J Neurophysiol 2003; 89:1994-2013. [PMID: 12686576 DOI: 10.1152/jn.00493.2002] [Citation(s) in RCA: 108] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The population response of dorsal medial superior temporal cortex (MSTd) encodes heading direction from optic flow seen during fixation or pursuit. Vestibular responses in these neurons might enhance heading representation during self-movement in light or provide an alternative basis for heading representation during self-movement in darkness. We have compared these hypotheses by recording MSTd neuronal responses to translational self-movement in light and darkness, during fixation and pursuit. Translational movement in darkness, with gaze fixed, evokes transient vestibular responses during acceleration that reverse directionality during deceleration and persist without a fixation target. Movement in light increases the amplitude and duration of these responses so they mimic responses to simulated optic flow presented without translational movement. Pursuit of a stationary landmark during translational movement combines vestibular and visual effects with pursuit responses. Vestibular, visual, and pursuit effects interact so that single neuron heading responses vary across the stimulus period and between stimulus conditions. Combining single neuron responses by population vector summation yields stronger heading estimates in light than in darkness, with gaze fixed or during landmark pursuit. Adding translational movement to robust optic flow stimuli does not augment the population response. Vestibular signals enhance single neuron responses in light and maintain population heading estimation in darkness, potentially extending MSTd's heading representation across the continuum of naturalistic self-movement conditions.
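The population vector summation used in this study sums a unit vector along each neuron's preferred heading, weighted by its firing rate; the resultant's angle is the population heading estimate and its length indexes the estimate's strength. A minimal 2-D sketch with made-up tuning and rates:

```python
import numpy as np

def population_vector(preferred_deg, rates):
    """Sum unit vectors along each neuron's preferred heading, weighted
    by firing rate. Returns (estimated heading in degrees, resultant
    length as a strength index)."""
    theta = np.deg2rad(np.asarray(preferred_deg, dtype=float))
    r = np.asarray(rates, dtype=float)
    x = (r * np.cos(theta)).sum()
    y = (r * np.sin(theta)).sum()
    return np.rad2deg(np.arctan2(y, x)), np.hypot(x, y)

# Neurons with preferred headings every 45 deg; rates follow a von
# Mises-like tuning curve peaking near a true heading of 20 deg.
prefs = np.arange(0, 360, 45)
true_heading = 20.0
rates = 10.0 * np.exp(np.cos(np.deg2rad(prefs - true_heading)))
est, strength = population_vector(prefs, rates)  # est is close to 20 deg
```

The study's light-versus-darkness comparison corresponds to changes in the resultant length: weaker, vestibular-only responses yield a shorter population vector but can still point in the correct heading direction.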
Collapse
Affiliation(s)
- William K Page
- Departments of Neurology, Neurobiology, and Anatomy, Ophthalmology, Brain and Cognitive Sciences, and The Center for Visual Science, The University of Rochester Medical Center, Rochester, New York 14642, USA
| | | |
Collapse
|
98
|
Abstract
When we move forward, the visual images on our retinas expand. Humans rely on the focus, or center, of this expansion to estimate their direction of self-motion or heading and, as long as the eyes are still, the retinal focus corresponds to the heading. However, smooth pursuit eye movements add visual motion to the expanding retinal image and displace the focus of expansion. In spite of this, humans accurately judge their heading during pursuit eye movements even though the retinal focus no longer corresponds to the heading. Recent studies in macaque suggest that correction for pursuit may occur in the dorsal aspect of the medial superior temporal area (MSTd); neurons in this area are tuned to the retinal position of the focus and they modify their tuning to partially compensate for the focus shift caused by pursuit. However, the question remains whether these neurons shift focus tuning more at faster pursuit speeds, to compensate for the larger focus shifts created by faster pursuit. To investigate this question, we recorded from 40 MSTd neurons while monkeys made pursuit eye movements at a range of speeds across simulated self- or object motion displays. We found that most MSTd neurons modify their focus tuning more at faster pursuit speeds, consistent with the idea that they encode heading and other motion parameters regardless of pursuit speed. Across the population, the median rate of compensation increase with pursuit speed was 51% as great as required for perfect compensation. We recorded from the same neurons in a simulated pursuit condition, in which gaze was fixed but the entire display counter-rotated to produce the same retinal image as during real pursuit. This condition showed that retinal cues contribute to pursuit compensation: the rate of compensation increase was 30% of that required for accurate encoding of heading. The difference between these two conditions was significant (P < 0.05), indicating that extraretinal cues also contribute significantly. We found a systematic antialignment between preferred pursuit and preferred visual motion directions; neurons may use this antialignment to combine retinal and extraretinal compensatory cues. These results indicate that many MSTd neurons compensate for pursuit velocity, both for pursuit direction, as previously reported, and for pursuit speed, and further implicate MSTd as a critical stage in the computation of egomotion.
Collapse
Affiliation(s)
- Krishna V Shenoy
- Division of Biology, California Institute of Technology, Pasadena, California 91125, USA
| | | | | |
Collapse
|
99
|
Abstract
The posterior parietal cortex (PPC), historically believed to be a sensory structure, is now viewed as an area important for sensory-motor integration. Among its functions is the forming of intentions, that is, high-level cognitive plans for movement. There is a map of intentions within the PPC, with different subregions dedicated to the planning of eye movements, reaching movements, and grasping movements. These areas appear to be specialized for the multisensory integration and coordinate transformations required to convert sensory input to motor output. In several subregions of the PPC, these operations are facilitated by the use of a common distributed space representation that is independent of both sensory input and motor output. Attention and learning effects are also evident in the PPC. However, these effects may be general to cortex and operate in the PPC in the context of sensory-motor transformations.
Collapse
Affiliation(s)
- Richard A Andersen
- Division of Biology, California Institute of Technology, Mail Code 216-76, Pasadena 91125, USA.
| | | |
Collapse
|
100
|
Eskandar EN, Assad JA. Distinct nature of directional signals among parietal cortical areas during visual guidance. J Neurophysiol 2002; 88:1777-90. [PMID: 12364506 DOI: 10.1152/jn.2002.88.4.1777] [Citation(s) in RCA: 38] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
We examined neuronal signals in the monkey medial superior temporal area (MST), the medial intraparietal area (MIP), and the lateral intraparietal area (LIP) during visually guided hand movements. Two animals were trained to use a joystick to guide a spot to a target. Many neurons responded in a direction-selective manner in this guidance task. We tested whether the direction selectivity depended on the direction of the stimulus spot or the direction of the hand movement. First, in some trials, the moving spot disappeared transiently. Second, the mapping between the hand direction and the spot direction was reversed on alternate blocks of trials. Third, we recorded the spot's movement while the animals moved the joystick and then played back that movement while the animals fixated without moving the joystick. Neurons in the three parietal areas conveyed distinct directional information. MST neurons were active and directional only on visible trials in both joystick-movement mode and playback mode and were not affected by the direction of hand movement. MIP neurons were mainly directional with respect to the hand movement, although some MIP neurons were also selective for stimulus direction. MIP neurons were much less active in playback mode. LIP neurons were active and directional in both joystick-movement mode and playback mode. Directional signals in LIP were unrelated to planning saccades. The selectivity of LIP neurons also became evident hundreds of milliseconds before the start of movement. Since the direction of movement was consistent throughout a block of trials, these signals could provide a prediction of the upcoming direction of motion. We tested this by alternating blocks of trials in which the direction was consistent or randomized. The direction selectivity developed earlier on trials in which the upcoming direction could be predicted. These results suggest that LIP neurons combine "bottom-up" visual motion signals with extraretinal, predictive signals about stimulus motion.
Collapse
Affiliation(s)
- Emad N Eskandar
- Department of Neurobiology, Harvard Medical School, Boston, Massachusetts 02115, USA
| | | |
Collapse
|