1
Sadeh M, Sajad A, Wang H, Yan X, Crawford JD. Spatial transformations between superior colliculus visual and motor response fields during head-unrestrained gaze shifts. Eur J Neurosci 2016; 42:2934-51. [PMID: 26448341 DOI: 10.1111/ejn.13093]
Abstract
We previously reported that visuomotor activity in the superior colliculus (SC)--a key midbrain structure for the generation of rapid eye movements--preferentially encodes target position relative to the eye (Te) during low-latency head-unrestrained gaze shifts (DeSouza et al., 2011). Here, we trained two monkeys to perform head-unrestrained gaze shifts after a variable post-stimulus delay (400-700 ms), to test whether temporally separated SC visual and motor responses show different spatial codes. Target positions, final gaze positions and various frames of reference (eye, head, and space) were dissociated through natural (untrained) trial-to-trial variations in behaviour. 3D eye and head orientations were recorded, and 2D response field data were fitted against multiple models by use of a statistical method reported previously (Keith et al., 2009). Of 60 neurons, 17 showed a visual response, 12 showed a motor response, and 31 showed both visual and motor responses. The combined visual response field population (n = 48) showed a significant preference for Te, which was also preferred in each visual subpopulation. In contrast, the motor response field population (n = 43) showed a preference for final (relative to initial) gaze position models, and the Te model was statistically eliminated in the motor-only population. There was also a significant shift of coding from the visual to motor response within visuomotor neurons. These data confirm that SC response fields are gaze-centred, and show a target-to-gaze transformation between visual and motor responses. Thus, visuomotor transformations can occur between, and even within, neurons within a single frame of reference and brain structure.
Affiliation(s)
- Morteza Sadeh
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; York Neuroscience Graduate Diploma Program, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Amirsaman Sajad
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; York Neuroscience Graduate Diploma Program, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Hongying Wang
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Xiaogang Yan
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- John Douglas Crawford
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; York Neuroscience Graduate Diploma Program, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
2
Koval MJ, Hutchison RM, Lomber SG, Everling S. Effects of unilateral deactivations of dorsolateral prefrontal cortex and anterior cingulate cortex on saccadic eye movements. J Neurophysiol 2013; 111:787-803. [PMID: 24285866 DOI: 10.1152/jn.00626.2013]
Abstract
The dorsolateral prefrontal cortex (dlPFC) and anterior cingulate cortex (ACC) have both been implicated in the cognitive control of saccadic eye movements by single neuron recording studies in nonhuman primates and functional imaging studies in humans, but their relative roles remain unclear. Here, we reversibly deactivated either dlPFC or ACC subregions in macaque monkeys while the animals performed randomly interleaved pro- and antisaccades. In addition, we explored the whole-brain functional connectivity of these two regions by applying a seed-based resting-state functional MRI analysis in a separate cohort of monkeys. We found that unilateral dlPFC deactivation had stronger behavioral effects on saccades than unilateral ACC deactivation, and that the dlPFC displayed stronger functional connectivity with frontoparietal areas than the ACC. We suggest that the dlPFC plays a more prominent role in the preparation of pro- and antisaccades than the ACC.
Affiliation(s)
- Michael J Koval
- Graduate Program in Neuroscience, University of Western Ontario, London, Ontario, Canada
3
Corrigan F, Grand D. Brainspotting: Recruiting the midbrain for accessing and healing sensorimotor memories of traumatic activation. Med Hypotheses 2013; 80:759-66. [DOI: 10.1016/j.mehy.2013.03.005]
4
Monteon JA, Wang H, Martinez-Trujillo J, Crawford JD. Frames of reference for eye-head gaze shifts evoked during frontal eye field stimulation. Eur J Neurosci 2013; 37:1754-65. [PMID: 23489744 DOI: 10.1111/ejn.12175]
Abstract
The frontal eye field (FEF), in the prefrontal cortex, participates in the transformation of visual signals into saccade motor commands and in eye-head gaze control. The FEF is thought to show eye-fixed visual codes in head-restrained monkeys, but it is not known how it transforms these inputs into spatial codes for head-unrestrained gaze commands. Here, we tested if the FEF influences desired gaze commands within a simple eye-fixed frame, like the superior colliculus (SC), or in more complex egocentric frames like the supplementary eye fields (SEFs). We electrically stimulated 95 FEF sites in two head-unrestrained monkeys to evoke 3D eye-head gaze shifts and then mathematically rotated these trajectories into various reference frames. In theory, each stimulation site should specify a specific spatial goal when the evoked gaze shifts are plotted in the appropriate frame. We found that these motor output frames varied site by site, mainly within the eye-to-head frame continuum. Thus, consistent with the intermediate placement of the FEF within the high-level circuits for gaze control, its stimulation-evoked output showed an intermediate trend between the multiple reference frame codes observed in SEF-evoked gaze shifts and the simpler eye-fixed reference frame observed in SC-evoked movements. These results suggest that, although the SC, FEF and SEF carry eye-fixed information at the level of their unit response fields, this information is transformed differently in their output projections to the eye and head controllers.
Affiliation(s)
- Jachin A Monteon
- Centre for Vision Research, York University, Toronto, ON, Canada
5
Van Grootel TJ, Van der Willigen RF, Van Opstal AJ. Experimental test of spatial updating models for monkey eye-head gaze shifts. PLoS One 2012; 7:e47606. [PMID: 23118883 PMCID: PMC3485288 DOI: 10.1371/journal.pone.0047606]
Abstract
How the brain maintains an accurate and stable representation of visual target locations despite the occurrence of saccadic gaze shifts is a classical problem in oculomotor research. Here we test and dissociate the predictions of different conceptual models for head-unrestrained gaze-localization behavior of macaque monkeys. We adopted the double-step paradigm with rapid eye-head gaze shifts to measure localization accuracy in response to flashed visual stimuli in darkness. We presented the second target flash either before (static) or during (dynamic) the first gaze displacement. In the dynamic case the brief visual flash induced a small retinal streak of up to about 20 deg at an unpredictable moment and retinal location during the eye-head gaze shift, which provides serious challenges for the gaze-control system. However, for both stimulus conditions, monkeys localized the flashed targets with accurate gaze shifts, which rules out several models of visuomotor control. First, these findings exclude the possibility that gaze-shift programming relies on retinal inputs only. Instead, they support the notion that accurate eye-head motor feedback updates the gaze-saccade coordinates. Second, in dynamic trials the visuomotor system cannot rely on the coordinates of the planned first eye-head saccade either, which rules out remapping on the basis of a predictive corollary gaze-displacement signal. Finally, because gaze-related head movements were also goal-directed, requiring continuous access to eye-in-head position, we propose that our results best support a dynamic feedback scheme for spatial updating in which visuomotor control incorporates accurate signals about instantaneous eye and head positions rather than relative eye and head displacements.
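The updating schemes that this double-step experiment dissociates can be reduced to a toy one-dimensional sketch (the function names and numbers below are illustrative, not from the study, and the real analysis involved full eye-head kinematics):

```python
def update_retinal_only(retinal_target):
    # Retinal-only scheme: the flash's retinal location is used as the goal
    # directly, with no compensation for the intervening gaze shift.
    return retinal_target

def update_with_motor_feedback(retinal_target, gaze_at_flash, gaze_now):
    # Feedback scheme: subtract the gaze displacement accumulated since the
    # flash, so the goal stays accurate even mid-gaze-shift.
    return retinal_target - (gaze_now - gaze_at_flash)

# Toy numbers (deg): a flash lands 10 deg right on the retina while gaze is
# at 0 deg; by the time the second movement is programmed, gaze is at 15 deg.
print(update_retinal_only(10.0))                    # 10.0 (mislocalizes)
print(update_with_motor_feedback(10.0, 0.0, 15.0))  # -5.0 (corrected goal)
```

The monkeys' accurate localization in the dynamic condition is what favors the feedback-style scheme over the retinal-only scheme.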
Affiliation(s)
- Tom J. Van Grootel
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Department of Biophysics, Nijmegen, The Netherlands
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Robert F. Van der Willigen
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Department of Biophysics, Nijmegen, The Netherlands
- A. John Van Opstal
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Department of Biophysics, Nijmegen, The Netherlands
6
Farshadmanesh F, Byrne P, Wang H, Corneil BD, Crawford JD. Relationships between neck muscle electromyography and three-dimensional head kinematics during centrally induced torsional head perturbations. J Neurophysiol 2012; 108:2867-83. [PMID: 22956790 DOI: 10.1152/jn.00312.2012]
Abstract
The relationship between neck muscle electromyography (EMG) and torsional head rotation (about the nasooccipital axis) is difficult to assess during normal gaze behaviors with the head upright. Here, we induced acute head tilts similar to cervical dystonia (torticollis) in two monkeys by electrically stimulating 20 interstitial nucleus of Cajal (INC) sites or inactivating 19 INC sites by injection of muscimol. Animals engaged in a simple gaze fixation task while we recorded three-dimensional head kinematics and intramuscular EMG from six bilateral neck muscle pairs. We used a cross-validation-based stepwise regression to quantitatively examine the relationships between neck EMG and torsional head kinematics under three conditions: 1) unilateral INC stimulation (where the head rotated torsionally toward the side of stimulation); 2) corrective poststimulation movements (where the head returned toward upright); and 3) unilateral INC inactivation (where the head tilted toward the opposite side of inactivation). Our cross-validated results of corrective movements were slightly better than those obtained during unperturbed gaze movements and showed many more torsional terms, mostly related to velocity, although some orientation and acceleration terms were retained. In addition, several simplifying principles were identified. First, bilateral muscle pairs showed similar, but opposite EMG-torsional coupling terms, i.e., a change in torsional kinematics was associated with increased muscle activity on one side and decreased activity on the other side. Second, whenever torsional terms were retained in a given muscle, they were independent of the inputs we tested, i.e., INC stimulation vs. corrective motion vs. INC inactivation, and left vs. right INC data.
These findings suggest that, despite the complexity of the head-neck system, the brain can use a single, bilaterally coupled inverse model for torsional head control that is valid across different behaviors and movement directions. Combined with our previous data, these new data provide the terms for a more complete three-dimensional model of EMG-head rotation coupling for the muscles and gaze behaviors that we recorded.
Affiliation(s)
- Farshad Farshadmanesh
- York Center for Vision Research, Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Ontario, Canada
7
Intrinsic reference frames of superior colliculus visuomotor receptive fields during head-unrestrained gaze shifts. J Neurosci 2012; 31:18313-26. [PMID: 22171035 DOI: 10.1523/jneurosci.0990-11.2011]
Abstract
A sensorimotor neuron's receptive field and its frame of reference are easily conflated within the natural variability of spatial behavior. Here, we capitalized on such natural variations in 3-D eye and head positions during head-unrestrained gaze shifts to visual targets in two monkeys to determine whether intermediate/deep layer superior colliculus (SC) receptive fields code visual targets or gaze kinematics, within four different frames of reference. Visuomotor receptive fields were either characterized during gaze shifts to visual targets from a central fixation position (32 units) or were partially characterized from each of three initial fixation points (31 units). Natural variations of initial 3-D gaze and head orientation (including torsion) provided spatial separation between four different coordinate frame models (space, head, eye, fixed-vector relative to fixation), whereas natural saccade errors provided spatial separation between target and gaze positions. Using a new statistical method based on predictive sum-of-squares, we found that in our population of 63 neurons (1) receptive field fits to target positions were significantly better than fits to actual gaze shift locations and (2) eye-centered models gave significantly better fits than the head or space frame. An intermediate frames analysis confirmed that individual neuron fits were distributed around target-in-eye coordinates. Gaze position "gain" effects with the spatial tuning required for a 3-D reference frame transformation were significant in 23% (7/31) of neurons tested. We conclude that the SC primarily represents gaze targets relative to the eye but also carries early signatures of the 3-D sensorimotor transformation.
8
Constantin AG, Wang H, Monteon JA, Martinez-Trujillo JC, Crawford JD. 3-Dimensional eye-head coordination in gaze shifts evoked during stimulation of the lateral intraparietal cortex. Neuroscience 2009; 164:1284-302. [PMID: 19733631 DOI: 10.1016/j.neuroscience.2009.08.066]
Abstract
Coordinated eye-head gaze shifts have been evoked during electrical stimulation of the frontal cortex (supplementary eye field (SEF) and frontal eye field (FEF)) and superior colliculus (SC), but less is known about the role of lateral intraparietal cortex (LIP) in head-unrestrained gaze shifts. To explore this, two monkeys (M1 and M2) were implanted with recording chambers and 3-D eye and head search coils. Tungsten electrodes delivered trains of electrical pulses (usually 200 ms duration) to and around area LIP during head-unrestrained gaze fixations. A current of 200 μA consistently evoked small, short-latency contralateral gaze shifts from 152 sites in M1 and 243 sites in M2 (Constantin et al., 2007). Gaze kinematics were independent of stimulus amplitude and duration, except that subsequent saccades were suppressed. The average amplitude of the evoked gaze shifts was 8.46 degrees for M1 and 8.25 degrees for M2, with average head components of only 0.36 and 0.62 degrees respectively. The head's amplitude contribution to these movements was significantly smaller than in normal gaze shifts, and did not increase with behavioral adaptation. Stimulation-evoked gaze, eye and head movements qualitatively obeyed normal 3-D constraints (Donders' law and Listing's law), but with less precision. As in normal behavior, when the head was restrained LIP stimulation evoked eye-only saccades in Listing's plane, whereas when the head was not restrained, stimulation evoked saccades with position-dependent torsional components (driving the eye out of Listing's plane). In behavioral gaze shifts, the vestibulo-ocular reflex (VOR) then drives torsion back into Listing's plane, but in the absence of subsequent head movement the stimulation-induced torsion was "left hanging". This suggests that the position-dependent torsional saccade components are preprogrammed, and that the oculomotor system was expecting a head movement command to follow the saccade.
These data show that, unlike SEF, FEF, and SC stimulation in nearly identical conditions, LIP stimulation fails to produce normally coordinated eye-head gaze shifts.
Affiliation(s)
- A G Constantin
- Centre for Vision Research, York University, Toronto, ON, Canada M3J 1P3
9
Maier JX, Groh JM. Multisensory guidance of orienting behavior. Hear Res 2009; 258:106-12. [PMID: 19520151 DOI: 10.1016/j.heares.2009.05.008]
Abstract
We use both vision and audition when localizing objects and events in our environment. However, these sensory systems receive spatial information in different coordinate systems: sounds are localized using inter-aural and spectral cues, yielding a head-centered representation of space, whereas the visual system uses an eye-centered representation of space, based on the site of activation on the retina. In addition, the visual system employs a place-coded, retinotopic map of space, whereas the auditory system's representational format is characterized by broad spatial tuning and a lack of topographical organization. A common view is that the brain needs to reconcile these differences in order to control behavior, such as orienting gaze to the location of a sound source. To accomplish this, it seems that either auditory spatial information must be transformed from a head-centered rate code to an eye-centered map to match the frame of reference used by the visual system, or vice versa. Here, we review a number of studies that have focused on the neural basis underlying such transformations in the primate auditory system. Although these studies have found some evidence for such transformations, many differences in the way the auditory and visual systems encode space exist throughout the auditory pathway. We will review these differences at the neural level, and will discuss them in relation to differences in the way auditory and visual information is used in guiding orienting movements.
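The head-to-eye remapping discussed in this review can be reduced to a minimal one-dimensional sketch (an illustration only: it ignores rotation geometry and the rate- vs. place-code distinction, and the function name is hypothetical):

```python
def head_to_eye_centered(sound_azimuth_head, eye_in_head):
    # In 1-D, a sound's eye-centered direction is its head-centered
    # direction minus the current eye-in-head position.
    return sound_azimuth_head - eye_in_head

# A sound 20 deg right of the head midline, with the eyes deviated
# 5 deg right, lies 15 deg right of the current line of sight.
print(head_to_eye_centered(20.0, 5.0))  # 15.0
```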
Affiliation(s)
- Joost X Maier
- Center for Cognitive Neuroscience, Department of Neurobiology, Department of Psychology and Neuroscience, Duke University, LSRC B203, Durham, NC 27708, USA
10
Keith GP, DeSouza JFX, Yan X, Wang H, Crawford JD. A method for mapping response fields and determining intrinsic reference frames of single-unit activity: applied to 3D head-unrestrained gaze shifts. J Neurosci Methods 2009; 180:171-84. [PMID: 19427544 DOI: 10.1016/j.jneumeth.2009.03.004]
Abstract
Natural movements towards a target show metric variations between trials. When movements combine contributions from multiple body parts, such as head-unrestrained gaze shifts involving both eye and head rotation, the individual body-part movements may vary even more than the overall movement. The goal of this investigation was to develop a general method for both mapping sensory or motor response fields of neurons and determining their intrinsic reference frames, where these movement variations are actually utilized rather than avoided. We used head-unrestrained gaze shifts, three-dimensional (3D) geometry, and naturalistic distributions of eye and head orientation to explore the theoretical relationship between the intrinsic reference frame of a sensorimotor neuron's response field and the coherence of the activity when this response field is fitted non-parametrically using different kernel bandwidths in different reference frames. We measured how well the regression surface predicts unfitted data using the PREdictive Sum-of-Squares (PRESS) statistic. The reference frame with the smallest PRESS statistic was categorized as the intrinsic reference frame if the PRESS statistic was significantly larger in other reference frames. We show that the method works best when targets are at regularly spaced positions within the response field's active region, and that the method identifies the best kernel bandwidth for response field estimation. We describe how gain-field effects may be dealt with, and how to test neurons within a population that fall on a continuum between specific reference frames. This method may be applied to any spatially coherent single-unit activity related to sensation and/or movement during naturally varying behaviors.
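The leave-one-out logic behind a PRESS statistic can be sketched roughly as follows (a simplified illustration, not the authors' code: the Gaussian kernel, the 2-D position format, and the bandwidth handling are assumptions here):

```python
import numpy as np

def press_statistic(positions, rates, bandwidth):
    """Leave-one-out PREdictive Sum-of-Squares for a Gaussian-kernel
    (Nadaraya-Watson) regression of firing rate on 2-D positions."""
    positions = np.asarray(positions, float)
    rates = np.asarray(rates, float)
    press = 0.0
    for i in range(len(rates)):
        d2 = np.sum((positions - positions[i]) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))
        w[i] = 0.0  # leave trial i out of its own fit
        prediction = np.sum(w * rates) / np.sum(w)
        press += (rates[i] - prediction) ** 2
    return press

# Fitting the same spike rates against positions expressed in several
# candidate frames (and over several bandwidths), the frame yielding the
# smallest PRESS would be the candidate intrinsic reference frame.
```

In this spirit, a frame whose coordinates genuinely organize the activity predicts held-out trials well, while a mismatched frame scatters the response field and inflates the PRESS value.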
Affiliation(s)
- Gerald P Keith
- Canadian Action and Perception Network, York University, 4700 Keele Street, Toronto, Ontario M3J 1P3, Canada
11
Constantin AG, Wang H, Martinez-Trujillo JC, Crawford JD. Frames of reference for gaze saccades evoked during stimulation of lateral intraparietal cortex. J Neurophysiol 2007; 98:696-709. [PMID: 17553952 DOI: 10.1152/jn.00206.2007]
Abstract
Previous studies suggest that stimulation of lateral intraparietal cortex (LIP) evokes saccadic eye movements toward eye- or head-fixed goals, whereas most single-unit studies suggest that LIP uses an eye-fixed frame with eye-position modulations. The goal of our study was to determine the reference frame for gaze shifts evoked during LIP stimulation in head-unrestrained monkeys. Two macaques (M1 and M2) were implanted with recording chambers over the right intraparietal sulcus and with search coils for recording three-dimensional eye and head movements. The LIP region was microstimulated using pulse trains of 300 Hz, 100-150 μA, and 200 ms. Eighty-five putative LIP sites in M1 and 194 putative sites in M2 were used in our quantitative analysis throughout this study. Average amplitude of the stimulation-evoked gaze shifts was 8.67 degrees for M1 and 7.97 degrees for M2 with very small head movements. When these gaze-shift trajectories were rotated into three coordinate frames (eye, head, and body), gaze endpoint distribution for all sites was most convergent to a common point when plotted in eye coordinates. Across all sites, the eye-centered model provided a significantly better fit compared with the head, body, or fixed-vector models (where the latter model signifies no modulation of the gaze trajectory as a function of initial gaze position). Moreover, the probability of evoking a gaze shift from any one particular position was modulated by the current gaze direction (independent of saccade direction). These results provide causal evidence that the motor commands from LIP encode gaze commands in eye-fixed coordinates but are also subtly modulated by initial gaze position.
Affiliation(s)
- A G Constantin
- Center for Vision Research, York University, Toronto, Ontario, Canada
12
Pathmanathan JS, Presnell R, Cromer JA, Cullen KE, Waitzman DM. Spatial characteristics of neurons in the central mesencephalic reticular formation (cMRF) of head-unrestrained monkeys. Exp Brain Res 2005; 168:455-70. [PMID: 16292575 DOI: 10.1007/s00221-005-0104-0]
Abstract
Prior studies of the central portion of the mesencephalic reticular formation (cMRF) have shown that in head-restrained monkeys, neurons discharge prior to saccades. Here, we provide a systematic analysis of the patterns of activity in cMRF neurons during head-unrestrained gaze shifts. Two types of cMRF neurons were found: presaccadic neurons began to discharge before the onset of gaze movements, while postsaccadic neurons began to discharge after gaze shift onset and typically after the end of the gaze shift. Presaccadic neuronal responses were well correlated with gaze movements, while the discharge of postsaccadic neurons was more closely associated with head movements. The activity of presaccadic neurons was organized into gaze movement fields, while the activity of postsaccadic neurons was better organized into movement fields associated with head displacement. We found that cMRF neurons displayed both open and closed movement field responses. Neurons with closed movement fields discharged before a specific set of gaze (presaccadic) or head (postsaccadic) movement amplitudes and directions and had a clear distal boundary. Neurons with open movement fields discharged for gaze or head movements of a specific direction and also for movement amplitudes up to the limit of measurement (70 degrees). A subset of open movement field neurons displayed an increased discharge with increased gaze shift amplitudes, similar to pontine burst neurons, and were called monotonically increasing open movement field neurons. In contrast, neurons with non-monotonically increasing open movement fields demonstrated activity for all gaze shift amplitudes, but their activity reached a plateau or declined gradually for gaze shifts beyond specific amplitudes.
We suggest that presaccadic neurons with open movement fields participate in a descending pathway providing gaze signals to medium-lead burst neurons in the paramedian pontine reticular formation, while presaccadic closed movement field neurons may participate in feedback to the superior colliculus. The previously unrecognized group of postsaccadic cMRF neurons may provide signals of head position or velocity to the thalamus, cerebellum, or spinal cord.
Affiliation(s)
- Jay S Pathmanathan
- Department of Neuroscience, University of Connecticut Health Center, Farmington, CT 06030, USA
13
Martinez-Trujillo JC, Medendorp WP, Wang H, Crawford JD. Frames of reference for eye-head gaze commands in primate supplementary eye fields. Neuron 2005; 44:1057-66. [PMID: 15603747 DOI: 10.1016/j.neuron.2004.12.004]
Abstract
The supplementary eye field (SEF) is a region within medial frontal cortex that integrates complex visuospatial information and controls eye-head gaze shifts. Here, we test if the SEF encodes desired gaze directions in a simple retinal (eye-centered) frame, such as the superior colliculus, or in some other, more complex frame. We electrically stimulated 55 SEF sites in two head-unrestrained monkeys to evoke 3D eye-head gaze shifts and then mathematically rotated these trajectories into various reference frames. Each stimulation site specified a specific spatial goal when plotted in its intrinsic frame. These intrinsic frames varied site by site, in a continuum from eye-, to head-, to space/body-centered coding schemes. This variety of coding schemes provides the SEF with a unique potential for implementing arbitrary reference frame transformations.
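The site-by-site frame test described here can be caricatured in one dimension (a hedged sketch: the actual analysis rotated 3-D eye-head trajectories between frames, not the simple subtraction below, and the function names are illustrative):

```python
import numpy as np

def convergence_score(goals):
    # Sum of squared deviations of movement endpoints about their mean;
    # smaller means the endpoints converge better to one goal in that frame.
    goals = np.asarray(goals, float)
    return float(np.sum((goals - goals.mean(axis=0)) ** 2))

def to_eye_frame(endpoints_space, initial_gaze):
    # 1-D, translation-only stand-in for rotating trajectories into an
    # eye-centered frame: express each endpoint relative to initial gaze.
    return np.asarray(endpoints_space, float) - np.asarray(initial_gaze, float)

# Simulated site encoding a fixed goal 10 deg from wherever gaze starts:
initial = np.array([-10.0, 0.0, 10.0])
endpoints = initial + 10.0   # space-frame endpoints scatter with initial gaze
print(convergence_score(endpoints))                         # 200.0 (spread in space)
print(convergence_score(to_eye_frame(endpoints, initial)))  # 0.0 (converges in eye frame)
```

A site coding in head- or body-centered terms would instead minimize the score after referencing the endpoints to initial head or body orientation, which is how a continuum of intrinsic frames can be read out site by site.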
Affiliation(s)
- Julio C Martinez-Trujillo
- Laboratory of Visuomotor Neuroscience, Centre for Vision Research, Canadian Institutes of Health Research Group for Action and Perception, and Department of Psychology, CSB, York University, Toronto, Ontario M3J 1P3, Canada