1
Li L, Flesch T, Ma C, Li J, Chen Y, Chen HT, Erlich JC. Encoding of 2D Self-Centered Plans and World-Centered Positions in the Rat Frontal Orienting Field. J Neurosci 2024; 44:e0018242024. PMID: 39134418; PMCID: PMC11391499; DOI: 10.1523/jneurosci.0018-24.2024.
Abstract
The neural mechanisms of motor planning have been extensively studied in rodents. Preparatory activity in the frontal cortex predicts upcoming choice, but limitations of typical tasks have made it challenging to determine whether the spatial information is in a self-centered direction reference frame or a world-centered position reference frame. Here, we trained male rats to make delayed visually guided orienting movements to six different directions, with four different target positions for each direction, which allowed us to disentangle direction versus position tuning in neural activity. We recorded single unit activity from the rat frontal orienting field (FOF) in the secondary motor cortex, a region involved in planning orienting movements. Population analyses revealed that the FOF encodes two separate 2D maps of space. First, a 2D map of the planned and ongoing movement in a self-centered direction reference frame. Second, a 2D map of the animal's current position on the port wall in a world-centered reference frame. Thus, preparatory activity in the FOF represents self-centered upcoming movement directions, but FOF neurons multiplex both self- and world-reference frame variables at the level of single neurons. Neural network model comparison supports the view that despite the presence of world-centered representations, the FOF receives the target information as self-centered input and generates self-centered planning signals.
Collapse
Affiliation(s)
- Liujunli Li
- New York University-East China Normal University Institute of Brain and Cognitive Science at New York University Shanghai, Shanghai 200062, China
- New York University Shanghai, Shanghai 200124, China
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai 200062, China
- Timo Flesch
- Oxford University, Oxford OX1 2JD, United Kingdom
- Ce Ma
- New York University-East China Normal University Institute of Brain and Cognitive Science at New York University Shanghai, Shanghai 200062, China
- New York University Shanghai, Shanghai 200124, China
- Jingjie Li
- New York University-East China Normal University Institute of Brain and Cognitive Science at New York University Shanghai, Shanghai 200062, China
- New York University Shanghai, Shanghai 200124, China
- Sainsbury Wellcome Centre, University College London, London W1T 4JG, United Kingdom
- Yizhou Chen
- New York University-East China Normal University Institute of Brain and Cognitive Science at New York University Shanghai, Shanghai 200062, China
- New York University Shanghai, Shanghai 200124, China
- Hung-Tu Chen
- New York University-East China Normal University Institute of Brain and Cognitive Science at New York University Shanghai, Shanghai 200062, China
- New York University Shanghai, Shanghai 200124, China
- Jeffrey C Erlich
- New York University-East China Normal University Institute of Brain and Cognitive Science at New York University Shanghai, Shanghai 200062, China
- New York University Shanghai, Shanghai 200124, China
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai 200062, China
- Sainsbury Wellcome Centre, University College London, London W1T 4JG, United Kingdom
2
Liu X, Melcher D, Carrasco M, Hanning NM. Presaccadic preview shapes postsaccadic processing more where perception is poor. Proc Natl Acad Sci U S A 2024; 121:e2411293121. PMID: 39236235; PMCID: PMC11406264; DOI: 10.1073/pnas.2411293121.
Abstract
The presaccadic preview of a peripheral target enhances the efficiency of its postsaccadic processing, termed the extrafoveal preview effect. Peripheral visual performance-and thus the quality of the preview-varies around the visual field, even at isoeccentric locations: It is better along the horizontal than vertical meridian and along the lower than upper vertical meridian. To investigate whether these polar angle asymmetries influence the preview effect, we asked human participants to preview four tilted gratings at the cardinals, until a central cue indicated which one to saccade to. During the saccade, the target orientation either remained or slightly changed (valid/invalid preview). After saccade landing, participants discriminated the orientation of the (briefly presented) second grating. Stimulus contrast was titrated with adaptive staircases to assess visual performance. Expectedly, valid previews increased participants' postsaccadic contrast sensitivity. This preview benefit, however, was inversely related to polar angle perceptual asymmetries; largest at the upper, and smallest at the horizontal meridian. This finding reveals that the visual system compensates for peripheral asymmetries when integrating information across saccades, by selectively assigning higher weights to the less-well perceived preview information. Our study supports the recent line of evidence showing that perceptual dynamics around saccades vary with eye movement direction.
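The contrast-titration procedure mentioned in the abstract (adaptive staircases) can be illustrated with a minimal sketch. The 3-down-1-up rule, step size, starting contrast, and the deterministic simulated observer below are illustrative assumptions, not details taken from the study:

```python
def run_staircase(threshold, n_trials=200, start=0.5, step=0.05):
    """Simulate a 3-down-1-up staircase on stimulus contrast.

    The simulated observer is deterministic (correct whenever contrast is at
    or above `threshold`); real observers are stochastic, but the staircase
    logic is the same. Returns the mean contrast at reversal points, a common
    threshold estimate.
    """
    contrast = start
    streak = 0
    direction = 0          # -1 = last change was down, +1 = up
    reversals = []
    for _ in range(n_trials):
        correct = contrast >= threshold
        if correct:
            streak += 1
            if streak == 3:            # 3 correct in a row -> make it harder
                streak = 0
                if direction == +1:
                    reversals.append(contrast)
                direction = -1
                contrast = max(0.0, contrast - step)
        else:                          # 1 error -> make it easier
            streak = 0
            if direction == -1:
                reversals.append(contrast)
            direction = +1
            contrast = min(1.0, contrast + step)
    return sum(reversals) / len(reversals) if reversals else contrast
```

A 3-down-1-up rule converges near the 79%-correct point of a stochastic observer; the deterministic observer here simply makes the oscillation around threshold easy to see.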
Affiliation(s)
- Xiaoyi Liu
- Division of Science, Psychology Program, New York University Abu Dhabi, Abu Dhabi 129188, United Arab Emirates
- Department of Psychology, Princeton University, Princeton, NJ 08540
- David Melcher
- Division of Science, Psychology Program, New York University Abu Dhabi, Abu Dhabi 129188, United Arab Emirates
- Center for Brain and Health, NYUAD Research Institute, New York University Abu Dhabi, Abu Dhabi 129188, United Arab Emirates
- Marisa Carrasco
- Department of Psychology and Center for Neural Science, New York University, New York, NY 10012
- Nina M Hanning
- Department of Psychology and Center for Neural Science, New York University, New York, NY 10012
- Institut für Psychologie, Humboldt-Universität zu Berlin, Berlin 10099, Germany
3
Seo S, Bharmauria V, Schütz A, Yan X, Wang H, Crawford JD. Multiunit Frontal Eye Field Activity Codes the Visuomotor Transformation, But Not Gaze Prediction or Retrospective Target Memory, in a Delayed Saccade Task. eNeuro 2024; 11:ENEURO.0413-23.2024. PMID: 39054056; PMCID: PMC11373882; DOI: 10.1523/eneuro.0413-23.2024.
Abstract
Single-unit (SU) activity-action potentials isolated from one neuron-has traditionally been employed to relate neuronal activity to behavior. However, recent investigations have shown that multiunit (MU) activity-ensemble neural activity recorded within the vicinity of one microelectrode-may also contain accurate estimations of task-related neural population dynamics. Here, using an established model-fitting approach, we compared the spatial codes of SU response fields with corresponding MU response fields recorded from the frontal eye fields (FEFs) in head-unrestrained monkeys (Macaca mulatta) during a memory-guided saccade task. Overall, both SU and MU populations showed a simple visuomotor transformation: the visual response coded target-in-eye coordinates, transitioning progressively during the delay toward a future gaze-in-eye code in the saccade motor response. However, the SU population showed additional secondary codes, including a predictive gaze code in the visual response and retention of a target code in the motor response. Further, when SUs were separated into regular/fast spiking neurons, these cell types showed different spatial code progressions during the late delay period, only converging toward gaze coding during the final saccade motor response. Finally, reconstructing MU populations (by summing SU data within the same sites) failed to replicate either the SU or MU pattern. These results confirm the theoretical and practical potential of MU activity recordings as a biomarker for fundamental sensorimotor transformations (e.g., target-to-gaze coding in the oculomotor system), while also highlighting the importance of SU activity for coding more subtle (e.g., predictive/memory) aspects of sensorimotor behavior.
Affiliation(s)
- Serah Seo
- Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
- Vishal Bharmauria
- Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
- Department of Neurosurgery and Brain Repair, Morsani College of Medicine, University of South Florida, Tampa, Florida 33606
- Adrian Schütz
- Department of Neurophysics, Philipps-Universität Marburg, 35032 Marburg, Germany
- Center for Mind, Brain, and Behavior - CMBB, Philipps-Universität Marburg, 35032 Marburg, and Justus-Liebig-Universität Giessen, Giessen, Germany
- Xiaogang Yan
- Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
- Hongying Wang
- Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
- J Douglas Crawford
- Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
- Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, Ontario M3J 1P3, Canada
4
Liu X, Melcher D, Carrasco M, Hanning NM. Pre-saccadic Preview Shapes Post-Saccadic Processing More Where Perception is Poor. bioRxiv [Preprint] 2024:2023.05.18.541028. PMID: 37292871; PMCID: PMC10245755; DOI: 10.1101/2023.05.18.541028.
Abstract
The pre-saccadic preview of a peripheral target enhances the efficiency of its post-saccadic processing, termed the extrafoveal preview effect. Peripheral visual performance, and thus the quality of the preview, varies around the visual field, even at iso-eccentric locations: it is better along the horizontal than vertical meridian and along the lower than upper vertical meridian. To investigate whether these polar angle asymmetries influence the preview effect, we asked human participants to preview four tilted gratings at the cardinals, until a central cue indicated which one to saccade to. During the saccade, the target orientation either remained or slightly changed (valid/invalid preview). After saccade landing, participants discriminated the orientation of the (briefly presented) second grating. Stimulus contrast was titrated with adaptive staircases to assess visual performance. Expectedly, valid previews increased participants' post-saccadic contrast sensitivity. This preview benefit, however, was inversely related to polar angle perceptual asymmetries: largest at the upper, and smallest at the horizontal meridian. This finding reveals that the visual system compensates for peripheral asymmetries when integrating information across saccades, by selectively assigning higher weights to the less-well perceived preview information. Our study supports the recent line of evidence showing that perceptual dynamics around saccades vary with eye movement direction.
5
Yang L, Jin M, Zhang C, Qian N, Zhang M. Distributions of Visual Receptive Fields from Retinotopic to Craniotopic Coordinates in the Lateral Intraparietal Area and Frontal Eye Fields of the Macaque. Neurosci Bull 2024; 40:171-181. PMID: 37573519; PMCID: PMC10838878; DOI: 10.1007/s12264-023-01097-8.
Abstract
Even though retinal images of objects change their locations following each eye movement, we perceive a stable and continuous world. One possible mechanism by which the brain achieves such visual stability is to construct a craniotopic coordinate by integrating retinal and extraretinal information. There have been several proposals on how this may be done, including eye-position modulation (gain fields) of retinotopic receptive fields (RFs) and craniotopic RFs. In the present study, we investigated coordinate systems used by RFs in the lateral intraparietal (LIP) cortex and frontal eye fields (FEF) and compared the two areas. We mapped the two-dimensional RFs of neurons in detail under two eye fixations and analyzed how the RF of a given neuron changes with eye position to determine its coordinate representation. The same recording and analysis procedures were applied to the two brain areas. We found that, in both areas, RFs were distributed from retinotopic to craniotopic representations. There was no significant difference between the distributions in the LIP and FEF. Only a small fraction of neurons was fully craniotopic, whereas most neurons were between the retinotopic and craniotopic representations. The distributions were strongly biased toward the retinotopic side but with significant craniotopic shifts. These results suggest that there is only weak evidence for craniotopic RFs in the LIP and FEF, and that transformation from retinotopic to craniotopic coordinates in these areas must rely on other factors such as gain fields.
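The analysis logic described above, determining a neuron's coordinate representation from how its RF shifts (in screen coordinates) between two fixations, can be sketched as a simple shift index. The function and numbers below are illustrative, not the authors' code:

```python
import numpy as np

def rf_shift_index(center_fix1, center_fix2, eye_shift):
    """Coordinate-frame index from RF centers (screen coordinates) mapped at
    two eye positions. A purely retinotopic RF moves with the eyes
    (index ~ 1); a purely craniotopic RF stays fixed on the screen
    (index ~ 0); intermediate values indicate partial shifts between the
    two representations."""
    rf_shift = np.asarray(center_fix2, float) - np.asarray(center_fix1, float)
    eye = np.asarray(eye_shift, float)
    # Project the RF displacement onto the eye displacement direction.
    return float(rf_shift @ eye / (eye @ eye))

# Eye moves 10 deg rightward between fixations (hypothetical numbers):
full_shift = rf_shift_index((0, 0), (10, 0), (10, 0))   # retinotopic
no_shift   = rf_shift_index((5, 3), (5, 3), (10, 0))    # craniotopic
partial    = rf_shift_index((0, 0), (6, 0), (10, 0))    # intermediate
```

On this index, the paper's finding corresponds to a distribution clustered near 1 with significant shifts toward 0, rather than a distinct fully craniotopic population.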
Affiliation(s)
- Lin Yang
- Key Laboratory of Cognitive Neuroscience and Learning, Division of Psychology, Beijing Normal University, Beijing 100875, China
- Min Jin
- Key Laboratory of Cognitive Neuroscience and Learning, Division of Psychology, Beijing Normal University, Beijing 100875, China
- Cong Zhang
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Ning Qian
- Department of Neuroscience and Zuckerman Institute, Columbia University, New York, NY 10027, USA
- Mingsha Zhang
- Key Laboratory of Cognitive Neuroscience and Learning, Division of Psychology, Beijing Normal University, Beijing 100875, China
6
Heusser MR, Jagadisan UK, Gandhi NJ. Drifting population dynamics with transient resets characterize sensorimotor transformation in the monkey superior colliculus. bioRxiv [Preprint] 2023:2023.01.03.522634. PMID: 36711849; PMCID: PMC9881850; DOI: 10.1101/2023.01.03.522634.
Abstract
To produce goal-directed eye movements known as saccades, we must channel sensory input from our environment through a process known as sensorimotor transformation. The behavioral output of this phenomenon (an accurate eye movement) is straightforward, but the coordinated activity of neurons underlying its dynamics is not well understood. We searched for a neural correlate of sensorimotor transformation in the activity patterns of simultaneously recorded neurons in the superior colliculus (SC) of three male rhesus monkeys performing a visually guided, delayed saccade task. Neurons in the intermediate layers produce a burst of spikes both following the appearance of a visual (sensory) stimulus and preceding an eye movement command, but many also exhibit a sustained activity level during the intervening time ("delay period"). This sustained activity could be representative of visual processing or motor preparation, along with countless cognitive processes. Using a novel measure we call the Visuomotor Proximity Index (VMPI), we pitted visual and motor signals against each other by measuring the degree to which each session's population activity (as summarized in a low-dimensional framework) could be considered more visual-like or more motor-like. The analysis highlighted two salient features of sensorimotor transformation. One, population activity on average drifted systematically toward a motor-like representation and intermittently reverted to a visual-like representation following a microsaccade. Two, activity patterns that drift to a stronger motor-like representation by the end of the delay period may enable a more rapid initiation of a saccade, substantiating the idea that this movement initiation mechanism is conserved across motor systems.
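The Visuomotor Proximity Index is the paper's own novel measure. As a rough, simplified stand-in (illustration only, not the published definition), one can score a population activity vector by its correlation with template patterns from the visual and motor bursts:

```python
import numpy as np

def proximity_index(pop, visual_template, motor_template):
    """Crude visual-vs-motor proximity score for a population activity vector.

    Correlates `pop` with template population vectors taken from the visual
    burst and the motor burst; the result lies in [-1, 1], negative when the
    pattern is more visual-like, positive when more motor-like. A simplified
    stand-in for the paper's VMPI, for illustration only.
    """
    def corr(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    r_vis = corr(pop, visual_template)
    r_mot = corr(pop, motor_template)
    return (r_mot - r_vis) / 2.0   # positive -> more motor-like

# Hypothetical 50-neuron templates and a delay-period vector drifting motor-ward:
rng = np.random.default_rng(2)
vis = rng.random(50)               # template: visual-epoch firing rates
mot = rng.random(50)               # template: motor-epoch firing rates
drifting = 0.2 * vis + 0.8 * mot   # delay activity closer to the motor pattern
```

A drift toward the motor template, as the abstract describes for the delay period, shows up here as the index moving from negative toward positive values.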
Affiliation(s)
- Michelle R Heusser
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Uday K Jagadisan
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Neeraj J Gandhi
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
7
Abedi Khoozani P, Bharmauria V, Schütz A, Wildes RP, Crawford JD. Integration of allocentric and egocentric visual information in a convolutional/multilayer perceptron network model of goal-directed gaze shifts. Cereb Cortex Commun 2022; 3:tgac026. PMID: 35909704; PMCID: PMC9334293; DOI: 10.1093/texcom/tgac026.
Abstract
Allocentric (landmark-centered) and egocentric (eye-centered) visual codes are fundamental for spatial cognition, navigation, and goal-directed movement. Neuroimaging and neurophysiology suggest these codes are initially segregated, but then reintegrated in frontal cortex for movement control. We created and validated a theoretical framework for this process using physiologically constrained inputs and outputs. To implement a general framework, we integrated a convolutional neural network (CNN) of the visual system with a multilayer perceptron (MLP) model of the sensorimotor transformation. The network was trained on a task where a landmark shifted relative to the saccade target. These visual parameters were input to the CNN, the CNN output and initial gaze position to the MLP, and a decoder transformed MLP output into saccade vectors. Decoded saccade output replicated idealized training sets with various allocentric weightings and actual monkey data where the landmark shift had a partial influence (R2 = 0.8). Furthermore, MLP output units accurately simulated prefrontal response field shifts recorded from monkeys during the same paradigm. In summary, our model replicated both the general properties of the visuomotor transformations for gaze and specific experimental results obtained during allocentric–egocentric integration, suggesting it can provide a general framework for understanding these and other complex visuomotor behaviors.
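The architecture described above, a CNN front end for the visual input whose features are combined with initial gaze position in an MLP and then decoded into a saccade vector, can be sketched in miniature. The layer sizes, single convolution kernel, and random (untrained) weights below are stand-ins, not the published network:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Minimal 'valid'-mode 2-D convolution (cross-correlation) for the sketch."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def forward(img, gaze, conv_k, w1, w2):
    """CNN stage -> concatenate initial gaze position -> MLP -> saccade vector."""
    feat = np.maximum(conv2d_valid(img, conv_k), 0.0).ravel()  # ReLU conv features
    x = np.concatenate([feat, gaze])                           # visual + eye-position input
    h = np.tanh(w1 @ x)                                        # hidden layer
    return w2 @ h                                              # decoded (dx, dy) saccade

# Hypothetical shapes: 8x8 image, 3x3 kernel -> 36 conv features + 2 gaze dims.
rng = np.random.default_rng(0)
conv_k = rng.standard_normal((3, 3)) * 0.1
w1 = rng.standard_normal((16, 36 + 2)) * 0.1
w2 = rng.standard_normal((2, 16)) * 0.1
out = forward(rng.standard_normal((8, 8)), np.array([5.0, -3.0]), conv_k, w1, w2)
```

In the study, a network of this general shape was trained on landmark-shift stimuli; here the forward pass only illustrates how retinal and eye-position signals meet in the MLP stage.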
Affiliation(s)
- Parisa Abedi Khoozani
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Vishal Bharmauria
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Adrian Schütz
- Department of Neurophysics, Philipps-Universität Marburg, 35037 Marburg, Germany
- Richard P Wildes
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Department of Electrical Engineering and Computer Science, York University, Toronto, ON M3J 1P3, Canada
- J Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Departments of Psychology, Biology and Kinesiology & Health Sciences, York University, Toronto, Ontario M3J 1P3, Canada
8
Caruso VC, Pages DS, Sommer MA, Groh JM. Compensating for a shifting world: evolving reference frames of visual and auditory signals across three multimodal brain areas. J Neurophysiol 2021; 126:82-94. PMID: 33852803; DOI: 10.1152/jn.00385.2020.
Abstract
Stimulus locations are detected differently by different sensory systems, but ultimately they yield similar percepts and behavioral responses. How the brain transcends initial differences to compute similar codes is unclear. We quantitatively compared the reference frames of two sensory modalities, vision and audition, across three interconnected brain areas involved in generating saccades, namely the frontal eye fields (FEF), lateral and medial parietal cortex (M/LIP), and superior colliculus (SC). We recorded from single neurons in head-restrained monkeys performing auditory- and visually guided saccades from variable initial fixation locations and evaluated whether their receptive fields were better described as eye-centered, head-centered, or hybrid (i.e., not anchored uniquely to head- or eye-orientation). We found a progression of reference frames across areas and across time, with considerable hybrid-ness and persistent differences between modalities during most epochs/brain regions. For both modalities, the SC was more eye-centered than the FEF, which in turn was more eye-centered than the predominantly hybrid M/LIP. In all three areas and temporal epochs from stimulus onset to movement, visual signals were more eye-centered than auditory signals. In the SC and FEF, auditory signals became more eye-centered at the time of the saccade than they were initially after stimulus onset, but only in the SC at the time of the saccade did the auditory signals become "predominantly" eye-centered. The results indicate that visual and auditory signals both undergo transformations, ultimately reaching the same final reference frame but via different dynamics across brain regions and time.

NEW & NOTEWORTHY Models for visual-auditory integration posit that visual signals are eye-centered throughout the brain, whereas auditory signals are converted from head-centered to eye-centered coordinates. We show instead that both modalities largely employ hybrid reference frames: neither fully head- nor eye-centered. Across three hubs of the oculomotor network (intraparietal cortex, frontal eye field, and superior colliculus), visual and auditory signals evolve from hybrid to a common eye-centered format via different dynamics across brain areas and time.
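The classification logic above, asking whether responses are better explained in eye-centered, head-centered, or intermediate ("hybrid") coordinates, can be sketched as a model-comparison grid. The fixed-width Gaussian tuning model, the grid of frames, and the synthetic neuron below are illustrative assumptions, not the study's actual fitting procedure:

```python
import numpy as np

def frame_fit(u, rate, centers=np.linspace(-40.0, 40.0, 81), sigma=10.0):
    """Best R^2 of a fixed-width Gaussian tuning curve (center on a grid,
    amplitude and offset by least squares) for candidate coordinate u."""
    best = -np.inf
    for c in centers:
        A = np.column_stack([np.exp(-(u - c) ** 2 / (2 * sigma ** 2)),
                             np.ones_like(u)])
        coef, *_ = np.linalg.lstsq(A, rate, rcond=None)
        resid = rate - A @ coef
        r2 = 1.0 - (resid ** 2).sum() / ((rate - rate.mean()) ** 2).sum()
        best = max(best, r2)
    return best

def best_frame(target, eye, rate, alphas=np.linspace(0.0, 1.0, 11)):
    """Grid over hybrid frames u = target - alpha*eye: alpha = 0 is
    head-centered, alpha = 1 is eye-centered, in between is hybrid."""
    scores = [frame_fit(target - a * eye, rate) for a in alphas]
    return float(alphas[int(np.argmax(scores))])

# Synthetic, noise-free eye-centered neuron recorded from three fixations:
rng = np.random.default_rng(3)
eye = rng.choice([-15.0, 0.0, 15.0], 400)         # initial fixation positions
target = rng.uniform(-25.0, 25.0, 400)            # stimulus positions
rate = np.exp(-((target - eye) - 5.0) ** 2 / (2 * 10.0 ** 2))
```

For this synthetic neuron only u = target - eye makes the response a deterministic function of the coordinate, so the grid recovers the eye-centered frame; real neurons, as the abstract notes, often land at intermediate (hybrid) values.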
Affiliation(s)
- Valeria C Caruso
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
- Department of Psychiatry, University of Michigan, Ann Arbor, Michigan
- Daniel S Pages
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
- Marc A Sommer
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
- Department of Biomedical Engineering, Duke University, Durham, North Carolina
- Jennifer M Groh
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
- Department of Biomedical Engineering, Duke University, Durham, North Carolina
9
Spatiotemporal Coding in the Macaque Supplementary Eye Fields: Landmark Influence in the Target-to-Gaze Transformation. eNeuro 2021; 8:ENEURO.0446-20.2020. PMID: 33318073; PMCID: PMC7877461; DOI: 10.1523/eneuro.0446-20.2020.
Abstract
Eye-centered (egocentric) and landmark-centered (allocentric) visual signals influence spatial cognition, navigation, and goal-directed action, but the neural mechanisms that integrate these signals for motor control are poorly understood. A likely candidate for egocentric/allocentric integration in the gaze control system is the supplementary eye fields (SEF), a mediofrontal structure with high-level “executive” functions, spatially tuned visual/motor response fields, and reciprocal projections with the frontal eye fields (FEF). To test this hypothesis, we trained two head-unrestrained monkeys (Macaca mulatta) to saccade toward a remembered visual target in the presence of a visual landmark that shifted during the delay, causing gaze end points to shift partially in the same direction. A total of 256 SEF neurons were recorded, including 68 with spatially tuned response fields. Model fits to the latter established that, like the FEF and superior colliculus (SC), spatially tuned SEF responses primarily showed an egocentric (eye-centered) target-to-gaze position transformation. However, the landmark shift influenced this default egocentric transformation: during the delay, motor neurons (with no visual response) showed a transient but unintegrated shift (i.e., not correlated with the target-to-gaze transformation), whereas during the saccade-related burst visuomotor (VM) neurons showed an integrated shift (i.e., correlated with the target-to-gaze transformation). This differed from our simultaneous FEF recordings (Bharmauria et al., 2020), which showed a transient shift in VM neurons, followed by an integrated response in all motor responses. Based on these findings and past literature, we propose that prefrontal cortex incorporates landmark-centered information into a distributed, eye-centered target-to-gaze transformation through a reciprocal prefrontal circuit.
10
Sajad A, Sadeh M, Crawford JD. Spatiotemporal transformations for gaze control. Physiol Rep 2020; 8:e14533. PMID: 32812395; PMCID: PMC7435051; DOI: 10.14814/phy2.14533.
Abstract
Sensorimotor transformations require spatiotemporal coordination of signals, that is, through both time and space. For example, the gaze control system employs signals that are time-locked to various sensorimotor events, but the spatial content of these signals is difficult to assess during ordinary gaze shifts. In this review, we describe the various models and methods that have been devised to test this question, and their limitations. We then describe a new method that can (a) simultaneously test between all of these models during natural, head-unrestrained conditions, and (b) track the evolving spatial continuum from target (T) to future gaze coding (G, including errors) through time. We then summarize some applications of this technique, comparing spatiotemporal coding in the primate frontal eye field (FEF) and superior colliculus (SC). The results confirm that these areas preferentially encode eye-centered, effector-independent parameters, and show, for the first time in ordinary gaze shifts, a spatial transformation between visual and motor responses from T to G coding. We introduce a new set of spatial models (T-G continuum) that revealed task-dependent timing of this transformation: progressive during a memory delay between vision and action, and almost immediate without such a delay. We synthesize the results from our studies and supplement them with previous knowledge of anatomy and physiology to propose a conceptual model where cumulative transformation noise is realized as inaccuracies in gaze behavior. We conclude that the spatiotemporal transformation for gaze is both local (observed within and across neurons in a given area) and distributed (with common signals shared across remote but interconnected structures).
Affiliation(s)
- Amirsaman Sajad
- Centre for Vision Research, York University, Toronto, ON, Canada
- Psychology Department, Vanderbilt University, Nashville, TN, USA
- Morteza Sadeh
- Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Neurosurgery, University of Illinois at Chicago, Chicago, IL, USA
- John Douglas Crawford
- Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications (VISTA) Program, Neuroscience Graduate Diploma Program, Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, ON, Canada
11
Timing Determines Tuning: A Rapid Spatial Transformation in Superior Colliculus Neurons during Reactive Gaze Shifts. eNeuro 2020; 7:ENEURO.0359-18.2019. PMID: 31792117; PMCID: PMC6944480; DOI: 10.1523/eneuro.0359-18.2019.
Abstract
Gaze saccades, rapid shifts of the eyes and head toward a goal, have provided fundamental insights into the neural control of movement. For example, it has been shown that the superior colliculus (SC) transforms a visual target (T) code to future gaze (G) location commands after a memory delay. However, this transformation has not been observed in "reactive" saccades made directly to a stimulus, so its contribution to normal gaze behavior is unclear. Here, we tested this using a quantitative measure of the intermediate codes between T and G, based on variable errors in gaze endpoints. We demonstrate that a rapid spatial transformation occurs within the primate's SC (Macaca mulatta) during reactive saccades, involving a shift in coding from T, through intermediate codes, to G. This spatial shift progressed continuously both across and within cell populations [visual, visuomotor (VM), motor], rather than relaying discretely between populations with fixed spatial codes. These results suggest that the SC produces a rapid, noisy, and distributed transformation that contributes to variable errors in reactive gaze shifts.
12
Schneider L, Dominguez-Vargas AU, Gibson L, Kagan I, Wilke M. Eye position signals in the dorsal pulvinar during fixation and goal-directed saccades. J Neurophysiol 2020; 123:367-391. [DOI: 10.1152/jn.00432.2019] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Indexed: 11/22/2022] Open
Abstract
Sensorimotor cortical areas contain eye position information thought to ensure perceptual stability across saccades and underlie spatial transformations supporting goal-directed actions. One pathway by which eye position signals could be relayed to and across cortical areas is via the dorsal pulvinar. Several studies have demonstrated saccade-related activity in the dorsal pulvinar, and we have recently shown that many neurons exhibit postsaccadic spatial preference. In addition, dorsal pulvinar lesions lead to gaze-holding deficits expressed as nystagmus or ipsilesional gaze bias, prompting us to investigate the effects of eye position. We tested three starting eye positions (−15°, 0°, 15°) in monkeys performing a visually cued memory saccade task. We found two main types of gaze dependence. First, ~50% of neurons showed dependence on static gaze direction during initial and postsaccadic fixation, and might be signaling the position of the eyes in the orbit or coding foveal targets in a head/body/world-centered reference frame. The population-derived eye position signal lagged behind the saccade. Second, many neurons showed a combination of eye-centered and gaze-dependent modulation of visual, memory, and saccadic responses to a peripheral target. A small subset showed effects consistent with eye position-dependent gain modulation. Analysis of reference frames across task epochs from visual cue to postsaccadic fixation indicated a transition from predominantly eye-centered encoding to representation of final gaze or foveated locations in nonretinocentric coordinates. These results show that dorsal pulvinar neurons carry information about eye position, which could contribute to steady gaze during postural changes and to reference frame transformations for visually guided eye and limb movements. 
NEW & NOTEWORTHY Work on the pulvinar focused on eye-centered visuospatial representations, but position of the eyes in the orbit is also an important factor that needs to be taken into account during spatial orienting and goal-directed reaching. We show that dorsal pulvinar neurons are influenced by eye position. Gaze direction modulated ongoing firing during stable fixation, as well as visual and saccade responses to peripheral targets, suggesting involvement of the dorsal pulvinar in spatial coordinate transformations.
Affiliation(s)
- Lukas Schneider
- Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Department of Cognitive Neurology, University of Goettingen, Goettingen, Germany
- Adan-Ulises Dominguez-Vargas
- Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Escuela Nacional de Estudios Superiores Unidad-León, Universidad Nacional Autónoma de México, León, Guanajuato, Mexico
- Lydia Gibson
- Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Department of Cognitive Neurology, University of Goettingen, Goettingen, Germany
- Igor Kagan
- Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Department of Cognitive Neurology, University of Goettingen, Goettingen, Germany
- Leibniz ScienceCampus Primate Cognition, Goettingen, Germany
- Melanie Wilke
- Decision and Awareness Group, Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany
- Department of Cognitive Neurology, University of Goettingen, Goettingen, Germany
- Leibniz ScienceCampus Primate Cognition, Goettingen, Germany
13
Massot C, Jagadisan UK, Gandhi NJ. Sensorimotor transformation elicits systematic patterns of activity along the dorsoventral extent of the superior colliculus in the macaque monkey. Commun Biol 2019; 2:287. [PMID: 31396567] [PMCID: PMC6677725] [DOI: 10.1038/s42003-019-0527-y] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Received: 09/28/2018] [Accepted: 06/27/2019] [Indexed: 12/21/2022] Open
Abstract
The superior colliculus (SC) is an excellent substrate to study sensorimotor transformations. To date, the spatial and temporal properties of population activity along its dorsoventral axis have been inferred from single electrode studies. Here, we recorded SC population activity in non-human primates using a linear multi-contact array during delayed saccade tasks. We show that during the visual epoch, information appeared first in dorsal layers and systematically later in ventral layers. During the delay period, the laminar organization of low-spiking rate activity matched that of the visual epoch. During the pre-saccadic epoch, spiking activity emerged first in a more ventral layer, ~100 ms before saccade onset. This buildup of activity appeared later on nearby neurons situated both dorsally and ventrally, culminating in a synchronous burst across the dorsoventral axis, ~28 ms before saccade onset. Collectively, these results reveal a principled spatiotemporal organization of SC population activity underlying sensorimotor transformation for the control of gaze.
Affiliation(s)
- Corentin Massot
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260 USA
- Center for Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA 15260 USA
- Uday K. Jagadisan
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260 USA
- Center for Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA 15260 USA
- Neeraj J. Gandhi
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260 USA
- Center for Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA 15260 USA
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA 15260 USA
14
John YJ, Zikopoulos B, Bullock D, Barbas H. Visual Attention Deficits in Schizophrenia Can Arise From Inhibitory Dysfunction in Thalamus or Cortex. Computational Psychiatry 2018; 2:223-257. [PMID: 30627672] [PMCID: PMC6317791] [DOI: 10.1162/cpsy_a_00023] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Received: 04/27/2018] [Accepted: 10/17/2018] [Indexed: 01/13/2023]
Abstract
Schizophrenia is associated with diverse cognitive deficits, including disorders of attention-related oculomotor behavior. At the structural level, schizophrenia is associated with abnormal inhibitory control in the circuit linking cortex and thalamus. We developed a spiking neural network model that demonstrates how dysfunctional inhibition can degrade attentive gaze control. Our model revealed that perturbations of two functionally distinct classes of cortical inhibitory neurons, or of the inhibitory thalamic reticular nucleus, disrupted processing vital for sustained attention to a stimulus, leading to distractibility. Because perturbation at each circuit node led to comparable but qualitatively distinct disruptions in attentive tracking or fixation, our findings support the search for new eye movement metrics that may index distinct underlying neural defects. Moreover, because the cortico-thalamic circuit is a common motif across sensory, association, and motor systems, the model and extensions can be broadly applied to study normal function and the neural bases of other cognitive deficits in schizophrenia.
Affiliation(s)
- Yohan J. John
- Neural Systems Laboratory, Department of Health Sciences, Boston University, Boston, Massachusetts, USA
- Basilis Zikopoulos
- Human Systems Neuroscience Laboratory, Department of Health Sciences, Boston University, Boston, Massachusetts, USA
- Graduate Program for Neuroscience, Boston University, and School of Medicine, Boston, Massachusetts, USA
- Daniel Bullock
- Graduate Program for Neuroscience, Boston University, and School of Medicine, Boston, Massachusetts, USA
- Department of Psychological and Brain Sciences, Boston University, Boston, Massachusetts, USA
- Helen Barbas
- Neural Systems Laboratory, Department of Health Sciences, Boston University, Boston, Massachusetts, USA
- Graduate Program for Neuroscience, Boston University, and School of Medicine, Boston, Massachusetts, USA