1
Luabeya GN, Yan X, Freud E, Crawford JD. Influence of gaze, vision, and memory on hand kinematics in a placement task. J Neurophysiol 2024; 132:147-161. [PMID: 38836297] [DOI: 10.1152/jn.00362.2023]
Abstract
People usually reach for objects to place them in some position and orientation, but the placement component of this sequence is often ignored. For example, reaches are influenced by gaze position, visual feedback, and memory delays, but their influence on object placement is unclear. Here, we tested these factors in a task where participants placed and oriented a trapezoidal block against two-dimensional (2-D) visual templates displayed on a frontally located computer screen. In experiment 1, participants matched the block to three possible orientations: 0° (horizontal), +45° and -45°, with gaze fixated 10° to the left/right. The hand and template either remained illuminated (closed-loop), or visual feedback was removed (open-loop). Here, hand location consistently overshot the template relative to gaze, especially in the open-loop task; likewise, orientation was influenced by gaze position (depending on template orientation and visual feedback). In experiment 2, a memory delay was added, and participants sometimes performed saccades (toward, away from, or across the template). In this task, the influence of gaze on orientation vanished, but location errors were influenced by both template orientation and final gaze position. Contrary to our expectations, the previous saccade metrics also impacted placement overshoot. Overall, hand orientation was influenced by template orientation in a nonlinear fashion. These results demonstrate interactions between gaze and orientation signals in the planning and execution of hand placement and suggest different neural mechanisms for closed-loop, open-loop, and memory delay placement.

NEW & NOTEWORTHY Eye-hand coordination studies usually focus on object acquisition, but placement is equally important. We investigated how gaze position influences object placement toward a 2-D template with different levels of visual feedback. Like reach, placement overestimated goal location relative to gaze and was influenced by previous saccade metrics. Gaze also modulated hand orientation, depending on template orientation and level of visual feedback. This gaze influence was feedback-dependent: after a memory delay, the effect of gaze on hand orientation vanished.
Affiliation(s)
- Gaelle N Luabeya
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Xiaogang Yan
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Erez Freud
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Department of Psychology, York University, Toronto, Ontario, Canada
- J Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Department of Psychology, York University, Toronto, Ontario, Canada
- Department of Kinesiology & Health Sciences, York University, Toronto, Ontario, Canada
- Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario, Canada
2
Fooken J, Baltaretu BR, Barany DA, Diaz G, Semrau JA, Singh T, Crawford JD. Perceptual-Cognitive Integration for Goal-Directed Action in Naturalistic Environments. J Neurosci 2023; 43:7511-7522. [PMID: 37940592] [PMCID: PMC10634571] [DOI: 10.1523/jneurosci.1373-23.2023]
Abstract
Real-world actions require one to simultaneously perceive, think, and act on the surrounding world, requiring the integration of (bottom-up) sensory information and (top-down) cognitive and motor signals. Studying these processes involves the intellectual challenge of cutting across traditional neuroscience silos, and the technical challenge of recording data in uncontrolled natural environments. However, recent advances in techniques, such as neuroimaging, virtual reality, and motion tracking, allow one to address these issues in naturalistic environments for both healthy participants and clinical populations. In this review, we survey six topics in which naturalistic approaches have advanced both our fundamental understanding of brain function and how neurologic deficits influence goal-directed, coordinated action in naturalistic environments. The first part conveys fundamental neuroscience mechanisms related to visuospatial coding for action, adaptive eye-hand coordination, and visuomotor integration for manual interception. The second part discusses applications of such knowledge to neurologic deficits, specifically, steering in the presence of cortical blindness, the impact of stroke on visual-proprioceptive integration, and the impact of visual search and working memory deficits. This translational approach, extending knowledge from lab to rehab, provides new insights into the complex interplay between perceptual, motor, and cognitive control in naturalistic tasks that are relevant for both basic and clinical research.
Affiliation(s)
- Jolande Fooken
- Centre for Neuroscience, Queen's University, Kingston, Ontario K7L3N6, Canada
- Bianca R Baltaretu
- Department of Psychology, Justus Liebig University, Giessen 35394, Germany
- Deborah A Barany
- Department of Kinesiology, University of Georgia, and Augusta University/University of Georgia Medical Partnership, Athens, Georgia 30602
- Gabriel Diaz
- Center for Imaging Science, Rochester Institute of Technology, Rochester, New York 14623
- Jennifer A Semrau
- Department of Kinesiology and Applied Physiology, University of Delaware, Newark, Delaware 19713
- Tarkeshwar Singh
- Department of Kinesiology, Pennsylvania State University, University Park, Pennsylvania 16802
- J Douglas Crawford
- Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
3
Wang HL, Kuo YT, Lo YC, Kuo CH, Chen BW, Wang CF, Wu ZY, Lee CE, Yang SH, Lin SH, Chen PC, Chen YY. Enhancing Prediction of Forelimb Movement Trajectory through a Calibrating-Feedback Paradigm Incorporating Rat Primary Motor and Agranular Cortical Ensemble Activity in the Goal-Directed Reaching Task. Int J Neural Syst 2023; 33:2350051. [PMID: 37632142] [DOI: 10.1142/s012906572350051x]
Abstract
Complete reaching movements involve target sensing, motor planning, and arm movement execution, and this process requires the integration and communication of various brain regions. Previously, reaching movements have been decoded successfully from the motor cortex (M1) and applied to prosthetic control. However, most studies attempted to decode neural activities from a single brain region, resulting in reduced decoding accuracy during visually guided reaching motions. To enhance the decoding accuracy of visually guided forelimb reaching movements, we propose a parallel computing neural network using both M1 and medial agranular cortex (AGm) neural activities of rats to predict forelimb-reaching movements. The proposed network decodes M1 neural activities into the primary components of the forelimb movement and decodes AGm neural activities into internal feedforward information to calibrate the forelimb movement in a goal-reaching movement. We demonstrate that using AGm neural activity to calibrate M1 predicted forelimb movement can improve decoding performance significantly compared to neural decoders without calibration. We also show that the M1 and AGm neural activities contribute to controlling forelimb movement during goal-reaching movements, and we report an increase in the power of the local field potential (LFP) in beta and gamma bands over AGm in response to a change in the target distance, which may involve sensorimotor transformation and communication between the visual cortex and AGm when preparing for an upcoming reaching movement. The proposed parallel computing neural network with the internal feedback model improves prediction accuracy for goal-reaching movements.
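The calibration idea summarized above can be sketched in a few lines: a primary trajectory decode (from M1 in the study) is corrected sample-by-sample by a second signal (from AGm). This is an illustrative toy, not the paper's trained networks; the decoded values, correction terms, and error metric are all hypothetical placeholders.

```python
# Hypothetical sketch of a calibrating-feedback decode: a primary
# trajectory estimate is corrected by an additive calibration signal.

def decode_with_calibration(primary_decode, calibration):
    """Combine a primary decoded trajectory with a per-sample
    internal-feedforward correction term."""
    return [p + c for p, c in zip(primary_decode, calibration)]

primary   = [0.0, 1.0, 2.2, 3.1]     # toy decoded positions (cm)
correction = [0.0, -0.1, -0.2, -0.1]  # toy calibration terms (cm)
true_traj = [0.0, 0.9, 2.0, 3.0]     # toy ground-truth trajectory (cm)

calibrated = decode_with_calibration(primary, correction)
err_raw = sum(abs(a - b) for a, b in zip(primary, true_traj))
err_cal = sum(abs(a - b) for a, b in zip(calibrated, true_traj))
assert err_cal < err_raw  # calibration reduces trajectory error
```

In the study itself the correction is learned from AGm ensemble activity rather than given; the sketch only shows where such a term enters the decode.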
Affiliation(s)
- Han-Lin Wang
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, No. 155, Sec. 2 Linong St., Taipei 112304, Taiwan
- Yun-Ting Kuo
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, No. 155, Sec. 2 Linong St., Taipei 112304, Taiwan
- Yu-Chun Lo
- The Ph.D. Program in Medical Neuroscience, College of Medical Science and Technology, Taipei Medical University, 12F., Education & Research Building, Shuang-Ho Campus, No. 301, Yuantong Rd., New Taipei City 235235, Taiwan
- Chao-Hung Kuo
- Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, No. 201, Sec. 2 Shipai Rd., Taipei 11217, Taiwan
- Bo-Wei Chen
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, No. 155, Sec. 2 Linong St., Taipei 112304, Taiwan
- Ching-Fu Wang
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, No. 155, Sec. 2 Linong St., Taipei 112304, Taiwan
- Biomedical Engineering Research and Development Center, National Yang Ming Chiao Tung University, No. 155, Sec. 2 Linong St., Taipei 112304, Taiwan
- Zu-Yu Wu
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, No. 155, Sec. 2 Linong St., Taipei 112304, Taiwan
- Chi-En Lee
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, No. 155, Sec. 2 Linong St., Taipei 112304, Taiwan
- Shih-Hung Yang
- Department of Mechanical Engineering, National Cheng Kung University, No. 1, University Rd., Tainan 70101, Taiwan
- Sheng-Huang Lin
- Department of Neurology, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, No. 707, Sec. 3 Zhongyang Rd., Hualien 97002, Taiwan
- Department of Neurology, School of Medicine, Tzu Chi University, No. 701, Sec. 3, Zhongyang Rd., Hualien 97004, Taiwan
- Po-Chuan Chen
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- You-Yin Chen
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, No. 155, Sec. 2 Linong St., Taipei 112304, Taiwan
- The Ph.D. Program in Medical Neuroscience, College of Medical Science and Technology, Taipei Medical University, 12F., Education & Research Building, Shuang-Ho Campus, No. 301, Yuantong Rd., New Taipei City 235235, Taiwan
4
Bencivenga F, Tullo MG, Maltempo T, von Gal A, Serra C, Pitzalis S, Galati G. Effector-selective modulation of the effective connectivity within frontoparietal circuits during visuomotor tasks. Cereb Cortex 2023; 33:2517-2538. [PMID: 35709758] [PMCID: PMC10016057] [DOI: 10.1093/cercor/bhac223]
Abstract
Despite extensive research, the functional architecture of the subregions of the dorsal posterior parietal cortex (PPC) involved in sensorimotor processing is far from clear. Here, we draw a thorough picture of the large-scale functional organization of the PPC to disentangle the fronto-parietal networks mediating visuomotor functions. To this aim, we reanalyzed available human functional magnetic resonance imaging data collected during the execution of saccades, hand, and foot pointing, and we combined individual surface-based activation, resting-state functional connectivity, and effective connectivity analyses. We described a functional distinction between a more lateral region in the posterior intraparietal sulcus (lpIPS), preferring saccades over pointing and coupled with the frontal eye fields (FEF) at rest, and a more medial portion (mpIPS) intrinsically correlated to the dorsal premotor cortex (PMd). Dynamic causal modeling revealed feedforward-feedback loops linking lpIPS with FEF during saccades and mpIPS with PMd during pointing, with substantial differences between hand and foot. Despite an intrinsic specialization of the action-specific fronto-parietal networks, our study reveals that their functioning is finely regulated according to the effector to be used, with the dynamic interactions within those networks modulated differently when carrying out a similar movement (i.e., pointing) with distinct effectors (i.e., hand and foot).
Affiliation(s)
- Federica Bencivenga (corresponding author)
- Department of Psychology, “Sapienza” University of Rome, Via dei Marsi 78, 00185 Rome, Italy
- Teresa Maltempo
- Cognitive and Motor Rehabilitation and Neuroimaging Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Via Ardeatina 306/354, 00179 Roma, Italy
- Department of Movement, Human and Health Sciences, University of Rome “Foro Italico”, Piazza Lauro De Bosis 15, 00135 Roma, Italy
- Alessandro von Gal
- Brain Imaging Laboratory, Department of Psychology, Sapienza University, Via dei Marsi 78, 00185 Roma, Italy
- PhD program in Behavioral Neuroscience, Sapienza University of Rome, Via dei Marsi 78, 00185 Roma, Italy
- Chiara Serra
- Department of Movement, Human and Health Sciences, University of Rome “Foro Italico”, Piazza Lauro De Bosis 15, 00135 Roma, Italy
- Sabrina Pitzalis
- Cognitive and Motor Rehabilitation and Neuroimaging Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Via Ardeatina 306/354, 00179 Roma, Italy
- Department of Movement, Human and Health Sciences, University of Rome “Foro Italico”, Piazza Lauro De Bosis 15, 00135 Roma, Italy
- Gaspare Galati
- Brain Imaging Laboratory, Department of Psychology, Sapienza University, Via dei Marsi 78, 00185 Roma, Italy
- Cognitive and Motor Rehabilitation and Neuroimaging Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Via Ardeatina 306/354, 00179 Roma, Italy
5
Glennerster A. Understanding 3D vision as a policy network. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210448. [PMID: 36511403] [PMCID: PMC9745881] [DOI: 10.1098/rstb.2021.0448]
Abstract
It is often assumed that the brain builds 3D coordinate frames, in retinal coordinates (with binocular disparity giving the third dimension), head-centred, body-centred and world-centred coordinates. This paper questions that assumption and begins to sketch an alternative based on, essentially, a set of reflexes. A 'policy network' is a term used in reinforcement learning to describe the set of actions that are generated by an agent depending on its current state. This is an untypical starting point for describing 3D vision, but a policy network can serve as a useful representation both for the 3D layout of a scene and the location of the observer within it. It avoids 3D reconstruction of the type used in computer vision but is similar to recent representations for navigation generated through reinforcement learning. A policy network for saccades (pure rotations of the camera/eye) is a logical starting point for understanding (i) an ego-centric representation of space (e.g. Marr's 2½-D sketch; Marr 1982, Vision: A Computational Investigation into the Human Representation and Processing of Visual Information) and (ii) a hierarchical, compositional representation for navigation. The potential neural implementation of policy networks is straightforward; a network with a large range of sensory and task-related inputs such as the cerebellum would be capable of implementing this input/output function. This is not the case for 3D coordinate transformations in the brain: no neurally implementable proposals have yet been put forward that could carry out a transformation of a visual scene from retinal to world-based coordinates. Hence, if the representation underlying 3D vision can be described as a policy network (in which the actions are either saccades or head translations), this would be a significant step towards a neurally plausible model of 3D vision. This article is part of the theme issue 'New approaches to 3D vision'.
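The core idea — a direct mapping from the agent's current state to an action, with no intermediate 3D reconstruction — can be illustrated with a minimal tabular policy. This is a toy under assumed states and action labels, not Glennerster's proposal in any detail; the state space and action names are hypothetical.

```python
# Toy "policy network" in the reinforcement-learning sense: a function
# pi(state) -> action. Here the state is a target's retinal eccentricity
# (deg) and the action is the saccade that would foveate it. Both the
# discretization and the labels are illustrative placeholders.

def make_policy(table):
    """Return a policy function pi(state) -> action from a lookup table."""
    def policy(state):
        return table[state]
    return policy

policy = make_policy({
    -10: "saccade_left_10deg",
      0: "fixate",
    +10: "saccade_right_10deg",
})

assert policy(10) == "saccade_right_10deg"
```

The point of the sketch is only that such an input/output function needs no world-based coordinate transformation: the layout of the scene is implicit in the set of state-action pairs.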
Affiliation(s)
- Andrew Glennerster
- School of Psychology and Clinical Language Sciences, University of Reading, Reading RG6 6AL, UK
6
Murdison TS, Standage DI, Lefèvre P, Blohm G. Effector-dependent stochastic reference frame transformations alter decision-making. J Vis 2022; 22:1. [PMID: 35816048] [PMCID: PMC9284468] [DOI: 10.1167/jov.22.8.1]
Abstract
Psychophysical, motor control, and modeling studies have revealed that sensorimotor reference frame transformations (RFTs) add variability to transformed signals. For perceptual decision-making, this phenomenon could decrease the fidelity of a decision signal's representation or alternatively improve its processing through stochastic facilitation. We investigated these two hypotheses under various sensorimotor RFT constraints. Participants performed a time-limited, forced-choice motion discrimination task under eight combinations of head roll and/or stimulus rotation while responding either with a saccade or button press. This paradigm, together with the use of a decision model, allowed us to parameterize and correlate perceptual decision behavior with eye-, head-, and shoulder-centered sensory and motor reference frames. Misalignments between sensory and motor reference frames produced systematic changes in reaction time and response accuracy. For some conditions, these changes were consistent with a degradation of motion evidence commensurate with a decrease in stimulus strength in our model framework. Differences in participant performance were explained by a continuum of eye–head–shoulder representations of accumulated motion evidence, with an eye-centered bias during saccades and a shoulder-centered bias during button presses. In addition, we observed evidence for stochastic facilitation during head-rolled conditions (i.e., head roll resulted in faster, more accurate decisions in oblique motion for a given stimulus–response misalignment). We show that perceptual decision-making and stochastic RFTs are inseparable within the present context. We show that by simply rolling one's head, perceptual decision-making is altered in a way that is predicted by stochastic RFTs.
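The premise that an RFT adds variability can be demonstrated with a small simulation: rotating a fixed point through a noisily estimated head-roll angle yields a variable output even though the input itself is noiseless. The angle and noise magnitudes below are illustrative, not values from the study.

```python
import math
import random
import statistics

random.seed(0)

def rotate(x, y, angle_deg):
    """Rotate a 2-D point by angle_deg (e.g., compensating head roll)."""
    a = math.radians(angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# Stochastic RFT: the transformation angle is itself an internal estimate
# with noise, so even a noiseless sensory input acquires variability
# after being transformed.
true_roll = 30.0      # head roll to compensate (deg); illustrative
angle_noise_sd = 5.0  # SD of the internal roll estimate (deg); assumed

xs = [rotate(10.0, 0.0, random.gauss(true_roll, angle_noise_sd))[0]
      for _ in range(10_000)]

assert statistics.stdev(xs) > 0.1  # the transformed signal is variable
```

Whether this added noise degrades a decision signal or facilitates it (the paper's two hypotheses) depends on what downstream processing does with the variability; the sketch only shows where the noise enters.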
Affiliation(s)
- T Scott Murdison
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada
- Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
- Dominic I Standage
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada
- Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
- School of Psychology, University of Birmingham, UK
- Philippe Lefèvre
- ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-La-Neuve, Belgium
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada
- Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
7
Abedi Khoozani P, Bharmauria V, Schütz A, Wildes RP, Crawford JD. Integration of allocentric and egocentric visual information in a convolutional/multilayer perceptron network model of goal-directed gaze shifts. Cereb Cortex Commun 2022; 3:tgac026. [PMID: 35909704] [PMCID: PMC9334293] [DOI: 10.1093/texcom/tgac026]
Abstract
Allocentric (landmark-centered) and egocentric (eye-centered) visual codes are fundamental for spatial cognition, navigation, and goal-directed movement. Neuroimaging and neurophysiology suggest these codes are initially segregated, but then reintegrated in frontal cortex for movement control. We created and validated a theoretical framework for this process using physiologically constrained inputs and outputs. To implement a general framework, we integrated a convolutional neural network (CNN) of the visual system with a multilayer perceptron (MLP) model of the sensorimotor transformation. The network was trained on a task where a landmark shifted relative to the saccade target. These visual parameters were input to the CNN, the CNN output and initial gaze position to the MLP, and a decoder transformed MLP output into saccade vectors. Decoded saccade output replicated idealized training sets with various allocentric weightings and actual monkey data where the landmark shift had a partial influence (R2 = 0.8). Furthermore, MLP output units accurately simulated prefrontal response field shifts recorded from monkeys during the same paradigm. In summary, our model replicated both the general properties of the visuomotor transformations for gaze and specific experimental results obtained during allocentric–egocentric integration, suggesting it can provide a general framework for understanding these and other complex visuomotor behaviors.
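At the behavioral level, the partial landmark influence the model reproduces can be written as a simple weighted combination of egocentric and allocentric information. This is a hedged reduction of the idea, not the CNN-MLP architecture itself, and all numbers (positions, shift, weight) are hypothetical.

```python
# Toy allocentric-egocentric integration (not the authors' network):
# the effective saccade goal is the eye-centered target plus a weighted
# fraction of the landmark shift.

def integrate_goal(ego_target, landmark_shift, allo_weight):
    """Combine an eye-centered target location with a landmark shift.

    allo_weight = 0 ignores the landmark entirely; allo_weight = 1
    follows it fully. A partial landmark influence, as reported for the
    monkey data, corresponds to an intermediate weight."""
    return ego_target + allo_weight * landmark_shift

# Landmark shifted 6 deg rightward relative to the remembered target:
assert integrate_goal(10.0, 6.0, 0.0) == 10.0  # purely egocentric
assert integrate_goal(10.0, 6.0, 1.0) == 16.0  # purely allocentric
assert integrate_goal(10.0, 6.0, 0.5) == 13.0  # partial influence
```

In the model the weighting is not a free scalar but emerges from training; the sketch only makes explicit what the decoded saccade output is being compared against.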
Affiliation(s)
- Parisa Abedi Khoozani
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Vishal Bharmauria
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Adrian Schütz
- Department of Neurophysics, Philipps-University Marburg, Marburg 35037, Germany
- Richard P Wildes
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Department of Electrical Engineering and Computer Science, York University, Toronto, ON M3J 1P3, Canada
- J Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Departments of Psychology, Biology and Kinesiology & Health Sciences, York University, Toronto, Ontario M3J 1P3, Canada
8
Sajad A, Sadeh M, Crawford JD. Spatiotemporal transformations for gaze control. Physiol Rep 2020; 8:e14533. [PMID: 32812395] [PMCID: PMC7435051] [DOI: 10.14814/phy2.14533]
Abstract
Sensorimotor transformations require spatiotemporal coordination of signals, that is, through both time and space. For example, the gaze control system employs signals that are time-locked to various sensorimotor events, but the spatial content of these signals is difficult to assess during ordinary gaze shifts. In this review, we describe the various models and methods that have been devised to test this question, and their limitations. We then describe a new method that can (a) simultaneously test between all of these models during natural, head-unrestrained conditions, and (b) track the evolving spatial continuum from target (T) to future gaze coding (G, including errors) through time. We then summarize some applications of this technique, comparing spatiotemporal coding in the primate frontal eye field (FEF) and superior colliculus (SC). The results confirm that these areas preferentially encode eye-centered, effector-independent parameters, and show, for the first time in ordinary gaze shifts, a spatial transformation between visual and motor responses from T to G coding. We introduce a new set of spatial models (T-G continuum) that revealed task-dependent timing of this transformation: progressive during a memory delay between vision and action, and almost immediate without such a delay. We synthesize the results from our studies and supplement them with previous knowledge of anatomy and physiology to propose a conceptual model where cumulative transformation noise is realized as inaccuracies in gaze behavior. We conclude that the spatiotemporal transformation for gaze is both local (observed within and across neurons in a given area) and distributed (with common signals shared across remote but interconnected structures).
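The T-G continuum can be expressed as a one-parameter family of spatial models interpolating between target position and final gaze position. A minimal sketch (scalar positions, hypothetical values) of what such a model predicts:

```python
# Toy T-G continuum: alpha = 0 is pure target (T) coding, alpha = 1 is
# pure final-gaze (G) coding, and intermediate alphas are the spatial
# models in between. All values are hypothetical.

def tg_prediction(T, G, alpha):
    """Predicted coded position for a model at point alpha on the T-G axis."""
    return (1 - alpha) * T + alpha * G

target_pos = 12.0  # target location (deg); illustrative
gaze_pos = 10.5    # actual final gaze, including behavioral error (deg)

# A purely visual response best fits T; a motor response near movement
# onset is expected to sit closer to G.
assert tg_prediction(target_pos, gaze_pos, 0.0) == 12.0
assert tg_prediction(target_pos, gaze_pos, 1.0) == 10.5
assert tg_prediction(target_pos, gaze_pos, 0.5) == 11.25
```

Fitting the alpha that best explains a neuron's response field at each moment is what lets the method track the transformation's task-dependent timing: alpha drifting gradually toward 1 during a memory delay, or jumping almost immediately without one.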
Affiliation(s)
- Amirsaman Sajad
- Centre for Vision Research, York University, Toronto, ON, Canada
- Psychology Department, Vanderbilt University, Nashville, TN, USA
- Morteza Sadeh
- Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Neurosurgery, University of Illinois at Chicago, Chicago, IL, USA
- John Douglas Crawford
- Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications Program (VISTA), Neuroscience Graduate Diploma Program, Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, ON, Canada
9
Hadjidimitrakis K, Ghodrati M, Breveglieri R, Rosa MGP, Fattori P. Neural coding of action in three dimensions: Task- and time-invariant reference frames for visuospatial and motor-related activity in parietal area V6A. J Comp Neurol 2020; 528:3108-3122. [PMID: 32080849] [DOI: 10.1002/cne.24889]
Abstract
Goal-directed movements involve a series of neural computations that compare the sensory representations of goal location and effector position, and transform these into motor commands. Neurons in posterior parietal cortex (PPC) control several effectors (e.g., eye, hand, foot) and encode goal location in a variety of spatial coordinate systems, including those anchored to gaze direction, and to the positions of the head, shoulder, or hand. However, there is little evidence on whether reference frames depend also on the effector and/or type of motor response. We addressed this issue in macaque PPC area V6A, where previous reports using a fixate-to-reach in depth task, from different starting arm positions, indicated that most units use mixed body/hand-centered coordinates. Here, we applied singular value decomposition and gradient analyses to characterize the reference frames in V6A while the animals, instead of arm reaching, performed a nonspatial motor response (hand lift). We found that most neurons used mixed body/hand coordinates, instead of "pure" body- or hand-centered coordinates. As the task progressed, the effect of hand position on activity became stronger than that of target location. Activity consistent with body-centered coding was present only in a subset of neurons active early in the task. Applying the same analyses to a population of V6A neurons recorded during the fixate-to-reach task yielded similar results. These findings suggest that V6A neurons use consistent reference frames between spatial and nonspatial motor responses, a functional property that may allow the integration of spatial awareness and movement control.
Affiliation(s)
- Kostas Hadjidimitrakis
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Department of Physiology and Biomedicine Discovery Institute, Monash University, Clayton, Victoria, Australia
- Masoud Ghodrati
- Department of Physiology and Biomedicine Discovery Institute, Monash University, Clayton, Victoria, Australia
- ARC Centre of Excellence for Integrative Brain Function, Monash University, Clayton, Victoria, Australia
- Rossella Breveglieri
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Marcello G P Rosa
- Department of Physiology and Biomedicine Discovery Institute, Monash University, Clayton, Victoria, Australia
- ARC Centre of Excellence for Integrative Brain Function, Monash University, Clayton, Victoria, Australia
- Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
10
Niechwiej-Szwedo E, Meier K, Christian L, Nouredanesh M, Tung J, Bryden P, Giaschi D. Concurrent maturation of visuomotor skills and motion perception in typically-developing children and adolescents. Dev Psychobiol 2019; 62:353-367. [PMID: 31621075] [DOI: 10.1002/dev.21931]
Abstract
Perceptual and visuomotor skills undergo considerable development from early childhood into adolescence; however, the concurrent maturation of these skills has not yet been examined. This study assessed visuomotor function and motion perception in a cross-section of 226 typically-developing children between 4 and 16 years of age. Participants were tested on three tasks hypothesized to engage the dorsal visual stream: threading a bead on a needle, marking dots using a pen, and discriminating form defined by motion contrast. Mature performance was reached between 8 and 12 years of age, with the earliest maturation for kinematic measures in a reach-to-grasp task and the latest for a precision tapping task. Performance on the motion perception task shared no association with motor skills after controlling for age.
Affiliation(s)
- Lisa Christian
- Optometry and Vision Science, University of Waterloo, Waterloo, ON, Canada
- Mina Nouredanesh
- Mechanical and Mechatronics Engineering, University of Waterloo, Waterloo, ON, Canada
- James Tung
- Mechanical and Mechatronics Engineering, University of Waterloo, Waterloo, ON, Canada
- Pamela Bryden
- Kinesiology and Physical Education, Wilfrid Laurier University, Waterloo, ON, Canada
- Deborah Giaschi
- Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, BC, Canada
11.
Blohm G, Alikhanian H, Gaetz W, Goltz H, DeSouza J, Cheyne D, Crawford J. Neuromagnetic signatures of the spatiotemporal transformation for manual pointing. Neuroimage 2019; 197:306-319. [DOI: 10.1016/j.neuroimage.2019.04.074] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2018] [Revised: 03/28/2019] [Accepted: 04/27/2019] [Indexed: 11/29/2022] Open
12.
Abedi Khoozani P, Blohm G. Neck muscle spindle noise biases reaches in a multisensory integration task. J Neurophysiol 2018; 120:893-909. [PMID: 29742021 PMCID: PMC6171065 DOI: 10.1152/jn.00643.2017] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2017] [Revised: 04/25/2018] [Accepted: 04/25/2018] [Indexed: 11/22/2022] Open
Abstract
Reference frame transformations (RFTs) are crucial components of sensorimotor transformations in the brain. Stochasticity in RFTs has been suggested to add noise to the transformed signal due to variability in transformation parameter estimates (e.g., angle) as well as the stochastic nature of computations in spiking networks of neurons. Here, we varied the RFT angle together with the associated variability and evaluated the behavioral impact in a reaching task that required variability-dependent visual-proprioceptive multisensory integration. Crucially, reaches were performed with the head either straight or rolled 30° to either shoulder, and we also applied neck loads of 0 or 1.8 kg (left or right) in a 3 × 3 design, resulting in different combinations of estimated head roll angle magnitude and variance required in RFTs. A novel three-dimensional stochastic model of multisensory integration across reference frames was fitted to the data and captured our main behavioral findings: 1) neck load biased head angle estimation across all head roll orientations, resulting in systematic shifts in reach errors; 2) increased neck muscle tone led to increased reach variability due to signal-dependent noise; and 3) both head roll and neck load created larger angular errors in reaches to visual targets away from the body compared with reaches toward the body. These results show that noise in muscle spindles and stochasticity in general have a tangible effect on RFTs underlying reach planning. Since RFTs are omnipresent in the brain, our results could have implications for processes as diverse as motor control, decision making, posture/balance control, and perception. NEW & NOTEWORTHY We show that increasing neck muscle tone systematically biases reach movements. 
A novel three-dimensional multisensory integration across reference frames model captures the data well and provides evidence that the brain must have online knowledge of full-body geometry together with the associated variability to plan reach movements accurately.
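The central manipulation in this study — a head-roll reference frame transformation whose angle estimate is biased by neck load and corrupted by spindle noise — can be sketched as a 2-D rotation with a stochastic angle. This is an illustrative toy, not the paper's 3-D stochastic multisensory model; all numbers below are made up:

```python
import numpy as np

def transform_target(target_xy, roll_deg, bias_deg=0.0, noise_deg=0.0, n=1000, seed=0):
    """Rotate an eye-centered target through a noisy estimate of head roll.

    A biased mean angle (bias_deg) shifts every transformed target the same
    way (systematic reach error); angle noise (noise_deg) spreads the
    transformed targets out (reach variability).
    """
    rng = np.random.default_rng(seed)
    ang = np.deg2rad(roll_deg + bias_deg + rng.normal(0, noise_deg, n))
    c, s = np.cos(ang), np.sin(ang)
    x, y = target_xy
    return np.column_stack([c * x - s * y, s * x + c * y])

# Head rolled 30 deg; suppose a neck load biases the roll estimate by 5 deg
# and adds 3 deg of angular noise (hypothetical values).
clean = transform_target((10.0, 0.0), 30.0)
noisy = transform_target((10.0, 0.0), 30.0, bias_deg=5.0, noise_deg=3.0)

shift = noisy.mean(axis=0) - clean.mean(axis=0)   # systematic error from the bias
spread = noisy.std(axis=0)                        # variability from the noise
```

The bias produces a nonzero mean shift of the transformed target, and the angular noise produces position scatter that grows with target eccentricity — the two behavioral signatures the abstract describes.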
Affiliation(s)
- Parisa Abedi Khoozani
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Canadian Action and Perception Network, Toronto, Ontario, Canada
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Canadian Action and Perception Network, Toronto, Ontario, Canada
- Association for Canadian Neuroinformatics and Computational Neuroscience, Kingston, Ontario, Canada
13.
Role of Rostral Fastigial Neurons in Encoding a Body-Centered Representation of Translation in Three Dimensions. J Neurosci 2018; 38:3584-3602. [PMID: 29487123 DOI: 10.1523/jneurosci.2116-17.2018] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2017] [Revised: 02/01/2018] [Accepted: 02/20/2018] [Indexed: 11/21/2022] Open
Abstract
Many daily behaviors rely critically on estimates of our body motion. Such estimates must be computed by combining neck proprioceptive signals with vestibular signals that have been transformed from a head- to a body-centered reference frame. Recent studies showed that deep cerebellar neurons in the rostral fastigial nucleus (rFN) reflect these computations, but whether they explicitly encode estimates of body motion remains unclear. A key limitation in addressing this question is that, to date, cell tuning properties have only been characterized for a restricted set of motions across head-re-body orientations in the horizontal plane. Here we examined, for the first time, how 3D spatiotemporal tuning for translational motion varies with head-re-body orientation in both horizontal and vertical planes in the rFN of male macaques. While vestibular coding was profoundly influenced by head-re-body position in both planes, neurons typically reflected at most a partial transformation. However, their tuning shifts were not random but followed the specific spatial trajectories predicted for a 3D transformation. We show that these properties facilitate the linear decoding of fully body-centered motion representations in 3D with a broad range of temporal characteristics from small groups of 5-7 cells. These results demonstrate that the vestibular reference frame transformation required to compute body motion is indeed encoded by cerebellar neurons. We propose that maintaining partially transformed rFN responses with different spatiotemporal properties facilitates the creation of downstream body motion representations with a range of dynamic characteristics, consistent with the functional requirements for tasks such as postural control and reaching.SIGNIFICANCE STATEMENT Estimates of body motion are essential for many daily activities. Vestibular signals are important contributors to such estimates but must be transformed from a head- to a body-centered reference frame. 
Here, we provide the first direct demonstration that the cerebellum computes this transformation fully in 3D. We show that the output of these computations is reflected in the tuning properties of deep cerebellar rostral fastigial nucleus neurons in a specific distributed fashion that facilitates the efficient creation of body-centered translation estimates with a broad range of temporal properties (i.e., from acceleration to position). These findings support an important role for the rostral fastigial nucleus as a source of body translation estimates functionally relevant for behaviors ranging from postural control to perception.
14.
Abstract
The world has a complex, three-dimensional (3-D) spatial structure, but until recently the neural representation of space was studied primarily in planar horizontal environments. Here we review the emerging literature on allocentric spatial representations in 3-D and discuss the relations between 3-D spatial perception and the underlying neural codes. We suggest that the statistics of movements through space determine the topology and the dimensionality of the neural representation, across species and different behavioral modes. We argue that hippocampal place-cell maps are metric in all three dimensions, and might be composed of 2-D and 3-D fragments that are stitched together into a global 3-D metric representation via the 3-D head-direction cells. Finally, we propose that the hippocampal formation might implement a neural analogue of a Kalman filter, a standard engineering algorithm used for 3-D navigation.
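The closing proposal invokes the standard Kalman filter from engineering. For readers unfamiliar with it, a minimal one-dimensional version shows the predict/update recursion being referred to — this is the generic textbook algorithm, not a model of hippocampal circuitry, and all parameter values are illustrative:

```python
import numpy as np

def kalman_1d(observations, q=0.01, r=1.0):
    """Classic 1-D Kalman filter: fuse a random-walk motion model
    (process noise q) with noisy observations (measurement noise r)."""
    x, p = 0.0, 1.0              # state estimate and its variance
    estimates = []
    for z in observations:
        p = p + q                # predict: uncertainty grows between observations
        k = p / (p + r)          # Kalman gain: how much to trust the new datum
        x = x + k * (z - x)      # update: correct the estimate toward the observation
        p = (1 - k) * p          # update: uncertainty shrinks after the correction
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(1)
true_pos = 5.0
obs = true_pos + rng.normal(0, 1.0, 200)   # noisy position readings
est = kalman_1d(obs)                       # converges toward the true position
```

The filter's running estimate settles near the true value with far less scatter than any single observation, which is what makes the recursion attractive as an analogy for navigation.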
Affiliation(s)
- Arseny Finkelstein
- Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel
- Liora Las
- Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel
- Nachum Ulanovsky
- Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel
15.
Diverse coordinate frames on sensorimotor areas in visuomotor transformation. Sci Rep 2017; 7:14950. [PMID: 29097688 PMCID: PMC5668410 DOI: 10.1038/s41598-017-14579-3] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2017] [Accepted: 10/12/2017] [Indexed: 11/08/2022] Open
Abstract
The visuomotor transformation during a goal-directed movement may involve a coordinate transformation from visual 'extrinsic' to muscle-like 'intrinsic' coordinate frames, which might be processed via a multilayer network architecture composed of neural basis functions. This theory suggests that a postural change during a goal-directed movement task alters the activity patterns of neurons in the intermediate layer of the visuomotor transformation that receives both visual and proprioceptive inputs, and thus influences the multi-voxel pattern of the blood-oxygenation-level-dependent signal. Using a recently developed multi-voxel pattern decoding method, we found extrinsic, intrinsic and intermediate coordinate frames along the visuomotor cortical pathways during a visuomotor control task. The presented results support the hypothesis that, in humans, the extrinsic coordinate frame is transformed to the muscle-like frame over the dorsal pathway from the posterior parietal cortex and the dorsal premotor cortex to the primary motor cortex.
16.
Piserchia V, Breveglieri R, Hadjidimitrakis K, Bertozzi F, Galletti C, Fattori P. Mixed Body/Hand Reference Frame for Reaching in 3D Space in Macaque Parietal Area PEc. Cereb Cortex 2017; 27:1976-1990. [PMID: 26941385 DOI: 10.1093/cercor/bhw039] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
The neural correlates of coordinate transformations from vision to action are expressed in the activity of posterior parietal cortex (PPC). It has been demonstrated that among the medial-most areas of the PPC, reaching targets are represented mainly in hand-centered coordinates in area PE, and in eye-centered, body-centered, and mixed body/hand-centered coordinates in area V6A. Here, we assessed whether neurons of area PEc, located between V6A and PE in the medial PPC, encode targets in body-centered, hand-centered, or mixed frame of reference during planning and execution of reaching. We studied 104 PEc cells in 3 Macaca fascicularis. The animals performed a reaching task toward foveated targets located at different depths and directions in darkness, starting with the hand from 2 positions located at different depths, one next to the trunk and the other far from it. We show that most PEc neurons encoded targets in a mixed body/hand-centered frame of reference. Although the effect of hand position was often rather strong, it was not as strong as reported previously in area PE. Our results suggest that area PEc represents an intermediate node in the gradual transformation from vision to action that takes place in the reaching network of the dorsomedial PPC.
Affiliation(s)
- Valentina Piserchia
- Department of Pharmacy and Biotechnology (FaBiT), University of Bologna, 40126 Bologna, Italy
- Rossella Breveglieri
- Department of Pharmacy and Biotechnology (FaBiT), University of Bologna, 40126 Bologna, Italy
- Kostas Hadjidimitrakis
- Department of Pharmacy and Biotechnology (FaBiT), University of Bologna, 40126 Bologna, Italy; Department of Physiology, Monash University, Clayton, Victoria 3800, Australia
- Federica Bertozzi
- Department of Pharmacy and Biotechnology (FaBiT), University of Bologna, 40126 Bologna, Italy
- Claudio Galletti
- Department of Pharmacy and Biotechnology (FaBiT), University of Bologna, 40126 Bologna, Italy
- Patrizia Fattori
- Department of Pharmacy and Biotechnology (FaBiT), University of Bologna, 40126 Bologna, Italy
17.
Sadeh M, Sajad A, Wang H, Yan X, Crawford JD. Spatial transformations between superior colliculus visual and motor response fields during head-unrestrained gaze shifts. Eur J Neurosci 2016; 42:2934-51. [PMID: 26448341 DOI: 10.1111/ejn.13093] [Citation(s) in RCA: 35] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2014] [Revised: 09/14/2015] [Accepted: 09/30/2015] [Indexed: 11/27/2022]
Abstract
We previously reported that visuomotor activity in the superior colliculus (SC)--a key midbrain structure for the generation of rapid eye movements--preferentially encodes target position relative to the eye (Te) during low-latency head-unrestrained gaze shifts (DeSouza et al., 2011). Here, we trained two monkeys to perform head-unrestrained gaze shifts after a variable post-stimulus delay (400-700 ms), to test whether temporally separated SC visual and motor responses show different spatial codes. Target positions, final gaze positions and various frames of reference (eye, head, and space) were dissociated through natural (untrained) trial-to-trial variations in behaviour. 3D eye and head orientations were recorded, and 2D response field data were fitted against multiple models by use of a statistical method reported previously (Keith et al., 2009). Of 60 neurons, 17 showed a visual response, 12 showed a motor response, and 31 showed both visual and motor responses. The combined visual response field population (n = 48) showed a significant preference for Te, which was also preferred in each visual subpopulation. In contrast, the motor response field population (n = 43) showed a preference for final (relative to initial) gaze position models, and the Te model was statistically eliminated in the motor-only population. There was also a significant shift of coding from the visual to motor response within visuomotor neurons. These data confirm that SC response fields are gaze-centred, and show a target-to-gaze transformation between visual and motor responses. Thus, visuomotor transformations can occur between, and even within, neurons within a single frame of reference and brain structure.
Affiliation(s)
- Morteza Sadeh
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; York Neuroscience Graduate Diploma Program, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Amirsaman Sajad
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; York Neuroscience Graduate Diploma Program, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Hongying Wang
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Xiaogang Yan
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- John Douglas Crawford
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; York Neuroscience Graduate Diploma Program, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
18.
Dowiasch S, Blohm G, Bremmer F. Neural correlate of spatial (mis-)localization during smooth eye movements. Eur J Neurosci 2016; 44:1846-55. [PMID: 27177769 PMCID: PMC5089592 DOI: 10.1111/ejn.13276] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2015] [Accepted: 04/19/2016] [Indexed: 11/29/2022]
Abstract
The dependence of neuronal discharge on the position of the eyes in the orbit is a functional characteristic of many visual cortical areas of the macaque. It has been suggested that these eye-position signals provide relevant information for a coordinate transformation of visual signals into a non-eye-centered frame of reference. This transformation could be an integral part of achieving visual perceptual stability across eye movements. Previous studies demonstrated close-to-veridical eye-position decoding during stable fixation, as well as characteristic erroneous decoding across saccadic eye movements. Here we aimed to decode eye position during smooth pursuit. We recorded neural activity in macaque area VIP during steady fixation, saccades and smooth pursuit, and investigated the temporal and spatial accuracy of eye position as decoded from the neuronal discharges. Confirming previous results, the activity of the majority of neurons depended linearly on horizontal and vertical eye position. The application of a previously introduced computational approach (isofrequency decoding) allowed eye-position decoding with considerable accuracy during steady fixation. We applied the same decoder to the activity of the same neurons during smooth pursuit. On average, the decoded signal led the current eye position. A model combining this constant lead of the decoded eye position with a previously described attentional bias ahead of the pursuit target describes the asymmetric mislocalization pattern for briefly flashed stimuli during smooth pursuit eye movements found in human behavioral studies.
Affiliation(s)
- Stefan Dowiasch
- Department of Neurophysics, Philipps-University Marburg, Karl-von-Frisch-Straße 8a, 35043 Marburg, Germany
- Frank Bremmer
- Department of Neurophysics, Philipps-University Marburg, Karl-von-Frisch-Straße 8a, 35043 Marburg, Germany
19.
Mohsenzadeh Y, Dash S, Crawford JD. A State Space Model for Spatial Updating of Remembered Visual Targets during Eye Movements. Front Syst Neurosci 2016; 10:39. [PMID: 27242452 PMCID: PMC4867689 DOI: 10.3389/fnsys.2016.00039] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2015] [Accepted: 04/19/2016] [Indexed: 12/02/2022] Open
Abstract
In the oculomotor system, spatial updating is the ability to aim a saccade toward a remembered visual target position despite intervening eye movements. Although this has been the subject of extensive experimental investigation, there is still no unifying theoretical framework to explain the neural mechanism for this phenomenon, and how it influences visual signals in the brain. Here, we propose a unified state-space model (SSM) to account for the dynamics of spatial updating during two types of eye movement: saccades and smooth pursuit. Our proposed model is a non-linear SSM implemented through a recurrent radial-basis-function neural network in a dual extended Kalman filter (EKF) structure. The model parameters and internal states (remembered target position) are estimated sequentially using the EKF method. The proposed model replicates two fundamental experimental observations: continuous gaze-centered updating of visual memory-related activity during smooth pursuit, and predictive remapping of visual memory activity before and during saccades. Moreover, our model makes the new prediction that, when uncertainty of input signals is incorporated in the model, neural population activity and receptive fields expand just before and during saccades. These results suggest that visual remapping and motor updating are part of a common visuomotor mechanism, and that subjective perceptual constancy arises in part from training the visual system on motor tasks.
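The behavior this model replicates — gaze-centered updating of a remembered target across eye movements — reduces, in its simplest deterministic form, to subtracting each gaze displacement from the stored location. The toy below illustrates only that bookkeeping, not the paper's recurrent-network/EKF implementation; all numbers are illustrative:

```python
import numpy as np

def update_remembered_target(target_gaze_centered, eye_movements):
    """Remap a remembered target held in gaze-centered coordinates:
    each eye displacement shifts the stored location in the opposite
    direction, keeping it accurate relative to the new gaze direction."""
    memory = np.asarray(target_gaze_centered, dtype=float)
    trace = [memory.copy()]
    for dg in eye_movements:
        memory = memory - np.asarray(dg, dtype=float)  # counter-shift the memory
        trace.append(memory.copy())
    return np.array(trace)

# Target flashed 10 deg right of gaze; two rightward gaze shifts totalling
# 10 deg should bring the remembered location onto the fovea at (0, 0).
trace = update_remembered_target([10.0, 0.0], [[6.0, 0.0], [4.0, 0.0]])
```

A smooth-pursuit version is the continuous limit of the same subtraction, which is why the abstract treats saccadic remapping and pursuit updating as one mechanism.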
Affiliation(s)
- Yalda Mohsenzadeh
- York Center for Vision Research, Canadian Action and Perception Network, York University, Toronto, ON, Canada
- Suryadeep Dash
- York Center for Vision Research, Canadian Action and Perception Network, York University, Toronto, ON, Canada; Department of Physiology and Pharmacology, Robarts Research Institute, Western University, London, ON, Canada
- J Douglas Crawford
- York Center for Vision Research, Canadian Action and Perception Network, York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, ON, Canada
20.
Transition from Target to Gaze Coding in Primate Frontal Eye Field during Memory Delay and Memory-Motor Transformation. eNeuro 2016; 3:eN-TNWR-0040-16. [PMID: 27092335 PMCID: PMC4829728 DOI: 10.1523/eneuro.0040-16.2016] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2016] [Accepted: 03/23/2016] [Indexed: 01/01/2023] Open
Abstract
The frontal eye fields (FEFs) participate in both working memory and sensorimotor transformations for saccades, but their role in integrating these functions through time remains unclear. Here, we tracked FEF spatial codes through time using a novel analytic method applied to the classic memory-delay saccade task. Three-dimensional recordings of head-unrestrained gaze shifts were made in two monkeys trained to make gaze shifts toward briefly flashed targets after a variable delay (450-1500 ms). A preliminary analysis of visual and motor response fields in 74 FEF neurons eliminated most potential models for spatial coding at the neuron population level, as in our previous study (Sajad et al., 2015). We then focused on the spatiotemporal transition from an eye-centered target code (T; preferred in the visual response) to an eye-centered intended gaze position code (G; preferred in the movement response) during the memory delay interval. We treated neural population codes as a continuous spatiotemporal variable by dividing the space spanning T and G into intermediate T–G models and dividing the task into discrete steps through time. We found that FEF delay activity, especially in visuomovement cells, progressively transitions from T through intermediate T–G codes that approach, but do not reach, G. This was followed by a final discrete transition from these intermediate T–G delay codes to a “pure” G code in movement cells without delay activity. These results demonstrate that FEF activity undergoes a series of sensory–memory–motor transformations, including a dynamically evolving spatial memory signal and an imperfect memory-to-motor transformation.
21.
Üstün C. A Sensorimotor Model for Computing Intended Reach Trajectories. PLoS Comput Biol 2016; 12:e1004734. [PMID: 26985662 PMCID: PMC4795795 DOI: 10.1371/journal.pcbi.1004734] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2015] [Accepted: 01/05/2016] [Indexed: 11/19/2022] Open
Abstract
The presumed role of the primate sensorimotor system is to transform reach targets from retinotopic to joint coordinates for producing motor output. However, the interpretation of neurophysiological data within this framework is ambiguous, and has led to the view that the underlying neural computation may lack a well-defined structure. Here, I consider a model of sensorimotor computation in which temporal as well as spatial transformations generate representations of desired limb trajectories, in visual coordinates. This computation is suggested by behavioral experiments, and its modular implementation makes predictions that are consistent with those observed in monkey posterior parietal cortex (PPC). In particular, the model provides a simple explanation for why PPC encodes reach targets in reference frames intermediate between the eye and hand, and further explains why these reference frames shift during movement. Representations in PPC are thus consistent with the orderly processing of information, provided we adopt the view that sensorimotor computation manipulates desired movement trajectories, and not desired movement endpoints.

Does the brain explicitly plan entire movement trajectories, or are these emergent properties of motor control? Although behavioral studies support the notion of trajectory planning for visually guided reaches, a neurobiologically plausible mechanism for this observation has been lacking. I discuss a model that generates representations of desired reach trajectories (i.e., paths and speed profiles) for point-to-point reaches. I show that the predictions of this model closely resemble the population responses of neurons in posterior parietal cortex, a visuomotor planning area of the monkey brain. Several aspects of population responses that are puzzling from the point of view of traditional sensorimotor models are coherently explained by this mechanism.
Affiliation(s)
- Cevat Üstün
- Division of Biology, California Institute of Technology, Pasadena, California, United States of America
22.
Lehky SR, Sereno ME, Sereno AB. Characteristics of Eye-Position Gain Field Populations Determine Geometry of Visual Space. Front Integr Neurosci 2016; 9:72. [PMID: 26834587 PMCID: PMC4718998 DOI: 10.3389/fnint.2015.00072] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2015] [Accepted: 12/21/2015] [Indexed: 11/17/2022] Open
Abstract
We have previously demonstrated differences in eye-position spatial maps for anterior inferotemporal cortex (AIT) in the ventral stream and lateral intraparietal cortex (LIP) in the dorsal stream, based on population decoding of gaze-angle modulations of neural visual responses (i.e., eye-position gain fields). Here we explore the basis of such spatial encoding differences through modeling of gain field characteristics. We created a population of model neurons, each having a different eye-position gain field. This population was used to reconstruct eye-position visual space using multidimensional scaling. As gain field shapes have never been well-established experimentally, we examined different functions, including planar, sigmoidal, elliptical, hyperbolic, and mixtures of those functions. All functions successfully recovered positions, indicating weak constraints on allowable gain field shapes. We then used a genetic algorithm to modify the characteristics of model gain field populations until the recovered spatial maps closely matched those derived from monkey neurophysiological data in AIT and LIP. The primary difference found between model AIT and LIP gain fields was that AIT gain fields were more foveally dominated: gain fields in AIT operated on smaller spatial scales and with smaller dispersions than in LIP. Thus, we show that the geometry of eye-position visual space depends on the population characteristics of gain fields, and that differences in gain field characteristics for different cortical areas may underlie differences in the representation of space.
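The premise of this study — that eye position can be read out from a population of gain-field-modulated responses — can be illustrated with planar gain fields and a least-squares read-out. This sketch does not reproduce the paper's multidimensional-scaling reconstruction or genetic-algorithm fitting; all shapes, sizes, and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
n_units = 50

# Each model unit scales its visual response linearly with eye position:
# r_i = a_i * ex + b_i * ey + c_i, i.e., a planar gain field.
A = rng.normal(size=(n_units, 2))   # gain-field slopes (a_i, b_i) per unit
c = rng.normal(size=n_units)        # baseline response per unit

def population_response(eye_pos):
    """Population vector for a given 2-D eye position, with small noise."""
    return A @ eye_pos + c + rng.normal(0, 0.05, n_units)

def decode_eye_position(r):
    """Least-squares read-out: recover (ex, ey) from the population vector."""
    return np.linalg.lstsq(A, r - c, rcond=None)[0]

eye = np.array([5.0, -3.0])         # hypothetical true gaze angle (deg)
decoded = decode_eye_position(population_response(eye))
```

Because the planar gain fields span eye-position space, even a noisy population supports accurate linear decoding — the property the study then exploits to compare the recovered spatial geometry across areas.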
Affiliation(s)
- Sidney R Lehky
- Computational Neurobiology Laboratory, The Salk Institute, La Jolla, CA, USA
- Anne B Sereno
- Department of Neurobiology and Anatomy, University of Texas Medical School, Houston, TX, USA
23.
Galeazzi JM, Minini L, Stringer SM. The Development of Hand-Centered Visual Representations in the Primate Brain: A Computer Modeling Study Using Natural Visual Scenes. Front Comput Neurosci 2015; 9:147. [PMID: 26696876 PMCID: PMC4678233 DOI: 10.3389/fncom.2015.00147] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2015] [Accepted: 11/23/2015] [Indexed: 11/23/2022] Open
Abstract
Neurons that respond to visual targets in a hand-centered frame of reference have been found within various areas of the primate brain. We investigate how hand-centered visual representations may develop in a neural network model of the primate visual system called VisNet, when the model is trained on images of the hand seen against natural visual scenes. The simulations show how such neurons may develop through a biologically plausible process of unsupervised competitive learning and self-organization. In an advance on our previous work, the visual scenes consisted of multiple targets presented simultaneously with respect to the hand. Three experiments are presented. First, VisNet was trained with computerized images consisting of a realistic image of a hand and a variety of natural objects, presented in different textured backgrounds during training. The network was then tested with just one textured object near the hand in order to verify if the output cells were capable of building hand-centered representations with a single localized receptive field. We explain the underlying principles of the statistical decoupling that allows the output cells of the network to develop single localized receptive fields even when the network is trained with multiple objects. In a second simulation we examined how some of the cells with hand-centered receptive fields decreased their shape selectivity and started responding to a localized region of hand-centered space as the number of objects presented in overlapping locations during training increases. Lastly, we explored the same learning principles training the network with natural visual scenes collected by volunteers. These results provide an important step in showing how single, localized, hand-centered receptive fields could emerge under more ecologically realistic visual training conditions.
Affiliation(s)
- Juan M. Galeazzi
- Department of Experimental Psychology, Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, University of Oxford, Oxford, UK
24.
Murdison TS, Leclercq G, Lefèvre P, Blohm G. Computations underlying the visuomotor transformation for smooth pursuit eye movements. J Neurophysiol 2015; 113:1377-99. [PMID: 25475344 DOI: 10.1152/jn.00273.2014] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Smooth pursuit eye movements are driven by retinal motion and enable us to view moving targets with high acuity. Complicating the generation of these movements is the fact that different eye and head rotations can produce different retinal stimuli while giving rise to identical smooth pursuit trajectories. However, because our eyes accurately pursue targets regardless of eye and head orientation (Blohm G, Lefèvre P. J Neurophysiol 104: 2103-2115, 2010), the brain must somehow take these signals into account. To learn about the neural mechanisms potentially underlying this visual-to-motor transformation, we trained a physiologically inspired neural network model to combine two-dimensional (2D) retinal motion signals with three-dimensional (3D) eye and head orientation and velocity signals to generate a spatially correct 3D pursuit command. We then simulated conditions of 1) head roll-induced ocular counterroll, 2) oblique gaze-induced retinal rotations, 3) eccentric gazes (invoking the half-angle rule), and 4) optokinetic nystagmus to investigate how units in the intermediate layers of the network accounted for different 3D constraints. Simultaneously, we simulated electrophysiological recordings (visual and motor tunings) and microstimulation experiments to quantify the reference frames of signals at each processing stage. We found a gradual retinal-to-intermediate-to-spatial feedforward transformation through the hidden layers. Our model is the first to describe the general 3D transformation for smooth pursuit mediated by eye- and head-dependent gain modulation. Based on several testable experimental predictions, our model provides a mechanism by which the brain could perform the 3D visuomotor transformation for smooth pursuit.
Affiliation(s)
- T Scott Murdison
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN); and
- Guillaume Leclercq
- ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-La-Neuve, Belgium
- Philippe Lefèvre
- ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-La-Neuve, Belgium
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN); and
25
Abstract
Sensory systems encode the environment in egocentric (e.g., eye, head, or body) reference frames, creating inherently unstable representations that shift and rotate as we move. However, it is widely speculated that the brain transforms these signals into an allocentric, gravity-centered representation of the world that is stable and independent of the observer's spatial pose. Where and how this representation may be achieved is currently unknown. Here we demonstrate that a subpopulation of neurons in the macaque caudal intraparietal area (CIP) visually encodes object tilt in nonegocentric coordinates defined relative to the gravitational vector. Neuronal responses to the tilt of a visually presented planar surface were measured with the monkey in different spatial orientations (upright and rolled left/right ear down) and then compared. This revealed a continuum of representations in which planar tilt was encoded in a gravity-centered reference frame in approximately one-tenth of the comparisons, in intermediate reference frames ranging between gravity-centered and egocentric in approximately two-tenths of the comparisons, and in an egocentric reference frame in less than half of the comparisons. Altogether, almost half of the comparisons revealed a shift in the preferred tilt and/or a gain change consistent with encoding object orientation in nonegocentric coordinates. Through neural network modeling, we further show that a purely gravity-centered representation of object tilt can be achieved directly from the population activity of CIP-like units. These results suggest that area CIP may play a key role in creating a stable, allocentric representation of the environment defined relative to an "earth-vertical" direction.
26
Sajad A, Sadeh M, Keith GP, Yan X, Wang H, Crawford JD. Visual-Motor Transformations Within Frontal Eye Fields During Head-Unrestrained Gaze Shifts in the Monkey. Cereb Cortex 2014; 25:3932-52. [PMID: 25491118 PMCID: PMC4585524 DOI: 10.1093/cercor/bhu279] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
A fundamental question in sensorimotor control concerns the transformation of spatial signals from the retina into eye and head motor commands required for accurate gaze shifts. Here, we investigated these transformations by identifying the spatial codes embedded in visually evoked and movement-related responses in the frontal eye fields (FEFs) during head-unrestrained gaze shifts. Monkeys made delayed gaze shifts to the remembered location of briefly presented visual stimuli, with delay serving to dissociate visual and movement responses. A statistical analysis of nonparametric model fits to response field data from 57 neurons (38 with visual and 49 with movement activities) eliminated most effector-specific, head-fixed, and space-fixed models, but confirmed the dominance of eye-centered codes observed in head-restrained studies. More importantly, the visual response encoded target location, whereas the movement response mainly encoded the final position of the imminent gaze shift (including gaze errors). This spatiotemporal distinction between target and gaze coding was present not only at the population level, but even at the single-cell level. We propose that an imperfect visual–motor transformation occurs during the brief memory interval between perception and action, and further transformations from the FEF's eye-centered gaze motor code to effector-specific codes in motor frames occur downstream in the subcortical areas.
Affiliation(s)
- Amirsaman Sajad
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet); Neuroscience Graduate Diploma Program; Department of Biology
- Morteza Sadeh
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet); Neuroscience Graduate Diploma Program; School of Kinesiology and Health Sciences
- Gerald P Keith
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet); Department of Psychology, York University, Toronto, ON, Canada M3J 1P3
- Xiaogang Yan
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet)
- Hongying Wang
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet)
- John Douglas Crawford
- Centre for Vision Research; Canadian Action and Perception Network (CAPnet); Neuroscience Graduate Diploma Program; Department of Biology; School of Kinesiology and Health Sciences; Department of Psychology, York University, Toronto, ON, Canada M3J 1P3
27
Reliability-dependent contributions of visual orientation cues in parietal cortex. Proc Natl Acad Sci U S A 2014; 111:18043-8. [PMID: 25427796 DOI: 10.1073/pnas.1421131111] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Creating accurate 3D representations of the world from 2D retinal images is a fundamental task for the visual system. However, the reliability of different 3D visual signals depends inherently on viewing geometry, such as how much an object is slanted in depth. Human perceptual studies have correspondingly shown that texture and binocular disparity cues for object orientation are combined according to their slant-dependent reliabilities. Where and how this cue combination occurs in the brain is currently unknown. Here, we search for neural correlates of this property in the macaque caudal intraparietal area (CIP) by measuring slant tuning curves using mixed-cue (texture + disparity) and cue-isolated (texture or disparity) planar stimuli. We find that texture cues contribute more to the mixed-cue responses of CIP neurons that prefer larger slants, consistent with theoretical and psychophysical results showing that the reliability of texture relative to disparity cues increases with slant angle. By analyzing responses to binocularly viewed texture stimuli with conflicting texture and disparity information, some cells that are sensitive to both cues when presented in isolation are found to disregard one of the cues during cue conflict. Additionally, the similarity between texture and mixed-cue responses is found to be greater when this cue conflict is eliminated by presenting the texture stimuli monocularly. The present findings demonstrate reliability-dependent contributions of visual orientation cues at the level of the CIP, thus revealing a neural correlate of this property of human visual perception.
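The slant-dependent weighting this study probes follows the standard maximum-likelihood cue-combination rule, in which each cue is weighted by its inverse variance. A minimal sketch (function and parameter names are illustrative, not taken from the paper):

```python
def combine_cues(est_texture, var_texture, est_disparity, var_disparity):
    """Maximum-likelihood combination of two orientation cues.

    Reliability is inverse variance; each cue's weight is its
    reliability relative to the total.
    """
    r_t = 1.0 / var_texture
    r_d = 1.0 / var_disparity
    w_t = r_t / (r_t + r_d)
    estimate = w_t * est_texture + (1.0 - w_t) * est_disparity
    variance = 1.0 / (r_t + r_d)  # combined cue is never less reliable
    return estimate, variance

# At large slants the texture variance shrinks, so the texture weight
# w_t grows -- matching the larger texture contribution reported for
# CIP neurons that prefer larger slants.
est, var = combine_cues(40.0, 1.0, 50.0, 3.0)
```

With a texture cue three times more reliable than disparity, the combined estimate (42.5 here) sits three-quarters of the way toward the texture estimate.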
28
Lew EYL, Chavarriaga R, Silvoni S, Millán JDR. Single trial prediction of self-paced reaching directions from EEG signals. Front Neurosci 2014; 8:222. [PMID: 25136290 PMCID: PMC4117993 DOI: 10.3389/fnins.2014.00222] [Citation(s) in RCA: 53] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2014] [Accepted: 07/07/2014] [Indexed: 11/23/2022] Open
Abstract
Early detection of movement intention could possibly minimize the delays in the activation of neuroprosthetic devices. As yet, single trial analysis using non-invasive approaches for understanding such movement preparation remains a challenging task. We studied the feasibility of predicting movement directions in self-paced upper limb center-out reaching tasks, i.e., spontaneous movements executed without an external cue that can better reflect natural motor behavior in humans. We report results of non-invasive electroencephalography (EEG) recorded from mild stroke patients and able-bodied participants. Previous studies have shown that low frequency EEG oscillations are modulated by the intent to move and therefore can be decoded prior to movement execution. Motivated by these results, we investigated whether slow cortical potentials (SCPs) preceding movement onset can be used to classify reaching directions and evaluated the performance using 5-fold cross-validation. For able-bodied subjects, we obtained an average decoding accuracy of 76% (chance level of 25%) at 62.5 ms before onset using the amplitude of on-going SCPs, with above chance level performances between 875 and 437.5 ms prior to onset. The decoding accuracy for the stroke patients was on average 47% with their paretic arms. Comparison of the decoding accuracy across different frequency ranges (i.e., SCPs, delta, theta, alpha, and gamma) yielded the best accuracy using SCPs filtered between 0.1 and 1 Hz. Across all the subjects, including stroke subjects, the best selected features were obtained mostly from the fronto-parietal regions, hence consistent with previous neurophysiological studies on arm reaching tasks. In summary, we concluded that SCPs allow the possibility of single trial decoding of reaching directions at least 312.5 ms before onset of reach.
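The cross-validated, above-chance evaluation scheme can be sketched on synthetic data; the abstract does not specify the paper's classifier or SCP feature extraction, so the nearest-centroid decoder and the simulated features below are illustrative only:

```python
import numpy as np

def nearest_centroid_cv(X, y, n_folds=5):
    """5-fold cross-validated nearest-centroid classification accuracy."""
    rng = np.random.default_rng(0)
    idx = rng.permutation(len(y))            # shuffle trials before folding
    folds = np.array_split(idx, n_folds)
    correct = 0
    for k in range(n_folds):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        classes = np.unique(y[train_idx])
        centroids = np.stack([X[train_idx][y[train_idx] == c].mean(axis=0)
                              for c in classes])
        for i in test_idx:                   # predict each held-out trial
            pred = classes[np.argmin(np.linalg.norm(centroids - X[i], axis=1))]
            correct += int(pred == y[i])
    return correct / len(y)

# Synthetic SCP-like features: 4 reach directions, 40 trials each,
# with a class-dependent mean shift (purely illustrative data).
rng = np.random.default_rng(1)
y = np.repeat(np.arange(4), 40)
X = rng.normal(size=(160, 8)) + y[:, None]
acc = nearest_centroid_cv(X, y)
```

Because every trial is scored while held out of training, accuracies well above the 25% chance level (as reported in the study) cannot come from overfitting alone.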
Affiliation(s)
- Eileen Y L Lew
- Defitech Chair in Non-Invasive Brain-Machine Interface, Center for Neuroprosthetics, School of Engineering, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Laboratory for Experimental Research on Behavior, Institute of Psychology, University of Lausanne, Lausanne, Switzerland
- Ricardo Chavarriaga
- Defitech Chair in Non-Invasive Brain-Machine Interface, Center for Neuroprosthetics, School of Engineering, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Stefano Silvoni
- Laboratory of Robotics and Kinematics, I.R.C.C.S. S. Camillo Hospital Foundation, Venice, Italy
- José Del R Millán
- Defitech Chair in Non-Invasive Brain-Machine Interface, Center for Neuroprosthetics, School of Engineering, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
29
Abstract
Reference frames are important for understanding sensory processing in the cortex. Previous work showed that vestibular heading signals in the ventral intraparietal area (VIP) are represented in body-centered coordinates. In contrast, vestibular heading tuning in the medial superior temporal area (MSTd) is approximately head centered. We considered the hypothesis that visual heading signals (from optic flow) in VIP might also be transformed into a body-centered representation, unlike visual heading tuning in MSTd, which is approximately eye centered. We distinguished among eye-centered, head-centered, and body-centered spatial reference frames by systematically varying both eye and head positions while rhesus monkeys viewed optic flow stimuli depicting various headings. We found that heading tuning of VIP neurons based on optic flow generally shifted with eye position, indicating an eye-centered spatial reference frame. This is similar to the representation of visual heading signals in MSTd, but contrasts sharply with the body-centered representation of vestibular heading signals in VIP. These findings demonstrate a clear dissociation between the spatial reference frames of visual and vestibular signals in VIP, and emphasize that frames of reference for neurons in parietal cortex can depend on the type of sensory stimulation.
30
Khan AZ, Pisella L, Blohm G. Causal evidence for posterior parietal cortex involvement in visual-to-motor transformations of reach targets. Cortex 2013; 49:2439-48. [DOI: 10.1016/j.cortex.2012.12.004] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2012] [Revised: 08/30/2012] [Accepted: 12/04/2012] [Indexed: 11/25/2022]
31
Monkeys in space: Primate neural data suggest volumetric representations. Behav Brain Sci 2013; 36:555-6; discussion 571-87. [DOI: 10.1017/s0140525x13000447] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
The target article does not consider neural data on primate spatial representations, which we suggest provide grounds for believing that navigational space may be three-dimensional rather than quasi–two-dimensional. Furthermore, we question the authors' interpretation of rat neurophysiological data as indicating that the vertical dimension may be encoded in a neural structure separate from the two horizontal dimensions.
32
Chang SWC. Coordinate transformation approach to social interactions. Front Neurosci 2013; 7:147. [PMID: 23970850 PMCID: PMC3748418 DOI: 10.3389/fnins.2013.00147] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2013] [Accepted: 08/01/2013] [Indexed: 01/25/2023] Open
Abstract
A coordinate transformation framework for understanding how neurons compute sensorimotor behaviors has generated significant advances toward our understanding of basic brain function. This influential scaffold focuses on neuronal encoding of spatial information represented in different coordinate systems (e.g., eye-centered, hand-centered) and how multiple brain regions partake in transforming these signals in order to ultimately generate a motor output. A powerful analogy can be drawn from the coordinate transformation framework to better elucidate how the nervous system computes cognitive variables for social behavior. Of particular relevance is how the brain represents information with respect to oneself and other individuals, such as in reward outcome assignment during social exchanges, in order to influence social decisions. In this article, I outline how the coordinate transformation framework can help guide our understanding of neural computations resulting in social interactions. Implications for numerous psychiatric disorders with impaired representations of self and others are also discussed.
Affiliation(s)
- Steve W C Chang
- Center for Cognitive Neuroscience, Duke Institute for Brain Sciences, Duke University, Durham, NC, USA; Department of Psychology, Yale University, New Haven, CT, USA
33
Hadjidimitrakis K, Bertozzi F, Breveglieri R, Fattori P, Galletti C. Body-centered, mixed, but not hand-centered coding of visual targets in the medial posterior parietal cortex during reaches in 3D space. Cereb Cortex 2013; 24:3209-20. [PMID: 23853212 DOI: 10.1093/cercor/bht181] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
The frames of reference used by neurons in posterior parietal cortex (PPC) to encode spatial locations during arm reaching movements are a debated topic in modern neurophysiology. Traditionally, target location, encoded in a retinocentric reference frame (RF) in caudal PPC, was assumed to be serially transformed to body-centered and then hand-centered coordinates rostrally. However, recent studies suggest that these transformations occur within a single area. The caudal PPC area V6A has been shown to represent reach targets in eye-centered, body-centered, and a combination of both RFs, but the presence of hand-centered coding has not been yet investigated. To examine this issue, 141 single neurons were recorded from V6A in 2 Macaca fascicularis monkeys while they performed a foveated reaching task in darkness. The targets were presented at different distances and lateralities from the body and were reached from initial hand positions located at different depths. Most V6A cells used body-centered, or mixed body- and hand-centered coordinates. Only a few neurons used pure hand-centered coordinates, thus clearly distinguishing V6A from nearby PPC regions. Our findings support the view of a gradual RF transformation in PPC and also highlight the impact of mixed frames of reference.
Affiliation(s)
- K Hadjidimitrakis
- Department of Human and General Physiology; Department of Pharmacy and Biotechnology, University of Bologna, Bologna 40126, Italy
- F Bertozzi
- Department of Human and General Physiology; Department of Pharmacy and Biotechnology, University of Bologna, Bologna 40126, Italy
- R Breveglieri
- Department of Human and General Physiology; Department of Pharmacy and Biotechnology, University of Bologna, Bologna 40126, Italy
- P Fattori
- Department of Human and General Physiology; Department of Pharmacy and Biotechnology, University of Bologna, Bologna 40126, Italy
- C Galletti
- Department of Human and General Physiology; Department of Pharmacy and Biotechnology, University of Bologna, Bologna 40126, Italy
34
Galeazzi JM, Mender BMW, Paredes M, Tromans JM, Evans BD, Minini L, Stringer SM. A self-organizing model of the visual development of hand-centred representations. PLoS One 2013; 8:e66272. [PMID: 23799086 PMCID: PMC3683017 DOI: 10.1371/journal.pone.0066272] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2012] [Accepted: 05/02/2013] [Indexed: 11/19/2022] Open
Abstract
We show how hand-centred visual representations could develop in the primate posterior parietal and premotor cortices during visually guided learning in a self-organizing neural network model. The model incorporates trace learning in the feed-forward synaptic connections between successive neuronal layers. Trace learning encourages neurons to learn to respond to input images that tend to occur close together in time. We assume that sequences of eye movements are performed around individual scenes containing a fixed hand-object configuration. Trace learning will then encourage individual cells to learn to respond to particular hand-object configurations across different retinal locations. The plausibility of this hypothesis is demonstrated in computer simulations.
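The trace rule at the heart of the model can be written in a few lines. A minimal single-unit sketch (parameter values and names are illustrative, not the paper's implementation): the postsynaptic trace is a running average of recent activity, so inputs occurring close together in time, such as the same hand-object configuration seen at different retinal locations across a saccade sequence, strengthen onto the same output cell.

```python
def trace_learning_step(w, x, y_trace_prev, alpha=0.1, eta=0.8):
    """One step of a trace learning rule for a single unit.

    w            -- current feed-forward weights
    x            -- current input vector
    y_trace_prev -- trace (running average) of the unit's past activity
    alpha, eta   -- learning rate and trace decay (illustrative values)
    """
    y = max(0.0, sum(wi * xi for wi, xi in zip(w, x)))   # rectified response
    y_trace = (1.0 - eta) * y + eta * y_trace_prev       # temporal trace
    w_new = [wi + alpha * y_trace * xi for wi, xi in zip(w, x)]
    return w_new, y_trace

w_new, tr = trace_learning_step([0.5, 0.5], [1.0, 0.0], 0.0)
```

Because the update uses the trace rather than the instantaneous response, an input presented just after a strong response is still potentiated, which is what binds temporally adjacent retinal views of one scene together.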
Affiliation(s)
- Juan M Galeazzi
- Department of Experimental Psychology, University of Oxford, Oxford, Oxfordshire, United Kingdom.
35
Abstract
Recent blood oxygenation level-dependent (BOLD) imaging work has suggested flexible coding frames for reach targets in human posterior parietal cortex, with a gaze-centered reference frame for visually guided reaches and a body-centered frame for proprioceptive reaches. However, BOLD activity, which reflects overall population activity, is insensitive to heterogeneous responses at the neuronal level and temporal dynamics between neurons. Neurons could synchronize in different frequency bands to form assemblies operating in different reference frames. Here we assessed the reference frames of oscillatory activity in parietal cortex during reach planning to nonvisible tactile stimuli. Under continuous recording of magneto-encephalographic data, subjects fixated either to the left or right of the body midline, while a tactile stimulus was presented to a nonvisible fingertip, located either to the left or right of gaze. After a delay, they had to reach toward the remembered stimulus location with the other hand. Our results show body-centered and gaze-centered reference frames underlying the power modulations in specific frequency bands. Whereas beta-band activity (18-30 Hz) in parietal regions showed body-centered spatial selectivity, the high gamma band (>60 Hz) demonstrated a transient remapping into gaze-centered coordinates in parietal and extrastriate visual areas. This gaze-centered coding was sustained in the low gamma (<60 Hz) and alpha (∼10 Hz) bands. Our results show that oscillating subpopulations encode remembered tactile targets for reaches relative to gaze, even though neither the sensory nor the motor output processes operate in this frame. We discuss these findings in the light of flexible control mechanisms across modalities and effectors.
36
Monteon JA, Wang H, Martinez-Trujillo J, Crawford JD. Frames of reference for eye-head gaze shifts evoked during frontal eye field stimulation. Eur J Neurosci 2013; 37:1754-65. [PMID: 23489744 DOI: 10.1111/ejn.12175] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2010] [Revised: 01/14/2013] [Accepted: 01/30/2013] [Indexed: 11/29/2022]
Abstract
The frontal eye field (FEF), in the prefrontal cortex, participates in the transformation of visual signals into saccade motor commands and in eye-head gaze control. The FEF is thought to show eye-fixed visual codes in head-restrained monkeys, but it is not known how it transforms these inputs into spatial codes for head-unrestrained gaze commands. Here, we tested if the FEF influences desired gaze commands within a simple eye-fixed frame, like the superior colliculus (SC), or in more complex egocentric frames like the supplementary eye fields (SEFs). We electrically stimulated 95 FEF sites in two head-unrestrained monkeys to evoke 3D eye-head gaze shifts and then mathematically rotated these trajectories into various reference frames. In theory, each stimulation site should specify a specific spatial goal when the evoked gaze shifts are plotted in the appropriate frame. We found that these motor output frames varied site by site, mainly within the eye-to-head frame continuum. Thus, consistent with the intermediate placement of the FEF within the high-level circuits for gaze control, its stimulation-evoked output showed an intermediate trend between the multiple reference frame codes observed in SEF-evoked gaze shifts and the simpler eye-fixed reference frame observed in SC-evoked movements. These results suggest that, although the SC, FEF and SEF carry eye-fixed information at the level of their unit response fields, this information is transformed differently in their output projections to the eye and head controllers.
Affiliation(s)
- Jachin A Monteon
- Centre for Vision Research, York University, Toronto, ON, Canada
37
Leclercq G, Lefèvre P, Blohm G. 3D kinematics using dual quaternions: theory and applications in neuroscience. Front Behav Neurosci 2013; 7:7. [PMID: 23443667 PMCID: PMC3576712 DOI: 10.3389/fnbeh.2013.00007] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2012] [Accepted: 01/28/2013] [Indexed: 12/02/2022] Open
Abstract
In behavioral neuroscience, many experiments are developed in 1 or 2 spatial dimensions, but when scientists tackle problems in 3 dimensions (3D), they often face new challenges. Results obtained for lower dimensions are not always extendable to 3D. In motor planning of eye, gaze or arm movements, or in sensorimotor transformation problems, the 3D kinematics of external objects (stimuli) or internal ones (body parts) must often be considered: how can the 3D position and orientation of these objects be described and linked together? We describe how dual quaternions provide a convenient way to describe 3D kinematics for position only (point transformation) or for combined position and orientation (through line transformation), easily modeling rotations, translations, screw motions, or combinations of these. We also derive expressions for the velocities of points and lines as well as the transformation velocities. We then apply these tools to a motor planning task for manual tracking and to the modeling of the forward and inverse kinematics of a seven-dof three-link arm, demonstrating the usefulness of dual quaternions as a tool for building models in these kinds of applications.
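The point transformation the paper describes can be sketched with plain tuples. This is a minimal illustration of the formalism, not the authors' code: quaternions are (w, x, y, z), and a rigid motion is a dual quaternion with real part q_r (the rotation) and dual part q_d = ½ t ⊗ q_r (encoding the translation t).

```python
def qmul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qconj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def dq_from_rt(q_rot, t):
    """Dual quaternion (real, dual) for rotation q_rot plus translation t."""
    q_d = tuple(0.5 * c for c in qmul((0.0, *t), q_rot))
    return q_rot, q_d

def dq_transform_point(dq, p):
    """Rigid point transformation: rotate p by the real part, then translate."""
    q_r, q_d = dq
    rotated = qmul(qmul(q_r, (0.0, *p)), qconj(q_r))[1:]
    t = tuple(2.0 * c for c in qmul(q_d, qconj(q_r))[1:])  # recover translation
    return tuple(r + ti for r, ti in zip(rotated, t))

# Example: 90 degree rotation about z, then translation (1, 2, 3).
c = 0.5 ** 0.5
dq = dq_from_rt((c, 0.0, 0.0, c), (1.0, 2.0, 3.0))
p_new = dq_transform_point(dq, (1.0, 0.0, 0.0))  # approx (1.0, 3.0, 3.0)
```

Composing two motions is just the (dual) quaternion product, which is what makes the formalism convenient for chaining eye, head, and arm transformations.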
Affiliation(s)
- Guillaume Leclercq
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics, Université Catholique de Louvain, Louvain-la-Neuve, Belgium; Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
38
Abstract
Competing models of sensorimotor computation predict different topological constraints in the brain. Some models propose population coding of particular reference frames in anatomically distinct nodes, whereas others require no such dedicated subpopulations and instead predict that regions will simultaneously code in multiple, intermediate, reference frames. Current empirical evidence is conflicting, partly due to difficulties involved in identifying underlying reference frames. Here, we independently varied the locations of hand, gaze, and target over many positions while recording from the dorsal aspect of parietal area 5. We find that the target is represented in a predominantly hand-centered reference frame here, contrasting with the relative code seen in dorsal premotor cortex and the mostly gaze-centered reference frame in the parietal reach region. This supports the hypothesis that different nodes of the sensorimotor circuit contain distinct and systematic representations, and this constrains the types of computational model that are neurobiologically relevant.
Affiliation(s)
- Lindsay R Bremner
- Division of Biology, California Institute of Technology, Pasadena, CA 91125, USA.
39
Orban de Xivry JJ, Ahmadi-Pajouh MA, Harran MD, Salimpour Y, Shadmehr R. Changes in corticospinal excitability during reach adaptation in force fields. J Neurophysiol 2012; 109:124-36. [PMID: 23034365 DOI: 10.1152/jn.00785.2012] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Both abrupt and gradually imposed perturbations produce adaptive changes in motor output, but the neural basis of adaptation may be distinct. Here, we assessed the state of the primary motor cortex (M1) and the corticospinal network during adaptation by measuring motor-evoked potentials (MEPs) before reach onset using transcranial magnetic stimulation of M1. Subjects reached in a force field in a schedule in which the field was introduced either abruptly or gradually over many trials. In both groups, by the end of training, muscles that countered the perturbation in a given direction increased their activity during the reach (labeled as the on direction for each muscle). In the abrupt group, in the period before the reach toward the on direction, MEPs in these muscles also increased, suggesting a direction-specific increase in the excitability of the corticospinal network. However, in the gradual group, these MEP changes were missing. After training, there was a period of washout. The MEPs did not return to baseline. Rather, in the abrupt group, off direction MEPs increased to match on direction MEPs. Therefore, we observed changes in corticospinal excitability in the abrupt but not gradual condition. Abrupt training includes the repetition of motor commands, and repetition may be the key factor that produces this plasticity. Furthermore, washout did not return MEPs to baseline, suggesting that washout engaged a new network that masked but did not erase the effects of previous adaptation. Abrupt but not gradual training appears to induce changes in M1 and/or corticospinal networks.
40
Simulating the cortical 3D visuomotor transformation of reach depth. PLoS One 2012; 7:e41241. [PMID: 22815979 PMCID: PMC3397995 DOI: 10.1371/journal.pone.0041241] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2011] [Accepted: 06/22/2012] [Indexed: 11/22/2022] Open
Abstract
We effortlessly perform reach movements to objects in different directions and depths. However, how networks of cortical neurons compute reach depth from binocular visual inputs remains largely unknown. To bridge the gap between behavior and neurophysiology, we trained a feed-forward artificial neural network to uncover potential mechanisms that might underlie the 3D transformation of reach depth. Our physiologically-inspired 4-layer network receives distributed 3D visual inputs (1st layer) along with eye, head and vergence signals. The desired motor plan was coded in a population (3rd layer) that we read out (4th layer) using an optimal linear estimator. After training, our network was able to reproduce all known single-unit recording evidence on depth coding in the parietal cortex. Network analyses predict the presence of eye/head and vergence changes of depth tuning, pointing towards a gain-modulation mechanism of depth transformation. In addition, reach depth was computed directly from eye-centered (relative) visual distances, without explicit absolute depth coding. We suggest that these effects should be observable in parietal and pre-motor areas.
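The gain-modulation mechanism the network analysis points to can be illustrated with a single hidden unit: Gaussian tuning to eye-centered (relative) depth, multiplicatively scaled by an extraretinal vergence signal. The tuning shape and all parameter values below are hypothetical, chosen only to make the mechanism concrete.

```python
import math

def depth_unit_response(retinal_disparity, vergence,
                        pref_disparity=0.5, sigma=0.3, gain_slope=0.8):
    """One gain-modulated unit (illustrative parameters).

    Gaussian tuning to eye-centered disparity, multiplied by a linear
    gain on the vergence signal -- the response shape is unchanged by
    vergence, only its amplitude.
    """
    tuning = math.exp(-(retinal_disparity - pref_disparity) ** 2
                      / (2.0 * sigma ** 2))
    gain = 1.0 + gain_slope * vergence   # extraretinal gain factor
    return tuning * gain

base = depth_unit_response(0.5, 0.0)   # preferred disparity, zero vergence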
41
Vesia M, Crawford JD. Specialization of reach function in human posterior parietal cortex. Exp Brain Res 2012; 221:1-18. [DOI: 10.1007/s00221-012-3158-9] [Citation(s) in RCA: 108] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2011] [Accepted: 06/21/2012] [Indexed: 10/28/2022]
42
De Meyer K, Spratling MW. A Model of Partial Reference Frame Transforms Through Pooling of Gain-Modulated Responses. Cereb Cortex 2012; 23:1230-9. [DOI: 10.1093/cercor/bhs117] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
43
Intrinsic reference frames of superior colliculus visuomotor receptive fields during head-unrestrained gaze shifts. J Neurosci 2012; 31:18313-26. [PMID: 22171035 DOI: 10.1523/jneurosci.0990-11.2011] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
A sensorimotor neuron's receptive field and its frame of reference are easily conflated within the natural variability of spatial behavior. Here, we capitalized on such natural variations in 3-D eye and head positions during head-unrestrained gaze shifts to visual targets in two monkeys to determine whether intermediate/deep layer superior colliculus (SC) receptive fields code visual targets or gaze kinematics, within four different frames of reference. Visuomotor receptive fields were either characterized during gaze shifts to visual targets from a central fixation position (32 U) or were partially characterized from each of three initial fixation points (31 U). Natural variations of initial 3-D gaze and head orientation (including torsion) provided spatial separation between four different coordinate frame models (space, head, eye, fixed-vector relative to fixation), whereas natural saccade errors provided spatial separation between target and gaze positions. Using a new statistical method based on predictive sum-of-squares, we found that in our population of 63 neurons (1) receptive field fits to target positions were significantly better than fits to actual gaze shift locations and (2) eye-centered models gave significantly better fits than the head or space frame. An intermediate frames analysis confirmed that individual neuron fits were distributed in target-in-eye coordinates. Gaze position "gain" effects with the spatial tuning required for a 3-D reference frame transformation were significant in 23% (7/31) of neurons tested. We conclude that the SC primarily represents gaze targets relative to the eye but also carries early signatures of the 3-D sensorimotor transformation.
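The predictive sum-of-squares criterion used here for model comparison can be sketched as a leave-one-out statistic: each data point is predicted from a fit to all the other points, so models that merely overfit score poorly. The paper's receptive-field fits are nonparametric; plain linear least squares below is only for illustration.

```python
import numpy as np

def press(X, y):
    """Predictive (predicted residual) sum of squares.

    For each point i, fit the model to all other points and sum the
    squared error of the held-out prediction.
    """
    n = len(y)
    total = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        total += float(y[i] - X[i] @ beta) ** 2
    return total

# Comparing candidate spatial models reduces to comparing their PRESS
# scores on the same responses (toy data: y is exactly linear in x).
x = np.arange(5.0)
y = 2.0 * x + 1.0
line_model = np.column_stack([x, np.ones(5)])   # slope + intercept
flat_model = np.ones((5, 1))                    # intercept only
```

Here the line model predicts held-out points essentially perfectly, while the intercept-only model accrues a large PRESS, so the criterion selects the correct model.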
|
44
|
Distinct motor plans form and retrieve distinct motor memories for physically identical movements. Curr Biol 2012; 22:432-6. [PMID: 22326201 DOI: 10.1016/j.cub.2012.01.042] [Citation(s) in RCA: 71] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2011] [Revised: 12/01/2011] [Accepted: 01/20/2012] [Indexed: 11/20/2022]
Abstract
We can adapt movements to a novel dynamic environment (e.g., tool use, microgravity, and perturbation) by acquiring an internal model of the dynamics. Although multiple environments can be learned simultaneously if each environment is experienced with different limb movement kinematics, it remains controversial whether multiple internal models for a particular movement can be learned and flexibly retrieved according to behavioral contexts. Here, we address this issue by using a novel visuomotor task. While participants reached to each of two targets located at a clockwise or counter-clockwise position, a gradually increasing visual rotation was applied in the clockwise or counter-clockwise direction, respectively, to the on-screen cursor representing the unseen hand position. This procedure implicitly led participants to perform physically identical pointing movements irrespective of their intentions (i.e., movement plans) to move their hand toward two distinct visual targets. Surprisingly, if each identical movement was executed according to a distinct movement plan, participants could readily adapt these movements to two opposing force fields simultaneously. The results demonstrate that multiple motor memories can be learned and flexibly retrieved, even for physically identical movements, according to distinct motor plans in a visual space.
|
45
|
Chang SWC, Snyder LH. The representations of reach endpoints in posterior parietal cortex depend on which hand does the reaching. J Neurophysiol 2012; 107:2352-65. [PMID: 22298831 DOI: 10.1152/jn.00852.2011] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Neurons in the parietal reach region (PRR) have been implicated in the sensory-to-motor transformation required for reaching toward visually defined targets. The neurons in each cortical hemisphere might be specifically involved in planning movements of just one limb, or the PRR might code reach endpoints generically, independent of which limb will actually move. Previous work has shown that the preferred directions of PRR neurons are similar for right and left limb movements but that the amplitude of modulation may vary greatly. We now test the hypothesis that frames of reference and eye and hand gain field modulations will, like preferred directions, be independent of which hand moves. This was not the case. Many neurons show clear differences in both the frame of reference as well as in direction and strength of gain field modulations, depending on which hand is used to reach. The results suggest that the information that is conveyed from the PRR to areas closer to the motor output (the readout from the PRR) is different for each limb and that individual PRR neurons contribute either to control of the contralateral limb alone or to bimanual control.
Affiliation(s)
- Steve W C Chang
- Center for Cognitive Neuroscience, Department of Neurobiology, Duke University Medical Center, Durham, NC 27701, USA.
|
46
|
Crawford JD, Henriques DYP, Medendorp WP. Three-dimensional transformations for goal-directed action. Annu Rev Neurosci 2011; 34:309-31. [PMID: 21456958 DOI: 10.1146/annurev-neuro-061010-113749] [Citation(s) in RCA: 124] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Much of the central nervous system is involved in visuomotor transformations for goal-directed gaze and reach movements. These transformations are often described in terms of stimulus location, gaze fixation, and reach endpoints, as viewed through the lens of translational geometry. Here, we argue that the intrinsic (primarily rotational) 3-D geometry of the eye-head-reach systems determines the spatial relationship between extrinsic goals and effector commands, and therefore the required transformations. This approach provides a common theoretical framework for understanding both gaze and reach control. Combined with an assessment of the behavioral, neurophysiological, imaging, and neuropsychological literature, this framework leads us to conclude that (a) the internal representation and updating of visual goals are dominated by gaze-centered mechanisms, but (b) these representations must then be transformed as a function of eye and head orientation signals into effector-specific 3-D movement commands.
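The review's central point, that eye orientation (including torsion) enters the visuomotor transformation as a rotation rather than a translation, can be shown with a toy calculation. This is a hedged sketch: the coordinate convention (x-axis along the line of sight) and the 10° values are illustrative assumptions, not the review's own example.

```python
import numpy as np

def rot_x(a):
    """Rotation about the x-axis (here, torsion about the line of sight), radians."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

# Target direction in eye (retinal) coordinates: 10 deg up from the fovea.
up = np.deg2rad(10)
target_eye = np.array([np.cos(up), 0.0, np.sin(up)])

# With 10 deg of ocular torsion, the head-centred direction is the
# eye-to-head rotation applied to the retinal vector.
target_head = rot_x(np.deg2rad(10)) @ target_eye

# A translational scheme that ignores torsion would leave the vector on the
# vertical meridian (y = 0); the rotational geometry moves it off.
print(target_head)
```

The nonzero horizontal component is exactly the kind of error a purely translational ("vector addition") model of the transformation would make whenever torsion is nonzero.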
Affiliation(s)
- J Douglas Crawford
- York Centre for Vision Research, Canadian Action and Perception Network, and Departments of Psychology, Toronto, Ontario, Canada, M3J 1P3.
|
47
|
Gaze-centered spatial updating of reach targets across different memory delays. Vision Res 2011; 51:890-7. [PMID: 21219923 DOI: 10.1016/j.visres.2010.12.015] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2010] [Revised: 11/26/2010] [Accepted: 12/22/2010] [Indexed: 11/22/2022]
Abstract
Previous research has demonstrated that remembered targets for reaching are coded and updated relative to gaze, at least when the reaching movement is made soon after the target has been extinguished. In this study, we tested whether reach targets are still updated relative to gaze after different time delays. Reaching endpoints systematically varied as a function of gaze relative to target irrespective of whether the action was executed immediately or after a delay of 5 s, 8 s or 12 s. The present results suggest that memory traces for reach targets continue to be coded in a gaze-dependent reference frame if no external cues are present.
|
48
|
Chen Y, Byrne P, Crawford JD. Time course of allocentric decay, egocentric decay, and allocentric-to-egocentric conversion in memory-guided reach. Neuropsychologia 2011; 49:49-60. [DOI: 10.1016/j.neuropsychologia.2010.10.031] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2010] [Revised: 10/18/2010] [Accepted: 10/29/2010] [Indexed: 10/18/2022]
|
49
|
Sabes PN. Sensory integration for reaching: models of optimality in the context of behavior and the underlying neural circuits. PROGRESS IN BRAIN RESEARCH 2011; 191:195-209. [PMID: 21741553 DOI: 10.1016/b978-0-444-53752-2.00004-7] [Citation(s) in RCA: 49] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Although multisensory integration has been well modeled at the behavioral level, the link between these behavioral models and the underlying neural circuits is still not clear. This gap is even greater for the problem of sensory integration during movement planning and execution. The difficulty lies in applying simple models of sensory integration to the complex computations that are required for movement control and to the large networks of brain areas that perform these computations. Here I review psychophysical, computational, and physiological work on multisensory integration during movement planning, with an emphasis on goal-directed reaching. I argue that sensory transformations must play a central role in any modeling effort. In particular, the statistical properties of these transformations factor heavily into the way in which downstream signals are combined. As a result, our models of optimal integration are only expected to apply "locally," that is, independently for each brain area. I suggest that local optimality can be reconciled with globally optimal behavior if one views the collection of parietal sensorimotor areas not as a set of task-specific domains, but rather as a palette of complex, sensorimotor representations that are flexibly combined to drive downstream activity and behavior.
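The behavioural-level models of optimal integration that this review builds on reduce, in the simplest case, to reliability-weighted averaging of two cues. A minimal sketch with illustrative (assumed) numbers:

```python
# Maximum-likelihood (reliability-weighted) combination of a visual and a
# proprioceptive estimate of hand position; values are illustrative only.
x_vis, var_vis = 10.0, 1.0      # visual estimate (deg) and its variance
x_prop, var_prop = 14.0, 4.0    # proprioceptive estimate and its variance

w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_prop)   # weight of vision
x_hat = w_vis * x_vis + (1 - w_vis) * x_prop           # combined estimate
var_hat = 1 / (1 / var_vis + 1 / var_prop)             # combined variance

print(w_vis, x_hat, var_hat)    # 0.8, 10.8, 0.8
```

The review's argument is that this weighting can only be expected to hold "locally", per brain area, because the transformations feeding each area reshape the statistics of the signals being combined.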
Affiliation(s)
- Philip N Sabes
- Department of Physiology, Keck Center for Integrative Neuroscience, University of California, San Francisco, CA, USA.
|
50
|
Burns JK, Blohm G. Multi-sensory weights depend on contextual noise in reference frame transformations. Front Hum Neurosci 2010; 4:221. [PMID: 21165177 PMCID: PMC3002464 DOI: 10.3389/fnhum.2010.00221] [Citation(s) in RCA: 44] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2010] [Accepted: 11/04/2010] [Indexed: 11/19/2022] Open
Abstract
During reach planning, we integrate multiple senses to estimate the location of the hand and the target, which is used to generate a movement. Visual and proprioceptive information are combined to determine the location of the hand. The goal of this study was to investigate whether multi-sensory integration is affected by extraretinal signals, such as head roll. It is believed that a coordinate matching transformation is required before vision and proprioception can be combined because proprioceptive and visual sensory reference frames do not generally align. This transformation uses extraretinal signals about current head roll to rotate proprioceptive signals into visual coordinates. Since head roll is an estimated sensory signal with noise, this head roll dependency of the reference frame transformation should introduce additional noise to the transformed signal, reducing its reliability and thus its weight in the multi-sensory integration. To investigate the role of noisy reference frame transformations on multi-sensory weighting, we developed a novel probabilistic (Bayesian) multi-sensory integration model (based on Sober and Sabes, 2003) that included explicit (noisy) reference frame transformations. We then performed a reaching experiment to test the model's predictions. To test for head-roll-dependent multi-sensory integration, we introduced conflicts between viewed and actual hand position and measured reach errors. Reach analysis revealed that eccentric head roll orientations led to an increase of movement variability, consistent with our model. We further found that the weighting of vision and proprioception depended on head roll, which we interpret as being a result of signal-dependent noise. Thus, the brain has online knowledge of the statistics of its internal sensory representations. In summary, we show that sensory reliability is used in a context-dependent way to adjust multi-sensory integration weights for reaching.
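The paper's core prediction, that noise added by the head-roll-dependent reference frame transformation down-weights the transformed cue, follows directly from reliability weighting. A sketch with assumed, illustrative variances (not the paper's fitted values):

```python
# Rotating the proprioceptive estimate into visual coordinates through a
# noisy head-roll signal adds variance, which lowers proprioception's
# weight in the fusion. All numbers are illustrative assumptions.
var_vis = 1.0                      # visual variance (visual coordinates)
var_prop = 1.5                     # proprioceptive variance, own frame

def prop_weight(var_transform):
    v = var_prop + var_transform   # transformation noise inflates variance
    return (1 / v) / (1 / v + 1 / var_vis)

w_upright = prop_weight(0.0)       # head upright: clean transformation
w_rolled = prop_weight(2.0)        # eccentric head roll: noisy transformation
print(w_upright > w_rolled)        # vision is up-weighted under head roll
```

This is the qualitative pattern the reach experiment confirmed: more eccentric head roll, more variable reaches, and a shift of weight toward vision.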
|