1
Sergeant-Perthuis G, Ruet N, Ognibene D, Tisserand Y, Williford K, Rudrauf D. Action of the Euclidean versus projective group on an agent's internal space in curiosity driven exploration. Biol Cybern 2025; 119:4. [PMID: 39820849 PMCID: PMC11742296 DOI: 10.1007/s00422-024-01001-1] [Received: 12/30/2023] [Accepted: 10/18/2024] [Indexed: 01/19/2025]
Abstract
According to the Projective Consciousness Model (PCM), in human spatial awareness, 3-dimensional projective geometry structures information integration and action planning through perspective taking within an internal representation space. The way different perspectives are related to and transform a world model defines a specific perception and imagination scheme. In mathematics, such a collection of transformations corresponds to a 'group', whose 'actions' characterize the geometry of a space. Imbuing world models with a group structure may capture different agents' spatial awareness and affordance schemes. We used group action as a special class of policies for perspective-dependent control. We explored how such a geometric structure impacts agents' behaviors, comparing how the Euclidean versus projective groups act on epistemic value in active inference, drive curiosity, and exploration. We formally demonstrate and simulate how the groups induce distinct behaviors in a simple search task. The projective group's nonlinear magnification of information transformed epistemic value according to the choice of frame, generating behaviors of approach toward objects with uncertain locations due to limited sampling. The Euclidean group had no effect on epistemic value: no action was better than the initial idle state. In structuring a priori an agent's internal representation, we show how geometry can play a key role in information integration and action planning. Our results add further support to the PCM.
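The contrast between the two group actions can be illustrated with a toy computation (not the authors' model; the transforms, point coordinates, and the 2D setting are arbitrary choices for illustration): a Euclidean action, being rigid, preserves distances between internal-space points, while a projective action rescales them nonlinearly through the homogeneous divide, which is the kind of frame-dependent magnification the abstract describes.

```python
import numpy as np

def euclidean_act(R, t, p):
    """Apply a rigid (Euclidean) transform: rotation R followed by translation t."""
    return R @ p + t

def projective_act(H, p):
    """Apply a projective transform H in homogeneous coordinates."""
    ph = np.append(p, 1.0)     # lift to homogeneous coordinates
    q = H @ ph
    return q[:-1] / q[-1]      # divide out the projective scale

# Two points one unit apart
a, b = np.array([0.0, 0.0]), np.array([1.0, 0.0])

# Euclidean action: 90-degree rotation plus a translation
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([2.0, 3.0])
ea, eb = euclidean_act(R, t, a), euclidean_act(R, t, b)

# Projective action: a homography whose bottom row is not (0, 0, 1)
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.5, 0.0, 1.0]])
pa, pb = projective_act(H, a), projective_act(H, b)

print(np.linalg.norm(eb - ea))  # 1.0: distances are invariant under the Euclidean group
print(np.linalg.norm(pb - pa))  # 2/3: the projective group rescales distances nonlinearly
```

Here the Euclidean pair keeps the two points exactly one unit apart, while this homography maps the unit segment to one of length 2/3; placing the same segment elsewhere in the plane would change that factor, a frame dependence with no Euclidean counterpart.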
Affiliation(s)
- Nils Ruet
- CIAMS, Université Paris-Saclay, Orsay & Université d'Orléans, Orléans, France
- Dimitri Ognibene
- Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, 20126 Milan, Italy
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Kenneth Williford
- Department of Philosophy and Humanities, The University of Texas at Arlington, Arlington, USA
- David Rudrauf
- CIAMS, Université Paris-Saclay, Orsay & Université d'Orléans, Orléans, France
2
Bharmauria V, Seo S, Crawford JD. Neural integration of egocentric and allocentric visual cues in the gaze system. J Neurophysiol 2025; 133:109-120. [PMID: 39584726 DOI: 10.1152/jn.00498.2024] [Received: 10/24/2024] [Revised: 11/14/2024] [Accepted: 11/16/2024] [Indexed: 11/26/2024]
Abstract
A fundamental question in neuroscience is how the brain integrates egocentric (body-centered) and allocentric (landmark-centered) visual cues, but for many years this question was ignored in sensorimotor studies. This changed in recent behavioral experiments, but the underlying physiology of ego/allocentric integration remained largely unstudied. The specific goal of this review is to explain how prefrontal neurons integrate eye-centered and landmark-centered visual codes for optimal gaze behavior. First, we briefly review the whole brain/behavioral mechanisms for ego/allocentric integration in the human and summarize egocentric coding mechanisms in the primate gaze system. We then focus in more depth on cellular mechanisms for ego/allocentric coding in the frontal and supplementary eye fields. We first explain how prefrontal visual responses integrate eye-centered target and landmark codes to produce a transformation toward landmark-centered coordinates. Next, we describe what happens when a landmark shifts during the delay between seeing and acquiring a remembered target, initially resulting in independently coexisting ego/allocentric memory codes. We then describe how these codes are reintegrated in the motor burst for the gaze shift. Deep network simulations suggest that these properties emerge spontaneously for optimal gaze behavior. Finally, we synthesize these observations and relate them to normal brain function through a simplified conceptual model. Together, these results show that integration of visuospatial features continues well beyond visual cortex and suggest a general cellular mechanism for goal-directed visual behavior.
Affiliation(s)
- Vishal Bharmauria
- The Tampa Human Neurophysiology Lab & Department of Neurosurgery and Brain Repair, Morsani College of Medicine, University of South Florida, Tampa, Florida, United States
- York Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario, Canada
- Serah Seo
- York Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario, Canada
- Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- J Douglas Crawford
- York Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario, Canada
- Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, Ontario, Canada
3
Caceres AH, Barany DA, Dundon NM, Smith J, Marneweck M. Neural Encoding of Direction and Distance across Reference Frames in Visually Guided Reaching. eNeuro 2024; 11:ENEURO.0405-24.2024. [PMID: 39557568 PMCID: PMC11617137 DOI: 10.1523/eneuro.0405-24.2024] [Received: 09/16/2024] [Revised: 10/19/2024] [Accepted: 11/06/2024] [Indexed: 11/20/2024]
Abstract
Goal-directed actions require transforming sensory information into motor plans defined across multiple parameters and reference frames. Substantial evidence supports the encoding of target direction in gaze- and body-centered coordinates within parietal and premotor regions. However, how the brain encodes the equally critical parameter of target distance remains less understood. Here, using Bayesian pattern component modeling of fMRI data during a delayed reach-to-target task, we dissociated the neural encoding of both target direction and the relative distances between target, gaze, and hand at early and late stages of motor planning. This approach revealed independent representations of direction and distance along the human dorsomedial reach pathway. During early planning, most premotor and superior parietal areas encoded a target's distance in single or multiple reference frames and encoded its direction. In contrast, distance encoding was magnified in gaze- and body-centric reference frames during late planning. These results emphasize a flexible and efficient human central nervous system that achieves goals by remapping sensory information related to multiple parameters, such as distance and direction, in the same brain areas.
Affiliation(s)
- Deborah A Barany
- Department of Kinesiology, University of Georgia, Athens, Georgia 30602
- Department of Interdisciplinary Biomedical Sciences, School of Medicine, University of Georgia, Athens, Georgia 30606
- Neil M Dundon
- Department of Psychological and Brain Sciences, University of California Santa Barbara, Santa Barbara, California 93106
- Department of Child and Adolescent Psychiatry, Psychotherapy and Psychosomatics, University of Freiburg, Freiburg 79104, Germany
- Jolinda Smith
- Department of Human Physiology, University of Oregon, Eugene, Oregon 97403
- Michelle Marneweck
- Department of Human Physiology, University of Oregon, Eugene, Oregon 97403
- Institute of Neuroscience, University of Oregon, Eugene, Oregon 97403
- Phil and Penny Knight Campus for Accelerating Scientific Impact, Eugene, Oregon 97403
4
Gerb J, Brandt T, Dieterich M. Shape configuration of mental targets representation as a holistic measure in a 3D real world pointing test for spatial orientation. Sci Rep 2023; 13:20449. [PMID: 37993521 PMCID: PMC10665407 DOI: 10.1038/s41598-023-47821-2] [Received: 03/22/2023] [Accepted: 11/18/2023] [Indexed: 11/24/2023]
Abstract
Deficits in spatial memory are often early signs of neurological disorders. Here, we analyzed the geometrical shape configuration of 2D projections of pointing performances to a memorized array of spatially distributed targets in order to assess the feasibility of this new holistic analysis method. The influence of gender differences and cognitive impairment was taken into account in this methodological study. 56 right-handed healthy participants (28 female, mean age 48.89 ± 19.35 years) and 22 right-handed patients with heterogeneous cognitive impairment (12 female, mean age 71.73 ± 7.41 years) underwent a previously validated 3D real-world pointing test (3D-RWPT). Participants were shown a 9-dot target matrix and afterwards asked to point towards each target in randomized order with closed eyes in different body positions relative to the matrix. Two-dimensional projections of these pointing vectors (i.e., the shapes resulting from the individual dots) were then quantified using morphological analyses. Shape configurations in healthy volunteers largely reflected the real-world target pattern, with gender-dependent differences (ANCOVA, area, males vs. females: F(1,73) = 9.00, p = 3.69 × 10⁻³, partial η² = 0.10; post hoc difference = 38,350.43, p_bonf = 3.69 × 10⁻³, Cohen's d = 0.76, t = 3.00). Patients with cognitive impairment showed distorted rectangularity with more large-scale errors, resulting in decreased overall average diameters and solidity (ANCOVA, diameter, normal cognition vs. cognitive impairment: F(1,71) = 9.30, p = 3.22 × 10⁻³, partial η² = 0.09; post hoc difference = 31.22, p_bonf = 3.19 × 10⁻³, Cohen's d = 0.92, t = 3.05; solidity, normal cognition vs. cognitive impairment: F(1,71) = 7.79, p = 6.75 × 10⁻³, partial η² = 0.08; post hoc difference = 0.07, p_bonf = 6.76 × 10⁻³, Cohen's d = 0.84, t = 2.79). Shape configuration analysis of the 3D-RWPT target array appears to be a suitable holistic measure of spatial performance in a pointing task. The results of this methodological investigation support further testing in a clinical study for differential diagnosis of disorders with spatial memory deficits.
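The morphological quantities named in the abstract (area, diameter, solidity) can be approximated from a set of projected pointing endpoints with standard computational geometry. The sketch below is a minimal stand-in, not the authors' pipeline: it takes a hypothetical 3×3 endpoint grid mimicking the 9-dot matrix and computes a convex-hull area (shoelace formula) and a mean pairwise "diameter"; distorted or scattered endpoints would shrink or inflate these measures in the way the group comparisons describe.

```python
import numpy as np
from itertools import combinations

def cross(o, a, b):
    """2D cross product of vectors OA and OB (positive if OAB turns left)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Monotone-chain convex hull; returns strict hull vertices in order."""
    pts = sorted(map(tuple, points))
    if len(pts) <= 2:
        return np.array(pts)
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return np.array(lower[:-1] + upper[:-1])

def polygon_area(v):
    """Shoelace formula over ordered polygon vertices."""
    x, y = v[:, 0], v[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def mean_diameter(points):
    """Mean pairwise distance: a simple overall size measure of the dot pattern."""
    return np.mean([np.hypot(p[0] - q[0], p[1] - q[1])
                    for p, q in combinations(map(tuple, points), 2)])

# Hypothetical 3x3 grid of pointing endpoints, mimicking the 9-dot target matrix
grid = np.array([[x, y] for x in (0, 1, 2) for y in (0, 1, 2)], float)
hull = convex_hull(grid)
print(polygon_area(hull))   # 4.0 for this undistorted 2x2-unit outline
print(mean_diameter(grid))
```

Solidity (shape area divided by convex-hull area) would follow the same pattern once a shape is traced through the endpoints; large-scale pointing errors pull points outside the target rectangle, reducing solidity as reported for the cognitively impaired group.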
Affiliation(s)
- J Gerb
- German Center for Vertigo and Balance Disorders, Ludwig-Maximilians University, Munich, Germany.
- T Brandt
- German Center for Vertigo and Balance Disorders, Ludwig-Maximilians University, Munich, Germany
- Graduate School of Systemic Neuroscience, Ludwig-Maximilians University, Munich, Germany
- Hertie Senior Professor for Clinical Neuroscience, Ludwig-Maximilians University, Munich, Germany
- M Dieterich
- German Center for Vertigo and Balance Disorders, Ludwig-Maximilians University, Munich, Germany
- Graduate School of Systemic Neuroscience, Ludwig-Maximilians University, Munich, Germany
- Department of Neurology, Ludwig-Maximilians University, Munich, Germany
- Munich Cluster for Systems Neurology (SyNergy), Munich, Germany
5
Fooken J, Baltaretu BR, Barany DA, Diaz G, Semrau JA, Singh T, Crawford JD. Perceptual-Cognitive Integration for Goal-Directed Action in Naturalistic Environments. J Neurosci 2023; 43:7511-7522. [PMID: 37940592 PMCID: PMC10634571 DOI: 10.1523/jneurosci.1373-23.2023] [Received: 07/21/2023] [Revised: 08/15/2023] [Accepted: 08/18/2023] [Indexed: 11/10/2023]
Abstract
Real-world actions require one to simultaneously perceive, think, and act on the surrounding world, demanding the integration of (bottom-up) sensory information and (top-down) cognitive and motor signals. Studying these processes involves the intellectual challenge of cutting across traditional neuroscience silos and the technical challenge of recording data in uncontrolled natural environments. However, recent advances in techniques such as neuroimaging, virtual reality, and motion tracking allow one to address these issues in naturalistic environments for both healthy participants and clinical populations. In this review, we survey six topics in which naturalistic approaches have advanced both our fundamental understanding of brain function and our understanding of how neurologic deficits influence goal-directed, coordinated action in naturalistic environments. The first part conveys fundamental neuroscience mechanisms related to visuospatial coding for action, adaptive eye-hand coordination, and visuomotor integration for manual interception. The second part discusses applications of such knowledge to neurologic deficits, specifically steering in the presence of cortical blindness, the impact of stroke on visual-proprioceptive integration, and the impact of visual search and working memory deficits. This translational approach, extending knowledge from lab to rehab, provides new insights into the complex interplay between perceptual, motor, and cognitive control in naturalistic tasks that are relevant for both basic and clinical research.
Affiliation(s)
- Jolande Fooken
- Centre for Neuroscience, Queen's University, Kingston, Ontario K7L3N6, Canada
- Bianca R Baltaretu
- Department of Psychology, Justus Liebig University, Giessen, 35394, Germany
- Deborah A Barany
- Department of Kinesiology, University of Georgia, and Augusta University/University of Georgia Medical Partnership, Athens, Georgia 30602
- Gabriel Diaz
- Center for Imaging Science, Rochester Institute of Technology, Rochester, New York 14623
- Jennifer A Semrau
- Department of Kinesiology and Applied Physiology, University of Delaware, Newark, Delaware 19713
- Tarkeshwar Singh
- Department of Kinesiology, Pennsylvania State University, University Park, Pennsylvania 16802
- J Douglas Crawford
- Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
6
Forster PP, Fiehler K, Karimpur H. Egocentric cues influence the allocentric spatial memory of object configurations for memory-guided actions. J Neurophysiol 2023; 130:1142-1149. [PMID: 37791381 DOI: 10.1152/jn.00149.2023] [Received: 04/12/2023] [Revised: 09/27/2023] [Accepted: 09/28/2023] [Indexed: 10/05/2023]
Abstract
Allocentric and egocentric reference frames are used to code the spatial position of action targets in reference to objects in the environment, i.e., relative to landmarks (allocentric) or relative to the observer (egocentric). Previous research investigated reference frames in isolation, for example, by shifting landmarks relative to the target and asking participants to reach to the remembered target location. Systematic reaching errors were found in the direction of the landmark shift and used as a proxy for allocentric spatial coding. Here, we examined the interaction of both allocentric and egocentric reference frames by shifting the landmarks as well as the observer. We asked participants to encode a three-dimensional configuration of balls and to reproduce this configuration from memory after a short delay followed by a landmark or an observer shift. We also manipulated the number of landmarks to test its effect on the use of allocentric and egocentric reference frames. We found that participants were less accurate when reproducing the configuration of balls after an observer shift, which was reflected in larger configurational errors. In addition, an increase in the number of landmarks led to a stronger reliance on allocentric cues and a weaker contribution of egocentric cues. In sum, our results highlight the important role of egocentric cues for allocentric spatial coding in the context of memory-guided actions.

NEW & NOTEWORTHY Objects in our environment are coded relative to each other (allocentrically) and are thought to serve as independent and reliable cues (landmarks) in the context of unreliable egocentric signals. Contrary to this assumption, we demonstrate that egocentric cues alter allocentric spatial memory, which could reflect recently discovered interactions between allocentric and egocentric neural processing pathways. Furthermore, additional landmarks lead to a higher contribution of allocentric and a lower contribution of egocentric cues.
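The landmark-shift logic described above, where a reaching error in the direction of the shift serves as a proxy for allocentric coding, can be sketched as a toy weighted-combination model. The weight, coordinates, and shift below are hypothetical, and the actual study measured configurations of multiple balls rather than a single point.

```python
import numpy as np

def recalled_position(target, landmark, observer, landmark_shift, w_allo):
    """Recall a remembered target as a weighted mix of an allocentric code
    (target relative to the landmark) and an egocentric code (target relative
    to the observer). w_allo in [0, 1] is the allocentric weight."""
    allo_code = target - landmark                 # stored relative to landmark
    ego_code = target - observer                  # stored relative to observer
    allo_recall = (landmark + landmark_shift) + allo_code
    ego_recall = observer + ego_code              # observer did not move here
    return w_allo * allo_recall + (1.0 - w_allo) * ego_recall

target = np.array([0.0, 0.0])
landmark = np.array([1.0, 0.0])
observer = np.array([0.0, -2.0])
shift = np.array([0.5, 0.0])                      # landmark displaced at recall

error = recalled_position(target, landmark, observer, shift, w_allo=0.6) - target
print(error)  # 0.6 * shift: the error tracks the landmark shift in proportion to w_allo
```

In this toy model, a purely egocentric coder (w_allo = 0) ignores the shift entirely, while adding landmarks could be modeled as raising w_allo, consistent with the reported stronger reliance on allocentric cues when more landmarks are present.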
Affiliation(s)
- Pierre-Pascal Forster
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
- Katja Fiehler
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
- Harun Karimpur
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
7
Goldenkoff ER, Deluisi JA, Destiny DP, Lee TG, Michon KJ, Brissenden JA, Taylor SF, Polk TA, Vesia M. The behavioral and neural effects of parietal theta burst stimulation on the grasp network are stronger during a grasping task than at rest. Front Neurosci 2023; 17:1198222. [PMID: 37954875 PMCID: PMC10637360 DOI: 10.3389/fnins.2023.1198222] [Received: 03/31/2023] [Accepted: 10/05/2023] [Indexed: 11/14/2023]
Abstract
Repetitive transcranial magnetic stimulation (TMS) is widely used in neuroscience and clinical settings to modulate human cortical activity. The effects of TMS on neural activity depend on the excitability of specific neural populations at the time of stimulation. Accordingly, the brain state at the time of stimulation may influence the persistent effects of repetitive TMS on distal brain activity and associated behaviors. We applied intermittent theta burst stimulation (iTBS) to a region in the posterior parietal cortex (PPC) associated with grasp control to evaluate the interaction between stimulation and brain state. Across two experiments, we demonstrate the immediate responses of motor cortex activity and motor performance to state-dependent parietal stimulation. We randomly assigned 72 healthy adult participants to one of three TMS intervention groups, followed by electrophysiological measures with TMS and behavioral measures. Participants in the first group received iTBS to PPC while performing a grasping task concurrently. Participants in the second group received iTBS to PPC while in a task-free, resting state. A third group of participants received iTBS to a parietal region outside the cortical grasping network while performing a grasping task concurrently. We compared changes in motor cortical excitability and motor performance in the three stimulation groups within an hour of each intervention. We found that parietal stimulation during a behavioral manipulation that activates the cortical grasping network increased downstream motor cortical excitability and improved motor performance relative to stimulation during rest. We conclude that constraining the brain state with a behavioral task during brain stimulation has the potential to optimize plasticity induction in cortical circuit mechanisms that mediate movement processes.
Affiliation(s)
- Joseph A. Deluisi
- School of Kinesiology, University of Michigan, Ann Arbor, MI, United States
- Danielle P. Destiny
- Department of Psychology, University of Michigan, Ann Arbor, MI, United States
- Taraz G. Lee
- Department of Psychology, University of Michigan, Ann Arbor, MI, United States
- Katherine J. Michon
- Department of Psychology, University of Michigan, Ann Arbor, MI, United States
- James A. Brissenden
- Department of Psychology, University of Michigan, Ann Arbor, MI, United States
- Stephan F. Taylor
- Department of Psychiatry, University of Michigan, Ann Arbor, MI, United States
- Thad A. Polk
- Department of Psychology, University of Michigan, Ann Arbor, MI, United States
- Michael Vesia
- School of Kinesiology, University of Michigan, Ann Arbor, MI, United States
8
Schütz A, Bharmauria V, Yan X, Wang H, Bremmer F, Crawford JD. Integration of landmark and saccade target signals in macaque frontal cortex visual responses. Commun Biol 2023; 6:938. [PMID: 37704829 PMCID: PMC10499799 DOI: 10.1038/s42003-023-05291-2] [Received: 04/10/2021] [Accepted: 08/26/2023] [Indexed: 09/15/2023]
Abstract
Visual landmarks influence spatial cognition and behavior, but their influence on visual codes for action is poorly understood. Here, we test landmark influence on the visual response to saccade targets recorded from 312 frontal and 256 supplementary eye field neurons in rhesus macaques. Visual response fields are characterized by recording neural responses to various target-landmark combinations, which we then test against several candidate spatial models. Overall, frontal/supplementary eye field response fields preferentially code either saccade targets (40%/40%) or landmarks (30%/4.5%) in gaze fixation-centered coordinates, but most cells show multiplexed target-landmark coding within intermediate reference frames (between fixation-centered and landmark-centered). Further, these coding schemes interact: neurons with near-equal target and landmark coding show the biggest shift from fixation-centered toward landmark-centered target coding. These data show that landmark information is preserved and influences target coding in prefrontal visual responses, likely to stabilize movement goals in the presence of noisy egocentric signals.
Affiliation(s)
- Adrian Schütz
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain, and Behavior - CMBB, Philipps-Universität Marburg, Marburg, Germany & Justus-Liebig-Universität Giessen, Giessen, Germany
- Vishal Bharmauria
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Xiaogang Yan
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Hongying Wang
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Frank Bremmer
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain, and Behavior - CMBB, Philipps-Universität Marburg, Marburg, Germany & Justus-Liebig-Universität Giessen, Giessen, Germany
- J Douglas Crawford
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada.
- Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, Canada.
9
Bosco A, Filippini M, Borra D, Kirchner EA, Fattori P. Depth and direction effects in the prediction of static and shifted reaching goals from kinematics. Sci Rep 2023; 13:13115. [PMID: 37573413 PMCID: PMC10423273 DOI: 10.1038/s41598-023-40127-3] [Received: 05/11/2023] [Accepted: 08/04/2023] [Indexed: 08/14/2023]
Abstract
The kinematic parameters of reach-to-grasp movements are modulated by action intentions. However, when an unexpected change in the visual target goal occurs during reaching execution, it is still unknown whether the action intention changes with the target goal modification and what the temporal structure of the target goal prediction is. We recorded the kinematics of the pointing finger and wrist during the execution of reaching movements in 23 naïve volunteers; the targets could be located at different directions and depths with respect to the body. During movement execution, the targets could remain static for the entire duration of the movement or shift, with different timings, to another position. We performed temporal decoding of the final goals and of the intermediate trajectory from the past kinematics, exploiting a recurrent neural network. We observed a progressive increase in classification performance from the onset to the end of movement in both the horizontal and sagittal dimensions, as well as in decoding shifted targets. The classification accuracy in decoding horizontal targets was higher than that for sagittal targets. These results are useful for establishing how human and artificial agents could take advantage of the observed kinematics to optimize their cooperation in three-dimensional space.
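The central observation, classification performance rising from movement onset to the end, can be reproduced qualitatively with synthetic straight-line reaches and a deliberately simple nearest-centroid decoder. The authors used a recurrent neural network on real finger/wrist kinematics; the targets, noise level, and decoder below are illustrative assumptions, not their setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two reach targets differing in direction (a stand-in for the
# direction/depth conditions; not the authors' data or network)
targets = np.array([[1.0, 0.0], [0.0, 1.0]])
n_steps, noise_sd = 20, 0.05

def simulate(label):
    """Straight-line reach toward a target plus sensor noise."""
    t = np.linspace(0.05, 1.0, n_steps)[:, None]
    return t * targets[label] + rng.normal(0.0, noise_sd, (n_steps, 2))

labels = np.repeat([0, 1], 20)
trials = np.stack([simulate(l) for l in labels])   # (trials, time, xy)

def accuracy_at(k):
    """Nearest-centroid decoding from the first k samples of each trial."""
    feats = trials[:, :k].mean(axis=1)             # cumulative mean position
    cents = np.stack([feats[labels == c].mean(0) for c in (0, 1)])
    pred = np.linalg.norm(feats[:, None] - cents[None], axis=2).argmin(1)
    return (pred == labels).mean()

early, late = accuracy_at(2), accuracy_at(n_steps)
print(early, late)  # decoding improves as more of the movement is observed
```

Because each cumulative window accumulates more target-directed displacement relative to the fixed sensor noise, accuracy grows as the movement unfolds, the same qualitative temporal profile reported for the recurrent decoder.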
Affiliation(s)
- A Bosco
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy.
- Alma Mater Research Institute for Human-Centered Artificial Intelligence, University of Bologna, Bologna, Italy.
- M Filippini
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Alma Mater Research Institute for Human-Centered Artificial Intelligence, University of Bologna, Bologna, Italy
- D Borra
- Department of Electrical, Electronic and Information Engineering, University of Bologna, Bologna, Italy
- E A Kirchner
- Department of Electrical Engineering and Information Technology, University of Duisburg-Essen, Duisburg, Germany
- Robotics Innovation Center, German Research Center for Artificial Intelligence GmbH, Kaiserslautern, Germany
- P Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Alma Mater Research Institute for Human-Centered Artificial Intelligence, University of Bologna, Bologna, Italy
10
Ludwig D. The functions of consciousness in visual processing. Neurosci Conscious 2023; 2023:niac018. [PMID: 36628118 PMCID: PMC9825248 DOI: 10.1093/nc/niac018] [Received: 08/01/2022] [Revised: 10/24/2022] [Accepted: 12/06/2022] [Indexed: 01/09/2023]
Abstract
Conscious experiences form a relatively diverse class of psychological phenomena, supported by a range of distinct neurobiological mechanisms. This diversity suggests that consciousness occupies a variety of different functional roles across different task domains, individuals, and species, a position I call functional pluralism. In this paper, I begin to tease out some of the functional contributions that consciousness makes to (human) visual processing. Consolidating research from across the cognitive sciences, I discuss semantic and spatiotemporal processing as specific points of comparison between the functional capabilities of the visual system in the presence and absence of conscious awareness. I argue that consciousness contributes a cluster of functions to visual processing, facilitating, among other things, (i) increased capacities for semantically processing informationally complex visual stimuli, (ii) increased spatiotemporal precision, and (iii) increased capacities for representational integration over large spatiotemporal intervals. This sort of analysis should ultimately yield a plurality of functional markers that can be used to guide future research in the philosophy and science of consciousness, some of which are not captured by popular theoretical frameworks like global workspace theory and information integration theory.
Affiliation(s)
- Dylan Ludwig
- Department of Philosophy, York University, 4700 Keele Street, Toronto, ON M3J 1P3, Canada
11
Blohm G, Cheyne DO, Crawford JD. Parietofrontal oscillations show hand-specific interactions with top-down movement plans. J Neurophysiol 2022; 128:1518-1533. [PMID: 36321728 DOI: 10.1152/jn.00240.2022] [Indexed: 11/07/2022]
Abstract
To generate a hand-specific reach plan, the brain must integrate hand-specific signals with the desired movement strategy. Although various neurophysiology/imaging studies have investigated hand-target interactions in simple reach-to-target tasks, the whole-brain timing and distribution of this process remain unclear, especially for more complex, instruction-dependent motor strategies. Previously, we showed that a pro/anti pointing instruction influences magnetoencephalographic (MEG) signals in frontal cortex that then propagate recurrently through parietal cortex (Blohm G, Alikhanian H, Gaetz W, Goltz HC, DeSouza JF, Cheyne DO, Crawford JD. NeuroImage 197: 306-319, 2019). Here, we contrasted left versus right hand pointing in the same task to investigate 1) which cortical regions of interest show hand specificity and 2) which of those areas interact with the instructed motor plan. Eight bilateral areas, the parietooccipital junction (POJ), superior parietooccipital cortex (SPOC), supramarginal gyrus (SMG), medial/anterior interparietal sulcus (mIPS/aIPS), primary somatosensory/motor cortex (S1/M1), and dorsal premotor cortex (PMd), showed hand-specific changes in beta band power, with four of these (M1, S1, SMG, aIPS) showing robust activation before movement onset. M1, SMG, SPOC, and aIPS showed significant interactions between contralateral hand specificity and the instructed motor plan but not with bottom-up target signals. Separate hand/motor signals emerged relatively early and lasted through execution, whereas hand-motor interactions only occurred close to movement onset. Taken together with our previous results, these findings show that instruction-dependent motor plans emerge in frontal cortex and interact recurrently with hand-specific parietofrontal signals before movement onset to produce hand-specific motor behaviors.

NEW & NOTEWORTHY The brain must generate different motor signals depending on which hand is used. The distribution and timing of hand use/instructed motor plan integration are not understood at the whole-brain level. Using MEG, we show that different action planning subnetworks code for hand usage and for integrating hand use into a hand-specific motor plan. The timing indicates that frontal cortex first creates a general motor plan and then integrates hand specificity to produce a hand-specific motor plan.
Affiliation(s)
- Gunnar Blohm
- Centre of Neuroscience Studies, Departments of Biomedical & Molecular Sciences, Mathematics & Statistics, and Psychology and School of Computing, Queen's University, Kingston, Ontario, Canada.,Centre for Vision Research, York University, Toronto, Ontario, Canada.,Canadian Action and Perception Network (CAPnet), Montreal, Quebec, Canada.,Vision: Science to Applications (VISTA) program, Departments of Psychology, Biology, and Kinesiology and Health Sciences and Neuroscience Graduate Diploma Program, York University, Toronto, Ontario, Canada
| | - Douglas O Cheyne
- Program in Neurosciences and Mental Health, The Hospital for Sick Children Research Institute, Toronto, Ontario, Canada
| | - J Douglas Crawford
- Centre for Vision Research, York University, Toronto, Ontario, Canada.,Canadian Action and Perception Network (CAPnet), Montreal, Quebec, Canada.,Vision: Science to Applications (VISTA) program, Departments of Psychology, Biology, and Kinesiology and Health Sciences and Neuroscience Graduate Diploma Program, York University, Toronto, Ontario, Canada
| |
Collapse
12
Gerb J, Brandt T, Dieterich M. Different strategies in pointing tasks and their impact on clinical bedside tests of spatial orientation. J Neurol 2022; 269:5738-5745. [PMID: 35258851 PMCID: PMC9553832 DOI: 10.1007/s00415-022-11015-z] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2021] [Revised: 01/27/2022] [Accepted: 02/05/2022] [Indexed: 11/24/2022]
Abstract
Deficits in spatial memory, orientation, and navigation are often early or neglected signs of degenerative and vestibular neurological disorders. A simple and reliable bedside test of these functions would be extremely relevant for diagnostic routine. Pointing at targets in the 3D environment is a basic, well-trained common sensorimotor ability that provides a suitable measure. We here describe a smartphone-based pointing device that uses the built-in inertial sensors to analyze pointing performance in azimuth and polar spatial coordinates. Interpretation of the vectors measured in this way is not trivial, since the individuals tested may use at least two different strategies: first, they may perform the task in an egocentric, eye-based reference system by aligning the fingertip with the target retinotopically, or second, they may align the stretched arm and index finger with the visual line of sight in allocentric, world-based coordinates, similar to aiming a rifle. The two strategies result in considerable differences in target coordinates. A pilot test of a further developed design of the device and an app for standardized bedside use in five healthy volunteers revealed an overall mean deviation of less than 5° between the measured and the true coordinates. Future investigations of neurological patients comparing their performance before and after changes in body position (chair rotation) may allow differentiation of distinct orientational deficits in peripheral (vestibulopathy) or central (hippocampal or cortical) disorders.
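The accuracy metric reported above, the angular deviation between measured and true pointing directions given as azimuth/elevation angles, can be computed as the angle between the corresponding unit vectors. A minimal sketch of that computation (not the authors' code; the function names and example angles are hypothetical):

```python
import numpy as np

def to_unit_vector(azimuth_deg, elevation_deg):
    """Convert azimuth/elevation angles (degrees) to a 3D unit vector."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    return np.array([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)])

def angular_deviation_deg(measured, true):
    """Angle (degrees) between a measured and a true pointing direction,
    each given as an (azimuth, elevation) pair in degrees."""
    u, v = to_unit_vector(*measured), to_unit_vector(*true)
    cos_angle = np.clip(np.dot(u, v), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

# Example: 3 deg azimuth error and 2 deg elevation error near eye level
dev = angular_deviation_deg((33.0, 2.0), (30.0, 0.0))
```

Averaging such deviations over repeated pointing trials would yield the kind of overall mean deviation the study reports.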
Affiliation(s)
- J. Gerb
- Department of Neurology, University Hospital, Ludwig-Maximilians University, Marchioninistrasse 15, 81377 Munich, Germany
- German Center for Vertigo and Balance Disorders, University Hospital, Ludwig-Maximilians University, Marchioninistrasse 15, 81377 Munich, Germany
- T. Brandt
- Graduate School of Systemic Neuroscience, Ludwig-Maximilians University, Munich, Germany
- German Center for Vertigo and Balance Disorders, University Hospital, Ludwig-Maximilians University, Marchioninistrasse 15, 81377 Munich, Germany
- Hertie Senior Professor for Clinical Neuroscience, Ludwig-Maximilians University, Munich, Germany
- M. Dieterich
- Department of Neurology, University Hospital, Ludwig-Maximilians University, Marchioninistrasse 15, 81377 Munich, Germany
- Graduate School of Systemic Neuroscience, Ludwig-Maximilians University, Munich, Germany
- German Center for Vertigo and Balance Disorders, University Hospital, Ludwig-Maximilians University, Marchioninistrasse 15, 81377 Munich, Germany
- Munich Cluster for Systems Neurology (SyNergy), Munich, Germany
13
Hadjidimitrakis K, De Vitis M, Ghodrati M, Filippini M, Fattori P. Anterior-posterior gradient in the integrated processing of forelimb movement direction and distance in macaque parietal cortex. Cell Rep 2022; 41:111608. [DOI: 10.1016/j.celrep.2022.111608] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2021] [Revised: 07/16/2022] [Accepted: 10/14/2022] [Indexed: 11/09/2022] Open
14
Reeves SM, Cooper EA, Rodriguez R, Otero-Millan J. Head Orientation Influences Saccade Directions during Free Viewing. eNeuro 2022; 9:ENEURO.0273-22.2022. [PMID: 36351820 PMCID: PMC9787809 DOI: 10.1523/eneuro.0273-22.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Revised: 10/01/2022] [Accepted: 11/03/2022] [Indexed: 11/11/2022] Open
Abstract
When looking around a visual scene, humans make saccadic eye movements to fixate objects of interest. While the extraocular muscles can execute saccades in any direction, not all saccade directions are equally likely: saccades in horizontal and vertical directions are most prevalent. Here, we asked whether head orientation plays a role in determining saccade direction biases. Study participants (n = 14) viewed natural scenes and abstract fractals (radially symmetric patterns) through a virtual reality headset equipped with eye tracking. Participants' heads were stabilized and tilted at -30°, 0°, or 30° while viewing the images, which could also be tilted by -30°, 0°, or 30° relative to the head. To determine whether the biases in saccade direction changed with head tilt, we calculated polar histograms of saccade directions and cross-correlated pairs of histograms to find the angular displacement resulting in the maximum correlation. During free viewing of fractals, saccade biases largely followed the orientation of the head, with an average displacement of 24° when comparing head upright to head tilt in world-referenced coordinates (t(13) = 17.63, p < 0.001). There was a systematic offset of 2.6° in saccade directions, likely reflecting ocular counter-roll (OCR; t(13) = 3.13, p = 0.008). When participants viewed an Earth-upright natural scene during head tilt, we found that the orientation of the head still influenced saccade directions (t(13) = 3.7, p = 0.001). These results suggest that nonvisual information about head orientation, such as that acquired by vestibular sensors, likely plays a role in saccade generation.
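The histogram cross-correlation step described above can be sketched as follows. This is a minimal illustration of the general technique, not the authors' pipeline; the bin width, the synthetic direction distribution, and the function names are assumptions:

```python
import numpy as np

def polar_histogram(directions_deg, bin_width=4):
    """Histogram of saccade directions over 0-360 degrees."""
    bins = np.arange(0, 360 + bin_width, bin_width)
    counts, _ = np.histogram(np.mod(directions_deg, 360), bins=bins)
    return counts.astype(float)

def best_angular_shift(hist_a, hist_b, bin_width=4):
    """Angular displacement (degrees) of hist_b relative to hist_a:
    the circular rotation of hist_a that maximizes its correlation
    with hist_b, reported as a signed angle in (-180, 180]."""
    corrs = [np.corrcoef(np.roll(hist_a, s), hist_b)[0, 1]
             for s in range(len(hist_a))]
    best = int(np.argmax(corrs)) * bin_width
    return best - 360 if best > 180 else best

# Synthetic check: a (deliberately asymmetric) four-lobed direction
# distribution, and the same distribution rotated by 30 degrees
rng = np.random.default_rng(0)
base = np.concatenate([rng.normal(0, 10, 800), rng.normal(90, 10, 500),
                       rng.normal(180, 10, 400), rng.normal(270, 10, 300)])
h0 = polar_histogram(base)
h1 = polar_histogram(base + 30)
shift = best_angular_shift(h0, h1)  # recovers roughly +30 degrees
```

With real saccade data, comparing head-upright to head-tilted histograms in this way yields the displacement values the study analyzes.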
Affiliation(s)
- Stephanie M Reeves
- Herbert Wertheim School of Optometry and Vision Science, University of California Berkeley, Berkeley, CA 94720
- Emily A Cooper
- Herbert Wertheim School of Optometry and Vision Science, University of California Berkeley, Berkeley, CA 94720
- Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, CA 94720
- Raul Rodriguez
- Herbert Wertheim School of Optometry and Vision Science, University of California Berkeley, Berkeley, CA 94720
- Jorge Otero-Millan
- Herbert Wertheim School of Optometry and Vision Science, University of California Berkeley, Berkeley, CA 94720
- Department of Neurology, Johns Hopkins University, Baltimore, MD 21231
15
Abedi Khoozani P, Bharmauria V, Schütz A, Wildes RP, Crawford JD. Integration of allocentric and egocentric visual information in a convolutional/multilayer perceptron network model of goal-directed gaze shifts. Cereb Cortex Commun 2022; 3:tgac026. [PMID: 35909704 PMCID: PMC9334293 DOI: 10.1093/texcom/tgac026] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Revised: 06/14/2022] [Accepted: 06/21/2022] [Indexed: 11/13/2022] Open
Abstract
Allocentric (landmark-centered) and egocentric (eye-centered) visual codes are fundamental for spatial cognition, navigation, and goal-directed movement. Neuroimaging and neurophysiology suggest these codes are initially segregated, but then reintegrated in frontal cortex for movement control. We created and validated a theoretical framework for this process using physiologically constrained inputs and outputs. To implement a general framework, we integrated a convolutional neural network (CNN) of the visual system with a multilayer perceptron (MLP) model of the sensorimotor transformation. The network was trained on a task where a landmark shifted relative to the saccade target. These visual parameters were input to the CNN, the CNN output and initial gaze position to the MLP, and a decoder transformed MLP output into saccade vectors. Decoded saccade output replicated idealized training sets with various allocentric weightings and actual monkey data where the landmark shift had a partial influence (R2 = 0.8). Furthermore, MLP output units accurately simulated prefrontal response field shifts recorded from monkeys during the same paradigm. In summary, our model replicated both the general properties of the visuomotor transformations for gaze and specific experimental results obtained during allocentric–egocentric integration, suggesting it can provide a general framework for understanding these and other complex visuomotor behaviors.
Affiliation(s)
- Parisa Abedi Khoozani
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Vishal Bharmauria
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Adrian Schütz
- Department of Neurophysics, Philipps-University Marburg, Marburg 35037, Germany
- Richard P Wildes
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Department of Electrical Engineering and Computer Science, York University, Toronto, ON M3J 1P3, Canada
- J Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Departments of Psychology, Biology and Kinesiology & Health Sciences, York University, Toronto, Ontario M3J 1P3, Canada
16
Abekawa N, Ito S, Gomi H. Gaze-specific motor memories for hand-reaching. Curr Biol 2022; 32:2747-2753.e6. [DOI: 10.1016/j.cub.2022.04.065] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2021] [Revised: 03/23/2022] [Accepted: 04/22/2022] [Indexed: 10/18/2022]
17
Filippini M, Borra D, Ursino M, Magosso E, Fattori P. Decoding sensorimotor information from superior parietal lobule of macaque via Convolutional Neural Networks. Neural Netw 2022; 151:276-294. [PMID: 35452895 DOI: 10.1016/j.neunet.2022.03.044] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2021] [Revised: 01/17/2022] [Accepted: 03/29/2022] [Indexed: 10/18/2022]
Abstract
Despite the well-recognized role of the posterior parietal cortex (PPC) in processing sensory information to guide action, the differential encoding properties of this dynamic processing, as operated by different PPC brain areas, are scarcely known. Within the monkey's PPC, the superior parietal lobule hosts areas V6A, PEc, and PE included in the dorso-medial visual stream that is specialized in planning and guiding reaching movements. Here, a Convolutional Neural Network (CNN) approach is used to investigate how the information is processed in these areas. We trained two macaque monkeys to perform a delayed reaching task towards 9 positions (distributed on 3 different depth and direction levels) in the 3D peripersonal space. The activity of single cells was recorded from V6A, PEc, PE and fed to convolutional neural networks that were designed and trained to exploit the temporal structure of neuronal activation patterns, to decode the target positions reached by the monkey. Bayesian Optimization was used to define the main CNN hyper-parameters. In addition to discrete positions in space, we used the same network architecture to decode plausible reaching trajectories. We found that data from the most caudal V6A and PEc areas outperformed PE area in the spatial position decoding. In all areas, decoding accuracies started to increase at the time the target to reach was instructed to the monkey, and reached a plateau at movement onset. The results support a dynamic encoding of the different phases and properties of the reaching movement differentially distributed over a network of interconnected areas. This study highlights the usefulness of neurons' firing rate decoding via CNNs to improve our understanding of how sensorimotor information is encoded in PPC to perform reaching movements. 
The obtained results may have implications for novel neuroprosthetic devices based on the decoding of these rich signals to faithfully carry out patients' intentions.
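As a rough illustration of how a temporal CNN maps firing-rate data to target classes, the forward pass of such a decoder can be sketched in plain NumPy. This is not the authors' architecture: the neuron/time-bin counts, filter sizes, stride, and untrained random weights are placeholders, and a real decoder would be trained (the paper also tunes its hyper-parameters with Bayesian Optimization):

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d_temporal(x, kernels, stride=1):
    """Valid 1D convolution over the time axis, followed by ReLU.
    x: (channels, time); kernels: (n_filters, channels, width)."""
    n_filters, _, width = kernels.shape
    t_out = (x.shape[1] - width) // stride + 1
    out = np.empty((n_filters, t_out))
    for f in range(n_filters):
        for t in range(t_out):
            out[f, t] = np.sum(kernels[f] * x[:, t * stride:t * stride + width])
    return np.maximum(out, 0.0)

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy firing-rate tensor: 50 neurons x 40 time bins (hypothetical sizes)
rates = rng.random((50, 40))
kernels = rng.normal(scale=0.1, size=(8, 50, 5))      # 8 temporal filters
features = conv1d_temporal(rates, kernels, stride=2)  # shape (8, 18)
w_out = rng.normal(scale=0.1, size=(9, features.size))  # 9 target positions
probs = softmax(w_out @ features.ravel())             # class probabilities
```

The temporal filters are what let such a decoder exploit the time course of neuronal activation patterns rather than a single mean firing rate.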
Affiliation(s)
- Matteo Filippini
- University of Bologna, Department of Biomedical and Neuromotor Sciences, Bologna, Italy
- Davide Borra
- University of Bologna, Department of Electrical, Electronic and Information Engineering "Guglielmo Marconi", Cesena Campus, Cesena, Italy
- Mauro Ursino
- University of Bologna, Department of Electrical, Electronic and Information Engineering "Guglielmo Marconi", Cesena Campus, Cesena, Italy
- Alma Mater Research Institute for Human-Centered Artificial Intelligence, Bologna, Italy
- Elisa Magosso
- University of Bologna, Department of Electrical, Electronic and Information Engineering "Guglielmo Marconi", Cesena Campus, Cesena, Italy
- Alma Mater Research Institute for Human-Centered Artificial Intelligence, Bologna, Italy
- Patrizia Fattori
- University of Bologna, Department of Biomedical and Neuromotor Sciences, Bologna, Italy
- Alma Mater Research Institute for Human-Centered Artificial Intelligence, Bologna, Italy
18
Lappi O. Egocentric Chunking in the Predictive Brain: A Cognitive Basis of Expert Performance in High-Speed Sports. Front Hum Neurosci 2022; 16:822887. [PMID: 35496065 PMCID: PMC9039003 DOI: 10.3389/fnhum.2022.822887] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2021] [Accepted: 03/16/2022] [Indexed: 11/13/2022] Open
Abstract
What principles and mechanisms allow humans to encode complex 3D information, and how can it be so fast, so accurately, and so flexibly transformed into coordinated action? How do these processes work when developed to the limit of human physiological and cognitive capacity, as they are in high-speed sports such as alpine skiing or motor racing? High-speed sports present not only physical challenges but also some of the biggest perceptual-cognitive demands for the brain. The skill of these elite athletes is in many ways an attractive model for studying human performance "in the wild" and its neurocognitive basis. This article presents a framework theory for how these abilities may be realized in high-speed sports. It draws on a careful analysis of the case of the motorsport athlete, as well as theoretical concepts from: (1) the cognitive neuroscience of wayfinding, steering, and driving; (2) the cognitive psychology of expertise; (3) cognitive modeling and machine learning; (4) human-in-the-loop modelling in vehicle system dynamics and human performance engineering; (5) experimental research (in the laboratory and in the field) on human visual guidance. The distinctive contribution is the way these are integrated, and the concept of chunking is used in a novel way to analyze a high-speed sport. The mechanisms invoked are domain-general, and not specific to motorsport or the use of a particular type of vehicle (or any vehicle for that matter); the egocentric chunking hypothesis should therefore apply to any dynamic task that requires similar core skills. It offers a framework for neuroscientists, psychologists, engineers, and computer scientists working in the field of expert sports performance, and may be useful in translating fundamental research into theory-based insight and recommendations for improving real-world elite performance. Specific experimental predictions and the applicability of the hypotheses to other sports are discussed.
Affiliation(s)
- Otto Lappi
- Cognitive Science/Traffic Research Unit (TRU)/TRUlab, University of Helsinki, Helsinki, Finland
19
Zhang B, Wang F, Zhang Q, Naya Y. Distinct networks coupled with parietal cortex for spatial representations inside and outside the visual field. Neuroimage 2022; 252:119041. [PMID: 35231630 DOI: 10.1016/j.neuroimage.2022.119041] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2021] [Revised: 02/22/2022] [Accepted: 02/24/2022] [Indexed: 11/19/2022] Open
Abstract
Our mental representation of egocentric space is influenced by the disproportionate sensory perception of the body. Previous studies have focused on the neural architecture for egocentric representations within the visual field. However, the representation of the space behind the body is still unclear. To address this problem, we applied both functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) to a spatial-memory paradigm using a virtual environment in which human participants remembered a target location to the left, right, or back relative to their own body. Both experiments showed larger involvement of the frontoparietal network in representing a retrieved target on the left/right side than on the back. Conversely, the medial temporal lobe (MTL)-parietal network was more involved in retrieving a target behind the participants. The MEG data showed earlier activation of the MTL-parietal network than of the frontoparietal network during retrieval of a target location. These findings suggest that the parietal cortex may represent the entire space around the body by coordinating two distinct brain networks.
Affiliation(s)
- Bo Zhang
- School of Psychological and Cognitive Sciences, Peking University, No. 52, Haidian Road, Haidian District, Beijing 100805, China
- Beijing Academy of Artificial Intelligence, Beijing 100084, China
- Tsinghua Laboratory of Brain and Intelligence, 160 Chengfu Rd., SanCaiTang Building, Haidian District, Beijing 100084, China
- Fan Wang
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, 15 Datun Road, Beijing 100101, China
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Biophysics, Chinese Academy of Sciences, 15 Datun Road, Beijing 100101, China
- University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing 100049, China
- Qi Zhang
- School of Psychological and Cognitive Sciences, Peking University, No. 52, Haidian Road, Haidian District, Beijing 100805, China
- School of Educational Science, Minnan Normal University, No. 36, Xianqianzhi Street, Zhangzhou 363000, China
- Yuji Naya
- School of Psychological and Cognitive Sciences, Peking University, No. 52, Haidian Road, Haidian District, Beijing 100805, China
- IDG/McGovern Institute for Brain Research at Peking University, No. 52, Haidian Road, Haidian District, Beijing 100805, China
- Center for Life Sciences, Peking University, No. 52, Haidian Road, Haidian District, Beijing 100805, China
- Beijing Key Laboratory of Behavior and Mental Health, Peking University, No. 52, Haidian Road, Haidian District, Beijing 100805, China
20
Ning J, Li Z, Zhang X, Wang J, Chen D, Liu Q, Sun Y. Behavioral signatures of structured feature detection during courtship in Drosophila. Curr Biol 2022; 32:1211-1231.e7. [DOI: 10.1016/j.cub.2022.01.024] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2021] [Revised: 11/27/2021] [Accepted: 01/10/2022] [Indexed: 11/27/2022]
21
Matthis JS, Muller KS, Bonnen KL, Hayhoe MM. Retinal optic flow during natural locomotion. PLoS Comput Biol 2022; 18:e1009575. [PMID: 35192614 PMCID: PMC8896712 DOI: 10.1371/journal.pcbi.1009575] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 03/04/2022] [Accepted: 10/14/2021] [Indexed: 11/18/2022] Open
Abstract
We examine the structure of the visual motion projected on the retina during natural locomotion in real-world environments. Bipedal gait generates a complex, rhythmic pattern of head translation and rotation in space, so without gaze stabilization mechanisms such as the vestibulo-ocular reflex (VOR), a walker's visually specified heading would vary dramatically throughout the gait cycle. The act of fixation on stable points in the environment nulls image motion at the fovea, resulting in stable patterns of outflow on the retinae centered on the point of fixation. These outflowing patterns retain a higher-order structure that is informative about the stabilized trajectory of the eye through space. We measured this structure by applying the curl and divergence operations to the retinal flow velocity vector fields and found features that may be valuable for the control of locomotion. In particular, the sign and magnitude of foveal curl in retinal flow specify the body's trajectory relative to the gaze point, while the point of maximum divergence in the retinal flow field specifies the walker's instantaneous overground velocity/momentum vector in retinotopic coordinates. Assuming that walkers can determine the body position relative to gaze direction, these time-varying retinotopic cues for the body's momentum could provide a visual control signal for locomotion over complex terrain. In contrast, the temporal variation of the eye-movement-free, head-centered flow fields is large enough to be problematic for use in steering towards a goal. Consideration of optic flow in the context of real-world locomotion therefore suggests a re-evaluation of the role of optic flow in the control of action during natural behavior.
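The curl and divergence operations applied to the flow fields can be illustrated on a 2D vector field sampled on a regular grid, using finite differences. A minimal sketch (not the authors' code), with a pure expansion field and a pure rotation field as sanity checks:

```python
import numpy as np

def curl_divergence(vx, vy, spacing=1.0):
    """Pointwise curl (z-component) and divergence of a 2D vector field
    sampled on a regular grid. vx, vy: (rows, cols) arrays, where rows
    index y and columns index x."""
    dvx_dy, dvx_dx = np.gradient(vx, spacing)
    dvy_dy, dvy_dx = np.gradient(vy, spacing)
    curl = dvy_dx - dvx_dy   # z-component of the 2D curl
    div = dvx_dx + dvy_dy
    return curl, div

y, x = np.mgrid[-5:6, -5:6].astype(float)

# Pure expansion from the origin, v = (x, y): the fixation-centered
# outflow pattern has divergence 2 everywhere and zero curl
curl_e, div_e = curl_divergence(x, y)

# Pure rotation about the origin, v = (-y, x): curl 2, zero divergence
curl_r, div_r = curl_divergence(-y, x)
```

On measured retinal flow, the foveal curl value and the location of the divergence maximum are then the cues the abstract describes.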
Affiliation(s)
- Jonathan Samir Matthis
- Department of Biology, Northeastern University, Boston, Massachusetts, United States of America
- Karl S. Muller
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
- Kathryn L. Bonnen
- School of Optometry, Indiana University Bloomington, Bloomington, Indiana, United States of America
- Mary M. Hayhoe
- Center for Perceptual Systems, University of Texas at Austin, Austin, Texas, United States of America
22
Liu(刘) R, Bögels S, Bird G, Medendorp WP, Toni I. Hierarchical Integration of Communicative and Spatial Perspective‐Taking Demands in Sensorimotor Control of Referential Pointing. Cogn Sci 2022; 46:e13084. [PMID: 35066907 PMCID: PMC9287027 DOI: 10.1111/cogs.13084] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2020] [Revised: 10/29/2021] [Accepted: 12/07/2021] [Indexed: 11/16/2022]
Abstract
Recognized as a simple communicative behavior, referential pointing is cognitively complex because it invites a communicator to consider an addressee's knowledge. Although we know referential pointing is affected by addressees’ physical location, it remains unclear whether and how communicators’ inferences about addressees’ mental representation of the interaction space influence sensorimotor control of referential pointing. The communicative perspective‐taking task requires a communicator to point at one out of multiple referents either to instruct an addressee which one should be selected (communicative, COM) or to predict which one the addressee will select (non‐communicative, NCOM), based on either which referents can be seen (Level‐1 perspective‐taking, PT1) or how the referents were perceived (Level‐2 perspective‐taking, PT2) by the addressee. Communicators took longer to initiate the movements in PT2 than PT1 trials, and they held their pointing fingers for longer at the referent in COM than NCOM trials. The novel findings of this study pertain to trajectory control of the pointing movements. Increasing both communicative and perspective‐taking demands led to longer pointing trajectories, with an under‐additive interaction between those two experimental factors. This finding suggests that participants generate communicative behaviors that are as informative as required rather than overly exaggerated displays, by integrating communicative and perspective‐taking information hierarchically during sensorimotor control. This observation has consequences for models of human communication. It implies that the format of communicative and perspective‐taking knowledge needs to be commensurate with the movement dynamics controlled by the sensorimotor system.
Affiliation(s)
- Rui(睿) Liu(刘)
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Sara Bögels
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Geoffrey Bird
- Department of Experimental Psychology, University of Oxford
- Social, Genetic and Developmental Psychiatry Centre, Institute of Psychiatry, Psychology & Neuroscience, King's College London
- Ivan Toni
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
23
Abstract
This study provides evidence for a foundational process underlying active vision in older infants during object play. Using head-mounted eye-tracking and motion capture, looks to an object are shown to be tightly linked to and synchronous with a stilled head, regardless of the duration of gaze, for infants 12 to 24 months of age. Despite being a developmental period of rapid and marked changes in motor abilities, the dynamic coordination of head stabilization and sustained gaze to a visual target is developmentally invariant during the examined age range. The findings indicate that looking with an aligned head and eyes is a fundamental property of human vision and highlight the importance of studying looking behavior in freely moving perceivers in everyday contexts, opening new questions about the role of body movement in both typical and atypical development of visual attention.
Affiliation(s)
- Jeremy I Borjon
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Drew H Abney
- Department of Psychology, University of Georgia, Athens, GA, USA
- Chen Yu
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Department of Psychology, University of Texas, Austin, TX, USA
- Linda B Smith
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- School of Psychology, University of East Anglia, East Anglia, UK
24
Abekawa N, Gomi H, Diedrichsen J. Gaze control during reaching is flexibly modulated to optimize task outcome. J Neurophysiol 2021; 126:816-826. [PMID: 34320845 DOI: 10.1152/jn.00134.2021] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
When reaching for an object with the hand, gaze is usually directed at the target. In a laboratory setting, fixation is strongly maintained at the reach target until the reaching is completed, a phenomenon known as "gaze anchoring." While conventional accounts of such tight eye-hand coordination have often emphasized the internal synergetic linkage between both motor systems, more recent optimal control theories regard motor coordination as the adaptive solution to task requirements. We here investigated to what degree gaze control during reaching is modulated by task demands. We adopted a gaze-anchoring paradigm in which participants had to reach for a target location. During the reach, they additionally had to make a saccadic eye movement to a salient visual cue presented at locations other than the target. We manipulated the task demands by independently changing reward contingencies for saccade reaction time (RT) and reaching accuracy. On average, both saccade RTs and reach error varied systematically with reward condition, with reach accuracy improving when the saccade was delayed. The distribution of saccade RTs showed two types of eye movements: fast saccades with short RTs, and voluntary saccades with longer RTs. Increased reward for high reach accuracy reduced the probability of fast saccades but left their latency unchanged. The results suggest that gaze anchoring acts through a suppression of fast saccades, a mechanism that can be adaptively adjusted to the current task demands.
NEW & NOTEWORTHY During visually guided reaching, our eyes usually fixate the target and saccades elsewhere are delayed ("gaze anchoring"). We here show that the degree of gaze anchoring is flexibly modulated by the reward contingencies of saccade latency and reach accuracy. Reach error became larger when saccades occurred earlier.
These results suggest that early saccades are costly for reaching and the brain modulates inhibitory online coordination from the hand to the eye system depending on task requirements.
Affiliation(s)
- Naotoshi Abekawa
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Kanagawa, Japan
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- Hiroaki Gomi
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Kanagawa, Japan
- Jörn Diedrichsen
- The Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom
25
Baarbé J, Vesia M, Brown MJN, Lizarraga KJ, Gunraj C, Jegatheeswaran G, Drummond NM, Rinchon C, Weissbach A, Saravanamuttu J, Chen R. Interhemispheric interactions between the right angular gyrus and the left motor cortex: a transcranial magnetic stimulation study. J Neurophysiol 2021; 125:1236-1250. [PMID: 33625938 DOI: 10.1152/jn.00642.2020] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The interconnection of the angular gyrus of the right posterior parietal cortex (PPC) and the left motor cortex (LM1) is essential for goal-directed hand movements. Previous work with transcranial magnetic stimulation (TMS) showed that right PPC stimulation increases LM1 excitability, but right PPC followed by left PPC-LM1 stimulation (LPPC-LM1) inhibits LM1 corticospinal output compared with LPPC-LM1 alone. It is not clear if right PPC-mediated inhibition of LPPC-LM1 is due to inhibition of left PPC or to combined effects of right and left PPC stimulation on LM1 excitability. We used paired-pulse TMS to study the extent to which combined right and left PPC stimulation, targeting the angular gyri, influences LM1 excitability. We tested 16 healthy subjects in five paired-pulse TMS experiments using MRI-guided neuronavigation to target the angular gyri within PPC. We tested the effects of different right angular gyrus (RAG) and LM1 stimulation intensities on the influence of RAG on LM1 and on the influence of the left angular gyrus (LAG) on LM1 (LAG-LM1). We then tested the effects of RAG and LAG stimulation on LM1 short-interval intracortical facilitation (SICF), short-interval intracortical inhibition (SICI), and long-interval intracortical inhibition (LICI). The results revealed that RAG facilitated LM1, inhibited SICF, and inhibited LAG-LM1. Combined RAG-LAG stimulation did not affect SICI but increased LICI. These experiments suggest that RAG-mediated inhibition of LAG-LM1 is related to inhibition of early indirect (I)-wave activity and enhancement of GABAB receptor-mediated inhibition in LM1. The influence of RAG on LM1 likely involves ipsilateral connections from LAG to LM1 and heterotopic connections from RAG to LM1.
NEW & NOTEWORTHY Goal-directed hand movements rely on the right and left angular gyri (RAG and LAG) and motor cortex (M1), yet how these brain areas functionally interact is unclear.
Here, we show that RAG stimulation facilitated right hand motor output from the left M1 but inhibited indirect (I)-waves in M1. Combined RAG and LAG stimulation increased GABAB, but not GABAA, receptor-mediated inhibition in left M1. These findings highlight unique brain interactions between the RAG and left M1.
Affiliation(s)
- Julianne Baarbé
- Department of Medicine, University of Toronto, Toronto, Ontario, Canada; School of Kinesiology, Brain Behavior Laboratory, University of Michigan, Ann Arbor, Michigan
- Michael Vesia
- Department of Medicine, University of Toronto, Toronto, Ontario, Canada; School of Kinesiology, Brain Behavior Laboratory, University of Michigan, Ann Arbor, Michigan
- Matt J N Brown
- Department of Medicine, University of Toronto, Toronto, Ontario, Canada; School of Kinesiology, Brain Behavior Laboratory, University of Michigan, Ann Arbor, Michigan; Department of Kinesiology, California State University, Sacramento, California
- Karlo J Lizarraga
- Department of Medicine, University of Toronto, Toronto, Ontario, Canada; School of Kinesiology, Brain Behavior Laboratory, University of Michigan, Ann Arbor, Michigan; Motor Physiology and Neuromodulation Program, Division of Movement Disorders and Center for Health + Technology, Department of Neurology, University of Rochester, Rochester, New York
- Carolyn Gunraj
- Department of Medicine, University of Toronto, Toronto, Ontario, Canada; School of Kinesiology, Brain Behavior Laboratory, University of Michigan, Ann Arbor, Michigan
- Gaayathiri Jegatheeswaran
- Department of Medicine, University of Toronto, Toronto, Ontario, Canada; School of Kinesiology, Brain Behavior Laboratory, University of Michigan, Ann Arbor, Michigan
- Neil M Drummond
- Department of Medicine, University of Toronto, Toronto, Ontario, Canada; School of Kinesiology, Brain Behavior Laboratory, University of Michigan, Ann Arbor, Michigan
- Cricia Rinchon
- Department of Medicine, University of Toronto, Toronto, Ontario, Canada; School of Kinesiology, Brain Behavior Laboratory, University of Michigan, Ann Arbor, Michigan
- Anne Weissbach
- Department of Medicine, University of Toronto, Toronto, Ontario, Canada; School of Kinesiology, Brain Behavior Laboratory, University of Michigan, Ann Arbor, Michigan; Institute of Systems Motor Science, University of Lübeck, Lübeck, Germany
- James Saravanamuttu
- Department of Medicine, University of Toronto, Toronto, Ontario, Canada; School of Kinesiology, Brain Behavior Laboratory, University of Michigan, Ann Arbor, Michigan
- Robert Chen
- Department of Medicine, University of Toronto, Toronto, Ontario, Canada; School of Kinesiology, Brain Behavior Laboratory, University of Michigan, Ann Arbor, Michigan
26
Allison RS, Wilcox LM. Stereoscopic depth constancy from a different direction. Vision Res 2020; 178:70-78. [PMID: 33161145] [DOI: 10.1016/j.visres.2020.10.003]
Abstract
To calibrate stereoscopic depth from disparity, our visual system must compensate for an object's egocentric location. Ideally, the perceived three-dimensional shape and size of objects in visual space should be invariant with their location, such that rigid objects have a consistent identity and shape. These percepts should be accurate enough to support both perceptual judgments and visually guided interaction. This theoretical note reviews the relationship of stereoscopic depth constancy to the geometry of stereoscopic space and seemingly esoteric concepts like the horopter. We argue that to encompass the full scope of stereoscopic depth constancy, researchers need to consider not just distance but also direction, that is, 3D egocentric location in space. Judgments of surface orientation need to take into account the shape of the horopter, and the computation of metric depth (when tasks depend on it) must compensate for direction as well as distance to calibrate disparities. We show that the concept of the horopter underlies these considerations and that the relationship between depth constancy and the horopter should be made more explicit in the literature.
27
Abstract
What are the principles of brain organization? In the motor domain, separate pathways were found for reaching and grasping actions performed by the hand. To what extent is this organization specific to the hand or based on abstract action types, regardless of which body part performs them? We tested people born without hands who perform actions with their feet. Activity in frontoparietal association motor areas showed preference for an action type (reaching or grasping), regardless of whether it was performed by the foot in people born without hands or by the hand in typically-developed controls. These findings provide evidence that some association areas are organized based on abstract functions of action types, independent of specific sensorimotor experience and parameters of specific body parts. Many parts of the visuomotor system guide daily hand actions, like reaching for and grasping objects. Do these regions depend exclusively on the hand as a specific body part whose movement they guide, or are they organized for the reaching task per se, for any body part used as an effector? To address this question, we conducted a neuroimaging study with people born without upper limbs—individuals with dysplasia—who use the feet to act, as they and typically developed controls performed reaching and grasping actions with their dominant effector. Individuals with dysplasia have no prior experience acting with hands, allowing us to control for hand motor imagery when acting with another effector (i.e., foot). Primary sensorimotor cortices showed selectivity for the hand in controls and foot in individuals with dysplasia. Importantly, we found a preference based on action type (reaching/grasping) regardless of the effector used in the association sensorimotor cortex, in the left intraparietal sulcus and dorsal premotor cortex, as well as in the basal ganglia and anterior cerebellum. These areas also showed differential response patterns between action types for both groups. 
Intermediate areas along a posterior–anterior gradient in the left dorsal premotor cortex gradually transitioned from selectivity based on the body part to selectivity based on the action type. These findings indicate that some visuomotor association areas are organized based on abstract action functions independent of specific sensorimotor parameters, paralleling sensory feature-independence in visual and auditory cortices in people born blind and deaf. Together, they suggest association cortices across action and perception may support specific computations, abstracted from low-level sensorimotor elements.
28
Abstract
The development of the use of transcranial magnetic stimulation (TMS) in the study of psychological functions has entered a new phase of sophistication. This is largely due to an increasing physiological knowledge of its effects and to its use in combination with other experimental techniques. This review presents the current state of our understanding of the mechanisms of TMS in the context of designing and interpreting psychological experiments. We discuss the major conceptual advances in behavioral studies using TMS. There are meaningful physiological and technical achievements to review, as well as a wealth of new perceptual and cognitive experiments. In doing so, we summarize the different uses and challenges of TMS in mental chronometry, perception, awareness, learning, and memory.
Affiliation(s)
- David Pitcher
- Department of Psychology, University of York, York YO10 5DD, United Kingdom
- Beth Parkin
- Department of Psychology, University of Westminster, London W1W 6UW, United Kingdom
- Vincent Walsh
- Institute of Cognitive Neuroscience, University College London, London WC1N 3AR, United Kingdom
29
Sajad A, Sadeh M, Crawford JD. Spatiotemporal transformations for gaze control. Physiol Rep 2020; 8:e14533. [PMID: 32812395] [PMCID: PMC7435051] [DOI: 10.14814/phy2.14533]
Abstract
Sensorimotor transformations require spatiotemporal coordination of signals, that is, coordination through both time and space. For example, the gaze control system employs signals that are time-locked to various sensorimotor events, but the spatial content of these signals is difficult to assess during ordinary gaze shifts. In this review, we describe the various models and methods that have been devised to address this question, and their limitations. We then describe a new method that can (a) simultaneously test between all of these models during natural, head-unrestrained conditions, and (b) track the evolving spatial continuum from target (T) to future gaze coding (G, including errors) through time. We then summarize some applications of this technique, comparing spatiotemporal coding in the primate frontal eye field (FEF) and superior colliculus (SC). The results confirm that these areas preferentially encode eye-centered, effector-independent parameters, and show, for the first time in ordinary gaze shifts, a spatial transformation between visual and motor responses from T to G coding. We introduce a new set of spatial models (the T-G continuum) that revealed task-dependent timing of this transformation: progressive during a memory delay between vision and action, and almost immediate without such a delay. We synthesize the results from our studies and supplement them with previous knowledge of anatomy and physiology to propose a conceptual model in which cumulative transformation noise is realized as inaccuracies in gaze behavior. We conclude that the spatiotemporal transformation for gaze is both local (observed within and across neurons in a given area) and distributed (with common signals shared across remote but interconnected structures).
Affiliation(s)
- Amirsaman Sajad
- Centre for Vision Research, York University, Toronto, ON, Canada
- Psychology Department, Vanderbilt University, Nashville, TN, USA
- Morteza Sadeh
- Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Neurosurgery, University of Illinois at Chicago, Chicago, IL, USA
- John Douglas Crawford
- Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications Program (VISTA), Neuroscience Graduate Diploma Program, Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, ON, Canada
30
Karimpur H, Kurz J, Fiehler K. The role of perception and action on the use of allocentric information in a large-scale virtual environment. Exp Brain Res 2020; 238:1813-1826. [PMID: 32500297] [PMCID: PMC7438369] [DOI: 10.1007/s00221-020-05839-2]
Abstract
In everyday life, our brain constantly builds spatial representations of the objects surrounding us. Many studies have investigated the nature of these spatial representations. It is well established that we use allocentric information in real-time and memory-guided movements. Most studies relied on small-scale and static experiments, leaving it unclear whether similar paradigms yield the same results on a larger scale using dynamic objects. We created a virtual reality task that required participants to encode the landing position of a virtual ball thrown by an avatar. Encoding differed in the nature of the task in that it was either purely perceptual (“view where the ball landed while standing still”—Experiment 1) or involved an action (“intercept the ball with the foot just before it lands”—Experiment 2). After encoding, participants were asked to place a real ball at the remembered landing position in the virtual scene. In some trials, we subtly shifted either the thrower or the midfield line on a soccer field to manipulate allocentric coding of the ball’s landing position. In both experiments, we were able to replicate classic findings from small-scale experiments and to generalize these results to different encoding tasks (perception vs. action) and response modes (reaching vs. walking-and-placing). Moreover, we found that participants preferably encoded the ball relative to the thrower when they had to intercept the ball, suggesting that the use of allocentric information is determined by the encoding task by enhancing task-relevant allocentric information. Our findings indicate that results previously obtained from memory-guided reaching are not restricted to small-scale movements, but generalize to whole-body movements in large-scale dynamic scenes.
Affiliation(s)
- Harun Karimpur
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
- Johannes Kurz
- NemoLab-Neuromotor Behavior Laboratory, Justus Liebig University Giessen, Giessen, Germany
- Katja Fiehler
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
31
Heuer A, Ohl S, Rolfs M. Memory for action: a functional view of selection in visual working memory. Visual Cognition 2020. [DOI: 10.1080/13506285.2020.1764156]
Affiliation(s)
- Anna Heuer
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
- Sven Ohl
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
- Martin Rolfs
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
32
Karimpur H, Eftekharifar S, Troje NF, Fiehler K. Spatial coding for memory-guided reaching in visual and pictorial spaces. J Vis 2020; 20:1. [PMID: 32271893] [PMCID: PMC7405696] [DOI: 10.1167/jov.20.4.1]
Abstract
An essential difference between pictorial space, displayed in paintings, photographs, or on computer screens, and the visual space experienced in the real world is that the observer has a defined location, and thus valid information about the distance and direction of objects, in the latter but not in the former. Thus, egocentric information should be more reliable in visual space, whereas allocentric information should be more reliable in pictorial space. The majority of studies have relied on pictorial representations (images on a computer screen), leaving it unclear whether the same coding mechanisms apply in visual space. Using a memory-guided reaching task in virtual reality, we investigated allocentric coding in both visual space (on a table in virtual reality) and pictorial space (on a monitor that is on the table in virtual reality). Our results suggest that the brain uses allocentric information to represent objects in both pictorial and visual space. Contrary to our hypothesis, the influence of allocentric cues was stronger in visual space than in pictorial space, even after controlling for retinal stimulus size, confounding allocentric cues, and differences in presentation depth. We discuss possible reasons for stronger allocentric coding in visual than in pictorial space.
Affiliation(s)
- Harun Karimpur
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
- Nikolaus F. Troje
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
- Centre for Vision Research and Department of Biology, York University, Toronto, ON, Canada
- Katja Fiehler
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
33
Task Errors Drive Memories That Improve Sensorimotor Adaptation. J Neurosci 2020; 40:3075-3088. [PMID: 32029533] [DOI: 10.1523/jneurosci.1506-19.2020]
Abstract
Traditional views of sensorimotor adaptation (i.e., adaptation of movements to perturbed sensory feedback) emphasize the role of automatic, implicit correction of sensory prediction errors. However, latent memories formed during sensorimotor adaptation, manifest as improved relearning (e.g., savings), have recently been attributed to strategic corrections of task errors (failures to achieve task goals). To dissociate contributions of task errors and sensory prediction errors to latent sensorimotor memories, we perturbed target locations to remove or enforce task errors during learning and/or test in male and female human participants. Adaptation improved after learning in all conditions where participants were permitted to correct task errors, and did not improve whenever we prevented correction of task errors. Thus, previous correction of task errors was both necessary and sufficient to improve adaptation. In contrast, a history of sensory prediction errors was neither sufficient nor obligatory for improved adaptation. Limiting movement preparation time showed that the latent memories driven by learning to correct task errors take at least two forms: a time-consuming but flexible component, and a rapidly expressible, inflexible component. The results provide strong support for the idea that movement corrections driven by a failure to successfully achieve movement goals underpin motor memories that manifest as savings. Such persistent memories are not exclusively mediated by time-consuming strategic processes but also comprise a rapidly expressible but inflexible component. The distinct characteristics of these putative processes suggest dissociable underlying mechanisms, and imply that identification of the neural basis for adaptation and savings will require methods that allow such dissociations.
SIGNIFICANCE STATEMENT Latent motor memories formed during sensorimotor adaptation manifest as improved adaptation when sensorimotor perturbations are reencountered.
Conflicting theories suggest that this "savings" is underpinned by different mechanisms, including a memory of successful actions, a memory of errors, or an aiming strategy to correct task errors. Here we show that learning to correct task errors is sufficient to show improved subsequent adaptation with respect to naive performance, even when tested in the absence of task errors. In contrast, a history of sensory prediction errors is neither sufficient nor obligatory for improved adaptation. Finally, we show that latent sensorimotor memories driven by task errors comprise at least two distinct components: a time-consuming, flexible component, and a rapidly expressible, inflexible component.
34
Lu Z, Fiehler K. Spatial updating of allocentric landmark information in real-time and memory-guided reaching. Cortex 2020; 125:203-214. [PMID: 32006875] [DOI: 10.1016/j.cortex.2019.12.010]
Abstract
The 2-streams model of vision suggests that egocentric and allocentric reference frames are utilized by the dorsal and the ventral stream for real-time and memory-guided movements, respectively. Recent studies argue against such a strict functional distinction and suggest that real-time and memory-guided movements recruit the same spatial maps. In this study we focus on allocentric spatial coding and updating of targets by using landmark information in real-time and memory-guided reaching. We presented participants with a naturalistic scene which consisted of six objects on a table that served as potential reach targets. Participants were informed about the target object after scene encoding, and were prompted by a go cue to reach to its position. After target identification a brief air-puff was applied to the participant's right eye inducing an eye blink. During the blink the target object disappeared from the scene, and in half of the trials the remaining objects, that functioned as landmarks, were shifted horizontally in the same direction. We found that landmark shifts systematically influenced participants' reaching endpoints irrespective of whether the movements were controlled online based on available target information (real-time movement) or memory-guided based on remembered target information (memory-guided movement). Overall, the effect of landmark shift was stronger for memory-guided than real-time reaching. Our findings suggest that humans can encode and update reach targets in an allocentric reference frame for both real-time and memory-guided movements and show stronger allocentric coding when the movement is based on memory.
Affiliation(s)
- Zijian Lu
- Department of Experimental Psychology, Justus-Liebig-University, Giessen, Germany
- Katja Fiehler
- Department of Experimental Psychology, Justus-Liebig-University, Giessen, Germany; Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus-Liebig University, Giessen, Germany
35
Timing Determines Tuning: A Rapid Spatial Transformation in Superior Colliculus Neurons during Reactive Gaze Shifts. eNeuro 2020; 7:ENEURO.0359-18.2019. [PMID: 31792117] [PMCID: PMC6944480] [DOI: 10.1523/eneuro.0359-18.2019]
Abstract
Gaze saccades, rapid shifts of the eyes and head toward a goal, have provided fundamental insights into the neural control of movement. For example, it has been shown that the superior colliculus (SC) transforms a visual target (T) code to future gaze (G) location commands after a memory delay. However, this transformation has not been observed in "reactive" saccades made directly to a stimulus, so its contribution to normal gaze behavior is unclear. Here, we tested this using a quantitative measure of the intermediate codes between T and G, based on variable errors in gaze endpoints. We demonstrate that a rapid spatial transformation occurs within the primate's SC (Macaca mulatta) during reactive saccades, involving a shift in coding from T, through intermediate codes, to G. This spatial shift progressed continuously both across and within cell populations [visual, visuomotor (VM), motor], rather than relaying discretely between populations with fixed spatial codes. These results suggest that the SC produces a rapid, noisy, and distributed transformation that contributes to variable errors in reactive gaze shifts.
36
Medendorp WP, Heed T. State estimation in posterior parietal cortex: Distinct poles of environmental and bodily states. Prog Neurobiol 2019; 183:101691. [DOI: 10.1016/j.pneurobio.2019.101691]
37
Smeets JBJ, van der Kooij K, Brenner E. A review of grasping as the movements of digits in space. J Neurophysiol 2019; 122:1578-1597. [DOI: 10.1152/jn.00123.2019]
Abstract
It is tempting to describe human reach-to-grasp movements in terms of two more or less independent visuomotor channels, one relating hand transport to the object's location and the other relating grip aperture to the object's size. Our review of experimental work questions this framework for reasons that go beyond noting the dependence between the two channels. Both the lack of effect of size illusions on grip aperture and the finding that the variability in grip aperture does not depend on the object's size indicate that size information is not used to control grip aperture. An alternative is to describe grip formation as emerging from controlling the movements of the digits in space. Each digit's trajectory when grasping an object is remarkably similar to its trajectory when moving to tap the same position on its own. The similarity is also evident in the fast responses when the object is displaced. This review develops a new description of the speed-accuracy trade-off for multiple effectors, which is then applied to grasping. The most direct support for the digit-in-space framework is that prism-induced adaptation of each digit's tapping movements transfers to that digit's movements when grasping, leading to changes in grip aperture for adaptation in opposite directions for the two digits. We conclude that although grip aperture and hand transport are convenient variables to describe grasping, treating grasping as movements of the digits in space is a more suitable basis for understanding the neural control of grasping.
Affiliation(s)
- Jeroen B. J. Smeets
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Katinka van der Kooij
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Eli Brenner
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
38
Paschke K, Bähr M, Wüstenberg T, Wilke M. Trunk rotation and handedness modulate cortical activation in neglect-associated regions during temporal order judgments. NeuroImage Clin 2019; 23:101898. [PMID: 31491819] [PMCID: PMC6627032] [DOI: 10.1016/j.nicl.2019.101898]
Abstract
Rotation of the trunk around its vertical midline has been shown to bias visuospatial temporal judgments towards targets in the hemifield ipsilateral to the trunk orientation and to improve visuospatial performance in patients with visual neglect. However, the underlying brain mechanisms are not well understood. The goal of the present study was therefore to investigate the neural effects associated with egocentric midplane shifts, taking individual handedness into account. We employed a visuospatial temporal order judgment (TOJ) task in healthy right- and left-handed subjects while their trunk rotation was varied. Participants responded with a saccade towards the stimulus perceived first out of two stimuli presented at different stimulus onset asynchronies (SOA). In addition to gaze behavior, BOLD responses were measured using functional magnetic resonance imaging (fMRI). Based on findings from spatial neglect research, analyses of fMRI BOLD responses were focused on a bilateral fronto-temporo-parietal network comprising Brodmann areas 22, 39, 40, and 44, as well as the basal ganglia core nuclei (caudate, putamen, pallidum). We observed faster saccades towards stimuli ipsilateral to the trunk orientation, an effect modulated by individual handedness. Left-handed participants showed the strongest behavioral and neural effects, suggesting greater susceptibility to manipulations of trunk orientation. With respect to the dominant hand, rotation around the vertical trunk midline modulated the activation of an ipsilateral network comprising fronto-temporo-parietal regions and the putamen, with the strongest effects for saccades towards the hemifield opposite to the dominant hand. Within the investigated network, the temporo-parietal junction (TPJ) appears to serve as a region integrating sensory, motor, and trunk position information. Our results are discussed in the context of gain modulation and laterality effects.
We examined the effect of trunk rotation on brain responses in neglect-associated areas. Trunk-related BOLD-fMRI activation patterns depend on handedness: they were modulated most during trunk rotation contralateral to the dominant hand. Trunk rotation and saccade direction show interaction effects at the TPJ, which serves as a region integrating sensory, motor, and trunk position information.
Affiliation(s)
- Kerstin Paschke
- Department of Cognitive Neurology, University Medicine Göttingen, Robert-Koch-Str. 40, Göttingen 37075, Germany; German Center for Addiction Research in Childhood and Adolescence, University Medical Center Hamburg-Eppendorf, Martinistr. 52, Hamburg 20246, Germany; Department of Neurology, University Medicine Göttingen, Robert-Koch-Str. 40, Göttingen 37075, Germany
- Mathias Bähr
- Department of Neurology, University Medicine Göttingen, Robert-Koch-Str. 40, Göttingen 37075, Germany; DFG Center for Nanoscale Microscopy & Molecular Physiology of the Brain (CNMPB), Germany
- Torsten Wüstenberg
- Department of Psychiatry and Psychotherapy, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Berlin Institute of Health, Humboldt-Universität zu Berlin, Charité Campus Mitte, Charitéplatz 1, Berlin 10117, Germany; Systems Neuroscience in Psychiatry (SNiP), Central Institute of Mental Health, Mannheim, J5, Mannheim 68159, Germany
- Melanie Wilke
- Department of Cognitive Neurology, University Medicine Göttingen, Robert-Koch-Str. 40, Göttingen 37075, Germany; DFG Center for Nanoscale Microscopy & Molecular Physiology of the Brain (CNMPB), Germany; German Primate Center, Leibniz Institute for Primate Research, Kellnerweg 4, Göttingen 37077, Germany; Leibniz ScienceCampus Primate Cognition, Germany
39
Blohm G, Alikhanian H, Gaetz W, Goltz H, DeSouza J, Cheyne D, Crawford J. Neuromagnetic signatures of the spatiotemporal transformation for manual pointing. Neuroimage 2019; 197:306-319. [DOI: 10.1016/j.neuroimage.2019.04.074]
40
The neglected medial part of macaque area PE: segregated processing of reach depth and direction. Brain Struct Funct 2019; 224:2537-2557. [DOI: 10.1007/s00429-019-01923-8] [Citation(s) in RCA: 12]
41
Tuhkanen S, Pekkanen J, Rinkkala P, Mole C, Wilkie RM, Lappi O. Humans Use Predictive Gaze Strategies to Target Waypoints for Steering. Sci Rep 2019; 9:8344. [PMID: 31171850 PMCID: PMC6554351 DOI: 10.1038/s41598-019-44723-0] [Citation(s) in RCA: 16]
Abstract
A major unresolved question in understanding visually guided locomotion in humans is whether actions are driven solely by the immediately available optical information (model-free online control mechanisms), or whether internal models play a role in anticipating the future path. We designed two experiments to investigate this issue, measuring spontaneous gaze behaviour while steering, and predictive gaze behaviour when future path information was withheld. In Experiment 1, participants (N = 15) steered along a winding path with rich optic flow: gaze patterns were consistent with tracking waypoints on the future path 1–3 s ahead. In Experiment 2, participants (N = 12) followed a path presented only in the form of visual waypoints located on an otherwise featureless ground plane. New waypoints appeared periodically, every 0.75 s and predictably 2 s ahead, except that in 25% of cases the waypoint at the expected location was not displayed. In these cases there were always other visible waypoints for the participant to fixate, yet participants continued to make saccades to the empty, but predictable, waypoint locations (in line with internal models of the future path guiding gaze fixations). This behaviour is not expected under existing model-free online steering control models, and strongly points to a need for models of steering control to include mechanisms for predictive gaze control that support anticipatory path-following behaviours.
Affiliation(s)
- Samuel Tuhkanen
- Cognitive Science, Department of Digital Humanities & Helsinki Centre for Digital Humanities (Heldig), University of Helsinki, Helsinki, Finland; TRUlab, University of Helsinki, Helsinki, Finland
- Jami Pekkanen
- Cognitive Science, Department of Digital Humanities & Helsinki Centre for Digital Humanities (Heldig), University of Helsinki, Helsinki, Finland; TRUlab, University of Helsinki, Helsinki, Finland
- Paavo Rinkkala
- Cognitive Science, Department of Digital Humanities & Helsinki Centre for Digital Humanities (Heldig), University of Helsinki, Helsinki, Finland; TRUlab, University of Helsinki, Helsinki, Finland
- Callum Mole
- School of Psychology, University of Leeds, Leeds, UK
- Otto Lappi
- Cognitive Science, Department of Digital Humanities & Helsinki Centre for Digital Humanities (Heldig), University of Helsinki, Helsinki, Finland; TRUlab, University of Helsinki, Helsinki, Finland
42
Hadjidimitrakis K, Bakola S, Wong YT, Hagan MA. Mixed Spatial and Movement Representations in the Primate Posterior Parietal Cortex. Front Neural Circuits 2019; 13:15. [PMID: 30914925 PMCID: PMC6421332 DOI: 10.3389/fncir.2019.00015] [Citation(s) in RCA: 27]
Abstract
The posterior parietal cortex (PPC) of humans and non-human primates plays a key role in the sensory and motor transformations required to guide motor actions to objects of interest in the environment. Despite decades of research, the anatomical and functional organization of this region is still a matter of contention. It is generally accepted that specialized parietal subregions, and their functional counterparts in the frontal cortex, participate in distinct segregated networks related to eye, arm and hand movements. However, experimental evidence obtained primarily from single-neuron recording studies in non-human primates has demonstrated a rich mixing of signals processed by parietal neurons, calling into question the idea of strict functional specialization. Here, we present a brief account of this line of research together with the basic trends in the anatomical connectivity patterns of the parietal subregions. We review the evidence related to functional communication between subregions of the PPC and describe progress towards using parietal neuron activity in neuroprosthetic applications. Recent literature suggests a role for the PPC not as a constellation of specialized functional subdomains, but as a dynamic network of sensorimotor loci that combine multiple signals and work in concert to guide motor behavior.
Affiliation(s)
- Kostas Hadjidimitrakis
- Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, VIC, Australia
- Sophia Bakola
- Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, VIC, Australia
- Yan T Wong
- Department of Physiology, Monash University, Clayton, VIC, Australia; Department of Electrical and Computer Science Engineering, Monash University, Clayton, VIC, Australia
- Maureen A Hagan
- Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, VIC, Australia
43
Chivukula S, Jafari M, Aflalo T, Yong NA, Pouratian N. Cognition in Sensorimotor Control: Interfacing With the Posterior Parietal Cortex. Front Neurosci 2019; 13:140. [PMID: 30872993 PMCID: PMC6401528 DOI: 10.3389/fnins.2019.00140] [Citation(s) in RCA: 9]
Abstract
Millions of people worldwide are afflicted with paralysis from a disruption of neural pathways between the brain and the muscles. Because their cortical architecture is often preserved, these patients are able to plan movements despite an inability to execute them. In such people, brain machine interfaces have great potential to restore lost function through neuroprosthetic devices, circumventing dysfunctional corticospinal circuitry. These devices have typically derived control signals from the motor cortex (M1) which provides information highly correlated with desired movement trajectories. However, sensorimotor control simultaneously engages multiple cognitive processes such as intent, state estimation, decision making, and the integration of multisensory feedback. As such, cortical association regions upstream of M1 such as the posterior parietal cortex (PPC) that are involved in higher order behaviors such as planning and learning, rather than in encoding movement itself, may enable enhanced, cognitive control of neuroprosthetics, termed cognitive neural prosthetics (CNPs). We illustrate in this review, through a small sampling, the cognitive functions encoded in the PPC and discuss their neural representation in the context of their relevance to motor neuroprosthetics. We aim to highlight through examples a role for cortical signals from the PPC in developing CNPs, and to inspire future avenues for exploration in their research and development.
Affiliation(s)
- Srinivas Chivukula
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, United States
- Matiar Jafari
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, United States
- Tyson Aflalo
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, United States
- Nicholas Au Yong
- Department of Neurological Surgery, Los Angeles Medical Center, University of California, Los Angeles, Los Angeles, CA, United States
- Nader Pouratian
- Department of Neurological Surgery, Los Angeles Medical Center, University of California, Los Angeles, Los Angeles, CA, United States
44
Seegelke C, Wühr P. Compatibility between object size and response side in grasping: the left hand prefers smaller objects, the right hand prefers larger objects. PeerJ 2018; 6:e6026. [PMID: 30533312 PMCID: PMC6282946 DOI: 10.7717/peerj.6026] [Citation(s) in RCA: 4]
Abstract
It has been proposed that the brain processes quantities such as space, size, number, and other magnitudes using a common neural metric, and that this common representation system reflects a direct link to motor control, because the integration of spatial, temporal, and other quantity-related information is fundamental for sensorimotor transformation processes. In the present study, we examined compatibility effects between physical stimulus size and spatial (response) location during a sensorimotor task. Participants reached for and grasped a small or large object with either their non-dominant left or their dominant right hand. Our results revealed that participants initiated left hand movements faster when grasping the small cube compared to the large cube, whereas they initiated right hand movements faster when grasping the large cube compared to the small cube. Moreover, the compatibility effect influenced the timing of grip aperture kinematics. These findings indicate that the interaction between object size and response hand affects the planning of grasping movements and support the notion of a strong link between the cognitive representation of (object) size, spatial (response) parameters, and sensorimotor control.
Affiliation(s)
- Christian Seegelke
- Biopsychology and Cognitive Neuroscience, Faculty of Psychology and Sport Sciences, Bielefeld University, Bielefeld, Germany
- Center of Excellence Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Peter Wühr
- Institute of Psychology, TU Dortmund University, Dortmund, Germany
45
Helmbrecht TO, dal Maschio M, Donovan JC, Koutsouli S, Baier H. Topography of a Visuomotor Transformation. Neuron 2018; 100:1429-1445.e4. [DOI: 10.1016/j.neuron.2018.10.021] [Citation(s) in RCA: 65]
46
Sadibolova R, Tamè L, Longo MR. More than skin-deep: Integration of skin-based and musculoskeletal reference frames in localization of touch. J Exp Psychol Hum Percept Perform 2018; 44:1672-1682. [PMID: 30160504 PMCID: PMC6205026 DOI: 10.1037/xhp0000562] [Citation(s) in RCA: 4]
Abstract
The skin of the forearm is, in one sense, a flat 2-dimensional (2D) sheet, but in another sense approximately cylindrical, mirroring the 3-dimensional (3D) volumetric shape of the arm. The role of frames of reference based on the skin as a 2D sheet versus the musculoskeletal structure of the arm remains unclear. When we rotate the forearm from a pronated to a supinated posture, the skin on its surface is displaced. Thus, a marked location will slide with the skin across the underlying flesh, and a touch perceived at this location should follow this displacement if it is localized within a skin-based reference frame. We investigated, however, whether the perceived tactile locations were also affected by the rearrangement of the underlying musculoskeletal structure, that is, displaced medially and laterally on a pronated and supinated forearm, respectively. Participants pointed to perceived touches (Experiment 1), or marked them on a size-matched 3D forearm on a computer screen (Experiment 2). The perceived locations were indeed displaced medially after forearm pronation in both response modalities. This misperception was reduced (Experiment 1), or absent altogether (Experiment 2), in the supinated posture, when the actual stimulus grid moved laterally with the displaced skin. The grid was perceptually stretched along the medial-lateral axis and displaced distally, which suggests the influence of skin-based factors. Our study extends the tactile localization literature focused on the skin-based reference frame and on the effects of spatial positions of body parts by implicating musculoskeletal factors in the localization of touch on the body.
47
Plasticity based on compensatory effector use in the association but not primary sensorimotor cortex of people born without hands. Proc Natl Acad Sci U S A 2018; 115:7801-7806. [PMID: 29997174 PMCID: PMC6065047 DOI: 10.1073/pnas.1803926115] [Citation(s) in RCA: 25]
Abstract
What forces direct brain organization and its plasticity? When brain regions are deprived of their input, which regions reorganize based on compensation for the disability and experience, and which regions show topographically constrained plasticity? People born without hands activate their primary sensorimotor hand region while moving body parts used to compensate for this disability (e.g., their feet). This was taken to suggest a neural organization based on functions, such as performing manual-like dexterous actions, rather than on body parts, in primary sensorimotor cortex. We tested the selectivity for the compensatory body parts in the primary and association sensorimotor cortex of people born without hands (dysplasic individuals). Despite clear compensatory foot use, the primary sensorimotor hand area in the dysplasic subjects showed preference for adjacent body parts that are not compensatorily used as effectors. This suggests that function-based organization, proposed for congenital blindness and deafness, does not apply to the primary sensorimotor cortex deprivation in dysplasia. These findings stress the roles of neuroanatomical constraints like topographical proximity and connectivity in determining the functional development of primary cortex even in extreme, congenital deprivation. In contrast, increased and selective foot movement preference was found in dysplasics' association cortex in the inferior parietal lobule. This suggests that the typical motor selectivity of this region for manual actions may correspond to high-level action representations that are effector-invariant. These findings reveal limitations to compensatory plasticity and experience in modifying brain organization of early topographical cortex compared with association cortices driven by function-based organization.
48
Bakker RS, Selen LPJ, Medendorp WP. Reference frames in the decisions of hand choice. J Neurophysiol 2018; 119:1809-1817. [DOI: 10.1152/jn.00738.2017] [Citation(s) in RCA: 10]
Abstract
For the brain to decide on a reaching movement, it needs to select which hand to use. A number of body-centered factors affect this decision, such as the anticipated movement costs of each arm, recent choice success, handedness, and task demands. While the position of each hand relative to the target is also known to be an important spatial factor, it is unclear which reference frames coordinate the spatial aspects in the decisions of hand choice. Here we tested the role of gaze- and head-centered reference frames in a hand selection task. With their head and gaze oriented in different directions, we measured hand choice of 19 right-handed subjects instructed to make unimanual reaching movements to targets at various directions relative to their body. Using an adaptive procedure, we determined the target angle that led to equiprobable right/left hand choices. When gaze remained fixed relative to the body this balanced target angle shifted systematically with head orientation, and when head orientation remained fixed this choice measure shifted with gaze. These results suggest that a mixture of head- and gaze-centered reference frames is involved in the spatially guided decisions of hand choice, perhaps to flexibly bind this process to the mechanisms of target selection. NEW & NOTEWORTHY Decisions of target and hand choice are fundamental aspects of human reaching movements. While the reference frames involved in target choice have been identified, it is unclear which reference frames are involved in hand selection. We tested the role of gaze- and head-centered reference frames in a hand selection task. Findings emphasize the role of both spatial reference frames in the decisions of hand choice, in addition to known body-centered computations such as anticipated movement costs and handedness.
Affiliation(s)
- Romy S. Bakker
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Luc P. J. Selen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- W. Pieter Medendorp
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
49
Chen Y, Monaco S, Crawford JD. Neural substrates for allocentric-to-egocentric conversion of remembered reach targets in humans. Eur J Neurosci 2018. [PMID: 29512943 DOI: 10.1111/ejn.13885] [Citation(s) in RCA: 15]
Abstract
Targets for goal-directed action can be encoded in allocentric coordinates (relative to another visual landmark), but it is not known how these are converted into egocentric commands for action. Here, we investigated this using a slow event-related fMRI paradigm, based on our previous behavioural finding that the allocentric-to-egocentric (Allo-Ego) conversion for reach is performed at the first possible opportunity. Participants were asked to remember (and eventually reach towards) the location of a briefly presented target relative to another visual landmark. After a first memory delay, participants were forewarned by a verbal instruction if the landmark would reappear at the same location (potentially allowing them to plan a reach following the auditory cue before the second delay), or at a different location where they had to wait for the final landmark to be presented before response, and then reach towards the remembered target location. As predicted, participants showed landmark-centred directional selectivity in occipital-temporal cortex during the first memory delay, and only developed egocentric directional selectivity in occipital-parietal cortex during the second delay for the 'Same cue' task, and during response for the 'Different cue' task. We then compared cortical activation between these two tasks at the times when the Allo-Ego conversion occurred, and found common activation in right precuneus, right presupplementary area and bilateral dorsal premotor cortex. These results confirm that the brain converts allocentric codes to egocentric plans at the first possible opportunity, and identify the four most likely candidate sites specific to the Allo-Ego transformation for reaches.
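At its simplest, the Allo-Ego conversion described in this abstract amounts to vector addition: once the landmark's egocentric position is known, the target's landmark-relative code can be re-expressed as an egocentric reach goal. A minimal sketch of that idea, assuming 2D gaze-centered coordinates in degrees of visual angle; the function and variable names are illustrative, not from the study:

```python
def allo_to_ego(target_re_landmark, landmark_ego):
    """Convert an allocentric target code (target position relative to a
    visual landmark) into an egocentric reach goal, given the landmark's
    egocentric position. Coordinates are (x, y) visual angles in degrees."""
    return tuple(t + l for t, l in zip(target_re_landmark, landmark_ego))

# Landmark 10 deg right of fixation; target remembered 3 deg left of the landmark.
goal = allo_to_ego((-3.0, 0.0), (10.0, 0.0))  # egocentric goal: (7.0, 0.0)
```

The study's point is not the arithmetic itself but *when* the brain performs it: as soon as the final landmark position becomes available, rather than at movement onset.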
Affiliation(s)
- Ying Chen
- Center for Vision Research, Room 0009, Lassonde Building, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), Toronto, ON, Canada
- Simona Monaco
- Center for Mind/Brain Sciences, University of Trento, Trento, Italy
- J Douglas Crawford
- Center for Vision Research, Room 0009, Lassonde Building, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), Toronto, ON, Canada; Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
50
Role of Rostral Fastigial Neurons in Encoding a Body-Centered Representation of Translation in Three Dimensions. J Neurosci 2018; 38:3584-3602. [PMID: 29487123 DOI: 10.1523/jneurosci.2116-17.2018] [Citation(s) in RCA: 11]
Abstract
Many daily behaviors rely critically on estimates of our body motion. Such estimates must be computed by combining neck proprioceptive signals with vestibular signals that have been transformed from a head- to a body-centered reference frame. Recent studies showed that deep cerebellar neurons in the rostral fastigial nucleus (rFN) reflect these computations, but whether they explicitly encode estimates of body motion remains unclear. A key limitation in addressing this question is that, to date, cell tuning properties have only been characterized for a restricted set of motions across head-re-body orientations in the horizontal plane. Here we examined, for the first time, how 3D spatiotemporal tuning for translational motion varies with head-re-body orientation in both horizontal and vertical planes in the rFN of male macaques. While vestibular coding was profoundly influenced by head-re-body position in both planes, neurons typically reflected at most a partial transformation. However, their tuning shifts were not random but followed the specific spatial trajectories predicted for a 3D transformation. We show that these properties facilitate the linear decoding of fully body-centered motion representations in 3D with a broad range of temporal characteristics from small groups of 5-7 cells. These results demonstrate that the vestibular reference frame transformation required to compute body motion is indeed encoded by cerebellar neurons. We propose that maintaining partially transformed rFN responses with different spatiotemporal properties facilitates the creation of downstream body motion representations with a range of dynamic characteristics, consistent with the functional requirements for tasks such as postural control and reaching.
SIGNIFICANCE STATEMENT Estimates of body motion are essential for many daily activities. Vestibular signals are important contributors to such estimates but must be transformed from a head- to a body-centered reference frame. Here, we provide the first direct demonstration that the cerebellum computes this transformation fully in 3D. We show that the output of these computations is reflected in the tuning properties of deep cerebellar rostral fastigial nucleus neurons in a specific distributed fashion that facilitates the efficient creation of body-centered translation estimates with a broad range of temporal properties (i.e., from acceleration to position). These findings support an important role for the rostral fastigial nucleus as a source of body translation estimates functionally relevant for behaviors ranging from postural control to perception.
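The head-to-body reference frame transformation at the core of this abstract can be illustrated, restricted to the horizontal plane, as a rotation of the head-centered vestibular translation vector by the head-on-body (neck) angle. A minimal sketch under that simplifying 2D assumption; the function name and sign convention are illustrative, not from the paper, and the full transformation studied here is 3D:

```python
import math

def head_to_body(v_head, neck_yaw_deg):
    """Rotate a head-centered translation vector (x, y) into body-centered
    coordinates, given the yaw angle of the head on the trunk.
    Convention assumed here: positive yaw = head turned counterclockwise
    (leftward) relative to the body; horizontal-plane sketch only."""
    a = math.radians(neck_yaw_deg)
    x, y = v_head
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))
```

With the head turned 90 deg leftward, a "forward" head-centered translation (1, 0) maps onto the body's leftward axis, which is the kind of orientation-dependent remapping whose neural signature the study characterizes.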