1
Taghizadeh B, Fortmann O, Gail A. Position- and scale-invariant object-centered spatial localization in monkey frontoparietal cortex dynamically adapts to cognitive demand. Nat Commun 2024; 15:3357. PMID: 38637493; PMCID: PMC11026390; DOI: 10.1038/s41467-024-47554-4.
Abstract
Egocentric encoding is a well-known property of brain areas along the dorsal pathway. Unlike previous experiments, which typically demanded egocentric spatial processing only during movement preparation, we designed a task in which two male rhesus monkeys memorized an on-the-object target position and then planned a reach to this position after the object reappeared at a variable location and with a potentially different size. We found allocentric (in addition to egocentric) encoding in the dorsal-stream reach-planning areas, the parietal reach region and dorsal premotor cortex, that is invariant with respect to the position and, remarkably, also the size of the object. The dynamic adjustment from predominantly allocentric encoding during visual memory to predominantly egocentric encoding during reach planning in the same brain areas, and often the same neurons, suggests that the prevailing frame of reference is less a question of brain area or processing stream than of cognitive demand.
Affiliation(s)
- Bahareh Taghizadeh
- Sensorimotor Group, German Primate Center, Göttingen, Germany
- School of Cognitive Science, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5746, Tehran, Iran
- Ole Fortmann
- Sensorimotor Group, German Primate Center, Göttingen, Germany
- Faculty of Biology and Psychology, University of Göttingen, Göttingen, Germany
- Alexander Gail
- Sensorimotor Group, German Primate Center, Göttingen, Germany
- Faculty of Biology and Psychology, University of Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Leibniz ScienceCampus Primate Cognition, Göttingen, Germany
2
Abstract
The aim of the current study was to develop a novel task that allows quick assessment of spatial memory precision with minimal technical and training requirements. In this task, participants memorized the position of an object in a virtual room and then judged, from a different perspective, whether the object had moved to the left or to the right. Results revealed a systematic response bias that we termed the reversed congruency effect: participants performed worse when the camera and the object moved in the same direction than when they moved in opposite directions. Notably, participants responded correctly in almost 100% of the incongruent trials, regardless of the distance by which the object was displaced. In Experiment 2, we showed that this effect cannot be explained by the movement of the object on the screen, but rather relates to the perspective shift and the movement of the object in the virtual world. We also showed that the presence of additional objects in the environment reduces the reversed congruency effect such that it no longer predicts performance. In Experiment 3, we showed that the reversed congruency effect is greater in older adults, suggesting that the quality of spatial memory and perspective-taking abilities is critical. Overall, our results suggest that this effect is driven by difficulties in precisely encoding object locations in the environment and in understanding how perspective shifts affect the projected positions of objects in the two-dimensional image.
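The closing point of this abstract, that observers struggle to anticipate how a perspective shift changes an object's projected position, can be illustrated with a pinhole-camera projection. This is a minimal sketch under simplifying assumptions (a purely translating camera, arbitrary focal length and depth values), not the experiment's actual viewpoint geometry:

```python
# Minimal pinhole-projection sketch: the on-screen horizontal coordinate of a
# point at world position X and depth Z, viewed by a camera at position C with
# focal length f, is f * (X - C) / Z. All numbers are illustrative assumptions.

def project(X, C, Z=4.0, f=2.0):
    """Horizontal screen coordinate of a world point at X seen from camera at C."""
    return f * (X - C) / Z

x_before = project(1.0, 0.0)      # object at X=1, camera at origin
# Incongruent case: camera shifts right, object shifts left in the world.
# The projected position changes substantially, so the judgment is easy.
x_incongruent = project(0.5, 0.5)
# Congruent case: camera and object both shift right by the same amount.
# The projection is unchanged even though the object moved in the world,
# which is consistent with congruent trials being harder to judge.
x_congruent = project(1.5, 0.5)
```

Under these assumptions, a congruent camera/object shift can leave the 2D image position of the object identical, so a correct "it moved" judgment requires reasoning about the 3D scene rather than the screen.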
3
Karimpur H, Eftekharifar S, Troje NF, Fiehler K. Spatial coding for memory-guided reaching in visual and pictorial spaces. J Vis 2020; 20:1. PMID: 32271893; PMCID: PMC7405696; DOI: 10.1167/jov.20.4.1.
Abstract
An essential difference between pictorial space, displayed as paintings, photographs, or computer screens, and the visual space experienced in the real world is that the observer has a defined location, and thus valid information about the distance and direction of objects, in the latter but not in the former. Egocentric information should therefore be more reliable in visual space, whereas allocentric information should be more reliable in pictorial space. The majority of studies have relied on pictorial representations (images on a computer screen), leaving it unclear whether the same coding mechanisms apply in visual space. Using a memory-guided reaching task in virtual reality, we investigated allocentric coding in both visual space (on a table in virtual reality) and pictorial space (on a monitor that is on the table in virtual reality). Our results suggest that the brain uses allocentric information to represent objects in both pictorial and visual space. Contrary to our hypothesis, the influence of allocentric cues was stronger in visual space than in pictorial space, even after controlling for retinal stimulus size, confounding allocentric cues, and differences in presentation depth. We discuss possible reasons for stronger allocentric coding in visual than in pictorial space.
Affiliation(s)
- Harun Karimpur
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
- Nikolaus F. Troje
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
- Centre for Vision Research and Department of Biology, York University, Toronto, ON, Canada
- Katja Fiehler
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
4
Mikula L, Gaveau V, Pisella L, Khan AZ, Blohm G. Learned rather than online relative weighting of visual-proprioceptive sensory cues. J Neurophysiol 2018; 119:1981-1992. PMID: 29465322; DOI: 10.1152/jn.00338.2017.
Abstract
When reaching to an object, information about the target location as well as the initial hand position is required to program the motor plan for the arm. The initial hand position can be determined from proprioceptive information as well as visual information, if available. Bayes-optimal integration posits that we utilize all available information, with greater weight on the more reliable sense, thus generally weighting visual information more than the usually less reliable proprioceptive information. The criterion by which information is weighted has not been explicitly investigated; it has been assumed that the weights are based on task- and effector-dependent sensory reliability, requiring an explicit neuronal representation of variability. However, the weights could also be determined implicitly through learned modality-specific integration weights rather than effector-dependent reliability. While the former hypothesis predicts different proprioceptive weights for the left and right hands, e.g., due to different reliabilities of dominant vs. nondominant hand proprioception, the latter hypothesis predicts the same integration weights for both hands. We found that the proprioceptive weights for the left and right hands were extremely consistent, regardless of differences in sensory variability between the two hands as measured in two separate complementary tasks. We therefore propose that proprioceptive weights during reaching are learned across both hands, with a high interindividual range but independent of each hand's specific proprioceptive variability.

NEW & NOTEWORTHY How visual and proprioceptive information about the hand is integrated to plan a reaching movement is still debated. The goal of this study was to clarify how the weights assigned to vision and proprioception during multisensory integration are determined. We found evidence that the integration weights are modality specific rather than based on the sensory reliabilities of the effectors.
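The Bayes-optimal (minimum-variance) integration scheme this abstract refers to can be sketched with the textbook inverse-variance weighting formula. The numbers below are illustrative assumptions, not the study's data:

```python
# Standard minimum-variance cue integration: each cue is weighted by its
# inverse variance (reliability), so the more reliable cue dominates.

def integrate(x_vis, var_vis, x_prop, var_prop):
    """Combine visual and proprioceptive estimates of hand position."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_prop)
    w_prop = 1 - w_vis
    estimate = w_vis * x_vis + w_prop * x_prop
    # The fused estimate has lower variance than either cue alone.
    var_combined = 1 / (1 / var_vis + 1 / var_prop)
    return estimate, w_vis, var_combined

# Vision (variance 1.0) more reliable than proprioception (variance 4.0):
est, w_vis, var_c = integrate(x_vis=10.0, var_vis=1.0, x_prop=14.0, var_prop=4.0)
# w_vis = 0.8, est = 10.8, var_c = 0.8
```

The study's contrast is between this reliability-derived weighting, which would predict hand-specific weights, and fixed, learned modality-specific weights, which would predict identical weights for both hands.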
Affiliation(s)
- Laura Mikula
- Centre de Recherche en Neurosciences de Lyon, ImpAct Team, INSERM U1028, CNRS UMR 5292, Lyon 1 University, Bron Cedex, France
- School of Optometry, University of Montreal, Montreal, Quebec, Canada
- Valérie Gaveau
- Centre de Recherche en Neurosciences de Lyon, ImpAct Team, INSERM U1028, CNRS UMR 5292, Lyon 1 University, Bron Cedex, France
- Laure Pisella
- Centre de Recherche en Neurosciences de Lyon, ImpAct Team, INSERM U1028, CNRS UMR 5292, Lyon 1 University, Bron Cedex, France
- Aarlenne Z Khan
- School of Optometry, University of Montreal, Montreal, Quebec, Canada
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
5
Klinghammer M, Blohm G, Fiehler K. Scene Configuration and Object Reliability Affect the Use of Allocentric Information for Memory-Guided Reaching. Front Neurosci 2017; 11:204. PMID: 28450826; PMCID: PMC5390010; DOI: 10.3389/fnins.2017.00204.
Abstract
Previous research has shown that egocentric and allocentric information is used for coding target locations for memory-guided reaching movements. In particular, task relevance determines the use of objects as allocentric cues. Here, we investigated the influence of scene configuration and object reliability, as a function of task relevance, on allocentric coding for memory-guided reaching. For that purpose, we presented participants with images of a naturalistic breakfast scene with five objects on a table and six objects in the background. Six of these objects served as potential reach targets (= task-relevant objects). Participants explored the scene, and after a short delay a test scene appeared with one of the task-relevant objects missing, indicating the location of the reach target. After the test scene vanished, participants performed a memory-guided reaching movement toward the target location. Besides removing one object from the test scene, we also shifted the remaining task-relevant and/or task-irrelevant objects left- or rightwards, either coherently in the same direction or incoherently in opposite directions. By varying object coherence, we manipulated the reliability of task-relevant and task-irrelevant objects in the scene. To examine the influence of scene configuration (distributed vs. grouped arrangement of task-relevant objects) on allocentric coding, we compared the present data with our previously published data set (Klinghammer et al., 2015). We found that reaching errors systematically deviated in the direction of object shifts, but only when the objects were task-relevant and their reliability was high. However, this effect was substantially reduced when task-relevant objects were distributed across the scene, leading to a larger target-cue distance compared to a grouped configuration. No deviations of reach endpoints were observed in conditions with shifts of only task-irrelevant objects or with low object reliability, irrespective of task relevance. Moreover, when solely task-relevant objects were shifted incoherently, the variability of reaching endpoints increased compared to coherent shifts of task-relevant objects. Our results suggest that the use of allocentric information for coding targets for memory-guided reaching depends on the scene configuration, in particular the average distance of the reach target to task-relevant objects, and on the reliability of task-relevant allocentric information.
Affiliation(s)
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
- Katja Fiehler
- Experimental Psychology, Justus-Liebig-University, Giessen, Germany
6
Dassonville P, Lester BD, Reed SA. An allocentric exception confirms an egocentric rule: a comment on Taghizadeh and Gail (2014). Front Hum Neurosci 2014; 8:942. PMID: 25520637; PMCID: PMC4251315; DOI: 10.3389/fnhum.2014.00942.
Affiliation(s)
- Paul Dassonville
- Department of Psychology and Institute of Neuroscience, University of Oregon, Eugene, OR, USA
- Benjamin D. Lester
- Department of Neurology, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Scott A. Reed
- Department of Psychology and Institute of Neuroscience, University of Oregon, Eugene, OR, USA