1
Baltaretu BR, Schuetz I, Võ MLH, Fiehler K. Scene semantics affects allocentric spatial coding for action in naturalistic (virtual) environments. Sci Rep 2024; 14:15549. [PMID: 38969745] [PMCID: PMC11226608] [DOI: 10.1038/s41598-024-66428-9]
Abstract
Interacting with objects in our environment requires determining their locations, often with respect to surrounding objects (i.e., allocentrically). According to the scene grammar framework, these usually small, local objects are movable within a scene and represent the lowest level of a scene's hierarchy. How do higher hierarchical levels of scene grammar influence allocentric coding for memory-guided actions? Here, we focused on the effect of large, immovable objects (anchors) on the encoding of local object positions. In a virtual reality study, participants (n = 30) viewed one of four possible scenes (two kitchens or two bathrooms) with two anchors connected by a shelf, on which three local objects (congruent with one anchor) were presented (Encoding). The scene was re-presented (Test) with (1) the local objects missing and (2) one of the anchors shifted (Shift) or not (No shift). Participants then saw a floating local object (the target), which they grabbed and placed back on the shelf at its remembered position (Response). Eye-tracking data revealed that both local objects and anchors were fixated, with a preference for local objects. Additionally, anchors guided allocentric coding of local objects, despite being task-irrelevant. Overall, anchors implicitly influence spatial coding of local object locations for memory-guided actions within naturalistic (virtual) environments.
Affiliation(s)
- Bianca R Baltaretu
- Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10F, 35394, Giessen, Hesse, Germany
- Immo Schuetz
- Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10F, 35394, Giessen, Hesse, Germany
- Melissa L-H Võ
- Department of Psychology, Goethe University Frankfurt, 60323, Frankfurt am Main, Hesse, Germany
- Katja Fiehler
- Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10F, 35394, Giessen, Hesse, Germany
2
Schuetz I, Baltaretu BR, Fiehler K. Where was this thing again? Evaluating methods to indicate remembered object positions in virtual reality. J Vis 2024; 24:10. [PMID: 38995109] [PMCID: PMC11246095] [DOI: 10.1167/jov.24.7.10]
Abstract
A current focus in sensorimotor research is the study of human perception and action in increasingly naturalistic tasks and visual environments. This is further enabled by the recent commercial success of virtual reality (VR) technology, which allows for highly realistic but well-controlled three-dimensional (3D) scenes. VR enables a multitude of different ways to interact with virtual objects, but only rarely are such interaction techniques evaluated and compared before being selected for a sensorimotor experiment. Here, we compare different response techniques for a memory-guided action task, in which participants indicated the position of a previously seen 3D object in a VR scene: pointing, using a virtual laser pointer of short or unlimited length, and placing, either the target object itself or a generic reference cube. Response techniques differed in availability of 3D object cues and requirement to physically move to the remembered object position by walking. Object placement was the most accurate but slowest due to repeated repositioning. When placing objects, participants tended to match the original object's orientation. In contrast, the laser pointer was fastest but least accurate, with the short pointer showing a good speed-accuracy compromise. Our findings can help researchers in selecting appropriate methods when studying naturalistic visuomotor behavior in virtual environments.
Affiliation(s)
- Immo Schuetz
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Katja Fiehler
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), Philipps University Marburg and Justus Liebig University, Giessen, Germany
3
Musa L, Yan X, Crawford JD. Instruction alters the influence of allocentric landmarks in a reach task. J Vis 2024; 24:17. [PMID: 39073800] [PMCID: PMC11290568] [DOI: 10.1167/jov.24.7.17]
Abstract
Allocentric landmarks have an implicit influence on aiming movements, but it is not clear how an explicit instruction (to aim relative to a landmark) influences reach accuracy and precision. Here, 12 participants performed a task with two instruction conditions (egocentric vs. allocentric) but with similar sensory and motor conditions. Participants fixated gaze near the center of a display aligned with their right shoulder while a target stimulus briefly appeared alongside a visual landmark in one visual field. After a brief mask/memory delay the landmark then reappeared at a different location (same or opposite visual field), creating an ego/allocentric conflict. In the egocentric condition, participants were instructed to ignore the landmark and point toward the remembered location of the target. In the allocentric condition, participants were instructed to remember the initial target location relative to the landmark and then reach relative to the shifted landmark (same or opposite visual field). To equalize motor execution between tasks, participants were instructed to anti-point (point to the visual field opposite to the remembered target) on 50% of the egocentric trials. Participants were more accurate and precise and quicker to react in the allocentric condition, especially when pointing to the opposite field. We also observed a visual field effect, where performance was worse overall in the right visual field. These results suggest that, when egocentric and allocentric cues conflict, explicit use of the visual landmark provides better reach performance than reliance on noisy egocentric signals. Such instructions might aid rehabilitation when the egocentric system is compromised by disease or injury.
Affiliation(s)
- Lina Musa
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- Department of Psychology, York University, Toronto, ON, Canada
- Xiaogang Yan
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- J Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- Department of Psychology, York University, Toronto, ON, Canada
- Departments of Biology and Kinesiology & Health Sciences, York University, Toronto, ON, Canada
4
Bays PM, Schneegans S, Ma WJ, Brady TF. Representation and computation in visual working memory. Nat Hum Behav 2024; 8:1016-1034. [PMID: 38849647] [DOI: 10.1038/s41562-024-01871-2]
Abstract
The ability to sustain internal representations of the sensory environment beyond immediate perception is a fundamental requirement of cognitive processing. In recent years, debates regarding the capacity and fidelity of the working memory (WM) system have advanced our understanding of the nature of these representations. In particular, there is growing recognition that WM representations are not merely imperfect copies of a perceived object or event. New experimental tools have revealed that observers possess richer information about the uncertainty in their memories and take advantage of environmental regularities to use limited memory resources optimally. Meanwhile, computational models of visuospatial WM formulated at different levels of implementation have converged on common principles relating capacity to variability and uncertainty. Here we review recent research on human WM from a computational perspective, including the neural mechanisms that support it.
Affiliation(s)
- Paul M Bays
- Department of Psychology, University of Cambridge, Cambridge, UK
- Wei Ji Ma
- Center for Neural Science and Department of Psychology, New York University, New York, NY, USA
- Timothy F Brady
- Department of Psychology, University of California, San Diego, La Jolla, CA, USA
5
Liu B, Alexopoulou ZS, van Ede F.
Abstract
Working memory enables us to bridge past sensory information to upcoming future behaviour. Accordingly, by its very nature, working memory is concerned with two components: the past and the future. Yet, in conventional laboratory tasks, these two components are often conflated, such as when sensory information in working memory is encoded and tested at the same location. We developed a task in which we dissociated the past (encoded location) and future (to-be-tested location) attributes of visual contents in working memory. This enabled us to independently track the utilisation of past and future memory attributes through gaze, as observed during mnemonic selection. Our results reveal the joint consideration of past and future locations. This was prevalent even at the single-trial level of individual saccades that were jointly biased to the past and future. This uncovers the rich nature of working memory representations, whereby both past and future memory attributes are retained and can be accessed together when memory contents become relevant for behaviour.
Affiliation(s)
- Baiwei Liu
- Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Zampeta-Sofia Alexopoulou
- Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Freek van Ede
- Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
6
Forster PP, Fiehler K, Karimpur H. Egocentric cues influence the allocentric spatial memory of object configurations for memory-guided actions. J Neurophysiol 2023; 130:1142-1149. [PMID: 37791381] [DOI: 10.1152/jn.00149.2023]
Abstract
Allocentric and egocentric reference frames are used to code the spatial position of action targets in reference to objects in the environment, i.e., relative to landmarks (allocentric), or the observer (egocentric). Previous research investigated reference frames in isolation, for example, by shifting landmarks relative to the target and asking participants to reach to the remembered target location. Systematic reaching errors were found in the direction of the landmark shift and used as a proxy for allocentric spatial coding. Here, we examined the interaction of both allocentric and egocentric reference frames by shifting the landmarks as well as the observer. We asked participants to encode a three-dimensional configuration of balls and to reproduce this configuration from memory after a short delay followed by a landmark or an observer shift. We also manipulated the number of landmarks to test its effect on the use of allocentric and egocentric reference frames. We found that participants were less accurate when reproducing the configuration of balls after an observer shift, which was reflected in larger configurational errors. In addition, an increase in the number of landmarks led to a stronger reliance on allocentric cues and a weaker contribution of egocentric cues. In sum, our results highlight the important role of egocentric cues for allocentric spatial coding in the context of memory-guided actions.
NEW & NOTEWORTHY Objects in our environment are coded relative to each other (allocentrically) and are thought to serve as independent and reliable cues (landmarks) in the context of unreliable egocentric signals. Contrary to this assumption, we demonstrate that egocentric cues alter the allocentric spatial memory, which could reflect recently discovered interactions between allocentric and egocentric neural processing pathways. Furthermore, additional landmarks lead to a higher contribution of allocentric and a lower contribution of egocentric cues.
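The landmark-shift logic summarized in this abstract reduces to a simple analysis: the slope of reach-endpoint errors regressed on landmark displacement estimates the allocentric weight. A minimal sketch on synthetic data (all values and names are illustrative assumptions, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic landmark-shift experiment: on each trial the landmarks are
# displaced and the reach endpoint follows them by a fraction w (the
# allocentric weight), plus motor/memory noise.
n_trials = 200
true_w = 0.4
landmark_shift = rng.choice([-5.0, 0.0, 5.0], size=n_trials)          # cm
endpoint_error = true_w * landmark_shift + rng.normal(0.0, 1.0, n_trials)

# The least-squares slope of endpoint error on landmark shift estimates
# the allocentric weight: 0 = purely egocentric, 1 = purely allocentric.
w_hat, _ = np.polyfit(landmark_shift, endpoint_error, 1)
print(f"estimated allocentric weight: {w_hat:.2f}")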
Affiliation(s)
- Pierre-Pascal Forster
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
- Katja Fiehler
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
- Harun Karimpur
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
7
Schütz A, Bharmauria V, Yan X, Wang H, Bremmer F, Crawford JD. Integration of landmark and saccade target signals in macaque frontal cortex visual responses. Commun Biol 2023; 6:938. [PMID: 37704829] [PMCID: PMC10499799] [DOI: 10.1038/s42003-023-05291-2]
Abstract
Visual landmarks influence spatial cognition and behavior, but their influence on visual codes for action is poorly understood. Here, we test landmark influence on the visual response to saccade targets recorded from 312 frontal and 256 supplementary eye field neurons in rhesus macaques. Visual response fields were characterized by recording neural responses to various target-landmark combinations and then tested against several candidate spatial models. Overall, frontal/supplementary eye field response fields preferentially code either saccade targets (40%/40%) or landmarks (30%/4.5%) in gaze fixation-centered coordinates, but most cells show multiplexed target-landmark coding within intermediate reference frames (between fixation-centered and landmark-centered). Further, these coding schemes interact: neurons with near-equal target and landmark coding show the biggest shift from fixation-centered toward landmark-centered target coding. These data show that landmark information is preserved and influences target coding in prefrontal visual responses, likely to stabilize movement goals in the presence of noisy egocentric signals.
Affiliation(s)
- Adrian Schütz
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain, and Behavior - CMBB, Philipps-Universität Marburg, Marburg, Germany & Justus-Liebig-Universität Giessen, Giessen, Germany
- Vishal Bharmauria
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Xiaogang Yan
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Hongying Wang
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Frank Bremmer
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain, and Behavior - CMBB, Philipps-Universität Marburg, Marburg, Germany & Justus-Liebig-Universität Giessen, Giessen, Germany
- J Douglas Crawford
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, Canada
8
Hassan NM, Khan SAR, Ashraf MU, Sheikh AA. Interconnection between the role of blockchain technologies, supply chain integration, and circular economy: A case of small and medium-sized enterprises in Pakistan. Sci Prog 2023; 106:368504231186527. [PMID: 37437130] [PMCID: PMC10358506] [DOI: 10.1177/00368504231186527]
Abstract
Increased industrialization has led to unprecedented resource depletion on a global scale. The current state of affairs has compelled practitioners and academics to investigate the role of sustainable technologies in greening the operations of businesses. Previous studies have examined a number of operational aspects for their role in making firms sustainable, yet research on the utility of blockchain technologies is in its infancy. The role of BT in enhancing integration across supply chains has been in the limelight in the recent past. At the same time, its ability to improve sustainable supply chain performance (SSCP) in sync with the circular economy (CE) and supply chain integration (SCI) has largely remained unexplored. Therefore, this study intends to examine the association between blockchain technologies (BTs) and SSCP through integration to fill these empirical gaps. The study also investigated the moderating role of the CE on the relationship between multiple extents of SCI and SSCP. Based on dynamic capability theory (DCT), the study considered BT a dynamic resource. BTs are used to integrate and reenergize the relationships with upstream and downstream channel members in pursuit of sustainable performance outcomes. The study opted for a cross-sectional design, where data were collected through convenience sampling from 475 managers of SMEs operating across Pakistan. PLS-SEM was used to analyze the data and generate the required empirical outcomes. Study results supported a significant association between BT and SSCP, followed by a significant mediating role of SCI dimensions and a moderating role of the CE. The study's findings propagate the utility of BT adoption for SMEs, which holds the potential for firms to achieve system-wide integration and sustainable outcomes. The given empirical investigation holds valuable insights for practitioners and scholars intending to pursue research on the subject matter.
Affiliation(s)
- Nadir Munir Hassan
- Department of Business Administration, Air University, Multan Campus, Multan, Pakistan
- Syed Abdul Rehman Khan
- School of Engineering Management, Xuzhou University of Technology, Xuzhou, China
- Ribat Business School, International University of Ribat, Morocco
- Muhammad Umair Ashraf
- Institute of Business, Management, and Administrative Sciences, The Islamia University of Bahawalpur, Bahawalpur, Pakistan
- Adnan Ahmed Sheikh
- Department of Business Administration, Air University, Multan Campus, Multan, Pakistan
9
Kähönen J. Psychedelic unselfing: self-transcendence and change of values in psychedelic experiences. Front Psychol 2023; 14:1104627. [PMID: 37388660] [PMCID: PMC10300451] [DOI: 10.3389/fpsyg.2023.1104627]
Abstract
Psychedelic experiences have been shown to both facilitate (re)connection to one's values and change values, including enhancing aesthetic appreciation, promoting pro-environmental attitudes, and encouraging prosocial behavior. This article presents an empirically informed framework of philosophical psychology to understand how self-transcendence relates to psychedelic value changes. Most of the observed psychedelic value changes are toward the self-transcendent values of Schwartz's value theory. As psychedelics also reliably cause various self-transcendent experiences (STEs), a parsimonious hypothesis is that STEs change values toward self-transcendent values. I argue that STEs indeed can lead to value changes, and discuss the morally relevant process of self-transcendence through Iris Murdoch's concept of "unselfing". I argue that overt egocentric concerns easily bias one's valuations. Unselfing reduces egocentric attributions of salience and enhances non-egocentric attention to the world, widening one's perspective and shifting evaluation toward self-transcendent modes. Values are inherently tied to various evaluative contexts, and unselfing can attune the individual to evaluative contexts and accompanying values beyond the self. Understood this way, psychedelics can provide temporarily enhanced access to self-transcendent values and function as sources of aspiration and value change. However, contextual factors can complicate whether STEs lead to long-term changes in values. The framework is supported by various research strands establishing empirical and conceptual connections between long-term differences in egocentricity, STEs, and self-transcendent values. Furthermore, the link between unselfing and value changes is supported by phenomenological and theoretical analysis of psychedelic experiences, as well as empirical findings on their long-term effects. This article furthers understanding of psychedelic value changes and contributes to discussions on whether value changes are justified, whether they result from cultural context, and whether psychedelics could function as tools of moral neuroenhancement.
10
Abedi Khoozani P, Bharmauria V, Schütz A, Wildes RP, Crawford JD. Integration of allocentric and egocentric visual information in a convolutional/multilayer perceptron network model of goal-directed gaze shifts. Cereb Cortex Commun 2022; 3:tgac026. [PMID: 35909704] [PMCID: PMC9334293] [DOI: 10.1093/texcom/tgac026]
Abstract
Allocentric (landmark-centered) and egocentric (eye-centered) visual codes are fundamental for spatial cognition, navigation, and goal-directed movement. Neuroimaging and neurophysiology suggest these codes are initially segregated, but then reintegrated in frontal cortex for movement control. We created and validated a theoretical framework for this process using physiologically constrained inputs and outputs. To implement a general framework, we integrated a convolutional neural network (CNN) of the visual system with a multilayer perceptron (MLP) model of the sensorimotor transformation. The network was trained on a task where a landmark shifted relative to the saccade target. These visual parameters were input to the CNN, the CNN output and initial gaze position to the MLP, and a decoder transformed MLP output into saccade vectors. Decoded saccade output replicated idealized training sets with various allocentric weightings and actual monkey data where the landmark shift had a partial influence (R² = 0.8). Furthermore, MLP output units accurately simulated prefrontal response field shifts recorded from monkeys during the same paradigm. In summary, our model replicated both the general properties of the visuomotor transformations for gaze and specific experimental results obtained during allocentric–egocentric integration, suggesting it can provide a general framework for understanding these and other complex visuomotor behaviors.
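As a rough illustration of the two-stage architecture described here (a CNN front end, an MLP that also receives initial gaze position, and a linear read-out of the saccade vector), the following PyTorch sketch may help; layer sizes, image dimensions, and all names are arbitrary assumptions rather than the authors' specification:

```python
import torch
import torch.nn as nn

# Toy analogue of the CNN + MLP model: a CNN encodes a retinal image
# containing target and landmark, an MLP combines the CNN code with
# initial gaze position, and a linear decoder reads out a 2-D saccade
# vector (illustrative only).
class GazeShiftNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(                       # "visual system" stage
            nn.Conv2d(1, 8, 5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.mlp = nn.Sequential(                       # "sensorimotor" stage
            nn.Linear(16 * 12 * 12 + 2, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        self.decoder = nn.Linear(32, 2)                 # saccade-vector readout

    def forward(self, retinal_image, gaze_position):
        visual_code = self.cnn(retinal_image)
        hidden = self.mlp(torch.cat([visual_code, gaze_position], dim=1))
        return self.decoder(hidden)

net = GazeShiftNet()
images = torch.randn(4, 1, 60, 60)    # batch of fake 60x60 retinal images
gaze = torch.randn(4, 2)              # initial 2-D gaze positions
print(net(images, gaze).shape)        # torch.Size([4, 2])
```

Training such a network on target and landmark-shift inputs with different allocentric weightings in the labels would mirror, in miniature, the idealized training sets mentioned in the abstract.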
Affiliation(s)
- Parisa Abedi Khoozani
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Vishal Bharmauria
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Adrian Schütz
- Department of Neurophysics, Philipps-Universität Marburg, Marburg 35037, Germany
- Richard P Wildes
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Department of Electrical Engineering and Computer Science, York University, Toronto, ON M3J 1P3, Canada
- J Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Departments of Psychology, Biology and Kinesiology & Health Sciences, York University, Toronto, Ontario M3J 1P3, Canada
11
Liu(刘) R, Bögels S, Bird G, Medendorp WP, Toni I. Hierarchical Integration of Communicative and Spatial Perspective-Taking Demands in Sensorimotor Control of Referential Pointing. Cogn Sci 2022; 46:e13084. [PMID: 35066907] [PMCID: PMC9287027] [DOI: 10.1111/cogs.13084]
Abstract
Recognized as a simple communicative behavior, referential pointing is cognitively complex because it invites a communicator to consider an addressee's knowledge. Although we know referential pointing is affected by addressees’ physical location, it remains unclear whether and how communicators’ inferences about addressees’ mental representation of the interaction space influence sensorimotor control of referential pointing. The communicative perspective‐taking task requires a communicator to point at one out of multiple referents either to instruct an addressee which one should be selected (communicative, COM) or to predict which one the addressee will select (non‐communicative, NCOM), based on either which referents can be seen (Level‐1 perspective‐taking, PT1) or how the referents were perceived (Level‐2 perspective‐taking, PT2) by the addressee. Communicators took longer to initiate the movements in PT2 than PT1 trials, and they held their pointing fingers for longer at the referent in COM than NCOM trials. The novel findings of this study pertain to trajectory control of the pointing movements. Increasing both communicative and perspective‐taking demands led to longer pointing trajectories, with an under‐additive interaction between those two experimental factors. This finding suggests that participants generate communicative behaviors that are as informative as required rather than overly exaggerated displays, by integrating communicative and perspective‐taking information hierarchically during sensorimotor control. This observation has consequences for models of human communication. It implies that the format of communicative and perspective‐taking knowledge needs to be commensurate with the movement dynamics controlled by the sensorimotor system.
Affiliation(s)
- Rui(睿) Liu(刘)
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Sara Bögels
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Geoffrey Bird
- Department of Experimental Psychology, University of Oxford
- Social, Genetic and Developmental Psychiatry Centre, Institute of Psychiatry, Psychology & Neuroscience, King's College London
- Ivan Toni
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
12
Crowe EM, Bossard M, Brenner E. Can ongoing movements be guided by allocentric visual information when the target is visible? J Vis 2021; 21:6. [PMID: 33427872] [PMCID: PMC7804519] [DOI: 10.1167/jov.21.1.6]
Abstract
People use both egocentric (object-to-self) and allocentric (object-to-object) spatial information to interact with the world. Evidence for allocentric information guiding ongoing actions stems from studies in which people reached to where targets had previously been seen while other objects were moved. Since egocentric position judgments might fade or change when the target is removed, we sought for conditions in which people might benefit from relying on allocentric information when the target remains visible. We used a task that required participants to intercept targets that moved across a screen using a cursor that represented their finger but that moved by a different amount in a different plane. During each attempt, we perturbed the target, cursor, or background individually or all three simultaneously such that their relative positions did not change and there was no need to adjust the ongoing movement. An obvious way to avoid responding to such simultaneous perturbations is by relying on allocentric information. Relying on egocentric information would give a response that resembles the combined responses to the three isolated perturbations. The hand responded in accordance with the responses to the isolated perturbations despite the differences between how the finger and cursor moved. This response remained when the simultaneous perturbation was repeated many times, suggesting that participants hardly relied upon allocentric spatial information to control their ongoing visually guided actions.
Affiliation(s)
- Emily M Crowe
- Department of Human Movement Sciences, Institute of Brain and Behaviour Amsterdam, Amsterdam Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Eli Brenner
- Department of Human Movement Sciences, Institute of Brain and Behaviour Amsterdam, Amsterdam Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
13
Spatiotemporal Coding in the Macaque Supplementary Eye Fields: Landmark Influence in the Target-to-Gaze Transformation. eNeuro 2021; 8:ENEURO.0446-20.2020. [PMID: 33318073] [PMCID: PMC7877461] [DOI: 10.1523/eneuro.0446-20.2020]
Abstract
Eye-centered (egocentric) and landmark-centered (allocentric) visual signals influence spatial cognition, navigation, and goal-directed action, but the neural mechanisms that integrate these signals for motor control are poorly understood. A likely candidate for egocentric/allocentric integration in the gaze control system is the supplementary eye fields (SEF), a mediofrontal structure with high-level “executive” functions, spatially tuned visual/motor response fields, and reciprocal projections with the frontal eye fields (FEF). To test this hypothesis, we trained two head-unrestrained monkeys (Macaca mulatta) to saccade toward a remembered visual target in the presence of a visual landmark that shifted during the delay, causing gaze end points to shift partially in the same direction. A total of 256 SEF neurons were recorded, including 68 with spatially tuned response fields. Model fits to the latter established that, like the FEF and superior colliculus (SC), spatially tuned SEF responses primarily showed an egocentric (eye-centered) target-to-gaze position transformation. However, the landmark shift influenced this default egocentric transformation: during the delay, motor neurons (with no visual response) showed a transient but unintegrated shift (i.e., not correlated with the target-to-gaze transformation), whereas during the saccade-related burst visuomotor (VM) neurons showed an integrated shift (i.e., correlated with the target-to-gaze transformation). This differed from our simultaneous FEF recordings (Bharmauria et al., 2020), which showed a transient shift in VM neurons, followed by an integrated response in all motor responses. Based on these findings and past literature, we propose that prefrontal cortex incorporates landmark-centered information into a distributed, eye-centered target-to-gaze transformation through a reciprocal prefrontal circuit.
14
Karimpur H, Kurz J, Fiehler K. The role of perception and action on the use of allocentric information in a large-scale virtual environment. Exp Brain Res 2020; 238:1813-1826. [PMID: 32500297] [PMCID: PMC7438369] [DOI: 10.1007/s00221-020-05839-2]
Abstract
In everyday life, our brain constantly builds spatial representations of the objects surrounding us. Many studies have investigated the nature of these spatial representations. It is well established that we use allocentric information in real-time and memory-guided movements. Most studies relied on small-scale and static experiments, leaving it unclear whether similar paradigms yield the same results on a larger scale using dynamic objects. We created a virtual reality task that required participants to encode the landing position of a virtual ball thrown by an avatar. Encoding differed in the nature of the task in that it was either purely perceptual (“view where the ball landed while standing still”—Experiment 1) or involved an action (“intercept the ball with the foot just before it lands”—Experiment 2). After encoding, participants were asked to place a real ball at the remembered landing position in the virtual scene. In some trials, we subtly shifted either the thrower or the midfield line on a soccer field to manipulate allocentric coding of the ball’s landing position. In both experiments, we were able to replicate classic findings from small-scale experiments and to generalize these results to different encoding tasks (perception vs. action) and response modes (reaching vs. walking-and-placing). Moreover, we found that participants preferably encoded the ball relative to the thrower when they had to intercept the ball, suggesting that the use of allocentric information is determined by the encoding task by enhancing task-relevant allocentric information. Our findings indicate that results previously obtained from memory-guided reaching are not restricted to small-scale movements, but generalize to whole-body movements in large-scale dynamic scenes.
Affiliation(s)
- Harun Karimpur
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
- Johannes Kurz
- NemoLab-Neuromotor Behavior Laboratory, Justus Liebig University Giessen, Giessen, Germany
- Katja Fiehler
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
15
Bharmauria V, Sajad A, Li J, Yan X, Wang H, Crawford JD. Integration of Eye-Centered and Landmark-Centered Codes in Frontal Eye Field Gaze Responses. Cereb Cortex 2020; 30:4995-5013. [PMID: 32390052] [DOI: 10.1093/cercor/bhaa090]
Abstract
The visual system is thought to separate egocentric and allocentric representations, but behavioral experiments show that these codes are optimally integrated to influence goal-directed movements. To test if frontal cortex participates in this integration, we recorded primate frontal eye field activity during a cue-conflict memory delay saccade task. To dissociate egocentric and allocentric coordinates, we surreptitiously shifted a visual landmark during the delay period, causing saccades to deviate by 37% in the same direction. To assess the cellular mechanisms, we fit neural response fields against an egocentric (eye-centered target-to-gaze) continuum, and an allocentric shift (eye-to-landmark-centered) continuum. Initial visual responses best fit target position. Motor responses (after the landmark shift) predicted future gaze position but embedded within the motor code was a 29% shift toward allocentric coordinates. This shift appeared transiently in memory-related visuomotor activity, and then reappeared in motor activity before saccades. Notably, fits along the egocentric and allocentric shift continua were initially independent, but became correlated across neurons just before the motor burst. Overall, these results implicate frontal cortex in the integration of egocentric and allocentric visual information for goal-directed action, and demonstrate the cell-specific, temporal progression of signal multiplexing for this process in the gaze system.
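The continuum fitting described here can be caricatured in a few lines: each candidate code is an intermediate frame x(alpha) = T + alpha(G - T), and the best alpha is the one at which firing rate is most cleanly a smooth function of position. A toy reconstruction with synthetic data (the smoothing-based fit and all parameters are assumptions, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Candidate frames span a target-to-gaze continuum: alpha = 0 is pure
# target coding, alpha = 1 pure (future) gaze coding.
n = 300
T = rng.uniform(-20, 20, n)              # target positions (deg)
G = T + rng.normal(0, 8, n)              # gaze endpoints scatter around targets
alpha_true = 0.7                         # hidden frame generating the rates

def response_field(x):
    return 40 * np.exp(-((x - 5.0) ** 2) / (2 * 8.0**2))

rates = response_field(T + alpha_true * (G - T)) + rng.normal(0, 2, n)

def fit_error(alpha):
    # In the correct frame, rate varies smoothly with position, so the
    # residuals around a running average of the sorted data are smallest.
    x = T + alpha * (G - T)
    order = np.argsort(x)
    smoothed = np.convolve(rates[order], np.ones(15) / 15, mode="same")
    return np.sum((rates[order] - smoothed) ** 2)

alphas = np.linspace(-0.5, 1.5, 81)
best = alphas[np.argmin([fit_error(a) for a in alphas])]
print(f"recovered alpha: {best:.2f} (generated with {alpha_true})")
```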
Affiliation(s)
- Vishal Bharmauria
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- Amirsaman Sajad
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN 37240, USA
- Jirui Li
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- Xiaogang Yan
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- Hongying Wang
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- John Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- Departments of Psychology, Biology and Kinesiology & Health Sciences, York University, Toronto, Ontario, Canada M3J 1P3
16
Karimpur H, Eftekharifar S, Troje NF, Fiehler K. Spatial coding for memory-guided reaching in visual and pictorial spaces. J Vis 2020; 20:1. [PMID: 32271893] [PMCID: PMC7405696] [DOI: 10.1167/jov.20.4.1]
Abstract
An essential difference between pictorial space displayed as paintings, photographs, or computer screens, and the visual space experienced in the real world is that the observer has a defined location, and thus valid information about distance and direction of objects, in the latter but not in the former. Thus egocentric information should be more reliable in visual space, whereas allocentric information should be more reliable in pictorial space. The majority of studies relied on pictorial representations (images on a computer screen), leaving it unclear whether the same coding mechanisms apply in visual space. Using a memory-guided reaching task in virtual reality, we investigated allocentric coding in both visual space (on a table in virtual reality) and pictorial space (on a monitor that is on the table in virtual reality). Our results suggest that the brain uses allocentric information to represent objects in both pictorial and visual space. Contrary to our hypothesis, the influence of allocentric cues was stronger in visual space than in pictorial space, also after controlling for retinal stimulus size, confounding allocentric cues, and differences in presentation depth. We discuss possible reasons for stronger allocentric coding in visual than in pictorial space.
Affiliation(s)
- Harun Karimpur
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
- Nikolaus F. Troje
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
- Centre for Vision Research and Department of Biology, York University, Toronto, ON, Canada
- Katja Fiehler
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
17
Lu Z, Fiehler K. Spatial updating of allocentric landmark information in real-time and memory-guided reaching. Cortex 2020; 125:203-214. [PMID: 32006875] [DOI: 10.1016/j.cortex.2019.12.010]
Abstract
The 2-streams model of vision suggests that egocentric and allocentric reference frames are utilized by the dorsal and the ventral stream for real-time and memory-guided movements, respectively. Recent studies argue against such a strict functional distinction and suggest that real-time and memory-guided movements recruit the same spatial maps. In this study we focus on allocentric spatial coding and updating of targets by using landmark information in real-time and memory-guided reaching. We presented participants with a naturalistic scene which consisted of six objects on a table that served as potential reach targets. Participants were informed about the target object after scene encoding, and were prompted by a go cue to reach to its position. After target identification a brief air-puff was applied to the participant's right eye inducing an eye blink. During the blink the target object disappeared from the scene, and in half of the trials the remaining objects, that functioned as landmarks, were shifted horizontally in the same direction. We found that landmark shifts systematically influenced participants' reaching endpoints irrespective of whether the movements were controlled online based on available target information (real-time movement) or memory-guided based on remembered target information (memory-guided movement). Overall, the effect of landmark shift was stronger for memory-guided than real-time reaching. Our findings suggest that humans can encode and update reach targets in an allocentric reference frame for both real-time and memory-guided movements and show stronger allocentric coding when the movement is based on memory.
Affiliation(s)
- Zijian Lu
- Department of Experimental Psychology, Justus-Liebig-University, Giessen, Germany
- Katja Fiehler
- Department of Experimental Psychology, Justus-Liebig-University, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus-Liebig University, Giessen, Germany
18
Nishimura N, Uchimura M, Kitazawa S. Automatic encoding of a target position relative to a natural scene. J Neurophysiol 2019; 122:1849-1860. [PMID: 31509471] [DOI: 10.1152/jn.00032.2018]
Abstract
We previously showed that the brain automatically represents a target position for reaching relative to a large square in the background. In the present study, we tested whether a natural scene with many complex details serves as an effective background for representing a target. In the first experiment, we used upright and inverted pictures of a natural scene. A shift of pictures significantly attenuated prism adaptation of reaching movements as long as they were upright. In one-third of participants, adaptation was almost completely cancelled whether the pictures were upright or inverted. It was remarkable that there were two distinct groups of participants: one who relied fully on the allocentric coordinate, and the other who depended on it only when the scene was upright. In the second experiment, we examined how long it takes for a novel upright scene to serve as a background. A shift of the novel scene had no significant effects when it was presented for 500 ms before presenting a target, but significant effects were recovered when presented for 1,500 ms. These results show that a natural scene serves as a background against which a target is automatically represented once we spend 1,500 ms in the scene.
NEW & NOTEWORTHY Prism adaptation of reaching was attenuated by a shift of natural scenes as long as they were upright. In one-third of participants, adaptation was fully canceled whether the scene was upright or inverted. When an upright scene was novel, it took 1,500 ms to prepare the scene for allocentric coding. These results show that a natural scene serves as a background against which a target is automatically represented once we spend 1,500 ms in the scene.
Affiliation(s)
- Nobuyuki Nishimura
- Department of Anesthesiology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Department of Brain Physiology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Motoaki Uchimura
- Dynamic Brain Network Laboratory, Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka, Japan
- Shigeru Kitazawa
- Dynamic Brain Network Laboratory, Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka, Japan
- Department of Brain Physiology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
19
Chen Y, Crawford JD. Allocentric representations for target memory and reaching in human cortex. Ann N Y Acad Sci 2019; 1464:142-155. [PMID: 31621922] [DOI: 10.1111/nyas.14261]
Abstract
The use of allocentric cues for movement guidance is complex because it involves the integration of visual targets and independent landmarks and the conversion of this information into egocentric commands for action. Here, we focus on the mechanisms for encoding reach targets relative to visual landmarks in humans. First, we consider the behavioral results suggesting that both of these cues influence target memory, but are then transformed, at the first opportunity, into egocentric commands for action. We then consider the cortical mechanisms for these behaviors. We discuss different allocentric versus egocentric mechanisms for coding of target directional selectivity in memory (inferior temporal gyrus versus superior occipital gyrus) and distinguish these mechanisms from parieto-frontal activation for planning egocentric direction of actual reach movements. Then, we consider where and how the former allocentric representations of remembered reach targets are converted into the latter egocentric plans. In particular, our recent neuroimaging study suggests that four areas in the parietal and frontal cortex (right precuneus, bilateral dorsal premotor cortex, and right presupplementary area) participate in this allo-to-ego conversion. Finally, we provide a functional overview describing how and why egocentric and landmark-centered representations are segregated early in the visual system, but then reintegrated in the parieto-frontal cortex for action.
Affiliation(s)
- Ying Chen
- Center for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada
- J Douglas Crawford
- Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada
- Center for Vision Research, Vision: Science to Applications (VISTA) Program, and Departments of Psychology, Biology, and Kinesiology & Health Science, York University, Toronto, Ontario, Canada
20
Karimpur H, Morgenstern Y, Fiehler K. Facilitation of allocentric coding by virtue of object-semantics. Sci Rep 2019; 9:6263. [PMID: 31000759] [PMCID: PMC6472393] [DOI: 10.1038/s41598-019-42735-4]
Abstract
In the field of spatial coding it is well established that we mentally represent objects for action not only relative to ourselves, egocentrically, but also relative to other objects (landmarks), allocentrically. Several factors facilitate allocentric coding, for example, when objects are task-relevant or constitute stable and reliable spatial configurations. What is unknown, however, is how object-semantics facilitate the formation of these spatial configurations and thus allocentric coding. Here we demonstrate that (i) we can quantify the semantic similarity of objects and that (ii) semantically similar objects can serve as a cluster of landmarks that are allocentrically coded. Participants arranged a set of objects based on their semantic similarity. These arrangements were then entered into a similarity analysis. Based on the results, we created two semantic classes of objects, natural and man-made, that we used in a virtual reality experiment. Participants were asked to perform memory-guided reaching movements toward the initial position of a target object in a scene while either semantically congruent or incongruent landmarks were shifted. We found that the reaching endpoints systematically deviated in the direction of landmark shift. Importantly, this effect was stronger for shifts of semantically congruent landmarks. Our findings suggest that object-semantics facilitate allocentric coding by creating stable spatial configurations.
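The arrangement-based similarity analysis sketched in this abstract can be reproduced in miniature: inter-object distances on the arrangement canvas serve as a semantic dissimilarity matrix. A toy example (object names, coordinates, and the two classes are hypothetical placeholders, not the study's stimuli):

```python
import numpy as np

# Objects placed closer together on a 2-D arrangement canvas are treated
# as more semantically similar.
positions = {
    "pot": (0.10, 0.20), "pan": (0.15, 0.25), "kettle": (0.20, 0.10),   # man-made
    "rock": (0.80, 0.90), "leaf": (0.85, 0.80), "shell": (0.90, 0.95),  # natural
}
coords = np.array(list(positions.values()))

# Pairwise Euclidean distances form the dissimilarity matrix.
rdm = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

# Lower within-class than between-class distances would justify grouping
# the objects into two semantic clusters of landmarks, as in the paper.
within = rdm[:3, :3][np.triu_indices(3, k=1)].mean()
between = rdm[:3, 3:].mean()
print(f"mean within-class distance:  {within:.2f}")
print(f"mean between-class distance: {between:.2f}")
```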
Affiliation(s)
- Harun Karimpur
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Katja Fiehler
- Experimental Psychology, Justus Liebig University, Giessen, Germany
21
Aagten-Murphy D, Bays PM. Independent working memory resources for egocentric and allocentric spatial information. PLoS Comput Biol 2019; 15:e1006563. [PMID: 30789899] [PMCID: PMC6400418] [DOI: 10.1371/journal.pcbi.1006563]
Abstract
Visuospatial working memory enables us to maintain access to visual information for processing even when a stimulus is no longer present, due to occlusion, our own movements, or transience of the stimulus. Here we show that, when localizing remembered stimuli, the precision of spatial recall does not rely solely on memory for individual stimuli, but additionally depends on the relative distances between stimuli and visual landmarks in the surroundings. Across three separate experiments, we consistently observed a spatially selective improvement in the precision of recall for items located near a persistent landmark. While the results did not require that the landmark be visible throughout the memory delay period, it was essential that it was visible both during encoding and response. We present a simple model that can accurately capture human performance by considering relative (allocentric) spatial information as an independent localization estimate which degrades with distance and is optimally integrated with egocentric spatial information. Critically, allocentric information was encoded without cost to egocentric estimation, demonstrating independent storage of the two sources of information. Finally, when egocentric and allocentric estimates were put in conflict, the model successfully predicted the resulting localization errors. We suggest that the relative distance between stimuli represents an additional, independent spatial cue for memory recall. This cue information is likely to be critical for spatial localization in natural settings which contain an abundance of visual landmarks.
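The model described here is essentially reliability-weighted cue combination: an egocentric estimate with fixed noise and an allocentric (landmark-relative) estimate whose noise grows with distance from the landmark are averaged with inverse-variance weights. A compact sketch (noise parameters are assumed for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Inverse-variance averaging is the statistically optimal rule for
# combining independent Gaussian cues.
def recall_estimate(true_pos, landmark, sigma_ego=2.0, noise_per_unit=0.3):
    sigma_allo = noise_per_unit * abs(true_pos - landmark)  # degrades with distance
    ego = true_pos + rng.normal(0.0, sigma_ego)             # self-referenced estimate
    allo = true_pos + rng.normal(0.0, sigma_allo)           # landmark-referenced estimate
    w_ego = sigma_allo**2 / (sigma_ego**2 + sigma_allo**2)  # inverse-variance weight
    return w_ego * ego + (1.0 - w_ego) * allo

# Precision improves selectively for items near the landmark (placed at 0):
for distance in (1.0, 5.0, 20.0):
    errors = [recall_estimate(distance, 0.0) - distance for _ in range(5000)]
    print(f"item at distance {distance:4.1f}: recall s.d. = {np.std(errors):.2f}")
```

Running the loop shows recall noise approaching the egocentric baseline far from the landmark and shrinking sharply near it, the spatially selective benefit reported in the abstract.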
Affiliation(s)
- David Aagten-Murphy
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Paul M. Bays
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
22
Chen Y, Monaco S, Crawford JD. Neural substrates for allocentric-to-egocentric conversion of remembered reach targets in humans. Eur J Neurosci 2018. [PMID: 29512943] [DOI: 10.1111/ejn.13885]
Abstract
Targets for goal-directed action can be encoded in allocentric coordinates (relative to another visual landmark), but it is not known how these are converted into egocentric commands for action. Here, we investigated this using a slow event-related fMRI paradigm, based on our previous behavioural finding that the allocentric-to-egocentric (Allo-Ego) conversion for reach is performed at the first possible opportunity. Participants were asked to remember (and eventually reach towards) the location of a briefly presented target relative to another visual landmark. After a first memory delay, participants were forewarned by a verbal instruction if the landmark would reappear at the same location (potentially allowing them to plan a reach following the auditory cue before the second delay), or at a different location where they had to wait for the final landmark to be presented before response, and then reach towards the remembered target location. As predicted, participants showed landmark-centred directional selectivity in occipital-temporal cortex during the first memory delay, and only developed egocentric directional selectivity in occipital-parietal cortex during the second delay for the 'Same cue' task, and during response for the 'Different cue' task. We then compared cortical activation between these two tasks at the times when the Allo-Ego conversion occurred, and found common activation in right precuneus, right presupplementary area and bilateral dorsal premotor cortex. These results confirm that the brain converts allocentric codes to egocentric plans at the first possible opportunity, and identify the four most likely candidate sites specific to the Allo-Ego transformation for reaches.
Collapse
Affiliation(s)
- Ying Chen, Center for Vision Research, Room 0009, Lassonde Building, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), Toronto, ON, Canada
- Simona Monaco, Center for Mind/Brain Sciences, University of Trento, Trento, Italy
- J Douglas Crawford, Center for Vision Research, Room 0009, Lassonde Building, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), Toronto, ON, Canada; Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
23
Schenk T, Hesse C. Do we have distinct systems for immediate and delayed actions? A selective review on the role of visual memory in action. Cortex 2018; 98:228-248. [DOI: 10.1016/j.cortex.2017.05.014] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2016] [Revised: 05/01/2017] [Accepted: 05/11/2017] [Indexed: 10/19/2022]
24
Chen Y, Crawford JD. Cortical Activation during Landmark-Centered vs. Gaze-Centered Memory of Saccade Targets in the Human: An fMRI Study. Front Syst Neurosci 2017; 11:44. [PMID: 28690501 PMCID: PMC5481872 DOI: 10.3389/fnsys.2017.00044] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2017] [Accepted: 06/06/2017] [Indexed: 11/13/2022] Open
Abstract
A remembered saccade target can be encoded in egocentric coordinates (e.g., gaze-centered) or relative to an external allocentric landmark that is independent of the target and gaze (landmark-centered). In comparison to egocentric mechanisms, very little is known about such landmark-centered representations. Here, we used an event-related fMRI design to identify brain areas supporting these two types of spatial coding (landmark-centered vs. gaze-centered) for target memory during a Delay phase in which only target location, not saccade direction, was specified. The paradigm included three tasks with identical visual displays but different auditory instructions: Landmark Saccade (remember target location relative to a visual landmark, independent of gaze), Control Saccade (remember the original target location relative to gaze fixation, independent of the landmark), and a non-spatial control, Color Report (report target color). During the Delay phase, the Control and Landmark Saccade tasks activated overlapping areas in posterior parietal cortex (PPC) and frontal cortex relative to the color control, with higher activation in PPC for target coding in the Control Saccade task and higher activation in temporal and occipital cortex for target coding in the Landmark Saccade task. Gaze-centered directional selectivity was observed in superior and inferior occipital gyri, whereas landmark-centered directional selectivity was observed in precuneus and mid-posterior intraparietal sulcus. During the Response phase, after saccade direction was specified, the parietofrontal network in the left hemisphere showed higher activation for rightward than for leftward saccades. Our results suggest that cortical activation for coding saccade target direction relative to a visual landmark differs from gaze-centered directional selectivity for target memory, from the mechanisms for other types of allocentric tasks, and from the directionally selective mechanisms for saccade planning and execution.
Affiliation(s)
- Ying Chen, Center for Vision Research, York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada
- J D Crawford, Center for Vision Research, York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada; Vision: Science to Applications Program, York University, Toronto, ON, Canada
25
Grasping occluded targets: investigating the influence of target visibility, allocentric cue presence, and direction of motion on gaze and grasp accuracy. Exp Brain Res 2017; 235:2705-2716. [PMID: 28597294 DOI: 10.1007/s00221-017-5004-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2017] [Accepted: 06/01/2017] [Indexed: 10/19/2022]
Abstract
Participants executed right-handed reach-to-grasp movements toward horizontally translating targets. Visual feedback of the target during the reach, as well as the presence of additional cues placed above and below the target's path, was manipulated. Comparison of average fixations at reach onset and at the time of the grasp suggested that participants accurately extrapolated the occluded target's motion prior to reach onset, but not after the reach had been initiated, resulting in inaccurate grasp placements. Final gaze and grasp positions were more accurate when reaching for leftward-moving targets, suggesting that individuals use different grasp strategies when reaching for targets traveling away from the reaching hand. The presence of additional cues appeared to impair participants' ability to extrapolate the disappeared target's motion and caused grasps for occluded targets to be less accurate. These results provide novel information about the eye-hand strategies used when reaching for moving targets under unpredictable visual conditions.
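The extrapolation inferred from the gaze data can be illustrated with a constant-velocity prediction; this is a sketch under assumed kinematics, not the study's analysis code, and the numbers are arbitrary:

def extrapolate_position(last_seen_x, velocity, time_occluded):
    # Predict an occluded target's position assuming it keeps moving at
    # the velocity observed before occlusion (constant-velocity model).
    return last_seen_x + velocity * time_occluded

# A target moving rightward at 20 cm/s, occluded for 0.4 s before the grasp:
print(extrapolate_position(last_seen_x=10.0, velocity=20.0,
                           time_occluded=0.4))  # -> 18.0 cm

The finding above amounts to saying that such a prediction was computed before reach onset but not refreshed once the reach was underway.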
26
Klinghammer M, Blohm G, Fiehler K. Scene Configuration and Object Reliability Affect the Use of Allocentric Information for Memory-Guided Reaching. Front Neurosci 2017; 11:204. [PMID: 28450826 PMCID: PMC5390010 DOI: 10.3389/fnins.2017.00204] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2016] [Accepted: 03/24/2017] [Indexed: 11/16/2022] Open
Abstract
Previous research has shown that egocentric and allocentric information is used for coding target locations for memory-guided reaching movements. In particular, task-relevance determines whether objects are used as allocentric cues. Here, we investigated the influence of scene configuration and object reliability, as a function of task-relevance, on allocentric coding for memory-guided reaching. For that purpose, we presented participants with images of a naturalistic breakfast scene with five objects on a table and six objects in the background. Six of these objects served as potential reach targets (task-relevant objects). Participants explored the scene, and after a short delay a test scene appeared with one of the task-relevant objects missing, indicating the location of the reach target. After the test scene vanished, participants performed a memory-guided reaching movement toward the target location. Besides removing one object from the test scene, we also shifted the remaining task-relevant and/or task-irrelevant objects left- or rightwards, either coherently in the same direction or incoherently in opposite directions. By varying object coherence, we manipulated the reliability of task-relevant and task-irrelevant objects in the scene. To examine the influence of scene configuration (distributed vs. grouped arrangement of task-relevant objects) on allocentric coding, we compared the present data with our previously published data set (Klinghammer et al., 2015). We found that reaching errors systematically deviated in the direction of object shifts, but only when the objects were task-relevant and their reliability was high. However, this effect was substantially reduced when task-relevant objects were distributed across the scene, leading to a larger target-cue distance compared to a grouped configuration. No deviations of reach endpoints were observed in conditions with shifts of only task-irrelevant objects or with low object reliability, irrespective of task-relevance. Moreover, when only task-relevant objects were shifted incoherently, the variability of reaching endpoints increased compared to coherent shifts of task-relevant objects. Our results suggest that the use of allocentric information for coding targets for memory-guided reaching depends on the scene configuration, in particular the average distance of the reach target to task-relevant objects, and on the reliability of task-relevant allocentric information.
Affiliation(s)
- Gunnar Blohm, Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
- Katja Fiehler, Experimental Psychology, Justus-Liebig-University, Giessen, Germany
27
Brenner E, Smeets JB. Accumulating visual information for action. Prog Brain Res 2017; 236:75-95. [DOI: 10.1016/bs.pbr.2017.07.007] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
28
Klinghammer M, Schütz I, Blohm G, Fiehler K. Allocentric information is used for memory-guided reaching in depth: A virtual reality study. Vision Res 2016; 129:13-24. [PMID: 27789230 DOI: 10.1016/j.visres.2016.10.004] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2016] [Revised: 10/05/2016] [Accepted: 10/07/2016] [Indexed: 10/20/2022]
Abstract
Previous research has demonstrated that humans use allocentric information when reaching to remembered visual targets, but most studies have been limited to 2D space. Here, we study allocentric coding of memorized reach targets in 3D virtual reality. In particular, we investigated the use of allocentric information for memory-guided reaching in depth and the role of binocular and monocular (object size) depth cues for coding object locations in 3D space. To this end, we presented a scene with objects on a table which were located at different distances from the observer and served as reach targets or allocentric cues. After free visual exploration of this scene and a short delay, the scene reappeared with one object missing (the reach target). In addition, the remaining objects were shifted horizontally or in depth. When objects were shifted in depth, we also independently manipulated object size by either magnifying or reducing it. After the scene vanished, participants reached to the remembered target location on the blank table. Reaching endpoints deviated systematically in the direction of object shifts, similar to our previous results from 2D presentations. This deviation was stronger for object shifts in depth than in the horizontal plane and was independent of observer-target distance. Reaching endpoints also varied systematically with changes in object size. Our results suggest that allocentric information is used for coding targets for memory-guided reaching in depth, with retinal disparity and vergence as well as object size providing important binocular and monocular depth cues.
Affiliation(s)
- Mathias Klinghammer, Justus-Liebig-University, Experimental Psychology, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany
- Immo Schütz, TU Chemnitz, Institut für Physik, Reichenhainer Str. 70, 09126 Chemnitz, Germany
- Gunnar Blohm, Queen's University, Centre for Neuroscience Studies, 18 Stuart Street, Kingston, Ontario K7L 3N6, Canada
- Katja Fiehler, Justus-Liebig-University, Experimental Psychology, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany
29
Filimon F. Are All Spatial Reference Frames Egocentric? Reinterpreting Evidence for Allocentric, Object-Centered, or World-Centered Reference Frames. Front Hum Neurosci 2015; 9:648. [PMID: 26696861 PMCID: PMC4673307 DOI: 10.3389/fnhum.2015.00648] [Citation(s) in RCA: 50] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2015] [Accepted: 11/16/2015] [Indexed: 12/19/2022] Open
Abstract
The use and neural representation of egocentric spatial reference frames is well-documented. In contrast, whether the brain represents spatial relationships between objects in allocentric, object-centered, or world-centered coordinates is debated. Here, I review behavioral, neuropsychological, neurophysiological (neuronal recording), and neuroimaging evidence for and against allocentric, object-centered, or world-centered spatial reference frames. Based on theoretical considerations, simulations, and empirical findings from spatial navigation, spatial judgments, and goal-directed movements, I suggest that all spatial representations may in fact be dependent on egocentric reference frames.
Affiliation(s)
- Flavia Filimon, Adaptive Behavior and Cognition, Max Planck Institute for Human Development, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
30
Camors D, Jouffrais C, Cottereau BR, Durand JB. Allocentric coding: spatial range and combination rules. Vision Res 2015; 109:87-98. [PMID: 25749676 DOI: 10.1016/j.visres.2015.02.018] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2014] [Revised: 02/23/2015] [Accepted: 02/24/2015] [Indexed: 11/18/2022]
Abstract
When a visual target is presented with neighboring landmarks, its location can be determined both relative to the self (egocentric coding) and relative to these landmarks (allocentric coding). In the present study, we investigated (1) how allocentric coding depends on the distance between targets and their surrounding landmarks (the spatial range) and (2) how allocentric and egocentric coding interact with each other across target-landmark distances (the combination rules). Subjects performed a memory-based pointing task toward previously gazed-at targets briefly superimposed (200 ms) on background images of cluttered city landscapes. A variable portion of the images was occluded in order to control the distance between the targets and the closest potential landmarks within those images. The pointing responses were performed after large saccades and the reappearance of the images at their initial location. In some trials, however, the images' elements were slightly shifted (±3°) in order to introduce a subliminal conflict between the allocentric and egocentric reference frames. The influence of allocentric coding on the pointing responses was found to decrease with increasing target-landmark distance, although it remained significant even at the largest distances (≥10°). Interestingly, both the decreasing influence of allocentric coding and the concomitant increase in pointing-response variability were well captured by a Bayesian model in which the weighted combination of allocentric and egocentric cues is governed by a coupling prior.
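The distance dependence reported here can be approximated with a simple reliability-weighted scheme in which allocentric noise grows with target-landmark distance; under a subliminal landmark shift, the predicted pointing bias is then the allocentric weight times the shift. The snippet below is our simplified stand-in, not the paper's model (the full model uses a Bayesian coupling prior, which additionally down-weights strongly conflicting cues); all parameter values are illustrative:

def allo_weight(distance, sigma_ego=1.0, sigma0=0.4, k=0.15):
    # Weight given to the allocentric cue: high near the landmark and
    # decaying with distance as the assumed allocentric noise grows.
    sigma_allo = sigma0 + k * distance
    return sigma_ego**2 / (sigma_ego**2 + sigma_allo**2)

shift = 3.0  # deg, the subliminal conflict used in the experiment
for d in (1.0, 5.0, 10.0):
    w = allo_weight(d)
    print(f"distance {d:4.1f} deg: w_allo = {w:.2f}, "
          f"predicted bias = {w * shift:.2f} deg")

Note that the predicted bias remains nonzero even at 10 deg, in line with the residual allocentric influence observed at the largest distances.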
Affiliation(s)
- D Camors, Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; CNRS, CerCo, Toulouse, France; Université de Toulouse, IRIT, Toulouse, France; CNRS, IRIT, Toulouse, France
- C Jouffrais, Université de Toulouse, IRIT, Toulouse, France; CNRS, IRIT, Toulouse, France
- B R Cottereau, Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; CNRS, CerCo, Toulouse, France
- J B Durand, Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; CNRS, CerCo, Toulouse, France
31
No effect of delay on the spatial representation of serial reach targets. Exp Brain Res 2015; 233:1225-35. [PMID: 25600817 PMCID: PMC4355444 DOI: 10.1007/s00221-015-4197-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2014] [Accepted: 01/05/2015] [Indexed: 11/19/2022]
Abstract
When reaching for remembered target locations, it has been argued that the brain primarily relies on egocentric metrics, especially target position relative to gaze, when reaches are immediate, but that the visuomotor system relies more strongly on allocentric (i.e., object-centered) metrics when a reach is delayed. However, previous reports from our group have shown that reaches to single remembered targets are represented relative to gaze, even when static visual landmarks are available and reaches are delayed by up to 12 s. Based on previous findings showing a stronger contribution of allocentric coding in serial reach planning, the present study aimed to determine whether delay influences the use of a gaze-dependent reference frame when reaching to two remembered targets in a sequence after a delay of 0, 5, or 12 s. Gaze was varied relative to the first and second target and shifted away from the target before each reach. We found that participants used egocentric and allocentric reference frames in combination, with a stronger reliance on allocentric information, regardless of whether reaches were executed immediately or after a delay. Our results suggest that the relative contributions of egocentric and allocentric reference frames for spatial coding and updating of sequential reach targets do not change with a memory delay between target presentation and reaching.