1
Schuetz I, Baltaretu BR, Fiehler K. Where was this thing again? Evaluating methods to indicate remembered object positions in virtual reality. J Vis 2024;24(7):10. PMID: 38995109; PMCID: PMC11246095; DOI: 10.1167/jov.24.7.10.
Abstract
A current focus in sensorimotor research is the study of human perception and action in increasingly naturalistic tasks and visual environments. This is further enabled by the recent commercial success of virtual reality (VR) technology, which allows for highly realistic but well-controlled three-dimensional (3D) scenes. VR enables a multitude of different ways to interact with virtual objects, but only rarely are such interaction techniques evaluated and compared before being selected for a sensorimotor experiment. Here, we compare different response techniques for a memory-guided action task, in which participants indicated the position of a previously seen 3D object in a VR scene: pointing, using a virtual laser pointer of short or unlimited length, and placing, either the target object itself or a generic reference cube. Response techniques differed in availability of 3D object cues and requirement to physically move to the remembered object position by walking. Object placement was the most accurate but slowest due to repeated repositioning. When placing objects, participants tended to match the original object's orientation. In contrast, the laser pointer was fastest but least accurate, with the short pointer showing a good speed-accuracy compromise. Our findings can help researchers in selecting appropriate methods when studying naturalistic visuomotor behavior in virtual environments.
Affiliation(s)
- Immo Schuetz
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Katja Fiehler
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), Philipps University Marburg and Justus Liebig University, Giessen, Germany
2
Musa L, Yan X, Crawford JD. Instruction alters the influence of allocentric landmarks in a reach task. J Vis 2024;24(7):17. PMID: 39073800; PMCID: PMC11290568; DOI: 10.1167/jov.24.7.17.
Abstract
Allocentric landmarks have an implicit influence on aiming movements, but it is not clear how an explicit instruction (to aim relative to a landmark) influences reach accuracy and precision. Here, 12 participants performed a task with two instruction conditions (egocentric vs. allocentric) but with similar sensory and motor conditions. Participants fixated gaze near the center of a display aligned with their right shoulder while a target stimulus briefly appeared alongside a visual landmark in one visual field. After a brief mask/memory delay, the landmark reappeared at a different location (same or opposite visual field), creating an ego/allocentric conflict. In the egocentric condition, participants were instructed to ignore the landmark and point toward the remembered location of the target. In the allocentric condition, participants were instructed to remember the initial target location relative to the landmark and then reach relative to the shifted landmark (same or opposite visual field). To equalize motor execution between tasks, participants were instructed to anti-point (point to the visual field opposite to the remembered target) on 50% of the egocentric trials. Participants were more accurate, more precise, and quicker to react in the allocentric condition, especially when pointing to the opposite field. We also observed a visual field effect, where performance was worse overall in the right visual field. These results suggest that, when egocentric and allocentric cues conflict, explicit use of the visual landmark provides better reach performance than reliance on noisy egocentric signals. Such instructions might aid rehabilitation when the egocentric system is compromised by disease or injury.
Affiliation(s)
- Lina Musa
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- Department of Psychology, York University, Toronto, ON, Canada
- Xiaogang Yan
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- J Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- Department of Psychology, York University, Toronto, ON, Canada
- Departments of Biology and Kinesiology & Health Sciences, York University, Toronto, ON, Canada
3
Luabeya GN, Yan X, Freud E, Crawford JD. Influence of gaze, vision, and memory on hand kinematics in a placement task. J Neurophysiol 2024;132:147-161. PMID: 38836297; DOI: 10.1152/jn.00362.2023.
Abstract
People usually reach for objects to place them in some position and orientation, but the placement component of this sequence is often ignored. For example, reaches are influenced by gaze position, visual feedback, and memory delays, but their influence on object placement is unclear. Here, we tested these factors in a task where participants placed and oriented a trapezoidal block against two-dimensional (2-D) visual templates displayed on a frontally located computer screen. In experiment 1, participants matched the block to three possible orientations: 0° (horizontal), +45° and -45°, with gaze fixated 10° to the left/right. The hand and template either remained illuminated (closed-loop), or visual feedback was removed (open-loop). Here, hand location consistently overshot the template relative to gaze, especially in the open-loop task; likewise, orientation was influenced by gaze position (depending on template orientation and visual feedback). In experiment 2, a memory delay was added, and participants sometimes performed saccades (toward, away from, or across the template). In this task, the influence of gaze on orientation vanished, but location errors were influenced by both template orientation and final gaze position. Contrary to our expectations, the previous saccade metrics also impacted placement overshoot. Overall, hand orientation was influenced by template orientation in a nonlinear fashion. These results demonstrate interactions between gaze and orientation signals in the planning and execution of hand placement and suggest different neural mechanisms for closed-loop, open-loop, and memory delay placement.

NEW & NOTEWORTHY Eye-hand coordination studies usually focus on object acquisition, but placement is equally important. We investigated how gaze position influences object placement toward a 2-D template with different levels of visual feedback. Like reach, placement overestimated goal location relative to gaze and was influenced by previous saccade metrics. Gaze also modulated hand orientation, depending on template orientation and level of visual feedback. Gaze influence was feedback-dependent, with location errors having no significant effect after a memory delay.
Affiliation(s)
- Gaelle N Luabeya
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Xiaogang Yan
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Erez Freud
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Department of Psychology, York University, Toronto, Ontario, Canada
- J Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Department of Psychology, York University, Toronto, Ontario, Canada
- Department of Kinesiology & Health Sciences, York University, Toronto, Ontario, Canada
- Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario, Canada
4
Forster PP, Fiehler K, Karimpur H. Egocentric cues influence the allocentric spatial memory of object configurations for memory-guided actions. J Neurophysiol 2023;130:1142-1149. PMID: 37791381; DOI: 10.1152/jn.00149.2023.
Abstract
Allocentric and egocentric reference frames are used to code the spatial position of action targets in reference to objects in the environment, i.e., relative to landmarks (allocentric), or the observer (egocentric). Previous research investigated reference frames in isolation, for example, by shifting landmarks relative to the target and asking participants to reach to the remembered target location. Systematic reaching errors were found in the direction of the landmark shift and used as a proxy for allocentric spatial coding. Here, we examined the interaction of both allocentric and egocentric reference frames by shifting the landmarks as well as the observer. We asked participants to encode a three-dimensional configuration of balls and to reproduce this configuration from memory after a short delay followed by a landmark or an observer shift. We also manipulated the number of landmarks to test its effect on the use of allocentric and egocentric reference frames. We found that participants were less accurate when reproducing the configuration of balls after an observer shift, which was reflected in larger configurational errors. In addition, an increase in the number of landmarks led to a stronger reliance on allocentric cues and a weaker contribution of egocentric cues. In sum, our results highlight the important role of egocentric cues for allocentric spatial coding in the context of memory-guided actions.

NEW & NOTEWORTHY Objects in our environment are coded relative to each other (allocentrically) and are thought to serve as independent and reliable cues (landmarks) in the context of unreliable egocentric signals. Contrary to this assumption, we demonstrate that egocentric cues alter the allocentric spatial memory, which could reflect recently discovered interactions between allocentric and egocentric neural processing pathways. Furthermore, additional landmarks lead to a higher contribution of allocentric and a lower contribution of egocentric cues.
Affiliation(s)
- Pierre-Pascal Forster
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
- Katja Fiehler
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
- Harun Karimpur
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
5
Crowe EM, Bossard M, Brenner E. Can ongoing movements be guided by allocentric visual information when the target is visible? J Vis 2021;21(1):6. PMID: 33427872; PMCID: PMC7804519; DOI: 10.1167/jov.21.1.6.
Abstract
People use both egocentric (object-to-self) and allocentric (object-to-object) spatial information to interact with the world. Evidence for allocentric information guiding ongoing actions stems from studies in which people reached to where targets had previously been seen while other objects were moved. Since egocentric position judgments might fade or change when the target is removed, we sought conditions in which people might benefit from relying on allocentric information when the target remains visible. We used a task that required participants to intercept targets that moved across a screen using a cursor that represented their finger but that moved by a different amount in a different plane. During each attempt, we perturbed the target, cursor, or background individually or all three simultaneously such that their relative positions did not change and there was no need to adjust the ongoing movement. An obvious way to avoid responding to such simultaneous perturbations is by relying on allocentric information. Relying on egocentric information would give a response that resembles the combined responses to the three isolated perturbations. The hand responded in accordance with the responses to the isolated perturbations despite the differences between how the finger and cursor moved. This response remained when the simultaneous perturbation was repeated many times, suggesting that participants hardly relied upon allocentric spatial information to control their ongoing visually guided actions.
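To make the logic of these two competing predictions concrete, here is a minimal numeric sketch in Python; the response amplitudes and variable names are invented for illustration and are not the study's data:

```python
# Hypothetical hand-response amplitudes (mm) to each perturbation presented
# in isolation. Under egocentric guidance, the response to a simultaneous
# perturbation of target, cursor, and background (relative positions
# unchanged) should resemble the sum of the isolated responses; under
# allocentric guidance it should be absent.
response_to_target = 2.0      # response to an isolated target shift
response_to_cursor = -1.5     # response to an isolated cursor shift
response_to_background = 0.4  # response to an isolated background shift

egocentric_prediction = (response_to_target + response_to_cursor
                         + response_to_background)   # +0.9 mm
allocentric_prediction = 0.0  # nothing moved relative to anything else

print(f"egocentric prediction:  {egocentric_prediction:+.1f} mm")
print(f"allocentric prediction: {allocentric_prediction:+.1f} mm")
```

The observed responses to the simultaneous perturbation matched the egocentric prediction, which is the result the abstract reports.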
Affiliation(s)
- Emily M Crowe
- Department of Human Movement Sciences, Institute of Brain and Behaviour Amsterdam, Amsterdam Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Eli Brenner
- Department of Human Movement Sciences, Institute of Brain and Behaviour Amsterdam, Amsterdam Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
6
Maij F, Seegelke C, Medendorp WP, Heed T. External location of touch is constructed post-hoc based on limb choice. eLife 2020;9:e57804. PMID: 32945257; PMCID: PMC7561349; DOI: 10.7554/eLife.57804.
Abstract
When humans indicate on which hand a tactile stimulus occurred, they often err when their hands are crossed. This finding seemingly supports the view that the automatically determined touch location in external space affects limb assignment: the crossed right hand is localized in left space, and this conflict presumably provokes hand assignment errors. Here, participants judged on which hand the first of two stimuli, presented during a bimanual movement, had occurred, and then indicated its external location by a reach-to-point movement. When participants incorrectly chose the hand stimulated second, they pointed to where that hand had been at the correct, first time point, though no stimulus had occurred at that location. This behavior suggests that stimulus localization depended on hand assignment, not vice versa. It is, thus, incompatible with the notion of automatic computation of external stimulus location upon occurrence. Instead, humans construct external touch location post-hoc and on demand.
Affiliation(s)
- Femke Maij
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Christian Seegelke
- Faculty of Psychology and Sports Science, Bielefeld University, Bielefeld, Germany
- Center for Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
- W Pieter Medendorp
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Tobias Heed
- Faculty of Psychology and Sports Science, Bielefeld University, Bielefeld, Germany
- Center for Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
7
Two types of memory-based (pantomime) reaches distinguished by gaze anchoring in reach-to-grasp tasks. Behav Brain Res 2020;381:112438. PMID: 31857149; DOI: 10.1016/j.bbr.2019.112438.
Abstract
Comparisons of target-based reaching vs memory-based (pantomime) reaching have been used to obtain insight into the visuomotor control of reaching. The present study examined the contribution of gaze anchoring, reaching to a target that is under continuous gaze, to both target-based and memory-based reaching. Participants made target-based reaches for discs located on a table or food items located on a pedestal, or they replaced the objects. They then made memory-based reaches in which they pantomimed their target-based reaches. Participants were fitted with hand sensors for kinematic tracking and an eye tracker to monitor gaze. When making target-based reaches, participants directed gaze to the target location from reach onset to offset without interrupting saccades. Similar gaze anchoring was present for memory-based reaches when the surface upon which the target had been placed remained. When the target and its surface were both removed, there was no systematic relationship between gaze and the reach. Gaze anchoring was also present when participants replaced a target on a surface, a movement featuring a reach but little grasp. That memory-based reaches can be either gaze anchor-associated or gaze anchor-independent is discussed in relation to contemporary views of the neural control of reaching.
8
Lu Z, Fiehler K. Spatial updating of allocentric landmark information in real-time and memory-guided reaching. Cortex 2020;125:203-214. PMID: 32006875; DOI: 10.1016/j.cortex.2019.12.010.
Abstract
The two-streams model of vision suggests that egocentric and allocentric reference frames are utilized by the dorsal and the ventral stream for real-time and memory-guided movements, respectively. Recent studies argue against such a strict functional distinction and suggest that real-time and memory-guided movements recruit the same spatial maps. In this study, we focus on allocentric spatial coding and updating of targets by using landmark information in real-time and memory-guided reaching. We presented participants with a naturalistic scene which consisted of six objects on a table that served as potential reach targets. Participants were informed about the target object after scene encoding and were prompted by a go cue to reach to its position. After target identification, a brief air-puff was applied to the participant's right eye, inducing an eye blink. During the blink, the target object disappeared from the scene, and in half of the trials the remaining objects, which functioned as landmarks, were shifted horizontally in the same direction. We found that landmark shifts systematically influenced participants' reaching endpoints irrespective of whether the movements were controlled online based on available target information (real-time movement) or memory-guided based on remembered target information (memory-guided movement). Overall, the effect of the landmark shift was stronger for memory-guided than real-time reaching. Our findings suggest that humans can encode and update reach targets in an allocentric reference frame for both real-time and memory-guided movements and show stronger allocentric coding when the movement is based on memory.
Affiliation(s)
- Zijian Lu
- Department of Experimental Psychology, Justus-Liebig-University, Giessen, Germany
- Katja Fiehler
- Department of Experimental Psychology, Justus-Liebig-University, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus-Liebig University, Giessen, Germany
9
Vickers JN, Causer J, Vanhooren D. The Role of Quiet Eye Timing and Location in the Basketball Three-Point Shot: A New Research Paradigm. Front Psychol 2019;10:2424. PMID: 31736825; PMCID: PMC6836760; DOI: 10.3389/fpsyg.2019.02424.
Abstract
We investigated three areas of uncertainty about the role of vision in basketball shooting: the timing of fixations (early, late), the location of fixations (hoop centre, non-centre), and the effect of the defender on performance. We also sought to overcome a limitation of past quiet eye studies that reported only one quiet eye (QE) period prior to a phase of the action. Elite basketball players received the pass and took three-point shots in undefended and defended conditions. Five sequential QE periods were analyzed that were initiated prior to each phase of the shooting action: QE catch, QE arm preparation, QE arm flexion, QE arm extension, and QE ball release. We used a novel design in which the number of hits and misses was held constant by condition, thus leaving the timing and location of QE fixations free to vary across the phases during an equal number of successful and unsuccessful trials. QE fixations accounted for 87% of total fixations. The greatest percentage occurred during QE catch (43.6%), followed by QE arm flexion (34.1%), QE arm extension (17.5%), and QE ball release (4.8%). No fixations were found prior to QE arm preparation, due to a saccade made immediately to the target after QE catch. Fixation frequency averaged 2.20 per trial, and 1.25 during the final shooting action, meaning that most participants had time for only one fixation as the shot was taken. Accuracy was enhanced when (1) an early QE offset occurred prior to the catch, (2) an early saccade was made to the target, (3) a longer QE duration occurred during arm flexion, and (4) QE arm flexion was located on the centre of the hoop rather than on non-centre locations. Overall, the results provide evidence that vision of the hoop was severely limited during the last phase of the shooting action (QE ball release). The significance of the results is explored in the discussion, along with a QE training program designed to improve three-point shooting. Overall, the results greatly expand the role of the QE in explaining optimal motor performance.
Affiliation(s)
- Joan N. Vickers
- Faculty of Kinesiology, University of Calgary, Calgary, AB, Canada
- Joe Causer
- Research Institute for Sport and Exercise Sciences, Liverpool John Moores University, Liverpool, United Kingdom
- Dan Vanhooren
- Faculty of Kinesiology, University of Calgary, Calgary, AB, Canada
10
Samuel S, Legg EW, Manchester C, Lurz R, Clayton NS. Where was I? Taking alternative visual perspectives can make us (briefly) misplace our own. Q J Exp Psychol (Hove) 2019;73:468-477. PMID: 31544626; DOI: 10.1177/1747021819881097.
Abstract
How do we imagine what the world looks like from another visual perspective? The two most common proposals, embodiment and array rotation, imply that we must briefly imagine either movement of the self (embodiment) or movement of the scene (array rotation). What is not clear is what this process might mean for our real, egocentric perspective of the world. We present a novel task in which participants had to locate a target from an alternative perspective but make a manual response consistent with their own. We found that when errors occurred they were usually manual responses that would have been correct from the computed alternative perspective. This was the case both when participants were instructed to find the target from another perspective and when they were asked to imagine the scene itself rotated. We interpret this as direct evidence that perspective-taking leads to the brief adoption of a computed perspective (a new imagined relationship between ourselves and the scene) to the detriment of our own, egocentric point of view.
Affiliation(s)
- Steven Samuel
- Department of Psychology, University of Cambridge, Cambridge, UK
- Department of Psychology, University of Essex, Colchester, UK
- Edward W Legg
- Department of Psychology, University of Cambridge, Cambridge, UK
- Robert Lurz
- Brooklyn College, The City University of New York, New York, NY, USA
- Nicola S Clayton
- Department of Psychology, University of Cambridge, Cambridge, UK
11
Manson GA, Tremblay L, Lebar N, de Grosbois J, Mouchnino L, Blouin J. Auditory cues for somatosensory targets invoke visuomotor transformations: Behavioral and electrophysiological evidence. PLoS One 2019;14:e0215518. PMID: 31048853; PMCID: PMC6497427; DOI: 10.1371/journal.pone.0215518.
Abstract
Prior to goal-directed actions, somatosensory target positions can be localized using either an exteroceptive or an interoceptive body representation. The goal of the present study was to investigate if the body representation selected to plan reaches to somatosensory targets is influenced by the sensory modality of the cue indicating the target’s location. In the first experiment, participants reached to somatosensory targets prompted by either an auditory or a vibrotactile cue. As a baseline condition, participants also performed reaches to visual targets prompted by an auditory cue. Gaze-dependent reaching errors were measured to determine the contribution of the exteroceptive representation to motor planning processes. The results showed that reaches to both auditory-cued somatosensory targets and auditory-cued visual targets exhibited larger gaze-dependent reaching errors than reaches to vibrotactile-cued somatosensory targets. Thus, an exteroceptive body representation was likely used to plan reaches to auditory-cued somatosensory targets but not to vibrotactile-cued somatosensory targets. The second experiment examined the influence of using an exteroceptive body representation to plan movements to somatosensory targets on pre-movement neural activations. Cortical responses to a task-irrelevant visual flash were measured as participants planned movements to either auditory-cued somatosensory or auditory-cued visual targets. Larger responses (i.e., visual-evoked potentials) were found when participants planned movements to somatosensory vs. visual targets, and source analyses revealed that these activities were localized to the left occipital and left posterior parietal areas. These results suggest that visual and visuomotor processing networks were more engaged when using the exteroceptive body representation to plan movements to somatosensory targets, than when planning movements to external visual targets.
Affiliation(s)
- Gerome A. Manson
- Aix-Marseille University, CNRS, LNC FR 3C, Marseille, France
- University of Toronto, Centre for Motor Control, Faculty of Kinesiology and Physical Education, Toronto, Ontario, Canada
- Luc Tremblay
- University of Toronto, Centre for Motor Control, Faculty of Kinesiology and Physical Education, Toronto, Ontario, Canada
- Nicolas Lebar
- Aix-Marseille University, CNRS, LNC FR 3C, Marseille, France
- John de Grosbois
- University of Toronto, Centre for Motor Control, Faculty of Kinesiology and Physical Education, Toronto, Ontario, Canada
- Jean Blouin
- Aix-Marseille University, CNRS, LNC FR 3C, Marseille, France
12
Karimpur H, Morgenstern Y, Fiehler K. Facilitation of allocentric coding by virtue of object-semantics. Sci Rep 2019;9:6263. PMID: 31000759; PMCID: PMC6472393; DOI: 10.1038/s41598-019-42735-4.
Abstract
In the field of spatial coding it is well established that we mentally represent objects for action not only relative to ourselves, egocentrically, but also relative to other objects (landmarks), allocentrically. Several factors facilitate allocentric coding, for example, when objects are task-relevant or constitute stable and reliable spatial configurations. What is unknown, however, is how object-semantics facilitate the formation of these spatial configurations and thus allocentric coding. Here we demonstrate that (i) we can quantify the semantic similarity of objects and that (ii) semantically similar objects can serve as a cluster of landmarks that are allocentrically coded. Participants arranged a set of objects based on their semantic similarity. These arrangements were then entered into a similarity analysis. Based on the results, we created two semantic classes of objects, natural and man-made, that we used in a virtual reality experiment. Participants were asked to perform memory-guided reaching movements toward the initial position of a target object in a scene while either semantically congruent or incongruent landmarks were shifted. We found that the reaching endpoints systematically deviated in the direction of landmark shift. Importantly, this effect was stronger for shifts of semantically congruent landmarks. Our findings suggest that object-semantics facilitate allocentric coding by creating stable spatial configurations.
Affiliation(s)
- Harun Karimpur
- Experimental Psychology, Justus Liebig University, Giessen, Germany.
- Katja Fiehler
- Experimental Psychology, Justus Liebig University, Giessen, Germany
13
Aagten-Murphy D, Bays PM. Independent working memory resources for egocentric and allocentric spatial information. PLoS Comput Biol 2019;15:e1006563. PMID: 30789899; PMCID: PMC6400418; DOI: 10.1371/journal.pcbi.1006563.
Abstract
Visuospatial working memory enables us to maintain access to visual information for processing even when a stimulus is no longer present, due to occlusion, our own movements, or transience of the stimulus. Here we show that, when localizing remembered stimuli, the precision of spatial recall does not rely solely on memory for individual stimuli, but additionally depends on the relative distances between stimuli and visual landmarks in the surroundings. Across three separate experiments, we consistently observed a spatially selective improvement in the precision of recall for items located near a persistent landmark. While the results did not require that the landmark be visible throughout the memory delay period, it was essential that it was visible both during encoding and response. We present a simple model that can accurately capture human performance by considering relative (allocentric) spatial information as an independent localization estimate which degrades with distance and is optimally integrated with egocentric spatial information. Critically, allocentric information was encoded without cost to egocentric estimation, demonstrating independent storage of the two sources of information. Finally, when egocentric and allocentric estimates were put in conflict, the model successfully predicted the resulting localization errors. We suggest that the relative distance between stimuli represents an additional, independent spatial cue for memory recall. This cue information is likely to be critical for spatial localization in natural settings which contain an abundance of visual landmarks.
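The model described here can be sketched in a few lines of Python: a landmark-relative (allocentric) estimate, whose noise is assumed to grow with distance from the landmark, is combined with an egocentric estimate by inverse-variance weighting. The function names, the linear noise-growth assumption, and all parameter values below are ours, for illustration only, not the authors' implementation:

```python
import math

def sigma_allo_at(distance_deg, sigma0=0.5, slope=0.2):
    """Assumed linear growth of allocentric noise with landmark distance."""
    return sigma0 + slope * distance_deg

def combine(x_ego, sigma_ego, x_allo, sigma_allo):
    """Optimal (inverse-variance weighted) combination of the two cues."""
    w_allo = sigma_ego**2 / (sigma_ego**2 + sigma_allo**2)
    x_hat = w_allo * x_allo + (1 - w_allo) * x_ego
    sigma_hat = math.sqrt(sigma_ego**2 * sigma_allo**2
                          / (sigma_ego**2 + sigma_allo**2))
    return x_hat, sigma_hat

# Item remembered at 10.0 deg relative to a landmark 2 deg away; the
# egocentric memory estimate (10.6 deg) is noisier than the allocentric one.
x_hat, sigma_hat = combine(x_ego=10.6, sigma_ego=1.5,
                           x_allo=10.0, sigma_allo=sigma_allo_at(2.0))
print(f"combined estimate {x_hat:.2f} deg, sd {sigma_hat:.2f} deg")
```

The combined standard deviation is smaller than that of either cue alone, and the benefit is largest for items near the landmark, reproducing the spatially selective improvement in recall precision reported above.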
Affiliation(s)
- David Aagten-Murphy
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Paul M. Bays
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
14
Schenk T, Hesse C. Do we have distinct systems for immediate and delayed actions? A selective review on the role of visual memory in action. Cortex 2018;98:228-248. DOI: 10.1016/j.cortex.2017.05.014.
15
Vazquez Y, Federici L, Pesaran B. Multiple spatial representations interact to increase reach accuracy when coordinating a saccade with a reach. J Neurophysiol 2017;118:2328-2343. PMID: 28768742; DOI: 10.1152/jn.00408.2017.
Abstract
Reaching is an essential behavior that allows primates to interact with the environment. Precise reaching to visual targets depends on our ability to localize and foveate the target. Despite this, how the saccade system contributes to improvements in reach accuracy remains poorly understood. To assess spatial contributions of eye movements to reach accuracy, we performed a series of behavioral psychophysics experiments in nonhuman primates (Macaca mulatta). We found that a coordinated saccade with a reach to a remembered target location increases reach accuracy without target foveation. The improvement in reach accuracy was similar to that obtained when the subject had visual information about the location of the current target in the visual periphery and executed the reach while maintaining central fixation. Moreover, we found that the increase in reach accuracy elicited by a coordinated movement involved a spatial coupling mechanism between the saccade and reach movements. We observed significant correlations between the saccade and reach errors for coordinated movements. In contrast, when the eye and arm movements were made to targets in different spatial locations, the magnitude of the error and the degree of correlation between the saccade and reach direction were determined by the spatial location of the eye and the hand targets. Hence, we propose that coordinated movements improve reach accuracy without target foveation due to spatial coupling between the reach and saccade systems. Spatial coupling could arise from a neural mechanism for coordinated visual behavior that involves interacting spatial representations.

NEW & NOTEWORTHY How visual spatial representations guiding reach movements involve coordinated saccadic eye movements is unknown. Temporal coupling between the reach and saccade system during coordinated movements improves reach performance. However, the role of spatial coupling is unclear. Using behavioral psychophysics, we found that spatial coupling increases reach accuracy in addition to temporal coupling and visual acuity. These results suggest that a spatial mechanism to couple the reach and saccade systems increases the accuracy of coordinated movements.
Affiliation(s)
- Yuriria Vazquez
- Center for Neural Science, New York University, New York, New York
- Laura Federici
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Bijan Pesaran
- Center for Neural Science, New York University, New York, New York
16
Gaze-centered coding of proprioceptive reach targets after effector movement: Testing the impact of online information, time of movement, and target distance. PLoS One 2017;12:e0180782. PMID: 28678886; PMCID: PMC5498052; DOI: 10.1371/journal.pone.0180782.
Abstract
In previous research, we demonstrated that spatial coding of proprioceptive reach targets depends on the presence of an effector movement (Mueller & Fiehler, Neuropsychologia, 2014, 2016). In these studies, participants were asked to reach in darkness with their right hand to a proprioceptive target (tactile stimulation on the fingertip) while their gaze was varied. They either moved their left (stimulated) hand towards a target location or kept it stationary at this location, where they received a touch on the fingertip to which they reached with their right hand. When the stimulated hand was moved, reach errors varied as a function of gaze relative to target, whereas reach errors were independent of gaze when the hand was kept stationary. The present study further examines whether (a) the availability of proprioceptive online information, i.e., reaching to an online versus a remembered target, (b) the time of the effector movement, i.e., before or after target presentation, or (c) the target distance from the body influences gaze-centered coding of proprioceptive reach targets. We found gaze-dependent reach errors in the conditions which included a movement of the stimulated hand, irrespective of whether proprioceptive information was available online or remembered. This suggests that an effector movement leads to gaze-centered coding for both online and remembered proprioceptive reach targets. Moreover, moving the stimulated hand before or after target presentation did not affect gaze-dependent reach errors, thus indicating a continuous spatial update of positional signals of the stimulated hand rather than of the target location per se. However, reaching to a location close to the body rather than farther away (but still within reachable space) generally decreased the influence of a gaze-centered reference frame.
17
Brenner E, Smeets JB. Accumulating visual information for action. Prog Brain Res 2017;236:75-95. DOI: 10.1016/bs.pbr.2017.07.007.
18
Klinghammer M, Schütz I, Blohm G, Fiehler K. Allocentric information is used for memory-guided reaching in depth: A virtual reality study. Vision Res 2016;129:13-24. PMID: 27789230; DOI: 10.1016/j.visres.2016.10.004.
Abstract
Previous research has demonstrated that humans use allocentric information when reaching to remembered visual targets, but most of the studies are limited to 2D space. Here, we study allocentric coding of memorized reach targets in 3D virtual reality. In particular, we investigated the use of allocentric information for memory-guided reaching in depth and the role of binocular and monocular (object size) depth cues for coding object locations in 3D space. To this end, we presented a scene with objects on a table which were located at different distances from the observer and served as reach targets or allocentric cues. After free visual exploration of this scene and a short delay, the scene reappeared, but with one object missing (the reach target). In addition, the remaining objects were shifted horizontally or in depth. When objects were shifted in depth, we also independently manipulated object size by either magnifying or reducing their size. After the scene vanished, participants reached to the remembered target location on the blank table. Reaching endpoints deviated systematically in the direction of object shifts, similar to our previous results from 2D presentations. This deviation was stronger for object shifts in depth than in the horizontal plane and independent of observer-target distance. Reaching endpoints also varied systematically with changes in object size. Our results suggest that allocentric information is used for coding targets for memory-guided reaching in depth. Retinal disparity and vergence, as well as object size, thereby provide important binocular and monocular depth cues.
Affiliation(s)
- Mathias Klinghammer
- Justus-Liebig-University, Experimental Psychology, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany
- Immo Schütz
- TU Chemnitz, Institut für Physik, Reichenhainer Str. 70, 09126 Chemnitz, Germany.
- Gunnar Blohm
- Queen's University, Centre for Neuroscience Studies, 18, Stuart Street, Kingston, Ontario K7L 3N6, Canada.
- Katja Fiehler
- Justus-Liebig-University, Experimental Psychology, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany.
19
Filimon F. Are All Spatial Reference Frames Egocentric? Reinterpreting Evidence for Allocentric, Object-Centered, or World-Centered Reference Frames. Front Hum Neurosci 2015;9:648. PMID: 26696861; PMCID: PMC4673307; DOI: 10.3389/fnhum.2015.00648.
Abstract
The use and neural representation of egocentric spatial reference frames is well-documented. In contrast, whether the brain represents spatial relationships between objects in allocentric, object-centered, or world-centered coordinates is debated. Here, I review behavioral, neuropsychological, neurophysiological (neuronal recording), and neuroimaging evidence for and against allocentric, object-centered, or world-centered spatial reference frames. Based on theoretical considerations, simulations, and empirical findings from spatial navigation, spatial judgments, and goal-directed movements, I suggest that all spatial representations may in fact be dependent on egocentric reference frames.
Affiliation(s)
- Flavia Filimon
- Adaptive Behavior and Cognition, Max Planck Institute for Human Development, Berlin, Germany
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
20
Camors D, Jouffrais C, Cottereau BR, Durand JB. Allocentric coding: spatial range and combination rules. Vision Res 2015;109:87-98. PMID: 25749676; DOI: 10.1016/j.visres.2015.02.018.
Abstract
When a visual target is presented with neighboring landmarks, its location can be determined both relative to the self (egocentric coding) and relative to these landmarks (allocentric coding). In the present study, we investigated (1) how allocentric coding depends on the distance between the targets and their surrounding landmarks (i.e., the spatial range) and (2) how allocentric and egocentric coding interact with each other across target-landmark distances (i.e., the combination rules). Subjects performed a memory-based pointing task toward previously gazed targets briefly superimposed (200 ms) on background images of cluttered city landscapes. A variable portion of the images was occluded in order to control the distance between the targets and the closest potential landmarks within those images. The pointing responses were performed after large saccades and the reappearance of the images at their initial location. However, in some trials, the images' elements were slightly shifted (±3°) in order to introduce a subliminal conflict between the allocentric and egocentric reference frames. The influence of allocentric coding on the pointing responses was found to decrease with increasing target-landmark distances, although it remained significant even at the largest distances (≥10°). Interestingly, both the decreasing influence of allocentric coding and the concomitant increase in pointing response variability were well captured by a Bayesian model in which the weighted combination of allocentric and egocentric cues is governed by a coupling prior.
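A coupling prior of this kind can be sketched as follows: a Gaussian prior N(0, σc²) on the discrepancy between the egocentric and allocentric estimates yields a MAP estimate that fuses the two cues only partially. This is a generic sketch of the standard coupling-prior formulation, not the authors' code; the parameter values and the assumed growth of allocentric noise with target-landmark distance are illustrative:

```python
def map_estimate(m_ego, sigma_ego, m_allo, sigma_allo, sigma_c):
    """MAP target location in the egocentric frame when a Gaussian coupling
    prior N(0, sigma_c**2) links the egocentric and allocentric estimates."""
    a, b, c = sigma_ego**-2, sigma_allo**-2, sigma_c**-2
    return (m_ego * a * (b + c) + m_allo * b * c) / (a*b + a*c + b*c)

# A +3 deg shift of the image elements puts the allocentric estimate in
# conflict with the (veridical) egocentric estimate at 0 deg.
for dist_deg, sigma_allo in [(1, 0.5), (5, 1.5), (10, 3.0)]:
    shift = map_estimate(m_ego=0.0, sigma_ego=1.0,
                         m_allo=3.0, sigma_allo=sigma_allo, sigma_c=1.0)
    print(f"target-landmark distance {dist_deg:>2} deg -> "
          f"predicted pointing deviation {shift:.2f} deg")
```

The predicted deviation toward the shifted elements shrinks with target-landmark distance but never vanishes, mirroring the pointing behavior described in the abstract. Making sigma_c very small forces full fusion of the cues, while making it very large decouples them entirely.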
Affiliation(s)
- D Camors
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France
- CNRS, CerCo, Toulouse, France
- Université de Toulouse, IRIT, Toulouse, France
- CNRS, IRIT, Toulouse, France
- C Jouffrais
- Université de Toulouse, IRIT, Toulouse, France
- CNRS, IRIT, Toulouse, France
- B R Cottereau
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France
- CNRS, CerCo, Toulouse, France
- J B Durand
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France
- CNRS, CerCo, Toulouse, France
21
No effect of delay on the spatial representation of serial reach targets. Exp Brain Res 2015;233:1225-1235. PMID: 25600817; PMCID: PMC4355444; DOI: 10.1007/s00221-015-4197-9.
Abstract
It has been argued that, when reaching for remembered target locations, the brain primarily relies on egocentric metrics, and especially target position relative to gaze, when reaches are immediate, but that the visuomotor system relies more strongly on allocentric (i.e., object-centered) metrics when a reach is delayed. However, previous reports from our group have shown that reaches to single remembered targets are represented relative to gaze, even when static visual landmarks are available and reaches are delayed by up to 12 s. Based on previous findings which showed a stronger contribution of allocentric coding in serial reach planning, the present study aimed to determine whether delay influences the use of a gaze-dependent reference frame when reaching to two remembered targets in a sequence after a delay of 0, 5, or 12 s. Gaze was varied relative to the first and second target and shifted away from the target before each reach. We found that participants used egocentric and allocentric reference frames in combination, with a stronger reliance on allocentric information, regardless of whether reaches were executed immediately or after a delay. Our results suggest that the relative contributions of egocentric and allocentric reference frames for spatial coding and updating of sequential reach targets do not change with a memory delay between target presentation and reaching.
22
Abstract
The location of a remembered reach target can be encoded in egocentric and/or allocentric reference frames. Cortical mechanisms for egocentric reach are relatively well described, but the corresponding allocentric representations are essentially unknown. Here, we used an event-related fMRI design to distinguish human brain areas involved in these two types of representation. Our paradigm consisted of three tasks with identical stimulus display but different instructions: egocentric reach (remember absolute target location), allocentric reach (remember target location relative to a visual landmark), and a nonspatial control, color report (report color of target). During the delay phase (when only target location was specified), the egocentric and allocentric tasks elicited widely overlapping regions of cortical activity (relative to the control), but with higher activation in parietofrontal cortex for egocentric task and higher activation in early visual cortex for allocentric tasks. In addition, egocentric directional selectivity (target relative to gaze) was observed in the superior occipital gyrus and the inferior occipital gyrus, whereas allocentric directional selectivity (target relative to a visual landmark) was observed in the inferior temporal gyrus and inferior occipital gyrus. During the response phase (after movement direction had been specified either by reappearance of the visual landmark or a pro-/anti-reach instruction), the parietofrontal network resumed egocentric directional selectivity, showing higher activation for contralateral than ipsilateral reaches. These results show that allocentric and egocentric reach mechanisms use partially overlapping but different cortical substrates and that directional specification is different for target memory versus reach response.
23
Use of exocentric and egocentric representations in the concurrent planning of sequential saccades. J Neurosci 2014;34:16009-16021. PMID: 25429142; DOI: 10.1523/JNEUROSCI.0328-14.2014.
Abstract
The concurrent planning of sequential saccades offers a simple model to study the nature of visuomotor transformations since the second saccade vector needs to be remapped to foveate the second target following the first saccade. Remapping is thought to occur through egocentric mechanisms involving an efference copy of the first saccade that is available around the time of its onset. In contrast, an exocentric representation of the second target relative to the first target, if available, can be used to directly code the second saccade vector. While human volunteers performed a modified double-step task, we examined the role of exocentric encoding in concurrent saccade planning by shifting the first target location well before the efference copy could be used by the oculomotor system. The impact of the first target shift on concurrent processing was tested by examining the end-points of second saccades following a shift of the second target during the first saccade. The frequency of second saccades to the old versus new location of the second target, as well as the propagation of first saccade localization errors, both indices of concurrent processing, were found to be significantly reduced in trials with the first target shift compared to those without it. A similar decrease in concurrent processing was obtained when we shifted the first target but kept constant the second saccade vector. Overall, these results suggest that the brain can use relatively stable visual landmarks, independent of efference copy-based egocentric mechanisms, for concurrent planning of sequential saccades.
24
Mueller S, Fiehler K. Effector movement triggers gaze-dependent spatial coding of tactile and proprioceptive-tactile reach targets. Neuropsychologia 2014;62:184-193. DOI: 10.1016/j.neuropsychologia.2014.07.025.
25
Fiehler K, Wolf C, Klinghammer M, Blohm G. Integration of egocentric and allocentric information during memory-guided reaching to images of a natural environment. Front Hum Neurosci 2014;8:636. PMID: 25202252; PMCID: PMC4141549; DOI: 10.3389/fnhum.2014.00636.
Abstract
When interacting with our environment we generally make use of egocentric and allocentric object information by coding object positions relative to the observer or relative to the environment, respectively. Bayesian theories suggest that the brain integrates both sources of information optimally for perception and action. However, experimental evidence for egocentric and allocentric integration is sparse and has only been studied using abstract stimuli lacking ecological relevance. Here, we investigated the use of egocentric and allocentric information during memory-guided reaching to images of naturalistic scenes. Participants encoded a breakfast scene containing six objects on a table (local objects) and three objects in the environment (global objects). After a 2 s delay, a visual test scene reappeared for 1 s in which one local object was missing (the target) and either 1, 3, or 5 of the remaining local objects or one of the global objects were shifted to the left or to the right. The offset of the test scene prompted participants to reach to the target as precisely as possible. Only local objects served as potential reach targets and thus were task-relevant. When objects were shifted, we predicted accurate reaching if participants only used egocentric coding of object position, and systematic shifts of reach endpoints if allocentric information were used for movement planning. We found that reaching movements were largely affected by allocentric shifts, with endpoint errors increasing in the direction of the object shifts as more local objects were shifted. No effect occurred when only one local or one global object was shifted. Our findings suggest that allocentric cues are indeed used by the brain for memory-guided reaching towards targets in naturalistic visual scenes. Moreover, the integration of egocentric and allocentric object information seems to depend on the extent of changes in the scene.
Affiliation(s)
- Katja Fiehler
- Department of Experimental Psychology, Justus-Liebig-University Giessen, Germany
- Christian Wolf
- Department of Experimental Psychology, Justus-Liebig-University Giessen, Germany
- Mathias Klinghammer
- Department of Experimental Psychology, Justus-Liebig-University Giessen, Germany
- Gunnar Blohm
- Canadian Action and Perception Network (CAPnet), Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
26
MacInnes WJ, Hunt AR. Attentional load interferes with target localization across saccades. Exp Brain Res 2014;232:3737-3748. PMID: 25138910; DOI: 10.1007/s00221-014-4062-2.
Abstract
The retinal positions of objects in the world change with each eye movement, but we seem to have little trouble keeping track of spatial information from one fixation to the next. We examined the role of attention in trans-saccadic localization by asking participants to localize targets while performing an attentionally demanding secondary task. In the first experiment, attentional load decreased localization precision for a remembered target, but only when a saccade intervened between target presentation and report. We then repeated the experiment and included a salient landmark that shifted on half the trials. The shifting landmark had a larger effect on localization under high load, indicating that observers rely more on landmarks to make localization judgments under high than under low attentional load. The results suggest that attention facilitates trans-saccadic localization judgments based on spatial updating of gaze-centered coordinates when visual landmarks are not available. The availability of reliable landmarks (present in most natural circumstances) can compensate for the effects of scarce attentional resources on trans-saccadic localization.
Affiliation(s)
- W Joseph MacInnes
- School of Psychology, University of Aberdeen, Aberdeen, AB24 3FX, UK
27
Mueller S, Fiehler K. Gaze-dependent spatial updating of tactile targets in a localization task. Front Psychol 2014;5:66. PMID: 24575060; PMCID: PMC3918658; DOI: 10.3389/fpsyg.2014.00066.
Abstract
There is concurrent evidence that visual reach targets are represented with respect to gaze. For tactile reach targets, we previously showed that an effector movement leads to a shift from a gaze-independent to a gaze-dependent reference frame. Here we aimed to unravel the influence of effector movement (gaze shift) on the reference frame of tactile stimuli using a spatial localization task (yes/no paradigm). We assessed how gaze direction (fixation left/right) alters the perceived spatial location (point of subjective equality) of sequentially presented tactile standard and visual comparison stimuli while effector movement (gaze fixed/shifted) and stimulus order (vis-tac/tac-vis) were varied. In the fixed-gaze condition, subjects maintained gaze at the fixation site throughout the trial. In the shifted-gaze condition, they foveated the first stimulus, then made a saccade toward the fixation site where they held gaze while the second stimulus appeared. Only when an effector movement occurred after the encoding of the tactile stimulus (shifted-gaze, tac-vis), gaze similarly influenced the perceived location of the tactile and the visual stimulus. In contrast, when gaze was fixed or a gaze shift occurred before encoding of the tactile stimulus, gaze differentially affected the perceived spatial relation of the tactile and the visual stimulus suggesting gaze-dependent coding of only one of the two stimuli. Consistent with previous findings this implies that visual stimuli vary with gaze irrespective of whether gaze is fixed or shifted. However, a gaze-dependent representation of tactile stimuli seems to critically depend on an effector movement (gaze shift) after tactile encoding triggering spatial updating of tactile targets in a gaze-dependent reference frame. Together with our recent findings on tactile reaching, the present results imply similar underlying reference frames for tactile spatial perception and action.
Affiliation(s)
- Stefanie Mueller
- Department of Psychology, Justus-Liebig University Giessen, Giessen, Germany
- Katja Fiehler
- Department of Psychology, Justus-Liebig University Giessen, Giessen, Germany