1. Bharmauria V, Seo S, Crawford JD. Neural integration of egocentric and allocentric visual cues in the gaze system. J Neurophysiol 2025;133:109-120. PMID: 39584726. DOI: 10.1152/jn.00498.2024.
Abstract
A fundamental question in neuroscience is how the brain integrates egocentric (body-centered) and allocentric (landmark-centered) visual cues, but for many years this question was ignored in sensorimotor studies. This changed in recent behavioral experiments, but the underlying physiology of ego/allocentric integration remained largely unstudied. The specific goal of this review is to explain how prefrontal neurons integrate eye-centered and landmark-centered visual codes for optimal gaze behavior. First, we briefly review the whole brain/behavioral mechanisms for ego/allocentric integration in the human and summarize egocentric coding mechanisms in the primate gaze system. We then focus in more depth on cellular mechanisms for ego/allocentric coding in the frontal and supplementary eye fields. We first explain how prefrontal visual responses integrate eye-centered target and landmark codes to produce a transformation toward landmark-centered coordinates. Next, we describe what happens when a landmark shifts during the delay between seeing and acquiring a remembered target, initially resulting in independently coexisting ego/allocentric memory codes. We then describe how these codes are reintegrated in the motor burst for the gaze shift. Deep network simulations suggest that these properties emerge spontaneously for optimal gaze behavior. Finally, we synthesize these observations and relate them to normal brain function through a simplified conceptual model. Together, these results show that integration of visuospatial features continues well beyond visual cortex and suggest a general cellular mechanism for goal-directed visual behavior.
Affiliation(s)
- Vishal Bharmauria
- The Tampa Human Neurophysiology Lab & Department of Neurosurgery and Brain Repair, Morsani College of Medicine, University of South Florida, Tampa, Florida, United States
- York Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario, Canada
- Serah Seo
- York Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario, Canada
- Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- J Douglas Crawford
- York Centre for Vision Research and Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario, Canada
- Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, Ontario, Canada
2. Schuetz I, Baltaretu BR, Fiehler K. Where was this thing again? Evaluating methods to indicate remembered object positions in virtual reality. J Vis 2024;24(7):10. PMID: 38995109. PMCID: PMC11246095. DOI: 10.1167/jov.24.7.10.
Abstract
A current focus in sensorimotor research is the study of human perception and action in increasingly naturalistic tasks and visual environments. This is further enabled by the recent commercial success of virtual reality (VR) technology, which allows for highly realistic but well-controlled three-dimensional (3D) scenes. VR enables a multitude of different ways to interact with virtual objects, but only rarely are such interaction techniques evaluated and compared before being selected for a sensorimotor experiment. Here, we compare different response techniques for a memory-guided action task in which participants indicated the position of a previously seen 3D object in a VR scene: pointing (using a virtual laser pointer of short or unlimited length) and placing (either the target object itself or a generic reference cube). Response techniques differed in the availability of 3D object cues and in the requirement to physically walk to the remembered object position. Object placement was the most accurate but slowest, owing to repeated repositioning. When placing objects, participants tended to match the original object's orientation. In contrast, the laser pointer was fastest but least accurate, with the short pointer offering a good speed-accuracy compromise. Our findings can help researchers select appropriate methods when studying naturalistic visuomotor behavior in virtual environments.
Affiliation(s)
- Immo Schuetz
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Katja Fiehler
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), Philipps University Marburg and Justus Liebig University, Giessen, Germany
3. Musa L, Yan X, Crawford JD. Instruction alters the influence of allocentric landmarks in a reach task. J Vis 2024;24(7):17. PMID: 39073800. PMCID: PMC11290568. DOI: 10.1167/jov.24.7.17.
Abstract
Allocentric landmarks have an implicit influence on aiming movements, but it is not clear how an explicit instruction (to aim relative to a landmark) influences reach accuracy and precision. Here, 12 participants performed a task with two instruction conditions (egocentric vs. allocentric) but with similar sensory and motor conditions. Participants fixated gaze near the center of a display aligned with their right shoulder while a target stimulus briefly appeared alongside a visual landmark in one visual field. After a brief mask/memory delay the landmark then reappeared at a different location (same or opposite visual field), creating an ego/allocentric conflict. In the egocentric condition, participants were instructed to ignore the landmark and point toward the remembered location of the target. In the allocentric condition, participants were instructed to remember the initial target location relative to the landmark and then reach relative to the shifted landmark (same or opposite visual field). To equalize motor execution between tasks, participants were instructed to anti-point (point to the visual field opposite to the remembered target) on 50% of the egocentric trials. Participants were more accurate and precise and quicker to react in the allocentric condition, especially when pointing to the opposite field. We also observed a visual field effect, where performance was worse overall in the right visual field. These results suggest that, when egocentric and allocentric cues conflict, explicit use of the visual landmark provides better reach performance than reliance on noisy egocentric signals. Such instructions might aid rehabilitation when the egocentric system is compromised by disease or injury.
Affiliation(s)
- Lina Musa
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- Department of Psychology, York University, Toronto, ON, Canada
- Xiaogang Yan
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- J Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- Department of Psychology, York University, Toronto, ON, Canada
- Departments of Biology and Kinesiology & Health Sciences, York University, Toronto, ON, Canada
4. Touchscreen pointing and swiping: the effect of background cues and target visibility. Motor Control 2020;24:422-434. PMID: 32502971. DOI: 10.1123/mc.2019-0096.
Abstract
By assessing the precision of gestural interactions with touchscreen targets, the authors investigate how the type of gesture, target location, and scene visibility impact movement endpoints. Participants made visually guided and memory-guided pointing and swiping gestures with a stylus to targets located in a semicircle. Specific differences in aiming errors were identified between swiping and pointing. In particular, participants overshot the target more when swiping than when pointing, and swiping endpoints showed a stronger bias toward the oblique than pointing endpoints. As expected, the authors also found specific differences between conditions with and without delays. Overall, the authors observed an influence on movement execution from each of the three parameters studied and found that the information used to guide movement appears to be gesture-specific.
5. Nishimura N, Uchimura M, Kitazawa S. Automatic encoding of a target position relative to a natural scene. J Neurophysiol 2019;122:1849-1860. PMID: 31509471. DOI: 10.1152/jn.00032.2018.
Abstract
We previously showed that the brain automatically represents a target position for reaching relative to a large square in the background. In the present study, we tested whether a natural scene with many complex details serves as an effective background for representing a target. In the first experiment, we used upright and inverted pictures of a natural scene. A shift of the pictures significantly attenuated prism adaptation of reaching movements as long as they were upright. Notably, participants fell into two distinct groups: one-third relied fully on the allocentric coordinates, with adaptation almost completely canceled whether the pictures were upright or inverted, whereas the rest depended on them only when the scene was upright. In the second experiment, we examined how long it takes for a novel upright scene to serve as a background. A shift of the novel scene had no significant effect when it was presented for 500 ms before presenting a target, but significant effects recovered when it was presented for 1,500 ms. These results show that a natural scene serves as a background against which a target is automatically represented once we spend 1,500 ms in the scene.

NEW & NOTEWORTHY Prism adaptation of reaching was attenuated by a shift of natural scenes as long as they were upright. In one-third of participants, adaptation was fully canceled whether the scene was upright or inverted. When an upright scene was novel, it took 1,500 ms to prepare the scene for allocentric coding. These results show that a natural scene serves as a background against which a target is automatically represented once we spend 1,500 ms in the scene.
Affiliation(s)
- Nobuyuki Nishimura
- Department of Anesthesiology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan; Department of Brain Physiology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
- Motoaki Uchimura
- Dynamic Brain Network Laboratory, Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka, Japan
- Shigeru Kitazawa
- Dynamic Brain Network Laboratory, Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka, Japan; Department of Brain Physiology, Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
6. Chen Y, Crawford JD. Allocentric representations for target memory and reaching in human cortex. Ann N Y Acad Sci 2019;1464:142-155. PMID: 31621922. DOI: 10.1111/nyas.14261.
Abstract
The use of allocentric cues for movement guidance is complex because it involves the integration of visual targets and independent landmarks and the conversion of this information into egocentric commands for action. Here, we focus on the mechanisms for encoding reach targets relative to visual landmarks in humans. First, we consider the behavioral results suggesting that both of these cues influence target memory, but are then transformed, at the first opportunity, into egocentric commands for action. We then consider the cortical mechanisms for these behaviors. We discuss different allocentric versus egocentric mechanisms for coding of target directional selectivity in memory (inferior temporal gyrus versus superior occipital gyrus) and distinguish these mechanisms from parieto-frontal activation for planning egocentric direction of actual reach movements. Then, we consider where and how the former allocentric representations of remembered reach targets are converted into the latter egocentric plans. In particular, our recent neuroimaging study suggests that four areas in the parietal and frontal cortex (right precuneus, bilateral dorsal premotor cortex, and right presupplementary area) participate in this allo-to-ego conversion. Finally, we provide a functional overview describing how and why egocentric and landmark-centered representations are segregated early in the visual system, but then reintegrated in the parieto-frontal cortex for action.
Affiliation(s)
- Ying Chen
- Center for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada
- J Douglas Crawford
- Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Center for Vision Research, Vision: Science to Applications (VISTA) Program, and Departments of Psychology, Biology, and Kinesiology & Health Science, York University, Toronto, Ontario, Canada
7. Clark BJ, Simmons CM, Berkowitz LE, Wilber AA. The retrosplenial-parietal network and reference frame coordination for spatial navigation. Behav Neurosci 2018;132:416-429. PMID: 30091619. PMCID: PMC6188841. DOI: 10.1037/bne0000260.
Abstract
The retrosplenial cortex is anatomically positioned to integrate sensory, motor, and visual information and is thought to have an important role in processing spatial information and guiding behavior through complex environments. Anatomical and theoretical work has argued that the retrosplenial cortex participates in spatial behavior in concert with input from the parietal cortex. Although the nature of these interactions is unknown, a central position is that the functional connectivity is hierarchical with egocentric spatial information processed in the parietal cortex and higher-level allocentric mappings generated in the retrosplenial cortex. Here, we review the evidence supporting this proposal. We begin by summarizing the key anatomical features of the retrosplenial-parietal network, and then review studies investigating the neural correlates of these regions during spatial behavior. Our summary of this literature suggests that the retrosplenial-parietal circuitry does not represent a strict hierarchical parcellation of function between the two regions but instead a heterogeneous mixture of egocentric-allocentric coding and integration across frames of reference. We also suggest that this circuitry should be represented as a gradient of egocentric-to-allocentric information processing from parietal to retrosplenial cortices, with more specialized encoding of global allocentric frameworks within the retrosplenial cortex and more specialized egocentric and local allocentric representations in parietal cortex. We conclude by identifying the major gaps in this literature and suggest new avenues of research.
8. Chen Y, Monaco S, Crawford JD. Neural substrates for allocentric-to-egocentric conversion of remembered reach targets in humans. Eur J Neurosci 2018. PMID: 29512943. DOI: 10.1111/ejn.13885.
Abstract
Targets for goal-directed action can be encoded in allocentric coordinates (relative to another visual landmark), but it is not known how these are converted into egocentric commands for action. Here, we investigated this using a slow event-related fMRI paradigm, based on our previous behavioural finding that the allocentric-to-egocentric (Allo-Ego) conversion for reach is performed at the first possible opportunity. Participants were asked to remember (and eventually reach towards) the location of a briefly presented target relative to another visual landmark. After a first memory delay, participants were forewarned by a verbal instruction if the landmark would reappear at the same location (potentially allowing them to plan a reach following the auditory cue before the second delay), or at a different location where they had to wait for the final landmark to be presented before response, and then reach towards the remembered target location. As predicted, participants showed landmark-centred directional selectivity in occipital-temporal cortex during the first memory delay, and only developed egocentric directional selectivity in occipital-parietal cortex during the second delay for the 'Same cue' task, and during response for the 'Different cue' task. We then compared cortical activation between these two tasks at the times when the Allo-Ego conversion occurred, and found common activation in right precuneus, right presupplementary area and bilateral dorsal premotor cortex. These results confirm that the brain converts allocentric codes to egocentric plans at the first possible opportunity, and identify the four most likely candidate sites specific to the Allo-Ego transformation for reaches.
Affiliation(s)
- Ying Chen
- Center for Vision Research, Room 0009, Lassonde Building, 4700 Keele Street, Toronto, ON M3J 1P3, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), Toronto, ON, Canada
- Simona Monaco
- Center for Mind/Brain Sciences, University of Trento, Trento, Italy
- J Douglas Crawford
- Center for Vision Research, Room 0009, Lassonde Building, 4700 Keele Street, Toronto, ON M3J 1P3, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), Toronto, ON, Canada; Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
9. Schenk T, Hesse C. Do we have distinct systems for immediate and delayed actions? A selective review on the role of visual memory in action. Cortex 2018;98:228-248. DOI: 10.1016/j.cortex.2017.05.014.
10. Shim J, van der Kamp J. The effects of optical illusions in perception and action in peripersonal and extrapersonal space. Perception 2017;46:1118-1126. PMID: 28467169. DOI: 10.1177/0301006617707697.
Abstract
While the two-visual-systems hypothesis tells a fairly compelling story about perception and action in peripersonal space (i.e., within arm's reach), its validity for extrapersonal space is very limited and highly controversial. Hence, the present purpose was to assess whether perception and action differences in peripersonal space hold in extrapersonal space and are modulated by the same factors. To this end, the effects of an optical illusion on perception and action in both peripersonal and extrapersonal space were compared in three groups that threw balls toward a distant target under different target eccentricity (target fixated vs. viewed peripherally), viewing (binocular vs. monocular), and delay (immediate vs. delayed action) conditions. The illusory bias was smaller in action than in perception in peripersonal space, but this difference was significantly reduced in extrapersonal space, primarily because of a weakening bias in perception. No systematic modulation by target eccentricity, viewing, or delay arose. The findings suggest that the two-visual-systems hypothesis is also valid for extrapersonal space.
Affiliation(s)
- John van der Kamp
- VU University Amsterdam, Van der Boechorststraat, Amsterdam, The Netherlands
11. Klinghammer M, Blohm G, Fiehler K. Scene configuration and object reliability affect the use of allocentric information for memory-guided reaching. Front Neurosci 2017;11:204. PMID: 28450826. PMCID: PMC5390010. DOI: 10.3389/fnins.2017.00204.
Abstract
Previous research has shown that egocentric and allocentric information is used for coding target locations for memory-guided reaching movements. In particular, task-relevance determines the use of objects as allocentric cues. Here, we investigated the influence of scene configuration and object reliability, as a function of task-relevance, on allocentric coding for memory-guided reaching. For that purpose, we presented participants with images of a naturalistic breakfast scene with five objects on a table and six objects in the background. Six of these objects served as potential reach targets (task-relevant objects). Participants explored the scene, and after a short delay a test scene appeared with one of the task-relevant objects missing, indicating the location of the reach target. After the test scene vanished, participants performed a memory-guided reaching movement toward the target location. Besides removing one object from the test scene, we also shifted the remaining task-relevant and/or task-irrelevant objects leftward or rightward, either coherently in the same direction or incoherently in opposite directions. By varying object coherence, we manipulated the reliability of task-relevant and task-irrelevant objects in the scene. In order to examine the influence of scene configuration (distributed vs. grouped arrangement of task-relevant objects) on allocentric coding, we compared the present data with our previously published data set (Klinghammer et al., 2015). We found that reaching errors systematically deviated in the direction of object shifts, but only when the objects were task-relevant and their reliability was high. However, this effect was substantially reduced when task-relevant objects were distributed across the scene, leading to a larger target-cue distance compared with a grouped configuration. No deviations of reach endpoints were observed in conditions with shifts of only task-irrelevant objects or with low object reliability, irrespective of task-relevance. Moreover, when solely task-relevant objects were shifted incoherently, the variability of reaching endpoints increased compared with coherent shifts of task-relevant objects. Our results suggest that the use of allocentric information for coding targets for memory-guided reaching depends on the scene configuration, in particular the average distance of the reach target to task-relevant objects, and on the reliability of task-relevant allocentric information.
Affiliation(s)
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
- Katja Fiehler
- Experimental Psychology, Justus-Liebig-University Giessen, Germany
12. Hesse C, Miller L, Buckingham G. Visual information about object size and object position are retained differently in the visual brain: evidence from grasping studies. Neuropsychologia 2016;91:531-543. PMID: 27663865. DOI: 10.1016/j.neuropsychologia.2016.09.016.
Abstract
Many experiments have examined how the visual information used for action control is represented in our brain, and whether or not visually guided and memory-guided hand movements rely on dissociable visual representations that are processed in different brain areas (dorsal vs. ventral). However, little is known about how these representations decay over longer time periods and whether or not different visual properties are retained in a similar fashion. In three experiments, we investigated how information about object size and object position affects grasping as visual memory demands increase. We found that position information decayed rapidly with increasing delays between viewing the object and initiating subsequent actions, impacting both the accuracy of the transport component (lower endpoint accuracy) and the grasp component (larger grip apertures) of the movement. In contrast, grip apertures and fingertip forces remained well adjusted to target size in conditions in which positional information was either irrelevant or provided, regardless of delay, indicating that object size is encoded in a more stable manner than object position. The findings provide evidence that different grasp-relevant properties are encoded differently by the visual system. Furthermore, we argue that caution is required when making inferences about object size representations based on alterations in the grip component, as these variations are confounded with the accuracy with which object position is represented. Instead, fingertip forces seem to provide a reliable and confound-free measure for assessing internal size estimates under increased visual uncertainty.
Affiliation(s)
- Louisa Miller
- Department of Psychiatry, University of Cambridge, UK
- Gavin Buckingham
- Department of Sport and Health Sciences, University of Exeter, UK
13. Camors D, Jouffrais C, Cottereau BR, Durand JB. Allocentric coding: spatial range and combination rules. Vision Res 2015;109:87-98. PMID: 25749676. DOI: 10.1016/j.visres.2015.02.018.
Abstract
When a visual target is presented with neighboring landmarks, its location can be determined both relative to the self (egocentric coding) and relative to these landmarks (allocentric coding). In the present study, we investigated (1) how allocentric coding depends on the distance between targets and their surrounding landmarks (i.e., the spatial range) and (2) how allocentric and egocentric coding interact with each other across target-landmark distances (i.e., the combination rules). Subjects performed a memory-based pointing task toward previously gazed targets briefly superimposed (200 ms) on background images of cluttered city landscapes. A variable portion of the images was occluded in order to control the distance between the targets and the closest potential landmarks within those images. The pointing responses were performed after large saccades and the reappearance of the images at their initial location. However, in some trials, the images' elements were slightly shifted (±3°) in order to introduce a subliminal conflict between the allocentric and egocentric reference frames. The influence of allocentric coding on the pointing responses was found to decrease with increasing target-landmark distances, although it remained significant even at the largest distances (≥10°). Interestingly, both the decreasing influence of allocentric coding and the concomitant increase in pointing-response variability were well captured by a Bayesian model in which the weighted combination of allocentric and egocentric cues is governed by a coupling prior.
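For orientation, the reliability-weighted combination on which such Bayesian models are built can be written as below. This is a generic sketch in illustrative notation, not the paper's coupling-prior formulation, which additionally lets the allocentric weight decay with target-landmark distance:

```latex
\hat{x} = w_a \, x_{\mathrm{allo}} + (1 - w_a) \, x_{\mathrm{ego}},
\qquad
w_a = \frac{1/\sigma_{\mathrm{allo}}^{2}}{1/\sigma_{\mathrm{allo}}^{2} + 1/\sigma_{\mathrm{ego}}^{2}}
```

Under this scheme, any factor that inflates allocentric variance (here, a larger target-landmark distance) simultaneously lowers the allocentric weight and raises the variability of the combined estimate, which matches the joint pattern reported above.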
Affiliation(s)
- D Camors
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; CNRS, CerCo, Toulouse, France; Université de Toulouse, IRIT, Toulouse, France; CNRS, IRIT, Toulouse, France
- C Jouffrais
- Université de Toulouse, IRIT, Toulouse, France; CNRS, IRIT, Toulouse, France
- B R Cottereau
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; CNRS, CerCo, Toulouse, France
- J B Durand
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; CNRS, CerCo, Toulouse, France.
14. No effect of delay on the spatial representation of serial reach targets. Exp Brain Res 2015;233:1225-1235. PMID: 25600817. PMCID: PMC4355444. DOI: 10.1007/s00221-015-4197-9.
Abstract
When reaching for remembered target locations, it has been argued that the brain primarily relies on egocentric metrics, especially target position relative to gaze, when reaches are immediate, but that the visuomotor system relies more strongly on allocentric (i.e., object-centered) metrics when a reach is delayed. However, previous reports from our group have shown that reaches to single remembered targets are represented relative to gaze, even when static visual landmarks are available and reaches are delayed by up to 12 s. Based on previous findings showing a stronger contribution of allocentric coding in serial reach planning, the present study aimed to determine whether delay influences the use of a gaze-dependent reference frame when reaching to two remembered targets in a sequence after a delay of 0, 5, or 12 s. Gaze was varied relative to the first and second target and shifted away from the target before each reach. We found that participants used egocentric and allocentric reference frames in combination, with a stronger reliance on allocentric information, regardless of whether reaches were executed immediately or after a delay. Our results suggest that the relative contributions of egocentric and allocentric reference frames for spatial coding and updating of sequential reach targets do not change with a memory delay between target presentation and reaching.
15. Taghizadeh B, Gail A. Spatial task context makes short-latency reaches prone to induced Roelofs illusion. Front Hum Neurosci 2014;8:673. PMID: 25221500. PMCID: PMC4148936. DOI: 10.3389/fnhum.2014.00673.
Abstract
The perceptual localization of an object is often more prone to illusions than an immediate visuomotor action towards that object. The induced Roelofs effect (IRE) probes the illusory influence of task-irrelevant visual contextual stimuli on the processing of task-relevant visuospatial instructions during movement preparation. In the IRE, the position of a task-irrelevant visual object induces a shift in the localization of a visual target when subjects indicate the position of the target by verbal response, key-presses or delayed pointing to the target (“perception” tasks), but not when immediately pointing or reaching towards it without instructed delay (“action” tasks). This discrepancy was taken as evidence for the dual-visual-stream or perception-action hypothesis, but was later explained by a phasic distortion of the egocentric spatial reference frame which is centered on subjective straight-ahead (SSA) and used for reach planning. Both explanations critically depend on delayed movements to explain the IRE for action tasks. Here we ask: first, if the IRE can be observed for short-latency reaches; second, if the IRE in fact depends on a distorted egocentric frame of reference. Human subjects were tested in new versions of the IRE task in which the reach goal had to be localized with respect to another object, i.e., in an allocentric reference frame. First, we found an IRE even for immediate reaches in our allocentric task, but not for an otherwise similar egocentric control task. Second, the IRE depended on the position of the task-irrelevant frame relative to the reference object, not relative to SSA. We conclude that the IRE for reaching does not mandatorily depend on prolonged response delays, nor does it depend on motor planning in an egocentric reference frame. Instead, allocentric encoding of a movement goal is sufficient to make immediate reaches susceptible to IRE, underlining the context dependence of visuomotor illusions.
Affiliation(s)
- Bahareh Taghizadeh
- Sensorimotor Group, German Primate Center, Leibniz Institute for Primate Research, Göttingen, Germany; Faculty of Biology and Psychology, Georg-August-Universität Göttingen, Germany
- Alexander Gail
- Sensorimotor Group, German Primate Center, Leibniz Institute for Primate Research, Göttingen, Germany; Faculty of Biology and Psychology, Georg-August-Universität Göttingen, Germany; Bernstein Center for Computational Neuroscience, Göttingen, Germany
16. Fiehler K, Wolf C, Klinghammer M, Blohm G. Integration of egocentric and allocentric information during memory-guided reaching to images of a natural environment. Front Hum Neurosci 2014;8:636. PMID: 25202252. PMCID: PMC4141549. DOI: 10.3389/fnhum.2014.00636.
Abstract
When interacting with our environment, we generally make use of egocentric and allocentric object information by coding object positions relative to the observer or relative to the environment, respectively. Bayesian theories suggest that the brain integrates both sources of information optimally for perception and action. However, experimental evidence for egocentric and allocentric integration is sparse and has only been studied using abstract stimuli lacking ecological relevance. Here, we investigated the use of egocentric and allocentric information during memory-guided reaching to images of naturalistic scenes. Participants encoded a breakfast scene containing six objects on a table (local objects) and three objects in the environment (global objects). After a 2 s delay, a visual test scene reappeared for 1 s in which one local object was missing (the target) and, of the remaining objects, one, three, or five local objects or one of the global objects were shifted to the left or to the right. The offset of the test scene prompted participants to reach to the target as precisely as possible. Only local objects served as potential reach targets and thus were task-relevant. When shifting objects, we predicted accurate reaching if participants used only egocentric coding of object position, and systematic shifts of reach endpoints if allocentric information was used for movement planning. We found that reaching movements were largely affected by allocentric shifts, showing an increase in endpoint errors in the direction of object shifts with the number of local objects shifted. No effect occurred when one local or one global object was shifted. Our findings suggest that allocentric cues are indeed used by the brain for memory-guided reaching toward targets in naturalistic visual scenes. Moreover, the integration of egocentric and allocentric object information seems to depend on the extent of changes in the scene.
Affiliation(s)
- Katja Fiehler
- Department of Experimental Psychology, Justus-Liebig-University Giessen, Germany
- Christian Wolf
- Department of Experimental Psychology, Justus-Liebig-University Giessen, Germany
- Mathias Klinghammer
- Department of Experimental Psychology, Justus-Liebig-University Giessen, Germany
- Gunnar Blohm
- Canadian Action and Perception Network (CAPnet), Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
17. Hesse C, Schenk T. Delayed action does not always require the ventral stream: a study on a patient with visual form agnosia. Cortex 2014;54:77-91. DOI: 10.1016/j.cortex.2014.02.011.
18. Wilber AA, Clark BJ, Forster TC, Tatsuno M, McNaughton BL. Interaction of egocentric and world-centered reference frames in the rat posterior parietal cortex. J Neurosci 2014;34:5431-5446. PMID: 24741034. PMCID: PMC3988403. DOI: 10.1523/jneurosci.0511-14.2014.
Abstract
Navigation requires coordination of egocentric and allocentric spatial reference frames and may involve vectorial computations relative to landmarks. Creation of a representation of target heading relative to landmarks could be accomplished from neurons that encode the conjunction of egocentric landmark bearings with allocentric head direction. Landmark vector representations could then be created by combining these cells with distance encoding cells. Landmark vector cells have been identified in rodent hippocampus. Given remembered vectors at goal locations, it would be possible to use such cells to compute trajectories to hidden goals. To look for the first stage in this process, we assessed parietal cortical neural activity as a function of egocentric cue light location and allocentric head direction in rats running a random sequence to light locations around a circular platform. We identified cells that exhibit the predicted egocentric-by-allocentric conjunctive characteristics and anticipate orienting toward the goal.
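As a toy illustration of the vector bookkeeping this abstract describes (illustrative names and angle conventions, not the authors' analysis code): adding the egocentric bearing of a landmark to allocentric head direction yields the landmark's allocentric bearing, and combining that bearing with a distance estimate yields a landmark vector.

```python
import math

def allocentric_bearing(ego_bearing_deg, head_dir_deg):
    """Conjunction described above: egocentric landmark bearing plus
    allocentric head direction gives the landmark's allocentric bearing."""
    return (ego_bearing_deg + head_dir_deg) % 360.0

def landmark_vector(bearing_deg, distance):
    """Combine an allocentric bearing with a distance estimate into a 2-D vector."""
    rad = math.radians(bearing_deg)
    return (distance * math.cos(rad), distance * math.sin(rad))

# A landmark 30 deg to the right of the snout (-30 egocentric) while the
# head points at 90 deg allocentric lies at a 60 deg allocentric bearing.
print(allocentric_bearing(-30.0, 90.0))  # 60.0
print(landmark_vector(60.0, 2.0))        # approx (1.0, 1.73)
```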
Affiliation(s)
- Aaron A Wilber
- Canadian Centre for Behavioural Neuroscience, The University of Lethbridge, Lethbridge, Alberta, Canada T1K 3M4
19. Borel L, Redon-Zouiteni C, Cauvin P, Dumitrescu M, Devèze A, Magnan J, Péruch P. Unilateral vestibular loss impairs external space representation. PLoS One 2014;9:e88576. PMID: 24523916. PMCID: PMC3921214. DOI: 10.1371/journal.pone.0088576.
Abstract
The vestibular system is responsible for a wide range of postural and oculomotor functions and maintains an internal, updated representation of the position and movement of the head in space. In this study, we assessed whether unilateral vestibular loss affects external space representation. Patients with Menière's disease and healthy participants were instructed to point to memorized targets in near (peripersonal) and far (extrapersonal) spaces in the absence or presence of a visual background. These individuals were also required to estimate their body pointing direction. Menière's disease patients were tested before unilateral vestibular neurotomy and during the recovery period (one week and one month after the operation), and healthy participants were tested at similar times. Unilateral vestibular loss impaired the representation of both the external space and the body pointing direction: in the dark, the configuration of perceived targets was shifted toward the lesioned side and compressed toward the contralesioned hemifield, with higher pointing error in the near space. Performance varied according to the time elapsed after neurotomy: deficits were stronger during the early stages, while gradual compensation occurred subsequently. These findings provide the first demonstration of the critical role of vestibular signals in the representation of external space and of body pointing direction in the early stages after unilateral vestibular loss.
Affiliation(s)
- Liliane Borel
- Aix-Marseille Université, Marseille, France
- CNRS, UMR 7260 Laboratoire de Neurosciences Intégratives et Adaptatives, Marseille, France
- Michel Dumitrescu
- Aix-Marseille Université, Marseille, France
- CNRS, UMR 7260 Laboratoire de Neurosciences Intégratives et Adaptatives, Marseille, France
- Arnaud Devèze
- Aix-Marseille Université, Marseille, France
- Service d'Oto-Rhino-Laryngologie et Chirurgie Cervico-Faciale, Hôpital Nord, Marseille, France
- Jacques Magnan
- Aix-Marseille Université, Marseille, France
- CNRS, UMR 7260 Laboratoire de Neurosciences Intégratives et Adaptatives, Marseille, France
- Patrick Péruch
- Aix-Marseille Université, Marseille, France
- INSERM, UMR_S 1106 Institut de Neurosciences des Systèmes, Marseille, France
20. Schütz I, Henriques DYP, Fiehler K. Gaze-centered spatial updating in delayed reaching even in the presence of landmarks. Vision Res 2013;87:46-52. PMID: 23770521. DOI: 10.1016/j.visres.2013.06.001.
Abstract
Previous results suggest that the brain predominantly relies on a constantly updated gaze-centered target representation to guide reach movements when no other visual information is available. In the present study, we investigated whether the addition of reliable visual landmarks influences the use of spatial reference frames for immediate and delayed reaching. Subjects reached immediately or after a delay of 8 or 12 s to remembered target locations, either with or without landmarks. After target presentation and before reaching, they shifted gaze to one of five different fixation points and held their gaze at this location until the end of the reach. With landmarks present, gaze-dependent reaching errors were smaller and reaches were more precise than without landmarks. Delay influenced neither reaching errors nor variability. These findings suggest that when landmarks are available, the brain still uses gaze-dependent representations but combines them with gaze-independent allocentric information to guide immediate or delayed reach movements to visual targets.
Affiliation(s)
- I Schütz
- Department of Psychology, Justus-Liebig-University Giessen, Giessen, Germany.
21. Influence of age, spatial memory, and ocular fixation on localization of auditory, visual, and bimodal targets by human subjects. Exp Brain Res 2012;223:441-455. PMID: 23076429. DOI: 10.1007/s00221-012-3270-x.
Abstract
A common complaint of the elderly is difficulty identifying and localizing auditory and visual sources, particularly in competing background noise. Spatial errors in the elderly may pose challenges and even threats to self and others during everyday activities, such as localizing sounds in a crowded room or driving in traffic. In this study, we investigated the influence of aging, spatial memory, and ocular fixation on the localization of auditory, visual, and combined auditory-visual (bimodal) targets. Head-restrained young and elderly subjects localized targets in a dark, echo-attenuated room using a manual laser pointer. Localization accuracy and precision (repeatability) were quantified for both ongoing and transient (remembered) targets at response delays up to 10 s. Because eye movements bias auditory spatial perception, localization was assessed under target fixation (eyes free, pointer guided by foveal vision) and central fixation (eyes fixed straight ahead, pointer guided by peripheral vision) conditions. Spatial localization across the frontal field in young adults demonstrated (1) horizontal overshoot and vertical undershoot for ongoing auditory targets under target fixation conditions, but near-ideal horizontal localization with central fixation; (2) accurate and precise localization of ongoing visual targets guided by foveal vision under target fixation that degraded when guided by peripheral vision during central fixation; (3) overestimation in horizontal central space (±10°) of remembered auditory, visual, and bimodal targets with increasing response delay. In comparison with young adults, elderly subjects showed (1) worse precision in most paradigms, especially when localizing with peripheral vision under central fixation; (2) greatly impaired vertical localization of auditory and bimodal targets; (3) increased horizontal overshoot in the central field for remembered visual and bimodal targets across response delays; (4) greater vulnerability to visual bias with bimodal stimuli. Results highlight age-, memory-, and modality-dependent deterioration in the processing of auditory and visual space, as well as an age-related increase in the dominance of vision when localizing bimodal sources.
22. Thompson AA, Glover CV, Henriques DY. Allocentrically implied target locations are updated in an eye-centred reference frame. Neurosci Lett 2012;514:214-218. PMID: 22425720. DOI: 10.1016/j.neulet.2012.03.004.
23. Crawford JD, Henriques DYP, Medendorp WP. Three-dimensional transformations for goal-directed action. Annu Rev Neurosci 2011;34:309-331. PMID: 21456958. DOI: 10.1146/annurev-neuro-061010-113749.
Abstract
Much of the central nervous system is involved in visuomotor transformations for goal-directed gaze and reach movements. These transformations are often described in terms of stimulus location, gaze fixation, and reach endpoints, as viewed through the lens of translational geometry. Here, we argue that the intrinsic (primarily rotational) 3-D geometry of the eye-head-reach systems determines the spatial relationship between extrinsic goals and effector commands, and therefore the required transformations. This approach provides a common theoretical framework for understanding both gaze and reach control. Combined with an assessment of the behavioral, neurophysiological, imaging, and neuropsychological literature, this framework leads us to conclude that (a) the internal representation and updating of visual goals are dominated by gaze-centered mechanisms, but (b) these representations must then be transformed as a function of eye and head orientation signals into effector-specific 3-D movement commands.
Affiliation(s)
- J Douglas Crawford
- York Centre for Vision Research, Canadian Action and Perception Network, and Department of Psychology, York University, Toronto, Ontario, Canada M3J 1P3
24. Gaze-centered spatial updating of reach targets across different memory delays. Vision Res 2011;51:890-897. PMID: 21219923. DOI: 10.1016/j.visres.2010.12.015.
Abstract
Previous research has demonstrated that remembered targets for reaching are coded and updated relative to gaze, at least when the reaching movement is made soon after the target has been extinguished. In this study, we tested whether reach targets are updated relative to gaze following different time delays. Reaching endpoints systematically varied as a function of gaze relative to target, irrespective of whether the action was executed immediately or after a delay of 5, 8, or 12 s. The present results suggest that memory traces for reach targets continue to be coded in a gaze-dependent reference frame if no external cues are present.
25. Chen Y, Byrne P, Crawford JD. Time course of allocentric decay, egocentric decay, and allocentric-to-egocentric conversion in memory-guided reach. Neuropsychologia 2011;49:49-60. DOI: 10.1016/j.neuropsychologia.2010.10.031.
26. Interactions between gaze-centered and allocentric representations of reach target location in the presence of spatial updating. Vision Res 2010;50:2661-2670. PMID: 20816887. DOI: 10.1016/j.visres.2010.08.038.
Abstract
Numerous studies have investigated the phenomenon of egocentric spatial updating in gaze-centered coordinates, and some have studied the use of allocentric cues in visually guided movement, but it is not known how these two mechanisms interact. Here, we tested whether gaze-centered and allocentric information combine at the time of viewing the target, or whether the brain waits until the last possible moment. To do this, we took advantage of the well-known fact that pointing and reaching movements show gaze-centered 'retinal magnification' errors (RME) that update across saccades. During gaze fixation, we found that visual landmarks, and hence allocentric information, reduce RME for targets in the left visual hemifield but not in the right. When a saccade was made between viewing and reaching, this landmark-induced reduction in RME depended only on gaze at reach, not at encoding. Based on this finding, we argue that egocentric-allocentric combination occurs after the intervening saccade. This is consistent with previous findings in healthy and brain-damaged subjects suggesting that the brain updates early spatial representations during eye movement and combines them at the time of action.
27. Byrne PA, Crawford JD. Cue reliability and a landmark stability heuristic determine relative weighting between egocentric and allocentric visual information in memory-guided reach. J Neurophysiol 2010;103:3054-3069. DOI: 10.1152/jn.01008.2009.
Abstract
It is not known how egocentric visual information (location of a target relative to the self) and allocentric visual information (location of a target relative to external landmarks) are integrated to form reach plans. Based on behavioral data from rodents and humans we hypothesized that the degree of stability in visual landmarks would influence the relative weighting. Furthermore, based on numerous cue-combination studies we hypothesized that the reach system would act like a maximum-likelihood estimator (MLE), where the reliability of both cues determines their relative weighting. To predict how these factors might interact we developed an MLE model that weighs egocentric and allocentric information based on their respective reliabilities, and also on an additional stability heuristic. We tested the predictions of this model in 10 human subjects by manipulating landmark stability and reliability (via variable amplitude vibration of the landmarks and variable amplitude gaze shifts) in three reach-to-touch tasks: an egocentric control (reaching without landmarks), an allocentric control (reaching relative to landmarks), and a cue-conflict task (involving a subtle landmark “shift” during the memory interval). Variability from all three experiments was used to derive parameters for the MLE model, which was then used to simulate egocentric–allocentric weighting in the cue-conflict experiment. As predicted by the model, landmark vibration—despite its lack of influence on pointing variability (and thus allocentric reliability) in the control experiment—had a strong influence on egocentric–allocentric weighting. A reduced model without the stability heuristic was unable to reproduce this effect. These results suggest heuristics for extrinsic cue stability are at least as important as reliability for determining cue weighting in memory-guided reaching.
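To make the model's two ingredients concrete, here is a minimal sketch of inverse-variance (MLE) cue weighting with a separate stability discount on the allocentric cue. The function name, the multiplicative form of the discount, and the numbers are illustrative assumptions, not the authors' implementation:

```python
def combine_cues(x_ego, var_ego, x_allo, var_allo, stability=1.0):
    """Weighted combination of egocentric and allocentric estimates.

    Reliability is the inverse variance (standard MLE cue combination);
    `stability` in [0, 1] down-weights the allocentric cue when the
    landmark is judged unstable (e.g., vibrating), independently of its
    measured reliability. The multiplicative form is an assumption.
    """
    w_ego = 1.0 / var_ego
    w_allo = stability / var_allo  # stability heuristic discounts the landmark
    return (w_ego * x_ego + w_allo * x_allo) / (w_ego + w_allo)

# A 3-deg cue conflict: a stable landmark pulls the estimate strongly
# toward the allocentric value; an unstable one much less so.
print(combine_cues(x_ego=0.0, var_ego=4.0, x_allo=3.0, var_allo=1.0, stability=1.0))  # 2.4
print(combine_cues(x_ego=0.0, var_ego=4.0, x_allo=3.0, var_allo=1.0, stability=0.2))  # ~1.33
```

A sketch of this form captures the paper's key observation: landmark vibration can shift the weighting even when it leaves measured pointing variability, and hence MLE reliability, unchanged.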
Affiliation(s)
- Patrick A. Byrne
- Centre for Vision Research and Canadian Action and Perception Network
- J. Douglas Crawford
- Centre for Vision Research, Canadian Action and Perception Network, and Neuroscience Graduate Diploma Program and Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Canada
28. Schenk T, McIntosh RD. Do we have independent visual streams for perception and action? Cogn Neurosci 2010;1:52-62.
Abstract
The perception-action model proposes that vision-for-perception and vision-for-action are based on anatomically distinct and functionally independent streams within the visual cortex. This idea can account for diverse experimental findings, and has been hugely influential over the past two decades. The model itself comprises a set of core contrasts between the functional properties of the two visual streams. We critically review the evidence for these contrasts, arguing that each of them has either been refuted or found limited empirical support. We suggest that the perception-action model captures some broad patterns of functional localization, but that the specializations of the two streams are relative, not absolute. The ubiquity and extent of inter-stream interactions suggest that we should reject the idea that the ventral and dorsal streams are functionally independent processing pathways.
Affiliation(s)
- Thomas Schenk
- Wolfson Research Institute, Durham University, Stockton on Tees, UK
29. Bruno N, Franz VH. When is grasping affected by the Müller-Lyer illusion? A quantitative review. Neuropsychologia 2008;47:1421-1433. PMID: 19059422. DOI: 10.1016/j.neuropsychologia.2008.10.031.
Abstract
Milner and Goodale (1995) [Milner, A. D., & Goodale, M. A. (1995). The visual brain in action. Oxford, UK: Oxford University Press] proposed a functional division of labor between vision-for-perception and vision-for-action. Their proposal is supported by neuropsychological, brain-imaging, and psychophysical evidence. However, it has remained controversial in the prediction that actions are not affected by visual illusions. Following up on a related review on pointing (see Bruno et al., 2008 [Bruno, N., Bernardis, P., & Gentilucci, M. (2008). Visually guided pointing, the Müller-Lyer illusion, and the functional interpretation of the dorsal-ventral split: Conclusions from 33 independent studies. Neuroscience and Biobehavioral Reviews, 32(3), 423-437]), here we re-analyze 18 studies on grasping objects embedded in the Müller-Lyer (ML) illusion. We find that median percent effects across studies are indeed larger for perceptual than for grasping measures. However, almost all grasping effects are larger than zero and the two distributions show substantial overlap and variability. A fine-grained analysis reveals that critical roles in accounting for this variability are played by the informational basis for guiding the action, by the number of trials per condition of the experiment, and by the angle of the illusion fins. When all these factors are considered together, the data support a difference between grasping and perception only when online visual feedback is available during movement. Thus, unlike pointing, grasping studies of the Müller-Lyer (ML) illusion suggest that the perceptual and motor effects of the illusion differ only because of online, feedback-driven corrections, and do not appear to support independent spatial representations for vision-for-action and vision-for-perception.
Affiliation(s)
- Nicola Bruno
- Dipartimento di Psicologia, Università di Parma, Parma, Italy.
30. Bruno N, Bernardis P, Gentilucci M. Visually guided pointing, the Müller-Lyer illusion, and the functional interpretation of the dorsal-ventral split: conclusions from 33 independent studies. Neurosci Biobehav Rev 2008;32:423-437. PMID: 17976722. DOI: 10.1016/j.neubiorev.2007.08.006.