1. Murdison TS, Leclercq G, Lefèvre P, Blohm G. Misperception of motion in depth originates from an incomplete transformation of retinal signals. J Vis 2019;19:21. PMID: 31647515. DOI: 10.1167/19.12.21.
Abstract
Depth perception requires the use of an internal model of the eye-head geometry to infer distance from binocular retinal images and extraretinal 3D eye-head information, particularly ocular vergence. Similarly, for motion in depth perception, gaze angle is required to correctly interpret the spatial direction of motion from retinal images; however, it is unknown whether the brain can make adequate use of extraretinal version and vergence information to correctly transform binocular retinal motion into 3D spatial coordinates. Here we tested this hypothesis by asking participants to reconstruct the spatial trajectory of an isolated disparity stimulus moving in depth either peri-foveally or peripherally while participants' gaze was oriented at different vergence and version angles. We found large systematic errors in the perceived motion trajectory that reflected an intermediate reference frame between a purely retinal interpretation of binocular retinal motion (not accounting for veridical vergence and version) and the spatially correct motion. We quantify these errors with a 3D reference frame model accounting for target, eye, and head position upon motion percept encoding. This model could capture the behavior well, revealing that participants tended to underestimate their version by up to 17%, overestimate their vergence by up to 22%, and underestimate the overall change in retinal disparity by up to 64%, and that the use of extraretinal information depended on retinal eccentricity. Since such large perceptual errors are not observed in everyday viewing, we suggest that both monocular retinal cues and binocular extraretinal signals are required for accurate real-world motion in depth perception.
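A minimal sketch of the transformation the abstract describes, reduced to one dimension. The gains are hypothetical, set to the upper bounds quoted above, and the reduced geometry (azimuth plus vergence-based depth, with an assumed interocular distance) is an illustration, not the authors' full 3D reference-frame model:

```python
import numpy as np

# Toy 1-D version of the idea in the abstract: the spatial percept is rebuilt
# from retinal signals plus *scaled* extraretinal signals. Gains are set to the
# upper bounds quoted in the abstract (hypothetical parameterization).
VERSION_GAIN = 0.83    # version underestimated by up to 17%
VERGENCE_GAIN = 1.22   # vergence overestimated by up to 22%
IPD = 0.065            # interocular distance in metres (assumed)

def perceived_azimuth(retinal_angle, version):
    """Head-centred azimuth: retinal angle plus an underweighted version signal."""
    return retinal_angle + VERSION_GAIN * version

def perceived_depth(vergence):
    """Depth from vergence (small-angle approximation), vergence overweighted."""
    return IPD / (VERGENCE_GAIN * vergence)

# A target 10 deg right on the retina while gaze is 20 deg left is mislocalized
# because only 83% of the version signal is used.
retinal, version = np.deg2rad(10), np.deg2rad(-20)
print(np.rad2deg(perceived_azimuth(retinal, version)))  # ~ -6.6 deg
print(np.rad2deg(retinal + version))                    # veridical: -10 deg

# Overestimated vergence pulls the percept nearer: ~0.41 m vs veridical ~0.50 m.
print(perceived_depth(np.deg2rad(7.4)))
```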
Affiliations
- T Scott Murdison: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
- Guillaume Leclercq: ICTEAM and Institute for Neuroscience (IoNS), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Philippe Lefèvre: ICTEAM and Institute for Neuroscience (IoNS), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Gunnar Blohm: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
2. Mostafa AA, 't Hart BM, Henriques DYP. Motor learning without moving: Proprioceptive and predictive hand localization after passive visuoproprioceptive discrepancy training. PLoS One 2019;14:e0221861. PMID: 31465524. PMCID: PMC6715176. DOI: 10.1371/journal.pone.0221861.
Abstract
An accurate estimate of limb position is necessary for movement planning, before and after motor learning. Where we localize our unseen hand after a reach depends on felt hand position, or proprioception, but in studies and theories on motor adaptation this is quite often neglected in favour of predicted sensory consequences based on efference copies of motor commands. Both sources of information should contribute, so here we set out to further investigate how much of hand localization depends on proprioception and how much on predicted sensory consequences. We use a training paradigm combining robot-controlled hand movements with rotated visual feedback that eliminates the possibility of updating predicted sensory consequences (‘exposure training’), but still recalibrates proprioception, as well as a classic training paradigm with self-generated movements in another set of participants. After each kind of training we measure participants’ hand location estimates based on both efference-based predictions and afferent proprioceptive signals with self-generated hand movements (‘active localization’) as well as based on proprioception only with robot-generated movements (‘passive localization’). In the exposure training group, we find indistinguishable shifts in passive and active hand localization, but after classic training, active localization shifts more than passive, indicating a contribution from updated predicted sensory consequences. Changes in both open-loop reaches and hand localization are only slightly smaller after exposure training as compared to after classic training, confirming that proprioception plays a large role in estimating limb position and in planning movements, even after adaptation. (data: https://doi.org/10.17605/osf.io/zfdth, preprint: https://doi.org/10.1101/384941)
Affiliations
- Ahmed A. Mostafa: CVR / Kinesiology and Health Science, York University, Toronto, Ontario, Canada; Faculty of Physical Education, Mansoura University, Mansoura, Egypt
- Bernard Marius ‘t Hart: CVR / Kinesiology and Health Science, York University, Toronto, Ontario, Canada
3. Liu J, Ando H. Response Modality vs. Target Modality: Sensory Transformations and Comparisons in Cross-modal Slant Matching Tasks. Sci Rep 2018;8:11068. PMID: 30038316. PMCID: PMC6056512. DOI: 10.1038/s41598-018-29375-w.
Abstract
Humans constantly combine multi-sensory spatial information to successfully interact with objects in peripersonal space. Previous studies suggest that sensory inputs of different modalities are encoded in different reference frames. In cross-modal tasks where the target and response modalities are different, it is unclear which reference frame these multiple sensory signals are transformed to for comparison. The current study used a slant perception and parallelity paradigm to explore this issue. Participants perceived (either visually or haptically) the slant of a reference board and were asked to either adjust an invisible test board by hand manipulation or to adjust a visible test board through verbal instructions to be physically parallel to the reference board. We examined the patterns of constant error and variability of unimodal and cross-modal tasks with various reference slant angles at different reference/test locations. The results revealed that rather than a mixture of the patterns of unimodal conditions, the pattern in cross-modal conditions depended almost entirely on the response modality and was not substantially affected by the target modality. Deviations in haptic response conditions could be predicted by the locations of the reference and test board, whereas the reference slant angle was an important predictor in visual response conditions.
Affiliations
- Juan Liu: Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT) and Osaka University, Osaka, Japan
- Hiroshi Ando: Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT) and Osaka University, Osaka, Japan
4. Khademi M, Hondori HM, Dodakian L, Cramer S, Lopes CV. Comparing "pick and place" task in spatial Augmented Reality versus non-immersive Virtual Reality for rehabilitation setting. Annu Int Conf IEEE Eng Med Biol Soc 2013;2013:4613-6. PMID: 24110762. DOI: 10.1109/embc.2013.6610575.
Abstract
Introducing computer games to the rehabilitation market has led to the development of numerous Virtual Reality (VR) training applications. Although VR has provided tremendous benefit to patients and caregivers, it has inherent limitations, some of which might be solved by replacing it with Augmented Reality (AR). The task of pick-and-place, which is part of many activities of daily living (ADLs), is one of the functions most affected in stroke patients and one they most expect to recover. We developed an exercise consisting of moving an object between various points, following a flash of light that indicates the next target. The results show superior performance of subjects in the spatial AR versus the non-immersive VR setting. This could be due to the extraneous hand-eye coordination that exists in VR but is eliminated in spatial AR.
5. Mousavi Hondori H, Khademi M, Dodakian L, McKenzie A, Lopes CV, Cramer SC. Choice of Human-Computer Interaction Mode in Stroke Rehabilitation. Neurorehabil Neural Repair 2015;30:258-65. PMID: 26138411. DOI: 10.1177/1545968315593805.
Abstract
BACKGROUND AND OBJECTIVE: Advances in technology are providing new forms of human-computer interaction. The current study examined one form of human-computer interaction, augmented reality (AR), whereby subjects train in the real-world workspace with virtual objects projected by the computer. Motor performances were compared with those obtained while subjects used a traditional human-computer interaction, that is, a personal computer (PC) with a mouse.
METHODS: Patients used goal-directed arm movements to play AR and PC versions of the Fruit Ninja video game. The 2 versions required the same arm movements to control the game but had different cognitive demands. With AR, the game was projected onto the desktop, where subjects viewed the game plus their arm movements simultaneously, in the same visual coordinate space. In the PC version, subjects used the same arm movements but viewed the game by looking up at a computer monitor.
RESULTS: Among 18 patients with chronic hemiparesis after stroke, the AR game was associated with 21% higher game scores (P = .0001), 19% faster reaching times (P = .0001), and 15% less movement variability (P = .0068), as compared to the PC game. Correlations between game score and arm motor status were stronger with the AR version.
CONCLUSIONS: Motor performances during the AR game were superior to those during the PC game. This result is due in part to the greater cognitive demands imposed by the PC game, a feature problematic for some patients but clinically useful for others. Mode of human-computer interface influences rehabilitation therapy demands and can be individualized for patients.
6. Mueller S, Fiehler K. Effector movement triggers gaze-dependent spatial coding of tactile and proprioceptive-tactile reach targets. Neuropsychologia 2014;62:184-93. DOI: 10.1016/j.neuropsychologia.2014.07.025.
7. Kagerer FA, Clark JE. Development of interactions between sensorimotor representations in school-aged children. Hum Mov Sci 2014;34:164-77. PMID: 24636697. DOI: 10.1016/j.humov.2014.02.001.
Abstract
Reliable sensory-motor integration is a prerequisite for optimal movement control; the functionality of this integration changes during development. Previous research has shown that motor performance of school-age children is characterized by higher variability, particularly under conditions where vision is not available, and movement planning and control is largely based on kinesthetic input. The purpose of the current study was to determine the characteristics of how kinesthetic-motor internal representations interact with visuo-motor representations during development. To this end, we induced a visuo-motor adaptation in 59 children, ranging from 5 to 12 years of age, as well as in a group of adults, and measured initial directional error (IDE) and endpoint error (EPE) during a subsequent condition where visual feedback was not available, and participants had to rely on kinesthetic input. Our results show that older children (age range 9-12 years) de-adapted significantly more than younger children (age range 5-8 years) over the course of 36 trials in the absence of vision, suggesting that the kinesthetic-motor internal representation in the older children was utilized more efficiently to guide hand movements, and was comparable to the performance of the adults.
Affiliations
- Florian A Kagerer: Department of Kinesiology, Michigan State University, East Lansing, MI 48824, USA
- Jane E Clark: Department of Kinesiology, University of Maryland, College Park, MD 20742, USA
8. Mueller S, Fiehler K. Gaze-dependent spatial updating of tactile targets in a localization task. Front Psychol 2014;5:66. PMID: 24575060. PMCID: PMC3918658. DOI: 10.3389/fpsyg.2014.00066.
Abstract
There is concurrent evidence that visual reach targets are represented with respect to gaze. For tactile reach targets, we previously showed that an effector movement leads to a shift from a gaze-independent to a gaze-dependent reference frame. Here we aimed to unravel the influence of effector movement (gaze shift) on the reference frame of tactile stimuli using a spatial localization task (yes/no paradigm). We assessed how gaze direction (fixation left/right) alters the perceived spatial location (point of subjective equality) of sequentially presented tactile standard and visual comparison stimuli while effector movement (gaze fixed/shifted) and stimulus order (vis-tac/tac-vis) were varied. In the fixed-gaze condition, subjects maintained gaze at the fixation site throughout the trial. In the shifted-gaze condition, they foveated the first stimulus, then made a saccade toward the fixation site where they held gaze while the second stimulus appeared. Only when an effector movement occurred after the encoding of the tactile stimulus (shifted-gaze, tac-vis) did gaze similarly influence the perceived location of the tactile and the visual stimulus. In contrast, when gaze was fixed or a gaze shift occurred before encoding of the tactile stimulus, gaze differentially affected the perceived spatial relation of the tactile and the visual stimulus, suggesting gaze-dependent coding of only one of the two stimuli. Consistent with previous findings, this implies that visual stimuli vary with gaze irrespective of whether gaze is fixed or shifted. However, a gaze-dependent representation of tactile stimuli seems to critically depend on an effector movement (gaze shift) after tactile encoding, triggering spatial updating of tactile targets in a gaze-dependent reference frame. Together with our recent findings on tactile reaching, the present results imply similar underlying reference frames for tactile spatial perception and action.
Affiliations
- Stefanie Mueller: Department of Psychology, Justus Liebig University Giessen, Giessen, Germany
- Katja Fiehler: Department of Psychology, Justus Liebig University Giessen, Giessen, Germany
9. Longo MR. The effects of immediate vision on implicit hand maps. Exp Brain Res 2014;232:1241-7. PMID: 24449015. DOI: 10.1007/s00221-014-3840-1.
Abstract
Perceiving the external spatial location of the limbs using position sense requires that immediate proprioceptive afferent signals be combined with a stored body model specifying the size and shape of the body. Longo and Haggard (Proc Natl Acad Sci USA 107:11727-11732, 2010) developed a method to isolate and measure this body model in the case of the hand in which participants judge the perceived location in external space of several landmarks on their occluded hand. The spatial layout of judgments of different landmarks is used to construct implicit hand maps, which can then be compared with actual hand shape. Studies using this paradigm have revealed that the body model of the hand is massively distorted, in a highly stereotyped way across individuals, with large underestimation of finger length and overestimation of hand width. Previous studies using this paradigm have allowed participants to see the locations of their judgments on the occluding board. Several previous studies have demonstrated that immediate vision, even when wholly non-informative, can alter processing of somatosensory signals and alter the reference frame in which they are localised. The present study therefore investigated whether immediate vision contributes to the distortions of implicit hand maps described previously. Participants judged the external spatial location of the tips and knuckles of their occluded left hand either while being able to see where they were pointing (as in previous studies) or while blindfolded. The characteristic distortions of implicit hand maps reported previously were clearly apparent in both conditions, demonstrating that the distortions are not an artefact of immediate vision. However, there were significant differences in the magnitude of distortions in the two conditions, suggesting that vision may modulate representations of body size and shape, even when entirely non-informative.
Affiliations
- Matthew R Longo: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, UK
10. Tagliabue M, McIntyre J. When kinesthesia becomes visual: a theoretical justification for executing motor tasks in visual space. PLoS One 2013;8:e68438. PMID: 23861903. PMCID: PMC3702599. DOI: 10.1371/journal.pone.0068438.
Abstract
Several experimental studies in the literature have shown that even when performing purely kinesthetic tasks, such as reaching for a kinesthetically felt target with a hidden hand, the brain reconstructs a visual representation of the movement. In our previous studies, however, we did not observe any role of a visual representation of the movement in a purely kinesthetic task. This apparent contradiction could be related to a fundamental difference between the studied tasks. In our study subjects used the same hand to both feel the target and to perform the movement, whereas in most other studies, pointing to a kinesthetic target consisted of pointing with one hand to the finger of the other, or to some other body part. We hypothesize, therefore, that it is the necessity of performing inter-limb transformations that induces a visual representation of purely kinesthetic tasks. To test this hypothesis we asked subjects to perform the same purely kinesthetic task in two conditions: INTRA and INTER. In the former they used the right hand to both perceive the target and to reproduce its orientation. In the latter, subjects perceived the target with the left hand and responded with the right. To quantify the use of a visual representation of the movement we measured deviations induced by an imperceptible conflict that was generated between visual and kinesthetic reference frames. Our hypothesis was confirmed by the observed deviations of responses due to the conflict in the INTER, but not in the INTRA, condition. To reconcile these observations with recent theories of sensori-motor integration based on maximum likelihood estimation, we propose here a new model formulation that explicitly considers the effects of covariance between sensory signals that are directly available and internal representations that are ‘reconstructed’ from those inputs through sensori-motor transformations.
Affiliations
- Michele Tagliabue: Centre d'Etude de la Sensorimotricité (CNRS UMR 8194), Université Paris Descartes, Institut des Neurosciences et de la Cognition, Sorbonne Paris Cité, Paris, France
11. Murdison TS, Paré-Bingley CA, Blohm G. Evidence for a retinal velocity memory underlying the direction of anticipatory smooth pursuit eye movements. J Neurophysiol 2013;110:732-47. PMID: 23678014. DOI: 10.1152/jn.00991.2012.
Abstract
To compute spatially correct smooth pursuit eye movements, the brain uses both retinal motion and extraretinal signals about the eyes and head in space (Blohm and Lefèvre 2010). However, when smooth eye movements rely solely on memorized target velocity, such as during anticipatory pursuit, it is unknown if this velocity memory also accounts for extraretinal information, such as head roll and ocular torsion. To answer this question, we used a novel behavioral updating paradigm in which participants pursued a repetitive, spatially constant fixation-gap-ramp stimulus in a series of five trials. During the first four trials, participants' heads were rolled toward one shoulder, inducing ocular counterroll (OCR). With each repetition, participants increased their anticipatory pursuit gain, indicating a robust encoding of velocity memory. On the fifth trial, they rolled their heads to the opposite shoulder before pursuit, also inducing changes in ocular torsion. Consequently, for spatially accurate anticipatory pursuit, the velocity memory had to be updated across changes in head roll and ocular torsion. We tested how the velocity memory accounted for head roll and OCR by observing the effects of changes to these signals on anticipatory trajectories of the memory decoding (fifth) trials. We found that anticipatory pursuit was updated for changes in head roll; however, we observed no evidence of compensation for OCR, representing the absence of ocular torsion signals within the velocity memory. This indicated that the directional component of the memory must be coded retinally and updated to account for changes in head roll, but not OCR.
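A rough illustration of the updating scheme this result suggests: a 2D retinal velocity memory rotated by the measured change in head roll, while the accompanying change in ocular counterroll goes uncompensated. This is a hypothetical toy formulation consistent with the abstract, not the authors' fitted model; the angles are illustrative.

```python
import numpy as np

def rotate(v, angle_deg):
    """Rotate a 2-D (horizontal, vertical) retinal velocity by angle_deg."""
    a = np.deg2rad(angle_deg)
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]]) @ v

memory = np.array([1.0, 0.0])   # retinally coded velocity memory (rightward)

d_head_roll = 60.0   # head rolled from one shoulder to the other (illustrative)
d_ocr = -6.0         # change in ocular counterroll, ~10% of head roll (illustrative)

# Finding, in toy form: the memory is updated for the head-roll change only,
# so the anticipatory direction misses the spatially correct one by the
# uncompensated OCR change.
anticipatory = rotate(memory, d_head_roll)
correct = rotate(memory, d_head_roll + d_ocr)
print(np.rad2deg(np.arctan2(*anticipatory[::-1])))  # 60 deg
print(np.rad2deg(np.arctan2(*correct[::-1])))       # 54 deg
```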
Affiliations
- T Scott Murdison: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
12. Wilke C, Synofzik M, Lindner A. Sensorimotor recalibration depends on attribution of sensory prediction errors to internal causes. PLoS One 2013;8:e54925. PMID: 23359818. PMCID: PMC3554678. DOI: 10.1371/journal.pone.0054925.
Abstract
Sensorimotor learning critically depends on error signals. Learning usually tries to minimise these error signals to guarantee optimal performance. Errors can, however, have both internal causes, resulting from one’s sensorimotor system, and external causes, resulting from external disturbances. Does learning take into account the perceived cause of error information? Here, we investigated the recalibration of internal predictions about the sensory consequences of one’s actions. Since these predictions underlie the distinction of self- and externally produced sensory events, we assumed them to be recalibrated only by prediction errors attributed to internal causes. When subjects were confronted with experimentally induced visual prediction errors about their pointing movements in virtual reality, they recalibrated the predicted visual consequences of their movements. Recalibration was not proportional to the externally generated prediction error, but correlated with the error component which subjects attributed to internal causes. We also revealed adaptation in subjects’ motor performance which reflected their recalibrated sensory predictions. Thus, causal attribution of error information is essential for sensorimotor learning.
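The core result can be summarized in a one-line update rule: only the fraction of the prediction error that the subject attributes to internal causes drives recalibration. The sketch below is a toy rule consistent with the abstract, not the authors' fitted model; the numbers are illustrative.

```python
def recalibration(prediction_error, internal_fraction):
    """Shift of the predicted sensory consequence after exposure: driven by
    the error component attributed to internal causes (toy rule)."""
    return internal_fraction * prediction_error

# A fully externally attributed error leaves predictions untouched; the more
# of a 10-deg visual discrepancy a subject attributes to themselves, the
# larger the recalibration.
print(recalibration(10.0, 0.0))  # 0.0 - no recalibration
print(recalibration(10.0, 0.7))  # 7.0 - recalibration tracks attribution
```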
Affiliations
- Carlo Wilke: Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Matthis Synofzik: Department of Neurodegeneration, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany; German Centre for Neurodegenerative Diseases, University of Tübingen, Tübingen, Germany
- Axel Lindner: Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
13. Ambrosini E, Ciavarro M, Pelle G, Perrucci MG, Galati G, Fattori P, Galletti C, Committeri G. Behavioral investigation on the frames of reference involved in visuomotor transformations during peripheral arm reaching. PLoS One 2012;7:e51856. PMID: 23272180. PMCID: PMC3521756. DOI: 10.1371/journal.pone.0051856.
Abstract
BACKGROUND: Several psychophysical experiments found evidence for the involvement of gaze-centered and/or body-centered coordinates in arm-movement planning and execution. Here we aimed at investigating the frames of reference involved in the visuomotor transformations for reaching towards visual targets in space by taking target eccentricity and performing hand into account.
METHODOLOGY/PRINCIPAL FINDINGS: We examined several performance measures while subjects reached, in complete darkness, towards memorized targets situated at different locations relative to the gaze and/or to the body, thus distinguishing between an eye-centered and a body-centered frame of reference involved in the computation of the movement vector. The errors seem to be mainly affected by the visual hemifield of the target, independently from its location relative to the body, with an overestimation error in the horizontal reaching dimension (retinal exaggeration effect). The use of several target locations within the perifoveal visual field allowed us to reveal a novel finding, that is, a positive linear correlation between horizontal overestimation errors and target retinal eccentricity. In addition, we found an independent influence of the performing hand on the visuomotor transformation process, with each hand misreaching towards the ipsilateral side.
CONCLUSIONS: While supporting the existence of an internal mechanism of target-effector integration in multiple frames of reference, the present data, especially the linear overshoot at small target eccentricities, clearly indicate the primary role of gaze-centered coding of target location in the visuomotor transformation for reaching.
Affiliations
- Ettore Ambrosini: Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University “G. d’Annunzio”, Chieti, Italy; Institute of Advanced Biomedical Technologies (ITAB), Foundation G. d’Annunzio, Chieti, Italy
- Marco Ciavarro: Institute of Advanced Biomedical Technologies (ITAB), Foundation G. d’Annunzio, Chieti, Italy; Department of Human and General Physiology and Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Gina Pelle: Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University “G. d’Annunzio”, Chieti, Italy; Institute of Advanced Biomedical Technologies (ITAB), Foundation G. d’Annunzio, Chieti, Italy
- Mauro Gianni Perrucci: Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University “G. d’Annunzio”, Chieti, Italy; Institute of Advanced Biomedical Technologies (ITAB), Foundation G. d’Annunzio, Chieti, Italy
- Gaspare Galati: Department of Psychology, Sapienza University of Rome, Rome, Italy; Laboratory of Neuropsychology, Foundation Santa Lucia, Rome, Italy
- Patrizia Fattori: Department of Human and General Physiology and Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Claudio Galletti: Department of Human and General Physiology and Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Giorgia Committeri: Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University “G. d’Annunzio”, Chieti, Italy; Institute of Advanced Biomedical Technologies (ITAB), Foundation G. d’Annunzio, Chieti, Italy
14. Byrne PA, Henriques DYP. When more is less: increasing allocentric visual information can switch visual-proprioceptive combination from an optimal to sub-optimal process. Neuropsychologia 2012;51:26-37. PMID: 23142707. DOI: 10.1016/j.neuropsychologia.2012.10.008.
Abstract
When reaching for an object in the environment, the brain often has access to multiple independent estimates of that object's location. For example, if someone places their coffee cup on a table, then later they know where it is because they see it, but also because they remember how their reaching limb was oriented when they placed the cup. Intuitively, one would expect more accurate reaches if either of these estimates were improved (e.g., if a light were turned on so the cup were more visible). It is now well-established that the brain tends to combine two or more estimates about the same stimulus as a maximum-likelihood estimator (MLE), which is the best thing to do when estimates are unbiased. Even in the presence of small biases, relying on the MLE rule is still often better than choosing a single estimate. For this work, we designed a reaching task in which human subjects could integrate proprioceptive and allocentric (landmark-relative) visual information to reach for a remembered target. Even though both of these modalities contain some level of bias, we demonstrate via simulation that our subjects should use an MLE rule in preference to relying on one modality or the other in isolation. Furthermore, we show that when visual information is poor, subjects do, indeed, combine information in this way. However, when we improve the quality of visual information, subjects counter-intuitively switch to a sub-optimal strategy that occasionally includes reliance on a single modality.
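The MLE rule referred to here is the standard inverse-variance weighting of cues. A minimal sketch, assuming Gaussian, conditionally independent visual and proprioceptive estimates; the variances below are illustrative, not the study's measured values:

```python
def mle_combine(x_vis, var_vis, x_prop, var_prop):
    """Maximum-likelihood combination of two cues: each estimate is weighted
    by its reliability (inverse variance); the combined variance is never
    larger than that of either cue alone."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_prop)
    x = w_vis * x_vis + (1 - w_vis) * x_prop
    var = 1 / (1 / var_vis + 1 / var_prop)
    return x, var

# Poor vision (high variance) pulls the combined estimate toward
# proprioception while still reducing overall uncertainty.
print(mle_combine(x_vis=0.0, var_vis=4.0, x_prop=2.0, var_prop=1.0))
# -> (1.6, 0.8)
```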
Affiliations
- Patrick A Byrne: Centre for Vision Research, Science, York University, 4700 Keele Street, Toronto, ON, Canada M3J 1P3
15. Pritchett LM, Carnevale MJ, Harris LR. Reference frames for coding touch location depend on the task. Exp Brain Res 2012;222:437-45. PMID: 22941315. DOI: 10.1007/s00221-012-3231-4.
Abstract
The position of gaze (eye plus head position) relative to body is known to alter the perceived locations of sensory targets. This effect suggests that perceptual space is at least partially coded in a gaze-centered reference frame. However, the direction of the effects reported has not been consistent. Here, we investigate the cause of a discrepancy between reported directions of shift in tactile localization related to head position. We demonstrate that head eccentricity can cause errors in touch localization in either the same or opposite direction as the head is turned depending on the procedure used. When head position is held eccentric during both the presentation of a touch and the response, there is a shift in the direction opposite to the head. When the head is returned to center before reporting, the shift is in the same direction as head eccentricity. We rule out a number of possible explanations for the difference and conclude that when the head is moved between a touch and response the touch is coded in a predominantly gaze-centered reference frame, whereas when the head remains stationary a predominantly body-centered reference frame is used. The mechanism underlying these displacements in perceived location is proposed to involve an underestimated gaze signal. We propose a model demonstrating how this single neural error could cause localization errors in either direction depending on whether the gaze or body midline is used as a reference. This model may be useful in explaining gaze-related localization errors in other modalities.
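The proposed mechanism, a single underestimated gaze signal producing shifts in either direction, can be sketched as follows. This is a toy reading of the model: the gain K and the two readout paths are illustrative assumptions, not the authors' implementation.

```python
K = 0.8  # hypothetical gain: the internal gaze signal carries only 80% of gaze

def body_to_gaze(x_body, gaze):
    """Body-centred -> gaze-centred, using the underestimated gaze signal."""
    return x_body - K * gaze

def gaze_to_body(x_gaze, gaze):
    """Gaze-centred -> body-centred, using the same underestimated signal."""
    return x_gaze + K * gaze

touch, gaze = 0.0, 10.0  # touch at the body midline, gaze 10 deg to the right

# Somatotopic input pushed into gaze coordinates and read out against the
# true geometry: the percept shifts *with* the direction of gaze.
print(body_to_gaze(touch, gaze) + gaze)   # +2.0

# Veridical gaze-centred input pulled back to body coordinates: the percept
# shifts *opposite* to the direction of gaze.
print(gaze_to_body(touch - gaze, gaze))   # -2.0
```

Which of the two error directions appears then depends only on whether the gaze or the body midline serves as the reference, as the abstract argues.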
Affiliations
- Lisa M Pritchett: Centre for Vision Research, York University, Toronto, ON, Canada
16. Jones SA, Fiehler K, Henriques DY. A task-dependent effect of memory and hand-target on proprioceptive localization. Neuropsychologia 2012;50:1462-70. DOI: 10.1016/j.neuropsychologia.2012.02.031.
17.
Abstract
Direction of gaze (eye angle + head angle) has been shown to be important for representing space for action, implying a crucial role of vision for spatial updating. However, blind people have no access to vision yet are able to perform goal-directed actions successfully. Here, we investigated the role of visual experience for localizing and updating targets as a function of intervening gaze shifts in humans. People who differed in visual experience (late blind, congenitally blind, or sighted) were briefly presented with a proprioceptive reach target while facing it. Before they reached to the target's remembered location, they turned their head toward an eccentric direction that also induced corresponding eye movements in sighted and late blind individuals. We found that reaching errors varied systematically as a function of shift in gaze direction only in participants with early visual experience (sighted and late blind). In the late blind, this effect was solely present in people with moveable eyes but not in people with at least one glass eye. Our results suggest that the effect of gaze shifts on spatial updating develops on the basis of visual experience early in life and remains even after loss of vision as long as feedback from the eyes and head is available.
18. Thompson AA, Glover CV, Henriques DY. Allocentrically implied target locations are updated in an eye-centred reference frame. Neurosci Lett 2012;514:214-8. PMID: 22425720. DOI: 10.1016/j.neulet.2012.03.004.
19.
Abstract
Successful visually guided behavior requires information about spatiotopic (i.e., world-centered) locations, but how accurately is this information actually derived from initial retinotopic (i.e., eye-centered) visual input? We conducted a spatial working memory task in which subjects remembered a cued location in spatiotopic or retinotopic coordinates while making guided eye movements during the memory delay. Surprisingly, after a saccade, subjects were significantly more accurate and precise at reporting retinotopic locations than spatiotopic locations. This difference grew with each eye movement, such that spatiotopic memory continued to deteriorate, whereas retinotopic memory did not accumulate error. The loss in spatiotopic fidelity is therefore not a generic consequence of eye movements, but a direct result of converting visual information from native retinotopic coordinates. Thus, despite our conscious experience of an effortlessly stable spatiotopic world and our lifetime of practice with spatiotopic tasks, memory is actually more reliable in raw retinotopic coordinates than in ecologically relevant spatiotopic coordinates.
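One way to see why spatiotopic error should grow with each saccade while retinotopic error does not: if every conversion out of native retinotopic coordinates adds independent noise, variances sum across saccades. A minimal simulation of that interpretation (the noise level and trial count are arbitrary assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

def spatiotopic_error_sd(n_saccades, sigma=1.0, trials=100_000):
    """Each saccade forces one noisy retinotopic->spatiotopic conversion;
    independent conversion errors accumulate, so the SD grows like sqrt(n)."""
    errors = rng.normal(0.0, sigma, size=(trials, n_saccades)).sum(axis=1)
    return errors.std()

for n in (1, 2, 3):
    print(n, round(spatiotopic_error_sd(n), 2))  # ~1.00, ~1.41, ~1.73
```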
20. Wilke C, Synofzik M, Lindner A. The valence of action outcomes modulates the perception of one's actions. Conscious Cogn 2011;21:18-29. PMID: 21757377. DOI: 10.1016/j.concog.2011.06.004.
Abstract
When interacting with the world, we need to distinguish whether sensory information results from external events or from our own actions. The nervous system most likely draws this distinction by comparing the actual sensory input with an internal prediction about the sensory consequences of one's actions. However, interacting with the world also requires an evaluation of the outcomes of self-action, e.g. in terms of their affective valence. Here we show that subjects' perceived pointing direction does not only depend on predictive and sensory signals related to the performed action itself, but also on the affective valence of the action outcome: subjects perceived their movements as directed towards positive and away from negative outcomes. Our findings suggest that the non-conceptual perception of the sensory consequences of self-action builds on both sensorimotor information related directly to self-action and a post hoc evaluation of the affective action outcome.
Affiliations
- Carlo Wilke: Department of Cognitive Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Hoppe-Seyler-Str. 3, 72076 Tübingen, Germany
21. Perceived touch location is coded using a gaze signal. Exp Brain Res 2011;213:229-34. PMID: 21559744. DOI: 10.1007/s00221-011-2713-0.
Abstract
The location of a touch to the skin, first coded in body coordinates, may be transformed into retinotopic coordinates to facilitate visual-tactile integration. In order for the touch location to be transformed into a retinotopic reference frame, the location of the eyes and head must be taken into account. Previous studies have found eye position-related errors (Harrar and Harris in Exp Brain Res 203:615-620, 2009) and head position-related errors (Ho and Spence Brain Res 1144:136-141, 2007) in tactile localization, indicating that imperfect versions of eye and head signals may be used in the body-to-visual coordinate transformation. Here, we investigated the combined effects of head and eye position on the perceived location of a mechanical touch to the arm. Subjects reported the perceived position of a touch that was presented while their head was positioned to the left, right, or center of the body and their eyes were positioned to the left, right, or center in their orbits. The perceived location of a touch shifted in the direction of both head and the eyes by approximately the same amount. We interpret these shifts as being consistent with touch location being coded in a visual reference frame with a gaze signal used to compute the transformation.
22. Selen L, Medendorp W. Saccadic updating of object orientation for grasping movements. Vision Res 2011;51:898-907. DOI: 10.1016/j.visres.2011.01.004.
23. Fiehler K, Bannert MM, Bischoff M, Blecker C, Stark R, Vaitl D, Franz VH, Rösler F. Working memory maintenance of grasp-target information in the human posterior parietal cortex. Neuroimage 2011;54:2401-11. DOI: 10.1016/j.neuroimage.2010.09.080.
24. Jones SAH, Henriques DYP. Memory for proprioceptive and multisensory targets is partially coded relative to gaze. Neuropsychologia 2010;48:3782-92. PMID: 20934442. DOI: 10.1016/j.neuropsychologia.2010.10.001.
Abstract
We examined the effect of gaze direction relative to target location on reach endpoint errors made to proprioceptive and multisensory targets. We also explored if and how visual and proprioceptive information about target location are integrated to guide reaches. Participants reached to their unseen left hand in one of three target locations (left of body midline, at body midline, or right of body midline), while it remained at a target site (online), or after it was removed from this location (remembered), and also after the target hand had been briefly lit before reaching (multisensory target). The target hand was guided to a target location using a robot-generated path. Reaches were made with the right hand in complete darkness, while gaze was varied in one of four eccentric directions. Horizontal reach errors systematically varied relative to gaze for all target modalities; not only for visually remembered and online proprioceptive targets, as has been found in previous studies, but, for the first time, also for remembered proprioceptive targets and proprioceptive targets that were briefly visible. These results suggest that the brain represents the locations of online and remembered proprioceptive reach targets, as well as visual-proprioceptive reach targets, relative to gaze, along with other motor-related representations. Our results, however, do not suggest that visual and proprioceptive information are optimally integrated when coding the location of multisensory reach targets in this paradigm.
25. Interactions between gaze-centered and allocentric representations of reach target location in the presence of spatial updating. Vision Res 2010;50:2661-70. PMID: 20816887. DOI: 10.1016/j.visres.2010.08.038.
Abstract
Numerous studies have investigated the phenomenon of egocentric spatial updating in gaze-centered coordinates, and some have studied the use of allocentric cues in visually-guided movement, but it is not known how these two mechanisms interact. Here, we tested whether gaze-centered and allocentric information combine at the time of viewing the target, or if the brain waits until the last possible moment. To do this, we took advantage of the well-known fact that pointing and reaching movements show gaze-centered 'retinal magnification' errors (RME) that update across saccades. During gaze fixation, we found that visual landmarks, and hence allocentric information, reduces RME for targets in the left visual hemifield but not in the right. When a saccade was made between viewing and reaching, this landmark-induced reduction in RME only depended on gaze at reach, not at encoding. Based on this finding, we argue that egocentric-allocentric combination occurs after the intervening saccade. This is consistent with previous findings in healthy and brain damaged subjects suggesting that the brain updates early spatial representations during eye movement and combines them at the time of action.