1. Musa L, Yan X, Crawford JD. Instruction alters the influence of allocentric landmarks in a reach task. J Vis 2024;24:17. PMID: 39073800; PMCID: PMC11290568; DOI: 10.1167/jov.24.7.17
Abstract
Allocentric landmarks have an implicit influence on aiming movements, but it is not clear how an explicit instruction (to aim relative to a landmark) influences reach accuracy and precision. Here, 12 participants performed a task with two instruction conditions (egocentric vs. allocentric) but with similar sensory and motor conditions. Participants fixated gaze near the center of a display aligned with their right shoulder while a target stimulus briefly appeared alongside a visual landmark in one visual field. After a brief mask/memory delay the landmark then reappeared at a different location (same or opposite visual field), creating an ego/allocentric conflict. In the egocentric condition, participants were instructed to ignore the landmark and point toward the remembered location of the target. In the allocentric condition, participants were instructed to remember the initial target location relative to the landmark and then reach relative to the shifted landmark (same or opposite visual field). To equalize motor execution between tasks, participants were instructed to anti-point (point to the visual field opposite to the remembered target) on 50% of the egocentric trials. Participants were more accurate and precise and quicker to react in the allocentric condition, especially when pointing to the opposite field. We also observed a visual field effect, where performance was worse overall in the right visual field. These results suggest that, when egocentric and allocentric cues conflict, explicit use of the visual landmark provides better reach performance than reliance on noisy egocentric signals. Such instructions might aid rehabilitation when the egocentric system is compromised by disease or injury.
Affiliation(s)
- Lina Musa: Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada; Department of Psychology, York University, Toronto, ON, Canada
- Xiaogang Yan: Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- J Douglas Crawford: Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada; Department of Psychology, York University, Toronto, ON, Canada; Departments of Biology and Kinesiology & Health Sciences, York University, Toronto, ON, Canada
2. Morfoisse T, Herrera Altamira G, Angelini L, Clément G, Beraneck M, McIntyre J, Tagliabue M. Modality-Independent Effect of Gravity in Shaping the Internal Representation of 3D Space for Visual and Haptic Object Perception. J Neurosci 2024;44:e2457202023. PMID: 38267257; PMCID: PMC10977025; DOI: 10.1523/jneurosci.2457-20.2023
Abstract
Visual and haptic perceptions of 3D shape are plagued by distortions, which are influenced by nonvisual factors, such as gravitational vestibular signals. Whether gravity acts directly on the visual or haptic systems or at a higher, modality-independent level of information processing remains unknown. To test these hypotheses, we examined visual and haptic 3D shape perception by asking male and female human subjects to perform a "squaring" task in upright and supine postures and in microgravity. Subjects adjusted one edge of a 3D object to match the length of another in each of the three canonical reference planes, and we recorded the matching errors to obtain a characterization of the perceived 3D shape. The results show opposing, body-centered patterns of errors for visual and haptic modalities, whose amplitudes are negatively correlated, suggesting that they arise in distinct, modality-specific representations that are nevertheless linked at some level. On the other hand, weightlessness significantly modulated both visual and haptic perceptual distortions in the same way, indicating a common, modality-independent origin for gravity's effects. Overall, our findings show a link between modality-specific visual and haptic perceptual distortions and demonstrate a role of gravity-related signals on a modality-independent internal representation of the body and peripersonal 3D space used to interpret incoming sensory inputs.
Affiliation(s)
- Theo Morfoisse: Université Paris Cité, CNRS UMR 8002, INCC - Integrative Neuroscience and Cognition Center, Paris F-75006, France
- Gabriela Herrera Altamira: Université Paris Cité, CNRS UMR 8002, INCC - Integrative Neuroscience and Cognition Center, Paris F-75006, France
- Leonardo Angelini: HumanTech Institute, University of Applied Sciences Western Switzerland//HES-SO, Fribourg 1700, Switzerland; School of Management Fribourg, University of Applied Sciences Western Switzerland//HES-SO, Fribourg 1700, Switzerland
- Gilles Clément: Université de Caen Normandie, Inserm, COMETE U1075, CYCERON, CHU de Caen, Normandie Univ, Caen 14000, France
- Mathieu Beraneck: Université Paris Cité, CNRS UMR 8002, INCC - Integrative Neuroscience and Cognition Center, Paris F-75006, France
- Joseph McIntyre: Tecnalia, Basque Research and Technology Alliance, San Sebastian 20009, Spain; Ikerbasque, Basque Foundation for Science, Bilbao 48009, Spain
- Michele Tagliabue: Université Paris Cité, CNRS UMR 8002, INCC - Integrative Neuroscience and Cognition Center, Paris F-75006, France
3. Hadjidimitrakis K, De Vitis M, Ghodrati M, Filippini M, Fattori P. Anterior-posterior gradient in the integrated processing of forelimb movement direction and distance in macaque parietal cortex. Cell Rep 2022;41:111608. DOI: 10.1016/j.celrep.2022.111608
4. Phataraphruk P, Rahman Q, Lakshminarayanan K, Fruchtman M, Buneo CA. Posture dependent factors influence movement variability when reaching to nearby virtual objects. Front Neurosci 2022;16:971382. PMID: 36389217; PMCID: PMC9641121; DOI: 10.3389/fnins.2022.971382
Abstract
Reaching movements are subject to noise arising during the sensing, planning and execution phases of movement production, which contributes to movement variability. When vision of the moving hand is available, reach endpoint variability appears to be strongly influenced by internal noise associated with the specification and/or online updating of movement plans in visual coordinates. In contrast, without hand vision, endpoint variability appears more dependent upon movement direction, suggesting a greater influence of execution noise. Given that execution noise acts in part at the muscular level, we hypothesized that reaching variability should depend not only on movement direction but also on initial arm posture. Moreover, given that the effects of execution noise are more apparent when hand vision is unavailable, we reasoned that postural effects would be more evident when visual feedback was withheld. To test these hypotheses, participants planned memory-guided reaching movements to three frontal plane targets using one of two initial arm postures ("adducted" or "abducted"), attained by rotating the arm about the shoulder-hand axis. In this way, variability was examined for two sets of movements that were largely identical in endpoint coordinates but different in joint/muscle-based coordinates. We found that patterns of reaching variability differed in several respects when movements were initiated with different arm postures. These postural effects were evident shortly after movement onset, near the midpoints of the movements, and again at the endpoints. At the endpoints, posture dependent effects interacted with effects of visual feedback to determine some aspects of variability. These results suggest that posture dependent execution noise interacts with feedback control mechanisms and biomechanical factors to determine patterns of reach endpoint variability in 3D space.
Affiliation(s)
- Christopher A. Buneo: Visuomotor Learning Laboratory, School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ, United States
5.
Abstract
On average, we redirect our gaze at a frequency of about 3 Hz. In real life, gaze shifts consist of eye and head movements. Much research has focused on how the accuracy of eye movements is monitored and calibrated. By contrast, little is known about how head movements remain accurate. I wondered whether serial dependencies between artificially induced errors in head movement targeting and the immediately following head movement might recalibrate movement accuracy. I also asked whether head movement targeting errors would influence visual localization. To this end, participants wore a head-mounted display and performed head movements to targets, which were displaced as soon as the start of the head movement was detected. I found that target displacements influenced head movement amplitudes in the same trial, indicating that participants could adjust their movement online to reach the new target location. However, I also found serial dependencies between the target displacement in trial n-1 and head movement amplitudes in the following trial n. I did not find serial dependencies between target displacements and visuomotor localization. The results reveal that serial dependencies recalibrate head movement accuracy.
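The lagged analysis described above amounts to regressing the amplitude of movement n on the target displacement of trial n-1. Below is a minimal sketch of that analysis on synthetic data; the gains, amplitudes, and noise levels are hypothetical choices for illustration, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 500
target_amp = 20.0                                   # nominal movement amplitude (deg), assumed
jump = rng.choice([-3.0, 0.0, 3.0], size=n_trials)  # intra-movement target displacement (deg)

# Generative model: an online correction within trial n plus a serial
# carry-over (recalibration) from the displacement experienced in trial n-1.
online_gain, serial_gain = 0.8, 0.15                # hypothetical gains
amp = target_amp + online_gain * jump + rng.normal(0.0, 1.0, n_trials)
amp[1:] += serial_gain * jump[:-1]

# Recover the serial effect: slope of amplitude in trial n on displacement in n-1.
x, y = jump[:-1], amp[1:]
slope = np.cov(x, y, bias=True)[0, 1] / np.var(x)
print(f"estimated serial-dependence slope: {slope:.3f} (generative value {serial_gain})")
```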
Affiliation(s)
- Eckart Zimmermann: Institute for Experimental Psychology, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
6. Cortex-dependent corrections as the tongue reaches for and misses targets. Nature 2021;594:82-87. PMID: 34012117; DOI: 10.1038/s41586-021-03561-9
Abstract
Precise tongue control is necessary for drinking, eating and vocalizing. However, because tongue movements are fast and difficult to resolve, neural control of lingual kinematics remains poorly understood. Here we combine kilohertz-frame-rate imaging and a deep-learning-based neural network to resolve 3D tongue kinematics in mice drinking from a water spout. Successful licks required corrective submovements that, similar to online corrections during primate reaches, occurred after the tongue missed unseen, distant or displaced targets. Photoinhibition of anterolateral motor cortex impaired corrections, which resulted in hypometric licks that missed the spout. Neural activity in anterolateral motor cortex reflected upcoming, ongoing and past corrective submovements, as well as errors in predicted spout contact. Although less than a tenth of a second in duration, a single mouse lick exhibits the hallmarks of online motor control associated with a primate reach, including cortex-dependent corrections after misses.
7. Chen Y, Crawford JD. Allocentric representations for target memory and reaching in human cortex. Ann N Y Acad Sci 2019;1464:142-155. PMID: 31621922; DOI: 10.1111/nyas.14261
Abstract
The use of allocentric cues for movement guidance is complex because it involves the integration of visual targets and independent landmarks and the conversion of this information into egocentric commands for action. Here, we focus on the mechanisms for encoding reach targets relative to visual landmarks in humans. First, we consider the behavioral results suggesting that both of these cues influence target memory, but are then transformed, at the first opportunity, into egocentric commands for action. We then consider the cortical mechanisms for these behaviors. We discuss different allocentric versus egocentric mechanisms for coding of target directional selectivity in memory (inferior temporal gyrus versus superior occipital gyrus) and distinguish these mechanisms from parieto-frontal activation for planning egocentric direction of actual reach movements. Then, we consider where and how the former allocentric representations of remembered reach targets are converted into the latter egocentric plans. In particular, our recent neuroimaging study suggests that four areas in the parietal and frontal cortex (right precuneus, bilateral dorsal premotor cortex, and right presupplementary area) participate in this allo-to-ego conversion. Finally, we provide a functional overview describing how and why egocentric and landmark-centered representations are segregated early in the visual system, but then reintegrated in the parieto-frontal cortex for action.
Affiliation(s)
- Ying Chen: Center for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada
- J Douglas Crawford: Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Center for Vision Research, Vision: Science to Applications (VISTA) Program, and Departments of Psychology, Biology, and Kinesiology & Health Science, York University, Toronto, Ontario, Canada
8. Lucas A, Tomlinson T, Rohani N, Chowdhury R, Solla SA, Katsaggelos AK, Miller LE. Neural Networks for Modeling Neural Spiking in S1 Cortex. Front Syst Neurosci 2019;13:13. PMID: 30983978; PMCID: PMC6449471; DOI: 10.3389/fnsys.2019.00013
Abstract
Somatosensation is composed of two distinct modalities: touch, arising from sensors in the skin, and proprioception, resulting primarily from sensors in the muscles, combined with these same cutaneous sensors. In contrast to the wealth of information about touch, we know considerably less about the nature of the signals giving rise to proprioception at the cortical level. Likewise, while there is considerable interest in developing encoding models of touch-related neurons for application to brain machine interfaces, much less emphasis has been placed on an analogous proprioceptive interface. Here we investigate the use of Artificial Neural Networks (ANNs) to model the relationship between the firing rates of single neurons in area 2, a largely proprioceptive region of somatosensory cortex (S1), and several types of kinematic variables related to arm movement. To gain a better understanding of how these kinematic variables interact to create the proprioceptive responses recorded in our datasets, we train ANNs under different conditions, each involving a different set of input and output variables. We explore the kinematic variables that provide the best network performance, and find that the addition of information about joint angles and/or muscle lengths significantly improves the prediction of neural firing rates. Our results thus provide new insight regarding the complex representations of limb motion in S1: the firing rates of neurons in area 2 may be more closely related to the activity of peripheral sensors than to extrinsic hand position. In addition, we conduct numerical experiments to determine the sensitivity of ANN models to various choices of training design and hyper-parameters. Our results provide a baseline and new tools for future research that utilizes machine learning to better describe and understand the activity of neurons in S1.
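As a concrete illustration of this modeling approach (not the authors' code or data), the sketch below fits a small feedforward network to a synthetic neuron whose rate depends nonlinearly on joint angles, and shows why adding intrinsic variables to the inputs can raise predictive performance. All variable names, dimensions, and values are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
# Hypothetical kinematic inputs: hand state plus joint angles.
hand = rng.normal(size=(n, 4))    # x, y, vx, vy
joints = rng.normal(size=(n, 7))  # 7 joint angles (assumed)
X = np.hstack([hand, joints])

# Synthetic "firing rate": nonlinear in joint angles, so a hand-only model underfits.
rate = (np.exp(0.3 * joints[:, 0] - 0.2 * joints[:, 1] ** 2)
        + 0.5 * hand[:, 2] + rng.normal(0, 0.1, n))

for name, feats in [("hand only", X[:, :4]), ("hand + joints", X)]:
    Xtr, Xte, ytr, yte = train_test_split(feats, rate, random_state=0)
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    net.fit(Xtr, ytr)
    print(f"{name}: R^2 = {net.score(Xte, yte):.3f}")
```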
Affiliation(s)
- Alice Lucas: Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, IL, United States
- Tucker Tomlinson: Department of Physiology, Northwestern University, Chicago, IL, United States
- Neda Rohani: Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, IL, United States
- Raeed Chowdhury: Department of Physiology, Northwestern University, Chicago, IL, United States; Department of Biomedical Engineering, Northwestern University, Evanston, IL, United States
- Sara A. Solla: Department of Physiology, Northwestern University, Chicago, IL, United States; Department of Physics and Astronomy, Northwestern University, Evanston, IL, United States
- Aggelos K. Katsaggelos: Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, IL, United States
- Lee E. Miller: Department of Physiology, Northwestern University, Chicago, IL, United States; Department of Biomedical Engineering, Northwestern University, Evanston, IL, United States; Department of Physical Medicine and Rehabilitation, Northwestern University and Rehabilitation Institute of Chicago, Chicago, IL, United States
9. Liu J, Ando H. Response Modality vs. Target Modality: Sensory Transformations and Comparisons in Cross-modal Slant Matching Tasks. Sci Rep 2018;8:11068. PMID: 30038316; PMCID: PMC6056512; DOI: 10.1038/s41598-018-29375-w
Abstract
Humans constantly combine multi-sensory spatial information to successfully interact with objects in peripersonal space. Previous studies suggest that sensory inputs of different modalities are encoded in different reference frames. In cross-modal tasks where the target and response modalities are different, it is unclear into which reference frame these multiple sensory signals are transformed for comparison. The current study used a slant perception and parallelity paradigm to explore this issue. Participants perceived (either visually or haptically) the slant of a reference board and were asked either to adjust an invisible test board by hand manipulation or to adjust a visible test board through verbal instructions so that it was physically parallel to the reference board. We examined the patterns of constant error and variability of unimodal and cross-modal tasks with various reference slant angles at different reference/test locations. The results revealed that rather than a mixture of the patterns of the unimodal conditions, the pattern in cross-modal conditions depended almost entirely on the response modality and was not substantially affected by the target modality. Deviations in haptic response conditions could be predicted by the locations of the reference and test board, whereas the reference slant angle was an important predictor in visual response conditions.
Affiliation(s)
- Juan Liu: Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT) and Osaka University, Osaka, Japan
- Hiroshi Ando: Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT) and Osaka University, Osaka, Japan
10. Chen Y, Monaco S, Crawford JD. Neural substrates for allocentric-to-egocentric conversion of remembered reach targets in humans. Eur J Neurosci 2018. PMID: 29512943; DOI: 10.1111/ejn.13885
Abstract
Targets for goal-directed action can be encoded in allocentric coordinates (relative to another visual landmark), but it is not known how these are converted into egocentric commands for action. Here, we investigated this using a slow event-related fMRI paradigm, based on our previous behavioural finding that the allocentric-to-egocentric (Allo-Ego) conversion for reach is performed at the first possible opportunity. Participants were asked to remember (and eventually reach towards) the location of a briefly presented target relative to another visual landmark. After a first memory delay, participants were forewarned by a verbal instruction if the landmark would reappear at the same location (potentially allowing them to plan a reach following the auditory cue before the second delay), or at a different location where they had to wait for the final landmark to be presented before response, and then reach towards the remembered target location. As predicted, participants showed landmark-centred directional selectivity in occipital-temporal cortex during the first memory delay, and only developed egocentric directional selectivity in occipital-parietal cortex during the second delay for the 'Same cue' task, and during response for the 'Different cue' task. We then compared cortical activation between these two tasks at the times when the Allo-Ego conversion occurred, and found common activation in right precuneus, right presupplementary area and bilateral dorsal premotor cortex. These results confirm that the brain converts allocentric codes to egocentric plans at the first possible opportunity, and identify the four most likely candidate sites specific to the Allo-Ego transformation for reaches.
Affiliation(s)
- Ying Chen: Center for Vision Research, Room 0009, Lassonde Building, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), Toronto, ON, Canada
- Simona Monaco: Center for Mind/Brain Sciences, University of Trento, Trento, Italy
- J Douglas Crawford: Center for Vision Research, Room 0009, Lassonde Building, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), Toronto, ON, Canada; Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
11. Russo M, Cesqui B, La Scaleia B, Ceccarelli F, Maselli A, Moscatelli A, Zago M, Lacquaniti F, d'Avella A. Intercepting virtual balls approaching under different gravity conditions: evidence for spatial prediction. J Neurophysiol 2017;118:2421-2434. PMID: 28768737; DOI: 10.1152/jn.00025.2017
Abstract
To accurately time motor responses when intercepting falling balls we rely on an internal model of gravity. However, whether and how such a model is also used to estimate the spatial location of interception is still an open question. Here we addressed this issue by asking 25 participants to intercept balls projected from a fixed location 6 m in front of them and approaching along trajectories with different arrival locations, flight durations, and gravity accelerations (0g and 1g). The trajectories were displayed in an immersive virtual reality system with a wide field of view. Participants intercepted approaching balls with a racket, and they were free to choose the time and place of interception. We found that participants often achieved a better performance with 1g than 0g balls. Moreover, the interception points were distributed along the direction of a 1g path for both 1g and 0g balls. In the latter case, interceptions tended to cluster on the upper half of the racket, indicating that participants aimed at a lower position than the actual 0g path. These results suggest that an internal model of gravity was probably used in predicting the interception locations. However, we found that the difference in performance between 1g and 0g balls was modulated by flight duration, the difference being larger for faster balls. In addition, the number of peaks in the hand speed profiles increased with flight duration, suggesting that visual information was used to adjust the motor response, correcting the prediction to some extent.
New & Noteworthy: Here we show that an internal model of gravity plays a key role in predicting where to intercept a fast-moving target. Participants also assumed an accelerated motion when intercepting balls approaching in a virtual environment at constant velocity. We also show that the role of visual information in guiding interceptive movement increases when more time is available.
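The aim-low bias reported for 0g balls follows directly from a 1g internal model; below is a worked example with hypothetical launch parameters (not the study's trajectories) showing how far below the true constant-velocity position a 1g prior would place the ball.

```python
import numpy as np

g = 9.81                      # m/s^2
t = 0.5                       # flight duration (s), assumed
p0 = np.array([6.0, 1.5])     # launch point: 6 m ahead, 1.5 m high (height assumed)
v0 = np.array([-12.0, 0.0])   # initial velocity (horizontal, vertical), m/s, assumed

actual_0g = p0 + v0 * t                                        # constant-velocity (0g) ball
predicted_1g = p0 + v0 * t + np.array([0.0, -0.5 * g * t**2])  # 1g internal-model prediction

bias = actual_0g[1] - predicted_1g[1]
print(f"1g model places the ball {bias:.2f} m below its true 0g position")
# A hand guided by the 1g prior therefore aims low (before any online correction),
# so the 0g ball tends to strike the upper half of the racket, as reported above.
```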
Affiliation(s)
- Marta Russo: Centre of Space Bio-medicine, University of Rome Tor Vergata, Rome, Italy
- Benedetta Cesqui: Centre of Space Bio-medicine, University of Rome Tor Vergata, Rome, Italy; Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Barbara La Scaleia: Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Antonella Maselli: Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Alessandro Moscatelli: Centre of Space Bio-medicine, University of Rome Tor Vergata, Rome, Italy; Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Myrka Zago: Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Francesco Lacquaniti: Centre of Space Bio-medicine, University of Rome Tor Vergata, Rome, Italy; Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy; Department of Systems Medicine, University of Rome Tor Vergata, Rome, Italy
- Andrea d'Avella: Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy; Department of Biomedical and Dental Sciences and Morphofunctional Imaging, University of Messina, Messina, Italy
12. VanGilder P, Shi Y, Apker G, Dyson K, Buneo CA. Multisensory Interactions Influence Neuronal Spike Train Dynamics in the Posterior Parietal Cortex. PLoS One 2016;11:e0166786. PMID: 28033334; PMCID: PMC5199055; DOI: 10.1371/journal.pone.0166786
Abstract
Although significant progress has been made in understanding multisensory interactions at the behavioral level, their underlying neural mechanisms remain relatively poorly understood in cortical areas, particularly during the control of action. In recent experiments where animals reached to and actively maintained their arm position at multiple spatial locations while receiving either proprioceptive or visual-proprioceptive position feedback, multisensory interactions were shown to be associated with reduced spiking (i.e. subadditivity) as well as reduced intra-trial and across-trial spiking variability in the superior parietal lobule (SPL). To further explore the nature of such interaction-induced changes in spiking variability we quantified the spike train dynamics of 231 of these neurons. Neurons were classified as Poisson, bursty, refractory, or oscillatory (in the 13–30 Hz “beta-band”) based on their spike train power spectra and autocorrelograms. No neurons were classified as Poisson-like in either the proprioceptive or visual-proprioceptive conditions. Instead, oscillatory spiking was most commonly observed with many neurons exhibiting these oscillations under only one set of feedback conditions. The results suggest that the SPL may belong to a putative beta-synchronized network for arm position maintenance and that position estimation may be subserved by different subsets of neurons within this network depending on available sensory information. In addition, the nature of the observed spiking variability suggests that models of multisensory interactions in the SPL should account for both Poisson-like and non-Poisson variability.
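The spectrum-based classification used above can be sketched as follows: a flat power spectrum indicates Poisson-like firing, while a peak in the 13-30 Hz band flags an oscillatory neuron. The synthetic neuron, rates, and reference band below are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 1000                     # 1 ms bins
t = np.arange(0, 10, 1 / fs)  # 10 s of data

# Synthetic spike train: Poisson counts with rate modulated at 20 Hz (beta band).
rate = 20 * (1 + 0.8 * np.sin(2 * np.pi * 20 * t))     # spikes/s, assumed values
spikes = (rng.random(t.size) < rate / fs).astype(float)

# Power spectrum of the mean-subtracted spike train.
psd = np.abs(np.fft.rfft(spikes - spikes.mean())) ** 2
freqs = np.fft.rfftfreq(spikes.size, 1 / fs)

beta = (freqs >= 13) & (freqs <= 30)
ref = (freqs > 45) & (freqs <= 100)   # reference band away from the modulation (mine)
print(f"beta/reference power ratio: {psd[beta].mean() / psd[ref].mean():.1f}")
# Ratios near 1 would indicate Poisson-like firing; a clear beta-band excess
# marks the neuron as oscillatory under this classification.
```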
Affiliation(s)
- Paul VanGilder: School of Biological and Health Systems Engineering, Arizona State University, Tempe, United States of America
- Ying Shi: School of Biological and Health Systems Engineering, Arizona State University, Tempe, United States of America
- Gregory Apker: School of Biological and Health Systems Engineering, Arizona State University, Tempe, United States of America
- Keith Dyson: School of Biological and Health Systems Engineering, Arizona State University, Tempe, United States of America
- Christopher A. Buneo: School of Biological and Health Systems Engineering, Arizona State University, Tempe, United States of America
13. van der Graaff MCW, Brenner E, Smeets JBJ. Vector and position coding in goal-directed movements. Exp Brain Res 2016;235:681-689. PMID: 27858127; PMCID: PMC5315739; DOI: 10.1007/s00221-016-4828-9
Abstract
Two different ways to code a goal-directed movement have been proposed in the literature: vector coding and position coding. Assuming that the code is fine-tuned if a movement is immediately repeated, one can predict that repeating movements to the same endpoint will increase precision if movements are coded in terms of the position of the endpoint. Repeating the same movement vector at slightly different positions will increase precision if movements are coded in terms of vectors. Following this reasoning, Hudson and Landy (J Neurophys 108(10):2708–2716, 2012) found evidence for both types of coding when participants moved their hand over a table while the target and feedback were provided on a vertical screen. Do we also see evidence for both types of coding if participants repeat movements within a more natural visuo-motor mapping? To find out, we repeated the study of Hudson and Landy (J Neurophys 108(10):2708–2716, 2012), but our participants made movements directly to the targets. We compared the same movements embedded in blocks of repetitions of endpoints and blocks of repetitions of movement vectors. Within blocks, the movements were presented in a random order. We found no benefit of repeating either a position or a vector. We subsequently repeated the experiment with a similar mapping between movements and images to those used by Hudson and Landy and found that participants only clearly benefit from repeating a position. We conclude that repeating a position is particularly useful when dealing with unusual visuo-motor mappings.
Affiliation(s)
- Eli Brenner: Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Jeroen B J Smeets: Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
14. Brunamonti E, Genovesio A, Pani P, Caminiti R, Ferraina S. Reaching-related Neurons in Superior Parietal Area 5: Influence of the Target Visibility. J Cogn Neurosci 2016;28:1828-1837. DOI: 10.1162/jocn_a_01004
Abstract
Reaching movements require the integration of both somatic and visual information. These signals can have different relevance, depending on whether reaches are performed toward visual or memorized targets. We tested the hypothesis that under such conditions, therefore depending on target visibility, posterior parietal neurons integrate somatic and visual signals differently. Monkeys were trained to execute both types of reaches from different hand resting positions and in total darkness. Neural activity was recorded in Area 5 (PE) and analyzed by focusing on the preparatory epoch, that is, before movement initiation. Many neurons were influenced by the initial hand position, and most of them were further modulated by the target visibility. For the same starting position, we found a prevalence of neurons with activity that differed depending on whether hand movement was performed toward memorized or visual targets. This result suggests that posterior parietal cortex integrates available signals in a flexible way based on contextual demands.
15.
Abstract
Voluntary movement is a result of signals transmitted through a communication channel that links the internal world in our minds to the physical world around us. Intention can be considered the desire to effect change on our environment, and this is contained in the signals from the brain, passed through the nervous system to converge on muscles that generate displacements and forces on our surroundings. The resulting changes in the world act to generate sensations that feed back to the nervous system, closing the control loop. This Perspective discusses the experimental and theoretical underpinnings of current models of movement generation and the way they are modulated by external information. Movement systems embody intentionality and prediction, two factors that are propelling a revolution in engineering. Development of movement models that include the complexities of the external world may allow a better understanding of the neuronal populations regulating these processes, as well as the development of solutions for autonomous vehicles and robots, and neural prostheses for those who are motor impaired.
Affiliation(s)
- Andrew B Schwartz: Department of Neurobiology, School of Medicine, University of Pittsburgh, E1440 BSTWR, 200 Lothrop Street, Pittsburgh, PA 15213, USA
16. La Scaleia B, Zago M, Lacquaniti F. Hand interception of occluded motion in humans: a test of model-based vs. on-line control. J Neurophysiol 2015;114:1577-1592. PMID: 26133803; DOI: 10.1152/jn.00475.2015
Abstract
Two control schemes have been hypothesized for the manual interception of fast visual targets. In model-free on-line control, extrapolation of target motion is based on continuous visual information, without resorting to physical models. In model-based control, by contrast, a prior model of target motion predicts the future spatiotemporal trajectory. To distinguish between the two hypotheses in the case of projectile motion, we asked participants to hit a ball that rolled down an incline at 0.2 g and then fell in air at 1 g along a parabola. By varying starting position, ball velocity and trajectory differed between trials. Motion on the incline was always visible, whereas parabolic motion was either visible or occluded. We found that participants were equally successful at hitting the falling ball in both visible and occluded conditions. Moreover, in different trials the intersection points were distributed along the parabolic trajectories of the ball, indicating that subjects were able to extrapolate an extended segment of the target trajectory. Remarkably, this trend was observed even at the very first repetition of movements. These results are consistent with the hypothesis of model-based control, but not with on-line control. Indeed, ball path and speed during the occlusion could not be extrapolated solely from the kinematic information obtained during the preceding visible phase. The only way to extrapolate ball motion correctly during the occlusion was to assume that the ball would fall under gravity and air drag when hidden from view. Such an assumption had to be derived from prior experience.
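To make the model-based hypothesis concrete: during occlusion, correct extrapolation requires integrating an internal model of gravity plus air drag forward from the last visible state. A minimal sketch, with assumed parameters (the drag coefficient and the state at occlusion are hypothetical, not the paper's values):

```python
import numpy as np

def extrapolate(p0, v0, t_end, dt=0.001, g=9.81, k=0.05):
    """Model-based extrapolation of an occluded ball: forward Euler
    integration of gravity plus linear air drag (k is an assumed value)."""
    p, v = np.array(p0, float), np.array(v0, float)
    for _ in range(int(t_end / dt)):
        a = np.array([0.0, -g]) - k * v   # gravity + drag opposing velocity
        v += a * dt
        p += v * dt
    return p

# Hypothetical state when the ball leaves the incline and becomes occluded.
p_model = extrapolate(p0=[0.0, 1.0], v0=[1.7, -1.0], t_end=0.4)
# A purely on-line scheme can only carry forward the last seen velocity.
p_online = np.array([0.0, 1.0]) + np.array([1.7, -1.0]) * 0.4
print("model-based:", p_model, " constant-velocity extrapolation:", p_online)
```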
Affiliation(s)
- Barbara La Scaleia: Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Myrka Zago: Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Francesco Lacquaniti: Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy; Department of Systems Medicine, University of Rome Tor Vergata, Rome, Italy; Centre of Space Bio-medicine, University of Rome Tor Vergata, Rome, Italy
17. Using the precision of the primate to study the origins of movement variability. Neuroscience 2015;296:92-100. DOI: 10.1016/j.neuroscience.2015.01.005
18. Efficiency of visual feedback integration differs between dominant and non-dominant arms during a reaching task. Exp Brain Res 2014;233:317-327. DOI: 10.1007/s00221-014-4116-5
19. Lee D, Poizner H, Corcos DM, Henriques DY. Unconstrained reaching modulates eye-hand coupling. Exp Brain Res 2014;232:211-223. PMID: 24121521; DOI: 10.1007/s00221-013-3732-9
Abstract
Eye–hand coordination is a crucial element of goal-directed movements. However, few studies have looked at the extent to which unconstrained movements of the eyes and hand made to targets influence each other. We studied human participants who moved either their eyes or both their eyes and hand to one of three static or flashed targets presented in 3D space. At the start of each trial, gaze was directed, and the hand located, at a common start position on either the right or left side of the body. We found that the velocity and scatter of memory-guided saccades (flashed targets) differed significantly when produced in combination with a reaching movement compared with when produced alone. Specifically, when accompanied by a reach, peak saccadic velocities were lower than when the eye moved alone. Peak saccade velocities, as well as latencies, were also highly correlated with those for reaching movements, especially for the briefly flashed targets compared to the continuously visible target. The scatter of saccade endpoints was greater when the saccades were produced with the reaching movement than when produced without, and the size of the scatter for both saccades and reaches was weakly correlated. These findings suggest that saccades and reaches made to 3D targets are weakly to moderately coupled both temporally and spatially, and that this is partly the result of the arm movement influencing the eye movement. Taken together, this study provides further evidence that the oculomotor and arm motor systems interact above and beyond any common target representations shared by the two motor systems.
20. Ferrari-Toniolo S, Papazachariadis O, Visco-Comandini F, Salvati M, D’Elia A, Di Berardino F, Caminiti R, Battaglia-Mayer A. A visuomotor disorder in the absence of movement: Does Optic Ataxia generalize to learned isometric hand action? Neuropsychologia 2014;63:59-71. DOI: 10.1016/j.neuropsychologia.2014.07.029
21. Khanafer S, Cressman EK. Sensory integration during reaching: the effects of manipulating visual target availability. Exp Brain Res 2014;232:3833-3846. DOI: 10.1007/s00221-014-4064-0
22. La Scaleia B, Lacquaniti F, Zago M. Neural extrapolation of motion for a ball rolling down an inclined plane. PLoS One 2014;9:e99837. PMID: 24940874; PMCID: PMC4062474; DOI: 10.1371/journal.pone.0099837
Abstract
It is known that humans tend to misjudge the kinematics of a target rolling down an inclined plane. Because visuomotor responses are often more accurate and less prone to perceptual illusions than cognitive judgments, we asked the question of how rolling motion is extrapolated for manual interception or drawing tasks. In three experiments a ball rolled down an incline with kinematics that differed as a function of the starting position (4 different positions) and slope (30°, 45° or 60°). In Experiment 1, participants had to punch the ball as it fell off the incline. In Experiment 2, the ball rolled down the incline but was stopped at the end; participants were asked to imagine that the ball kept moving and to punch it. In Experiment 3, the ball rolled down the incline and was stopped at the end; participants were asked to draw with the hand in air the trajectory that would be described by the ball if it kept moving. We found that performance was most accurate when motion of the ball was visible until interception and haptic feedback of hand-ball contact was available (Experiment 1). However, even when participants punched an imaginary moving ball (Experiment 2) or drew in air the imaginary trajectory (Experiment 3), they were able to extrapolate to some extent global aspects of the target motion, including its path, speed and arrival time. We argue that the path and kinematics of a ball rolling down an incline can be extrapolated surprisingly well by the brain using both visual information and internal models of target motion.
Affiliation(s)
- Barbara La Scaleia: Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
- Francesco Lacquaniti: Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy; Department of Systems Medicine, University of Rome Tor Vergata, Rome, Italy; Centre of Space Bio-medicine, University of Rome Tor Vergata, Rome, Italy
- Myrka Zago: Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
23. Tagliabue M, McIntyre J. A modular theory of multisensory integration for motor control. Front Comput Neurosci 2014;8:1. PMID: 24550816; PMCID: PMC3908447; DOI: 10.3389/fncom.2014.00001
Abstract
To control targeted movements, such as reaching to grasp an object or hammering a nail, the brain can use diverse sources of sensory information, such as vision and proprioception. Although a variety of studies have shown that sensory signals are optimally combined according to principles of maximum likelihood, increasing evidence indicates that the CNS does not compute a single, optimal estimation of the target's position to be compared with a single optimal estimation of the hand. Rather, it employs a more modular approach in which the overall behavior is built by computing multiple concurrent comparisons carried out simultaneously in a number of different reference frames. The results of these individual comparisons are then optimally combined in order to drive the hand. In this article we examine at a computational level two formulations of concurrent models for sensory integration and compare them to the more conventional model of converging multi-sensory signals. Through a review of published studies, both our own and those performed by others, we produce evidence favoring the concurrent formulations. We then examine in detail the effects of additive signal noise as information flows through the sensorimotor system. By taking into account the noise added by sensorimotor transformations, one can explain why the CNS may shift its reliance on one sensory modality toward a greater reliance on another and investigate under what conditions those sensory transformations occur. Careful consideration of how transformed signals will co-vary with the original source also provides insight into how the CNS chooses one sensory modality over another. These concepts can be used to explain why the CNS might, for instance, create a visual representation of a task that is otherwise limited to the kinesthetic domain (e.g., pointing with one hand to a finger on the other) and why the CNS might choose to recode sensory information in an external reference frame.
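One element of this concurrent scheme can be illustrated numerically: each reference frame yields its own target-hand comparison, comparisons that require a sensory transformation inherit extra transformation noise, and maximum-likelihood weighting then shifts reliance away from them. A toy Monte Carlo sketch (all noise magnitudes assumed, not taken from the theory's fits):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
sd_vis, sd_kin = 1.0, 2.0          # sensory noise SDs (assumed)

for sd_transform in (0.5, 2.0, 5.0):
    # True target-hand error is zero; each frame returns a noisy comparison.
    vis = rng.normal(0, sd_vis, n)                                   # visual-frame comparison
    kin = rng.normal(0, sd_kin, n) + rng.normal(0, sd_transform, n)  # needs a noisy transform
    # Maximum-likelihood weight on the visual comparison, from empirical variances.
    w = kin.var() / (vis.var() + kin.var())
    combined = w * vis + (1 - w) * kin
    print(f"transform SD {sd_transform}: weight on visual comparison = {w:.2f}, "
          f"combined SD = {combined.std():.2f}")
```

As the transformation noise grows, the weight on the transformed comparison falls, reproducing the predicted shift of reliance between modalities.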
Affiliation(s)
- Michele Tagliabue: Centre d'Étude de la Sensorimotricité (CNRS UMR 8194), Institut des Neurosciences et de la Cognition, Université Paris Descartes, Sorbonne Paris Cité, Paris, France
- Joseph McIntyre: Centre d'Étude de la Sensorimotricité (CNRS UMR 8194), Institut des Neurosciences et de la Cognition, Université Paris Descartes, Sorbonne Paris Cité, Paris, France
24. Larsson M. The optic chiasm: a turning point in the evolution of eye/hand coordination. Front Zool 2013;10:41. PMID: 23866932; PMCID: PMC3729728; DOI: 10.1186/1742-9994-10-41
Abstract
The primate visual system has a uniquely high proportion of ipsilateral retinal projections, retinal ganglion cells that do not cross the midline in the optic chiasm. The general assumption is that this developed due to the selective advantage of accurate depth perception through stereopsis. Here, the hypothesis that the need for accurate eye-forelimb coordination substantially influenced the evolution of the primate visual system is presented. Evolutionary processes may change the direction of retinal ganglion cells. Crossing, or non-crossing, in the optic chiasm determines which hemisphere receives visual feedback in reaching tasks. Each hemisphere receives little tactile and proprioceptive information about the ipsilateral hand. The eye-forelimb hypothesis proposes that abundant ipsilateral retinal projections developed in the primate brain to synthesize, in a single hemisphere, visual, tactile, proprioceptive, and motor information about a given hand, and that this improved eye-hand coordination and optimized the size of the brain. If accurate eye-hand coordination was a major factor in the evolution of stereopsis, stereopsis is likely to be highly developed for activity in the area where the hands most often operate. The primate visual system is ideally suited for tasks within arm's length and in the inferior visual field, where most manual activity takes place. Altering of ocular dominance in reaching tasks, reduced cross-modal cuing effects when arms are crossed, response of neurons in the primary motor cortex to viewed actions of a hand, multimodal neuron response to tactile as well as visual events, and extensive use of multimodal sensory information in reaching maneuvers support the premise that benefits of accurate limb control influenced the evolution of the primate visual system. The eye-forelimb hypothesis implies that evolutionary change toward hemidecussation in the optic chiasm provided parsimonious neural pathways in animals developing frontal vision and visually guided forelimbs, and also suggests a new perspective on vision convergence in prey and predatory animals.
Affiliation(s)
- Matz Larsson: The Cardiology Clinic, Örebro University Hospital, SE-701 85 Örebro, Sweden
25. Tagliabue M, McIntyre J. When kinesthesia becomes visual: a theoretical justification for executing motor tasks in visual space. PLoS One 2013;8:e68438. PMID: 23861903; PMCID: PMC3702599; DOI: 10.1371/journal.pone.0068438
Abstract
Several experimental studies in the literature have shown that even when performing purely kinesthetic tasks, such as reaching for a kinesthetically felt target with a hidden hand, the brain reconstructs a visual representation of the movement. In our previous studies, however, we did not observe any role of a visual representation of the movement in a purely kinesthetic task. This apparent contradiction could be related to a fundamental difference between the studied tasks. In our study subjects used the same hand to both feel the target and to perform the movement, whereas in most other studies, pointing to a kinesthetic target consisted of pointing with one hand to the finger of the other, or to some other body part. We hypothesize, therefore, that it is the necessity of performing inter-limb transformations that induces a visual representation of purely kinesthetic tasks. To test this hypothesis we asked subjects to perform the same purely kinesthetic task in two conditions: INTRA and INTER. In the former they used the right hand to both perceive the target and to reproduce its orientation. In the latter, subjects perceived the target with the left hand and responded with the right. To quantify the use of a visual representation of the movement we measured deviations induced by an imperceptible conflict that was generated between visual and kinesthetic reference frames. Our hypothesis was confirmed by the observed deviations of responses due to the conflict in the INTER, but not in the INTRA, condition. To reconcile these observations with recent theories of sensori-motor integration based on maximum likelihood estimation, we propose here a new model formulation that explicitly considers the effects of covariance between sensory signals that are directly available and internal representations that are ‘reconstructed’ from those inputs through sensori-motor transformations.
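The covariance consideration introduced here can be made explicit with the standard minimum-variance combination of two correlated estimates (notation mine, not the paper's): take a direct estimate with standard deviation sigma_1 and a reconstruction of it through a noisy sensori-motor transformation with standard deviation sigma_2, correlated at rho.

```latex
% Minimum-variance linear combination of correlated estimates \hat{x}_1, \hat{x}_2:
\hat{x} = w\,\hat{x}_1 + (1-w)\,\hat{x}_2,
\qquad
w = \frac{\sigma_2^{2} - \rho\,\sigma_1\sigma_2}{\sigma_1^{2}+\sigma_2^{2}-2\rho\,\sigma_1\sigma_2},
\qquad
\operatorname{Var}(\hat{x}) = \frac{\sigma_1^{2}\sigma_2^{2}\,(1-\rho^{2})}{\sigma_1^{2}+\sigma_2^{2}-2\rho\,\sigma_1\sigma_2}.
```

With rho = 0 this reduces to the familiar independent-cue formula; with rho > 0, as when one estimate is reconstructed from the other, the variance reduction from combining shrinks, consistent with conflict-induced deviations appearing only when an inter-limb transformation forces such a reconstruction.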
Affiliation(s)
- Michele Tagliabue: Centre d'Etude de la Sensorimotricité (CNRS UMR 8194), Université Paris Descartes, Institut des Neurosciences et de la Cognition, Sorbonne Paris Cité, Paris, France
26. Tagliabue M, Arnoux L, McIntyre J. Keep your head on straight: facilitating sensori-motor transformations for eye-hand coordination. Neuroscience 2013;248:88-94. PMID: 23732231; DOI: 10.1016/j.neuroscience.2013.05.051
Abstract
In many day-to-day situations humans manifest a marked tendency to hold the head vertical while performing sensori-motor actions. For instance, when performing coordinated whole-body motor tasks, such as skiing, gymnastics or simply walking, and even when driving a car, human subjects will strive to keep the head aligned with the gravito-inertial vector. Until now, this phenomenon has been thought of as a means to limit variations of sensory signals emanating from the eyes and inner ears. Recent theories suggest that for the task of aligning the hand to a target, the CNS compares target and hand concurrently in both visual and kinesthetic domains, rather than combining sensory data into a single, multimodal reference frame. This implies that when sensory information is lacking in one modality, it must be 'reconstructed' based on information from the other. Here we asked subjects to reach to a visual target with the unseen hand. In this situation, the CNS might reconstruct the orientation of the target in kinesthetic space or reconstruct the orientation of the hand in visual space, or both. By having subjects tilt the head during target acquisition or during movement execution, we show a greater propensity to perform the sensory reconstruction that can be achieved when the head is held upright. These results suggest that the reason humans tend to keep their head upright may also have to do with how the brain manipulates and stores spatial information between reference frames and between sensory modalities, rather than only being tied to the specific problem of stabilizing visual and vestibular inputs.
Affiliation(s)
- M Tagliabue: Centre d'Etude de la Sensorimotricité, CNRS UMR 8194, Université Paris Descartes, Institut des Neurosciences et de la Cognition, 75006 Paris, France
- L Arnoux: Centre d'Etude de la Sensorimotricité, CNRS UMR 8194, Université Paris Descartes, Institut des Neurosciences et de la Cognition, 75006 Paris, France
- J McIntyre: Centre d'Etude de la Sensorimotricité, CNRS UMR 8194, Université Paris Descartes, Institut des Neurosciences et de la Cognition, 75006 Paris, France
27. Hadjidimitrakis K, Bertozzi F, Breveglieri R, Bosco A, Galletti C, Fattori P. Common Neural Substrate for Processing Depth and Direction Signals for Reaching in the Monkey Medial Posterior Parietal Cortex. Cereb Cortex 2013;24:1645-1657. DOI: 10.1093/cercor/bht021
28. Influence of age, spatial memory, and ocular fixation on localization of auditory, visual, and bimodal targets by human subjects. Exp Brain Res 2012;223:441-455. PMID: 23076429; DOI: 10.1007/s00221-012-3270-x
Abstract
A common complaint of the elderly is difficulty identifying and localizing auditory and visual sources, particularly in competing background noise. Spatial errors in the elderly may pose challenges and even threats to self and others during everyday activities, such as localizing sounds in a crowded room or driving in traffic. In this study, we investigated the influence of aging, spatial memory, and ocular fixation on the localization of auditory, visual, and combined auditory-visual (bimodal) targets. Head-restrained young and elderly subjects localized targets in a dark, echo-attenuated room using a manual laser pointer. Localization accuracy and precision (repeatability) were quantified for both ongoing and transient (remembered) targets at response delays up to 10 s. Because eye movements bias auditory spatial perception, localization was assessed under target fixation (eyes free, pointer guided by foveal vision) and central fixation (eyes fixed straight ahead, pointer guided by peripheral vision) conditions. Spatial localization across the frontal field in young adults demonstrated (1) horizontal overshoot and vertical undershoot for ongoing auditory targets under target fixation conditions, but near-ideal horizontal localization with central fixation; (2) accurate and precise localization of ongoing visual targets guided by foveal vision under target fixation that degraded when guided by peripheral vision during central fixation; (3) overestimation in horizontal central space (±10°) of remembered auditory, visual, and bimodal targets with increasing response delay. In comparison with young adults, elderly subjects showed (1) worse precision in most paradigms, especially when localizing with peripheral vision under central fixation; (2) greatly impaired vertical localization of auditory and bimodal targets; (3) increased horizontal overshoot in the central field for remembered visual and bimodal targets across response delays; (4) greater vulnerability to visual bias with bimodal stimuli. Results highlight age-, memory-, and modality-dependent deterioration in the processing of auditory and visual space, as well as an age-related increase in the dominance of vision when localizing bimodal sources.
29
Shi Y, Buneo CA. Movement variability resulting from different noise sources: a simulation study. Hum Mov Sci 2012; 31:772-90. [PMID: 22795761 DOI: 10.1016/j.humov.2011.07.003] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2011] [Revised: 07/12/2011] [Accepted: 07/14/2011] [Indexed: 11/19/2022]
Abstract
Limb movements are highly variable due in part to noise occurring at different stages of movement production, from sensing the position of the limb to the issuing of motor commands. Here we used a simulation approach to predict the effects of noise associated with (1) sensing the position of the limb ('position sensing noise') and (2) planning an appropriate movement vector ('trajectory planning noise'), as well as the combined effects of these factors, on arm movement variability across the workspace. Results were compared to those predicted by a previous model of the noise associated with movement execution. We found that the effects of sensing and planning related noise on movement variability were highly dependent upon both the planned movement direction and the initial configuration of the arm and differed in several respects from the effects of execution noise. In addition, sensing and planning noise interacted in a complex manner across movement directions. These results provide important insights into the relative roles of sensing, planning and execution noise in movement variability that could prove useful for understanding and addressing the exaggerated variability that arises from neurological damage, and for interpreting neurophysiological investigations that seek to relate neural variability to behavioral variability.
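The noise decomposition described above lends itself to a compact numerical sketch. The following Python fragment (hypothetical geometry and noise magnitudes, not the authors' model) illustrates the key distinction: position-sensing noise corrupts where the movement vector is computed from, while trajectory-planning noise corrupts the vector itself, and both surface as endpoint variability.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_endpoints(start, target, sigma_sense, sigma_plan, n=10000):
    # The movement vector is planned from the *sensed* start position,
    # so sensing errors propagate into the plan...
    sensed_start = start + rng.normal(0.0, sigma_sense, size=(n, 2))
    planned_vec = (target - sensed_start) + rng.normal(0.0, sigma_plan, size=(n, 2))
    # ...but the vector is executed from the *actual* start position
    # (execution noise is deliberately omitted in this sketch).
    return start + planned_vec

ep = simulate_endpoints(np.array([0.0, 0.0]), np.array([0.0, 0.3]),
                        sigma_sense=0.010, sigma_plan=0.005)
print(np.cov(ep, rowvar=False))  # endpoint covariance from the two noise sources
```

Because the two sources are independent, endpoint variance is simply their per-axis sum here; making either source anisotropic or arm-configuration-dependent, as in the study, reshapes the endpoint distribution accordingly.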
Affiliation(s)
- Y Shi
- School of Biological and Health Systems Engineering, Arizona State University, P.O. Box 879709, Tempe, AZ 85287, USA.
30
Funahashi S. Space representation in the prefrontal cortex. Prog Neurobiol 2012; 103:131-55. [PMID: 22521602 DOI: 10.1016/j.pneurobio.2012.04.002] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2011] [Revised: 04/04/2012] [Accepted: 04/04/2012] [Indexed: 11/30/2022]
Abstract
The representation of space and its function in the prefrontal cortex have been examined using a variety of behavioral tasks. Among them, the delayed-response task has been used to examine the mechanisms of spatial representation because it requires the temporary maintenance of spatial information. In addition, the concept of working memory, proposed to explain prefrontal functions, has helped us to understand the nature and functions of space representation in the prefrontal cortex. Detailed analysis of the delay-period activity observed in spatial working memory tasks has provided important information for understanding space representation in the prefrontal cortex. Directional delay-period activity has been shown to be a neural correlate of the mechanism for temporarily maintaining information and to represent spatial information for both the visual cue and the saccade. In addition, many task-related prefrontal neurons exhibit spatially selective activity; these neurons are also important components of spatial information processing. In fact, information flow from sensory-related neurons to motor-related neurons has been demonstrated, along with a change in spatial representation as the trial progresses. Dynamic functional interactions among neurons exhibiting different task-related activities and representing different aspects of information could play an essential role in information processing. Information provided by other cortical or subcortical areas might also be necessary for the representation of space in the prefrontal cortex. To better understand the representation of space and its function in the prefrontal cortex, we need to understand the nature of the functional interactions between the prefrontal cortex and other cortical and subcortical areas.
Affiliation(s)
- Shintaro Funahashi
- Kokoro Research Center, Kyoto University, Sakyo-ku, Kyoto 606-8501, Japan.
31
Eye-hand coordination when the body moves: dynamic egocentric and exocentric sensory encoding. Neurosci Lett 2012; 513:78-83. [PMID: 22343022 DOI: 10.1016/j.neulet.2012.02.011] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2011] [Revised: 02/01/2012] [Accepted: 02/03/2012] [Indexed: 11/21/2022]
Abstract
We investigated whether the human brain encodes and memorizes object orientations with respect to external references, such as gravity and visual landmarks, or whether it uses egocentric representations of the task. To this end, we applied a new analysis to a previously reported experiment on a reach-to-grasp-like movement, in which we used sensory conflict to identify how the CNS encodes target and hand orientation. Whereas in the preceding study deviations of responses provoked by the conflict provided evidence for the simultaneous use of visual and kinesthetic representations of target and hand (Tagliabue and McIntyre, 2011 [20]), here we used an analysis of response variability in the presence of conflict to test for ego- versus exo-centric encoding within each sensory modality. Our results show an increase of response variability with the amplitude of the head rotation, indicative of errors that accumulate when updating egocentric representations during head movements. In addition, the effect of conflict on error accumulation showed that the brain selects different information about the head movement for the updating, depending on the modality of the egocentric representation (visual or kinesthetic) that is retained. In particular, the CNS appears to privilege the sensory information about head movement that can most easily be combined with each internal representation. Moreover, a combined analysis of response variability and response deviations induced by the conflict suggests the coexistence of independent ego- and exo-centered internal representations within each sensory modality.
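The error-accumulation argument here has a standard back-of-the-envelope form, sketched below with schematic terms (the symbols are ours, not the authors'). If each increment of an egocentric update across a head rotation adds independent noise, the variances sum:

$$\sigma^2_{\text{response}} \approx \sigma^2_{\text{encode}} + \sigma^2_{\text{recall}} + k\,\lvert\Delta\theta_{\text{head}}\rvert\,\sigma^2_{\text{update}},$$

so response variability grows with the amplitude of the intervening head rotation, whereas an exocentric representation anchored to gravity or visual landmarks contributes a rotation-independent term. Comparing measured variability against these two signatures is what allows conflict paradigms to separate ego- from exocentric encoding.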
32
Niechwiej-Szwedo E, Goltz HC, Chandrakumar M, Wong AMF. The effect of sensory uncertainty due to amblyopia (lazy eye) on the planning and execution of visually-guided 3D reaching movements. PLoS One 2012; 7:e31075. [PMID: 22363549 PMCID: PMC3281912 DOI: 10.1371/journal.pone.0031075] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2011] [Accepted: 01/01/2012] [Indexed: 01/09/2023] Open
Abstract
BACKGROUND: Impairment of spatiotemporal visual processing in amblyopia has been studied extensively, but its effects on visuomotor tasks have rarely been examined. Here, we investigate how visual deficits in amblyopia affect motor planning and online control of visually-guided, unconstrained reaching movements.
METHODS: Thirteen patients with mild amblyopia, 13 with severe amblyopia and 13 visually-normal participants were recruited. Participants reached and touched a visual target during binocular and monocular viewing. Motor planning was assessed by examining spatial variability of the trajectory at 50-100 ms after movement onset. Online control was assessed by examining the endpoint variability and by calculating the coefficient of determination (R²), which correlates the spatial position of the limb during the movement to endpoint position.
RESULTS: Patients with amblyopia had reduced precision of the motor plan in all viewing conditions, as evidenced by increased variability of the reach early in the trajectory. Endpoint precision was comparable between patients with mild amblyopia and control participants. Patients with severe amblyopia had reduced endpoint precision along azimuth and elevation during amblyopic eye viewing only, and along the depth axis in all viewing conditions. In addition, they had significantly higher R² values at 70% of movement time along the elevation and depth axes during amblyopic eye viewing.
CONCLUSION: Sensory uncertainty due to amblyopia leads to reduced precision of the motor plan. The ability to implement online corrections depends on the severity of the visual deficit, viewing condition, and the axis of the reaching movement. Patients with mild amblyopia used online control effectively to compensate for the reduced precision of the motor plan. In contrast, patients with severe amblyopia were not able to use online control as effectively to amend the limb trajectory, especially along the depth axis, which could be due to their abnormal stereopsis.
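The R² index of online control used in the RESULTS can be computed in a few lines of NumPy; this is a hedged reconstruction of the general method (time-normalized trajectories assumed; the authors' exact preprocessing is not specified here):

```python
import numpy as np

def online_control_r2(trajectories, fraction=0.7):
    """R^2 between limb position partway through the movement and the endpoint.

    trajectories: (n_trials, n_samples) array for one spatial axis, with each
    trial resampled to a common number of samples. A high R^2 at, say, 70% of
    movement time means the endpoint is largely determined already, i.e.,
    little effective online correction occurs in the remainder of the reach.
    """
    idx = int(fraction * (trajectories.shape[1] - 1))
    r = np.corrcoef(trajectories[:, idx], trajectories[:, -1])[0, 1]
    return r ** 2
```

By this logic, the higher R² values in severe amblyopes indicate weaker online amendment of the limb trajectory.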
Affiliation(s)
- Ewa Niechwiej-Szwedo
- Department of Ophthalmology and Vision Sciences, The Hospital for Sick Children, Toronto, Canada
- Herbert C. Goltz
- Department of Ophthalmology and Vision Sciences, The Hospital for Sick Children, Toronto, Canada
- University of Toronto, Toronto, Canada
- Agnes M. F. Wong
- Department of Ophthalmology and Vision Sciences, The Hospital for Sick Children, Toronto, Canada
- University of Toronto, Toronto, Canada
33
Chang SWC, Snyder LH. The representations of reach endpoints in posterior parietal cortex depend on which hand does the reaching. J Neurophysiol 2012; 107:2352-65. [PMID: 22298831 DOI: 10.1152/jn.00852.2011] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Neurons in the parietal reach region (PRR) have been implicated in the sensory-to-motor transformation required for reaching toward visually defined targets. The neurons in each cortical hemisphere might be specifically involved in planning movements of just one limb, or the PRR might code reach endpoints generically, independent of which limb will actually move. Previous work has shown that the preferred directions of PRR neurons are similar for right and left limb movements but that the amplitude of modulation may vary greatly. We now test the hypothesis that frames of reference and eye and hand gain field modulations will, like preferred directions, be independent of which hand moves. This was not the case. Many neurons show clear differences in both the frame of reference and the direction and strength of gain field modulations, depending on which hand is used to reach. The results suggest that the information conveyed from the PRR to areas closer to the motor output (the readout from the PRR) is different for each limb, and that individual PRR neurons contribute either to control of the contralateral limb or to bimanual control.
Affiliation(s)
- Steve W C Chang
- Center for Cognitive Neuroscience, Department of Neurobiology, Duke University Medical Center, Durham, NC 27701, USA.
34
Infants and adults reaching in the dark. Exp Brain Res 2011; 217:237-49. [DOI: 10.1007/s00221-011-2984-5] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2011] [Accepted: 12/10/2011] [Indexed: 11/25/2022]
35
Apker GA, Buneo CA. Contribution of execution noise to arm movement variability in three-dimensional space. J Neurophysiol 2011; 107:90-102. [PMID: 21975450 DOI: 10.1152/jn.00495.2011] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Reaching movements are subject to noise associated with planning and execution, but precisely how these noise sources interact to determine patterns of endpoint variability in three-dimensional space is not well understood. For frontal plane movements, variability is largest along the depth axis (the axis along which visual planning noise is greatest), with execution noise contributing to this variability along the movement direction. Here we tested whether these noise sources interact in a similar way for movements directed in depth. Subjects performed sequences of two movements from a single starting position to targets that were either both contained within a frontal plane ("frontal sequences") or where the first was within the frontal plane and the second was directed in depth ("depth sequences"). For both sequence types, movements were performed with or without visual feedback of the hand. When visual feedback was available, endpoint distributions for frontal and depth sequences were generally anisotropic, with the principal axes of variability being strongly aligned with the depth axis. Without visual feedback, endpoint distributions for frontal sequences were relatively isotropic and movement direction dependent, while those for depth sequences were similar to those with visual feedback. Overall, the results suggest that in the presence of visual feedback, endpoint variability is dominated by uncertainty associated with planning and updating visually guided movements. In addition, the results suggest that without visual feedback, increased uncertainty in hand position estimation effectively unmasks the effect of execution-related noise, resulting in patterns of endpoint variability that are highly movement direction dependent.
Affiliation(s)
- Gregory A Apker
- School of Biological and Health Systems Engineering, Arizona State Univ., P.O. Box 879709, Tempe, AZ 85287-9709, USA
36
Scheidt RA, Ghez C, Asnani S. Patterns of hypermetria and terminal cocontraction during point-to-point movements demonstrate independent action of trajectory and postural controllers. J Neurophysiol 2011; 106:2368-82. [PMID: 21849613 DOI: 10.1152/jn.00763.2010] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
We examined elbow muscle activities and movement kinematics to determine how subjects combine elementary control actions in performing movements with one and two trajectory segments. In reaching, subjects made a rapid elbow flexion to a visual target before stabilizing the limb with either a low or a higher level of elbow flexor/extensor coactivity (CoA), which was cued by target diameter. Cursor diameter provided real-time biofeedback of actual muscle CoA. In reversing, the limb was to reverse direction within the target and return to the origin with minimal CoA. We previously reported that subjects overshoot the goal when attempting a reversal after first having learned to reach accurately to the same target. Here we test the hypothesis that this hypermetria results because reversals co-opt the initial feedforward control action from the preceding trained reach, thereby failing to account for task-dependent changes in limb impedance induced by differences in flexor/extensor coactivity as the target is acquired (higher in reaching than reversing). Instructed increases in elbow CoA began mid-reach, thus increasing elbow impedance and reducing the transient oscillations present in low-CoA movements. Flexor EMG alone increased at movement onset. Test reversals incorporated the initial agonist activity of previous reaches but not the increased coactivity at the target, thus leading to overshoot. Moreover, we observed elevated coactivity in reversals upon returning to the origin even though coactivity in reaching was centered at the goal target. These findings refute the idea that the brain necessarily invokes distinct unitary control actions for reaches and reversals made to the same target. Instead, reaches and reversals share a common control action that initiates trajectories toward their target and another later control action that terminates movement and stabilizes the limb about its final resting posture, which differs in the two tasks.
Affiliation(s)
- Robert A Scheidt
- Department of Biomedical Engineering, Olin Engineering Center, 303, PO Box 1881, Marquette University, Milwaukee, WI 53201-1881, USA.
37
Necessity is the mother of invention: reconstructing missing sensory information in multiple, concurrent reference frames for eye-hand coordination. J Neurosci 2011; 31:1397-409. [PMID: 21273424 DOI: 10.1523/jneurosci.0623-10.2011] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
When aligning the hand to grasp an object, the CNS combines multiple sensory inputs encoded in multiple reference frames. Previous studies suggest that when a direct comparison of target and hand is possible via a single sensory modality, the CNS avoids performing unnecessary coordinate transformations that add noise. But when target and hand do not share a common sensory modality (e.g., aligning the unseen hand to a visual target), at least one coordinate transformation is required. Similarly, body movements may occur between target acquisition and manual response, requiring that egocentric target information be updated or transformed to external reference frames to compensate. Here, we asked subjects to align the hand to an external target, where the target could be presented visually or kinesthetically and feedback about the hand was visual, kinesthetic, or both. We used a novel technique of imposing conflict between external visual and gravito-kinesthetic reference frames when subjects tilted the head during an instructed memory delay. By comparing experimental results to analytical models based on principles of maximum likelihood, we showed that multiple transformations above the strict minimum may be performed, but only if the task precludes a unimodal comparison of egocentric target and hand information. Thus, for cross-modal tasks, or when head movements are involved, the CNS creates and uses both kinesthetic and visual representations. We conclude that the necessity of producing at least one coordinate transformation activates multiple, concurrent internal representations, the functionality of which depends on the alignment of the head with respect to gravity.
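The maximum-likelihood machinery invoked here is worth stating explicitly (a schematic with our own symbols, not the authors' full model). Two independent Gaussian estimates combine with inverse-variance weights,

$$\hat{x} = \frac{\sigma_2^2\,x_1 + \sigma_1^2\,x_2}{\sigma_1^2 + \sigma_2^2}, \qquad \operatorname{Var}(\hat{x}) = \frac{\sigma_1^2\,\sigma_2^2}{\sigma_1^2 + \sigma_2^2},$$

and any estimate that must first be rotated into a common frame carries an inflated effective variance \(\sigma_i^2 + \sigma_T^2\), where \(\sigma_T^2\) is transformation noise. Each additional transformation adds another \(\sigma_T^2\) term, which is why the CNS should avoid transformations when a unimodal comparison is possible, yet tolerate them when no common modality exists.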
38
Abstract
OBJECTIVE: The objective of this study was to evaluate whether adding a pointing task would influence functional reach test performance in younger and older adults.
DESIGN: While standing on a force plate, 20 older (73 ± 8 yrs) and 20 younger (23 ± 1 yrs) adults were randomly administered a modification of the functional reach test and the functional point test. Functional pointing involved reaching and pointing at the farthest possible target in a series of 1.27-cm colored craft pom-poms attached at 2.54-cm intervals on a yardstick.
RESULTS: Both older adults (P = 0.001) and younger adults (P = 0.043) reached farther using the functional point test. Older adults also increased their anterior center of pressure displacement with this test (P = 0.037).
CONCLUSIONS: The addition of a pointing task can make the original clinical test more functional and increase reaching distance in both older and younger adults. Further research is needed to determine whether functional pointing challenges subjects' stability limits more than the traditional test does and offers greater sensitivity in the evaluation of functional balance and fall risk.
39
Behavioral and neural correlates of communication via pointing. PLoS One 2011; 6:e17719. [PMID: 21423659 PMCID: PMC3057969 DOI: 10.1371/journal.pone.0017719] [Citation(s) in RCA: 40] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2010] [Accepted: 02/11/2011] [Indexed: 12/03/2022] Open
Abstract
Communicative pointing is a human-specific gesture that allows sharing information about a visual item with another person. It sets up a three-way relationship between a subject who points, an addressee and an object. Yet psychophysical and neuroimaging studies have focused on non-communicative pointing, which implies a two-way relationship between a subject and an object without the involvement of an addressee, and makes such a gesture comparable to touching or grasping. Thus, experimental data on the communicative function of pointing remain scarce. Here, we examine whether the communicative value of pointing modifies both its behavioral and neural correlates by comparing pointing with or without communication. We found that when healthy participants pointed repeatedly at the same object, the communicative interaction with an addressee induced a spatial reshaping of both the pointing trajectories and the endpoint variability. Our finding supports the hypothesis that a change in reference frame occurs when pointing conveys a communicative intention. In addition, measurement of regional cerebral blood flow using H2(15)O PET showed that pointing when communicating with an addressee activated the right posterior superior temporal sulcus and the right medial prefrontal cortex, in contrast to pointing without communication. Such a right-hemisphere network suggests that the communicative value of pointing is related to processes involved in taking another person's perspective. This study brings to light the need for future studies on communicative pointing and its neural correlates by unraveling the three-way relationship between the subject, the object, and the addressee.
40
Gaze-centered spatial updating of reach targets across different memory delays. Vision Res 2011; 51:890-7. [PMID: 21219923 DOI: 10.1016/j.visres.2010.12.015] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2010] [Revised: 11/26/2010] [Accepted: 12/22/2010] [Indexed: 11/22/2022]
Abstract
Previous research has demonstrated that remembered targets for reaching are coded and updated relative to gaze, at least when the reaching movement is made soon after the target has been extinguished. In this study, we tested whether reach targets are updated relative to gaze after different time delays. Reaching endpoints varied systematically as a function of gaze relative to target, irrespective of whether the action was executed immediately or after a delay of 5, 8, or 12 s. These results suggest that memory traces for reach targets continue to be coded in a gaze-dependent reference frame if no external cues are present.
41
Chen Y, Byrne P, Crawford JD. Time course of allocentric decay, egocentric decay, and allocentric-to-egocentric conversion in memory-guided reach. Neuropsychologia 2011; 49:49-60. [DOI: 10.1016/j.neuropsychologia.2010.10.031] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2010] [Revised: 10/18/2010] [Accepted: 10/29/2010] [Indexed: 10/18/2022]
42
Burns JK, Blohm G. Multi-sensory weights depend on contextual noise in reference frame transformations. Front Hum Neurosci 2010; 4:221. [PMID: 21165177 PMCID: PMC3002464 DOI: 10.3389/fnhum.2010.00221] [Citation(s) in RCA: 44] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2010] [Accepted: 11/04/2010] [Indexed: 11/19/2022] Open
Abstract
During reach planning, we integrate multiple senses to estimate the location of the hand and the target, which is used to generate a movement. Visual and proprioceptive information are combined to determine the location of the hand. The goal of this study was to investigate whether multi-sensory integration is affected by extraretinal signals, such as head roll. A coordinate-matching transformation is believed to be required before vision and proprioception can be combined, because proprioceptive and visual sensory reference frames do not generally align. This transformation utilizes extraretinal signals about the current head roll position, for example to rotate proprioceptive signals into visual coordinates. Since head roll is an estimated sensory signal with noise, this head roll dependency of the reference frame transformation should introduce additional noise to the transformed signal, reducing its reliability and thus its weight in the multi-sensory integration. To investigate the role of noisy reference frame transformations in multi-sensory weighting, we developed a novel probabilistic (Bayesian) multi-sensory integration model (based on Sober and Sabes, 2003) that included explicit (noisy) reference frame transformations. We then performed a reaching experiment to test the model's predictions. To test for head-roll-dependent multi-sensory integration, we introduced conflicts between viewed and actual hand position and measured reach errors. Reach analysis revealed that eccentric head roll orientations led to an increase of movement variability, consistent with our model. We further found that the weighting of vision and proprioception depended on head roll, which we interpret as a result of signal-dependent noise. Thus, the brain has online knowledge of the statistics of its internal sensory representations. In summary, we show that sensory reliability is used in a context-dependent way to adjust multi-sensory integration weights for reaching.
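A minimal sketch of the weighting scheme the abstract argues for, in the spirit of the Sober and Sabes-style model it extends (hypothetical parameter values; `k_transform` and its linear form are our assumptions, not the fitted model):

```python
import numpy as np

def fuse_hand_estimates(x_vis, x_prop, sigma_vis, sigma_prop,
                        head_roll_deg, k_transform=0.002):
    """Reliability-weighted fusion of visual and transformed proprioceptive
    hand-position estimates, expressed in visual coordinates.

    Schematic assumption: rotating proprioception into visual coordinates
    injects noise that grows with head roll, so larger rolls down-weight
    the proprioceptive estimate.
    """
    var_vis = sigma_vis ** 2
    var_prop = sigma_prop ** 2 + k_transform * abs(head_roll_deg)  # transformation noise
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_prop)
    x_hat = w_vis * x_vis + (1.0 - w_vis) * x_prop
    var_hat = 1.0 / (1.0 / var_vis + 1.0 / var_prop)
    return x_hat, var_hat, w_vis

# With the head upright, equal baseline variances split the weight evenly;
# at 30 degrees of roll, vision gains weight:
print(fuse_hand_estimates(0.0, 1.0, 0.1, 0.1, head_roll_deg=30.0))
```

The model's testable signature, confirmed by the reach data, is that overall movement variability rises and the visual weight increases at eccentric head rolls.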
43
Liu X, Mosier KM, Mussa-Ivaldi FA, Casadio M, Scheidt RA. Reorganization of finger coordination patterns during adaptation to rotation and scaling of a newly learned sensorimotor transformation. J Neurophysiol 2010; 105:454-73. [PMID: 20980541 DOI: 10.1152/jn.00247.2010] [Citation(s) in RCA: 49] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
We examined how people organize redundant kinematic control variables (finger joint configurations) while learning to make goal-directed movements of a virtual object (a cursor) within a low-dimensional task space (a computer screen). Subjects participated in three experiments performed on separate days. Learning progressed rapidly on day 1, resulting in reduced target capture error and increased cursor trajectory linearity. On days 2 and 3, one group of subjects adapted to a rotation of the nominal map, imposed either stepwise or randomly over trials. Another group experienced a scaling distortion. We report two findings. First, adaptation rates and memory-dependent motor command updating depended on distortion type. Stepwise application and removal of the rotation induced a marked increase in finger motion variability but scaling did not, suggesting that the rotation initiated a more exhaustive search through the space of viable finger motions to resolve the target capture task than did scaling. Indeed, subjects formed new coordination patterns in compensating the rotation but relied on patterns established during baseline practice to compensate the scaling. These findings support the idea that the brain compensates direction and extent errors separately and in computationally distinct ways, but are inconsistent with the idea that once a task is learned, command updating is limited to those degrees of freedom contributing to performance (thereby minimizing energetic or similar costs of control). Second, we report that subjects who learned a scaling while moving to just one target generalized more narrowly across directions than those who learned a rotation. This contrasts with results from whole-arm reaching studies, where a learned scaling generalizes more broadly across direction than rotation. Based on inverse- and forward-dynamics analyses of reaching with the arm, we propose the difference in results derives from extensive exposure in reaching with familiar arm dynamics versus the novelty of the manual task.
Affiliation(s)
- Xiaolin Liu
- Department of Biomedical Engineering, Marquette University, Milwaukee, WI 53201-1881, USA
44
Bédard P, Wu M, Sanes JN. Brain activation related to combinations of gaze position, visual input, and goal-directed hand movements. Cereb Cortex 2010; 21:1273-82. [PMID: 20974688 DOI: 10.1093/cercor/bhq205] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Humans reach to and acquire objects by transforming visual targets into action commands. How the brain integrates goals specified in a visual framework into signals suitable for an action plan requires clarification, in particular whether visual input per se interacts with gaze position in formulating action plans. To further evaluate brain control of visual-motor integration, we assessed brain activation using functional magnetic resonance imaging. Humans performed goal-directed movements toward visible or remembered targets while fixating gaze left or right of center. We dissociated movement planning from performance using a delayed-response task and manipulated target visibility by keeping the target available throughout the delay or blanking it 500 ms after onset. We found strong effects of gaze orientation on brain activation during planning, and interactive effects of target visibility and gaze orientation on movement-related activation during performance, in parietal and premotor cortices (PM), cerebellum, and basal ganglia, with more activation for rightward gaze at a visible target and no gaze modulation for movements directed toward remembered targets. These results demonstrate effects of gaze position on premotor and movement-related processes and provide new information on how visual signals interact with gaze position in transforming visual inputs into motor goals.
Affiliation(s)
- Patrick Bédard
- Department of Neuroscience, Alpert Medical School of Brown University, Providence, RI 02912, USA
45
Mutha PK, Sainburg RL, Haaland KY. Coordination deficits in ideomotor apraxia during visually targeted reaching reflect impaired visuomotor transformations. Neuropsychologia 2010; 48:3855-67. [PMID: 20875439 DOI: 10.1016/j.neuropsychologia.2010.09.018] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2010] [Revised: 09/08/2010] [Accepted: 09/17/2010] [Indexed: 10/19/2022]
Abstract
Ideomotor limb apraxia, commonly defined as a disorder of skilled, purposeful movement, is characterized by spatiotemporal deficits during a variety of actions. These deficits have been attributed to damage to, or impaired retrieval of, stored representations of learned actions, especially object-related movements. However, such deficits might also arise from impaired visuomotor transformation mechanisms that operate in parallel to or downstream from mechanisms for storage of action representations. These transformation processes convert extrinsic visual information into intrinsic neural commands appropriate for the desired motion. These processes are a key part of the movement planning process and performance errors due to inadequate transformations have been shown to increase with the dynamic complexity of the movement. This hypothesis predicts that apraxic patients should show planning deficits when reaching to visual targets, especially when the coordination and/or dynamic requirements of the task increase. Three groups (18 healthy controls, 9 non-apraxic and 9 apraxic left hemisphere damaged patients) performed reaching movements to visual targets that varied in the degree of interjoint coordination required. Relative to the other two groups, apraxic patients made larger initial direction errors and showed higher variability during their movements, especially when reaching to the target with the highest intersegmental coordination requirement. These problems were associated with poor coordination of shoulder and elbow torques early in the movement, consistent with poor movement planning. These findings suggest that the requirement to transform extrinsic visual information into intrinsic motor commands impedes the ability to accurately plan a visually targeted movement in ideomotor limb apraxia.
46
Apker GA, Darling TK, Buneo CA. Interacting noise sources shape patterns of arm movement variability in three-dimensional space. J Neurophysiol 2010; 104:2654-66. [PMID: 20844108 DOI: 10.1152/jn.00590.2010] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Reaching movements are subject to noise in both the planning and execution phases of movement production. The interaction of these noise sources during natural movements is not well understood, despite its importance for understanding movement variability in neurologically intact and impaired individuals. Here we examined the interaction of planning and execution related noise during the production of unconstrained reaching movements. Subjects performed sequences of two movements to targets arranged in three vertical planes separated in depth. The starting position for each sequence was also varied in depth with the target plane; thus required movement sequences were largely contained within the vertical plane of the targets. Each final target in a sequence was approached from two different directions, and these movements were made with or without visual feedback of the moving hand. These combined aspects of the design allowed us to probe the interaction of execution and planning related noise with respect to reach endpoint variability. In agreement with previous studies, we found that reach endpoint distributions were highly anisotropic. The principal axes of movement variability were largely aligned with the depth axis, i.e., the axis along which visual planning related noise would be expected to dominate, and were not generally well aligned with the direction of the movement vector. Our results suggest that visual planning-related noise plays a dominant role in determining anisotropic patterns of endpoint variability in three-dimensional space, with execution noise adding to this variability in a movement direction-dependent manner.
Affiliation(s)
- Gregory A Apker
- School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ 85287-9709, USA
47
Beurze SM, Toni I, Pisella L, Medendorp WP. Reference frames for reach planning in human parietofrontal cortex. J Neurophysiol 2010; 104:1736-45. [PMID: 20660416 DOI: 10.1152/jn.01044.2009] [Citation(s) in RCA: 65] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
To plan a reaching movement, the brain must integrate information about the spatial goal of the reach with positional information about the selected hand. Recent monkey neurophysiological evidence suggests that a mixture of reference frames is involved in this process. Here, using 3T functional magnetic resonance imaging (fMRI), we tested the role of gaze-centered and body-centered reference frames in reach planning in the human brain. Fourteen human subjects planned and executed arm movements to memorized visual targets, while hand starting position and gaze direction were monitored and varied on a trial-by-trial basis. We further introduced a variable delay between target presentation and movement onset to dissociate cerebral preparatory activity from stimulus- and movement-related responses. By varying the position of the target and hand relative to the gaze line, we distinguished cerebral responses that increased for those movements requiring the integration of peripheral target and hand positions in a gaze-centered frame. Posterior parietal and dorsal premotor areas showed such gaze-centered integration effects. In regions closer to the primary motor cortex, body-centered hand position effects were found. These results suggest that, in humans, spatially contiguous neuronal populations operate in different frames of reference, supporting sensorimotor transformations according to gaze-centered or body-centered coordinates. The former appears suited for calculating a difference vector between target and hand location, whereas the latter may be related to the implementation of a joint-based motor command.
Affiliation(s)
- S M Beurze
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, P.O. Box 9104, NL-6500 HE, Nijmegen, The Netherlands
48
Scheidt RA, Lillis KP, Emerson SJ. Visual, motor and attentional influences on proprioceptive contributions to perception of hand path rectilinearity during reaching. Exp Brain Res 2010; 204:239-54. [PMID: 20532489 PMCID: PMC2935593 DOI: 10.1007/s00221-010-2308-1] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2010] [Accepted: 05/19/2010] [Indexed: 11/27/2022]
Abstract
We examined how proprioceptive contributions to perception of hand path straightness are influenced by visual, motor and attentional sources of performance variability during horizontal planar reaching. Subjects held the handle of a robot that constrained goal-directed movements of the hand to paths of controlled curvature. Subjects attempted to detect the presence of hand path curvature during both active (subject driven) and passive (robot driven) movements that either required active muscle force production or not. Subjects were less able to discriminate curved from straight paths when actively reaching for a target versus when the robot moved their hand through the same curved paths. This effect was especially evident during robot-driven movements requiring concurrent activation of lengthening but not shortening muscles. Subjects were less likely to report curvature and were more variable in reporting when movements appeared straight in a novel "visual channel" condition previously shown to block adaptive updating of motor commands in response to deviations from a straight-line hand path. Similarly, compromised performance was obtained when subjects simultaneously performed a distracting secondary task (key pressing with the contralateral hand). The effects compounded when these last two treatments were combined. It is concluded that environmental, intrinsic and attentional factors all impact the ability to detect deviations from a rectilinear hand path during goal-directed movement by decreasing proprioceptive contributions to limb state estimation. In contrast, response variability increased only in experimental conditions thought to impose additional attentional demands on the observer. Implications of these results for perception and other sensorimotor behaviors are discussed.
Affiliation(s)
- Robert A Scheidt
- Department of Biomedical Engineering, Marquette University, Olin Engineering Center, 303, P.O. Box 1881, Milwaukee, WI, 53201-1881, USA.
49
Chang SWC, Papadimitriou C, Snyder LH. Using a compound gain field to compute a reach plan. Neuron 2010; 64:744-55. [PMID: 20005829 DOI: 10.1016/j.neuron.2009.11.005] [Citation(s) in RCA: 73] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/18/2009] [Indexed: 10/20/2022]
Abstract
A gain field, the scaling of a tuned neuronal response by a postural signal, may help support neuronal computation. Here, we characterize eye and hand position gain fields in the parietal reach region (PRR). Eye and hand gain fields in individual PRR neurons are similar in magnitude but opposite in sign to one another. This systematic arrangement produces a compound gain field that is proportional to the distance between gaze location and initial hand position. As a result, the visual response to a target for an upcoming reach is scaled by the initial gaze-to-hand distance. Such a scaling is similar to what would be predicted in a neural network that mediates between eye- and hand-centered representations of target location. This systematic arrangement supports a role of PRR in visually guided reaching and provides strong evidence that gain fields are used for neural computations.
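The compound gain field admits a compact algebraic statement (a schematic with our own symbols, not the fitted model): writing a PRR response as a tuned target term scaled by linear postural gains,

$$r = f(T)\,\bigl(1 + g_E E + g_H H\bigr), \qquad g_H \approx -g_E \;\Rightarrow\; r \approx f(T)\,\bigl(1 + g_E\,(E - H)\bigr),$$

equal-and-opposite eye (E) and hand (H) gains collapse the postural modulation onto the gaze-to-hand distance E − H, which is precisely the quantity a network needs in order to convert between eye-centered and hand-centered representations of the target.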
Affiliation(s)
- Steve W C Chang
- Department of Anatomy and Neurobiology, Washington University in St. Louis School of Medicine, St. Louis, MO 63110, USA.
50
Shi Y, Buneo CA. Exploring the role of sensor noise in movement variability. Annu Int Conf IEEE Eng Med Biol Soc 2009; 2009:4970-3. [PMID: 19964654 DOI: 10.1109/iembs.2009.5334096] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Numerical simulations were used to explore the consequences of a spatially non-uniform sense of hand position on arm movements in the horizontal plane. Isotropic or anisotropic position errors were introduced into several starting hand positions and the resulting errors in movement direction were quantified. Two separate simulations were performed. In one simulation planned movement directions were defined relative to the starting position of the hand. Movement errors generated in this simulation resulted from a failure to compensate for differing initial conditions. In a second simulation planned movement directions were defined by the vector joining the sensed starting position with a fixed target position. Movement errors in this simulation resulted from both uncompensated changes in initial conditions as well as errors in movement planning. In both simulations, directional error variability generally increased for starting positions closer to the body. These effects were most pronounced for the anisotropic distribution of starting positions, particularly under conditions where movements were directed toward a fixed spatial location.
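The second simulation described here (planned direction defined by the vector from the sensed start to a fixed target) reduces to a short script; the noise covariances below are illustrative placeholders, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def direction_errors(start, target, cov_sense, n=10000):
    """Angular error between the direction planned from the sensed start
    position and the ideal direction from the true start position."""
    sensed = start + rng.multivariate_normal([0.0, 0.0], cov_sense, size=n)
    planned = target - sensed
    ideal = target - start
    ang = np.arctan2(planned[:, 1], planned[:, 0]) - np.arctan2(ideal[1], ideal[0])
    return np.degrees((ang + np.pi) % (2.0 * np.pi) - np.pi)

# Anisotropic sensing noise and a start position close to the target:
err = direction_errors(np.array([0.0, 0.2]), np.array([0.0, 0.5]),
                       cov_sense=np.diag([1e-4, 4e-4]))
print(err.std())  # angular variability grows as the planned vector shortens
```

The geometric intuition matches the reported result: the same positional uncertainty subtends a larger angle when the planned movement vector is short, i.e., for starting positions closer to the body.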
Affiliation(s)
- Ying Shi
- Harrington Department of Bioengineering, Arizona State University, Tempe, AZ 85287, USA