1
Shin J, Chung Y. The effects of treadmill training with visual feedback and rhythmic auditory cue on gait and balance in chronic stroke patients: A randomized controlled trial. NeuroRehabilitation 2022; 51:443-453. DOI: 10.3233/nre-220099
Abstract
BACKGROUND: Many stroke patients show reduced walking ability, characterized by asymmetric walking patterns. For such patients, restoration of walking symmetry is important.
OBJECTIVE: This study investigates the effect of treadmill training with visual feedback and rhythmic auditory cue (VF+RAC) for walking symmetry on spatiotemporal gait parameters and balance abilities.
METHODS: Thirty-two patients with chronic stroke participated in this study and were randomized to either the VF+RAC (n = 16) or the Control (n = 16) group. The VF+RAC group received treadmill training with VF and RAC, while the Control group underwent treadmill training without any visual or auditory stimulation. Both groups trained three times per week for eight weeks. After eight weeks of training, spatiotemporal gait parameters, the Timed Up and Go test, and the Berg Balance Scale were measured.
RESULTS: Compared with the Control group, the VF+RAC group significantly improved balance and spatiotemporal parameters, except for non-paretic single limb support.
CONCLUSIONS: This study demonstrated that treadmill training with VF+RAC significantly improved spatiotemporal gait symmetry as well as other gait parameters, and enhanced balance abilities in stroke patients. Treadmill training with VF+RAC could therefore be a beneficial intervention in clinical settings for stroke patients who need improvement in their gait and balance abilities.
Affiliation(s)
- Jin Shin
- Department of Physical Medicine and Rehabilitation, Gyeong-in Rehabilitation Center Hospital, Incheon, Republic of Korea
- Yijung Chung
- Department of Physical Therapy, College of Health and Welfare, Sahmyook University, Seoul, Republic of Korea
2
Goettker A, Fiehler K, Voudouris D. Somatosensory target information is used for reaching but not for saccadic eye movements. J Neurophysiol 2020; 124:1092-1102. DOI: 10.1152/jn.00258.2020
Abstract
A systematic investigation of contributions of different somatosensory modalities (proprioception, kinesthesia, tactile) for goal-directed movements is missing. Here we demonstrate that while eye movements are not affected by different types of somatosensory information, reach precision improves when two different types of information are available. Moreover, reach accuracy and gaze precision to unseen somatosensory targets improve when performing coordinated eye-hand movements, suggesting bidirectional contributions of efferent information in reach and eye movement control.
Affiliation(s)
- Alexander Goettker
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Katja Fiehler
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
- Dimitris Voudouris
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
3
Wilke C, Synofzik M, Lindner A. Sensorimotor recalibration depends on attribution of sensory prediction errors to internal causes. PLoS One 2013; 8:e54925. PMID: 23359818; PMCID: PMC3554678; DOI: 10.1371/journal.pone.0054925
Abstract
Sensorimotor learning critically depends on error signals. Learning usually tries to minimise these error signals to guarantee optimal performance. Errors can, however, have both internal causes, resulting from one’s sensorimotor system, and external causes, resulting from external disturbances. Does learning take into account the perceived cause of error information? Here, we investigated the recalibration of internal predictions about the sensory consequences of one’s actions. Since these predictions underlie the distinction of self- and externally produced sensory events, we assumed them to be recalibrated only by prediction errors attributed to internal causes. When subjects were confronted with experimentally induced visual prediction errors about their pointing movements in virtual reality, they recalibrated the predicted visual consequences of their movements. Recalibration was not proportional to the externally generated prediction error, but correlated with the error component which subjects attributed to internal causes. We also revealed adaptation in subjects’ motor performance which reflected their recalibrated sensory predictions. Thus, causal attribution of error information is essential for sensorimotor learning.
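A compact way to state the central relation (the notation here is ours, not the authors'): if $e$ is the experimentally induced visual prediction error and $\alpha \in [0,1]$ is the fraction of that error a participant attributes to internal causes, the reported pattern is that recalibration tracks the attributed component rather than the full error,

\[ \Delta_{\text{prediction}} \;\propto\; \alpha\, e \qquad \text{rather than} \qquad \Delta_{\text{prediction}} \;\propto\; e . \]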
Affiliation(s)
- Carlo Wilke
- Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Matthis Synofzik
- Department of Neurodegeneration, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- German Centre for Neurodegenerative Diseases, University of Tübingen, Tübingen, Germany
- Axel Lindner
- Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
4
Byrne PA, Henriques DYP. When more is less: increasing allocentric visual information can switch visual-proprioceptive combination from an optimal to sub-optimal process. Neuropsychologia 2012; 51:26-37. PMID: 23142707; DOI: 10.1016/j.neuropsychologia.2012.10.008
Abstract
When reaching for an object in the environment, the brain often has access to multiple independent estimates of that object's location. For example, if someone places their coffee cup on a table, then later they know where it is because they see it, but also because they remember how their reaching limb was oriented when they placed the cup. Intuitively, one would expect more accurate reaches if either of these estimates were improved (e.g., if a light were turned on so the cup were more visible). It is now well-established that the brain tends to combine two or more estimates about the same stimulus as a maximum-likelihood estimator (MLE), which is the best thing to do when estimates are unbiased. Even in the presence of small biases, relying on the MLE rule is still often better than choosing a single estimate. For this work, we designed a reaching task in which human subjects could integrate proprioceptive and allocentric (landmark-relative) visual information to reach for a remembered target. Even though both of these modalities contain some level of bias, we demonstrate via simulation that our subjects should use an MLE rule in preference to relying on one modality or the other in isolation. Furthermore, we show that when visual information is poor, subjects do, indeed, combine information in this way. However, when we improve the quality of visual information, subjects counter-intuitively switch to a sub-optimal strategy that occasionally includes reliance on a single modality.
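For readers unfamiliar with the MLE rule invoked here, a minimal sketch of inverse-variance cue combination (the function name and the numbers are illustrative, not taken from the study):

```python
def mle_combine(mu_vis, var_vis, mu_prop, var_prop):
    """Combine a visual and a proprioceptive estimate of target position.

    Each cue is weighted by its reliability (1/variance); the combined
    variance is never larger than the smaller single-cue variance.
    """
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_prop)
    mu_comb = w_vis * mu_vis + (1.0 - w_vis) * mu_prop
    var_comb = (var_vis * var_prop) / (var_vis + var_prop)
    return mu_comb, var_comb

# Hypothetical example: vision puts the cup at 10.0 cm (variance 1.0 cm^2),
# proprioception at 12.0 cm (variance 4.0 cm^2).
mu, var = mle_combine(10.0, 1.0, 12.0, 4.0)
print(mu, var)  # 10.4 0.8 -> pulled toward the more reliable cue, with reduced variance
```

Relying on either cue alone would leave the variance at 1.0 or 4.0, which is why this rule serves as the benchmark for "optimal" combination in this literature.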
Affiliation(s)
- Patrick A Byrne
- Centre for Vision Research, Science, York University, 4700 Keele Street, Toronto, ON, Canada M3J 1P3.
5
Substituting auditory for visual feedback to adapt to altered dynamic and kinematic environments during reaching. Exp Brain Res 2012; 221:33-41. PMID: 22733310; DOI: 10.1007/s00221-012-3144-2
Abstract
The arm movement control system often relies on visual feedback to drive motor adaptation and to help specify desired trajectories. Here we studied whether kinematic errors that were indicated with auditory feedback could be used to control reaching in a way comparable with when vision was available. We randomized twenty healthy adult subjects to receive either visual or auditory feedback of their movement trajectory error with respect to a line as they performed timed reaching movements while holding a robotic joystick. We delivered auditory feedback using spatialized pink noise, the loudness and location of which reflected kinematic error. After a baseline period, we unexpectedly perturbed the reaching trajectories using a perpendicular viscous force field applied by the joystick. Subjects adapted to the force field as well with auditory feedback as they did with visual feedback and exhibited comparable after effects when the force field was removed. When we changed the reference trajectory to be a trapezoid instead of a line, subjects shifted their trajectories by about the same amount with either auditory or visual feedback of error. These results indicate that arm motor networks can readily incorporate auditory feedback to alter internal models and desired trajectories, a finding with implications for the organization of the arm motor control adaptation system as well as sensory substitution and motor training technologies.
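The abstract does not spell out the sonification itself, but the idea of turning a signed kinematic error into spatialized pink noise can be sketched roughly as follows; the function names, mapping constants, and the FFT-based pink-noise generator are illustrative assumptions, not details of the apparatus used in the study:

```python
import numpy as np

def pink_noise(n_samples, seed=None):
    """Generate pink (1/f) noise by shaping white noise in the frequency domain."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(rng.standard_normal(n_samples))
    freqs = np.fft.rfftfreq(n_samples)
    freqs[0] = freqs[1]                 # avoid dividing by zero at DC
    spectrum /= np.sqrt(freqs)          # 1/f power = 1/sqrt(f) amplitude
    pink = np.fft.irfft(spectrum, n_samples)
    return pink / np.max(np.abs(pink))

def error_to_stereo(error_m, max_error_m=0.05, n_samples=4410):
    """Map a signed lateral trajectory error (metres) to a stereo noise burst:
    loudness grows with the error magnitude, left/right balance with its sign.
    The constants are illustrative, not taken from the study."""
    noise = pink_noise(n_samples)
    gain = min(abs(error_m) / max_error_m, 1.0)                            # 0 on target .. 1 at max error
    pan = 0.5 * (1.0 + float(np.clip(error_m / max_error_m, -1.0, 1.0)))   # 0 = left, 1 = right
    return np.stack([noise * gain * (1.0 - pan), noise * gain * pan], axis=1)

buffer = error_to_stereo(0.02)   # a 2 cm rightward error: louder, panned to the right channel
```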
6
Jones SAH, Byrne PA, Fiehler K, Henriques DYP. Reach endpoint errors do not vary with movement path of the proprioceptive target. J Neurophysiol 2012; 107:3316-3324. DOI: 10.1152/jn.00901.2011
Abstract
Previous research has shown that reach endpoints vary with the starting position of the reaching hand and the location of the reach target in space. We examined the effect of movement direction of a proprioceptive target-hand, immediately preceding a reach, on reach endpoints to that target. Participants reached to visual, proprioceptive (left target-hand), or visual-proprioceptive targets (left target-hand illuminated for 1 s prior to reach onset) with their right hand. Six sites served as starting and final target locations (35 target movement directions in total). Reach endpoints do not vary with the movement direction of the proprioceptive target, but instead appear to be anchored to some other reference (e.g., body). We also compared reach endpoints across the single and dual modality conditions. Overall, the pattern of reaches for visual-proprioceptive targets resembled those for proprioceptive targets, while reach precision resembled those for the visual targets. We did not, however, find evidence for integration of vision and proprioception based on a maximum-likelihood estimator in these tasks.
Affiliation(s)
- Stephanie A. H. Jones
- The School of Health and Human Performance, Dalhousie University, Halifax, Nova Scotia
- Patrick A. Byrne
- School of Kinesiology and Health Science, York University, Toronto, Canada
- Katja Fiehler
- Department of Psychology, Justus-Liebig University, Giessen, Germany
7
Squeri V, Sciutti A, Gori M, Masia L, Sandini G, Konczak J. Two hands, one perception: how bimanual haptic information is combined by the brain. J Neurophysiol 2012; 107:544-550. DOI: 10.1152/jn.00756.2010
Abstract
Humans routinely use both of their hands to gather information about shape and texture of objects. Yet, the mechanisms of how the brain combines haptic information from the two hands to achieve a unified percept are unclear. This study systematically measured the haptic precision of humans exploring a virtual curved object contour with one or both hands to understand if the brain integrates haptic information from the two hemispheres. Bayesian perception theory predicts that redundant information from both hands should improve haptic estimates. Thus exploring an object with two hands should yield haptic precision that is superior to unimanual exploration. A bimanual robotic manipulandum passively moved the hands of 20 blindfolded, right-handed adult participants along virtual curved contours. Subjects indicated which contour was more “curved” (forced choice) between two stimuli of different curvature. Contours were explored uni- or bimanually at two orientations (toward or away from the body midline). Respective psychophysical discrimination thresholds were computed. First, subjects showed a tendency for one hand to be more sensitive than the other with most of the subjects exhibiting a left-hand bias. Second, bimanual thresholds were mostly within the range of the corresponding unimanual thresholds and were not predicted by a maximum-likelihood estimation (MLE) model. Third, bimanual curvature perception tended to be biased toward the motorically dominant hand, not toward the haptically more sensitive left hand. Two-handed exploration did not necessarily improve haptic sensitivity. We found no evidence that haptic information from both hands is integrated using a MLE mechanism. Rather, results are indicative of a process of “sensory selection”, where information from the dominant right hand is used, although the left, nondominant hand may yield more precise haptic estimates.
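The MLE benchmark the bimanual thresholds were tested against has a standard form; writing $\sigma_L$ and $\sigma_R$ for the unimanual discrimination thresholds of the left and right hand (our notation, not the paper's), integration predicts

\[ \sigma_{\text{bimanual}}^{2} \;=\; \frac{\sigma_L^{2}\,\sigma_R^{2}}{\sigma_L^{2} + \sigma_R^{2}}, \]

i.e. a two-handed threshold at or below the better single-hand threshold. Bimanual thresholds that instead fall within the unimanual range, as reported here, are what argues against mandatory integration.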
Affiliation(s)
- Valentina Squeri
- Department of Robotics, Brain and Cognitive Sciences, Istituto Italiano di Tecnologia, Genova, Italy
- Alessandra Sciutti
- Department of Robotics, Brain and Cognitive Sciences, Istituto Italiano di Tecnologia, Genova, Italy
- Monica Gori
- Department of Robotics, Brain and Cognitive Sciences, Istituto Italiano di Tecnologia, Genova, Italy
- Lorenzo Masia
- Department of Robotics, Brain and Cognitive Sciences, Istituto Italiano di Tecnologia, Genova, Italy
- Giulio Sandini
- Department of Robotics, Brain and Cognitive Sciences, Istituto Italiano di Tecnologia, Genova, Italy
- Juergen Konczak
- Department of Robotics, Brain and Cognitive Sciences, Istituto Italiano di Tecnologia, Genova, Italy
- Human Sensorimotor Control Laboratory, University of Minnesota, Minneapolis, Minnesota