1
Crowe EM, Smeets JBJ, Brenner E. Online updating of obstacle positions when intercepting a virtual target. Exp Brain Res 2023. PMID: 37244877; DOI: 10.1007/s00221-023-06634-5.
Abstract
People rely on sensory information from the environment to guide their actions. Ongoing goal-directed arm movements are constantly adjusted on the basis of the latest estimates of both the target's and the hand's positions. Does the continuous guidance of ongoing arm movements also take into account the latest visual information about the positions of obstacles in the surroundings? To find out, we asked participants to slide their finger across a screen to intercept a laterally moving virtual target while passing through a gap created by two virtual circular obstacles. At a fixed time during each trial, the target suddenly jumped slightly to the side while continuing to move. In half the trials, the size of the gap changed at the same moment the target jumped. As expected, participants adjusted their movements in response to the target jump. Importantly, the magnitude of this response depended on the new size of the gap. If participants were told that the circles were irrelevant, changing the gap between them had no effect on the responses. This shows that obstacles' instantaneous positions can be taken into account when visually guiding goal-directed movements.
Affiliation(s)
- Emily M Crowe
- Department of Human Movement Sciences, Institute of Brain and Behavior Amsterdam, Amsterdam Movement Sciences, Vrije Universiteit Amsterdam, 1081 BT, Amsterdam, The Netherlands.
- School of Psychology, University of Nottingham, University Park, Nottingham, NG7 2RD, UK.
- Jeroen B J Smeets
- Department of Human Movement Sciences, Institute of Brain and Behavior Amsterdam, Amsterdam Movement Sciences, Vrije Universiteit Amsterdam, 1081 BT, Amsterdam, The Netherlands.
- Eli Brenner
- Department of Human Movement Sciences, Institute of Brain and Behavior Amsterdam, Amsterdam Movement Sciences, Vrije Universiteit Amsterdam, 1081 BT, Amsterdam, The Netherlands.
2
When two worlds collide: the influence of an obstacle in peripersonal space on multisensory encoding. Exp Brain Res 2021;239:1715-1726. PMID: 33779791; PMCID: PMC8277606; DOI: 10.1007/s00221-021-06072-1.
Abstract
Multisensory coding of the space surrounding our body, the peripersonal space, is crucial for motor control. Recently, it has been proposed that an important function of multisensory coding is that it allows anticipation of the tactile consequences of contact with a nearby object. Indeed, performing goal-directed actions (e.g., pointing and grasping) induces continuous visuotactile remapping as a function of online sensorimotor requirements. Here, we investigated whether visuotactile remapping can be induced by obstacles, i.e., objects that are not the target of the grasping movement. We used a cross-modal obstacle avoidance paradigm in which participants reached past an obstacle to grasp a second object. Participants indicated the location of tactile targets delivered to the hand during the grasping movement, while a visual cue was sometimes presented simultaneously on the to-be-avoided object. The tactile and visual stimulation was triggered when the reaching hand passed a position drawn randomly from a continuous set of predetermined locations (between 0 and 200 mm depth at 5 mm intervals). We observed differences in visuotactile interaction during obstacle avoidance that depended on the location of the stimulation trigger: visual interference was enhanced for tactile stimulation that occurred when the hand was near the to-be-avoided object. We show that obstacles, which are relevant for action but are not interacted with as the terminus of an action, automatically evoke the tactile consequences of interaction. This shows that visuotactile remapping extends to obstacle avoidance and that this process is flexible.
3
Wispinski NJ, Gallivan JP, Chapman CS. Models, movements, and minds: bridging the gap between decision making and action. Ann N Y Acad Sci 2020;1464:30-51. DOI: 10.1111/nyas.13973.
Affiliation(s)
- Jason P. Gallivan
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Department of Biomedical and Molecular Sciences, Queen's University, Kingston, Ontario, Canada
- Craig S. Chapman
- Faculty of Kinesiology, Sport, and Recreation, University of Alberta, Edmonton, Alberta, Canada
- Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada
4
Lu Z, Fiehler K. Spatial updating of allocentric landmark information in real-time and memory-guided reaching. Cortex 2020;125:203-214. PMID: 32006875; DOI: 10.1016/j.cortex.2019.12.010.
Abstract
The two-streams model of vision suggests that egocentric and allocentric reference frames are utilized by the dorsal and the ventral stream for real-time and memory-guided movements, respectively. Recent studies argue against such a strict functional distinction and suggest that real-time and memory-guided movements recruit the same spatial maps. In this study we focus on allocentric spatial coding and updating of targets using landmark information in real-time and memory-guided reaching. We presented participants with a naturalistic scene consisting of six objects on a table that served as potential reach targets. Participants were informed about the target object after scene encoding and were prompted by a go cue to reach to its position. After target identification, a brief air puff was applied to the participant's right eye, inducing an eye blink. During the blink the target object disappeared from the scene, and in half of the trials the remaining objects, which functioned as landmarks, were all shifted horizontally in the same direction. We found that landmark shifts systematically influenced participants' reaching endpoints, irrespective of whether the movements were controlled online based on available target information (real-time movements) or based on remembered target information (memory-guided movements). Overall, the effect of the landmark shift was stronger for memory-guided than for real-time reaching. Our findings suggest that humans can encode and update reach targets in an allocentric reference frame for both real-time and memory-guided movements, with stronger allocentric coding when the movement is based on memory.
Affiliation(s)
- Zijian Lu
- Department of Experimental Psychology, Justus-Liebig-University, Giessen, Germany.
- Katja Fiehler
- Department of Experimental Psychology, Justus-Liebig-University, Giessen, Germany; Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus-Liebig University, Giessen, Germany.
5
Baldauf D. Visual selection of the future reach path in obstacle avoidance. J Cogn Neurosci 2018;30:1846-1857. DOI: 10.1162/jocn_a_01310.
Abstract
In two EEG experiments, we studied the role of visual attention during the preparation of manual movements around an obstacle. Participants performed rapid hand movements to a goal position, avoiding a central obstacle on either the left or the right side depending on the pitch of the acoustic go signal. We used a dot-probe paradigm to analyze the deployment of spatial attention in the visual field during motor preparation. Briefly after the go signal, but before the hand movement actually started, a visual transient was flashed either on the planned pathway of the hand (congruent trials) or on the opposite, movement-irrelevant side (incongruent trials). The P1/N1 components evoked by the onset of the dot probe were enhanced in congruent trials, in which the visual transient was presented on the planned path of the hand. The results indicate that, during movement preparation, attention is allocated selectively to the trajectory the hand is going to take around the obstacle.
6
Shah AK, Patton JL. Dissociating two sources of variability using a safety-margin model. IEEE Int Conf Rehabil Robot 2017;2017:152-157. PMID: 28813810; DOI: 10.1109/icorr.2017.8009238.
Abstract
Neurological trauma can have a devastating effect on activities of daily living. One of its consequences is an increased amount of variability in the motor system, which can challenge individuals to stay within safe and stable regions of operation. There are multiple sources of movement variability; two of these are neuromotor noise and action-tolerance variability. Action-tolerance variability can be reshaped through experience, whereas a certain amount of neuromotor noise cannot be altered and imposes limits on how much variability can be reshaped; the two are often conflated when discussing motor variability. We attempted to disambiguate them using an adaptive model that produces distinct "signatures" of neuromotor noise and action-tolerance variability within a task, and compared the model with experimental data from stroke survivors and healthy participants. Not all stroke survivors could adapt to the task, as predicted for those with greater neuromotor noise. This model could inform us of the potential to influence movement distributions in stroke survivors and other individuals who have had a neurological injury. Additionally, it could be used to design training environments specifically tailored to the needs of the individual. The technique may also help disambiguate the type of brain injury suffered by stroke survivors.
7
Ladouce S, Donaldson DI, Dudchenko PA, Ietswaart M. Understanding minds in real-world environments: toward a mobile cognition approach. Front Hum Neurosci 2017;10:694. PMID: 28127283; PMCID: PMC5226959; DOI: 10.3389/fnhum.2016.00694.
Abstract
There is a growing body of evidence that important aspects of human cognition have been marginalized, or overlooked, by traditional cognitive science. In particular, the use of laboratory-based experiments in which stimuli are artificial and response options are fixed inevitably results in findings that are less ecologically valid in relation to real-world behavior. In the present review we highlight the opportunities provided by a range of new mobile technologies that allow traditionally lab-bound measurements to be collected during natural interactions with the world. We begin by outlining the theoretical support that mobile approaches receive from the development of embodied accounts of cognition, and we review the widening evidence that illustrates the importance of examining cognitive processes in context. In practice, the development of mobile approaches brings fresh challenges and will undoubtedly require innovation in paradigm design and analysis. If successful, however, the mobile cognition approach will offer novel insights in a range of areas, including the cognitive processes underlying navigation through space and the role of attention during natural behavior. We argue that the development of real-world mobile cognition offers both increased ecological validity and the opportunity to examine the interactions between perception, cognition and action, rather than examining each in isolation.
8
Stone KD, Gonzalez CLR. The contributions of vision and haptics to reaching and grasping. Front Psychol 2015;6:1403. PMID: 26441777; PMCID: PMC4584943; DOI: 10.3389/fpsyg.2015.01403.
Abstract
This review aims to provide a comprehensive outlook on the sensory (visual and haptic) contributions to reaching and grasping. The focus is on studies of developing children, normal and neuropsychological populations, and sensory-deprived individuals. Studies have suggested a right-hand/left-hemisphere specialization for visually guided grasping and a left-hand/right-hemisphere specialization for haptically guided object recognition. This raises the interesting possibility that when vision is not available and grasping relies heavily on the haptic system, there is an advantage to using the left hand. We review the evidence for this possibility and dissect the unique contributions of the visual and haptic systems to grasping. We ultimately discuss how the integration of these two sensory modalities shapes hand preference.
Affiliation(s)
- Kayla D Stone
- The Brain in Action Laboratory, Department of Kinesiology, University of Lethbridge, Lethbridge, AB, Canada
- Claudia L R Gonzalez
- The Brain in Action Laboratory, Department of Kinesiology, University of Lethbridge, Lethbridge, AB, Canada
9
Cluff T, Crevecoeur F, Scott SH. A perspective on multisensory integration and rapid perturbation responses. Vision Res 2015;110:215-22. DOI: 10.1016/j.visres.2014.06.011.
10
Cisek P, Pastor-Bernier A. On the challenges and mechanisms of embodied decisions. Philos Trans R Soc Lond B Biol Sci 2014;369:20130479. PMID: 25267821; PMCID: PMC4186232; DOI: 10.1098/rstb.2013.0479.
Abstract
Neurophysiological studies of decision-making have focused primarily on elucidating the mechanisms of classic economic decisions, in which the relevant variables are the values of expected outcomes and action is simply the means of reporting the selected choice. By contrast, here we focus on the particular challenges of embodied decision-making faced by animals interacting with their environment in real time. In such scenarios, the choices themselves, as well as their relative costs and benefits, are defined by the momentary geometry of the immediate environment and change continuously during ongoing activity. To deal with the demands of embodied activity, animals require an architecture in which the sensorimotor specification of potential actions, their valuation, selection and even execution can all take place in parallel. Here, we review behavioural and neurophysiological data supporting a proposed brain architecture for dealing with such scenarios, which, we argue, set the evolutionary foundation for the organization of the mammalian brain.
Affiliation(s)
- Paul Cisek
- Groupe de Recherche sur le Système Nerveux Central (GRSNC), Département de Neuroscience, Université de Montréal, C.P. 6128 Succursale Centre-ville, Montréal, Québec, Canada H3C 3J7
- Alexandre Pastor-Bernier
- Department of Physiology, Development and Neuroscience (PDN), University of Cambridge, Cambridge, UK
11
Aivar MP, Brenner E, Smeets JBJ. Hitting a target is fundamentally different from avoiding obstacles. Vision Res 2014;110:166-78. PMID: 25454701; DOI: 10.1016/j.visres.2014.10.009.
Abstract
To successfully move our hand to a target, it is important to consider not only the target of our movement but also other objects in the environment that may act as obstacles. We previously found that the time needed to respond to a change in position was considerably longer for a displacement of an obstacle than for a displacement of the target (Aivar, Brenner, & Smeets, 2008, Experimental Brain Research, 190, 251-264). In that study, the movement constraints imposed by the obstacles differed from those imposed by the target. To examine whether the latency really differs between targets and obstacles, irrespective of any constraints they impose, we modified the design of the previous experiment to make sure that the constraints were matched. In each trial, two aligned 'objects' of the same size were presented at different distances to the left of the initial position of the hand. Each of these objects could be either a target or a gap (an opening between two obstacles). Participants were instructed to pass through both objects. All possible combinations of the two objects were tested: gap-target, target-gap, gap-gap, and target-target. On some trials, one of the objects changed position after movement onset. Participants systematically responded faster to the displacement of a target than to the displacement of a gap at the same location. We conclude that targets are prioritized over obstacles in movement control.
Affiliation(s)
- M Pilar Aivar
- Facultad de Psicología, Universidad Autónoma de Madrid, Campus de Cantoblanco, s/n, 28049 Madrid, Spain.
- Eli Brenner
- Faculty of Human Movement Sciences, VU University Amsterdam, Van der Boechorststraat 9, 1081 BT Amsterdam, The Netherlands.
- Jeroen B J Smeets
- Faculty of Human Movement Sciences, VU University Amsterdam, Van der Boechorststraat 9, 1081 BT Amsterdam, The Netherlands.
12
Fiehler K, Wolf C, Klinghammer M, Blohm G. Integration of egocentric and allocentric information during memory-guided reaching to images of a natural environment. Front Hum Neurosci 2014;8:636. PMID: 25202252; PMCID: PMC4141549; DOI: 10.3389/fnhum.2014.00636.
Abstract
When interacting with our environment we generally make use of egocentric and allocentric object information, coding object positions relative to the observer or relative to the environment, respectively. Bayesian theories suggest that the brain integrates both sources of information optimally for perception and action. However, experimental evidence for egocentric and allocentric integration is sparse and has only been obtained with abstract stimuli lacking ecological relevance. Here, we investigated the use of egocentric and allocentric information during memory-guided reaching to images of naturalistic scenes. Participants encoded a breakfast scene containing six objects on a table (local objects) and three objects in the environment (global objects). After a 2 s delay, a visual test scene reappeared for 1 s in which one local object was missing (the target) and, of the remaining objects, one, three, or five local objects or one of the global objects were shifted to the left or to the right. The offset of the test scene prompted participants to reach to the target as precisely as possible. Only local objects served as potential reach targets and were thus task-relevant. When shifting objects, we predicted accurate reaching if participants used only egocentric coding of object position, and systematic shifts of reach endpoints if allocentric information was used for movement planning. We found that reaching movements were largely affected by allocentric shifts, with endpoint errors in the direction of the object shifts that increased with the number of local objects shifted. No effect occurred when one local or one global object was shifted. Our findings suggest that allocentric cues are indeed used by the brain for memory-guided reaching towards targets in naturalistic visual scenes. Moreover, the integration of egocentric and allocentric object information seems to depend on the extent of the changes in the scene.
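The Bayesian integration invoked here is commonly formalized as reliability-weighted cue combination, in which each cue is weighted by its inverse variance. The sketch below illustrates only that principle; the function name and all numbers are illustrative assumptions, not parameters from the study.

```python
def integrate_cues(mu_ego, var_ego, mu_allo, var_allo):
    """Reliability-weighted (Bayesian) fusion of two Gaussian position cues.

    Each cue is an independent Gaussian estimate of the target's position;
    the fused estimate weights each cue by its inverse variance, so the
    less noisy cue dominates.
    """
    w_ego = (1.0 / var_ego) / (1.0 / var_ego + 1.0 / var_allo)
    w_allo = 1.0 - w_ego
    mu = w_ego * mu_ego + w_allo * mu_allo
    var = 1.0 / (1.0 / var_ego + 1.0 / var_allo)  # fused variance is reduced
    return mu, var

# Hypothetical numbers: a 30 mm allocentric (landmark) shift pulls the fused
# estimate, and hence the reach endpoint, partway toward the shifted landmarks.
mu, var = integrate_cues(mu_ego=0.0, var_ego=100.0, mu_allo=30.0, var_allo=300.0)
```

On this account, the growing endpoint error with the number of shifted local objects would correspond to a more reliable (lower-variance) allocentric cue receiving a larger weight.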
Affiliation(s)
- Katja Fiehler
- Department of Experimental Psychology, Justus-Liebig-University, Giessen, Germany
- Christian Wolf
- Department of Experimental Psychology, Justus-Liebig-University, Giessen, Germany
- Mathias Klinghammer
- Department of Experimental Psychology, Justus-Liebig-University, Giessen, Germany
- Gunnar Blohm
- Canadian Action and Perception Network (CAPnet), Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
13
Sarlegna FR, Mutha PK. The influence of visual target information on the online control of movements. Vision Res 2014;110:144-54. PMID: 25038472; DOI: 10.1016/j.visres.2014.07.001.
Abstract
The continuously changing properties of our environment require constant monitoring of our actions and updating of our motor commands based on the task goals. Such updating relies upon our predictions about the sensory consequences of our movement commands, as well as on sensory feedback received during movement execution. Here we focus on how visual information about target location is used to update and guide ongoing actions so that the task goal is successfully achieved. We review several studies that have manipulated vision of the target in a variety of ways, ranging from complete removal of visual target information to changes in visual target properties after movement onset, to examine how such changes are accounted for during motor execution. We also examine the specific role of a critical neural structure, the parietal cortex, and argue that a fundamental challenge for the future is to understand how visual information about target location is integrated with other streams of information during movement execution to estimate the state of the body and the environment and ensure optimal motor performance.
Affiliation(s)
- Pratik K Mutha
- Indian Institute of Technology Gandhinagar, Ahmedabad 382424, Gujarat, India
14
Abstract
In this review, we describe the current models of dorsal and ventral streams in vision, audition and touch. Available theories build on the model of Milner and Goodale, which was developed to explain how human actions can be efficiently carried out using visual information. Since then, similar concepts have been applied to other sensory modalities as well. We propose that advances in the knowledge of brain functioning can be achieved through models that explain action and perception patterns independently of sensory modality.
Affiliation(s)
- Anna Sedda
- Department of Humanistic Studies - Psychology Section, University of Pavia, Pavia 27100, Italy.
15
Huber M, Kupferberg A, Lenz C, Knoll A, Brandt T, Glasauer S. Spatiotemporal movement planning and rapid adaptation for manual interaction. PLoS One 2013;8:e64982. PMID: 23724112; PMCID: PMC3665711; DOI: 10.1371/journal.pone.0064982.
Abstract
Many everyday tasks require two or more individuals to coordinate their actions to increase efficiency. Such an increase in efficiency can often be observed after only a few trials. Previous work suggests that this behavioral adaptation can be explained within a probabilistic framework that integrates sensory input and prior experience. Even though higher cognitive abilities such as intention recognition have been described as probabilistic estimation depending on an internal model of the other agent, it is not clear whether much simpler daily interaction is consistent with a probabilistic framework. Here, we investigate whether the mechanisms underlying efficient coordination during manual interactions can be understood as probabilistic optimization. For this purpose we studied, in several experiments, a simple manual handover task, concentrating on the action of the receiver. We found that the duration until the receiver reacts to the handover decreases over trials but strongly depends on the position of the handover. We then replaced the human deliverer with different types of robots to further investigate the influence of the delivering movement on the reaction of the receiver. Durations were found to depend on the movement kinematics and the robot's joint configuration. Modeling of the task was based on the assumption that the receiver's decision to act rests on the accumulated evidence for a specific handover position. The evidence for this handover position is collected by observing the hand movement of the deliverer over time and, if appropriate, by integrating this sensory likelihood with a prior expectation that is updated over trials. The close match between model simulations and experimental results shows that the efficiency of handover coordination can be explained by an adaptive probabilistic fusion of a priori expectation and online estimation.
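The accumulation scheme described in this abstract can be sketched as a simple drift-to-threshold model; the noise level, drift, and threshold below are illustrative assumptions, not parameters from the paper.

```python
import random

def steps_to_react(prior_logodds, drift, threshold, noise_sd=0.5,
                   max_steps=1000, seed=0):
    """Accumulate noisy evidence for a handover position until a threshold.

    Evidence starts at the prior log-odds (which, in the full model, is
    strengthened over repeated trials) and grows by a noisy log-likelihood
    increment per observed sample of the deliverer's hand movement.
    Returns the number of samples needed before the receiver commits to act.
    """
    rng = random.Random(seed)
    evidence = prior_logodds
    for step in range(1, max_steps + 1):
        evidence += drift + rng.gauss(0.0, noise_sd)
        if evidence >= threshold:
            return step
    return max_steps

# With the same observation stream, a stronger prior (built up over repeated
# handovers at the same position) lets the receiver commit in fewer samples.
weak_prior = steps_to_react(prior_logodds=0.0, drift=0.2, threshold=3.0)
strong_prior = steps_to_react(prior_logodds=2.0, drift=0.2, threshold=3.0)
assert strong_prior <= weak_prior
```

This reproduces the qualitative pattern reported above: reaction durations shrink over trials as the prior sharpens, while still depending on the observed movement.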
Affiliation(s)
- Markus Huber
- Center for Sensorimotor Research, Institute for Clinical Neuroscience, Ludwig-Maximilian University Munich, Munich, Germany.
16
Pardhan S, Gonzalez-Alvarez C, Subramanian A, Chung STL. How do flanking objects affect reaching and grasping behavior in participants with macular disorders? Invest Ophthalmol Vis Sci 2012;53:6687-94. PMID: 22918639; PMCID: PMC4608677; DOI: 10.1167/iovs.12-9821.
Abstract
PURPOSE To investigate how objects (flankers) placed on either side of a target affect reaching and grasping behavior in subjects who are visually impaired (VI) due to macular disorders, compared with age-matched normally sighted subjects. METHODS Subjects reached out to grasp a cylindrical target presented on its own or with two identical objects (flankers) placed either half or one target diameter away on each side of the target. A motion analysis system (Vicon 460) recorded and reconstructed the three-dimensional (3D) hand and finger movements. Kinematic data for the transport and grasping components were measured. RESULTS In subjects with VI, crowding affected the overall movement duration, the time after maximum velocity, and the maximum grip aperture. The effect was largest when the flankers were placed close to the target (high-level crowding), with a smaller effect for flankers placed farther away (medium-level crowding). Compared with normally sighted subjects, subjects with VI generally took longer to initiate the hand movement and to complete the movement. Time after maximum velocity and time after maximum grip aperture were also longer in subjects with VI. No interaction effects were found for any of the indices across the different levels of crowding in the two visual groups. CONCLUSIONS Reaching and grasping behavior is compromised in subjects with VI due to macular disorders, and crowding affected performance for both normally sighted subjects and those with VI. Flankers placed half an object diameter away caused greater deterioration than those placed farther away.
Affiliation(s)
- Shahina Pardhan
- Vision and Eye Research Unit, Postgraduate Medical Institute, Anglia Ruskin University, Cambridge, United Kingdom.
17
Mental blocks: fMRI reveals top-down modulation of early visual cortex when obstacles interfere with grasp planning. Neuropsychologia 2011;49:1703-17. PMID: 21376065; DOI: 10.1016/j.neuropsychologia.2011.02.048.
Abstract
When grasping an object, the fingers, hand and arm rarely collide with other, non-target objects in the workspace. Kinematic studies of neurological patients (Schindler et al., 2004) and healthy participants (Chapman and Goodale, 2010a) suggest that the location of potential obstacles and the degree of interference they pose are encoded by the dorsal visual stream during action planning. Here, we used a slow event-related paradigm in functional magnetic resonance imaging (fMRI) to examine the neural encoding of obstacles in normal participants. Fifteen right-handed participants grasped a square target object with a thumb-front or thumb-side wrist posture with (1) no obstacle present, (2) an obstacle behind the target object (interfering with the thumb-front grasp), or (3) an obstacle beside the target object (interfering with the thumb-side grasp). Within a specified network of areas involved in planning, a group voxelwise analysis revealed that one area in the left posterior intraparietal sulcus (pIPS) and one in early visual cortex were modulated by the degree of obstacle interference, and that this modulation occurred prior to movement execution. Given previous reports of a functional link between the IPS and early visual cortex, we suggest that the increasing activity in the IPS with obstacle interference provides the top-down signal to suppress the corresponding obstacle coding in early visual areas, where we observed that activity decreased with interference. This is the first concrete evidence that the planning of a grasping movement can modulate early visual cortex, and it provides a unifying framework for understanding the dual role played by the IPS in motor planning and attentional orienting.