1
Guo LL, Niemeier M. Phase-Dependent Visual and Sensorimotor Integration of Features for Grasp Computations before and after Effector Specification. J Neurosci 2024; 44:e2208232024. [PMID: 39019614 PMCID: PMC11326866 DOI: 10.1523/jneurosci.2208-23.2024]
Abstract
The simple act of viewing and grasping an object involves complex sensorimotor control mechanisms that have been shown to vary as a function of multiple object and task features, such as object size, shape, weight, and wrist orientation. However, these features have mostly been studied in isolation. In contrast, given the nonlinearity of motor control, its computations require multiple features to be incorporated concurrently. Therefore, the present study tested the hypothesis that grasp computations integrate multiple task features superadditively, particularly when these features are relevant for the same action phase. We asked male and female human participants to reach to grasp objects of different shapes and sizes with different wrist orientations. Also, we delayed movement onset using auditory signals that specified which effector to use. Using electroencephalography and representational dissimilarity analysis to map the time course of cortical activity, we found that grasp computations formed superadditive integrated representations of grasp features during different planning phases of grasping. Shape-by-size representations and size-by-orientation representations occurred before and after effector specification, respectively, and could not be explained by single-feature models. These observations are consistent with the brain performing different preparatory, phase-specific computations: visual object analysis to identify grasp points at abstract visual levels, and downstream sensorimotor preparatory computations for reach-to-grasp trajectories. Our results suggest that the brain adheres to the needs of nonlinear motor control for integration. Furthermore, they show that examining the superadditive influence of integrated representations can serve as a novel lens to map the computations underlying sensorimotor control.
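The superadditivity logic described here can be illustrated with a toy representational dissimilarity analysis. The sketch below is not the authors' pipeline: the 2-shape-by-3-size design and the 0.5 interaction weight are invented for illustration. It builds single-feature model RDMs for shape and size, constructs an "observed" RDM that integrates both features, and shows that an additive combination of the single-feature models leaves a residual:

```python
import numpy as np

# Hypothetical design: 2 shapes x 3 sizes -> 6 conditions.
shapes = np.repeat([0, 1], 3)
sizes = np.tile([0, 1, 2], 2)

# Single-feature model RDMs (representational dissimilarity matrices):
# the shape model is binary, the size model scales with the size difference.
shape_rdm = (shapes[:, None] != shapes[None, :]).astype(float)
size_rdm = np.abs(sizes[:, None] - sizes[None, :]).astype(float)

# A superadditive (integrated) code adds dissimilarity beyond the sum of
# the single-feature models whenever both features differ.
observed_rdm = shape_rdm + size_rdm + 0.5 * shape_rdm * size_rdm

# Fit the additive model (shape + size + intercept) to the off-diagonal
# entries; a nonzero residual indicates a superadditive component.
iu = np.triu_indices(len(shapes), k=1)
X = np.column_stack([shape_rdm[iu], size_rdm[iu], np.ones(len(iu[0]))])
beta, *_ = np.linalg.lstsq(X, observed_rdm[iu], rcond=None)
residual = observed_rdm[iu] - X @ beta
superadditive = np.abs(residual).max() > 1e-6
print(superadditive)  # True: single-feature models cannot explain the RDM
```

Note that with a purely additive observed RDM the least-squares fit would be exact and the residual would vanish; the residual is what operationalizes "could not be explained by single-feature models."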
Affiliation(s)
- Lin Lawrence Guo
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Matthias Niemeier
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Centre for Vision Research, York University, Toronto, Ontario M4N 3M6, Canada
2
Guo LL, Oghli YS, Frost A, Niemeier M. Multivariate Analysis of Electrophysiological Signals Reveals the Time Course of Precision Grasps Programs: Evidence for Nonhierarchical Evolution of Grasp Control. J Neurosci 2021; 41:9210-9222. [PMID: 34551938 PMCID: PMC8570828 DOI: 10.1523/jneurosci.0992-21.2021]
Abstract
Current understanding of the neural processes underlying human grasping suggests that grasp computations involve gradients of higher- to lower-level representations and, relatedly, visual to motor processes. However, it is unclear whether these processes evolve in a strictly canonical manner from higher to intermediate to lower levels, given that this knowledge relies importantly on functional imaging, which lacks temporal resolution. To examine grasping in fine temporal detail, here we used multivariate EEG analysis. We asked participants to grasp objects while controlling the time at which crucial elements of grasp programs were specified. We first specified the orientation with which participants should grasp objects, and only after a delay did we instruct participants which effector to use to grasp, either the right or the left hand. We also asked participants to grasp with both hands, because bimanual and left-hand grasping share intermediate-level grasp representations. We observed that grasp programs evolved in a canonical manner from visual representations, which were independent of effectors, to motor representations that distinguished between effectors. However, we found that intermediate representations, which only partially distinguished between effectors, arose after representations that distinguished among all effector types. Our results show that grasp computations do not proceed in a strictly hierarchically canonical fashion, highlighting the importance of the fine temporal resolution of EEG for a comprehensive understanding of human grasp control.
SIGNIFICANCE STATEMENT A long-standing assumption about grasp computations is that grasp representations progress from higher- to lower-level control in a regular, or canonical, fashion. Here, we combined EEG and multivariate pattern analysis to characterize the temporal dynamics of grasp representations while participants viewed objects and were subsequently cued to execute a unimanual or bimanual grasp. Interrogation of the temporal dynamics revealed that lower-level effector representations emerged before intermediate levels of grasp representations, thereby suggesting a partially noncanonical progression from higher to lower and then to intermediate level grasp control.
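The time-resolved multivariate approach can be sketched in a few lines: decode the effector condition from the channel pattern at each time point with cross-validation, and ask when decoding rises above chance. Everything below is simulated and hypothetical; the dimensions, the nearest-centroid classifier, and the injected effect at t = 25 are illustrative stand-ins, not the authors' data or pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical EEG: trials x channels x timepoints, two effector conditions.
n_trials, n_chan, n_time = 40, 16, 50
X = rng.standard_normal((n_trials, n_chan, n_time))
y = np.repeat([0, 1], n_trials // 2)
# Inject a condition difference only after "effector specification" (t >= 25).
X[y == 1, :, 25:] += 0.8

def decode_timecourse(X, y, n_folds=5):
    """Cross-validated nearest-centroid decoding at each time point."""
    n = len(y)
    folds = np.arange(n) % n_folds  # interleaved folds, balanced classes
    acc = np.zeros(X.shape[2])
    for t in range(X.shape[2]):
        correct = 0
        for f in range(n_folds):
            tr, te = folds != f, folds == f
            c0 = X[tr & (y == 0), :, t].mean(axis=0)
            c1 = X[tr & (y == 1), :, t].mean(axis=0)
            d0 = np.linalg.norm(X[te, :, t] - c0, axis=1)
            d1 = np.linalg.norm(X[te, :, t] - c1, axis=1)
            correct += ((d1 < d0).astype(int) == y[te]).sum()
        acc[t] = correct / n
    return acc

acc = decode_timecourse(X, y)
# Decoding should hover near chance before t=25 and rise sharply after.
print(acc[:25].mean(), acc[25:].mean())
```

The same logic, applied to real EEG with a proper classifier, yields the decoding time courses that let studies like this one order visual, intermediate, and motor representations in time.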
Affiliation(s)
- Lin Lawrence Guo
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Yazan Shamli Oghli
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Adam Frost
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Matthias Niemeier
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Centre for Vision Research, York University, Toronto, Ontario M4N 3M6, Canada
- Vision: Science to Applications, York University, Toronto, Ontario M3J 1P3, Canada
3
Interhemispheric co-alteration of brain homotopic regions. Brain Struct Funct 2021; 226:2181-2204. [PMID: 34170391 PMCID: PMC8354999 DOI: 10.1007/s00429-021-02318-4]
Abstract
Asymmetries in gray matter alterations raise important issues regarding the pathological co-alteration between hemispheres. Since homotopic areas are the most functionally connected sites between hemispheres, and since gray matter co-alterations depend on connectivity patterns, it is likely that this relationship is mirrored in homologous interhemispheric co-altered areas. To explore this issue, we analyzed data from patients with Alzheimer’s disease, schizophrenia, bipolar disorder and depressive disorder from the BrainMap voxel-based morphometry database. We calculated a map showing the pathological homotopic anatomical co-alteration between homologous brain areas. This map was compared with the meta-analytic homotopic connectivity map obtained from the BrainMap functional database, so as to obtain a meta-analytic connectivity modeling map between homologous areas. We applied an empirical Bayesian technique to determine a directional pathological co-alteration on the basis of possible tendencies in the conditional probability of homologous brain areas being co-altered. Our analysis provides evidence that hemispheric homologous areas appear to be anatomically co-altered; that this pathological co-alteration is similar to the pattern of connectivity exhibited by the couples of homologues; and that the probability of finding alterations in areas of the left hemisphere seems to be greater when their right homologues are also altered than vice versa, an intriguing asymmetry that deserves to be further investigated and explained.
4
Ozana A, Ganel T. A double dissociation between action and perception in bimanual grasping: evidence from the Ponzo and the Wundt-Jastrow illusions. Sci Rep 2020; 10:14665. [PMID: 32887921 PMCID: PMC7473850 DOI: 10.1038/s41598-020-71734-z]
Abstract
Research on visuomotor control suggests that visually guided actions toward objects rely on computations that are functionally distinct from those of perception. For example, a double dissociation between grasping and perceptual estimates was reported in previous experiments that pitted real against illusory object-size differences in the context of the Ponzo illusion. While most previous research on the relation between action and perception focused on one-handed grasping, everyday visuomotor interactions also entail the simultaneous use of both hands to grasp larger objects. Here, we examined whether this double dissociation extends to bimanual movement control. In Experiment 1, participants were presented with different-sized objects embedded in the Ponzo illusion. In Experiment 2, we tested whether the dissociation between perception and action extends to a different illusion, the Wundt-Jastrow illusion, which had not previously been used in grasping experiments. In both experiments, bimanual grasping trajectories reflected the differences in physical size between the objects; at the same time, perceptual estimates reflected the differences in illusory size between the objects. These results suggest that the double dissociation between action and perception generalizes to bimanual movement control. Unlike conscious perception, bimanual grasping movements are tuned to real-world metrics, and can potentially resist irrelevant information on relative size and depth.
Affiliation(s)
- Aviad Ozana
- Department of Psychology, Ben-Gurion University of the Negev, 8410500, Beer-Sheva, Israel
- Tzvi Ganel
- Department of Psychology, Ben-Gurion University of the Negev, 8410500, Beer-Sheva, Israel
5
The left cerebral hemisphere may be dominant for the control of bimanual symmetric reach-to-grasp movements. Exp Brain Res 2019; 237:3297-3311. [PMID: 31664489 DOI: 10.1007/s00221-019-05672-2]
Abstract
Previous research has established that the left cerebral hemisphere is dominant for the control of continuous bimanual movements. The lateralisation of motor control for discrete bimanual movements, in contrast, is underexplored. The purpose of the current study was to investigate which hemisphere, if either, is dominant for discrete bimanual movements. Twenty-one participants made bimanual reach-to-grasp movements towards pieces of candy. Participants grasped the candy either to place it in their mouths (grasp-to-eat) or in a receptacle near their mouths (grasp-to-place). Research has shown smaller maximum grip apertures (MGAs) for unimanual grasp-to-eat movements than for unimanual grasp-to-place movements when controlled by the left hemisphere. In Experiment 1, participants made bimanual symmetric movements in which both hands made grasp-to-eat or grasp-to-place movements. We hypothesised that a left hemisphere dominance for bimanual movements would cause smaller MGAs in both hands during bimanual grasp-to-eat movements than during bimanual grasp-to-place movements. The results revealed that MGAs were indeed smaller for bimanual grasp-to-eat movements than for grasp-to-place movements. This supports the idea that the left hemisphere may be dominant for the control of bimanual symmetric movements, which agrees with studies on continuous bimanual movements. In Experiment 2, participants made bimanual asymmetric movements in which one hand made a grasp-to-eat movement while the other hand made a grasp-to-place movement. The results failed to support the predictions of left hemisphere dominance, right hemisphere dominance, or contralateral control.
6
Chen J, Kaur J, Abbas H, Wu M, Luo W, Osman S, Niemeier M. Evidence for a common mechanism of spatial attention and visual awareness: Towards construct validity of pseudoneglect. PLoS One 2019; 14:e0212998. [PMID: 30845258 PMCID: PMC6405131 DOI: 10.1371/journal.pone.0212998]
Abstract
Present knowledge of attention and awareness centres on deficits in patients with right brain damage who show severe forms of inattention to the left, called spatial neglect. Yet the functions that are lost in neglect are poorly understood. In healthy people, they might produce “pseudoneglect”—subtle biases to the left found in various tests that could complement the leftward deficits in neglect. But pseudoneglect measures are poorly correlated. Thus, it is unclear whether they reflect anything but distinct surface features of the tests. To probe for a common mechanism, here we asked whether visual noise, known to increase leftward biases in the grating-scales task, has comparable effects on other measures of pseudoneglect. We measured biases using three perceptual tasks that require judgments about size (landmark task), luminance (greyscales task) and spatial frequency (grating-scales task), as well as two visual search tasks that permitted serial and parallel search or parallel search alone. In each task, we randomly selected pixels of the stimuli and set them to random luminance values, much like a poor TV signal. We found that participants biased their perceptual judgments more to the left with increasing levels of noise, regardless of task. Also, noise amplified the difference between long and short lines in the landmark task. In contrast, biases during visual searches were not influenced by noise. Our data provide crucial evidence that different measures of perceptual pseudoneglect, but not exploratory pseudoneglect, share a common mechanism. It can be speculated that this common mechanism feeds into specific, right-dominant processes of global awareness involved in the integration of visual information across the two hemispheres.
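The noise manipulation described here (randomly selected pixels set to random luminance values, "like a poor TV signal") is straightforward to reproduce. A minimal sketch, using a hypothetical uniform grey stimulus and a made-up noise fraction:

```python
import numpy as np

rng = np.random.default_rng(42)

def add_pixel_noise(img, fraction, rng):
    """Set a random fraction of pixels to random luminance values,
    leaving the rest of the stimulus untouched."""
    noisy = img.copy()
    idx = rng.choice(img.size, size=int(fraction * img.size), replace=False)
    flat = noisy.reshape(-1)          # view into the copy
    flat[idx] = rng.uniform(0.0, 1.0, size=idx.size)
    return noisy

stimulus = np.full((64, 128), 0.5)    # hypothetical grey stimulus
noisy = add_pixel_noise(stimulus, 0.3, rng)
changed = np.mean(noisy != stimulus)
print(round(changed, 2))              # fraction of pixels replaced, ~0.3
```

Varying `fraction` across trials corresponds to the increasing noise levels that amplified the leftward perceptual biases in the study.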
Affiliation(s)
- Jiaqing Chen
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Jagjot Kaur
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Hana Abbas
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Ming Wu
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Wenyi Luo
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Sinan Osman
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Matthias Niemeier
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Centre for Vision Research, York University, Toronto, Ontario, Canada
7
Guo LL, Patel N, Niemeier M. Emergent Synergistic Grasp-Like Behavior in a Visuomotor Joint Action Task: Evidence for Internal Forward Models as Building Blocks of Human Interactions. Front Hum Neurosci 2019; 13:37. [PMID: 30787873 PMCID: PMC6372946 DOI: 10.3389/fnhum.2019.00037]
Abstract
Central to the mechanistic understanding of the human mind is clarifying how cognitive functions arise from simpler sensory and motor functions. A longstanding assumption is that the forward models used by sensorimotor control to anticipate actions also serve to incorporate other people's actions and intentions, giving rise to sensorimotor interactions between people, and even to abstract forms of interaction. That is, forward models could aid core aspects of human social cognition. To test whether forward models can be used to coordinate interactions, here we measured the movements of pairs of participants in a novel joint action task. For the task they collaborated to lift an object, each of them using fingers of one hand to push against the object from opposite sides, just as a single person would use two hands to grasp the object bimanually. Perturbations of the object were applied randomly, as they are known to impact grasp-specific movement components in common grasping tasks. We found that co-actors quickly learned to make grasp-like movements with grasp components that were coordinated, on average, with the observed peak deviation and velocity of their partner's trajectories. Our data suggest that co-actors adopted pre-existing bimanual grasp programs for their own body to use forward models of their partner's effectors. This is consistent with the long-held assumption that human higher-order cognitive functions may take advantage of sensorimotor forward models to plan social behavior.
New and Noteworthy: Taking an approach of sensorimotor neuroscience, our work provides evidence for a long-held belief that the coordination of physical as well as abstract interactions between people originates from certain sensorimotor control processes, called forward models, that form mental representations of people's bodies and actions. With a new joint action paradigm and several new analysis approaches, we show that people indeed coordinate their interactions based on forward models and mutual action observation.
Affiliation(s)
- Lin Lawrence Guo
- Department of Psychology, University of Toronto Scarborough, Scarborough, ON, Canada
- Namita Patel
- Department of Psychology, University of Toronto Scarborough, Scarborough, ON, Canada
- Matthias Niemeier
- Department of Psychology, University of Toronto Scarborough, Scarborough, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
8
Shared right-hemispheric representations of sensorimotor goals in dynamic task environments. Exp Brain Res 2019; 237:977-987. [PMID: 30694342 DOI: 10.1007/s00221-019-05478-2]
Abstract
Functional behaviour requires that we form goals that integrate sensory information about the world around us with suitable motor actions, such as when we plan to grab an object with a hand. However, much research has tested grasping in static scenarios where goals are pursued with repetitive movements, whereas dynamic contexts require goals to be pursued even when changes in the environment require a change in the actions needed to attain them. To study grasp goals in dynamic environments, here we employed a task in which the goal remained the same but the execution of the movement changed; we primed participants to grasp objects with either their right or left hand, and occasionally they had to switch to grasping with both. Switch costs should be minimal if grasp goal representations were used continuously, for example within the left dominant hemisphere. But remapped or re-computed goal representations should delay movements. We found that switching from right-hand grasping to bimanual grasping delayed reaction times, but switching from left-hand grasping to bimanual grasping did not. Further, control experiments showed that the lateralized switch costs were not caused by asymmetric inhibition between hemispheres or by switches between usual and unusual tasks. Our results show that the left hemisphere does not serve a general role in sensorimotor grasp goal representation. Instead, sensorimotor grasp goals appear to be represented at intermediate levels of abstraction, downstream from cognitive task representations, yet upstream from the control of the grasping effectors.
9
Right Hemisphere Contributions to Bilateral Force Control in Chronic Stroke: A Preliminary Report. J Stroke Cerebrovasc Dis 2018; 27:3218-3223. [PMID: 30093198 DOI: 10.1016/j.jstrokecerebrovasdis.2018.07.019]
Abstract
BACKGROUND Bilateral motor control deficits poststroke may be lateralized by the hemisphere of damage. This preliminary study investigated bilateral force control in left and right hemisphere-damaged groups at baseline and after coupled bilateral movement training with neuromuscular stimulation.
METHODS Stroke participants (8 left hemisphere and 6 right hemisphere cerebrovascular accidents) performed a bilateral isometric force control task at 3 submaximal force levels (5%, 25%, and 50% of maximum voluntary contraction [MVC]) before and after training. Force accuracy, force variability, and interlimb force coordination were analyzed in 3-way mixed-design ANOVAs (2 × 2 × 3; Group × Test Session × Force Level) with repeated measures on test session and force level.
RESULTS Force accuracy and variability at 50% of MVC in the right hemisphere-damaged group were more impaired at baseline than at the lower target force levels, and the impairment at the highest target level improved after coupled bilateral movement training. These patterns were not observed in the left hemisphere-damaged group.
CONCLUSIONS The current findings support the proposition that the right hemisphere contributes to controlling bilateral force production.
10
Abstract
According to Weber’s law, a fundamental principle of perception, visual resolution decreases linearly with increasing object size. Previous studies have shown, however, that unlike perception, grasping does not adhere to Weber’s law. Yet this research was limited by the fact that perception and grasping were examined over a restricted range of stimulus sizes bounded by the maximum finger span. The purpose of the current study was to test the generality of the dissociation between perception and action in a different type of visuomotor task, bimanual grasping. Bimanual grasping also makes it possible to measure visual resolution during perception and action across a much wider range of stimulus sizes than unimanual grasping allows. Participants grasped or estimated the sizes of large objects using both hands. The results showed that bimanual grasps violated Weber’s law throughout the entire movement trajectory. In contrast, just noticeable differences (JNDs) for perceptual estimations of the objects increased linearly with size, in agreement with Weber’s law. The findings suggest that visuomotor control, across different types of actions and over a large range of sizes, is based on an absolute rather than a relative representation of object size.
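Weber’s law predicts that the just noticeable difference grows roughly linearly with magnitude (JND ≈ k · size), so a positive slope of JND against object size signals Weber-like scaling, while a flat slope signals absolute metric coding. A simulated sketch of that contrast follows; the Weber fraction, noise levels, and trial counts are invented for illustration and are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
sizes = np.array([10.0, 20.0, 30.0, 40.0])  # hypothetical object widths, cm
n = 500
weber_fraction = 0.05

# Perceptual estimates: response noise scales with size (Weber's law).
# Grasp apertures: response noise is constant (absolute metric coding).
jnd_percept, jnd_grasp = [], []
for s in sizes:
    percept = rng.normal(s, weber_fraction * s, n)
    grasp = rng.normal(s, 0.8, n)
    jnd_percept.append(percept.std())
    jnd_grasp.append(grasp.std())

# Slope of JND against size: positive for Weber-like scaling, ~0 otherwise.
slope_percept = np.polyfit(sizes, jnd_percept, 1)[0]
slope_grasp = np.polyfit(sizes, jnd_grasp, 1)[0]
print(slope_percept > slope_grasp)  # True
```

Estimating JNDs as response variability and regressing them on size is the standard way such dissociations are quantified; the recovered perceptual slope approximates the Weber fraction used to generate the data.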
11
Le A, Vesia M, Yan X, Crawford JD, Niemeier M. Parietal area BA7 integrates motor programs for reaching, grasping, and bimanual coordination. J Neurophysiol 2017; 117:624-636. [PMID: 27832593 PMCID: PMC5288481 DOI: 10.1152/jn.00299.2016]
Abstract
Skillful interaction with the world requires that the brain uses a multitude of sensorimotor programs and subroutines, such as for reaching, grasping, and the coordination of the two body halves. However, it is unclear how these programs operate together. Networks for reaching, grasping, and bimanual coordination might converge in common brain areas. For example, Brodmann area 7 (BA7) is known to activate in disparate tasks involving the three types of movements separately. Here, we asked whether BA7 plays a key role in integrating coordinated reach-to-grasp movements for both arms together. To test this, we applied transcranial magnetic stimulation (TMS) to disrupt BA7 activity in the left and right hemispheres, while human participants performed a bimanual size-perturbation grasping task using the index and middle fingers of both hands to grasp a rectangular object whose orientation (and thus grasp-relevant width dimension) might or might not change. We found that TMS of the right BA7 during object perturbation disrupted the bimanual grasp and transport/coordination components, and TMS over the left BA7 disrupted unimanual grasps. These results show that right BA7 is causally involved in the integration of reach-to-grasp movements of the two arms. NEW & NOTEWORTHY Our manuscript describes a role of human Brodmann area 7 (BA7) in the integration of multiple visuomotor programs for reaching, grasping, and bimanual coordination. Our results are the first to suggest that right BA7 is critically involved in the coordination of reach-to-grasp movements of the two arms. The results complement previous reports of right-hemisphere lateralization for bimanual grasps.
Affiliation(s)
- Ada Le
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario, Canada
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Michael Vesia
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Division of Neurology and Krembil Neuroscience Centre, Toronto Western Research Institute, University of Toronto, Toronto, Ontario, Canada
- Xiaogang Yan
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- J Douglas Crawford
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Neuroscience Graduate Diploma Program and Departments of Psychology, Biology, and Kinesiology & Health Sciences, York University, Toronto, Ontario, Canada
- Canadian Action and Perception Network, Toronto, Ontario, Canada
- Matthias Niemeier
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario, Canada
- Centre for Vision Research, York University, Toronto, Ontario, Canada
12
Parsa B, Ambike S, Terekhov A, Zatsiorsky VM, Latash ML. Analytical Inverse Optimization in Two-Hand Prehensile Tasks. J Mot Behav 2016; 48:424-34. [PMID: 27254391 DOI: 10.1080/00222895.2015.1123140]
Abstract
The authors explored the application of the analytical inverse optimization (ANIO) method to normal finger forces in unimanual and bimanual prehensile tasks with discrete and continuously changing constraints. The subjects held an instrumented handle vertically with one or two hands. The external torque and grip force changed across trials or continuously within a trial. Principal component analysis showed similar percentages of variance accounted for by the first two principal components across tasks and conditions. Compared to unimanual tasks, bimanual tasks showed significantly more frequent failures to find a cost function leading to a stable solution. In cases of stable solutions, similar second-order polynomials were computed as cost functions across tasks and conditions. The bimanual tasks, however, showed significantly worse goodness-of-fit index values. The authors show that ANIO can be used in tasks with slowly changing constraints, making it an attractive tool to study optimality of performance in special populations. They also show that ANIO can fail in multifinger tasks, likely due to irreproducible behavior across trials, which is more likely in bimanual than in unimanual tasks.
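The variance-accounted-for statistic used in the PCA step can be sketched as follows, with hypothetical finger-force data generated from two latent "synergy" components; the dimensions and noise level are invented and do not reproduce the study's data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical normal finger forces: trials x 4 fingers, driven by two
# shared "synergy" components plus small independent noise per finger.
n_trials = 200
synergies = rng.standard_normal((n_trials, 2))
loadings = rng.standard_normal((2, 4))
forces = synergies @ loadings + 0.2 * rng.standard_normal((n_trials, 4))

# PCA via the covariance eigendecomposition; report the variance
# accounted for (VAF) by the first two principal components.
centered = forces - forces.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending
vaf_two = eigvals[:2].sum() / eigvals.sum()
print(vaf_two > 0.9)  # True: two components dominate by construction
```

A high and stable two-component VAF across tasks is what licenses fitting a low-dimensional cost function, as in the ANIO analysis summarized above.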
Affiliation(s)
- Behnoosh Parsa
- Department of Kinesiology, The Pennsylvania State University, University Park, Pennsylvania
- Satyajit Ambike
- Department of Health and Kinesiology, Purdue University, South Bend, Indiana
- Alexander Terekhov
- Laboratory of Psychology of Perception, University of Paris Descartes, France
- Vladimir M Zatsiorsky
- Department of Kinesiology, The Pennsylvania State University, University Park, Pennsylvania
- Mark L Latash
- Department of Kinesiology, The Pennsylvania State University, University Park, Pennsylvania
13
Le A, Niemeier M. Visual field preferences of object analysis for grasping with one hand. Front Hum Neurosci 2014; 8:782. [PMID: 25324766 PMCID: PMC4181231 DOI: 10.3389/fnhum.2014.00782]
Abstract
When we grasp an object using one hand, the opposite hemisphere predominantly guides the motor control of grasp movements (Davare et al., 2007; Rice et al., 2007). However, it is unclear whether visual object analysis for grasp control relies more on inputs (a) from the contralateral than the ipsilateral visual field, (b) from one dominant visual field regardless of the grasping hand, or (c) from both visual fields equally. For bimanual grasping of a single object, we have recently demonstrated a visual field preference for the left visual field (Le and Niemeier, 2013a,b), consistent with a general right-hemisphere dominance for sensorimotor control of bimanual grasps (Le et al., 2014). But visual field differences have never been tested for unimanual grasping. Therefore, here we asked right-handed participants to fixate to the left or right of an object and then grasp the object with either their right or left hand using a precision grip. We found that participants grasping with their right hand performed better with objects in the right visual field: maximum grip apertures (MGAs) were more closely matched to the object width and were smaller than for objects in the left visual field. In contrast, when people grasped with their left hand, preferences switched to the left visual field. What is more, MGA scaling with the left hand showed greater visual field differences than right-hand grasping. Our data suggest that visual object analysis for unimanual grasping shows a preference for visual information from the ipsilateral visual field, and that the left hemisphere is better equipped to control grasps in both visual fields.
Affiliation(s)
- Ada Le
- Department of Psychology, University of Toronto Scarborough, Toronto, ON, Canada
14
Chen J, Niemeier M. Distractor removal amplifies spatial frequency-specific crossover of the attentional bias: a psychophysical and Monte Carlo simulation study. Exp Brain Res 2014; 232:4001-19. [DOI: 10.1007/s00221-014-4082-y]
15
Is there a left hemispheric asymmetry for tool affordance processing? Neuropsychologia 2013; 51:2690-701. [DOI: 10.1016/j.neuropsychologia.2013.09.023]
16
Left visual field preference for a bimanual grasping task with ecologically valid object sizes. Exp Brain Res 2013; 230:187-96. [PMID: 23857170 DOI: 10.1007/s00221-013-3643-9]
Abstract
Grasping using two forelimbs in opposition to one another is evolutionarily older than the hand with an opposable thumb (Whishaw and Coles in Behav Brain Res 77:135-148, 1996); yet the mechanisms for bimanual grasps remain unclear. Similar to unimanual grasping, the localization of matching stable grasp points on an object is computationally expensive, so it makes sense for the signals to converge in a single cortical hemisphere. Indeed, bimanual grasps are faster and more accurate in the left visual field, and are disrupted by transcranial stimulation of the right hemisphere (Le and Niemeier in Exp Brain Res 224:263-273, 2013; Le et al. in Cereb Cortex. doi: 10.1093/cercor/bht115, 2013). However, research so far has tested the right-hemisphere dominance with small objects only, which are usually grasped with one hand, whereas bimanual grasping is more commonly used for objects that are too big for a single hand. Because grasping large objects might involve different neural circuits than grasping small objects (Grol et al. in J Neurosci 27:11877-11887, 2007), here we tested whether a left visual field/right hemisphere dominance for bimanual grasping exists with large and thus more ecologically valid objects, or whether the right-hemisphere dominance is a function of object size. We asked participants to fixate to the left or right of an object and to grasp the object with the index and middle fingers of both hands. Consistent with previous observations, we found that for objects in the left visual field, the maximum grip apertures were scaled closer to the object width and were smaller and less variable than for objects in the right visual field. Our results demonstrate that bimanual grasping is predominantly controlled by the right hemisphere, even in the context of grasping larger objects.
17
Le A, Vesia M, Yan X, Niemeier M, Crawford JD. The Right Anterior Intraparietal Sulcus Is Critical for Bimanual Grasping: A TMS Study. Cereb Cortex 2013; 24:2591-603. [DOI: 10.1093/cercor/bht115]