1
Guo LL, Oghli YS, Frost A, Niemeier M. Multivariate Analysis of Electrophysiological Signals Reveals the Time Course of Precision Grasp Programs: Evidence for Nonhierarchical Evolution of Grasp Control. J Neurosci 2021;41:9210-9222. PMID: 34551938; PMCID: PMC8570828; DOI: 10.1523/JNEUROSCI.0992-21.2021.
Abstract
Current understanding of the neural processes underlying human grasping suggests that grasp computations involve gradients of higher- to lower-level representations and, relatedly, of visual to motor processes. However, it is unclear whether these processes evolve strictly canonically from higher to intermediate to lower levels, because this knowledge relies heavily on functional imaging, which lacks temporal resolution. To examine grasping in fine temporal detail, here we used multivariate EEG analysis. We asked participants to grasp objects while we controlled the time at which crucial elements of the grasp programs were specified. We first specified the orientation with which participants should grasp objects, and only after a delay did we instruct them which effector to use, the right hand or the left. We also asked participants to grasp with both hands, because bimanual and left-hand grasping share intermediate-level grasp representations. We observed that grasp programs evolved in a canonical manner from visual representations, which were independent of the effectors, to motor representations, which distinguished between effectors. However, intermediate-level representations, which only partially distinguished between effectors, arose after the representations that distinguished among all effector types. Our results show that grasp computations do not proceed in a strictly hierarchical, canonical fashion, highlighting the importance of the fine temporal resolution of EEG for a comprehensive understanding of human grasp control.
SIGNIFICANCE STATEMENT A long-standing assumption about grasp computations is that grasp representations progress from higher- to lower-level control in a regular, or canonical, fashion. Here, we combined EEG and multivariate pattern analysis to characterize the temporal dynamics of grasp representations while participants viewed objects and were subsequently cued to execute a unimanual or bimanual grasp. Interrogation of the temporal dynamics revealed that lower-level effector representations emerged before intermediate-level grasp representations, suggesting a partially noncanonical progression from higher- to lower- and then to intermediate-level grasp control.
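The time-resolved decoding logic this abstract describes can be sketched in a few lines. The following is a minimal illustration with synthetic data, not the authors' pipeline: the use of scikit-learn, the LDA classifier, and all dimensions and effect sizes are assumptions for illustration only.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic EEG: 60 trials x 32 channels x 50 time points,
# two conditions (e.g., cues for two different effectors).
n_trials, n_channels, n_times = 60, 32, 50
X = rng.normal(size=(n_trials, n_channels, n_times))
y = np.repeat([0, 1], n_trials // 2)
# Inject a condition difference that emerges at time point 25.
X[y == 1, :, 25:] += 0.8

# Time-resolved MVPA: train and cross-validate a classifier
# independently at each time point.
accuracy = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])

print(accuracy[:20].mean(), accuracy[30:].mean())
```

Plotting the resulting accuracy time course shows when condition information becomes decodable: accuracy hovers near chance before the injected effect and rises afterward, which is the kind of onset comparison the study uses to order representations in time.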
Affiliation(s)
- Lin Lawrence Guo
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Yazan Shamli Oghli
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Adam Frost
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Matthias Niemeier
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Centre for Vision Research, York University, Toronto, Ontario M4N 3M6, Canada
- Vision: Science to Applications, York University, Toronto, Ontario M3J 1P3, Canada
2
Ozana A, Ganel T. A double dissociation between action and perception in bimanual grasping: evidence from the Ponzo and the Wundt-Jastrow illusions. Sci Rep 2020;10:14665. PMID: 32887921; PMCID: PMC7473850; DOI: 10.1038/s41598-020-71734-z.
Abstract
Research on visuomotor control suggests that visually guided actions toward objects rely on computations that are functionally distinct from those of perception. For example, a double dissociation between grasping and perceptual estimates was reported in previous experiments that pitted real against illusory object-size differences in the context of the Ponzo illusion. While most previous research on the relation between action and perception has focused on one-handed grasping, everyday visuomotor interactions also entail the simultaneous use of both hands to grasp larger objects. Here, we examined whether this double dissociation extends to bimanual movement control. In Experiment 1, participants were presented with different-sized objects embedded in the Ponzo illusion. In Experiment 2, we tested whether the dissociation between perception and action extends to a different illusion, the Wundt-Jastrow illusion, which had not previously been used in grasping experiments. In both experiments, bimanual grasping trajectories reflected the differences in physical size between the objects; at the same time, perceptual estimates reflected the differences in illusory size. These results suggest that the double dissociation between action and perception generalizes to bimanual movement control. Unlike conscious perception, bimanual grasping movements are tuned to real-world metrics and can resist irrelevant information about relative size and depth.
Affiliation(s)
- Aviad Ozana
- Department of Psychology, Ben-Gurion University of the Negev, 8410500, Beer-Sheva, Israel
- Tzvi Ganel
- Department of Psychology, Ben-Gurion University of the Negev, 8410500, Beer-Sheva, Israel
3
Lee-Miller T, Santello M, Gordon AM. Hand forces and placement are modulated and covary during anticipatory control of bimanual manipulation. J Neurophysiol 2019;121:2276-2290. DOI: 10.1152/jn.00760.2018.
Abstract
Dexterous object manipulation relies on feedforward and feedback control of kinetics (forces) and kinematics (hand shaping and digit placement). Lifting an object with an uneven mass distribution involves generating compensatory moments at lift-off to counter the object's torques. This is accomplished through the modulation and covariation of digit forces and placement, which has been shown to be a general feature of unimanual manipulation. These anticipatory feedforward processes occur before performance-specific feedback becomes available. Whether this adaptation is unique to unimanual dexterous manipulation or generalizes across unimanual and bimanual manipulation was not known. We investigated the generation of compensatory moments through hand placement and force modulation during bimanual manipulation of an object with a variable center of mass. Participants were instructed to prevent the object from rolling during the lift. As in unimanual grasping, we found modulation and covariation of hand forces and placement for successful performance. This motor adaptation of anticipatory compensatory-moment control is thus a general feature across unimanual and bimanual effectors. Our results highlight the involvement of high-level representations of manipulation goals and underscore sensorimotor circuitry that supports anticipatory control through a continuum of force and placement modulation across a range of effectors. NEW & NOTEWORTHY This is the first study, to our knowledge, to show that successful bimanual manipulation of objects with asymmetrical centers of mass is achieved through the modulation and covariation of hand forces and placements to generate compensatory moments. Digit force-to-placement modulation is thus a general phenomenon across multiple effectors, such as the fingers of one hand and the two hands. This adds to our understanding of how low-level internal representations of object properties are integrated into high-level task representations.
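The compensatory-moment requirement described in this abstract can be made concrete with a toy calculation. All numbers below are hypothetical, and the force/placement trade-off shown is a deliberate simplification of the task, not the authors' model.

```python
# Hypothetical numbers illustrating the compensatory moment needed at lift-off
# for an object whose center of mass (CM) is offset from the grasp axis.
g = 9.81          # m/s^2, gravitational acceleration
mass = 0.4        # kg, assumed object mass
cm_offset = 0.03  # m, assumed horizontal CM offset from the midpoint between hands

# Torque the object would undergo about the grasp axis at lift-off
# if no compensatory moment were generated.
object_torque = mass * g * cm_offset  # N*m

# Two covarying ways to generate the countering moment:
# 1) asymmetric vertical load forces at a fixed hand separation, or
# 2) asymmetric vertical hand placement at fixed load forces.
hand_separation = 0.20  # m, assumed distance between the two grip points
# Load-force difference between the hands that balances the torque
# (each hand acts at hand_separation / 2 from the midpoint).
load_force_delta = object_torque / (hand_separation / 2)  # N

print(object_torque, load_force_delta)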
Affiliation(s)
- Trevor Lee-Miller
- Department of Biobehavioral Sciences, Teachers College, Columbia University, New York, New York
- Marco Santello
- School of Biological and Health Systems Engineering, Arizona State University, Tempe, Arizona
- Andrew M. Gordon
- Department of Biobehavioral Sciences, Teachers College, Columbia University, New York, New York
4
Chen J, Kaur J, Abbas H, Wu M, Luo W, Osman S, Niemeier M. Evidence for a common mechanism of spatial attention and visual awareness: Towards construct validity of pseudoneglect. PLoS One 2019;14:e0212998. PMID: 30845258; PMCID: PMC6405131; DOI: 10.1371/journal.pone.0212998.
Abstract
Present knowledge of attention and awareness centres on deficits in patients with right brain damage who show severe forms of inattention to the left, called spatial neglect. Yet the functions that are lost in neglect are poorly understood. In healthy people, they might produce “pseudoneglect”—subtle biases to the left found in various tests that could complement the leftward deficits in neglect. But pseudoneglect measures are poorly correlated. Thus, it is unclear whether they reflect anything but distinct surface features of the tests. To probe for a common mechanism, here we asked whether visual noise, known to increase leftward biases in the grating-scales task, has comparable effects on other measures of pseudoneglect. We measured biases using three perceptual tasks that require judgments about size (landmark task), luminance (greyscales task) and spatial frequency (grating-scales task), as well as two visual search tasks that permitted serial and parallel search or parallel search alone. In each task, we randomly selected pixels of the stimuli and set them to random luminance values, much like a poor TV signal. We found that participants biased their perceptual judgments more to the left with increasing levels of noise, regardless of task. Also, noise amplified the difference between long and short lines in the landmark task. In contrast, biases during visual searches were not influenced by noise. Our data provide crucial evidence that different measures of perceptual pseudoneglect, but not exploratory pseudoneglect, share a common mechanism. It can be speculated that this common mechanism feeds into specific, right-dominant processes of global awareness involved in the integration of visual information across the two hemispheres.
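The noise manipulation this abstract describes ("randomly selected pixels of the stimuli ... set to random luminance values") can be sketched as follows. The proportion of replaced pixels, the luminance range, and the array representation are assumptions for illustration, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def add_pixel_noise(image, proportion):
    """Set a random subset of pixels to random luminance values,
    much like a poor TV signal (parameters are illustrative)."""
    noisy = image.copy()
    n_noise = int(proportion * image.size)
    idx = rng.choice(image.size, size=n_noise, replace=False)
    noisy.flat[idx] = rng.uniform(0.0, 1.0, size=n_noise)
    return noisy

stimulus = np.full((100, 100), 0.5)      # uniform mid-grey placeholder stimulus
noisy = add_pixel_noise(stimulus, 0.25)  # 25% of pixels replaced with noise

changed = np.count_nonzero(noisy != stimulus)
print(changed)
```

Varying `proportion` across trials would correspond to the graded noise levels whose effect on leftward bias the study measures.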
Affiliation(s)
- Jiaqing Chen
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Jagjot Kaur
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Hana Abbas
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Ming Wu
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Wenyi Luo
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Sinan Osman
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Matthias Niemeier
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Centre for Vision Research, York University, Toronto, Ontario, Canada
5
Guo LL, Patel N, Niemeier M. Emergent Synergistic Grasp-Like Behavior in a Visuomotor Joint Action Task: Evidence for Internal Forward Models as Building Blocks of Human Interactions. Front Hum Neurosci 2019;13:37. PMID: 30787873; PMCID: PMC6372946; DOI: 10.3389/fnhum.2019.00037.
Abstract
Central to a mechanistic understanding of the human mind is clarifying how cognitive functions arise from simpler sensory and motor functions. A longstanding assumption is that the forward models used by sensorimotor control to anticipate actions also serve to incorporate other people's actions and intentions, giving rise to sensorimotor interactions between people and even to abstract forms of interaction. That is, forward models could support core aspects of human social cognition. To test whether forward models can be used to coordinate interactions, here we measured the movements of pairs of participants in a novel joint action task. They collaborated to lift an object, each using the fingers of one hand to push against the object from opposite sides, just as a single person would use two hands to grasp the object bimanually. Perturbations of the object were applied randomly, as these are known to affect grasp-specific movement components in common grasping tasks. We found that co-actors quickly learned to make grasp-like movements whose grasp components were coordinated, on average, through observation of the peak deviation and velocity of their partner's trajectories. Our data suggest that co-actors adopted pre-existing bimanual grasp programs for their own body to implement forward models of their partner's effectors. This is consistent with the long-held assumption that human higher-order cognitive functions take advantage of sensorimotor forward models to plan social behavior. NEW AND NOTEWORTHY Taking the approach of sensorimotor neuroscience, our work provides evidence for the long-held belief that the coordination of physical as well as abstract interactions between people originates from sensorimotor control processes that form mental representations of people's bodies and actions, called forward models. With a new joint action paradigm and several new analysis approaches we show that people indeed coordinate their interactions based on forward models and mutual action observation.
Affiliation(s)
- Lin Lawrence Guo
- Department of Psychology, University of Toronto Scarborough, Scarborough, ON, Canada
- Namita Patel
- Department of Psychology, University of Toronto Scarborough, Scarborough, ON, Canada
- Matthias Niemeier
- Department of Psychology, University of Toronto Scarborough, Scarborough, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
6
Shared right-hemispheric representations of sensorimotor goals in dynamic task environments. Exp Brain Res 2019;237:977-987. PMID: 30694342; DOI: 10.1007/s00221-019-05478-2.
Abstract
Functional behaviour requires that we form goals that integrate sensory information about the world around us with suitable motor actions, such as when we plan to grab an object with a hand. However, much research has tested grasping in static scenarios where goals are pursued with repetitive movements, whereas dynamic contexts require goals to be pursued even when changes in the environment demand a change in the actions used to attain them. To study grasp goals in dynamic environments, here we employed a task where the goal remained the same but the execution of the movement changed: we primed participants to grasp objects with either their right or their left hand, and occasionally they had to switch to grasping with both. Switch costs should be minimal if grasp goal representations are used continuously, for example within the dominant left hemisphere; but remapped or re-computed goal representations should delay movements. We found that switching from right-hand to bimanual grasping delayed reaction times, but switching from left-hand to bimanual grasping did not. Further, control experiments showed that the lateralized switch costs were not caused by asymmetric inhibition between hemispheres or by switches between usual and unusual tasks. Our results show that the left hemisphere does not serve a general role in sensorimotor grasp goal representation. Instead, sensorimotor grasp goals appear to be represented at intermediate levels of abstraction, downstream from cognitive task representations yet upstream from the control of the grasping effectors.
7
Abstract
According to Weber's law, a fundamental principle of perception, visual resolution decreases linearly with an increase in object size. Previous studies have shown, however, that unlike perception, grasping does not adhere to Weber's law. Yet this research was limited by the fact that perception and grasping were examined over a restricted range of stimulus sizes, bounded by the maximum finger span. The purpose of the current study was to test the generality of the dissociation between perception and action in a different type of visuomotor task, bimanual grasping. Bimanual grasping also makes it possible to measure visual resolution during perception and action across a much wider range of stimulus sizes than unimanual grasping allows. Participants grasped or estimated the sizes of large objects using both hands. The results showed that bimanual grasps violated Weber's law throughout the entire movement trajectory. In contrast, just noticeable differences (JNDs) for perceptual estimations of the objects increased linearly with size, in agreement with Weber's law. The findings suggest that visuomotor control, across different types of action and a large range of sizes, is based on absolute rather than relative representations of object size.
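Weber's law as invoked in this abstract can be written out explicitly; in the notation below (symbols are illustrative, not the authors'), the just noticeable difference grows in proportion to stimulus magnitude:

```latex
\mathrm{JND}(S) = k\,S, \qquad k = \frac{\Delta S}{S} \quad \text{(the Weber fraction, constant across sizes)}
```

Linear growth of JNDs with object size $S$ is what the perceptual estimates showed, whereas bimanual grip apertures did not, consistent with absolute rather than relative coding of size.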
8
Le A, Vesia M, Yan X, Crawford JD, Niemeier M. Parietal area BA7 integrates motor programs for reaching, grasping, and bimanual coordination. J Neurophysiol 2017;117:624-636. PMID: 27832593; PMCID: PMC5288481; DOI: 10.1152/jn.00299.2016.
Abstract
Skillful interaction with the world requires that the brain use a multitude of sensorimotor programs and subroutines, such as those for reaching, grasping, and the coordination of the two body halves. However, it is unclear how these programs operate together. Networks for reaching, grasping, and bimanual coordination might converge in common brain areas. For example, Brodmann area 7 (BA7) is known to activate in disparate tasks involving each of the three types of movements separately. Here, we asked whether BA7 plays a key role in integrating coordinated reach-to-grasp movements of both arms together. To test this, we applied transcranial magnetic stimulation (TMS) to disrupt BA7 activity in the left and right hemispheres while human participants performed a bimanual size-perturbation grasping task, using the index and middle fingers of both hands to grasp a rectangular object whose orientation (and thus its grasp-relevant width) might or might not change. We found that TMS over the right BA7 during object perturbation disrupted the bimanual grasp and transport/coordination components, whereas TMS over the left BA7 disrupted unimanual grasps. These results show that the right BA7 is causally involved in integrating the reach-to-grasp movements of the two arms. NEW & NOTEWORTHY Our manuscript describes a role of human Brodmann area 7 (BA7) in the integration of multiple visuomotor programs for reaching, grasping, and bimanual coordination. Our results are the first to suggest that the right BA7 is critically involved in coordinating reach-to-grasp movements of the two arms. The results complement previous reports of right-hemisphere lateralization for bimanual grasps.
Affiliation(s)
- Ada Le
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario, Canada
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Michael Vesia
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Division of Neurology and Krembil Neuroscience Centre, Toronto Western Research Institute, University of Toronto, Toronto, Ontario, Canada
- Xiaogang Yan
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- J Douglas Crawford
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Neuroscience Graduate Diploma Program and Departments of Psychology, Biology, and Kinesiology & Health Sciences, York University, Toronto, Ontario, Canada
- Canadian Action and Perception Network, Toronto, Ontario, Canada
- Matthias Niemeier
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario, Canada
- Centre for Vision Research, York University, Toronto, Ontario, Canada
9
Le A, Niemeier M. Visual field preferences of object analysis for grasping with one hand. Front Hum Neurosci 2014;8:782. PMID: 25324766; PMCID: PMC4181231; DOI: 10.3389/fnhum.2014.00782.
Abstract
When we grasp an object with one hand, the hemisphere opposite the grasping hand predominantly guides the motor control of the grasp movement (Davare et al., 2007; Rice et al., 2007). However, it is unclear whether visual object analysis for grasp control relies more on inputs (a) from the contralateral than the ipsilateral visual field, (b) from one dominant visual field regardless of the grasping hand, or (c) from both visual fields equally. For bimanual grasping of a single object we have recently demonstrated a preference for the left visual field (Le and Niemeier, 2013a,b), consistent with a general right-hemisphere dominance for the sensorimotor control of bimanual grasps (Le et al., 2014). But visual field differences have never been tested for unimanual grasping. Therefore, here we asked right-handed participants to fixate to the left or right of an object and then grasp the object with their right or left hand using a precision grip. We found that participants grasping with their right hand performed better with objects in the right visual field: maximum grip apertures (MGAs) were more closely matched to object width and were smaller than for objects in the left visual field. In contrast, when people grasped with their left hand, the preference switched to the left visual field. What is more, MGA scaling with the left hand showed greater visual field differences than right-hand grasping. Our data suggest that visual object analysis for unimanual grasping shows a preference for visual information from the visual field ipsilateral to the grasping hand, and that the left hemisphere is better equipped to control grasps in both visual fields.
Affiliation(s)
- Ada Le
- Department of Psychology, University of Toronto Scarborough, Toronto, ON, Canada
10
Chen J, Niemeier M. Distractor removal amplifies spatial frequency-specific crossover of the attentional bias: a psychophysical and Monte Carlo simulation study. Exp Brain Res 2014;232:4001-4019. DOI: 10.1007/s00221-014-4082-y.