1. Guo LL, Niemeier M. Phase-Dependent Visual and Sensorimotor Integration of Features for Grasp Computations before and after Effector Specification. J Neurosci 2024; 44:e2208232024. PMID: 39019614; PMCID: PMC11326866; DOI: 10.1523/jneurosci.2208-23.2024.
Abstract
The simple act of viewing and grasping an object involves complex sensorimotor control mechanisms that have been shown to vary as a function of multiple task features, such as object size, shape, weight, and wrist orientation. However, these features have mostly been studied in isolation. In contrast, given the nonlinearity of motor control, grasp computations require multiple features to be incorporated concurrently. The present study therefore tested the hypothesis that grasp computations integrate multiple task features superadditively, particularly when these features are relevant for the same action phase. We asked male and female human participants to reach to grasp objects of different shapes and sizes with different wrist orientations. We also delayed movement onset using auditory signals that specified which effector to use. Using electroencephalography and representational dissimilarity analysis to map the time course of cortical activity, we found that grasp computations formed superadditive, integrated representations of grasp features during different planning phases of grasping. Shape-by-size representations and size-by-orientation representations occurred before and after effector specification, respectively, and could not be explained by single-feature models. These observations are consistent with the brain performing different preparatory, phase-specific computations: visual object analysis to identify grasp points at abstract visual levels, followed by downstream sensorimotor preparatory computations for reach-to-grasp trajectories. Our results suggest that the brain adheres to the needs of nonlinear motor control for integration. Furthermore, they show that examining the superadditive influence of integrated representations can serve as a novel lens for mapping the computations underlying sensorimotor control.
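To make the superadditivity logic concrete, here is a minimal Python sketch, not the authors' pipeline: the condition labels, model RDMs, and simulated neural dissimilarities are all hypothetical. A conjunction (shape-by-size) model is superadditive to the extent that it explains variance in the neural RDM beyond the additive combination of the single-feature models.

```python
import numpy as np
from itertools import product
from sklearn.linear_model import LinearRegression

# Conditions: all combinations of shape and size (labels hypothetical).
shapes, sizes = [0, 1, 2], [0, 1]
conds = list(product(shapes, sizes))
n = len(conds)

def model_rdm(idx):
    # 1 where two conditions differ on the given feature, else 0.
    return np.array([[float(a[idx] != b[idx]) for b in conds] for a in conds])

shape_rdm = model_rdm(0)
size_rdm = model_rdm(1)
conj_rdm = shape_rdm * size_rdm        # conditions differing on BOTH features

iu = np.triu_indices(n, k=1)           # vectorize the upper triangles
X_add = np.column_stack([shape_rdm[iu], size_rdm[iu]])
X_full = np.column_stack([shape_rdm[iu], size_rdm[iu], conj_rdm[iu]])

# Simulated time-resolved neural RDMs (time x condition x condition); in
# practice these would be EEG pattern dissimilarities per time point.
rng = np.random.default_rng(1)
neural = rng.random((100, n, n))
neural = (neural + neural.transpose(0, 2, 1)) / 2

# Superadditivity per time point: variance explained by the full model
# beyond the additive single-feature combination.
gain = []
for t in range(neural.shape[0]):
    y = neural[t][iu]
    r2_add = LinearRegression().fit(X_add, y).score(X_add, y)
    r2_full = LinearRegression().fit(X_full, y).score(X_full, y)
    gain.append(r2_full - r2_add)

print(np.round(gain[:5], 3))
```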
Affiliation(s)
- Lin Lawrence Guo
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Matthias Niemeier
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Centre for Vision Research, York University, Toronto, Ontario M4N 3M6, Canada
2. Lee N, Guo LL, Nestor A, Niemeier M. Computation on Demand: Action-Specific Representations of Visual Task Features Arise during Distinct Movement Phases. J Neurosci 2024; 44:e2100232024. PMID: 38789263; PMCID: PMC11255428; DOI: 10.1523/jneurosci.2100-23.2024.
Abstract
The intention to act influences the computations of various task-relevant features. However, little is known about the time course of these computations. Furthermore, it is commonly held that they are governed by conjunctive neural representations of the features. Support for this view, however, comes from paradigms that arbitrarily combine task features and affordances and therefore require representations in working memory. The present study therefore used electroencephalography and a well-rehearsed task with features that afford minimal working memory representations to investigate the temporal evolution of feature representations and their potential integration in the brain. Female and male human participants grasped objects or touched them with a knuckle. Objects had different shapes and were made of heavy or light materials; shape and weight were relevant for grasping but not for "knuckling." Multivariate analysis showed that representations of object shape were similar for grasping and knuckling. However, only for grasping did early shape representations reactivate at later phases of grasp planning, suggesting that sensorimotor control signals feed back to the early visual cortex. Grasp-specific representations of material/weight arose only during grasp execution, after object contact during the load phase. A trend for integrated representations of shape and material also became grasp-specific, but only briefly around movement onset. These results suggest that the brain generates action-specific representations of relevant features as required for the different subcomponents of its action computations. Our results argue against the view that goal-directed actions inevitably join all features of a task into a sustained and unified neural representation.
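As a concrete illustration of the reactivation analysis, here is a minimal temporal-generalization sketch with simulated data and a plain LDA decoder standing in for whatever classifier was actually used: a decoder trained on early time points that transfers to late time points is the signature of early representations reactivating.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_trials, n_ch, n_time = 200, 30, 60
X = rng.standard_normal((n_trials, n_ch, n_time))  # trials x channels x time
y = rng.integers(0, 2, n_trials)                   # shape label per trial

# Temporal generalization matrix: train at one time point, test at all others.
gen = np.zeros((n_time, n_time))
half = n_trials // 2
for t_train in range(n_time):
    clf = LinearDiscriminantAnalysis().fit(X[:half, :, t_train], y[:half])
    for t_test in range(n_time):
        gen[t_train, t_test] = clf.score(X[half:, :, t_test], y[half:])

# Above-chance transfer from early training times to late testing times
# (off-diagonal cells) would indicate reactivation.
print(gen[5, 50])
```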
Affiliation(s)
- Nina Lee
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Lin Lawrence Guo
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Adrian Nestor
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Matthias Niemeier
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Centre for Vision Research, York University, Toronto, Ontario M4N 3M6, Canada
3. Hooks K, El-Said R, Fu Q. Decoding reach-to-grasp from EEG using classifiers trained with data from the contralateral limb. Front Hum Neurosci 2023; 17:1302647. PMID: 38021246; PMCID: PMC10663285; DOI: 10.3389/fnhum.2023.1302647.
Abstract
Fundamental to human movement is the ability to interact with objects in our environment. How one reaches for an object depends on the object's shape and the intended interaction afforded by the object, e.g., grasp and transport. Extensive research has revealed that the motor intention of reach-to-grasp can be decoded from cortical activity using EEG signals. The goal of the present study was to determine the extent to which information encoded in EEG signals is shared between the two limbs, enabling cross-hand decoding. We performed an experiment in which human subjects (n = 10) interacted with a novel object with multiple affordances using either the right or the left hand. The object had two vertical handles attached to a horizontal base. A visual cue instructed which action (lift or touch) to perform and whether the left or right handle should be used on each trial. EEG was recorded and processed from bilateral frontal-central-parietal regions (30 channels). We trained LDA classifiers using data from trials performed by one limb and tested classification accuracy using data from trials performed by the contralateral limb. We found that the type of hand-object interaction could be decoded with approximately 59% and 69% peak accuracy in the planning and execution stages, respectively. Interestingly, the decoding accuracy of reaching direction depended on whether EEG channels in the testing dataset were spatially mirrored and whether directions were labeled in extrinsic (object-centered) or intrinsic (body-centered) coordinates.
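A minimal sketch of the cross-limb transfer logic follows, assuming a simple trial-by-channel feature layout rather than the authors' exact preprocessing: train an LDA on one hand's trials and test on the other hand's trials, with and without a hypothetical channel mirror map.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_trials, n_ch = 100, 30
X_right = rng.standard_normal((n_trials, n_ch))  # right-hand trial features
y_right = rng.integers(0, 2, n_trials)           # e.g., lift vs. touch
X_left = rng.standard_normal((n_trials, n_ch))   # left-hand trial features
y_left = rng.integers(0, 2, n_trials)

# Hypothetical mirror map: index i -> homologous channel over the opposite
# hemisphere (identity here; a real montage defines the actual pairs).
mirror = np.arange(n_ch)

# Train on one limb, test on the contralateral limb.
clf = LinearDiscriminantAnalysis().fit(X_right, y_right)
acc_plain = clf.score(X_left, y_left)
acc_mirrored = clf.score(X_left[:, mirror], y_left)
print(acc_plain, acc_mirrored)
```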
Affiliation(s)
- Kevin Hooks
- Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL, United States
- Refaat El-Said
- College of Medicine, University of Central Florida, Orlando, FL, United States
- Qiushi Fu
- Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL, United States
- Biionix Cluster, University of Central Florida, Orlando, FL, United States
4. Rens G, Figley TD, Gallivan JP, Liu Y, Culham JC. Grasping with a Twist: Dissociating Action Goals from Motor Actions in Human Frontoparietal Circuits. J Neurosci 2023; 43:5831-5847. PMID: 37474309; PMCID: PMC10423047; DOI: 10.1523/jneurosci.0009-23.2023.
Abstract
In daily life, prehension is typically not the end goal of hand-object interactions but a precursor for manipulation. Nevertheless, functional MRI (fMRI) studies investigating manual manipulation have primarily relied on prehension as the end goal of an action. Here, we used slow event-related fMRI to investigate differences in neural activation patterns between prehension in isolation and prehension for object manipulation. Sixteen participants (seven males and nine females) were instructed either to simply grasp the handle of a rotatable dial (isolated prehension) or to grasp and turn it (prehension for object manipulation). We used representational similarity analysis (RSA) to investigate whether the experimental conditions could be discriminated from each other based on differences in task-related brain activation patterns. We also used temporal multivoxel pattern analysis (tMVPA) to examine the evolution of regional activation patterns over time. Importantly, we were able to differentiate isolated prehension from prehension for manipulation based on activation patterns in the early visual cortex, the caudal intraparietal sulcus (cIPS), and the superior parietal lobule (SPL). Our findings indicate that object manipulation extends beyond the putative cortical grasping network (anterior intraparietal sulcus, premotor and motor cortices) to include the superior parietal lobule and early visual cortex.
SIGNIFICANCE STATEMENT: A simple act such as turning an oven dial requires not only that the CNS encode the initial state (starting dial orientation) of the object but also the appropriate posture with which to grasp it to achieve the desired end state (final dial orientation) and the motor commands to achieve that state. Using advanced temporal neuroimaging analysis techniques, we reveal how such actions unfold over time and how they differ between object manipulation (turning a dial) and grasping alone. We find that a combination of brain areas implicated in visual processing and sensorimotor integration can distinguish between the complex and simple tasks during planning, with neural patterns that approximate those during the actual execution of the action.
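To illustrate the kind of condition discrimination RSA affords, here is a minimal sketch with simulated per-run activation patterns (all numbers hypothetical, not the authors' data): two conditions are discriminable in a region when split-half pattern correlations are higher within than between conditions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_vox = 8, 500
base_grasp = rng.standard_normal(n_vox)   # condition-specific mean pattern
base_turn = rng.standard_normal(n_vox)
grasp = base_grasp + 0.5 * rng.standard_normal((n_runs, n_vox))
turn = base_turn + 0.5 * rng.standard_normal((n_runs, n_vox))

def mean_corr(a, b):
    # Average pattern correlation across all run pairs of two sets.
    return np.mean([np.corrcoef(x, y)[0, 1] for x in a for y in b])

# Split runs in half so within- and between-condition correlations are
# computed across independent data.
within = (mean_corr(grasp[:4], grasp[4:]) + mean_corr(turn[:4], turn[4:])) / 2
between = (mean_corr(grasp[:4], turn[4:]) + mean_corr(turn[:4], grasp[4:])) / 2
print(within - between)   # > 0: the region distinguishes the two tasks
```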
Affiliation(s)
- Guy Rens
- Department of Psychology, University of Western Ontario, London, Ontario N6A 5C2, Canada
- Laboratorium voor Neuro- en Psychofysiologie, Department of Neurosciences, Katholieke Universiteit Leuven, Leuven 3000, Belgium
- Leuven Brain Institute, Katholieke Universiteit Leuven, Leuven 3000, Belgium
- Teresa D Figley
- Graduate Program in Neuroscience, University of Western Ontario, London, Ontario N6A 5C2, Canada
- Jason P Gallivan
- Departments of Psychology & Biomedical and Molecular Sciences, Centre for Neuroscience Studies, Queen's University, Kingston, Ontario K7L 3N6, Canada
- Yuqi Liu
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- Institute of Neuroscience, Chinese Academy of Sciences Center for Excellence in Brain Sciences and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Jody C Culham
- Department of Psychology, University of Western Ontario, London, Ontario N6A 5C2, Canada
- Graduate Program in Neuroscience, University of Western Ontario, London, Ontario N6A 5C2, Canada
5. Hua L, Gao F, Leong C, Yuan Z. Neural decoding dissociates perceptual grouping between proximity and similarity in visual perception. Cereb Cortex 2022; 33:3803-3815. PMID: 35973163; DOI: 10.1093/cercor/bhac308.
Abstract
Unlike the case of a single grouping principle, the cognitive and neural mechanisms underlying the dissociation between two or more grouping principles remain unclear. In this study, a dimotif lattice paradigm, which can adjust the strength of one grouping principle, was used to examine how, when, and where two grouping principles (proximity and similarity) are processed in the human brain. Our psychophysical findings demonstrated that the similarity grouping effect was enhanced as the proximity effect was reduced when the grouping cues of proximity and similarity were presented simultaneously. In addition, EEG decoding using time-resolved multivariate pattern analysis (MVPA) was performed to reveal the specific cognitive patterns involved in each principle. More importantly, the dissociation between the two grouping principles arose within three time windows: early-stage, proximity-defined arrangement of local visual elements in the middle occipital cortex; middle-stage feature selection modulating low-level visual cortex, such as the inferior occipital cortex and fusiform cortex; and high-level cognitive integration in parietal areas supporting decisions for a specific grouping preference. In addition, brain responses were highly correlated with behavioral grouping. Our study thus provides direct evidence for a link between the human perceptual space of grouping decision-making and the neural space of brain activation patterns.
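The reported brain-behavior link can be illustrated with a minimal sketch (all values simulated, condition count hypothetical): correlate per-condition neural decoding evidence for similarity grouping with the behavioral proportion of similarity reports across conditions.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_conditions = 12   # e.g., proximity strengths in the dimotif lattice

# Decoder evidence for "similarity" per condition, and the matching
# behavioral proportion of similarity reports (simulated to covary).
neural_evidence = np.sort(rng.random(n_conditions))
behavior = np.clip(neural_evidence + 0.1 * rng.standard_normal(n_conditions), 0, 1)

# Rank correlation between neural and behavioral grouping measures.
rho, p = spearmanr(neural_evidence, behavior)
print(f"rho={rho:.2f}, p={p:.3f}")
```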
Affiliation(s)
- Lin Hua
- Centre for Cognitive and Brain Sciences, N21 Research Building, University of Macau, Avenida da Universidade, Taipa, Macau SAR 999078, China
- Faculty of Health Sciences, E12 Building, University of Macau, Avenida da Universidade, Taipa, Macau SAR 999078, China
- Fei Gao
- Centre for Cognitive and Brain Sciences, N21 Research Building, University of Macau, Avenida da Universidade, Taipa, Macau SAR 999078, China
- Chantat Leong
- Centre for Cognitive and Brain Sciences, N21 Research Building, University of Macau, Avenida da Universidade, Taipa, Macau SAR 999078, China
- Faculty of Health Sciences, E12 Building, University of Macau, Avenida da Universidade, Taipa, Macau SAR 999078, China
- Zhen Yuan
- Centre for Cognitive and Brain Sciences, N21 Research Building, University of Macau, Avenida da Universidade, Taipa, Macau SAR 999078, China
- Faculty of Health Sciences, E12 Building, University of Macau, Avenida da Universidade, Taipa, Macau SAR 999078, China
6. Guo LL, Oghli YS, Frost A, Niemeier M. Multivariate Analysis of Electrophysiological Signals Reveals the Time Course of Precision Grasps Programs: Evidence for Nonhierarchical Evolution of Grasp Control. J Neurosci 2021; 41:9210-9222. PMID: 34551938; PMCID: PMC8570828; DOI: 10.1523/jneurosci.0992-21.2021.
Abstract
Current understanding of the neural processes underlying human grasping suggests that grasp computations involve gradients of higher- to lower-level representations and, relatedly, visual-to-motor processes. However, it is unclear whether these processes evolve in a strictly canonical manner from higher to intermediate to lower levels, given that this knowledge relies importantly on functional imaging, which lacks temporal resolution. To examine grasping in fine temporal detail, here we used multivariate EEG analysis. We asked participants to grasp objects while controlling the time at which crucial elements of grasp programs were specified. We first specified the orientation with which participants should grasp objects, and only after a delay did we instruct them which effector to use, either the right or the left hand. We also asked participants to grasp with both hands, because bimanual and left-hand grasping share intermediate-level grasp representations. We observed that grasp programs evolved in a canonical manner from visual representations, which were independent of effectors, to motor representations that distinguished between effectors. However, intermediate representations, which only partially distinguished between effectors, arose after representations that distinguished among all effector types. Our results show that grasp computations do not proceed in a strictly canonical hierarchical fashion, highlighting the importance of the fine temporal resolution of EEG for a comprehensive understanding of human grasp control.
SIGNIFICANCE STATEMENT: A long-standing assumption about grasp computations is that grasp representations progress from higher- to lower-level control in a regular, or canonical, fashion. Here, we combined EEG and multivariate pattern analysis to characterize the temporal dynamics of grasp representations while participants viewed objects and were subsequently cued to execute a unimanual or bimanual grasp. Interrogation of the temporal dynamics revealed that lower-level effector representations emerged before intermediate levels of grasp representations, suggesting a partially noncanonical progression from higher- to lower- and then to intermediate-level grasp control.
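To illustrate how the ordering of effector representations can be read off decoding time courses, here is a minimal sketch with simulated data, not the authors' pipeline: decode every pair of effector conditions at each time point and compare when each pair first exceeds an arbitrary accuracy threshold.

```python
import numpy as np
from itertools import combinations
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per, n_ch, n_time = 60, 30, 50
# Trials x channels x time per effector condition (simulated).
data = {eff: rng.standard_normal((n_per, n_ch, n_time))
        for eff in ("left", "right", "bimanual")}

# Pairwise decoding time course; the first above-threshold time point
# orders the emergence of each effector distinction.
for a, b in combinations(data, 2):
    X_all = np.concatenate([data[a], data[b]])
    y = np.array([0] * n_per + [1] * n_per)
    accs = [cross_val_score(LinearDiscriminantAnalysis(),
                            X_all[:, :, t], y, cv=5).mean()
            for t in range(n_time)]
    onset = next((t for t, acc in enumerate(accs) if acc > 0.6), None)
    print(a, "vs", b, "onset:", onset)
```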
Affiliation(s)
- Lin Lawrence Guo
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Yazan Shamli Oghli
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Adam Frost
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Matthias Niemeier
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Centre for Vision Research, York University, Toronto, Ontario M4N 3M6, Canada
- Vision: Science to Applications, York University, Toronto, Ontario M3J 1P3, Canada