1
MacNeil RR, Enns JT. The "What" and "How" of Pantomime Actions. Vision (Basel) 2024; 8:58. [PMID: 39449391] [PMCID: PMC11503306] [DOI: 10.3390/vision8040058] [Received: 07/02/2024] [Revised: 09/17/2024] [Accepted: 09/23/2024] [Indexed: 10/26/2024] Open Access
Abstract
Pantomimes are human actions that simulate ideas, objects, and events, commonly used in conversation, performance art, and gesture-based interfaces for computing and controlling robots. Yet, their underlying neurocognitive mechanisms are not well understood. In this review, we examine pantomimes through two parallel lines of research: (1) the two visual systems (TVS) framework for visually guided action, and (2) the neuropsychological literature on limb apraxia. Historically, the TVS framework has considered pantomime actions as expressions of conscious perceptual processing in the ventral stream, but an emerging view is that they are jointly influenced by ventral and dorsal stream processing. Within the apraxia literature, pantomimes were historically viewed as learned motor schemas, but there is growing recognition that they include creative and improvised actions. Both literatures now recognize that pantomimes are often created spontaneously, sometimes drawing on memory and always requiring online cognitive control. By highlighting this convergence of ideas, we aim to encourage greater collaboration across these two research areas, in an effort to better understand these uniquely human behaviors.
Affiliation(s)
- Raymond R. MacNeil
- Department of Psychology, The University of British Columbia, Vancouver, BC V6T 1Z4, Canada;
2
Kamohara C, Nakajima M, Nozaki Y, Ieda T, Kawamura K, Horikoshi K, Miyahara R, Akiba C, Ogino I, Karagiozov KL, Miyajima M, Kondo A, Sakamoto M. A new test for evaluation of marginal cognitive function deficits in idiopathic normal pressure hydrocephalus through expressing texture recognition by sound symbolic words. Front Aging Neurosci 2024; 16:1456242. [PMID: 39360232] [PMCID: PMC11445636] [DOI: 10.3389/fnagi.2024.1456242] [Received: 06/28/2024] [Accepted: 09/03/2024] [Indexed: 10/04/2024] Open Access
Abstract
Introduction: The number of dementia patients is increasing with population aging, and preclinical detection of dementia is essential for access to adequate treatment. In previous studies, dementia patients showed texture recognition difficulties. Onomatopoeia, or sound symbolic words (SSW), are intuitively associated with texture impressions, are less likely to be affected by aphasia, and allow descriptions of material perception to be obtained easily. In this study, we aimed to create a test of texture recognition ability, expressed through SSW, to detect the presence of mild cognitive disorders.
Methods: The sound symbolic words texture recognition test (SSWTRT) consists of 12 close-up photos of various materials; participants chose, out of 8 choices in Japanese, the SSW that best described the surface texture in each image. All 102 participants, seen at Juntendo University Hospital from January to August 2023, had a diagnosis of possible iNPH (mean age 77.9, SD 6.7). Answers were scored on a scale of 0 to 1. Neuropsychological assessments included the MMSE, FAB, and, from the EU-iNPH Grading Scale (GS), the Rey Auditory Verbal Learning Test (RAVLT), Pegboard Test, and Stroop Test. In study 1, the correlations between the SSWTRT and the neuropsychological tests were analyzed. In study 2, participants were divided into two groups, a Normal Cognition group (Group A, n = 37) with MMSE scores of 28 or above and a Mild Cognitive Impairment group (Group B, n = 50) with scores from 22 to 27, and the test's ability to discriminate between them was analyzed.
Results: In study 1, the total SSWTRT score correlated moderately with the neuropsychological test results. In study 2, SSWTRT scores differed significantly between Groups A and B, and ROC analysis showed that the SSWTRT could discriminate between the normal and mildly impaired cognition groups.
Conclusion: The SSWTRT reflects the results of neuropsychological assessments of cognitive deterioration and was able to detect early cognitive deficits. The test draws not only on visual perception but likely also on verbal fluency and memory, which are frontal lobe functions.
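The ROC analysis reported above summarizes how well a score separates two groups: the area under the curve (AUC) equals the probability that a randomly chosen participant from the higher-scoring group outranks a randomly chosen participant from the other. A minimal sketch follows; the scores are invented and the rank-based AUC computation is a standard equivalent of ROC analysis, not the authors' code.

```python
def roc_auc(pos_scores, neg_scores):
    """Area under the ROC curve, computed as the probability that a
    randomly drawn 'positive' score exceeds a randomly drawn 'negative'
    score (ties count 0.5). 0.5 = chance, 1.0 = perfect separation."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical example: test scores for a cognitively normal group
# (expected to score higher) versus a mildly impaired group.
normal = [0.92, 0.85, 0.80, 0.76]
impaired = [0.71, 0.66, 0.80, 0.55]
auc = roc_auc(normal, impaired)
```

An AUC near 1 would indicate that the score discriminates the groups well; whether the separation is reliable would still require a statistical test on the ROC curve.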
Affiliation(s)
- Chihiro Kamohara
- Research Institute for Diseases of Old Age, Juntendo University School of Medicine, Tokyo, Japan
- Department of Neurosurgery, Juntendo University School of Medicine, Tokyo, Japan
- Madoka Nakajima
- Department of Neurosurgery, Juntendo University School of Medicine, Tokyo, Japan
- Yuji Nozaki
- Department of Informatics, Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Taiki Ieda
- Department of Informatics, Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Kaito Kawamura
- Department of Neurosurgery, Juntendo University School of Medicine, Tokyo, Japan
- Department of Neurosurgery, Saiseikai Kawaguchi General Hospital, Saitama, Japan
- Kou Horikoshi
- Department of Neurosurgery, Juntendo University School of Medicine, Tokyo, Japan
- Ryo Miyahara
- Department of Neurosurgery, Juntendo University School of Medicine, Tokyo, Japan
- Chihiro Akiba
- Department of Neurosurgery, Juntendo Koto Geriatric Medical Center, Tokyo, Japan
- Ikuko Ogino
- Department of Neurosurgery, Juntendo University School of Medicine, Tokyo, Japan
- Masakazu Miyajima
- Department of Neurosurgery, Juntendo Koto Geriatric Medical Center, Tokyo, Japan
- Akihide Kondo
- Department of Neurosurgery, Juntendo University School of Medicine, Tokyo, Japan
- Maki Sakamoto
- Department of Informatics, Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
3
Borra E, Gerbella M, Rozzi S, Luppino G. Neural substrate for the engagement of the ventral visual stream in motor control in the macaque monkey. Cereb Cortex 2024; 34:bhae354. [PMID: 39227311] [DOI: 10.1093/cercor/bhae354] [Received: 04/23/2024] [Revised: 07/05/2024] [Accepted: 08/16/2024] [Indexed: 09/05/2024] Open Access
Abstract
The present study aimed to describe the cortical connectivity of a sector located in the ventral bank of the superior temporal sulcus in the macaque (intermediate areas TEa and TEm [TEa/m]), which appears to represent the major source of output of the ventral visual stream outside the temporal lobe. The retrograde tracer wheat germ agglutinin was injected into the intermediate TEa/m in four macaque monkeys. The results showed that 58-78% of labeled cells were located within ventral visual stream areas other than the TE complex. Outside the ventral visual stream, there were connections with the memory-related medial temporal area 36 and the parahippocampal cortex; with orbitofrontal areas involved in encoding the subjective value of stimuli for action selection; with eye- or hand-movement-related parietal (LIP, AIP, and SII) and prefrontal (12r, 45A, and 45B) areas; and with a hand-related dysgranular insula field. Altogether, these data provide a solid substrate for the engagement of the ventral visual stream in large-scale cortical networks for skeletomotor or oculomotor control. Accordingly, the role of the ventral visual stream may go beyond purely perceptual processes and may also contribute to the neural mechanisms underlying the control of voluntary motor behavior.
Affiliation(s)
- Elena Borra
- Dipartimento di Medicina e Chirurgia, Unità di Neuroscienze, Università di Parma, Parma, Italy
- Marzio Gerbella
- Dipartimento di Medicina e Chirurgia, Unità di Neuroscienze, Università di Parma, Parma, Italy
- Stefano Rozzi
- Dipartimento di Medicina e Chirurgia, Unità di Neuroscienze, Università di Parma, Parma, Italy
- Giuseppe Luppino
- Dipartimento di Medicina e Chirurgia, Unità di Neuroscienze, Università di Parma, Parma, Italy
4
Lee N, Guo LL, Nestor A, Niemeier M. Computation on Demand: Action-Specific Representations of Visual Task Features Arise during Distinct Movement Phases. J Neurosci 2024; 44:e2100232024. [PMID: 38789263] [PMCID: PMC11255428] [DOI: 10.1523/jneurosci.2100-23.2024] [Received: 11/08/2023] [Revised: 05/07/2024] [Accepted: 05/15/2024] [Indexed: 05/26/2024] Open Access
Abstract
The intention to act influences the computations of various task-relevant features. However, little is known about the time course of these computations. Furthermore, it is commonly held that these computations are governed by conjunctive neural representations of the features. However, support for this view comes from paradigms that arbitrarily combine task features and affordances, thus requiring representations in working memory. Therefore, the present study used electroencephalography and a well-rehearsed task with features that afford minimal working memory representations to investigate the temporal evolution of feature representations and their potential integration in the brain. Female and male human participants grasped objects or touched them with a knuckle. Objects had different shapes and were made of heavy or light materials, with shape and weight being relevant for grasping but not for "knuckling." Multivariate analysis showed that representations of object shape were similar for grasping and knuckling. However, only for grasping did early shape representations reactivate at later phases of grasp planning, suggesting that sensorimotor control signals feed back to the early visual cortex. Grasp-specific representations of material/weight arose only during grasp execution, after object contact, during the load phase. A trend for integrated representations of shape and material also became grasp-specific, but only briefly at movement onset. These results suggest that the brain generates action-specific representations of relevant features as required for the different subcomponents of its action computations. Our results argue against the view that goal-directed actions inevitably join all features of a task into a sustained and unified neural representation.
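Time-resolved multivariate analyses of the kind referenced above typically train a classifier on the response pattern at each time point. A generic sketch only, using a nearest-centroid decoder on invented two-feature patterns, not the authors' pipeline:

```python
def centroid(rows):
    # mean pattern across trials (element-wise)
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def nearest_centroid_acc(train_a, train_b, test_a, test_b):
    """Train a nearest-centroid decoder on two classes of response
    patterns and return its accuracy on held-out test patterns."""
    ca, cb = centroid(train_a), centroid(train_b)

    def sqdist(x, c):
        return sum((xi - ci) ** 2 for xi, ci in zip(x, c))

    correct = sum(1 for x in test_a if sqdist(x, ca) < sqdist(x, cb))
    correct += sum(1 for x in test_b if sqdist(x, cb) < sqdist(x, ca))
    return correct / (len(test_a) + len(test_b))

# Invented patterns for two stimulus classes at one time point.
acc = nearest_centroid_acc(
    train_a=[[0.0, 0.0], [0.0, 1.0]],
    train_b=[[5.0, 5.0], [5.0, 6.0]],
    test_a=[[0.0, 0.5]],
    test_b=[[5.0, 5.5]],
)
```

In a time-resolved analysis this would be repeated at every time point (and, for cross-condition generalization, trained on one task and tested on the other), yielding a decoding time course.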
Affiliation(s)
- Nina Lee
- Department of Psychology at Scarborough, University of Toronto, Scarborough, Ontario M1C1A4, Canada
- Lin Lawrence Guo
- Department of Psychology at Scarborough, University of Toronto, Scarborough, Ontario M1C1A4, Canada
- Adrian Nestor
- Department of Psychology at Scarborough, University of Toronto, Scarborough, Ontario M1C1A4, Canada
- Matthias Niemeier
- Department of Psychology at Scarborough, University of Toronto, Scarborough, Ontario M1C1A4, Canada
- Centre for Vision Research, York University, Toronto, Ontario M4N3M6, Canada
5
Mahon BZ, Almeida J. Reciprocal interactions among parietal and occipito-temporal representations support everyday object-directed actions. Neuropsychologia 2024; 198:108841. [PMID: 38430962] [PMCID: PMC11498102] [DOI: 10.1016/j.neuropsychologia.2024.108841] [Received: 10/17/2023] [Revised: 02/19/2024] [Accepted: 02/25/2024] [Indexed: 03/05/2024]
Abstract
Everyday interactions with common manipulable objects require the integration of conceptual knowledge about objects and actions with real-time sensory information about the position, orientation and volumetric structure of the grasp target. The ability to successfully interact with everyday objects involves analysis of visual form and shape, surface texture, material properties, conceptual attributes such as identity, function and typical context, and visuomotor processing supporting hand transport, grasp form, and object manipulation. Functionally separable brain regions across the dorsal and ventral visual pathways support the processing of these different object properties and, in concert, are necessary for functional object use. Object-directed grasps display end-state-comfort: they anticipate in form and force the shape and material properties of the grasp target, and how the object will be manipulated after it is grasped. End-state-comfort is the default for everyday interactions with manipulable objects and implies integration of information across the ventral and dorsal visual pathways. We propose a model of how visuomotor and action representations in parietal cortex interact with object representations in ventral and lateral occipito-temporal cortex. One pathway, from the supramarginal gyrus to the middle and inferior temporal gyrus, supports the integration of action-related information, including hand and limb position (supramarginal gyrus), with conceptual attributes and an appreciation of the action goal (middle temporal gyrus). A second pathway, from posterior IPS to the fusiform gyrus and collateral sulcus, supports the integration of grasp parameters (IPS) with the surface texture and material properties (e.g., weight distribution) of the grasp target. Reciprocal interactions among these regions are part of a broader network of regions that support everyday functional object interactions.
Affiliation(s)
- Bradford Z Mahon
- Department of Psychology, Carnegie Mellon University, USA; Neuroscience Institute, Carnegie Mellon University, USA; Department of Neurosurgery, University of Rochester Medical Center, USA
- Jorge Almeida
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal
6
Fairchild GT, Holler DE, Fabbri S, Gomez MA, Walsh-Snow JC. Naturalistic Object Representations Depend on Distance and Size Cues. bioRxiv [Preprint] 2024:2024.03.16.585308. [PMID: 38559105] [PMCID: PMC10980039] [DOI: 10.1101/2024.03.16.585308] [Indexed: 04/04/2024]
Abstract
Egocentric distance and real-world size are important cues for object perception and action. Nevertheless, most studies of human vision rely on two-dimensional pictorial stimuli that convey ambiguous distance and size information. Here, we use fMRI to test whether pictures are represented differently in the human brain from real, tangible objects that convey unambiguous distance and size cues. Participants directly viewed stimuli in two display formats (real objects and matched printed pictures of those objects) presented at different egocentric distances (near and far). We measured the effects of format and distance on fMRI response amplitudes and response patterns. We found that fMRI response amplitudes in the lateral occipital and posterior parietal cortices were stronger overall for real objects than for pictures. In these areas and many others, including regions involved in action guidance, responses to real objects were stronger for near vs. far stimuli, whereas distance had little effect on responses to pictures, suggesting that distance determines relevance to action for real objects, but not for pictures. Although stimulus distance especially influenced response patterns in dorsal areas that operate in the service of visually guided action, distance also modulated representations in ventral cortex, where object responses are thought to remain invariant across contextual changes. We observed object size representations for both stimulus formats in ventral cortex but predominantly only for real objects in dorsal cortex. Together, these results demonstrate that whether brain responses reflect physical object characteristics depends on whether the experimental stimuli convey unambiguous information about those characteristics.
Significance Statement Classic frameworks of vision attribute perception of inherent object characteristics, such as size, to the ventral visual pathway, and processing of spatial characteristics relevant to action, such as distance, to the dorsal visual pathway. However, these frameworks are based on studies that used projected images of objects whose actual size and distance from the observer were ambiguous. Here, we find that when object size and distance information in the stimulus is less ambiguous, these characteristics are widely represented in both visual pathways. Our results provide valuable new insights into the brain representations of objects and their various physical attributes in the context of naturalistic vision.
7
Gillies G, Fukuda K, Cant JS. The role of visual working memory in capacity-limited cross-modal ensemble coding. Neuropsychologia 2024; 192:108745. [PMID: 38096982] [DOI: 10.1016/j.neuropsychologia.2023.108745] [Received: 06/28/2023] [Revised: 12/01/2023] [Accepted: 12/04/2023] [Indexed: 12/19/2023]
Abstract
Ensemble coding refers to the brain's ability to rapidly extract summary statistics, such as average size and average cost, from a large set of visual stimuli. Although ensemble coding is thought to circumvent a capacity limit of visual working memory (VWM), we recently observed a VWM-like capacity limit in an ensemble task where observers extracted the average sweetness of groups of food pictures (i.e., they could only integrate information from four out of six available items), suggesting the involvement of VWM in this novel form of cross-modal ensemble coding. Therefore, across two experiments we investigated whether this cross-modal ensemble capacity limit could be explained by individual differences in VWM processing. To test this, observers performed both an ensemble task and a VWM task, and we determined 1) how much information they integrated into their ensemble percepts, and 2) how much information they remembered from those displays. Interestingly, we found that individual differences in VWM capacity did not explain differences in performance on the ensemble coding task (i.e., high-capacity individuals did not have significantly higher "ensemble abilities" than low-capacity individuals). While our data cannot definitively state whether or not VWM is necessary to perform the ensemble task, we conclude that it is certainly not sufficient to support this cognitive process. We speculate that the capacity limit may be explained by 1) a bottleneck at the perceptual stage (i.e., a failure to process multiple visual features across multiple items, as there are no singular features that convey taste), or 2) the interaction of multiple cognitive systems (e.g., VWM, gustatory working memory, long-term memory). Our results highlight the importance of examining ensemble perception across multiple sensory and cognitive domains to provide a clearer picture of the mechanisms underlying everyday behavior.
8
Klein LK, Maiello G, Stubbs K, Proklova D, Chen J, Paulun VC, Culham JC, Fleming RW. Distinct Neural Components of Visually Guided Grasping during Planning and Execution. J Neurosci 2023; 43:8504-8514. [PMID: 37848285] [PMCID: PMC10711727] [DOI: 10.1523/jneurosci.0335-23.2023] [Received: 02/22/2023] [Revised: 07/18/2023] [Accepted: 09/06/2023] [Indexed: 10/19/2023] Open Access
Abstract
Selecting suitable grasps on three-dimensional objects is a challenging visuomotor computation, which involves combining information about an object (e.g., its shape, size, and mass) with information about the actor's body (e.g., the optimal grasp aperture and hand posture for comfortable manipulation). Here, we used functional magnetic resonance imaging to investigate brain networks associated with these distinct aspects during grasp planning and execution. Human participants of either sex viewed and then executed preselected grasps on L-shaped objects made of wood and/or brass. By leveraging a computational approach that accurately predicts human grasp locations, we selected grasp points that disentangled the role of multiple grasp-relevant factors, that is, grasp axis, grasp size, and object mass. Representational Similarity Analysis revealed that grasp axis was encoded along dorsal-stream regions during grasp planning. Grasp size was first encoded in ventral-stream areas during grasp planning and then in premotor regions during grasp execution. Object mass was encoded in ventral-stream and (pre)motor regions only during grasp execution. Premotor regions further encoded visual predictions of grasp comfort, whereas the ventral stream encoded grasp comfort during execution, suggesting its involvement in haptic evaluation. These shifts in neural representations thus capture the sensorimotor transformations that allow humans to grasp objects.
Significance Statement Grasping requires integrating object properties with constraints on hand and arm postures. Using a computational approach that accurately predicts human grasp locations by combining such constraints, we selected grasps on objects that disentangled the relative contributions of object mass, grasp size, and grasp axis during grasp planning and execution in a neuroimaging study. Our findings reveal a greater role of dorsal-stream visuomotor areas during grasp planning and, surprisingly, increasing ventral-stream engagement during execution. We propose that during planning, visuomotor representations initially encode grasp axis and size. Perceptual representations of object material properties then become more relevant as the hand approaches the object and motor programs are refined with estimates of the grip forces required to lift it successfully.
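Representational Similarity Analysis, as named above, compares the dissimilarity structure of neural response patterns against model predictions. A toy sketch follows; the condition patterns are invented, and the correlation-distance RDMs and rank-correlation fit are the standard recipe, not the study's actual code.

```python
def pearson(x, y):
    # Pearson correlation between two equal-length vectors
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def rdm(patterns):
    # representational dissimilarity matrix: 1 - correlation between
    # the response patterns of every pair of conditions
    n = len(patterns)
    return [[1 - pearson(patterns[i], patterns[j]) for j in range(n)]
            for i in range(n)]

def upper_triangle(m):
    # off-diagonal entries, each pair counted once
    return [m[i][j] for i in range(len(m)) for j in range(i + 1, len(m))]

def spearman(x, y):
    # rank-correlate two vectors (no tie handling, for brevity)
    def ranks(v):
        order = sorted(range(len(v)), key=lambda k: v[k])
        r = [0.0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))
```

The RSA fit is then the rank correlation between the upper triangles of a neural RDM and a model RDM, e.g. `spearman(upper_triangle(neural_rdm), upper_triangle(model_rdm))`, computed per region and per trial phase.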
Affiliation(s)
- Lina K Klein
- Department of Experimental Psychology, Justus Liebig University Giessen, 35390 Giessen, Germany
- Guido Maiello
- School of Psychology, University of Southampton, Southampton SO17 1PS, United Kingdom
- Kevin Stubbs
- Department of Psychology, University of Western Ontario, London, Ontario N6A 5C2, Canada
- Daria Proklova
- Department of Psychology, University of Western Ontario, London, Ontario N6A 5C2, Canada
- Juan Chen
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou 510631, China
- Key Laboratory of Brain, Cognition and Education Sciences, South China Normal University, Guangzhou 510631, China
- Vivian C Paulun
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Jody C Culham
- Department of Psychology, University of Western Ontario, London, Ontario N6A 5C2, Canada
- Roland W Fleming
- Department of Experimental Psychology, Justus Liebig University Giessen, 35390 Giessen, Germany
- Center for Mind, Brain and Behavior, University of Marburg and Justus Liebig University Giessen, 35390 Giessen, Germany
9
McGarity-Shipley MR, Markovik Jantz S, Johansson RS, Wolpert DM, Flanagan JR. Fast Feedback Responses to Categorical Sensorimotor Errors That Do Not Indicate Error Magnitude Are Optimized Based on Short- and Long-Term Memory. J Neurosci 2023; 43:8525-8535. [PMID: 37884350] [PMCID: PMC10711696] [DOI: 10.1523/jneurosci.1990-22.2023] [Received: 10/06/2022] [Revised: 09/25/2023] [Accepted: 10/03/2023] [Indexed: 10/28/2023] Open Access
Abstract
Skilled motor performance depends critically on rapid corrective responses that act to preserve the goal of the movement in the face of perturbations. Although it is well established that the gain of corrective responses elicited while reaching toward objects adapts to different contexts, little is known about the adaptability of corrective responses supporting the manipulation of objects after they are grasped. Here, we investigated the adaptability of the corrective response elicited when an object being lifted is heavier than expected and fails to lift off when predicted. This response involves a monotonic increase in vertical load force, triggered within ∼90 ms by the absence of expected sensory feedback signaling lift off, and terminated when actual lift off occurs. Critically, because the actual weight of the object cannot be directly sensed at the moment the object fails to lift off, any adaptation of the corrective response must be based on memory from previous lifts. We show that when humans (men and women) repeatedly lift an object that on occasional catch trials increases from a baseline weight to a fixed heavier weight, they scale the gain of the response (i.e., the rate of force increase) to the heavier weight within two to three catch trials. We also show that the gain of the response scales, on the first catch trial, with the baseline weight of the object. Thus, the gain of the lifting response can be adapted by both short- and long-term experience. Finally, we demonstrate that this adaptation preserves the efficacy of the response across contexts.
Significance Statement Here, we present the first investigation of the adaptability of the corrective lifting response elicited when an object is heavier than expected and fails to lift off when predicted. A striking feature of this response, which is driven by a sensory prediction error arising from the absence of expected sensory feedback, is that the magnitude of the error is unknown. That is, the motor system receives only a categorical error indicating that the object is heavier than expected, not its actual weight. Although the error magnitude is not known at the moment the response is elicited, we show that the response can be scaled to predictions of error magnitude based on both recent and long-term memories.
Affiliation(s)
- Simona Markovik Jantz
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario K7L 3N6, Canada
- Roland S Johansson
- Physiology Section, Department of Integrative Medical Biology, Umeå University, SE-901 87 Umeå, Sweden
- Daniel M Wolpert
- Department of Neuroscience, Columbia University, New York, New York 10027
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York 10027
- J Randall Flanagan
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario K7L 3N6, Canada
- Department of Psychology, Queen's University, Kingston, Ontario K7L 3N6, Canada
10
von Gal A, Boccia M, Nori R, Verde P, Giannini AM, Piccardi L. Neural networks underlying visual illusions: An activation likelihood estimation meta-analysis. Neuroimage 2023; 279:120335. [PMID: 37591478] [DOI: 10.1016/j.neuroimage.2023.120335] [Received: 04/13/2023] [Revised: 07/05/2023] [Accepted: 08/14/2023] [Indexed: 08/19/2023] Open Access
Abstract
Visual illusions have long been used to study visual perception and contextual integration. Neuroimaging studies employ illusions to identify the brain regions involved in visual perception and how they interact. We conducted an Activation Likelihood Estimation (ALE) meta-analysis and meta-analytic connectivity modeling on fMRI studies using static and motion illusions to reveal the neural signatures of illusory processing and to investigate the degree to which different areas are commonly recruited in perceptual inference. The resulting networks encompass ventral and dorsal regions, including the inferior and middle occipital cortices bilaterally in both types of illusions. The static and motion illusion networks selectively included the right posterior parietal cortex and the ventral premotor cortex, respectively. Overall, these results describe a network of areas crucially involved in perceptual inference, relying on feedback and feedforward interactions between areas of the ventral and dorsal visual pathways. The same network is proposed to be involved in the hallucinatory symptoms characteristic of schizophrenia and other disorders, with crucial implications for the use of illusions as biomarkers.
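At its core, Activation Likelihood Estimation treats each reported activation focus as a Gaussian probability distribution and combines the studies' modeled activation maps voxelwise. A deliberately simplified one-dimensional sketch, with a toy grid, a fixed kernel width, and invented foci (none of which reflect the meta-analysis pipeline actually used):

```python
import math

def ale_map(foci_per_study, grid, sigma=2.0):
    """Toy 1-D ALE: each study's foci become Gaussian 'modeled activation'
    values on the grid (max over that study's foci), and studies are
    combined as the probability that at least one study activates each
    position: ALE = 1 - prod(1 - MA_study)."""
    def study_map(foci):
        return [max(math.exp(-((x - f) ** 2) / (2 * sigma ** 2)) for f in foci)
                for x in grid]

    maps = [study_map(foci) for foci in foci_per_study]
    ale = []
    for i in range(len(grid)):
        prod = 1.0
        for m in maps:
            prod *= 1.0 - m[i]
        ale.append(1.0 - prod)
    return ale
```

Real ALE operates on 3-D brain coordinates with sample-size-dependent kernels and a permutation-based null distribution for thresholding; the sketch only shows the union-style combination rule that makes convergence across studies, not any single study, drive the map.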
Affiliation(s)
- Maddalena Boccia
- Department of Psychology, Sapienza University of Rome, Rome, Italy; Cognitive and Motor Rehabilitation and Neuroimaging Unit, IRCCS Fondazione Santa Lucia, Rome, Italy
- Raffaella Nori
- Department of Psychology, University of Bologna, Bologna, Italy
- Paola Verde
- Italian Air Force Experimental Flight Center, Aerospace Medicine Department, Pratica di Mare, Rome, Italy
- Laura Piccardi
- Department of Psychology, Sapienza University of Rome, Rome, Italy; San Raffaele Cassino Hospital, Cassino, FR, Italy
11
Almeida J, Fracasso A, Kristensen S, Valério D, Bergström F, Chakravarthi R, Tal Z, Walbrin J. Neural and behavioral signatures of the multidimensionality of manipulable object processing. Commun Biol 2023; 6:940. [PMID: 37709924] [PMCID: PMC10502059] [DOI: 10.1038/s42003-023-05323-x] [Received: 03/01/2023] [Accepted: 09/04/2023] [Indexed: 09/16/2023] Open Access
Abstract
Understanding how we recognize objects requires unravelling the variables that govern the way we think about objects and the neural organization of object representations. A tenable hypothesis is that the organization of object knowledge follows key object-related dimensions. Here, we explored, behaviorally and neurally, the multidimensionality of object processing. We focused on within-domain object information as a proxy for the decisions we typically make in our daily lives, e.g., identifying a hammer in the context of other tools. We extracted object-related dimensions from subjective human judgments on a set of manipulable objects. We show that the extracted dimensions are cognitively interpretable and relevant (participants are able to consistently label them, and the dimensions can guide object categorization) and that they matter for the neural organization of knowledge (they predict neural signals elicited by manipulable objects). This shows that multidimensionality is a hallmark of the organization of manipulable object knowledge.
Affiliation(s)
- Jorge Almeida
- Proaction Lab, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Alessio Fracasso
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
- Stephanie Kristensen
- Proaction Lab, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Daniela Valério
- Proaction Lab, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Fredrik Bergström
- Proaction Lab, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Department of Psychology, University of Gothenburg, Gothenburg, Sweden
- Zohar Tal
- Proaction Lab, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Jonathan Walbrin
- Proaction Lab, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
12
Emonds AMX, Srinath R, Nielsen KJ, Connor CE. Object representation in a gravitational reference frame. eLife 2023; 12:e81701. [PMID: 37561119 PMCID: PMC10414968 DOI: 10.7554/elife.81701] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2022] [Accepted: 07/19/2023] [Indexed: 08/11/2023] Open
Abstract
When your head tilts laterally, as in sports, reaching, and resting, your eyes counterrotate less than 20%, and thus eye images rotate, over a total range of about 180°. Yet, the world appears stable and vision remains normal. We discovered a neural strategy for rotational stability in anterior inferotemporal cortex (IT), the final stage of object vision in primates. We measured object orientation tuning of IT neurons in macaque monkeys tilted +25 and -25° laterally, producing ~40° difference in retinal image orientation. Among IT neurons with consistent object orientation tuning, 63% remained stable with respect to gravity across tilts. Gravitational tuning depended on vestibular/somatosensory but also visual cues, consistent with previous evidence that IT processes scene cues for gravity's orientation. In addition to stability across image rotations, an internal gravitational reference frame is important for physical understanding of a world where object position, posture, structure, shape, movement, and behavior interact critically with gravity.
Affiliation(s)
- Alexandriya MX Emonds
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, United States
- Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, United States
- Ramanujan Srinath
- Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, United States
- Solomon H. Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, United States
- Kristina J Nielsen
- Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, United States
- Solomon H. Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, United States
- Charles E Connor
- Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, United States
- Solomon H. Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, United States
13
Rens G, Figley TD, Gallivan JP, Liu Y, Culham JC. Grasping with a Twist: Dissociating Action Goals from Motor Actions in Human Frontoparietal Circuits. J Neurosci 2023; 43:5831-5847. [PMID: 37474309 PMCID: PMC10423047 DOI: 10.1523/jneurosci.0009-23.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2023] [Revised: 05/23/2023] [Accepted: 06/23/2023] [Indexed: 07/22/2023] Open
Abstract
In daily life, prehension is typically not the end goal of hand-object interactions but a precursor for manipulation. Nevertheless, functional MRI (fMRI) studies investigating manual manipulation have primarily relied on prehension as the end goal of an action. Here, we used slow event-related fMRI to investigate differences in neural activation patterns between prehension in isolation and prehension for object manipulation. Sixteen (seven males and nine females) participants were instructed either to simply grasp the handle of a rotatable dial (isolated prehension) or to grasp and turn it (prehension for object manipulation). We used representational similarity analysis (RSA) to investigate whether the experimental conditions could be discriminated from each other based on differences in task-related brain activation patterns. We also used temporal multivoxel pattern analysis (tMVPA) to examine the evolution of regional activation patterns over time. Importantly, we were able to differentiate isolated prehension and prehension for manipulation from activation patterns in the early visual cortex, the caudal intraparietal sulcus (cIPS), and the superior parietal lobule (SPL). Our findings indicate that object manipulation extends beyond the putative cortical grasping network (anterior intraparietal sulcus, premotor and motor cortices) to include the superior parietal lobule and early visual cortex. SIGNIFICANCE STATEMENT A simple act such as turning an oven dial requires not only that the CNS encode the initial state (starting dial orientation) of the object but also the appropriate posture to grasp it to achieve the desired end state (final dial orientation) and the motor commands to achieve that state. Using advanced temporal neuroimaging analysis techniques, we reveal how such actions unfold over time and how they differ between object manipulation (turning a dial) versus grasping alone. We find that a combination of brain areas implicated in visual processing and sensorimotor integration can distinguish between the complex and simple tasks during planning, with neural patterns that approximate those during the actual execution of the action.
Affiliation(s)
- Guy Rens
- Department of Psychology, University of Western Ontario, London, Ontario N6A 5C2, Canada
- Laboratorium voor Neuro- en Psychofysiologie, Department of Neurosciences, Katholieke Universiteit Leuven, Leuven 3000, Belgium
- Leuven Brain Institute, Katholieke Universiteit Leuven, Leuven 3000, Belgium
- Teresa D Figley
- Graduate Program in Neuroscience, University of Western Ontario, London, Ontario N6A 5C2, Canada
- Jason P Gallivan
- Departments of Psychology & Biomedical and Molecular Sciences, Centre for Neuroscience Studies, Queen's University, Kingston, Ontario K7L 3N6, Canada
- Yuqi Liu
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- Institute of Neuroscience, Chinese Academy of Sciences Center for Excellence in Brain Sciences and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Jody C Culham
- Department of Psychology, University of Western Ontario, London, Ontario N6A 5C2, Canada
- Graduate Program in Neuroscience, University of Western Ontario, London, Ontario N6A 5C2, Canada
14
Marciniak Dg Agra K, Dg Agra P. F = ma. Is the macaque brain Newtonian? Cogn Neuropsychol 2023; 39:376-408. [PMID: 37045793 DOI: 10.1080/02643294.2023.2191843] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/14/2023]
Abstract
Intuitive Physics, the ability to anticipate how the physical events involving mass objects unfold in time and space, is a central component of intelligent systems. Intuitive physics is a promising tool for gaining insight into mechanisms that generalize across species because both humans and non-human primates are subject to the same physical constraints when engaging with the environment. Physical reasoning abilities are widely present within the animal kingdom, but monkeys, with acute 3D vision and a high level of dexterity, appreciate and manipulate the physical world in much the same way humans do.
Affiliation(s)
- Karolina Marciniak Dg Agra
- The Rockefeller University, Laboratory of Neural Circuits, New York, NY, USA
- Center for Brain, Minds and Machines, Cambridge, MA, USA
- Pedro Dg Agra
- The Rockefeller University, Laboratory of Neural Circuits, New York, NY, USA
- Center for Brain, Minds and Machines, Cambridge, MA, USA
15
Navarro-Guerrero N, Toprak S, Josifovski J, Jamone L. Visuo-haptic object perception for robots: an overview. Auton Robots 2023. [DOI: 10.1007/s10514-023-10091-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/28/2023]
Abstract
The object perception capabilities of humans are impressive, and this becomes even more evident when trying to develop solutions with a similar proficiency in autonomous robots. While there have been notable advancements in the technologies for artificial vision and touch, the effective integration of these two sensory modalities in robotic applications still needs to be improved, and several open challenges exist. Taking inspiration from how humans combine visual and haptic perception to perceive object properties and drive the execution of manual tasks, this article summarises the current state of the art of visuo-haptic object perception in robots. Firstly, the biological basis of human multimodal object perception is outlined. Then, the latest advances in sensing technologies and data collection strategies for robots are discussed. Next, an overview of the main computational techniques is presented, highlighting the main challenges of multimodal machine learning and presenting a few representative articles in the areas of robotic object recognition, peripersonal space representation and manipulation. Finally, informed by the latest advancements and open challenges, this article outlines promising new research directions.
16
Zhang Z, Cesanek E, Ingram JN, Flanagan JR, Wolpert DM. Object weight can be rapidly predicted, with low cognitive load, by exploiting learned associations between the weights and locations of objects. J Neurophysiol 2023; 129:285-297. [PMID: 36350057 PMCID: PMC9886355 DOI: 10.1152/jn.00414.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2022] [Accepted: 10/30/2022] [Indexed: 11/11/2022] Open
Abstract
Weight prediction is critical for dexterous object manipulation. Previous work has focused on lifting objects presented in isolation and has examined how the visual appearance of an object is used to predict its weight. Here we tested the novel hypothesis that when interacting with multiple objects, as is common in everyday tasks, people exploit the locations of objects to directly predict their weights, bypassing slower and more demanding processing of visual properties to predict weight. Using a three-dimensional robotic and virtual reality system, we developed a task in which participants were presented with a set of objects. In each trial a randomly chosen object translated onto the participant's hand and they had to anticipate the object's weight by generating an equivalent upward force. Across conditions we could control whether the visual appearance and/or location of the objects were informative as to their weight. Using this task, and a set of analogous web-based experiments, we show that when location information was predictive of the objects' weights participants used this information to achieve faster prediction than observed when prediction is based on visual appearance. We suggest that by "caching" associations between locations and weights, the sensorimotor system can speed prediction while also lowering working memory demands involved in predicting weight from object visual properties. NEW & NOTEWORTHY We use a novel object support task using a three-dimensional robotic interface and virtual reality system to provide evidence that the locations of objects are used to predict their weights. Using location information, rather than the visual appearance of the objects, supports fast prediction, thereby avoiding processes that can be demanding on working memory.
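The "caching" idea in this abstract has a natural computational analogue. The following toy sketch is purely illustrative (the class, names, and values are assumptions, not the authors' model): a fast dictionary lookup keyed on location, with a slower appearance-based estimate as the fallback:

```python
class WeightPredictor:
    """Toy illustration of cached location-weight associations: once a
    location-weight pairing is learned, prediction bypasses the slower
    visual-appearance route."""

    def __init__(self):
        self.location_cache = {}  # location -> learned weight

    def predict(self, location, visual_estimate):
        if location in self.location_cache:
            # Fast route: retrieve the cached location-weight association
            return self.location_cache[location], "location"
        # Slow route: fall back on the object's visual appearance
        return visual_estimate, "visual"

    def update(self, location, actual_weight):
        # Lifting reveals the true weight; cache it against the location
        self.location_cache[location] = actual_weight

predictor = WeightPredictor()
first, route1 = predictor.predict("left shelf", visual_estimate=0.3)
predictor.update("left shelf", actual_weight=0.5)
second, route2 = predictor.predict("left shelf", visual_estimate=0.3)
```

On first encounter the prediction comes from appearance; after one lift the cached location route takes over, mirroring the speed advantage the study reports.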
Affiliation(s)
- Zhaoran Zhang
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York
- Department of Neuroscience, Columbia University, New York, New York
- Evan Cesanek
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York
- Department of Neuroscience, Columbia University, New York, New York
- James N Ingram
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York
- Department of Neuroscience, Columbia University, New York, New York
- J Randall Flanagan
- Department of Psychology and Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Daniel M Wolpert
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York
- Department of Neuroscience, Columbia University, New York, New York
17
Moskowitz JB, Berger SA, Fooken J, Castelhano MS, Gallivan JP, Flanagan JR. The influence of movement-related costs when searching to act and acting to search. J Neurophysiol 2023; 129:115-130. [PMID: 36475897 DOI: 10.1152/jn.00305.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022] Open
Abstract
Real-world search behavior often involves limb movements, either during search or after search. Here we investigated whether movement-related costs influence search behavior in two kinds of search tasks. In our visual search tasks, participants made saccades to find a target object among distractors and then moved a cursor, controlled by the handle of a robotic manipulandum, to the target. In our manual search tasks, participants moved the cursor to perform the search, placing it onto objects to reveal their identity as either a target or a distractor. In all tasks, there were multiple targets. Across experiments, we manipulated either the effort or time costs associated with movement such that these costs varied across the search space. We varied effort by applying different resistive forces to the handle, and we varied time costs by altering the speed of the cursor. Our analysis of cursor and eye movements during manual and visual search, respectively, showed that effort influenced manual search but did not influence visual search. In contrast, time costs influenced both visual and manual search. Our results demonstrate that, in addition to perceptual and cognitive factors, movement-related costs can also influence search behavior. NEW & NOTEWORTHY Numerous studies have investigated the perceptual and cognitive factors that influence decision making about where to look, or move, in search tasks. However, little is known about how search is influenced by movement-related costs associated with acting on an object once it has been visually located or acting during manual search. In this article, we show that movement time costs can bias visual and manual search and that movement effort costs bias manual search.
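A simple way to see how time and effort costs can jointly bias where to search is an additive cost model. This sketch is an assumption-laden illustration, not the study's analysis: the linear combination, the cost function, and all values are hypothetical:

```python
def movement_cost(distance, resistance, speed):
    """Combined movement-related cost of reaching a candidate target:
    a time term (distance / speed) plus an effort term scaling with
    resistive force. The additive combination is an assumption."""
    return distance / speed + resistance * distance

# Two candidate targets; the near one sits in a high-resistance region
candidates = {
    "near_high_resistance": movement_cost(0.1, resistance=8.0, speed=0.5),
    "far_low_resistance": movement_cost(0.3, resistance=1.0, speed=0.5),
}
cheapest = min(candidates, key=candidates.get)
```

With these numbers the farther but low-resistance target is the cheaper reach, the kind of effort-driven bias the study observed in manual (but not visual) search.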
Affiliation(s)
- Joshua B Moskowitz
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Sarah A Berger
- Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Jolande Fooken
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Monica S Castelhano
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Jason P Gallivan
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Department of Biomedical and Molecular Sciences, Queen's University, Kingston, Ontario, Canada
- J Randall Flanagan
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Department of Psychology, Queen's University, Kingston, Ontario, Canada
18
Choi JS, Choi MH. A study on brain neuronal activation based on the load in upper limb exercise (STROBE). Medicine (Baltimore) 2022; 101:e30761. [PMID: 36197190 PMCID: PMC9509160 DOI: 10.1097/md.0000000000030761] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Abstract
This study aimed to determine the level of brain activation in separate regions, including the lobes, cerebellum, and limbic system, depending on the weight of an object during elbow flexion and extension exercise using functional magnetic resonance imaging (fMRI). The study was conducted on ten male undergraduates (22.4 ± 1.2 years). The functional images of the brain were obtained using the 3T MRI. The participants performed upper limb flexion and extension exercise at a constant speed and as the weight of the object for lifting was varied (0 g and 1000 g). The experiment consisted of four blocks that constituted 8 minutes. Each block was designed to comprise a rest phase (1 minute) and a lifting phase (1 minute). The results showed that, in the parietal lobe, the activation was higher for the 0 g-motion condition than for the 1000 g-motion condition; however, in the occipital lobe, cerebellum, sub-lobar, and limbic system, the activation was higher for the 1000 g-motion condition than for the 0 g-motion condition. The brain region for the perception of object weight was identified as the ventral area (occipital, temporal, and frontal lobe), and the activation of the ventral pathway is suggested to have increased as the object came into vision and as its shape, size, and weight were perceived. For holding an object in hand, compared to not holding it, the exercise load was greater for controlling the motion to maintain the posture (arm angle at 90°), controlling the speed to repeat the motion at a constant speed, and producing an accurate posing. Therefore, to maintain such varied conditions, the activation level increased in the regions associated with control and regulation through the motion coordination from vision to arm movements (control of muscles). A characteristic reduced activation was observed in the regions associated with visuo-vestibular interaction and voluntary movement when the exercise involved lifting a 1000-g object compared to the exercise without object lifting.
Affiliation(s)
- Jin-Seung Choi
- Biomedical Engineering, Research Institute of Biomedical Engineering, School of ICT Convergence Engineering, College of Science and Technology, Konkuk University, Chungju, South Korea
- Mi-Hyun Choi
- Biomedical Engineering, Research Institute of Biomedical Engineering, School of ICT Convergence Engineering, College of Science and Technology, Konkuk University, Chungju, South Korea
- *Correspondence: Mi-Hyun Choi, Biomedical Engineering, Research Institute of Biomedical Engineering, School of ICT Convergence Engineering, College of Science and Technology, Konkuk University, 268 Chungwon-daero, Chungju-si, Chungcheongbuk-do, 27478, South Korea (e-mail: )
19
van Polanen V, Buckingham G, Davare M. The effects of TMS over the anterior intraparietal area on anticipatory fingertip force scaling and the size-weight illusion. J Neurophysiol 2022; 128:290-301. [PMID: 35294305 PMCID: PMC9363003 DOI: 10.1152/jn.00265.2021] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Revised: 02/10/2022] [Accepted: 03/12/2022] [Indexed: 11/22/2022] Open
Abstract
When lifting an object skillfully, fingertip forces need to be carefully scaled to the object's weight, which can be inferred from its apparent size and material. This anticipatory force scaling ensures smooth and efficient lifting movements. However, even with accurate motor plans, weight perception can still be biased. In the size-weight illusion, objects of different size but equal weight are perceived to differ in heaviness, with the small object perceived to be heavier than the large object. The neural underpinnings of anticipatory force scaling to object size and the size-weight illusion are largely unknown. In this study, we tested the role of anterior intraparietal cortex (aIPS) in predictive force scaling and the size-weight illusion, by applying continuous theta burst stimulation (cTBS) prior to participants lifting objects of different sizes. Participants received cTBS over aIPS, the primary motor cortex (control area), or Sham stimulation. We found no evidence that aIPS stimulation affected the size-weight illusion. Effects were, however, found on anticipatory force scaling, where grip force was less tuned to object size during initial lifts. These findings suggest that aIPS is not involved in the perception of object weight but plays a transient role in the sensorimotor predictions related to object size. NEW & NOTEWORTHY Skilled object manipulation requires forming anticipatory motor plans according to the object's properties. Here, we demonstrate the role of anterior intraparietal sulcus (aIPS) in anticipatory grip force scaling to object size, particularly during initial lifting experience. Interestingly, this role was not maintained after continued practice and was not related to perceptual judgments measured with the size-weight illusion.
Affiliation(s)
- Vonne van Polanen
- Movement Control and Neuroplasticity Research Group, Department of Movement Sciences, Biomedical Sciences group, KU Leuven, Leuven, Belgium
- Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Gavin Buckingham
- Department of Sport and Health Sciences, University of Exeter, Exeter, United Kingdom
- Marco Davare
- Faculty of Life Sciences and Medicine, King's College London, London, United Kingdom
20
Multisensory information about changing object properties can be used to quickly correct predictive force scaling for object lifting. Exp Brain Res 2022; 240:2121-2133. [PMID: 35786747 DOI: 10.1007/s00221-022-06404-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2022] [Accepted: 06/18/2022] [Indexed: 11/04/2022]
Abstract
Sensory information about object properties, such as size or material, can be used to make an estimate of object weight and to generate an accurate motor plan to lift the object. When object properties change, the motor plan needs to be corrected based on the new information. The current study investigated whether such corrections could be made quickly, after the movement was initiated. Participants had to grasp and lift objects of different weights that could be indicated with different cues. During the reaching phase, the cue could change to indicate a different weight and participants had to quickly adjust their planned forces in order to lift the object skilfully. The object weight was cued with different object sizes (Experiment 1) or materials (Experiment 2) and the cue was presented in different sensory modality conditions: visually, haptically or both (visuohaptic). Results showed that participants could adjust their planned forces based on both size and material. Furthermore, corrections could be made in the visual, haptic and visuohaptic conditions, although the multisensory condition did not outperform the conditions with one sensory modality. These results suggest that motor plans can be quickly corrected based on sensory information about object properties from different sensory modalities. These findings provide insights into the information that can be shared between brain areas for the online control of hand-object interactions.
21
Mahon BZ. Domain-specific connectivity drives the organization of object knowledge in the brain. HANDBOOK OF CLINICAL NEUROLOGY 2022; 187:221-244. [PMID: 35964974 PMCID: PMC11498098 DOI: 10.1016/b978-0-12-823493-8.00028-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
The goal of this chapter is to review neuropsychological and functional MRI findings that inform a theory of the causes of functional specialization for semantic categories within occipito-temporal cortex, the ventral visual processing pathway. The occipito-temporal pathway supports visual object processing and recognition. The theoretical framework that drives this review considers visual object recognition through the lens of how "downstream" systems interact with the outputs of visual recognition processes. Those downstream processes include conceptual interpretation, grasping and object use, navigating and orienting in an environment, physical reasoning about the world, and inferring future actions and the inner mental states of agents. The core argument of this chapter is that innately constrained connectivity between occipito-temporal areas and other regions of the brain is the basis for the emergence of neural specificity for a limited number of semantic domains in the brain.
Affiliation(s)
- Bradford Z Mahon
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, United States
22
Neupärtl N, Tatai F, Rothkopf CA. Naturalistic embodied interactions elicit intuitive physical behaviour in accordance with Newtonian physics. Cogn Neuropsychol 2021; 38:440-454. [PMID: 34877918 DOI: 10.1080/02643294.2021.2008890] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
The success of visuomotor interactions in everyday activities such as grasping or sliding a cup is inescapably governed by the laws of physics. Research on intuitive physics has predominantly investigated reasoning about objects' behaviour involving binary forced choice responses. We investigated how the type of visuomotor response influences participants' beliefs about physical quantities and their lawful relationship implicit in their active behaviour. Participants propelled pucks towards targets positioned at different distances. Analysis with a probabilistic model of interactions showed that subjects adopted the non-linear control prescribed by Newtonian physics when sliding real pucks in a virtual environment even in the absence of visual feedback. However, they used a linear heuristic when viewing the scene on a monitor and interactions were implemented through key presses. These results support the notion of probabilistic internal physics models but additionally suggest that humans can take advantage of embodied, sensorimotor, multimodal representations in physical scenarios.
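The "non-linear control prescribed by Newtonian physics" for sliding a puck can be made concrete with the work-energy relation for sliding friction: the kinetic energy at release is dissipated by friction over the travelled distance, so (1/2)v² = μgd and v = √(2μgd). A short sketch contrasting this with the linear heuristic the abstract mentions (the friction coefficient and gain values are arbitrary assumptions):

```python
import math

def newtonian_release_speed(distance, mu=0.3, g=9.81):
    """Speed needed for a puck to slide `distance` metres before sliding
    friction (coefficient mu) stops it:
    (1/2) * v**2 = mu * g * d  =>  v = sqrt(2 * mu * g * d)."""
    return math.sqrt(2.0 * mu * g * distance)

def linear_heuristic_speed(distance, gain=2.0):
    """Linear rule of thumb: release speed proportional to distance."""
    return gain * distance

# Under Newtonian control, doubling the target distance scales the
# required speed by sqrt(2); the linear heuristic doubles it instead
ratio_newton = newtonian_release_speed(2.0) / newtonian_release_speed(1.0)
ratio_linear = linear_heuristic_speed(2.0) / linear_heuristic_speed(1.0)
```

The square-root scaling is the signature the study used to distinguish Newtonian control (embodied puck sliding) from the linear heuristic (key presses while viewing a monitor).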
Affiliation(s)
- Nils Neupärtl
- Institute of Psychology, TU Darmstadt, Darmstadt, Germany
- Centre for Cognitive Science, TU Darmstadt, Darmstadt, Germany
- Fabian Tatai
- Institute of Psychology, TU Darmstadt, Darmstadt, Germany
- Centre for Cognitive Science, TU Darmstadt, Darmstadt, Germany
- Constantin A Rothkopf
- Institute of Psychology, TU Darmstadt, Darmstadt, Germany
- Centre for Cognitive Science, TU Darmstadt, Darmstadt, Germany
- Frankfurt Institute for Advanced Studies, Goethe University, Frankfurt, Germany
23
The contributions of the ventral and the dorsal visual streams to the automatic processing of action relations of familiar and unfamiliar object pairs. Neuroimage 2021. [DOI: 10.1016/j.neuroimage.2021.118629] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
24
Cesanek E, Zhang Z, Ingram JN, Wolpert DM, Flanagan JR. Motor memories of object dynamics are categorically organized. eLife 2021; 10:71627. [PMID: 34796873 PMCID: PMC8635978 DOI: 10.7554/elife.71627] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Accepted: 11/18/2021] [Indexed: 11/13/2022] Open
Abstract
The ability to predict the dynamics of objects, linking applied force to motion, underlies our capacity to perform many of the tasks we carry out on a daily basis. Thus, a fundamental question is how the dynamics of the myriad objects we interact with are organized in memory. Using a custom-built three-dimensional robotic interface that allowed us to simulate objects of varying appearance and weight, we examined how participants learned the weights of sets of objects that they repeatedly lifted. We find strong support for the novel hypothesis that motor memories of object dynamics are organized categorically, in terms of families, based on covariation in their visual and mechanical properties. A striking prediction of this hypothesis, supported by our findings and not predicted by standard associative map models, is that outlier objects with weights that deviate from the family-predicted weight will never be learned despite causing repeated lifting errors.
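The family-based prediction in this abstract, including why an outlier's weight is "never learned", can be illustrated with a toy linear size-weight rule. This is not the authors' model; the data, the choice of a linear fit, and the variable names are all assumptions for illustration:

```python
import numpy as np

# Three family members whose weight covaries linearly with size,
# plus a fourth, outlier object that is far heavier than it looks
sizes = np.array([1.0, 2.0, 3.0, 4.0])
true_weights = np.array([1.0, 2.0, 3.0, 9.0])

# Family (categorical) model: a single size -> weight rule estimated
# from the regularly covarying members
slope, intercept = np.polyfit(sizes[:3], true_weights[:3], 1)
family_prediction = slope * sizes[3] + intercept

# The family rule keeps predicting the family-typical weight (4.0) for
# the outlier, so the lifting error persists; a per-object associative
# map would instead converge on the true 9.0 after repeated lifts
persistent_error = float(true_weights[3] - family_prediction)
```

The persistent non-zero error for the outlier is the behavioural signature that, per the abstract, distinguishes the family hypothesis from standard associative map models.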
Affiliation(s)
- Evan Cesanek
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States
- Department of Neuroscience, Columbia University, New York, NY, United States
- Zhaoran Zhang
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States
- Department of Neuroscience, Columbia University, New York, NY, United States
- James N Ingram
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States
- Department of Neuroscience, Columbia University, New York, NY, United States
- Daniel M Wolpert
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States
- Department of Neuroscience, Columbia University, New York, NY, United States
- J Randall Flanagan
- Department of Psychology and Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
| |
Collapse
|
25
|
Knights E, Mansfield C, Tonin D, Saada J, Smith FW, Rossit S. Hand-Selective Visual Regions Represent How to Grasp 3D Tools: Brain Decoding during Real Actions. J Neurosci 2021; 41:5263-5273. [PMID: 33972399 PMCID: PMC8211542 DOI: 10.1523/jneurosci.0083-21.2021] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Revised: 03/23/2021] [Accepted: 03/29/2021] [Indexed: 02/02/2023] Open
Abstract
Most neuroimaging experiments that investigate how tools and their actions are represented in the brain use visual paradigms where tools or hands are displayed as 2D images and no real movements are performed. These studies discovered selective visual responses in occipitotemporal and parietal cortices for viewing pictures of hands or tools, which are assumed to reflect action processing, but this has rarely been directly investigated. Here, we examined the responses of independently visually defined category-selective brain areas when participants grasped 3D tools (N = 20; 9 females). Using real-action fMRI and multivoxel pattern analysis, we found that grasp typicality representations (i.e., whether a tool is grasped appropriately for use) were decodable from hand-selective areas in occipitotemporal and parietal cortices, but not from tool-, object-, or body-selective areas, even if partially overlapping. Importantly, these effects were exclusive for actions with tools, but not for biomechanically matched actions with control nontools. In addition, grasp typicality decoding was significantly higher in hand than tool-selective parietal regions. Notably, grasp typicality representations were automatically evoked even when there was no requirement for tool use and participants were naive to object category (tool vs nontools). Finding a specificity for typical tool grasping in hand-selective, rather than tool-selective, regions challenges the long-standing assumption that activation for viewing tool images reflects sensorimotor processing linked to tool manipulation. Instead, our results show that typicality representations for tool grasping are automatically evoked in visual regions specialized for representing the human hand, the primary tool of the brain for interacting with the world.
Collapse
Affiliation(s)
- Ethan Knights
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge CB2 7EF, United Kingdom
| | - Courtney Mansfield
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom
| | - Diana Tonin
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom
| | - Janak Saada
- Department of Radiology, Norfolk and Norwich University Hospitals NHS Foundation Trust, Norwich NR4 7UY, United Kingdom
| | - Fraser W Smith
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom
| | - Stéphanie Rossit
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom
| |
Collapse
|
26
|
Gale DJ, Areshenkoff CN, Honda C, Johnsrude IS, Flanagan JR, Gallivan JP. Motor Planning Modulates Neural Activity Patterns in Early Human Auditory Cortex. Cereb Cortex 2021; 31:2952-2967. [PMID: 33511976 PMCID: PMC8107793 DOI: 10.1093/cercor/bhaa403] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2020] [Revised: 12/14/2020] [Accepted: 12/14/2020] [Indexed: 11/13/2022] Open
Abstract
It is well established that movement planning recruits motor-related cortical brain areas in preparation for the forthcoming action. Given that an integral component to the control of action is the processing of sensory information throughout movement, we predicted that movement planning might also modulate early sensory cortical areas, readying them for sensory processing during the unfolding action. To test this hypothesis, we performed 2 human functional magnetic resonance imaging studies involving separate delayed movement tasks and focused on premovement neural activity in early auditory cortex, given the area's direct connections to the motor system and evidence that it is modulated by motor cortex during movement in rodents. We show that effector-specific information (i.e., movements of the left vs. right hand in Experiment 1 and movements of the hand vs. eye in Experiment 2) can be decoded, well before movement, from neural activity in early auditory cortex. We find that this motor-related information is encoded in a separate subregion of auditory cortex than sensory-related information and is present even when movements are cued visually instead of auditorily. These findings suggest that action planning, in addition to preparing the motor system for movement, involves selectively modulating primary sensory areas based on the intended action.
Collapse
Affiliation(s)
- Daniel J Gale
- Centre for Neuroscience Studies, Queen’s University, Kingston, Ontario K7L 3N6, Canada
| | - Corson N Areshenkoff
- Centre for Neuroscience Studies, Queen’s University, Kingston, Ontario K7L 3N6, Canada
- Department of Psychology, Queen’s University, Kingston, Ontario K7L 3N6, Canada
| | - Claire Honda
- Department of Psychology, Queen’s University, Kingston, Ontario K7L 3N6, Canada
| | - Ingrid S Johnsrude
- Department of Psychology, University of Western Ontario, London, Ontario, N6A 3K7, Canada
- School of Communication Sciences and Disorders, University of Western Ontario, London, Ontario, N6A 3K7, Canada
- Brain and Mind Institute, University of Western Ontario, London, Ontario, N6A 3K7, Canada
| | - J Randall Flanagan
- Centre for Neuroscience Studies, Queen’s University, Kingston, Ontario K7L 3N6, Canada
- Department of Psychology, Queen’s University, Kingston, Ontario K7L 3N6, Canada
| | - Jason P Gallivan
- Centre for Neuroscience Studies, Queen’s University, Kingston, Ontario K7L 3N6, Canada
- Department of Psychology, Queen’s University, Kingston, Ontario K7L 3N6, Canada
- Department of Biomedical and Molecular Sciences, Queen’s University, Kingston, Ontario K7L 3N6, Canada
| |
Collapse
|
27
|
Bergström F, Wurm M, Valério D, Lingnau A, Almeida J. Decoding stimuli (tool-hand) and viewpoint invariant grasp-type information. Cortex 2021; 139:152-165. [PMID: 33873036 DOI: 10.1016/j.cortex.2021.03.004] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2020] [Revised: 02/01/2021] [Accepted: 03/04/2021] [Indexed: 01/30/2023]
Abstract
When we see a manipulable object (henceforth tool) or a hand performing a grasping movement, our brain is automatically tuned to how that tool can be grasped (i.e., its affordance) or what kind of grasp that hand is performing (e.g., a power or precision grasp). However, it remains unclear where visual information related to tools or hands is transformed into abstract grasp representations. We therefore investigated where different levels of abstractness in grasp information are processed: grasp information that is invariant to the kind of stimuli that elicits it (tool-hand invariance); and grasp information that is hand-specific but viewpoint-invariant (viewpoint invariance). We focused on brain areas activated when viewing both tools and hands, i.e., the posterior parietal cortices (PPC), ventral premotor cortices (PMv), and lateral occipitotemporal cortex/posterior middle temporal cortex (LOTC/pMTG). To test for invariant grasp representations, we presented participants with tool images and grasp videos (from first or third person perspective; 1pp or 3pp) inside an MRI scanner, and cross-decoded power versus precision grasps across (i) grasp perspectives (viewpoint invariance), (ii) tool images and grasp 1pp videos (tool-hand 1pp invariance), and (iii) tool images and grasp 3pp videos (tool-hand 3pp invariance). Tool-hand 1pp, but not tool-hand 3pp, invariant grasp information was found in left PPC, whereas viewpoint-invariant information was found bilaterally in PPC, left PMv, and left LOTC/pMTG. These findings suggest different levels of abstractness: visual information is transformed into stimulus-invariant grasp representations (tool affordances) in left PPC, whereas viewpoint-invariant but hand-specific grasp representations arise in the hand network.
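Cross-decoding of the kind described above can be sketched in a few lines (synthetic "voxel" patterns and a simple nearest-centroid classifier stand in for the study's fMRI data and analysis pipeline): a classifier trained on patterns from one stimulus type is tested on patterns from another, and above-chance transfer indicates a shared, stimulus-invariant representation.

```python
# Minimal cross-decoding sketch on synthetic multivoxel patterns.
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_trials = 50, 40
grasp_axis = rng.normal(size=n_vox)            # shared power-vs-precision axis

def patterns(labels, noise):
    # label +1 = power grasp, -1 = precision grasp, plus per-trial noise
    return np.outer(labels, grasp_axis) + noise * rng.normal(size=(len(labels), n_vox))

labels = np.repeat([1, -1], n_trials // 2)
train = patterns(labels, noise=1.0)            # e.g. patterns from tool images
test = patterns(labels, noise=1.0)             # e.g. patterns from grasp videos

c_pow = train[labels == 1].mean(axis=0)        # class centroids from training data
c_pre = train[labels == -1].mean(axis=0)
pred = np.where(np.linalg.norm(test - c_pow, axis=1) <
                np.linalg.norm(test - c_pre, axis=1), 1, -1)
accuracy = (pred == labels).mean()
print(accuracy)
```

Because the synthetic grasp axis is shared across the two pattern sets, decoding transfers; with independent axes per stimulus type, accuracy would fall to chance.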
Collapse
Affiliation(s)
- Fredrik Bergström
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal.
| | - Moritz Wurm
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, TN, Italy
| | - Daniela Valério
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal
| | - Angelika Lingnau
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, TN, Italy; Institute of Psychology, University of Regensburg, Regensburg, Germany
| | - Jorge Almeida
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal
| |
Collapse
|
28
|
Schmid AC, Boyaci H, Doerschner K. Dynamic dot displays reveal material motion network in the human brain. Neuroimage 2020; 228:117688. [PMID: 33385563 DOI: 10.1016/j.neuroimage.2020.117688] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2020] [Revised: 11/20/2020] [Accepted: 12/19/2020] [Indexed: 11/26/2022] Open
Abstract
There is growing research interest in the neural mechanisms underlying the recognition of material categories and properties. This research field, however, is more recent and more limited than investigations of the neural mechanisms underlying object and scene category recognition. Motion is particularly important for the perception of non-rigid materials, but the neural basis of non-rigid material motion remains unexplored. Using fMRI, we investigated which brain regions respond preferentially to material motion versus other types of motion. We introduce a new database of stimuli - dynamic dot materials - that are animations of moving dots that induce vivid percepts of various materials in motion, e.g. flapping cloth, liquid waves, wobbling jelly. Control stimuli were scrambled versions of these same animations and rigid three-dimensional rotating dots. Results showed that isolating material motion properties with dynamic dots (in contrast with other kinds of motion) activates a network of cortical regions in both ventral and dorsal visual pathways, including areas normally associated with the processing of surface properties and shape, and extending to somatosensory and premotor cortices. We suggest that such a widespread preference for material motion is due to strong associations between stimulus properties. For example, viewing dots moving in a specific pattern not only elicits percepts of material motion; one perceives a flexible, non-rigid shape, identifies the object as a cloth flapping in the wind, infers the object's weight under gravity, and anticipates how it would feel to reach out and touch the material. These results are a first important step in mapping out the cortical architecture and dynamics in material-related motion processing.
Collapse
Affiliation(s)
- Alexandra C Schmid
- Department of Psychology, Justus Liebig University Giessen, Giessen 35394, Germany.
| | - Huseyin Boyaci
- Department of Psychology, Justus Liebig University Giessen, Giessen 35394, Germany; Department of Psychology, A.S. Brain Research Center, and National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey.
| | - Katja Doerschner
- Department of Psychology, Justus Liebig University Giessen, Giessen 35394, Germany; Department of Psychology, A.S. Brain Research Center, and National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey.
| |
Collapse
|
29
|
Whitwell RL, Katz NJ, Goodale MA, Enns JT. The Role of Haptic Expectations in Reaching to Grasp: From Pantomime to Natural Grasps and Back Again. Front Psychol 2020; 11:588428. [PMID: 33391110 PMCID: PMC7773727 DOI: 10.3389/fpsyg.2020.588428] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2020] [Accepted: 11/17/2020] [Indexed: 11/13/2022] Open
Abstract
When we reach to pick up an object, our actions are effortlessly informed by the object's spatial information, the position of our limbs, stored knowledge of the object's material properties, and what we want to do with the object. A substantial body of evidence suggests that grasps are under the control of "automatic, unconscious" sensorimotor modules housed in the "dorsal stream" of the posterior parietal cortex. Visual online feedback has a strong effect on the hand's in-flight grasp aperture. Previous work of ours exploited this effect to show that grasps are refractory to cued expectations for visual feedback. Nonetheless, when we reach out to pretend to grasp an object (pantomime grasp), our actions are performed with greater cognitive effort and they engage structures outside of the dorsal stream, including the ventral stream. Here we ask whether our previous finding would extend to cued expectations for haptic feedback. Our method involved a mirror apparatus that allowed participants to see a "virtual" target cylinder as a reflection in the mirror at the start of all trials. On "haptic feedback" trials, participants reached behind the mirror to grasp a size-matched cylinder, spatially coincident with the virtual one. On "no-haptic feedback" trials, participants reached behind the mirror and grasped into "thin air" because no cylinder was present. To manipulate haptic expectation, we organized the haptic conditions into blocked, alternating, and randomized schedules with and without verbal cues about the availability of haptic feedback. Replicating earlier work, we found the strongest haptic effects with the blocked schedules and the weakest effects in the randomized uncued schedule. Crucially, the haptic effects in the cued randomized schedule were intermediate.
An analysis of the influence of the upcoming and immediately preceding haptic feedback condition in the cued and uncued random schedules showed that cuing the upcoming haptic condition shifted the haptic influence on grip aperture from the immediately preceding trial to the upcoming trial. These findings indicate that, unlike with cues to the availability of visual feedback, participants take advantage of cues to the availability of haptic feedback, flexibly engaging pantomime and natural modes of grasping to optimize the movement.
Collapse
Affiliation(s)
- Robert L Whitwell
- Department of Psychology, The University of British Columbia, Vancouver, BC, Canada
| | - Nathan J Katz
- Department of Psychology, Brain and Mind Institute, The University of Western Ontario, London, ON, Canada
| | - Melvyn A Goodale
- Department of Psychology, Brain and Mind Institute, The University of Western Ontario, London, ON, Canada
| | - James T Enns
- Department of Psychology, The University of British Columbia, Vancouver, BC, Canada
| |
Collapse
|
30
|
Indovina I, Bosco G, Riccelli R, Maffei V, Lacquaniti F, Passamonti L, Toschi N. Structural connectome and connectivity lateralization of the multimodal vestibular cortical network. Neuroimage 2020; 222:117247. [PMID: 32798675 PMCID: PMC7779422 DOI: 10.1016/j.neuroimage.2020.117247] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2020] [Revised: 07/28/2020] [Accepted: 08/05/2020] [Indexed: 01/05/2023] Open
Abstract
Unlike other sensory systems, the structural connectivity patterns of the human vestibular cortex remain a matter of debate. Based on their functional properties and hypothesized centrality within the vestibular network, the ‘core’ cortical regions of this network are thought to be areas in the posterior peri-sylvian cortex, in particular the retro-insula (previously named the posterior insular cortex-PIC), and the subregion OP2 of the parietal operculum. To study the vestibular network, structural connectivity matrices from n=974 healthy individuals drawn from the public Human Connectome Project (HCP) repository were estimated using multi-shell diffusion-weighted data followed by probabilistic tractography and spherical-deconvolution informed filtering of tractograms in combination with subject-specific grey-matter parcellations. Weighted graph-theoretical measures, modularity, and ‘hubness’ of the multimodal vestibular network were then estimated, and a structural lateralization index was defined to assess the difference in fiber density of homologous regions in the right and left hemispheres. Differences in connectivity patterns between OP2 and PIC were also estimated. We found that the bilateral intraparietal sulcus, PIC, and to a lesser degree OP2, are key ‘hub’ regions within the multimodal vestibular network. PIC and OP2 structural connectivity patterns were lateralized to the left hemisphere, while structural connectivity patterns of the posterior peri-sylvian supramarginal and superior temporal gyri were lateralized to the right hemisphere. These lateralization patterns were independent of handedness. We also found that the structural connectivity pattern of PIC is consistent with a key role of PIC in visuo-vestibular processing and that the structural connectivity pattern of OP2 is consistent with integration of mainly vestibular somato-sensory and motor information.
These results suggest an analogy between PIC and the simian visual posterior sylvian (VPS) area and OP2 and the simian parieto-insular vestibular cortex (PIVC). Overall, these findings may provide novel insights to the current models of vestibular function, as well as to the understanding of the complexity and lateralized signs of vestibular syndromes.
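A structural lateralization index of the general form used in such studies can be written as follows (the exact formula here, LI = (L - R) / (L + R), is a common convention and an assumption for illustration, not necessarily the paper's own definition):

```python
# Common lateralization-index convention: positive = leftward lateralization.
def lateralization_index(left_density: float, right_density: float) -> float:
    total = left_density + right_density
    if total == 0:
        return 0.0          # no fibers on either side: define LI as 0
    return (left_density - right_density) / total

# e.g., a region with higher fiber density in the left hemisphere
print(lateralization_index(0.8, 0.5))
```

The index is bounded in [-1, 1], with 0 meaning perfectly symmetric fiber density.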
Collapse
Affiliation(s)
- Iole Indovina
- Department of Biomedical and Dental Sciences and Morphofunctional Imaging, University of Messina, 98125 Messina, Italy; Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, via Ardeatina 354, 00179 Rome, Italy.
| | - Gianfranco Bosco
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, via Ardeatina 354, 00179 Rome, Italy; Department of Systems Medicine and Centre of Space BioMedicine, University of Rome Tor Vergata, 00173 Rome, Italy
| | - Roberta Riccelli
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, via Ardeatina 354, 00179 Rome, Italy
| | - Vincenzo Maffei
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, via Ardeatina 354, 00179 Rome, Italy
| | - Francesco Lacquaniti
- Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, via Ardeatina 354, 00179 Rome, Italy; Department of Systems Medicine and Centre of Space BioMedicine, University of Rome Tor Vergata, 00173 Rome, Italy
| | - Luca Passamonti
- Department of Clinical Neurosciences, University of Cambridge, UK; Institute of Bioimaging & Molecular Physiology, National Research Council, Milano, Italy; IRCCS San Camillo Hospital, Venice, Italy.
| | - Nicola Toschi
- Department of Biomedicine and Prevention, University of Rome "Tor Vergata", 00133 Rome, Italy; Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Boston, MA, USA
| |
Collapse
|
31
|
Marneweck M, Grafton ST. Overt and Covert Object Features Mediate Timing of Patterned Brain Activity during Motor Planning. Cereb Cortex Commun 2020; 1:tgaa080. [PMID: 34296138 PMCID: PMC8152879 DOI: 10.1093/texcom/tgaa080] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2020] [Revised: 10/05/2020] [Accepted: 10/27/2020] [Indexed: 12/02/2022] Open
Abstract
Humans are seamless in their ability to efficiently and reliably generate fingertip forces to gracefully interact with objects. Such interactions rarely end in awkward outcomes like spilling, crushing, or tilting given advanced motor planning. Here we combine multiband imaging with deconvolution- and Bayesian pattern component modeling of functional magnetic resonance imaging data and in-scanner kinematics, revealing compelling evidence that the human brain differentially represents preparatory information for skillful object interactions depending on the saliency of visual cues. Earlier patterned activity was particularly evident in the ventral visual processing stream, but also selectively in the dorsal visual processing stream and cerebellum, in conditions of heightened uncertainty when an object’s superficial shape was incompatible rather than compatible with a key underlying object feature.
Collapse
Affiliation(s)
- Michelle Marneweck
- Department of Human Physiology, University of Oregon, Eugene, OR 97403-1249, USA; Monash Biomedical Imaging, Monash University, Melbourne, Victoria 3168, Australia; Turner Institute for Brain and Mental Health, Monash University, Melbourne, Victoria 3168, Australia
| | - Scott T Grafton
- Department of Psychological & Brain Sciences, University of California Santa Barbara, Santa Barbara, CA 93106, USA
| |
Collapse
|
32
|
Gallivan JP, Chapman CS, Gale DJ, Flanagan JR, Culham JC. Selective Modulation of Early Visual Cortical Activity by Movement Intention. Cereb Cortex 2020; 29:4662-4678. [PMID: 30668674 DOI: 10.1093/cercor/bhy345] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2018] [Revised: 11/21/2018] [Accepted: 12/22/2018] [Indexed: 12/22/2022] Open
Abstract
The primate visual system contains myriad feedback projections from higher- to lower-order cortical areas, an architecture that has been implicated in the top-down modulation of early visual areas during working memory and attention. Here we tested the hypothesis that these feedback projections also modulate early visual cortical activity during the planning of visually guided actions. We show, across three separate human functional magnetic resonance imaging (fMRI) studies involving object-directed movements, that information related to the motor effector to be used (i.e., limb, eye) and action goal to be performed (i.e., grasp, reach) can be selectively decoded, prior to movement, from the retinotopic representation of the target object(s) in early visual cortex. We also find that during the planning of sequential actions involving objects in two different spatial locations, motor-related information can be decoded from both locations in retinotopic cortex. Together, these findings indicate that movement planning selectively modulates early visual cortical activity patterns in an effector-specific, target-centric, and task-dependent manner. These findings offer a neural account of how motor-relevant target features are enhanced during action planning and suggest a possible role for early visual cortex in instituting a sensorimotor estimate of the visual consequences of movement.
Collapse
Affiliation(s)
- Jason P Gallivan
- Department of Psychology, Queen's University, Kingston, Ontario, Canada; Department of Biomedical and Molecular Sciences, Queen's University, Kingston, Ontario, Canada; Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
| | - Craig S Chapman
- Faculty of Physical Education and Recreation, University of Alberta, Alberta, Canada
| | - Daniel J Gale
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
| | - J Randall Flanagan
- Department of Psychology, Queen's University, Kingston, Ontario, Canada; Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
| | - Jody C Culham
- Department of Psychology, University of Western Ontario, London, Ontario, Canada; Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada
| |
Collapse
|
33
|
Abstract
The size-weight illusion is a perceptual illusion where smaller objects are judged as heavier than equally weighted larger objects. A previous informal report suggests that visual form agnosic patient DF does not experience the size-weight illusion when vision is the only available cue to object size. We tested this experimentally, comparing the magnitudes of DF's visual, kinesthetic and visual-kinesthetic size-weight illusions to those of 28 similarly aged controls. A modified t-test found that DF's visual size-weight illusion was significantly smaller than that of controls (zcc = -1.7). A test of simple dissociation based on the Revised Standardized Difference Test found that the discrepancy between the magnitude of DF's visual and kinesthetic size-weight illusions was not significantly different from that of controls (zdcc = -1.054), thereby failing to establish a dissociation between the visual and kinesthetic conditions. These results are consistent with previous suggestions that visual form agnosia, following ventral visual stream damage, is associated with an abnormally reduced size-weight illusion. The results, however, do not confirm that this reduction is specific to the use of visual size cues to predict object weight, rather than reflecting more general changes in the processing of object size cues or in the use of predictive strategies for lifting.
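The single-case statistics mentioned above can be sketched as follows (the control mean, SD, and n below are invented for illustration, not DF's actual data): z_cc standardizes the patient's score against the control sample, and the Crawford-Howell modified t-test additionally corrects for the finite size of the control sample.

```python
# Single-case comparison statistics (Crawford & Howell, 1998).
import math

def z_cc(case: float, ctrl_mean: float, ctrl_sd: float) -> float:
    # case score expressed in control-sample standard deviations
    return (case - ctrl_mean) / ctrl_sd

def crawford_howell_t(case: float, ctrl_mean: float, ctrl_sd: float, n: int) -> float:
    # t statistic with n - 1 degrees of freedom; the sqrt((n+1)/n) term
    # inflates the denominator to account for control sampling error
    return (case - ctrl_mean) / (ctrl_sd * math.sqrt((n + 1) / n))

z = z_cc(-1.7, 0.0, 1.0)
t = crawford_howell_t(-1.7, 0.0, 1.0, 28)   # 28 controls, as in the study
print(round(z, 3), round(t, 3))
```

With 28 controls the correction is small (the t statistic is only slightly less extreme than z_cc), but it matters for small control samples.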
Collapse
Affiliation(s)
| | - Anna Sedda
- School of Social Sciences,Psychology, Heriot-Watt University , Edinburgh, UK
| | | | - Robert D McIntosh
- Human Cognitive Neuroscience, Psychology, University of Edinburgh , Edinburgh, UK
| |
Collapse
|
34
|
Gaze direction influences grasping actions towards unseen, haptically explored, objects. Sci Rep 2020; 10:15774. [PMID: 32978418 PMCID: PMC7519081 DOI: 10.1038/s41598-020-72554-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2020] [Accepted: 08/04/2020] [Indexed: 11/25/2022] Open
Abstract
Haptic exploration produces mental object representations that can be memorized for subsequent object-directed behaviour. Storage of haptically-acquired object images (HOIs) engages, besides canonical somatosensory areas, the early visual cortex (EVC). Clear evidence for a causal contribution of EVC to HOI representation is still lacking. The use of visual information by the grasping system necessarily undergoes a frame of reference shift by integrating eye position. We hypothesize that if the motor system uses HOIs stored in a retinotopic coding in the visual cortex, then their use is likely to depend at least in part on eye position. We measured the kinematics of 4 fingers in the right hand of 15 healthy participants during the task of grasping different unseen objects behind an opaque panel that had been previously explored haptically. The participants never saw the object and operated exclusively based on haptic information. The position of the object was fixed, in front of the participant, but the subject’s gaze varied from trial to trial between 3 possible positions, towards the unseen object or away from it, on either side. Results showed that the middle and little fingers’ kinematics during reaching for the unseen object changed significantly according to gaze position. In a control experiment we showed that intransitive hand movements were not modulated by gaze direction. Manipulating eye position produces small but significant configuration errors (behavioural errors due to shifts in frame of reference), possibly related to an eye-centered frame of reference, despite the absence of visual information, indicating sharing of resources between the haptic and the visual/oculomotor systems in delayed haptic grasping.
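The frame-of-reference shift invoked above can be illustrated with trivial geometry (made-up 2D coordinates; real retinotopic remapping is of course far richer): re-expressing a body-centered object location relative to the current gaze point yields an eye-centered location that changes whenever gaze moves, even though the object itself does not.

```python
# Body-centered to eye-centered coordinate shift.
def to_eye_centered(object_pos, gaze_pos):
    # subtracting the gaze point converts a body-centered location
    # into an eye-centered (retinotopic-like) one
    return tuple(o - g for o, g in zip(object_pos, gaze_pos))

obj = (0.0, 40.0)                  # object straight ahead of the body midline
for gaze in [(-20.0, 40.0), (0.0, 40.0), (20.0, 40.0)]:
    print(to_eye_centered(obj, gaze))
```

A fixed object thus occupies three different eye-centered positions across the three gaze conditions, which is why an eye-centered memory trace would predict gaze-dependent grasp kinematics.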
Collapse
|
35
|
Klein LK, Maiello G, Paulun VC, Fleming RW. Predicting precision grip grasp locations on three-dimensional objects. PLoS Comput Biol 2020; 16:e1008081. [PMID: 32750070 PMCID: PMC7428291 DOI: 10.1371/journal.pcbi.1008081] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2019] [Revised: 08/14/2020] [Accepted: 06/22/2020] [Indexed: 11/18/2022] Open
Abstract
We rarely experience difficulty picking up objects, yet of all potential contact points on the surface, only a small proportion yield effective grasps. Here, we present extensive behavioral data alongside a normative model that correctly predicts human precision grasping of unfamiliar 3D objects. We tracked participants' forefinger and thumb as they picked up objects composed of 10 wood and brass cubes, configured to tease apart effects of shape, weight, orientation, and mass distribution. Grasps were highly systematic and consistent across repetitions and participants. We employed these data to construct a model which combines five cost functions related to force closure, torque, natural grasp axis, grasp aperture, and visibility. Even without free parameters, the model predicts individual grasps almost as well as different individuals predict one another's, but fitting weights reveals the relative importance of the different constraints. The model also accurately predicts human grasps on novel 3D-printed objects with more naturalistic geometries and is robust to perturbations in its key parameters. Together, the findings provide a unified account of how we successfully grasp objects of different 3D shape, orientation, mass, and mass distribution.
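The weighted cost-function combination described above can be sketched schematically (the five cost names come from the abstract, but the candidate grasps, raw cost values, weights, and normalization scheme below are invented for illustration): each candidate contact-point pair receives a weighted sum of normalized costs, and the minimum-cost candidate is the predicted grasp.

```python
# Schematic weighted-cost grasp selection over candidate contact-point pairs.
import numpy as np

cost_names = ["force_closure", "torque", "natural_axis", "aperture", "visibility"]
weights = np.ones(5)                      # uniform weights; fitting would tune these

# rows = candidate grasps, columns = raw costs (made-up numbers)
raw = np.array([[0.2, 0.9, 0.1, 0.3, 0.5],
                [0.1, 0.2, 0.3, 0.2, 0.1],
                [0.8, 0.4, 0.6, 0.9, 0.7]])

# normalize each cost to [0, 1] across candidates so no single cost dominates
span = raw.max(axis=0) - raw.min(axis=0)
norm = (raw - raw.min(axis=0)) / np.where(span == 0, 1, span)

total = norm @ weights                    # weighted sum per candidate
best = int(np.argmin(total))              # predicted grasp: lowest total cost
print(best, np.round(total, 3))
```

Fitting the weight vector to observed grasps would then reveal which constraints carry the most influence, as the abstract describes.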
Collapse
Affiliation(s)
- Lina K. Klein
- Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
| | - Guido Maiello
- Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
| | - Vivian C. Paulun
- Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
| | - Roland W. Fleming
- Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior, Justus Liebig University Giessen, Giessen, Germany
| |
Collapse
|
36
|
Garcea FE, Greene C, Grafton ST, Buxbaum LJ. Structural Disconnection of the Tool Use Network after Left Hemisphere Stroke Predicts Limb Apraxia Severity. Cereb Cortex Commun 2020; 1:tgaa035. [PMID: 33134927 PMCID: PMC7573742 DOI: 10.1093/texcom/tgaa035] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2020] [Revised: 07/10/2020] [Accepted: 07/13/2020] [Indexed: 12/23/2022] Open
Abstract
Producing a tool use gesture is a complex process drawing upon the integration of stored knowledge of tools and their associated actions with sensory-motor mechanisms supporting the planning and control of hand and arm actions. Understanding how sensory-motor systems in parietal cortex interface with semantic representations of actions and objects in the temporal lobe remains a critical issue and is hypothesized to be a key determinant of the severity of limb apraxia, a deficit in producing skilled action after left hemisphere stroke. We used voxel-based and connectome-based lesion-symptom mapping with data from 57 left hemisphere stroke participants to assess the lesion sites and structural disconnection patterns associated with poor tool use gesturing. We found that structural disconnection among the left inferior parietal lobule, lateral and ventral temporal cortices, and middle and superior frontal gyri predicted the severity of tool use gesturing performance. Control analyses demonstrated that reductions in right-hand grip strength were associated with motor system disconnection, largely bypassing regions supporting tool use gesturing. Our findings provide evidence that limb apraxia may arise, in part, from a disconnection between conceptual representations in the temporal lobe and mechanisms enabling skilled action production in the inferior parietal lobule.
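The connectome-based lesion-symptom mapping logic described above can be caricatured in a few lines (synthetic data and invented variable names; the actual analysis involved many more connections, lesion-derived disconnection estimates, and appropriate statistical correction): correlate each connection's disconnection severity across patients with a behavioral score, then identify the edges whose disconnection best predicts the deficit.

```python
# Toy edge-wise lesion-symptom correlation on synthetic patient data.
import numpy as np

rng = np.random.default_rng(1)
n_patients, n_edges = 57, 6               # 57 matches the study's sample size
disconnection = rng.uniform(0, 1, size=(n_patients, n_edges))

# make edge 2 behaviorally relevant: more disconnection -> worse gesture score
behavior = -2.0 * disconnection[:, 2] + 0.3 * rng.normal(size=n_patients)

# Pearson correlation between each edge's disconnection and the behavior
r = np.array([np.corrcoef(disconnection[:, e], behavior)[0, 1]
              for e in range(n_edges)])
strongest = int(np.argmin(r))             # most negative correlation
print(strongest, round(float(r[strongest]), 2))
```

In the real analysis, a permutation test or similar correction guards against the many edge-wise comparisons; the toy version only recovers the one planted effect.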
Affiliation(s)
- Frank E Garcea: Moss Rehabilitation Research Institute, Elkins Park, PA 19027, USA; University of Pennsylvania, Philadelphia, PA 19104, USA
- Clint Greene: Department of Psychological and Brain Sciences, University of California at Santa Barbara, Santa Barbara, CA 93016, USA
- Scott T Grafton: Department of Psychological and Brain Sciences, University of California at Santa Barbara, Santa Barbara, CA 93016, USA
- Laurel J Buxbaum: Moss Rehabilitation Research Institute, Elkins Park, PA 19027, USA; Department of Rehabilitation Medicine, Thomas Jefferson University, Philadelphia, PA 19107, USA

37. van Polanen V, Rens G, Davare M. The role of the anterior intraparietal sulcus and the lateral occipital cortex in fingertip force scaling and weight perception during object lifting. J Neurophysiol 2020;124:557-573. [PMID: 32667252; PMCID: PMC7500375; DOI: 10.1152/jn.00771.2019]
Abstract
Skillful object lifting relies on scaling fingertip forces according to the object’s weight. When no visual cues about weight are available, force planning relies on previous lifting experience. Recently, we showed that previously lifted objects also affect weight estimation, as objects are perceived to be lighter when lifted after heavy objects compared with after light ones. Here, we investigated the underlying neural mechanisms mediating these effects. We asked participants to lift objects and estimate their weight. Simultaneously, we applied transcranial magnetic stimulation (TMS) during the dynamic loading or static holding phase. Two subject groups received TMS over either the anterior intraparietal sulcus (aIPS) or the lateral occipital area (LO), known to be important nodes in object grasping and perception. We hypothesized that TMS over aIPS and LO during object lifting would alter force scaling and weight perception. Contrary to our hypothesis, we did not find effects of aIPS or LO stimulation on force planning or weight estimation caused by previous lifting experience. However, we found that TMS over both areas increased grip forces, but only when applied during dynamic loading, and decreased weight estimation, but only when applied during static holding, suggesting time-specific effects. Interestingly, our results also indicate that TMS over LO, but not aIPS, affected load force scaling specifically for heavy objects, which further indicates that load and grip forces might be controlled differently. These findings provide new insights on the interactions between brain networks mediating action and perception during object manipulation. NEW & NOTEWORTHY: This article provides new insights into the neural mechanisms underlying object lifting and perception. Using transcranial magnetic stimulation during object lifting, we show that effects of previous experience on force scaling and weight perception are not mediated by the anterior intraparietal sulcus or the lateral occipital cortex (LO). In contrast, we highlight a unique role for LO in load force scaling, suggesting different brain processes for grip and load force scaling in object manipulation.
Affiliation(s)
- Vonne van Polanen: Movement Control and Neuroplasticity Research Group, Department of Movement Sciences, Biomedical Sciences Group, KU Leuven, Leuven, Belgium; Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Guy Rens: Movement Control and Neuroplasticity Research Group, Department of Movement Sciences, Biomedical Sciences Group, KU Leuven, Leuven, Belgium; Leuven Brain Institute, KU Leuven, Leuven, Belgium; The Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada
- Marco Davare: Department of Clinical Sciences, College of Health and Life Sciences, Brunel University London, Uxbridge, United Kingdom

38. Marneweck M, Grafton ST. Representational Neural Mapping of Dexterous Grasping Before Lifting in Humans. J Neurosci 2020;40:2708-2716. [PMID: 32015024; PMCID: PMC7096143; DOI: 10.1523/jneurosci.2791-19.2020]
Abstract
The ability of humans to reach and grasp objects in their environment has been the mainstay paradigm for characterizing the neural circuitry driving object-centric actions. Although much is known about hand shaping, a persistent question is how the brain orchestrates and integrates the grasp with lift forces of the fingers in a coordinated manner. The objective of the current study was to investigate how the brain represents grasp configuration and lift force during a dexterous object-centric action in a large sample of male and female human subjects. BOLD activity was measured as subjects used a precision-grasp to lift an object with a center of mass (CoM) on the left or right with the goal of minimizing tilting the object. The extent to which grasp configuration and lift force varied between left and right CoM conditions was manipulated by grasping the object collinearly (requiring a non-collinear force distribution) or non-collinearly (requiring more symmetrical forces). Bayesian variational representational similarity analyses on fMRI data assessed the evidence that a set of cortical and cerebellar regions were sensitive to grasp configuration or lift force differences between CoM conditions at differing time points during a grasp to lift action. In doing so, we reveal strong evidence that grasping and lift force are not represented by spatially separate functionally specialized regions, but by the same regions at differing time points. The coordinated grasp to lift effort is shown to be under dorsolateral (PMv and AIP) more than dorsomedial control, and under SPL7, somatosensory PSC, ventral LOC and cerebellar control. SIGNIFICANCE STATEMENT: Clumsy disasters such as spilling, dropping, and crushing during our daily interactions with objects are a rarity rather than the norm. These disasters are avoided in part as a result of our orchestrated anticipatory efforts to integrate and coordinate grasping and lifting of object interactions, all before the lift of an object even commences. How the brain orchestrates this integration process has been largely neglected by historical approaches independently and solely focusing on reaching and grasping and the neural principles that guide them. Here, we test the extent to which grasping and lifting are represented in a spatially or temporally distinct manner and identified strong evidence for the consecutive emergence of sensitivity to grasping, then lifting within the same region.
Affiliation(s)
- Michelle Marneweck: School of Psychological Sciences, Monash University, Clayton, Victoria 3800, Australia
- Scott T Grafton: Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA 93106, USA

39. Fairchild G, Snow JC.
Abstract
New fMRI experiments and machine learning are helping to identify how the mass of objects is processed in the brain.
Affiliation(s)
- Grant Fairchild: Department of Psychology, University of Nevada Reno, Reno, United States
- Jacqueline C Snow: Department of Psychology, University of Nevada Reno, Reno, United States

40. Schwettmann S, Tenenbaum JB, Kanwisher N. Invariant representations of mass in the human brain. eLife 2019;8:e46619. [PMID: 31845887; PMCID: PMC7007217; DOI: 10.7554/elife.46619]
Abstract
An intuitive understanding of physical objects and events is critical for successfully interacting with the world. Does the brain achieve this understanding by running simulations in a mental physics engine, which represents variables such as force and mass, or by analyzing patterns of motion without encoding underlying physical quantities? To investigate, we scanned participants with fMRI while they viewed videos of objects interacting in scenarios indicating their mass. Decoding analyses in brain regions previously implicated in intuitive physical inference revealed mass representations that generalized across variations in scenario, material, friction, and motion energy. These invariant representations were found during tasks without action planning, and tasks focusing on an orthogonal dimension (object color). Our results support an account of physical reasoning where abstract physical variables serve as inputs to a forward model of dynamics, akin to a physics engine, in parietal and frontal cortex.
Affiliation(s)
- Sarah Schwettmann: Department of Brain and Cognitive Sciences; Center for Brains, Minds, and Machines; and McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, United States
- Joshua B Tenenbaum: Department of Brain and Cognitive Sciences; Center for Brains, Minds, and Machines; McGovern Institute for Brain Research; and Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, United States
- Nancy Kanwisher: Department of Brain and Cognitive Sciences; Center for Brains, Minds, and Machines; and McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, United States

41. Gallivan JP, Chapman CS, Wolpert DM, Flanagan JR. Decision-making in sensorimotor control. Nat Rev Neurosci 2018;19:519-534. [PMID: 30089888; DOI: 10.1038/s41583-018-0045-9]
Abstract
Skilled sensorimotor interactions with the world result from a series of decision-making processes that determine, on the basis of information extracted during the unfolding sequence of events, which movements to make and when and how to make them. Despite this inherent link between decision-making and sensorimotor control, research into each of these two areas has largely evolved in isolation, and it is only fairly recently that researchers have begun investigating how they interact and, together, influence behaviour. Here, we review recent behavioural, neurophysiological and computational research that highlights the role of decision-making processes in the selection, planning and control of goal-directed movements in humans and nonhuman primates.
Affiliation(s)
- Jason P Gallivan: Centre for Neuroscience Studies and Department of Psychology, Queen's University, Kingston, Ontario, Canada; Department of Biomedical and Molecular Sciences, Queen's University, Kingston, Ontario, Canada
- Craig S Chapman: Faculty of Kinesiology, Sport, and Recreation and Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada
- Daniel M Wolpert: Department of Engineering, University of Cambridge, Cambridge, UK; Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA
- J Randall Flanagan: Centre for Neuroscience Studies and Department of Psychology, Queen's University, Kingston, Ontario, Canada

42. Garcea FE, Almeida J, Sims MH, Nunno A, Meyers SP, Li YM, Walter K, Pilcher WH, Mahon BZ. Domain-Specific Diaschisis: Lesions to Parietal Action Areas Modulate Neural Responses to Tools in the Ventral Stream. Cereb Cortex 2019;29:3168-3181. [PMID: 30169596; PMCID: PMC6933536; DOI: 10.1093/cercor/bhy183]
Abstract
Neural responses to small manipulable objects ("tools") in high-level visual areas in ventral temporal cortex (VTC) provide an opportunity to test how anatomically remote regions modulate ventral stream processing in a domain-specific manner. Prior patient studies indicate that grasp-relevant information can be computed about objects by dorsal stream structures independently of processing in VTC. Prior functional neuroimaging studies indicate privileged functional connectivity between regions of VTC exhibiting tool preferences and regions of parietal cortex supporting object-directed action. Here we test whether lesions to parietal cortex modulate tool preferences within ventral and lateral temporal cortex. We found that lesions to the left anterior intraparietal sulcus, a region that supports hand-shaping during object grasping and manipulation, modulate tool preferences in left VTC and in the left posterior middle temporal gyrus. Control analyses demonstrated that neural responses to "place" stimuli in left VTC were unaffected by lesions to parietal cortex, indicating domain-specific consequences for ventral stream neural responses in the setting of parietal lesions. These findings provide causal evidence that neural specificity for "tools" in ventral and lateral temporal lobe areas may arise, in part, from online inputs to VTC from parietal areas that receive inputs via the dorsal visual pathway.
Affiliation(s)
- Frank E Garcea: Department of Brain & Cognitive Sciences, Center for Language Sciences, and Center for Visual Science, University of Rochester, Rochester, NY, USA; Moss Rehabilitation Research Institute, Elkins Park, PA, USA
- Jorge Almeida: Faculty of Psychology and Educational Sciences and Proaction Laboratory, University of Coimbra, Coimbra, Portugal
- Maxwell H Sims: Department of Brain & Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Andrew Nunno: Department of Brain & Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Steven P Meyers: Departments of Imaging Sciences and Neurosurgery, University of Rochester Medical Center, Rochester, NY, USA
- Yan Michael Li: Department of Neurosurgery, University of Rochester Medical Center, Rochester, NY, USA
- Kevin Walter: Department of Neurosurgery, University of Rochester Medical Center, Rochester, NY, USA
- Webster H Pilcher: Department of Neurosurgery, University of Rochester Medical Center, Rochester, NY, USA
- Bradford Z Mahon: Department of Brain & Cognitive Sciences, Center for Language Sciences, and Center for Visual Science, University of Rochester, Rochester, NY, USA; Departments of Neurosurgery and Neurology, University of Rochester Medical Center, Rochester, NY, USA; Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA

43. Garcea FE, Buxbaum LJ. Gesturing tool use and tool transport actions modulates inferior parietal functional connectivity with the dorsal and ventral object processing pathways. Hum Brain Mapp 2019;40:2867-2883. [PMID: 30900321; DOI: 10.1002/hbm.24565]
Abstract
Interacting with manipulable objects (tools) requires the integration of diverse computations supported by anatomically remote regions. Previous functional neuroimaging research has demonstrated that the left supramarginal gyrus (SMG) exhibits functional connectivity to both ventral and dorsal pathways, supporting the integration of ventrally-mediated tool properties and conceptual knowledge with dorsally-computed volumetric and structural representations of tools. This architecture affords us the opportunity to test whether interactions between the left SMG, ventral visual pathway, and dorsal visual pathway are differentially modulated when participants plan and generate tool-directed gestures emphasizing functional manipulation (tool use gesturing) or structure-based grasping (tool transport gesturing). We found that functional connectivity between the left SMG, ventral temporal cortex (bilateral fusiform gyri), and dorsal visual pathway (left superior parietal lobule/posterior intraparietal sulcus) was maximal for tool transport planning and gesturing, whereas functional connectivity between the left SMG, left ventral anterior temporal lobe, and left frontal operculum was maximal for tool use planning and gesturing. These results demonstrate that functional connectivity to the left SMG is differentially modulated by tool use and tool transport gesturing, suggesting that distinct tool features computed by the two object processing pathways are integrated in the parietal lobe in the service of tool-directed action.
Affiliation(s)
- Frank E Garcea: Moss Rehabilitation Research Institute, Albert Einstein Healthcare Network, Elkins Park, Pennsylvania; Cognitive Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania
- Laurel J Buxbaum: Moss Rehabilitation Research Institute, Albert Einstein Healthcare Network, Elkins Park, Pennsylvania; Department of Rehabilitation Medicine, Thomas Jefferson University, Philadelphia, Pennsylvania

44. Saccone EJ, Chouinard PA. The influence of size in weight illusions is unique relative to other object features. Psychon Bull Rev 2019;26:77-89. [PMID: 30187441; DOI: 10.3758/s13423-018-1519-5]
Abstract
Research into weight illusions has provided valuable insight into the functioning of the human perceptual system. Associations between the weight of an object and its other features, such as its size, material, density, conceptual information, or identity, influence our expectations and perceptions of weight. Earlier accounts of weight illusions underscored the importance of previous interactions with objects in the formation of these associations. In this review, we propose a theory that the influence of size on weight perception could be driven by innate and phylogenetically older mechanisms, and that it is therefore more deep-seated than the effects of other features that influence our perception of an object's weight. To do so, we first consider the different associations that exist between the weight of an object and its other features and discuss how different object features influence weight perception in different weight illusions. After this, we consider the cognitive, neurological, and developmental evidence, highlighting the uniqueness of size-weight associations and how they might be reinforced rather than driven by experience alone. In the process, we propose a novel neuroanatomical account of how size might influence weight perception differently than other object features do.
Affiliation(s)
- Elizabeth J Saccone: School of Psychology and Public Health, La Trobe University, Edwards Road, Flora Hill, Victoria 3552, Australia
- Philippe A Chouinard: School of Psychology and Public Health, La Trobe University, Edwards Road, Flora Hill, Victoria 3552, Australia

45. Paulun VC, Buckingham G, Goodale MA, Fleming RW. The material-weight illusion disappears or inverts in objects made of two materials. J Neurophysiol 2019;121:996-1010. [PMID: 30673359; PMCID: PMC6520622; DOI: 10.1152/jn.00199.2018]
Abstract
The material-weight illusion (MWI) occurs when an object that looks heavy (e.g., stone) and one that looks light (e.g., Styrofoam) have the same mass. When such stimuli are lifted, the heavier-looking object feels lighter than the lighter-looking object, presumably because well-learned priors about the density of different materials are violated. We examined whether a similar illusion occurs when a certain weight distribution is expected (such as the metal end of a hammer being heavier), but weight is uniformly distributed. In experiment 1, participants lifted bipartite objects that appeared to be made of two materials (combinations of stone, Styrofoam, and wood) but were manipulated to have a uniform weight distribution. Most participants experienced an inverted MWI (i.e., the heavier-looking side felt heavier), suggesting an integration of incoming sensory information with density priors. However, a replication of the classic MWI was found when the objects appeared to be uniformly made of just one of the materials (experiment 2). Both illusions seemed to be independent of the forces used when the objects were lifted. When lifting bipartite objects but asked to judge the weight of the whole object, participants experienced no illusion (experiment 3). In experiment 4, we investigated weight perception in objects with a nonuniform weight distribution and again found evidence for an integration of prior and sensory information. Taken together, our seemingly contradictory results challenge most theories about the MWI. However, Bayesian integration of competing density priors with the likelihood of incoming sensory information may explain the opposing illusions. NEW & NOTEWORTHY: We report a novel weight illusion that contradicts all current explanations of the material-weight illusion: When lifting an object composed of two materials, the heavier-looking side feels heavier, even when the true weight distribution is uniform. The opposite (classic) illusion is found when the same materials are lifted in two separate objects. Identifying the common mechanism underlying both illusions will have implications for perception more generally. A potential candidate is Bayesian inference with competing priors.
Affiliation(s)
- Vivian C Paulun: Department of Psychology, University of Giessen, Giessen, Germany; Brain and Mind Institute, Western University, London, Ontario, Canada
- Gavin Buckingham: Department of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, Exeter, United Kingdom
- Melvyn A Goodale: Brain and Mind Institute, Western University, London, Ontario, Canada
- Roland W Fleming: Department of Psychology, University of Giessen, Giessen, Germany

46. Decoding Brain States for Planning Functional Grasps of Tools: A Functional Magnetic Resonance Imaging Multivoxel Pattern Analysis Study. J Int Neuropsychol Soc 2018;24:1013-1025. [PMID: 30196800; DOI: 10.1017/s1355617718000590]
Abstract
Objectives: We used multivoxel pattern analysis (MVPA) to investigate neural selectivity for grasp planning within the left-lateralized temporo-parieto-frontal network of areas (praxis representation network, PRN) typically associated with tool-related actions, as studied with traditional neuroimaging contrasts. Methods: We used data from 20 participants whose task was to plan functional grasps of tools, with either right or left hands. Region of interest and whole-brain searchlight analyses were performed to show task-related neural patterns. Results: MVPA revealed significant contributions to functional grasp planning from the anterior intraparietal sulcus (aIPS) and its immediate vicinities, supplemented by inputs from posterior subdivisions of IPS, and the ventral lateral occipital complex (vLOC). Moreover, greater local selectivity was demonstrated in areas near the superior parieto-occipital cortex and dorsal premotor cortex, putatively forming the dorso-dorsal stream. Conclusions: A contribution from aIPS, consistent with its role in prospective grasp formation and/or encoding of relevant tool properties (e.g., potential graspable parts), is likely to accompany the retrieval of manipulation and/or mechanical knowledge subserved by the supramarginal gyrus for achieving action goals. An involvement of vLOC indicates that MVPA is particularly sensitive to coding of object properties, their identities and even functions, for a support of grip formation. Finally, the engagement of the superior parieto-frontal regions as revealed by MVPA is consistent with their selectivity for transient features of tools (i.e., variable affordances) for anticipatory hand postures. These outcomes support the notion that, compared to traditional approaches, MVPA can reveal more fine-grained patterns of neural activity. (JINS, 2018, 24, 1013-1025).

47. Neural Mechanisms of Material Perception: Quest on Shitsukan. Neuroscience 2018;392:329-347. [PMID: 30213767; DOI: 10.1016/j.neuroscience.2018.09.001]
Abstract
In recent years, a growing body of research has addressed the nature and mechanism of material perception. Material perception entails perceiving and recognizing a material, surface quality or internal state of an object based on sensory stimuli such as visual, tactile, and/or auditory sensations. This process is ongoing in every aspect of daily life. We can, for example, easily distinguish whether an object is made of wood or metal, or whether a surface is rough or smooth. Judging whether the ground is wet or dry or whether a fish is fresh also involves material perception. Information obtained through material perception can be used to govern actions toward objects and to make decisions about whether to approach an object or avoid it. Because the physical processes leading to sensory signals related to material perception are complicated, it has been difficult to manipulate experimental stimuli in a rigorous manner. However, that situation is now changing thanks to advances in technology and knowledge in related fields. In this article, we will review what is currently known about the neural mechanisms responsible for material perception. We will show that cortical areas in the ventral visual pathway are strongly involved in material perception. Our main focus is on vision, but every sensory modality is involved in material perception. Information obtained through different sensory modalities is closely linked in material perception. Such cross-modal processing is another important feature of material perception, and will also be covered in this review.

48. Buckingham G, Holler D, Michelakakis EE, Snow JC. Preserved Object Weight Processing after Bilateral Lateral Occipital Complex Lesions. J Cogn Neurosci 2018;30:1683-1690. [PMID: 30024326; DOI: 10.1162/jocn_a_01314]
Abstract
Object interaction requires knowledge of the weight of an object, as well as its shape. The lateral occipital complex (LOC), an area within the ventral visual pathway, is well known to be critically involved in processing visual shape information. Recently, however, LOC has also been implicated in coding object weight before grasping, a result that is surprising because weight is a nonvisual object property that is more relevant for motor interaction than visual perception. Here, we examined the causal role of LOC in perceiving heaviness and in determining appropriate fingertip forces during object lifting. We studied perceptions of heaviness and lifting behavior in a neuropsychological patient (M.C.) who has large bilateral occipitotemporal lesions that include LOC. We compared the patient's performance to a group of 18 neurologically healthy age-matched controls. Participants were asked to lift and report the perceived heaviness of a set of equally weighted spherical objects of various sizes, stimuli that typically induce the size-weight illusion, in which the smaller objects feel heavier than the larger objects despite having identical mass. Despite her ventral stream lesions, M.C. experienced a robust size-weight illusion induced by visual cues to object volume, and the magnitude of the illusion in M.C. was comparable to age-matched controls. Similarly, M.C. evinced predictive fingertip force scaling to visual size cues during her initial lifts of the objects that were well within the normal range. These single-case neuropsychological findings suggest that LOC is unlikely to play a causal role in computing object weight.

49. Garcea FE, Chen Q, Vargas R, Narayan DA, Mahon BZ. Task- and domain-specific modulation of functional connectivity in the ventral and dorsal object-processing pathways. Brain Struct Funct 2018;223:2589-2607. [PMID: 29536173; PMCID: PMC6252262; DOI: 10.1007/s00429-018-1641-1]
Abstract
A whole-brain network of regions collectively supports the ability to recognize and use objects: the Tool Processing Network. Little is known about how functional interactions within the Tool Processing Network are modulated in a task-dependent manner. We designed an fMRI experiment in which participants were required to either generate object pantomimes or to carry out a picture matching task over the same images of tools, while holding all aspects of stimulus presentation constant across the tasks. The Tool Processing Network was defined with an independent functional localizer, and functional connectivity within the network was measured during the pantomime and picture matching tasks. Relative to tool picture matching, tool pantomiming led to an increase in functional connectivity between ventral stream regions and left parietal and frontal-motor areas; in contrast, the matching task was associated with an increase in functional connectivity among regions in ventral temporo-occipital cortex, and between ventral temporal regions and the left inferior parietal lobule. Graph-theory analyses over the functional connectivity data indicated that the left premotor cortex and left lateral occipital complex were hub-like (exhibited high betweenness centrality) during tool pantomiming, while ventral stream regions (left medial fusiform gyrus and left posterior middle temporal gyrus) were hub-like during the picture matching task. These results demonstrate task-specific modulation of functional interactions among a common set of regions, and indicate dynamic coupling of anatomically remote regions in a task-dependent manner.
Collapse
Affiliation(s)
- Frank E Garcea
- Department of Brain and Cognitive Sciences, Meliora Hall, University of Rochester, Rochester, NY, 14627-0268, USA
- Center for Visual Science, University of Rochester, Rochester, USA
- Moss Rehabilitation Research Institute, Elkins Park, PA, USA
| | - Quanjing Chen
- Department of Brain and Cognitive Sciences, Meliora Hall, University of Rochester, Rochester, NY, 14627-0268, USA
| | - Roger Vargas
- School of Mathematical Sciences, Rochester Institute of Technology, Rochester, USA
| | - Darren A Narayan
- School of Mathematical Sciences, Rochester Institute of Technology, Rochester, USA
| | - Bradford Z Mahon
- Department of Brain and Cognitive Sciences, Meliora Hall, University of Rochester, Rochester, NY, 14627-0268, USA
- Center for Visual Science, University of Rochester, Rochester, USA
- Department of Neurosurgery, University of Rochester Medical Center, Rochester, USA
- Department of Neurology, University of Rochester Medical Center, Rochester, USA
| |
Collapse
|
50
|
Geers L, Pesenti M, Andres M. Visual illusions modify object size estimates for prospective action judgements. Neuropsychologia 2018; 117:211-221. [PMID: 29883576 DOI: 10.1016/j.neuropsychologia.2018.06.003] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2018] [Revised: 05/16/2018] [Accepted: 06/04/2018] [Indexed: 11/18/2022]
Abstract
How does the eye guide the hand in an ever-changing world? The perception-action model posits that visually guided actions rely on object size estimates that are computed from an egocentric perspective, independently of the visual context. Accordingly, adjusting grip aperture to object size should be resistant to illusions that emerge from the contrast between a target and surrounding elements. However, experimental studies have yielded discrepant results that remain difficult to explain. Visual and proprioceptive information about the acting hand are potential sources of ambiguity in previous studies, because the on-line corrections they allow may mask the illusory effect. To overcome this problem, we investigated the effect of the Ebbinghaus illusion, a visual illusion in which the perceived size of a central circle varies with the size of the surrounding circles, on prospective action judgements. Participants had to decide whether they thought they would be able to grasp the central circle of an Ebbinghaus display between their index finger and thumb, without moving their hands. A control group had to judge the size of the central circle relative to a standard. Experiment 1 showed that the illusion affected perceptual and grasping judgements similarly. We further investigated the interaction between visual illusions and grip aperture representation by examining the effect of concurrent motor tasks on grasping judgements. Participants underestimated their ability to grasp the circle when squeezing a ball between their index finger and thumb (Experiment 2), whereas they overestimated their ability when their fingers were spread apart (Experiment 3). The illusion also affected the grasping judgement task and modulated the interference of the squeezing movement, with the illusion of largeness enhancing the underestimation of one's grasping ability observed in Experiment 2. We conclude that visual context and body posture both influence action anticipation, and that perception and action support each other.
Collapse
Affiliation(s)
- Laurie Geers
- Psychological Sciences Research Institute, Université catholique de Louvain, Place Cardinal Mercier 10, Louvain-la-Neuve, Belgium.
| | - Mauro Pesenti
- Psychological Sciences Research Institute, Université catholique de Louvain, Place Cardinal Mercier 10, Louvain-la-Neuve, Belgium; Institute of Neuroscience, Université catholique de Louvain, Avenue Mounier 53, Brussels, Belgium.
| | - Michael Andres
- Psychological Sciences Research Institute, Université catholique de Louvain, Place Cardinal Mercier 10, Louvain-la-Neuve, Belgium; Institute of Neuroscience, Université catholique de Louvain, Avenue Mounier 53, Brussels, Belgium.
| |
Collapse
|