1
Blauch NM, Plaut DC, Vin R, Behrmann M. Individual variation in the functional lateralization of human ventral temporal cortex: Local competition and long-range coupling. bioRxiv 2024:2024.10.15.618268. PMID: 39464049; PMCID: PMC11507683; DOI: 10.1101/2024.10.15.618268.
Abstract
The ventral temporal cortex (VTC) of the human cerebrum is critically engaged in computations related to high-level vision. One intriguing aspect of this region is its asymmetric organization and functional lateralization. Notably, in the VTC, neural responses to words are stronger in the left hemisphere, whereas neural responses to faces are stronger in the right hemisphere. Converging evidence has suggested that left-lateralized word responses emerge to couple efficiently with left-lateralized frontotemporal language regions, but evidence is more mixed regarding the sources of the right-lateralization for face perception. Here, we use individual differences as a tool to adjudicate between three theories of VTC organization arising from: 1) local competition between words and faces, 2) local competition between faces and other categories, 3) long-range coupling between VTC and frontotemporal areas subject to their own local competition. First, in an in-house functional MRI experiment, we demonstrated that individual differences in laterality are both substantial and reliable within a right-handed population of young adults. We found no (anti-)correlation in the laterality of word and face selectivity relative to object responses, and a positive correlation when using selectivity relative to a fixation baseline, challenging ideas of local competition between words and faces. We next examined broader local competition with faces using the large-scale Human Connectome Project (HCP) dataset. Face and tool laterality were significantly anti-correlated, while face and body laterality were positively correlated, consistent with the idea that generic local representational competition and cooperation may shape face lateralization. Last, we assessed the role of long-range coupling in the development of VTC laterality. Within our in-house experiment, substantial correlation was evident between VTC text laterality and several other nodes of a distributed text-processing circuit. In the HCP data, VTC face laterality was both negatively correlated with frontotemporal language laterality, and positively correlated with social perception laterality in the same areas, consistent with a long-range coupling effect between face and social processing representations, driven by local competition between language and social processing. We conclude that both local and long-range interactions shape the heterogeneous hemispheric specializations in high-level visual cortex.
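As a rough illustration of the individual-differences logic described in this abstract, the following sketch (hypothetical numbers only; not the authors' code, sample, or selectivity measures) shows how a per-participant laterality index can be computed from left- and right-hemisphere selectivity values and how word and face laterality can then be correlated across individuals.

```python
# Minimal sketch (hypothetical data, not the study's): laterality index (LI) per
# participant and the across-subject correlation between word and face laterality.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 30  # hypothetical sample size

# Hypothetical selectivity estimates (e.g., t-values for words > objects and
# faces > objects) in left- and right-hemisphere VTC regions of interest.
word_left, word_right = rng.normal(3, 1, n_subjects), rng.normal(2, 1, n_subjects)
face_left, face_right = rng.normal(2, 1, n_subjects), rng.normal(3, 1, n_subjects)

def laterality_index(left, right):
    """LI in [-1, 1]; positive = left-lateralized, negative = right-lateralized."""
    return (left - right) / (np.abs(left) + np.abs(right))

li_words = laterality_index(word_left, word_right)
li_faces = laterality_index(face_left, face_right)

# A local word/face competition account predicts an anti-correlation across
# individuals; the study reports no such anti-correlation relative to objects.
r, p = stats.pearsonr(li_words, li_faces)
print(f"word-face laterality correlation: r = {r:.2f}, p = {p:.3f}")
```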
Affiliation(s)
- Nicholas M Blauch
- Program in Neural Computation, Carnegie Mellon University
- Neuroscience Institute, Carnegie Mellon University
- Department of Psychology, Harvard University
- David C Plaut
- Department of Psychology, Carnegie Mellon University
- Neuroscience Institute, Carnegie Mellon University
- Raina Vin
- Department of Psychology, Carnegie Mellon University
- Neurosciences Graduate Program, Yale University
- Marlene Behrmann
- Department of Psychology, Carnegie Mellon University
- Neuroscience Institute, Carnegie Mellon University
- Department of Ophthalmology, University of Pittsburgh
2
El Rassi Y, Handjaras G, Perciballi C, Leo A, Papale P, Corbetta M, Ricciardi E, Betti V. A visual representation of the hand in the resting somatomotor regions of the human brain. Sci Rep 2024; 14:18298. PMID: 39112629; PMCID: PMC11306329; DOI: 10.1038/s41598-024-69248-z.
Abstract
Hand visibility affects motor control, perception, and attention, as visual information is integrated into an internal model of somatomotor control. Spontaneous brain activity, i.e., at rest, in the absence of an active task, is correlated among somatomotor regions that are jointly activated during motor tasks. Recent studies suggest that spontaneous activity patterns not only replay task activation patterns but also maintain a model of the body's and environment's statistical regularities (priors), which may be used to predict upcoming behavior. Here, we test whether spontaneous activity in the human somatomotor cortex as measured using fMRI is modulated by visual stimuli that display hands vs. non-hand stimuli and by the use/action they represent. A multivariate pattern analysis was performed to examine the similarity between spontaneous activity patterns and task-evoked patterns to the presentation of natural hands, robot hands, gloves, or control stimuli (food). In the left somatomotor cortex, we observed a stronger (multivoxel) spatial correlation between resting state activity and natural hand picture patterns compared to other stimuli. No task-rest similarity was found in the visual cortex. Spontaneous activity patterns in somatomotor brain regions code for the visual representation of human hands and their use.
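To make the pattern-similarity analysis concrete, here is a minimal sketch (hypothetical patterns and ROI size; not the authors' preprocessing or statistics) of computing the spatial, across-voxel correlation between a resting-state pattern and task-evoked patterns for each stimulus type.

```python
# Minimal sketch (assumed values, not the study's data): rest-task multivoxel
# spatial correlation for each task condition within a somatomotor ROI.
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 500  # hypothetical number of voxels in the ROI

# Hypothetical z-scored multivoxel patterns.
rest_pattern = rng.normal(size=n_voxels)
task_patterns = {
    "natural_hand": rest_pattern * 0.4 + rng.normal(size=n_voxels),  # built to correlate
    "robot_hand": rng.normal(size=n_voxels),
    "glove": rng.normal(size=n_voxels),
    "food": rng.normal(size=n_voxels),
}

def spatial_correlation(a, b):
    """Pearson correlation across voxels between two activity patterns."""
    return np.corrcoef(a, b)[0, 1]

for condition, pattern in task_patterns.items():
    print(f"{condition}: r = {spatial_correlation(rest_pattern, pattern):.2f}")
```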
Affiliation(s)
- Yara El Rassi
- IMT School for Advanced Studies Lucca, 55100, Lucca, Italy
- Andrea Leo
- IMT School for Advanced Studies Lucca, 55100, Lucca, Italy
- Department of Translational Research and Advanced Technologies in Medicine and Surgery, University of Pisa, 56126, Pisa, Italy
- Paolo Papale
- IMT School for Advanced Studies Lucca, 55100, Lucca, Italy
- Department of Vision & Cognition, Netherlands Institute for Neuroscience (KNAW), Meibergdreef 47, 1105 BA, Amsterdam, The Netherlands
- Maurizio Corbetta
- Department of Neuroscience and Padova Neuroscience Center (PNC), University of Padua, 35131, Padua, Italy
- Venetian Institute of Molecular Medicine (VIMM), 35129, Padua, Italy
- Viviana Betti
- IRCCS Fondazione Santa Lucia, 00179, Rome, Italy
- Department of Psychology, Sapienza University of Rome, 00185, Rome, Italy
3
Bougou V, Vanhoyland M, Bertrand A, Van Paesschen W, Op De Beeck H, Janssen P, Theys T. Neuronal tuning and population representations of shape and category in human visual cortex. Nat Commun 2024; 15:4608. PMID: 38816391; PMCID: PMC11139926; DOI: 10.1038/s41467-024-49078-3.
Abstract
Object recognition and categorization are essential cognitive processes which engage considerable neural resources in the human ventral visual stream. However, the tuning properties of human ventral stream neurons for object shape and category are virtually unknown. We performed large-scale recordings of spiking activity in human Lateral Occipital Complex in response to stimuli in which the shape dimension was dissociated from the category dimension. Consistent with studies in nonhuman primates, the neuronal representations were primarily shape-based, although we also observed category-like encoding for images of animals. Surprisingly, linear decoders could reliably classify stimulus category even in data sets that were entirely shape-based. In addition, many recording sites showed an interaction between shape and category tuning. These results represent a detailed study on shape and category coding at the neuronal level in the human ventral visual stream, furnishing essential evidence that reconciles human imaging and macaque single-cell studies.
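A minimal sketch of the kind of cross-validated linear decoding described above (simulated spike counts and labels; the trial counts, site counts, and classifier settings are assumptions, not those of the recorded dataset):

```python
# Minimal sketch (hypothetical data): a cross-validated linear decoder that
# classifies stimulus category (animal vs. non-animal) from multi-site spike counts.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_sites = 200, 60                                  # hypothetical sizes
X = rng.poisson(5, size=(n_trials, n_sites)).astype(float)   # spike counts per trial/site
y = rng.integers(0, 2, n_trials)                             # 0 = non-animal, 1 = animal
X[y == 1, :10] += 2                                          # inject a weak category signal

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)                    # 5-fold cross-validated accuracy
print(f"decoding accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```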
Affiliation(s)
- Vasiliki Bougou
- Research Group of Experimental Neurosurgery and Neuroanatomy, Department of Neurosciences, KU Leuven and the Leuven Brain Institute, Leuven, Belgium
- Laboratory for Neuro- and Psychophysiology, Research Group Neurophysiology, Department of Neurosciences, KU Leuven and the Leuven Brain Institute, Leuven, Belgium
- Michaël Vanhoyland
- Research Group of Experimental Neurosurgery and Neuroanatomy, Department of Neurosciences, KU Leuven and the Leuven Brain Institute, Leuven, Belgium
- Laboratory for Neuro- and Psychophysiology, Research Group Neurophysiology, Department of Neurosciences, KU Leuven and the Leuven Brain Institute, Leuven, Belgium
- Department of Neurosurgery, University Hospitals Leuven, Leuven, Belgium
- Wim Van Paesschen
- Department of Neurology, University Hospitals Leuven, Leuven, Belgium
- Laboratory for Epilepsy Research, KU Leuven, Leuven, Belgium
- Hans Op De Beeck
- Laboratory Biological Psychology, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Peter Janssen
- Laboratory for Neuro- and Psychophysiology, Research Group Neurophysiology, Department of Neurosciences, KU Leuven and the Leuven Brain Institute, Leuven, Belgium
- Tom Theys
- Research Group of Experimental Neurosurgery and Neuroanatomy, Department of Neurosciences, KU Leuven and the Leuven Brain Institute, Leuven, Belgium
- Department of Neurosurgery, University Hospitals Leuven, Leuven, Belgium
4
Karlsson EM, Carey DP. Hemispheric asymmetry of hand and tool perception in left- and right-handers with known language dominance. Neuropsychologia 2024; 196:108837. PMID: 38428518; DOI: 10.1016/j.neuropsychologia.2024.108837.
Abstract
Regions in the brain that are selective for images of hands and tools have been suggested to be lateralised to the left hemisphere of right-handed individuals. In left-handers, many functions related to tool use or tool pantomime may also depend more on the left hemisphere. This result seems surprising, given that the dominant hand of these individuals is controlled by the right hemisphere. One explanation is that the left hemisphere is dominant for speech and language in the majority of left-handers, suggesting a supraordinate control system for complex motor sequencing that is required for skilled tool use, as well as for speech. In the present study, we examine if this left-hemispheric specialisation extends to perception of hands and tools in left- and right-handed individuals. We, crucially, also include a group of left-handers with right-hemispheric language dominance to examine their asymmetry biases. The results suggest that tools lateralise to the left hemisphere in most right-handed individuals with left-hemispheric language dominance. Tools also lateralise to the language dominant hemisphere in right-hemispheric language dominant left-handers, but the results for left-hemispheric language dominant left-handers are more varied, and no clear bias towards one hemisphere is found. Hands did not show a group-level asymmetry pattern in any of the groups. These results suggest a more complex picture regarding hemispheric overlap of hand and tool representations, and indicate that hemispheric biases for the visual appearance of tools may be driven in part by both language dominance and the hemisphere that controls the motor-dominant hand.
Affiliation(s)
- Emma M Karlsson
- Institute of Cognitive Neuroscience, School of Psychology and Sport Science, Bangor University, Bangor, UK; Department of Experimental Clinical and Health Psychology, Ghent University, Ghent, Belgium
- David P Carey
- Institute of Cognitive Neuroscience, School of Psychology and Sport Science, Bangor University, Bangor, UK
5
Ip K, Kusyk N, Stephen ID, Brooks KR. Did you skip leg day? The neural mechanisms of muscle perception for body parts. Cortex 2024; 171:75-89. PMID: 37980724; DOI: 10.1016/j.cortex.2023.10.006.
Abstract
While the neural mechanisms underpinning the perception of muscularity are poorly understood, recent progress has been made using the psychophysical technique of visual adaptation. Prolonged visual exposure to high (low) muscularity bodies causes subsequently viewed bodies to appear less (more) muscular, revealing a recalibration of the neural populations encoding muscularity. Here, we use visual adaptation to further elucidate the tuning properties of the neural processes underpinning muscle perception for the upper and lower halves of the body. Participants manipulated the apparent muscularity of upper and lower bodies until they appeared 'normal', prior to and following exposure to a series of top/bottom halves of bodies that were either high or low in muscularity. In Experiment 1, participants were adapted to isolated own-gender body halves from one of four conditions: increased (muscularity) upper (body half), increased lower, decreased upper, or decreased lower. Despite the presence of muscle aftereffects when the body halves the participants viewed and manipulated were congruent, there was only weak evidence of muscle aftereffect transfer between the upper and lower halves of the body. Aftereffects were significantly weaker when body halves were incongruent, implying minimal overlap in the neural mechanisms encoding muscularity for body half. Experiment 2 examined the generalisability of Experiment 1's findings in a more ecologically valid context using whole-body stimuli, producing a similar pattern of results as Experiment 1, but with no evidence of cross-adaptation. Taken together, the findings are most consistent with muscle-encoding neural populations that are body-half selective. As visual adaptation has been implicated in cases of body size and shape misperception, the present study furthers our current understanding of how these perceptual inaccuracies, particularly those involving muscularity, are developed, maintained, and may potentially be treated.
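The adaptation logic can be illustrated with a short sketch (hypothetical settings on an arbitrary muscularity scale; not the authors' stimuli or analysis code) that quantifies the aftereffect as the pre-to-post shift in each participant's 'normal-looking' setting and compares congruent with incongruent body halves.

```python
# Minimal sketch (hypothetical values): adaptation aftereffect = post - pre shift in
# the "normal" muscularity setting, compared between congruent and incongruent halves.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 24  # hypothetical number of participants

pre = rng.normal(50, 5, n)                    # baseline "normal" setting
post_congruent = pre + rng.normal(4, 3, n)    # larger shift when adapt/test halves match
post_incongruent = pre + rng.normal(1, 3, n)  # weaker transfer across body halves

aftereffect_congruent = post_congruent - pre
aftereffect_incongruent = post_incongruent - pre
t, p = stats.ttest_rel(aftereffect_congruent, aftereffect_incongruent)
print(f"congruent vs. incongruent aftereffect: t({n - 1}) = {t:.2f}, p = {p:.4f}")
```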
Affiliation(s)
- Keefe Ip
- School of Psychological Sciences, Macquarie University, Sydney, NSW, Australia
- Nicole Kusyk
- School of Psychological Sciences, Macquarie University, Sydney, NSW, Australia
- Ian D Stephen
- NTU Psychology, Nottingham Trent University, Nottingham, England, UK
- Kevin R Brooks
- School of Psychological Sciences, Macquarie University, Sydney, NSW, Australia; Perception and Action Research Centre (PARC), Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, NSW, Australia; Lifespan Health & Wellbeing Research Centre, Macquarie University, Sydney, NSW, Australia
6
Gandolfo M, Abassi E, Balgova E, Downing PE, Papeo L, Koldewyn K. Converging evidence that left extrastriate body area supports visual sensitivity to social interactions. Curr Biol 2024; 34:343-351.e5. PMID: 38181794; DOI: 10.1016/j.cub.2023.12.009.
Abstract
Navigating our complex social world requires processing the interactions we observe. Recent psychophysical and neuroimaging studies provide parallel evidence that the human visual system may be attuned to efficiently perceive dyadic interactions. This work implies, but has not yet demonstrated, that activity in body-selective cortical regions causally supports efficient visual perception of interactions. We adopt a multi-method approach to close this important gap. First, using a large fMRI dataset (n = 92), we found that the left hemisphere extrastriate body area (EBA) responds more to face-to-face than non-facing dyads. Second, we replicated a behavioral marker of visual sensitivity to interactions: categorization of facing dyads is more impaired by inversion than non-facing dyads. Third, in a pre-registered experiment, we used fMRI-guided transcranial magnetic stimulation to show that online stimulation of the left EBA, but not a nearby control region, abolishes this selective inversion effect. Activity in left EBA, thus, causally supports the efficient perception of social interactions.
Affiliation(s)
- Marco Gandolfo
- Donders Institute, Radboud University, Nijmegen 6525GD, the Netherlands; Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK
- Etienne Abassi
- Institut des Sciences Cognitives Marc Jeannerod, Lyon 69500, France
- Eva Balgova
- Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK; Department of Psychology, Aberystwyth University, Aberystwyth SY23 3UX, Ceredigion, UK
- Paul E Downing
- Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK
- Liuba Papeo
- Institut des Sciences Cognitives Marc Jeannerod, Lyon 69500, France
- Kami Koldewyn
- Department of Psychology, Bangor University, Bangor LL572AS, Gwynedd, UK
7
Leferink CA, DeKraker J, Brunec IK, Köhler S, Moscovitch M, Walther DB. Organization of pRF size along the AP axis of the hippocampus and adjacent medial temporal cortex is related to specialization for scenes versus faces. Cereb Cortex 2024; 34:bhad429. PMID: 37991278; DOI: 10.1093/cercor/bhad429.
Abstract
The hippocampus is largely recognized for its integral contributions to memory processing. By contrast, its role in perceptual processing remains less clear. Hippocampal properties vary along the anterior-posterior (AP) axis. Based on past research suggesting a gradient in the scale of features processed along the AP extent of the hippocampus, the representations have been proposed to vary as a function of granularity along this axis. One way to quantify such granularity is with population receptive field (pRF) size measured during visual processing, which has so far received little attention. In this study, we compare the pRF sizes within the hippocampus to its activation for images of scenes versus faces. We also measure these functional properties in surrounding medial temporal lobe (MTL) structures. Consistent with past research, we find pRFs to be larger in the anterior than in the posterior hippocampus. Critically, our analysis of surrounding MTL regions, the perirhinal cortex, entorhinal cortex, and parahippocampal cortex shows a similar correlation between scene sensitivity and larger pRF size. These findings provide conclusive evidence for a tight relationship between the pRF size and the sensitivity to image content in the hippocampus and adjacent medial temporal cortex.
Affiliation(s)
- Charlotte A Leferink
- Department of Psychology, University of Toronto, 100 St George Street, Toronto, ON M5S 3G3, Canada
- Jordan DeKraker
- Department of Psychology, Western University, Social Science Centre Rm 7418, London, ON N6A 3K7, Canada
- Iva K Brunec
- Department of Psychology, University of Pennsylvania, 425 S. University Ave, Stephen A. Levin Bldg, Philadelphia, PA 19104-6241, United States
- Stefan Köhler
- Department of Psychology, Western University, Social Science Centre Rm 7418, London, ON N6A 3K7, Canada
- Morris Moscovitch
- Department of Psychology, University of Toronto, 100 St George Street, Toronto, ON M5S 3G3, Canada
- Rotman Research Institute, Baycrest Centre for Geriatric Care, 3560 Bathurst Street, Toronto, ON M6A 2E1, Canada
- Dirk B Walther
- Department of Psychology, University of Toronto, 100 St George Street, Toronto, ON M5S 3G3, Canada
- Rotman Research Institute, Baycrest Centre for Geriatric Care, 3560 Bathurst Street, Toronto, ON M6A 2E1, Canada
8
Ambroziak KB, Bofill MA, Azañón E, Longo MR. Perceptual aftereffects of adiposity transfer from hands to whole bodies. Exp Brain Res 2023; 241:2371-2379. PMID: 37620437; DOI: 10.1007/s00221-023-06686-7.
Abstract
Adaptation aftereffects for features such as identity and gender have been shown to transfer between faces and bodies, and faces and body parts, i.e. hands. However, no studies have investigated transfer of adaptation aftereffects between whole bodies and body parts. The present study investigated whether visual adaptation aftereffects transfer between hands and whole bodies in the context of adiposity judgements (i.e. how thin or fat a body is). On each trial, participants had to decide whether the body they saw was thinner or fatter than average. Participants performed the task before and after exposure to a thin/fat hand. Consistent with body adaptation studies, after exposure to a slim hand participants judged subsequently presented bodies to be fatter than after adaptation to a fat hand. These results suggest that there may be links between visual representations of body adiposity for whole bodies and body parts.
Affiliation(s)
- Klaudia B Ambroziak
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK
- Marina Araujo Bofill
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK
- Elena Azañón
- Institute of Psychology, Otto-von-Guericke University, Universitätsplatz 2, 39016, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Universitätsplatz 2, 39106, Magdeburg, Germany
- Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Brenneckestraße 6, 39118, Magdeburg, Germany
- Matthew R Longo
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK
9
Moreau Q, Parrotta E, Pesci UG, Era V, Candidi M. Early categorization of social affordances during the visual encoding of bodily stimuli. Neuroimage 2023; 274:120151. PMID: 37191657; DOI: 10.1016/j.neuroimage.2023.120151.
Abstract
Interpersonal interactions rely on various communication channels, both verbal and non-verbal, through which information regarding one's intentions and emotions are perceived. Here, we investigated the neural correlates underlying the visual processing of hand postures conveying social affordances (i.e., hand-shaking), compared to control stimuli such as hands performing non-social actions (i.e., grasping) or showing no movement at all. Combining univariate and multivariate analysis on electroencephalography (EEG) data, our results indicate that occipito-temporal electrodes show early differential processing of stimuli conveying social information compared to non-social ones. First, the amplitude of the Early Posterior Negativity (EPN, an Event-Related Potential related to the perception of body parts) is modulated differently during the perception of social and non-social content carried by hands. Moreover, our multivariate classification analysis (MultiVariate Pattern Analysis - MVPA) expanded the univariate results by revealing early (<200 ms) categorization of social affordances over occipito-parietal sites. In conclusion, we provide new evidence suggesting that the encoding of socially relevant hand gestures is categorized in the early stages of visual processing.
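The time-resolved MVPA approach can be sketched as follows (simulated EEG epochs; channel count, time axis, and classifier are assumptions rather than the study's parameters): a linear classifier is trained and cross-validated at each time point to separate social from non-social hand postures, and the accuracy time course shows when the two categories become decodable.

```python
# Minimal sketch (simulated data, not the recorded EEG): time-resolved decoding of
# social (hand-shake) vs. non-social (grasp) trials with a linear classifier.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_channels, n_times = 120, 64, 50       # hypothetical epoch dimensions
X = rng.normal(size=(n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)                  # 0 = grasp, 1 = hand-shake
X[y == 1, :, 20:] += 0.3                          # inject a later "social" signal for illustration

accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LinearSVC(max_iter=5000)
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

print("peak decoding accuracy:", round(float(accuracy.max()), 2),
      "at time index", int(accuracy.argmax()))
```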
Affiliation(s)
- Q Moreau
- Department of Psychology, Sapienza University, Rome, Italy; IRCCS Fondazione Santa Lucia, Rome, Italy
- E Parrotta
- Department of Psychology, Sapienza University, Rome, Italy; IRCCS Fondazione Santa Lucia, Rome, Italy
- U G Pesci
- Department of Psychology, Sapienza University, Rome, Italy; IRCCS Fondazione Santa Lucia, Rome, Italy
- V Era
- Department of Psychology, Sapienza University, Rome, Italy; IRCCS Fondazione Santa Lucia, Rome, Italy
- M Candidi
- Department of Psychology, Sapienza University, Rome, Italy; IRCCS Fondazione Santa Lucia, Rome, Italy
10
Mathieu B, Abillama A, Moré S, Mercier C, Simoneau M, Danna J, Mouchnino L, Blouin J. Seeing our hand or a tool during visually-guided actions: Different effects on the somatosensory and visual cortices. Neuropsychologia 2023; 185:108582. PMID: 37121267; DOI: 10.1016/j.neuropsychologia.2023.108582.
Abstract
The processing of proprioceptive information in the context of a conflict between visual and somatosensory feedback deteriorates motor performance. Previous studies have shown that seeing one's hand increases the weighting assigned to arm somatosensory inputs. In this light, we hypothesized that the sensory conflict, when tracing the contour of a shape with mirror-reversed vision, will be greater for participants who trace with a stylus seen in their hand (Hand group, n = 17) than for participants who trace with the tip of a rod without seeing their hand (Tool group, n = 15). Based on this hypothesis, we predicted that the tracing performance with mirror vision will be more deteriorated for the Hand group than for the Tool group, and we predicted a greater gating of somatosensory information for the Hand group to reduce the sensory conflict. The participants of both groups followed the outline of a shape in two visual conditions. Direct vision: the participants saw the hand or a portion of a light 40-cm rod directly. Mirror vision: the hand or the rod was seen through a mirror. We measured tracing performance using a digitizing tablet and cortical activity with electroencephalography. Behavioral analyses revealed that the tracing performance of both groups was similarly impaired by mirror vision. However, contrasting the spectral content of the cortical oscillatory activity between the Mirror and Direct conditions, we observed that tracing with mirror vision resulted in significantly larger alpha (8-12 Hz) and beta (15-25 Hz) powers in the somatosensory cortex for participants of the Hand group. The somatosensory alpha and beta powers did not significantly differ between Mirror and Direct vision conditions for the Tool group. For both groups, tracing with mirror vision altered the activity of the visual cortex: decreased alpha power for the Hand group, decreased alpha and beta power for the Tool group. Overall, these results suggest that seeing the hand enhanced the sensory conflict when tracing with mirror vision and that the increase of alpha and beta powers in the somatosensory cortex served to reduce the weight assigned to somatosensory information. The increased activity of the visual cortex observed for both groups in the mirror vision condition suggests greater visual processing with increased task difficulty. Finally, the fact that the participants of the Tool group did not show better tracing performance than those of the Hand group suggests that tracing deterioration resulted from a sensorimotor conflict (as opposed to a visuo-proprioceptive conflict).
Affiliation(s)
- Benjamin Mathieu
- Laboratoire de Neurosciences Cognitives (LNC), Aix-Marseille Université / CNRS, Marseille, France
- Antonin Abillama
- Laboratoire de Neurosciences Cognitives (LNC), Aix-Marseille Université / CNRS, Marseille, France
- Simon Moré
- Laboratoire de Neurosciences Cognitives (LNC), Aix-Marseille Université / CNRS, Marseille, France
- Catherine Mercier
- Centre Interdisciplinaire de Recherche en Réadaptation et Intégration Sociale (CIRRIS) du CIUSSS de la Capitale-Nationale, Québec, Québec, Canada; Faculté de Médecine, Université Laval, Québec, Canada
- Martin Simoneau
- Centre Interdisciplinaire de Recherche en Réadaptation et Intégration Sociale (CIRRIS) du CIUSSS de la Capitale-Nationale, Québec, Québec, Canada; Faculté de Médecine, Université Laval, Québec, Canada
- Jérémy Danna
- Laboratoire de Neurosciences Cognitives (LNC), Aix-Marseille Université / CNRS, Marseille, France
- Laurence Mouchnino
- Laboratoire de Neurosciences Cognitives (LNC), Aix-Marseille Université / CNRS, Marseille, France; Institut Universitaire de France (IUF), Paris, France
- Jean Blouin
- Laboratoire de Neurosciences Cognitives (LNC), Aix-Marseille Université / CNRS, Marseille, France
11
Bracci S, Mraz J, Zeman A, Leys G, Op de Beeck H. The representational hierarchy in human and artificial visual systems in the presence of object-scene regularities. PLoS Comput Biol 2023; 19:e1011086. PMID: 37115763; PMCID: PMC10171658; DOI: 10.1371/journal.pcbi.1011086.
Abstract
Human vision is still largely unexplained. Computer vision has made impressive progress on this front, but it is still unclear to what extent artificial neural networks approximate human object vision at the behavioral and neural levels. Here, we investigated whether machine object vision mimics the representational hierarchy of human object vision with an experimental design that allows testing within-domain representations for animals and scenes, as well as across-domain representations reflecting their real-world contextual regularities such as animal-scene pairs that often co-occur in the visual environment. We found that DCNNs trained on object recognition acquire representations, in their late processing stage, that closely capture human conceptual judgements about the co-occurrence of animals and their typical scenes. Likewise, the DCNNs' representational hierarchy shows surprising similarities with the representational transformations emerging in domain-specific ventrotemporal areas up to domain-general frontoparietal areas. Despite these remarkable similarities, the underlying information processing differs. The ability of neural networks to learn a human-like high-level conceptual representation of object-scene co-occurrence depends upon the amount of object-scene co-occurrence present in the image set, thus highlighting the fundamental role of training history. Further, although mid/high-level DCNN layers represent the category division for animals and scenes as observed in VTC, their information content shows reduced domain-specific representational richness. To conclude, by testing within- and between-domain selectivity while manipulating contextual regularities, we reveal unknown similarities and differences in the information processing strategies employed by human and artificial visual systems.
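A minimal sketch of the representational similarity logic used to compare network layers with human judgements (random feature matrices stand in for real DCNN activations and behavioral data; the layer names are placeholders, not the study's architecture):

```python
# Minimal sketch (hypothetical features): compare each layer's representational
# dissimilarity matrix (RDM) with an RDM derived from human conceptual judgements.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_stimuli = 40  # hypothetical animal and scene images

# Hypothetical layer activations (stimuli x units) for an early and a late layer.
layers = {
    "early_layer": rng.normal(size=(n_stimuli, 256)),
    "late_layer": rng.normal(size=(n_stimuli, 512)),
}
human_rdm = pdist(rng.normal(size=(n_stimuli, 10)), metric="correlation")  # stand-in judgements

for name, activations in layers.items():
    model_rdm = pdist(activations, metric="correlation")  # 1 - r between stimulus pairs
    rho, _ = spearmanr(model_rdm, human_rdm)              # second-order similarity
    print(f"{name}: Spearman rho with human RDM = {rho:.2f}")
```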
Affiliation(s)
- Stefania Bracci
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- KU Leuven, Leuven Brain Institute, Brain & Cognition Research Unit, Leuven, Belgium
- Jakob Mraz
- KU Leuven, Leuven Brain Institute, Brain & Cognition Research Unit, Leuven, Belgium
- Astrid Zeman
- KU Leuven, Leuven Brain Institute, Brain & Cognition Research Unit, Leuven, Belgium
- Gaëlle Leys
- KU Leuven, Leuven Brain Institute, Brain & Cognition Research Unit, Leuven, Belgium
- Hans Op de Beeck
- KU Leuven, Leuven Brain Institute, Brain & Cognition Research Unit, Leuven, Belgium
12
Johnsdorf M, Kisker J, Gruber T, Schöne B. Comparing encoding mechanisms in realistic virtual reality and conventional 2D laboratory settings: Event-related potentials in a repetition suppression paradigm. Front Psychol 2023; 14:1051938. PMID: 36777234; PMCID: PMC9912617; DOI: 10.3389/fpsyg.2023.1051938.
Abstract
Although the human brain is adapted to function within three-dimensional environments, conventional laboratory research commonly investigates cognitive mechanisms in a reductionist approach using two-dimensional stimuli. However, findings regarding mnemonic processes indicate that realistic experiences in Virtual Reality (VR) are stored in richer and more intertwined engrams than those obtained from the conventional laboratory. Our study aimed to further investigate the generalizability of laboratory findings and to differentiate whether the processes underlying memory formation differ between VR and the conventional laboratory already in early encoding stages. Therefore, we investigated the Repetition Suppression (RS) effect as a correlate of the earliest instance of mnemonic processes under conventional laboratory conditions and in a realistic virtual environment. Analyses of event-related potentials (ERPs) indicate that the ERP deflections at several electrode clusters were lower in VR compared to the PC condition. These results indicate an optimized distribution of cognitive resources in realistic contexts. The typical RS effect was replicated under both conditions at most electrode clusters for a late time window. Additionally, a specific RS effect was found in VR at anterior electrodes for a later time window, indicating more extensive encoding processes in VR compared to the laboratory. Specifically, electrotomographic results (VARETA) indicate multimodal integration involving a broad cortical network and higher cognitive processes during the encoding of realistic objects. Our data suggest that object perception under realistic conditions, in contrast to the conventional laboratory, requires multisensory integration involving an interconnected functional system, facilitating the formation of intertwined memory traces in realistic environments.
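A minimal sketch of how a repetition suppression effect of this kind is typically quantified (simulated subject-level ERPs; the time window, sampling, and electrode cluster are assumptions, not the study's parameters): the effect is the amplitude difference between first and repeated presentations, averaged over the analysis window and tested across subjects.

```python
# Minimal sketch (simulated ERPs): repetition suppression as the first-minus-repeated
# amplitude difference in a late time window, tested against zero across subjects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n_subjects, n_times = 25, 300               # hypothetical: 300 samples = 600 ms at 500 Hz
times = np.linspace(0, 0.6, n_times)        # seconds
window = (times >= 0.3) & (times <= 0.5)    # hypothetical late analysis window

# Subject-averaged ERP amplitudes (µV) for first vs. repeated presentations.
erp_first = rng.normal(2.0, 1.0, (n_subjects, n_times))
erp_repeat = rng.normal(1.5, 1.0, (n_subjects, n_times))  # smaller response when repeated

rs_effect = erp_first[:, window].mean(axis=1) - erp_repeat[:, window].mean(axis=1)
t, p = stats.ttest_1samp(rs_effect, 0.0)
print(f"repetition suppression: mean = {rs_effect.mean():.2f} µV, t = {t:.2f}, p = {p:.4f}")
```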
13
Bracci S, Op de Beeck HP. Understanding Human Object Vision: A Picture Is Worth a Thousand Representations. Annu Rev Psychol 2023; 74:113-135. PMID: 36378917; DOI: 10.1146/annurev-psych-032720-041031.
Abstract
Objects are the core meaningful elements in our visual environment. Classic theories of object vision focus upon object recognition and are elegant and simple. Some of their proposals still stand, yet the simplicity is gone. Recent evolutions in behavioral paradigms, neuroscientific methods, and computational modeling have allowed vision scientists to uncover the complexity of the multidimensional representational space that underlies object vision. We review these findings and propose that the key to understanding this complexity is to relate object vision to the full repertoire of behavioral goals that underlie human behavior, running far beyond object recognition. There might be no such thing as core object recognition, and if it exists, then its importance is more limited than traditionally thought.
Affiliation(s)
- Stefania Bracci
- Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy
- Hans P Op de Beeck
- Leuven Brain Institute, Research Unit Brain & Cognition, KU Leuven, Leuven, Belgium
14
Atilgan H, Koi JXJ, Wong E, Laakso I, Matilainen N, Pasqualotto A, Tanaka S, Chen SHA, Kitada R. Functional relevance of the extrastriate body area for visual and haptic object recognition: a preregistered fMRI-guided TMS study. Cereb Cortex Commun 2023; 4:tgad005. PMID: 37188067; PMCID: PMC10176024; DOI: 10.1093/texcom/tgad005.
Abstract
The extrastriate body area (EBA) is a region in the lateral occipito-temporal cortex (LOTC), which is sensitive to perceived body parts. Neuroimaging studies have suggested that EBA is related to body and tool processing, regardless of the sensory modality. However, how essential this region is for visual tool processing and nonvisual object processing remains a matter of controversy. In this preregistered fMRI-guided repetitive transcranial magnetic stimulation (rTMS) study, we examined the causal involvement of EBA in multisensory body and tool recognition. Participants used either vision or haptics to identify 3 object categories: hands, teapots (tools), and cars (control objects). Continuous theta-burst stimulation (cTBS) was applied over left EBA, right EBA, or vertex (control site). Performance for visually perceived hands and teapots (relative to cars) was more strongly disrupted by cTBS over left EBA than over the vertex, whereas no such object-specific effect was observed in haptics. The simulation of the induced electric fields confirmed that the cTBS affected regions including EBA. These results indicate that the LOTC is functionally relevant for visual hand and tool processing, whereas rTMS over EBA may affect object recognition differently across the two sensory modalities.
Affiliation(s)
- Hicret Atilgan
- Psychology, School of Social Sciences, Nanyang Technological University, 48 Nanyang Avenue, Singapore 639818, Singapore
- J X Janice Koi
- Psychology, School of Social Sciences, Nanyang Technological University, 48 Nanyang Avenue, Singapore 639818, Singapore
- Ern Wong
- IMT School for Advanced Studies Lucca, Piazza S. Francesco, 19, 55100 Lucca LU, Italy
- Ilkka Laakso
- Department of Electrical Engineering and Automation, Aalto University, Otakaari 3, 02150 Espoo, Finland
- Noora Matilainen
- Department of Electrical Engineering and Automation, Aalto University, Otakaari 3, 02150 Espoo, Finland
- Achille Pasqualotto
- Faculty of Human Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8577, Japan
- Satoshi Tanaka
- Department of Psychology, Hamamatsu University School of Medicine, 1-20-1 Handayama, Higashi Ward, Hamamatsu, Shizuoka 431-3192, Japan
- S H Annabel Chen
- Psychology, School of Social Sciences, Nanyang Technological University, 48 Nanyang Avenue, Singapore 639818, Singapore
- Centre for Research and Development in Learning, Nanyang Technological University, 61 Nanyang Drive, Singapore 637335, Singapore
- Lee Kong Chian School of Medicine (LKCMedicine), Nanyang Technological University, 11 Mandalay Road, Singapore 308232, Singapore
- Ryo Kitada
- Graduate School of Intercultural Studies, Kobe University, 12-1 Tsurukabuto, Nada Ward, Kobe, Hyogo 657-0013, Japan (corresponding author)
15
Emotion is perceived accurately from isolated body parts, especially hands. Cognition 2023; 230:105260. PMID: 36058103; DOI: 10.1016/j.cognition.2022.105260.
Abstract
Body posture and configuration provide important visual cues about the emotion states of other people. We know that bodily form is processed holistically; however, emotion recognition may depend on different mechanisms: certain body parts, such as the hands, may be especially important for perceiving emotion. This study therefore compared participants' emotion recognition performance when shown images of full bodies, or of isolated hands, arms, heads and torsos. Across three experiments, emotion recognition accuracy was above chance for all body parts. While emotions were recognized most accurately from full bodies, recognition performance from the hands was more accurate than for other body parts. Representational similarity analysis further showed that the pattern of errors for the hands was related to that for full bodies. Performance was reduced when stimuli were inverted, showing a clear body inversion effect. The high performance for hands was not due only to the fact that there are two hands, as performance remained well above chance even when just one hand was shown. These results demonstrate that emotions can be decoded from body parts. Furthermore, certain features, such as the hands, are more important to emotion perception than others. STATEMENT OF RELEVANCE: Successful social interaction relies on accurately perceiving emotional information from others. Bodies provide an abundance of emotion cues; however, the way in which emotional bodies and body parts are perceived is unclear. We investigated this perceptual process by comparing emotion recognition for body parts with that for full bodies. Crucially, we found that while emotions were most accurately recognized from full bodies, emotions were also classified accurately when images of isolated hands, arms, heads and torsos were seen. Of the body parts shown, emotion recognition from the hands was most accurate. Furthermore, shared patterns of emotion classification for hands and full bodies suggested that emotion recognition mechanisms are shared for full bodies and body parts. That the hands are key to emotion perception is important evidence in its own right. It could also be applied to interventions for individuals who find it difficult to read emotions from faces and bodies.
16
Sartin S, Ranzini M, Scarpazza C, Monaco S. Cortical areas involved in grasping and reaching actions with and without visual information: An ALE meta-analysis of neuroimaging studies. Curr Res Neurobiol 2022; 4:100070. PMID: 36632448; PMCID: PMC9826890; DOI: 10.1016/j.crneur.2022.100070.
Abstract
The functional specialization of the ventral stream in Perception and the dorsal stream in Action is the cornerstone of the leading model proposed by Goodale and Milner in 1992. This model is based on neuropsychological evidence and has been a matter of debate for almost three decades, during which the dual-visual stream hypothesis has received much attention, including support and criticism. The advent of functional magnetic resonance imaging (fMRI) has allowed investigating the brain areas involved in Perception and Action, and provided useful data on the functional specialization of the two streams. Research on this topic has been quite prolific, yet no meta-analysis so far has explored the spatial convergence in the involvement of the two streams in Action. The present meta-analysis (N = 53 fMRI and PET studies) was designed to reveal the specific neural activations associated with Action (i.e., grasping and reaching movements), and the extent to which visual information affects the involvement of the two streams during motor control. Our results provide a comprehensive view of the consistent and spatially convergent neural correlates of Action based on neuroimaging studies conducted over the past two decades. In particular, occipital-temporal areas showed higher activation likelihood in the Vision compared to the No vision condition, but no difference between reach and grasp actions. Frontal-parietal areas were consistently involved in both reach and grasp actions regardless of visual availability. We discuss our results in light of the well-established dual-visual stream model and frame these findings in the context of recent discoveries obtained with advanced fMRI methods, such as multivoxel pattern analysis.
Affiliation(s)
- Samantha Sartin
- CIMeC - Center for Mind/Brain Sciences, University of Trento, Italy
- Cristina Scarpazza
- Department of General Psychology, University of Padua, Italy; IRCCS San Camillo Hospital, Venice, Italy
- Simona Monaco
- CIMeC - Center for Mind/Brain Sciences, University of Trento, Via delle Regole 101, 38123, Trento, Italy (corresponding author)
17
Amaral L, Donato R, Valério D, Caparelli-Dáquer E, Almeida J, Bergström F. Disentangling hand and tool processing: Distal effects of neuromodulation. Cortex 2022; 157:142-154. PMID: 36283136; DOI: 10.1016/j.cortex.2022.08.011.
Abstract
Neural processing within a local brain region that responds to more than one object category (e.g., hands and tools) nonetheless shows different functional connectivity patterns with other distal brain areas, which suggests that local processing can affect and/or be affected by processing in distal areas, in a category-specific way. Here we wanted to test whether administering either a hand- or tool-related training task in tandem with transcranial direct current stimulation (tDCS) to a region that responds to both hands and tools (posterior middle temporal gyrus; pMTG) modulated local and distal neural processing more for the trained than the untrained category in a subsequent fMRI task. After each combined tDCS/training session, participants viewed images of tools, hands, and animals in an fMRI scanner. Using multivoxel pattern analysis, we found that tDCS over pMTG indeed improved the classification accuracy between tools vs. animals, but only when combined with a tool and not a hand training task. Surprisingly, tDCS over pMTG also improved classification accuracy between hands vs. animals when combined with a tool but not a hand training task. Our findings suggest that overlapping but functionally-specific networks may be engaged separately by using a category-specific training task together with tDCS - a strategy that can be applied more broadly to other cognitive domains using tDCS. By hypothesis, these effects on local processing are a direct result of within-domain connectivity constraints from domain-specific networks that are at play in the processing and organization of object representations.
Affiliation(s)
- Lénia Amaral
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal
- Rita Donato
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; Department of General Psychology, University of Padova, Italy; Human Inspired Technology Centre, University of Padova, Italy
- Daniela Valério
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal
- Egas Caparelli-Dáquer
- Laboratory of Electrical Stimulation of the Nervous System (LabEEL), Rio de Janeiro State University, Brazil
- Jorge Almeida
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal
- Fredrik Bergström
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; Department of Psychology, University of Gothenburg, Sweden
18
Abstract
Visual representations of bodies, in addition to those of faces, contribute to the recognition of con- and heterospecifics, to action recognition, and to nonverbal communication. Despite its importance, the neural basis of the visual analysis of bodies has been less studied than that of faces. In this article, I review what is known about the neural processing of bodies, focusing on the macaque temporal visual cortex. Early single-unit recording work suggested that the temporal visual cortex contains representations of body parts and bodies, with the dorsal bank of the superior temporal sulcus representing bodily actions. Subsequent functional magnetic resonance imaging studies in both humans and monkeys showed several temporal cortical regions that are strongly activated by bodies. Single-unit recordings in the macaque body patches suggest that these represent mainly body shape features. More anterior patches show a greater viewpoint-tolerant selectivity for body features, which may reflect a processing principle shared with other object categories, including faces. Expected final online publication date for the Annual Review of Vision Science, Volume 8 is September 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Affiliation(s)
- Rufin Vogels
- Laboratorium voor Neuro- en Psychofysiologie, KU Leuven, Belgium; Leuven Brain Institute, KU Leuven, Belgium
19
Features and Extra-Striate Body Area Representations of Diagnostic Body Parts in Anger and Fear Perception. Brain Sci 2022; 12:466. PMID: 35447997; PMCID: PMC9028525; DOI: 10.3390/brainsci12040466.
Abstract
Social species perceive emotion via extracting diagnostic features of body movements. Although extensive studies have contributed to knowledge on how the entire body is used as context for decoding bodily expression, we know little about whether specific body parts (e.g., arms and legs) transmit enough information for body understanding. In this study, we performed behavioral experiments using the Bubbles paradigm on static body images to directly explore diagnostic body parts for categorizing angry, fearful and neutral expressions. Results showed that subjects recognized emotional bodies through diagnostic features from the torso with arms. We then conducted a follow-up functional magnetic resonance imaging (fMRI) experiment on body part images to examine whether diagnostic parts modulated body-related brain activity and corresponding neural representations. We found greater activations of the extra-striate body area (EBA) in response to both anger and fear than neutral for the torso and arms. Representational similarity analysis showed that neural patterns of the EBA distinguished different bodily expressions. Furthermore, the torso with arms and whole body had higher similarities in EBA representations relative to the legs and whole body, and to the head and whole body. Taken together, these results indicate that diagnostic body parts (i.e., torso with arms) can communicate bodily expression in a detectable manner.
20
Errante A, Rossi Sebastiano A, Ziccarelli S, Bruno V, Rozzi S, Pia L, Fogassi L, Garbarini F. Structural connectivity associated with the sense of body ownership: a diffusion tensor imaging and disconnection study in patients with bodily awareness disorder. Brain Commun 2022; 4:fcac032. PMID: 35233523; PMCID: PMC8882004; DOI: 10.1093/braincomms/fcac032.
Abstract
The brain mechanisms underlying the emergence of a normal sense of body ownership can be investigated starting from pathological conditions in which body awareness is selectively impaired. Here, we focused on pathological embodiment, a body ownership disturbance observed in brain-damaged patients who misidentify other people’s limbs as their own. We investigated whether such body ownership disturbance can be classified as a disconnection syndrome, using three different approaches based on diffusion tensor imaging: (i) reconstruction of disconnectome maps in a large sample (N = 70) of stroke patients with and without pathological embodiment; (ii) probabilistic tractography, performed on the age-matched healthy controls (N = 16), to trace cortical connections potentially interrupted in patients with pathological embodiment and spared in patients without this pathological condition; (iii) probabilistic ‘in vivo’ tractography on two patients without and one patient with pathological embodiment. The converging results revealed the arcuate fasciculus and the third branch of the superior longitudinal fasciculus as mainly involved fibre tracts in patients showing pathological embodiment, suggesting that this condition could be related to the disconnection between frontal, parietal and temporal areas. This evidence raises the possibility of a ventral self-body recognition route including regions where visual (computed in occipito-temporal areas) and sensorimotor (stored in premotor and parietal areas) body representations are integrated, giving rise to a normal sense of body ownership.
Affiliation(s)
- Antonino Errante
- Department of Medicine and Surgery, University of Parma, Parma, 43125, Italy
- Settimio Ziccarelli
- Department of Medicine and Surgery, University of Parma, Parma, 43125, Italy
- Valentina Bruno
- MANIBUS Lab, Psychology Department, University of Turin, Turin 10123, Italy
- Stefano Rozzi
- Department of Medicine and Surgery, University of Parma, Parma, 43125, Italy
- Lorenzo Pia
- SAMBA Research Group, Psychology Department, University of Turin, Turin 10123, Italy
- Neuroscience Institute of Turin (NIT), Turin 10123, Italy
- Leonardo Fogassi
- Department of Medicine and Surgery, University of Parma, Parma, 43125, Italy
- Francesca Garbarini
- MANIBUS Lab, Psychology Department, University of Turin, Turin 10123, Italy
- Neuroscience Institute of Turin (NIT), Turin 10123, Italy
21
Mahon BZ. Domain-specific connectivity drives the organization of object knowledge in the brain. Handb Clin Neurol 2022; 187:221-244. PMID: 35964974; PMCID: PMC11498098; DOI: 10.1016/b978-0-12-823493-8.00028-6.
Abstract
The goal of this chapter is to review neuropsychological and functional MRI findings that inform a theory of the causes of functional specialization for semantic categories within occipito-temporal cortex-the ventral visual processing pathway. The occipito-temporal pathway supports visual object processing and recognition. The theoretical framework that drives this review considers visual object recognition through the lens of how "downstream" systems interact with the outputs of visual recognition processes. Those downstream processes include conceptual interpretation, grasping and object use, navigating and orienting in an environment, physical reasoning about the world, and inferring future actions and the inner mental states of agents. The core argument of this chapter is that innately constrained connectivity between occipito-temporal areas and other regions of the brain is the basis for the emergence of neural specificity for a limited number of semantic domains in the brain.
Collapse
Affiliation(s)
- Bradford Z Mahon
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, United States.
| |
Collapse
|
22
|
Marrazzo G, Vaessen MJ, de Gelder B. Decoding the difference between explicit and implicit body expression representation in high level visual, prefrontal and inferior parietal cortex. Neuroimage 2021; 243:118545. [PMID: 34478822 DOI: 10.1016/j.neuroimage.2021.118545] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2020] [Revised: 08/30/2021] [Accepted: 08/31/2021] [Indexed: 11/28/2022] Open
Abstract
Recent studies have provided an increasing understanding of how visual object categories such as faces or bodies are represented in the brain, and have also raised the question of whether category-based models or more dynamic, network-inspired models are more powerful. Two important and so far sidestepped issues in this debate are, first, how major category attributes like emotional expression directly influence category representation and, second, whether category and attribute representations are sensitive to task demands. This study investigated the impact of a crucial category attribute, emotional expression, on category area activity and whether this varies with the participants' task. Using functional MRI (fMRI), we measured BOLD responses while participants viewed whole-body expressions and performed either an explicit (emotion) or an implicit (shape) recognition task. Our results, based on multivariate methods, show that the type of task is the strongest determinant of brain activity and can be decoded in EBA, VLPFC and IPL. Brain activity was higher for the explicit task condition in VLPFC and was not emotion specific. This pattern suggests that during explicit recognition of the body expression, body category representation may be strengthened, and emotion- and action-related activity suppressed. Taken together, these results stress the importance of the task and of the role of category attributes for understanding the functional organization of high-level visual cortex.
Collapse
Affiliation(s)
- Giuseppe Marrazzo
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Limburg 6200 MD, Maastricht, the Netherlands
| | - Maarten J Vaessen
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Limburg 6200 MD, Maastricht, the Netherlands
| | - Beatrice de Gelder
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Limburg 6200 MD, Maastricht, the Netherlands; Department of Computer Science, University College London, London WC1E 6BT, United Kingdom.
| |
Collapse
|
23
|
Conson M, Di Rosa A, Polito F, Zappullo I, Baiano C, Trojano L. "Mind the thumb": Judging hand laterality is anchored on the thumb position. Acta Psychol (Amst) 2021; 219:103388. [PMID: 34392012 DOI: 10.1016/j.actpsy.2021.103388] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2020] [Revised: 06/28/2021] [Accepted: 08/06/2021] [Indexed: 11/16/2022] Open
Abstract
People can decide whether the image of a hand represents a left or a right one. This laterality judgment mainly relies on mentally imagining one's own hand movement (motor simulation) if the stimulus shows a palm, or on analysing visual cues, such as hand asymmetry, if the stimulus shows a dorsum. Here, capitalizing on evidence underscoring the key role of the thumb-palm complex in the motor dexterity of the human hand, we hypothesise that the activation of motor or visual processes when judging hand laterality is due to the different relevance of palm-thumb and dorsum-thumb combinations to hand action. To test this thumb-anchored strategy, in a laterality judgment experiment we concurrently manipulated the thumb position (flexed or extended) with respect to palm and dorsum, and the human likeness of the hand shape (influencing the salience of the thumb with respect to the hand shape). The main results demonstrated that viewing the flexed thumb from either palm or dorsum elicited motor simulation, whereas viewing the extended thumb activated motor simulation when combined with the palm but not the dorsum. The present data highlight the pivotal role of the thumb in hand laterality judgments, consistent with its key role in human in-hand manipulation.
Collapse
Affiliation(s)
- Massimiliano Conson
- Department of Psychology, University of Campania Luigi Vanvitelli, Caserta, Italy.
| | - Alessandro Di Rosa
- Department of Psychology, University of Campania Luigi Vanvitelli, Caserta, Italy
| | - Francesco Polito
- Department of Psychology, University of Campania Luigi Vanvitelli, Caserta, Italy
| | - Isa Zappullo
- Department of Psychology, University of Campania Luigi Vanvitelli, Caserta, Italy
| | - Chiara Baiano
- Department of Psychology, University of Campania Luigi Vanvitelli, Caserta, Italy
| | - Luigi Trojano
- Department of Psychology, University of Campania Luigi Vanvitelli, Caserta, Italy
| |
Collapse
|
24
|
Angelini M, Del Vecchio M, Lopomo NF, Gobbo M, Avanzini P. Perspective-dependent activation of frontoparietal circuits during the observation of a static body effector. Brain Res 2021; 1769:147604. [PMID: 34332965 DOI: 10.1016/j.brainres.2021.147604] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2020] [Revised: 06/16/2021] [Accepted: 07/27/2021] [Indexed: 10/20/2022]
Abstract
The perspective from which body-related stimuli are observed plays a fundamental role in modulating cerebral activity during the processing of others' bodies and actions. Previous research has shown perspective-dependent cerebral responses during the observation of both ongoing actions and static images of an acting body with implied motion information, with an advantage for the egocentric viewpoint. The present high-density EEG study assessed event-related potentials triggered by the presentation of a forearm at rest before reach-to-grasp actions, shown from four different viewpoints. Through a spatiotemporal analysis of the scalp electric field and the localization of cortical generators, our study revealed overall different processing for the third-person perspective relative to other viewpoints, mainly due to a later activation of motor-premotor regions. Since observing a static body effector often precedes action observation, our results integrate previous evidence of perspective-dependent encoding, with cascade implications on the design of neurorehabilitative or motor learning interventions based on action observation.
Collapse
Affiliation(s)
- Monica Angelini
- Consiglio Nazionale delle Ricerche (CNR), Istituto di Neuroscienze, Sede di Parma, Parma, Italy; Dipartimento di Ingegneria dell'Informazione, Università degli Studi di Brescia, Brescia, Italy.
| | - Maria Del Vecchio
- Consiglio Nazionale delle Ricerche (CNR), Istituto di Neuroscienze, Sede di Parma, Parma, Italy
| | - Nicola Francesco Lopomo
- Dipartimento di Ingegneria dell'Informazione, Università degli Studi di Brescia, Brescia, Italy
| | - Massimiliano Gobbo
- Dipartimento di Scienze Cliniche e Sperimentali, Università degli Studi di Brescia, Brescia, Italy
| | - Pietro Avanzini
- Consiglio Nazionale delle Ricerche (CNR), Istituto di Neuroscienze, Sede di Parma, Parma, Italy.
| |
Collapse
|
25
|
Zhang Z, Zeidman P, Nelissen N, Filippini N, Diedrichsen J, Bracci S, Friston K, Rounis E. Neural Correlates of Hand-Object Congruency Effects during Action Planning. J Cogn Neurosci 2021; 33:1487-1503. [PMID: 34496373 DOI: 10.1162/jocn_a_01728] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Selecting hand actions to manipulate an object is affected both by perceptual factors and by action goals. Affordances may contribute to "stimulus-response" congruency effects driven by habitual actions to an object. In previous studies, we have demonstrated an influence of the congruency between hand and object orientations on response times when reaching to turn an object, such as a cup. In this study, we investigated how the representation of hand postures triggered by planning to turn a cup was influenced by this congruency effect, in an fMRI scanning environment. Healthy participants were asked to reach and turn a real cup that was placed in front of them either in an upright orientation or upside-down. They were instructed to use a hand orientation that was either congruent or incongruent with the cup orientation. As expected, motor responses were faster when the hand and cup orientations were congruent. During action planning, there was increased activity in a network of brain regions involved in object-directed actions, which included bilateral primary and extrastriate visual, medial, and superior temporal areas, as well as superior parietal, primary motor, and premotor areas in the left hemisphere. Specific activation of the dorsal premotor cortex was associated with hand-object orientation congruency during planning and prior to any action taking place. Activity in that area and its connectivity with the lateral occipito-temporal cortex increased when planning incongruent (goal-directed) actions. The increased activity in premotor areas in trials where the orientation of the hand was incongruent to that of the object suggests a role in eliciting competing representations specified by hand postures in lateral occipito-temporal cortex.
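The congruency effect on response times described above is, at its core, a paired comparison across participants. A minimal sketch of such a test (with simulated RT values, not the study's data) might look like this:

```python
# Paired comparison of response times for congruent vs. incongruent
# hand-cup orientations. All values are simulated for illustration.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(7)
rt_congruent = 650 + rng.normal(scale=40, size=24)            # ms, one mean RT per participant
rt_incongruent = rt_congruent + 35 + rng.normal(scale=20, size=24)

t, p = ttest_rel(rt_incongruent, rt_congruent)
print(f"Congruency effect: t(23) = {t:.2f}, p = {p:.3f}")
```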
Collapse
Affiliation(s)
| | | | | | | | | | | | | | - Elisabeth Rounis
- University of Oxford.,West Middlesex University Hospital, Isleworth
| |
Collapse
|
26
|
Knights E, Mansfield C, Tonin D, Saada J, Smith FW, Rossit S. Hand-Selective Visual Regions Represent How to Grasp 3D Tools: Brain Decoding during Real Actions. J Neurosci 2021; 41:5263-5273. [PMID: 33972399 PMCID: PMC8211542 DOI: 10.1523/jneurosci.0083-21.2021] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Revised: 03/23/2021] [Accepted: 03/29/2021] [Indexed: 02/02/2023] Open
Abstract
Most neuroimaging experiments that investigate how tools and their actions are represented in the brain use visual paradigms where tools or hands are displayed as 2D images and no real movements are performed. These studies discovered selective visual responses in occipitotemporal and parietal cortices for viewing pictures of hands or tools, which are assumed to reflect action processing, but this has rarely been directly investigated. Here, we examined the responses of independently visually defined category-selective brain areas when participants grasped 3D tools (N = 20; 9 females). Using real-action fMRI and multivoxel pattern analysis, we found that grasp typicality representations (i.e., whether a tool is grasped appropriately for use) were decodable from hand-selective areas in occipitotemporal and parietal cortices, but not from tool-, object-, or body-selective areas, even if partially overlapping. Importantly, these effects were exclusive for actions with tools, but not for biomechanically matched actions with control nontools. In addition, grasp typicality decoding was significantly higher in hand than tool-selective parietal regions. Notably, grasp typicality representations were automatically evoked even when there was no requirement for tool use and participants were naive to object category (tool vs nontools). Finding a specificity for typical tool grasping in hand-selective, rather than tool-selective, regions challenges the long-standing assumption that activation for viewing tool images reflects sensorimotor processing linked to tool manipulation. Instead, our results show that typicality representations for tool grasping are automatically evoked in visual regions specialized for representing the human hand, the primary tool of the brain for interacting with the world.
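The ROI-based multivoxel pattern analysis named above can be illustrated with a short sketch; the array names, shapes, and labels below are assumptions for illustration, not the authors' pipeline:

```python
# Illustrative ROI-based MVPA: decode grasp typicality (typical vs. atypical
# grasp) from single-trial patterns with leave-one-run-out cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_runs = 96, 250, 8
X = rng.normal(size=(n_trials, n_voxels))                 # single-trial beta patterns from one ROI
y = rng.integers(0, 2, size=n_trials)                     # 0 = atypical grasp, 1 = typical grasp
runs = np.repeat(np.arange(n_runs), n_trials // n_runs)   # run labels used as CV folds

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print(f"Leave-one-run-out decoding accuracy: {acc.mean():.2f}")
```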
Collapse
Affiliation(s)
- Ethan Knights
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge CB2 7EF, United Kingdom
| | - Courtney Mansfield
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom
| | - Diana Tonin
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom
| | - Janak Saada
- Department of Radiology, Norfolk and Norwich University Hospitals NHS Foundation Trust, Norwich NR4 7UY, United Kingdom
| | - Fraser W Smith
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom
| | - Stéphanie Rossit
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom
| |
Collapse
|
27
|
Pann A, Bonnard M, Felician O, Romaiguère P. The Extrastriate Body Area and identity processing: An fMRI guided TMS study. Physiol Rep 2021; 9:e14711. [PMID: 33938163 PMCID: PMC8090840 DOI: 10.14814/phy2.14711] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2020] [Revised: 12/11/2020] [Accepted: 12/14/2020] [Indexed: 12/03/2022] Open
Abstract
The extrastriate body area (EBA) is a body-selective focal region located in the lateral occipito-temporal cortex that responds more strongly to images of human bodies and body parts than to other classes of stimuli. Whether EBA also contributes to distinguishing one's own body from that of others remains under debate. Using double-pulse transcranial magnetic stimulation (TMS) in right-handed participants, we investigated whether EBA contributes to self-other distinction and whether that contribution is hemisphere-specific. Prior to the TMS experiment, all participants underwent an fMRI localizer task to determine individual EBA location. TMS was then applied over either right EBA, left EBA or the vertex, while participants performed an identification task in which images of their own or other people's right or left hands were presented. TMS over both EBAs slowed responses, with no identity-specific effect. However, TMS applied over right EBA at 100-110 ms after image onset induced significantly more errors on other people's hands than no TMS, TMS over left EBA, or TMS over the vertex. The last three conditions did not differ, nor was there any difference for self-hands. These findings suggest that EBA participates in self/other discrimination.
Collapse
Affiliation(s)
- Alizée Pann
- Aix Marseille Univ, INSERM, INS, Inst Neurosc Syst, Marseille, France
| | - Mireille Bonnard
- Aix Marseille Univ, INSERM, INS, Inst Neurosc Syst, Marseille, France
| | - Olivier Felician
- Aix Marseille Univ, APHM, INS, Hôpital de la Timone, Service de Neurologie et de Neuropsychologie, Marseille, France
| | | |
Collapse
|
28
|
Distinct Functional and Structural Connectivity of the Human Hand-Knob Supported by Intraoperative Findings. J Neurosci 2021; 41:4223-4233. [PMID: 33827936 DOI: 10.1523/jneurosci.1574-20.2021] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2020] [Revised: 01/02/2021] [Accepted: 01/10/2021] [Indexed: 12/15/2022] Open
Abstract
Fine motor skills rely on the control of hand muscles exerted by a region of primary motor cortex (M1) that has been extensively investigated in monkeys. Although neuroimaging enables the exploration of this system also in humans, indirect measurements of brain activity prevent causal definitions of hand motor representations, which can be achieved using data obtained during brain mapping in tumor patients. High-frequency direct electrical stimulation delivered at rest (HF-DES-Rest) on the hand-knob region of the precentral gyrus has identified two sectors showing differences in cortical excitability. Using quantitative analysis of the motor output elicited with HF-DES-Rest, we characterized the two sectors based on their excitability, higher in the posterior and lower in the anterior sector. We studied whether the different cortical excitability of these two regions reflected differences in functional connectivity (FC) and structural connectivity (SC). Using healthy adults from the Human Connectome Project (HCP), we computed FC and SC of the anterior and the posterior hand-knob sectors identified within a large cohort of patients. Comparing the FC of the two seeds showed that the anterior hand-knob, relative to the posterior hand-knob, had stronger functional connections with a bilateral set of parietofrontal areas responsible for integrating perceptual and cognitive hand-related sensorimotor processes necessary for goal-related actions. This was reflected in different patterns of SC between the two sectors. Our results suggest that the human hand-knob is a functionally and structurally heterogeneous region organized along a motor-cognitive gradient. SIGNIFICANCE STATEMENT The capability to perform complex manipulative tasks is one of the major characteristics of primates and relies on the fine control of hand muscles exerted by a highly specialized region of the precentral gyrus, often termed the "hand-knob" sector. Using intraoperative brain mapping, we identify two hand-knob sectors (posterior and anterior) characterized by differences in cortical excitability. Based on resting-state FC and tractography in healthy subjects, we show that the posterior and anterior hand-knob sectors differ in their FC and SC with frontoparietal regions. Thus, anteroposterior differences in cortical excitability are paralleled by differences in FC and SC that likely reflect a motor (posterior) to cognitive (anterior) organization of this cortical region.
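A minimal sketch of the kind of seed-based functional-connectivity comparison described here, assuming pre-extracted and cleaned time series (all names and shapes are illustrative, not the study's data):

```python
# Compare the connectivity profiles of two seeds (anterior vs. posterior
# hand-knob) against a set of target parcels using Pearson correlation.
import numpy as np

rng = np.random.default_rng(1)
n_timepoints, n_targets = 1200, 360                    # e.g., an HCP-length run, parcellated targets
seed_ant = rng.normal(size=n_timepoints)               # anterior hand-knob seed time series
seed_post = rng.normal(size=n_timepoints)              # posterior hand-knob seed time series
targets = rng.normal(size=(n_timepoints, n_targets))   # target-parcel time series

def seed_fc(seed, targets):
    """Pearson correlation of one seed with every target parcel."""
    seed_z = (seed - seed.mean()) / seed.std()
    targ_z = (targets - targets.mean(0)) / targets.std(0)
    return seed_z @ targ_z / len(seed)

fc_ant, fc_post = seed_fc(seed_ant, targets), seed_fc(seed_post, targets)
fc_diff = np.arctanh(fc_ant) - np.arctanh(fc_post)     # Fisher z-transform before contrasting
print("Parcels more strongly coupled to the anterior seed:", int(np.sum(fc_diff > 0)))
```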
Collapse
|
29
|
Overlapping but distinct: Distal connectivity dissociates hand and tool processing networks. Cortex 2021; 140:1-13. [PMID: 33901719 DOI: 10.1016/j.cortex.2021.03.011] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2020] [Revised: 01/18/2021] [Accepted: 03/04/2021] [Indexed: 12/31/2022]
Abstract
The processes and organizational principles of information involved in object recognition have been a subject of intense debate. These research efforts led to the understanding that local computations and feedforward/feedback connections are essential to our representations and their organization. Recent data, however, have demonstrated that distal computations also play a role in how information is locally processed. Here we focus on how long-range connectivity and the local functional organization of information are related, by exploring regions that show overlapping category-preferences for two categories and testing whether their connections are related to distal representations in a category-specific way. We used an approach that relates functional connectivity with distal areas to local voxel-wise category-preferences. Specifically, we focused on two areas that show an overlap in category-preferences for tools and hands, the inferior parietal lobule/anterior intraparietal sulcus (IPL/aIPS) and the posterior middle temporal gyrus/lateral occipital temporal cortex (pMTG/LOTC), and asked how connectivity from these two areas relates to voxel-wise category-preferences in two ventral temporal regions dedicated to the processing of tools and hands separately, the left medial fusiform gyrus and the fusiform body area respectively, as well as across the brain. We show that the functional connections of the two overlap areas correlate with categorical preferences for each category independently. These results show that regions that process both tools and hands maintain object topography in a category-specific way. This potentially allows for a category-specific flow of information that is pertinent to computing object representations.
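The core step of relating a seed's connectivity to local category preferences can be sketched as a simple voxel-wise correlation; the arrays below are simulated placeholders, not the study's data:

```python
# Relate connectivity from an overlap seed (e.g., pMTG/LOTC) to voxel-wise
# category preference within a target region (e.g., medial fusiform gyrus).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_voxels = 500                                   # voxels in the target region
conn_from_overlap = rng.normal(size=n_voxels)    # connectivity of each target voxel with the overlap seed
tool_preference = rng.normal(size=n_voxels)      # voxel-wise tool-minus-hand selectivity (e.g., t values)

rho, p = spearmanr(conn_from_overlap, tool_preference)
print(f"Connectivity-selectivity correlation: rho = {rho:.2f}, p = {p:.3f}")
```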
Collapse
|
30
|
Bergström F, Wurm M, Valério D, Lingnau A, Almeida J. Decoding stimuli (tool-hand) and viewpoint invariant grasp-type information. Cortex 2021; 139:152-165. [PMID: 33873036 DOI: 10.1016/j.cortex.2021.03.004] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2020] [Revised: 02/01/2021] [Accepted: 03/04/2021] [Indexed: 01/30/2023]
Abstract
When we see a manipulable object (henceforth tool) or a hand performing a grasping movement, our brain is automatically tuned to how that tool can be grasped (i.e., its affordance) or what kind of grasp that hand is performing (e.g., a power or precision grasp). However, it remains unclear where visual information related to tools or hands is transformed into abstract grasp representations. We therefore investigated where different levels of abstractness in grasp information are processed: grasp information that is invariant to the kind of stimulus that elicits it (tool-hand invariance), and grasp information that is hand-specific but viewpoint-invariant (viewpoint invariance). We focused on brain areas activated when viewing both tools and hands, i.e., the posterior parietal cortices (PPC), ventral premotor cortices (PMv), and lateral occipitotemporal cortex/posterior middle temporal cortex (LOTC/pMTG). To test for invariant grasp representations, we presented participants with tool images and grasp videos (from a first- or third-person perspective; 1pp or 3pp) inside an MRI scanner, and cross-decoded power versus precision grasps across (i) grasp perspectives (viewpoint invariance), (ii) tool images and grasp 1pp videos (tool-hand 1pp invariance), and (iii) tool images and grasp 3pp videos (tool-hand 3pp invariance). Tool-hand 1pp, but not tool-hand 3pp, invariant grasp information was found in left PPC, whereas viewpoint-invariant information was found bilaterally in PPC, left PMv, and left LOTC/pMTG. These findings suggest different levels of abstractness: visual information is transformed into stimulus-invariant grasp representations/tool affordances in left PPC, and into viewpoint-invariant but hand-specific grasp representations in the hand network.
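Cross-decoding of the kind described here amounts to training a classifier on patterns from one stimulus format and testing it on another; a hedged sketch with simulated data (names and shapes are assumptions, not the authors' code):

```python
# Cross-decode grasp type (power vs. precision) across stimulus formats:
# train on patterns evoked by tool images, test on first-person grasp videos.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n_trials, n_voxels = 60, 300
X_tools = rng.normal(size=(n_trials, n_voxels))       # ROI patterns for tool images
y_tools = rng.integers(0, 2, size=n_trials)           # 0 = precision, 1 = power (afforded grasp)
X_grasp1pp = rng.normal(size=(n_trials, n_voxels))    # ROI patterns for 1pp grasp videos
y_grasp1pp = rng.integers(0, 2, size=n_trials)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X_tools, y_tools)                             # train on one format...
acc = clf.score(X_grasp1pp, y_grasp1pp)               # ...test on the other (tool-hand invariance)
print(f"Cross-decoding accuracy (tools -> 1pp grasps): {acc:.2f}")
```

In practice such analyses typically average accuracies over both train/test directions and assess significance with permutation tests.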
Collapse
Affiliation(s)
- Fredrik Bergström
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal.
| | - Moritz Wurm
- Center for Mind/ Brain Sciences (CIMeC), University of Trento, Rovereto, TN, Italy
| | - Daniela Valério
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal
| | - Angelika Lingnau
- Center for Mind/ Brain Sciences (CIMeC), University of Trento, Rovereto, TN, Italy; Institute of Psychology, University of Regensburg, Regensburg, Germany
| | - Jorge Almeida
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal
| |
Collapse
|
31
|
Myga KA, Ambroziak KB, Tamè L, Farnè A, Longo MR. Whole-hand perceptual maps of joint location. Exp Brain Res 2021; 239:1235-1246. [PMID: 33590275 DOI: 10.1007/s00221-021-06043-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2020] [Accepted: 01/16/2021] [Indexed: 11/24/2022]
Abstract
Hands play a fundamental role in everyday behaviour. Nevertheless, healthy adults show striking misrepresentations of their hands, which have been documented by a wide range of studies addressing various aspects of body representation. For example, when asked to indicate the location within the hand of the knuckles, people place them substantially farther forward than they actually are. Previous research, however, has focused exclusively on the knuckles at the base of each finger, not considering the other knuckles in the fingers. This study, therefore, aimed to investigate conceptual knowledge of the structure of the whole hand, by investigating judgements of the location of all 14 knuckle joints in the hand. Participants localised each of the 14 knuckles of their own hand (Experiment 1) or of the experimenter's hand (Experiment 2) on a hand silhouette. We measured whether there are systematic localisation biases. The results showed a highly similar pattern of mislocalisation for the knuckles of one's own hand and those of another person's hand, suggesting that people share abstract conceptual knowledge about hand structure. In line with previous reports, we showed that the metacarpophalangeal joints at the base of the fingers are judged to be substantially farther forward in the hand than they actually are. Moreover, for the first time we showed a gradient of this bias, with a progressive reduction of the distal bias from more proximal to more distal joints. In sum, people think their finger segments are roughly the same length, and that their fingers are shorter than they actually are.
Collapse
Affiliation(s)
- Kasia A Myga
- Department of Psychological Sciences, University of London, Malet Street, Bloomsbury, London, WC1E 7HX, UK.
| | - Klaudia B Ambroziak
- Department of Psychological Sciences, University of London, Malet Street, Bloomsbury, London, WC1E 7HX, UK
| | - Luigi Tamè
- Department of Psychological Sciences, University of London, Malet Street, Bloomsbury, London, WC1E 7HX, UK.,School of Psychology, University of Kent, Keyenes College, Canterbury, CT2 7NO, UK
| | - Alessandro Farnè
- Integrative Multisensory Perception Action & Cognition Team-ImpAct, Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, Lyon, France.,Claude Bernard University Lyon 1, 43 Boulevard du 11 Novembre 1918, 69100, Villeurbanne, France.,Hospices Civils de Lyon, Neuro-immersion, Villeurbanne, Lyon, France.,Centre for Mind/Brain Sciences, University of Trento, Corso Angelo, Corso Bettini, 31, 38068, Rovereto, TN, Italy
| | - Matthew R Longo
- Department of Psychological Sciences, University of London, Malet Street, Bloomsbury, London, WC1E 7HX, UK
| |
Collapse
|
32
|
Sivakumar P, Quinlan DJ, Stubbs KM, Culham JC. Grasping performance depends upon the richness of hand feedback. Exp Brain Res 2021; 239:835-846. [PMID: 33403432 DOI: 10.1007/s00221-020-06025-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2020] [Accepted: 12/19/2020] [Indexed: 11/28/2022]
Abstract
Although visual feedback of the hand allows fast and accurate grasping actions, little is known about whether the nature of feedback of the hand affects performance. We investigated kinematics during precision grasping (with the index finger and thumb) when participants received different levels of hand feedback, with or without visual feedback of the target. Specifically, we compared performance when participants saw (1) no hand feedback; (2) only the two critical points on the index finger and thumb tips; (3) 21 points on all digit tips and hand joints; (4) 21 points connected by a "skeleton", or (5) full feedback of the hand wearing a glove. When less hand feedback was available, participants took longer to execute the movement because they allowed more time to slow the reach and close the hand. When target feedback was unavailable, participants took longer to plan the movement and reached with higher velocity. We were particularly interested in investigating maximum grip aperture (MGA), which can reflect the margin of error that participants allow to compensate for uncertainty. A trend suggested that MGA was smallest when ample feedback was available (skeleton and full hand feedback, regardless of target feedback) and when only essential information about hand and target was provided (2-point hand feedback + target feedback) but increased when non-essential points were included (21-point feedback). These results suggest that visual feedback of the hand affects grasping performance and that, while more feedback is usually beneficial, this is not necessarily always the case.
Collapse
Affiliation(s)
- Prajith Sivakumar
- Department of Biology, University of Western Ontario, London, Canada.,Brain and Mind Institute, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada
| | - Derek J Quinlan
- Brain and Mind Institute, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada.,BrainsCAN, University of Western Ontario, London, ON, Canada.,Department of Psychology, Huron University College, London, ON, Canada
| | - Kevin M Stubbs
- Brain and Mind Institute, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada.,BrainsCAN, University of Western Ontario, London, ON, Canada.,Department of Psychology, University of Western Ontario, London, ON, Canada
| | - Jody C Culham
- Brain and Mind Institute, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada. .,Department of Psychology, University of Western Ontario, London, ON, Canada.
| |
Collapse
|
33
|
Kuroki M, Fukui T. Visual Hand Recognition in Hand Laterality and Self-Other Discrimination Tasks: Relationships to Autistic Traits and Positive Body Image. Front Psychol 2020; 11:587080. [PMID: 33343460 PMCID: PMC7744968 DOI: 10.3389/fpsyg.2020.587080] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2020] [Accepted: 11/10/2020] [Indexed: 11/13/2022] Open
Abstract
In a previous study on visual body part recognition, a "self-advantage" effect was revealed, whereby self-related body stimuli are processed faster and more accurately than other-related body stimuli; the emergence of this effect is assumed to be tightly linked to implicit motor simulation, which is activated when performing a hand laterality judgment task in which hand ownership is not explicitly required. Here, we ran two visual hand recognition tasks, namely a hand laterality judgment task and a self-other discrimination task, to investigate (i) whether the self-advantage emerged even if implicit motor imagery was assumed to be working less efficiently and (ii) how individual traits [such as autistic traits and the extent of positive self-body image, as assessed via the Autism Spectrum Quotient (AQ) and the Body Appreciation Scale-2 (BAS-2), respectively] modulate performance in these hand recognition tasks. Participants were presented with hand images in two orientations [i.e., upright (egocentric) and upside-down (allocentric)] and asked to judge whether each was a left or a right hand (an implicit hand laterality judgment task). They were also asked to determine whether it was their own or another person's hand (an explicit self-other discrimination task). Data collected from men and women were analyzed separately. The self-advantage effect in the hand laterality judgment task was not observed, suggesting that two orientation conditions alone are not enough to trigger this motor simulation. Furthermore, the men's group showed a significant positive correlation between AQ scores and reaction times (RTs) in the laterality judgment task, while the women's group showed a significant negative correlation between AQ scores and differences in RTs, and a significant positive correlation between BAS-2 scores and d-prime in the self-other discrimination task. These results suggest that men and women adopt different strategies and/or execution processes for implicit and explicit hand recognition tasks.
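The d-prime measure used for the self-other discrimination task can be computed from hit and false-alarm rates; a small sketch with invented counts and a standard log-linear correction:

```python
# Signal-detection sensitivity (d') for a self/other discrimination task.
# Trial counts below are illustrative, not the study's data.
from scipy.stats import norm

hits, misses = 42, 6                 # "self" trials answered "self" / "other"
false_alarms, correct_rej = 10, 38   # "other" trials answered "self" / "other"

# Log-linear correction guards against hit/false-alarm rates of exactly 0 or 1
hit_rate = (hits + 0.5) / (hits + misses + 1)
fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rej + 1)
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
print(f"d' = {d_prime:.2f}")
```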
Collapse
Affiliation(s)
- Mayumi Kuroki
- Graduate School of Systems Design, Tokyo Metropolitan University, Hino, Japan
| | - Takao Fukui
- Graduate School of Systems Design, Tokyo Metropolitan University, Hino, Japan
| |
Collapse
|
34
|
Conson M, Polito F, Di Rosa A, Trojano L, Cordasco G, Esposito A, Turi M. 'Not only faces': specialized visual representation of human hands revealed by adaptation. ROYAL SOCIETY OPEN SCIENCE 2020; 7:200948. [PMID: 33489261 PMCID: PMC7813241 DOI: 10.1098/rsos.200948] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/05/2020] [Accepted: 11/20/2020] [Indexed: 06/12/2023]
Abstract
Classical neurophysiological studies demonstrated that the monkey brain is equipped with neurons selectively representing the visual shape of the primate hand. Neuroimaging data suggest that a similar representation can be found in the human brain. Here, we investigated the selectivity of hand representation in humans by means of the visual adaptation technique. Results showed that participants' judgement of the human-likeness of a visual probe representing a human hand was specifically reduced by a visual adaptation procedure when using a human hand adaptor, but not when using an anthropoid robotic hand or a non-primate animal paw adaptor. Instead, the human-likeness of the anthropoid robotic hand was affected by both human and robotic adaptors. No effect was found when using a non-primate animal paw as adaptor or probe. These results support the existence of specific neural mechanisms encoding the human hand in the human visual system.
Collapse
Affiliation(s)
- Massimiliano Conson
- Department of Psychology, University of Campania Luigi Vanvitelli, Caserta, Italy
| | - Francesco Polito
- Department of Psychology, University of Campania Luigi Vanvitelli, Caserta, Italy
| | - Alessandro Di Rosa
- Department of Psychology, University of Campania Luigi Vanvitelli, Caserta, Italy
| | - Luigi Trojano
- Department of Psychology, University of Campania Luigi Vanvitelli, Caserta, Italy
| | - Gennaro Cordasco
- Department of Psychology, University of Campania Luigi Vanvitelli, Caserta, Italy
| | - Anna Esposito
- Department of Psychology, University of Campania Luigi Vanvitelli, Caserta, Italy
| | - Marco Turi
- Stella Maris Mediterraneo Foundation, Chiaromonte, Potenza, Italy
| |
Collapse
|
35
|
Fukui T, Murayama A, Miura A. Recognizing Your Hand and That of Your Romantic Partner. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2020; 17:E8256. [PMID: 33182290 PMCID: PMC7664891 DOI: 10.3390/ijerph17218256] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/24/2020] [Revised: 10/26/2020] [Accepted: 11/05/2020] [Indexed: 11/16/2022]
Abstract
Although the hand is an important organ in interpersonal interactions, focusing on this body part explicitly is less common in daily life than focusing on the face. We investigated (i) whether a person's recognition of their own hand differs from their recognition of another person's hand (i.e., self hand vs. other's hand) and (ii) whether a close social relationship affects hand recognition (i.e., a partner's hand vs. an unknown person's hand). To this end, we ran an experiment in which participants took part in one of two discrimination tasks: (i) a self-other discrimination task or (ii) a partner/unknown opposite-sex person discrimination task. In these tasks, participants were presented with a hand image and asked to select one of two responses, self (partner) or other (unknown person), as quickly and accurately as possible. We manipulated hand ownership (self (partner)/other (unknown person)), hand image laterality (right/left), and the visual perspective of the hand image (upright/upside-down). A main effect of hand ownership was found in both tasks (i.e., self vs. other and partner vs. unknown person), indicating longer reaction times for self and partner images. The results suggest that close social relationships modulate hand recognition; namely, "self-expansion" to a romantic partner may occur in explicit visual hand recognition.
Collapse
|
36
|
Brand J, Piccirelli M, Hepp-Reymond MC, Eng K, Michels L. Brain Activation During Visually Guided Finger Movements. Front Hum Neurosci 2020; 14:309. [PMID: 32922274 PMCID: PMC7456884 DOI: 10.3389/fnhum.2020.00309] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2020] [Accepted: 07/13/2020] [Indexed: 11/13/2022] Open
Abstract
Computer interaction via visually guided hand movements often employs either abstract cursor-based feedback or virtual hand (VH) representations of varying degrees of realism. The effect of changing this visual feedback in virtual reality settings is currently unknown. In this study, 19 healthy right-handed adults performed index finger movements (“action”) and observed movements (“observation”) with four different types of visual feedback: a simple circular cursor (CU), a point light (PL) pattern indicating finger joint positions, a shadow cartoon hand (SH) and a realistic VH. Finger movements were recorded using a data glove, and eye-tracking was recorded optically. We measured brain activity using functional magnetic resonance imaging (fMRI). Both action and observation conditions showed stronger fMRI signal responses in the occipitotemporal cortex compared to baseline. The action conditions additionally elicited elevated bilateral activations in motor, somatosensory, parietal, and cerebellar regions. For both conditions, feedback of a hand with a moving finger (SH, VH) led to higher activations than CU or PL feedback, specifically in early visual regions and the occipitotemporal cortex. Our results show the stronger recruitment of a network of cortical regions during visually guided finger movements with human hand feedback when compared to a visually incomplete hand and abstract feedback. This information could have implications for the design of visually guided tasks involving human body parts in both research and application or training-related paradigms.
Collapse
Affiliation(s)
- Johannes Brand
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland.,Neuroscience Center Zurich, Zurich, Switzerland
| | - Marco Piccirelli
- Department of Neuroradiology, University Hospital Zurich, Zurich, Switzerland.,Klinisches Neurozentrum, University Hospital Zurich, Zurich, Switzerland
| | - Marie-Claude Hepp-Reymond
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland.,Neuroscience Center Zurich, Zurich, Switzerland
| | - Kynan Eng
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland.,Neuroscience Center Zurich, Zurich, Switzerland
| | - Lars Michels
- Department of Neuroradiology, University Hospital Zurich, Zurich, Switzerland.,Klinisches Neurozentrum, University Hospital Zurich, Zurich, Switzerland
| |
Collapse
|
37
|
Conson M, Cecere R, Baiano C, De Bellis F, Forgione G, Zappullo I, Trojano L. Implicit Motor Imagery and the Lateral Occipitotemporal Cortex: Hints for Tailoring Non-Invasive Brain Stimulation. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2020; 17:ijerph17165851. [PMID: 32806702 PMCID: PMC7459529 DOI: 10.3390/ijerph17165851] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/25/2020] [Revised: 08/09/2020] [Accepted: 08/10/2020] [Indexed: 12/13/2022]
Abstract
Background: Recent evidence has converged in showing that the lateral occipitotemporal cortex is over-recruited during implicit motor imagery in the elderly and in patients with neurodegenerative disorders, such as Parkinson's disease. These data suggest that when automatically imagining movements, individuals exploit neural resources in visual areas to compensate for the decline in activating motor representations. Thus, the occipitotemporal cortex could represent a cortical target of non-invasive brain stimulation combined with cognitive training to enhance motor imagery performance. Here, we aimed at shedding light on the role of the left and right lateral occipitotemporal cortex in implicit motor imagery. Methods: We applied online, high-frequency, repetitive transcranial magnetic stimulation (rTMS) over the left and right lateral occipitotemporal cortex while healthy right-handers judged the laterality of hand images. Results: With respect to the sham condition, left hemisphere stimulation specifically reduced accuracy in judging the laterality of right-hand images. Instead, the hallmark of motor simulation, i.e., the biomechanical effect, was never influenced by rTMS. Conclusions: The lateral occipitotemporal cortex seems to be involved in the mental representation of the dominant hand, at least in right-handers, but not in reactivating sensorimotor information during simulation. These findings provide useful hints for developing combined brain stimulation and behavioural training to improve motor imagery.
Collapse
Affiliation(s)
- Massimiliano Conson
- Laboratory of Developmental Neuropsychology, Department of Psychology, University of Campania Luigi Vanvitelli, 81100 Caserta, Italy; (R.C.); (C.B.); (G.F.); (I.Z.)
- Correspondence: ; Tel.: +39-08-2327-5327
| | - Roberta Cecere
- Laboratory of Developmental Neuropsychology, Department of Psychology, University of Campania Luigi Vanvitelli, 81100 Caserta, Italy; (R.C.); (C.B.); (G.F.); (I.Z.)
| | - Chiara Baiano
- Laboratory of Developmental Neuropsychology, Department of Psychology, University of Campania Luigi Vanvitelli, 81100 Caserta, Italy; (R.C.); (C.B.); (G.F.); (I.Z.)
| | - Francesco De Bellis
- Laboratory of Neuropsychology, Department of Psychology, University of Campania Luigi Vanvitelli, 81100 Caserta, Italy; (F.D.B.); (L.T.)
| | - Gabriela Forgione
- Laboratory of Developmental Neuropsychology, Department of Psychology, University of Campania Luigi Vanvitelli, 81100 Caserta, Italy; (R.C.); (C.B.); (G.F.); (I.Z.)
| | - Isa Zappullo
- Laboratory of Developmental Neuropsychology, Department of Psychology, University of Campania Luigi Vanvitelli, 81100 Caserta, Italy; (R.C.); (C.B.); (G.F.); (I.Z.)
| | - Luigi Trojano
- Laboratory of Neuropsychology, Department of Psychology, University of Campania Luigi Vanvitelli, 81100 Caserta, Italy; (F.D.B.); (L.T.)
| |
Collapse
|
38
|
Rosenke M, Davidenko N, Grill-Spector K, Weiner KS. Combined Neural Tuning in Human Ventral Temporal Cortex Resolves the Perceptual Ambiguity of Morphed 2D Images. Cereb Cortex 2020; 30:4882-4898. [PMID: 32372098 PMCID: PMC7391265 DOI: 10.1093/cercor/bhaa081] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022] Open
Abstract
We have an amazing ability to categorize objects in the world around us. Nevertheless, it is largely unknown how cortical regions in human ventral temporal cortex (VTC), which is critical for categorization, support this behavioral ability. Here, we examined the relationship between neural responses and behavioral performance during the categorization of morphed silhouettes of faces and hands, which are animate categories processed in cortically adjacent regions in VTC. Our results reveal that the combination of neural responses from VTC face- and body-selective regions explains behavioral categorization more accurately than neural responses from either region alone. Furthermore, we built a model that predicts a person's behavioral performance using estimated parameters of brain-behavior relationships from a different group of people. Moreover, we show that this brain-behavior model generalizes to adjacent face- and body-selective regions in lateral occipitotemporal cortex. Thus, while face- and body-selective regions are located within functionally distinct domain-specific networks, cortically adjacent regions from both networks likely integrate neural responses to resolve competing and perceptually ambiguous information from both categories.
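The brain-behavior comparison reported here, predicting categorization behavior from one region's responses versus the combination of both regions, can be sketched with a simple cross-validated regression on simulated values (variable names and data are assumptions, not the authors' model):

```python
# Compare how well face-selective responses, body-selective responses, and
# their combination predict categorization behavior (simulated data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_obs = 120                                               # e.g., morph levels x participants
face_resp = rng.normal(size=(n_obs, 1))                   # face-selective ROI response
body_resp = rng.normal(size=(n_obs, 1))                   # body-selective ROI response
behavior = (0.5 * face_resp - 0.5 * body_resp).ravel() + rng.normal(scale=0.5, size=n_obs)

for name, X in [("face only", face_resp), ("body only", body_resp),
                ("combined", np.hstack([face_resp, body_resp]))]:
    r2 = cross_val_score(LinearRegression(), X, behavior, cv=5, scoring="r2").mean()
    print(f"{name:9s}: cross-validated R^2 = {r2:.2f}")
```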
Collapse
Affiliation(s)
- Mona Rosenke
- Psychology Department, Stanford University, Stanford, CA 94305, USA
| | - Nicolas Davidenko
- Psychology Department, University of California, Santa Cruz, Santa Cruz, CA 95064, USA
| | - Kalanit Grill-Spector
- Psychology Department, Stanford University, Stanford, CA 94305, USA
- Neuroscience Institute, Stanford University, Stanford, CA 94305, USA
| | - Kevin S Weiner
- Psychology Department, University of California, Berkeley, Berkeley, CA 94720, USA
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA
| |
Collapse
|
39
|
Moreau Q, Parrotta E, Era V, Martelli ML, Candidi M. Role of the occipito-temporal theta rhythm in hand visual identification. J Neurophysiol 2020; 123:167-177. [DOI: 10.1152/jn.00267.2019] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/12/2023] Open
Abstract
Neuroimaging and EEG studies have shown that passive observation of the full body and of specific body parts is associated with 1) activity of an occipito-temporal region named the extrastriate body area (EBA), 2) amplitude modulations of a specific posterior event-related potential (ERP) component (N1/N190), and 3) theta-band (4–7 Hz) synchronization recorded from occipito-temporal electrodes compatible with the location of EBA. To characterize the functional role of the occipito-temporal theta-band increase during the processing of body-part stimuli, we recorded EEG from healthy participants while they were engaged in an identification (match-to-sample) task on images of hands and non-body control images (leaves). In addition to confirming that occipito-temporal electrodes show a larger N1 for hand images than for control stimuli, cluster-based analysis revealed an occipito-temporal cluster showing increased theta power when hands were presented (compared with leaves), and showed that this theta increase was higher for identified hands than for non-identified ones, while it did not differ significantly between identified and non-identified non-hand stimuli. Finally, single-trial multivariate pattern analysis revealed that time-frequency modulation in the theta band is a better marker for classifying the identification of hand images than the ERP modulation. The present results support the notion that theta activity over the occipito-temporal cortex is an informative marker of hand visual processing and may reflect the activity of a network coding for stimulus identity. NEW & NOTEWORTHY Hands provide crucial information regarding the identity of others, which is key information for social processes. We recorded EEG activity of healthy participants during the visual identification of hand images. The combination of univariate and multivariate pattern analysis in the time and time-frequency domains highlights the functional role of theta (4–7 Hz) activity over visual areas during hand identification and emphasizes the robustness of this neuromarker in occipito-temporal visual processing dynamics.
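Theta-band (4–7 Hz) power at occipito-temporal electrodes, contrasted between hand and leaf epochs, could be estimated along these lines; the epoch arrays, channel selection, and sampling rate are assumptions, not the authors' pipeline:

```python
# Estimate theta-band power per epoch from Welch spectra and contrast
# two conditions (hand vs. leaf images). Data are simulated placeholders.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(4)
sfreq = 500                                      # Hz
n_epochs, n_channels, n_times = 80, 4, 500       # 1-s epochs, 4 occipito-temporal channels
epochs_hand = rng.normal(size=(n_epochs, n_channels, n_times))
epochs_leaf = rng.normal(size=(n_epochs, n_channels, n_times))

def theta_power(epochs):
    freqs, psd = welch(epochs, fs=sfreq, nperseg=n_times, axis=-1)
    theta = (freqs >= 4) & (freqs <= 7)
    return psd[..., theta].mean(axis=(-1, -2))   # average over theta bins and channels

print("Mean theta power, hands vs. leaves:",
      theta_power(epochs_hand).mean(), theta_power(epochs_leaf).mean())
```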
Collapse
Affiliation(s)
- Quentin Moreau
- Department of Psychology, Sapienza University of Rome, Rome, Italy
- Istituto di Ricovero e Cura a Carattere Scientifico, Fondazione Santa Lucia, Rome
| | - Eleonora Parrotta
- Department of Psychology, Sapienza University of Rome, Rome, Italy
- Istituto di Ricovero e Cura a Carattere Scientifico, Fondazione Santa Lucia, Rome
| | - Vanessa Era
- Department of Psychology, Sapienza University of Rome, Rome, Italy
- Istituto di Ricovero e Cura a Carattere Scientifico, Fondazione Santa Lucia, Rome
| | - Maria Luisa Martelli
- Department of Psychology, Sapienza University of Rome, Rome, Italy
- Istituto di Ricovero e Cura a Carattere Scientifico, Fondazione Santa Lucia, Rome
| | - Matteo Candidi
- Department of Psychology, Sapienza University of Rome, Rome, Italy
- Istituto di Ricovero e Cura a Carattere Scientifico, Fondazione Santa Lucia, Rome
| |
Collapse
|
40
|
Ross P, Flack T. Removing Hand Form Information Specifically Impairs Emotion Recognition for Fearful and Angry Body Stimuli. Perception 2019; 49:98-112. [PMID: 31801026 DOI: 10.1177/0301006619893229] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Emotion perception research has largely been dominated by work on facial expressions, but emotion is also strongly conveyed from the body. Research exploring emotion recognition from the body tends to refer to “the body” as a whole entity. However, the body is made up of different components (hands, arms, trunk, etc.), all of which could be differentially contributing to emotion recognition. We know that the hands can help to convey actions and, in particular, are important for social communication through gestures, but we currently do not know to what extent the hands influence emotion recognition from the body. Here, 93 adults viewed static emotional body stimuli with either the hands, arms, or both components removed and completed a forced-choice emotion recognition task. Removing the hands significantly reduced recognition accuracy for fear and anger but made no significant difference to the recognition of happiness and sadness. Removing the arms had no effect on emotion recognition accuracy compared with the full-body stimuli. These results suggest the hands may play a key role in the recognition of emotions from the body.
Collapse
Affiliation(s)
- Paddy Ross
- Department of Psychology, Durham University, UK
| | - Tessa Flack
- School of Psychology, University of Lincoln, UK
| |
Collapse
|
41
|
Abstract
In this study I examined the role of the hands in scene perception. In Experiment 1, eye movements during free observation of natural scenes were analyzed. Fixations to faces and hands were compared under several conditions, including scenes with and without faces, with and without hands, and without a person. The hands were either resting (e.g., lying on the knees) or interacting with objects (e.g., holding a bottle). Faces held an absolute attentional advantage, regardless of hand presence. Importantly, fixations to interacting hands were faster and more frequent than those to resting hands, suggesting attentional priority for interacting hands. The interacting-hand advantage could not be attributed to perceptual saliency or to the hand-owner's (i.e., the depicted person's) gaze being directed at the interacting hand. Experiment 2 confirmed the interacting-hand advantage in a visual search paradigm with more controlled stimuli. The present results indicate that the key to understanding the role of attention in person perception is the competitive interaction among objects such as faces, hands, and objects interacting with the person.
Collapse
|
42
|
Mowbray R, Gottwald JM, Zhao M, Atkinson AP, Cowie D. The development of visually guided stepping. Exp Brain Res 2019; 237:2875-2883. [PMID: 31471678 PMCID: PMC6794234 DOI: 10.1007/s00221-019-05629-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2019] [Accepted: 08/14/2019] [Indexed: 12/03/2022]
Abstract
Adults use vision during stepping and walking to fine-tune foot placement. However, the developmental profile of visually guided stepping is unclear. We asked (1) whether children use online vision to fine-tune precise steps and (2) whether precision stepping develops as part of broader visuomotor development, alongside other fundamental motor skills like reaching. With 6- (N = 11), 7- (N = 11), and 8-year-olds (N = 11) and adults (N = 15), we manipulated visual input during steps and reaches. Using motion capture, we measured step and reach error, and postural stability. We expected (1) that both steps and reaches would be visually guided, (2) with similar developmental profiles, (3) foot placement biases that promote stability, and (4) correlations between postural stability and step error. Children used vision to fine-tune both steps and reaches. At all ages, foot placement was biased (albeit not in the predicted directions). Contrary to our predictions, step error was not correlated with postural stability. By 8 years, children's step and reach error were adult-like. Despite similar visual control mechanisms, stepping and reaching had different developmental profiles: step error reduced with age whilst reach error was lower and stable with age. We argue that the development of both visually guided and non-visually guided action is limb-specific.
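Step error of the kind measured here is typically the planar distance between final foot placement and the target; a tiny sketch with assumed coordinates and units:

```python
# Step error as the 2D distance between foot placement and target centre
# (coordinates in metres; values are invented for illustration).
import numpy as np

foot_xy = np.array([[0.02, 0.51], [0.05, 0.47], [-0.03, 0.55]])   # final foot position per trial
target_xy = np.array([0.00, 0.50])                                 # stepping-target centre
step_error = np.linalg.norm(foot_xy - target_xy, axis=1)
print("Mean step error (m):", step_error.mean())
```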
Collapse
Affiliation(s)
- Rachel Mowbray
- Department of Psychology, Durham University, South Road, Durham, DH1 3LE, UK.
| | - Janna M Gottwald
- Department of Psychology, Durham University, South Road, Durham, DH1 3LE, UK
- Department of Psychology, Uppsala University, Box 1225, 75121, Uppsala, Sweden
| | - Manfei Zhao
- Department of Psychology, Durham University, South Road, Durham, DH1 3LE, UK
| | - Anthony P Atkinson
- Department of Psychology, Durham University, South Road, Durham, DH1 3LE, UK
| | - Dorothy Cowie
- Department of Psychology, Durham University, South Road, Durham, DH1 3LE, UK
| |
Collapse
|
43
|
Predictive coding of action intentions in dorsal and ventral visual stream is based on visual anticipations, memory-based information and motor preparation. Brain Struct Funct 2019; 224:3291-3308. [PMID: 31673774 DOI: 10.1007/s00429-019-01970-1] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2019] [Accepted: 10/16/2019] [Indexed: 10/25/2022]
Abstract
Predictions of upcoming movements are based on several types of neural signals that span the visual, somatosensory, motor and cognitive system. Thus far, pre-movement signals have been investigated while participants viewed the object to be acted upon. Here, we studied the contribution of information other than vision to the classification of preparatory signals for action, even in the absence of online visual information. We used functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) to test whether the neural signals evoked by visual, memory-based and somato-motor information can be reliably used to predict upcoming actions in areas of the dorsal and ventral visual stream during the preparatory phase preceding the action, while participants were lying still. Nineteen human participants (nine women) performed one of two actions towards an object with their eyes open or closed. Despite the well-known role of ventral stream areas in visual recognition tasks and the specialization of dorsal stream areas in somato-motor processes, we decoded action intention in areas of both streams based on visual, memory-based and somato-motor signals. Interestingly, we could reliably decode action intention in absence of visual information based on neural activity evoked when visual information was available and vice versa. Our results show a similar visual, memory and somato-motor representation of action planning in dorsal and ventral visual stream areas that allows predicting action intention across domains, regardless of the availability of visual information.
Collapse
|
44
|
Op de Beeck HP, Pillet I, Ritchie JB. Factors Determining Where Category-Selective Areas Emerge in Visual Cortex. Trends Cogn Sci 2019; 23:784-797. [PMID: 31327671 DOI: 10.1016/j.tics.2019.06.006] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2019] [Revised: 06/21/2019] [Accepted: 06/21/2019] [Indexed: 11/26/2022]
Abstract
A hallmark of functional localization in the human brain is the presence of areas in visual cortex specialized for representing particular categories such as faces and words. Why do these areas appear where they do during development? Recent findings highlight several general factors to consider when answering this question. Experience-driven category selectivity arises in regions that: (i) have pre-existing selectivity for properties of the stimulus, (ii) are appropriately placed in the computational hierarchy of the visual system, and (iii) exhibit domain-specific patterns of connectivity to nonvisual regions. In other words, the cortical location of category selectivity is constrained by what category will be represented, how it will be represented, and why the representation will be used.
Affiliation(s)
- Hans P Op de Beeck: Department of Brain and Cognition and Leuven Brain Institute, KU Leuven, Belgium
- Ineke Pillet: Department of Brain and Cognition and Leuven Brain Institute, KU Leuven, Belgium
- J Brendan Ritchie: Department of Brain and Cognition and Leuven Brain Institute, KU Leuven, Belgium
45
van den Heiligenberg FMZ, Orlov T, Macdonald SN, Duff EP, Henderson Slater D, Beckmann CF, Johansen-Berg H, Culham JC, Makin TR. Artificial limb representation in amputees. Brain 2019. [PMID: 29534154 PMCID: PMC5917779 DOI: 10.1093/brain/awy054] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022] Open
Abstract
The human brain contains multiple hand-selective areas, in both the sensorimotor and visual systems. Could our brain repurpose neural resources, originally developed for supporting hand function, to represent and control artificial limbs? We studied individuals with congenital or acquired hand-loss (hereafter one-handers) using functional MRI. We show that the more one-handers use an artificial limb (prosthesis) in their everyday life, the stronger visual hand-selective areas in the lateral occipitotemporal cortex respond to prosthesis images. This was found even when one-handers were presented with images of active prostheses that share the functionality of the hand but not necessarily its visual features (e.g. a ‘hook’ prosthesis). Further, we show that daily prosthesis usage determines large-scale inter-network communication across hand-selective areas. This was demonstrated by increased resting state functional connectivity between visual and sensorimotor hand-selective areas, proportional to the intensiveness of everyday prosthesis usage. Further analysis revealed a 3-fold coupling between prosthesis activity, visuomotor connectivity and usage, suggesting a possible role for the motor system in shaping use-dependent representation in visual hand-selective areas, and/or vice versa. Moreover, able-bodied control participants who routinely observe prosthesis usage (albeit less intensively than the prosthesis users) showed significantly weaker associations between degree of prosthesis observation and visual cortex activity or connectivity. Together, our findings suggest that altered daily motor behaviour facilitates prosthesis-related visual processing and shapes communication across hand-selective areas. This neurophysiological substrate for prosthesis embodiment may inspire rehabilitation approaches to improve usage of existing substitutionary devices and aid implementation of future assistive and augmentative technologies.
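The usage-connectivity relationship reported here is, in essence, a between-subject correlation between an ROI-to-ROI coupling measure and a prosthesis-usage score. The sketch below illustrates that kind of analysis with made-up arrays; it is not the study's data or code, and the ROI names and usage scale are assumptions.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# Assumed inputs per subject: resting-state time series averaged over a visual
# hand-selective ROI and a sensorimotor hand-selective ROI, plus a usage score.
n_subjects, n_timepoints = 20, 200
visual_roi = rng.normal(size=(n_subjects, n_timepoints))
motor_roi = rng.normal(size=(n_subjects, n_timepoints))
usage_scores = rng.uniform(0, 10, size=n_subjects)

# Subject-wise functional connectivity: correlation between the two ROI time series
connectivity = np.array([pearsonr(visual_roi[s], motor_roi[s])[0]
                         for s in range(n_subjects)])

# Fisher z-transform, then relate connectivity to prosthesis usage across subjects
z_connectivity = np.arctanh(connectivity)
r, p = pearsonr(z_connectivity, usage_scores)
print(f"usage-connectivity correlation: r = {r:.2f}, p = {p:.3f}")
```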
Affiliation(s)
- Fiona M Z van den Heiligenberg: Institute of Cognitive Neuroscience, University College London, London, UK; FMRIB Centre, Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford, UK; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Tanya Orlov: Neurobiology Department, Life Sciences Institute, Hebrew University of Jerusalem, Jerusalem, Israel
- Scott N Macdonald: Brain and Mind Institute, Department of Psychology, University of Western Ontario, Canada
- Eugene P Duff: FMRIB Centre, Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford, UK
- David Henderson Slater: FMRIB Centre, Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford, UK; Oxford Centre for Enablement, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Christian F Beckmann: Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Heidi Johansen-Berg: FMRIB Centre, Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford, UK
- Jody C Culham: Brain and Mind Institute, Department of Psychology, University of Western Ontario, Canada
- Tamar R Makin: Institute of Cognitive Neuroscience, University College London, London, UK; FMRIB Centre, Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford, UK; Wellcome Centre for Human Neuroimaging, University College London, London, UK
46
Manson GA, Tremblay L, Lebar N, de Grosbois J, Mouchnino L, Blouin J. Auditory cues for somatosensory targets invoke visuomotor transformations: Behavioral and electrophysiological evidence. PLoS One 2019; 14:e0215518. [PMID: 31048853 PMCID: PMC6497427 DOI: 10.1371/journal.pone.0215518] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2018] [Accepted: 04/03/2019] [Indexed: 11/18/2022] Open
Abstract
Prior to goal-directed actions, somatosensory target positions can be localized using either an exteroceptive or an interoceptive body representation. The goal of the present study was to investigate if the body representation selected to plan reaches to somatosensory targets is influenced by the sensory modality of the cue indicating the target’s location. In the first experiment, participants reached to somatosensory targets prompted by either an auditory or a vibrotactile cue. As a baseline condition, participants also performed reaches to visual targets prompted by an auditory cue. Gaze-dependent reaching errors were measured to determine the contribution of the exteroceptive representation to motor planning processes. The results showed that reaches to both auditory-cued somatosensory targets and auditory-cued visual targets exhibited larger gaze-dependent reaching errors than reaches to vibrotactile-cued somatosensory targets. Thus, an exteroceptive body representation was likely used to plan reaches to auditory-cued somatosensory targets but not to vibrotactile-cued somatosensory targets. The second experiment examined the influence of using an exteroceptive body representation to plan movements to somatosensory targets on pre-movement neural activations. Cortical responses to a task-irrelevant visual flash were measured as participants planned movements to either auditory-cued somatosensory or auditory-cued visual targets. Larger responses (i.e., visual-evoked potentials) were found when participants planned movements to somatosensory vs. visual targets, and source analyses revealed that these activities were localized to the left occipital and left posterior parietal areas. These results suggest that visual and visuomotor processing networks were more engaged when using the exteroceptive body representation to plan movements to somatosensory targets, than when planning movements to external visual targets.
Affiliation(s)
- Gerome A. Manson: Aix-Marseille University, CNRS, LNC FR 3C, Marseille, France; University of Toronto, Centre for Motor Control, Faculty of Kinesiology and Physical Education, Toronto, Ontario, Canada
- Luc Tremblay: University of Toronto, Centre for Motor Control, Faculty of Kinesiology and Physical Education, Toronto, Ontario, Canada
- Nicolas Lebar: Aix-Marseille University, CNRS, LNC FR 3C, Marseille, France
- John de Grosbois: University of Toronto, Centre for Motor Control, Faculty of Kinesiology and Physical Education, Toronto, Ontario, Canada
- Jean Blouin: Aix-Marseille University, CNRS, LNC FR 3C, Marseille, France
47
Gopinath K, Krishnamurthy V, Lacey S, Sathian K. Accounting for Non-Gaussian Sources of Spatial Correlation in Parametric Functional Magnetic Resonance Imaging Paradigms II: A Method to Obtain First-Level Analysis Residuals with Uniform and Gaussian Spatial Autocorrelation Function and Independent and Identically Distributed Time-Series. Brain Connect 2018; 8:10-21. [PMID: 29161884 DOI: 10.1089/brain.2017.0522] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
In a recent study, Eklund et al. showed that cluster-wise family-wise error (FWE) rate-corrected inferences made in parametric statistical method-based functional magnetic resonance imaging (fMRI) studies over the past couple of decades may have been invalid, particularly for cluster-defining thresholds less stringent than p < 0.001, principally because the spatial autocorrelation functions (sACFs) of fMRI data had been incorrectly modeled as following a Gaussian form, whereas empirical data suggest otherwise. Hence, the residuals from general linear model (GLM)-based fMRI activation estimates in these studies may not have possessed a homogeneously Gaussian sACF. Here we propose a method based on the assumption that the heterogeneity and non-Gaussianity of the sACF of the first-level GLM analysis residuals, as well as the temporal autocorrelations in the first-level voxel residual time-series, are caused by unmodeled MRI signal from neuronal and physiological processes as well as motion and other artifacts, which can be approximated by appropriate decompositions of the first-level residuals with principal component analysis (PCA) and removed. We show that application of this method yields GLM residuals with significantly reduced spatial correlation, a nearly Gaussian sACF and uniform spatial smoothness across the brain, thereby allowing valid cluster-based FWE-corrected inferences under the assumption of Gaussian spatial noise. We further show that application of this method renders the voxel time-series of the first-level GLM residuals independent and identically distributed across time (a necessary condition for appropriate voxel-level GLM inference), without having to fit ad hoc stochastic colored noise models. Furthermore, the detection power of individual-subject brain activation analysis is enhanced. This method will be especially useful for case studies, which rely on first-level GLM analysis inferences.
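As a rough illustration of the general strategy described here, not the published implementation, one can estimate the dominant components of the first-level residuals with PCA and subtract the reconstructed structured part from each voxel's residual time series. The matrix shapes and the number of components removed below are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def denoise_residuals(residuals: np.ndarray, n_components: int = 10) -> np.ndarray:
    """Remove the top principal components from first-level GLM residuals
    (time x voxels) and return the cleaned residuals.

    This is only a sketch of the generic PCA-based idea; the published method
    specifies how components are chosen and how the resulting spatial
    autocorrelation function is validated.
    """
    pca = PCA(n_components=n_components)
    # Component time courses shared across voxels (time x k)
    scores = pca.fit_transform(residuals)
    # Reconstruct the structured part and subtract it from the residuals
    structured = scores @ pca.components_ + pca.mean_
    return residuals - structured

# Hypothetical example: 200 time points x 5000 voxels of residuals
rng = np.random.default_rng(2)
cleaned = denoise_residuals(rng.normal(size=(200, 5000)))
print(cleaned.shape)  # (200, 5000)
```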
Affiliation(s)
- Kaundinya Gopinath: Department of Radiology and Imaging Sciences, Emory University, Atlanta, Georgia
- Simon Lacey: Department of Neurology, Emory University, Atlanta, Georgia
- K Sathian: Department of Neurology, Emory University, Atlanta, Georgia; Department of Rehabilitation Medicine, Emory University, Atlanta, Georgia; Department of Psychology, Emory University, Atlanta, Georgia; Rehabilitation R&D Center for Visual and Neurocognitive Rehabilitation, Atlanta VAMC, Decatur, Georgia
48
Repetitive Transcranial Magnetic Stimulation Over the Left Posterior Middle Temporal Gyrus Reduces Wrist Velocity During Emblematic Hand Gesture Imitation. Brain Topogr 2018; 32:332-341. [PMID: 30411178 PMCID: PMC6373290 DOI: 10.1007/s10548-018-0684-1] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2018] [Accepted: 10/26/2018] [Indexed: 12/22/2022]
Abstract
Results from neuropsychological studies, and neuroimaging and behavioural experiments with healthy individuals, suggest that the imitation of meaningful and meaningless actions may be reliant on different processing routes. The left posterior middle temporal gyrus (pMTG) is one area that might be important for the recognition and imitation of meaningful actions. We studied the role of the left pMTG in imitation using repetitive transcranial magnetic stimulation (rTMS) and two-person motion-tracking. Participants imitated meaningless and emblematic meaningful hand and finger gestures performed by a confederate actor whilst both individuals were motion-tracked. rTMS was applied during action observation (before imitation) over the left pMTG or a vertex control site. Since meaningless action imitation has been previously associated with a greater wrist velocity and longer correction period at the end of the movement, we hypothesised that stimulation over the left pMTG would increase wrist velocity and extend the correction period of meaningful actions (i.e., due to interference with action recognition). We also hypothesised that imitator accuracy (actor-imitator correspondence) would be reduced following stimulation over the left pMTG. Contrary to our hypothesis, we found that stimulation over the pMTG, but not the vertex, during action observation reduced wrist velocity when participants later imitated meaningful, but not meaningless, hand gestures. These results provide causal evidence for a role of the left pMTG in the imitation of meaningful gestures, and may also be in keeping with proposals that left posterior temporal regions play a role in the production of postural components of gesture.
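Wrist velocity in motion-tracking analyses of this kind is typically obtained by differentiating the marker trajectory and taking the peak tangential speed. The sketch below shows one way to compute it; the sampling rate and trajectory are hypothetical, and this is not the authors' processing pipeline.

```python
import numpy as np

def peak_speed(positions: np.ndarray, fs: float) -> float:
    """Peak tangential speed (position units per second) from an
    (n_samples x 3) marker trajectory sampled at fs Hz."""
    velocity = np.gradient(positions, 1.0 / fs, axis=0)  # numerical derivative
    speed = np.linalg.norm(velocity, axis=1)             # tangential speed per sample
    return float(speed.max())

# Hypothetical wrist trajectory: 1 s of movement at 200 Hz along a smooth 0.3 m path
fs = 200.0
t = np.arange(0.0, 1.0, 1.0 / fs)
wrist = np.column_stack([0.15 * (1 - np.cos(np.pi * t)),
                         np.zeros_like(t),
                         np.zeros_like(t)])               # metres
print(f"peak wrist speed: {peak_speed(wrist, fs):.2f} m/s")  # ~0.47 m/s
```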
49
Bracci S, Caramazza A, Peelen MV. View-invariant representation of hand postures in the human lateral occipitotemporal cortex. Neuroimage 2018; 181:446-452. [DOI: 10.1016/j.neuroimage.2018.07.001] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2018] [Revised: 05/28/2018] [Accepted: 07/01/2018] [Indexed: 12/14/2022] Open
50
Walsh E, Vormberg A, Hannaford J, Longo MR. Inversion produces opposite size illusions for faces and bodies. Acta Psychol (Amst) 2018; 191:15-24. [PMID: 30195177 DOI: 10.1016/j.actpsy.2018.08.017] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2018] [Revised: 06/21/2018] [Accepted: 08/27/2018] [Indexed: 11/15/2022] Open
Abstract
Faces are complex, multidimensional, and meaningful visual stimuli. Recently, Araragi, Aotani, and Kitaoka (2012) demonstrated an intriguing face-size illusion whereby an inverted face is perceived as larger than a physically identical upright face. Like the face, the human body is a highly familiar and important stimulus in our lives. Here, we investigated the specificity of this size underestimation of upright faces, testing whether similar effects also hold for bodies, hands, and everyday objects. Experiments 1a and 1b replicated the face-size illusion. No size illusion was observed for hands or objects. Unexpectedly, a reverse size illusion was observed for bodies, such that upright bodies were perceived as larger than their inverted counterparts. Experiment 2 showed that the face illusion was maintained even when the photographic contrast polarity of the stimuli was reversed, indicating that the visual mechanisms driving the illusion rely on geometric, featural information rather than image contrast. In Experiment 2, the reverse size illusion for bodies failed to reach significance. Our findings show that size illusions caused by inversion are highly category-specific, with opposite illusions for faces and bodies.
Affiliation(s)
- Eamonn Walsh: Department of Psychological Sciences, Birkbeck, University of London, UK; Department of Basic and Clinical Neuroscience, Institute of Psychiatry, Psychology & Neuroscience, King's College London, UK
- Alexandra Vormberg: Department of Psychological Sciences, Birkbeck, University of London, UK; Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Germany; Frankfurt Institute for Advanced Studies (FIAS), Germany
- Josie Hannaford: Department of Psychological Sciences, Birkbeck, University of London, UK
- Matthew R Longo: Department of Psychological Sciences, Birkbeck, University of London, UK