1. Butz MV, Mittenbühler M, Schwöbel S, Achimova A, Gumbsch C, Otte S, Kiebel S. Contextualizing predictive minds. Neurosci Biobehav Rev 2025; 168:105948. PMID: 39580009. DOI: 10.1016/j.neubiorev.2024.105948.
Abstract
The structure of human memory seems to be optimized for efficient prediction, planning, and behavior. We propose that these capacities rely on a tripartite structure of memory that includes concepts, events, and contexts-three layers that constitute the mental world model. We suggest that the mechanism that critically increases adaptivity and flexibility is the tendency to contextualize. This tendency promotes local, context-encoding abstractions, which focus event- and concept-based planning and inference processes on the task and situation at hand. As a result, cognitive contextualization offers a solution to the frame problem-the need to select relevant features of the environment from the rich stream of sensorimotor signals. We draw evidence for our proposal from developmental psychology and neuroscience. Adopting a computational stance, we present evidence from cognitive modeling research which suggests that context sensitivity is a feature that is critical for maximizing the efficiency of cognitive processes. Finally, we turn to recent deep-learning architectures which independently demonstrate how context-sensitive memory can emerge in a self-organized learning system constrained by cognitively-inspired inductive biases.
Affiliation(s)
- Martin V Butz
- Cognitive Modeling, Faculty of Science, University of Tübingen, Sand 14, Tübingen 72076, Germany.
- Maximilian Mittenbühler
- Cognitive Modeling, Faculty of Science, University of Tübingen, Sand 14, Tübingen 72076, Germany
- Sarah Schwöbel
- Cognitive Computational Neuroscience, Faculty of Psychology, TU Dresden, School of Science, Dresden 01062, Germany
- Asya Achimova
- Cognitive Modeling, Faculty of Science, University of Tübingen, Sand 14, Tübingen 72076, Germany
- Christian Gumbsch
- Cognitive Modeling, Faculty of Science, University of Tübingen, Sand 14, Tübingen 72076, Germany; Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, TU Dresden, Dresden 01069, Germany
- Sebastian Otte
- Cognitive Modeling, Faculty of Science, University of Tübingen, Sand 14, Tübingen 72076, Germany; Adaptive AI Lab, Institute of Robotics and Cognitive Systems, University of Lübeck, Ratzeburger Allee 160, Lübeck 23562, Germany
- Stefan Kiebel
- Cognitive Computational Neuroscience, Faculty of Psychology, TU Dresden, School of Science, Dresden 01062, Germany
2. Becchio C, Pullar K, Scaliti E, Panzeri S. Kinematic coding: Measuring information in naturalistic behaviour. Phys Life Rev 2024; 51:442-458. PMID: 39603216. DOI: 10.1016/j.plrev.2024.11.009.
Abstract
Recent years have seen an explosion of interest in naturalistic behaviour and in machine learning tools for automatically tracking it. However, questions about what to measure, how to measure it, and how to relate naturalistic behaviour to neural activity and cognitive processes remain unresolved. In this Perspective, we propose a general experimental and computational framework - kinematic coding - for measuring how information about cognitive states is encoded in structured patterns of behaviour and how this information is read out by others during social interactions. This framework enables the design of new experiments and the generation of testable hypotheses that link behaviour, cognition, and neural activity at the single-trial level. Researchers can employ this framework to identify single-subject, single-trial encoding and readout computations and address meaningful questions about how information encoded in bodily motion is transmitted and communicated.
Affiliation(s)
- Cristina Becchio
- Department of Neurology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany.
- Kiri Pullar
- Department of Neurology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany; Institute for Neural Information Processing, Center for Molecular Neurobiology Hamburg, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
- Eugenio Scaliti
- Department of Neurology, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany; Department of Management "Valter Cantino", University of Turin, Turin, Italy; Human Science and Technologies, University of Turin, Turin, Italy
- Stefano Panzeri
- Institute for Neural Information Processing, Center for Molecular Neurobiology Hamburg, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany.
3. Simonelli F, Handjaras G, Benuzzi F, Bernardi G, Leo A, Duzzi D, Cecchetti L, Nichelli PF, Porro CA, Pietrini P, Ricciardi E, Lui F. Sensitivity and specificity of the action observation network to kinematics, target object, and gesture meaning. Hum Brain Mapp 2024; 45:e26762. PMID: 39037079. PMCID: PMC11261593. DOI: 10.1002/hbm.26762.
Abstract
Hierarchical models have been proposed to explain how the brain encodes actions, whereby different areas represent different features, such as gesture kinematics, target object, action goal, and meaning. The visual processing of action-related information is distributed over a well-known network of brain regions spanning separate anatomical areas, attuned to specific stimulus properties, and referred to as action observation network (AON). To determine the brain organization of these features, we measured representational geometries during the observation of a large set of transitive and intransitive gestures in two independent functional magnetic resonance imaging experiments. We provided evidence for a partial dissociation between kinematics, object characteristics, and action meaning in the occipito-parietal, ventro-temporal, and lateral occipito-temporal cortex, respectively. Importantly, most of the AON showed low specificity to all the explored features, and representational spaces sharing similar information content were spread across the cortex without being anatomically adjacent. Overall, our results support the notion that the AON relies on overlapping and distributed coding and may act as a unique representational space instead of mapping features in a modular and segregated manner.
Affiliation(s)
- Francesca Benuzzi
- Department of Biomedical, Metabolic and Neural Sciences and Center for Neuroscience and Neurotechnology, University of Modena and Reggio Emilia, Modena, Italy
- Andrea Leo
- IMT School for Advanced Studies Lucca, Lucca, Italy
- Davide Duzzi
- Department of Biomedical, Metabolic and Neural Sciences and Center for Neuroscience and Neurotechnology, University of Modena and Reggio Emilia, Modena, Italy
- Paolo F. Nichelli
- Department of Biomedical, Metabolic and Neural Sciences and Center for Neuroscience and Neurotechnology, University of Modena and Reggio Emilia, Modena, Italy
- Carlo A. Porro
- Department of Biomedical, Metabolic and Neural Sciences and Center for Neuroscience and Neurotechnology, University of Modena and Reggio Emilia, Modena, Italy
- Fausta Lui
- Department of Biomedical, Metabolic and Neural Sciences and Center for Neuroscience and Neurotechnology, University of Modena and Reggio Emilia, Modena, Italy
4. Casartelli L, Maronati C, Cavallo A. From neural noise to co-adaptability: Rethinking the multifaceted architecture of motor variability. Phys Life Rev 2023; 47:245-263. PMID: 37976727. DOI: 10.1016/j.plrev.2023.10.036.
Abstract
In the last decade, the source and the functional meaning of motor variability have attracted considerable attention in behavioral and brain sciences. This construct classically combined different levels of description, variable internal robustness or coherence, and multifaceted operational meanings. We provide here a comprehensive review of the literature with the primary aim of building a precise lexicon that goes beyond the generic and monolithic use of motor variability. In the pars destruens of the work, we model three domains of motor variability related to peculiar computational elements that influence fluctuations in motor outputs. Each domain is in turn characterized by multiple sub-domains. We begin with the domains of noise and differentiation. However, the main contribution of our model concerns the domain of adaptability, which refers to variation within the same exact motor representation. In particular, we use the terms learning and (social)fitting to specify the portions of motor variability that depend on our propensity to learn and on our largely constitutive propensity to be influenced by external factors. A particular focus is on motor variability in the context of the sub-domain named co-adaptability. Further groundbreaking challenges arise in the modeling of motor variability. Therefore, in a separate pars construens, we attempt to characterize these challenges, addressing both theoretical and experimental aspects as well as potential clinical implications for neurorehabilitation. All in all, our work suggests that motor variability is neither simply detrimental nor beneficial, and that studying its fluctuations can provide meaningful insights for future research.
Affiliation(s)
- Luca Casartelli
- Theoretical and Cognitive Neuroscience Unit, Scientific Institute IRCCS E. MEDEA, Italy
- Camilla Maronati
- Move'n'Brains Lab, Department of Psychology, Università degli Studi di Torino, Italy
- Andrea Cavallo
- Move'n'Brains Lab, Department of Psychology, Università degli Studi di Torino, Italy; C'MoN Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy.
5. Vannuscorps G, Caramazza A. Effector-specific motor simulation supplements core action recognition processes in adverse conditions. Soc Cogn Affect Neurosci 2023; 18:nsad046. PMID: 37688518. PMCID: PMC10576201. DOI: 10.1093/scan/nsad046.
Abstract
Observing other people acting activates imitative motor plans in the observer. Whether, and if so when and how, such 'effector-specific motor simulation' contributes to action recognition remains unclear. We report that individuals born without upper limbs (IDs)-who cannot covertly imitate upper-limb movements-are significantly less accurate at recognizing degraded (but not intact) upper-limb than lower-limb actions (i.e. point-light animations). This finding emphasizes the need to reframe the current controversy regarding the role of effector-specific motor simulation in action recognition: instead of focusing on the dichotomy between motor and non-motor theories, the field would benefit from new hypotheses specifying when and how effector-specific motor simulation may supplement core action recognition processes to accommodate the full variety of action stimuli that humans can recognize.
Affiliation(s)
- Gilles Vannuscorps
- Psychological Sciences Research Institute, Université catholique de Louvain, Place Cardinal Mercier 10, 1348, Louvain-la-Neuve, Belgium
- Institute of Neuroscience, Université catholique de Louvain, Avenue E. Mounier 53, Brussels 1200, Belgium
- Department of Psychology, Harvard University, Kirkland Street 33, Cambridge, MA 02138, USA
- Alfonso Caramazza
- Department of Psychology, Harvard University, Kirkland Street 33, Cambridge, MA 02138, USA
- CIMEC (Center for Mind-Brain Sciences), University of Trento, Via delle Regole 101, Mattarello TN 38123, Italy
6. Schubotz RI, Ebel SJ, Elsner B, Weiss PH, Wörgötter F. Tool mastering today - an interdisciplinary perspective. Front Psychol 2023; 14:1191792. PMID: 37397285. PMCID: PMC10311916. DOI: 10.3389/fpsyg.2023.1191792.
Abstract
Tools have coined human life, living conditions, and culture. Recognizing the cognitive architecture underlying tool use would allow us to comprehend its evolution, development, and physiological basis. However, the cognitive underpinnings of tool mastering remain little understood in spite of long-time research in neuroscientific, psychological, behavioral and technological fields. Moreover, the recent transition of tool use to the digital domain poses new challenges for explaining the underlying processes. In this interdisciplinary review, we propose three building blocks of tool mastering: (A) perceptual and motor abilities integrate to tool manipulation knowledge, (B) perceptual and cognitive abilities to functional tool knowledge, and (C) motor and cognitive abilities to means-end knowledge about tool use. This framework allows for integrating and structuring research findings and theoretical assumptions regarding the functional architecture of tool mastering via behavior in humans and non-human primates, brain networks, as well as computational and robotic models. An interdisciplinary perspective also helps to identify open questions and to inspire innovative research approaches. The framework can be applied to studies on the transition from classical to modern, non-mechanical tools and from analogue to digital user-tool interactions in virtual reality, which come with increased functional opacity and sensorimotor decoupling between tool user, tool, and target. By working towards an integrative theory on the cognitive architecture of the use of tools and technological assistants, this review aims at stimulating future interdisciplinary research avenues.
Affiliation(s)
- Ricarda I. Schubotz
- Department of Biological Psychology, Institute for Psychology, University of Münster, Münster, Germany
- Sonja J. Ebel
- Human Biology & Primate Cognition, Institute of Biology, Leipzig University, Leipzig, Germany
- Department of Comparative Cultural Psychology, Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
- Birgit Elsner
- Developmental Psychology, Department of Psychology, University of Potsdam, Potsdam, Germany
- Peter H. Weiss
- Cognitive Neurology, Department of Neurology, University Hospital Cologne, Cologne, Germany
- Institute of Neuroscience and Medicine (INM-3), Forschungszentrum Jülich, Jülich, Germany
- Florentin Wörgötter
- Institute of Physics 3 and Bernstein Center for Computational Neuroscience, Georg August University Göttingen, Göttingen, Germany
7. Scaliti E, Pullar K, Borghini G, Cavallo A, Panzeri S, Becchio C. Kinematic priming of action predictions. Curr Biol 2023: S0960-9822(23)00687-5. PMID: 37339628. DOI: 10.1016/j.cub.2023.05.055.
Abstract
The ability to anticipate what others will do next is crucial for navigating social, interactive environments. Here, we develop an experimental and analytical framework to measure the implicit readout of prospective intention information from movement kinematics. Using a primed action categorization task, we first demonstrate implicit access to intention information by establishing a novel form of priming, which we term kinematic priming: subtle differences in movement kinematics prime action prediction. Next, using data collected from the same participants in a forced-choice intention discrimination task 1 h later, we quantify single-trial intention readout-the amount of intention information read by individual perceivers in individual kinematic primes-and assess whether it can be used to predict the amount of kinematic priming. We demonstrate that the amount of kinematic priming, as indexed by both response times (RTs) and initial fixations to a given probe, is directly proportional to the amount of intention information read by the individual perceiver at the single-trial level. These results demonstrate that human perceivers have rapid, implicit access to intention information encoded in movement kinematics and highlight the potential of our approach to reveal the computations that permit the readout of this information with single-subject, single-trial resolution.
Affiliation(s)
- Eugenio Scaliti
- Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen, 83, 16152 Genova, Italy; Department of Neurology, University Medical Center Hamburg-Eppendorf (UKE), Martinistrasse 52, 20246 Hamburg, Germany
- Kiri Pullar
- Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen, 83, 16152 Genova, Italy
- Giulia Borghini
- Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen, 83, 16152 Genova, Italy
- Andrea Cavallo
- Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen, 83, 16152 Genova, Italy; Department of Psychology, Università degli Studi di Torino, Via Giuseppe Verdi, 10, 10124 Torino, Italy
- Stefano Panzeri
- Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen, 83, 16152 Genova, Italy; Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Falkenried 94, 20251 Hamburg, Germany.
- Cristina Becchio
- Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen, 83, 16152 Genova, Italy; Department of Neurology, University Medical Center Hamburg-Eppendorf (UKE), Martinistrasse 52, 20246 Hamburg, Germany.
8. Zanini A, Dureux A, Selvanayagam J, Everling S. Ultra-high field fMRI identifies an action-observation network in the common marmoset. Commun Biol 2023; 6:553. PMID: 37217698. DOI: 10.1038/s42003-023-04942-8.
Abstract
The observation of others' actions activates a network of temporal, parietal and premotor/prefrontal areas in macaque monkeys and humans. This action-observation network (AON) has been shown to play important roles in social action monitoring, learning by imitation, and social cognition in both species. It is unclear whether a similar network exists in New-World primates, which separated from Old-Word primates ~35 million years ago. Here we used ultra-high field fMRI at 9.4 T in awake common marmosets (Callithrix jacchus) while they watched videos depicting goal-directed (grasping food) or non-goal-directed actions. The observation of goal-directed actions activates a temporo-parieto-frontal network, including areas 6 and 45 in premotor/prefrontal cortices, areas PGa-IPa, FST and TE in occipito-temporal region and areas V6A, MIP, LIP and PG in the occipito-parietal cortex. These results show overlap with the humans and macaques' AON, demonstrating the existence of an evolutionarily conserved network that likely predates the separation of Old and New-World primates.
Affiliation(s)
- Alessandro Zanini
- Centre for Functional and Metabolic Mapping, Robarts Research Institute, University of Western Ontario, London, ON, Canada.
- Audrey Dureux
- Centre for Functional and Metabolic Mapping, Robarts Research Institute, University of Western Ontario, London, ON, Canada
- Janahan Selvanayagam
- Department of Physiology and Pharmacology, University of Western Ontario, London, ON, Canada
- Stefan Everling
- Centre for Functional and Metabolic Mapping, Robarts Research Institute, University of Western Ontario, London, ON, Canada
- Department of Physiology and Pharmacology, University of Western Ontario, London, ON, Canada
9. d'Avella A, Russo M, Berger DJ, Maselli A. Neuromuscular invariants in action execution and perception: Comment on "Motor invariants in action execution and perception" by Torricelli et al. Phys Life Rev 2023; 45:63-65. PMID: 37121137. DOI: 10.1016/j.plrev.2023.04.003.
Affiliation(s)
- Andrea d'Avella
- Department of Biomedical and Dental Sciences and Morphofunctional Imaging, University of Messina, Italy; Laboratory of Neuromotor Physiology, IRCCS Fondazione Santa Lucia, Rome, Italy.
- Marta Russo
- Laboratory of Neuromotor Physiology, IRCCS Fondazione Santa Lucia, Rome, Italy; Department of Neurology, Tor Vergata Polyclinic, Rome, Italy
- Denise J Berger
- Laboratory of Neuromotor Physiology, IRCCS Fondazione Santa Lucia, Rome, Italy
10. Ciceri T, Malerba G, Gatti A, Diella E, Peruzzo D, Biffi E, Casartelli L. Context expectation influences the gait pattern biomechanics. Sci Rep 2023; 13:5644. PMID: 37024572. PMCID: PMC10079826. DOI: 10.1038/s41598-023-32665-7.
Abstract
Beyond classical aspects related to locomotion (biomechanics), it has been hypothesized that walking pattern is influenced by a combination of distinct computations including online sensory/perceptual sampling and the processing of expectations (neuromechanics). Here, we aimed to explore the potential impact of contrasting scenarios ("risky and potentially dangerous" scenario; "safe and comfortable" scenario) on walking pattern in a group of healthy young adults. Firstly, and consistently with previous literature, we confirmed that the scenario influences gait pattern when it is recalled concurrently to participants' walking activity (motor interference). More intriguingly, our main result showed that participants' gait pattern is also influenced by the contextual scenario when it is evoked only before the start of walking activity (motor expectation). This condition was designed to test the impact of expectations (risky scenario vs. safe scenario) on gait pattern, and the stimulation that preceded walking activity served as prior. Noteworthy, we combined statistical and machine learning (Support-Vector Machine classifier) approaches to stratify distinct levels of analyses that explored the multi-facets architecture of walking. In a nutshell, our combined statistical and machine learning analyses converge in suggesting that walking before steps is not just a paradox.
Affiliation(s)
- Tommaso Ciceri
- Department of Information Engineering, University of Padova, Padua, PD, Italy
- Neuroimaging Lab, Scientific Institute IRCCS E. Medea, Bosisio Parini, LC, Italy
- Giorgia Malerba
- Bioengineering Lab, Scientific Institute IRCCS E. Medea, Bosisio Parini, LC, Italy
- Alice Gatti
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, MI, Italy
- Eleonora Diella
- Bioengineering Lab, Scientific Institute IRCCS E. Medea, Bosisio Parini, LC, Italy
- Denis Peruzzo
- Neuroimaging Lab, Scientific Institute IRCCS E. Medea, Bosisio Parini, LC, Italy
- Emilia Biffi
- Bioengineering Lab, Scientific Institute IRCCS E. Medea, Bosisio Parini, LC, Italy.
- Luca Casartelli
- Theoretical and Cognitive Neuroscience Unit, Scientific Institute IRCCS E. Medea, Bosisio Parini, LC, Italy
11. Setti F, Handjaras G, Bottari D, Leo A, Diano M, Bruno V, Tinti C, Cecchetti L, Garbarini F, Pietrini P, Ricciardi E. A modality-independent proto-organization of human multisensory areas. Nat Hum Behav 2023; 7:397-410. PMID: 36646839. PMCID: PMC10038796. DOI: 10.1038/s41562-022-01507-3.
Abstract
The processing of multisensory information is based upon the capacity of brain regions, such as the superior temporal cortex, to combine information across modalities. However, it is still unclear whether the representation of coherent auditory and visual events requires any prior audiovisual experience to develop and function. Here we measured brain synchronization during the presentation of an audiovisual, audio-only or video-only version of the same narrative in distinct groups of sensory-deprived (congenitally blind and deaf) and typically developed individuals. Intersubject correlation analysis revealed that the superior temporal cortex was synchronized across auditory and visual conditions, even in sensory-deprived individuals who lack any audiovisual experience. This synchronization was primarily mediated by low-level perceptual features, and relied on a similar modality-independent topographical organization of slow temporal dynamics. The human superior temporal cortex is naturally endowed with a functional scaffolding to yield a common representation across multisensory events.
Affiliation(s)
- Francesca Setti
- MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Davide Bottari
- MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Andrea Leo
- Department of Translational Research and Advanced Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Matteo Diano
- Department of Psychology, University of Turin, Turin, Italy
- Valentina Bruno
- Manibus Lab, Department of Psychology, University of Turin, Turin, Italy
- Carla Tinti
- Department of Psychology, University of Turin, Turin, Italy
- Luca Cecchetti
- MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Pietro Pietrini
- MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
12. Emuk Y, Kahraman T, Sengul Y. The acute effects of action observation training on upper extremity functions, cognitive processes and reaction times: a randomized controlled trial. J Comp Eff Res 2022; 11:987-998. PMID: 35770659. DOI: 10.2217/cer-2022-0079.
Abstract
Aim: To investigate the acute effects of action observation training on upper extremity functions, cognitive functions and response time in healthy, young adults. Materials & methods: A total of 60 participants were randomly divided into five groups: the self-action observation group, action observation group, action practice group, non-action observation group and control group. The Jebsen-Taylor hand function test (JTHFT), nine-hole peg test, serial reaction time task and d2 test of attention were applied to the participants before and after the interventions. Results: JTHFT performance with both non-dominant and dominant hands improved significantly compared with baseline in all groups (p < 0.001). JTHFT performance with non-dominant and dominant hands differed between the groups (p < 0.001). Conclusion: Action observation training seems to enhance the performance of upper extremity-related functions. Observing self-actions resulted in statistically significant positive changes in more variables compared with other methods. However, its clinical effectiveness over the other methods should be investigated in future long-term studies. Clinical Trial Registration: NCT04932057 (ClinicalTrials.gov).
Affiliation(s)
- Yusuf Emuk
- Dokuz Eylul University, Graduate School of Health Sciences, Izmir, Turkey; Izmir Katip Celebi University, Faculty of Health Sciences, Department of Physiotherapy and Rehabilitation, Izmir, Turkey
- Turhan Kahraman
- Izmir Katip Celebi University, Faculty of Health Sciences, Department of Physiotherapy and Rehabilitation, Izmir, Turkey
- Yesim Sengul
- Dokuz Eylul University, Faculty of Physical Therapy and Rehabilitation, Izmir, Turkey
13. Karpinskaia VY, Pechenkova EV, Zelenskaya IS, Lyakhovetskii VA. Vision for Perception and Vision for Action in Space Travelers. Front Physiol 2022; 13:806578. PMID: 35360254. PMCID: PMC8963356. DOI: 10.3389/fphys.2022.806578.
Affiliation(s)
- Valeriia Yu. Karpinskaia
- Laboratory of Neurovisualization, N.P. Bechtereva Institute of the Human Brain (Russian Academy of Sciences), St. Petersburg, Russia
- Inna S. Zelenskaya
- Laboratory of Gravitational Physiology of the Sensorimotor System, Institute of Biomedical Problems, Russian Academy of Sciences, Moscow, Russia
- Vsevolod A. Lyakhovetskii
- Laboratory of Movement Physiology, Pavlov Institute of Physiology, Russian Academy of Sciences, St. Petersburg, Russia
14. Sadeghi S, Schmidt SNL, Mier D, Hass J. Effective Connectivity of the Human Mirror Neuron System During Social Cognition. Soc Cogn Affect Neurosci 2022; 17:732-743. PMID: 35086135. PMCID: PMC9340111. DOI: 10.1093/scan/nsab138.
Abstract
The human mirror neuron system (MNS) can be considered the neural basis of social cognition. Identifying the global network structure of this system can provide significant progress in the field. In this study, we use dynamic causal modeling (DCM) to determine the effective connectivity between central regions of the MNS for the first time during different social cognition tasks. Sixty-seven healthy participants completed fMRI scanning while performing social cognition tasks, including imitation, empathy and theory of mind. Superior temporal sulcus (STS), inferior parietal lobule (IPL) and Brodmann area 44 (BA44) formed the regions of interest for DCM. Varying connectivity patterns, 540 models were built and fitted for each participant. By applying group-level analysis, Bayesian model selection and Bayesian model averaging, the optimal family and model for all experimental tasks were found. For all social-cognitive processes, effective connectivity from STS to IPL and from STS to BA44 was found. For imitation, additional mutual connections occurred between STS and BA44, as well as BA44 and IPL. The results suggest inverse models in which the motor regions BA44 and IPL receive sensory information from the STS. In contrast, for imitation, a sensory loop with an exchange of motor-to-sensory and sensory-to-motor information seems to exist.
Affiliation(s)
- Sadjad Sadeghi
- Department of Theoretical Neuroscience, Central Institute of Mental Health, Mannheim 68159, Germany
- Department of Physics and Astronomy, Heidelberg University, Heidelberg 69120, Germany
- Joachim Hass
- Correspondence should be addressed to Joachim Hass, Faculty of Applied Psychology, SRH University of Applied Sciences, Maria-Probst-Strasse 3A, Heidelberg 69123, Germany. E-mail:
15
Kilteni K, Engeler P, Boberg I, Maurex L, Ehrsson HH. No evidence for somatosensory attenuation during action observation of self-touch. Eur J Neurosci 2021; 54:6422-6444. [PMID: 34463971] [DOI: 10.1111/ejn.15436]
Abstract
The discovery of mirror neurons in the macaque brain in the 1990s triggered investigations on putative human mirror neurons and their potential functionality. The leading proposed function has been action understanding: Accordingly, we understand the actions of others by 'simulating' them in our own motor system through a direct matching of the visual information to our own motor programmes. Furthermore, it has been proposed that this simulation involves the prediction of the sensory consequences of the observed action, similar to the prediction of the sensory consequences of our executed actions. Here, we tested this proposal by quantifying somatosensory attenuation behaviourally during action observation. Somatosensory attenuation manifests during voluntary action and refers to the perception of self-generated touches as less intense than identical externally generated touches because the self-generated touches are predicted from the motor command. Therefore, we reasoned that if an observer simulates the observed action and, thus, he/she predicts its somatosensory consequences, then he/she should attenuate tactile stimuli simultaneously delivered to his/her corresponding body part. In three separate experiments, we found a systematic attenuation of touches during executed self-touch actions, but we found no evidence for attenuation when such actions were observed. Failure to observe somatosensory attenuation during observation of self-touch is not compatible with the hypothesis that the putative human mirror neuron system automatically predicts the sensory consequences of the observed action. In contrast, our findings emphasize a sharp distinction between the motor representations of self and others.
Affiliation(s)
- Patrick Engeler
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Ida Boberg
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Linnea Maurex
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
- H Henrik Ehrsson
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
16
Ziaeetabar F, Pomp J, Pfeiffer S, El-Sourani N, Schubotz RI, Tamosiunaite M, Wörgötter F. Using enriched semantic event chains to model human action prediction based on (minimal) spatial information. PLoS One 2020; 15:e0243829. [PMID: 33370343] [PMCID: PMC7769489] [DOI: 10.1371/journal.pone.0243829]
Abstract
Predicting other people's upcoming action is key to successful social interactions. Previous studies have started to disentangle the various sources of information that action observers exploit, including objects, movements, contextual cues and features of the acting person's identity. Here we focus on the role of static and dynamic inter-object spatial relations that change during an action. We designed a virtual reality setup and tested recognition speed for ten different manipulation actions. Importantly, all objects were abstracted by emulating them with cubes, so that participants could not infer an action from object information. Instead, participants had to rely only on the limited information available in the changing spatial relations between the cubes. Despite these constraints, participants were able to predict actions within, on average, less than 64% of the action's duration. Furthermore, we employed a computational model, the so-called enriched Semantic Event Chain (eSEC), which incorporates three types of spatial relations: (a) objects' touching/untouching, (b) static spatial relations between objects and (c) dynamic spatial relations between objects during an action. Assuming the eSEC as an underlying model, we show, using information-theoretic analysis, that humans mostly rely on a mixed-cue strategy when predicting actions. Machine-based action prediction is able to produce faster decisions based on individual cues. We argue that the human strategy, though slower, may be particularly beneficial for predicting natural and more complex actions with more variable or partial sources of information. Our findings contribute to the understanding of how individuals can infer an observed action's goal even before full goal accomplishment, and may open new avenues for building robots for conflict-free human-robot cooperation.
Affiliation(s)
- Fatemeh Ziaeetabar
- Institute for Physics 3 - Biophysics and Bernstein Center for Computational Neuroscience (BCCN), University of Göttingen, Göttingen, Germany
- * E-mail:
- Jennifer Pomp
- Department of Psychology, University of Münster, Münster, Germany
- Stefan Pfeiffer
- Institute for Physics 3 - Biophysics and Bernstein Center for Computational Neuroscience (BCCN), University of Göttingen, Göttingen, Germany
- Minija Tamosiunaite
- Institute for Physics 3 - Biophysics and Bernstein Center for Computational Neuroscience (BCCN), University of Göttingen, Göttingen, Germany
- Department of Informatics, Vytautas Magnus University, Kaunas, Lithuania
- Florentin Wörgötter
- Institute for Physics 3 - Biophysics and Bernstein Center for Computational Neuroscience (BCCN), University of Göttingen, Göttingen, Germany
17
Gangopadhyay P, Chawla M, Dal Monte O, Chang SWC. Prefrontal-amygdala circuits in social decision-making. Nat Neurosci 2020; 24:5-18. [PMID: 33169032] [DOI: 10.1038/s41593-020-00738-9]
Abstract
An increasing amount of research effort is being directed toward investigating the neural bases of social cognition from a systems neuroscience perspective. Evidence from multiple animal species is beginning to provide a mechanistic understanding of the substrates of social behaviors at multiple levels of neurobiology, ranging from those underlying high-level social constructs in humans and their more rudimentary underpinnings in monkeys to circuit-level and cell-type-specific instantiations of social behaviors in rodents. Here we review literature examining the neural mechanisms of social decision-making in humans, non-human primates and rodents, focusing on the amygdala and the medial and orbital prefrontal cortical regions and their functional interactions. We also discuss how the neuropeptide oxytocin impacts these circuits and their downstream effects on social behaviors. Overall, we conclude that regulated interactions of neuronal activity in the prefrontal-amygdala pathways critically contribute to social decision-making in the brains of primates and rodents.
Affiliation(s)
- Megha Chawla
- Department of Psychology, Yale University, New Haven, CT, USA
- Olga Dal Monte
- Department of Psychology, Yale University, New Haven, CT, USA; Department of Psychology, University of Turin, Torino, Italy
- Steve W C Chang
- Department of Psychology, Yale University, New Haven, CT, USA; Department of Neuroscience, Yale University School of Medicine, New Haven, CT, USA; Kavli Institute for Neuroscience, Yale University School of Medicine, New Haven, CT, USA
18
Carey LM, Mak-Yuen YYK, Matyas TA. The Functional Tactile Object Recognition Test: A Unidimensional Measure With Excellent Internal Consistency for Haptic Sensing of Real Objects After Stroke. Front Neurosci 2020; 14:542590. [PMID: 33071730] [PMCID: PMC7538651] [DOI: 10.3389/fnins.2020.542590]
Abstract
Introduction: Our hands, with their exquisite sensors, work in concert with our sensing brain to extract sensory attributes of objects as we engage in daily activities. One in two people with stroke experience impaired body sensation, with negative impact on hand use and return to previous valued activities. Valid, quantitative tools are critical to measure somatosensory impairment after stroke. The functional Tactile Object Recognition Test (fTORT) is a quantitative measure of tactile (haptic) object recognition designed to test one's ability to recognize everyday objects across seven sensory attributes using 14 object sets. However, to date, knowledge of the nature of object recognition errors is limited, and the internal consistency of performance across item scores and the dimensionality of the measure have not been established.
Objectives: To describe the original development and construction of the test, characterize the distribution and nature of performance errors after stroke, and evaluate the internal consistency of item scores and the dimensionality of the fTORT.
Method: Data from existing cohorts of stroke survivors (n = 115) who were assessed on the fTORT quantitative measure of sensory performance were extracted and pooled. Item and scale analyses were conducted on the raw item data. The distribution and type of errors were characterized.
Results: The 14 item sets of the fTORT form a well-behaved unidimensional scale and demonstrate excellent internal consistency (Cronbach's alpha of 0.93). Deletion of any item failed to improve the Cronbach score. Most items displayed a bimodal score distribution, with function and attribute errors (score 0) or a correct response (score 3) being most common. A smaller proportion of one- or two-attribute errors occurred. The total score range differentiated performance over a wide range of object recognition impairment.
Conclusion: The unidimensional scale and similar factor loadings across all items support simple addition of the 14 item scores on the fTORT. Therapists can use the fTORT to quantify impaired tactile object recognition in people with stroke based on the current set of items. New insights into the nature of haptic object recognition impairment after stroke are revealed.
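As an illustrative aside (not code from the cited paper), the internal-consistency statistic reported in this abstract, Cronbach's alpha, is computed directly from a respondents-by-items score matrix. A minimal sketch with made-up data:

```python
import numpy as np

def cronbach_alpha(scores) -> float:
    """Cronbach's alpha for a respondents x items score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    x = np.asarray(scores, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)       # per-item sample variance
    total_var = x.sum(axis=1).var(ddof=1)   # sample variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical scores: two perfectly consistent items give alpha = 1.0.
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # -> 1.0
```

Less consistent item sets yield alpha below 1; the 0.93 reported above indicates items that covary strongly across participants.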
Affiliation(s)
- Leeanne M Carey
- Department of Occupational Therapy, Social Work and Social Policy, School of Allied Health, Human Services and Sport, College of Science, Health and Engineering, La Trobe University, Melbourne, VIC, Australia; Neurorehabilitation and Recovery, The Florey Institute of Neuroscience and Mental Health, Heidelberg, VIC, Australia
- Yvonne Y K Mak-Yuen
- Department of Occupational Therapy, Social Work and Social Policy, School of Allied Health, Human Services and Sport, College of Science, Health and Engineering, La Trobe University, Melbourne, VIC, Australia; Neurorehabilitation and Recovery, The Florey Institute of Neuroscience and Mental Health, Heidelberg, VIC, Australia
- Thomas A Matyas
- Department of Occupational Therapy, Social Work and Social Policy, School of Allied Health, Human Services and Sport, College of Science, Health and Engineering, La Trobe University, Melbourne, VIC, Australia
19
Transient Disruption of the Inferior Parietal Lobule Impairs the Ability to Attribute Intention to Action. Curr Biol 2020; 30:4594-4605.e7. [PMID: 32976808] [DOI: 10.1016/j.cub.2020.08.104]
Abstract
Although it is well established that fronto-parietal regions are active during action observation, whether they play a causal role in the ability to infer others' intentions from visual kinematics remains undetermined. In the experiments reported here, we combined offline continuous theta burst stimulation (cTBS) with computational modeling to reveal and causally probe single-trial computations in the inferior parietal lobule (IPL) and inferior frontal gyrus (IFG). Participants received cTBS over the left anterior IPL and the left IFG pars orbitalis in separate sessions before completing an intention discrimination task (discriminate intention of observed reach-to-grasp acts) or a kinematic discrimination task unrelated to intention (discriminate peak wrist height of the same acts). We targeted intention-sensitive regions whose fMRI activity, recorded when observing the same reach-to-grasp acts, could accurately discriminate intention. We found that transient disruption of activity of the left IPL, but not the IFG, impaired the observer's ability to attribute intention to action. Kinematic discrimination unrelated to intention, in contrast, was largely unaffected. Computational analyses of how encoding (mapping of intention to movement kinematics) and readout (mapping of kinematics to intention choices) intersect at the single-trial level revealed that IPL cTBS did not diminish the overall sensitivity of intention readout to movement kinematics. Rather, it selectively misaligned intention readout with respect to encoding, deteriorating mapping from informative kinematic features to intention choices. These results provide causal evidence of how the left anterior IPL computes mapping from kinematics to intentions.
20
Motor resonance in monkey parietal and premotor cortex during action observation: Influence of viewing perspective and effector identity. Neuroimage 2020; 224:117398. [PMID: 32971263] [DOI: 10.1016/j.neuroimage.2020.117398]
Abstract
Observing others performing motor acts such as grasping has been shown to elicit neural responses in the observer's parieto-frontal motor network, which typically becomes active when the observer performs these actions him/herself. While some human studies suggested the strongest motor resonance during observation of first-person (egocentric) perspectives compared to third-person (allocentric) perspectives, other studies either report the opposite or find no viewpoint-related preferences in parieto-premotor cortices. Furthermore, it has been suggested that these motor resonance effects are lateralized in the parietal cortex depending on the viewpoint and identity of the observed effector (left vs. right hand). Other studies, however, do not find such straightforward hand-identity-dependent motor resonance effects. In addition to these conflicting findings in human studies, to date, little is known about the modulatory role of viewing perspective and effector identity (left or right hand) on motor resonance effects in monkey parieto-premotor cortices. Here, we used fMRI to investigate the extent to which different viewpoints of observed conspecific hand actions yield motor resonance in rhesus monkeys. Observing first-person, lateral and third-person viewpoints of conspecific hand actions yielded significant activations throughout the so-called action observation network, including STS, parietal and frontal cortices. Although region-of-interest analysis of the parietal and premotor motor/mirror neuron regions AIP, PFG and F5 showed robust responses during action observation in general, a clear preference for egocentric or allocentric perspectives was not evident. Moreover, except for lateralized effects due to visual field biases, motor resonance in the monkey brain during grasping observation did not reflect hand-identity-dependent coding.
21
Poyo Solanas M, Vaessen M, de Gelder B. Computation-Based Feature Representation of Body Expressions in the Human Brain. Cereb Cortex 2020; 30:6376-6390. [DOI: 10.1093/cercor/bhaa196]
Abstract
Humans and other primate species are experts at recognizing body expressions. To understand the underlying perceptual mechanisms, we computed postural and kinematic features from affective whole-body movement videos and related them to brain processes. Using representational similarity and multivoxel pattern analyses, we showed systematic relations between computation-based body features and brain activity. Our results revealed that postural rather than kinematic features reflect the affective category of the body movements. The feature limb contraction showed a central contribution in fearful body expression perception, differentially represented in action observation, motor preparation, and affect coding regions, including the amygdala. The posterior superior temporal sulcus differentiated fearful from other affective categories using limb contraction rather than kinematics. The extrastriate body area and fusiform body area also showed greater tuning to postural features. The discovery of midlevel body feature encoding in the brain moves affective neuroscience beyond research on high-level emotion representations and provides insights into the perceptual features that may drive automatic emotion perception.
Affiliation(s)
- Marta Poyo Solanas
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Limburg 6200 MD, The Netherlands
- Maarten Vaessen
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Limburg 6200 MD, The Netherlands
- Beatrice de Gelder
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Limburg 6200 MD, The Netherlands
- Department of Computer Science, University College London, London WC1E 6BT, UK
22
Motor cortical inhibition during concurrent action execution and action observation. Neuroimage 2020; 208:116445. [DOI: 10.1016/j.neuroimage.2019.116445]
23
Chackochan VT, Sanguineti V. Incomplete information about the partner affects the development of collaborative strategies in joint action. PLoS Comput Biol 2019; 15:e1006385. [PMID: 31830100] [PMCID: PMC6907753] [DOI: 10.1371/journal.pcbi.1006385]
Abstract
Physical interaction with a partner plays an essential role in our life experience and is the basis of many daily activities. When two physically coupled humans have different and partly conflicting goals, they face the challenge of negotiating some type of collaboration. This requires that both participants understand their partner’s state and current actions. But, how would the collaboration be affected if information about their partner were unreliable or incomplete? We designed an experiment in which two players (a dyad) are mechanically connected through a virtual spring, but cannot see each other. They were instructed to perform reaching movements with the same start and end position, but through different via-points. In different groups of dyads we varied the amount of information provided to each player about his/her partner: haptic only (the interaction force perceived through the virtual spring), visuo-haptic (the interaction force is also displayed on the screen), and partner visible (in addition to interaction force, partner position is continuously displayed on the screen). We found that incomplete information about the partner affects not only the speed at which collaboration is achieved (less information, slower learning), but also the actual collaboration strategy. In particular, incomplete or unreliable information leads to an interaction strategy characterized by alternating leader-follower roles. Conversely, more reliable information leads to more synchronous behaviors, in which no specific roles can be identified. Simulations based on a combination of game theory and Bayesian estimation suggested that synchronous behaviors correspond to optimal interaction (Nash equilibrium). Roles emerge as sub-optimal forms of interaction, which minimize the need to account for the partner. 
These findings suggest that collaborative strategies in joint action are shaped by the trade-off between the task requirements and the uncertainty of the information available about the partner.
Many activities in daily life involve physical interaction with a partner or opponent. In many situations, the two have conflicting goals and need to negotiate some form of collaboration. Although very common, these situations have rarely been studied empirically. In this study, we specifically address what an 'optimal' collaboration is and how it can be achieved. We also address how developing a collaboration is affected by uncertainty about partner actions. Through a combination of empirical studies and computer simulations based on game theory, we show that subject pairs (dyads) are capable of developing stable collaborations, but that the learned collaboration strategy depends on the reliability of the information about the partner. High-information dyads converge to optimal strategies in a game-theoretic sense. Low-information dyads converge to strategies that minimize the need to know about the partner. These findings are consistent with a game-theoretic learning model which relies on estimates of partner actions, but not partner goals. This similarity sheds some light on the minimal computational machinery that an intelligent agent needs in order to develop stable physical collaborations with a human partner.
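As an illustrative aside (not the authors' model), the Nash equilibrium invoked in this abstract can be checked by brute force in a small two-player game: a joint action is an equilibrium when neither player can improve their payoff by deviating unilaterally. A minimal sketch using a hypothetical coordination game:

```python
import itertools

def pure_nash_equilibria(payoff_a, payoff_b):
    """All pure-strategy Nash equilibria (row, col) of a bimatrix game.
    payoff_a[i][j] / payoff_b[i][j]: payoffs when A plays i and B plays j."""
    rows, cols = len(payoff_a), len(payoff_a[0])
    eqs = []
    for i, j in itertools.product(range(rows), range(cols)):
        a_best = all(payoff_a[i][j] >= payoff_a[k][j] for k in range(rows))
        b_best = all(payoff_b[i][j] >= payoff_b[i][l] for l in range(cols))
        if a_best and b_best:   # no unilateral deviation improves either payoff
            eqs.append((i, j))
    return eqs

# Hypothetical coordination game: both matching joint choices are equilibria.
a = [[2, 0], [0, 1]]
b = [[2, 0], [0, 1]]
print(pure_nash_equilibria(a, b))  # -> [(0, 0), (1, 1)]
```

The study's continuous reaching task is far richer than a bimatrix game, but the equilibrium condition being tested is the same "no profitable unilateral deviation" criterion.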
Affiliation(s)
- Vinil T. Chackochan
- Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, Genova, Italy
- Vittorio Sanguineti
- Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, Genova, Italy
- * E-mail:
24
Eatherington CJ, Marinelli L, Lõoke M, Battaglini L, Mongillo P. Local Dot Motion, Not Global Configuration, Determines Dogs' Preference for Point-Light Displays. Animals (Basel) 2019; 9:E661. [PMID: 31489919] [PMCID: PMC6770411] [DOI: 10.3390/ani9090661]
Abstract
Visual perception remains an understudied area of dog cognition, particularly the perception of biological motion, where the small amount of previous research has left an unclear picture of dogs' visual preference for different types of point-light displays. To date, no thorough investigation has been conducted into which aspects of the motion contained in point-light displays attract dogs. To test this, pet dogs (N = 48) were presented with pairs of point-light displays with systematic manipulation of motion features (i.e., upright or inverted orientation, coherent or scrambled configuration, human or dog species). Results revealed a significant effect of inversion, with dogs directing significantly longer looking times towards upright than inverted dog point-light displays; no effect was found for scrambling or the scrambling-inversion interaction. No looking-time bias was found when dogs were presented with human point-light displays, regardless of their orientation or configuration. The results of the current study imply that dogs' visual preference is driven by the motion of the individual dots in accordance with gravity, rather than by the point-light display's global arrangement, regardless of their long exposure to human motion.
Affiliation(s)
- Carla J Eatherington
- Laboratory of Applied Ethology, Department of Comparative Biomedicine and Food Science, University of Padua, Viale dell'Università 16, 35020 Legnaro, Italy.
- Lieta Marinelli
- Laboratory of Applied Ethology, Department of Comparative Biomedicine and Food Science, University of Padua, Viale dell'Università 16, 35020 Legnaro, Italy.
- Miina Lõoke
- Laboratory of Applied Ethology, Department of Comparative Biomedicine and Food Science, University of Padua, Viale dell'Università 16, 35020 Legnaro, Italy.
- Luca Battaglini
- Department of General Psychology, University of Padua, Via Venezia 8, 35131 Padova, Italy.
- Paolo Mongillo
- Laboratory of Applied Ethology, Department of Comparative Biomedicine and Food Science, University of Padua, Viale dell'Università 16, 35020 Legnaro, Italy.
25
Xu B, Kankanhalli MS, Zhao Q. Ultra-rapid object categorization in real-world scenes with top-down manipulations. PLoS One 2019; 14:e0214444. [PMID: 30969988] [PMCID: PMC6457495] [DOI: 10.1371/journal.pone.0214444]
Abstract
Humans are able to achieve visual object recognition rapidly and effortlessly. Object categorization is commonly believed to be achieved by interaction between bottom-up and top-down cognitive processing. In the ultra-rapid categorization scenario, where the stimuli appear briefly and response time is limited, it is assumed that a first sweep of feedforward information is sufficient to discriminate whether or not an object is present in a scene. However, whether and how feedback/top-down processing is involved within such a brief duration remains an open question. To this end, we examine here how different top-down manipulations, such as category level, category type and real-world size, interact in ultra-rapid categorization. We constructed a dataset of real-world scene images with a built-in measurement of target object display size. Based on this set of images, we measured ultra-rapid object categorization performance in human subjects. Standard feedforward computational models representing scene features and a state-of-the-art object detection model were employed for auxiliary investigation. The results showed influences of 1) animacy (animal, vehicle, food), 2) level of abstraction (people, sport), and 3) real-world size (four target size levels) on ultra-rapid categorization processes. This supports the involvement of top-down processing when rapidly categorizing certain objects, such as sport at a fine-grained level. Our comparison of human versus model performance also sheds light on possible collaboration and integration of the two, which may be of interest to both experimental and computational vision research. All collected images and behavioral data, as well as code and models, are publicly available at https://osf.io/mqwjz/.
Affiliation(s)
- Bingjie Xu
- NUS Graduate School for Integrative Sciences and Engineering, National University of Singapore, Singapore, Singapore
- Qi Zhao
- Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN, United States of America
26
Casartelli L. Stability and flexibility in multisensory sampling: insights from perceptual illusions. J Neurophysiol 2019; 121:1588-1590. [PMID: 30840541] [DOI: 10.1152/jn.00060.2019]
Abstract
Neural, oscillatory, and computational counterparts of multisensory processing remain a crucial challenge for neuroscientists. Converging evidence underlines a certain efficiency in balancing stability and flexibility of sensory sampling, supporting the general idea that multiple parallel and hierarchically organized processing stages in the brain contribute to our understanding of the (sensory/perceptual) world. Intriguingly, how temporal dynamics impact and modulate multisensory processes in our brain can be investigated benefiting from studies on perceptual illusions.
Affiliation(s)
- Luca Casartelli
- Scientific Institute IRCCS E. Medea, Child Psychopathology Unit, Bosisio Parini, Italy
27
Agent-based representations of objects and actions in the monkey pre-supplementary motor area. Proc Natl Acad Sci U S A 2019; 116:2691-2700. [PMID: 30696759] [PMCID: PMC6377463] [DOI: 10.1073/pnas.1810890116]
Abstract
Information about objects around us is essential for planning actions and for predicting those of others. Here, we studied pre-supplementary motor area F6 neurons with a task in which monkeys viewed and grasped (or refrained from grasping) objects, and then observed a human doing the same task. We found "action-related neurons" encoding selectively monkey's own action [self-type (ST)], another agent's action [other-type (OT)], or both [self- and other-type (SOT)]. Interestingly, we found "object-related neurons" exhibiting the same type of selectivity before action onset: Indeed, distinct sets of neurons discharged when visually presented objects were targeted by the monkey's own action (ST), another agent's action (OT), or both (SOT). Notably, object-related neurons appear to signal self and other's intention to grasp and the most likely grip type that will be performed, whereas action-related neurons encode a general goal attainment signal devoid of any specificity for the observed grip type. Time-resolved cross-modal population decoding revealed that F6 neurons first integrate information about object and context to generate an agent-shared signal specifying whether and how the object will be grasped, which progressively turns into a broader agent-based goal attainment signal during action unfolding. Importantly, shared representation of objects critically depends upon their location in the observer's peripersonal space, suggesting an "object-mirroring" mechanism through which observers could accurately predict others' impending action by recruiting the same motor representation they would activate if they were to act upon the same object in the same context.
28
Observing Action Sequences Elicits Sequence-Specific Neural Representations in Frontoparietal Brain Regions. J Neurosci 2018; 38:10114-10128. [PMID: 30282731] [PMCID: PMC6596197] [DOI: 10.1523/jneurosci.1597-18.2018]
Abstract
Learning new skills by watching others is important for social and motor development throughout the lifespan. Prior research has suggested that observational learning shares common substrates with physical practice at both cognitive and brain levels. In addition, neuroimaging studies have used multivariate analysis techniques to understand neural representations in a variety of domains, including vision, audition, memory, and action, but few studies have investigated neural plasticity in representational space. Therefore, although movement sequences can be learned by observing other people's actions, a largely unanswered question in neuroscience is how experience shapes the representational space of neural systems. Here, across a sample of male and female participants, we combined pretraining and posttraining fMRI sessions with 6 d of observational practice to determine whether the observation of action sequences elicits sequence-specific representations in human frontoparietal brain regions and the extent to which these representations become more distinct with observational practice. Our results showed that observed action sequences are modeled by distinct patterns of activity in frontoparietal cortex and that such representations largely generalize to very similar, but untrained, sequences. These findings advance our understanding of what is modeled during observational learning (sequence-specific information), as well as how it is modeled (reorganization of frontoparietal cortex is similar to that previously shown following physical practice). Therefore, on a more fine-grained neural level than demonstrated previously, our findings reveal how the representational structure of frontoparietal cortex maps visual information onto motor circuits in order to enhance motor performance.
SIGNIFICANCE STATEMENT
Learning by watching others is a cornerstone in the development of expertise and skilled behavior. However, it remains unclear how visual signals are mapped onto motor circuits for such learning to occur. Here, we show that observed action sequences are modeled by distinct patterns of activity in frontoparietal cortex and that such representations largely generalize to very similar, but untrained, sequences. These findings advance our understanding of what is modeled during observational learning (sequence-specific information), as well as how it is modeled (reorganization of frontoparietal cortex is similar to that previously shown following physical practice). More generally, these findings demonstrate how motor circuit involvement in the perception of action sequences shows high fidelity to prior work, which focused on physical performance of action sequences.
29
Vaessen MJ, Abassi E, Mancini M, Camurri A, de Gelder B. Computational Feature Analysis of Body Movements Reveals Hierarchical Brain Organization. Cereb Cortex 2018; 29:3551-3560. [DOI: 10.1093/cercor/bhy228]
Abstract
Social species spend considerable time observing the body movements of others to understand their actions, predict their emotions, watch their games, or enjoy their dance movements. Despite the wealth of information conveyed by body movements, we still know surprisingly little about the details of the brain mechanisms underlying movement perception. In this fMRI study, we investigated the relations between movement features obtained from automated computational analyses of video clips and the corresponding brain activity. Our results show that low-level computational features map to specific brain areas related to early visual- and motion-sensitive regions, while mid-level computational features are related to dynamic aspects of posture encoded in occipital–temporal cortex, posterior superior temporal sulcus and superior parietal lobe. Furthermore, behavioral features obtained from subjective ratings correlated with activity in higher action observation regions. Our computational feature-based analysis suggests that movement encoding is organized in the brain not so much by semantic categories as by feature statistics of the body movements.
Affiliation(s)
- Maarten J Vaessen
- Department of Cognitive Neuroscience, Brain and Emotion Laboratory, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Etienne Abassi
- Department of Cognitive Neuroscience, Brain and Emotion Laboratory, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Maurizio Mancini
- Department of Informatics, Bioengineering, Robotics, and Systems Engineering (DIBRIS), Casa Paganini-InfoMus Research Centre, University of Genoa, Genova, Italy
- Antonio Camurri
- Department of Informatics, Bioengineering, Robotics, and Systems Engineering (DIBRIS), Casa Paganini-InfoMus Research Centre, University of Genoa, Genova, Italy
- Beatrice de Gelder
- Department of Cognitive Neuroscience, Brain and Emotion Laboratory, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Department of Computer Science, University College London, London, United Kingdom
30
Koul A, Cavallo A, Cauda F, Costa T, Diano M, Pontil M, Becchio C. Action Observation Areas Represent Intentions From Subtle Kinematic Features. Cereb Cortex 2018; 28:2647-2654. [PMID: 29722797] [PMCID: PMC5998953] [DOI: 10.1093/cercor/bhy098]
Abstract
Mirror neurons have been proposed to underlie humans' ability to understand others' actions and intentions. Despite two decades of research, however, the exact computational and neuronal mechanisms underlying this ability remain unclear. In the current study, we investigated whether, in the absence of contextual cues, regions considered to be part of the human mirror neuron system represent intention from movement kinematics. A total of 21 participants observed reach-to-grasp movements, performed with either the intention to drink or to pour, while undergoing functional magnetic resonance imaging. Multivoxel pattern analysis revealed successful decoding of intentions from distributed patterns of activity in a network of structures comprising the inferior parietal lobule, the superior parietal lobule, the inferior frontal gyrus, and the middle frontal gyrus. Consistent with the proposal that parietal regions play a key role in intention understanding, classifier weights were higher in the inferior parietal region. These results provide the first demonstration that putative mirror neuron regions represent subtle differences in movement kinematics to read the intention of an observed motor act.
Affiliation(s)
- Atesh Koul
- Department of Psychology, University of Torino, Torino, Italy
- C’MON, Cognition, Motion and Neuroscience Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
- Andrea Cavallo
- Department of Psychology, University of Torino, Torino, Italy
- C’MON, Cognition, Motion and Neuroscience Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
- Franco Cauda
- Department of Psychology, University of Torino, Torino, Italy
- GCS-fMRI, Koelliker Hospital and Department of Psychology, University of Torino, Torino, Italy
- Focus Lab, Department of Psychology, University of Torino, Torino, Italy
- Tommaso Costa
- Department of Psychology, University of Torino, Torino, Italy
- GCS-fMRI, Koelliker Hospital and Department of Psychology, University of Torino, Torino, Italy
- Focus Lab, Department of Psychology, University of Torino, Torino, Italy
- Matteo Diano
- Department of Psychology, University of Torino, Torino, Italy
- Massimiliano Pontil
- Computational Statistics and Machine Learning, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
- Department of Computer Science, University College London, London, UK
- Cristina Becchio
- Department of Psychology, University of Torino, Torino, Italy
- C’MON, Cognition, Motion and Neuroscience Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
31
Fiave PA, Sharma S, Jastorff J, Nelissen K. Investigating common coding of observed and executed actions in the monkey brain using cross-modal multi-variate fMRI classification. Neuroimage 2018; 178:306-317. [PMID: 29787867] [DOI: 10.1016/j.neuroimage.2018.05.043]
Abstract
Mirror neurons are generally described as a neural substrate hosting shared representations of actions, by simulating or 'mirroring' the actions of others onto the observer's own motor system. Since single neuron recordings are rarely feasible in humans, it has been argued that cross-modal multi-variate pattern analysis (MVPA) of non-invasive fMRI data is a suitable technique to investigate common coding of observed and executed actions, allowing researchers to infer the presence of mirror neurons in the human brain. In an effort to close the gap between monkey electrophysiology and human fMRI data with respect to the mirror neuron system, here we tested this proposal for the first time in the monkey. Rhesus monkeys performed either reach-and-grasp or reach-and-touch motor acts with their right hand in the dark, or observed videos of human actors performing similar motor acts. Unimodal decoding showed that both executed and observed motor acts could be decoded from numerous brain regions. Specific portions of rostral parietal, premotor and motor cortices, previously shown to house mirror neurons, in addition to somatosensory regions, yielded significant asymmetric action-specific cross-modal decoding. These results validate the use of cross-modal multi-variate fMRI analyses to probe the representations of own and others' actions in the primate brain and support the proposed mapping of others' actions onto the observer's own motor cortices.
Affiliation(s)
- Prosper Agbesi Fiave
- Laboratory for Neuro- & Psychophysiology, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Saloni Sharma
- Laboratory for Neuro- & Psychophysiology, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Jan Jastorff
- Research Group Psychiatry, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Koen Nelissen
- Laboratory for Neuro- & Psychophysiology, Department of Neurosciences, KU Leuven, Leuven, Belgium
32
Antunes G, Faria da Silva SF, Simoes de Souza FM. Mirror Neurons Modeled Through Spike-Timing-Dependent Plasticity are Affected by Channelopathies Associated with Autism Spectrum Disorder. Int J Neural Syst 2018; 28:1750058. [DOI: 10.1142/s0129065717500587]
Abstract
Mirror neurons fire action potentials both when the agent performs a certain behavior and when it watches someone performing a similar action. Here, we present an original mirror neuron model based on spike-timing-dependent plasticity (STDP) between two morpho-electrical models of neocortical pyramidal neurons. Both neurons fired spontaneously at a basal rate following a Poisson distribution, and the STDP between them was modeled with the triplet rule. Our simulation results demonstrated that STDP is sufficient for the rise of mirror neuron function between pairs of neocortical neurons. This is a proof of concept that pairs of neocortical neurons associating sensory inputs with motor outputs could operate like mirror neurons. In addition, we used the mirror neuron model to investigate whether channelopathies associated with autism spectrum disorder could impair the modeled mirror function. Our simulation results showed that impaired hyperpolarization-activated cationic currents (Ih) affected the mirror function between pairs of neocortical neurons coupled by STDP.
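The mechanism this abstract describes can be illustrated with a much simpler sketch than the paper's model: a pair-based STDP rule (not the triplet rule or the morpho-electrical neurons used in the study), where a "sensory" spike repeatedly preceding a "motor" spike strengthens the synapse between them. All parameter values here are illustrative assumptions, not taken from the paper.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

# A sensory neuron ("see grasp") fires 5 ms before the motor neuron
# ("do grasp") on every self-performed action; repeated pairings drive
# the sensory-to-motor weight up to its ceiling, so sensory input alone
# can later activate the motor neuron -- a minimal mirror-like association.
w = 0.1
for trial in range(200):
    t_pre = trial * 50.0          # visual spike time
    t_post = t_pre + 5.0          # motor spike 5 ms later
    w += stdp_dw(t_post - t_pre)
    w = min(w, 1.0)               # hard upper bound on the weight

print(round(w, 3))  # → 1.0 (weight saturates after enough pairings)
```

With pre-before-post timing the weight only grows, which is the "proof of concept" direction the abstract tests; reversing the spike order would depress the synapse instead.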
Affiliation(s)
- Gabriela Antunes
- Department of Physics, Faculdade de Filosofia, Ciencias e Letras de Ribeirao Preto, Universidade de Sao Paulo, Ribeirao Preto, SP, Brazil
- Fabio M. Simoes de Souza
- Center for Mathematics, Computation and Cognition, Federal University of ABC, Sao Bernardo do Campo, SP, Brazil
33
Moseley RL, Pulvermüller F. What can autism teach us about the role of sensorimotor systems in higher cognition? New clues from studies on language, action semantics, and abstract emotional concept processing. Cortex 2018; 100:149-190. [DOI: 10.1016/j.cortex.2017.11.019]
34
Casartelli L, Federici A, Biffi E, Molteni M, Ronconi L. Are We "Motorically" Wired to Others? High-Level Motor Computations and Their Role in Autism. Neuroscientist 2017; 24:568-581. [PMID: 29271293] [DOI: 10.1177/1073858417750466]
Abstract
High-level motor computations reflect abstract components far removed from mere motor performance. Neural correlates of these computations have been explored in both nonhuman and human primates, supporting the idea that our brain recruits complex nodes for motor representations. Of note, these computations have exciting implications for social cognition, and they also entail important challenges in the context of autism. Here, we focus on these challenges, drawing on recent studies addressing motor interference, motor resonance, and high-level motor planning. In addition, we suggest new ideas about how one maps and shares (motor) space with others. Taken together, these issues raise intriguing questions about the social tendency of our high-level motor computations, and this tendency may indicate that we are "motorically" wired to others. Thus, after offering preliminary insights on the putative neural nodes involved in these computations, we focus on how the hypothesized social nature of high-level motor computations may be anomalous or limited in autism, and why this represents a critical challenge for the future.
Affiliation(s)
- Luca Casartelli
- Child Psychopathology Unit, Scientific Institute IRCCS E. Medea, Bosisio Parini, Lecco, Italy
- Alessandra Federici
- Child Psychopathology Unit, Scientific Institute IRCCS E. Medea, Bosisio Parini, Lecco, Italy
- Emilia Biffi
- Bioengineering Laboratory, Scientific Institute IRCCS E. Medea, Bosisio Parini, Lecco, Italy
- Massimo Molteni
- Child Psychopathology Unit, Scientific Institute IRCCS E. Medea, Bosisio Parini, Lecco, Italy
- Luca Ronconi
- Child Psychopathology Unit, Scientific Institute IRCCS E. Medea, Bosisio Parini, Lecco, Italy
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, Trento, Italy
35
Parisi GI, Tani J, Weber C, Wermter S. Lifelong learning of human actions with deep neural network self-organization. Neural Netw 2017; 96:137-149. [DOI: 10.1016/j.neunet.2017.09.001]
36
Nelissen K, Vanduffel W. Action Categorization in Rhesus Monkeys: Discrimination of grasping from non-grasping manual motor acts. Sci Rep 2017; 7:15094. [PMID: 29118339] [PMCID: PMC5678109] [DOI: 10.1038/s41598-017-15378-6]
Abstract
The ability to recognize others’ actions is an important aspect of social behavior. While neurophysiological and behavioral research in monkeys has offered a better understanding of how the primate brain processes this type of information, further insight with respect to the neural correlates of action recognition requires tasks that allow recording of brain activity or perturbing brain regions while monkeys simultaneously make behavioral judgements about certain aspects of observed actions. Here we investigated whether rhesus monkeys could actively discriminate videos showing grasping or non-grasping manual motor acts in a two-alternative categorization task. After monkeys became proficient in this task, we tested their ability to generalize to a number of untrained, novel videos depicting grasps or other manual motor acts. Monkeys generalized to a wide range of novel human or conspecific grasping and non-grasping motor acts. They failed, however, for videos showing unfamiliar actions such as a non-biological effector performing a grasp, or a human hand touching an object with the back of the hand. This study shows the feasibility of training monkeys to perform active judgements about certain aspects of observed actions, instrumental for causal investigations into the neural correlates of action recognition.
Affiliation(s)
- Koen Nelissen
- Laboratory for Neuro- & Psychophysiology, Department of Neurosciences, KU Leuven, Leuven, 3000, Belgium
- Wim Vanduffel
- Laboratory for Neuro- & Psychophysiology, Department of Neurosciences, KU Leuven, Leuven, 3000, Belgium
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, Massachusetts, 02129, USA
37
Spatial and viewpoint selectivity for others' observed actions in monkey ventral premotor mirror neurons. Sci Rep 2017; 7:8231. [PMID: 28811605] [PMCID: PMC5557915] [DOI: 10.1038/s41598-017-08956-1]
Abstract
The spatial location and viewpoint of observed actions are closely linked in natural social settings. For example, actions observed from a subjective viewpoint necessarily occur within the observer’s peripersonal space. Neurophysiological studies have shown that mirror neurons (MNs) of the monkey ventral premotor area F5 can code the spatial location of live observed actions. Furthermore, F5 MN discharge can also be modulated by the viewpoint from which filmed actions are seen. Nonetheless, whether and to what extent MNs can integrate viewpoint and spatial location of live observed actions remains unknown. We addressed this issue by comparing the activity of 148 F5 MNs while macaque monkeys observed an experimenter grasping in three different combinations of viewpoint and spatial location, namely, lateral view in the (1) extrapersonal and (2) peripersonal space and (3) subjective view in the peripersonal space. We found that the majority of MNs were space-selective (60.8%): those selective for the peripersonal space exhibited a preference for the subjective viewpoint both at the single-neuron and population level, whereas space-unselective neurons were view invariant. These findings reveal the existence of a previously neglected link between spatial and viewpoint selectivity in MN activity during live-action observation.
38
Pulvermüller F. Neural reuse of action perception circuits for language, concepts and communication. Prog Neurobiol 2017; 160:1-44. [PMID: 28734837] [DOI: 10.1016/j.pneurobio.2017.07.001]
Abstract
Neurocognitive and neurolinguistic theories make explicit statements relating specialized cognitive and linguistic processes to specific brain loci. These linking hypotheses are in need of neurobiological justification and explanation. Recent mathematical models of human language mechanisms, constrained by fundamental neuroscience principles and established knowledge about comparative neuroanatomy, offer explanations for where, when and how language is processed in the human brain. In these models, network structure and connectivity, along with action- and perception-induced correlation of neuronal activity, co-determine neurocognitive mechanisms. Language learning leads to the formation of action perception circuits (APCs) with specific distributions across cortical areas. Cognitive and linguistic processes such as speech production, comprehension, verbal working memory and prediction are modelled by activity dynamics in these APCs, and combinatorial and communicative-interactive knowledge is organized in the dynamics within, and connections between, APCs. The network models, and in particular the concept of distributionally specific circuits, can account for some previously poorly understood facts about the cortical 'hubs' for semantic processing and the motor system's role in language understanding and speech sound recognition. A review of experimental data evaluates predictions of the APC model and alternative theories, providing detailed discussion of some seemingly contradictory findings. Throughout, recent disputes about the role of mirror neurons and grounded cognition in language and communication are assessed critically.
Affiliation(s)
- Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy & Humanities, WE4, Freie Universität Berlin, 14195 Berlin, Germany; Berlin School of Mind and Brain, Humboldt Universität zu Berlin, 10099 Berlin, Germany; Einstein Center for Neurosciences Berlin, 10117 Berlin, Germany
39

40
Donnarumma F, Costantini M, Ambrosini E, Friston K, Pezzulo G. Action perception as hypothesis testing. Cortex 2017; 89:45-60. [PMID: 28226255] [PMCID: PMC5383736] [DOI: 10.1016/j.cortex.2017.01.016]
Abstract
We present a novel computational model that describes action perception as an active inferential process combining motor prediction (the reuse of our own motor system to predict perceived movements) and hypothesis testing (the use of eye movements to disambiguate amongst hypotheses). The system uses a generative model of how (arm and hand) actions are performed to generate hypothesis-specific visual predictions, and directs saccades to the most informative places of the visual scene to test these predictions and the underlying hypotheses. We test the model using eye movement data from a human action observation study. In both the human study and our model, saccades are proactive whenever context affords accurate action prediction, but uncertainty induces a more reactive gaze strategy via tracking of the observed movements. Our model offers a novel perspective on action observation that highlights its active nature, based on prediction dynamics and hypothesis testing.
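The core idea of the abstract, directing gaze to the location that best disambiguates competing action hypotheses, can be sketched as a one-step expected-information-gain computation. This is a toy Bayesian sketch, not the authors' full active-inference generative model; the hypothesis names, fixation locations, and likelihood numbers are all illustrative assumptions.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a probability vector."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def normalize(p):
    s = sum(p)
    return [x / s for x in p]

HYPOTHESES = ["drink", "pour"]
prior = [0.5, 0.5]  # equal prior belief over the two observed-action hypotheses

# p(observation | hypothesis) for each candidate fixation location, with two
# possible visual outcomes per location. Fixating the glass discriminates the
# hypotheses well; fixating the elbow barely does. (Illustrative numbers.)
likelihoods = {
    "glass": {"drink": [0.9, 0.1], "pour": [0.2, 0.8]},
    "elbow": {"drink": [0.55, 0.45], "pour": [0.5, 0.5]},
}

def expected_info_gain(loc):
    """Expected reduction in hypothesis entropy from fixating `loc`."""
    h_prior = entropy(prior)
    gain = 0.0
    for obs in (0, 1):
        # p(obs) = sum_h p(obs | h) p(h)
        p_obs = sum(likelihoods[loc][h][obs] * prior[i]
                    for i, h in enumerate(HYPOTHESES))
        if p_obs == 0:
            continue
        post = normalize([likelihoods[loc][h][obs] * prior[i]
                          for i, h in enumerate(HYPOTHESES)])
        gain += p_obs * (h_prior - entropy(post))
    return gain

# The model saccades to the most informative location.
best = max(likelihoods, key=expected_info_gain)
print(best)  # → glass
```

This captures the "proactive" regime of the abstract: when one fixation target is clearly diagnostic, the model commits a saccade to it rather than passively tracking the movement.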
Affiliation(s)
- Francesco Donnarumma
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Marcello Costantini
- Centre for Brain Science, Department of Psychology, University of Essex, Colchester, UK; Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University G. d'Annunzio, Chieti, Italy; Institute for Advanced Biomedical Technologies - ITAB, Foundation University G. d'Annunzio, Chieti, Italy
- Ettore Ambrosini
- Department of Neuroscience, University of Padua, Padua, Italy; Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University G. d'Annunzio, Chieti, Italy; Institute for Advanced Biomedical Technologies - ITAB, Foundation University G. d'Annunzio, Chieti, Italy
- Karl Friston
- The Wellcome Trust Centre for Neuroimaging, UCL, London, UK
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
41
Physically interacting individuals estimate the partner’s goal to enhance their movements. Nat Hum Behav 2017. [DOI: 10.1038/s41562-017-0054]