1. Chung H, Meyer M, Debnath R, Fox NA, Woodward A. Neural correlates of familiar and unfamiliar action in infancy. J Exp Child Psychol 2022;220:105415. PMID: 35339810. PMCID: PMC9086142. DOI: 10.1016/j.jecp.2022.105415.
Abstract
Behavioral evidence shows that experience with an action shapes action perception. Neural mirroring has been suggested as a mechanism underlying this behavioral phenomenon. Suppression of electroencephalogram (EEG) power in the mu frequency band, an index of motor activation, typically reflects neural mirroring. However, contradictory findings exist regarding the association between mu suppression and motor familiarity in infant EEG studies. In this study, we investigated the neural underpinnings reflecting the role of familiarity in action perception. We measured neural processing of familiar (grasp) and novel (tool-use) actions in 9- and 12-month-old infants. Specifically, we measured infants' distinct motor/visual activity and explored functional connectivity associated with these processes. Mu suppression was stronger for grasping than for tool use, whereas significant mu and occipital alpha (indexing visual activity) suppression were evident for both actions. Interestingly, selective motor-visual functional connectivity was found during observation of familiar action, a pattern not observed for novel action. Thus, the neural correlates of perception of familiar actions may be best understood in terms of a functional neural network rather than isolated regional activity. Our findings provide novel insights on analytic approaches for identifying motor-specific neural activity while also considering neural networks involved in observing motorically familiar versus unfamiliar actions.
Affiliation(s)
- Marlene Meyer
- University of Chicago, Chicago, IL 60637, USA; Donders Institute, Radboud University, 6525 GD Nijmegen, the Netherlands
- Ranjan Debnath
- Leibniz Institute for Neurobiology, 39118 Magdeburg, Germany
- Nathan A Fox
- University of Maryland, College Park, MD 20742, USA
2. Addabbo M, Roberti E, Colombo L, Picciolini O, Turati C. Newborns' early attuning to hand-to-mouth coordinated actions. Dev Sci 2021;25:e13162. PMID: 34291540. PMCID: PMC9286559. DOI: 10.1111/desc.13162.
Abstract
Already inside the womb, fetuses frequently bring their hands to the mouth, anticipating hand-to-mouth contact by opening the mouth. Here, we explored whether 2-day-old newborns discriminate between hand actions directed towards different targets of the face - that is, a thumb that reaches the mouth and a thumb that reaches the chin. Newborns looked longer towards the thumb-to-mouth compared to the thumb-to-chin action only in the presence, and not absence, of anticipatory mouth-opening movements preceding the thumb's arrival. Overall, our results show that newborns are sensitive to hand-to-face coordinated actions, being capable of discriminating between body-related actions directed towards different targets of the face, but only when a salient visual cue that anticipates the target of the action is present. The role of newborns' sensorimotor experience with hand-to-mouth gestures in driving this capacity is discussed.
Affiliation(s)
- Margaret Addabbo
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Elisa Roberti
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Lorenzo Colombo
- Neonatal Intensive Care Unit, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy
- Odoardo Picciolini
- Pediatric Physical Medicine & Rehabilitation Unit, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy
- Chiara Turati
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
3. Animates engender robust memory representations in adults and young children. Cognition 2020;201:104284. PMID: 32276235. DOI: 10.1016/j.cognition.2020.104284.
Abstract
The animate monitoring hypothesis proposes that humans are predisposed to attend preferentially to animate entities in the environment (New, Cosmides, & Tooby, 2007). However, there have to date been no developmental investigations of animate monitoring in younger populations, despite the relevance of such evidence to this hypothesis. Here we demonstrate that adults and preschoolers recall a novel sequence of action with greater fidelity if it involves an animate over an inanimate. Experiments 1 (adults) and 2 (preschoolers) provide initial support for this phenomenon, when a familiar animate (a dog) is used in the sequence instead of a block. Experiment 2 also revealed that a beetle is not clearly superior to a block, hinting at a possible hierarchy of animacy. Experiment 3 provided the clearest evidence for this memory advantage in preschoolers, when a novel animate that was perceptually identical to two other inanimate controls enhanced memory for the sequence. These results indicate that animate monitoring does not require extensive experience to develop, and could possibly be the result of innate dispositions.
4. Ganglmayer K, Attig M, Daum MM, Paulus M. Infants' perception of goal-directed actions: A multi-lab replication reveals that infants anticipate paths and not goals. Infant Behav Dev 2019;57:101340. DOI: 10.1016/j.infbeh.2019.101340.
5. Addabbo M, Vacaru SV, Meyer M, Hunnius S. 'Something in the way you move': Infants are sensitive to emotions conveyed in action kinematics. Dev Sci 2019;23:e12873. PMID: 31144771. DOI: 10.1111/desc.12873.
Abstract
Body movements, as well as faces, communicate emotions. Research in adults has shown that the perception of action kinematics plays a crucial role in understanding others' emotional experiences. Still, little is known about infants' sensitivity to bodily emotional expressions, since most research in infancy has focused on faces. While there is initial evidence that infants can recognize emotions conveyed in whole-body postures, it remains an open question whether they can extract emotional information from action kinematics. We measured electromyographic (EMG) activity over the muscles involved in happy (zygomaticus major, ZM), angry (corrugator supercilii, CS) and fearful (frontalis, F) facial expressions, while 11-month-old infants observed the same action performed with either happy or angry kinematics. Results demonstrate that infants responded to angry and happy kinematics with matching facial reactions. In particular, ZM activity increased while CS activity decreased in response to happy kinematics, and vice versa for angry kinematics. Our results show for the first time that infants can rely on kinematic information to pick up on the emotional content of an action. Thus, from very early in life, action kinematics represent a fundamental and powerful source of information in revealing others' emotional state.
Affiliation(s)
- Margaret Addabbo
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Stefania V Vacaru
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Marlene Meyer
- Department of Psychology, University of Chicago, Chicago, Illinois
- Sabine Hunnius
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
6. Levine D, Buchsbaum D, Hirsh-Pasek K, Golinkoff RM. Finding events in a continuous world: A developmental account. Dev Psychobiol 2018;61:376-389. DOI: 10.1002/dev.21804.
7. Novack MA, Filippi CA, Goldin-Meadow S, Woodward AL. Actions speak louder than gestures when you are 2 years old. Dev Psychol 2018;54:1809-1821. PMID: 30234335. PMCID: PMC6152821. DOI: 10.1037/dev0000553.
Abstract
Interpreting iconic gestures can be challenging for children. Here, we explore the features and functions of iconic gestures that make them more challenging for young children to interpret than instrumental actions. In Study 1, we show that 2.5-year-olds are able to glean size information from handshape in a simple gesture, although their performance is significantly worse than 4-year-olds'. Studies 2 to 4 explore the boundary conditions of 2.5-year-olds' gesture understanding. In Study 2, 2.5-year-old children have an easier time interpreting size information in hands that reach than in hands that gesture. In Study 3, we tease apart the perceptual features and functional objectives of reaches and gestures. We created a context in which an action has the perceptual features of a reach (extending the hand toward an object) but serves the function of a gesture (the object is behind a barrier and not obtainable; the hand thus functions to represent, rather than reach for, the object). In this context, children struggle to interpret size information in the hand, suggesting that gesture's representational function (rather than its perceptual features) is what makes it hard for young children to interpret. A distance control (Study 4) in which a person holds a box in gesture space (close to the body) demonstrates that children's difficulty interpreting static gesture cannot be attributed to the physical distance between a gesture and its referent. Together, these studies provide evidence that children's struggle to interpret iconic gesture may stem from its status as representational action.
Affiliation(s)
- Miriam A. Novack
- The University of Chicago, Chicago, IL
- Northwestern University, Evanston, IL
- Courtney A. Filippi
- The University of Chicago, Chicago, IL
- National Institutes of Health, Bethesda, MD
8. Krogh-Jespersen S, Kaldy Z, Valadez AG, Carter AS, Woodward AL. Goal prediction in 2-year-old children with and without autism spectrum disorder: An eye-tracking study. Autism Res 2018;11:870-882. PMID: 29405645. PMCID: PMC6026049. DOI: 10.1002/aur.1936.
Abstract
This study examined the predictive reasoning abilities of typically developing (TD) infants and 2-year-old children with autism spectrum disorder (ASD) in an eye-tracking paradigm. Participants watched a video of a goal-directed action in which a human actor reached for and grasped one of two objects. At test, the objects switched locations. Across these events, we measured: visual anticipation of the action outcome with kinematic cues (i.e., a completed reaching behavior); goal prediction of the action outcome without kinematic cues (i.e., an incomplete reach); and latencies to generate predictions across these two tasks. Results revealed similarities in action anticipation across groups when trajectory information regarding the intended goal was present; however, when predicting the goal without kinematic cues, developmental and diagnostic differences became evident. Younger TD children generated goal-based visual predictions, whereas older TD children were not systematic in their visual predictions. In contrast to both TD groups, children with ASD generated location-based predictions, suggesting that their visual predictions may reflect visuomotor perseveration. Together, these results suggest differences in early predictive reasoning abilities.
Lay summary: The current study examines the ability to generate visual predictions regarding other people's goal-directed actions, specifically reaching for and grasping an object, in infants and children with and without autism spectrum disorder. Results showed no differences in abilities when movement information about a person's goal was evident; however, differences were evident across age and clinical diagnoses when relying on previous knowledge to generate a visual prediction.
9. Krogh-Jespersen S, Woodward AL. Reaching the goal: Active experience facilitates 8-month-old infants' prospective analysis of goal-based actions. J Exp Child Psychol 2018;171:31-45. PMID: 29499431. DOI: 10.1016/j.jecp.2018.01.014.
Abstract
From early in development, infants view others' actions as structured by intentions, and this action knowledge may be supported by shared action production/perception systems. Because the motor system is inherently prospective, infants' understanding of goal-directed actions should support predictions of others' future actions, yet little is known about the nature and developmental origins of this ability, specifically whether young infants use the goal-directed nature of an action to rapidly predict future social behaviors and whether their action experience influences this ability. Across three conditions, we varied the level of action experience infants engaged in to determine whether motor priming influenced infants' ability to generate rapid social predictions. Results revealed that young infants accurately generated goal-based visual predictions when they had previously been reaching for objects; however, infants who passively observed a demonstration were less successful. Further analyses showed that engaging the cognitively based prediction system to generate goal-based predictions following motor engagement resulted in slower latencies to predict, suggesting that these smart predictions take more time to deploy. Thus, 8-month-old infants may have motor representations of goal-directed actions, yet this is not sufficient for them to predict others' actions; rather, their own action experience supports the ability to rapidly implement knowledge to predict future behavior.
Affiliation(s)
- Amanda L Woodward
- Department of Psychology, University of Chicago, Chicago, IL 60637, USA
10. Loucks J, Sommerville J. Developmental Change in Action Perception: Is Motor Experience the Cause? Infancy 2018. DOI: 10.1111/infa.12231.
11.
Abstract
A great deal of attention has recently been paid to gesture and its effects on thinking and learning. It is well established that the hand movements that accompany speech are an integral part of communication, ubiquitous across cultures, and a unique feature of human behavior. In an attempt to understand this intriguing phenomenon, researchers have focused on pinpointing the mechanisms that underlie gesture production. One proposal, that gesture arises from simulated action (Hostetter & Alibali, Psychonomic Bulletin & Review, 15, 495-514, 2008), has opened up discussions about action, gesture, and the relation between the two. However, there is another side to understanding a phenomenon, and that is to understand its function. A phenomenon's function is its purpose rather than its precipitating cause: the why rather than the how. This paper sets forth a theoretical framework for exploring why gesture serves the functions that it does, and reviews where the current literature fits, and fails to fit, this proposal. Our framework proposes that, whether or not gesture is simulated action in terms of its mechanism, it is clearly not reducible to action in terms of its function. Most notably, because gestures are abstracted representations and are not actions tied to particular events and objects, they can play a powerful role in thinking and learning beyond the particular, specifically in supporting generalization and transfer of knowledge.
Affiliation(s)
- Miriam A Novack
- Department of Psychology, University of Chicago, Chicago, IL 60637, USA