1. Geangu E, Smith WAP, Mason HT, Martinez-Cedillo AP, Hunter D, Knight MI, Liang H, del Carmen Garcia de Soria Bazan M, Tse ZTH, Rowland T, Corpuz D, Hunter J, Singh N, Vuong QC, Abdelgayed MRS, Mullineaux DR, Smith S, Muller BR. EgoActive: Integrated Wireless Wearable Sensors for Capturing Infant Egocentric Auditory-Visual Statistics and Autonomic Nervous System Function 'in the Wild'. Sensors (Basel) 2023; 23:7930. [PMID: 37765987] [PMCID: PMC10534696] [DOI: 10.3390/s23187930]
Abstract
There have been sustained efforts toward using naturalistic methods in developmental science to measure infant behaviors in the real world from an egocentric perspective because statistical regularities in the environment can shape and be shaped by the developing infant. However, there is no user-friendly and unobtrusive technology to densely and reliably sample life in the wild. To address this gap, we present the design, implementation and validation of the EgoActive platform, which addresses limitations of existing wearable technologies for developmental research. EgoActive records the active infants' egocentric perspective of the world via a miniature wireless head-mounted camera concurrently with their physiological responses to this input via a lightweight, wireless ECG/acceleration sensor. We also provide software tools to facilitate data analyses. Our validation studies showed that the cameras and body sensors performed well. Families also reported that the platform was comfortable, easy to use and operate, and did not interfere with daily activities. The synchronized multimodal data from the EgoActive platform can help tease apart complex processes that are important for child development to further our understanding of areas ranging from executive function to emotion processing and social learning.
Affiliation(s)
- Elena Geangu
- Psychology Department, University of York, York YO10 5DD, UK
- William A. P. Smith
- Department of Computer Science, University of York, York YO10 5DD, UK
- Harry T. Mason
- School of Physics, Engineering and Technology, University of York, York YO10 5DD, UK
- David Hunter
- School of Physics, Engineering and Technology, University of York, York YO10 5DD, UK
- Marina I. Knight
- Department of Mathematics, University of York, York YO10 5DD, UK
- Haipeng Liang
- School of Engineering and Materials Science, Queen Mary University of London, London E1 2AT, UK
- Zion Tsz Ho Tse
- School of Engineering and Materials Science, Queen Mary University of London, London E1 2AT, UK
- Thomas Rowland
- Protolabs, Halesfield 8, Telford TF7 4QN, UK
- Dom Corpuz
- Protolabs, Halesfield 8, Telford TF7 4QN, UK
- Josh Hunter
- Department of Computer Science, University of York, York YO10 5DD, UK
- Nishant Singh
- School of Physics, Engineering and Technology, University of York, York YO10 5DD, UK
- Quoc C. Vuong
- Biosciences Institute, Newcastle University, Newcastle upon Tyne NE1 7RU, UK
- Mona Ragab Sayed Abdelgayed
- Department of Computer Science, University of York, York YO10 5DD, UK
- David R. Mullineaux
- Department of Mathematics, University of York, York YO10 5DD, UK
- Stephen Smith
- School of Physics, Engineering and Technology, University of York, York YO10 5DD, UK
- Bruce R. Muller
- Department of Computer Science, University of York, York YO10 5DD, UK
2. Geangu E, Vuong QC. Seven-months-old infants show increased arousal to static emotion body expressions: Evidence from pupil dilation. Infancy 2023. [PMID: 36917082] [DOI: 10.1111/infa.12535]
Abstract
Human body postures provide perceptual cues that can be used to discriminate and recognize emotions. It was previously found that 7-month-olds' fixation patterns discriminated fear from other emotion body expressions, but it is not clear whether they also process the emotional content of those expressions. The emotional content of visual stimuli can increase arousal levels, resulting in pupil dilation. To provide evidence that infants also process the emotional content of expressions, we analyzed variations in pupil size in response to emotion stimuli. Forty-eight 7-month-old infants viewed adult body postures expressing anger, fear, happiness and neutral expressions while their pupil size was measured. There was a significant emotion effect between 1040 and 1640 ms after image onset, when fear elicited larger pupil dilations than neutral expressions. A similar trend was found for anger expressions. Our results suggest that infants have increased arousal to negative-valence body expressions. Thus, in combination with previous fixation results, the pupil data show that infants as young as 7 months can perceptually discriminate static body expressions and process their emotional content. These results extend what is known about infant processing of emotion expressions conveyed through other means (e.g., faces).
Affiliation(s)
- Elena Geangu
- Department of Psychology, University of York, York, UK
- Quoc C Vuong
- Biosciences Institute and School of Psychology, Newcastle University, Newcastle upon Tyne, UK
3. Torricelli F, Tomassini A, Pezzulo G, Pozzo T, Fadiga L, D'Ausilio A. Motor invariants in action execution and perception. Phys Life Rev 2023; 44:13-47. [PMID: 36462345] [DOI: 10.1016/j.plrev.2022.11.003]
Abstract
The nervous system is sensitive to statistical regularities of the external world and forms internal models of these regularities to predict environmental dynamics. Given the inherently social nature of human behavior, being capable of building reliable predictive models of others' actions may be essential for successful interaction. While social prediction might seem to be a daunting task, the study of human motor control has accumulated ample evidence that our movements follow a series of kinematic invariants, which can be used by observers to reduce their uncertainty during social exchanges. Here, we provide an overview of the most salient regularities that shape biological motion, examine the role of these invariants in recognizing others' actions, and speculate that anchoring socially-relevant perceptual decisions to such kinematic invariants provides a key computational advantage for inferring conspecifics' goals and intentions.
Affiliation(s)
- Francesco Torricelli
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Alice Tomassini
- Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Via San Martino della Battaglia 44, 00185 Rome, Italy
- Thierry Pozzo
- Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; INSERM UMR1093-CAPS, UFR des Sciences du Sport, Université Bourgogne Franche-Comté, F-21000, Dijon, France
- Luciano Fadiga
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Alessandro D'Ausilio
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy.
4. Sato Y, Kitazaki M, Itakura S, Morita T, Sakuraba Y, Tomonaga M, Hirata S. Great apes' understanding of biomechanics: eye-tracking experiments using three-dimensional computer-generated animations. Primates 2021; 62:735-747. [PMID: 34302253] [DOI: 10.1007/s10329-021-00932-8]
Abstract
Visual processing of the body movements of other animals is important for adaptive animal behaviors. It is widely known that animals can distinguish articulated animal movements even when they are just represented by points of light such that only information about biological motion is retained. However, the extent to which nonhuman great apes comprehend the underlying structural and physiological constraints affecting each moving body part, i.e., biomechanics, is still unclear. To address this, we examined the understanding of biomechanics in bonobos (Pan paniscus) and chimpanzees (Pan troglodytes), following a previous study on humans (Homo sapiens). Apes underwent eye tracking while viewing three-dimensional computer-generated (CG) animations of biomechanically possible or impossible elbow movements performed by a human, robot, or nonhuman ape. Overall, apes did not differentiate their gaze between possible and impossible movements of elbows. However, some apes looked at elbows for longer when viewing impossible vs. possible robot movements, which indicates that they may have had knowledge of biomechanics and that this knowledge could be extended to a novel agent. These mixed results make it difficult to draw a firm conclusion regarding the extent to which apes understand biomechanics. We discuss some methodological features that may be responsible for the results, as well as implications for future nonhuman animal studies involving the presentation of CG animations or measurement of gaze behaviors.
Affiliation(s)
- Yutaro Sato
- Wildlife Research Center, Kyoto University, 2-24 Tanakasekiden, Sakyo, Kyoto, 6068203, Japan.
- Michiteru Kitazaki
- Department of Computer Science and Engineering, Toyohashi University of Technology, 1-1 Hibarigaoka, Tempakucho, Toyohashi, Aichi, 441-8580, Japan
- Shoji Itakura
- Center for Baby Science, Doshisha University, 4-1-1 Kizugawadai, Kizugawa, Kyoto, 6190225, Japan
- Tomoyo Morita
- Institute for Open and Transdisciplinary Research Initiatives, Osaka University, 1-1 Yamadaoka, Suita, Osaka, 5650871, Japan
- Yoko Sakuraba
- Wildlife Research Center, Kyoto University, 2-24 Tanakasekiden, Sakyo, Kyoto, 6068203, Japan
- Center for Research and Education of Wildlife, Kyoto City Zoo, Okazaki Koen, Okazakihoshojicho, Sakyo, Kyoto, 6068333, Japan
- Satoshi Hirata
- Wildlife Research Center, Kyoto University, 2-24 Tanakasekiden, Sakyo, Kyoto, 6068203, Japan
5. McMahon E, Kim D, Mehr SA, Nakayama K, Spelke ES, Vaziri-Pashkam M. The ability to predict actions of others from distributed cues is still developing in 6- to 8-year-old children. J Vis 2021; 21:14. [PMID: 34003244] [PMCID: PMC8131995] [DOI: 10.1167/jov.21.5.14]
Abstract
Adults use distributed cues in the bodies of others to predict and counter their actions. To investigate the development of this ability, we had adults and 6- to 8-year-old children play a competitive game with a confederate who reached toward one of two targets. Child and adult participants, who sat across from the confederate, attempted to beat the confederate to the target by touching it before the confederate did. Adults used cues distributed through the head, shoulders, torso, and arms to predict the reaching actions. Children, in contrast, used cues in the arms and torso, but we did not find any evidence that they could use cues in the head or shoulders to predict the actions. These results provide evidence for a change in the ability to respond rapidly to predictive cues to others’ actions from childhood to adulthood. Despite humans’ sensitivity to action goals even in infancy, the ability to read cues from the body for action prediction in rapid interactive settings is still developing in children as old as 6 to 8 years of age.
Affiliation(s)
- Emalie McMahon
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Daniel Kim
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Samuel A Mehr
- Department of Psychology, Harvard University, Cambridge, MA, USA; Data Science Initiative, Harvard University, Cambridge, MA, USA; School of Psychology, Victoria University of Wellington, Wellington, New Zealand
- Ken Nakayama
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Maryam Vaziri-Pashkam
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA
6. Do infants represent human actions cross-modally? An ERP visual-auditory priming study. Biol Psychol 2021; 160:108047. [PMID: 33596461] [DOI: 10.1016/j.biopsycho.2021.108047]
Abstract
Recent findings indicate that 7-month-old infants perceive and represent the sounds inherent to moving human bodies. However, it is not known whether infants integrate auditory and visual information in representations of specific human actions. To address this issue, we used ERPs to investigate infants' neural sensitivity to the correspondence between sounds and images of human actions. In a cross-modal priming paradigm, 7-month-olds were presented with the sounds generated by two types of human body movement, walking and handclapping, after watching the kinematics of those actions in either a congruent or incongruent manner. ERPs recorded from frontal, central and parietal electrodes in response to action sounds indicate that 7-month-old infants perceptually link the visual and auditory cues of human actions. However, at this age these percepts do not seem to be integrated into cognitive multimodal representations of human actions.
7. Geangu E, Vuong QC. Look up to the body: An eye-tracking investigation of 7-months-old infants' visual exploration of emotional body expressions. Infant Behav Dev 2020; 60:101473. [PMID: 32739668] [DOI: 10.1016/j.infbeh.2020.101473]
Abstract
The human body is an important source of information for inferring a person's emotional state. Research with adult observers indicates that the posture of the torso, arms and hands provides important perceptual cues for recognising anger, fear and happy expressions. Much less is known about whether infants process body regions differently for different body expressions. To address this issue, we used eye tracking to investigate whether infants' visual exploration patterns differed when viewing body expressions. Forty-eight 7-month-old infants were randomly presented with static images of adult female bodies expressing anger, fear and happiness, as well as an emotionally neutral posture. Facial cues to emotional state were removed by masking the faces. We measured the proportion of looking time, proportion and number of fixations, and duration of fixations on the head, upper body and lower body regions for the different expressions. We showed that infants explored the upper body more than the lower body. Importantly, infants at this age fixated differently on different body regions depending on the expression of the body posture. In particular, infants spent a larger proportion of their looking times and had longer fixation durations on the upper body for fear relative to the other expressions. These results replicate and extend findings on infant processing of emotional expressions displayed by human bodies, and they support the hypothesis that infants' visual exploration of human bodies is driven by the upper body.
8. Decroix J, Roger C, Kalénine S. Neural dynamics of grip and goal integration during the processing of others' actions with objects: An ERP study. Sci Rep 2020; 10:5065. [PMID: 32193497] [PMCID: PMC7081278] [DOI: 10.1038/s41598-020-61963-7]
Abstract
Recent behavioural evidence suggests that when processing others' actions, motor acts and goal-related information both contribute to action recognition. Yet the neuronal mechanisms underlying the dynamic integration of the two action dimensions remain unclear. This study aims to elucidate the ERP components underlying the processing and integration of grip and goal-related information. The electrophysiological activity of 28 adults was recorded during the processing of object-directed action photographs (e.g., writing with a pencil) containing either grip violations (e.g., upright pencil grasped with an atypical grip), goal violations (e.g., upside-down pencil grasped with a typical grip), both grip and goal violations (e.g., upside-down pencil grasped with an atypical grip), or no violations. Participants judged whether actions were overall typical or not according to the typical use of the object. Brain activity was sensitive to the congruency between grip and goal information on the N400, reflecting the semantic integration between the two dimensions. On earlier components, brain activity was affected by grip and goal typicality independently. Critically, goal typicality but not grip typicality affected brain activity on the N300, supporting an earlier role of goal-related representations in action recognition. Findings provide new insights into the neural temporal dynamics of the integration of motor acts and goal-related information during the processing of others' actions.
Affiliation(s)
- Jérémy Decroix
- Univ. Lille, CNRS, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, F-59000, Lille, France
- Clémence Roger
- Univ. Lille, CNRS, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, F-59000, Lille, France
- Solène Kalénine
- Univ. Lille, CNRS, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, F-59000, Lille, France.
9. What first drives visual attention during the recognition of object-directed actions? The role of kinematics and goal information. Atten Percept Psychophys 2020; 81:2400-2409. [PMID: 31292941] [DOI: 10.3758/s13414-019-01784-7]
Abstract
The recognition of others' object-directed actions is known to involve the decoding of both the visual kinematics of the action and the action goal. Yet whether action recognition is first guided by the processing of visual kinematics or by a prediction about the goal of the actor remains debated. To provide experimental evidence on this issue, the present study investigated whether visual attention is preferentially captured by visual kinematics or by action-goal information when processing others' actions. In a visual search task, participants were asked to find correct actions (e.g., drinking from a glass) among distractor actions. Distractor actions contained grip and/or goal violations and could therefore share the correct goal and/or the correct grip with the target. The time course of the fixation proportion on each distractor action was taken as an indicator of visual attention allocation. Results show that visual attention is first captured by the distractor action with a similar goal. The subsequent withdrawal of visual attention from that distractor suggests a later attentional capture by the distractor action with a similar grip. Overall, the results are in line with predictive approaches to action understanding, which assume that observers first make a prediction about the actor's goal before verifying this prediction using the visual kinematics of the action.
10. Quadrelli E, Geangu E, Turati C. Human action sounds elicit sensorimotor activation early in life. Cortex 2019; 117:323-335. [DOI: 10.1016/j.cortex.2019.05.009]
11. Decroix J, Kalénine S. Timing of grip and goal activation during action perception: a priming study. Exp Brain Res 2018; 236:2411-2426. [DOI: 10.1007/s00221-018-5309-0]
12. Tinelli F, Cioni G, Sandini G, Turi M, Morrone MC. Visual information from observing grasping movement in allocentric and egocentric perspectives: development in typical children. Exp Brain Res 2017; 235:2039-2047. [PMID: 28352948] [DOI: 10.1007/s00221-017-4944-1]
Abstract
Development of the motor system lags behind that of the visual system, which may delay the maturation of visual abilities more closely linked to action. We measured the developmental trajectory of the discrimination of object size from observation of the biological motion of a grasping action in egocentric and allocentric viewpoints (observing the action of oneself or of others), in children and adolescents from 5 to 18 years of age. Children of 5-7 years of age performed the task at chance, indicating a delayed ability to understand the goal of the action. We found a progressive improvement in discrimination ability from 9 to 18 years, which parallels the development of fine motor control. Only after 9 years of age did we observe an advantage for the egocentric view, as previously reported for adults. Given that visual and haptic sensitivity for size discrimination, as well as biological motion perception, are mature in early adolescence, we interpret our results as reflecting immaturity of the influence of the motor system on visual perception.
Affiliation(s)
- Francesca Tinelli
- Department of Developmental Neuroscience, Stella Maris Scientific Institute, Pisa, Italy
- Giovanni Cioni
- Department of Developmental Neuroscience, Stella Maris Scientific Institute, Pisa, Italy
- Department of Clinical and Experimental Medicine, University of Pisa, Pisa, Italy
- Giulio Sandini
- Robotics, Brain and Cognitive Sciences Department, Istituto Italiano di Tecnologia, via Morego 30, 16163, Genoa, Italy
- Marco Turi
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Fondazione Stella Maris Mediterraneo, Chiaromonte, Potenza, Italy
- Maria Concetta Morrone
- Department of Developmental Neuroscience, Stella Maris Scientific Institute, Pisa, Italy.
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy.
13. Senna I, Addabbo M, Bolognini N, Longhi E, Macchi Cassia V, Turati C. Infants' Visual Recognition of Pincer Grip Emerges Between 9 and 12 Months of Age. Infancy 2016; 22:389-402. [PMID: 33158356] [DOI: 10.1111/infa.12163]
Abstract
The development of the ability to recognize the whole human body shape has long been investigated in infants, while less is known about their ability to recognize the shape of single body parts and, in particular, their biomechanical constraints. This study aimed to explore whether 9- and 12-month-old infants have knowledge of a hand-grasping movement (i.e., pincer grip) and can recognize violations of the hand's anatomical constraints while observing that movement. Using a preferential looking paradigm, we showed that 12-month-olds discriminate between biomechanically possible and impossible pincer grips, preferring the former over the latter (Experiment 1). This capacity begins to emerge by 9 months of age, modulated by infants' own sensorimotor experience with the pincer grip (Experiment 2). Our findings indicate that the ability to visually discriminate between pincer grasps differing in their biomechanical properties develops between 9 and 12 months of age, and that experience with self-produced hand movements might help infants build a representation of the hand that encompasses knowledge of the physical constraints of this body part.
Affiliation(s)
- Irene Senna
- Cognitive Neuroscience Department and Cognitive Interaction Technology-Center of Excellence, Bielefeld University; Department of Psychology & NeuroMi, Milan Center for Neuroscience, University of Milan-Bicocca
- Margaret Addabbo
- Department of Psychology & NeuroMi, Milan Center for Neuroscience, University of Milan-Bicocca
- Nadia Bolognini
- Department of Psychology & NeuroMi, Milan Center for Neuroscience, University of Milan-Bicocca; Laboratory of Neuropsychology, Istituto Auxologico Italiano
- Elena Longhi
- Department of Psychology & NeuroMi, Milan Center for Neuroscience, University of Milan-Bicocca; Research Department of Clinical, Educational and Health Psychology, University College London
- Viola Macchi Cassia
- Department of Psychology & NeuroMi, Milan Center for Neuroscience, University of Milan-Bicocca
- Chiara Turati
- Department of Psychology & NeuroMi, Milan Center for Neuroscience, University of Milan-Bicocca
14. Natale E, Addabbo M, Marchis IC, Bolognini N, Macchi Cassia V, Turati C. Action priming with biomechanically possible and impossible grasps: ERP evidence from 6-month-old infants. Soc Neurosci 2016; 12:560-569. [PMID: 27266367] [DOI: 10.1080/17470919.2016.1197853]
Abstract
Coding the direction of others' gestures is a fundamental human ability, since it allows the observer to attend and react to sources of potential interest in the environment. Shifts of attention triggered by action observation have been reported to occur early in infancy. Yet the neurophysiological underpinnings of such action priming, and the properties of gestures that might be crucial for it, remain unknown. Here, we addressed these issues by recording electroencephalographic (EEG) activity from 6-month-old infants cued with spatially non-predictive hand grasping directed toward or away from the position of a target object (valid and invalid trials, respectively). Half of the infants were cued with a gesture executable by a human hand (possible gesture) and the other half with a gesture that a human hand cannot execute (impossible gesture). Results show that the amplitude enhancement of the posterior N290 component in response to targets in valid trials, as compared to invalid trials, was present only for infants seeing possible gestures and was absent for infants seeing impossible gestures. These findings suggest that infants detect the biomechanical properties of human movements when processing hand gestures, relying on this information to orient their visual attention toward the target object.
Affiliation(s)
- E Natale
- Department of Psychology and NeuroMI, Milan Center for Neuroscience, University of Milano-Bicocca, Milan, Italy
- M Addabbo
- Department of Psychology and NeuroMI, Milan Center for Neuroscience, University of Milano-Bicocca, Milan, Italy
- I C Marchis
- Department of Psychology and NeuroMI, Milan Center for Neuroscience, University of Milano-Bicocca, Milan, Italy
- N Bolognini
- Department of Psychology and NeuroMI, Milan Center for Neuroscience, University of Milano-Bicocca, Milan, Italy; Laboratory of Neuropsychology, IRCCS Istituto Auxologico Italiano, Milan, Italy
- V Macchi Cassia
- Department of Psychology and NeuroMI, Milan Center for Neuroscience, University of Milano-Bicocca, Milan, Italy
- C Turati
- Department of Psychology and NeuroMI, Milan Center for Neuroscience, University of Milano-Bicocca, Milan, Italy
15.
Abstract
An important element in social interactions is predicting the goals of others, including the goals of others' manual actions. Over a decade ago, Flanagan and Johansson demonstrated that, when observing other people reaching for objects, the observer's gaze arrives at the goal before the action is completed. Moreover, those authors proposed that this behavior was mediated by an embodied process, which takes advantage of the observer's motor knowledge. Here, we scrutinize work that has followed that seminal article. We include studies on adults that have used combined eye tracking and transcranial magnetic stimulation technologies to test causal hypotheses about underlying brain circuits. We also include developmental studies on human infants. We conclude that, although several aspects of the embodied process of predictive eye movements remain to be clarified, current evidence strongly suggests that the motor system plays a causal role in guiding predictive gaze shifts that focus on another person's future goal. The early emergence of the predictive gaze in infant development underlines its importance for social cognition and interaction.
Affiliation(s)
- Terje Falck-Ytter
- Department of Psychology, Uppsala University; Department of Women's and Children's Health, Karolinska Institutet