1
Ter Bekke M, Drijvers L, Holler J. Hand Gestures Have Predictive Potential During Conversation: An Investigation of the Timing of Gestures in Relation to Speech. Cogn Sci 2024; 48:e13407. PMID: 38279899. DOI: 10.1111/cogs.13407.
Abstract
During face-to-face conversation, transitions between speaker turns are incredibly fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that next-turn planning can begin before a current turn is complete. Given that face-to-face conversation also involves the use of communicative bodily signals, an important question is how bodily signals such as co-speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, both for the onset of the gesture as a whole and for the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures received faster responses than questions without gestures. However, we found no evidence that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. The findings presented here highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face-to-face conversation.
Affiliation(s)
- Marlijn Ter Bekke: Donders Institute for Brain, Cognition and Behaviour, Radboud University; Max Planck Institute for Psycholinguistics
- Linda Drijvers: Donders Institute for Brain, Cognition and Behaviour, Radboud University; Max Planck Institute for Psycholinguistics
- Judith Holler: Donders Institute for Brain, Cognition and Behaviour, Radboud University; Max Planck Institute for Psycholinguistics
2
Zhao W. TMS reveals a two-stage priming circuit of gesture-speech integration. Front Psychol 2023; 14:1156087. PMID: 37228338. PMCID: PMC10203497. DOI: 10.3389/fpsyg.2023.1156087.
Abstract
Introduction: Naturalistically, multisensory information from gesture and speech is intrinsically integrated to enable coherent comprehension. Such cross-modal semantic integration is temporally misaligned, with the onset of the gesture preceding the relevant speech segment. It has been proposed that gestures prime subsequent speech. However, there are unresolved questions regarding the roles and time courses that the two sources of information play in integration. Methods: In two between-subject experiments with healthy college students, we segmented the gesture-speech integration period into 40-ms time windows (TWs) based on two separate division criteria, while interrupting activity in the integration nodes, the left posterior middle temporal gyrus (pMTG) and the left inferior frontal gyrus (IFG), with double-pulse transcranial magnetic stimulation (TMS). In Experiment 1, we created fixed time advances of gesture over speech and divided the TWs from the onset of speech. In Experiment 2, we differentiated the processing stages of gesture and speech and segmented the TWs in reference to the speech lexical identification point (IP), with speech onset occurring at the gesture semantic discrimination point (DP). Results: TW-selective interruption of the pMTG and IFG was found only in Experiment 2, with the pMTG involved in TW1 (-120 to -80 ms relative to the speech IP), TW2 (-80 to -40 ms), TW6 (80 to 120 ms), and TW7 (120 to 160 ms), and the IFG involved in TW3 (-40 to 0 ms) and TW6. No significant disruption of gesture-speech integration was found in Experiment 1. Discussion: We conclude that after the representation of a gesture has been established, gesture-speech integration proceeds such that speech is first primed in a phonological processing stage before gestures are unified with speech to form a coherent meaning. Our findings provide new insights into the integration of multisensory speech and co-speech gesture by tracking the causal contributions of the two sources of information.
3
Hartmann M, Carlson E, Mavrolampados A, Burger B, Toiviainen P. Postural and Gestural Synchronization, Sequential Imitation, and Mirroring Predict Perceived Coupling of Dancing Dyads. Cogn Sci 2023; 47:e13281. PMID: 37096347. DOI: 10.1111/cogs.13281.
Abstract
Body movement is a primary nonverbal communication channel in humans. Coordinated social behaviors, such as dancing together, encourage multifarious rhythmic and interpersonally coupled movements from which observers can extract socially and contextually relevant information. The investigation of relations between visual social perception and kinematic motor coupling is important for social cognition. Perceived coupling of dyads spontaneously dancing to pop music has been shown to be strongly driven by the degree of frontal orientation between dancers. The perceptual salience of other aspects, including postural congruence, movement frequencies, time-delayed relations, and horizontal mirroring, however, remains uncertain. In a motion capture study, 90 participant dyads moved freely to 16 musical excerpts from eight musical genres while their movements were recorded using optical motion capture. A total of 128 recordings from 8 dyads maximally facing each other were selected to generate silent 8-s animations. Three kinematic features describing simultaneous and sequential full-body coupling were extracted from the dyads. In an online experiment, the animations were presented to 432 observers, who were asked to rate perceived similarity and interaction between dancers. We found dyadic kinematic coupling estimates to be higher than surrogate estimates, providing evidence for a social dimension of entrainment in dance. Further, we observed links between perceived similarity and coupling of both slower simultaneous horizontal gestures and posture bounding volumes. Perceived interaction, on the other hand, was more related to coupling of faster simultaneous gestures and to sequential coupling. Dyads who were perceived as more coupled also tended to mirror their partner's movements.
Affiliation(s)
- Martin Hartmann: Centre of Excellence in Music, Mind, Body and Brain, University of Jyväskylä; Department of Music, Art and Culture Studies, University of Jyväskylä
- Emily Carlson: Centre of Excellence in Music, Mind, Body and Brain, University of Jyväskylä; Department of Music, Art and Culture Studies, University of Jyväskylä
- Anastasios Mavrolampados: Centre of Excellence in Music, Mind, Body and Brain, University of Jyväskylä; Department of Music, Art and Culture Studies, University of Jyväskylä
- Petri Toiviainen: Centre of Excellence in Music, Mind, Body and Brain, University of Jyväskylä; Department of Music, Art and Culture Studies, University of Jyväskylä
4
De Marco D, De Stefani E, Vecchiato G. Embodying Language through Gestures: Residuals of Motor Memories Modulate Motor Cortex Excitability during Abstract Words Comprehension. Sensors (Basel) 2022; 22:7734. PMID: 36298083. PMCID: PMC9610064. DOI: 10.3390/s22207734.
Abstract
There is an ongoing debate about whether abstract semantics can be represented in the motor domain, as concrete language is. A contextual association with a motor schema (action or gesture) seems crucial for highlighting motor system involvement. The present transcranial magnetic stimulation study aimed to assess motor cortex excitability changes during abstract word comprehension after conditioning word reading to the execution of a gesture with congruent or incongruent meaning. Twelve healthy volunteers were engaged in a lexical-decision task, responding to abstract words or meaningless verbal stimuli. Motor cortex (M1) excitability was measured at different after-stimulus intervals (100, 250, or 500 ms) before and after associative-learning training in which the execution of the gesture followed word processing. Results showed a significant post-training decrease in hand motor evoked potentials at an early processing stage (100 ms) for words congruent with the gestures presented during the training. We hypothesize that traces of individual semantic memory, combined with training effects, induced M1 inhibition due to the redundancy of the evoked motor representation. No modulation of cortical excitability was found for meaningless or incongruent words. We discuss the data in terms of their possible implications for research on the neural basis of language development and for language rehabilitation protocols.
Affiliation(s)
- Doriana De Marco: Istituto di Neuroscienze, Consiglio Nazionale delle Ricerche, 43125 Parma, Italy; Dipartimento di Medicina e Chirurgia, Università degli Studi di Parma, 43125 Parma, Italy
- Elisa De Stefani: Child and Adolescent Neuropsychiatry-NPIA District of Scandiano, AUSL of Reggio Emilia, 42019 Reggio Emilia, Italy
- Giovanni Vecchiato: Istituto di Neuroscienze, Consiglio Nazionale delle Ricerche, 43125 Parma, Italy; Dipartimento di Medicina e Chirurgia, Università degli Studi di Parma, 43125 Parma, Italy
5
Balconi M, Fronda G. Autonomic system tuning during gesture observation and reproduction. Acta Psychol (Amst) 2022; 222:103477. PMID: 34971949. DOI: 10.1016/j.actpsy.2021.103477.
Abstract
Gestural communication provides information about thoughts and feelings and characterizes face-to-face interactions, including non-verbal exchanges. In the present study, the autonomic responses and peripheral synchronization mechanisms of two individuals (encoder and decoder) were recorded simultaneously, using biofeedback in hyperscanning, during two experimental phases: the observation (watching videos of gestures) and the reproduction of positive and negative gestures of different types (affective, social, and informative), each supported by a linguistic context. The main aim of the study was to analyze the two individuals' simultaneous peripheral mechanisms during the performance of a complex joint action consisting of observing and reproducing these gestures. Single-subject and inter-subject correlation analyses were conducted to observe individuals' autonomic responses and physiological synchronization. Single-subject results revealed an increase in emotional arousal, indicated by increased electrodermal activity (skin conductance level, SCL, and skin conductance response, SCR), during both the observation and the reproduction of negative social and affective gestures contextualized by a linguistic context. Moreover, an increase in emotional engagement, expressed by increased heart rate (HR) activity, emerged in the encoder compared to the decoder during gesture reproduction. Inter-subject correlation results showed the presence of mirroring mechanisms, indicated by increased SCL, SCR, and HR synchronization, during the linguistic contexts and gesture observation. Furthermore, an increase in SCL and SCR synchronization emerged during the observation and reproduction of negative social and affective gestures. The present study thus provides information on the mirroring mechanisms and physiological synchronization underlying the linguistic and gesture systems during non-verbal interaction.
6
Meira IA, Pinheiro MA, Prado-Tozzi DA, Cáceres-Barreno AH, de Moraes M, Rodrigues Garcia RCM. Speech and the swallowing threshold in single implant overdenture wearers: A paired control study. J Oral Rehabil 2021; 48:1262-1270. PMID: 34368975. DOI: 10.1111/joor.13240.
Abstract
BACKGROUND Single implant mandibular overdentures (SIMOs) can improve mastication in edentulous elderly people. However, little attention has been paid to their effects on articulation disorders and the swallowing threshold relative to those of conventional complete dentures (CDs). OBJECTIVE To compare the effects of a new conventional CD set and SIMOs on articulation disorders, mandibular movements during speech, and the swallowing threshold, using a paired study design. METHODS Twenty-two edentulous Brazilian Portuguese-speaking elderly people (mean age 66.7 ± 4.6 years) were first evaluated whilst wearing their old conventional CDs. Articulation disorders were analysed by audio and video recordings, mandibular movements during speech were measured by kinesiography, and the swallowing threshold was assessed by masticatory cycle counting and median particle size (X50) calculation. Participants then received new conventional CDs, and evaluations were repeated 2 months later. Subsequently, single implants were installed in the midlines of subjects' mandibles, and the conventional CDs were converted to SIMOs. After 2 months of SIMO use, the evaluations were repeated. Data were submitted to the Cochran-Mantel-Haenszel test and ANOVA. RESULTS No difference in articulation disorders was found between new conventional CD and SIMO use. The frequency of anterior lisp during /s/ and /z/ phoneme pronunciation was reduced with new conventional CD use relative to old conventional CD use (p < .05). The X50 decreased progressively with new conventional CD and SIMO use (both p < .05). CONCLUSION SIMOs do not alter speech relative to new, well-fitted conventional CDs, but they improve the swallowing threshold in edentulous elderly people.
Affiliation(s)
- Ingrid Andrade Meira: Department of Prosthodontics and Periodontology, Piracicaba Dental School, University of Campinas, São Paulo, Brazil
- Mayara Abreu Pinheiro: Department of Prosthodontics and Periodontology, Piracicaba Dental School, University of Campinas, São Paulo, Brazil
- Márcio de Moraes: Department of Oral and Maxillofacial Surgery, Piracicaba Dental School, University of Campinas, Brazil
7
Delehanty AD, Wetherby AM. Rate of Communicative Gestures and Developmental Outcomes in Toddlers With and Without Autism Spectrum Disorder During a Home Observation. Am J Speech Lang Pathol 2021; 30:649-662. PMID: 33751898. PMCID: PMC8740741. DOI: 10.1044/2020_ajslp-19-00206.
Abstract
Purpose Most toddlers with autism spectrum disorder and other developmental delays receive early intervention at home and may not participate in a clinic-based communication evaluation. However, there is limited research that has prospectively examined communication in very young children with and without autism in a home-based setting. This study used granular observational coding to document the communicative acts performed by toddlers with autism, developmental delay, and typical development in the home environment. Method Children were selected from the archival database of the FIRST WORDS Project (N = 211). At approximately 20 months of age, each child participated in everyday activities with a caregiver during an hour-long, video-recorded, naturalistic home observation. Inventories of unique gestures, rates per minute, and proportions of types of communicative acts and communicative functions were coded and compared using a one-way analysis of variance. Concurrent and prospective relationships between rate of communication and measures of social communication, language development, and autism symptoms were examined. Results A total of 40,738 communicative acts were coded. Children with autism, developmental delay, and typical development used eight, nine, and 12 unique gestures on average, respectively. Children with autism used deictic gestures, vocalizations, and communicative acts for behavior regulation at significantly lower rates than the other groups. Statistically significant correlations were observed between rate of communication and several outcome measures. Conclusion Observation of social communication in the natural environment may improve early identification of children with autism and communication delays, complement clinic-based assessments, and provide useful information about a child's social communication profile and the family's preferred activities and intervention priorities. Supplemental Material https://doi.org/10.23641/asha.14204522.
Affiliation(s)
- Amy M. Wetherby: Department of Clinical Sciences, College of Medicine, Florida State University, Tallahassee
8
Shinohara K, Kawahara S, Tanaka H. Visual and Proprioceptive Perceptions Evoke Motion-Sound Symbolism: Different Acceleration Profiles Are Associated With Different Types of Consonants. Front Psychol 2020; 11:589797. PMID: 33281688. PMCID: PMC7688920. DOI: 10.3389/fpsyg.2020.589797.
Abstract
A growing body of literature has shown that one perceptual modality can be systematically associated with sensation in another. However, the cross-modal relationship between linguistic sounds and motion (i.e., motion-sound symbolism) is an extremely understudied area of research. Against this background, this paper examines the cross-modal correspondences between categories of consonants on one hand and different acceleration profiles of motion stimuli on the other. In the two experiments that we conducted, we mechanically manipulated the acceleration profiles of the stimuli while holding the trajectory paths constant, thus distinguishing the effect of acceleration profiles from that of motion path shapes. The results show that different acceleration profiles can be associated with different types of consonants; in particular, movements with acceleration and deceleration tend to be associated with a class of sounds called obstruents, whereas movements without much acceleration tend to be associated with a class of sounds called sonorants. Moreover, the current experiments show that this sort of cross-modal correspondence arises even when the stimuli are not presented visually, namely, when the participants' hands are moved passively by a manipulandum. In conclusion, the present study adds to the evidence that bodily action-based information, with proprioception as a very feasible candidate, can give rise to sound-symbolic patterns.
Affiliation(s)
- Kazuko Shinohara: Language and Culture Studies, Tokyo University of Agriculture and Technology, Tokyo, Japan
- Shigeto Kawahara: The Institute of Cultural and Linguistic Studies, Keio University, Tokyo, Japan
- Hideyuki Tanaka: Human Movement Science, Tokyo University of Agriculture and Technology, Tokyo, Japan
9
Santangelo A, Monteleone AM, Casarrubea M, Cassioli E, Castellini G, Crescimanno G, Aiello S, Ruzzi V, Cascino G, Marciello F, Ricca V. Recurring sequences of multimodal non-verbal and verbal communication during a human psycho-social stress test: A temporal pattern analysis. Physiol Behav 2020; 221:112907. PMID: 32275912. DOI: 10.1016/j.physbeh.2020.112907.
Abstract
BACKGROUND The Trier Social Stress Test (TSST) is a widely used protocol to study human psycho-social stress responses. Quantitative reports of non-verbal behaviors have been carried out by means of the Ethological Coding System for Interviews (ECSI). However, no data have described whether and how non-verbal and verbal behaviors take part in the composition of multimodal sequences of communication during the test. METHOD Five non-verbal ECSI categories and four verbal behaviors related to communication were included in the ethogram. Focal sampling was employed to ensure a high temporal resolution of the behavioral annotation. T-pattern analysis was employed to detect statistically grounded behavioral sequences. RESULTS As a first step, frequency, overall duration, and mean time length were reported for each component of the ethogram. T-pattern analysis then revealed that communication during the TSST is organized according to a complex temporal patterning. We found 51 different sequences (T-patterns): 8 T-patterns included exclusively non-verbal behaviors, 17 included verbal behaviors, and 26 encompassed mixed non-verbal and verbal behaviors. T-patterns were discussed in terms of their putative functional meaning, since non-verbal behaviors hardly overlapped within patterns. CONCLUSIONS The implementation of an ethogram including non-verbal and verbal components highlights the multimodal nature of human communication in the TSST. T-pattern analysis unveils the real-time interplay among these components. Results are discussed according to Jakobson's six constitutive factors of communication.
Affiliation(s)
- Andrea Santangelo: Psychiatric Unit, Department of Health Sciences, University of Florence, Florence, Italy
- Maurizio Casarrubea: Laboratory of Behavioural Physiology, Department of Biomedicine, Neuroscience and Advanced Diagnostics (Bi.N.D.), Human Physiology Section "Giuseppe Pagano", University of Palermo, Palermo, Italy
- Emanuele Cassioli: Psychiatric Unit, Department of Health Sciences, University of Florence, Florence, Italy
- Giovanni Castellini: Psychiatric Unit, Department of Health Sciences, University of Florence, Florence, Italy
- Giuseppe Crescimanno: Laboratory of Behavioural Physiology, Department of Biomedicine, Neuroscience and Advanced Diagnostics (Bi.N.D.), Human Physiology Section "Giuseppe Pagano", University of Palermo, Palermo, Italy
- Stefania Aiello: Laboratory of Behavioural Physiology, Department of Biomedicine, Neuroscience and Advanced Diagnostics (Bi.N.D.), Human Physiology Section "Giuseppe Pagano", University of Palermo, Palermo, Italy
- Valeria Ruzzi: University of Campania "Luigi Vanvitelli", Naples, Italy
- Giammarco Cascino: Department of Medicine, Surgery and Dentistry "Scuola Medica Salernitana", Section of Neurosciences, University of Salerno, Salerno, Italy
- Francesca Marciello: Department of Medicine, Surgery and Dentistry "Scuola Medica Salernitana", Section of Neurosciences, University of Salerno, Salerno, Italy
- Valdo Ricca: Psychiatric Unit, Department of Health Sciences, University of Florence, Florence, Italy
10
Rech F, Wassermann D, Duffau H. New insights into the neural foundations mediating movement/language interactions gained from intrasurgical direct electrostimulations. Brain Cogn 2020; 142:105583. DOI: 10.1016/j.bandc.2020.105583.
11
Abstract
Recent years have witnessed a growing interest in behavioral and neuroimaging studies on the processing of symbolic communicative gestures, such as pantomimes and emblems, but well-controlled stimuli have been scarce. This study describes a dataset of more than 200 video clips of an actress performing pantomimes (gestures that mimic object-directed/object-use actions; e.g., playing guitar), emblems (conventional gestures; e.g., thumbs up), and meaningless gestures. Gestures were divided into four lists. For each of these four lists, 50 Italian and 50 American raters judged the meaningfulness of the gestures and provided names and descriptions for them. The results of these rating and norming measures are reported separately for the Italian and American raters, offering the first normed set of meaningful and meaningless gestures for experimental studies. The stimuli are available for download via the Figshare database.
12
Pisano F, Marangolo P. Looking at ancillary systems for verb recovery: Evidence from non-invasive brain stimulation. Brain Cogn 2020; 139:105515. PMID: 31902738. DOI: 10.1016/j.bandc.2019.105515.
Abstract
Several behavioural and neuroimaging studies have suggested that the language function is not restricted to left-hemisphere language areas but involves regions not predicted by the classical language model. Accordingly, the Embodied Cognition theory postulates a close interaction between the language and motor systems. Indeed, it has been shown that non-invasive brain stimulation (NIBS) is effective for language recovery also when applied over sensorimotor regions, such as the motor cortex, the cerebellum, and the spinal cord. We review a series of NIBS studies in post-stroke aphasic people aimed at assessing the impact of NIBS on verb recovery. We first present results which, following the classical assumption of Broca's area as the key region for verb processing, have shown that modulation over this area is efficacious for verb improvement. Then, we present experiments which, in line with Embodied Cognition, have directly investigated through NIBS the role of different sensorimotor regions in enhancing verb production. Since verbs play a crucial role in sentence construction, which is most often impaired in the aphasic population, we believe that these results have important clinical implications. Indeed, they point to the possibility that different structures might support verb processing.
Affiliation(s)
- F Pisano: Dipartimento di Studi Umanistici, Università Federico II, Naples, Italy; IRCCS, Fondazione Santa Lucia, Rome, Italy
- P Marangolo: Dipartimento di Studi Umanistici, Università Federico II, Naples, Italy; IRCCS, Fondazione Santa Lucia, Rome, Italy
13
Functional lateralization of tool-sound and action-word processing in a bilingual brain. Health Psychol Rep 2020. DOI: 10.5114/hpr.2020.92718.
14
Physical and observational practices of unusual actions prime action verb processing. Brain Cogn 2019; 138:103630. PMID: 31739234. DOI: 10.1016/j.bandc.2019.103630.
Abstract
Numerous studies have highlighted a strong relationship between language and sensorimotor processes, showing, for example, that perceiving an action influences subsequent language processing. Moreover, previous studies have demonstrated that the context in which actions are perceived is crucial to enable this action-language relationship. In particular, action verb processing is facilitated when an action is perceived in its usual context (e.g., someone watering a plant) but not in an unusual context (e.g., someone watering a computer). This difference could be explained in terms of experience; because people always practice actions in accordance with the context, they have no (visual or motor) experience related to the unusual context. The aim of the present study was to test this assumption by assessing and comparing the effect of physical practice and observational learning on the action-language relationship. The results of two experiments showed a facilitation effect of both training methods. Whereas usual actions systematically prime action verb processing, the link between action and language appears for unusual actions only after training by practicing (experiment 1, physical practice) or observing (experiment 2, observational learning). Overall, these findings support the role of experience in the activation of sensorimotor representations during action verb processing.
15
De Stefani E, De Marco D. Language, Gesture, and Emotional Communication: An Embodied View of Social Interaction. Front Psychol 2019; 10:2063. PMID: 31607974. PMCID: PMC6769117. DOI: 10.3389/fpsyg.2019.02063.
Abstract
Spoken language is an innate ability of the human being and represents the most widespread mode of social communication. The ability to share concepts, intentions, and feelings, and to respond to what others are feeling or saying, is crucial during social interactions. A growing body of evidence suggests that language evolved from manual gestures, gradually incorporating motor acts with vocal elements. In this evolutionary context, the human mirror mechanism (MM) would permit the passage from “doing something” to “communicating it to someone else.” In this perspective, the MM would mediate semantic processes, being involved both in the execution and in the understanding of messages expressed by words or gestures. Thus, the recognition of action-related words would activate somatosensory regions, reflecting the semantic grounding of these symbols in action information. Here, the role of the sensorimotor cortex, and in general of the human MM, in both language perception and understanding is addressed, with a focus on recent studies on the integration between symbolic gestures and speech. We conclude by documenting evidence that the MM also codes the emotional aspects conveyed by manual, facial, and body signals during communication, and that these signals act in concert with language to modulate comprehension of others’ messages and behavior, in line with an “embodied” and integrated view of social interaction.
Affiliation(s)
- Doriana De Marco
- Consiglio Nazionale delle Ricerche, Istituto di Neuroscienze, Parma, Italy
16
Cravotta A, Busà MG, Prieto P. Effects of Encouraging the Use of Gestures on Speech. J Speech Lang Hear Res 2019; 62:3204-3219. [PMID: 31479385] [DOI: 10.1044/2019_jslhr-s-18-0493]
Abstract
Purpose Previous studies have investigated the effects of the inability to produce hand gestures on speakers' prosodic features of speech; however, the potential effects of encouraging speakers to gesture have received less attention, especially in naturalistic settings. This study investigates the effects of encouraging the production of hand gestures on the following speech correlates: discourse length (number of words and duration in seconds), disfluencies (filled pauses, self-corrections, repetitions, insertions, interruptions, speech rate), and prosodic properties (measures of fundamental frequency [F0] and intensity). Method Twenty native Italian speakers took part in a narration task in which they had to describe the content of short comic strips to a confederate listener in 1 of 2 conditions: (a) nonencouraging condition (N), in which no instructions about gesturing were given, and (b) encouraging condition (E), in which participants were instructed to gesture while telling the story. Results Instructing speakers to gesture effectively led to higher gesture rate and salience. Significant differences were found for (a) discourse length (e.g., the narratives had more words in E than in N) and (b) acoustic measures (F0 maximum, maximum intensity, and mean intensity were higher in E than in N). Conclusion The study shows that asking speakers to use their hands while describing a story can affect narration length and can also impact F0 and intensity metrics. By showing that enhancing the gesture stream can affect speech prosody, this study provides further evidence that gestures and prosody interact in the process of speech production.
Affiliation(s)
- Alice Cravotta
- Dipartimento di Studi Linguistici e Letterari, Università degli Studi di Padova, Italy
- M Grazia Busà
- Dipartimento di Studi Linguistici e Letterari, Università degli Studi di Padova, Italy
- Pilar Prieto
- Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain
- Departament de Traducció i Ciències del Llenguatge, Universitat Pompeu Fabra, Barcelona, Spain
17
Behroozmand R, Johari K. Pathological attenuation of the right prefrontal cortex activity predicts speech and limb motor timing disorder in Parkinson's disease. Behav Brain Res 2019; 369:111939. [DOI: 10.1016/j.bbr.2019.111939]
18
Ramos-Cabo S, Vulchanov V, Vulchanova M. Gesture and Language Trajectories in Early Development: An Overview From the Autism Spectrum Disorder Perspective. Front Psychol 2019; 10:1211. [PMID: 31191403] [PMCID: PMC6546811] [DOI: 10.3389/fpsyg.2019.01211]
Abstract
The well-documented gesture-language relation in typical communicative development (TD) remains understudied in autism spectrum disorder (ASD). Research on early communication skills shows that gesture production is a strong predictor of language in TD, but little is known about the association between gestures and language in ASD. This review explores this relation by addressing two topics: the reliability of gestures as a predictor of language competences in ASD, and the types of potential differences (quantitative, qualitative, or both) in the gesture-language trajectory of children on the autism spectrum compared to typically developing children. We find evidence that gesture production is indeed a reliable predictor of early communicative skills in ASD, and that research has established both quantitative and qualitative differences in the development of verbal and non-verbal communication skills: lower gesture rates at the quantitative level, and a trajectory that begins to deviate from the TD trajectory only at some point after the first year of life.
Affiliation(s)
- Sara Ramos-Cabo
- Language Acquisition and Language Processing Lab, Department of Language and Literature, Norwegian University of Science and Technology, Trondheim, Norway
- Valentin Vulchanov
- Language Acquisition and Language Processing Lab, Department of Language and Literature, Norwegian University of Science and Technology, Trondheim, Norway
- Mila Vulchanova
- Language Acquisition and Language Processing Lab, Department of Language and Literature, Norwegian University of Science and Technology, Trondheim, Norway
19
Chen WL, Ye Q, Zhang SC, Xia Y, Yang X, Yuan TF, Shan CL, Li JA. Aphasia rehabilitation based on mirror neuron theory: a randomized-block-design study of neuropsychology and functional magnetic resonance imaging. Neural Regen Res 2019; 14:1004-1012. [PMID: 30762012] [PMCID: PMC6404486] [DOI: 10.4103/1673-5374.250580]
Abstract
When watching someone perform an action, mirror neurons are activated in a way that is very similar to the activation that occurs when actually performing that action. Previous single-sample case studies indicate that hand-action observation training may lead to activation and remodeling of mirror neuron systems, which include important language centers, and may improve language function in aphasia patients. In this randomized-block-design experiment, we recruited 24 aphasia patients from Zhongda Hospital, Southeast University, China. The patients were divided into three groups in which they underwent hand-action observation and repetition, dynamic-object observation and repetition, or conventional speech therapy. Training took place 5 days per week, 35 minutes per day, for 2 weeks. We assessed language function via picture naming tests for objects and actions and the Western Aphasia Battery. Among the participants, one patient, his wife, and four healthy student volunteers underwent functional magnetic resonance imaging to analyze changes in brain activation during hand-action observation and dynamic-object observation. Results demonstrated that, compared with dynamic-object observation, hand-action observation led to better performance on the aphasia quotient and the associated naming sub-tests, and to a higher Western Aphasia Battery test score. The overall effect was similar to that of conventional aphasia training, yet hand-action observation had advantages over conventional training in terms of vocabulary extraction and spontaneous speech. Thus, hand-action observation appears to activate the mirror neuron system more strongly than dynamic-object observation. The activated areas included Broca's area, Wernicke's area, and the supramarginal gyrus. These results suggest that hand-action observation combined with repetition might improve language function in aphasia patients better than dynamic-object observation combined with repetition. The therapeutic mechanism of this intervention may be associated with activation of additional mirror neuron systems, and may have implications for the possible repair and remodeling of damaged nerve networks. The study protocol was approved by the Ethical Committee of Nanjing Medical University, China (approval number: 2011-SRFA-086) on March 11, 2011. This trial has been registered in the ISRCTN Registry (ISRCTN84827527).
Affiliation(s)
- Wen-Li Chen
- Department of Rehabilitation Medicine, The First Affiliated Hospital of Nanjing Medical University, Nanjing; Department of Rehabilitation Medicine, Zhangjiagang Hospital Affiliated to Nanjing University of Chinese Medicine, Zhangjiagang, Jiangsu Province, China
- Qian Ye
- School of Rehabilitation Sciences, Nanjing Normal University of Special Education, Nanjing, Jiangsu Province, China
- Si-Cong Zhang
- Yueyang Hospital of Integrated Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Yang Xia
- Department of Rehabilitation Medicine, Zhongda Hospital, Southeast University, Nanjing, Jiangsu Province, China
- Xi Yang
- Department of Rehabilitation Medicine, Zhongda Hospital, Southeast University, Nanjing, Jiangsu Province, China
- Ti-Fei Yuan
- Shanghai Key Laboratory of Psychotic Disorders, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Chun-Lei Shan
- Yueyang Hospital of Integrated Chinese and Western Medicine; School of Rehabilitation Science, Shanghai University of Traditional Chinese Medicine, Shanghai, China; Institute of Rehabilitation Medicine, Shanghai Academy of Traditional Chinese Medicine, Shanghai, China
- Jian-An Li
- Department of Rehabilitation Medicine, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu Province, China
20
Wolf D, Mittelberg I, Rekittke LM, Bhavsar S, Zvyagintsev M, Haeck A, Cong F, Klasen M, Mathiak K. Interpretation of Social Interactions: Functional Imaging of Cognitive-Semiotic Categories During Naturalistic Viewing. Front Hum Neurosci 2018; 12:296. [PMID: 30154703] [PMCID: PMC6102316] [DOI: 10.3389/fnhum.2018.00296]
Abstract
Social interactions arise from patterns of communicative signs, whose perception and interpretation require a multitude of cognitive functions. The semiotic framework of Peirce's Universal Categories (UCs) laid the groundwork for a novel cognitive-semiotic typology of social interactions. During functional magnetic resonance imaging (fMRI), 16 volunteers watched a movie narrative encompassing verbal and non-verbal social interactions. Three types of non-verbal interactions were coded ("unresolved," "non-habitual," and "habitual") based on a typology reflecting Peirce's UCs. As expected, the auditory cortex responded to verbal interactions, but non-verbal interactions modulated temporal areas as well. Conceivably, when speech was lacking, ambiguous visual information (unresolved interactions) primed auditory processing, in contrast to learned behavioral patterns (habitual interactions). The latter recruited a parahippocampal-occipital network supporting conceptual processing and associative memory retrieval. Requiring semiotic contextualization, non-habitual interactions activated visuo-spatial and contextual rule-learning areas such as the temporo-parietal junction and right lateral prefrontal cortex. In summary, the cognitive-semiotic typology reflected distinct sensory and association networks underlying the interpretation of observed non-verbal social interactions.
Affiliation(s)
- Dhana Wolf
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany; Natural Media Lab, Human Technology Centre (HumTec), RWTH Aachen University, Aachen, Germany
- Irene Mittelberg
- Natural Media Lab, Human Technology Centre (HumTec), RWTH Aachen University, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen University, Aachen, Germany
- Linn-Marlen Rekittke
- Natural Media Lab, Human Technology Centre (HumTec), RWTH Aachen University, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen University, Aachen, Germany
- Saurabh Bhavsar
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Mikhail Zvyagintsev
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany; Brain Imaging Facility, Interdisciplinary Centre for Clinical Studies (IZKF), Medical Faculty, RWTH Aachen University, Aachen, Germany
- Annina Haeck
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Fengyu Cong
- Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Martin Klasen
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Klaus Mathiak
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen University, Aachen, Germany; JARA-Translational Brain Medicine, Aachen, Germany

21
Cervetto S, Abrevaya S, Martorell Caro M, Kozono G, Muñoz E, Ferrari J, Sedeño L, Ibáñez A, García AM. Action Semantics at the Bottom of the Brain: Insights From Dysplastic Cerebellar Gangliocytoma. Front Psychol 2018; 9:1194. [PMID: 30050490] [PMCID: PMC6052139] [DOI: 10.3389/fpsyg.2018.01194]
Abstract
Recent embodied cognition research shows that access to action verbs in shallow-processing tasks becomes selectively compromised upon atrophy of the cerebellum, a critical motor region. Here we assessed whether cerebellar damage also disturbs explicit semantic processing of action pictures and its integration with ongoing motor responses. We evaluated a cognitively preserved 33-year-old man with severe dysplastic cerebellar gangliocytoma (Lhermitte-Duclos disease), encompassing most of the right cerebellum and the posterior part of the left cerebellum. The patient and eight healthy controls completed two semantic association tasks (involving pictures of objects and actions, respectively) that required motor responses. Accuracy results via Crawford’s modified t-tests revealed that the patient was selectively impaired in action association. Moreover, reaction-time analysis through Crawford’s Revised Standardized Difference Test showed that, while processing of action concepts involved slower manual responses in controls, no such effect was observed in the patient, suggesting that motor-semantic integration dynamics may be compromised following cerebellar damage. Notably, a Bayesian Test for a Deficit allowing for Covariates revealed that these patterns remained after covarying for executive performance, indicating that they were not secondary to extra-linguistic impairments. Taken together, our results extend incipient findings on the embodied functions of the cerebellum, offering unprecedented evidence of its crucial role in processing non-verbal action meanings and integrating them with concomitant movements. These findings illuminate the relatively unexplored semantic functions of this region while calling for extensions of motor cognition models.
Affiliation(s)
- Sabrina Cervetto
- Laboratory of Experimental Psychology and Neuroscience, Institute of Cognitive and Translational Neuroscience, INECO Foundation, Favaloro University, Buenos Aires, Argentina; Departamento de Educación Física y Salud, Instituto Superior de Educación Física, Universidad de la República, Montevideo, Uruguay
- Sofía Abrevaya
- Laboratory of Experimental Psychology and Neuroscience, Institute of Cognitive and Translational Neuroscience, INECO Foundation, Favaloro University, Buenos Aires, Argentina; National Scientific and Technical Research Council, Buenos Aires, Argentina
- Miguel Martorell Caro
- Laboratory of Experimental Psychology and Neuroscience, Institute of Cognitive and Translational Neuroscience, INECO Foundation, Favaloro University, Buenos Aires, Argentina
- Giselle Kozono
- Laboratory of Experimental Psychology and Neuroscience, Institute of Cognitive and Translational Neuroscience, INECO Foundation, Favaloro University, Buenos Aires, Argentina
- Edinson Muñoz
- Departamento de Lingüística y Literatura, Facultad de Humanidades, Universidad de Santiago de Chile, Santiago, Chile
- Jesica Ferrari
- Neuropsychiatry Department, Institute of Cognitive Neurology, Buenos Aires, Argentina
- Lucas Sedeño
- Laboratory of Experimental Psychology and Neuroscience, Institute of Cognitive and Translational Neuroscience, INECO Foundation, Favaloro University, Buenos Aires, Argentina; National Scientific and Technical Research Council, Buenos Aires, Argentina
- Agustín Ibáñez
- Laboratory of Experimental Psychology and Neuroscience, Institute of Cognitive and Translational Neuroscience, INECO Foundation, Favaloro University, Buenos Aires, Argentina; National Scientific and Technical Research Council, Buenos Aires, Argentina; Universidad Autónoma del Caribe, Barranquilla, Colombia; Center for Social and Cognitive Neuroscience, School of Psychology, Universidad Adolfo Ibáñez, Santiago de Chile, Chile; Centre of Excellence in Cognition and its Disorders, Australian Research Council (ARC), Sydney, NSW, Australia
- Adolfo M García
- Laboratory of Experimental Psychology and Neuroscience, Institute of Cognitive and Translational Neuroscience, INECO Foundation, Favaloro University, Buenos Aires, Argentina; National Scientific and Technical Research Council, Buenos Aires, Argentina; Faculty of Education, National University of Cuyo, Mendoza, Argentina

22
De Marco D, De Stefani E, Bernini D, Gentilucci M. The effect of motor context on semantic processing: A TMS study. Neuropsychologia 2018; 114:243-250. [DOI: 10.1016/j.neuropsychologia.2018.05.003]
23
Robira B, Pouydebat E, San-Galli A, Meulman EJM, Aubaile F, Breuer T, Masi S. Handedness in gestural and manipulative actions in male hunter-gatherer Aka pygmies from Central African Republic. Am J Phys Anthropol 2018; 166:481-491. [PMID: 29427288] [DOI: 10.1002/ajpa.23435]
Abstract
OBJECTIVES All human populations display a right-biased handedness. Nonetheless, while studies on Western populations are plentiful, investigations of traditional populations living at subsistence levels are rare. Yet, understanding the geographical variation of phenotypes of handedness is crucial for testing evolutionary hypotheses. We aimed to provide a preliminary investigation of factors affecting handedness in 25 Aka pygmies from Central African Republic when spontaneously gesturing or manipulating food/tools (N = 593 actions). MATERIALS AND METHODS We recorded spontaneous behaviors and characterized individuals' hand preference using GLMMs with descriptive variables such as target position, task complexity (unimanual/bimanual), task nature (food/tool manipulation, gesture), and task physical/cognitive constraints (precision or power for manipulative actions and informative content for gestures). RESULTS Individuals were lateralized to the right (93%, N = 15) when manipulating food/tools but not when gesturing. Hand preference was affected by target position but not by task complexity. While non-explicitly informative gestures were more biased to the right compared to explicitly informative ones, no differences were found within food/tool manipulation (power or precision vs. none). DISCUSSION Although we do not claim generalizable results given our small sample, our observations provide additional information on handedness in a contemporary traditional society. In particular, the study evidenced considerable cultural effects in gestures, while also supporting theories that consider active tool manipulation one of the overriding factors in the evolution of human handedness.
Affiliation(s)
- Benjamin Robira
- Institut de biologie de l'Ecole normale supérieure (IBENS), Ecole Normale Supérieure, CNRS, INSERM, PSL Research University, Paris, France; Département Hommes, Natures, and Sociétés, Muséum National d'Histoire Naturelle, Musée de l'Homme, UMR 7206-CNRS/MNHN, Paris, France
- Emmanuelle Pouydebat
- Department of Ecology and Management of Biodiversity, Muséum National d'Histoire Naturelle, UMR 7179-CNRS/MNHN, MECADEV, Paris, France
- Aurore San-Galli
- Département Hommes, Natures, and Sociétés, Muséum National d'Histoire Naturelle, Musée de l'Homme, UMR 7206-CNRS/MNHN, Paris, France
- Ellen J M Meulman
- Département Hommes, Natures, and Sociétés, Muséum National d'Histoire Naturelle, Musée de l'Homme, UMR 7206-CNRS/MNHN, Paris, France
- Françoise Aubaile
- Département Hommes, Natures, and Sociétés, Muséum National d'Histoire Naturelle, Musée de l'Homme, UMR 7206-CNRS/MNHN, Paris, France
- Thomas Breuer
- Global Conservation Program, Wildlife Conservation Society, 2300 Southern Boulevard, Bronx, New York
- Shelly Masi
- Département Hommes, Natures, and Sociétés, Muséum National d'Histoire Naturelle, Musée de l'Homme, UMR 7206-CNRS/MNHN, Paris, France

24
Tramacere A, Ferrari PF, Gentilucci M, Giuffrida V, De Marco D. The Emotional Modulation of Facial Mimicry: A Kinematic Study. Front Psychol 2018; 8:2339. [PMID: 29403408] [PMCID: PMC5778471] [DOI: 10.3389/fpsyg.2017.02339]
Abstract
It is well established that the observation of emotional facial expressions induces facial mimicry responses in observers. However, how the interaction between the emotional and motor components of facial expressions modulates the motor behavior of the perceiver is still unknown. We developed a kinematic experiment to evaluate the effect of different oro-facial expressions on the perceiver's face movements. Participants were asked to perform two movements, i.e., lip stretching and lip protrusion, in response to the observation of four meaningful (i.e., smile, angry-mouth, kiss, and spit) and two meaningless mouth gestures. All the stimuli were characterized by different motor patterns (mouth aperture or mouth closure). Response times and kinematic parameters of the movements (amplitude, duration, and mean velocity) were recorded and analyzed. Results evidenced dissociated effects on reaction times and movement kinematics. We found shorter reaction times when a mouth movement was preceded by the observation of a meaningful and motorically congruent oro-facial gesture, in line with the facial mimicry effect. On the contrary, during execution, the perception of a smile was associated with facilitation, in terms of shorter duration and higher velocity, of the incongruent movement, i.e., lip protrusion. The same effect resulted in response to kiss and spit, which significantly facilitated the execution of lip stretching. We call this phenomenon the facial mimicry reversal effect, intended as the overturning of the effect normally observed during facial mimicry. In general, the findings show that both the motor features and the type of emotional oro-facial gesture (conveying positive or negative valence) affect the kinematics of subsequent mouth movements at different levels: while congruent motor features facilitate a general motor response, motor execution can be speeded by gestures that are motorically incongruent with the observed one. Moreover, the valence effect depends on the specific movement required. Results are discussed in relation to Basic Emotion Theory and the embodied cognition framework.
Affiliation(s)
- Antonella Tramacere
- Lichtenberg-Kolleg - The Göttingen Institute for Advanced Study, The German Primate Center Cognitive Ethology Lab, Leibniz Institute for Primate Research, Georg-August-Universität Göttingen, Göttingen, Germany
- Pier F Ferrari
- Unità di Neuroscienze, Dipartimento di Medicina e Chirurgia, Università degli Studi di Parma, Parma, Italy
- Maurizio Gentilucci
- Unità di Neuroscienze, Dipartimento di Medicina e Chirurgia, Università degli Studi di Parma, Parma, Italy; Istituto di Neuroscienze-Consiglio Nazionale delle Ricerche (Sede di Parma), Rome, Italy
- Valeria Giuffrida
- Unità di Neuroscienze, Dipartimento di Medicina e Chirurgia, Università degli Studi di Parma, Parma, Italy
- Doriana De Marco
- Istituto di Neuroscienze-Consiglio Nazionale delle Ricerche (Sede di Parma), Rome, Italy

25
Abstract
When people speak, they gesture. But is the audience watching a speaker sensitive to this link? We translated the body movements of politicians into stick-figure animations and separated the visual from the audio channel. We then asked participants to match a selection of five audio tracks (including the correct one) with the stick-figure animations. The participants made correct decisions in 65% of all cases (chance level: 20%). Matching voices with animations was easier when politicians showed expansive movements and spoke with a loud voice. Thus, people are sensitive to the link between motion cues and vocal cues, and this link appears to become even more apparent when a speaker shows expressive behaviors. Future work will have to refine and validate the methods applied and investigate how mismatches between communication channels affect the impressions that people form of politicians.
26
Wolf D, Rekittke LM, Mittelberg I, Klasen M, Mathiak K. Perceived Conventionality in Co-speech Gestures Involves the Fronto-Temporal Language Network. Front Hum Neurosci 2017; 11:573. [PMID: 29249945] [PMCID: PMC5714878] [DOI: 10.3389/fnhum.2017.00573]
Abstract
Face-to-face communication is multimodal; it encompasses spoken words, facial expressions, gaze, and co-speech gestures. In contrast to linguistic symbols (e.g., spoken words or signs in sign language), which rely on mostly explicit conventions, gestures vary in their degree of conventionality. Bodily signs may have a generally accepted or conventionalized meaning (e.g., a head shake) or less so (e.g., self-grooming). We hypothesized that the subjective perception of conventionality in co-speech gestures relies on the classical language network, i.e., the left hemispheric inferior frontal gyrus (IFG, Broca's area) and the posterior superior temporal gyrus (pSTG, Wernicke's area), and studied 36 subjects watching video-recorded story retellings during a behavioral and a functional magnetic resonance imaging (fMRI) experiment. It is well documented that neural correlates of such naturalistic videos emerge as intersubject covariance (ISC) in fMRI even without a stimulus model (model-free analysis). The subjects attended either to perceived conventionality or to a control condition (any hand movements or gesture-speech relations). Such tasks modulate ISC in the contributing neural structures, and we therefore studied ISC changes under these task demands in language networks. Indeed, the conventionality task significantly increased the covariance of the button-press time series and neuronal synchronization in the left IFG compared with the other tasks. In the left IFG, synchronous activity was observed during the conventionality task only. In contrast, the left pSTG exhibited correlated activation patterns during all conditions, with an increase in the conventionality task at the trend level only. Conceivably, the left IFG can be considered a core region for the processing of perceived conventionality in co-speech gestures, similar to spoken language. In general, the interpretation of conventionalized signs may rely on neural mechanisms that engage during language comprehension.
Affiliation(s)
- Dhana Wolf
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Aachen, Germany; Natural Media Lab, Human Technology Centre, RWTH Aachen, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen, Aachen, Germany
- Linn-Marlen Rekittke
- Natural Media Lab, Human Technology Centre, RWTH Aachen, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen, Aachen, Germany
- Irene Mittelberg
- Natural Media Lab, Human Technology Centre, RWTH Aachen, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen, Aachen, Germany
- Martin Klasen
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Aachen, Germany; JARA-Translational Brain Medicine, RWTH Aachen, Aachen, Germany
- Klaus Mathiak
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen, Aachen, Germany; JARA-Translational Brain Medicine, RWTH Aachen, Aachen, Germany

27
Abstract
A great deal of attention has recently been paid to gesture and its effects on thinking and learning. It is well established that the hand movements that accompany speech are an integral part of communication, ubiquitous across cultures, and a unique feature of human behavior. In an attempt to understand this intriguing phenomenon, researchers have focused on pinpointing the mechanisms that underlie gesture production. One proposal, that gesture arises from simulated action (Hostetter & Alibali, Psychonomic Bulletin & Review, 15, 495-514, 2008), has opened up discussions about action, gesture, and the relation between the two. However, there is another side to understanding a phenomenon, and that is to understand its function. A phenomenon's function is its purpose rather than its precipitating cause: the why rather than the how. This paper sets forth a theoretical framework for exploring why gesture serves the functions that it does, and reviews where the current literature fits, and fails to fit, this proposal. Our framework proposes that, whether or not gesture is simulated action in terms of its mechanism, it is clearly not reducible to action in terms of its function. Most notably, because gestures are abstracted representations and are not actions tied to particular events and objects, they can play a powerful role in thinking and learning beyond the particular, specifically, in supporting generalization and transfer of knowledge.
Affiliation(s)
- Miriam A Novack
- Department of Psychology, University of Chicago, Chicago, IL, 60637, USA.
28
29
Cochet H. Manual asymmetries and hemispheric specialization: Insight from developmental studies. Neuropsychologia 2016; 93:335-341. [DOI: 10.1016/j.neuropsychologia.2015.12.019] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2015] [Revised: 12/17/2015] [Accepted: 12/18/2015] [Indexed: 10/22/2022]
30
Vauclair J, Cochet H. La communication gestuelle : Une voie royale pour le développement du langage [Gestural communication: A royal road to language development]. ENFANCE 2016. [DOI: 10.3917/enf1.164.0419] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
31
Right sensory-motor functional networks subserve action observation therapy in aphasia. Brain Imaging Behav 2016; 11:1397-1411. [DOI: 10.1007/s11682-016-9635-1] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
32
Hsu HC, Iyer SN. Early gesture, early vocabulary, and risk of language impairment in preschoolers. RESEARCH IN DEVELOPMENTAL DISABILITIES 2016; 57:201-210. [PMID: 27450440 DOI: 10.1016/j.ridd.2016.06.012] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/12/2016] [Revised: 06/14/2016] [Accepted: 06/15/2016] [Indexed: 06/06/2023]
Abstract
BACKGROUND: Gesture precedes vocabulary development and may be an early marker of later language impairment.
AIMS: Using data from the National Institute of Child Health and Human Development Study of Early Child Care and Youth Development, this study examined the contribution of children's (N=1064) early gestures and early vocabularies to their risk of language impairment in the preschool years.
METHODS AND PROCEDURES: At age 15 months, maternal reports on children's use of gestures and vocabulary comprehension and production skills were collected using the MacArthur Communicative Development Inventories. At ages 3 and 4.5 years, children's language skills were assessed using the Reynell Developmental Language Scale and the Preschool Language Scale-3, respectively.
OUTCOMES AND RESULTS: After controlling for child, maternal, and family sociodemographic factors, children at later risk for language impairment were found to exhibit significantly less early gesture use and poorer early vocabulary skills relative to their typically developing peers. Early use of gestures was also significantly correlated with early vocabulary skills.
CONCLUSIONS AND IMPLICATIONS: The effect of early gesture on children's later risk of language impairment was indirect and mediated by early vocabulary production. Early gesture may have the potential to serve as an early diagnostic tool and play a role in early intervention.
Affiliation(s)
- Hui-Chin Hsu
- Department of Human Development and Family Science, University of Georgia, United States.
- Suneeti Nathani Iyer
- Department of Communication Sciences and Special Education, University of Georgia, United States
33
Macedonia M, Mueller K. Exploring the Neural Representation of Novel Words Learned through Enactment in a Word Recognition Task. Front Psychol 2016; 7:953. [PMID: 27445918 PMCID: PMC4923151 DOI: 10.3389/fpsyg.2016.00953] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2015] [Accepted: 06/09/2016] [Indexed: 01/03/2023] Open
Abstract
Vocabulary learning in a second language is enhanced if learners enrich the learning experience with self-performed iconic gestures. This learning strategy is called enactment. Here we explore how enacted words are functionally represented in the brain and which brain regions contribute to enhance retention. After an enactment training lasting 4 days, participants performed a word recognition task in the functional Magnetic Resonance Imaging (fMRI) scanner. Data analysis suggests the participation of different and partially intertwined networks that are engaged in higher cognitive processes, i.e., enhanced attention and word recognition. Also, an experience-related network seems to map word representation. Besides core language regions, this latter network includes sensory and motor cortices, the basal ganglia, and the cerebellum. On the basis of its complexity and the involvement of the motor system, this sensorimotor network might explain superior retention for enactment.
Affiliation(s)
- Manuela Macedonia
- Information Engineering, Johannes Kepler University Linz, Linz, Austria; Neural Mechanisms of Human Communication, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Karsten Mueller
- Nuclear Magnetic Resonance Unit, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
34
García AM, Ibáñez A. A touch with words: Dynamic synergies between manual actions and language. Neurosci Biobehav Rev 2016; 68:59-95. [PMID: 27189784 DOI: 10.1016/j.neubiorev.2016.04.022] [Citation(s) in RCA: 75] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2015] [Revised: 04/14/2016] [Accepted: 04/27/2016] [Indexed: 11/16/2022]
Abstract
Manual actions are a hallmark of humanness. Their underlying neural circuitry gives rise to species-specific skills and interacts with language processes. In particular, multiple studies show that hand-related expressions - verbal units evoking manual activity - variously affect concurrent manual actions, yielding apparently controversial results (interference, facilitation, or null effects) in varied time windows. Through a systematic review of 108 experiments, we show that such effects are driven by several factors, such as the level of verbal processing, action complexity, and the time-lag between linguistic and motor processes. We reconcile key empirical patterns by introducing the Hand-Action-Network Dynamic Language Embodiment (HANDLE) model, an integrative framework based on neural coupling dynamics and predictive-coding principles. To conclude, we assess HANDLE against the backdrop of other action-cognition theories, illustrate its potential applications to understand high-level deficits in motor disorders, and discuss key challenges for further development. In sum, our work aligns with the 'pragmatic turn', moving away from passive and static representationalist perspectives to a more dynamic, enactive, and embodied conceptualization of cognitive processes.
Affiliation(s)
- Adolfo M García
- Laboratory of Experimental Psychology and Neuroscience (LPEN), Institute of Cognitive and Translational Neuroscience (INCyT), INECO Foundation, Favaloro University, Buenos Aires, Argentina; National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina; Faculty of Elementary and Special Education (FEEyE), National University of Cuyo (UNCuyo), Mendoza, Argentina
- Agustín Ibáñez
- Laboratory of Experimental Psychology and Neuroscience (LPEN), Institute of Cognitive and Translational Neuroscience (INCyT), INECO Foundation, Favaloro University, Buenos Aires, Argentina; National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina; Universidad Autónoma del Caribe, Barranquilla, Colombia; Center for Social and Cognitive Neuroscience (CSCN), School of Psychology, Adolfo Ibáñez University, Santiago de Chile, Chile; Centre of Excellence in Cognition and its Disorders, Australian Research Council (ARC), Sydney, Australia.
35
A third-person perspective on co-speech action gestures in Parkinson's disease. Cortex 2016; 78:44-54. [PMID: 26995225 PMCID: PMC4865523 DOI: 10.1016/j.cortex.2016.02.009] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2015] [Revised: 12/14/2015] [Accepted: 02/13/2016] [Indexed: 11/29/2022]
Abstract
A combination of impaired motor and cognitive function in Parkinson's disease (PD) can impact on language and communication, with patients exhibiting a particular difficulty processing action verbs. Co-speech gestures embody a link between action and language and contribute significantly to communication in healthy people. Here, we investigated how co-speech gestures depicting actions are affected in PD, in particular with respect to the visual perspective, or viewpoint, that they depict. Gestures are closely related to mental imagery and motor simulations, but people with PD may be impaired in the way they simulate actions from a first-person perspective and may compensate for this by relying more on third-person visual features. We analysed the action-depicting gestures produced by mild-moderate PD patients and age-matched controls on an action description task and examined the relationship between gesture viewpoint, action naming, and performance on an action observation task (weight judgement). Healthy controls produced the majority of their action gestures from a first-person perspective, whereas PD patients produced a greater proportion of gestures from a third-person perspective. We propose that this reflects a compensatory reliance on third-person visual features in the simulation of actions in PD. Performance was also impaired in action naming and weight judgement, although this was unrelated to gesture viewpoint. Our findings provide a more comprehensive understanding of how action-language impairments in PD impact on action communication and of the cognitive underpinnings of this impairment, as well as elucidating the role of action simulation in gesture production.
36
García AM, Ibáñez A. Hands typing what hands do: Action-semantic integration dynamics throughout written verb production. Cognition 2016; 149:56-66. [PMID: 26803393 DOI: 10.1016/j.cognition.2016.01.011] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2015] [Revised: 08/21/2015] [Accepted: 01/14/2016] [Indexed: 11/19/2022]
Abstract
Processing action verbs in general, and manual action verbs in particular, involves activations in gross and hand-specific motor networks, respectively. While this is well established for receptive language processes, no study has explored action-semantic integration during written production. Moreover, little is known about how such crosstalk unfolds from motor planning to execution. Here we address both issues through our novel "action semantics in typing" paradigm, which allows keystroke operations to be timed during word typing. Specifically, we created a primed-verb-copying task involving manual action verbs, non-manual action verbs, and non-action verbs. Motor planning processes were indexed by first-letter lag (the lapse between target onset and first keystroke), whereas execution dynamics were assessed considering whole-word lag (the lapse between first and last keystroke). Each phase was differently delayed by action verbs. When these were processed for over one second, interference was strong and magnified by effector compatibility during programming, but weak and effector-blind during execution. Instead, when they were processed for less than 900 ms, interference was reduced by effector compatibility during programming and it faded during execution. Finally, typing was facilitated by prime-target congruency, irrespective of the verbs' motor content. Thus, action-verb semantics seems to extend beyond its embodied foundations, involving conceptual dynamics not tapped by classical reaction-time measures. These findings are compatible with non-radical models of language embodiment and with predictions of event coding theory.
Affiliation(s)
- Adolfo M García
- Laboratory of Experimental Psychology and Neuroscience (LPEN), Institute of Translational and Cognitive Neuroscience (INCyT), INECO Foundation, Favaloro University, Buenos Aires, Argentina; National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina; Faculty of Elementary and Special Education (FEEyE), National University of Cuyo (UNCuyo), Mendoza, Argentina; UDP-INECO Foundation Core on Neuroscience (UIFCoN), Diego Portales University, Santiago, Chile.
- Agustín Ibáñez
- Laboratory of Experimental Psychology and Neuroscience (LPEN), Institute of Translational and Cognitive Neuroscience (INCyT), INECO Foundation, Favaloro University, Buenos Aires, Argentina; National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina; Universidad Autónoma del Caribe, Barranquilla, Colombia; Department of Psychology, Universidad Adolfo Ibáñez, Santiago, Chile; ARC Centre of Excellence in Cognition and its Disorders, New South Wales, Australia
37
Benassi E, Savini S, Iverson JM, Guarini A, Caselli MC, Alessandroni R, Faldella G, Sansavini A. Early communicative behaviors and their relationship to motor skills in extremely preterm infants. RESEARCH IN DEVELOPMENTAL DISABILITIES 2016; 48:132-144. [PMID: 26555385 DOI: 10.1016/j.ridd.2015.10.017] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/03/2015] [Revised: 10/18/2015] [Accepted: 10/19/2015] [Indexed: 06/05/2023]
Abstract
Despite the predictive value of early spontaneous communication for identifying risk for later language concerns, very little research has focused on these behaviors in extremely low-gestational-age (ELGA; <28 weeks) infants or on their relationship with motor development. In this study, communicative behaviors (gestures, vocal utterances, and their coordination) were evaluated during mother-infant play interactions in 20 ELGA infants and 20 full-term (FT) infants at 12 months (corrected age for ELGA infants). Relationships between gestures and motor skills, evaluated using the Bayley-III Scales, were also examined. ELGA infants, compared with FT infants, showed less advanced communicative, motor, and cognitive skills. Giving and representational gestures were produced at a lower rate by ELGA infants. In addition, pointing gestures and words were produced by a lower percentage of ELGA infants. Significant positive correlations between gestures (pointing and representational gestures) and fine motor skills were found in the ELGA group. We discuss the relevance of examining spontaneous communicative behaviors and motor skills as potential indices of early development that may be useful for clinical assessment and intervention with ELGA infants.
Affiliation(s)
- Erika Benassi
- Department of Psychology, University of Bologna, Bologna, Italy
- Silvia Savini
- Department of Psychology, University of Bologna, Bologna, Italy
- Jana M Iverson
- Department of Psychology, University of Pittsburgh, Pittsburgh, USA
- Maria Cristina Caselli
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Rosina Alessandroni
- Neonatology and Neonatal Intensive Care Unit, S. Orsola-Malpighi Hospital, Department of Medical and Surgical Sciences, University of Bologna, Bologna, Italy
- Giacomo Faldella
- Neonatology and Neonatal Intensive Care Unit, S. Orsola-Malpighi Hospital, Department of Medical and Surgical Sciences, University of Bologna, Bologna, Italy
38
Sansavini A, Bello A, Guarini A, Savini S, Alessandroni R, Faldella G, Caselli C. Noun and predicate comprehension/production and gestures in extremely preterm children at two years of age: Are they delayed? JOURNAL OF COMMUNICATION DISORDERS 2015; 58:126-142. [PMID: 26188414 DOI: 10.1016/j.jcomdis.2015.06.010] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/28/2014] [Revised: 06/07/2015] [Accepted: 06/16/2015] [Indexed: 06/04/2023]
Abstract
Extremely low gestational age (ELGA, GA<28 weeks) preterm children are at high risk for linguistic impairments; however, their lexical comprehension and production as well as lexical categories in their early language acquisition have not been specifically examined via direct tools. Our study examines lexical comprehension and production as well as gestural production in ELGA children by focusing on noun and predicate acquisition. Forty monolingual ELGA children (mean GA of 26.7 weeks) and 40 full-term (FT) children were assessed at two years of corrected chronological age (CCA) using a test of noun and predicate comprehension and production (PiNG) and the Italian MB-CDI. Noun comprehension and production were delayed in ELGA compared with FT children, as documented by the low number of correct responses and the large number of errors, i.e., incorrect responses and no-response items, and by the types of incorrect responses, i.e., fewer semantically related responses, in noun production. Regarding predicate comprehension and production, a higher frequency of no responses was reported by ELGA children, and these children also presented a lower frequency of bimodal spoken-gestural responses in predicate production than FT children. A delayed vocabulary size, as demonstrated by the MB-CDI, was exhibited by one-fourth of the ELGA children, who were also unable to complete the predicate subtest. These findings highlight that noun comprehension and production are delayed in ELGA children at two years of CCA and are the most important indexes for the direct evaluation of their lexical abilities and delay. The types of incorrect responses and bimodal spoken-gestural responses were proven to be useful indexes for evaluating the noun and predicate level of acquisition and to plan early focused interventions.
LEARNING OUTCOMES: After reading this manuscript, the reader will understand (a) the differences in noun and predicate comprehension and production between ELGA and FT children and the indexes of lexical delays exhibited by ELGA children at 2;0 (CCA); (b) the relevance of evaluating errors (incorrect response and no response), the types of incorrect responses (semantically related and unrelated) and the modality of the responses (unimodal spoken and bimodal spoken-gestural) in noun and predicate production to understand the difficulties experienced by ELGA children in representing and expressing meanings; and (c) the need to plan specific interventions to support spoken and gestural modalities in lexical comprehension and production in ELGA children by focusing on noun and predicate acquisition.
Affiliation(s)
- Arianna Bello
- Department of Neurosciences, University of Parma, Italy
- Silvia Savini
- Department of Psychology, University of Bologna, Italy
- Rosina Alessandroni
- Neonatology and Neonatal Intensive Care Unit - S. Orsola-Malpighi Hospital, Department of Medical and Surgical Sciences, University of Bologna, Italy
- Giacomo Faldella
- Neonatology and Neonatal Intensive Care Unit - S. Orsola-Malpighi Hospital, Department of Medical and Surgical Sciences, University of Bologna, Italy
- Cristina Caselli
- Institute of Cognitive Sciences and Technologies, National Research Council, Italy
39
Di Pastena A, Schiaratura LT, Askevis-Leherpeux F. Joindre le geste à la parole : les liens entre la parole et les gestes co-verbaux [Suiting the gesture to the word: The links between speech and co-speech gestures]. ANNEE PSYCHOLOGIQUE 2015. [DOI: 10.3917/anpsy.153.0463] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
40
Joindre le geste à la parole : les liens entre la parole et les gestes co-verbaux [Suiting the gesture to the word: The links between speech and co-speech gestures]. ANNEE PSYCHOLOGIQUE 2015. [DOI: 10.4074/s0003503315003061] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
41
Peeters D, Chu M, Holler J, Hagoort P, Özyürek A. Electrophysiological and Kinematic Correlates of Communicative Intent in the Planning and Production of Pointing Gestures and Speech. J Cogn Neurosci 2015; 27:2352-68. [PMID: 26284993 DOI: 10.1162/jocn_a_00865] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
In everyday human communication, we often express our communicative intentions by manually pointing out referents in the material world around us to an addressee, often in tight synchronization with referential speech. This study investigated whether and how the kinematic form of index finger pointing gestures is shaped by the gesturer's communicative intentions and how this is modulated by the presence of concurrently produced speech. Furthermore, we explored the neural mechanisms underpinning the planning of communicative pointing gestures and speech. Two experiments were carried out in which participants pointed at referents for an addressee while the informativeness of their gestures and speech was varied. Kinematic and electrophysiological data were recorded online. It was found that participants prolonged the duration of the stroke and poststroke hold phase of their gesture to be more communicative, in particular when the gesture was carrying the main informational burden in their multimodal utterance. Frontal and P300 effects in the ERPs suggested the importance of intentional and modality-independent attentional mechanisms during the planning phase of informative pointing gestures. These findings contribute to a better understanding of the complex interplay between action, attention, intention, and language in the production of pointing gestures, a communicative act core to human interaction.
Affiliation(s)
- David Peeters
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Mingyuan Chu
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; University of Aberdeen, UK
- Judith Holler
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Peter Hagoort
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Radboud University Nijmegen, The Netherlands
- Aslı Özyürek
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Radboud University Nijmegen, The Netherlands
42
Zelic G, Kim J, Davis C. Articulatory constraints on spontaneous entrainment between speech and manual gesture. Hum Mov Sci 2015; 42:232-45. [DOI: 10.1016/j.humov.2015.05.009] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2014] [Revised: 05/27/2015] [Accepted: 05/31/2015] [Indexed: 10/23/2022]
43
De Marco D, De Stefani E, Gentilucci M. Gesture and word analysis: the same or different processes? Neuroimage 2015; 117:375-85. [DOI: 10.1016/j.neuroimage.2015.05.080] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2014] [Revised: 04/22/2015] [Accepted: 05/27/2015] [Indexed: 11/25/2022] Open
44
Özyürek A. Hearing and seeing meaning in speech and gesture: insights from brain and behaviour. Philos Trans R Soc Lond B Biol Sci 2014; 369:20130296. [PMID: 25092664 DOI: 10.1098/rstb.2013.0296] [Citation(s) in RCA: 74] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
As we speak, we use not only the arbitrary form-meaning mappings of the speech channel but also motivated form-meaning correspondences, i.e. iconic gestures that accompany speech (e.g. inverted V-shaped hand wiggling across gesture space to demonstrate walking). This article reviews what we know about processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: i.e. modulation of the electrophysiological recording component N400, which is sensitive to the ease of semantic integration of a word to previous context, and recruitment of the left-lateralized frontal-posterior temporal network (left inferior frontal gyrus (IFG), medial temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, we integrate the information coming from both channels recruiting brain areas such as left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulate the integration process. Whether these findings are special to gestures or are shared with actions or other visual accompaniments to speech (e.g. lips) or other visual symbols such as pictures are discussed, as well as the implications for a multimodal view of language.
Affiliation(s)
- Aslı Özyürek
- Department of Linguistics, Radboud University Nijmegen, Erasmusplein 1, 6500 HD Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 JT Nijmegen, The Netherlands
45
46
Cochet H, Centelles L, Jover M, Plachta S, Vauclair J. Hand preferences in preschool children: Reaching, pointing and symbolic gestures. Laterality 2015; 20:501-16. [DOI: 10.1080/1357650x.2015.1007057] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
47
Fabbri-Destro M, Avanzini P, De Stefani E, Innocenti A, Campi C, Gentilucci M. Interaction Between Words and Symbolic Gestures as Revealed By N400. Brain Topogr 2014; 28:591-605. [DOI: 10.1007/s10548-014-0392-4] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2014] [Accepted: 08/08/2014] [Indexed: 11/25/2022]
48
Woll B. Moving from hand to mouth: echo phonology and the origins of language. Front Psychol 2014; 5:662. [PMID: 25071636 PMCID: PMC4081976 DOI: 10.3389/fpsyg.2014.00662] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2014] [Accepted: 06/09/2014] [Indexed: 11/19/2022] Open
Abstract
Although the sign languages in use today are full human languages, certain of the features they share with gestures have been suggested to provide information about possible origins of human language. These features include sharing common articulators with gestures, and exhibiting substantial iconicity in comparison to spoken languages. If human proto-language was gestural, the question remains of how a highly iconic manual communication system might have been transformed into a primarily vocal communication system in which the links between symbol and referent are for the most part arbitrary. The hypothesis presented here focuses on a class of signs which exhibit "echo phonology": a repertoire of mouth actions characterized by "echoing" on the mouth certain of the articulatory actions of the hands. The basic features of echo phonology are introduced and discussed in relation to various types of data. Echo phonology provides naturalistic examples of a possible mechanism accounting for part of the evolution of language, with evidence both of the transfer of manual actions to oral ones and the conversion of units of an iconic manual communication system into a largely arbitrary vocal communication system.
Affiliation(s)
- Bencie Woll
- Deafness, Cognition and Language Research Centre, University College London, London, UK
49
Kelly SD, Hirata Y, Manansala M, Huang J. Exploring the role of hand gestures in learning novel phoneme contrasts and vocabulary in a second language. Front Psychol 2014; 5:673. [PMID: 25071646 PMCID: PMC4077026 DOI: 10.3389/fpsyg.2014.00673] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2014] [Accepted: 06/10/2014] [Indexed: 11/13/2022] Open
Abstract
Co-speech hand gestures are a type of multimodal input that has received relatively little attention in the context of second language learning. The present study explored the role that observing and producing different types of gestures plays in learning novel speech sounds and word meanings in an L2. Naïve English-speakers were taught two components of Japanese—novel phonemic vowel length contrasts and vocabulary items comprised of those contrasts—in one of four different gesture conditions: Syllable Observe, Syllable Produce, Mora Observe, and Mora Produce. Half of the gestures conveyed intuitive information about syllable structure, and the other half, unintuitive information about Japanese mora structure. Within each Syllable and Mora condition, half of the participants only observed the gestures that accompanied speech during training, and the other half also produced the gestures that they observed along with the speech. The main finding was that participants across all four conditions had similar outcomes in two different types of auditory identification tasks and a vocabulary test. The results suggest that hand gestures may not be well suited for learning novel phonetic distinctions at the syllable level within a word, and thus, gesture-speech integration may break down at the lowest levels of language processing and learning.
Affiliation(s)
- Spencer D Kelly
- Neuroscience Program, Department of Psychology, Colgate University, Hamilton, NY, USA; Center for Language and Brain, Colgate University, Hamilton, NY, USA
- Yukari Hirata
- Center for Language and Brain, Colgate University, Hamilton, NY, USA; Department of East Asian Languages and Literatures, Colgate University, Hamilton, NY, USA
- Michael Manansala
- Center for Language and Brain, Colgate University, Hamilton, NY, USA; Department of East Asian Languages and Literatures, Colgate University, Hamilton, NY, USA
- Jessica Huang
- Center for Language and Brain, Colgate University, Hamilton, NY, USA; Department of East Asian Languages and Literatures, Colgate University, Hamilton, NY, USA
50
Bayard C, Colin C, Leybaert J. How is the McGurk effect modulated by Cued Speech in deaf and hearing adults? Front Psychol 2014; 5:416. [PMID: 24904451 PMCID: PMC4032946 DOI: 10.3389/fpsyg.2014.00416] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2014] [Accepted: 04/21/2014] [Indexed: 11/21/2022] Open
Abstract
Speech perception for both hearing and deaf people involves an integrative process between auditory and lip-reading information. In order to disambiguate information from the lips, manual cues from Cued Speech may be added. Cued Speech (CS) is a system of manual aids developed to help deaf people to clearly and completely understand speech visually (Cornett, 1967). Within this system, both labial and manual information, as lone input sources, remain ambiguous. Perceivers, therefore, have to combine both types of information in order to get one coherent percept. In this study, we examined how audio-visual (AV) integration is affected by the presence of manual cues and on which form of information (auditory, labial or manual) CS perceivers primarily rely. To address this issue, we designed a unique experiment that implemented the use of AV McGurk stimuli (audio /pa/ and lip-reading /ka/) which were produced with or without manual cues. The manual cue was congruent with either the auditory information, the lip information or the expected fusion. Participants were asked to repeat the perceived syllable aloud. Their responses were then classified into four categories: audio (when the response was /pa/), lip-reading (when the response was /ka/), fusion (when the response was /ta/) and other (when the response was something other than /pa/, /ka/ or /ta/). Data were collected from hearing-impaired individuals who were experts in CS (all of whom had either cochlear implants or binaural hearing aids; N = 8), hearing individuals who were experts in CS (N = 14) and hearing individuals who were completely naïve to CS (N = 15). Results confirmed that, like hearing people, deaf people can merge auditory and lip-reading information into a single unified percept. Without manual cues, McGurk stimuli induced the same percentage of fusion responses in both groups. Results also suggest that manual cues can modify AV integration and that their impact differs between hearing and deaf people.
Affiliation(s)
- Clémence Bayard
- Center for Research in Cognition and Neurosciences, Université Libre de Bruxelles, Brussels, Belgium
|