51
Campione GC, De Stefani E, Innocenti A, De Marco D, Gough PM, Buccino G, Gentilucci M. Does comprehension of symbolic gestures and corresponding-in-meaning words make use of motor simulation? Behav Brain Res 2014; 259:297-301. [DOI: 10.1016/j.bbr.2013.11.025]
52
Communication, laterality and the brain in human and nonhuman primates: towards a gestural or multimodal origin of language? [Communication, latéralité et cerveau chez les primates humains et non humains : vers une origine gestuelle ou multimodale du langage ?]. Revue de primatologie 2013. [DOI: 10.4000/primatologie.1717]
53
Concatenation of observed grasp phases with observer's distal movements: a behavioural and TMS study. PLoS One 2013; 8:e81197. [PMID: 24278395 PMCID: PMC3835679 DOI: 10.1371/journal.pone.0081197]
Abstract
The present study aimed at determining how actions executed by two conspecifics can be coordinated with each other or, more specifically, how the observation of different phases of a reaching-grasping action is temporally related to the execution of a movement by the observer. Participants observed postures of initial finger opening, maximal finger aperture, and final finger closing of a grasp after observation of an initial hand posture. Then, they opened or closed their right thumb and index finger (experiments 1, 2 and 3). Response times decreased, whereas acceleration and velocity of the actual finger movements increased, when the two late phases of the grasp were observed. In addition, the results ruled out the possibility that this effect was due to the salience of the visual stimulus when the hand was close to the target, and confirmed an effect of the static hand postures themselves in addition to the apparent hand motion created by the succession of the initial hand posture and the grasp phase. In experiments 4 and 5, the observation of grasp phases modulated even foot movements and the pronunciation of syllables. Finally, in experiment 6, transcranial magnetic stimulation applied to the primary motor cortex 300 ms post-stimulus induced an increase in hand motor evoked potentials of the opponens pollicis muscle when the two late phases of the grasp were observed. These data suggest that the observation of grasp phases induced a simulation that was stronger during observation of finger closing; this produced shorter response times and greater acceleration and velocity of the successive movement. In general, our data suggest that concatenation between two movements (one observed and the other executed) is best when the observed (and simulated) movement was about to be completed. The mechanism joining the observation of a conspecific's action with one's own movement may be a precursor of social functions: it may be at the basis of interactions between conspecifics, and related to communication between individuals.
54
de Nooijer JA, van Gog T, Paas F, Zwaan RA. Effects of imitating gestures during encoding or during retrieval of novel verbs on children's test performance. Acta Psychol (Amst) 2013; 144:173-9. [PMID: 23820099 DOI: 10.1016/j.actpsy.2013.05.013]
Abstract
Research has shown that observing and imitating gestures can foster word learning and that imitation might be more beneficial than observation, which is in line with theories of Embodied Cognition. This study investigated when imitation of gestures is most effective, using a 2×2×2×3 mixed design with between-subjects factors Imitation during Encoding (IE; Yes/No) and Imitation during Retrieval (IR; Yes/No), and within-subjects factors Time of Testing (Immediate/Delayed) and Verb Type (Object manipulation/Locomotion/Abstract). Primary school children (N=115) learned 15 novel verbs (five of each type). They were provided with a verbal definition and a video of the gesture. Depending on assigned condition, they additionally received no imitation instructions, instructions to imitate the gesture immediately (i.e., during encoding; IE), instructions to imitate (from memory) during the first posttest (i.e., during retrieval; IR), or both (IE-IR). Based on the literature, all three imitation conditions could be predicted to be more effective than no imitation. On an immediate and delayed posttest, only the object-manipulation verbs were differentially affected by the instructional method, with IE and IR being more effective than no imitation on the immediate test; IE-IR and no imitation did not differ significantly. After a one-week delay, only IR was more effective than no imitation, suggesting that imitation during retrieval is most effective for learning object-manipulation words.
55
Meguerditchian A, Vauclair J, Hopkins WD. On the origins of human handedness and language: A comparative review of hand preferences for bimanual coordinated actions and gestural communication in nonhuman primates. Dev Psychobiol 2013; 55:637-50. [DOI: 10.1002/dev.21150]
Affiliation(s)
- Jacques Vauclair
- Research Center in Psychology of Cognition, Language & Emotion, Aix-Marseille University, 13621 Aix-en-Provence, France
56
Dayalu V, Teulings HL, Bowers A, Crawcour S, Saltuklaroglu T. Manual disfluency in drawing while producing and listening to disfluent speech. Hum Mov Sci 2013; 32:677-90. [DOI: 10.1016/j.humov.2012.12.003]
57
Corballis MC. Wandering tales: evolutionary origins of mental time travel and language. Front Psychol 2013; 4:485. [PMID: 23908641 PMCID: PMC3725404 DOI: 10.3389/fpsyg.2013.00485]
Abstract
A central component of mind wandering is mental time travel, the calling to mind of remembered past events and of imagined future ones. Mental time travel may also be critical to the evolution of language, which enables us to communicate about the non-present, sharing memories, plans, and ideas. Mental time travel is indexed in humans by hippocampal activity, and studies also suggest that the hippocampus in rats is active when the animals replay or preplay activity in a spatial environment, such as a maze. Mental time travel may have ancient origins, contrary to the view that it is unique to humans. Since mental time travel is also thought to underlie language, these findings suggest that language evolved gradually from pre-existing cognitive capacities, contrary to the view of Chomsky and others that language and symbolic thought emerged abruptly, in a single step, within the past 100,000 years.
58
Andric M, Solodkin A, Buccino G, Goldin-Meadow S, Rizzolatti G, Small SL. Brain function overlaps when people observe emblems, speech, and grasping. Neuropsychologia 2013; 51:1619-29. [PMID: 23583968 DOI: 10.1016/j.neuropsychologia.2013.03.022]
Abstract
A hand grasping a cup and a hand gesturing "thumbs-up" are both manual actions, but they have different purposes and effects. Grasping directly affects the cup, whereas gesturing "thumbs-up" has an effect through an implied verbal (symbolic) meaning. Because grasping and emblematic gestures ("emblems") are both goal-oriented hand actions, we pursued the hypothesis that observing each should evoke similar activity in neural regions implicated in processing goal-oriented hand actions. However, because emblems express symbolic meaning, observing them should also evoke activity in regions implicated in interpreting meaning, which is most commonly expressed in language. Using fMRI to test this hypothesis, we had participants watch videos of an actor performing emblems, speaking utterances matched in meaning to the emblems, and grasping objects. Our results show that lateral temporal and inferior frontal regions respond to symbolic meaning, even when it is expressed by a single hand action. In particular, we found that left inferior frontal and right lateral temporal regions are strongly engaged when people observe either emblems or speech. In contrast, we also replicate and extend previous work that implicates parietal and premotor responses in observing goal-oriented hand actions. For hand actions, we found that bilateral parietal and premotor regions are strongly engaged when people observe either emblems or grasping. These findings thus characterize converging brain responses to shared features (e.g., symbolic or manual), despite their encoding and presentation in different stimulus modalities.
Affiliation(s)
- Michael Andric
- Department of Psychology, The University of Chicago, Chicago, IL, USA.
59
Evolutionary origins of human handedness: evaluating contrasting hypotheses. Anim Cogn 2013; 16:531-42. [PMID: 23546932 PMCID: PMC3684717 DOI: 10.1007/s10071-013-0626-y]
Abstract
Variation in methods and measures, which has fuelled past dispute over the existence of population-level handedness in nonhuman great apes, has impeded progress on the origins of human right-handedness and how it relates to the human hallmark of language. Pooling evidence from behavioral studies, neuroimaging and neuroanatomy, we evaluate data on manual and cerebral laterality in humans and other apes engaged in a range of manipulative tasks and in gestural communication. A simplistic human/animal partition is no longer tenable, and we review four (nonexclusive) possible drivers for the origin of population-level right-handedness: skilled manipulative activity, as in tool use; communicative gestures; organizational complexity of action, in particular hierarchical structure; and the role of intentionality in goal-directed action. Fully testing these hypotheses will require developmental and evolutionary evidence as well as modern neuroimaging data.
60
Rusiewicz HL, Shaiman S, Iverson JM, Szuminsky N. Effects of prosody and position on the timing of deictic gestures. J Speech Lang Hear Res 2013; 56:458-470. [PMID: 23690565 DOI: 10.1044/1092-4388(2012/11-0283)]
Abstract
PURPOSE In this study, the authors investigated the hypothesis that the perceived tight temporal synchrony of speech and gesture is evidence of an integrated spoken language and manual gesture communication system. It was hypothesized that experimental manipulations of the spoken response would affect the timing of deictic gestures. METHOD The authors manipulated syllable position and contrastive stress in compound words in multiword utterances by using a repeated-measures design to investigate the degree of synchronization of speech and pointing gestures produced by 15 American English speakers. Acoustic measures were compared with the gesture movement recorded via capacitance. RESULTS Although most participants began a gesture before the target word, the temporal parameters of the gesture changed as a function of syllable position and prosody. Syllables with contrastive stress in the 2nd position of compound words were the longest in duration and also most consistently affected the timing of gestures, as measured by several dependent measures. CONCLUSION Increasing the stress of a syllable significantly affected the timing of a corresponding gesture, notably for syllables in the 2nd position of words that would not typically be stressed. The findings highlight the need to consider the interaction of gestures and spoken language production from a motor-based perspective of coordination.
61
Vainiger D, Labruna L, Ivry RB, Lavidor M. Beyond words: evidence for automatic language–gesture integration of symbolic gestures but not dynamic landscapes. Psychol Res 2013; 78:55-69. [DOI: 10.1007/s00426-012-0475-3]
62
Cochet H, Vauclair J. Hand preferences in human adults: Non-communicative actions versus communicative gestures. Cortex 2012; 48:1017-26. [DOI: 10.1016/j.cortex.2011.03.016]
63
Vauclair J, Cochet H. Hand preference for pointing and language development in toddlers. Dev Psychobiol 2012; 55:757-65. [DOI: 10.1002/dev.21073]
64
Is the coupled control of hand and mouth postures precursor of reciprocal relations between gestures and words? Behav Brain Res 2012; 233:130-40. [PMID: 22561125 DOI: 10.1016/j.bbr.2012.04.036]
Abstract
We tested whether there exists a system that couples hand postures related to gestures with the control of internal mouth articulators during vowel production, and whether it could be a precursor of a system relating hand/arm gestures to words. Participants produced unimanual and bimanual representational gestures expressing the meaning of LARGE or SMALL. Once the gesture was produced, in experiment 1 they pronounced the vowel "A" or "I"; in experiment 2, the word "GRÀNDE" (large) or "PÌCCOLO" (small); and in experiment 3, the pseudo-words "SCRÀNTA" or "SBÌCCARA". Mouth kinematics, hand kinematics and voice spectra were recorded and analyzed. Unimanual gestures affected the voice spectra of the two vowels pronounced alone (experiment 1). Bimanual gestures affected the voice spectra of /a/ and /i/ included in the words (experiment 2), and both unimanual and bimanual gestures affected them in the pseudo-words (experiment 3). The results support the hypothesis that a system coupling hand gestures to vowel production exists. Moreover, they suggest the existence of a more general system relating gestures to words.
65
Abstract
When people talk to each other, they often make arm and hand movements that accompany what they say. These manual movements, called "co-speech gestures," can convey meaning by way of their interaction with the oral message. Another class of manual gestures, called "emblematic gestures" or "emblems," also conveys meaning, but in contrast to co-speech gestures, they can do so directly and independently of speech. There is currently significant interest in the behavioral and biological relationships between action and language. Since co-speech gestures are actions that rely on spoken language, and emblems convey meaning so directly that they can sometimes substitute for speech, these actions may be important, and potentially informative, examples of language–motor interactions. Researchers have recently been examining how the brain processes these actions. The current results of this work do not yet give a clear understanding of gesture processing at the neural level. For the most part, however, it seems that two complementary sets of brain areas respond when people see gestures, reflecting their role in disambiguating meaning. These include areas thought to be important for understanding actions and areas ordinarily related to processing language. The shared and distinct responses across these two sets of areas during communication are just beginning to emerge. In this review, we discuss the ways that the brain responds when people see gestures, how these responses relate to brain activity when people process language, and how these might relate in normal, everyday communication.
Affiliation(s)
- Michael Andric
- Department of Psychology, The University of Chicago, Chicago, IL, USA
66
Grasp it loudly! Supporting actions with semantically congruent spoken action words. PLoS One 2012; 7:e30663. [PMID: 22292014 PMCID: PMC3265503 DOI: 10.1371/journal.pone.0030663]
Abstract
Evidence for cross-talk between motor and language brain structures has accumulated over the past several years. However, while a significant amount of research has focused on the interaction between language perception and action, little attention has been paid to the potential impact of language production on overt motor behaviour. The aim of the present study was to test whether verbalizing during a grasp-to-displace action would affect motor behaviour and, if so, whether this effect would depend on the semantic content of the pronounced word (Experiment I). Furthermore, we sought to test the stability of such effects in a different group of participants and investigate at which stage of the motor act language intervenes (Experiment II). For this, participants were asked to reach, grasp and displace an object while overtly pronouncing verbal descriptions of the action (“grasp” and “put down”) or unrelated words (e.g. “butterfly” and “pigeon”). Fine-grained analyses of several kinematic parameters such as velocity peaks revealed that when participants produced action-related words their movements became faster compared to conditions in which they did not verbalize or in which they produced words that were not related to the action. These effects likely result from the functional interaction between semantic retrieval of the words and the planning and programming of the action. Therefore, links between (action) language and motor structures are significant to the point that language can refine overt motor behaviour.
67
Meguerditchian A, Gardner MJ, Schapiro SJ, Hopkins WD. The sound of one-hand clapping: handedness and perisylvian neural correlates of a communicative gesture in chimpanzees. Proc Biol Sci 2012; 279:1959-66. [PMID: 22217719 DOI: 10.1098/rspb.2011.2485]
Abstract
Whether lateralization of communicative signalling in non-human primates might constitute a prerequisite of hemispheric specialization for language is unclear. In the present study, we examined (i) hand preference for a communicative gesture (clapping in 94 captive chimpanzees from two research facilities) and (ii) the in vivo magnetic resonance imaging brain scans of 40 of these individuals. The preferred hand for clapping was defined as the one in the upper position when the two hands came together. Using computer manual tracing of regions of interest, we measured the neuroanatomical asymmetries for the homologues of key language areas, including the inferior frontal gyrus (IFG) and planum temporale (PT). When considering the entire sample, there was a predominance of right-handedness for clapping and the distribution of right- and left-handed individuals did not differ between the two facilities. The direction of hand preference (right- versus left-handed subjects) for clapping explained a significant portion of variability in asymmetries of the PT and IFG. The results are consistent with the view that gestural communication in the common ancestor may have been a precursor of language and its cerebral substrates in modern humans.
Affiliation(s)
- Adrien Meguerditchian
- Department of Psychology, Research Center in Psychology of Cognition, Language and Emotion, Aix-Marseille University, Aix-en-Provence 13621, France
68
Orangutan Instrumental Gesture-Calls: Reconciling Acoustic and Gestural Speech Evolution Models. Evol Biol 2011; 39:415-418. [PMID: 22923853 PMCID: PMC3423562 DOI: 10.1007/s11692-011-9151-6]
69
Liepelt R, Dolk T, Prinz W. Bidirectional semantic interference between action and speech. Psychol Res 2011; 76:446-55. [PMID: 22075764 DOI: 10.1007/s00426-011-0390-z]
Abstract
Research on embodied cognition assumes that language processing involves modal simulations that recruit the same neural systems usually used for action execution. If this is true, one should find evidence for bidirectional crosstalk between action and language. Using a direct matching paradigm, this study tested whether action–language interactions are bidirectional (Experiments 1 and 2), and whether the crosstalk between action perception and language production is due to facilitation or interference (Experiment 3). Replicating previous findings, we found evidence for crosstalk when manual actions had to be performed simultaneously with action-word perception (Experiment 1) and also when language had to be produced during simultaneous perception of hand actions (Experiment 2). These findings suggest a clear bidirectional relationship between action and language. The latter crosstalk effect was due to interference between action and language (Experiment 3). By extending previous research on embodied cognition, the present findings provide novel evidence suggesting that bidirectional functional relations between action and language are based on similar conceptual-semantic representations.
Affiliation(s)
- Roman Liepelt
- Junior Group "Neurocognition of Joint Action", Department of Psychology, Westfälische Wilhelms-University, Fliednerstrasse 21, 48149 Muenster, Germany.
70
Raymer AM, McHose B, Smith KG, Iman L, Ambrose A, Casselton C. Contrasting effects of errorless naming treatment and gestural facilitation for word retrieval in aphasia. Neuropsychol Rehabil 2011; 22:235-66. [PMID: 22047100 DOI: 10.1080/09602011.2011.618306]
Abstract
We compared the effects of two treatments for aphasic word retrieval impairments, errorless naming treatment (ENT) and gestural facilitation of naming (GES), within the same individuals, anticipating that the use of gesture would enhance the effect of treatment over errorless treatment alone. In addition to picture naming, we evaluated results for other outcome measures that were largely untested in earlier ENT studies. In a single participant crossover treatment design, we examined the effects of ENT and GES in eight individuals with stroke-induced aphasia and word retrieval impairments (three semantic anomia, five phonological anomia) in counterbalanced phases across participants. We evaluated effects of the two treatments for a daily picture naming/gesture production probe measure and in standardised aphasia tests and communication rating scales administered across phases of the experiment. Both treatments led to improvements in naming of trained words (small-to-large effect sizes) in individuals with semantic and phonological anomia. Small generalised naming improvements were noted for three individuals with phonological anomia. GES improved use of corresponding gestures for trained words (large effect sizes). Results were largely maintained at one month post-treatment completion. Increases in scores on standardised aphasia testing also occurred for both ENT and GES training. Both ENT and GES led to improvements in naming measures, with no clear difference between treatments. Increased use of gestures following GES provided a potential compensatory means of communication for those who did not improve verbal skills. Both treatments are considered to be effective methods to promote recovery of word retrieval and verbal production skills in individuals with aphasia.
Affiliation(s)
- Anastasia M Raymer
- Department of Communication Disorders and Special Education, Old Dominion University, Norfolk, VA 23529-0136, USA.
71
Enrici I, Adenzato M, Cappa S, Bara BG, Tettamanti M. Intention Processing in Communication: A Common Brain Network for Language and Gestures. J Cogn Neurosci 2011; 23:2415-31. [DOI: 10.1162/jocn.2010.21594]
Abstract
Human communicative competence is based on the ability to process a specific class of mental states, namely, communicative intention. The present fMRI study aims to analyze whether intention processing in communication is affected by the expressive means through which a communicative intention is conveyed, that is, the linguistic or extralinguistic gestural means. Combined factorial and conjunction analyses were used to test two sets of predictions: first, that a common brain network is recruited for the comprehension of communicative intentions independently of the modality through which they are conveyed; second, that additional brain areas are specifically recruited depending on the communicative modality used, reflecting distinct sensorimotor gateways. Our results clearly showed that a common neural network is engaged in communicative intention processing independently of the modality used. This network includes the precuneus, the left and right posterior STS and TPJ, and the medial pFC. Additional brain areas outside those involved in intention processing are specifically engaged by the particular communicative modality, that is, a peri-sylvian language network for the linguistic modality and a sensorimotor network for the extralinguistic modality. Thus, common representation of communicative intention may be accessed by modality-specific gateways, which are distinct for linguistic versus extralinguistic expressive means. Taken together, our results indicate that the information acquired by different communicative modalities is equivalent from a mental processing standpoint, in particular, at the point at which the actor's communicative intention has to be reconstructed.
Affiliation(s)
- Mauro Adenzato
- University of Torino, Italy
- Neuroscience Institute of Turin, Italy
- Stefano Cappa
- Vita-Salute San Raffaele University, Milan, Italy
- Scientific Institute HSR, Milan, Italy
- Bruno G. Bara
- University of Torino, Italy
- Neuroscience Institute of Turin, Italy
72
Macedonia M, Müller K, Friederici AD. The impact of iconic gestures on foreign language word learning and its neural substrate. Hum Brain Mapp 2011; 32:982-98. [PMID: 20645312 PMCID: PMC6870319 DOI: 10.1002/hbm.21084]
Abstract
Vocabulary acquisition represents a major challenge in foreign language learning. Research has demonstrated that gestures accompanying speech have an impact on memory for verbal information in the speakers' mother tongue and, as recently shown, also in foreign language learning. However, the neural basis of this effect remains unclear. In a within-subjects design, we compared learning of novel words coupled with iconic and meaningless gestures. Iconic gestures helped learners to significantly better retain the verbal material over time. After the training, participants' brain activity was registered by means of fMRI while performing a word recognition task. Brain activations to words learned with iconic and with meaningless gestures were contrasted. We found activity in the premotor cortices for words encoded with iconic gestures. In contrast, words encoded with meaningless gestures elicited a network associated with cognitive control. These findings suggest that memory performance for newly learned words is not driven by the motor component as such, but by the motor image that matches an underlying representation of the word's semantics.
Affiliation(s)
- Manuela Macedonia
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Karsten Müller
- Magnet Resonance Unit, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Angela D. Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
73
Gentilucci M, Campione GC. Do postures of distal effectors affect the control of actions of other distal effectors? Evidence for a system of interactions between hand and mouth. PLoS One 2011; 6:e19793. [PMID: 21625428 PMCID: PMC3100300 DOI: 10.1371/journal.pone.0019793]
Abstract
The present study aimed at determining whether, in healthy humans, postures assumed by distal effectors affect the control of a successive grasp executed with other distal effectors. In experiments 1 and 2, participants reached different objects with their head and grasped them with their mouth, after assuming different hand postures. The postures could be implicitly associated with interactions with large or small objects. The kinematics of lip shaping during the grasp varied congruently with the hand posture, i.e. it was larger or smaller when the posture could be associated with the grasping of large or small objects, respectively. In experiments 3 and 4, participants reached and grasped different objects with their hand, after assuming postures of mouth aperture or closure (experiment 3) or postures of toe extension or flexion (experiment 4). The mouth postures affected the kinematics of finger shaping during the grasp; that is, larger finger shaping corresponded to an open mouth and smaller finger shaping to a closed mouth. In contrast, the foot postures did not influence the hand grasp kinematics. Finally, in experiment 5, participants reached and grasped different objects with their hand while pronouncing open or closed vowels, as verified by the analysis of their vocal spectra. Open and closed vowels induced larger and smaller finger shaping, respectively. In all experiments, postures of the distal effectors induced no effect, or only unspecific effects, on the kinematics of the proximal/axial reach component. The data from the present study support the hypothesis that there exists a system involved in establishing interactions between movements and postures of hand and mouth. This system might have been used to transfer a repertoire of hand gestures to mouth articulation postures during language evolution and, in modern humans, it may have evolved into a system controlling the interactions between speech and gestures.
|
74
|
Glenberg AM, Gallese V. Action-based language: a theory of language acquisition, comprehension, and production. Cortex 2011; 48:905-22. [PMID: 21601842] [DOI: 10.1016/j.cortex.2011.04.010]
Abstract
Evolution and the brain have done a marvelous job solving many tricky problems in action control, including problems of learning, hierarchical control over serial behavior, continuous recalibration, and fluency in the face of slow feedback. Given that evolution tends to be conservative, it should not be surprising that these solutions are exploited to solve other tricky problems, such as the design of a communication system. We propose that a mechanism of motor control, paired controller/predictor models, has been exploited for language learning, comprehension, and production. Our account addresses the development of grammatical regularities and perspective, as well as how linguistic symbols become meaningful through grounding in perception, action, and emotional systems.
Affiliation(s)
- Arthur M Glenberg
- Department of Psychology, Arizona State University, Tempe, AZ 85287, USA.
|
75
|
Comparative study of netbooks and tablet PCs for fostering face-to-face collaborative learning. Computers in Human Behavior 2011. [DOI: 10.1016/j.chb.2010.11.008]
|
76
|
Camões-Costa V, Erjavec M, Horne PJ. Comprehension and production of body part labels in 2- to 3-year-old children. British Journal of Developmental Psychology 2011; 29:552-71. [PMID: 21848746] [DOI: 10.1348/026151010x523040]
Abstract
This study examined which body part labels children could (i) produce when the experimenter touched different locations on her own body, asking each time 'What's this?' and (ii) comprehend by touching the correct locations on their own bodies in response to the experimenter asking 'Where's the [body-part label]?'. Seventeen children aged between 26 and 41 months, tested in a repeated measures procedure, were presented with 50 different body part stimuli in 200 test trials per child. Overall, the children produced fewer body part labels than they could comprehend. The accuracy of children's responses depended on (i) the location or extent of each body part (facial and broad body features were better known; joints and features in or attached to broad body parts the least well known); (ii) the amount of sensory (but not motor) representation each body part has in the human cortex; and (iii) whether a body part was commonly named by caregivers. These results present a precise mapping of the body parts that young children are able to name and locate on their own bodies in response to body part names; they suggest several possible determinants of lexical-semantic body knowledge and add to the understanding of how it develops in childhood.
|
77
|
Marangolo P, Bonifazi S, Tomaiuolo F, Craighero L, Coccia M, Altoè G, Provinciali L, Cantagallo A. Improving language without words: First evidence from aphasia. Neuropsychologia 2010; 48:3824-33. [DOI: 10.1016/j.neuropsychologia.2010.09.025]
|
78
|
Vinson DP, Thompson RL, Skinner R, Fox N, Vigliocco G. The hands and mouth do not always slip together in British sign language: dissociating articulatory channels in the lexicon. Psychol Sci 2010; 21:1158-67. [PMID: 20644107] [DOI: 10.1177/0956797610377340]
Abstract
In contrast to the single-articulatory system of spoken languages, sign languages employ multiple articulators, including the hands and the mouth. We asked whether manual components and mouthing patterns of lexical signs share a semantic representation, and whether their relationship is affected by the differing language experience of deaf and hearing native signers. We used picture-naming tasks and word-translation tasks to assess whether the same semantic effects occur in manual production and mouthing production. Semantic errors on the hands were more common in the English-translation task than in the picture-naming task, but errors in mouthing patterns showed a different trend. We conclude that mouthing is represented and accessed through a largely separable channel, rather than being bundled with manual components in the sign lexicon. Results were comparable for deaf and hearing signers; differences in language experience did not play a role. These results provide novel insight into coordinating different modalities in language production.
Affiliation(s)
- David P Vinson
- Deafness, Cognition, and Language Research Centre, Department of Cognitive, Perceptual and Brain Sciences, University College London, 26 Bedford Way, London WC1H 0AP, United Kingdom.
|
79
|
Kelly SD, Ozyürek A, Maris E. Two sides of the same coin: speech and gesture mutually interact to enhance comprehension. Psychol Sci 2009; 21:260-7. [PMID: 20424055] [DOI: 10.1177/0956797609357327]
Abstract
Gesture and speech are assumed to form an integrated system during language production. Based on this view, we propose the integrated-systems hypothesis, which explains two ways in which gesture and speech are integrated--through mutual and obligatory interactions--in language comprehension. Experiment 1 presented participants with action primes (e.g., someone chopping vegetables) and bimodal speech and gesture targets. Participants related primes to targets more quickly and accurately when they contained congruent information (speech: "chop"; gesture: chop) than when they contained incongruent information (speech: "chop"; gesture: twist). Moreover, the strength of the incongruence affected processing, with fewer errors for weak incongruities (speech: "chop"; gesture: cut) than for strong incongruities (speech: "chop"; gesture: twist). Crucial for the integrated-systems hypothesis, this influence was bidirectional. Experiment 2 demonstrated that gesture's influence on speech was obligatory. The results confirm the integrated-systems hypothesis and demonstrate that gesture and speech form an integrated system in language comprehension.
Affiliation(s)
- Spencer D Kelly
- Department of Psychology, Colgate University, Hamilton, NY 13346, USA.
|
80
|
Symbolic gestures and spoken language are processed by a common neural system. Proc Natl Acad Sci U S A 2009; 106:20664-9. [PMID: 19923436] [DOI: 10.1073/pnas.0909197106]
Abstract
Symbolic gestures, such as pantomimes that signify actions (e.g., threading a needle) or emblems that facilitate social transactions (e.g., finger to lips indicating "be quiet"), play an important role in human communication. They are autonomous, can fully take the place of words, and function as complete utterances in their own right. The relationship between these gestures and spoken language remains unclear. We used functional MRI to investigate whether these two forms of communication are processed by the same system in the human brain. Responses to symbolic gestures, to their spoken glosses (expressing the gestures' meaning in English), and to visually and acoustically matched control stimuli were compared in a randomized block design. General Linear Models (GLM) contrasts identified shared and unique activations and functional connectivity analyses delineated regional interactions associated with each condition. Results support a model in which bilateral modality-specific areas in superior and inferior temporal cortices extract salient features from vocal-auditory and gestural-visual stimuli respectively. However, both classes of stimuli activate a common, left-lateralized network of inferior frontal and posterior temporal regions in which symbolic gestures and spoken words may be mapped onto common, corresponding conceptual representations. We suggest that these anterior and posterior perisylvian areas, identified since the mid-19th century as the core of the brain's language system, are not in fact committed to language processing, but may function as a modality-independent semiotic system that plays a broader role in human communication, linking meaning with symbols whether these are words, gestures, images, sounds, or objects.
|
81
|
Gentilucci M, Campione GC, Dalla Volta R, Bernardis P. The observation of manual grasp actions affects the control of speech: a combined behavioral and Transcranial Magnetic Stimulation study. Neuropsychologia 2009; 47:3190-202. [PMID: 19654016] [DOI: 10.1016/j.neuropsychologia.2009.07.020]
Abstract
Does the mirror system affect the control of speech? This issue was addressed in behavioral and Transcranial Magnetic Stimulation (TMS) experiments. In behavioral experiment 1, participants pronounced the syllable /da/ while observing (1) a hand grasping large and small objects with power and precision grasps, respectively, (2) a foot interacting with large and small objects and (3) differently sized objects presented alone. Voice formant 1 was higher when observing power as compared to precision grasp, whereas it remained unaffected by observation of the different types of foot interaction and objects alone. In TMS experiment 2, we stimulated hand motor cortex, while participants observed the two types of grasp. Motor Evoked Potentials (MEPs) of hand muscles active during the two types of grasp were greater when observing power than precision grasp. In experiments 3-5, TMS was applied to tongue motor cortex of participants silently pronouncing the syllable /da/ and simultaneously observing power and precision grasps, pantomimes of the two types of grasps, and differently sized objects presented alone. Tongue MEPs were greater when observing power than precision grasp either executed or pantomimed. Finally, in TMS experiment 6, the observation of foot interaction with large and small objects did not modulate tongue MEPs. We hypothesized that grasp observation activated motor commands to the mouth as well as to the hand that were congruent with the hand kinematics implemented in the observed type of grasp. The commands to the mouth selectively affected postures of phonation organs and consequently basic features of phonological units.
Affiliation(s)
- Maurizio Gentilucci
- Dipartimento di Neuroscienze, Università di Parma, Via Volturno 39, 43100 Parma, Italy.
|
82
|
Miller N. Utility of arm and hand gestures in the treatment of aphasia: Some comments and expansions. ACTA ACUST UNITED AC 2009. [DOI: 10.1080/14417040600667319]
|
83
|
Barbieri F, Buonocore A, Volta RD, Gentilucci M. How symbolic gestures and words interact with each other. Brain and Language 2009; 110:1-11. [PMID: 19233459] [DOI: 10.1016/j.bandl.2009.01.002]
Abstract
Previous repetitive Transcranial Magnetic Stimulation and neuroimaging studies showed that Broca's area is involved in the interaction between gestures and words. However, in these studies the nature of this interaction was not fully investigated; consequently, we addressed this issue in three behavioral experiments. When compared to the expression of one signal at a time, arm kinematics slowed down and voice parameters were amplified when congruent words plus gestures were simultaneously produced (experiment 1). When word and gesture were incongruent, arm kinematics did not change regardless of word category, whereas the gesture induced variation in vocal parameters of communicative and action words only (experiments 2 and 3). Data are discussed according to the hypothesis that integration between gesture and word occurs by transferring the social intention to interact directly with the interlocutor from the gesture to the word.
Affiliation(s)
- Filippo Barbieri
- Dipartimento di Neuroscienze, Università di Parma, Parma 43100, Italy
|
84
|
Vauclair J, Imbault J. Relationship between manual preferences for object manipulation and pointing gestures in infants and toddlers. Dev Sci 2009; 12:1060-9. [DOI: 10.1111/j.1467-7687.2009.00850.x]
|
85
|
Chieffi S, Secchi C, Gentilucci M. Deictic word and gesture production: Their interaction. Behav Brain Res 2009; 203:200-6. [PMID: 19433113] [DOI: 10.1016/j.bbr.2009.05.003]
Abstract
We examined whether and how deictic gestures and words influence each other when the content of the gesture is congruent or incongruent with that of the simultaneously produced word. Two experiments were carried out. In Experiment 1, the participants read aloud the deictic word 'QUA' ('here') or 'LÀ' ('there'), printed on a token placed near to or far from their body. Simultaneously, they pointed towards their own body when the token was placed near, or towards a remote position when the token was placed far. In this way, participants read 'QUA' ('here') and pointed towards themselves (congruent condition) or towards a remote position (incongruent condition); or they read 'LÀ' ('there') and pointed towards a remote position (congruent condition) or towards themselves (incongruent condition). In a control condition, in which a string of 'X' letters was printed on the token, the participants were silent and only pointed towards themselves (token placed near) or a remote position (token placed far). In Experiment 2, the participants read aloud the deictic word placed in the near or far position without gesturing. The results showed that the congruence or incongruence between the content of the deictic word and that of the gesture affected gesture kinematics and voice spectra. The movement was faster in the congruent than in the control and incongruent conditions, and slower in the incongruent than in the control condition. As for voice spectra, formant 2 (F2) decreased in the incongruent conditions. The results suggest the existence of a bidirectional interaction between the speech and gesture production systems.
Affiliation(s)
- Sergio Chieffi
- Department of Neuroscience, Section of Physiology, University of Parma, Via Volturno 39, 43100 Parma, Italy.
|
86
|
Abstract
This paper discusses the relevance of the discovery of mirror neurons in monkeys and of the mirror neuron system in humans to a neuroscientific account of primates' social cognition and its evolution. It is proposed that mirror neurons and the functional mechanism they underpin, embodied simulation, can ground within a unitary neurophysiological explanatory framework important aspects of human social cognition. In particular, the main focus is on language, here conceived according to a neurophenomenological perspective, grounding meaning on the social experience of action. A neurophysiological hypothesis--the "neural exploitation hypothesis"--is introduced to explain how key aspects of human social cognition are underpinned by brain mechanisms originally evolved for sensorimotor integration. It is proposed that these mechanisms were later on adapted as new neurofunctional architecture for thought and language, while retaining their original functions as well. By neural exploitation, social cognition and language can be linked to the experiential domain of action.
|
87
|
Buk A. The mirror neuron system and embodied simulation: Clinical implications for art therapists working with trauma survivors. Arts in Psychotherapy 2009. [DOI: 10.1016/j.aip.2009.01.008]
|
88
|
Meguerditchian A, Vauclair J. Contrast of hand preferences between communicative gestures and non-communicative actions in baboons: implications for the origins of hemispheric specialization for language. Brain and Language 2009; 108:167-74. [PMID: 19091390] [DOI: 10.1016/j.bandl.2008.10.004]
Abstract
Gestural communication is a modality considered in the literature as a candidate for determining the ancestral prerequisites of the emergence of human language. As reported in captive chimpanzees and human children, a study in captive baboons revealed that a communicative gesture elicits a stronger degree of right-hand bias than non-communicative actions do. It remains unclear whether it is the communicative nature of this manual behavior that induces such patterns of handedness. In the present study, we measured hand use for two previously uninvestigated behaviors in a group of captive olive baboons: (1) a non-communicative self-touching behavior ("muzzle wipe", serving as a control behavior), and (2) a communicative gesture (a ritualized "food beg") different from the one previously studied in the literature (a species-specific threat gesture, namely "hand slap") in the same population of baboons. Hand preferences for the "food beg" gesture revealed a trend toward right-handedness and correlated significantly with the hand preferences previously reported for the "hand slap" gesture in the same baboons. By contrast, hand preferences for the self-touching behavior revealed neither a trend of manual bias at the group level nor any correlation with the hand preferences for either communicative gesture. These findings provide additional support for the hypothesized existence in baboons of a specific communicative system involved in the production of communicative gestures, which may tend toward left-hemispheric dominance and may differ from the system involved in purely motor functions. The implications of these collective results are discussed within the theoretical framework of the origins of hemispheric specialization for human language.
Affiliation(s)
- Adrien Meguerditchian
- Department of Psychology, Research Center in Psychology of Cognition, Language and Emotion, Aix-Marseille University, 29, Av. Robert Schuman, 13621 Aix-en-Provence, France.
|
89
|
Hustad KC, Lee J. Changes in speech production associated with alphabet supplementation. Journal of Speech, Language, and Hearing Research 2008; 51:1438-50. [PMID: 18664687] [DOI: 10.1044/1092-4388(2008/07-0185)]
Abstract
PURPOSE This study examined the effect of alphabet supplementation (AS) on temporal and spectral features of speech production in individuals with cerebral palsy and dysarthria. METHOD Twelve speakers with dysarthria contributed speech samples using habitual speech and while using AS. One hundred twenty listeners orthographically transcribed speech samples. Differences between habitual and AS speech were examined for intelligibility, rate, word duration, vowel duration, pause duration, pause frequency, vowel space, and first and second formant frequency (F1 and F2) values for corner vowels. RESULTS Descriptive results showed that intelligibility was higher, rate of speech was slower, and pause duration and pause frequency were greater for AS than for habitual speech. Inferential statistics showed that vowel duration, word duration, and vowel space increased significantly for AS. Vowel space did not differ for male and female speakers; however, there was an interaction between sex and speaking condition. Changes in vowel space were accomplished by reductions in F2 for /u/. Vowel space accounted for more variability in intelligibility than rate for AS; the opposite was true for habitual speech. CONCLUSION AS is associated with temporal and spectral changes in speech production. Spectral changes associated with corner vowels appear to be more important than temporal changes.
Affiliation(s)
- Katherine C Hustad
- Department of Communicative Disorders and the Waisman Center, University of Wisconsin-Madison, 1500 Highland Avenue, Madison, WI 53705, USA.
|
90
|
Gentilucci M, Volta RD. Spoken language and arm gestures are controlled by the same motor control system. Q J Exp Psychol (Hove) 2008; 61:944-57. [PMID: 18470824] [DOI: 10.1080/17470210701625683]
Abstract
Arm movements can influence language comprehension much as semantics can influence arm movement planning. Arm movement itself can be used as a linguistic signal. We reviewed neurophysiological and behavioural evidence that manual gestures and vocal language share the same control system. Studies of primate premotor cortex and, in particular, of the so-called “mirror system”, including humans, suggest the existence of a dual hand/mouth motor command system involved in ingestion activities. This may be the platform on which a combined manual and vocal communication system was constructed. In humans, speech is typically accompanied by manual gesture, speech production itself is influenced by executing or observing transitive hand actions, and manual actions play an important role in the development of speech, from the babbling stage onwards. Behavioural data also show reciprocal influence between word and symbolic gestures. Neuroimaging and repetitive transcranial magnetic stimulation (rTMS) data suggest that the system governing both speech and gesture is located in Broca's area. In general, the presented data support the hypothesis that the hand motor-control system is involved in higher order cognition.
|
91
|
Abstract
In the present review we will summarize evidence that the control of spoken language shares the same system involved in the control of arm gestures. Studies of primate premotor cortex discovered the existence of the so-called mirror system as well as of a system of double commands to hand and mouth. These systems may have evolved initially in the context of ingestion, and later formed a platform for combined manual and vocal communication. In humans, manual gestures are integrated with speech production, when they accompany speech. Lip kinematics and parameters of voice spectra during speech production are influenced by executing or observing transitive actions (i.e. guided by an object). Manual actions also play an important role in language acquisition in children, from the babbling stage onwards. Behavioural data reported here even show a reciprocal influence between words and symbolic gestures and studies employing neuroimaging and repetitive transcranial magnetic stimulation (rTMS) techniques suggest that the system governing both speech and gesture is located in Broca's area.
Affiliation(s)
- Maurizio Gentilucci
- Dipartimento di Neuroscienze, Università di Parma, via Volturno 39, 43100 Parma, Italy.
|
92
|
The exaptation of manual dexterity for articulate speech: an electromyogram investigation. Exp Brain Res 2008; 186:603-9. [DOI: 10.1007/s00221-007-1265-9]
|
93
|
Bernardis P, Bello A, Pettenati P, Stefanini S, Gentilucci M. Manual actions affect vocalizations of infants. Exp Brain Res 2008; 184:599-603. [PMID: 18183374] [DOI: 10.1007/s00221-007-1256-x]
Abstract
Upper limb gestures, as well as transitive actions (i.e. actions performed on an object), affect speech when either executed or observed. Broca's area seems to be involved in integrating the two motor representations of arm and mouth (Bernardis and Gentilucci, Neuropsychologia, 44:178-190, 2006; Gentilucci et al., Eur J Neurosci, 19:190-202, 2004a; Neuropsychologia, 42:1554-1567, 2004b; J Cogn Neurosci, 18:1059-1074, 2006). These data are relevant to the hypothesis that language evolved from manual gestures and was gradually transformed into speech by means of a system of dual motor commands to hand and mouth (Gentilucci and Corballis, Neurosci Biobehav Rev, 30:949-960, 2006). The present study aimed to verify whether this system integrating gestures (and transitive actions) with speech is also involved in infants' language development. Vocalizations of infants aged between 11 and 13 months were recorded during both manipulation of objects of different sizes and request arm gestures towards the same objects presented by the experimenter. Frequency in the voice spectra increased when infants manipulated or gestured toward large objects, compared with the same activities directed at small objects. These data suggest that intrinsic properties of an object that evoke commands for manual interaction are used both to identify that object and to communicate.
Affiliation(s)
- Paolo Bernardis
- Dipartimento di Neuroscienze, Università di Parma, Via Volturno 39, 43100 Parma, Italy
|
94
|
Kita S, Özyürek A, Allen S, Brown A, Furman R, Ishizuka T. Relations between syntactic encoding and co-speech gestures: Implications for a model of speech and gesture production. ACTA ACUST UNITED AC 2007. [DOI: 10.1080/01690960701461426]
|
95
|
Gallese V. Before and below 'theory of mind': embodied simulation and the neural correlates of social cognition. Philos Trans R Soc Lond B Biol Sci 2007; 362:659-69. [PMID: 17301027] [PMCID: PMC2346524] [DOI: 10.1098/rstb.2006.2002]
Abstract
The automatic translation of folk psychology into newly formed brain modules specifically dedicated to mind-reading and other social cognitive abilities should be carefully scrutinized. Searching for the brain location of intentions, beliefs and desires as such might not be the best epistemic strategy for disclosing what social cognition really is. The results of neurocognitive research suggest that in the primate brain, mirror neurons, and more generally the premotor system, play a major role in several aspects of social cognition, from action and intention understanding to language processing. This evidence is presented and discussed within the theoretical frame of an embodied simulation account of social cognition. Embodied simulation and the mirror neuron system underpinning it provide the means to share communicative intentions, meaning and reference, thus granting the parity requirements of social communication.
Affiliation(s)
- Vittorio Gallese
- Department of Neuroscience, Section of Physiology, University of Parma, 43100 Parma, Italy.
|
96
|
Saletti V, Bulgheroni S, D'Incerti L, Franceschetti S, Molteni B, Airaghi G, Pantaleoni C, D'Arrigo S, Riva D. Verbal and gestural communication in children with bilateral perisylvian polymicrogyria. J Child Neurol 2007; 22:1090-8. [PMID: 17890406] [DOI: 10.1177/0883073807306247]
Abstract
We assessed intelligence and receptive and expressive language skills in 6 children, aged 7 years 9 months to 12 years 4 months, with bilateral perisylvian polymicrogyria of variable extent and with dysarthria of varying severity. In view of recent findings of a close relationship between word and gesture, we also examined the communicative use of gesture. We found that mental retardation was related to the extent of the cortical malformation; that lexical comprehension (but not morphosyntactic comprehension) and verbal production were more compromised than expected from nonverbal intellectual abilities; and that lack of verbal language was not compensated by the use of referential gestures. The results suggest that compromised verbal and gestural communication in bilateral perisylvian polymicrogyria is not due simply to mental retardation and/or dysarthria, but also to dysfunction of the Sylvian-fissure areas concerned with the totality of language processing.
Affiliation(s)
- Veronica Saletti
- Developmental Neurology Division, Istituto Nazionale Neurologico, Besta, Milano, Italy
|
97
|
Ciaramidaro A, Adenzato M, Enrici I, Erk S, Pia L, Bara BG, Walter H. The intentional network: how the brain reads varieties of intentions. Neuropsychologia 2007; 45:3105-13. [PMID: 17669444] [DOI: 10.1016/j.neuropsychologia.2007.05.011]
Abstract
Social neuroscience provides insights into the neural correlates of the human capacity to explain and predict other people's intentions, a capacity that lies at the core of the Theory of Mind (ToM) mechanism. Neuroimaging research describes a widely distributed neural system underlying ToM, including the right and left temporo-parietal junctions (TPJ), the precuneus, and the medial prefrontal cortex (MPFC). Nevertheless, there is disagreement in the literature over the key region of the ToM network: some authors point to the MPFC, others to the right TPJ. To contribute to this debate, we propose a model of a dynamic ToM network consisting of four regions. We also introduce a novel theoretical distinction among varieties of intention, which differ in the nature of the individual's pursued goal (private or social) and in the temporal dimension of the social interaction (present or future). Our results confirm the crucial role of both the MPFC and the right TPJ, but show that these areas are differentially engaged depending on the nature of the intention involved. Whereas the right TPJ and the precuneus are necessary for processing all types of prior intentions, the left TPJ and the anterior paracingulate cortex are specifically involved in understanding social intention. More specifically, the left TPJ is activated only when a subset of social intentions (communicative intentions) is involved. Taken together, these results demonstrate the progressive recruitment of the ToM network along the theoretical dimensions introduced in the present paper.
Affiliation(s)
- A Ciaramidaro
- Department of Psychology, Center for Cognitive Science, University of Turin, via Po 14, 10123 Turin, Italy
98
Willems RM, Hagoort P. Neural evidence for the interplay between language, gesture, and action: a review. Brain Lang 2007; 101:278-89. [PMID: 17416411] [DOI: 10.1016/j.bandl.2007.03.004]
Abstract
Co-speech gestures embody a form of manual action that is tightly coupled to the language system. As such, the co-occurrence of speech and co-speech gestures is an excellent example of the interplay between language and action. There are, however, other ways in which language and action can be thought of as closely related. In this paper we give an overview of studies in cognitive neuroscience that examine the neural underpinnings of links between language and action. Topics include neurocognitive studies of motor representations of speech sounds, action-related language, sign language, and co-speech gestures. We conclude that there is strong evidence for interaction between speech and gestures in the brain. This interaction, however, shares general properties with other domains in which language and action interact.
Affiliation(s)
- Roel M Willems
- F. C. Donders Centre for Cognitive Neuroimaging, Radboud University Nijmegen, P.O. Box 9101, 6500 HB Nijmegen, The Netherlands.
99
Stefanini S, Caselli MC, Volterra V. Spoken and gestural production in a naming task by young children with Down syndrome. Brain Lang 2007; 101:208-21. [PMID: 17379294] [DOI: 10.1016/j.bandl.2007.01.005]
Abstract
Lexical production in children with Down syndrome (DS) was investigated by examining spoken naming accuracy and the use of spontaneous gestures in a picture naming task. Fifteen children with DS (range 3.8-8.3 years) were compared to typically developing (TD) children matched for chronological age and developmental age (range 2.6-4.3 years). Relative to TD children, children with DS were less accurate in speech (producing a greater number of unintelligible answers), yet they produced more gestures overall and, of these, a significantly higher percentage of iconic gestures. Furthermore, the iconic gestures produced by children with DS accompanied by incorrect or no speech often expressed a concept similar to that of the target word, suggesting deeper conceptual knowledge relative to that expressed in speech alone.
Affiliation(s)
- Silvia Stefanini
- Department of Neurosciences, University of Parma, via Volturno, 39, 43100 Parma, Italy.
100