1. Hagoort P, Özyürek A. Extending the Architecture of Language From a Multimodal Perspective. Top Cogn Sci 2024. PMID: 38493475. DOI: 10.1111/tops.12728.
Abstract
Language is inherently multimodal. In spoken languages, combined spoken and visual signals (e.g., co-speech gestures) are an integral part of linguistic structure and language representation. This requires an extension of the parallel architecture, which needs to include the visual signals concomitant to speech. We present the evidence for the multimodality of language. In addition, we propose that distributional semantics might provide a format for integrating speech and co-speech gestures in a common semantic representation.
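The proposal that speech and co-speech gesture meet in a common semantic format can be illustrated with a toy distributional-semantics sketch. Everything below is a hypothetical illustration, not the authors' model: the 3-d vectors are made-up stand-ins for real embeddings, and the convex-combination rule is one simple way to place both modalities in one space.

```python
# Toy sketch: speech and gesture both mapped to vectors in one semantic
# space; a combined meaning is their weighted sum, and semantic fit is
# measured by cosine similarity. Vectors and weights are illustrative only.
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def integrate(speech_vec, gesture_vec, w_speech=0.5):
    # Common representation: convex combination of the modality vectors.
    return [w_speech * s + (1 - w_speech) * g
            for s, g in zip(speech_vec, gesture_vec)]

# Made-up embeddings: a spoken "drink" and a drinking gesture point in
# similar directions; a pointing gesture does not.
speech_drink = [0.9, 0.1, 0.0]
gesture_drink = [0.8, 0.2, 0.1]
gesture_point = [0.0, 0.1, 0.9]

combined = integrate(speech_drink, gesture_drink)
congruent = cosine(speech_drink, gesture_drink)
incongruent = cosine(speech_drink, gesture_point)
```

On this sketch, a congruent speech-gesture pair scores higher than an incongruent one, which is the kind of graded semantic fit a distributional format would provide.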
Affiliations:
- Peter Hagoort: Max Planck Institute for Psycholinguistics, Nijmegen; Donders Institute for Brain, Cognition and Behaviour, Nijmegen
- Aslı Özyürek: Max Planck Institute for Psycholinguistics, Nijmegen; Donders Institute for Brain, Cognition and Behaviour, Nijmegen
2. Guan CQ, Meng W. Facilitative Effects of Embodied English Instruction in Chinese Children. Front Psychol 2022; 13:915952. PMID: 35911001. PMCID: PMC9331189. DOI: 10.3389/fpsyg.2022.915952.
Abstract
Research into the lexical quality of word representations suggests that building a strong sound, form, and meaning association is a crucial first step for vocabulary learning. For children who are learning a second language (L2), explicit instruction on word morphology is generally more focused on whole-word, rather than sub-lexical, meaning. Though morphological training is emphasized in first language (L1) vocabulary instruction, it is unknown whether this training facilitates L2 word learning through sub-lexical support. To test this, we designed three experimental learning conditions involving embodied morphological instruction [i.e., hand writing roots (HR), dragging roots (DR), and gesturing roots (GR)] to compare against a control condition. One hundred students were randomly assigned to the four groups. Pre- and post-tests examining knowledge of word meanings, forms, and sounds were administered. Results of mixed linear modeling revealed that all three forms of embodied morphological instruction on roots enhanced L2 vocabulary learning. Hand writing roots facilitated sound-meaning integration in all category tasks for access to word form and in one task for word sound-form association. By contrast, GR facilitated meaning-based integration in two out of three category tasks for word form-meaning association, and chunking and DR facilitated meaning-based integration in one out of three. These results provide evidence that the underlying embodied morphological training mechanism contributes to L2 vocabulary learning during direct instruction. Future directions and implications are discussed.
Affiliations:
- Connie Qun Guan: School of Foreign Studies, Beijing Language and Culture University, Beijing, China; Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, United States
- Wanjin Meng: Department of Moral, Psychological and Special Education, China National Institute of Education Sciences, Beijing, China
Correspondence: Connie Qun Guan; Wanjin Meng

3. Hand Preference in Adults' Referential Gestures during Storytelling: Testing for Effects of Bilingualism, Language Ability, Sex and Age. Symmetry (Basel) 2021. DOI: 10.3390/sym13101776.
Abstract
Previous studies have shown that gestures are mediated by the left hemisphere. The primary purpose of this study was to test whether most gestures are also asymmetrical, i.e., produced with the right hand. We also tested four predictors of the degree of right-hand gesture use: bilingualism, language ability, sex, and age. These factors have been related to differences in the degree of language lateralization. English monolinguals, French–English bilinguals, and French monolinguals watched a cartoon and told the story back. For the gestures they produced while speaking, we calculated the percentage produced with the right hand. As predicted, the majority of gestures were right-handed (60%). Bilingualism, language ability, and age were not significantly related to hand choice in either English or French. In English, males tended to produce more right-handed gestures than females. These results raise doubts as to whether hand preference in gestures reflects speech lateralization. We discuss possible alternative explanations for a right-hand preference.
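The study's central measure is simply the percentage of a speaker's gestures produced with the right hand. A minimal sketch of that computation, with an assumed 'R'/'L' coding scheme and made-up sample data (the study's actual coding records are not reproduced here):

```python
# Percentage of right-handed gestures from per-gesture hand codes.
# The coding scheme ('R'/'L') and the sample below are illustrative only.
def right_hand_percentage(gestures):
    """gestures: list of 'R' or 'L' codes, one per gesture produced."""
    if not gestures:
        return 0.0
    return 100.0 * sum(1 for g in gestures if g == 'R') / len(gestures)

coded = ['R', 'R', 'L', 'R', 'L', 'R', 'R', 'L', 'R', 'R']
pct_right = right_hand_percentage(coded)  # 70.0 for this toy sample
```

A value reliably above 50% across speakers is what the abstract reports (about 60% right-handed gestures).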
4. Kandana Arachchige KG, Simoes Loureiro I, Blekic W, Rossignol M, Lefebvre L. The Role of Iconic Gestures in Speech Comprehension: An Overview of Various Methodologies. Front Psychol 2021; 12:634074. PMID: 33995189. PMCID: PMC8118122. DOI: 10.3389/fpsyg.2021.634074.
Abstract
Iconic gesture-speech integration is a relatively recent field of investigation, and the numerous researchers studying its various aspects have produced equally diverse results. The definition of iconic gestures is often overlooked in the interpretation of results. Furthermore, while most behavioral studies have demonstrated an advantage of bimodal presentation, brain activity studies show a diversity of results regarding the brain regions involved in processing this integration. Clinical studies also yield mixed results, some suggesting parallel processing channels, others a unique and integrated channel. This review aims to draw attention to the methodological variations in research on iconic gesture-speech integration and how they affect conclusions regarding the underlying phenomena. It also attempts to draw together findings from other relevant research and to suggest potential areas for further investigation, in order to better understand the processes at play during gesture-speech integration.
Affiliations:
- Wivine Blekic: Cognitive Psychology and Neuropsychology, University of Mons, Mons, Belgium
- Mandy Rossignol: Cognitive Psychology and Neuropsychology, University of Mons, Mons, Belgium
- Laurent Lefebvre: Cognitive Psychology and Neuropsychology, University of Mons, Mons, Belgium

5. Verbal working memory and co-speech gesture processing. Brain Cogn 2020; 146:105640. PMID: 33171343. DOI: 10.1016/j.bandc.2020.105640.
Abstract
Multimodal discourse requires an assembly of cognitive processes that are uniquely recruited for language comprehension in social contexts. In this study, we investigated the role of verbal working memory for the online integration of speech and iconic gestures. Participants memorized and rehearsed a series of auditorily presented digits in low (one digit) or high (four digits) memory load conditions. To observe how verbal working memory load impacts online discourse comprehension, ERPs were recorded while participants watched discourse videos containing either congruent or incongruent speech-gesture combinations during the maintenance portion of the memory task. While expected speech-gesture congruity effects were found in the low memory load condition, high memory load trials elicited enhanced frontal positivities that indicated a unique interaction between online speech-gesture integration and the availability of verbal working memory resources. This work contributes to an understanding of discourse comprehension by demonstrating that language processing in a multimodal context is subject to the relationship between cognitive resource availability and the degree of controlled processing required for task performance. We suggest that verbal working memory is less important for speech-gesture integration than it is for mediating speech processing under high task demands.
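The congruity effects described above are typically quantified as an ERP difference: mean amplitude for incongruent minus congruent speech-gesture trials within a time window. A minimal numpy sketch with made-up data; the sampling rate, time window, trial counts, and effect size are all assumptions for illustration, not this study's parameters:

```python
# Illustrative ERP "congruity effect": incongruent-minus-congruent mean
# amplitude in a 300-500 ms window. All values here are synthetic.
import numpy as np

def mean_amplitude(trials, t, t_start, t_end):
    """Average voltage across trials and samples within [t_start, t_end]."""
    window = (t >= t_start) & (t <= t_end)
    return trials[:, window].mean()

fs = 500                                # assumed sampling rate, Hz
t = np.arange(-0.2, 0.8, 1.0 / fs)      # epoch from -200 to 800 ms
rng = np.random.default_rng(0)

# Toy trials: incongruent trials carry an extra negativity around 400 ms,
# mimicking an N400-like congruity effect on top of noise.
congruent = rng.normal(0, 1, (30, t.size))
incongruent = (rng.normal(0, 1, (30, t.size))
               - 2.0 * np.exp(-((t - 0.4) / 0.05) ** 2))

effect = (mean_amplitude(incongruent, t, 0.3, 0.5)
          - mean_amplitude(congruent, t, 0.3, 0.5))
```

A negative `effect` in this window is the classic congruity signature; the study's finding is that this pattern held under low memory load but changed to frontal positivities under high load.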
6. Özer D, Göksun T. Gesture Use and Processing: A Review on Individual Differences in Cognitive Resources. Front Psychol 2020; 11:573555. PMID: 33250817. PMCID: PMC7674851. DOI: 10.3389/fpsyg.2020.573555.
Abstract
Speakers use spontaneous hand gestures as they speak and think. These gestures serve many functions for the speakers who produce them as well as for the listeners who observe them. To date, studies in the gesture literature have mostly focused on group comparisons or on external sources of variation to examine when people use, process, and benefit from using and observing gestures. However, there are also internal sources of variation in gesture use and processing. Depending on their cognitive dispositions, people differ in how frequently they gesture, how salient their gestures are, for what purposes they produce gestures, and how much they benefit from using and seeing gestures during comprehension and learning. This review addresses, from a functionalist perspective, how individual differences in cognitive skills relate to how people employ gestures in production and comprehension across ages (from infancy through adulthood to healthy aging). We conclude that speakers and listeners can use gestures as a compensatory tool during communication and thinking, one that interacts with individuals' cognitive dispositions.
Affiliations:
- Demet Özer: Department of Psychology, Koç University, Istanbul, Turkey

7. Smith EG, Condy E, Anderson A, Thurm A, Manwaring SS, Swineford L, Gandjbakhche A, Redcay E. Functional near-infrared spectroscopy in toddlers: Neural differentiation of communicative cues and relation to future language abilities. Dev Sci 2020; 23:e12948. PMID: 32048419. PMCID: PMC7685129. DOI: 10.1111/desc.12948.
Abstract
The toddler and preschool years are a time of significant development in both expressive and receptive communication abilities. However, little is known about the neurobiological underpinnings of language development during this period, likely due to difficulties acquiring functional neuroimaging data. Functional near‐infrared spectroscopy (fNIRS) is a motion‐tolerant neuroimaging technique that assesses cortical brain activity and can be used in very young children. Here, we use fNIRS during perception of communicative and noncommunicative speech and gestures in typically developing 2‐ and 3‐year‐olds (Study 1, n = 15, n = 12 respectively) and in a sample of 2‐year‐olds with both fNIRS data collected at age 2 and language outcome data at age 3 (Study 2, n = 18). In Study 1, 2‐ and 3‐year‐olds differentiated between communicative and noncommunicative stimuli as well as between speech and gestures in the left lateral frontal region. However, 2‐year‐olds showed different patterns of activation from 3‐year‐olds in right medial frontal regions. In Study 2, which included two toddlers identified with early language delays along with 16 typically developing toddlers, neural differentiation of communicative stimuli in the right medial frontal region at age 2 predicted receptive language at age 3. Specifically, after accounting for variance related to verbal ability at age 2, increased neural activation for communicative gestures (vs. both communicative speech and noncommunicative gestures) at age 2 predicted higher receptive language scores at age 3. These results are discussed in the context of the underlying mechanisms of toddler language development and use of fNIRS in prediction of language outcomes.
Affiliations:
- Elizabeth G Smith: University of Maryland, College Park, MD, USA; National Institute of Child Health and Human Development, Bethesda, MD, USA
- Emma Condy: National Institute of Child Health and Human Development, Bethesda, MD, USA
- Afrouz Anderson: National Institute of Child Health and Human Development, Bethesda, MD, USA
- Audrey Thurm: National Institute of Mental Health, Bethesda, MD, USA
- Amir Gandjbakhche: National Institute of Child Health and Human Development, Bethesda, MD, USA

8. Wroblewski A, He Y, Straube B. Dynamic Causal Modelling suggests impaired effective connectivity in patients with schizophrenia spectrum disorders during gesture-speech integration. Schizophr Res 2020; 216:175-183. PMID: 31882274. DOI: 10.1016/j.schres.2019.12.005.
Abstract
Integrating visual and auditory information during gesture-speech integration (GSI) is important for successful social communication, which is often impaired in schizophrenia. Several studies have suggested the posterior superior temporal sulcus (pSTS) as a relevant multisensory integration site. However, intact STS activation patterns have often been reported in patients. Thus, here we used Dynamic Causal Modelling (DCM) to analyze whether information processing during GSI is impaired at the network level in schizophrenia spectrum disorders (SSD). We investigated GSI in three different samples. First, we replicated a recently published connectivity model for GSI in a group of healthy subjects (n = 19). Second, we investigated differences between patients with SSD and a matched healthy control group (n = 17 each). Participants were presented with videos of an actor performing intrinsically meaningful gestures accompanied by spoken sentences in German or Russian, or simply speaking a German sentence without gestures. Across all groups, fMRI analyses revealed similar activation patterns, and DCM analyses yielded the same winning model for GSI, directly replicating previous results. However, patients showed significantly reduced connectivity in the verbal pathway (from the left middle temporal gyrus (MTG) to the left STS). The clinical significance of this connection is supported by its correlations with the severity of concretism and with a subscale of negative symptoms (SANS). Our model confirms the importance of the pSTS as an integration site during audio-visual integration. Patients showed generally intact connectivity during GSI but impaired information transfer via the verbal pathway, which might underlie the interpersonal communication problems of patients with SSD.
Affiliations:
- Adrian Wroblewski: Translational Neuroimaging Marburg (TNM), Department of Psychiatry and Psychotherapy, University of Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
- Yifei He: Translational Neuroimaging Marburg (TNM), Department of Psychiatry and Psychotherapy, University of Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany; Faculty of Translation, Language, and Cultural Studies, University of Mainz, Germersheim, Germany
- Benjamin Straube: Translational Neuroimaging Marburg (TNM), Department of Psychiatry and Psychotherapy, University of Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany

9. Jouravlev O, Zheng D, Balewski Z, Le Arnz Pongos A, Levan Z, Goldin-Meadow S, Fedorenko E. Speech-accompanying gestures are not processed by the language-processing mechanisms. Neuropsychologia 2019; 132:107132. PMID: 31276684. PMCID: PMC6708375. DOI: 10.1016/j.neuropsychologia.2019.107132.
Abstract
Speech-accompanying gestures constitute one information channel during communication. Some have argued that processing gestures engages the brain regions that support language comprehension. However, studies that have been used as evidence for shared mechanisms suffer from one or more of the following limitations: they (a) have not directly compared activations for gesture and language processing in the same study and relied on the fallacious reverse inference (Poldrack, 2006) for interpretation, (b) relied on traditional group analyses, which are bound to overestimate overlap (e.g., Nieto-Castañon and Fedorenko, 2012), (c) failed to directly compare the magnitudes of response (e.g., Chen et al., 2017), and (d) focused on gestures that may have activated the corresponding linguistic representations (e.g., "emblems"). To circumvent these limitations, we used fMRI to examine responses to gesture processing in language regions defined functionally in individual participants (e.g., Fedorenko et al., 2010), including directly comparing effect sizes, and covering a broad range of spontaneously generated co-speech gestures. Whenever speech was present, language regions responded robustly (and to a similar degree regardless of whether the video contained gestures or grooming movements). In contrast, and critically, responses in the language regions were low - at or slightly above the fixation baseline - when silent videos were processed (again, regardless of whether they contained gestures or grooming movements). Brain regions outside of the language network, including some in close proximity to its regions, differentiated between gestures and grooming movements, ruling out the possibility that the gesture/grooming manipulation was too subtle. Behavioral studies on the critical video materials further showed robust differentiation between the gesture and grooming conditions. 
In summary, contra prior claims, language-processing regions do not respond to co-speech gestures in the absence of speech, suggesting that these regions are selectively driven by linguistic input (e.g., Fedorenko et al., 2011). Although co-speech gestures are uncontroversially important in communication, they appear to be processed in brain regions distinct from those that support language comprehension, similar to other extra-linguistic communicative signals, like facial expressions and prosody.
Affiliations:
- Olessia Jouravlev: Massachusetts Institute of Technology, Cambridge, MA, 02139, USA; Carleton University, Ottawa, ON K1S 5B6, Canada
- David Zheng: Princeton University, Princeton, NJ, 08544, USA
- Zuzanna Balewski: Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Zena Levan: University of Chicago, Chicago, IL, 60637, USA
- Evelina Fedorenko: Massachusetts Institute of Technology, Cambridge, MA, 02139, USA; McGovern Institute for Brain Research, Cambridge, MA, 02139, USA; Massachusetts General Hospital, Boston, MA, 02114, USA

10. Drijvers L, van der Plas M, Özyürek A, Jensen O. Native and non-native listeners show similar yet distinct oscillatory dynamics when using gestures to access speech in noise. Neuroimage 2019; 194:55-67. DOI: 10.1016/j.neuroimage.2019.03.032.
11. Support for parents of deaf children: Common questions and informed, evidence-based answers. Int J Pediatr Otorhinolaryngol 2019; 118:134-142. PMID: 30623850. DOI: 10.1016/j.ijporl.2018.12.036.
Abstract
To assist medical and hearing-science professionals in supporting parents of deaf children, we have identified common questions that parents may have and provide evidence-based answers. In doing so, a compassionate and positive narrative about deafness and deaf children is offered, one that relies on recent research evidence regarding the critical nature of early exposure to a fully accessible visual language, which in the United States is American Sign Language (ASL). This evidence includes the role of sign language in language acquisition, cognitive development, and literacy. In order for parents to provide a nurturing and anxiety-free environment for early childhood development, signing at home is important even if their child also has the additional nurturing and care of a signing community. It is not just the early years of a child's life that matter for language acquisition; it's the early months, the early weeks, even the early days. Deaf children cannot wait for accessible language input. The whole family must learn simultaneously as the deaf child learns. Even moderate fluency on the part of the family benefits the child enormously. And learning the sign language together can be one of the strongest bonding experiences that the family and deaf child have.
12. Drijvers L, Özyürek A, Jensen O. Alpha and Beta Oscillations Index Semantic Congruency between Speech and Gestures in Clear and Degraded Speech. J Cogn Neurosci 2018; 30:1086-1097. DOI: 10.1162/jocn_a_01301.
Abstract
Previous work revealed that visual semantic information conveyed by gestures can enhance degraded speech comprehension, but the mechanisms underlying these integration processes under adverse listening conditions remain poorly understood. We used MEG to investigate how oscillatory dynamics support speech–gesture integration when integration load is manipulated by auditory (e.g., speech degradation) and visual semantic (e.g., gesture congruency) factors. Participants were presented with videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching (mixing gesture + “mixing”) or mismatching (drinking gesture + “walking”) gesture. In clear speech, alpha/beta power was more suppressed in the left inferior frontal gyrus and motor and visual cortices when integration load increased in response to mismatching versus matching gestures. In degraded speech, beta power was less suppressed over posterior STS and medial temporal lobe for mismatching compared with matching gestures, showing that integration load was lowest when speech was degraded and mismatching gestures could not be integrated and disambiguate the degraded signal. Our results thus provide novel insights on how low-frequency oscillatory modulations in different parts of the cortex support the semantic audiovisual integration of gestures in clear and degraded speech: When speech is clear, the left inferior frontal gyrus and motor and visual cortices engage because higher-level semantic information increases semantic integration load. When speech is degraded, posterior STS/middle temporal gyrus and medial temporal lobe are less engaged because integration load is lowest when visual semantic information does not aid lexical retrieval and speech and gestures cannot be integrated.
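The alpha/beta power suppression reported above comes down to band-limited spectral power. A minimal numpy illustration with a synthetic signal; the sampling rate, band edges, and 10 Hz tone are assumptions for the example, and real MEG pipelines use time-resolved methods (e.g., wavelets or multitapers) rather than a single FFT:

```python
# Band power from the FFT of a signal: mean spectral power between two
# frequencies. Synthetic data only; not this study's MEG pipeline.
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean spectral power between f_lo and f_hi (Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

fs = 250                                       # assumed sampling rate, Hz
t = np.arange(0, 2, 1.0 / fs)                  # 2 s epoch
alpha_osc = np.sin(2 * np.pi * 10 * t)         # strong 10 Hz component
suppressed = 0.3 * np.sin(2 * np.pi * 10 * t)  # "suppressed" condition

p_full = band_power(alpha_osc, fs, 8, 12)      # alpha band, 8-12 Hz
p_supp = band_power(suppressed, fs, 8, 12)
```

"More suppressed" alpha/beta power in a condition corresponds to a lower band-power value like `p_supp` relative to a baseline or comparison condition.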
Affiliations:
- Asli Özyürek: Radboud University; Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands

13. Straube B, Wroblewski A, Jansen A, He Y. The connectivity signature of co-speech gesture integration: The superior temporal sulcus modulates connectivity between areas related to visual gesture and auditory speech processing. Neuroimage 2018; 181:539-549. PMID: 30025854. DOI: 10.1016/j.neuroimage.2018.07.037.
Abstract
Humans integrate information communicated by speech and gestures. Functional magnetic resonance imaging (fMRI) studies suggest that the posterior superior temporal sulcus (STS) and adjacent gyri are relevant for multisensory integration. However, a connectivity model representing this essential combinatory process is still missing. Here, we used dynamic causal modeling for fMRI to analyze the effective connectivity pattern between the middle temporal gyrus (MTG), occipital cortex (OC), and STS (associated with auditory verbal, visual gesture-related, and integrative processing, respectively), to unveil the neural mechanisms underlying the integration of intrinsically meaningful gestures (e.g., the thumbs-up gesture) and corresponding speech. Twenty participants were presented with videos of an actor either performing intrinsically meaningful gestures in the context of German or Russian sentences, or speaking a German sentence without gesture, while performing a content judgment task. The connectivity analyses resulted in a winning model that included bidirectional intrinsic connectivity between all areas. Furthermore, the model included modulations of both connections to the STS (OC→STS; MTG→STS) and non-linear modulatory effects of the STS on the bidirectional connections between MTG and OC. Coupling strength in the occipital pathway (OC→STS) correlated with gesture-related advantages in task performance, whereas coupling strength in the temporal pathway (MTG→STS) correlated with performance in the speech-only condition. Coupling between MTG and OC correlated negatively with subsequent memory performance for sentences of the gesture-German condition. Our model provides a first step towards a better understanding of speech-gesture integration at the network level. It corroborates the importance of the STS during audio-visual integration by showing that this region inhibits direct auditory-visual coupling.
Affiliations:
- Benjamin Straube: Translational Neuroimaging Marburg (TNM), Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), Philipps-University Marburg, Germany
- Adrian Wroblewski: Translational Neuroimaging Marburg (TNM), Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), Philipps-University Marburg, Germany
- Andreas Jansen: Laboratory for Multimodal Neuroimaging (LMN), Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), Philipps-University Marburg, Germany; Core-Facility Brainimaging, Faculty of Medicine, University of Marburg, Germany
- Yifei He: Translational Neuroimaging Marburg (TNM), Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), Philipps-University Marburg, Germany

14. Echeverría-Palacio CM, Uscátegui-Daccarett A, Talero-Gutiérrez C. Auditory, visual, and proprioceptive integration as a substrate of language development [Integración auditiva, visual y propioceptiva como sustrato del desarrollo del lenguaje]. Revista de la Facultad de Medicina 2018. DOI: 10.15446/revfacmed.v66n3.60490.
Abstract
Introduction: Language development is a complex process regarded as an evolutionary marker of the human species, and it can be understood through the contribution of the sensory systems and of the events that occur during critical periods of development.
Objective: To review how auditory, visual, and proprioceptive information is integrated and how that integration is reflected in language development, highlighting the role of social interaction as a context that favors this process.
Materials and methods: The MeSH terms "Language Development", "Visual Perception", "Hearing", and "Proprioception" were searched in the MEDLINE and Embase databases, limiting the main search to articles written in English, Spanish, and Portuguese.
Results: The starting point is auditory information, which during the first year of life allows discrimination of the elements of the environment that correspond to language; this is followed by a peak in acquisition and then a stage of maximal linguistic discrimination. Visual information provides the correspondence of language to images, the substrate for naming and word comprehension, as well as the interpretation and imitation of the emotional component of gesturing. Proprioceptive information provides feedback on the motor execution patterns used in language production.
Conclusion: Studying language development from the perspective of sensory integration offers new approaches for addressing and intervening in its deviations.
15. Spatial–temporal dynamics of gesture–speech integration: a simultaneous EEG-fMRI study. Brain Struct Funct 2018; 223:3073-3089. DOI: 10.1007/s00429-018-1674-5.
16. Demir-Lira ÖE, Asaridou SS, Raja Beharelle A, Holt AE, Goldin-Meadow S, Small SL. Functional neuroanatomy of gesture-speech integration in children varies with individual differences in gesture processing. Dev Sci 2018. PMID: 29516653. DOI: 10.1111/desc.12648.
Abstract
Gesture is an integral part of children's communicative repertoire. However, little is known about the neurobiology of speech and gesture integration in the developing brain. We investigated how 8- to 10-year-old children processed gesture that was essential to understanding a set of narratives. We asked whether the functional neuroanatomy of gesture-speech integration varies as a function of (1) the content of speech, and/or (2) individual differences in how gesture is processed. When gestures provided missing information not present in the speech (i.e., disambiguating gesture; e.g., "pet" + flapping palms = bird), the presence of gesture led to increased activity in inferior frontal gyri, the right middle temporal gyrus, and the left superior temporal gyrus, compared to when gesture provided redundant information (i.e., reinforcing gesture; e.g., "bird" + flapping palms = bird). This pattern of activation was found only in children who were able to successfully integrate gesture and speech behaviorally, as indicated by their performance on post-test story comprehension questions. Children who did not glean meaning from gesture did not show differential activation across the two conditions. Our results suggest that the brain activation pattern for gesture-speech integration in children overlaps with-but is broader than-the pattern in adults performing the same task. Overall, our results provide a possible neurobiological mechanism that could underlie children's increasing ability to integrate gesture and speech over childhood, and account for individual differences in that integration.
Collapse
Affiliation(s)
| | - Salomi S Asaridou
- Department of Neurology, University of California, Irvine, Irvine, California, USA
| | - Anjali Raja Beharelle
- Laboratory for Social and Neural Systems Research, Department of Economics, University of Zurich, Zurich, Switzerland
| | - Anna E Holt
- Department of Neurology, University of California, Irvine, Irvine, California, USA
| | | | - Steven L Small
- Department of Neurology, University of California, Irvine, Irvine, California, USA
| |
Collapse
|
17
|
Brain regions and functional interactions supporting early word recognition in the face of input variability. Proc Natl Acad Sci U S A 2017; 114:7588-7593. [PMID: 28674020 DOI: 10.1073/pnas.1617589114] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Perception and cognition in infants have been traditionally investigated using habituation paradigms, assuming that babies' memories in laboratory contexts are best constructed after numerous repetitions of the very same stimulus in the absence of interference. A crucial, yet open, question regards how babies deal with stimuli experienced in a fashion similar to everyday learning situations-namely, in the presence of interfering stimuli. To address this question, we used functional near-infrared spectroscopy to test 40 healthy newborns on their ability to encode words presented in concomitance with other words. The results evidenced a habituation-like hemodynamic response during encoding in the left-frontal region, which was associated with a progressive decrement of the functional connections between this region and the left-temporal, right-temporal, and right-parietal regions. In a recognition test phase, a characteristic neural signature of recognition recruited first the right-frontal region and subsequently the right-parietal ones. Connections originating from the right-temporal regions to these areas emerged when newborns listened to the familiar word in the test phase. These findings suggest a neural specialization at birth characterized by the lateralization of memory functions: the interplay between temporal and left-frontal regions during encoding and between temporo-parietal and right-frontal regions during recognition of speech sounds. Most critically, the results show that newborns are capable of retaining the sound of specific words despite hearing other stimuli during encoding. Thus, habituation designs that include various items may be as effective for studying early memory as repeated presentation of a single word.
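The habituation-like decrement in the hemodynamic response described above can be quantified, in its simplest form, as the slope of response amplitude across stimulus repetitions. The function and toy values below are illustrative assumptions, not the study's actual fNIRS analysis.

```python
import numpy as np

def habituation_slope(block_responses):
    """Slope of response amplitude across stimulus repetitions.

    A negative slope indicates a habituation-like decrement of the
    response over successive presentations.
    """
    y = np.asarray(block_responses, dtype=float)
    x = np.arange(len(y), dtype=float)
    # Least-squares linear fit; polyfit returns [slope, intercept].
    slope, _intercept = np.polyfit(x, y, 1)
    return float(slope)

# Toy example: amplitudes that shrink across five repetitions.
slope = habituation_slope([5.0, 4.0, 3.0, 2.0, 1.0])
```

A real analysis would fit the hemodynamic response per channel and test the slope statistically; this sketch only captures the direction of the effect.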
Collapse
|
18
|
Wakefield EM, Novack MA, Goldin-Meadow S. Unpacking the Ontogeny of Gesture Understanding: How Movement Becomes Meaningful Across Development. Child Dev 2017; 89:e245-e260. [PMID: 28504410 DOI: 10.1111/cdev.12817] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Gestures, hand movements that accompany speech, affect children's learning, memory, and thinking (e.g., Goldin-Meadow, 2003). However, it remains unknown how children distinguish gestures from other kinds of actions. In this study, 4- to 9-year-olds (n = 339) and adults (n = 50) described one of three scenes: (a) an actor moving objects, (b) an actor moving her hands in the presence of objects (but not touching them), or (c) an actor moving her hands in the absence of objects. Participants across all ages were equally able to identify actions on objects as goal directed, but the ability to identify empty-handed movements as representational actions (i.e., as gestures) increased with age and was influenced by the presence of objects, especially in older children.
Collapse
|
19
|
Weisberg J, Hubbard AL, Emmorey K. Multimodal integration of spontaneously produced representational co-speech gestures: an fMRI study. LANGUAGE, COGNITION AND NEUROSCIENCE 2016; 32:158-174. [PMID: 29130054 PMCID: PMC5675577 DOI: 10.1080/23273798.2016.1245426] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/17/2016] [Accepted: 09/05/2016] [Indexed: 05/31/2023]
Abstract
To examine whether more ecologically valid co-speech gesture stimuli elicit brain responses consistent with those found by studies that relied on scripted stimuli, we presented participants with spontaneously produced, meaningful co-speech gesture during fMRI scanning (n = 28). Speech presented with gesture (versus either presented alone) elicited heightened activity in bilateral posterior superior temporal, premotor, and inferior frontal regions. Within left temporal and premotor, but not inferior frontal regions, we identified small clusters with superadditive responses, suggesting that these discrete regions support both sensory and semantic integration. In contrast, surrounding areas and the inferior frontal gyrus may support either sensory or semantic integration. Reduced activation for speech with gesture in language-related regions indicates allocation of fewer neural resources when meaningful gestures accompany speech. Sign language experience did not affect co-speech gesture activation. Overall, our results indicate that scripted stimuli have minimal confounding influences; however, they may miss subtle superadditive effects.
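A superadditivity criterion of the kind identified above (the bimodal response exceeding the sum of the two unimodal responses) can be sketched as a voxelwise test. The function name, baseline handling, and toy values are illustrative assumptions, not the study's pipeline.

```python
import numpy as np

def superadditive_mask(speech_gesture, speech_only, gesture_only, baseline=0.0):
    """Voxelwise superadditivity test.

    A voxel is flagged when the bimodal (speech + gesture) response
    exceeds the sum of the two unimodal responses, each measured
    relative to baseline.
    """
    sg = np.asarray(speech_gesture, dtype=float)
    s = np.asarray(speech_only, dtype=float)
    g = np.asarray(gesture_only, dtype=float)
    return (sg - baseline) > (s - baseline) + (g - baseline)

# Toy example with three "voxels": only the first is superadditive
# (5.0 > 2.0 + 1.5), the others respond additively or less.
mask = superadditive_mask([5.0, 3.0, 2.0], [2.0, 2.0, 1.5], [1.5, 1.5, 1.0])
```

In practice such a test is run on contrast estimates with an appropriate statistical threshold; the sketch shows only the defining inequality.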
Collapse
Affiliation(s)
- Jill Weisberg
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA
| | - Amy Lynn Hubbard
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA
| | - Karen Emmorey
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA
| |
Collapse
|
20
|
Redcay E, Velnoskey KR, Rowe ML. Perceived communicative intent in gesture and language modulates the superior temporal sulcus. Hum Brain Mapp 2016; 37:3444-61. [PMID: 27238550 PMCID: PMC6867447 DOI: 10.1002/hbm.23251] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2015] [Revised: 03/25/2016] [Accepted: 04/27/2016] [Indexed: 11/08/2022] Open
Abstract
Behavioral evidence and theory suggest gesture and language processing may be part of a shared cognitive system for communication. While much research demonstrates both gesture and language recruit regions along perisylvian cortex, relatively less work has tested functional segregation within these regions on an individual level. Additionally, while most work has focused on a shared semantic network, less has examined shared regions for processing communicative intent. To address these questions, functional and structural MRI data were collected from 24 adult participants while they viewed videos of an experimenter producing communicative, Participant-Directed Gestures (PDG) (e.g., "Hello, come here"), noncommunicative Self-adaptor Gestures (SG) (e.g., smoothing hair), and three written text conditions: (1) Participant-Directed Sentences (PDS), matched in content to PDG, (2) Third-person Sentences (3PS), describing a character's actions from a third-person perspective, and (3) meaningless sentences, Jabberwocky (JW). Surface-based conjunction and individual functional region of interest analyses identified shared neural activation between gesture (PDGvsSG) and language processing using two different language contrasts. Conjunction analyses of gesture (PDGvsSG) and Third-person Sentences versus Jabberwocky revealed overlap within left anterior and posterior superior temporal sulcus (STS). Conjunction analyses of gesture and Participant-Directed Sentences to Third-person Sentences revealed regions sensitive to communicative intent, including the left middle and posterior STS and left inferior frontal gyrus. Further, parametric modulation using participants' ratings of stimuli revealed sensitivity of left posterior STS to individual perceptions of communicative intent in gesture. These data highlight an important role of the STS in processing participant-directed communicative intent through gesture and language.
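The conjunction analyses described above can be approximated with a minimum-statistic test, in which a voxel counts as shared only if it passes threshold in both contrast maps. The threshold and toy values here are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def conjunction(map_a, map_b, threshold=3.1):
    """Minimum-statistic conjunction of two statistical maps.

    A voxel survives only if its statistic exceeds the threshold in
    *both* contrasts, i.e., if the voxelwise minimum of the two maps
    exceeds the threshold.
    """
    a = np.asarray(map_a, dtype=float)
    b = np.asarray(map_b, dtype=float)
    return np.minimum(a, b) > threshold

# Toy example with three "voxels": only the first passes in both maps.
shared = conjunction([4.0, 2.0, 5.0], [3.5, 4.0, 2.0])
```

Surface-based implementations apply the same logic vertexwise after projecting each contrast to the cortical surface.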
Collapse
Affiliation(s)
- Elizabeth Redcay
- Department of Psychology, University of Maryland, College Park, Maryland
| | | | - Meredith L. Rowe
- Graduate School of Education, Harvard University, Cambridge, Massachusetts
| |
Collapse
|
21
|
Braddock BA, Gabany C, Shah M, Armbrecht ES, Twyman KA. Patterns of Gesture Use in Adolescents With Autism Spectrum Disorder. AMERICAN JOURNAL OF SPEECH-LANGUAGE PATHOLOGY 2016; 25:408-415. [PMID: 27258802 DOI: 10.1044/2015_ajslp-14-0112] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/06/2014] [Accepted: 10/23/2015] [Indexed: 06/05/2023]
Abstract
PURPOSE The purpose of this study was to examine patterns of spontaneous gesture use in a sample of adolescents with autism spectrum disorder (ASD). METHOD Thirty-five adolescents with ASD ages 11 to 16 years participated (mean age = 13.51 years; 29 boys, 6 girls). Participants' spontaneous speech and gestures produced during a narrative task were later coded from videotape. Parents were also asked to complete questionnaires to quantify adolescents' general communication ability and autism severity. RESULTS Adolescents who did not gesture and those who produced at least 1 gesture did not differ significantly in general communication ability or autism severity. Subanalyses including only adolescents who produced gesture indicated a statistically significant negative association between gesture rate and general communication ability, specifically speech and syntax subscale scores. Adolescents who gestured produced higher proportions of iconic gestures and used gesture mostly to add information to speech. CONCLUSIONS The findings relate spontaneous gesture use to underlying strengths and weaknesses in adolescents' speech and syntactical language development. More research examining cospeech gesture in fluent speakers with ASD is needed.
Collapse
|
22
|
Göksun T, Lehet M, Malykhina K, Chatterjee A. Spontaneous gesture and spatial language: Evidence from focal brain injury. BRAIN AND LANGUAGE 2015; 150:1-13. [PMID: 26283001 PMCID: PMC4663137 DOI: 10.1016/j.bandl.2015.07.012] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/12/2015] [Revised: 07/27/2015] [Accepted: 07/30/2015] [Indexed: 05/26/2023]
Abstract
People often use spontaneous gestures when communicating spatial information. We investigated focal brain-injured individuals to test the hypotheses that (1) naming of the motion event components manner and path (represented by verbs and prepositions in English) is selectively impaired, and (2) gestures compensate for impaired naming. Patients with left or right hemisphere damage (LHD or RHD) and elderly control participants were asked to describe motion events (e.g., running across) depicted in brief videos. Damage to the left posterior middle frontal gyrus, left inferior frontal gyrus, and left anterior superior temporal gyrus (aSTG) produced impairments in naming paths of motion; lesions to the left caudate and adjacent white matter produced impairments in naming manners of motion. While the frequency of spontaneous gestures was low, lesions to the left aSTG correlated significantly with greater production of path gestures. These results suggest that naming of prepositions and verbs can be impaired separately, and that gesture production compensates for naming impairments when damage involves the left aSTG.
Collapse
Affiliation(s)
- Tilbe Göksun
- Department of Psychology, Koç University, Turkey.
| | - Matthew Lehet
- Department of Neurology, University of Pennsylvania School of Medicine, United States; Center for Cognitive Neuroscience, University of Pennsylvania, United States; Department of Psychology, Carnegie Mellon University, United States
| | | | - Anjan Chatterjee
- Department of Neurology, University of Pennsylvania School of Medicine, United States; Center for Cognitive Neuroscience, University of Pennsylvania, United States
| |
Collapse
|
23
|
The neural basis of hand gesture comprehension: A meta-analysis of functional magnetic resonance imaging studies. Neurosci Biobehav Rev 2015; 57:88-104. [DOI: 10.1016/j.neubiorev.2015.08.006] [Citation(s) in RCA: 65] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2015] [Revised: 07/13/2015] [Accepted: 08/06/2015] [Indexed: 11/18/2022]
|
24
|
Özyürek A. Hearing and seeing meaning in speech and gesture: insights from brain and behaviour. Philos Trans R Soc Lond B Biol Sci 2015; 369:20130296. [PMID: 25092664 DOI: 10.1098/rstb.2013.0296] [Citation(s) in RCA: 74] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
As we speak, we use not only the arbitrary form-meaning mappings of the speech channel but also motivated form-meaning correspondences, i.e. iconic gestures that accompany speech (e.g. inverted V-shaped hand wiggling across gesture space to demonstrate walking). This article reviews what we know about processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: i.e. modulation of the electrophysiological recording component N400, which is sensitive to the ease of semantic integration of a word to previous context, and recruitment of the left-lateralized frontal-posterior temporal network (left inferior frontal gyrus (IFG), middle temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, integrating the information coming from both channels recruits brain areas such as left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulate the integration process. Whether these findings are special to gestures or are shared with actions, other visual accompaniments to speech (e.g. lips), or other visual symbols such as pictures is discussed, as well as the implications for a multimodal view of language.
Collapse
Affiliation(s)
- Aslı Özyürek
- Department of Linguistics, Radboud University Nijmegen, Erasmus Plain 1, 6500 HD, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, Nijmegen 6525 JT, The Netherlands
| |
Collapse
|
25
|
Cochet H, Centelles L, Jover M, Plachta S, Vauclair J. Hand preferences in preschool children: Reaching, pointing and symbolic gestures. Laterality 2015; 20:501-16. [DOI: 10.1080/1357650x.2015.1007057] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
26
|
Raschle NM, Smith SA, Zuk J, Dauvermann MR, Figuccio MJ, Gaab N. Investigating the neural correlates of voice versus speech-sound directed information in pre-school children. PLoS One 2014; 9:e115549. [PMID: 25532132 PMCID: PMC4274095 DOI: 10.1371/journal.pone.0115549] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2014] [Accepted: 11/24/2014] [Indexed: 02/06/2023] Open
Abstract
Studies in sleeping newborns and infants propose that the superior temporal sulcus is involved in speech processing soon after birth. Speech processing also implicitly requires the analysis of the human voice, which conveys both linguistic and extra-linguistic information. However, due to technical and practical challenges when neuroimaging young children, evidence of neural correlates of speech and/or voice processing in toddlers and young children remains scarce. In the current study, we used functional magnetic resonance imaging (fMRI) in 20 typically developing preschool children (average age = 5.8 y; range 5.2-6.8 y) to investigate brain activation during judgments about vocal identity versus the initial speech sound of spoken object words. FMRI results reveal common brain regions responsible for voice-specific and speech-sound specific processing of spoken object words including bilateral primary and secondary language areas of the brain. Contrasting voice-specific with speech-sound specific processing predominantly activates the anterior part of the right-hemispheric superior temporal sulcus. Furthermore, the right STS is functionally correlated with left-hemispheric temporal and right-hemispheric prefrontal regions. This finding underlines the importance of the right superior temporal sulcus as a temporal voice area and indicates that this brain region is specialized, and functions similarly to adults by the age of five. We thus extend previous knowledge of voice-specific regions and their functional connections to the young brain which may further our understanding of the neuronal mechanism of speech-specific processing in children with developmental disorders, such as autism or specific language impairments.
Collapse
Affiliation(s)
- Nora Maria Raschle
- Laboratories of Cognitive Neuroscience, Division of Developmental Medicine, Department of Developmental Medicine, Boston Children's Hospital, Boston, Massachusetts, United States of America
- Harvard Medical School, Boston, Massachusetts, United States of America
- Psychiatric University Clinics Basel, Department of Child and Adolescent Psychiatry, Basel, Switzerland
| | - Sara Ashley Smith
- Laboratories of Cognitive Neuroscience, Division of Developmental Medicine, Department of Developmental Medicine, Boston Children's Hospital, Boston, Massachusetts, United States of America
| | - Jennifer Zuk
- Laboratories of Cognitive Neuroscience, Division of Developmental Medicine, Department of Developmental Medicine, Boston Children's Hospital, Boston, Massachusetts, United States of America
- Harvard Medical School, Boston, Massachusetts, United States of America
| | - Maria Regina Dauvermann
- Laboratories of Cognitive Neuroscience, Division of Developmental Medicine, Department of Developmental Medicine, Boston Children's Hospital, Boston, Massachusetts, United States of America
- Harvard Medical School, Boston, Massachusetts, United States of America
| | - Michael Joseph Figuccio
- Laboratories of Cognitive Neuroscience, Division of Developmental Medicine, Department of Developmental Medicine, Boston Children's Hospital, Boston, Massachusetts, United States of America
| | - Nadine Gaab
- Laboratories of Cognitive Neuroscience, Division of Developmental Medicine, Department of Developmental Medicine, Boston Children's Hospital, Boston, Massachusetts, United States of America
- Harvard Medical School, Boston, Massachusetts, United States of America
- Harvard Graduate School of Education, Cambridge, Massachusetts, United States of America
| |
Collapse
|
27
|
Yuan Y, Brown S. The neural basis of mark making: a functional MRI study of drawing. PLoS One 2014; 9:e108628. [PMID: 25271440 PMCID: PMC4182721 DOI: 10.1371/journal.pone.0108628] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2014] [Accepted: 09/02/2014] [Indexed: 11/19/2022] Open
Abstract
Compared to most other forms of visually-guided motor activity, drawing is unique in that it "leaves a trail behind" in the form of the emanating image. We took advantage of an MRI-compatible drawing tablet in order to examine both the motor production and perceptual emanation of images. Subjects participated in a series of mark making tasks in which they were cued to draw geometric patterns on the tablet's surface. The critical comparison was between when visual feedback was displayed (image generation) versus when it was not (no image generation). This contrast revealed an occipito-parietal stream involved in motion-based perception of the emerging image, including areas V5/MT+, LO, V3A, and the posterior part of the intraparietal sulcus. Interestingly, when subjects passively viewed animations of visual patterns emerging on the projected surface, all of the sensorimotor network involved in drawing was strongly activated, with the exception of the primary motor cortex. These results argue that the origin of the human capacity to draw and write involves not only motor skills for tool use but also motor-sensory links between drawing movements and the visual images that emanate from them in real time.
Collapse
Affiliation(s)
- Ye Yuan
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario, Canada
| | - Steven Brown
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario, Canada
| |
Collapse
|
28
|
Abstract
With a focus on receptive language, we examine the neurobiological evidence for the interdependence of receptive and expressive language processes. While we agree that there is compelling evidence for such interdependence, we suggest that Pickering & Garrod's (P&G's) account would be enhanced by considering more-specific situations in which their model does, and does not, apply.
Collapse
|
29
|
Interhemispheric functional connectivity following prenatal or perinatal brain injury predicts receptive language outcome. J Neurosci 2013; 33:5612-25. [PMID: 23536076 DOI: 10.1523/jneurosci.2851-12.2013] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022] Open
Abstract
Early brain injury alters both structural and functional connectivity between the cerebral hemispheres. Despite increasing knowledge on the individual hemispheric contributions to recovery from such injury, we know very little about how their interactions affect this process. In the present study, we related interhemispheric structural and functional connectivity to receptive language outcome following early left hemisphere stroke. We used functional magnetic resonance imaging to study 14 people with neonatal brain injury, and 25 age-matched controls during passive story comprehension. With respect to structural connectivity, we found that increased volume of the corpus callosum predicted good receptive language outcome, but that this is not specific to people with injury. In contrast, we found that increased posterior superior temporal gyrus interhemispheric functional connectivity during story comprehension predicted better receptive language performance in people with early brain injury, but worse performance in typical controls. This suggests that interhemispheric functional connectivity is one potential compensatory mechanism following early injury. Further, this pattern of results suggests refinement of the prevailing notion that better language outcome following early left hemisphere injury relies on the contribution of the contralesional hemisphere (i.e., the "right-hemisphere-take-over" theory). This pattern of results was also regionally specific; connectivity of the angular gyrus predicted poorer performance in both groups, independent of brain injury. These results present a complex picture of recovery, and in some cases, such recovery relies on increased cooperation between the injured hemisphere and homologous regions in the contralesional hemisphere, but in other cases, the opposite appears to hold.
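Interhemispheric functional connectivity of the kind reported above is commonly estimated as the Pearson correlation between the mean BOLD time series of homologous regions (e.g., left and right posterior superior temporal gyrus). The function below is a minimal sketch under that common assumption, not the study's pipeline.

```python
import numpy as np

def interhemispheric_fc(left_ts, right_ts):
    """Functional connectivity between homologous ROIs.

    Estimated as the Pearson correlation of the two regions' mean
    BOLD time series; values near +1 indicate tightly coupled
    activity across the hemispheres.
    """
    l = np.asarray(left_ts, dtype=float)
    r = np.asarray(right_ts, dtype=float)
    # corrcoef returns the 2x2 correlation matrix; take the off-diagonal.
    return float(np.corrcoef(l, r)[0, 1])

# Toy time series: an identical copy is perfectly correlated,
# an inverted copy is perfectly anticorrelated.
ts = np.sin(np.linspace(0.0, 6.28, 100))
```

Task-based studies typically compute this on residual or condition-specific time series after nuisance regression; the correlation itself is the same operation.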
Collapse
|
30
|
Wakefield EM, James TW, James KH. Neural correlates of gesture processing across human development. Cogn Neuropsychol 2013; 30:58-76. [PMID: 23662858 DOI: 10.1080/02643294.2013.794777] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
Co-speech gesture facilitates learning to a greater degree in children than in adults, suggesting that the mechanisms underlying the processing of co-speech gesture differ as a function of development. We suggest that this may be partially due to children's lack of experience producing gesture, leading to differences in the recruitment of sensorimotor networks when comparing adults to children. Here, we investigated the neural substrates of gesture processing in a cross-sectional sample of 5-, 7.5-, and 10-year-old children and adults and focused on relative recruitment of a sensorimotor system that included the precentral gyrus (PCG) and the posterior middle temporal gyrus (pMTG). Children and adults were presented with videos in which communication occurred through different combinations of speech and gesture during a functional magnetic resonance imaging (fMRI) session. Results demonstrated that the PCG and pMTG were recruited to different extents in the two populations. We interpret these novel findings as supporting the idea that gesture perception (pMTG) is affected by a history of gesture production (PCG), revealing the importance of considering gesture processing as a sensorimotor process.
Collapse
Affiliation(s)
- Elizabeth M Wakefield
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA.
| | | | | |
Collapse
|
31
|
Dick AS, Mok EH, Raja Beharelle A, Goldin-Meadow S, Small SL. Frontal and temporal contributions to understanding the iconic co-speech gestures that accompany speech. Hum Brain Mapp 2012; 35:900-17. [PMID: 23238964 DOI: 10.1002/hbm.22222] [Citation(s) in RCA: 57] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2012] [Revised: 09/19/2012] [Accepted: 10/22/2012] [Indexed: 11/08/2022] Open
Abstract
In everyday conversation, listeners often rely on a speaker's gestures to clarify any ambiguities in the verbal message. Using fMRI during naturalistic story comprehension, we examined which brain regions in the listener are sensitive to speakers' iconic gestures. We focused on iconic gestures that contribute information not found in the speaker's talk, compared with those that convey information redundant with the speaker's talk. We found that three regions, the left inferior frontal gyrus triangular (IFGTr) and opercular (IFGOp) portions and the left posterior middle temporal gyrus (MTGp), responded more strongly when gestures added information to nonspecific language, compared with when they conveyed the same information in more specific language; in other words, when gesture disambiguated speech as opposed to reinforced it. An increased BOLD response was not found in these regions when the nonspecific language was produced without gesture, suggesting that IFGTr, IFGOp, and MTGp are involved in integrating semantic information across gesture and speech. In addition, we found that activity in the posterior superior temporal sulcus (STSp), previously thought to be involved in gesture-speech integration, was not sensitive to the gesture-speech relation. Together, these findings clarify the neurobiology of gesture-speech integration and contribute to an emerging picture of how listeners glean meaning from gestures that accompany speech.
Collapse
|