1. Ter Bekke M, Drijvers L, Holler J. Hand Gestures Have Predictive Potential During Conversation: An Investigation of the Timing of Gestures in Relation to Speech. Cogn Sci 2024;48:e13407. PMID: 38279899. DOI: 10.1111/cogs.13407.
Abstract
During face-to-face conversation, transitions between speaker turns are remarkably fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that planning of the next turn can begin before the current turn is complete. Given that face-to-face conversation also involves communicative bodily signals, an important question is how bodily signals such as co-speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, both for the onset of the gesture as a whole and for the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures received faster responses than questions without gestures. However, we found no evidence that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. These findings highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face-to-face conversation.
Affiliation(s)
- Marlijn Ter Bekke: Donders Institute for Brain, Cognition and Behaviour, Radboud University; Max Planck Institute for Psycholinguistics
- Linda Drijvers: Donders Institute for Brain, Cognition and Behaviour, Radboud University; Max Planck Institute for Psycholinguistics
- Judith Holler: Donders Institute for Brain, Cognition and Behaviour, Radboud University; Max Planck Institute for Psycholinguistics
2. Janzen Ulbricht N. Can grammatical morphemes be taught? Evidence of gestures influencing second language procedural learning in middle childhood. PLoS One 2023;18:e0280543. PMID: 36724183. PMCID: PMC9891517. DOI: 10.1371/journal.pone.0280543.
Abstract
What kind of practice makes perfect when children learn to use grammatical morphemes in a second language? Gestures are communicative hand and arm movements that teachers naturally employ as a teaching tool in the classroom. Gesture theory proposes that gestures package information, and previous studies suggest their value for teaching specific items, such as words, as well as abstract systems, such as language. There is broad consensus that implicit learning mechanisms in children are more developed than explicit ones, and that everyday use of grammar is implicit and entails developing implicit knowledge. However, while many learners have difficulty acquiring new morpho-syntactic structures, such as the plural {-s} and third-person possessive {-s} in English, research on gesture and syntax in middle childhood remains rare. The present study (N = 19) was conducted to better understand whether gestures that embody grammatical morphemes during instruction can contribute to procedural learning. Using a novel task, the gesture-speeded fragment completion task, our behavioral results show a decrease in mean response times after instruction in the test condition utilizing syntactically specific gestures. This gain in procedural learning suggests that learners in this age group can benefit from embodied classroom instruction that visually differentiates between grammatical morphemes that differ in meaning but sound the same.
Affiliation(s)
- Natasha Janzen Ulbricht: Department of Philosophy and Humanities, Freie Universität Berlin, Berlin, Germany
3. Dynamic auditory contributions to error detection revealed in the discrimination of Same and Different syllable pairs. Neuropsychologia 2022;176:108388. PMID: 36183800. DOI: 10.1016/j.neuropsychologia.2022.108388.
Abstract
During speech production, auditory regions operate in concert with the anterior dorsal stream to facilitate online error detection. As the dorsal stream is also known to activate during speech perception, the purpose of the current study was to probe the role of auditory regions in error detection during auditory discrimination tasks as stimuli are encoded and maintained in working memory. The a priori assumption is that sensory mismatch (i.e., error) occurs during the discrimination of Different (mismatched) but not Same (matched) syllable pairs. Independent component analysis was applied to raw EEG data recorded from 42 participants to identify bilateral auditory alpha rhythms, which were decomposed across time and frequency to reveal robust patterns of event-related synchronization (ERS; inhibition) and desynchronization (ERD; processing) over the time course of discrimination events. Results were characterized by bilateral peri-stimulus alpha ERD transitioning to alpha ERS in the late trial epoch, with ERD interpreted as evidence of working memory encoding via Analysis by Synthesis and ERS considered evidence of speech-induced suppression arising during covert articulatory rehearsal to facilitate working memory maintenance. The transition from ERD to ERS occurred later in the left hemisphere in Different trials than in Same trials, with ERD and ERS temporally overlapping during the early post-stimulus window. These results suggest that the sensory mismatch (i.e., error) arising from the comparison of the first and second syllables elicits further processing in the left hemisphere to support working memory encoding and maintenance. Results are consistent with auditory contributions to error detection during both encoding and maintenance stages of working memory, with encoding-stage error detection associated with stimulus concordance and maintenance-stage error detection associated with task-specific retention demands.
4. Skipper JI. A voice without a mouth no more: The neurobiology of language and consciousness. Neurosci Biobehav Rev 2022;140:104772. PMID: 35835286. DOI: 10.1016/j.neubiorev.2022.104772.
Abstract
Most research on the neurobiology of language ignores consciousness and vice versa. Here, language, with an emphasis on inner speech, is hypothesised to generate and sustain self-awareness, i.e., higher-order consciousness. Converging evidence supporting this hypothesis is reviewed. To account for these findings, a 'HOLISTIC' model of the neurobiology of language, inner speech, and consciousness is proposed. It involves a 'core' set of inner speech production regions that initiate the experience of feeling and hearing words. These take on affective qualities, deriving from activation of associated sensory, motor, and emotional representations, involving a largely unconscious dynamic 'periphery' distributed throughout the whole brain. Responding to those words forms the basis for sustained network activity, involving 'default mode' activation and prefrontal and thalamic/brainstem selection of contextually relevant responses. Evidence for the model is reviewed, supporting neuroimaging meta-analyses are conducted, and comparisons with other theories of consciousness are made. The HOLISTIC model constitutes a more parsimonious and complete account of the 'neural correlates of consciousness' that has implications for a mechanistic account of mental health and wellbeing.
5. Kirk PA, Robinson OJ, Skipper JI. Anxiety and amygdala connectivity during movie-watching. Neuropsychologia 2022;169:108194. PMID: 35245529. PMCID: PMC8987737. DOI: 10.1016/j.neuropsychologia.2022.108194.
Abstract
Rodent and human studies have implicated an amygdala-prefrontal circuit during threat processing. One possibility is that while amygdala activity underlies core features of anxiety (e.g., detection of salient information), prefrontal cortices (i.e., dorsomedial prefrontal/anterior cingulate cortex) entrain its responsiveness. To date, this has been established in tightly controlled paradigms (predominantly using static face perception tasks) but has not been extended to more naturalistic settings. Consequently, using 'movie fMRI', in which participants watch ecologically rich movie stimuli rather than performing constrained cognitive tasks, we sought to test whether individual differences in anxiety correlate with the degree of face-dependent amygdala-prefrontal coupling in two independent samples. Analyses suggested increased face-dependent superior parietal activation and decreased speech-dependent auditory cortex activation as a function of anxiety. However, we failed to find evidence for anxiety-dependent connectivity in either our stimulus-dependent or stimulus-independent analyses. Our findings suggest that work using experimentally constrained tasks may not replicate in more ecologically valid settings and, moreover, highlight the importance of testing the generalizability of neuroimaging findings outside of the original context.
Affiliation(s)
- Peter A Kirk: UCL Institute of Cognitive Neuroscience, UK; UCL Experimental Psychology, UK
- Oliver J Robinson: UCL Institute of Cognitive Neuroscience, UK; UCL Clinical, Educational and Health Psychology, UK
6. What is Functional Communication? A Theoretical Framework for Real-World Communication Applied to Aphasia Rehabilitation. Neuropsychol Rev 2022;32:937-973. PMID: 35076868. PMCID: PMC9630202. DOI: 10.1007/s11065-021-09531-2.
Abstract
Aphasia is an impairment of language caused by acquired brain damage, such as stroke or traumatic brain injury, that affects a person’s ability to communicate effectively. The aim of rehabilitation in aphasia is to improve everyday communication, improving an individual’s ability to function in their day-to-day life. For that reason, a thorough understanding of naturalistic communication and its underlying mechanisms is imperative. The field of aphasiology currently lacks an agreed, comprehensive, theoretically founded definition of communication. Instead, multiple disparate interpretations of functional communication are used. We argue that this makes it nearly impossible to validly and reliably assess a person’s communicative performance, to target this behaviour through therapy, and to measure improvements post-therapy. In this article, we propose a structured, theoretical approach to defining the concept of functional communication. We argue for a view of communication as “situated language use”, borrowed from empirical psycholinguistic studies with non-brain-damaged adults. This framework defines language use as: (1) interactive, (2) multimodal, and (3) contextual. Existing research on each component of the framework from non-brain-damaged adults and people with aphasia is reviewed. The consequences of adopting this approach to assessment and therapy for aphasia rehabilitation are discussed. The aim of this article is to encourage a more systematic, comprehensive approach to the study and treatment of situated language use in aphasia.
7. Brisson V, Tremblay P. Improving speech perception in noise in young and older adults using transcranial magnetic stimulation. Brain Lang 2021;222:105009. PMID: 34425411. DOI: 10.1016/j.bandl.2021.105009.
Abstract
Normal aging is associated with speech perception in noise (SPiN) difficulties. The objective of this study was to determine whether SPiN performance can be enhanced by intermittent theta-burst stimulation (iTBS) in young and older adults.
METHOD: We developed a sub-lexical SPiN test to evaluate the contribution of age, hearing, and cognition to SPiN performance in young and older adults. iTBS was applied to the left posterior superior temporal sulcus (pSTS) and the left ventral premotor cortex (PMv) to examine its impact on SPiN performance.
RESULTS: Aging was associated with reduced SPiN accuracy. TMS-induced performance gain was greater after stimulation of the PMv than of the pSTS. Participants with lower scores in the baseline condition improved the most.
DISCUSSION: SPiN difficulties can be reduced by enhancing activity within the left speech-processing network in adults. This study paves the way for the development of TMS-based interventions to reduce SPiN difficulties in adults.
Affiliation(s)
- Valérie Brisson: Département de réadaptation, Université Laval, Québec, Canada; Centre de recherche CERVO, Québec, Canada
- Pascale Tremblay: Département de réadaptation, Université Laval, Québec, Canada; Centre de recherche CERVO, Québec, Canada
8. Skipper JI, Aliko S, Brown S, Jo YJ, Lo S, Molimpakis E, Lametti DR. Reorganization of the Neurobiology of Language After Sentence Overlearning. Cereb Cortex 2021;32:2447-2468. PMID: 34585723. PMCID: PMC9157312. DOI: 10.1093/cercor/bhab354.
Abstract
It is often assumed that there is a static set of "language regions" in the brain. Yet, language comprehension engages regions well beyond these, and patients regularly produce familiar "formulaic" expressions when language regions are severely damaged. These observations suggest that the neurobiology of language is not fixed but varies with experience, like the extent of word sequence learning. We hypothesized that perceiving overlearned sentences is supported by speech production regions and not putative language regions. Participants underwent 2 sessions of behavioral testing and functional magnetic resonance imaging (fMRI). During the intervening 15 days, they repeated 2 sentences 30 times each, twice a day. In both fMRI sessions, they "passively" listened to those sentences and novel sentences, and produced sentences. Behaviorally, evidence for overlearning included a 2.1-s decrease in reaction times to predict the final word in overlearned sentences. This corresponded to the recruitment of sensorimotor regions involved in sentence production, inactivation of temporal and inferior frontal regions involved in novel sentence listening, and a 45% change in global network organization. Thus, there was a profound whole-brain reorganization following sentence overlearning, out of "language" and into sensorimotor regions. The latter are generally preserved in aphasia and Alzheimer's disease, perhaps explaining residual abilities with formulaic expressions in both.
Affiliation(s)
- Sarah Aliko: Experimental Psychology, University College London, London, UK; London Interdisciplinary Biosciences Consortium, University College London, London, UK
- Stephen Brown: Natural Sciences, University College London, London, UK
- Yoon Ju Jo: Experimental Psychology, University College London, London, UK
- Serena Lo: Speech and Language Sciences, University College London, London, UK
- Emilia Molimpakis: Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Daniel R Lametti: Experimental Psychology, University College London, London, UK; Department of Psychology, Acadia University, Nova Scotia, Canada
9. Zhang Y, Frassinelli D, Tuomainen J, Skipper JI, Vigliocco G. More than words: word predictability, prosody, gesture and mouth movements in natural language comprehension. Proc Biol Sci 2021;288:20210500. PMID: 34284631. PMCID: PMC8292779. DOI: 10.1098/rspb.2021.0500.
Abstract
The ecology of human language is face-to-face interaction, comprising cues such as prosody, co-speech gestures and mouth movements. Yet, the multimodal context is usually stripped away in experiments, as dominant paradigms focus on linguistic processing only. In two studies, we presented video-clips of an actress producing naturalistic passages to participants while recording their electroencephalogram. We quantified multimodal cues (prosody, gestures, mouth movements) and measured their effect on a well-established electroencephalographic marker of processing load in comprehension (N400). We found that brain responses to words were affected by the informativeness of co-occurring multimodal cues, indicating that comprehension relies on linguistic and non-linguistic cues. Moreover, they were affected by interactions between the multimodal cues, indicating that the impact of each cue dynamically changes based on the informativeness of the other cues. Thus, the results show that multimodal cues are integral to comprehension, and our theories must therefore move beyond the limited focus on speech and linguistic processing.
Affiliation(s)
- Ye Zhang: Experimental Psychology, University College London, London, UK
- Diego Frassinelli: Department of Linguistics, University of Konstanz, Konstanz, Germany
- Jyrki Tuomainen: Experimental Psychology, Speech, Hearing and Phonetic Sciences, University College London, London, UK
10. Skipper JI, Lametti DR. Speech Perception under the Tent: A Domain-general Predictive Role for the Cerebellum. J Cogn Neurosci 2021;33:1517-1534. PMID: 34496370. DOI: 10.1162/jocn_a_01729.
Abstract
The role of the cerebellum in speech perception remains a mystery. Given its uniform architecture, we tested the hypothesis that it implements a domain-general predictive mechanism whose role in speech is determined by connectivity. We collated all neuroimaging studies reporting cerebellar activity in the Neurosynth database (n = 8206). From this set, we found all studies involving passive speech and sound perception (n = 72; 64% speech, 12.5% sounds, 12.5% music, and 11% tones) and speech production and articulation (n = 175). Standard and coactivation neuroimaging meta-analyses were used to compare cerebellar and associated cortical activations between passive perception and production. We found distinct regions of perception- and production-related activity in the cerebellum and regions of perception-production overlap. Each of these regions had distinct patterns of cortico-cerebellar connectivity. To test for domain-generality versus specificity, we identified all psychological and task-related terms in the Neurosynth database that predicted activity in the cerebellar regions associated with passive perception and production. Regions in the cerebellum activated by speech perception were associated with domain-general terms related to prediction. One hallmark of predictive processing is metabolic savings (i.e., decreases in neural activity when events are predicted). To test the hypothesis that the cerebellum plays a predictive role in speech perception, we compared cortical activation between studies reporting cerebellar activation and those without cerebellar activation during speech perception. When the cerebellum was active during speech perception, there was far less cortical activation than when it was inactive. The results suggest that the cerebellum implements a domain-general mechanism related to prediction during speech perception.
Affiliation(s)
- Daniel R Lametti: University College London; Acadia University, Wolfville, Nova Scotia, Canada
11. Kang H, Auksztulewicz R, An H, Abi Chacra N, Sutter ML, Schnupp JWH. Neural Correlates of Auditory Pattern Learning in the Auditory Cortex. Front Neurosci 2021;15:610978. PMID: 33790730. PMCID: PMC8005649. DOI: 10.3389/fnins.2021.610978.
Abstract
Learning of new auditory stimuli often requires repetitive exposure to the stimulus. Fast and implicit learning of sounds presented at random times enables efficient auditory perception. However, it is unclear how such sensory encoding is processed on a neural level. We investigated neural responses that develop from passive, repetitive exposure to a specific sound in the auditory cortex of anesthetized rats, using electrocorticography. We presented a series of random sequences, each generated afresh, except for a specific reference sequence that remained constant and re-appeared at random times across trials. We compared induced activity amplitudes between reference and fresh sequences. Neural responses from both primary and non-primary auditory cortical regions showed significantly decreased induced activity amplitudes for reference sequences compared to fresh sequences, especially in the beta band. This is the first study showing that neural correlates of auditory pattern learning can be evoked even in anesthetized, passively listening animal models.
Affiliation(s)
- Hijee Kang: Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
- Ryszard Auksztulewicz: Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong; Neuroscience Department, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Hyunjung An: Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
- Nicolas Abi Chacra: Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
- Mitchell L Sutter: Center for Neuroscience and Section of Neurobiology, Physiology and Behavior, University of California, Davis, Davis, CA, United States
- Jan W H Schnupp: Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
12. Morett LM, Landi N, Irwin J, McPartland JC. N400 amplitude, latency, and variability reflect temporal integration of beat gesture and pitch accent during language processing. Brain Res 2020;1747:147059. PMID: 32818527. PMCID: PMC7493208. DOI: 10.1016/j.brainres.2020.147059.
Abstract
This study examines how across-trial (average) and trial-by-trial (variability in) amplitude and latency of the N400 event-related potential (ERP) reflect temporal integration of pitch accent and beat gesture. Thirty native English speakers viewed videos of a talker producing sentences with beat gesture co-occurring with a pitch-accented focus word (synchronous), beat gesture co-occurring with the onset of a subsequent non-focused word (asynchronous), or no beat gesture (no beat). Across trials, greater amplitude and earlier latency were observed when beat gesture was temporally asynchronous with pitch accenting than when it was temporally synchronous with pitch accenting or absent. Moreover, temporal asynchrony of beat gesture relative to pitch accent increased trial-by-trial variability of N400 amplitude and latency and influenced the relationship between across-trial and trial-by-trial N400 latency. These results indicate that across-trial and trial-by-trial amplitude and latency of the N400 ERP reflect temporal integration of beat gesture and pitch accent during language comprehension, supporting extension of the integrated systems hypothesis of gesture-speech processing and neural noise theories to focus processing in typical adult populations.
Affiliation(s)
- Nicole Landi: Haskins Laboratories, University of Connecticut, United States
- Julia Irwin: Haskins Laboratories, Southern Connecticut State University, United States
13. Billot-Vasquez K, Lian Z, Hirata Y, Kelly SD. Emblem Gestures Improve Perception and Evaluation of Non-native Speech. Front Psychol 2020;11:574418. PMID: 33071912. PMCID: PMC7536367. DOI: 10.3389/fpsyg.2020.574418.
Abstract
Traditionally, much of the attention on the communicative effects of non-native accent has focused on the accent itself rather than how it functions within a more natural context. The present study explores how the bodily context of co-speech emblematic gestures affects perceptual and social evaluation of non-native accent. In two experiments in two different languages, Mandarin and Japanese, we filmed learners performing a short utterance in three different within-subjects conditions: speech alone, culturally familiar gesture, and culturally unfamiliar gesture. Native Mandarin participants watched videos of foreign-accented Mandarin speakers (Experiment 1), and native Japanese participants watched videos of foreign-accented Japanese speakers (Experiment 2). Following each video, native language participants were asked a set of questions targeting speech perception and social impressions of the learners. Results from both experiments demonstrate that familiar—and occasionally unfamiliar—emblems facilitated speech perception and enhanced social evaluations compared to the speech alone baseline. The variability in our findings suggests that gesture may serve varied functions in the perception and evaluation of non-native accent.
Affiliation(s)
- Kiana Billot-Vasquez: Department of Psychological and Brain Sciences, Colgate University, Hamilton, NY, United States; Center for Language and Brain, Hamilton, NY, United States
- Zhongwen Lian: Center for Language and Brain, Hamilton, NY, United States; Linguistics Program, Colgate University, Hamilton, NY, United States
- Yukari Hirata: Center for Language and Brain, Hamilton, NY, United States; Linguistics Program, Colgate University, Hamilton, NY, United States; Department of East Asian Languages, Colgate University, Hamilton, NY, United States
- Spencer D Kelly: Department of Psychological and Brain Sciences, Colgate University, Hamilton, NY, United States; Center for Language and Brain, Hamilton, NY, United States; Linguistics Program, Colgate University, Hamilton, NY, United States
14. Angulo-Perkins A, Concha L. Discerning the functional networks behind processing of music and speech through human vocalizations. PLoS One 2019;14:e0222796. PMID: 31600231. PMCID: PMC6786620. DOI: 10.1371/journal.pone.0222796.
Abstract
A fundamental question regarding music processing is its degree of independence from speech processing, in terms of their underlying neuroanatomy and the influence of cognitive traits and abilities. Although a straight answer to that question is still lacking, a large number of studies have described where in the brain and in which contexts (tasks, stimuli, populations) this independence is, or is not, observed. We examined the independence between music and speech processing using functional magnetic resonance imaging and a stimulation paradigm with different human vocal sounds produced by the same voice. The stimuli were grouped as Speech (spoken sentences), Hum (hummed melodies), and Song (sung sentences); the same sentences were used in the Speech and Song categories, and the same melodies in the two musical categories. Each category had a scrambled counterpart, which allowed us to render speech and melody unintelligible while preserving global amplitude and frequency characteristics. Finally, we included a group of musicians to evaluate the influence of musical expertise. Similar global patterns of cortical activity were related to all sound categories compared to baseline, but important differences were evident. Regions more sensitive to musical sounds were located bilaterally in the anterior and posterior superior temporal gyrus (planum polare and temporale), the right supplementary and premotor areas, and the inferior frontal gyrus. However, only temporal areas and supplementary motor cortex remained music-selective after subtracting brain activity related to the scrambled stimuli. Speech-selective regions mainly affected by intelligibility of the stimuli were observed in the left pars opercularis and the anterior portion of the middle temporal gyrus. We did not find differences between musicians and non-musicians. Our results confirmed music-selective cortical regions in associative cortices, independent of previous musical training.
Collapse
Affiliation(s)
- Arafat Angulo-Perkins
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, Querétaro, México
- Department of Cognitive Biology, Faculty of Life Sciences, University of Vienna, Vienna, Austria
| | - Luis Concha
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, Querétaro, México
- International Laboratory for Brain, Music and Sound (BRAMS), Montreal, Québec, Canada
| |
Collapse
|
15
|
Europa E, Gitelman DR, Kiran S, Thompson CK. Neural Connectivity in Syntactic Movement Processing. Front Hum Neurosci 2019; 13:27. [PMID: 30814941 PMCID: PMC6381040 DOI: 10.3389/fnhum.2019.00027] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2018] [Accepted: 01/21/2019] [Indexed: 01/15/2023] Open
Abstract
Linguistic theory suggests non-canonical sentences subvert the dominant agent-verb-theme order in English via displacement of sentence constituents to argument (NP-movement) or non-argument positions (wh-movement). Both processes have been associated with the left inferior frontal gyrus and posterior superior temporal gyrus, but differences in neural activity and connectivity between movement types have not been investigated. In the current study, functional magnetic resonance imaging data were acquired from 21 adult participants during an auditory sentence-picture verification task using passive and active sentences contrasted to isolate NP-movement, and object- and subject-cleft sentences contrasted to isolate wh-movement. Then, functional magnetic resonance imaging data from regions common to both movement types were entered into a dynamic causal modeling analysis to examine effective connectivity for wh-movement and NP-movement. Results showed greater left inferior frontal gyrus activation for the Wh > NP-movement contrast, but no significant activation for the reverse contrast (NP > Wh-movement). Both types of movement elicited activity in the opercular part of the left inferior frontal gyrus, left posterior superior temporal gyrus, and left medial superior frontal gyrus. The dynamic causal modeling analyses indicated that neither movement type significantly modulated the connection from the left inferior frontal gyrus to the left posterior superior temporal gyrus, nor vice versa, suggesting no connectivity differences between wh- and NP-movement. These findings support the idea that the increased complexity of wh-structures, compared to sentences with NP-movement, requires greater engagement of cognitive resources via increased neural activity in the left inferior frontal gyrus, but both movement types engage similar neural networks.
Collapse
Affiliation(s)
- Eduardo Europa
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
| | - Darren R Gitelman
- Advocate Lutheran General Hospital, Park Ridge, IL, United States; Department of Medicine, Rosalind Franklin University of Medicine and Science, North Chicago, IL, United States; The Ken and Ruth Davee Department of Neurology, Feinberg School of Medicine, Northwestern University, Chicago, IL, United States
| | - Swathi Kiran
- College of Health & Rehabilitation Sciences, Boston University, Boston, MA, United States
| | - Cynthia K Thompson
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States; The Ken and Ruth Davee Department of Neurology, Feinberg School of Medicine, Northwestern University, Chicago, IL, United States; Mesulam Cognitive Neurology and Alzheimer's Disease Center, Feinberg School of Medicine, Northwestern University, Chicago, IL, United States
| |
Collapse
|
16
|
Abstract
After being exposed to visual input in the first year of life, the brain experiences subtle but massive changes that appear crucial for communicative/emotional and social human development. The lack of such input could explain the very high prevalence of autism in children with total congenital blindness. The present theory postulates that the superior colliculus (SC) is the key structure for such changes for several reasons: it dominates visual behavior during the first months of life; it is ready at birth for complex visual tasks; it has a significant influence on several hemispheric regions; it is the main brain hub that permanently integrates visual and non-visual, external and internal information (bottom-up and top-down, respectively); and it owns the enigmatic ability to take non-conscious decisions about where to focus attention. It is also a sentinel that triggers the subcortical mechanisms which drive social motivation to follow faces from birth and to react automatically to emotional stimuli. Through indirect connections it also simultaneously activates several cortical structures necessary to develop social cognition and to accomplish the multiattentional task required for conscious social interaction in real-life settings. Genetic or non-genetic prenatal or early postnatal factors could disrupt SC functions, resulting in autism. The timing of postnatal biological disruption matches the timing of clinical autism manifestations. Astonishing coincidences between etiologies, clinical manifestations, and cognitive and pathogenic autism theories on one side and SC functions on the other are disclosed in this review. Although the visual system dependent on the SC is usually considered accessory to the canonical lateral geniculate nucleus (LGN) pathway, its imprinting gives the brain qualitatively specific functions not supplied by any other brain structure.
Collapse
Affiliation(s)
- Rubin Jure
- Centro Privado de Neurología y Neuropsicología Infanto Juvenil WERNICKE, Córdoba, Argentina
| |
Collapse
|
17
|
Saltuklaroglu T, Bowers A, Harkrider AW, Casenhiser D, Reilly KJ, Jenson DE, Thornton D. EEG mu rhythms: Rich sources of sensorimotor information in speech processing. BRAIN AND LANGUAGE 2018; 187:41-61. [PMID: 30509381 DOI: 10.1016/j.bandl.2018.09.005] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/11/2017] [Revised: 09/27/2017] [Accepted: 09/23/2018] [Indexed: 06/09/2023]
Affiliation(s)
- Tim Saltuklaroglu
- Department of Audiology and Speech-Language Pathology, University of Tennessee Health Sciences, Knoxville, TN 37996, USA.
| | - Andrew Bowers
- University of Arkansas, Epley Center for Health Professions, 606 N. Razorback Road, Fayetteville, AR 72701, USA
| | - Ashley W Harkrider
- Department of Audiology and Speech-Language Pathology, University of Tennessee Health Sciences, Knoxville, TN 37996, USA
| | - Devin Casenhiser
- Department of Audiology and Speech-Language Pathology, University of Tennessee Health Sciences, Knoxville, TN 37996, USA
| | - Kevin J Reilly
- Department of Audiology and Speech-Language Pathology, University of Tennessee Health Sciences, Knoxville, TN 37996, USA
| | - David E Jenson
- Department of Speech and Hearing Sciences, Elson S. Floyd College of Medicine, Spokane, WA 99210-1495, USA
| | - David Thornton
- Department of Hearing, Speech, and Language Sciences, Gallaudet University, 800 Florida Avenue NE, Washington, DC 20002, USA
| |
Collapse
|
18
|
Biau E, Kotz SA. Lower Beta: A Central Coordinator of Temporal Prediction in Multimodal Speech. Front Hum Neurosci 2018; 12:434. [PMID: 30405383 PMCID: PMC6207805 DOI: 10.3389/fnhum.2018.00434] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2018] [Accepted: 10/03/2018] [Indexed: 12/18/2022] Open
Abstract
How the brain decomposes and integrates information in multimodal speech perception is linked to oscillatory dynamics. However, how speech takes advantage of redundancy between different sensory modalities, and how this translates into specific oscillatory patterns, remains unclear. We address the role of lower beta activity (~20 Hz), generally associated with motor functions, as an amodal central coordinator that receives bottom-up delta-theta copies from specific sensory areas and generates top-down temporal predictions for auditory entrainment. Dissociating temporal prediction from entrainment may explain how and why visual input benefits speech processing rather than adding cognitive load in multimodal speech perception. On the one hand, body movements convey prosodic and syllabic features at delta and theta rates (i.e., 1–3 Hz and 4–7 Hz). On the other hand, the natural precedence of visual input before auditory onsets may prepare the brain to anticipate and facilitate the integration of auditory delta-theta copies of the prosodic-syllabic structure. Here, we identify three fundamental criteria based on recent evidence and hypotheses, which support the notion that lower motor beta frequency may play a central and generic role in temporal prediction during speech perception. First, beta activity must respond to rhythmic stimulation across modalities. Second, beta power must respond to biological motion and speech-related movements conveying temporal information in multimodal speech processing. Third, temporal prediction may recruit a communication loop between motor and primary auditory cortices (PACs) via delta-to-beta cross-frequency coupling. We discuss evidence related to each criterion and extend these concepts to a beta-motivated framework of multimodal speech processing.
Collapse
Affiliation(s)
- Emmanuel Biau
- Basic and Applied Neuro Dynamics Laboratory, Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, Netherlands
| | - Sonja A Kotz
- Basic and Applied Neuro Dynamics Laboratory, Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, Netherlands; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| |
Collapse
|
19
|
Wolf D, Mittelberg I, Rekittke LM, Bhavsar S, Zvyagintsev M, Haeck A, Cong F, Klasen M, Mathiak K. Interpretation of Social Interactions: Functional Imaging of Cognitive-Semiotic Categories During Naturalistic Viewing. Front Hum Neurosci 2018; 12:296. [PMID: 30154703 PMCID: PMC6102316 DOI: 10.3389/fnhum.2018.00296] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2018] [Accepted: 07/06/2018] [Indexed: 01/01/2023] Open
Abstract
Social interactions arise from patterns of communicative signs, whose perception and interpretation require a multitude of cognitive functions. The semiotic framework of Peirce's Universal Categories (UCs) laid ground for a novel cognitive-semiotic typology of social interactions. During functional magnetic resonance imaging (fMRI), 16 volunteers watched a movie narrative encompassing verbal and non-verbal social interactions. Three types of non-verbal interactions were coded ("unresolved," "non-habitual," and "habitual") based on a typology reflecting Peirce's UCs. As expected, the auditory cortex responded to verbal interactions, but non-verbal interactions modulated temporal areas as well. Conceivably, when speech was lacking, ambiguous visual information (unresolved interactions) primed auditory processing in contrast to learned behavioral patterns (habitual interactions). The latter recruited a parahippocampal-occipital network supporting conceptual processing and associative memory retrieval. Requesting semiotic contextualization, non-habitual interactions activated visuo-spatial and contextual rule-learning areas such as the temporo-parietal junction and right lateral prefrontal cortex. In summary, the cognitive-semiotic typology reflected distinct sensory and association networks underlying the interpretation of observed non-verbal social interactions.
Collapse
Affiliation(s)
- Dhana Wolf
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany; Natural Media Lab, Human Technology Centre (HumTec), RWTH Aachen University, Aachen, Germany
| | - Irene Mittelberg
- Natural Media Lab, Human Technology Centre (HumTec), RWTH Aachen University, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen University, Aachen, Germany
| | - Linn-Marlen Rekittke
- Natural Media Lab, Human Technology Centre (HumTec), RWTH Aachen University, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen University, Aachen, Germany
| | - Saurabh Bhavsar
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
| | - Mikhail Zvyagintsev
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany; Brain Imaging Facility, Interdisciplinary Centre for Clinical Studies (IZKF), Medical Faculty, RWTH Aachen University, Aachen, Germany
| | - Annina Haeck
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
| | - Fengyu Cong
- Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
| | - Martin Klasen
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
| | - Klaus Mathiak
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen University, Aachen, Germany; JARA-Translational Brain Medicine, Aachen, Germany
| |
Collapse
|
20
|
Perniss P. Why We Should Study Multimodal Language. Front Psychol 2018; 9:1109. [PMID: 30002643 PMCID: PMC6032889 DOI: 10.3389/fpsyg.2018.01109] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2017] [Accepted: 06/11/2018] [Indexed: 12/21/2022] Open
Affiliation(s)
- Pamela Perniss
- School of Humanities, University of Brighton, Brighton, United Kingdom
| |
Collapse
|
21
|
Moore RK, Nicolao M. Toward a Needs-Based Architecture for ‘Intelligent’ Communicative Agents: Speaking with Intention. Front Robot AI 2017. [DOI: 10.3389/frobt.2017.00066] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
|
22
|
Saltuklaroglu T, Harkrider AW, Thornton D, Jenson D, Kittilstved T. EEG Mu (µ) rhythm spectra and oscillatory activity differentiate stuttering from non-stuttering adults. Neuroimage 2017; 153:232-245. [PMID: 28400266 PMCID: PMC5569894 DOI: 10.1016/j.neuroimage.2017.04.022] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2016] [Revised: 01/24/2017] [Accepted: 04/08/2017] [Indexed: 10/19/2022] Open
Abstract
Stuttering is linked to sensorimotor deficits related to internal modeling mechanisms. This study compared spectral power and oscillatory activity of EEG mu (μ) rhythms between persons who stutter (PWS) and controls in listening and auditory discrimination tasks. EEG data were analyzed from passive listening in noise and accurate (same/different) discrimination of tones or syllables in quiet and noisy backgrounds. Independent component analysis identified left and/or right μ rhythms with characteristic alpha (α) and beta (β) peaks localized to premotor/motor regions in 23 of 27 PWS and 24 of 27 controls. PWS produced μ spectra with reduced β amplitudes across conditions, suggesting reduced forward modeling capacity. Group time-frequency differences were associated with noisy conditions only. PWS showed increased μ-β desynchronization when listening to noise and early in discrimination events, suggesting evidence of heightened motor activity that might be related to forward modeling deficits. PWS also showed reduced μ-α synchronization in discrimination conditions, indicating reduced sensory gating. Together these findings indicate that spectral and oscillatory analyses of μ rhythms are sensitive to stuttering. More specifically, they can reveal stuttering-related sensorimotor processing differences in listening and auditory discrimination that also may be influenced by basal ganglia deficits.
Collapse
Affiliation(s)
- Tim Saltuklaroglu
- University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, USA
| | - Ashley W Harkrider
- University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, USA.
| | - David Thornton
- University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, USA
| | - David Jenson
- University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, USA
| | - Tiffani Kittilstved
- University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, USA
| |
Collapse
|
23
|
Abstract
How do language and vision interact? Specifically, what impact can language have on visual processing, especially related to spatial memory? What are typically considered errors in visual processing, such as remembering the location of an object to be farther along its motion trajectory than it actually is, can be explained as perceptual achievements that are driven by our ability to anticipate future events. In two experiments, we tested whether the prior presentation of motion language influences visual spatial memory in ways that afford greater perceptual prediction. Experiment 1 showed that motion language influenced judgments for the spatial memory of an object beyond the known effects of implied motion present in the image itself. Experiment 2 replicated this finding. Our findings support a theory of perception as prediction.
Collapse
|
24
|
Skipper JI, Devlin JT, Lametti DR. The hearing ear is always found close to the speaking tongue: Review of the role of the motor system in speech perception. BRAIN AND LANGUAGE 2017; 164:77-105. [PMID: 27821280 DOI: 10.1016/j.bandl.2016.10.004] [Citation(s) in RCA: 117] [Impact Index Per Article: 16.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/17/2016] [Accepted: 10/24/2016] [Indexed: 06/06/2023]
Abstract
Does "the motor system" play "a role" in speech perception? If so, where, how, and when? We conducted a systematic review that addresses these questions using both qualitative and quantitative methods. The qualitative review of behavioural, computational modelling, non-human animal, brain damage/disorder, electrical stimulation/recording, and neuroimaging research suggests that distributed brain regions involved in producing speech play specific, dynamic, and contextually determined roles in speech perception. The quantitative review employed region- and network-based neuroimaging meta-analyses and a novel text mining method to describe relative contributions of nodes in distributed brain networks. Supporting the qualitative review, results show a specific functional correspondence between regions involved in non-linguistic movement of the articulators, covertly and overtly producing speech, and the perception of both nonword and word sounds. This distributed set of cortical and subcortical speech production regions is ubiquitously active and forms multiple networks whose topologies dynamically change with listening context. Results are inconsistent with motor-only and acoustic-only models of speech perception and with classical and contemporary dual-stream models of the organization of language and the brain. Instead, results are more consistent with complex network models in which multiple speech production related networks and subnetworks dynamically self-organize to constrain interpretation of indeterminate acoustic patterns as listening context requires.
Collapse
Affiliation(s)
- Jeremy I Skipper
- Experimental Psychology, University College London, United Kingdom.
| | - Joseph T Devlin
- Experimental Psychology, University College London, United Kingdom
| | - Daniel R Lametti
- Experimental Psychology, University College London, United Kingdom; Department of Experimental Psychology, University of Oxford, United Kingdom
| |
Collapse
|
25
|
Hasson U, Andric M, Atilgan H, Collignon O. Congenital blindness is associated with large-scale reorganization of anatomical networks. Neuroimage 2016; 128:362-372. [PMID: 26767944 PMCID: PMC4767220 DOI: 10.1016/j.neuroimage.2015.12.048] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2015] [Revised: 12/29/2015] [Accepted: 12/30/2015] [Indexed: 11/15/2022] Open
Abstract
Blindness is a unique model for understanding the role of experience in the development of the brain's functional and anatomical architecture. Documenting changes in the structure of anatomical networks for this population would substantiate the notion that the brain's core network-level organization may undergo neuroplasticity as a result of life-long experience. To examine this issue, we compared whole-brain networks of regional cortical-thickness covariance in early blind and matched sighted individuals. This covariance is thought to reflect signatures of integration between systems involved in similar perceptual/cognitive functions. Using graph-theoretic metrics, we identified a unique mode of anatomical reorganization in the blind that differed from that found for the sighted. This was seen in that network partition structures derived from subgroups of blind individuals were more similar to each other than to partitions derived from sighted individuals. Notably, after deriving network partitions, we found that language and visual regions tended to reside within separate modules in the sighted but showed a pattern of merging into shared modules in the blind. Our study demonstrates that early visual deprivation triggers a systematic large-scale reorganization of whole-brain cortical-thickness networks, suggesting changes in how occipital regions interface with other functional networks in the congenitally blind.
Collapse
Affiliation(s)
- Uri Hasson
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy.
| | - Michael Andric
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
| | - Hicret Atilgan
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
| | - Olivier Collignon
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy; CERNEC, Département de Psychologie, Université de Montréal, Montreal, QC, Canada
| |
Collapse
|
26
|
Jenson D, Harkrider AW, Thornton D, Bowers AL, Saltuklaroglu T. Auditory cortical deactivation during speech production and following speech perception: an EEG investigation of the temporal dynamics of the auditory alpha rhythm. Front Hum Neurosci 2015; 9:534. [PMID: 26500519 PMCID: PMC4597480 DOI: 10.3389/fnhum.2015.00534] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2015] [Accepted: 09/14/2015] [Indexed: 11/22/2022] Open
Abstract
Sensorimotor integration (SMI) across the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of electroencephalography (EEG) data to describe anterior sensorimotor (e.g., premotor cortex, PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required "active" discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68 channel EEG data from all tasks identified bilateral "auditory" alpha (α) components in 15 of 29 participants, localized to the left posterior superior temporal gyrus (pSTG) and the right posterior middle temporal gyrus (pMTG). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event related synchronization (ERS; pFDR < 0.05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique.
Collapse
Affiliation(s)
- David Jenson
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
| | - Ashley W. Harkrider
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
| | - David Thornton
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
| | - Andrew L. Bowers
- Department of Communication Disorders, University of Arkansas, Fayetteville, AR, USA
| | - Tim Saltuklaroglu
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
| |
Collapse
|
27
|
Hertrich I, Kirsten M, Tiemann S, Beck S, Wühle A, Ackermann H, Rolke B. Context-dependent impact of presuppositions on early magnetic brain responses during speech perception. BRAIN AND LANGUAGE 2015; 149:1-12. [PMID: 26185045 DOI: 10.1016/j.bandl.2015.06.005] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/23/2014] [Revised: 05/20/2015] [Accepted: 06/13/2015] [Indexed: 06/04/2023]
Abstract
Discourse structure enables us to generate expectations based upon linguistic material that has already been introduced. The present magnetoencephalography (MEG) study addresses auditory perception of test sentences in which discourse coherence was manipulated by using presuppositions (PSP) that either correspond or fail to correspond to items in preceding context sentences with respect to uniqueness and existence. Context violations yielded delayed auditory M50 and enhanced auditory M200 cross-correlation responses to syllable onsets within an analysis window of 1.5 s following the PSP trigger words. Furthermore, discourse incoherence yielded suppression of spectral power within an expanded alpha band ranging from 6 to 16 Hz. This effect showed a bimodal temporal distribution, being significant in an early time window of 0.0–0.5 s following the PSP trigger and a late interval of 2.0–2.5 s. These findings indicate anticipatory top-down mechanisms interacting with various aspects of bottom-up processing during speech perception.
Collapse
Affiliation(s)
- Ingo Hertrich
- Center for Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany.
| | - Mareike Kirsten
- Evolutionary Cognition, Department of Psychology, University of Tübingen, Germany
| | - Sonja Tiemann
- Descriptive and Theoretical Linguistics, Department of English, University of Tübingen, Germany
| | - Sigrid Beck
- Descriptive and Theoretical Linguistics, Department of English, University of Tübingen, Germany
| | - Anja Wühle
- Center for Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany
| | - Hermann Ackermann
- Center for Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany
| | - Bettina Rolke
- Evolutionary Cognition, Department of Psychology, University of Tübingen, Germany
| |
Collapse
|
28
|
Biau E, Soto-Faraco S. Synchronization by the hand: the sight of gestures modulates low-frequency activity in brain responses to continuous speech. Front Hum Neurosci 2015; 9:527. [PMID: 26441618 PMCID: PMC4585072 DOI: 10.3389/fnhum.2015.00527] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2015] [Accepted: 09/10/2015] [Indexed: 11/13/2022] Open
Abstract
During social interactions, speakers often produce spontaneous gestures to accompany their speech. These coordinated body movements convey communicative intentions, and modulate how listeners perceive the message in a subtle, but important way. In the present perspective, we put the focus on the role that congruent non-verbal information from beat gestures may play in the neural responses to speech. Whilst delta-theta oscillatory brain responses reflect the time-frequency structure of the speech signal, we argue that beat gestures promote phase resetting at relevant word onsets. This mechanism may facilitate the anticipation of associated acoustic cues relevant for prosodic/syllabic-based segmentation in speech perception. We report recently published data supporting this hypothesis, and discuss the potential of beats (and gestures in general) for further studies investigating continuous AV speech processing through low-frequency oscillations.
Collapse
Affiliation(s)
- Emmanuel Biau
- Multisensory Research Group, Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain
| | - Salvador Soto-Faraco
- Multisensory Research Group, Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
| |
Collapse
|
29
|
Prediction in speech and language processing. Cortex 2015; 68:1-7. [PMID: 26048658 DOI: 10.1016/j.cortex.2015.05.001] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2015] [Revised: 05/03/2015] [Accepted: 05/03/2015] [Indexed: 11/20/2022]
|
30
|
Extending Gurwitsch's field theory of consciousness. Conscious Cogn 2015; 34:104-23. [PMID: 25916764 DOI: 10.1016/j.concog.2015.03.017] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2013] [Revised: 03/22/2015] [Accepted: 03/29/2015] [Indexed: 11/23/2022]
Abstract
Aron Gurwitsch's theory of the structure and dynamics of consciousness has much to offer contemporary theorizing about consciousness and its basis in the embodied brain. On Gurwitsch's account, as we develop it, the field of consciousness has a variable sized focus or "theme" of attention surrounded by a structured periphery of inattentional contents. As the field evolves, its contents change their status, sometimes smoothly, sometimes abruptly. Inner thoughts, a sense of one's body, and the physical environment are dominant field contents. These ideas can be linked with (and help unify) contemporary theories about the neural correlates of consciousness, inattention, the small world structure of the brain, meta-stable dynamics, embodied cognition, and predictive coding in the brain.
31
Vigliocco G, Perniss P, Vinson D. Language as a multimodal phenomenon: implications for language learning, processing and evolution. Philos Trans R Soc Lond B Biol Sci 2015; 369:20130292. [PMID: 25092660 DOI: 10.1098/rstb.2013.0292] [Citation(s) in RCA: 57] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Our understanding of the cognitive and neural underpinnings of language has traditionally been firmly based on spoken Indo-European languages and on language studied as speech or text. However, in face-to-face communication, language is multimodal: speech signals are invariably accompanied by visual information on the face and in manual gestures, and sign languages deploy multiple channels (hands, face and body) in utterance construction. Moreover, the narrow focus on spoken Indo-European languages has entrenched the assumption that language is comprised wholly by an arbitrary system of symbols and rules. However, iconicity (i.e. resemblance between aspects of communicative form and meaning) is also present: speakers use iconic gestures when they speak; many non-Indo-European spoken languages exhibit a substantial amount of iconicity in word forms and, finally, iconicity is the norm, rather than the exception in sign languages. This introduction provides the motivation for taking a multimodal approach to the study of language learning, processing and evolution, and discusses the broad implications of shifting our current dominant approaches and assumptions to encompass multimodal expression in both signed and spoken languages.
Affiliation(s)
- Gabriella Vigliocco
- Cognitive, Perceptual & Brain Sciences Department, 26 Bedford Way, London WC1H 0AP; Deafness, Cognition & Language Research Centre, 49 Gordon Square, London WC1H 0PD
- Pamela Perniss
- Cognitive, Perceptual & Brain Sciences Department, 26 Bedford Way, London WC1H 0AP; Deafness, Cognition & Language Research Centre, 49 Gordon Square, London WC1H 0PD
- David Vinson
- Cognitive, Perceptual & Brain Sciences Department, 26 Bedford Way, London WC1H 0AP
32
Bidelman GM, Dexter L. Bilinguals at the "cocktail party": dissociable neural activity in auditory-linguistic brain regions reveals neurobiological basis for nonnative listeners' speech-in-noise recognition deficits. Brain Lang 2015; 143:32-41. [PMID: 25747886 DOI: 10.1016/j.bandl.2015.02.002] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/20/2014] [Revised: 12/22/2014] [Accepted: 02/08/2015] [Indexed: 06/04/2023]
Abstract
We examined a consistent deficit observed in bilinguals: poorer speech-in-noise (SIN) comprehension for their nonnative language. We recorded neuroelectric mismatch potentials in mono- and bi-lingual listeners in response to contrastive speech sounds in noise. Behaviorally, late bilinguals required ∼10dB more favorable signal-to-noise ratios to match monolinguals' SIN abilities. Source analysis of cortical activity demonstrated monotonic increase in response latency with noise in superior temporal gyrus (STG) for both groups, suggesting parallel degradation of speech representations in auditory cortex. Contrastively, we found differential speech encoding between groups within inferior frontal gyrus (IFG)-adjacent to Broca's area-where noise delays observed in nonnative listeners were offset in monolinguals. Notably, brain-behavior correspondences double dissociated between language groups: STG activation predicted bilinguals' SIN, whereas IFG activation predicted monolinguals' performance. We infer higher-order brain areas act compensatorily to enhance impoverished sensory representations but only when degraded speech recruits linguistic brain mechanisms downstream from initial auditory-sensory inputs.
Affiliation(s)
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA.
- Lauren Dexter
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA.
33
van Leeuwen TM, Lamers MJA, Petersson KM, Gussenhoven C, Rietveld T, Poser B, Hagoort P. Phonological markers of information structure: an fMRI study. Neuropsychologia 2014; 58:64-74. [PMID: 24726334 DOI: 10.1016/j.neuropsychologia.2014.03.017] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2013] [Revised: 02/11/2014] [Accepted: 03/31/2014] [Indexed: 10/25/2022]
Abstract
In this fMRI study we investigate the neural correlates of information structure integration during sentence comprehension in Dutch. We looked into how prosodic cues (pitch accents) that signal the information status of constituents to the listener (new information) are combined with other types of information during the unification process. The difficulty of unifying the prosodic cues into overall sentence meaning was manipulated by constructing sentences in which the pitch accent did (focus-accent agreement), and sentences in which the pitch accent did not (focus-accent disagreement) match the expectations for focus constituents of the sentence. In case of a mismatch, the load on unification processes increases. Our results show two anatomically distinct effects of focus-accent disagreement, one located in the posterior left inferior frontal gyrus (LIFG, BA6/44), and one in the more anterior-ventral LIFG (BA 47/45). Our results confirm that information structure is taken into account during unification, and imply an important role for the LIFG in unification processes, in line with previous fMRI studies.
Affiliation(s)
- Tessa M van Leeuwen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands.
- Monique J A Lamers
- Department of Language and Communication, VU University, Amsterdam, The Netherlands; The Eargroup, Herentalsebaan 75, B-2100 Antwerp-Deurne, Belgium.
- Karl Magnus Petersson
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands.
- Carlos Gussenhoven
- Department of Linguistics, Radboud University Nijmegen, Nijmegen, The Netherlands.
- Toni Rietveld
- Department of Linguistics, Radboud University Nijmegen, Nijmegen, The Netherlands.
- Benedikt Poser
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands; Erwin L. Hahn Institute for Magnetic Resonance Imaging, University Duisburg-Essen, Essen, Germany.
- Peter Hagoort
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands.