1. Kurteff GL, Field AM, Asghar S, Tyler-Kabara EC, Clarke D, Weiner HL, Anderson AE, Watrous AJ, Buchanan RJ, Modur PN, Hamilton LS. Processing of auditory feedback in perisylvian and insular cortex. bioRxiv [Preprint] 2024:2024.05.14.593257. PMID: 38798574; PMCID: PMC11118286; DOI: 10.1101/2024.05.14.593257.
Abstract
When we speak, we not only make movements with our mouth, lips, and tongue, but we also hear the sound of our own voice. Thus, speech production in the brain involves not only controlling the movements we make, but also processing auditory and sensory feedback. Auditory responses are typically suppressed during speech production compared to perception, but how this suppression manifests across space and time is unclear. Here we recorded intracranial EEG in seventeen pediatric, adolescent, and adult patients with medication-resistant epilepsy who performed a reading/listening task, to investigate how auditory responses are modulated during speech production. We identified onset and sustained responses to speech in bilateral auditory cortex, with a selective suppression of onset responses during speech production. Onset responses provide a temporal landmark during speech perception that is redundant with forward prediction during speech production. Phonological feature tuning in these "onset suppression" electrodes remained stable between perception and production. Notably, the posterior insula responded at sentence onset for both perception and production, suggesting a role in multisensory integration during feedback control.
Affiliation(s)
- Garret Lynn Kurteff
- Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA
- Alyssa M. Field
- Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA
- Saman Asghar
- Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- Elizabeth C. Tyler-Kabara
- Department of Neurosurgery, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Department of Pediatrics, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Dave Clarke
- Department of Neurosurgery, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Department of Pediatrics, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Howard L. Weiner
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- Anne E. Anderson
- Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA
- Andrew J. Watrous
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- Robert J. Buchanan
- Department of Neurosurgery, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Pradeep N. Modur
- Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Liberty S. Hamilton
- Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA
- Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Lead contact
2. Kurteff GL, Lester-Smith RA, Martinez A, Currens N, Holder J, Villarreal C, Mercado VR, Truong C, Huber C, Pokharel P, Hamilton LS. Speaker-induced suppression in EEG during a naturalistic reading and listening task. J Cogn Neurosci 2023; 35:1538-1556. PMID: 37584593; DOI: 10.1162/jocn_a_02037.
Abstract
Speaking elicits a suppressed neural response compared with listening to others' speech, a phenomenon known as speaker-induced suppression (SIS). Previous research has investigated SIS at constrained levels of linguistic representation, such as the individual phoneme and word level. Here, we present scalp EEG data from a dual speech perception and production task in which participants read sentences aloud and then listened to playback of themselves reading those sentences. Playback was separated into immediate repetition of the previous trial and randomized repetition of an earlier trial, to investigate whether forward modeling of responses during passive listening suppresses the neural response. Concurrent EMG was recorded to control for movement artifact during speech production. In line with previous research, ERP analyses at the sentence level demonstrated suppression of early auditory components of the EEG for production compared with perception. To evaluate whether linguistic abstractions (in the form of phonological feature tuning) are suppressed during speech production alongside lower-level acoustic information, we fit linear encoding models that predicted scalp EEG from phonological features, EMG activity, and task condition. We found that phonological features were encoded similarly in production and perception; however, this similarity was only observed when controlling for movement by including the EMG response as an additional regressor. Our results suggest that SIS operates at a sensory representational level and is dissociated from the higher-order cognitive and linguistic processing that takes place during speech perception and production. We also detail some important considerations for analyzing EEG during continuous speech production.
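The linear encoding analysis described in this abstract (predicting scalp EEG from time-lagged phonological-feature regressors plus an EMG regressor) can be sketched in a few lines. The sketch below is a toy illustration on simulated data using ridge regression, a common estimator for such models; the variable names, lag counts, and regularization value are assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical toy data: 2000 EEG samples from one channel, predicted from
# time-lagged phonological-feature and EMG regressors (all values simulated).
rng = np.random.default_rng(0)
n_samples, n_features, n_lags = 2000, 4, 10  # 4 binary phonological features

stim = rng.integers(0, 2, size=(n_samples, n_features)).astype(float)
emg = rng.normal(size=(n_samples, 1))

def lag_matrix(X, n_lags):
    """Stack time-lagged copies of X: shape (n_samples, n_features * n_lags)."""
    lagged = [np.roll(X, lag, axis=0) for lag in range(n_lags)]
    for lag, arr in enumerate(lagged):
        arr[:lag] = 0.0  # zero out samples that wrapped around
    return np.hstack(lagged)

# Design matrix combines stimulus features with the EMG "movement" regressor.
X = np.hstack([lag_matrix(stim, n_lags), lag_matrix(emg, n_lags)])

# Simulate EEG as a linear mix of the regressors plus noise.
true_w = rng.normal(size=X.shape[1])
eeg = X @ true_w + rng.normal(scale=0.5, size=n_samples)

# Fit on the first half, evaluate prediction correlation on the held-out half.
half = n_samples // 2
model = Ridge(alpha=1.0).fit(X[:half], eeg[:half])
r = np.corrcoef(model.predict(X[half:]), eeg[half:])[0, 1]
print(f"held-out prediction correlation: {r:.2f}")
```

Including the EMG columns in the design matrix is what lets the model separate movement-related variance from stimulus-driven variance, mirroring the abstract's point that feature encoding similarity only emerged after controlling for movement.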
3. Kuhlen AK, Abdel Rahman R. Beyond speaking: neurocognitive perspectives on language production in social interaction. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210483. PMID: 36871592; PMCID: PMC9985974; DOI: 10.1098/rstb.2021.0483.
Abstract
The human faculty to speak has evolved, it has been argued, for communicating with others and for engaging in social interaction. Hence the human cognitive system should be equipped to meet the demands that social interaction places on the language production system. These demands include the need to coordinate speaking with listening, the need to integrate one's own (verbal) actions with the interlocutor's actions, and the need to adapt language flexibly to the interlocutor and the social context. To meet these demands, core processes of language production are supported by cognitive processes that enable interpersonal coordination and social cognition. To fully understand the cognitive architecture, and its neural implementation, that enables humans to speak in social interaction, our understanding of how humans produce language needs to be connected to our understanding of how humans gain insight into other people's mental states and coordinate in social interaction. This article reviews theories and neurocognitive experiments that make this connection and can contribute to advancing our understanding of speaking in social interaction. This article is part of the discussion meeting issue 'Face2face: advancing the science of social interaction'.
Affiliation(s)
- Anna K. Kuhlen
- Department of Psychology, Humboldt-Universität zu Berlin, 12489 Berlin, Germany
- Rasha Abdel Rahman
- Department of Psychology, Humboldt-Universität zu Berlin, 12489 Berlin, Germany
4. Douglas CL, Tremblay A, Newman AJ. A two for one special: EEG hyperscanning using a single-person EEG recording setup. MethodsX 2023; 10:102019. PMID: 36845372; PMCID: PMC9945774; DOI: 10.1016/j.mex.2023.102019.
Abstract
EEG hyperscanning refers to recording electroencephalographic (EEG) data from multiple participants simultaneously. Many hyperscanning experimental designs seek to mimic naturalistic behavior, relying on unpredictable, participant-generated stimuli. The majority of this research has focused on neural oscillatory activity that is quantified over hundreds of milliseconds or more. This contrasts with traditional event-related potential (ERP) research, in which analysis focuses on transient responses, often only tens of milliseconds in duration. Deriving ERPs requires precise time-locking between stimuli and EEG recordings, and thus typically relies on pre-set stimuli that are presented to participants by a system that controls stimulus timing and synchronization with an EEG system. EEG hyperscanning methods typically use separate EEG amplifiers for each participant, increasing cost and complexity, including challenges in synchronizing data between systems. Here, we describe a method that allows for simultaneous acquisition of EEG data from a pair of participants engaged in conversation, using a single EEG system with simultaneous audio data collection that is synchronized with the EEG recording. This allows for the post-hoc insertion of trigger codes so that it is possible to analyze ERPs time-locked to specific events. We further demonstrate methods for deriving ERPs elicited by another person's spontaneous speech using this setup.
- EEG hyperscanning method using a single EEG amplifier.
- Simultaneous recording of audio data directly into the EEG data file for perfect synchronization.
- An EEG method for naturalistic language and human interaction studies that allows the study of event-related potentials time-locked to spontaneous speech.
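The post-hoc trigger-insertion idea (recovering event times from an audio channel recorded in sync with the EEG, then epoching the EEG around them) might look roughly like this. This is a minimal sketch on synthetic data; the amplitude-threshold onset detector, all parameter values, and function names are illustrative assumptions, not the authors' published procedure.

```python
import numpy as np

def detect_speech_onsets(audio, sr, thresh_ratio=0.2, min_gap_s=0.5):
    """Find putative speech-onset samples in an audio channel using a
    simple amplitude-envelope threshold (thresholds would be tuned per
    recording in practice)."""
    env = np.abs(audio)
    win = max(1, int(0.05 * sr))                 # 50 ms moving average
    env = np.convolve(env, np.ones(win) / win, mode="same")
    above = env > thresh_ratio * env.max()
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1  # rising edges
    keep, last = [], -np.inf                     # enforce a refractory gap
    for o in onsets:
        if o - last >= min_gap_s * sr:
            keep.append(o)
            last = o
    return np.array(keep, dtype=int)

def epoch(eeg, events, sr, tmin=-0.2, tmax=0.8):
    """Cut EEG (channels x samples) into event-locked epochs."""
    i0, i1 = int(tmin * sr), int(tmax * sr)
    return np.stack([eeg[:, e + i0 : e + i1] for e in events
                     if e + i0 >= 0 and e + i1 <= eeg.shape[1]])

# Toy demo: 10 s of silence containing two 1 s bursts of "speech".
sr = 1000
audio = np.zeros(10 * sr)
audio[2 * sr : 3 * sr] = np.sin(2 * np.pi * 150 * np.arange(sr) / sr)
audio[6 * sr : 7 * sr] = np.sin(2 * np.pi * 150 * np.arange(sr) / sr)
events = detect_speech_onsets(audio, sr)
eeg = np.random.default_rng(0).normal(size=(4, 10 * sr))
epochs = epoch(eeg, events, sr)
print(len(events), epochs.shape)
```

Because the audio is stored in the same file as the EEG, the recovered sample indices need no cross-device clock correction, which is the core advantage the abstract describes.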
5. Boux I, Tomasello R, Grisoni L, Pulvermüller F. Brain signatures predict communicative function of speech production in interaction. Cortex 2020; 135:127-145. PMID: 33360757; DOI: 10.1016/j.cortex.2020.11.008.
Abstract
People normally know what they want to communicate before they start speaking. However, brain indicators of communication are typically observed only after speech act onset, and it is unclear when anticipatory brain activity prior to speaking first emerges, along with the communicative intentions it possibly reflects. Here, we investigated brain activity prior to the production of different speech act types: requests and naming actions, performed by uttering single words embedded in language games with a partner, similar to natural communication. Starting ca. 600 msec before speech onset, an event-related potential maximal at fronto-central electrodes, which resembled the readiness potential, was larger when preparing requests than when preparing naming actions. Analysis of the cortical sources of this anticipatory brain potential suggests a relatively stronger involvement of fronto-central motor regions for requests, which may reflect the speaker's expectation of the partner actions that typically follow requests, e.g., the handing over of a requested object. Our results indicate that the distinct neuronal circuits underlying different speech act types become active already before speaking. Results are discussed in light of previous work addressing the neural basis of speech act understanding and predictive brain indexes of language comprehension.
Affiliation(s)
- Isabella Boux
- Brain Language Laboratory, Department of Philosophy and Humanities, WE4 Freie Universität Berlin, Berlin, Germany; Einstein Center for Neurosciences, Berlin, Germany; Berlin School of Mind and Brain, Humboldt Universität zu Berlin, Berlin, Germany
- Rosario Tomasello
- Brain Language Laboratory, Department of Philosophy and Humanities, WE4 Freie Universität Berlin, Berlin, Germany; Cluster of Excellence 'Matters of Activity. Image Space Material', Humboldt Universität zu Berlin, Berlin, Germany
- Luigi Grisoni
- Brain Language Laboratory, Department of Philosophy and Humanities, WE4 Freie Universität Berlin, Berlin, Germany
- Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy and Humanities, WE4 Freie Universität Berlin, Berlin, Germany; Einstein Center for Neurosciences, Berlin, Germany; Berlin School of Mind and Brain, Humboldt Universität zu Berlin, Berlin, Germany; Cluster of Excellence 'Matters of Activity. Image Space Material', Humboldt Universität zu Berlin, Berlin, Germany
6. Schwenke D, Goregliad Fjaellingsdal T, Bleichner MG, Grage T, Scherbaum S. An approach to social flexibility: congruency effects during spontaneous word-by-word interaction. PLoS One 2020; 15:e0235083. PMID: 32579618; PMCID: PMC7313956; DOI: 10.1371/journal.pone.0235083.
Abstract
Cognitive flexibility is the ability to switch between different concepts or to adapt goal-directed behavior in a changing environment. Although cognitive research on this ability has long focused on the individual mind, it is becoming increasingly clear that cognitive flexibility plays a central role in our social life. This is particularly evident in turn-taking in verbal conversation, where the cognitive flexibility of the individual becomes part of the social flexibility of the dyadic interaction. In this work, we introduce a model that reveals different parameters that explain how people flexibly handle unexpected events in verbal conversation. To study hypotheses derived from the model, we use a novel experimental approach in which thirty pairs of participants took turns generating sentences word by word. As in well-established individual cognitive tasks, participants needed to adapt their behavior in order to respond to their co-actor's last utterance. Our experimental approach allowed us to manipulate the interaction between participants: either both participants had to construct a sentence with a common target word (congruent condition) or with distinct target words (incongruent condition). We further studied the relation between the interactive Word-by-Word task measures and classical individual-centered cognitive tasks, namely the Number-Letter task, the Stop-Signal task, and the Go/Nogo task. In the Word-by-Word task, participants had faster response times in congruent than in incongruent trials, which replicates the primary findings of standard cognitive tasks measuring cognitive flexibility. Further, we found a significant correlation between performance in the Word-by-Word task and the Stop-Signal task, indicating that participants with high cognitive flexibility in the Word-by-Word task also showed high inhibitory control.
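The congruency comparison reported in this abstract reduces, in its simplest form, to a within-pair contrast of response times between conditions. Below is a hypothetical sketch with simulated RTs and a paired t-test; the effect size and variances are made up for illustration and are not the study's data.

```python
import numpy as np
from scipy import stats

# Toy RT data (seconds): one mean RT per pair and condition, 30 pairs as
# in the study (values simulated, not the authors' data).
rng = np.random.default_rng(1)
n_pairs = 30
rt_congruent = rng.normal(loc=1.10, scale=0.15, size=n_pairs)
# Simulate a slowdown in the incongruent condition for the same pairs.
rt_incongruent = rt_congruent + rng.normal(loc=0.12, scale=0.08, size=n_pairs)

# Within-pair congruency effect and a paired t-test across pairs.
effect = rt_incongruent - rt_congruent
t, p = stats.ttest_rel(rt_incongruent, rt_congruent)
print(f"mean effect = {effect.mean() * 1000:.0f} ms, "
      f"t({n_pairs - 1}) = {t:.2f}, p = {p:.4f}")
```

Treating the pair (rather than the individual) as the unit of analysis matches the dyadic design, since both members contribute to every jointly constructed sentence.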
Affiliation(s)
- Diana Schwenke
- Department of Psychology, Technische Universität Dresden, Dresden, Germany
- Tobias Grage
- Department of Psychology, Technische Universität Dresden, Dresden, Germany
- Stefan Scherbaum
- Department of Psychology, Technische Universität Dresden, Dresden, Germany