1. Oderbolz C, Stark E, Sauppe S, Meyer M. Concurrent processing of the prosodic hierarchy is supported by cortical entrainment and phase-amplitude coupling. Cereb Cortex 2024; 34:bhae479. PMID: 39704246; DOI: 10.1093/cercor/bhae479.
Abstract
Models of phonology posit a hierarchy of prosodic units that is relatively independent of syntactic structure and requires its own parsing. How this prosodic hierarchy is represented in the brain remains unexplored. We investigated this foundational question in an electroencephalography (EEG) study. Thirty young adults listened to German sentences containing manipulations at different levels of the prosodic hierarchy. Evaluating speech-to-brain cortical entrainment and phase-amplitude coupling revealed that prosody's hierarchical structure is maintained at the neural level during spoken language comprehension. The faithfulness of this tracking varied with the degree to which the hierarchy was intact, as well as with systematic interindividual differences in audio-motor synchronization abilities. The results underscore the role of complex oscillatory mechanisms in configuring the continuous and hierarchical nature of the speech signal, and situate prosody as a structure indispensable to theoretical perspectives on spoken language comprehension in the brain.
Affiliation(s)
- Chantal Oderbolz
- Institute for the Interdisciplinary Study of Language Evolution, University of Zurich, Affolternstrasse 56, 8050 Zürich, Switzerland
- Department of Neuroscience, Georgetown University Medical Center, 3970 Reservoir Rd NW, Washington D.C. 20057, United States
- Elisabeth Stark
- Zurich Center for Linguistics, University of Zurich, Andreasstrasse 15, 8050 Zürich, Switzerland
- Institute of Romance Studies, University of Zurich, Zürichbergstrasse 8, 8032 Zürich, Switzerland
- Sebastian Sauppe
- Department of Psychology, University of Zurich, Binzmühlestrasse 14, 8050 Zürich, Switzerland
- Martin Meyer
- Institute for the Interdisciplinary Study of Language Evolution, University of Zurich, Affolternstrasse 56, 8050 Zürich, Switzerland
2. Gastaldon S, Busan P, Molinaro N, Lizarazu M. Cortical Tracking of Speech Is Reduced in Adults Who Stutter When Listening for Speaking. J Speech Lang Hear Res 2024; 67:4339-4357. PMID: 39437265; DOI: 10.1044/2024_jslhr-24-00227.
Abstract
PURPOSE The purpose of this study was to investigate cortical tracking of speech (CTS) in adults who stutter (AWS) compared to typically fluent adults (TFAs) to test the involvement of the speech-motor network in tracking rhythmic speech information. METHOD Participants' electroencephalogram was recorded while they simply listened to sentences (listening only) or completed them by naming a picture (listening for speaking), thus manipulating the upcoming involvement of speech production. We analyzed speech-brain coherence and brain connectivity during listening. RESULTS During the listening-for-speaking task, AWS exhibited reduced CTS in the 3- to 5-Hz range (theta), corresponding to the syllabic rhythm. The effect was localized in the left inferior parietal and right pre/supplementary motor regions. Connectivity analyses revealed that TFAs had stronger information transfer in the theta range in both tasks in fronto-temporo-parietal regions. When considering the whole sample of participants, increased connectivity from the right superior temporal cortex to the left sensorimotor cortex was correlated with faster naming times in the listening-for-speaking task. CONCLUSIONS Atypical speech-motor functioning in stuttering impacts speech perception, especially in situations requiring articulatory alertness. The involvement of frontal and (pre)motor regions in CTS in TFAs is highlighted. Further investigation is needed into speech perception in individuals with speech-motor deficits, especially when smooth transitioning between listening and speaking is required, such as in real-life conversational settings. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.27234885.
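For readers unfamiliar with the speech-brain coherence measure named in this abstract, it can be sketched on synthetic signals. This is a minimal illustration, not the authors' pipeline; the sampling rate, duration, and 4 Hz "syllable rhythm" are illustrative assumptions.

```python
import numpy as np
from scipy.signal import coherence

# Synthetic stand-ins: a 4 Hz envelope (syllabic rhythm) and an "EEG"
# channel that partially follows it plus noise.
fs = 250                                   # sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)               # 60 s of data
envelope = 1 + np.sin(2 * np.pi * 4 * t)   # 4 Hz syllabic rhythm
rng = np.random.default_rng(0)
eeg = 0.5 * envelope + rng.standard_normal(t.size)  # tracking + noise

# Magnitude-squared coherence, then averaged over the 3-5 Hz theta range
f, cxy = coherence(envelope, eeg, fs=fs, nperseg=fs * 4)
theta = (f >= 3) & (f <= 5)
theta_coh = cxy[theta].mean()
print(f"mean theta-band coherence: {theta_coh:.2f}")
```

A genuinely tracked rhythm yields coherence near 1 at the stimulation rate, while unrelated frequency bins stay near the noise floor, which is the contrast exploited by the group comparison above.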
Affiliation(s)
- Simone Gastaldon
- Department of Developmental and Social Psychology, University of Padua, Italy
- Padova Neuroscience Center, University of Padua, Italy
- Pierpaolo Busan
- Department of Medical, Surgical and Health Sciences, University of Trieste, Italy
- Nicola Molinaro
- Basque Center on Cognition, Brain and Language, Donostia-San Sebastián, Spain
- Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- Mikel Lizarazu
- Basque Center on Cognition, Brain and Language, Donostia-San Sebastián, Spain
3. Grent-'t-Jong T, Dheerendra P, Fusar-Poli P, Gross J, Gumley AI, Krishnadas R, Muckli LF, Uhlhaas PJ. Entrainment of neural oscillations during language processing in early-stage schizophrenia. Neuroimage Clin 2024; 44:103695. PMID: 39536523; PMCID: PMC11602575; DOI: 10.1016/j.nicl.2024.103695.
Abstract
BACKGROUND Impairments in language processing in schizophrenia (ScZ) are a central aspect of the disorder, but the underlying pathophysiological mechanisms are unclear. In the current study, we tested the hypothesis that neural oscillations are impaired during speech tracking in early-stage ScZ and in participants at clinical high-risk for psychosis (CHR-P). METHOD Magnetoencephalography (MEG) was used in combination with source-reconstructed time series to examine delta- and theta-band entrainment during continuous speech. Participants were presented with a 5-minute audio recording during which they attended to either the story or the word level. MEG data were obtained from n = 22 CHR-P participants, n = 23 early-stage ScZ patients, and n = 44 healthy controls (HC). Data were analysed with a Mutual Information (MI) approach to compute statistical dependence between the MEG and auditory signals, thus estimating individual speech-tracking ability. MEG activity was reconstructed in a language network (bilateral inferior frontal cortex [F3T; Broca's area], superior temporal areas [STS3, STS4; Wernicke's areas], and primary auditory cortex [bilateral HES; Heschl's gyrus]). MEG data were correlated with clinical symptoms. RESULTS Theta-band entrainment in left Heschl's gyrus, averaged across groups, was significantly lower in the STORY than in the WORD condition (p = 0.022) and, averaged over conditions, significantly lower in CHR-Ps (p = 0.045) but intact in early-stage ScZ patients (p = 0.303), compared to controls. Correlation analyses between MEG data and symptoms indicated that lower theta-band tracking in CHR-Ps was linked to the severity of perceptual abnormalities (p = 0.018). CONCLUSION Our results show that CHR-P participants exhibit impairments in theta-band entrainment during speech tracking in left primary auditory cortex, while higher-order speech processing areas were intact.
Moreover, the severity of aberrant perceptual experiences in CHR-P participants correlated with deficits in theta-band entrainment. Together, these findings highlight the possibility that neural oscillations during language processing could reveal fundamental abnormalities in speech processing which may constitute candidate biomarkers for early detection and diagnosis of ScZ.
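The Mutual Information approach named in this abstract quantifies statistical dependence between the neural and auditory signals. A simple histogram-based MI estimator on synthetic data gives the flavor; this is an illustrative sketch, not the study's exact estimator, data, or bin settings.

```python
import numpy as np

# Histogram-based mutual information (in bits) between two signals.
def mutual_information(x, y, bins=16):
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()              # joint distribution
    px = pxy.sum(axis=1, keepdims=True)    # marginal of x
    py = pxy.sum(axis=0, keepdims=True)    # marginal of y
    nz = pxy > 0                           # skip empty cells: avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
envelope = rng.standard_normal(20000)                   # "auditory" signal
tracking = envelope + 0.5 * rng.standard_normal(20000)  # follows the stimulus
unrelated = rng.standard_normal(20000)                  # independent signal

print(mutual_information(envelope, tracking))   # clearly positive
print(mutual_information(envelope, unrelated))  # near zero
```

Higher MI between the recorded signal and the speech signal is read as stronger individual speech tracking, which is the quantity compared across groups above.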
Affiliation(s)
- Tineke Grent-'t-Jong
- Department of Child and Adolescent Psychiatry, Charité Universitätsmedizin, Berlin, Germany
- Paolo Fusar-Poli
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Early Psychosis: Interventions and Clinical-detection (EPIC) Lab, Department of Psychosis Studies, King's College London, UK
- Outreach and Support in South-London (OASIS) service, South London and Maudsley (SLaM) NHS Foundation Trust, UK
- Department of Psychiatry and Psychotherapy, University Hospital, Ludwig-Maximilian-University (LMU), Munich, Germany
- Joachim Gross
- Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster, Germany
- Lars F Muckli
- School of Psychology and Neuroscience, University of Glasgow, UK
- Peter J Uhlhaas
- Department of Child and Adolescent Psychiatry, Charité Universitätsmedizin, Berlin, Germany
- School of Psychology and Neuroscience, University of Glasgow, UK
4. Zhang M, Riecke L, Bonte M. Cortical tracking of language structures: Modality-dependent and independent responses. Clin Neurophysiol 2024; 166:56-65. PMID: 39111244; DOI: 10.1016/j.clinph.2024.07.012.
Abstract
OBJECTIVES The mental parsing of linguistic hierarchy is crucial for language comprehension, and while there is growing interest in the cortical tracking of auditory speech, the neurophysiological substrates for tracking written language are still unclear. METHODS We recorded electroencephalographic (EEG) responses from participants exposed to auditory and visual streams of either random syllables or tri-syllabic real words. Using a frequency-tagging approach, we analyzed the neural representations of physically presented (i.e., syllables) and mentally constructed (i.e., words) linguistic units and compared them between the two sensory modalities. RESULTS We found that tracking syllables is partially modality dependent, with anterior and posterior scalp regions more involved in the tracking of spoken and written syllables, respectively. The cortical tracking of spoken and written words instead was found to involve a shared anterior region to a similar degree, suggesting a modality-independent process for word tracking. CONCLUSION Our study suggests that basic linguistic features are represented in a sensory modality-specific manner, while more abstract ones are modality-unspecific during the online processing of continuous language input. SIGNIFICANCE The current methodology may be utilized in future research to examine the development of reading skills, especially the deficiencies in fluent reading among those with dyslexia.
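The frequency-tagging logic used in this study can be illustrated on synthetic data: syllables presented at a fixed rate form tri-syllabic words at one third of that rate, so a response that tracks the mentally constructed words should show a spectral peak at the word rate in addition to the syllable-rate peak. The 4 Hz rate and all signal parameters below are illustrative assumptions, not the study's settings.

```python
import numpy as np

fs, dur = 100, 120                        # sampling rate (Hz), seconds
t = np.arange(0, dur, 1 / fs)
syll_rate, word_rate = 4.0, 4.0 / 3.0     # tri-syllabic words
rng = np.random.default_rng(2)
signal = (np.sin(2 * np.pi * syll_rate * t)          # syllable-rate response
          + 0.6 * np.sin(2 * np.pi * word_rate * t)  # word-rate response
          + rng.standard_normal(t.size))             # background activity

# Amplitude spectrum; dur is chosen so both rates fall on exact FFT bins
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def peak(f0):
    """Amplitude at the frequency bin nearest f0."""
    return spectrum[np.argmin(np.abs(freqs - f0))]

# Tagged rates stand out against a control frequency (2.5 Hz, noise only)
print(peak(syll_rate), peak(word_rate), peak(2.5))
```

Comparing the tagged-frequency peaks against neighbouring (untagged) bins, separately per sensory modality, is what supports the modality-dependent vs. modality-independent conclusions above.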
Affiliation(s)
- Manli Zhang
- Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Lars Riecke
- Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Milene Bonte
- Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
5. Gómez-Lombardi A, Costa BG, Gutiérrez PP, Carvajal PM, Rivera LZ, El-Deredy W. The cognitive triad network-oscillation-behaviour links individual differences in EEG theta frequency with task performance and effective connectivity. Sci Rep 2024; 14:21482. PMID: 39277643; PMCID: PMC11401920; DOI: 10.1038/s41598-024-72229-x.
Abstract
We reconcile two significant lines of cognitive neuroscience research: the relationship between the structural and functional architecture of the brain and behaviour on the one hand, and the functional significance of oscillatory brain processes for behavioural performance on the other. Network neuroscience proposes that these three elements (behavioural performance, EEG oscillation frequency, and network connectivity) should be tightly connected at the individual level. Young and old healthy adults were recruited as a proxy for performance variation. An auditory inhibitory control task was used to demonstrate that task performance correlates with the individual EEG frontal theta frequency. Older adults had a significantly slower theta frequency, and both theta frequency and task performance correlated with the strengths of two network connections that involve the main areas of inhibitory control and speech processing. The results suggest that the recruited functional network and the oscillation frequency induced by the task are specific to the task, are inseparable, and mark individual differences that directly link structure and function to behaviour in health and disease.
Affiliation(s)
- Andre Gómez-Lombardi
- Brain Dynamics Laboratory, Universidad de Valparaíso, Valparaíso, Chile.
- Centro de Investigación del Desarrollo en Cognición y Lenguaje, Universidad de Valparaíso, Valparaíso, Chile.
- Begoña Góngora Costa
- Centro de Investigación del Desarrollo en Cognición y Lenguaje, Universidad de Valparaíso, Valparaíso, Chile
- Pavel Prado Gutiérrez
- Escuela de Fonoaudiología, Facultad de Odontología y Ciencias de la Rehabilitación, Universidad San Sebastián, Santiago, Chile
- Pablo Muñoz Carvajal
- Centro para la Investigación Traslacional en Neurofarmacología, Escuela de Medicina, Facultad de Medicina, Universidad de Valparaíso, Valparaíso, Chile
- Lucía Z Rivera
- Centro Avanzado de Ingeniería Eléctrica y Electrónica, Universidad Técnica Federico Santa María, Valparaíso, Chile
- Wael El-Deredy
- Brain Dynamics Laboratory, Universidad de Valparaíso, Valparaíso, Chile
- Department of Electronic Engineering, School of Engineering, Universitat de València, Valencia, Spain
6. Kasten FH, Busson Q, Zoefel B. Opposing neural processing modes alternate rhythmically during sustained auditory attention. Commun Biol 2024; 7:1125. PMID: 39266696; PMCID: PMC11393317; DOI: 10.1038/s42003-024-06834-x.
Abstract
During continuous tasks, humans show spontaneous fluctuations in performance, putatively caused by varying attentional resources allocated to processing external information. If neural resources are instead used to process other, presumably "internal" information, sensory input can be missed, which may explain the apparent dichotomy of "internal" versus "external" attention. In the current study, we extract presumed neural signatures of these attentional modes in human electroencephalography (EEG): neural entrainment and α-oscillations (~10 Hz), linked to the processing and suppression of sensory information, respectively. We test whether they exhibit structured fluctuations over time while listeners attend to an ecologically relevant stimulus, such as speech, and complete a task that requires full and continuous attention. Results show an antagonistic relation between neural entrainment to speech and spontaneous α-oscillations in two distinct brain networks: one specialized in the processing of external information, the other reminiscent of the dorsal attention network. These opposing neural modes undergo slow, periodic fluctuations at ~0.07 Hz and are related to the detection of auditory targets. Our study may have tapped into a general attentional mechanism that is conserved across species and has important implications for situations in which sustained attention to sensory information is critical.
Affiliation(s)
- Florian H Kasten
- Department for Cognitive, Affective, Behavioral Neuroscience with Focus Neurostimulation, Institute of Psychology, University of Trier, Trier, Germany
- Centre de Recherche Cerveau & Cognition, CNRS, Toulouse, France
- Université Toulouse III Paul Sabatier, Toulouse, France
- Benedikt Zoefel
- Centre de Recherche Cerveau & Cognition, CNRS, Toulouse, France
- Université Toulouse III Paul Sabatier, Toulouse, France
7. MacLean J, Stirn J, Bidelman GM. Auditory-motor entrainment and listening experience shape the perceptual learning of concurrent speech. bioRxiv [Preprint] 2024:2024.07.18.604167. PMID: 39071391; PMCID: PMC11275804; DOI: 10.1101/2024.07.18.604167.
Abstract
Background Plasticity from auditory experience shapes the brain's encoding and perception of sound. Though prior research demonstrates that neural entrainment (i.e., brain-to-acoustic synchronization) aids speech perception, how long- and short-term plasticity influence entrainment to concurrent speech has not been investigated. Here, we explored neural entrainment mechanisms and the interplay between short- and long-term neuroplasticity for rapid auditory perceptual learning of concurrent speech sounds in young, normal-hearing musicians and nonmusicians. Method Participants learned to identify double-vowel mixtures during ∼45-min training sessions with concurrent high-density EEG recordings. We examined the degree to which brain responses entrained to the speech-stimulus train (∼9 Hz) to investigate whether entrainment to speech prior to the behavioral decision predicted task performance. Source and directed functional connectivity analyses of the EEG probed whether behavior was driven by group differences in auditory-motor coupling. Results Both musicians and nonmusicians showed rapid perceptual learning in accuracy with training. Interestingly, listeners' neural entrainment strength prior to target speech mixtures predicted behavioral identification performance; stronger neural synchronization was observed preceding incorrect compared to correct trial responses. We also found stark hemispheric biases in auditory-motor coupling during speech entrainment, with greater auditory-motor connectivity in the right compared to the left hemisphere for musicians (R > L) but not for nonmusicians (R = L). Conclusions Our findings confirm stronger neuroacoustic synchronization and auditory-motor coupling during speech processing in musicians. Stronger neural entrainment to rapid stimulus trains preceding incorrect behavioral responses supports the notion that alpha-band (∼10 Hz) arousal/suppression in brain activity is an important modulator of trial-by-trial success in perceptual processing.
8. Corsini A, Tomassini A, Pastore A, Delis I, Fadiga L, D'Ausilio A. Speech perception difficulty modulates theta-band encoding of articulatory synergies. J Neurophysiol 2024; 131:480-491. PMID: 38323331; DOI: 10.1152/jn.00388.2023.
Abstract
The human brain tracks available speech acoustics and extrapolates missing information such as the speaker's articulatory patterns. However, the extent to which articulatory reconstruction supports speech perception remains unclear. This study explores the relationship between articulatory reconstruction and task difficulty. Participants listened to sentences and performed a speech-rhyming task. Real kinematic data of the speaker's vocal tract were recorded via electromagnetic articulography (EMA) and aligned to corresponding acoustic outputs. We extracted articulatory synergies from the EMA data with principal component analysis (PCA) and employed partial information decomposition (PID) to separate the electroencephalographic (EEG) encoding of acoustic and articulatory features into unique, redundant, and synergistic atoms of information. We median-split sentences into easy (ES) and hard (HS) based on participants' performance and found that greater task difficulty involved greater encoding of unique articulatory information in the theta band. We conclude that fine-grained articulatory reconstruction plays a complementary role in the encoding of speech acoustics, lending further support to the claim that motor processes support speech perception. NEW & NOTEWORTHY Top-down processes originating from the motor system contribute to speech perception through the reconstruction of the speaker's articulatory movement. This study investigates the role of such articulatory simulation under variable task difficulty. We show that more challenging listening tasks lead to increased encoding of articulatory kinematics in the theta band and suggest that, in such situations, fine-grained articulatory reconstruction complements acoustic encoding.
Affiliation(s)
- Alessandro Corsini
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
- Alice Tomassini
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
- Aldo Pastore
- Laboratorio NEST, Scuola Normale Superiore, Pisa, Italy
- Ioannis Delis
- School of Biomedical Sciences, University of Leeds, Leeds, United Kingdom
- Luciano Fadiga
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
- Alessandro D'Ausilio
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
9. Ershaid H, Lizarazu M, McLaughlin D, Cooke M, Simantiraki O, Koutsogiannaki M, Lallier M. Contributions of listening effort and intelligibility to cortical tracking of speech in adverse listening conditions. Cortex 2024; 172:54-71. PMID: 38215511; DOI: 10.1016/j.cortex.2023.11.018.
Abstract
Cortical tracking of speech is vital for speech segmentation and is linked to speech intelligibility. However, there is no clear consensus as to whether reduced intelligibility leads to a decrease or an increase in cortical speech tracking, warranting further investigation of the factors influencing this relationship. One such factor is listening effort, defined as the cognitive resources necessary for speech comprehension, and reported to have a strong negative correlation with speech intelligibility. Yet, no studies have examined the relationship between speech intelligibility, listening effort, and cortical tracking of speech. The aim of the present study was thus to examine these factors in quiet and in distinct adverse listening conditions. Forty-nine normal-hearing adults listened to casually produced sentences presented in quiet and in two adverse listening conditions: cafeteria noise and reverberant speech. Electrophysiological responses were recorded with electroencephalography, and listening effort was estimated subjectively using self-reported scores and objectively using pupillometry. Results indicated varying impacts of the adverse conditions on intelligibility, listening effort, and cortical tracking of speech, depending on the preservation of the speech temporal envelope. The more distorted envelope in the reverberant condition led to higher listening effort, as reflected in higher subjective scores, increased pupil diameter, and stronger cortical tracking of speech in the delta band. These findings suggest that using measures of listening effort in addition to those of intelligibility is useful for interpreting cortical tracking of speech results. Moreover, participants' reading and phonological skills were positively correlated with listening effort in the cafeteria condition, suggesting a special role of expert language skills in processing speech in this noisy condition.
Implications for future research and theories linking atypical cortical tracking of speech and reading disorders are further discussed.
Affiliation(s)
- Hadeel Ershaid
- Basque Center on Cognition, Brain and Language, San Sebastian, Spain.
- Mikel Lizarazu
- Basque Center on Cognition, Brain and Language, San Sebastian, Spain
- Drew McLaughlin
- Basque Center on Cognition, Brain and Language, San Sebastian, Spain
- Martin Cooke
- Ikerbasque, Basque Science Foundation, Bilbao, Spain
- Marie Lallier
- Basque Center on Cognition, Brain and Language, San Sebastian, Spain
- Ikerbasque, Basque Science Foundation, Bilbao, Spain
10. Zoefel B, Kösem A. Neural tracking of continuous acoustics: properties, speech-specificity and open questions. Eur J Neurosci 2024; 59:394-414. PMID: 38151889; DOI: 10.1111/ejn.16221.
Abstract
Human speech is a particularly relevant acoustic stimulus for our species, due to its role in information transmission during communication. Speech is inherently a dynamic signal, and a recent line of research has focused on neural activity that follows the temporal structure of speech. We review findings characterising neural dynamics in the processing of continuous acoustics, which allow us to compare these dynamics with temporal aspects of human speech. We highlight properties and constraints shared by neural and speech dynamics, suggesting that auditory neural systems are optimised to process human speech. We then discuss the speech-specificity of neural dynamics and their potential mechanistic origins, and summarise open questions in the field.
Affiliation(s)
- Benedikt Zoefel
- Centre de Recherche Cerveau et Cognition (CerCo), CNRS UMR 5549, Toulouse, France
- Université de Toulouse III Paul Sabatier, Toulouse, France
- Anne Kösem
- Lyon Neuroscience Research Center (CRNL), INSERM U1028, Bron, France
11. Smith TM, Shen Y, Williams CN, Kidd GR, McAuley JD. Contribution of speech rhythm to understanding speech in noisy conditions: Further test of a selective entrainment hypothesis. Atten Percept Psychophys 2024; 86:627-642. PMID: 38012475; DOI: 10.3758/s13414-023-02815-0.
Abstract
Previous work by McAuley et al. (Attention, Perception, & Psychophysics, 82, 3222-3233, 2020; 83, 2229-2240, 2021) showed that disruption of the natural rhythm of target (attended) speech worsens speech recognition in the presence of competing background speech or noise (a target-rhythm effect), while disruption of background speech rhythm improves target recognition (a background-rhythm effect). While these results were interpreted as support for the role of rhythmic regularities in facilitating target-speech recognition amidst competing backgrounds (in line with a selective entrainment hypothesis), questions remain about the factors that contribute to the target-rhythm effect. Experiment 1 ruled out the possibility that the target-rhythm effect relies on a decrease in intelligibility of the rhythm-altered keywords. Sentences from the Coordinate Response Measure (CRM) paradigm were presented with a background of speech-shaped noise, and the rhythm of the initial portion of these target sentences (the target rhythmic context) was altered while critically leaving the target Color and Number keywords intact. Results showed a target-rhythm effect, evidenced by poorer keyword recognition when the target rhythmic context was altered, despite the absence of rhythmic manipulation of the keywords. Experiment 2 examined the influence of the relative onset asynchrony between target and background keywords. Results showed a significant target-rhythm effect that was independent of the effect of target-background keyword onset asynchrony. Experiment 3 provided additional support for the selective entrainment hypothesis by replicating the target-rhythm effect with a set of speech materials that were less rhythmically constrained than the CRM sentences.
Affiliation(s)
- Toni M Smith
- Department of Psychology, Michigan State University, East Lansing, MI, USA.
- Yi Shen
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- Christina N Williams
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- Gary R Kidd
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- J Devin McAuley
- Department of Psychology, Michigan State University, East Lansing, MI, USA
12. Cabral-Calderin Y, van Hinsberg D, Thielscher A, Henry MJ. Behavioral entrainment to rhythmic auditory stimulation can be modulated by tACS depending on the electrical stimulation field properties. eLife 2024; 12:RP87820. PMID: 38289225; PMCID: PMC10945705; DOI: 10.7554/elife.87820.
Abstract
Synchronization between auditory stimuli and brain rhythms is beneficial for perception. In principle, auditory perception could be improved by facilitating neural entrainment to sounds via brain stimulation. However, high inter-individual variability of brain stimulation effects calls the usefulness of this approach into question. Here, we aimed to modulate auditory perception by modulating neural entrainment to frequency-modulated (FM) sounds using transcranial alternating current stimulation (tACS). In addition, we evaluated the advantage of tACS montages spatially optimized for each individual's anatomy and functional data over a standard montage applied to all participants. Across two different sessions, 2 Hz tACS was applied targeting auditory brain regions. Concurrent with tACS, participants listened to FM stimuli with a modulation rate matching the tACS frequency but with different phase lags relative to the tACS, and detected silent gaps embedded in the FM sound. We observed that tACS modulated the strength of behavioral entrainment to the FM sound in a phase-lag specific manner. Both the optimal tACS lag and the magnitude of the tACS effect were variable across participants and sessions. Inter-individual variability of tACS effects was best explained by the strength of the inward electric field, which depends on the field's focality and proximity to the target brain region. Although additional evidence is necessary, our results suggest that spatially optimizing the electrode montage could be a promising tool for reducing inter-individual variability of tACS effects. This work demonstrates that tACS effectively modulates entrainment to sounds depending on the optimality of the electric field. However, the lack of reliability of optimal tACS lags across sessions calls for caution when planning tACS experiments based on separate sessions.
Affiliation(s)
- Axel Thielscher
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Copenhagen, Denmark
- Section for Magnetic Resonance, DTU Health Tech, Technical University of Denmark, Copenhagen, Denmark
- Molly J Henry
- Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Toronto Metropolitan University, Toronto, Canada
13
Guerra G, Tierney A, Tijms J, Vaessen A, Bonte M, Dick F. Attentional modulation of neural sound tracking in children with and without dyslexia. Dev Sci 2024; 27:e13420. [PMID: 37350014 DOI: 10.1111/desc.13420]
Abstract
Auditory selective attention forms an important foundation of children's learning by enabling the prioritisation and encoding of relevant stimuli. It may also influence reading development, which relies on metalinguistic skills including the awareness of the sound structure of spoken language. Reports of attentional impairments and speech perception difficulties in noisy environments in dyslexic readers are also suggestive of a contribution of auditory attention to reading development. To date, it is unclear whether non-speech selective attention and its underlying neural mechanisms are impaired in children with dyslexia and to what extent these deficits relate to individual reading and speech perception abilities in suboptimal listening conditions. In this EEG study, we assessed non-speech sustained auditory selective attention in 106 7-to-12-year-old children with and without dyslexia. Children attended to one of two tone streams, detecting occasional sequence repeats in the attended stream, and performed a speech-in-speech perception task. Results show that when children directed their attention to one stream, inter-trial phase coherence at the attended rate increased at fronto-central sites; this, in turn, was associated with better target detection. Behavioural and neural indices of attention did not systematically differ as a function of dyslexia diagnosis. However, behavioural indices of attention did explain individual differences in reading fluency and speech-in-speech perception abilities: both these skills were impaired in dyslexic readers. Taken together, our results show that children with dyslexia do not exhibit group-level auditory attention deficits, but that attentional difficulties may represent a risk factor for developing reading impairments and problems with speech perception in complex acoustic environments.
RESEARCH HIGHLIGHTS:
- Non-speech sustained auditory selective attention modulates EEG phase coherence in children with/without dyslexia.
- Children with dyslexia show difficulties in speech-in-speech perception.
- Attention relates to dyslexic readers' speech-in-speech perception and reading skills.
- Dyslexia diagnosis is not linked to behavioural/EEG indices of auditory attention.
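The inter-trial phase coherence measure this abstract relies on quantifies how consistently the EEG phase at the attended stream's rate lines up across trials. A generic sketch on synthetic trials; this illustrates the measure itself, not the study's analysis code, and all parameter values are invented:

```python
import numpy as np

def itpc(trials, fs, freq):
    """Inter-trial phase coherence at one frequency: length of the mean
    unit phase vector across trials (0 = random phase, 1 = perfect locking)."""
    t = np.arange(trials.shape[1]) / fs
    coefs = trials @ np.exp(-2j * np.pi * freq * t)  # Fourier coefficient per trial
    return np.abs(np.mean(np.exp(1j * np.angle(coefs))))

rng = np.random.default_rng(0)
fs, rate = 250, 4.0              # e.g. a 4 Hz tone-stream rate
t = np.arange(fs) / fs           # 1-s trials
# 40 trials phase-locked to the stream vs. 40 trials with random phase
locked = np.sin(2 * np.pi * rate * t) + 0.5 * rng.standard_normal((40, fs))
random_phase = np.stack([np.sin(2 * np.pi * rate * t + p)
                         for p in rng.uniform(0, 2 * np.pi, 40)])
random_phase += 0.5 * rng.standard_normal((40, fs))
```

Phase-locked trials yield ITPC near 1, while trials with random phase offsets yield values near the chance floor, which is the contrast attention is expected to modulate.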
Affiliation(s)
- Giada Guerra
- Centre for Brain and Cognitive Development, Birkbeck College, University of London, London, UK
- Maastricht Brain Imaging Center and Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Adam Tierney
- Centre for Brain and Cognitive Development, Birkbeck College, University of London, London, UK
- Jurgen Tijms
- RID, Amsterdam, Netherlands
- Rudolf Berlin Center, Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- Milene Bonte
- Maastricht Brain Imaging Center and Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Frederic Dick
- Division of Psychology & Language Sciences, UCL, London, UK
14
Assaneo MF, Orpella J. Rhythms in Speech. Adv Exp Med Biol 2024; 1455:257-274. [PMID: 38918356 DOI: 10.1007/978-3-031-60183-5_14]
Abstract
Speech can be defined as the human ability to communicate through a sequence of vocal sounds. Consequently, speech requires an emitter (the speaker) capable of generating the acoustic signal and a receiver (the listener) able to successfully decode the sounds produced by the emitter (i.e., the acoustic signal). Time plays a central role at both ends of this interaction. On the one hand, speech production requires precise and rapid coordination, typically within the order of milliseconds, of the upper vocal tract articulators (i.e., tongue, jaw, lips, and velum), their composite movements, and the activation of the vocal folds. On the other hand, the generated acoustic signal unfolds in time, carrying information at different timescales. This information must be parsed and integrated by the receiver for the correct transmission of meaning. This chapter describes the temporal patterns that characterize the speech signal and reviews research that explores the neural mechanisms underlying the generation of these patterns and the role they play in speech comprehension.
Affiliation(s)
- M Florencia Assaneo
- Instituto de Neurobiología, Universidad Autónoma de México, Santiago de Querétaro, Mexico.
- Joan Orpella
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
15
Batterink LJ, Mulgrew J, Gibbings A. Rhythmically Modulating Neural Entrainment during Exposure to Regularities Influences Statistical Learning. J Cogn Neurosci 2024; 36:107-127. [PMID: 37902580 DOI: 10.1162/jocn_a_02079]
Abstract
The ability to discover regularities in the environment, such as syllable patterns in speech, is known as statistical learning. Previous studies have shown that statistical learning is accompanied by neural entrainment, in which neural activity temporally aligns with repeating patterns over time. However, it is unclear whether these rhythmic neural dynamics play a functional role in statistical learning or whether they largely reflect the downstream consequences of learning, such as the enhanced perception of learned words in speech. To better understand this issue, we manipulated participants' neural entrainment during statistical learning using continuous rhythmic visual stimulation. Participants were exposed to a speech stream of repeating nonsense words while viewing either (1) a visual stimulus with a "congruent" rhythm that aligned with the word structure, (2) a visual stimulus with an incongruent rhythm, or (3) a static visual stimulus. Statistical learning was subsequently measured using both an explicit and implicit test. Participants in the congruent condition showed a significant increase in neural entrainment over auditory regions at the relevant word frequency, over and above effects of passive volume conduction, indicating that visual stimulation successfully altered neural entrainment within relevant neural substrates. Critically, during the subsequent implicit test, participants in the congruent condition showed an enhanced ability to predict upcoming syllables and stronger neural phase synchronization to component words, suggesting that they had gained greater sensitivity to the statistical structure of the speech stream relative to the incongruent and static groups. This learning benefit could not be attributed to strategic processes, as participants were largely unaware of the contingencies between the visual stimulation and embedded words. 
These results indicate that manipulating neural entrainment during exposure to regularities influences statistical learning outcomes, suggesting that neural entrainment may functionally contribute to statistical learning. Our findings encourage future studies using non-invasive brain stimulation methods to further understand the role of entrainment in statistical learning.
16
Ortiz-Barajas MC, Guevara R, Gervain J. Neural oscillations and speech processing at birth. iScience 2023; 26:108187. [PMID: 37965146 PMCID: PMC10641252 DOI: 10.1016/j.isci.2023.108187]
Abstract
Are neural oscillations biologically endowed building blocks of the neural architecture for speech processing from birth, or do they require experience to emerge? In adults, delta, theta, and low-gamma oscillations support the simultaneous processing of phrasal, syllabic, and phonemic units in the speech signal, respectively. Using electroencephalography to investigate neural oscillations in the newborn brain, we reveal that delta and theta oscillations differ for rhythmically different languages, suggesting that these bands underlie newborns' universal ability to discriminate languages on the basis of rhythm. Additionally, higher theta activity during post-stimulus as compared to pre-stimulus rest suggests that stimulation after-effects are present from birth.
Affiliation(s)
- Maria Clemencia Ortiz-Barajas
- Integrative Neuroscience and Cognition Center, CNRS & Université Paris Cité, 45 rue des Saints-Pères, 75006 Paris, France
- Ramón Guevara
- Department of Physics and Astronomy, University of Padua, Via Marzolo 8, 35131 Padua, Italy
- Judit Gervain
- Integrative Neuroscience and Cognition Center, CNRS & Université Paris Cité, 45 rue des Saints-Pères, 75006 Paris, France
- Department of Developmental and Social Psychology, University of Padua, Via Venezia 8, 35131 Padua, Italy
17
Jiang Z, An X, Liu S, Yin E, Yan Y, Ming D. Neural oscillations reflect the individual differences in the temporal perception of audiovisual speech. Cereb Cortex 2023; 33:10575-10583. [PMID: 37727958 DOI: 10.1093/cercor/bhad304]
Abstract
Multisensory integration occurs within a limited time interval between multimodal stimuli. Multisensory temporal perception varies widely among individuals and involves perceptual synchrony and temporal sensitivity processes. Previous studies explored the neural mechanisms of individual differences for beep-flash stimuli, whereas no study had done so for speech. In this study, 28 subjects (16 male) performed an audiovisual speech (/ba/) simultaneity judgment task while their electroencephalography was recorded. We examined the relationship between prestimulus neural oscillations (i.e. the pre-pronunciation movement-related oscillations) and temporal perception. Perceptual synchrony was quantified using the Point of Subjective Simultaneity and temporal sensitivity using the Temporal Binding Window. Our results revealed dissociated neural mechanisms for individual differences in the Temporal Binding Window and the Point of Subjective Simultaneity. Frontocentral delta power, reflecting top-down attention control, is positively related to the magnitude of individual auditory-leading Temporal Binding Windows (LTBWs), whereas parieto-occipital theta power, indexing bottom-up visual temporal attention specific to speech, is negatively associated with the magnitude of individual visual-leading Temporal Binding Windows (RTBWs). In addition, increased left frontal and bilateral temporoparietal occipital alpha power, reflecting general attentional states, is associated with increased Points of Subjective Simultaneity. Strengthening attention abilities might improve the audiovisual temporal perception of speech and further impact speech integration.
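The Point of Subjective Simultaneity (PSS) and Temporal Binding Window (TBW) are typically read off a Gaussian fit to the proportion of "simultaneous" responses across audiovisual onset asynchronies. A hedged sketch on synthetic data; the study's exact fitting procedure and window definition may differ, and all numbers here are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, peak, pss, sigma):
    """Proportion of 'simultaneous' responses as a function of SOA (ms)."""
    return peak * np.exp(-(soa - pss) ** 2 / (2 * sigma ** 2))

# synthetic judgment data (negative SOA = auditory leading); true PSS = 30 ms
soas = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)
p_simultaneous = gaussian(soas, 0.95, 30.0, 120.0)
(peak, pss, sigma), _ = curve_fit(gaussian, soas, p_simultaneous, p0=[1.0, 0.0, 100.0])
tbw = 2 * abs(sigma)   # one common width-based definition of the binding window
```

The fitted center gives the PSS and the fitted width gives the TBW; asymmetric windows (separate auditory-leading and visual-leading halves, as in the abstract) would fit the two sides of the curve separately.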
Affiliation(s)
- Zeliang Jiang
- Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, China
- Xingwei An
- Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, China
- Shuang Liu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, China
- Erwei Yin
- Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, China
- Defense Innovation Institute, Academy of Military Sciences (AMS), 100071 Beijing, China
- Tianjin Artificial Intelligence Innovation Center (TAIIC), 300457 Tianjin, China
- Ye Yan
- Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, China
- Defense Innovation Institute, Academy of Military Sciences (AMS), 100071 Beijing, China
- Tianjin Artificial Intelligence Innovation Center (TAIIC), 300457 Tianjin, China
- Dong Ming
- Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, China
18
Rong P, Benson J. Intergenerational choral singing to improve communication outcomes in Parkinson's disease: Development of a theoretical framework and an integrated measurement tool. Int J Speech Lang Pathol 2023; 25:722-745. [PMID: 36106430 DOI: 10.1080/17549507.2022.2110281]
Abstract
Purpose: This study presented an initial step towards developing the evidence base for intergenerational choral singing as a communication-focussed rehabilitative approach for Parkinson's disease (PD). Method: A theoretical framework was established to conceptualise the rehabilitative effect of intergenerational choral singing on four domains of communication impairments (motor drive, timing mechanism, sensorimotor integration, and higher-level cognitive and affective functions) as well as on activity/participation and quality of life. A computer-assisted multidimensional acoustic analysis was developed to objectively assess the targeted domains of communication impairments. The Voice Handicap Index and the World Health Organization's Quality of Life assessment (abbreviated version) were used to obtain patient-reported outcomes at the activity/participation and quality of life levels. As a proof of concept, a single subject with PD was recruited to participate in 9 weekly 1-h intergenerational choir rehearsals. The subject was assessed before, 1 week post, and 8 weeks post-choir. Result: Notable trends of improvement were observed in multiple domains of communication impairments at 1 week post-choir. Some improvements were maintained at 8 weeks post-choir. Patient-reported outcomes exhibited limited pre-post changes. Conclusion: This study provided the theoretical groundwork and an empirical measurement tool for future validation of intergenerational choral singing as a novel rehabilitation approach for PD.
Affiliation(s)
- Panying Rong
- Department of Speech-Language-Hearing: Sciences & Disorders, University of Kansas, Lawrence, KS, USA
19
Mohammadi Y, Graversen C, Østergaard J, Andersen OK, Reichenbach T. Phase-locking of Neural Activity to the Envelope of Speech in the Delta Frequency Band Reflects Differences between Word Lists and Sentences. J Cogn Neurosci 2023; 35:1301-1311. [PMID: 37379482 DOI: 10.1162/jocn_a_02016]
Abstract
The envelope of a speech signal is tracked by neural activity in the cerebral cortex. The cortical tracking occurs mainly in two frequency bands, theta (4-8 Hz) and delta (1-4 Hz). Tracking in the faster theta band has been mostly associated with lower-level acoustic processing, such as the parsing of syllables, whereas the slower tracking in the delta band relates to higher-level linguistic information of words and word sequences. However, much regarding the more specific association between cortical tracking and acoustic as well as linguistic processing remains to be uncovered. Here, we recorded EEG responses to both meaningful sentences and random word lists in different levels of signal-to-noise ratios (SNRs) that lead to different levels of speech comprehension as well as listening effort. We then related the neural signals to the acoustic stimuli by computing the phase-locking value (PLV) between the EEG recordings and the speech envelope. We found that the PLV in the delta band increases with increasing SNR for sentences but not for the random word lists, showing that the PLV in this frequency band reflects linguistic information. When attempting to disentangle the effects of SNR, speech comprehension, and listening effort, we observed a trend that the PLV in the delta band might reflect listening effort rather than the other two variables, although the effect was not statistically significant. In summary, our study shows that the PLV in the delta band reflects linguistic information and might be related to listening effort.
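The phase-locking value (PLV) used above is the magnitude of the mean phase-difference vector between band-limited EEG and the speech envelope. A minimal illustration with synthetic signals; the filter choices and parameter values are assumptions for the sketch, not the authors' settings:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def plv(x, y, fs, band):
    """Phase-locking value between two signals within a frequency band:
    magnitude of the mean unit vector of their instantaneous phase difference."""
    sos = butter(4, band, btype="band", fs=fs, output="sos")
    px = np.angle(hilbert(sosfiltfilt(sos, x)))
    py = np.angle(hilbert(sosfiltfilt(sos, y)))
    return np.abs(np.mean(np.exp(1j * (px - py))))

rng = np.random.default_rng(1)
fs = 128
t = np.arange(10 * fs) / fs
envelope = np.sin(2 * np.pi * 2.5 * t)        # a delta-rate "speech envelope"
eeg = 0.8 * np.sin(2 * np.pi * 2.5 * t + 0.4) + rng.standard_normal(t.size)
delta_plv = plv(eeg, envelope, fs, (1.0, 4.0))
```

A constant phase lag (here 0.4 rad) still yields a PLV near 1, which is the point of the measure: it indexes phase consistency, not zero-lag alignment.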
20
Medeiros W, Barros T, Caixeta FV. Bibliometric mapping of non-invasive brain stimulation techniques (NIBS) for fluent speech production. Front Hum Neurosci 2023; 17:1164890. [PMID: 37425291 PMCID: PMC10323431 DOI: 10.3389/fnhum.2023.1164890]
Abstract
Introduction: Language production is a finely regulated process, with many aspects that still elude comprehension. From a motor perspective, speech involves over a hundred different muscles functioning in coordination. As science and technology evolve, new approaches are used to study speech production and treat its disorders, and there is growing interest in the use of non-invasive modulation by means of transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS). Methods: Here we analyzed data obtained from Scopus (Elsevier) using VOSviewer to provide an overview of bibliographic mapping of citation, co-occurrence of keywords, co-citation and bibliographic coupling in research on non-invasive brain stimulation (NIBS) for speech. Results: In total, 253 documents were found, 55% of which came from only three countries (USA, Germany and Italy), with emerging economies such as Brazil and China recently becoming relevant to this topic. Most documents were published in the last decade, with 2022 the most productive year yet, showing that brain stimulation has untapped potential for the speech research field. Discussion: Keyword analysis indicates a move away from basic research on motor control in healthy speech toward clinical applications such as stuttering and aphasia treatment. We also observe a recent trend toward cerebellar modulation for clinical treatment. Finally, we discuss how NIBS have become established over the years and gained prominence as tools in speech therapy and research, and highlight potential methodological possibilities for future research.
21
Van Herck S, Economou M, Bempt FV, Ghesquière P, Vandermosten M, Wouters J. Pulsatile modulation greatly enhances neural synchronization at syllable rate in children. Neuroimage 2023:120223. [PMID: 37315772 DOI: 10.1016/j.neuroimage.2023.120223]
Abstract
Neural processing of the speech envelope is of crucial importance for speech perception and comprehension. This envelope processing is often investigated by measuring neural synchronization to sinusoidal amplitude-modulated stimuli at different modulation frequencies. However, it has been argued that these stimuli lack ecological validity. Pulsatile amplitude-modulated stimuli, on the other hand, are suggested to be more ecologically valid and efficient, and have increased potential to uncover the neural mechanisms behind some developmental disorders such as dyslexia. Nonetheless, pulsatile stimuli have not yet been investigated in pre-reading and beginning-reading children, a crucial age range for developmental reading research. We performed a longitudinal study to examine the potential of pulsatile stimuli in this age range. Fifty-two typically reading children were tested at three time points from the middle of their last year of kindergarten (5 years old) to the end of first grade (7 years old). Using electroencephalography, we measured neural synchronization to syllable-rate and phoneme-rate sinusoidal and pulsatile amplitude-modulated stimuli. Our results revealed that the pulsatile stimuli significantly enhance neural synchronization at syllable rate compared to the sinusoidal stimuli. Additionally, the pulsatile stimuli at syllable rate elicited a different hemispheric specialization, more closely resembling natural speech envelope tracking. We postulate that pulsatile stimuli could greatly increase EEG data acquisition efficiency compared to the common sinusoidal amplitude-modulated stimuli in research with young children, particularly in developmental reading research.
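Neural synchronization to amplitude-modulated stimuli of this kind is commonly quantified as a spectral signal-to-noise ratio: EEG power at the modulation rate relative to neighboring frequency bins. An illustrative sketch on synthetic data; the bin counts and parameters are assumptions, not the study's analysis settings:

```python
import numpy as np

def spectral_snr(eeg, fs, target_hz, n_neighbors=5):
    """SNR of a steady-state response: power in the FFT bin closest to
    target_hz divided by the mean power of nearby bins (the immediately
    adjacent bins are skipped to avoid spectral leakage)."""
    power = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(eeg.size, 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - target_hz)))
    noise = np.r_[power[k - n_neighbors - 1:k - 1], power[k + 2:k + 2 + n_neighbors]]
    return power[k] / noise.mean()

rng = np.random.default_rng(2)
fs, dur, rate = 256, 20, 4.0                   # 4 Hz syllable-rate modulation
t = np.arange(dur * fs) / fs
eeg = 0.5 * np.sin(2 * np.pi * rate * t) + rng.standard_normal(t.size)
snr_at_rate = spectral_snr(eeg, fs, rate)
```

A longer epoch narrows the FFT bins and raises the SNR at the modulation rate, which is one reason stimuli that elicit stronger synchronization (as the pulsatile stimuli here reportedly do) shorten the recording time needed for a reliable response.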
Affiliation(s)
- Shauni Van Herck
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Belgium; Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Belgium.
- Maria Economou
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Belgium; Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Belgium
- Femke Vanden Bempt
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Belgium; Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Belgium
- Pol Ghesquière
- Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Belgium
- Jan Wouters
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Belgium
22
Fu X, Riecke L. Effects of continuous tactile stimulation on auditory-evoked cortical responses depend on the audio-tactile phase. Neuroimage 2023; 274:120140. [PMID: 37120042 DOI: 10.1016/j.neuroimage.2023.120140]
Abstract
Auditory perception can benefit from stimuli in non-auditory sensory modalities, as for example in lip-reading. Compared with such visual influences, tactile influences are still poorly understood. It has been shown that single tactile pulses can enhance the perception of auditory stimuli depending on their relative timing, but whether and how such brief auditory enhancements can be stretched in time with more sustained, phase-specific periodic tactile stimulation is still unclear. To address this question, we presented tactile stimulation that fluctuated coherently and continuously at 4 Hz with an auditory noise (either in-phase or anti-phase) and assessed its effect on the cortical processing and perception of an auditory signal embedded in that noise. Scalp-electroencephalography recordings revealed an enhancing effect of in-phase tactile stimulation on cortical responses phase-locked to the noise and a suppressive effect of anti-phase tactile stimulation on responses evoked by the auditory signal. Although these effects appeared to follow well-known principles of multisensory integration of discrete audio-tactile events, they were not accompanied by corresponding effects on behavioral measures of auditory signal perception. Our results indicate that continuous periodic tactile stimulation can enhance cortical processing of acoustically-induced fluctuations and mask cortical responses to an ongoing auditory signal. They further suggest that such sustained cortical effects can be insufficient for inducing sustained bottom-up auditory benefits.
Affiliation(s)
- Xueying Fu
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands.
- Lars Riecke
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
23
Cortical encoding of rhythmic kinematic structures in biological motion. Neuroimage 2023; 268:119893. [PMID: 36693597 DOI: 10.1016/j.neuroimage.2023.119893]
Abstract
Biological motion (BM) perception is of great survival value to human beings. The critical characteristics of BM information lie in kinematic cues containing rhythmic structures. However, how rhythmic kinematic structures of BM are dynamically represented in the brain and contribute to visual BM processing remains largely unknown. Here, we probed this issue in three experiments using electroencephalogram (EEG). We found that neural oscillations of observers entrained to the hierarchical kinematic structures of the BM sequences (i.e., step-cycle and gait-cycle for point-light walkers). Notably, only the cortical tracking of the higher-level rhythmic structure (i.e., gait-cycle) exhibited a BM processing specificity, manifested by enhanced neural responses to upright over inverted BM stimuli. This effect could be extended to different motion types and tasks, with its strength positively correlated with the perceptual sensitivity to BM stimuli at the right temporal brain region dedicated to visual BM processing. Modeling results further suggest that the neural encoding of spatiotemporally integrative kinematic cues, in particular the opponent motions of bilateral limbs, drives the selective cortical tracking of BM information. These findings underscore the existence of a cortical mechanism that encodes periodic kinematic features of body movements, which underlies the dynamic construction of visual BM perception.
24
Holmes E, Johnsrude IS. Intelligibility benefit for familiar voices is not accompanied by better discrimination of fundamental frequency or vocal tract length. Hear Res 2023; 429:108704. [PMID: 36701896 DOI: 10.1016/j.heares.2023.108704]
Abstract
Speech is more intelligible when it is spoken by familiar than unfamiliar people. If this benefit arises because key voice characteristics like perceptual correlates of fundamental frequency or vocal tract length (VTL) are more accurately represented for familiar voices, listeners may be able to discriminate smaller manipulations to such characteristics for familiar than unfamiliar voices. We measured participants' (N = 17) thresholds for discriminating pitch (correlate of fundamental frequency, or glottal pulse rate) and formant spacing (correlate of VTL; 'VTL-timbre') for voices that were familiar (participants' friends) and unfamiliar (other participants' friends). As expected, familiar voices were more intelligible. However, discrimination thresholds were no smaller for the same familiar voices. The size of the intelligibility benefit for a familiar over an unfamiliar voice did not relate to the difference in discrimination thresholds for the same voices. Also, the familiar-voice intelligibility benefit was just as large following perceptible manipulations to pitch and VTL-timbre. These results are more consistent with cognitive accounts of speech perception than traditional accounts that predict better discrimination.
Affiliation(s)
- Emma Holmes
- Department of Speech Hearing and Phonetic Sciences, UCL, London WC1N 1PF, UK; Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada.
- Ingrid S Johnsrude
- Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada; School of Communication Sciences and Disorders, University of Western Ontario, London, Ontario N6G 1H1, Canada
25
Zoefel B, Gilbert RA, Davis MH. Intelligibility improves perception of timing changes in speech. PLoS One 2023; 18:e0279024. [PMID: 36634109 PMCID: PMC9836318 DOI: 10.1371/journal.pone.0279024]
Abstract
Auditory rhythms are ubiquitous in music, speech, and other everyday sounds. Yet, it is unclear how perceived rhythms arise from the repeating structure of sounds. For speech, it is unclear whether rhythm is solely derived from acoustic properties (e.g., rapid amplitude changes), or if it is also influenced by the linguistic units (syllables, words, etc.) that listeners extract from intelligible speech. Here, we present three experiments in which participants were asked to detect an irregularity in rhythmically spoken speech sequences. In each experiment, we reduce the number of possible stimulus properties that differ between intelligible and unintelligible speech sounds and show that these acoustically-matched intelligibility conditions nonetheless lead to differences in rhythm perception. In Experiment 1, we replicate a previous study showing that rhythm perception is improved for intelligible (16-channel vocoded) as compared to unintelligible (1-channel vocoded) speech, despite near-identical broadband amplitude modulations. In Experiment 2, we use spectrally-rotated 16-channel speech to show that the effect of intelligibility cannot be explained by differences in spectral complexity. In Experiment 3, we compare rhythm perception for sine-wave speech signals when they are heard as non-speech (for naïve listeners), and subsequent to training, when identical sounds are perceived as speech. In all cases, detection of rhythmic regularity is enhanced when participants perceive the stimulus as speech compared to when they do not. Together, these findings demonstrate that intelligibility enhances the perception of timing changes in speech, which is hence linked to processes that extract abstract linguistic units from sound.
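The channel-vocoding manipulation in Experiment 1 (16-channel intelligible vs. 1-channel unintelligible) follows the standard noise-vocoding recipe: filter the signal into frequency bands, extract each band's amplitude envelope, and reimpose the envelopes on band-limited noise. A simplified sketch; the filter orders, band edges, and envelope method are assumptions, not the paper's exact parameters:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_channels, f_lo=100.0, f_hi=8000.0):
    """Noise-vocode a signal: filter into log-spaced bands, take each
    band's Hilbert envelope, and use it to modulate band-matched noise."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(sos, x)))          # band envelope
        out += env * sosfiltfilt(sos, rng.standard_normal(x.size))
    return out

# toy "speech": two tones under a slow amplitude contour
fs = 22050
t = np.arange(fs) / fs
sig = (np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 1500 * t)) \
      * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
clear = noise_vocode(sig, fs, 16)      # 16 channels: spectral detail retained
degraded = noise_vocode(sig, fs, 1)    # 1 channel: broadband envelope only
```

Both versions preserve the broadband amplitude modulation, which is what lets the design hold acoustic rhythm roughly constant while only the 16-channel version remains intelligible.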
Affiliation(s)
- Benedikt Zoefel
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
- Centre National de la Recherche Scientifique (CNRS), Centre de Recherche Cerveau et Cognition (CerCo), Toulouse, France
- Université de Toulouse III Paul Sabatier, Toulouse, France
| | - Rebecca A. Gilbert
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
| | - Matthew H. Davis
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
| |
Collapse
|
26
|
Becker R, Hervais-Adelman A. Individual theta-band cortical entrainment to speech in quiet predicts word-in-noise comprehension. Cereb Cortex Commun 2023; 4:tgad001. [PMID: 36726796 PMCID: PMC9883620 DOI: 10.1093/texcom/tgad001] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2022] [Revised: 12/17/2022] [Accepted: 12/18/2022] [Indexed: 01/09/2023] Open
Abstract
Speech elicits brain activity time-locked to its amplitude envelope. The resulting speech-brain synchrony (SBS) is thought to be crucial to speech parsing and comprehension. It has been shown that higher speech-brain coherence is associated with increased speech intelligibility. However, studies depending on the experimental manipulation of speech stimuli do not allow conclusions about the causality of the observed tracking. Here, we investigate whether individual differences in the intrinsic propensity to track the speech envelope when listening to speech-in-quiet are predictive of individual differences in speech-recognition-in-noise in an independent task. We evaluated the cerebral tracking of speech in source-localized magnetoencephalography, at timescales corresponding to phrases, words, syllables, and phonemes. We found that individual differences in syllabic tracking in right superior temporal gyrus and in left middle temporal gyrus (MTG) were positively associated with recognition accuracy in an independent words-in-noise task. Furthermore, directed connectivity analysis showed that this relationship is partially mediated by top-down connectivity from premotor cortex (associated with speech processing and active sensing in the auditory domain) to left MTG. Thus, the extent of SBS, even during clear speech, reflects an active mechanism of the speech processing system that may confer resilience to noise.
Collapse
Affiliation(s)
- Robert Becker
- Corresponding author: Neurolinguistics, Department of Psychology, University of Zurich (UZH), Zurich, Switzerland.
| | - Alexis Hervais-Adelman
- Neurolinguistics, Department of Psychology, University of Zurich, Zurich 8050, Switzerland; Neuroscience Center Zurich, University of Zurich and Eidgenössische Technische Hochschule Zurich, Zurich 8057, Switzerland
| |
Collapse
|
27
|
Lee TL, Lee H, Kang N. A meta-analysis showing improved cognitive performance in healthy young adults with transcranial alternating current stimulation. NPJ Sci Learn 2023; 8:1. [PMID: 36593247 PMCID: PMC9807644 DOI: 10.1038/s41539-022-00152-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/11/2022] [Accepted: 12/12/2022] [Indexed: 06/17/2023]
Abstract
Transcranial alternating current stimulation (tACS) is a non-invasive brain stimulation used for improving cognitive functions via delivering weak electrical stimulation at a certain frequency. This systematic review and meta-analysis investigated the effects of tACS protocols on cognitive functions in healthy young adults. We identified 56 qualified studies that compared cognitive functions between tACS and sham control groups, as indicated by cognitive performances and cognition-related reaction time. Moderator variable analyses specified effect size according to (a) timing of tACS, (b) frequency band of stimulation, (c) targeted brain region, and (d) cognitive domain, respectively. Random-effects model meta-analysis revealed small positive effects of tACS protocols on cognitive performances. The moderator variable analyses found significant effects for online-tACS with theta frequency band, online-tACS with gamma frequency band, and offline-tACS with theta frequency band. Moreover, cognitive performances were improved by online- and offline-tACS with theta frequency band over either prefrontal or posterior parietal cortical regions, and both online- and offline-tACS with theta frequency band enhanced executive function. Online-tACS with gamma frequency band over posterior parietal cortex was effective for improving cognitive performances, and the cognitive improvements appeared in executive function and perceptual-motor function. These findings suggest that tACS protocols with specific timing and frequency band may effectively improve cognitive performances.
Collapse
Affiliation(s)
- Tae Lee Lee
- Department of Human Movement Science, Incheon National University, Incheon, South Korea
- Neuromechanical Rehabilitation Research Laboratory, Incheon National University, Incheon, South Korea
| | - Hanall Lee
- Department of Human Movement Science, Incheon National University, Incheon, South Korea
- Neuromechanical Rehabilitation Research Laboratory, Incheon National University, Incheon, South Korea
| | - Nyeonju Kang
- Department of Human Movement Science, Incheon National University, Incheon, South Korea.
- Neuromechanical Rehabilitation Research Laboratory, Incheon National University, Incheon, South Korea.
- Division of Sport Science & Sport Science Institute, Incheon National University, Incheon, South Korea.
| |
Collapse
|
28
|
Rong P, Heidrick L. Functional Role of Temporal Patterning of Articulation in Speech Production: A Novel Perspective Toward Global Timing-Based Motor Speech Assessment and Rehabilitation. J Speech Lang Hear Res 2022; 65:4577-4607. [PMID: 36399794 DOI: 10.1044/2022_jslhr-22-00089] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
PURPOSE This study aimed to (a) relate temporal patterning of articulation to functional speech outcomes in neurologically healthy and impaired speakers, (b) identify changes in temporal patterning of articulation in neurologically impaired speakers, and (c) evaluate how these changes can be modulated by speaking rate manipulation. METHOD Thirteen individuals with amyotrophic lateral sclerosis (ALS) and 10 neurologically healthy controls read a sentence 3 times, first at their habitual rate and then at a voluntarily slowed rate. Temporal patterning of articulation was assessed by 24 features characterizing the modulation patterns within (intra) and between (inter) four articulators (tongue tip, tongue body, lower lip, and jaw) at three linguistically relevant, hierarchically nested timescales corresponding to stress, syllable, and onset-rime/phoneme. For Aim 1, the features for the habitual rate condition were factorized and correlated with two functional speech outcomes-speech intelligibility and intelligible speaking rate. For Aims 2 and 3, the features were compared between groups and rate conditions, respectively. RESULTS For Aim 1, the modulation features combined were moderately to strongly correlated with intelligibility (R² = .51-.53) and intelligible speaking rate (R² = .63-.73). For Aim 2, intra-articulator modulation was impaired in ALS, manifested by moderate-to-large decreases in modulation depth at all timescales and cross-timescale phase synchronization. Interarticulator modulation was relatively unaffected. For Aim 3, voluntary rate reduction improved several intra-articulator modulation features identified as being susceptible to the disease effect in individuals with ALS. CONCLUSIONS Disrupted temporal patterning of articulation, presumably reflecting impaired articulatory entrainment to linguistic rhythms, may contribute to functional speech declines in ALS.
These impairments tend to be improved through voluntary rate reduction, possibly by reshaping the temporal template of motor plans to better accommodate the disease-related neuromechanical constraints in the articulatory system. These findings shed light on a novel perspective toward global timing-based motor speech assessment and rehabilitation.
Collapse
Affiliation(s)
- Panying Rong
- Department of Speech-Language-Hearing: Sciences & Disorders, The University of Kansas, Lawrence
| | - Lindsey Heidrick
- Department of Hearing and Speech, The University of Kansas Medical Center, Kansas City
| |
Collapse
|
29
|
Pastore A, Tomassini A, Delis I, Dolfini E, Fadiga L, D'Ausilio A. Speech listening entails neural encoding of invisible articulatory features. Neuroimage 2022; 264:119724. [PMID: 36328272 DOI: 10.1016/j.neuroimage.2022.119724] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2022] [Revised: 09/28/2022] [Accepted: 10/30/2022] [Indexed: 11/06/2022] Open
Abstract
Speech processing entails a complex interplay between bottom-up and top-down computations. The former is reflected in the neural entrainment to the quasi-rhythmic properties of speech acoustics while the latter is supposed to guide the selection of the most relevant input subspace. Top-down signals are believed to originate mainly from motor regions, yet similar activities have been shown to tune attentional cycles also for simpler, non-speech stimuli. Here we examined whether, during speech listening, the brain reconstructs articulatory patterns associated with speech production. We measured electroencephalographic (EEG) data while participants listened to sentences during the production of which articulatory kinematics of lips, jaw, and tongue were also recorded (via Electro-Magnetic Articulography, EMA). We captured the patterns of articulatory coordination through Principal Component Analysis (PCA) and used Partial Information Decomposition (PID) to identify whether the speech envelope and each of the kinematic components provided unique, synergistic and/or redundant information regarding the EEG signals. Interestingly, tongue movements contain both unique as well as synergistic information with the envelope that is encoded in the listener's brain activity. This demonstrates that during speech listening the brain retrieves highly specific and unique motor information that is never accessible through vision, thus leveraging audio-motor maps that arise most likely from the acquisition of speech production during development.
Collapse
Affiliation(s)
- A Pastore
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy; Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy.
| | - A Tomassini
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
| | - I Delis
- School of Biomedical Sciences, University of Leeds, Leeds, UK
| | - E Dolfini
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy; Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
| | - L Fadiga
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy; Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
| | - A D'Ausilio
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy; Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy.
| |
Collapse
|
30
|
Understanding why infant-directed speech supports learning: A dynamic attention perspective. Dev Rev 2022. [DOI: 10.1016/j.dr.2022.101047] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
|
31
|
Xie X, Hu P, Tian Y, Wang K, Bai T. Transcranial alternating current stimulation enhances speech comprehension in chronic post-stroke aphasia patients: A single-blind sham-controlled study. Brain Stimul 2022; 15:1538-1540. [PMID: 36494053 DOI: 10.1016/j.brs.2022.12.001] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2022] [Accepted: 12/05/2022] [Indexed: 12/12/2022] Open
Affiliation(s)
- Xiaohui Xie
- Department of Neurology, The First Affiliated Hospital of Anhui Medical University, Hefei, 230032, China; The School of Mental Health and Psychological Sciences, Anhui Medical University, Hefei, 230032, China; Anhui Province Key Laboratory of Cognition and Neuropsychiatric Disorders, Hefei, 230032, China; Collaborative Innovation Center of Neuropsychiatric Disorders and Mental Health, Anhui Province, 230032, China
| | - Panpan Hu
- Department of Neurology, The First Affiliated Hospital of Anhui Medical University, Hefei, 230032, China; The School of Mental Health and Psychological Sciences, Anhui Medical University, Hefei, 230032, China; Anhui Province Key Laboratory of Cognition and Neuropsychiatric Disorders, Hefei, 230032, China; Collaborative Innovation Center of Neuropsychiatric Disorders and Mental Health, Anhui Province, 230032, China
| | - Yanghua Tian
- Department of Psychology and Sleep Medicine, The Second Affiliated Hospital of Anhui Medical University, Hefei, 230601, China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, 230088, China; The School of Mental Health and Psychological Sciences, Anhui Medical University, Hefei, 230032, China
| | - Kai Wang
- Department of Neurology, The First Affiliated Hospital of Anhui Medical University, Hefei, 230032, China; The School of Mental Health and Psychological Sciences, Anhui Medical University, Hefei, 230032, China; Anhui Province Key Laboratory of Cognition and Neuropsychiatric Disorders, Hefei, 230032, China; Collaborative Innovation Center of Neuropsychiatric Disorders and Mental Health, Anhui Province, 230032, China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, 230088, China.
| | - Tongjian Bai
- Department of Neurology, The First Affiliated Hospital of Anhui Medical University, Hefei, 230032, China; The School of Mental Health and Psychological Sciences, Anhui Medical University, Hefei, 230032, China; Anhui Province Key Laboratory of Cognition and Neuropsychiatric Disorders, Hefei, 230032, China; Collaborative Innovation Center of Neuropsychiatric Disorders and Mental Health, Anhui Province, 230032, China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, 230088, China.
| |
Collapse
|
32
|
Vanden Bempt F, Van Herck S, Economou M, Vanderauwera J, Vandermosten M, Wouters J, Ghesquière P. Speech perception deficits and the effect of envelope-enhanced story listening combined with phonics intervention in pre-readers at risk for dyslexia. Front Psychol 2022; 13:1021767. [PMID: 36389538 PMCID: PMC9650384 DOI: 10.3389/fpsyg.2022.1021767] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2022] [Accepted: 10/12/2022] [Indexed: 11/28/2022] Open
Abstract
Developmental dyslexia is considered to be most effectively addressed with preventive phonics-based interventions, including grapheme-phoneme coupling and blending exercises. These intervention types require intact speech perception abilities, given their large focus on exercises with auditorily presented phonemes. Yet some children with (a risk for) dyslexia experience problems in this domain due to a poorer sensitivity to rise times, i.e., rhythmic acoustic cues present in the speech envelope. As a result, the often subtle speech perception problems could potentially constrain an optimal response to phonics-based interventions in at-risk children. The current study therefore aimed (1) to extend existing research by examining the presence of potential speech perception deficits in pre-readers at cognitive risk for dyslexia when compared to typically developing peers and (2) to explore the added value of a preventive auditory intervention for at-risk pre-readers, targeting rise time sensitivity, on speech perception and other reading-related skills. To obtain the first research objective, we longitudinally compared speech-in-noise perception between 28 5-year-old pre-readers with and 30 peers without a cognitive risk for dyslexia during the second half of the third year of kindergarten. The second research objective was addressed by exploring growth in speech perception and other reading-related skills in an independent sample of 62 at-risk 5-year-old pre-readers who all combined a 12-week preventive phonics-based intervention (GraphoGame-Flemish) with an auditory story listening intervention. In half of the sample, story recordings contained artificially enhanced rise times (GG-FL_EE group, n = 31), while in the other half, stories remained unprocessed (GG-FL_NE group, n = 31; Clinical Trial Number S60962; https://www.uzleuven.be/nl/clinical-trial-center).
Results revealed a slower speech-in-noise perception growth in the at-risk compared to the non-at-risk group, due to an emerged deficit at the end of kindergarten. Concerning the auditory intervention effects, both intervention groups showed equal growth in speech-in-noise perception and other reading-related skills, suggesting no boost of envelope-enhanced story listening on top of the effect of combining GraphoGame-Flemish with listening to unprocessed stories. These findings thus provide evidence for a link between speech perception problems and dyslexia, yet do not support the potential of the auditory intervention in its current form.
Collapse
Affiliation(s)
- Femke Vanden Bempt
- Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
| | - Shauni Van Herck
- Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
| | - Maria Economou
- Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
| | - Jolijn Vanderauwera
- Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Psychological Sciences Research Institute, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
- Institute of Neuroscience, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
| | - Maaike Vandermosten
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
| | - Jan Wouters
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
| | - Pol Ghesquière
- Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
| |
Collapse
|
33
|
Reinisch E, Bosker HR. Encoding speech rate in challenging listening conditions: White noise and reverberation. Atten Percept Psychophys 2022; 84:2303-2318. [PMID: 35996057 PMCID: PMC9481500 DOI: 10.3758/s13414-022-02554-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/08/2022] [Indexed: 11/08/2022]
Abstract
Temporal contrasts in speech are perceived relative to the speech rate of the surrounding context. That is, following a fast context sentence, listeners interpret a given target sound as longer than following a slow context, and vice versa. This rate effect, often referred to as "rate-dependent speech perception," has been suggested to be the result of a robust, low-level perceptual process, typically examined in quiet laboratory settings. However, speech perception often occurs in more challenging listening conditions. Therefore, we asked whether rate-dependent perception would be (partially) compromised by signal degradation relative to a clear listening condition. Specifically, we tested effects of white noise and reverberation, with the latter specifically distorting temporal information. We hypothesized that signal degradation would reduce the precision of encoding the speech rate in the context and thereby reduce the rate effect relative to a clear context. This prediction was borne out for both types of degradation in Experiment 1, where the context sentences but not the subsequent target words were degraded. However, in Experiment 2, which compared rate effects when contexts and targets were coherent in terms of signal quality, no reduction of the rate effect was found. This suggests that, when confronted with coherently degraded signals, listeners adapt to challenging listening situations, eliminating the difference between rate-dependent perception in clear and degraded conditions. Overall, the present study contributes towards understanding the consequences of different types of listening environments on the functioning of low-level perceptual processes that listeners use during speech perception.
Collapse
Affiliation(s)
- Eva Reinisch
- Acoustics Research Institute, Austrian Academy of Sciences, Wohllebengasse 12-14, 1040, Vienna, Austria.
| | - Hans Rutger Bosker
- Max Planck Institute for Psycholinguistics, PO Box 310, 6500 AH, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
| |
Collapse
|
34
|
Kurthen I, Christen A, Meyer M, Giroud N. Older adults' neural tracking of interrupted speech is a function of task difficulty. Neuroimage 2022; 262:119580. [PMID: 35995377 DOI: 10.1016/j.neuroimage.2022.119580] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2022] [Revised: 08/14/2022] [Accepted: 08/18/2022] [Indexed: 11/16/2022] Open
Abstract
Age-related hearing loss is a highly prevalent condition, which manifests at both the auditory periphery and the brain. It leads to degraded auditory input, which needs to be repaired in order to achieve understanding of spoken language. It is still unclear how older adults with this condition draw on their neural resources to optimally process speech. By presenting interrupted speech to 26 healthy older adults with normal-for-age audiograms, this study investigated neural tracking of degraded auditory input. The electroencephalograms of the participants were recorded while they first listened to and then verbally repeated sentences interrupted by silence at varying interruption rates. Speech tracking was measured by inter-trial phase coherence in response to the stimuli. At interruption rates corresponding to the theta frequency band, speech tracking was highly specific to the interruption rate and positively related to the understanding of interrupted speech. These results suggest that older adults' brain activity optimizes through the tracking of stimulus characteristics, and that this tracking aids in processing an incomplete auditory stimulus. Further investigation of speech tracking as a candidate training mechanism to alleviate age-related hearing loss is thus encouraged.
Collapse
Affiliation(s)
- Ira Kurthen
- Department of Psychology, University of Zurich, Binzmuehlestrasse 14/21, Zurich 8050, Switzerland.
| | - Allison Christen
- Department of Psychology, University of Zurich, Binzmuehlestrasse 14/21, Zurich 8050, Switzerland
| | - Martin Meyer
- Department of Comparative Language Science, University of Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Switzerland; Cognitive Psychology Unit, University of Klagenfurt, Austria
| | - Nathalie Giroud
- Department of Computational Linguistics, Phonetics and Speech Sciences, University of Zurich, Switzerland; Competence Center for Language & Medicine, University of Zurich, Switzerland; Center for Neuroscience Zurich, University of Zurich, Switzerland
| |
Collapse
|
35
|
Menn KH, Ward EK, Braukmann R, van den Boomen C, Buitelaar J, Hunnius S, Snijders TM. Neural Tracking in Infancy Predicts Language Development in Children With and Without Family History of Autism. Neurobiol Lang 2022; 3:495-514. [PMID: 37216063 PMCID: PMC10158647 DOI: 10.1162/nol_a_00074] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/16/2021] [Accepted: 05/16/2022] [Indexed: 05/24/2023]
Abstract
During speech processing, neural activity in non-autistic adults and infants tracks the speech envelope. Recent research in adults indicates that this neural tracking relates to linguistic knowledge and may be reduced in autism. Such reduced tracking, if present already in infancy, could impede language development. In the current study, we focused on children with a family history of autism, who often show a delay in first language acquisition. We investigated whether differences in tracking of sung nursery rhymes during infancy relate to language development and autism symptoms in childhood. We assessed speech-brain coherence at either 10 or 14 months of age in a total of 22 infants with high likelihood of autism due to family history and 19 infants without family history of autism. We analyzed the relationship between speech-brain coherence in these infants and their vocabulary at 24 months as well as autism symptoms at 36 months. Our results showed significant speech-brain coherence in the 10- and 14-month-old infants. We found no evidence for a relationship between speech-brain coherence and later autism symptoms. Importantly, speech-brain coherence in the stressed syllable rate (1-3 Hz) predicted later vocabulary. Follow-up analyses showed evidence for a relationship between tracking and vocabulary only in 10-month-olds but not in 14-month-olds and indicated possible differences between the likelihood groups. Thus, early tracking of sung nursery rhymes is related to language development in childhood.
Collapse
Affiliation(s)
- Katharina H. Menn
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- International Max Planck Research School on Neuroscience of Communication: Function, Structure, and Plasticity, Leipzig, Germany
| | - Emma K. Ward
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
| | - Ricarda Braukmann
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
| | - Carlijn van den Boomen
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
| | - Jan Buitelaar
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Department of Cognitive Neuroscience, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Sabine Hunnius
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
| | - Tineke M. Snijders
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Cognitive Neuropsychology Department, Tilburg University
| |
Collapse
|
36
|
David W, Gransier R, Wouters J. Evaluation of phase-locking to parameterized speech envelopes. Front Neurol 2022; 13:852030. [PMID: 35989900 PMCID: PMC9382131 DOI: 10.3389/fneur.2022.852030] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Accepted: 06/29/2022] [Indexed: 12/04/2022] Open
Abstract
Humans rely on the temporal processing ability of the auditory system to perceive speech during everyday communication. The temporal envelope of speech is essential for speech perception, particularly envelope modulations below 20 Hz. In the literature, the neural representation of this speech envelope is usually investigated by recording neural phase-locked responses to speech stimuli. However, these phase-locked responses are not only associated with envelope modulation processing, but also with processing of linguistic information at a higher-order level when speech is comprehended. It is thus difficult to disentangle the responses into components from the acoustic envelope itself and the linguistic structures in speech (such as words, phrases and sentences). Another way to investigate neural modulation processing is to use sinusoidal amplitude-modulated stimuli at different modulation frequencies to obtain the temporal modulation transfer function. However, these transfer functions are considerably variable across modulation frequencies and individual listeners. To tackle the issues of both speech and sinusoidal amplitude-modulated stimuli, the recently introduced Temporal Speech Envelope Tracking (TEMPEST) framework proposed the use of stimuli with a distribution of envelope modulations. The framework aims to assess the brain's capability to process temporal envelopes in different frequency bands using stimuli with speech-like envelope modulations. In this study, we provide a proof-of-concept of the framework using stimuli with modulation frequency bands around the syllable and phoneme rate in natural speech. We evaluated whether the evoked phase-locked neural activity correlates with the speech-weighted modulation transfer function measured using sinusoidal amplitude-modulated stimuli in normal-hearing listeners. 
Since many studies on modulation processing employ different metrics, making their results difficult to compare, we included several power- and phase-based metrics and investigated how these metrics relate to each other. Results reveal a strong correspondence across listeners between the neural activity evoked by the speech-like stimuli and the activity evoked by the sinusoidal amplitude-modulated stimuli. Furthermore, strong correspondence was also apparent across the metrics, facilitating comparisons between studies that use different metrics. These findings indicate the potential of the TEMPEST framework to efficiently assess the neural capability to process temporal envelope modulations within a frequency band that is important for speech perception.
Collapse
Affiliation(s)
- Wouter David
- ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
| | | | | |
Collapse
|
37
|
Chalas N, Daube C, Kluger DS, Abbasi O, Nitsch R, Gross J. Multivariate analysis of speech envelope tracking reveals coupling beyond auditory cortex. Neuroimage 2022; 258:119395. [PMID: 35718023 DOI: 10.1016/j.neuroimage.2022.119395] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2022] [Revised: 05/16/2022] [Accepted: 06/14/2022] [Indexed: 11/19/2022] Open
Abstract
The systematic alignment of low-frequency brain oscillations with the acoustic speech envelope signal is well established and has been proposed to be crucial for actively perceiving speech. Previous studies investigating speech-brain coupling in source space are restricted to univariate pairwise approaches between brain and speech signals, and therefore speech tracking information in frequency-specific communication channels might be lacking. To address this, we propose a novel multivariate framework for estimating speech-brain coupling in which neural variability from source-derived activity is taken into account along with the rate of the envelope's amplitude change (its derivative). We applied it to magnetoencephalographic (MEG) recordings while human participants (male and female) listened to one hour of continuous naturalistic speech, showing that the multivariate approach outperforms the corresponding univariate method at low and high frequencies across frontal, motor, and temporal areas. Systematic comparisons revealed that the gain at low frequencies (0.6-0.8 Hz) was related to the envelope's rate of change, whereas at higher frequencies (0.8 to 10 Hz) it was mostly related to the increased neural variability from source-derived cortical areas. Furthermore, using non-negative matrix factorization, we found distinct speech-brain components across time and cortical space related to speech processing. We confirm that speech envelope tracking operates mainly on two timescales (δ and θ frequency bands) and extend those findings by showing shorter coupling delays in auditory-related components and longer delays in higher-association frontal and motor components, indicating temporal differences in speech tracking and carrying implications for hierarchical stimulus-driven speech processing.
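A minimal sketch of the two stimulus features this multivariate framework combines, the envelope and its rate of change, using a synthetic amplitude-modulated tone as a stand-in for speech (the 4 Hz modulation, 10 Hz cutoff, and half-wave rectification of the derivative are illustrative assumptions, not the paper's exact preprocessing):

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 1000
t = np.arange(0, 2.0, 1 / fs)
# Stand-in for speech: a 100 Hz tone amplitude-modulated at a syllable-like 4 Hz
audio = (1 + np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 100 * t)

# Broadband envelope via the analytic signal, low-pass filtered below 10 Hz
envelope = np.abs(hilbert(audio))
b, a = butter(4, 10 / (fs / 2), btype="low")
envelope = filtfilt(b, a, envelope)

# Rate of amplitude change (the derivative feature); half-wave rectified here
# so that only rising slopes (acoustic onsets) remain
derivative = np.diff(envelope, prepend=envelope[0]) * fs
onsets = np.clip(derivative, 0, None)
```

Both features would then enter the coupling estimate alongside the source-level MEG signals, rather than the envelope alone.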
Collapse
Affiliation(s)
- Nikos Chalas
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany; Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany.
| | - Christoph Daube
- Centre for Cognitive Neuroimaging, University of Glasgow, Glasgow, UK
| | - Daniel S Kluger
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany; Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
| | - Omid Abbasi
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
| | - Robert Nitsch
- Institute for Translational Neuroscience, University of Münster, Münster, Germany
| | - Joachim Gross
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany; Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
| |
Collapse
|
38
|
Hauswald A, Keitel A, Chen Y, Rösch S, Weisz N. Degradation levels of continuous speech affect neural speech tracking and alpha power differently. Eur J Neurosci 2022; 55:3288-3302. [PMID: 32687616 PMCID: PMC9540197 DOI: 10.1111/ejn.14912] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2020] [Revised: 07/12/2020] [Accepted: 07/13/2020] [Indexed: 11/26/2022]
Abstract
Making sense of a poor auditory signal can pose a challenge. Previous attempts to quantify speech intelligibility in neural terms have usually focused on one of two measures, namely low-frequency speech-brain synchronization or alpha power modulations. However, reports have been mixed concerning the modulation of these measures, an issue aggravated by the fact that they have normally been studied separately. We present two MEG studies analyzing both measures. In study 1, participants listened to unimodal auditory speech with three different levels of degradation (original, 7-channel and 3-channel vocoding). Intelligibility declined as clarity decreased, but speech remained intelligible to some extent even at the lowest clarity level (3-channel vocoding). Low-frequency (1-7 Hz) speech tracking suggested a U-shaped relationship with strongest effects for the medium-degraded speech (7-channel) in bilateral auditory and left frontal regions. To follow up on this finding, we implemented three additional vocoding levels (5-channel, 2-channel and 1-channel) in a second MEG study. Using this wider range of degradation, the speech-brain synchronization showed a similar pattern to study 1, but further revealed that when speech becomes unintelligible, synchronization declines again. The relationship differed for alpha power, which continued to decrease across vocoding levels, reaching a floor effect at 5-channel vocoding. Predicting subjective intelligibility from models combining both measures or using each measure alone showed the superiority of the combined model. Our findings underline that speech tracking and alpha power are modified differently by the degree of degradation of continuous speech but together contribute to subjective speech understanding.
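Low-frequency speech-brain synchronization of this kind is commonly quantified with magnitude-squared coherence averaged over the band of interest; a toy sketch with white-noise stand-ins (the mixing weight, duration, and band edges are assumptions, not the study's parameters):

```python
import numpy as np
from scipy.signal import coherence

fs = 100
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(2)

# Toy "speech envelope" and a brain signal that partly tracks it
envelope = rng.standard_normal(t.size)
brain = 0.5 * envelope + rng.standard_normal(t.size)

# Welch-averaged magnitude-squared coherence, then mean over 1-7 Hz
f, coh = coherence(envelope, brain, fs=fs, nperseg=256)
band = (f >= 1) & (f <= 7)
print("mean 1-7 Hz coherence:", round(coh[band].mean(), 2))
```

Repeating this per vocoding level (and per source region) would trace out the kind of degradation curve the study reports.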
Collapse
Affiliation(s)
- Anne Hauswald
- Center of Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Department of Psychology, University of Salzburg, Salzburg, Austria
| | - Anne Keitel
- Psychology, School of Social Sciences, University of Dundee, Dundee, UK
- Centre for Cognitive Neuroimaging, University of Glasgow, Glasgow, UK
| | - Ya‐Ping Chen
- Center of Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Department of Psychology, University of Salzburg, Salzburg, Austria
| | - Sebastian Rösch
- Department of Otorhinolaryngology, Paracelsus Medical University, Salzburg, Austria
| | - Nathan Weisz
- Center of Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Department of Psychology, University of Salzburg, Salzburg, Austria
| |
Collapse
|
39
|
Kachlicka M, Laffere A, Dick F, Tierney A. Slow phase-locked modulations support selective attention to sound. Neuroimage 2022; 252:119024. [PMID: 35231629 PMCID: PMC9133470 DOI: 10.1016/j.neuroimage.2022.119024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2022] [Revised: 02/16/2022] [Accepted: 02/19/2022] [Indexed: 11/16/2022] Open
Abstract
To make sense of complex soundscapes, listeners must select and attend to task-relevant streams while ignoring uninformative sounds. One possible neural mechanism underlying this process is alignment of endogenous oscillations with the temporal structure of the target sound stream. Such a mechanism has been suggested to mediate attentional modulation of neural phase-locking to the rhythms of attended sounds. However, such modulations are compatible with an alternate framework, where attention acts as a filter that enhances exogenously driven neural auditory responses. Here we attempted to test several predictions arising from the oscillatory account by playing two tone streams whose tone duration and presentation rate varied across conditions; participants attended to one stream or listened passively. Attentional modulation of the evoked waveform was roughly sinusoidal and scaled with presentation rate, whereas the passive response showed no such scaling. However, there was only limited evidence for continuation of modulations through the silence between sequences. These results suggest that attentionally driven changes in phase alignment reflect synchronization of slow endogenous activity with the temporal structure of attended stimuli.
Collapse
Affiliation(s)
- Magdalena Kachlicka
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, England
| | - Aeron Laffere
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, England
| | - Fred Dick
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, England; Division of Psychology & Language Sciences, UCL, Gower Street, London WC1E 6BT, England
| | - Adam Tierney
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, England.
| |
Collapse
|
40
|
Distracting Linguistic Information Impairs Neural Tracking of Attended Speech. CURRENT RESEARCH IN NEUROBIOLOGY 2022; 3:100043. [DOI: 10.1016/j.crneur.2022.100043] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Revised: 04/27/2022] [Accepted: 05/24/2022] [Indexed: 11/20/2022] Open
|
41
|
Patel P, Khalighinejad B, Herrero JL, Bickel S, Mehta AD, Mesgarani N. Improved Speech Hearing in Noise with Invasive Electrical Brain Stimulation. J Neurosci 2022; 42:3648-3658. [PMID: 35347046 PMCID: PMC9053855 DOI: 10.1523/jneurosci.1468-21.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2021] [Revised: 01/19/2022] [Accepted: 01/20/2022] [Indexed: 12/02/2022] Open
Abstract
Speech perception in noise is a challenging everyday task with which many listeners have difficulty. Here, we report a case in which electrical brain stimulation of implanted intracranial electrodes in the left planum temporale (PT) of a neurosurgical patient significantly and reliably improved subjective quality (up to 50%) and objective intelligibility (up to 97%) of speech-in-noise perception. Stimulation resulted in a selective enhancement of speech sounds compared with the background noises. The receptive fields of the PT sites whose stimulation improved speech perception were tuned to spectrally broad and rapidly changing sounds. Corticocortical evoked potential analysis revealed that the PT sites were located between the sites in Heschl's gyrus and the superior temporal gyrus. Moreover, the discriminability of speech from nonspeech sounds increased in population neural responses from Heschl's gyrus to the PT to the superior temporal gyrus sites. These findings causally implicate the PT in background noise suppression and may point to a novel neuroprosthetic solution to assist in the challenging task of speech perception in noise. SIGNIFICANCE STATEMENT Speech perception in noise remains a challenging task for many individuals. Here, we present a case in which the electrical brain stimulation of intracranially implanted electrodes in the planum temporale of a neurosurgical patient significantly improved both the subjective quality (up to 50%) and objective intelligibility (up to 97%) of speech perception in noise. Stimulation resulted in a selective enhancement of speech sounds compared with the background noises. Our local and network-level functional analyses placed the planum temporale sites in between the sites in the primary auditory areas in Heschl's gyrus and nonprimary auditory areas in the superior temporal gyrus.
These findings causally implicate planum temporale in acoustic scene analysis and suggest potential neuroprosthetic applications to assist hearing in noise.
Collapse
Affiliation(s)
- Prachi Patel
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York 10027
- Department of Electrical Engineering, Columbia University, New York, New York 10027
| | - Bahar Khalighinejad
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York 10027
- Department of Electrical Engineering, Columbia University, New York, New York 10027
| | - Jose L Herrero
- Hofstra Northwell School of Medicine, New York, New York 11549
- Feinstein Institute for Medical Research, New York, New York 11030
| | - Stephan Bickel
- Hofstra Northwell School of Medicine, New York, New York 11549
- Feinstein Institute for Medical Research, New York, New York 11030
| | - Ashesh D Mehta
- Hofstra Northwell School of Medicine, New York, New York 11549
- Feinstein Institute for Medical Research, New York, New York 11030
| | - Nima Mesgarani
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York 10027
- Department of Electrical Engineering, Columbia University, New York, New York 10027
| |
Collapse
|
42
|
Zhang M, Riecke L, Fraga-González G, Bonte M. Altered brain network topology during speech tracking in developmental dyslexia. Neuroimage 2022; 254:119142. [PMID: 35342007 DOI: 10.1016/j.neuroimage.2022.119142] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2021] [Revised: 03/15/2022] [Accepted: 03/23/2022] [Indexed: 10/18/2022] Open
Abstract
Developmental dyslexia is often accompanied by altered phonological processing of speech. Underlying neural changes have typically been characterized in terms of stimulus- and/or task-related responses within individual brain regions or their functional connectivity. Less is known about potential changes in the more global functional organization of brain networks. Here we recorded electroencephalography (EEG) in typical and dyslexic readers while they listened to (a) a random sequence of syllables and (b) a series of tri-syllabic real words. The network topology of the phase synchronization of evoked cortical oscillations was investigated in four frequency bands (delta, theta, alpha and beta) using minimum spanning tree graphs. We found that, compared to syllable tracking, word tracking triggered a shift toward a more integrated network topology in the theta band in both groups. Importantly, this change was significantly stronger in the dyslexic readers, who also showed increased reliance on a right frontal cluster of electrodes for word tracking. The current findings point towards an altered effect of word-level processing on the functional brain network organization that may be associated with less efficient phonological and reading skills in dyslexia.
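The minimum spanning tree step reduces a full between-electrode synchronization matrix to a backbone graph whose topology can be compared across groups without choosing a connectivity threshold. A sketch on a random phase-locking matrix (the electrode count and values are arbitrary stand-ins):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(3)
n_elec = 8

# Toy phase-synchronization (e.g. PLV) matrix between electrodes
plv = rng.uniform(0.1, 0.9, size=(n_elec, n_elec))
plv = (plv + plv.T) / 2

# Stronger synchronization should mean a shorter edge, so use 1 - PLV
dist = 1 - plv
np.fill_diagonal(dist, 0)               # zero entries = no self-edges

mst = minimum_spanning_tree(dist).toarray()
edges = np.transpose(np.nonzero(mst))
print("edges in tree:", len(edges))     # a tree over n nodes always has n - 1 edges
degrees = np.bincount(edges.ravel(), minlength=n_elec)
```

Summary metrics computed on this tree (leaf fraction, diameter, degree distribution) then index how integrated versus line-like the network topology is, which is the quantity compared between dyslexic and typical readers here.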
Collapse
Affiliation(s)
- Manli Zhang
- Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands.
| | - Lars Riecke
- Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
| | - Gorka Fraga-González
- Department of Child and Adolescent Psychiatry, Faculty of Medicine, University of Zurich, Switzerland
| | - Milene Bonte
- Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
| |
Collapse
|
43
|
Mc Laughlin M, Khatoun A, Asamoah B. Detection of tACS Entrainment Critically Depends on Epoch Length. Front Cell Neurosci 2022; 16:806556. [PMID: 35360495 PMCID: PMC8963722 DOI: 10.3389/fncel.2022.806556] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2021] [Accepted: 02/11/2022] [Indexed: 11/26/2022] Open
Abstract
Neural entrainment is the phase synchronization of a population of neurons to an external rhythmic stimulus, such as that applied in transcranial alternating current stimulation (tACS). tACS can cause profound effects on human behavior. However, there remain a significant number of studies that find no behavioral effect when tACS is applied to human subjects. To investigate this discrepancy, we applied a time-sensitive phase-locking value (PLV)-based analysis to single-unit data from the rat motor cortex. The analysis revealed that detection of neural entrainment depends critically on the epoch length within which spiking information is accumulated. Increasing the epoch length allowed for detection of progressively weaker levels of neural entrainment. Based on this single-unit analysis, we hypothesized that tACS effects on human behavior would be more easily detected in a behavioral paradigm that utilizes longer epoch lengths. We tested this by using tACS to entrain tremor in patients and healthy volunteers. When the behavioral data were analyzed using short-duration epochs, tremor entrainment effects were not detectable. However, as the epoch length was progressively increased, weak tremor entrainment became detectable. These results suggest that tACS behavioral paradigms that rely on the accumulation of information over long epoch lengths will tend to be successful at detecting behavioral effects. However, tACS paradigms that rely on short epoch lengths are less likely to detect effects.
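The paper's central point, that weak entrainment only becomes detectable once enough spiking information is accumulated, can be sketched by comparing the phase-locking value against the Rayleigh-test detection threshold at different epoch lengths (the von Mises concentration kappa = 0.3 is an arbitrary "weak coupling" assumption, not a value from the study):

```python
import numpy as np

def plv(phases):
    """Phase-locking value: length of the mean resultant vector."""
    return np.abs(np.mean(np.exp(1j * phases)))

def rayleigh_threshold(n, alpha=0.05):
    """Approximate PLV needed to reject phase uniformity (Rayleigh test)."""
    return np.sqrt(-np.log(alpha) / n)

rng = np.random.default_rng(0)
# Weakly entrained spike phases: von Mises with small concentration kappa
phases = rng.vonmises(mu=0.0, kappa=0.3, size=20000)

# The threshold falls as 1/sqrt(n), so weak but real locking (PLV around 0.15
# here) only clears it once enough spikes have been accumulated
for n in (50, 500, 5000):
    print(n, round(plv(phases[:n]), 3), round(rayleigh_threshold(n), 3))
```

The same scaling argument carries over to the behavioral analyses: paradigms that pool information over long epochs sit on the favorable side of this threshold.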
Collapse
|
44
|
Mandke K, Flanagan S, Macfarlane A, Gabrielczyk F, Wilson A, Gross J, Goswami U. Neural sampling of the speech signal at different timescales by children with dyslexia. Neuroimage 2022; 253:119077. [PMID: 35278708 DOI: 10.1016/j.neuroimage.2022.119077] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2021] [Revised: 01/15/2022] [Accepted: 03/07/2022] [Indexed: 01/08/2023] Open
Abstract
Phonological difficulties characterize individuals with dyslexia across languages. Currently debated is whether these difficulties arise from atypical neural sampling of (or entrainment to) auditory information in speech at slow rates (<10 Hz, related to speech rhythm), faster rates, or neither. MEG studies with adults suggest that atypical sampling in dyslexia affects faster modulations in the neurophysiological gamma band, related to phoneme-level representation. However, dyslexic adults have had years of reduced experience in converting graphemes to phonemes, which could itself cause atypical gamma-band activity. The present study was designed to identify specific linguistic timescales at which English children with dyslexia may show atypical entrainment. Adopting a developmental focus, we hypothesized that children with dyslexia would show atypical entrainment to the prosodic and syllable-level information that is exaggerated in infant-directed speech and carried primarily by amplitude modulations <10 Hz. MEG was recorded in a naturalistic story-listening paradigm. The modulation bands related to different types of linguistic information were derived directly from the speech materials, and lagged coherence at multiple temporal rates spanning 0.9-40 Hz was computed. Group differences in lagged speech-brain coherence between children with dyslexia and control children were most marked in neurophysiological bands corresponding to stress and syllable-level information (<5 Hz in our materials), and phoneme-level information (12-40 Hz). Functional connectivity analyses showed network differences between groups in both hemispheres, with dyslexic children showing significantly reduced global network efficiency. Global network efficiency correlated with dyslexic children's oral language development and with control children's reading development. 
These developmental data suggest that dyslexia is characterized by atypical neural sampling of auditory information at slower rates. They also throw new light on the nature of the gamma band temporal sampling differences reported in MEG dyslexia studies with adults.
Collapse
Affiliation(s)
- Kanad Mandke
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom.
| | - Sheila Flanagan
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
| | - Annabel Macfarlane
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
| | - Fiona Gabrielczyk
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
| | - Angela Wilson
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
| | - Joachim Gross
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
| | - Usha Goswami
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
| |
Collapse
|
45
|
Corcoran AW, Perera R, Koroma M, Kouider S, Hohwy J, Andrillon T. Expectations boost the reconstruction of auditory features from electrophysiological responses to noisy speech. Cereb Cortex 2022; 33:691-708. [PMID: 35253871 PMCID: PMC9890472 DOI: 10.1093/cercor/bhac094] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2021] [Revised: 02/11/2022] [Accepted: 02/12/2022] [Indexed: 02/04/2023] Open
Abstract
Online speech processing imposes significant computational demands on the listening brain, the underlying mechanisms of which remain poorly understood. Here, we exploit the perceptual "pop-out" phenomenon (i.e. the dramatic improvement of speech intelligibility after receiving information about speech content) to investigate the neurophysiological effects of prior expectations on degraded speech comprehension. We recorded electroencephalography (EEG) and pupillometry from 21 adults while they rated the clarity of noise-vocoded and sine-wave synthesized sentences. Pop-out was reliably elicited following visual presentation of the corresponding written sentence, but not following incongruent or neutral text. Pop-out was associated with improved reconstruction of the acoustic stimulus envelope from low-frequency EEG activity, implying that improvements in perceptual clarity were mediated via top-down signals that enhanced the quality of cortical speech representations. Spectral analysis further revealed that pop-out was accompanied by a reduction in theta-band power, consistent with predictive coding accounts of acoustic filling-in and incremental sentence processing. Moreover, delta-band power, alpha-band power, and pupil diameter were all increased following the provision of any written sentence information, irrespective of content. Together, these findings reveal distinctive profiles of neurophysiological activity that differentiate the content-specific processes associated with degraded speech comprehension from the context-specific processes invoked under adverse listening conditions.
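Stimulus-envelope reconstruction of this kind is typically a backward (decoding) model: a ridge regression from time-lagged multichannel EEG onto the envelope. A self-contained toy version (channel count, lag range, and the regularizer `lam` are illustrative assumptions, and a real analysis would cross-validate rather than score the training data):

```python
import numpy as np

rng = np.random.default_rng(4)
fs, n_ch = 64, 8
n = fs * 60                              # one minute of data
envelope = rng.standard_normal(n)

# Toy EEG: each channel mixes lagged copies of the envelope plus noise
eeg = np.stack([
    np.convolve(envelope, 0.3 * rng.standard_normal(16), mode="same")
    + rng.standard_normal(n)
    for _ in range(n_ch)
])

# Time-lagged design matrix, 0-250 ms of lags at 64 Hz (the wrap-around at
# the edges introduced by np.roll is ignored in this sketch)
lags = np.arange(16)
X = np.hstack([np.roll(eeg, -l, axis=1).T for l in lags])

# Ridge decoder: w = (X'X + lam*I)^-1 X'y
lam = 1e2
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)
r = np.corrcoef(X @ w, envelope)[0, 1]
print("reconstruction accuracy r =", round(r, 2))
```

In this framing, the study's pop-out effect corresponds to r rising after congruent prior information, i.e. a better cortical representation of the same acoustics.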
Collapse
Affiliation(s)
- Andrew W Corcoran
- Corresponding author: Room E672, 20 Chancellors Walk, Clayton, VIC 3800, Australia.
| | - Ricardo Perera
- Cognition & Philosophy Laboratory, School of Philosophical, Historical, and International Studies, Monash University, Melbourne, VIC 3800 Australia
| | - Matthieu Koroma
- Brain and Consciousness Group (ENS, EHESS, CNRS), Département d’Études Cognitives, École Normale Supérieure-PSL Research University, Paris 75005, France
| | - Sid Kouider
- Brain and Consciousness Group (ENS, EHESS, CNRS), Département d’Études Cognitives, École Normale Supérieure-PSL Research University, Paris 75005, France
| | - Jakob Hohwy
- Cognition & Philosophy Laboratory, School of Philosophical, Historical, and International Studies, Monash University, Melbourne, VIC 3800, Australia; Monash Centre for Consciousness & Contemplative Studies, Monash University, Melbourne, VIC 3800, Australia
| | - Thomas Andrillon
- Monash Centre for Consciousness & Contemplative Studies, Monash University, Melbourne, VIC 3800, Australia; Paris Brain Institute, Sorbonne Université, Inserm-CNRS, Paris 75013, France
| |
Collapse
|
46
|
Destoky F, Bertels J, Niesen M, Wens V, Vander Ghinst M, Rovai A, Trotta N, Lallier M, De Tiège X, Bourguignon M. The role of reading experience in atypical cortical tracking of speech and speech-in-noise in dyslexia. Neuroimage 2022; 253:119061. [PMID: 35259526 DOI: 10.1016/j.neuroimage.2022.119061] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Revised: 02/28/2022] [Accepted: 03/04/2022] [Indexed: 11/18/2022] Open
Abstract
Dyslexia is a frequent developmental disorder in which reading acquisition is delayed and that is usually associated with difficulties understanding speech in noise. At the neuronal level, children with dyslexia were reported to display abnormal cortical tracking of speech (CTS) at phrasal rate. Here, we aimed to determine if abnormal tracking relates to reduced reading experience, and if it is modulated by the severity of dyslexia or the presence of acoustic noise. We included 26 school-age children with dyslexia, 26 age-matched controls and 26 reading-level matched controls. All were native French speakers. Children's brain activity was recorded with magnetoencephalography while they listened to continuous speech in noiseless and multiple noise conditions. CTS values were compared between groups, conditions and hemispheres, and also within groups, between children with mild and severe dyslexia. Syllabic CTS was significantly reduced in the right superior temporal gyrus in children with dyslexia compared with controls matched for age but not for reading level. Severe dyslexia was characterized by lower rapid automatized naming (RAN) abilities compared with mild dyslexia, and phrasal CTS lateralized to the right hemisphere in children with mild dyslexia and all control groups but not in children with severe dyslexia. Finally, an alteration in phrasal CTS was uncovered in children with dyslexia compared with age-matched controls in babble noise conditions but not in other less challenging listening conditions (non-speech noise or noiseless conditions); no such effect was seen in comparison with reading-level matched controls. Overall, our results confirmed the finding of altered neuronal basis of speech perception in noiseless and babble noise conditions in dyslexia compared with age-matched peers. 
However, the absence of alteration in comparison with reading-level matched controls demonstrates that such alterations are associated with reduced reading level, suggesting that they are driven by reduced reading experience rather than being a cause of dyslexia. Finally, our finding of altered hemispheric lateralization of phrasal CTS, in relation to altered RAN abilities in severe dyslexia, is in line with a temporal sampling deficit of speech at the phrasal rate in dyslexia.
Collapse
Affiliation(s)
- Florian Destoky
- Laboratoire de Neuroanatomie et Neuroimagerie translationnelles, UNI-ULB Neuroscience Institute, Université libre de Bruxelles (ULB), 808 Leenik Street, Brussels 1070, Belgium.
| | - Julie Bertels
- Laboratoire de Neuroanatomie et Neuroimagerie translationnelles, UNI-ULB Neuroscience Institute, Université libre de Bruxelles (ULB), 808 Leenik Street, Brussels 1070, Belgium; Consciousness, Cognition and Computation Group, UNI-ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
| | - Maxime Niesen
- Laboratoire de Neuroanatomie et Neuroimagerie translationnelles, UNI-ULB Neuroscience Institute, Université libre de Bruxelles (ULB), 808 Leenik Street, Brussels 1070, Belgium; Service d'ORL et de Chirurgie Cervico-Faciale, ULB-Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels 1070, Belgium
| | - Vincent Wens
- Laboratoire de Neuroanatomie et Neuroimagerie translationnelles, UNI-ULB Neuroscience Institute, Université libre de Bruxelles (ULB), 808 Leenik Street, Brussels 1070, Belgium; Department of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université Libre de Bruxelles (ULB), Brussels, Belgium
| | - Marc Vander Ghinst
- Laboratoire de Neuroanatomie et Neuroimagerie translationnelles, UNI-ULB Neuroscience Institute, Université libre de Bruxelles (ULB), 808 Leenik Street, Brussels 1070, Belgium; Service d'ORL et de Chirurgie Cervico-Faciale, ULB-Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels 1070, Belgium
| | - Antonin Rovai
- Laboratoire de Neuroanatomie et Neuroimagerie translationnelles, UNI-ULB Neuroscience Institute, Université libre de Bruxelles (ULB), 808 Leenik Street, Brussels 1070, Belgium; Department of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université Libre de Bruxelles (ULB), Brussels, Belgium
| | - Nicola Trotta
- Laboratoire de Neuroanatomie et Neuroimagerie translationnelles, UNI-ULB Neuroscience Institute, Université libre de Bruxelles (ULB), 808 Leenik Street, Brussels 1070, Belgium; Department of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université Libre de Bruxelles (ULB), Brussels, Belgium
| | - Marie Lallier
- BCBL, Basque Center on Cognition, Brain and Language, San Sebastian 20009, Spain
| | - Xavier De Tiège
- Laboratoire de Neuroanatomie et Neuroimagerie translationnelles, UNI-ULB Neuroscience Institute, Université libre de Bruxelles (ULB), 808 Leenik Street, Brussels 1070, Belgium; Department of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université Libre de Bruxelles (ULB), Brussels, Belgium
| | - Mathieu Bourguignon
- Laboratoire de Neuroanatomie et Neuroimagerie translationnelles, UNI-ULB Neuroscience Institute, Université libre de Bruxelles (ULB), 808 Leenik Street, Brussels 1070, Belgium; BCBL, Basque Center on Cognition, Brain and Language, San Sebastian 20009, Spain; Laboratory of Neurophysiology and Movement Biomechanics, UNI-ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels, Belgium
| |
Collapse
|
47
|
Schmitt R, Meyer M, Giroud N. Better speech-in-noise comprehension is associated with enhanced neural speech tracking in older adults with hearing impairment. Cortex 2022; 151:133-146. [DOI: 10.1016/j.cortex.2022.02.017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Revised: 12/19/2021] [Accepted: 02/03/2022] [Indexed: 11/27/2022]
|
48
|
Ma R, Xia X, Zhang W, Lu Z, Wu Q, Cui J, Song H, Fan C, Chen X, Zha R, Wei J, Ji GJ, Wang X, Qiu B, Zhang X. High Gamma and Beta Temporal Interference Stimulation in the Human Motor Cortex Improves Motor Functions. Front Neurosci 2022; 15:800436. [PMID: 35046771 PMCID: PMC8761631 DOI: 10.3389/fnins.2021.800436] [Citation(s) in RCA: 29] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2021] [Accepted: 11/29/2021] [Indexed: 12/14/2022] Open
Abstract
Background: Temporal interference (TI) stimulation is a new technique of non-invasive brain stimulation. Envelope-modulated waveforms with two high-frequency carriers can activate neurons in target brain regions without stimulating the overlying cortex, which has been validated in mouse brains. However, whether TI stimulation works on the human brain has not been elucidated. Objective: To assess the effectiveness of the envelope-modulated waveform of TI stimulation on the human primary motor cortex (M1). Methods: Participants attended three sessions of 30-min TI stimulation during a random reaction time task (RRTT) or a serial reaction time task (SRTT). Motor cortex excitability was measured before and after TI stimulation. Results: In the RRTT experiment, only 70 Hz TI stimulation improved reaction time (RT) performance and motor cortex excitability compared to sham stimulation. Meanwhile, compared with the sham condition, only 20 Hz TI stimulation significantly facilitated motor learning in the SRTT experiment, an effect significantly positively correlated with the increase in motor evoked potential. Conclusion: These results indicate that the envelope-modulated waveform of TI stimulation significantly facilitates human motor function, providing the first experimental evidence of TI stimulation's effectiveness in humans and paving the way for further exploration.
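The envelope-modulated waveform at the heart of TI falls out of simple superposition: two kHz-range carriers separated by Δf sum to a signal whose envelope beats at Δf, slow enough for neurons to follow even though they cannot follow either carrier. A sketch with illustrative carrier frequencies (the study's 20 Hz and 70 Hz conditions correspond to the choice of Δf, not to these particular carriers):

```python
import numpy as np
from scipy.signal import hilbert

fs = 20000                      # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
f1, f2 = 2000, 2020             # two high-frequency carriers; delta-f = 20 Hz

# Superposition of the two carriers: equal amplitudes maximize the depth
# of the interference envelope
signal = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# The rectified envelope |2*cos(pi*delta_f*t)| has its fundamental at delta-f
envelope = np.abs(hilbert(signal))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
print("dominant envelope frequency (Hz):", freqs[np.argmax(spectrum)])
```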
Collapse
Affiliation(s)
- Ru Ma
- Hefei National Laboratory for Physical Sciences at the Microscale, Division of Life Science and Medicine, Department of Radiology, The First Affiliated Hospital of USTC, School of Life Science, University of Science and Technology of China, Hefei, China
| | - Xinzhao Xia
- Centers for Biomedical Engineering, School of Information Science and Technology, University of Science and Technology of China, Hefei, China
| | - Wei Zhang
- Hefei National Laboratory for Physical Sciences at the Microscale, Division of Life Science and Medicine, Department of Radiology, The First Affiliated Hospital of USTC, School of Life Science, University of Science and Technology of China, Hefei, China
| | - Zhuo Lu
- Centers for Biomedical Engineering, School of Information Science and Technology, University of Science and Technology of China, Hefei, China
| | - Qianying Wu
- Hefei National Laboratory for Physical Sciences at the Microscale, Division of Life Science and Medicine, Department of Radiology, The First Affiliated Hospital of USTC, School of Life Science, University of Science and Technology of China, Hefei, China; Division of the Humanities and Social Sciences, California Institute of Technology, Pasadena, CA, United States
| | - Jiangtian Cui
- Centers for Biomedical Engineering, School of Information Science and Technology, University of Science and Technology of China, Hefei, China; School of Optometry and Vision Sciences, Cardiff University, Cardiff, United Kingdom
| | - Hongwen Song
- Hefei National Laboratory for Physical Sciences at the Microscale, Division of Life Science and Medicine, Department of Radiology, The First Affiliated Hospital of USTC, School of Life Science, University of Science and Technology of China, Hefei, China
| | - Chuan Fan
- Centers for Biomedical Engineering, School of Information Science and Technology, University of Science and Technology of China, Hefei, China
| | - Xueli Chen
- Hefei National Laboratory for Physical Sciences at the Microscale, Division of Life Science and Medicine, Department of Radiology, The First Affiliated Hospital of USTC, School of Life Science, University of Science and Technology of China, Hefei, China
| | - Rujing Zha
- Hefei National Laboratory for Physical Sciences at the Microscale, Division of Life Science and Medicine, Department of Radiology, The First Affiliated Hospital of USTC, School of Life Science, University of Science and Technology of China, Hefei, China
| | - Junjie Wei
- Department of Neurology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Gong-Jun Ji
- Department of Neurology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Xiaoxiao Wang
- Centers for Biomedical Engineering, School of Information Science and Technology, University of Science and Technology of China, Hefei, China
| | - Bensheng Qiu
- Centers for Biomedical Engineering, School of Information Science and Technology, University of Science and Technology of China, Hefei, China
| | - Xiaochu Zhang
- Hefei National Laboratory for Physical Sciences at the Microscale, Division of Life Science and Medicine, Department of Radiology, The First Affiliated Hospital of USTC, School of Life Science, University of Science and Technology of China, Hefei, China; Centers for Biomedical Engineering, School of Information Science and Technology, University of Science and Technology of China, Hefei, China; Department of Psychology, School of Humanities and Social Science, University of Science and Technology of China, Hefei, China; Biomedical Sciences and Health Laboratory of Anhui Province, University of Science and Technology of China, Hefei, China
| |
Collapse
|
49
|
Suh MW, Tran P, Richardson M, Sun S, Xu Y, Djalilian HR, Lin HW, Zeng FG. Electric hearing and tinnitus suppression by noninvasive ear stimulation. Hear Res 2022; 415:108431. [PMID: 35016022 DOI: 10.1016/j.heares.2022.108431] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/21/2021] [Revised: 12/22/2021] [Accepted: 01/04/2022] [Indexed: 11/04/2022]
Abstract
While noninvasive brain stimulation is convenient and cost effective, its utility is limited by the substantial distance between scalp electrodes and their intended neural targets in the head. The tympanic membrane, or eardrum, is a thin flap of skin deep in an orifice of the head that may serve as a port for improved efficiency of noninvasive stimulation. Here we chose the cochlea as a target because it resides in the densest bone of the skull and is adjacent to many deep-brain-stimulation structures. We also tested the hypothesis that noninvasive electric stimulation of the cochlea may restore neural activities that are missing in acoustic stimulation. We placed an electrode in the ear canal or on the tympanic membrane in 25 human adults (10 females) and compared their stimulation efficiency by characterizing the electrically-evoked auditory sensation. Relative to ear canal stimulation, tympanic membrane stimulation was four times more likely to produce an auditory percept, required eight times lower electric current to reach threshold, and produced two-to-four times more linear suprathreshold responses. We further measured tinnitus suppression in 14 of the 25 subjects who had chronic tinnitus. Compared with ear canal stimulation, tympanic membrane stimulation doubled both the probability (22% vs. 55%) and the amount (-15% vs. -34%) of tinnitus suppression. These findings extend previous work comparing evoked perception and tinnitus suppression between electrodes placed in the ear canal and on the scalp. Together, the previous and present results suggest that the efficiency of conventional scalp-based noninvasive electric stimulation can be improved by at least one order of magnitude via tympanic membrane stimulation. This increased efficiency is most likely due to the shortened distance between the electrode placed on the tympanic membrane and the targeted cochlea.
The present findings have implications for the management of tinnitus by offering a potential alternative to interventions using invasive electrical stimulation such as cochlear implantation, or other non-invasive transcranial electrical stimulation methods.
Collapse
Affiliation(s)
- Myung-Whan Suh
- Center for Hearing Research, Departments of Anatomy and Neurobiology, Biomedical Engineering, Cognitive Sciences, Otolaryngology - Head and Neck Surgery, University of California Irvine, Irvine, CA 92697, United States; Department of Otorhinolaryngology - Head and Neck Surgery, Seoul National University Hospital, Seoul, South Korea
| | - Phillip Tran
- Center for Hearing Research, Departments of Anatomy and Neurobiology, Biomedical Engineering, Cognitive Sciences, Otolaryngology - Head and Neck Surgery, University of California Irvine, Irvine, CA 92697, United States
| | - Matthew Richardson
- Center for Hearing Research, Departments of Anatomy and Neurobiology, Biomedical Engineering, Cognitive Sciences, Otolaryngology - Head and Neck Surgery, University of California Irvine, Irvine, CA 92697, United States
| | - Shuping Sun
- Department of Otolaryngology - Head and Neck Surgery, The First Affiliated Hospital, Zhengzhou University, Henan 450052, China
| | - Yuchen Xu
- Department of Bioengineering, University of California San Diego, San Diego, California 92092, United States
| | - Hamid R Djalilian
- Center for Hearing Research, Departments of Anatomy and Neurobiology, Biomedical Engineering, Cognitive Sciences, Otolaryngology - Head and Neck Surgery, University of California Irvine, Irvine, CA 92697, United States
| | - Harrison W Lin
- Center for Hearing Research, Departments of Anatomy and Neurobiology, Biomedical Engineering, Cognitive Sciences, Otolaryngology - Head and Neck Surgery, University of California Irvine, Irvine, CA 92697, United States
| | - Fan-Gang Zeng
- Center for Hearing Research, Departments of Anatomy and Neurobiology, Biomedical Engineering, Cognitive Sciences, Otolaryngology - Head and Neck Surgery, University of California Irvine, Irvine, CA 92697, United States.
| |
Collapse
|
50
|
Fiene M, Radecke JO, Misselhorn J, Sengelmann M, Herrmann CS, Schneider TR, Schwab BC, Engel AK. tACS phase-specifically biases brightness perception of flickering light. Brain Stimul 2022; 15:244-253. [PMID: 34990876 DOI: 10.1016/j.brs.2022.01.001] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Revised: 12/08/2021] [Accepted: 01/01/2022] [Indexed: 11/02/2022] Open
Abstract
BACKGROUND Visual phenomena like brightness illusions impressively demonstrate the highly constructive nature of perception. In addition to physical illumination, the subjective experience of brightness is related to temporal neural dynamics in visual cortex. OBJECTIVE Here, we asked whether biasing the temporal pattern of neural excitability in visual cortex by transcranial alternating current stimulation (tACS) modulates brightness perception of concurrent rhythmic visual stimuli. METHODS Participants performed a brightness discrimination task on two flickering lights, one of which was targeted by same-frequency electrical stimulation at varying phase shifts. tACS was applied with an occipital and a periorbital active control montage, based on simulations of electrical currents using finite element head models. RESULTS Experimental results reveal that flicker brightness perception is modulated depending on the phase shift between sensory and electrical stimulation, solely under occipital tACS. Phase-specific modulatory effects of tACS depended on flicker-evoked neural phase stability at the tACS-targeted frequency, recorded prior to electrical stimulation. Further, the optimal timing of tACS application leading to enhanced brightness perception was correlated with the neural phase delay of the cortical flicker response. CONCLUSIONS Our results corroborate the role of temporally coordinated neural activity in visual cortex for brightness perception of rhythmic visual input in humans. Phase-specific behavioral modulations by tACS emphasize its efficacy in transferring perceptually relevant temporal information to the cortex. These findings provide an important step towards understanding the basis of visual perception and further confirm electrical stimulation as a tool for advancing controlled modulations of neural activity and related behavior.
Collapse
Affiliation(s)
- Marina Fiene
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, 20246, Germany.
| | - Jan-Ole Radecke
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, 20246, Germany
| | - Jonas Misselhorn
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, 20246, Germany
| | - Malte Sengelmann
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, 20246, Germany
| | - Christoph S Herrmann
- Experimental Psychology Lab, Department of Psychology, Cluster of Excellence "Hearing4all", European Medical School, Carl von Ossietzky University Oldenburg, Oldenburg, 26129, Germany; Research Center Neurosensory Science, Carl von Ossietzky University Oldenburg, Oldenburg, 26129, Germany
| | - Till R Schneider
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, 20246, Germany
| | - Bettina C Schwab
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, 20246, Germany
| | - Andreas K Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, 20246, Germany
| |
Collapse
|