1
Calce RP, Rekow D, Barbero FM, Kiseleva A, Talwar S, Leleu A, Collignon O. Voice categorization in the four-month-old human brain. Curr Biol 2024; 34:46-55.e4. PMID: 38096819. DOI: 10.1016/j.cub.2023.11.042.
Abstract
Voices are the most relevant social sounds for humans and therefore have crucial adaptive value in development. Neuroimaging studies in adults have demonstrated the existence of regions in the superior temporal sulcus that respond preferentially to voices. Yet, whether voices represent a functionally specific category in the young infant's mind is largely unknown. We developed a highly sensitive paradigm relying on fast periodic auditory stimulation (FPAS) combined with scalp electroencephalography (EEG) to demonstrate that the infant brain implements a reliable preferential response to voices early in life. Twenty-three 4-month-old infants listened to sequences containing non-vocal sounds from different categories presented at 3.33 Hz, with highly heterogeneous vocal sounds appearing every third stimulus (1.11 Hz). We were able to isolate a voice-selective response over temporal regions, and individual voice-selective responses were found in most infants within only a few minutes of stimulation. This selective response was significantly reduced for the same frequency-scrambled sounds, indicating that voice selectivity is not simply driven by the envelope and the spectral content of the sounds. Such a robust selective response to voices as early as 4 months of age suggests that the infant brain is endowed with the ability to rapidly develop a functional selectivity to this socially relevant category of sounds.
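The frequency-tagging logic of the FPAS design can be illustrated in a few lines: a periodic base response (3.33 Hz) and a periodic category-selective response (1.11 Hz) appear as discrete peaks in the EEG amplitude spectrum, quantified against neighboring noise bins. This is a minimal sketch with simulated data, not the study's pipeline; NumPy is assumed and all signal parameters are illustrative.

```python
import numpy as np

fs = 250.0                    # sampling rate (Hz), illustrative
T = 9.0                       # epoch length (s); makes 10/9 Hz fall exactly on a bin
t = np.arange(int(fs * T)) / fs
f_base, f_odd = 10 / 3, 10 / 9    # 3.33 Hz base rate, 1.11 Hz voice (oddball) rate

rng = np.random.default_rng(0)
eeg = (1.0 * np.sin(2 * np.pi * f_base * t)     # general auditory response
       + 0.5 * np.sin(2 * np.pi * f_odd * t)    # voice-selective response
       + 0.3 * rng.standard_normal(t.size))     # background noise

amp = np.abs(np.fft.rfft(eeg)) / t.size         # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def snr_at(f_target, n_neigh=8, skip=1):
    """Amplitude at the target bin divided by the mean of neighboring bins."""
    k = int(round(f_target * T))                # frequency-to-bin: k = f * T
    neigh = np.concatenate([amp[k - skip - n_neigh : k - skip],
                            amp[k + skip + 1 : k + skip + 1 + n_neigh]])
    return amp[k] / neigh.mean()

print(snr_at(f_odd), snr_at(f_base))  # both well above the noise floor
```

A response is considered "tagged" when the SNR at the stimulation frequency (and its harmonics) clearly exceeds 1; the oddball peak at 1.11 Hz isolates the category-selective component.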
Affiliation(s)
- Roberta P Calce
- Crossmodal Perception and Plasticity Laboratory, Institute of Research in Psychology (IPSY) and Institute of Neuroscience (IoNS), Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium.
- Diane Rekow
- Development of Olfactory Communication and Cognition Lab, Centre des Sciences du Goût et de l'Alimentation, Université Bourgogne Franche-Comté, Université de Bourgogne, CNRS, Inrae, Institut Agro Dijon, 21000 Dijon, France; Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany.
- Francesca M Barbero
- Crossmodal Perception and Plasticity Laboratory, Institute of Research in Psychology (IPSY) and Institute of Neuroscience (IoNS), Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium.
- Anna Kiseleva
- Development of Olfactory Communication and Cognition Lab, Centre des Sciences du Goût et de l'Alimentation, Université Bourgogne Franche-Comté, Université de Bourgogne, CNRS, Inrae, Institut Agro Dijon, 21000 Dijon, France.
- Siddharth Talwar
- Crossmodal Perception and Plasticity Laboratory, Institute of Research in Psychology (IPSY) and Institute of Neuroscience (IoNS), Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium.
- Arnaud Leleu
- Development of Olfactory Communication and Cognition Lab, Centre des Sciences du Goût et de l'Alimentation, Université Bourgogne Franche-Comté, Université de Bourgogne, CNRS, Inrae, Institut Agro Dijon, 21000 Dijon, France.
- Olivier Collignon
- Crossmodal Perception and Plasticity Laboratory, Institute of Research in Psychology (IPSY) and Institute of Neuroscience (IoNS), Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium; School of Health Sciences, HES-SO Valais-Wallis, The Sense Innovation and Research Center, 1007 Lausanne & Sion, Switzerland.
2
Kerneis S, Galvin JJ, Borel S, Baqué J, Fu QJ, Bakhos D. Preliminary evaluation of computer-assisted home training for French cochlear implant recipients. PLoS One 2023; 18:e0285154. PMID: 37115775. PMCID: PMC10146517. DOI: 10.1371/journal.pone.0285154.
Abstract
For French cochlear implant (CI) recipients, in-person clinical auditory rehabilitation is typically provided during the first few years post-implantation. However, this is often inconvenient: it requires substantial time and resources, and it can be problematic when appointments are unavailable. In response, we developed computer-based home training software ("French AngelSound™") for French CI recipients. We recently conducted a pilot study to evaluate the newly developed French AngelSound™ in 15 CI recipients (5 unilateral, 5 bilateral, 5 bimodal). Outcome measures included phoneme recognition in quiet and sentence recognition in noise. Unilateral CI users were tested with the CI alone. Bilateral CI users were tested with each CI ear alone to determine the poorer ear to be trained, as well as with both ears (binaural performance). Bimodal CI users were tested with the CI ear alone and with the contralateral hearing aid (binaural performance). Participants trained at home over a one-month period (10 hours total). Phonemic contrast training was used; the level of difficulty ranged from phoneme discrimination in quiet to phoneme identification in multi-talker babble. Unilateral and bimodal CI users trained with the CI alone; bilateral CI users trained with the poorer ear alone. Outcomes were measured before training (pre-training), immediately after training was completed (post-training), and one month after training was stopped (follow-up). For all participants, post-training CI-only vowel and consonant recognition scores significantly improved after phoneme training with the CI ear alone. For bilateral and bimodal CI users, binaural vowel and consonant recognition scores also significantly improved after training with a single CI ear. Follow-up measures showed that training benefits were largely retained. These preliminary data suggest that the phonemic contrast training in French AngelSound™ may significantly benefit French CI recipients and may complement clinical auditory rehabilitation, especially when in-person visits are not possible.
Affiliation(s)
- John J Galvin
- University Hospital Center of Tours, Tours, France.
- House Institute Foundation, Los Angeles, California, United States of America.
- Stephanie Borel
- University Hospital Center of Tours, Tours, France.
- Assistance Publique Hôpitaux de Paris, Pitié-Salpêtrière and Sorbonne University, Paris, France.
- Jean Baqué
- University Hospital Center of Tours, Tours, France.
- Qian-Jie Fu
- Department of Head and Neck Surgery, David Geffen School of Medicine, UCLA, Los Angeles, California, United States of America.
- David Bakhos
- University Hospital Center of Tours, Tours, France.
- House Institute Foundation, Los Angeles, California, United States of America.
- INSERM UMR 1253 I-Brain, Université François-Rabelais de Tours, CHRU de Tours, Tours, France.
3
Rapid but specific perceptual learning partially explains individual differences in the recognition of challenging speech. Sci Rep 2022; 12:10011. PMID: 35705680. PMCID: PMC9200863. DOI: 10.1038/s41598-022-14189-8.
Abstract
Perceptual learning for speech, defined as long-lasting changes in speech recognition following exposure or practice, occurs under many challenging listening conditions. However, this learning is also highly specific to the conditions in which it occurred, so its function in everyday adult speech recognition is not clear. We used a time-compressed speech task to assess learning following either brief exposure (rapid learning) or additional training (training-induced learning). Both types of learning were robust and long-lasting. Individual differences in rapid learning explained unique variance in recognizing natural-fast speech and speech-in-noise, with no additional contribution from training-induced learning (Experiment 1). Rapid learning was stimulus specific (Experiment 2), as in previous studies on training-induced learning. We suggest that rapid learning is key for understanding the role of perceptual learning in online speech recognition, whereas longer training could provide additional opportunities to consolidate and stabilize learning.
4
Cooke M, Scharenborg O, Meyer BT. The time course of adaptation to distorted speech. J Acoust Soc Am 2022; 151:2636. PMID: 35461479. DOI: 10.1121/10.0010235.
Abstract
When confronted with unfamiliar or novel forms of speech, listeners' word recognition performance is known to improve with exposure, but data are lacking on the fine-grained time course of adaptation. The current study aims to fill this gap by investigating the time course of adaptation to several different types of distorted speech. Keyword scores as a function of sentence position in a block of 30 sentences were measured in response to eight forms of distorted speech. Listeners recognised twice as many words in the final sentence compared to the initial sentence with around half of the gain appearing in the first three sentences, followed by gradual gains over the rest of the block. Rapid adaptation was apparent for most of the eight distortion types tested with differences mainly in the gradual phase. Adaptation to sine-wave speech improved if listeners had heard other types of distortion prior to exposure, but no similar facilitation occurred for the other types of distortion. Rapid adaptation is unlikely to be due to procedural learning since listeners had been familiarised with the task and sentence format through exposure to undistorted speech. The mechanisms that underlie rapid adaptation are currently unclear.
Affiliation(s)
- Martin Cooke
- Ikerbasque (Basque Science Foundation), Bilbao, Spain.
- Bernd T Meyer
- Communication Acoustics and Cluster of Excellence Hearing4all, Carl von Ossietzky University, Oldenburg, Germany.
5
Lansford KL, Borrie SA, Barrett TS, Flechaus C. When Additional Training Isn't Enough: Further Evidence That Unpredictable Speech Inhibits Adaptation. J Speech Lang Hear Res 2020; 63:1700-1711. PMID: 32437259. PMCID: PMC7839029. DOI: 10.1044/2020_jslhr-19-00380.
Abstract
Purpose: Robust improvements in intelligibility following familiarization, a listener-targeted perceptual training paradigm, have been revealed for talkers diagnosed with spastic, ataxic, and hypokinetic dysarthria but not for talkers with hyperkinetic dysarthria. While the theoretical explanation for the lack of intelligibility improvement following training with hyperkinetic talkers is that there is insufficient distributional regularity in the speech signals to support perceptual adaptation, it could simply be that the standard training protocol was inadequate to facilitate learning of the unpredictable talker. In a pair of experiments, we addressed this possible alternate explanation by modifying the levels of exposure and feedback provided by the perceptual training protocol to offer listeners a more robust training experience.
Method: In Experiment 1, we examined the exposure modifications, testing whether perceptual adaptation to an unpredictable talker with hyperkinetic dysarthria could be achieved with greater or more diverse exposure to dysarthric speech during the training phase. In Experiment 2, we examined feedback modifications, testing whether perceptual adaptation to the unpredictable talker could be achieved with the addition of internally generated somatosensory feedback, via vocal imitation, during the training phase.
Results: Neither task modification led to improved intelligibility of the unpredictable talker with hyperkinetic dysarthria. Furthermore, listeners who completed the vocal imitation task demonstrated significantly reduced intelligibility at posttest.
Conclusion: Together, the results from Experiments 1 and 2 replicate and extend findings from our previous work, suggesting perceptual adaptation is inhibited for talkers whose speech is largely characterized by unpredictable degradations. Collectively, these results underscore the importance of integrating signal predictability into theoretical models of perceptual learning.
Affiliation(s)
- Kaitlin L. Lansford
- School of Communication Science & Disorders, Florida State University, Tallahassee.
- Stephanie A. Borrie
- Department of Communicative Disorders and Deaf Education, Utah State University, Logan.
- Cassidy Flechaus
- School of Communication Science & Disorders, Florida State University, Tallahassee.
6
Banai K, Lavner Y. The effects of exposure and training on the perception of time-compressed speech in native versus nonnative listeners. J Acoust Soc Am 2016; 140:1686. PMID: 27914374. DOI: 10.1121/1.4962499.
Abstract
The present study investigated the effects of language experience on the perceptual learning induced by either brief exposure to or more intensive training with time-compressed speech. Native (n = 30) and nonnative (n = 30) listeners were each divided into three groups with different experience with time-compressed speech: a trained group that practiced the semantic verification of time-compressed sentences for three sessions, an exposure group briefly exposed to 20 time-compressed sentences, and a group of naive listeners. Recognition was assessed with three sets of time-compressed sentences intended to evaluate exposure-induced and training-induced learning as well as across-token and across-talker generalization. Learning profiles differed between native and nonnative listeners. Exposure had a weaker effect in nonnative than in native listeners. Furthermore, native and nonnative trained listeners significantly outperformed their untrained counterparts when tested with sentences taken from the training set. However, only trained native listeners outperformed naive native listeners when tested with new sentences. These findings suggest that the perceptual learning of speech is sensitive to linguistic experience. That rapid learning is weaker in nonnative listeners is consistent with their difficulties in real-life listening conditions. Furthermore, nonnative listeners may require longer periods of practice to achieve native-like learning outcomes.
Affiliation(s)
- Karen Banai
- Department of Communication Sciences and Disorders, University of Haifa, Mt. Carmel, Haifa 34988, Israel.
- Yizhar Lavner
- Department of Computer Science, Tel-Hai College, Tel-Hai 12208, Israel.
7
Evans S, McGettigan C, Agnew ZK, Rosen S, Scott SK. Getting the Cocktail Party Started: Masking Effects in Speech Perception. J Cogn Neurosci 2015; 28:483-500. PMID: 26696297. DOI: 10.1162/jocn_a_00913.
Abstract
Spoken conversations typically take place in noisy environments, and different kinds of masking sounds place differing demands on cognitive resources. Previous studies, examining the modulation of neural activity associated with the properties of competing sounds, have shown that additional speech streams engage the superior temporal gyrus. However, the absence of a condition in which target speech was heard without additional masking made it difficult to identify brain networks specific to masking and to ascertain the extent to which competing speech was processed equivalently to target speech. In this study, we scanned young healthy adults with continuous fMRI while they listened to stories masked by sounds that differed in their similarity to speech. We show that auditory attention and control networks are activated during attentive listening to masked speech in the absence of an overt behavioral task. We demonstrate that competing speech is processed predominantly in the left hemisphere within the same pathway as target speech, but that it is not treated equivalently within that stream, and that individuals who perform better on speech-in-noise tasks activate the left mid-posterior superior temporal gyrus more strongly. Finally, we identify neural responses associated with the onset of sounds in the auditory environment; activity was found within right-lateralized frontal regions, consistent with a phasic alerting response. Taken together, these results provide a comprehensive account of the neural processes involved in listening in noise.
Affiliation(s)
- Zarinah K Agnew
- University College London, London, UK; University of California, San Francisco, USA.
8
Azadpour M, Balaban E. A proposed mechanism for rapid adaptation to spectrally distorted speech. J Acoust Soc Am 2015; 138:44-57. PMID: 26233005. DOI: 10.1121/1.4922226.
Abstract
The mechanisms underlying perceptual adaptation to severely spectrally-distorted speech were studied by training participants to comprehend spectrally-rotated speech, which is obtained by inverting the speech spectrum. Spectral-rotation produces severe distortion confined to the spectral domain while preserving temporal trajectories. During five 1-hour training sessions, pairs of participants attempted to extract spoken messages from the spectrally-rotated speech of their training partner. Data on training-induced changes in comprehension of spectrally-rotated sentences and identification/discrimination of spectrally-rotated phonemes were used to evaluate the plausibility of three different classes of underlying perceptual mechanisms: (1) phonemic remapping (the formation of new phonemic categories that specifically incorporate spectrally-rotated acoustic information); (2) experience-dependent generation of a perceptual "inverse-transform" that compensates for spectral-rotation; and (3) changes in cue weighting (the identification of sets of acoustic cues least affected by spectral-rotation, followed by a rapid shift in perceptual emphasis to favour those cues, combined with the recruitment of the same type of "perceptual filling-in" mechanisms used to disambiguate speech-in-noise). Results exclusively support the third mechanism, which is the only one predicting that learning would specifically target temporally-dynamic cues that were transmitting phonetic information most stably in spite of spectral-distortion. No support was found for phonemic remapping or for inverse-transform generation.
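The core operation behind spectral rotation, inverting the spectrum so that frequency f maps to fs/2 - f while temporal trajectories are preserved, can be sketched with a simple modulation trick. This is a minimal illustration, not the stimulus-generation procedure of the study (practical implementations, such as Blesser's, combine modulation with filtering); NumPy is assumed and the tone parameters are illustrative.

```python
import numpy as np

fs = 16000                       # sampling rate (Hz), illustrative
t = np.arange(fs) / fs           # 1 s of signal
f0 = 3000
x = np.sin(2 * np.pi * f0 * t)   # pure tone stand-in for speech

def spectrally_rotate(x):
    # Multiplying sample n by (-1)**n shifts the spectrum by fs/2;
    # for a real signal this mirrors the band [0, fs/2], mapping
    # every frequency f to fs/2 - f.
    signs = np.where(np.arange(x.size) % 2 == 0, 1.0, -1.0)
    return x * signs

y = spectrally_rotate(x)
peak = np.fft.rfftfreq(y.size, 1 / fs)[np.argmax(np.abs(np.fft.rfft(y)))]
print(peak)  # the 3000 Hz tone now peaks at fs/2 - f0 = 5000 Hz
```

Applying the rotation twice restores the original signal exactly, which is why the distortion is severe yet information-preserving.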
Affiliation(s)
- Mahan Azadpour
- Cognitive Neuroscience Sector, SISSA (International School for Advanced Studies), Via Beirut 2-4, Trieste, Italy.
- Evan Balaban
- Cognitive Neuroscience Sector, SISSA (International School for Advanced Studies), Via Beirut 2-4, Trieste, Italy.
9
Lima CF, Lavan N, Evans S, Agnew Z, Halpern AR, Shanmugalingam P, Meekings S, Boebinger D, Ostarek M, McGettigan C, Warren JE, Scott SK. Feel the Noise: Relating Individual Differences in Auditory Imagery to the Structure and Function of Sensorimotor Systems. Cereb Cortex 2015; 25:4638-50. PMID: 26092220. PMCID: PMC4816805. DOI: 10.1093/cercor/bhv134.
Abstract
Humans can generate mental auditory images of voices or songs, sometimes perceiving them almost as vividly as perceptual experiences. The functional networks supporting auditory imagery have been described, but less is known about the systems associated with interindividual differences in auditory imagery. Combining voxel-based morphometry and fMRI, we examined the structural basis of interindividual differences in how auditory images are subjectively perceived, and explored associations between auditory imagery, sensory-based processing, and visual imagery. Vividness of auditory imagery correlated with gray matter volume in the supplementary motor area (SMA), parietal cortex, medial superior frontal gyrus, and middle frontal gyrus. An analysis of functional responses to different types of human vocalizations revealed that the SMA and parietal sites that predict imagery are also modulated by sound type. Using representational similarity analysis, we found that higher representational specificity of heard sounds in SMA predicts vividness of imagery, indicating a mechanistic link between sensory- and imagery-based processing in sensorimotor cortex. Vividness of imagery in the visual domain also correlated with SMA structure, and with auditory imagery scores. Altogether, these findings provide evidence for a signature of imagery in brain structure, and highlight a common role of perceptual–motor interactions for processing heard and internally generated auditory information.
Affiliation(s)
- César F Lima
- Institute of Cognitive Neuroscience, University College London, London, UK; Center for Psychology, University of Porto, Porto, Portugal.
- Nadine Lavan
- Institute of Cognitive Neuroscience, University College London, London, UK; Department of Psychology, Royal Holloway University of London, London, UK.
- Zarinah Agnew
- Institute of Cognitive Neuroscience, University College London, London, UK; Department of Otolaryngology, University of California, San Francisco, USA.
- Carolyn McGettigan
- Institute of Cognitive Neuroscience, University College London, London, UK; Department of Psychology, Royal Holloway University of London, London, UK.
- Jane E Warren
- Faculty of Brain Sciences, University College London, London, UK.
10
Martin JR, Dezecache G, Dokic J, Grèzes J. Prioritization of emotional signals by the human auditory system: evidence from a perceptual hysteresis protocol. Evol Hum Behav 2014. DOI: 10.1016/j.evolhumbehav.2014.07.005.
11
Banai K, Lavner Y. The effects of training length on the perceptual learning of time-compressed speech and its generalization. J Acoust Soc Am 2014; 136:1908-1917. PMID: 25324090. DOI: 10.1121/1.4895684.
Abstract
Brief exposure to time-compressed speech yields both learning and generalization. Whether such learning continues over the course of multi-session training and if so whether it is more or less specific than exposure-induced learning is not clear, because the outcomes of intensive practice with time-compressed speech have rarely been reported. The goal here was to determine whether prolonged training on time-compressed speech yields additional learning and generalization beyond that induced by brief exposure. Listeners practiced the semantic verification of time-compressed sentences for one or three training sessions. Identification of trained and untrained tokens was subsequently compared between listeners who trained for one or three sessions, listeners who were briefly exposed to 20 time-compressed sentences and naive listeners. Trained listeners outperformed the other groups of listeners on the trained condition, but only the group that was trained for three sessions outperformed the other groups when tested with untrained tokens. These findings suggest that although learning of distorted speech can occur rapidly, more stable learning and generalization might be achieved with longer, multi-session practice. It is suggested that the findings are consistent with the framework proposed by the Reverse Hierarchy Theory of perceptual learning.
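Time-compressed speech of the kind used in these training studies speeds the signal up without shifting its pitch. A naive overlap-add (OLA) sketch conveys the idea: read analysis frames at `rate` times the synthesis hop, then overlap-add them. This is an illustrative simplification (it ignores inter-frame phase, which practical algorithms such as WSOLA or the phase vocoder correct); NumPy is assumed and all parameters are hypothetical.

```python
import numpy as np

def time_compress(x, rate, frame=1024, hop_out=256):
    """Naive overlap-add time-scale modification.

    rate > 1 compresses (speeds up) the signal: frames are read every
    rate * hop_out samples but written every hop_out samples, so the
    waveform shortens by ~1/rate without resampling (and hence without
    changing pitch).
    """
    hop_in = int(round(hop_out * rate))
    win = np.hanning(frame)
    n_frames = 1 + (x.size - frame) // hop_in
    out = np.zeros(hop_out * (n_frames - 1) + frame)
    norm = np.zeros_like(out)
    for i in range(n_frames):
        seg = x[i * hop_in : i * hop_in + frame] * win
        out[i * hop_out : i * hop_out + frame] += seg
        norm[i * hop_out : i * hop_out + frame] += win   # window overlap sum
    return out / np.maximum(norm, 1e-8)                  # normalize overlap

fs = 16000
t = np.arange(2 * fs) / fs            # 2 s dummy signal
x = np.sin(2 * np.pi * 150 * t)       # 150 Hz carrier as a speech stand-in
y = time_compress(x, rate=2.0)        # "twice as fast": roughly half as long
```

A compression rate of 2 yields an output close to half the input duration, mirroring how stimuli at a given compression ratio are generated before training and testing.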
Affiliation(s)
- Karen Banai
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel.
- Yizhar Lavner
- Department of Computer Science, Tel-Hai College, Tel-Hai, Israel.
12
Becker R, Pefkou M, Michel CM, Hervais-Adelman AG. Left temporal alpha-band activity reflects single word intelligibility. Front Syst Neurosci 2013; 7:121. PMID: 24416001. PMCID: PMC3873629. DOI: 10.3389/fnsys.2013.00121.
Abstract
The electroencephalographic (EEG) correlates of degraded speech perception have been explored in a number of recent studies. However, such investigations have often been inconclusive as to whether observed differences in brain responses between conditions result from different acoustic properties of more or less intelligible stimuli or whether they relate to cognitive processes implicated in comprehending challenging stimuli. In this study we used noise vocoding to spectrally degrade monosyllabic words in order to manipulate their intelligibility. We used spectral rotation to generate incomprehensible control conditions matched in terms of spectral detail. We recorded EEG from 14 volunteers who listened to a series of noise vocoded (NV) and noise-vocoded spectrally-rotated (rNV) words, while they carried out a detection task. We specifically sought components of the EEG response that showed an interaction between spectral rotation and spectral degradation. This reflects those aspects of the brain electrical response that are related to the intelligibility of acoustically degraded monosyllabic words, while controlling for spectral detail. An interaction between spectral complexity and rotation was apparent in both evoked and induced activity. Analyses of event-related potentials showed an interaction effect for a P300-like component at several centro-parietal electrodes. Time-frequency analysis of the EEG signal revealed a monotonic increase in event-related desynchronization (ERD) in the alpha band for the NV but not the rNV stimuli at a left temporo-central electrode cluster from 420 to 560 ms, reflecting a direct relationship between the strength of alpha-band ERD and intelligibility. By matching NV words with their incomprehensible rNV homologues, we reveal the spatiotemporal pattern of evoked and induced processes involved in degraded speech perception, largely uncontaminated by purely acoustic effects.
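Noise vocoding, used here and in the cochlear-implant literature above, replaces the fine spectral structure of speech with noise while preserving each band's amplitude envelope. Below is a crude FFT-based sketch of the idea, not the study's stimulus pipeline (which, like most CI simulations, would use filter banks and Hilbert envelopes); NumPy is assumed and the channel count, band edges, and demo signal are all illustrative.

```python
import numpy as np

def noise_vocode(x, fs, n_channels=4, f_lo=100.0, f_hi=6000.0, seed=0):
    """Crude FFT-based noise vocoder: split the signal into log-spaced
    frequency bands, extract each band's amplitude envelope, and use it
    to modulate band-limited noise."""
    rng = np.random.default_rng(seed)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    X = np.fft.rfft(x)
    noise = np.fft.rfft(rng.standard_normal(x.size))
    k = max(1, int(fs * 0.01))                         # ~10 ms smoothing kernel
    out = np.zeros(x.size)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(np.where(mask, X, 0), n=x.size)
        env = np.convolve(np.abs(band), np.ones(k) / k, mode="same")
        carrier = np.fft.irfft(np.where(mask, noise, 0), n=x.size)
        out += env * carrier                           # envelope-modulated noise
    return out

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 200 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
y = noise_vocode(x, fs)   # noise carrying the input's band envelopes
```

With fewer channels the output keeps less spectral detail, which is how intelligibility is parametrically degraded in such studies.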
Affiliation(s)
- Robert Becker
- Functional Brain Mapping Lab, Department of Fundamental Neuroscience, University of Geneva, Geneva, Switzerland.
- Maria Pefkou
- Brain and Language Lab, Department of Clinical Neuroscience, University of Geneva, Geneva, Switzerland.
- Christoph M Michel
- Functional Brain Mapping Lab, Department of Fundamental Neuroscience, University of Geneva, Geneva, Switzerland.
- Alexis G Hervais-Adelman
- Brain and Language Lab, Department of Clinical Neuroscience, University of Geneva, Geneva, Switzerland.