1. Chládková K, Urbanec J, Skálová S, Kremláček J. Newborns' neural processing of native vowels reveals directional asymmetries. Dev Cogn Neurosci 2021;52:101023. [PMID: 34717213; PMCID: PMC8577326; DOI: 10.1016/j.dcn.2021.101023]
Abstract
Prenatal learning of speech rhythm and melody is well documented. Much less is known about the earliest acquisition of segmental speech categories. We tested whether newborn infants perceive native vowels, but not nonspeech sounds, through some existing (proto-)categories, and whether they do so more robustly for some vowels than for others. Sensory event-related potentials (ERPs) and mismatch responses (MMRs) were obtained from 104 neonates acquiring Czech. The ERPs elicited by vowels were larger than the ERPs to nonspeech sounds, and reflected the differences between the individual vowel categories. The MMRs to changes in vowels, but not in nonspeech sounds, revealed left-lateralized asymmetrical processing patterns: a change from a focal [a] to a nonfocal [ɛ], and a change from short [ɛ] to long [ɛ:], elicited more negative MMR responses than the reverse changes. Contrary to predictions, we did not find evidence of a developmental advantage for vowel length contrasts (supposedly most readily available in utero) over vowel quality contrasts (supposedly less salient in utero). An explanation for these asymmetries in terms of a differential degree of prior phonetic warping of speech sounds is proposed. Future studies with newborns from different language backgrounds should test whether the prenatal learning scenario proposed here is plausible.

Newborns' processing of native vowels and comparable nonspeech sounds differs: durational and spectral differences in the stimuli were more clearly reflected by the ERPs to vowels than by those to tone complexes. Directional asymmetries were detected in the mismatch responses to vowel deviants; in the left hemisphere, a change in vowels from focal to nonfocal and from short to long resulted in a more negative MMR. The findings may be explained by phonetic learning prior to the third day after birth.
Affiliation(s)
- Kateřina Chládková
- Institute of Czech Language and Theory of Communication, Faculty of Arts, Charles University, Nám. Jana Palacha 2, 116 38 Praha, Czechia; Institute of Psychology, Czech Academy of Sciences, Hybernská 8, 110 00 Praha, Czechia.
- Josef Urbanec
- Department of Pathological Physiology, Faculty of Medicine in Hradec Králové, Charles University, Šimkova 870, 500 03 Hradec Králové, Czechia; Paediatrics Department, Havlíčkův Brod Hospital, Husova 2624, 580 01 Havlíčkův Brod, Czechia
- Sylva Skálová
- Paediatrics Department of University Hospital, Sokolská 581, 500 05 Hradec Králové, Czechia
- Jan Kremláček
- Department of Pathological Physiology, Faculty of Medicine in Hradec Králové, Charles University, Šimkova 870, 500 03 Hradec Králové, Czechia; Department of Medical Biophysics, Faculty of Medicine in Hradec Králové, Charles University, Šimkova 870, 500 03 Hradec Králové, Czechia
2. Nudga N, Urbanec J, Oceláková Z, Kremláček J. Neural Processing of Spectral and Durational Changes in Speech and Non-speech Stimuli: An MMN Study With Czech Adults. Front Hum Neurosci 2021;15:643655. [PMID: 34434094; PMCID: PMC8380928; DOI: 10.3389/fnhum.2021.643655]
Abstract
Neural discrimination of auditory contrasts is usually studied via the mismatch negativity (MMN) component of the event-related potentials (ERPs). In the processing of speech contrasts, the magnitude of the MMN is determined by both the acoustic and the phonological distance between stimuli. The MMN can also be modulated by the order in which the stimuli are presented, thus indexing perceptual asymmetries in speech sound processing. Here we assessed the MMN elicited by two types of phonological contrasts, namely vowel quality and vowel length, assuming that both would elicit a comparably strong MMN, as both are phonemic in the listeners' native language (Czech) and perceptually salient. Furthermore, we tested whether these phonemic contrasts are processed asymmetrically, and whether the asymmetries are acoustically or linguistically conditioned. The MMN elicited by the spectral change between /a/ and /ε/ was comparable to the MMN elicited by the durational change between /ε/ and /ε:/, suggesting that both types of contrasts are perceptually important for Czech listeners. The spectral change in vowels yielded an asymmetrical pattern manifested by a larger MMN response to the change from /ε/ to /a/ than from /a/ to /ε/. The lack of such an asymmetry in the MMN to the same spectral change in comparable non-speech stimuli spoke against an acoustically based explanation, indicating that it may instead have been the phonological properties of the vowels that triggered the asymmetry. The potential phonological origins of the asymmetry are discussed within the featurally underspecified lexicon (FUL) framework, and conclusions are drawn about the perceptual relevance of the place and height features for the Czech /ε/-/a/ contrast.
Affiliation(s)
- Natalia Nudga
- Faculty of Arts, Institute of Phonetics, Charles University, Prague, Czechia
- Josef Urbanec
- Department of Pathological Physiology, Faculty of Medicine in Hradec Králové, Charles University, Hradec Králové, Czechia
- Pediatrics Department, Havlíčkův Brod Hospital, Havlíčkův Brod, Czechia
- Zuzana Oceláková
- Faculty of Arts, Institute of Phonetics, Charles University, Prague, Czechia
- Jan Kremláček
- Department of Pathological Physiology, Faculty of Medicine in Hradec Králové, Charles University, Hradec Králové, Czechia
- Department of Medical Biophysics, Faculty of Medicine in Hradec Králové, Charles University, Hradec Králové, Czechia
- Kateřina Chládková
- Faculty of Arts, Institute of Czech Language and Theory of Communication, Charles University, Prague, Czechia
- Institute of Psychology, Czech Academy of Sciences, Brno, Czechia
3. Nourski KV, Steinschneider M, Rhone AE, Kovach CK, Banks MI, Krause BM, Kawasaki H, Howard MA. Electrophysiology of the Human Superior Temporal Sulcus during Speech Processing. Cereb Cortex 2020;31:1131-1148. [PMID: 33063098; DOI: 10.1093/cercor/bhaa281]
Abstract
The superior temporal sulcus (STS) is a crucial hub for speech perception and can be studied with high spatiotemporal resolution using electrodes targeting mesial temporal structures in epilepsy patients. The goals of the current study were to clarify functional distinctions between the upper (STSU) and lower (STSL) banks, hemispheric asymmetries, and activity during self-initiated speech. Electrophysiologic properties were characterized using semantic categorization and dialog-based tasks. Gamma-band activity and alpha-band suppression were used as complementary measures of STS activation. Gamma responses to auditory stimuli were weaker in STSL compared with STSU and had longer onset latencies. Activity in anterior STS was larger during speaking than listening; the opposite pattern was observed more posteriorly. Opposite hemispheric asymmetries were found for alpha suppression in STSU and STSL. Alpha suppression in the STS emerged earlier than in core auditory cortex, suggesting feedback signaling within the auditory cortical hierarchy. STSL was the only region where gamma responses to words presented in the semantic categorization tasks were larger in subjects with superior task performance. More pronounced alpha suppression was associated with better task performance in Heschl's gyrus, the superior temporal gyrus, and the STS. Functional differences between STSU and STSL warrant their separate assessment in future studies.
Affiliation(s)
- Kirill V Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA 52242, USA
- Mitchell Steinschneider
- Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY 10461, USA
- Ariane E Rhone
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
- Matthew I Banks
- Department of Anesthesiology, University of Wisconsin-Madison, Madison, WI 53705, USA; Department of Neuroscience, University of Wisconsin-Madison, Madison, WI 53705, USA
- Bryan M Krause
- Department of Anesthesiology, University of Wisconsin-Madison, Madison, WI 53705, USA
- Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
- Matthew A Howard
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA 52242, USA; Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA 52242, USA
4. Zimmerer F, Scharinger M, Cornell S, Reetz H, Eulitz C. Neural mechanisms for coping with acoustically reduced speech. Brain Lang 2019;191:46-57. [PMID: 30822731; DOI: 10.1016/j.bandl.2019.02.001]
Abstract
In spoken language, reductions of word forms occur regularly and need to be accommodated by the listener. Intriguingly, this accommodation is usually achieved without any apparent effort. The neural bases of this cognitive skill are not yet fully understood. We presented participants with reduced words that were preceded by either a related or an unrelated visual prime and compared electric brain responses to reduced words with those to their full counterparts. In the time domain, we found a positivity between 400 and 600 ms that differed between reduced and full forms. A later positivity distinguished primed from unprimed words and was modulated by reduction. In the frequency domain, alpha suppression was stronger for reduced than for full words. The time- and frequency-domain reduction effects converge on the view that reduced words draw on attention and memory mechanisms. Our data demonstrate the importance of interactive processing of bottom-up and top-down information for the comprehension of reduced words.
Affiliation(s)
- Frank Zimmerer
- Department of Language Science and Technology, Universität des Saarlandes, Germany; Department of Pediatric Neurology, Developmental Medicine and Social Pediatrics, Dr. von Hauner Children's Hospital, Ludwig-Maximilian-Universität, Munich, Germany
- Mathias Scharinger
- Phonetics Research Group, Philipps-Universität Marburg, Germany; Marburg Center for Mind, Brain and Behavior, Philipps-Universität Marburg, Germany.
- Sonia Cornell
- Department of Pediatric Neurology, Developmental Medicine and Social Pediatrics, Dr. von Hauner Children's Hospital, Ludwig-Maximilian-Universität, Munich, Germany; Department of Linguistics, Universität Konstanz, Germany
- Henning Reetz
- Institute for Phonetics, Goethe-Universität, Frankfurt, Germany
- Carsten Eulitz
- Department of Linguistics, Universität Konstanz, Germany
5. McNair SW, Kayser SJ, Kayser C. Consistent pre-stimulus influences on auditory perception across the lifespan. Neuroimage 2019;186:22-32. [PMID: 30391564; PMCID: PMC6347568; DOI: 10.1016/j.neuroimage.2018.10.085]
Abstract
As we get older, perception in cluttered environments becomes increasingly difficult as a result of changes in peripheral and central neural processes. Given the aging society, it is important to understand the neural mechanisms constraining perception in the elderly. In young participants, the state of rhythmic brain activity prior to a stimulus has been shown to modulate the neural encoding and perceptual impact of this stimulus - yet it remains unclear whether, and if so, how, the perceptual relevance of pre-stimulus activity changes with age. Using the auditory system as a model, we recorded EEG activity during a frequency discrimination task from younger and older human listeners. By combining single-trial EEG decoding with linear modelling we demonstrate consistent statistical relations between pre-stimulus power and the encoding of sensory evidence in short-latency EEG components, and more variable relations between pre-stimulus phase and subjects' decisions in longer-latency components. At the same time, we observed a significant slowing of auditory evoked responses and a flattening of the overall EEG frequency spectrum in the older listeners. Our results point to mechanistically consistent relations between rhythmic brain activity and sensory encoding that emerge despite changes in neural response latencies and the relative amplitude of rhythmic brain activity with age.
Affiliation(s)
- Steven W McNair
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, United Kingdom
- Stephanie J Kayser
- Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Universitätsstr. 25, 33615, Bielefeld, Germany; Cognitive Interaction Technology - Center of Excellence, Bielefeld University, Inspiration 1, 33615, Bielefeld, Germany
- Christoph Kayser
- Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Universitätsstr. 25, 33615, Bielefeld, Germany; Cognitive Interaction Technology - Center of Excellence, Bielefeld University, Inspiration 1, 33615, Bielefeld, Germany.
6. Holt LL, Tierney AT, Guerra G, Laffere A, Dick F. Dimension-selective attention as a possible driver of dynamic, context-dependent re-weighting in speech processing. Hear Res 2018;366:50-64. [PMID: 30131109; PMCID: PMC6107307; DOI: 10.1016/j.heares.2018.06.014]
Abstract
The contribution of acoustic dimensions to an auditory percept is dynamically adjusted and reweighted based on prior experience about how informative these dimensions are across the long-term and short-term environment. This is especially evident in speech perception, where listeners differentially weight information across multiple acoustic dimensions, and use this information selectively to update expectations about future sounds. The dynamic and selective adjustment of how acoustic input dimensions contribute to perception has made it tempting to conceive of this as a form of non-spatial auditory selective attention. Here, we review several human speech perception phenomena that might be consistent with auditory selective attention although, as of yet, the literature does not definitively support a mechanistic tie. We relate these human perceptual phenomena to illustrative nonhuman animal neurobiological findings that offer informative guideposts in how to test mechanistic connections. We next present a novel empirical approach that can serve as a methodological bridge from human research to animal neurobiological studies. Finally, we describe four preliminary results that demonstrate its utility in advancing understanding of human non-spatial dimension-based auditory selective attention.
Affiliation(s)
- Lori L Holt
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, 15213, USA; Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, 15213, USA.
- Adam T Tierney
- Department of Psychological Sciences, Birkbeck College, University of London, London, WC1E 7HX, UK; Centre for Brain and Cognitive Development, Birkbeck College, London, WC1E 7HX, UK
- Giada Guerra
- Department of Psychological Sciences, Birkbeck College, University of London, London, WC1E 7HX, UK; Centre for Brain and Cognitive Development, Birkbeck College, London, WC1E 7HX, UK
- Aeron Laffere
- Department of Psychological Sciences, Birkbeck College, University of London, London, WC1E 7HX, UK
- Frederic Dick
- Department of Psychological Sciences, Birkbeck College, University of London, London, WC1E 7HX, UK; Centre for Brain and Cognitive Development, Birkbeck College, London, WC1E 7HX, UK; Department of Experimental Psychology, University College London, London, WC1H 0AP, UK
7. Kyathanahally SP, Franco-Watkins A, Zhang X, Calhoun VD, Deshpande G. A Realistic Framework for Investigating Decision Making in the Brain With High Spatiotemporal Resolution Using Simultaneous EEG/fMRI and Joint ICA. IEEE J Biomed Health Inform 2017;21:814-825. [DOI: 10.1109/jbhi.2016.2590434]
8. Chang CHC, Kuo WJ. The Neural Substrates Underlying the Implementation of Phonological Rule in Lexical Tone Production: An fMRI Study of the Tone 3 Sandhi Phenomenon in Mandarin Chinese. PLoS One 2016;11:e0159835. [PMID: 27455078; PMCID: PMC4959711; DOI: 10.1371/journal.pone.0159835]
Abstract
This study examined the neural substrates underlying the implementation of a phonological rule in lexical tone production, using the Tone 3 sandhi phenomenon in Mandarin Chinese. Tone 3 sandhi is traditionally described as the substitution of Tone 3 with Tone 2 when followed by another Tone 3 (33 → 23) during speech production. Tone 3 sandhi enables the examination of tone processing at the phonological level with minimal involvement of segments. Using fMRI, we measured brain activations corresponding to monosyllable and disyllable sequences of the four Chinese lexical tones, while manipulating the requirement for an overt oral response. The application of Tone 3 sandhi to disyllable sequences of Tone 3 was confirmed by our behavioral results. Larger brain responses to overtly produced disyllable Tone 3 (33 > 11, 22, and 44) were found in the right posterior IFG by both whole-brain and ROI analyses. We suggest that the right IFG is responsible for the processing of Tone 3 sandhi. Intense temporo-frontal interaction is needed in speech production for self-monitoring; the involvement of the right IFG in tone production might result from its interaction with the right auditory cortex, which is known to specialize in pitch. Future studies using tools with better temporal resolution are needed to illuminate the dynamic interaction between the right inferior frontal regions and the left-lateralized language network in tone languages.
Affiliation(s)
- Claire H. C. Chang
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan
- College of Humanities and Social Sciences, Taipei Medical University, Taipei, Taiwan
- Wen-Jui Kuo
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan
- Brain Research Center, National Yang-Ming University, Taipei, Taiwan
9. Petersen EB, Wöstmann M, Obleser J, Stenfelt S, Lunner T. Hearing loss impacts neural alpha oscillations under adverse listening conditions. Front Psychol 2015;6:177. [PMID: 25745410; PMCID: PMC4333793; DOI: 10.3389/fpsyg.2015.00177]
Abstract
Degradations in external, acoustic stimulation have long been suspected to increase the load on working memory (WM). One neural signature of WM load is enhanced power of alpha oscillations (6-12 Hz). However, it is unknown to what extent common internal, auditory degradation, that is, hearing impairment, affects the neural mechanisms of WM when audibility has been ensured via amplification. Using an adapted auditory Sternberg paradigm, we varied the orthogonal factors memory load and background noise level, while the electroencephalogram was recorded. In each trial, participants were presented with 2, 4, or 6 spoken digits embedded in one of three different levels of background noise. After a stimulus-free delay interval, participants indicated whether a probe digit had appeared in the sequence of digits. Participants were healthy older adults (62-86 years), with normal to moderately impaired hearing. Importantly, the background noise levels were individually adjusted and participants were wearing hearing aids to equalize audibility across participants. Irrespective of hearing loss (HL), behavioral performance improved with lower memory load and also with lower levels of background noise. Interestingly, the alpha power in the stimulus-free delay interval was dependent on the interplay between task demands (memory load and noise level) and HL; while alpha power increased with HL during low and intermediate levels of memory load and background noise, it dropped for participants with the relatively most severe HL under the highest memory load and background noise level. These findings suggest that adaptive neural mechanisms for coping with adverse listening conditions break down for higher degrees of HL, even when adequate hearing aid amplification is in place.
Affiliation(s)
- Eline B. Petersen
- Eriksholm Research Centre, Snekkersten, Denmark
- Technical Audiology, Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Malte Wöstmann
- International Max Planck Research School on Neuroscience of Communication, Leipzig, Germany
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jonas Obleser
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Stefan Stenfelt
- Technical Audiology, Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Thomas Lunner
- Eriksholm Research Centre, Snekkersten, Denmark
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
10. Scharinger M, Henry MJ, Obleser J. Acoustic cue selection and discrimination under degradation: differential contributions of the inferior parietal and posterior temporal cortices. Neuroimage 2014;106:373-381. [PMID: 25481793; DOI: 10.1016/j.neuroimage.2014.11.050]
Abstract
Auditory categorization is a vital skill for perceiving the acoustic environment. Categorization depends on the discriminability of the sensory input as well as on the ability of the listener to adaptively make use of the relevant features of the sound. Previous studies on categorization have focused either on speech sounds when studying discriminability or on visual stimuli when assessing optimal cue utilization. Here, by contrast, we examined neural sensitivity to stimulus discriminability and optimal cue utilization when categorizing novel, non-speech auditory stimuli not affected by long-term familiarity. In a functional magnetic resonance imaging (fMRI) experiment, listeners categorized sounds from two category distributions, differing along two acoustic dimensions: spectral shape and duration. By introducing spectral degradation after the first half of the experiment, we manipulated both stimulus discriminability and the relative informativeness of acoustic cues. Degradation caused an overall decrease in discriminability based on spectral shape, and therefore enhanced the informativeness of duration. A relative increase in duration-cue utilization was accompanied by increased activity in left parietal cortex. Further, discriminability modulated right planum temporale activity to a higher degree when stimuli were spectrally degraded than when they were not. These findings provide support for separable contributions of parietal and posterior temporal areas to perceptual categorization. The parietal cortex seems to support the selective utilization of informative stimulus cues, while the posterior superior temporal cortex as a primarily auditory brain area supports discriminability particularly under acoustic degradation.
Affiliation(s)
- Mathias Scharinger
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Molly J Henry
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jonas Obleser
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany