1. Elmer S, Kurthen I, Meyer M, Giroud N. A multidimensional characterization of the neurocognitive architecture underlying age-related temporal speech processing. Neuroimage 2023;278:120285. [PMID: 37481009] [DOI: 10.1016/j.neuroimage.2023.120285]
Abstract
Healthy aging is often associated with speech comprehension difficulties in everyday life situations despite a pure-tone hearing threshold in the normative range. Drawing on this background, we used a multidimensional approach to assess the functional and structural neural correlates underlying age-related temporal speech processing while controlling for pure-tone hearing acuity. Accordingly, we combined structural magnetic resonance imaging and electroencephalography, and collected behavioral data while younger and older adults completed a phonetic categorization and discrimination task with consonant-vowel syllables varying along a voice-onset time continuum. The behavioral results confirmed age-related differences in temporal speech processing, which were reflected in a shift of the boundary of the psychometric categorization function, with older adults perceiving more syllables characterized by a short voice-onset time as /ta/ compared to younger adults. Furthermore, despite the absence of any between-group differences in phonetic discrimination abilities, older adults demonstrated longer N100/P200 latencies as well as increased P200 amplitudes while processing the consonant-vowel syllables varying in voice-onset time. Finally, older adults also exhibited a divergent anatomical gray matter infrastructure in bilateral auditory-related and frontal brain regions, as manifested in reduced cortical thickness and surface area. Notably, in the younger but not in the older adult cohort, cortical surface area in these two gross anatomical clusters correlated with the categorization of consonant-vowel syllables characterized by a short voice-onset time, suggesting the existence of a critical gray matter threshold that is crucial for consistent mapping of phonetic categories varying along the temporal dimension.
Taken together, our results highlight the multifaceted dimensions of age-related temporal speech processing characteristics, and pave the way toward a better understanding of the relationships between hearing, speech and the brain in older age.
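As an illustrative aside (not part of the study above), the boundary shift described in this abstract is the 50% point of a logistic identification function fit to categorization responses. A minimal Python sketch, with invented response proportions for a hypothetical /da/-/ta/ voice-onset time continuum:

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(vot, boundary, slope):
    """Logistic identification function: P(/ta/ response) as a function of VOT (ms)."""
    return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

# Invented proportions of /ta/ responses along a /da/-/ta/ VOT continuum (ms).
vot_ms = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
p_ta_younger = np.array([0.02, 0.05, 0.20, 0.50, 0.85, 0.97, 0.99])
p_ta_older = np.array([0.05, 0.15, 0.45, 0.75, 0.93, 0.98, 0.99])

# Fit boundary (50% point) and slope for each group.
popt_younger, _ = curve_fit(psychometric, vot_ms, p_ta_younger, p0=[30.0, 0.2])
popt_older, _ = curve_fit(psychometric, vot_ms, p_ta_older, p0=[30.0, 0.2])
boundary_younger, boundary_older = popt_younger[0], popt_older[0]

# A smaller fitted boundary for the older group means more short-VOT syllables
# are categorized as /ta/, i.e., the category boundary shifts toward shorter VOTs.
```

With these invented data, the fitted boundary for the older group falls at a shorter VOT than for the younger group, which is the kind of shift the authors report; all values and variable names here are hypothetical.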
Affiliation(s)
- Stefan Elmer
- Department of Computational Linguistics, Computational Neuroscience of Speech & Hearing, University of Zurich, Zurich, Switzerland; Competence Center Language & Medicine, University of Zurich, Zurich, Switzerland.
- Ira Kurthen
- Department of Computational Linguistics, Computational Neuroscience of Speech & Hearing, University of Zurich, Zurich, Switzerland.
- Martin Meyer
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland; Center for Neuroscience Zurich, University and ETH of Zurich, Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland; Cognitive Psychology Unit, Alpen-Adria University, Klagenfurt, Austria.
- Nathalie Giroud
- Department of Computational Linguistics, Computational Neuroscience of Speech & Hearing, University of Zurich, Zurich, Switzerland; Center for Neuroscience Zurich, University and ETH of Zurich, Zurich, Switzerland; Competence Center Language & Medicine, University of Zurich, Zurich, Switzerland.
2. Isik M, Eskikurt G, Erdogan ET. Neuromodulation of the left auditory cortex with transcranial direct current stimulation (tDCS) has no effect on the categorical perception of speech sounds. Neuropsychologia 2023;178:108442. [PMID: 36481255] [DOI: 10.1016/j.neuropsychologia.2022.108442]
Abstract
Analysis of temporal cues in the auditory stimulus is essential to the perception of speech sounds. The effect of transcranial direct current stimulation (tDCS) on auditory temporal processing remains unclear. In this study, we examined whether tDCS applied over the left auditory cortex (AC) has a polarity-specific behavioral effect on the categorical perception of speech sounds whose temporal features are modulated. Sixteen healthy volunteers in each group received anodal, cathodal, or sham tDCS. A phonetic categorization task including auditory stimuli with varying voice onset time was performed before and during tDCS, and responses were analyzed. No statistically significant difference was observed between groups (anodal, cathodal, sham) or within groups (pre-tDCS, during tDCS) in comparisons of the slope parameter of the identification function obtained from the phonetic categorization task data. Our results show that a single-session application of tDCS over the left AC does not significantly affect the categorical perception of speech sounds.
Affiliation(s)
- Mevlude Isik
- Neurological Sciences Research and Application Center (İSÜCAN), Istinye University, Istanbul, Turkey.
- Gokcer Eskikurt
- Department of Physiology, Faculty of Medicine, Istinye University, Istanbul, Turkey.
- Ezgi Tuna Erdogan
- Department of Physiology, Faculty of Medicine, Koç University, Istanbul, Turkey.
3.
Abstract
Human speech perception results from neural computations that transform external acoustic speech signals into internal representations of words. The superior temporal gyrus (STG) contains the nonprimary auditory cortex and is a critical locus for phonological processing. Here, we describe how speech sound representation in the STG relies on fundamentally nonlinear and dynamical processes, such as categorization, normalization, contextual restoration, and the extraction of temporal structure. A spatial mosaic of local cortical sites on the STG exhibits complex auditory encoding for distinct acoustic-phonetic and prosodic features. We propose that as a population ensemble, these distributed patterns of neural activity give rise to abstract, higher-order phonemic and syllabic representations that support speech perception. This review presents a multi-scale, recurrent model of phonological processing in the STG, highlighting the critical interface between auditory and language systems. Expected final online publication date for the Annual Review of Psychology, Volume 73 is January 2022.
Affiliation(s)
- Ilina Bhaya-Grossman
- Department of Neurological Surgery, University of California, San Francisco, California 94143, USA; Joint Graduate Program in Bioengineering, University of California, Berkeley and San Francisco, California 94720, USA.
- Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, California 94143, USA.
4. Saltzman DI, Myers EB. Neural Representation of Articulable and Inarticulable Novel Sound Contrasts: The Role of the Dorsal Stream. Neurobiol Lang 2020;1:339-364. [PMID: 35784619] [PMCID: PMC9248853] [DOI: 10.1162/nol_a_00016]
Abstract
The extent to which articulatory information embedded in incoming speech contributes to the formation of new perceptual categories for speech sounds has been a matter of discourse for decades. It has been theorized that the acquisition of new speech sound categories requires a network of sensory and speech motor cortical areas (the "dorsal stream") to successfully integrate auditory and articulatory information. However, it is possible that these brain regions are not sensitive specifically to articulatory information, but instead are sensitive to the abstract phonological categories being learned. We tested this hypothesis by training participants over the course of several days on an articulable non-native speech contrast and acoustically matched inarticulable nonspeech analogues. After participants reached comparable levels of proficiency with the two sets of stimuli, activation was measured with fMRI as they passively listened to both sound types. Decoding of category membership for the articulable speech contrast alone revealed a series of left and right hemisphere regions outside of the dorsal stream that have previously been implicated in the emergence of non-native speech sound categories, while no regions could successfully decode the inarticulable nonspeech contrast. Although activation patterns in the left inferior frontal gyrus, the middle temporal gyrus, and the supplementary motor area provided better information for decoding articulable (speech) sounds compared to the inarticulable (sine wave) sounds, the finding that dorsal stream regions do not emerge as good decoders of the articulable contrast alone suggests that other factors, including the strength and structure of the emerging speech categories, are more likely drivers of dorsal stream activation during novel sound learning.
5. Conant LL, Liebenthal E, Desai A, Seidenberg MS, Binder JR. Differential activation of the visual word form area during auditory phoneme perception in youth with dyslexia. Neuropsychologia 2020;146:107543. [PMID: 32598966] [DOI: 10.1016/j.neuropsychologia.2020.107543]
Abstract
Developmental dyslexia is a learning disorder characterized by difficulties reading words accurately and/or fluently. Several behavioral studies have suggested the presence of anomalies at an early stage of phoneme processing, when the complex spectrotemporal patterns in the speech signal are analyzed and assigned to phonemic categories. In this study, fMRI was used to compare brain responses associated with categorical discrimination of speech syllables (P) and acoustically matched nonphonemic stimuli (N) in children and adolescents with dyslexia and in typically developing (TD) controls, aged 8-17 years. The TD group showed significantly greater activation during the P condition relative to N in an area of the left ventral occipitotemporal cortex that corresponds well with the region referred to as the "visual word form area" (VWFA). Regression analyses using reading performance as a continuous variable across the full group of participants yielded similar results. Overall, the findings are consistent with those of previous neuroimaging studies using print stimuli in individuals with dyslexia, which found reduced activation in left occipitotemporal regions. However, the current study shows that the activation differences seen during reading are already apparent during auditory phoneme discrimination in youth with dyslexia, suggesting that the primary deficit in at least a subset of children may lie early in the speech processing stream and that categorical perception may be an important target of early intervention in children at risk for dyslexia.
Affiliation(s)
- Lisa L Conant
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA.
- Einat Liebenthal
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA; Department of Psychiatry, McLean Hospital, Harvard Medical School, Boston, MA, USA.
- Anjali Desai
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA.
- Mark S Seidenberg
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA.
- Jeffrey R Binder
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA.
6. Early lexical influences on sublexical processing in speech perception: Evidence from electrophysiology. Cognition 2020;197:104162. [DOI: 10.1016/j.cognition.2019.104162]
7. Gennari SP, Millman RE, Hymers M, Mattys SL. Anterior paracingulate and cingulate cortex mediates the effects of cognitive load on speech sound discrimination. Neuroimage 2018;178:735-743. [DOI: 10.1016/j.neuroimage.2018.06.035]
8. Xie X, Myers E. Left Inferior Frontal Gyrus Sensitivity to Phonetic Competition in Receptive Language Processing: A Comparison of Clear and Conversational Speech. J Cogn Neurosci 2017;30:267-280. [PMID: 29160743] [DOI: 10.1162/jocn_a_01208]
Abstract
The speech signal is rife with variations in phonetic ambiguity. For instance, when talkers speak in a conversational register, they demonstrate less articulatory precision, leading to greater potential for confusability at the phonetic level compared with a clear speech register. Current psycholinguistic models assume that ambiguous speech sounds activate more than one phonological category and that competition at prelexical levels cascades to lexical levels of processing. Imaging studies have shown that the left inferior frontal gyrus (LIFG) is modulated by phonetic competition between simultaneously activated categories, with increases in activation for more ambiguous tokens. Yet, these studies have often used artificially manipulated speech and/or metalinguistic tasks, which arguably may recruit neural regions that are not critical for natural speech recognition. Indeed, a prominent model of speech processing, the dual-stream model, posits that the LIFG is not involved in prelexical processing in receptive language processing. In the current study, we exploited natural variation in phonetic competition in the speech signal to investigate the neural systems sensitive to phonetic competition as listeners engage in a receptive language task. Participants heard nonsense sentences spoken in either a clear or conversational register as neural activity was monitored using fMRI. Conversational sentences contained greater phonetic competition, as estimated by measures of vowel confusability, and these sentences also elicited greater activation in a region in the LIFG. Sentence-level phonetic competition metrics uniquely correlated with LIFG activity as well. This finding is consistent with the hypothesis that the LIFG responds to competition at multiple levels of language processing and that recruitment of this region does not require an explicit phonological judgment.
9. An fMRI study investigating effects of conceptually related sentences on the perception of degraded speech. Cortex 2016;79:57-74. [PMID: 27100909] [DOI: 10.1016/j.cortex.2016.03.014]
Abstract
Prior research has shown that the perception of degraded speech is influenced by within-sentence meaning and recruits one or more components of a frontal-temporal-parietal network. The goal of the current study was to examine whether the overall conceptual meaning of a sentence, made up of one set of words, influences the perception of a second, acoustically degraded sentence, made up of a different set of words. Using functional magnetic resonance imaging (fMRI), we presented an acoustically clear sentence followed by an acoustically degraded sentence and manipulated the semantic relationship between them: Related in meaning (but consisting of different content words), Unrelated in meaning, or Same. Results showed that listeners' word recognition accuracy for the acoustically degraded sentences was significantly higher when the target sentence was preceded by a conceptually related rather than a conceptually unrelated sentence. Sensitivity to conceptual relationships was associated with enhanced activity in middle and inferior frontal, temporal, and parietal areas. In addition, the left middle frontal gyrus (LMFG), left inferior frontal gyrus (LIFG), and left middle temporal gyrus (LMTG) showed activity that correlated with individual performance in the Related condition. The superior temporal gyrus (STG) showed increased activation in the Same condition, suggesting that it is sensitive to perceptual similarity rather than to the integration of meaning between the sentence pairs. A fronto-temporo-parietal network appears to consolidate information sources across multiple levels of language (acoustic, lexical, syntactic, semantic) to build, and ultimately integrate, conceptual information across sentences and facilitate the perception of a degraded speech signal. However, the sources of information that are available differentially recruit specific regions and modulate their activity within this network. Implications of these findings for the functional architecture of the network are considered.
10. Heimrath K, Fiene M, Rufener KS, Zaehle T. Modulating Human Auditory Processing by Transcranial Electrical Stimulation. Front Cell Neurosci 2016;10:53. [PMID: 27013969] [PMCID: PMC4779894] [DOI: 10.3389/fncel.2016.00053]
Abstract
Transcranial electrical stimulation (tES) has become a valuable research tool for the investigation of neurophysiological processes underlying human action and cognition. In recent years, striking evidence for the neuromodulatory effects of transcranial direct current stimulation, transcranial alternating current stimulation, and transcranial random noise stimulation has emerged. However, while a wealth of knowledge has been gained about tES in the motor domain and, to a lesser extent, about its ability to modulate human cognition, surprisingly little is known about its impact on perceptual processing, particularly in the auditory domain. Moreover, while only a few studies have systematically investigated the impact of auditory tES, it has already been applied in a large number of clinical trials, leading to a remarkable imbalance between basic and clinical research on auditory tES. Here, we review the state of the art of tES application in the auditory domain, focusing on the impact of neuromodulation on acoustic perception and its potential for clinical application in the treatment of auditory-related disorders.
Affiliation(s)
- Tino Zaehle
- Department of Neurology, Otto-von-Guericke University Magdeburg, Magdeburg, Germany.
11. Junger J, Habel U, Bröhr S, Neulen J, Neuschaefer-Rube C, Birkholz P, Kohler C, Schneider F, Derntl B, Pauly K. More than just two sexes: the neural correlates of voice gender perception in gender dysphoria. PLoS One 2014;9:e111672. [PMID: 25375171] [PMCID: PMC4222943] [DOI: 10.1371/journal.pone.0111672]
Abstract
Gender dysphoria (also known as "transsexualism") is characterized as a discrepancy between anatomical sex and gender identity. Research points towards neurobiological influences. Due to the sexually dimorphic characteristics of the human voice, voice gender perception provides a biologically relevant function, e.g. in the context of mate selection. There is evidence for a better recognition of voices of the opposite sex and a differentiation of the sexes in its underlying functional cerebral correlates, namely the prefrontal and middle temporal areas. This fMRI study investigated the neural correlates of voice gender perception in 32 male-to-female gender dysphoric individuals (MtFs) compared to 20 non-gender dysphoric men and 19 non-gender dysphoric women. Participants indicated the sex of 240 voice stimuli that were morphed in semitone steps toward the other gender. Compared to men and women, MtFs showed differences in a neural network including the medial prefrontal gyrus, the insula, and the precuneus when responding to male vs. female voices. With increased voice morphing, men recruited more prefrontal areas compared to women and MtFs, whereas MtFs showed a pattern more similar to women. On both a behavioral and a neuronal level, our results are consistent with MtFs' reports that they cannot identify with their assigned sex.
Affiliation(s)
- Jessica Junger
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany; Jülich Aachen Research Alliance-Translational Brain Medicine, Jülich, Germany.
- Ute Habel
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany; Jülich Aachen Research Alliance-Translational Brain Medicine, Jülich, Germany.
- Sabine Bröhr
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany.
- Josef Neulen
- Department of Gynaecological Endocrinology and Reproductive Medicine, Medical School, RWTH Aachen University, Aachen, Germany.
- Christiane Neuschaefer-Rube
- Department of Phoniatrics, Pedaudiology and Communication Disorders, Medical School, RWTH Aachen University, Aachen, Germany.
- Peter Birkholz
- Department of Phoniatrics, Pedaudiology and Communication Disorders, Medical School, RWTH Aachen University, Aachen, Germany.
- Christian Kohler
- Department of Psychiatry, Neuropsychiatry Division, University of Pennsylvania School of Medicine, Philadelphia, Pennsylvania, United States of America.
- Frank Schneider
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany; Jülich Aachen Research Alliance-Translational Brain Medicine, Jülich, Germany.
- Birgit Derntl
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany; Jülich Aachen Research Alliance-Translational Brain Medicine, Jülich, Germany.
- Katharina Pauly
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany; Jülich Aachen Research Alliance-Translational Brain Medicine, Jülich, Germany.
12. Specht K, Baumgartner F, Stadler J, Hugdahl K, Pollmann S. Functional asymmetry and effective connectivity of the auditory system during speech perception is modulated by the place of articulation of the consonant: a 7T fMRI study. Front Psychol 2014;5:549. [PMID: 24966841] [PMCID: PMC4052338] [DOI: 10.3389/fpsyg.2014.00549]
Abstract
To differentiate between stop consonants, the auditory system has to detect subtle differences in place of articulation (PoA) and voice-onset time (VOT). How this differential processing is represented at the cortical level remains unclear. The present functional magnetic resonance imaging (fMRI) study takes advantage of the superior spatial resolution and high sensitivity of ultra-high-field 7 T MRI. Subjects attentively listened to consonant-vowel (CV) syllables with an alveolar or bilabial stop consonant and either a short or long VOT. The results showed an overall bilateral activation pattern in the posterior temporal lobe during the processing of the CV syllables. This pattern was, however, modulated most strongly by PoA, such that syllables with an alveolar stop consonant showed stronger left-lateralized activation. In addition, analysis of the underlying functional and effective connectivity revealed an inhibitory effect of the left planum temporale (PT) on the right auditory cortex (AC) during the processing of alveolar CV syllables. Furthermore, the connectivity results also indicated a directed information flow from the right to the left AC, and further to the left PT, for all syllables. These results indicate that auditory speech perception relies on an interplay between the left and right ACs, with the left PT as modulator. Furthermore, the degree of functional asymmetry is determined by the acoustic properties of the CV syllables.
Affiliation(s)
- Karsten Specht
- Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway; Department of Medical Engineering, Haukeland University Hospital, Bergen, Norway.
- Florian Baumgartner
- Department of Experimental Psychology, Otto-von-Guericke University, Magdeburg, Germany.
- Jörg Stadler
- Leibniz Institute for Neurobiology, Magdeburg, Germany.
- Kenneth Hugdahl
- Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway; Division of Psychiatry, Haukeland University Hospital, Bergen, Norway; Department of Radiology, Haukeland University Hospital, Bergen, Norway; NORMENT Senter for Fremragende Forskning, Oslo, Norway.
- Stefan Pollmann
- Department of Experimental Psychology, Otto-von-Guericke University, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany.
13. Klein ME, Zatorre RJ. Representations of Invariant Musical Categories Are Decodable by Pattern Analysis of Locally Distributed BOLD Responses in Superior Temporal and Intraparietal Sulci. Cereb Cortex 2014;25:1947-1957. [PMID: 24488957] [DOI: 10.1093/cercor/bhu003]
Abstract
In categorical perception (CP), continuous physical signals are mapped to discrete perceptual bins: mental categories not found in the physical world. CP has been demonstrated across multiple sensory modalities and, in audition, for certain over-learned speech and musical sounds. The neural basis of auditory CP, however, remains ambiguous, including its robustness in nonspeech processes and the relative roles of the left/right hemispheres, primary/nonprimary cortices, and ventral/dorsal perceptual processing streams. Here, highly trained musicians listened to two-tone musical intervals, which they perceive categorically, while undergoing functional magnetic resonance imaging. Multivariate pattern analyses were performed after grouping sounds by interval quality (determined by the frequency ratio between tones) or by pitch height (perceived noncategorically, with frequency ratios remaining constant). Distributed activity patterns in spheres of voxels were used to determine sound sample identities. For intervals, significant decoding accuracy was observed in the right superior temporal and left intraparietal sulci, with smaller peaks observed homologously in the contralateral hemispheres. For pitch height, no significant decoding accuracy was observed, consistent with the noncategorical perception of this dimension. These results suggest that similar mechanisms are operative for nonspeech categories as for speech, support roles for two segregated processing streams, and support hierarchical processing models of CP.
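As a hedged illustration of the decoding logic behind such multivariate pattern analyses (not the authors' actual pipeline), the following sketch classifies category membership from simulated voxel patterns with leave-one-out nearest-centroid decoding; all data, dimensions, and names are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n_per_class, n_voxels = 40, 50
labels = np.repeat([0, 1], n_per_class)  # two hypothetical interval-quality categories
# Synthetic "BOLD patterns": a category-specific signal plus Gaussian noise.
signal = np.where(labels[:, None] == 1, 0.4, -0.4)
patterns = signal + rng.standard_normal((2 * n_per_class, n_voxels))

# Leave-one-out nearest-centroid decoding: hold out one trial, compute class
# centroids from the rest, and predict the class of the held-out pattern.
correct = 0
for i in range(len(labels)):
    mask = np.arange(len(labels)) != i
    c0 = patterns[mask & (labels == 0)].mean(axis=0)
    c1 = patterns[mask & (labels == 1)].mean(axis=0)
    pred = int(np.linalg.norm(patterns[i] - c1) < np.linalg.norm(patterns[i] - c0))
    correct += pred == labels[i]
accuracy = correct / len(labels)
```

Above-chance `accuracy` on held-out trials is the evidence that the voxel pattern carries category information; in the study, this test is run within local spheres of voxels across the cortex.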
Affiliation(s)
- Mike E Klein
- Cognitive Neuroscience Unit, Montréal Neurological Institute, McGill University, Montréal, Québec, Canada H3A 2B4; International Laboratory for Brain, Music and Sound Research, Montréal, Québec, Canada H3C 3J7.
- Robert J Zatorre
- Cognitive Neuroscience Unit, Montréal Neurological Institute, McGill University, Montréal, Québec, Canada H3A 2B4; International Laboratory for Brain, Music and Sound Research, Montréal, Québec, Canada H3C 3J7.
14. Scharinger M, Henry MJ, Erb J, Meyer L, Obleser J. Thalamic and parietal brain morphology predicts auditory category learning. Neuropsychologia 2013;53:75-83. [PMID: 24035788] [DOI: 10.1016/j.neuropsychologia.2013.09.012]
Abstract
Auditory categorization is a vital skill involving the attribution of meaning to acoustic events, engaging domain-specific (i.e., auditory) as well as domain-general (e.g., executive) brain networks. A listener's ability to categorize novel acoustic stimuli should therefore depend on both, with the domain-general network being particularly relevant for adaptively changing listening strategies and directing attention to relevant acoustic cues. Here we assessed adaptive listening behavior, using complex acoustic stimuli with an initially salient (but later degraded) spectral cue and a secondary, duration cue that remained nondegraded. We employed voxel-based morphometry (VBM) to identify cortical and subcortical brain structures whose individual neuroanatomy predicted task performance and the ability to optimally switch to making use of temporal cues after spectral degradation. Behavioral listening strategies were assessed by logistic regression and revealed mainly strategy switches in the expected direction, with considerable individual differences. Gray-matter probability in the left inferior parietal lobule (BA 40) and left precentral gyrus was predictive of "optimal" strategy switch, while gray-matter probability in thalamic areas, comprising the medial geniculate body, co-varied with overall performance. Taken together, our findings suggest that successful auditory categorization relies on domain-specific neural circuits in the ascending auditory pathway, while adaptive listening behavior depends more on brain structure in parietal cortex, enabling the (re)direction of attention to salient stimulus properties.
Collapse
Affiliation(s)
- Mathias Scharinger
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Molly J Henry
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Julia Erb
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Lars Meyer
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Jonas Obleser
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
15
Alho K, Rinne T, Herron TJ, Woods DL. Stimulus-dependent activations and attention-related modulations in the auditory cortex: a meta-analysis of fMRI studies. Hear Res 2013; 307:29-41. [PMID: 23938208] [DOI: 10.1016/j.heares.2013.08.001]
Abstract
We meta-analyzed 115 functional magnetic resonance imaging (fMRI) studies reporting auditory-cortex (AC) coordinates for activations related to active and passive processing of pitch and spatial location of non-speech sounds, as well as to active and passive speech and voice processing. We aimed to reveal any systematic differences between the AC surface locations of these activations by statistically analyzing the activation loci using the open-source Matlab toolbox VAMCA (Visualization and Meta-analysis on Cortical Anatomy). AC activations associated with pitch processing (e.g., active or passive listening to tones with a varying vs. fixed pitch) had median loci in the middle superior temporal gyrus (STG), lateral to Heschl's gyrus. However, median loci of activations due to the processing of infrequent pitch changes in a tone stream were centered in the STG or planum temporale (PT), significantly posterior to the median loci for other types of pitch processing. Median loci of attention-related modulations due to focused attention to pitch (e.g., attending selectively to low or high tones delivered in concurrent sequences) were, in turn, centered in the STG or superior temporal sulcus (STS), posterior to the median loci for passive pitch processing. Activations due to spatial processing were centered in the posterior STG or PT, significantly posterior to the pitch-processing loci (processing of infrequent pitch changes excluded). In the right-hemisphere AC, the median locus of spatial attention-related modulations was in the STS, significantly inferior to the median locus for passive spatial processing. Activations associated with speech processing and those associated with voice processing had indistinguishable median loci at the border of mid-STG and mid-STS. Median loci of attention-related modulations due to attention to speech were in the same mid-STG/STS region.
Thus, while attention to the pitch or location of non-speech sounds seems to recruit AC areas less involved in passive pitch or location processing, focused attention to speech predominantly enhances activations in regions that already respond to human vocalizations during passive listening. This suggests that distinct attention mechanisms might be engaged by attention to speech and attention to more elemental auditory features such as tone pitch or location. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Affiliation(s)
- Kimmo Alho
- Helsinki Collegium for Advanced Studies, University of Helsinki, PO Box 4, FI 00014 Helsinki, Finland; Institute of Behavioural Sciences, University of Helsinki, PO Box 9, FI 00014 Helsinki, Finland.
16
Zheng W, Ackley ES, Martínez-Ramón M, Posse S. Spatially aggregated multiclass pattern classification in functional MRI using optimally selected functional brain areas. Magn Reson Imaging 2012; 31:247-61. [PMID: 22902471] [DOI: 10.1016/j.mri.2012.07.010]
Abstract
In previous works, boosting aggregation of classifier outputs from discrete brain areas has been demonstrated to reduce dimensionality and improve the robustness and accuracy of functional magnetic resonance imaging (fMRI) classification. However, dimensionality reduction and classification of mixed activation patterns of multiple classes remain challenging. In the present study, the goals were (a) to reduce dimensionality by combining feature reduction at the voxel level and backward elimination of optimally aggregated classifiers at the region level, (b) to compare region selection for spatially aggregated classification using boosting and partial least squares regression methods and (c) to resolve mixed activation patterns using probabilistic prediction of individual tasks. Brain activation maps from interleaved visual, motor, auditory and cognitive tasks were segmented into 144 functional regions. Feature selection reduced the number of feature voxels by more than 50%, leaving 95 regions. The two aggregation approaches further reduced the number of regions to 30, resulting in more than 75% reduction of classification time and misclassification rates of less than 3%. Boosting and partial least squares (PLS) were compared to select the most discriminative and the most task correlated regions, respectively. Successful task prediction in mixed activation patterns was feasible within the first block of task activation in real-time fMRI experiments. This methodology is suitable for sparsifying activation patterns in real-time fMRI and for neurofeedback from distributed networks of brain activation.
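The backward elimination of aggregated region-level classifiers can be sketched roughly as follows. This is a simplified illustration on invented data, scoring with a plain cross-validated logistic regression rather than the study's boosting/PLS aggregation; the region count and signal structure are assumptions for the demo.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_regions = 120, 6
y = rng.integers(0, 2, n_trials)
# Toy "region features": regions 0-2 carry the class signal, 3-5 are noise.
X = rng.normal(size=(n_trials, n_regions))
X[:, :3] += y[:, None] * 1.5

def score(cols):
    # Cross-validated accuracy of a classifier over the kept regions.
    return cross_val_score(LogisticRegression(), X[:, cols], y, cv=5).mean()

regions = list(range(n_regions))
current = score(regions)
improved = True
while improved and len(regions) > 1:
    improved = False
    for r in list(regions):
        trial = [c for c in regions if c != r]
        s = score(trial)
        if s >= current:        # drop a region only if accuracy does not fall
            regions, current, improved = trial, s, True
            break
print(len(regions), round(current, 3))
```

Because a region is removed only when accuracy is preserved, the retained set is never scored worse than the full set, which mirrors the dimensionality reduction without misclassification cost described in the abstract.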
Affiliation(s)
- Weili Zheng
- Department of Neurology, School of Medicine, University of New Mexico, Albuquerque, NM, USA.
17
Myers EB, Swan K. Effects of category learning on neural sensitivity to non-native phonetic categories. J Cogn Neurosci 2012; 24:1695-708. [PMID: 22621261] [DOI: 10.1162/jocn_a_00243]
Abstract
Categorical perception, an increased sensitivity to between- compared with within-category contrasts, is a stable property of native speech perception that emerges as language matures. Although recent research suggests that categorical responses to speech sounds can be found in left prefrontal as well as temporo-parietal areas, it is unclear how the neural system develops heightened sensitivity to between-category contrasts. In the current study, two groups of adult participants were trained to categorize speech sounds taken from a dental/retroflex/velar continuum according to two different boundary locations. Behavioral results suggest that for successful learners, categorization training led to increased discrimination accuracy for between-category contrasts with no concomitant increase for within-category contrasts. Neural responses to the learned category schemes were measured using a short-interval habituation design during fMRI scanning. Whereas both inferior frontal and temporal regions showed sensitivity to phonetic contrasts sampled from the continuum, only the bilateral middle frontal gyri exhibited a pattern consistent with encoding of the learned category scheme. Taken together, these results support a view in which top-down information about category membership may reshape perceptual sensitivities via attention or executive mechanisms in the frontal lobes.
Affiliation(s)
- Emily B Myers
- Department of Communication Sciences, University of Connecticut, 850 Bolton Rd., Storrs, CT 06269, USA.
18
Petitto LA, Berens MS, Kovelman I, Dubins MH, Jasinska K, Shalinsky M. The "Perceptual Wedge Hypothesis" as the basis for bilingual babies' phonetic processing advantage: new insights from fNIRS brain imaging. Brain Lang 2012; 121:130-43. [PMID: 21724244] [PMCID: PMC3192234] [DOI: 10.1016/j.bandl.2011.05.003]
Abstract
In a neuroimaging study focusing on young bilinguals, we explored the brains of bilingual and monolingual babies across two age groups (younger: 4-6 months; older: 10-12 months), using fNIRS in a new event-related design, as babies processed linguistic phonetic (native English, non-native Hindi) and non-linguistic tone stimuli. We found that phonetic processing in bilingual and monolingual babies is accomplished with the same language-specific brain areas classically observed in adults, including the left superior temporal gyrus (associated with phonetic processing) and the left inferior frontal cortex (associated with the search and retrieval of information about meanings, and with syntactic and phonological patterning), but with intriguing developmental timing differences: left superior temporal gyrus activation was observed early and remained stably active over time, while the left inferior frontal cortex showed a greater increase in neural activation in older babies, notably at the precise age when babies enter the universal first-word milestone, thus revealing a first-time focal brain correlate that may mediate a universal behavioral milestone in early human language acquisition. A difference was also observed in the older bilingual babies' resilient neural and behavioral sensitivity to non-native phonetic contrasts at a time when monolingual babies can no longer make such discriminations. We advance the "Perceptual Wedge Hypothesis" as one possible explanation for how exposure to more than one language may alter neural and language processing in ways that we suggest are advantageous to language users. The brains of bilinguals and multilinguals may provide the most powerful window into the full neural "extent and variability" that our human species' language-processing brain areas could potentially achieve.
Affiliation(s)
- L A Petitto
- Department of Psychology, University of Toronto, Canada.
19
Categorical speech processing in Broca's area: an fMRI study using multivariate pattern-based analysis. J Neurosci 2012; 32:3942-8. [PMID: 22423114] [DOI: 10.1523/jneurosci.3814-11.2012]
Abstract
Although much effort has been directed toward understanding the neural basis of speech processing, the neural processes involved in the categorical perception of speech have been relatively less studied, and many questions remain open. In this functional magnetic resonance imaging (fMRI) study, we probed the cortical regions mediating categorical speech perception using an advanced brain-mapping technique, whole-brain multivariate pattern-based analysis (MVPA). Normal healthy human subjects (native English speakers) were scanned while they listened to 10 consonant-vowel syllables along the /ba/-/da/ continuum. Outside of the scanner, each individual's own category boundary was measured to divide the fMRI data into /ba/ and /da/ conditions per subject. The whole-brain MVPA revealed that Broca's area and the left pre-supplementary motor area evoked distinct neural activity patterns between the two perceptual categories (/ba/ vs. /da/). Broca's area was also identified when the same analysis was applied to another dataset (Raizada and Poldrack, 2007), which had previously yielded the supramarginal gyrus using a univariate adaptation-fMRI paradigm. The consistent MVPA findings from two independent datasets strongly indicate that Broca's area participates in categorical speech perception, with a possible role of translating speech signals into articulatory codes. The difference in results between univariate and multivariate pattern-based analyses of the same data suggests that processes in different cortical areas along the dorsal speech perception stream are distributed on different spatial scales.
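The core of an MVPA decoding step — training a linear classifier on multivoxel patterns and testing whether cross-validated accuracy exceeds chance — can be sketched as below. The simulated patterns, voxel counts, and effect size are illustrative assumptions, not the study's data or pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels = 100, 50
labels = np.repeat([0, 1], n_trials // 2)        # e.g. /ba/ vs /da/ percepts
patterns = rng.normal(size=(n_trials, n_voxels)) # simulated voxel patterns
patterns[labels == 1, :10] += 0.8                # weak distributed signal

# Cross-validated decoding: accuracy reliably above 0.5 would indicate
# that the multivoxel pattern carries category information.
acc = cross_val_score(LinearSVC(), patterns, labels, cv=5).mean()
print(acc > 0.5)
```

In a real analysis this classifier would be run per region or in a whole-brain searchlight, with permutation testing to establish significance.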
20
Dubois C, Otzenberger H, Gounot D, Sock R, Metz-Lutz MN. Visemic processing in audiovisual discrimination of natural speech: a simultaneous fMRI-EEG study. Neuropsychologia 2012; 50:1316-26. [PMID: 22387605] [DOI: 10.1016/j.neuropsychologia.2012.02.016]
Abstract
In a noisy environment, visual perception of articulatory movements improves natural speech intelligibility. Parallel to phonemic processing based on auditory signal, visemic processing constitutes a counterpart based on "visemes", the distinctive visual units of speech. Aiming at investigating the neural substrates of visemic processing in a disturbed environment, we carried out a simultaneous fMRI-EEG experiment based on discriminating syllabic minimal pairs involving three phonological contrasts, each bearing on a single phonetic feature characterised by different degrees of visual distinctiveness. The contrasts involved either labialisation of the vowels, or place of articulation or voicing of the consonants. Audiovisual consonant-vowel syllable pairs were presented either with a static facial configuration or with a dynamic display of articulatory movements related to speech production. In the sound-disturbed MRI environment, the significant improvement of syllabic discrimination achieved in the dynamic audiovisual modality, compared to the static audiovisual modality was associated with activation of the occipito-temporal cortex (MT+V5) bilaterally, and of the left premotor cortex. While the former was activated in response to facial movements independently of their relation to speech, the latter was specifically activated by phonological discrimination. During fMRI, significant evoked potential responses to syllabic discrimination were recorded around 150 and 250 ms following the onset of the second stimulus of the pairs, whose amplitude was greater in the dynamic compared to the static audiovisual modality. Our results provide arguments for the involvement of the speech motor cortex in phonological discrimination, and suggest a multimodal representation of speech units.
Affiliation(s)
- Cyril Dubois
- Institut de Phonétique de Strasbourg, Équipe de Recherche Parole et Cognition, U.R. 1339 - LILPA, Université de Strasbourg, France.
21
Jiang J, Bernstein LE. Psychophysics of the McGurk and other audiovisual speech integration effects. J Exp Psychol Hum Percept Perform 2011; 37:1193-209. [PMID: 21574741] [DOI: 10.1037/a0023100]
Abstract
When the auditory and visual components of spoken audiovisual nonsense syllables are mismatched, perceivers produce four different types of perceptual responses, auditory correct, visual correct, fusion (the so-called McGurk effect), and combination (i.e., two consonants are reported). Here, quantitative measures were developed to account for the distribution of the four types of perceptual responses to 384 different stimuli from four talkers. The measures included mutual information, correlations, and acoustic measures, all representing audiovisual stimulus relationships. In Experiment 1, open-set perceptual responses were obtained for acoustic /bɑ/ or /lɑ/ dubbed to video /bɑ, dɑ, gɑ, vɑ, zɑ, lɑ, wɑ, ðɑ/. The talker, the video syllable, and the acoustic syllable significantly influenced the type of response. In Experiment 2, the best predictors of response category proportions were a subset of the physical stimulus measures, with the variance accounted for in the perceptual response category proportions between 17% and 52%. That audiovisual stimulus relationships can account for perceptual response distributions supports the possibility that internal representations are based on modality-specific stimulus relationships.
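One of the stimulus-relationship measures mentioned above, mutual information between the stimulus pairing and the response category, can be computed from a joint count table. The counts here are invented for illustration; only the computation itself is meant to be informative.

```python
import numpy as np

# Toy joint counts: rows = audiovisual stimulus pairing,
# cols = response type (auditory-correct, visual-correct, fusion, combination).
counts = np.array([[30.0, 5.0, 10.0, 5.0],
                   [5.0, 25.0, 15.0, 5.0]])
p = counts / counts.sum()
px = p.sum(axis=1, keepdims=True)   # marginal over responses
py = p.sum(axis=0, keepdims=True)   # marginal over pairings
mi = float(np.sum(p * np.log2(p / (px * py))))  # mutual information in bits
print(mi > 0)   # a dependent table yields positive MI
```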
Affiliation(s)
- Jintao Jiang
- Division of Communication and Auditory Neuroscience, House Ear Institute, Los Angeles, California, USA.
22
Manipulation of voice onset time during dichotic listening. Brain Cogn 2011; 76:233-8. [DOI: 10.1016/j.bandc.2011.01.007]
23
Woods DL, Herron TJ, Cate AD, Kang X, Yund EW. Phonological processing in human auditory cortical fields. Front Hum Neurosci 2011; 5:42. [PMID: 21541252] [PMCID: PMC3082852] [DOI: 10.3389/fnhum.2011.00042]
Abstract
We used population-based cortical-surface analysis of functional magnetic resonance imaging data to characterize the processing of consonant–vowel–consonant syllables (CVCs) and spectrally matched amplitude-modulated noise bursts (AMNBs) in human auditory cortex as subjects attended to auditory or visual stimuli in an intermodal selective attention paradigm. Average auditory cortical field (ACF) locations were defined using tonotopic mapping in a previous study. Activations in auditory cortex were defined by two stimulus-preference gradients: (1) medial belt ACFs preferred AMNBs, while lateral belt and parabelt fields preferred CVCs; this preference extended into core ACFs, with medial regions of primary auditory cortex (A1) and the rostral field preferring AMNBs and lateral regions preferring CVCs. (2) Anterior ACFs showed smaller activations but more clearly defined stimulus preferences than did posterior ACFs. Stimulus-preference gradients were unaffected by auditory attention, suggesting that ACF preferences reflect the automatic processing of different spectrotemporal sound features.
Affiliation(s)
- David L Woods
- Human Cognitive Neurophysiology Laboratory, Department of Veterans Affairs Northern California Health Care System Martinez, CA, USA
24
Klein ME, Zatorre RJ. A role for the right superior temporal sulcus in categorical perception of musical chords. Neuropsychologia 2011; 49:878-887. [PMID: 21236276] [DOI: 10.1016/j.neuropsychologia.2011.01.008]
Abstract
Categorical perception (CP) is a mechanism whereby non-identical stimuli that have the same underlying meaning become invariantly represented in the brain. Through behavioral identification and discrimination tasks, CP has been demonstrated to occur broadly across the auditory modality, including in perception of speech (e.g. phonemes) and music (e.g. chords) stimuli. Several functional imaging studies have linked CP of speech with activity in multiple regions of the left superior temporal sulcus (STS). As language processing is generally left-hemisphere dominant and, conversely, fine-grained spectral processing shows a right hemispheric bias, we hypothesized that CP of musical stimuli would be associated with right STS activity. Here, we used functional magnetic resonance imaging (fMRI) to test healthy, musically-trained volunteers as they (a) underwent a musical chord adaptation/habituation paradigm and (b) performed an active discrimination task on within- and between-category chord pairs, as well as an acoustically-matched, more continuously-perceived orthogonal sound set. As predicted, greater right STS activity was linked to categorical processing in both experimental paradigms. The results suggest that the left and right STS are functionally specialized and that the right STS may take on a key role in CP of spectrally complex sounds.
Affiliation(s)
- Mike E Klein
- Department of Neuropsychology, Montréal Neurological Institute, McGill University, Montréal, Québec H3A 2B4, Canada; International Laboratory for Brain, Music and Sound Research, Montréal, Québec H3C 3J7, Canada.
- Robert J Zatorre
- Department of Neuropsychology, Montréal Neurological Institute, McGill University, Montréal, Québec H3A 2B4, Canada; International Laboratory for Brain, Music and Sound Research, Montréal, Québec H3C 3J7, Canada.
25

26
Turkeltaub PE, Coslett HB. Localization of sublexical speech perception components. Brain Lang 2010; 114:1-15. [PMID: 20413149] [PMCID: PMC2914564] [DOI: 10.1016/j.bandl.2010.03.008]
Abstract
Models of speech perception are in general agreement with respect to the major cortical regions involved, but lack precision with regard to localization and lateralization of processing units. To refine these models we conducted two Activation Likelihood Estimation (ALE) meta-analyses of the neuroimaging literature on sublexical speech perception. Based on foci reported in 23 fMRI experiments, we identified significant activation likelihoods in left and right superior temporal cortex and the left posterior middle frontal gyrus. Sub-analyses examining phonetic and phonological processes revealed only left mid-posterior superior temporal sulcus activation likelihood. A lateralization analysis demonstrated temporal lobe left lateralization in terms of magnitude, extent, and consistency of activity. Experiments requiring explicit attention to phonology drove this lateralization. An ALE analysis of eight fMRI studies on categorical phoneme perception revealed significant activation likelihood in the left supramarginal gyrus and angular gyrus. These results are consistent with a speech processing network in which the bilateral superior temporal cortices perform acoustic analysis of speech and non-speech auditory stimuli, the left mid-posterior superior temporal sulcus performs phonetic and phonological analysis, and the left inferior parietal lobule is involved in detection of differences between phoneme categories. These results modify current speech perception models in three ways: (1) specifying the most likely locations of dorsal stream processing units, (2) clarifying that phonetic and phonological superior temporal sulcus processing is left lateralized and localized to the mid-posterior portion, and (3) suggesting that both the supramarginal gyrus and angular gyrus may be involved in phoneme discrimination.
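A one-dimensional toy of the ALE idea — blurring each study's reported foci with a Gaussian and combining the per-study maps as the probability that at least one study activates a location — might look like this. The coordinates, kernel width, and axis are assumptions for illustration, not the meta-analysis's actual parameters.

```python
import numpy as np

x = np.linspace(-60, 60, 241)             # toy 1-D "cortex" axis in mm
sigma = 10.0                              # assumed smoothing kernel (mm)
studies = [[-45, -40], [-42], [30, -38]]  # invented foci, one list per study

def study_map(foci):
    # Modeled activation probability: Gaussian blur around each focus,
    # taking the max so multiple nearby foci cannot exceed 1.
    m = np.zeros_like(x)
    for f in foci:
        m = np.maximum(m, np.exp(-(x - f) ** 2 / (2 * sigma ** 2)))
    return m

# ALE score: probability that at least one study activates each location,
# i.e. 1 - prod(1 - p_i) across the per-study maps.
ale = 1 - np.prod([1 - study_map(f) for f in studies], axis=0)
peak = float(x[np.argmax(ale)])
print(peak)
```

With these invented foci the ALE peak falls where the three studies' reports converge, around -40 mm, which is the convergence logic behind the clusters the abstract reports.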
Affiliation(s)
- Peter E Turkeltaub
- Department of Neurology, University of Pennsylvania, 3400 Spruce Street, 3 West Gates Building, Philadelphia, PA 19104, USA.
27
Arciuli J, Rankine T, Monaghan P. Auditory discrimination of voice-onset time and its relationship with reading ability. Laterality 2010; 15:343-60. [DOI: 10.1080/13576500902799671]
28
Kirshenbaum AP, Johnson MW, Schwarz SL, Jackson ER. Response disinhibition evoked by the administration of nicotine and nicotine-associated contextual cues. Drug Alcohol Depend 2009; 105:97-108. [PMID: 19640659] [PMCID: PMC2789553] [DOI: 10.1016/j.drugalcdep.2009.06.018]
Abstract
Nicotine causes dose-dependent alterations in accuracy on the differential-reinforcement of low-rate responding (DRL) 29.5-s schedule in rats. The current investigation evaluated whether nicotine-associated contextual cues can produce nicotine-like perturbations in DRL-schedule performance in the absence of nicotine. Nicotine and saline administrations occurred just prior to DRL 29.5-s schedule responding for sucrose solution, and two different experimental contexts (differentiated by visual, olfactory, and tactile cues) were utilized. All subjects (N=16) experienced two consecutive sessions of DRL-schedule responding per day. The experimental group (n=8) was exposed to saline immediately prior to the first session and 0.3 mg/kg nicotine before the second session, and the context was changed between sessions. This sequence of saline and then nicotine administration, paired with two reliable contexts, persisted for 12 consecutive days, and successive nicotine administrations corresponded with increasingly poorer performance on the DRL 29.5-s schedule. No nicotine was administered for days 13-20 during context testing, and the nicotine-associated context produced response disinhibition on the DRL schedule. Two control groups were included in the design; subjects in one control group (n=4) received saline in each context to verify that the contexts themselves were not exerting control over operant responding. To assess how explicit and non-explicit pairings of nicotine and contextual cues influenced DRL behavior, subjects in a second control group (n=4) were given nicotine prior to the second session, but the contexts were not altered between sessions. The results from this experiment suggest that environmental stimuli associated with nicotine exposure can come to elicit nicotine-induced performance decrements on a DRL 29.5-s schedule.
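The DRL contingency itself is simple to state in code: a response is reinforced only if the inter-response time meets the criterion, and every response (reinforced or not) resets the timer. A minimal sketch with illustrative response times:

```python
def drl_reinforced(response_times, criterion=29.5):
    """Return which responses earn reinforcement under a DRL schedule:
    a response is reinforced only if at least `criterion` seconds have
    elapsed since the previous response (which always resets the clock)."""
    reinforced, last = [], None
    for t in response_times:
        ok = last is None or (t - last) >= criterion
        reinforced.append(ok)
        last = t  # premature responses also reset the timer
    return reinforced

print(drl_reinforced([0.0, 10.0, 45.0, 80.0]))  # [True, False, True, True]
```

Under this rule, "response disinhibition" shows up as a rising proportion of premature (unreinforced) responses.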
Affiliation(s)
- Ari P. Kirshenbaum
- Krikstone Laboratory for the Behavioral Sciences, Department of Psychology, Saint Michael's College, One Winooski Park, Box 193, Colchester, Vermont 05439, USA. Corresponding author. Tel.: +1-802-654-2846; fax: +1-802-654-2236.
- Matthew W. Johnson
- Behavioral Pharmacology Research Unit, Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, 5510 Nathan Shock Dr., Baltimore, MD 21224, USA.
- Sarah L. Schwarz
- Krikstone Laboratory for the Behavioral Sciences, Department of Psychology, Saint Michael's College, One Winooski Park, Box 193, Colchester, Vermont 05439, USA.
- Eric R. Jackson
- Krikstone Laboratory for the Behavioral Sciences, Department of Psychology, Saint Michael's College, One Winooski Park, Box 193, Colchester, Vermont 05439, USA.
29
Myers EB. Dissociable effects of phonetic competition and category typicality in a phonetic categorization task: an fMRI investigation. Neuropsychologia 2006; 45:1463-73. [PMID: 17178420] [PMCID: PMC1876725] [DOI: 10.1016/j.neuropsychologia.2006.11.005]
Abstract
The current study used fMRI to explore the extent to which neural activation patterns in the processing of speech are driven by the quality of a speech sound as a member of its phonetic category, that is, its category typicality, or by the competition inherent in resolving the category membership of stimuli which are similar to other possible speech sounds. Subjects performed a phonetic categorization task on synthetic stimuli ranging along a voice-onset time continuum from [da] to [ta]. The stimulus set included sounds at the extreme ends of the voicing continuum, which were poor phonetic category exemplars but minimally competitive; stimuli near the phonetic category boundary, which were both poor exemplars of their phonetic category and maximally competitive; and stimuli in the middle of the range, which were good exemplars of their phonetic category. Results revealed greater activation in bilateral inferior frontal areas for stimuli with the greatest degree of competition, consistent with the view that these areas are involved in selection between competing alternatives. In contrast, greater activation was observed in bilateral superior temporal gyri for the least prototypical phonetic category exemplars, irrespective of competition, consistent with the view that these areas process the acoustic-phonetic details of speech to resolve a token's category membership. Taken together, these results implicate separable neural regions in two different aspects of phonetic categorization.
Affiliation(s)
- Emily B Myers
- Brown University, Department of Cognitive and Linguistic Sciences, Box 1978, Providence, RI 02912, USA.