1. Beach SD, Ozernov-Palchik O, May SC, Centanni TM, Perrachione TK, Pantazis D, Gabrieli JDE. The Neural Representation of a Repeated Standard Stimulus in Dyslexia. Front Hum Neurosci 2022; 16:823627. [PMID: 35634200] [PMCID: PMC9133793] [DOI: 10.3389/fnhum.2022.823627]
Abstract
The neural representation of a repeated stimulus is the standard against which a deviant stimulus is measured in the brain, giving rise to the well-known mismatch response. It has been suggested that individuals with dyslexia have poor implicit memory for recently repeated stimuli, such as the train of standards in an oddball paradigm. Here, we examined how the neural representation of a standard emerges over repetitions, asking whether there is less sensitivity to repetition and/or less accrual of "standardness" over successive repetitions in dyslexia. We recorded magnetoencephalography (MEG) as adults with and without dyslexia were passively exposed to speech syllables in a roving-oddball design. We performed time-resolved multivariate decoding of the MEG sensor data to identify the neural signature of standard vs. deviant trials, independent of stimulus differences. This "multivariate mismatch" was equally robust and had a similar time course in the two groups. In both groups, standards generated by as few as two repetitions were distinct from deviants, indicating normal sensitivity to repetition in dyslexia. However, only in the control group did standards become increasingly different from deviants with repetition. These results suggest that many of the mechanisms that give rise to neural adaptation as well as mismatch responses are intact in dyslexia, with the possible exception of a putatively predictive mechanism that successively integrates recent sensory information into feedforward processing.
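The time-resolved multivariate decoding described in this abstract can be illustrated with a minimal sketch on synthetic data. The trial counts, sensor count, effect window, and the nearest-prototype classifier below are all illustrative assumptions, not the authors' actual MEG pipeline: the idea is simply to train a classifier at each timepoint to separate standard from deviant trials and track its accuracy over time.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 32, 50
y = np.repeat([0, 1], n_trials // 2)        # 0 = standard, 1 = deviant
X = rng.normal(size=(n_trials, n_sensors, n_times))
X[y == 1, :, 20:30] += 0.5                  # hypothetical mismatch effect in a mid-latency window

train = np.arange(n_trials) % 2 == 0        # even trials for training, odd for testing
test = ~train

acc = np.empty(n_times)
for t in range(n_times):
    Xt = X[:, :, t]
    m0 = Xt[train & (y == 0)].mean(axis=0)  # class prototypes estimated on training trials
    m1 = Xt[train & (y == 1)].mean(axis=0)
    d0 = np.linalg.norm(Xt[test] - m0, axis=1)
    d1 = np.linalg.norm(Xt[test] - m1, axis=1)
    pred = (d1 < d0).astype(int)            # nearest-prototype decision
    acc[t] = (pred == y[test]).mean()
# acc hovers near chance (0.5) outside the injected window and rises well above it inside
```

Plotting acc against time gives the kind of decoding time course from which the onset and persistence of a standard-vs-deviant distinction can be read off, which is the quantity the study compares across groups.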
Affiliation(s)
- Sara D. Beach
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, United States
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA, United States
- Ola Ozernov-Palchik
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, United States
- Sidney C. May
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, United States
- Tracy M. Centanni
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, United States
- Tyler K. Perrachione
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA, United States
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, United States
- Dimitrios Pantazis
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, United States
- John D. E. Gabrieli
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, United States
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA, United States
2. Getz LM, Toscano JC. The time-course of speech perception revealed by temporally-sensitive neural measures. Wiley Interdiscip Rev Cogn Sci 2020; 12:e1541. [PMID: 32767836] [DOI: 10.1002/wcs.1541]
Abstract
Recent advances in cognitive neuroscience have provided a detailed picture of the early time-course of speech perception. In this review, we highlight this work, placing it within the broader context of research on the neurobiology of speech processing, and discuss how these data point us toward new models of speech perception and spoken language comprehension. We focus, in particular, on temporally-sensitive measures that allow us to directly measure early perceptual processes. Overall, the data provide support for two key principles: (a) speech perception is based on gradient representations of speech sounds and (b) speech perception is interactive and receives input from higher-level linguistic context at the earliest stages of cortical processing. Implications for models of speech processing and the neurobiology of language more broadly are discussed. This article is categorized under: Psychology > Language; Psychology > Perception and Psychophysics; Neuroscience > Cognition.
Affiliation(s)
- Laura M Getz
- Department of Psychological Sciences, University of San Diego, San Diego, California, USA
- Joseph C Toscano
- Department of Psychological and Brain Sciences, Villanova University, Villanova, Pennsylvania, USA
3. Reassessing the electrophysiological evidence for categorical perception of Mandarin lexical tone: ERP evidence from native and naïve non-native Mandarin listeners. Atten Percept Psychophys 2019; 81:543-557. [DOI: 10.3758/s13414-018-1614-8]
4. Toscano JC, Anderson ND, Fabiani M, Gratton G, Garnsey SM. The time-course of cortical responses to speech revealed by fast optical imaging. Brain Lang 2018; 184:32-42. [PMID: 29960165] [PMCID: PMC6102048] [DOI: 10.1016/j.bandl.2018.06.006]
Abstract
Recent work has sought to describe the time-course of spoken word recognition, from initial acoustic cue encoding through lexical activation, and identify cortical areas involved in each stage of analysis. However, existing methods are limited in either temporal or spatial resolution, and as a result, have only provided partial answers to the question of how listeners encode acoustic information in speech. We present data from an experiment using a novel neuroimaging method, fast optical imaging, to directly assess the time-course of speech perception, providing non-invasive measurement of speech sound representations, localized to specific cortical areas. We find that listeners encode speech in terms of continuous acoustic cues at early stages of processing (ca. 96 ms post-stimulus onset), and begin activating phonological category representations rapidly (ca. 144 ms post-stimulus). Moreover, cue-based representations are widespread in the brain and overlap in time with graded category-based representations, suggesting that spoken word recognition involves simultaneous activation of both continuous acoustic cues and phonological categories.
Affiliation(s)
- Joseph C Toscano
- Department of Psychological & Brain Sciences, Villanova University, United States; Beckman Institute for Advanced Science & Technology, University of Illinois at Urbana-Champaign, United States
- Nathaniel D Anderson
- Beckman Institute for Advanced Science & Technology, University of Illinois at Urbana-Champaign, United States; Department of Psychology, University of Illinois at Urbana-Champaign, United States
- Monica Fabiani
- Beckman Institute for Advanced Science & Technology, University of Illinois at Urbana-Champaign, United States; Department of Psychology, University of Illinois at Urbana-Champaign, United States
- Gabriele Gratton
- Beckman Institute for Advanced Science & Technology, University of Illinois at Urbana-Champaign, United States; Department of Psychology, University of Illinois at Urbana-Champaign, United States
- Susan M Garnsey
- Beckman Institute for Advanced Science & Technology, University of Illinois at Urbana-Champaign, United States; Department of Psychology, University of Illinois at Urbana-Champaign, United States
5. Archila-Suerte P, Woods EA, Chiarello C, Hernandez AE. Neuroanatomical profiles of bilingual children. Dev Sci 2018; 21:e12654. [PMID: 29480569] [DOI: 10.1111/desc.12654]
Abstract
The goal of the present study was to examine differences in cortical thickness, cortical surface area, and subcortical volume between bilingual children who are highly proficient in two languages (i.e., English and Spanish) and bilingual children who are mainly proficient in one of the languages (i.e., Spanish). All children (N = 49) learned Spanish as a native language (L1) at home and English as a second language (L2) at school. Proficiency of both languages was assessed using the standardized Woodcock Language Proficiency Battery. Five-minute high-resolution anatomical scans were acquired with a 3-Tesla scanner. The degree of discrepancy between L1 and L2 proficiency was used to classify the children into two groups: children with balanced proficiency and children with unbalanced proficiency. The groups were comparable on language history, parental education, and other variables except English proficiency. Values of cortical thickness and surface area of the transverse STG, IFG-pars opercularis, and MFG, as well as subcortical volume of the caudate and putamen, were extracted from FreeSurfer. Results showed that children with balanced bilingualism had thinner cortices of the left STG, left IFG, left MFG and a larger bilateral putamen, whereas unbalanced bilinguals showed thicker cortices of the same regions and a smaller putamen. Additionally, unbalanced bilinguals with stronger foreign accents in the L2 showed reduced surface areas of the MFG and STS bilaterally. The results suggest that balanced/unbalanced bilingualism is reflected in different neuroanatomical characteristics that arise from biological and/or environmental factors.
Affiliation(s)
- Elizabeth A Woods
- Department of Psychology, University of Houston, Houston, Texas, USA
- Christine Chiarello
- Department of Psychology, University of California Riverside, Riverside, California, USA
6.
Abstract
Categorical effects are found across speech sound categories, with the degree of these effects ranging from extremely strong categorical perception in consonants to nearly continuous perception in vowels. We show that both strong and weak categorical effects can be captured by a unified model. We treat speech perception as a statistical inference problem, assuming that listeners use their knowledge of categories as well as the acoustics of the signal to infer the intended productions of the speaker. Simulations show that the model provides close fits to empirical data, unifying past findings of categorical effects in consonants and vowels and capturing differences in the degree of categorical effects through a single parameter.
7. Silva DMR, Melges DB, Rothe-Neves R. N1 response attenuation and the mismatch negativity (MMN) to within- and across-category phonetic contrasts. Psychophysiology 2017; 54:591-600. [PMID: 28169421] [DOI: 10.1111/psyp.12824]
Abstract
According to the neural adaptation model of the mismatch negativity (MMN), the sensitivity of this event-related response to both acoustic and categorical information in speech sounds can be accounted for by assuming that (a) the degree of overlap between neural representations of two sounds depends on both the acoustic difference between them and whether or not they belong to distinct phonetic categories, and (b) a release from stimulus-specific adaptation causes an enhanced N1 obligatory response to infrequent deviant stimuli. On the basis of this view, we tested in Experiment 1 whether the N1 response to the second sound of a pair (S2) would be more attenuated in pairs of identical vowels compared with pairs of different vowels, and in pairs of exemplars of the same vowel category compared with pairs of exemplars of different categories. The psychoacoustic distance between S1 and S2 was the same for all within-category and across-category pairs. While N1 amplitudes decreased markedly from S1 to S2, responses to S2 were quite similar across pair types, indicating that the attenuation effect in such conditions is not stimulus specific. In Experiment 2, a pronounced MMN was elicited by a deviant vowel sound in an across-category oddball sequence, but not when the exact same deviant vowel was presented in a within-category oddball sequence. This adds evidence that the MMN reflects categorical phonetic processing. Taken together, the results suggest that different neural processes underlie the attenuation of the N1 response to S2 and the MMN to vowels.
Affiliation(s)
- Daniel M R Silva
- Graduate Program in Neuroscience, Federal University of Minas Gerais, Belo Horizonte, Brazil
- Danilo B Melges
- Graduate Program in Electrical Engineering, Department of Electrical Engineering, Federal University of Minas Gerais, Belo Horizonte, Brazil
- Rui Rothe-Neves
- Phonetics Lab, Faculty of Letters, Federal University of Minas Gerais, Belo Horizonte, Brazil
8. Effects of aging on the neuromagnetic mismatch detection to speech sounds. Biol Psychol 2015; 104:48-55. [DOI: 10.1016/j.biopsycho.2014.11.003]
9. Alho K, Rinne T, Herron TJ, Woods DL. Stimulus-dependent activations and attention-related modulations in the auditory cortex: a meta-analysis of fMRI studies. Hear Res 2013; 307:29-41. [PMID: 23938208] [DOI: 10.1016/j.heares.2013.08.001]
Abstract
We meta-analyzed 115 functional magnetic resonance imaging (fMRI) studies reporting auditory-cortex (AC) coordinates for activations related to active and passive processing of pitch and spatial location of non-speech sounds, as well as to active and passive speech and voice processing. We aimed to reveal any systematic differences between AC surface locations of these activations by statistically analyzing the activation loci using the open-source Matlab toolbox VAMCA (Visualization and Meta-analysis on Cortical Anatomy). AC activations associated with pitch processing (e.g., active or passive listening to tones with a varying vs. fixed pitch) had median loci in the middle superior temporal gyrus (STG), lateral to Heschl's gyrus. However, median loci of activations due to the processing of infrequent pitch changes in a tone stream were centered in the STG or planum temporale (PT), significantly posterior to the median loci for other types of pitch processing. Median loci of attention-related modulations due to focused attention to pitch (e.g., attending selectively to low or high tones delivered in concurrent sequences) were, in turn, centered in the STG or superior temporal sulcus (STS), posterior to median loci for passive pitch processing. Activations due to spatial processing were centered in the posterior STG or PT, significantly posterior to pitch processing loci (processing of infrequent pitch changes excluded). In the right-hemisphere AC, the median locus of spatial attention-related modulations was in the STS, significantly inferior to the median locus for passive spatial processing. Activations associated with speech processing and those associated with voice processing had indistinguishable median loci at the border of mid-STG and mid-STS. Median loci of attention-related modulations due to attention to speech were in the same mid-STG/STS region.
Thus, while attention to the pitch or location of non-speech sounds seems to recruit AC areas less involved in passive pitch or location processing, focused attention to speech predominantly enhances activations in regions that already respond to human vocalizations during passive listening. This suggests that distinct attention mechanisms might be engaged by attention to speech and attention to more elemental auditory features such as tone pitch or location. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Affiliation(s)
- Kimmo Alho
- Helsinki Collegium for Advanced Studies, University of Helsinki, PO Box 4, FI 00014 Helsinki, Finland; Institute of Behavioural Sciences, University of Helsinki, PO Box 9, FI 00014 Helsinki, Finland.
10. Pivik RT, Andres A, Badger TM. Effects of diet on early stage cortical perception and discrimination of syllables differing in voice-onset time: a longitudinal ERP study in 3 and 6 month old infants. Brain Lang 2012; 120:27-41. [PMID: 21889197] [DOI: 10.1016/j.bandl.2011.08.004]
Abstract
The influence of diet on cortical processing of syllables was examined at 3 and 6 months in 239 infants who were breastfed (BF) or fed milk-based or soy-based formula (SF). Event-related potentials to syllables differing in voice-onset time were recorded from placements overlying brain areas specialized for language processing. P1 component amplitude and latency measures indicated that at both ages infants in all groups could extract and discriminate categorical information from syllables. Between-syllable amplitude differences, present across groups, were generally greater for SF infants. Responses peaked earlier over left-hemisphere speech-perception areas than over speech-production areas. Encoding was faster in BF than in formula-fed infants. The results show that in preverbal infants: (1) discrimination of phonetic information occurs in early stages of cortical processing; (2) areas overlying brain regions of speech perception are activated earlier than those involved in speech production; and (3) these processes are differentially modulated by infant diet and environmental factors.
Affiliation(s)
- R T Pivik
- Arkansas Children's Nutrition, AR 72202, United States.
11. Toscano JC, McMurray B, Dennhardt J, Luck SJ. Continuous perception and graded categorization: electrophysiological evidence for a linear relationship between the acoustic signal and perceptual encoding of speech. Psychol Sci 2010; 21:1532-40. [PMID: 20935168] [DOI: 10.1177/0956797610384142]
Abstract
Speech sounds are highly variable, yet listeners readily extract information from them and transform continuous acoustic signals into meaningful categories during language comprehension. A central question is whether perceptual encoding captures acoustic detail in a one-to-one fashion or whether it is affected by phonological categories. We addressed this question in an event-related potential (ERP) experiment in which listeners categorized spoken words that varied along a continuous acoustic dimension (voice-onset time, or VOT) in an auditory oddball task. We found that VOT effects were present through a late stage of perceptual processing (N1 component, ~100 ms poststimulus) and were independent of categorization. In addition, effects of within-category differences in VOT were present at a postperceptual categorization stage (P3 component, ~450 ms poststimulus). Thus, at perceptual levels, acoustic information is encoded continuously, independently of phonological information. Further, at phonological levels, fine-grained acoustic differences are preserved along with category information.
Affiliation(s)
- Joseph C Toscano
- Department of Psychology, University of Iowa, Iowa City, IA 52242, USA.
12. Feldman NH, Griffiths TL, Morgan JL. The influence of categories on perception: explaining the perceptual magnet effect as optimal statistical inference. Psychol Rev 2009; 116:752-782. [PMID: 19839683] [DOI: 10.1037/a0017196]
Abstract
A variety of studies have demonstrated that organizing stimuli into categories can affect the way the stimuli are perceived. We explore the influence of categories on perception through one such phenomenon, the perceptual magnet effect, in which discriminability between vowels is reduced near prototypical vowel sounds. We present a Bayesian model to explain why this reduced discriminability might occur: It arises as a consequence of optimally solving the statistical problem of perception in noise. In the optimal solution to this problem, listeners' perception is biased toward phonetic category means because they use knowledge of these categories to guide their inferences about speakers' target productions. Simulations show that model predictions closely correspond to previously published human data, and novel experimental results provide evidence for the predicted link between perceptual warping and noise. The model unifies several previous accounts of the perceptual magnet effect and provides a framework for exploring categorical effects in other domains.
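The core of the inference the abstract describes can be sketched for a single category (the paper itself handles multiple categories and fits variances to data; the numbers below are illustrative assumptions). With a Gaussian category of target productions T ~ N(mu_c, var_c) and Gaussian speech noise S | T ~ N(T, var_noise), the optimal percept is the posterior mean, which shrinks the heard stimulus toward the category mean, and shrinks it more as noise grows:

```python
import numpy as np

def perceived(S, mu_c, var_c, var_noise):
    """Posterior mean E[T | S] under T ~ N(mu_c, var_c) and S | T ~ N(T, var_noise)."""
    return (var_c * S + var_noise * mu_c) / (var_c + var_noise)

mu_c = 0.0                                  # category (prototype) mean on some acoustic axis
S = np.array([-2.0, -1.0, 1.0, 2.0])        # heard stimuli

low_noise = perceived(S, mu_c, var_c=1.0, var_noise=0.25)   # -> 0.8 * S
high_noise = perceived(S, mu_c, var_c=1.0, var_noise=1.0)   # -> 0.5 * S
# greater noise => stronger shrinkage toward mu_c, i.e. a stronger magnet effect
```

Because percepts are compressed toward mu_c, two stimuli near the prototype end up perceptually closer together than two equally spaced stimuli far from it, which is the reduced discriminability that the perceptual magnet effect describes, and the predicted link between warping and noise that the novel experiments test.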
Affiliation(s)
- Naomi H Feldman
- Department of Cognitive and Linguistic Sciences, Brown University
- James L Morgan
- Department of Cognitive and Linguistic Sciences, Brown University
13.
Abstract
Objective: To assess the extent to which acoustic and phonetic change-detection processes contribute to the mismatch negativity (MMN) to linguistic pitch contours.
Design: MMN was elicited from Mandarin and English speakers using a passive oddball paradigm. Two oddball conditions were constructed. In one condition (T1/T2i), the Mandarin high-level tone (T1) was compared with a convex high-rising tone (inverted T2, henceforth T2i) that occurs as a contextual variant of T1 in running speech. In the other (T2/T2i), the concave high-rising tone (T2) was compared with T2i. Phonetically, T1/T2i represents a within-category contrast for native speakers, whereas T2/T2i represents a between-category contrast. The between-category pair (T2/T2i), however, is more similar acoustically than the within-category pair (T1/T2i). In an attention-demanding behavioral paradigm, the same speakers also performed an auditory discrimination task to determine the perceptual distinctiveness of the two tonal pairs.
Results: The Chinese group, relative to the English group, showed larger MMN responses and earlier peak latencies for both conditions, indicating experience-dependent enhancement in representing linguistically relevant pitch contours. At attentive stages of processing, however, the Chinese group was less accurate than the English group in discriminating the within-category contrast (T1/T2i).
Conclusions: These findings demonstrate that experience-dependent neural effects at early preattentive stages of processing may be driven primarily by acoustic features of pitch contours that occur in natural speech. At attentive stages of processing, perception is strongly influenced by tonal categories and their relations to one another. The MMN is a useful index for examining long-term plasticity to linguistically relevant acoustic features.
14. Auditory mismatch negativity for speech sound contrasts is modulated by language context. Neuroreport 2008; 19:1079-83. [DOI: 10.1097/wnr.0b013e3283056378]
15.
Abstract
Language experience is known to modulate the preattentive processing of linguistically relevant pitch contours when presented in the speech domain. To assess if experience-dependent effects are specific to speech, we evaluated the mismatch negativity response to nonspeech homologs (iterated rippled noise) of such curvilinear pitch contours (Mandarin: Tone 1, 'high level'; Tone 2, 'high rising') by Chinese and English listeners as well as to a pitch contour that was a linear approximation of Tone 2 ('linear ascending ramp'). Mandarin speakers showed larger mismatch negativity responses than English to the curvilinear pitch contours only. These results suggest that experience-dependent neural plasticity in early cortical processing of linguistically relevant pitch contours is sensitive to naturally occurring pitch dimensions but not specific to speech per se.
16. Archibald LM, Joanisse MF, Shepherd M. Associations Between Key Language-Related Measures in Typically Developing School-Age Children. Zeitschrift für Psychologie / Journal of Psychology 2008; 216(3). [DOI: 10.1027/0044-3409.216.3.161]
Abstract
Three measures have been found to be predictive of developmental language impairment: nonword repetition, the production of English past tense, and categorical speech perception. Despite this, direct comparisons of these tasks have been limited. The present study explored the associations between these measures and other language and cognitive skills in an unselected group of 100 children aged 6 to 11 years. The children completed standardized tests of nonverbal ability, receptive language, and reading, as well as nonword repetition, past tense production, and categorical speech perception tasks. Nonword repetition and past tense were highly correlated. Variance in nonword repetition was explained additionally by digit recall, whereas receptive language, age, and digit recall accounted for significant portions of variance in past tense production. Categorical speech perception was not associated with any of the measures in the study. The extent to which common and distinct factors underlie the key language-related measures is discussed.
17. Hutchison ER, Blumstein SE, Myers EB. An event-related fMRI investigation of voice-onset time discrimination. Neuroimage 2007; 40:342-52. [PMID: 18248740] [DOI: 10.1016/j.neuroimage.2007.10.064]
Abstract
The discrimination of voice-onset time, an acoustic-phonetic cue to voicing in stop consonants, was investigated to explore the neural systems underlying the perception of a rapid temporal speech parameter. Pairs of synthetic stimuli taken from a [da] to [ta] continuum varying in voice-onset time (VOT) were presented for discrimination judgments. Participants exhibited categorical perception, discriminating 15-ms and 30-ms between-category comparisons and failing to discriminate 15-ms within-category comparisons. Contrastive analysis with a tone discrimination task demonstrated left superior temporal gyrus activation in all three VOT conditions, with recruitment of additional regions, particularly the right inferior frontal gyrus and middle frontal gyrus, for the 15-ms between-category stimuli. Hemispheric comparisons using anatomically defined regions of interest showed two distinct patterns: anterior regions were more active in the right hemisphere than in the left, whereas temporal regions were more active in the left hemisphere than in the right. Activation in the temporal regions appears to reflect initial acoustic-perceptual analysis of VOT. Greater activation in right-hemisphere anterior regions may reflect increased processing demands, suggesting involvement of the right hemisphere when the acoustic distance between the stimuli is reduced and the discrimination judgment becomes more difficult.
Affiliation(s)
- Emmette R Hutchison
- Brown University, Department of Neuroscience, 185 Meeting Street, Providence, RI 02912, USA